We present single-sample image-fusion upsampling, a training-set-free computational single-image super-resolution method. Unlike typical example-based super-resolution techniques (such as neighbor embedding or deep learning), our method draws all of its information from a single measurement; there is no external training set and thus no external training bias. Nonetheless, our method can recover more information about lifetime than the low-resolution image natively contains by exploiting dependencies between it and our high-resolution intensity measurement. We distinguish two types of dependency: local and global. Local dependencies, matching regions where lifetime and quantum yield correlate strongly, are found by fitting a function on windows of corresponding lifetime and intensity values, whereas global dependencies (between morphology and fluorescence lifetime) are extracted by a deep neural network trained to label intensity patches with the central lifetime measurement of each patch, drawing patches exclusively from the given sample. These physics-based dependencies yield local and global "priors" that constrain a convex optimisation pipeline which upsamples the low-resolution lifetime image.
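The local-dependency step described above can be sketched as a sliding-window fit of lifetime against intensity, keeping only windows where the two quantities correlate strongly. This is a minimal illustration, not the paper's implementation: the window size, the linear model, and the R² acceptance threshold are all illustrative assumptions.

```python
import numpy as np

def local_linear_priors(intensity_lr, lifetime_lr, win=5, r2_thresh=0.8):
    """Sliding-window linear fits of lifetime vs. intensity.

    A sketch of the "local dependency" idea: in each win x win window we
    fit lifetime = slope * intensity + intercept and keep the fit only
    where the correlation is strong (high R^2). The window size, linear
    model, and threshold are hypothetical choices for illustration.
    """
    H, W = lifetime_lr.shape
    priors = {}  # (row, col) of window top-left -> (slope, intercept)
    for i in range(H - win + 1):
        for j in range(W - win + 1):
            x = intensity_lr[i:i + win, j:j + win].ravel()
            y = lifetime_lr[i:i + win, j:j + win].ravel()
            slope, intercept = np.polyfit(x, y, 1)
            y_hat = slope * x + intercept
            ss_res = np.sum((y - y_hat) ** 2)
            ss_tot = np.sum((y - y.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
            if r2 >= r2_thresh:  # keep only strongly correlated windows
                priors[(i, j)] = (slope, intercept)
    return priors
```

In a full pipeline, such per-window fits would be turned into constraint terms of the convex optimisation rather than used directly as predictions.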
@article{kapitany2024single,
  title={Single-sample image-fusion upsampling of fluorescence lifetime images},
  author={Kapit{\'a}ny, Valentin and Fatima, Areeba and Zickus, Vytautas and Whitelaw, Jamie and McGhee, Ewan and Insall, Robert and Machesky, Laura and Faccio, Daniele},
  journal={arXiv preprint arXiv:2404.13102},
  year={2024}
}