
Open-Source Photon Counting Advances Biological Research

Fast and volumetric intravital imaging represents the most demanding scenario in which photon counting is drastically improving the performance of existing systems.

PABLO BLINDER, LIOR GOLGHER, AND HAGAI HAR-GIL, TEL AVIV UNIVERSITY

A surgeon removing a malignant tumor1, a security agent detecting harmful molecules at a stadium2, an autonomous car avoiding a collision3, a weather satellite spotting hurricane formations4, a neuroscientist tracking neuronal activity deep within the brain5 — all of these situations involve quickly identifying small optical signals buried within a large noisy volume. Efforts made by many to achieve this difficult goal have led to a “convergent evolution” toward a common solution: time-tagged photon counting using sensitive single-pixel photodetectors. Consequently, free and open-source software recently developed for neuroscientific research may find its way to assisting clinicians and researchers in otherwise unrelated fields.

TPLSM has the potential to improve outcomes for cancer patients undergoing surgery. Courtesy of iStock.com/STEEX.



Functional imaging at the cellular level in the living brain is commonly achieved with two-photon laser-scanning microscopy (TPLSM), which enables researchers to overcome the challenges imposed by light scattering in turbid media6. For example, using this technique, researchers are able to record the activity of individual cells deep inside the living murine cortex, with minimal perturbation to the imaged animal. Although TPLSM has become the gold standard for imaging under these challenging conditions, overcoming the trade-offs inherent to this technique will expand the scope of the scientific questions that can be asked.

In particular, as a point-scanning technique, TPLSM forces microscopists to balance imaging rate (pixel dwell time) against the size of the field of view. If one wishes to acquire a large number of frames every second — perhaps to capture fast dynamic processes — all pixels in each frame will be darker, since fewer photons per frame reach the detector. This trade-off becomes more severe as scientists push to image ever-larger volumes of the brain at high speed6.
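As a rough illustration of this trade-off, consider the following back-of-the-envelope sketch (the frame sizes and frame rate are arbitrary example values, and scanner turnaround time is ignored):

```python
# Back-of-the-envelope estimate of how pixel dwell time shrinks as the
# field of view grows, for a fixed frame rate (example values only).
frame_rate_hz = 30  # frames per second
for pixels_per_side in (256, 512, 1024):
    pixels_per_frame = pixels_per_side ** 2
    dwell_time_s = 1.0 / (frame_rate_hz * pixels_per_frame)
    print(f"{pixels_per_side} x {pixels_per_side} at {frame_rate_hz} Hz -> "
          f"{dwell_time_s * 1e9:.0f} ns per pixel")
```

At a megapixel frame size and video rate, only tens of nanoseconds remain per pixel, leaving room for just a handful of detected photons.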

This trade-off mandates constant improvement of the collection optics, as well as the use of very sensitive and precise detectors, to increase the probability that each photon that does arrive at the detector is tallied successfully. Photomultiplier tubes are the workhorse detectors of TPLSM. For each detected photon, the detector emits a characteristic electrical current pulse, referred to as a single electron response (SER). The SER pulse height is highly variable7. The detector also emits a noisy baseline current, even when no photon is detected.

The signal coming out of the detector can be processed by two main methods. The scheme used in most microscopes is known as analog integration (Figure 1, bottom), in which the continuous signal is summed over a short time bin, and that sum is used as the brightness value of the corresponding pixel. The more photons arriving at the detector, the higher the sum will be. However, analog integration also attributes a higher brightness to photons that randomly produced a taller SER, and it sums baseline noise fluctuations along with the signal. These factors degrade the quality of the resulting image.

The second acquisition scheme is digital photon counting (Figure 1, top). Its main principle is to threshold the incoming SER signal from the detector, registering only pulses that cross a preset value. In this manner, the output of the detector is digitized, producing a “1” if and when a photon is detected. Importantly, a photon counter gives equal weight to each photon detection event, thereby eliminating most of the SER-associated variance.
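The difference between the two schemes can be illustrated with a toy simulation (the pulse heights, noise level, and threshold below are illustrative assumptions, not values from any particular detector):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detector trace: baseline noise plus a few SER pulses of random height.
n_samples = 1000
trace = rng.normal(0.0, 0.02, n_samples)               # noisy baseline current
photon_times = rng.choice(n_samples, size=8, replace=False)
trace[photon_times] += rng.uniform(0.3, 1.0, size=8)   # variable SER heights

# Analog integration: sum the raw trace over the pixel's time bin.
analog_value = trace.sum()

# Photon counting: count threshold crossings, weighting every photon equally.
threshold = 0.15
above = trace > threshold
counted_photons = np.count_nonzero(above[1:] & ~above[:-1])  # rising edges

print(f"analog sum: {analog_value:.2f}, photons counted: {counted_photons}")
```

The analog sum fluctuates with pulse height and baseline noise, while the counter reports the same value whenever the same number of photons arrives.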

Figure 1. Illustration of the difference between digital photon counting and analog integration acquisition schemes. The raw output from the detector (center) is either summed in consecutive time bins for the analog integration or thresholded during photon counting. Courtesy of the Blinder Laboratory.



Previous implementations of photon counting acquisition schemes are not well suited to the increasingly rapid imaging conditions described above, because the electronics used for thresholding and counting the digital events suffer from issues such as “dead time” after each detected event and low bandwidth. While suitable devices were available in other fields of research, including nuclear physics, the process of adapting them to biological imaging was cumbersome and unclear. As a result, photon counting has been adopted only in a rather limited scope: it requires either a high degree of electrical engineering expertise to set up and use to its full extent8 or rather expensive commercial solutions.

Photon counting

Recently, a study published in Optica, the journal of The Optical Society, paved the way to wider adoption of photon counting by dramatically simplifying the integration of state-of-the-art electronics into standard two-photon microscopes. This innovation is driven by a new, open-source application called PySight, which aims to bridge the technological gap between complex hardware and the researchers who need to operate it5.

Embedding photon counting into existing imaging systems using PySight requires minimal modification — mainly rerouting the output of the detectors into commercial off-the-shelf hardware. PySight also takes into account the timing signals from the beam-scanning elements used in the microscope (e.g., galvanometric and resonant mirrors), and the accompanying software analyzes the collected data sets. With these pieces in place, the day-to-day operation of the microscope remains largely unchanged, while the data is now saved in a time-tagged photon-counting mode8.
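Conceptually, building an image from such a recording amounts to assigning each photon timestamp to a pixel according to the most recent line-synchronization event. The sketch below illustrates the idea; it is not PySight's actual interface, and the function name and parameters are hypothetical:

```python
import numpy as np

def bin_photons_to_image(photon_times, line_times, pixels_per_line, line_period):
    """Conceptual sketch (not PySight's API): map time-tagged photon detection
    events onto image pixels using line-synchronization timestamps.

    photon_times    : 1D array of photon arrival times
    line_times      : 1D array of times at which each scan line started (sorted)
    pixels_per_line : number of pixels along the fast scan axis
    line_period     : duration of one line scan (same time units as above)
    """
    image = np.zeros((len(line_times), pixels_per_line), dtype=np.uint16)
    # For each photon, find the line that started most recently before it.
    line_idx = np.searchsorted(line_times, photon_times, side="right") - 1
    valid = line_idx >= 0
    dt = photon_times[valid] - line_times[line_idx[valid]]
    # Convert the elapsed time within the line into a pixel column.
    col = (dt / line_period * pixels_per_line).astype(int)
    in_line = col < pixels_per_line
    np.add.at(image, (line_idx[valid][in_line], col[in_line]), 1)
    return image
```

Because only timestamps and synchronization events are stored, the same photon list can later be re-binned at a different resolution or frame grouping without reacquiring data.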

The resulting images better reflect the activity of the imaged cells, likely because most SER-associated variance is eliminated and the background is suppressed (Figure 2), yielding a more reliable representation of the underlying process that generated the collected photons. In other words, areas in the field of view that should be dark because of a lack of emitters remain completely black in the resulting images, while the recorded brightness of the interesting features corresponds more closely to their actual brightness. This results in a net gain for most, if not all, imaging experiments.

Figure 2. Photon counting reduces noise artifacts. The left column shows the sum of three frames as captured with a well-established analog system. The noise is clearly visible when observing the Fourier transform of that image (bottom). Yet the same imaging conditions provide an improved result when using the photon counting imaging modality (right column), as seen in the resulting Fourier transform. The images show the cortex of a mouse 200 µm below the surface, genetically encoded to express fluorescent proteins in its neurons. They were captured at 15 Hz and span 335 × 335 µm². Scale bar = 50 µm. Fourier transform images are log-scaled. Reprinted/adapted with permission from Reference 5, The Optical Society.




Furthermore, this data acquisition scheme allows labs to integrate advanced scanning elements almost seamlessly, because the position of the beam-steering elements is represented by a simple synchronization signal recorded along with the photon detection events. The study’s researchers used the fastest depth-scanning varifocal lens currently available to obtain volumetric images, rather than the typical 2D images generated by other TPLSM setups5 (Figure 3). Previously, integrating this lens required designing custom electronic circuits — a job many labs do not wish to undertake.
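To illustrate how a photon's time tag can also encode depth, the hypothetical sketch below assigns each photon to a depth plane from the phase of an oscillating varifocal lens, assuming a sinusoidal axial sweep; it is not taken from PySight's code:

```python
import numpy as np

def photon_depth_plane(photon_times, lens_sync_times, lens_period, n_planes):
    """Illustrative sketch: assign each photon to a depth plane based on the
    phase of a resonant varifocal lens. lens_sync_times mark the start of each
    lens oscillation cycle; a sinusoidal axial sweep is assumed."""
    cycle_idx = np.clip(
        np.searchsorted(lens_sync_times, photon_times, side="right") - 1, 0, None
    )
    phase = (photon_times - lens_sync_times[cycle_idx]) / lens_period  # 0..1
    # Sinusoidal sweep: depth proportional to sin(2*pi*phase), rescaled to 0..1.
    depth_fraction = 0.5 * (1.0 + np.sin(2.0 * np.pi * phase))
    return np.minimum((depth_fraction * n_planes).astype(int), n_planes - 1)
```

Combined with the line synchronization shown earlier, each photon can thus be placed in a full (x, y, z) voxel, which is what enables the volumetric movies described next.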

Figure 3. Intravital volumetric imaging of a living fruit fly, genetically encoded to express fluorescent proteins in parts of its brain. The entirety of the volume, recorded at 73.4 volumes per second and spanning 234 × 600 × 330 µm³ (a). Responses of the rectangular areas similarly colored in (a) to an odor puff (b, c). Scale bars = 50 µm. Reprinted/adapted with permission from Reference 5, The Optical Society.



Using PySight, however, integrating this advanced lens to acquire rapid volumetric data is merely a single cable connection away. The recorded volumetric movie successfully captures rapid changes in calcium concentration in distinct areas of a fly’s brain in response to the presentation of different odors.

Aside from planar and volumetric imaging, time-tagged photon counting can also serve other imaging modalities common in microscopy, provided they rely on single-pixel (“bucket”) detectors such as photomultiplier tubes. For example, fluorescence lifetime imaging uses information about the decay rate of emitters to observe changes in their state8. The new hardware and software solution allows researchers to collect data that can be used simultaneously for both fast functional imaging and fluorescence lifetime imaging. It is also extensible to wide-field compressive multiphoton imaging9, an emerging and promising technique for rapid bioimaging at increased depths with reduced phototoxicity.
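Because every photon carries a time tag, its delay relative to the preceding laser pulse is available from the same recording. A minimal sketch of how a lifetime histogram could be built from that data, assuming a fixed repetition period of 12.5 ns (roughly 80 MHz, an example value):

```python
import numpy as np

def lifetime_histogram(photon_times, laser_period=12.5e-9, n_bins=64):
    """Sketch of fluorescence-lifetime analysis from the same time tags used
    for imaging: histogram each photon's delay since the preceding laser pulse.
    Assumes a fixed, known laser repetition period (example value above)."""
    delays = np.mod(photon_times, laser_period)
    hist, edges = np.histogram(delays, bins=n_bins, range=(0.0, laser_period))
    # The fluorescence lifetime can then be estimated, e.g., by fitting an
    # exponential decay to the tail of this histogram.
    return hist, edges
```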

Future applications

The applications of time-tagged photon counting are not limited to neuroscience, nor to basic scientific research. Nonlinear microscopy is steadily migrating into the clinical setting, already proving its potential for improved tumor resection in human patients1, as well as for label-free tumor diagnosis, which may eventually obviate biopsies.

A promising clinical future application is two-photon photodynamic therapy, namely, targeted destruction of cancer cells using reactive oxygen species emitted by photosensitizers that are excited by NIR light10. By using time-tagged photon counting to process multiphoton volumetric movies, surgeons could quickly detect and target malignant cells lurking hundreds of microns beneath unresected human tissue, thereby potentially improving surgical outcomes in cancer patients.

While conventional analog integration requires writing the sampled voltage of every voxel to disk, time-tagged photon-counting solutions write only a sparse list of photon detection times. Consequently, the size of the recorded data sets scales only with the number of detected photons, whereas the size of data sets acquired with analog integration grows with the number and size of the scanned dimensions. Time-tagged photon-counting systems bear further potential for measuring the time of flight of photons from a pulsed emitter to targets of interest, thereby estimating their distance.
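A rough back-of-the-envelope comparison of the two storage schemes, with all parameters (voxel counts, photon rate, bytes per sample) chosen purely for illustration:

```python
# Rough comparison of data volume per second (all parameters are illustrative).
voxels_per_volume = 512 * 512 * 30      # x * y * z
volumes_per_second = 70
bytes_per_voxel = 2                     # e.g., 16-bit analog samples

analog_bytes_per_s = voxels_per_volume * volumes_per_second * bytes_per_voxel

photons_per_second = 5_000_000          # detected photon rate (assumed)
bytes_per_photon_tag = 8                # timestamp plus channel info (assumed)

counting_bytes_per_s = photons_per_second * bytes_per_photon_tag

print(f"analog:   {analog_bytes_per_s / 1e6:.0f} MB/s")
print(f"counting: {counting_bytes_per_s / 1e6:.0f} MB/s")
```

Under these assumptions the dense analog stream exceeds a gigabyte per second, while the sparse photon list stays in the tens of megabytes per second.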

Light detection and ranging (lidar) systems built on this time-of-flight principle are used for diverse applications, ranging from planetary mapping through the tracking of atmospheric cloud formation4 to obstacle detection for self-driving cars3. Additionally, rapid volumetric hyperspectral imaging may enable the detection of explosives and toxic chemicals localized within a large venue2.

This diverse and growing community of time-tagged photon-counting users has been enriched by the use of free and open-source tools such as PySight.

Meet the authors

Pablo Blinder, Ph.D., leads a multidisciplinary laboratory in the School of Neurobiology, Biochemistry, and Biophysics of the Faculty of Life Sciences at Tel Aviv University in Israel. He received a degree in neuroscience from Ben-Gurion University and trained as a postdoctoral researcher in the physics department at the University of California, San Diego.

Lior Golgher is a doctoral student at the Sagol School of Neuroscience at Tel Aviv University. He has bachelor’s degrees in physics and electrical engineering from the Technion – Israel Institute of Technology.

Hagai Har-Gil is a graduate student at Tel Aviv University. He has a bachelor’s degree in physics and is currently researching superresolution methods at Tel Aviv University’s Sagol School of Neuroscience.

References

1. S.R. Kantelhardt et al. (2016). In vivo multiphoton tomography and fluorescence lifetime imaging of human brain tumor tissue. J Neurooncol, Vol. 127, Issue 3, pp. 473-482.

2. A.C. Geiger et al. (2019). Sparse-sampling methods for hyperspectral infrared microscopy. Proc SPIE, Image Sensing Technologies: Materials, Devices, Systems, and Applications VI, Vol. 10980, p. 1098016.

3. F. Zhang et al. (2017). Adaptive strategy for CPPM single-photon collision avoidance LIDAR against dynamic crosstalk. Opt Express, Vol. 25, Issue 11, pp. 12237-12250.

4. T. Markus et al. (2017). The ice, cloud, and land elevation satellite-2 (ICESat-2): science requirements, concept, and implementation. Remote Sens Environ, Vol. 190, pp. 260-273.

5. H. Har-Gil et al. (2018). PySight: plug and play photon counting for fast continuous volumetric intravital microscopy. Optica, Vol. 5, Issue 9, pp. 1104-1112.

6. N. Ji et al. (2016). Technologies for imaging neural activity in large volumes. Nat Neurosci, Vol. 19, pp. 1154-1164.

7. S. Moon et al. (2008). Analog single-photon counter for high-speed scanning microscopy. Opt Express, Vol. 16, Issue 18, pp. 13990-14003.

8. W. Becker, ed. (2015). Advanced Time-Correlated Single Photon Counting Applications. Springer Series in Chemical Physics, Vol. 111.

9. M. Alemohammad et al. (2018). Widefield compressive multiphoton microscopy. Opt Lett, Vol. 43, Issue 12, pp. 2989-2992.

10. S. Wang et al. (2019). Polymerization-enhanced two-photon photosensitization for precise photodynamic therapy. ACS Nano, Vol. 13, Issue 3, pp. 3095-3105.

Published: August 2019
