Method Special: Correlative Light Electron Microscopy
Combined Imaging Forces
by Steven Buckingham, Labtimes 05/2017
Imaging technologies provide crucial insights into the way cells, tissues and organs work. But it is not all about getting better images of smaller things. New approaches to imaging are making it possible to think about biological problems of widely different scales.
We are a visual species. When we understand something, we say, “I see”. Someone with a clear plan for the future is said to “have vision”. When we finally understand something, we talk about the “light coming on”. We avoid reading books with no pictures and the GUI has displaced the terminal.
One quarter of our brains is dedicated largely to visual processing. Is that the reason, perhaps, why we find visual images so powerful? Journal covers are now stunning pieces of art with glaring colours. Show someone a picture of a brain lighting up under fMRI and they’ll believe anything you say. Although biologists, in their envy of the “hard” sciences, have often tried to emulate the abstraction of mathematics, at the end of the day, biologists still go by the old saying, “seeing is believing”.
Imaging is a way of opening up the invisible world through the medium of sight. Sure, there are machines that spew out lists of abstract numbers that, when expertly interpreted, tell us how close proteins get to one another, how wide a mitochondrion is, or how deeply into a cell the membrane can invaginate. But when we see something with our own eyes, we open things up to our native ability to instantly gain an intuitive understanding of what is going on. Microscopes are a prosthesis to leverage the amazing natural power of our brains.
Now, here is the point I have been angling my way to: there are some big things happening in cell imaging and, if what I have been philosophising about is actually true, big things will soon be happening in biology, too. So what are these big things in the imaging world? Two major developments stand out: breaking the diffraction limit and combining diverse imaging approaches into one, using a family of methods called “correlative imaging”.
Remember being told at school that you cannot resolve anything smaller than about half the wavelength of light? The reason we were given had something to do with diffraction – as a feature approaches half the light’s wavelength in size, diffraction causes single spots to look like fuzzy discs. To understand why, imagine for a moment a feature you are trying to look at – a spot, say, roughly half the wavelength of the incident light across. As light passes either side of the spot, it gets diffracted. This bending of the light means that it gets spread out around the spot. So, what you actually see is not a point at all but something like a fuzzy disc – the so-called “Airy disc”.
Now, imagine two such spots next to each other. If the Airy disc is bigger than the separation between the spots, you won’t be able to tell them apart. They will look like just one big blob. The point to remember is that this is a fundamental consequence of the wave nature of light: no amount of magnification, bigger lenses or any other workaround you can think of can overcome it. This phenomenon sets a theoretical resolution limit of about 200 nm for any conventional optical microscope.
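The limit can be put in numbers with Abbe’s formula, d = λ/(2·NA), where λ is the wavelength of the light and NA the numerical aperture of the objective. A quick sketch (the wavelength and NA values below are just illustrative):

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit: smallest resolvable separation, d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (~520 nm) through a high-end oil-immersion objective (NA ~1.4):
d = abbe_limit_nm(520, 1.4)
print(f"Resolution limit: {d:.0f} nm")  # roughly 186 nm - the ~200 nm figure quoted above
```

Plug in shorter wavelengths or a bigger aperture and the limit shrinks a little, but for visible light it never gets far below 200 nm.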
But like some other things we were told at school (and if someone reading this taught biology at Devonport High School in the 1970s please contact me – I was right about that photosynthesis experiment), this story is not quite true. Indeed, success in effectively overcoming the diffraction limit is one of the key developments that has made the new field of correlative microscopy so exciting over the past decade.
So, just how have they broken the diffraction barrier? You probably already know one answer to that – electron microscopy. The basic idea here is to replace the beam of photons used in optical microscopy with a beam of electrons. Electrons in a beam have a much shorter wavelength, hence the minimum size at which diffraction interference sets in is far smaller. This is how one version of EM, Transmission Electron Microscopy (TEM), works. Its sister technique, Scanning Electron Microscopy (SEM), instead scans a beam across the surface of the target material, and the resulting signals – back-scattered electrons, secondary electrons and X-rays, for example – are interpreted back into an image.
Then there is confocal imaging. Modern confocal imaging uses a laser to ensure that only a very finely defined spot of the sample is illuminated. Because only this spot is illuminated, there is no (or at least very little) light from other sources to degrade the image. This doesn’t, however, get over the diffraction problem, because you still get Airy discs. All the same, whereas the best conventional optical microscopes cannot get better than around 200 nm, confocals get down to about half of this value, partly because of the pinpoint illumination and pinhole detection, and partly because you can use shorter wavelengths.
But that’s cheating, you tell me: neither of those really breaks the diffraction barrier, they just move it. Okay, point accepted. There are, however, other ways in which the diffraction barrier has genuinely been broken, and a lot of them are based on looking at individual molecules. They comprise a bag of tricks for making sure that only one molecule is excited at a time, resulting in what has been called “super-resolution light microscopy” (SLM). Take STED, for example. STED stands for STimulated Emission Depletion and the idea is to isolate the fluorescence from a single label molecule. How is that done?
A powerful, doughnut-shaped depletion laser is shone onto the region surrounding a single spot. By stimulated emission, it drives excited fluorophores back down to the ground state before they can fluoresce spontaneously. A small patch in the centre of the doughnut, however, is left untouched and free to emit. Because only this central patch – far smaller than a diffraction-limited spot – contributes fluorescence, the effective resolution drops well below the diffraction limit. STED won its inventor, Stefan Hell, the Nobel Prize in Chemistry in 2014.
Another method is called STORM (STochastic Optical Reconstruction Microscopy). Imagine you have a structure in your cell that has been labelled with a fluorescent probe. You stimulate the probe at just the right level, so that only a small fraction of the molecules gets excited. In the brief time it takes for them to emit and drop back into their dark state, you measure the location of the emitted signal. If you get the level of illumination just right, you are statistically unlikely to have many molecules close enough to each other for their Airy discs to overlap, so each isolated molecule can be localised very precisely. With a bit of mathematical post-processing, you can re-assemble a detailed picture of the target. These are just two examples of what is turning out to be something of an explosion of super-resolution imaging methods, which are now bringing the resolution down to the 10 nm range and even below. Impressive, but still not quite as good as electron microscopy, which easily gets down to 1 nm. There is, however, an important point here that can easily be missed. Super-resolution imaging does not get down to EM resolutions but it is getting close enough to make it worthwhile to find ways of combining EM and SLM.
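The trick that lets STORM beat the diffraction limit is that the centre of an isolated Airy disc can be pinned down far more precisely than the disc is wide: with N photons collected, the localisation error shrinks roughly as the spot size divided by √N. A toy sketch of the idea (the photon count and spot width are made-up, illustrative values):

```python
import random

random.seed(42)  # reproducible toy run

TRUE_X = 0.0       # true emitter position (nm)
PSF_SIGMA = 125.0  # width of the diffraction-limited spot (nm), illustrative
N_PHOTONS = 1000   # photons from one blinking event, illustrative

# Each detected photon lands somewhere in the fuzzy Airy disc around the emitter.
photons = [random.gauss(TRUE_X, PSF_SIGMA) for _ in range(N_PHOTONS)]

# Localising the emitter amounts to finding the centre of the photon cloud.
estimate = sum(photons) / len(photons)
precision = PSF_SIGMA / N_PHOTONS ** 0.5  # expected error ~ sigma / sqrt(N)

print(f"spot width: {PSF_SIGMA} nm, expected localisation error ~ {precision:.1f} nm")
print(f"estimated position: {estimate:.1f} nm (true position: {TRUE_X} nm)")
```

A 125 nm-wide blur thus yields a position good to a few nanometres – which is why the molecules must blink one at a time: two overlapping clouds would have no single centre to find.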
Why is that so important? Because combining two methods amplifies their respective good points and erodes their bad points. In fact, our brains are doing this all the time. When you listen to someone speaking, you are combining auditory information (their speech) with visual information (the movements of their lips). When information from one modality is not so good (the person talking to you is also eating a doughnut), the other channel makes up for it. The whole becomes greater than the sum of the parts.
Now, let’s apply this to imaging. What are SLM’s good points compared to EM? For one, there are lots of fluorescent probes and they are easy to make, whereas with EM you have to come up with something electron-dense. Also, with SLM you can image without damaging the cell, whereas for EM you need to fix the cell, freeze or dehydrate it (to deal with its water content) and put it under high vacuum. EM has its own advantages, too – resolution being the main one. There are so many benefits in combining SLM and EM that the combination has been given a name of its own: CLEM (Correlative Light Electron Microscopy). The idea behind CLEM is to do SLM and EM on the same preparation.
Imagine, for example, mapping the distribution of a fluorescently-labelled protein at the 50 nm scale in wet, perhaps even live, cells. But you also want to know where the protein is located at the nanometre scale, so once you have finished running SLM, you put the sample into an electron microscope to get a high-resolution image of the same material. You would then correlate the two sets of images to draw inferences that span the two different scales. In one sense, we have been doing that all along without realising it – think of those figures where one panel shows the whole cell and an inset shows a magnified region. CLEM, and indeed all correlative microscopy techniques, are simply more rigorous ways of doing just that.
But there are many problems with trying to do two different techniques on the same tissue. First of all, the two techniques treat the tissue in very different ways because they have very different requirements. EM, for example, needs the tissue to be strongly fixed, and the strong fixatives used can quench the fluorescence of the probe. Then again, SLM has to be done using wide-aperture, oil-immersion objectives, which means you need to mount the preparation under a glass coverslip. But you can’t use glass in EM, because glass is non-conductive, so you get charge build-up, which ruins the image. In short, the preparation requirements for EM and SLM are at many points simply incompatible. One way of overcoming this is to do the techniques in series, one after the other: do your SLM on a wet, possibly living, cell first, then go and do the EM. But even this brings quite a few problems. Think about that manipulation step – after the SLM step you have to transfer the preparation from its glass coverslip onto an EM grid. Are you sure you won’t bend, stretch or otherwise distort the preparation in the process? And what about distortions caused by the dehydration, fixing and embedding needed for EM? Remember the scale we are talking about here: even just a few nanometres of movement will make correlation very difficult, to say the least.
Several labs have come up with ingenious ways of getting around this. Wojcik et al. came up with the solution of coating cells with a layer of graphene (Nature Communications, 6:7384). Graphene is all but invisible to the light used in SLM but is electroconductive, so it can be used in EM. Other labs have solved the problem by thinking carefully and strategically about exactly which methods they are going to combine. For example, several labs have used cryofixation, in which cells are plunged into a cryogen (e.g. liquid ethane), vitrifying the cell contents without allowing the water to crystallise. Ultrastructure is preserved, obviating the need for chemical fixation. In this way, SLM and EM can be done on the same sample under the same, or similar, preparation conditions. However, it does mean doing all the experiments below −140 °C, and it is hard to predict how a fluorophore is going to behave at those temperatures. Besides, microscope objectives need a lot of additional features engineered into them, if they are to work under these conditions.
CLEM imaging of human hepatoma cells, expressing a GFP-tagged viral protein (green), performed by Mirko Cortese and Volker Lohmann at the University of Heidelberg. Light microscopic images, showing cellular lipid droplets (red) and the nucleus (blue) were correlated with data sets from transmission electron microscopy (gray) to identify ultrastructural details (left). Images: M. Cortese & V. Lohmann
An entirely different set of problems arises because data from two different scales are being combined. Image registration is the process of identifying corresponding positions in the two imaging results. It is a bit like looking at two maps: one covers the whole county, the other just your local village. To get the benefit of correlating the two maps, you have to match up landmarks in both of them. This can get quite difficult. The two maps may use different colour codes (motorways are blue in one, white in the other, for example) and different sets of symbols (one map may not have built-up areas marked), as well as different scales. The very same problem is faced in the crucial step of finding where, in your 10 nm “map” of a cell, that intriguing blob in the 0.1 nm image belongs. One way researchers overcome this is to introduce their own fiducial (trustworthy) landmarks. For this approach to work, you have to find a stain that will be visible in both “maps” – both fluorescent and electron-dense, for example – which is no easy task. The problem of registration is exacerbated when you use serial methods, such as SLM followed by EM, because you never know what distortions you may have introduced while moving the specimen from the glass slide to the EM grid. Remember, we are working here in the 0.1 to 10 nm range of scales, so there is not much tolerance allowed.
Here again, researchers have demonstrated ingenuity in getting around this problem. Tian Cao at the University of North Carolina at Chapel Hill and colleagues turned to computer vision to solve the registration problem. In computer vision there is a trick called “image analogies”, in which you give the computer pairs of images and it learns the relationship between them. Cao showed how you could apply the same method to pairs of images, acquired using different imaging technologies (Med Image Anal 18(6):914-26).
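In its simplest form, fiducial-based registration boils down to a least-squares fit: given the coordinates of the same landmarks in both images, estimate the transformation that maps one coordinate system onto the other. A minimal sketch (the fiducial coordinates are invented, and real pipelines also fit rotation and local distortion, not just scale and shift):

```python
def fit_scale_and_shift(src, dst):
    """Least-squares fit of dst ~ s * src + t for paired landmark coordinates.

    src, dst: lists of (x, y) fiducial positions in the two images.
    Returns (s, (tx, ty)). This toy keeps only a shared scale plus a
    translation; a real registration would include rotation and warping.
    """
    n = len(src)
    # Centroids of the two point clouds
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    Mx = sum(q[0] for q in dst) / n
    My = sum(q[1] for q in dst) / n
    # Shared scale estimated from the centred coordinates
    num = sum((p[0] - mx) * (q[0] - Mx) + (p[1] - my) * (q[1] - My)
              for p, q in zip(src, dst))
    den = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in src)
    s = num / den
    return s, (Mx - s * mx, My - s * my)

# Fiducial beads seen at these (invented) positions in the light-microscope image...
light = [(1.0, 2.0), (3.0, 5.0), (6.0, 1.0), (8.0, 9.0)]
# ...and at the corresponding positions in the EM image (50x scale, shifted origin).
em = [(50 * x + 120, 50 * y - 40) for x, y in light]

s, (tx, ty) = fit_scale_and_shift(light, em)
print(s, tx, ty)  # recovers the 50x scale and the (120, -40) shift
```

With the transform in hand, any position in the low-resolution “map” can be projected into the high-resolution one – which is exactly what matching landmarks on two maps of different scales amounts to.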
Researchers have also been challenged by other aspects of matching two approaches working at two different scales. Not only do movement and distortion artefacts have to be controlled for, but differences arising from the diverse mechanisms of staining also need to be accounted for. For example, think of imaging a fluorescent probe with SLM and matching it to an electron-dense probe in EM. With any approach that relies on the binding of a probe to a target, there is inevitably a distance between the emitting/absorbing site of the ligand and the actual target, and for many probes that may be quite big. If this distance is significant relative to the distribution of the target, each of the two techniques introduces its own, uncorrelated error.
All the same, the potential advantages of correlating microscopy at different scales are compelling enough for many labs to take on the hard work of overcoming these many challenges. The result is that just about every combination of microscopy techniques has been tried in one way or another. And it is not just about combining SLM with EM. SLM has also been combined with Atomic Force Microscopy (AFM), in which a vibrating cantilever brushes over a cell surface and works out, at submolecular dimensions, the shape of the landscape it crosses. Recent advances have allowed this to be done at high speed – fast enough, in fact, to observe the dynamics of molecular motors. The big plus of combining AFM with SLM is that the two techniques are much more compatible than, say, SLM and EM.
Herman Spaink's coworkers injected fluorescently-labelled M. marinum into the tail fin of zebrafish larvae and visualised the microbial infection by confocal laser scanning microscopy (CLSM). Image: Spaink lab
Correlative approaches aren’t just about combining information over different spatial scales but also over different temporal scales. SLM is still quite slow, thanks to the unavoidable trade-off between resolution and speed. In other words, you get good-quality spatial data but poorly resolved (or no) temporal data. So, what happens if you are interested not only in where things are but also in how they are moving or changing over time? One way around this is to correlate SLM with something that works faster, thus combining the spatial data of SLM with the temporal data of the complementary technique. An exciting example of this combines SLM with mass spectrometry. Silvio Rizzoli and Johannes Wessels of the University of Göttingen Medical Centre combined secondary ion mass spectrometry (SIMS) with STED to measure protein turnover in specific organelles.
SIMS works by directing a beam of particles (such as caesium ions) onto a sample and doing mass spectrometry on the material that is emitted as a result. By scanning across the sample, you can get an image of its isotopic composition. But the method is low-resolution and the data cannot, on their own, be attributed to particular organelles. In Rizzoli’s and Wessels’ approach, SIMS was correlatively combined with STED to couple the spatial resolution of the latter with the chemical information of SIMS (Nature Communications 5: 3664).
The group then correlated CLSM and transmission electron microscopy (TEM) images of the same specimen to study the ultrastructure of the autophagic response to M. marinum infection. Image: Spaink lab
Some biological questions can only be answered by simultaneously addressing different scales of organisation, and this is just the kind of problem at which correlative methods excel. The brain is a case in point. To understand how the brain controls behaviour, we need answers to questions at the molecular, cellular, circuit and whole-brain scales. Several labs have correlatively combined fMRI with optical probes, or calcium imaging with scanning electron microscopy. In some cases, even the sheer size of some biological macromolecules forces a multi-scale approach. Chromatin, for example, totals some two metres in length (in humans), so a complete understanding of how it is packed into a nucleus requires us to look at it on an astoundingly wide range of scales. Sure, you can use EM to look at its structure, but getting a wider picture is like trying to find your front lawn on Google Earth – without being able to zoom out. Clodagh C. O’Shea solved this problem by taking a correlative approach that combined electron microscope tomography (imaging a sample at a series of tilt angles, from which a 3D picture can be reconstructed) with a probe (ChromEM) that labels DNA (Science, 357, 6349).
This method (ChromEMT) yielded new insights into how chromatin is packed. But perhaps the prize for the biggest difference in scale goes to a group around Marcel Schaaf and Herman Spaink of the Institute of Biology, Leiden University (see figures above), who combined EM with whole-animal imaging to gain new insights into the way in which autophagy protects against infection (Autophagy 10:10, 1844-57).
The number of potential applications of correlative imaging grows combinatorially, because in principle any pair of methods can be brought together. But there are major technical challenges, and each of these is likely to be very application-specific. Solutions found in one particular context are unlikely to transfer in detail to another. All the same, now that the power of correlation is making itself clearer, we are likely to see more efforts to solve these challenges. Correlative imaging will, dare I say, really put us in the picture.
Last Changed: 01.10.2017