Protein aggregates such as amyloid β plaques and tau tangles are thought to be causative of Alzheimer’s disease, the most common form of dementia. Each of these two proteins can serve as a biomarker for the disease, meaning that detection of the protein can help to qualitatively (Do I have Alzheimer’s?) and quantitatively (How advanced is the disease already?) characterize patients. First of all, biomarkers are important to facilitate therapy. The idea is: the earlier a disease is diagnosed, the more damage can be prevented. Unfortunately, in the case of Alzheimer’s disease no drugs are currently available to cure the disease. However, several options are available for patients and their families to decrease the disease burden, either pharmacologically, by psychotherapy, or by adapting the living environment into a form that prolongs autonomy and decreases stress. Here biomarkers can significantly decrease the disease burden, since an early adaptation to the disease avoids distress and wrong diagnoses, and might even add several years that are experienced as ‘positive’. Secondly, biomarkers are important for the development of effective anti-Alzheimer’s drugs. Patients for clinical trials need to be identified early, so that the effects of drug candidates on disease progression can be judged appropriately. Both Alzheimer’s-linked proteins, amyloid β and tau, can be detected in patients as fluid and imaging biomarkers. Unfortunately, detecting both proteins in the cerebrospinal fluid is a complicated procedure that is not risk-free. Imaging is also not straightforward and cannot be applied frequently in large patient groups.

Because of the high medical need and the current obstacles to efficient Alzheimer’s disease biomarkers, a recent publication by Oliver Preische, Matthias Jucker and many other scientists received a lot of attention. In this publication, they describe a protein that can be used as a blood-based biomarker for Alzheimer’s disease. Blood can of course be drawn easily, safely and repeatedly from large groups of patients. But what did the study find exactly? And how precise and accurate would this test be?

In this blog post, I will briefly describe some of the key methodologies, the principle of the test, some of the implications, and potential weaknesses. In case you are interested in the full details of the original paper, please check here.

First of all, I should mention that the biochemical basis for the test itself is not completely new. Similar tests, using the same neurofilament light (NfL) protein as a biomarker, have been developed before, for example for Huntington’s disease or Parkinson’s disease. NfL is a protein found in neurons. Once these neurons become damaged, NfL ‘leaks out’ and can be identified in the blood of patients by an antibody-based test. Because NfL in principle only indicates neuronal damage, and is not specific for one type of disease, careful controls are necessary when using this test in patients. In the present study, the researchers used this existing test, but quite cleverly so, and above all, they chose a study group that allowed a good validation of the test.

So how did the researchers apply and validate the already known NfL-biomarker test in Alzheimer’s disease?

Patients with known Alzheimer’s disease mutations (but still without the manifested disease) were compared to healthy members of their own families. The figure below shows what these data look like: each red dot represents a person with an Alzheimer’s-causing mutation and each blue dot represents a control person without this mutation. In people without an Alzheimer’s mutation you never know if and when the disease will ever break out. That is why it is very hard to use such people to test a new detection method. In the case of a well-studied mutation, it is quite clear when the disease will break out, and therefore, in a first step, the years up to the first onset of symptoms (= estimated years to symptom onset (EYO)) could be calculated.

The researchers determined the estimated years to symptom onset (EYO) for people carrying known genetic Alzheimer’s mutations (red dots) and compared these data to members of their families without the mutation (blue dots). Next they measured the amount of NfL protein in the blood of the two groups. The data show that the closer a person approaches the measurable outbreak of the disease, the higher the NfL levels become. Importantly, there is a difference between the two groups. People with an Alzheimer’s mutation seem to have higher NfL values, especially in the early phase, when the disease has just started. Therefore, NfL could potentially be used as a biomarker in Alzheimer’s disease.

In the second step, the actual biomarker test was carried out. The amount of the biomarker protein NfL was displayed as a function of the estimated years until the onset of the disease, even though the participants were not yet sick (see the figure above).

There are, however, some open questions about this approach:

The theoretical assumptions concerning the duration until disease outbreak might not be very accurate in practice, and overall epidemiological data might not be accurate for a particular person in the data cloud.
It is also important to note that there are significant differences between the groups only from −2 EYO onwards (strong increase of the red compared to the blue curve). This means: the biomarker levels only become really different about two years before the onset of the disease. Whether two years are actually enough to treat the disease well, or even prevent it, is currently unclear. This test is therefore not suited for the long-term prediction of disease outbreak. NfL levels are simply not high enough very early on.

What is certain, however, is that this test will be very important for clinical trials of potential drugs. Surprisingly, one of the major problems with drug trials is that there is often no real evidence that a disease has actually been prevented. How would you ever know that you have prevented a disease that might never have broken out in a particular person anyway?

This is where this novel NfL Alzheimer’s test comes into play. Now that we know that it works in principle, it can be applied to the majority of studies with patients who do not carry a clear Alzheimer’s mutation or in whom the disease breaks out later in life. Patients who test positive because they have another neurological disorder that also releases NfL protein through dying nerve cells (for example, Parkinson’s disease or multiple sclerosis) need to be corrected for afterwards. Sadly, at the moment, there is no other option.

So in essence, the test described here is relatively sensitive (shortly before Alzheimer’s breaks out), but not very specific. A good test should be both. However, at the moment this is technically not feasible because we do not yet know any specific biomarkers. Despite this, and by using a clever study design, the researchers have now at least identified NfL as a sensitive biomarker that might help to find a long sought-after curative or anti-progressive Alzheimer’s drug.

During my PhD we used multi-color live-cell and fixed-cell single-molecule imaging to light up the unknown fate of mRNA molecules during the stress response, which spans from the bright transcription site into the dark corners of the cytosol, where mRNAs can encounter P-bodies and stress granules.

You can learn more about this exciting work by reading the most recent publication of the Jeffrey Chao lab:

Wrist-based heart rate measurement is on the rise. But is it really time to ditch the good old chest strap for determining your heart rate? Most new and high-end GPS running watches either offer wrist-based measurement in addition to chest strap compatibility, or are at least chest strap compatible. Which one is “better”, or at least more suited for your needs?

Wrist-based heart rate measurement is light-based. A small LED at the back of the watch emits green light, which is reflected more from the wrist vein when less blood is present in the vein. When the heart pumps, more blood arrives in the wrist vein and the green light reflection decreases. These small “reflective cycles” are detected by the watch, and the time between the cycles is used to calculate the heart rate. However, this repetitive pattern analysis is challenging and prone to measurement “noise”. This noise might be introduced by the individual wearing the watch: for example, some people’s veins might reflect light better than others. The position of the watch also matters; it should be worn relatively tight and about 1-2 cm above your wrist. Finally, the “blood signal” measured at the wrist might be weaker than when measured directly at the heart, simply because it is further away from the source. As a result, changes between maximum and minimum reflection might be small and difficult to analyze. This is especially true in situations with rapidly changing heart rates, where the reflective “peaks” and “valleys” occur in irregular patterns that are difficult to analyze.
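The time-between-cycles calculation is simple in principle. Here is a toy sketch in Python; the peak timestamps are hypothetical, and a real watch of course does considerably more filtering on the noisy optical signal before it gets this far:

```python
# Hypothetical timestamps (seconds) at which reflective "peaks" were detected;
# real firmware extracts these from the noisy optical signal first.
peak_times = [0.00, 0.85, 1.71, 2.55, 3.41]

# Time between consecutive "reflective cycles"
intervals = [t2 - t1 for t1, t2 in zip(peak_times, peak_times[1:])]

# Average cycle length -> beats per minute
bpm = 60.0 / (sum(intervals) / len(intervals))
print(round(bpm))  # about 70 bpm, a typical resting heart rate
```

Irregular peak spacing during sprints makes exactly this interval estimation unreliable, which is the core of the noise problem described above.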

Chest strap-based heart rate measurement, on the other hand, relies on the detection of an electrical signal emitted by the heart. Weak electricity propagates through the heart muscle in order for it to contract and pump blood. Measuring this signal directly at the “source” is very accurate, and less “noise” is introduced by the biology of the user or other factors. Nevertheless, a chest strap also needs to be worn correctly (tightly, right under the chest muscle).

So which technology is better for your workout? To check this I wore both a chest strap-based Garmin Forerunner 610 and a wrist-based Garmin Forerunner 735XT and went for an 11 km run. This run included some sprints in the beginning (100m, 200m, 500m, 200m, 100m) and a 1 km uphill tempo run towards the end. Here’s the bottom line: both technologies are very comparable! Fig. 1 below shows the two differently measured heart rates (FR610 = chest strap, FR735 = wrist). Especially at “steady state”, when the heart rate does not fluctuate much, both technologies are very comparable. Before the run, when I was just standing around, both showed about 70 bpm (beats per minute, not plotted). The averages of both technologies, either over the entire run of 11 kilometers or per finished kilometer, are also very comparable. Overall both curves look very similar, but the difference is significant when analyzed with a Student’s t-test (p = 0.027, two-tailed, paired). Despite this, the difference of on average 1.4 bpm is so low that it has no practical relevance when your heart rate is around 160 bpm during a workout. Even the sprints around kilometer 3 and the long high-intensity hill sprint around kilometer 9 did not introduce a large difference on average.
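For readers who want to reproduce such a paired comparison, here is a minimal sketch in Python. The per-kilometer numbers are made up for illustration and are not my actual data; with SciPy installed, `scipy.stats.ttest_rel` would also return the two-tailed p-value directly:

```python
import math

# Hypothetical per-kilometer average heart rates (bpm), one value per km
chest = [148, 155, 162, 158, 151, 150, 153, 156, 168, 160, 152]  # FR610
wrist = [147, 153, 161, 156, 150, 148, 152, 155, 166, 159, 151]  # FR735

# A paired t-test works on the per-kilometer differences
diffs = [c - w for c, w in zip(chest, wrist)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # compare to a t-distribution, df = n - 1

print(f"mean difference = {mean_d:.1f} bpm, t = {t_stat:.2f} (df = {n - 1})")
```

The t statistic is then looked up in (or computed from) the t-distribution with n − 1 degrees of freedom to obtain the two-tailed p-value.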


Fig. 1: Comparison of the average heart rates per kilometer over the course of an 11 km run obtained with a chest strap-based technology (FR610) or a wrist-based technology (FR735). Both technologies are comparable when averaged over longer time periods, although the wrist-based measurement seems to produce slightly lower values.

But how do things look when you zoom into the data, especially in situations with a lot of variability? I did the pyramid sprints in the beginning to find this out: I sprinted 100m, 200m, 500m, 200m and finally again 100m, with 200m breaks in between to let my heart rate go up and down repeatedly. For both watches, in Fig. 2, you can clearly see the black lines, indicating the speed, go up and down as you would expect for the five sprints and recovery phases. In both cases the heart rate, represented by the gray lines, also reacts as expected: peaks similar to the speed, but with a slight delay. It is, however, striking that the chest strap-based heart rate is measured in a much smoother way, and that the recovery phases between the sprints are picked up as expected. The wrist-based technology seems to have trouble here: heart rate maxima are very similar, but the drops during the 200m recovery phases, which last about 1 minute, are less pronounced.


Fig. 2: Comparison of heart rate and speed during pyramid sprints obtained with a chest strap-based technology (FR610) or a wrist-based technology (FR735). GPS-based speeds are similar as expected, but the heart rate fluctuates much more when measured at the wrist and does not respond as accurately as chest strap-based measurements to rapid changes, for example during the recovery phases between two sprints.

On short time scales, wrist-based heart rate measurement might therefore have more trouble picking up rapid heart rate changes, especially when there are large differences between maxima and minima. Despite this, the overall performance of both technologies is very comparable and probably accurate when averaged over longer times, such as minutes or an entire run, even if high-intensity phases such as sprints or hill running are present.

For you as an athlete this means the following: wrist-based measurement is a hassle-free and accurate way of gaining insights into training status and workout intensities during longer training runs, or training runs involving longer intervals such as fartleks or hill sprints. If your training requires accurate heart rate measurements during very variable high-intensity workouts, you might want to consider using the good old chest strap. Classical track-based workouts involving 200m or 400m repeats with relatively short recovery phases in between would fall into this category.

Happy training with all this technology around your wrist and chest!

If you are into biking, running or any other outdoor activity, it can be quite entertaining and helpful to record your activities with a GPS. The resulting tracks (in .gpx format) are easily obtainable from, for example, Garmin devices or the Strava app. Routinely these tracks are overlaid on maps such as Google Maps or OpenStreetMap, but it is also possible to take the latitude, longitude and altitude information out of these GPX tracks and plot them in 3D.

Here’s an example of how to do this in MATLAB (you need the Mapping Toolbox).

First, add the .gpx file into the MATLAB path folder, here I call it “Bergankunft.gpx”.

Now tell MATLAB to read the .gpx file, extract latitude, longitude, and elevation and plot the obtained information:

% Read the GPX file (requires the Mapping Toolbox)
route = gpxread('Bergankunft.gpx');
x = route.Latitude;
y = route.Longitude;
h = route.Elevation;
% Plot the track in 3D
plot3(x, y, h, 'b', 'LineWidth', 8)
grid on

Now you have your track as a 3D figure and you can manually turn it and study your efforts in 3D 😉
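If you prefer a MATLAB-free route, the same latitude/longitude/elevation extraction can be sketched in Python using only the standard library (assuming a GPX 1.1 file with `trkpt` elements; matplotlib could then handle the 3D plotting):

```python
import xml.etree.ElementTree as ET

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def read_gpx(path):
    """Return the latitude, longitude and elevation of every track point."""
    root = ET.parse(path).getroot()
    lat, lon, ele = [], [], []
    # trkpt elements live in the GPX 1.1 namespace
    for pt in root.iter("{http://www.topografix.com/GPX/1/1}trkpt"):
        lat.append(float(pt.get("lat")))
        lon.append(float(pt.get("lon")))
        e = pt.find("gpx:ele", GPX_NS)
        ele.append(float(e.text) if e is not None else 0.0)
    return lat, lon, ele

# x, y, h = read_gpx("Bergankunft.gpx")
# Plotting could then use matplotlib's 3D axes, e.g. ax.plot(x, y, h)
```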

In order to make a video of the whole thing, download a MATLAB function created by Alan Jennings (CaptureFigVid), which allows you to create a video of any rotating 3D plot.

I used the following settings:


CaptureFigVid([-180,10;-360,20;], 'Bergankunft3D', OptionZ)

Please also check out Alan’s instructions in order to understand what the variables mean. Most important are frame rate, the total duration, and the angles at which the plot is turning and captured.

The resulting small video (MP4) will be saved in your MATLAB path folder and you can watch it with any media player.

In my case (a trailrun from Lauterbrunnen to the Gspaltenhornhütte in Switzerland) the result looks like this:

The following small summary on a protein complex called EJC was inspired by a lecture given by Hervé Le Hir (ENS, Paris) at the Friedrich Miescher Institute in Basel in November 2014:

After transcription, an mRNA becomes processed, exported, stored or transported, translated and degraded. Several multimeric protein complexes carry out these tasks and readily transform the initially naked mRNA into a large messenger ribonucleoprotein (mRNP) complex. For a long time it was believed that these functional steps occur sequentially and relatively independently of each other. However, more recently it has become clearer that many events during the life of an mRNA leave permanent protein marks which can influence the efficiency or occurrence of subsequent functional events and which depend on the sequence context. One of the first machineries to engage the transcript in the nucleus after transcription is the splicing machinery. It splices introns out of the pre-mRNA molecule, thereby creating the mature mRNA. The splicing reaction, however, leaves a relatively stable mark on the newly created spliced mRNA: the Exon Junction Complex (EJC).

What is an EJC and where is it formed?

The EJC is a multiprotein complex that forms as a consequence of splicing, upstream of exon-exon junctions. Although the EJC’s composition is dynamic, it contains four core proteins: the RNA helicase and eukaryotic initiation factor 4A3 (eIF4A3), metastatic lymph node 51 (MLN51), and the heterodimer Magoh/Y14. eIF4A3 possesses two ATP-dependent RecA domains which bind RNA in a “clamp-like” fashion. Magoh/Y14 seems to prevent conformational changes of eIF4A3, while the conserved SELOR domain of MLN51 also binds the RNA and, in addition, further stabilizes the RecA clamps (1). This tetrameric core then serves as a platform for the binding of other factors that catalyze different regulatory processes during export, transport and translation of the mRNP. Using both fluorescence and electron microscopy approaches, it became possible to narrow down the assembly zone of the tetrameric EJC core to punctate nuclear regions termed perispeckles (the periphery of nuclear speckles). All EJC subunits are enriched and fully assembled in these structures, while MLN51, Magoh, and Y14 mutants fail to localize to the perispeckle region. Furthermore, perispeckles seem to contain polyA mRNAs and transcripts which are actively undergoing splicing (2). These nuclear compartments had already been described earlier as storage and assembly sites for splicing factors, which highlights the possibility that EJC proteins join in a co- and post-splicing manner.

Which processes does the EJC catalyze?
Splicing of certain long-intron-containing mRNAs is affected by EJCs, and the complex also seems to be responsible for the catalysis of one form of alternative splicing. Furthermore, the EJC is implicated in mRNA transport and plays an important role during nonsense-mediated decay (NMD) of transcripts possessing a premature stop codon. When such an erroneous codon is present, some EJCs remain bound to the mRNA because they are not displaced by the progressing ribosome, and become bound by the up-frameshift factors Upf1, Upf2, and Upf3. Together these proteins trigger mRNA decay (3). For a long time it has been known that the presence of introns enhances the translation of a construct when compared to a similar construct lacking introns. Another important task of EJCs therefore seems to be the enhancement of the translational efficiency of spliced mRNAs. This has mainly been demonstrated by tethering all four EJC components artificially to mRNAs in Xenopus oocytes (4). The molecular details of this process had, however, remained elusive until recently.

How does the EJC influence translation?
It has been described that the EJC is the functional link between splicing and enhanced translation efficiency. Recently it emerged that the EJC component MLN51 might mediate this relationship by interacting with the translation initiation factor eIF3 (5). First of all, it was observed that overexpression of MLN51 enhances the translation of spliced luciferase reporters compared to identical non-spliced reporters. Furthermore, MLN51 also enhances translation if the remaining three EJC components are not present. Immunoprecipitations then showed that several translation initiation factors and ribosomal subunits can bind EJC components, but only MLN51 binds, via its SELOR domain, to the initiation factor eIF3. This interaction might lead to a stabilization of the mRNP complex so that translation can initiate successfully. One problem, however, persists: several studies have described that the ribosome displaces the EJC from the mRNP complex during the first round of translation. The question of whether an upregulation of the first round of translation is sufficient to explain the observed positive effect of the EJC on translation efficiency is therefore still open. One explanation could be that EJCs increase the absolute pool of translated mRNAs via MLN51. Alternatively, MLN51 might increase the total number of initiating ribosomes on a single mRNA before the EJCs become displaced. It might also be possible that MLN51 survives on the mRNA after displacement, and is thereby able to initiate subsequent rounds of translation. This hypothesis seems probable since the other three EJC components are not required for an increased translation efficiency. Since a large number of factors have been described that bind EJCs peripherally (1), the molecular mechanism of translation enhancement is likely to be more complex, and more functional interactions of MLN51 need to be identified.
The past years of research have, however, shown that the sequence context and all lifecycle steps of an mRNA are closely linked, and the EJC serves as an interesting example of the complexity of an mRNA’s life.

1. Le Hir H, Andersen GR. Structural insights into the exon junction complex. Curr Opin Struct Biol. 2008 Feb;18(1):112–9.
2. Daguenet E, Baguet A, Degot S, Schmidt U, Alpy F, Wendling C, et al. Perispeckles are major assembly sites for the exon junction core complex. Mol Biol Cell. 2012 May 1;23(9):1765–82.
3. Gehring NH, Kunz JB, Neu-Yilik G, Breit S, Viegas MH, Hentze MW, et al. Exon-junction complex components specify distinct routes of nonsense-mediated mRNA decay with differential cofactor requirements. Mol Cell. 2005 Oct 7;20(1):65–75.
4. Wiegand HL, Lu S, Cullen BR. Exon junction complexes mediate the enhancing effect of splicing on mRNA expression. Proc Natl Acad Sci U S A. 2003 Sep 30;100(20):11327–32.
5. Chazal P-E, Daguenet E, Wendling C, Ulryck N, Tomasetto C, Sargueil B, et al. EJC core component MLN51 interacts with eIF3 and activates translation. Proc Natl Acad Sci. 2013 Apr 9;110(15):5903–8.

Is it possible to quantify the impact that a certain research project has on society? And is it beneficial to attach a societal relevance to research in general? In times of tight research budgets it becomes increasingly important that scientists and universities are able to demonstrate what the impact of their research is. A very important aspect is for example the ability to interact with “the society” in order to find out what current needs are or to convince the taxpayers that basic research is actually important for well-being. But how could this interaction between science and society be measured?

About a year ago I wrote a review paper in which I tried to answer some of the above mentioned questions. As it turns out social media can be a powerful partner to communicate your science while also being useful to assess the impact your research has made on others. An additional dimension social media has to offer is the possibility to actually create “societal relevance” through educating your followers and demonstrating that science can be understood and appreciated by many folks out there and not only a few in the ivory towers.

A very useful (and interesting!) way to measure how fast new research can spread in the digital age has been developed by the people at Altmetric. This tool is able to extract how and where published work is shared in social networks. In my small extracurricular project that I have mentioned above I applied this tool to assess how different scientific fields and universities differ in spreading their scientific results and how these results are perceived by the general public.

As a participant of the GPP 2014 program, you might be interested in the Altmetric tool and the question of how researchers and universities can make their work more appealing to the public. Here is the link to my short paper: TheRelevanceOfResearch.

In case you are specifically interested in the Altmetric tool, there is also a more large-scale study and quantitative assessment of this topic which was published last year and can be found here.

Feel free to discuss these science communication issues with me. Either by email or in person in a few weeks from now.

Remember to forget

April 1, 2014

Yes, forgetting is essential! In order not to overload your brain with “useless” information from the past, you need to be able to forget. But how does forgetting work? Synapses connect neurons in the brain, and it is thought that an altered neuronal structure (read: different wiring or less wiring) leads to forgetting. While a lot of time, money and careers are invested into the question of how synaptic networks are formed, it is not very clear how this complexity can actually decrease. Assuming that a reduced synaptic “landscape” equals the well-known process of forgetting, not very much is known about this process. Although not the first of its kind, a recent paper addresses this issue and proposes a molecular mechanism which is mainly based on the regulation of the actin cytoskeleton via a post-transcriptional mechanism. And the evidence seems strong! The model organism used here is the worm C. elegans, which can actually be trained to avoid a certain taste because it was starved of food when it was in contact with that taste for the first time. Remembering and forgetting this Pavlovian training by the worms can then be used as a proxy for memory function. As already mentioned, the major player in the competition between memory formation and forgetting is the rate at which synapses are formed and degraded. A previously described, neuronally active protein called MSI-1 is proposed here to be responsible for the degradation part by inhibiting the translation of at least three mRNA types (arx-1, -2 and -3) whose protein products would normally form the Arp2/3 complex. This complex is normally responsible for remodeling the actin skeleton of synapses by inducing actin branching. MSI-1 therefore prevents Arp2/3 complex formation and thereby leads to decreased retention of synaptic structure. In other words: MSI-1 increases the tendency of synapses to disappear, which might be one factor in answering the question of why we forget things.
This interplay is further strengthened by the authors’ finding that the deletion of the add-1 gene (responsible for actin capping and therefore stabilization) leads to memory loss. However, this phenotype could be reversed when msi-1 was deleted at the same time. As a consequence, add-1 and msi-1 must both be involved in memory formation and retention, but with opposing functions.

An unresolved question, however, remains: how is MSI-1 “activated” to suppress arx mRNA translation? It is likely that forgetting is a neuronally regulated and controlled process, just like memory formation. The authors propose that the glutamate receptor GLR-1 might play a role in this process because its expression is exclusively increased in the MSI-1-positive neurons during learning. At the same time, GLR-1 is also required for MSI-1 function and therefore for memory loss. How the upstream regulator GLR-1 can influence these two opposing events at the same time therefore remains an open question for future studies. Another interesting open question is the link between the AVA neurons, in which MSI-1 was predominantly found, and neurons in the gut of the worms, in which MSI-1 was also found. Can this link be explained by the food/starvation-related setup of the experiment? And do other forms of training/memory acquisition, and the resulting forgetting mechanisms, work differently? Furthermore, what are the effects of MSI-1 on the numerous other actin remodeling factors?

Despite these open questions, the paper presents compelling evidence for an additional molecular mechanism explaining neuronal information retention and loss. In summary, and interestingly, memories seem to be regulated in a balanced way that is deeply influenced by the synaptic actin skeleton, which is actively constructed and passively degraded through the inhibition of its formation by the translational repressor MSI-1.

Yes we can image mRNA

October 18, 2013


Time to focus

September 27, 2013

Life, from single molecules to entire populations, takes place in four dimensions: three spatial dimensions and, last but not least, the dimension of time. Interestingly, researchers ignored these hard realities for quite some time. During my PhD project on translational regulation within cells, we would like to master the four dimensions as well as we can. Live-cell imaging is a good method to monitor a single cell over time and to observe what is changing. However, live-cell imaging requires sharp and crisp images in order to be able to track single molecules over longer time spans. The biggest problem with conventional light microscopes is in fact the three spatial dimensions (x, y, z), because all the light from the specimen that you are observing is collected. This means that not only the light of a single plane (x, y dimensions) is collected (and later observed), but also the light originating from all other planes above or below (z dimension) (see also Figure 1). Collecting a lot of this so-called “out of focus” light leads to blurred pictures, which means that fine details cannot be distinguished from each other anymore. A powerful tool to circumvent this problem is a variation of classical light microscopy called CONFOCAL MICROSCOPY. Here, I would like to give a short introduction to this extremely powerful and widely used microscopy technique.


Figure 1: A cell that is observed under a microscope has three dimensions (x, y, z). However, the optics of a microscope dictate that only one z-plane can be “in focus” and not all planes at the same time. A standard microscope collects the light of all planes and therefore often produces blurred images when larger objects such as cells are observed.

In order to make sense of the confocal technique (con-focal = “having the same focus”), I would like to draw your attention to Figure 2. With the help of steps 1 to 5, I will guide you through the figure. First of all, a confocal microscope needs a strong light source. This role is often fulfilled by a short-wavelength laser (Step 1). The laser light is then reflected at a 45° angle by a so-called dichroic mirror (Step 2). This special mirror reflects short wavelengths (such as the green excitation laser), but is permissive for longer wavelengths (such as the emitted red light). The reflected green laser light is focused by the objective lens onto the specimen. Unfortunately, it is impossible to focus the light on only one single z-plane. As a consequence, a number of z-planes are excited by the green light and, depending on the fluorescent molecule, emit light of longer wavelengths, here depicted as red, orange, and purple (Step 3). Part of this emitted light will later form the image that you can observe, but first it needs to travel to your eye: as explained above, the dichroic mirror is permissive for the emitted longer-wavelength light. Therefore, the light originating from all z-planes can pass. Since the light originates from different planes, it also hits the so-called focal lens at different positions, resulting in different focal points for this light. And now a small opening, called a pinhole, comes into play (Step 4): most of the light (depending on its origin) cannot pass this tiny opening because it is focused either in front of or behind the pinhole. The reason why a confocal microscope produces crisp images is that only light from a single z-plane is able to pass, since its focal point lies exactly within the pinhole (in this example the red light). Consequently, this light can reach the detector (Step 5), where it is converted into a visible image.
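The geometry behind Step 4 can be illustrated with the thin-lens equation 1/f = 1/d_o + 1/d_i: light from planes other than the in-focus plane converges in front of or behind the pinhole. A toy calculation (all distances in arbitrary units, and of course a strong simplification of a real confocal light path):

```python
def image_distance(f, d_o):
    """Thin-lens equation: distance behind the lens at which light
    from an object at distance d_o is focused (1/f = 1/d_o + 1/d_i)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 10.0                            # focal length of the focal lens
pinhole = image_distance(f, 30.0)   # pinhole sits at the in-focus plane's image

for d_o, label in [(28.0, "plane closer to lens"),
                   (30.0, "in-focus plane"),
                   (32.0, "plane farther from lens")]:
    print(f"{label}: focuses at {image_distance(f, d_o):.2f}"
          f" (pinhole at {pinhole:.2f})")
```

Only the in-focus plane’s light converges exactly at the pinhole (15.00 here); the closer plane focuses behind it (15.56) and the farther plane in front of it (14.55), so both are largely blocked.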


Figure 2: The setup of a confocal microscope can be described in five simple steps (see text). The pinhole is the central element because it blocks all “out of focus” light originating from non-desired z-planes.

Unfortunately, the seemingly simple confocal approach also has two important side effects. First of all, a lot of light is lost because it is blocked by the pinhole. This in turn requires a very strong light source, which can damage the sample if applied for a long time. To prevent this from happening, the specimen is scanned point by point in the x,y dimension. This leads us to the second side effect: scanning takes a lot of time, which is rather impractical if you want to observe a living cell. But: both problems can be (partly) resolved by a variation of confocal microscopy called “spinning-disc confocal microscopy”.
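To get a feeling for why point scanning is slow, here is a back-of-the-envelope estimate. The frame size and dwell time are typical example values I picked for illustration, not the specs of any particular microscope.

```python
# Back-of-the-envelope: why point-by-point scanning is slow.
# Frame size and dwell time are illustrative example values.

points_x, points_y = 512, 512   # points scanned per frame
dwell_time_s = 10e-6            # time the laser spends on each point

frame_time_s = points_x * points_y * dwell_time_s
print(f"One frame takes {frame_time_s:.2f} s "
      f"(~{1 / frame_time_s:.1f} frames per second)")  # -> ~2.62 s per frame
```

Well under one frame per second: fine for a fixed sample, but far too slow to follow fast processes in a living cell.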

More on this technique in my next post!

Traditionally, single-molecule experiments are performed in vitro and therefore in a reduced environment. Recently, it has become possible to combine this single-molecule accuracy with a living single cell and to observe what happens in real time (“live”). For biologists, the combination of these three technological ideas creates a lot of possibilities for answering a number of currently open questions. I am very happy to be part of this adventure. In the following, I would like to address some aspects of my work:

What am I doing?

Currently I am working on the intriguing and big question of how cells translate their DNA into protein. Interestingly, many important sub-questions of this problem still remain unanswered, especially when focusing on the fate of mRNA molecules once they have left the nucleus and are present in the cytoplasm. Our focus is the quantification of the translation process in time and space and the characterization of its steps and major molecular players. In order to elucidate what happens to mRNAs in the cellular context, we mark them with fluorescent proteins and apply single- and live-cell imaging. In addition, new labeling and detection technologies allow us to study mRNAs at the single-molecule level.

Why study translation live and in single cells?

The so-called central dogma of biology, namely the conversion of information stored in the DNA into proteins, has been dissected by a large number of scientists. However, in most traditional approaches the mRNAs, as the central information carriers, are isolated from large numbers of cells and therefore removed from their natural cellular context. This results in functional deficits and a loss of spatio-temporal information (“Why is this mRNA at this place in this cell at this time?”). In contrast, the combination of single- and live-cell imaging allows us to study the fate of mRNAs during translation in their physiological environment, over a longer period of time and with a minimum of disturbing factors. The use of single cells also makes it possible to detect differences between cells of the same kind (for example neurons or muscle cells). An organ represents a very heterogeneous environment, so cells have to differ in order to adapt to their local environment. More than 150 years ago, Charles Darwin already noted that observable traits can vary widely within a species. Why couldn’t this also be the case for individual cells?

Why single-molecule accuracy?

Next to the advantages that live single-cell analysis has to offer, it is important to keep in mind that most biological processes can be reduced to the level of molecules. When, however, a larger number of molecules is observed (even within a single cell), this automatically leads to an averaging effect. A complicated biological process like the translation of mRNA into protein, involving a number of molecules during specific stages, might therefore only be recognized as a single event with a “before” and “after”, without knowing what really happened in between. By visualizing single molecules, it becomes possible to track their role as puzzle pieces within the big picture.
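The averaging effect is easy to demonstrate with a small simulation. Here each hypothetical “molecule” switches sharply from an off state to an on state, but at its own individual time; the numbers and the step-like model are of course a cartoon, not real data.

```python
# Toy illustration of the averaging effect: each "molecule" switches from
# state 0 to state 1 at its own time. Averaged over the population, the
# sharp single-molecule step blurs into a gradual ramp.
import random

random.seed(0)
n_molecules, n_timepoints = 100, 20
switch_times = [random.randrange(n_timepoints) for _ in range(n_molecules)]

def trace(switch_t):
    """Single-molecule trace: a clean step at this molecule's switch time."""
    return [0 if t < switch_t else 1 for t in range(n_timepoints)]

# Population average: the individual steps disappear into a smooth increase.
avg = [sum(trace(s)[t] for s in switch_times) / n_molecules
       for t in range(n_timepoints)]
print("single molecule:", trace(switch_times[0]))
print("population mean:", [round(v, 2) for v in avg])
```

The single-molecule trace shows exactly when “it” happened; the population mean only shows that something happened gradually, somewhere in between.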

Nice. And how is this done?

There are two major tools. The first one is a microscope (more specifically: a light microscope called a confocal spinning-disc microscope) to observe the single cell with its mRNA molecules. However, the resolution of a light microscope is limited to about 220 nm (1 nm = 1 m / 1,000,000,000). Even though an RNA molecule might be longer than that, it is also about 1,000 times thinner and therefore not detectable. In order to still detect mRNAs, we label them with fluorescent proteins. The emitted light results in a so-called “diffraction-limited spot”, which can be detected by the cameras of our microscope. For the RNA labeling we apply the MS2 and PP7 systems, which use specific bacteriophage proteins that are in turn fused to fluorescent proteins and bind to specific regions within the mRNA molecule of interest. Importantly, the MS2/PP7 labeling does not harm the biological processes within the observed cell. With this system it is also possible to label a single mRNA molecule in two colors (for example red and green). During the translation process, different parts of the mRNA are targeted by the translation machinery in a sequential manner, which influences the binding of the green and red proteins. The appearance of both colors at the same time (yellow), first green and then red, or the other way around, the speed at which this change occurs, and the location within the cell can tell us a lot about the translation process.
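The ~220 nm figure mentioned above follows from the Abbe diffraction limit, d = λ / (2 · NA). The wavelength and numerical aperture below are typical example values (green light, a high-NA oil-immersion objective), not the parameters of our specific setup.

```python
# Worked example of the Abbe diffraction limit: d = wavelength / (2 * NA).
# Wavelength and numerical aperture are typical example values.

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable distance (nm) for a light microscope."""
    return wavelength_nm / (2 * numerical_aperture)

d = abbe_limit_nm(550, 1.25)  # green light, high-NA oil objective
print(f"Smallest resolvable distance: {d:.0f} nm")  # -> 220 nm
```

Anything much smaller than this, such as a single mRNA, cannot be resolved directly, which is exactly why we rely on bright fluorescent labels and their diffraction-limited spots instead.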

In case I could spark your interest for single-molecule live cell imaging please also see our website or check out the following three articles on mRNA labeling and detection:

  1. Hocine et al., Single-molecule analysis of gene expression using two-color RNA labeling in live yeast. Nat Methods. 2013 Feb;10(2):119-21.
  2. Wu et al., Fluorescence fluctuation spectroscopy enables quantitative imaging of single mRNAs in living cells. Biophys J. 2012 Jun 20;102(12):2936-44.
  3. Larson et al., Real-time observation of transcription initiation and elongation on an endogenous yeast gene. Science. 2011 Apr 22;332(6028):475-8.

More on

  • the Spinning-disc microscope
  • the MS2 and PP7 labeling systems
  • and “diffraction limited spots”

will follow later.