This little article introduces the research that Professor Charlotte Hemelrijk and her group “Behavioural Ecology and Self-organization” at the University of Groningen perform on the complexity of bird flocks and fish schools. In the context of the Honours College course “Leadership or Not in Animal Societies”, the question was addressed whether a single leader is necessary to organize the complex behaviour that one can observe in large, moving fish schools or bird flocks.

Is a leader required to guide the instant movement of thousands of birds or fish into one direction in order to find food or escape an enemy?

Or is some kind of intrinsic property that emerges from the school or flock sufficient to explain the observed behaviour?

In the following I will describe how the estimation of movement parameters, computer simulations, and the careful observation of fish schools in nature led to a robust model that explains why no leader is necessary to coordinate the movement of fish schools.

For fish it is very attractive to organize in schools because spawning, finding food, access to mates, protection from predation, and hydrodynamic efficiency all improve for the individual fish. In order to understand how complex school behaviour evolves, it first became necessary to describe the formation of a school in general. It has been hypothesized that collective movements, as they occur in a school, are characterized by directional and temporal coordination. This coordination might only become possible if individuals mutually influence each other depending on their distance to other members of the school (Huth and Wissel, 1992). Fig. 1 shows the effects that the distance between two fish has on their movement according to the formulated hypothesis. These parameters were then used in a computer simulation in order to test whether the resulting modelled schooling behaviour resembled natural schooling behaviour. Interestingly, the parameters seemed to be sufficient to describe the natural behaviour (Fig. 1 (C)) that had been noted earlier (Partridge, 1981).
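The zone rules of Fig. 1 can be sketched as a simple per-neighbour decision function. This is only an illustration: the radii and function names below are my own assumptions, and the actual model of Huth and Wissel (1992) uses different values and a more elaborate combination of influences.

```python
import math

# Hypothetical zone radii (arbitrary units); the values used by
# Huth and Wissel (1992) differ -- this is only an illustration.
R_REPULSION, R_ALIGNMENT, R_ATTRACTION = 1.0, 3.0, 7.0

def reaction(own_pos, own_heading, other_pos, other_heading):
    """Return the desired heading (radians) of a fish reacting to a
    single neighbour, depending on the distance to that neighbour."""
    dx, dy = other_pos[0] - own_pos[0], other_pos[1] - own_pos[1]
    dist = math.hypot(dx, dy)
    if dist < R_REPULSION:          # too close: turn away
        return math.atan2(-dy, -dx)
    elif dist < R_ALIGNMENT:        # comfortable distance: align
        return other_heading
    elif dist < R_ATTRACTION:       # too far: swim toward the neighbour
        return math.atan2(dy, dx)
    else:                           # outside the perception range: ignore
        return own_heading
```

Iterating such a rule for every fish in every time step is all a simulation of this kind needs; no fish has any global information about the school.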

Fig. 1: Hypothesized effects of the presence of a second fish on the movement of the first fish depending on the distance between the two. (A) and (B) display how a first fish reacts when a second fish is present in four different proximity zones (based on Huth and Wissel, 1992). (C) shows how the modelled fish schooling behaviour (bottom) clearly resembles the observed behaviour in reality (top) (based on Partridge, 1981).

Based on the parameters described above (attraction, alignment, and avoidance), further studies were able to significantly link the behaviour of individual fish to distinct school shapes. However, the researchers needed to introduce the factor “speed” into their model in order to be able to reproduce observations in nature (Kunz and Hemelrijk, 2003; Hemelrijk and Hildenbrandt, 2008). The researchers found that, through coordination and collision avoidance, a transition to an oblong school shape (with respect to movement direction) occurred as a function of speed. In other words, a fish school reduces its width and increases its length at increasing velocities because this enables the individual fish to avoid collisions. Later, Hemelrijk and colleagues showed that the conclusion drawn from their model also holds true for fish schools in real-life experiments (Hemelrijk et al., 2010).

In order to narrow the gap to the experimental observations, the researchers also introduced a factor to describe the effect that the number of neighbours has on the movement decisions of the fish. They hypothesized the existence of two mechanisms that could govern the movement of an individual fish when surrounded by more than one neighbouring fish. Fig. 2 schematically depicts both hypotheses and their respective outcomes in a computer model when applied over a number of “decision cycles”. The first model assumes that the movement of an individual fish (3) that has at least two neighbours (1 and 2) in its field of view is largely the result of taking the average path between both fish. The second model was assumed to be more realistic because the fish (3) would have a priority direction that largely depends on factors such as the distance to its neighbours. A computer simulation of both models, however, produced a surprising outcome. After a number of cycles the priority model had led to a dispersed school pattern that is never observed in nature. In contrast, the average direction model resulted in an accurate reproduction of field observations. The researchers therefore assumed that the priority direction effect is probably averaged out in a large school because there are many, and changing, neighbours. The final result is an average directional movement.
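The two decision rules compared in Fig. 2 can be sketched roughly as follows. This is a minimal illustration; the function names and data shapes are my own, not those of the published model.

```python
import math

def average_direction(headings):
    """Average-direction model: steer toward the circular mean of
    all neighbours' directions."""
    x = sum(math.cos(h) for h in headings)
    y = sum(math.sin(h) for h in headings)
    return math.atan2(y, x)

def priority_direction(neighbours):
    """Priority-direction model: follow only the closest neighbour.
    `neighbours` is a list of (distance, heading) pairs."""
    return min(neighbours)[1]
```

In the averaging rule, every neighbour pulls a little; in the priority rule, one neighbour dictates everything, which is what lets small disturbances accumulate into the dispersed pattern seen in the simulation.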

Fig. 2: Two models, and their simulation results, that take neighbouring fish into account during the movement decision process of an individual fish. (A) The average direction model assumes that the movement of fish 3 is the result of averaging the directions of its neighbours. The priority direction model assumes that fish 3 decides to follow the closest neighbour. (B) Simulations of both models resulted in a dispersed fish distribution in the case of the priority model (right) and a more realistic, ordered fish distribution for the average model (left).

The findings presented in Fig. 2 and the fundamental work presented in Fig. 1 therefore show that the interactions of individual fish can give rise to an emergent property, such as the coordinated behaviour of a large group, without requiring a leader. It is important to note that the individual perceptions of the fish within and on the edges of the school are vital for the coordination. This means that the final direction of the fish school is probably based, to a large extent, on the “decisions” that fish on the edges of the school make. These movement “decisions” might be based on knowledge and experience, but also on motivational factors such as hunger. Whether these factors can drive the behaviour of individual fish, and therefore the movement of the whole school, still remains to be elucidated.

The computational tools of theoretical biology therefore seem to be a good approach to describing complex behavioural patterns in animal groups. Other research projects of the Hemelrijk group used similar parameter-simulation approaches to describe the behaviour of bird flocks and their internal dynamics (Hildenbrandt et al., 2010). In bird flocks, too, no real leader is necessary; instead, the individual movement decisions of birds in their neighbouring context seem to govern the movement of the flock as a whole. These studies can also help to improve our understanding of how large groups of humans act in situations of panic and fear, when rational decisions might become overruled by movement decisions based on the individual context.


Hemelrijk, C.K., and Hildenbrandt, H. (2008). Self-Organized Shape and Frontal Density of Fish Schools. Ethology 114, 245–254.

Hemelrijk, C.K., Hildenbrandt, H., Reinders, J., and Stamhuis, E.J. (2010). Emergence of Oblong School Shape: Models and Empirical Data of Fish. Ethology 116, 1099–1112.

Hildenbrandt, H., Carere, C., and Hemelrijk, C.K. (2010). Self-organized aerial displays of thousands of starlings: a model. Behavioral Ecology 21, 1349–1359.

Huth, A., and Wissel, C. (1992). The simulation of the movement of fish schools. Journal of Theoretical Biology 156, 365–385.

Kunz, H., and Hemelrijk, C.K. (2003). Artificial fish schools: collective effects of school size, body size, and body form. Artif. Life 9, 237–253.

Partridge, B.L. (1981). Internal dynamics and the interrelations of fish in schools. J. Comp. Physiol. 144, 313–325.

Yes, bacteria also seem to have an immune system. Bacteria frequently come under attack by phages (bacterial viruses), so they need protection too!

Here I want to briefly introduce the CRISPR/Cas system, a very interesting type of bacterial defense system that uses former viral DNA sequences to guide bacterial DNA endonucleases to cellular targets where viral DNA is present. It has long been hypothesized that CRISPR/Cas could be used for biotechnological, non-invasive genome editing, and recently a number of breakthroughs concerning this application have been described. In the following I would especially like to discuss three Science papers that, in my opinion, have been groundbreaking in paving the way for real future applications of CRISPR/Cas and that have also helped to understand the molecular basis of this system. I will concentrate on the type II CRISPR/Cas system (one of three types in total). In general, a bacterial immune response against a viral invader can be split into three phases: adaptation, expression, and interference. Fig. 1 schematically shows how the type II CRISPR/Cas system is currently assumed to work during the expression and interference phases.


Fig. 1: Schematic depiction of the type II CRISPR/Cas system present in bacteria to destroy invading DNA originating from viruses. The CRISPR/Cas gene cluster contains previously obtained sequence information about foreign DNA (colored triangles) which are separated by repeats (black rectangles). Upon induction this information is transcribed into pre-crRNA together with the expression of the Cas9 protein and so-called tracrRNA which serves as a universal linker to connect crRNA with Cas9. In the following steps tracrRNA and pre-crRNA are cleaved to smaller sizes at least twice. Now the crRNA-Cas9-tracrRNA complex is able to bind foreign DNA at a homologous site termed “protospacer” which is followed by a second, but very short identifier called PAM (protospacer adjacent motif). Once stable binding has been achieved Cas9 seems to cleave the invading DNA and thereby induces double-stranded breaks that inhibit the expression of viral genes. Scheme created by myself, based on (1) Supplementary Fig. 1.

Even though Fig. 1 depicts some of the molecular details that occur during a CRISPR/Cas-mediated response, there is probably more to the system. Important functional features remain unidentified, especially in the adaptation phase that governs the incorporation of DNA fragments into the bacterial genome. Soon after the CRISPR sequences were first described in the late 1980s and their function became evident, researchers began to hypothesize about the biotechnological usability of this defense system. Since then, the expression and interference phases have been studied very extensively, and especially during the last couple of months some exciting insights have been gained by different researchers with regard to an actual application. Emmanuelle Charpentier and coworkers described in a proof-of-principle study how a fused, custom-made crRNA-tracrRNA can be applied to target sequences of interest in a DNA plasmid, and in addition identified the two Cas9 protein domains that are responsible for the double-stranded target DNA cleavage (Fig. 2) (1). Partially based on Charpentier’s groundbreaking work, in a second and third paper published last month, researchers from the Massachusetts Institute of Technology and Harvard Medical School describe an exciting approach to silence entire gene loci in mouse and human cellular DNA. The key to successfully targeting specific sequences in eukaryotic cells seems to have been the co-delivery of an expression vector including both pre-crRNA sequences and the Cas9 gene (2). This approach also makes use of the chimeric crRNA-tracrRNA hybrid (Fig. 2) that mimics the naturally occurring crRNA:tracrRNA duplex described by Charpentier and colleagues (1). A parallel study describes targeting rates of 4 to 25% under different conditions. In addition, a bioinformatic approach identified 40% of all human exons as potentially available for CRISPR/Cas silencing. Cloning these target sequences into a 200 base pair format allowed, for the first time, the creation of a library describing potential target sites in the human genome (3).
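The bioinformatic identification of candidate target sites essentially amounts to scanning a DNA sequence for a protospacer directly followed by an “NGG” PAM. A minimal sketch of such a scan, assuming a fixed 20-nucleotide protospacer on one strand (the actual pipeline in (3) is more elaborate):

```python
import re

def find_targets(dna, protospacer_len=20):
    """Scan a DNA string for candidate Cas9 target sites: a protospacer
    of the given length directly followed by an 'NGG' PAM (N = any base).
    Overlapping sites are found via a lookahead. Returns a list of
    (start, protospacer, pam) tuples."""
    pattern = re.compile(r"(?=([ACGT]{%d})([ACGT]GG))" % protospacer_len)
    return [(m.start(), m.group(1), m.group(2))
            for m in pattern.finditer(dna.upper())]
```

A real scan would also search the reverse complement and filter candidates for uniqueness in the genome, but the core idea is just this pattern match.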

Fig. 2: Schematic depiction of the naturally occurring crRNA:tracrRNA duplex which, in conjunction with Cas9, is able to cleave viral DNA in a target-specific manner (top). By creating a linker loop that fuses the two functional RNA sequences, it became possible to engineer a crRNA-tracrRNA chimera that, based on experimental evidence, has the same functionality as its natural counterpart and has been proven to be very effective for genome targeting purposes in eukaryotic organisms (bottom). Based on (1) Fig. 5.

I consider the CRISPR/Cas system extremely interesting because of its seemingly simple nature, consisting of only a cleavage protein, a target, and a target identifier. Whether this is the whole story remains to be seen, but some of the most important functional elements now seem to have been identified on a molecular level. The construction and usability of an artificial linker connecting RNAs and endonuclease demonstrates how far knowledge has proceeded. Or in Richard Feynman’s famous words: “What I cannot create, I do not understand”. We might not understand it completely, but at least we can “create” an important part of it. I am confident that CRISPR/Cas will play a very important role in bacterial/industrial biotechnology and also in genome editing in the future, because it dramatically decreases the challenges that accompany the silencing of eukaryotic genes in their native context. By further advancing the knowledge, and consequently the technology, it might even become possible to use the system for genome editing by achieving controlled, sticky-end-like double-stranded breaks. This would enable the ligation of desired sequences into eukaryotic genes.

It would not be the first time that a giant leap across the phylogenetic divide is made. The bacterial Taq polymerase, used during PCR every day around the globe, is just one example of how small the world can be at the molecular level. Darwin would have enjoyed it.


(1) Jinek M. et al., A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity, Science 2012 Aug 17;337(6096):816-21.

(2) Cong L., et al., Multiplex Genome Engineering Using CRISPR/Cas Systems, Science. 2013 Jan 3 [electronic publication ahead of print].

(3) Mali P., et al., RNA-Guided Human Genome Engineering via Cas9, Science. 2013 Jan 3 [electronic publication ahead of print].

Random impressions

February 7, 2013

No biology today, just photos from Boston. Click here for a little soundtrack and click the individual photos for higher resolution.


Getting visual

December 9, 2012

My current project explained in a short video. For biological science beginners and only in German. My apologies for these restrictions!

Check the video HERE.

Seeing is believing…

November 30, 2012

…and understanding!

Most cellular processes are ingeniously and non-intuitively orchestrated. In addition, they contain a large number of components. Understanding all this can pose a challenge. Not surprisingly, research experience over the last decades has shown that it actually is a challenge! Crystal structures aside, the bulk of molecular knowledge about cellular processes comes from indirect evidence such as blots, gels and other traditional techniques. Would it not be interesting to observe processes (1) in vivo, (2) at high resolution and magnification, and (3) in real-time? Probably not many biologists will disagree with this. A major challenge towards this aim, however, is the nature and chemical structure of the cell surface/membrane. From an optical perspective this lipid bilayer can be viewed as a so-called “opaque surface”. First of all this means that it is non-transparent and scatters light (Fig. 1). Looking through a sample therefore becomes impossible because the incoming light is scattered so diffusely that it is hard to attach any meaningful information to it.

Fig. 1: Plane waves that hit a rough surface are scattered instead of being directly reflected. The resulting random speckles lead to a distorted image of the object in the human eye.

In the past, techniques have been developed to extract information from opaque layers that at least let through a small number of direct beams. Here, however, I want to present a new approach, recently published by a team of Dutch and Italian researchers, who managed to retrieve images through a completely opaque surface.

Two layers of material were used for this study. The first layer is the opaque layer, which scatters the light and does not allow a direct observation of a 50 μm sized object on the second layer (Fig. 2 (A)). In this case the object to be imaged was the Greek letter pi, made out of fluorescent polymers. Both layers are 6 mm apart, which is a large distance considering the size of the object. As depicted in Fig. 2 (B), a 532 nm laser is directed at the first opaque layer and serves as the light source. Due to scattering of the light that passes the first layer, and due to scattering of the fluorescent light that travels back through the first layer, it becomes impossible to identify which object is hiding behind the layer (Fig. 2 (C)). It is, however, possible to measure the overall amount of light that originates from the fluorescent object. The researchers assumed that all the information needed to reconstruct the image is already contained in this recorded light. Due to scattering and speckling this information is disorganized and cannot be read out by conventional means (such as our eyes and brain). The key element of the study was therefore to develop an algorithm and a technical procedure able to extract meaningful information from the chaotic light mix (Fig. 2 (D)). In the following I will explain this procedure in a bit more detail. Since I am NOT a physicist I omit some of the details, but I am sure that I am mentioning enough to understand the technique.


Fig. 2: Experimental design involving an opaque first layer and a second layer with a fluorescent object that could be retrieved by an algorithm involving scanning over different angles and autocorrelating the resulting information. For details see text (Source: Press release “Looking through an opaque material” by the University of Twente, the Netherlands).

Four variables are essential for being able to recalculate the nature of the object behind the opaque layer. First, the object’s fluorescent response O, which roughly translates to the fluorescence intensity at a given point in space. Second, the speckle intensity S, which describes the amount of light speckle formation, also at a given point in space. The third important factor is the angle θ at which the laser shines through the first layer. Finally, the measured overall light intensity I is essential.

During the course of the study, the angle of the incident light was slightly varied. By iteratively changing the laser angle and measuring the overall intensity I, it became possible to calculate correlations between all four factors. The interesting part is that these correlations are the foundation for organizing the information hidden in the scattered fluorescent light, which before was thought to be totally random.

The first step towards this was the discovery of the relation between the incident laser angle and the measured fluorescence intensity I. Interestingly, the speckle intensity S and the object’s response O remained largely unchanged for a given laser angle. This enabled the researchers to use nine different angle scans, which yielded enough information to autocorrelate S and O. Spatial information could now be extracted from the previously randomly distributed intensity, because the relationship describing how S, O, and θ influence I was known.
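The core trick can be illustrated in a toy one-dimensional calculation: even after the object's response has been "mixed up" by a random speckle pattern, its autocorrelation survives in the measured intensity. This is my own simplification, not the authors' algorithm, which additionally performs phase retrieval to recover the actual image from the autocorrelation.

```python
import numpy as np

def autocorrelate(signal):
    """Normalized autocorrelation of a 1-D intensity trace, computed
    via the Wiener-Khinchin theorem (inverse FFT of the power spectrum)."""
    f = np.fft.fft(signal - signal.mean())
    ac = np.fft.ifft(np.abs(f) ** 2).real
    return ac / ac[0]

# Toy version of the measurement: the recorded intensity I is the
# object's response O blurred (circularly convolved) by a random
# speckle pattern S as the laser angle is scanned.
rng = np.random.default_rng(0)
O = np.zeros(256); O[100:110] = 1.0    # simple 1-D "object"
S = rng.random(256)                     # random speckle pattern
I = np.fft.ifft(np.fft.fft(O) * np.fft.fft(S)).real

# Because S is random, the autocorrelation of I approximates the
# autocorrelation of O -- the information is hidden, not destroyed.
ac_I, ac_O = autocorrelate(I), autocorrelate(O)
```

The interesting point is that nothing about S needs to be known in advance; its randomness is exactly what makes its own autocorrelation featureless.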

In the same paper the authors also demonstrate the use of this technique for imaging a more complex biological sample hidden behind an opaque layer. However, such a sample contains much more detail than the relatively simple pi letter, and reconstruction becomes much more computationally intensive. To solve this issue the resolution had to be decreased by increasing the size of the speckle spots, thereby lowering the amount of incoming information. Despite these practical limitations, the researchers have clearly demonstrated that it is possible to noninvasively image through an opaque layer, such as a cell membrane. I am sure that this discovery has great potential for molecular and cell biological in vivo studies. This potential is enhanced even more by the possibility of increasing resolution by decreasing speckle spot sizes and of introducing 3D imaging by measuring speckle patterns in an additional direction.

Article: Jacopo Bertolotti, Elbert G. van Putten, Christian Blum, Ad Lagendijk, Willem L. Vos, Allard P. Mosk, Non-invasive imaging through opaque scattering layers, Nature 491(7423), 232–234, 2012.

Since November 1st I have been doing research on the mechanisms of polymerase exchange under DNA damage conditions in the laboratory of Dr. Joseph Loparo at Harvard Medical School in Boston, USA.

In case you do not belong to the small group of people who intuitively know what my project entails, this page is intended to make it clearer.

Below you see a photograph taken out of my window at night. It symbolically stands for the need to regulate complex processes that are required to achieve a certain task. As with traffic, a constant “flow” is also important for DNA replication. Every cell contains the same DNA, and replication events are constantly occurring during life. However, certain “stop-lights” come into play under certain conditions. During rush hour, traffic lights are controlled in a different manner than at 3 o’clock at night (at least, this is desired).


In a living cell it’s almost always rush hour. Still, many checks lead to a surprisingly smooth flow with very few mistakes occurring. Despite this, external “stress factors” can distort the flow. Imagine the driver on the right ignoring his or her red light. The resulting accident would require setting up a detour around the site of the accident so that a minimum of traffic flow can be guaranteed. However, this detour will make the overall traffic situation less efficient (especially during rush hour!) and everything will take far more time than needed. But at least a total breakdown can be prevented.

The sun’s UV light leads to similar accidents in the DNA replication flow, because it changes the regular DNA structure, which causes the formation of so-called “roadblocks”. The regular enzyme that catalyzes DNA replication, Polymerase III, cannot replicate across these roadblocks. Polymerase III normally guarantees high-fidelity DNA replication, meaning that there is an extremely low copy error rate. A problem related to this high accuracy is that Polymerase III is not very tolerant of DNA damage. So in order to prevent a total DNA replication breakdown, a “detour” enzyme comes in (for example Polymerase II or IV). These enzymes are built differently and are more tolerant of DNA roadblocks. However, they slow down the whole replication process and can lead to even more errors when used for too long.

This is why the use of Polymerases II and IV needs to be tightly regulated. Despite the importance of this process currently not a lot is known about it. And this is what my work will focus on.

The following explanations might become a bit more technical, but I will try to keep it to a minimum. To recap your knowledge of the general DNA replication process I want to introduce Fig. 1. It depicts the protein complex that is required and sufficient for DNA replication in Escherichia coli bacteria. Together these proteins constitute the replisome.

Fig. 1: Schematic overview of the replisome. DNA helicase unwinds the double stranded DNA. Two single DNA strands arise; one in 5’-3’ and the other one in 3’-5’ direction. Because the DNA polymerase complex is only able to synthesize into the 5’-3’ direction in a continuous fashion, the inverse direction needs to be synthesized in small bits called Okazaki fragments. The strands are therefore termed “leading” and “lagging” strand, respectively. Primase attaches primers to the DNA so that replication can be repeatedly initialized on the lagging strand. In E. coli the two DNA polymerases are tethered to the helicase by the γ clamp-loader complex which is especially required to load the β clamps through which the αεθ DNA polymerase subunit attaches to the single DNA strand. Single strand binding proteins (SSBs) ensure that the single strand does not coil up and remains accessible for the lagging strand polymerase (based on: van Oijen, Loparo, Annual Reviews Biochemistry, 2010).

Back to the switching between the polymerases under DNA damage conditions: this process is termed the translesion synthesis (TLS) response because the DNA roadblock (lesion) is bridged. A few aspects of TLS are already known, but in order to understand what they entail it is important that you have a good grasp of Fig. 1.

It has been elucidated by O’Donnell and coworkers (Molecular Cell, 2005) and Sutton and colleagues (PNAS, 2009) that the β clamp, which tethers the αεθ polymerase III subunits to the DNA, plays an essential role during TLS. Fig. 2 explains why this is the case.

Fig. 2: Structure of the DNA β clamp with its rim and cleft contact sites for Polymerase IV. Within the structure of the β clamp, two hydrophobic clefts have been identified. These seem to be able to accommodate certain amino acid residues of the so-called “little finger” domain of Polymerase IV. This essential key-lock mechanism is supported by additional rim contacts (Loparo, based on Bunting et al., EMBO, 2003).

Therefore the β clamp seems to serve as the basis for polymerase exchange during TLS.

The next question would be: how can we study the dynamic switching between Polymerase III and the other polymerases, such as Polymerase II or IV?

In Dr. Loparo’s laboratory, several different methods have been developed for observing these changes on a single-molecule basis. The focus of my project will lie especially on the TLS polymerase II (Pol II) and how it is able to replace the replicative polymerase III (Pol III). Pol II is special with regard to its high fidelity and the proofreading capability provided by its 3’-5’ exonuclease. These features are not present in the other translesion polymerases and make Pol II an interesting object to study. There are also indications that the Pol III – Pol II exchange might work differently than the Pol III – Pol IV exchange. Single-molecule biology only works when methodology originating from physics is combined with biologically relevant questions and classical biochemical techniques.
Traditional biochemical studies have generated almost all of the current DNA replication knowledge and continue to do so. However, bulk effects tend to average out the dynamic states that are so essential for protein functioning. Single-molecule and fluorescence methods are a powerful means to study the trajectories of proteins while they are functioning. Studying the formation of (replisome) complexes is a very interesting application and has been demonstrated to be successful. The development of procedures to locally and quantitatively study the mechanisms and stoichiometry of polymerase exchange will be central to this project.
Fluorescent organic molecules (dyes) and inorganic particles (quantum dots) will be used to label proteins and DNA. Innovative laser microscopic techniques such as Total Internal Reflection Fluorescence (TIRF) and Förster Resonance Energy Transfer (FRET) microscopy can then be applied to image and quantify the associations of the labelled single molecules.

Central to these approaches is a single-molecule primer extension assay. It works by the application of a microfluidic flow cell in which a DNA molecule is attached to a glass surface and stretched by a laminar buffer flow (Fig. 3, top). Extending this linear molecule and observing the change via fluorescence microscopy allows conclusions to be drawn about polymerase behaviour and dynamics (Fig. 3, bottom).

Fig. 3: The principle of a laminar flow cell which can be used to determine the single-molecule dynamics of polymerases (top) and an example of the fluorescence patterns that can consequently be observed by dark field microscopy (bottom). See the text for more details (based on: Tanner & van Oijen, Methods in Enzymology, 2010).

Based on these data (DNA length extension and time) it now becomes possible to actually observe polymerase switching. Previous studies have shown that Pol III is much faster at synthesizing long DNA molecules than the other polymerases. By determining the difference between polymerase extension speeds it therefore becomes possible to determine which polymerase is active. Furthermore, it has been determined that a single Pol III synthesizes about 900 base pairs before it leaves the replication fork and a new Pol III comes in. A small pause always occurs between these events. The length of the pause depends on the polymerase concentration. This is logical because at a higher polymerase concentration a switch can occur faster: there are simply more molecules available in a given location. By measuring a number of single-molecule trajectories under different conditions and with different polymerase types it becomes feasible to characterize polymerase dynamics. Fig. 4 shows the result of a plot where two different polymerases were tested. Because the pause length and the reaction speed are known, it is possible to distinguish between Pol III (fast) and Pol IV (slow).
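As an illustration, classifying which polymerase is active from a measured trajectory could look like the following sketch. The rates and thresholds are assumed, illustrative values, not measured ones.

```python
# Hypothetical, illustrative rates (bp/s); real values differ.
POL_III_RATE = 600.0   # fast replicative polymerase (assumed)
POL_IV_RATE = 10.0     # slow translesion polymerase (assumed)

def classify_segments(trajectory, pause_threshold=1.0, slow_cutoff=100.0):
    """Given a list of (time_s, length_bp) points from a flow-stretching
    trace, label each interval as 'pause', 'pol_iv' or 'pol_iii' based
    on the local synthesis rate."""
    labels = []
    for (t0, l0), (t1, l1) in zip(trajectory, trajectory[1:]):
        rate = (l1 - l0) / (t1 - t0)
        if rate < pause_threshold:
            labels.append("pause")
        elif rate < slow_cutoff:
            labels.append("pol_iv")
        else:
            labels.append("pol_iii")
    return labels
```

Applied to a trace like Fig. 4, such a rule turns a length-versus-time curve into a sequence of active-polymerase episodes whose durations can then be analyzed statistically.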

Fig. 4: Time and DNA length information from a flow-stretching assay helps to identify the switching dynamics of polymerases if synthesis speed and pausing behaviour are known (see also Fig. 3).

Most of the approaches described above, and the knowledge derived from them, are only valid for Pol III/Pol IV switching. The situation might be different for Pol III/Pol II switching. Elucidating Pol II switching behaviour with the assay described above will be a main aspect of my project.

Please stay tuned for updates!

GIMP my protein

September 8, 2012

While playing around with the open-source GNU Image Manipulation Program (GIMP) I asked myself: why not transform proteins into a construct that does not resemble the standard Protein Data Bank structures? Well, this is the humble result: another perspective on the famous green fluorescent protein (GFP), which has become so important for many projects. The GFP molecule you are seeing above is actually a modified version of the “real” protein. By modifying certain amino acids, researchers around Jenny J. Yang (Georgia State University, Department of Chemistry) managed to lend calcium- and proteinase-detecting properties to GFP. GFPs can then be incorporated into tissues where it is important to monitor calcium concentrations. In many organisms calcium is used to build up gradients that, for example, transmit chemical “information”, as in nerve cells. Therefore this type of protein, next to its funny-looking barrel shape, also offers some handy features to work with and might help to detect diseases such as cardiomyopathy or Alzheimer’s on a molecular scale (i.e., earlier).

With the fitting amino acid mutations, GFP can be used not only for the detection of calcium ions but also for the detection of other ions. A very famous ion is the hydrogen ion; the concentration of hydrogen ions determines the pH (yes, that’s the H in pH). The following few lines might be a bit technical, but they explain why GFP is loved by so many scientists, and why it is a perfect example of a biomolecular structure-function relationship. High pH sensitivity and specificity, rapid signal response, and good optical properties are important characteristics for a pH sensor. In addition, it must be possible to reach intracellular sites in a non-invasive and non-toxic manner. GFP fulfills many of these prerequisites and has therefore been used as an intracellular pH sensor. But how does it work? The sensitivity and specificity of GFP mainly seem to be based on the protonation state of the phenolic group of the chromophore (the green thing in the center). In other words, the charge of the light-emitting part of the GFP molecule changes. The GFP chromophore 4-(p-hydroxybenzylidene)imidazolidin-5-one (HBI) lies in the center of GFP’s β-barrel structure (made up of the white arrows) and is normally non-fluorescent due to its un-ionized phenol group. However, the inward-facing amino acid residues Ser65–Tyr66–Gly67 can cause the ionisation of the phenol group, leading to potential fluorescence. At different pH values, different protonation states of the chromophore’s phenol group influence its photochemical properties by influencing the ionisation of the phenol group. Different pH values therefore change the charge of the centrally located molecule that is responsible for the green light, and a different pH value translates into a different light intensity.
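To first order, this pH dependence follows the Henderson–Hasselbalch relation: the fraction of chromophores with an ionized (fluorescent) phenol group, and hence the fluorescence intensity, rises with pH. A small sketch, where the pKa is an assumed, illustrative number (real GFP variants differ):

```python
def deprotonated_fraction(ph, pka=6.0):
    """Fraction of chromophores with an ionized (fluorescent) phenol
    group at a given pH, via the Henderson-Hasselbalch relation.
    The pKa of ~6.0 is an assumed, illustrative value."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# To first order, the measured fluorescence is proportional to this
# fraction, which is what makes GFP usable as a pH sensor:
for ph in (5.0, 6.0, 7.0, 8.0):
    print(ph, round(deprotonated_fraction(ph), 3))
```

At the pKa the sensor is at half signal and most sensitive; far above or below it the response saturates, which is why a variant's pKa has to match the compartment one wants to measure.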

Why light is emitted with a different intensity in different protonation states is, however, a topic for a future blog entry. Only this far: the hydroxyl group of the phenol and its protonation states within the chromophore play a role, leading to different relaxation times of the electrons that become excited by the incoming laser…

Check the article by Ulrich Haupt and coworkers if you want to know more.

Building a Single Speed

September 5, 2012

Mine is done. Now construct your own Single Speed bike. Here’s what you’ll need:

  1. Everything starts out with a nice frame. I would suggest a 1970s/80s steel frame, but that’s basically a matter of taste. Buy a new one (expensive!), buy a used one, or use that old dusty frame from your garage. Some things you should care about when choosing a frame: (A) Think about the size you need. This table gives you an overview; the figures refer to the height of the seat tube in centimeters. (B) Check whether a headset and a bottom bracket are still part of the frame, since both are quite expensive parts and relatively difficult to mount. If you can, check or ask for the quality (smooth movement, no slackness). (C) Next thing on the list is to check what kind of rear dropouts the frame has. For a single speed bicycle, horizontal dropouts are best because they allow you to adjust chain tension. If you have a frame with vertical dropouts, you’ll need an extra chain tensioner later (which kind of destroys the great look of your new bike). (D) The last thing to check is what state the frame finish/paint is in. Make sure there is no rust in the headset and bottom bracket area.
  2. If your frame already has a bottom bracket, just buy a fitting crankset. If a chainring is attached already, make sure that it’s not too large. A 53-tooth chainring is definitely too large for a single speed bike and pedalling will be quite hard. For me a 46-tooth chainring is perfect. In case your frame doesn’t have a bottom bracket, you should first find out which standard type you need. This mainly depends on the diameter and the width; here’s an overview. For the headset there’s basically only one standard. Mounting both parts into your frame is quite complex, and if you are not an experienced bike mechanic you should seek the assistance of your favourite bicycle shop.
  3. Now the hardest part is over and you can start searching for the essential parts that actually make the frame look like a bike: the wheels. Since a single speed bike is not a fixed-gear bike, you can use a freehub. I think it’s a bit more comfortable and safer for a beginner. The best thing is to use 28” (inch) diameter rims. For the back wheel, just remove the cassette from the freehub and replace it with a single speed conversion kit. Once you have it, it’s about 15 minutes of work to set it up. I am using an 18-tooth sprocket in combination with the already mentioned 46-tooth chainring in front, which is perfect for flat cities and longer journeys. So choose your sprocket size wisely.
  4. For the likely case that you have horizontal dropouts: be aware that back wheels with a quick release skewer might slip if you are a strong rider. I recommend replacing the quick release axle with a standard axle with nuts. If you can find a solid axle with the same diameter and thread pitch and a couple of nuts, it will work. Remember that your new axle should also be at least 4 cm longer than the old (quick release) hollow axle. You need some special tools for this, but your local bike shop can probably assist you.
  5. Now buy some tires. This might seem a bit early, but it’s good to buy them now because you need to check whether they fit your brakes and frame. I’m using 28 mm wide Continental Grand Prix Four Season tires because I think they are fast and comfortable at the same time. Get some high quality inner tubes as well.
  6. It’s time for brakes now. Mounting them is not difficult; just have a look at an existing bike. However, sometimes some fiddling is necessary to get them aligned in parallel.
  7. Next, get yourself a seat post and a saddle. It’s very important that the diameter of the seat post exactly matches the inner diameter of the seat tube. Otherwise, you’ll always have fun with a slowly down-sliding seat post. Also find a handlebar now that fits your needs and that you think is comfortable.
  8. A bike is nothing without the chain. I would buy a new chain to prevent slipping issues. A special single speed chain is not necessary. To shorten the chain you need a special tool that you can again find in your favourite bike shop (or borrow one from a friend). Concerning the chain: Shimano offers a system called “Quick Link” which makes closing the chain, after having adjusted the length, very easy.
  9. Last, but not least get some pedals. If you cannot decide whether you want to use bike shoes with cleats or not, just buy pedals with a regular and a cleat side.
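Picking the chainring/sprocket combination from step 3 is just a bit of arithmetic. A small sketch of how far one crank revolution carries you; the 700x28 wheel dimensions are an assumption matching the rims and tires mentioned above:

```python
# Sketch: gear ratio and "development" (metres travelled per crank
# revolution) for a single speed drivetrain.
# Wheel size assumed: 622 mm rim (28"/700C) with 28 mm tires.
import math

WHEEL_DIAMETER_M = 0.622 + 2 * 0.028  # rim diameter plus tire on both sides

def gear_ratio(chainring: int, sprocket: int) -> float:
    """How many times the wheel turns per crank revolution."""
    return chainring / sprocket

def development_m(chainring: int, sprocket: int) -> float:
    """Metres travelled per full crank revolution."""
    return gear_ratio(chainring, sprocket) * math.pi * WHEEL_DIAMETER_M

print(f"46/18: ratio {gear_ratio(46, 18):.2f}, "
      f"{development_m(46, 18):.2f} m per crank revolution")
```

The 46/18 setup works out to roughly a 2.56 ratio, around 5.4 m per crank revolution; a bigger sprocket makes pedalling easier, a smaller one makes you faster on the flat.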

Parts you should consider buying new include the bottom bracket and headset, as well as the front chainring, sprocket, and chain. New rims and tires are nice, but not necessary. As I wrote, sometimes you will probably need the assistance of an experienced bike mechanic. Probably you will also have to pay him or her, but I definitely think you can have a very beautiful and well-working single speed bicycle for around 200€. Well, I think that’s it. Have fun. For more info there is also the famous single speed/fixed-gear bible by the late Sheldon Brown. His “Bicycle Technical Info” page supplies everything you need to know. If you are from Groningen, you should also pay a visit to Fietsje, a great bike shop with many useful single speed accessories.

The University of Groningen, where I currently study, has formulated three key research areas that are meant as a strategic aim to distinguish itself from other universities. One of these research areas is called “Healthy Ageing”. This week the scientific journal PNAS will publish an interesting article on work by Dr. Ellen Nollen and coworkers in which they describe how the depletion of a certain gene leads to higher tryptophan levels and seems to protect C. elegans worms from protein aggregate-related disease. Since protein aggregates are held responsible for Alzheimer’s and Parkinson’s disease in humans and not a lot is known about the related protein metabolism, this paper indicates some new potential therapeutic targets and fits perfectly into the university’s aim to promote research on “Healthy Ageing”.

As this paper indicates that high levels of tryptophan might have a protective effect against protein aggregate-related diseases, I considered it interesting to look into it a little bit deeper. The gene that is central within this research is called tryptophan 2,3-dioxygenase (tdo-2) and encodes the enzyme TDO-2, which is responsible for the degradation of the essential amino acid tryptophan. During their studies, the researchers from the Netherlands, Germany, and France found that genetic suppression of tdo-2 (also called knockdown) leads to inhibition of the tryptophan metabolism and thereby to higher levels of this amino acid in the tissues of the worms, which were the lab-rat substitutes here. Deciphering a metabolic pathway is very nice, but what makes van der Goot et al.’s results even more interesting is the fact that a longer lifespan of the worms was observed upon tdo-2 knockdown. It also proved effective to supply excess amounts of tryptophan and leave the tdo-2 gene “switched on”. Since C. elegans is generally regarded as a good model organism for age-related diseases such as Parkinson’s or Alzheimer’s, let us have a closer look at the study and its results.

The worms which were part of this study actually suffered from the expression of alpha-synuclein, known for its aggregation-promoting behaviour. High levels of this compound lead to lower motility of the worms, expressed in the unit body bends/minute. Suppressing tdo-2 with RNAi considerably extended the time period within which the worms live with a high motility rate. According to the researchers, this indicates suppressed effects of alpha-synuclein.

As metabolic networks are important in biology, the next question asked was whether an up- or downstream element (relative to the location of tdo-2 in the pathway) was responsible for the observed effects. All the genes mentioned in Fig. 1 were knocked down, with and without tdo-2 expression, and it was observed that knockdowns of downstream genes had no or only a very small effect on worm motility. Therefore the level of tryptophan seemed to be the key to the observed effects.

Fig. 1: TDO-2 related metabolic pathway. 

However, as depicted in the above figure, tryptophan is also part of the pathway synthesizing the neurotransmitter serotonin. Serotonin levels are currently a prime element in the understanding of Alzheimer’s disease and are also used in therapeutic approaches. A significant element of the work of Nollen and her colleagues is, however, that they show that serotonin has nothing to do with the observed motility and lifespan effects: knocking down the tph-1 gene in combination with tdo-2 does not significantly change the worms’ ability to move. Summing up, all knockdowns conducted during this study lead to the conclusion that the TDO-2 enzyme and (when it is suppressed) higher tryptophan levels are responsible for the increased mobility and decreased alpha-synuclein toxicity. This point was further stressed by the addition of tryptophan to the diet of worms which did express TDO-2 (Fig. 2): in a dose-dependent manner, tryptophan compensates for alpha-synuclein toxicity.

Fig. 2: Raised tryptophan levels suppress alpha-synuclein toxicity even in the absence of a tdo-2 knockdown.

Now it starts to become interesting: what does tryptophan do? How does it prevent protein aggregation on a molecular basis? Does it regulate some yet unknown biochemical pathways? Sadly enough, the authors stay very brief on these points, either because they do not know more themselves yet or because another publication in Nature or Science is waiting in the future. What they say is that they do not expect tryptophan to be directly responsible for the observed effects; rather, this amino acid (or its derivatives) probably acts on other biochemical pathways or their signalling molecules. Nevertheless this work shows that a lot of knowledge about protein aggregate-related diseases still remains in the dark. It also opens up possibilities to study the observed effects in mammals, since the tdo-2 gene and its enzyme product are evolutionarily extremely well conserved. TDO-2 is one of the proteins that link us with C. elegans worms. To what extent the tryptophan metabolism plays a role in human age-related diseases such as Alzheimer’s and Parkinson’s is a question that many research groups will work on in the future.

van der Goot AT et al., Delaying aging and aging-associated decline in protein homeostasis by inhibition of tryptophan degradation. Proceedings of the National Academy of Sciences of the United States of America, published online ahead of print on August 27th 2012, accessed on August 29th 2012.

Some things in biology can be observed best when concentrating on one molecule and its functions. Systems biologists will probably not agree and will interject that in biology it’s all about networks and interactions: the presence and concentration of A influences B, which increases the concentration of C, which consequently down-regulates A again. This is all true. However, in single-molecule biology it’s about the functioning and dynamics of (you guessed it) single molecules. When looking at larger systems there is always the danger of missing elements that occur only under certain conditions or at low concentrations, or that are masked by other, secondary processes. The weak point of single-molecule studies has always been the fact that complex systems are drastically reduced: again, you miss out on a lot, even though you are now able to study one molecule in detail.

But: change will come. As I described in an earlier post, super-resolution microscopy has been around for a few years now and has become a ready-to-use technique. For just $1,000,000 you can get your own. In theory, many fascinating research results should have been published by now; observing single-molecule dynamics in their native environment, what more could you wish for? Indeed, some spectacular footage has been produced. Stefan Hell and coworkers, for example, were able to record neurons within the cerebral cortex of a living mouse with a resolution of around 70 nm. Until 20 years ago, physics books would have told you that this is impossible. So have a look yourself, right here.
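The “impossible” those physics books referred to is the Abbe diffraction limit, which is easy to compute yourself. A quick sketch with illustrative values (the wavelength and numerical aperture are assumptions, not the parameters of Hell’s experiment):

```python
# Sketch: the classical Abbe diffraction limit d = lambda / (2 * NA),
# the smallest distance a conventional light microscope can resolve.
# 500 nm green light and a high-NA oil objective (NA = 1.4) are
# illustrative assumptions.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable distance in nanometres."""
    return wavelength_nm / (2.0 * numerical_aperture)

d = abbe_limit_nm(500.0, 1.4)
print(f"Classical diffraction limit: {d:.0f} nm")
```

With these values the classical limit comes out near 180 nm, which is why a 70 nm image of living brain tissue would indeed have been dismissed as impossible before techniques like STED circumvented the limit.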

Strangely enough, this research at the same time also demonstrates an interesting phenomenon that can be observed when scanning through live-cell super-resolution microscopy studies: most of the time, only structurally large (>200 nm) and functionally well-known structures (like neurons) are observed. Further, the temporal resolution is not great, being on the order of seconds; fast-moving molecules are still hard to image due to hardware (CCD camera, scanning) limitations. Of course it is very interesting to see how dendrites in the brain expand during learning, but it does not raise any new questions and, most importantly, does not answer any old ones. I am sure that super-resolution microscopy has a golden future, but it is important to improve sample preparation techniques, optimize fluorophores even further, and develop sensors that need a shorter integration time for the small number of photons they capture per frame.