To celebrate our 40th anniversary, we challenged you to ask EMBL anything. Over the coming weeks, this post will grow with the answers to 40 questions, submitted from all over the world by curious people of all ages.
If scientists are machines that turn coffee into papers, how many coffees, how many papers, in the last 40 years? This very fundamental question – tweeted by Aidan Budd from Heidelberg, Germany – is core to the life of so many scientists, and called for a multidisciplinary answer that would be both precise and fact-based…
Tobias Sack, Senior Librarian in EMBL’s Szilárd Library, researched the number of articles published by EMBL: according to the best estimates available, since 1975 our institution has published more than 14,000 articles – 613 in 2013 alone.
In the meantime, Michael Hansen, Head of Canteen and Cafeteria in Heidelberg, looked into how much coffee we’d drunk at the headquarters laboratory in 2013: 399,195 cups! A quick extrapolation indicated that across the five sites over the same period of time we’d been fuelled by approximately 725,809 cups of coffee. That’s equivalent to two bathtubs’ worth (perhaps like the one where Archimedes had his eureka moment!).
If we consider that articles are not only the end result of scientific experiments, but also the outcome of the environment in which the scientists work, and how much support they get from other departments, then it is legitimate to take the total consumption of coffee at EMBL into account.
In 2013, EMBL published 613 articles and drank 725,809 cups of coffee, which means that about 1,184 cups of coffee are needed to produce one article. We haven’t looked into the number of biscuits that go with that, but try multiplying that figure by around three …
If we assume that all articles ever published at EMBL were drafted under the same working conditions (probably the only far-fetched assumption we’ve made here…), then we get a total of 16,576,000 cups of coffee drunk since the founding of the institute!
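For the sceptics, this highly rigorous arithmetic is easy to reproduce. Here is a playful back-of-the-envelope sketch using only the figures quoted above (variable names are our own):

```python
# Back-of-the-envelope check of the coffee-to-paper conversion rate,
# using the 2013 figures quoted in the post.
cups_2013 = 725_809        # estimated cups across all five EMBL sites in 2013
articles_2013 = 613        # articles published by EMBL in 2013
articles_total = 14_000    # articles published since 1975 (best estimate)

cups_per_article = round(cups_2013 / articles_2013)
cups_since_1975 = cups_per_article * articles_total

print(cups_per_article)   # → 1184 cups per article
print(cups_since_1975)    # → 16576000 cups since 1975
```

Multiply `cups_per_article` by three for a rough biscuit count, at your own risk.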
Question 6: Which different species of plants inhabit EMBL’s Heidelberg campus?
This question was asked by Cecilia Bebeacua, from Heidelberg. If you would like to ask a question, please fill in the form here on EMBLetc.
EMBL’s headquarters is located in the middle of a lively forest home to many different plants – which serve as inspiration for observing and studying the various forms that life can take. Although scientists at EMBL do that every day with microscopes, spectrometers, and other sophisticated tools, how many people actually look closely at the lush vegetation in and around the campus? This was the question Adrian Rundle, Learning Curator in the Research and Evaluation team of the Content Department at the Natural History Museum in London, set out to answer when he visited in early November. Rundle collected and annotated leaves of the many species he found living in and around the campus and brought his findings together in this herbarium.
“I must say that your gardeners have done a great job and I admire their inspired planting. I have a collection of British seeds and fruits from over 2,300 species of plant. I still have quite a lot to find but it is getting very difficult,” smiles Adrian Rundle. “A highlight of the EMBL grounds was seeing a lot of the ground cover plant Carpet Box (Pachysandra terminalis) some of which were in fruit. I have looked at a great many plants of this in Britain but they don’t seem to fruit here. It was one of my niggles not having it. Now all is resolved.”
Some retroviruses are able to carry out reverse transcription using special enzymes. Why doesn’t reverse translation happen? Is it possible to induce it in the lab setting? This question was asked by Pranavathiyani Gnanasekar, from India, here on EMBLetc.
To carry out the instructions contained in our genome, the information stored in DNA is first transcribed into a complementary sequence of RNA. That RNA is then translated into a string of amino acids that form the protein.
In some cases, RNA can be turned back into DNA in a process called reverse transcription. This process is used by viruses like HIV: because their genetic material is coded in RNA, they need to “reverse-transcribe” it into DNA to insert it into their host cell and produce viral proteins. Reverse transcription is also used regularly in labs to clone genes and induce the production of specific proteins.
Although at first glance it might seem like a similar process, reverse translation would be a much more complicated affair.
“Reverse transcription represented a revolution in biology from a conceptual point of view – the idea that transcription could ‘go backwards’ and produce DNA from RNA. However, the process is fairly straightforward, because the DNA and RNA code is basically the same. In other words, one ribonucleotide in the RNA (A, G, C and U) is replaced by its equivalent deoxyribonucleotide in the DNA (A, G, C and T). The ‘reverse translation’ of proteins into RNA is an extremely challenging task, since each amino acid is encoded by a triplet of nucleotides – for instance, GUU encodes Valine.
To make the equation even more difficult, more than one triplet can be translated into the same amino acid: Valine can be encoded not only by GUU, but also by GUC, GUA, or GUG. So it’s almost impossible to look at a sequence of amino acids and determine the original RNA sequence that encoded it.
When converting a given amino acid like Valine into nucleotide triplets, the ‘reverse translation’ machinery would have multiple possible outcomes, and could translate the same protein into a completely different RNA sequence each time. Although the ‘protein encryption’ would be conserved, the resulting RNA triplets would be highly variable in sequence, so reverse translation is neither of use to a cellular organism nor a reliable way to determine the RNA sequence of a protein in the lab.”
Alfredo Castello, EMBL Heidelberg
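The degeneracy Castello describes can be illustrated with a few lines of code. The sketch below uses codon counts from the standard genetic code (only a handful of amino acids are included; the function name is our own) to count how many distinct RNA sequences could encode a short peptide:

```python
from math import prod

# Partial RNA codon table (standard genetic code), single-letter amino acid codes.
CODONS = {
    "M": ["AUG"],                                      # Methionine: 1 codon
    "V": ["GUU", "GUC", "GUA", "GUG"],                 # Valine: 4 codons
    "G": ["GGU", "GGC", "GGA", "GGG"],                 # Glycine: 4 codons
    "L": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],   # Leucine: 6 codons
}

def possible_rna_sequences(peptide: str) -> int:
    """Count the distinct RNA sequences that encode the given peptide."""
    return prod(len(CODONS[aa]) for aa in peptide)

print(possible_rna_sequences("MVV"))   # 1 * 4 * 4 = 16
print(possible_rna_sequences("MVLG"))  # 1 * 4 * 6 * 4 = 96
```

The number of candidate RNA sequences grows multiplicatively with peptide length, which is exactly why a hypothetical ‘reverse translation’ could not recover the original message.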
Question 4: How could one build a functional brain?
If one wants to build an artificial human brain in the laboratory (in order to use it later in the treatment of dementia and so on), what are the main problems one must solve? How can we make neurons and neuroglia live in vitro and work together, and then – the million-dollar question – how can we make these neurons bear memory from an actual human being? This question was asked by Anna, from Russia, here on EMBLetc.
Currently it is very difficult to cultivate neural circuits: you need a three-dimensional framework that the neurons can develop in and, most importantly, you need to replicate in vitro the many variables that are involved in the development of the brain. There is research going on at the moment to build the different layers of the brain in the lab. However, the key challenge is to deliver signals to the neuron cultures so the neurons self-organise into brain structures, such as the layers of the cortex. If you want to be very precise and build a realistic nervous system, you have to play with all these signals very accurately; but, since we still don’t know what all those signals are, this is technologically very difficult. When we know all the parameters, we will be able to do it, but it’s a matter of time and, of course, of will.
However, creating a brain that could think and feel emotions is yet another story that would require a few more zeroes at the end of that $1,000,000. That’s a multibillion-dollar question at least: how would you feed experience into that artificial brain? During early development of the mammalian brain, almost all neurons are connected to each other. Then the brain matures by removing some of those connections and acquiring specific functions. It does that through experience. Function and experience both contribute toward building a structure that can store memories, direct the body, and so on… The question then becomes, how do you fake all those experiences in that brain to make it human? That would be very difficult without a body.
In a way, our bodies are windows to our brains – the brain issues commands, the body performs the corresponding actions, and then the peripheral nervous system brings the consequences of these actions back to the brain so that the brain knows what happened and adapts. When the same experience comes up again, the brain already knows its consequences and how to adapt. That’s how children develop, and how animals develop. If the brain doesn’t have a body to do that, it cannot learn. In theory, if you can mimic that body, then you can culture a human brain in vitro. Maybe when we are able to generate all the neuron types in a human brain in vitro, we will also be able to manipulate them to create a mature brain – but we are still far from that…
Hernando Martinez Vergara, EMBL Heidelberg
We asked a scientist at each of EMBL’s sites to answer the question Scott Schorr posed to us on Facebook. Here’s what they had to say:
Unpacking DNA riddles
The double helix structure of DNA was discovered more than 60 years ago, but many questions about DNA remain unanswered. The most fascinating one is how the DNA molecule can function while being compressed so tightly inside the cell’s nucleus. If a strand of DNA were as thick as a fine thread, it would be like packing 20 km of thread into a chicken egg. Furthermore, all the functions in the human body depend on various processes to unfold, duplicate, segregate, couple and decouple this bundle of DNA correctly, so that the right genes are expressed at the right time. Even though many of these mechanisms have been documented and studied, they are so complex that we still don’t know all the details of how they work – as if we were watching a game without knowing the rules. New computational developments, like increased computing power and new software, are needed in order to find and categorise all the “players” involved and fully unravel this enigma.
Single cell sequencing
Progress will continue to be made not only by computational methods, but also by advances in the experimental techniques that develop alongside them. I think that one of the most promising techniques is single-cell sequencing, where we investigate the genomes and transcriptomes of individual cells, rather than relying on data that represents an average across millions of cells. Single-cell techniques will allow us to investigate the heterogeneity within a population, which in turn helps us model accurate networks that describe cell behaviours. These single-cell techniques are gaining in popularity, but the computational and statistical frameworks that are required to analyse the generated data are continuing to be developed and optimised. Single-cell approaches will likely improve data acquisition as well, since each cell can be thought of as a separate experiment. This will greatly increase the amount of data available to construct models of complex cellular networks. Understanding these networks will enable us to make predictions about the behaviour of cells under normal conditions as well as during disease, opening up new avenues for treatment and prevention.
A bigger picture
Rather than look at the evolution of one particular gene or molecule, advances in computing will allow us to effectively look at all of evolution at once. We can in principle do this today, but the problems scale so rapidly that it’s not currently possible to evaluate evolution across large numbers of organisms. There are always evolutionary events that conflict with overall trends – horizontal gene transfer introduces ‘jumping genes’ whose individual gene histories violate the overall species history; the migration of small groups may violate a population’s history; and structural domain rearrangements give rise to ‘mosaic genes’ where some parts of a protein have different origins than others. Accurately representing, modelling and recapturing the full complexity of evolution, where we estimate overall trends but allow for and make full use of exceptions, is computationally hard. Even ambitious evolutionary analysis projects need to restrict themselves to only some of the available genomes and/or filter out the weaker conflicting signals, because running the same algorithms on everything available might take a hundred years or more with today’s computing power. Advances in computing power in the coming years and decades will transform this landscape.
Personalised genomes, personalised medicine
Currently, personalised DNA sequencing is available to anyone, but it’s very expensive – about $10,000 per genome. In the future, when computing power increases and the cost of sequencing goes down, we will all have our entire genome sequenced and will know what kinds of diseases we are likely to develop. That will allow for the development of hyper-individualised diets and pharmaceutical treatments that could potentially prevent the disease or improve the symptoms. In addition, faster computing will lead to better analysis of the links between disease incidence and DNA sequences. Some genetic indicators are very reliable; mutations in some genes (p53, BRCA1) lead to cancer almost 100% of the time. On the other hand, many diseases (like high blood pressure) are caused by the interactions of multiple genes or environmental factors, so it is much harder to predict the likelihood of developing such a disease. I personally believe the environment accounts for about 30–40% of disease development. The rest is in the genes.
Andrea Cerase, postdoc, EMBL Monterotondo
Predicting drug interactions
One of the core concepts that molecular biology tries to explain is the complex interplay between (small) molecules – like drugs and neurotransmitters – and (large) proteins – the drugs’ targets. The nature of this problem is exponential, as each interaction can directly or indirectly influence other interactions, and depends on an organism’s genetics, health status, diet, etc. Current computational methods can predict the most pronounced of the expected interactions, and sometimes even the effects that result from giving a drug to ‘the average human,’ but our computational hardware is unable to simulate all the possible interactions and make these predictions personalised. In the next twenty years I expect that the increase in public data and our understanding of molecular biology will lead to the ability to make more accurate predictions about how a given drug will affect different individuals. This in turn will allow scientists and doctors to make reliable assessments of both the desired effects and other (side) effects before a drug is given to a patient, resulting in better drug dosing and more effective medicine.
Question 2: What is the EMBL logo?
One of the most common questions we are asked has to do with the EMBL logo. What is it supposed to represent? Where did it come from? Why is one of the spots red? The logo was created by Lennart Philipson, the second Director General of EMBL and a beloved and instrumental figure in the Lab’s early development. Philipson passed away in 2011, but fortunately, he revealed the logo’s origin in an interview published a decade ago in the EMBL Annual Report 2003/2004.
“Before coming to EMBL in 1982, I had had a group in Uppsala, Sweden whose focus was adenovirus research. At EMBL I felt I had to embark on a new topic in cell biology; therefore, here at EMBL, my group focused on growth control in mammalian cells. But the adenovirus was still on my mind, and I came back to it when I left for the Skirball Institute in New York in 1993. There we finally cloned and characterised the receptor.
When [head of Human Resources] Konrad Müller asked me for a logo for EMBL, it was natural for me to think of the adenovirus and claim that the 252 hexons represented all the laboratories in Europe, with the red spot being EMBL, now fully integrated in the European map. The location of the red spot was close to its position on the European chart.”
To study the effect of commonly used drugs on bacterial envelopes, EMBL scientists applied a biochemical assay using a colour reaction. The deeper the red, the stronger the disruptive effect of the drug.