Stephan Guttinger Centre for Philosophy of Natural and Social Science, London School of Economics
1. Introduction
2. The history of functional genomics
2.1. The early days
2.2. A complex narrative
2.3. Alternative takes on functional genomics
3. The mature phase of functional genomics
3.1. How to proceed?
3.2. The biochemical approach
3.3. ENCODE as a failure?
4. Transforming the picture of the genome
4.1. Building on the HGP
4.2. From gene to functional element
5. Conclusions: Beyond the genome
Notes
References
1. Introduction
When the completion of the Human Genome Project (HGP) was announced in 2003, scientists had already been working for years on the next steps of genomics. Whilst politicians and other stakeholders still talked of the sequence itself as the ‘blueprint’ that holds the secrets to human health, many in the research community saw sequencing as a mere first step to something much bigger and more powerful, namely, functional genomics. The aim of this new field was to move beyond traditional ‘structural’ genomics and to investigate what role each part of the genome plays. Such data, it was hoped, would open the door to new medical treatments. These would not only include traditional drugs but also direct interventions to re-write the genome. Such targeted modifications have become technically possible in recent years through the development of ‘molecular scissors’, enzymes that can alter the genome of living organisms at specific locations.
Functional genomics is an ambitious project that has triggered excitement and controversy amongst scientists. But equally important, the project has also been deeply transformative. By trying to unravel the secrets of the human genome, functional genomics has changed our understanding of its nature and functioning.
This article introduces the concept of functional genomics, illustrates its transformative power, and discusses the wider consequences it has (or should have) for science and health policy. This analysis will be based on an in-depth look at one of the most important functional genomics initiatives to date: the ENCODE project.
2. The history of functional genomics
As mentioned above, the central idea behind functional genomics is relatively simple: to understand the functioning of genomes. But what exactly does it mean (and take) to understand how a genome works? In what follows I trace the early history of the idea of functional genomics to get a better understanding of the different dimensions of this ‘next-step’ of genomics.
2.1. The early days
The term ‘functional genomics’ made its first appearance in the academic literature in 1995, in a review article titled “Mapping the mouse genome: Current status and future prospects” (Dietrich et al. 1995). It is a shy and brief appearance – the term is only mentioned once, in the very last sentence of the paper. The article assessed the status and future directions of mouse genomics. The authors conclude their review by observing that:
“[…] the mouse will likely provide a crucial resource as efforts begin to turn from the structural genomics of the 20th century to the functional genomics of the 21st century.”
In 1997, the term makes a more prominent appearance in the literature, this time in a review article that was fully dedicated to the emerging field and which appeared in the prestigious journal Science (Hieter and Boguski 1997). The authors highlight not only the meteoric rise of the field in the 1990s but also its confusing diversity:
“‘Functional genomics’ is a term that has taken root in the scientific community. What exactly do people mean when they refer to functional genomics? […] Perusal of the several hundred functional genomics websites that have sprung up over the last 12 months clearly demonstrates that interpretations of the term are diverse and highlights the substantial degree of ‘hype’ that is being used to promote the functional genomics approach, with little data to support it” (Hieter and Boguski 1997, 601).
But the authors also provide a first explicit definition of the term. They write:
“Specifically, functional genomics refers to the development and application of global (genome-wide or system-wide) experimental approaches to assess gene function by making use of the information and reagents provided by structural genomics” (ibid.).
After these first appearances, ‘functional genomics’ quickly became a regular term in the scientific literature. In 1997, only 11 papers contained the term in the title or abstract. From 2004 onwards, each year saw the publication of at least 500 papers on the topic (Figure 1).

Figure 1: Number of articles published with the term ‘functional genomics’ in their title or abstract (data obtained from the PubMed database). Note the significant increase in publications after 2013. It is not clear what triggered this increase, but it could be linked to the completion of the first production phase of ENCODE in 2012 and the new tools and data it provided to the research community.
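Counts like those in Figure 1 can be reproduced programmatically. The following is a minimal sketch using Biopython’s Entrez module to query PubMed year by year; the email address is a placeholder (NCBI requires one), and exact counts will drift as the database is updated.

```python
# Minimal sketch: count PubMed records per year that mention a term
# in the title or abstract. Assumes Biopython is installed.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; required by NCBI policy

def yearly_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` in `year`."""
    handle = Entrez.esearch(
        db="pubmed",
        term=f"{term}[Title/Abstract]",
        mindate=str(year),
        maxdate=str(year),
        datetype="pdat",  # filter by publication date
        retmax=0,         # we only need the count, not the record IDs
    )
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in range(1995, 2020):
    print(year, yearly_count('"functional genomics"', year))
```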
2.2. A complex narrative
The initial appearances of the term ‘functional genomics’ are interesting because they pick up several strands of the functional genomics narrative that are important to this day. The first that stands out is the idea of a ‘new era’ (or ‘the next step’): functional genomics is next-century stuff, it is where the future of the field lies. Functional genomics is a departure from old ways of doing things. Interestingly, Hieter and Boguski already detected a lot of hype mixed into the way functional genomics was being promoted at the time. Such hype had been, and would remain, a companion and driver of genomics for many years (see note 1).
A second important theme is that of traditional genomics as being a ‘mere resource’ for the real thing (i.e. functional genomics): the sequencing and mapping of genomes are only stepping stones to bigger things. They are seen as important but only in the sense that they provide a foundation on which ‘real’ biological insight can be generated, based on functional or mechanistic studies.
A third and connected theme is the distinction between structural and functional genomics. This is a variant of the older and more general distinction between structural and functional biology. There are different takes on what this distinction amounts to but, roughly put, structural biology is taken to be a descriptive enterprise that answers ‘what, when, where’ questions. Functional biology, on the other hand, is supposed to tell researchers how something works. Functional genomics, according to this understanding, is the field that tries to understand the operation(s) of the genome and its different parts, whereas structural genomics provides mere lists of parts (see note 2).
The structural/functional distinction and the idea of traditional genomics as a mere resource are linked to a machine-mechanistic view of biological systems (and genomes in particular). In such a mechanistic view the first task of the researcher is to identify all the parts of a system (structural genomics). In a second step the researcher then figures out what each part is actually doing in the system (functional genomics). The first step therefore creates a mere inventory that might be a necessary condition for research, but which is not sufficient to create any actual insight into the workings of the machine. I return to this picture and the role it plays in debates about functional genomics in section 4.
A fourth important aspect in the above statements is the strong focus on gene function, something that can be observed in functional genomics to this day (see note 3). To many researchers, understanding the genome means understanding what genes are doing. This approach is a consequence of a traditional view of what genomes are, i.e. assemblies of genes that are linearly arranged on the genomic DNA (see note 4). These genes were for a long time seen as the real ‘doers’ of the system, the parts that need to be catalogued and then assessed.
2.3. Alternative takes on functional genomics
Functional genomics, or the idea of it, also played an important role in the HGP, in particular in the planning for the time after the project. Many involved in the HGP never saw the project as just a sequencing effort. They also thought of it as an opportunity to push the development of new technologies and to advance the debate about the broader implications of genomics. Importantly, they also saw it as their responsibility to prepare genomics for the post-HGP phase.
It is in this context that the term ‘functional genomics’ makes an appearance in the writings of the HGP planners, specifically in the last five-year plan (1998–2003) in a section titled “Goal 4 – Technology for functional genomics” (Collins et al. 1998). As the title implies, this section contains an assessment of the new technologies that would be needed to make a large-scale analysis of genome function a reality. What is interesting and different about the way in which HGP researchers looked at the idea of functional genomics is that they approached the genome in a more open and liberal way. There was no narrow focus on gene function when they discussed future experimental work and the technology needed to conduct it. Rather, the HGP researchers defined the goal of functional genomics as the “interpretation of the function of DNA sequence on a genomic scale” (ibid., 686). This implies that there is no privileging of specific parts of the genome; the HGP researchers were interested in any part of the genome, not just genes.
What is also relevant in this context is that Collins and colleagues did not think of the genome as something that simply functions on its own. Instead, they emphasised that genome function results from the interaction of genomes with their environment (ibid., 686). This interconnectedness between the entity of interest (the genome in this case) and its environment is something that will become even more prominent in what I call the mature phase of functional genomics (section 3).
These seemingly subtle differences matter as they point to two different ways of talking about functional genomics within biology: one that operates with the idea of genomes as sets of genes and another that operates with a more open and liberal view of the genome. The emergence and development of this alternative view will be the focus of the remainder of this article. A crucial player in this process, I argue, was the functional genomics project that grew out of the HGP, i.e. ENCODE. In the next section, I introduce this first large-scale attempt at doing functional genomics. In section 4, I then show how the project is changing our understanding of the genome and how its transformative power is connected to its roots in the HGP.
3. The mature phase of functional genomics
The ENCyclopedia Of DNA Elements (ENCODE) is an ongoing research project that was launched in 2003 and is funded by the National Human Genome Research Institute (NHGRI) in the USA. Its key aim is to identify all functional elements in the human genome (ENCODE Project Consortium 2004). Functional elements include, for instance, protein-coding regions or regulatory elements such as ‘enhancers’ or ‘silencers’, elements that stimulate or reduce the expression of neighbouring DNA sequences.
ENCODE was the first large-scale project in functional genomics. Its initial production phase, labelled ‘ENCODE 2’ as it followed a pilot phase (2003–2007), included 422 researchers and led to the simultaneous publication of 30 articles in different journals in 2012 (see note 5). The project continues to this day and is currently in Phase 4.
3.1. How to proceed?
The key challenge that ENCODE was facing from the beginning was methodological in nature. There are two main approaches to studying the functional parts of a system: the first is to isolate the individual parts and to measure their activities and other properties in specialized reporter systems. Researchers, for instance, use in vitro assays in which a sequence element X is isolated from the genome and put in a reporter system to check whether it is capable of affecting a process of interest (e.g. stimulating or inhibiting gene expression; see note 6). The second approach is to delete or inhibit the parts in order to see how this intervention affects the system of interest.
To give an example of this second approach: if we want to know whether a particular component in a car has a function in the steering mechanism we could remove or block the particular component and then check whether the steering of the car still works as it should. Similarly, if researchers want to know whether a sequence X in a genome is, for instance, involved in gene expression they could delete X and check whether expression of a target gene Y or Z has changed.
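In computational terms, this deletion test boils down to comparing a readout, such as the expression level of gene Y, between unmodified samples and samples lacking element X. The sketch below uses entirely hypothetical numbers and a plain t-test as a stand-in for a proper differential-expression analysis.

```python
# Toy version of the interventionist test: does deleting candidate
# element X change the expression of target gene Y? All numbers are
# hypothetical and stand in for replicate expression measurements.
from scipy import stats

expression_wildtype = [10.2, 9.8, 10.5, 10.1]  # element X intact
expression_deletion = [4.1, 3.9, 4.6, 4.3]     # element X deleted

t_stat, p_value = stats.ttest_ind(expression_wildtype, expression_deletion)
if p_value < 0.05:
    print(f"Deleting X changes expression of Y (p = {p_value:.2g}).")
else:
    print(f"No detectable effect of deleting X (p = {p_value:.2g}).")
```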
Both these approaches face complications that make their use less straightforward than it might at first seem. For instance, the presence of back-up mechanisms can mean that deleting a single part in vivo has no detectable effect on the functioning of the system, as a back-up takes over the deleted part’s function. But one of the biggest hurdles for ENCODE researchers was the fact that such functional analysis only works quickly and efficiently if the researchers already have a parts list of the relevant system (the human genome in this case). This, of course, was not the case for ENCODE researchers, whose task it was to build such a map in the first place.
In the context of the ENCODE project this meant that any intervention- or isolation-based assay would be extremely time- and labour-intensive, as scientists would have to test the genome piece-by-piece. There was previous research available that had mapped some elements, such as protein-coding areas and their associated regulatory sequences. And comparative genomics – which was a crucial part of ENCODE from the beginning – could give researchers some guidance as to where important sequences were located (since sequences that are highly conserved across species usually have functional significance). However, the guidance comparative genomics could provide was limited as only 5–15% of the human genome (depending on the analysis method and data used) is conserved. As researchers knew that non-conserved sequences can also be functional, the problem was to find out where these additional functional sequences are and what they are doing. This is a daunting task, given that the human genome contains about 3 billion nucleotides. To systematically delete and/or isolate candidate sequence elements (be it single nucleotides or longer stretches of DNA) was technically and financially not feasible at the time. This led ENCODE researchers to pursue what they called the ‘biochemical’ approach (Kellis et al. 2014).
3.2. The biochemical approach
The basic idea behind the biochemical approach is the following: wherever there is a functional element in the genome, some process will eventually take place there. For instance, if a gene is present then there is a high likelihood that it will be transcribed at some point in the cell’s life cycle. And these processes usually leave traces, such as RNA molecules transcribed from the gene. Rather than going through the whole genome trying to isolate and test specific functional elements, ENCODE researchers therefore went on the hunt for such traces of activity, which they referred to as ‘sites of biochemical activity’. These traces could give researchers an indication of where putative functional elements are placed in the genome.
An example of this approach is the analysis of DNA methylation, a process that consists in the addition of a small chemical tag (a so-called ‘methyl group’) to genomic DNA. DNA methylation is used by the cell to regulate gene expression, and this modification can therefore be found in DNA regions that could be functionally relevant. If researchers find methylation patterns in a particular area of the genomic DNA, they have good reason to assume that the region is relevant for the process of gene expression. In the case of ENCODE, any region that showed methylation was therefore classified as a functional element.
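To illustrate how such data can be turned into candidate regions, here is a toy sketch of the classification step. The positions, read counts, and the 80% threshold are all invented for illustration; real bisulfite-sequencing pipelines use considerably more sophisticated statistical models.

```python
# Toy sketch: flag CpG sites as putatively functional when a large
# fraction of sequencing reads supports methylation. All numbers are
# invented; the 80% cut-off is an arbitrary assumption.

sites = [
    # (genomic position, methylated reads, total reads)
    (1001, 18, 20),
    (1050, 19, 21),
    (1102, 2, 25),
    (1160, 17, 19),
]

def methylation_level(methylated: int, total: int) -> float:
    """Fraction of reads supporting methylation at a single CpG site."""
    return methylated / total if total > 0 else 0.0

flagged = [pos for pos, m, n in sites if methylation_level(m, n) >= 0.8]
print(flagged)  # -> [1001, 1050, 1160]
```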
Another important assay that was used by ENCODE researchers is the so-called ‘DNase I hypersensitivity assay’. This assay makes use of an enzyme (DNase I) that can cut DNA. The activity of DNase I depends on the conformation of the target DNA: if the DNA is wrapped up in proteins and other factors, DNase I cannot gain access to the DNA molecule and its cutting activity is reduced or eliminated. If, however, the target DNA is in a more open state DNase I cutting activity will be higher. This matters because a key hallmark of most regulatory DNA elements is their accessibility: areas of the genome that are being used for the regulation of gene expression are usually in a more open conformation. They will therefore be hypersensitive to DNase I activity. ENCODE researchers used this feature to map (putative) regulatory elements in the human genome by assessing how sensitive to DNase I different parts of the genome are.
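The computational logic of this mapping step can be illustrated with a toy peak-calling routine: stretches of the genome where DNase I cut counts rise well above background are flagged as putatively open, regulatory DNA. This is a minimal sketch over synthetic numbers, not ENCODE’s actual pipeline, which relied on dedicated peak callers with statistical background models.

```python
# Toy sketch: call DNase I hypersensitive regions by thresholding
# per-base cut counts. The signal below is synthetic.

def call_hypersensitive_regions(cut_counts, threshold=10, min_length=3):
    """Return (start, end) intervals where counts stay above threshold."""
    regions, start = [], None
    for pos, count in enumerate(cut_counts):
        if count >= threshold and start is None:
            start = pos                    # open a candidate region
        elif count < threshold and start is not None:
            if pos - start >= min_length:  # keep only sufficiently long runs
                regions.append((start, pos))
            start = None
    if start is not None and len(cut_counts) - start >= min_length:
        regions.append((start, len(cut_counts)))
    return regions

# Low background with one accessible (open) stretch around positions 4-8.
signal = [1, 2, 0, 1, 15, 22, 30, 18, 12, 2, 1, 0, 1]
print(call_hypersensitive_regions(signal))  # -> [(4, 9)]
```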
These are just two examples of about 24 different types of assays that ENCODE researchers used to analyse the number and distribution of putative functional elements in the human genome (see note 7).
3.3. ENCODE as a failure?
Whilst this indirect approach solved some of the methodological challenges ENCODE faced, it did not necessarily help its reputation. The first main production phase of ENCODE was seen by many researchers as limited in its value because it provided a list of potential sites of interest rather than insight into the functioning of the genome. Its output was seen as a ‘mere resource’ for actual functional analysis, which would have to be done using isolationist or interventionist assays. Interestingly, such a negative take on the initial ENCODE output was also adopted by researchers who are part of the latest phase of the project. As one ENCODE 4 project description puts it:
“The [previous] ENCODE projects have revealed millions of putative regulatory elements across more than one hundred cell types and tissues. While these maps have significantly expanded our knowledge of non-coding sequences, there are still large gaps between having descriptive maps of functional elements and understanding the biology of these elements underlying gene regulation.”
These researchers make it clear that the first production phase of ENCODE only generated descriptive data and not actual understanding, implicitly classifying it as structural genomics. This, they claim, is changing now as they make use of new technological tools that have become available in the last few years and which allow them to make interventionist or isolationist analyses at large scale. Examples are the so-called ‘massively parallel reporter assays’ (MPRAs) and new genome editing tools, such as CRISPR-Cas9. These tools are now used to validate and further analyse the putative elements ENCODE originally identified.
It is easy to buy into the narrative that ENCODE is only now entering the ‘true’ functional genomics phase. But at the same time this clashes with the view other researchers have of the project. Some proponents of ENCODE, for instance, don’t see its initial output as a mere list of things that does not provide deeper understanding. Rather, they think of this mapping project as “yielding deep insights into genome function” (Stamatoyannopoulos 2012, 1602), directly contradicting the above-mentioned views of ENCODE 4 researchers.
But why could the vast amount of data produced by the original ENCODE be seen as more than just a list of putative parts? How can this repository of highly contextualised biochemical traces provide functional insight? The answer to this question, I argue, lies in recognising the transformative power of ENCODE. In particular, ENCODE was a crucial part of the research that changed our understanding of the genome’s nature and functioning. This also changed how scientists would approach functional studies. This transformative power of ENCODE becomes clearer if we look closer at its methodology and its links to the HGP.
4. Transforming the picture of the genome
Context matters for any investigation of the genome, even if researchers operate with the traditional gene-centric view of the genome. Scientists have long known, for instance, that certain genes will not be activated in particular cell types or at specific stages of development. This means that the signatures associated with the activity of such parts of the genome will only be detectable if the correct cell type or cell stage is used in the analysis. As a consequence of this, an investigation into the functioning of the genome would have to use different cell lines to cover different activation contexts.
What is interesting about the approach ENCODE pursued is that from the beginning it went much further in its exploration of diverse methods and contexts than what would have been required by the traditional view of the genome. In particular, ENCODE researchers not only used different cell types but they also made use of primary cell lines, cells that are directly extracted from a living organism and which have not been previously cultivated in the laboratory. The reason for this was not to capture more cell type diversity or different developmental stages but to factor in a larger set of contextual factors that could affect the structure and functioning of the genome.
This attention to detail is, I argue, a reflection of the acute awareness ENCODE’s planners had of the importance of context in shaping what the genome looks like and how it behaves. Cell lines that have been propagated in a particular context for an extended period of time develop features that are a response to that particular environment. As a consequence, their genomes don’t look or behave like genomes in a physiological context. Using primary cell lines alongside established cell lines allowed ENCODE researchers to account, to some degree, for such differences.
This broad approach is interesting as it shows that the genome was not just seen as a given entity whose activity is merely triggered by external inputs. Rather, the genome’s makeup and nature was understood to depend on the context it is placed in. To get a better understanding of this inherently dynamic entity it was therefore important to sample as many different contexts as possible.
This brings us back to something that we have already encountered in the last five-year plan of the HGP (section 2.3), where it was highlighted that genome function is a result of the interaction between the genome and its environment. This same awareness of the central importance of the genome’s environment also comes through in the ENCODE project. This, I suggest, is not a coincidence as the roots of the ENCODE project in the HGP – rather than just technological limitations – have shaped ENCODE’s particular approach to functional analysis.
4.1. Building on the HGP
The Human Genome Project was first and foremost a DNA sequencing effort, as well as being a driver of technology development. But it could also be argued that the HGP was the first large-scale project to catalogue functional elements in the genome, namely genes. This focus on genes arose from the traditional view of genomes as sets of genes and from the idea that these genes are the key agents in a cell.
However, as the philosopher and historian of science Evelyn Fox Keller has highlighted, in the 1980s there was a shift in how genes were conceived, triggered in part by the rise of developmental biology as a discipline. This was further pushed forward, she argues, by the work done as part of the HGP itself in the 1990s. This shift was characterized by a transition from talk of ‘gene action’ to talk of ‘gene activation’. This also meant that the locus of control shifted from genes to biochemical processes, such as protein-protein or protein-DNA interactions (Keller 1995).
Already in the early days of the HGP we therefore find a shift from talk of well-defined ‘active entities’ to talk of interactions, networks, communication, feedback loops, and system-level phenomena. It is no longer just genes in the nucleus that matter but the cytoplasm as well, and proteins, and the environment of the cell. It is no longer just one entity that controls a specific process (‘the’ gene). As a consequence, context starts to matter and becomes more central to any investigation into the functioning of genes and organisms; to identify and analyze genes we also have to look at the functioning and the dynamics of the organism.
At the same time, the borders of genes became more complex, in the sense that they became less well-defined and more context-dependent. Keller (2000) remarks that in early molecular biology, the gene was not only seen as a site of causal agency but also as a well-defined single entity with a particular structure and function. This picture of the well-defined gene started to fall apart, in part because of the work of the HGP itself. Whereas the research in the 1980s and 1990s in developmental biology brought about a shift from talk of gene action to gene activation, the HGP undermined the idea of the gene as a well-defined structural unit.
What is important here is that the idea of the gene as a unit turns into something more dynamic and context-dependent (and not just context-sensitive). As Keller noted: “[…] the functional gene may have no fixity at all: its existence is often both transitory and contingent, depending critically on the functional dynamics of the entire organism” (Keller 2000, 71; see note 8).
This shift also had methodological consequences, at least for those who bought into it. It was no longer enough to simply create a map of well-defined entities and the activities they might display in different contexts. In fact, to this day there is no definitive count of how many genes there are in the human genome. Rather than trying to arrive at such a precise count, many researchers shifted their focus to questions of diversity and the context-dependent dynamics of DNA transcription. What emerges from these developments in genomics is a more integrated, multi-factorial approach, one that again emphasizes context and dynamics.
4.2. From gene to functional element
This focus on context and interconnectedness influenced how ENCODE was set up and was further strengthened and expanded by the insights provided by the first main production phase of ENCODE. In particular, the findings of the project challenged the traditional view of modular regulatory elements that are linearly arranged on the genome (Stamatoyannopoulos 2012). What ENCODE showed was that not just genes but also other functional elements are dynamic, relational entities that come to matter in a specific interactive context.
This new view of functional elements also implies that there is little point in trying to develop a definitive library of given elements with their activities. Because of the state-specific nature of many genomic features the whole endeavour of functional genomics obtains an open-endedness that makes the idea of ‘completeness’ questionable; there simply might not be the final catalogue of functional elements that researchers can present as the ultimate output of their research (Stamatoyannopoulos 2012).
5. Conclusions: Beyond the genome
The above analysis shows that we have at least two different narratives at work in functional genomics. In the first, the original ENCODE is seen as a somewhat limited attempt at going beyond structural genomics, hampered by the technical limitations of its time. According to this view, ENCODE is only now developing its true potential by using MPRAs and genome editing tools that allow researchers to test the modules of the genome directly. This picture is informed by a traditional view of the genome as a set of genes that display machine-like functioning.
The other narrative is one that portrays ENCODE as building on and further pushing a transformation in genomics that has been going on for some time. It is a narrative in which context and local differences deeply matter, and which requires a diverse experimental landscape to capture them. There is not simply a given genome with a fixed set of well-defined elements that can be put through its paces in reporter assays. Measuring everything in a local context is required to develop a picture of a dynamic entity that looks more like a process than a ‘thing’. In this second narrative, the original ENCODE is seen as visionary and transformative, rather than limited and of lesser value.
The above analysis also suggests that both these views live side-by-side within ENCODE and functional genomics more generally. More research will be needed to understand the temporal and geographic dynamics of these complex narratives. What is clear, however, is that the traditional view – and the methodological norms that come with it – have become less dominant over the last decade or so. This shift in thinking not only has important consequences for genomics as a science but also for its relation to health and science policy.
One important debate that is affected by the idea of the genome as a highly context-dependent and therefore plastic entity is the debate about genome editing. As mentioned in the introduction, removing or even re-writing specific parts of the genome in a living organism has become a reality in recent years due to the development of so-called ‘molecular scissors’. The editing of genomes with such tools is often presented as a modification of a well-defined set of modules: genes can be exchanged like a mechanic would change the engine on a car to fix it or to enhance its speed. However, the findings from functional genomics (and other fields in what are now often called the ‘postgenomic’ life sciences) suggest that such an approach will not necessarily work when it comes to the genome, an inherently dynamic and context-dependent entity. Whilst changing the engine of a car will not change its chassis or brakes, changing a part of the genome might well change its broader structure and behavior. Removing a part of the genomic DNA could affect how other parts behave and thus change the nature of the genome as an active entity. Such effects can travel far, not just in spatial terms but also in terms of the developmental time of the organism, potentially leading to critical effects on the organism’s health later in life. Editing a plastic and highly context-dependent genome therefore poses unique challenges that have not been fully considered yet (Guttinger 2019).
Clearly, there is still a lot we have to learn about the genome, its dynamics and the effects it has on the body. Functional genomics is the discipline that not only gives us a key to these questions, it is also what transforms our understanding of the very thing (or process) we are looking at.
Notes
- See, for example, Ball (2010) on the hype that surrounded the HGP.
- Note that the question of what ‘function’ means in a biological context has been intensely discussed in philosophy of science. For an overview see Neander (2012) and Garson (2016). I do not touch on these highly technical debates here.
- For examples see here or here
- For a discussion of different definitions of what genomes are, see: Keller (2011) and Guttinger and Dupré (2016).
- The results from ENCODE 2 triggered much controversy within the research community, in particular regarding the question of how much of the genome has to be deemed ‘functional’. For more on the controversy, see: Guttinger and Dupré (2016, Appendix).
- Such in vitro assays often use so-called ‘reporter genes’ to measure whether there is a change in activity. These reporter genes have specific properties that make their detection relatively easy. One example is the green fluorescent protein, which, as its name already implies, lights up green when it is illuminated with light of a specific wavelength.
- For a discussion of the different types of experimental approaches used in ENCODE 2 see: ENCODE Project Consortium (2012) and Kellis et al. (2014).
- Note that the old view has not simply disappeared from science. The idea of the gene as a well-defined unit and as a doer is appealing and has become deeply entrenched in biological discourse; see Keller (2000).
References
Ball, Philip (2010) Bursting the genomics bubble. Nature online, 31st March 2010.
Collins, Francis S., et al. (1998) New Goals for the U.S. Human Genome Project: 1998–2003. Science, volume 282, issue 5389, pages 682–689.
ENCODE Project Consortium (2004) The ENCODE (ENCyclopedia Of DNA Elements) Project. Science, volume 306, issue 5696, pages 636–640.
ENCODE Project Consortium (2012) An integrated encyclopedia of DNA elements in the human genome. Nature, volume 489, issue 7414, pages 57–74.
Garson, Justin (2016) A Critical Overview of Biological Functions. Springer International Publishing.
Guttinger, Stephan (2019) Editing the Reactive Genome: Towards a Postgenomic Ethics of Germline Editing. Journal of Applied Philosophy.
Guttinger, Stephan and John Dupré (2016) Genomics and Postgenomics. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), edited by Edward N. Zalta.
Hieter, Philip and Mark Boguski (1997) Functional Genomics: It’s All How You Read It. Science, volume 278, issue 5338, pages 601–602.
Keller, Evelyn Fox (1995) Refiguring Life: Metaphors of Twentieth-Century Biology. Columbia University Press.
Keller, Evelyn Fox (2000) The Century of the Gene. Harvard University Press.
Keller, Evelyn Fox (2011) Genes, Genomes, and Genomics. Biological Theory, volume 6, number 2, pages 132–140.
Kellis, Manolis, et al. (2014) Defining functional DNA elements in the human genome. Proceedings of the National Academy of Sciences, volume 111, number 17, pages 6131–6138.
Neander, Karen (2012) Biological function. Routledge Encyclopedia of Philosophy.
Nicholson, Daniel J. and John Dupré, editors (2018) Everything Flows: Towards a Processual Philosophy of Biology. Oxford University Press.
Richardson, Sarah S. and Hallam Stevens, editors (2015) Postgenomics: Perspectives on Biology after the Genome. Duke University Press.
Stamatoyannopoulos, John A. (2012) What does our genome encode? Genome Research, volume 22, number 9, pages 1602–1611.
Published online: 2nd August 2019
Lead reviewer: James Lowe
Also participated in review process: Miguel García-Sancho and Siddharthiya Pillay
Please cite as: Guttinger, Stephan (2019) Beyond the genome: the transformative power of functional genomics. Genomics in Context, edited by James Lowe, published 2nd August 2019.