Genomic Selection

James Lowe, University of Edinburgh
2. Origins of selective breeding
3. Development of quantitative genetics for livestock selective breeding
4. ‘Conventional’ selective breeding in the second half of the 20ᵗʰ century
5. Enter genomics: genetic markers and marker-assisted selection
6. The development of genomic selection
7. Implications of genomic selection
8. Closing reflection
Sources and further reading
Genomic selection is a kind of genomic prediction that has been developed to improve the processes and outcomes of selective breeding in agriculture. In genomic selection, data on many thousands of detectable variants – markers – in the genome are used to identify which animals or crop plants to incorporate in programmes of selective breeding.
The methods of genomic selection were pioneered by quantitative geneticists, building on a tradition of genetics research directed towards agricultural concerns. In this article, focusing mainly on livestock breeding, I outline the historical origins and context of genomic selection, with particular reference to the wider history of selective breeding and the contributions made by quantitative genetics. I then detail the origins of genomic selection as something that was initially an extension of the logic of ‘marker-assisted selection’, an approach devised in the 1990s as genomic approaches were promoted. Once the principles and practice of genomic selection were worked out further, however, it became quite distinct from previous approaches. I summarise how it operates and its implications, comparing it to other genomic prediction approaches such as Genome-Wide Association Studies.
2. Origins of selective breeding
Selective breeding is the human-guided reproduction of particular animals and plants to direct change over the course of generations, such as improving milk yields in dairy cattle. The first kind of selective breeding was the domestication of animals and plants themselves, estimated to have originally taken place from 10,000 to 4,500 years ago.
Initially, this took the form of mass selection, in which a farmer bred the individuals they judged to have the best set of attributes. For crops, this meant selecting seeds for sowing the next generation. For animals, it involved selecting which individuals were kept to produce the next generation of meat, egg, fur and milk producers. The criteria by which farmers selected would not have been based on a single attribute, but on a collection of them: disease resistance, for instance, as well as ease of picking or handling and productivity. For animals, wool production might have mattered as much as meat.
Another way to selectively breed animals arose in 18ᵗʰ-century England: progeny testing. This was pioneered by Robert Bakewell (1725–1795), who used it to found a systematic national programme of livestock improvement, creating new breeds of sheep and hiring out breeding bulls to farmers. Progeny testing was based on the principle that the breeder should pick the animals to breed not on the basis of their own attributes, but on the attributes of their offspring. This made sense, as improving the herd or flock over generations was the priority, and the animals with the best attributes might not themselves give rise to offspring with improved characteristics.
Measuring the offspring of potential breeding bulls and rams was therefore a key feature of Bakewell’s breeding programmes. These programmes also promoted inbreeding – the mating of closely-related animals – as a motor of rapid improvement in the attributes of whole populations of animals over generations. In the 19ᵗʰ century, systematic planned breeding gave rise to the familiar breeds of livestock farming, the breeding of bizarre varieties of ‘fancy’ pigeons, and the development of thoroughbred horses based on imports of Arabian horses, among other developments.
3. Development of quantitative genetics for livestock selective breeding
The next stage of the story is the advent of a new scientific discipline in the early 20ᵗʰ century: genetics. There were two early rival traditions, the Mendelians and the biometricians, though the contemporary significance of this rivalry and the supposed ‘victory’ of the Mendelians has been qualified by historians.
Biometricians such as Francis Galton (1822–1911) and Karl Pearson (1857–1936) were interested in the inheritance of continuous traits, such as weight or height. The Mendelians drew inspiration from the re-discovery of the work of the naturalist monk Gregor Mendel (1822–1884) in 1900. Mendel had been interested in hybrid plants. To explore the phenomenon of hybridity, he conducted experiments crossing (mating) pea plants with different characteristics, for instance whether the skins of the peas were wrinkled or smooth; yellow or green. Generally ignored in his own lifetime, upon ‘re-discovery’ of his work by several scientists more-or-less simultaneously, his findings were interpreted as laws of heredity concerning the transmission of discrete factors associated with traits. These individual factors became known as genes.
These two approaches to the study of inheritance were held to be contradictory, until a synthesis was effected that has been chiefly attributed to R. A. Fisher (1890–1962), J. B. S. Haldane (1892–1964) and Sewall Wright (1889–1988). The key feature of this synthesis was to explain continuous traits in terms of discrete Mendelian factors. Rather than one or a few genes strongly determining the nature of a discrete trait, multitudes of genes – each contributing in minute ways to the form and function of organisms, their phenotype – were invoked to explain how traits varied continuously in a population. Fisher, Haldane and Wright each pursued different versions of what became known as population genetics. Particularly important for our story was Fisher’s development of the infinitesimal model – which posited an infinite number of positions in the genome (loci) each contributing infinitesimally-small effects to the phenotype – and Wright’s work on the effects of (natural and artificial) selection in small populations.
These developments were drawn upon by Jay Lush (1896–1982), a geneticist working at Iowa State University, who was interested in applying the latest insights from population genetics towards animal breeding. One of his key contributions was the formulation of the ‘Breeder’s equation’:

R = h² × S

where R is the response to selection, h² is the heritability of the trait, and S is the selection differential.
Spelt out, the breeder’s equation meant that the response to selection was dependent on the heritability of the trait and the ‘selection differential’. Heritability is a statistical measure of the extent to which variation in a trait is accounted for by genetic variation in a given population. The selection differential is a function of selection intensity, selection accuracy and generation intervals. Selection intensity is the selectivity of selective breeding: breeding 100 animals from a herd of 10,000 is more intense than breeding 1,000 from that same population. Selection accuracy is a measure of how well those 100 or 1,000 animals are selected for breeding according to the goals of the breeder. The higher the selection accuracy and intensity, and the heritability of a trait, and the shorter the generation interval, the faster the response to selection will be. For example, microorganisms have a very short generation interval, reproducing many times a day under certain circumstances, which is why they can rapidly evolve resistance to antibiotics, compared to the more leisurely pace of adaptation in plants and animals that produce offspring after much longer intervals.
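The arithmetic of these relationships can be illustrated with a small sketch. All the numbers below are entirely hypothetical, chosen only to show how heritability, the selection differential and the generation interval interact:

```python
# Illustrative arithmetic for the breeder's equation, R = h^2 * S.
# Every value here is invented for demonstration purposes only.

h2 = 0.3    # heritability of the trait (e.g. of milk yield)
S = 500.0   # selection differential: selected parents average 500 kg
            # above the population mean

R = h2 * S  # expected response: mean gain in offspring, per generation
print(R)    # 150.0 kg expected gain per generation

# The response per unit of time also depends on the generation interval L:
# a shorter interval converts the same per-generation gain into more
# gain per year.
L_cattle = 5.0    # hypothetical years per generation
L_chicken = 1.0
print(R / L_cattle)   # 30.0 kg per year
print(R / L_chicken)  # 150.0 kg per year
```

This is why, as discussed below, reducing the generation interval is such a powerful lever for breeders.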
These insights in the mid-20ᵗʰ century inspired programmes of selective breeding that included inbreeding within populations to improve traits quantitatively and crossbreeding to introduce improved traits into inbred populations. This also enabled breeders to exploit heterosis, in which the average performance of the offspring exceeds the average of parents, a phenomenon sometimes called ‘hybrid vigour’.
The aim of the genetics pursued by scientists such as Lush, and by his followers today, has been to improve the efficiency of the work of breeders: to increase accuracy, speed and therefore response to selection, in the desired direction. This meant a move beyond the visual inspection of animals to understanding and using the underlying genetics to produce more accurate Estimated Breeding Values (EBVs).
4. ‘Conventional’ selective breeding in the second half of the 20ᵗʰ century
In the second half of the 20ᵗʰ century, we see the development of genetically-informed programmes of selective breeding, incorporating methods such as inbreeding, crossing and introgression: the introduction of a gene or genes from one population to another by way of a carefully-designed breeding programme. The exact form programmes took depended in part on the structure of the farming sector, the models of breeding operations (e.g. cooperatives, national schemes, and private companies), the biology of the animals, technical developments (such as artificial insemination), national policies and research infrastructures, and interactions between publicly-funded animal genetics research and livestock breeding industries.
Key to all schemes have been the development and application of quantitative genetics, and the collection of performance-related data. Performance testing and the collection, analysis and implementation of data on traits could be used to construct selection indexes in which traits could be weighted differently according to distinct breeding goals or contexts of the production of the animals. One important development in the 20ᵗʰ-century was the advent of Best Linear Unbiased Prediction (BLUP), a statistical technique that separates variance due to environment and variance due to genetics using data from relatives, to enable breeders to identify the genetic potential of animals.
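To give a flavour of what BLUP involves, here is a minimal sketch of an ‘animal model’ solved via Henderson’s mixed model equations. The pedigree, phenotypic records and variance components are all invented for illustration; real national evaluations involve millions of records and far more elaborate models:

```python
import numpy as np

# Toy animal-model BLUP via Henderson's mixed model equations.
# All data here are hypothetical.

# Numerator relationship matrix A for three animals: animals 2 and 3
# are half-sibs sharing animal 1 as their sire.
A = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.25],
              [0.5, 0.25, 1.0]])

y = np.array([10.0, 12.0, 9.0])   # one phenotypic record per animal
X = np.ones((3, 1))               # fixed effect: overall mean
Z = np.eye(3)                     # maps records to animals

h2 = 0.4                          # assumed heritability
lam = (1 - h2) / h2               # variance ratio sigma_e^2 / sigma_a^2

# Henderson's mixed model equations: information from relatives enters
# through the inverse of the relationship matrix A.
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * np.linalg.inv(A)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)

mean_hat = sol[0]   # estimated fixed effect (herd mean)
ebv = sol[1:]       # BLUP estimated breeding values, shrunk towards zero
print(mean_hat, ebv)
```

The EBVs are shrunken deviations from the estimated mean, blended across relatives: the animal with the best record still ranks highest, but its relatives’ records pull each estimate towards a family average.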
This ‘conventional breeding’ using the insights of quantitative genetics has had a considerable impact. In spite of the name, conventional breeding has not been a static approach. It has been based on constant improvements to the definition and measurement of traits, quantitative methods, and the design of breeding programmes. This has been enabled by innovation in quantitative and statistical methods and theory, as well as the development of new technologies. Such technologies range from artificial insemination, which enables the genetics of animals with high EBVs to spread across the world, to electronic identification tags and other instances of agricultural engineering that enable the monitoring of multiple animals, for instance tracking their behaviour and feed intake.
As a result of conventional breeding, considerable intended improvements have been obtained, such as those for milk yield in dairy cattle, egg production in layer hens, growth rate in broiler chickens, and feed efficiency in pigs. Carbon emissions per unit of production have declined. Conventional breeding has also caused some problems. Extreme selectivity, exacerbated in cattle by the widespread use of artificial insemination, has led to a decline in effective population size and genetic diversity. Cow fertility has declined as milk production has increased. This is being ameliorated by breeders taking a wider array of traits, manifesting across the whole lifetime of the cow, into consideration when calculating EBVs. The selection of pigs for a particular shape and musculature led to the development and spread of Porcine Stress Syndrome (PSS), caused by a mutation in the gene ryr1, which was closely linked to genes being selected for through those breeding goals. Even intended effects of breeding have been criticised, for example the changes to broiler chickens, on welfare grounds.
Some negative effects of conventional breeding cannot necessarily be separated from the overall political economy of the livestock and food industries. While advances in conventional breeding may have contributed to the creation of so-called ‘factory farming’ and increasing concentration and vertical integration (ownership of different parts of the supply chain by the same company, such as a fast-food company owning cattle farms) of food production and supply, these have also been shaped by forces largely independent of the breeding enterprise.
5. Enter genomics: genetic markers and marker-assisted selection
In the 1980s, new techniques enabled researchers to identify increasing numbers of genes and different versions of genes that were implicated in phenotypic variation. This raised the prospect of being able to identify the particular versions of genes involved in human disease, with diagnostic and therapeutic implications. Indeed, the particular variants – mutations – responsible for hereditary diseases such as cystic fibrosis and Huntington’s disease were found in 1989 and 1993 respectively. The potential benefits of identifying genetic variants associated with traits of interest to farmers and breeders were also articulated – this information could be used to make breeding more effective, or even to genetically modify animals and plants. Researchers, with the support of industry, therefore pursued the mapping of genomes.
Before genome sequencing became practical and affordable, this took the form of identifying and mapping genetic markers, detectable stretches of DNA sequence that differ between individuals. Using a variety of techniques, different kinds of these markers could be found and their position in the chromosomes worked out. While these markers were usually not genes themselves, the idea at the outset of the 1990s was that if enough of them could be mapped, this would aid the process of identifying and narrowing down the location of particular genes. However, it soon became clear that the markers could be of use in and of themselves. One of the first applications of the increased ability to identify and map genetic markers was the identification of a susceptibility marker for PSS, enabling its elimination from herds. Others followed throughout the 1990s, often deriving from the mapping and technical development of projects such as the international Pig Gene Mapping Project (PiGMaP), as well as US initiatives funded by the US Department of Agriculture. PiGMaP was funded by the European Commission from 1991 to 1996, and represented an intersection of molecular methods with quantitative methods deriving from animal breeding.
This led to the development of an approach known as Marker-Assisted Selection (MAS), which used – and encouraged – the development of increasingly dense maps of markers. The idea was to use a set of markers – each of which was found to be linked to variation in phenotypic traits of interest to breeders – in selection programmes. The extent of the success of MAS as a breeding strategy is difficult to gauge. Not all companies placed it at the centre of their breeding approach, and quantitative approaches still dominated.
One of the reasons for this was the increasing belief in the validity of Fisher’s infinitesimal model. It posits that for any quantitatively varying trait, variance can be explained with reference to an infinite number of loci, each with infinitesimal effects. This was taken to be an operating assumption rather than a precise description of reality. Nevertheless, it became clear that there were too few genes with sufficiently large effects on phenotypic variation – to which markers could be linked – for MAS to prove a successful strategy for the industry, in spite of some successful cases.
6. The development of genomic selection
MAS nevertheless retained its promise. To fulfil it, two main conceptual leaps took place. The first aimed to extend the principle of MAS in order to make it effective. The second, enabled by the new availability of masses of genomic data on the animals of interest, extended it so far as to transform it into something different, something called genomic selection.
Oddly, the first did not depend primarily on genomics as a source of inspiration. It is now difficult to imagine a time when genomics was not the future of medicine, agriculture, or many other things. Yet, in two periods, reproductive or embryological approaches were conceived to be the future of innovation in breeding practices. Developments in artificial insemination and embryo transfer fostered the promise of these approaches in the 1970s. The advent of cloning of adult mammals heralded by the birth of Dolly the sheep reignited this in the late-1990s.
The birth of Dolly prompted two researchers at Dolly’s birthplace, the Roslin Institute, to consider the implications of current and projected embryological methods for breeding practices. Chris Haley and Peter Visscher were both quantitative geneticists, though, not embryologists. They argued that the rapid development of embryological methods would soon enable a ‘whizzogenetics’ – fast genetics, in other words. Like a previous scheme for a ‘velogenetics’ articulated in 1991 by livestock geneticists Michel Georges and Joseph M. Massey, the idea was to test for the presence or absence of certain desired gene-linked markers – that is, to genotype – oocytes from female animals. Oocytes are cells in ovaries that can undergo meiosis, division of the cell resulting in daughter cells each with half the chromosome complement of the parent cell. The aim was to reduce the generation interval of selection, which would in turn increase the selection differential, making breeding more effective.
However, embryological methods subsequently advanced much more slowly than predicted. It was genomics that was becoming ‘whizzo’. Genome sequencing had been getting cheaper and faster to do from the 1980s onwards. Though this trend was not confined to the large specialist genome sequencing centres that were established in the 1990s, these contributed to the trend, especially from the mid-1990s onwards. These centres contributed to the sequencing of the genomes of cattle (Baylor College of Medicine), pig (Sanger Institute), and chicken (Washington University) in the early-to-mid 2000s, to name three relevant examples.
Before the data from these projects became available, the second conceptual extension of MAS was made by Theo Meuwissen, Ben Hayes and Michael Goddard: genomic selection. The basic idea behind genomic selection is to use many more genetic markers across the genome than the dozens used in MAS. This sounds deceptively simple, but it required simulation experiments conducted by Meuwissen, Hayes and Goddard and others to demonstrate it in principle, and considerable further research and development to enable it to work in practice. Contained in it, though, was a significant conceptual shift. By using many more markers – in the thousands or tens of thousands – it would be more likely that markers would be located close enough to genes that are actually involved in processes affecting phenotypic outcomes, i.e. measurable changes to the organism. What this means is that if one can detect (this is called genotyping) a sufficiently large number of markers spread across the genome, one does not need to know what any marker does, or whether it is near any genes. Raw statistical power is all that is required.
There are two main factors that enabled this approach to be developed for livestock breeding. One is that the purpose is to improve the effectiveness of selection acting on populations. This is quite unlike, for example, human medicine, where it is important that particular disease-related genes and their variants be identified so that the processes underlying normal function and disease can be unpicked to give clues to possible therapies. There, one is aiming to identify measures that will help individuals with particular genetics and physiology, even if some aspects of the process by which this is achieved (e.g. clinical trials or public health systems) operate at the population level. For livestock and poultry, the main aim is to make sure the next generation has better phenotypic measurements than the prior one. If, in applying genomic selection to a herd or flock, it does not work out for a few individuals, this only matters to the extent that they contribute to the overall results for the population.
As genomic selection aims to improve the effectiveness of selection in populations, markers only need to be predictive; they do not need to be the sequences actually involved in the biological processes underlying phenotypic variation. Genomic selection can therefore be quite different to Genome-Wide Association Studies (GWAS), which statistically relate data on the presence of particular markers to variation in traits, to try to identify causative loci (e.g. genes). GWAS takes a different statistical approach to genomic selection. When researchers are trying to find genes, they need to make sure that they avoid false positives, i.e. something that looks like it has some effect on phenotypic variation, but actually does not. To avoid these, researchers apply a threshold (a p value) for identifying valid effects. There is no need for such a threshold in genomic selection. Rather than something either exceeding or falling short of the threshold, a range of effects of different magnitudes are built into the models. It is still possible to get false positives, though, and a plethora of tiny effects that might in combination add up to something phenotypically measurable can be presumed to have zero effect. Models can be tweaked to account for these. As previously mentioned, though, in selective breeding it is less problematic if these give rise to incorrect predictions about an individual, as long as the approach works overall for the population.
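The contrast between the two statistical approaches can be sketched with simulated data. In the sketch below, ridge regression stands in for the shrinkage-based models used in genomic selection, and a per-marker test with a strict cutoff stands in for a GWAS; every number is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated contrast: thresholding individual markers (GWAS-style) versus
# jointly shrinking all marker effects (genomic-selection-style).
n, p = 200, 1000
G = rng.binomial(2, 0.5, size=(n, p)).astype(float)   # genotypes coded 0/1/2
true_effects = rng.normal(0, 0.05, size=p)            # many tiny true effects
y = G @ true_effects + rng.normal(0, 1.0, size=n)     # phenotypes

Gc = G - G.mean(axis=0)                               # centred genotypes
yc = y - y.mean()                                     # centred phenotypes

# GWAS-style: test each marker on its own; |t| > 4.1 is roughly a
# Bonferroni-corrected significance cutoff for 1,000 tests at this n.
r = (Gc * yc[:, None]).sum(axis=0) / np.sqrt(
    (Gc ** 2).sum(axis=0) * (yc ** 2).sum())          # per-marker correlation
t = r * np.sqrt((n - 2) / (1 - r ** 2))               # per-marker t statistic
hits = int((np.abs(t) > 4.1).sum())
print(hits)  # few or no markers pass individually, despite real signal

# Genomic-selection-style: keep every marker and shrink all effects jointly
# (ridge regression), so tiny effects are damped rather than zeroed out.
lam = 100.0
beta = np.linalg.solve(Gc.T @ Gc + lam * np.eye(p), Gc.T @ yc)
corr = float(np.corrcoef(Gc @ beta, yc)[0, 1])
print(corr)  # the combined model captures the signal the tests miss
```

Because each effect is tiny, almost nothing survives the threshold; yet the joint, shrinkage-based model recovers substantial predictive signal from exactly the same data.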
The other factor enabling the development of genomic selection was advances in genomics. These produced masses of DNA sequence data, and stimulated developments in bioinformatics and statistics. Genomic selection (like GWAS) makes use of Single Nucleotide Polymorphisms (SNPs) – single-nucleotide variants in the genome that can be correlated with quantitative changes in traits of interest. Sequencing projects therefore enabled the development of microarrays, a technology that allows a plethora of SNPs to be genotyped: for this reason they are also known as ‘SNP chips’. These SNP chips allow the presence or absence of many thousands of markers to be tested. They are produced by companies such as Illumina and Affymetrix, often in close collaboration with a community of researchers on a given species who select the SNPs to be included on them.
Genotyping data generated from the SNP chips can be associated with trait data. In genomic selection, this is done with a reference population of animals with rigorously documented phenotypic data, to produce a prediction model. This model is then used when a target population for selection is genotyped, to calculate genomic Estimated Breeding Values (gEBVs) for all the members of the population. These gEBVs are then used to identify selection candidates, without needing to collect phenotypic data on them or their progeny. All of this requires considerable resources and investment to execute. Many thousands of animals are needed in the reference population, as well as the means by which to genotype and accurately phenotype the animals in this and the target population. The data must then be integrated and analysed using appropriate statistical and computational tools, informed by the range of traits identified for selection purposes, according to defined breeding goals.
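A minimal sketch of this pipeline, with simulated genotypes and phenotypes and ridge regression (SNP-BLUP) standing in for the prediction models actually used in practice, might look as follows; every name and number is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genomic selection pipeline: fit marker effects in a genotyped
# and phenotyped reference population, then rank genotyped-only selection
# candidates by their genomic estimated breeding values (gEBVs).
n_ref, n_target, p = 500, 200, 2000
effects = rng.normal(0, 0.03, size=p)   # many small true marker effects

def simulate_genotypes(n):
    # SNP genotypes coded as 0/1/2 copies of one allele
    return rng.binomial(2, 0.5, size=(n, p)).astype(float)

G_ref = simulate_genotypes(n_ref)
y_ref = G_ref @ effects + rng.normal(0, 1.0, size=n_ref)  # phenotype records

# Train the prediction model: ridge regression shrinks all marker
# effects jointly towards zero.
mu = G_ref.mean(axis=0)
Gc = G_ref - mu
yc = y_ref - y_ref.mean()
lam = 200.0
beta_hat = np.linalg.solve(Gc.T @ Gc + lam * np.eye(p), Gc.T @ yc)

# Apply the model to a target population that has genotypes but no
# phenotype records of its own.
G_target = simulate_genotypes(n_target)
gebv = (G_target - mu) @ beta_hat

# Select the top 10% by gEBV as parents of the next generation.
top = np.argsort(gebv)[-n_target // 10:]
true_bv = G_target @ effects   # true breeding values, known only in simulation
print(true_bv[top].mean(), true_bv.mean())
```

In the simulation the true breeding values are known, so one can confirm that the animals selected on gEBV alone have better genetics, on average, than the population as a whole; in a real programme this is exactly what cannot be observed directly, which is why the prediction model matters.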
Just as conventional breeding was never a fixed approach or methodology, genomic selection is a dynamic field with regular new developments. It is not possible to recite them all in an article such as this, but a selection gives an indication of the breadth: reducing the cost of genotyping by imputation methods that use low-coverage DNA sequencing of large populations of animals combined with knowledge of pedigrees to fill in the genomic gaps; new ways of conceiving and measuring phenotypic traits, for instance using indicative molecular biomarkers rather than gross phenotypic measurements like fat depth or milk yield; changing selection indexes; extending genomic selection to new areas such as crop breeding; and combining genomic selection with genome editing to introduce new genomic variants into populations.
7. Implications of genomic selection
Genomic selection has been shown to improve the selection differential by enhancing all three elements on the right-hand side of the breeder’s equation: it provides more accurate EBVs, enables greater selection intensity and can reduce the generation time.
Shorter generation time is particularly significant in dairy cattle breeding. Rather than having to wait to measure the milk yield of the daughters of bulls before the latter can be picked as selection candidates, for example, the embryos can be tested, and breeding bulls picked much sooner. Due to this, to the existing international infrastructure for genetic evaluation of bulls (e.g. through Interbull), and to the greater ease of artificial insemination in cattle, genomic selection has had its greatest impact in dairy cattle so far. This illustrates how an approach with general applicability may be implemented differently, with varying outcomes across different species of livestock and poultry, due to existing infrastructure and technologies, the structure of the breeding industry and its relationship to producers, and the economics and practicality of the breeding enterprise for different organisms.
The advent of genomic selection has in turn helped to transform some of the factors influencing its reception and impact. For instance, for pigs, the extent of investment required to implement genomic selection means that breeding companies have had to make a decision whether to do so, or to sell up to or merge with those who will. This has exacerbated the tendency towards concentration already existent in the sector. Indeed, becoming a larger and more integrated breeder offers distinct advantages in the context of genomic selection. A prediction model is more effective if it is based on more animals and phenotypic data, entrenching the existing advantage of larger breeding companies.
Genomic selection also has implications for the competitiveness of livestock breeds, and for genetic diversity. Genomic evaluations of a target population are valid to the extent that it genetically resembles the reference population; in other words, the two need to be composed of individuals of the same breed or sub-breed. This means that more prevalent breeds are more likely to have larger quantities of data generated on them, to have prediction models for use on same-breed target populations, and to be able to construct larger reference populations, and so to have more accurate prediction models. As commercial demand for data from cross-breed reference populations that could serve multiple breeds is limited, the implication is that the already-entrenched advantages of particular breeds will be advanced further, extending the domination of particular breeds (e.g. ‘Holsteinisation’ in dairy cattle) already produced by conventional breeding. Publicly-funded research is underway to compensate for this, for example using functional genomics approaches that can be implemented in smaller breeds.
Genomic selection was supposed to reduce inbreeding and the concomitant loss of genetic diversity, as it no longer relied on pedigrees. However, measurements of inbreeding indicated that when genomic selection was introduced, inbreeding actually increased, as fewer animals were selected for breeding. Furthermore, where the generation time can be reduced, even if the rate of inbreeding stays the same per generation, inbreeding will accumulate faster per year, as more generations elapse in the same time. The increase in the selection differential enabled by genomic selection therefore inadvertently encouraged inbreeding and reduced genetic diversity. There are interventions that can be used to ameliorate or reverse this trend, for instance including a wider range of different traits in selection indexes, or changes in the design of breeding programmes.
8. Closing reflection
I close with a reflection on the wildly different reception of genomic selection by scholars in the humanities and social sciences, compared with the attention given to genome editing. Unlike genome editing, which has so far had limited real-world impact outside of the laboratory, genomic selection has already resulted in significant genetic change in livestock species. This has ramifications for the future shaping of livestock species and diversity, the breeding industry and its relationship to producers and academic research, and so for food security and sustainability. So why is there so little attention to it from scholars or bodies that deliberate on the implications of new technologies? I would suggest that scholars and commentators outside of the natural sciences tend to direct more attention towards developments more closely associated with molecular biology than with other parts of the life sciences.
I can only speculate as to the reasons for this. One may be the genealogy of much of the scholarly work dealing with new techniques and technologies developed in the life sciences and applied in the real world. These genealogies owe their origins to debates or demands within the natural sciences. Key wellsprings were the fresh impetus given to bioethics by the conscious stimulation by biochemists and geneticists of a debate over recombinant DNA technologies soon after their invention, and the ‘Ethical, Legal and Social Implications of genomics’ programmes of humanities and social science research supported by the large-scale genome sequencing funders from the 1990s onwards. Together with social controversies such as those over genetically-modified crops, these origins of scholarly engagement with genomics have framed the genome primarily as a molecular object, distinct from the more abstract renderings of it favoured by the quantitative genetics tradition. It is additionally possible that many scholars are more comfortable with controversies requiring some engagement with a very concrete sort of biology that involves easily-graspable metaphors of scissors, cutting and pasting, than the mathematical world of quantitative genetics and its progeny. This is unlikely to change, but a greater attention to new developments of potentially huge significance such as genomic selection is still possible even against that background.
Sources and further reading:
Alan L. Archibald (1986) A molecular genetic approach to the porcine stress syndrome. In: Evaluation and Control of Meat Quality in Pigs, edited by P. V. Tarrant, G. Eikelenboom and G. Moni, pages 343–357. Dordrecht: Martinus Nijhoff Publishers.
Jack C. M. Dekkers and Julius H. J. van der Werf (2007) Strategies, limitations and opportunities for marker-assisted selection in livestock. In: Marker-assisted selection: Current status and future perspectives in crops, livestock, forestry and fish, edited by Elcio P. Guimarães, John Ruane, Beate D. Scherf, Andrea Sonnino, and James D. Dargie, pages 167–184. Food and Agriculture Organization of the United Nations.
Margaret E. Derry (2015) Masterminding Nature: The Breeding of Animals, 1750–2010. University of Toronto Press.
Harmen P. Doekes, Roel F. Veerkamp, Piter Bijma, Sipke J. Hiemstra and Jack J. Windig (2018) Trends in genome-wide and region-specific genetic diversity in the Dutch-Flemish Holstein–Friesian breeding program from 1986 to 2015. Genetics Selection Evolution, Volume 50, article 15.
Mehrnush Forutan, Saeid Ansari Mahyari, Christine Baes, Nina Melzer, Flavio Schramm Schenkel and Mehdi Sargolzaei (2018) Inbreeding and runs of homozygosity before and after genomic selection in North American Holstein cattle. BMC Genomics, Volume 19, article 98.
Jeffrey Gulcher (2012) Microsatellite Markers for Linkage and Association Studies. Cold Spring Harbor Protocols: doi:10.1101/pdb.top068510
Chris Haley and Peter M. Visscher (1998) Strategies to Utilize Marker-Quantitative Trait Loci Associations. Journal of Dairy Science, Volume 81 Number 2, pages 85–97.
William G. Hill (2010) Understanding and using quantitative genetic variation. Philosophical Transactions of the Royal Society B, Volume 365, pages 73–85.
William G. Hill (2014) Applications of Population Genetics to Animal Breeding, from Wright, Fisher and Lush to Genomic Prediction. Genetics, Volume 196, pages 1–16.
David A. Hume, C. Bruce A. Whitelaw and Alan L. Archibald (2011) The future of animal production: improving productivity and sustainability. Journal of Agricultural Science, Volume 149 Supplement 1, pages 9–16.
Noelia Ibáñez-Escriche, Selma Forni, Jose Luis Noguera and Luis Varona (2014) Genomic information in pig breeding: Science meets industry needs. Livestock Science, Volume 166, pages 94–100.
Egbert F. Knol, Bjarne Nielsen and Pieter W. Knap (2016) Genomic selection in commercial pig breeding. Animal Frontiers, Volume 6 Number 1, pages 15–22.
Russell Lande and Robin Thompson (1990) Efficiency of Marker-Assisted Selection in the Improvement of Quantitative Traits. Genetics, Volume 124, pages 743–756.
James W. E. Lowe and Ann Bruce (2019) Genetics without genes? The centrality of genetic markers in livestock genetics and genomics. History and Philosophy of the Life Sciences, Volume 41, article 50.
Theo Meuwissen (2007) Genomic selection: marker assisted selection on a genome wide scale. Journal of Animal Breeding and Genetics, Volume 124, pages 321–322.
Theo H. E. Meuwissen, Ben J. Hayes and Michael E. Goddard (2001) Prediction of Total Genetic Value Using Genome-Wide Dense Marker Maps. Genetics, Volume 157, pages 1819–1829.
Theo Meuwissen, Ben Hayes and Mike Goddard (2016) Genomic selection: A paradigm shift in animal breeding. Animal Frontiers, Volume 6, Number 1, pages 6–14
William B. Provine (2001) The Origins of Theoretical Population Genetics. The University of Chicago Press.
Hans-Jörg Rheinberger and Staffan Müller-Wille; translated by Adam Bostanci (2017) The Gene: From Genetics to Postgenomics. The University of Chicago Press.
Roger Ros-Freixedes, Andrew Whalen, Ching-Yi Chen, Gregor Gorjanc, William O. Herring, Alan J. Mileham and John M. Hickey (2020) Accuracy of whole-genome sequence imputation using hybrid peeling in large pedigreed livestock populations. Genetics Selection Evolution, Volume 52, article 17.
Elizabeth S. Russell (1989) Sewall Wright’s Contributions to Physiological Genetics and to Inbreeding Theory and Practice. Annual Review of Genetics, Volume 23, pages 1–18.
Hsin-Yuan Tsai, Oswald Matika, Stefan McKinnon Edwards, Roberto Antolín–Sánchez, Alastair Hamilton, Derrick R. Guy, Alan E. Tinch, Karim Gharbi, Michael J. Stear, John B. Taggart, James E. Bron, John M. Hickey and Ross D. Houston (2017) Genotype Imputation To Improve the Cost-Efficiency of Genomic Selection in Farmed Atlantic Salmon. G3 Genes|Genomes|Genetics, Volume 7, Number 4, pages 1377–1383
George R. Wiggans, John B. Cole, Suzanne M. Hubbard and Tad S. Sonstegard (2017) Genomic Selection in Dairy Cattle: The USDA Experience. Annual Review of Animal Biosciences, Volume 5, pages 309–327.
Published online: June 2021
Lead reviewer: Ann Bruce
Please cite as: Lowe, James (2021) Genomic Selection. Genomics in Context, edited by James Lowe, published June 2021.