Thursday, April 30, 2009

T. rex in the news again! or, The dinosaur parable?

Sixty-five million years ago, an asteroid hit the earth in what is now the Gulf of Mexico and the resulting environmental changes killed off the dinosaurs, opening the way for the subsequent evolution of mammals.

Or not. This scenario has been taught to schoolchildren for decades, but Gerta Keller and Thierry Adatte, geoscientists at Princeton University and the University of Lausanne, Switzerland, have reanalyzed old data and collected new data, and they suggest that the asteroid hit about 300,000 years before the mass extinction. Theirs isn't a paradigm-shifting suggestion, because they don't reject the environmental impact theory; they propose that rather than an asteroid, the environmental change could have been due to the explosion of massive volcanoes, with the resulting dust in the atmosphere blocking out the sun and so on. Some scientists have replied in defense of the asteroid theory. But it is a further reminder that every scientist before yesterday was wrong, at least to some extent. Is the instant-death idea so fixed that if it wasn't an asteroid we need another cloudy explanation?

Ah, but everyone has in mind the Disney animated view of a herd of Brontosauruses grazing, then looking up curiously at the flaming meteor and its ensuing dust-cloud obscuring the sun, then grimacing in terror (cartoon reptiles can have fear in their eyes), then staggering, and plopping down dead (with a few farewell quivers from the tips of their tails, accompanied by ominous strains from the bass section).

So perhaps you can pick your favorite dust cloud. Whether or not any global dust-cloud can cause such extinctions (leaving countless species of all sorts, including reptiles, alive) is one of the debates that have accompanied the dust-cloud theory. But there probably has been a bit too much uncritical acceptance of the one-hit-killed-all (except those it didn't) theory.

Melodramatic global smudges provide a parable for what happened, since we certainly all know that the T. rex really is gone (unless it went under water to get away from the dust, and became the Loch Ness monster!), but they may not be an accurate reflection of the actual day-to-day facts at that time deep in the mists of history.

Tuesday, April 28, 2009

The Darwin parable?

Every human culture is embedded in stories about itself in the world, its lore, based on some accepted type of evidence. Science is a kind of lore about the world that is accepted by modern industrialized cultures.

It has long been pointed out that science is only temporary knowledge in the sense that, as we quoted in an earlier post, every scientist before yesterday has been wrong at least to some extent. As Galileo observed, rather than being true, sometimes the accepted wisdom is the opposite of the truth--he was referring to Aristotle, whose views had been assumed to be true for nearly 2000 years.

Scientific theory provides a kind of parable of the world, a simplified story. That's not the same as the exact truth. Here is a cogent bit of dialog from JP Shanley's play Doubt, referring to a parable the priest, Father Flynn, had used in a recent sermon:

Sister James: "Aren't the things that actually happen in life more worthy of interpretation than a made-up story?"

Father Flynn: "No. What actually happens in life is beyond interpretation. The truth makes for a bad sermon. It tends to be confusing and have no clear conclusion."

Darwinian theory is like that. The idea that traits that are here today are here because they were better for their carriers' ancestors than what their peers had is a tight, taut, and typically unfalsifiable kind of explanation. Since what is here is here because it worked, and what did not work is not here, this becomes true by definition. It's a catch-all 'explanation'. It at least has the appearance of truth, even when some particular Darwinian explanation invokes some specific factor--often treated from Darwin's time to the present as a kind of 'Newtonian' force--that drove genetic change in the adaptive direction we see today.

That's the kind of scenario that's offered to account for the evolution of flight to capture prey, showy peacock feathers to attract mates, protective coloration to hide from predators, or why people live long enough to be grandmothers (to care for their genetic descendants). Some of these explanations may very well be factually true, but almost all could have other plausible explanations or are impossible to prove.

Simple, tight, irrefutable but unprovable stories like these are, to varying but unknown extent, parables rather than literal truth. Unfortunately, while science often (and often deservedly!) has little patience with pat religious parables that are invoked as literal truth, science often too blithely accepts its own theories as literal truth rather than parable.

We naturally have our own personal ideas about the nature of life, and we think (we believe) that they are generally true. They are sometimes different, as we try to outline in our book and in these postings, from what many others take for granted as truth. Strong Darwinian selectionism and strong genetic determinism, in the ways we have discussed, are examples.

It may be difficult for people in any kind of culture, even modern technical culture, to be properly circumspect about their own truth-stories. Perhaps science must cling to theories too tight to be literally true, by dismissing the problems and inconsistencies that almost always are known. Accepted truths provide a working research framework, psychological safety in numbers, and the conformism needed to garner society's resources and power (here, among other things, in the form of grants, jobs, publications).

In fact, as a recent book by P. Kyle Stanford, Exceeding Our Grasp (Oxford University Press, 2006), discusses at length, most theories, including those in biology, are underdetermined: this means that many different theories, especially unconceived alternatives to current theory, could provide comparable fit to the existing data.

We can't know when and how such new ideas will arise and take their turn in the lore that is science. But in principle, at least, a bit more humility, a bit more recognition that our simple stories are more likely to be parable than perfect, would do us good.

Good parables have at least the semblance of plausibility and truth. Otherwise, they would not be useful parables. As we confront the nature of genomes, we see things that seem to fit our theoretical stories. In science, as in other areas of human affairs, that fit is the lure, the drug, that draws us to extend our well-supported explanations and accept things as true that really are parable. We see this all the time in the nature of much that is being said in genetics these days, as we have discussed.

Probably there's a parable about wisdom that could serve as a lesson in this regard. Maybe a commenter will remind us what it is!

Monday, April 27, 2009

Genetic perceptions and(/or) illusions?

If, as we tend to think, genes are not as strongly deterministic as often seems to be argued, then why do humans always give birth to humans, and voles to voles? Isn't the genome all-important, and therefore doesn't it have to be a blueprint (or, in more current terms, a computer program) for the organism? Even the other ingredients in the fertilized egg are often largely dismissed as unimportant, because at some stage they are determined by genes (for example, in the mother when she produces the egg cell).

It's true that most people enter this world with the same basic set of parts, even if each part varies among people. But, in many senses the person is not predictable by his or her genome. You get this disease, other people get that one. You are athletic, others are not. You can do math, others can do metalwork. And so on, but most often a genetic predisposition for these traits can't be found.

We have made a number of recent posts about that genotype-to-phenotype connection problem. You generally resemble your relatives, which must be at least partly for genetic reasons, but the idea of predicting much more than that about your specific life from your specific genome is not working out very well, except vaguely or, it is certainly true, for a number of genetic variants with strong effects, which are often rare or pathological. The latter include the genetic causes of diseases like muscular dystrophy and cystic fibrosis.

But these facts seem inconsistent! How can your genome determine whether you are an oak, a rabbit, or a person....and yet not determine whether you'll be a musician, get cancer, or win a Nobel prize? Is the idea of genetic control an illusion?

Yes and no. Part of the explanation has to do with what we refer to in our book as 'inheritance with memory'. Genomes acquire mutations and, if they aren't lethal, they are faithfully transmitted from parent to offspring so that some people have blue eyes and some have brown, some have freckles and others don't.

Over time, differences accumulate in genomes, and with isolation of one kind or another, lineages diverge into different species, and then differences continue to build up over evolutionary time. When, millions of years later, you compare a rabbit to a person (much less to a maple!), so much genetic variation has accumulated that there is clearly no confusing these different organisms. The genes, and the results of their action, are unmistakable.

What happens at each step during development of an embryo reflects the cumulative effect of many genes, and is contingent on what has already happened. Thus, step by step by step in a developing rabbit embryo the rabbit foundation gets laid and everything that happens next basically depends on getting more rabbit instructions, because, except for genes that are very similar (conserved) across species, they are the only instructions that the cells in the embryo can receive.

Variation is tolerated but within limits--some rabbits have very floppy ears and some don't, but none have elephant ears or a rack of antlers (as in the jackalope pictured here) or can survive a mutation that gives it, say, a malfunctioning heart. So, you get a continuum of rabbit types, but because of how development works, a newborn rabbit can't veer very far from 'rabbitness'.

The same is true for humans--among populations whose ancestors were separated on different continents for thousands of generations, genetic variation arose and accumulated, and it's often possible to tell from a genome where a person's ancestors lived. That is, a genome reveals its geographic ancestry.

When the subject turns to variation within a species or a population, however, the scale of variation that we are studying greatly changes. Now we are trying to identify genetic variation that, while still compatible with its species and population, and with successful embryological development, contributes to trait variation. Relative to species differences, such variation is usually very slight. But, it happens because, within limits, biology is imprecise and a certain amount of sloppiness (mutation, in this case) is compatible with life. In fact, it drives evolution.

What does this mean about the genetics of disease, or other traits that are often of interest to researchers, like behavior, artistic ability, intelligence, etc.? Culturally, we may make much of these differences, such as who can play shortstop and who can't. But often, these are traits that are not all that far from average or only are manifest after decades of life (e.g., even 'early onset' heart or Alzheimer disease means onset in one's 50s). Without even considering the effects of the environment, it is no surprise, and no illusion, that it is difficult to identify the generally small differences in gene performance that are responsible.

Of course, as with a machine, where there are many ways a broken part can break the whole, within any species there are many ways in which mutations that have major effects on some gene can have major effects on the organism. Mostly, those are lethal or present early in life. And many mutations probably happen in the developing egg or sperm, rendering that cell unable to survive--that's prezygotic selection. But even serious diseases are small relative to the fact that a person with Huntington's disease or cancer is, first and foremost, a person.

Weiss and Buchanan, 2009
So, there may seem to be an inconsistency between the difficulty of finding genetic causes of variation among humans, and the obvious fact that genetic variation is responsible for our development and our differences from other species. A major explanation is the scale of difference one is thinking about. Just because genes clearly and definitively determine the difference between you and a maple tree--and it's easy to identify the genes that contribute to that difference--does not mean that the genetic basis of the trait differences between you and anyone else is going to be easy to identify. Or between a red maple and a sugar maple. Or that genetic variation alone is going to explain your disease risk or particular skills.

And there's another point. In biomedical genetics we are drawing samples from billions of people, whose diseases come to the attention of specialty clinics around the developed world, and hence are reported, included in databases, and put under the genetic microscope for examination. This means that we systematically identify the very rarest, most aberrant genotypes in our species. This can greatly exaggerate the amount of genetically driven variation in humans compared to most, if not all, other species.

However, it must be said that even studies of other species (such as a hundred or so standard lines of inbred mice, or perhaps a few thousand lines of fruit flies or Arabidopsis plants (as in the drawing), from which much of our knowledge of genetic variation is derived) find similar mixes of genetic simplicity and complexity. In other words, one does not need a sample space of hundreds of millions to encounter the difficulty of trying to predict phenotypes from genotypes.

So while this is true, it's also true that some variation in human traits is controlled by single genes, and those behave at least to some extent like the classical genetic traits that Mendel studied in peas. This variation arises by mutation, in exactly the same way as the variant genes with small effects that contribute to polygenic traits. But the effects of mutation follow a distribution from very small to very large, and the genetic variants at the large-effect extreme are easier to identify. Some common variants of this kind do exist, because life is a mix of whatever evolution happened to produce. But complex traits remain, for understandable evolutionary reasons, complex.
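
As a rough illustration of why variants at the large-effect end of that distribution are the easy ones to find, here is a minimal sketch of statistical power as a function of effect size for a simple two-group comparison; the sample size, significance threshold, and effect sizes are hypothetical, chosen only to make the point:

```python
# Power to detect a variant's effect on a quantitative trait rises steeply
# with effect size, so large-effect (often rare or pathological) variants are
# found first while small polygenic effects stay below the radar.
# All numbers here are illustrative, not from any real study.
from scipy.stats import norm

def power_two_group(effect_size, n_per_group, alpha=5e-8):
    """Approximate power of a two-sided two-sample z-test.
    effect_size is the mean difference in standard-deviation units;
    alpha is set to a GWAS-style genome-wide threshold."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality)

for d in (0.05, 0.1, 0.3, 1.0):  # tiny polygenic effects up to Mendelian-scale ones
    print(f"effect {d:4.2f} SD: power = {power_two_group(d, 2000):.3f}")
```

With 2,000 people per group, an effect of a full standard deviation is detected essentially every time, while an effect of a twentieth of a standard deviation is essentially never detected, even though many such small effects may together account for most of a trait's heritability.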

The Science Factory

An op-ed piece called "End the University as We Know It" in today's New York Times, by Mark Taylor, a professor of religion at Columbia, has caught people's attention. As it should. He describes a system in which, among other problems, universities train far too many graduate students for the number of available academic jobs, because they need them to work in their labs and teach their classes. While this is certainly true, we would like to add a few more points.

Research universities have become institutions dependent on money brought in by research faculty to cover basic operating expenses, and this naturally leads to emphasis on those aspects of faculty performance. This means that promotions, raises, and tenure, not to mention our natural human vanities, depend substantially if not centrally upon research rather than pedagogical prowess. In that sense, universities have become welfare systems, except that the faculty are both the sales force and the protected beneficiaries.

Things differ among fields, however. In the humanities, which even university presidents recognize have at least some value and can't be totally abolished, publication is the coin of the realm. It is not new to say that most of what is published is rarely read by anyone (if you doubt this, check it for yourself--it's true even in the sciences!), yet this research effort usually comes at the expense of teaching students.

In the humanities, where little money is at stake and work remains mainly at the individual level, students are still largely free to pick the dissertation topic that interests them. But their jobs afterwards are generally to replace their mentors in the system.

In the sciences there are many more jobs, in academia as well as in industry and the public sector. We've been very lucky in that regard in genetics. But at the same time, science is more of an industrial operation, depending on serfs at the bench who are assigned to do a professor's bidding--work on his/her grant-based projects. There is much less freedom to explore new ideas or the students' own interests. They then go on to further holding areas, called post-docs, before hopefully replacing aging professors or finding a job in industry.

In both cases, however, education is taking an increasing back seat at research universities and even good liberal arts colleges. Lecturers and graduate students teach classes, rather than professors, and there is pressure to develop money-making online courses that are rarely as good as good old person-to-person contact. The system has become decidedly lopsided.

Taylor identifies very real problems, but to us, his solutions (eliminate departments and tenure, and train fewer students, among other things) don't address the real root of the problem. As long as universities are so reliant upon overhead money from grants, which they will be for the foreseeable future, universities can't return to education as their first priority.

So long as this is based on a competitive market worldview, as it is today, the growth ethic dominates. One does whatever one needs to do to get more. Exponential growth expectations were largely satisfied during the boom time, starting around 40 years ago, as universities expanded (partly because research funding did, enabling faculty to be paid on grants rather than tuition money). Science industries grew. Status for a professor came from training many graduate students. We ourselves have been great beneficiaries of this system!

Understanding the current situation falls well within the topical expertise of anthropology. One class, the professoriate, used the system to expand its power and access to resources. Whether by design or not, a class of subordinates developed. A pyramid of research hierarchy grew, with concentration of resources in larger and wealthier groups, labs, or projects. The peer review system designed to prevent self-perpetuating old-boy networks has been to a considerable extent coopted by the new old boys.

As everyone knows, or should know, exponential growth is not sustainable. If each of us trains many students, and then they do the same, etc., we eventually saturate the system, and that is at least temporarily what is happening now.

What can be done about it, or should be done about it, is not clear. The idea that we'll somehow intentionally change to what Taylor wants seems very unanthropological: we simply don't change in that way as a rule. Cultural systems, of which academia is one, change by evolution in ways that are usually not predictable. At present, there are too many vested interests (such as ourselves, protecting our jobs and privileges, etc.). Something will change, but it will be most humane if it's gradual rather than chaotic. Maybe, whether in the ways Taylor suggests or in others, a transition can be undertaken that does not require the system to collapse first.

We'll all just have to stay tuned....

It would never work out



More from our Humor Editor

http://news.yahoo.com/comics/speedbump;_ylt=ArrJ4pWQkHmu.kjkHYe7I_MK_b4F

Friday, April 24, 2009

Prove me wrong

http://d.yimg.com/a/p/uc/20090419/lft090419.gif

Thank you, Jennifer, once again!

Thursday, April 23, 2009

Who doesn't 'believe' in genes? Who does believe in miracles?

Disputes in science, as in any other field, can become polarizing and accusatory. Skepticism is almost by definition the term applied, usually in a denigrating way, to a minority view. Minority views are often if not usually wrong, but history shows that majority views can be similarly flawed (as we mentioned in our post of April 6). Indeed, major scientific progress occurs specifically when the majority view is shown to be incorrect in important ways.

It is sometimes said, or implied, about those who doubt some of the statements made about mapping complex traits by GWAS (see previous posts) and other methods, that the skeptics 'don't even believe in genes!' The word 'believe' naturally comes to the tongue when characterization of heretics is afoot, and reflects an important aspect of majority views (including the current basic theory in any science): they are belief systems.

In genetics, we know of nobody who doesn't 'believe' in genes. The question is not one of devil-worship by witches like those Macbeth met on a Scottish heath. The question is about what, where, and how genes work and how that is manifest in traits we in the life sciences try to understand. Not to believe in genes would be something akin to not believing in molecules, or heat.

In the case of human genetics, genes associated with, and/or responsible for, hundreds of traits, including countless diseases, have been identified (you can easily read about them in OMIM and elsewhere). Many of these are clearly understandable as the effects, sometimes direct effects, of variation in specific genes.

In some examples, like cystic fibrosis, almost every case of a trait or disease is due to variation in the same gene. In others, such as hereditary deafness, different people or families are affected through the effects of different genes, but the causal variants appear to be so rare in the population that in each family deafness is due to only one of them. This is called multiple unilocus causation, and many different genes have been found for such traits (as in the deafness case shown).

Even for more complex traits like, say, diabetes or various forms of cancer, variation at some genes has strong enough effect that standard methods of gene-hunting were able to find them (and, yes, GWAS can find them, too!). BRCA1 and its association with risk of breast cancer is a classic instance of that. But often the results can't be replicated in other studies or populations, or even different families, a problem that has been much discussed in the human genetics literature, including papers written by us.

So, what is at issue these days is not whether genes exist, are important, or are worthy of study (and the same applies to human disease, or studies of yeast, bacteria, insects, flowering plants, or whatever you're interested in). Instead, what is at issue is how genes work in the context of complex traits that involve many interacting genetic and environmental factors.

And in addition to the basic understanding of how genes work in this context (and, incidentally, the rapidly expanding catalog of DNA functions beyond the usual concept of what a 'gene' is), there is the question of how to find the genes and their effects, and what kinds of information that may provide for applications in agriculture or human health.

In the latter case especially, the promise has been that we can predict your health from your DNA. Widely publicized companies are selling this idea in various ways from customer-submitted DNA samples, and some medical geneticists are promising personalized medicine in glowing terms, too.

Indeed, there has long been an effective profession dedicated to this general problem. It's called genetic counseling. Genetic counselors work in a monitored setting, with standardized approaches and ethical procedures in dealing with clients. They systematically collect appropriate clinical and other information. And for known genetic variation associated with disease they, or knowledgeable physicians, can make useful, personalized predictions, explaining options for treatment, family planning, and so on.

It is not a lack of 'belief' in genes, but the opposite--an understanding of genetics--that leads some scientists to question the likely efficacy of this or that proposed direction in health-related research, or in other areas such as criteria for developing evolutionary explanations (i.e., scenarios for past natural selection) for various traits including complex traits like diabetes and even social behavior.

Dispute in science should not be viewed or characterized as if it were the same as dispute in religion....even though both are similar cultural phenomena that often center around accepted theory or dogma. Questions about priorities and dramatic promises for scientific approaches are legitimate and all dogma should be questioned.

Every biologist we know 'believes' in genes. But not all biologists believe in miracles!

Wednesday, April 22, 2009

Tuesday, April 21, 2009

GWAS revisited: vanishing returns at expanding costs?

We've now had a chance to read the 4 papers on genomewide association studies (GWAS) in the New England Journal of Medicine last week, and we'd like to make a few additional comments. Basically, we think the impression left by the science commentary in the New York Times that GWAS are being seriously questioned by heretofore strong adherents was misleading. Yes, the authors do suggest that all the answers are not in yet, but they are still believers in the genetic approach to common, complex disease.

David Goldstein (whose paper can be found here) makes the point that SNPs (single nucleotide polymorphisms, or genetic variants) with major disease effects have probably been found by now, and it's true that they don't explain much of the heritability (evidence of overall inherited risk) of most diseases or traits. He believes that further discoveries using GWAS will generally be of very minor effects. He concludes that GWAS have been very successful in detecting the most common variants, but now have reached the point of diminishing returns. He says that "rarer variants will explain missing heritability", and these can't be identified by GWAS, so human genetics now needs to turn to sequencing whole genomes to find them.

Joel Hirschhorn (you can find his paper here) states that the main goal of GWAS has never been disease prediction, which indeed they've only had modest success with, but rather the discovery of biologic pathways underlying polygenic disease or traits. GWAS have been very successful at this--that is, they've confirmed that drugs already in use are, as was basically also known, targeting pathways that are indeed related to the relevant disease, although he says that further discoveries are underway. Unlike Goldstein, he believes that larger GWAS will find significant rare variants associated with disease.

Peter Kraft and David Hunter (here) tout the "wave of discoveries" that have resulted from GWAS. They do say that by and large these discoveries have low discriminatory ability and predictive power, but believe that further studies of the same type (only much bigger) will find additional risk loci that will help explain polygenic disease and yield good estimates of risk. They suggest that, because of findings from GWAS, physicians will be able to predict individual risk for their patients in 2 to 3 years.

John Hardy and Andrew Singleton (here) describe the GWAS method and point out that people are surprised to learn that it's often just chromosome regions that this method finds, not specific genes, and that some of these are probably not protein-coding regions but rather have to do with regulating gene expression. Notably, unlike the other authors, who all state that the "skeptics were wrong" but somehow don't bother to cite the skeptics' work so that the reader could check that claim, they do cite the Terwilliger and Hiekkalinna paper we mentioned here last week; but that is on a specialized technical issue, not the basic issues related to health effects.

They also state that the idea of gene-by-environment interaction is a cliche that has never been demonstrated. Whether they mean by this that there is no environmental effect on risk, or simply that such effects are difficult to quantify (or, a technical point, that environmental effects are additive), is not clear. If the former, that's patently false--even the risk of breast cancer in women who do carry the known and undoubted BRCA1 or BRCA2 risk alleles varies significantly by decade of birth. Or consider the huge rise in diabetes, obesity, asthma, autism, ADHD, various cancers, and many other diseases just during the memory of at least some living scientists who care to pay attention. And see our post of 4/18.

So, we find none of the supposed general skepticism here. Yes, these papers do acknowledge that the risk explained by GWAS has been low, but they claim this as 'victory' for the method, and they dismiss, minimize, or (worse) misrepresent problems that were raised long ago, saying instead either that risk will be explained with bigger studies, or that GWAS weren't meant to explain risk in the first place. (It's not clear that the non-skeptics agree with each other about the aim of GWAS, or about whether GWAS have now served their purpose and it's time to move on--to methods that apparently actually do, or also do, explain heritability and predict risk.)

The 'skeptics' never said that GWAS would find nothing. What at least some of us said was that what would be found would be some risk factors, but that complex traits could not by and large be dissected into the set of their genetic causes in this way.

Rather than face these realities, we feel that what is being done now is to turn defeat into victory by claiming that ever-larger efforts will finally solve the problem. We think that is mythical. Unstable and hardly-estimable small, probabilistic relative risks will not lead to revolutionary 'personalized medicine', and there are other and better ways to find pathways. If a pathway (network of interacting genes) is so important, it should have at least some alleles that are common and major enough that they should already be known (or could easily be known from, say, mouse experiments); once one member is known, experimental methods exist to find its interacting gene partners.

In a way it's also a sad consequence of ethnocentric thinking to suppose that, because we can't find major risk alleles in mainstream samples from industrialized populations, such undiscovered alleles might not exist in, or even be largely confined to, smaller or more isolated populations, where they could be quite important to public health. They do exist, and, ironically, mapping methods (a technical point: especially linkage mapping) can be a good way to find them.

But if we're in an era of diminishing, if not vanishing, returns, we're also in an era in which we think we will not only get less, but will have to spend, and hence sequester, much more in research resources to get it. So there are societal as well as scientific issues at stake.

In any case, we already have strong, pathway-based personalized medicine! By far the major risk factor for most diabetes, for example, involves energy and fat metabolic pathways. Individuals at risk can already target those pathways in simple, direct ways: walk rather than take the elevator, and don't overeat!

If those pathways were addressed in this way, there would actually be major public impact, and ironically, what would remain would be a residuum of cases that really are genetic in the meaningful sense, and they would be more isolated and easier to study in appropriate genetic, genomic, and experimental ways.

Monday, April 20, 2009

Doubt

"Have you ever held a position in an argument past the point of comfort....given service to a creed you no longer utterly believed?" So asks John Patrick Shaney in the preface to his 2004 play Doubt. It's a very good play (and equally good new movie of the same name), about nuances and our tendency to fail to acknowledge how little we actually know. Did the priest...or didn't he?

Our society is currently commodified and dumbed-down, and it rewards self-assurance, assertion, and extreme advocacy--belief. This is reflected in the cable 'news' shouting contests, and in the fast-paced media orientation that pervades many areas of our society. There are penalties for circumspection.

Science is part of society and shares its motifs. Scientists are driven by natural human vanities, of course, but also by the fact that as middle-class workers we need salaries, pension funds, and health care coverage. We are not totally disinterested parties, standing by and abstractly watching knowledge of the world accumulate. Academic work is viewed as job-training for students, and research is largely filtered through the lens of practical 'importance'. Understanding Nature for its own sake is less valued than being able to manipulate Nature for gain. This was less prominent in times past (Darwin's time, for example; though at the height of industrialization and empire, when there was certainly great respect for practical science and engineering, he could afford the luxury of pure science).

It is difficult to have a measured discussion about scientific issues--stem cells are a prime example, but equally difficult are discussions about the nature of genomes and what they do, and the nature of life and evolution. Our societal modus vivendi imposes subtle, unstated pressure to take a stand and build a fortress around it. Speakers get invited to give talks based on what people think they're going to say (often, speaking to the already-converted). The hardest sentence to utter is "I don't know." (Another, "I was wrong," is impossible, so it isn't even on the radar).

A theme of our recent postings has to do with what we actually know in science, and what we don't know. It is always true that scientists firmly cling to their theoretical worldview--their paradigm, to use Thomas Kuhn's word for it. Maybe that's built into the system. In many ways that is quite important because, as we've recently said, it's hard to design experiments and research if you don't have a framework to build it on.

But frameworks can also be cages. Dogma and rigid postures may not be good for an actual understanding of Nature, and are costly in terms of wasted research resources. Clinging to an idea leads to clinging to existing projects (often, trying to make them larger and last longer), even when they can, from a disinterested point of view, be seen to be past their sell-by date. Great practical pressures favor that rather than simply saying enough is enough, it's not going to go very far, so let's start investing in something new that may have, or may lead us to insights that have, better prospects. Let's ask different questions, perhaps.

GM failed to respond to such a situation in its persistence in making gas-guzzling SUVs, and oil companies now try to fight off alternative energy sources. So science is not alone. But it's a concern, nonetheless.

Science would be better with less self-assurance, less reward for promotional skills, if experiments were more designed to learn about Nature than to prove pet ideas or provide findings that can be sold in future grant or patent applications. Most null hypotheses being 'tested' are surrogates for set-up studies where at least some of the answer is basically known; for example, when searching for genes causing some disease, our null hypothesis is that there is no such gene. Yet before the study we almost always know there is at least some evidence (such as resemblance among family members). So our degree of surprise should be much less than is usually stated when we reject the null. The doubt we express is less than fully sincere in that sense, but leads to favorable interpretations of our findings that we use to perpetuate the same kind of thinking we began with.

Real doubt, the challenge of beliefs, has a fundamental place in science. Let's nurture it.

Saturday, April 18, 2009

The rear-view mirror and the road ahead

We've already posted some critiques of the current push for ever-larger genomewide association-style studies of disease (GWAS), which have been promoted by glowing promises that huge-scale studies and technology will revolutionize medicine and cure all the known ills of humankind (a slight exaggeration on our part, but not that far off the spin!). We want to explain our reasoning a bit more.

For many understandable reasons, geneticists would love to lock up huge amounts of research grant resources, for huge amounts of time, to generate huge amounts of data that will be deliciously interesting to play with. But such vast up-front cost commitments may not be the best way to eliminate the ills of humankind. It may not even be the best way to understand the genetic involvement in those ills.

In a recent post we cited a number of our own papers in which we've been pointing out problems in this area for many years, and while we didn't give references we did note that a few others have recently been saying something like this, too. The problem is that searching for genetic differences that may cause disease is based on designs such as comparing cases and controls, which don't work very well for common, complex diseases like diabetes or cancers. Among other reasons, this is because, if the genetic variant is common, people without the disease, the controls, may still carry a variant that contributes to risk, but they might remain disease-free because, say, they haven't been exposed to whatever provocative environment is also associated with risk (diet, lack of exercise, etc.). And these designs don't work very well for explaining normal variation.
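
To make that dilution problem concrete, here is a minimal toy simulation, with entirely made-up frequencies and risks, of a common variant that raises risk only in people exposed to a provocative environment; because so many unexposed controls also carry it, the case-control signal is modest:

```python
# Toy sketch of why case-control designs struggle with common, environment-
# dependent risk variants: many controls carry the variant too, so the
# observed odds ratio is modest even though the variant genuinely matters.
# All frequencies and risks below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
carrier = rng.random(n) < 0.30   # common risk variant, carried by ~30% of people
exposed = rng.random(n) < 0.25   # provocative environment (diet, inactivity, ...)

# Disease risk: baseline 5%, tripled to 15% only for exposed carriers.
risk = np.where(carrier & exposed, 0.15, 0.05)
disease = rng.random(n) < risk

case_freq = carrier[disease].mean()      # carrier frequency among cases
control_freq = carrier[~disease].mean()  # carrier frequency among controls

def odds(p):
    return p / (1 - p)

print(f"carrier frequency in cases:    {case_freq:.2f}")    # roughly 0.39
print(f"carrier frequency in controls: {control_freq:.2f}") # roughly 0.29
print(f"odds ratio: {odds(case_freq) / odds(control_freq):.2f}")  # roughly 1.5
```

In this toy setup the variant triples risk in the exposed subgroup, yet nearly a third of healthy controls carry it, so the study reports only the kind of weak association typical of GWAS hits; and nothing in the case-control comparison itself reveals the environmental contingency.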

As we have said, the knowledge of why we find as little as we are finding has been around for nearly a century, and it connects us to what we know about evolutionary genetics. Since the facts apply as well to almost any species--even plants, inbred laboratory mice, and single-celled species like yeast--they must be telling us something about life that we need to listen to!

Part of the problem is that environments interact with many different genes to produce the phenotypes (traits, including disease) in ways that would be good to understand. However, our methods of understanding causation necessarily look backwards in time (they are 'retrospective'): we study people who have some trait, like diabetes, and compare them to age-sex-etc. matched controls, to see how they differ. Geneticists and environmental epidemiologists stress their particular kinds of risks, but the trend recently has strongly been to focus on genes, partly because environmental risk factors have proven to be devilishly hard to figure out, and genetics has more glamour (and plush funding) these days: it may have the sexy appearance of real science, since it's molecular!

Like looking in the rear-view mirror, we see the road of risk-factor exposures that we have already traveled. But what we really want to understand is the causal process itself, and for 'personalized medicine' and even public health we need to look forward in time, to current people's futures. That is what we are promising to predict, so we can avoid all ills (and produce perfect children).

We need to look at the road ahead, and what we see in the rear-view mirror may not be all that helpful. We know that the environmental component of most common diseases contributes far more to risk than any specific genetic factors, probably far more than all genetic factors combined do on their own. We know that clearly from the fact that many if not most common diseases have changed, often dramatically, in prevalence just in the last couple of generations, while we've had very good data and an army of investigators tracking exposures, lifestyles, and outcomes.

Those changes in prevalence are a warning shot across the genetics bow that geneticists have had a very convenient tin ear to. They rationalize these clear facts by asserting that changes in common diseases are due to interactions between susceptible genotypes and these environmental changes. Even if such unsupported assertions were true, what we see in the rear-view mirror does not tell us what the road ahead will be like, for the very simple, but important reason that there is absolutely no way to know what the environmental--the non-genetic--risk factor exposures will be.

No amount of Biobanking will change this, or make genotype-based risk prediction accurate (except for the small subset of diseases that really are genetic), because each future is a new road and risks are inevitably assessed retrospectively. That is so even if causation were relatively simple and clear, which is manifestly not the case, and no matter how accurately we can identify the genotypes of everyone involved (and there are some problems there, too, that we will have to discuss another time).

This is a deep problem in the nature of knowledge in regard to problems such as this. It is one sober, not far-out, not anti-scientific, reason why scientists and public funders should be very circumspect before committing major amounts of funding, for decades into the future, to try to track everyone, and everyone's DNA sequences. And here we don't consider the great potential for intrusiveness that such data will enable.

As geneticists, we would be highly interested in poking around in the data such mega-studies would yield. But we think it would not be societally responsible data to generate, given the other needs and priorities (some of which actually are genetic) that we know we can address with available resources and other approaches.

We can learn things by checking the rear-view mirror, but life depends on keeping our eye on the road ahead.

Friday, April 17, 2009

Darwin and Malthus, evolution in good times and bad

Yesterday, again in the NY Times, Nicholas Kristof reported on studies of the genetic basis of IQ. This has long been a sensitive subject because, of course, the measurers come from the upper social strata, and they design the tests to measure things they feel are important (e.g., in defining what IQ even is). It's a middle-class way of ranking middle-class people who are competing for middle-class resources. Naturally, the lower social strata do worse. Whether or not that class-based aspect is societally OK is a separate question and a matter of one's social politics: Kristof says no, and so do we, but clearly not everyone has that view and there are still those who want to attribute more mental agility to some races than to others.

By most if not all measurements, IQ (and 'intelligence', whatever it is, like almost anything else) has substantial 'heritability'. What that means is that the IQ scores of close relatives are more similar than the scores of random pairs of individuals. As everyone who talks about heritability knows (or should know), it's a measure of the relative contribution of shared ancestry to score similarities. It is relative to the contribution of other factors that are referred to as 'environmental.'
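
For readers who like the formal version, here is the textbook definition that the term 'heritability' sketches; this is standard quantitative genetics, not anything specific to the studies Kristof describes:

```latex
% Broad-sense heritability: the fraction of total phenotypic variance (V_P)
% in a population that is attributable to genetic variance (V_G); the
% remainder (V_E) is lumped together as 'environmental'.
H^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E}
```

Because the environmental variance sits in the denominator, the same genes yield a lower heritability in more variable or more deprived circumstances, which is one way to read the findings described below.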

Kristof's column points out that IQ is suppressed in deprived circumstances of lower socioeconomic strata. And so is the heritability--the apparent relative contribution of genes. That makes sense, because no matter what your 'genetic' IQ, if you have no books in your house etc., your abilities have little chance of becoming realized. How you do relative to others who are deprived is largely a matter of chance, hardly correlated with your genotype. Correspondingly, there is evidence that scores and heritability rise when conditions, including educational investment, are improved. The point is that when conditions are bad, everyone suffers. When conditions are good, all can gain, and there are opportunities for inherited talent to shine.

Can we relate this to one of the pillars of evolutionary biology? That is the idea, due to both Darwin and Wallace, that natural selection works because in times of overpopulation (which they argued, following Thomas Malthus, was basically all times), those with advantageous genotypes would proliferate at the relative expense of others in the population. That fits the dogma that evolution is an endless brutal struggle for survival, often caricatured as 'survival of the fittest'.

Such an idea is certainly possible in principle. But, hard times might actually be less likely to support innovation evolutionarily. When there is a food shortage, it could be that everyone suffers more comparably, so that even what would otherwise be 'better' genotypes simply struggle along or don't make it at all. By contrast, good times might be good for all on average, but might provide the wiggle room for superior phenotypes, and their underlying genotypes, to excel.

This is not at all strange or out in left field. Natural selection can only select among genetically based fitness differences. If hard times mean there is little correlation between genotype and phenotype, selective differences have little if any evolutionary effect, and survival is mostly luck.
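
One standard way to make this point compactly is the breeder's equation of quantitative genetics, sketched here; it is a textbook result, not something specific to the IQ or Malthus discussion:

```latex
% Breeder's equation: the evolutionary response to selection (R) is the
% narrow-sense heritability (h^2) times the selection differential (S).
R = h^2 S
```

If hard times drive the genotype-phenotype correlation, and hence h^2, toward zero, then even a brutal selection differential S produces almost no response R; in better times, a higher h^2 lets the same selective pressure actually move the population.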

In this sense, from Malthus to the present, this central tenet of evolutionary theory may have been wrong, or at least inaccurate. Adaptive evolution may actually have occurred most in plentiful times, not under severe overpopulation. In most environments, competition is clearly not all that severe--if it were, most organisms would be gaunt and on the very thin edge of survival, which manifestly is not generally true.

The IQ story only reflects some here-and-now findings, not evolutionary ones per se. But it may suggest reasons to think about similar issues more broadly.

Thursday, April 16, 2009

GWAS: really, should anyone be surprised?

There's a story today in the New York Times about papers just published in the New England Journal of Medicine (embargoed for 6 months, so we can't link to them here) questioning the value of GWAS--genomewide association studies. We're interested because we--but largely Ken, often in collaboration with Joe Terwilliger at Columbia--have been writing, in many explicit ways and for what we think are the right reasons, for 20 years or more about why most common diseases won't have simple single-gene, or even few-gene, explanations.

"The genetic analysis of common disease is turning out to be a lot more complex than expected," the reporter writes. Further, "...the kind of genetic variation [GWAS detect] has turned out to explain surprisingly little of the genetic links to most diseases." Of course, it depends on who's expectations you're talking about, and who you're trying to surprise. It may be a surprise for a genetics true-believer, but not for those who have been paying attention to the nature of genomic causation and how it evolved to be as it is (the critical facts and ideas have been known, basically, since the first papers in modern human genetics more than 100 years ago).

GWAS are the latest darling of the genetics community. Meant to identify common genetic factors that influence disease risk, the method scans the entire genomes of people with and without a disease to look for genetic variants associated with disease risk. Many papers have been published claiming great success with this approach, and proclaiming that we're finally about to crack disease genetics and that the age of personalized medicine is here. But upon scrutiny these successes turn out not to explain very much risk at all--often as little as 1% or 3%--even if the relative risk is, say, 1.3 (a 30% increase) or, in a few cases, 3.0 or more; there will always be exceptions. And this is for reasons that have been entirely predictable, based on what is known about evolution and genes.
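
As a back-of-the-envelope illustration of how little a relative risk of 1.3 means for any individual, here is a tiny sketch; the baseline risk is a made-up round number, not taken from the papers:

```python
# Relative vs absolute risk, with hypothetical numbers: a 30% relative
# increase on a modest baseline shifts an individual's risk only slightly.
baseline_risk = 0.08          # assumed lifetime risk for non-carriers
relative_risk = 1.3           # the size of effect typical GWAS hits report
carrier_risk = baseline_risk * relative_risk

print(f"non-carrier risk:  {baseline_risk:.1%}")                  # 8.0%
print(f"carrier risk:      {carrier_risk:.1%}")                   # 10.4%
print(f"absolute increase: {carrier_risk - baseline_risk:.1%}")   # 2.4%
```

Either way, the large majority of carriers never develop the disease and plenty of non-carriers do, which is one reason such variants 'explain' so little of the overall risk.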

Briefly, genes with major detrimental effects are by and large weeded out quickly by natural selection; most traits, including diseases (except for the frankly single-gene diseases, which can stay around because they're partly recessive), are the result of many interacting genes and, particularly for diseases with a late age of onset, environmental effects. And there are many genetic pathways to any trait, so the assumption that everyone with a given disease gets there the same way has always been wrong. Each genome is unique, with different, and perhaps rare or very rare, variants contributing to disease risk, and these will be difficult or impossible to find with current methods. Risk alleles can vary between populations, too--different genes are likely to contribute to diabetes in, say, Finns than in the Navajo. Or even the French. Or even different members of the same family! Inconvenient truths.

Now, news stories and even journal articles rarely point out any of these caveats. Indeed, David Goldstein is cited in the Times story as saying that the answer is individual whole genome sequencing, another very expensive and still deterministic approach (that surely will now be contorted by the proponents of big Biobanks to show that this is just the thing they've had in mind all along!). Nobody backs away from big long-term money just to satisfy what the science actually tells us. Now maybe there is the story, in the public interest, that a journalist ought to take on.

So, the push will be for ever-more complete DNA sequences to play with, but this is not in the public interest in the sense that it will not have major public health impact. Even if it identifies new pathways--one of the major rationales given in the face of the awkward epidemiological facts--that is unlikely to be a major outcome in public health terms. We have many ways to identify pathways, and more are becoming available all the time. While whole sequences can identify unique rare haplotypes or polygenotypes that some affected people share, that is really little more than trying the same method on new data, like Cinderella's wicked step-sisters forcing their feet into the glass slipper.

If it's true, as it seems to be, that most instances of a disease are due to individually rare polygenotypes, then the foot will not fit any better than before. And it won't get around the relative-versus-absolute-risk problem, nor the environmental one, nor the retrospective/prospective epistemological problem. These are serious issues, but we'll have to deal with them separately. And the rarer the genotype, the harder it is to show how often it is found in controls, and hence the harder to estimate its effect.

------------------------
A few reasons why no one should be surprised:

*Update* Here are a few of the papers that have made these points in various ways and contexts over the years. They have references to other authors who have recently made some of these points. The point (besides vanity) is that we are not opportunistically jumping on a new bandwagon, now that more people are (more openly) recognizing the situation. In fact, the underlying facts and reasons have been known for far longer.

The basic facts and theory were laid out early in the 20th century by some of the leaders of genetics, including RA Fisher, TH Morgan, Sewall Wright, and others.

Kenneth Weiss, Genetic Variation and Human Disease, Cambridge University Press, 1993.

Joseph Terwilliger, Kenneth Weiss, "Linkage disequilibrium mapping of complex disease: fantasy or reality?", Current Opinion in Biotechnology 9(6): 578-594 (1998).

Kenneth Weiss, Joseph Terwilliger, "How many diseases does it take to map a gene with SNPs?", Nature Genetics 26: 151-157 (2000).

Joseph Terwilliger, Kenneth Weiss, "Confounding, ascertainment bias, and the quest for a genetic 'Fountain of Youth'", Annals of Medicine 35: 532-544 (2003).

Kenneth Weiss, Anne Buchanan, Genetics and the Logic of Evolution, Wiley, 2004.

Joseph Terwilliger, Tero Hiekkalinna, "An utter refutation of the 'Fundamental Theorem of the HapMap'", European Journal of Human Genetics 14: 426-437 (2006).

Anne Buchanan, Kenneth Weiss, Stephanie M Fullerton, "Dissecting complex disease: the quest for the Philosopher's Stone?", International Journal of Epidemiology 35(3): 562-571 (2006).

Kenneth Weiss, "Tilting at Quixotic Trait Loci (QTL): An Evolutionary Perspective on Genetic Causation", Genetics 179: 1741-1756 (2008).

Anne Buchanan, Sam Sholtis, Joan Richtsmeier, Kenneth Weiss, "What are genes for or where are traits from? What is the question?", BioEssays 31: 198-208 (2009).

Wednesday, April 15, 2009

Everything else is but a detail...

It's been said that once the cell evolved, everything else in life is but a detail. It sounds like a glib reference to the fact that all life (well, all cellular life--viruses excluded) is cellular. But as we've noted in an earlier post on cognition in bacteria, and in our books and writing, even single-celled organisms have complex abilities to evaluate and respond to their environments. Those abilities include, in many or even perhaps most such organisms, the ability to form multicellular organisms under some conditions (slime mold, bacterial biofilms, and others).

In fact, this is not a trivial kind of exception, but a profound one. The same kinds of mechanisms that make you a single, multicellular entity lead otherwise single-celled organisms to do remarkable things. These 'simple' cells have sophisticated ways to monitor their environment and respond collectively (sometimes even as collections of different species). That is, they are adaptable, one of the basic principles of life.

An example that has been noted by others, but that we learned about this week on the BBC World Service radio program The Forum, with biologist Brian J Ford, is the amoeba called Difflugia coronata. This single-celled organism, living in ponds or other watery worlds, builds a house of sand which it lives in and carries around as it moves. The house is 150 thousandths of a millimeter in diameter, but apparently carefully constructed in a replicable form. As the amoeba grows, it ingests sand grains of varying sizes, and when it divides to reproduce, one of the daughter cells inherits the house and the other gets the ingested sand so that it can build a house of its own. You can read more about it in the book Built by Animals: The Natural History of Animal Architecture by Mike Hansell [Oxford University Press, 2008] (which can be found online in abbreviated Google form here). This gorgeous picture of one Difflugia's house is from that book.

It's not only the amoeba, of course. Cells of red algae even circle around a wounded peer, protecting it from the outside environment and providing for its physiological needs until it recovers. And so on.

If we think about life in this way, and not in terms of exceptionalism for 'higher' organisms, life becomes more of a unitary phenomenon. Also, many things, including perhaps especially behaviors, that we might wish to credit adaptive evolution for having produced in our own precious ancestry as a species, were around billions of years before the first hominids had that lusty gleam in their eyes. And, with little doubt, these 'primitive' amoebae and algae will still have their orderly social life eons after we advanced creatures have departed this Earth.

Tuesday, April 14, 2009

The Remarkable Fact That We Actually Know Anything!

Scientists always say they are working at the frontier of knowledge (we usually increase the self-praise by calling it the 'cutting edge'). But that's a trivial way to express vested interest, really, because things that are known are not being explored by science, so in a sense it's the definition of science to be studying what we don't already know.

On the other hand, what we do know is rather remarkable when you think of the complexity and elusiveness of Nature. DNA and molecular interactions can't be seen the way ordinary objects and interactions can. We are dealing with very large numbers of very small particles interacting in very many ways. In fact, everything genetic, genomic, and cellular turns out to be related to everything else (to oversimplify a bit).

Yet, almost no matter what you may ask about, Googling will reveal a substantial, usually rather huge, literature on the subject. Nonetheless, the subject isn't closed, the problem not 'solved', and the complexities are manifest.

One can ask, for example, about the genetic involvement or cause of a disease, even a rare disease of variable or multiple symptoms, and find that something is known about its molecular cause.

It may be a neurotransmitter problem, or an energy metabolism one, or a developmental anomaly, etc. Variants at some gene(s) are usually known that 'cause', or at least are involved with, the trait. Yet if you dig deeper, the stories are not very tight. Prediction from gene to trait is, with some usually-rare exceptions, not that strong, and often treatment, and almost always prevention, remain elusive. Is this because we just haven't gotten around to figuring these things out, or because we don't know how to know them? Do we need a new way to think about complexity?
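As a rough, hypothetical illustration of how weak gene-to-trait prediction usually is (the numbers below are invented for the sake of the example, not taken from any particular study), consider a common variant with a modest relative risk for a fairly common disease:

```python
# Hypothetical numbers: a risk variant carried by 20% of people, with a
# relative risk of 1.5, for a disease affecting 2% of the population.
carrier_freq, relative_risk, prevalence = 0.20, 1.5, 0.02

# Split the overall prevalence into carrier and non-carrier risks:
#   prevalence = carrier_freq * risk_carrier + (1 - carrier_freq) * risk_noncarrier
#   risk_carrier = relative_risk * risk_noncarrier
risk_noncarrier = prevalence / (carrier_freq * relative_risk + (1 - carrier_freq))
risk_carrier = relative_risk * risk_noncarrier

print(f"risk for carriers:     {risk_carrier:.1%}")    # about 2.7%
print(f"risk for non-carriers: {risk_noncarrier:.1%}") # about 1.8%
```

The variant is genuinely 'involved', and a large enough study will find it, but knowing your genotype shifts your risk estimate by only about one percentage point--hardly a prediction.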

It's remarkable in many ways that we know anything about these problems, even if it's equally sobering how difficult it is to truly understand them. It's a testimony to the power of research methods and to the army of investigators employing them, but also to the way in which these methods reveal uncertainties along with facts. Often the tag line in papers or the news is about what we know, while what we don't know is kept quieter. Some believe that, with time and resources, our methods and technology will finish the job. Others would say that if we believe that, we are not really accepting and dealing with complexity head-on as we should. And, as we wrote yesterday, it's always sobering to realize that the assumptions we base our knowledge on may themselves be faulty--but we never know at the time which ones they are or what's wrong with them.

There's probably no one way to view this. But, still, it's amazing how much we do know about things that are so little and complex.

Sunday, April 12, 2009

Never say die

Today's news has a story of a report that female mice continue to generate at least some new egg cells after they are born. Eggs, as well as heart and other muscle, and most types of neurons, were for a long time believed to be post-mitotic: that is, they could not be regenerated. But if various reports are accurate, all of these types of cells can regenerate at least to some extent.

If the exceptions to the once-held 'rules' are real, and not trivial, this is potentially good news for the development of therapeutic approaches in which an individual's own cells could be used to replace lost or damaged cells. But it raises some interesting basic scientific questions, too.

Most if not all of these results come from animal models. So why is this touted as good news for humans? If one is a fervent Darwinist, and thinks that the pressure of competition is always pushing species toward ever more specialized 'adaptive' states, then there is no reason to expect that human cells would behave the same way as those of laboratory models. But if one believes that animal models, such as mice or chicks (much less flies and flatworms!), represent the human state, one is correspondingly less rigidly Darwinian.

The issue is a practical one. Regardless of one's views about natural selection, we know that our models are only approximate: but to what extent can we trust results from work with animal models? Many of us work with such models every day (in our case, with mice) and daily lab life can be very frustrating as a result, because it is easy to see that not even the animal models are internally consistent or invariant. But there is an even more profound issue here.

Whether we're working with animal models or taking some other approach, we design our research, and interpret our results, in light of what we accept ('believe'?) to be true. If what we accept is reasonably accurate, we can do our work without too much concern. But if our basic assumptions are far from the truth, we can be way off.

As we noted in another post, every scientist before today is wrong. We are always, to some extent, fishing in the dark. It's another frustration when you build a study around a bunch of published papers, and then discover that they've all copied one another's basic assumptions and, like the drunk looking for his keys under the lamp-post, have been exploring the same territory in ever-increasing, but perhaps ever more trivial, detail.

Yet in fact we can never know just where our assumptions may be wrong. On the other hand, we can't just design new experiments, presumably to advance knowledge beyond its current state, without in some sense building upon that state. How can one be freed of assumptions--such as interpreting data as if certain cell types cannot divide and replenish--and yet do useful research?

There is no easy answer, perhaps no answer at all except that we have to keep on plugging away unless or until we realize we're getting nowhere, or someone has a better, transformative idea. For the former, we're trapped in a research enterprise system that presses us to keep going just to keep the millwheels turning. For the latter, we have to wait for a stroke of luck, and those may come only once in a century or more.

Saturday, April 11, 2009

Appropriate vs. intrusive genetics I. Natural selection in humans

Human DNA sequences are being scrutinized and mined by many investigators, for anything that can be found in them. The amount and detail of new data now available is unprecedented in history, so the interest is natural. Much can be said about these data, and there are countless papers being written about them. But in this enthusiasm much less is being said about when and whether interpretation becomes over-interpretation, and when these genetic investigations have non-trivial potential to be harmful to the society that is paying for the research. So this is an appropriate place to voice some of the issues, as we see them, at least, and several areas now under intense investigation deserve some attention. This is the first of several posts on this subject.

1. The search for evidence of natural selection in the history of different modern human populations. Humans vary geographically in lots of ways. 'Race' is a traditional term for that variation, generally based on traits that are visually obvious. However, the term race has properly been discredited: it is usually greatly oversimplified and over-categorical, it is historically and notoriously difficult to define with any scientific rigor, and it has been used to discriminate against people in the worst ways. Skin color is the classical 'race' trait, and it certainly varies globally for genetic reasons. But Victorian anthropologists spent a lot of time classifying and identifying additional race-traits. These were often given adaptive explanations, and eugenicists and the Nazis put such explanations into political practice, with tragic consequences. The same traits are still used today to characterize human populations.

As a reaction to World War II, most scientists became restrained in the pursuit of 'race' biology. Some work on racial variation was done with beneficent intent, such as studies of genetic resistance to malaria in sickle-cell and other diseases. Nonetheless, a few mavericks insisted on racial genetic studies, especially in what is known as 'behavior genetics', often, not surprisingly, focused on socially sensitive traits, like criminality (defined by the upper class as prohibited lower-class acts). Above all, though, always in the background, or perhaps the basement, were studies of race differences in intelligence. That hasn't gone away.

Searches for selection among human groups by definition identify what is 'good', in the sense of having been favored, vs what is 'bad', that which was disfavored by Nature. The facts that all groups still vary greatly, that the 'bad' is still here (e.g., in the genes of some inferior 'races'?), and that the overlap between groups is usually much greater than their average difference (as in the figure) are important, conveniently overlooked, subtle issues beyond the space of this posting. It's all too easy to say that, genetically, blacks are this, whites are that, or males are this, females are that.
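To put a rough number on what 'overlap much greater than average difference' means, here is a minimal sketch with entirely made-up values (not data from any real study): two groups whose trait means differ by half a standard deviation still share roughly 80% of their distributions.

```python
from scipy.stats import norm

# Hypothetical example: two equal-variance normal distributions whose
# means differ by half a standard deviation.
mean_a, mean_b, sd = 0.0, 0.5, 1.0

# Overlapping coefficient: the area shared by the two normal curves.
# For equal-variance normals it equals 2 * Phi(-|difference| / (2 * sd)).
overlap = 2 * norm.cdf(-abs(mean_b - mean_a) / (2 * sd))

print(f"mean difference: {abs(mean_b - mean_a) / sd:.1f} sd")
print(f"distributional overlap: {overlap:.0%}")  # roughly 80%
```

The group averages differ, yet most individuals in either group have counterparts in the other; that is the part of the story that tends to get dropped.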

In the current fervor for things genetic, searches for racial differences in various traits have been creeping back toward respectability. Often couched in terms of 'disease', which can be a transparent way to rationalize grant support for such studies or to market race-specific medicine, the stand-by racial traits, including visible ones, are again being discussed.

Humans within as well as between groups differ in essentially every trait. Most traits have substantial heritability or familial aggregation, suggesting that genetic variation contributes to them. In principle, it can be important to understand the specific genes involved. When this is done in regard to rabbits, butterflies, or plants nobody cares and good knowledge can come of it (though explanations are often overstated or oversimplified).

But when this same kind of searching is applied to humans, usually from samples collected with prior conceptions of geographic variation in mind (that is, sampling is from the classical 'races', even if subconsciously or without ever saying so explicitly), it is not just detached science: it becomes relevant to the societies that are paying for the research. The work can easily become, and often is, related to value judgments about who's good and who's bad, who's better and who's worse in this or that group, which group is more advanced than which other group, or where to put supportive or repressive societal effort, such as racial profiling in medicine, investment in education, or forensic genetics.

So, there is an important issue of what kinds of science should be acceptable or paid for, and what kinds of samples are legitimate--and who should decide. We should not be captives of past history, but history shows what monsters can be loosed by acquiescing in whatever scientists want to do. It is naive or self-interested to pretend that we in the 'civilized' world are above repetitions of past racist disasters--even if they would, if they occurred, take somewhat different form.

If, regardless of intent, there is reasonable potential for claims of findings to lead to discrimination, then it is reasonable to suggest that this kind of work should not be done. There are, after all, all sorts of research that IRBs (Institutional Review Boards, whose job is to oversee university research) don't approve. There are historical precedents for the consequences of looking the other way rather than speaking up and opposing such work, even though science censorship raises problems of its own. And yet we each have our own ideas about what is legitimate, what is downright good, interesting, or great to do, and what should not be done.

In future posts we will discuss how these issues apply directly to forensic genetics, genetic ancestry estimation, and phenotype prediction (of disease, or of what a face looks like, etc.).

Wednesday, April 8, 2009

The 9/12 Syndrome

On September 11th, 2001 -- now known universally as 9/11 -- the dreadful assault on the Trade Towers in New York took place. The reactions on the next day, 9/12, were interesting and revealing about the nature of human behavior, at least in our society.

Those who on 9/10 had criticized President Bush for being an international bully and a sabre-rattler said that the tragedy of 9/11 showed that the US needed to rejoin the world as a constructive partner of other nations, one working toward international understanding. But those who on 9/10 were feeling the US's power said that 9/11 showed that, as they had said all along, the US should become more decisive and unilateral in its international actions, less accommodating to the 'soft' countries of the world.

In other words: 9/11's tragedy changed nobody's mind.

It is true in all areas of human life, perhaps, that we seek tribal affiliations that we are loath to break. Religion, political party, school allegiance, and various belief systems are clung to fervently, and the same facts are routinely used to justify the unchanging position.

The same is largely true in science. Unless a new discovery really forces everyone to change their views, or provides a new tool, toy, or me-too rationale for a new kind of research, scientists cling to their views.

We have a remarkable degree of free speech in this country, and science too has so many avenues for expressing ideas that even the wackiest can see the light of print (or blog!). Despite alternatives being available and knowable to those who care to know, the herd's belief system--what everyone is doing at any given time, or the prevailing theory--is largely impervious to critics or skeptics. As in any 'tribal' setting of this kind, new ideas, much less a new tribe, are threats to the current order. In science the threat is seen partly as undermining the grant base, the publication base, or, just as importantly, the base of feeling that we as scientists understand the world profoundly and are documenting that understanding.

It is as difficult to be open-minded in science as in any other area. Our mythology is that science always challenges its assumptions, but the truth is much more that we cling hard to our beliefs. We design studies to confirm hypotheses that we like, we skirt around evidence that's not supportive, and we set up straw-man 'null' hypotheses that we know in advance we can shoot down. As a consequence, the rejection of the null is usually not as persuasive as claimed (however impressive the significance level, or p value, may look).
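A tiny simulation makes the point; this is only a sketch of the general statistical issue, with invented numbers, not a description of any particular study. With a large enough sample, even a negligible departure from a straw-man null yields an impressively small p value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented setup: the 'null' says the effect is exactly zero, but the true
# effect is tiny (0.02 standard deviations) -- practically negligible.
true_effect, n = 0.02, 100_000
sample = rng.normal(loc=true_effect, scale=1.0, size=n)

# With this many observations the straw-man null is almost surely rejected
# with a very small p value, even though the effect explains almost nothing.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"estimated effect: {sample.mean():.3f} sd, p = {p_value:.1e}")
```

The null is duly shot down, but nothing of substance has been learned about what actually drives the trait.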

The Nature-Nurture cycle is a good example of this clinging to belief. In any era there are those who believe in biological (these days that means genetic) inherency: you are what you inherited. And there is the Nurture crowd, who believe you are what you experience. Both views are expressed openly (in free societies), but a given era mainly has an ear for one or the other dogma. Darwinian selectionism and genetic determinism represent the inherency view, and its downsides are racism and social discrimination. Pure environmentalism is the Nurture view, stressing free will and malleability, often invoked to justify the redistribution of wealth--a big downside for those whose earned wealth is being redistributed to others.

We ourselves tend unapologetically to be on the skeptical side in many areas of modern science, as our book, this blog, and our other writing show. We try to stay away from a polarized, much less ideological, position, though it is difficult for anyone to do that completely. Like anyone else, we want to be right! But a major objective issue is that it takes a lot of time and wasted funds as the ship of science changes course from time to time, in the tidal flow of new discoveries but also of changing vested interests. The 9/12 syndrome delays such changes and makes them more expensive, even when the evidence supports the change.

Scientists are not immune from the 9/12 syndrome: we tend to see the world as we want to see it.

The Lobbyists: the emics and etics of our culture

Much of what human culture is all about has to do with the distribution of resources--material resources such as wealth or property, and psychological resources such as power and prestige. Anthropologists studying the world's populations routinely observe what people actually do, and ask people what they believe that they do. The first, what a culture looks like to an external observer, referred to as 'etics' in anthropology-speak, is usually not the same as the latter, the insider's view, the 'emics'. People may routinely act in ways that deviate from the accepted tenets of their culture, for various reasons including self-interest, and this is often rationalized by the 'deviant'.

Anthropology has a long tradition of labeling cultures by some major feature--'The Basketmakers', 'The Fierce People', and so on. While anthropology is popularly seen as the study of the exotic 'other', the same principles of analysis also apply to our own culture. These days, one might refer to the US and other industrialized cultures as "The Lobbyists". What we do is organize, posture, dissemble, advocate, pressure, and persuade to gain preferential access to resources. Scientists may be among the most educated people in our society (according to some definitions of 'educated', at least), but we are not exempt from emic-etic differences.

Lobbying for research funds is part of our system. Lobbying includes providing, stressing, repeating and so on, our reasons why this or that particular project that we want to do should be funded. There is always an emic element--some justification of the argument in terms of our beliefs (e.g., that this will lead to major health advances). But the facts are routinely stretched, dissembled, and strategized in order that we, rather than somebody else, will corner the resources. We even give our graduate students courses in 'grantsmanship' which often if not typically amounts to teaching how to manipulate the funding system--it's certainly not about how to share funding resources!

We, your bloggers, often complain about the kind of science that is being funded. Our reasons are not the sour grapes of being deprived of resources (we have done well for decades in that regard), but unhappiness with the hypocrisy and self-interest intrinsic to the system. We think that is not good for society, and not good for science.

From an emic point of view, our complaining may be OK--what science is doesn't match what it is supposed to be! But our complaints probably reflect a poor acceptance of the etics of the situation on our part--science works like all other systems for sequestering resources, and we should not expect it to be perfect or in perfect synch with its emics.

Anthropologists are trained to try to be detached when evaluating a culture, even their own. From a detached, anthropological point of view, our system (our mix of emics and etics) is what it is. As anthropologists, perhaps we should learn to accept these realities, rather than complain about them as if emics could ever be identical to etics, which they never are. Whether the discrepancy in relation to science and its lobbying is serious and damaging to society--or, despite its lack of complete honesty, actually good for society--are interesting and important questions that themselves require one to specify what is good, and for whom.

In understanding our culture as The Lobbyists, we should not be surprised at its nature: we understand how it is, and the game is open to all to play. We are as free to dissemble as anyone, and we can dive after funding resources as greedily as anyone. In fact, the players generally (if privately) recognize the nature of the game. In that sense the rules are known so the game is fair, as games go.

Still, we have not been able to accommodate our views on science to the etics. We try to cling to our emics, thinking that science should be more honest and free of vested self-interest or greed. None of this takes away from our, or anyone's, skepticism about what is being said or done in science these days. And if the science is distorted because of its material or psychological venality, it is fair game for criticism--it may be that only if at least a few point out the emic-etic disparities will things be adjusted to stay within societally accepted limits. Even so, we should probably just learn to accept that we, too, are part of The Lobbyist society!

Since the deadline is nearing, we have to end this blog, so we can get back to work on our stimulus-package grant application.

Tuesday, April 7, 2009

Proliferative differentiation and interaction

Our book examines the nature of cooperation at all levels, from molecules on up (although in fact we did not say much about social cooperation), so it is interesting that a variety of scientists are reacting to the excessive focus on competition in biological thinking (even today, political columnist David Brooks discusses cooperation, in the sense of social and moral behavior, in the NY Times). The centrality of cooperation is our basic theme, and we agree with Mr Zimmer (see the post below) and with Brooks on its importance.

It should be said that the defenders of Darwinian orthodoxy generally argue that, one way or another and call it what you will, if something proliferates then something else doesn't, that this is called 'natural selection', and that it is the very definition of 'competition'. Since 'competition' is a socially loaded word that is then used to justify aspects of our social structure, we prefer the equally loaded word 'cooperation' for what is actually going on most of the time--even if there is always some element of 'competition.' In our book we outline ways things can come about other than by classical selection, including drift, which is much more important to the evolution of traits (not just DNA sequence) than is usually credited. Cooperation is a much more accurate word for what goes on every day in every organism and every cell, even if, over very long time periods, the effects of competition and chance add up to divergence.
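To see how much evolutionary change pure chance can produce on its own, here is a minimal Wright-Fisher-style sketch (a standard textbook model, offered as our own toy illustration rather than anything from the book): a completely neutral variant, under no selection at all, still wanders in frequency and will eventually be fixed or lost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Wright-Fisher drift: a selectively neutral variant in a small population.
pop_size, generations = 100, 500   # 100 diploid individuals = 200 gene copies
freq = 0.5                         # starting allele frequency

trajectory = [freq]
for _ in range(generations):
    # Each generation, 2N gene copies are drawn at random from the parents.
    count = rng.binomial(2 * pop_size, freq)
    freq = count / (2 * pop_size)
    trajectory.append(freq)

# With no selection whatsoever, the frequency drifts, often all the way
# to 0 or 1 within a few hundred generations in a population this small.
print(f"final frequency after {generations} generations: {trajectory[-1]:.2f}")
```

Nothing 'won' here in any meaningful competitive sense; the outcome is just sampling noise accumulated over generations.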

We could unload this discussion somewhat if we used terms like 'differential proliferation' and 'interaction' instead of competition and cooperation.

The emergence of complexity

Carl Zimmer has a nice piece on the development of limbs in the Science section of today's New York Times. He has the space to go into much more detail than Lewis Wolpert was able to on The Forum the other day, and what he describes illustrates the basic principles we've discussed, if not in so many words. He talks about the essential role of the signaling that goes on between cells to tell them what to do next, about contingency (the fact that what a cell does next depends on what it has just done), about cooperation at many levels, and about modularity and chance. A pretty simple description of how complexity arises.

Monday, April 6, 2009

Every scientist since yesterday.....

One objective of science is to unlock Nature's 'secrets', and there is a natural hunger to be among those who see most deeply what others have not seen before. That's why our culture properly respects our Newtons and Darwins (and why there are priority squabbles). But (and as priority squabbles show), we have to lobby and promote our ideas to get them both recognized and accepted.

When we do that, especially if there has been financial investment in our ideas, we naturally tend to be defensive about them. We can easily back ourselves into a conceptual corner in doing so. Being wrong is not what our ethos is all about. Defending dated ideas is not good for science, but it is largely the way science actually works, until a better idea forces earlier ones off the stage. This was a central point in Thomas Kuhn's analysis of scientific 'revolutions' (like the Darwinian one of which we're the beneficiaries).

But being wrong and having only imperfect knowledge is part of the game. As put in a cogent quote in a very fine recent biography of Ernst Haeckel by Robert Richards (The Tragic Sense of Life, 2009, U Chicago Press), 'every scientist since yesterday' has been wrong. If there's a lesson here for all of us, it would--or at least should--be to be more humble in promoting our favorite ideas. They are all wrong, in one way or another.

Unfortunately, imperfection is the gap that opponents of science itself often use as a wedge to dislodge an understanding of the world from its empirical foundations. In this case, I've been barraged with listserv messages from a group of people (mainly scientists of various kinds) who support a theological interpretation of life by hammering away at the imperfections, and excessive claims, of evolutionary biologists. They use various arguments, but mainly the false syllogism that because evolutionary biologists don't know everything, they must be wrong about evolution....and therefore some God-based explanation must be right.

In genetics and evolution there are many unknowns, and we tend to minimize them (except those that help us in a grant application), and overstate or oversimplify our own particular worldviews. We are doubtlessly wrong in many ways, but it is not true that every scientist since yesterday was completely wrong, and there can be little doubt that we understand Nature much better today than we did yesterday.

Life is a tough subject to study, and we should be more careful about what we don't know and about the range of plausible explanations for our phenomena. But it is also true that what we don't know is not evidence for some specific alternative theory, religious or otherwise. Scientific theories may always be underdetermined (more than one explanation being consistent with the available facts), but there is nonetheless likely to be some truth out there, and it must be compatible with those same facts. We should do our best not to shun or exclude alternative ideas, while at the same time defending the nature of science as an imperfect attempt to understand Nature that needs a coherent operating framework.

It is, in fact, remarkable that blobs of protoplasm, called 'humans', could have evolved to have even the level of ability to understand Nature that we have. Since every scientist since yesterday has been wrong in one way or another, the Aristotelian kind of argument that we evolved to have a correct intuitive understanding of Nature does not account for our species' abilities. Indeed, we evolved by doing what we needed to do, so it is likely that we would have cognition at least consistent with the relevant subset of the nature of Nature. Beyond that is the remarkable fact that, fallible though we are, the evolution of general problem-solving ability has led us to go so deeply beyond the specifics of our past survival challenges.

Saturday, April 4, 2009

Credible research

Marion Nestle, Professor of Nutrition and Food Studies at NYU, was on campus last week to speak, sponsored by the Penn State Rock Ethics Institute. Nestle is the author of a number of popular books about the politics of food, and an outspoken critic of the influence of the food industry on how and what we eat, and thus on the health of the American population. She's particularly concerned with obesity in children and the role of advertising in promoting the consumption of excess calories, even in children as young as two. She believes that any money researchers take from the food industry is tainted money. Her point is that it's impossible for a scientist to do unbiased research, however well-intentioned, if the money comes from a funder that stands to gain from the findings. Indeed, it has been found that results are significantly more likely to favor the funder when research is paid for by industry.

The same can be, and has been, said about the pharmaceutical industry and drug research, of course, and, though we don't know the particulars, it has to be equally true of chemistry or rehab or finance or fashion design. But, as we hope our posts about lobbying last week make clear, the problem of potentially tainted research doesn't start and stop with the involvement of money from industry. Research done with public money can be just as indebted to vested interests, its credibility just as questionable. It can be somewhat different, because researchers tend not to feel indebted to the actual source of the money -- the taxpayer -- but research done on the public dollar can be just as likely to confirm the idea or approach the funding agency supports.

Even when money isn't the motivation, there are many reasons that research might not be free from bias -- the rush to publish, the desire to be promoted or get a pay raise, commitment to given results, prior assumptions, unwillingness to be shown wrong. Many prominent journals won't publish negative results, and of course journals and the media like to tout if not exaggerate positive findings. There is pressure to make positive findings -- and quickly -- to use to get one's next grant (and salary). This is one reason it is commonly said that one applies for funds to do what's already been done. This makes science very conservative and incremental, when careers literally depend on the march of funding, no matter what its source.

Besides the pressure to conform and play it safe, a serious problem is that such bias doesn't necessarily make the science wrong, but it does make it more difficult to know how or where it's most accurate and worthy. And it can stifle innovative, truly creative thinking. Some of the most important results are likely to be negative results, because they can tell us what isn't true or important, and guide us to what is. But that isn't necessarily what sponsors, especially corporate sponsors, want, and it isn't what journals are likely to publish.

So, while it's essential, as Marion Nestle and others consistently point out, to eliminate the taint of vested interest from research, it's impossible to rid research of all possible sources of bias. And the reality is, at least for our current time, that it's only the fringe of those most secure in their jobs who can speak out about the issues (as Nestle said, she has tenure and doesn't need money to do her work, so she can say anything she wants) -- and they do not have the leverage to change the biases built into our bottom-line, market- and career-driven system.

Friday, April 3, 2009

How does a cell know what to become?

This week on The Forum, a BBC World Service radio program, Lewis Wolpert, a distinguished Emeritus Professor of Cell and Developmental Biology at University College London, was interviewed along with two non-scientists. Prof. Wolpert was asked to explain development, and how cells 'know' what kind of cell they will be. The interviewer, Bridget Kendall, is quite well-versed in scientific issues, but when she asked Wolpert to tell her how a cell knows what it will become, his answer, though partly right (and he certainly knows enough to answer the question), was in the end quite unsatisfying, and it confused the interviewer as well as her other guests.

Cells talk to each other, Wolpert said. It's to do with signaling. And nobody is in charge.

So far, so good. Cells have to be prepared to receive a signal, and in normal development they have been primed, usually by earlier signals, to respond appropriately.

But then Kendall asked how cells arrange themselves in a certain pattern. How does a cell know it should be in the right or the left hand? A basic and fascinating question, the likes of which has hooked many a developmental biologist.

There is no fundamental difference between the right and left hand, he said.

This didn't help at all. Kendall pressed him.

Cells get instructions from other cells about what to do, he said.

Now one of the other guests was confused. Understandably. He wanted to know how chaos ends in order if no one is in charge. "There has to be a blueprint somewhere so that a human doesn't end up a frog."

"That's the cleverness of cells," Wolpert said. "There is no blueprint whatsoever." He was adamant about this. It's due to genes that a human cell becomes a human and not a frog, he went on to explain. Bringing us frustratingly back to Kendall's first question of how cells develop.

And, indeed, the listener could be forgiven for not quite being able to tell the difference between genes and a blueprint. Genes, Wolpert allows, are important (though boring): they tell a cell whether it's to be a human or a frog. A blueprint is an outside document that tells a builder whether to construct a skyscraper or a factory, and Wolpert categorically denied that it is a useful metaphor for development.

But are genes a blueprint for an organism? Certainly not literally -- unlike a blueprint, an organism has no designer, for starters. And much more is inherited along with genes (by which is usually meant the classical protein-coding segments of DNA, which make up only a few percent of the genome, after all, and by no means all of the kinds of functional elements in genomes), so genes alone don't tell a cell what to become.

Is the whole genome the blueprint, then? Still no, since the fertilized egg contains more than DNA, and environmental factors have a significant influence over how a cell develops -- incubation temperature determines the sex of a developing turtle egg, for example. But the genes in a human cell can't instruct the cell to become a frog, so in some metaphorical sense they are a blueprint. Unlike a blueprint, though, the DNA does not come into an awaiting cell and tell it what to do: a new organism begins as a complete cell, with its DNA and the other cellular materials that interpret that DNA.

How does a cell know what to become?

Wolpert was right that it depends on signaling, but it would have helped if he had gone on to say that signaling happens in order, and that what a cell does next is contingent on what it has just done. Step by step, cells all over the embryo are single-mindedly, so to speak, responding independently to different signals, each one oblivious to what's happening even several cells away. Signal upon signal, response after response, cell division upon cell division, all these steps combined lead to differentiated, semi-autonomous cells all working together to make an organism. Preparing to respond to signals, and then responding, is what cells do.

It's fairly simple -- unless you're concerned with how one cell becomes part of the thyroid gland and another becomes part of the retina of an eye, and you want to know the specific genetic and timing details. Otherwise, it's enough to know that genes code for the proteins that become signals, for the cell-surface receptors that read those signals, and for the machinery that responds. Cells know nothing about the bigger picture, and there is no master painter, but step by step, because of contingency and cooperation among cells, the bigger picture emerges. These generalizations are, in fact, rather universal and reflect basic properties of the nature of life -- what we might call parts of a broader theory of life.
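To make 'contingent, local responses with nothing in charge' concrete, here is a minimal toy sketch (our own made-up rules, not anything Wolpert or Zimmer described): each cell reads only the signal from its immediate neighbor and acts on what it has just done, yet an orderly pattern sweeps across the whole row of cells with no blueprint anywhere.

```python
# A toy model of contingent, local signaling: '.' = uncommitted,
# 'S' = signaling, 'D' = differentiated.
def develop(n_cells, n_steps):
    symbols = {"uncommitted": ".", "signaling": "S", "differentiated": "D"}
    states = ["uncommitted"] * n_cells
    states[0] = "signaling"  # a single 'organizer' cell starts things off

    for step in range(n_steps):
        print(f"step {step:2d}: " + "".join(symbols[s] for s in states))
        new_states = states[:]
        for i, state in enumerate(states):
            left = states[i - 1] if i > 0 else None
            if state == "uncommitted" and left == "signaling":
                new_states[i] = "signaling"       # respond to a neighbor's signal
            elif state == "signaling":
                new_states[i] = "differentiated"  # contingent on its own last step
        states = new_states

# No blueprint and nobody in charge, yet an ordered wave of differentiation
# moves across the 'embryo', one local response at a time.
develop(n_cells=10, n_steps=8)
```

Real development involves vastly more signals, dimensions, and feedback, but the logic -- prepare, respond, and let order emerge from local steps -- is the same kind of thing.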