Wednesday, October 31, 2012

Hurricane Sandy and the collapse of the Maya

On our whacked society
So, there are those enlightened members among us who have declared in their wisdom that Hurricane Sandy is God's punishment for our tolerance of gays, for Obama (naturally--he's the wrong color), for our loose morals, and I forget what else (maybe too many liberals).  What an insight!

This hurricane sent by divine wisdom to punish us for allowing gay marriage was not aimed very precisely, however, and one wonders at His supposed omniscience.  After all, the people displaced or killed were mainly not gay couples or people sinning by the various means suggested.  They were just folks, minding their own business.  That suggests a God who's like a surgeon working with a band saw.  Can't He target His gales and floods to those who really deserve it?  I mean, some whole states that are under water and wind don't even allow the mortal sin of gay marriage.  We should expect more skill from the entity supposed to be our savior!

Why do we bother to maintain schools in this country?  Obviously, the lessons are not sinking in.  After all, 40% or more of our citizenry thinks evolution didn't happen, and likens Darwin's ideas to misleading fantasies such as the Easter bunny (like fossils, those eggs in your lawn on Easter were just laid there to mislead you).  Maybe God Himself hasn't got much of an education, as His skill level in aiming floods and hurricanes is about equivalent to that of many people who have college degrees these days, perhaps reflecting the slipping standards we in the profession have tolerated.  One begins to wonder whether the Noachian flood was really meant just to drown the Pharaoh's troops or something.  Maybe the Red Sea parted too early and should have drowned the Jews.  Who knows what blunders have been recorded as miracles?

On the other hand, there are those who are saying, after this recent flood, "WHY AREN'T PEOPLE PAYING ATTENTION??  THIS PROVES WHAT WE'VE BEEN SAYING ABOUT GLOBAL WARMING!!"  Well, this, too, reeks of ideology and tribalism, the tendency to see in an event support for one's favorite group's pre-existing biases.  Does the tidal surge that caused so much damage on the East Coast bode poorly for the future, on the grounds of human-induced global warming?  In fact, that does seem to us to be plausible -- certainly more plausible than the idea that the hurricane was God's wrath -- and the extent of the damage could be viewed as a warning sign.  Some analysis seems to suggest that, as a result of global warming, the tide was higher by a foot or so than it would have been from the same storm a century ago.  Indeed, the equation that suggests that climate change is exacerbating extreme weather takes into account more moisture in the air, warmer ocean temperatures, and so on -- here's an explanation of this.  But one swallow doesn't make a spring, and whether this one storm should be given much if any weight, on its own, in the global warming dialogue is a relevant question.

The Mayan collapse
The climate-change argument to account for Sandy's destructiveness raises an interesting thing to think about, over and above whether it is evidence of human-caused climate change (or God's wrath).

Archeologists wonder and debate about the causes of the disappearance of major civilizations, of which there are many instances.  One on the American landscape was the collapse of major Mayan urban centers in Central America.  These had declined and largely been deserted, at least as major urban centers, by the time the Spanish arrived--that is, they were not, as were the Aztecs, done in by Spanish swords or by new diseases.

What caused the abandonment of the Mayan cities?  Was it war, a catastrophic event, a decimating conquest?  Or was it gradual, over many generations, due to the deterioration of the soil or climate for supporting agriculture?  If slow, did the Mayans even know it was happening, or did each generation just do as best it could, not realizing the decline in population and so on?

We might get a hint from Sandy and global warming debates.  What if there are more and more frequent incidents like Sandy in the next few years?  Unlike priests who may blame this on the cursed sinfulness of New Yorkers (which is a true characterization of them even if unrelated to hurricanes), climate scientists might correctly blame it on global warming.  At some point, even current skeptics would begin wondering what to do to keep from ruining their shoes several times a year.

A major company might be the first to decide that, given the price of sandbags, the cost of staying in Manhattan was no longer good business, and would move to, say, Cleveland, where such disasters don't seem imminent.  Banks and the stock exchange might decide to move to greener pastures, so to speak, like Omaha, where Warren Buffett lives and seems to have done very well.  The entertainment media might move to Dallas or Minneapolis.  Once enough major companies did this, in the following decade other large entities might follow suit.  Starbucks and McD would be out of customers, and would close their doors.  Nobody would be there to see Macy's Christmas displays, so they'd move to St Louis.  There could be a kind of sheep-like trend-following by which those entities just decided to do as others have done, with or without good reasons of their own.  Or, it could become apparent that the gravitational center of the whatever-business universe had moved, and that anyone hoping to make it in that business had to move to where the action was.

Fencing off abandoned buildings, closing some subway lines or even, say, the lower tip of Manhattan, could lead even more people to move away from the City.  After a few decades, New York could become a virtual ghost town.

Of course, we keep extensive records of what we're doing, so unlike our difficulty reconstructing what happened in Central America, future historians of the 4th millennium might know what happened here directly.  Or, records could be lost and they might have to make the kinds of guesses about New York that archeologists today make about the Mayans:  What happened to the New York civilization to lead to its collapse?  Indeed, if the climate argument is correct, the east coast concentrations of buildings might go the way of Atlantis, submerged into mythology.

Whatever the reason for what Sandy did, if there is any particular long-term reason, it may give a hint of how events that everyone was aware of at the time led to the gradual abandonment of former civilization centers -- an abandonment that, thousands of years later, could look like a sudden catastrophe.

Maybe the Mayans blamed their droughts on God's anger, too.  But whether the reason was that  they had allowed gay marriage is something one can only speculate about.

Tuesday, October 30, 2012

Science funding: what it's really (or at least largely) about

An Op-Ed in Monday's hurricane-nervous NY Times is a plea for more federal science research funding.  It's by a former science adviser and of course it's an advocacy piece.  It attempts to show the benefits to society of university-based, federally sponsored research--the usual claim that this leads to new medicines, cleaner energy, and more science jobs.

Of course, these things are true in principle and in some instances actually true.  We'd create more science jobs if we actually did our duty with respect to raising the expectations and standards for our educational programs, especially for undergraduates (but that's not what professors' jobs are all about any more).  The extent to which we really generate usable research that leads to products is something we don't know much about, because industry loves foisting its responsibility (to do research for its own products) off on the public, while still being secretive and competitive, so that most of the key research is done in house.  Pharmas are trying to cooperate in their support for basic findings, which will then be turned over to their own private (and secret) value-added research.

By buying research from universities, companies impose various levels of privatization of the results (and commercial incentives for faculty), which undermines the proper role of public institutions, and in some ways actually privatizes public research.

Federal research is, in a naturally expectable way, bureaucratized to give it inertia, so that program officers' portfolios are stable or enlarged, and prominent investigators' labs (and their universities' general funds) have continuity.  That may sound good, but it means safe, incremental work without any serious level of accountability for producing what one promised (here, we refer not to miracle results, which can't be promised, but to things like adequate statistical power to detect what one proposes to detect, or the advances in disease therapy that are promised in grant applications).

One can always crab about this waste of funding generated by the way we know how to work the system in our favor.  We ourselves have been regularly funded for decades, so this post is not a matter of sour grapes on our part.  But there is, from an anthropological point of view, a broader truth--one that shows in a way the difference between what are called a culture's emics and its etics.  The emics are what we say we're all about, and the etics are what an observer can see we are really up to.  Here, part of the usually unspoken truth about huge government investments is that citizens are given promises by the priests (the recipients) of some specific good in return for investment.  But the momentum and inertia are largely for a different reason.  As the Times author says:
Moreover, the $3.8 billion taxpayers invested in the Human Genome Project between 1988 and 2003 helped create and drive $796 billion in economic activity by industries that now depend on the advances achieved in genetics, according to the Battelle Memorial Institute, a nonprofit group that supports research for the industry. 

So science investments not only created jobs in new industries of the time, like the Internet and nanotechnology, but also the rising tax revenues that made budget surpluses possible.
This is a post-hoc rationale: one can always look backwards and identify successes, and thus try to justify the expense and its continuation, but one is not usually compelled to argue what else, or what better, could have been done by government--or by taxpayers keeping their money--had the policies been different.

At the same time, it's a very legitimate argument.  If science investment doesn't lead to a single real advance in health or energy efficiency, but does lead to jobs for lots of people--not just the scientists, but the people who make, transport, market, advertise, and design their gear, their reagents, even their desks and computers--then those funds are circulating in society and in that sense doing good.

It's a poor kind of justification for the investment relative to its purported purpose.  But life is complex.  Sequencing machines or enzymes or petri-dishes are made by people.  The challenge to identify real societal needs (or to decentralize) and achieve success without just building self-interested groups and bureaucracy is a major one.  Often it leads to disasters, like wars or poor agricultural management, and so on.  But it also is part of the engine of a society, whatever that society's emic delusions about what they're up to may be.

Monday, October 29, 2012

The microbiome: competition or cooperation, adaptation or adaptability?

We're just now getting around to blogging about a Perspectives piece in the Oct 12 Science called "Animal Behavior and the Microbiome" by Vanessa Ezenwa et al.  It's an overview of current thinking about the role microorganisms play in animal behavior.  The Human Microbiome Project, documenting the extent of such organisms in humans and the essential role these guys play in human health and disease, has found that the genes in the trillions of microorganisms with which we share our bodies outnumber ours by 100 to 1.

Since at least some of these are necessary for life, one offshoot of learning about this is to ask what 'the' human genome really is.  Most bacteria we know of, like the ones in our gut, have to do with rather prosaic, if vital, physiology such as digestion.  These are interesting and important, but they don't involve more sensitive issues such as our personal identity -- our behavior.  The role of microbes in animal behavior is just beginning to be understood, and it may be more profound than had been thought. 

Kudzu bug; Wikipedia
For example, as described in the paper, "the Kudzu bug (Megacopta cribraria), an agricultural pest, is born without any symbionts (species with which it has a mutually necessary affiliation for survival). After birth it acquires a specific symbiont from bacterial capsules left by its mother. If these capsules are removed, the bugs show dramatic wandering behaviors, presumably to search for symbiont capsules left with nearby eggs."

Or, bumble bees acquire gut microbiota either through contact with nest mates or by feeding on feces containing the microbiota required by the gut. Bees without these microbiota were more susceptible to a bumble bee parasite, Crithidia bombi. Fruit flies that share the same diet-acquired microbiota are much more likely to mate with each other than with those that don't.  And then there's the zombie ant, infected by killer fungi, and the rats -- and cat ladies -- infected by Toxoplasma gondii, both of which we described here.  The examples go on and on.

But what does the recognition that we don't go through life alone mean for the usual understanding of social context, ecosystems and the evolution of behavior?  It's tempting to suggest that these are examples of exquisitely fine-tuned co-evolution, and the usual darwinian interpretation would be that every organism is out for itself, selfishly hijacking another's gut, brain, feces, nasal passages, skin, eyes, now manipulating their behavior -- any and everything -- to make a living.  And needing to out-compete all the other microbes fighting for the same territory.  But don't get too greedy or you'll kill your host and then you're in trouble too.  (Reminiscent of how humans feel about climate change -- we have to save the planet so we can continue to exploit it ourselves.) 

But this is rather a stretch, really, and depends on fitting the facts to a preconceived view of the purpose of organismal interactions (apply our take on why people believe microbes will be found on Mars here).  And that preconceived view is that life is all about selfishness, exploitation and competition.

But there's an alternative view, and that is that what this represents is cooperation, one of the fundamental principles of life that we've often written about here and in our book MT.  It's a principle that requires abandoning the long-held belief in the primacy of "survival of the fittest" because that very rarely happens.  A better description would be "failure of the frail" -- it's only the weakest organisms that can't reproduce; most everyone else does just fine.  Plus, much of survival depends to a large degree on luck and has nothing to do with genes or competition or your ability to outwit your neighbor.

So, this Russian doll kind of life-within-life-within-life that's being catalogued is an ongoing documentation of the centrality of cooperation in life.  There's surely some adaptation going on -- the bumble bee is better off without Crithidia bombi than with it, but 10-20% of worker bees in hives in the field have been shown to be infected, and bees have carried on; it's only now, when they're bombarded with multiple parasites and more, that it's a problem.  But the bee did not evolve to be infected with gut microbiota to fight off C. bombi; the bee evolved with the ability to host gut microbiota and to fight off the parasite, however that happens.

Further, some infection was survivable, and the parasite didn't need the bumble bee, because it's an equal opportunity infector, infecting other insects as well.  This brings up another fundamental principle of life, and that's adaptability.  Because it's ubiquitous, we believe adaptability is a characteristic of life that was present very early in evolution.  So, humans can't live without a gut full of microbiota, but the species that we host are widely variable: they change when we're ill or pregnant, we can kill them off in great numbers with antibiotics, we can add more with probiotics or natural exposures, and we're fine.  The same has to be true for other organisms.

One can say that what's here has to work, or at least to have worked successfully enough in the past to be here today.  But that's only a part of the biology, and there has been a tendency to focus more on how that evolved via competition, than on the interactions themselves.  How cooperation works is turning out to be an elegant but complex business.  Even if Darwinian explanations are 100% correct -- and there are reasons to temper such a view -- understanding how such things work today is in itself a challenge, and a very interesting one at that.  Though, perhaps our very interest in it is because of some microbe in our brains, that makes us sympathetic to the lives of microbes...

Friday, October 26, 2012

The true meaning of "candidate" genes revealed

The biology of behavior
I take a deep breath.  If you were here with me you'd have noticed that it was a breath signifying annoyance, annoyance over something I've just read.  The annoyance triggered my brain to trigger my diaphragm to contract and elevate my lower ribs and expand my thoracic cavity vertically, while at the same time my external intercostal muscles and interchondral muscles have elevated my upper rib cage to expand the width of my thoracic cavity to allow the intake of air.  The process is reversed as I breathe out.  It's a biological thing.  My annoyance also surely triggered hormonal releases of some sort as well, and other downstream reactions that may or may not catch up with me someday in the form of "stress-related illness."

But ok, that deep breath out of the way now, I start to type.  My brain has begun to formulate sentences in response to what I've just read, and now transferring those sentences to the screen requires a complex interplay between the parts of my brain that think (clearly or not) and the muscles that govern my fingers on the keyboard.

More biology.  Genes are firing all over the place, creating and controlling the complex interactions that make all this, and more, happen simultaneously without me making it happen or even being aware of what's going on.  That's because in many senses I'm nothing but an automaton controlled by my genetic makeup to respond to my environment with biological impulses. 

But I'm also eating a pear as I type.  My hunger is a biological drive -- a genetic predisposition, even -- to which I'm responding.  I have to eat or I can't fulfill my darwinian destiny to survive and reproduce.

But why am I eating a pear and not a durian fruit?  Or a betel nut?  Or a peanut butter and jelly sandwich on Wonder Bread?  Because durians and betel nuts aren't sold at my local grocery stores, or even my local farmers' markets, and I don't like Wonder Bread.  (Have you tried peanut butter and pickle sandwiches though?  My mother's favorite lunch, a taste passed down to me.)  So my clearly biological drive has to be satisfied in culturally specific ways, and based on my own personal taste.  In part taught to me by my mother. 

Let's go back to the source of my annoyance.  It's a commentary in this week's Nature: "Biology and ideology: the anatomy of politics," about how biology shapes our politics.
An increasing number of studies suggest that biology can exert a significant influence on political beliefs and behaviours. Biological factors including genes, hormone levels and neurotransmitter systems may partly shape people's attitudes on political issues such as welfare, immigration, same-sex marriage and war. And shrewd politicians might be able to take advantage of those biological levers through clever advertisements aimed at voters' primal emotions.
Many of the studies linking biology to politics remain controversial and unreplicated. But the overall body of evidence is growing and might alter how people think about their own and others' political attitudes.
Of course biology affects our beliefs and behaviors, in much the same ways that it affects what we choose to eat or our responses to things that annoy us, which in turn have been affected by our culture and upbringing.  The work described in this commentary annoys me, but it might strike you as perfectly fine.  We are biological beings, and everything we do and are is affected by genes and hormones and neurotransmitters.  But that is not the same as saying that everything we do is determined by our biology.  I eat a pear but not a durian fruit; if I'm a Southerner, I may have voted Democratic in my youth but Republican now.

Genes 'for' 
Genetics is now becoming deeply entrenched in the social sciences; economists, psychologists, sociologists, and political scientists are being seduced by the appeal of genetics and Darwin as they try to explain why humans do what they do.  Hell, evolutionary theory is even used to explain why characters in classic novels behave the way they do.

But this shows how little social scientists understand what genetics can actually tell us about complex traits like, well, like all the traits of interest to these disciplines.  We can't even find genes 'for' truly biological traits like type 1 diabetes or clinical depression, even if the commentary assumes we have:
The past few decades have seen a wave of research connecting genes to disorders such as schizophrenia, depression and alcoholism, and to complex outcomes such as sexual orientation and how far people progress in education.
Well, but we have made very little progress in geneticizing such things, and where we've found genetic variants that affect such traits, they usually have very little, and inconsistent, effect.  Indeed, when we throw in the effects of culture and learning, not to mention neuroplasticity of the brain, almost all bets are off in terms of identifying biological forces that shape what we believe or how we behave.

And then there's the question of just what phenotype is being measured anyway.  This is hard enough for biological traits -- what constitutes 'high' blood pressure, obesity, or autism?  Very large studies to find genes for obesity find essentially entirely different candidates depending on the measure being used (e.g., body mass index, waist-hip ratio), or on obesity-related traits like diabetes, hypertension, or Alzheimer's disease.  So how on Earth do you measure political belief in such a way that it is a proxy for some protein coded for by some gene?

Eugenics
What about this sentence, that you might have just glided over above:
...shrewd politicians might be able to take advantage of those biological levers through clever advertisements aimed at voters' primal emotions.
We've been down this road before.

Once Darwin gave people, especially the noble science class, the idea that Nature, not God, made people what they were, scientists rather than clergy came to believe that through their insight they could divine what was good and bad, that they (like clergy) were the ones who should evaluate who was naughty and who was nice, and indeed that they could help Nature out by doing something about the offenders.  Of course it was all for the good.  Just like clergy helping to save souls, scientists would help save society.  And since Darwin showed that our essence was not our soul but our biology, which within decades became genes, their term for their wise engineering was not absolution, but eugenics.

The temptation was of course to believe that they, through their science, could find out people's true essence, and recommend what to do about it.  All for the protection of society.  Does it sound a lot like the Inquisition, where clergy decided how to test, and judge, and engineer (get rid of) those who polluted the true and faithful?

Eugenics in its early form, which crept in on little cat's feet, led by the biomedical research establishment, was all for human good, of course.  But of course power corrupts and demagogues are always ready to use it, and scientists as we know very well from the past -- and today -- are easily co-opted by the hands that feed them.  And of course that kind of thinking, in its various guises, led to the Nazi exterminations (and, here and elsewhere, incarcerations, involuntary sterilizations, and the like).  It also led to the abuses of the Stalinist era called Lysenkoism, which was the inverse of Darwinism out of control.

Well, you might say, that was then and this is now.  Yes, the early eugenicists were the well-respected scientists and physicians, but they were misguided.  Now, we know better, and our scientists have only everybody (else's) good at heart, don't they?  Intrusive abuses couldn't happen any more, could they?

Thursday, October 25, 2012

More on Mars life: How are those little green men made?

Why the hype about life on Mars?
We've commented on the talk swirling around these days about the search for life on Mars.  We don't think there is any good evidence that there is any, or was any, such life.  But of course we have no way to know that....yet.

In part, NASA talks up the life-on-Mars scenario because it wants support for extended, extensive Mars missions, a big budget even to include sending people to explore, and eventually colonizing the Red Planet.  It's an understandable desire if you work for NASA, of course.  Nobody will support all this if the place is just a big rock with some polar ice caps.  But it goes beyond that.  It has to do with the truly interesting question: Are we alone in the universe?

If life is a commonplace occurrence, a chemical inevitability that occurs all over the place, it will heat up space exploration, and the public's willingness to pay for it. It will be driven by its interplay with philosophy, science, religion, and even the arts. 

What might be found?
Maybe the NASA hype machine will turn its very Disney-like graphical promotions and, yes, its extremely compelling actual footage from Curiosity and prior vehicles, onto some real, wriggling little green worm, at least.  If it does that, we will of course be mesmerized as will everyone else.

Source: Vincedevries
So, given the expectations that this 'life' is vastly most likely to be former life, what do those in the know expect to see?  Basically, it's carcasses of microbes.  This, as we suggested in our previous two posts on the topic, is based on the purported precedent of microbe-like forms in the ALH84001 meteorite, found in Antarctica in 1984, and on scenarios suggested even by reputable scientists that life on Earth and Mars followed highly parallel courses from the founding of the planets about 4.5 billion years ago, until geologic events killed it all off on Mars roughly 3.5 billion years ago.  Only remnants could be found today (though the hype includes the hope that maybe stuff is still alive in or under the ice or ground there).

The search initially will be for molecules compatible with having been part of life at that time.  That means the expectation that life there is like life here.  Either that's a kind of inevitability belief, or a belief that chemistry has proven that that's the only way life could exist, or it's just a total gullibility on the part of scientists for science fantasy.  If the latter, the only reason we don't still expect complex creatures like little green men is that we have already explored Mars, on the ground and all over the planet by satellite, and haven't seen any, nor even the tracks of their cars and houses (nor their spoors or footprints).

Scientists who seek organic molecules know (and sometimes, when being candid, acknowledge) that these kinds of molecules can arise in many ways and, indeed, have always rained down on Mars (and Earth) from space, having nothing at all to do with life.  But at least one scientist who should know better has suggested in an interview that the similarity between these atoms and molecules and the ingredients of DNA is a relevant fact.  As we noted, it takes a humongous leap of faith in parallelism to believe that Mars life would be based on a polymer coding system related to proteins as the basis for its organization.

So, let's suspend judgment and ask what we might find.  Of course, on Earth at least, DNA degrades too rapidly to be conserved for more than a few tens of thousands of years, which is why mammoth and Neandertal, but not dinosaur, DNA can be found.  So of course one can find some carbon and fantasize that into DNA, but let's pretend we core down into the Mars ice (as part of mining it for water to pipeline out to support our space colony there, as is being suggested by NASA's PR office), and do find some DNA.  What might it show?

Well, normally, DNA sequences are basically random strings of nucleotides.  That means there's no formula from the sequence itself by which knowing the nucleotide in one position can predict what the nucleotide will be at any other position.  Yet, bioinformatics is a highly sophisticated science that does identify very non-random aspects of DNA sequence--the location of genes, for example, and their regulatory regions along chromosomes.  We can do this because we can compare species' DNA, and we have a century of experiments that have revealed what bits of sequence do, and why they do it.

Part of this, for example, is the nucleotide code for amino acids, by which sets of 3 successive nucleotides in DNA code for specific amino acids and hence for the structure of proteins.  That is a primary function of DNA.  But on Mars, even granting the leap of assuming that DNA is to be expected, and that it will carry a protein code as well, could we read that code?  Is the code, and the set of available amino acids, the same as on Earth?  How much inevitability or parallelism would that imply?
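For readers unfamiliar with how the triplet code works, it can be sketched in a few lines.  The table below is a small (real) excerpt of Earth's standard genetic code; the larger point stands -- nothing guarantees that a hypothetical Mars genome would use this table, or any table at all.

```python
# A tiny excerpt of Earth's standard genetic code: each 3-nucleotide
# codon maps to one amino acid (one-letter symbols), '*' marks stop.
CODON_TABLE = {
    "ATG": "M",                                      # methionine, the usual start codon
    "TGG": "W",                                      # tryptophan
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine: 4-fold redundancy
    "TAA": "*", "TAG": "*", "TGA": "*",              # stop codons
}

def translate(dna):
    """Read successive codons until a stop codon or an unknown triplet."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None or aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCTTGGTAA"))  # -> MAW
```

Note the four synonymous alanine codons: that redundancy is one of the specific features of the code whose evolution is at issue.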

In fact, some analysis has suggested that, to some extent at least, the evolution of the code can be related to primitive RNA/amino acid interactions.  That has been used to explain the specific aspects of the code, like which triplet would code for which amino acid, and why the redundancy of the code is as it is.  But such ideas, even if correct, don't show that our system is inevitable.  To claim that would make what are almost anti-evolutionary assumptions: that evolution must follow a path that is predictable from the outset, down to its very specifics.

One way we relate species to each other, and infer genetic function, here on Earth is to align DNA sequences from different species to find corresponding regions--to show, for example, that we are closely related to mice, less so to alligators, and so on.  We can reconstruct the general phylogeny of all life that way, step by step.  On Mars, if we don't know the coding system, and if it were different from Earth's, it would be very difficult to make any sense of the fragmentary ancient DNA we would find there.  In fact, unless we struck some kind of miraculously preserved long stretches, and maybe also some corresponding RNA (which doesn't preserve well and is unlikely to be there), it would be nigh impossible to make sense of them--in terms of Earth life, at least.
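The comparison logic above can be caricatured in a few lines -- a toy percent-identity score, not a real alignment algorithm (which must handle insertions and deletions, among much else), and the "species" sequences here are invented purely for illustration:

```python
# Toy illustration of relating species by sequence similarity: score
# pairwise identity against a reference and rank the others by it.
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Made-up sequences, pre-aligned for simplicity.
seqs = {
    "human":     "ATGGCGTTAACC",
    "mouse":     "ATGGCGTTGACC",  # 1 mismatch vs. 'human'
    "alligator": "ATGACGCTGATC",  # 4 mismatches vs. 'human'
}

ranked = sorted((s for s in seqs if s != "human"),
                key=lambda s: identity(seqs["human"], seqs[s]),
                reverse=True)
print(ranked)  # -> ['mouse', 'alligator']
```

The whole exercise presupposes that we know which regions correspond and what the sequences mean -- exactly the knowledge we would lack for a Mars genome with an unknown, or absent, coding system.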

Suppose it's really there?
But, hell, let's be generous and suppose we found, for example, that Mars microbial DNA had very similar functions to Earth microbes' DNA.  Maybe they used similar respiration and metabolism and so on.  Maybe they had little rings of DNA, like what we call plasmids here on Earth, that can carry defenses against viruses!  What would the explanation be?

By far, the most likely explanation is not high parallelism or inevitability of Earth-like life!

The most likely explanation will be that Earth and Mars life share the same common ancestry, not that they arose independently.  We could compare Earth and Mars microbial DNA to estimate the date of that common origin, and perhaps from geological and cosmological evidence make a guess as to whether life started here, or there, and was transported by meteorites splattering off one planet and ending up on the other.  Or it might represent stuff raining down from space that, as some have long suggested rather fancifully, could survive the harsh space environment, with some origin 'out there' somewhere in inter-stellar space that we don't know about.  Geez, maybe the newly found planet around Alpha Centauri that has everyone so excited!

The same would be true if we inferred from Mars DNA that it used the same protein code, but we couldn't show species similarities to Earth microbes.  That's because, in 3.5 billion years Earth life has diverged from its primordial forms to the point that one could not argue that the difference from Mars life showed independent origin.

Finding DNA-driven life on Mars would be very interesting, of course, but would not support theories about its inevitability, or about life having to be based on DNA as a protein code; indeed, such a finding could work the other way--suggesting that the only way we get life is by being seeded from some unique source.  It won't answer questions about whether life as we know it is a predictable chemical phenomenon.  Or how common it is in the cosmos.

So as we said the other day, in the music analogy that started this little series of comments, the way ideas about extra-terrestrial life are embedded in our current science culture, often subtly and perhaps not in our awareness, is profound.  We mix wishful thinking with limited thinking.  We do it to advance interest, and knowledge, and science, based on how we know it today.  In that sense, we don't show enough respect for what we don't know.  Of course, if we did show that respect, perhaps we simply couldn't get anyone to pay for the exploring we want to do, because the paying public is at least as deeply rooted in our current culture as scientists are.

Tuesday, October 23, 2012

Music to your ears

A follow up on Martian life?
This post is a follow up of thoughts that were triggered by the things we hear about the search for life on Mars.  We said that speculation about how to look for life on Mars is couched in terms of what we know about life on Earth, and how we know it, in our present time.  Here is a bit of further musing on the nature of independent thinking--if there is such a thing.

The history of music.
One might think that the history of music is pretty far removed from anything useful to thinking about science, but in at least one important way it is quite relevant.  This was triggered by some reading we've been doing--just for interest--about music.  Here, we're referring to composed music rather than what people were singing in the shower or on the streets.

Bach's Violin Sonata No. 1 in G minor
Musical styles change through history.  Medieval music was church-based and had to follow lots of rules; it was almost strictly vocal.  Then, in the Renaissance, music became more secular, and more a mix of voice and instrumental accompaniment.  In the Baroque period, there were formal structures that were followed, by the likes of Bach and others.  And then came modern-style opera and orchestral music.

The genius of his time was Bach, but later it was Haydn, then Mozart, then Beethoven, then....  One can ask, if they had exchanged time periods, could Mozart or Beethoven have written the works they did?  The answer is no.

Albert Schweitzer (1875-1965) was a famous organist, theologian, philosopher, dedicated medical missionary in Africa--and a Nobel Peace Prize winner.  Among the things he did in his life was to write a detailed biography of Bach, first published in English in 1911.  I was browsing this, and came across the following:
The more we look into the development of things, in any field whatever, the more we become conscious that to each epoch there are set certain limits of knowledge, before which it has to come to a halt, and always at the very moment when it was apparently bound to advance to a higher and definitive knowledge that seemed just within its grasp. The real history of progress in physics, philosophy...is the history of incomprehensible cessations, of conceptions that were unattainable by a given epoch, in spite of all that happened to lead it up to them, -- of the thoughts that it did not think, not because it could not, but because there was some mysterious command upon it not to.
JS Bach
In music the constraints included the understanding of theory, of chords and harmonies and how they 'sound' to listeners, but they also involved economics: concert halls for the middle class replaced recitals in wealthy homes, or the church constrained what was legitimate.  And they involved technology, including the very instruments that existed.  Without pianos, subtle keyboard music wasn't possible; without modern string instruments, a whole range of emotions and ensemble playing wasn't in the cards.

Why can we have a history of music, of art, of architecture...or of science?  It is because of the chain of evolving context.  What we are now is a product of what we (or our ancestors) were.  We are constrained by the legacy, and we can't live in our future, even if we do live in the future of all the people who once lived.  Unlike a perfect law-like world in which the present could predict the future, for example, as you can predict the path a thrown ball will take, we cannot predict the future of art, or of science.

When we struggle against the unknowns of our own day, we do so in the context of what we know, of the technologies we have available, of the social context of jobs, approvals, status, and a sense of meaning.  We need funding, and that means we must appeal to the contexts of our own day.

Unlike the arts, science purportedly deals with the objective nature of Nature.  Presumably that is a given; we are born to observe and try to understand it.  In the arts, we think more of inspiration and open creativity, not constrained in the same sense by the realities of the physical world.  But the two areas are not so different.  Both science and music build on the shoulders of giants, as Newton put it, but they stray too far from those shoulders at their own risk.  Mavericks don't generally fare well.  Most of us inch along, even when we are well aware that that's just what we're doing (no matter how much we may trumpet our worth).  Many who succeed had some unpleasant, disturbing, unusual or quirky childhood or circumstances that led them to see further.  It was rarely (if ever) due to what they learned in school--many of the greats in any field were not the products of institutionalized schooling.

Usually, even the transforming geniuses of any age are building on, by departing from, the context they inherited.  That's why, for example, even in science we can find inklings of transformative ideas in the murmurings of predecessors.  Often, indeed, we can find a maverick or two who said what the genius later said, but in unprepared soil, so the message was lost.  This is partly conservatism and normal science, and partly the reality that we can only do what we can do, even when we know there are problems that go beyond the fact that much is still unknown--problems with the entire approach itself.

So when we complain about the modus operandi of our era, we understand that, first, most scientists can't just invent something totally new; second, they could not have careers if they tried to; third, they would most likely fail if they did; yet fourth, one cannot order up genius the way you order a hamburger.  Nonetheless, it is important for at least some in the field to harp on its limitations, to try to stimulate new thinking.

And Mars....
So this brings us back to life--on Mars.  The ideas about what that life must be like--the expectation that we'll find 'microbes' or, perhaps more revealing, the interpretation of structures in a Mars meteorite as microbes--show how we interpret even something as alien (so to speak) as life elsewhere through the lenses of our current experience.  In the past, even among scientists, when we didn't have the DNA and biochemical focus we have today, nor any satellite or telescopic reconnaissance directly of the planet, much more dramatic and well-developed forms of life were imagined.

Life today has some basic characteristics that we discussed yesterday.  If we were to discover a quivering slime somewhere that was clearly a form of life, or some other form that's foreign to Earth life, our speculations would change accordingly.  Just as we think it laughably far-fetched to imagine little green men on Mars, future scientists may think it laughably naive to suggest such parallelism between Earth and Mars as is done today.

It's possible as we said yesterday that our understanding of chemistry is so complete that we can rule out truly different kinds of 'life.'  But that would be remarkable, given the history of science to date.  We always think we know the parameters as well as the perimeters of Nature.  The 'end of science' has been proclaimed before.  But so far, we've been wrong because we've always been very limited in our understanding.

Similarly, no one can say what kind of music here on earth will please our descendants.

Monday, October 22, 2012

Life on Mars: Why microbes?

Tropes of our time
When the proponents of costly explorations of Mars start justifying these projects, they usually reduce the reason to the objective of finding life there.  That's good space-travel-as-Disney imagery that will get the public to open up their wallets.

In fact, even the most hyperbolic NASA proponents don't promise 'advanced' life like humans (or even little green men) there.  No, they are always referring to microbes, so tiny that we can't see them through telescopes or cameras aloft in orbiting spacecraft.  We have to land, and indeed we have to land guys with shovels, to find it.

The argument that Mars life must be primitive is based on the geological history of Mars, which suggests that it and the Earth originated at similar times, but that about 4 billion years ago the smaller Mars settled down to a nearly atmosphere-free, more hostile environment, unlike the hospitable environments here.  But if life had begun on Mars at about the same time as it seems to have begun here, so the tale goes, it would have reached a comparable stage of primitive forms, the kind that evidence from 3.5-billion-year-old rocks shows was then here on Earth--life that the newly hostile Martian environment killed off.  At best, if there are any surviving forms, they'll have to be hiding, protected, under the ice or huddling inside rocks.

That's a pretty powerful belief in parallel histories up to that point, as if life is nearly inevitable.  It seems like quite a stretch, but that's not all.  Even purportedly knowledgeable scientists speak of carbon and oxygen and so on in Mars rocks, in the context of noting that these are the building blocks of life, and especially of DNA.  This shows how deeply the flash-words, or 'tropes' of our time control our thinking, although we know that it's by far more likely that RNA came first (here on Earth, at least).  But this is how current science is embedded in current culture.  RNA has very similar molecular contents, but the implied idea is that a DNA-based protein code must be the way life works.  That's how it is here, and it's the core of current life science, so it must be that way there, too!

NASA will be peeking with great Curiosity everywhere it can, looking for anything it could claim suggests life, and eventually exploratory vehicles will try to bore down to find life, shivering modestly, in or under the Martian ice.  Again, we look there because, with the unshielded solar radiation, cold temperatures, and thin atmosphere, Martian life can't live on the surface--or rather, Earth life couldn't--and assuming the same about Martians provides a convenient escape clause for why we haven't actually seen any Mars life.

Principles of Life
Although our current scientific culture, and its popular image, is centered on or even obsessed by Darwinian competition as the essence of life, there are many other principles that are much more pervasive and important.  We described these at great length in our book MT, and in other papers.  Not only do we argue that cooperation (that is, functionally successful interactions) among many contributing elements is the rule, but the same fact fits with other aspects of life.  Among them are that life, from genes on up, is organized around modular functional units that interact in partially isolated or compartmentalized structures.

The same MT principles are even more deeply, if subtly and even implicitly, at work in surmises about Mars life.  The assumption behind looking for 'microbes' is that Martians will look like bacteria.  This expectation was whetted by the idea, fostered a few years ago, that rod-like remnants of 'microbes' had been found in a Mars meteorite named ALH84001, discovered in Antarctica in 1984.  Whether this is biogenic or biobulldroppings is beyond our expertise.  But the interpretation that it was life rests to a great extent on the tacit assumption that sequestered, modularized structures--cells with internal structures that separate the inside from the outside--are a universal feature of life, wherever we may find it.

The reference to DNA also reflects the assumption that life at the molecular level is a polymer phenomenon--a string of units (assumed to be nucleotides) that are grouped into local, distinct, functional parts.  That means life is not just a reaction among identical molecules, even a 'cooperative' one like the formation of crystals, but is instead based on cooperative interaction among diverse components.

These kinds of statements about Mars life show the latent assumptions about life, not just or even not mainly about evolution but about how it must be organized. It is an implicit reflection of the kinds of principles we discussed in MT. 

At the same time, it reflects a lack of imagination or much thought.  It is embedded in our current culture, here on Earth, where we are focused on DNA and bacteria as the primitive detectable form of life, because that's what we see here and that's the current theme of the life sciences.

Could life be something other than this?  And here we mean something beyond the question of whether life must involve carbon or water, etc.  Rather, we ask whether the laws of physics and chemistry mean that, to be an evolvable builder of orderly but non-homogeneous complexity--forms built up of variable subunits, the way we are built of cells, organs, organ systems, and populations--life must be based on spatial relationships among polymer-like molecules, whose combinatorial presence enables structures to be built.  If that is the case, it is probably a rather profound truth.

And yet, can that be the case?  It seems unlikely, since even our understanding of life as it happened here is that it arose merely as chemical reactions in a primordial soup in some lakes, ponds or oceans.  That is, it began as open reactions, not encased in membrane compartments, not diversified based on an array of 'instruction' molecules (RNA or proteins).
 
If not--if there could be very different versions of what we would classify as 'life'--then we could be in for some startling surprises as space is explored.  And it would show how rooted human thinking is in its cultural context, no matter how objective science tries to be.

Friday, October 19, 2012

Social Malaria

My name is Daniel Parker and I am a PhD candidate at Penn State University in the Anthropology and Demography Departments.  I consider myself to be a population scientist and my research concerns a range of population scales, from the microscopic level to human metapopulations (populations of populations).  Humans are my favorite study organism; however I am also very interested in the microparasites and invertebrate vectors that plague humans.  My dissertation research looks at human migration and malaria in Southeast Asia.  Anne and Ken invited me to write a guest post on this subject, and this is it.
--------------------------------



Are there social determinants to malaria infection?

If you’re a social scientist you might be quick to say yes, but if you understand the biology of the disease the question may not make much sense to you.

A female anopheline mosquito feeds on someone carrying the sexual stage of the parasite.  The blood meal gives her the nutrition necessary for laying her eggs.  Assuming that the parasite has successfully undergone another transformation in the mosquito gut, and that the mosquito feeds on another person, she may transfer the infection.  Mosquitoes probably don’t care about the socio-economic status of the people on whom they feed (though they do seem to prefer pregnant women and people with stinky feet).  It is probably safe to say that, all other things being equal, mosquitoes really don’t care whom they bite.  But are all other things equal?  Not even close…

Let’s consider our not-too-distant history with malaria in the U.S., since it was a plague of non-trivial proportions for a large swath of our nation.  In the 1880s a prominent scientist (one of the first to publicly suggest that malaria may come from mosquitoes) argued for having a giant screen placed around Washington D.C. (which was a swampy, malaria-infested city up until the mid-1900s).[1]  Several of our presidents seem to have suffered from the disease.  George Washington suffered throughout much of his life with bouts of fever that were likely malaria.  Presidents Monroe, Jackson, Lincoln, Grant, and Garfield also may have suffered from malaria.  On a personal note, both of my grandparents contracted malaria growing up in modern-day Oklahoma (at that time it was still Indian Territory).  My grandmother still drinks tonic water, which contains the antimalarial quinine, when she feels a headache or chills.  The following maps (I apologize for the poor resolution) come from a CDC webpage about the history of malaria in the U.S.

 
CDC Malaria History

A question, then, is: How were we so successful at eradicating malaria here?  Furthermore, why didn’t we do that everywhere else?!!!

A favorite story for many anti-environmentalists is that it was all or mostly because we used DDT.  And beginning in the 1940s we did use the hell out of DDT.  Apparently it was common practice for parents in the Southern U.S. to encourage their children to run behind DDT fog trucks as they drove down streets.  (See this blog post for some related stories).  But some real problems with DDT are that it doesn’t just target mosquitoes: it probably also kills the predators that would feed on mosquitoes and other pests, and it can potentially cause all sorts of trouble (with regard to bioaccumulation and/or biomagnification) as it works its way through trophic levels.  A few people noticed this could be a problem (see Silent Spring by Rachel Carson) and DDT production was halted in the U.S. in 1972.  (Soon after, there were global efforts at banning its use for agricultural purposes).

But DDT wasn’t the only thing that changed in the U.S. during the Second World War and after.  The U.S. was just coming out of the Great Depression and there were some interesting demographic things going on too.  For example, lots of working-aged males were away for the war, returned en masse, and then some major baby-making ensued.  The economy was rebounding and suburbia was born, meaning that many of those baby-makers could afford houses (increasingly with air conditioning units) that wouldn’t have been possible in previous years.  There were major public works projects aimed at building and improving drainage systems and sanitation.

During this same time period chloroquine, a major antimalarial drug with some important improvements on quinine, went into widespread use (mostly in the 1940s), but by the 1950s there were drug-resistant parasite strains in Southeast Asia and South America.  This isn’t a surprising occurrence.  Antimalarials exert a pretty heavy selective force on the parasites.  Furthermore, those parasites undergo both clonal and sexual reproduction, meaning they can potentially generate a lot of novel variants and strains.  This has been the curse of antimalarials ever since: soon after they are rolled out, the parasites develop resistance, and resistant strains quickly spread globally.

Eradication of malaria in the U.S. occurred during a time when we were using heavy amounts of DDT, when we had access to relatively cheap antimalarials, and when we were undergoing some major socio-economic, structural, and demographic changes.  However, DDT was becoming an issue in its own right, and wasn't working as well as it once did.  The antimalarials weren't working as well as they once did either.  Despite this, and despite the fact that mosquito vectors for malaria still exist in the U.S., we still don’t have a real malaria problem.  And while it is almost impossible to tease out all of the contributors to our current malaria-free status, I argue that the social and economic factors that changed during this time period are the main reason why malaria is no longer a problem for us here in the U.S.  If that weren't the case, we’d be back to using insecticides and antimalarials to try to eradicate it once again.

I’m certainly not the first to notice such things.  A study on dengue fever (a mosquito-borne viral disease) in a Southern Texas/Northern Mexico town split by the international border (los dos Laredos) found that people without air conditioning units seem to have more dengue infections when compared to people who do.[2]  Poor people, living on the Mexico side of the border, tended to leave their largely unscreened windows open since they didn't have AC units to combat the sometimes brutal heat in that part of the world.  This is a clear example of how socio-economic factors can influence mosquito-borne disease transmission, but it plays out in other ways in other environments and parts of the world.

In Southeast Asia, where I do malaria research, many if not most of the people who are afflicted with malaria are poor, ethnic minorities and migrants who have been marginalized by governments and rival ethnic groups.[3]  Constant, low-grade warfare in Myanmar (Burma) for the last half century has left many of the residents of that nation in a state of public health crisis.  And, since pathogens don’t normally respect international borders, malaria remains a problem for neighboring countries such as Thailand (which is mostly malaria free when you exclude its border regions).  The story is the same along China’s border with Myanmar in Yunnan Province.  Mosquitoes don’t target people because they’re poor disenfranchised ethnic minorities.  But a lot of those ethnic minorities do happen to live in conditions that allow malaria to persist, and the mosquitoes who pick up malaria go on to feed on other potential human hosts, regardless of their economic status.  This means that your neighbor’s poverty can actually be bad for you too.

Arguably, most (not all!) public health advances can be largely attributed to socio-economic change (google: McKeown hypothesis).  Increasing the standard of living for entire populations tends to increase the health of populations too.  In Asia, nations such as Taiwan, Japan, most of South Korea (excluding its border zone with North Korea), and Singapore are malaria free.  Obviously, it isn’t always an easy task to increase the standard of living for a population, but the benefits go far beyond putting some extra cash in peoples’ pockets and letting them have nice homes.  The benefits include decreases in diseases of many types, not just malaria, and that is good for everyone.

Consider, now, the amount of money that is dumped into attempts at creating new antimalarials or that ever-elusive malaria vaccine.  Consider the amount of money that has been dumped into genome sequencing and countless other really expensive scientific endeavors.  And then consider whether or not they actually have much promise for eliminating or controlling malaria in places that are still plagued by this disease.  Sure, sequencing can provide insight into the evolutionary dynamics associated with the emergence and spread of drug resistance (and that is really exciting).  Some people believe that genomics will lead to personalized medicine, but even if this is true, I am skeptical that it will ever trickle down to the people who most need medical attention.  New antimalarials and new combinations of antimalarials may work for a while.  But it seems pretty obvious to me that what actually works over the long term, regardless of parasite evolution and genetics, is what we did right here in the U.S.  So, at the risk of jeopardizing my own future in malaria research, I've got to ask:

From a public health standpoint, is it possible that it’s cheaper to attack socio-economic problems in malarious places rather than to have thousands and thousands of labs spending millions and millions of dollars for cures that seem to always be short lived?  

Wouldn't we all get more bang for our buck if we took an approach that doesn't only address one specific parasite?       

1. Charles, S. T. Albert F. A. King (1841-1914), an armchair scientist. Journal of the history of medicine and allied sciences 24, 22–36 (1969).
2. Reiter, P. et al. Texas lifestyle limits transmission of dengue virus. Emerging Infectious Diseases 9, 86 (2003).
3. WHO, Strengthening malaria control for ethnic minorities in the Greater Mekong Subregion. 2011, (2008).

Thursday, October 18, 2012

Science and social policy

We recently blogged about science denial being a trait we can't just attribute to creationists, but one that we scientists share as well.  We said that to a great extent political views can determine how we pick and choose the evidence we believe.  Right-wingers tend to deny climate change but climate change can make left-wingers very anxious; to a right-winger IQ is real and even a genetically determined group characteristic while to a left-winger it's impossible to measure and is environmentally determined.  So in this light, a story at the BBC website on Tuesday was of interest.

Entitled "Childhood adversity affects adult brain and body functions, researchers find," and written by Alok Jha, the piece describes a number of studies presented at the Society of Neuroscience meetings this week in New Orleans.  The papers aren't yet published but the abstracts are online, and here's the first sentence of one example (E. Pakulak, Y. Yamada et al.):
A large and growing literature documents the profound impact of lower socioeconomic status (SES) on cognitive skills and brain structures and functions in children (Hackman, Farah, & Meaney, 2010).
Well, so if you're a believer in IQ being genetically determined and the idea that people earn their socioeconomic status by virtue of their IQ, this isn't right.  That's because you think the cause and effect are the other way around: cognitive skills have a profound impact on where we end up in the SES hierarchy, children inherit their cognitive abilities and therefore they inherit their place in the order of things.  But if you believe that the brain is plastic, and can be affected by experience, you're perfectly ok with how this abstract begins.  In fact, probably you liked the presentation title and that's why you kept reading.

So, say you wanted to sort out which comes first, cognitive ability or SES (and just as importantly, who's right), how would you do it?  Clearly just declaring the order in which you think things happen isn't enough.  A lot of work on neuroplasticity has been published in the last decade or so, much of which is pretty convincing, which may or may not predispose you to believe findings that brains can respond to environment, but even so let's think about it.  Since it's impossible to determine cause and effect just by looking at outcome, what's required is an intervention study.

You'd need to look at the IQ/brain structure/cognitive abilities -- whatever you think the right measure/outcome is -- before and after some kind of intensive training/attention/input.  Anything from a repeat IQ test, to determine whether cognitive abilities have changed, to a follow-up MRI, fMRI or PET scan, to assess changes in brain structure or functioning.  Though, if you did see improved IQ scores, you'd have to worry about whether repeat testing itself is what improved the scores.  And you would then have to do genome sequencing of each individual to be able, in principle, to separate out prior inborn effects from later experiential effects, assuming inborn factors are identifiable.  Clearly, these kinds of studies have to be very carefully planned and interpreted.

Judging from the abstract, the study described by Pakulak et al. was an intervention.  They gave adults from a number of SES backgrounds a battery of memory and language proficiency tests and found that childhood SES was strongly predictive of working memory, language proficiency and attention span.  But, see above as to cause and effect.  Then they did an intervention with parents of lower SES children, after which they measured "attention/executive function" in this group and in controls and found attention improved. They conclude that cognitive abilities are malleable and that neuroplasticity extends into adulthood. They did not test the role of genetics.

At the same meeting, Suzanne Houston reported that
...the size of different parts of the brain could be affected by growing up in different homes. "We found higher parent education, smaller amygdala. The higher the income, the larger the hippocampus."
Her interest is in determining which environmental factors affect brain growth.   Others reported that excessive stress or abuse in childhood affects the functioning of the brain and is associated with ill health in adulthood. And so on. 

Does all this work resolve this debate?  Of course not.  If you think IQ is fixed at birth, you wonder first whether the executive function Pakulak et al. measured improved, and if not, why not.  That's not addressed in their abstract.  Does something other than improved cognition explain improved attention?  Say, the desire to please the researchers?  You might wonder whether the sample size (72) was large enough.  A researcher can't do the experiment of having a child grow up in one home and then another and measure the difference in the brain.  Nor adequately control for confounders in studies of, say, stress and adult morbidity and mortality.  So, you might think this is all yet more evidence that we should stop pouring public money into programs based on the principle of equal opportunity for all when they clearly can't work for everyone. 

But, if you already like the idea of neuroplasticity, and believe that society has a responsibility to let everyone live up to his or her potential and that everyone shares the same potential, you think this kind of study is yet more evidence of neuroplasticity, that social programs can improve the lot of those in the lower SES and that tax money should be spent equally on everyone.

In all of this, we also know that uterine experience can affect growth, development, gene expression and physiological states, and these themselves can be inherited.  Whether this applies to traits like IQ is an open question today, as far as we know, and indeed gene expression would have to be tested for each relevant tissue, and whether this was only relevant during gestation or remains so during life. The point here is just that uterine experience adds another potentially major source of variables that would have to be measured.

It's much easier to pick apart a study we disagree with than one we like, even if we don't recognize that's what's going on.  We all have our subtle biases and may not even be aware of them.  And then there are the not-so-subtle biases.  And, since there's no such thing as a perfect experiment, any study can be criticized.  Rarely do we learn, when we're taught the scientific method, how much our evaluation of the evidence depends on what we already believe rather than how well the experiment is done. 

Wednesday, October 17, 2012

How old is old, part II: 3465...and counting!

Early life
How old is old?  We asked this question on Monday in relation to human evolution and our species' age and history as revealed by genetic data, in light of new estimates of the genetic mutation rate.  There, we were talking about timescales of a few million years since we diverged from chimps, or much less since modern humans diverged from our barbarous colleagues like Neandertals.

We see some modest changes in less than 10 million years, and debate whether it was 10 million or 'only' 7.  But let's put that in perspective.

The really early life
A former college classmate of mine, Bill Schopf, at UCLA, has been one of the discoverers of the earliest known life, the first fossils of life found on earth.  These are structures built by bacterial colonies, some organized in compacted layers known as stromatolites.  They bear a remarkable resemblance to modern stromatolites, and to today's bacteria and their biofilms.  How old are they?  A mere 3465 million -- that is, 3.465 billion -- years!

A recent report by Schopf and Anatoliy Kudryavtsev in the journal Gondwana Research (Gondwana is the name of an ancient supercontinent) describes modern tests showing definitively that these fossils are real, as this has been the subject of some debate.  Images of these fossils and their bacterial structures appear in that report.  After many years of various types of highly technical tests, the internal details as well as the external shapes and cellular structures now make a convincing case that these are not natural mineral formations, but really are evidence of life.


The earth itself is estimated to be 4.54 billion years old.  This means that going from the first fireball, through life's first primordial 'soup', essentially to modern bacteria took only about a billion years.  If these bacteria and their aggregations were very primitive, or just barely making it as living organisms, one might say that's about what we'd expect: a billion years to make the first staggering living things.  But they are, in a reasonable sense of the word, already modern.

One has to assume that their genomes were very different from bacterial genomes today.  If you look at genetic divergence among bacteria and other comparably primitive forms of life, you see that they are as different as you'd expect for such an old beginning -- that is, they have diverged by an amount consistent with 3+ billion years of common ancestry.  There is a touch of circularity in species split-time estimates, because they are calibrated by fossil and other geological dating.  But the picture is consistent.

What this means is that the morphology has, to a great extent, been conserved for aeons, a rather remarkable fact given that other descendants have diverged hugely, leading to plants -- and us!  How to explain this level of conservation of structure, given the divergence of genomes, is a major but largely unrecognized challenge for evolutionary genetics.

These findings make it seem almost trivial to ask how arthropods a few hundred million years old, like horseshoe crabs, or ancient but much more recent fossil insects or fish, can appear so highly conserved, if evolution is a relentless rat-race to adapt to a competitive and changing environment.

The environments on earth are always changing, even if gradually and with some long-term stability.  One can imagine that once a species is adapted to an environment, it may be risky to be a mutant, if most mutations, occurring randomly with respect to function, are more likely to cause harm than benefit.  But since so much of life is no longer bacterial, and since there are so many kinds of bacteria, we have a serious question: how could any recognizable morphology of this nearly earliest life have persisted so long, when descendant genomes as a whole, even among bacteria, have diverged in a reasonably clock-like way?

One would expect drift to occur in many, if not all traits: small, gradual changes that didn't harm fitness but that accumulated over billions of years to make the founding forms basically unrecognizable. Apparently this is not the case.  Thinking about how to explain that is interesting--at least as interesting as accounting for Neandertal vs modern human variation and evolution.

Tuesday, October 16, 2012

23andLess

A new report out of 23andMe suggests that one can learn more about risk for common traits from relatives than from individual DNA sequences.  The fact is about as surprising as a sunrise, and has been known for a long time (as has the reason for it).  Acknowledging this sounds more than typically forthcoming for this company, but they do put their own spin on it, saying the combination of family history and the kind of genetic risk estimation they sell is best: family history predicts the common diseases, and genotype-based estimation the rare ones.

The reason family history is so predictive is quite simple: it integrates all your relatives' genetic variation to reveal, at least within their respective environmental contexts, the net effect of that variation.  You inherit half your variation from each parent (given some reasonable assumptions), so the net risk your parents actually experienced is a far better predictor than anything likely to be gleaned from a single variant in a single gene in your DNA sequence.
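The point can be made with a toy simulation -- entirely our illustration, not from the report.  Assume a trait affected by many loci, each with a tiny additive effect, plus environmental noise (all parameter values below are arbitrary choices).  Then compare how well the mid-parent trait value predicts the offspring versus how well the genotype at one single locus does:

```python
# A minimal sketch, assuming a purely additive polygenic trait.
# All numbers (loci, frequencies, noise) are illustrative assumptions.
import random

random.seed(1)
N_LOCI = 200        # many loci, each contributing one unit per risk allele
N_FAMILIES = 2000
FREQ = 0.5          # assumed risk-allele frequency at every locus

def genotype():
    """Diploid genotype: risk-allele count (0, 1, or 2) at each locus."""
    return [(random.random() < FREQ) + (random.random() < FREQ)
            for _ in range(N_LOCI)]

def phenotype(g):
    """Additive trait: total risk alleles plus environmental noise."""
    return sum(g) + random.gauss(0, 5)

def child_of(mom, dad):
    """Each parent transmits one of its two alleles per locus."""
    return [(random.random() < m / 2) + (random.random() < d / 2)
            for m, d in zip(mom, dad)]

def corr(xs, ys):
    """Pearson correlation, written out to keep this self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

midparent, single_locus, offspring = [], [], []
for _ in range(N_FAMILIES):
    mom, dad = genotype(), genotype()
    kid = child_of(mom, dad)
    midparent.append((phenotype(mom) + phenotype(dad)) / 2)
    single_locus.append(kid[0])          # genotype at just one locus
    offspring.append(phenotype(kid))

print("corr(offspring, mid-parent)  ~", round(corr(offspring, midparent), 2))
print("corr(offspring, one variant) ~", round(corr(offspring, single_locus), 2))
```

With many small-effect loci, the mid-parent value captures the summed effect of all of them at once, while the single locus carries only a sliver of the signal; the correlations differ by roughly an order of magnitude.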

Several papers have shown this in various ways.  A clever one a few years ago noted that Francis Galton's family correlations from Darwin's time (Galton was a cousin of Darwin's) provide better prediction than modern genotyping.

We might be expected to revel in this confession by 23andLess, since we have criticized the whole endeavor of personalized genomic medicine as a bit of snake-oil selling.  Of course, there are countless variants that are highly predictive of certain generally rare, mostly pediatric traits.  Companies can report whether you carry these variants, but even then, unless they are recessive in their effects, if you carry the variant you'd already have the disease or trait.  For decades there have been honorable practitioners, called genetic counselors, who generally worked within medical schools and were carefully licensed to do this.  They identify known genetic risks and advise about recurrence risks and the like.  But they were professionals, working with physicians, not hustling to the public for corporate profit.

At the same time, let's step back and ask about the idea of family risk.  If that's based on genetic variation (that is, if you filter out environmental effects), then certainly your DNA sequence would contain the variants involved!  That means that, properly done, sequencing should be able to identify the variants that account for your resemblance to your parents. But studies to date have shown that variants identified by GWAS and other approaches only account for a small fraction of the parent-offspring correlation. That is the gist of the current report, stated in another way.   The issue, then, is how to identify what's causal and what isn't, among the huge amount of DNA sequence you share with any individual relative.

Various authors have made suggestions.  Some say the problem is that each of us is affected by one or more very rare variants, and even if you carry the same one your parent does, finding it in a sea of sequence data is nigh impossible.  Authors favoring the rare-variant idea are trying their best to devise methods to do so.  One is to look at multiple close, affected relatives and winnow the shared sequence down to the part that's actually causal.  Another prominent group argues that it is not the sum of individual genetic variants that determines risk, but interactions among variants.  This is a statistical nightmare to work out, but if it were true it would mean that we really have already identified the variants in question, just not the way they interact.

The most likely truth at this stage is that common traits like heart disease, or how tall or heavy you are, are determined by a very large number of genes, mostly with individually very small effects.  Each person with the 'same' trait -- each diabetic, say -- has that trait for a different genetic reason.  Individual genetic variants may be causal contributors, but individually they are not very important.

If this is so, and we could find a way to document the individual effects, we could use each person's genome sequence to tally up their particular set of risk variants and compute their risk, even though treatment usually couldn't be tailored to that specific risk set.  In essence, we would gain nothing but a usually false sense of precision by doing so.  We'd be just as well off, in practice, to look at family history (or, even better, at directly relevant risk traits like glucose levels, obesity, etc.).  And this is under the assumption that we know of, or don't have to worry about, environmental effects -- a huge can of worms that everyone is conveniently ignoring.

The same, by the way, applies to attempts to identify or characterize traits in terms of genes responsible for their adaptive evolution--and for essentially the same reasons.

Monday, October 15, 2012

How old is 'old'?

A story in last Friday's Science by Ann Gibbons concerns various recent, direct estimates of the DNA mutation rate in humans, and their import for how we reconstruct our geographic and demographic origins.  The point is that the new estimates suggest mutations, nucleotide by nucleotide each generation, occur only about half as frequently as earlier estimates held: roughly one per 100 million nucleotides rather than one per 50 million.  How could numbers so small have any import at all?

The timing of various aspects of human prehistory -- here we'll loosely call them 'events' even if they occurred very gradually in human lifetime terms -- is at issue.  When we look at human variation today, and compare it to that in other primates, we try to account for how rapidly new variation arises.  Taking into account estimates of population size, the rate of new mutation is counterbalanced by loss due to chance.  The amount of standing variation within our population, or of difference from our nearest relatives, can then be used to estimate the timing of expansions, migrations, and geographic variation within our own species; when we diverged from earlier relatives as a new species; and when, where, and whether we split into sub-groups (like Neandertals) that then interbred or became extinct.

The standard story had been that we diverged from chimps about 6-7 million years ago (Mya), and that the Neandertal lineage split off 400 thousand years ago (Kya) or so.  New direct estimates of the mutation rate, on the order of 1 per 10^8 nucleotides per generation, are slower and hence imply longer time periods for these inferred events in our history.
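The arithmetic behind this is simple enough to sketch.  If two lineages have neutral sequence divergence d per site, and mutations accumulate along both lineages at a per-year rate mu, the split time is roughly T = d / (2 * mu); halve the rate and the inferred time doubles.  The specific numbers below (divergence, generation time, rates) are our illustrative assumptions, not values from the article:

```python
# Back-of-the-envelope divergence dating: T = d / (2 * mu_per_year).
# All input numbers are illustrative assumptions for the sketch.

def split_time_years(divergence_per_site, mu_per_site_per_gen, gen_years):
    """Split time implied by divergence d and a per-generation mutation rate."""
    mu_per_year = mu_per_site_per_gen / gen_years
    return divergence_per_site / (2 * mu_per_year)

d_chimp = 0.012     # assumed human-chimp neutral divergence per site
gen = 25            # assumed generation time, in years

old_rate = 2.0e-8   # roughly the older, phylogeny-calibrated estimate
new_rate = 1.0e-8   # roughly the newer, direct pedigree-based estimate

t_old = split_time_years(d_chimp, old_rate, gen)
t_new = split_time_years(d_chimp, new_rate, gen)
print(f"old rate -> split ~{t_old / 1e6:.1f} Mya")
print(f"new rate -> split ~{t_new / 1e6:.1f} Mya")
```

The precise outputs depend entirely on the assumed inputs; the point is the proportionality -- with divergence held fixed, halving the mutation rate exactly doubles the inferred split time, which is why the new estimates push all the dates back.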

That would make no important difference, unless thinking that humans diverged from chimps 9 rather than 7 million years ago matters to you.  That's rather abstract, at best.  But there is some import in these estimates.

These estimates are not set in stone, either in terms of their accuracy as averages from relatively sparse data (mutation rates vary along the genome and between individuals), or over historical time and environments.  The per-year rates depend on generation length, which relates to body size and may have differed in the past.  Mating patterns may matter too: older males have undergone more sperm-line cell divisions in which mutations can occur, so populations where, say, older dominant males mate more often will have higher mutation rates per generation.  And so on.

But things matter most when it comes to consistency with the fossil record.  Fossils have tended to suggest more recent event times, and mutation rates calibrated against such times have been fitted accordingly, at least in part.  Of course, by the time morphology shows recognizable species differences, the species have already been clearly split for an unknown length of time.  Likewise, gene-based separation times usually underestimate the actual isolation events, for well-known reasons.

If, for example, we now don't know when human ancestors and/or Neandertals left Africa, our estimates of their history of admixture (or not) will be affected.  They could not have admixed if they hadn't evolved or emerged from Africa!  Right now, paleontologists are not easily agreeing with revised mutation rate estimates, but clearly the two kinds of data must be brought into agreement.

Probably, there will be little difference in the overall picture we get from fossil and genetic data on past and present variation.  The times will matter.  If that leads to other issues, they will have to be faced.

The issues arise because we're getting better data from genomic sequences, so they are legitimate -- not just stubborn food fights among different research groups.  Whether they will make any substantial difference in our understanding of our evolution as a species seems less likely, but remains open.

Friday, October 12, 2012

Finnish diet? No thanks!: Diet and health

There were two stories about stroke on yesterday's BBC website, one warning that incidence is rising among younger people and the other that tomatoes are protective.  Both are from studies published in the October Neurology.  The first was a study of a region in Ohio with a population of 1.3 million, in which incidence was ascertained for two different periods, July 1993-June 1994 and 1999-2005.  Mean age at stroke decreased from 71.2 years in 1993/4 to 69.2 in the later period, with the proportion occurring in people under age 55 increasing from 12.9% in 1993/4 to 18.6% in 2005.

This is alarming, say the paper and the BBC story, because stroke in younger people can mean more years of debilitation.  But did they find that incidence is actually increasing in younger people, or is it that incidence is decreasing in older people?  It turns out that it's both: stroke incidence among the younger age groups rose from 109 per 100,000 people in 1993/4 to 176 per 100,000 in 2005, while the rate fell in the oldest age groups.

This change could be real or, cautions a neurologist from University College London, it's possible that the way stroke was detected during the study has changed, and could explain some of the increase.  Younger people might have been more likely to have the more reliable diagnostic scans.

A spokesperson from the Stroke Association in the UK said that while these results are alarming, stroke can be prevented.  "For example, eating a balanced diet, exercising regularly and getting your blood pressure checked can all make a huge difference."

Or, just eat more tomatoes?  The second study also appears in Neurology and reports that people with the highest levels of lycopene in their blood were the least likely to have a stroke -- lycopene is a compound found in tomatoes.

But, hold on.
The study involved 1,031 men in Finland between the ages of 46 and 65. The level of lycopene in their blood was tested at the start of the study and they were followed for an average of 12 years. During that time, 67 men had a stroke.
Among the men with the lowest levels of lycopene, 25 of 258 men had a stroke. Among those with the highest levels of lycopene, 11 of 259 men had a stroke. When researchers looked at just strokes due to blood clots, the results were even stronger. Those with the highest levels of lycopene were 59 percent less likely to have a stroke than those with the lowest levels.
"This study adds to the evidence that a diet high in fruits and vegetables is associated with a lower risk of stroke,” said study author Jouni Karppi, PhD, of the University of Eastern Finland in Kuopio. “The results support the recommendation that people get more than five servings of fruits and vegetables a day, which would likely lead to a major reduction in the number of strokes worldwide, according to previous research.”
Hm, maybe, but the sample sizes are pretty small here.  Sounds to us as though the a priori assumption is that eating fruits and vegetables prevents stroke, and that these two studies support that view.  Maybe they do, and we certainly won't argue, but we'd need a lot more evidence than this if we didn't already believe it.
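The counts quoted in the excerpt are enough to check the crude effect ourselves.  This little calculation is ours, using only the numbers above; note the paper's 59% figure is an adjusted estimate for clot-related strokes, so the raw relative risk for all strokes won't match it exactly:

```python
# Crude (unadjusted) stroke risk in the lowest- vs highest-lycopene groups,
# from the counts quoted in the news excerpt.

def risk(events, n):
    return events / n

low  = risk(25, 258)   # strokes among men with the lowest lycopene levels
high = risk(11, 259)   # strokes among men with the highest levels

rr = high / low        # crude relative risk, highest vs lowest group
print(f"risk, lowest group:  {low:.3f}")   # about 9.7%
print(f"risk, highest group: {high:.3f}")  # about 4.2%
print(f"crude relative risk: {rr:.2f} (~{(1 - rr) * 100:.0f}% lower)")
```

So the crude all-stroke reduction is in the mid-50s percent, broadly consistent with the reported adjusted figure -- but with only 36 strokes across these two groups, the confidence interval around that number is wide, which is the small-sample worry raised above.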

A more convincing twenty-year study ended not so long ago in Eastern Finland, where heart disease risk was as high as or higher than anywhere else in the world.  The study asked Finns to eat a 'Mediterranean' diet, high in vegetables and so on.  Their heart disease risk was reduced, according to the report, by about 75%!  The details and specific food elements, if anything specific was responsible, were not identified, but the switch from animal fats, cheese and so on to fruits and vegetables was in general credited with the reduction in risk.

Then the investigators proposed that an Italian population adopt the Finnish diet, to 'square out' the study design and its ability to compare and isolate dietary elements.  The Italians (smartly!) refused.

Thursday, October 11, 2012

Mental illness is complex too

Mental illness causes significant morbidity and mortality around the world, but we know very little about its physiology or about how to treat it.  Yesterday was World Mental Health Day, so it seems appropriate to talk about this now.  A piece in Science last week addresses this issue*, and the news isn't good.  The problems are multiple: the brain is complex and difficult to study; it's hard to know which aspects of brain function to focus on in trying to determine what goes wrong and thus how to make it right again; and animal models may or may not be adequate stand-ins for human disease and drug response.  Moreover, current medications are no more effective than drugs first used 50 years ago, and the problem is so complex that pharmaceutical companies find it's just not cost-effective to invest in research into psychiatric drugs, because progress is so uncertain.
"We just don't know enough,” says Thomas Insel, director of the U.S. National Institute of Mental Health (NIMH) in Bethesda, Maryland. “Research and development in this area has been almost entirely dependent on the serendipitous discoveries of medications. From the get-go, none of it was ever based on an understanding of the pathophysiology of any of the illnesses involved.”
This is no different from the story of most other complex diseases, of course.  GWAS and other genetic approaches have not provided any significant breakthroughs, even though, as with most complex diseases, the genetic hammer has been thrown at these disorders time and time again.  The basic biology is poorly understood (at best), so one hasn't much clue which regions of the genome to look at most intensely.  Leads arise, excite..., and then fade.  Knowledge of many psychiatric illnesses may be even more rudimentary than that of non-psychiatric illnesses: schizophrenia rates may or may not differ across time and place, autism rates may or may not be increasing, ADHD may or may not be a disorder.

As it happens, a $10 million study of the use of fish oil to reduce suicide risk in US soldiers was announced this week, as reported in multiple places including here.
The Medical University of South Carolina, the Veterans Administration and the National Institutes of Health announced the study of omega-3 fatty acids on Monday, which is being conducted for the U.S. Army.
In the controlled study, veterans already receiving mental health services will be given smoothies high in omega-3s for a six-month period. Others will be given a placebo.
"One of the questions this study hopes to address is do we see a clinical effect that is strong enough that the military would then consider providing supplements to all military personnel, not just those who are already experiencing depression," said Bernadette Marriott, a professor in the Institute of Psychiatry at the Medical University of South Carolina and the principal investigator in the study.
This might be laughable, really, given how much we know we don't know about mental illness -- except that this kind of study can be done empirically, without any real understanding of the pathophysiology of either depression or any possible effect of omega-3 on the disease.  If the suicide rate is importantly lower among those given fish oil than among those given a placebo, and that's really the only significant difference between the groups, it suggests fish oil might well be why -- with all the usual caveats about confounders and so on.  And even if it's, say, the taste of fish oil rather than any effect on the brain, who cares, as long as it works?

In the same vein, a September episode of the BBC Radio 4 program The Life Scientific, with British psychiatrist David Nutt, was a discussion of, among other things, the potential of magic mushrooms, ecstasy, LSD, cannabis or mephedrone as treatments for PTSD or depression.  His group has published a recent paper in PNAS reporting a study of fMRIs of the brains of people on psilocybin, the active ingredient in magic mushrooms.  He was surprised at the results, he told Life Scientific presenter Jim Al-Khalili: he expected to see much higher brain activity, but instead cortical activity was greatly slowed.  And, from the paper:
These results strongly imply that the subjective effects of psychedelic drugs are caused by decreased activity and connectivity in the brain's key connector hubs, enabling a state of unconstrained cognition.
Why?  If you read the Discussion section of the paper you'll see it is full of uncertainty about what these results are telling them or might mean; lots of mays, ifs and mights.  They do draw some conclusions: depression has been characterized as an "overstable" state, in which cognition is rigidly pessimistic; pessimism has been linked to the areas of the brain in which this study found lowered activity in people on psilocybin; and brain "hyperactivity has been linked to pathological brooding."  So the idea is that reducing brain activity might interrupt pathologically negative or self-destructive thoughts.

Of course, they also point out that "Further work is required to test this hypothesis and the putative utility of psilocybin in depression."  Again, this can be done empirically; understanding the pathophysiology is not required.  But fMRIs are not required either.  It looks as though Nutt had a hunch that these compounds have therapeutic potential, did the study, and then constructed a hypothesis to fit the results.  But OK -- no one knew how aspirin worked for a long time either.  And if these compounds work, if fish oil works, and actual progress can be made in treating mental illness, this could change millions of lives for the better, no matter how or why.

But, given what we know about complexity, we're betting there's no such simple answer to the problem of brain function and its variation and how to fix what can go wrong. 
-----------------------------

*Science hosted a live chat on treating mental illness yesterday afternoon and it's archived here.