Monday, January 26, 2015

What's 'precise' about 'precision' medicine (besides desperate spin)?

Not very long ago we were promised 'personalized genomic medicine'. Surely you remember.  It was a slogan like any advertising slogan, and if you read the fine print (the caveats, if you can find them) you'll see various safety valves.  Medicine has always been 'personalized', but the implication was that we'd be treating everyone in ways that are specifically dictated by what we find in their genome (the whole DNA sequence of their 'constitutive' or inherited genome).

The idea was an advertising or promotional way of lobbying for funds for the belief system that genomes cause everything (except, perhaps, the final Super Bowl score...but even that, well, if the quarterbacks' and receivers' skills are--as surely they must be--genetically determined, maybe we can even predict that!).  Of course, when someone carries a particular genotype at some locus with a strong effect, and many of those are known, a clinician should, indeed must, take that into account.

But that is nothing new, and has long been the business of the profession of genetic counselors and the like (not necessarily of online DNA businesses one might name).  That sort of personalized genomic medicine is no more novel than 'evidence-based' medicine, which is another slogan, this time perhaps for the for-profit HMO businesses that want to dictate how the doctors in their stable treat their patients.  Its nominal objective is to eliminate poor doctoring, which is good, but anyone not thinking it's basically about profits is willfully naive.

Anyway, now we're seeing a new slogan, so mustn't that mean we have successfully achieved the goal of 'personalized genomic medicine'?  Obviously, if we have any accountability left in government spending, even under Republicans, if we didn't solve the previous objective, what's the justification for a new one, much less expecting Congress to allocate the funding for it?  A cynic (but not us!) might say that the lobbying aspect of biomedical research is now onto the next slogan, changing the packaging even if the product's not really changed.  One has to keep changing slogans if one wants to keep customers' (and Congress') attention so they'll keep giving you money.

The new slogan is 'precision medicine'.  Sara Reardon says this about that in Nature:
The agency seems to have been planning the effort for some time, listing 'precision medicine' as one of its four priorities in its 2015 budget proposal; another was 'big data'. Other government agencies are also expected to participate, as may some private companies. There is no word on how much the initiative will cost, but details are likely to trickle out as Obama prepares his budget request for fiscal year 2016, which is due to be released on 2 February.
Wow!  This will replace, one has to assume, all that 'sloppy' medicine of the past, just as 'evidence-based' medicine presumably replaced 'lack-of-evidence-based medicine'.  Now, docs will know precisely what ails you and precisely what to do about it, and this will precisely involve genetic approaches since that's all NIH's leadership seems to understand.

OK, OK, so we're (again) being snide.  But not just snide.  Ask yourself: what might one mean by 'precision' medicine?  Does it mean 100% accurate, no mistakes?  Of course not, so then, what?  If it means the doc does his/her best, is that anything new or something to write home about?  Surely Dr Collins isn't suggesting that he's been imprecise with his promises about genomics.  Surely not!  Instead, the claim is that we'll be able to look at your genome and hence know precisely what is in your future and what to do for (or to) you as a patient.  Anyone who knows anything about genetics knows that, with some clear-cut but generally rare exceptions, that's bollocks!

What does 'precision' mean (if anything)?
The goal of precision genomic medicine sounds laudable, though if it refers to everything being precisely based on genes, that would be the only thing that's new.  In some venues, at least, the idea has received from the get-go precisely the ridicule it deserves.  At the same time, conscientious doctors have always done precisely as well as they could, given their knowledge at the time.  So what must be meant is that now genetics will let us treat patients in a way that is, finally, really precise!

But wait: what does the word 'precise' actually mean? Well, look it up. It means 'marked by exactness or accuracy'.  OK again, that sounds great.....or does it?  Does even Francis Collins, your genome's best friend, really believe that we'll have exact diagnosis or treatment based on genome sequences? Or what about 'accurate'?  Which doesn't necessarily mean correct, or appropriate -- just targeted at a given spot.

'Precision' can refer to perfection or exactness.  Or it can refer to something less: some kind of knowable error.  Proper science always deals in precision, but properly, by specifying the degree of accuracy.  One says an estimate is precise to within x percent, based on the current data.  But a risk of, say, 5% can be precise to within, say, a percentage point, or a tenth of one, or more.  It's not all that reassuring that your genome gives your doc something like that unless the range is narrow and reliable.  In general that is the kind of meaning we can assign to 'precision', when done honestly and honorably, but it's far from the scientific reality in this field.
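To make that proper sense of 'precision' concrete, here is a minimal sketch (ours, not anything from the NIH plan) of how the statistical precision of a risk estimate depends on how much data sit behind it; the sample sizes are hypothetical.

```python
import math

def risk_precision(risk, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for an
    estimated risk (binomial proportion, normal approximation)."""
    return z * math.sqrt(risk * (1 - risk) / n)

# A 5% risk estimate is only as 'precise' as the data behind it:
for n in (100, 1_000, 100_000):
    half_width = risk_precision(0.05, n)
    print(f"n = {n:>7,}: 5% risk, precise to within +/- {100 * half_width:.2f} percentage points")
```

And even that only captures statistical precision; it says nothing about whether the risk model applies to the particular person sitting in the clinic.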

Indeed, when the precision of estimates is presented properly, one is in essence expressing inexactness--of a machine or instrument or estimate.  To claim that we're going to be using NIH research funds to generate precision medicine is saying that we're going to do the best we can.  One can only hope so!

Precision and accuracy are terms that do not in themselves mean anything until the degree is stated, along with how that degree is known and by what criterion.  Does it relate to replication of a process and similarity of results?  Does it mean what will happen, or what might happen with some specifiable accuracy or probability?  If you don't state that, you're just playing empty word games.

Worse than that, honest research and clinical treatment have always been 'the best we can do' at any given time.  Likewise, genome-based prediction, usually very imprecise both in the sense of being inaccurate and in the sense that the degree of inaccuracy is very poorly known, is nothing new.  It's nobody's fault, because the genome and its interactions with the environment are complex.  One hopes that whatever real genomic information on risk, response to treatment, diagnosis and so on we have will be as precise as possible, in the proper sense of the term, and that new knowledge from research will increase that precision.

But it is dishonorable to imply that this is something new and different, or to suggest even implicitly that genome sequencing and the like are leading us to anything close to what most people think of when they hear the word 'precision', much less that such precision is generally even in the cards.  Especially following on the previously largely vacuous promise of 'personalized' genomic medicine.

It is very misleading to suggest otherwise.  It takes guts and ruthless lobbying.  'Precision' is literally almost meaningless in this context!

The million genomes project
In the same breath, we're hearing that we'll be funding a million genomes project.  The implication is that if we have a million whole genome sequences, we will have 'precision medicine' (personalized, too!).  But is that a serious claim or is it a laugh?

A million is a large number, but if most variation in gene-based risk is due, as mountains of evidence show, to countless very rare variants, many of them essentially new, and perhaps hordes of them per person, then even a million genome sequences will not be nearly enough to yield much of what is being promised by the term 'precision'!  We'd need to sequence everybody (I'm sure Dr Collins has that in mind as the next Major Slogan, and I know other countries are talking that way).
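As a back-of-the-envelope illustration of why (our own arithmetic, with hypothetical allele frequencies, not figures from any proposal): even a million genomes yields only a handful of carriers of a genuinely rare variant, far too few to estimate its effect on risk.

```python
# Rough expected carrier counts for rare variants in N sequenced individuals.
# Allele frequencies are hypothetical; for small f, carriers ~ 2 * N * f.
N = 1_000_000

for f in (1e-3, 1e-4, 1e-5, 1e-6):
    expected_carriers = 2 * N * f
    print(f"allele frequency {f:.0e}: roughly {expected_carriers:,.0f} expected carriers")
```

With a couple of dozen carriers, most of them healthy, there is essentially no power to say what such a variant does to the risk of a common, late-onset disease.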

Don't be naive enough to take this for something other than what it really is:  (1) a ploy to secure continued funding, predicated on his Genome Dream, in the absence of new ideas and the presence of promises any preacher would be proud of, and of results that so far clearly belie it; (2) a way to protect influential NIH clients with major projects that no longer really merit continued protection, but which will be folded into this one; and (3) a guarantee of congressional support from representatives who really don't know enough to see through it, or who simply believe, or just want cover for, the idea that these sorts of things (add Defense contracting and NASA mega-projects as other instances) are simply good for local business and sound good to campaign on.

Yes, Francis Collins is born-again with perhaps a simplistic one-cause worldview to go with that.  He certainly knows what he's doing when it comes to marketing based on genetic promises of salvation.  This idea is going to be very good for a whole entrenched segment of the research business, because he's clever enough to say that it will not just be one 'project' but is apparently going to have genome sequencing done on an olio of existing projects.  Rationales for this sort of 'project' are that long-standing, or perhaps long-limping, projects will be salvaged because they can 'inexpensively' be added to this new effort.  That's justified because then we don't have to collect all that valuable data over again.

But if you think about what we already know about genome sequences and their evolution, and about what's been found with cruder data, from those very projects to be incorporated among others, a million genome sequences will not generate anything like what we usually understand the generic term 'precision' to mean.  Cruder data?  Yes; for example, the kinds of data we have from many of these ongoing studies, based on inheritance, on epidemiological risk assessment, or on other huge genomewide mapping efforts, have consistently shown that there is scant serious new information to be found by simply sequencing between mapping-marker sites.  The argument that the significance level will rise when we test the actual causal site doesn't mean the signal will be strong enough to change the general picture.  That picture is that there simply are not major risk factors, except, certainly, some rare strong ones hiding in the sequence leaf-litter of rare or functionless variants.

Of course, there will be exceptions, and they'll be trumpeted to the news media from the mountain top.  But they are exceptions, and finding them is not the same as a proper cost-benefit assessment of research priorities. If we have paid for so many mega-GWAS studies to learn something about genomic causation, then we should heed the lessons we ourselves have learned.

Second, the data collected or measures taken decades ago in these huge long-term studies are often no longer state of the art, and many people followed for decades are now pushing up daisies, and can't be followed up.

Third, the epidemiological (e.g., lifestyle, environment...) data have clearly been shown largely to yield findings that get reversed by the next study down the pike.  It's daily news that the latest study has now shown that all previous studies had it wrong: factor X isn't a risk factor after all.  Again, major single-factor causation is elusive already, so just pouring funds into detailed sequencing will mainly be finding reasons for existing programs to buy more gear to milk cows that are already drying up.

Fourth, for many if not most of the major traits whose importance has justified mega-epidemiological, long-term follow-up studies, no consistent risk factors have been found to begin with.  Yet for many of these traits, the risk (incidence) has risen faster than the typical response to artificial selection.  In that case, if genomic causation were tractably simple, such strong 'selection' should point to those few genes whose variants respond to the changed environmental circumstances.  But these are the same traits (obesity, stature, diabetes, autism, .....) for which mapping shows that single, simple genetic causation does not obtain (and, again, that assumes the environmental risk factors purportedly responsible have even been identified, and the yes-no reversals just mentioned show otherwise).

Worse than this, what about the microbiome or the epigenome, which are supposedly so important?  Genome sequencing, a convenient way to carry on just as before, simply cannot work miracles in those areas, because they require other kinds of data, not available from current sequencing samples nor, of course, from deceased subjects even if we had stored their blood samples.

And these data will be almost completely blind to another potentially very important genetic causal process, that of somatic mutation.  Tomorrow we'll discuss that issue.

Of course, there are many truly convincing genetic factors that are clearly relevant to know and to use in diagnosis or treatment decisions.  Those are precisely the factors to test for, investigate, intervene with and so on.  NIH should be closed down entirely if it were aiming at anything different (and no one suggests that, in general, it is).  Even rare, or 'orphan', disorders are ones worth targeting with engineered preventive or curative approaches; if science is good at anything, it is engineering.

Precisely what will the currently proposed work yield, then?  It will yield an even better picture of how wasteful this sort of perpetuate-every-big-study-you-can-identify project will have been.  That is, at least, one precise prediction!

Caveat emptor!  The Ayn Rand factor: don't mistake it for science
Everybody naturally wants precision-based medicine, using genes in those areas in which that is wholly appropriate.  But what we should expect is that NIH puts its resources precisely where they will do the most good for the investment.  Those, including Congress, who think that NIH has been doing that in the recent past are precisely those who should start paying closer attention.

If you are among those who have paid attention to the miracles NIH officials, Dr Collins in particular, have been promising and the language they have used, now for over 20 years, then you should suggest that they leave NIH and get new jobs in a place where truthfulness is not part of the deal: Madison Avenue. Unless, of course, simply finding a rationale for keeping the funding tap open to existing clients is the real underlying objective.

Some readers may say we've done enough whinging about this!  After all, genes are important, so this will be good science even if the promises are exaggerated.  There will be good science, certainly, but this HyperProject will undermine the idea of nimble science, driven by ideas rather than empires.

Science does involve money and so is never too far from underlying politics.  In a largely Republican environment, dancing to the ghost of Ayn Rand, it will be interesting to see if those now in power keep indulging the 'haves' in science.  One might expect them to, but at the same time this is, when you look closely, largely a welfare project for the science in-groups, to keep life in large, long-standing, tired projects well past their point of diminishing returns: like bailing out rusting industries, the long-term projects once found useful things but have now clearly slipped from good cost-benefit profiles.  So one can ask: are even the Republicans paying attention?

A Big Data bailout will, of course, preserve jobs and career status for the main recipients.  The Old Boy networks in science were to some extent dismantled by the democratization of funding that began at NIH and NSF around the 1980s.  Peer review came to include peers of both genders and of varied racial backgrounds, and funding became more geographically dispersed.  It was never perfectly equitable, but it was much more open and democratic than the back-room arrangement that seemed to have prevailed before.

But the new Old Boys (and now Girls, too, just as acquisitive as the Boys) are back with a vengeance!  As always, those at the top, with the proposed bail-out project extension, will hire many minions of workers, including technicians, post-docs and other well-trained scientists.  Those are jobs, certainly, but as before they're largely located in the elite universities that have long had a big grasp on funds (and have munificent private sources they could use instead of feeding so hungrily at the public trough).  You can voice your own view about whether this elitism is what's best for science or not.  But no matter, welfare for scientists and technicians, controlled by the Big Lab aristocracy, is largely what's afoot--again.

Aristocracies maintain themselves by making enough people dependent on them--the research cogs.
If you have a liberal bent of mind, as we tend to, you can't object to the use of public sector resources so that people have jobs, though inside tracks for science-related people aren't exactly democratic.  But what this welfare-for-tired-projects will do, as noted above, is deprive even these clever inside people of chances to be innovative, and deprive science of the chance to be more nimble.  That's because the army of scientists and staff, those on the hundreds-of-authors papers, are forced by this system to be cogs in the Mega-wheel of Big Data projects.  That's not good for science.

For, not against
Our arguments are not against science, but for it.  They are for nimble, fleet-footed science with a fair, idea-driven marketplace rather than an institutionally inertial one.  And, as also noted earlier, a million sounds good but isn't nearly enough for some of the implied successes, and seems more likely to be intentionally setting the table for the future of this welfare system--whole-population sequencing!  Why not?  It would be rather surprising if the Director, given his track record, hasn't got this in his back pocket for when the current catch-phrase is worn out.

An irony is that our comments here might be interpreted as dismissing the causal importance of genetics in the nature of organisms and their evolution.  In a sense, our message is the opposite: it's that by building genetics into the sociopolitical institutional structure of science, and hence its particular welfare or self-maintenance system, we routinize what isn't yet well enough understood to be routinized.  We trivialize genetics in that way, the opposite of what should be done.  We benumb minds that should be sharpened by facing an open, rather than channeled frontier.

One thing, though: this is not a shell game!  It's all being done in plain sight.  You and everyone who thinks about it, knows what this is.  We personally are newly retired and have no dog in the fight. But one would expect that sooner or later a wide community of scientists will tire of Dr Collins' continual feeding of his narrow ideology (or his dependents, view it how you wish), for lack of better scientific ideas.  If the victims whose careers and ideas are not being protected by this welfare system don't care enough, don't act, or can't find a way to resist by credible challenges to the status quo, the status quo will remain.  It's that simple.

Friday, January 23, 2015

What is 'inappropriate' use of baby aspirin? The risk of estimating risk

Something like a third of the American population* takes a baby aspirin every day to prevent cardiovascular disease (CVD).  But a new study ("Frequency and Practice-Level Variation in Inappropriate Aspirin Use for the Primary Prevention of Cardiovascular Disease : Insights From the National Cardiovascular Disease Registry’s Practice Innovation and Clinical Excellence Registry", Hira et al., J Amer Coll Cardiol) suggests that more than 1 in 10 of these people are taking it 'inappropriately.'



Aspirin slows blood clotting, and blood coagulation plays a role in vascular disease, so the thinking is that some heart attacks and strokes can be prevented with regular use of aspirin, and indeed there is empirical support for this.  As with many drug therapies, it was the side effects of aspirin use for something else, in this case rheumatoid arthritis (RA), that first suggested it could play a role in CVD prevention -- a 1978 study reported that aspirin use lowered the risk of myocardial infarction, angina pectoris, sudden death, and cerebral infarction in RA patients (study cited in an editorial by Freek Verheugt accompanying the Hira paper), a result that kick-started its use for CVD prevention.

The new Hira et al. study included about 68,000 patients in 119 different practices taking aspirin for prevention of a first heart attack or stroke, not recurrence.  The authors looked at clinical records in a network of cardiology practices to assess the proportion of patients in each practice that was taking aspirin, and whether they met the 10-year risk criteria for 'appropriate use' as determined by the Framingham risk calculator.  The calculator uses an algorithm based on age, sex, total cholesterol, HDL cholesterol, smoking status, blood pressure and whether the patient is taking medication to control blood pressure.

Appropriate use, according to Hira et al., is a 10-year risk of greater than 6%.  According to the calculator itself, 6% risk means that 6 of 100 people with whichever set of factors yields this risk will have a heart attack within the next 10 years.  The reason this even has to be thought about is that there is some risk to taking aspirin: it's an anticoagulant and can cause major bleeding.  So weighing the costs against the benefits -- preventing CVD without causing major bleeds -- is what's at issue here.  If the benefit is a long shot because an aspirin user isn't likely to have CVD anyway, the potential cost can outweigh the pluses.
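In other words, 'appropriateness' here reduces to comparing an estimated 10-year risk against a fixed cutoff.  A minimal sketch of that decision rule (the 6% cutoff is the one Hira et al. use; the patients and their risk estimates are hypothetical, and this is not the Framingham algorithm itself):

```python
def aspirin_appropriate(ten_year_risk, threshold=0.06):
    """Appropriate-use rule applied in Hira et al.: primary-prevention aspirin
    counts as 'appropriate' only if estimated 10-year CVD risk exceeds 6%."""
    return ten_year_risk > threshold

# Hypothetical patients with risk estimates from some Framingham-style calculator:
for name, risk in [("patient A", 0.04), ("patient B", 0.06), ("patient C", 0.11)]:
    verdict = "appropriate" if aspirin_appropriate(risk) else "inappropriate"
    print(f"{name}: estimated 10-year risk {risk:.0%} -> aspirin {verdict}")
```

Note how brittle that is: a patient whose estimate comes back at 5.9% and one at 6.1% get opposite verdicts, even though, as discussed below, the estimates themselves aren't precise to anything like a fifth of a percentage point.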

As Verheugt explains:
Major coronary events (coronary heart disease mortality and nonfatal MI) are reduced by 18% with aspirin but at the cost of an increase of 54% in major extracranial bleeding. For every 2 major coronary events shown to be prevented by prophylactic aspirin, they occur at the cost of 1 major extracranial bleed. Primary prevention with aspirin is widely applied, however. This regimen is used not only because of its cardioprotection but also because there is increasing evidence of chemoprotection of aspirin against cancer.
Hira et al. found that 11.6% of the population of patients visiting a cardiology practice were taking aspirin inappropriately, having a risk less than 6% as calculated by the Framingham calculator.  That is, their risk of bleeding outweighs the potential preventive effect of aspirin.
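To see what those relative figures imply in absolute terms, here is a back-of-the-envelope sketch for 100 hypothetical people sitting right at the 6% threshold; it uses only the 18% reduction and the 2-events-to-1-bleed ratio quoted above, and is our rough arithmetic, not a result from the study.

```python
# Rough expectations for 100 people at the 6% 10-year-risk threshold, using
# the relative effects quoted by Verheugt; not figures from Hira et al.
n_treated = 100
baseline_risk = 0.06              # 10-year risk of a major coronary event
relative_reduction = 0.18         # aspirin reduces major coronary events by ~18%
bleeds_per_event_prevented = 0.5  # ~1 major extracranial bleed per 2 events prevented

expected_events = n_treated * baseline_risk
events_prevented = expected_events * relative_reduction
major_bleeds = events_prevented * bleeds_per_event_prevented

print(f"expected events without aspirin:      {expected_events:.1f}")
print(f"events prevented by aspirin:          {events_prevented:.1f}")
print(f"major bleeds caused by aspirin:       {major_bleeds:.1f}")
print(f"people with neither benefit nor harm: about {n_treated - events_prevented - major_bleeds:.0f}")
```

So, over a decade, roughly one person in a hundred at the threshold avoids a coronary event, about half a person suffers a major bleed, and everyone else just took a pill every day; above the threshold the balance tips further toward benefit, below it toward harm.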

But, about this 6% risk.  Does it sound high to you?  Would you change your behavior based on a 6% risk, or would you figure the risk is low enough that you can continue to eat those cheese steaks?  Or maybe you'd just start popping aspirin, figuring that made it really safe to continue to eat those cheese steaks?

And why the 6% threshold?  So precise.  Indeed, a 2011 study suggested different risk thresholds for different age categories, increasing with age.  And different calculators (such as this one from the University of Edinburgh) return different risk estimates, varying by several percentage points given the same data; so much for precision.

Risk is, of course, estimated from population data, based on the many studies that have found an association between cholesterol, blood pressure, smoking status, and heart attack, particularly in older men.  A distribution of risk factors and outcomes would thus show that, for a given set of cholesterol and blood pressure values, on average x% will have a heart attack or stroke.  These are group averages, and using them to make predictions for individuals cannot be done with any precision that we can know to be true.  Indeed, one of the strongest risk factors known to epidemiology, smoking, causes lung cancer in 'only' 10% of smokers, and it's impossible to predict in whom.  And that's why these CVD risk calculators never estimate 100% risk.  The highest risk I could force them to estimate was "greater than 30%".

Hard to know what that actually means for any individual.  At least, I have a hard time knowing what to make of these figures.  If 6 of 100 people in the threshold risk category will have an MI in the next 10 years, this means that 94 will not.  So, another way to think about this is that the risk for 94 people is in fact 0, while the risk for the unlucky 6 is 100%.  For everyone over the 6% threshold, the cost -- a possible major bleed -- is assumed to be outweighed by the benefit -- prevention of MI -- even though that's in fact only true for 6 out of 100 people in this particular risk category.  But, since it's impossible to predict which 6 are at 100% risk, the whole group is treated as though it's at 100% risk, and put on preventive baby aspirin, and perhaps statins as well, and counseled on lifestyle changes and so on, all of which can greatly affect the outcome, and alter our understanding of risk factors -- or of the effectiveness of preventive aspirin.  And what if it's true that a drink a day lowers heart failure risk?  How do we factor that in?

Further, a lot of more or less well-established risk factors for CVD are not included in the calculation.  After decades of cardiovascular disease research, it seems well established that obesity is a risk factor, as are diabetes and certainly family history.  Why aren't these pieces of information included?  Tens if not hundreds of genes have been identified as having at least a weak effect on risk (and even these account for only a fraction of the genetic risk as estimated from heritability studies), and they aren't included in the calculation either.  And we all know people who seemed totally fit, who had a heart attack on the running trail, or the bike trail; so at least some people are in fact at risk even with none of the accepted risk factors.

So, 11.6% of baby aspirin takers shouldn't be taking aspirin.  But, when risk estimation is as imprecise as it is, and as hard to understand, this seems like a number that we should be taking with a grain of salt, if not a baby aspirin.  Well, except that salt is a risk factor for hypertension which is a risk factor for heart disease....or is it?


------------------
*Or something like that.  It turns out that the Hira paper cited a 2007 paper, which cited a 2006 paper, which cited the Behavioral Risk Factor Surveillance System 2003 estimate of 36% of the American population taking a baby aspirin a day.  But this is a 12-year-old figure, and I couldn't find anything more recent.

Thursday, January 22, 2015

Your money at work...er, waste: the million genomes project

Bulletin from the Boondoggle Department

In desperate need of a huge new mega-project to lock up even more NIH funds before the Republicans (or other research projects that are actually focused on a real problem) take them away, or before individual investigators who actually have some scientific ideas to test can claim them, we read that Francis Collins has apparently persuaded someone who's not paying attention to fund the genome sequencing of a million people!  Well, why not?  First we had the (one) human genome project.  Then, after a couple of iterations, the 1000 genomes project, then the hundred thousand genomes 'project'.  So, what next?  Can't just go up by dribs and drabs, can we?  This is America, after all!  So let's open the bank for a cool million.  Dr Collins has, apparently, never met a genome he didn't like or want to peer into.  It's not lascivious exactly, but the emotion that is felt must be somewhat similar.

We now know enough to know just what we're (not) getting from all of this sequencing, but what we are getting (or at least some people are getting) is a lot of funds sequestered for a few in-groups or, more dispassionately perhaps, for a belief system, the belief that constitutive genome sequence is the way to conquer every disease known to mankind.  Why, this is better than what you get by going to communion every week, because it'll make you immortal so you don't have to worry that perhaps there isn't any heaven to go to after all.

Anyway, why not?  The genomes are there, their bearers will agree, and they've got the blood to give for the cause.  Big cheers from the huge labs, the equipment manufacturers, and those eyeing the Europeans and the Chinese to make sure we don't fall behind anyone (and knowing they're eyeing us for the very same reason).  And this is also good for the million-author papers that are sure to come.  And that's good for the journals, because they can fill many pages with author lists, rather than substance.

Of course, we're just being snide (though, being retired, not jealous!).  But whether in fact this is good science or just ideology and momentum at work is debatable but won't be debated in our jealous me-too or me-first environment.

Is there any slowing down the largely pointless clamor for more......?

We've written enough over the past few years not to have to repeat it here, and we are by no means the only ones to have seen through the curtain and identified who the Wiz really is.  If this latest stunt doesn't look like a masterful, professionally skilled boondoggle to you, then you're seeing something very different from what we see.  One of us needs to get his glasses cleaned.  But for us it's moot, of course, since we don't control any of the funds.

Wednesday, January 21, 2015

Dragonfly the hunter

For vertebrates and invertebrates alike, hunting is a complex behavior.  Even if it seems to involve just a simple flick of the tongue, the hunter must first note the presence of its prey, and then successfully capture it, even when the prey makes unpredictable moves. Vertebrates hunt by predicting and planning, relying on what philosophers of mind call 'internal models' that allow them to anticipate the movement of their prey and respond accordingly, but whether invertebrates do the same has not been known.  The typical human-centered reflex is to dismiss insects as mere genetic robots, mechanically linking sensory input to automatic, hard-wired action.

But that may be far too egocentric, because a new paper in the January 15 issue of Nature ("Internal models direct dragonfly interception steering," Mischiati et al.) describes the hunting behavior of dragonflies, and suggests that dragonflies have internal models as well.
Prediction and planning, essential to the high-performance control of behaviour, require internal models. Decades of work in humans and non-human primates have provided evidence for three types of internal models that are fundamental to sensorimotor control: physical models to predict properties of the world; inverse models to generate the motor commands needed to attain desired sensory states; and forward models to predict the sensory consequences of self-movement
Dragonflies generally don't hunt indoors, so Mischiati et al. decked out a laboratory to look like familiar hunting grounds, brought some dragonfly fodder indoors, and videotaped and otherwise assessed the behavior of the dragonflies in pursuit of their next meals to determine what they were looking at, and to assess their body movements as they pursued their prey.  These measurements suggested to them that the heads of the dragonflies were moving in sync with their prey, meaning that they were anticipating rather than reacting to the flight of their prey.

Anisoptera (Dragonfly), Pachydiplax longipennis (Blue Dasher), female, photographed in the Town of Skaneateles, Onondaga County, New York. Creative Commons

And this in turn suggests that, like vertebrates, dragonflies have internal models that facilitate their hunting. Rather than dashing after insects after they've already moved, dragonflies are able to predict their movements, and successfully capture their prey 90-95% of the time.  Compared with, say, echolocating bats, this is a remarkable success rate -- e.g., estimates of the success rate of Eptesicus nilssonii, a Eurasian bat, range from 36% for moths to 100% for the slow-moving dung beetle (Rydell, 1992). And it's an even more remarkable success rate compared with Pennsylvania deer hunters -- for every 3 or 4 hunting licenses sold, 1 deer was killed in 2012-13, which means that if, like dragonflies or bats, people had to rely on venison for their survival, they'd be in deep trouble.



But, apparently, humans, bats and dragonflies are using essentially the same kind of internal model to hunt, a model that allows them to anticipate the future and take action accordingly.  More specifically, the model is a 'forward model'; it has been thought to be the foundation of cognition in vertebrates, but it is at least the basis of motor control (as described here and here).  You can dismissively call it just 'computing' or you can acknowledge it as 'intelligent', but it is clearly more than a simple hard-wired reflex: it involves judgment.
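For readers wondering what an internal 'forward model' amounts to in practice, here is a toy sketch (ours, not the models actually fitted by Mischiati et al.): a pursuer that steers toward where the prey will shortly be, given its current velocity, rather than toward where it is now.

```python
import numpy as np

def predictive_step(pursuer_pos, pursuer_speed, prey_pos, prey_vel, horizon=0.1):
    """Toy predictive interception: extrapolate the prey's position a short
    time ahead (constant-velocity forward model) and move toward that
    predicted point, instead of chasing the prey's current position."""
    predicted_prey = prey_pos + prey_vel * horizon      # the 'forward model'
    direction = predicted_prey - pursuer_pos
    direction = direction / np.linalg.norm(direction)   # unit heading
    return pursuer_pos + pursuer_speed * horizon * direction

# One step of pursuit of a prey item drifting to the right:
pursuer = np.array([0.0, 0.0])
prey = np.array([1.0, 1.0])
prey_velocity = np.array([2.0, 0.0])
print(predictive_step(pursuer, pursuer_speed=3.0, prey_pos=prey, prey_vel=prey_velocity))
```

Real dragonflies do far more than this -- they rotate head and body separately so the prey stays centered in their field of view while they steer -- but the sketch captures the distinction the authors draw between reacting to where the prey was and predicting where it will be.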

This is interesting and relevant, because if all that's required is the ability to predict and plan accordingly, why is there so much variation in the success rate of the hunt, even within a given species?  Clearly other factors and abilities are required -- other aspects of the nervous system, for example, or speed relative to prey, and population density of predator and prey.  Indeed, insects would be expected to vary in their 'intelligence' the way people do, in a way that means that most are able to succeed.

It seems that the study of insect behavior is building a more and more complex model of how insects do what they do.  The view of the insect brain is broadening into one that allows for much more complexity than robotic hard-wired behavior, or motor responses to sensory input.  A few months ago, we blogged about bee intelligence, writing about a PNAS paper that described how bees find their way home, credibly by using a cognitive map.

The author of a recent paper in Trends in Neurosciences ("Cognition with few neurons: higher-order learning in insects," Martin Giurfa, 2013) speculated about unexpected insect cognitive abilities, welcoming an approach to understanding plastic insect behavior that allows for the possibility of complex, sophisticated learning rather than mere associative learning.  But Giurfa cautions that there are many reasons why we don't yet understand insect behavior, including our tendency to anthropomorphize, using words for insect behavior derived from what we know about human abilities that, when applied to insects, imply more complexity than warranted, or to interpret experimental results as though they represent all that insects can do, rather than all that they were asked to do in the study.

On the other hand, many of the genes insects use for their sensory and neural functions are evolutionarily related to the genes mammals, including humans, use. So we likely share many similar genetically based mechanisms.

From the outside of this field looking in, it seems as though these are early days in understanding invertebrate brains.  And it seems to me that this is largely because observational studies of insects are difficult to do, must be interpreted because insects can't talk, and our interpretations are necessarily built on our assumptions about insect behavior, which in turn seem to follow trends in what people are currently thinking about cognition.  Until recently, researchers have assumed that insects, with far fewer neurons than we have, are pretty dumb.  The dragonfly hunter's success rate alone should be humbling enough to challenge this assumption.

In this sense, it's wrong to think simply that size matters.  Maybe it's organization that matters more.

Monday, January 19, 2015

We can see the beast....but it's been us!

The unfathomable horrors of what the 'Islamists' are doing these days can hardly be exaggerated.  It is completely legitimate, from the usual mainstream perspective at least, to denigrate the perpetrators in the clearest possible way, as simply absolute evil.  But a deeper understanding raises sobering questions.

It's 'us' pointing at 'them' at the moment, and some aspects of what's going on reflect religious beliefs: Islam vs Christianity, Judaism, or the secular western 'faith'.  If we could really believe that we were fundamentally better than they are, we could feel justified in denigrating their wholly misguided beliefs, and try to persuade them to come over to our True beliefs about morally, or even theologically, acceptable behavior.

Unfortunately, the truth is not so simple.  Nor is it about what 'God' wants.  The scientific atheists (Marxists) slaughtered their dissenters or sent them to freeze in labor camps by the multiple millions.  It was the nominally Christian (and even Socialist) Nazis who gassed their targets by the millions.  And guess who's bombing schools in Palestine these days?

Can we in the US feel superior?  Well, we have the highest per capita jailed population, and what about slavery and structural racism?  And what about the Asians?  Let's see: the rape of Nanking, Mao's Cultural Revolution, the rapacious Huns.....

Charlie Hebdo is just a current example that draws sympathy, enrages, and makes one wonder about humans.  Haven't we learned?  I'd turn it around and ask: has anything even really changed?

Christians have made each other victims, of course.  Read John Foxe's Book of Martyrs, from England in the 1500s (or read about the better-known Inquisition).  But humans are equal-opportunity slaughterers.  Think of the Crusades and the back-and-forth Islamic-Christian marauding episodes.  Or the Church's early systematic 'caretaking' of the Native Americans almost from the day Columbus first got his sneakers wet in the New World, not to mention its finding justification for slavery (an idea going back to those wonderful classic Greeks, and of course earlier in history).  Well, you know the story.

Depiction of Spanish atrocities committed in the conquest of Cuba in Bartolomé de Las Casas's "Brevisima relación de la destrucción de las Indias", 1552.  The rendering was by the Flemish Protestant artist Theodor de Bry. Public Domain.

But this post was triggered not just by the smoking headlines of the day, but because I was reading about that often idealized gentle, meditative Marcus Aurelius, the Roman Emperor in the second century AD.  In one instance, some--guess who?--Christians had been captured by the Romans and were being tortured: if they didn't renounce their faith, they were beheaded (sound familiar?) or fed to the animals in a colosseum.  And this was unrelated to the routine slavery of the time. Hmmm...I'd have to think about whether anyone could conceive of a reason that, say, lynching was better than beheading.

It is disheartening, even in our rightful outrage at the daily news from the black-flag front, to see that contemporary horrors are not just awful, they're not even new!  And, indeed, part of our own Western heritage.

Is there any science here?  If not, why not?
We try to run an interesting, variable blog, mainly about science and also its role in society.  So the horrors on the Daily Blat are not as irrelevant as they might seem:  If we give so much credence, and resources, to science, supposedly to make life better, less stressful, healthier and longer, why haven't we moved off the dime in so many of these fundamental areas that one could call simple decency--areas that don't even need much scientific investment to document?

Physics, chemistry and math are the queens of science.  Biology may be catching up, but today that seems mainly to be to the extent that we are applying molecular reductionism (everything in terms of DNA, etc.).  That may be physics worship or it may be good; time will tell, but of course applied biology can claim many major successes.  The reductionism of these fields gives them a kind of objective, or formalistic, rigor.  Controlled samples or studies, with powerful or even precise instrumentation, make it possible to measure and evaluate data, and to form testable, credible theory about the material world.

But a lot of important things in life seem so indirect, relative to molecules, that one would think there could also be, at least in principle,  comparably effective social and behavioral sciences that did more than lust after expensive, flashy reductionist equipment (DNA sequencing, fMRI imaging, super-computing, etc.) and the like.  Imaging and other technologies certainly have made much of the physical sciences possible by enabling us to 'see' things our organic powers, our eyes, nose, ears, etc.,  could not detect.  But the social sciences?  How effective or relevant is that lust to the problems being addressed?

The cycling and recycling of social science problems seems striking.  We have plentiful explanations for things behavioral and cultural, and many of them sound so plausible.  We have formal theories structured as if they were like physics and chemistry: Marxism and related purportedly materialist theories of economics, cultural evolution, and behavior, and 'theories' of education, which are legion even as actual educational results have been sliding for decades.  We have libraries full of less quantitatively or testably rigorous, more word-waving 'theories' by psychologists, anthropologists, sociologists, economists and the like.  But the flow of history and, one might say, its repeated disasters show, to me, that we as yet have nothing very rigorous, despite a legacy going back to Plato and the Greek philosophers.

We spend a lot of money on the behavioral and social sciences, with 'success' ranging from very good for very focal types of traits, to none at all when it comes to major sociocultural phenomena like war, equity, and many others.  We have journal after journal, shelves full of books of social 'theory', including some (going back at least to Herbert Spencer) that purport to tie physical theory to biology to society, and Marx and Darwin are often invoked, along with ideas like the second law of thermodynamics and so on.  Marx wanted a social theory as rigorous as physics, and materialist, too, but one in which there would be an inevitable, equitable end to the process.  Spencer had an end in mind, too, but one with a stable inequality of elites and the rest.  Not exactly compatible!

And this doesn't include social theories derived from this or that world religion.  Likewise, of course, we go through psychological and economic theories as fast as our cats go through kibbles, and we've got rather little to show for it that could seriously claim respect as science in the sense of real understanding of the phenomena.  When everyone needs a therapist, and therapists are life-long commitments, something's missing.





Karl Marx and Herbert Spencer, condemned to face each other for eternity at Highgate Cemetery in London (photos: A Buchanan)

Either that, or these higher levels of organized traits simply don't follow 'laws' the way physical phenomena do.  But that seems implausible, since we're made of physical stuff, and such a view would take us back to the age-old mind-matter duality, and endless debate about free will, consciousness, soul, and all the rest back through the ages.  And while this itemization is limited to western culture, there isn't anything more clearly 'true' in the modern East, nor in cultures elsewhere or before ours.

Those with vested interests in their fMRI machines, super-computer modeling, or therapy practices will likely howl 'Foul!'  It's hard not to believe that in the past a far smaller percentage of people had behavioral problems needing chemical suppression or endless 'therapy' than is the case today.  But if that's so, and things are indeed changing for the worse, it further makes the point.  Why aren't mental health problems declining, after so much research?

You can defend the social sciences if you want, but in my personal view their System is, like the biomedical one, a large vested interest that keeps students off the street for a few years, provides comfy lives for professors, fodder for the news media and lots of jobs in the therapy and self-help industries (including think-tanks for economics and politics).....but has not turned daily life, even in the more privileged societies, into Nirvana.

One can say that those interests just like things to stay the way they are, or argue that while their particular perspective can't predict every specific any more than a physicist can predict every molecule's position, generic, say, Darwinian competition-is-everything views are simply true. Such assertions--axioms, really--are then just accepted and treated as if they're 'explanations'. If you take such a view, then we actually do understand everything!  But even if these axioms--Darwinian competition, e.g.--were true, they have become such platitudes that they haven't proven themselves in any serious sense, because if they had we would not have multiple competing views on the same subjects.  Despite debates on the margins, there is, after all, only one real chemistry, or physics, even if there are unsolved aspects of those fields.

The more serious point is this:  we have institutionalized research in the 'soft' as well as the 'hard' sciences.  But a cold look at much of what we spend funding on, year after year without demanding actual major results, would suggest that the lack of real results is perhaps the more real, or at least more societally important, problem these fields should be addressing--with the threat of less or no future funding if something profoundly better doesn't result.  In a sense, engineering works in the physical sciences because we can build bridges without knowing all the factors involved in precise detail.  But social engineering doesn't work that way.

After all, if we are going to spend lots of money on minorities (like professors, for example), we would do better to take an engineering approach to problems like 'orphan' (rare) diseases, which are focused and in a sense molecular, and where actual results could be hoped for.  The point would be to shift funds from wasteful, stodgy areas that aren't going very far.  Even if working on topics like orphan diseases is costly, there is no path to the required knowledge other than research with documentable results.  Shifting funding in that direction would temporarily upset various interests, but it would provide employment dollars to areas and people who could make a real difference, and hence would not undermine the economy overall.

At the same time, what would it take for there to be a better kind of social science, the product of which would make a difference to human society, so we no longer had to read about murders and beheadings?

Thursday, January 15, 2015

When the cat brings home a mouse

To our daughter's distress, she needs to find a new home for her beloved cats, so overnight we've gone from no cats to three cats, while we try to find them someplace new.  I haven't lived with cats since I was a kid really, because I was always allergic.  When I visited my daughter, I'd get hives if Max, her old black cat, sadly now gone, rubbed against my legs, and I always at least sneezed even when untouched by felines.  But now with three cats in the house, I'm allergy-free and Ken, never allergic to cats before, is starting to sneeze -- loudly.


Old Max

Casey


Oliver upside-down


But the mystery of the immune system is just one of the mysteries we're confronting -- or that's confronting us -- this week.  Here's another.  The other day my daughter brought over a large bag of dry cat food.  I put it in a closet, but the cats could smell it, and it drove them nuts, so I moved it into the garage.  A few days later I noticed that the cats were all making it clear that they really, really wanted to go into the garage, but we were discouraging that given the dangers of spending time in a location with vehicles that come and go unpredictably. I just assumed they could smell the kibbles, or were bored and wanted to explore new horizons.

But two nights ago I went out to the garage myself to get pellets for our pellet stove, and Mu managed to squeeze out ahead of me.  He made a mad dash for the kibbles.  Oliver was desperate to follow, but I squeezed out past him and quickly closed the door.  At which point, Mu came prancing back, squeaking.  Oh wait, he wasn't squeaking, it was the mouse he was carrying in his mouth that was squeaking!  He was now just as eager to get back in the house as he'd been to get out.  After a few minutes he realized that wasn't going to happen, so he dropped the now defunct mouse, and I let him back in.

Mu, the Hunter
So, that 'tear' in the kibbles bag that I'd noticed a few days before?  Clearly made by a gnawing mouse (mice?).  And the cats obviously had known about this long before I did.  But how did Mu know exactly where to make a beeline to catch the mouse?  He'd never seen where I put the bag, nor the mouse nibbling at it!  And I have to assume the other cats would have been equally able hunters had they been given the chance.

Amazing.  A whole undercurrent of sensory awareness and activity going on right at our feet, and we hadn't clued in on any of it.  I'd made unwarranted assumptions about holes in the bag, but the cats knew better.  Yes, I could have looked more closely at the kibble that had spilled out of the bag and noticed the mouse droppings.  But I didn't, because, well, because it didn't occur to me.

Though, now that I'm clued in, I believe we've got another mouse...


Mu and Ollie at the door to the garage yesterday afternoon


And?
I might even have been able to detect the mouse without seeing any of the evidence, just like the cats, if I'd tuned in more attentively, but I'm pretty sure it would have required better hearing.  In any case, other bits of evidence more suited to my perceptive powers were available, but I didn't notice.  I take this as yet another cautionary tale about how we know what we know, and I will claim it applies as well to politics, economics, psychology, forensics, religion, science, and more.  We build our case on preconceived notions, beliefs, assumptions, what we think is true, rarely re-evaluating those beliefs -- unless we're forced to, when, say, Helicobacter pylori is found to cause stomach ulcers, or our college roommate challenges our belief in God, or economic austerity does more harm than good.

As Holly often says, scientists shouldn't fall in love with their hypothesis.  Hypotheses are made to be tested; stretched, pounded, dropped on the floor and kicked, and afterwards, and continually, examined from every possible angle, not defended to the death.  But we often get too attached, and don't notice when the cat brings home a mouse.

An illustrative blog post in The Guardian by Alberto Nardelli and George Arnett last October tells a similar tale (h/t Amos Zeeberg on Twitter).  "Today’s key fact: you are probably wrong about almost everything."  Based on a survey by Ipsos Mori, Nardelli and Arnett report disconnects between what people around the world believe is true about the demographics of their country, and what's actually true.

So, people in the US overestimate the percentage of Muslims in the country, thinking it's 15% when it's actually 1%.  Japanese think the percentage of Muslims is 4% when it's actually 0.4%, and the French think it's 31% while it's actually 8%.

In the US, we think immigrants make up 32% of the population, but in fact they are 13%.  And so on.  We think we know, but very often we're wrong.  We're uninformed, ill-informed, or under-informed, even while we think we're perfectly well informed.

Source: The Guardian

The Guardian piece oozes political overtones, sure.  But I think it is still a good example of how we go about our days, thinking we're making informed decisions, based on facts, but it's not always so.  A minority of Americans accept evolution, despite the evidence; you made up your mind about whether Adnan is guilty or innocent if you listened to Serial, even though you weren't a witness to the murder, and the evidence is largely circumstantial.  And so on.  And this all has consequences.

In a sense, even if we are right about what we think, or its consequences, based on what we know, it's hard to know whether we are missing relevant points because we simply don't have the data, or haven't thought to evaluate it correctly, as with me in regard to Mu and the mouse.  We have little choice but to act on what we know, but we do have a choice about how much confidence, or hubris, we attribute to what we know, and to consider that what we know may not be all there is to know.

This is sobering when it comes to science, because the evidence for a novel or alternative interpretation might be there to be seen in our data, but our brains aren't making the connections, because we're not primed to or because we're unaware of aspects of the data.  We think we know what we're seeing, and it's hard to draw different conclusions.

Fortunately, occasionally an Einstein or a Darwin or some other grand synthesizer comes along and looks at the evidence in a different way, and pushes us forward.  Until then, it's science as usual; incremental gains based on accepted wisdom.  Indeed, even when such a great synthesizer provides us with dramatically better explanations of things, there is a tendency to assume that now, finally, we know what's up, and to place too much stock in the new theory......repeating the same cycle again.

Tuesday, January 13, 2015

The Genome Institute and its role

The NIH's National Human Genome Research Institute (NHGRI) has for a long time been funding the Big Data kind of science that is growing like mushrooms on the funding landscape.  Even if overall funding is constrained, and even if this also applies to the NHGRI (I don't happen to know), the sequestration of funds in too-big-to-stop projects is clear.  Even Francis Collins and some NIH efforts to reinvigorate individual-investigator R01 awards don't really seem to have stopped the grab for Big Data funds.

That's quite natural.  If your career, status, or lab depends on how much money you bring into your institution, or how many papers you publish, or how many post-docs you have in your stable, or your salary and space depend on that, you will have to respond in ways that generate those score-counting coups.  You'll naturally exaggerate the importance of your findings, run quickly to the public news media, and do whatever other manipulations you can to further your career.  If you have a big lab and the prestige and local or even broader influence that goes with that, you won't give that up easily so that others, your juniors or even competitors can have smaller projects instead.  In our culture, who could blame you?

But some bloggers, Tweeters, and Commenters have been asking if there is a solution to this kind of fund sequestration, largely reserved (even if informally) for the big usually private universities.  The arguments have ranged from asking if the NHGRI should be shut down (e.g., here) to just groping for suggestions.  Since many of these questions have been addressed to me, I thought I would chime in briefly.

First, a bit of history or perspective, as informally seen over the years from my own perspective (that is, not documented or intended to be precise, but a broad view as I saw things):
The NHGRI was located administratively where it was for reasons I don't know.  Several federal institutes were supporting scientific research.  NIH was about health, and health 'sells', so understandably a lot of funding is committed to health research.  It was natural to think that genome sequences and sciences would have major health implications, if the theory that genes are the fundamental causal elements of life was in fact true.  Initially James Watson, co-discoverer of DNA's structure, and perhaps others advocated the effort.  He was succeeded by Francis Collins, who is a physician and a clever politician.
However, there was competition for the genome 'territory', at least with the Department of Energy, heir to the Atomic Energy Commission.  I don't know if NSF was ever in the 'race' to fund genomic research, but one driving force at the time was the fear of mutations generated by atomic radiation (therapeutic, from wars, diagnostic tests, and weapons fallout).  There was also a race with the private sector, notably Celera, a commercial competitor that would have privatized the genome sequence.  Dr Collins prominently, successfully, and fortunately defended the idea of open and free public access.  The effort was seen as important for many reasons, including commercial ones, and there were international claimants in Japan, the UK, and perhaps elsewhere that wanted to be in on the act.  So the politics were rife as well as the science, understandably.
It is possible that only with the health-related promises was enough funding going to be available, although nuclear fears about mutations and the Cold War, along with the usual less savory self-interest, probably contributed to that agency's involvement.
Once a basic human genome sequence was available, there was no slowing the train. Technology, driven by both public and private innovation, promised much quicker sequencing, which was soon to become available even to ordinary labs (like mine, at the time!).  And once the Genome Institute (and other places, such as the Sanger Centre in Britain and centers in Japan, China, and elsewhere) were established, they weren't going to close down!  So other sequences entered the picture--microbes, other species, and so on.
It became a fad and an internecine competition within NIH.  I know from personal experience at the time that program managers felt the need to do 'genomics' so they would be in on the act and keep their budgets.  They had to contribute funds, in some way I don't recall, to the NHGRI's projects, or otherwise keep genomics in their portfolios.  -Omics sprang up like weeds, and new fields such as nutrigenomics, cancer genomics, microbiomics, and many more began to pull in funding, and institutes (and investigators across the country) hopped aboard.  Imitation, especially when funds and current fashion are involved, is not at all a surprise, and efficiency or relative payoff in results took the inevitable back seat: promises rather than deliveries naturally triumphed.
In many ways this has led to the current era of exhaustively enumerative Big Data: a return to 17th-century induction.  This has to do not just with competition for resources, but with a changed belief system, also spurred by computing power: just sample everything and a pattern will emerge!
Over the decades the biomedical (and to a lesser extent biological) university establishment grew on the back of external funding that was so generous for so long.  But it has led to a dependency.  Along with exponential growth in the number of competitors, hierarchies of elite research groups developed--another natural human tendency.  We all know the career limitations that are resulting from this.  And competition has meant that deans and chairs expect investigators always to be funded, in part because there aren't internal funds to keep labs running in the absence of grants. It's been a vicious, self-reinforcing circle over the past 50 years.
As the hierarchies grew, private donors were convinced (conned?) into believing that their largesse would lead to the elimination of target diseases ('target' often meaning those in the rich donors' families). Big Data today is the grandchild of the major projects, like the Manhattan Project in WWII, that showed that some kinds of science could be done on a large scale.  Many, many projects during past decades showed something else: fund a big project, and you can't pull the plug on it!  It becomes too entrenched politically.
The precedents were not lost on investigators!  Plead for bigger, longer studies, with very large investments, and you have a safe bet for decades, perhaps your whole career. Once such projects are started, cost-benefit analysis has a hard time paring them back, much less stopping them. There are many examples, and I won't single any of them out.  But after some early splash, by and large they have reached diminishing returns without reaching any real sense of termination: too big to kill.
This is to some extent the same story with the NHGRI.  The NIH has become too enamored of Big Data to keep the NHGRI as limited or focused as perhaps it should have been (or should be). In a sense it became an openly anti-focused-research sugar daddy (Dr Collins said, perhaps officially, that NHGRI didn't fund 'hypothesis-based research'), based on pure inductionism and reductionism, so it did not have to have well-posed questions.  It basically bragged about not being focused.
This could be a change in the nature of science, driven by technology, that is making obsolete the kind of science set in motion in the Enlightenment era by the likes of Galileo, Newton, Bacon, Descartes, and others.  We'll see.  But the socioeconomic and political sides of things are part of the process, and that may not be a good thing.
Will focused, hypothesis-based research make a comeback?  Not if Big Data yields great results; but decades of it, no matter how fancy, have not shown the major payoff that was promised.  Indeed, historians of science often write that the rationale, that if you collect enough data its patterns (that is, a theory) will emerge, has rarely been realized.  Selective retrospective examples don't carry the weight often given them.

There is also our cultural love affair with science.  We know very clearly that many things we might do at very low cost would yield health benefits far exceeding even the rosy promises of the genomic lobby.  Most are lifestyle changes.  For example, even geneticists would (privately, at least) acknowledge that if every 'diabetes' gene variant were fixed, only a small fraction of diabetes cases would be eliminated. The recent claim that much of cancer is due just to bad mutational luck has raised lots of objections--in large part because Big Data researchers' business would be curtailed. Everyone knows these things.


What would it take to kill the Big Data era, given the huge array of commercial, technological, and professional commitments we have built, if it doesn't actually pay off on its promises?  Is focused science a nostalgic illusion? No matter what, we have a vested interest on a huge scale in the NHGRI and similar institutes elsewhere, and grantees in medical schools are a privileged, very well-heeled lot, regardless of whether their research is yielding what it promises.


Or, put another way, where are the areas in which Big Data of the genomic sort might actually pay off, and where is this just funding-related institutional and cultural momentum?  How would we decide?


So what to do?  It won't happen, but in my view the NHGRI does not, and never did, properly belong in NIH. It should have been in NSF, where basic science is done.  Only when clearly relevant to disease should genomics be funded for that purpose (and by NIH, not NSF).  It should be focused on soluble problems in that context.
NIH funds the greedy maw of medical schools.  The faculty don't work for the university, but for NIH.  Their idea of 'teaching' often means giving 5-10 lectures a year that mainly consist of self-promoting reports about their labs, perhaps the talks they've just given at some meeting somewhere. Salaries are much higher than at non-medical universities--but in my view grants simply should not pay faculty salaries.  Universities should.  If research is part of your job's requirements, it's their job to pay you.  Grants should cover research staff, supplies, and so on.
Much of this could happen (in principle) if the NHGRI were transferred to NSF and had to fund under NSF-level budget policy: smaller amounts, to more people, for focused basic research.  The same total budget would go a lot farther, and if it were restricted to non-medical-school investigators there would be the additional payoff that most of them actually teach, so that they disseminate the knowledge to large numbers of students who can then go out into the private sector and apply what they've learned.  That's an old-fashioned, perhaps nostalgic(?), view of what being a 'professor' should mean.
Major pare-backs of grant size and duration could be quite salubrious for science, making it more focused and in that sense accountable.  The employment problem for scientists could also be ameliorated.  Of course, in a transition phase, universities would have to learn how to actually pay their employees.
Of course, it won't happen, even if it would work, because it's so against the current power structure of science.  And although Dr Collins has threatened to fund more small R01 grants, it isn't clear how or whether that will really happen.  That's because there doesn't seem to be any real will to change among enough people with the leverage to make it happen, and the newcomers who would benefit are, like all such grass-roots elements, not unified enough.
These are just some thoughts, or assertions, or day-dreams about the evolution of science in the developed world over the last 50 years or so.  Clearly there is widespread discontent, and clearly there is a lot of funding going on with proportionately little to show for it.  Major results in biomedical areas can't be expected overnight.  But we might expect research to have more accountability.