An essay based on the article: Sarewitz, Daniel. ‘Saving Science’. The New Atlantis, 2016.
https://www.thenewatlantis.com/publications/saving-science
During the COVID-19 pandemic, science once again provided the solution to a health, humanitarian, social and economic crisis. Scientists spread across universities, research institutes, start-ups, and companies large and small improved diagnostic methods, tested drugs, and developed, produced and validated, in record time, the vaccines that will soon end the pandemic.
With billions of doses administered, we can say that society has been impressed by the results of the vaccine development effort. Part of society legitimately wonders why science can’t find solutions to so many other problems, many of which are older, better documented and less complex.
Scientific funding models
Looking at recent advances in science, the COVID-19 vaccine seems more the exception than the rule. John Horgan, in his 1996 book ‘The End of Science’, suggests that scientific activity follows a curve of diminishing returns and discusses the idea of the limits of knowledge. But the reason for so few discoveries could be something else: the science funding model itself, which doesn’t encourage problem-solving.
The idea was first presented by Donald Stokes in his 1997 book ‘Pasteur’s Quadrant’. More than 20 years later, little or nothing has changed in the way science is funded. We continue to follow the model proposed by Vannevar Bush in 1945 for the creation of the National Science Foundation (NSF) in the United States, which served as a template for funding agencies in practically every part of the world. This model is so entrenched that not even the growing evidence of its inadequacy, such as that reported in John Ioannidis’ 2005 article ‘Why Most Published Research Findings Are False’, has been enough to bring about change.
The article ‘Saving Science’ by Daniel Sarewitz, published in 2016, once again makes a sharp criticism of the model developed by Vannevar Bush. But, better than other authors, Sarewitz describes an alternative model of science funding that coexisted with the NSF model and stood out not for its pedigree, but for its effectiveness in achieving results and its efficiency in managing resources. We analyzed the article and highlighted lessons that we can apply at the Bio Bureau and in Brazil to solve economic and social problems through science and technological development.
Who is science accountable to?
Daniel Sarewitz’s article in The New Atlantis starts off strongly:
Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion.
Sarewitz summarizes the problem with a premise taken from the famous report ‘Science, the Endless Frontier’, published by the illustrious Vannevar Bush in 1945:
Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.
The nobility of the premise and the reputation of its author made it unquestionable. Bush was an engineer with a clear vision of the importance of science for the well-being of society and for national security. He was the first presidential science advisor, coordinated the efforts of more than 6,000 scientists during World War II, and initiated the Manhattan Project, which developed and built the atomic bombs, ensuring that it received top priority from the government.
I, like many, was deeply impressed when I first read his article “As We May Think”, which describes the memex machine and the associative leaps of thought that influenced the creation of hypertext.
And his premise sounded like common sense.
As the war drew to a close, Bush envisioned transitioning American science to a new era of peace […] pursuing “research in the purest realms of science” scientists would build the foundation for “new products and new processes” to deliver health, full employment, and military security to the nation.
However, in his eagerness to escape the restrictive control of the military, Bush created a system in which scientists were accountable to no one but themselves:
Politicians delivered taxpayer funding to scientists, but only scientists could evaluate the research they were doing. Outside efforts to guide the course of science would only interfere with its free and unpredictable advance.
The history of science is full of important discoveries, such as X-rays and penicillin, which reinforce the fundamental role of serendipity.
And at first glance, the investment in science seems to have paid off handsomely for society:
When Bush wrote his report, nothing made by humans was orbiting the earth; software didn’t exist; smallpox still did.
More recently, we can add to this list the discovery of gravitational waves, the human genome and the internet.
But Sarewitz suggests another explanation. Less romantic, but more robust, well-documented and objective: the induction of scientific discovery by the technological demands of the US Department of Defense (DoD).
When Vannevar Bush created the NSF along the lines of free intellects exploring the unknown in pursuit of their curiosity, the DoD didn’t stop funding and encouraging science in its own way, through what can be called the ‘military-industrial complex’. Its logic was that cost mattered less than the objective: ensuring that American military technology was the best in the world.
Often the DoD didn’t foster innovation through outright grants, as foundations and funding agencies like the NSF do, but as a client: a beta client willing to pay a premium for prototypes and Minimum Viable Products (MVPs) with limited functionality and low efficiency. This shielded the bold innovations the military needed from the ‘market’ rationale that would have condemned most of these radical and overpriced projects:
For example, the first digital computer (built in the mid-1940s to calculate the trajectories of artillery shells and used to design the first hydrogen bomb) cost about $500,000 (around $4.7 million today), operated billions of times more slowly than modern computers, took up the space of a small bus, and had no immediate commercial application. […] The earliest jet engines, back in the 1940s, needed to be overhauled about every hundred hours and were forty-five times less fuel-efficient than piston engines. […] military planners knew that jet power promised combat performance greatly superior to planes powered by piston engines. For decades the Air Force and Navy funded research and development in the aircraft industry to continually drive improvement of jet engines. [And] of the thirteen areas of technological advance that were essential to the development of the iPhone, eleven (including the microprocessor, GPS, and the Internet) can be traced back to vital military investments in research and technological development.
Sarewitz argues that Americans, though not only they, idolize ‘head in the clouds’ scientists stereotyped by Einstein and garage entrepreneurs like Steve Jobs or Bill Gates, but the inconvenient truth is that much of today’s technology exists because of the military’s investment in, and direction of, science:
Science has been important for technological development, of course. Scientists have discovered and probed phenomena that have turned out to have enormously broad technological applications. But the miracles of modernity in the above list came not from “the free play of free intellects,” but from the leashing of scientific creativity to the technological needs of the U.S. Department of Defense (DOD).
In Brazil, the health industrial complex proposed by Carlos Gadelha of Fiocruz, during José Gomes Temporão’s administration at the Ministry of Health, followed a similar model. With the Productive Development Partnerships (PDPs), the government financed innovation as a client and not as a development agency. A dozen biological medicines have had their production cycle mastered by Brazilian startups, such as Hygea, which would never have obtained private capital for this development had they not had future purchase contracts signed by the Ministry of Health. It’s unfortunate that the PDPs are suspended.
The technological proof
But it wasn’t discipline, money or military motivation that guaranteed the success of scientific research. It was the proof supplied by technological application. Sarewitz suggests that technology was a way of measuring the progress (or efficiency, or effectiveness) of science:
Science has been such a wildly successful endeavor over the past two hundred years in large part because technology blazed a path for it to follow. Not only have new technologies created new worlds, new phenomena, and new questions for science to explore, but technological performance has provided a continuous, unambiguous demonstration of the validity of the science being done. The electronics industry and semiconductor physics progressed hand-in-hand not because scientists, working “in the manner dictated by their curiosity for exploration of the unknown,” kept lobbing new discoveries over the lab walls that then allowed transistor technology to advance, but because the quest to improve technological performance constantly raised new scientific questions and demanded advances in our understanding of the behavior of electrons in different types of materials.
And he goes further: without technology, there is no way to measure the progress of science:
Technology is what links science to human experience; it is what makes science real for us. A light switch, a jet aircraft, or a measles vaccine: these are cause-and-effect machines that turn phenomena that can be described by science (the flow of electrons, the movement of air molecules, the stimulation of antibodies) into reliable outcomes: the light goes on, the jet flies, the child becomes immune. The scientific phenomena must be real or the technologies would not work.
In fact, without the technology to measure the progress of science, it remains adrift at the mercy of the whims of researchers:
The professional incentives for academic scientists to assert their elite status are perverse and crazy, and promotion and tenure decisions focus above all on how many research dollars you bring in, how many articles you get published, and how often those articles are cited in other articles. […] Universities, competing desperately for top faculty, the best graduate students, and government research funds, hype for the news media the results coming out of their laboratories, encouraging a culture in which every scientist claims to be doing path-breaking work that will solve some urgent social problem. […] The scientific publishing industry exists not to disseminate valuable information but to allow the ever-increasing number of researchers to publish more papers (now on the order of a couple million peer-reviewed articles per year) so that they can advance professionally. […] Bias is an inescapable attribute of human intellectual endeavor, and it creeps into science in many different ways, from bad statistical practices to poor experimental or model design to mere wishful thinking. If biases are random then they should more or less balance each other out through multiple studies. But as numerous close observers of the scientific literature have shown, there are powerful sources of bias that push in one direction: come up with a positive result, show something new, different, eye-catching, transformational, something that announces you as part of the elite. […] A survey of more than 1,500 scientists published by Nature in May 2016 shows that 80 percent or more believe that scientific practice is being undermined by such factors as “selective reporting” of data, publication pressure, poor statistical analysis, insufficient attention to replication, and inadequate peer review.
The consequence is poor quality science:
The number of retracted scientific publications rose tenfold during the first decade of this century, […] poor quality, unreliable, useless, or invalid science may in fact be the norm in some fields, and the number of scientifically suspect or worthless publications may well be counted in the hundreds of thousands annually. […] Richard Horton, editor-in-chief of The Lancet, puts it like this: “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.” […] an economic analysis published in June 2015 estimates that $28 billion per year is wasted on biomedical research that is unreproducible. Science isn’t self-correcting; it’s self-destructing.
In fact, not all phenomena lend themselves equally to scientific conclusions. Some are more context-dependent than others, and this can affect an experiment to the point of making it impossible to run with the right controls. In these cases, it is very important to restrict the context of the phenomenon being observed or tested as much as possible, so that we can legitimately conclude something. This is what physicist Alvin Weinberg called trans-science in a 1972 article (‘Science and Trans-Science’):
Weinberg observed that society would increasingly be calling upon science to understand and address the complex problems of modernity, many of which, of course, could be traced back to science and technology. But he accompanied this recognition with a much deeper and more powerful insight: that such problems “hang on the answers to questions that can be asked of science and yet which cannot be answered by science.” He called research into such questions “trans-science.” If traditional sciences aim for precise and reliable knowledge about natural phenomena, trans-science pursues realities that are contingent or in flux. The objects and phenomena studied by trans-science (populations, economies, engineered systems) depend on many different things, including the particular conditions under which they are studied at a given time and place, and the choices that researchers make about how to define and study them. This means that the objects and phenomena studied by trans-science are never absolute but instead are variable, imprecise, uncertain, and thus always potentially subject to interpretation and debate. By contrast, Weinberg argues, natural sciences such as physics and chemistry study objects that can be characterized by a small number of measurable variables. […] This combination of predictable behavior and invariant fundamental attributes is what makes the physical sciences so valuable in contributing to technological advance: the electron, the photon, the chemical reaction, the crystalline structure, when confined to the controlled environment of the laboratory or the engineered design of a technology, behaves as it is supposed to behave pretty much all the time.
Sarewitz fears that the predictive power science has in some disciplines may simply not exist in others:
But many other branches of science study things that cannot be unambiguously characterized and that may not behave predictably even under controlled conditions, things like a cell or a brain, or a particular site in the brain, or a tumor, or a psychological condition. Or a species of bird. Or a toxic waste dump. Or a classroom. Or “the economy.” Or the earth’s climate. Such things may differ from one day to the next, from one place or one person to another. Their behavior cannot be described and predicted by the sorts of general laws that physicists and chemists call upon, since their characteristics are not invariable but rather depend on the context in which they are studied and the way they are defined. Of course scientists work hard to come up with useful ways to characterize the things they study, like using the notion of a species to classify biologically distinct entities, or GDP to define the scale of a nation’s economy, or IQ to measure a person’s intelligence, or biodiversity to assess the health of an ecosystem, or global average atmospheric temperature to assess climate change. Or they use statistics to characterize the behavior of a heterogeneous class of things, for example the rate of accidents of drivers of a certain age, or the incidence of a certain kind of cancer in people with a certain occupation, or the likelihood of a certain type of tumor to metastasize in a mouse or a person. But these ways of naming and describing objects and phenomena always come with a cost: the cost of being at best only an approximation of the complex reality. Thus scientists can breed a strain of mouse that tends to display loss of cognitive function with aging, and the similarities between different mice of that strain may approximate the kind of homogeneity possessed by the objects studied by physics and chemistry. This makes the mouse a useful subject for research.
But we must bear the cost of that usefulness: the connection between the phenomena studied in that mouse strain and the more complex phenomena of human diseases, such as Alzheimer’s, is tenuous, or even, as Susan Fitzpatrick worries, nonexistent.
Weinberg’s solution, that scientists develop a selfless honesty and recognize the limits of their research and conclusions, hardly seems feasible even to Weinberg himself:
To ensure that science does not become completely infected with bias and personal opinion, Weinberg recognized that it would be essential for scientists to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” But doing so would require “the kind of selfless honesty which a scientist or engineer with a position or status to maintain finds hard to exercise.” Moreover, this is “not at all easy since experts will often disagree as to the extent and reliability of their expertise.”
That’s why technology must once again play the role of validating science, so that we don’t have to rely on the selfless honesty of scientists:
[If you funded] scientists and left them alone to do their work, [you’d have] ended up with a lot of useless knowledge and a lot of unsolved problems.
The current dominant paradigm will continue to crumble under the weight of its own contradictions, but it will also continue to hog most of the resources and insist on its elevated social and political status.
In the absence of a technological application that can select for useful truths that work in the real world of light switches, vaccines, and aircraft, there is often no “right” way to discriminate among or organize the mass of truths scientists create.
“Have no constituency in the research community, have it only in the end-user community.” If your constituency is society, not scientists, then the choice of what data and knowledge you need has to be informed by the real-world context of the problem to be solved.
The innovation industrial complex
My reading of the article is that science needs to be more entrepreneurial, operating through mechanisms closer to those of entrepreneurship: love of the problem, not of a particular solution or field of activity:
In the future, the most valuable science institutions will be closely linked to the people and places whose urgent problems need to be solved; they will cultivate strong lines of accountability to those for whom solutions are important; they will incentivize scientists to care about the problems more than the production of knowledge. They will link research agendas to the quest for improved solutions (often technological ones) rather than to understanding for its own sake. The science they produce will be of higher quality, because it will have to be.
We need to create new incentives: an innovation industrial complex in which the parties are encouraged to solve society’s problems and are accountable for the resources invested in solving them, not to themselves, but at least to each other. Brazil has everything it takes to launch, or take off with, this new model. We have incentive laws and a strong industry to provide resources for research and development. With the end of the withholding of FNDCT funds, we should no longer have problems financing research and development. We have a solid base of scientists and good research infrastructure, although there are many problems with procurement and supply management. Perhaps our biggest problem is distrust, but today we know that, with well-distributed incentives, we can work together without having to rely on trust.
Although this innovation industrial complex doesn’t yet exist, at Bio Bureau we work as if it did. We propose research projects supported by a clear technological development roadmap. We use agile management methods to review the contribution of preliminary results to our final objective and to prioritize the experiments that add the most value to the product. We only take part in projects in which we can hold a share of the intellectual property and a stake in the business, which encourages us to create things that work, not just generate publications.
We publish our findings openly, so that other researchers can work with our biological models and independently contribute to advancing the understanding of the problems we need to solve. We look for ‘beta’ clients willing to buy prototypes (albeit limited ones) rather than just commissioning development projects. We facilitate the licensing of the technology we develop, but we have no interest in commercializing it ourselves (because that’s not part of the problem), since we are interested in the whole industrial complex flourishing.