
A Climate Science Manual for Judges Discredits Itself
Many people had to look the other way or remain oblivious to the meaning of their decisions to permit the current composition of the Reference Manual.
If the spectacular decline in public trust in science reflects growing unease with powerful networks of institutions that use technical expertise to organize society, then the fourth edition of the Reference Manual on Scientific Evidence is a perfect example of these layered trends.
The Reference Manual, a joint product of the Federal Judicial Center (FJC) and the National Academies of Sciences, Engineering, and Medicine (NASEM), marries a premier institution of law with a premier institution of science, with the latter making sense of itself in terms of the needs of the former. The Reference Manual also contains an obvious conflict of interest: its climate chapter, intended to help judges make sense of climate science, was written by leaders of the activist-academic community advancing legal theory to expand climate litigation.
Called out for its offense, the FJC quickly agreed to omit the climate chapter from the Reference Manual. The Wall Street Journal editorial board sums up the situation quite well:
“This might be a case of all’s well that ends well, except that someone on the Judicial Center was either asleep or tried to slip ideology into what should be ‘a dispassionate guide,’ to borrow Justice Kagan’s words. A public accounting of how that happened would be useful.”
The problems do not stop with the climate chapter, however.
The report’s second chapter, “How Science Works,” has bearing on how courts prioritize scientific consensus reports and understand science as an institution. It urges judges to trust the abstract system of science on the strength of fables and political myths. What the chapter refuses to acknowledge is that science never generates policy outcomes through an infallible formula; those outcomes emerge from deliberation and judgment about the competing resources and goals of the political community.
The Reference Manual has thus become its own case study, an extreme example of the internal workings of science being put to political use. In doing so, it undermines its own credibility and, in the process, calls into question the authority of the core institutions that depend on scientific expertise to inform decision-making.
Righting the ship requires a full public accounting of how such a troubling edition of the Reference Manual was produced, one that connects names and faces to the decisions they made.
The second chapter of the Reference Manual purports to explain how science works. It begins with dated conventional wisdom, a fable, really, about the role of science in the 1987 Montreal Protocol on Substances That Deplete the Ozone Layer. The fable holds that, after a reduction in uncertainty and scientific consensus on the role of chlorofluorocarbons (CFCs) in ozone layer depletion, policymakers acted to ban CFC use through the Montreal Protocol. Chapter 2 explains,
“In this case, science worked exactly as it ought. Hypotheses were sifted through, scrutinized by different parties with different interests, compared against multiple lines of evidence, and iteratively modified, ultimately leading to reliable and actionable scientific knowledge.”
This simple story has been widely refuted. Although alternative tellings vary a bit, they are more realistically grounded in the chronology of events and the messy integration of law, policy, politics, and business. Science played a role, but it was not the star, and its contribution did not come through “consensus.” In the fable, science did all the heavy lifting. In reality, it was our political institutions that were hard at work.
The placement of this fable in the Reference Manual serves a specific rhetorical strategy, however. By attributing the success of the Montreal Protocol to scientific consensus, the story situates consensus reports as the enabler of decision-making. In academic circles, this is known as a linear model of science, in which unfettered research begets scientific consensus, which in turn begets public value consensus, which then begets policy action and societal benefit.
By making consensus reporting a necessary precursor to action and an inevitable outcome of scientific research, the linear model circumscribes messy but legitimate democratic decision-making processes. Chapter 2 literally waves away the “twists and turns” that enabled success and, in so doing, positions itself as the world champion of scientific consensus reports, like those produced by NASEM. The implication is that decision-makers need only fall in line with the reasoning and worldviews of scientists.
The fable is false, cliché-ridden, and, in today’s age, utterly tone-deaf. Nonetheless, the FJC-NASEM powers that be saw fit to feature it in the latest Reference Manual.
In practice, the linear model deepens the politicization of science. If scientists understand consensus reporting as a prerequisite to policymaking or court rulings, then decisions about how to represent scientific knowledge, what constitutes consensus reporting, and the choice of report authors are conceptualized as influence opportunities.
The Reference Manual series originated in the 1980s debates about federal judicial workload and included concerns about the expanding use of science in litigation. At the time of the first edition of the Reference Manual, providing judges with education in scientific modes of reasoning was considered less politically contentious than the existing alternative of court-appointed experts, which judges disliked as an “extraordinary activity” that undermined the standard adversarial process.
Throughout these early discussions about judges’ scientific needs, the central contention focused on how best to assist judges in understanding the logic and methods of science, and whether they needed help. Advocacy to more actively engage scientists with the judiciary, such as through the use of a “science court,” focused little to no attention on science as an institution vulnerable to large-scale perverse incentives, or on how processes of judicial science education could become targets for advocacy interests.
However, as part of the Congressionally directed review of the federal judicial workload, a subcommittee led by Judge Richard Posner critiqued the idea of specialized technical tribunals as inherently prone to special-interest capture:
“The risk of capture by an interest group is much greater, however, with a specialized court. The interest group will find it easier to influence appointments to a specialized bench that bears solely, and only, the responsibility for deciding the issues with which an interest group is concerned.”
In their view, the controversy about science in the courts was more deeply rooted in political disputes over policy choices, and “specialized adjudication is not a substitute for forging public agreement on a legislative agenda.”
The linear model embraced in the telling of the CFC fable precludes recognition of this potential for distortion: it installs a shared paradigm of the inherent virtue of science and scientists, regardless of the background political dispute.
Chapter 2 relies heavily on a myth of internal accountability by scientific norms when arguing that “scientists who are found to violate [scientific norms] knowingly are likely to be impeded from publishing, receiving grants, and advancing their career.” The argument harks back to the discussion of scientific ethos in a 1942 essay by sociologist Robert Merton, credited with originating the idea that science self-corrects through professional practices that “subject scientists to rigorous policing” by their peers.
Though scientists are subject to peer policing and do self-correct, they do so neither readily nor graciously. This is especially true when scientists (and journal editors) view their work as the dominant factor in policymaking. Correction often runs up against resistance from peers, internal gatekeeping, and the business model of elite journals.
There is a long history of leaders in scientific institutions invoking their community's professional norms to resist public calls for accountability. However, history also provides examples where this self-policing fails dramatically.
From the 1980s to the early 2000s, cases of researcher misconduct in biomedical research led to Congressional hearings. In one notable hearing, members of Congress grew annoyed when researchers testified that fraud was limited and manageable through scientific norms, even as the evidence before the subcommittee suggested far more fraud than the community had acknowledged.
Al Gore, at the time a young Tennessee Representative, observed that, “In one short hour, the subcommittee went from the Olympian heights of a nonproblem to the depths of a potential perjury.” Gore called this “scientific schizophrenia” and encouraged public accountability of the scientific community. “It is my belief, however, that all of us, including the scientific community, will be better off for the examination of its dark corners.”
The conflicts of interest in the Reference Manual’s climate chapter, and the fable and mythological undertones of Chapter 2, deserve considerable scrutiny. A lot of people had to look the other way or remain oblivious to the meaning of their decisions to permit the current composition of the Reference Manual.
Of late, NASEM has shown itself to struggle under advocacy pressure in its approach to climate change. The problems NASEM is facing are important because of the prestige the institution carries and its prominent position in the nation’s research power structure. These are representative problems across our institutions of knowledge. The Reference Manual controversy suggests NASEM is as good as captured by its philanthropic funders and their political agenda.
If one wanted to move the needle on this sad situation and rebuild public trust in science to improve societal stability, then holding the FJC and NASEM to account is an ideal place to dig in. The end game should be to identify and scrutinize undesirable conduct and processes that clash with public expectations of an ethical research enterprise. Both entities were created by Congressional statute, so it is well within legislators’ purview to demand and deliver a public account of what is happening in science’s dark corners.
Jessica Weinkle is an associate professor in the Department of Public and International Affairs at the University of North Carolina Wilmington and a Senior Fellow at The Breakthrough Institute.
