Academic leaders must audit departments for flaws and strengths, then tailor practices to build good behaviour, say C. K. Gunsalus and Aaron D. Robinson.
One of us (C.K.G.) teaches leadership skills and works with troubled departments. At almost every session, someone will sidle up, curious about a case study: they want to know how what happened at their university came to be known externally. Of course, it didn’t.
From what we’ve observed as a former university administrator and consultant (C.K.G.) and as a graduate student and working professional (A.D.R.), toxic research environments share a handful of operational flaws and cognitive biases. Researchers and institutional leaders must learn how these infiltrate their teams, and tailor solutions to keep them in check.
People who enter research generally share several values. Honesty, openness and accountability come up again and again when C.K.G. asks researchers to list what makes a good scientist. The US National Academies of Sciences, Engineering, and Medicine says that these values give rise to responsibilities that “make the system cohere and make scientific knowledge reliable”1. Yet every aspect of science, from the framing of a research question through to publication of the manuscript, is susceptible to influences that can counter good intentions.
C.K.G. coined the mnemonic TRAGEDIES (Temptation, Rationalization, Ambition, Group and authority pressure, Entitlement, Deception, Incrementalism, Embarrassment and Stupid systems) to capture the interlocking factors that can lead scientists astray2 (see ‘A table of tragedies’).
Consider this true story. A professor asked a beginning graduate student to verify that numbers on a data sheet matched those in a figure in a scientific manuscript, and to state in an e-mail that the data were accurate as far as he could tell. The paper described work that had been completed before the student arrived on campus and about which he knew little. Later, the student discovered that the paper was submitted the day he sent his confirmation e-mail — and that he was listed as a co-author. We can imagine his reactions.
He might be tempted to let the inappropriate authorship stand to gain a publication and avoid confronting his adviser. He could rationalize that he was new and the professor knew what was appropriate. He could feel ambition to get ahead, and pressure from an authority figure, and he would be aware of some deception — he knew he didn’t qualify for authorship because he had not been involved in the research in any substantive way. There’s no evidence that he disputed his inclusion as an author.
He was almost certainly unaware that his name had been added because reviewers who rejected a previously submitted version of the manuscript had questioned whether a single researcher could have done the work. Later, an investigation found that the professor had orchestrated the paper’s publication with “falsifying intent” to suggest that a different publication had been independently confirmed. We know about his dilemma because of a misconduct investigation that underwent congressional scrutiny. As far as we can tell, that student is no longer in science.
Here’s another real example. A research coordinator decided to resign in the face of problematic data. Before leaving, she told a postdoc that some scans had been done on only 6 of the 50 subjects in the data set — and that the results did not support the hypotheses of the principal investigator3. The tragedy here is that the postdoc didn’t have the resources to work the dilemma through and instead simply looked for a new job. Again, pressure from an authority figure and potential embarrassment made it challenging to take appropriate action.
Here’s a situation that wraps everything together. An assistant professor knows that her paper is likely to attract more citations, which her institution values, if she includes as an author a senior figure in her field who contributed nothing beyond a passing lunch discussion. The tragedies here encompass temptation, rationalization, ambition, authority pressure, deception, stupid systems — and maybe entitlement, if she is working hard and feels that the ends justify the means. If she does it once, and gets rewarded, how likely is she to do it again? (This is incrementalism.)
Take responsibility
Departments and institutions might protest that there is little they can actually do: the funding and recognition system itself favours poor methods and can lead to “the natural selection of bad science”4. We respond that institutional leaders — from those who supervise students to presidents and chancellors — must take responsibility for the working environment at their organizations.
Two fundamental steps to improve the situation are completely under local control. The first is empirically assessing the integrity of research cultures. The second is developing research-ethics education that is relevant to, and integrated with, how trainees actually learn to do science.
Unfortunately, most education provided on the responsible conduct of research, at least in the United States and Canada, focuses almost exclusively on compliance. Few students need to be able to define fabrication formally or to identify relevant sections of the Belmont Report, the 1978 document codifying how to treat human subjects of research.
What they really need is information about how to take action and to make decisions in tricky circumstances. And how to approach a senior faculty member or colleague over concerns about data in a constructive, non-threatening manner. And how to identify people who can give useful, disinterested advice. And how to blow the whistle.
Researchers such as Michael Mumford at the University of Oklahoma in Norman have found that effective programmes give students multiple strategies for analysing situations to identify ethical issues. They encourage interaction and provide real case studies of positive and negative examples with emotional impact — not just regulations or guidelines5. Students learn more than rules; they rehearse strategies for responding to tough cases and for anticipating consequences.
Context is as important as content. Courses on the responsible conduct of research are often outsourced or run online, which underscores the low priority of this instruction. Instead, courses should be taught by scientists within trainees’ disciplines and run for more than a single session. For example, a programme at the University of Kansas in Lawrence for engineering students has 15 one-hour sessions with guest lectures by active faculty members. The University of California, Berkeley, incorporates topics on responsible conduct alongside experimental design and statistics. Department heads and lab leaders should also integrate research-integrity issues into mentoring, methods courses and career seminars, as well as into group meetings, departmental seminars, journal clubs and any event at which researchers discuss how science is done.
Many faculty members will feel ill-equipped for these discussions, or worry that they will take up precious time. But if researchers and institutional leaders want to support the most rigorous research, or even just forestall scandals, they must make this commitment. A small first step is to acknowledge that TRAGEDIES exist and to discuss strategies to check them. A simple way for faculty members to approach the topic is to talk regularly about their own dilemmas and how they resolved them, successfully or not. Any faculty member who conducts research, submits proposals, reviews manuscripts or works as an editor will have anecdotes that trainees can learn from.
Beyond that, there is a wealth of case studies and instructional materials compiled through EthicsCORE, the National Academy of Engineering Online Ethics Center and the Committee on Publication Ethics. Several societies also offer relevant materials online, including the American Geophysical Union, the American Physical Society, the American Society for Cell Biology and the Society for Neuroscience.
Assess the climate
Even exemplary training will not alter a toxic work environment. The informal curriculum — what researchers observe about how work is actually done — will always trump a formal one. To support research integrity, institutions must get a handle on what their local informal curriculum is teaching, and that means evaluating the current research environment.
There are many ways to gather data to drive change. A quick self-assessment using our Academic Unit Diagnostic Tool (AUDiT) (go.nature.com/2jliagk) can help to surface indicators of vibrancy and of problems in a unit’s culture. Conducting ‘exit interviews’ with PhD students, postdocs and professors, and looking for patterns, has also proved valuable, as have institution-wide or department-wide surveys of student and staff experiences.
The only validated tool we know of in this area is the Survey of Organizational Research Climate (SOURCE). It assesses seven dimensions, including integrity norms, adviser–advisee relations and departmental expectations. Results correlate with self-reported rates of detrimental research practices: institutions with low scores on integrity norms also tend to have higher levels of reported fraud and sloppy record keeping6.
The survey can be done online in 15 minutes, and responses are aggregated to ensure individual confidentiality but still show differences across groups. That can help to identify both pockets of good practice and areas needing improvement. One large institution in the midwestern United States has used results to prompt faculty members within specific departments to talk more with graduate students about authorship, peer review and data management.
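The kind of aggregation described above can be pictured in a few lines of code. What follows is a minimal, hypothetical sketch in Python, not the actual SOURCE analysis pipeline: the department names, scores and the minimum-cell-size threshold are invented for the example, and real surveys set their own suppression rules.

```python
# Illustrative sketch only (not the SOURCE pipeline): aggregate climate-survey
# scores by department, suppressing groups too small to protect anonymity.
from collections import defaultdict
from statistics import mean

MIN_RESPONSES = 5  # hypothetical threshold below which results are withheld

# Toy data: (department, integrity-norms score on a 1-5 scale)
responses = [
    ("Chemistry", 4.2), ("Chemistry", 3.8), ("Chemistry", 4.5),
    ("Chemistry", 4.0), ("Chemistry", 4.4),
    ("Physics", 3.1), ("Physics", 2.9),  # too few respondents to report
]

by_dept = defaultdict(list)
for dept, score in responses:
    by_dept[dept].append(score)

for dept, scores in sorted(by_dept.items()):
    if len(scores) < MIN_RESPONSES:
        # Individual confidentiality: never report groups below the threshold.
        print(f"{dept}: suppressed (n={len(scores)})")
    else:
        print(f"{dept}: mean integrity-norms score {mean(scores):.2f} "
              f"(n={len(scores)})")
```

Reporting only group-level means above a minimum cell size is what lets such surveys show differences across units without exposing any individual respondent.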
As well as enabling comparisons between departments within an institution, results can be set against anonymized benchmarking data aggregated by the National Center for Professional and Research Ethics at the University of Illinois at Urbana-Champaign (which C.K.G. runs). Now no one can retort, “well, all departments in our field are that way”.
The management literature is clear that one powerful way to bring systemic organizational change is to find ‘bright spots’ — systems or places in an organization that are working well — study them and seek to spread their successful practices. For that, we need data on where the bright spots are, and the will to act.
The solutions are straightforward, if not necessarily simple.
Nature 557, 297–299 (2018)