He was a graduate student who seemingly had it all: drive, a big idea and the financial backing to pay for a sprawling study to test it.
In 2012, as same-sex marriage advocates were working to build support in California, Michael LaCour, a political science researcher at the University of California, Los Angeles, asked a critical question: Can canvassers with a personal stake in an issue — in this case, gay men and women — actually sway voters’ opinions in a lasting way?
He would need an influential partner to help frame, interpret and contextualize his findings, and to produce an authoritative scientific answer. So he went to one of the giants in the field, Donald P. Green, a Columbia University professor and co-author of a widely used text on field experiments.
“I thought it was a very ambitious idea, so ambitious that it might not be suitable for a graduate student,” said Dr. Green, who signed on as a co-author of Mr. LaCour’s study in 2013. “But it’s such an important question, and he was very passionate about it.”
Last week, their finding that gay canvassers were in fact powerfully persuasive with people who had voted against same-sex marriage — published in December in Science, one of the world’s leading scientific journals — collapsed amid accusations that Mr. LaCour had misrepresented his study methods and lacked the evidence to back up his findings.
On Tuesday, Dr. Green asked the journal to retract the study because of Mr. LaCour’s failure to produce his original data. Mr. LaCour declined to be interviewed, but has said in statements that he stands by the findings.
The case has shaken not only the community of political scientists but also public trust in the way the scientific establishment vets new findings. It raises broad questions about the rigor of the rules governing a senior academic’s oversight of a graduate student’s research, and about the peer review that Science conducted on that research.
New, previously unreported details have emerged that suggest serious lapses in the supervision of Mr. LaCour’s work. For example, Dr. Green said he had never asked Mr. LaCour to detail who was funding their research, and Mr. LaCour’s lawyer has told Science that Mr. LaCour did not pay participants in the study the fees he had claimed.
Dr. Green, who never saw the raw data on which the study was based, said he had repeatedly asked Mr. LaCour to post the data in a protected databank at the University of Michigan, where they could be examined later if needed. But Mr. LaCour did not.
“It’s a very delicate situation when a senior scholar makes a move to look at a junior scholar’s data set,” Dr. Green said. “This is his career, and if I reach in and grab it, it may seem like I’m boxing him out.”
But Dr. Ivan Oransky, a co-founder of the blog Retraction Watch, which first published news of the allegations and of Dr. Green’s retraction request, said, “At the end of the day he decided to trust LaCour, which was, in his own words, a mistake.”
Many of the most contentious particulars of how the study was conducted are not yet known, and Mr. LaCour said he would produce a “definitive” accounting by the end of next week. Science has published an expression of concern about the study and is considering retracting it, said Marcia McNutt, editor in chief.
“Given the negative publicity that has now surrounded this paper and the concerns that have been raised about its irreproducibility, I think it would be in Michael LaCour’s best interest to agree to a retraction of the paper as swiftly as possible,” she said in an interview on Friday. “Right now he’s going to have such a black cloud over his head that it’s going to haunt him for the rest of his days.”
Only three months ago he posted on Facebook that he would soon be moving across the country for his “dream job” as a professor at Princeton. That future could now be in doubt. A Princeton spokesman, Martin Mbugua, noting that Mr. LaCour was not yet an employee there, said, “We will review all available information and determine the next steps.”
Critics said the intense competition by graduate students to be published in prestigious journals, weak oversight by academic advisers and the rush by journals to publish studies that will attract attention too often led to sloppy and even unethical research methods. The now disputed study was covered by The New York Times, The Washington Post and The Wall Street Journal, among others.
“You don’t get a faculty position at Princeton by publishing something in the Journal Nobody-Ever-Heard-Of,” Dr. Oransky said. Is being lead author on a big study published in Science “enough to get a position in a prestigious university?” he asked, then answered: “They don’t care how well you taught. They don’t care about your peer reviews. They don’t care about your collegiality. They care about how many papers you publish in major journals.”
The details that have emerged about the flaws in the research have prompted heated debate among scientists and policy makers about how to reform the current system of review and publication. This is far from the first such case.
The scientific community’s system for vetting new findings, built on trust, is poorly equipped to detect deliberate misrepresentations. Faculty advisers monitor students’ work, but there are no standard guidelines governing the working relationship between senior and junior co-authors.
The reviewers at journals may raise questions about a study’s methodology or data analysis, but rarely have access to the raw data itself, experts said. They do not have time; they are juggling the demands of their own work, and reviewing is typically unpaid.
In cases like this one — with the authors on opposite sides of the country — that trust allowed Mr. LaCour to work with little supervision.
“It is simply unacceptable for science to continue with people publishing on data they do not share with others,” said Uri Simonsohn, an associate professor at the Wharton School of the University of Pennsylvania. “Journals, funding agencies and universities must begin requiring that data be publicly available.”
Mr. LaCour met Dr. Green at a summer workshop on research methods in Ann Arbor, Mich., that is part education, part pilgrimage for young scientists. Dr. Green is a co-author of the textbook “Field Experiments: Design, Analysis and Interpretation.” He has published more than 100 papers, on topics like campaign finance and party affiliation, and is one of the most respected proponents of rigorous analysis and data transparency in social science.
He is also known to offer younger researchers a hand up.
“If it is an interesting question, Don is interested,” said Brian Nosek, a professor of psychology at the University of Virginia who has collaborated with Dr. Green.
Mr. LaCour, whose résumé mentions a stint as the University of Texas Longhorns’ mascot “Hook Em” as well as an impressive list of academic honors, approached Dr. Green after class at the workshop one day with his idea.
His proposal was intriguing. Previous work had found that standard campaign tactics — ads, pamphleteering, conventional canvassing — did not alter core beliefs in a lasting way. Mr. LaCour wanted to test canvassing done by people who would personally be affected by the outcome of the vote.
His timing was perfect. The Los Angeles LGBT Center, after losing the fight over Proposition 8, which barred same-sex marriage in California, was doing just this sort of work in conservative parts of the county and wanted to see if it was effective. Dave Fleischer, director of the center’s leadership lab, knew Dr. Green and had told him of the center’s innovative canvassing methods.
“Don said we were in luck because there was a Ph.D. candidate named Mike LaCour who was interested in doing an experiment,” Mr. Fleischer said.
Money seemed ample for the undertaking — and Dr. Green did not ask where exactly it was coming from.
“Michael said he had hundreds of thousands in grant money, and, yes, in retrospect, I could have asked about that,” Dr. Green said. “But it’s a delicate matter to ask another scholar the exact method through which they’re paying for their work.”
Dr. McNutt said that for Dr. Green to be “in a situation where he’s so distant from the student that he would have so little opportunity to really keep tabs on what was happening with him and with this data set — it’s just not a good situation.”
The canvassing was done rigorously, Mr. Fleischer said. The LGBT Center sent people into neighborhoods that had voted against same-sex marriage, including Boyle Heights, South Central and East Los Angeles. The voters were randomly assigned to either gay or straight canvassers, who were trained to engage them respectfully in conversation.
Mr. LaCour’s job was to track those voters’ attitudes toward same-sex marriage multiple times over nine months, using a survey tool called the “feeling thermometer,” intended to pick up subtle shifts. He reported that 12 percent of the voters contacted completed surveys, a response rate so high that Dr. Green insisted the work be replicated to make sure it held up.
Mr. LaCour told Dr. Green that the response rate was high because he was paying respondents to participate, a common and accepted practice. After he told Dr. Green that a second run of the experiment had produced similar results, Dr. Green signed on.
Mr. Fleischer said that sometime during the project, “Mike had the strong opinion that we would find that the gay canvassers were doing much better.”
Mr. Fleischer said he was doubtful that would be the result, noting that same-sex marriage advocates differ on whether gay or straight people are better at persuading opponents.
The LaCour-Green findings electrified some in the field. Joshua Kalla, a Ph.D. candidate at the University of California, Berkeley, saw the study presented before it was published.
“It was very exciting, partly because it wasn’t just theoretical; it was something that could be applied in campaigns,” he said.
He and a fellow student, David Broockman, who will soon be an assistant professor at Stanford, decided to test the very same approach on another political issue, also working with the Los Angeles LGBT Center. Mr. Fleischer of the center said the issue was transgender equality in Florida. Mr. Kalla and Dr. Broockman paid participants as they thought Mr. LaCour had, but their response rate was only 3 percent.
“We started to wonder, ‘What are we doing wrong?’ ” Mr. Kalla said. “Our response rate was so low, compared to his.”
There are now serious questions about whether Mr. LaCour achieved the high response rate he claimed. He has acknowledged that he did not pay participants as he had claimed, according to Dr. Green and Dr. McNutt, the Science editor in chief.
In a letter sent through his lawyer, Dr. McNutt said, Mr. LaCour wrote that he had instead offered participants a chance to win an iPad, saying “that was incentive enough.” Dr. McNutt said the claimed payments had helped convince reviewers that the response rate was as high as the study reported.
Dr. Green asked Mr. LaCour for the raw data after the study came under fire. In the letter to Dr. McNutt, Mr. LaCour said he had erased the raw data months earlier, “to protect those who answered the survey,” according to Dr. McNutt.
She said that it was possible some voters had responded to some surveys, but that it was most likely that too few had done so to provide enough data to reach persuasive conclusions.
Survey data comes in many forms, and the form that journal peer reviewers see, and that appears with the published paper, is the “cleaned” and analyzed data: the charts, tables and graphs that extract meaning from the raw material, such as piles of questionnaires, transcripts of conversations and screen grabs of online forms. Many study co-authors never see that raw material.
Mr. Kalla, trying to find out why he and Dr. Broockman were getting such a low response rate, called the survey company that had been working with Mr. LaCour. The company, which he declined to name, denied any knowledge of the project, he said.
“We were over at Dave’s place, and he was listening to my side of the conversation, and when I hung up, we just looked at each other,” he said. “Then we went right back into the data, because we’re nerdy data guys and that’s what we do.”
On Saturday, they quickly found several other anomalies in Mr. LaCour’s analysis and called their former instructor, Dr. Green. Over the weekend, the three of them, with the help of Peter Aronow, an assistant professor at Yale, discovered that statistical manipulations could easily have accounted for the findings. Dr. Green then called Mr. LaCour’s academic adviser, Lynn Vavreck, an associate professor at U.C.L.A., who confronted Mr. LaCour.
Dr. McNutt of Science said editors there were still grappling with a decision on retracting.
“This has just hit us,” she said. “There will be a lot of time for lessons learned. We’re definitely going to be thinking a lot about this and what could have been done to prevent this from happening.”