Tuesday, August 30, 2016

Don’t Neglect Your Data!

Histogram of research data (Photo credit: Wikipedia)

This situation comes up a lot, especially when I speak to social scientists.

A student has one year left before they have to submit their PhD thesis. They have written tens of thousands of words. They have read countless articles. They have an outline of their thesis and a few drafts of some of their chapters.

They have some good ideas, they have a good knowledge of their field, and have done all their data collection … but they have done zero data analysis. Sometimes, they won’t have transcribed their interviews. Some won’t have even listened to the interviews since they conducted them (perhaps several years ago).

This leaves them in a very precarious situation: with only a few months remaining, they do not know whether there is anything useful in their data, or how to find out. They have to learn how to do the analysis for the first time under enormous pressure, with no opportunity to redo any of the practical work should there be a problem with the data (or to further investigate anything interesting).

It’s a nightmare situation, but so easily avoidable if you start learning how to do data analysis as early as possible. This includes:
  • Learning how to use the software you need (e.g. NVivo)
  • Putting data in an appropriate format (transcription)
  • Learning basic analytical techniques (coding)
You don’t need a full data set to get started; it doesn’t even need to be real data. Starting early means you will be more comfortable with the analysis once you do have a full data set, and understanding the analytical process will help you collect better quality data. Don’t neglect your data, and don’t treat the analysis as something you can throw together at the end.

What to do if you have neglected your data until now

First, make sure you know where the data is, then start on whatever formatting needs to be done. For example, if you have audio recordings of interviews, these will probably need to be transcribed. Many underestimate how long this takes, so start immediately.

Once you have one transcribed file, that’s enough to load into whatever software you are using so you can play around with the basics of analysis. If you know someone who has used the software before, ask them nicely if they can show you what their process is for analysing data. If you don’t know anyone, find some online tutorials to get you started.

You must then get all your data into a usable state. Until this is done, you don’t really have anything to work with. It’s time-consuming and can be tedious, but it has to be done. Try to put together a checklist so you have a consistent process to follow. Take note of where you save every file, and ALWAYS keep an unaltered copy of the original raw data.
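If you find it helps to automate that consistent process, the intake step can be scripted. The sketch below is a minimal illustration (the file and directory names are hypothetical, not from the post): it copies each raw file, such as an interview recording, into a read-only archive and records a SHA-256 checksum, so you can later verify that the original data has not been altered.

```python
import hashlib
import shutil
import stat
from pathlib import Path

def archive_raw_file(source: Path, archive_dir: Path) -> str:
    """Copy a raw data file into a read-only archive directory and
    return its SHA-256 checksum for later verification."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    dest = archive_dir / source.name
    shutil.copy2(source, dest)   # copy2 preserves timestamps/metadata
    dest.chmod(stat.S_IREAD)     # read-only: protect the archived copy
    sha = hashlib.sha256(dest.read_bytes()).hexdigest()
    # Keep a running log of checksums next to the archived files.
    with (archive_dir / "checksums.txt").open("a") as log:
        log.write(f"{sha}  {dest.name}\n")
    return sha
```

Run against each recording as you collect it (e.g. `archive_raw_file(Path("interview01.wav"), Path("raw_archive"))`), this gives you the unaltered master copy the paragraph above recommends, while you work only on duplicates.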

Only once you have the data in an analyzable form can you start to figure out whether you have anything valuable. The earlier you do this, the better.

"Box of floppy disks and USB memory stick" by JIP - Own work. Licensed under CC BY-SA 3.0 via Commons.

Impact Sensationalism: A Means to an End?

Source: https://theresearchwhisperer.wordpress.com/2016/08/30/impact-sensationalism/

Photo by kazuend | unsplash.com
Jenn Chubb is in the final stages of a PhD at the University of York examining the philosophical effects of the impact agenda in the UK. Jenn’s background is in philosophy, and she has a particular interest in virtue ethics, academic freedom, and the philosophy of science. She tweets at @jennchubb.

Richard Watermeyer is a sociologist of education specialising in critical social studies of higher education. His research interests include higher education policy, management and governance; academic identity and practice; public engagement; impact; and neoliberalism. He tweets at @rpwatermeyer.

We recently published an article in the journal Studies in Higher Education titled ‘Artifice or integrity in the marketization of research impact? Investigating the moral economy of (pathways to) impact statements within research funding proposals in the UK and Australia’.

Our paper reveals that the need to articulate the potential impact of research, where it is not immediately obvious, can lead academics to embellish and create stories or charades about the impact of their work. Impact projections were described as “illusions”, “virtually meaningless”, and “made-up stories” that were nonetheless seen as necessary in order to secure a professional advantage.

This is perhaps not entirely surprising. After all, in making a pitch for funding, researchers are inherently ‘selling’ themselves or their ideas. Polishing or enhancing claims may be the default position to make sure that a proposal stands out. However, the extent to which this is being done may signal a deeper, systemic moral dilemma concerning the integrity of competitive research funding processes.

Impact in the UK and Australia

In recent years, research councils in the UK and Australia have required applicants to include projections of potential impact in their funding proposals. In addition, impact is a component of the Research Excellence Framework (REF), the exercise used to assess the quality of the UK’s research. A consultation concerning impact as a companion piece to Australia’s own national research evaluation exercise, Excellence in Research for Australia, has also just been completed.

‘Impact’ (the effect and influence that research has on the non-academic environment) has been the subject of significant debate recently. Its critics claim that it has the potential to impede academic freedom, whilst its proponents cite the enrichment of research and public accountability.

Importantly, Research Councils UK maintain that excellent research is the primary criterion for the assessment of grant applications and that impact is a secondary concern. They make clear in their policies that where no route to impact is perceived, a researcher should instead use that part of the application to explain why this is the case. In their Pathways to Impact advice, they state:
“It is expected that being able to describe a pathways to impact will apply for the vast majority of proposals. In the few exceptions where this is not the case, the Pathways to Impact statement should be used to fully justify the reasons why this is not possible”.
Those critical of the impact agenda suggest that asking the impact question (‘how will non-academics benefit from this research?’) is just an indirect way of asking what the impact will be. There is still little evidence as to how much an impact statement can influence the outcome of a funding decision.

Some academics in our study claimed that impact was not a deciding factor when they assessed grants; others claimed the complete opposite. A researcher’s interpretation of, conceptualization of, and confidence in the policies in place influence their behavior in responding to this agenda. There appears, therefore, to be a disconnect between what funders say they require and how this plays out in practice within peer review.

Despite the messages set out by policy, our study found that academics experience a clear sense of moral tension when having to answer the impact question. This was particularly the case for academics in theoretical disciplines or ‘pure’, blue-skies research.
If I want to do basic science I have to tell you lies - UK, Professor.
The primary motivator for embellishment was the need to secure research funds, a regrettable but perhaps necessary evil:
Would I believe it? No. Would it help me get the money? Yes - UK, Professor.
Many claimed that this was a symptom of academic life and expressed a survival instinct over their decisions to embellish the truth. Participants felt that fiercely upholding any moral imperative to be truthful could risk one’s own job, perhaps suggesting that the moral question of impact lies not with the academics themselves, but with those demanding it:
If you can find me a single academic who hasn’t had to bullshit or bluff or lie or embellish in order to get grants, then I will find you an academic who is in trouble with his [sic] Head of Department. If you don’t play the game, you don’t do well by your university. So anyone that’s so ethical that they won’t bend the rules in order to play the game is going to be in trouble, which is deplorable - Australia, Professor.
The other concerns articulated by our interviewees were more localized, with academics reporting that the need to predict impact at the outset of the research was simply unscientific and not feasible:
The idea that impact could be factored in in advance was viewed as ‘a dumb question put in there by someone who doesn’t know what research is. I don’t know what you’re supposed to say, something like “I’m Columbus, I’m going to discover the West Indies?!”’ - Australia, Professor.
Others claimed that this ran counter to the research process itself, and that it was in direct conflict with the very philosophies and principles of science:
It’s disingenuous, no scientist really begins the true process of scientific discovery with the belief it is going to follow this very smooth path to impact because he or she knows full well that that just doesn’t occur and so there’s a real problem with the impact agenda - and that is it’s not true it’s wrong - it flies in the face of scientific practice - UK, Professor.
To ultimately conform to what many described as a neoliberal mandate in a now marketised higher education environment seemed, for a large number of our academics, to be the only answer. However, for some, this was tempered with the ability to draw a distinction between impact sensationalism and being disingenuous in applications:
They’re telling a good story as to how this might fit into the bigger picture. That’s what I’m talking about. It might require a bit of imagination, it’s not telling lies. It’s just maybe being imaginative - Australia, Lecturer.
Perhaps integrity is therefore not at risk - perhaps it is just “creative people telling creative stories”, as one of our interviewees put it. The picture on this front is less than straightforward.

Is integrity at risk?

For some, the prominence of perceived game-playing, insincerity, and a tacit coercion to inflate the truth risks undermining the view that academics are truthful authorities, worthy of the trust of the communities that support them. For others, the moral obligation sits not with those at the agenda’s mercy but with those who impose it and, ultimately, those who assess it. The truth is perhaps somewhere in between.

We have seen how the research councils in the UK and Australia repeatedly reassure academics that the primary assessment of grants is the excellence of the research itself.

When our paper was published, it prompted significant debate on Twitter and in online news outlets such as The Conversation. The Australian Broadcasting Corporation (ABC) also ran a feature on it, including comments from Professor Aidan Byrne (former Chief Executive Officer of the Australian Research Council), who stated that ‘a small number of proposals did go too far, but most were accurate and all were heavily examined’. He claimed that peer review will protect integrity and sift out bogus claims of impact:
“The proposals are reviewed by experts who do have a really good and sharp sense of what’s plausible and what’s implausible, and what’s fictitious and what’s not” - Professor Byrne, Chief Executive Officer of the Australian Research Council.
Indeed, our own research was described using sensationalist headlines that were elaborations of what we found. There are, however, issues to be discussed given the testimonies of academics who are struggling, and feeling the need to embellish and dramatize the important work they do.

Despite reassurance from those who create research policy, the testimonies provided in our study tell a somewhat problematic, less straightforward story about research impact policy.

Our findings raise the concern that policies encouraging behaviors that run counter to the intrinsic moral fabric of academics risk becoming ineffective for all parties.