The majority of science we see published and read about is of a certain form. It has been polished and ‘perfected’, the straggly ends trimmed away and robust conclusions pronounced. Data mining, digging for the perfect numbers to make the results statistically significant, is shamefully commonplace. We are all only human, and we are all biased, knowingly or not, exaggerated or not. The temptation to omit an ‘odd’ result to produce the perfect graph is there – and it will be until robots and machines have ultimate control of our analysis and data. This is in part due to the grave struggle and frustration that come hand in hand with research: you know your experiment should show something, but it just isn’t working. Hopefully we then trawl back and forth over it until an error is realised, or sometimes the cells in your experiment just start ‘behaving’ again – or perhaps even your study volunteers! A bit of healthy scepticism is highly recommended when reading scientific literature, and we hope to explore this in the next series of blog posts.

The ‘perfect’ science is the science that gets published, and this is a pressure that science journals put on themselves and on their contributing authors. There are two sides to this. We of course want high-quality, robust data that we can trust. But we also want to share work in progress, incomplete mechanisms and pathways not yet fully understood; we want to know what didn’t work and to see negative results – but these are not the articles sought by journals. Editors and authors alike are obsessed with impact factors and how many citations they will attract, hounded by the need for funding and research grants that reward these superficial merits. Knowledge transfer is often talked about by societies and groups with common interests, but in my experience it is lacking. Research groups often keep unpublished work to themselves for fear of others getting there first. Hopefully this will change in the future, as we need to learn from each other’s successes and mistakes to advance STEM subjects. (We would love to hear about your own experiences – get in touch with us on Facebook or Twitter.)

Science writing is an interesting case. It is supposed to be accurate, concise, unbiased and factual (basically, no fun or excitement allowed). But in that case, why are some articles so damn difficult to read and understand? I have a degree in science, and at the end of many publications I still find myself wondering what the authors are trying to say – why can’t they just spit it out?! The meaning of their research gets smothered by worn-out ‘therefores’, ‘howevers’ and ‘furthermores’. A casual round-up of the research in their own words, leaving out any flourish meant to convince you that what they are saying is important, would be helpful to everyone. And on that matter, scientists need to engage with the public – a topic in itself for another day! It is no wonder articles get misinterpreted and blown out of proportion by the media, the public, and scientists themselves. This frustration, which you can see leaking from this blog post, has led to a new regular theme that will appear on STEM Outreach Nottingham – Naked Science. We will aim to bring you plain-English interpretations of articles recently in the news (nutritional delusions are a pet peeve!) or ones we find interesting ourselves. Please get in touch via our Facebook or Twitter if you have any suggestions or would like to write a piece yourself :).

Sophie