
Thursday, 7 March 2013

Horizon scanning

Posted by Linda Penn

As I watched the midnight sun skim the lake horizon in Finland, I should have remembered that there are ‘no free lunches’. Collating the European Diabetes Prevention Study (EDIPS) data required much effort and was mostly a spare-time endeavour for Jaana, Annemieke, and me. Our first tangible reward, to which all co-authors have contributed much thought and substantial quantities of virtual red ink, is our publication in PLoS ONE.

[Image: Midnight sun in Finland]

There are three main messages in this paper:

1. Type 2 diabetes can be prevented by lifestyle intervention in European countries, including the UK, and not just in Finland (which was already known).

2. Weight loss has a diabetes-preventive effect, and maintaining weight loss for more than a year has a greater (quite dramatic) preventive effect.

3. How best to identify people at high risk, so that interventions can be offered to the right people, remains unclear (and research is needed).

Drafting, redrafting and re-redrafting this paper was a learning experience and a privilege. As any first author may appreciate, coordinating input from my co-author list of illustrious professors might incur challenges. Since publication I have received several complimentary responses from colleagues - such is the power of social media - and I am satisfied that this paper represents good research. However, it is just research. Translation to sustainable service provision remains the goal. Finland has made progress in this direction, but in the UK it is raining costs and cut-backs.

Recent NICE guidance for prevention of type 2 diabetes confirms the need for intensive lifestyle intervention services, and we are currently evaluating a translational feasibility study in Middlesbrough that is based on our EDIPS experience. We also plan further EDIPS publications. We agreed outlines for these during a November meeting in Helsinki. Finland was different in November - no sun, horizontal sleet and frozen snow - but still warm hospitality.

I have the additional bonus of including this paper in my portfolio towards my PhD by publication, which is of course the soft option for a PhD – or perhaps not. I wonder if I can include a Fuse Blog post as supporting documentation?

Tuesday, 29 January 2013

On potty training a black swan: exploring the limits of the evidence base in public health

Posted by Heather Yoeli

In recent years public health has been trying to make itself more evidence-based, which is probably why policy making, commissioning and service providing organisations seem to be listening to and funding research centres such as Fuse at the moment. Evidence is an epistemology, a theory of knowledge; it is the main epistemology in which science believes.

The primacy of evidence was first asserted by the seventeenth- and eighteenth-century philosophers Locke and Hume, who created empiricism as the theory that our only way of knowing anything is to see, hear, smell, touch or taste it. Empiricism was developed by Popper into the methodology of twentieth-century scientific research, with the aid of a few black swans along the way. The processes and protocols for creating evidence bases from scientific research continue to evolve, with new mechanisms for ensuring rigour, validity and trustworthiness of peer review developing in response to new challenges.

At this present moment, however, one of the greatest challenges in my life is in trying to persuade my daughter (as many twee American parenting websites would put it) ‘to go potty in the bathroom’. As a research scientist I have sought help from the evidence-based publication Poo Goes to Pooland. Poo Goes to Pooland was written out of the doctoral research of local psychologist Tamsin Black, and is the story of Poo, who is lonely and unhappy in a child’s bottom and wants to go home to his mummy in Pooland, and of how Pooland is down the toilet and we can all send our poos there too. For clever empiricists, however, Poo Goes to Pooland has one inherent problem, and my daughter noticed it before me, asking ‘but Mummy how do we know that Pooland is down the toilet? How do we know that Pooland isn’t on the floor or in my knickers? Can you not pwove it?’ I can’t pwove it. She’s right. And she’s too bright to be fobbed off by my attempts to show her Poo’s Mummy peeping out from behind the U-bend, either…

And so, the need to locate Pooland reveals the limits of empiricism and the drawbacks of evidence-based methodology. I have therefore been attempting to use some alternative epistemologies to persuade her:

1. The rationalist method (Descartes, Leibniz etc): We can’t prove that Pooland is down the toilet, but we can theorise that it’s there. Thousands of toddlers have successfully been taught to poo in the toilet using Poo Goes to Pooland, so Pooland must be down there somewhere.

2. The psychoanalytic method (Jung, Klein etc): It’s a category error to demand proof. Poo Goes to Pooland is a story, a piece of literature. Pooland is a metaphor created to teach us where to poo, not a real place.

3. The ontological method (Anselm, Heidegger etc): We don’t need to prove it as such. Few of the millions of people who believe in God value any attempt to verify his existence, because even by believing something to be there we can create its existence for ourselves. Pooland is.

There are alternatives, then, to the empiricist epistemology of the evidence base. But I can report that none of them are working either; my daughter is still not going potty in the bathroom.

There are, of course, evidence-based alternatives to the Poo Goes to Pooland method. And even within evidence-based potty training practice, evidence bases can produce many approaches:

4. The medical approach: Stop thinking about Pooland for just a minute. Is there something making it sore for you to poo on the toilet?

5. The behavioural approach: It doesn’t matter where Pooland is. Just sit calmly on the toilet reading The Lorax until you poo and you’ll get a Peppa Pig sticker on your chart.

6. The hermeneutic approach: Let’s talk about Pooland, shall we? Let’s chat about poo and toilets for a bit…

So far, none of these are working either. And then, as is usually the case in most areas of public health, epistemologies get tangled up with ideologies and politics:

7. The communist/kibbutz method: All of you together, arrange your potties into a neat line and sit and poo together. Nobody moves until everyone poos. (Just to clarify, I haven’t tried this.)

8. The authoritarian method: You will poo where I tell you because I am your parent. End of. (Again, to clarify, I’m not going there.)

9. The attachment-parenting approach: If we keep breastfeeding, co-sleeping and home-educating you for long enough and relax, you’ll poo in the toilet eventually, even if it takes a few more years.

So… it’s all providing a brilliant introduction to epistemology: a fascinating insight into the range of ways we can think about what we know, and my daughter is having a fantastic time intellectually out-manoeuvring me. But she’s still not going potty in the bathroom, so all further ideas (and/or donations of Ecover) are most welcome.

Monday, 17 September 2012

This is what evidence is made of

Posted by Jean Adams

I recently re-joined the systematic review club. I did a systematic review once. It was fine. I learnt how to do it, I did it, I published it. It was a good learning experience. Certainly good enough to learn that I didn’t need to do another one in a hurry. Or at least I didn’t need to do the nitty-gritty reviewing myself. But things happen and before you know it you’re second reviewer on a systematic review that you just can’t pass on to anyone else. 

[Image: I love a good (and sometimes not so good) radio drama]

There are some jobs that were designed to be Friday afternoon jobs. Jobs that clearly need to be done, but that you don’t need to think too hard about. Jobs that you can do whilst catching up on BBC Radio 4 drama serials on the iPlayer. Reformatting the tables in your latest rejected manuscript to meet the exact, esoteric, requirements for the next journal in your list. Adding references from Endnote into Word.

I love a little pile of Friday afternoon jobs. As they don’t require much brain input, I find them easy to churn through and they make me feel unusually productive. Productive, unthinking, with added radio stories. Just what I need to end the week.

In contrast, other jobs are very clearly Tuesday morning jobs. Jobs that need sustained, un-interrupted thought. Jobs where even Radio 3 is intrusive. Drafting the justification section of grant applications. Deciding what exactly is the key message in your latest paper. Working out the analysis plan for the 3MB of data you’ve just received.

I don’t mind Tuesday morning jobs. If I have the time, the space, the right environment and am making progress, I really like the satisfaction of biting off big chunks of Tuesday morning jobs. In fact, high-quality Tuesday morning jobs are what keep me in the job.

I know some people don’t mind systematic reviewing. I know some people even positively enjoy systematic reviewing. These are wonderful people. We need systematic reviews and we need systematic reviewers. I am pleased to count systematic reviewers among my friends. But, really, I am not a systematic reviewer. I’m always happy to come up with the idea and justification for a systematic review on a quiet Tuesday morning. But the real-life screening and data extraction, the bread and butter of systematic reviewing, are not my bag at all.

The problem, I have decided, with systematic reviewing, is that it is neither a Friday afternoon job, nor a Tuesday morning job. You need to concentrate to decide if the paper you’re reading meets all of the inclusion criteria you’ve set. You can’t possibly listen to radio stories whilst you’re systematic reviewing. But you don’t really have to come up with any great new ideas. The ideas happened way back on a Tuesday morning in November when you drafted the protocol.

I procrastinate outrageously when I am systematic reviewing. I check Twitter. I make a cup of tea. I decide I’m procrastinating too much and that I must not do anything but review until I have reviewed 10 more papers. I wonder what’s happening in the tennis and convince myself that I’ll review much better if I just check the scores and get it out of my system. I think of blog posts I could write.

But, as I am slogging my way through and slowly passing papers from the ‘to screen’ to the ‘screened’ pile, I try and remember that it is systematic reviews that we hope might guide decisions; that this pain is what evidence is made of.

Thursday, 13 September 2012

On evidence

Posted by Simon Howard

In my first week at medical school, one of the professors warned that most of what we were to be taught was factually wrong. It was an arresting statement, but it may have been true: studies have shown that textbooks and experts frequently lag behind the evidence, sometimes recommending “treatments” that are actually known to be harmful.

Do Primary Care Trusts do the same? PCTs, like the one I work in, currently commission the majority of NHS services provided to patients in their catchment areas (though not for much longer). Sometimes, academics get frustrated with PCTs for seemingly doing things that either have little evidence, or appear to contradict it altogether. Given that evidence is the bedrock of public health, and given the potential for decisions to affect whole populations, this might seem worrying.

In defence of PCTs, a lot of evidence based work does happen. Most major pieces of work include a review of academic literature at an early stage, and follow the findings. The annual Joint Strategic Needs Assessment and regular detailed Health Needs Assessments also take into account published literature and local and national data in a fairly systematic way.

But there are lots of barriers to following the evidence. Books and books could be written on this topic, from the applicability of evidence in the real world to deciding if research is really relevant to a particular population. But I’m no expert, and I’m not going to try and describe anything technical, complicated, or even remotely clever. These are just a few examples of practical barriers to following the letter of the academic evidence in public health.

One huge barrier is – as with most things in life – money. In a world of ever-tightening budgets, an academic’s seemingly reasonable intervention can be unaffordable. As an extreme example, research by the FAA and CAA suggests that three or four lives would be saved in an average aircraft fire if all passengers were provided with smoke hoods. However, the vanishing rarity of in-flight fires, the enormous cost of supplying and maintaining smoke hoods, and the cost of the fuel required to propel them around the world, all make this proposal financially unjustifiable.
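
To see the shape of that argument, here is a back-of-the-envelope sketch. Every figure in it is my own illustrative assumption (fleet size, hood costs, fire frequency), not an FAA/CAA number, so treat it as a sketch of the arithmetic rather than the actual analysis:

```python
# Back-of-the-envelope cost-effectiveness sketch for smoke hoods.
# Every number below is an illustrative assumption, NOT an FAA/CAA figure.

seats_in_fleet = 1_000_000          # assumed passenger seats across a national fleet
hood_unit_cost = 50.0               # assumed purchase cost per hood
annual_upkeep_per_hood = 10.0       # assumed inspection/replacement cost per year
annual_fuel_cost_per_hood = 5.0     # assumed extra fuel burn from the added weight
years = 10                          # assumed appraisal period

survivable_fires_per_year = 0.5     # assumed: one survivable cabin fire every two years
lives_saved_per_fire = 3.5          # midpoint of the "three or four lives" estimate

total_cost = seats_in_fleet * (
    hood_unit_cost + years * (annual_upkeep_per_hood + annual_fuel_cost_per_hood)
)
lives_saved = survivable_fires_per_year * years * lives_saved_per_fire

print(f"Total cost over {years} years: {total_cost:,.0f}")
print(f"Expected lives saved: {lives_saved:.1f}")
print(f"Cost per life saved: {total_cost / lives_saved:,.0f}")
```

Even with these made-up inputs the cost per life saved runs to eight figures, far beyond what health systems normally pay for a comparable gain, which is why the rarity of fires and the recurring fuel and maintenance costs dominate the decision.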

Not all examples are quite so clear-cut. Sometimes, instead of choosing not to do something, PCTs try to cherry-pick the best bits of interventions in a way that is almost certainly infuriating to the academics who pioneered them, and possibly less effective in practice. But, sometimes, doing something is better than doing nothing.

Often, there can be a big lag between publication of evidence and its implementation. One reason is the complex contractual nature of commissioning: it’s often difficult to make small changes to services that have already been commissioned. The constant pressure to reduce costs incentivises longer contracts which spread the financial risk, but which also increase the evidence-practice lag. I’m sure it’s deeply frustrating to be an academic shouting “there’s a better way to do this” while services continue unchanged.

There’s also a political element to public health. Decisions to cut services that are no longer supported by evidence are particularly tricky. In England and Northern Ireland, the evidence that cervical screening in women under the age of 25 causes more harm than good has led to a withdrawal of the service in this age group. The clear evidence, combined with clear recommendations from the World Health Organisation and National Screening Committee hasn’t stopped this becoming a topic for political debate and petition, and hasn’t (yet) changed policy in Wales or Scotland. It seems likely that this political element will play a bigger part in decision making as public health moves to the overtly politicised world of local authorities.

To me personally, the most frustrating barrier to following the evidence is an inability to access it. It continues to baffle me that the NHS doesn’t have anything like the level of straightforward desktop access to literature that university colleagues have. In the 21st century, it seems crazy that I sometimes have to ask the BMA to take a paper journal off a physical shelf, scan it in, and email it to me as the only practical, cost-effective way to access a paper that’s of general interest, rather than something specific to any individual project.

I think a latent awareness of what’s going on in academia is important in public health. It might not matter so much when someone’s doing a big literature review prior to introducing a new service, but it can help with horizon-scanning, with those little everyday decisions that aren’t worthy of a trawl through the literature, and with planning for the future. This is something we can all play a part in: public health professionals probably need to broaden their awareness of the academic things going on around them, and academics probably need to shout louder about the latest developments in their fields. As an associate member, I’m probably biased, but I think Fuse is great at helping both groups.

Wednesday, 11 July 2012

How to get the evidence message across

Guest post by Katie Cole

The mantra of “but there’s no evidence for it!” is one I’ve said or thought many times, whether at work, in discussions with family and friends, or when shouting at the BBC Today programme.

But as an early-career academic, I’m increasingly aware there is a complex web of considerations when trying to translate evidence into policy, and that there are times when chanting our mantra may do more harm than good.

I recently attended a Royal College of Physicians/Alma Mata seminar on alcohol advocacy. At one point, a panel member suggested that social norms interventions to address excessive alcohol consumption on university campuses “sounded very promising” and that policy-makers were considering them. I’ve looked at the US research into these interventions: a national evaluation concluded that they are ineffective in reducing alcohol consumption. Whilst I could have made this point, I felt the issue was more complex than that. Don’t we need to test the policy in the UK drinking context to make a more robust contribution to the debate? Shouldn’t we seek to support policy-makers to integrate evaluations into pilots, or to finance full-scale trials?

Another challenge came during a placement at a Primary Care Trust. I was involved in the Individual Funding Request process, where the PCT considers funding treatments and procedures not normally available on the NHS. I worked up a number of cases, looked at the evidence base and presented each case to a panel of clinicians and non-clinicians. In most cases, the evidence base was of poor quality: finding a case series for the exact condition and treatment in question represented a minor professional achievement. Usually, the case series found that, lo and behold, most cases improved, which often sparked disproportionate optimism that we had a justification for funding the treatment. In contrast, when I found a randomised controlled trial with only modest results, the panel were more inclined to propose not funding the treatment. Here I was challenged to explain the difference between the strength of the evidence base and the size of the effect, whilst at the same time acknowledging the difficulty of decision-making against a poor evidence base.
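
That distinction between the strength of the evidence and the size of the effect can be made concrete with a toy example. The numbers below are invented for illustration (not from any real funding request): an uncontrolled case series can report an impressive improvement rate while saying almost nothing about the treatment effect, whereas a trial with a control arm can pin down an effect even when it is modest:

```python
# Toy contrast between evidence strength and effect size.
# All counts are invented for illustration.
from math import sqrt

# Case series: 8 of 10 patients improved. It sounds impressive, but with no
# control arm we cannot separate the treatment effect from natural recovery.
improved, n = 8, 10
print(f"Case series: {improved / n:.0%} improved (no comparator, effect unknown)")

# RCT: a modest but estimable effect, with an approximate 95% confidence
# interval from the normal approximation to the risk difference.
treat_improved, treat_n = 220, 400
control_improved, control_n = 180, 400
p1, p0 = treat_improved / treat_n, control_improved / control_n
diff = p1 - p0
se = sqrt(p1 * (1 - p1) / treat_n + p0 * (1 - p0) / control_n)
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"RCT: risk difference {diff:+.0%} (95% CI {low:+.0%} to {high:+.0%})")
```

The case series gives the bigger number; the trial gives the better evidence. A panel that funds on the 80% figure and declines on the 10% one has read the hierarchy upside down.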

A final challenge has been in developing The Lancet UK Policy Matters website, which includes short summaries of the evidence underpinning a range of UK health-related policy changes. In developing the format of the summaries, we had to be very clear to authors that statements asserting the intended benefit of a policy should not be included in the ‘evidence’ section of the summary – that was reserved for peer-reviewed research or evaluations. Our experience in guiding authors highlighted how meticulous we as professionals need to be in the language we choose when drawing on our scientific expertise.

Above all other lessons, these experiences have taught me that advocating for evidence in policy making is challenging and complicated, and requires skill. It demands an understanding of the evidence itself – its strengths and limitations – but also of the policy making process. Whilst these issues can be difficult to reconcile, the above experiences have only strengthened my drive to communicate effectively with all actors in the policy making process.

Katie Cole co-founded The Lancet UK Policy Matters website with Rob Aldridge and Louise Hurst.