The Effectiveness of Public Health Interventions


Introduction

The health of the whole population is an important issue. Conditions likely to affect the whole population, or large sections of it, are considered public health issues and are the subject of specific health promotions and interventions. These can take a range of forms: raising awareness of symptoms or of lifestyle factors implicated in developing a particular condition; managing health conditions to improve quality of life and/or longevity; or promoting recognition of symptoms so that early treatment can be obtained. Public health interventions are developed to address identified public health issues (National Institute for Health and Care Excellence, 2015). Once these are put in place, it is important to assess their impact and effectiveness, both in respect of the present situation and to increase the knowledge base for the development of further interventions (Brownson et al., 2010). This essay will consider the ways in which the effectiveness of public health interventions can be determined.

Discussion

One of the main factors that needs to be considered in public health interventions is cost-effectiveness (The King’s Fund, 2014). The NHS faces increasing demands on its services, so when developing new interventions or reviewing those already in place, cost-effectiveness is one of the most important issues. A further aspect of the effectiveness of public health interventions is the extent to which they have demonstrably achieved the aims set for them (Scutchfield & Keck, 2003). These two areas will now be considered in greater detail.

There is a finite budget available to the NHS to provide healthcare, and it has to be used in the most efficient way. The economic constraints in place for some time have created an even greater need for financial efficiency. One way this can be achieved is by reducing the number of people suffering from conditions considered avoidable. Conditions such as diabetes and obesity, for example, are considered largely avoidable if people change lifestyle habits to improve their health. A range of public health interventions has therefore been directed at these issues to prevent people from becoming ill, as this would represent a substantial saving in the costs of treating subsequent illness. It would also benefit the public, in that people would lead longer, healthier lives. However, preventative interventions present difficulties in measuring their effectiveness. A reduction in the number of people developing diabetes, for instance, may be attributable to a public health intervention, or it may be the result of one or more other factors. Some of the individuals measured might never have developed the condition anyway, so it cannot be proven that the intervention itself was solely responsible for their remaining well. As it can be difficult to measure the effectiveness of outcomes accurately, cost-effectiveness is also difficult to assess. Historically, preventative health promotion has been a problematic area because of the difficulty of establishing effectiveness, and this made obtaining funding for such activities particularly challenging. However, increasing demand for services has prompted a shift in perspective and a greater focus on prevention, so the means of evaluating public health interventions in this area has become important. Although financial implications cannot be the sole driver of health promotion, they are of necessity a major factor, as the NHS is obliged to produce evidence that its funding has been properly and effectively spent.

The effectiveness of health promotion from the perspective of improving the population's health, rather than cost, should be the primary motivation for interventions. To improve public health, there is a range of options for intervention. Frieden (2010) described the impact of health interventions in the form of a five-tier pyramid, with the bottom tier being the most effective because it reaches the largest section of the population and has the greatest potential to improve the social and economic determinants of health. The higher tiers of the pyramid relate to areas where the individual is helped to make healthy choices. Topics within the bottom tier include the improvements in health brought about by changing lifestyle habits such as smoking. Wide-scale promotions and interventions have been in place for many years, reducing the number of people who smoke and encouraging others not to begin. As a result, risk factors for health issues such as heart conditions have been reduced. While this may not completely prevent some people from developing such conditions, in public health terms, which take the wider perspective, a higher proportion of people will be at lower risk. The effectiveness of interventions in this case can thus be measured by the proportion of the population who currently smoke, who have given up smoking, or who have started smoking, compared with previous years' records (Durkin et al., 2012). The number of people coming forward for help through smoking-cessation services offered by their GPs can also be measured, together with the effectiveness of those interventions in helping people achieve their goal of stopping smoking.

The longstanding interventions to reduce the number of people with HIV/AIDS also fall within the same category of public health interventions (as just described in respect of smoking), once it became clear that the condition posed a potential risk to a large section of the population. In this instance, there was a large amount of public health promotional activity when the issue first became known in the 1980s, but this has largely subsided, with few if any national high-profile promotions or interventions currently in place (Bertozzi et al., 2006). However, the risk has not been eradicated, and there has been an increase in older people developing the condition (AVERT, 2015). This may be because they do not consider themselves at risk, or because they were not targeted by the original campaigns, which focused on homosexual communities, people who injected drugs, and sexually active younger adults; married couples were not then considered a primary target audience. This demonstrates the need for ongoing interventions, particularly in terms of public awareness, to ensure a consistent and improving impact (AVERT, 2015). Unless a health risk has been eradicated, continuing interventions are likely to be needed to maintain public knowledge. The interventions directed at the wider population for HIV/AIDS and smoking are examples of the bottom sections of Frieden's pyramid.

When interventions are applied at the top levels of Frieden's (2010) pyramid, they address individuals more directly rather than the whole population. It could be argued that such interventions would have a greater overall impact, as any population-level change ultimately requires each individual to change. Unless each person is reached by an intervention and perceives it as a valuable change for them, publicly directed interventions will have reduced effectiveness. National interventions are of necessity broadly based and will therefore not reach all the people at whom they are aimed, as some may feel that the message does not apply to them. Interventions targeted more specifically at individuals can take into account their socio-economic status and other factors, making their applicability more readily apparent (Frieden, 2010).

A different view of public health interventions considers the situation of people with terminal or long-term conditions. Many interventions focus heavily on the medical model and do not take into account the impact on the patient or how they would prefer to be cared for. The medical view of what constitutes good health may be considered a more laboratory-based, theoretical view that does not necessarily reflect the lived experience of individuals (Higgs et al., 2005). Physical incapacity may not impact badly on an individual who has found ways to live a fulfilling life, while someone who is considered fit and well may not feel that they have a good quality of life (Asadi-Lari et al., 2004). Therefore, the impact of interventions on the public also needs to be considered. A medically effective intervention may be unpleasant or difficult for the patient to endure and thus be viewed as less effective. Furthermore, if the intervention is too unpleasant, the patient may fail to comply and so not obtain the level of effectiveness that the medical model would suggest (Asadi-Lari et al., 2004).

One area of public health that has proved somewhat controversial in recent years is immunisation. The suggested link between the MMR vaccine and autism, for instance, impacted heavily on the number of people having their children immunised (BMJ, 2013). Vaccination is an important branch of public health and relies upon sufficient people being immunised against diseases so that, should isolated cases occur, the disease will not spread. Many parents today will be unaware of the health implications of illnesses such as German measles and mumps, as vaccination has made cases rare. The rarity of cases has also led to the incorrect belief that these illnesses have been eradicated. In this instance, therefore, the effectiveness of the intervention was altered by the influence of media reports of adverse outcomes. The fear that was generated has been difficult to overcome and resulted in a loss of faith in the process, which in turn reduced the effectiveness of the intervention. It can prove very difficult to restore public support following situations such as this that have continued for a long time. The impact can be measured both in the number of people coming forward to have their children immunised and in the number of cases of the various illnesses occurring each year. Current statistics, however, suggest that the level of MMR immunisation has now been restored to an appropriate level (NHS, 2013).

The provision of the flu vaccine is another instance where public health interventions may have varying effectiveness. Even a 'good', well-matched vaccine is not considered to be 100% effective. In 2014, however, the vaccine did not match the main circulating strain of flu, and so little protection was provided (Public Health England, 2015). As a result, there is likely to be a downturn in the number of people coming forward for flu vaccination this year, as its value may be perceived as doubtful. This also demonstrates the need to provide the public with correct information so that they are aware of the potential effectiveness of the intervention. In the case of flu, if the vaccine has a 60% chance of preventing the illness, this should perhaps be specifically stated. There may be a level at which the majority of people feel it is not worth having the vaccination. If, hypothetically, an effectiveness of less than 30% was considered by most people too low to be worthwhile, few people would be immunised and a major epidemic could follow. It is therefore important that the information provided is correct and that the intervention itself is seen to be of sufficient value to the individual to warrant choosing to take advantage of what is offered (NHS, 2015).
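As a point of clarification that is not drawn from the sources cited above, a figure such as "a 60% chance of preventing the illness" corresponds to the standard epidemiological definition of vaccine effectiveness: the proportional reduction in the attack rate among vaccinated people compared with unvaccinated people,

\[ VE = \frac{AR_{U} - AR_{V}}{AR_{U}} \times 100\% \]

where \(AR_{U}\) and \(AR_{V}\) are the attack rates in the unvaccinated and vaccinated groups respectively. On this definition, if 10% of unvaccinated people contracted flu in a season but only 4% of vaccinated people did (both figures hypothetical), then \(VE = (10-4)/10 \times 100\% = 60\%\).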

Conclusion

This essay has argued that the effectiveness of public health interventions can be viewed from two main perspectives: the cost-effectiveness of the provision and the impact on the target audience. While there are considerable financial pressures on the NHS, these should not be the primary consideration in respect of public health. The aim of public health interventions is to improve the health and well-being of the population as a whole, and a wide range of methods is used to achieve this. Some provisions are aimed at the whole population, while others are designed for individuals or smaller target groups. For these to be effective, they need to reach the target audience and have meaning for them, so that they are encouraged to take the required action. Continuous changes in provision may also be needed to ensure that long-term issues remain in the public awareness.

Bibliography

Asadi-Lari, M., Tamburini, M. & Gray, D., 2004. Patients’ needs, satisfaction, and health related quality of life: Towards a comprehensive model. Health and Quality of Life Outcomes, 2(32).

AVERT, 2015. HIV/AIDS Statistics 2012. [Online] Available at: http://www.avert.org/hiv-aids-uk.htm [Accessed 28 September 2015].

Bertozzi, S., Padian, N. S., Wegbreit, J., DeMaria, L. M., Feldman, B., Gayle, H., Gold, J., Grant, R. & Isbell, M. T., 2006. Disease Control Priorities in Developing Countries. New York: World Bank.

BMJ, 2013. Measles in the UK: a test of public health competency in a crisis. BMJ, 346(f2793).

Brownson, R. C., Baker, E. A., Leet, T. L., Gillespie, K. N. & True, W. R., 2010. Evidence-Based Public Health. Oxford: Oxford University Press.

Durkin, S., Brennan, E. & Wakefield, M., 2012. Mass media campaigns to promote smoking cessation among adults: an integrative review. Tobacco Control, Volume 21, pp. 127-138.

Frieden, T. R., 2010. A Framework for Public Health Action: The Health Impact Pyramid. American Journal of Public Health, 100(4), pp. 590–595.

Higgs, J., Jones, M., Loftus, S. & Christensen, N., 2005. Clinical Reasoning in the Health Professions. New York: Elsevier Health Sciences.

National Institute for Health and Care Excellence, 2015. Methods for the development of NICE public health guidance (third edition). [Online] Available at: https://www.nice.org.uk/article/pmg4/chapter/1%20introduction [Accessed 28 September 2015].

NHS, 2013. NHS Immunisation Statistics, London: NHS.

NHS, 2015. Flu Plan Winter 2015/16. [Online] Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/418038/Flu_Plan_Winter_2015_to_2016.pdf [Accessed 28 September 2015].

Public Health England, 2015. Flu vaccine shows low effectiveness against the main circulating strain seen so far this season. [Online] Available at: https://www.gov.uk/government/news/flu-vaccine-shows-low-effectiveness-against-the-main-circulating-strain-seen-so-far-this-season [Accessed 28 September 2015].

Scutchfield, F. & Keck, C., 2003. Principles of Public Health Practice. Clifton Park: Delmar Learning.

The King’s Fund, 2014. Making the case for public health interventions. [Online] Available at: http://www.kingsfund.org.uk/audio-video/public-health-spending-roi?gclid=CM_ExbKomcgCFcZuGwodE44Lkg [Accessed 28 September 2015].

Transference Countertransference Therapeutic Relationship


Describe the transference-countertransference element of the therapeutic relationship

An examination of the development of transference and counter-transference as a therapeutic tool with an exploration of the ways in which it can be defined and used in a therapeutic setting, with an overview and brief discussion of the way the concept of transference/counter-transference has been received by different schools of therapy.

Introduction

This essay explores the development of transference and countertransference from their origins in Freud’s work to their current uses in different psychotherapeutic schools. The Kleinian contribution is identified as a major catalyst to re-thinking countertransference as a resource rather than simply an obstacle to treatment.

An unseemly event and a fortuitous discovery

In 1881, the physician Dr Josef Breuer began treating a severely disturbed young woman who became famous in the history of psychoanalysis as “Anna O”. She had developed a set of distressing symptoms, including severe visual disturbances, paralysing muscular spasms, paralyses of her left forearm and hand and of her legs, as well as paralysis of her neck muscles (Breuer, 1895, in Freud and Breuer, 1895/2004, p. 26). Medical science could not explain these phenomena organically, save to designate them as symptoms of what was then known as “hysteria”, so Breuer took the radical step of visiting his young patient twice a day and listening carefully to her as she spoke about her troubles. He was to make a powerful discovery which deeply influenced his young assistant, Dr Sigmund Freud: whenever Anna found herself spontaneously recounting memories of traumatic events from her early history, memories she had hitherto had no simple access to through conscious introspection, her symptoms began to disappear one by one. But for the purposes of this essay, one event was to be of pivotal importance: just as Breuer was about to conclude his treatment of the young woman as a success, she declared to him that she was in love with him and was pregnant with his child.

Perhaps unsurprisingly, Breuer was traumatised and promptly withdrew from this intimate method of treatment. Freud’s biographer, Ernest Jones, reports that Breuer and Freud originally described the incident as an “untoward” event (Jones, 1953, p. 250); but where Breuer admonished himself for experimenting with an unethically intimate method which may have made him seem indiscreet to the young woman, Freud studied the phenomenon with scrupulous scientific neutrality. He, too, had experienced spontaneous outbursts of apparent love from his psychotherapeutic patients, but as Jones (1953, p. 250) observes, he was certain that such declarations had little or nothing to do with any magnetic attraction on his part. The concept of transference was born: patients, Freud argued, find themselves re-experiencing intense reactions in the psychotherapeutic relationship which were in origin connected with influential others in their childhoods (such as parents or siblings). Without being aware of doing so, patients tended to transfer their earlier relationship issues onto the person of the therapist.

As Spillius, Milton, Couve and Steiner (2011) argue, at the time of the Studies in Hysteria Freud tended to regard manifestations of transference as a predominantly positive force: the patient’s mistaken affections could be harnessed in the service of a productive alliance between therapist and client to explore and analyse symptoms. But by 1905, his thinking about transference had begun to undergo a profound change. Already aware that patients could direct unjustifiably hostile feelings toward the analyst as well as affectionate ones, he was shaken deeply when his adolescent patient “Dora” abruptly terminated her analysis in a surprisingly unkind and perfunctory manner (Freud, 1905/2006). He had already worked out that both the positive and negative manifestations of transference functioned as forms of resistance to the often unpleasant business of understanding one’s own part in the events of the past (it is, for example, a good deal easier to lay the blame for one’s present-day failings on “bad” or unsupportive figures from the past, or on their selected stand-ins in the present, than it is to acknowledge that one rejected or failed to make full use of one’s opportunities). But he began to realise that Dora had actively repeated with him a pattern of relationship-behaviour that had actually arisen from her unacknowledged hostility toward her father, as well as toward a young man she had felt attracted to, because both had failed to show her the affection and consideration she believed herself entitled to.

She took her revenge on Freud – and she was not alone in actively re-enacting critical relationship scenarios inside the therapeutic relationship; other patients, he began to see, also frequently relived relational patterns in this way while totally unaware that they were repeating such established patterns. By 1915, transference was no longer, for Freud, a resistance to recovering hazy and unpleasant memories; instead, it was an active, lived repetition of earlier relationships based on mistakenly perceived similarities between here-and-now characteristics of the analyst and there-and-then characteristics of previously loved or hated figures (Freud, 1915/2003).

The interplay between psychical reality and social reality

Melanie Klein, a pioneer of child psychoanalysis, accepted Freud’s view of transference as a form of re-enactment, but using her meticulous observations of the free play of very young (and very disturbed) child patients, she began to develop the view that it was not the dim-and-distant past that was re-enacted but, on the contrary, the present. Psychical reality and social reality were not coterminous or even continuous; they were involved instead in a ceaseless dialectical interplay (Likierman, 2001, esp. pp. 136–144). Real people may constitute the child’s external world, but for Klein, the only way to make sense of the often violent and disturbing content of the children’s play she observed was to posit the existence of a psychical reality dominated by powerful unconscious phantasies involving frighteningly destructive and magically benevolent inner figures or “objects” (Klein, 1952/1985). Children did not simply re-enact actual, interpersonal relationships; they re-enacted relationships between themselves and their unique unconscious phantasy objects. In spontaneous play, she believed, children were dramatising and seeking to master or domesticate their own worst fears and anxieties.

Klein’s thought has changed the way transference is viewed in adult psychotherapy, too. If transference involves not simply the temporal transfer of unremembered historical beliefs into the present but the immediate transfer of phantasies, in the here-and-now, which are active in the patient’s mind, handling transference becomes a matter of immediate therapeutic concern: one does not have to wait until a contingency in the present evokes an event from the past, nor for the patient to make direct references to the therapist in her associations, because in Kleinian thought a dynamic and constantly shifting past is part of the present from the first moments of therapy. For example, Segal (1986, pp. 8–10) describes a patient opening her first therapy session by talking about the weather – it is cold and raining outside. Of all the issues a patient could choose to open a session – the latest political headlines, a currently active family drama, a dream, a quarrel with a work colleague, and so on – it is always significant when a patient “happens” to select a particular theme; for Segal, following Klein, this selection indicates the activity of unconscious phantasy objects. Transference is immediate: Segal asks whether the patient is actually exploring, via displacement onto the weather, her transferential fear that the analyst may be an unfriendly, cold, and joy-dampening figure.

Countertransference, its development and its use by different schools of therapy

The foregoing has focussed on transference, but implicit throughout has been the complementary phenomenon of countertransference, from Breuer’s shocked withdrawal from Anna O’s transferential love to Freud’s distress at being abruptly abandoned by Dora who, he later realised, was re-enacting a revenge scenario. Intensely aware that emotions could be roused all too easily in the analyst during a psychoanalytic treatment, Freud was exceptionally circumspect about any form of expression of these feelings to the patient. In his advice to practitioners, he suggested that the optimal emotional stance for the therapist was one of “impartially suspended attention” (Freud, 1912b/2002, p. 33). He did not, however, intend this to be a stable, unfluctuating position of constantly benevolent interest; he urged therapists to be as free of presuppositions and as open-minded as possible to their patients’ spoken material, to be willing to be surprised at any moment, and to allow themselves the freedom to shift from one frame of mind to another. But he was unambiguous in his advice about how the therapist should comport him- or herself during analysis:

“For the patient, the doctor should remain opaque, and, like a mirror surface, should show nothing but what is shown to him.” (Freud, 1912b, p. 29)

As his paper on technique makes clear, Freud considered the stirring up of intense emotions in the therapist as inevitable during analytic work; but he also considered these responses to the patient an obstacle to that work, a stirring up of the therapist’s own psychopathology which required analysis rather than in-session expression. The analyst had an obligation to remove his own blind spots so as to attend to the patient’s associations as fully and as free of prejudice as possible.

By the 1950s, psychoanalysts were beginning to explore countertransference as a potential source of insight into the patient’s mind. As Ogden (1992) draws out in his exploration of the development of Melanie Klein’s notion of projective identification, Kleinian analysts such as Wilfred Bion, Roger Money-Kyrle, Paula Heimann and Heinrich Racker began arguing that it was an interpersonal mechanism rather than an intrapsychic one (as Klein had intended). Patients, they believed, could evoke aspects of their own psychic reality, especially those aspects they found difficult to bear, inside the mind of the analyst by exerting subtle verbal and behavioural pressures on the therapist. Therapists should not, therefore, dismiss such evoked emotions as purely arising from their own psychopathology, but should treat them as a form of primitive, para- or pre-verbal communication from the patient. As Ogden (a non-Kleinian) puts it:

“Projective identification is that aspect of transference that involves the therapist being enlisted in an interpersonal actualization (an actual enactment between patient and therapist) of a segment of the patient’s internal object world.”
(Ogden, 1992, p. 69)

Countertransference, in other words, when handled carefully and truthfully by the therapist, can be a resource rather than an obstacle, and as such it has spread well beyond the Kleinian school. For example, while advocating caution in verbalising countertransference effects in therapy, the Independent psychoanalyst Christopher Bollas (1987) suggests that the analyst’s mind can be used by patients as a potential space, a concept originally developed by Winnicott (1974) to designate a safe, delimited zone, free of judgement, advice and emotional interference from others, within which people can creatively express hitherto unexplored aspects of infantile experience. Bollas cites the example of a patient who recurrently broke off in mid-sentence just as she was starting to follow a line of associations, remaining silent for extended periods. Initially baffled and then slightly irritated, Bollas explored his countertransference response carefully over several months of analytic work. He eventually shared with her a provisional understanding that came from his own experience of feeling that he was, paradoxically, in the company of someone who was absent: someone physically present but not emotionally attentive or available. He told her he had noticed that her prolonged silences left him in a curious state which, he wondered, might be her attempt to create a kind of absence he was meant to experience. The intervention immediately brought visible relief to the patient, who was eventually able to connect with previously repressed experiences of living her childhood with an emotionally absent mother (Bollas, 1987, pp. 211–214).
Other schools of psychoanalytic therapy such as the Lacanians remain much more aligned with Freud’s original caution, believing that useful though countertransference may be, it should never be articulated in therapy but taken to supervision or analysis for deeper understanding (Fink, 2007).

References

Bollas, C. (1987). Expressive uses of the countertransference: notes to the patient from oneself. In C. Bollas, The Shadow of the Object: Psychoanalysis of the Unthought Known (pp. 200–235). London: Free Association Books.

Breuer, J. (1895/2004). Fräulein Anna O. In S. Freud & J. Breuer, Studies in Hysteria (pp. 25–50). London and New York: Penguin (Modern Classics Series).

Fink, B. (2007). Handling Transference and Countertransference. In B. Fink, Fundamentals of Psychoanalytic Technique: A Lacanian Approach for Practitioners (pp. 126 – 188). New York and London: W.W. Norton & Company.

Freud, S. (1905/2006). Fragment of an Analysis of Hysteria (Dora). In S. Freud, The Psychology of Love (pp. 3 – 109). London and New York: Penguin (Modern Classics Series).

Freud, S. (1912/2002). Advice to Doctors on Psychoanalytic Treatment. In S. Freud, Wild Analysis (pp. 33–41). London and New York: Penguin (Modern Classics Series).

Freud, S. (1912/2002). On the Dynamics of Transference. In S. Freud, Wild Analysis (pp. 19–30). London and New York: Penguin (Modern Classics Series).

Freud, S. (1915/2003). Remembering, Repeating and Working Through. In S. Freud, Beyond the Pleasure Principle and Other Writings (pp. 31–42). London and New York: Penguin (Modern Classics Series).

Jones, E. (1953). The Life and Work of Sigmund Freud: The Formative Years and the Great Discoveries, 1856–1900 – Vol. 1. New York: Basic Books.

Klein, M. (1952/1985). The Origins of Transference. In M. Klein, Envy and Gratitude and Other Works (pp. 48–60). London: The Hogarth Press & The Institute of Psycho-Analysis.

Likierman, M. (2001). Melanie Klein: Her Work in Context. London and New York: Continuum.

Ogden, T. (1992). Projective Identification and Psychotherapeutic Technique. London: Maresfield Library.

Segal, H. (1986). Melanie Klein’s Technique. In H. Segal, The Work of Hanna Segal: Delusion and Artistic Creativity & Other Psycho-analytic Essays (pp. 3–34). London: Free Association Books/Maresfield Library.

Spillius, E., Milton, J., Garvey, P., Couve, C. & Steiner, D. (2011). The New Dictionary of Kleinian Thought. East Sussex and New York: Routledge.

Winnicott, D. W. (1974). Playing and Reality. London: Pelican.

The role of Epidemiology in Infection Control


The role of epidemiology in infection control and the use of immunisation programs in preventing epidemics

The discipline of epidemiology is broadly defined as “the study of how disease is distributed in populations and the factors that influence or determine this distribution” (Gordis, 2009: 3). Among a range of core epidemiologic functions recognised (CDC, 2012), monitoring and surveillance as well as outbreak investigation are most immediately relevant to identifying and stopping the spread of infectious disease in a population.

Most countries perform routine monitoring and surveillance of a range of infectious diseases of concern to their respective jurisdictions. This allows health authorities to establish a baseline of disease occurrence. Based on these data, it is possible to discern sudden spikes or divergent trends and patterns in infectious disease incidence. In addition to cause of death, which is routinely collected in most countries, many health authorities also maintain a list of notifiable diseases. In the UK, the list of reportable diseases and pathogenic agents maintained by Public Health England includes infectious diseases such as tuberculosis and viral haemorrhagic fevers, strains of influenza, vaccine-preventable diseases such as whooping cough or measles, and food-borne infectious diseases such as gastroenteritis caused by Salmonella or Listeria (Public Health England, 2010). At the international level, the World Health Organization requires its members to report any “event that may constitute a public health emergency of international concern” (International Health Regulations, 2005). Cases of smallpox, poliomyelitis, Severe Acute Respiratory Syndrome (SARS), and new influenza strains are always notifiable (WHO, undated). These international notification duties allow for the identification of trans-national patterns by collating data from national surveillance systems. Ideally, the system enables authorities to anticipate and disrupt further cross-national spread by alerting countries to the necessity of tightened control at international borders, or even by instituting more severe measures such as bans on air travel from and to affected countries.

As explained in the previous paragraph, data collected routinely over a period of time allow authorities to respond to increases in the incidence of a particular disease by taking measures to contain its spread. This may include an investigation into the origin of the outbreak, for instance the nature of the infectious agent or the vehicle. In other cases, the mode of transmission may need to be clarified. These tasks are part of the outbreak investigation. Several steps can be distinguished in the wake of a concerning notification or the determination of an unusual pattern: the use of descriptive and analytical epidemiology, the subsequent implementation of control measures, and reporting to share experiences and new insights (Reintjes and Zanuzdana, 2010).

In the case of an unusual disease, such as the possibility of the recent Ebola outbreak in West Africa resulting in isolated cases in Western Europe, it might not be necessary to engage in further epidemiological analysis once the diagnosis has been confirmed. Instead, control measures would be implemented immediately and might include ensuring best-practice isolation of the patient and contact tracing to ensure that the infection does not spread further among a fully susceptible local population. Similarly, highly pathogenic diseases such as meningitis that tend to occur in clusters might prompt health authorities to close schools to disrupt the spread. In other types of outbreak investigation, identifying the exact disease or exact strain of an infectious agent is the primary epidemiologic task. This might, for instance, be the case if clusters of relatively non-specific symptoms occur and need to be confirmed as linked to one another and identified as either a known disease or infectious agent, or else described and named. In the same vein, in food-borne infectious diseases, the infectious organism and vehicle of infection may have to be pinpointed by retrospectively tracing food intake, creating comparative tables, and calculating measures of association between possible exposures and outcome (CDC, 2012). Only then can targeted control measures, such as pulling product lots from supermarket shelves and issuing a public warning, be initiated.
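To illustrate the kind of measure of association referred to here (the figures below are hypothetical, not drawn from the CDC source), a retrospective cohort analysis of a food-borne outbreak typically compares attack rates between those who ate a suspect food and those who did not, summarised as a relative risk:

\[ RR = \frac{a/(a+b)}{c/(c+d)} \]

where \(a\) and \(b\) are the numbers of ill and well individuals among those exposed, and \(c\) and \(d\) the ill and well among those not exposed. If 30 of 50 guests who ate a suspect dish fell ill (attack rate 60%) compared with 4 of 40 who did not (10%), then \(RR = 0.60/0.10 = 6\), flagging the dish as a likely vehicle of infection.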

Beyond identifying and controlling infectious disease outbreaks, monitoring and surveillance also play a role in ensuring that primary prevention works as effectively as possible: collecting information on behavioural risk factors, in cases such as sexually transmitted diseases, can help identify the groups most at risk and where public health interventions may yield the highest benefit. In another example, monitoring immunisation coverage and analysing the effectiveness of vaccines over the life course may predict epidemics in the making if coverage is found to be decreasing or immunity appears to decline in older populations. In addition, the ability to anticipate the potential spread of disease with a reasonable degree of confidence hinges not only on good data collection; advanced epidemiological methods such as mathematical modelling are equally instrumental in predicting possible outbreak patterns. Flu vaccines, for instance, need to be formulated long before the onset of the annual flu season, and the particular strains against which the vaccines are to provide immunity can only be determined from past epidemiological data and modelling (M’ikanatha et al., 2013). Mathematical models have also played a role in determining the most effective vaccine strategies, including target coverage and ideal ages and target groups, to eliminate the risk of epidemic outbreaks of infectious diseases (Gordis, 2009).

In addition to controlling outbreaks at the source and ensuring that key protective strategies such as mass immunisation are effectively carried out, epidemiology is also a tool that allows comprehensive planning for potential epidemics. A scenario described in a research article by Ferguson and colleagues (2006) has as its premise a novel, and therefore not immediately vaccine-preventable, strain of influenza that has defied initial attempts at control and reached pandemic proportions. The large-scale simulation of the theoretical epidemic assesses the potential of several intervention strategies to mitigate morbidity and mortality: international border and travel restrictions, a measure often demanded as a kneejerk reaction by policy-makers and citizens, are found to have minimal impact, at best delaying spread by a few weeks even if generally adhered to (Ferguson et al., 2006). By contrast, interventions such as household quarantines or school closures that are aimed at interrupting contact between cases, potential carriers, and susceptible individuals are much more effective (Ferguson et al., 2006). Time-sensitive antiviral treatment and post-exposure prophylaxis using the same drugs are additional promising strategies identified (Ferguson et al., 2006). The latter two potential interventions highlight the role of epidemiological risk assessment in translating the anticipated spread of infectious disease into concrete emergency preparedness. For instance, both mass treatment and mass post-exposure prophylaxis require advance stockpiling of antivirals. During the last H1N1 epidemic, public and political concern emerged over shortages of the antiviral drug oseltamivir (brand name Tamiflu) (De Clercq, 2006). However, advance stockpiling requires political support and significant resources at a time when governments are trying to rein in health spending and the threat is not immediate. Thus, epidemiologists also need to embrace the role of advocates and advisors who communicate scientific findings and evidence-based projections to decision-makers.

That being said, immunisation remains the most effective primary preventive strategy for the prevention and control of epidemics. As one of the most significant factors in the massive decline of morbidity and mortality from infectious disease in the Western world over the last century, vaccination accounts for an almost 100% reduction of morbidity from nine vaccine-preventable diseases, such as polio, diphtheria, and measles, in the United States between 1900 and 1990 (CDC, 1999). Immunisation programmes are designed to reduce the incidence of particular infectious diseases by decreasing the number of susceptible individuals in a population. This is achieved by administering vaccines which stimulate the body’s immune response. The production of specific antibodies allows the thus-primed adaptive immune system to eliminate the full-strength pathogen when an individual is subsequently exposed to it. The degree of coverage necessary to achieve so-called herd immunity (the collective protection of a population even if not every single individual is immune) depends on the infectivity and pathogenicity of the respective infectious agent (Nelson, 2014). Infectivity, in communicable diseases, measures the percentage of infections out of all individuals exposed, whereas pathogenicity is the percentage of infected individuals who progress to clinical disease (Nelson, 2014). Sub-clinical or inapparent infections are important to take into account because, even though they show no signs and symptoms of disease, people may still be carriers capable of infecting others. Polio is an example of an infectious disease where most infections are inapparent but individuals are infectious (Nelson, 2014).
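Expressed as simple proportions (a restatement of the definitions above, with hypothetical numbers added purely for illustration):

\[ \text{infectivity} = \frac{\text{number infected}}{\text{number exposed}}, \qquad \text{pathogenicity} = \frac{\text{number with clinical disease}}{\text{number infected}} \]

If, say, 80 of 100 exposed individuals become infected but only 8 of those develop clinical illness, infectivity is 80% while pathogenicity is only 10%, leaving 72 inapparent infections who may nevertheless transmit the pathogen.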

Gauging infectivity is crucial to estimating the level of coverage needed to reach community immunity. The so-called basic reproduction number is a numerical measure of the average number of secondary infections attributable to one single source of disease, e.g. one infected individual. It is calculated by taking into account the average number of contacts a case makes, the likelihood of transmission at each contact, and the duration of infectiousness (Kretzschmar and Wallinga, 2010). The higher the reproduction number, i.e. the theoretical number of secondary cases, the higher the percentage of the population that needs to be immunised in order to prevent or interrupt an outbreak of epidemic proportions. For instance, smallpox, which was successfully eradicated in 1980 (World Health Organization, 2010), is estimated to have a basic reproduction number of around 5, requiring coverage of only 80% of the population to achieve herd immunity. By contrast, the estimated reproduction number for measles is around 20, and it is believed that immunisation coverage has to reach at least 96% for population immunity to be ensured (Kretzschmar and Wallinga, 2010). Once the herd immunity threshold is reached, the remaining susceptible individuals are indirectly protected by the immunised majority around them: in theory, no pathogen should be able to reach them because nobody else is infected or an asymptomatic carrier. Even if an infection did occur among the unvaccinated, the chain of transmission should be immediately interrupted thanks to the immunised status of all potential secondary cases. Vaccinating primary contacts of isolated cases is also an important containment strategy where a cluster of non-immune individuals has been exposed to an infected individual. Such scenarios may apply, for example, where groups of vaccine objectors or marginalised groups not reached by the regular immunisation drive are affected, or where an imported disease meets a generally susceptible population.
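The quantities described above can be written compactly. On the standard simplifying assumption of homogeneous mixing (a textbook formulation rather than one taken from the sources cited):

\[ R_0 = c \times p \times d, \qquad q_c = 1 - \frac{1}{R_0} \]

where \(c\) is the average number of contacts a case makes per unit time, \(p\) the probability of transmission per contact, \(d\) the duration of infectiousness, and \(q_c\) the critical immunisation coverage for herd immunity. Substituting the figures quoted above gives \(q_c = 1 - 1/5 = 80\%\) for smallpox and \(q_c = 1 - 1/20 = 95\%\) for measles, in line with the "at least 96%" coverage cited once a margin for imperfect vaccine efficacy is allowed for.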

However, epidemic prevention does not stop once vaccination targets have been reached. Instead, constant monitoring of current coverage is required, and adaptations of the immunisation strategy may be needed to ensure that epidemics are reliably prevented. Recent trends underscore the enduring challenge of permanently keeping at bay even diseases that are officially considered eradicated or near eradication: in the United Kingdom, a marked spike in the number of confirmed measles cases has been observed in the last decade, with an increase from under 200 cases in 2001 to just over 2,000 cases in 2012 (Oxford Vaccine Group, undated). The underlying cause is evident from a comparison of case numbers with data from vaccine coverage monitoring: the number of children receiving the combination measles vaccine decreased in the 2000s roughly in parallel with the increase in measles incidence (Oxford Vaccine Group, undated). Other countries have seen similar trends and have responded with measures intended to increase vaccine uptake: in Australia, for instance, the government recently decided to enact measures that would withhold child benefit payments from parents who refuse to have their children vaccinated (Lusted and Greene, 2015).

In conclusion, epidemiology, and in particular routine monitoring and surveillance, is a potent tool that enables health authorities to anticipate, detect, and contain the spread of infectious disease. Over the last century, immunisation has proven itself one of the key interventions to curb infectious disease morbidity and mortality. However, with vaccine-preventable diseases again on the rise in the UK and other industrialised countries, epidemiologic monitoring of vaccine coverage and disease incidence remains critically important. Where vaccines are not available or vaccine-induced immunity is short-lived, an effective system to detect cases and contain outbreaks is even more instrumental to the effort of preventing infectious disease epidemics.

Bibliography

Centers for Disease Control and Prevention (CDC) (2012) Principles of Epidemiology in Public Health Practice, 2nd edition, Atlanta, GA: US Department of Health and Human Services.

Centers for Disease Control and Prevention (CDC) (1999) ‘Achievements in Public Health, 1900-1999: Impact of Vaccines Universally Recommended for Children – United States, 1990-1998’, MMWR, vol. 48, no. 12, pp. 243-248.

De Clercq, E. (2006) ‘Antiviral agents active against influenza A viruses’, Nature Reviews Drug Discovery, vol. 5, no. 12, pp. 1015-1025.

Ferguson, N. et al. (2006) ‘Strategies for mitigating an influenza pandemic’, Nature, vol. 442, July, pp. 448-452.

Gordis, L. (2009) Epidemiology, 4th edition, Philadelphia, PA: Saunders Elsevier.

Kretzschmar, M. and Wallinga, J. (2010) ‘Mathematical Models in Infectious Disease Epidemiology’, in: Kramer, A. et al. (ed.) Modern Infectious Disease Epidemiology, New York, NY: Springer Science + Business Media.

Lusted, P. and Greene, A. (2015) Childcare rebates could be denied to anti-vaccination parents under new Federal Government laws. ABC News [Online], Available: http://www.abc.net.au/news/2015-04-12/parents-who-refuse-to-vaccinate-to-miss-out-on-childcare-rebates/6386448 [12 Feb 2015].

M’ikanatha, N. et al. (2013) ‘Infectious disease surveillance: a cornerstone for prevention and control’, in: M’ikanatha, N. et al. (ed.) Infectious Disease Surveillance, 2nd edition, West Sussex, UK: John Wiley & Sons.

Nelson, K. (2014) ‘Epidemiology of Infectious Disease: General Principles’, in: Nelson, K., Williams, C. and Graham, N. (ed.) Infectious disease epidemiology: theory and practice, 3rd edition, Burlington, MA: Jones & Bartlett Learning.

Oxford Vaccine Group (undated) Measles [Online], Available: http://www.ovg.ox.ac.uk/measles [12 Feb 2015].

Public Health England (first published 2010) Notifications of infectious diseases (NOIDs) and reportable causative organisms: legal duties of laboratories and medical practitioners [Online], Available: https://www.gov.uk/notifiable-diseases-and-causative-organisms-how-to-report [12 Feb 2015].

Reintjes, R. and Zanuzdana, A. (2010) ‘Outbreak Investigations’, in: Kramer, A. et al. (ed.) Modern Infectious Disease Epidemiology, New York, NY: Springer Science + Business Media.

World Health Organization (WHO) (2005). ‘Notification and other reporting requirements under the IHR’, IHR Brief, No. 2 [Online], Available: http://www.who.int/ihr/ihr_brief_no_2_en.pdf [12 Feb 2015].

World Health Organization (WHO). (2010) Statue Commemorates Smallpox Eradication. Available: http://www.who.int/mediacentre/news/notes/2010/smallpox_20100517/en/index.html [12 Feb 2015].

The Epidemiology of Alcohol Abuse and Alcoholism


Introduction

According to the Alcohol Concern Organisation (2015), more than 9 million people in England consume alcohol above the recommended daily limits. The National Health Service (2015) recommends no more than 3 to 4 units of alcohol a day for men and 2 to 3 units a day for women. The large number of people drinking above these limits highlights the reality that alcoholism is a major health concern in the UK, one which can lead to a multitude of serious health problems. Moss (2013) states that alcoholism and chronic use of alcohol are linked to various medical, psychiatric, social and family problems. To add to this, the Health and Social Care Information Centre (2014) reported that between 2012 and 2013 there were a total of 1,008,850 hospital admissions where an alcohol-related disease, injury or condition was the primary cause of admission or a secondary diagnosis. This shows the detrimental impact of alcoholism on the health and overall wellbeing of millions of people in the UK. It is therefore vital to examine the aetiology of alcoholism in order to understand why so many people end up consuming excessive alcohol. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) (n.d.) supports this by stating that learning the natural history of a disorder provides information essential for assessment, intervention and the development of effective preventive measures. This essay will also look into the different public health policies that address the problem of alcoholism in the UK. A brief description of what alcoholism is will first be provided.

What is Alcoholism?

Alcoholism is a lay term that simply means excessive intake of alcohol. It can be divided into two forms, namely alcohol misuse (or abuse) and alcohol dependence. Alcohol misuse means excessive intake of alcohol beyond the recommended limits (National Health Service Choices, 2013); binge drinking is a good example.

Alcohol dependence is worse because, according to the National Institute for Health and Care Excellence (2011, n.p.), it “indicates craving, tolerance, a preoccupation with alcohol and continued drinking regardless of harmful consequences” (e.g. liver disease). Under the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), these two have been combined into a single disorder called alcohol use disorder (AUD), with mild, moderate and severe sub-classifications (NIAAA, 2015).

Genetic Aetiologic Factor of Alcoholism

Alcoholism is a complex disorder with several factors leading to its development (NIAAA, 2005). Genetics and other biological aspects can be considered one factor involved in the development of alcohol abuse and dependence (NIAAA, 2005). Other factors include cognitive, behavioural, temperamental, psychological and sociocultural influences (NIAAA, 2005).

According to Goodwin (1985), as far back as the era of Aristotle and the Bible, alcoholism was believed to run in families and thus to be heritable. There is some basis for this ancient belief because, in reality, alcoholic parents are about four to five times more likely to have alcoholic children (Goodwin, 1985). Today, this belief still lacks clear and direct research-based evidence, but studies do not deny the role of genetics in alcoholism either. With this view, it is reasonable to argue that genetics is still considered an important aetiologic factor in alcoholism.

The current consensus indicates that there is more to an individual's predisposition to alcoholism than a single gene or two. Scutti (2014) reports that although scientists have known for some time that genetics plays an active role in alcoholism, they also propose that an individual's inclination to alcohol dependence is more complicated than the simple presence or absence of any one gene. The National Institute on Alcohol Abuse and Alcoholism (2008) states that no single gene fully controls a person's predisposition to alcoholism; rather, multiple genes play different roles in a person's susceptibility to becoming an alcoholic. The NIAAA (2005) further notes that the evidence for a genetic factor in alcoholism lies mainly with studies that involve extended pedigrees, those that involve identical and fraternal twins, and those that include adopted individuals raised apart from their alcoholic parents.

Pedigree studies suggest that the risk of suffering from alcoholism is increased four- to seven-fold among first-degree relatives of an alcoholic (Cotton, 1979; Merikangas, 1990, cited in NIAAA, 2005). First-degree relatives include parent-child relationships; hence a child is four to seven times more likely to become an alcoholic if one or both parents are alcoholics. Moss (2013) supports this by stating that children of alcoholic parents are at higher risk of becoming alcoholics themselves when compared to children of non-alcoholic parents.

A study conducted by McGue, Pickens and Svikis (1992, cited in NIAAA, 2005) revealed that identical twins generally have a higher concordance rate for alcoholism than fraternal twins or non-twin siblings. This means that a person with an alcoholic identical twin has a higher risk of becoming an alcoholic than a person whose alcoholic twin is merely fraternal, or a non-twin sibling. This finding further supports the role of genetics in alcoholism because identical twins are genetically the same; hence, if one is alcoholic and the vulnerability is genetic, the other must also carry the predisposing genes.
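For clarity (a standard definition from twin research, not taken from the study itself), the pairwise concordance rate is simply the proportion of twin pairs in which both twins are affected:

\[ \text{concordance} = \frac{C}{C + D} \]

where \(C\) is the number of pairs in which both twins are alcoholic and \(D\) the number of pairs in which only one is. A markedly higher concordance among identical twins than among fraternal twins, who share on average only half their segregating genes, is what points to a genetic contribution.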

The genetic factor in alcoholism is further bolstered by studies conducted by Cloninger, Bohman and Sigvardsson (1981, cited in NIAAA, 2005) and Cadoret, Cain and Grove (1980, cited in NIAAA, 2005) involving adopted children, wherein the aim was to separate the genetic factor from the environmental factor in alcoholism. In these studies, children of alcoholic parents were adopted and raised away from their alcoholic parents; despite this, some of these children still developed alcoholism as adults at a higher rate than adopted children who did not have an alcoholic biological parent (Cloninger et al., 1981; Cadoret et al., 1980, cited in NIAAA, 2005).

One interesting aspect of the genetic aetiologic factor is that although there are genes that increase the risk of alcoholism, there are also genes that protect an individual from becoming an alcoholic (NIAAA, 2008). For example, some people of Asian ancestry carry a gene that modifies their rate of alcohol metabolism, causing symptoms such as flushing, nausea and tachycardia which generally lead them to avoid alcohol; it can thus be said that this gene helps protect those who possess it from becoming alcoholic (NIAAA, 2008).

Environment as an Aetiologic Factor of Alcoholism

Another clearly identifiable factor is environment, which involves the way an individual is raised and his or her exposure to different kinds of activities and opportunities. The National Institute on Alcohol Abuse and Alcoholism (2005) relates that the genetic and environmental factors have a close relationship in triggering alcoholism in an individual. This can be explained by the simple fact that even a genetically predisposed individual is unlikely to become an alcoholic if he is never exposed to an environment that encourages alcohol intake.

There are certain aspects of the environment that make it an important aetiologic factor. According to Alcohol Policy MD (2005), these include acceptance by society, availability, and public policies and their enforcement.

Acceptance in this case refers to the idea that drinking, even at levels that should be deemed excessive, is somewhat encouraged through mass media, peer attitudes and behaviours, role models, and the overall view of society. Television series, films and music videos glorify drinking sprees and even drunken behaviour (Alcohol Policy MD, 2005). TV and film actors, sports figures, peers and local role models also encourage a positive attitude towards alcohol consumption which overshadows the reality of what alcohol drinking can lead to (Alcohol Policy MD, 2005). In relation to this, a review of different studies conducted by Grube (2004) revealed that mass media, in the form of television shows for instance, has an immense influence on young people (aged 11 to 18) when it comes to alcohol consumption. In films, portrayals of the negative impact of alcohol drinking are rare, and they often promote the idea that drinking has no negative impact on a person's overall wellbeing (Grube, 2004). In support of these findings, a systematic review of longitudinal studies conducted by Anderson et al. (2009) revealed that constant alcohol advertising in mass media can lead adolescents to start drinking, or to increase consumption among those who already drink.

Availability of alcoholic drinks is another important environmental aetiologic factor, simply because no matter how predisposed an individual is to become an alcoholic, the risk of alcoholism will remain low if alcoholic drinks are not available. On the other hand, if alcoholic beverages are readily available, as they often are today, the risk of alcoholism is increased not only for those who are genetically predisposed but also for those who do not carry the “alcoholic genes”. The more licensed liquor stores in an area, the more likely people are to drink (Alcohol Policy MD 2005). The cheaper alcohol is, the easier it is for people to buy and consume it in excess (Alcohol Policy MD 2005).

Another crucial environmental aetiologic factor is the presence or absence of policies that regulate alcohol consumption, and their strict or lax enforcement. These include restricting alcohol consumption in specified areas, enacting stricter statutes concerning drunk driving, and providing penalties for those who sell to, buy for or serve underage individuals (Alcohol Policy MD 2005). It is worth pointing out that in the UK the drinking age is 18, and a person below this age who is seen drinking alcohol in public can be stopped, fined or even arrested by police (Government UK 2015a). It is also against the law to sell alcohol to an individual below 18; however, an individual aged 16 or 17 who is accompanied by an adult may drink (but not buy) beer, wine or cider with a meal in a pub (Government UK 2015a).

Policies to Combat Alcoholism

One public health policy that can help address the problem of alcoholism is the mandatory code of practice for alcohol retailers, which banned irresponsible alcohol promotions and competitions and obliged retailers to provide free drinking water, offer smaller measures and operate a proof-of-age protocol. It can be argued that this policy addresses the problem of alcoholism by restricting the acceptance, availability and advertising of alcohol (Royal College of Nursing 2012). Another is the Police Reform and Social Responsibility Act 2011, a statute that enables local authorities to take a tougher stance on establishments which break licensing rules about alcohol sale (Royal College of Nursing 2012).

There is also the policy paper on harmful drinking, which provides different strategies for addressing the problem of alcoholism. One such strategy is the advancement of the Change4Life campaign, which promotes a healthy lifestyle and emphasises the recommended daily limit of alcohol intake for men and women (Government UK 2015b). Another strategy within this policy is the alcohol risk assessment carried out as part of the NHS health check for adults aged 40 to 74 (Government UK 2015b). This policy aims to prevent rather than cure alcoholism, which seems logical: prevention is, after all, better than cure.
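
To illustrate the kind of practical guidance such campaigns provide, the Python sketch below implements the standard UK alcohol-unit formula (units = volume in millilitres × ABV% ÷ 1000) described on the NHS alcohol units page cited in the references. The drink examples are illustrative assumptions, not figures taken from the policy paper.

    def uk_units(volume_ml, abv_percent):
        # One UK unit is 10 ml (8 g) of pure alcohol.
        return volume_ml * abv_percent / 1000

    print(round(uk_units(568, 5.2), 1))   # a pint of 5.2% lager: ~3.0 units
    print(round(uk_units(175, 13.0), 1))  # a 175 ml glass of 13% wine: ~2.3 units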

Conclusion

Alcoholism, which includes both alcohol misuse and alcohol dependence, is a serious health problem affecting millions in the UK. Its aetiology is a combination of different factors. One vital factor is genetics: some people appear predisposed to becoming alcoholic, and an individual is at higher risk if he or she has a parent who is an alcoholic. When coupled with environmental factors, the risk of suffering from alcoholism becomes even greater. Environment refers to the acceptability and availability of alcohol and the presence or absence of policies that regulate alcohol sale and consumption. Health policies such as the government’s harmful drinking policy paper are therefore important preventive measures for reducing the incidence and prevalence of alcoholism in the UK.

References

Alcohol Concern Organisation (2015). Statistics on alcohol. [online]. Available from: https://www.alcoholconcern.org.uk/help-and-advice/statistics-on-alcohol/ [Accessed on 28 September 2015].

Alcohol Policy MD (2005). The effects of environmental factors on alcohol use and abuse. [online]. Available from: http://www.alcoholpolicymd.com/alcohol_and_health/study_env.htm [Accessed on 28 September 2015].

Anderson, P., de Brujin, A., Angus, K., Gordon, R. and Hastings, G. (2009). Impact of alcohol advertising and media exposure on adolescent alcohol use: A systematic review of longitudinal studies. Alcohol and Alcoholism. 44(3):229-243.

Goodwin, D. (1985). Alcoholism and genetics: The sins of the fathers. JAMA Psychiatry. 42(2):171-174.

Government UK (2015a). Alcohol and young people. [online]. Available from: https://www.gov.uk/alcohol-young-people-law [Accessed on 28 September 2015].

Government UK (2015b). Policy paper: 2010 to 2015 government policy: Harmful drinking. [online]. Available from: https://www.gov.uk/government/publications/2010-to-2015-government-policy-harmful-drinking/2010-to-2015-government-policy-harmful-drinking [Accessed on 28 September 2015].

Grube, J. (2004). Alcohol in the media: Drinking portrayals, alcohol advertising, and alcohol consumption among youth. [online]. Available from: http://www.ncbi.nlm.nih.gov/books/NBK37586/ [Accessed on 28 September 2015].

Health and Social Care Information Centre (2014). Statistics on alcohol England, 2014. [online]. Available from: http://www.hscic.gov.uk/catalogue/PUB14184/alc-eng-2014-rep.pdf [Accessed on 28 September 2015].

Moss, H.B. (2013). The impact of alcohol on society: A brief overview. Social Work in Public Health. 28(3-4):175-177.

National Health Service (2015). Alcohol units. [online]. Available from: http://www.nhs.uk/Livewell/alcohol/Pages/alcohol-units.aspx [Accessed on 28 September 2015].

National Health Services Choices (2013). Alcohol misuse. [online]. Available from: http://www.nhs.uk/conditions/alcohol-misuse/pages/introduction.aspx [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2015). Alcohol use disorder: A comparison between DSM-IV and DSM-5. [online]. Available from: http://pubs.niaaa.nih.gov/publications/dsmfactsheet/dsmfact.pdf [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2008). Genetics of alcohol use disorder. [online]. Available from: http://www.niaaa.nih.gov/alcohol-health/overview-alcohol-consumption/alcohol-use-disorders/genetics-alcohol-use-disorders [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2005). Module 2: Etiology and natural history of alcoholism. [online]. Available from: http://pubs.niaaa.nih.gov/publications/Social/Module2Etiology&NaturalHistory/Module2.html [Accessed on 28 September 2015].

National Institute for Health and Care Excellence (2011). Alcohol-use disorders: Diagnosis, assessment and management of harmful drinking and alcohol dependence. [online]. Available from: https://www.nice.org.uk/guidance/CG115/chapter/Introduction [Accessed on 28 September 2015].

Royal College of Nursing (2012). Alcohol: policies to reduce alcohol-related harm in England. [online]. Available from: https://www.rcn.org.uk/__data/assets/pdf_file/0005/438368/05.12_Alcohol_Short_Briefing_Feb2012.pdf [Accessed on 28 September 2015].

Scutti, S. (2014). Is alcoholism genetic? Scientists discover link to a network of genes in the brain. [online]. Available from: http://www.medicaldaily.com/alcoholism-genetic-scientists-discover-link-network-genes-brain-312668 [Accessed on 28 September 2015].

Student Diet & Health Concerns

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

The obesity epidemic observed in the UK and other Western nations over the past two decades has increased the focus on eating habits of the nation (James, 2008, p. S120). Obesity, most often caused by prolonged poor diet, is associated with an increased risk of several serious chronic illnesses, including diabetes, hypertension and hyperlipidaemia, as well as possibly being associated with increased risk of mental health issues including depression (Wyatt et al., 2006, p. 166). In an attempt to promote better health of the population and reduce the burden of obesity and related health conditions on the NHS, the recent government white paper Healthy Lives, Healthy People (HM Government, 2010, p. 19) has identified improvements in diet and lifestyle as a priority in public health policy.

The design of effective interventions for dietary behaviour change may rely on having a thorough understanding of the factors determining individual behaviour. Although there has been a great deal of research published on eating habits of adults and school children (e.g. Raulio et al., 2010, p. 987) there has been much less investigation of the university student subpopulation, particularly within the UK. This may be important given that the dietary choices of general populations vary markedly across different countries and cultures, including within the student population (Yahia et al., 2008, p. 32; Dodd et al., 2010, p. 73).

This essay presents a discussion of the current research available on the eating habits of UK undergraduate students, including recent work being undertaken at Coventry University (Arnot, 2010, online). The essay then describes a small study conducted to supplement this research, using data collected from six students at a different university, exploring the influences which underpin the decisions made by students relating to their diet. The results of this study are presented and used to derive a set of recommendations for both a localized intervention and a national plan, targeted at university students, to improve dietary behaviour.

Eating Habits of University Students

It is widely accepted that students leaving home to attend university are likely to experience a significant shift in their lifestyle, including their diet, and this is supported by research evidence from the UK and other European countries (Papadaki et al., 2007, p. 169). This may encompass increased alcohol intake, reduced intake of fruit and vegetables, and increased intake of processed or fatty foods, as well as impacting on overall eating patterns (Arnot, 2010, online; Dodd et al., 2010, p. 73; Spanos & Hankey, 2010, p. 102).

Results of a study including 80 undergraduate students from Scotland found that around a quarter of participants never consumed breakfast (Spanos & Hankey, 2010, p. 102). Habitually skipping breakfast has been shown to be associated with increased risk of obesity and overweight amongst adolescents (Croezen et al., 2009, p. 405). The precise reasons for this are not entirely clear, although it could be due to increased snacking on energy-dense, high-fat foods later in the day. This is based on the remainder of the results reported by Spanos and Hankey (2010, p. 102), which showed that three-quarters of students regularly used vending machines, snacking on chocolate bars and crisps; this was also shown to be significantly associated with body mass index (BMI).
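
For clarity, the Python sketch below shows the standard BMI calculation (weight in kilograms divided by the square of height in metres), together with the usual WHO category cut-offs. The example values are invented and do not come from the Spanos and Hankey study.

    def bmi(weight_kg, height_m):
        # BMI = weight (kg) / height (m) squared
        return weight_kg / height_m ** 2

    def category(value):
        # Standard WHO adult cut-offs
        if value < 18.5:
            return "underweight"
        if value < 25:
            return "healthy weight"
        if value < 30:
            return "overweight"
        return "obese"

    value = bmi(82.0, 1.75)                  # invented example: 82 kg, 1.75 m
    print(round(value, 1), category(value))  # 26.8 overweight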

Some studies have suggested that there may be different patterns of unhealthy eating amongst male and female groups of students. For example, research conducted by Dr. Ricardo Costa and Dr. Farzad Amirabdollahian at Coventry University found that male students may be at risk of what they term “disordered eating patterns”. In addition, the study suggests that males are at greater risk of not eating five portions of fruit and vegetables per day. This research is based on a substantial sample, using data derived from in-depth interviews with approximately 130 undergraduates, with plans to increase this to nearly 400 participants. The researchers acknowledge that the findings may only reflect events at one university, although there are also plans to expand the study across another two universities in the future (Arnot, 2010, online).

However, not all published studies support the existence of gender differences in eating behaviours. For example, research into risk factors for an unhealthy lifestyle reported by Dodd et al. (2010, p. 75) found no gender differences in rates of eating five portions of fruit or vegetables per day.

Factors in Dietary Change

It is unsurprising that students’ dietary habits change when leaving home to attend university, since it has been identified that life transitions form a major factor in influencing eating habits (Lake et al., 2009, p. 1200). Studies have suggested that the dietary shift is most likely due to young adults leaving the family home and assuming responsibility for meal planning and preparation for the first time. This is supported by observations that university students who remain living at the family home may maintain a relatively healthier lifestyle than those moving out of home (Papadaki et al., 2007, p. 169). Early results from a Coventry University study also support this as a major factor, as it has been identified that cooking skills may be very limited amongst undergraduates, with the exception of mature students (Arnot, 2010, online).
Early results from Coventry University suggest that there is little evidence within their sample of any significant differences in eating habits between students from different social backgrounds (Arnot, 2010, online).

Arnot (2010, online) identifies that any trends in eating habits within the undergraduate population may reflect a phase, which the individuals may grow out of naturally. Lake et al. (2009, p. 1200) also suggest that changes in eating habits may simply be due to the life transition associated with the general maturation process, moving from adolescence to adulthood. This would then suggest that eating habit changes may be consistent across all groups of young adults, not differentiated within the undergraduate population. However, it is possible that other factors such as stress may make the situation more complex, with university students possibly experiencing higher stress levels and therefore being at increased risk of weight gain associated with dietary change (Serlachius et al., 2007, p. 548).

Barriers and Facilitators to Healthy Eating

A systematic review of studies by Shepherd et al. (2005, p. 239) found that the major barriers to healthy eating included access to healthy foods, relative prices and personal preference, for example liking fast foods. This study also identified a lack of provision of healthy school meals as a major barrier, reflecting the fact that the review focused on exploring healthy eating in secondary school children, aged 11 to 16 years. It is therefore likely that different barriers are most important in the university student population, as this group takes a greater level of responsibility for its own food choices.

For example, evidence from the Coventry University study suggests that while undergraduate males were influenced by media images and were motivated to look good, this did not necessarily translate into healthier food choices. Instead, it appears to be associated with an increased risk of disordered eating within this group, alongside increased use of supplements such as protein powders, creatine and amino acids. This approach also led to increased intake of protein-rich foods but very little fruit and vegetable intake. It would be anticipated that availability and cost may still be important factors in this group.

The systematic review by Shepherd et al. (2005, p. 239) suggested that support from family and friends, high availability of healthy foods, an interest in and desire to maintain appearance, and will-power were all major facilitators of eating healthily. Again, it is possible that different factors may be considered important within the university student population, who are older and have greater responsibility for their eating habits.

Methodology

The short review of the literature presented thus far in the essay demonstrates that there is still only a limited understanding of the underlying factors influencing eating habits in undergraduate students. Yet this is the information which is required if effective behavioural change interventions are to be designed and disseminated.

Research Aims

The aim of this small study was to investigate the decision-making processes which underlie the decisions of undergraduate students with regards to eating behaviours, including influences over these decisions. This could then be used alongside other published material to design a social marketing strategy on both a local and national level to improve healthy eating within this group.

Study Sample

A total of six undergraduate students from Manchester University were recruited to participate in the research. Convenience sampling was used to recruit participants to the study sample. Posters were displayed within the business school at the university, requesting participants to attend research focus groups. Eight participants contacted the researcher, but two subsequently withdrew, leaving a sample of four female and two male students. No further inclusion or exclusion criteria were applied to participants, other than that they were current undergraduate students at the university. This method of sampling may not provide a truly representative sample, therefore it may be difficult to generalize the results to the wider population of interest (Babbie, 2010, p. 192). However, this was the most appropriate recruitment approach given the limited time and budget constraints for the project. The diversity of the study sample would also suggest that there was little bias introduced.

Focus Group Methods

Focus groups were selected for data collection from study participants. Focus groups may be particularly useful for gaining an understanding of topics with a group behaviour element, but have also been shown to be very useful in the field of marketing for understanding the impact of marketing stimuli. They were considered to be of particular use in this instance as they allow integrated exploration of associations between lifestyle factors and reactions to marketing materials (Stewart et al., 2007, pp. 2-9).

The focus group was arranged for a two-hour session on one morning, and was moderated by the author. The entire session was video recorded so as to allow for further analysis of responses and behavioural cues at a later date. All participants were given assurance that their responses would remain anonymous and confidential and permission was sought to record the session before it began. Participants were also given information at the beginning of the session as to the purpose of the data collection, and were given opportunity to ask any questions, before being asked to provide consent for participation (Litosseliti, 2003, pp. 70-71).

The focus group began with some short introductory questions to break the ice between participants (Litosseliti, 2003, p. 73), before moving on to focus on the topic of interest: eating behaviours and potential influences. The questions included in the moderator guide, which was prepared to facilitate the focus group, are included in Box 1.

Box 1: Focus group questions

Tell me a little about what you would eat in a typical day.
Do you find that you eat regular meals?
What types of foods do you most like to eat?
Would you say that you eat many snacks? What type of snacks do you eat?
Is there anything you can think of that affects this – for example, do you eat differently on different days of the week?
How would you describe your cooking abilities – do you find it easy to plan meals and cook and prepare food?
How does the way you eat now compare to how you used to eat before coming to university?
Do you find that you eat differently when you go home for the weekend or for holidays?
Would you say that you have any concerns about the way in which you eat?
How do you think that the way in which you eat affects your health?
Are you at all concerned about whether the way you eat affects how you look?
What type of things affect whether you choose healthy foods over non-healthy foods?
Do you find it difficult to find/purchase healthy food?
Would cost have any impact on whether the food you buy is healthy?

Study Results

Overall, the results of the focus group suggested that the students in the sample had experienced a significant change in eating habits since leaving home to attend university. Although the daily eating patterns of participants differed significantly, all felt that they ate a less healthy diet since leaving home. The main difference noted was that regular meals were eaten less often, with several participants reporting that they skipped breakfast regularly, and that other meals were eaten based on convenience rather than at a regular time each day.

Most participants agreed that their eating patterns did differ on a daily basis. In particular, weekends were noted to follow more regular eating patterns, but often involve higher levels of alcohol and unhealthy foods such as takeaways. Participants also generally agreed that they returned to a healthier way of eating when returning home for the weekend or for holidays.

The actual components of diet varied widely across participants. While some participants reported that they regularly ate five portions of fruit and vegetables per day, others indicated that they ate only low levels. Four participants agreed that they ate convenience foods and takeaways on a regular basis, and it was acknowledged that these were usually calorie-dense, high fat foods.

All participants also agreed that they ate snacks on a regular basis, particularly where it was inconvenient to eat meals at regular intervals, and where breakfast was skipped. One participant reported that they felt that their snacking was healthy, however, as they usually snacked on fruit, nuts or seeds rather than chocolate bars or crisps. Given the small sample size and selection procedures, it was difficult to determine whether differences could be attributed to characteristics of the participants, for example gender (Babbie, 2010, p. 192).

There were a number of factors influencing food choices which emerged from the focus group. The major factor appeared to be convenience. The patterns of meals eaten were largely driven by having the time to prepare and cook food, or having access to healthy foods which could be purchased and eaten within the university campus. Participants also agreed that cost played a major role.

Only two participants agreed that their low level of cooking ability had any role in how healthy their diet was. The other participants claimed that while they could cook, convenience, cost and motivation were major barriers to doing so.

Food preferences were also a major factor in determining food choices, with all except one participant agreeing that they enjoyed fast food and several reporting that they preferred unhealthy foods to healthy ones. In spite of this, three participants reported that they did try to limit how often they ate fast foods, as it was acknowledged that it was bad for their health to eat them regularly.

In spite of this, the food choices of participants did not appear to be driven overall by concern over their health. Participants suggested that while they were aware of how their diet could impact on their health, other factors were more important influences. Similarly, only one participant agreed that maintaining the way that they looked played any role in influencing their dietary choices.

Social Marketing Strategy Design

Social marketing, first proposed as a public health tool in the 1970s, refers to the application of marketing techniques, using communication and delivery to encourage behaviour change. Such a strategy follows a sequential planning process which includes market research and analysis, segmentation, setting of objectives, and identifying appropriate strategies and tools to meet these objectives (DH, 2008, online). The literature review and focus group discussed thus far comprise the market research and analysis components of this process, with the remaining steps addressed below.

Market Segmentation

Market segmentation may be performed according to geographic distinctions, demographics or psychographic characteristics (Health Canada, 2004, online).
Based on the limited amount of information which is available so far, it would be difficult to segment the market geographically, as it is unclear whether differences exist according to which university is attended.

The demographics of undergraduate students may also be largely shared, with literature indicating that social background may hold little influence over eating habits within this subpopulation, and only limited evidence of any difference between genders (Arnot, 2010, online; Dodd et al., 2010, p. 75).

Instead, it may be preferential to segment on the basis of psychographic characteristics, according to shared knowledge, attitudes and beliefs with regard to changing dietary behaviour. The “Stages of Change” model proposed by Prochaska and DiClemente may be a useful tool to guide this segmentation, in which any change in behaviour is suggested to occur in six steps: precontemplation, contemplation, preparation, action, maintenance and termination (Tomlin & Richardson, 2004, pp. 13-16).
Those in the precontemplative stage do not see their behaviour as a problem (Tomlin & Richardson, 2004, p. 14); this segment could therefore be targeted with a marketing campaign designed to increase knowledge. Evidence from the US would appear to indicate that higher levels of knowledge regarding dietary guidelines may be associated with better dietary choices, although there is little evidence demonstrating direct causality (Kolodinsky et al., 2007, p. 1409). Given the many different factors which appear to contribute to unhealthy diets amongst students, simply increasing knowledge may be insufficient to generate any significant improvements. This is further supported by current healthy eating initiatives aimed at the general population, such as the 5 A Day campaign, which incorporates additional, practical information, rather than simply educating people on the need to eat more fresh food (NHS Choices, 2010, online).

Those in the contemplative stage are aware that they need to change, but do not really want to; it would be unlikely that targeting a marketing campaign at this group would have any significant effect (Tomlin & Richardson, 2004, p. 15). Once individuals reach the action stage, they are actively initiating or maintaining a change, until the initial issue is finally resolved in the termination stage (Tomlin & Richardson, 2004, pp. 15-16). It would therefore be better to target those in the preparation stage, who have made the decision to change but may be unclear about how to initiate this change. Here, improving knowledge while also providing information on effective ways to change behaviour may be the most appropriate strategy, as with the approach adopted by the 5 A Day campaign.
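
As a purely hypothetical illustration of how this segmentation might be operationalised, the Python sketch below maps simple survey-style responses onto the stages discussed above. The questions, answer coding and classification rules are invented for illustration; they are not taken from Prochaska and DiClemente, nor from the study described in this essay.

    from enum import Enum

    class Stage(Enum):
        PRECONTEMPLATION = "precontemplation"
        CONTEMPLATION = "contemplation"
        PREPARATION = "preparation"
        ACTION = "action"

    def classify(sees_problem, intends_to_change, has_started):
        # Assumed rule-based mapping, for illustration only.
        if has_started:
            return Stage.ACTION
        if intends_to_change:
            return Stage.PREPARATION
        if sees_problem:
            return Stage.CONTEMPLATION
        return Stage.PRECONTEMPLATION

    # The campaign described above would target the preparation segment.
    print(classify(sees_problem=True, intends_to_change=True, has_started=False))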

Strategy Objectives

Based on the information generated from the focus group, along with that from other research, the main aim of the strategy should be to improve the overall diet of undergraduate students. Campaigns such as 5 A Day already exist to encourage eating more fruit and vegetables (NHS Choices, 2010, online). The main issues within the undergraduate group instead appear to lie in choosing unhealthy foods, or skipping meals, due to convenience and cost, and this is where the campaign should focus. The following objectives may therefore be identified:

1. Reduce the number of undergraduate students experiencing disordered eating patterns.
2. Improve knowledge and awareness within the undergraduate student population of tasty, cost-effective, convenient alternatives to takeaways and other junk foods.

National Plan

The national strategy would comprise two main arms. The first would be an educational campaign targeted specifically at the segment described above, focusing on providing practical information to assist healthy eating choices amongst students. This approach appears to have been moderately successful with the 5 A Day campaign within the general population (Capacci & Mazzocchi, 2011, p. 87). Evidence from the US suggests that within the undergraduate population specifically, providing information which is directly relevant to their lifestyle may also be effective (Pires et al., 2008, p. 16).

This campaign would be run through national media, as the evidence suggests that such campaigns are associated not only with increased knowledge, but also with moderate levels of behaviour change (Noar, 2006, p. 21). Online and social media campaigns may also be effective, based on previous case studies. For example, the Kirklees Up For It project found that running a campaign which utilized Facebook alongside its own website was a successful way of reaching a moderate audience of 18 to 24 year olds (NSMC, 2010, online). Social media such as Twitter and Facebook would therefore provide a simple means of providing weekly tips to students on how to create easy, cheap, healthy meals.

Tips could also be given on how to choose healthier snacks which cost less, for example by preparing them at home. By tailoring the advice to the motives of the group, which appear to be related to convenience and cost, previous research would suggest that this should be more effective in changing snacking behaviour (Adriaanse et al., 2009, p. 60).

The second arm of the national campaign would involve lobbying of the government to introduce regulation on the food choices offered by university campuses, particularly where food is provided as part of an accommodation package. This is based on similar recent moves to improve school meals, which has been suggested to be an effective means of improving diet, even if obesity levels have not yet seen any impact (Jaime & Lock, 2009, p. 45). It is also consistent with the data collected in this study, which suggested that access to healthy foods and convenience were major barriers to healthy eating for students.

Localised Intervention

In addition to the national strategy, a local project aimed at providing food preparation workshops would also be piloted in Manchester. This concept is based on the observation that students mostly select unhealthy choices due to convenience and cost, and may not be aware of ways in which healthy food may also be prepared quickly and cheaply. Previous case studies have shown that such practical activities may be an effective means of reaching this target audience. For example, a healthy living project called Up For It, run by Kirklees Council in association with NHS Kirklees, found on surveying young adults aged between 16 and 24 years that interventions which were fun and social were preferred to those which focused too much on health (NSMC, 2010, online). Provision of one-off sessions giving information on where to eat healthily on campus has also shown some success within the undergraduate population in the US (Pires et al., 2008, p. 12).

Based on the budget for the Up For It project, it would be anticipated that approximately £100,000 would be required to set up and run this local section of the strategy (NSMC, 2010, online). It would be assumed that lobbying and media coverage required as part of the national strategy would be managed by the Department of Health.

Conclusions

It is clear that there is some truth to the assumption that undergraduate students in the UK live on a relatively unhealthy diet. While the reasons for this may be somewhat complex, convenience and cost appear to play a major role in the diet decisions which are made by this group. It is also clear that many are aware of the health impact which their diet is likely to have, although this is overridden by other factors. Targeting students who recognize the need to change their diet, by providing information on how to prepare healthier food quickly and cheaply, may help to overcome the barriers of cost and convenience, thereby improving health within this population.

References

Adriaanse, M.A., de Ridder, D.T.D. & de Wit, J.B.F. (2009) ‘Finding the critical cue: Implementation intentions to change one’s diet work best when tailored to personally relevant reasons for unhealthy eating’. Personality and Social Psychology Bulletin, 35(1), 60-71.
Arnot, C. (2010) ‘Male students eschew balanced diet in favour of supplements’. The Guardian, 9 November 2010. Available [online] from: http://www.guardian.co.uk/education/2010/nov/09/male-students-eating-habits [Accessed 27/03/2011].
Babbie, E.R. (2010) The Practice of Social Research. Belmont, CA: Wadsworth, p. 192.
Capacci, S. & Mazzocchi, M. (2011) ‘Five-a-day, a price to pay: An evaluation of the UK program impact accounting for market forces’. Journal of Health Economics, 30(1), 87-98.
Croezen, S., Visscher, T.L.S., ter Bogt, N.C.W., Veling, M.L. & Haveman-Nies, A. (2009) ‘Skipping breakfast, alcohol consumption and physical inactivity as risk factors for overweight and obesity in adolescents: Results of the E-MOVO project’. European Journal of Clinical Nutrition, 63, 405-412.
DH (2008) Social Marketing. Department of Health. Available [online] from: http://webarchive.nationalarchives.gov.uk/+/www.dh.gov.uk/en/Publichealth/Choosinghealth/DH_066342 [Accessed 28/03/2011].
Dodd, L.J., Al-Nakeeb, Y., Nevill, A. & Forshaw, M.J. (2010) ‘Lifestyle risk factors of students: A cluster analytical approach’. Preventive Medicine, 51(1), 73-77.
Health Canada (2004) Section 2: Market Segmentation and Target Marketing. Available [online] from: http://www.hc-sc.gc.ca/ahc-asc/activit/marketsoc/tools-outils/_sec2/index-eng.php [Accessed 26/03/2011].
HM Government (2010) Healthy Lives, Healthy People: Our strategy for public health in England. London: Public Health England. Available [online] from: http://www.dh.gov.uk/dr_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_122347.pdf [Accessed 26/03/2011].
Jaime, P.C. & Lock, K. (2009) ‘Do school based food and nutrition policies improve diet and reduce obesity’. Preventive Medicine, 48(1), 45-53.
James, W.P.T. (2008) ‘WHO recognition of the global obesity epidemic’. International Journal of Obesity, 32, S120-S126.
Kolodinsky, J., Harvey-Berino, J.R., Berlin, L., Johnson, R.K. & Reynolds, T.W. (2007) ‘Knowledge of current dietary guidelines and food choice by college students: Better eaters have higher knowledge of dietary guidance’. Journal of the American Dietetic Association, 107(8), 1409-1413.
Lake, A.A., Hyland, R.M., Rugg-Gunn, A.J., Mathers, J.C. & Adamson, A.J. (2009) ‘Combining social and nutritional perspectives: From adolescence to adulthood’. British Food Journal, 111(11), 1200-1211.
Litosseliti, L. (2003) Using Focus Groups in Research. London: Continuum, pp. 70-73.
NHS Choices (2010) 5 A Day. Available [online] from: http://www.nhs.uk/livewell/5aday/pages/5adayhome.aspx/ [Accessed 26/03/2011].
Noar, S.M. (2006) ‘A 10-year retrospective of research in health mass media campaigns: Where do we go from here?’ Journal of Health Communication, 11(1), 21-42.
NSMC (2010) Up For It. Available [online] from: http://thensmc.com/component/nsmccasestudy/?task=view&id=156 [Accessed 26/03/2011].
Papadaki, A., Hondros, G., Scott, J.A. & Kapsokefalou, M. (2007) ‘Eating habits of university students living at, or away from home in Greece’. Appetite, 49(1), 169-176.
Pires, G.N., Pumerantz, A., Silbart, L.K. & Pescatello, L.S. (2008) ‘The influence of a pilot nutrition education program on dietary knowledge among undergraduate college students’. Californian Journal of Health Promotion, 6(2), 12-25.
Raulio, S., Roos, E. & Prattala, R. (2010) ‘School and workplace meals promote healthy food habits’. Public Health Nutrition, 13, 987-992.
Serlachius, A., Hamer, M. & Wardle, J. (2007) ‘Stress and weight change in university students in the United Kingdom’. Physiology & Behavior, 92(4), 548-553.
Shepherd, J., Harden, A., Rees, R., Brunton, G., Garcia, J., Oliver, S. & Oakley, A. (2005) ‘Young people and healthy eating: A systematic review of research on barriers and facilitators’. Health Education Research, 21(2), 239-257.
Spanos, D. & Hankey, C.R. (2010) ‘The habitual meal and snacking patterns of university students in two countries and their use of vending machines’. Journal of Human Nutrition and Dietetics, 23(1), 102-107.
Stewart, D.W., Shamdasani, P.N. & Rook, D.W. (2007) Focus Groups: Theory and Practice – 2nd Edition. Thousand Oaks, CA: Sage Publications, Inc., pp. 2-9.
Tomlin, K.M. & Richardson, H. (2004) Motivational Interviewing and Stages of Change. Center City, MN: Hazelden, pp. 14-16.
Wyatt, S.B., Winters, K.P. & Dubbert, P.M. (2006) ‘Overweight and obesity: Prevalence, consequences, and causes of a growing public health problem’. American Journal of the Medical Sciences, 331(4), 166-174.
Yahia, N., Achkar, A., Abdallah, A. & Rizk, S. (2008) ‘Eating habits and obesity among Lebanese university students’. Nutrition Journal, 7, 32-36.

Spinal Cord Trauma Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Abstract

Loss of sensory and motor function below the injury site is caused by trauma to the spinal cord. Approximately 10,000 people experience serious spinal cord injury each year. There are four general types of spinal cord injury: cord maceration, cord laceration, contusion and solid cord injury. There are three phases of SCI response that occur after injury: the acute, secondary, and chronic. The most immediate concern is patient stabilization. Additionally, interventions may be instituted in an effort to improve function and outcome. Through continued research and future developments, there is hope of recovery from spinal cord injury.

Introduction

Loss of sensory and motor function below the injury site is caused by trauma to the spinal cord. As indicated by Huether & McCance (2008), normal activity of the spinal cord cells at and below the level of injury ceases due to loss of continuous tonic discharge from the brain and brain stem. Depending on the extent of the injury, reflex function below the point of injury may be completely lost. This involves all skeletal muscles, bladder, bowel, sexual function and autonomic control. In the past, hope for recovery was minimal. With medical advances and better understanding, the outlook today is improved but still limited.

Risk Factors and Incidence

According to Huether & McCance (2008), approximately 10,000 people experience serious spinal cord injury each year; 81% of those injured are male, with an average age of 33.4 years. As indicated by Hulsebosch (2002), the injuries divide into four main groups: 44% are sustained by young people through motor vehicle crashes or other high-energy traumatic accidents; 24% through violence; 22% in the elderly population, either through falls or through cervical spinal stenosis caused by congenital narrowing or spondylosis; and 8% through sports activities.

Categories of Injury

According to Hulsebosch (2002) there are four general types of spinal cord injury: 1) cord maceration, 2) cord laceration, 3) contusion injury, and 4) solid cord injury. In the first two, the surface of the cord is lacerated and a prominent connective tissue response is invoked, whereas in the latter two the spinal cord surface is not breached and the connective tissue component is minimal. The contusion injury represents 25 to 40% of all injuries and is a progressive injury that enlarges over time.

Cellular Level Physiology

Hulsebosch (2002) describes three phases of response following spinal cord injury. The acute phase begins at the moment of injury and extends for the first few days, during which a variety of pathophysiological processes begin. There is immediate mechanical soft tissue damage, including to the endothelial cells of the vasculature. Cell death, resulting from mechanical forces and ischemic consequences, is instantaneous.

Over the next few minutes there are significant electrolytic shifts: intracellular concentrations of sodium increase, extracellular concentrations of potassium increase, and intracellular levels of calcium rise to toxic levels that contribute to a failure in neural function. These electrolyte shifts cascade to a generalized manifestation of spinal shock, which is representative of a “failure of circuitry in the spinal neural network”. As indicated by Shewmon (1999), spinal shock is a transient functional depression of a structurally intact cord below the site of an acute spinal cord injury.

It does not occur with slowly progressive lesions. Limited function or loss of function typically lasts two to six weeks, followed by recovery of function. The secondary phase occurs over the next few minutes to the next few weeks. Ischemic cellular death, electrolytic shifts, and edema continue. As a result of cell lysis, extracellular concentrations of glutamate and other amino acids reach toxic concentrations within the first fifteen minutes after injury.

Free-radical production amplifies. Neutrophils accumulate in the spinal parenchyma within 24 hours. Lymphocytes follow the neutrophils and reach their peak numbers within 48 hours. Local concentrations of cytokines and chemokines increase as part of the inflammation process. As inflammation and ischemia proceed, the injury site grows from the initial mechanical force response site into the surrounding area, encompassing a larger region of cell death.

Regeneration is inhibited by factors expressed within this cascade of reactions. The chronic phase occurs over a time course of days to years. Cell death continues. The cord becomes scarred and tethered. Conduction deficits result from demyelination of the cord. Axonal regeneration and sprouting are exhibited, but inhibitory factors suppress any resultant growth. Alteration of neural circuits often results in chronic pain syndromes for many spinal cord injury patients.

Therapeutic Management

Spinal cord injury is diagnosed by physical examination, radiological exam, CT scans, MRI scans, and myelography. The most immediate concern in the management of an acute spinal cord injury is patient stabilization. The vertebral column is subject to surgical stabilization using a variety of surgical rods, pins, and wires.

Hardware must be meticulously placed, as surgical intervention has the potential to instigate additional spinal trauma. Homeostatic body systems must be supported through fluid resuscitation, medication management and electrolyte support. Additionally, the following interventions may be instituted in an effort to improve function and outcome:

Edema Reduction

Reduction of the inflammatory response is one focus of treatment in acute spinal cord injury. Steroids have provided a primary tool to reduce edema and inflammation, the most successful of which is methylprednisolone (MP). According to Bracken (1993), the administration of a high dose of MP within eight hours of the insult in patients with both complete and incomplete SCI, as proposed by the National Acute Spinal Cord Injury Study (NASCIS-2), has been promising with respect to improved clinical outcome. The cellular and molecular mechanisms by which MP improves function may involve antioxidant properties, the inhibition of the inflammatory response, and/or a role in immunosuppression.

Inhibition of Inflammation by Use of Anti-Inflammatory Agents

Although inflammation is generally held to be a repair mechanism that is restorative in nature, recent work has demonstrated that the inflammatory cascade produces several pathways that are degradative in nature, such as the prostaglandin pathways.

Anti-inflammatory agents have been administered with successful limitation of the inflammatory process. As indicated by Hains, Yucra and Hulsebosch (2001), selective cyclooxygenase (COX)-2 inhibitors given systemically to spinal cord injury patients have demonstrated significant improvements. Inhibition of the enzyme activation sequence appears to be the safest medication approach at this time.

Application of either whole-body hypothermia or local cord cooling appears to hold promise for those suffering from neurotrauma. Hypothermia, applied either spinally or systemically, is thought to provide protection for neural cells and to reduce secondary inflammation, decreasing immediate mortality. According to Hayes, Hsieh, Potter, Wolfe, Delaney, and Blight (1993), local spinal cord cooling within eight and a half hours of injury in ten patients produced a better-than-expected rate of recovery of sensory and motor function.

Rescue from Neural Cell Death

Cells die through programmed cell death after SCI, presenting an excellent opportunity for intervention with factors that could rescue the cells at risk. As presented by Eldadah and Faden (2000), one approach to cell rescue is the inhibition of caspases. Caspases are regulated signalling proteases that play a primary role in mediating cell apoptosis through cleavage at specific sites within proteins. By contrast, the products of the bcl-2 oncogene family inhibit programmed cell death. According to Shibata, Murray, Tessler, Ljubetic, Connors and Saavedra (2000), recent work has demonstrated prevention of retrograde cell loss and reduction of atrophy by direct intra-spinal administration of the Bcl-2 protein into the damaged site.

Another group of proteins implicated in cell death are the calpains, calcium-activated proteases that assist in the cytoskeletal demolition of injured cells. Substances with calpain-inhibiting properties could therefore prove of benefit in reducing cell death.

Demyelination and Conduction

According to Waxman (2001), inhibiting the neural injury induced by the increased barrage of action potentials early in the injury phase, or inhibiting the voltage-dependent sodium channels which provide the ionic basis for the action potential, may be beneficial. In addition, neural injury and disease may introduce altered ionic channel function on nerve processes, resulting in impaired conduction properties and persistent hyperexcitability, which forms the basis for chronic pain after CNS neural trauma.

As a result of secondary injury to the spinal cord, many axons are demyelinated. Infusion of a fast, voltage-sensitive potassium channel blocker may provide partial restoration of conduction properties to demyelinated axons. As presented by Guest, Hiester and Bunge (2005), another strategy for improving demyelination is the transplantation of Schwann cells, which may contribute to the restoration of myelin sheaths around some spinal axons.

Promotion of Axonal Regeneration

During development of the central nervous system, an assortment of axonal growth-promoting proteins is present in the extracellular environment. This environment stimulates axon growth and neural development. Once the central nervous system is established, the growth-stimulating agents decline, and the adult central nervous system shifts toward inhibition of axonal growth, permitting stable circuitry. These inhibitory and stimulatory factors provide an opportunity for research to promote axonal growth after spinal cord injury, perhaps rebuilding a neural communication network.

Cell Replacement Strategies

After spinal cord injury, the function of nerve cells and of the myelin-producing cells, which insulate axons and support impulse conduction, is lost. Cellular replacement to rebuild conduction properties is a promising therapy. As indicated by Nomura, Tator and Shoichet (2006), there is promise that technology utilizing cellular treatment procedures, including olfactory ensheathing cells (the cells that form the myelin on olfactory nerves), Schwann cells (the cells that form the myelin on peripheral nerves), dorsal root ganglia, adrenal tissue, and neural stem cells, can promote repair of the injured spinal cord. It is postulated that these tissues would rescue, replace, or provide a regenerative pathway for injured adult neurons, which would then integrate with or promote the regeneration of the spinal cord circuitry and restore function after injury. As indicated by Nakamura (2005), there is promise that bioengineering technology utilizing cellular treatment advances can promote repair of the injured spinal cord. Transplantation of these cells promotes functional recovery of locomotion and reflex responses.

The engineering of cells combines the therapeutic advantage of the cells with a delivery system. For example, if delivery of neurotrophins (growth factors that support nerve cells) is desired, cells that secrete neurotrophins and cells that create myelin can be engineered to stimulate axon growth and rebuild nerve function.

In an effort to further enhance beneficial effects, immune cells such as macrophages can be extracted from the patient’s own system and inserted at the injury site. The patient’s own activated macrophages will scavenge degenerating myelin debris, rich in non-permissive factors, and at the same time encourage regenerative growth without eliciting an immune response.

Retrain the Brain with Aggressive Physical Therapy

It is apparent that recovery of locomotion is dependent on sensory input that can “reawaken” spinal circuits and activate central pattern generators in the spinal cord, as demonstrated by spontaneous “stepping” in the lower limbs of one patient. According to Calancie, Alexeeva, Broton and Molano (2005), it may take six or more months for reflexes to appear following acute SCI, suggesting they might be due to new synaptic interconnections.

Electrical Stimulation

Functional electrical stimulation (FES) that contributes to improved standing can improve quality of life for the individual and the caregiver. There is considerable interest in computer-controlled FES for strengthening the lower extremities and for cardiovascular conditioning, which has met with some success in terms of physiological improvements such as increased muscle mass, improved blood flow, and better bladder and bowel function. Added benefits include decreases in medical complications such as venous thrombosis, osteoporosis, and bone fractures. Stimulation of the phrenic nerve, which innervates the diaphragm, is used in cases where there is damage to respiratory pathways.

Chronic Central Pain

As indicated by Siddall & Cousins (1997), pain continues to be a significant problem in patients with spinal cord injuries. There is little consensus regarding the terminology, definitions and nature of the pain, and treatment studies have lacked congruence due to inaccurate identification of pain types. There has been little progress in efforts to bring an understanding of the pathophysiology of chronic central pain (CCP) to the development of therapeutic approaches for the SCI patient population.

CCP syndromes develop in the majority of spinal cord injury patients. As indicated by Que, Siddall and Cousins (2007), chronic pain is a disturbing aspect of spinal cord injury, often interfering with basic activities, effective rehabilitation and the quality of life of the patient. Evidence that neurons in pain pathways are pathophysiologically altered after spinal cord injury comes from both the clinical and the animal literature. In addition, the development of the chronic pain state correlates with structural alterations such as intra-spinal sprouting of primary afferent fibres.

According to Que, Siddall and Cousins (2007), pain in the cord-injured patient is often resistant to treatment. Recognition of CCP has led to the utilization of non-opioid analgesics. According to Siddall and Middleton (2006), baclofen, once used exclusively in the treatment of spasticity, and the anticonvulsant gabapentin, originally used to treat epilepsy, have had some success in attenuating musculoskeletal CCP syndromes. The tricyclic antidepressant amitriptyline has been shown to be effective in the treatment of dysesthetic pain.

Conclusion

Stem cell therapy offers hope for spinal cord injury patients through an abundance of cell replacement strategies. Advances in the field of electronic circuitry will lead to better FES and robotic devices. Pharmacological advances offer directions for intervention to aid recovery and improve patients’ quality of life. The re-establishment of cell, nerve and muscle communication interconnections may become possible. Through tenacity, research, and future developments, victims of spinal cord injury may one day be told there is hope of recovery.

References

American Psychological Association (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: American Psychological Association.

Bracken, M.B., & Holford, T.R. (1993). Effects of timing of methylprednisolone or naloxone administration on recovery of segmental and long-tract neurological function in NASCIS 2. Journal of Neurosurgery. 79(4), 500-7.

Bunge, R.P., Puckett, W.R., & Hiester, E.D. (1997). Observations on the pathology of several types of human spinal cord injury, with emphasis on the astrocyte response to penetrating injuries. Adv Neurol. 72, 305-315.

Calancie, B., Alexeeva, N., Broton, J.G., & Molano, M.R. (2005). Interlimb reflex activity after spinal cord injury in man: strengthening response patterns are consistent with ongoing synaptic plasticity. Clinical Neurophysiology: Official Journal of the International Federation of Clinical Neurophysiology. 116(1), 75-86.

Eldadah, B.A., & Faden, A.I. (2000). Caspase pathways, neuronal apoptosis, and CNS injury. Journal of Neurotrauma. 17(10), 811-29.

Guest, J.D., Hiester, E.D., & Bunge, R.P. (2005). Demyelination and Schwann cell responses adjacent to injury epicenter cavities following chronic human spinal cord injury. Experimental Neurology. 192(2), 384-93.

Hains, B.C., Yucra, J.A., & Hulsebosch, C.E. (2001). Reduction of pathological and behavioral deficits following spinal cord contusion injury with the selective cyclooxygenase-2 inhibitor NS-398. Journal of Neurotrauma. 18(4), 409-23.

Hayes, K.C., Hsieh, J.T., Potter, P.J., Wolfe, D.L., Delaney, G.A., & Blight, A.R. (1993). Effects of induced hypothermia on somatosensory evoked potentials in patients with chronic spinal cord injury. Paraplegia. 31(11), 730-41.

Huether, S.E., & McCance, K.L. (2008). Understanding pathophysiology (4th ed.). St. Louis, MO: Mosby, Inc.

Hulsebosch, C.E. (2002). Recent advances in pathophysiology and treatment of spinal cord injury. Advances in Physiology Education. 26, 238-255.

Nakamura, M., Okano, H., Toyama, Y., Dai, H.N., Finn, T.P., & Bregman, B.S. (2005). Transplantation of embryonic spinal cord-derived neurospheres support growth of supraspinal projections and functional recovery after spinal cord injury in the neonatal rat. Journal of Neuroscience Research. 81(4), 457-68.

Nomura, H., Tator, C.H., & Shoichet, M.S. (2006). Bioengineered strategies for spinal cord repair. Journal of Neurotrauma. 23(3-4), 496-507.

Que, J.C., Siddall, P.J., & Cousins, M.J. (2007). Pain management in a patient with intractable spinal cord injury pain: a case report and literature review. Anesthesia and Analgesia. 105(5), 1462-73.

Shewmon, D.A. (1999). Spinal shock and “brain death”: somatic pathophysiological equivalence and implications for the integrative-unity rationale. Spinal Cord 37, 313-324.

Shibata, M., Murray M., Tessler, A., Ljubetic, C., Connors, T., & Saavedra, R.A. (2000). Single injections of a DNA plasmid that contains the human Bcl-2 gene prevent loss and atrophy of distinct neuronal populations after spinal cord injury in adult rats. Neurorehabilitation and Neural Repair. 14(4), 319-30.

Siddall, P.J., & Middleton, J.W. (2006). A proposed algorithm for the management of pain following spinal cord injury. Spinal Cord 44, 67-77

Tator, C.H. (1998). Biology of neurological recovery and functional restoration after spinal cord injury. Neurosurgery. 42(4), 696-707.

Waxman, S.G. (2001). Acquired channelopathies in nerve injury and MS. Neurology. 56(12), 1621-7.

Sickle-cell Disease (SCD) Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Sickle-cell disease

Sickle cell disease (SCD), or sickle cell anemia, is a group of genetic conditions resulting from the inheritance of a mutated form of the gene coding for the β-globin chain of the hemoglobin molecule, which causes malformation of red blood cells (RBCs) in their deoxygenated state. Specifically, this single point mutation occurs at position 6 of the β-globin chain, where a valine is substituted for glutamic acid (Ballas et al. 2012). This abnormal hemoglobin causes a characteristic change in RBC morphology: the cell becomes abnormally rigid and sickle-like, rather than the usual biconcave disc. These cells do not flow as freely through the circulatory system as cells of the normal phenotype, and can become damaged and hemolysed, resulting in vascular occlusion (Stevens and Lowe 2002).

SCD is an autosomal recessive condition; patients with SCD will have inherited a copy of the mutated gene from each of their parents (homozygous genotype). Individuals who inherit only one copy (heterozygous genotype) are termed sickle cell (SC) carriers, who may pass on the affected gene to their children (Stevens & Lowe 2002). The severity of SCD varies considerably from patient to patient, most likely as the result of environmental or as-yet-unidentified genetic factors (Bean et al. 2013).
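
As a simple illustration of this inheritance pattern (a worked example added for clarity, not drawn from the essay's sources), consider two SC carrier parents, each carrying one normal allele (HbA) and one sickle allele (HbS). Each parent transmits either allele with equal probability, so:

```latex
% Worked example (illustrative): offspring genotype probabilities when
% both parents are sickle cell carriers (HbA/HbS). Each parent transmits
% HbA or HbS with probability 1/2, independently of the other parent.
\[
P(\text{HbSS, affected}) = \tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4},\quad
P(\text{HbAS, carrier}) = 2\times\tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{2},\quad
P(\text{HbAA, unaffected}) = \tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4}.
\]
```

On average, therefore, one in four children of two carriers will be affected, which is the hallmark of the autosomal recessive pattern described above.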

Patients with SCD are typically of African or African-Caribbean origin, but all ethnic groups may be affected. In 2014 the National Institute for Health and Care Excellence (NICE) estimated that between 12,500 and 15,000 people in the UK suffer from SCD (NICE quality standard 58, 2014), with more than 350 babies born with SCD between 2007 and 2008. Patients in developed countries typically live into their 40s and 50s; in developing countries, however, it is estimated that between 50% (Odame 2014) and 90% of affected children die by the age of 5 (Gravitz and Pincock 2014).

SCD is more prevalent in ethnic African populations because SC carriers exhibit a 10-fold reduction in severe malarial infection, which is common in many African countries and associated with significant mortality. One proposed mechanism is that, on infection with the malaria parasite, RBCs in SC carriers become sickle shaped and are then removed from the circulation and destroyed. Being a carrier is therefore genetically advantageous, so more carriers survive to reproductive age, in turn increasing the incidence of the SCD mutation in the population (Kwiatkowski 2005).

Patients with SCD experience periods of acute illness termed “crises”, resulting from the two different effects of SCD: vaso-occlusion (causing pain, stroke and acute chest syndrome) and hemolysis (for example, anemia from RBC destruction and reduced oxygen-carrying capacity) (Glassberg 2011). Crises may occur several times a week, or less than once a year. Patients typically present with anemia, low blood oxygen levels and pyrexia (NICE quality standard 58, 2014).

There are three classifications of crises:

1. Sequestration crisis (rapid pooling of RBCs in organs, typically the spleen, which may result in patient death from the acute reduction in red cells available for oxygen transport).

2. Infarctive crisis (blockage of capillaries causing an infarction).

3. Aplastic crisis (where the spleen is damaged by the first two types of crisis, compromising RBC production) (Stevens & Lowe 2002).

The result of these crises can be irreversible damage to a wide range of organs, from the spleen to the retina, and can cause extreme pain (Stevens & Lowe 2002). However, patients not currently experiencing a crisis can also present with anemia, as the result of poor oxygen transport function, loss of RBCs due to sequestration in organs such as the spleen, and reduced red cell production as the result of impaired spleen function (Ballas et al. 2012).

Typically, patients will initially present with an enlarged spleen in early childhood (due to pooling of malformed RBCs); the organ then becomes progressively damaged and fibrotic, ultimately resulting in a state of almost complete loss of function (autosplenectomy). Several complications of SCD are recognised, including impaired neurocognitive function, which is most likely the result of anemia or silent cerebral infarcts (Ballas et al. 2012).

In the UK, SCD is usually diagnosed antenatally or in the first few weeks of life. Prenatal screening is offered to parents who may be at risk of carrying the SCD-causing gene: NICE recommends that screening be offered early in pregnancy (ideally before 10 weeks' gestation) in high risk groups, or via a family origin questionnaire in low risk groups, with full screening then offered if the questionnaire suggests a relevant family history. In the case of a positive test, counselling should be offered immediately, and the parents offered the option of termination of pregnancy (NICE Clinical Guideline 62, 2014). If antenatal screening has not occurred, SCD is one of the diseases screened for by the newborn heel prick test in the first week of life (NICE quality standard 58, 2014). In older patients, or in countries where screening is not offered, patients present with anemia or an acute crisis. Microscopic examination of blood samples can also reveal sickle shaped RBCs, and the characteristic abnormal hemoglobin can be identified by high performance liquid chromatography or electrophoresis (Glassberg 2011).

There are three approaches to the treatment of SCD. The first is to manage the condition prophylactically in the hope of reducing the incidence of complications and crises. The second is to manage crises effectively, both to reduce the risk of organ damage and life-threatening events and to control the severe pain associated with an SCD crisis. The third is to target the cause of the condition itself.

Penicillin (de Montalembert et al. 2011) and folic acid are usually offered to patients in order to prevent complications such as bacterial infection, and are associated with a significant increase in survival and quality of life (NICE quality standard 58, 2014). Children are also vaccinated against pneumococcal infection. Transcranial doppler imaging of the cerebral vessels can be used to identify children at risk of stroke (de Montalembert et al. 2011). As previously discussed, SC carriers are conferred some protection from malarial infection; paradoxically, SCD sufferers display an increased sensitivity to malarial infection and should therefore be treated with anti-malarial prophylaxis where appropriate (Oniyangi and Omari 2006).

Hydroxyurea has been used in the treatment of SCD, as it appears to increase the production of fetal hemoglobin (HbF), thus reducing the proportion of abnormal hemoglobin, although the exact mechanism is unclear (Rang et al. 1999). Suggested mechanisms include induction of HbF by nitric oxide or by inhibition of ribonucleotide reductase; others include increasing RBC water content and reducing endothelial adhesion, which lower the incidence of infarction (Charache et al. 1995).

Blood transfusion is an important tool in treating SCD, especially in children. It almost immediately improves the capacity of the blood to transport oxygen and, in the longer term, as the “healthy” donor RBCs are not destroyed as quickly as the sickle shaped RBCs, repeated transfusion is associated with a reduction in erythropoiesis (RBC production) in the SCD patient. This reduces the proportion of sickle shaped RBCs in circulation, which in turn reduces the risk of a crisis or stroke. Exchange transfusion is also possible, whereby abnormal sickle RBCs are removed from the circulating volume prior to transfusion with donor blood. However, there are drawbacks to transfusion, namely the inherent safety risks such as immunological sensitisation, contamination of blood products with infectious disease, and a lack of available donated blood (Drasar et al. 2011).

The severe pain of a crisis must be controlled, most often with opioid analgesics, which act by binding to µ, δ and κ opioid receptors. The common approach is intravenous infusion of morphine, either by continuous drip or via a patient controlled analgesia (PCA) pump. Other drug options, including paracetamol, tramadol and corticosteroids, may also be considered, but these have a ceiling to the analgesia they can produce, whereas opioid drugs are more often limited by their side effects, such as respiratory depression, vomiting and itching (Ballas et al. 2012).

Bone marrow transplant is currently the only curative therapy for SCD. However, it depends on locating a suitable donor with an HLA tissue match, usually a healthy sibling. It carries some risks and complications, including graft rejection, but is generally associated with a very positive prognosis (Maheshwari et al. 2014). As SCD is an autosomal recessive disease with one well identified causative gene, gene therapy to replace one copy of the faulty gene with a normal copy is of great interest to researchers. However, this is still very much in development in humans, and a 2014 review of SCD clinical trials found no trials of gene therapy as yet (Olowoyeye and Okwundu 2014).

In addition to the acute effects of SCD, patients are also at risk from a number of potentially fatal consequences, such as acute splenic sequestration. In this condition, which often occurs after an acute viral or bacterial infection (classically parvovirus B19), the malformed RBCs become trapped in the sinuses of the spleen, causing rapid enlargement. Patients present with abdominal pain and enlargement (often severe), pallor and weakness, and potentially tachycardia and tachypnea. Patients may also suffer hypovolemic shock from the significant reduction in available hemoglobin (acute aplastic crisis). This is managed by emergency treatment of the hypovolemia and transfusion of packed RBCs. Because the rate of recurrence of splenic sequestration is high (approximately 50%), a splenectomy may be performed once the patient has recovered from the event (NICE quality standard 58, 2014).

Acute chest syndrome is also a serious, potentially fatal complication of SCD. It is characterised by occlusion of the pulmonary blood vessels during a vaso-occlusive crisis, and patients typically present with chest pain, cough and low oxygen levels (Ballas et al. 2012). It is also associated with asthma, and it is recommended that asthma in patients with SCD be carefully monitored. Treatment is usually with antibiotics and, if indicated, bronchodilators, with transfusion or exchange transfusion also considered (de Montalembert et al. 2011).

Another consequence of the rapid turnover of abnormally shaped RBCs is increased production of bilirubin, which may cause hepatobiliary disease, specifically gallstones and vascular conditions of the liver. Liver pathology can result from ischemia-reperfusion injury following a crisis, endothelial dysfunction, and iron overload as the liver sequesters iron from the destroyed RBCs (Ballas et al. 2012). SCD patients are also at significant risk of ischemic stroke resulting from a cerebral infarctive crisis, with one study suggesting that 11% of patients will suffer a stroke by 20 years of age, and 24% by 45. Children who suffer stroke may also go on to develop moyamoya syndrome, which is associated with a significant decrease in cognitive function and an increased risk of further stroke (Ballas et al. 2012).

SCD is a complex condition and presents significant treatment challenges, requiring a multi-disciplinary team to cover the wide range of its effects and the substantial prophylactic treatment involved. As discussed, its potential complications can be life threatening and have life changing consequences.

An additional difficulty is that while screening, prophylactic and curative treatments are available in the developed world, they are not available in the developing world, where rates of the disease are in fact highest. In sub-Saharan Africa, childhood mortality is estimated to be between 50% (Odame 2014) and 90% (Gravitz & Pincock 2014), yet in developed countries life expectancy extends into the 40s and 50s (Gravitz & Pincock 2014). Currently, laboratory diagnosis and screening are prohibitively expensive in developing countries, so low cost techniques need to be developed. Gavi, the Vaccine Alliance, also endeavors to make prophylactic treatment more available, specifically the pneumococcal vaccine. Of the therapies discussed here, hydroxyurea is likely to be the most affordable; increasing its availability would be of significant benefit, and clinical trials commenced in Africa in 2014 (Odame 2014).

References

Ballas, S.K., Kesen, M.R., Goldberg, M.F., Lutty, G.A., Dampier, C., Osunkwo, I., Wang, W.C., Hoppe, C., Hagar, W., Darbari, D.S., & Malik, P. 2012. Beyond the definitions of the phenotypic complications of sickle cell disease: an update on management. ScientificWorldJournal., 2012, 949535 available from: PM:22924029

Bean, C.J., Boulet, S.L., Yang, G., Payne, A.B., Ghaji, N., Pyle, M.E., Hooper, W.C., Bhatnagar, P., Keefer, J., Barron-Casella, E.A., Casella, J.F., & Debaun, M.R. 2013. Acute chest syndrome is associated with single nucleotide polymorphism-defined beta globin cluster haplotype in children with sickle cell anaemia. Br.J.Haematol., 163, (2) 268-276 available from: PM:23952145

Charache, S., Terrin, M.L., Moore, R.D., Dover, G.J., Barton, F.B., Eckert, S.V., McMahon, R.P., & Bonds, D.R. 1995. Effect of hydroxyurea on the frequency of painful crises in sickle cell anemia. Investigators of the Multicenter Study of Hydroxyurea in Sickle Cell Anemia. N.Engl.J.Med., 332, (20) 1317-1322 available from: PM:7715639

de Montalembert M., Ferster, A., Colombatti, R., Rees, D.C., & Gulbis, B. 2011. ENERCA clinical recommendations for disease management and prevention of complications of sickle cell disease in children. Am.J.Hematol., 86, (1) 72-75 available from: PM:20981677

Drasar, E., Igbineweka, N., Vasavda, N., Free, M., Awogbade, M., Allman, M., Mijovic, A., & Thein, S.L. 2011. Blood transfusion usage among adults with sickle cell disease – a single institution experience over ten years. Br.J.Haematol., 152, (6) 766-770 available from: PM:21275951

Glassberg, J. 2011. Evidence-based management of sickle cell disease in the emergency department. Emerg.Med.Pract., 13, (8) 1-20 available from: PM:22164362

Gravitz, L. & Pincock, S. 2014. Sickle-cell disease. Nature, 515, (7526) S1 available from: PM:25390134

Kwiatkowski, D.P. 2005. How malaria has affected the human genome and what human genetics can teach us about malaria. Am.J.Hum.Genet., 77, (2) 171-192 available from: PM:16001361

Maheshwari, S., Kassim, A., Yeh, R.F., Domm, J., Calder, C., Evans, M., Manes, B., Bruce, K., Brown, V., Ho, R., Frangoul, H., & Yang, E. 2014. Targeted Busulfan therapy with a steady-state concentration of 600-700 ng/mL in patients with sickle cell disease receiving HLA-identical sibling bone marrow transplant. Bone Marrow Transplant., 49, (3) 366-369 available from: PM:24317124

NICE Clinical Guideline 62 – Antenatal Care. Guideline CG62, published March 2008, revised February 2014. https://www.nice.org.uk/guidance/cg62

NICE quality standard 58: Sickle cell acute painful episode, Guidelines CG143, publication date June 2012, reviewed May 2014. https://www.nice.org.uk/guidance/cg143

Odame, I. 2014. Perspective: we need a global solution. Nature, 515, (7526) S10 available from: PM:25390135

Olowoyeye, A. & Okwundu, C.I. 2014. Gene therapy for sickle cell disease. Cochrane.Database.Syst.Rev., 10, CD007652 available from: PM:25300171

Oniyangi, O. & Omari, A.A. 2006. Malaria chemoprophylaxis in sickle cell disease. Cochrane.Database.Syst.Rev. (4) CD003489 available from: PM:17054173

Rang, Dale, & Ritter 1999. Pharmacology, 4th ed. Churchill Livingstone.

Stevens & Lowe 2002. Pathology, 2nd ed. London, Mosby.

Leadership and Management in Professional contexts

Part 1: Management Style
Description and Feelings

This essay aims to reflect on my experience of working with a group of seven students tasked to critically analyse a case study and develop a group presentation. The Gibbs (1988) model of reflection will be used to discuss and analyse the lessons gained from my experience. At the start of our group meetings, a leader was selected who helped the group in planning and implementing the task. However, my experience with the group was marked by difficulties and challenges. In the first stage of our group's formation, the forming stage, we had difficulty meeting as a group due to differences in university schedules. During the meetings, some members chose not to participate while others were more demanding and tried to dominate the discussions. The leader tried to create some sense of order in our first meetings and demonstrated the authoritarian leadership style. Throughout our team meetings, some members were absent, while others who were present continued to depend on the more dominant members to accomplish the tasks. I was frustrated at the beginning of our meetings and felt that we could have been successful in our presentation if we had managed to work more effectively. Our team presentation was not what I expected, and I was disappointed with our overall team performance.

Discussion and Analysis

Management is described as a process whereby leaders govern and make decisions within an organisation (Bach and Ellis, 2011). This also involves planning tasks, organising work, staffing, directing activities and controlling (Belbin, 2010). The main aim of management is for managers to influence or encourage team members to accomplish a task (Belbin, 2010). On reflection, my team leader demonstrated the authoritarian leadership style. This type of leadership is described as one where the leader provides the direction of the team and gives specific instructions and directives on how to achieve the team goal (Daly et al., 2015). An authoritarian leader also supervises the activities of subordinates and strongly discourages members from validating or questioning his or her directives (Bach and Ellis, 2011). This type of leadership is appropriate in highly structured workplaces with routine operations (Bishop, 2009), and autocratic leadership is also favourable for activities that are simple and of short duration (Marquis and Huston, 2012). On evaluation of my experience in the team, we had very little interaction and cohesion during the first few stages of team working.

According to Tuckman’s model of team development, there are four stages of group formation (Clark et al., 2007): forming, storming, norming and performing. Our lack of cohesion and difficulties in conducting team meetings may reflect the first stage of group formation, the forming stage. In this step, Clark et al. (2007) explain, team members are still beginning to establish their team roles and tend to be polite and diplomatic. At this stage, a team leader was chosen, who in turn adopted the authoritarian leadership style. Since most team members were reluctant to accept a task, our leader decided to assign team roles and ensured that each team member would attend the team meetings. The leader also supervised the entire group. On reflection, the authoritarian leadership style was appropriate in the first few stages of our team working, since it ensured that tardiness and absenteeism were prevented (Belbin, 2010). Further, the authoritarian style was also appropriate since our assigned task was not complex and was of short duration (Bishop, 2009). Our group leader was able to follow up on our assigned tasks. However, as we progressed towards the second stage, the storming stage, conflicts soon arose.

There were members who tended to dominate the discussion and did not agree with our leader on our assigned team roles and how the case study should be presented. Although Goodman and Clemow (2010) argue that conflicts in teams are natural and may not always have a negative impact on the function and development of the team, in my experience the conflicts had a negative impact on our team's development. Members who disagreed with our team leader on how the case study should be presented chose not to participate in our subsequent meetings and role-playing. Since the authoritarian leadership style was adopted, our team leader did not consider the team members' suggestions. Morgan et al. (2015) reiterate that conflicts can help the development of a team if each member acknowledges the differences between team members and learns to adjust to their individual roles. On reflection, most of my team members chose not to adjust to our individual differences. In turn, this created a discordant team, which was also reflected in our final presentation. I felt that our presentation was chaotic and reflected poorly on us as team members. On consideration, our team would have benefitted from the transformational leadership style, which encourages members to actively participate in decision-making and is associated with achievement of goals and objectives (Bach and Ellis, 2011).

Conclusion

The authoritarian leadership style was not the most appropriate style for managing our team, since it failed to encourage team members to participate in decision-making. This type of leadership is also not applicable in actual healthcare settings, where patient-centred care is promoted and team working and participation are highly encouraged.

Action Plan

When managing a team in the future, I will ensure that I am aware of my own team role. Conflicts should be used to develop, not destroy, teams. I will also adopt a leadership style that allows team members to participate actively in decision-making. Specifically, I will develop the transformational leadership style, since this ensures that all members have the opportunity to be actively involved and valued in the achievement of a task (Bishop, 2009).

References:
Bach, S. & Ellis, P. (2011) Leadership, Management and Team Working in Nursing. Exeter: Learning Matters.
Belbin, R. (2010) Management teams: why they succeed or fail. London: Butterworth-Heinemann.
Bishop, V. (2009) Leadership for nursing and allied healthcare professionals. Milton Keynes: Open University Press.
Clark, P., Cott, C. & Drinka, T. (2007) ‘Theory and practice in interprofessional ethics: a framework for understanding ethical issues in health care teams’, Journal of Interprofessional Care, 21(6), pp. 591-603.
Daly, J., Speedy, S. & Jackson, D. (2015) Leadership and Nursing. Contemporary Perspectives. 2nd ed. Chatswood: Elsevier.
Gibbs, G. (1988) Learning by doing: A guide to teaching and learning methods, Oxford: Further Educational Unit, Oxford Polytechnic.
Goodman, B. & Clemow, R. (2010) Nursing and collaborative practice: A guide to interprofessional learning and working. Exeter: Learning Matters, Ltd.
Marquis, B. & Huston, C. (2012) Leadership and management tools for the new nurse. A case study approach. Philadelphia: Lippincott.
Morgan, S., Pullon, S. & McKinlay, E. (2015) ‘Observation of interprofessional collaborative practice in primary care teams: An integrative literature review’, International Journal of Nursing Studies, doi: 10.1016/j.ijnurstu.2015.03.008 [Online]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25862411 (Accessed: 15 May 2015).
Part 2: Leadership, Management and Change
Description and Feelings

In our team meetings, the concept of change management surfaced, since our team leader struggled to influence team members to assume different team roles. I also realised that I was used to completing tasks individually rather than as a team. Although I was not the team leader, I also had to learn to adopt an appropriate leadership style for future team working. During our team meetings I was frustrated, since we were accomplishing little, but in the end I felt that I had developed my ability to work in a team.

Discussion and Analysis

Change is described as a transition that involves movement from the present state of an organisation to a desired future state (Marquis and Huston, 2012). Changes often occur in healthcare settings and require change management. During the role-play and team meetings, collaborative team working was encouraged to achieve the goals of the team. This represented a change in how I accomplish tasks: from completing assigned tasks individually, I had to learn how to complete tasks as a group. Apart from this, there was also a suggested change in leadership style, from authoritarian to transformational. On evaluation, change management was necessary in our group, since it could have addressed the factors behind our poor performance and strengthened the factors that would lead to a successful group performance.

Practising change management is crucial, since it will help prepare me for my future role as a registered nurse and nurse leader. At least three models have been proposed for managing change: the Plan, Do, Study, Act (PDSA) cycle, Kotter's model and Lewin's change model (Bach and Ellis, 2011; Appelbaum et al., 2012; Reed and Card, 2016). The PDSA cycle is often used in the NHS. It allows nurse leaders and other healthcare practitioners to create a plan for implementing a change, while the 'do' stage constitutes the actual performance of the plan. In the third, or 'study', phase, nurse leaders and team members analyse the performance and consider whether it needs to be enhanced or changed (Reed and Card, 2016). In the 'act' phase, the proposed changes to the action plan and performance are implemented. The entire process is then repeated until the change has been integrated within the organisation. A critique of the PDSA cycle is the difficulty of repeating it, with Reed and Card (2016) noting that only 20% of healthcare groups using PDSA actually repeat the cycle. The applicability of the PDSA cycle is also limited, with some healthcare settings not benefitting from this type of change management (Taylor et al., 2013).

Meanwhile, Kotter's model of change adopts a top-down approach and is often used in corporate settings (Appelbaum et al., 2012). It is difficult to use this model in actual healthcare settings, since the NHS encourages all team members and patients to participate actively in the planning and implementation of a change initiative (NHS Leadership Academy, 2011). However, reflection on my own group shows that the Kotter model was demonstrated, as our team leader exercised the authoritarian leadership style: the change came from the leader and trickled down to the team members. Finally, Lewin's model of change proposes three stages: unfreezing, change and refreezing (Gopee and Galloway, 2013). This model is often used in healthcare settings, since it takes into account the factors that enable or deter change in actual practice. A force-field analysis is conducted, and factors that enable change are strengthened while factors that deter change are reduced (Gopee and Galloway, 2013).

On reflection, employing this type of change management will be crucial in my future role as a registered nurse leading a multidisciplinary team. In the NHS, it is recognised that several factors deter or promote change in practice. For instance, the perception that a proposed change initiative only increases paperwork can deter the uptake of change (Bach and Ellis, 2011). This perception is supported in the literature, with the Royal College of Nursing (2013) reporting that nurses collectively spend 2.5 million hours per week completing clerical tasks. Hence, I have to be aware of factors that deter or enable change. On reflection, the autocratic leadership style, coupled with the top-down approach to change, did not lead to a successful performance by my group. Lewin's model of change would have been more appropriate in helping my team members accept their individual roles and change their own ways of completing tasks. This model would have helped our team leader investigate the factors that led to poor attendance at our team meetings and the team members' refusal to resolve conflicts.

Conclusion

Effective leadership and change management are crucial when implementing a change initiative and completing group tasks. Using Lewin's model of change would have helped the team leader identify the factors that enable and deter change; successful use of this model would lead to achievement of the team's goals.

Action Plan

I will develop my leadership skills and my ability to carry out Lewin's change model. I will find opportunities to practise change management skills in my own healthcare setting and report regularly to my mentor and colleagues on my progress. I will also ask my mentor and colleagues for feedback on whether I have achieved leadership and change management skills.

References:
Appelbaum, S., Habashy, S., Malo, J. & Shafiz, H. (2012) ‘Back to the future: revisiting Kotter’s 1996 change model’, Journal of Management Development, 31(8), pp. 764-782.
Bach, S. & Ellis, P. (2011) Leadership, Management and Team Working in Nursing. Exeter: Learning Matters.
Gopee, N. & Galloway, J. (2013) Leadership and Management in Healthcare. 2nd ed. London: Sage.
Marquis, B. & Huston, C. (2012) Leadership and management tools for the new nurse. A case study approach. Philadelphia: Lippincott.
NHS Leadership Academy (2011) Clinical Leadership Competency Framework. Coventry: NHS Institute for Innovation and Improvement.
Reed, J. & Card, A. (2016) ‘The problem with Plan-do-study-act cycles’, British Medical Journal Quality and Safety, 25(3), pp. 147-152.
Royal College of Nursing (2013) Nurses spend 2.5 million hours a week on paperwork- RCN Survey [Online]. Available at: https://www2.rcn.org.uk/newsevents/press_releases/uk/cries_unheard_-_nurses_still_told_not_to_raise_concerns (Accessed: 10 May, 2017).
Taylor, M., McNicholas, C., Nicolay, C., Darzi, A., Bell, D. & Reed, J. (2013) ‘Systematic review of the application of the plan-do-study-act method to improve quality in healthcare’, British Medical Journal Quality and Safety, doi: 10.1136/bmjqs-2013-001862.
Part 3: Leadership, Management and Decision Making
Description and Feelings

In our group work, our team leader did not make a decision to identify the factors that deterred participants from resolving conflicts and adjusting to team roles. There was also no decision to reflect on why team members were reluctant to accept the assigned tasks, or on the reasons for poor attendance at team meetings. I felt that these non-decisions heavily influenced our team performance. As a group, we drew the erroneous conclusion that our team leader could handle all the required tasks, which might also have contributed to our failed group presentation. During our meetings, I was anxious and apprehensive that we were not accomplishing our tasks within the given time frame.

Discussion and Analysis

The indecision over identifying the factors that deterred the group from participating in meetings and accepting tasks had a negative impact on our team performance. The ability to make decisions is crucial when completing tasks as a student nurse and in preparation for my role as a registered nurse or nurse leader. Marriner-Tomey (2009) has argued that decision-making is crucial in healthcare organisations and within teams. In actual healthcare settings, decisions are made constantly, ranging from whether to admit a patient to which interventions to use for a specific healthcare condition. These decisions are influenced by legislation, policies, leadership styles and the practice of patient-centred care (NHS Leadership Academy, 2011). On analysis, it is crucial to make decisions within groups. However, it is cautioned that collective decisions may reflect 'groupthink' and lead to failure instead of success (Marriner-Tomey, 2009). Groupthink is described as faulty decision-making by a group, representing a deterioration in reality testing, mental efficiency and moral judgment (Wilcox, 2010). Groups who demonstrate groupthink often do so without realising the impact of their decisions on other groups and, in the process, ignore alternative actions (Cooke and Young, 2002). It is important to note that groupthink often occurs when members have similar backgrounds, when rules for decision-making are not clear and when members do not consider the opinions of others (Wilcox, 2010). In my experience, we were not able to make decisions, nor did we demonstrate groupthink despite the similarity of our backgrounds. I felt that our lack of cohesion prevented us from making even the faulty decisions that are common when a team 'groupthinks'.

An analysis of our group revealed that we did not examine the power relations within the group. Power relations can have an impact on who makes the decisions and whether these decisions are followed (Bach and Ellis, 2011). Power is described according to who has the formal authority to make decisions for the group and who has access to resources (McDonald et al., 2012); it is also described according to who has the ability to control ideas (McDonald et al., 2012). In teams, there may be power imbalances, especially when professional systems and social and cultural factors reinforce them (Martin-Rodriguez et al., 2005). Such imbalance may be more evident in hospital settings, where medical dominance is seen. For example, medical doctors have traditionally retained their independence, professional autonomy and status when collaborating with other groups of healthcare workers (Hudson, 2002). This may create a power imbalance, as doctors tend to have more power in decision-making than the rest of the group. This contrasts with what is often seen in community healthcare settings, where each member of a healthcare team tends to share power and make decisions according to what is best for the patient (Hudson, 2002).

Meanwhile, Weir-Hughes (2011) asserts that for a therapeutic relationship to develop, there is a need to consider the power relationship between healthcare practitioners and patients. It is suggested that power may be used negatively (e.g. through coercion and force) or positively (e.g. through encouragement and empowerment). On analysis, my ability to understand power relations through my experiences of team working will be essential when caring for actual patients. In our team, power was used negatively, since our team leader had to force team members to accept assignments. However, I realised that in actual settings it is important to encourage and empower patients and colleagues to improve patient care; it has been shown that patient empowerment tends to improve the quality of care and patient outcomes (Sullivan and Garland, 2010). On analysis, there was a power imbalance in our group, since the team leader made all the decisions and a top-down approach to change was followed.

Conclusion

Making decisions is crucial in team working and when caring for patients. However, the ability to make decisions depends on one's power: those with more access to resources and power have greater ability to influence decisions. In healthcare settings, it is crucial to use power positively and to empower patients and other members of the healthcare team to make decisions. Positive use of power is also important in preventing 'groupthink', a phenomenon that tends to result in negative consequences for the group.

Action Plan

When faced with a similar situation in the future, I will ensure that I actively participate in decision-making. However, I need to empower both others and myself to make good decisions; empowerment is necessary to prevent power imbalance. I will continue to engage in training on how to practise effective leadership and management skills in order to empower others to engage actively in decision-making.

References:
Bach, S. & Ellis, P. (2011) Leadership, Management and Team Working in Nursing. Exeter: Learning Matters.
Cooke, M. & Young A. (2002) Managing and Implementing Decisions in Healthcare. London: Healthcare Balliere Tindall/RCN.
Marriner-Tomey, A. (2009) Guide to Nursing Management and Leadership. St. Louis: Mosby Elsevier.
Martin-Rodriguez, L., Beaulieu, M., D’Amour, D. & Ferrada-Videla, M. (2005) ‘The determinants of successful collaboration: a review of theoretical and empirical studies’, Journal of Interprofessional Care, 19(2), pp. 132-147.
McDonald, J., Jayasuriya, R. & Harris, M. (2012) ‘The influence of power dynamics and trust on multidisciplinary collaboration: a qualitative case study of type 2 diabetes mellitus’, BMC Health Services Research, 12(63). Doi: 10.1186/1472-6963-12-63.
NHS Leadership Academy (2011) Clinical Leadership Competency Framework. Coventry: NHS Institute for Innovation and Improvement.
Sullivan, E. & Garland, G. (2010) Practical Leadership and Management in Nursing. Harlow: Pearson Education.
Wilcox, C. (2010) Groupthink: An impediment to success. USA: Xlibris Corporation.
Part 4: Reflection on Development of Skill
Description and Feelings

I participated in a second group activity where I was chosen as the leader. In the second group, I was able to practise leadership skills such as effective communication, motivation, change management and integrity. During one of our discussions, I assigned a group member to search for evidence-based interventions for a specific healthcare condition. Following some research, my team member decided to use the case of a real patient to explain the interventions. However, she identified the name of the patient and the context of her care, including the names of the nurses involved. I talked to my colleague privately after our discussion and informed her of the NMC (2015) code of conduct and the need to observe the privacy and confidentiality of the patient. I asked her to use a pseudonym instead when discussing the patient's case. My colleague accepted my suggestion and protected the identity of the patient during subsequent discussions. On reflection, I felt that my decision to advise my colleague on how to discuss patient care was based on the ethical principles of patient autonomy and confidentiality.

Discussion and Analysis

From my participation in teams and groups throughout the module, I was able to develop effective communication skills. Specifically, I learned how to listen and show compassion to my colleagues, and to my patients during placement, when they conversed with me. Kourkouta and Papathanasiou (2014) have emphasised that effective communication skills are crucial in healthcare settings and when working in teams. These skills include recognising both verbal and non-verbal messages (Johnston, 2013). Patients who feel that their nurses are listening intently tend to report higher satisfaction with the care they receive (Kourkouta and Papathanasiou, 2014). Effective communication skills are also necessary for resolving conflicts in teams and understanding the perspectives of others (Craig and Moore, 2015). In nursing teams, and when working with patients, it is recognised that conflicts of ideas occur. Hence, the ability to communicate effectively and resolve conflicts will be necessary in preparing for my future role as a registered nurse (Craig and Moore, 2015).

Apart from effective communication, I also learned how to motivate my fellow team members. Motivation is crucial in team working, since it helps team members complete tasks. In my experience with my first group, team motivation was not practised; in contrast, my second team was able to use motivation to help members accept and carry out tasks. I realised that the main difference was the support that team members received in the second group. Craig and Moore (2015) state that team support is critical in team working, since its absence can create dissatisfaction and loss of motivation. In addition to skills in motivation, I also saw the importance of change management in our team; in my first group, change management was not practised. Managing change is critical in healthcare practice. Thorpe (2015) has stated that planned change, which is described as purposeful, requires collaborative effort and the presence of a change agent. The NMC (2015) has emphasised that nurses must deliver quality care that is based on evidence, suggesting that nurses have to update their skills and practice continually; this also means that changes in practice have to be made. However, implementing change is challenging in practice: it is suggested that almost 70% of change projects do not succeed (Mitchell, 2013).

In my experience with the group, I also realised the necessity of recognising the factors that drive or deter change. Mitchell (2013) suggests that advances in science, shortages in the nursing workforce, an ageing population, the need to increase patient satisfaction and the rising cost of treatment all drive change, while inappropriate leadership, poor communication and under-motivated staff deter the uptake of change in practice (O'Neal and Manley, 2007). In my future practice, I will have to identify factors that promote change. On reflection, I was not able to promote change in our first group; I could have assisted the team leader in analysing the factors that deterred my colleagues from accepting their assigned tasks.

Integrity was also practised in the subsequent groups I was involved in. Specifically, power was not misused, as all team members in these groups had equal opportunities to participate in decision-making. In addition, the team leader and group members exercised honesty and transparency in the decisions made. Finally, ethics in decision-making was observed: for instance, no personal information about the patients discussed in case studies was mentioned, and patient autonomy was observed. The NMC (2015) has reiterated the importance of protecting the privacy and autonomy of patients.

Conclusion

Practising effective leadership skills and ethical decision-making is important when working in teams and in providing quality care to patients. An inability to work effectively could result in poor performance, which in turn could affect the quality of care that my future patients receive. Developing these leadership skills early in my undergraduate years will help prepare me for my role as a registered nurse.

Action Plan

As part of my action plan, I will continue to engage in training on how to develop effective communication skills. Specifically, I will refine my skills in showing empathy when listening to my patients and colleagues. The ability to demonstrate empathy is crucial, since it helps patients feel that they matter to the team (Fowler, 2015).

References:
Craig, M. & Moore, A. (2015) ‘Providing support for teams in difficulty’, Nursing Times, 111(16), pp. 21-23.
Fowler, J. (2015) ‘What makes a good leader?’, British Journal of Nursing, 24(11), pp. 598-599.
Johnston, B. (2013) ‘Patient satisfaction and its discontents’, Journal of the American Medical Association, 173(22), pp. 2025-2026.
Kourkouta, L. & Papathanasiou, I. (2014) ‘Communication in nursing practice’, Materia Socio Medica, 26(1), pp. 65-67.
Mitchell, G. (2013) ‘Selecting the best theory to implementing planned change’, Nursing Management, 20(1), pp. 32-37.
Nursing and Midwifery Council (NMC, 2015) The Code: Professional Standards of practice and behaviour for nurses and midwives [Online]. Available from: http://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/revised-new-nmc-code.pdf (Accessed: 12 May, 2015).
O’Neal, H. & Manley, K. (2007) ‘Action planning: making change happen in clinical practice’, Nursing Standard, 21(35), pp. 35-39.
Thorpe, R. (2015) ‘Planning a change project in mental health nursing’, Nursing Standard, 30(1), pp. 38-44.

Preventing Wound Infection in Orthopaedic Surgery

Introduction

Wound infection post-surgery, now preferably known as surgical site infection (SSI), refers to infection at or near a surgical site within 30 days of surgery, or within one year if the procedure involved insertion of an implant (Illingworth et al., 2013; Owens and Stoessel, 2008). While definitive statistics on the incidence of SSI are difficult to establish given the gamut of surgical procedures, environments and patients, available data indicate that SSI accounts for more than 15% of reported hospital-acquired infections (HAIs) across all patients and about 38% among surgical patients (Campbell et al., 2013; Owens and Stoessel, 2008; Reichman and Greenberg, 2009). Data from across Europe likewise indicate that, depending on the surgical procedure and/or surveillance methods used, the incidence of SSI may be as high as 20% across all surgical procedures (Leaper et al., 2004). HAIs generally, and SSIs in particular, are relatively less common in orthopaedic surgery than in other surgical procedures (Johnson et al., 2013); however, when they do occur, osteo-articular infections, for example, can be very difficult to treat, with a significant risk of lifelong recurrence (Faruqui and Choubey, 2014). SSI leads to significantly higher costs of care from longer hospital stays; it poses a major burden on healthcare providers and the healthcare system, jeopardises patients' health outcomes and remains a major cause of morbidity and mortality despite improvements in surgical procedures and infection control techniques (Owens and Stoessel, 2008; Tao et al., 2015). Consequently, understanding evidence-based approaches to reducing or preventing SSI has attracted significant interest from researchers, healthcare administrators and policy-makers. This essay reviews current best practices in the prevention of SSI and offers recommendations for future practice within orthopaedic settings.

Rationale

This review of best practices in the prevention of SSI following orthopaedic surgery is underpinned by two major reasons. First, despite the considerable improvement in surgical procedures and techniques in most orthopaedic settings, SSI negatively impacts patient outcomes and imposes significant costs on the healthcare system. According to a case-control study reported by Owens and Stoessel (2008), patients who suffer SSI are more likely to require readmission to hospital and have more than double the risk of death compared with patients without SSI. In addition, the median duration of hospitalisation attributable to SSI was put at 11 days, and the extra cost to the healthcare system estimated at €325 per day (Owens and Stoessel, 2008). Second, the prevention of SSI is hardly straightforward: given the wide range of factors that modify the risk of SSI, a ‘bundle’ approach with ‘systematic attention to multiple risk factors’ is required for any effective prevention of SSI (Uckay et al., 2013). Thus, by undertaking a state-of-the-art review of orthopaedic SSI prevention techniques and processes, this essay may contribute towards better orthopaedic surgery outcomes for patients and providers.

Prevention of SSI in orthopaedic surgery: Best Practices

According to the Health Protection Agency (2011), the most common pathogenic organisms responsible for surgical wound infection in orthopaedic surgery include methicillin-sensitive Staphylococcus aureus (MSSA), methicillin-resistant Staphylococcus aureus (MRSA), coagulase-negative staphylococci (CoNS), Enterobacteriaceae, Enterococcus spp., Pseudomonas spp., Streptococcus spp. and occasional cases of unspecified diphtheroids of the Corynebacterium spp. and other gram-positive organisms. SSIs can be categorised into superficial incisional, deep incisional and organ space SSI (Reichman and Greenberg, 2009). Superficial incisional SSI refers to infection involving only the skin and subcutaneous tissue at the point of incision; deep incisional SSI refers to infection of the underlying soft tissues; and organ space SSI refers to infection involving organs or organ spaces that were opened or manipulated during the surgical procedure. Since the risk of developing SSI, and the specific type of SSI suffered, are determined by factors related to the patient, the procedure and the hospital environment, current best practices and guidelines for preventing SSI can be broadly elaborated under these categories.

Patient-related Practices

Existing patient conditions such as diabetes mellitus, obesity and/or rheumatoid arthritis have been associated with an increased risk of SSI (Illingworth et al., 2013; Johnson et al., 2013). As part of effective pre-operative patient management, the current body of evidence recommends aggressive glucose control for patients with diabetes, to reduce the heightened risk of infection due to hyperglycaemia before or after surgery. In patients with rheumatoid arthritis, corticosteroids and anti-tumour necrosis factor (TNF) therapy have been argued to delay wound healing and increase the risk of infection. However, the British Society for Rheumatology (BSR) recommends that, in deciding whether to cease these medications pre-surgery, the potential benefit of preventing post-surgery infection be balanced against the risk of a disease flare (Dixon et al., 2006; Luqmani et al., 2006). In addition, orthopaedic surgery for patients who currently smoke or are obese (BMI above 30 kg/m2) should be delayed (until smoking cessation or weight loss) to reduce the risk of SSI. For example, a randomised controlled study reported that smoking cessation for just 4 weeks significantly reduced the odds of incisional SSI (Sorensen et al., 2003), while Namba et al. (2005) reported significantly higher odds of SSI in obese patients (BMI above 35 kg/m2) undergoing total hip and knee replacement surgery, compared with patients who were not obese.
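
Since the obesity thresholds above are expressed as BMI values, a brief worked calculation may help (an illustrative example, not taken from the cited studies). BMI is simply body mass divided by the square of height:

```latex
% Illustrative BMI calculation. BMI = mass (kg) / height (m)^2.
\[
\mathrm{BMI} = \frac{\text{mass (kg)}}{\text{height (m)}^{2}},\qquad
\text{e.g. } \frac{95\ \text{kg}}{(1.75\ \text{m})^{2}} \approx 31\ \text{kg/m}^{2},
\]
% i.e. just above the 30 kg/m^2 threshold at which delaying elective
% orthopaedic surgery is suggested in the paragraph above.
```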

Screening patients for the presence of MSSA and MRSA, with subsequent decolonisation, is one of the most widely recommended techniques for preventing SSI. Staphylococcus aureus colonisation is reportedly found in the nares of about 30% of healthy individuals (Kalmeijer et al., 2002), and this nasal carriage of both methicillin-sensitive and methicillin-resistant S. aureus has been demonstrated to be a significant risk factor for SSI. Kelly et al. (2012) reported a significant drop in SSI, from 2.3% to 0.3%, with the use of intranasal mupirocin and triclosan showers to decolonise patients before orthopaedic surgery. Also, a review of eight randomised controlled trials by van Rijen et al. (2008) reported that the use of mupirocin significantly reduced the incidence of MRSA- and MSSA-associated SSI. Guidelines from NICE (2008) recommend a combination of nasal mupirocin and chlorhexidine showers for patient decolonisation, while Uckay et al. (2013) indicate that available evidence from the orthopaedic literature suggests that S. aureus screening, decolonisation and showering constitute a cost-saving, effective strategy for reducing the incidence of SSI in orthopaedic surgery.

Surgical Procedure-related Practices

Preoperative preparation of the skin before incision is one of the major avenues for preventing SSI (Kelly et al., 2012). However, there is no consensus on which antiseptic agent offers the most effective protection. While NICE (2008) guidelines suggest that both aqueous and alcohol-based preparations, e.g. povidone-iodine or chlorhexidine, are suitable for skin preparation, Darouiche et al. (2010) and Milstone et al. (2008) have raised concerns about the development of bacterial resistance to chlorhexidine. These studies report the relative superiority of 2% chlorhexidine mixed with 70% isopropyl alcohol, while some experts have suggested increasing the chlorhexidine concentration to 4% or using 10% povidone-iodine (Uckay et al., 2013). Nevertheless, povidone-iodine and chlorhexidine remain the gold standard for preoperative skin preparation.

Also as part of skin preparation, NICE recommends that hair should be removed only if necessary, immediately before surgery and with electric clippers, not razor blades. Recent evidence suggests that the use of razor blades can result in microscopic skin cuts that may act as foci for colonisation by micro-organisms, thus increasing the risk of infection (Owens and Stoessel, 2008).

Preoperative administration of antibiotic prophylaxis to reduce the risk of surgical wound infection is widely accepted for surgery in orthopaedic settings, including bone trauma. Several large-scale studies have demonstrated that antibiotic prophylaxis, when administered properly, helps reduce tissue contamination during surgery to levels that do not overwhelm the patient's immune system, and can thus reduce the risk of SSI by up to 75% (Chen et al., 2013; Faruqui and Choubey, 2014; Illingworth et al., 2013; Uckay et al., 2013). However, NICE (2008) recommends that potential adverse effects, optimal dosage and the most effective time of pre-operative administration be carefully considered to maximise the benefit of antibiotic prophylaxis. Uckay et al. (2013) believe that first- or second-generation parenteral cephalosporins are sufficient in most cases, except where there is skin colonisation with MRSA, in which case glycopeptide antibiotics may be more effective; this should, however, be weighed against the individual patient's allergy history. Uckay et al. (2013) also recommend 30 minutes to 1 hour before incision as the ideal time to administer prophylaxis. While this is generally accepted, NICE (2008) recommends that prophylaxis may be given earlier in procedures where a tourniquet is used.

In addition to minimising the risks from the skin and endogenous flora of the patient, the surgical team must also strive to reduce the chances of contamination from their persons, the tools used or the procedure itself. NICE (2008) recommends that every member of the surgical team thoroughly scrub before donning surgical gowns and gloves. There is growing support for double-gloving and frequent glove-changing to reduce the risk of contamination from tiny punctures in surgical gloves that often go unnoticed during surgery. While the evidence supporting double-gloving and/or frequent intra-operative glove-changing as a strategy for reducing SSI risk remains inconclusive, Widmer et al. (2010) conclude that the practice is supported by expert opinion, especially for lengthy procedures. Moreover, excellent surgical technique is crucial in preventing SSI. For example, maintaining effective haemostasis while preserving adequate blood supply, removing devitalised tissue, eradicating dead space, handling tissue gently and managing the surgical wound effectively postoperatively can all help reduce the chance of SSI (Uckay et al., 2013).

Hospital Environment-related Practices

The CDC and the World Health Organization recommend that doors to the operating room be kept closed and traffic kept to a minimum to reduce potential contamination of surgical sites (Tao et al., 2015). To achieve this, essential equipment and tools should be stored in the operating room. Indeed, the Health Protection Agency (2011) suggests that the frequency of operating room door opening is a positive predictor of increased bacterial counts in the operating room. Airflow in the operating room is another modifier of SSI risk: vertical or horizontal laminar-flow ventilation systems have been advocated for orthopaedic surgery to achieve ultra-clean air within the operating room and reduce airborne contaminants. Although the evidence for an effect of laminar airflow systems on SSI risk remains inconclusive, the reduction in airborne contaminants is perhaps an added advantage (Owens and Stoessel, 2008; Reichman and Greenberg, 2009).

Lastly, constant surveillance is an important part of preventing SSI. By following up patients post-operatively and reporting appropriate data to the surgical team, surgical decisions can be improved based on historical records (Skramm et al., 2012). Moreover, surveillance ensures that cases of SSI are identified early and treated before complications arise. Surveillance data can also form the basis of evidence-based decision-making on facility-specific service improvements to reduce the incidence of SSI and improve outcomes for all concerned (Skramm et al., 2012).

Recommendations

This essay has reviewed current knowledge on surgical site infection and strategies to reduce its incidence. It is pertinent to state that, despite the various precautions elaborated above, complete eradication of surgical site contamination is almost impossible, as some endogenous micro-organisms always remain and environmental factors cannot be totally eliminated. To reduce the incidence of SSI to a minimum, the following are recommended:

It is crucial to adopt a ‘bundle’ approach that ensures that patient, procedure and facility related factors are controlled for as much as possible.
While improving surgical and care delivery is always crucial, surveillance and data collection should also be promoted to ensure that changes and improvements in procedures and facility practices are evidence-based.
New technologies and strategies are continually being developed to reduce complications like SSI and improve outcomes for patients. It is important to stay abreast of these developments to ensure that orthopaedic surgery is not only evidence-based but also contemporary, achieving the best possible outcome for all parties.

Conclusion

Surgical site infection (SSI) poses a significant challenge to patients undergoing orthopaedic surgery, to the surgical team and to the healthcare system in general. SSI negatively impacts patient outcomes and imposes unnecessary demands on healthcare resources. Fortunately, much of the burden associated with SSI can be avoided. This review has identified the multitude of patient and procedure-related factors that modify SSI risk and highlighted various evidence-based strategies to mitigate these risks. The paper demonstrates a consensus in the literature that, by screening and subsequently decolonising patients, administering antibiotic prophylaxis, ensuring that surgical tools, equipment and garments are properly sterilised and keeping the operating room free of airborne contaminants, cases of surgical wound infection in orthopaedic surgery can be effectively prevented.

Bibliography

Campbell, K. A., Phillips, M. S., Stachel, A., Bosco III, J. A. and Mehta, S. A. (2013) Incidence and risk factors for hospital-acquired Clostridium difficile infection among inpatients in an orthopaedic tertiary care hospital. Journal of Hospital Infection, 83(2), pp. 146-149.

Chen, A. F., Wessel, C. B. and Rao, N. (2013) Staphylococcus aureus Screening and Decolonization in Orthopaedic Surgery and Reduction of Surgical Site Infections. Clinical Orthopaedics and Related Research, 471(7), pp. 2383-2399.

Darouiche, R. O., Wall, M. J., Itani, K. M. F., Otterson, M. F., Webb, A. L., Carrick, M. M., Miller, H. J., Awad, S. S., Crosby, C. T., Mosier, M. C., AlSharif, A. and Berger, D. H. (2010) Chlorhexidine–Alcohol versus Povidone–Iodine for Surgical-Site Antisepsis. New England Journal of Medicine, 362(1), pp. 18-26.

Dixon, W. G., Watson, K., Lunt, M., Hyrich, K. L., Silman, A. J. and Symmons, D. P. M. (2006) Rates of serious infection, including site-specific and bacterial intracellular infection, in rheumatoid arthritis patients receiving anti–tumor necrosis factor therapy: Results from the British Society for Rheumatology Biologics Register. Arthritis & Rheumatism, 54(8), pp. 2368-2376.

Faruqui, S. A. and Choubey, R. (2014) Antibiotics Use in Orthopaedic Surgery; An Overview. National Journal of Medical and Dental Research, 2(4), pp. 52-58.

Health Protection Agency (2011) Sixth report of the mandatory surveillance of surgical site infection in orthopaedic surgery, April 2004 to March 2010. London: Health Protection Agency.

Illingworth, K. D., Mihalko, W. M., Parvizi, J., Sculco, T., McArthur, B., el Bitar, Y. and Saleh, K. J. (2013) How to minimize infection and thereby maximize patient outcomes in total joint arthroplasty: a multicenter approach: AAOS exhibit selection. The Journal of Bone and Joint Surgery. American Volume, 95(8), p. 1.

Johnson, R., Jameson, S. S., Sanders, R. D., Sargant, N. J., Muller, S. D., Meek, R. M. D. and Reed, M. R. (2013) Reducing surgical site infection in arthroplasty of the lower limb: A multi-disciplinary approach. Bone and Joint Research, 2(3), pp. 58-65.

Kalmeijer, M. D., Coertjens, H., van Nieuwland-Bollen, P. M., Bogaers-Hofman, D., de Baere, G. A. J., Stuurman, A., van Belkum, A. and Kluytmans, J. A. J. W. (2002) Surgical Site Infections in Orthopedic Surgery: The Effect of Mupirocin Nasal Ointment in a Double-Blind, Randomized, Placebo-Controlled Study. Clinical Infectious Diseases, 35(4), pp. 353-358.

Kelly, J. C., O’Briain, D. E., Walls, R., Lee, S. I., O’Rourke, A. and Mc Cabe, J. P. (2012) The role of pre-operative assessment and ringfencing of services in the control of methicillin resistant Staphylococcus aureus infection in orthopaedic patients. The Surgeon, 10(2), pp. 75-79.

Leaper, D. J., van Goor, H., Reilly, J., Petrosillo, N., Geiss, H. K., Torres, A. J. and Berger, A. (2004) Surgical site infection – a European perspective of incidence and economic burden. Int Wound J, 1(4), pp. 247-73.

Luqmani, R., Hennell, S., Estrach, C., Birrell, F., Bosworth, A., Davenport, G., Fokke, C., Goodson, N., Jeffreson, P., Lamb, E., Mohammed, R., Oliver, S., Stableford, Z., Walsh, D., Washbrook, C. and Webb, F., on behalf of the British Society for Rheumatology and British Health Professionals in Rheumatology Standards, Guidelines and Audit Working Group (2006) British Society for Rheumatology and British Health Professionals in Rheumatology Guideline for the Management of Rheumatoid Arthritis (the first two years). Rheumatology, 45(9), pp. 1167-1169.

Milstone, A. M., Passaretti, C. L. and Perl, T. M. (2008) Chlorhexidine: expanding the armamentarium for infection control and prevention. Clin Infect Dis, 46(2), pp. 274-81.

Namba, R. S., Paxton, L., Fithian, D. C. and Stone, M. L. (2005) Obesity and perioperative morbidity in total hip and total knee arthroplasty patients. J Arthroplasty, 20(7 Suppl 3), pp. 46-50.

National Institute for Health and Care Excellence (2008) Surgical site infections: prevention and treatment. Clinical guideline. Manchester: NICE.

Owens, C. D. and Stoessel, K. (2008) Surgical site infections: epidemiology, microbiology and prevention. Journal of Hospital Infection, 70, Supplement 2, pp. 3-10.

Reichman, D. E. and Greenberg, J. A. (2009) Reducing Surgical Site Infections: A Review. Reviews in Obstetrics and Gynecology, 2(4), pp. 212-221.

Skramm, I., Saltytė Benth, J. and Bukholm, G. (2012) Decreasing time trend in SSI incidence for orthopaedic procedures: surveillance matters! Journal of Hospital Infection, 82(4), pp. 243-247.

Sorensen, L. T., Karlsmark, T. and Gottrup, F. (2003) Abstinence from smoking reduces incisional wound infection: a randomized controlled trial. Ann Surg, 238(1), pp. 1-5.

Tao, P., Marshall, C. and Bucknill, A. (2015) Surgical site infection in orthopaedic surgery: an audit of peri-operative practice at a tertiary centre. Healthcare Infection, 20(2), pp. 39-45.

Uckay, I., Hoffmeyer, P., Lew, D. and Pittet, D. (2013) Prevention of surgical site infections in orthopaedic surgery and bone trauma: state-of-the-art update. Journal of Hospital Infection, 84(1), pp. 5-12.

van Rijen, M., Bonten, M., Wenzel, R. and Kluytmans, J. (2008) Mupirocin ointment for preventing Staphylococcus aureus infections in nasal carriers. Cochrane Database of Systematic Reviews, (4), CD006216.

Widmer, A. F., Rotter, M., Voss, A., Nthumba, P., Allegranzi, B., Boyce, J. and Pittet, D. (2010) Surgical hand preparation: state-of-the-art. J Hosp Infect, 74(2), pp. 112-22.

Impact of Drug Abuse on Health of Teenagers Aged 13-19

This work was produced by one of our professional writers as a learning aid to help you with your studies

CHAPTER THREE – LITERATURE REVIEW
3.0 Introduction

This chapter provides a critical literature review of a small number of sources considered particularly useful in exploring the two key themes of this dissertation. The first theme is the impact of drug abuse on the health of teenagers aged 13-19 in London, while the second is the impact of governmental strategies in tackling drug abuse amongst teenagers aged 13-19 in London. These themes are discussed using the selected resources, and the quality, methodological approach, relevance and ethical and anti-oppressive practices of each source form part of the critical review. The chapter finishes with a short summary bringing these key ideas together.

3.1 The Impact of Drug Abuse on the Health of Teenagers Aged 13-19 in London

The first theme investigates the impact of drug abuse on specific aspects of the health of teenagers in London. Two key sources form the core of the critical review for this theme. Neither relates solely to the target population, however, and in each case some extrapolation of the findings is made in order to describe the likely characteristics of 13-19 year olds in London.

The first source is the case-control study carried out by Di Forti et al (2015:1), briefly discussed in Chapter Two above. Looking more closely at this study, and reviewing it critically, it remains a useful article, as it focuses on the mental health impacts of cannabis and shows a clear association between use of the drug in its high-potency form (skunk) and psychosis. The study might not at first appear relevant given that it started in 2005; however, it continued recruiting for over six years and amassed a wealth of data on individuals abusing drugs – specifically high-potency and easily available cannabis.

The research study used a primary research methodology. For the recruitment of cases, the authors approached all patients (aged 18-65 years) with first-episode psychosis presenting at the inpatient units of the South London and Maudsley Hospital. They invited people to participate only if they met the International Classification of Diseases 10 criteria for a diagnosis of non-affective (F20-F29) or affective (F30-F33) psychosis, which they validated by administering the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) (Di Forti et al, 2015:2). For the controls, the authors used internet and newspaper adverts and also distributed leaflets on public transport and in shops and job centres. The controls were given the Psychosis Screening Questionnaire and were excluded if they met the criteria for a psychotic disorder. While the two groups included only the oldest two years of this dissertation’s target population, i.e. 18 and 19 year olds, the study was located in London, and on analysis it appeared to indicate a number of characteristics that would also be informative for younger teenagers.

All participants (cases and controls) gave written informed consent under the ethical approval obtained from the Institute of Psychiatry Local Research Ethics Committee. There did not appear to be any unethical practices, but the study had the potential to be oppressive: because the patients presenting at the clinics, and those with access to skunk, were more likely to be of certain ethnic groups – especially of black West Indian origin – it could be argued that the study to some extent misrepresented the populations of south west London, and more specifically the West Indian communities found there. In other words, the inclusion of participants from these origins might give observers an unjust view of that ethnic group, or of the population of that area of London as a whole.

The method used with the participants was quantitative and involved questionnaire assessments, specifically the collection of socioeconomic data and the Cannabis Experience Questionnaire modified version (CEQmv), which included data on history of tobacco and alcohol use, any other use of recreational drugs, and detailed information on cannabis use (age at first use, duration of use, frequency of use and type of cannabis used) (Di Forti et al, 2015:2). Between 2005 and 2011 the researchers approached 606 patients, of whom 145 (24%) declined to participate; 461 patients with first-episode psychosis were therefore recruited. Using a range of statistical tests, and adjusting for a number of variables including the frequency of cannabis use and the type of cannabis used, the authors found that controls were more likely to be occasional users of hash, whilst cases were more likely to be frequent users of skunk. They also found, using logistic regression, that those who had started using cannabis at a younger age had a greater risk of developing psychotic episodes (Di Forti et al, 2015:5).
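
To illustrate the kind of analysis described above, a minimal sketch of a covariate-adjusted logistic regression is given below. The sketch is purely illustrative: the variable names and the simulated data are hypothetical and are not drawn from the Di Forti et al dataset; it merely shows how an adjusted odds ratio for cannabis type might be estimated in Python using the statsmodels library.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical simulated data -- NOT the Di Forti et al dataset.
rng = np.random.default_rng(0)
n = 400
data = pd.DataFrame({
    "case": rng.integers(0, 2, n),             # 1 = first-episode psychosis, 0 = control
    "skunk_user": rng.integers(0, 2, n),       # 1 = mainly high-potency cannabis (skunk)
    "daily_use": rng.integers(0, 2, n),        # 1 = daily cannabis use
    "age_first_use": rng.integers(12, 25, n),  # age at first cannabis use
})

# Logistic regression of case status on cannabis type, adjusting for
# frequency of use and age at first use (covariates analogous to those
# described in the study).
model = smf.logit("case ~ skunk_user + daily_use + age_first_use", data=data).fit()

# Exponentiating the coefficients gives adjusted odds ratios.
print(np.exp(model.params))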

The second resource to be analysed was the study by McCardle (2004). This was a literature review focusing on the impacts of substance abuse by children and young people. Although it did not use primary research, it provided a useful analysis of a number of other studies. The age of the study might suggest limited relevance to teenagers in 2017, yet its findings relate directly to those of the later Di Forti et al study: McCardle (2004:1) found that cannabis was becoming stronger than it had been in the past, just as Di Forti et al found that skunk use was increasing and that it was of much higher potency than previously. McCardle (2004:2) also found that a range of mental health issues resulted from the use of cannabis, including an increased risk of suicide and increases in aggressive and dissociative behaviours, anxiety, depression and other similar problems. Another useful aspect of this research was that it identified the problems of terminology relating to the gathering and analysis of data: so many different terms are used that it is often difficult to ascertain accurate trends and outcomes (McCardle, 2004:3). While a London-based source, or one that engaged participants of the target age group through a primary method, would have been preferable, the lack of academic literature meant that this study was valuable in that it analysed other studies as well as existing datasets from the UK government. The article also focused on the social impacts of cannabis, for example the developmental impacts and the negative effects on education, both of which could lead to poor outcomes in terms of quality of life and attainment in later life.

The findings from these two articles provided valid evidence of the relationship between the use of cannabis and mental, emotional, social and physical health of teenagers and young people. Although there was limited focus on the population age target group for the dissertation specifically, both articles provided relevant points of interest, and it is possible to extrapolate from them to state that teenagers in London engaged in cannabis abuse are very likely to be at risk of experiencing the various health effects identified above.

3.2 The Impact of Government Strategies in Tackling Drug Abuse Amongst Teenagers Aged 13-19 in London

Finding academic research sources focusing on recent government strategies aimed at the target group in London was very challenging. For the most recent strategy, the Troubled Families Programme (TFP), Lambert and Crossley (2017:1) get to the very heart of the ethical and oppressive practices issue: they argue that this strategy is one of a wider spectrum of policies that locates problems within the family itself and emphasises behaviour as the target for action, irrespective of the socio-economic influences that exist. Theirs is a review study, critically reviewing a strategy, and it is very current, as the TFP has recently been revisited by the Government, which is considering an extension despite evidence that the programme has not met its targets or expected outcomes. While the article is not itself based on primary data, the authors have gathered primary data on this issue through interviews in the very recent past, and the article refers to these. They found that the TFP has perpetuated a view of target families as an ‘underclass’, as ‘neighbours from hell’ and as expensive and very difficult to ‘treat’. While the TFP took a holistic approach, using one individual or team to work with families on all of their problems, Lambert and Crossley (2017:4) and others (Bonell et al, 2016) argue that the underlying attitude of the Government and of the strategy meant that its approach was unlikely to succeed.

3.3 Summary

This chapter showed that the use of cannabis is clearly associated with health impacts; some of these impacts are severe and often include mental illness and behavioural change, especially where high-potency cannabis is used. It also showed that, despite many years of government strategies and policies, there still does not appear to be a solution that can reduce the use or impacts of cannabis and other drugs. The final chapter provides a reflection on the research undertaken for this dissertation, along with some brief conclusions and recommendations.

CHAPTER FOUR – REFLECTIONS, CONCLUSIONS AND RECOMMENDATIONS
4.0 Introduction

In this final chapter, three tasks are completed. First, a reflective account of the research is presented. In research and practice, reflection on a task and its outcome is very important because it provides the author with the opportunity to look back and learn from their actions. There are in fact two types of reflection, both of which might be applicable to this work. The first is ‘reflection’ itself, considered to be a ‘process or activity’ that involves thinking and includes the cognitive processes of problem finding and problem solving (Leitch and Day, 2000:180). The second is ‘reflective practice’: the use of reflection and reflective skills to transfer learnt knowledge, i.e. theory, into an individual’s everyday practice. Reflective practice has been shown to be very important for individual practitioners, as it aids their ability to learn from their actions and the associated outcomes, and enables them to develop improvements based on experience and theoretical knowledge (White et al, 2016:9).

There are two main models of reflection that can be used to support the reflective researcher or the reflective practitioner. These are Kolb’s model of experiential learning (Kolb, 1984) and Gibbs’ reflective cycle (Gibbs, 1988). Gibbs developed his model as a refinement of the earlier Kolb model, and it is Gibbs’ model that is used in this dissertation.

Figure 1: Gibbs’ Model of Reflection (Park and Son, 2011:2)

The Gibbs Model provides a researcher with the opportunity to gain a deep understanding of what they have learned (Park and Kastanis, 2009:11), together with the strengths and weaknesses of their work, their underlying values, any insufficiencies in their approach, and areas for improvement (Park and Son, 2011:3). For these reasons the Gibbs Model is applied below.

4.1 Reflection on the Process of the Research

4.1.1 The Experience

The process of writing the dissertation was both challenging and enjoyable. It was enjoyable because any research activity is one of problem solving and of searching for information, and these two activities can be very satisfying when they result in finding out something new. While primary research is often seen as the most valid form of research activity, secondary research, based as it is on gathering existing data and synthesising it to suggest new outcomes or findings, can in fact be just as valid, and just as difficult, as collecting new or primary data.

4.1.2 The Challenges and the Achievements

As alluded to a number of times throughout this dissertation, there were several difficulties and challenges. The choice of topic was, in retrospect, a good one because it focused on a population group in a particular location, London, that had clearly received little research attention previously. While substantial data has been gathered on drug use and abuse in the UK more generally, and across wider age ranges, very little has been done in relation to the 13-19 year old age group. In fact, it was this aspect that caused the greatest difficulty in completing the dissertation. The resources and data relevant to this age group in London, for any kind of drug abuse, consisted largely of newspaper articles that often framed drug abuse in relation to crime, ethnic minorities or deprivation, and so the data that was available had to be used carefully. For example, it was possible to obtain academic resources such as that of Di Forti et al, which looked at drug abuse, specifically cannabis, in London, but only two years of respondents in that study (18 and 19 year olds) fell within the scope of this dissertation. The study by McCardle (2004) was relevant to a wider age group (15-24) but was not based in London, so it could point to some useful outcomes but lacked specific locational knowledge. In relation to the strategies developed to address the issue, academic resources were again very limited. This was made even more challenging because the most recent strategies, i.e. those of the past five years, have yet to undergo much academic analysis, yet as they represent a very different approach from those used a decade or so ago, there is little point in trying to evaluate the older approaches.

Despite the difficulties outlined above, a number of positives were obtained from the research. As there was such a dearth of available resources, this dissertation appears to provide new research and new analysis of data for this population group in this location. As a result, the author felt that the choice of topic and the research approach were justified to some extent. In terms of time management, the research was planned well, and even though the search for data and resources took longer than expected, it was still possible to accommodate this within the overall research schedule. The research also challenged the beliefs and judgements held by the author at the start of the process. While the author held some degree of knowledge about these issues, there were also preconceptions about the type of teenagers who participate in drug abuse. Gathering the data enabled the author to begin to challenge those preconceptions, especially in relation to the factors that cause people of this age to start abusing drugs, and this new understanding allowed the author to start to view the issues differently.

4.1.3 Changes Required

There are a number of changes that could be implemented to make the research easier and to address the question of limited resources. First, the age range would be extended to cover children from the age of 0 to 24 or 25 years, as this would enable a greater number of data sources to be used; these could be more easily analysed, with extrapolation made for the teenage years. Second, the impact of parental drug abuse on the health of children would be included, as this issue consistently emerged as a key problem for children and teenagers throughout the data collection, and it can be a major factor in determining whether teenagers participate in drug use and abuse. Finally, although London would remain the locational focus, because much of the available data is collected for London and the South-East together, the locational boundaries would be stretched to incorporate this wider area within the research. If these changes were put into place, it would be a positive exercise to undertake the research process again to see whether it was possible to obtain data and achieve findings even more valuable than those already developed.

4.1.4 Applying Gibbs’ Model of Reflection

Figure 2: Gibbs’ Reflective Model Applied to This Research

Having applied Gibbs’ model of reflection, it is clear that reflection carried out in stages can lead to a targeted plan of action, which can form the framework for new research. Gibbs’ model does not necessarily allow for complexity, however: as a linear-cyclical model, it cannot, used in this way, represent the many complexities and variables that characterise the issue of drug abuse amongst teenagers.

4.2 Conclusions

The research question that this dissertation set out to examine was:

What patterns of drug abuse occur amongst teenagers in London, and what are the causes, health impacts and possible solutions?

Despite the difficulties in obtaining specific data for teenagers aged 13-19 in London, there was sufficient information available to provide an answer to this research question. From the prevalence perspective, the data showed that while the prevalence of drug abuse was decreasing overall, there were areas of London with disproportionately higher levels, especially amongst specific ethnic groups. Amongst all drug abusers, cannabis was the most used drug. The causes of drug abuse amongst teenagers were found to be a complex mixture of environmental, emotional, mental health and peer pressure related factors, meaning that addressing the problem will always be challenging for policy makers and healthcare providers.

In relation to the health impacts, the previous chapter revealed clear evidence that cannabis use is associated with mental health outcomes, including psychosis and the development of schizophrenia, for drug abusers of any age. It is also quite apparent that teenagers engaging in drug abuse are much more likely to experience other health-related problems because of their attitude to risk and their participation in high-risk behaviours while under the influence of the drug. These other problems include contracting sexually transmitted infections, teenage pregnancy, taking other drugs and substances with more severe health impacts, and participating in criminal activities, which can lead to violence, in an attempt to obtain money to buy drugs.

Looking at the strategy most recently developed to address the problem of teenage drug use in London, it is apparent that it has not succeeded in its aims, objectives or targets. This seems to be largely the result of the oppressive character of the strategies adopted by UK Governments over recent years: an attitude that views those with drug abuse and other problems as ‘problem families’ that need to be ‘solved’, instead of trying to really understand what it is about society in general that leads to such families existing in the first place. A focus on social, economic and environmental issues, rather than on the families themselves, might result in a better outcome.

4.3 Recommendations

Having carried out a review of the literature surrounding this issue, some key recommendations can immediately be made. The first relates to the data available on this issue: as indicated previously, one of the challenges in completing this dissertation was the paucity of data relating to the specific population being studied. It is therefore recommended that research studies, and government agencies collecting data, should target this age group specifically when data is collected about drug use or abuse. An alternative is for researchers to obtain raw data from the various data collection agencies and sources, extract the data that crosses the boundaries of the targeted population group, and reprocess it for the target age group. The second recommendation relates not to the data but to the issues. Controlling the availability of drugs appears difficult, especially as there are so many types and some, like cannabis, appear to be readily available. As there seems to be an ongoing reduction in the number of young people using these illegal drugs, it would seem sensible to capitalise on this trend by providing better educational initiatives to inform people of the dangers to their health. It would also be appropriate to try to determine which factors are most likely to cause teenagers to start abusing drugs, and to find ways of addressing these factors more effectively than has been the case to date.

References
Bonell, C., McKee, M., and Fletcher, A. (2016). Troubled Families, Troubled Policy making. BMJ, 355, doi: https://doi.org/10.1136/bmj.i5879.
Di Forti, M., Marconi, A., Carra, E., Fraietta, S., Trotta, A., Bonomo, M., Bianconi, F., Gardner-Sood, P., O’Connor, J., Russo, M., Stilo, S.A., Marques, T.R., Mondelli, V., Dazzan, P., Pariante, C., David, A.S., Gaughran, F., Atakan, Z., Iyegbe, C., Powell, J., Morgan, C., Lynskey, M., and Murray, R.M. (2015). Proportion of patients in south London with first-episode psychosis attributable to use of high potency cannabis: a case-control study. Lancet Psychiatry, http://dx.doi.org/10.1016/S2215-0366(14)00117-5
Gibbs, G. (1988). Learning by doing: A guide to teaching and learning. London: FEU.
Kolb, D. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs NJ: Prentice-Hall
Lambert, M., and Crossley, S. (2017). ‘Getting with the (Troubled Families) Programme’: A Review. Social Policy and Society, 16(1), pp. 87 – 97.
Leitch, R., and Day, C. (2000). Action Research and Reflective Practice: Towards a Holistic View. Educational Action Research, 8(1), pp. 179 – 193.
McCardle, P. (2004). Substance Abuse by Children and Young People. Archives of Disease in Childhood, 89(8), p. 701.
Park, J.Y., and Kastanis, L.S. (2009). Reflective learning through social network sites in design education. International Journal of Learning, 16(8), pp. 11 – 22.
Park, J.Y., and Son, J.B. (2011). Expression and Connection: The Integration of the Reflective Learning Process and the Public Writing Process into Social Network Sites. Journal of Online Learning and Teaching, 7(1), pp. 1 – 6.
White, P., Laxton, J., and Brooke, R. (2016). Reflection: Importance, Theory and Practice. Leeds: University of Leeds.