The Role of Epidemiology in Infection Control

The role of epidemiology in infection control and the use of immunisation programmes in preventing epidemics

The discipline of epidemiology is broadly defined as “the study of how disease is distributed in populations and the factors that influence or determine this distribution” (Gordis, 2009: 3). Among a range of core epidemiologic functions recognised (CDC, 2012), monitoring and surveillance as well as outbreak investigation are most immediately relevant to identifying and stopping the spread of infectious disease in a population.

Most countries perform routine monitoring and surveillance on a range of infectious diseases of concern to their respective jurisdictions. This allows health authorities to establish a baseline of disease occurrence. Based on these data, it is possible to discern sudden spikes or divergent trends and patterns in infectious disease incidence. In addition to cause of death, which is routinely collected in most countries, many health authorities also maintain a list of notifiable diseases. In the UK, the list of reportable diseases and pathogenic agents maintained by Public Health England includes infectious diseases such as Tuberculosis and Viral Haemorrhagic Fevers, strains of influenza, vaccine-preventable diseases such as Whooping Cough or Measles, and food-borne infectious diseases such as gastroenteritis caused by Salmonella or Listeria (Public Health England, 2010). At the international level, the World Health Organization requires its members to report any “event that may constitute a public health emergency of international concern” (International Health Regulations, 2005). Cases of Smallpox, Poliomyelitis, Severe Acute Respiratory Syndrome (SARS), and new influenza strains are always notifiable (WHO, undated). These international notification duties allow for the identification of trans-national patterns by collating data from national surveillance systems. Ideally, the system would enable authorities to anticipate and disrupt further cross-national spread by alerting countries to the need for tightened control at international borders or even by instituting more severe measures such as bans on air travel to and from affected countries.
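
As a minimal sketch of how a surveillance baseline can be used to flag unusual incidence, the snippet below compares the current weekly notification count with the historical mean plus two standard deviations. This is an illustrative assumption only, not the actual algorithm used by Public Health England or the WHO, and the case counts are invented.

```python
# Minimal illustrative sketch: flag a week whose notification count sits well
# above the historical baseline (mean + 2 standard deviations). Real surveillance
# systems use more sophisticated, seasonally adjusted methods.
from statistics import mean, stdev

def exceeds_baseline(historical_counts, current_count, threshold_sd=2.0):
    """Return True if the current weekly count is unusually high versus history."""
    baseline = mean(historical_counts)
    spread = stdev(historical_counts)
    return current_count > baseline + threshold_sd * spread

# Hypothetical weekly case counts for a notifiable disease
past_weeks = [12, 9, 14, 11, 10, 13, 12, 8]
print(exceeds_baseline(past_weeks, 25))  # True: a spike worth investigating
```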

As explained in the previous paragraph, data collected routinely over a period of time allows authorities to respond to increases in the incidence of a particular disease by taking measures to contain its spread. This may include an investigation into the origin of the outbreak, for instance the nature of the infectious agent or the vehicle. In other cases, the mode of transmission may need to be clarified. These tasks are part of the outbreak investigation. Several steps can be distinguished in the wake of a concerning notification or the determination of an unusual pattern. These include the use of descriptive epidemiology and analytical epidemiology, the subsequent implementation of control measures, as well as reporting to share experiences and new insights. (Reintjes and Zanuzdana, 2010)

In the case of an unusual disease, such as an isolated case appearing in Western Europe as a result of the recent Ebola outbreak in West Africa, it might not be necessary to engage in further epidemiological analysis once the diagnosis has been confirmed. Instead, control measures would be implemented immediately and might include ensuring best-practice isolation of the patient and contact tracing to ensure that the infection does not spread further among a fully susceptible local population. Similarly, highly pathogenic diseases such as meningitis that tend to occur in clusters might prompt health authorities to close schools to disrupt the spread. In other types of outbreak investigations, identifying the exact disease or exact strain of an infectious agent is the primary epidemiologic task. This might, for instance, be the case if clusters of relatively non-specific symptoms occur and need to be confirmed as linked to one another and identified as either a known disease/infectious agent or be described and named. In the same vein, in food-borne infectious diseases, the infectious organism and vehicle of infection may have to be pinpointed by retrospectively tracing food intake, creating comparative tables, and calculating measures of association between possible exposures and the outcome (CDC, 2012). Only then can targeted control measures such as pulling product lots from supermarket shelves and issuing a public warning be initiated.
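
To illustrate the kind of comparative-table calculation mentioned above, the sketch below computes attack rates and a relative risk for a hypothetical food exposure. The food item and the counts are invented for illustration, and real investigations apply more rigorous analytic methods (CDC, 2012).

```python
# Hypothetical cohort-style outbreak data: illness among diners who did and did
# not eat a suspect food item. All figures are invented for illustration.

def attack_rate(ill, total):
    """Proportion of people in a group who became ill."""
    return ill / total

def relative_risk(ill_exposed, total_exposed, ill_unexposed, total_unexposed):
    """Ratio of attack rates in the exposed versus unexposed group."""
    return attack_rate(ill_exposed, total_exposed) / attack_rate(ill_unexposed, total_unexposed)

# 40 of 50 people who ate the suspect dish fell ill, versus 10 of 60 who did not.
rr = relative_risk(40, 50, 10, 60)
print(f"Attack rate (exposed): {attack_rate(40, 50):.0%}")
print(f"Attack rate (unexposed): {attack_rate(10, 60):.0%}")
print(f"Relative risk: {rr:.1f}")  # ~4.8, pointing to the dish as the likely vehicle
```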

Beyond identifying and controlling infectious disease outbreaks, monitoring and surveillance also play a role in ensuring that primary prevention works as effectively as possible: collecting information on behavioural risk factors, as in the case of sexually transmitted diseases, can help identify the groups that are most at risk and where Public Health interventions may yield the highest benefit. In another example, monitoring immunisation coverage and analysing the effectiveness of vaccines over the life course may predict epidemics in the making if coverage is found to be decreasing or immunity appears to decline in older populations. In addition, the ability to anticipate the potential spread of disease with a reasonable degree of confidence hinges not only on good data collection: advanced epidemiological methods such as mathematical modelling are equally instrumental in predicting possible outbreak patterns. Flu vaccines, for instance, need to be formulated long before the onset of the annual flu season, and the particular strains against which they are to provide immunity can only be determined from past epidemiological data and modelling (M’ikanatha et al., 2013). Mathematical models have also played a role in determining the most effective vaccine strategies, including target coverage and ideal ages and target groups, to eliminate the risk of epidemic outbreaks of infectious diseases (Gordis, 2009).
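
To give a flavour of the mathematical modelling referred to here, the sketch below runs a deliberately simple discrete-time SIR (susceptible-infectious-recovered) model. The population size and parameters are invented for illustration and are not fitted to any real influenza season or to the models cited above.

```python
# A minimal discrete-time SIR model; all parameters are illustrative only.

def sir_epidemic(population, initial_infected, beta, gamma, days):
    """Return the daily number of infectious individuals.
    beta: transmission rate per day, gamma: recovery rate per day."""
    s = population - initial_infected
    i = float(initial_infected)
    r = 0.0
    curve = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        curve.append(i)
    return curve

# Hypothetical scenario: R0 = beta/gamma = 2 in a population of one million.
curve = sir_epidemic(population=1_000_000, initial_infected=10,
                     beta=0.4, gamma=0.2, days=300)
peak_day = curve.index(max(curve)) + 1
print(f"Modelled epidemic peak: about {max(curve):,.0f} infectious people on day {peak_day}")
```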

In addition to controlling outbreaks at the source and ensuring that key protective strategies such as mass immunisation are effectively carried out, epidemiology is also a tool that allows comprehensive planning for potential epidemics. A scenario described in a research article by Ferguson and colleagues (2006) has as its premise a novel, and therefore not immediately vaccine-preventable, strain of influenza that has defied initial attempts at control and reached pandemic proportions. The large-scale simulation of the theoretical epidemic assesses the potential of several intervention strategies to mitigate morbidity and mortality: international border and travel restrictions, a measure often demanded as a knee-jerk reaction by policy-makers and citizens, are found to have minimal impact, at best delaying spread by a few weeks even if generally adhered to (Ferguson et al., 2006). By contrast, interventions such as household quarantines or school closures that are aimed at interrupting contact between cases, potential carriers, and susceptible individuals are much more effective (Ferguson et al., 2006). Time-sensitive antiviral treatment and post-exposure prophylaxis using the same drugs are additional promising strategies identified (Ferguson et al., 2006). The latter two potential interventions highlight the role of epidemiological risk assessment in translating the anticipated spread of infectious disease into concrete emergency preparedness. For instance, both mass treatment and mass post-exposure prophylaxis require advance stockpiling of antivirals. During the last H1N1 epidemic, public and political concern emerged over shortages of the antiviral drug oseltamivir (brand name Tamiflu) (De Clercq, 2006). However, advance stockpiling requires political support and significant resources at a time when governments are trying to rein in health spending and the threat is not immediate. Thus, epidemiologists also need to embrace the role of advocates and advisers who communicate scientific findings and evidence-based projections to decision-makers.

That being said, immunisation remains the most effective primary preventive strategy for the prevention and control of epidemics. As one of the most significant factors in the massive decline of morbidity and mortality from infectious disease in the Western world over the last century, vaccination accounts for an almost 100% reduction in morbidity from nine vaccine-preventable diseases such as Polio, Diphtheria, and Measles in the United States between 1900 and 1990 (CDC, 1999). Immunisation programmes are designed to reduce the incidence of particular infectious diseases by decreasing the number of susceptible individuals in a population. This is achieved by administering vaccines which stimulate the body’s immune response. The production of specific antibodies allows the thus primed adaptive immune system to eliminate the full-strength pathogen when an individual is subsequently exposed to it. The degree of coverage necessary to achieve so-called herd immunity (the collective protection of a population even if not every single individual is immune) depends on the infectivity and pathogenicity of the respective infectious agent (Nelson, 2014). Infectivity, in communicable diseases, measures the percentage of all exposed individuals who become infected, whereas pathogenicity is the percentage of infected individuals who progress to clinical disease (Nelson, 2014). Sub-clinical or inapparent infections are important to take into account because, even though they show no signs and symptoms of disease, people may still be carriers capable of infecting others. Polio is an example of an infectious disease where most infections are inapparent, but individuals remain infectious (Nelson, 2014).
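
As a brief worked illustration of these two measures, the numbers below are invented rather than taken from Nelson (2014):

```python
# Hypothetical figures for illustration only.
exposed = 200            # people exposed to the infectious agent
infected = 120           # of whom become infected (including sub-clinical infections)
clinical_cases = 30      # of the infected, those who develop clinical disease

infectivity = infected / exposed           # proportion of exposed who become infected
pathogenicity = clinical_cases / infected  # proportion of infected who develop disease

print(f"Infectivity: {infectivity:.0%}, pathogenicity: {pathogenicity:.0%}")  # 60%, 25%
```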

Gauging infectivity is crucial to estimating the level of coverage needed to reach community immunity. The so-called basic reproduction number is a numerical measure of the average number of secondary infections attributable to one single source of disease, e.g. one infected individual. It is calculated by taking into account the average number of contacts a case makes, the likelihood of transmission at each contact, and the duration of infectiousness (Kretzschmar and Wallinga, 2010). The higher the reproduction number, i.e. the theoretical number of secondary cases, the higher the percentage of the population that needs to be immunised in order to prevent or interrupt an outbreak of epidemic proportions. For instance, smallpox, which was successfully eradicated in 1980 (World Health Organization, 2010), is estimated to have a basic reproduction number of around 5, requiring coverage of only 80% of the population to achieve herd immunity. By contrast, the estimated reproduction number for Measles is around 20, and it is believed that immunisation coverage has to reach at least 96% for population immunity to be ensured (Kretzschmar and Wallinga, 2010). Once the herd immunity threshold is reached, the remaining susceptible individuals are indirectly protected by the immunised majority around them: in theory, no pathogen should be able to reach them because nobody else is infected or an asymptomatic carrier. Even in the unlikely event of an infection among the unvaccinated, the chain of transmission should be immediately interrupted thanks to the immunised status of all potential secondary cases. Vaccinating primary contacts of isolated cases is also an important containment strategy where a cluster of non-immune individuals has been exposed to an infected individual. Such scenarios may apply, for example, where groups of vaccine objectors or marginalised groups not reached by the regular immunisation drive are affected, or where an imported disease meets a generally susceptible population.
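
The relationship between the basic reproduction number (R0) and the coverage needed for herd immunity can be made explicit with the commonly quoted threshold formula 1 - 1/R0. The sketch below is purely illustrative: the threshold formula is a simplification of the models cited above, and the component values in the final example are hypothetical.

```python
# Illustrative only: 1 - 1/R0 is a simplified herd-immunity threshold, and the
# R0 values below are the rough approximations quoted in the text.

def basic_reproduction_number(contacts_per_day, p_transmission, infectious_days):
    """R0 as the product of contact rate, per-contact transmission probability
    and duration of infectiousness (cf. Kretzschmar and Wallinga, 2010)."""
    return contacts_per_day * p_transmission * infectious_days

def herd_immunity_threshold(r0):
    """Minimum proportion of the population that must be immune to stop sustained spread."""
    return 1 - 1 / r0

for disease, r0 in [("Smallpox", 5), ("Measles", 20)]:
    print(f"{disease}: R0 ~ {r0}, coverage needed ~ {herd_immunity_threshold(r0):.0%}")
# Smallpox: ~80%; Measles: ~95%, broadly matching the coverage figures cited above.

# Component-based example: 10 contacts/day, 5% transmission probability per
# contact and 4 infectious days give R0 = 2 (hypothetical numbers).
print(basic_reproduction_number(10, 0.05, 4))
```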

However, epidemic prevention does not stop with having reached vaccination targets. Instead, constant monitoring of current coverage is required, and adaptations of the immunisation strategy may be needed to ensure that epidemics are reliably prevented. Recent trends underscore the enduring challenge of permanently keeping at bay even diseases that are officially considered eliminated or close to elimination: in the United Kingdom, a marked spike in the number of confirmed Measles cases has been observed in the last decade, with an increase from under 200 cases in 2001 to just over 2,000 cases in 2012 (Oxford Vaccine Group, undated). The underlying cause is evident from a comparison of case numbers with data from vaccine coverage monitoring: the number of children receiving the combined Measles vaccine (MMR) decreased in the 2000s roughly in parallel with the increase in Measles incidence (Oxford Vaccine Group, undated). Other countries have seen similar trends and have responded with measures intended to increase vaccine uptake: in Australia, for instance, the government recently decided to enact measures that would withhold child benefit payments from parents who refuse to have their children vaccinated (Lusted and Greene, 2015).

In conclusion, epidemiology, and in particular routine monitoring and surveillance, is a potent tool that enables health authorities to anticipate, detect, and contain the spread of infectious disease. Over the last century, immunisation has proven itself as one of the key interventions to curb infectious disease morbidity and mortality. However, with vaccine-preventable diseases again on the rise in the UK and other industrialised countries, epidemiologic monitoring of vaccine coverage and disease incidence remains critically important. Where vaccines are not available or vaccine-induced immunity is short-lived, an effective system to detect cases and contain outbreaks is even more instrumental to the effort of preventing infectious disease epidemics.

Bibliography

Centers for Disease Control and Prevention (CDC) (2012) Principles of Epidemiology in Public Health Practice, 2nd edition, Atlanta, GA: US Department of Health and Human Services.

Centers for Disease Control and Prevention (CDC) (1999) ‘Achievements in Public Health, 1900-1999: Impact of Vaccines Universally Recommended for Children — United States, 1990-1998’, MMWR, vol. 48, no. 12, pp. 243-248.

De Clercq, E. (2006) ‘Antiviral agents active against influenza A viruses’, Nature Reviews Drug Discovery, vol. 5, no. 12, pp. 1015-1025.

Ferguson, N. et al. (2006) ‘Strategies for mitigating an influenza pandemic’, Nature, vol. 442, July, pp. 448-452.

Gordis, L. (2009) Epidemiology, 4th edition, Philadelphia, PA: Saunders Elsevier.

Kretzschmar, M. and Wallinga, J. (2010) ‘Mathematical Models in Infectious Disease Epidemiology’, in: Kramer, A. et al. (ed.) Modern Infectious Disease Epidemiology, New York, NY: Springer Science + Business Media.

Lusted, P. and Greene, A. (2015) Childcare rebates could be denied to anti-vaccination parents under new Federal Government laws. ABC News [Online], Available: http://www.abc.net.au/news/2015-04-12/parents-who-refuse-to-vaccinate-to-miss-out-on-childcare-rebates/6386448 [12 Feb 2015].

M’ikanatha, N. et al. (2013) ‘Infectious disease surveillance: a cornerstone for prevention and control’, in: M’ikanatha, N. et al. (ed.) Infectious Disease Surveillance, 2nd edition, West Sussex, UK: John Wiley & Sons.

Nelson, K. (2014) ‘Epidemiology of Infectious Disease: General Principles’, in: Nelson, K., Williams, C. and Graham, N. (ed.) Infectious disease epidemiology: theory and practice, 3rd edition, Burlington, MA: Jones & Bartlett Learning.

Oxford Vaccine Group (undated) Measles [Online], Available: http://www.ovg.ox.ac.uk/measles [12 Feb 2015].

Public Health England (first published 2010) Notifications of infectious diseases (NOIDs) and reportable causative organisms: legal duties of laboratories and medical practitioners [Online], Available: https://www.gov.uk/notifiable-diseases-and-causative-organisms-how-to-report [12 Feb 2015].

Reintjes, R. and Zanuzdana, A. (2010) ‘Outbreak Investigations’, in: Kramer, A. et al. (ed.) Modern Infectious Disease Epidemiology, New York, NY: Springer Science + Business Media.

World Health Organization (WHO) (2005). ‘Notification and other reporting requirements under the IHR’, IHR Brief, No. 2 [Online], Available: http://www.who.int/ihr/ihr_brief_no_2_en.pdf [12 Feb 2015].

World Health Organization (WHO) (2010) Statue Commemorates Smallpox Eradication [Online], Available: http://www.who.int/mediacentre/news/notes/2010/smallpox_20100517/en/index.html [12 Feb 2015].

The Epidemiology of Alcohol Abuse and Alcoholism

Introduction

According to the Alcohol Concern Organisation (2015), more than 9 million people in England consume alcoholic beverages in excess of the recommended daily limits. In relation to this, the National Health Service (2015) recommends no more than 3 to 4 units of alcohol a day for men and 2 to 3 units a day for women. The large number of people consuming alcohol above the recommended limits highlights the reality that alcoholism is a major health concern in the UK which can lead to a multitude of serious health problems. Moss (2013) states that alcoholism and chronic use of alcohol are linked to various medical, psychiatric, social and family problems. To add to this, the Health and Social Care Information Centre (2014) reported that between 2012 and 2013 there were 1,008,850 admissions related to alcohol consumption, in which an alcohol-related disease, injury or condition was the primary reason for hospital admission or a secondary diagnosis. This shows the detrimental impact of alcoholism on the health and overall wellbeing of millions of people in the UK. It is therefore vital to examine the aetiology of alcoholism in order to understand why so many people end up consuming excessive alcohol. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) (n.d.) supports this by stating that learning the natural history of a disorder provides information essential for assessment and intervention and for the development of effective preventive measures. This essay will also look into the different public health policies that address the problem of alcoholism in the UK. A brief description of what alcoholism is will first be provided.
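
For context, UK alcohol units are calculated as the drink’s strength (ABV %) multiplied by its volume in millilitres and divided by 1,000. The short example below applies this standard formula; the specific drink sizes chosen are illustrative only.

```python
# UK alcohol units: strength (% ABV) x volume (ml) / 1000. Drink examples are illustrative.

def alcohol_units(abv_percent, volume_ml):
    return abv_percent * volume_ml / 1000

pint_of_beer = alcohol_units(5.0, 568)          # a pint of 5% lager -> ~2.8 units
large_glass_of_wine = alcohol_units(13.0, 250)  # a 250 ml glass of 13% wine -> ~3.3 units

print(f"Pint of 5% beer: {pint_of_beer:.1f} units")
print(f"250 ml glass of 13% wine: {large_glass_of_wine:.1f} units")
# A single large glass of wine already exceeds the 2-3 unit daily guideline for women quoted above.
```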

What is Alcoholism?

It is safe to say that alcoholism is a lay term that simply means excessive intake of alcohol. It can be divided into two forms, namely alcohol misuse (or abuse) and alcohol dependence. Alcohol misuse simply means drinking in excess of the recommended limits (National Health Service Choices 2013). A good example of this is binge drinking.

Alcohol dependence is more serious because, according to the National Institute for Health and Care Excellence (2011, n.p.), it “indicates craving, tolerance, a preoccupation with alcohol and continued drinking regardless of harmful consequences” (e.g. liver disease). Under the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), these two have been combined into a single disorder called alcohol use disorder (AUD), with mild, moderate and severe sub-classifications (NIAAA 2015).

Genetic Aetiologic Factor of Alcoholism

Alcoholism is a complex disorder with several factors leading to its development (NIAAA 2005). Genetics and other biological aspects can be considered as one factor involved in the development of alcohol abuse and dependence (NIAAA 2005). Other factors include cognitive, behavioural, temperament, psychological and sociocultural (NIAAA 2005).

According to Goodwin (1985), as far back as the era of Aristotle and the Bible, alcoholism was believed to run in families and thus to be inherited. There is some basis for this ancient belief: alcoholic parents have about a four to five times higher probability of having alcoholic children (Goodwin 1985). Clear and direct research-based evidence for simple inheritance is still lacking, but studies do not deny the role of genetics in alcoholism either. It is therefore safe to argue that genetics is still considered an important aetiologic factor in alcoholism.

The current consensus indicates that there is more to an individual’s predisposition to alcoholism than a single gene or two. Scutti (2014) reports that although scientists have known for some time that genetics plays an active role in alcoholism, they also propose that an individual’s inclination to become dependent on alcohol is more complicated than the simple presence or absence of any one gene. The National Institute on Alcohol Abuse and Alcoholism (2008) states that no single gene fully controls a person’s predisposition to alcoholism; rather, multiple genes play different roles in a person’s susceptibility to becoming an alcoholic. The NIAAA (2005) further notes that the evidence for a genetic factor in alcoholism lies mainly with studies that involve extended pedigrees, those that involve identical and fraternal twins, and those that include adopted individuals raised apart from their alcoholic parents.

For pedigree studies, it is believed that the risk of suffering from alcoholism is increased four- to seven-fold among first-degree relatives of an alcoholic (Cotton 1979; Merikangas 1990, cited in NIAAA 2005). First-degree relatives include parents, siblings and children; a child is therefore four to seven times more likely to become an alcoholic if one or both parents are alcoholics. Moss (2013) supports this by stating that children of alcoholic parents are at higher risk of becoming alcoholics themselves when compared to children of non-alcoholic parents.

A study conducted by McGue, Pickens and Svikis (1992, cited in NIAAA 2005) revealed that identical twins generally have a higher concordance rate of alcoholism than fraternal twins or non-twin siblings. This means that a person whose identical twin is alcoholic has a higher risk of becoming an alcoholic than a person whose alcoholic sibling is a fraternal twin or a non-twin sibling. This further supports the role of genetics in alcoholism because identical twins are genetically the same; hence, any genetic susceptibility carried by one twin is also carried by the other.

The genetic factor in alcoholism is further supported by studies conducted by Cloninger, Bohman and Sigvardsson (1981, cited in NIAAA 2005) and Cadoret, Cain and Grove (1980, cited in NIAAA 2005) involving adopted children, in which the aim was to separate the genetic factor from the environmental factor of alcoholism. In these studies, children of alcoholic parents were adopted and raised away from their alcoholic parents but, despite this, some of these children still developed alcoholism as adults at a higher rate than adopted children who did not have an alcoholic biological parent (Cloninger et al., 1981 and Cadoret et al., 1980, cited in NIAAA 2005).

One interesting aspect of the genetic aetiologic factor is that, although there are genes that increase the risk of alcoholism, there are also genes that protect an individual from becoming an alcoholic (NIAAA 2008). For example, some people of Asian ancestry carry a gene that modifies their rate of alcohol metabolism, causing symptoms such as flushing, nausea and tachycardia, which generally lead them to avoid alcohol; thus, this gene can be said to help protect those who possess it from becoming alcoholic (NIAAA 2008).

Environment as an Aetiologic Factor of Alcoholism

Another clearly identifiable factor is environment, which involves the way an individual is raised and his or her exposure to different kinds of activities and opportunities. The National Institute on Alcohol Abuse and Alcoholism (2005) relates that the genetic factor and the environmental factor have a close relationship in triggering alcoholism in an individual. This can be explained by the simple fact that even if an individual is genetically predisposed to becoming an alcoholic, if he is not exposed to a particular kind of environment which triggers activities that lead to alcohol intake, the likelihood of his becoming an alcoholic will be remote.

There are certain aspects within the environment that make it an important aetiologic factor. According to Alcohol Policy MD (2005), these aspects include acceptance by society, availability, and public policies and their enforcement.

Acceptance in this case refers to the idea that drinking alcohol, even at levels that should be deemed excessive, is somewhat encouraged through mass media, peer attitudes and behaviours, role models, and the overall view of society. Television series, films and music videos glorify drinking sprees and even drunken behaviour (Alcohol Policy MD 2005). TV and film actors and sports figures, peers and local role models also encourage a positive attitude towards alcohol consumption which overshadows the reality of what alcohol drinking can lead to (Alcohol Policy MD 2005). In relation to this, a review of different studies conducted by Grube (2004) revealed that mass media, in the form of television shows for instance, has an immense influence on young people (ages 11 to 18) when it comes to alcohol consumption. In films, portrayals of the negative impact of alcohol drinking are rare and often convey the idea that alcohol drinking has no negative impact on a person’s overall wellbeing (Grube 2004). In support of these findings, a systematic review of longitudinal studies conducted by Anderson et al. (2009) revealed that constant alcohol advertising in mass media can lead adolescents to start drinking or, for those who already drink, to increase their consumption.

Availability of alcoholic drinks is another important environmental aetiologic factor of alcoholism, simply because no matter how predisposed an individual is to becoming an alcoholic, the risk of alcoholism will remain low if alcoholic drinks are not available. On the other hand, if alcoholic beverages are readily available, as they often are today, then the risk of alcoholism is increased not only for those who are genetically predisposed but even for those who do not carry the “alcoholic genes”. The more licensed liquor stores in an area, the more likely people are to drink (Alcohol Policy MD 2005). The cheaper its price, the more affordable it is for people to buy and consume alcohol in excess (Alcohol Policy MD 2005).

Another crucial environmental aetiologic factor is the presence or absence of policies that regulate alcohol consumption, and whether their enforcement is strict or lax. Such policies include restricting alcohol consumption in specified areas, enacting stricter statutes concerning drunk driving, and providing penalties for those who sell alcohol to, buy it for, or serve it to underage individuals (Alcohol Policy MD 2005). It is worth pointing out that in the UK the drinking age is 18, and a person below this age who is seen drinking alcohol in public can be stopped, fined or even arrested by police (Government UK 2015a). It is also against the law to sell alcohol to an individual below 18; however, an individual aged 16 or 17 who is accompanied by an adult can drink (but not buy) beer, wine or cider with a meal in a pub (Government UK 2015a).

Policies to Combat Alcoholism

One public health policy that can help address the problem of alcoholism is the mandatory code of practice for alcohol retailers, which banned irresponsible alcohol promotions and competitions, obliged retailers to provide free drinking water, compelled them to offer smaller measures and required them to operate a proof-of-age protocol. It can be argued that this policy addresses the problem of alcoholism by restricting the acceptance, availability and advertising of alcohol (Royal College of Nursing 2012). Another is the Police Reform and Social Responsibility Act 2011, a statute that enables local authorities to take a tougher stance on establishments which break licensing rules on alcohol sale (Royal College of Nursing 2012).

There is also the policy paper on harmful drinking, which sets out different strategies for addressing the problem of alcoholism. One such strategy is the advancement of the Change4Life campaign, which promotes a healthy lifestyle and therefore emphasises the recommended daily limits of alcohol intake for men and women (Government UK 2015b). Another strategy within this policy is the alcohol risk assessment included as part of the NHS health check for adults aged 40 to 75 (Government UK 2015b). This policy aims to prevent rather than cure alcoholism, which seems logical: after all, prevention is better than cure.

Conclusion

Alcoholism, which includes both alcohol misuse and alcohol dependence, is a serious health problem which affects millions in the UK. Its aetiology is a combination of different factors. One vital factor is genetics, in that some people appear predisposed to becoming alcoholics; for example, an individual is at higher risk of becoming an alcoholic if he or she has a parent who is alcoholic. When coupled with environmental factors, the risk of suffering from alcoholism becomes even greater. Environment refers to the acceptability and availability of alcohol and the presence or absence of policies that regulate alcohol sale and consumption. Vital health policies, such as the harmful drinking policy paper advocated by the government, are important preventive measures for reducing the incidence and prevalence of alcoholism in the UK.

References

Alcohol Concern Organisation (2015). Statistics on alcohol. [online]. Available from: https://www.alcoholconcern.org.uk/help-and-advice/statistics-on-alcohol/ [Accessed on 28 September 2015].

Alcohol Policy MD (2005). The effects of environmental factors on alcohol use and abuse. [online]. Available from: http://www.alcoholpolicymd.com/alcohol_and_health/study_env.htm [Accessed on 28 September 2015].

Anderson, P., de Bruijn, A., Angus, K., Gordon, R. and Hastings, G. (2009). Impact of alcohol advertising and media exposure on adolescent alcohol use: A systematic review of longitudinal studies. Alcohol and Alcoholism. 44(3):229-243.

Goodwin, D. (1985). Alcoholism and genetics: The sins of the fathers. JAMA Psychiatry. 42(2):171-174.

Government UK (2015a). Alcohol and young people. [online]. Available from: https://www.gov.uk/alcohol-young-people-law [Accessed on 28 September 2015].

Government UK (2015b). Policy paper: 2010 to 2015 government policy: Harmful drinking. [online]. Available from: https://www.gov.uk/government/publications/2010-to-2015-government-policy-harmful-drinking/2010-to-2015-government-policy-harmful-drinking [Accessed on 28 September 2015].

Grube, J. (2004). Alcohol in the media: Drinking portrayals, alcohol advertising, and alcohol consumption among youth. [online]. Available from: http://www.ncbi.nlm.nih.gov/books/NBK37586/ [Accessed on 28 September 2015].

Health and Social Care Information Centre (2014). Statistics on alcohol England, 2014. [online]. Available from: http://www.hscic.gov.uk/catalogue/PUB14184/alc-eng-2014-rep.pdf [Accessed on 28 September 2015].

Moss, H.B. (2013). The impact of alcohol on society: A brief overview. Social Work in Public Health. 28(3-4):175-177.

National Health Service (2015). Alcohol units. [online]. Available from: http://www.nhs.uk/Livewell/alcohol/Pages/alcohol-units.aspx [Accessed on 28 September 2015].

National Health Service Choices (2013). Alcohol misuse. [online]. Available from: http://www.nhs.uk/conditions/alcohol-misuse/pages/introduction.aspx [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2015). Alcohol use disorder: A comparison between DSM-IV and DSM-5. [online]. Available from: http://pubs.niaaa.nih.gov/publications/dsmfactsheet/dsmfact.pdf [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2008). Genetics of alcohol use disorder. [online]. Available from: http://www.niaaa.nih.gov/alcohol-health/overview-alcohol-consumption/alcohol-use-disorders/genetics-alcohol-use-disorders [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2005). Module 2: Etiology and natural history of alcoholism. [online]. Available from: http://pubs.niaaa.nih.gov/publications/Social/Module2Etiology&NaturalHistory/Module2.html [Accessed on 28 September 2015].

National Institute for Health and Care Excellence (2011). Alcohol-use disorders: Diagnosis, assessment and management of harmful drinking and alcohol dependence. [online]. Available from: https://www.nice.org.uk/guidance/CG115/chapter/Introduction [Accessed on 28 September 2015].

Royal College of Nursing (2012). Alcohol: policies to reduce alcohol-related harm in England. [online]. Available from: https://www.rcn.org.uk/__data/assets/pdf_file/0005/438368/05.12_Alcohol_Short_Briefing_Feb2012.pdf [Accessed on 28 September 2015].

Scutti, S. (2014). Is alcoholism genetic? Scientists discover link to a network of genes in the brain. [online]. Available from: http://www.medicaldaily.com/alcoholism-genetic-scientists-discover-link-network-genes-brain-312668 [Accessed on 28 September 2015].

Student Diet & Health Concerns

Introduction

The obesity epidemic observed in the UK and other Western nations over the past two decades has increased the focus on the nation's eating habits (James, 2008, p. S120). Obesity, most often caused by prolonged poor diet, is associated with an increased risk of several serious chronic illnesses, including diabetes, hypertension and hyperlipidaemia, and possibly also with an increased risk of mental health issues including depression (Wyatt et al., 2006, p. 166). In an attempt to promote better health in the population and reduce the burden of obesity and related health conditions on the NHS, the recent government white paper Healthy Lives, Healthy People (HM Government, 2010, p. 19) identified improvements in diet and lifestyle as a priority in public health policy.

The design of effective interventions for dietary behaviour change may rely on having a thorough understanding of the factors determining individual behaviour. Although there has been a great deal of research published on eating habits of adults and school children (e.g. Raulio et al., 2010, p. 987) there has been much less investigation of the university student subpopulation, particularly within the UK. This may be important given that the dietary choices of general populations vary markedly across different countries and cultures, including within the student population (Yahia et al., 2008, p. 32; Dodd et al., 2010, p. 73).

This essay presents a discussion of the current research available on the eating habits of UK undergraduate students, including recent work being undertaken at Coventry University (Arnot, 2010, online). The essay then describes a small study conducted to supplement this research, using data collected from six students at a different university, exploring the influences which underpin the decisions made by students relating to their diet. The results of this study are presented and used to derive a set of recommendations for both a localized intervention and a national plan, targeted at university students, to improve dietary behaviour.

Eating Habits of University Students

It is widely accepted that students leaving home to attend university are likely to experience a significant shift in their lifestyle, including their diet, and this is supported by research evidence from the UK and other European countries (Papadaki et al., 2007, p. 169). This may encompass increased alcohol intake, reduced intake of fruit and vegetables, and increased intake of processed or fatty foods, as well as impacting on overall eating patterns (Arnot, 2010, online; Dodd et al., 2010, p. 73; Spanos & Hankey, 2010, p. 102).

Results of a study including 80 undergraduate students from Scotland found that around a quarter of participants never consumed breakfast (Spanos & Hankey, 2010, p. 102). Habitually skipping breakfast has been shown to be associated with increased risk of obesity and overweight amongst adolescents (Croezen et al., 2009, p. 405). The precise reasons for this are not entirely clear, although it could be due to increased snacking on energy-dense, high-fat foods later in the day. This is based on the remainder of the results reported by Spanos and Hankey (2010, p. 102), which showed that three-quarters of students regularly used vending machines, snacking on chocolate bars and crisps; this was also shown to be significantly associated with body mass index (BMI).

Some studies have suggested that there may be different patterns of unhealthy eating amongst male and female groups of students. For example, research conducted by Dr. Ricardo Costa and Dr. Farzad Amirabdollahian at Coventry University found that male students may be at risk of what they term “disordered eating patterns”. The study also suggests that males are at greater risk of not eating five portions of fruit and vegetables per day. This research is based on a substantial sample size, using data derived from in-depth interviews with approximately 130 undergraduates, although there are plans to increase this to nearly 400 participants. The researchers acknowledge that this may represent only the situation at one university, although there are also plans to expand the study across another two universities in the future (Arnot, 2010, online).

However, not all published studies support the existence of gender differences in eating behaviours. For example, research into risk factors for an unhealthy lifestyle reported by Dodd et al. (2010, p. 75) found no gender differences in the rates of eating five portions of fruit or vegetables per day.

Factors in Dietary Change

It is unsurprising that students’ dietary habits change when leaving home to attend university, since it has been identified that life transitions form a major factor in influencing eating habits (Lake et al., 2009, p. 1200). Studies have suggested that the dietary shift is most likely due to young adults leaving the family home and assuming responsibility for meal planning and preparation for the first time. This is supported by observations that university students who remain living at the family home may maintain a relatively healthier lifestyle than those moving out of home (Papadaki et al., 2007, p. 169). Early results from a Coventry University study also support this as a major factor, as it has been identified that cooking skills may be very limited amongst undergraduates, with the exception of mature students (Arnot, 2010, online).
Early results from Coventry University suggest that there is little evidence within their sample of any significant differences in eating habits between students from different social backgrounds (Arnot, 2010, online).

Arnot (2010, online) identifies that any trends in eating habits within the undergraduate population may reflect a phase, which individuals may grow out of naturally. Lake et al. (2009, p. 1200) also suggest that changes in eating habits may simply be due to the life transition associated with the general maturation process, moving from adolescence to adulthood. This would then suggest that eating habit changes may be consistent across all groups of young adults, rather than specific to the undergraduate population. However, the influence of other factors such as stress may make the situation more complex, with university students possibly experiencing higher stress levels and therefore being at increased risk of weight gain associated with diet change (Serlachius et al., 2007, p. 548).

Barriers and Facilitators to Healthy Eating

A systematic review of studies by Shepherd et al. (2005, p. 239) found that the major barriers to healthy eating included access to healthy foods, relative prices and personal preference, for example a liking for fast foods. This study also identified a lack of provision of healthy school meals as a major barrier, reflecting the fact that the review focused on exploring healthy eating in secondary school children aged 11 to 16 years. It is therefore likely that different barriers are most important in the university student population, as this group takes a greater level of responsibility for its own food choices.

For example, evidence from the Coventry University study suggests that while undergraduate males were influenced by media images and were motivated to look good, this did not necessarily translate into healthier food choices. Instead, it appears to be associated with an increased risk of disordered eating within this group, alongside increased use of supplements such as protein powders, creatine and amino acids. This approach also led to increased intake of protein-rich foods but very little fruit and vegetable intake. It would be anticipated that availability and cost may still be important factors in this group.

The systematic review by Shepherd et al. (2005, p. 239) suggested that support from family and friends, high availability of healthy foods, an interest in and desire to maintain appearance, and will-power were all major facilitators of healthy eating. Again, it is possible that different factors may be considered important within the university student population, who are older and have greater responsibility for their eating habits.

Methodology

The short review of the literature presented thus far in the essay demonstrates that there is still only a limited understanding of the underlying factors influencing eating habits in undergraduate students. Yet this is the information which is required if effective behavioural change interventions are to be designed and disseminated.

Research Aims

The aim of this small study was to investigate the decision-making processes which underlie the decisions of undergraduate students with regards to eating behaviours, including influences over these decisions. This could then be used alongside other published material to design a social marketing strategy on both a local and national level to improve healthy eating within this group.

Study Sample

A total of six undergraduate students from Manchester University were recruited to participate in the research. Convenience sampling was used to recruit participants to the study sample. Posters were displayed within the business school at the university, requesting participants to attend research focus groups. Eight participants contacted the researcher, but two subsequently withdrew, leaving a sample of four female and two male students. No further inclusion or exclusion criteria were applied, other than that participants were current undergraduate students at the university. This method of sampling may not provide a truly representative sample, so it may be difficult to generalize the results to the wider population of interest (Babbie, 2010, p. 192). However, this was the most appropriate recruitment approach given the limited time and budget constraints for the project, and the mix of participants may at least have limited the most obvious sources of bias.

Focus Group Methods

Focus groups were selected for data collection from study participants. Focus groups may be particularly useful for gaining an understanding of topics with a group behaviour element, but have also been shown to be very useful in the field of marketing for understanding the impact of marketing stimuli. They were considered to be of particular use in this instance as they allow integrated exploration of associations between lifestyle factors and reactions to marketing materials (Stewart et al., 2007, pp. 2-9).

The focus group was arranged for a two-hour session on one morning, and was moderated by the author. The entire session was video recorded so as to allow for further analysis of responses and behavioural cues at a later date. All participants were given assurance that their responses would remain anonymous and confidential and permission was sought to record the session before it began. Participants were also given information at the beginning of the session as to the purpose of the data collection, and were given opportunity to ask any questions, before being asked to provide consent for participation (Litosseliti, 2003, pp. 70-71).

The focus group began with some short introductory questions to break the ice between participants (Litosseliti, 2003, p. 73), before moving on to focus on the topic of interest: eating behaviours and potential influences. The questions included in the moderator guide, which was prepared to facilitate the focus group, are included in Box 1.

Box 1: Focus group questions

Tell me a little about what you would eat in a typical day.
Do you find that you eat regular meals?
What types of foods do you most like to eat?
Would you say that you eat many snacks? What type of snacks do you eat?
Is there anything you can think of that affects this – for example, do you eat differently on different days of the week?
How would you describe your cooking abilities – do you find it easy to plan meals and cook and prepare food?
How does the way you eat now compare to how you used to eat before coming to university?
Do you find that you eat differently when you go home for the weekend or for holidays?
Would you say that you have any concerns about the way in which you eat?
How do you think that the way in which you eat affects your health?
Are you at all concerned about whether the way you eat affects how you look?
What type of things affect whether you choose healthy foods over non-healthy foods?
Do you find it difficult to find/purchase healthy food?
Would cost have any impact on whether the food you buy is healthy?

Study Results

Overall, the results of the focus group suggested that the students in the sample had experienced a significant change in eating habits since leaving home to attend university. Although the daily eating patterns of participants differed significantly, all felt that they ate a less healthy diet since leaving home. The main difference noted was that regular meals were eaten less often, with several participants reporting that they skipped breakfast regularly, and that other meals were eaten based on convenience rather than at a regular time each day.

Most participants agreed that their eating patterns did differ on a daily basis. In particular, weekends were noted to follow more regular eating patterns, but often involve higher levels of alcohol and unhealthy foods such as takeaways. Participants also generally agreed that they returned to a healthier way of eating when returning home for the weekend or for holidays.

The actual components of diet varied widely across participants. While some participants reported that they regularly ate five portions of fruit and vegetables per day, others indicated that they ate only low levels. Four participants agreed that they ate convenience foods and takeaways on a regular basis, and it was acknowledged that these were usually calorie-dense, high fat foods.

All participants also agreed that they ate snacks on a regular basis, particularly where it was inconvenient to eat meals at regular intervals, and where breakfast was skipped. One participant reported that they felt that their snacking was healthy, however, as they usually snacked on fruit, nuts or seeds rather than chocolate bars or crisps. Given the small sample size and selection procedures, it was difficult to determine whether differences could be attributed to characteristics of the participants, for example gender (Babbie, 2010, p. 192).

There were a number of factors influencing food choices which emerged from the focus group. The major factor appeared to be convenience. The patterns of meals which were eaten were largely driven by having the time to prepare and cook food, or having access to healthy foods which could be purchased and eaten on the university campus. Participants also agreed that cost played a major role.

Only two participants agreed that their low level of cooking ability had any role in how healthy their diet was. The other participants claimed that while they could cook, convenience, cost and motivation were major barriers to doing so.

Food preferences were also a major factor in determining food choices, with all except one participant agreeing that they enjoyed fast food and several reporting that they preferred unhealthy foods to healthy ones. In spite of this, three participants reported that they did try to limit how often they ate fast foods, as it was acknowledged that it was bad for their health to eat them regularly.

In spite of this, the food choices of participants did not appear to be driven overall by concern over their health. Participants suggested that while they were aware of how their diet could impact on their health, other factors were more important influences. Similarly, only one participant agreed that maintaining the way that they looked played any role in influencing their dietary choices.

Social Marketing Strategy Design

Social marketing, first proposed as a public health tool in the 1970s, refers to the application of marketing techniques, using communication and delivery to encourage behaviour change. Such a strategy follows a sequential planning process which includes market research and analysis, segmentation, setting of objectives, and identifying appropriate strategies and tools to meet these objectives (DH, 2008, online). The literature review and focus group discussed thus far comprise the market research and analysis components of this process, with the remaining steps addressed below.

Market Segmentation

Market segmentation may be performed according to geographic distinctions, demographics or psychographic characteristics (Health Canada, 2004, online).
Based on the limited amount of information which is available so far, it would be difficult to segment the market geographically, as it is unclear whether differences exist according to which university is attended.

The demographics of undergraduate students may also be largely shared, with literature indicating that social background may hold little influence over eating habits within this subpopulation, and only limited evidence of any difference between genders (Arnot, 2010, online; Dodd et al., 2010, p. 75).

Instead, it may be preferential to segment on the basis of psychographic characteristics, according to shared knowledge, attitudes and beliefs with regard to changing dietary behaviour. The “Stages of Change” model proposed by Prochaska and DiClemente may be a useful tool to guide this segmentation, in which any change in behaviour is suggested to occur in six steps: precontemplation, contemplation, preparation, action, maintenance and termination (Tomlin & Richardson, 2004, pp. 13-6).
Those in the precontemplative stage do not see their behaviour as a problem (Tomlin & Richardson, 2004, p. 14), so this segment could be targeted with a marketing campaign to increase knowledge. Evidence from the US would appear to indicate that higher levels of knowledge regarding dietary guidelines may be associated with better dietary choices, although there is little evidence showing direct causality (Kolodinsky et al., 2007, p. 1409). Given the many different factors which appear to contribute to unhealthy diets amongst students, simply increasing knowledge may be insufficient to generate any significant improvement. This is further supported by current healthy eating initiatives aimed at the general population, such as the 5 A Day campaign, which incorporates additional, practical information, rather than simply educating people on the need to eat more fresh food (NHS Choices, 2010, online).

Those in the contemplative stage are aware that they need to change, but do not really want to, and it would be unlikely that targeting a marketing campaign at this group would have any significant effect (Tomlin & Richardson, 2004, p. 15). Once individuals reach the action stage, they are actively initiating or maintaining a change, until the initial issue is finally resolved in the termination stage (Tomlin & Richardson, 2004, pp. 15-6). It would therefore be better to target those in the preparation stage, who have made the decision to change but may be unclear about how to initiate this change. Here, improving knowledge while also providing information on effective ways in which to change behaviour may be the most appropriate strategy, as with the approach adopted by the 5 A Day campaign.

Strategy Objectives

Based on the information generated from the focus group, along with that from other research, the main aim of the strategy should be to improve the overall diet of undergraduate students. Campaigns such as the 5 A Day campaign already exist which aim to encourage eating more fruit and vegetables (NHS Choices, 2010, online). The main issues within the undergraduate group instead appear to lie in choosing unhealthy foods, or skipping meals, due to convenience and cost; therefore this is where the campaign should focus. The following objectives may therefore be identified:

1. Reduce the number of undergraduate students experiencing disordered eating patterns.
2. Improve knowledge and awareness within the undergraduate student population of tasty, cost-effective, convenient alternatives to takeaways and other junk foods.

National Plan

The national strategy would comprise two main arms. The first would be an educational campaign, targeted specifically at the segment described above and therefore focusing on providing practical information to assist healthy eating choices amongst students. This approach appears to have been moderately successful with the 5 A Day campaign in the general population (Capacci & Mazzocchi, 2011, p. 87). Evidence from the US suggests that within the undergraduate population specifically, providing information which is directly relevant to their lifestyle may also be effective (Pires et al., 2008, p. 16).

This campaign would be run through national media, as the evidence suggests that such campaigns are associated not only with increased knowledge, but also moderate levels of behaviour change (Noar, 2006, p. 21). Online and social media campaigns may also be effective based on previous case studies. For example, the Kirklees Up For It project found that running a campaign which utilized Facebook alongside its own Website was a successful way of reaching a moderate audience of 18 to 24 year olds (NSMC, 2010, online). Therefore social media such as Twitter and Facebook would provide a simple means of providing weekly tips to students on how to create easy, cheap healthy meals.

Tips could also be given on how to choose healthier snacks which cost less, for example by preparing them at home. By tailoring the advice to the motives of the group, which appear to be related to convenience and cost, previous research would suggest that this should be more effective in changing snacking behaviour (Adriaanse et al., 2009, p. 60).

The second arm of the national campaign would involve lobbying of the government to introduce regulation on the food choices offered by university campuses, particularly where food is provided as part of an accommodation package. This is based on similar recent moves to improve school meals, which has been suggested to be an effective means of improving diet, even if obesity levels have not yet seen any impact (Jaime & Lock, 2009, p. 45). It is also consistent with the data collected in this study, which suggested that access to healthy foods and convenience were major barriers to healthy eating for students.

Localised Intervention

In addition to the national strategy, a local project aimed at providing food preparation workshops would also be piloted in Manchester. This concept is based on the observation that students mostly select unhealthy choices due to convenience and cost, and may not be aware of ways in which healthy food may also be prepared quickly and cheaply. Previous case studies have shown that such practical activities may be an effective means of reaching this target audience. For example, a healthy living project called Up For It, run by Kirklees Council in association with NHS Kirklees, found on surveying young adults aged between 16 and 24 years that interventions which were fun and social were preferred to those which focused too much on health (NSMC, 2010, online). Provision of one-off sessions providing information on where to eat healthily on campus has also shown some success within the undergraduate population in the US (Pires et al., 2008, p. 12).

Based on the budget for the Up For It project, it would be anticipated that approximately £100,000 would be required to set up and run this local section of the strategy (NSMC, 2010, online). It would be assumed that the lobbying and media coverage required as part of the national strategy would be managed by the Department of Health.

Conclusions

It is clear that there is some truth to the assumption that undergraduate students in the UK live on a relatively unhealthy diet. While the reasons for this may be somewhat complex, convenience and cost appear to play a major role in the diet decisions which are made by this group. It is also clear that many are aware of the health impact which their diet is likely to have, although this is overridden by other factors. Targeting students who recognize the need to change their diet, by providing information on how to prepare healthier food quickly and cheaply, may help to overcome the barriers of cost and convenience, thereby improving health within this population.

References

Adriaanse, M.A., de Ridder, D.T.D. & de Wit, J.B.F. (2009) ‘Finding the critical cue: Implementation intentions to change one’s diet work best when tailored to personally relevant reasons for unhealthy eating’. Personality and Social Psychology Bulletin, 35(1), 60-71.
Arnot, C. (2010) ‘Male students eschew balanced diet in favour of supplements’. The Guardian, 9 November 2010. Available [online] from: http://www.guardian.co.uk/education/2010/nov/09/male-students-eating-habits [Accessed 27/03/2011].
Babbie, E.R. (2010) The Practice of Social Research. Belmont, CA: Wadsworth, p. 192.
Capacci, S. & Mazzocchi, M. (2011) ‘Five-a-day, a price to pay: An evaluation of the UK program impact accounting for market forces’. Journal of Health Economics, 30(1), 87-98.
Croezen, S., Visscher, T.L.S., ter Bogt, N.C.W., Veling, M.L. & Haveman-Nies, A. (2009) ‘Skipping breakfast, alcohol consumption and physical inactivity as risk factors for overweight and obesity in adolescents: Results of the E-MOVO project’. European Journal of Clinical Nutrition, 63, 405-412.
DH (2008) Social Marketing. Department of Health. Available [online] from: http://webarchive.nationalarchives.gov.uk/+/www.dh.gov.uk/en/Publichealth/Choosinghealth/DH_066342 [Accessed 28/03/2011].
Dodd, L.J., Al-Nakeeb, Y., Nevill, A. & Forshaw, M.J. (2010) ‘Lifestyle risk factors of students: A cluster analytical approach’. Preventative Medicine, 51(1), 73-77.
Health Canada (2004) Section 2: Market Segmentation and Target Marketing. Available [online] from: http://www.hc-sc.gc.ca/ahc-asc/activit/marketsoc/tools-outils/_sec2/index-eng.php [Accessed 26/03/2011].
HM Government (2010) Healthy Lives, Healthy People: Our strategy for public health in England. London: Public Health England. Available [online] from: http://www.dh.gov.uk/dr_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_122347.pdf [Accessed 26/03/2011].
Jaime, P.C. & Lock, K. (2009) ‘Do school based food and nutrition policies improve diet and reduce obesity’. Preventative Medicine, 48(1), 45-53.
James, W.P.T. (2008) ‘WHO recognition of the global obesity epidemic’. International Journal of Obesity, 32, S120-S126.
Kolodinsky, J., Harvey-Berino, J.R., Berlin, L., Johnson, R.K. & Reynolds, T.W. (2007) ‘Knowledge of current dietary guidelines and food choice by college students: Better eaters have higher knowledge of dietary guidance’. Journal of the American Dietetic Association, 107(8), 1409-1413.
Lake, A.A., Hyland, R.M., Rugg-Gunn, A.J., Mathers, J.C. & Adamson, A.J. (2009) ‘Combining social and nutritional perspectives: From adolescence to adulthood’. British Food Journal, 111(11), 1200-1211.
Litosseliti, L. (2003) Using Focus Groups in Research. London: Continuum, pp. 70-73.
NHS Choices (2010) 5 A Day. Available [online] from: http://www.nhs.uk/livewell/5aday/pages/5adayhome.aspx/ [Accessed 26/03/2011].
Noar, S.M. (2006) ‘A 10-year retrospective of research in health mass media campaigns: Where do we go from here?’ Journal of Health Communication, 11(1), 21-42.
NSMC (2010) Up For It. Available [online] from: http://thensmc.com/component/nsmccasestudy/?task=view&id=156 [Accessed 26/03/2011].
Papadaki, A., Hondros, G., Scott, J.A. & Kapsokefalou, M. (2007) ‘Eating habits of university students living at, or away from home in Greece’. Appetite, 49(1), 169-176.
Pires, G.N., Pumerantz, A., Silbart, L.K. & Pescatello, L.S. (2008) ‘The influence of a pilot nutrition education program on dietary knowledge among undergraduate college students’. Californian Journal of Health Promotion, 6(2), 12-25.
Raulio, S., Roos, E. & Prattala, R. (2010) ‘School and workplace meals promote health food habits’. Public Health Nutrition, 13, 987-992.
Serlachius, A., Hamer, M. & Wardle, J. (2007) ‘Stress and weight change in university students in the United Kingdom’. Physiology & Behavior, 92(4), 548-553.
Shepherd, J., Harden, A., Rees, R., Brunton, G., Garcia, J., Oliver, S. & Oakley, A. (2005) ‘Young people and healthy eating: A systematic review of research on barriers and facilitators’. Health Education Research, 21(2), 239-257.
Spanos, D. & Hankey, C.R. (2010) The habitual meal and snacking patterns of university students in two countries and their use of vending machines. Journal of Human Nutrition and Dietetics, 23(1), 102-107.
Stewart, D.W., Shamdasani, P.N. & Rook, D.W. (2007) Focus Groups: Theory and Practice – 2nd Edition. Thousand Oaks, CA: Sage Publications, Inc., pp. 2-9.
Tomlin, K.M. & Richardson, H. (2004) Motivational Interviewing and Stages of Change. Center City: MN: Hazelden, pp. 14-16.
Wyatt, S.B., Winters, K.P. & Dubbert, P.M. (2006) ‘Overweight and obesity: Prevalence, consequences, and causes of a growing public health problem’. American Journal of the Medical Sciences, 331(4), 166-174.
Yahia, N., Achkar, A., Abdallah, A. & Rizk, S. (2008) ‘Eating habits and obesity among Lebanese university students’. Nutrition Journal, 7, 32-36.

Spinal Cord Trauma Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Abstract

Trauma to the spinal cord causes loss of sensory and motor function below the injury site. Approximately 10,000 people experience serious spinal cord injury each year. There are four general types of spinal cord injury: cord maceration, cord laceration, contusion injury and solid cord injury. Three phases of response follow injury: the acute, secondary and chronic phases. The most immediate concern is patient stabilization. Additionally, interventions may be instituted in an effort to improve function and outcome. Through continued research and future developments, there is hope that recovery from spinal cord injury will one day be possible.

Introduction

Loss of sensory and motor function below the injury site is caused by trauma to the spinal cord. As indicated by Huether & McCance (2008), normal activity of the spinal cord cells at and below the level of injury ceases due to loss of continuous tonic discharge from the brain and brain stem. Depending on the extent of the injury, reflex function below the point of injury may be completely lost. This involves all skeletal muscles, bladder, bowel, sexual function and autonomic control. In the past, hope for recovery was minimal. With medical advancements and better understanding, the outlook today is better, but recovery remains limited.

Risk Factors and Incidence

According to Huether & McCance (2008), approximately 10,000 people experience serious spinal cord injury each year; 81% of those injured are male, with an average age of 33.4 years. As indicated by Hulsebosch (2002), the majority of injuries fall into four separate groups: 44% are sustained by young people through motor vehicle crashes or other high-energy traumatic accidents; 18% are sustained through sports activities; 24% are sustained through violence; and 22% are sustained in the elderly population, either through falls or through cervical spinal stenosis caused by congenital narrowing or spondylosis.

Categories of Injury

According to Hulsebosch (2002), there are four general types of spinal cord injury: 1) cord maceration, 2) cord laceration, 3) contusion injury, and 4) solid cord injury. In the first two, the surface of the cord is lacerated and a prominent connective tissue response is invoked, whereas in the latter two the spinal cord surface is not breached and the connective tissue component is minimal. Contusion injuries represent 25 to 40% of all injuries and are progressive, enlarging over time.

Cellular Level Physiology

Hulsebosch (2002) describes three phases of response following injury to the spinal cord. The acute phase begins at the moment of injury and extends over the first few days. A variety of pathophysiological processes begin. There is immediate mechanical soft tissue damage, including to the endothelial cells of the vasculature. Cell death, resulting from mechanical forces and ischemic consequences, is instantaneous.

Over the next few minutes there are significant electrolyte shifts: intracellular concentrations of sodium increase, extracellular concentrations of potassium increase, and intracellular levels of calcium rise to toxic concentrations that contribute to a failure in neural function. These electrolyte shifts cascade into the generalized picture of spinal shock, which represents a “failure of circuitry in the spinal neural network”. As indicated by Shewmon (1999), spinal shock is a transient functional depression of a structurally intact cord below the site of an acute spinal cord injury.

It does not occur with slowly progressive lesions. The limitation or loss of function typically lasts two to six weeks and is followed by recovery of function. The secondary phase occurs over the next few minutes to weeks. Ischemic cell death, electrolyte shifts and edema continue. As a result of cell lysis, extracellular concentrations of glutamate and other amino acids reach toxic levels within the first fifteen minutes after injury.

Free-radical production amplifies. Neutrophils accumulate in the spinal parenchyma within 24 hours. Lymphocytes follow the neutrophils and reach their peak numbers within 48 hours. Local concentrations of cytokines and chemokines increase as part of the inflammatory process. As inflammation and ischemia proceed, the injury site grows from the initial site of mechanical damage into the surrounding tissue, encompassing a larger region of cell death.

Regeneration is inhibited by factors expressed within this cascade of responses. The chronic phase occurs over a time course of days to years. Cell death continues. The cord becomes scarred and tethered. Conduction deficits result from demyelination of the cord. Regeneration and sprouting of axons are exhibited, but inhibitory factors suppress any resultant growth. Alteration of neural circuits often results in chronic pain syndromes for many spinal cord injury patients.

Therapeutic Management

Spinal cord injury is diagnosed by physical examination, radiological examination, CT scans, MRI scans and myelography. The most immediate concern in the management of an acute spinal cord injury is patient stabilization. The vertebral column may be surgically stabilized using a variety of rods, pins and wires.

Hardware must be meticulously placed, as surgical intervention has the potential to instigate additional spinal trauma. Homeostatic body systems must be supported through fluid resuscitation, medication management and electrolyte support. Additionally, the following interventions may be instituted in an effort to improve function and outcome:

Edema Reduction

Reduction of the inflammatory response is one focus of treatment in acute spinal cord injury. Steroids have provided a primary tool to reduce edema and inflammation, the most successful of which is methylprednisolone (MP). According to Bracken (1993), the administration of a high dose of MP within eight hours of the insult in patients with both complete and incomplete SCI, as proposed by the National Acute Spinal Cord Injury Study (NASCIS-2), has been promising with respect to improved clinical outcome. The cellular and molecular mechanisms by which MP improves function may involve antioxidant properties, inhibition of the inflammatory response, and/or a role in immunosuppression.

Inhibition of Inflammation by Use of Anti-Inflammatory Agents

Although inflammation is generally held to be a repair mechanism that is restorative in nature, recent work has demonstrated that the inflammatory cascade produces several pathways that are degradative in nature, such as the prostaglandin pathways.

Anti-inflammatory agents have been administered with successful limitation of the inflammatory process. As indicated by Hains, Yucra and Hulsebosch (2001), selective cyclooxygenase (COX)-2 inhibitors given systemically to spinal cord injury patients have demonstrated significant improvements. Inhibition of the enzyme activation sequence appears to be the safest pharmacological approach at this time.

Application of either whole-body hypothermia or local cord cooling appears to hold promise for those suffering from neurotrauma. Hypothermia, applied either spinally or systemically, is thought to protect neural cells and to reduce secondary inflammation, decreasing immediate mortality. According to Hayes, Hsieh, Potter, Wolfe, Delaney and Blight (1993), local spinal cord cooling within eight and a half hours of injury in ten patients produced a better-than-expected rate of recovery of sensory and motor function.

Rescue from Neural Cell Death

Cells die by programmed cell death after SCI, which presents an excellent opportunity for intervention with factors that could rescue the cells at risk. As presented by Eldadah and Faden (2000), one approach to cell rescue is the inhibition of caspases, regulated signalling proteases that play a primary role in mediating cell apoptosis through cleavage at specific sites within proteins. Another group of proteins, products of the bcl-2 oncogene family, inhibit programmed cell death. According to Shibata, Murray, Tessler, Ljubetic, Connors and Saavedra (2000), recent work has demonstrated prevention of retrograde cell loss and reduction of atrophy by direct intra-spinal administration of the Bcl-2 protein into the damaged site.

Another group of proteins implicated in cell death are the calpains, calcium-activated proteases that contribute to degradation of the cytoskeleton of injured cells. Substances with calpain-inhibiting properties could therefore prove of benefit in reducing cell death.

Demyelination and Conduction

According to Waxman (2001), it may be beneficial to inhibit the neural injury induced by the increased barrage of action potentials early in the injury phase, or to inhibit the voltage-dependent sodium channels which provide the ionic basis for the action potential. In addition, neural injury and disease may introduce altered ionic channel function on nerve processes, resulting in impaired conduction properties and persistent hyperexcitability, which forms the basis for chronic pain after CNS neural trauma.

As a result of secondary injury to the spinal cord, many axons are demyelinated. Infusion of a fast, voltage-sensitive potassium channel blocker may provide partial restoration of conduction properties to demyelinated axons. As presented by Guest, Hiester and Bunge (2005), another strategy for addressing demyelination is the transplantation of Schwann cells, which may contribute to the restoration of myelin sheaths around some spinal axons.

Promotion of Axonal Regeneration

During development of the central nervous system, an assortment of axonal growth-promoting proteins is present in the extracellular environment. This environment stimulates axon growth and neural development. Once the central nervous system is established, the growth-stimulating agents decline and the adult central nervous system shifts toward inhibition of axonal growth, permitting stable circuitry. These inhibitory and stimulatory factors provide an opportunity for research to promote axonal growth after a spinal cord injury, perhaps rebuilding a neural communication network.

Cell Replacement Strategies

After spinal cord injury, the function of nerve cells, and of the cells that produce the myelin which insulates axons and supports impulse conduction, is lost. Cellular replacement to rebuild conduction properties is therefore a promising therapy. As indicated by Nomura, Tator and Shoichet (2006), there is promise that technology utilizing cellular treatment procedures, including olfactory ensheathing cells (the cells that form the myelin on olfactory nerves), Schwann cells (the cells that form the myelin on peripheral nerves), dorsal root ganglia, adrenal tissue and neural stem cells, can promote repair of the injured spinal cord. It is postulated that these tissues would rescue, replace, or provide a regenerative pathway for injured adult neurons, which would then integrate with or promote the regeneration of the spinal cord circuitry and restore function after injury. As indicated by Nakamura (2005), there is promise that bioengineering technology utilizing cellular treatment advances can promote repair of the injured spinal cord. Transplantation of these cells promotes functional recovery of locomotion and reflex responses.

The engineering of cells combines the therapeutic advantage of the cells with a delivery system. For example, if delivery of neurotrophins (neuro-, relating to nerve cells; -trophin, relating to nourishment or growth) is desired, cells that secrete neurotrophins and cells that create myelin can be engineered to stimulate axon growth and rebuild nerve function.

In an effort to further enhance beneficial effects, autologous immune cells such as macrophages can be extracted from the patient’s own system and inserted at the injury site. The patient’s own activated macrophages will scavenge degenerating myelin debris, which is rich in non-permissive factors, and at the same time encourage regenerative growth without eliciting an immune response.

Retrain the Brain with Aggressive Physical Therapy

It is apparent that recovery of locomotion is dependent on sensory input that can “reawaken” spinal circuits and activate central pattern generators in the spinal cord, as demonstrated by spontaneous “stepping” in the lower limbs of one patient. According to Calancie, Alexeeva, Broton and Molano (2005), it may take six or more months for reflexes to appear following acute SCI, suggesting they might be due to new synaptic interconnections.

Electrical Stimulation

Functional electrical stimulation (FES) that enables improved standing can improve quality of life for the individual and the caregiver. There is considerable interest in computer-controlled FES for strengthening the lower extremities and for cardiovascular conditioning, which has met with some success in terms of physiological improvements such as increased muscle mass, improved blood flow, and better bladder and bowel function. Added benefits include decreases in medical complications such as venous thrombosis, osteoporosis and bone fractures. Stimulation of the phrenic nerve, which innervates the diaphragm, is used in cases where there is damage to respiratory pathways.

Chronic Central Pain

As indicated by Siddall & Cousins (1997), pain continues to be a significant problem in patients with spinal cord injuries. There is little consensus regarding the terminology, definitions and nature of the pain, and treatment studies have lacked congruence due to inaccurate identification of pain types. There has been little progress in bringing an understanding of the pathophysiology of chronic central pain (CCP) to the development of therapeutic approaches for the SCI patient population.

CCP syndromes develop in the majority of spinal cord injury patients. As indicated by Que, Siddall and Cousins (2007), chronic pain is a disturbing aspect of spinal cord injury, often interfering with basic activities, effective rehabilitation and the patient’s quality of life. Evidence that neurons in pain pathways are pathophysiologically altered after spinal cord injury comes from both the clinical and animal literature. In addition, the development of the chronic pain state correlates with structural alterations such as intra-spinal sprouting of primary afferent fibres.

According to Que, Siddall and Cousins (2007), pain in the cord-injured patient is often resistant to treatment. Recognition of chronic central pain has led to the utilization of non-opioid analgesics. According to Siddall and Middleton (2006), baclofen, once used exclusively in the treatment of spasticity, and the anticonvulsant gabapentin, originally used to treat epilepsy, have had some success in attenuating musculoskeletal CCP syndromes. The tricyclic antidepressant amitriptyline has been shown to be effective in the treatment of dysesthetic pain.

Conclusion

Stem cell therapy offers hope for spinal cord injury patients through an abundance of cell replacement strategies. Advances in the field of electronic circuitry will lead to better FES and robotic devices. Pharmacological advances offer directions for intervention to aid recovery and improve patients’ quality of life. The re-establishment of communication between cells, nerves and muscles may become possible. Through tenacity and future developments, victims of spinal cord injury may one day be told that there is hope of recovery.

References

American Psychological Association (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: American Psychological Association.

Bracken, M.B., & Holford, T.R. (1993). Effects of timing of methylprednisolone or naloxone administration on recovery of segmental and long-tract neurological function in NASCIS 2. Journal of Neurosurgery. 79(4), 500-7.

Bunge, R.P., Puckett, W.R., and Hiester, E.D. (1997). Observations on the pathology of several types of human spinal cord injury, with emphasis on the astrocyte response to penetrating injuries. Adv Neurol.72, 305-315.

Calancie, B., Alexeeva, N., Broton, J.G., & Molano, M.R. (2005). Interlimb reflex activity after spinal cord injury in man: strengthening response patterns are consistent with ongoing synaptic plasticity. Clinical Neurophysiology : Official Journal of the International Federation of Clinical Neurophysiology. 116(1), 75-86.

Eldadah, B.A., & Faden, A.I. (2000). Caspase pathways, neuronal apoptosis, and CNS injury. Journal of Neurotrauma. 17(10), 811-29.

Guest, J.D., Hiester, E.D., & Bunge, R.P. (2005). Demyelination and Schwann cell responses adjacent to injury epicenter cavities following chronic human spinal cord injury. Experimental Neurology. 192(2), 384-93.

Hains, B.C., Yucra, J.A., & Hulsebosch, C.E. (2001). Reduction of pathological and behavioral deficits following spinal cord contusion injury with the selective cyclooxygenase-2 inhibitor NS-398. Journal of Neurotrauma. 18(4), 409-23.

Hayes, K.C., Hsieh, J.T., Potter, P.J., Wolfe, D.L., Delaney, G.A., & Blight, A.R. (1993). Effects of induced hypothermia on somatosensory evoked potentials in patients with chronic spinal cord injury. Paraplegia. 31(11), 730-41.

Huether, S.E., & McCance, K.L. (2008). Understanding pathophysiology (4th ed.). St. Louis, MO: Mosby, Inc.

Hulsebosch, C.E. (2002). Recent advances in pathophysiology and treatment of spinal cord injury. Advan. Physiol.Edu. 26, 238-255

Nakamura, M., Okano, H., Toyama, Y., Dai, H.N., Finn, T.P., & Bregman, B.S. (2005). Transplantation of embryonic spinal cord-derived neurospheres support growth of supraspinal projections and functional recovery after spinal cord injury in the neonatal rat. Journal of Neuroscience Research. 81(4), 457-68.

Nomura, H., Tator, C.H., & Shoichet, M.S. (2006). Bioengineered strategies for spinal cord repair. Journal of Neurotrauma. 23(3-4), 496-507.

Que, J.C., Siddall, P.J., & Cousins, M.J. (2007). Pain management in a patient with intractable spinal cord injury pain: a case report and literature review. Anesthesia and Analgesia. 105(5), 1462-73, table of contents.

Shewmon, D.A. (1999). Spinal shock and “brain death”: somatic pathophysiological equivalence and implications for the integrative-unity rationale. Spinal Cord 37, 313-324.

Shibata, M., Murray M., Tessler, A., Ljubetic, C., Connors, T., & Saavedra, R.A. (2000). Single injections of a DNA plasmid that contains the human Bcl-2 gene prevent loss and atrophy of distinct neuronal populations after spinal cord injury in adult rats. Neurorehabilitation and Neural Repair. 14(4), 319-30.

Siddall, P.J., & Middleton, J.W. (2006). A proposed algorithm for the management of pain following spinal cord injury. Spinal Cord 44, 67-77

Tator, C.H. (1998). Biology of neurological recovery and functional restoration after spinal cord injury. Neurosurgery. 42(4), 696-707.

Waxman, S.G. (2001). Acquired channelopathies in nerve injury and MS. Neurology. 56(12), 1621-7.

Sickle-cell Disease (SCD) Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Sickle-cell disease

Sickle cell disease (SCD), or sickle cell anemia, is a group of genetic conditions resulting from the inheritance of a mutated form of the gene coding for the β-globin chain of the hemoglobin molecule, which causes malformation of red blood cells (RBCs) in their deoxygenated state. Specifically, this single point mutation occurs at position 6 of the β-globin chain, where a valine is substituted for glutamic acid (Ballas et al. 2012). This abnormal hemoglobin causes a characteristic change in RBC morphology: the cell becomes abnormally rigid and sickle-like, rather than the usual biconcave disc. These cells do not flow as freely through the circulatory system as the normal phenotype, and can become damaged and hemolysed, resulting in vascular occlusion (Stevens and Lowe 2002).

SCD is an autosomal recessive condition, thus patients with SCD will have inherited a copy of the mutated gene from each of their parents (homozygous genotype). Individuals who inherit only one copy (heterozygous genotype) are termed sickle cell (SC) carriers, and may pass the affected gene on to their children (Stevens & Lowe 2002). The severity of SCD varies considerably from patient to patient, most likely as the result of environment or other unknown genetic factors (Bean et al. 2013).
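
As an illustrative aside, not drawn from the sources cited above, the Mendelian arithmetic behind this inheritance pattern can be sketched as follows, assuming both parents are heterozygous carriers (one normal allele A and one sickle allele S):

\[
P(\text{affected, } SS) = \tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4}, \qquad
P(\text{carrier, } AS) = 2\times\tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{2}, \qquad
P(\text{unaffected, } AA) = \tfrac{1}{4}.
\]

That is, on average one in four children of two carriers would be expected to have SCD and one in two to be carriers, although actual families may of course differ from these expected proportions.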

Patients with SCD are typically of African or African-Caribbean origin, but all ethnic groups may be affected. In 2014 the National Institute for Health and Care Excellence (NICE) estimated that between 12,500 and 15,000 people in the UK suffer from SCD (NICE quality standard 58, 2014), with more than 350 babies born with SCD between 2007 and 2008. Patients in developed countries typically live into their 40s and 50s. However, in developing countries it is estimated that between 50% (Odame 2014) and 90% of children die by the age of 5 (Gravitz and Pincock 2014).

SCD is more prevalent in ethnic African populations because SCD carriers exhibit a 10-fold reduction in severe malarial infection, which is common in many African countries and associated with significant mortality. One proposed mechanism is that, on infection with the malaria parasite, RBCs in SCD carriers become sickle shaped and are then removed from the circulation and destroyed. Consequently, it is genetically beneficial to be an SCD carrier, so more SCD carriers survive to reproductive age, in turn increasing the incidence of the SCD mutation in the population (Kwiatkowski 2005).

SCD patients experience periods of acute illness termed “crises”, resulting from the two different effects of SCD: vaso-occlusion (pain, stroke and acute chest syndrome) and hemolysis (for example, anemia from RBC destruction and inefficient oxygen-carrying capacity) (Glassberg 2011). The frequency of these crises may be several times a week or less than once a year. Patients typically present with anemia, low blood oxygen levels and pyrexia (NICE quality standard 58, 2014).

There are three classifications of crises:

1. Sequestration crisis (rapid pooling of RBCs in organs, typically the spleen, which may result in patient death from the acute reduction in red cells available for oxygen transportation).

2. Infarctive crisis (blockage of capillaries causing an infarction).

3. Aplastic crisis (where the spleen is damaged by 1 and 2, compromising RBC production) (Stevens & Lowe 2002).

The result of these crises can be irreversible damage to a wide range of organs from the spleen to the retina which can cause extreme pain (Stevens & Lowe 2002). However, patients not currently experiencing a crisis can also present with anemia as the result of poor oxygen transport function, loss of RBCs due to sequestration in organs such as the spleen and reduced red cell production as the result of impaired spleen function (Ballas et al. 2012).

Typically, patients will initially present with an enlarged spleen in early childhood (due to pooling of malformed RBCs); the spleen then becomes progressively fibrotic and atrophied, ultimately resulting in a state of almost complete loss of function (autosplenectomy). Several complications of SCD are recognised, including impaired neurocognitive function, which is most likely the result of anemia or silent cerebral infarcts (Ballas et al. 2012).

In the UK, SCD is usually diagnosed antenatally or in the first few weeks of life. Prenatal screening is offered to parents who may be at risk of carrying the SCD-causing gene. NICE recommends that screening is offered early in pregnancy for high-risk groups (ideally before 10 weeks’ gestation) or via a family origin questionnaire in low-risk groups, with full screening then offered if the family history is suggestive. In the case of a positive test, counselling should be offered immediately and the parents offered the option of termination of pregnancy (NICE Clinical Guideline 62, 2014). If screening has not occurred, SCD is one of the diseases screened for by the newborn heel prick test in the first week of life (NICE quality standard 58, 2014). In older patients, or in countries where screening is not offered, SCD typically presents with anemia or an acute crisis. Histological analysis of blood samples can also reveal sickle-shaped RBCs, and the characteristic abnormal hemoglobin can be identified by high performance liquid chromatography or electrophoresis (Glassberg 2011).

There are three approaches to treatment of SCD. The first is to manage the condition prophylactically in the hope of reducing the incidence of complications and crises. The second is to effectively manage crises, both to reduce the risk of organ damage and life threatening events, as well as control the severe pain associated with a SCD crisis. The third approach is to target the cause of the condition itself.

Penicillin (de Montalembert et al. 2011) and folic acid are usually offered to patients in order to prevent complications such as bacterial infection, and are associated with a significant increase in survival and quality of life (NICE quality standard 58, 2014). Children are also vaccinated against pneumococcal infection. Transcranial Doppler imaging of the cerebral vessels can be used to identify children at risk of stroke (de Montalembert et al. 2011). As previously discussed, SCD carriers are conferred some protection from malarial infection. Paradoxically, SCD sufferers display an increased sensitivity to malarial infection and should also be treated with anti-malarial prophylaxis where appropriate (Oniyangi and Omari 2006).

Hydroxyurea has been used in the treatment of SCD, as it appears to increase the production of fetal hemoglobin (HbF), thus reducing the proportion of abnormal hemoglobin, although the exact mechanism is unclear (Rang et al. 1999). Suggested mechanisms include induction of HbF by nitric oxide or by inhibition of ribonucleotide reductase. Other suggested mechanisms include increasing RBC water content and reducing endothelial adhesion, which reduces the incidence of infarction (Charache et al. 1995).

Blood transfusion is an important tool in treating SCD, especially in children. It almost immediately improves the capacity of the blood to transport oxygen and, in the longer term, because the “healthy” donor RBCs are not destroyed as quickly as the sickle-shaped RBCs, repeated transfusion is associated with a reduction in erythropoiesis (RBC production) in the SCD patient. This reduces the proportion of sickle-shaped RBCs in circulation, which in turn reduces the risk of a crisis or stroke. Exchange transfusion is also possible, whereby abnormal sickle RBCs are removed from the circulating volume prior to transfusion with donor blood. However, there are drawbacks to transfusion, namely the inherent safety risks such as immunological sensitivity, contamination of blood products with infectious disease and a lack of available donated blood (Drasar et al. 2011).

The severe pain of a crisis must be controlled, most often with opioid analgesics. These are effective analgesics which act by binding to µ, δ and κ opioid receptors. The common approach is intravenous infusion of morphine, either by continuous drip or by patient-controlled analgesia (PCA) pump. Non-opioid drug options, including paracetamol, tramadol and corticosteroids, may also be considered, but these drugs have a limit to the analgesia they can produce, whereas opioid drugs are more often limited by their side effects, such as respiratory suppression, vomiting and itching (Ballas et al. 2012).

Bone marrow transplant is currently the only curative therapy for SCD. However, it is dependent on locating a suitable donor with an HLA tissue match, usually a healthy sibling. It is associated with some risks and complications, including graft rejection, but is generally associated with a very positive prognosis (Maheshwari et al. 2014). As SCD is an autosomal recessive disease with one well-identified causative gene, gene therapy to replace one copy of the faulty gene with a normal copy is of great interest to researchers. However, this is still very much in development in humans, and a 2014 review of SCD clinical trials found no trials of gene therapy as yet (Olowoyeye and Okwundu 2014).

In addition to the acute effects of SCD, patients are also at risk from a number of potentially fatal consequences such as acute splenic sequestration. In this condition, which often occurs after an acute viral or bacterial infection (classically parvovirus B19), the malformed RBCs become trapped in the sinuses of the spleen, causing rapid enlargement. Patients will present with often severe abdominal pain and enlargement, pallor, weakness and potentially tachycardia and tachypnea. Patients may also suffer from hypovolemic shock from the significant reduction of available hemoglobin (acute aplastic crisis). This is managed by emergency treatment of the hypovolemia and transfusion of packed RBCs. Because the rate of recurrence of splenic sequestration is high (approximately 50%), a splenectomy may be performed after the patient has recovered from the event (NICE quality standard 58, 2014).

Acute chest syndrome is also a serious complication of SCD and may be fatal. It is characterised by occlusion of the pulmonary blood vessels during a vaso-occlusive crisis. Patients typically present with chest pain, cough and low oxygen levels (Ballas et al. 2012). It is also associated with asthma, and it is recommended that asthma in patients with SCD be carefully monitored. Treatment of acute chest syndrome is usually with antibiotics and, if indicated, bronchodilators; transfusion or exchange transfusion may also be considered (de Montalembert et al. 2011).

Another consequence of the rapid turnover of the abnormally shaped RBCs is increased production of bilirubin, which may cause hepatobiliary disease, specifically gallstones and vascular conditions of the liver. Liver pathology can result from ischemia-reperfusion injury following a crisis, endothelial dysfunction and iron overload as the liver sequesters iron from the destroyed RBCs (Ballas et al. 2012). SCD patients are also at significant risk of ischemic stroke resulting from a cerebral infarctive crisis, with one study suggesting that 11% of patients will suffer a stroke by 20 years of age, and 24% by 45. Children who suffer a stroke may also go on to develop moyamoya syndrome, which is associated with a significant decrease in cognitive function and an increased risk of further stroke (Ballas et al. 2012).

SCD is a complex condition and is associated with significant challenges in treatment, as it requires a multi-disciplinary team to cover the wide range of its effects and its significant prophylactic treatments. As discussed, these potential complications can be life threatening and have life-changing consequences.

An additional difficulty is that while screening, prophylactic and curative treatments are available in the developed world, they are not available in the developing world, where rates of the disease are in fact highest. In sub-Saharan Africa, mortality is estimated to be between 50% (Odame 2014) and 90% (Gravitz & Pincock 2014), yet in developed countries life expectancy ranges from the 40s to the 50s (Gravitz & Pincock 2014). Currently, laboratory diagnosis and screening are prohibitively expensive in developing countries, so there is a need for the development of low-cost techniques. The Gavi Vaccine Alliance also endeavors to make prophylactic treatment more available, specifically the pneumococcal vaccine. Of the therapies discussed here, hydroxyurea is likely to be the most affordable; increasing its availability would be of significant benefit, and clinical trials commenced in Africa in 2014 (Odame 2014).

References

Ballas, S.K., Kesen, M.R., Goldberg, M.F., Lutty, G.A., Dampier, C., Osunkwo, I., Wang, W.C., Hoppe, C., Hagar, W., Darbari, D.S., & Malik, P. 2012. Beyond the definitions of the phenotypic complications of sickle cell disease: an update on management. ScientificWorldJournal., 2012, 949535 available from: PM:22924029

Bean, C.J., Boulet, S.L., Yang, G., Payne, A.B., Ghaji, N., Pyle, M.E., Hooper, W.C., Bhatnagar, P., Keefer, J., Barron-Casella, E.A., Casella, J.F., & Debaun, M.R. 2013. Acute chest syndrome is associated with single nucleotide polymorphism-defined beta globin cluster haplotype in children with sickle cell anaemia. Br.J.Haematol., 163, (2) 268-276 available from: PM:23952145

Charache, S., Terrin, M.L., Moore, R.D., Dover, G.J., Barton, F.B., Eckert, S.V., McMahon, R.P., & Bonds, D.R. 1995. Effect of hydroxyurea on the frequency of painful crises in sickle cell anemia. Investigators of the Multicenter Study of Hydroxyurea in Sickle Cell Anemia. N.Engl.J.Med., 332, (20) 1317-1322 available from: PM:7715639

de Montalembert M., Ferster, A., Colombatti, R., Rees, D.C., & Gulbis, B. 2011. ENERCA clinical recommendations for disease management and prevention of complications of sickle cell disease in children. Am.J.Hematol., 86, (1) 72-75 available from: PM:20981677

Drasar, E., Igbineweka, N., Vasavda, N., Free, M., Awogbade, M., Allman, M., Mijovic, A., & Thein, S.L. 2011. Blood transfusion usage among adults with sickle cell disease – a single institution experience over ten years. Br.J.Haematol., 152, (6) 766-770 available from: PM:21275951

Glassberg, J. 2011. Evidence-based management of sickle cell disease in the emergency department. Emerg.Med.Pract., 13, (8) 1-20 available from: PM:22164362

Gravitz, L. & Pincock, S. 2014. Sickle-cell disease. Nature, 515, (7526) S1 available from: PM:25390134

Kwiatkowski, D.P. 2005. How malaria has affected the human genome and what human genetics can teach us about malaria. Am.J.Hum.Genet., 77, (2) 171-192 available from: PM:16001361

Maheshwari, S., Kassim, A., Yeh, R.F., Domm, J., Calder, C., Evans, M., Manes, B., Bruce, K., Brown, V., Ho, R., Frangoul, H., & Yang, E. 2014. Targeted Busulfan therapy with a steady-state concentration of 600-700 ng/mL in patients with sickle cell disease receiving HLA-identical sibling bone marrow transplant. Bone Marrow Transplant., 49, (3) 366-369 available from: PM:24317124

NICE Clinical Guideline 62 – Antenatal Care. Guideline CG62, published March 2008, revised February 2014. https://www.nice.org.uk/guidance/cg62

NICE quality standard 58: Sickle cell acute painful episode, Guidelines CG143, publication date June 2012, reviewed May 2014. https://www.nice.org.uk/guidance/cg143

Odame, I. 2014. Perspective: we need a global solution. Nature, 515, (7526) S10 available from: PM:25390135

Olowoyeye, A. & Okwundu, C.I. 2014. Gene therapy for sickle cell disease. Cochrane.Database.Syst.Rev., 10, CD007652 available from: PM:25300171

Oniyangi, O. & Omari, A.A. 2006. Malaria chemoprophylaxis in sickle cell disease. Cochrane.Database.Syst.Rev. (4) CD003489 available from: PM:17054173

Rang, Dale, & Ritter 1999. Pharmacology, 4th ed. Churchill Livingstone.

Stevens & Lowe 2002. Pathology, 2nd ed. London, Mosby.

Leadership and Management in Professional contexts

This work was produced by one of our professional writers as a learning aid to help you with your studies

Part 1: Management Style
Description and feelings

This essay aims to reflect on my experience of working with a group of seven students tasked to critically analyse a case study and develop a group presentation. The Gibbs (1988) model of reflection will be used to discuss and analyse the lessons gained from my experience. At the start of our group meetings, a leader was selected who helped the group in planning and implementing the task. However, my experience with the group was marked by difficulties and challenges. In the first stage of our group formation, the forming stage, we had difficulties meeting as a group due to differences in university schedules. During the meetings, some of the members chose not to participate while others were more demanding and tried to dominate the discussions. The leader tried to create some sense of order in our first meetings and demonstrated the authoritarian leadership style. Throughout our team meetings, some of the members were absent, while others who were present continued to depend on the more dominant members to accomplish the tasks. I was frustrated in the beginning of our meetings and felt that we could have been successful in our presentation if we had managed to work more effectively. Our team presentation was not what I expected, and I was disappointed with our overall team performance.

Discussion and Analysis

Management is described as a process where leaders govern and make decisions within an organisation (Bach and Ellis, 2011). This also involves planning of tasks, organising work, staffing, directing activities and controlling (Belbin, 2010). The main aim of management is for managers to influence or encourage team members to accomplish a task (Belbin, 2010). On reflection, my team leader demonstrated the authoritarian leadership style. This type of leadership is described as one where the leader provides the direction of the team and gives specific instructions and directives on how to achieve the team goal (Daly et al., 2015). An authoritarian leader also supervises the activities of the subordinates and strongly discourages members from validating or questioning his or her directives (Bach and Ellis, 2011). This type of leadership is appropriate in workplaces with a highly structured setting and routine operations (Bishop, 2009). Autocratic leadership is also favourable for activities that are simple and of shorter duration (Marquis and Huston, 2012). On evaluation of my experience in the team, we had very little interaction and cohesion during the first few stages of the team working.

According to Tuckman’s model of team development, there are four stages of group formation (Clark et al., 2007): forming, storming, norming and performing. Our lack of cohesion and difficulties in conducting team meetings reflect the first stage of group formation, the forming stage. In this step, Clark et al. (2007) explain that team members are still beginning to establish their team roles and tend to be polite and diplomatic. At this stage, a team leader was chosen, who in turn adopted the authoritarian leadership style. Since most team members were reluctant to accept a task, our leader decided to assign team roles and ensured that each team member would attend the team meetings. The leader also supervised the entire group. On reflection, the authoritarian leadership style was appropriate in the first few stages of our team working since it ensured that tardiness and absenteeism were prevented (Belbin, 2010). Further, the authoritarian leadership style was also appropriate since our assigned task was not complex and was of short duration (Bishop, 2009). Our group leader was able to make follow-ups on our assigned task. However, as we progressed towards the second stage, the storming stage, conflicts soon arose.

There were members who tended to dominate the discussion and did not agree with our leader on our assigned team roles and on how the case study should be presented. Although Goodman and Clemow (2010) argue that conflicts in teams are natural and may not always have a negative impact on the function and development of the team, in my experience the conflicts had a negative impact on our team development. Members who disagreed with our team leader on how the case study should be presented chose not to participate in our subsequent meetings and role-playing. Since the authoritarian leadership style was adopted, our team leader did not consider the team members’ suggestions. Morgan et al. (2015) reiterate that conflicts can help the development of a team if each team member acknowledges the differences within the team and learns to adjust to their individual roles. On reflection, most of my team members chose not to adjust to our individual differences. In turn, this created a discordant team, which was also reflected in our final presentation. I felt that our presentation was chaotic and reflected poorly on our role as team members. On consideration, our team would have benefitted from the transformational leadership style. This type of leadership encourages members to actively participate in decision-making and is associated with achievement of goals and objectives (Bach and Ellis, 2011).

Conclusion

The authoritarian leadership style was not the most appropriate style for managing our team since it failed to encourage team members to participate in decision-making. This type of leadership is also not applicable in actual healthcare settings, where patient-centred care is promoted and team working and participation are highly encouraged.

Action Plan

When managing a team in the future, I will ensure that I am aware of my own team role. Conflicts should be used to develop, not destroy, teams. I will also adopt a leadership style that allows team members to actively participate in decision-making. Specifically, I will develop the transformational leadership style since this ensures that all members have opportunities to be actively involved and valued during the achievement of a task (Bishop, 2009).

References:
Bach, S. & Ellis, P. (2011) Leadership, Management and Team Working in Nursing. Exeter: Learning Matters.
Belbin, R. (2010) Management teams: why they succeed or fail. London: Butterworth-Heinemann.
Bishop, V. (2009) Leadership for nursing and allied healthcare professionals. Milton Keynes: Open University Press.
Clark, P., Cott, C. & Drinka, T. (2007) ‘Theory and practice in interprofessional ethics: a framework for understanding ethical issues in health care teams’, Journal of Interprofessional Care, 21(6), pp. 591-603.
Daly, J., Speedy, S. & Jackson, D. (2015) Leadership and Nursing. Contemporary Perspectives. 2nd ed. Chatswood: Elsevier.
Gibbs, G. (1988) Learning by doing: A guide to teaching and learning methods, Oxford: Further Educational Unit, Oxford Polytechnic.
Goodman, B. & Clemow, R. (2010) Nursing and collaborative practice: A guide to interprofessional learning and working. Exeter: Learning Matters, Ltd.
Marquis, B. & Huston, C. (2012) Leadership and management tools for the new nurse. A case study approach. Philadelphia: Lippincott.
Morgan, S., Pullon, S. & McKinlay, E. (2015) ‘Observation of interprofessional collaborative practice in primary care teams: An integrative literature review’, International Journal of Nursing Studies, doi: 10.1016/j.ijnurstu.2015.03.008 [Online]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25862411 (Accessed: 15 May 2015).
Part 2: Leadership, Management and Change
Description and Feelings

In our team meetings, the concept of change management surfaced as our team leader struggled to influence team members to assume different team roles. I also realised that I was used to completing tasks individually rather than as part of a team. Although I was not the team leader, I also had to learn to adopt an appropriate leadership style for use in future team working. During our team meetings, I was frustrated since we were accomplishing little, but in the end I felt that I had developed my ability to work in a team.

Discussion and Analysis

Change is described as a transition that involves movement from the present state of an organisation to a desired future state (Marquis and Huston, 2012). Changes often occur in healthcare settings and require change management. During the role-play and team meetings, collaborative team working was encouraged to achieve the goals of the team. This represented a change in how I accomplish tasks: from completing assigned tasks individually, I had to learn how to complete tasks as a group. Apart from changes in how tasks were completed, there was also a suggested change in leadership style, from authoritarian to transformational. On evaluation, change management was necessary in our group since it could have addressed the factors that caused our poor performance and strengthened the factors that would lead to a successful group performance.

Practising change management is crucial since it would help prepare me for my future role as a registered nurse and as a nurse leader. At least three models have been proposed for managing change: the Plan, Do, Study, Act (PDSA) cycle, Kotter’s model and Lewin’s change model (Bach and Ellis, 2011; Appelbaum et al., 2012; Reed and Card, 2016). The PDSA cycle is often used in the NHS: the ‘plan’ stage allows nurse leaders and other healthcare practitioners to create a plan for how to implement a change, while the ‘do’ stage constitutes the actual performance of the plan. In the third or ‘study’ phase, nurse leaders and team members analyse the performance and consider whether it needs to be enhanced or changed (Reed and Card, 2016). In the ‘act’ phase, the proposed changes to the action plan and performance are implemented. The entire process is then repeated until change has been integrated within the organisation. A critique of the PDSA cycle is the difficulty of repeating it, with Reed and Card (2016) noting that only 20% of healthcare groups using PDSA actually repeat the cycle. The applicability of the PDSA cycle is also limited, with some healthcare settings not benefitting from this type of change management (Taylor et al., 2013).

Meanwhile, Kotter’s model of change adopts a top-down approach and is often used in corporate settings (Appelbaum et al., 2012). It is difficult to use this model of change in actual healthcare settings since the NHS encourages all team members and patients to actively participate in the planning and implementation of a change initiative (NHS Leadership Academy, 2011). However, a reflection on my own group would show that Kotter’s model of change was demonstrated, as our team leader exercised the authoritarian leadership style: the change came from the leader and trickled down to the team members. Finally, Lewin’s model of change proposes three stages of change: unfreezing, change and refreezing (Gopee and Galloway, 2013). This model is often used in healthcare settings since it takes into account the factors that enable or deter change in actual practice. A force-field analysis is carried out, and factors that enable change are strengthened while factors that deter change are reduced (Gopee and Galloway, 2013).

On reflection, employing this type of change management will be crucial in my future role as a registered nurse leading a multidisciplinary team. In the NHS, it is recognised that there are several factors that deter or promote change in practice. For instance, the perception that a proposed change initiative only increases paperwork could deter the uptake of change in practice (Bach and Ellis, 2011). This perception is supported in the literature, with the Royal College of Nursing (2013) reporting that nurses spend 2.5 million hours per week completing clerical tasks. Hence, I have to be aware of the factors that deter or enable change. On reflection, the autocratic leadership style, coupled with the top-down approach to change, did not lead to a successful performance by my group. Lewin’s model of change would have been more appropriate in helping my team members accept their individual roles and change their own ways of completing tasks. This model would have helped our team leader investigate the factors that led to poor attendance at our team meetings and the team members’ refusal to resolve conflicts.

Conclusion

Effective leadership and change management are crucial when implementing a change initiative and when completing group tasks. Using Lewin’s model of change would have helped the team leader identify the factors that enable and deter change. Successful use of this model would lead to achievement of the team’s goals.

Action Plan

I will develop my leadership skills and my ability to carry out Lewin’s change model. I will find opportunities to practise change management skills in my own healthcare setting and report regularly to my mentor and colleagues on my progress. I will ask for feedback from my mentor and colleagues on whether I have achieved leadership and change management skills.

References:
Appelbaum, S., Habashy, S., Malo, J. & Shafiz, H. (2012) ‘Back to the future: revisiting Kotter’s 1996 change model’, Journal of Management Development, 31(8), pp. 764-782.
Bach, S. & Ellis, P. (2011) Leadership, Management and Team Working in Nursing. Exeter: Learning Matters.
Gopee, N. & Galloway, J. (2013) Leadership and Management in Healthcare. 2nd ed. London: Sage.
Marquis, B. & Huston, C. (2012) Leadership and management tools for the new nurse. A case study approach. Philadelphia: Lippincott.
NHS Leadership Academy (2011) Clinical Leadership Competency Framework. Coventry: NHS Institute for Innovation and Improvement.
Reed, J. & Card, A. (2016) ‘The problem with Plan-do-study-act cycles’, British Medical Journal Quality and Safety, 25(3), pp. 147-152.
Royal College of Nursing (2013) Nurses spend 2.5 million hours a week on paperwork- RCN Survey [Online]. Available at: https://www2.rcn.org.uk/newsevents/press_releases/uk/cries_unheard_-_nurses_still_told_not_to_raise_concerns (Accessed: 10 May, 2017).
Taylor, M., McNicholas, C., Nicolay, C., Darzi, A., Bell, D. & Reed, J. (2013) ‘Systematic review of the application of the plan-do-study-act method to improve quality in healthcare’, British Medical Journal Quality and Safety, doi: 10.1136/bmjqs-2013-001862.
Part 3: Leadership, Management and Decision Making
Description and Feelings

In our group work, our team leader did not make a decision to identify the factors that deterred participants from resolving conflicts and adjusting to team roles. There was also no decision to reflect on why team members were reluctant to accept the assigned tasks, or on the reasons for poor attendance at the team meetings. I felt that these non-decisions heavily influenced our team performance. As a group, we reached the erroneous conclusion that our team leader could handle all the required tasks, and this group conclusion might also have contributed to our failed group presentation. During our meetings, I was anxious and apprehensive that we were not accomplishing our tasks within the given time frame.

Discussion and Analysis

The failure to identify the factors that deterred the group from participating in meetings and accepting tasks had a negative impact on our team performance. The ability to make decisions is crucial when completing tasks as a student nurse and in preparation for my role as a registered nurse or a nurse leader. Marriner-Tomey (2009) has argued that decision-making is crucial in healthcare organisations and within teams. In actual healthcare settings, decisions are made constantly and range from whether to admit a patient to which interventions to use for a specific healthcare condition. These decisions are influenced by legislation, policies, leadership styles and the practice of patient-centred care (NHS Leadership Academy, 2011). On analysis, it is crucial to make decisions within groups. However, it is cautioned that collective decisions might reflect ‘groupthink’ and lead to failure instead of success (Marriner-Tomey, 2009). Groupthink describes faulty decision-making by a group that represents a deterioration in reality testing, mental efficiency and moral judgment (Wilcox, 2010). Groups that demonstrate groupthink often do so without realising the impact of their decisions on other groups and, in the process, ignore alternative options or actions (Cooke and Young, 2002). It is important to note that groupthink often occurs when members have similar backgrounds, when rules for decision-making are not clear and when members do not consider the opinions of others (Wilcox, 2010). In my experience, we were not able to make a collective decision, nor did we demonstrate groupthink despite the similarity of our backgrounds. I felt that our lack of cohesion prevented us from making even the faulty decisions that are common when a team ‘groupthinks’.

An analysis of our group revealed that we were not able to examine the power relations within the group. Power relations can influence who makes decisions and whether those decisions are followed (Bach and Ellis, 2011). Power is described according to who has the formal authority to make decisions for the group and who has access to resources (McDonald et al., 2012); it has also been described in terms of differing abilities to control ideas (McDonald et al., 2012). In teams, there may be power imbalances, especially when professional systems and social and cultural factors reinforce them (Martin-Rodriguez et al., 2005). Such imbalances may be more evident in hospital settings where medical dominance is seen. For example, medical doctors have traditionally retained their independence, professional autonomy and status when collaborating with other groups of healthcare workers (Hudson, 2002). This may create a power imbalance, as doctors tend to have more power in decision-making than the rest of the group. This contrasts with what is often seen in community healthcare settings, where each member of a healthcare team tends to share power and make decisions according to what is best for the patient (Hudson, 2002).

Meanwhile, Weir-Hughes (2011) asserts that in order for a therapeutic relationship to develop, there is a need to consider the power relationship between healthcare practitioners and patients. It is suggested that power may be used negatively (i.e. through coercion and force) or positively (i.e. through encouragement and empowerment). On analysis, my ability to understand power relations through my experiences of team working will be essential when caring for actual patients. In our team, power was used negatively, since our team leader had to force team members to accept assignments. However, I realised that in actual settings it is important to encourage and empower patients and colleagues to improve patient care. It has been shown that patient empowerment tends to improve the quality of care and patient outcomes (Sullivan and Garland, 2010). On analysis, there was a power imbalance in our group, since the team leader made all the decisions and a top-down approach to change was followed.

Conclusion

Making decisions is crucial in team working and when caring for patients. However, the ability to make decisions depends on one’s power: those with more access to resources and power have a greater ability to influence decisions. In healthcare settings, it is crucial to use power positively and to empower patients and other members of the healthcare team to make decisions. The positive use of power is also important in preventing ‘groupthink’, a phenomenon that tends to result in negative consequences for the group.

Action Plan

When faced with a similar situation in the future, I will ensure that I actively participate in decision-making. However, I need to empower others and myself to make good decisions. Empowerment is necessary to prevent power imbalance. I will continue to engage in training on how to practice effective leadership and management skills in order to empower others to actively engage in decision-making.

References:
Bach, S. & Ellis, P. (2011) Leadership, Management and Team Working in Nursing. Exeter: Learning Matters.
Cooke, M. & Young A. (2002) Managing and Implementing Decisions in Healthcare. London: Healthcare Balliere Tindall/RCN.
Marriner-Tomey (2009) Guide to Nursing Management and Leadership. St. Louis: Mosby Elsevier.
Martin-Rodriguez, L., Beaulieu, M., D’Amour, D. & Ferrada-Videla, M. (2005) ‘The determinants of successful collaboration: a review of theoretical and empirical studies’, Journal of International Care, 19(2), pp. 132-147.
McDonald, J., Jayasuriya, R. & Harris, M. (2012) ‘The influence of power dynamics and trust on multidisciplinary collaboration: a qualitative case study of type 2 diabetes mellitus’, BMC Health Services Research, 12(63). Doi: 10.1186/1472-6963-12-63.
NHS Leadership Academy (2011) Clinical Leadership Competency Framework. Coventry: NHS Institute for Innovation and Improvement.
Sullivan E., Garland G. (2010) Practical Leadership and Management in Nursing. Pearson Education, Harlow.
Wilcox, C. (2010) Groupthink: An impediment to success. USA: Xlibris Corporation.
Part 4: Reflection on Development of Skill
Description and Feelings

I participated in a second group activity in which I was chosen as the leader. In this second group, I was able to practise leadership skills such as effective communication, motivation, change management and integrity. During one of our discussions, I assigned a group member to search for evidence-based interventions for a specific healthcare condition. Following some research, my team member decided to use the case of a real patient to explain the interventions. However, she identified the patient by name and described the context of her care, including the names of the nurses involved. I spoke to my colleague privately after the discussion and reminded her of the NMC (2015) Code’s requirements on patient autonomy and the need to protect the patient’s privacy. I asked her to use a pseudonym instead when discussing the patient’s case. My colleague accepted my suggestion and protected the patient’s identity during subsequent discussions. On reflection, I felt that my decision to advise my colleague on how to discuss patient care was based on the ethical principle of patient autonomy.

Discussion and Analysis

From my participation in teams and groups throughout the module, I was able to develop effective communication skills. Specifically, I learned how to listen to and show compassion towards my colleagues and, during placement, my patients when they conversed with me. Kourkouta and Papathanasiou (2014) have emphasised that effective communication skills are crucial in healthcare settings and when working in teams. These skills include recognising both verbal and non-verbal messages (Johnston, 2013). Patients who feel that their nurses are listening intently tend to report higher satisfaction with the care they receive (Kourkouta and Papathanasiou, 2014). Effective communication skills are also necessary for resolving conflicts in teams and understanding the perspectives of others (Craig and Moore, 2015). In nursing teams, and when working with patients, it is recognised that conflicts of ideas occur. Hence, the ability to communicate effectively and resolve conflicts will be necessary in preparing for my future role as a registered nurse (Craig and Moore, 2015).

Apart from effective communication, I also learned how to motivate my fellow team members. Motivation is crucial in team working since it helps team members to complete tasks. In my first group, team motivation was not practised. In contrast, my second team was able to use motivation to help team members accept and carry out tasks. I realised that the main difference was the support that members received in the second group. Craig and Moore (2015) state that team support is critical since its absence can create dissatisfaction and loss of motivation. In addition to motivational skills, I also saw the importance of change management in our team; in my first group, change management was not practised. Managing change is critical in healthcare practice. Thorpe (2015) has stated that planned change, which is described as purposeful, requires collaborative effort and the presence of a change agent. The NMC (2015) has emphasised that nurses must deliver quality care that is based on evidence, which implies that nurses have to continually update their skills and practice and that changes in practice have to be made. However, implementing change in practice is challenging: it is suggested that almost 70% of change projects do not succeed (Mitchell, 2013).

In my experience with the group, I also realised the necessity of recognising the factors that promote or deter change. Mitchell (2013) suggests that advances in science, shortages in the nursing workforce, an ageing population, the need to increase patient satisfaction and the rising cost of treatment all drive change, while inappropriate leadership, poor communication and under-motivated staff deter the uptake of change in practice (O’Neal and Manley, 2007). In my future practice, I will have to identify factors that promote change. On reflection, I was not able to promote change in my first group; I could have assisted the team leader in analysing the factors that deterred my colleagues from accepting their assigned tasks.

Integrity was also practised in the subsequent groups that I was involved in. Specifically, power was not misused, as all team members in these groups had equal opportunities to participate in decision-making. In addition, the team leader and group members exercised honesty and transparency in the decisions made. Finally, ethical decision-making was observed: no identifiable patient information was mentioned during case-study discussions, and patient autonomy was respected. The NMC (2015) has reiterated the importance of protecting the privacy and autonomy of patients.

Conclusion

Practising effective leadership skills and ethical decision-making is important when working in teams and when providing quality care to patients. An inability to work effectively could result in poor performance, which in turn could affect the quality of care that my future patients receive. Developing these leadership skills early in my undergraduate years will help prepare me for my role as a registered nurse.

Action Plan

As part of my action plan, I will continue to engage in training on how to develop effective communication skills. Specifically, I will refine my skills in showing empathy when listening to my patients and colleagues. The ability to demonstrate empathy is crucial since it helps patients feel that they matter to the team (Fowler, 2015).

References:
Craig. M. & Moore. A. (2015) ‘Providing support for teams in difficulty’, Nursing Times. 111(16), pp. 21 – 23.
Fowler. J. (2015) ‘What makes a good leader?’, British Journal of Nursing, 24(11), pp. 598 – 599.
Johnston, B. (2013) ‘Patient satisfaction and its discontents’, Journal of the American Medical Association, 173(22), pp. 2025-2026.
Kourkouta, L. & Papathanasiou, I. (2014) ‘Communication in nursing practice’, Materia Socio Medica, 26(1), pp. 65-67.
Mitchell, G. (2013) ‘Selecting the best theory to implementing planned change’, Nursing Management, 20(1), pp. 32-37.
Nursing and Midwifery Council (NMC, 2015) The Code: Professional Standards of practice and behaviour for nurses and midwives [Online]. Available from: http://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/revised-new-nmc-code.pdf (Accessed: 12 May, 2015).
O’Neal, H. & Manley, K. (2007) ‘Action planning: making change happen in clinical practice’, Nursing Standard, 21(35), pp. 35-39.
Thorpe. R. (2015) ‘Planning a change project in mental health nursing’, Nursing Standard, 30(1), pp. 38 – 44.

Preventing Wound Infection in Orthopaedic Surgery

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

Wound infection following surgery, now more commonly termed Surgical Site Infection (SSI), refers to infection at or near a surgical site within 30 days of surgery, or within one year if the procedure involved insertion of an implant (Illingworth et al., 2013; Owens and Stoessel, 2008). While definitive statistics on the incidence of SSI are difficult to establish given the wide range of surgical procedures, environments and patients, available data indicate that SSI accounts for more than 15% of reported hospital-acquired infections (HAI) among all patients and about 38% among surgical patients (Campbell et al., 2013; Owens and Stoessel, 2008; Reichman and Greenberg, 2009). Data from across Europe also indicate that, depending on the surgical procedure and/or surveillance methods used, the incidence of SSI may be as high as 20% across all surgical procedures (Leaper et al., 2004). Although HAIs in general, and SSIs in particular, are relatively less common in orthopaedic surgery than in other surgical specialties (Johnson et al., 2013), when they do occur, osteo-articular infections, for example, can be very difficult to treat, with a significant risk of lifelong recurrence (Faruqui and Choubey, 2014). SSI leads to significantly higher costs of care through longer hospital stays; it poses a major burden on healthcare providers and the healthcare system, jeopardises patient outcomes and remains a major cause of morbidity and mortality despite improvements in surgical procedures and infection control techniques (Owens and Stoessel, 2008; Tao et al., 2015). Consequently, understanding evidence-based approaches to reduce or prevent SSI has attracted significant interest from researchers, healthcare administrators and policy-makers. This essay reviews current best practice in the prevention of SSI and offers recommendations for future practice within orthopaedic settings.

Rationale

This review of best practice in the prevention of SSI following orthopaedic surgery is motivated by two main considerations. First, despite considerable improvements in surgical procedures and techniques in most orthopaedic settings, SSI negatively impacts patient outcomes and imposes significant costs on the healthcare system. According to a case-control study reported by Owens and Stoessel (2008), patients who suffer an SSI are more likely to require readmission to hospital and have more than double the risk of death compared with patients without SSI. In addition, the median duration of additional hospitalisation attributable to SSI was put at 11 days, with the extra cost to the healthcare system estimated at €325 per day (Owens and Stoessel, 2008). Second, the prevention of SSI is far from straightforward. Given the wide range of factors that modify the risk of SSI, a ‘bundle’ approach with ‘systematic attention to multiple risk factors’ is required for effective prevention (Uckay et al., 2013). Thus, by undertaking a state-of-the-art review of orthopaedic SSI prevention techniques and processes, this essay may contribute towards better orthopaedic surgery outcomes for patients and providers.
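
As a purely illustrative aside, the cited figures can be combined to give a rough sense of the cost burden per infection. The short Python sketch below multiplies the reported median excess stay of 11 days by the estimated €325 per day and scales the result by a hypothetical annual caseload; the caseload figure is an assumption introduced here for illustration only and does not come from any of the cited sources.

# Rough, illustrative calculation based on the figures quoted above
# (median excess stay of 11 days at an estimated EUR 325 per day).
# The annual number of SSI cases is hypothetical, not taken from the sources.

excess_days_per_ssi = 11                 # median extra hospital days per SSI episode
cost_per_day_eur = 325                   # estimated extra cost per day (EUR)
hypothetical_ssi_cases_per_year = 100    # assumed caseload for illustration

cost_per_episode = excess_days_per_ssi * cost_per_day_eur
annual_cost = cost_per_episode * hypothetical_ssi_cases_per_year

print(f"Approximate excess cost per SSI episode: EUR {cost_per_episode:,}")
print(f"Approximate annual excess cost for {hypothetical_ssi_cases_per_year} cases: EUR {annual_cost:,}")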

Prevention of SSI in orthopaedic surgery: Best Practices

According to the Health Protection Agency (2011), the most common pathogenic organisms responsible for surgical wound infections in orthopaedic surgery include methicillin-sensitive Staphylococcus aureus (MSSA), methicillin-resistant Staphylococcus aureus (MRSA), coagulase-negative staphylococci (CoNS), Enterobacteriaceae, Enterococcus spp., Pseudomonas spp. and Streptococcus spp., as well as occasional cases of unspecified diphtheroids of the Corynebacterium spp. and other Gram-positive organisms. SSIs can be categorised as superficial incisional, deep incisional or organ space SSI (Reichman and Greenberg, 2009). Superficial incisional SSI refers to infection that involves only the skin and subcutaneous tissue at the point of incision; deep incisional SSI refers to infection of the underlying soft tissues; and organ space SSI refers to infection involving organs or organ spaces that were opened or manipulated during the surgical procedure. Since the risk of developing an SSI, and the specific type of SSI suffered, is determined by factors related to the patient, the procedure and the hospital environment, current best practice and guidelines for preventing SSI can be broadly elaborated under these categories.

Patient-related Practices

Existing patient conditions such as diabetes mellitus, obesity and rheumatoid arthritis have been associated with an increased risk of SSI (Illingworth et al., 2013; Johnson et al., 2013). As part of effective pre-operative patient management, the current body of evidence recommends aggressive glucose control for diabetic patients to reduce the heightened risk of infection due to hyperglycaemia before or after surgery. In patients with rheumatoid arthritis, corticosteroids and anti-tumour necrosis factor (TNF) therapy have been argued to delay wound healing and increase the risk of infection. However, the British Society for Rheumatology (BSR) recommends that, in deciding whether to cease these medications before surgery, the potential benefit of preventing post-surgical infection should be balanced against the risk of a pre-operative disease flare (Dixon et al., 2006; Luqmani et al., 2006). In addition, orthopaedic surgery for patients who currently smoke or are obese (BMI above 30 kg/m2) should be delayed (until smoking cessation or weight loss) to reduce the risk of SSI. For example, a randomised controlled study reported that smoking cessation for just four weeks significantly reduced the odds of incisional SSI (Sorensen et al., 2003), while Namba et al. (2005) reported significantly higher odds of SSI in obese patients (BMI >35 kg/m2) undergoing total hip and knee replacement surgery compared with patients who were not obese.

Screening patients for the presence of MSSA and MRSA, followed by decolonisation, is one of the most widely recommended techniques for preventing SSI. Staphylococcus aureus colonisation is reportedly found in the nares of about 30% of healthy individuals (Kalmeijer et al., 2002), and this nasal carriage of both methicillin-sensitive and methicillin-resistant S. aureus has been demonstrated to be a significant risk factor for SSI. Kelly et al. (2012) reported a significant drop in SSI rates from 2.3% to 0.3% with the use of intranasal mupirocin and triclosan showers to decolonise patients before orthopaedic surgery. Also, a review of eight randomised controlled trials by van Rijen et al. (2008) reported that the use of mupirocin significantly reduced the incidence of MRSA- and MSSA-associated SSI. Guidelines from the National Institute for Health and Care Excellence (NICE, 2008) recommend a combination of nasal mupirocin and chlorhexidine showers for patient decolonisation, while Uckay et al. (2013) indicate that the available evidence from the orthopaedic literature suggests that S. aureus screening, decolonisation and showering constitute a cost-saving, effective strategy for reducing the incidence of SSI in orthopaedic surgery.
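
To put the Kelly et al. (2012) figures into context, the sketch below treats the reported rates (2.3% before and 0.3% after decolonisation) as simple proportions and derives the absolute risk reduction, the relative risk reduction and the approximate number of patients who would need to be screened and decolonised to prevent one SSI. This is a simplified illustration of the arithmetic, not a re-analysis of the study’s data.

# Simplified, illustrative calculation using the Kelly et al. (2012)
# before/after SSI rates; not a formal re-analysis of their data.

rate_before = 0.023   # 2.3% SSI rate before decolonisation
rate_after = 0.003    # 0.3% SSI rate after decolonisation

absolute_risk_reduction = rate_before - rate_after            # percentage-point drop
relative_risk_reduction = absolute_risk_reduction / rate_before
number_needed_to_treat = 1 / absolute_risk_reduction          # patients per SSI prevented

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")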

Surgical Procedure-related Practices

Preoperative preparation of the skin before incision is one of the major avenues for preventing SSI (Kelly et al., 2012). However, there is no consensus on which antiseptic agent offers the most effective protection. While NICE (2008) guidelines suggest that both aqueous and alcohol-based preparations, e.g. povidone-iodine or chlorhexidine, are suitable for skin preparation, Darouiche et al. (2010) and Milstone et al. (2008) have raised concerns about the development of bacterial resistance to chlorhexidine. These studies report the relative superiority of 2% chlorhexidine mixed with 70% isopropyl alcohol, while some experts have suggested increasing the chlorhexidine concentration to 4% or using 10% povidone-iodine (Uckay et al., 2013). Nevertheless, povidone-iodine and chlorhexidine remain the gold standard for preoperative skin preparation.

Also as part of skin preparation, NICE recommends that hair should be removed only if necessary, that removal should take place immediately before surgery, and that electric clippers rather than razor blades should be used. Recent evidence suggests that the use of razor blades can result in microscopic skin cuts that may act as foci for micro-organism colonisation, thus increasing the risk of infection (Owens and Stoessel, 2008).

Preoperative administration of antibiotic prophylaxis to reduce the risk of surgical wound infection is widely accepted in orthopaedic settings, including in bone trauma. Several large-scale studies have demonstrated that antibiotic prophylaxis, when administered properly, helps reduce tissue contamination during surgery to levels that do not overwhelm the patient’s immune system, and can thus reduce the risk of SSI by up to 75% (Chen et al., 2013; Faruqui and Choubey, 2014; Illingworth et al., 2013; Uckay et al., 2013). However, NICE (2008) recommends that potential adverse effects, optimal dosage and the most effective pre-operative timing of administration should be carefully considered to maximise the benefit of antibiotic prophylaxis. Uckay et al. (2013) believe that first- or second-generation parenteral cephalosporins are sufficient in most cases, except where the skin is colonised with MRSA, in which case glycopeptide antibiotics may be more effective; this should, however, be considered in relation to the individual patient’s allergy history. Uckay et al. (2013) also recommend that 30 minutes to 1 hour before incision is the ideal time to administer prophylaxis. While this is generally accepted, NICE (2008) recommends that prophylaxis may be given earlier in procedures where a tourniquet is used.

In addition to minimising the risks from the skin and endogenous flora of the patient, the surgical team must also strive to reduce the chance of contamination from their own person, the tools used or the procedure itself. NICE (2008) recommends that every member of the surgical team scrubs thoroughly before donning surgical gowns and gloves. There is growing support for double-gloving and frequent glove-changing to reduce the risk of contamination from tiny punctures in surgical gloves that often go unnoticed during surgery. While the evidence supporting double-gloving and/or frequent intra-operative glove-changing as a strategy for reducing the risk of SSI remains inconclusive, Widmer et al. (2010) conclude that the practice is supported by expert opinion, especially for lengthy procedures. Moreover, excellent surgical technique is crucial in preventing SSI. For example, maintaining effective haemostasis while preserving adequate blood supply, removing devitalised tissue, eradicating dead space(s), handling tissue gently and managing the surgical wound effectively postoperatively can all help reduce the chance of SSI (Uckay et al., 2013).

Hospital Environment-related Practices

The CDC and the World Health Organization recommend that doors to the operating room should be kept closed and traffic kept to a minimum to reduce potential contamination of surgical sites (Tao et al., 2015). To achieve this, essential equipment and tools should be stored in the operating room. Indeed, the Health Protection Agency (2011) suggests that the frequency of operating room door opening is a positive predictor of increased bacterial counts in the operating room. Airflow in the operating room is another modifier of SSI risk. Vertical or horizontal laminar-flow ventilation systems have been advocated for orthopaedic surgery to achieve ultra-clean air within the operating room and reduce airborne contaminants. Although the evidence on the effect of laminar airflow systems on SSI risk remains inconclusive, the reduction in airborne contaminants is perhaps an added advantage (Owens and Stoessel, 2008; Reichman and Greenberg, 2009).

Lastly, constant surveillance is an important part of preventing SSI. Following up patients post-operatively and reporting the relevant data to the surgical team allows surgical decisions to be improved on the basis of historical records (Skramm et al., 2012). Moreover, surveillance ensures that cases of SSI are identified early and treated before complications arise. Data from surveillance can also form the basis of evidence-based decision-making on facility-specific service improvements to reduce the incidence of SSI and improve outcomes for all concerned (Skramm et al., 2012).
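
As an illustration of how routine surveillance data might be summarised for feedback to a surgical team, the sketch below computes a cumulative SSI incidence per procedure type from a small, entirely hypothetical set of follow-up records; both the record structure and the figures are assumptions made for demonstration only.

# Hypothetical surveillance records: (procedure type, SSI detected on follow-up).
# Structure and values are invented for illustration; real surveillance
# schemes collect far richer data (dates, depth of infection, organism, etc.).
from collections import defaultdict

records = [
    ("hip replacement", False), ("hip replacement", True),
    ("knee replacement", False), ("knee replacement", False),
    ("hip replacement", False), ("knee replacement", True),
]

totals = defaultdict(int)
infections = defaultdict(int)
for procedure, had_ssi in records:
    totals[procedure] += 1
    if had_ssi:
        infections[procedure] += 1

for procedure in totals:
    incidence = infections[procedure] / totals[procedure]
    print(f"{procedure}: {infections[procedure]}/{totals[procedure]} "
          f"({incidence:.1%} cumulative SSI incidence)")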

Recommendations

This essay has reviewed current knowledge on surgical site infection and strategies to reduce its incidence. It is pertinent to state that, despite the various precautions elaborated above, complete eradication of surgical site contamination is almost impossible, as some endogenous micro-organisms always remain and environmental factors cannot be totally eliminated. To reduce the incidence of SSI to the barest minimum, the following are recommended:

It is crucial to adopt a ‘bundle’ approach that ensures that patient-, procedure- and facility-related factors are controlled for as far as possible.
While improving surgical and care delivery is always crucial, surveillance and data collection should also be promoted to ensure that changes and improvements in procedures and facility practices are evidence-based.
New technologies and strategies are continually being developed to reduce complications such as SSI and to improve outcomes for patients; it is important to stay abreast of these developments so that orthopaedic surgery is not only evidence-based but also contemporary, achieving the best possible outcome for all parties.

Conclusion

Surgical site infection (SSI) poses a significant challenge to patients undergoing orthopaedic surgery, to the surgical team and to the healthcare system in general. SSI negatively impacts patient outcomes and imposes unnecessary demands on healthcare resources. Fortunately, much of the burden associated with SSI can be avoided. This review identifies the multitude of patient- and procedure-related factors that modify SSI risk and highlights various evidence-based strategies to mitigate these risks. The paper demonstrates that there is consensus in the literature that, by screening and subsequently decolonising patients, administering antibiotic prophylaxis, ensuring that surgical tools, equipment and garments are properly sterilised and keeping the operating room free of airborne contaminants, cases of surgical wound infection in orthopaedic surgery can be effectively prevented.

Bibliography

Campbell, K. A., Phillips, M. S., Stachel, A., Bosco III, J. A. and Mehta, S. A. (2013) Incidence and risk factors for hospital-acquired Clostridium difficile infection among inpatients in an orthopaedic tertiary care hospital. Journal of Hospital Infection, 83(2), pp. 146-149.

Chen, A. F., Wessel, C. B. and Rao, N. (2013) Staphylococcus aureus Screening and Decolonization in Orthopaedic Surgery and Reduction of Surgical Site Infections. Clinical Orthopaedics and Related Research, 471(7), pp. 2383-99.

Darouiche, R. O., Wall, M. J., Itani, K. M. F., Otterson, M. F., Webb, A. L., Carrick, M. M., Miller, H. J., Awad, S. S., Crosby, C. T., Mosier, M. C., AlSharif, A. and Berger, D. H. (2010) Chlorhexidine–Alcohol versus Povidone–Iodine for Surgical-Site Antisepsis. New England Journal of Medicine, 362(1), pp. 18-26.

Dixon, W. G., Watson, K., Lunt, M., Hyrich, K. L., Silman, A. J. and Symmons, D. P. M. (2006) Rates of serious infection, including site-specific and bacterial intracellular infection, in rheumatoid arthritis patients receiving anti–tumor necrosis factor therapy: Results from the British Society for Rheumatology Biologics Register. Arthritis & Rheumatism, 54(8), pp. 2368-2376.

Faruqui, S. A. and Choubey, R. (2014) Antibiotics Use in Orthopaedic Surgery; An Overview. National Journal of Medical and Dental Research, 2(4), pp. 52-58.

Health Protection Agency (2011) Sixth report of the mandatory surveillance of surgical site infection in orthopaedic surgery, April 2004 to March 2010. London: Health Protection Agency.

Illingworth, K. D., Mihalko, W. M., Parvizi, J., Sculco, T., McArthur, B., el Bitar, Y. and Saleh, K. J. (2013) How to minimize infection and thereby maximize patient outcomes in total joint arthroplasty: a multicenter approach: AAOS exhibit selection. The Journal of bone and joint surgery. American volume, 95(8), pp. 1.

Johnson, R., Jameson, S. S., Sanders, R. D., Sargant, N. J., Muller, S. D., Meek, R. M. D. and Reed, M. R. (2013) Reducing surgical site infection in arthroplasty of the lower limb: A multi-disciplinary approach. Bone and Joint Research, 2(3), pp. 58-65.

Kalmeijer, M. D., Coertjens, H., van Nieuwland-Bollen, P. M., Bogaers-Hofman, D., de Baere, G. A. J., Stuurman, A., van Belkum, A. and Kluytmans, J. A. J. W. (2002) Surgical Site Infections in Orthopedic Surgery: The Effect of Mupirocin Nasal Ointment in a Double-Blind, Randomized, Placebo-Controlled Study. Clinical Infectious Diseases, 35(4), pp. 353-358.

Kelly, J. C., O’Briain, D. E., Walls, R., Lee, S. I., O’Rourke, A. and Mc Cabe, J. P. (2012) The role of pre-operative assessment and ringfencing of services in the control of methicillin resistant Staphlococcus aureus infection in orthopaedic patients. The Surgeon, 10(2), pp. 75-79.

Leaper, D. J., van Goor, H., Reilly, J., Petrosillo, N., Geiss, H. K., Torres, A. J. and Berger, A. (2004) Surgical site infection – a European perspective of incidence and economic burden. Int Wound J, 1(4), pp. 247-73.

Luqmani, R., Hennell, S., Estrach, C., Birrell, F., Bosworth, A., Davenport, G., Fokke, C., Goodson, N., Jeffreson, P., Lamb, E., Mohammed, R., Oliver, S., Stableford, Z., Walsh, D., Washbrook, C. and Webb, F., on behalf of the British Society for Rheumatology and British Health Professionals in Rheumatology Standards, Guidelines and Audit Working Group (2006) British Society for Rheumatology and British Health Professionals in Rheumatology Guideline for the Management of Rheumatoid Arthritis (the first two years). Rheumatology, 45(9), pp. 1167-1169.

Milstone, A. M., Passaretti, C. L. and Perl, T. M. (2008) Chlorhexidine: expanding the armamentarium for infection control and prevention. Clin Infect Dis, 46(2), pp. 274-81.

Namba, R. S., Paxton, L., Fithian, D. C. and Stone, M. L. (2005) Obesity and perioperative morbidity in total hip and total knee arthroplasty patients. J Arthroplasty, 20(7 Suppl 3), pp. 46-50.

National Institute for Health and Care Excellence (2008) Surgical site infections: prevention and treatment. Clinical guideline. Manchester: NICE.

Owens, C. D. and Stoessel, K. (2008) Surgical site infections: epidemiology, microbiology and prevention. Journal of Hospital Infection, 70, Supplement 2, pp. 3-10.

Reichman, D. E. and Greenberg, J. A. (2009) Reducing Surgical Site Infections: A Review. Reviews in Obstetrics and Gynecology, 2(4), pp. 212-221.

Skramm, I., Saltyte Benth, J. and Bukholm, G. (2012) Decreasing time trend in SSI incidence for orthopaedic procedures: surveillance matters! Journal of Hospital Infection, 82(4), pp. 243-247.

Sorensen, L. T., Karlsmark, T. and Gottrup, F. (2003) Abstinence from smoking reduces incisional wound infection: a randomized controlled trial. Ann Surg, 238(1), pp. 1-5.

Tao, P., Marshall, C. and Bucknill, A. (2015) Surgical site infection in orthopaedic surgery: an audit of peri-operative practice at a tertiary centre. Healthcare Infection, 20(2), pp. 39-45.

Uckay, I., Hoffmeyer, P., Lew, D. and Pittet, D. (2013) Prevention of surgical site infections in orthopaedic surgery and bone trauma: state-of-the-art update. Journal of Hospital Infection, 84(1), pp. 5-12.

van Rijen, M., Bonten, M., Wenzel, R. and Kluytmans, J. (2008) Mupirocin ointment for preventing Staphylococcus aureus infections in nasal carriers. Cochrane Database Syst Rev, (4), CD006216.

Widmer, A. F., Rotter, M., Voss, A., Nthumba, P., Allegranzi, B., Boyce, J. and Pittet, D. (2010) Surgical hand preparation: state-of-the-art. J Hosp Infect, 74(2), pp. 112-22.

Impact of Drug Abuse on Health of Teenagers Aged 13-19

This work was produced by one of our professional writers as a learning aid to help you with your studies

Literature Review
3.0 Introduction

This chapter provides a critical literature review of a small number of sources considered particularly useful in exploring the two key themes of this dissertation. The first theme is the impact of drug abuse on the health of teenagers aged 13-19 in London, while the second is the impact of governmental strategies in tackling drug abuse amongst teenagers aged 13-19 in London. These themes are discussed using the selected resources, and their quality, methodological approach, relevance and ethical and anti-oppressive practices all form part of the critical review. The chapter finishes with a short summary bringing these key ideas together.

3.1 The Impact of Drug Abuse on the Health of Teenagers Aged 13 – 19 in London

The first theme investigates the impact of drug abuse on specific aspects of the health of teenagers in London. Two key sources form the core of the critical review for this theme. Neither relates solely to the target population, and in each case some extrapolation of findings is made in order to describe the likely characteristics of 13-19 year olds in London.

The first source is the case-control study carried out by Di Forti et al. (2015:1), briefly discussed in Chapter Two above. Looking more closely at this study, and reviewing it critically, it remains a useful article, as it focuses on the mental health impacts of cannabis and shows a clear association between the use of the drug in its high-potency form (skunk) and psychosis. It might not at first appear that the study is relevant given that it started in 2005; however, it continued recruiting for over six years and amassed a wealth of data on individuals abusing drugs – specifically high-potency and easily available cannabis.

The research study used a primary research methodology. For the recruitment of cases, the authors approached all patients (aged 18-65 years) with first-episode psychosis presenting at the inpatient units of the South London and Maudsley Hospital. They invited people to participate in the study only if they met the International Classification of Diseases 10 criteria for a diagnosis of non-affective (F20-F29) or affective (F30-F33) psychosis, which they validated by administering the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) (Di Forti et al, 2015:2). For the controls, the authors used internet and newspaper adverts and also distributed leaflets on public transport and in shops and job centres. The controls were given the Psychosis Screening Questionnaire and were excluded if they met the criteria for a psychotic disorder. While the two groups included only the oldest two years of the target population for this dissertation, i.e. 18 and 19 year olds, the study was located in London and, on analysis, appeared to indicate a number of characteristics that could reasonably inform conclusions about younger teenagers.

All participants (cases and controls) included in the study gave written informed consent under the ethical approval obtained from the Institute of Psychiatry Local Research Ethics Committee. There did not appear to be any unethical practices, but the study had the potential to be oppressive: because the patients presenting at the clinics, and those with access to skunk, were more likely to be of certain ethnic groups – especially of black West Indian origin – it could be argued that the study to some extent misrepresented the populations of south-west London, and more specifically the West Indian communities found there. In other words, the inclusion of participants from these backgrounds might give observers an unjust view of that ethnic group or of the population of that area of London as a whole.

The method used with the participants was quantitative and involved questionnaire assessments, specifically the collection of socioeconomic data and the Cannabis Experience Questionnaire modified version (CEQmv), which included data on history of tobacco, alcohol and other recreational drug use, and detailed information on cannabis use (i.e. age at first use, duration of use, frequency of use and type of cannabis used) (Di Forti et al, 2015:2). Between 2005 and 2011, the researchers approached 606 patients, of whom 145 (24%) refused to participate; 461 patients with first-episode psychosis were therefore recruited. Using a range of statistical tests, and adjusting for a number of variables including the frequency of cannabis use and the type of cannabis used, the authors found that controls were more likely to be occasional users of hash, whilst frequent users were more likely to be using skunk. They also found, using logistic regression, that people who had started using cannabis at a younger age had a greater risk of developing psychotic episodes (Di Forti et al, 2015:5).
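
For readers unfamiliar with how a case-control analysis of this kind quantifies risk, the short Python sketch below computes an odds ratio and an approximate confidence interval from a hypothetical 2x2 exposure table. The counts are invented for illustration and are not Di Forti et al.’s data; the logistic regression used by the authors yields the same kind of measure, adjusted for confounders, as the exponentiated coefficient of the exposure term.

# Illustrative only: hypothetical counts, not Di Forti et al.'s data.
# In a case-control design the odds ratio (OR) is the standard measure
# of association between an exposure (e.g. daily skunk use) and an
# outcome (first-episode psychosis).
import math

# Hypothetical 2x2 table: exposed/unexposed among cases and controls
cases_exposed, cases_unexposed = 120, 341        # cases = patients with psychosis
controls_exposed, controls_unexposed = 50, 650   # controls = general population sample

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Approximate 95% confidence interval on the log-odds scale
se_log_or = math.sqrt(1/cases_exposed + 1/cases_unexposed +
                      1/controls_exposed + 1/controls_unexposed)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# A logistic regression with exposure (and confounders) as predictors
# gives an adjusted OR as exp(coefficient) for the exposure term.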

The second resource to be analysed was the study by McCardle (2004). This was a literature review focusing on the impacts of substance abuse by children and young people. Although it did not use primary research, it provided a useful analysis of a number of other studies. Although the age of this study meant that it might have limited relevance to teenagers in 2017, in fact it related directly to the findings of the later Di Forti et al. study, because McCardle (2004:1) found that cannabis was becoming stronger than it had been in the past – just as Di Forti et al. found that skunk use was increasing and that it was of much higher potency than previously. McCardle (2004:2) also found that there was a range of mental health issues resulting from the use of cannabis, including an increased risk of suicide and an increase in aggressive, dissociated behaviours, anxiety, depression and other similar problems. Another useful aspect of this research was that it identified the problems of terminology relating to the gathering and analysis of data – so many different terms are used that it is often difficult to ascertain accurate trends and outcomes (McCardle, 2004:3). While it would have been preferable to use a London-based source, or one that engaged participants of the target age group through a primary method, the lack of academic literature meant that this study was valuable in that it analysed other studies as well as existing datasets from the UK government. The article also focused on the social impacts of cannabis, for example the developmental impacts and the negative effects on education, both of which can lead to poor outcomes in terms of quality of life and attainment in later life.

The findings from these two articles provided valid evidence of the relationship between the use of cannabis and the mental, emotional, social and physical health of teenagers and young people. Although neither focused specifically on the dissertation’s target age group, both provided relevant points of interest, and it is possible to extrapolate from them to state that teenagers in London who abuse cannabis are very likely to be at risk of experiencing the various health effects identified above.

3.2 The Impact of Government Strategies in Tackling Drug Abuse Amongst Teenagers Aged 13-19 in London

Finding academic research sources that focus on recent government strategies aimed at the target group in London was very challenging. For the most recent strategy – the Troubled Families Programme (TFP) – Lambert and Crossley (2017:1) get to the heart of the ethical and anti-oppressive practice issue, as they argue that this government strategy is one of a wider spectrum of policies that locates problems within the family itself and emphasises behaviour as the target for action, irrespective of the socio-economic influences that exist. Theirs is a review study – critically reviewing a strategy – and it is very current, as the TFP has recently been revisited by the Government, which is considering an extension despite evidence that the programme has not met its targets or expected outcomes. While the article is not itself a piece of primary research, the authors have recently conducted primary research on this issue through interviews, and the article refers to this work. They found that the TFP has perpetuated the view of target families as an ‘underclass’, as ‘neighbours from hell’ and as expensive and very difficult to ‘treat’. While the TFP took a holistic approach, using one individual or team to work with families on all of their problems, Lambert and Crossley (2017:4) and others (Bonell et al, 2016) argue that the underlying attitude of the Government, and of the strategy itself, meant that its approach was unlikely to succeed.

3.3 Summary

This chapter showed that there are clear health impacts associated with the use of cannabis; some of these impacts are severe and often include mental illness and behavioural change, especially where high-potency cannabis is used. It also showed that, despite many years of government strategies and policies, there still does not appear to be a solution that can reduce the use or the impacts of cannabis and other drugs. The final chapter provides a reflection on the research undertaken for this dissertation and offers some brief conclusions and recommendations.

CHAPTER FOUR – REFLECTIONS, CONCLUSIONS AND RECOMMENDATIONS
4.0 Introduction

In this final chapter, three tasks are completed. First, a reflective account of the research is provided. In research and practice, reflection on a task and its outcome is very important because it gives the author the opportunity to look back and learn from their actions. There are in fact two types of reflection, both of which might be applicable to this work. The first is ‘reflection’ itself, considered to be a ‘process or activity’ that involves thinking and is judged to include the cognitive processes of problem finding and problem solving (Leitch and Day, 2000:180). The second is ‘reflective practice’: the use of reflection and reflective skills to transfer learnt knowledge, i.e. theories, into the everyday practice of an individual. Reflective practice has been shown to be very important for individual practitioners, as it aids their ability to learn from their actions and the associated outcomes and enables them to develop improvements based on experience and theoretical knowledge (White et al, 2016:9).

There are two main models of reflection that can be used to support the reflective researcher or the reflective practitioner. These are Kolb’s model of experiential learning (Kolb, 1984) and Gibbs’ reflective cycle (Gibbs, 1988). Gibbs developed his model as a refinement of the earlier Kolb model, and it is Gibbs’ model that is used in this dissertation.

Figure 1: Gibbs’ Model of Reflection (Park and Son, 2011:2)

The Gibbs model provides a researcher with the opportunity to gain a deep understanding of what they have learned (Park and Kastanis, 2009:11), as well as the strengths and weaknesses of their work, their underlying values, the shortcomings of their approach and the areas for improvement (Park and Son, 2011:3). For these reasons the Gibbs model is applied below.

4.1 Reflection on the Process of the Research

4.1.1 The Experience

The process of writing the dissertation was both challenging and enjoyable. It was enjoyable because any research activity is one of problem solving and of searching for information, and these two activities can be very satisfying when they result in finding out something new. While primary research is often seen as the most valid form of activity, in fact secondary research, based as it is on the gathering of existing data, and the synthesis of that data to suggest new outcomes or findings, can be just as valid, and just as difficult as carrying out processes that collect new or primary data.

4.1.2 The Challenges and the Achievements

As alluded to a number of times throughout this dissertation, there were several difficulties or challenges. The choice of topic was, in retrospect, a good one because it focused on a population group in a particular location, London, that had clearly received little research attention previously. While substantial data have been gathered on drug use and abuse more generally in the UK and across age ranges, very little has been done in relation to the 13-19 year old age group. In fact, it was this aspect that caused the greatest difficulty in completing the dissertation: the lack of resources and data relevant to this age group, in London, for any kind of drug abuse – other than newspaper articles that often framed drug abuse in relation to crime, ethnic minorities or deprivation – meant that the data that were available had to be used carefully. For example, it was possible to obtain academic resources, such as that of Di Forti et al, that looked at drug abuse, specifically cannabis, in London, but only two years of respondents in that study (18 and 19 year olds) fell within the scope of this dissertation, whilst the study by McCardle (2004) was relevant to a wider age group (15-24) but was not based in London, so it could point to some useful outcomes but lacked specific locational knowledge. In relation to the strategies developed to address the issue, academic resources were again very limited, made even more challenging by the fact that the most recent strategies, i.e. those introduced in the past five years, have yet to undergo much academic analysis; and since they represent a very different approach from those used a decade or so ago, there is little point in trying to evaluate those older approaches.

Despite the difficulties outlined above, it was felt that there were a number of positives obtained from the research. As there was such a dearth of resources available, this dissertation appears to provide new research and new analysis of data for this group of the population in this location. As a result, the author felt that the validity of their choice of topic and their research approach was justified to some extent. In terms of time management, it was felt that the research was planned well, and even though the search for data and resources took longer than expected, it was still possible to incorporate the timing required into the overall research schedule. The research also challenged the overall beliefs and judgements held by the author at the start of the process. Whilst it was felt that some degree of knowledge was held about these issues, there were some preconceptions held about the type of teenagers that participated in drug abuse. The gathering of the data enabled the author to begin to challenge those preconceptions especially in relation to the factors that cause people of this age to start abusing drugs. This new understanding allowed the author to start to view the issues differently.

4.1.3 Changes Required

There are a number of changes that could be implemented to make the research easier and to address the question of limited resources. Firstly, the age range would be extended to include children and young people from 0 to 24 or 25 years of age, as this would enable a greater number of data sources to be used, and these could be more easily analysed and extrapolated for the teenage years. Second, drug abuse by parents and its impact on the health of their children would be included, as this issue consistently emerged as a key problem for children and teenagers throughout the data collection and can be a major factor in determining whether teenagers participate in drug use and abuse. Finally, although London would remain the locational focus, because a lot of data is collected for London and the South-East together, the locational boundaries would be stretched to incorporate this wider area within the research. If these changes were put into place, it would be a positive exercise to undertake the research process again to see whether it was possible to obtain data and achieve findings that were even more valuable than those already developed.

4.1.4 Applying Gibbs’ Model of Reflection

Figure 2: Gibbs’ Reflective Model Applied to This Research

Having applied Gibbs’ model of reflection, it is helpful to see that reflection carried out in stages can lead to a targeted plan of action, which can form the framework for new research. Gibbs’ model does not necessarily allow for complexity, however: as a linear-cyclical model, used in this way it cannot represent the many complexities and variables that characterise the issue of drug abuse amongst teenagers.

4.2 Conclusions

The research question that this dissertation set out to examine was:

What patterns of drug abuse occur amongst teenagers in London, and what are the causes, health impacts and possible solutions?

Despite the difficulties in obtaining specific data for teenagers aged 13-19 in London, there was sufficient information available to provide an answer to this research question. From the prevalence perspective, the data showed that while the prevalence of drug abuse was decreasing overall, there were areas of London with disproportionately higher levels, especially amongst specific ethnic groups. Amongst all drug abusers, however, cannabis was the most used drug. The causes of drug abuse amongst teenagers were found to be a complex mixture of environmental, emotional, mental health and peer-pressure-related factors, meaning that addressing the problem will always be challenging for policy makers and healthcare providers.

In relation to the health impacts, the previous chapter revealed clear evidence that cannabis use is associated with mental health outcomes, including psychosis and the development of schizophrenia, in drug abusers of any age. Not only that, but it is also quite apparent that teenagers engaging in drug abuse are much more likely to experience other health-related problems because of their attitude to risk and their participation in high-risk behaviours when under the influence of the drug. These other problems include contracting STIs, teenage pregnancy, the taking of other drugs and substances that have more severe health impacts, and participation in criminal activities, sometimes involving violence, in an attempt to obtain money to buy drugs.

Looking at the strategy most recently developed to try to address the problem of teenage drug use in London, it is apparent that it has not succeeded in its aims, objectives or targets. This seems to be largely the result of the oppressive stance of such strategies held by UK Governments over recent years – an attitude that views those with drug abuse and other problems as ‘problem families’ that need to be ‘solved’, instead of trying to understand what it is about society in general that leads to such families existing in the first place. A focus on social, economic and environmental issues, rather than on the families themselves, might result in a better outcome.

4.3 Recommendations

Having carried out a review of the literature surrounding this issue, some key recommendations can immediately be made. The first relates to the data available for this issue: as indicated previously, one of the challenges of completing this dissertation was the paucity of data relating to the specific population being studied. It is therefore recommended that research studies, or government agencies collecting data, should target this age group specifically when data on drug use or abuse are being collected. An alternative is for researchers to obtain the raw data from the various data collection agencies and sources, extract the data that cross the boundaries of the target population group, and reprocess those data for the target age group. The second recommendation relates not to the data but to the issues. It appears that controlling the availability of drugs is difficult, especially as there are so many types and some, like cannabis, appear to be readily available. As there seems to be an ongoing reduction in the number of young people using these illegal drugs, it would seem sensible to capitalise on this trend by providing better educational initiatives to inform young people of the dangers to their health. It would also be appropriate to try to determine which factors are most likely to cause teenagers to start abusing drugs and to find ways of addressing these factors more effectively than has been the case to date.

References
Bonell, C., McKee, M., and Fletcher, A. (2016). Troubled Families, Troubled Policy making. BMJ, 355, doi: https://doi.org/10.1136/bmj.i5879.
Di Forti, M., Marconi, A., Carra, E., Fraietta, S., Trotta, A., Bonomo, M., Bianconi, F., Gardner-Sood, P., O’Connor, J., Russo, M., Stilo, S.A., Marques, T.R., Mondelli, V., Dazzan, P., Pariante, C., David, A.S., Gaughran, F., Atakan, Z., Iyegbe, C., Powell, J., Morgan, C., Lynskey, M., and Murray, R.M. (2015). Proportion of patients in south London with first-episode psychosis attributable to use of high potency cannabis: a case-control study. Lancet Psychiatry, http://dx.doi.org/10.1016/S2215-0366(14)00117-5
Gibbs, G. (1988). Learning by doing: A guide to teaching and learning. London: FEU.
Kolb, D. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs NJ: Prentice-Hall
Lambert, M., and Crossley, S. (2017). ‘Getting with the (Troubled Families) Programme’: A Review. Social Policy and Society, 16(1), pp. 87 – 97.
Leitch, R., and Day, C. (2000). Action Research and Reflective Practice: Towards a Holistic View. Educational Action Research, 8(1), pp. 179 – 193.
McCardle, P. (2004). Substance Abuse by Children and Young People. Archives of Disease in Childhood, 89(8), pp. 701.
Park, J.Y., and Kastanis, L.S. (2009). Reflective learning through social network sites in design education. International Journal of Learning, 16(8), pp. 11-22.
Park, J.Y., and Son, J.B. (2011). Expression and Connection: The Integration of the Reflective Learning Process and the Public Writing Process into Social Network Sites. Journal of Online Learning and Teaching, 7(1), pp. 1-6.
White, P., Laxton, J., and Brooke, R. (2016). Reflection: Importance, Theory and Practice. Leeds: University of Leeds.

MRSA Resistance to Anti-biotics

This work was produced by one of our professional writers as a learning aid to help you with your studies

Discuss how MRSA became resistant to antibiotics and became such a prevalent organism associated with British hospitals. Explain how MRSA is treated and touch upon the wider implications for antibiotics and the future of healthcare.

Introduction

It may be argued that micro-organisms are the most successful life form on the planet, partly due to their pervasive presence and their utilisation of any available food source, including humans. The ubiquitous presence of micro-organisms and their astronomic numbers give rise to many mutations, which account for rapid evolutionary adaptation and, in part, for emerging antibiotic resistance (Evans and Brachman 1998). Bacteria have evolved numerous structural and metabolic virulence factors that enhance their survival in the host. One such bacterium is Meticillin Resistant Staphylococcus aureus (MRSA).

What is MRSA and why did resistance occur?

Members of the genus Staphylococcus are non-motile, Gram-positive cocci, measuring 0.5-1.5µm in diameter, and are commonly found in the nose and on the skin. They can occur singly, in pairs, in short chains or in grape-like clusters. There are several species, but Staphylococcus aureus has been a significant human pathogen for many years. It differs from other staphylococci in producing the enzyme coagulase. Its potential virulence factors include surface proteins that promote colonisation and membrane-damaging toxins that can either damage tissue or invoke other disease symptoms. Before the emergence of antibiotics, the mortality rate for Staphylococcus aureus infections was 80% (Fedtke et al 2004). This versatile organism has developed resistance to Meticillin through a mobile genetic element, the mecA gene, which is found on the Staphylococcal cassette chromosome mec (SCCmec) and mediates resistance to β-lactam antibiotics such as Meticillin (Greenwood 2000).

Of the current antimicrobial-resistant organisms, Meticillin-resistant Staphylococcus aureus (MRSA) is probably the most challenging in a hospital setting. MRSA first came to public attention in the UK in the 1980s, when the first epidemic strain, Epidemic Meticillin-resistant Staphylococcus aureus (EMRSA), was identified. A further sixteen epidemic strains have subsequently been recognised. Each strain has its own genetic makeup and displays resistance to different antibiotics. EMRSA-15 and EMRSA-16 are the most common strains found in the UK, accounting for 96% of all MRSA bacteraemia. Worryingly, a new strain, EMRSA-17, was identified in 2000; it displayed resistance not only to the previously recognised antibiotics but also to Fusidic acid, Rifampicin, Tetracycline and sometimes Mupirocin.

Evolution and natural selection have produced the mechanisms through which micro-organisms adapt to their ever-changing environment, including resistance to natural and man-made antibiotics. Bacteria such as Staphylococcus aureus are adept at infecting and colonising humans and also aid other microbes in causing infection by producing anti-inflammatory molecules that allow microbes to evade the body's immune system (Fedtke et al 2004). They are also able to hide in biofilms and to withstand host antimicrobial peptides called defensins. Bacteria that succeed with these evasive strategies can pass the underlying genes both to their progeny and laterally to other bacteria through horizontal gene transfer (Bush 2004). However, this is not a new phenomenon: as far back as 1940, the journal Nature published an article describing the discovery of an enzyme capable of destroying Penicillin, now known as beta-lactamase.

Staphylococcus aureus uses two mechanisms to cause infection (Roghmann et al 2005): toxin production and tissue invasion. Toxin production is exemplified by gastroenteritis resulting from consuming staphylococcal enterotoxins in food, while tissue invasion is demonstrated by the classical abscess, composed of pus contained within a fibrin wall and surrounded by inflamed tissue.

Why a hospital problem?

Staphylococci are the classic hospital-acquired bacteria, and Staphylococcus aureus is the commonest cause of surgical site infection. For years, glycopeptides such as Vancomycin have been the first choice for serious Staphylococcus aureus infections. Clinicians are now facing strains with reduced susceptibility to glycopeptides but no decline in virulence (Dancer 2003).

Within the hospital environment there are recognised high-risk areas where patients are at greater risk of infection, such as intensive care units and burns units. Factors associated with a higher risk of MRSA acquisition include previous antibiotic therapy and frequent admissions: the more often a patient is admitted to hospital, the greater the chance of exposure to MRSA and of being prescribed antibiotics. Patients' predisposing factors, for example being immunocompromised or having wounds, make them more susceptible to acquiring MRSA. In addition, healthcare workers and the environment are also potential reservoirs of MRSA. The environment as a reservoir has been more difficult to assess (Dancer 2004), although work by Rayner (2003) confirmed that MRSA has been isolated from patient equipment.

The term 'risk factor', often used in relation to MRSA, refers to the strength of association between an exposure and the odds of going on to develop an infection.
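
To make the notion of 'strength of association' concrete, the short sketch below is purely illustrative: it is written in Python, the counts are invented rather than taken from any study cited here, and the exposure (previous antibiotic therapy) is used only as an example. It simply shows how an odds ratio, one common measure of association, would be calculated from a two-by-two table of exposure against MRSA infection.

# Illustrative only: invented counts for a 2x2 table relating a possible
# risk factor (here, previous antibiotic therapy) to MRSA infection.
#
#                    MRSA infection    No MRSA infection
# Exposed                  30                 70
# Not exposed              10                 90

exposed_cases, exposed_controls = 30, 70
unexposed_cases, unexposed_controls = 10, 90

# Odds of infection in each group
odds_exposed = exposed_cases / exposed_controls        # 30/70, about 0.43
odds_unexposed = unexposed_cases / unexposed_controls  # 10/90, about 0.11

# The odds ratio quantifies the strength of association between
# exposure and outcome; values well above 1 suggest a risk factor.
odds_ratio = odds_exposed / odds_unexposed
print(f"Odds ratio = {odds_ratio:.2f}")  # about 3.86 with these invented figures

An odds ratio well above 1, as in this invented example, would suggest that the exposure is associated with increased odds of infection, although it does not by itself establish causation.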

The factors responsible for increasing resistance are complex and varied, as are the potential strategies for overcoming the problem. Inappropriate prescribing and overuse of antimicrobials by clinicians may be driven by a lack of understanding of the problem and by inadequate surveillance for resistance.

Poor prescribing and increasing resistance are, however, not the only issues in the management of Staphylococcus aureus. Medical microbiologists are pivotal in ensuring the appropriate use of antimicrobials. They can provide clinicians with laboratory reports that contain a restricted number of antimicrobial sensitivities, as well as advising on the correct method and the appropriate specimen to obtain. This saves time and resources, and the patient should therefore receive the appropriate antimicrobial treatment at an earlier stage.

However, it needs to be acknowledged that prescribers prefer, and adhere more closely to, policies that take an educational rather than a restrictive approach. Some view policies as rigid and fixed and relate better to guidelines, which are seen as more flexible and which acknowledge that some patients will fall outside the recommendations (Binyon 2000). There are also legal aspects to consider, as it is more difficult to justify action taken outside a policy than outside a guideline.

Ideally, a guideline will limit antimicrobial prescribing to situations where there is a clear indication for use, with antimicrobials administered for the shortest effective duration. The drug of choice should be appropriate, the narrowest in spectrum, and correct in dose and duration (SIGN 2000). Prophylactic antimicrobials should only be given for the recommended period. Emmerson (2000) argued that perhaps a guideline's most important function is as a vehicle for ensuring regular discussion amongst those concerned.

A study by Harrison (1998) found that approximately 20% of all prescribed antimicrobials relate to hospitalised patients, and that 20-50% of these were unnecessary (equivalent to roughly 4-10% of all antimicrobial prescriptions). The study also revealed that 25-50% of all hospital admissions receive an antimicrobial at some point during their stay, and made the point that even if numerous bacteria are killed during a single course of antimicrobials, if one mutant microbe remains in the patient, the possibility exists for the rapid establishment of a resistant population.

Current problems within the National Health Service exacerbate the issue. These include 'hot' bedding, overcrowding of wards, understaffing, inadequate cleaning, poor laundry services, patient relocation and poor isolation facilities. Dancer and Gemmell (2003) argue that the erosion of hygiene standards stemmed from the ready availability of antimicrobials.

Numerous guidelines have been written in an attempt to control these problems. However, what is good in theory is not always good in practice, and there may be various explanations for this failure. Regardless of how sound the principles are, there may be insufficient resources to implement them; a prime example is the lack of isolation facilities in hospitals (Cooper 1999). There is also wide variation in how resistance is handled in different hospitals: some isolate and treat the patient regardless of whether the patient is colonised or infected. Risk assessment in conjunction with the infection control team on a case-by-case basis is therefore vital when resources are scarce.

Presently, Vancomycin and Teicoplanin are used to treat MRSA infections. The majority of patients are colonised and asymptomatic, carrying MRSA on the skin or in the naso-pharynx. Patients found to be colonised in hospital settings are actively treated, or decolonised, by prescribing five days of a body wash used in the bath or shower and also used to cleanse the hair. The wash contains chlorhexidine gluconate and is effective, but it is known to dry out the skin with prolonged use. In conjunction with the body wash, the patient is also prescribed a nasal cream, applied three times a day for five days to both nares. The cream usually used is Bactroban, which contains Mupirocin. For MRSA cases displaying intermediate or total resistance to Mupirocin, the cream of choice is Naseptin (BNF 2015).

Discussion

Antibiotic resistance may lead to routine infections becoming fatal. Antibiotics are losing their effectiveness at a rate that is both alarming and irreversible, and the media talk of a 'post-antibiotic era' or an 'antibiotic Armageddon'.

So what of the future? Researchers are developing a vaccine. In order to achieve herd immunity, 85% of the population would need to be vaccinated, and the vaccine would also have to provide protection against all the strains to which someone is likely to be exposed. However, limited vaccination of at-risk groups may be possible (Farr 2004). Work is also ongoing on lysostaphin, an enzyme, first described some 40 years ago, that causes the cell wall of Staphylococcus aureus to rupture. As it is specific to Staphylococcus aureus, it would not interfere with normal commensal flora; it could be used to reduce nasal carriage and subsequently reduce infection rates. Early clinical trials have been positive.
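
As a rough indication of where a coverage figure such as 85% might come from (this derivation is not taken from Farr (2004); it assumes homogeneous mixing and a fully effective vaccine), the classical herd immunity threshold p_c is related to the basic reproduction number R_0 by:

p_c = 1 - \frac{1}{R_0} \quad\Rightarrow\quad R_0 = \frac{1}{1 - p_c} = \frac{1}{1 - 0.85} \approx 6.7

On this reading, an 85% coverage target would correspond to an organism with a basic reproduction number of roughly 6-7; if vaccine efficacy were below 100%, the required coverage would need to be higher still.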

Assuming all the issues above were overcome, resistance still would not disappear. Thus there remains a need to continue with research into how and why bacterial mutations occur and into the development of new innovative drugs, vaccines and diagnostics. More resources need to be channelled into education of health care professionals, allied with effective infection control measures.

Every healthcare worker has a duty of care to comply with infection control policies. As long as infection control procedures are adhered to, hygiene improves and antibiotics are used prudently, there is the prospect of bringing MRSA under control in the hospital setting. However, we have to be aware that emphasising the importance of MRSA colonisation via policies and guidelines may result in accidental neglect of the factors that cause infection.

MRSA will continue to spread in the wider community, via both humans and animals, and some of the circulating strains may be highly toxic; with an ageing population and increasing numbers of immunocompromised patients, the danger will only increase. As more advances are made in medicine, these vulnerable populations will also grow. Those at most risk are those in long-term care homes, of which there is an ever-increasing number. While cross-infection routes are relatively easily defined in a hospital setting, the situation in the community is not, and because care homes are major sources of hospital admissions, the impact on the crisis-stricken NHS will continue.

MRSA screening was therefore welcomed when it was introduced across the UK in 2013, following a nationwide study of the efficacy of screening patients on admission to hospital (HPS 2009). The aim of screening patients for MRSA is to identify patients who are colonised or infected with the organism; these patients can then be managed appropriately to reduce the risk of self-infection and of transmitting the organism to other patients.

As for MRSA rates being indicators of quality healthcare, they should be considered as tools that prompt further inquiry, rather than permitting judgements on quality of care.

Conclusion

MRSA has the capability to cause misery, morbidity and even fatalities under certain circumstances. The body is an incredibly complex machine; scientists are making striking advances in elucidating the precise molecular basis for the interaction between adherence surface structures of an organism and corresponding specific surface receptors on a host cell. Much more has still to be learned and microbiology will continue to play a huge part in research in order to understand the mechanisms of pathogenicity and the development of antibiotic resistance. This is essential for future treatment and prevention of infections allowing humans and micro-organisms to continue to co-exist.

Prevention and control of healthcare-acquired infection demand the continual development of intervention strategies aimed at curtailing further antimicrobial resistance and reducing the spread of existing infection. Success, however, will only be achieved with a multidisciplinary approach at both individual and organisational level. Infection prevention has to become an integral part of everyday healthcare practice (Fairclough 2006).

Bibliography

Binyon D. (2000) Restrictive antibiotic policies – how effective are they? Hospital Pharmacist, Vol. 7(7), pp183-187.

British National Formulary (BNF) 69 (March 2015) Joint Formulary Committee. BMJ Publishing Group Ltd and Royal Pharmaceutical Society.

Bush K. (2004) Antibacterial drug discovery in the 21st century. Clinical Microbiology and Infection, Vol. 10 (Supplement 4), pp 10-17.

Cooper B. S., Medley G. F. and Scott G. M. (1999) Preliminary analysis of the transmission dynamics of nosocomial infections: stochastic and management effects. Journal of Hospital Infection, Vol.43, pp131-147.

Dancer S. J. (2003) Glycopeptide resistance in Staphylococcus aureus. Journal of Antimicrobial Chemotherapy.

Dancer S. J. (2004) How do we assess hospital cleaning? A proposal for microbiological standards for surface hygiene in hospitals. Journal of Hospital Infection, Vol. 56, pp 10-15.

Dancer S. J. and Gemmell C. G. (2003) Control of MRSA – Can Scotland Win? In SCIEH Weekly Report 2003; 37(01).

Emmerson A. M. (2000) Control of the spread of resistance. Chp. 14 in Greenwood D. (ed)(2000) Antimicrobial Chemotherapy, 4th edition, Oxford University Press.

Evans A. and Brachman P. (1998) Bacterial Infections of Humans: Epidemiology and Control. 3rd Edition, Plenum Medical Book Company, New York.

Fairclough S. J. (2006) Why tackling MRSA needs a comprehensive approach. British Journal of Nursing, Vol. 15(2), pp 72-75

Farr B. M. (2004) Prevention and control of methicillin-resistant Staphylococcus aureus infections. Current Opinion in Infectious Diseases, Vol. 17, pp 317-322.

Fedtke I., Gotz F. and Peschel A. (2004) Bacterial evasion of innate host defences – the Staphylococcus aureus lesson. International Journal of Medical Microbiology, Vol. 294, pp 189-194.

Greenwood D. (2000) Antimicrobial Chemotherapy, 4th edition, Oxford University Press.

Harrison P. F. and Lederberg J. (eds) (1998) Antimicrobial Resistance: Issues and Options. Washington DC: National Academy Press.

Health Protection Scotland on behalf of Pathfinder Health Boards (Dec. 2009) Final Report Volume 1: An investigation of the clinical effectiveness of MRSA screening. Glasgow: Health Protection Scotland.

Rayner D (2003) MRSA: an infection control overview. Nursing Standard, Vol. 17(45), pp 47-53.

Roghmann M., Taylor K.L., Gupte A., Zhan M., Johnson J. A., Cross A., Edelman R. and Fattom A.I. (2005) Epidemiology of capsular and surface polysaccharide in Staphylococcus aureus infections complicated by bacteraemia. Journal of Hospital Infection, Vol. 59, pp 27-32.

Scottish Intercollegiate Guidelines Network (2000) Antibiotic Prophylaxis in Surgery. SIGN Publication No. 45, July. www.sign.ac.uk

Critical Discussion of Health Outcomes in Ageing Females

This work was produced by one of our professional writers as a learning aid to help you with your studies

Choose one gender group and critically discuss how their health outcomes can be improved in regards to ageing.

The World Health Organisation's definition of 'health' emphasises that the overall health of an individual is determined not only by their physical well-being but also by their mental and social well-being. NICE has therefore framed its public health outcomes broadly to allow a range of health factors to be addressed. This paper will discuss how the health outcomes of women can be improved with regard to ageing. Given the limited word count of this discussion, the health initiatives addressed will be physical activity and mental well-being, with reference to menopause, osteoporosis, depression and breast cancer.

Menopause has been chosen not just because it affects only women, but also because in 2007 women expressed the need for more information on menopause and its impacts on their health (BMS, 2015). This has driven the creation of new clinical guidelines, to be published in approximately four months' time, for application in all NHS healthcare settings (BMS, 2015). The formation of these guidelines in response to the surveyed women may improve the delivery of healthcare treatments and the advice given by practitioners, because a greater focus is expected to be placed on menopause than in previous years; this could, in turn, improve the quality of health education given to the patient and allow them to understand their condition better. A better personal understanding of a condition can allow a patient to be more active in decision-making processes in partnership with the practitioner (D'Ambrosia, 1999), which could improve the relationship between them. Empowerment through knowledge can also positively affect the patient's confidence, because they may be able to apply principles of self-help in situations where menopause is affecting them, having the knowledge to change their lifestyle choices and routines. For example, exercising regularly is promoted in the menopause period to avoid gaining extra weight and to maintain muscle mass and bone strength (NHS, 2014).

Health psychologists often frame menopause as a bio-psychosocial event in which social, cultural and biological factors can affect a woman psychologically. Weight gain, for instance, may affect her self-esteem, self-confidence and self-image (Ogden, 2012). Health education is therefore not only a method of improving health outcomes related to specific conditions and their treatments, but also a way of encouraging self-development.

Interestingly, self-image and self-representation are discussed within all media forms in regard to both men and women, though more so for women. Ageing and self-image are also often not directly addressed within academic texts that analyse the impacts of ageing, yet the physical symptoms of menopause can affect a woman psychologically, as mentioned previously. Furthermore, although surveys and questionnaires have built knowledge about the functional aspects of an older woman's life, we know very little about women's own perceptions of being someone whom society considers older (Queniart and Charpentier, 2011). The WHO definition of health includes social wellbeing, yet specific research on older women and self-representation remains very limited. There is therefore a need for both qualitative and quantitative research on older women, so that they can be supported to see ageing as a positive rather than a negative process, the latter still being a widely accepted connotation in society in general and among women in particular.

Within the NHS Outcomes Framework, mental illness is addressed in order to acknowledge the growing recognition of mental disorders, both diagnosed and undiagnosed, and to improve the quality of care for those suffering from mental health conditions. Mental health conditions are useful case studies for exploring the barriers that may prevent individuals from reaching their health outcomes. Generally, statistics show that more women access mental health services than men; however, women from BME communities access mental health services less than women from non-BME communities. Reports often show that the relationship between BME individuals and healthcare services differs from that of the native community (Department of Health, 2011). Furthermore, South East Asian women may be seen only after a delay and possibly offered inappropriate mental health services (Department of Health, 2011), even in some cases where the woman has suffered from severe mental health issues. In such cases, the lack of accessibility and engagement will prevent these women from achieving better health. Older men and women are also affected by mental disorders, with statistics suggesting that approximately 15% of adults aged 60 years and over are affected (IHME, 2012); barriers to health services will therefore also delay treatment for these individuals. There are a variety of reasons why these barriers exist, including language barriers, cultural reasons, practitioners who do not understand these, the location of services, and the individual's own perceptions of the mental health condition. Furthermore, it is extremely difficult for a health service to be both specialist and practical for all populations, so social inequalities act as barriers to improving the wider health outcomes pursued by services and governing bodies as well as the personal health outcomes of older patients.

Elderly individuals face biological, social and mental changes as part of the ageing process, and they have to learn to cope with and accept these changes. Many also lack the company of family or friends because of their circumstances. These changes can affect everyday activities, which can in turn damage mental well-being and, where individuals become socially excluded, lead to depression. It is therefore important that elderly individuals know how to access specialist services, which may not necessarily be healthcare based but which have personal wellbeing at the centre of their work.

Examples of such services are campaigns that aim to tackle depression in older people by preventing social isolation in this age range through the promotion of social activities within community-based environments. It is important to recognise that older age groups desire the same positive health and well-being as younger age groups; however, the method of achieving these states will in most cases differ between age groups, as may the level of health outcome with which individuals in each group are content. For example, the Calderdale Clinical Commissioning Group in West Yorkshire has recently invested approximately one million pounds to improve health and wellbeing via inclusion within groups, activities and accessible services through 'The Staying Well Project' (The Halifax Courier, 2015; James, 2014). Better physical health is a key aim of this project, so physical activity sessions will be delivered for older people; these are unlikely to be at the pace of sessions delivered for younger individuals, and traditional activities may be replaced by walking football, tai chi or salsa (James, 2014; NHS, 2013). Improved fitness is a desired health outcome that can support the improvement or treatment of a variety of conditions, both acute and chronic, including the prevention of weight gain due to stress in menopause (The Mayo Clinic, 2013).

Recommended guidelines for exercise to prevent the onset of musculoskeletal conditions also differ depending on the individual's age and their present health and well-being. Osteoporosis is more prevalent in older women due to the hormonal changes of menopause (NOS, 2010), though it may also reflect a lack of exercise or a sedentary lifestyle in early life (WHO, 2003). Osteoporosis reduces bone density or prevents bone from developing, so the individual becomes more at risk of bone fractures. Physical activity and healthy eating are still needed to maintain overall health and bone density, yet an individual may potentially injure themselves by breaking a bone, which could directly affect their overall health and wellbeing. NICE advises doctors, physiotherapists and other relevant professionals to encourage people with osteoporosis to exercise safely and gently to avoid injury; however, most reports highlight that patients lack knowledge of what is considered safe for their condition (NICE, 2013; Moore, 2011). If patients were given more specific guidance on appropriate exercise for their condition, they could ensure they exercise safely, become independent exercisers, and be more likely to sustain exercise in their daily habits for longer, thereby feeling the fuller benefits of exercise.

In addition, there is a lack of research into social inequalities arising from musculoskeletal conditions associated with ageing. However, a recent paper suggests that some sufferers of musculoskeletal disease are experiencing material deprivation because their physical ability prevents them from using or owning social possessions. For example, the young-old Hertfordshire Cohort Study had 3,225 participants who could not possess a home due to lower grip strength and frailty, of whom 23.1% were women (p. 54, Syddall, 2011). The health outcomes of these individuals may not relate solely to improving their muscular strength; they may also desire better mental and social health outcomes, because they are facing challenging life experiences. These outcomes can be achieved or supported by measures such as receiving social care support within their own home, fitting assistive healthcare or telecare technology, accessing supported living schemes, or sharing accommodation, allowing them to feel at least partially in possession of important material things such as a home. Addressing these wider, non-physical health implications is important to prevent further health and social care concerns: such women may have lost their residence through an inability to function within the home, and the resulting lack of control and autonomy could lead to depression and hence co-morbidities. To promote positive thinking and motivation in ageing, alternative therapeutic activities such as life coaching and talking therapies may be more engaging, with little or no side effects compared with drug-based medication, in tackling what is usually diagnosed as clinical depression or anxiety (NHS, 2014).

Cohort studies suggest that physical activity plays a protective role across an individual's life, either preventing the development of conditions or slowing deterioration and supporting the maintenance of health and wellbeing. A study in the Netherlands, which examined women's recreational activities throughout their lives, suggested that physical activity can protect premenopausal women from breast cancer (Verloop et al, 2000). This major study acknowledged that past, present and future studies would struggle to measure all kinds of physical activity undertaken by women, given the extreme difficulty of classifying every movement and its impact, and it suggested that the relationship between the initiation of physical activity and the risk of breast cancer needed further examination in order to form more reliable public health recommendations. The public also need to understand why physical activity matters to them at a more developed level than it simply being part of a recommended 'healthy living' regime, for 'weight management', or to 'prevent arthritis' or 'prevent cardiovascular disease', so that physical activity is given greater importance. This would improve specific health outcomes for individuals suffering from specific diseases, and individuals would better understand which movements and durations of exercise they need.

To summarise, both physical activity and mental wellbeing outcomes for women as they age can be improved through health education, because it motivates individuals to help themselves. To improve this process, further research is needed on the specific impact of physical activity on particular conditions and on the psycho-social impact of specific diseases; this will improve public health recommendations. Social inequalities, such as the accessibility of services and stereotyped perceptions of older women, need to be addressed through community engagement at a local level and through national incentives. Lastly, recognising the wider implications of poor health outcomes will allow professionals to better support both women and men through the ageing process.

Bibliography

British Menopause Society. (2015) Fact Sheets. [Online] Available from: http://www.thebms.org.uk/. [Accessed: 16th March 2015].

British Menopause Society. (2015) NICE Menopause clinical guideline is on its way. [Online] Available from: http://www.thebms.org.uk/index.php. [Accessed: 23rd March 2015].

D’Ambrosia, R. (1999) Orthopaedics in the New Millennium, a new patient-physician partnership. The Journal of Bone and Joint Surgery. 81. p. 447-451.

Department of Health. (2011) No Health Without Mental Health: A Cross-Government Mental Health Outcomes Strategy for People of All Ages. Mental Health and Disability, Department of Health: London. [Accessed: 19th March 2015].

Institute for Health Metrics and Evaluation. (2012) Global Burden of Disease Study 2010 (GBD 2010) Life Expectancy and Healthy Life Expectancy 1970-2010. [Online] Available from: http://ghdx.healthdata.org/record/global-burden-disease-study-2010-gbd-2010-life-expectancy-and-healthy-life-expectancy-1970. [Accessed: 23rd March 2015].

James, E. (2014) Staying Well in your neighbourhood. [Online] Available from: http://locality.org.uk/wp-content/uploads/Elaine-James-Calderdale-Council-Healthy-Neighbourhoods-workshop.pdf. [Accessed: 18th March 2015].

The Mayo Clinic Staff. (2013) Menopause weight gain: Stop the middle age spread. [Online] Available from: http://www.mayoclinic.org/healthy-living/womens-health/in-depth/menopause-weight-gain/art-20046058. [Accessed: 23rd March 2015].

Moore, G.F., Moore, L. and Murphy, S. (2011) Facilitating adherence to physical activity: exercise professionals' experiences of the National Exercise Referral scheme in Wales, a qualitative study. [Online] BioMed Central. 935 (11). Available from: http://www.biomedcentral.com/1471-2458/11/935. [Accessed: 19th March 2015].

National Health Service. (2013) Activities for the elderly. [Online] Available from: http://www.nhs.uk/Livewell/fitness/Pages/activities-for-the-elderly.aspx. [Accessed: 19th March 2015].

National Health Service. (2014) Benefits of Talking Therapy. [Online] Available from: http://www.nhs.uk/Conditions/stress-anxiety-depression/Pages/benefits-of-talking-therapy.aspx. [Accessed: 19th March 2015].

National Osteoporosis Society. (2010) Hormone Replacement Therapy for the Treatment and Prevention of Osteoporosis. [Online] Available from: https://www.nos.org.uk/document.doc?id=823. [Accessed: 23rd March 2015].

National Institute for Health and Care Excellence. (2013) Osteoporosis – prevention of fragility fractures. [Online] Available from: http://cks.nice.org.uk/osteoporosis-prevention-of-fragility-fractures#!topicsummary. [Accessed: 23rd March 2015].

Ogden, J. (2012) Health Psychology. Open University Press: Oxford.

Queniart, A and Charpentier, M. (2011) Older women and their representations of old age: A qualitative analysis. The International Journal of Ageing and Society. 32 (6). p. 983-1007.

Syddall, H.E. (2012) Social inequalities in musculoskeletal ageing among community dwelling older men and women in the United Kingdom. University of Southampton, Gerontology, Doctoral Thesis. [Online] Available from: http://eprints.soton.ac.uk/354738/. [Accessed: 17th March 2015].

The Halifax Courier. (2015) Centre will play a role in tackling loneliness. [Online] Available from: http://www.halifaxcourier.co.uk/news/centre-will-play-a-role-in-tackling-loneliness-1-7060545 [Accessed: 17th March 2015].

Verloop, J., Rookus, M.A., Koay, K.V.D. and Leeuwen, F.E.V. (2000) Physical activity and breast cancer risk in women aged 20-54 years. Journal of the National Cancer Institute. 92 (2). p. 128-135.

World Health Organisation. (2003) Gender, Health and Ageing. [Online] Available from: http://www.who.int/gender/documents/en/Gender_Ageing.pdf [Accessed: 15th March 2015].