John F. Kennedy Assassination Essay

On November 22, 1963, President John F. Kennedy was assassinated in Dallas, Texas, and Lee Harvey Oswald was arrested for the murder. Many believe that Oswald was not the only person involved in the crime, and there are countless theories about who was responsible, some implicating the FBI, the CIA, and organized crime. The Warren Commission, however, concluded that Lee Harvey Oswald acted alone in killing President Kennedy.

Much of the evidence suggests that Lee Harvey Oswald could not have acted alone. John F. Kennedy was the fourth United States president to be assassinated, and even today there remains tremendous debate over who was responsible for his murder. The assassination has given rise to many different conspiracy theories about who was involved.

President Kennedy travelled to Dallas, Texas to strengthen his support ahead of the upcoming election and to build support for the Democratic Party in the state. Before the trip there was some concern about a sniper positioned on top of a building, and Kennedy himself had remarked on how exposed he would be in a convertible. The car President Kennedy was riding in was a 1963 Lincoln Continental open-top limousine.

Sergeant Davis of the Dallas Police Department was responsible for ensuring the city was secure whenever a president or foreign leader visited Dallas. The Secret Service agent in charge of planning the Kennedy motorcade was Winston Lawson, who told Sergeant Davis not to allow any police officers to follow the president’s car.

It was standard procedure for the police to secure the perimeter whenever a president came to Dallas. Jesse Curry, the chief of police, said that if his officers had been allowed to secure the area the murder could have been prevented; the officers who would normally secure the area carried submachine guns and rifles. (Harrison Edward Livingstone, “High Treason 2 – The Great Cover-Up: The Assassination of John F. Kennedy” (1992) Hardback)

The original plan was to travel from Love Field Airport to downtown Dallas and Dealey Plaza, with Kennedy due to give a speech at the Dallas Trade Mart. Kennedy’s car did not have a bulletproof top, because no such top had yet been developed. At about 12:30 p.m. President Kennedy’s limousine approached the Texas School Book Depository, then turned directly in front of the building, coming within only 65 feet of it.

The car was travelling at 13 miles per hour and then slowed to 9 miles per hour. Once the car had passed the building, the shots rang out. Abraham Zapruder was standing just ahead of the limousine and was filming as the shooting took place. Kennedy and Texas Governor John Connally were both shot.

John Connally was riding in the same car as Kennedy, seated directly in front of the president. The governor was left in critical condition but survived. A bystander watching the motorcade was also injured by debris when a bullet struck a curb. (David S. Lifton, “Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy”)

Lee Harvey Oswald was initially arrested for the killing of Dallas police officer J.D. Tippit and was subsequently charged with murdering both President Kennedy and Officer Tippit. When Oswald was questioned about the shooting of President Kennedy he denied everything, and although he was interrogated for twelve hours, no recordings or notes were taken. Oswald insisted that he was not involved and that he was “just a patsy.” Two days after the assassination, while still in police custody, Oswald was shot by Jack Ruby, who had posed as a reporter trying to ask Oswald a question. (The Assassination of JFK, 19 June 2005)

The gun used was an Italian Mannlicher-Carcano rifle, found on the sixth floor of the Texas School Book Depository. The police officers who found the gun recorded everything, and the rifle is said to be the same weapon used in the assassination. A bullet found on Connally’s stretcher was matched to the rifle the police had recovered. Lee Harvey Oswald had purchased the gun under the false name Alek James Hidell. (The Assassination of JFK, 19 June 2005)

President Kennedy was pronounced dead in the emergency room. The surgeons at the hospital said that Kennedy had had absolutely no chance of survival. Dr. George Burkley arrived at the hospital shortly after the president was shot, examined the head wound and declared it the cause of death.

A priest administered last rites to President Kennedy. Vice President Lyndon B. Johnson, who had been riding in a car behind Kennedy’s, was next in line for the presidency and took the oath of office aboard Air Force One. (The Assassination of JFK, 19 June 2005)

Once Air Force One had landed, an autopsy was performed at Bethesda Naval Hospital. The autopsy report stated that Kennedy had been shot in the head and in the shoulder, but reports of the autopsy were inaccurate and did not match up. Dr. James J. Humes is said to have destroyed the original autopsy report and the notes taken during the procedure, and the measurements he recorded were inconsistent and imprecise.

The autopsy materials were not shown to the Warren Commission, and the people who handled the autopsy records did not keep track of how many photographs were taken. It has also been said that the pathologists were not experienced enough to handle Kennedy’s body in the first place, and Kennedy’s neck was never examined to determine how the bullet entered and exited. After the autopsy, Kennedy’s body was embalmed and placed in the White House for viewing, then removed and buried in Arlington National Cemetery. (David S. Lifton, “Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy”)

There were also no live recordings or radio coverage of the assassination. The news crews were waiting at the Trade Mart for Kennedy rather than in Dealey Plaza; some were riding with the motorcade, but at the very back. The most complete film of the murder came from Abraham Zapruder’s camera.

Many individuals also took still photographs of the shooting. The Zapruder film shows Kennedy’s head moving forward and then backward. The film was shown on television, but only in heavily edited form. More recently, in 2003, ABC News reconstructed Dealey Plaza in a three-dimensional computer model. (David S. Lifton, “Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy”)

The government has been effective in preventing records of the Kennedy assassination from becoming publicly available. In 1964 President Lyndon B. Johnson ordered the Warren Commission’s supporting records withheld from public viewing, ruling that the documents could not be seen for 75 years, that is, until 2039. Keeping the records sealed leads more people to believe that there was indeed a conspiracy behind the death of President Kennedy. Congress eventually passed the President John F. Kennedy Assassination Records Collection Act of 1992.

Congress passed the act so that the public could see the records sooner, taking the view that there was no longer any need to keep them hidden. The act requires that any document that has not been lost or destroyed be released to the public by 2017. Many documents have already been opened, but the majority remain locked away, and not all of the original evidence can ever be released because some of it was lost or destroyed. Important pieces of evidence that were neglected include the Texas governor’s suit being dry-cleaned, the limousine being cleaned, and Lee Harvey Oswald’s Marine service file being lost.

(Josiah Thompson, “Six Seconds in Dallas” (1976 Paperback))

A paraffin test was conducted on Lee Harvey Oswald’s hands and right cheek to determine whether he had fired a weapon. The test came back positive, but the Warren Commission dismissed the results as unreliable.

The FBI was the first agency to investigate the assassination. The director of the FBI said that he wanted something issued that would convince the public that Lee Harvey Oswald was the only one involved. The FBI report took 17 days to complete and was handed to the Warren Commission, which the FBI then assisted. Both the FBI and the Warren Commission concluded that only three shots had been fired, all from the rifle that Lee Harvey Oswald owned. The House Select Committee on Assassinations later investigated the FBI’s findings.

The committee concluded that the FBI had not adequately investigated whether the assassination was the result of a conspiracy, and that it had failed to share its data with other law enforcement agencies. James Hosty was an FBI agent whose name appeared in Oswald’s address book; the FBI made a copy of the address book with Hosty’s name erased and gave that copy to the Warren Commission. Before the assassination, Lee Harvey Oswald had gone to the FBI office to meet with Hosty, but Hosty was out, so Oswald left a note for him. After Oswald was murdered by Ruby, Hosty destroyed the note by tearing it up and flushing it down the toilet. (Josiah Thompson, “Six Seconds in Dallas” (1976 Paperback))

When the Warren Commission completed its report, many people questioned it and did not believe its findings, and many books and articles have been written disputing its conclusions. In 2003 ABC News conducted a poll on what the public thought about the John F. Kennedy assassination; seventy percent of respondents believed there had been a plot behind the murder.

Around seventy to ninety percent of the American people did not believe the Warren Commission’s findings, and even some government officials who worked for the commission said that they did not completely believe its results themselves. The House Select Committee on Assassinations said that the Warren Commission and the FBI failed to investigate who else could have been involved with the murder, and that the main reason for the lack of results was the Warren Commission’s failure to communicate with the CIA. (Gerald Posner, “Case Closed” (1993 Hardcover, 1st Edition))

The House Select Committee determined that President Kennedy was probably killed as the result of a conspiracy, a finding that directly contradicted the Warren Commission’s. The HSCA said that four shots were fired and that Lee Harvey Oswald was not the only shooter: Oswald fired three shots, and another gunman fired a fourth from behind the fence on the grassy knoll.

The grassy knoll theory arose from acoustic evidence and the accounts of many witnesses. In 2001 an article by D.B. Thomas argued that the HSCA’s second-gunman finding was correct. The Assassination Records Review Board described the autopsy of John F. Kennedy as a tragedy. (David S. Lifton, “Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy”)

The majority of the evidence in the John F. Kennedy assassination was mishandled and not dealt with as it should have been. Because so much of the evidence was lost or remains locked away, people are led further toward belief in a conspiracy. The murder of John F. Kennedy suggests that the government has not served its people honestly: it has lied, covered up, and distorted so much that it may never be possible to establish who was really involved.

The government has violated the rights of its own people. If it had taken the time to gather all of the evidence correctly and examine every aspect of the murder, there would not be so much mystery surrounding it today. The government also violated the Kennedy family’s 14th Amendment rights: the amendment’s guarantee of due process was breached because the government failed to conduct a proper investigation of the assassination and did not provide the family with a thorough and accurate account of the murder. The government should have been far more accurate and engaged in the investigations its agencies performed.

Example History Essay

How did Christianity succeed in becoming so widespread in the period up to Diocletian despite the Roman persecution of Christians?

It took Christianity a little under three hundred years to develop from a small, heretical Jewish cult based in the eastern provinces into the universal religion of the Roman Empire with churches and bishops ranging from Antioch and Edessa in the east to Lyons and Toledo in the west, and encompassing the North African cities of Carthage and Alexandria in the south. Just how an often persecuted sect managed to accomplish this is a complex issue that cannot be fully examined within the scope of this essay; rather it will focus on some aspects of early Christianity that allowed it to flourish and which could withstand the persecutions that took place from the second to the third centuries, concentrating on the eastern provinces, specifically Judaea, Phoenicia, Syria, Galatia and Bithynia-Pontus. (Chadwick, 1967)

It is impossible to know what percentage of the population of the Empire considered themselves Christian. One suggestion is ten percent but this is an estimate and, as Brown points out, it is more significant that during the third century Christian communities grew quickly (Brown, 2013). The early Christian sources cannot be relied upon to provide an accurate picture, as Lane Fox notes; Christian authors “were quite uncritical in their use of words like ‘all’ and ‘everywhere’” (Lane Fox, 1988, p. 269). Eusebius, for example, described how the Apostles had been sent across the globe to preach: Thomas was sent to Parthia, Andrew to Scythia, John to Asia and Peter was allotted the eastern provinces of the Roman Empire (Eusebius, 3:1).

In order to consider what impact persecution had on the spread of Christianity, it is necessary to consider the way Christianity spread, and in particular what it was about this religion that set it apart from the pagan cults (and Judaism), both attracting followers and helping it withstand persecution. There are a number of reasons why Christianity flourished in the period between the end of the first century and Diocletian’s Great Persecution at the start of the fourth: the nature of Christianity and its emphasis on charity and hospitality; a shared sacred text; the close-knit structure of the early Church; the way it appealed to all levels of society; the very act of persecution itself; the nature of the pagan cults it was competing with; and the wide-ranging trade routes across the Empire, to name just a few. (Chadwick, 1967; Lane Fox, 1988)

Christian groups shared a set of beliefs and ideals based around the preaching of salvation and it can be argued that this unity of beliefs is what strengthened Christianity and allowed it to flourish. Christians shared a meal recollecting the sacrifice of their saviour and were encouraged to regard themselves as a family, calling each other “brother” and “sister” and to greet each other with a kiss. Individual communities possessed similar structures, particularly during the late second and third centuries, which emphasised their unity. This was certainly the view of Origen when responding to the allegations of the second-century philosopher Celsus who acknowledged the unity of Christians but believed it to be based on “no trustworthy foundation” other than their “unity in revolt (…) and fear of outsiders” (3:14, Chadwick, 1953, p. 136).

Origen states that Christianity does have a firm foundation in divine doctrine and God’s law. (3:15, Chadwick, 1953). The message that everyone was subject to the same divine law and could achieve salvation through renunciation of sins was unique to Christianity. This was a message upon which persecution could have no impact; indeed persecution offered devout Christians the opportunity to emulate their saviour and make the ultimate sacrifice for their faith; persecution encouraged martyrdom.

Whilst elements of early Christian practice, such as the celebratory meal and offering practical support for fellow supporters, can be seen in some pagan cults at this time, what set Christianity apart was its shared sacred text. It must be acknowledged that Christianity and Judaism are very similar in this regard, however, the New Testament, works by Origen and other early Christian philosophers and those condemning the Gnostic practices of the Coptic Church as heresy show that different Christian groups were discussing and exchanging views on important topics. In this way early Christian thinkers, the ‘Church Fathers’ were formulating a common, orthodox canon of beliefs which were set down in documents that were shared amongst the communities. (Clark, 2004)

The early Christians did not worship in what we would recognise as churches; they held assemblies, each acting as a family unit and providing not only spiritual but practical support to its members (Chadwick, 1967; Brown, 2013). They met in the homes of individual Christians, and these houses were extended to accommodate the growing community, as at Dura Europos in Mesopotamia, where a private house was extended at some point in the 240s to add a hall large enough to accommodate up to sixty people (Lane Fox, 1988). It is perhaps significant, therefore, that no Imperial edict against the Christians, even that of Decius, specifically mentioned destroying churches until the Great Persecution of Diocletian at the start of the fourth century. Whilst the community might consider itself a church, there was no physical building, like a synagogue, which pinpointed it within the landscape of the town or city. In effect the church was mobile and could relocate as and when persecution made it necessary, which meant Christianity could spread easily.

One of the key principles of Christianity was its emphasis on acts of charity and supporting those in need, based around Matthew 25:38-40. (Clark, 2004) No other religious group in the Empire held provision for the poor as a key doctrine, but Christians were duty-bound to offer not only spiritual but practical help to those less fortunate than themselves. (Chadwick, 1967) Eusebius quotes a letter of Dionysius, bishop of Alexandria, describing how Christians helped nurse the sick and dying of all religions during an outbreak of plague and helped to bury the dead, whereas the pagans abandoned the sick, even family members, to their fate. (Eusebius, 7:22, 7-10) As Clark (2004), following MacMullen, notes, nursing the sick might convince people that Christians had a special religious protection; their belief in suffering and salvation and stories of healing miracles could, perhaps, be more effective than doctrine in winning converts.

The notion of charity was not confined to offering comfort and solace; one of the ideas Christianity had inherited from Judaism was giving alms for “the remission of sin” (Brown, 2013, p. 69). The idea that money earned in this world, by whatever means, could help its owner earn their place in the next through the remission of his or her sins meant that churches were able to accumulate wealth. The pagan temples of the large cities depended on donations from the wealthy, whereas the average Christians making donations for the salvation of their souls were tradesmen. This meant that during times of financial disaster, as in the third century, the Christian communities were better able to withstand a crisis. The church developed structures and systems to ensure this wealth was distributed to where it was needed, and Christians acquired a reputation for taking care of their own; widows and orphans as well as the sick and the destitute were all embraced in this institutionalised alms-giving. (Brown, 2013; Clark, 2004) Thus the knowledge that one’s community was duty-bound to offer practical assistance in times of need can readily be argued to have been a contributing factor to the spread of Christianity, and again one on which persecution would have little impact.

This did not mean that Christianity developed into a religion of the poor; rather it embraced all ranks from slaves and tradesmen up to the higher echelons of society: Marcia, the concubine of Emperor Commodus, was Christian, as were King Abgar VIII of Osrhoene and Julius Africanus from Palestine (Brown, 2013; Clark, 2004).

When considering the impact persecutions had on the spread of Christianity, the nature of these persecutions has to be taken into account. During the second and third centuries, there were two periods of persecution: the sporadic, isolated persecutions that were confined to specific areas during the second and early third centuries; and the Emperor-led persecutions of Decius and Valerian, which culminated in the Great Persecution under Diocletian and Galerius.

Our best evidence for the nature of these earlier persecutions comes from Pliny’s letter to Trajan, written c. 112 (Ep. 10:96, Radice, 1969, p. 293). Pliny, governor of Bithynia-Pontus, wrote to the Emperor asking for guidance on how to treat Christians arrested in his province. The letter describes how he had tortured two female slaves to obtain information about the activities of Christians and asks for advice on how he should conduct trials of suspected Christians who were brought before him as a result of anonymous allegations. Trajan’s reply makes it clear that only known Christians should be prosecuted and anonymous allegations should not be considered and those simply suspected of being Christian should not be sought out. This shows that during this time there was no clear policy of persecution coming from the Roman authorities. Similarly, Eusebius includes a letter from Trajan’s successor, Hadrian, written to Pliny’s successor, Minicius Fundanus, reaffirming this position; Christians should not be sought out directly, but those correctly accused under Roman law, should be punished (Eusebius, 4:9).

Once again Eusebius’ evidence must be approached with caution; as with any Christian author he cannot be considered a reliable witness to the persecution of his own kind. When taken together, however, the evidence of both the pagan Roman official Pliny and the Christian Eusebius does indicate that there was no official policy of widespread persecution of Christians during the second century. Moreover, as St Croix (1963) illustrates, accusations against individuals were not likely to be made falsely, as the person making the allegation had to carry out the prosecution, rendering themselves liable for a charge of calumnia (malicious prosecution) if they could not make a satisfactory case against the alleged Christian.

Decius’ edict of 250 represents the changing situations of both the Empire and the Christian church. By this period Christianity had spread across the whole Empire; an empire which had been suffering from years of civil war, was in something of a crisis and was in need of assistance from its gods (Clark, 2004). The edict issued by Decius in 249-250 did not specifically target Christianity, though Christian writers chose to interpret it as a direct attack; rather it required all citizens to make sacrifices to the gods and obtain proof of this in the form of a special certificate. It is clear that many Christians did suffer as a result of this edict; Babylas of Antioch and Alexander of Jerusalem were amongst many notable church leaders who lost their lives. Others, however, preferred to go into hiding or buy certificates from friendly magistrates. The impact of this edict was, therefore, twofold: it created a new generation of martyrs from those who refused to sacrifice and were punished for it; and it caused schism within the church regarding what to do about those (mainly in the east) who fled or bought their certificates. Neither had any detrimental effect on the spread of Christianity; martyrs were admired and acted as inspiration for the faithful, and the debate regarding those who went into hiding helped to develop Church doctrine.

As noted above, persecution created martyrs who were held up as examples to be followed: men and women who had endured physical pain and suffering like Jesus on the cross. Christian writers praised their bravery and courage, recording their heroic suffering in Acts and Passions which were copied and disseminated throughout the Christian world, raising them to the status of saint. Martyrdom and the development of the cult of saints are other key topics to consider when looking at the spread of Christianity and its reaction to persecution, but ones which cannot be discussed here. The ideas discussed above (the nature of Christianity, the unity provided by shared sacred texts and church organisation, the emphasis on charity and personal redemption) are just a few of the reasons this fledgling cult was able to flourish and spread throughout the Roman Empire, covering not only the Eastern Provinces but also those in the west. There has been little room here to give them the full discussion they deserve, or to consider other factors such as the wide-ranging trade routes across the Empire that allowed Christians to travel and spread their faith, or the way families were converted. What can be seen, however, is that Christianity was a religion with a unified belief structure that appealed to a wide cross-section of society and which offered practical help for those in need, including members of society who were often marginalised. Persecution did not stop the spread of Christianity, nor did it drive it underground. In the face of persecution most Christians remained steadfast, secure in the knowledge that their physical pain and suffering in this life would lead to reward in the next.

Reference List:
Primary Sources:

Eusebius, Church History. [Online]. Available from: Christian Classics Ethereal Library: http://www.ccel.org/ccel/schaff/npnf201.iii.viii.i.html [Accessed 14 February 2015]

Chadwick, H. (1953) Origen, Contra Celsum. Cambridge: Cambridge University Press.

Radice, B. (1969) The Letters of the Younger Pliny. Harmondsworth: Penguin.

Secondary Works:

Brown, P. (2013) The Rise of Western Christendom: Triumph and Diversity, A.D 200-1000. 10th Anniversary Revised Ed. Chichester: Wiley-Blackwell.

Chadwick, H. (1967) The Early Church. London: Penguin.

Clark, G. (2004) Christianity and Roman Society. Cambridge: Cambridge University Press.

Lane Fox, R. (1988) Pagans and Christians in the Mediterranean world from the second century AD to the conversion of Constantine. London: Penguin.

MacMullen, R. (1984) Christianizing the Roman Empire A.D. 100-400. New Haven; London: Yale University Press.

St Croix, G.E.M. de (1963) Why Were Early Christians Persecuted? Past and Present, 26, pp. 6-38.

Work related stress in healthcare

Stress may be defined as the physical and emotional response to excessive levels of mental or emotional pressure, which may arise from issues in both working and personal life. Stress may cause emotional symptoms such as anxiety, depression, irritability or low self-esteem, or even manifest as physical symptoms including insomnia, headaches, loss of appetite and difficulties concentrating. Individuals experiencing high levels of stress may have difficulty controlling emotions such as anger, and may be more likely to experience illness or consume increased quantities of alcohol (NHS Choices, 2015). In the UK, a survey undertaken by the Health and Safety Executive (HSE) estimated that in the year 2013-2014, 487,000 cases of work-related illness (39% of the total) could be attributed to work-related stress, anxiety or depression (HSE, 2014). Additionally, the survey found that as many as 11.3 million working days were lost in the year 2013-2014 as the direct result of work-related stress (HSE, 2014).

Studies have shown that healthcare professionals, particularly nurses and paramedics, are at an increased risk of work-related stress compared with other professionals (Sharma et al., 2014). This is likely to be due to the long hours and the high pressure of maintaining quality care standards that are inherent in the job, as well as pressures caused by staff shortages, high levels of patient demand, a lack of adequate managerial support, and the risk of aggression or violence towards nurses from patients, relatives or even other staff (Royal College of Nursing (RCN), 2009). Indeed, a 2014 survey of nursing staff by the RCN showed that up to 71% of staff surveyed worked up to 4 hours more than their contracted hours a week, 80% felt that work-related stress lowered morale, and 72% reported that understaffing occurred frequently in their workplace. As a result of these issues, 66% of respondents in the survey had considered leaving the NHS or the nursing profession altogether (RCN, 2014b). A separate report by the RCN suggested that over 30% of sickness absence was due to stress, which was estimated to cost the NHS up to £400 million every year (RCN, 2014a).

In addition to the physical and emotional symptoms of stress previously discussed, studies in this area have shown that nurses experiencing high levels of work-related stress were more likely to be obese and have low levels of physical exercise, factors which increased the likelihood of non-communicable diseases and co-morbidities such as hypertension and type 2 diabetes (Phiri et al., 2014).

Stress and staff absence

Chronic stress has been linked to “burnout” (Khamisa et al., 2015; Dalmolin et al., 2014), a state of emotional exhaustion under extreme stress related to reduced professional fulfilment (Dalmolin et al., 2014), and to “compassion fatigue”, where staff have experienced so many upsetting situations that they find it difficult to continue empathising with their patients (Wilkinson, 2014). As previously discussed, reduced staffing levels contribute to stress in nursing staff, and in this way chronic stress within the workplace launches a self-perpetuating cycle: increased stress leads to increased illness, more staff absence and further understaffing. In turn, these negative emotions also reduce job satisfaction and prompt many staff to consider leaving the nursing profession, further reducing staffing availability for services (Fitzpatrick and Wallace, 2011).

Reasons for work-related stress amongst healthcare professionals

Studies amongst nursing staff have also reported stress occurring as the result of poor and unsupportive management, poor communication skills amongst team members, institutional and organisational issues (e.g. outdated or restrictive hospital policies) or bullying and harassment (RCN, 2009). Even seemingly minor issues have been reported as exacerbating stress amongst nursing staff, for example a lack of common areas to take breaks in, changing shift patterns, and even difficulty and expense of car parking (Happell et al., 2013).

Work-related stress can particularly affect student or newly qualified nurses, who often hold high expectations of job satisfaction from working in the profession they have worked hard and aspired to join, and who are therefore particularly prone to disappointment on discovering that they do not experience the job satisfaction they had anticipated while training. Student and newly qualified nurses may also have clear ideas from their recent training about how healthcare organisations should be run and how teams should be managed, and may then be disillusioned when they discover that in reality many departments could benefit from improvements and from further training for more experienced staff in these areas (Wojtowicz et al., 2014; Stanley and Matchett, 2014). Nursing staff are also likely, on occasion, to find themselves in a clinical situation that they feel unprepared for, or in which they do not have the necessary knowledge to provide the best possible care for patients, and this may cause stress and anxiety (RCN, 2009). They may also be exposed to upsetting and traumatic situations, particularly in fields such as emergency or intensive care medicine (Wilkinson, 2014).

Moral distress can also cause strong feelings of stress amongst healthcare professionals. This psychological state occurs when there is a discrepancy between the action that an individual takes and the action that the individual feels they should have taken (Fitzpatrick and Wallace, 2011). It may arise when a nurse feels that a patient should receive an intervention in order to receive the best possible care but is unable to deliver it, for example due to organisational policy constraints or a lack of support from other members of staff (Wojtowicz et al., 2014). For example, a nurse may be providing end of life care to a patient who has recently had an unplanned admission onto a general ward but is expected to die shortly. The nurse may feel that this patient would benefit from having a member of staff sitting with them until they die. However, due to a lack of available staffing this does not happen, as the nurse must attend to other patients in urgent need of care. If the patient dies without someone with them, the nurse may experience stress, anger, guilt and unhappiness over the situation, having made the moral judgement that the dying patient “should” have had a member of staff with them but having been unable to provide this without risking the safety of other patients on the ward (Stanley and Matchett, 2014).

One large-scale, questionnaire-based study in the USA on moral distress amongst healthcare professionals has shown that moral distress is more common amongst nurses than other staff such as physicians or healthcare assistants. The authors suggested that this may be because nurses have less autonomy in making care decisions (especially following disagreement with a doctor, who has a high level of autonomy) while experiencing a higher sense of responsibility for patient wellbeing than healthcare assistants, who were more likely to consider themselves to be following the instructions of the nurses than personally responsible for patient outcomes (Whitehead et al., 2015).

Recommendations for policies to address work related stress

It is acknowledged that many individuals find being asked to perform tasks for which they have not been adequately trained or prepared very stressful. As such, management teams should try to ensure, as far as possible, that individuals are assigned only roles for which they have adequate training and ability, and should support employees with training to improve their skills where necessary (RCN, 2009).

Surveys have frequently reported that organisational issues such as unintuitive work patterns, excessive workloads and an unpleasant working environment can all contribute to work-related stress. Organisations can reduce the impact of these by developing schedules of working hours in consultation with staff and adhering to them, by making any necessary improvements to the environment (e.g. ensuring that malfunctioning air conditioning is fixed), and by reducing incidents of understaffing as far as possible (RCN, 2009). Issues such as insomnia and difficulty in adapting to changing shift patterns can also be addressed by occupational health, for example by encouraging healthy eating and exercise (Blau, 2011; RCN, 2005). In 2005, for example, the RCN published an information booklet for nursing staff explaining the symptoms of stress, the ways in which it can be managed (e.g. relaxation through exercise or alternative therapies), and when help for dealing with stress should be sought (RCN, 2005). More recently, internet-based resources have been made available by the NHS to help staff identify whether they need assistance, and how and why it is important to access it (NHS Employers, 2015).

Witnessing or experiencing traumatic or upsetting events is an unavoidable aspect of nursing, and can even result in post-traumatic stress disorder (PTSD). However, there are ways in which staff can be encouraged by their management teams and organisations to deal with the emotions that these circumstances produce, limiting the negative and stressful consequences of these events. This may include measures such as counselling or peer support programmes run through occupational health departments (Wilkinson, 2014). Staff should also be encouraged to use personal support networks (e.g. family), as these can be an important and effective source of support; however, studies have shown that support within the workplace is most beneficial, particularly if it can be combined with a culture in which healthcare professionals are encouraged to express their feelings (Lowery and Stokes, 2005).

One commonly cited reason for work related stress amongst nurses is the incompetence or unethical behaviours of colleagues, and a lack of opportunity to report dangerous or unethical practice without fear of reprisal. Therefore it is important that institutions and management teams ensure that there is an adequate care quality monitoring programme in place, and a culture where concerns can be reported for further investigation without fear of reprisal, particularly with respect to senior staff or doctors (Stanley and Matchett, 2014).

In the year 2012-2013, 1,458 physical assaults against NHS staff were reported (NHS Business Service Authority, 2013). Violence and abusive behaviour towards nursing staff are an acknowledged cause of stress and even PTSD, and staff have a right to provide care without fear (Nursing Standard News, 2015; Itzhaki et al., 2015). Institutions therefore have a responsibility towards their staff to provide security measures such as security personnel, appropriate workplace design (e.g. locations of automatically locking doors) and policies for the treatment of potentially violent patients, e.g. those with a history of violence or substance abuse (Gillespie et al., 2013).

As previously discussed, nurses are more likely than other healthcare professionals to experience moral distress as the result of a discrepancy between the actions they believe are correct and the actions they are able to perform (Whitehead et al., 2015). However, there are policies that can be introduced into healthcare organisations to reduce its occurrence and the severity with which it can affect nursing staff. Studies have shown that nurses who were encouraged to acknowledge and explore feelings of moral distress were able to process and overcome these in a less damaging manner than those who did not (Matzo and Sherman, 2009; Deady and McCarthy, 2010). Additionally, it is thought that moral distress is less frequent in institutions and teams that encourage staff to discuss ethical issues with a positive attitude (Whitehead et al., 2015). For example, institutions could employ a designated contact person for staff to discuss stressful ethical issues with, or set up facilities for informal and anonymous group discussion, for example on a restricted-access internet-based discussion board (Matzo and Sherman, 2009).

Conclusion

Work related stress is responsible for significant costs to the NHS in terms of staffing availability and financial loss from staff absence from stress itself or co-morbidities that can be exacerbated by stress (RCN, 2009), for example hypertension and diabetes (Phiri et al., 2014; RCN, 2009, 2014a). The loss of valuable and qualified staff from the profession is also a significant cost to health services, and of course exacerbates the situation by increasing understaffing further, which in turn increases stress for the remaining staff (Hyrkas and Morton, 2013). It can also exert a significant cost to healthcare professionals who experience it, in terms of their ability to work, their personal health, effects on personal relationships (Augusto Landa et al., 2008) and job satisfaction (Fitzpatrick and Wallace, 2011). However, organisations can implement recommendations to reduce work related stress, for example by encouraging a positive and supportive culture for staff by offering interventions such as counselling (Wilkinson, 2014; RCN, 2005). Furthermore, interventions such as encouraging the reporting of unsafe or unethical practice – a commonly cited source of stress amongst nurses (RCN, 2009; Stanley and Matchett, 2014) – may also contribute to improving the quality of patient care.

References

Augusto Landa, J. M., Lopez-Zafra, E., Berrios Martos, M. P. and Aguilar-Luzon, M. D. C. (2008). The relationship between emotional intelligence, occupational stress and health in nurses: a questionnaire survey. International Journal of Nursing Studies, 45 (6), p.888–901. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17509597

Blau, G. (2011). Exploring the impact of sleep-related impairments on the perceived general health and retention intent of an Emergency Medical Services (EMS) sample. Career Development International, 16 (3), p.238–253. [Online]. Available at: http://www.emeraldinsight.com/doi/abs/10.1108/13620431111140147

Dalmolin, G. de L., Lunardi, V. L., Lunardi, G. L., Barlem, E. L. D. and da Silveira, R. S. (2014). Moral distress and Burnout syndrome: are there relationships between these phenomena in nursing workers? Revista Latino-Americana de Enfermagem, 22 (1), p.35–42. [Online]. Available at: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-11692014000100035

Deady, R. and McCarthy, J. (2010). A Study of the Situations, Features, and Coping Mechanisms Experienced by Irish Psychiatric Nurses Experiencing Moral Distress. Perspectives in Psychiatric Care, 46 (3), p.209–220. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/20591128

Fitzpatrick, J. J. and Wallace, M. (2011). Encyclopedia of Nursing Research. 3rd ed. New York: Springer Publishing Company.

Gillespie, G., Gates, D. M. and Berry, P. (2013). Stressful Incidents of Physical Violence Against Emergency Nurses. OJIN: The Online Journal of Issues in Nursing, 18 (1). [Online]. Available at: http://www.nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/Vol-18-2013/No1-Jan-2013/Stressful-Incidents-of-Physical-Violence-against-Emergency-Nurses.html

Happell, B., Dwyer, T., Reid-Searl, K., Burke, K. J., Caperchione, C. M. and Gaskin, C. J. (2013). Nurses and stress: recognizing causes and seeking solutions. Journal of Nursing Management, 21 (4), p.638–647. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23700980

HSE. (2014). Statistics – Stress-related and psychological disorders in Great Britain. Health and Safety Executive. [Online]. Available at: http://www.hse.gov.uk/statistics/causdis/stress/index.htm

Hyrkas, K. and Morton, J. L. (2013). International perspectives on retention, stress and burnout. Journal of Nursing Management, 21 (4), p.603–604. [Online]. Available at:

Itzhaki, M., Peles-Bortz, A., Kostistky, H., Barnoy, D., Filshtinsky, V. and Bluvstein, I. (2015). Exposure of mental health nurses to violence associated with job stress, life satisfaction, staff resilience, and post-traumatic growth. International Journal of Mental Health Nursing, 24 (5), p.403–412. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26257307

Khamisa, N., Oldenburg, B., Peltzer, K. and Ilic, D. (2015). Work Related Stress, Burnout, Job Satisfaction and General Health of Nurses. International Journal of Environmental Research and Public Health, 12 (1), p.652–666. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4306884/

Lowery, K. and Stokes, M. A. (2005). Role of peer support and emotional expression on posttraumatic stress disorder in student paramedics. Journal of Traumatic Stress, 18 (2), p.171–179. [Online]. Available at: doi:10.1002/jts.20016

Matzo, M. L. and Sherman, D. W. (2009). Palliative Care Nursing: Quality Care to the End of Life. 3rd ed. New York: Springer Publishing Company.

NHS Business Service Authority. (2013). 2012-13 figures released for reported physical assaults against NHS staff. NHS Business Service Authority. [Online]. Available at: http://www.nhsbsa.nhs.uk/4380.aspx

NHS Choices. (2015). Stress, anxiety and depression. NHS Choices. [Online]. Available at: http://www.nhs.uk/conditions/stress-anxiety-depression/pages/understanding-stress.aspx

NHS Employers. (2015). Health work and wellbeing. NHS Employers. Available at: http://www.nhsemployers.org/your-workforce/retain-and-improve/staff-experience/health-work-and-wellbeing

Nursing Standard News. (2015). Stress at work affecting nurses’ health, survey finds. Nursing Standard, 29 (27), p.8–8. [Online]. Available at: http://journals.rcni.com/doi/10.7748/ns.29.27.8.s6

Phiri, L. P., Draper, C. E., Lambert, E. V. and Kolbe-Alexander, T. L. (2014). Nurses’ lifestyle behaviours, health priorities and barriers to living a healthy lifestyle: a qualitative descriptive study. BMC Nursing, 13. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4264254/

RCN. (2005). Working well initiative: Managing your stress. A guide for nurses. Royal College of Nursing. [Online]. Available at: http://www.rcn.org.uk/__data/assets/pdf_file/0008/78515/001484.pdf

RCN. (2009). Work-related stress. Royal College of Nursing. [Online]. Available at: https://www.rcn.org.uk/__data/assets/pdf_file/0009/274473/003531.pdf

RCN. (2014a). Importance of stress awareness. [Online]. Available at: http://www.rcn.org.uk/newsevents/news/article/uk/importance_of_stress_awareness

RCN. (2014b). Two thirds of staff have considered leaving the NHS. [Online]. Available at: http://www.rcn.org.uk/newsevents/news/article/uk/two_thirds_of_staff_have_considered_leaving_the_nhs

Sharma, P., Davey, A., Davey, S., Shukla, A., Shrivastava, K. and Bansal, R. (2014). Occupational stress among staff nurses: Controlling the risk to health. Indian Journal of Occupational and Environmental Medicine, 18 (2), p.52–56. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4280777/

Stanley, M. J. C. and Matchett, N. J. (2014). Understanding how student nurses experience morally distressing situations: Caring for patients with different values and beliefs in the clinical environment. Journal of Nursing Education and Practice, 4 (10), p. 133. [Online]. Available at: doi:10.5430/jnep.v4n10p133

Whitehead, P. B., Herbertson, R. K., Hamric, A. B., Epstein, E. G. and Fisher, J. M. (2015). Moral Distress Among Healthcare Professionals: Report of an Institution-Wide Survey. Journal of Nursing Scholarship, 47 (2), p.117–125. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25440758

Wilkinson, S. (2014). How nurses can cope with stress and avoid burnout: Stephanie Wilkinson offers a literature review on the workplace stressors experienced by emergency and trauma nurses. Emergency Nurse, 22 (7), p.27–31. [Online]. Available at: http://rcnpublishing.com/doi/abs/10.7748/en.22.7.27.e1354

Wojtowicz, B., Hagen, B. and Van Daalen-Smith, C. (2014). No place to turn: Nursing students’ experiences of moral distress in mental health settings. International Journal of Mental Health Nursing, 23 (3), p.257–264. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23980930

The Effectiveness of Public Health Interventions

Introduction

The health of the whole population is a very important issue. Conditions which are likely to affect the whole population or large sections of it are considered to be public health issues and are the subject of specific healthcare promotions and interventions. These can take a range of forms: those aimed at raising awareness of symptoms or lifestyle factors implicated in the development of a particular condition; those supporting the management of health conditions to improve quality of life and/or longevity; and those promoting recognition of symptoms so that early treatment can be obtained. Public health interventions are developed to address identified public health issues (National Institute for Health and Care Excellence, 2015). Once these are put in place, it is important to be able to assess the impact of the interventions and their effectiveness in respect of the present situation, and also to increase the knowledge base for the development of further interventions in the future (Brownson, et al., 2010). This essay will consider the ways in which the effectiveness of public health interventions can be determined.

Discussion

One of the main factors that needs to be considered in public health interventions is cost-effectiveness (The King’s Fund, 2014). The NHS has increasing demands on its services and so, when developing new interventions or reviewing those already in place, cost effectiveness is one of the most important issues. A further aspect of the effectiveness of public health interventions is the extent to which they have demonstrably achieved the aims set for the intervention (Scutchfield & Keck, 2003). These two areas will now be considered in greater detail.

There is a finite budget available to the NHS to provide healthcare and this has to be utilised in the most efficient way. The economic constraints that have been in place for some time have created an even greater need for financial efficiency. One way that this can be achieved is through reducing the numbers of people who are suffering from conditions which are considered to be avoidable. Conditions such as diabetes and obesity, for example, are considered to be largely avoidable by people changing lifestyle habits to improve their health. Thus a range of public health interventions has been directed at these types of issues in order to prevent people from becoming ill, as this would represent a substantial saving in the costs of treating subsequent illness. It would also benefit the public in that people would lead longer, healthier lives. However, preventative interventions present difficulties in measuring their effectiveness. A reduction in the numbers of people developing diabetes, for instance, may be attributable to a public health intervention or it may be the result of one or more other factors. The individuals measured may not have developed the condition anyway, and so it cannot be proven that the intervention itself was solely responsible for them remaining well. As it can be difficult to accurately measure the effectiveness of outcomes, cost-effectiveness is also difficult to assess. Historically, preventative healthcare promotion has been a problematic area due to the difficulties in establishing effectiveness, and this made obtaining funding for such activities particularly challenging. However, the increasing demand for services has led to a shift in perspective and a greater focus on prevention. Thus, the means of evaluating public health interventions in this area has become important. Although the financial implications cannot be the sole driver for health promotion, financial issues are of necessity a major factor, as there are obligations on the NHS to produce evidence that its funding has been properly and effectively spent.

The effectiveness of health promotions in improving the health of the population, rather than cost, should be the primary motivation of interventions. In order to improve public health, there are a range of options for interventions. The impact of health interventions was described by Frieden (2010) as taking the form of a five-tier pyramid, with the bottom tier being the most effective as it reaches the largest sector of the population and has the greatest potential to improve the social and economic determinants of health. The higher tiers of the pyramid relate to areas where the individual is helped to make healthy choices. Topics within the bottom tier of the pyramid include the improvements in health brought about by changing lifestyle habits such as smoking. Wide-scale promotions and interventions have been in place for many years, and these have reduced the number of people who smoke and encouraged others not to begin smoking. As a result, the risk of health issues such as heart conditions has been reduced. Whilst this may not completely prevent some people from developing such conditions, in terms of public health, which takes the wider perspective, a higher proportion of people will be at lower risk. Thus, the effectiveness of interventions in this case can be measured by comparing the proportion of the population who currently smoke, who have given up smoking and who have started smoking against previous years’ records (Durkin, et al., 2012). The numbers of people coming forward for help through smoking cessation provisions offered by their GPs can also be measured, together with the effectiveness of those interventions in helping people to achieve their goal of stopping smoking.

The longstanding interventions to reduce the numbers of people with HIV/AIDS also fell within the same category of public health interventions (as just described in respect of smoking) once it was clear that the disease was a potential risk to a large section of the population. In this instance there was a large amount of public health promotional activity when the issue first became known in the 1980s, but this has largely subsided, with few if any national high-profile promotions or interventions today (Bertozzi, et al., 2006). However, the risk has not been eradicated and there has been an increase in older people developing the condition (AVERT, 2015). This may be because they do not consider themselves at risk, or because they were not targeted by the original campaigns, which focused on the homosexual community, needle-using drug addicts and sexually active younger adults; married couples were not then considered to be the primary target audience for such campaigns. This demonstrates that there is a need for ongoing interventions, particularly in terms of public awareness, to ensure a consistent and improving impact (AVERT, 2015). Unless a health risk has been eradicated, there is likely to be a need for continuing interventions to maintain public knowledge levels. The HIV/AIDS and anti-smoking campaigns directed at the wider population are examples of the bottom tiers of Frieden’s pyramid.

When interventions are applied in the top levels of Frieden’s pyramid they address individuals more directly, rather than the whole population (Frieden, 2010). Thus, it could be argued that such interventions would, overall, have a greater impact, as any population-level change ultimately requires each individual to change. Unless each person is reached by an intervention and perceives it as a valuable change for them, interventions directed at the general public will have reduced effectiveness. National interventions will of necessity be broadly based and will, therefore, not reach all those at whom they are aimed, as some may feel the message does not apply to them. Interventions that are more specifically targeted to individuals can take into account their socio-economic status and other factors, making the interventions more easily seen to be applicable to them (Frieden, 2010).

A different view of public health interventions considers the situation of people with terminal or long-term conditions. Many of the interventions focus heavily on the medical model and do not take into account the impact on the patient or how they would prefer to be cared for. The medical view of what constitutes good health may be considered a more laboratory-based, theoretical view that does not necessarily reflect the lived experience of individuals (Higgs, et al., 2005). Physical incapacity may not impact badly on an individual who has found ways to live a fulfilling life, whilst someone who is considered fit and well may not consider that they have a good quality of life (Asadi-Lari, et al., 2004). Therefore, the impact of interventions on the public also needs to be considered. A medically effective intervention may be unpleasant or difficult for the patient to endure and thus be viewed as less effective. Furthermore, if the intervention is too unpleasant the patient may fail to comply and thus not obtain the level of effectiveness that the medical model would suggest it should (Asadi-Lari, et al., 2004).

One area of public health that has proved somewhat controversial in recent years is immunisation. The suggested link between the MMR vaccine and autism, for instance, has impacted heavily on the numbers of people having their children immunised (BMJ, 2013). Vaccination is an important branch of public health and relies upon sufficient people being immunised against diseases so that, should isolated cases occur, the disease will not spread. Many parents today will be unaware of the health implications of illnesses such as German measles and mumps because vaccination has made cases rare. The rarity of cases has also led to the incorrect belief that these illnesses have been eradicated. In this instance, therefore, the effectiveness of the intervention has been affected by the influence of media reports of adverse outcomes. The fear that was generated has been difficult to overcome and has resulted in a loss of faith in the process, which in turn reduces the effectiveness of the intervention. It can prove very difficult to restore public support following situations such as this that have continued for a long time. The impact can be measured both in the numbers of people coming forward to have their children immunised and in the numbers of cases of the various illnesses that occur each year. The current statistics, however, do suggest that levels of immunisation with MMR have now been restored to an appropriate level (NHS, 2013).

The provision of the ‘flu vaccine is another instance where public health interventions may have varying effectiveness. Even a ‘good’ vaccine, formulated against the correct strains, is not considered to be 100% effective. In 2014, however, the vaccine did not match the main circulating strain of ‘flu, and so it provided little protection (Public Health England, 2015). As a result, it is likely that there will be a downturn in the numbers of people coming forward to receive the ‘flu vaccination this year, as its value may be perceived to be doubtful. This also demonstrates the need to provide the public with correct information so that they are aware of the potential effectiveness of the intervention. In the case of ‘flu, if the vaccine has a 60% chance of preventing the illness, this should perhaps be specifically stated. There may be a level at which the majority of people feel that it is not worth having the vaccination. If, hypothetically, an effectiveness of less than 30% was considered by the majority of people to be so low that it was not worth having the vaccination, few people might be immunised and a major epidemic could follow. Therefore, it is important that the information provided is correct and that the intervention itself is seen to be of sufficient value to the individual to warrant them making the choice to take advantage of what is offered (NHS, 2015).
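
As a point of reference, and not a figure drawn from the sources cited above, vaccine effectiveness of this kind is conventionally estimated by comparing attack rates in unvaccinated and vaccinated groups:

\[ \text{VE} = \frac{\text{ARU} - \text{ARV}}{\text{ARU}} \times 100\% \]

where ARU and ARV are the attack rates among unvaccinated and vaccinated people respectively. On this definition, a vaccine reported as 60% effective roughly reduces an individual’s risk of illness by 60% relative to remaining unvaccinated.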

Conclusion

This essay has asserted that the effectiveness of public health interventions can be viewed from two main perspectives: the cost-effectiveness of the provision and its impact on the target audience. Whilst there are considerable financial pressures on the NHS, cost should not be the primary consideration in respect of public health. The aim of public health interventions is to improve the health and well-being of the population as a whole, and a wide range of methods is used to achieve this. Some provisions are aimed at the whole population and others are designed for individuals or smaller target groups. For these to be effective, they need to reach the target audience and have meaning for them, so that they will be encouraged to take the required action. Continuous changes in provision may also be needed to ensure that long-term issues remain in the public awareness.

Bibliography

Asadi-Lari, M., Tamburini, M. & Gray, D., 2004. Patients’ needs, satisfaction, and health related quality of life: Towards a comprehensive model. Health and Quality of Life Outcomes, 2(32).

AVERT, 2015. HIV/AIDS Statistics 2012. [Online] Available at: http://www.avert.org/hiv-aids-uk.htm [Accessed 28 September 2015].

Bertozzi, S.; Padian, N.S.; Wegbreit, J.; DeMaria, L.M.; Feldman, B.; Gayle, H.; Gold, J.; Grant, R.; Isbell, M.T., 2006. Disease Control Priorities in Developing Countries. New York: World Bank.

BMJ, 2013. Measles in the UK: a test of public health competency in a crisis. BMJ, 346(f2793).

Brownson, R.C.; Baker, E.A.; Leet, T.L.; Gillespie, K.N.; True, W.R., 2010. Evidence-Based Public Health. Oxford: Oxford University Press.

Durkin, S., Brennan, E. & Wakefield, M., 2012. Mass media campaigns to promote smoking cessation among adults: an integrative review. Tobacco Control, Volume 21, pp. 127-138.

Frieden, T. R., 2010. A Framework for Public Health Action: The Health Impact Pyramid. American Journal of Public Health, 100(4), pp. 590–595.

Higgs, J., Jones, M., Loftus, S. & Christensen, N., 2005. Clinical Reasoning in the Health Professions. New York: Elsevier Health Sciences.

National Institute for Health and Care Excellence, 2015. Methods for the development of NICE public health guidance (third edition). [Online] Available at: https://www.nice.org.uk/article/pmg4/chapter/1%20introduction [Accessed 28 September 2015].

NHS, 2013. NHS Immunisation Statistics, London: NHS.

NHS, 2015. Flu Plan Winter 2015/16. [Online] Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/418038/Flu_Plan_Winter_2015_to_2016.pdf [Accessed 28 September 2015].

Public Health England, 2015. Flu vaccine shows low effectiveness against the main circulating strain seen so far this season. [Online] Available at: https://www.gov.uk/government/news/flu-vaccine-shows-low-effectiveness-against-the-main-circulating-strain-seen-so-far-this-season [Accessed 28 September 2015].

Scutchfield, F. & Keck, C., 2003. Principles of Public Health Practice. Clifton Park: Delmar Learning.

The King’s Fund, 2014. Making the case for public health interventions. [Online] Available at: http://www.kingsfund.org.uk/audio-video/public-health-spending-roi?gclid=CM_ExbKomcgCFcZuGwodE44Lkg [Accessed 28 September 2015].

Transference Countertransference Therapeutic Relationship

This work was produced by one of our professional writers as a learning aid to help you with your studies

Describe the transference-countertransference element of the therapeutic relationship

An examination of the development of transference and counter-transference as a therapeutic tool with an exploration of the ways in which it can be defined and used in a therapeutic setting, with an overview and brief discussion of the way the concept of transference/counter-transference has been received by different schools of therapy.

Introduction

This essay explores the development of transference and countertransference from their origins in Freud’s work to their current uses in different psychotherapeutic schools. The Kleinian contribution is identified as a major catalyst to re-thinking countertransference as a resource rather than simply an obstacle to treatment.

An unseemly event and a fortuitous discovery

In 1881, the physician Dr Josef Breuer began treating a severely disturbed young woman who became famous in the history of psychoanalysis as “Anna O”. She had developed a set of distressing symptoms, including severe visual disturbances, paralysing muscular spasms, paralyses of her left forearm and hand and of her legs, as well as paralysis of her neck muscles (Breuer, 1895, in Freud and Breuer 1895/2004, p. 26). Medical science could not explain these phenomena organically, save to designate them as symptoms of what was then known as “hysteria”, so Breuer took the radical step of visiting his young patient twice a day and listening carefully to her as she spoke about her troubles. He was to make a powerful discovery which deeply influenced his young assistant, Dr Sigmund Freud: whenever Anna found herself spontaneously recounting memories of traumatic events from her early history, memories she had hitherto had no simple access to through conscious introspection, her symptoms began to disappear one by one. But for the purposes of this essay, one event was to be of pivotal importance: just as Breuer was about to conclude his treatment of the young woman as a success, she declared to him that she was in love with him and was pregnant with his child.

Perhaps unsurprisingly, Breuer was traumatised and withdrew from this intimate method of treatment promptly. Freud’s original biographer, Ernest Jones, reports that Breuer and Freud originally described the incident as an “untoward” event (Jones, 1953, p. 250); but where Breuer admonished himself for experimenting with an unethically intimate method which may have made him seem indiscreet to the young woman, Freud studied the phenomenon with scrupulous scientific neutrality. He, too, had experienced spontaneous outbursts of apparent love from his psychotherapeutic patients, but as Jones (1953, p. 250) observes, he was certain that such declarations had little or nothing to do with any magnetic attraction on his part. The concept of transference was born: patients, Freud argued, find themselves re-experiencing intense reactions in the psychotherapeutic relationship which were in origin connected with influential others in their childhoods (such as parents or siblings). Without being aware of doing so, patients tended to transfer their earlier relationship issues onto the person of the therapist.

As Spillius, Milton, Couve and Steiner (2011) argue, at the time of the Studies in Hysteria, Freud tended to regard manifestations of transference as a predominantly positive force: the patient’s mistaken affections could be harnessed in the service of a productive alliance between therapist and client to explore and analyse symptoms. But by 1905, his thinking about transference began to undergo a profound change. Already aware that patients could direct unjustifiably hostile feelings toward the analyst as well as affectionate ones, his work with the adolescent “Dora” shook him deeply when she abruptly terminated her analysis in a surprisingly unkind and perfunctory manner (Freud, 1905/2006). He had already worked out that both the positive and negative manifestations of transference functioned as forms of resistance to the often unpleasant business of understanding one’s own part in the events of the past (it is, for example, a good deal easier to lay the blame for one’s present-day failings on “bad” or unsupportive figures from the past or on their selected stand-ins in the present than it is to acknowledge that one rejected or failed to make full use of one’s opportunities). But he began to realise that Dora had actively repeated a pattern of relationship-behaviour with him that had actually arisen from her unacknowledged hostility toward her father, as well as to a young man she had felt attracted to, because both had failed to show her the affection and consideration she believed herself entitled to.

She took her revenge out on Freud – and she was not alone in actively re-enacting critical relationship scenarios inside the therapeutic relationship; other patients, he began to see, also frequently actively relived relational patterns in this way while totally unaware that they were repeating such established patterns. By 1915, transference was no longer, for Freud, a resistance to recovering hazy and unpleasant memories; instead, it was an active, lived repetition of earlier relationships based on mistakenly perceived similarities between here-and-now characteristics of the analyst and there-and-then characteristics of previously loved or hated figures (Freud, 1915/2003).

The interplay between psychical reality and social reality

Melanie Klein, a pioneer of child psychoanalysis, accepted Freud’s view of transference as a form of re-enactment, but using her meticulous observations of the free play of very young (and very disturbed) child patients, she began to develop the view that it was not the dim-and-distant past that was re-enacted but, on the contrary, the present. Psychical reality and social reality were not coterminous or even continuous; they were involved instead in a ceaseless dialectical interplay (Likierman, 2001, esp. pp. 136 – 144). Real people may constitute the child’s external world, but for Klein, the only way to make sense of the often violent and disturbing content of the children’s play she observed was to posit the existence of a psychical reality dominated by powerful unconscious phantasies involving frighteningly destructive and magically benevolent inner figures or “objects” (Klein, 1952/1985). Children did not simply re-enact actual interpersonal relationships; they re-enacted relationships between themselves and their unique unconscious phantasy objects. In spontaneous play, children were dramatising and seeking to master or domesticate their own worst fears and anxieties, she believed.

Klein’s thought has changed the way transference is viewed in adult psychotherapy, too. If transference involves not simply the temporal transfer of unremembered historical beliefs into the present but the immediate transfer of phantasies, in the here-and-now, which are active in the patient’s mind, handling transference becomes a matter of immediate therapeutic concern: one does not have to wait until a contingency in the present evokes an event from the past, nor for the patient to make direct references to the therapist in her associations, because a dynamic and constantly shifting past is part of the present from the first moments of therapy in Kleinian thought. For example, Segal (1986, pp.8 – 10) describes a patient opening her first therapy session by talking about the weather – it’s cold and raining outside. Of all the issues a patient could choose to open a session – the latest political headlines, a currently active family drama, a dream, a quarrel with a work colleague, and so on – it is always significant when a patient “happens” to select a particular theme; for Segal, following Klein, this selection indicates the activity of unconscious phantasy objects. Transference is immediate: Segal asks whether the patient is actually exploring, via displacement onto the weather, her transferential fear that the analyst may be an unfriendly, cold, and joy-dampening figure.

Countertransference, its development and its use by different schools of therapy

The foregoing has focussed on transference but implicit throughout has been the complementary phenomenon of countertransference, from Breuer’s shocked withdrawal from Anna O’s transferential love to Freud’s distress at being abruptly abandoned by Dora who, he later realised, was re-enacting a revenge scenario. Intensely aware that emotions could be roused all too easily in the analyst during a psychoanalytic treatment, Freud was exceptionally circumspect about any form of expression of these feelings to the patient. In his advice to practitioners, he suggested that the optimal emotional stance for the therapist was one of “impartially suspended attention” (Freud, 1912b/2002, p. 33). He did not, however, intend this to be a stable, unfluctuating position of constantly benevolent interest; he urged therapists to be as free of presuppositions and as open-minded as possible to their patients’ spoken material, to be willing to be surprised at any moment, and to allow themselves the freedom to shift from one frame of mind to another. But he was unambiguous in his advice about how the therapist should comport him- or herself during analysis:

“For the patient, the doctor should remain opaque, and, like a mirror surface, should show nothing but what is shown to him.” (Freud, 1912b, p. 29)

As his paper on technique makes clear, Freud considered the stirring up of intense emotions on the part of the therapist as inevitable during analytic work; but he also considered these responses to the patient an obstacle to analytic work, the stirring up of the therapist’s own psychopathology which required analysis rather than in-session expression. The analyst had an obligation to remove his own blind-spots so as to attend to the patient’s associations as fully and as free from prejudice as possible.

By the 1950s, psychoanalysts were beginning to explore countertransference as a potential source of insight into the patient’s mind. As Ogden (1992) draws out in his exploration of the development of Melanie Klein’s notion of projective identification, Kleinian analysts such as Wilfred Bion, Roger Money-Kyrle, Paula Heimann and Heinrich Racker began arguing that it was an interpersonal mechanism rather than an intrapsychic one (as Klein had intended). Patients, they believed, could evoke aspects of their own psychic reality, especially those aspects that they found difficult to bear, inside the mind of the analyst by exerting subtle verbal and behavioural pressures on the therapist. Therapists should not, therefore, dismiss such evoked emotions as purely arising from their own psychopathology, but should regard them as a form of primitive, para- or pre-verbal communication from the patient. As Ogden (a non-Kleinian) puts it:

“Projective identification is that aspect of transference that involves the therapist being enlisted in an interpersonal actualization (an actual enactment between patient and therapist) of a segment of the patient’s internal object world.”
(Ogden, 1992, p. 69)

Countertransference, in other words, when handled carefully and truthfully by the therapist, can be a resource rather than an obstacle, and as such it has spread well beyond the Kleinian School. For example, while advocating caution in verbalising countertransference effects in therapy, the Independent psychoanalyst Christopher Bollas (1987) suggests that the analyst’s mind can be used by patients as a potential space, a concept originally developed by Winnicott (1974) to designate a safe, delimited zone free of judgement, advice and emotional interference from others, within which people can creatively express hitherto unexplored aspects of infantile experience. Bollas cites the example of a patient who recurrently broke off in mid-sentence just as she was starting to follow a line of associations, remaining silent for extended periods. Initially baffled and then slightly irritated, Bollas worked on exploring his countertransference response carefully over several months of analytic work. He eventually shared with her a provisional understanding that came from his own experience of feeling that he was, paradoxically, in the company of someone who was absent: physically present but not emotionally attentive or available. He told her that he had noticed that her prolonged silences left him in a curious state, which he wondered was her attempt to create a kind of absence he was meant to experience. The intervention immediately brought visible relief to the patient, who was eventually able to connect with previously repressed experiences of living her childhood with an emotionally absent mother (Bollas, 1987, pp. 211 – 214).
Other schools of psychoanalytic therapy such as the Lacanians remain much more aligned with Freud’s original caution, believing that useful though countertransference may be, it should never be articulated in therapy but taken to supervision or analysis for deeper understanding (Fink, 2007).

References

Bollas, C. (1987). Expressive uses of the countertransference: notes to the patient from oneself. In C. Bollas, The Shadow of the Object: Psychoanalysis of the Unthought Known (pp. 200 – 235). London: Free Associations Books.

Breuer, J. (1893-5/2004). Fraulein Anna O. In S. Freud, & J. Breuer, Studies in Hysteria (pp. 25 – 50). London and New York: Penguin (Modern Classics Series).

Fink, B. (2007). Handling Transference and Countertransference. In B. Fink, Fundamentals of Psychoanalytic Technique: A Lacanian Approach for Practitioners (pp. 126 – 188). New York and London: W.W. Norton & Company.

Freud, S. (1905/2006). Fragment of an Analysis of Hysteria (Dora). In S. Freud, The Psychology of Love (pp. 3 – 109). London and New York: Penguin (Modern Classics Series).

Freud, S. (1912/2002). Advice to Doctors on Psychoanalytic Treatment. In S. Freud, Wild Analysis (pp. 33 – 41). London and New York: Penguin (Modern Classics Series).

Freud, S. (1912/2002). On The Dynamics of Transference. In S. Freud, Wild Analysis (pp. 19 – 30). London and New York: Penguin (Modern Classics Series).

Freud, S. (1915/2003). Remembering, Repeating and Working Through. In S. Freud, Beyond the Pleasure Principle and Other Writings (pp. 31 – 42). London and New York: Penguin (Modern Classics Series).

Jones, E. (1953). The Life and Work of Sigmund Freud: The Formative Years and the Great Discoveries, 1856-1900 – Vol. 1. New York: Basic Books.

Klein, M. (1952/1985). The Origins of Transference. In M. Klein, Envy and Gratitude and Other Works (pp. 48 – 60). London: The Hogarth Press & The Institute of Psycho-Analysis.

Likierman, M. (2001). Melanie Klein: Her Work in Context. London and New York: Continuum.

Ogden, T. (1992). Projective Identification and Psychotherapeutic Technique. London: Maresfield Library.

Segal, H. (1986). Melanie Klein’s Technique. In H. Segal, The Work of Hanna Segal: Delusion and Artistic Creativity & Other Psycho-analytic Essays (pp. 3 – 34). London: Free Associations Books/Maresfield Library.

Spillius, E., Milton, J., Garvey, P., Couve, C., & Steiner, D. (2011). The New Dictionary of Kleinian Thought. East Sussex and New York: Routledge.

Winnicott, D. W. (1974). Playing and Reality. London: Pelican.

The role of Epidemiology in Infection Control

This work was produced by one of our professional writers as a learning aid to help you with your studies

The role of epidemiology in infection control and the use of immunisation programs in preventing epidemics

The discipline of epidemiology is broadly defined as “the study of how disease is distributed in populations and the factors that influence or determine this distribution” (Gordis, 2009: 3). Among a range of core epidemiologic functions recognised (CDC, 2012), monitoring and surveillance as well as outbreak investigation are most immediately relevant to identifying and stopping the spread of infectious disease in a population.

Most countries perform routine monitoring and surveillance on a range of infectious diseases of concern to their respective jurisdictions. This allows health authorities to establish a baseline of disease occurrence. Based on these data, it is possible to subsequently discern sudden spikes or divergent trends and patterns in infectious disease incidence. In addition to causes of death, which are routinely collected in most countries, many health authorities also maintain a list of notifiable diseases. In the UK, the list of reportable diseases and pathogenic agents maintained by Public Health England includes infectious diseases such as Tuberculosis and Viral Haemorrhagic Fevers, strains of influenza, vaccine-preventable diseases such as Whooping Cough or Measles, and food-borne infectious diseases such as gastroenteritis caused by Salmonella or Listeria. (Public Health England, 2010) At the international level, the World Health Organization requires its members to report any “event that may constitute a public health emergency of international concern” (International Health Regulations, 2005). Cases of Smallpox, Poliomyelitis, Severe Acute Respiratory Syndrome (SARS), and new influenza strains are always notifiable. (WHO, undated) These international notification duties allow for the identification of trans-national patterns by collating data from national surveillance systems. Ideally, the system would enable authorities to anticipate and disrupt further cross-national spread by alerting countries to the necessity of tightened control at international borders or even by instituting more severe measures such as bans on air travel from and to affected countries.

As explained in the previous paragraph, data collected routinely over a period of time allows authorities to respond to increases in the incidence of a particular disease by taking measures to contain its spread. This may include an investigation into the origin of the outbreak, for instance the nature of the infectious agent or the vehicle. In other cases, the mode of transmission may need to be clarified. These tasks are part of the outbreak investigation. Several steps can be distinguished in the wake of a concerning notification or the determination of an unusual pattern. These include the use of descriptive epidemiology and analytical epidemiology, the subsequent implementation of control measures, as well as reporting to share experiences and new insights. (Reintjes and Zanuzdana, 2010)

In the case of an unusual disease, such as the recent Ebola outbreak in West Africa resulting in isolated cases in Western Europe, it might not be necessary to engage in further epidemiological analysis once the diagnosis has been confirmed. Instead, control measures would be implemented immediately and might include ensuring best practice isolation of the patient and contact tracing to ensure that the infection does not spread further among a fully susceptible local population. Similarly, highly pathogenic diseases such as meningitis that tend to occur in clusters might prompt health authorities to close schools to disrupt the spread. In other types of outbreak investigations, identifying the exact disease or exact strain of an infectious agent is the primary epidemiologic task. This might, for instance, be the case if clusters of relatively non-specific symptoms occur and need to be confirmed as linked to one another and identified as either a known disease/infectious agent or be described and named. In the same vein, in food-borne infectious diseases, the infectious organism and vehicle of infection may have to be pinpointed by retrospectively tracing food intake, creating comparative tables, and calculating measures of association between possible exposures and outcome (CDC, 2012). Only then can targeted control measures such as pulling product lots from supermarket shelves and issuing a public warning be initiated.
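
As a generic illustration of such a measure of association (the numbers here are hypothetical, not drawn from any actual outbreak), the attack rates among those who did and did not consume a suspect food can be compared as a risk ratio:

\[ RR = \frac{a/(a+b)}{c/(c+d)} \]

where a and b are the numbers of exposed people who did and did not become ill, and c and d are the corresponding numbers among the unexposed. If, say, 30 of 60 diners who ate a suspect dish fell ill (attack rate 50%) against 5 of 50 who did not eat it (attack rate 10%), the risk ratio would be 0.5/0.1 = 5, pointing towards that dish as the likely vehicle.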

Beyond identifying and controlling infectious disease outbreaks, monitoring and surveillance also play a role in ensuring that primary prevention works as effectively as possible: collecting information on behavioural risk factors in cases such as sexually transmitted diseases can help identify groups that are most at risk and where Public Health interventions may yield the highest benefit. In another example, monitoring immunisation coverage and analysing the effectiveness of vaccines over the life course may predict epidemics in the making if coverage is found to be decreasing or immunity appears to decline in older populations. In addition, the ability to anticipate the potential spread of disease with a reasonable degree of confidence hinges not only on good data collection: advanced epidemiological methods such as mathematical modelling are equally instrumental in predicting possible outbreak patterns. Flu vaccines, for instance, need to be formulated long before the onset of the annual flu season. The particular strains against which the vaccines are to provide immunity can only be determined from past epidemiological data and modelling. (M’ikanatha et al., 2013) Mathematical models have also played a role in determining the most effective vaccine strategies, including target coverage and ideal ages and target groups, to eliminate the risk of epidemic outbreaks of infectious diseases (Gordis, 2009).

In addition to controlling outbreaks at the source and assuring that key protective strategies such as mass immunisation are effectively carried out, epidemiology is also a tool that allows comprehensive planning for potential epidemics. A scenario described in a research article by Ferguson and colleagues (2006) has as its premise a novel, and therefore not immediately vaccine-preventable, strain of influenza that has defied initial attempts at control and reached pandemic proportions. The large-scale simulation of the theoretical epidemic assesses the potential of several intervention strategies to mitigate morbidity and mortality: international border and travel restrictions, a measure that is often demanded as a kneejerk reaction by policy-makers and citizens, are found to have minimal impact, at best delaying spread by a few weeks even if generally adhered to (Ferguson et al., 2006). By contrast, interventions such as household quarantines or school closures that are aimed at interrupting contact between cases, potential carriers, and susceptible individuals are much more effective. (Ferguson et al., 2006) Time-sensitive antiviral treatment and post-exposure prophylaxis using the same drugs are additional promising strategies identified. (Ferguson et al., 2006) The latter two potential interventions highlight the role of epidemiological risk assessment in translating the anticipated spread of infectious disease into concrete emergency preparedness. For instance, both mass treatment and mass post-exposure prophylaxis require advance stockpiling of antivirals. During the last H1N1 epidemic, public and political concern emerged over shortages of the antiviral drug oseltamivir (brand name Tamiflu). (De Clercq, 2006) However, advance stockpiling requires political support and significant resources at a time when governments are trying to rein in health spending and the threat is not immediate. Thus, epidemiologists also need to embrace the role of advocates and advisors who communicate scientific findings and evidence-based projections to decision-makers.

That being said, immunisation remains the most effective primary preventive strategy for the prevention and control of epidemics. As one of the most significant factors in the massive decline of morbidity and mortality from infectious disease in the Western world over the last century, vaccination accounts for an almost 100% reduction of morbidity from nine vaccine-preventable diseases such as Polio, Diphtheria, and Measles in the United States between 1900 and 1990. (CDC, 1999) Immunisation programmes are designed to reduce the incidence of particular infectious diseases by decreasing the number of susceptible individuals in a population. This is achieved by administering vaccines which stimulate the body’s immune response. The production of specific antibodies allows the thus-primed adaptive immune system to eliminate the full-strength pathogen when an individual is subsequently exposed to it. The degree of coverage necessary to achieve so-called herd immunity (the collective protection of a population even if not every single individual is immune) depends on the infectivity and pathogenicity of the respective infectious agent. (Nelson, 2014) Infectivity, in communicable diseases, measures the percentage of infections out of all individuals exposed, whereas pathogenicity is the percentage of infected individuals that progress to clinical disease. (Nelson, 2014) Sub-clinical or inapparent infections are important to take into account because, even though they show no signs and symptoms of disease, people may still be carriers capable of infecting others. Polio is an example of an infectious disease where most infections are inapparent, but individuals are infectious. (Nelson, 2014)
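
Stated as simple proportions (an illustrative restatement of the definitions above rather than formulas taken from Nelson, 2014):

\[ \text{Infectivity} = \frac{\text{number infected}}{\text{number exposed}} \times 100\%, \qquad \text{Pathogenicity} = \frac{\text{number with clinical disease}}{\text{number infected}} \times 100\% \]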

Gauging infectivity is crucial to estimating the level of coverage needed to reach community immunity. The so-called basic reproductive rate is a numerical measure of the average number of secondary infections attributable to one single source of disease, e.g. one infected individual. The rate is calculated by taking into account the average number of contacts a case makes, the likelihood of transmission at each contact point, and the duration of infectiousness. (Kretzschmar and Wallinga, 2010) The higher the reproductive rate, i.e. the theoretical number of secondary cases, the higher the percentage of the population that needs to be immunised in order to prevent or interrupt an outbreak of epidemic proportions. For instance, smallpox, which was successfully eradicated in 1980 (World Health Organization, 2010), is estimated to have a basic reproduction number of around 5, requiring a coverage of only 80% of the population to achieve herd immunity. By contrast, the estimated reproduction number for Measles is around 20 and it is believed that immunisation coverage has to reach at least 96% for population immunity to be ensured. (Kretzschmar and Wallinga, 2010) Once the herd immunity threshold is reached, the remaining susceptible individuals are indirectly protected by the immunised majority around them: in theory, no pathogen should be able to reach them because nobody else is infected or an asymptomatic carrier. Even if the unlikely event of an infection among the unvaccinated were to occur, the chain of transmission should be immediately interrupted thanks to the immunised status of all potential secondary cases. Vaccinating primary contacts of isolated cases is also an important containment strategy where a cluster of non-immune individuals has been exposed to an infected individual. Such scenarios may apply, for example, where groups of vaccine objectors or marginalised groups not caught by the regular immunisation drive are affected, or where an imported disease meets a generally susceptible population.
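
Under the simplifying assumption of a homogeneously mixing population (a standard textbook approximation rather than a result taken from the sources cited above), the relationship between the basic reproduction number R0 and the herd immunity threshold H is often summarised as:

\[ H = 1 - \frac{1}{R_0} \]

so that R0 of around 5 for smallpox gives H of roughly 1 − 1/5 = 80%, while R0 of around 20 for measles gives H of roughly 1 − 1/20 = 95%, broadly consistent with the coverage figures quoted above.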

However, epidemic prevention does not stop with having reached vaccination targets. Instead, constant monitoring of current coverage is required and adaptations of the immunisation strategy may be needed to ensure that epidemics are reliably prevented. Recent trends underscore the enduring challenge of permanently keeping at bay even diseases that are considered eliminated or close to elimination: in the United Kingdom, a marked spike in the number of confirmed measles cases has been observed in the last decade, with an increase from under 200 cases in 2001 to just over 2,000 cases in 2012. (Oxford Vaccine Group, undated) The underlying cause is evident from a comparison of case numbers with data from vaccine coverage monitoring: indeed, the number of children receiving the combination Measles vaccine decreased in the 2000s roughly in parallel with the increase in Measles incidence. (Oxford Vaccine Group, undated) Other countries have seen similar trends and have responded with measures intended to increase vaccine uptake: for instance, in Australia, the government recently decided to enact measures that would withhold child benefit payments from parents who refuse to have their children vaccinated. (Lusted and Greene, 2015)

In conclusion, epidemiology, and in particular routine monitoring and surveillance, is a potent tool that enables health authorities to anticipate, detect, and contain the spread of infectious disease. Over the last century, immunisation has proven itself as one of the key interventions to curb infectious disease morbidity and mortality. However, with vaccine-preventable diseases again on the rise in the UK and other industrialised countries, epidemiologic monitoring of vaccine coverage and disease incidence remains critically important. Where vaccines are not available or vaccine-induced immunity is short-lived, an effective system to detect cases and contain outbreaks is even more instrumental to the effort of preventing infectious disease epidemics.

Bibliography

Centers for Disease Control and Prevention (CDC) (2012) Principles of Epidemiology in Public Health Practice, 2nd edition, Atlanta, GA: US Department of Health and Human Services.

Centers for Disease Control and Prevention (CDC) (1999) ‘Achievements in Public Health, 1900-1999 Impact of Vaccines Universally Recommended for Children — United States, 1990-1998’, MMWR, vol. 48, no. 12, pp. 243-248.

De Clercq, E. (2006) ‘Antiviral agents active against influenza A viruses’, Nature Reviews Drug Discovery, vol. 5, no. 12, pp. 1015-1025.

Ferguson, N. et al. (2006) ‘Strategies for mitigating an influenza pandemic’, Nature, vol. 442, July, pp. 448-452.

Gordis, L. (2009) Epidemiology, 4th edition, Philadelphia, PA: Saunders Elsevier.

Kretzschmar, M. and Wallinga, J. (2010) ‘Mathematical Models in Infectious Disease Epidemiology’, in: Kramer, A. et al. (ed.) Modern Infectious Disease Epidemiology, New York, NY: Springer Science + Business Media.

Lusted, P. and Greene, A. (2015) Childcare rebates could be denied to anti-vaccination parents under new Federal Government laws. ABC News [Online], Available: http://www.abc.net.au/news/2015-04-12/parents-who-refuse-to-vaccinate-to-miss-out-on-childcare-rebates/6386448 [12 Feb 2015].

M’ikanatha, N. et al. (2013) ‘Infectious disease surveillance: a cornerstone for prevention and control’, in: M’ikanatha, N. et al. (ed.) Infectious Disease Surveillance, 2nd edition, West Sussex, UK: John Wiley & Sons.

Nelson, K. (2014) ‘Epidemiology of Infectious Disease: General Principles’, in: Nelson, K., Williams, C. and Graham, N. (ed.) Infectious disease epidemiology: theory and practice, 3rd edition, Burlington, MA: Jones & Bartlett Learning.

Oxford Vaccine Group (undated) Measles [Online], Available: http://www.ovg.ox.ac.uk/measles [12 Feb 2015].

Public Health England (first published 2010) Notifications of infectious diseases (NOIDs) and reportable causative organisms: legal duties of laboratories and medical practitioners [Online], Available: https://www.gov.uk/notifiable-diseases-and-causative-organisms-how-to-report [12 Feb 2015].

Reintjes, R. and Zanuzdana, A. (2010) ‘Outbreak Investigations’, in: Kramer, A. et al. (ed.) Modern Infectious Disease Epidemiology, New York, NY: Springer Science + Business Media.

World Health Organization (WHO) (2005). ‘Notification and other reporting requirements under the IHR’, IHR Brief, No. 2 [Online], Available: http://www.who.int/ihr/ihr_brief_no_2_en.pdf [12 Feb 2015].

World Health Organization (WHO). (2010) Statue Commemorates Smallpox Eradication. Available: http://www.who.int/mediacentre/news/notes/2010/smallpox_20100517/en/index.html [12 Feb 2015].

The Epidemiology of Alcohol Abuse and Alcoholism

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

According to Alcohol Concern Organisation (2015), more than 9 million people in England consume alcoholic beverages in excess of the recommended daily limits. In relation to this, the National Health Service (2015) recommends no more than 3 to 4 units of alcohol a day for men and 2 to 3 units a day for women. The large number of people consuming alcohol above the recommended limits highlights the reality that alcoholism is a major health concern in the UK which can lead to a multitude of serious health problems. Moss (2013) states that alcoholism and chronic use of alcohol are linked to various medical, psychiatric, social and family problems. To add to this, the Health and Social Care Information Centre (2014) reported that, between 2012 and 2013, there were a total of 1,008,850 admissions related to alcohol consumption, where an alcohol-related disease, injury or condition was the primary reason for hospital admission or a secondary diagnosis. This shows the detrimental impact of alcoholism on the health and overall wellbeing of millions of people in the UK. It is therefore vital to examine the aetiology of alcoholism in order to understand why so many people end up consuming excessive alcohol. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) (n.d.) supports this by stating that learning the natural history of a disorder will provide information essential for assessment and intervention and for the development of effective preventive measures. This essay will also look into the different public health policies that address the problem of alcoholism in the UK. A brief description of what alcoholism is will first be provided.

What is Alcoholism?

It is safe to declare that alcoholism is a lay term that simply means excessive intake of alcohol. It can be divided into two forms, namely alcohol misuse (or abuse) and alcohol dependence. Alcohol misuse simply means intake of alcohol in excess of the recommended limits (National Health Service Choices 2013). A good example of this is binge drinking.

Alcohol dependence is more serious because, according to the National Institute for Health and Care Excellence (2011, n.p.), it “indicates craving, tolerance, a preoccupation with alcohol and continued drinking regardless of harmful consequences” (e.g. liver disease). Under the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), these two have been combined into a single disorder called alcohol use disorder (AUD), with mild, moderate and severe sub-classifications (NIAAA 2015).

Genetic Aetiologic Factor of Alcoholism

Alcoholism is a complex disorder with several factors leading to its development (NIAAA 2005). Genetics and other biological aspects can be considered as one factor involved in the development of alcohol abuse and dependence (NIAAA 2005). Other factors include cognitive, behavioural, temperament, psychological and sociocultural (NIAAA 2005).

According to Goodwin (1985), as far back as the era of Aristotle and the Bible, alcoholism was believed to run in families and thus to be inheritable. To some extent, there is some basis for this ancient belief because, in reality, alcoholic parents are about four to five times more likely to have alcoholic children (Goodwin 1985). Today, this belief still lacks clear and direct research-based evidence; on the other hand, studies do not deny the role of genetics in alcoholism either. With this in view, it is therefore safe to argue that genetics is still considered an important aetiologic factor in alcoholism.

The current consensus indicates that an individual’s predisposition to becoming an alcoholic involves more than a single gene or two. Scutti (2014) reports that although scientists have known for some time that genetics plays an active role in alcoholism, they also propose that an individual’s inclination to be dependent on alcohol is more complicated than the simple presence or absence of any one gene. The National Institute on Alcohol Abuse and Alcoholism (2008) states that no single gene fully controls a person’s predisposition to alcoholism; rather, multiple genes play different roles in a person’s susceptibility to becoming an alcoholic. The NIAAA (2005) further claims that the evidence for a genetic factor in alcoholism lies mainly with studies that involve extended pedigrees, those that involve identical and fraternal twins, and those that include adopted individuals raised apart from their alcoholic parents.

For pedigree studies, it is believed that the risk of suffering from alcoholism is increased four- to seven-fold among first-degree relatives of an alcoholic (Cotton, 1979; Merikangas, 1990, cited in NIAAA, 2005). First-degree relatives include parents, children and siblings; hence, a child is four to seven times more likely to become an alcoholic if one or both of their parents are alcoholics. Moss (2013) supports this by stating that children whose parents are alcoholic are at higher risk of becoming alcoholics themselves when compared to children whose parents are non-alcoholics.

A study conducted by McGue, Pickens and Svikis (1992, cited in NIAAA, 2005) revealed that identical twins generally have a higher concordance rate of alcoholism than fraternal twins or non-twin siblings. This means that a person who has an alcoholic identical twin has a higher risk of becoming an alcoholic than a person whose alcoholic twin is merely a fraternal twin or a non-twin sibling. This study further supports the role of genetics in alcoholism because identical twins are genetically identical; hence, if one is alcoholic, the other must also carry any genetic predisposition to alcoholism.

The genetic factor in alcoholism is further bolstered by studies of adopted children conducted by Cloninger, Bohman and Sigvardsson (1981, cited in NIAAA, 2005) and Cadoret, Cain and Grove (1980, cited in NIAAA, 2005), in which the aim was to separate the genetic factor from the environmental factor of alcoholism. In these studies, children of alcoholic parents were adopted and raised away from their alcoholic parents but, despite this, some of these children still developed alcoholism as adults at a higher rate than adopted children who did not have an alcoholic biological parent (Cloninger et al., 1981, cited in NIAAA, 2005; Cadoret et al., 1980, cited in NIAAA, 2005).

One interesting aspect of the genetic aetiologic factor is that, although there are genes that increase the risk of alcoholism, there are also genes that protect an individual from becoming an alcoholic (NIAAA 2008). For example, some people of Asian ancestry carry a gene that modifies their rate of alcohol metabolism, causing them to manifest symptoms such as flushing, nausea and tachycardia; these reactions generally lead them to avoid alcohol, and thus it can be said that this gene actually helps protect those who possess it from becoming alcoholic (NIAAA 2008).

Environment as an Aetiologic Factor of Alcoholism

Another clearly identifiable factor is environment, which involves the way an individual is raised and his or her exposure to different kinds of activities and opportunities. The National Institute on Alcohol Abuse and Alcoholism (2005) relates that the genetic factor and the environmental factor have a close relationship in triggering alcoholism in an individual. This can be explained by the simple fact that even if an individual is genetically predisposed to becoming an alcoholic, if he is not exposed to a particular kind of environment which triggers activities that lead to alcohol intake, the likelihood of his becoming an alcoholic will be remote.

There are certain aspects within the environment that make it an important aetiologic factor. According to Alcohol Policy MD (2005), these aspects include acceptance by society, availability, and public policies and their enforcement.

Acceptance in this case refers to the idea that drinking alcohol, even in amounts that should be deemed excessive, is somewhat encouraged through mass media, peer attitudes and behaviours, role models, and the overall view of society. Television series, films and music videos glorify drinking sprees and even drunken behaviour (Alcohol Policy MD 2005). TV and film actors, sports figures, peers and local role models also encourage a positive attitude towards alcohol consumption which overshadows the reality of what alcohol drinking can lead to (Alcohol Policy MD 2005). In relation to this, a review of different studies conducted by Grube (2004) revealed that mass media, in the form of television shows for instance, has an immense influence on young people (aged 11 to 18) when it comes to alcohol consumption. In films, portrayals of the negative impact of alcohol drinking are rare and often reinforce the idea that drinking has no negative impact on a person’s overall wellbeing (Grube 2004). In support of these findings, a systematic review of longitudinal studies conducted by Anderson et al. (2009) revealed that constant alcohol advertising in the mass media can lead adolescents to start drinking, or to increase their consumption if they already drink.

Availability of alcoholic drinks is another important environmental aetiologic factor of alcoholism, simply because, no matter how predisposed an individual is to becoming an alcoholic, the risk of alcoholism will still be low if alcoholic drinks are not available. On the other hand, if alcoholic beverages are readily available, as they often are today, then the risk of alcoholism is increased not only for those who are genetically predisposed to alcoholism but even for those who do not carry the “alcoholic genes”. The more licensed liquor stores there are in an area, the more likely people are to drink (Alcohol Policy MD 2005). The cheaper the price, the more affordable it is for people to buy alcohol and consume it in excess (Alcohol Policy MD 2005).

Another crucial environmental aetiologic factor is the presence or absence of policies that regulate alcohol consumption, and whether their enforcement is strict or lax. Such policies include restricting alcohol consumption in specified areas, enacting stricter statutes concerning drunk driving and providing penalties for those who sell to, buy for or serve alcohol to underage individuals (Alcohol Policy MD 2005). It is worth pointing out that in the UK the drinking age is 18, and a person can be stopped, fined or even arrested by police if he or she is below this age and is seen drinking alcohol in public (Government UK 2015a). It is also against the law for someone to sell alcohol to an individual below 18; however, an individual aged 16 or 17, when accompanied by an adult, can drink (but not buy) beer, wine or cider with a meal in a pub (Government UK 2015a).

Policies to Combat Alcoholism

One public health policy that can help address the problem of alcoholism is the mandatory code of practice for alcohol retailers, which banned irresponsible alcohol promotions and competitions, obliged retailers to provide free drinking water, compelled them to offer smaller measures and required them to operate a proof-of-age protocol. It can be argued that this policy addresses the problem of alcoholism by restricting the acceptance, availability and advertising of alcohol (Royal College of Nursing 2012). Another is the Police Reform and Social Responsibility Act 2011, a statute that enables local authorities to take a tougher stance on establishments which break licensing rules on alcohol sales (Royal College of Nursing 2012).

There is also the policy paper on harmful drinking, which sets out different strategies for addressing the problem of alcoholism. One such strategy is the advancement of the Change4Life campaign, which promotes a healthy lifestyle and emphasises the recommended daily limits of alcohol intake for men and women (Government UK 2015b). Another strategy within this policy is an alcohol risk assessment as part of the NHS Health Check for adults aged 40 to 74 (Government UK 2015b). This policy aims to prevent rather than cure alcoholism, which seems logical since, after all, prevention is better than cure.

Conclusion

Alcoholism, which includes both alcohol misuse and alcohol dependence, is a serious health problem affecting millions in the UK. Its aetiology is a combination of different factors. One vital factor is genetics: it can be argued that some people are predisposed to becoming alcoholics. For example, an individual is at higher risk of becoming an alcoholic if he or she has a parent who is also an alcoholic. When coupled with environmental factors, the risk of suffering from alcoholism becomes even greater. Environment refers to the acceptability and availability of alcohol and the presence or absence of policies that regulate alcohol sale and consumption. Vital health policies, such as the government’s Harmful Drinking policy paper, are important preventive measures in reducing the incidence and prevalence of alcoholism in the UK.

References

Alcohol Concern Organisation (2015). Statistics on alcohol. [online]. Available from: https://www.alcoholconcern.org.uk/help-and-advice/statistics-on-alcohol/ [Accessed on 28 September 2015].

Alcohol Policy MD (2005). The effects of environmental factors on alcohol use and abuse. [online]. Available from: http://www.alcoholpolicymd.com/alcohol_and_health/study_env.htm [Accessed on 28 September 2015].

Anderson, P., de Brujin, A., Angus, K., Gordon, R. and Hastings, G. (2009). Impact of alcohol advertising and media exposure on adolescent alcohol use: A systematic review of longitudinal studies. Alcohol and Alcoholism. 44(3):229-243.

Goodwin, D. (1985). Alcoholism and genetics: The sins of the fathers. JAMA Psychiatry. 42(2):171-174.

Government UK (2015a). Alcohol and young people. [online]. Available from: https://www.gov.uk/alcohol-young-people-law [Accessed on 28 September 2015].

Government UK (2015b). Policy paper: 2010 to 2015 government policy: Harmful drinking. [online]. Available from: https://www.gov.uk/government/publications/2010-to-2015-government-policy-harmful-drinking/2010-to-2015-government-policy-harmful-drinking [Accessed on 28 September 2015].

Grube, J. (2004). Alcohol in the media: Drinking portrayals, alcohol advertising, and alcohol consumption among youth. [online]. Available from: http://www.ncbi.nlm.nih.gov/books/NBK37586/ [Accessed on 28 September 2015].

Health and Social Care Information Centre (2014). Statistics on alcohol England, 2014. [online]. Available from: http://www.hscic.gov.uk/catalogue/PUB14184/alc-eng-2014-rep.pdf [Accessed on 28 September 2015].

Moss, H.B. (2013). The impact of alcohol on society: A brief overview. Social Work in Public Health. 28(3-4):175-177.

National Health Service (2015). Alcohol units. [online]. Available from: http://www.nhs.uk/Livewell/alcohol/Pages/alcohol-units.aspx [Accessed on 28 September 2015].

National Health Services Choices (2013). Alcohol misuse. [online]. Available from: http://www.nhs.uk/conditions/alcohol-misuse/pages/introduction.aspx [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2015). Alcohol use disorder: A comparison between DSM-IV and DSM-5. [online]. Available from: http://pubs.niaaa.nih.gov/publications/dsmfactsheet/dsmfact.pdf [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2008). Genetics of alcohol use disorder. [online]. Available from: http://www.niaaa.nih.gov/alcohol-health/overview-alcohol-consumption/alcohol-use-disorders/genetics-alcohol-use-disorders [Accessed on 28 September 2015].

National Institute on Alcohol Abuse and Alcoholism (2005). Module 2: Etiology and natural history of alcoholism. [online]. Available from: http://pubs.niaaa.nih.gov/publications/Social/Module2Etiology&NaturalHistory/Module2.html [Accessed on 28 September 2015].

National Institute for Health and Care Excellence (2011). Alcohol-use disorders: Diagnosis, assessment and management of harmful drinking and alcohol dependence. [online]. Available from: https://www.nice.org.uk/guidance/CG115/chapter/Introduction [Accessed on 28 September 2015].

Royal College of Nursing (2012). Alcohol: policies to reduce alcohol-related harm in England. [online]. Available from: https://www.rcn.org.uk/__data/assets/pdf_file/0005/438368/05.12_Alcohol_Short_Briefing_Feb2012.pdf [Accessed on 28 September 2015].

Scutti, S. (2014). Is alcoholism genetic? Scientists discover link to a network of genes in the brain. [online]. Available from: http://www.medicaldaily.com/alcoholism-genetic-scientists-discover-link-network-genes-brain-312668 [Accessed on 28 September 2015].

Student Diet & Health Concerns

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

The obesity epidemic observed in the UK and other Western nations over the past two decades has increased the focus on eating habits of the nation (James, 2008, p. S120). Obesity, most often caused by prolonged poor diet, is associated with an increased risk of several serious chronic illnesses, including diabetes, hypertension and hyperlipidaemia, as well as possibly being associated with increased risk of mental health issues including depression (Wyatt et al., 2006, p. 166). In an attempt to promote better health of the population and reduce the burden of obesity and related health conditions on the NHS, the recent government white paper Healthy Lives, Healthy People (HM Government, 2010, p. 19) has identified improvements in diet and lifestyle as a priority in public health policy.

The design of effective interventions for dietary behaviour change may rely on having a thorough understanding of the factors determining individual behaviour. Although there has been a great deal of research published on eating habits of adults and school children (e.g. Raulio et al., 2010, p. 987) there has been much less investigation of the university student subpopulation, particularly within the UK. This may be important given that the dietary choices of general populations vary markedly across different countries and cultures, including within the student population (Yahia et al., 2008, p. 32; Dodd et al., 2010, p. 73).

This essay presents a discussion of the current research available on the eating habits of UK undergraduate students, including recent work being undertaken at Coventry University (Arnot, 2010, online). The essay then describes a small study conducted to supplement this research, using data collected from six students at a different university, exploring the influences which underpin the decisions made by students relating to their diet. The results of this study are presented and used to derive a set of recommendations for both a localized intervention and a national plan, targeted at university students, to improve dietary behaviour.

Eating Habits of University Students

It is widely accepted that students leaving home to attend university are likely to experience a significant shift in their lifestyle, including their diet, and this is supported by research evidence from the UK and other European countries (Papadaki et al., 2007, p. 169). This may encompass increased alcohol intake, reduced intake of fruit and vegetables, and increased intake of processed or fatty foods, as well as impacting on overall eating patterns (Arnot, 2010, online; Dodd et al., 2010, p. 73; Spanos & Hankey, 2010, p. 102).

Results of a study including 80 undergraduate students from Scotland found that around a quarter of participants never consumed breakfast (Spanos & Hankey, 2010, p. 102). Skipping breakfast habitually has been shown to be associated with increased risk of obesity and overweight amongst adolescents (Croezen et al., 2009, p. 405). The precise reasons for this are not entirely clear, although it could be due to increased snacking on energy-dense, high-fat foods later in the day. This is based on the remainder of the results reported by Spanos and Hankey (2010, p. 102) which showed that three-quarters of students regularly used vending machines, snacking on chocolate bars and crisps; this was also shown to be significantly associated with body mass index (BMI).
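
As an illustrative aside (not drawn from the studies cited above), BMI is calculated as weight in kilograms divided by the square of height in metres; the short sketch below simply applies this standard formula together with the usual WHO adult thresholds, using a hypothetical student as the example.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2


def bmi_category(value: float) -> str:
    """Classify a BMI value using the standard WHO adult thresholds."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy weight"
    if value < 30:
        return "overweight"
    return "obese"


# Hypothetical example: a 70 kg student who is 1.75 m tall.
value = bmi(70, 1.75)
print(round(value, 1), bmi_category(value))  # 22.9 healthy weight
```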

Some studies have suggested that there may be different patterns of unhealthy eating amongst male and female groups of students. For example research conducted by Dr. Ricardo Costa and Dr. Farzad Amirabdollahian at Coventry University found that male students may be at risk of what they term “disordered eating patterns”. In addition, the study also suggests that males are at greater risk of not eating five portions of fruit and vegetables per day. This research is based on a substantial sample size, using data derived from in-depth interviews with approximately 130 undergraduates, although there are plans to increase this to include nearly 400 participants. It is acknowledged by the researchers that this may represent only those events occurring at one university, although there are also plans to expand the study sample across another two universities in the future (Arnot, 2010, online).

However, not all published studies support the existence of gender differences in eating behaviours. For example, research into risk factors for an unhealthy lifestyle reported by Dodd et al. (2010, p. 75) found no difference between genders in rates of eating five portions of fruit or vegetables per day.

Factors in Dietary Change

It is unsurprising that students’ dietary habits change when leaving home to attend university, since it has been identified that life transitions form a major factor in influencing eating habits (Lake et al., 2009, p. 1200). Studies have suggested that the dietary shift is most likely due to young adults leaving the family home and assuming responsibility for meal planning and preparation for the first time. This is supported by observations that university students who remain living at the family home may maintain a relatively healthier lifestyle than those moving out of home (Papadaki et al., 2007, p. 169). Early results from a Coventry University study also support this as a major factor, as it has been identified that cooking skills may be very limited amongst undergraduates, with the exception of mature students (Arnot, 2010, online).
Early results from Coventry University suggest that there is little evidence within their sample of any significant differences in eating habits between students from different social backgrounds (Arnot, 2010, online).

Arnot (2010, online) identifies that any trends in eating habits within the undergraduate population may reflect a phase, which the individuals may grow out of naturally. Lake et al. (2009, p. 1200) also suggest that changes in eating habits may simply be due to the life transition associated with the general maturation process, moving from adolescence to adulthood. This would then suggest that eating habit changes may be consistent across all groups of young adults, not differentiated within the undergraduate population. However, it is possible that the relationship between other factors such as stress may make the situation more complex, with university students possibly experiencing higher stress levels, therefore at increased risk of weight gain associated with diet change (Serlachius et al., 2007, p. 548).

Barriers and Facilitators to Healthy Eating

A systematic review of studies by Shepherd et al. (2005, p. 239) found that the major barriers to healthy eating included access to healthy foods, relative prices and personal preference, for example liking fast foods. This study also identified a lack of provision of healthy school meals as a major barrier, reflecting the fact that this review focused on exploring healthy eating in secondary school children, aged 11 to 16 years. It is therefore likely that different barriers are most important in the university student population, as this group takes a greater level of responsibility for their own food choices.

For example, evidence from the Coventry University study suggests that while undergraduate males were influenced by media images and were motivated to look good, this did not necessarily translate to improved healthy food choices. Instead, this appears to be associated with an increased risk of disordered eating within this group, alongside increased use of supplements such as protein powders, creatine and amino acids. This approach also led to increased intake of protein-rich foods but very little fruit and vegetable intake. It would be anticipated that availability and cost may still be important factors in this group.

The systematic review by Shepherd et al. (2005, p. 239) suggested that support from family and friends, high levels of availability of healthy foods, an interest and desire to maintain appearance, and will-power were all major facilitators of eating healthily. Again, it is possible that different factors may be considered important within the university student population, who are older and have greater responsibility for their eating habits.

Methodology

The short review of the literature presented thus far in the essay demonstrates that there is still only a limited understanding of the underlying factors influencing eating habits in undergraduate students. Yet this is the information which is required if effective behavioural change interventions are to be designed and disseminated.

Research Aims

The aim of this small study was to investigate the decision-making processes which underlie the decisions of undergraduate students with regards to eating behaviours, including influences over these decisions. This could then be used alongside other published material to design a social marketing strategy on both a local and national level to improve healthy eating within this group.

Study Sample

A total of six undergraduate students from Manchester University were recruited to participate in the research. Convenience sampling was used to recruit participants to the study sample. Posters were displayed within the business school at the university, requesting participants to attend research focus groups. Eight participants contacted the researcher, but two subsequently withdrew, leaving a sample of four female and two male students. No further inclusion or exclusion criteria were applied to participants, other than that they were current undergraduate students at the university. This method of sampling may not provide a truly representative sample, therefore it may be difficult to generalize the results to the wider population of interest (Babbie, 2010, p. 192). However, this was the most appropriate recruitment approach given the limited time and budget constraints for the project. The diversity of the study sample would also suggest that there was little bias introduced.

Focus Group Methods

Focus groups were selected for data collection from study participants. Focus groups may be particularly useful for gaining an understanding of topics with a group behaviour element, but have also been shown to be very useful in the field of marketing for understanding the impact of marketing stimuli. They were considered to be of particular use in this instance as they allow integrated exploration of associations between lifestyle factors and reactions to marketing materials (Stewart et al., 2007, pp. 2-9).

The focus group was arranged for a two-hour session on one morning, and was moderated by the author. The entire session was video recorded so as to allow for further analysis of responses and behavioural cues at a later date. All participants were given assurance that their responses would remain anonymous and confidential and permission was sought to record the session before it began. Participants were also given information at the beginning of the session as to the purpose of the data collection, and were given opportunity to ask any questions, before being asked to provide consent for participation (Litosseliti, 2003, pp. 70-71).

The focus group began with some short introductory questions to break the ice between participants (Litosseliti, 2003, p. 73), before moving on to focus on the topic of interest: eating behaviours and potential influences. The questions included in the moderator guide, which was prepared to facilitate the focus group, are included in Box 1.

Box 1: Focus group questions

Tell me a little about what you would eat in a typical day.
Do you find that you eat regular meals?
What types of foods do you most like to eat?
Would you say that you eat many snacks? What type of snacks do you eat?
Is there anything you can think of that affects this – for example, do you eat differently on different days of the week?
How would you describe your cooking abilities – do you find it easy to plan meals and cook and prepare food?
How does the way you eat now compare to how you used to eat before coming to university?
Do you find that you eat differently when you go home for the weekend or for holidays?
Would you say that you have any concerns about the way in which you eat?
How do you think that the way in which you eat affects your health?
Are you at all concerned about whether the way you eat affects how you look?
What type of things affect whether you choose healthy foods over non-healthy foods?
Do you find it difficult to find/purchase healthy food?
Would cost have any impact on whether the food you buy is healthy?

Study Results

Overall, the results of the focus group suggested that the students in the sample had experienced a significant change in eating habits since leaving home to attend university. Although the daily eating patterns of participants differed significantly, all felt that they ate a less healthy diet since leaving home. The main difference noted was that regular meals were eaten less often, with several participants reporting that they skipped breakfast regularly, and that other meals were eaten based on convenience rather than at a regular time each day.

Most participants agreed that their eating patterns did differ on a daily basis. In particular, weekends were noted to follow more regular eating patterns, but often involve higher levels of alcohol and unhealthy foods such as takeaways. Participants also generally agreed that they returned to a healthier way of eating when returning home for the weekend or for holidays.

The actual components of diet varied widely across participants. While some participants reported that they regularly ate five portions of fruit and vegetables per day, others indicated that they ate only low levels. Four participants agreed that they ate convenience foods and takeaways on a regular basis, and it was acknowledged that these were usually calorie-dense, high fat foods.

All participants also agreed that they ate snacks on a regular basis, particularly where it was inconvenient to eat meals at regular intervals, and where breakfast was skipped. One participant reported that they felt that their snacking was healthy, however, as they usually snacked on fruit, nuts or seeds rather than chocolate bars or crisps. Given the small sample size and selection procedures, it was difficult to determine whether differences could be attributed to characteristics of the participants, for example gender (Babbie, 2010, p. 192).

There were a number of factors influencing food choices which emerged from the focus group. The major factor appeared to be convenience. The patterns of meals eaten were largely driven by having the time to prepare and cook food, or having access to healthy foods which could be purchased and eaten on the university campus. Participants also agreed that cost was a major factor.

Only two participants agreed that their low level of cooking ability had any role in how healthy their diet was. The other participants claimed that while they could cook, convenience, cost and motivation were major barriers to doing so.

Food preferences were also a major factor in determining food choices, with all except one participant agreeing that they enjoyed fast food and several reporting that they preferred unhealthy foods to healthy ones. In spite of this, three participants reported that they did try to limit how often they ate fast foods, as it was acknowledged that it was bad for their health to eat them regularly.

In spite of this, the food choices of participants did not appear to be driven overall by concern over their health. Participants suggested that while they were aware of how their diet could impact on their health, other factors were more important influences. Similarly, only one participant agreed that maintaining the way that they looked played any role in influencing their dietary choices.

Social Marketing Strategy Design

Social marketing, first proposed as a public health tool in the 1970s, refers to the application of marketing techniques, using communication and delivery to encourage behaviour change. Such a strategy follows a sequential planning process which includes market research and analysis, segmentation, setting of objectives, and identifying appropriate strategies and tools to meet these objectives (DH, 2008, online). The literature review and focus group discussed thus far comprise the market research and analysis components of this process, with the remaining steps addressed below.

Market Segmentation

Market segmentation may be performed according to geographic distinctions, demographics or psychographic characteristics (Health Canada, 2004, online).
Based on the limited amount of information which is available so far, it would be difficult to segment the market geographically, as it is unclear whether differences exist according to which university is attended.

The demographics of undergraduate students may also be largely shared, with literature indicating that social background may hold little influence over eating habits within this subpopulation, and only limited evidence of any difference between genders (Arnot, 2010, online; Dodd et al., 2010, p. 75).

Instead, it may be preferential to segment on the basis of psychographic characteristics, according to shared knowledge, attitudes and beliefs with regard to changing dietary behaviour. The “Stages of Change” model proposed by Prochaska and DiClemente may be a useful tool to guide this segmentation, in which any change in behaviour is suggested to occur in six steps: precontemplation, contemplation, preparation, action, maintenance and termination (Tomlin & Richardson, 2004, pp. 13-16).
Those in the precontemplative stage do not see their behaviour as a problem (Tomlin & Richardson, 2004, p. 14), therefore this segment could be targeted with a marketing campaign to increase knowledge. Evidence from the US would appear to indicate that higher levels of knowledge regarding dietary guidelines may be associated with better dietary choices, although there is little evidence which shows direct causality (Kolodinsky et al., 2007, p. 1409). Given the many different factors which appear to contribute to unhealthy diets amongst students, simply increasing knowledge may be insufficient to generate any significant improvements. This is further supported by current healthy eating initiatives aimed at the general population, such as the 5 A Day campaign, which incorporates additional, practical information, rather than simply educating people on the need to eat more fresh food (NHS Choices, 2010, online).

Those in the contemplative stage are aware that they need to change, but are not yet committed to doing so; it is unlikely that targeting a marketing campaign at this group would have any significant effect (Tomlin & Richardson, 2004, p. 15). Instead, it would be better to target those in the preparation stage, who have made the decision to change but may be unclear about how to initiate this change. Once individuals reach the action stage, they are actively initiating or maintaining a change, until the initial issue is finally resolved in the termination stage (Tomlin & Richardson, 2004, pp. 15-16). For the preparation segment, improving knowledge while also providing information on effective ways in which to change behaviour may be the most appropriate strategy, as in the approach adopted by the 5 A Day campaign.

Strategy Objectives

Based on the information generated from the focus study, along with that from other research, the main aim of the strategy should be to improve the overall diet of undergraduate students. There already exist campaigns such as the 5 A Day campaign which aim to encourage eating more fruit and vegetables (NHS Choices, 2010, online). The main issues within the undergraduate group instead appear to lie in choosing unhealthy foods, or skipping meals, due to convenience and cost. Therefore this is where the campaign should focus. The following objectives may therefore be identified:

1. Reduce the number of undergraduate students experiencing disordered eating patterns.
2. Improve knowledge and awareness within the undergraduate student population of tasty, cost-effective, convenient alternatives to takeaways and other junk foods.

National Plan

The national strategy would comprise two main arms. The first would be an educational campaign, which would be targeted specifically at the segment described above, therefore focusing on providing practical information to assist healthy eating choices amongst students. This appears to have been moderately successful with the 5 A Day campaign within the general population (Capacci & Mazzocchi, 2011, p. 87). Evidence from the US suggests that within the undergraduate population specifically, providing information which is directly relevant to their lifestyle may also be effective (Pires et al., 2008, p. 16).

This campaign would be run through national media, as the evidence suggests that such campaigns are associated not only with increased knowledge, but also moderate levels of behaviour change (Noar, 2006, p. 21). Online and social media campaigns may also be effective based on previous case studies. For example, the Kirklees Up For It project found that running a campaign which utilized Facebook alongside its own Website was a successful way of reaching a moderate audience of 18 to 24 year olds (NSMC, 2010, online). Therefore social media such as Twitter and Facebook would provide a simple means of providing weekly tips to students on how to create easy, cheap healthy meals.

Tips could also be given on how to choose healthier snacks which cost less, for example by preparing them at home. By tailoring the advice to the motives of the group, which appear to be related to convenience and cost, previous research would suggest that this should be more effective in changing snacking behaviour (Adriaanse et al., 2009, p. 60).

The second arm of the national campaign would involve lobbying of the government to introduce regulation on the food choices offered by university campuses, particularly where food is provided as part of an accommodation package. This is based on similar recent moves to improve school meals, which has been suggested to be an effective means of improving diet, even if obesity levels have not yet seen any impact (Jaime & Lock, 2009, p. 45). It is also consistent with the data collected in this study, which suggested that access to healthy foods and convenience were major barriers to healthy eating for students.

Localised Intervention

In addition to the national strategy, a local project aimed at providing food preparation workshops would also be piloted in Manchester. This concept is based on the observation that students mostly select unhealthy choices due to convenience and cost, and may not be aware of ways in which healthy food may also be prepared quickly and cheaply. Previous case studies have shown that these practical activities may be an effective means of reaching this target audience. For example a healthy living project called Up For It, run by Kirklees Council in association with NHS Kirklees, found on surveying young adults aged between 16 and 24 years that interventions which were fun and social were preferred to those which focus too much on health (NSMC, 2010, online). Provision of one-off sessions which provide information on where to eat healthily on campus have also shown some success within the undergraduate population in the US (Pires et al., 2008, p. 12).

Based on the budget for the Up For It project, it would be anticipated that approximately £100,000 would be required to set up and run this local section of the strategy (NSMC, 2010, online). It would be assumed that lobbying and media coverage required as part of the national strategy would be managed by the Department of Health.

Conclusions

It is clear that there is some truth to the assumption that undergraduate students in the UK live on a relatively unhealthy diet. While the reasons for this may be somewhat complex, convenience and cost appear to play a major role in the diet decisions which are made by this group. It is also clear that many are aware of the health impact which their diet is likely to have, although this is overridden by other factors. Targeting students who recognize the need to change their diet, by providing information on how to prepare healthier food quickly and cheaply, may help to overcome the barriers of cost and convenience, thereby improving health within this population.

References

Adriaanse, M.A., de Ridder, D.T.D. & de Wit, J.B.F. (2009) ‘Finding the critical cue: Implementation intentions to change one’s diet work best when tailored to personally relevant reasons for unhealthy eating’. Personality and Social Psychology Bulletin, 35(1), 60-71.
Arnot, C. (2010) ‘Male students eschew balanced diet in favour of supplements’. The Guardian, 9 November 2010. Available [online] from: http://www.guardian.co.uk/education/2010/nov/09/male-students-eating-habits [Accessed 27/03/2011].
Babbie, E.R. (2010) The Practice of Social Research. Belmont, CA: Wadsworth, p. 192.
Capacci, S. & Mazzochi, M. (2011) ‘Five-a-day, a price to pay: An evaluation of the UK program impact accounting for market forces’. Journal of Health Economics, 30(1), 87-98.
Croezen, S., Visscher, T.L.S., ter Bogt, N.C.W., Veling, M.L. & Haveman-Nies, A. (2009) ‘Skipping breakfast, alcohol consumption and physical inactivity as risk factors for overweight and obesity in adolescents: Results of the E-MOVO project’. European Journal of Clinical Nutrition, 63, 405-412.
DH (2008) Social Marketing. Department of Health. Available [online] from: http://webarchive.nationalarchives.gov.uk/+/www.dh.gov.uk/en/Publichealth/Choosinghealth/DH_066342 [Accessed 28/03/2011].
Dodd, L.J., Al-Nakeeb, Y., Nevill, A. & Forshaw, M.J. (2010) ‘Lifestyle risk factors of students: A cluster analytical approach’. Preventative Medicine, 51(1), 73-77.
Health Canada (2004) Section 2: Market Segmentation and Target Marketing. Available [online] from: http://www.hc-sc.gc.ca/ahc-asc/activit/marketsoc/tools-outils/_sec2/index-eng.php [Accessed 26/03/2011].
HM Government (2010) Healthy Lives, Healthy People: Our strategy for public health in England. London: Public Health England. Available [online] from: http://www.dh.gov.uk/dr_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_122347.pdf [Accessed 26/03/2011].
Jaime, P.C. & Lock, K. (2009) ‘Do school based food and nutrition policies improve diet and reduce obesity’. Preventative Medicine, 48(1), 45-53.
James, W.P.T. (2008) ‘WHO recognition of the global obesity epidemic’. International Journal of Obesity, 32, S120-S126.
Kolodinsky, J., Harvey-Berino, J.R., Berlin, L., Johnson, R.K. & Reynolds, T.W. (2007) ‘Knowledge of current dietary guidelines and food choice by college students: Better eaters have higher knowledge of dietary guidance’. Journal of the American Dietetic Association, 107(8), 1409-1413.
Lake, A.A., Hyland, R.M., Rugg-Gunn, A.J., Mathers, J.C. & Adamson, A.J. (2009) ‘Combining social and nutritional perspectives: From adolescence to adulthood’. British Food Journal, 111(11), 1200-1211.
Litosseliti, L. (2003) Using Focus Groups in Research. London: Continuum, pp. 70-73.
NHS Choices (2010) 5 A Day. Available [online] from: http://www.nhs.uk/livewell/5aday/pages/5adayhome.aspx/ [Accessed 26/03/2011].
Noar, S.M. (2006) ‘A 10-year retrospective of research in health mass media campaigns: Where do we go from here?’ Journal of Health Communication, 11(1), 21-42.
NSMC (2010) Up For It. Available [online] from: http://thensmc.com/component/nsmccasestudy/?task=view&id=156 [Accessed 26/03/2011].
Papadaki, A., Hondros, G., Scott, J.A. & Kapsokefalou, M. (2007) ‘Eating habits of university students living at, or away from home in Greece’. Appetite, 49(1), 169-176.
Pires, G.N., Pumerantz, A., Silbart, L.K. & Pescatello, L.S. (2008) ‘The influence of a pilot nutrition education program on dietary knowledge among undergraduate college students’. Californian Journal of Health Promotion, 6(2), 12-25.
Raulio, S., Roos, E. & Prattala, R. (2010) ‘School and workplace meals promote health food habits’. Public Health Nutrition, 13, 987-992.
Serlachius, A., Hamer, M. & Wardle, J. (2007) ‘Stress and weight change in university students in the United Kingdom’. Physiology & Behavior, 92(4), 548-553.
Shepherd, J., Harden, A., Rees, R., Brunton, G., Garcia, J., Oliver, S. & Oakley, A. (2005) ‘Young people and healthy eating: A systematic review of research on barriers and facilitators’. Health Education Research, 21(2), 239-257.
Spanos, D. & Hankey, C.R. (2010) ‘The habitual meal and snacking patterns of university students in two countries and their use of vending machines’. Journal of Human Nutrition and Dietetics, 23(1), 102-107.
Stewart, D.W., Shamdasani, P.N. & Rook, D.W. (2007) Focus Groups: Theory and Practice – 2nd Edition. Thousand Oaks, CA: Sage Publications, Inc., pp. 2-9.
Tomlin, K.M. & Richardson, H. (2004) Motivational Interviewing and Stages of Change. Center City, MN: Hazelden, pp. 14-16.
Wyatt, S.B., Winters, K.P. & Dubbert, P.M. (2006) ‘Overweight and obesity: Prevalence, consequences, and causes of a growing public health problem’. American Journal of the Medical Sciences, 331(4), 166-174.
Yahia, N., Achkar, A., Abdallah, A. & Rizk, S. (2008) ‘Eating habits and obesity among Lebanese university students’. Nutrition Journal, 7, 32-36.

Spinal Cord Trauma Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Abstract

Loss of sensory and motor function below the injury site is caused by trauma to the spinal cord. Approximately 10,000 people experience serious spinal cord injury each year. There are four general types of spinal cord injury: cord maceration, cord laceration, contusion injury and solid cord injury. Three phases of SCI response occur after injury: the acute, secondary, and chronic phases. The most immediate concern is patient stabilization. Additionally, interventions may be instituted in an effort to improve function and outcome. Through ongoing research and future developments, there is hope for recovery from spinal cord injury.

Introduction

Loss of sensory and motor function below the injury site is caused by trauma to the spinal cord. As indicated by Huether & McCance (2008) normal activity of the spinal cord cells at and below the level of injury ceases due to loss of continuous tonic discharge from the brain and brain stem. Depending on the extent of the injury, reflex function below the point of injury may be completely lost. This involves all skeletal muscles, bladder, bowel, sexual function and autonomic control. In the past, hope for recovery has been minimal. With medical advancements and better understanding, hope for recovery today is greater but still limited.

Risk Factors and Incidence

According to Huether & McCance (2008) approximately 10,000 people experience serious spinal cord injury each year; 81% of those injured are male, with an average age of 33.4 years. As indicated by Hulsebosch (2002) the majority of injuries fall into four separate groups: 44% are sustained by young people through motor vehicle crashes or other high-energy traumatic accidents; 18% are sustained through sports activities; 24% are sustained through violence; and 22% are sustained by the elderly population, either through falls or through cervical spinal stenosis caused by congenital narrowing or spondylosis.

Categories of Injury

According to Hulsebosch (2002) there are four general types of spinal cord injury: 1) cord maceration, 2) cord laceration, 3) contusion injury, and 4) solid cord injury. In the first two injuries, the surface of the cord is lacerated and a prominent connective tissue response is invoked, whereas in the latter two the spinal cord surface is not breached and the connective tissue component is minimal. The contusion injury represents from 25 to 40% of all injuries and is a progressive injury that enlarges over time.

Cellular Level Physiology

Hulsebosch (2002) describes three phases of response following injury to the spinal cord. The acute phase begins at the moment of injury and extends for the first few days. A variety of pathophysiological processes begin. There is immediate mechanical soft tissue damage, including to the endothelial cells of the vasculature. Cell death, resulting from mechanical forces and ischemic consequences, is instantaneous.

Over the next few minutes there are significant electrolytic shifts: intracellular concentrations of sodium increase, extracellular concentrations of potassium increase, and intracellular levels of calcium rise to toxic levels that contribute to a failure in neural function. These electrolyte shifts cascade into a generalized state of spinal shock, which is representative of a “failure of circuitry in the spinal neural network”. As indicated by Shewmon (1999) spinal shock is a transient functional depression of a structurally intact cord below the site of an acute spinal cord injury.

It does not occur with slowly progressive lesions. Limited function or loss of function typically lasts two to six weeks, followed by recovery of function. The secondary phase occurs over the next few minutes to the next few weeks. Ischemic cellular death, electrolytic shifts, and edema continue. As a result of cell lysis, extracellular concentrations of glutamate and other amino acids reach toxic levels within the first fifteen minutes after injury.

Free-radical production amplifies. Neutrophils accumulate in the spinal parenchyma within 24 hours. Lymphocytes follow the neutrophils and reach their peak numbers within forty-eight hours. Local concentrations of cytokines and chemokines increase as part of the inflammatory process. As inflammation and ischemia proceed, the injury site grows from the initial site of mechanical damage into the surrounding tissue, encompassing a larger region of cell death.

Regeneration is inhibited by factors expressed within this cascade of reactive responses. The chronic phase occurs over a time course of days to years. Cell death continues. The cord becomes scarred and tethered. Conduction deficits result from demyelination of the cord. Regeneration and emergence of axons are exhibited, but inhibitory factors suppress any resultant growth. Alteration of neural circuits often results in chronic pain syndromes for many spinal cord injury patients.

Therapeutic Management

Spinal cord injury is diagnosed by physical examination, radiological exam, CT scans, MRI scans, and myelography. The most immediate concern in the management of an acute spinal cord injury is patient stabilization. The vertebral column is subject to surgical stabilization using a variety of surgical rods, pins, and wires.

Hardware must be meticulously placed. Surgical intervention has the potential to instigate additional spinal trauma. Homeostatic body systems must be supported through fluid resuscitation, medication management and electrolyte support. Additionally, the following interventions may be instituted in an effort to improve function and outcome:

Edema Reduction

Reduction of the inflammatory response is one focus of treatment in acute spinal cord injury. Steroids have provided a primary tool to reduce edema and inflammation, the most successful of which is methylprednisolone (MP). According to Bracken (1993) the administration of a high dose of MP, if given within eight hours of the insult in patients with both complete and incomplete SCI, as proposed by the National Acute Spinal Cord Injury Study (NASCIS-2), has been promising with respect to improved clinical outcome. The cellular and molecular mechanisms by which MP improves function may involve antioxidant properties, the inhibition of the inflammatory response, and/or a role in immunosuppression.

Inhibition of Inflammation: by use of Anti-Inflammatory Agents

Although inflammation is generally held to be a repair mechanism that is restorative in nature, recent work has demonstrated that the inflammatory cascade produces several pathways that are degradative in nature, such as the prostaglandin pathways.

Anti-inflammatory agents have been administered with successful limitation of the inflammatory process. As indicated by Hains, Yucra and Hulsebosch (2001) selective cyclooxygenase (COX)-2 inhibitors given systemically to spinal cord injury patients have demonstrated significant improvements. Inhibition of the enzyme activation sequence appears to be the safest medication approach at this time.

Application of either whole body hypothermia or local cord cooling appears to hold promise for those suffering from neuro trauma. Application of hypothermia, either spinally or systemically, is thought to provide protection for neural cells and to reduce secondary inflammation, decreasing immediate mortality. According to Hayes, Hsieh, Potter, Wolfe, Delaney, and Blight (1993) local spinal cord cooling within eight and a half hours of injury in ten patients produced a better-than-expected rate of recovery of sensory and motor function.

Rescue from Neural Cell Death

Cells die due to programmed cell death after SCI. An excellent opportunity is present for intervention with factors that could rescue the cells at risk. As presented by Eldadah and Faden (2000) one approach to cell rescue is the inhibition of caspases. Caspases are regulated signalling proteases that accomplish a primary role in mediating cell apoptosis through cleavage at specific sites within proteins. Proteins that inhibit programmed cell death include the bcl-2 oncogene products. According to Shibata, Murray, Tessler, Ljubetic, Connors and Saavedra (2000) recent work has demonstrated prevention of retrograde cell loss and reduction of atrophy by direct intra-spinal administration of the Bcl-2 protein into the damaged site.

Another group of proteins implicated in cell death are the calpains. Calpains are calcium-activated proteases that assist in the degradation of the cytoskeleton of injured cells. Substances with calpain-inhibitor properties could prove of benefit in reducing cell death.

Demyelination and Conduction

According to Waxman (2001) it may be beneficial to inhibit the neural injury induced by the increased barrage of action potentials early in the injury phase, or to inhibit the voltage-dependent sodium channels which provide the ionic basis for the action potential. In addition, neural injury and disease may introduce altered ionic channel function on nerve processes, resulting in impaired conduction properties and persistent hyperexcitability, which forms the basis for chronic pain after CNS neural trauma.

As a result of secondary injury to the spinal cord, many axons are demyelinated. Infusion of a fast, voltage-sensitive potassium channel blocker may provide partial restoration of conduction properties to demyelinated axons. As presented by Guest, Hiester and Bunge (2005) another strategy for the improvement of demyelination is the transplantation of Schwann cells, which may contribute to the restoration of myelin sheaths around some spinal axons.

Promotion of Axonal Regeneration

During development of the central nervous system, an assortment of axonal growth-promoting proteins is present in the extracellular environment. This environment stimulates axon growth and neural development. Once the central nervous system is established, the growth-stimulating agents decline. The adult central nervous system shifts toward inhibition of axonal growth, permitting stable circuitry. These inhibitory and stimulatory factors provide an opportunity for research into promoting axonal growth after a spinal cord injury, perhaps rebuilding a neural communication network.

Cell Replacement Strategies

After spinal cord injury, the function of nerve cells, and of the cells that produce the myelin that insulates them and provides favourable impulse conduction, is lost. Cellular replacement to rebuild conduction properties is a promising therapy. As indicated by Nomura, Tator and Shoichet (2006) there is promise that technology utilizing cellular treatment procedures, including olfactory ensheathing cells (the cells that form the myelin on olfactory nerves), Schwann cells (the cells that form the myelin on peripheral nerves), dorsal root ganglia, adrenal tissue, and neural stem cells, can promote repair of the injured spinal cord. It is postulated that these tissues would rescue, replace, or provide a regenerative pathway for injured adult neurons, which would then integrate or promote the regeneration of the spinal cord circuitry and restore function after injury. As indicated by Nakamura (2005) there is promise that bioengineering technology utilizing cellular treatment advances can promote repair of the injured spinal cord. Transplantation of these cells promotes functional recovery of locomotion and reflex responses.

The engineering of cells combines the therapeutic advantage of the cells with a delivery system. For example, if delivery of neurotrophins (neuro-, relating to nerve cells; -tropin, a turning) is desired, cells that secrete neurotrophins and cells that create myelin can be engineered to stimulate axon growth and rebuild nerve function.

In an effort to further enhance beneficial effects, immune cells such as macrophages can be extracted from the patient’s own system and inserted at the injury site. The patient’s own activated macrophages will scavenge degenerating myelin debris, rich in non-permissive factors, and at the same time encourage regenerative growth without eliciting an immune response.

Retrain the Brain with Aggressive Physical Therapy

It is apparent that recovery of locomotion is dependent on sensory input that can “reawaken” spinal circuits and activate central pattern generators in the spinal cord, as demonstrated by spontaneous “stepping” in the lower limbs of one patient. According to Calancie, Alexeeva, Broton and Molano (2005) it may take six or more months for reflexes to appear following acute SCI suggesting they might be due to new synaptic interconnections.

Electrical Stimulation

Functional electrical stimulation (FES) that contributes to improved standing can improve quality of life for the individual and the caregiver. There is considerable interest in computer-controlled FES for strengthening the lower extremities and for cardiovascular conditioning, which has met with some success in terms of physiological improvements such as increased muscle mass, improved blood flow, and better bladder and bowel function. Added benefits include decreases in medical complications such as venous thrombosis, osteoporosis, and bone fractures. Stimulation of the phrenic nerve, which innervates the diaphragm, is used in cases where there is damage to respiratory pathways.

Chronic Central Pain

As indicated by Siddall & Cousins (1997) pain continues to be a significant problem in patients with spinal cord injuries. There is little consensus regarding the terminology, definitions and nature of the pain. Treatment studies have lacked congruence due to inaccurate identification of pain types. There has been little progress in efforts to bring an understanding of the pathophysiology of chronic central pain (CCP) to the development of therapeutic approaches for the SCI patient population.

CCP syndromes develop in the majority of spinal cord injury patients. As indicated by Que, Siddall and Cousins (2007) chronic pain is a disturbing aspect of spinal cord injury, often interfering with basic activities, effective rehabilitation and the quality of life of the patient. Evidence that neurons in pain pathways are pathophysiologically altered after spinal cord injury comes from both the clinical and animal literature. In addition, the development of the chronic pain state correlates with structural alterations such as intra-spinal sprouting of primary afferent fibres.

According to Que, Siddall and Cousins (2007) pain in the cord-injured patient is often resistant to treatment. Recognition of chronic central pain has led to the utilization of non-opioid analgesics. According to Siddall and Middleton (2006) baclofen, once used exclusively in the treatment of spasticity, and the anticonvulsant gabapentin, originally used to treat epilepsy, have had some success in attenuating musculoskeletal CCP syndromes. The tricyclic antidepressant amitriptyline has been shown to be effective in the treatment of dysesthetic pain.

Conclusion

Stem cell therapy offers hope for spinal cord injury patients, with opportunities for an abundance of cell-replacement strategies. Advances in the field of electronic circuitry will lead to better FES and robotic devices. Pharmacological advances offer intervention directions to aid recovery and improve patients’ quality of life every day. The re-establishment of cell, nerve and muscle communication interconnections may become possible. Through tenacity, research, and future developments, victims of spinal cord injury may one day be told there is hope of recovery.

References

American Psychological Association (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: American Psychological Association.

Bracken, M.B., & Holford, T.R. (1993). Effects of timing of methylprednisolone or naloxone administration on recovery of segmental and long-tract neurological function in NASCIS 2. Journal of Neurosurgery. 79(4), 500-7.

Bunge, R.P., Puckett, W.R., and Hiester, E.D. (1997). Observations on the pathology of several types of human spinal cord injury, with emphasis on the astrocyte response to penetrating injuries. Adv Neurol.72, 305-315.

Calancie, B., Alexeeva, N., Broton, J.G., & Molano, M.R. (2005). Interlimb reflex activity after spinal cord injury in man: strengthening response patterns are consistent with ongoing synaptic plasticity. Clinical Neurophysiology : Official Journal of the International Federation of Clinical Neurophysiology. 116(1), 75-86.

Eldadah, B.A., & Faden, A.I. (2000). Caspase pathways, neuronal apoptosis, and CNS injury. Journal of Neurotrauma. 17(10), 811-29.

Guest, J.D., Hiester, E.D., & Bunge, R.P. (2005). Demyelination and Schwann cell responses adjacent to injury epicenter cavities following chronic human spinal cord injury. Experimental Neurology. 192(2), 384-93.

Hains, B.C., Yucra, J.A., & Hulsebosch, C.E. (2001). Reduction of pathological and behavioral deficits following spinal cord contusion injury with the selective cyclooxygenase-2 inhibitor NS-398. Journal of Neurotrauma. 18(4), 409-23.

Hayes, K.C., Hsieh, J.T., Potter, P.J., Wolfe, D.L., Delaney, G.A., & Blight, A.R. (1993). Effects of induced hypothermia on somatosensory evoked potentials in patients with chronic spinal cord injury. Paraplegia. 31(11), 730-41.

Huether, S.E., & McCance, K.L. (2008). Understanding pathophysiology (4th ed.). St. Louis, MO: Mosby, Inc.

Hulsebosch, C.E. (2002). Recent advances in pathophysiology and treatment of spinal cord injury. Advances in Physiology Education. 26, 238-255.

Nakamura, M., Okano, H., Toyama, Y., Dai, H.N., Finn, T.P., & Bregman, B.S. (2005). Transplantation of embryonic spinal cord-derived neurospheres support growth of supraspinal projections and functional recovery after spinal cord injury in the neonatal rat. Journal of Neuroscience Research. 81(4), 457-68.

Nomura, H., Tator, C.H., & Shoichet, M.S. (2006). Bioengineered strategies for spinal cord repair. Journal of Neurotrauma. 23(3-4), 496-507.

Que, J.C., Siddall, P.J., & Cousins, M.J. (2007). Pain management in a patient with intractable spinal cord injury pain: a case report and literature review. Anesthesia and Analgesia. 105(5), 1462-73, table of contents.

Shewmon, D.A. (1999). Spinal shock and “brain death”: somatic pathophysiological equivalence and implications for the integrative-unity rationale. Spinal Cord. 37, 313-324.

Shibata, M., Murray M., Tessler, A., Ljubetic, C., Connors, T., & Saavedra, R.A. (2000). Single injections of a DNA plasmid that contains the human Bcl-2 gene prevent loss and atrophy of distinct neuronal populations after spinal cord injury in adult rats. Neurorehabilitation and Neural Repair. 14(4), 319-30.

Siddall, P.J., & Middleton, J.W. (2006). A proposed algorithm for the management of pain following spinal cord injury. Spinal Cord 44, 67-77

Tator, C.H. (1998). Biology of neurological recovery and functional restoration after spinal cord injury. Neurosurgery. 42(4), 696-707.

Waxman, S.G. (2001). Acquired channelopathies in nerve injury and MS. Neurology. 56(12), 1621-7.

Sickle-cell Disease (SCD) Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Sickle-cell disease

Sickle cell disease (SCD), or sickle cell anemia, is a group of genetic conditions resulting from the inheritance of a mutated form of the gene coding for the β-globin chain of the hemoglobin molecule, which causes malformation of red blood cells (RBCs) in their deoxygenated state. Specifically, this single point mutation occurs at position 6 of the β-globin chain, where a valine is substituted for glutamic acid (Ballas et al. 2012). This abnormal hemoglobin causes a characteristic change in RBC morphology, where the cell becomes abnormally rigid and sickle-like, rather than the usual biconcave disc. These cells do not flow as freely through the circulatory system as the normal phenotype, and can become damaged and hemolysed, resulting in vascular occlusion (Stevens and Lowe 2002).
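
By way of illustration only (this codon-level detail is standard molecular genetics rather than something drawn from Ballas et al.), the point mutation can be shown as a single A-to-T change in codon 6 of the HBB gene, converting GAG (glutamic acid) to GTG (valine). A minimal sketch:

```python
# Codon 6 of the beta-globin (HBB) gene: normal versus sickle variant.
# The single A-to-T substitution (GAG -> GTG) swaps glutamic acid for valine.
codon_meanings = {"GAG": "Glu (glutamic acid)", "GTG": "Val (valine)"}

normal_codon = "GAG"
sickle_codon = "GTG"

# Identify which position within the codon differs (0-based index 1, the middle base).
changed = [i for i, (a, b) in enumerate(zip(normal_codon, sickle_codon)) if a != b]

print("Changed position(s) within codon 6:", changed)               # [1]
print("Normal:", normal_codon, "->", codon_meanings[normal_codon])  # Glu
print("Sickle:", sickle_codon, "->", codon_meanings[sickle_codon])  # Val
```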

SCD is an autosomal recessive condition, thus patients with SCD will have inherited a copy of the mutated gene from each of their parents (homozygous genotype). Individuals who only inherit one copy (heterozygous genotype) are termed sickle cell (SC) carriers, who may pass on the affected gene to their children (Stevens & Lowe 2002). The severity of SCD varies considerably from patient to patient, most likely as the result of environment or other unknown genetic factors (Bean et al. 2013).
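
To make the inheritance pattern concrete, the short sketch below enumerates the standard Mendelian outcomes for two carrier parents (a worked illustration rather than a result from the cited sources): each child has a 1 in 4 chance of SCD, a 1 in 2 chance of being a carrier, and a 1 in 4 chance of inheriting neither copy of the mutated gene.

```python
from itertools import product

# Both parents are sickle cell carriers: one normal allele (A) and one sickle allele (S).
parent_1 = ("A", "S")
parent_2 = ("A", "S")

# Enumerate the four equally likely allele combinations (a Punnett square).
offspring = ["".join(sorted(pair)) for pair in product(parent_1, parent_2)]

labels = {
    "AA": "unaffected, not a carrier",
    "AS": "unaffected carrier",
    "SS": "affected by SCD",
}

for genotype in sorted(set(offspring)):
    share = offspring.count(genotype)
    print(f"{genotype}: {share}/4 ({labels[genotype]})")
# AA: 1/4 (unaffected, not a carrier)
# AS: 2/4 (unaffected carrier)
# SS: 1/4 (affected by SCD)
```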

Patients with SCD are typically of African or African-Caribbean origin, but all ethnic groups may be affected. In 2014 the National Institute for Health and Care Excellence (NICE) estimated that between 12,500 and 15,000 people in the UK suffer from SCD (NICE quality standard 58, 2014), with more than 350 babies born with SCD between 2007 and 2008. Patients in developed countries typically live into their 40s and 50s. However, in developing countries, it is estimated that between 50% (Odame 2014) and 90% of children die by the age of 5 (Gravitz and Pincock 2014).

SCD is more prevalent in the ethnic African population because SCD carriers exhibit a 10-fold reduction in severe malarial infection, which is common in many African countries and associated with significant mortality. One proposed mechanism for this is that on infection with the malarial parasite, RBCs in SCD carriers become sickle shaped and are then removed from the circulation and destroyed. Consequently, it is genetically beneficial to be an SCD carrier, thus more SCD carriers survive to reproductive age, in turn increasing the incidence of the SCD mutation in the population (Kwiatkowski 2005).

SCD patients experience periods of acute illness termed “crises”, resulting from the two different effects of SCD: vaso-occlusion (pain, stroke and acute chest syndrome) and hemolysis (for example, anemia from RBC destruction and inefficient oxygen-carrying capacity) (Glassberg 2011). The frequency of these may be several times a week, or less than once a year. Patients typically present with anemia, low blood oxygen levels and pyrexia (NICE quality standard 58, 2014).

There are three classifications of crises:

1. Sequestration crisis (rapid pooling of RBCs in organs, typically the spleen, which may result in patient death from the acute reduction in available red cells for oxygen transportation).

2. Infarctive crisis (blockage of capillaries causing an infarction).

3. Aplastic crisis (where the spleen is damaged from 1 and 2, which compromises RBC production) (Stevens & Lowe 2002).

The result of these crises can be irreversible damage to a wide range of organs from the spleen to the retina which can cause extreme pain (Stevens & Lowe 2002). However, patients not currently experiencing a crisis can also present with anemia as the result of poor oxygen transport function, loss of RBCs due to sequestration in organs such as the spleen and reduced red cell production as the result of impaired spleen function (Ballas et al. 2012).

Typically, patients will initially present with an enlarged spleen in early childhood (due to pooling of malformed RBCs); the spleen then becomes progressively infarcted and fibrotic, ultimately resulting in a state of almost complete loss of function (autosplenectomy). Several complications of SCD are recognised, including impaired neurocognitive function, which is most likely the result of anemia or silent cerebral infarcts (Ballas et al. 2012).

In the UK, SCD is usually diagnosed antenatally or in the first few weeks of life. Prenatal screening is offered to parents who may be at risk of carrying the SCD causing gene. NICE recommend that screening is offered early in pregnancy for high risk groups (ideally before 10 weeks gestation) or via a family origin questionnaire in low risk groups. Full screening can then be offered if family history is suggested. In the case of a positive test, counselling should be offered immediately, and the parents offered the option of termination of pregnancy (NICE Clinical Guideline 62, 2014). However, if screening has not occurred, SCD is one of the diseases screened for by the newborn heel prick test in the first week of life (NICE quality standard 58, 2014). In older patients or those not in countries where screening is offered, patients present with anemia or acute crisis. Histological analysis of blood samples can also reveal sickle shaped RBCs and the characteristic abnormal hemoglobin can be identified by high performance liquid chromatography or electrophoresis (Glassberg 2011).

There are three approaches to treatment of SCD. The first is to manage the condition prophylactically in the hope of reducing the incidence of complications and crises. The second is to effectively manage crises, both to reduce the risk of organ damage and life threatening events, as well as control the severe pain associated with a SCD crisis. The third approach is to target the cause of the condition itself.

Penicillin (de Montalembert et al. 2011) and folic acid are usually offered to patients in order to prevent complications by bacterial disease and are associated with a significant increase in survival and quality of life (NICE quality standard 58, 2014). Children are also vaccinated against pneumococcal infection. Transcranial doppler imaging of the cerebral vessels can be used to identify children at risk of stroke (de Montalembert et al. 2011). As previously discussed, SCD carriers are conferred some protection from malarial infection. Paradoxically, SCD sufferers display an increased sensitivity to malarial infection and should also be treated with anti-malarial prophylaxis where appropriate (Oniyangi and Omari 2006).

Hydroxyurea has been used in the treatment of SCD, as it appears to increase the production of fetal hemoglobin (HbF), thus reducing the proportion of abnormal hemoglobin although the exact mechanism of this is unclear (Rang et al. 1999). Suggested mechanisms include induction of HbF by nitric oxide, or by ribonucleotide inhibition. Other suggested mechanisms include the increasing of RBC water content and reduced endothelial adhesion, which reduces the incidence of infarction (Charache et al. 1995).

Blood transfusion is an important tool in treating SCD, especially in children. It almost immediately improves the capacity of the blood to transport oxygen, and in the longer term, as the “healthy” donor RBCs are not destroyed as quickly as the sickle-shaped RBCs, repeated transfusion is associated with a reduction in erythropoiesis (RBC production) in the SCD patient, thus reducing the proportion of sickle-shaped RBCs in circulation, which in turn reduces the risk of a crisis or stroke. Exchange transfusion is also possible, whereby abnormal sickle RBCs are removed from the circulating volume prior to transfusion with donor blood. However, there are drawbacks to transfusion, namely the inherent safety risks such as immunological sensitivity, contamination of blood products with infectious disease and a lack of available donated blood (Drasar et al. 2011).

The severe pain of a crisis must be controlled, most often with opioid analgesics. These are effective analgesics which act by binding to µ, κ and δ opioid receptors. The common approach is intravenous infusion of morphine, either by continuous drip or patient-controlled analgesia (PCA) pump infusion. Non-opioid drug options, including paracetamol, tramadol and corticosteroids, may also be considered, but these drugs have a limit to the analgesia they can produce, whereas opioid drugs are more often limited by their side effects, such as respiratory suppression, vomiting and itching (Ballas et al. 2012).

Bone marrow transplant is currently the only curative therapy for SCD. However, it is dependent on locating a suitable donor with an HLA tissue match, usually a healthy sibling. It is associated with some risks and complications, including graft rejection, but generally carries a very positive prognosis (Maheshwari et al. 2014). As SCD is an autosomal recessive disease with one well identified causative gene, gene therapy to replace one copy of the faulty gene with a normal copy is of great interest to researchers. However, this is very much still in development in humans, and a 2014 review of SCD clinical trials found no trials of gene therapy as yet (Olowoyeye and Okwundu 2014).

In addition to the acute effects of SCD, patients are also at risk from a number of potentially fatal consequences of SCD, such as acute splenic sequestration. In this condition, which often occurs after an acute viral or bacterial infection (classically parvovirus B19), the malformed RBCs become trapped in the sinuses of the spleen, causing rapid splenic enlargement. Patients present with often severe abdominal pain and distension, pallor and weakness, and potentially tachycardia and tachypnea. Patients may also suffer hypovolemic shock from the significant reduction in available hemoglobin (acute aplastic crisis). This is managed by emergency treatment of the hypovolemia and transfusion of packed RBCs. Because the rate of recurrence for splenic sequestration is high (approximately 50%), a splenectomy may be performed after the patient has recovered from the event (NICE quality standard 58, 2014).

Acute chest syndrome is also a serious complication of SCD and may be fatal. It is characterised by occlusion of the pulmonary blood vessels during a vaso-occlusive crisis. Patients typically present with chest pain, cough and low oxygen levels (Ballas et al. 2012). It is also associated with asthma, and it is recommended that asthma in patients with SCD be carefully monitored. Treatment of acute chest syndrome is usually with antibiotics and bronchodilators if indicated, with transfusion or exchange transfusion also considered (de Montalembert et al. 2011).

Another consequence of the rapid turnover of the abnormally shaped RBCs is increased production of bilirubin, which may cause hepatobiliary disease, specifically gallstones and vascular conditions of the liver. Liver pathology can result from ischemia-reperfusion injury following a crisis, endothelial dysfunction and iron overload as the liver sequesters iron from the destroyed RBCs (Ballas et al. 2012). SCD patients are also at significant risk of ischemic stroke resulting from a cerebral infarctive crisis, with one study suggesting that 11% of patients will suffer a stroke by 20 years of age, and 24% by 45. Children who suffer a stroke may also go on to develop moyamoya syndrome, which is associated with a significant decrease in cognitive function and an increased risk of further stroke (Ballas et al. 2012).

SCD is a complex condition that presents significant treatment challenges, requiring a multi-disciplinary team to manage the wide range of its effects and to deliver substantial prophylactic treatment. As discussed, its potential complications can be life-threatening and have life-changing consequences.

An additional difficulty is that while screening, prophylactic and curative treatments are available in the developed world, they are not available in the developing world, where rates of the disease are in fact highest. In sub-Saharan Africa, mortality is estimated to be between 50% (Odame 2014) and 90% (Gravitz & Pincock 2014), yet in developed countries life expectancy ranges from the 40s to the 50s (Gravitz & Pincock 2014). Laboratory diagnosis and screening are currently prohibitively expensive in developing countries, so there is a need for the development of low-cost techniques. The Gavi Vaccine Alliance also endeavors to make prophylactic treatment more widely available, specifically the pneumococcal vaccine. Of the therapies discussed here, hydroxyurea is likely to be the most affordable; increasing its availability would be of significant benefit, and clinical trials commenced in Africa in 2014 (Odame 2014).


References

Ballas, S.K., Kesen, M.R., Goldberg, M.F., Lutty, G.A., Dampier, C., Osunkwo, I., Wang, W.C., Hoppe, C., Hagar, W., Darbari, D.S., & Malik, P. 2012. Beyond the definitions of the phenotypic complications of sickle cell disease: an update on management. ScientificWorldJournal., 2012, 949535 available from: PM:22924029

Bean, C.J., Boulet, S.L., Yang, G., Payne, A.B., Ghaji, N., Pyle, M.E., Hooper, W.C., Bhatnagar, P., Keefer, J., Barron-Casella, E.A., Casella, J.F., & Debaun, M.R. 2013. Acute chest syndrome is associated with single nucleotide polymorphism-defined beta globin cluster haplotype in children with sickle cell anaemia. Br.J.Haematol., 163, (2) 268-276 available from: PM:23952145

Charache, S., Terrin, M.L., Moore, R.D., Dover, G.J., Barton, F.B., Eckert, S.V., McMahon, R.P., & Bonds, D.R. 1995. Effect of hydroxyurea on the frequency of painful crises in sickle cell anemia. Investigators of the Multicenter Study of Hydroxyurea in Sickle Cell Anemia. N.Engl.J.Med., 332, (20) 1317-1322 available from: PM:7715639

de Montalembert, M., Ferster, A., Colombatti, R., Rees, D.C., & Gulbis, B. 2011. ENERCA clinical recommendations for disease management and prevention of complications of sickle cell disease in children. Am.J.Hematol., 86, (1) 72-75 available from: PM:20981677

Drasar, E., Igbineweka, N., Vasavda, N., Free, M., Awogbade, M., Allman, M., Mijovic, A., & Thein, S.L. 2011. Blood transfusion usage among adults with sickle cell disease – a single institution experience over ten years. Br.J.Haematol., 152, (6) 766-770 available from: PM:21275951

Glassberg, J. 2011. Evidence-based management of sickle cell disease in the emergency department. Emerg.Med.Pract., 13, (8) 1-20 available from: PM:22164362

Gravitz, L. & Pincock, S. 2014. Sickle-cell disease. Nature, 515, (7526) S1 available from: PM:25390134

Kwiatkowski, D.P. 2005. How malaria has affected the human genome and what human genetics can teach us about malaria. Am.J.Hum.Genet., 77, (2) 171-192 available from: PM:16001361

Maheshwari, S., Kassim, A., Yeh, R.F., Domm, J., Calder, C., Evans, M., Manes, B., Bruce, K., Brown, V., Ho, R., Frangoul, H., & Yang, E. 2014. Targeted Busulfan therapy with a steady-state concentration of 600-700 ng/mL in patients with sickle cell disease receiving HLA-identical sibling bone marrow transplant. Bone Marrow Transplant., 49, (3) 366-369 available from: PM:24317124

NICE Clinical Guideline 62 – Antenatal Care. Guideline CG62, published March 2008, revised February 2014. https://www.nice.org.uk/guidance/cg62

NICE quality standard 58: Sickle cell acute painful episode. Clinical guideline CG143, published June 2012, reviewed May 2014. https://www.nice.org.uk/guidance/cg143

Odame, I. 2014. Perspective: we need a global solution. Nature, 515, (7526) S10 available from: PM:25390135

Olowoyeye, A. & Okwundu, C.I. 2014. Gene therapy for sickle cell disease. Cochrane.Database.Syst.Rev., 10, CD007652 available from: PM:25300171

Oniyangi, O. & Omari, A.A. 2006. Malaria chemoprophylaxis in sickle cell disease. Cochrane.Database.Syst.Rev. (4) CD003489 available from: PM:17054173

Rang, Dale, & Ritter 1999. Pharmacology, 4th ed. Churchill Livingstone.

Stevens & Lowe 2002. Pathology, 2nd ed. London, Mosby.