Example Health Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

With reference to the UK, discuss the reasons why tuberculosis (TB) is a contemporary public health issue and give examples of relevant public health and health promotion initiatives.

With the exception of HIV/AIDS, infection with the Mycobacterium tuberculosis complex (MTB) causes more human deaths each year than any other infectious agent (World Health Organization, 2014a). The symptoms of tuberculosis (TB) are often non-specific and depend on the site of infection. Patients may present with fever, anorexia, weight loss, night sweats or lassitude, but a persistent productive cough is the hallmark of pulmonary tuberculosis (Department of Health, 2006). MTB bacilli multiply within infected macrophages for long periods of time and may be transported in the lymphatics or bloodstream to any part of the body (Gill and Beeching, 2004).

Humans are the only reservoir of infection, and transmission of tuberculosis occurs when infectious respiratory secretions are aerosolised by coughing, sneezing or talking. These droplets may remain suspended in the air for long periods and are small enough to reach the terminal air spaces if inhaled (Gill and Beeching, 2004). Patients with lung disease are the main source of infection, and 52% of cases notified in the UK in 2013 had pulmonary disease (Public Health England, 2014c). Between 5 and 10% of people develop active tuberculosis after primary infection, with around 3% doing so within one year of exposure; however, over 90% of MTB infections remain non-pathogenic within a normal human lifespan (Gill and Beeching, 2004).

The incidence of tuberculosis in the UK in 2013 (12.3/100,000) was higher than in most other Western European countries (European Centre for Disease Prevention and Control (ECDC)/WHO Regional Office for Europe, 2013) and nearly five times that of the United States (Centers for Disease Control and Prevention, 2013), having increased steadily since the late 1980s (Public Health England, 2014a). Rates of infection have nonetheless declined by 11.6% in the past two years. In 2013, 73% of cases occurred among people born outside the UK; India, Pakistan and Somalia were the most common countries of origin, but only 15% were recent migrants, indicating a high rate of reactivation of latent tuberculosis (Public Health England, 2014c). The number of migrants from countries with very high TB incidence (>250 per 100,000) decreased by 68% in the last decade, and indicators of recent transmission reflect a decline in primary infections. However, the rate of infection among the UK-born adult population has remained stable (Public Health England, 2014c), and strain typing suggests that up to 40% of all UK cases may be newly acquired (Public Health England, 2014a). Consequently, Public Health England has identified TB as a major priority (Public Health England, 2014a).
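
For context, notification rates of this kind are calculated as the number of cases notified in a year per 100,000 mid-year population:

\[ \text{rate per } 100{,}000 = \frac{\text{cases notified in the year}}{\text{mid-year population}} \times 100{,}000 \]

On this basis, and assuming a 2013 UK population of roughly 64 million (an assumption for illustration, not a figure from the sources cited above), the quoted rate of 12.3/100,000 corresponds to approximately 7,900 notified cases.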

Globally, tuberculosis predominantly affects young adults (World Health Organization, 2014b), and the highest rates of infection in the non-UK-born population are among 25 to 29 year olds. Among patients born in Britain, rates are highest in those aged over 75 years, and both sexes are equally at risk (Public Health England, 2014c). The burden of TB in England is concentrated in the most deprived communities of large urban areas, and London accounted for 37.8% of patients in 2013 (Public Health England, 2014c). Nearly half of these cases were unemployed, 10% had a history of alcohol or drug misuse, homelessness or imprisonment, and 6% were health-care workers (Public Health England, 2014c). Tuberculosis is particularly dangerous for the immunosuppressed, and people with HIV are 26 to 31 times more likely to develop active disease. Tobacco use has also been associated with 20% of TB cases worldwide (World Health Organization, 2014b).

TB is transmitted most effectively in environments where MTB bacilli accumulate in the air, for example in overcrowded and poorly ventilated living and working conditions (Gill and Beeching, 2004). Individuals with close and/or prolonged contact with a patient with pulmonary tuberculosis, or with connections to higher-prevalence areas of the world, are particularly at risk (Department of Health, 2006). Transmission is also favoured by dark and humid conditions, such as mines and prisons (Gill and Beeching, 2004), and several authors have implicated vitamin D deficiency in the disease pathogenesis, although findings are varied and inconclusive (Kearns et al., 2014). Active TB may be mild or asymptomatic for many months, and sufferers may unknowingly infect up to 15 people over the course of a year (World Health Organization, 2014b). Drug-resistant TB is an increasing problem in the UK, and multi-drug-resistant (MDR) TB comprised 1.6% of cases in 2012 (Public Health England, 2013a). Although MDR tuberculosis is unlikely to be more contagious, patients are infectious for longer than those with fully sensitive tuberculosis (Borrell and Gagneux, 2009; Anderson et al., 2014).

The features of effective national TB control programmes have been well documented (National Institute for Health and Care Excellence, 2011; Story et al., 2012; Department of Health – TB Action Plan Team, 2007; Public Health England, 2014a) and include transparent systems of accountability, adequate resources, active local implementation and close outcome monitoring (Abubakar et al., 2011). These activities are managed in the UK by Public Health England together with a wide range of stakeholders, such as NHS England, and include screening. Screening strategies differ for the detection of early active TB and of latent asymptomatic TB; screening for the latter is recommended by NICE for individuals at high risk of infection (National Institute for Health and Care Excellence, 2011) and is referred to as active case finding (ACF) (Golub et al., 2005; Zenner et al., 2013). Identifying tuberculosis early allows prompt treatment and reduces transmission (Public Health England, 2014b).

In the UK, ACF is targeted at healthcare workers involved in exposure-prone procedures, close contacts of known or suspected tuberculosis patients, and people with social risk factors such as homelessness, drug or alcohol misuse, imprisonment or migration from high-risk countries (National Institute for Health and Care Excellence, 2012). Several local authorities and primary care trusts have successfully piloted such schemes, although weaknesses in coordination and targeting have been identified (Pareek et al., 2011a). London's UCLH Find and Treat Service, for example, screens almost 10,000 socially vulnerable people at high risk of tuberculosis annually (University College London Hospitals NHS Foundation Trust, 2014). Various UK charities, such as TB Alert, raise public awareness of tuberculosis and support primary care trusts; they build the capacity of third-sector organisations and inform and subsidise patients and communities (TB Alert, 2014).

The UK Border Agency, in collaboration with the International Organization for Migration, conducts pre-entry screening for active infection across 15 countries where tuberculosis is common (over 40/100,000) (Home Office UK Border Agency, 2012; Public Health England, 2013b). Visa applicants from these countries wishing to stay in the UK for more than six months are screened for pulmonary TB and granted entry only on receipt of a certificate of clearance (Public Health England, 2014b). Funding from the Health Protection Agency (HPA) also supports screening activity at Heathrow and Gatwick airports (Home Office UK Border Agency, 2012). Screening is routinely offered to asylum seekers and to refugees accepted for resettlement into the UK through the Gateway Programme (Home Office UK Border Agency, 2012). There is further evidence that screening migrants for latent TB on entry to the UK is cost-effective for the NHS (Pareek et al., 2011b).

Internationally, the World Health Organization operates via the Stop TB Partnership to set targets, procure and grant funds and resources, lobby governments, and educate and advocate on behalf of TB communities (World Health Organization, 2006; Stop TB Partnership, 2014). Simultaneously, not-for-profit product development partnerships such as the TB Alliance endeavour to develop new TB drug regimens (Horsburgh et al., 2013; Lienhardt et al., 2012a; Lienhardt et al., 2012b; Clinton Health Access Initiative et al., 2010). School vaccination of the indigenous UK population was halted in 2005 following a decline in the incidence of TB, and Bacille Calmette-Guérin (BCG) immunisation is now targeted at neonates within high-risk groups (Department of Health, 2006). These UK endeavours contribute towards the WHO target of eliminating TB as a public health problem by 2050 (World Health Organization, 2006).

References

ABUBAKAR, I., LIPMAN, M., ANDERSON, C., DAVIES, P. & ZUMLA, A. 2011. Tuberculosis in the UK–time to regain control. BMJ, 343, d4281.

ANDERSON, L. F., TAMNE, S., BROWN, T., WATSON, J. P., MULLARKEY, C., ZENNER, D. & ABUBAKAR, I. 2014. Transmission of multidrug-resistant tuberculosis in the UK: a cross-sectional molecular and epidemiological study of clustering and contact tracing. Lancet Infect Dis., 14, 406-15. doi: 10.1016/S1473-3099(14)70022-2. Epub 2014 Mar 4.

BORRELL, S. & GAGNEUX, S. 2009. Infectiousness, reproductive fitness and evolution of drug-resistant Mycobacterium tuberculosis. The international journal of tuberculosis and lung disease : the official journal of the International Union against Tuberculosis and Lung Disease, 13, 1456-66.

CENTERS FOR DISEASE CONTROL AND PREVENTION 2013. Trends in Tuberculosis – United States, 2012. Morbidity and Mortality Weekly Report, 62, 201-2.

CLINTON HEALTH ACCESS INITIATIVE, BILL & MELINDA GATES FOUNDATION, GLOBAL ALLIANCE FOR TB DRUG DEVELOPMENT, GLOBAL DRUG FACILITY, INTERNATIONAL UNION AGAINST TUBERCULOSIS AND LUNG DISEASE, MANAGEMENT SCIENCES FOR HEALTH & TREATMENT ACTION GROUP 2010. Falling Short. Ensuring Access to Simple, Safe and Effective First-Line Medicines for Tuberculosis. New York: Global Alliance for TB Drug Development.

DEPARTMENT OF HEALTH – TB ACTION PLAN TEAM. 2007. Tuberculosis prevention and treatment: a toolkit for planning, commissioning and delivering high-quality services in England [Online]. London: Department of Health. Available: http://webarchive.nationalarchives.gov.uk/20130107105354/http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_075638.pdf [Accessed 19/12/2014].

DEPARTMENT OF HEALTH 2006. Chapter 32 – Tuberculosis. In: SALISBURY, D., RAMSAY, M. & NOAKES, K. (eds.) Immunisation against infectious disease – ‘The Green Book’. 3rd ed. London: The Stationery Office.

EUROPEAN CENTRE FOR DISEASE PREVENTION AND CONTROL (ECDC)/WHO REGIONAL OFFICE FOR EUROPE. 2013. Tuberculosis surveillance and monitoring in Europe 2013 [Online]. Stockholm: European Centre for Disease Prevention and Control. Available: http://www.ecdc.europa.eu/en/publications/_layouts/forms/Publication_DispForm.aspx?List=4f55ad51-4aed-4d32-b960-af70113dbb90&ID=811 [Accessed 19/12/2014].

GILL, G. V. & BEECHING, N. J. 2004. Chapter 12 – Tuberculosis. Tropical Medicine. 5th ed. Oxford: Blackwell Science.

GOLUB, J. E., MOHAN, C. I., COMSTOCK, G. W. & CHAISSON, R. E. 2005. Active case finding of tuberculosis: historical perspective and future prospects. Int J Tuberc Lung Dis., 9, 1183-203.

HOME OFFICE UK BORDER AGENCY 2012. Screening for Tuberculosis and the Immigration Control. UK Border Agency Review of Current Screening Activity 2011 (Central Policy Unit). London: Home Office.

HORSBURGH, C. R., JR., HAXAIRE-THEEUWES, M., LIENHARDT, C., WINGFIELD, C., MCNEELEY, D., PYNE-MERCIER, L., KESHAVJEE, S. & VARAINE, F. 2013. Compassionate use of and expanded access to new drugs for drug-resistant tuberculosis. The international journal of tuberculosis and lung disease : the official journal of the International Union against Tuberculosis and Lung Disease, 17, 146-52.

KEARNS, M. D., ALVAREZ, J. A., SEIDEL, N. & TANGPRICHA, V. 2014. Impact of Vitamin D on Infectious Disease: A Systematic Review of Controlled Trials. Am J Med Sci, 20, 20.

LIENHARDT, C., GLAZIOU, P., UPLEKAR, M., LONNROTH, K., GETAHUN, H. & RAVIGLIONE, M. 2012a. Global tuberculosis control: lessons learnt and future prospects. Nature reviews. Microbiology, 10, 407-16.

LIENHARDT, C., RAVIGLIONE, M., SPIGELMAN, M., HAFNER, R., JARAMILLO, E., HOELSCHER, M., ZUMLA, A. & GHEUENS, J. 2012b. New drugs for the treatment of tuberculosis: needs, challenges, promise, and prospects for the future. The Journal of infectious diseases, 205 Suppl 2, S241-9.

NATIONAL INSTITUTE FOR HEALTH AND CARE EXCELLENCE. 2011. Clinical diagnosis and management of tuberculosis, and measures for its prevention and control. CG117 [Online]. Available: http://www.nice.org.uk/guidance/cg117 [Accessed 19/12/2014].

NATIONAL INSTITUTE FOR HEALTH AND CARE EXCELLENCE. 2012. Identifying and managing tuberculosis among hard-to-reach groups. PH37 [Online]. Available: http://www.nice.org.uk/guidance/ph37 [Accessed 19/12/2014].

PAREEK, M., ABUBAKAR, I., WHITE, P. J., GARNETT, G. P. & LALVANI, A. 2011a. Tuberculosis screening of migrants to low-burden nations: insights from evaluation of UK practice. Eur Respir J., 37, 1175-82. doi: 10.1183/09031936.00105810. Epub 2010 Nov 11.

PAREEK, M., WATSON, J. P., ORMEROD, L. P., KON, O. M., WOLTMANN, G., WHITE, P. J., ABUBAKAR, I. & LALVANI, A. 2011b. Screening of immigrants in the UK for imported latent tuberculosis: a multicentre cohort study and cost-effectiveness analysis. The Lancet. Infectious diseases, 11, 435-44.

PUBLIC HEALTH ENGLAND 2013a. Tuberculosis in the UK: 2013 report. London.

PUBLIC HEALTH ENGLAND. 2013b. UK pre-entry tuberculosis screening brief report 2013 [Online]. London: Public Health England. Available: https://www.gov.uk/government/publications/tuberculosis-pre-entry-screening-in-the-uk [Accessed 19/12/2014].

PUBLIC HEALTH ENGLAND. 2014a. Collaborative Tuberculosis Strategy for England 2014 to 2019: For consultation [Online]. London. Available: https://www.gov.uk/government/consultations/collaborative-tuberculosis-strategy-for-england-2014-to-2019 [Accessed 19/12/2014].

PUBLIC HEALTH ENGLAND. 2014b. Guidance: Tuberculosis screening. Tuberculosis (TB) screening and early detection methods, for professionals working with at-risk populations in the UK. [Online]. Available: https://www.gov.uk/tuberculosis-screening#pre-entry-tb-screening-for-migrants [Accessed 18/12/2014].

PUBLIC HEALTH ENGLAND 2014c. Tuberculosis in the UK: 2014 report. London.

STOP TB PARTNERSHIP 2014. The Stop TB Partnership. Leading the fight against TB. Geneva: Stop TB Partnership.

STORY, A., COCKSEDGE, M., ANDERTON, A., EDGINTON, M., O’DONOGHUE, M., KON, O. M., TAMNE, S., MAW, J. & POLLINGER, E. 2012. Tuberculosis case management and cohort review guidance for health professionals [Online]. London: Royal College of Nursing. Available: http://www.rcn.org.uk/%5F%5Fdata/assets/pdf%5Ffile/0010/439129/004204.pdf [Accessed 19/12/2014].

TB ALERT. 2014. Our Work in the UK [Online]. Brighton. Available: http://www.tbalert.org/what-we-do/uk/ [Accessed 19/12/2014].

UNIVERSITY COLLEGE LONDON HOSPITALS NHS FOUNDATION TRUST. 2014. Find and Treat Service [Online]. Available: https://www.uclh.nhs.uk/OurServices/ServiceA-Z/HTD/Pages/MXU.aspx [Accessed 19/12/2014].

WORLD HEALTH ORGANIZATION. 2006. The Stop TB Strategy [Online]. World Health Organization. Available: http://whqlibdoc.who.int/hq/2006/WHO_HTM_STB_2006.368_eng.pdf [Accessed 19/12/2014].

WORLD HEALTH ORGANIZATION. 2014a. Global tuberculosis report 2014 [Online]. Geneva: World Health Organization. Available: http://www.who.int/tb/publications/global_report/en/ [Accessed 19/12/2014].

WORLD HEALTH ORGANIZATION. 2014b. Tuberculosis Fact Sheet No. 104 [Online]. Geneva: World Health Organization. Available: http://www.who.int/mediacentre/factsheets/fs104/en/ [Accessed 19/12/2014].

ZENNER, D., SOUTHERN, J., VAN HEST, R., DEVRIES, G., STAGG, H. R., ANTOINE, D. & ABUBAKAR, I. 2013. Active case finding for tuberculosis among high-risk groups in low-incidence countries. Int J Tuberc Lung Dis., 17, 573-82. doi: 10.5588/ijtld.12.0920.

End of Life Care: Cancer Patients’ Right to Die

Introduction

Recently, the concept of patient autonomy has become more prevalent within the healthcare field, with the government and the NHS promoting patient choice and providing assurance that individuals will have full control over their care and patient journey. However, a recent publication from Macmillan Cancer Care (MCC) (2013a, pp. 1-27) suggests that there is very little choice available to individuals suffering from terminal cancer with regards to where they spend the end of their lives. Figures provided within the MCC (2013a, p. 8) report suggest that 81% of cancer sufferers would prefer to die at home, whilst in reality 48% die in a hospital and only 23% die within the comfort of their own homes.

For individuals who are approaching the end of their lives, the option of being cared for and dying within their own home, with the familiarity and comfort that this brings, is often very important. The National Bereavement Survey (NBS) (Office for National Statistics, 2012, np) showed that the loved ones of those who had died in hospital often considered the standard of care to be poor compared with those who died at home, in a care home or within a hospice. Indeed, the NBS (ONS, 2012, np) showed that 53% of loved ones whose friend or family member had died at home, and 58% of those whose loved one had died in a hospice, rated the standard of care as outstanding or excellent, compared with just 34% for those who had died within a hospital.

This essay will consider the barriers that cancer patients face when making their end of life choices and will make recommendations for service improvements to ensure that these individuals are able to make, and have honoured, their final choice. First, however, the essay will give a brief overview of the benefits that end of life patient choice can bring both to the individual and to wider society.

The Benefits of End of Life Patient Choice

According to the National End of Life Intelligence Network (2012, p.7), 89% of patients who die in hospital are brought in as emergency admissions. However, a large number of these individuals have already expressed a desire to die at home, representing poor patient outcomes and negative experiences. In addition, these unnecessary emergency admissions place a costly strain on accident and emergency departments, and the patients occupy hospital beds that could be used for other cases. Given that the UK population is both growing and ageing, it is reasonable to expect that the number of individuals dying from terminal cancer will also increase over the next few decades, and this increase is likely to make the current model of care unsustainable. However, promoting choice and delivering end of life care choices can actually save money by reducing the number of emergency admissions: according to MCC (2013a, p.9), there is a net saving of just under £1,000 for every individual who dies in the community rather than in a hospital bed.
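
To give a rough sense of scale, this per-patient figure can be combined with the MCC (2013a, p. 3) estimate, cited in the conclusion of this essay, that around 36,000 patients each year die in hospital despite having chosen to die at home:

\[ 36{,}000 \times \pounds 1{,}000 \approx \pounds 36 \text{ million per year} \]

This is only a back-of-envelope calculation, but it suggests that honouring existing preferences could release tens of millions of pounds annually, quite apart from the improvement in patient experience.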

Barriers to End of Life Care Choices

Evidence suggests that there are multiple barriers preventing individuals from being cared for and ending their lives in their chosen place. The first barrier is the identification of people approaching the end of their lives. According to MCC (2013a, p.10), 38% of cancer patients approaching the end of life were unaware that they were dying, whilst figures from Marie Curie Cancer Care (2013, p.7) show that only 26% of individuals with a palliative care need are placed on the palliative care register. One of the main reasons for this appears to be a lack of confidence among health professionals in instigating conversations with individuals about their end of life journey. A study carried out by Revill (2010, p.11) found that 60% of GPs were not confident about discussing death or dying with their patients. This lack of identification and professional confidence prevents many people from making their end of life choice in a timely fashion, thereby increasing the number of emergency admissions discussed previously.

However, another issue that has been raised is that of poor planning and coordination between services. When one considers the needs of a terminally ill cancer patient, it is clear that multiple health and social care providers must work together to provide a joined-up service. Unfortunately, the MCC (2013a, p.11) report suggests that this joined-up service is not occurring, with only 45% of respondents thinking that community services worked well together and only 33% stating that GPs and other services outside the hospital worked well together. The reason for this poor service is considered to be a lack of coordination and communication between the different care entities. Indeed, the MCC (2013a, p.11) report suggests that it often falls to the patient or their close family and friends to coordinate care between health and social care departments. The report noted that information frequently needed to be repeated to different professionals, suggesting that there is a lack of communication between departments and that patient information is not being recorded or shared appropriately.

Nevertheless, there is evidence to suggest that Advance Care Plans (ACPs) are a successful means by which a person's end of life choices can be achieved. Abel et al (2013, pp.168-173) followed 969 terminally ill patients, 550 of whom had made an ACP; 75% of these individuals successfully achieved their dying wishes with regards to their chosen location. In addition, a study published by the NHS (2012, pp.3-4) suggests that Electronic Palliative Care Coordination Systems (EPaCCS), in which patient information, including end of life choices, can be stored and shared, are an effective way of honouring patient choice, with up to 80% of individuals living in areas where an EPaCCS is implemented achieving their preferred place of death. The NHS (2012, p.12) report also shows that implementation of this system has resulted in savings of £133,200 where it is in place. A further positive study has been published by Gao et al (2013, np), who found that the number of individuals able to die either at home or in a hospice has increased since 2005, when the National End of Life Care Programme was first launched. However, the percentage change was only marginal (0.8%), suggesting that more needs to be done to ensure that patient autonomy is at the top of the list for terminally ill patients.

Another barrier likely to prevent an individual from dying within their own home is a lack of skills and resources within the community workforce. In these cases the role of the community nurse is vital; however, the number of community nurses is steadily declining (Royal College of Nursing, 2013, np). This reduction of the workforce further dilutes the available skill mix, having a detrimental impact on the quality of care provided to those who choose to die at home. According to the MCC (2013a, p.13) report, only 19% of individuals who chose to die at home received adequate pain relief during their last three months of life. Indeed, the lack of 24/7 access to community services forced a large number of these individuals to contact emergency services, resulting in admission to hospital. In 2010, nearly half of the UK's primary care trusts did not provide 24/7 community nursing services for end of life patients, and little progress has been made following the subsequent change to Clinical Commissioning Groups (MCC, 2013a, p. 13).

Another report published by MCC (2013b, pp. 1-15) suggests that a lack of access to social care services also restricts an individual's ability to make end of life care choices. Whilst it is obvious that the right amount of social support is needed for a terminally ill individual to remain at home during the last stages of life, this support is often not provided. The MCC (2013b, p. 3) report suggests that this is not always because the service is unavailable, but more often the result of the complex assessment process and the lack of coordination between health and social services. Indeed, 97% of healthcare professionals stated that the complexity of the social care needs assessment is a substantial barrier to obtaining the right amount of home care for terminally ill patients. As such, the care of these terminally ill individuals is often left to family members acting as informal carers. However, only 5% of these individuals actually receive a carer's allowance, despite their taking on the majority of the personal care responsibilities for these patients. Thomas et al (2002, p.531) asserted that the needs of cancer patients' carers are greatest as the cancer progresses to its end stage; however, a distinct lack of support for these informal carers is prevalent throughout the UK (Soothill et al, 2001, p.468). MCC (2013b, p.6) found that 47% of these informal carers felt that they needed support but were unable to get any. It is therefore not surprising that this lack of carer support results in many cancer patients being admitted to hospital in the days or hours before death, despite their wish to die at home.

Recommendations for Improvement

As studies have shown that local implementation of the EPaCCS has been successful, there should be a renewed commitment by the Department of Health and the NHS to ensure the national implementation of this scheme. Indeed, the National End of Life Care Strategy (DOH, 2008, np) made a commitment to pilot and establish end of life care registers that would ensure the coordinated care of terminally ill patients and also ensure that every organisation involved in an individual's care was aware of their end of life choices. As such, it is asserted that NHS England needs to prioritise the roll-out of these systems. Once implemented on a national basis, EPaCCS will not only coordinate care but will also provide considerable data that can be used to compare outcomes for end of life patients throughout the UK. In addition to this system, it is vital that health care professionals involved with terminally ill cancer patients encourage them to complete an ACP as a routine part of the care package. A randomised controlled trial carried out by Detering et al (2010, np) followed 309 terminally ill patients for a period of six months, 154 of whom had completed an advance care plan. Of the 56 patients who died during the study period, 29 had made an ACP, and 86% of these achieved their end of life choices, compared with just 30% of those who had not made an ACP. This shows that it is vital to document end of life choices to ensure that they are followed by all those involved in the final days of the patient's care.
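
Assuming the reported percentages can simply be applied to the group sizes, the Detering et al (2010, np) figures work out roughly as follows:

\[ 0.86 \times 29 \approx 25 \qquad \text{and} \qquad 0.30 \times (56 - 29) = 0.30 \times 27 \approx 8 \]

That is, about 25 of the 29 patients with an ACP achieved their wishes, against about 8 of the 27 without one: roughly three times the proportion, underlining the practical value of completing an ACP.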

Another recommendation is to make end of life care training mandatory for all health professionals who are likely to be involved in palliative care. This includes training in the timely identification of individuals approaching the end of their lives and giving these professionals, including GPs, the confidence to instigate end of life discussions with terminally ill patients. This will enable terminally ill individuals and their families to come to terms with the disease progression and make appropriate plans for end of life care. It is also recommended that all terminally ill individuals have a named professional who is responsible for coordinating their care and who will ensure that their end of life choices are met whenever possible. This was a key recommendation of the UK Government's (2013, pp. 1-62) review of the Liverpool Care Pathway, which stated that a named consultant or GP should take overall responsibility for a patient's end of life care, whilst a named registered nurse would have day-to-day responsibility for the care of that individual and for the communication of information between the patient, family members and other members of the care team.

The UK Government's (2013, p. 57) review also recommends improving access to community services by increasing funding to ensure consistent 24/7 access to all social care services throughout the UK. This is considered a priority, as without access to 24/7 care a large number of individuals do not have their pain managed adequately, forcing them to take further action by attending an emergency department. In addition, the government needs to commit to implementing free social care for terminally ill patients and to simplifying the social care assessment to ensure that all those who need social support are able to access this service in a timely fashion. Whilst the UK government has recognised that there is much merit in the proposal of free end of life social care (MCC, 2013a, p. 19), it has yet to offer a firm commitment to this proposal. The continued complexity of the social care assessment, and the confusion over who is able to receive social care, need to change if patients' wishes to die at home are to be honoured. Indeed, Taylor (2012, p.1297) asserts that there is a need to change the way in which all health and social care is provided to elderly patients and suggests a combined health and social care assessment to ensure a properly joined-up and coordinated service for these vulnerable patients.

It is also recommended that improved support for carers is instigated, to ensure that all those caring for a terminally ill patient are recognised as informal carers and are in receipt of a carer's allowance. In addition, it is vital that these carers are given the right level of support by health professionals; this should include 24/7 access to help and advice, regular respite, and adequate information about the progression of their loved one's disease to enable them to encourage the patient to make end of life care plans. Joyce et al (2014, p.1150) found that of 120 caregivers responsible for delivering medications to a terminally ill relative, only 27 (22.5%) received any formal support. This often led to confusion over dose rates and fear that the patient was receiving too much or too little of the medication provided. The issue is compounded by the fact that many of these informal carers are elderly themselves and often have their own health problems (Jack et al, 2015, p.131).

Finally, it is considered that delivering choice in end of life care should be focused on giving the patient a good death, regardless of where they choose to die. It therefore seems logical that the experiences of terminally ill patients towards the end of their lives must be understood in order to deliver adequate care, and it is vital to explore how the experiences, concerns, fears and feelings of people approaching the end of their lives can be recorded and used to improve future patient outcomes. Whilst it is accepted that the National Bereavement Survey (ONS, 2012, np) provided a large volume of useful information, the current lack of nationally collected information from end of life patients needs to be addressed. It is therefore recommended that future study be directed in this way.

Conclusion

In conclusion, it is clear that whilst having a genuine choice over where to spend the last few days and hours of one's life is hugely important to terminally ill patients, there are significant barriers to achieving these choices. Current figures suggest that the large majority of cancer patients would choose to die at home, yet fewer than a quarter actually do so; the MCC (2013a, p. 3) report estimates that this amounts to 36,000 patients dying in hospital when they had chosen to die at home. A number of barriers currently prevent individuals from achieving personal choice at the end of their lives; these include poor identification of individuals entering the end of life stage, poor communication from health professionals, poor planning and coordination between health and social services, a lack of skills and resources in community nursing, and a lack of universal access to social care resources.

Nevertheless, none of these barriers is insurmountable if current services are simplified and organised in a way that puts the needs of individuals and their families and carers at the forefront. Whilst the government has funded reports and strategies to improve end of life care, it is clear that not enough is being done to change the way in which end of life care is provided. Significant change is required to move care and resources out of hospitals and into the community so that people's preferences can be delivered. However, this can only happen if all the players involved in end of life care share the same ambition: to deliver a coordinated and integrated care package that meets the needs, wishes and preferences of end of life patients and their carers.

A number of recommendations on how this can be achieved have been included in this essay. These include simplifying the social care assessment; providing free social care to end of life patients; improving support for informal carers and ensuring that these carers are recognised; improving the training of health professionals in recognising the transition to the end of life stages and encouraging them to instigate discussions over end of life choices; improving access to social services by ensuring a 24/7 service across the UK; and implementing the roll-out of the EPaCCS across the whole of the UK so that end of life choices are recorded and shared between all the relevant care providers. As it stands at present, whilst end of life patients nominally have a choice over where they die, these preferences are often not honoured, and they do not have full control or autonomy over their end of life care. Yet the choice of place to die is not a myth: it is a very achievable option that requires coordination between services and a commitment from the government to improve community health services.

References

Abel, J., Pring, A., Rich, A., Malik, T., & Verne, J. (2013). The impact of advance care planning of place of death, a hospice retrospective cohort study. BMJ Supportive & Palliative Care, 3(2), 168-173.

Department of Health. (2008). End of life care strategy. Available online at https://www.gov.uk/government/publications/end-of-life-care-strategy-promoting-high-quality-care-for-adults-at-the-end-of-their-life accessed 21 June 2015.

Detering, K. M., Hancock, A. D., Reade, M. C., & Silvester, W. (2010). The impact of advance care planning on end of life care in elderly patients: randomised controlled trial. British Medical Journal, 340. 1345-1353

Gao, W., Ho, Y. K., Verne, J., Glickman, M., Higginson, I. J., & GUIDE_Care Project. (2013). Changing patterns in place of cancer death in England: a population-based study. PLoS Med, 10(3), e1001410.

Jack, B. A., O'Brien, M. R., Scrutton, J., Baldry, C. R., & Groves, K. E. (2015). Supporting family carers providing end-of-life home care: a qualitative study on the impact of a hospice at home service. Journal of Clinical Nursing, 24(1-2), 131-140.

Joyce, B. T., Berman, R., & Lau, D. T. (2014). Formal and informal support of family caregivers managing medications for patients who receive end-of-life care at home: A cross-sectional survey of caregivers. Palliative Medicine, 28(9), 1146-1155.

Macmillan Cancer Care. (2013a). A time to choose. Available online at http://www.macmillan.org.uk/Documents/GetInvolved/Campaigns/Endoflife/TimeToChoose.pdf accessed 21 June 2015.

Macmillan Cancer Care. (2013b), There’s no place like home. Available online at http://www.macmillan.org.uk/Documents/GetInvolved/Campaigns/SocialCare/Making-the-case-for-free-social-care-at-the-end-of-life.pdf accessed 21 June 2015.

Marie Curie Cancer Care. (2013). Death and dying. Available online at https://www.mariecurie.org.uk/globalassets/media/documents/policy/policy-publications/february-2013/death-and-dying-understanding-the-data.pdf accessed 21 June 2015.

National End of Life Intelligence Network. (2012). What do we know now that we didn’t know a year ago? New intelligence on end of life care in England. Available online at http://www.endoflifecare-intelligence.org.uk/view?rid=464 accessed 21 June 2015.

NHS. (2012). Making the case for change: Electronic palliative care coordination systems. Available online at www.nhsiq.nhs.uk/download.ashx?mid=4423&nid=4424 accessed 21 June 2015.

Office for National Statistics. (2012). National Bereavement Survey 2012. Available online at http://www.ons.gov.uk/ons/rel/subnational-health1/national-bereavement-survey–voices-/2012/index.html accessed 21 June 2015.

Revill, S. (2010). GP Pilot Project Evaluation. Available online at http://www.dyingmatters.org/sites/default/files/user/documents/Resources/Dying_Matters_GP_Pilot_Evaluation_-_final.pdf accessed 21 June 2015.

Royal College of Nursing. (2013). Frontline First: Nursing on Red Alert. Available online at https://www.rcn.org.uk/__data/assets/pdf_file/0003/518376/004446.pdf accessed 21 June 2015.

Soothill, K., Morris, S. M., Harman, J. C., Francis, B., Thomas, C., & McIllmurray, M. B. (2001). Informal carers of cancer patients: what are their unmet psychosocial needs? Health & Social Care in the Community, 9(6), 464-475.

Taylor, B. J. (2012). Developing an integrated assessment tool for the health and social care of older people. British Journal of Social Work, 42(7), 1293-1314.

Thomas, C., Morris, S. M., & Harman, J. C. (2002). Companions through cancer: the care given by informal carers in cancer contexts. Social Science & Medicine, 54(4), 529-544.

UK Government (2013). More Care, Less Pathway, A review of the Liverpool Care Pathway. Available online at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/212450/Liverpool_Care_Pathway.pdf accessed 21 June 2015.

Absorption of Orally Administered Drugs

Introduction

The oral route remains the most desirable route for the administration of medicinal products1, owing to its ease and convenience in comparison with other routes such as the pulmonary route or the more invasive intravenous route.

The pharmaceutical industry has developed considerably over the past 40 years with respect to the rate at which new chemical entities are discovered. This increased rate is primarily due to the advent of high-throughput screening, but the rate of synthesis of these novel compounds has not been matched by the release of new drugs onto the market, owing to the high failure rate during the development process1.

In order to minimise the cost and resources lost in this way, effective screening methods for both pharmacological action and bioavailability must be used.

The most important process influencing the bioavailability of a drug is absorption, and the creation and use of suitable models that can predict the in vivo absorption profile of a drug is critical to achieving the desired reduction in the cost of the pharmaceutical development process.

There are two primary phases of absorption for orally administered drugs: the first is dissolution of the drug in the aqueous media present at the site or sites of absorption1; the second is permeation of the dissolved drug through, predominantly, the small intestinal membrane into the hepatic portal vein1.

The main factors affecting dissolution of a drug in the gastrointestinal (GI) system are the pH of the environment, the volume of dissolution media and the presence of food, which can either encourage or delay the passage of the dosage form into the small intestine, where many drugs are absorbed.

Permeation of the drug through the small intestinal membrane is influenced by several variables, and the presence of influx and efflux pumps on the apical surface is a main consideration2. There are three main routes of absorption that drugs can take: transcellular absorption through the cells, paracellular absorption through the tight junctions between cells, or uptake by influx transporters present on the apical surface3. Efflux transporters are also present, which act to eject drug molecules out of the cell and limit bioavailability1.

All of these processes and scenarios need to be considered when developing an in vitro model to accurately predict gastrointestinal drug absorption. The extent to which a particular model represents the results seen in vivo can be conveyed through a mathematical relationship known as the in vitro-in vivo correlation (IVIVC)2,4. The predictive power of this correlation ultimately depends upon the capacity of the in vitro method to simulate and reflect what occurs in vivo. The fact that different models are able to do this to different degrees is recognised in the definition of different levels of IVIVC: levels A, B, C, multiple C and D, with A being the highest level5.
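
To illustrate what the highest level means in practice, a level A IVIVC is a point-to-point relationship between the in vitro dissolution profile and the in vivo input (for example, the fraction absorbed estimated by deconvolution). In its simplest, often linear, form it can be sketched as

\[ F_{abs}(t) = a \cdot F_{diss}(t) + b \]

where F_diss(t) is the fraction of the dose dissolved in vitro at time t, F_abs(t) is the fraction absorbed in vivo, and a and b are fitted constants; in practice the two time scales may also need to be mapped onto one another before such a relationship holds.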

There are many factors to consider and appreciate when looking at IVIVCs made for drugs absorbed from the gastrointestinal tract, as models are based either on the dissolution of the drug within the GI media at the absorption site or on the permeability of the drug across the intestinal membrane.

This review primarily considers models used to simulate and predict drug permeability, with a discussion of the ability of each technique to reflect and predict the in vivo environment and response, which would allow a representative IVIVC to be formed.

In silico permeability models

These models are computer programs that aim to predict the absorption and permeability of a drug. One review6 gave a very good summary of the programming process and highlighted the specifications against which the physicochemical properties of drugs are judged.

An advantage of using such a model is that a large number of compounds can be screened within a short period of time6, a property that makes it very practical in industry.

However, in terms of developing an IVIVC this model has limited use7. One major argument against its use, highlighted by another review1, is that absorption predictions are based only on the physicochemical properties of the drug. This assumption is flawed, as there are other factors to consider, such as drug-membrane interactions through active transporters and efflux pumps1.

Parallel Artificial membrane permeability assay (PAMPA)

This technique is based on the formation of an artificial membrane: a hydrophobic filter material acts as a support upon which lecithin and organic solvents are placed to produce an artificial lipid membrane1.

One recent review8 greatly criticised the use of this technique in the drug discovery process. It stated that there was no real benefit in using this technique over cell culture methods such as the Caco-2 and MDCK cell lines, because it was just as time consuming while yielding less informative data8.

One of the claimed advantages of this technique was that it was less labour intensive and quicker to perform9, but this claim was a main focus of the review's argument against the technique: owing to the different manipulations required, such as testing at various pH values, the process was deemed just as labour intensive as the Caco-2 or Ussing chamber methods.

A rebuttal of the points raised by this review was offered by another9, which highlighted the ability of the technique to provide various information, such as the partition coefficient and apparent permeability (Papp) of a drug.

Nevertheless, both reviews failed to specifically address the strengths or weaknesses of the technique in creating an IVIVC. The capacity of this technique to do so appears limited, as there is a gross underestimation of the active transport of hydrophilic compounds with low molecular weights1.

Ussing Chambers

This technique involves isolating intestinal membrane and cutting the tissue into strips. These strips are clamped in a suitable device to produce a flat sheet between two chambers, the donor and the receiver chamber1. The measurement taken is the amount of drug appearing in the receiver chamber1. To monitor the viability of the intestinal tissue, electrical resistance is measured by passing a current across the membrane1.

Only a few studies have used this technique, and it has only been shown to produce a level D IVIVC, in which drug candidates in the development process are placed in rank order. One such study10 presented this technique as being equally capable of ranking drug candidates when compared with Caco-2 cells and the in situ technique of a perfused jejunum loop.

One article11 opposes the use of this technique and presents the counter-argument to its use in creating such a correlation. The paper acknowledged that this model is biologically representative, but clearly stated that the technique is not robust enough to be incorporated as a routine method in early development, owing to the complexity associated with setting up the instrument. This is a fair observation and highlights an impracticality of the method.

Caco-2 cell lines and sub-clones

The method that has been best supported in recent studies is the Caco-2 cell culture model, which has been shown to mimic intestinal absorption effectively. These cells are human colon adenocarcinoma cells that proliferate in culture1 and are grown on small porous membranes that fit into the wells of multi-well plates. A sample of the drug being tested is placed on top of the membrane, the amount of drug that passes through is measured, and the apparent permeability (Papp) is determined.
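
For reference, the apparent permeability coefficient in monolayer assays of this kind is conventionally calculated as

\[ P_{app} = \frac{dQ/dt}{A \cdot C_0} \]

where dQ/dt is the steady-state rate of appearance of the drug in the receiver compartment, A is the surface area of the monolayer and C_0 is the initial drug concentration in the donor compartment; Papp is usually reported in cm/s.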

Arguments in favour of this method state that the model reflects in vivo conditions very well: not only can transcellular and paracellular diffusion occur, but both influx and efflux transporters are present, allowing active transport processes to be considered1,12. Such transport systems include those for sugars and bile acids, the efflux transporter P-glycoprotein11 and the more recently discovered multiple drug resistance protein (MDRP)11.

This view is supported by many who consider this model to be highly representative for the prediction of intestinal absorption. A study by Yee13 analysed 36 drugs and observed the correlation between the apparent permeability (Papp) obtained from the cells and the percentage absorbed determined from in vivo testing.

A correlation coefficient of 0.90 between the Papp measured in vitro and the percentage absorbed in vivo was obtained, showing that the technique is capable of reliably predicting in vivo results13. Another study14 confirmed the predictive ability of this model using 20 compounds, establishing a correlation coefficient of 0.92 between Papp and the percentage of the dose absorbed14.
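
The coefficients quoted in these studies are simple correlation coefficients between the two measured quantities. As a minimal sketch of how such a value could be computed, assuming only that paired Papp and fraction-absorbed values are available (the variable names below are illustrative and do not reproduce data from either study):

    from math import sqrt

    def pearson_r(xs, ys):
        # Pearson correlation coefficient between two equal-length sequences,
        # e.g. in vitro Papp values and in vivo fractions absorbed.
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # papp_values and fractions_absorbed would hold the paired experimental data:
    # r = pearson_r(papp_values, fractions_absorbed)

A value of r close to 1, as reported in the studies above, suggests that ranking compounds by their in vitro Papp reproduces their in vivo rank order almost exactly.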

To further support the use of Caco-2 cells, some studies10,11 have highlighted the ability of this method to be used in the early stages of development to produce a level D IVIVC, in which drug candidates are placed in rank order.

Despite all these positive aspects, however, some13,15-16 remain critical of this technique because of an associated low level of reproducibility, with gross variability in results between different laboratories15. This has been attributed to differing culture conditions within each laboratory13,16. For example, one study highlighted the importance of culture nutrients and the duration of cell feeding, as more L-methyldopa was absorbed as the feeding time increased13.

Another recognised limitation of the model is that as a cell line is repeatedly passaged, the transepithelial electrical resistance (TEER), mannitol flux and cell growth change1. TEER is used as a validation tool that quantitatively reflects the integrity of the monolayer, falling as the viability of the cell culture diminishes17.
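
In practice, TEER is obtained from the measured electrical resistance normalised to the membrane area, along the lines of

\[ TEER = (R_{monolayer} - R_{blank}) \times A \]

where R_monolayer is the resistance measured across the cell-covered filter, R_blank is that of a cell-free filter, and A is the membrane area, giving units of Ω·cm².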

The cell line is also unable to produce mucus17, which in vivo retards drug contact with the apical membrane of the small intestine, and a fixed pH is used in the model17. Neither feature is reflective of the in vivo situation, where the mucus layer has been shown to retard permeation and the pH of the small intestine changes along its length.

A strong counter-argument against the use of Caco-2 cells is that the predictive power of the method differs depending upon the main absorption route that the drug uses. Two studies14,15 have indicated variability in the Papp of mannitol, polyethylene glycol (PEG) 4000 and fluorescein, compounds with low paracellular permeability, in various batches of Caco-2 cells from different origins. Another study17 clearly showed that Caco-2 cells underestimated the absorption of amoxicillin, a passively absorbed drug, and were not able to truly model the absorption of drugs that are absorbed by carrier-mediated processes, owing to the saturation or under-expression of influx carriers and the over-expression of the efflux transporter P-glycoprotein.

This limitation of the Caco-2 cell line is where the TC-7 cell line proves superior. This is a sub-clone of the Caco-2 cell line, isolated at a late passage number, that has been shown to express different levels of sucrase-isomaltase and glucose transporters17.

Arguments in favour of this model claim that it is more representative of the in vivo situation17, as it expresses levels of sucrase-isomaltase similar to those seen in the human jejunum17. UDP-glucuronosyltransferase, an enzyme involved in conjugation (metabolic) reactions, is also expressed at a level more representative of that in vivo, and an IVIVC has been formed using the in vitro data obtained from this model17.

A related cell line is HT29-18-C1, a sub-clone of the HT-29 human colon adenocarcinoma line. A study18 used this cell line, and the information obtained was used to calculate a permeability coefficient (PC) for a particular compound. A relationship between the percentage absorbed and the PC was formed in much the same manner as that created using Papp, and the cell line was shown to be a good model for use in the early development process.

This method does, however, possess a significant flaw: the tight junctions established in this cell line were not as tight as those seen in vivo18, allowing passive diffusion to occur to a greater extent than would normally be the case. This was shown in the same study18, where the PC of mannitol was ten times lower than that seen in Caco-2 cells, which is not reflective of in vivo conditions.

Madin-Darby Canine Kidney (MDCK) cells

The progressive changes in TEER seen in Caco-2 cells have led to the use of Madin-Darby Canine Kidney (MDCK) cells as a model to predict intestinal absorption14. These are differentiated epithelial cells that form tight junctions when cultured on semi-permeable membranes14; they also possess transporters, although not as many as the Caco-2 cell line14.

One study19 highlighted both the arguments in favour of the technique and those opposing it by comparing the model not only with in vivo data but also with the Caco-2 cell line. The predictive power of the model was similar to that of Caco-2 cells for passively absorbed compounds that showed good permeability19. For compounds that were poorly permeable or actively transported, the model was unable to accurately predict the degree of absorption; for the latter this is due to the minimal transporter expression of the MDCK cells19, resulting in a poor IVIVC.

2/4/A1 cell line

This cell line, which originated from fetal rat intestine, has been reported to mimic the permeability of the small intestine to drugs absorbed via the paracellular route to a greater extent than the Caco-2 cell line1.

One paper20 clearly advocates the use of this cell line for this reason, as its tight junctions are more representative and the extent of passive absorption is similar to that seen in vivo. In this study the cell line was transformed in order to improve viability, and a sigmoid relationship was established between the fraction of drug absorbed in vivo and the permeability coefficient obtained in vitro.

The predominant argument against the use of this model, also presented by the same study20, was that the shape of the cells is not similar to that of the small intestinal epithelium: the cells are cuboidal as opposed to columnar, and there were fewer microvilli present on the apical surface. This limits the model's capability to reflect transcellular or carrier-mediated absorption, which are major routes for many drugs, and this negatively impacts any IVIVC created.

Conclusion and the Future

In examining the arguments for and against the different cell culture techniques, the Caco-2 cell line appears to be the most reflective of in vivo absorption. This is because the cell line expresses transporters, allows all routes of absorption, and has an associated low operating cost, high reliability and high throughput capacity. All these advantages make it a very practical and useful model for routine use in industry.

Nevertheless, there is still room for improvement, as the in vivo environment is not completely represented by this cell line. One significant aspect omitted is the dissolution of the drug and the impact that this process has on the amount of the dose available for permeation.

Therefore the next step in producing a completely reflective model that can be used to form a good IVIVC is the combination of methods to take into account the many aspects influencing bioavailability1, with the ultimate goal of creating an in vitro gastrointestinal system model.

The incorporation of a modified Caco-2 cell line, co-cultured with other cells such as MDCK cells, into an artificial digestive system model such as the TIM-1 model is one example of a step that could be investigated towards attaining this goal. Within the TIM-1 model there is still room for improvement, but it does provide a foundation to build and develop upon. The incorporation of the newly created PBL dynamic gastric model to replace the gastric compartment of the TIM-1 would be a combination that could provide more insight into actual food effects on drug absorption and permeation. Developments similar to this would eventually lead to the creation of a very reliable and reflective in vitro model.

Bibliography

(1) Balimane PV, Chong S, Morrison RA. Current methodologies used for evaluation of intestinal permeability and absorption. J.Pharmacol.Toxicol.Methods 2000;44(1):301-312.

(2) Emami J. In vitro – In vivo relationships: Concepts, regulatory perspectives, advances and attempts. J.Pharm.Pharm.Sci. 2006 27 Feb;9(1):82-100.

(3) Hu M, Borchardt RT. Mechanism of L-alpha-methyldopa transport through a monolayer of polarized human intestinal epithelial cells (Caco-2). Pharm.Res. 1990;7(12):1313-1319.

(4) Emami J. In vitro-in vivo correlation: From theory to applications. J.Pharm.Pharm.Sci. 2006 16 Jun;9(2):31-51.

(5) Yu LX, Amidon GL, Polli JE, Zhao H, Mehta MU, Conner DP, et al. Biopharmaceutics classification system: The scientific basis for biowaiver extensions. Pharm.Res. 2002;19(7):921-925.

(6) Bergstrom CAS. In silico predictions of drug solubility and permeability: Two rate-limiting barriers to oral drug absorption. Basic Clin.Pharmacol.Toxicol. 2005 Mar;96(3):156-161.

(7) Barr WH, Riegelman S. Intestinal drug absorption and metabolism. I. Comparison of methods and models to study physiological factors of in vitro and in vivo intestinal absorption. J.Pharm.Sci. 1970;59(Feb):154-163.

(8) Galinis-Luciani D, Nguyen L, Yazdanian M. Is PAMPA a useful tool for discovery? J.Pharm.Sci. 2007 Nov;96(11):2886-2892.

(9) Avdeef A, Bendels S, Di L, Faller B, Kansy M, Sugano K, et al. PAMPA – Critical factors for better predictions of absorption. J.Pharm.Sci. 2007 Nov;96(11):2893-2909.

(10) Boisset M, Botham RP, Haegele KD, Lenfant B, Pachot JI. Absorption of angiotensin II antagonists in Ussing chambers, Caco-2, perfused jejunum loop and in vivo: Importance of drug ionisation in the in vitro prediction of in vivo absorption. Eur.J.Pharm.Sci. 2000;10(3):215-224.

(11) Fearn RA, Hirst BH. Predicting oral drug absorption and hepatobiliary clearance: Human intestinal and hepatic in vitro cell models. Environ.Toxicol.Pharmacol. 2006 Feb;21(2 SPEC. ISS):168-178.

(12) Stewart BH, Chan OH, Lu RH, Reyner EL, Schmid HL, Hamilton HW, et al. Comparison of Intestinal Permeabilities Determined in Multiple in Vitro and in Situ Models: Relationship to Absorption in Humans. Pharm.Res. 1995 May;12(5):693-699.

(13) Yee S. In vitro permeability across Caco-2 cells (colonic) can predict in vivo (small intestinal) absorption in man – Fact or myth. Pharm.Res. 1997;14(6):763-766.

(14) Volpe DA. Variability in Caco-2 and MDCK cell-based intestinal permeability assays. J.Pharm.Sci. 2008 Feb;97(2):712-725.

(15) Walter E, Kissel T. Heterogeneity in the human intestinal cell line Caco-2 leads to differences in transepithelial transport. Eur.J.Pharm.Sci. 1995;3(4):215-230.

(16) Tsuji A, Takanaga H, Tamai I, Terasaki T. Transcellular transport of benzoic acid across Caco-2 cells by a pH-dependent and carrier-mediated transport mechanism. Pharm.Res. 1994;11(1):30-37.

(17) Gres M, Julian B, Bourrie M, Meunier V, Roques C, Berger M, et al. Correlation Between Oral Drug Absorption in Humans, and Apparent Drug Permeability in TC-7 Cells, A Human Epithelial Intestinal Cell Line: Comparison with the Parental Caco-2 Cell Line. Pharm.Res. 1998 May;15(5):726-733.

(18) Wils P, Warnery A, Phung-Ba V, Scherman D. Differentiated intestinal epithelial cell lines as in vitro models for predicting the intestinal absorption of drugs. Cell Biol.Toxicol. 1994 Dec;10(5-6):393-397.

(19) Irvine JD, Takahashi L, Lockhart K, Cheong J, Tolan JW, Selick HE, et al. MDCK (Madin-Darby canine kidney) cells: A tool for membrane permeability screening. J.Pharm.Sci. 1999 Jan;88(1):28-33.

(20) Tavelin S, Milovic V, Ocklind G, Olsson S, Artursson P. A conditionally immortalized epithelial cell line for studies of intestinal drug transport. J.Pharmacol.Exp.Ther. 1999 Sep;290(3):1212-1221.

Breastfeeding in the First Six Months and Childhood Obesity

This work was produced by one of our professional writers as a learning aid to help you with your studies

Can breastfeeding in the first six months prevent childhood obesity?

Childhood obesity is becoming a worldwide concern given its potential future health implications. Obese children are more likely to suffer physical and mental health problems and are likely to develop into obese adults (Labayen, Ruiz et al. 2012), thereby increasing the long-term risk of developing chronic conditions such as diabetes, cardiovascular disease and stroke.

The cause of childhood obesity is multifactorial, including hereditary factors, comorbidities, dietary habits and physical activity. There is much debate as to the impact of breastfeeding during the early stages of life, and how rates of childhood obesity in breastfed children compare with those in formula-fed newborns.

Breast milk is nutritionally balanced to provide infants with all their dietary requirements during the early stages of life. It also provides antibodies which reduce the risk of infection in newborns. Breast milk contains the appropriate amounts of protein, water, fat and sugar for a newborn and changes composition over time to adapt to a growing child’s needs. Formula tends to be higher in protein and fat than the baby actually requires, and this excessive intake has been linked with adiposity (Hernell 2011). Marseglia et al have reviewed the potential impact of key breast milk constituents thought to play a role in reducing obesity risk (Marseglia, Manti et al. 2015).

There have been a number of recent reviews discussing the association between breastfeeding and childhood obesity, all of which have concluded that breastfeeding confers a protective effect against childhood obesity and being overweight (Horta and Victora 2013, Aguilar Cordero, Sanchez Lopez et al. 2014, Lefebvre and John 2014, Yan, Liu et al. 2014). The largest reduction in obesity risk was 81%, reported in a study of females aged 11 years who had been breastfed for more than three months compared with controls who had never been breastfed (Panagiotakos, Papadimitriou et al. 2008). The males in the same study had a reduced risk of 72%, and both results were statistically significant. However, other literature reports either no association between breastfeeding and childhood obesity (Burdette, Whitaker et al. 2006, Huus, Ludvigsson et al. 2008, Jing, Xu et al. 2014), or an increased risk of obesity following breastfeeding of 9% (Kwok, Schooling et al. 2010), 10% (Novaes, Lamounier et al. 2012), 11% (Buyken, Karaolis-Danckert et al. 2008), 14% (Sabanayagam, Shankar et al. 2009), 18% (He 2000), 29% (Al-Qaoud and Prakash 2009), 34% (Neutzling, Hallal et al. 2009), 40% (Toschke, Martin et al. 2007) and 83% (Araujo, Victora et al. 2006), although none of these increases was statistically significant.

Some studies suggest that there is a dose-response relationship, with increased duration of breastfeeding resulting in a decreased prevalence of obesity in childhood (von Kries, Koletzko et al. 2000, Fallahzadeh, Golestan et al. 2009, Griffiths, Smeeth et al. 2009, Yan, Liu et al. 2014). In contrast, other studies have reported no significant association between breastfeeding or its duration and obesity prevention (Burke, Beilin et al. 2005, Al-Qaoud and Prakash 2009, Sabanayagam, Shankar et al. 2009, Vehapoglu, Yazıcı et al. 2014).

One meta-analysis analysed the association between breastfeeding duration and obesity (Yan, Liu et al. 2014). As eligible studies reported different durations, the review categorised breastfeeding duration into less than three months, 3-4.9 months, 5-6.9 months and seven or more months. Those exclusively breastfed for at least seven months had a 21% decrease in the risk of childhood obesity, whilst those fed for less than three months only showed a 10% decrease. They concluded that the duration of breastfeeding was associated with a decreased likelihood of childhood obesity and reported a stepwise gradient of decreasing risk with increasing duration of breastfeeding.
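
As an aside on interpreting these figures (an assumption on our part, since the measurement scale is not restated above): meta-analyses of this kind typically pool adjusted odds ratios (ORs), and the percentage decreases quoted can be read as

\[ \text{risk decrease (\%)} = (1 - \text{OR}) \times 100, \]

so the 21% decrease for those breastfed for at least seven months corresponds to a pooled OR of approximately 0.79, and the 10% decrease to an OR of approximately 0.90.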

Single studies report a significant protective effect against childhood obesity when breastfeeding continues for at least one to three months (Goldfield, Paluch et al. 2006), three months (Twells and Newhook 2010), 13-25 weeks (McCrory and Layte 2012), four months (Scholtens, Gehring et al. 2007, Griffiths, Smeeth et al. 2009, Chivers, Hands et al. 2010), nine months (Nelson and Sethi 2005), 12 months (Burke, Beilin et al. 2005) and two or more years (Rathnayake, Satchithananthan et al. 2013). However, differences in study design make it difficult to compare findings directly, as the comparator groups can be formula-fed babies or babies breastfed for short durations.

For studies investigating the impact of breastfeeding for at least six months on childhood obesity, the comparator group can be either newborns breastfed for less than six months (i.e. mixed feeding of variable durations) or newborns exclusively formula-fed. Additionally, the age of the children being assessed also differs between studies. When comparing those breastfed for at least six months with those breastfed for less than six months, studies report a reduction in obesity risk of 60% when assessing two year olds (Weyermann, Rothenbacher et al. 2006), 54% and 43% in four year olds (Komatsu, Yorifuji et al. 2009, Simon, Souza et al. 2009), and 67% in six year olds (Thorsdottir, Gunnarsdottir et al. 2003). This suggests that the age of assessment affects the degree of risk reduction observed. However, when comparing against formula-fed newborns, studies report reductions of 14%, 28% and 67% for three year olds (Poulton and Williams 2001, Armstrong, Reilly et al. 2002, Taveras, Rifas-Shiman et al. 2006), 6% for four year olds (Moschonis, Grammatikaki et al. 2008), 45% for seven year olds (Yamakawa, Yorifuji et al. 2013), 60% for nine year olds (Toschke, Martin et al. 2007), 64% for 11 year olds (Poulton and Williams 2001), 21% for 21 year olds (Poulton and Williams 2001) and 6% for 45 year olds (Michels, Willett et al. 2007). These data suggest that observing adults to determine the impact of breastfeeding on obesity is not advisable.

Only one study reported an increased risk of obesity for newborns breastfed more than six months compared with formula-fed newborns, reporting a non-significant 40% increased risk of obesity in nine year olds (Toschke, Martin et al. 2007).

Interestingly, very few studies detailed, for those breastfed for at least six months, whether feeding over that period was exclusive breastfeeding or mixed. Only two studies (Simon, Souza et al. 2009, Yamakawa, Yorifuji et al. 2013) reported on exclusive breastfeeding. There is evidence that exclusive breastfeeding also results in a decreased prevalence of obesity in childhood (Fallahzadeh, Golestan et al. 2009, Simon, Souza et al. 2009, Lefebvre and John 2014). Mayer-Davis et al (2006) compared exclusively breastfed newborns with exclusively formula-fed newborns and found that the breastfed children were significantly less likely to be overweight (34%), and that the results were not affected by maternal weight or diabetes status (Mayer-Davis, Rifas-Shiman et al. 2006).

When exploring the differences between studies that defined breastfeeding as “never – ever” and those reporting “exposure” to breastfeeding (implying mixed feeding practices of different types), a systematic review found a reduced likelihood of obesity of 20% in the exclusive feeding group and of 27% in the mixed group (Yan, Liu et al. 2014). This was supported by another review comparing “ever” breastfed with “exclusively breastfed for a specific number of months”, the latter showing a 27% decreased risk compared with 21% for the former (Horta and Victora 2013). That review postulated that if there is no critical window effect, but rather a cumulative effect of breastfeeding, studies comparing ever versus never breastfed subjects will tend to underestimate any association.

Any observed association between breastfeeding and later obesity does not prove causality (Butte 2001). There are many potential confounders of the relationship, including geography, social deprivation, parental weight status, smoking, marital status and education, ethnicity, gender, number of hospital admissions during the early stages of life, diet, sleep duration and physical activity. Whilst a number of studies discuss their impact, very few actually control for these factors in their analyses.

The issue of geography is a potential confounder of any association between breastfeeding and obesity. In high-income countries, non-breastfed babies usually receive formula, whereas many non-breastfed infants in low- and middle-income countries receive whole or diluted animal milk (Horta and Victora 2013). However, Hancox et al reported that whilst breastfeeding slightly reduced the risk of obesity, there was no evidence that the association between breastfeeding and body mass index (BMI) differed between lower-income and higher-income countries (Hancox, Stewart et al. 2014).

The socio-economic status of the mother may also contribute to the child’s weight status in childhood. The World Health Organisation (WHO) review found that studies which also controlled for social deprivation reported a 37% decrease in the risk of obesity, a further 3% beyond the 34% reported by studies which did not (Horta and Victora 2013). Armstrong et al reported that the reduced prevalence of obesity in breastfed children persisted after adjustment for socio-economic status, birth weight and gender (a 30% reduction) (Armstrong, Reilly et al. 2002).

The impact of gender was also prominent: Nelson et al reported that breastfeeding for at least nine months reduced the risk of being overweight more in girls than in boys (Nelson and Sethi 2005). A similar gender difference was reported by Panagiotakos et al, with girls breastfed for more than three months showing a larger reduction in obesity risk than boys (Panagiotakos, Papadimitriou et al. 2008).

Sibling studies have been unable to rule out the impact of confounders on childhood obesity. One study which controlled for confounders as part of a sibling design reported that adolescent BMIs were 0.39 standard deviations lower in the breastfed sibling than in the non-breastfed sibling (Metzger and McDade 2010). However, another study of sibling pairs was unable to demonstrate a protective effect of breastfeeding (Nelson and Sethi 2005).

As well as the lack of control for confounders, other study limitations may affect the results reported. Definitions of obesity vary from a BMI at or above the 90th percentile to one at or above the 97th percentile, making any direct comparison of outcomes problematic. In their meta-analysis, Yan et al investigated the association between breastfeeding and obesity stratified by the definition of obesity, and found a lower adjusted odds ratio for the BMI ≥ 97th percentile group (a 25% reduction) than for the BMI ≥ 95th percentile group (a 22% reduction) (Yan, Liu et al. 2014).

Most studies also varied in the age at which obesity was measured. As the definition of childhood can extend from one year olds to adolescents, there is an increasing influence of external and genetic factors on a child’s weight as potential confounders for any weight gain. When Scholtens et al looked at children breastfed for at least four months, they reported a significantly lower BMI at age one compared with children not breastfed, but by age seven this difference was no longer significant (Scholtens, Gehring et al. 2007). The WHO review reported a 38% decreased risk of obesity when assessing 10-19 year olds, compared with 23% for 1-9 year olds and 11% for adults aged 20 and over, suggesting that the endpoint chosen for analysis is critical in determining the apparent impact of breastfeeding on obesity at various stages of childhood (Horta and Victora 2013).

Finally, study design and follow-up can affect the findings, as high dropout rates undermine long-term follow-up, and the methodology used to analyse the data can produce unreliable results. Beyerlein et al investigated the impact of breastfeeding on children’s BMI in Germany but were unable to draw any firm conclusions, as the results differed according to whether linear or logistic regression was used (Beyerlein, Toschke et al. 2008).

To summarise, there is a wealth of literature reporting a decreased risk of childhood obesity for newborns who are breastfed, although there is limited literature specifically examining those breastfed for at least six months. However, most studies cannot completely control for confounding maternal, child, cultural, genetic and environmental factors. The WHO recommends that infants should be exclusively breastfed for the first six months, with breastfeeding continued alongside complementary foods up to two years of age (World Health Organisation 2015). Following close examination of the literature, we would conclude that breastfeeding for at least six months should reduce the risk of obesity in early childhood, although the protective effect may be lost in later childhood depending upon the child’s upbringing.

References

Aguilar Cordero, M. J., A. M. Sanchez Lopez, N. Madrid Banos, N. Mur Villar, M. Exposito Ruiz and E. Hermoso Rodriguez (2014). “[Breastfeeding for the prevention of overweight and obesity in children and teenagers; systematic review].” Nutr Hosp 31(2): 606-620.

Al-Qaoud, N. and P. Prakash (2009). “Breastfeeding and obesity among Kuwaiti preschool children.” Med Princ Pract 18(2): 111-117.

Araujo, C. L., C. G. Victora, P. C. Hallal and D. P. Gigante (2006). “Breastfeeding and overweight in childhood: evidence from the Pelotas 1993 birth cohort study.” Int J Obes (Lond) 30(3): 500-506.

Armstrong, J., J. J. Reilly and the Child Health Information Team (2002). “Breastfeeding and lowering the risk of childhood obesity.” Lancet 359(9322): 2003-2004.

Beyerlein, A., A. M. Toschke and R. von Kries (2008). “Breastfeeding and childhood obesity: shift of the entire BMI distribution or only the upper parts?” Obesity (Silver Spring) 16(12): 2730-2733.

Burdette, H. L., R. C. Whitaker, W. C. Hall and S. R. Daniels (2006). “Breastfeeding, introduction of complementary foods, and adiposity at 5 y of age.” Am J Clin Nutr 83(3): 550-558.

Burke, V., L. J. Beilin, K. Simmer, W. H. Oddy, K. V. Blake, D. Doherty, G. E. Kendall, J. P. Newnham, L. I. Landau and F. J. Stanley (2005). “Breastfeeding and overweight: longitudinal analysis in an Australian birth cohort.” J Pediatr 147(1): 56-61.

Butte, N. F. (2001). “The role of breastfeeding in obesity.” Pediatr Clin North Am 48(1): 189-198.

Buyken, A. E., N. Karaolis-Danckert, A. Gunther and M. Kersting (2008). “Effects of breastfeeding on health outcomes in childhood: beyond dose-response relations.” Am J Clin Nutr 87(6): 1964-1965; author reply 1965-1966.

Chivers, P., B. Hands, H. Parker, M. Bulsara, L. J. Beilin, G. E. Kendall and W. H. Oddy (2010). “Body mass index, adiposity rebound and early feeding in a longitudinal cohort (Raine Study).” Int J Obes (Lond) 34(7): 1169-1176.

Fallahzadeh, H., M. Golestan, T. Rezvanian and Z. Ghasemian (2009). “Breast-feeding history and overweight in 11 to 13-year-old children in Iran.” World J Pediatr 5(1): 36-41.

Goldfield, G. S., R. Paluch, K. Keniray, S. Hadjiyannakis, A. B. Lumb and K. Adamo (2006). “Effects of breastfeeding on weight changes in family-based pediatric obesity treatment.” J Dev Behav Pediatr 27(2): 93-97.

Griffiths, L. J., L. Smeeth, S. S. Hawkins, T. J. Cole and C. Dezateux (2009). “Effects of infant feeding practice on weight gain from birth to 3 years.” Arch Dis Child 94(8): 577-582.

Hancox, R. J., A. W. Stewart, I. Braithwaite, R. Beasley, R. Murphy, E. A. Mitchell and I. P. T. S. Group (2014). “Association between breastfeeding and body mass index at age 6-7 years in an international survey.” Pediatr Obes.

Hernell, O. (2011). “Human milk vs. cow’s milk and the evolution of infant formulas.” Nestle Nutr Workshop Ser Pediatr Program 67: 17-28.

Horta, B. L. and C. G. Victora (2013). Long-term effects of breastfeeding: a systematic review, World Health Organisation: 74.

Huus, K., J. F. Ludvigsson, K. Enskar and J. Ludvigsson (2008). “Exclusive breastfeeding of Swedish children and its possible influence on the development of obesity: a prospective cohort study.” BMC Pediatr 8: 42.

Jing, H., H. Xu, J. Wan, Y. Yang, H. Ding, M. Chen, L. Li, P. Lv, J. Hu and J. Yang (2014). “Effect of breastfeeding on childhood BMI and obesity: the China Family Panel Studies.” Medicine (Baltimore) 93(10): e55.

Komatsu, H., T. Yorifuji, T. Iwase, A. Sasaki, S. Takao and H. Doi (2009). “Impact of breastfeeding on body weight of preschool children in a rural area of Japan: population-based cross-sectional study.” Acta Med Okayama 63(1): 49-55.

Kwok, M. K., C. M. Schooling, T. H. Lam and G. M. Leung (2010). “Does breastfeeding protect against childhood overweight? Hong Kong’s ‘Children of 1997’ birth cohort.” Int J Epidemiol 39(1): 297-305.

Labayen, I., J. R. Ruiz, F. B. Ortega, H. M. Loit, J. Harro, I. Villa, T. Veidebaum and M. Sjostrom (2012). “Exclusive breastfeeding duration and cardiorespiratory fitness in children and adolescents.” Am J Clin Nutr 95(2): 498-505.

Lefebvre, C. M. and R. M. John (2014). “The effect of breastfeeding on childhood overweight and obesity: a systematic review of the literature.” J Am Assoc Nurse Pract 26(7): 386-401.

Marseglia, L., S. Manti, G. D’Angelo, C. Cuppari, V. Salpietro, M. Filippelli, A. Trovato, E. Gitto, C. Salpietro and T. Arrigo (2015). “Obesity and breastfeeding: The strength of association.” Women Birth.

Mayer-Davis, E. J., S. L. Rifas-Shiman, L. Zhou, F. B. Hu, G. A. Colditz and M. W. Gillman (2006). “Breast-feeding and risk for childhood obesity: does maternal diabetes or obesity status matter?” Diabetes Care 29(10): 2231-2237.

McCrory, C. and R. Layte (2012). “Breastfeeding and risk of overweight and obesity at nine-years of age.” Soc Sci Med 75(2): 323-330.

Metzger, M. W. and T. W. McDade (2010). “Breastfeeding as obesity prevention in the United States: a sibling difference model.” Am J Hum Biol 22(3): 291-296.

Michels, K. B., W. C. Willett, B. I. Graubard, R. L. Vaidya, M. M. Cantwell, L. B. Sansbury and M. R. Forman (2007). “A longitudinal study of infant feeding and obesity throughout life course.” Int J Obes (Lond) 31(7): 1078-1085.

Moschonis, G., E. Grammatikaki and Y. Manios (2008). “Perinatal predictors of overweight at infancy and preschool childhood: the GENESIS study.” Int J Obes (Lond) 32(1): 39-47.

Nelson, A. and S. Sethi (2005). “The breastfeeding experiences of Canadian teenage mothers.” J Obstet Gynecol Neonatal Nurs 34(5): 615-624.

Neutzling, M. B., P. R. Hallal, C. L. Araujo, B. L. Horta, M. e. F. Vieira, A. M. Menezes and C. G. Victora (2009). “Infant feeding and obesity at 11 years: prospective birth cohort study.” Int J Pediatr Obes 4(3): 143-149.

Novaes, J. F., J. A. Lamounier, E. A. Colosimo, S. C. Franceschini and S. E. Priore (2012). “Breastfeeding and obesity in Brazilian children.” Eur J Public Health 22(3): 383-389.

Panagiotakos, D. B., A. Papadimitriou, M. B. Anthracopoulos, M. Konstantinidou, G. Antonogeorgos, A. Fretzayas and K. N. Priftis (2008). “Birthweight, breast-feeding, parental weight and prevalence of obesity in schoolchildren aged 10-12 years, in Greece; the Physical Activity, Nutrition and Allergies in Children Examined in Athens (PANACEA) study.” Pediatr Int 50(4): 563-568.

Poulton, R. and S. Williams (2001). “Breastfeeding and risk of overweight.” JAMA 286(12): 1449-1450.

Rathnayake, K. M., A. Satchithananthan, S. Mahamithawa and R. Jayawardena (2013). “Early life predictors of preschool overweight and obesity: a case-control study in Sri Lanka.” BMC Public Health 13: 994.

Sabanayagam, C., A. Shankar, Y. S. Chong, T. Y. Wong and S. M. Saw (2009). “Breast-feeding and overweight in Singapore school children.” Pediatr Int 51(5): 650-656.

Scholtens, S., U. Gehring, B. Brunekreef, H. A. Smit, J. C. de Jongste, M. Kerkhof, J. Gerritsen and A. H. Wijga (2007). “Breastfeeding, weight gain in infancy, and overweight at seven years of age: the prevention and incidence of asthma and mite allergy birth cohort study.” Am J Epidemiol 165(8): 919-926.

Simon, V. G., J. M. Souza and S. B. Souza (2009). “Breastfeeding, complementary feeding, overweight and obesity in pre-school children.” Rev Saude Publica 43(1): 60-69.

Taveras, E. M., S. L. Rifas-Shiman, K. S. Scanlon, L. M. Grummer-Strawn, B. Sherry and M. W. Gillman (2006). “To what extent is the protective effect of breastfeeding on future overweight explained by decreased maternal feeding restriction?” Pediatrics 118(6): 2341-2348.

Thorsdottir, I., I. Gunnarsdottir and G. I. Palsson (2003). “Birth weight, growth and feeding in infancy: relation to serum lipid concentration in 12-month-old infants.” Eur J Clin Nutr 57(11): 1479-1485.

Toschke, A. M., R. M. Martin, R. von Kries, J. Wells, G. D. Smith and A. R. Ness (2007). “Infant feeding method and obesity: body mass index and dual-energy X-ray absorptiometry measurements at 9-10 y of age from the Avon Longitudinal Study of Parents and Children (ALSPAC).” Am J Clin Nutr 85(6): 1578-1585.

Twells, L. and L. A. Newhook (2010). “Can exclusive breastfeeding reduce the likelihood of childhood obesity in some regions of Canada?” Can J Public Health 101(1): 36-39.

Vehapoglu, A., M. Yazıcı, A. D. Demir, S. Turkmen, M. Nursoy and E. Ozkaya (2014). “Early infant feeding practice and childhood obesity: the relation of breast-feeding and timing of solid food introduction with childhood obesity.” J Pediatr Endocrinol Metab 27(11-12): 1181-1187.

von Kries, R., B. Koletzko, T. Sauerwald and E. von Mutius (2000). “Does breast-feeding protect against childhood obesity?” Adv Exp Med Biol 478: 29-39.

Weyermann, M., D. Rothenbacher and H. Brenner (2006). “Duration of breastfeeding and risk of overweight in childhood: a prospective birth cohort study from Germany.” Int J Obes (Lond) 30(8): 1281-1287.

World Health Organisation. (2015). “Breastfeeding.” Retrieved March 2015 from http://www.who.int/topics/breastfeeding/en/.

Yamakawa, M., T. Yorifuji, S. Inoue, T. Kato and H. Doi (2013). “Breastfeeding and obesity among schoolchildren: a nationwide longitudinal survey in Japan.” JAMA Pediatr 167(10): 919-925.

Yan, J., L. Liu, Y. Zhu, G. Huang and P. P. Wang (2014). “The association between breastfeeding and childhood obesity: a meta-analysis.” BMC Public Health 14: 1267.

Deathography Essay Example

This work was produced by one of our professional writers as a learning aid to help you with your studies

When I was five, my grandmother passed away in hospital just before Christmas. She had been in hospital for some time and was very elderly. As my sisters and I were at school, we could only visit the hospital at the weekend, whereas my mother and father would visit during the week. At weekends my sisters and I would be given the choice of going to the hospital with our father to visit, or staying at home. I often chose to stay at home. I understood that my grandmother was old; however, I did not understand how ill she was.

When my grandmother passed away, I felt guilty that I had not chosen to visit her. Although I knew that my grandmother had been ill for some time, I had not understood that she was coming to the end of her life, and this had also not been explained to me by the adults. I knew that death was irreversible; however, because my parents sought to maintain normality as far as possible, her death did not impact on my daily routine and my life continued as usual, without any major interruptions.

In the week leading up to my grandmother’s funeral I saw my father crying and remember that seeing my father cry made me feel both frightened and upset. I felt upset because I had never seen my father cry before, and I realised that he was suffering greatly. As a result of this, I tried to behave well at all times as I was worried that my actions would cause my father to cry again. I felt frightened because although my grandmother’s death had not had a large impact on myself, I could see that it was having a profound effect on those that I cared about. As I was only a small child, this was the first time that I had seen such a depth of emotion in those close to me, and I was not sure how to react to this.

Research has demonstrated that children, even very young children, are capable of grieving (Melhern et al, 2011). It is important to note that there are differences in the way that adults and children grieve. In particular, children are likely to show their grief in less direct ways than adults, and can move in and out of grief, almost grieving in bursts (Melhern et al, 2011). It is also important to realise that the child’s age, emotional maturity, the circumstances of the loss, and the closeness of the relationship between the child and the person who has died are all important factors (Dowdney, 2008).

Piaget’s research demonstrated that toddlers and infants understand events in terms of direct experience, and that the dependable presence and emotional expression of loved people are more important than the language used (Piaget, 2013). Studies which have applied Piaget’s work have demonstrated that even children who cannot yet communicate verbally are aware of the distress of adults around them and of the absence of a loved person (Himebauch et al, 2008). It can therefore be argued that not telling young children about the death of a family member will not protect them from the loss as intended, but will only prevent discussion.

This fits with the work of Piaget, who found that young children (between the ages of three and six) do not think in logical sequences and therefore have illogical explanations for events (Piaget, 2013). This is reflected in the difficulty they may have grasping that death is not reversible (Brown et al, 2008). Families often find it easier to help children after the loss of a grandparent, as grandparents are often in an age group where death is more common (Brown et al, 2008). In my case, I did not have daily interaction with my grandmother due to geographical distance; however, we did have regular contact at weekends. This may have meant that there were fewer obvious changes and reminders of her absence.

This is clearly not applicable to all children and cultures, where the grandparents may play a central role in the child’s life and in the family (Salloum, 2008). In these cases, the effect of the loss may be apparent as regression or behavioural problems in the child (Salloum, 2008). Ongoing discussion of the loss can provide the opportunity for children to reinterpret the death over the years as their cognitive comprehension grows (Salloum, 2008). Research has clearly demonstrated that the lack of a well-structured support system during the mourning period can lead to severe disruption of childhood development (Bonanno, 2004).

One study conducted in the United States found that out of 270 children taken to counselling after the death of a loved one and who lacked a well-structured support system, 66% demonstrated aggressive behaviour, 44% lacked social skills, and 18% had delayed cognitive, fine and gross motor development (McClatchy et al, 2009). However, it is not possible to determine from the study whether these children had developmental difficulties before counselling. If so, the quoted percentages may not be a true reflection of the impact of lacking a well-structured support system.

There is also a clear impact on the academic abilities of children who have suffered loss (Shear & Shair, 2005). In addition, children often have higher levels of absenteeism from school when a close relative is ill, which could have an impact on their academic performance. This impact on academic performance is often seen in children who have witnessed a traumatic death and subsequently develop post-traumatic stress disorder (Shear & Shair, 2005). I believe that my parents made considerable efforts not to disrupt the daily routines of my sisters and me, particularly around school. I think that this ensured that our academic performance did not suffer as much as it might otherwise have done.

It is clear that children’s understanding of death develops in parallel with cognitive maturation throughout childhood (Cohen, 2011). The concept of death may develop at different rates in different children, but the developmental sequence seems to be the same (Cohen, 2011). For example, children below the age of five do not understand that death is irreversible, and will demonstrate this by asking when the person is coming back (Salloum, 2008). As a result, children at this age have difficulty understanding abstract explanations of death, and explanations such as saying that the person has gone to sleep may result in a fear of sleep (Cohen, 2011). It is therefore clear that although the concept of death is not fully developed in small children, there is little doubt that they still react strongly to loss at this age (Cohen, 2011).

This does not apply directly to my experience of loss, as I was slightly older; however, it is clear that loss at even a very young age can leave a lasting impression on children. Between the ages of four and six, it is thought that children begin to develop a biological understanding of life (Crenshaw, 2005). An example of this is knowing that parts of the body work to sustain life. I feel that this is true of my experience – I knew my grandmother was in hospital because she was ill; however, I did not understand the seriousness of her illness, or that she had been in hospital for a considerable length of time.

Children from five to ten years of age develop an understanding of death as an irreversible process (Currier et al, 2008). Concrete thinking is seen in children until the age of 10, and children of this age need concrete expressions, such as pictures or visits to graves or memorials, to support their grief (Currier et al, 2008). When my grandmother died, I knew that it was an irreversible event. My parents chose not to take me to the funeral, which I feel was a wise decision. I believe that although I knew my grandmother had died and that this was not a reversible event, I would have found it distressing to see my parents and other adults so openly upset. Research has also found that if children do attend funerals, it should be with someone who can provide emotional support (Currier et al, 2008), and I feel that this would have been an unfair demand on my parents at the funeral, particularly as I was so young.

As I grew older I found that accompanying my parents to the graves of my grandparents, particularly my grandmother, helped me to express my feelings and to ask questions. This is supported by literature which states that visiting graves or memorials can offer children or young adolescents a channel for communicating about the deceased person, which can help them to understand the circumstances of the loss and can also act as an opportunity to express their feelings (Paris et al, 2009). I found that as I matured, I could talk about my grandparents away from their graves, as I came to realise that this would not upset my parents. As a result of this, we were able to talk much more freely and openly about their lives.

My grandmother was the only grandparent that I had known, as my other grandparents had died before I was born. As my grandmother had died when I was relatively young, I have no substantial memories of her. Throughout my childhood this did not have a large impact on my beliefs and attitudes, as I believe that I did not possess the emotional maturity to reflect on the changes this had made to my life, and the impact that her death may have had on those around me. As I grew older, I became aware of the effects of loss on those around me, and in turn this altered my beliefs about life. For example, as I matured I became aware that death can happen at any age and so I was more appreciative of the roles that relatives and friends played in my life, and did not take their presence for granted.

This changed when I was at secondary school and I came to appreciate the roles and relationships that grandparents had in the lives of my peers. I felt, and still feel, that I have missed out on these key relationships, particularly as my parents often comment on how similar I am in both personality and appearance to my grandmother on my mother’s side. As I grew older, particularly in adolescence, I came to value relationships with relatives and friends in a different way from childhood, and I think that experiencing loss early in life was a large part of this. I believe that it is important to work hard to overcome obstacles to maintaining relationships, such as geographical distance and cultural differences, particularly as there is now greater mobility for employment.

In conclusion, although the death of my grandmother was perhaps not a shock to the adults in my life, I had not grasped how ill she was, nor had it been explained to me by adults close to me. As a result, I felt guilty because I had not chosen to visit her in the hospital when offered the opportunity. However, as we had always lived quite far apart, there was no real impact on my daily life; research has shown that disruption of daily life is particularly difficult for children going through grief (Bonanno, 2004).

There is clear evidence that experiencing death, particularly a traumatic death, can have a profound effect on childhood, and that a well-established support system is key (Brown et al, 2008). I believe that I had a well-established support system, and this allowed me to adapt to life without my grandmother without great levels of difficulty. Whilst I wish I could have had a longer relationship with my grandmother and have known my other grandparents, I believe it is important not to dwell on things that cannot be changed. Instead I invest my energy in building and maintaining relationships with friends and family. I believe that this attitude comes with maturity and experience of loss, and that small children may not have the emotional capacity to understand this.

References

Bonanno, G. (2004). Loss, trauma, and human resilience: have we underestimated the human capacity to thrive after extremely aversive events? American Psychologist, 59(1), pp.20-28.

Brown, E., Amaya-Jackson, L., Cohen, J., Handel, S., Zatta, E. (2008). Childhood traumatic grief: a multi-empirical examination of the construct and its correlates. Death Studies, 32(10), pp.323-326.

Cohen, J. (2011). Supporting children with traumatic grief: what educators need to know. Developmental and Educational Psychology, 32(2), pp.117-131.

Crenshaw, D. (2005). Clinical tools to facilitate treatment of childhood traumatic grief. Journal of Death and Dying, 51(3), pp.239-255.

Currier, J., Neimeyer, R., Berman, J. (2008). The effectiveness of psychotherapeutic interventions for bereaved persons: A comprehensive quantitative review. Psychological Bulletin, 134(5), pp. 648-661.

Dowdney, L. (2008). Children bereaved by parent or sibling death. Psychiatry, 7(6), pp.270-275.

Himebauch, A., Arnold, R., May, C. (2008). Grief in children and developmental concepts of death. Journal of Palliative Medicine, 11(2), pp.242-244.

McClatchy, I., Vonk, E., Palardy, G. (2009). The prevalence of childhood traumatic grief – a comparison of violent/sudden and expected loss. Journal of Death and Dying, 59(4), pp.305-323.

Melhern, N., Porta, G., Shamseddeen, W., Walker, M., Brent, D. (2011). Grief in children and adolescents bereaved by sudden parental death. Archives of General Psychiatry, 68(9), pp.911-919.

Paris, M., Carter, B., Day, S., Armsworth, M. (2009). Grief and trauma in children after the death of a sibling. Journal of Child and Adolescent Trauma, 2(2), pp.71-80.

Piaget, J. (2013). The Construction of Reality in the Child. 3rd ed. London: Routledge.

Salloum, A. (2008). Evaluation of individual and group grief and trauma interventions for children post-disaster. Journal of Clinical Child and Adolescent Psychology, 37(3), pp. 495-507.

Shear, K., and Shair, H. (2005). Attachment, loss, and complicated grief. Developmental Psychobiology, 47(3), pp.253-267.

Collaborative Care: Australian Maternity Care

This work was produced by one of our professional writers as a learning aid to help you with your studies

In the context of maternity care, ‘collaboration’ is defined as a shared partnership between a birthing woman, midwives, doctors and other members of a multidisciplinary team (National Health & Medical Research Council, 2010). Collaborative practice is based on the philosophy that multidisciplinary teams can deliver care superior to that which could be provided by any one profession alone (National Health & Medical Research Council, 2010). Indeed, there is evidence to suggest that collaborative maternity practice does improve outcomes for women, including both clinical outcomes and consumer satisfaction with care (Hastie & Fahy, 2011). Collaborative practice is particularly important in Australian rural and remote maternity settings, which are characterised by fragmented, discontinuous care provision (Downe et al., 2010). As such, both the Code of Ethics for Midwives in Australia (for midwives and obstetric nurses) and the Collaborative Maternity Care Statement (for obstetricians and other doctors) require that a collaborative model of care be adopted in Australian maternity settings. However, inconsistencies between and among midwives and doctors about the definition of ‘collaboration’, and subsequent ineffective collaborative practice, remain key causes of adverse outcomes in maternity settings in Australia (Hastie & Fahy, 2011; Heatley & Kruske, 2011). This paper provides a critical analysis of collaborative practice in Australian rural and remote maternity settings.

Rural and remote maternity care in Australia

It is estimated that one-third of birthing women in Australia live outside of major metropolitan centres – defined for the purpose of this paper as ‘rural and remote regions’ (National Health & Medical Research Council, 2010). However, the number of facilities offering maternity care to women in these regions was estimated at just 156 in 2007, and is declining (Australian Government Department of Health, 2011). Australian research suggests that the decreasing number of rural and remote maternity services is resulting in more women having high-risk, unplanned and unassisted births outside of medicalised maternity services (Francis et al., 2012; McLelland et al., 2013); indeed, one recent study drew a direct correlation between these two factors (Kildea et al., 2015). Additionally, statistics suggest that both maternal and perinatal mortality rates in Australia are highest in rural and remote regions (Australian Government Department of Health, 2011).

High perinatal mortality rates and lack of services in rural and remote communities mean that many rural and remote women are transferred to metropolitan centres, often mandatorily, for birth (Josif et al., 2014). This system has resulted in fragmented, discontinuous care for many rural and remote women – which is itself a poor outcome (National Health & Medical Research Council, 2010; Sandall et al., 2015). Many women find such models of care to be significantly disempowering, which again may result in poorer outcomes (Josif et al., 2014). Indeed, many women, and particularly Aboriginal women, may resist engaging with medicalised maternity services to avoid being transferred ‘off-country’ for birth (Josif et al., 2014). Furthermore, those women who are transferred ‘off-country’ for birth bear a significant financial, social and cultural burden (Dunbar, 2011; Evans et al., 2011; Hoang & Le, 2013).

Australian maternity services reform

In response to these issues, in 2009 the Australian government commenced a major reform of maternity care. This reform included attempts to shift maternity services provided to rural and remote women to more collaborative, continuous, community-centred models (Francis et al., 2012). These new models of care require midwives to work collaboratively with general practitioners, obstetricians and rural doctors to care for a rural or remote woman in her own community to the greatest extent possible (McIntyre et al., 2012a). Evidence suggests that rural and remote women desire to be cared for in their local communities provided the maternity services offered are safe (Hoang & Le, 2013). Indeed, there is evidence to suggest that women, and particularly Aboriginal women, who birth within their communities have an increased likelihood of positive outcomes (Commonwealth of Australia, 2009). However, the National Guidance on Collaborative Maternity Care, which resulted from the government reforms, notes there are a number of unique and significant challenges to achieving collaborative practice in rural and remote community settings (National Health & Medical Research Council, 2010).

Collaborative care in Australian maternity settings – challenges and complexities

The fundamental aim of collaborative services in Australia is the provision of ‘woman-centred care’, where women are empowered to be active partners in the provision of their care (National Health & Medical Research Council, 2010). It is well-established that the delivery of woman-centred care in a maternity setting produces the best outcomes, in terms of both clinical outcomes and consumer satisfaction with care (Pairman et al., 2006). In a recent Australian study, Jenkins et al. (2015) suggest that collaboration is fundamental in the achievement of woman-centred care in rural and remote settings in terms of continuity of care – including consistency in communication between care providers – across often vast geographical regions. However, conflicting definitions and interpretations of the concept of ‘woman-centred care’ between midwives and doctors are a key barrier to achieving collaborative practice in Australian maternity settings (Lane, 2006). These problems are magnified in rural and remote settings, where transfers of care between midwives and doctors often occur abruptly when women are transported ‘off-country’ to deliver (Lane, 2012).

Differences in understandings of the concept of ‘woman-centred care’ between midwives and doctors – and, therefore, impairments to effective collaboration – are underpinned by midwives’ and doctors’ differing perceptions of ‘risk’ in childbirth. Indeed, a study by Beasley et al. (2012) identified incompatible perceptions of best-practice strategies to mitigate risk as the key factor underpinning the lack of collaborative practice between midwives and doctors in Australian maternity settings. Whilst midwives focus on normalcy, wellness and physiology in birth, doctors place an emphasis on intervention; both are valid approaches to risk mitigation in birth, but they are fundamentally contradictory (Lane, 2006; Downe et al., 2010; Beasley et al., 2012). These differing philosophies of care have resulted in increasing tensions in maternity settings, exacerbated by sensationalist media reporting, particularly following the Senate Inquiries into Media Reform of 2008/09 (Beasley et al., 2012). The concept of risk is particularly important in rural and remote settings, given that the decision to transfer a woman ‘off-country’ is often made on the basis of risk.

The reforms to the Australian maternity system – including the introduction of the Nurses and Midwives Act 2009 – have resulted in significant increases to midwives’ scope of practice and autonomy (National Health & Medical Research Council, 2010; Beasley et al., 2012). This is particularly important in rural settings, where midwives are often required to be ‘specialist generalists’ with a diverse suite of clinical skills (Gleeson, 2015). However, this expansion in midwives’ scope has further challenged the achievement of collaborative practice in Australian maternity settings. Tensions have arisen because doctors often perceive themselves to be solely accountable for the outcomes of maternity care and, therefore, legally vulnerable when practicing under midwifery-led models of care focusing on risk-mitigation strategies to which they may be unaccustomed or opposed (Lane, 2006; Beasley et al., 2012). These issues are particularly obvious in rural and remote maternity settings, where the referral of the care of birthing women from midwives to doctors may occur primarily during obstetric emergencies. Doctors in Australia have been particularly vocal about the fact that there is poor evidence to support the safety of midwifery-led models of care, including in rural and remote maternity settings (Boxall & Flitcroft, 2007).

The expansion in midwives’ scope of practice has also challenged the achievement of collaborative practice in Australian maternity settings in other ways. Australian research suggests doctors fear that the expansion of midwives’ scope will result in their becoming redundant in, and therefore excluded from, maternity settings, and that a decline in clinical outcomes will result (Lane, 2012). As noted by Barclay and Tracy (2010), despite the recent increases to midwives’ scope of practice, both midwives and doctors continue to have a distinct scope in terms of caring for a birthing woman, and both remain legally bound to practice within this scope. However, many doctors continue to oppose the reforms to the maternity system on the basis of changes in midwives’ scope – and also because these reforms may not be evidence based, may fail to meet the needs of women (and particularly the unique needs of rural and remote women), and are driven by service providers rather than consumers (Boxall & Flitcroft, 2007; McIntyre et al., 2012b; Hoang & Le, 2013). Again, doctors’ opposition to changes in midwives’ scope significantly impairs the achievement of collaborative practice in Australian maternity settings.

These issues are further complicated by the fact that Commonwealth law now requires midwives practicing in Australia to have ‘collaborative arrangements’ with a medical practitioner if they are to receive Medicare-provider status (Barclay & Tracy, 2010). This particularly affects private-practice midwives practicing in rural and remote areas of Australia. However, as noted by Lane (2012), such legislation – which effectively forces midwives and doctors into a collaborative relationship – is fundamentally inconsistent with the concept of collaboration as a professional relationship based on equity, trust and respect. Further, these reforms, which impose collaboration and compel midwives and doctors to form collaborative relationships, are unworkable in many rural and remote maternity settings. Often, midwives practicing in these settings work with doctors who are fly-in fly-out locums, who are on temporary placements or who are located in regional centres many hundreds of kilometres away, making the establishment of genuine collaborative relationships a highly complex process (Barclay & Tracy, 2010).

Collaborative care in Australian maternity settings – opportunities and achievement

Despite these significant issues, however, research suggests that collaboration can be achieved in Australian rural and remote maternity settings. The first step in achieving collaboration in this context is for both midwives and doctors to undergo a ‘shift in perception’ with regards to each other’s professional roles and boundaries (Lane, 2006; McIntyre et al., 2012a). This will particularly involve doctors increasingly accepting midwives’ expanding role in rural and remote maternity care provision. Rural and remote settings in particular provide positive examples of midwifery-led models delivering maternity services which are both safe and effective (McIntyre et al., 2012a); indeed, one study concludes that shared but midwifery-led models are the best way to achieve continuity of care in rural and remote maternity settings (Francis et al., 2012). Therefore, evidence from these models may be used to bolster doctors’ confidence in the efficacy of midwifery-led approaches to maternity care. However, for this to be achieved, incompatibilities in care philosophies between midwives and doctors must be overcome. This may commence with midwives and doctors recognising that both professions share the same basic goal of achieving the best outcomes for women (Lane, 2006).

Communication is also fundamental to the achievement of collaborative practice in Australian maternity settings (National Health & Medical Research Council, 2010). Indeed, Lane (2012) notes that effective communication between midwives and doctors is one of the ‘minimal conditions’ which must be met if collaborative practice in maternity settings is to be achieved. However, there are a range of barriers to effective communication between midwives and doctors in rural and remote maternity settings, the most significant of which is geographical distance. ‘Telehealth’, which involves the use of telecommunication technologies to facilitate communication between clinicians – and particularly those who care for ‘priority consumers’ such as mothers and babies – in geographically diverse regions of Australia, may be useful in promoting collaborative practice in rural and remote maternity settings (Australian Nursing Federation, 2013). The National Health & Medical Research Council (2010) also identifies written documentation – including pregnancy records, care pathways and a transfer/retrieval plan – as important in fostering collaborative practice in rural and remote maternity settings.

Collaboration, or practice based on a shared partnership between a birthing woman, midwives, doctors and other members of a multidisciplinary team, results in improved outcomes for birthing women. As such, codes of practice for both midwives and doctors in Australia require that collaborative practice be utilised in Australian maternity settings. Research evidence suggests that, due to the unique challenges posed by rural and remote maternity settings in Australia, collaborative practice is particularly important in this context. However, in Australia in general – and in rural and remote maternity settings in particular – collaborative practice is both lacking and challenging to achieve. This paper has provided a critical analysis of collaborative practice, with a particular focus on Australian rural and remote maternity settings. It has concluded that whilst it may be challenging to achieve, collaboration in Australian rural and remote maternity settings can – and, indeed, should – be achieved in order to promote the best outcomes for birthing women in these regions.

References

Australian Government Department of Health, (2011), Provision of Maternity Care, accessed 02 October 2015, http://www.health.gov.au/internet/publications/publishing.nsf/Content/pacd-maternityservicesplan-toc~pacd-maternityservicesplan-chapter3#Rural%20and%20remote%20services

Australian Nursing Federation, (2013), Telehealth standards: Registered midwives, accessed 02 October 2015, https://crana.org.au/files/pdfs/Telehealth_Standards_Registered_Midwives.pdf

Barclay, L & Tracy, SK, (2010), Legally binding midwives to doctors is not collaboration, Women & Birth, vol. 23, no. 1, pp. 1-2.

Beasley, S, Ford, N, Tracy, SK & Welsh, AW, (2012), Collaboration in maternity care is achievable and practical, Australia & New Zealand Journal of Obstetrics & Gynaecology, vol. 52, no. 6, pp. 576-581.

Boxall, AM & Flitcroft, K, (2007), From little things, big things grow: A local approach to system-wide maternity services reform in the absence of definitive evidence, Australia & New Zealand Health Policy, vol. 4, no. 1, p. 18.

Commonwealth of Australia, (2009), Improving Maternity Services in Australia: The Report of the Maternity Services Review, accessed 02 October 2015, https://www.health.gov.au/internet/main/publishing.nsf/content/624EF4BED503DB5BCA257BF0001DC83C/$File/Improving%20Maternity%20Services%20in%20Australia%20-%20The%20Report%20of%20the%20Maternity%20Services%20Review.pdf

Downe, S, Finlayson, K & Fleming, A, (2010), Creating a collaborative culture in maternity care, Journal of Midwifery & Women’s Health, vol. 55, no. 3, pp. 250-254.

Dunbar, T, (2011), Aboriginal people’s experiences of health and family services in the Northern Territory, International Journal of Critical Indigenous Studies, vol. 4, no. 2, pp. 2-16.

Evans, R, Veitch, C, Hays, R, Clark, M & Larkins, S, (2011), Rural maternity care and health policy: Parents’ experiences, Australian Journal of Rural Health, vol. 19, no. 6, pp. 306-311.

Francis, K, McLeod, M, McIntyre, M, Mills, J, Miles, M & Bradley, A (2012), Australian rural maternity services: Creating a future or putting the last nail in the coffin?, Australian Journal of Rural Health, vol. 20, no. 5, pp. 281-284.

Gleeson, G (2015), Contemporary midwifery education focusing on maternal emergency skills in remote and isolated areas, Australian Nursing & Midwifery Journal, vol. 22, no. 11, p. 48.

Hastie, C & Fahy, K (2011), Interprofessional collaboration in delivery suite: A qualitative study, Women & Birth, vol. 24, no. 2, pp. 72-79.

Heatley, M & Kruske, S (2011), Defining collaboration in Australian maternity care, Women & Birth, vol. 24, no. 2, pp. 53-57.

Hoang, H & Le, Q (2013), Comprehensive picture of rural women’s needs in maternity care in Tasmania, Australia, Australian Journal of Rural Health, vol. 21, pp. 197-202.

Jenkins, MG, Ford, JB, Todd, AL, Forsyth, R, Morris, J & Roberts, CL (2015), Women’s views about maternity care: How do women conceptualise the process of continuity?, Midwifery, vol. 31, no. 1, pp. 25-30.

Josif, CM, Barclay, L, Kruske, S & Kildea, S (2014), ‘No more strangers’: Investigating the experience of women, midwives and others during the establishment of a new model of maternity care for remote dwelling Aboriginal women in northern Australia, Midwifery, vol. 30, no. 3, pp. 317-323.

Kildea, S, McGhie, AC, Ghao, Y, Rumbold, A & Rolfe, M (2015), Babies born before arrival to hospital and maternity unit closures in Queensland and Australia, Women & Birth, vol. 28, no. 3, pp. 236-245.

Lane, K (2006), The plasticity of professional boundaries: A case study of collaborative care in maternity services, Health Sociology Review, vol. 15, no. 4, pp. 341-352.

Lane, K (2012), When is collaboration not collaboration? When it’s militarized, Women & Birth, vol. 25, no. 1, pp. 29-38.

McIntyre, M, Francis, K & Champan, Y (2012a), The struggle for contested boundaries in the move to collaborative care teams in Australian maternity care, Midwifery, vol. 28, no. 3, pp. 298-305.

McIntyre, M, Francis, K & Chapman, Y (2012b), Primary maternity care reform: Whose influence is driving the change?, Midwifery, vol. 28, no. 5, pp. 705-711.

McLelland, G, McKenna, L & Archer, F (2013), No fixed place of birth: Unplanned BBAs in Victoria, Australia, Midwifery, vol. 29, no. 1, pp. 19-25.

National Health and Medical Research Council (2010), National Guidance on Collaborative Maternity Care, accessed 02 October 2015, https://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/CP124.pdf

Pairman, S, Pincombe, J, Thorogood, C & Tracy, S (2006), Midwifery: Preparation for Practice, Churchill Livingstone Elsevier, Sydney.

Sandall, J, Soltani, H, Gates, S, Shennan, A & Declan, D (2015), Midwife-led continuity models versus other models of care for childbearing women, Cochrane Database of Systematic Reviews, accessed 02 October 2015, http://onlinelibrary.wiley.com.ezp01.library.qut.edu.au/doi/10.1002/14651858.CD004667.pub4/abstract

Caring Aids and Equipment within Irish Healthcare

Discuss the clients that would use each piece and how it would benefit the client and the staff. Describe each and outline how they are used correctly.

Include two examples in each of the below areas:

Lifts and Hoists
Mobility aids
Incontinence aids
Personal care aids
Communication aids

Introduction

There is a vast number of caring aids available to improve the quality of an individual’s daily living. These aids include lifts, hoists and mobility aids, as well as incontinence aids, personal care aids and communication aids (Assist Ireland, no date, a). This report will consider two examples from each of the aforementioned categories and detail the clients that would use such an aid, the benefit of the aid to both patient and carer, and how the aid is used.

Lifts and Hoists

There are two types of patient lift or hoist: the sling lift and the sit-to-stand lift (Thomas & Thomas, 2014). The sling lift is an assistive device that allows immobile patients, either at home or within the care environment, to be transferred between resting places, usually a bed and a chair (Baptiste et al, 2008). Sling lifts are either mobile (floor-based) or fixed (overhead) lifts suspended from the ceiling. The sit-to-stand lift is used to help patients who have some mobility but lack the core strength to rise to a standing position from a chair, bed or commode (Radawiec et al, 2009). A number of different slings, straps and belts are available for different uses. For example, a mobile hoist can be used in conjunction with a narrow sling that is positioned under the patient’s arms halfway down their back. The patient must be able to take some of their weight on their legs as the hoist lifts them from a sitting to a standing position (Assist Ireland, no date, b). An overhead hoist can be used with a divided leg sling. This U-shaped sling is positioned with a leg band under each leg and crossed in the middle to provide the patient with some dignity; not crossed in the middle to allow for toileting; or with both leg bands under both legs for improved comfort (Assist Ireland, no date, c).

There are multiple benefits of these aids to the patient. These include the prevention of pressure sores, improved quality of life due to enabling an element of mobility, and the potential for the individual to remain in their own home rather than in the care environment. However, a study by Bilboe et al (2007) sounded a note of caution. A total of 21 healthy subjects performed three sit-to-stand transfers from a stool using no device, the Sara 3000 and the Encore hoist. The subjects were filmed with joint line markers on the greater trochanter, the lateral malleolus and the lateral femoral epicondyle. Bilboe et al (2007) found that neither device reproduced normal trunk angles or joint kinematics in the subjects being lifted when compared with the no-device transfer. Nevertheless, this study is somewhat limited, as only two devices were trialled and no other therapeutic benefit, such as weight relief or endurance, was measured. However, it does show that the choice of lifting device should be carefully considered for each patient.

The benefits for care workers have been extensively studied, with Chhokar et al (2005) finding that the use of an overhead suspended ceiling lift resulted in a sustained decrease in care worker days off, injury claims and direct costs associated with patient handling injuries over a three year period. These findings were supported by further studies from Engst et al (2005), Miller et al (2006) and Alamgir et al (2008).

Mobility Aids

A mobility aid benefits the patient by allowing them some freedom to either get out and about or move around their home in a safe manner. There are various walking aids, such as canes, crutches and walkers for patients with balance problems or to compensate for weakness or injury, along with wheelchairs and mobility scooters for those with more severe mobility impairments. A walker is used by the patient in a standing position and provides extra stability and confidence through additional points of contact with the floor and through the patient’s hands on the frame. It consists of four upright posts and one handgrip for each hand. The walker can reduce the loading on the lower limb by directing some of the load through the upper limbs and the frame of the walker. A wheelchair can be either manually propelled or battery powered, and allows the patient to travel further distances in a comfortable sitting position. Both of these aids benefit the carer by reducing the amount of intervention the patient requires; for example, with the use of a walker or cane, the patient may be able to access the toilet unaided, therefore reducing the need for carer intervention.

However, a considerable body of evidence indicates a high prevalence of disuse or abandonment of such aids, with between 30% and 50% of patients discontinuing use of their device soon after receiving it (Bateni & Maki, 2005). There is also evidence that the repetitive strain on the joints of the upper extremities can lead to injury and promote the discontinuation of use (Konop et al, 2011).

Incontinence Aids

There is a wide range of incontinence aids available for both bowel and bladder incontinence. These include pads and pull-up pants, protective sheets and pads for chairs and beds, catheters, penile sheaths and specially adapted clothing. The most popular of these are the incontinence pads that are worn inside the patient’s own underwear to mop up small to moderate urine leaks (NHS Choices, 2015). These pads are positioned within the user’s underwear and have a hydrophilic layer which ensures the urine is drawn away from the body, thereby preventing the skin from developing sores associated with wetness (Sugama et al, 2012). Another incontinence aid, used by men, is the penile sheath. These devices resemble a condom with a funnel-tipped end and are applied by rolling the silicone sheath down the shaft of the penis (Robinson, 2006). A leg bag connector, or a sheath urinary drainage system, is then attached to the funnel tip (Robinson, 2006). Williams and Moran (2006) detailed the difficulties and benefits of the use of a penile sheath for male incontinence. They reported that sizing of the sheath was difficult to establish, with even a 1mm error in measurement causing the sheath to become detached during urination or causing penile trauma. However, they explain that, with an appropriately sized sheath correctly fitted, this system provides significant freedom for the patient and improves their confidence and self-esteem.

Personal Care Aids

Personal care aids include long handled nail clippers, lotion applicators, shampooing aids and toileting aids. The extended reach, long handled or pistol grip toenail clipper is used by patients who are unable to bend far enough forwards to clip their toenails using standard clippers (Semple et al, 2009). These patient groups include pregnant women, overweight patients, the elderly and people with back problems. The device extends the length of reach enabling the individual to maintain independence and trim their own nails rather than relying on their carer to carry out this task for them (Semple et al, 2009).

Another personal care aid that benefits both the patient and the carer is the shampooing rinse basin. This basin, used for washing the hair of a bed-bound, immobile patient, is positioned at the top of the bed and has a comfortable head and neck cradle to support the patient’s head in an appropriate position (Sloane et al, 1995; Eigsti, 2011). The patient’s hair can then be wetted, shampooed and rinsed within the basin without the need for the patient to be moved from the bed. The basin has a side drain and drain hose that can be left open for continuous irrigation or closed to hold water. This aid therefore reduces the risk of handling injury to both patient and carer and allows for a number of personal care routines including shampooing, ear irrigation and scalp treatment.

Communication Aids

Communication aids include aids that improve the hearing, reading and writing of the patient. One example is a magnifier, which can be handheld or attached to spectacles, a headband or a neck attachment (Berry & Ignash, 2003). Magnifiers can also be furniture, floor or wall mounted, or can fit over screens. These aids magnify the size of the text on letters, books or other print, benefiting the patient by enabling them to read their own correspondence, thereby keeping personal information to themselves, or to keep themselves entertained or informed by reading newspapers or books (Berry & Ignash, 2003). This also benefits the carer, as it frees up their time to carry out other tasks.

Another communication aid is a personal sound amplifier, which amplifies TV and audio equipment. The equipment includes a microphone, which is positioned near to the TV or audio equipment, an amplifier and an earpiece for the patient (Palmer et al, 1995). These devices benefit the patient, who can enjoy listening to on-screen or audio entertainment, and also benefit the carer and other household members, as the volume of the audio and TV equipment can be maintained at a normal level. The patient is able to increase the volume of the sound through their own earpiece without increasing the overall volume of the sound-emitting device.

Conclusion

This report provides two examples of each of a number of care aids and explains the ways in which these benefit both the carer and the patient. Examples include mobile hoists, which transfer the patient between sitting and standing positions; a walking frame, which gives the patient additional support and confidence when mobilising both inside and outside the home; and personal care aids, such as the shampooing basin and long-reach nail clippers, that maintain the hygiene needs of patients with restricted mobility.

References

Alamgir, H., Yu, S., Fast, C., Hennessy, S., Kidd, C., & Yassi, A. (2008). Efficiency of overhead ceiling lifts in reducing musculoskeletal injury among carers working in long-term care institutions. Injury, 39(5), 570-577.

Assist Ireland (no date, a). Choosing a product. Available online at http://www.assistireland.ie/eng/Information/Information_Sheets/ accessed 23 October 2015.

Assist Ireland (no date, b). Choosing a mobile hoist. Available online at http://www.assistireland.ie/eng/Information/Information_Sheets/Choosing_a_Mobile_Hoist.html accessed 23 October 2015.

Assist Ireland (no date, c). Choosing an overhead hoist. Available online at http://www.assistireland.ie/eng/Information/Information_Sheets/Choosing_an_Overhead_Hoist.html accessed 23 October 2015.

Baptiste, A., Cleerey, M. M., Matz, M., & Evitt, C. P. (2008). Proper sling selection and application while using patient lifts. Rehabilitation Nursing, 33(1), 22-32.

Bateni, H., & Maki, B. E. (2005). Assistive devices for balance and mobility: benefits, demands, and adverse consequences. Archives of Physical Medicine and Rehabilitation, 86(1), 134-145.

Berry, B. E., & Ignash, S. (2003). Assistive technology: Providing independence for individuals with disabilities. Rehabilitation Nursing, 28(1), 6-14.

Bilboe, J., Healey, K., & Busse, M. E. (2007). Investigating joint kinematics during a hoist-assisted sit-to-stand activity. International Journal of Therapy and Rehabilitation, 14(7), 311-317.

Chhokar, R., Engst, C., Miller, A., Robinson, D., Tate, R. B., & Yassi, A. (2005). The three-year economic benefits of a ceiling lift intervention aimed to reduce healthcare worker injuries. Applied Ergonomics, 36(2), 223-229.

Eigsti, J. E. (2011). Innovative solutions: beds, baths, and bottoms: a quality improvement initiative to standardize use of beds, bathing techniques, and skin care in a general critical-care unit. Dimensions of Critical Care Nursing, 30(3), 169-176.

Engst, C., Chhokar, R., Miller, A., Tate, R. B., & Yassi, A. (2005). Effectiveness of overhead lifting devices in reducing the risk of injury to care staff in extended care facilities. Ergonomics, 48(2), 187-199.

Konop, K. A., Strifling, K. M., Krzak, J., Graf, A., & Harris, G. F. (2011). Upper Extremity Joint Dynamics During Walker Assisted Gait: A Quantitative Approach Towards Rehabilitative Intervention. Journal of Experimental & Clinical Medicine, 3(5), 213-217.

Miller, A., Engst, C., Tate, R. B., & Yassi, A. (2006). Evaluation of the effectiveness of portable ceiling lifts in a new long-term care facility. Applied Ergonomics, 37(3), 377-385.

NHS Choices. (2015). Incontinence Products. Available online at http://www.nhs.uk/Livewell/incontinence/Pages/Incontinenceproducts.aspx accessed 23 October 2015.

Radawiec, S. M., Howe, C., Gonzalez, C. M., Waters, T. R., & Nelson, A. (2009). Safe ambulation of an orthopaedic patient. Orthopaedic Nursing, 28(2S), S24-S27.

Robinson, J. (2006). Continence: sizing and fitting a penile sheath. British Journal of Community Nursing, 11(10), 420-427.

Semple, R., Newcombe, L. W., Finlayson, G. L., Hutchison, C. R., Forlow, J. H., & Woodburn, J. (2009). The FOOTSTEP self management foot care programme: Are rheumatoid arthritis patients physically able to participate? Musculoskeletal Care, 7(1), 57-65.

Sloane, P. D., Rader, J., Barrick, A. L., Hoeffer, B., Dwyer, S., McKenzie, D., & Pruitt, T. (1995). Bathing persons with dementia. The Gerontologist, 35(5), 672-678.

Sugama, J., Sanada, H., Shigeta, Y., Nakagami, G., & Konya, C. (2012). Efficacy of an improved absorbent pad on incontinence-associated dermatitis in older women: cluster randomized controlled trial. BMC Geriatrics, 12(1), 22-24.

Thomas, D. R., & Thomas, Y. L. N. (2014). Interventions to reduce injuries when transferring patients: A critical appraisal of reviews and a realist synthesis. International Journal of Nursing Studies, 51(10), 1381-1394.

Williams, D., & Moran, S. (2006). Use of urinary sheaths in male incontinence. Nursing Times, 102(47), 42-44.

Campylobacter Jejuni Health Essay

Campylobacter jejuni is one of a family of bacteria known as Campylobacteriaceae that collectively are responsible for a significant number of reported cases of gastroenteritis in the UK. Gastrointestinal infection with Campylobacter spp. can produce significant long term sequelae, such as reactive arthritis and the neurological condition Guillain-Barre Syndrome. This report will give a brief overview of campylobacter jejuni with regard to its microbiology, and the identification and management of campylobacter infection.

Campylobacters were recognised as a cause of human illness in the 1970s, but were probably first identified in humans by Escherich in 1886, who identified spiral-shaped bacteria in the colons of children who had died from a condition he called “cholera infantum” (Escherich 1886). Veterinary research at the beginning of the twentieth century identified similar bacteria in livestock, and the bacteria (termed at the time “vibrio” or “spirillium”) were implicated in a number of reported cases in both animals and humans throughout the mid-twentieth century (Butzler 2004). The key breakthrough was reported in 1972, when Dekeyser and Butzler were able to isolate the bacteria now known as campylobacter jejuni from the stool of an infected patient (Dekeyser 1972).

Campylobacter spp. are classified as part of rRNA superfamily VI, a classification of bacteria that also includes Helicobacter and Arcobacter (Vandamme 1991). Campylobacters, and other members of the classification, are small, gram-negative bacteria that are specially adapted to colonise the surface of the mucous membranes of the digestive tract. This is reflected in the morphology of the bacteria, which have a spiral-shaped body with long unsheathed flagella at each tip. Consequently, campylobacters are highly motile and are able to tunnel through the mucous layer and colonise the membrane below, a key ability as they are highly susceptible to acidity. Like other members of the classification, they are microaerophilic, and they undergo transformation to coccoid forms when exposed to adverse conditions (Moran 1987). Presently, 18 species of Campylobacter have been identified, and 11 of these are thought to be pathogenic in humans. By far the most common are campylobacter jejuni and campylobacter coli; together, these bacteria are a leading cause of diarrhoeal illness.

Principal risk factors for infection with campylobacter jejuni include the consumption of undercooked meat, especially poultry, inadequately pasteurised milk, contaminated water and pets with diarrhoea (Gillespie 2008). There may be human-human transmission via the faeco-oral route if personal hygiene is unsatisfactory (Wilson 2008).

There is an incubation period of around 3 days, though this can range from 1-7 days. There is occasionally a prodromal illness of fever, myalgia and headache lasting around 24 hours, and patients who present with the prodromal illness often have a more severe infection than those presenting with gastrointestinal symptoms (Minton 2004). The principal illness is characterised by colicky, periumbilical abdominal pain, pyrexia (the fever may be as high as 40°C) and profuse diarrhoea, often with up to 10 bowel movements each day. The stool may be watery initially, and blood may appear in the stool as the infection progresses. Around 25% of patients will experience tenesmus (Minton 2004).

Symptoms of diarrhoea generally last for up to 7 days, and abdominal pain may persist a little longer. The illness is generally self-limiting, though the prognosis can be worse in the very young, the elderly, those with comorbid conditions and the immunocompromised (Nelson 2004).

It is not possible to differentiate campylobacter infection from other causes of infective gastroenteritis based on history and examination findings alone (Buss 2015). Therefore, detection of campylobacter in a stool sample is the mainstay of diagnosis, though a negative sample cannot exclude the presence of campylobacter. Samples are rarely positive after two weeks. Stool samples should always be collected in patients presenting with these symptoms, as infection with campylobacter is a notifiable disease in England and Wales (NICE 2014).

In a generally fit and well adult, the main risk of acute diarrhoea of any cause, including campylobacter, is dehydration. Therefore, maintaining adequate hydration is the cornerstone of treatment. This can generally be achieved by increasing oral fluid intake, but in vulnerable patients intravenous hydration may be indicated. Rehydration may be encouraged with the administration of racecadotril, an intestinal antisecretory enkephalinase inhibitor that inhibits the breakdown of endogenous enkephalins, reducing the hypersecretion of water and electrolytes into the intestine (NICE 2013). Racecadotril is licensed in the United Kingdom for the complementary treatment of acute diarrhoea in patients aged over 3 months, together with oral rehydration.

Though the symptoms of campylobacter infection are unpleasant and inconvenient, there is generally no indication for antimotility medications. In fact, unless the diagnosis is confirmed via the laboratory, these medications are contraindicated as toxic megacolon has been reported as an adverse effect of antimotility medications in patients with pseudomembranous colitis and inflammatory bowel disease (Minton 2004).

Given the short duration and self-limiting nature of the condition, antibiotic therapy is generally not recommended. A Swedish meta-analysis of eleven randomised controlled trials reported that, versus placebo, antimicrobial therapy reduced the duration of intestinal symptoms by only 1.3 days (95% CI 0.6-2.0 days) (Ternhag 2007). A further review by the National Institute for Health and Care Excellence (NICE) reported that antibiotic treatment with erythromycin cleared the bacteria from stool samples rapidly, but had no effect on the course of the disease (NICE 2009).

Problems exist with antibiotic-resistant species of campylobacter, largely due to antibiotic use in animals (Gallay 2007; Lehtopolku 2010). However, antibiotic therapy should be considered for patients with severe disease or at risk of severe disease. Patients with severe disease include individuals with bloody stools, high fever, extra-intestinal infection, worsening or relapsing symptoms, or symptoms lasting longer than one week (Ruiz-Palacios 2007). Patients classified as at risk of severe disease include the immunocompromised, the elderly and pregnant women. NICE supports this, suggesting that antibiotic therapy may be indicated if any of the following occur (NICE 2014):

High fever
Bloody diarrhoea
More than eight stools daily
Worsening clinical condition
Illness for over a week
Pregnancy
Immunocompromise

Should an antibiotic be required, azithromycin and erythromycin are the most effective agents against campylobacter in the UK, with a single 30 mg/kg dose of azithromycin early in the disease proving just as effective as a 5-day course of erythromycin (Vukelic 2010). The British National Formulary recommends a combination therapy of clarithromycin with ciprofloxacin as an alternative.
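To put the weight-based regimen reported by Vukelic (2010) in concrete terms (an illustrative calculation only, not prescribing guidance): a child weighing 20 kg would receive a single dose of 30 mg/kg × 20 kg = 600 mg of azithromycin.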

There are a number of complications of campylobacter infection. Acute bacterial gastroenteritis has been linked with the onset of irritable bowel syndrome in around 15% of cases; this is termed post-infective IBS (Smith 2007). Further complications include Reiter’s Syndrome (a form of reactive arthritis characterised by urethritis, conjunctivitis and arthritis), and the neurological condition Guillain-Barre Syndrome.

In summary, campylobacter jejuni is a gram-negative, spiral-shaped bacterium that colonises the mucous membranes of the gut. This colonisation produces a self-limiting illness characterised by fever, cramping abdominal pains and diarrhoea. Infection is diagnosed via detection of the bacteria in a sample of faeces. The mainstay of treatment is rehydration; antibiotics are rarely indicated.

References

Buss S, Leber A, Chapin K, Fey P, Bankowski M, Jones M, Rogatcheva M, Kanack K, Bourzac K. (2015). Multicenter Evaluation of the BioFire FilmArray Gastrointestinal Panel for Etiologic Diagnosis of Infectious Gastroenteritis. Journal of Clinical Microbiology. 53(3)

Butzler J-P 2004 Campylobacter, from obscurity to celebrity. Clinical Microbiology and Infection 10(10)

Dekeyser, P, Gossuin-Detrain, M, Butzler, JP, and Sternon, J. 1972 Acute enteritis due to a related vibrio: first positive stool cultures. Journal of Infectious Diseases 125

Escherich, T. 1886 Beiträge zur Kenntniss der Darmbacterien. III. Ueber das Vorkommen von Vibrionen im Darmcanal und den Stuhlgängen der Säuglinge. (Articles adding to the knowledge of intestinal bacteria. III. On the existence of vibrios in the intestines and feces of babies.). Münchener Med Wochenschrift. 33

Gallay A, Prouzet-Mauléon V, Kempf I, Lehours P, Labadi L, Camou C, Denis M, de Valk H, Desenclos JC, Mégraud F. 2007 Campylobacter antimicrobial drug resistance among humans, broiler chickens, and pigs, France. Emerging Infectious Diseases 13(2)

Gillespie L, O’Brien S, Penman C, Tomkins D, Cowden J, Humphrey T. 2008 Demographic determinants for Campylobacter infection in England and Wales: implications for future epidemiological studies. Epidemiology and Infection 136(12)

Lehtopolku M, Nakari UM, Kotilainen P, Huovinen P, Siitonen A, Hakanen AJ. 2010 Antimicrobial susceptibilities of multidrug-resistant Campylobacter jejuni and C. coli strains: in vitro activities of 20 antimicrobial agents. Antimicrobial Agents and Chemotherapy 54(3)

Minton J, Stanley P. 2004 Intra-Abdominal Infections. Clinical Medicine 4(6)

Moran AP, Upton ME 1987 Factors affecting production of coccoid forms by Campylobacter jejuni on solid media during incubation. Journal of Applied Bacteriology 62(6)

Nelson JM, Smith KE, Vugia DJ, Rabatsky-Ehr T, Segler SD, Kassenborg HD, Zansky SM, Joyce K, Marano N, Hoekstra RM, Angulo FJ. 2004 Prolonged diarrhea due to ciprofloxacin-resistant campylobacter infection. Journal of Infectious Diseases 190(6)

NICE. (2013). Acute diarrhoea in children: racecadotril as an adjunct to oral rehydration. Available: https://www.nice.org.uk/advice/esnm12. Last accessed March 23rd 2015.

NICE. (2009). CG84 – Diarrhoea and vomiting in children: Diarrhoea and vomiting caused by gastroenteritis: diagnosis, assessment and management in children younger than 5 years. Available: https://www.nice.org.uk/guidance/cg84. Last accessed March 23rd 2015.

NICE. (2014). Gastroenteritis. Available: http://cks.nice.org.uk/gastroenteritis. Last accessed March 23rd 2015.

Ruiz-Palacios GM 2007. The health burden of Campylobacter infection and the impact of antimicrobial resistance: playing chicken. Clinical Infectious Diseases 44(5)

Smith JL, Bayles D. 2007 Postinfectious irritable bowel syndrome: a long-term consequence of bacterial gastroenteritis. Journal of Food Protection 70(7)

Ternhag A, Asikainen T, Giesecke J, Ekdahl K. 2007 A meta-analysis on the effects of antibiotic treatment on duration of symptoms caused by infection with Campylobacter species. Clinical Infectious Diseases 44(5)

Vandamme P, Falsen E, Rossau R, Hoste B, Segers P, Tytgat R, De Ley J 1991 Revision of Campylobacter, Helicobacter, and Wolinella taxonomy: emendation of generic descriptions and proposal of Arcobacter gen. nov. International Journal of Systematic Bacteriology 41(1)

Vukelic D, Trkulja V, Salkovic-Petrisic M. 2010 Single oral dose of azithromycin versus 5 days of oral erythromycin or no antibiotic in treatment of campylobacter enterocolitis in children: a prospective randomized assessor-blind study. Journal of Pediatric Gastroenterology and Nutrition 50(4)

Wilson D, Gabriel E, Leatherbarrow A, Cheesbrough J, Gee S, Bolton E, Fox A, Fearnhead P, Hart C, Diggle P. 2008 Tracing the Source of Campylobacteriosis. PLoS Genetics 4(9)

Methodology Research Data

Introduction

According to Walliman (2001), a methodology explains the theory behind the research methods or approaches. This chapter highlights the theories behind the methodology employed and examines the research methods most appropriate for this research, helping to better understand the topic under investigation.

This research undertakes an analytical review of customer retention techniques of Indian banks, using Citibank as a case study. This chapter outlines how this analysis is undertaken and describes the rationale behind the choice of research design and the construction of the method.

Research Method Construction

Much of the research undertaken in the social sciences is primary. This is based on the collection of primary data, that is, data originated by the researcher for the purpose of the investigation at hand (Stewart and Kamins, 1993). Primary analysis is the original analysis of data in a research study, and is what one typically imagines as the application of statistical methods.

However, not every study or research undertaking must begin with the collection of primary data. In some cases, the information required is already available from published sources. This is called secondary research – the summation, collation, and/or synthesis of existing research. Secondary information consists of sources of information collected by others and archived in some form. These sources include reports, industry studies, as well as books and journals.

The collection, generation and dissemination of information are growing. This means that there exists a tremendous amount of secondary data that is relevant to today’s decision-making problems. Knowledge accumulation increasingly relies on the integration of previous studies and findings. Glass (1976) argues that when the literature on a topic grows and knowledge lies untapped in completed research studies, “this endeavour (of research synthesis) deserves higher priority … than adding a new experiment or survey to the pile” (Glass, 1976, p. 4).

One of the main reasons to value secondary data comes from the ease of collection for research use (Houston, 2004). This information can be of considerable importance for two reasons.

Time savings – typically, the time involved in searching secondary sources is much less than that needed to complete primary data collection.

Cost effectiveness – similarly, secondary data collection in general is less costly than primary data collection. For the same level of research budget a thorough examination of secondary sources can yield a great deal more information than can be had through a primary data collection exercise.

Another, and perhaps more important, benefit to researchers from employing secondary data is that alternative types of data can provide multi-method triangulation to other research findings (Houston, 2004). This is because the knowledge bases regarding many constructs, such as retention and loyalty, have been built heavily through survey research approaches. All things being equal, secondary data should be used if it helps researchers to solve the research problem (Saunders et al., 2006).

If there exists data that solves or lends insight into the research problem, then little primary research has to be conducted. Because resource constraints are always a problem for the researcher, it makes good sense to exhaust secondary data sources before proceeding to the active collection of primary data. In addition, secondary data may be available which is entirely appropriate and wholly adequate to draw conclusions and answer the question or solve the problem.

Secondary analysis may involve the combination of one data set with another, address new questions or apply new analytical methods for evaluation. It is the re-analysis of data, either to answer the original research question with different statistical techniques or to answer new questions with old data, and it is an important feature of the research and evaluation landscape. Generally, secondary research is used in problem recognition and problem clarification.

However, in addition to being helpful in the definition and development of a problem, secondary data is often insufficient in generating a problem solution (Davis, 2000). Whilst the benefits of secondary sources are considerable, their shortcomings have to be acknowledged. There is a need to evaluate the quality of both the source of the data and the data itself. The first problem relates to definitions. The researcher has to be careful, when making use of secondary data, of the definitions used by those responsible for its preparation.

Another relates to source bias. Researchers have to be aware of vested interests when they consult secondary sources. Those responsible for their compilation may have reasons for wishing to present a more optimistic or pessimistic set of results for their organisation. Also, secondary data can be general and vague and therefore may not help with decision-making. In addition, data may be incomplete. Finally, the time period during which secondary data was first compiled may have a substantial effect upon the nature of the data.

Considering these shortcomings, a primary data collection strategy was also adopted after the collection and analysis of secondary data. This was done deliberately, as the author wanted to analyse previous similar research before drafting a primary data collection questionnaire. In constructing the primary data collection method, data needs were first specified. Primary data was collected in the form of interviews with Citibank operational and branch managers; focus groups were also conducted with a sample of Citibank customers.

These methods were considered to be the most appropriate in terms of achieving the objectives of the study and worked best within the time and cost constraints. Semi-structured probing interviews with Citibank management staff revealed in-depth information and insights on customer retention and relationship banking. Focus groups conducted with Citibank customers were the best way to elicit information from them, as ideas from one person sparked off ideas from another and the group gelled together very well.

Also, facial expressions and body movements revealed a great deal in a focus group. It was not feasible to conduct telephone interviews or video-conferencing due to the costs involved. Initially, some thought was given to conducting telephone interviews with Citibank employees, but the idea was later shelved because of time and cost constraints.

Secondary data for this research was collected from books, journals, online publications, white papers, previous research, newspapers (Economic Times), taped interviews, websites, research databases, etc. Secondary data was collected and partially analysed before embarking on primary data collection methods so that the focus group and interview questions could be framed properly.

Although most of the secondary data had been collected by the time the primary data collection methods were embarked upon, secondary data collection did not stop altogether. In a way, the data collected from secondary sources and the data gathered from field research helped in triangulation. The field research also helped in testing the hypothesis that was developed after studying the concepts and theories (deductive approach).

Also, having gained sufficient insight into the topic, the author found it easier to frame the questionnaire: first, questions to test the hypothesis were framed (deductive approach), and then specific questions were framed which would help in forming a hypothesis (inductive approach). Primary research tried to delve as deeply as possible into areas which could not be covered by secondary research and where first-hand information was absolutely necessary to reach a definitive conclusion.

Research Approach

The qualitative method is a kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification; it is based on meaning expressed through words (Saunders, 2006).

Qualitative research often provides rich, descriptive data and is exploratory in nature. Quantitative methods, on the other hand, use numbers and statistical techniques, and tend to be based on meanings derived from numbers.

The research approach used for this research is primarily qualitative. Both of the primary data collection methods concentrate on qualitative data. However, quantitative data is also collated in the form of company reports, which were reviewed to analyse the effect of retention measures on management accounts.

So, both quantitative and qualitative data collection techniques are applied, although the major part of the research relies upon qualitative data and its analysis. Qualitative secondary information is gathered from a variety of sources, such as Citibank case studies, web pages, reference books, journals, online journals, newspaper and magazine articles, taped interviews, business news channel commentary and research agency databases. Quantitative data from Citibank company reports and other banks is collected and analysed to compare and contrast the effect of various retention initiatives.

The Research Design

A research design is the framework or plan for a study, used as a guide to collect and analyse data; it is the blueprint that is followed (Churchill and Iacobucci 2005; pg 73). Kerlinger (1996; pg 102) defines it as ‘a plan and structure of investigation to obtain answers to research questions.’ The plan here means the overall scheme or programme of the research and includes an outline of what the researcher seeks to do, from hypothesis testing to the final analysis of the data.

A structure is the framework, organisation or configuration of the relations among the variables of a study (Robson, 2002; pg 73). The research design expresses both the structure of the research and the plan of investigation used to obtain empirical evidence on the relations of the problem. Some of the common approaches to research design include exploratory research, descriptive research and causal research.

For the purpose of this research, an exploratory study is conducted, as little previous research is available on customer retention in Indian banks. Hence, little information is available on how to approach the research problem, since there are few past references. The focus of this study is on gaining insights and familiarity with the subject area of customer retention for more rigorous investigation at a later stage.

The approach is very open, and a wide range of data and information can be gathered, although this will not by itself provide a conclusive answer to the problem defined. This research will study which existing theories or concepts with regard to customer retention can be applied to the problem defined. It will rely on extensive face-to-face interviews conducted with bank managers of Citibank to understand the concept of customer retention and how it is implemented. One of the reasons for carrying out an exploratory study is that some facts about customer retention are known by the author, but more information is required to build a theoretical framework.

Sample

Sample selection in this study was driven by the need to allow maximum variation in conceptions. Individual managers were interviewed according to their expected level of insight regarding customer retention. In total, five interviews were conducted; all participants were employed by Citibank in India.

In addition, two interviewees had been directly involved in developing the retention strategy while the other three had gained experience in implementing retention strategies. Thus, the likelihood of uncovering a range of variations between conceptions of retention was increased.

Focus group participants banked with Citigroup in some form or another (current accounts, credit cards, loans etc.). These participants represented a mix of genders, ages, banking experience, disciplines and experience of banking with different banks.

Method of Data Collection

Data was collected using a semi-structured interview technique, which is characterised by Booth (1997) as being both open and deep. Open refers to the fact that the researcher is open to being guided by the responses made by the interviewee (Marton, 1994; Booth, 1997). Deep describes how, during the interview, individual interviewees are encouraged to discuss their conceptions in depth until both the researcher and the interviewee reach a mutual understanding about the phenomenon in question (Booth, 1997; Svensson, 1997).

In this study, this facilitated the prompting of interviewees to move beyond the concept of retention and into relationship building and loyalty. All face-to-face interviews were conducted separately with each participant in the participant’s office, with the interviews lasting between 30 and 40 minutes. Initially, a “community of interpretation” (Apel, 1972) between the researcher and participant was established, with the researcher explaining that the objective of the research was to understand what constitutes effective retention strategy and the importance of retention within the banking community.

The questions encouraged the participants to reflect upon and articulate their own lived experience of retention. They also focused on the “structural-how” aspects of customer retention. In asking about the roles and activities related to retention, the interviews tried to uncover the ‘how’ component of retention. The interviews progressed around these topics, with participants guiding the agenda based on the extent of their interest in the topic.

For example, the majority of interviewees drew comparisons with the American banking system when expressing their views on the retention process. In addition to the primary questions, follow-up questions were asked as appropriate. Examples included “What do you mean by that?”, “What happens?”, and “Is that how you see your role?” These questions encouraged individual participants to elaborate on the meaning of customer retention.

Data Analysis

All five interviews and focus group sessions were taped and then transcribed verbatim. The transcripts were then analysed by the research team using investigator triangulation (Janesick, 1994). In line with the suggestions of Francis (1996), a structural framework for organizing the transcripts was first developed.

This prevented the research team from getting lost in the enormous amount of text contained in each transcript and ensured we focused on the underlying meaning of the statements in the text, rather than on the specific content of particular statements.

The components of the framework were dimensions of supervisors’ conceptions, which were not predetermined by the researchers but were revealed in the texts. The phenomenographic approach seeks to identify and describe the qualitatively different ways of experiencing a specific aspect of reality (Marton, 1981, 1986, 1988, 1994, 1995; Van Rossum & Schenk, 1984; Johansson, Marton, et al., 1985; Säljö, 1988; Sandberg 1994, 1997, 2000, 2001; Svensson 1997).

These experiences and understandings, or ways of making sense of the world, are labelled as conceptions or understandings. The emphasis in phenomenography is on how things appear to people in their world and the way in which people explain to themselves and to others what goes on around them, including how these explanations change (Sandberg, 1994).

The framework we used to organize the data in each transcript comprised four dimensions of the explanations that supervisors used to make sense of their world, as expressed by them in the interview:

(a) What the interviewee’s conception of supervision meant to the interviewee in terms of the goal of supervision (referential-what);

(b) How the conception was translated by the interviewee into roles and activities (structural-how);

(c) What the conception meant to the interviewee in terms of the desired outcomes of the PhD supervision (referential-what); and

(d) What factors influenced the interviewee’s conception (external influences).

The organizing framework was then used to reduce the text in each interview transcript to its essential meaning. Each researcher reread the first interview transcript. Discussion, debate, and negotiation then followed as we applied the components of our organizing framework to the first interviewee. Where differences of opinion occurred, a researcher attempted to convince the others of the veracity of their claims.

As a result of this ongoing and open exchange, we reached agreement about the components of the first interviewee’s conceptions that we believed were most faithful to the interviewee’s understanding of their lived experience of PhD supervision, as represented by their interview transcript. We then repeated this process for the next interviewee until all of the transcripts had been reduced into the organizing framework.

Conceptions began to emerge from our organizing framework as we alternated between what the interviewees considered PhD supervision to be, how they enacted supervision in their roles and activities, and why they had come to this understanding of supervision. Once these conceptions emerged, we tentatively grouped together interviewees who shared conceptions of supervision that were similar to each other and were different from those conceptions expressed by other supervisors. We then cross-examined our interpretations of each interviewee’s understanding of supervision by proposing and debating alternative interpretations.

This cross-examination continued until we, as a group, reached agreement on two issues: First, we believed we had established the most authentic interpretation of each interviewee’s understanding.

Second, we believed we had grouped interviewees expressing qualitatively similar understandings into the same category of description and had grouped interviewees expressing qualitatively different conceptions into different categories of description. Five categories of description, which we labelled as Conceptions 1 through to 5, emerged from this process.

Through the same iterative process, and through open dialogue and debate between the members of the research team, we were then able to map the five conceptions into an outcome space. The outcome space illustrates the relationships between the differing conceptions in two ways: First, the outcome space illustrates the outcome of higher priority sought by the supervisor (completion of the PhD or new insight).

Second, it distinguishes the fundamental approach to supervision as either pushing (the student is a self-directed learner) or pulling (the student is a managed learner) the student through the process. Table 2 summarizes the techniques we applied, as derived from the literature, to improve the validity and reliability of our interpretations of the interviewee’s experiences as expressed in the transcripts.

Example French Essay

What are the factors behind the rise of the Front National and its success in the 2002 presidential elections?

After the 2002 presidential elections and the qualification of Jean-Marie Le Pen, leader of the Front National, in the first round on 21 April, France was thrown into confusion. Nobody, it is true, had expected it, but with hindsight this partial victory can be explained by numerous factors, at once psychological, political and social.

1. The number of candidates for the presidency

An incredible total of sixteen candidates stood in the 2002 elections! An extensive choice, some might think; too extensive, others thought. Everyone expected that, as usual, Jacques Chirac, the sitting president, and Lionel Jospin, the Prime Minister at the time, would face each other in the second round. The two parties had already been cohabiting in government for several years. It was going to be the battle of the century. But the Left was fragmented into numerous small parties (the far left, the greens, the new left), weakening it in relation to the other parties. The French, disillusioned with this divided left, turned to the only alternatives that seemed plausible to them: the right, the far right or abstention. The Front National received 11.75% of the votes, but its electorate is thought to reach as high as 16%. The share of votes for the party had never been so high, and this demonstration of the electorate’s power was an alarming wake-up call for the divided parties of the socialist left.

2. Le taux d’abstention

It is thought that it was, in addition to the large number of candidates, the high abstention rate that cost Prime Minister Lionel Jospin his place in the second round. Indeed, he gave up the leadership of the socialist left after this defeat, so humiliating for his party and so unexpected. Being beaten by such a small margin was taken as a vote of no confidence in his party. It is true that French voters traditionally of the left, who had abstained in the first round in protest, quickly rallied in the second and voted en masse for Chirac out of fear of the very real rise of the far right in France. But the damage, unfortunately, had already been done. It is also thought that the rise in the abstention rate is linked to the spread of anarchist ideas from Spain and Corsica, but this is only speculation attempting to explain the gradual disillusionment of the French. Abstainers were called upon to vote en masse in the second round to eliminate whatever chance Le Pen had, however improbable, of acceding to the presidency.

3. L’insecurite, la violence et le chaumage en France

L’agenda politique en France en 2002 fut egalement crucial pour determiner du futur

President de la Republique. L’insecurite face la menace terroriste que provoqua le 11 septembre, la violence urbaine et la hausse du chaumage en France pousserent une population vieillissante se confier un homme contre l’immigration massive et l’integration raciale. Le Pen est connu pour ses propos anti-semite, en particulier l’affirmation que ‘l’immigration de mass est le pire danger nous avons jamais rencontre en histoire’. L’influx d’immigrants de partout dans le monde en France, un pays repute pour sa solidarite vis–vis de refugies politiques, est pense etre une des causes directes de la montee du chaumage dans l’hexagone. Dans ce contexte la sympathie pour les causes de Le Pen est evidente. Similairement, la violence dans les banlieues de Paris est des autres grandes villes franaises comme Marseille et Lille est pense etre egalement lie l’immigration et la montee de jeunes auxquelles les valeurs franaises n’ont pas encore ete inculquees.

4. The popularity of Le Pen’s daughter

The charisma of Le Pen’s daughter, Marine, a 34-year-old lawyer, is also thought to have had a non-negligible effect on the Front National’s success in the 2002 elections. She was brought in as a member of the party after her success in the regional elections. In April 2003, Jean-Marie Le Pen appointed his daughter one of the five vice-presidents of the Front National. It is thought that this move was a manoeuvre to reduce the influence of Bruno Gollnisch, who was positioning himself to replace Le Pen upon his eventual retirement. This transfer of power closely resembles the succession of a monarchy. However, Marine Le Pen’s influence on the party’s success is thought to predate the elections. As a woman, she inspired confidence in the female electorate which, until then, had represented only a tiny percentage of the party’s support. She became a media figure, appearing regularly on national television to defend her father’s ideas. The party’s strategy of recruiting a younger electorate was also helped by her public appearances.

Fortunately, Le Pen was roundly beaten by Jacques Chirac in the second round on 5 May, but the shock of that first round endures. The French, for their part, did not hesitate to express their disappointment at having been forced to vote for a president who might not have won had Lionel Jospin gone through the first round. As Le Québécois Libre put it: ‘In this confrontation between the so-called right-wing and far-right candidates (both, however, offering largely statist programmes), what is the best course of action to advance liberty?’ What we now face is a strategy of demonisation of the Front National, initiated by the socialists and the right, battling against a strategy of normalisation of the party, led by the Front National with Marine Le Pen at its head.