Variation of Light Intensity – Inverse Square Law


Background Theory

Light emitted from any kind of source, e.g. the sun or a light bulb, is a form of energy. Everyday problems, such as the lighting required for various forms of labour or for street illumination, require one to be able to determine and evaluate the intensity of light emitted by a light source, or the illumination of a given surface. The branch of study formed around these issues is called photometry.

Luminous flux is a scalar quantity which measures the time rate of light flow from the source. As with all measures of energy transferred over a period of time, luminous flux is expressed in joules per second, i.e. watts (SI units). It can therefore safely be said that luminous flux is a measure of light power.

Visible light consists of several different colours, each representing a different wavelength of the radiation spectrum. For example, red has a wavelength of 610-700 nm, yellow 550-590 nm and blue 450-500 nm.

The human eye demonstrates different levels of sensitivity to the various colours of the spectrum; the maximum sensitivity is observed in the yellow-green region (i.e. 555 nm). From all the above, it is clear that there is a need for a unit associating and standardising the visual sensitivity at the various wavelengths with the light power measured in watts; this unit is the special luminous flux unit, the lumen (lm).

One lumen is equivalent to 1/680 watt of light with a wavelength of 555 nm. This special relationship between illumination and visual response renders the lumen the preferred photometric unit of luminous flux for practical applications. Moreover, one of the most widely used light sources in everyday life, the electric light bulb, emits light consisting of many different wavelengths.

A measure of the luminous strength of any light source is called the source's intensity. At this point it should be said that the intensity of a light source depends on the quantity of lumens emitted within a finite angular region, which is formed by a solid angle. To give a visual representation of the solid angle, recall that in a two-dimensional plane the plane angle is used for all kinds of angular measurements. A further useful reminder concerns the arc length s: for a circle of radius r, the arc length s is calculated by the formula

s = r * θ – Equation 1

(θ is measured in radians)

Now, in three-dimensional space the solid angle Ω is similarly used for angular measurements. Corresponding to the plane angle θ, each section of surface area A of a sphere of radius r is calculated using the following formula:

A = r² * Ω – Equation 2

(Remember that Ω is measured in steradians.)

By definition one steradian is the solid angle subtended by an area of the spherical surface equal to the square of the radius of the sphere.

Taking into account all of the above, the luminous intensity I of a light source (small enough to be considered a point source) emitting into the solid angle Ω is given by:

I = F / Ω – Equation 3

Where F is the flux measured in lumens. It is clear that the unit of luminous intensity is the lumen/steradian. This unit used to be called a candle, as it was defined in the context of light emitted from carbon filament lamps.

Generally speaking, the luminous intensity in any particular direction is called the candle power of the source. The corresponding SI unit is the candela (cd), which is the luminous intensity emitted by 1/60 cm² of platinum at a temperature of 2042 K (the freezing point of platinum).

A uniform light source (small enough to be considered a point source) whose luminous intensity is equal to one candela produces a luminous flux of one lumen through each steradian of solid angle. The equation shown below is the mathematical expression of this definition:

F = Ω * I – Equation 4

Where I is equal to one cd and Ω is equal to one sr.

In similar terms, the total flux Ft of a uniform light source with an intensity I can be calculated with the aid of the following formula:

Ft = Ωt * I – Equation 5

And taking into account that the total solid angle Ωt of a sphere is 4π sr, the above formula becomes

Ft = 4π * I – Equation 6
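
As a quick numerical sanity check of Equation 6, consider the minimal sketch below (the function name and the 1 cd example value are illustrative assumptions, not figures from the text):

import math

def total_flux(intensity_cd):
    # Total luminous flux (lm) of an isotropic point source: Ft = 4*pi*I (Equation 6).
    return 4 * math.pi * intensity_cd

# A uniform 1 cd source emits 4*pi ≈ 12.57 lm over the full sphere.
print(total_flux(1.0))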

When a surface is irradiated with visible light it is said to be illuminated. For any given surface, the illuminance E (which is also called the illumination) is intuitively understood and defined to be the flux incident on the surface divided by the total area of the surface:

E = F / A – Equation 7

In the case where several light sources are present and illuminate the same surface, the total illuminance is calculated by adding up all of the individual source illuminances. The SI unit allocated to illuminance is the lux (lx), where one lx is equal to 1 lm/m².

Another way of expressing illumination, in terms of the light source's intensity and the distance from the light source, can be derived by combining the last few equations:

E = F / A = I * Ω / A = I / r² – Equation 8

Where r is the distance measured from the source, or the radius of a sphere whose total area is A (Ω = A / r²). An important side note at this point is that 1 fc (footcandle) equals 1 lm/ft², and 1 lx is equal to 1 lm/m².

It is evident that the illumination is inversely proportional to the square of the distance from the light source. In the case of constant light source intensity I, it can be said that:

E2 / E1 = r1² / r2² = (r1 / r2)² – Equation 9

In the real world, incident light is very rarely normal to a surface; light nearly always impacts a surface at an angle of incidence θ.

In this case the illuminance is calculated by:

E = I * cos θ / r² – Equation 10
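
Equations 8-10 are straightforward to evaluate numerically. The following minimal sketch (the function name and example values are illustrative assumptions, not measurements from this practical) computes the illuminance of a point source and verifies the inverse square ratio of Equation 9:

import math

def illuminance(intensity_cd, distance_m, incidence_rad=0.0):
    # E = I * cos(theta) / r**2 (Equation 10); with theta = 0 this reduces to Equation 8.
    return intensity_cd * math.cos(incidence_rad) / distance_m ** 2

# Inverse square check (Equation 9): doubling the distance quarters the illuminance.
e1 = illuminance(100.0, 1.0)  # hypothetical 100 cd source at 1 m -> 100 lx
e2 = illuminance(100.0, 2.0)  # same source at 2 m -> 25 lx
print(e2 / e1)                # 0.25 = (r1/r2)**2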

To sum up, there are several methods which can be employed in order to measure illumination. Nearly all of them are based on the photoelectric effect, famously explained by Albert Einstein (for which he was awarded the Nobel Prize in 1921). In a few words, when light strikes a material, electron emission is observed, and an electric current flows if a circuit is present.

This current is proportional to the incident light flux and depends on the work function of the material; the intensity of the resulting current flow is measured by instruments calibrated in illumination units.

Apparatus Components:

Light Sensor – Light Dependent Resistor (LDR)

Light bulb

Ruler

Power supply

Voltmeter

Ammeter

Connecting wires and Inline conductors

Two Vertical Stands

Black Paper

Experimental Apparatus

The experimental apparatus consisted of various parts. The basis of the light reception circuit was a Light Dependent Resistor (LDR), which is the essential part of the apparatus since it enables the measurement of the light's intensity.

To give a brief introduction to this type of device, it should be said that all kinds of materials exhibit some resistance to electric current flow (which by definition is an oriented flow of electrons). The particularity of an LDR lies in the fact that its resistance is not constant; instead, its value varies according to the intensity of the light that impacts on it. Generally speaking, LDR devices can be categorised into two main divisions: negative and positive coefficient. The former decrease their resistance as the light's intensity grows; the latter increase their resistance as the light's intensity becomes greater.

At the microscopic level, such a device consists of a semiconducting material like doped silicon (the most commonly used material for electronic applications). When light impacts on the device material, this energy is absorbed by the covalently bonded electrons. This excess energy then breaks the bonds and creates free electrons inside the material. These electrons are free to move inside the material and hence increase its conductivity, lowering its resistance, since they are no longer bonded.
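
For a negative-coefficient LDR, a commonly quoted empirical characteristic is a power law relating resistance to illuminance. The sketch below assumes that model with hypothetical parameter values (the text gives no device data); it is purely illustrative of the falling resistance, not a datasheet:

def ldr_resistance(lux, r0_ohm=10_000.0, e0_lux=10.0, gamma=0.7):
    # Assumed empirical model: R = R0 * (E / E0) ** (-gamma).
    # R0, E0 and gamma are placeholder values, not measured ones.
    return r0_ohm * (lux / e0_lux) ** (-gamma)

print(ldr_resistance(10.0))   # 10 kΩ at the reference illuminance
print(ldr_resistance(100.0))  # ≈ 2 kΩ at ten times the illuminance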

Another essential part of the apparatus is the light source, which in this particular case was an incandescent lamp (these lamps are the most commonly used ones in everyday applications). The basic component of an incandescent lamp is the wire filament, usually made of tungsten; this filament is sealed in a glass bulb. The bulb itself is filled with a low-pressure mixture of argon and nitrogen in gaseous form. The purpose of these two gases is to delay the evaporation of the metal filament as well as its oxidation.

Once current begins to flow through the tungsten filament, it gets so hot that it glows white. Under these operating conditions the filament temperature ranges from 2500 to 3000 degrees Celsius. All incandescent lamps have a continuous spectrum which lies primarily in the infrared region of the electromagnetic spectrum. The basic drawback of these devices is their poor efficiency, since more than 95% of the lamp's energy is lost to the ambient environment in the form of heat.

The detailed apparatus used for this investigation is shown schematically in Figure 1. According to this figure, the light source (incandescent lamp; light bulb's electrical characteristics required here) is placed on a fixed stand and kept in a vertical upright position, pointing upwards. Evidently, once the bulb is switched on, light is emitted isotropically in all directions. A power supply (power supply's electrical characteristics required here) was used for powering the light bulb and providing variable voltage values. In this way, as will be explained later, the intensity of the light emitted by the bulb will not stay constant, and neither will the voltage across the LDR.

Opposite the light bulb, on another stand, the LDR device was kept fixed in place with the aid of an adhesive material (Blu Tack). The LDR was placed normal to the light bulb so that the angle of incidence of the light coming from the source remained constant, and normal, throughout the experimental measurements.

Another observation that can be made from Figure 1 is the interconnection between the LDR device, the voltmeter, the ammeter and the power supply. More specifically, in order for the LDR to function properly, a voltage was applied across the receiver circuit (a 4 V power pack in our case). The voltmeter was connected across the LDR device in order to constantly measure the voltage across the LDR; the variations in this voltage were due to the alterations in the intensity of the incident light (since the resistance value was changing).

The voltmeter would ideally have infinite resistance; in reality its resistance is finite, and thus small deviations of the indicated voltage from the real value were expected.

Another quantity under monitoring was the current flowing into the LDR device. For this purpose an ammeter was placed in series with the LDR. Its role was very important, since the current flow into the LDR device had to remain constant throughout the experimental measurements. Again, an ideal ammeter would have no impedance at all; in reality all ammeters demonstrate a finite, albeit very small, resistance, and thus deviations of the indicated value from the actual one should be expected.

(Missing resistance for potential divider?)

A very interesting (and very widely used) configuration for light intensity measurements, using the same components as the ones available for this practical, can be seen in Figure 1 with a little insight. A closer look at the receiver circuit reveals that a potential divider is formed by the way the above-mentioned components are connected. On a side note, measuring the current coming out of the LDR device would be feasible and relatively easy, since the output current is determined by the value of the LDR resistance. A better way is to measure the output voltage, which happens to be the voltage across the LDR (i.e. the value monitored by the voltmeter); this voltage is proportional to the current flowing through the LDR device. The second resistance required to form the potential divider comes from the finite internal resistance of the ammeter. The value of the output voltage Vout can be calculated by using the standard potential divider formula shown below:

Vout = RLDR / (RLDR + RAMMETER) * Vin – Equation 11

Where Vin is the voltage applied across the receiver circuit, and RLDR and RAMMETER are the resistance of the LDR device and the internal resistance of the ammeter respectively.
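
Equation 11 is easy to check numerically. Below is a minimal sketch with hypothetical component values (the actual resistances were not given in the text):

def divider_vout(r_ldr_ohm, r_ammeter_ohm, v_in):
    # Potential divider: Vout = R_LDR / (R_LDR + R_ammeter) * Vin (Equation 11).
    return r_ldr_ohm / (r_ldr_ohm + r_ammeter_ohm) * v_in

# Assumed values: 2 kΩ LDR, 50 Ω ammeter resistance, 4 V supply.
print(divider_vout(2_000.0, 50.0, 4.0))  # ≈ 3.9 V across the LDR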

Since the aim of these measurements is to investigate the relationship between light intensity and distance, although both the light bulb and the LDR were kept fixed vertically, the stand of the light bulb was able to be translated horizontally. For the purposes of the experiments the translation of the light bulb was made parallel to a ruler placed between the two stands. This configuration was quite optimal, since it allowed the exact distance between light source and receiver to be known throughout the experiments.

In all optical experiments, one of the most fundamental sources of error is background illumination and the interference of other light sources. For this reason the apparatus was surrounded by black paper.

Experimental Procedure

The LDR sensor and the light bulb have to be at the same vertical height during all experimental measurements. One key point to note is that, in this way, the light bulb behaves more like a point source of light, justifying the use of all the mathematical equations above. The LDR sensor has to point towards the light bulb at all times.

Having set up the experimental apparatus and chosen the range of distances between the light bulb and the LDR sensor, a reference measurement of the LDR sensor was made with the light bulb switched off. Depending on the power of the light bulb, a starting distance of 10 cm was deemed sufficient for calibration purposes. Progressively, after performing the calibration, this distance was increased as explained below. Similarly, the rest of the experimental apparatus's components (i.e. receiver device, voltmeter, ammeter, etc.) were also switched off during this very crucial calibration phase of the practical; generally speaking, it is good and common practice, and much preferable, to carry out the calibration and the experimental procedure in conditions of total darkness. This step ensured that the background illumination was measured; this value then had to be deducted from all further measurements. In this way the systematic error of the measurements is reduced and their credibility is increased to a great degree.
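
A minimal sketch of this background correction, treating the meter output as a light reading in arbitrary units (all numbers are invented; nothing here was recorded in the text):

def corrected_reading(raw, background):
    # Deduct the lamp-off (background) reference recorded during calibration.
    return raw - background

background = 0.05  # hypothetical reading with the bulb switched off
print(corrected_reading(2.31, background))  # 2.26 attributable to the bulb alone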

The light bulb was initially switched on by applying a specific voltage across it; subsequently, the exact distance between the light bulb and the LDR was measured using the ruler. The next and most important step at this stage was to measure the potential difference across the LDR device for this specific position of the light bulb. For reference, the ammeter reading was also recorded.

The position of the light bulb stand was then altered along the ruler in constant, known intervals of distance. For each known distance the above measurements had to be repeated. At this stage it is useful to emphasise that the above data can be acquired more than once per known distance r, since averaging the data decreases the error in the experimental measurements obtained. In this way, a comprehensive chart or table can be formed, associating distance values (between the two stands) with output voltage values, as sketched below.
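
The averaging and table-building step might look like the following sketch (all distances and voltages are invented placeholders; the rising voltage with distance assumes the negative-coefficient LDR divider described above):

from statistics import mean

# Hypothetical repeated readings: distance (m) mapped to LDR voltages (V) per trial.
readings = {
    0.10: [0.47, 0.48, 0.49],
    0.20: [1.10, 1.12, 1.11],
    0.40: [2.29, 2.31, 2.33],
}

# Average the repeats at each distance to build the distance-voltage table.
for r, volts in sorted(readings.items()):
    print(f"r = {r:.2f} m, mean V = {mean(volts):.3f}")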

Hamstring Rehabilitation and Injury Prevention


Hamstring injuries can be frustrating. The symptoms are typically persistent and chronic, the healing can be slow, and there is a high rate of exacerbation of the original injury (Petersen J et al. 2005).

The classical hamstring injury is most commonly found in athletes who indulge in sports that involve jumping or explosive sprinting (Garrett W E Jr. 1996), but it also has a disproportionately high prevalence in activities such as water skiing and dancing (Askling C et al. 2002).

A brief overview of the literature on the subject shows that the majority of the epidemiological studies in this area have been done in the high-risk setting of Australian and English professional football teams. Various studies have put the incidence of hamstring strain injuries at 12-16% of all injuries in these groups (Hawkins R D et al. 2001). Part of the reason for this intense scrutiny of football teams is not only the high incidence of the injury, which makes for ease of study, but also the economic implications of the injury.

Some studies (viz. Woods C et al. 2004) record the fact that hamstring injuries occur at a rate of 5-6 injuries per club per season, resulting in an average loss of 15-21 matches per season. In terms of assessing the impact of one hamstring injury, this equates to an average figure of 18 days off playing and about 3.5 matches missed. It should be noted that this is an average figure and individuals may need several months for a complete recovery (Orchard J et al. 2002). The re-injury rate for this group is believed to be in the region of 12-31% (Sherry M A et al. 2004).

The literature is notable for its lack of randomised prospective studies of treatment modalities and therefore the evidence base for treatment is not particularly secure.

If one considers the contribution of the literature to the evidence base on this subject, one is forced to admit that comparison between studies is made considerably difficult by differences in terminology and classification. Despite these difficulties, this essay will take an overview of the subject.

Classification of injuries

To a large extent, the treatment offered will depend on a number of factors, not least of which is the classification of the injury. In broad terms, hamstring injuries can have direct or indirect causation. The direct forms are typically caused by contact sports and comprise contusions and lacerations whereas the indirect variety of injury is a strain which can be either complete or incomplete. This latter group comprises the vast majority of the clinical injuries seen (Clanton T O et al. 1998).

The most extreme form of strain is the muscle rupture which is most commonly seen as an avulsion injury from the ischial tuberosity. Drezner reports that this type of injury is particularly common in water skiers and can either be at the level of the insertion (where it is considered a totally soft tissue injury) or it may detach a sliver of bone from the ischial tuberosity (Drezner J A 2003). Strains are best considered to fall along a spectrum of severity which ranges from a mild muscle cramp to complete rupture, and it includes discrete entities such as partial strain injury and delayed onset muscle soreness (Verrall G M et al. 2001). One has to note that it is, in part, this overlap of terminology which hampers attempts at stratification and comparison of clinical work (Connell D A 2004).

Woods reports that the commonest site of muscle strain is the musculotendinous junction of the biceps femoris (Woods C et al. 2004).

In their exemplary (but now rather old) survey of the treatment options of hamstring injuries, Kujala et al. suggest that hamstring strains can usefully be categorised in terms of severity thus:

Mild strain/contusion (first degree): A tear of a few muscle fibres with minor swelling and discomfort and with no, or only minimal, loss of strength and restriction of movements.

Moderate strain/contusion (second degree): A greater degree of damage to muscle with a clear loss of strength.

Severe strain/contusion (third degree): A tear extending across the whole cross section of the muscle resulting in a total lack of muscle function.

(Kujala U M et al. 1997).

There is considerable debate in the literature relating to the place of the MRI scan in the diagnostic process. Many clinicians appear to be confident in their ability to both diagnose and categorise hamstring injuries on the basis of a careful history and clinical examination. The Woods study, for example, shows that only 5% of cases were referred for any sort of diagnostic imaging (Woods C et al. 2004). The comparative Connell study came to the conclusion that ultrasonography was at least as useful as MRI in terms of diagnosis (though not for pre-operative assessment) and was clearly both easier to obtain and considerably less expensive than the MRI scan (Connell D A 2004).

Before one considers the treatment options, it is worth considering both the mechanism of injury and the various aetiological factors that are relevant to the injury, as these considerations have considerable bearing on the treatment and to a greater extent, the preventative measures that can be invoked.

It appears to be a common factor in papers considering the mechanisms of causation of hamstring injuries that the anatomical deployment of the muscle is a significant factor. It is one of a small group of muscles which functions over two major joints (biarticular muscle) and is therefore influenced by the functional movement at both of these joints. It is a functional flexor at the knee and an extensor of the hip. The problems appear to arise because in the excessive stresses experienced in sport, the movement of flexion of the hip is usually accompanied by flexion of the knee which clearly have opposite effects on the length of the hamstring muscle.

Cinematic studies that have been done specifically within football suggest that the majority of hamstring injuries occur during the latter part of the swing phase of the sprinting stride (viz. Arnason A et al. 1996). It is at this phase of the running cycle that the hamstring muscles are required to act by decelerating knee extension with an eccentric contraction and then promptly act concentrically as a hip joint extensor (Askling C et al. 2002).

Verrall suggests that it is this dramatic change in function that occurs very quickly indeed during sprinting that renders the hamstring muscle particularly vulnerable to injury (Verrall G M et al. 2001).

Consideration of the aetiological factors that are relevant to hamstring injuries is particularly important in formulating a plan to avoid recurrence of the injury.

Bahr, in his recent and well-constructed review of risk factors for sports injuries in general, makes several observations with specific regard to hamstring injuries. He makes the practical observation that the older classification of internal (intrinsic) and external (extrinsic) factors is not nearly so useful in clinical practice as the consideration of the distinction between those factors that are modifiable and those that are non-modifiable (Bahr R et al. 2003).

Bahr reviewed the evidence base for the potential risk factors and found it to be very scanty and “largely based on theoretical assumptions” (Bahr R et al. 2003 pg 385). He lists the non-modifiable factors as older age and being black or Aboriginal in origin (the latter point reflecting the fact that many of the studies have been based on Australian football).

The modifiable factors, which clearly have the greatest import for clinical practice, include an imbalance of strength in the leg muscles with a low H : Q ratio (hamstring to quadriceps ratio) (Clanton T O et al. 1998), hamstring tightness (Witvrouw E et al. 2003), the presence of significant muscle fatigue, (Croisier J L 2004), insufficient time spent during warm-up, (Croisier J L et al. 2002), premature return to sport (Devlin L 2000), and probably the most significant of all, previous injury (Arnason A et al. 2004).

This is not a straightforward additive compilation, however, as the study by Devlin suggests that there appears to be a threshold for each individual risk factor to become relevant, with some (such as a premature return to sport) being far more predictive than others (Devlin L 2000).

There is also some debate in the literature relating to the relevance of the degree of flexibility of the hamstring muscle. One can cite the Witvrouw study of Belgian football players where it was found that those players who had significantly less flexibility in their hamstrings were more likely to get a hamstring injury (Witvrouw E et al. 2003).

If one now considers the treatment options, an overview of the literature suggests that while there is general agreement on the immediate post-injury treatment (rest, ice, compression, and elevation), there is no real consensus on the rehabilitation aspects. To a large extent this reflects the scarcity of good quality data on this issue. The Sherry & Best trial is the only well-constructed comparative treatment trial (Sherry M A et al. 2004), but even this had only 24 athletes randomised to one of its two arms.

In essence it compared the effects of static stretching, isolated progressive hamstring resistance, and icing (STST group) with a regime of progressive agility and trunk stabilisation exercises and icing (PATS group). The study analysis is both long and complex but, in essence, it demonstrated that there was no significant difference between the two groups in terms of the time required to return to sport (healing time). The real significant differences were seen in the re-injury rates with the ratio of re-injury (STST : PATS) at two weeks being 6 : 0, and at 1 year it was 7 : 1.

In the absence of good quality trials one has to turn to studies like those of Clanton et al. where a treatment regime is derived from theoretical healing times and other papers on the subject. (Clanton T O et al. 1998). This makes for very difficult comparisons, as it cites over 40 papers as authority and these range in evidential level from 1B to level IV. (See appendix). In the absence of more authoritative work one can use this as an illustrative example.

Most papers which suggest treatment regimes classify different phases in terms of time elapsed since the injury. This is useful for comparative purposes but it must be understood that these timings will vary with clinical need and the severity of the initial injury. For consistency this discussion will use the regime outlined by Clanton.

Phase I (acute): 1–7 days

As has already been observed, there appears to be a general consensus that the initial treatment should include rest, ice, compression, and elevation, with the intention of controlling the initial intramuscular haemorrhage, minimising the subsequent inflammatory reaction and thereby reducing pain levels (Worrell T W 2004).

NSAIAs appear to be almost universally recommended with short term regimes (3 – 7 days) starting as soon as possible after the initial injury appearing to be the most commonly advised. (Drezner J A 2003). This is interesting as a theoretically optimal regime might suggest that there is merit in delaying the use of NSAIAs for about 48 hrs because of their inhibitory action on the chemotactic mechanisms of the inflammatory cells which are ultimately responsible for tissue repair and re-modelling. (Clanton T O et al. 1998).

There does appear to be a general consensus that early mobilisation is beneficial to reduce the formation of adhesions between muscle fibres or other tissues, with Worrell suggesting that active knee flexion and extension exercises can be of assistance in this respect and should be used in conjunction with ice to minimise further tissue reaction (Worrell T W 2004).

Phase II (sub-acute): day 3 to >3 weeks

Clanton times the beginning of this phase with the reduction in the clinical signs of inflammation. Goals of this stage are to prevent muscle atrophy and optimise the healing processes. This can be achieved by a graduated programme of concentric strength exercises but should not be started until the patient can manage a full range of pain free movement (Drezner J A 2003).

Clanton, Drezner and Worrell all suggest that “multiple joint angle, sub-maximal isometric contractions” are appropriate as long as they are pain free. If significant pain is encountered then the intensity should be decreased. Clanton and Drezner add that exercises designed to maintain cardiovascular fitness should be encouraged at this time. They suggest “stationary bike riding, swimming, or other controlled resistance activities.”

Phase III (remodelling); 1–6 weeks

After the inflammatory phase, the healing muscle undergoes a phase of scar retraction and re-modelling. This leads to the clinically apparent situation of hamstring shortening or loss of flexibility. (Garrett W E Jr. et al. 1989). To minimise this eventuality, Clanton cites the Malliaropoulos study which was a follow up study with an entry cohort of 80 athletes who had sustained hamstring injuries.

It was neither randomised nor controlled, and the treatment regime was left to the discretion of the clinician in charge. It compared regimes which involved either a lot of hamstring stretching (four sessions daily) or fewer sessions (once daily). In essence, the results of the study showed that the athletes who performed the most intensive stretching programme regained range of motion faster and also had a shorter period of rehabilitation. Both these differences were found to be significant (Malliaropoulos N et al. 2004).

Verrall suggests that concentric strengthening followed by eccentric strengthening should begin in this phase, the rationale for this timing being that eccentric contractions tend to exert greater forces on the healing muscle and should therefore be delayed to avoid the danger of a rehabilitation-induced re-injury (Verrall G M et al. 2001). We note that Verrall cites evidence for this from his prospective (un-randomised) trial.

Phase IV (functional): 2 weeks to 6 months

This phase is aimed at a safe return to non-competitive sport. It is ideally tailored to the individual athlete and the individual sport. No firm rules can therefore be applied. Worrell advocates graduated pain-free running based activities in this phase and suggests that “Pain-free participation in sports specific activities is the best indicator of readiness to return to play.” (Worrell T W 2004)

Drezner adds the comment that return to competitive play before this has been achieved is associated with a high risk of injury recurrence. (Drezner J A 2003)

Phase V (return to competition): 3 weeks to 6 months

This is the area where there is perhaps the least agreement in the literature. All authorities are agreed that the prime goal is to try to avoid re-injury. Worrell advocates that the emphasis should be on the maintenance of stretching and strengthening exercises (Worrell T W 2004).

For the sake of completeness one must consider the place of surgery in hamstring injuries. It must be immediately noted that surgery is only rarely considered as an option, and then only for very specific indications. Indications which the clinician should be alert to are large intramuscular bleeds which lead to intramuscular haematoma formation as these can give rise to excessive intramuscular fibrosis and occasionally myositis ossificans (Croisier J L 2004).

The only other situations where surgery is contemplated are a complete tendon rupture or the detachment of a bony fragment from either insertion or origin. As Clanton points out, this type of injury appears to be very rare in football and is almost exclusively seen in association with water skiing injuries (Clanton T O et al. 1998).

It is part of the role of the clinician to give advice on the preventative strategies that are available, particularly in the light of studies which suggest that the re-injury rate is substantial (Askling C et al. 2003).

Unfortunately, this area has an even less substantial evidence base than the treatment area. For this reason we will present evidence from the two prospective studies done in this area, by Hartig and Askling.

Hartig et al. considered the role of flexibility in the prophylaxis of further injury with a non-randomised comparative trial and demonstrated that increasing hamstring flexibility in a cohort of military recruits halved the number of hamstring injuries that were reported over the following 6 months (Hartig D E et al. 1999).

The Askling study was a randomised controlled trial of 30 football players. The intervention group received hamstring strengthening exercises in the ten week pre-season training period. This intervention reduced the number of hamstring injuries by 60% during the following season (Askling C et al. 2003). Although this result achieved statistical significance, it should be noted that it involved a very small entry cohort.

Conclusions.

Examination of the literature has proved to be a disappointing exercise. It is easy to find papers which give advice at evidence level IV but there are disappointingly few good quality studies in this area which provide a substantive evidence base. Those that have been found have been presented here but it is accepted that a substantial proportion of what has been included in this essay is little more than advice based on theory and clinical experience.

References

Arnason A, Gudmundsson A, Dahl H A, et al. (1996) Soccer injuries in Iceland. Scand J Med Sci Sports 1996; 6: 40 – 5.

Arnason A, Sigurdsson S B, Gudmundson A, et al. (2004) Risk factors for injuries in football. Am J Sports Med 2004; 32 (1 suppl) :S5 – 16.

Askling C, Lund H, Saartok T, et al. (2002) Self-reported hamstring injuries in student dancers. Scand J Med Sci Sports 2002; 12: 230 – 5.

Askling C, Karlsson J, Thorstensson A. (2003) Hamstring injury occurrence in elite soccer players after preseason strength training with eccentric overload. Scand J Med Sci Sports 2003; 13: 244 – 50.

Bahr R, Holme I. (2003) Risk factors for sports injuries: a methodological approach. Br J Sports Med 2003; 37: 384 – 92.

Clanton T O, Coupe K J. (1998) Hamstring strains in athletes: diagnosis and treatment. J Am Acad Orthop Surg 1998; 6: 237 – 48.

Connell D A , Schneider-Kolsky ME, Hoving J L. et al (2004) Longitudinal study comparing sonographic and MRI assessments of acute and healing hamstring injuries. AJR Am J Roentgenol 2004; 183: 975 – 84

Croisier J-L, Forthomme B, Namurois M-H, et al. (2002) Hamstring muscle strain recurrence and strength performance disorders. Am J Sports Med 2002; 30: 199 – 203

Croisier J-L. (2004) Factors associated with recurrent hamstring injuries. Sports Med 2004; 34: 681 – 95.

Devlin L . (2000) Recurrent posterior thigh symptoms detrimental to performance in rugby union: predisposing factors. Sports Med 2000; 29: 273 – 87.

Drezner J A. (2003) Practical management: hamstring muscle injuries. Clin J Sport Med 2003; 13: 48 – 52

Garrett W E Jr, Rich F R, Nikolaou P K, et al. (1989) Computer tomography of hamstring muscle strains. Med Sci Sports Exerc 1989; 21: 506 – 14.

Garrett W E Jr. (1996) Muscle strain injuries. Am J Sports Med 1996; 24 (6 suppl) : S2–8.

Hartig D E, Henderson J M. (1999) Increasing hamstring flexibility decreases lower extremity overuse in military basic trainees. Am J Sports Med 1999; 27: 173 – 6

Hawkins R D, Hulse M A, Wilkinson C, et al. (2001) The association football medical research programme: an audit of injuries in professional football. Br J Sports Med 2001; 35: 43 – 7

Kujala U M, Orava S, Jarvinen M. (1997) Hamstring injuries: current trends in treatment and prevention. Sports Med 1997; 23: 397 – 404

Malliaropoulos N, Papalexandris S, Papalada A, et al. (2004) The role of stretching in rehabilitation of hamstring injuries: 80 athletes follow-up. Med Sci Sports Exerc 2004; 36: 756 – 9.

Orchard J, Seward H. (2002) Epidemiology of injuries in the Australian Football League, season 1997 – 2000. Br J Sports Med 2002; 36: 39 – 44

Petersen J, Holmich P (2005) Evidence based prevention of hamstring injuries in sport Br. J. Sports Med. 2005; 39: 319 – 323

Sherry M A, Best T M. (2004) A comparison of 2 rehabilitation programs in the treatment of acute hamstring strains. J Orthop Sports Phys Ther 2004; 34: 116 – 25

Verrall G M, Slavotinek J P, Barnes P G, et al. (2001) Clinical risk factors for hamstring muscle strain injury: a prospective study with correlation of injury by magnetic resonance imaging. Br J Sports Med 2001; 35: 435 – 9

Witvrouw E, Danneels L, Asselman P, et al. (2003) Muscle flexibility as a risk factor for developing muscle injuries in male professional soccer players. A prospective study. Am J Sports Med 2003; 31: 41 – 6.

Woods C, Hawkins R D, Maltby S, et al. (2004) The football association medical research programme: an audit of injuries in professional football: analysis of hamstring injuries. Br J Sports Med 2004; 38: 36 – 41.

Worrell T W. (2004) Factors associated with hamstring injuries: an approach to treatment and preventative measures. Sports Med 2004; 17: 338 – 45.

Is Machiavelli a Teacher of Evil?


This essay will consider whether or not Machiavelli was a teacher of evil, with specific reference to his text ‘The Prince’. It shall first be shown what it was that Machiavelli taught and how this can only be justified by consequentialism. It shall then be discussed whether consequentialism is a viable ethical theory, in order that it can justify Machiavelli’s teaching. Arguing that this is not the case, it will be concluded that Machiavelli is a teacher of evil.

To begin, it shall be shown what Machiavelli taught or suggested be adopted in order for a ruler to maintain power. To understand this, it is necessary to understand the political landscape of the period.

The Prince was published posthumously in 1532, and was intended as a guidebook for rulers of principalities. Machiavelli was born in Italy and, during that period, there were many wars between the various states which constituted Italy. These states were either republics (governed by an elected body) or principalities (governed by a monarch or single ruler). The Prince was written and dedicated to Lorenzo de Medici, who was in charge of Florence which, though a republic, was autocratic, like a principality. Machiavelli’s work aimed to give Lorenzo de Medici advice on ruling as an autocratic prince (Nederman, 2012).

The ultimate objective at which Machiavelli aims in The Prince is for a prince to remain in power over his subjects. Critics who claim that Machiavelli is evil do not necessarily hold this view because of this ultimate aim, but because of the way in which Machiavelli advises achieving it. This is because, to this ultimate end, Machiavelli holds that no moral or ethical expense need be spared. This theme runs constant through the work. For example, in securing rule over the subjects of a newly acquired principality, which was previously ruled by another prince, Machiavelli writes:

“… to hold them securely enough is to have destroyed the family of the prince who was ruling them.” (Machiavelli, 1532: 7).

That is, in order to govern a new principality, it is necessary that the family of the previous prince be “destroyed”. Further, the expense of morality is not limited to physical acts, such as the murder advised, but extends to deception and manipulation. An example of this is seen when Machiavelli claims:

“Therefore it is unnecessary for a prince to have all the good qualities I have enumerated, but it is very necessary to appear to have them. And I shall dare to say this also, that to have them and always to observe them is injurious, and that to appear to have them is useful.” (Machiavelli, 1532: 81).

Here, Machiavelli is claiming that virtues are necessary to a ruler only insomuch as the ruler appears to have them. However, to act only by the virtues will ultimately be detrimental to the ruler’s position, as he may often have to act against the virtues, to quell a rebellion for example. A prince must be able to appear just, so that he is trusted, but actually not be so, in order that he may maintain his dominance.

In all pieces of advice, Machiavelli claims that it is better to act in the way he advises, for to do otherwise would lead to worse consequences: the end of the rule. The defence which is to be made for Machiavelli, then, must come from a consequentialist viewpoint.

Consequentialist theory argues that the morality of an action is dependent upon its consequences. If the act or actions create consequences that, ultimately, are better (however that may be measured) than otherwise, the action is good. However, if a different act could, in that situation, have produced better consequences, then the action taken would be immoral.

The classic position of consequentialism is utilitarianism. Bentham, who first argued for it, claimed that two principles govern mankind – pleasure and pain – and that it is the drive to achieve the former and avoid the latter that determines how we act (Bentham, 1789: 14). This is done either on an individual basis or a collective basis, depending on the situation. In the first of these cases, the good action is the one which gives the individual the most pleasure or the least pain. In the second, the good action is the one which gives the collective group the most pleasure or the least pain. The collective group consists of individuals, and therefore the good action will produce the most pleasure if it does so for the greatest number of people (Bentham, 1789: 15). Therefore, utilitarianism claims that an act is good iff its consequences produce the greatest amount of happiness (or pleasure) for the greatest number of people, or avoid the greatest amount of unhappiness (or pain) for the greatest number of people.

This, now outlined, can be used to defend Machiavelli’s advice. If the ultimate goal is achieved, the consequence of the prince remaining in power must cause more happiness for more of his subjects than would otherwise be the case if he lost power. Secondly, the pain and suffering caused by the prince on the subjects whom he must murder/deceive/steal from must be less than the suffering which would be caused should he lose power. If these two criteria can be satisfied, then consequentialism may justify Machiavelli.

Further, it is practically possible that such a set of circumstances could arise; it is conceivable that the suffering would be less should the prince remain in power. Italy, as stated, was at that time in turmoil, and many wars were being fought. A prince remaining in power would also secure internal peace for a principality and its subjects. A prince who lost power would leave the land open to attack, and there would be greater suffering for the majority of the populace. On the subject, Machiavelli writes:

“As there cannot be good laws where the state is not well armed, it follows that where they are well armed they have good laws.” (Machiavelli, 1532: 55)

This highlights the turmoil of the world at that time, and the importance of power, both military and lawful, for peace. Machiavelli, in searching for the ultimate end for the prince retaining his power, would also secure internal peace and defence of the principality. This would therefore mean that there would be less destruction and suffering for the people.

Defended by consequentialism, the claim that Machiavelli is evil becomes an argument against this moral theory. The criticisms against consequentialism are manifold. A first major concern against consequentialism is that it justifies actions which seem to be intuitively wrong, such as murder or torture, on not just an individual basis, but on a mass scale. Take the following example: in a war situation, the only way to save a million and a half soldiers is to kill a million civilians. Consequentialism justifies killing the million civilians as the suffering will be less than if a million and a half soldiers were to die. If consequentialism must be used in order to justify Machiavelli’s teachings, it must therefore be admitted that this act of mass murder, in the hypothetical situation, would also be justified.

A second major concern is that it uses people as means, rather than ends, and this seems to be something which is intuitively incorrect, as evidenced in the trolley problem. The trolley problem is as follows: a train, out of control, is heading towards five workers on the track. The driver has the opportunity to change to another track, on which there is a single worker. Thomson argues it would be “morally permissible” to change track and kill the one (Thomson, 1985: 1395). However, the consequentialist would here state that “morality requires you” to change track (Thomson, 1985: 1395), as there is less suffering in one dying than in five dying. The difference in these two stances is to be noted.

Thomson then provides another situation: the transplant problem. A surgeon is able to transplant any body part to another without failure. In the hospital the surgeon works at, five people are in need of a single organ, without which they will die. Another person, visiting for a check-up, is found to be a complete match for all the transplants needed. Thomson asks whether it would be permissible for the surgeon to kill the one and distribute their organs for those who would die (Thomson, 1985: 1395-1396). Though she claims that it would not be morally permissible to do so, those who claimed that changing tracks in the trolley problem would be a moral requirement – the consequentialists – would also have to claim that murdering the one to save five would also be a moral requirement, as the most positive outcome would be given to the most people.

Herein lies the major concern for a consequentialist, and therefore Machiavelli’s defence: that consequentialism justifies using people as means to an end, and not an end within themselves. A criticism of this is famously argued for by Kant, who claims that humans are rational beings, and we do not state that they are “things”, but instead call them “persons” (Kant, 1785: 46). Only things can permissibly be used only as a means, and not persons, who are in themselves an end (Kant, 1785: 46). To use a person merely as a means rather than an end is to treat them as something other than a rational agent which, Kant claims, is immoral.

This now must be applied to Machiavelli. In advising the murder and deception of others, he is advocating treating people as merely a means, by using them in order to obtain the ultimate end of retaining power. Though this ultimate end may bring about greater peace, and therefore pleasure for a greater amount of people, it could be argued that the peace obtained does not outweigh the immoral actions required in creating this peace.

Further, it must also be discussed whether Machiavelli’s teaching is in pursuit of a prince retaining power in order to bring about peace, or in pursuit of retaining power simply so that the prince may retain power. The former option may be justifiable, if consequentialism is accepted. However, this may not be the case for the latter, even if peace is obtained.

Machiavelli’s motives will never be truly known. Such a problem demonstrates further criticisms of consequentialism, and therefore of Machiavelli himself. If he was advising the achievement of power for the sake of achieving power, he would not be able to justify the means to this end without the end providing a consequentialist justification – if, ultimately, the prince retains power but there is not a larger amount of pleasure than would otherwise be the case.

To pursue power in order to promote peace is perhaps justifiable. However, as is a major concern with the normative approach of consequentialism, the unpredictability of consequences can lead to unforeseen ends. The hypothetical prince may take Machiavelli’s advice, follow it to the letter, and produce one of three outcomes:

Power is obtained and peace is obtained.
Power is obtained but peace is not obtained.
Neither power nor peace is obtained.

Only in the first of these outcomes can there be any consequentialist justification. However, this then means that there are two possible outcomes in which there cannot be a consequentialist justification, and it is impossible to know, truly, which outcome will be obtained. This is the criticism of both Machiavelli and consequentialism: that the risk involved in acting is too great, with such a chance of failure and therefore unjustifiable actions, when it is impossible to truly know the outcomes of actions. The nature of the risk is what makes this unjustifiable, in that the risk is against human life, wellbeing, and safety. Machiavelli condones using people as merely a means to an end without the guarantee of a positive end by a consequentialist justification.

In conclusion, it has been briefly demonstrated what Machiavelli put forward as his teachings. It was further shown how the only justification for Machiavelli’s teachings is a consequentialist approach. However, criticisms put against Machiavelli and consequentialism, such as the justification of mass atrocities, using people as means to ends, and the unpredictability of the pragmatic implementation, show it to fail as an acceptable justification of his teachings. Therefore, it is concluded that Machiavelli is a teacher of evil.

Reference List

Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Accessed online at: http://socserv.mcmaster.ca/econ/ugcm/3ll3/bentham/morals.pdf. Last accessed on 26/09/2015.

Kant, I. (1785). Groundwork for the Metaphysics of Morals. Edited and Translated by Wood, A. (2002). New York: Vail-Ballou Press.

Machiavelli, N. (1532). The Prince. Translated by Marriott, W. K. (1908). London: David Campbell Publishers.

Nederman, C. (2012). Niccolò Machiavelli. Accessed online at: http://plato.stanford.edu/entries/machiavelli/. Last accessed on 02/10/2015.

Thomson, J. J. (1985). The Trolley Problem. The Yale Law Journal. Vol. 94, No. 6, pp. 1395-1415.

Analysis of Hobbes’ Theory that “People Need to be Governed”


Examine Thomas Hobbes’ theory that people need to be governed and the debate regarding the original nature of the human species…

The debate surrounding our original state of nature, or species-being, has been hotly contested by scholars for centuries and remains a pivotal line of enquiry in contemporary pedagogic circles. In societies across the globe we observe entire populations governed by (religious) laws and practices designed to manage, control and otherwise police the boundaries of individualism whilst accentuating solidarity and protecting the collective norm (Stiglitz 2003). In this essay, we explore the various conceptions that have sought to trace and detail the genealogy of human beings to their primordial or so-called primitive condition, with particular emphasis on exploring Hobbes’ (2008) proposition that the disposition of human nature is chaos and thus, as humans, we are compelled to forgo our instinctual nature and find sanctuary within the realms of social collectivism and central governance. In this vein, we confront the age-old nature versus nurture conundrum: are we social and moral animals by design, altruistic in nature, or does civilisation transpire from an egotistical obligation to co-operate in order to thrive?

As ever-increasing demands are placed on social-scientific research to maintain pace with an ever-changing world, it is commonplace for scholars to forget the (historical) dictums of our primal beginnings; such investigations are often marginalised – afforded little time, finance and credence – in a world seeking solutions to contemporary problems (Benton and Craib 2010). Yet, to paraphrase Marx (1991), the ghosts of the past weigh heavy on the minds of the living; understanding our roots may become the greatest social discovery and contribution to forging our future as human beings. Thus, social science, by definition and direction, is arguably obsessed with the social constructs that humans generate, frequently dismissing (perhaps through arrogance) the undeniable fact that we remain animals, imbued with the same instinctual drives and impulses as other species. Indeed, one need only observe the effect of social neglect in the case of feral children: unfettered by societal constraints, we revert to barely recognisable beasts, uncivilised and unconcerned by social pretensions, decorum, normative expectations and values (Candland 1996). For Hobbes (2008), humankind in its original state of being is an evil scourge upon the earth; a ruthless and egotistical creature driven by self-gain and absolute dominance – a survival-of-the-fittest nightmare (Trivers 1985). Thus, paralleling the works of Plato (2014), he asserts that the individual, possessing the principle of reason, must sacrifice free will to preserve their ontological wellbeing, acquired resources, property and way of life, or what he calls a ‘commodious living’ (78). As Berger and Luckmann (1991) argue, we willingly accept social captivity as it offers a protective blanket from otherwise harsh conditions; a remission from the barbarism and bloodshed that transpired previously. This led Hobbes (2008: 44) to assert that ‘people need governed’ under a social contract or mutual agreement of natural liberty; the promise not to pillage, rape or slaughter was reciprocated, and later crystallised and enforced by the state or monarch. Indeed, whilst his belief in the sovereign’s traditional (rather than divine) right to rule was unwavering, he was certain that a despotic kingdom would not ensue, as reason would triumph over narcissism.

In response, Socrates (cited in Johnson 2011) hypothesised that justice was an inherent attribute whereby humans sought peace as a process of self-fulfilment – of regulating the soul – not because of fear of retribution; to paraphrase: ‘the just man is a happy man’ (102). The state would therefore stand as a moral citadel or vanguard against the profane. Similarly, Locke (2014) rejects the nightmarish depiction offered by Hobbes (2008), asserting a romanticised state of nature – permeated with God’s compassion – whereby humans seek liberty above all; not individual thrill-seekers, but beings banded by familial bonds and communes – a pre-political conjugal society – possessing parochial values, norms and voluntary arrangements. However, he also appreciated that, without the presence of a central regulatory organisation, conflict could easily emerge and continue unabated. Hence, humanity ascends into a civil contract, the birth of the political, as a means of protecting the status quo of tranquillity, prosperity and ownership. Similarly, Rousseau (2015) also proposes a quixotic rendition of humanity’s social origins, considering such times simplistic or mechanical (Durkheim 1972) inasmuch as populations were sparse, resources abundant and needs basic, implying that individuals were altruistic by nature and morally pure. Yet the ascension of the state, particularly the mechanisms of privatisation, polluted and contorted humankind’s natural state into something wicked that not only coaxed but promoted tendencies of greed, selfishness and egocentrism. In this account, we find strong parallels with Marx (1991), specifically his critique of capitalism, which is conceptualised as a sadistic mechanism tearing humanity from its species-being – the world of idiosyncratic flair, enchantment and cultural wonder – and placing it into a rat-race of alienation (from one’s fellow being), exploited labour and inequality. As Rousseau (2015) ably contends: ‘man is born free, and everywhere he is in chains’ (78). Thus, government and the liberalism it allegedly promotes is a farce, seeking to keep the architectural means to create the social world within the possession of a minority – this he calls the current naturalised social contract. He calls for a new social order premised on consensus, reason and compassion; we must reconnect with ourselves, re-engage with our neighbours and discover who we are as a species.

The supposition of our philosophical ancestors is that we require governance as a process of realisation: we are social animals that demand and reciprocate encounters with others; alongside the impulse for sustenance and shelter is the yearning for social contact – indeed, love and belonging are included in Maslow’s (2014) hierarchy of needs. Yet within many philosophical texts religion is deployed as a legitimate form of authority; since antiquity, monarchs, pharaohs, dynasties and early tribal formations have claimed power through divine right or approval. In fact, conviction in a celestial realm has persisted for epochs – carved in millennia-old cave paintings around the globe (Stiglitz 2003) – and perhaps emerged from an enchanted, speculative and awe-inspired outlook on the world which our ancestors occupied; religion complemented the life-cycle, delineating the sacred from the profane (Foucault 1975). As Schluchter (1989) argues, later missionaries would propagate their dogma; a prime example is the upsurge, dissemination and (even today) domination of Christianity as it overran its pagan predecessors, witchdoctors and mystics. Thus, religion has been credited with generating social mores and collectivism and with ushering in the rise of civilisations. Indeed, Elias (2000) details the social evolution of humanity as the animalistic fades to the backstage – with the gradual monopolisation of violence and (political) power – and the presented civil self takes precedence. Initially this was necessary for survival as people became more interdependent; later it was significantly influenced by the royal courts, which became a celebrity-like beacon of perfect decorum and taste.

By the 19th century, most of Europe was regarded as civilised whilst other, developing parts of the world were considered savage lands; the violence, exploitation and subsequent domination of such regions as India and Africa by western societies is well documented (Buckinx and Trejo-Mathys 2015). As Elias puts it: ‘people were forced to live in peace’ (2000: 99). This was also accompanied by the advent of the Enlightenment, whereby the rule of logic, rationalisation and pragmatism disrobed and effectively dismantled the prevailing supremacy of religion; though religion remains a powerful force in certain cultures and is frequently accompanied by its own medieval brutality. As Anderson (2008) alludes, in Africa and the Middle East, where Christianity, Judaism and Islam prevail and to varying degrees dominate life, purported barbaric acts like (female) genital mutilation, segregation and (domestic) violence – which affect mainly women – alongside public violence and executions, are commonplace and sanctioned.

Thus, secularisation and the rise of empiricism unshackled humankind from its beastly beginnings and rehomed it within the embracing idioms of consensus, free will and reciprocal courteousness – humans had undergone a transformation, or courtisation, whereby mannerisms, hygiene and self-restraint became governing tenets; the barbarian was adorned (concealed) with socially acceptable masks, equipped with approved social scripts and the rules of the game – Goffman’s (1990) social actor and his/her presented selves were born. In this conceptualisation, self-governance or policing is a prerequisite for progress and forms the basis for society; enhanced with consciousness, we are capable of resisting our impulsive drives – Freud’s (2010) Eros and Thanatos are forsaken for the greater good – and creating a utilitarian civilisation. Today, in late-capitalist societies, we live in relative prosperity and peace; the elected government and its respective agencies provide sustenance, infrastructure, healthcare, protection and political democracy; this template of humanity is – like our religious proselytisers – distributed globally, perpetuated by the mass media, globalisation and free markets (Stiglitz 2003).

For Nietzsche (2013), this contemporary worldview is tantamount to emptiness: humanity had escaped its animalistic state of being, finding virtue in religion and the will-to-power within to overcome and ascend, but is now found wanting amid the demise of faith and the nihilism that has followed (his famous ‘God is dead’ (13) pronouncement). Indeed, he is dismissive of scientific, philosophical and religious idioms, particularly their totalitarian tendencies, which (for him) inhibit, enslave or otherwise surrender life-affirming behaviours; similarities may be drawn with Marx and Engels’ (2008) critique of religion as the ‘sigh of the oppressed creature’ (45); religion (like governments or social contracts) demands that individuals relinquish or capitulate part of themselves, that they genuflect before the laws, tenets and values that rule. Such things seek to (re)capture or incarcerate our species-being within a straitjacket. Therefore, humanity must re-engage its instinctual resolve – which Nietzsche (2014) regarded as stronger than our urge for sex or survival – and become supermen (Ubermensch) untrammelled by dogma, to find wonder in the fluidity and unpredictability of nature and good conscience by re-evaluating our values, expectations and shortcomings as a species. Namely, a stateless civilisation, unhindered by permanency, premised on the continual refinement of self. Yet, whilst Nietzsche (2014) highlights the stifling effects of dogma, it seems unrealistic to suggest humans are capable of living in constant flux – even a war-torn nation offers consistency (Stiglitz 2003) – insofar as we instinctually seek to structure the surrounding environment in a comprehensible manner; we assign labels, judgements and behavioural codes as we produce order – predictability is the precondition for life and offers humans ontological security and wellbeing (Berger and Luckmann 1991). However, given the asymmetrical nature of society, some possess the architectural means to govern others – reformulated as a form of symbolic violence or barbarism; for example, the credence given to hegemonic masculinities and the subsequent denigration and objectification of women, or the subjugation of nations to western ideals (Mulvey and Rogers 2015). Moreover, the free markets offered by capitalism seek to segregate, exploit and captivate the masses into a consumerist world of shiny prizes (Marcuse 2002), coaxing our selfish and cut-throat tendencies, whilst so-called liberalist governments attempt to impose their civility globally through violence, bullying and manipulation – a wolf in sheep’s clothing (Kinker 2014). So, even under the rule of government and the presence of civilisations, our so-called animalistic (violent) heritage pervades, like a ghostly presence haunting the present.

Hobbes’ (2008) reasoning for why individuals need governed – to cage our inner beast – therefore seems defective. As Walsh and Teo (2014) allude, a major fault with many of the propositions outlined above is the emphasis placed on linearity – government is seen as a progressive necessity – rather than an appreciation that, as social creatures, we are capable of creating communities with their own normative flows, ebbs and fluxes and (more importantly) of governing ourselves, both as a matter of necessity or self-preservation and as a means of self-fulfilment or belonging; contemporary modes of practice have become so integrated and reified that finding a parallel alternative or a “way back” seems implausible. That said, as Browning (2011) argues, in an increasingly interdependent and global world, the requirement for centralised states seems unavoidable to handle the sheer mass of human activity and to maintain a level of equilibrium; an inevitable course of human progress.

This essay has been both illuminating and problematic; the question of whether humans are capable of cohabiting without a state or intervening supra-organisation remains open. In fact, any answer is premised on how one defines the original state of nature: are we barbaric creatures who engage in a social contract for personal gain, or are we instinctually social and empathic animals whose predisposition is not only to safeguard our interests but to generate genuine communal bonds and interconnections with others? The latter affords more room for manoeuvre towards alternative (flexible) social figurations without government, where humanity can bask in the wonder of difference, variety and unpredictability, whilst the former finds sanctuary only in the incarceration of humanity within defined idioms and laws imposed by a centre of authority and power. It is tempting to conclude that, despite Hobbes’ depiction of government as the epitome of civility, it appears instead (in this era of modernity) to be the primary agent of (symbolic) violence and struggle, whether masquerading as a religious, communist or neo-liberal state. Thus, one is reluctant to accept Hobbes’ assertion that people should be governed by a reified or separate entity. Instead, with a level of Nietzschean sentiment, perhaps people should be permitted and empowered to re-evaluate and govern themselves.

Word Count: 2,195

Bibliography

Anderson, J. 2008. Religion, State and Politics. Cambridge University Press.

Benton, T. and Craib, I. 2010. Philosophy of Social Science: The Philosophical Foundations of Social Thought (Traditions in Social Theory). 2nd edition. Palgrave Macmillan.

Berger, P. and Luckmann, T. 1991. The Social Construction of Reality: A Treatise in the Sociology of Knowledge (Penguin Social Sciences). Penguin Press.

Browning, G. 2011. Global Theory from Kant to Hardt and Negri (International Political Theory). Palgrave Macmillan.

Buckinx, B. and Trejo-Mathys, J. 2015. Domination and Global Political Justice: Conceptual, Historical and Institutional Perspectives (Routledge Studies in Contemporary Philosophy). Routledge.

Candland, D. 1996. Feral Children and Clever Animals: Reflections on Human Nature. Oxford University Press.

Durkheim, E. 1972. Emile Durkheim: Selected Writings, ed. and trans. Giddens, A. Cambridge: Cambridge University Press.

Elias, N. 2000. The Civilizing Process. 2nd edition. Wiley-Blackwell.

Foucault, M. 1975. Discipline & Punish: The Birth of the Prison. Knopf Doubleday Publishing Group.

Freud, S. 2010. Civilization and Its Discontents. Martino Fine Books.

Goffman, E. 1990. Stigma: Notes on the Management of Spoiled Identity. Penguin Press.

Hobbes, T. 2008. Leviathan (Oxford World’s Classics). Oxford Paperbacks.

Johnson, P. 2011. Socrates: A Man for Our Times. Penguin Publishers.

Kinker, S. 2014. The Critical Window: The State and Fate of Humanity. Oxford University Press.

Locke, J. 2014. Two Treatises of Government. CreateSpace Independent Publishing Platform.

Marcuse, H. 2002. One Dimensional Man. Routledge.

Marx, K. and Engels, F. 2008. On Religion. Penguin Press.

Marx, K. 1991. Capital, ed. Mandel, E. Volume 3. London: Penguin Books (Classics).

Maslow, A. 2014. Toward a Psychology of Being. Sublime Books.

Mulvey, L. and Rogers, A. 2015. Feminisms: Diversity, Difference and Multiplicity in Contemporary Film Cultures (Key Debates – Mutations and Appropriations in European Film Studies). Amsterdam University Press.

Nietzsche, F. 2014. Beyond Good and Evil. Penguin Press.

Nietzsche, F. 2013. On the Genealogy of Morals. Penguin Press.

Plato. 2014. The Republic. Reprint. CreateSpace Independent Publishing Platform.

Rousseau, J. 2015. The Social Contract. CreateSpace Independent Publishing Platform.

Schluchter, W. 1989. Rationalism, Religion, and Domination: A Weberian Perspective. University of California Press.

Stiglitz, J. 2003. Globalization and Its Discontents. Penguin Press.

Trivers, R. 1985. Social Evolution. Benjamin-Cummings Publishing Co.

Walsh, R. and Teo, T. 2014. A Critical History and Philosophy of Psychology: Diversity of Context, Thought, and Practice. Cambridge University Press.

Does Virtue Ethics Offer an Account of Being Right?

This essay shall discuss whether or not virtue ethics offers a convincing account of what it is to be morally right. It shall focus on Hursthouse’s version of virtue ethics, which shall be outlined first, along with the strengths of this argument: that it allows for different actions in different situations, and does not justify mass atrocities as a result. Four criticisms shall then be put against virtue ethics: that it is not action guiding; that it does not explain cultural difference; that it offers no guidance when virtues conflict; and that it rests on a circularity or, at best, is superfluous. With only one of these criticisms being answerable, it shall be concluded that virtue ethics does not offer a convincing account of what it is to be right.

Hursthouse’s version of virtue ethics updates Aristotle’s original account. She claims that an action is right “iff it is what a virtuous agent would characteristically do in the circumstances” (Hursthouse, 1996: 646). Virtue ethics, then, makes an essential reference to the virtuous person, whom Hursthouse defines as a person who “acts virtuously … one who has and exercises the virtues” (Hursthouse, 1996: 647). It is a trivial truth, according to all moral theories, that a virtuous person does what is right. However, virtue ethics differs from other accounts in claiming that an action is right in virtue of its being what the virtuous person would do.

What counts as a virtue, then, must be established. Here, Hursthouse appeals to Aristotle, arguing that a virtue is “a character trait a human being needs for eudaimonia, to flourish or live well” (Hursthouse, 1996: 647). This links to Aristotle’s work The Nicomachean Ethics, in which he claims eudaimonia is a flourishing, happy life, which he views as the ultimate end and goal of a person’s life (Aristotle, 340bc). A virtue is any trait which adds to this flourishing life – arguably the “positive traits”, such as kindness or charity.

Here, virtue ethics demonstrates a shift away from the deontic concepts of deontology and consequentialism; it does not claim that an action “ought” or “ought not” to be done. Instead, actions are justified in terms of aretaic concepts: an action is described as “kind” or “callous”, for example.

It can now be summarised what makes an action right according to virtue ethics. An action will be right iff it is what a virtuous agent would characteristically do in the circumstances. The virtuous agent would characteristically do the action in the circumstances iff the trait which leads to the action is a virtue. Finally, the trait which leads to the action will be a virtue iff it would increase the eudaimonia of the agent.
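This chain can be displayed schematically (a formalisation added here for clarity; the notation is not Hursthouse’s). Writing \(t_a\) for the trait that leads to action \(a\):

\[
\text{Right}(a) \;\iff\; \text{VirtuousWouldDo}(a) \;\iff\; \text{Virtue}(t_a) \;\iff\; \text{AddsToEudaimonia}(t_a).
\]

The circularity objection discussed later targets the final link: if what makes a trait add to eudaimonia is itself the trait’s moral rightness, the chain loops back to its starting point.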

There are positive things to be said of Hursthouse’s argument for virtue ethics. Firstly, by stating that an action is right “iff it is what a virtuous agent would characteristically do in the circumstances”, it allows for variation in action depending on the situation, which is more in line with our pragmatic moral practice. This escapes the rigidity and often counter-intuitive rules of deontology. Secondly, whilst it allows for variation in moral practice, it does not permit the atrocities which consequentialism justifies as a consequence of its situational variation. This is because virtue ethics depends on what the virtuous person would do and, arguably, the virtuous agent would not act in the way consequentialism argues for by allowing mass murder or torture under certain extreme circumstances, for example.

However, there are decisive criticisms against virtue ethics. The first criticism is that it does little to tell us exactly how to act; it is not action guiding. Virtue ethics states that we should act as the virtuous person would. This gives no other instruction than “act virtuously”, which perhaps can be further developed into “act kindly” or “do not act callously”. However, there is no further instruction than this, and nothing to say whether an action will be kind or just; a person is left to rely on their pre-understanding and belief.

Hursthouse’s response to this criticism seems to be that this is all the instruction that we need. She argues:

“We can now see that [virtue ethics] comes up with a large number [of rules] … each virtue generate[s] a prescription – act honestly, charitably, justly.” (Hursthouse, 1996: 648).

When acting, we need only ask ourselves “is this act just?” or “is this act kind?”, and the answer, either “yes” or “no”, will dictate whether or not the act should be done.

This response does little to answer the original concern, and leads to the second criticism. Hursthouse claims that in order to determine whether an act is just, or kind, or deceitful, a person should seek out those whom they consider to be their moral and virtuous superiors, and ask their advice (Hursthouse, 1996: 647-648). Not only does this rely on a preconceived measure of virtue (in that we must already have an understanding of what is just in order to decide which acquaintance is most just), it also does little to address what constitutes a second criticism of virtue ethics: the variation in morality between cultures.

Virtues vary between cultures in three senses. Firstly, cultures may vary on which virtue is to take precedence in cases of virtue conflict (though this is a separate criticism in itself). Secondly, cultures vary in their conception of whether a trait is, indeed, a virtue. Thirdly, cultures vary on what action they believe a virtue leads to. MacIntyre writes:

“They [various thinkers and cultures] offer us different and incompatible lists of the virtues; they give a different rank order of importance to different virtues; and they have different and incompatible theories of the virtues.” (MacIntyre, 2007: 181).

He gives the example of Homer, who claimed that physical strength was a virtue. This, MacIntyre claims, would never be accepted as a virtue in modern society; consequently, Homer’s idea of a virtue or an excellence is vastly different from ours (MacIntyre, 1981: 27). Though this demonstrates that one trait may be accepted as a virtue by one culture and not by another, it also highlights the third sense of cultural difference: different cultures can accept the same trait as a virtue, but vary on what constitutes a virtuous act. For example, all societies believe justice to be a virtue, yet one might consider capital punishment to be just and therefore virtuous, whilst another may hold capital punishment to be unjust and therefore not virtuous.

In defence of virtue ethics, Hursthouse claims that the problem is equally shared by deontology, arguing:

“Each theory has to stick out its neck and say, in some cases ‘this person/these people/other cultures are in error’, and find some grounds for saying this.” (Hursthouse, 1991: 229)

Yet this causes concern for virtue theory. Hursthouse is here claiming that some cultures are wrong in believing that certain traits truly lead to an increase in eudaimonia, and are therefore wrong about them being virtues. This presents a circularity in reasoning for virtue ethics.

Before the circularity criticism is discussed, a defence can be made against one aspect of conflict: when two virtues are in conflict, not across cultures, but with one another in a situation. The third criticism is that situations are easily imagined in which two virtues conflict in this manner. For instance, a police officer may apprehend a robber. On hearing the robber’s story, the officer learns that he stole food in order to provide for his starving children. The police officer must then decide whether to act on the virtue of justice, and arrest the robber who, despite the circumstances, has committed a crime, or to act on the virtues of sympathy and charity, and allow the robber to take the food and feed his starving children. Hursthouse claims that “in such cases, virtue ethics has nothing helpful to say” (Hursthouse, 1991: 229).

However, a response can be offered. The degree of conflict can vary greatly depending on the circumstances. In some situations, the correct answer is obvious; in the above case, it would be hard to justify denying a man a stolen loaf of bread to feed his starving children. In other situations, the degree of conflict can be much narrower, making the decision much more difficult. In keeping with the argument of virtue ethics, the correct decision is the one which adds to eudaimonia. If both traits lead to an increase in eudaimonia, the correct choice is the one which adds most to eudaimonia. As the difference in the amount of increase narrows, the choice becomes harder, but the moral cost of choosing wrongly becomes smaller. Ultimately, if both virtues increase eudaimonia equally, then they are equally the correct choice.

However, the most decisive criticism is that the argument virtue ethics puts forward for what is morally right rests on a circularity. This emerged above, when it was demonstrated that virtue ethics requires some other criterion in order to be able to say that some cultures are right and others wrong in their implementation of the virtues and in what they hold to be a virtue.

If virtue ethics is to explain why some cultures are wrong in their implementation of the virtues, then its argument must work as follows: a culture is wrong because what it advocates as right would not be done by the virtuous person. It would not be done by the virtuous person because the trait which leads to the action is not a virtue. The trait which leads to the action is not a virtue because it would not add to the person’s eudaimonia. The reason, then, that a culture is wrong is that it is mistaken in assuming that the trait which leads to the action is a virtue, because the trait will not add to the person’s eudaimonia.

It must therefore be considered what it takes for a trait to lead to an increase in eudaimonia. To this end, it must be claimed that a trait can only add to eudaimonia, and therefore be a virtue, because of something about the trait: that it is morally right. Herein lies the circularity. Virtue ethics states that an action is right iff it is what the virtuous person would characteristically do in the situation. However, it has been shown that there must be something about a trait which is morally right in order that it can add to eudaimonia, and therefore be a virtue, so that the virtuous person may act on it. To avoid the circularity, for a trait to be morally right there must be a criterion of rightness other than its being what the virtuous person would characteristically do in the situation. If such a criterion exists, virtue ethics becomes superfluous as an explanation of what is right.

In conclusion, the argument for virtue ethics’ account of what it is for an action to be right has been set forward. Firstly, the positives to this argument were shown: that it avoids the rigidity of deontology and the atrocities of consequentialism. It was then criticised with four arguments: it is not action guiding; the difference in cultures’ morality; concerns when two or more virtues come into conflict; and the necessity for another criterion of rightness which, if accepted, renders virtue ethics unnecessary or, if rejected, leads to a circularity in virtue ethics. Therefore, it is concluded that virtue ethics does not offer a convincing account of what it is for an action to be right.

Reference List

Aristotle. (340bc). The Nicomachean Ethics. Translated by Ross, D. Edited by Brown, L. (2009). Oxford: Oxford University Press.

Hursthouse, R. (1991). Virtue Theory and Abortion. In Philosophy and Public Affairs. Vol. 20, No. 3, pp. 223-246.

Hursthouse, R. (1996). “Normative Virtue Ethics”. In Ethical Theory: An Anthology. Edited by Shafer-Landau, R. (2013). Chichester: John Wiley & Sons, pp. 645-652.

MacIntyre, A. (1981). The Nature of the Virtues. In The Hastings Center Report. Vol. 11, No. 2, pp. 27-34.

MacIntyre, A. (2007). After Virtue: A Study in Moral Theory. 3rd edition. Notre Dame: University of Notre Dame Press.

Descartes’ First Meditations: Veridical Experiences

The question of whether one can know whether one is dreaming has been a staple of philosophical discussion since Descartes wrote The Meditations in the 1600s. To someone engaging in philosophy for the first time, this can seem a bizarre question. However, Descartes’ reasoning for doubting the certainty that one is not dreaming is compelling. For Descartes, our ability to perceive reality cannot be guaranteed, since our senses can deceive us (Descartes, 1986). Thus, over the course of the first two Meditations, Descartes concludes that the only thing he is certain of is that there is some being that is “I”. He concludes that this “I”, however, may only be a mind (Descartes, 1986).

Descartes reasons that even our perception of our bodies is a product of the intellect. Therefore, the only thing he feels certain of is that there is a mind doing the thinking. Two separate questions arise from this. Firstly, can I know that I am awake? Secondly, can I know that my belief that I am not locked “inside a dream” is not itself a dream? This second question evokes the plot of a sci-fi film, and elicits the imagery of being “a brain in a vat”, where everything that one perceives is illusory. The “brain in a vat” is a modern re-imagining of Descartes’ demon argument. The “brain in a vat” idea originates with Putnam and, according to Brueckner, inspired The Matrix films (Brueckner). It is this second idea which will be the main focus of this essay – Descartes’ “demon argument”, or the “mind in a vat” argument.

This extreme form of scepticism, in which one is merely a “brain in a vat”, is surprisingly difficult to rule out with absolute certainty. However, the implications of this may be less profound than they initially appear. The notion that we do not have a true perception of the external world, because our sensory perceptions are being manipulated by a demon or because we are a “mind in a vat”, may not actually have practical implications for how we live in the world. Still, the discussion about whether we can know for certain that we are not dreaming is not purely abstract and esoteric; there is an element of it that pertains to a wider issue than merely dreaming. For instance, as Skirry explains, Descartes supposes that an evil demon may be deceiving him, and so as long as this supposition remains in place, there is no hope of gaining any absolutely certain knowledge (Skirry). If one cannot be sure that one is not being deceived by a demon, then one can have no absolutely certain knowledge about anything. However, as I will argue in this essay, concerning ourselves with whether we lack true knowledge because we are being manipulated by a demon does not help us to find solutions to the issues in the world in which we believe we are living.

The sceptical account for not knowing whether one is dreaming or not has two levels. First, our perception of what we are currently experiencing does not allow us to determine whether we are awake or dreaming. Dreams can have the same quality as waking experiences, and we can dream that we are awake. Therefore, the experience of being awake is not distinguishable from dreaming. Descartes provides the following example of this situation: “How often, asleep at night, am I convinced of just such familiar events – that I am here in my dressing-gown, sitting by the fire – when in fact I am lying undressed in bed!” (Descartes, 1986, p. 13). Given that in one’s dream, one’s perceptual experiences are not different from those when awake, it may be that I am dreaming that I am typing out this essay.

This sceptical consideration of the possibility of not knowing that one is awake is not as profound or extreme as it first seems. It leaves intact the idea that there are two states: dreaming and being awake. The problem is simply that when we think we are awake, we may be dreaming. It is for this reason that this essay will leave this discussion aside, and move on to the second level of scepticism explored by Descartes.

The second reason for doubting whether we can know if we are dreaming takes scepticism to a deeper level. The sceptical case for doubting our ability to know whether we are awake or dreaming is summed up by Blumenfeld and Blumenfeld as the problem of the possibility of being in a dream within a dream:

for all I know, I may be dreaming… now, then my belief that not all my experiences have been dreams is itself a belief held in a dream, and hence it may be mistaken. If I am dreaming now, then my recollection of having been awake in the past is merely a dreamed recollection and may have no connection whatever with reality. (Blumenfeld and Blumenfeld, 1978, pp. 243-244).

There are two ways of illustrating this dilemma. First is the illustration devised by Descartes, whereby one is being deceived by a demon. The second is the one favoured by sci-fi films, whereby one is merely a “brain in a vat”, and all that we think we are experiencing has no relation to external reality.

This second level of scepticism speculates that all our experiences may be locked within a dream, including our experiences of waking and dreaming. Given the period in which he was writing, Descartes invokes superstitious and supernatural ideas of a God or a demon to illustrate this. He imagines that there may be:

some malicious demon of the utmost power and cunning [that] has employed all his energies in order to deceive me. I shall think that the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which he has devised to ensnare my judgement. I shall consider myself as not having hands or eyes, or flesh, or blood or senses, but as falsely believing that I have all these things (Descartes, 1986, p. 15).

The modern sci-fi parallel is that I am actually merely “a brain in a vat”, probably millions of miles away, on some distant planet. This is the view that everything we experience of the external world is a deception. This modern, scientific alternative allows the modern reader to see Descartes’ problem more clearly, and prevents us from dismissing it as an anachronism from a time of superstition.

In the Second Meditation, Descartes convinces himself that because he is thinking, he does actually exist. Hence the famous phrase: Cogito, ergo sum, or “I think, therefore I am” (Descartes, 1986, p. 17). This is important, as it does set a limit to scepticism, since Descartes’ conclusion is that “even if I am being deceived by an evil demon, I must exist in order to be deceived at all” (Skirry). The fact that I think is proof that I am at least a mind. However, this does not provide proof that I am also a body. Descartes poses to himself the question: “what am I to say about this mind, or about myself?” (Descartes, 1986, p. 22). But he then tells the reader, “so far, remember, I am not admitting that there is anything else in me except a mind” (Descartes, 1986, p. 22). Descartes’ famous phrase cogito, ergo sum is part of the philosophical canon because it demonstrates that there are limits to scepticism – I think; therefore, I am a mind. However, the knowledge that I am thinking does not, in itself, rule out the possibility that I am merely a mind, i.e. that I am locked in a dream within a dream, where I am deceived into thinking that I have two states of existing: one being awake, the other dreaming.

At the beginning of this essay, I said that, to someone engaging in philosophy for the first time, the question “can we know we are not dreaming?” can seem a very bizarre one. This can be seen in Blumenfeld and Blumenfeld’s paper, when they note that “a frequent charge against scepticism is that it shows that we cannot have knowledge only by adopting an implausibly strong definition of knowledge” (Blumenfeld and Blumenfeld, 1978, p. 249). Intuitively, the idea that “I” (whatever I am in this case) am merely “a mind in a vat” is implausible. This is why the question seems bizarre. It may not be possible to know that we are not dreaming, but establishing this requires the construction of a rather implausible hypothesis. In other words, only by invoking something that seems implausible can the question “can we know we are not dreaming?” be raised.

However, to dismiss Descartes and the sceptics’ argument on these grounds is rather weak. Dismissing the demon argument on the basis that it is implausible does not falsify it; this is just an argument from probability. The argument that it seems more probable that I am not dreaming, and that I do experience an external world, is not sufficiently sound, philosophically, to end the debate. There is a need to produce a more satisfying philosophical explanation. Blumenfeld and Blumenfeld argue that it is not possible to justify empirical claims on the basis of probability (Blumenfeld and Blumenfeld, 1978). Therefore, they argue that to maintain the claim of an external world, and rule out the demon scenario, the hypothesis of an external world needs to be epistemically superior to the hypothesis of a world constructed by a demon (Blumenfeld and Blumenfeld, 1978). However, Blumenfeld and Blumenfeld are not convinced that the hypothesis of an external world is epistemically superior. They argue:

One might think that this could be argued on grounds of the greater simplicity of the external-world hypothesis. But it is hard to see in what respect the external-world hypothesis is simpler than that of the demon. The latter is committed to the existence of the demon (a spirit) with the means of and a motive for producing sense experiences, to a mind in which these experiences are produced, and to the sense experiences themselves. The external-world hypothesis, on the other hand, is committed to all of the above, except the existence of the demon. But it is committed, in addition, to a physical world with the capability of producing sense experience. So, it is hard to see how the external-world hypothesis is simpler. (Blumenfeld and Blumenfeld, 1978, p. 250).

Therefore, it is surprisingly difficult to rule out the idea that “I” am “a mind in a vat”, and that all my experiences of the external world are based on a deception of my sensory perception.

However, the implications of this may not be as profound as they initially appear. Firstly, the implications of all our experiences of an external world being based on illusion would only materialise if the illusion were broken. If there is a demon creating sensory experiences for me, or I am actually just “a brain in a vat”, the implications would only arise when I became aware of my “real” existence, and of the illusion and deception. Secondly, unless we become aware that all our past experiences, including those of being awake and of dreaming, are part of a dream, we are no better able to deal with the dilemmas of this world than we are currently. It is hard to see what the practical implications of this theory are – or, more specifically and more importantly, how they can help us. For example, it is not going to work to tell a Syrian refugee: “don’t worry, go back to Syria, because the war isn’t real. We are actually ‘brains in a vat’, on another planet, many millions of miles away.” It may sound as though I am being facetious; however, the point is a serious one. The question “can you know that you are not dreaming?” may be a valid one – it might be surprisingly difficult to prove that I am not “a brain in a vat” – but it is not a very helpful question to be concerning ourselves with.

In conclusion, attempting to demonstrate that our sensory experiences are not the trickery of a malicious demon proves unfruitful. Trying to refute the idea satisfactorily fails to recognise that its implications would only matter if we found out that, in the “real” world, we were just “minds in a vat”. Meanwhile, there are practical concerns that require our thought, such as the Syrian refugee problem, and the kinds of questions that scepticism is concerned with do not help us to deal with these practical issues. Admittedly, scepticism does make us wonder whether these “practical” issues are real: Descartes’ hypothesis makes us ponder the possibility that the Syrian refugee crisis is not real, but part of the deceptions of a demon. However, this kind of thinking does not help us to respond to the things that we think are important.

Bibliography

Blumenfeld, D. and Blumenfeld, J.B. (1978) “Can I Know that I am not Dreaming?” in Hooker, M. (ed.), Descartes: Critical and Interpretative Essays. Johns Hopkins University Press, Baltimore, pp. 234-253

Brueckner, T. (Retrieved October 15, 2015). “Skepticism and Content Externalism”. Stanford Encyclopedia of Philosophy, Available from: http://plato.stanford.edu/entries/skepticism-content-externalism/#2

Descartes, R. (1986) “First Meditation”, in Cottingham, J (trans.) Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press, Cambridge, pp. 12-15

Descartes, R. (1986) “Second Meditation”, in Cottingham, J (trans.) Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press, Cambridge, pp. 16-23

Descartes, R. (1986) “Objections and Replies [Selections]” in Cottingham, J (trans.) Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press, Cambridge, pp. 63-67

Skirry, J. (Retrieved October 6, 2015), “Rene Descartes (1596—1650)”, Internet Encyclopedia of Philosophy, Available from: http://www.iep.utm.edu/descarte/

Arguments that Prove the Existence of God

In this essay it shall be discussed whether there are any arguments which work to prove the existence of God. The teleological and cosmological arguments shall be discussed first and criticised by showing their reliance on the ontological argument, which shall then itself be shown to be inadequate. It will then be concluded that there appear to be no arguments which work to prove the existence of God.

The Teleological Argument

The teleological argument, or the argument from design, claims that God’s existence is proven by the evidence that the universe is so well ordered, and its contents so complex, that they must have been designed. If this is the case, then there must be a designer of the universe, and this designer can only be God.

The strongest version of this argument is put forward by Paley. He claims that should he find a stone on the ground, he might suppose that it had always been there. However, should he then come across a watch, he could not hold this same supposition. This, he claims, is because the watch’s parts are such that “they are framed and put together for a purpose” (Paley, 1867: 11). The watch must have been designed and must therefore have a designer. Paley makes this same claim of the universe. He argues that:

“Every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater and more…” (Paley, 1867: 21).

Therefore, as the universe is of such a complex nature, in which all things are made of parts which allow them to fulfil their purpose in the same way the watch is made of mechanisms which permit its owner to tell the time, the universe must have been designed and must therefore have a designer. This designer, the argument concludes, is God.

However, there are problems with this argument. Firstly, it can be claimed that the world is not as perfectly designed as it would be had God (being omniscient, omnipotent etc.) created it. For example, there are many degenerative diseases which affect animals as they grow older, showing parts of their anatomy to be unable to fulfil their purpose.

Secondly, there are other, scientific arguments which explain how things in nature come to be fit for their purpose without supposing the existence of God. The most prominent of these is evolution, proposed initially by Darwin (Darwin, 1859). Evolution argues that living beings come to fit their purpose by adaptation, and therefore, though appearing to be designed, are not.

The Cosmological Argument

The cosmological argument, or the argument from first cause, claims that everything in the universe must have a cause. Were all the chains of cause and effect to be traced backwards in time, they would lead to the creation of the universe. However, the creation of the universe must also have a cause, since the universe cannot be the cause of itself (as nothing but an ontological being can be). The cause of anything must be either a physical law or a personal being. The first cause must be a personal being, as no physical laws predated the universe; and this personal being is God. Therefore, simply in virtue of the fact that the universe exists, God must exist (Reichenbach, 2012).

As a first criticism, the cosmological argument relies on the existence of a being which can be the cause of itself. It therefore depends on the ontological argument, which will later be shown to fail.

There are two more direct criticisms of the argument. Firstly, modern science has set forth hypotheses which aim to establish what could have been the cause of the universe other than God. Of these, the most prominent is the Big Bang. According to this theory, the universe literally arose out of nothing, and there is therefore no need to rely on God’s existence to explain the existence of the universe.

A criticism against this may be that the Big Bang itself, since it was an event, must need a cause. However, to claim this is to misunderstand the theory of the Big Bang. The Big Bang did not occur within the space-time continuum; the continuum was created from the Big Bang. It therefore does not need to rely on the regular cause and effect model of the universe.

A second, direct criticism of the cosmological argument is that it relies on the claim that the universe itself must have a cause. However, should it be shown that the universe has existed for an infinite amount of time, it needs no cause, and there is therefore no reliance on God’s existence.

The argument that the universe cannot be infinite is that, if it were so, it would be impossible to reach the present moment from the beginning. Since we have arrived at the present moment, the universe cannot be infinite. However, Mackie argues that this representation of infinity is misleading: a true representation of infinity would not include a starting point. Whilst this may seem to make arriving at the present moment even more impossible, this is not the case. Mackie argues that to truly understand infinity is to know that from any past cause, no matter how far back, there is a finite number of links in the chain of causality to the present moment. Therefore, even in an infinite chain of causality, it is possible to reach the present moment (Mackie, 1983).
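Mackie’s point can be pictured schematically (an illustration added here for clarity; the notation is not Mackie’s own). Index the events of a beginningless causal chain by the non-positive integers:

\[
\cdots \;\rightarrow\; e_{-3} \;\rightarrow\; e_{-2} \;\rightarrow\; e_{-1} \;\rightarrow\; e_{0} \quad (\text{the present}).
\]

The chain has no first member, yet for any past event \(e_{-n}\) there are exactly \(n\) causal links between it and the present – a finite number. The supposed impossibility of “traversing” the infinite arises only if a starting point is smuggled in, which a true representation of infinity lacks.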

The Ontological Argument

The ontological argument differs from the other arguments for God’s existence because it argues that God must exist simply because of the concept of God, and not because of the existence of the universe or some fact about it. As has been seen, the teleological and cosmological arguments both necessitate the existence of an ontological being: a being which cannot but exist. The ontological argument claims that, simply in virtue of the concept of God, God must exist. That is, if we can conceive of the concept of God without any contradiction, then, by the very fact that such a being is possible, it must exist.

The classical version of this argument is put forward by Anselm. He claims that the definition of God is “a being than which nothing greater can be conceived” (Anselm, 1077: chapter 2). By this definition, Anselm claims that God cannot be conceived not to exist (Anselm, 1077: chapter 3). This is because if we conceive of God (with all his qualities, including omniscience, omnipotence, omnipresence), and then believe God not to exist, we are not in fact conceiving of God, since a being greater than the one of which we conceived could be conceived: one which existed. Whichever of the two beings is greater – and Anselm claims the one which exists would be greater – must therefore be God (Anselm, 1077: chapter 3).

A similar argument is found in Descartes’ Meditations. He claims that God is a “supremely perfect being” (Descartes, 1641: 45); a being who holds all the perfections. However, included within this, Descartes claims that there is also the perfection of existence (Descartes, 1641: 46). If we conceived God to be without the perfection of existence, we would not actually be conceiving of God. Therefore, to conceive of the perfect being necessitates its existence, and God must exist.

A criticism is put against this by Kant. Kant argues that if we deny something’s existence, we do not contradict a concept, as the ontological argument claims we do, because existence does not “add” anything to a concept. That is, he holds that stating that something exists cannot make a concept greater. In order that a concept may be made greater by a predicate (in relation to the concept as a subject), the predicate must be something which, were it to be removed from the subject, would create a contradiction. In the example of God, we cannot claim that God is not omniscient, as this would contradict the predicate “God is omniscient”; omniscience, Kant claims, is a “determining predicate” (Kant, 1781: A598/B626).

Existence, however, is not a determining predicate. Kant claims, even, that it is not a predicate at all. Instead, to say something exists is “merely the positing of the thing” (Kant, 1781: A598/B626); that is, to say that something exists does not add to the concept, but simply states that there is an actual occurrence of the concept. So when we state “God exists”, we do not state anything extra about the concept, as we would in saying “God is omnipotent”; instead, as Kant argues, “we attach no new predicate to the concept of God, but only posit the subject in itself with all its predicates” (Kant, 1781: A599/B627). Therefore, there is no reason to suppose that a being greater than which none can be conceived must necessarily exist, because it is not necessary that there be an actual occurrence of the greatest being in order for it to be the greatest conceivable being. Therefore, the classic ontological argument fails.
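Kant’s point is often glossed in modern predicate logic (a later formalisation added here for clarity; the notation is not Kant’s own): existence is expressed by the quantifier, not by a predicate attached to the concept. Writing \(G(x)\) for “x is God” and \(O(x)\) for “x is omnipotent”:

\[
\text{“God is omnipotent”}:\;\; \forall x\,(G(x) \rightarrow O(x)); \qquad \text{“God exists”}:\;\; \exists x\, G(x).
\]

The first enlarges the concept \(G\) with a determining predicate \(O\); the second attaches no new predicate to \(G\), but merely posits that the concept is instantiated.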

There have been more recent formulations of the ontological argument. Plantinga aimed to show that, from the possibility of certain concepts, God must exist. These two concepts are maximal excellence and maximal greatness. Maximal excellence, he argues, “entails omniscience, omnipotence, and moral perfection” (Plantinga, 1974: 108) – that is, all the attributes one ascribes to God. Secondly, a being has maximal greatness if it has “maximal excellence in every world” (Plantinga, 1974: 108); that is, every possible world. These two concepts, Plantinga claims, are not self-contradictory, and are therefore possible. Plantinga then argues that if the concept of maximal excellence is possible, there is a being, in some possible world, which has maximal excellence; this need not be the actual world. However, if the concept of maximal greatness is also possible, then in every possible world there is a being which has maximal excellence, including the actual world. The being who has maximal excellence is God, and therefore God must exist (Plantinga, 1974: 108).
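The modal structure of this argument can be set out schematically (a standard reconstruction added here for clarity; Plantinga’s own presentation is in prose, and the crucial step relies on the S5 axiom of modal logic). Writing \(ME(x)\) for “x is maximally excellent” and \(MG(x)\) for “x is maximally great”:

\begin{align*}
&1.\ \Diamond\,\exists x\, MG(x) &&\text{(premise: maximal greatness is possible)}\\
&2.\ MG(x) \leftrightarrow \Box\, ME(x) &&\text{(definition: maximal excellence in every world)}\\
&3.\ \Diamond\,\Box\,\exists x\, ME(x) &&\text{(from 1 and 2)}\\
&4.\ \Box\,\exists x\, ME(x) &&\text{(from 3, by the S5 axiom } \Diamond\Box p \rightarrow \Box p\text{)}\\
&5.\ \exists x\, ME(x) &&\text{(from 4: a maximally excellent being exists in the actual world)}
\end{align*}

On this reconstruction, everything turns on premise 1 being granted – and it is exactly this premise that the criticism below denies.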

A criticism to be made of Plantinga is that the concept of maximal greatness, though not self-contradictory, may not actually be possible. Indeed, consider the case in which it is possible that maximal excellence is not exemplified. There is then a possible world in which no being has maximal excellence. However, if maximal greatness were possible, there would have to be a being with maximal excellence in that very world – the world in which there is no being with maximal excellence. This, of course, cannot be the case. Plantinga himself admits this. He writes:

“We must ask whether this argument … proves the existence of God. And the answer must be, I think, that it does not. An argument for God’s existence may be sound … without in any useful sense proving God’s existence.” (Plantinga, 1974: 112).

The argument only proves God’s existence if the premise that maximal greatness is possible is accepted. However, this premise will only be accepted by people who already believe in God’s existence. Therefore, the argument fails to prove his existence independently of prior belief.

In conclusion, the cosmological, teleological, and ontological arguments for God’s existence have been put forward. The criticisms raised against them – that the two former arguments rely on the ontological argument, and that the ontological arguments fail to prove God’s existence – indicate that there appear to be no arguments which work to prove the existence of God.

Reference List

Anselm. (1077). Proslogium. Accessed online at: https://legacy.fordham.edu/halsall/basis/anselm-proslogium.asp. Last accessed at 24/09/2015.

Darwin, C. (1859). The Origin of Species. London: Everyman’s Library.

Descartes, R. (1641). Meditations on First Philosophy. Translated and edited by Cottingham, J. (1996). Cambridge: Cambridge University Press.

Kant, I. (1781). Critique of Pure Reason. Translated by Kemp Smith, N. (1933). London: The Macmillan Press ltd.

Mackie, J. L. (1983). The Miracle of Theism. Oxford: Oxford University Press.

Paley, W. (1867). Natural Theology. Ohio: DeWard Publishing Company.

Plantinga, A. (1974). God, Freedom, and Evil. New York: Harper and Row.

Reichenbach, B. (2012). “Cosmological Argument”. Stanford Encyclopedia of Philosophy. Accessed online at: plato.stanford.edu/entries/cosmological-argument/. Last accessed at 23/09/2015.

The Importance of Leadership in Nursing

The importance of leadership is now widely recognised as a key part of overall effective healthcare, and nursing leadership is a crucial part of this, as nurses are now the single largest healthcare discipline (Swearingen, 2009). The Francis Report (2013) raised major questions about the leadership and organisational culture which allowed hundreds of patients to die or come to harm; it further found that the wards in Mid Staffordshire where the worst failures of care occurred were the ones that lacked strong and caring leadership, highlighting the crucial role of nurses in leadership. Research into nursing leadership has shown that a culture of good leadership within healthcare is linked to improved patient outcomes, increased job satisfaction, and lower staff turnover rates (MacPhee, 2012).

Although the NHS currently faces many challenges, such as financial constraints and a growing elderly population, leadership cannot be viewed as an optional role. Previous research by Swearingen (2009) has suggested that educational programmes for nurses do not fully prepare them for leadership roles, and this gap between the demands of clinical roles and adequate educational preparation can result in ineffective leadership in nursing (Feather, 2009). It is important to recognise the critical role that nurses and nurse leaders play in establishing leadership for patient care and the overall culture within which they work (Feather, 2009). Themes explored in this essay will include defining leadership, leadership in nursing, factors that contribute to nursing leadership, and leadership preparation as part of nursing education.

What is leadership and culture?

Leadership can mean many different things and has clearly evolved in meaning over time (Brady, 2010). Common qualities associated with leadership are influence, innovation, and autocracy (Brady, 2010; Cummings, 2010). A key idea which has remained part of leadership throughout its evolution is that leadership involves influencing the behaviours, feelings, and actions of other people (Malloy, 2010). Culture is different: it refers to the implicit, shared assumptions through which each member of a group or organisation perceives and reacts to things (Malloy, 2010). Culture is often regarded as a good reflection of what an organisation values most: if compassion and safety are highly regarded, staff will assimilate this (Hutchinson, 2012). Interactions by leaders at all levels of an organisation have been identified as the most important component of establishing and maintaining a culture of leadership (Malloy, 2010; Hutchinson, 2012). The most senior level of leadership within NHS trusts often comes from the board of directors, who have responsibility for the overall leadership strategy (Brady, 2010).

Nursing leadership

Although there are many research articles and books about leadership and management, until recently there has been relatively little research into what nursing leadership entails. Cummings (2008) found that perceptions of nursing leadership differed from those of general leadership because nursing leadership placed a greater emphasis on nurses taking responsibility for, improving, and influencing the practice environment. Brady (2010) reported that any time a nurse had recognised authority, they were providing leadership to others. By this argument, student nurses are leaders to their patients, a staff nurse is a leader to student nurses and patients, and the ward manager is a leader to all team members (Brady, 2010; Sanderson, 2011). It is also important to distinguish between a manager and a leader (Brady, 2010; Sanderson, 2011). Managers are seen to be those who administer, maintain, and control, whereas leaders are those who innovate, develop, and inspire (Sanderson, 2011). Whilst there is an obvious need for managers within the health service, it is vital to realise that there is a clear distinction between the roles of managers and leaders, and that there are areas where these roles may not overlap (Sanderson, 2011). One of the key challenges facing the NHS is to nurture a culture which allows the delivery of high quality healthcare (MacPhee, 2012), and one of the most influential factors in the delivery of quality patient care is leadership: ensuring there is a clear distinction between management and leadership, and that leaders are equipped with the necessary tools to inspire others to follow their example (Jackson, 2009).

Factors which contribute to nursing leadership

The systematic review by Cummings (2008) demonstrated that research into nursing leadership falls into two categories: studies of the practices and actions of nursing leaders, including the impact of differing healthcare settings, and studies of the effects of nurse leaders’ different educational backgrounds. The review concluded that leadership in nurses can be developed through a stronger emphasis on leadership in education, and by modelling leadership styles on those which have been seen to be successful in the workplace. Several studies also highlighted personal characteristics deemed to promote leadership qualities, such as openness and the motivation to lead others (Jackson, 2009; Brady, 2010; Sanderson, 2011). Marriner (2009) also showed that, contrary to popular belief, age, experience, and gender did not seem to be important factors in the effectiveness of leadership, and that interpersonal skills were more important than financial or administrative skills. However, this focus on financial and managerial skills seems to suggest an overlap between management and leadership, which have previously been shown to be two different areas (Richardson, 2010; MacPhee, 2012). The same research also showed that leadership was perceived to be less effective when leaders had less contact with those delivering care, highlighting the importance of nurses on the ward also being effective leaders (Richardson, 2010; MacPhee, 2012).

The emphasis placed on interpersonal skills and relationships between healthcare workers strongly suggests that these are important leadership skills, and they could form a key part of leadership development programmes (Malloy, 2010). A recent review of emotional intelligence and nursing leadership highlights the need for emotional intelligence in effective leaders, and emotional intelligence has been shown to be highly influential on healthcare cultures (Hutchinson, 2012). Although the impact of these factors can suggest how best to promote leadership in nursing, a thorough understanding of their interactions is needed to fully understand their effectiveness. Sorensen (2008) suggested that these effects can also be promoted through educational programmes, particularly at undergraduate level.

Education

It is clear that leadership is considered fundamental to nursing, and that nurses are now expected to act as leaders across a wide variety of settings (Richardson, 2010). If nurses are expected to undertake such roles, it is important that they are adequately trained and prepared for them (Sandstrom, 2011). Studies have found that many undergraduate nursing courses now view organisation and management as fundamental parts of autonomous nursing practice, and these topics are widely part of the curriculum (Richardson, 2010; Sandstrom, 2011). However, it is unclear what is actually taught, and much of the content appears to focus on the transition period from student to qualified nurse (Sandstrom, 2011). It also seems that current expectations of leadership within the NHS are not suited to being taught as isolated elements within the curriculum, and should instead be embedded throughout training and beyond (Richardson, 2010; Sandstrom, 2011). The development of leadership skills should also continue throughout a nurse's career, to continually reinforce the importance of leadership and to develop newly qualified nurses into role models for others (Jackson, 2009).

Collective leadership

In collective leadership there are both individual and collective levels of accountability and responsibility (Cummings, 2008). There is a strong emphasis on regular reflective practice, which has been shown to improve the standard of care given by nurses, and on striving to make continuous improvement a habit of everyone within the organisation (Cummings, 2008; Cummings, 2010). This contrasts with a command-and-control style of leadership, which displaces responsibility onto individuals and leads to a culture of fear of failure rather than a desire to improve (Feather, 2009). Leadership comes both from leaders themselves and from the relationships among them and with other members of staff. Key to leadership is also the idea of followership: that everyone supports each other to deliver high-quality care and that the success of the organisation is the responsibility of all (Hutchinson, 2012). It is important to recognise that good leadership does not happen by chance; collective leadership is the result of consciously and purposefully identifying the skills and behaviours needed at an individual and organisational level to create the desired culture (Hutchinson, 2012). This contrasts with more traditional leadership development work, which has focused on developing individual capacity whilst neglecting the need to develop collective capability (Cummings, 2008; Cummings, 2010). The command-and-control style has been linked to poorer patient outcomes, decreased levels of job satisfaction, and higher levels of staff turnover (Sorensen, 2008). The challenge of recruiting and retaining leaders at all levels must be recognised, as there is a need for clinical leadership at every level (Cummings, 2010).

Research has shown that where leaders, and the relationships between leaders, are well developed, the quality of care increases because all staff work towards the same goals within a well-established culture of caring (Sandstrom, 2011). In addition, there is an increasing drive to form leadership partnerships with patients (Sandstrom, 2011; Hutchinson, 2012). Collective leadership with those receiving care functions in a similar way to multidisciplinary team working: it requires a redistribution of both power and decision-making, as well as a change in thinking about who should be included in the collective leadership community (Hutchinson, 2012). Several authors (Cummings, 2008; Jackson, 2009; Malloy, 2010) recommended that NHS leaders should work with those seen as patient leaders to facilitate the changes outlined in the Francis Inquiry report (2013). There have been frequent reports that staff working in healthcare settings are often overwhelmed by their workloads and unsure of their priorities, sometimes because senior managers have identified too many priorities (Cummings, 2008). This can result in stress and poor-quality care for patients (Cummings, 2008; Cummings, 2010). Whilst mission statements about efficient and high-quality care can be helpful for staff, they are only helpful when translated into objectives for individuals (Jackson, 2009). Establishing and maintaining cultures of high-quality care relies on continual learning and improvement in patient care from all members of staff, with everyone taking responsibility for improving quality (Jackson, 2009; MacPhee, 2012). Where a mentality of collective leadership is well established, all staff members are more likely to work together to solve problems, to ensure that the quality of care remains high, and to work towards innovation (MacPhee, 2012).

Conclusion

The importance of effective leadership to the provision of good-quality care is firmly established, as is the central role that leadership plays in nursing (Cummings, 2008). It is now also clear that leadership should be found at all levels, from board to ward, and that the development of leadership skills for nurses should begin when training commences and be honed and developed throughout a nursing career (Feather, 2009). For healthcare organisations to provide patients with good-quality healthcare there must be a culture that allows sustained high-quality care at multiple levels (Francis Report, 2013). These cultures must concentrate on the delivery of high-quality, safe healthcare and enable staff to do their jobs effectively (Jackson, 2009; Francis Report, 2013). Part of this is ensuring that there is a strong connection to the shared purpose regardless of an individual's role within the system, and that collaboration across professional boundaries is easily achieved (Cummings, 2010). Nurses can be a key part of this by using collective leadership to establish a culture where all staff take responsibility for high-quality care and all are accountable (Malloy, 2010). This may require a shift in the way many see leadership: from a command-and-control approach to leadership as the responsibility of all, working together as a team across organisations and other boundaries in the best interests of the patient (Brady, 2010).

References

Brady, P. (2010). The influence of nursing leadership on nurse performance: a systematic literature review. Journal of Nursing Management, 18(4), pp.425-439.

Cummings, G. (2008). Factors contributing to nursing leadership: a systematic review. Journal of Health Services Research and Policy, 13(4), pp.240-248.

Cummings, G. (2010). The contribution of hospital nursing leadership styles to 30-day patient mortality. Nursing Research, 59(5), pp.331-339.

Feather, R. (2009). Emotional intelligence in relation to nursing leadership: does it matter? Journal of Nursing Management, 17(3), pp.376-382.

Hutchinson, M. (2012). Transformational leadership in nursing: towards a more critical interpretation. Nursing Inquiry, 20(1), pp.11-22.

Jackson, J. (2009). Patterns of knowing: proposing a theory for nursing leadership. Nursing Economics, 27(1), pp.149-159.

MacPhee, M. (2012). An empowerment framework for nursing leadership development: supporting evidence. Journal of Advanced Nursing, 68(1), pp.159-169.

Malloy, T. (2010). Nursing leadership style and psychosocial work environment. Journal of Nursing Management, 18(6), pp.715-725.

Marriner, A. (2009). Nursing leadership and management effects work environments. Journal of Nursing Management, 17(1), pp.15-25.

The Mid Staffordshire NHS Foundation Trust Public Inquiry (2013) Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry: executive summary. London: Stationery Office (Chair: R Francis).

Richardson, A. (2010). Patient safety: a literature review on the impact of nursing empowerment, leadership, and collaboration. International Nursing Review, 57(1), pp.12-21.

Sandstrom, B. (2011). Promoting the implementation of evidence-based practice: a literature review focusing on the role of nursing leadership. Worldviews on Evidence-Based Nursing, 8(4), pp.212-223.

Sorensen, R. (2008). Beyond profession: nursing leadership in contemporary healthcare. Journal of Nursing Management, 16(5), pp.535-544.

Swearingen, S. (2009). A journey to leadership: designing a nursing leadership development program. The Journal of Continuing Education in Nursing, 40(3), pp.113-114.

Limiting Impact of Moral Distress in Nursing

This work was produced by one of our professional writers as a learning aid to help you with your studies

Make recommendations on how the impact of moral distress on nursing staff can be limited.

What is moral distress?

Moral distress is the state of psychological discomfort and distress that arises when an individual recognises that they have moral responsibility in a given situation and makes a moral judgement regarding the best course of action, but for a range of reasons is unable to carry out what they perceive to be the correct course of action. In nursing, it refers specifically to the psychological conflict that occurs when a nurse has to take actions that conflict with what they believe is right, for example due to restrictions imposed by practice policies within institutions (Fitzpatrick and Wallace, 2011). Studies in this area usually use Jameton's original definition: "moral distress arises when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action" (Jameton, 1984). Further work by Wilkinson (1987), who also published a first-person account of moral distress (Wilkinson, 1989), refined this definition to relate it directly to "psychological disequilibrium and negative feeling" (Wilkinson, 1987). Common reasons cited by nurses for being unable to fulfil their moral responsibility include a lack of confidence in the ability of colleagues, negative attitudes of colleagues towards patients, a team decision on care that does not follow the patient's expressed wishes, and fear of reprisal resulting from the course of action they feel is best for the patient (Wojtowicz et al., 2014).

For example, a nurse working on a post-operative ward might experience a patient dying as the result of refusing a blood transfusion after surgery because of religious beliefs. The nurse's personal judgement may be that the patient should receive the blood transfusion to give them the best chance of surviving. However, because the patient did not consent, the nurse could not carry out the action they perceived to be correct. When the patient died, the nurse may have experienced emotional and psychological distress in the form of guilt and anger at not having saved a life that might have been saved, as well as feelings of helplessness at being unable to overrule the patient's wishes (Stanley and Matchett, 2014).

What situations are more likely to cause moral distress?

In 2015, Whitehead et al. carried out a large-scale questionnaire-based study in the USA on moral distress amongst nurses and other healthcare professionals (592 participants, 395 of whom were registered nurses). The most common causes of moral distress in nurses included frustration at a lack of patient care due to inadequate continuity (rated 6.4 by nurses on a Likert scale of 0-16), poor communication (5.8), and inadequate staffing levels (5.7). Additionally, giving life-supportive therapy when not in a patient's best interest (6.0), and carrying out resuscitation that served only to prolong the process of death (5.8), were also rated highly. The study showed that physicians and other healthcare professionals also rated these factors highly, but their scores were overall lower than those of nurses. The authors concluded that nurses are more likely to experience moral distress than other healthcare professionals, possibly owing to a discrepancy between their level of responsibility for patient welfare and the autonomy they are granted to make the decisions they believe should be made, as well as to having to carry out treatment protocols from physicians which they feel are wrong but are unable to challenge or overrule. Poor team leadership and poor communication were also cited by nurses as causes of moral distress (Whitehead et al., 2015).

Moral distress appears to be more likely amongst nursing staff involved in patient care protocols considered aggressive and futile, e.g. prolonged end-of-life care, or protocols that the nurse does not consider to be in the patient's best interest. For these reasons, moral distress is thought to be particularly prevalent amongst nurses treating patients in palliative care (Matzo and Sherman, 2009), paediatrics, intensive care (Whitehead et al., 2015; Wilson et al., 2013; Ulrich et al., 2010) and neonatal environments (Wilkinson, 1989). Moral distress is also prevalent amongst psychiatric nurses because of heightened feelings of responsibility for vulnerable patients, particularly as these patients are at risk of ethical mistreatment, e.g. misinformation about drug side effects (Wojtowicz et al., 2014). Other studies have identified that problems with the institution itself can cause moral distress, such as inadequate staffing, depersonalisation of staff, inadequate supply of resources and work overload (Dalmolin et al., 2014).

How does it affect nursing staff?

Moral distress can have psychological consequences that affect a nurse's performance and wellbeing. For example, nurses experiencing moral distress may blame or criticise themselves for an unsatisfactory outcome, and may experience anger, guilt, sadness or powerlessness (Fitzpatrick and Wallace, 2011; Borhani et al., 2014). They may shift blame onto others or exhibit avoidance behaviours such as taking time off for illness. Physical manifestations may include headaches, diarrhoea, sleep disturbance and palpitations, which may be interpreted as illness and require time off work, further contributing to low staffing levels; this perpetuates a cycle in which understaffing causes moral distress and illness, which leads to time off, which in turn worsens understaffing (Fitzpatrick and Wallace, 2011). Moral distress is associated with "burnout" (emotional exhaustion and extreme stress) and with a reduced sense of professional fulfilment (Dalmolin et al., 2014).

Moral distress and staff retention

Because experiencing moral distress has been linked to harm and stress to nurses, as well as to a reduction in the quality of patient care, many studies have cited it as a reason for nurses leaving the profession, resulting in a reduction in staffing levels and a self-perpetuating cycle of staff shortages (Fitzpatrick and Wallace, 2011; Borhani et al., 2014). Indeed, one study of 102 intensive care nurses in the USA found that as many as 40% had left, or had considered leaving, a job as a direct result of moral distress (Morgan and Tarbi, 2015).

Together, these issues can significantly compromise the quality of patient care and result in "burnout" of nursing staff, causing more to leave the profession to avoid the feelings of guilt that moral distress can cause, particularly in those specialisms typically associated with moral distress, such as oncology or paediatrics. Moral distress also contributes to job dissatisfaction, typically as the result of a discrepancy between the experience the nurse expects to have at an institution and the actual experience (Borhani et al., 2014). This is particularly true of student nurses, who are more likely to have high expectations of the profession they have worked hard to join, and who will be more familiar with the policies and values by which organisations "should" be run than with the reality, where some practices are likely to be sub-optimal or archaic (Wojtowicz et al., 2014; Stanley and Matchett, 2014).

Managing and limiting the impact of moral distress

As previously discussed, moral distress is thought to result primarily either from institutional disorganisation, which can be prevented, or from distressing ethical situations, such as providing futile life-prolonging treatment, which are unfortunately inevitable (Whitehead et al., 2015). However, there are ways in which nurses and their managers can prepare themselves to deal with these situations effectively, thus reducing the impact of the moral distress (Deady and McCarthy, 2010). Although it is important for nursing staff to be supported by their management, ultimately nurses should take responsibility for themselves and their own psychological wellbeing in order to prevent burnout from moral distress (Severinsson, 2003).

Several studies have suggested that the best way to reduce the risk of burnout as a result of moral distress is for nurses to share their feelings and seek support from their peers, ideally in an environment where nurses can share their experiences and discuss the ethical implications of specific situations. It is also important that nurses understand what moral distress is and can identify the source of negative feelings. Psychologically, it is thought to be important that nurses acknowledge and identify these feelings so that they can be processed in a less damaging manner (Matzo and Sherman, 2009; Deady and McCarthy, 2010; Pijl-Zieber et al., 2008). Nurses should also be encouraged to challenge treatment protocols they feel are inappropriate, without fear of reprisal (Deady and McCarthy, 2010). Some researchers have advocated approaches such as nurses emotionally distancing themselves from distressing situations, or actively striving to desensitise themselves. However, it is controversial whether this actually reduces moral distress, and it raises questions about patient welfare: some suggest that it is important that the nurse feels ethically responsible (Whitehead et al., 2015; Severinsson, 2003) and has a degree of emotional involvement in the situation in order to provide the best possible care (Bryon et al., 2012; Linnard-Palmer and Kools, 2005; Severinsson, 2003).

The majority of studies in this area recommend that moral distress be included in the curriculum studied by student nurses, along with practical recommendations regarding measures that can be taken to deal with it as and when it occurs (Wojtowicz et al., 2014; Borhani et al., 2014; Matzo and Sherman, 2009; Stanley and Matchett, 2014; Whitehead et al., 2015), for example in the form of ethical and philosophical discussion that helps students explore their individual moral value systems and emotional responses, and become better informed about the underlying psychological processes involved. Nurses may then better understand the thought processes involved and be better equipped to identify the unhelpful thinking patterns that can result from moral distress, thus limiting stress and avoiding the development of "burnout" (Stanley and Matchett, 2014; Severinsson, 2003).

Several studies have shown that moral distress occurs less in institutions and teams where there is a healthy and positive attitude towards ethics and the discussion of their application (Whitehead et al., 2015). It is therefore important that institutions encourage the development of an ethically healthy environment at all levels of management (Deady and McCarthy, 2010). Additionally, many studies highlight that incompetence in colleagues, and the subsequent errors in patient care, are a primary source of moral distress in nursing staff; institutions should therefore ensure that an adequate quality-of-care monitoring system is in place, preferably one through which staff can raise concerns without fear of reprisal (Whitehead et al., 2015; Stanley and Matchett, 2014). Institutions should also strive to reduce factors such as institutional disorganisation, inadequate resource levels and understaffing (Dalmolin et al., 2014). Anonymous reviews have also identified extreme examples of patient mistreatment and poor care, and a lack of empowerment of student nurses in particular to report or challenge unacceptable behaviour in colleagues; universities and institutions should therefore encourage an environment where this is possible (Rees et al., 2015). Feelings of powerlessness to contest clinical decisions can also be reduced by encouraging collaborative decision-making within teams (Karanikola et al., 2014; Pijl-Zieber et al., 2008).

Healthcare institutions should also recognise their responsibilities in reducing moral distress amongst nursing staff, both to support them correctly and to retain staff and limit absence due to sickness. For example, an institution could appoint a designated ethics consultant who can offer guidance to nurses, and ensure that staff have access to counselling if required to address any psychological distress. The institution could also support the setting up of an ethics discussion forum where staff could discuss troubling situations (Matzo and Sherman, 2009), for example an online forum, which would also provide anonymity to facilitate open discussion. It has been recommended that such groups be cross-disciplinary, as this would allow potentially valuable differing viewpoints to inform discussion and offer solutions or approaches different from those traditionally used by a team (Matzo and Sherman, 2009).

Nursing management staff are thought to experience less moral distress than nurses themselves, presumably as a result of the perceived "distance" between themselves and the questionable moral decision (Ganz et al., 2015). It may therefore also be beneficial for management staff to receive specific training about moral distress, so that they can understand the situation better and provide more effective support to their teams.

Conclusion

Moral distress is a significant factor in nurses leaving the profession. Combating moral distress is important not only for the welfare of nursing staff but also for patients themselves. Healthcare institutions have a responsibility to minimise moral distress as much as possible by improving administrative issues such as staffing levels, team organisation and job satisfaction. However, nurses retain a responsibility to themselves and their patients to reduce moral distress, and thus its impact on patient care as well as on their own health and wellbeing, by actively taking part in activities such as ethical discussion groups and peer support networks. Together, nurses, healthcare institutions and universities can reduce the impact of moral distress by cultivating an environment in which nursing staff can participate in discussions of controversial care plans.

References

Borhani, F., Abbaszadeh, A., Nakhaee, N. and Roshanzadeh, M. (2014). The relationship between moral distress, professional stress, and intent to stay in the nursing profession. Journal of Medical Ethics and History of Medicine, 7, p.3. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25512824 [Accessed: 25 June 2015]

Bryon, E., Dierckx de Casterlé, B. and Gastmans, C. (2012). 'Because we see them naked' – nurses' experiences in caring for hospitalized patients with dementia: considering artificial nutrition or hydration (ANH). Bioethics, 26 (6), pp.285-295. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/21320145 [Accessed: 25 June 2015].

Dalmolin, G. de L., Lunardi, V. L., Lunardi, G. L., Barlem, E. L. D. and Silveira, R. S. da. (2014). Moral distress and Burnout syndrome: are there relationships between these phenomena in nursing workers? Revista Latino-Americana de Enfermagem, 22 (1), pp.35-42. [Online]. Available at: http://www.scielo.br/scielo.php [Accessed: 26 June 2015].

Deady, R. and McCarthy, J. (2010). A study of the situations, features, and coping mechanisms experienced by Irish psychiatric nurses experiencing moral distress. Perspectives in Psychiatric Care, 46 (3), pp.209-220. [Online]. Available at: http://doi.wiley.com/10.1111/j.1744-6163.2010.00260.x [Accessed: 26 June 2015].

Pijl-Zieber, E., Hagen, B., Armstrong-Esther, C., Hall, B., Akins, L. and Stingl, M. (2008). Moral distress: an emerging problem for nurses in long-term care? Quality in Ageing and Older Adults, 9 (2), pp.39-48. [Online]. Available at: http://www.emeraldinsight.com/doi/abs/10.1108/14717794200800013 [Accessed: 26 June 2015].

Fitzpatrick, J. J. and Wallace, M. (2011). Encyclopedia of Nursing Research, Third Edition. Springer Publishing Company. [Online]. Available at: https://books.google.co.uk/books?id=jAE_s82NjtAC&dq=nursing+moral+distress&hl=en&sa=X&ei=WMiLVfSZE8Ke7gaO4IGIBg&ved=0CD8Q6AEwBQ [Accessed: 25 June 2015].

Ganz, F. D., Wagner, N. and Toren, O. (2015). Nurse middle manager ethical dilemmas and moral distress. Nursing Ethics, 22 (1), pp.43-51. [Online]. Available at: http://nej.sagepub.com/cgi/doi/10.1177/0969733013515490 [Accessed: 25 June 2015].

Jameton, A. (1984). Nursing practice: The ethical issues. Englewood Cliffs, NJ: Prentice-Hall.

Karanikola, M. N. K., Albarran, J. W., Drigo, E., Giannakopoulou, M., Kalafati, M., Mpouzika, M., Tsiaousis, G. Z. and Papathanassoglou, E. D. (2014). Moral distress, autonomy and nurse-physician collaboration among intensive care unit nurses in Italy. Journal of Nursing Management, 22 (4), pp.472-484. [Online]. Available at: http://doi.wiley.com/10.1111/jonm.12046 [Accessed: 26 June 2015].

Linnard-Palmer, L. and Kools, S. (2005). Parents' refusal of medical treatment for cultural or religious beliefs: an ethnographic study of health care professionals' experiences. Journal of Pediatric Oncology Nursing: Official Journal of the Association of Pediatric Oncology Nurses, 22 (1), pp.48-57. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/15574726 [Accessed: 25 June 2015].

Matzo, M. L. and Sherman, D. W. (2009). Palliative Care Nursing: Quality Care to the End of Life, Third Edition. Springer Publishing Company. [Online]. Available at: https://books.google.co.uk/books?id=rTexGiX5bqoC&pg=PA121&dq=nursing+moral+distress&hl=en&sa=X&ei=cciLVbDDK-fd7QbR6q3oDQ&ved=0CEMQ6AEwBjgK#v=onepage&q=nursing%20moral%20distress&f=false [Accessed: 25 June 2015].

Morgan, B. and Tarbi, E. (2015). A Survey of Moral Distress Across Nurses in Intensive Care Units (FR416-A). Journal of Pain and Symptom Management, 49 (2), pp.360-361. [Online]. Available at: doi:10.1016/j.jpainsymman.2014.11.091 [Accessed: 25 June 2015].

Rees, C. E., Monrouxe, L. V. and McDonald, L. A. (2015). 'My mentor kicked a dying woman's bed…' Analysing UK nursing students' 'most memorable' professionalism dilemmas. Journal of Advanced Nursing, 71 (1), pp.169-180. [Online]. Available at: http://doi.wiley.com/10.1111/jan.12457 [Accessed: 26 June 2015].

Severinsson, E. (2003). Moral stress and burnout: Qualitative content analysis. Nursing and Health Sciences, 5 (1), pp.59-66. [Online]. Available at: http://doi.wiley.com/10.1046/j.1442-2018.2003.00135.x [Accessed: 26 June 2015].

Stanley, M. J. C. and Matchett, N. J. (2014). Understanding how student nurses experience morally distressing situations: Caring for patients with different values and beliefs in the clinical environment. Journal of Nursing Education and Practice, 4 (10), p.133. [Online]. Available at: http://www.sciedu.ca/journal/index.php/jnep/article/view/5139 [Accessed: 25 June 2015].

Ulrich, C., Hamric, A. and Grady, C. (2010). Moral Distress: A Growing Problem in the Health Professions? Hastings Center Report, 40 (1), pp.20-22. [Online]. Available at: http://muse.jhu.edu/content/crossref/journals/hastings_center_report/v040/40.1.ulrich.html [Accessed: 26 June 2015].

Whitehead, P. B., Herbertson, R. K., Hamric, A. B., Epstein, E. G. and Fisher, J. M. (2015). Moral distress among healthcare professionals: report of an institution-wide survey. Journal of Nursing Scholarship, 47 (2), pp.117-125. [Online]. Available at: http://doi.wiley.com/10.1111/jnu.12115 [Accessed: 25 June 2015].

Wilkinson, J. M. (1987). Moral Distress in Nursing Practice: Experience and Effect. Nursing Forum, 23 (1), pp.16-29. [Online]. Available at: http://onlinelibrary.wiley.com/doi/10.1111/j.1744-6198.1987.tb00794.x/abstract [Accessed: 25 June 2015].

Wilkinson, J. M. (1989). Moral Distress: A Labor and Delivery Nurse's Experience. Journal of Obstetric, Gynecologic, and Neonatal Nursing, 18 (6), pp.513-519. [Online]. Available at: http://doi.wiley.com/10.1111/j.1552-6909.1989.tb00503.x [Accessed: 26 June 2015].

Wilson, M. A., Goettemoeller, D. M., Bevan, N. A. and McCord, J. M. (2013). Moral distress: levels, coping and preferred interventions in critical care and transitional care nurses. Journal of Clinical Nursing, 22 (9-10), pp.1455-1466. [Online]. Available at: http://doi.wiley.com/10.1111/jocn.12128 [Accessed: 26 June 2015].

Wojtowicz, B., Hagen, B. and Van Daalen-Smith, C. (2014). No place to turn: nursing students' experiences of moral distress in mental health settings. International Journal of Mental Health Nursing, 23 (3), pp.257-264. [Online]. Available at: http://doi.wiley.com/10.1111/inm.12043 [Accessed: 25 June 2015].

Challenges faced by Newly Qualified Nurses

This work was produced by one of our professional writers as a learning aid to help you with your studies

It is estimated that approximately 60% of the nursing workforce consists of newly qualified nurses; consequently, there is much literature examining the transition from student nurse to newly qualified nurse (Whitehead, 2001; 2011). The recruitment and retention of nurses is a major issue globally, and healthcare systems therefore need to address how best to smooth the transition into the professional nurse role so that newly qualified nurses adjust successfully (Duchscher, 2008). In facilitating such transitions, great emphasis has been placed upon providing effective work environments in which newly qualified nurses can be supported through supervisors and preceptorship, and in which their views are acknowledged and valued (Department of Health, DoH, 2008; Nursing and Midwifery Council, 2006). Indeed, the policy paper 'A High Quality Workforce' (DoH, 2008) specifically acknowledged the role that the National Health Service must adopt in improving not simply the quality of care but also the quality of support offered to NHS staff. Key DoH (2008) recommendations focused on establishing more effective nursing training, to ensure newly qualified nurses are better prepared for the realities of nursing practice, and on providing avenues for appropriate continued professional development. However, studies still highlight that in reality newly qualified nurses' experiences are not aligned with these recommendations, and nurses still face great challenges and difficulties in adjusting to the newly qualified nurse role (Mooney, 2007; Nash et al., 2009). The aim of this essay, therefore, is to examine the challenges that newly qualified nurses experience as they make the transition into professional nursing practice, and to explore evidence-based strategies that facilitate effective adjustment to the new role.

Nursing role transitions

The difficulties that student nurses experience in making the transition to newly qualified nurse have been highlighted by both the Department of Health (DoH, 2007) and the Nursing and Midwifery Council (NMC, 2006), who raise concerns about whether such nurses are being appropriately prepared to feel confident and competent in their new positions. As the NHS ethos of the 6 Cs of care demonstrates, competence and the courage to act with confidence, alongside communication, collaboration and continuity, are essential aspects of the nurse's role in practising effectively (NHS, 2013). The literature indicates, however, that student nurses are not being effectively supported by either the NHS healthcare system or pre-registration training, resulting in poorly prepared student nurses whose expectations do not translate into their actual new 'professional' nursing roles (Mooney, 2007; O'Shea and Kelly, 2007). As Clark and Holmes (2007, p.1211) state, nursing education does not offer students "the knowledge, skills or confidence necessary for independent practice". As O'Shea and Kelly (2007) also highlight, newly qualified nurses' transitions are further challenged by their limited knowledge of the diverse roles qualified nurses undertake, such as managerial, leadership, decision-making and clinical duties. Studies reveal that newly qualified nurses share similar altruistic values of desiring to help, care for and support patients, which promotes the person-centred model of care (DoH, 2000). In practice, however, organisational constraints (lack of time and staffing problems) combined with managers' high expectations create challenges for new nurses in translating theoretical knowledge and personal values into practice (Mackintosh, 2006). There is therefore much need to determine key strategies that can promote effective transitions and help nurses negotiate their new positions in ways that prevent disillusionment, frustration, stress and potential burnout (Mackintosh, 2006).

The shock of transition

Duchscher (2008) identifies two key processes, socialisation and professionalisation, that occur as student nurses adjust to becoming newly qualified nurses. Duchscher states that in order to adjust effectively to the transition, nurses must modify their professional and personal values so that they are more aligned with the actual role. Duchscher (2008) argues that these changes result in nurses experiencing intellectual, emotive, personal, professional, role, skill and relationship transitions, which lead to new understandings, expectations and, subsequently, experiences. Studies corroborate this by highlighting that newly qualified nurses report the first three months in the role as a sharp shock, as prior, theory-based ideals of person-centred care are often made impossible by the different care practices expected within NHS settings and reinforced within healthcare teams (Kelly and Ahern, 2009; Hollywood, 2011). As multidisciplinary teamwork is a key aspect of NHS policy (DoH, 2010; NHS, 2013), newly qualified nurses can feel coerced into adopting care practices that challenge their theoretical understanding of best practice, which can lead to tensions and, as studies reveal, to distrust and poor staff morale (McDonald, Jayasuriya, and Harris, 2012). The literature shows that newly qualified nurses who feel pressured to follow the practices of other staff can become desensitised to poor practice by rationalising it as a necessary response to environmental pressures, such as time or staffing issues, and may eventually adopt it themselves (Mackintosh, 2006). Mackintosh (2006) highlights how this can lead newly qualified nurses to re-negotiate their roles and re-assess their personal values to accommodate such practices, further reinforcing poor care within NHS settings.

Consequently, as Kelly and Ahern (2009) identified, it is no wonder that newly qualified nurses report finding the transitional process overwhelming and stressful, confirming Mooney's (2007) findings that nurses are unprepared and experience unexpected difficulties. Whitehead (2011) and Scully (2011) argue that such difficulties are the result of a theory-practice gap, which leads to nurses experiencing conflict amongst theoretical, personal and professional values (Maben, Latter and Clark, 2006). Mooney (2007) confirms this in research with newly qualified nurses, who reported that pre-registration training did not prepare them for the realities of actual practice. Mooney (2007) also demonstrated how the high expectations of staff leaders and patients deepened nurses' feelings of lacking skills and knowledge, as no accommodation was made for their newly qualified status and lack of experience, leading to stress and disillusionment (Hollywood, 2011). As Maben et al. (2006) state, such treatment and lack of support places newly qualified nurses in vulnerable situations: they are at a great disadvantage through lack of experience and of appropriate support strategies (Hollywood, 2011).

Addressing stress and expectations

Whilst studies highlight the difficulties that nurses experience in adjusting to the newly qualified nurse role (Whitehead, 2001; 2011), Edwards et al. (2011) reveal that appropriate support can minimise student nurses' anxiety and help to build confidence, through greater understanding of their role and through staff demonstrating acceptance within nursing teams. However, Edwards et al. (2011) identify that staffing issues, staff attitudes and time constraints often leave such nurses unsupported, and can foster inequalities across NHS settings in the level of support provided. Scully (2011) emphasises that in order to provide appropriate support to newly qualified nurses, the political, social, and cultural barriers inherent in such contexts must be addressed to help these nurses overcome the theory-practice gap. As Fenwick et al. (2012) recommend, staff support needs to enable a re-negotiation of newly qualified nurses' expectations, which derive from theoretical training, by offering contexts in which unrealistic expectations of the newly qualified nurse's role can be discussed, so that what Kramer (1974) termed 'reality shock' is prevented. If such strategies are not developed, theory-practice gaps can lead to segregation between newly qualified nurses and experienced staff: when high expectations are placed upon newly qualified staff, they are unable to re-negotiate their new roles because they have no understanding of how those roles can be limited by the particular socio-political and organisational constraints that impede their practice (Maben et al., 2006).

Supportive work environments

Consequently, the NHS environment and organisational culture in which newly qualified nurses find themselves can have a major impact upon how they manage their transitions, forge a new self-identity and come to make sense of the newly qualified nurse role (Mooney, 2007; Whitehead, 2001). A key strategy promoted by the Nursing and Midwifery Council (NMC, 2006) is the employment of preceptors and supervisors to facilitate newly qualified nurses' adjustment to their new practice settings. Preceptorship within a nurse's first year of professional practice can be used to highlight existing strengths and weaknesses, so that areas for development can be identified and addressed; it can also provide a valuable context in which fears, emotions and challenges can be discussed (NMC, 2006). Despite the NMC (2006) recommendations, however, the use of preceptorship support strategies in practice is limited, fragmented and inconsistent across the NHS. The literature does demonstrate that preceptorship strategies can be very effective in supporting newly qualified nurses through such transitions, with student nurses reporting that preceptorship facilitated easier transitions into clinical practice and helped them to negotiate better understandings of their new roles (Mooney, 2007). The findings of Whitehead's (2001; 2011) studies led to the recommendation that newly qualified nurses must have access to preceptorship, clinical supervision and some form of full-time support, so that difficulties can be addressed swiftly and fewer newly qualified nurses leave the profession too hastily without appropriate discussion. As Whitehead (2011) states, social support and peer interaction can help to address and alleviate fears and stress, as nurses are able to access appropriate emotional support and guidance at any time (Mooney, 2007).

A qualitative study by Jonsen et al. (2012) examined the impact of preceptorship support on nurses' successful transition into new practice, identifying three key aspects: preceptors; theory and practice; and reflection. Their findings revealed that student nurses found the availability of support through preceptorship created positive working environments which promoted feelings of security, enhanced confidence and greater clinical effectiveness. As Jonsen et al. (2012) state, preceptorship provides contexts in which nurses are able to reflect upon their clinical practice experiences, enabling them to balance theory with practice and personal with professional values, which in turn facilitates better practice and confidence.

Conclusion

In summary, this essay demonstrates that for student nurses to adapt and make effective transitions to the role of newly qualified nurse, supportive working environments are vital in helping nurses to negotiate the theory-practice gap. NHS settings need to acknowledge, accept and address the unique and individual needs of newly qualified nurses, so that strategies can be employed that facilitate continued professional development and encourage nurses to discuss their actual fears, issues and needs. The provision of preceptors and supervisors is essential in giving newly qualified nurses contexts in which personal and professional values can be discussed, so that they are able not simply to assimilate the dominant practices of the NHS setting but also to question them. Such strategies can offer newly qualified nurses opportunities to reflect upon their practice experiences, make sense of their new roles and re-negotiate new identities. It is therefore recommended that nurse training address the potential transitional difficulties that newly qualified nurses can experience, to better prepare individuals for the realities of professional practice. NHS healthcare contexts must also promote greater access to preceptorship to cater to the specific needs of newly qualified nurses. It is anticipated that through such development, and a universal shift towards enabling newly qualified nurses to access support such as preceptorship, newly qualified nurses will act with greater confidence and feel more supported in their clinical practice.

References

Clark, T., and Holmes, S. (2007) ‘Fit for practice? An exploration of the development of newly qualified nurses using focus groups’. International Journal of Nursing Studies, 44 (7), pp. 1210-1220

Department of Health (2000) NHS Plan. London: DoH.

Department of Health (DoH) (2007) Towards a framework for post registration nursing careers – Consultation document. London: Department of Health.

Department of Health (2008) A high quality workforce. London: DoH.

Duchscher, J. B. (2008). A Process of Becoming: The Stages of New Nursing Graduate Professional Role Transition. Journal of Advanced Nursing, 5(2), pp.22-36.

Edwards, D., Hawker, C., Carrier, J. and Rees, C. (2011). The effectiveness of strategies and interventions that aim to assist the transition from student to newly qualified nurse. International Journal of Evidence-Based Healthcare, 9(3), p.286.

Fenwick, J., Hammond, A., Raymond, J., Smith, R., Gray, J., Foureur, M., Homer, C. and Symon, C. (2012) Surviving, not thriving: a qualitative study of newly qualified midwives' experience of their transition to practice. Journal of Clinical Nursing, (13-14), pp.2054-2063.

Hollywood, E. (2011) The lived experiences of newly qualified children's nurses. Journal of Clinical Nursing, 20(11), pp.665-671.

Jasper, M. (1996). The first year as a staff nurse: the experiences of a first cohort of Project 2000 nurses in a demonstration district. Journal of Advanced Nursing, 24(4), pp.779-790.

Jonsen, E., Melender, H. L. and Hilli, Y. (2012) Finnish and Swedish nursing students' experiences of their first clinical practice placement: a qualitative study. Nurse Education Today, 11(2), pp.8-17.

Kelly, J. and Ahern, K. (2009) Preparing nurses for practice: a phenomenological study of the new graduate in Australia. Journal of Clinical Nursing, 18(6), pp.910-918.

Kramer, M. (1974) Reality shock: Why nurses leave nursing. St Louis: CV Mosby.

Maben, J., Latter, S., and Clark, J.M. (2006) The theory-practice gap: impact of professional-bureaucratic work conflict on newly-qualified nurses. Journal of Advanced Nursing, 55, pp. 465–477.

Mackintosh, C. (2006) Caring: the socialisation of pre-registration student nurses: a longitudinal qualitative descriptive study. International Journal of Nursing Studies, 43 (8), pp. 953-962.

McDonald, J., Jayasuriya, R. and Harris, M. F. (2012). The influence of power dynamics and trust on multidisciplinary collaboration: a qualitative case study of type 2 diabetes mellitus. BMC Health Services Research, 12(1), p.63.

Mooney, M. (2007) Professional socialization: The key to survival as a newly qualified nurse. International Journal of Nursing Practice, 13 (2), pp. 75-80.

Nash, R., Lemcke, P., and Sacre, S. (2009) Enhancing transition: An enhanced model of clinical placement for final year nursing students. Nurse Education Today, 29(1), pp.48-56.

National Health Service (2013) Change management plan. London: DoH.

Nursing and Midwifery Council (NMC) (2006) The future of pre-registration nursing education – NMC Consultation. London: Nursing and Midwifery Council.

O’Shea, M., and Kelly, B. (2007) The lived experiences of newly qualified staff nurses on clinical placement during the first six months following registration in the Republic of Ireland. Journal of Clinical Nursing, 16(8), 1534-1542.

Scully, N.J. (2011) The theory-practice gap and skill acquisition: An issue for nursing education. Collegian, 18(2), pp.93-98.

Whitehead, J. (2001) Newly qualified staff nurses' perceptions of the role transition. British Journal of Nursing, 10(5), pp.330-339.

Whitehead, D. (2011) Are newly qualified nurses prepared for practice? Nursing Times, 107(19/20), pp.20-23.