Was the Cold War inevitable after World War II?

Unless you believe in predeterminism, nothing is inevitable in history. However, some things have a higher probability of happening than others, and it is these probabilities that this study addresses. It looks at possibilities other than the outcome which occurred and explores why these scenarios did not prevail. It then looks at the actual unfolding of events and the deeper history which led to the Cold War emerging between 1945 and 1947/48. It analyses the factors which inclined the world towards ideological polarisation and evaluates which of them was the most significant.

Several outcomes other than an armed, hostile stand-off could have emerged at the end of World War II. There might have been a hot war, with the vast armies of the Soviet Union pitted against the equally powerful armed might of the Western Allies. Alternatively, there could have been electoral successes and popular uprisings by communist and other radical left-wing movements across Western Europe leading to the coming to power of regimes less willing to take a hostile stance towards the USSR. Thirdly, elections in Eastern Europe might have resulted in Soviet influence stopping at her own borders and hence no Iron Curtain “stretching from Stettin to Trieste” (Thomas, 1988, 703). Finally, a more cooperative, consensual and less suspicious approach to diplomacy would possibly have achieved a mutually acceptable rapprochement.

Apart from some hot-headed, dyed-in-the-wool anti-communists, such as General George Patton, there was little desire to start up another war against erstwhile allies. For the politicians of the democracies, initiating a new war would have been political suicide. For Stalin, there was little to be gained since he was in control of sufficient east European territory to create a series of buffer states to protect the Soviet Union (Leffler, 1986). Additionally, the USA had developed and demonstrated the use of the atomic bomb, something which the Russians had not yet mastered. Equally significantly, despite Churchill’s extreme wariness about Soviet post-war intentions in Europe, President Roosevelt was less concerned with ideas of Russian expansionism and he was by far the senior Western partner. He was willing to treat with Stalin, seeing the winning of the war as much more important than manoeuvring for later anti-communist geostrategic advantage (Offner, 1999). Despite his death a month before victory in Europe, his cooperative legacy prevailed long enough to make a shooting war with the USSR a non-starter (van Alstein, 2009).

The prospect of a much more left-leaning political Europe was a genuine possibility. In Britain, the Labour Party won an overwhelming victory in the 1945 election, while in Italy there was a very real possibility of the Communist Party at the very least being a participant in Italy’s first post-war government. Determined that Italy must remain in the Western camp, President Truman authorised the covert transfer of vast amounts of cash to the anti-communist Christian Democrat Party, which proved significant in overcoming the initial broad support for the anti-fascist parties of the left (Mistry, 2014). Even more decisive was the decision to finance and arm the right-wing government in Greece during the civil war which began in 1946. Truman’s support came at a crucial moment when it looked like communist forces might prevail. Significantly, Stalin chose not to back the insurgents, honouring the agreements reached at Moscow in 1944 and the Yalta Conference of 1945 over spheres of influence in Europe. Similar US aid was extended to Turkey to prevent her entering into any agreement with Russia over defence and access to the Mediterranean. Had things turned out differently in those countries, it might well have strengthened the already powerful communist movements in France and Belgium (Gaddis, 2005; Edwards, 1989).

The scenario of elections in the eastern European nations occupied by Soviet forces at the end of the war producing non-communist governments was not impossible, although neither was it likely. Western historians have largely seen the Russians imposing puppet communist governments upon unwilling populaces, but in each country there were strong indigenous communist movements (Theoharis, 1976; Joll, 1973). Once in power, however, each regime refused to submit itself for re-election. This was not wholly because of Russian force of arms, but also because these regimes knew that their hold upon power depended on remaining within the Soviet bloc, and thus they acquiesced in becoming client states. For Stalin they provided a buffer against what he still saw as a threat from the West to the Soviet Union’s very existence (Starobin, 1969). After experiencing foreign intervention in the 1917-22 civil war, international ostracism in the subsequent interwar years, and a brutal, genocidal invasion by Germany, it is not altogether surprising that Stalin was somewhat wary.

It has been argued by numerous revisionist historians that, in the immediate post-war years, Stalin was seeking rapprochement with the West (Zubok & Pleshakov, 1996; Roberts, 1994; Starobin, 1969). This seems persuasive since the Soviet Union was in desperate need of a period of retrenchment after the terrible depredations of the life-or-death struggle against Nazi invasion which it had just endured. There was a shield-wall of buffer states in place; Stalin was both unwilling and unable to expand any further; no attempt was made to incorporate Finland or Austria into the communist orbit despite having ample opportunity to do so; both the Western Allies and the USSR had demobilised the great bulk of their armed forces by 1948; and the West had been given free rein to impose its preferred political set-up in Italy, Greece and Turkey (Hobsbawm, 1994). Why, then, did a period of coexistence free of international tension not emerge?

There seem to be two principal reasons for this: the presidency of Harry Truman, and Western (especially American) ideological intransigence. Truman was a truculent, belligerent individual who had little experience of foreign affairs when he became president upon Roosevelt’s death. He had a very black-and-white, us-and-them view of the world and, despite his lack of knowledge of political belief-systems beyond the USA, was viscerally anti-communist (Costigliola, 2010). Arnold Offner described him as “a parochial nationalist who lacked the leadership to move America away from conflict and towards detente” (1999, 150), seeing his aggressive posturing towards the USSR as a major factor causing Stalin to adopt more hard-line, domineering policies in the Russian zone of influence in eastern Europe.

It was during his speech announcing US aid to Turkey and Greece that Truman first enunciated his Policy of Containment towards the Soviet Union.

[T]otalitarian regimes imposed upon free peoples, by direct or indirect aggression, undermine the foundations of international peace and hence the security of the United States… It must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or outside pressures. (Edwards, 1989, 131)

Truman was setting up the USA as the world’s policeman, and in the process was creating the basis of American policy towards the USSR for the next forty years. The Soviet Union was to be treated as an implacable foe, as the ideological antithesis of what America believed it stood for, and as a state intent on undermining democracy and Western civilisation (Roberts, 1991). As such it was an existential threat which must be opposed and contained everywhere and at all times. Some historians have argued that “Containment” was the wrong term for American/Western aims during the Cold War – the goal was in fact “the collapse and destruction of the Soviet state and system and its displacement by liberal democratic institutions, whatever the rhetoric about co-existence.” (Kimball, 2001, 352) Truman began this policy, marking a distinct break with the consensual approach of his predecessor (Costigliola, 2010).

Obsessive anti-communism so permeated high-level American thinking under successive administrations that almost all foreign policy was seen in terms of defeating the Russians and their evil doctrines. Joseph Siracusa described the USA developing “an increasingly rigid ideological view of the world – anti-communist, anti-socialist, anti-leftist – that came to rival that of communism.” (Siracusa, 2001, 154) The roots of this preoccupation can be traced to the Bolshevik Revolution of 1917, not so much the events or even the consequences for Russia, but rather the self-proclaimed global mission of fomenting world insurrection against the established order, the propertied classes and liberal capitalism. However, during the interwar years, the USSR was not viewed as a dangerously powerful state, and when Stalin promulgated the policy of “socialism in one country” there was even less reason to be proactively hostile. Ideological animosity was still intense, but action was confined to trade embargoes and a refusal to recognise the Soviet Union. It was only in 1933 that Roosevelt extended recognition, when the threat of fascism appeared much greater than that of communism (Roberts, 1991).

As well as the personality and worldview of Truman, events between 1945 and 1948 progressively and cumulatively increased the polarisation and ratcheted up hostility. Among these was the abandonment by Britain and the USA of their commitment to making the Germans pay substantial reparations, something which had been agreed at Yalta and was seen as important and necessary by Russia, which had suffered far worse infrastructural and economic damage than the Western Allies. Choosing the option of rehabilitation over repression (Thomas, 1988), the British and Americans merged their occupation zones into the Bizone, then created the Trizone by adding the French sector, introducing a single currency for the whole area. This established a framework for an integrated administrative economic area in the Western sectors, a development advanced greatly in 1947 by the Marshall Plan (Lewkowicz, 2008). The Marshall Plan was not the simple gesture of a generous United States unselfishly seeking to help a debilitated Europe recover. The aim was to create an Open-Door policy within a free-trade Europe where the USA could freely sell its surplus production and invest its huge capital reserves. Money which was offered as aid came with strings attached. What could be bought and from whom was carefully prescribed, the greater part being American-made goods, while the supra-national decision-making body administering the Plan was dominated by the Americans (Roberts, 1994).

The Russians, initially welcoming the Plan, quickly recognised its underlying economic and political disadvantages. They saw it creating a design for Europe which would work to the benefit of the USA within an ideologically unacceptable framework, and declined to participate. The creation of the Trizone and its further binding together with Marshall Aid was only one step away from the implementation of political integration. Following the Berlin Blockade, this duly happened in May 1949 with the declaration of the Federal Republic of Germany. Five months later the German Democratic Republic was established (Lewkowicz, 2008; Roberts, 1994).

The crystallisation of a bipolar Europe was mirrored in the Far East. As part of a deal struck with Stalin, the Americans were given free rein to restructure both Japan and the Philippines, which they turned into compliant pro-American, pro-capitalist states. Korea was divided between the two blocs, while Vietnam was prevented by the Americans from unifying as one nation under Ho Chi Minh and his nationalist-communist liberation movement. Against all the anti-imperial promises of Roosevelt, Truman encouraged the French to return as colonial masters in the South rather than let the country be united under a left-wing regime (Theoharis, 1976; Herring, 1986). Effectively, the USA was engaging in an economic, ideological and military-backed expansionist policy while accusing the USSR of that self-same activity.

Post-war international relations were always going to tend towards the development of two rival camps, but that is not sufficient to explain the intense hostility which emerged. In early 1945, cooperation was still the dominant paradigm among the Allies, not just to defeat the Axis, but for reasons of future security and peace. Ideological differences were seen more as domestic matters than major shapers of international relations. Soviet expansionism and her claim to zones of influence were regarded largely as conventional Russian nationalist ambitions, and were matched by the Western Allies’ own zones of influence. However, coinciding with the advent of Truman, suspicions and misreadings of the other side’s intentions emerged. Fearing the worst, both began acting upon their misconceived views of the other and started behaving in ways that confirmed their opponents’ preconceptions, creating self-fulfilling prophecies about what the other would do (van Alstein, 2009).

It is not surprising that Stalin acted out of paranoia and suspicion as his domestic record in the late 1920s and 1930s testifies, but Truman was his ideological counterpart in his misreading of Russian intentions and his doggedly anti-communist certainty. William Fulbright summed up the emerging ideological mind-set which would dominate US foreign-policy thinking for four decades and which was the most important factor in creating the reality of the Cold War:

Like medieval theologians we had a philosophy that explained everything to us in advance, and everything that did not fit could be readily identified as a fraud or a lie or an illusion… The perniciousness of the anti-Communist ideology arises not from any patent falsehood but from its distortion and simplification of reality, from its universalization and its elevation to the status of a revealed truth. (Fulbright, 1972, 43)

It was not inevitability which led to the Cold War, but inflexibility.

Bibliography

Costigliola, Frank. “After Roosevelt’s Death: Dangerous Emotions, Divisive Discourses, and the Abandoned Alliance.” Diplomatic History 34, no. 1 (2010): 1-24.

Edwards, Lee. “Congress and the Origins of the Cold War: The Truman Doctrine.” World Affairs 151, no. 3 (1989): 131-141.

Fulbright, J. William. “Reflections: In Thrall to Fear.” The New Yorker, January 1972: 41-43.

Gaddis, John Lewis. The Cold War. London: Penguin, 2005.

Herring, George C. America’s Longest War: The United States and Vietnam, 1950-1975. 2nd edition. New York: Alfred A. Knopf, 1986.

Hobsbawm, Eric. Age of Extremes. London: Penguin, 1994.

Joll, James. Europe Since 1870: An International History. London: Pelican, 1973.

Kimball, Warren F. “The Incredible Shrinking War: The Second World War, Not (Just) the Origins of the Cold War.” Diplomatic History 25, no. 3 (2001): 347-365.

Leffler, Melvyn P. “Adherence to Agreements: Yalta and the Experiences of the Early Cold War.” International Security 11, no. 1 (1986): 88-123.

Lewkowicz, Nicolas. The German Question and the Origins of the Cold War. Milan: IPOC di Pietro Condemi, 2008.

Mistry, Kaeten. The United States, Italy and the Origins of Cold War: Waging Political Warfare, 1945-1950. Cambridge: Cambridge University Press, 2014.

Offner, Arnold A. “‘Another such victory’: President Truman, American foreign policy, and the Cold War.” Diplomatic History 23, no. 2 (1999): 127-155.

Roberts, Geoffrey. “Moscow and the Marshall Plan: Politics, ideology and the onset of the Cold War, 1947.” Europe-Asia Studies 46, no. 8 (1994): 1371-1386.

—. The Soviet Union in World Politics: Coexistence, Revolution and Cold War, 1945-1991. London: Routledge, 1999.

Siracusa, Joseph M. “The ‘New’ Cold War History and the Origins of the Cold War.” Australian Journal of Politics and History 47, no. 1 (2001): 149-155.

Starobin, Joseph R. “Origins of the Cold War: The Communist Dimension.” Foreign Affairs 47, no. 4 (1969): 681-696.

Theoharis, Athan. “The origins of the Cold War: A revisionist interpretation.” Foreign Affairs 4, no. 1 (1976): 3-11.

Thomas, Hugh. Armed Truce. Sevenoaks: Hodder and Stoughton, 1988.

van Alstein, Maarten. “The meaning of hostile bipolarization: Interpreting the origins of the Cold War.” Cold War History 9, no. 3 (2009): 301-319.

Zubok, Vladislav, and Constantine Pleshakov. Inside the Kremlin’s Cold War: From Stalin to Khrushchev. Cambridge, MA: Harvard University Press, 1996.

History of Trade Unions Essay

Brief history of the trade union

The history of the trade union can be seen to have begun in the Industrial Revolution, where the rise of factories and the deskilling of labour led to workers seeking security through collective bargaining agreements. However, these early efforts at unionisation were generally deemed to be illegal, and punished by imprisonment or ‘transportation’ to the colonies, such as in the case of the Tolpuddle Martyrs (Webb and Webb, 1976, p. 23). In the nineteenth century, though, many of the laws that prevented the formation of unions were repealed. As a result, trade unions grew rapidly, supported by the passage of further laws such as the 1906 Trade Disputes Act, which protected employees from being sued for going on strike, provided their strike was carried out by a trade union and met certain rules (Beckett, 2001, p. 22). Indeed, to the present day, trade unions remain the only accepted vehicle through which industrial action can occur.

Role of trade union in the UK

In spite of its prominence, industrial action is actually only one aspect of the trade union’s major role, which is to engage in collective bargaining on behalf of its members. This is important in unskilled and semi-skilled working environments, where individual employees might be unaware of market rates of pay, and thus not able to bargain effectively. Ultimately, this has led to a degree of institutional separation between day to day working practices and the negotiation of wages (Employee Relations, 1990, p. 15). However, it is important to also realise that another role of the trade union is to negotiate these working practices, including the length of shifts, holidays, sick pay and other practices. Finally, the trade union also plays a role in supporting its members if they feel they have been unfairly dismissed, or discriminated against. Here, the union employs legal experts who have knowledge of employment laws, and thus can ensure that employees are treated fairly, such as in the case of Roberts v West Coast Trains Ltd [2004] (BAILII, 2010).

Practical (action & relationship)

The main practical actions that trade unions can take fall into two categories. The first is large scale practical actions by all members, including strikes and other coordinated industrial action. It should be noted that these actions are only triggered by a properly conducted ballot of union members, and hence can occur when the union members disagree with any action taken by management. For example, in 2009 the trade union Unite launched industrial action to prevent Total Oil Company using mainly overseas contractors at its Lindsey Oil Refinery, in spite of the Acas tribunal ruling that this use of contractors was not illegal (Gill, 2009, p. 29). As such, it can be argued that trade unions not only act when the written contract between managers and employees has been broken, but also when the psychological contract has been broken. The other main practical actions that trade unions take are for individual employees, including the legal assistance mentioned above, but also the provision of services such as unemployment benefits, sick pay and even additional pension provision.

Rights of the trade union

Trade unions give employees several important rights that they would not otherwise possess as individuals. First and foremost amongst these is the effective right to strike. Whilst no individual or trade union has a formal right to strike in the UK, striking is not a criminal offence but a civil one (Goswami, 2007, p. 8). As such, an individual who chooses to strike becomes liable for the losses sustained by their employer due to their strike action. However, if a trade union holds a properly conducted ballot, then its members are protected from liability for these actions, effectively giving them the right to strike. The other main right trade unions have is the right to collectively bargain on behalf of their members, thus negotiating a pay settlement for all members that can then be agreed on in a vote of the members. Trade unions may also have the right to legally represent their members in any dispute with the employer, although this will often depend on the structure and rules of the union.

Examples of industrial disputes

One recent dispute that is of interest is the case of British Airways and the trade union Unite. In this case, Unite called for strike action in response to the cost cutting program instituted by British Airways. This case is of interest due to its relation to the psychological contract. The psychological contract holds that employees will help the company make a profit, and in return managers will respect the employees and provide good working conditions (Gill, 2009, p. 29). However, in the case of BA, the company was making massive losses. This indicates that Unite was not interested in the company’s financial problems, and was instead more focused on maintaining its relevance, and the social contract it can be seen to hold with its members. Specifically, the social contract implies that employees will support the union when it calls for strike action, in exchange for receiving the support of the union in other areas (Peyrat-Guillard, 2008, p. 479). A similar example can be seen in the recent dispute between the Rail Maritime and Transport union (RMT) and London Underground. In this case, the union called for a strike claiming that cuts would compromise passenger safety, even though they would not result in any salary cuts or compulsory redundancies (BBC News, 2010).

Relevance and importance of trade union

The relevance and importance of trade unions depends strongly on which analytical perspective is employed. A labour process theory perspective indicates that trade unions play a vital role in defending workers’ rights in the face of the relentless growth of global capitalism and neo-liberal economics (Braverman, 1974, p. 8). This argument holds that as the owners of capital and their agents, the managers, obtain more control over the working process through mechanisation, so workers will become more vulnerable to exploitation. As such, trade unions need to ensure that their efforts to defend their workers match the efforts of managers looking to undermine them. According to this viewpoint, the RMT’s actions in the recent Underground strike were fully justified, since, if it failed to act, the managers would succeed in removing 800 employees, thus increasing management’s ability to exploit the remaining workers. In contrast, a post-structuralist view of the issue indicates that the strike is more likely to be a product of the union attempting to maintain its own power, partly by opposing anything that might allow power to shift towards managers in the future, and partly by maintaining its relevance in the eyes of employees (Foucault, 2003, p. 6). The post-structuralist view thus holds that unions are not particularly relevant or important in a modern capitalist society, and are in fact acting more to maintain their own power than to actually perform their role in society.

Size and components (hierarchy) of the trade union

Trade unions range in size from smaller specialist unions such as the British Orthoptic Society Trade Union, with a membership of just over a thousand (TUC, 2010), to the massive International Trade Union Confederation, which is a federation of 301 affiliated trade unions with a total membership of 176 million workers (ITUC, 2010). There are also smaller unions each representing individual workplaces. In general, the structure of most unions will be set up to allow them to operate as an artificial legal entity. This helps them to carry out negotiations on behalf of their members, as well as ensuring that they can represent their members in the event of any individual disputes. Unions are also mandated by local laws to have a democratic structure and elected leadership in order to ensure that any strike action they take is legal. This is an important aspect of a trade union, as workers themselves do not have an implicit right to strike; they only have protection from legal action if a strike is organised by a union in a properly conducted ballot of members (Goswami, 2007, p. 8).

Conclusion

In conclusion, trade unions still tend to play an important role in protecting workers and helping them enforce their legal rights, particularly in cases when these rights may be uncertain or under debate. Unions will also be able to support employees when they feel that the psychological contract between workers and managers is being breached, and can help workers to renegotiate this contract if necessary. Unfortunately, a post-structuralist view of the trade unions indicates that the unions tend to be more responsive to their own social contract with the workers than to the actual needs and demands of the workplace itself. This can lead to unions behaving in overly militant ways, particularly when they feel their own power and relevance is being threatened.

References

1. BAILII (2010) British and Irish Legal Information Institute. http://www.bailii.org/ Accessed 9th September 2010.

2. BBC News (2010) London Underground strike causes severe disruption. http://www.bbc.co.uk/news/uk-england-london-11209522 Accessed 9th September 2010.

3. Beckett, F. (2001) Bring back the right to strike. New Statesman; Vol. 130, Issue 4528, p. 22.

4. Braverman, H. (1974) Labor and Monopoly Capital. New York: Free Press.

5. Employee Relations (1990) Institutional Separation. Employee Relations; Vol. 12, Issue 5, p. 15-17.

6. Foucault, M. (2003) Society Must be Defended. New York: Picador.

7. Gill, C. (2009) How Unions Impact on the State of the Psychological Contract to Facilitate the adoption of New Work Practices (NWP). New Zealand Journal of Employment Relations; Vol. 34, Issue 2, p. 29-43.

8. Goswami, N. (2007) UK Govt declares in ECJ that strike actions be curtailed. Lawyer; Vol. 21, Issue 2, p. 8.

9. Peyrat-Guillard, D. (2008) Union Discourse and Perceived Violation of Contract: A Social Contract-Based Approach. Industrial Relations; Vol. 63, Issue 3, p. 479-501.

10. TUC (2010) Britain’s unions. Trades Union Congress. http://www.tuc.org.uk/tuc/unions_main.cfm Accessed 9th September 2010.

11. Webb, S. and Webb, B. (1976) History of Trade Unionism. New York: AMS Press.

Thirty Years War – Relations Between States

What was the significance of the Thirty Years’ War (1618-1648) for the relations between States? To what extent is Modern diplomacy Renaissance diplomacy in disguise?

The conflicts known as the Thirty Years’ War fundamentally altered the balance of power in Europe. Indeed, it could certainly be argued that modern diplomacy is ‘Renaissance diplomacy in disguise’, largely as a result of this. The conflict forced into being allegiances and alliances that can still be seen today and which in part shaped subsequent conflicts within European nation states.

Initially, the Thirty Years’ War was a religious conflict, albeit one resulting from a ‘complex sequence of events’, but it quickly escalated into a more comprehensive power struggle in the Holy Roman Empire:

The Thirty Years’ War may be viewed from two aspects – a European and a German one. In respect of the first, it was the last of the great religious wars, closing the epoch of Reformation and Counter-Reformation, proving to the Catholic Powers of Europe that their ideal unity was no longer attainable and teaching mankind, by the rudest possible process, the hard lesson of toleration. In respect of the second, it had a somewhat similar effect. Germany was a Europe in miniature; her nominal unity under the Hapsburgs was a parallel to the Catholic ideal unity of Europe under the Pope and the Emperor. This unity was blasted forever by the muskets of the opposing armies. But worse than this; when the war began Germany was a rich country, as the countries of Europe then went. She was really full of cities, which, though their main threads of commerce were fast snapping, might yet fairly be called very flourishing. When the war ended she was a desert.

The decimation is extremely significant, since it gives an insight into how the proactive, even aggressive, aspect of modern German territorial diplomacy can be traced back historically, with Renaissance diplomacy allied to it in embryo. In addition, it can be seen that the conflict itself was an integral part of the way in which countries were perceived and of how they perceived themselves.

For example, ‘Renaissance Denmark’ was expanding and wished to contest with Sweden control of ports on the Baltic which were in German hands. This period of aggression facilitated individual concerns such as this, both within and outside the Empire, as well as exposing entrenched grievances and the power which the Church held over lands, and which it was reluctant to give up even after changing religion: ‘Everything depended on bringing the doubtful ecclesiastical principalities into the hands of men whose power and whose orthodoxy should alike be undoubted’. Thus, it can be seen that the Thirty Years’ War is not easy to define in terms of the precise nature of its cause. Like most conflicts, its outbreak was due to reasons many and various, and its progression and aftermath reflected this state of fragmented relations. In many ways, the Thirty Years’ War was as much evidence of the failure of Renaissance diplomacy as anything else:

That particular moment in history, the dozen years between the Twelve Years Truce of 1609 and the fatal spreading of the Thirty Years’ War, offered Spanish diplomats a unique opportunity. Between 1598 and 1609 some sort of peace was patched up, first with France, then with England, and finally with the rebellious provinces of the Netherlands so that, although many problems were left unsolved, there was again something like a community of nations in which diplomats had room to manoeuvre. At the same time, though Spanish power was little more than a husk, Spanish prestige was scarcely diminished.

The significance of the Thirty Years’ War for the relations between States, therefore, is to a great extent connected with both contemporary and modern diplomacy, with the current diplomatic relations and practices amongst states very much an echo of Renaissance diplomacy ‘in disguise’. The ‘patched up’ peace referred to above had an inherent inevitability of failure because it did not take into account the way in which individual imperatives would not only conflict with but also capitalise on the years of conflict:

In 1618, over half a century of festering religious, dynastic, and strategic tensions erupted into civil war in the Holy Roman Empire, subsequently engulfing the entire European continent in thirty years of exhausting and utterly devastating warfare. Wars are seldom simple affairs, but the Thirty Years’ War was even more complex than most, prompting endless scholarly debates about its causes and the motives of the major protagonists.

Thus, it can readily be seen that, as with the Danish desire to wrest Baltic control from the Germans, the long-held ambitions of individual states were given the capacity to develop under the umbrella of the conflicts of the Thirty Years’ War. Clearly, the tensions which continue to exist within Europe and the way in which modern diplomacy operates can be seen to be rooted in the same entrenched desires. Indeed, even the ending of the protracted conflict in the terms of the Peace of Westphalia has strong resonance for contemporary commentators on the way a diplomatic resolution at one point can evolve into further conflict, just as the terms of the Treaty of Versailles are frequently seen as a root of the Second World War:

The product of seven years of diplomatic wrangling and protracted negotiation, the Peace of Westphalia is generally seen as a crucial watershed in the transition from a heteronomous system of rule to a system of territorial sovereign states. […] This representation of the Westphalian settlement has attained almost canonical status in the discourse of international relations, but it should not be overdrawn. Significant as they were, the Treaties of Munster and Osnabruck were but one step in the territorialization of sovereign authority.[…] the Treaties of Westphalia played a crucial role in defining the scope of territorial rule. That is, they defined and codified an historically contingent range of substantive areas over which princes and monarchs could legitimately exercise political authority. The geographical extension of these political rights, however, was left ill-defined, with the reach of dynastic ties and ancient feudal rights defying the clear territorial demarcation of sovereignty.

The relations between states which the Thirty Years’ War established were entrenched within territorial sovereignty and this was to have extreme repercussions over succeeding centuries of diplomacy. From this point onwards, in fact, sovereignty would be a major issue in all diplomatic negotiations and remains so. Indeed, it might be argued that the Peace of Westphalia was as significant for what it failed to achieve as for what it accomplished, since it established political authority but could not take into account the extent of historical and dynastic connections or the influence of feudal rights, as stated above. Therefore, the creation or affirmation of sovereign states at the culmination of the Thirty Years’ War attempted to impose or redefine territorial rights which in many cases went against historical claims:

In medieval Europe political authority was decentralized and nonexclusive; a multitude of actors held rights to rule, and the content and jurisdictional purview of these rights varied temporally, spatially, and substantively, often overlapping in complex and contradictory ways. Originally, such rights were held en fief, bestowed by a superior lord in return for aid and counsel. In the late medieval period, however, the possession of feudal rights hardened; the idea that they were bestowed from above and maintained by conditional bonds of mutual obligation receded into the background, and feudal rights came to be seen as patrimony, as rightful inheritance.

The Peace of Westphalia, in creating sovereign states, altered the essential dynamic of relations between states. The new treaty, ending the Thirty Years’ War, did not take account of these rights, replacing them with a new system of sovereign rule created, it is often said, by faulty law and Machiavellian diplomacy. Hinsley argues that the Treaties of Westphalia “came to be looked upon as the public law of Europe”, but that law is questionable in its effectiveness, validity and viability, both contemporaneously and in its effects in the present day. The significance of the Thirty Years’ War for the relations between States, therefore, is closely allied to the assessment of to what extent Westphalia was effective, or indeed lawful:

It is clear from the texts of the treaties, and from accounts of the negotiations, that the settlement’s legality did not derive from the existence of formal ‘contractual’ agreements between the princes and monarchs of Europe, or at least not primarily. The treaties were written and duly signed accords, but the bases of their legal sanctity lay elsewhere.

The diplomacy which brought about this somewhat equivocal ‘peace’, therefore, needs to be examined in greater detail.

Renaissance diplomacy relied heavily upon the desire of those with power to expand and protect that power. The territorial claims which emerged and developed during the course of the Thirty Years’ War had little to do with the original religious conflicts between Protestants and Catholics which began the war in the Holy Roman Empire, and everything to do with a Renaissance diplomacy rooted in the desire to maintain and expand power. Rulers, of feudal origin, had mostly seized the power they possessed, could not rely on the loyalty of their subjects and needed to sustain their wealth by territorial alliances and self-serving allegiances. Morality had little, if anything, to do with Renaissance diplomacy, which was concerned instead with what could be gained by power-hungry individuals, including the ‘princes of the Church’, who even when they changed their religion were reluctant to sacrifice the control they had.

After the creation of sovereign states by the Peace of Westphalia, these feudal lords were often compelled to comply with laws that were imposed upon them, elements of which can still be seen in significant aspects of contemporary diplomacy, for example permanent ambassadors with rights of immunity, and foreign offices. Both of these were designed, and still are, to ensure that the power of a sovereign state was established and upheld in territories beyond its own. In this, it can clearly be seen that modern diplomacy is indeed Renaissance diplomacy in disguise. Further, the territory of an embassy was established as being a foothold of the sovereign state it represented even though situated in a foreign land. All of these aspects of Renaissance diplomacy remain fundamental to modern diplomacy and to the relationship between states. The European Union has affected this in the abstract but has struggled to enforce its challenges in practice, with sovereignty a passionately debated issue in most treaties. Therefore, although it is important to remember that ‘the peace treaties [of Westphalia] do not specifically include much evidence for the claim that Westphalia is the crucial turning-point in the emergence of sovereignty [and] there is no mention of the word “sovereignty”’, essentially the diplomacy which produced the treaty at the end of the Thirty Years’ War is still in place, and even though ‘with the birth of the Modern Age […] the diplomatic institution underwent a profound transformation’, its connection with Renaissance diplomacy is still evident.

In conclusion, it must be stated that after the Thirty Years’ War the power of individual feudal laws and lords was irrevocably diminished, and there was more incentive for sovereign nation states to form mutually beneficial alliances. However, as is clear from wars between such states through the centuries, the settlement could not achieve lasting peace, though its role in developing relations between nation states and establishing diplomatic procedure was profoundly important:

Westphalia is not a literal moment of political transformation but, rather, the symbol of that change. Westphalia symbolized putting one of the final and most decisive nails in the coffin of the medieval claim that all European states were subject to the spiritual leadership of the pope and the political leadership of the Holy Roman Emperor. After Westphalia that was a hollow claim. As one historian puts it: ‘this extraordinary compromise saved the theory of religious unity for each state while destroying it for the Empire.’ A modern society of sovereign states had been created out of the political debris of a ruined medieval Christian empire.

Nevertheless, there is a school of thought that plays down the importance of Westphalia, stating particularly that ‘the rise of the sovereign state was over three centuries old by the time of Westphalia’. Notwithstanding this, the profound effect that the Thirty Years’ War had on the development of the recognition of the sovereign state is difficult to overestimate and certainly, there is strong evidence of Renaissance diplomacy today.

Bibliography:

Jose Calvet De Magalhaes, Bernardo Futscher Pereira, The Pure Concept of Diplomacy, (Greenwood Press, New York, 1988).

Kevin Cramer, The Thirty Years’ War and German Memory in the Nineteenth Century, (University of Nebraska Press, Lincoln, NE., 2007).

Paul Douglas Lockhart, Frederik II and the Protestant Cause: Denmark’s Role in the Wars of Religion, 1559-1596, (Brill, Boston, 2004).

C.R.L. Fletcher, Gustavus Adolphus and the Struggle of Protestantism for Existence, (G. P. Putnam’s Sons, New York, 1890).

Ernest F. Henderson, A Short History of Germany, (Macmillan Company, New York, 1902).

Robert Jackson, The Global Covenant: Human Conduct in a World of States, (Oxford University Press, Oxford, 2003).

Andrew Macrae, ‘Counterpoint: The Westphalia Overstatement’, International Social Science Review, Vol. 80, 2005.

Garrett Mattingly, Renaissance Diplomacy, (Dover Publications Inc., Mineola, NY, 1989).

Christian Reus-Smit, The Moral Purpose of the State: Culture, Social Identity, and Institutional Rationality in International Relations, (Princeton University Press, Princeton, NJ., 1999).

The Need for Sustainable Construction

Problem Specification

If everyone on the planet were to consume natural resources and generate carbon dioxide at the rate we do in the UK, we would need three planets to support us. Sustainability is becoming a central concern for all of us. It is a concern that has grown out of wider recognition that rising populations and economic development are threatening a progressive degradation of the earth’s resources.
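
The “three planets” figure follows from a simple ecological-footprint ratio. As a rough sketch of the arithmetic – using illustrative values of about 5.4 global hectares (gha) of productive land and sea demanded per UK resident against about 1.8 gha of biocapacity available per person worldwide, figures assumed here for illustration rather than taken from the source:

$$\text{planets required} \approx \frac{\text{UK per-capita footprint}}{\text{global per-capita biocapacity}} \approx \frac{5.4\ \text{gha}}{1.8\ \text{gha}} = 3$$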

The construction, maintenance and use of housing impacts substantially on our environment and is currently contributing significantly to irreversible changes in the world’s climate, atmosphere and ecosystem. Housing is by far the greatest producer of harmful gases such as CO2, and this eco-footprint can only increase with the large population growth predicted to occur by 2050. What sustainability means is adapting the ways we all live and work towards meeting needs, while minimising the impacts of consumption: providing for the people of today without endangering the generations of tomorrow.

A Government report on the economic impact of climate change has criticised the training and organisation of the construction industry. The Stern Report, by Sir Nicholas Stern, the World Bank’s former chief economist, says the lack of co-ordination between elements of the industry creates poor quality, energy-inefficient housing.

It says architects and other consultants require more training on the principles of sustainable design and efficient technologies, and that policies need to be put in place to inform decisions made at the design stage of a building. As a result of the report, the Government has set legally binding targets of a 26-32 per cent reduction in CO2 emissions by the year 2020, and an independent body will be introduced to advise on and monitor the Government’s policies on the subject.

The drive for more sustainable development is one of the defining issues of the early 21st Century. It is often said that the costs of today’s lifestyles are such that future generations will pay a high price through reduced environmental quality and living standards. However, it is also perceived that the short term costs of more sustainable practices are too high to justify their application in a competitive property market.

Government plans for sustainable housing apply to both new builds and existing dwellings. The construction industry as a whole is responsible for finding new materials and building methods, and the Government is tasked with educating the general public on the sustainable features they can add to their homes to ensure sustainability.

Despite substantial advances in best practice, there is a lag in the application of more sustainable solutions that improve building performance beyond that required by Building Regulations. There are many reasons for this, not least a lack of client/customer demand; however, one of the most cited is that more sustainable alternatives are prohibitively expensive. Typically, cost consultants can add a significant margin of as much as 10% to capital costs to allow for more sustainable solutions (Cyril Sweett).

Often the most powerful and direct driver for addressing sustainability is that the client, funder or planning authority has made it a key project requirement. In order to meet this requirement, everyone involved in a construction project must re-think their operations in areas such as energy, materials, waste and pollution. The focus of this essay is on choosing, using, re-using and recycling materials during design, manufacture, construction and maintenance in order to reduce resource requirements and, essentially, lower the costs of a project.

The design of a sustainable home and the materials used during construction are key factors in reducing CO2 emissions from transport and operational energy, reducing mains water consumption, reducing the impact of materials used, reducing pollutants harmful to the atmosphere and improving the indoor environment.

It is claimed all of these can be done with an increase in capital costs of just 3% (John Shore). The aim of the essay, therefore, is to examine the need for sustainable construction and to identify the real costs of sustainable solutions, thereby tackling a key barrier to the industry in advancing the sustainability agenda.

Literature review

There are many articles, journals and reports that look into sustainable housing in the UK, many of which begin by explaining the extent of the problems global warming will bring and how the construction industry has contributed to this; it has been well documented in the national news on a regular basis. The Climate Change Bill, which was included in the 2006 Queen’s Speech, was the beginning of the Government acting upon the information it was being given, which indicated a strong need for change sooner rather than later.

This led to reports including ‘Low Cost Homes: economical eco-options on the rise’ (Hall 2007) and articles such as ‘Green construction costs dramatically lower than believed’ (World Business Council for Sustainable Development 2007). The latter article was produced on the back of findings from a WBCSD survey showing that green construction costs were being overestimated by 300%. Respondents to the 1,400-person survey estimated the additional cost of building green at 17% above conventional construction, more than triple the true cost difference of about 5%.

At the same time, survey respondents put greenhouse gas emissions from buildings at 19% of the world total, whereas the actual figure of 40% is more than double this. In reaction to the report’s coverage on the Euractiv website, the RICS has emphasised its ‘Green Value’ study, which shows that while there are signs of an increasing market value for green housing, industry stakeholders still seem to be failing to get the message across that the main beneficiaries are the housing occupants.
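
The scale of these misperceptions is simple arithmetic: taking the survey’s figures at face value, respondents overstated the green cost premium by more than a factor of three (the ‘300%’ overestimate cited above) and understated buildings’ share of emissions by more than half:

$$\frac{\text{estimated cost premium}}{\text{actual premium}} \approx \frac{17\%}{5\%} = 3.4, \qquad \frac{\text{estimated emissions share}}{\text{actual share}} = \frac{19\%}{40\%} \approx 0.48$$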

Hall’s report identified the issue of the Government insisting that all new homes in Britain must be carbon-neutral by 2016, putting pressure on developers to come up with good design that doesn’t cost the earth – financially or environmentally. Hall went on to say that, at the moment, going green costs money and most private sector developers are reluctant to see beyond their profits, but that eco-friendly innovation is coming from elsewhere – namely social housing.

Costing Green: A Comprehensive Database (Matthiessen & Morris) used extensive data on building costs to compare the cost of green housing with that of comparable housing programs which do not have sustainable goals. The report concluded that many projects achieve sustainable design within their initial budget, or with very small supplemental funding, which suggests that home owners are finding ways to incorporate project goals and values, regardless of budget, by making choices.

The Stern Report, published by Sir Nicholas Stern, head of the Government Economic Service and adviser to the Government on the economics of climate change and development, suggests that global warming could shrink the global economy by 20%. The review coincided with the release of new data by the United Nations showing an upward trend in emissions of greenhouse gases – a development for which Sir Nicholas said that rich countries must shoulder most of the responsibility.

The study is the first major contribution to the global warming debate by an economist, rather than an environmental scientist. Gordon Brown, then Chancellor of the Exchequer, who commissioned the report, also recruited former US Vice-President Al Gore as an environment adviser.

However, the report has sparked furious debate among economists. One example of why it has done so is the argument that, if the economy grows at current levels, the cost of mitigation will be smaller than Stern estimates, so that acting now would mean paying more; it is said that we could save money by addressing the issues as and when they erupt. By forecasting how global warming will affect the environment, Stern has set himself up for criticism such as this from the many people who do not share his views and concerns.

Gathering relevant information on the true costs of sustainable housing is not a problem, with so many government and independent studies, articles and journals being produced. This data can be compared against the price of housing without sustainable goals, which is found in construction pricing books such as Spon’s Architects’ and Builders’ Price Book (Davis Langdon). In comparing the prices, the essay will either verify or falsify the hypothesis: sustainable construction can be attained with very little additional costs to that of construction without sustainable characteristics. For the purpose of this hypothesis, ‘very little additional costs’ is defined as ranging from 0% to 10% additional costs.

Methodology

Chapter 2 of this essay will be a review of the literature on sustainable construction in regards to the principles of sustainable construction, sustainable construction policies and practices in the UK, and the economic benefits of sustainable construction. The results of this research show that the business benefits are real and can be illustrated by many pioneering projects in the UK. However, the misperception of higher capital cost and the lack of awareness of market value are still significant barriers to the implementation of, and demand for, sustainable construction. It is critical, therefore, to establish the economic performance of sustainable construction in order to motivate stakeholders to consider methods of sustainable construction.

This subject has attracted mass-media attention in recent years, meaning that existing literature – such as numerous Government and independent reports, as well as the Climate Change Bill introduced to help prevent the environmental situation we find ourselves in from becoming worse – will be an excellent source of information to explain thoroughly why there is a need for change and what sustainable construction entails from an economic perspective.

Chapter 3 will be researching the various sustainable construction materials and methods that are available to the industry. Each one of these will be looked at in detail to explain how they work, what exactly is involved with them and how they are deemed sustainable.

Although they are still not widely used, there are plenty of companies offering sustainable building materials and construction services. Many of these companies are accessible through online websites promoting sustainable construction and offering their services, and they will be good sources for gathering the information needed in order to give a comprehensive review of the sustainable materials and methods that are available.

Using the information gathered in chapter 3, this essay will then be finding out the costs incurred when using these sustainable construction materials and building methods and comparing them against the non-eco-friendly methods that most contractors currently choose to incorporate. This information will make up chapter 4 and will ultimately go on to either verify or falsify the hypothesis: ‘sustainable construction can be attained with very little additional costs to that of construction without sustainable characteristics.’

This structure has been carefully chosen to gain as much relevant information as possible and to compare the two methods of construction against one another. In chapter 2, a review of existing literature will be used. The reason for this is that the subject of sustainable construction and its financial factors has already been investigated, and numerous authors have written up their findings and ideas on the subject. These findings will be reviewed in order to pull out the parts relevant to this essay.

Chapter 3 will consist of an in-depth look into the types of sustainable materials and building methods that are available to the construction industry. This will take the form of a mix between a review of existing literature and a survey of service providers’ views, feelings and attitudes towards sustainability.

Chapter 4 will be a comparative analysis of sustainable building and material costs against the costs of materials and building methods without sustainable characteristics. The information found in chapter 3 will form the argument for sustainability. The argument for construction without sustainable characteristics will come from pricing books used throughout the industry. Once both sides’ costs have been discovered, they will be weighed up against each other, which will verify or falsify the hypothesis. If sustainable construction can be provided at an extra cost of 5% or less, then the hypothesis will be verified.
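
Below is a minimal sketch of the comparison logic planned for chapter 4, written in Python. The building elements, costs and threshold are hypothetical placeholders; real inputs would come from supplier quotations and pricing books such as Spon’s.

```python
# Sketch: verify or falsify the hypothesis that sustainable construction
# costs only a small percentage more than conventional construction.
# All figures are illustrative assumptions, not data from the cited sources.

THRESHOLD = 0.05  # hypothesis verified if the uplift is 5% or less

# (conventional cost, sustainable cost) per building element, in GBP
costs = {
    "external walls": (42_000, 43_400),
    "insulation": (6_000, 6_400),
    "heating system": (9_500, 9_900),
}

conventional_total = sum(conv for conv, _ in costs.values())
sustainable_total = sum(sust for _, sust in costs.values())

# Percentage uplift of the sustainable build over the conventional one
uplift = (sustainable_total - conventional_total) / conventional_total

print(f"Conventional total: £{conventional_total:,}")
print(f"Sustainable total:  £{sustainable_total:,}")
print(f"Uplift: {uplift:.1%}")
print("Hypothesis verified" if uplift <= THRESHOLD else "Hypothesis falsified")
```

With these placeholder figures the uplift works out at about 3.8%, which would verify the hypothesis; substituting real priced data for each element is the substance of the chapter.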

Introduction to Sustainable Construction

In 1987, the Brundtland Report, also known as Our Common Future, alerted the world to the urgency of making progress toward economic development that could be sustained without depleting natural resources or harming the environment. The commission which produced it was chaired by the Norwegian Prime Minister of the time, Gro Harlem Brundtland.

The report was primarily concerned with securing global equity, redistributing resources towards poorer nations whilst encouraging their economic growth. The report also suggested that equity, growth and environmental maintenance are simultaneously possible and that each country is capable of achieving its full economic potential whilst at the same time enhancing its resource base. The report also recognised that achieving this equity and sustainable growth would require technological and social change.

The report went on to highlight three primary areas from which sustainable development should come: protection of the environment, economic growth and social equity. It is imperative that our environment is protected and our resource base enhanced, by gradually making the necessary changes in the way we develop technologies and put them to use.

Developing nations must be allowed to meet their basic needs of employment, food, energy, water and sanitation. If this is to be done in a sustainable manner, then there is a definite need for a sustainable level of population. Economic growth should be revived and developing nations should be allowed growth of equal quality to that of the developed nations.

The Brundtland Report has often been subject to criticism, on the grounds that many of its forecasts have not come true. However, such criticisms are perhaps missing the significance of the report and the fact that despite inaccuracies in forecasting, the Brundtland Report’s premise of the need for global environmental action has not been invalidated.

Back in 1994, the first sustainable construction conference was held in Tampa, USA. This conference is seen as the starting point for eco-friendly building becoming a global issue. The UK construction industry has so far used sustainable construction as a way to respond to the criticism that fell upon the industry, as it was seen to be one of the main contributors of greenhouse gas emissions.

There are numerous examples of housing in the UK that have been constructed with sustainable characteristics, providing a healthier way of living for the occupier and of building for the developer. However, these examples tend to be bespoke designs for clients who themselves choose to build and live in a sustainable home. The idea of sustainable developments has yet to catch on in the UK. Perceived higher risks and extra costs are the main factors in this lack of uptake by the industry. It is becoming clear that the whole concept of sustainable construction is going to face some barriers in terms of economic justification.

Incurring higher risks and costs is not the only issue. The market value of sustainable construction is also not being considered by clients and developers. Zhou and Lowe (2003) said:

The current economic measuring tool (life cycle costing) is very effective at illustrating the long term value of sustainable construction, but at the same time is limited when showing the initial cost reduction.
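To illustrate the point Zhou and Lowe make, a minimal life cycle costing sketch is given below. All figures, and the 3.5% discount rate, are illustrative assumptions rather than values from the literature: a sustainable option with a higher initial cost can still show the lower whole-life cost once discounted running costs are included, which is precisely the long-term value that an initial-cost comparison hides.

```python
# A minimal life cycle costing (LCC) sketch: initial cost plus the present
# value of annual running costs over the building's life. The figures and
# the 3.5% discount rate are illustrative assumptions only.

def life_cycle_cost(initial: float, annual_running: float,
                    years: int, discount_rate: float = 0.035) -> float:
    """Initial cost plus discounted annual running costs."""
    pv_running = sum(annual_running / (1 + discount_rate) ** year
                     for year in range(1, years + 1))
    return initial + pv_running

# Hypothetical comparison over a 30-year life: the sustainable option costs
# 5% more up front but has lower running costs, and wins on whole-life cost.
conventional = life_cycle_cost(initial=100_000, annual_running=2_500, years=30)
sustainable = life_cycle_cost(initial=105_000, annual_running=1_500, years=30)
print(round(conventional), round(sustainable))  # roughly 145980 vs 132588
```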

If those involved in the UK construction industry continue to be motivated by short-term financial gain as opposed to the long-run consequences of their actions, then the future does not look bright for the idea of sustainable construction.

Hydes and Creech (2000) said, “Sustainability is a holistic concept that holds economic, social and environmental factors in balance; moreover it is a complex concept, which is hard to define in simple terms.” This statement recognises that clients and developers should not only take their financial rewards into consideration, but should also consider the consequences that the environment and our society are reportedly beginning to experience.

Pearce et al. (1989) concluded that: “There have been over 200 different definitions of sustainability, making it extremely difficult to determine practical ways to support sustainability.” This statement also highlights the problem that the industry has still not come to an agreement on the actual definition of sustainability, making its adoption into recognised practice unlikely, as people simply do not know, or do not want to know, what their role could be in reducing the problems of global warming.

In July 2005, the then Chancellor of the Exchequer, Gordon Brown, announced that he had asked Sir Nicholas Stern to lead a major review of the economics of climate change, to understand more comprehensively the nature of the economic challenges and how they can be met, in the UK and globally.

The main conclusions from the report were that 1% of global gross domestic product (GDP) per annum needed to be invested in order to avoid the worst effects of climate change, and that failure to do so could risk global GDP being up to 20% lower than it otherwise might be. Stern’s report suggested that climate change threatens to be the greatest and widest-ranging market failure ever seen, and it provided prescriptions including environmental taxes to minimise the economic and social disruptions. Stern stated,

Our actions over the coming few decades could create risks of major disruption to economic and social activity, later in this century and in the next, on a scale similar to those associated with the great wars and the economic depression of the first half of the 20th century.

It was the findings in this report that prompted the UK Government to introduce the Climate Change Bill. It was introduced to: combat climate change by setting annual targets for the reduction of carbon dioxide emissions until 2050; to place duties on the Prime Minister regarding the reporting on and achievement of those targets; to specify procedures to be followed if the targets are not met; to specify certain functions of and provide certain powers to Members of Parliament with regard to ensuring carbon dioxide emissions are reduced; to set sectoral reduction targets and targets for energy efficiency, the generation of energy from renewable sources, combined heat and power and micro-generation; and for connected purposes.

This Bill was outlined in the Queen’s Speech, and would also see the setting up of a ‘Carbon Committee’ to ensure the targets are met. Announcing the Government’s planned legislation for the forthcoming parliamentary session, the Queen told MPs and peers: “My Government will publish a Bill on climate change as part of its policy to protect the environment, consistent with the need to secure long term energy supplies”.

The construction industry uses vast quantities of natural resources such as energy, water, materials and land, and produces large amounts of waste, in the region of 70 million tonnes per annum sent to landfill. The Brundtland definition of sustainable development, “Development that meets the needs of the present without compromising the ability of future generations to meet their own needs”, informs us that this cannot continue. There is a big difference between the environmental impacts of a poorly performing building and what is achievable using current practice. If we are to deliver the legally binding targets set by the Government, we must ensure that today’s housing meets best practice (BREEAM).

BREEAM is the world’s longest standing and most widely used environmental assessment method for housing. It sets the standard for best practice in sustainable development and demonstrates a level of achievement. It has become the vocabulary used to describe a building’s environmental performance.

The BRE Sustainable Communities team is involved with aiding local authorities, land owners and developers to identify the relevant sustainable development opportunities available to help deliver sustainable communities. It works with them to provide an assessment framework to guide sustainable developments, and to allow developers to demonstrate the sustainability features of their proposals to the local planning authority.

The benefits are said to be enormous, and cost effective. Developers can assess the sustainability of proposed designs iteratively, and understand their strengths and weaknesses. Expensive reworking is avoided by considering issues at the right stage of the design process. The value in this approach for developers and land owners is that sustainability credentials are presented to both the local planning authority and, importantly, to potential purchasers.

Our homes account for some 27% of the UK’s CO2 emissions. For this reason, in order to meet its targets for cutting carbon emissions by some 60% by 2050, the Government has announced that, as part of the new Home Information Pack which all homes sold after June 1st 2007 must make available, every home should have an energy rating. The so-called Energy Performance Certificate will give home buyers A to G ratings for their home’s energy efficiency and carbon emissions. It will also tell them the current and average costs for heating, hot water and lighting in the home. This helps the Government meet the EU target for all homes to have energy ratings by 2009.
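As a rough illustration of how a home’s energy score maps to the A to G bands on an Energy Performance Certificate, a sketch follows. The band thresholds used are the commonly published SAP score bandings and should be treated as an assumption here, since the text above does not specify them.

```python
# Hedged sketch of mapping a SAP energy score to an EPC band letter.
# The thresholds are the commonly published SAP bandings (an assumption
# here, not taken from the text above).

def epc_band(sap_score: int) -> str:
    """Map a SAP score to an A-G Energy Performance Certificate band."""
    bands = [(92, "A"), (81, "B"), (69, "C"), (55, "D"), (39, "E"), (21, "F")]
    for threshold, band in bands:
        if sap_score >= threshold:
            return band
    return "G"  # scores below 21 fall into the lowest band

print(epc_band(72))  # a mid-range home would typically rate band C
```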

Changes to an existing dwelling or additional features on a new build are always going to give an immediate impression of ‘extra costs’, and getting people to dig a little deeper into their pockets is always going to be a difficult task, whatever the reason. The fact that the public generally overestimate the cost of these new construction methods and features increases the difficulty the Government has in achieving its targets.

This chapter has looked into how and why sustainability has become such a big issue in recent years. The Brundtland Report, which is said to have started it all and was published over 20 years ago, outlined the potential problems that were beginning to arise. Although not everything it forecast has come to pass, it cannot be said that the environmental problems it predicted have failed to materialise. Chapter 3 will now go on to look at the sustainable construction methods and materials that are available to be implemented into the industry’s everyday practice.

Sustainable Construction Materials and Methods

There is an urgent need to address the great challenges of our times: climate change, resource depletion, pollution, and peak oil. These issues are all accelerating rapidly, and all have strong links with the UK construction industry (SustainableBuild).

There is a growing consensus from scientists and the oil industry that we are going to reach peak oil within the next twenty years, and that we might have reached this point already. Global demand is soaring, whilst global production is declining, and oil is set to become increasingly expensive and scarce.

The building industry is hugely dependent on cheap oil, from the manufacture and transportation of its materials, to the machinery and tools used in demolition and construction. In the UK, it uses vast quantities of fossil fuels, accounting for over half of total carbon emissions that lead to climate change. The built environment is also responsible for significant amounts of air, soil and water pollution, and millions of tonnes of landfill waste. This is a situation that clearly needs to change (SustainableBuild).

Sustainable construction is not only a wise choice for our future; it is also a necessary choice. The construction industry must adopt eco-friendly practices and materials that reduce its impacts, before we reach a point of irreversible damage to our life supporting systems.

The UK Government is beginning to recognise this urgency, and is committed to integrating green specifications into building regulations and codes, but the process of developing policy is slow. The industry needs to take its own initiative and find alternative ways to build, using green, renewable energy resources, and adopt non-polluting practices and materials that reduce, recycle and reuse, before it is too late (SustainableBuild).

In the previous chapter, this essay examined current literature on sustainable construction in terms of its principles, its policies and practices in the UK, and its economic benefits. In this chapter it will now investigate the various sustainable construction materials and methods available to the industry, ranging from large, complex systems to small, simple items. These are:

Biomass roofing

Solar Water and Electric

Wind power

Cob building

Insulation materials

Non-toxic paints

Heat pump

Green roofs

Reclaimed materials

Lime

Using locally sourced materials

Biomass roofing

The use of plant materials to build the roof of a building is known as biomass roofing. Vegetation found locally and in abundance has been used to build roofs all over the world for many years. This cultural and environmental diversity has led to a range of roofing materials and styles, from the simple and short-lived to the more durable and complex. Although hundreds of different plants have been used to roof houses, these can be classified into two main types: thatch and wood tiles.

Thatch is one of the oldest forms of roofing, dating back thousands of years. It is found in almost every country, from savannah grasses in Africa to coconut palm fronds in the Caribbean to banana leaves in the Amazon. It was the predominant roofing material in Britain up until the 19th century, and thatched cottages remain a hallmark of the English countryside.

All sorts of plants have been used for thatching in Britain: oats, reeds, broom, heather, bracken and various grasses. But today only three main thatching materials are used: water reed, wheat reed and long straw.

Water reed is the most popular thatching material. Both water reed and wheat reed (actually a straw, but cut with a binder and combed to give the appearance of reed) give a compact and even texture when applied to a roof. This is in contrast with long straw (wheat straw that has been threshed so that the ears and butts are mixed up together), which gives a shaggy, rounded appearance. The lifespan of thatch is around 30 to 50 years, although this varies widely depending on the skill of the thatcher, the pitch of the roof, the local climate conditions and the quality of the materials.

The technique for thatching is basically the same for all materials. First the thatch is fastened together in bundles about 25 inches in diameter. Each bundle is then laid with the butt end facing outwards, secured to the roof beams, and pegged in place with wooden rods. Successive layers are added on top of each other, working from the bottom of the roof up towards the top, with a final layer used to reinforce the ridgeline.

Thatch roofs can withstand high winds and heavy rains, provide good thermal insulation and are easy to repair. Thatch is light and needs only a simple support structure, and is flexible so it can be used for any roof shape. On the downside, thatching is labour intensive and a certain level of skill is required. The materials can be expensive as reeds are increasingly imported from Europe to keep up with demand. Like all biomass materials, thatch is flammable which means that building restrictions may apply and home insurance can be high.

Wood tiles have been used since medieval times in Britain. They are traditionally made by hand-splitting logs into small wedge shaped pieces, but today most are manufactured by machine. There are two basic types: shingles, which are sawn, and shakes, which are split.

Shakes are thicker and have a more rustic, rough look, whilst shingles are thinner and smoother. Both come in a variety of lengths and are made from the heartwood of unseasoned wood. Durable, straight-grained wood works best, with cedar being the most popular choice. Split bamboo can also be used to create Spanish-style tiles, which are popular in some countries, but bamboo has the disadvantage of decaying fast in wet conditions unless chemically treated.

Wood tiles are laid from the bottom of the roof to the top, with each row overlapping the previous one. A cap is placed at the roof ridge. Typically tiles are nailed onto wood strips spaced a few inches apart between the roof beams, to allow air to circulate and prevent decay.

Wood tiles last between 25 and 50 years. Like thatch, they give good insulation and are flexible, so they can cover any roof shape. They are highly resistant to wind, heavy snow and hail, but must be regularly cleaned of vegetative debris. They are also flammable, and building regulations may prohibit their use in urban areas.

The recognised need to use renewable resources has led to a revival of traditional, natural building methods, along with a growing market for biomass roofing. Thatch and wood tiles are not only aesthetically appealing, but are durable and biodegradable. But their sustainability value is diminished if the materials have been imported or produced and treated with chemicals. Biomass roofing is only a true sustainable solution if the materials are obtained from a local, renewable source, and are grown, harvested and manufactured in an environmentally sensitive way (SustainableBuild).

Solar Water and Electric

Using the sun to provide energy is split into two areas: solar panels, which are used for heating water, and PV cells, which are used for creating electricity. A solar water heating system tends to cost around £2,000 installed and can usually provide enough hot water all year round; the problem, from a value point of view, is that it only costs around £100 a year to provide this hot water anyway. PV cells create a more significant amount of electricity, which may allow you to sell some of the energy you have created back to the grid.
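Taking the figures quoted above at face value, a simple payback calculation makes the weak value case for solar water heating explicit. This is a minimal sketch that ignores maintenance, fuel price inflation and discounting.

```python
# Simple payback sketch using the solar water heating figures quoted above:
# roughly £2,000 installed against roughly £100 a year of displaced hot
# water cost. Maintenance, price inflation and discounting are ignored.

install_cost = 2000.0   # approximate installed cost (from the text)
annual_saving = 100.0   # approximate annual hot water cost displaced (from the text)

payback_years = install_cost / annual_saving
print(payback_years)  # 20.0 years before the system pays for itself
```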


Napoleon and the French Revolution

This work was produced by one of our professional writers as a learning aid to help you with your studies

Was Napoleon An Heir to the French Revolution?

Of all the events of European history, the French Revolution of 1789 is without doubt one of the most important and controversial. Similarly, Napoleon Bonaparte has to be amongst the most written-about and opinion-dividing individuals world history has ever seen. Therefore the question as to whether Napoleon was an heir to the revolution, its saviour, hijacker, or simply consolidator is probably the most frequently asked question regarding the revolution and Napoleon.

In this essay I will be attempting to answer the question of whether Napoleon was an heir to the French Revolution. This will involve firstly exploring my definition of the term heir, and my views on the explanations and definitions of the French Revolution. Having done this, I will then move on to examine the reign of Napoleon. By doing this I hope to prove my view that, whilst Napoleon may be considered an inevitable consequence of the revolution, he was not its heir.

In my opinion the word heir describes a person’s or event’s natural successor. Therefore the term heir to the revolution would, in my opinion, describe the next regime which came to embody the principles and morals of the revolution. The revolution’s heir must be the regime that follows on from where the revolution left France, and presides over, or creates, the kind of society the revolutionaries of 1789 intended to. It is my belief that the Napoleonic regime neither presided over nor created this kind of society, and as such Napoleon cannot be considered an heir to the French Revolution. In order for this view to be qualified, the next aspect we need to look at is the various definitions and interpretations of the French Revolution.

Put simply, the French Revolution occurred when, in 1789, the old Ancien Regime was overthrown and France went from a monarchy-governed state to a republic. After this, France went through a number of different stages in terms of forms and types of government. The revolutionary government of 1789-1793 was the most immediate; between 1793 and 1794 Robespierre became the most powerful man in France, overseeing the era known as the Terror. This was followed by the Directory, which ruled between 1794 and 1799, and this was the government Napoleon overthrew in the Coup of Brumaire on November 9-10th 1799.

The study of these events has gone through many stages and significant changes, especially in the last fifty years or so. For a long time after the revolution, the dominant form of historiography on the subject was the Marxist interpretation. This went largely unchallenged until the 1950s and the arrival of the first generation revisionists, whose work was essentially a critique of the Marxist interpretation. This was followed in the 1960s and 1970s by what is often called second generation revisionism, as historians such as Blanning and Doyle began to look more closely at the nobility as a social group and found new definitions for the events in the years from 1789 up to when Napoleon took power. The most recent historical approach is known as post-revisionism, and this tends to place more emphasis on matters such as chance than previous approaches, whilst also stressing the importance of aspects such as popular culture and the psyche of the society, influential groups and people of the day. Of these approaches I find the Marxist interpretation most convincing, and I will now briefly explore it in order to portray my definition of the French Revolution.

The Ancien Regime saw an absolute monarch with complete power, running a feudal-based society and economy. The Marxist interpretation of the French Revolution states that it was in essence a power struggle between the middle classes, or bourgeoisie, and the upper classes, aristocracy and nobility. This is supported by the view that it was the Third Estate, dominated by the bourgeoisie, which began the revolution. It is claimed that they had been motivated by political ideology inspired by the Enlightenment and by the fact that their economic wealth did not reflect their share of power. The Declaration of the Rights of Man on the 26th August 1789 and the abolition of the feudal system are often pointed to as the most important evidence that the revolution was a bourgeois one, overthrowing the feudal Ancien Regime after a power struggle.

The degree to which, and speed with which, French society changed after this has been much debated among historians. Many continue to define the revolution as the whole of the period 1789-1799; historians such as Geoffrey Ellis point out how Napoleon himself declared at the Coup of Brumaire that:


“Citizens, the revolution is established on the principles which began it. It is finished.”

However, I believe that the revolution is defined as the result of the power struggle between the old Ancien Regime and the newly emerging bourgeois middle class. The revolution is defined by the events of 1789 and 1789 alone. The founding principles and morals of the revolution were those of the bourgeoisie, and these can best be seen in such documents as the Declaration of the Rights of Man, the decree abolishing the feudal system, the Cahiers de Doléances referring to the middle classes, and the actions and constitution of the revolutionary government up until 1793 and the beginning of the Terror.

Having established my definition of the French Revolution, it is important not to gloss over the years 1793-1799 without mention, before going on to look at the nature of the Napoleonic regime itself. In my view these years can in essence be described as a crisis created by panic and a power vacuum. The execution of King Louis XVI in January 1793 created much panic within and outside France, leading to foreign war and numerous insurgencies and political divisions inside France itself. In these years France became almost ungovernable, and the Terror can be seen purely as a reaction to the threats the new French Republic was facing. The era of the Directory, in my view, is summed up by the fact that the revolution was under threat from Jacobins, monarchists, foreign invaders, and the mass of the French population tired of war and political upheaval. The bourgeoisie therefore tried to create a strong government that could defeat all of these enemies. However, such a task soon proved impossible and, with the Coup of Brumaire in November 1799, France was once again to be ruled by a single authoritarian leader.

Having now explained my understanding of the term heir to the revolution and my definition of the French Revolution, and having briefly looked at the years before Napoleon came to power, I will now go on to look at the Napoleonic regime and convey my argument as to why I do not believe it is correct to describe Napoleon as an heir to the French Revolution. In order to prove this I will look at the Napoleonic regime from two different viewpoints, namely the political and the economic.

Up until the second half of the twentieth century, historical study of Napoleon nearly always came down to historians being either for or against him. Some believed he was the revolution’s saviour, whilst others believed he was its destroyer. However, such an approach came to be seen as inadequate, and the political and social aspects of the Napoleonic regime began to be put under closer scrutiny in an attempt to better understand its nature. Today’s historians often look closely at the personality and motivations of Napoleon, subjects on which previous generations offered little. Looking at Napoleon from a political point of view, there is much evidence to support the view that he was not an heir to the revolution. Many recent historical studies of Napoleon, such as Correlli Barnett’s 1997 work Bonaparte, look closely at Napoleon’s character and motivations, and are often (as in this case) very critical of him. Studies such as these convey the view that Napoleon had very little political or ideological motivation in taking power, but was only concerned with gaining glory for France, its people, and himself.

I would largely agree with this view and claim there are many pieces of evidence to support it. First is the fact that Napoleon always presented himself as a man above the revolution and the political factions it created. He never allied himself closely with any of the groups involved in French politics between 1789 and 1799, and one can look at Napoleon from an almost Machiavellian point of view and say that this was a conscious decision on his part, taken to avoid becoming compromised and thus allowing him eventually to take power.

Indeed, looking at the political nature of the Napoleonic regime only supports this view further. On December 2nd 1804 Napoleon crowned himself Emperor of France, and this reveals two important things. Firstly, it meant that Napoleon was now a single authoritarian leader with absolute power. The ethos of democracy, which had been the founding principle of all the revolutionary forms of government since 1789, had been disregarded completely. This was evident from as early as 1800, when Napoleon’s reforms of local government reduced the role of the electorate to simply producing a list of candidates for the legislative assembly, from which the government would select the members. After the revolution the franchise had been extended to almost all male citizens, and these actions are in direct contradiction to the ideologies of the bourgeois revolutionaries of 1789. In fact I believe it is fair to say that all of Napoleon’s actions during his reign were aimed at keeping hold of power. As Clive Emsley says in Napoleon:

“An underlying, unifying element to many, perhaps most of the reforms… was the desire to foster and maintain loyalty to the regime.”

The second thing this event revealed was how Napoleon saw himself. When the Pope went to crown him, Napoleon took the crown away from his hands and placed it upon his own head. The message was clear: he was the embodiment of the people and as such their natural leader. Such a belief is more in keeping with the beliefs of previous kings, who believed they were ordained by God, than with the ideals of the liberal revolutionary bourgeoisie.

The economic nature of the Napoleonic regime is often seen as the strongest area of support for those claiming Napoleon was an heir to the French Revolution. As historians such as Alexander Grab point out, Napoleon implemented many economic reforms that were both bourgeois in nature and did a lot to consolidate the gains the land-owning classes made from the revolution. This is proven by the fact that the reforms long outlasted the regime; as Grab himself puts it:


“Once Napoleon was gone, France and liberated Europe happily retained the efficient fiscal bureaucracies he had created.”

Indeed, I will accept that the Code Napoleon of 1804, for example, did do much to protect property rights, and his wider economic policies were probably the forerunner of the European common market which exists today. However, I would still claim that such reforms were only made by Napoleon to keep the bourgeoisie on side. Whilst doing this, Napoleon also brought the Catholic Church back into a central position within French society with the Concordat with the Pope in 1801, and he even created a new nobility in 1808. It is my view that, as bourgeois and successful as the economic reforms were, they were not created because of any political or moral ideology on Napoleon’s part, but should be seen as concessions to those who had brought about the revolution. Napoleon clearly made concessions to both sides, as the above examples illustrate, and as this proves his aim was not to create a democratic capitalist society, I believe he cannot be seen as an heir to the French Revolution.

If one were to go on and look at Napoleon’s policy in Europe, I believe that the same aims, goals and methods would be found on the international scene. War was Napoleon’s main weapon here, and he used it to expand his own glory and that of France, whilst basking in the loyalty his undoubted military skills won him from the mass of the French population.

In conclusion, I believe that the French Revolution was a bourgeois one. The instability and divided nature of the revolutionary government, of popular sovereignty under Robespierre, and of the Directory were down to the fact that no political culture of difference and debate existed in France in 1789, unlike in countries such as Britain. Therefore the struggle for power between the different factions of the revolutionary bourgeoisie became inevitable, as did, as in almost all revolutions, the eventual arrival of a dictator to restore order and stability. In the case of the French Revolution, Napoleon was that dictator. Whilst he implemented many long-lasting reforms, bourgeois in nature, he did not create the kind of society that can truly be seen as the revolution’s heir. Perhaps a regime such as Napoleon’s was required to stop France from destroying itself, and perhaps, in one way, Napoleon can be seen as an heir of the revolution in that he was in many respects the first non-ideologue modern-day politician. However, it is my view that the real heir to the French Revolution is the kind of capitalist, democratic nation state France has become today. As D. G. Wright correctly points out:


“Modern political parties and class conflict both have their origins in the French Revolution. So do liberal democracy, communism and fascism.”

The debate over Napoleon is one which can never be resolved. Some will always see him as the revolution’s saviour, whilst others will continue to claim he was the forerunner of men like Hitler and Stalin. The political beliefs of the historian, unfortunately, normally dictate which conclusion they come to as regards Napoleon Bonaparte. In my view, though, the French Revolution created a new kind of world, and the liberal democracies of today’s Europe can be considered its true heir. Napoleon was just its inevitable, short-term consequence.

John F. Kennedy Assassination Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

On November 22, 1963 President John F. Kennedy was assassinated in Dallas, Texas, and Lee Harvey Oswald was arrested for the murder. Many believe that Lee Harvey Oswald was not the only one involved in the crime, and there are countless theories about how President Kennedy was murdered, some implicating the FBI, the CIA, and the mob. The Warren Commission, however, concluded that Lee Harvey Oswald alone killed President Kennedy.

Much of the evidence suggests that Lee Harvey Oswald could not have been the only one involved. John F. Kennedy was the fourth United States President to be assassinated, and even today there remains tremendous debate over who was responsible for his murder. The assassination has spawned many different conspiracy theories about who was involved.

President Kennedy travelled to Dallas, Texas to help strengthen his vote for the upcoming election and to gain more Democratic Party members. Before the trip there was some concern about a sniper being on top of a building, and President Kennedy himself made comments before he was killed about his safety in a convertible car. The car President Kennedy was riding in was a 1963 Lincoln Continental open-top limousine.

Sergeant Davis of the Dallas Police Department was the one who made sure the city was secure whenever a president or foreign leader came to Dallas. The Secret Service agent responsible for planning the Kennedy motorcade was Winston Lawson, who told Sergeant Davis not to allow any police officers to follow the president’s car.

It was standard procedure for the police to secure the perimeter whenever a president came to Dallas. Jesse Curry, the chief of police, said that if the officers had been allowed to secure the area, then the murder could have been stopped; the officers who would normally secure the area carried submachine guns and rifles. (Harrison Edward Livingstone, "High Treason 2 – The Great Cover-Up: The Assassination of John F. Kennedy" (1992) Hardback)

The original plan was to go from Love Field Airport to downtown Dallas and Dealey Plaza, with Kennedy due to give a speech at the Dallas Trade Mart. Kennedy’s car did not have a bulletproof top, because nothing of the kind had yet been invented. At 12:30 President Kennedy’s limousine headed towards the Texas School Book Depository, then turned right in front of the building, passing only 65 feet away.

The car was travelling at 13 miles per hour and then slowed to 9 miles per hour. Once the car had passed the building, the shots rang out. A man named Abraham Zapruder was right in front of the limousine as it was being shot at, and was filming as the shooting took place. Kennedy and Texas Governor John Connally were both shot.

John Connally was riding in the same car as Kennedy, sitting in the seat in front of the president. Governor Connally was in critical condition but he survived. Another bystander watching the motorcade was injured by debris when a bullet hit a curb. (David S. Lifton, "Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy")

Lee Harvey Oswald was arrested for the killing of Dallas police officer J.D. Tippit, and was charged with killing both President Kennedy and Officer Tippit. Whenever Oswald was questioned about the shooting of President Kennedy he denied everything, saying that he was not involved and that he was just a patsy. His twelve-hour interrogation was conducted with no recordings or notes taken. Two days after the assassination, Oswald, still in police custody, was shot by Jack Ruby, who had posed as a reporter trying to ask Oswald a question. (The Assassination of JFK, 19 June 2005)

The gun that was used was an Italian Mannlicher-Carcano rifle, found on the sixth floor of the Texas School Book Depository. When the police officers found the gun they recorded everything. The rifle is said to be the same gun that was used in the assassination: a bullet found on Connally’s stretcher had been fired from the gun that the police had found. Lee Harvey Oswald had purchased the gun under the fake name of Alek James Hiddell. (The Assassination of JFK, 19 June 2005)

President Kennedy was pronounced dead at the emergency room. The surgeons at the hospital said that Kennedy had had absolutely no chance of survival. Dr. George Burkley came to the hospital shortly after the president was shot, looked at the head wound, and said that it was the cause of death.

A priest came to give President Kennedy his last rites. Vice President Lyndon B. Johnson, who had been riding in a car behind Kennedy’s, was the next to become president; he took the oath of office aboard Air Force One. (The Assassination of JFK, 19 June 2005)

Once Air Force One had landed, an autopsy was performed at Bethesda Naval Hospital. The autopsy report said that Kennedy had been shot in the head and in the shoulder, but reports of the autopsy were inconsistent and did not match up. It is said that Dr. James J. Humes probably destroyed the autopsy report and the notes that were taken during the autopsy, and the measurements that Dr. Humes took were inconsistent and not exact.

The autopsy reports were not shown to the Warren Commission, and the people who handled the autopsy records did not keep track of how many pictures were taken. It is also said that the pathologists were not experienced enough to handle Kennedy’s body in the first place; Kennedy’s neck was not examined to determine how the bullet entered and exited. After the autopsy, Kennedy’s body was embalmed and placed in the White House for the public to see, before being removed and buried in Arlington National Cemetery. (David S. Lifton, "Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy")

There were also no recordings or radio coverage of the assassination. All of the news crews were waiting at the Trade Mart for Kennedy, not in Dealey Plaza; some news crews were riding with the Kennedy motorcade, but they were at the very back. The only film of the murder came from Abraham Zapruder’s camera.

Many individuals also took still pictures of the shooting. The Zapruder film shows Kennedy’s head moving forward and then backwards. The film was shown on television, but was heavily edited. More recently, in 2003, ABC News recreated Dealey Plaza in a three-dimensional computer model. (David S. Lifton, "Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy")

The government has done a great deal to prevent records of the Kennedy assassination from becoming publicly available. In 1964 President Lyndon B. Johnson ordered the Warren Commission’s records to be kept from public viewing for 75 years, that is, until 2039. Locking away all of the records leads more people to believe that there is indeed a conspiracy involved in the death of President Kennedy. Congress established the "President John F. Kennedy Assassination Records Collection Act of 1992".

Congress made the act so that people could see the records earlier, feeling there was no need to keep them from public eyes. The act says that any document that has not been lost or destroyed must be given to the public by 2017. Many documents have already been opened, but the majority still remain locked away. Not all of the original evidence and material can be released, because some of it was lost or destroyed. Some important pieces of evidence that were neglected are: the Governor of Texas’s suit being dry cleaned, the limousine being cleaned, and Lee Harvey Oswald’s Marine service file being lost. (Josiah Thompson, "Six Seconds in Dallas" (1976 Paperback))

A paraffin test was conducted on Lee Harvey Oswald’s right cheek and hands to tell whether Oswald had fired a weapon. The paraffin test came out positive, but the Warren Commission said the data was inaccurate.

The first to conduct an investigation was the FBI. The director of the FBI said that he wanted something to convince the public that Lee Harvey Oswald was the only one involved in the assassination. The FBI report took 17 days to complete and was given to the Warren Commission, which the FBI assisted. Both the FBI and the Warren Commission said that only three shots were fired from the rifle that Lee Harvey Oswald had. The House Select Committee investigated the FBI’s results.

The committee concluded that the FBI did not investigate whether President Kennedy’s death was part of a conspiracy, and that it did not share its data with other law enforcement agencies. James Hosty was an FBI agent whose name appeared in Jack Ruby’s address book. The FBI made another copy of the address book, erased James Hosty’s name from it, and then gave it to the Warren Commission. Before the assassination took place, Lee Harvey Oswald had gone to the FBI office to meet with James Hosty; Hosty was not in his office when Oswald arrived, so Oswald left a note for him. When Oswald was murdered by Ruby, James Hosty destroyed the note by tearing it up and flushing it down the toilet. (Josiah Thompson, "Six Seconds in Dallas" (1976 Paperback))

When the Warren Commission completed its report, many people questioned it and did not believe its findings, and many books and articles have been written disputing what the Warren Commission said. In 2003 ABC News conducted a poll to see what the public thought about the John F. Kennedy assassination: seventy percent of respondents thought that there was a plot involved in the murder of Kennedy.

Around seventy to ninety percent of the American people did not believe the Warren Commission’s findings, and even government officials who worked for the Warren Commission said that they did not completely believe the commission’s results themselves. The House Select Committee on Assassinations said that the Warren Commission and the FBI failed to investigate who else could have been involved in the murder. The committee also said that the main reason for the lack of information and results was the Warren Commission’s failure to communicate with the CIA. (Gerald Posner, "Case Closed" (1993 Hardcover, 1st Edition))

The House Select Committee determined that President Kennedy was probably killed as the result of a conspiracy. Its results went directly against the Warren Commission’s and were the complete opposite. The HSCA said that four shots were fired and that Lee Harvey Oswald was not the only shooter: Oswald fired three shots, and another gunman fired the fourth from behind the fence on the grassy knoll.

The grassy knoll theory came from acoustic evidence and many different witnesses. In 2001, an article by D.B. Thomas stated that the HSCA’s second gunman theory was right. The Assassination Records Review Board said that the autopsy of John F. Kennedy was a tragedy. (David S. Lifton, "Best Evidence: Disguise and Deception in the Assassination of John F. Kennedy")

The majority of the evidence involved in the John F. Kennedy assassination was mishandled and not dealt with the way it should have been. Since most of the evidence was lost or is locked away, people are led to further believe in a conspiracy theory. The murder of John F. Kennedy shows that the government did not serve its people in a righteous manner. It has lied, covered up, and twisted things so much that it will never be possible to find out who was really involved in the murder.

The government has violated the rights of its own people. If the government had taken the time to gather all of the evidence correctly and to look at all aspects of the murder, then there would not be so much mystery surrounding it today. The government violated the Kennedy family’s 14th Amendment rights: the 14th Amendment and the due process of law were violated because the government failed to conduct a proper investigation of the assassination. The Kennedy family was not provided with a thorough and correct investigation of the murder. The government should have been far more accurate and involved in the investigations its agencies performed.

Example History Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

How did Christianity succeed in becoming so widespread in the period up to Diocletian despite the Roman persecution of Christians?

It took Christianity a little under three hundred years to develop from a small, heretical Jewish cult based in the eastern provinces into the universal religion of the Roman Empire with churches and bishops ranging from Antioch and Edessa in the east to Lyons and Toledo in the west, and encompassing the North African cities of Carthage and Alexandria in the south. Just how an often persecuted sect managed to accomplish this is a complex issue that cannot be fully examined within the scope of this essay; rather it will focus on some aspects of early Christianity that allowed it to flourish and which could withstand the persecutions that took place from the second to the third centuries, concentrating on the eastern provinces, specifically Judaea, Phoenicia, Syria, Galatia and Bithynia-Pontus. (Chadwick, 1967)

It is impossible to know what percentage of the population of the Empire considered themselves Christian. One suggestion is ten percent but this is an estimate and, as Brown points out, it is more significant that during the third century Christian communities grew quickly (Brown, 2013). The early Christian sources cannot be relied upon to provide an accurate picture, as Lane Fox notes; Christian authors “were quite uncritical in their use of words like ‘all’ and ‘everywhere’” (Lane Fox, 1988, p. 269). Eusebius, for example, described how the Apostles had been sent across the globe to preach: Thomas was sent to Parthia, Andrew to Scythia, John to Asia and Peter was allotted the eastern provinces of the Roman Empire (Eusebius, 3:1).

In order to consider what impact persecution had on the spread of Christianity, it is necessary to consider the way Christianity spread; and in particular what was it about this religion that set it apart from the pagan cults (and Judaism) that could both attract followers and help it withstand persecution. There are a number of reasons why Christianity flourished in the period between the end of the first century and Diocletian’s Great Persecution at the start of the fourth. The nature of Christianity and its emphasis on charity and hospitality; a shared sacred text; the close-knit structure of the early Church; the way it appealed to all levels of society; the very act of persecution itself, the nature of the pagan cults it was competing with and the wide-ranging trade routes across the Empire, are just a few. (Chadwick, 1967; Lane Fox, 1988)

Christian groups shared a set of beliefs and ideals based around the preaching of salvation and it can be argued that this unity of beliefs is what strengthened Christianity and allowed it to flourish. Christians shared a meal recollecting the sacrifice of their saviour and were encouraged to regard themselves as a family, calling each other “brother” and “sister” and to greet each other with a kiss. Individual communities possessed similar structures, particularly during the late second and third centuries, which emphasised their unity. This was certainly the view of Origen when responding to the allegations of the second-century philosopher Celsus who acknowledged the unity of Christians but believed it to be based on “no trustworthy foundation” other than their “unity in revolt (…) and fear of outsiders” (3:14, Chadwick, 1953, p. 136).

Origen states that Christianity does have a firm foundation in divine doctrine and God’s law. (3:15, Chadwick, 1953). The message that everyone was subject to the same divine law and could achieve salvation through renunciation of sins was unique to Christianity. This was a message upon which persecution could have no impact; indeed persecution offered devout Christians the opportunity to emulate their saviour and make the ultimate sacrifice for their faith; persecution encouraged martyrdom.

Whilst elements of early Christian practice, such as the celebratory meal and offering practical support for fellow supporters, can be seen in some pagan cults at this time, what set Christianity apart was its shared sacred text. It must be acknowledged that Christianity and Judaism are very similar in this regard, however, the New Testament, works by Origen and other early Christian philosophers and those condemning the Gnostic practices of the Coptic Church as heresy show that different Christian groups were discussing and exchanging views on important topics. In this way early Christian thinkers, the ‘Church Fathers’ were formulating a common, orthodox canon of beliefs which were set down in documents that were shared amongst the communities. (Clark, 2004)

The early Christians did not worship in what we would recognise as churches; they held assemblies which acted as a family unit, providing not only spiritual but practical support to their members (Chadwick, 1967; Brown, 2013). They met in the homes of individual Christians, and these houses were extended to accommodate the growing community, as at Dura Europos in Mesopotamia, a private house which was extended at some point in the 240s to add a hall large enough to accommodate up to sixty people (Lane Fox, 1988). It is perhaps significant, therefore, that no Imperial edict against the Christians, even that of Decius, specifically mentioned destroying churches until the Great Persecution of Diocletian at the start of the fourth century. Whilst the community might consider itself a church, there was no physical building, like a synagogue, which pinpointed them within the landscape of the town or city. In effect the church was mobile and could relocate as and when persecution made it necessary, meaning Christianity could spread easily.

One of the key principles of Christianity was its emphasis on acts of charity and supporting those in need, based around Matthew 25:38-40 (Clark, 2004). No other religious group in the Empire held provision for the poor as a key doctrine, but Christians were duty-bound to offer not only spiritual but practical help to those less fortunate than themselves (Chadwick, 1967). Eusebius quotes a letter of Dionysius, bishop of Alexandria, describing how Christians helped nurse the sick and dying of all religions during an outbreak of plague and helped to bury the dead, whereas the pagans abandoned the sick (even family members) to their fate (Eusebius, 7:22.7-10). As Clark (2004), following MacMullen, notes, nursing the sick might convince people that Christians had a special religious protection; their belief in suffering and salvation and stories of healing miracles could, perhaps, be more effective than doctrine in winning converts.

The notion of charity was not confined to offering comfort and solace; one of the ideas Christianity had inherited from Judaism was giving alms for “the remission of sin” (Brown, 2013, p. 69). The idea that money earned in this world, by whatever means, could help its owner earn their place in the next through the remission of his or her sins meant that churches were able to accumulate wealth. The pagan temples of the large cities depended on donations from the wealthy, whereas the average Christians making donations for the salvation of their souls were tradesmen. This meant that during times of financial disaster, as in the third century, the Christian communities were better able to withstand a crisis. The church developed structures and systems to ensure this wealth was distributed to where it was needed, and Christians acquired a reputation for taking care of their own; widows and orphans as well as the sick and the destitute were all embraced in this institutionalised alms giving (Brown, 2013; Clark, 2004). Thus the knowledge that your community was duty-bound to offer practical assistance in times of need could easily be argued as a contributing factor to the spread of Christianity, again one on which persecution would have little impact.

This did not mean that Christianity developed into a religion of the poor; rather it embraced all ranks from slaves and tradesmen up to the higher echelons of society: Marcia, the concubine of Emperor Commodus, was Christian, as were King Abgar VIII of Osrhoene and Julius Africanus from Palestine (Brown, 2013; Clark, 2004).

When considering the impact persecutions had on the spread of Christianity, the nature of these persecutions has to be taken into consideration. During the second and third centuries, there were two periods of persecution: the sporadic, isolated persecutions that were confined to specific areas during the second and early third centuries; and the Emperor-led persecutions of Decius and Valerian which culminated in the Great Persecutions under Diocletian and Galerius.

Our best evidence for the nature of these earlier persecutions comes from Pliny’s letter to Trajan, written c. 112 (Ep. 10:96, Radice, 1969, p. 293). Pliny, governor of Bithynia-Pontus, wrote to the Emperor asking for guidance on how to treat Christians arrested in his province. The letter describes how he had tortured two female slaves to obtain information about the activities of Christians and asks for advice on how he should conduct trials of suspected Christians who were brought before him as a result of anonymous allegations. Trajan’s reply makes it clear that only known Christians should be prosecuted and anonymous allegations should not be considered and those simply suspected of being Christian should not be sought out. This shows that during this time there was no clear policy of persecution coming from the Roman authorities. Similarly, Eusebius includes a letter from Trajan’s successor, Hadrian, written to Pliny’s successor, Minicius Fundanus, reaffirming this position; Christians should not be sought out directly, but those correctly accused under Roman law, should be punished (Eusebius, 4:9).

Once again Eusebius’ evidence must be approached with caution; as with any Christian author he cannot be considered a reliable witness to the persecution of his own kind. When taken together, however, the evidence of both the pagan Roman official Pliny and the Christian Eusebius does indicate that there was no official policy of widespread persecution of Christians during the second century. Moreover, as St Croix (1963) illustrates, accusations against individuals were not likely to be made falsely, as the person making the allegation had to carry out the prosecution, rendering themselves liable for a charge of calumnia (malicious prosecution) if they could not make a satisfactory case against the alleged Christian.

Decius’ edict of 250 represents the changing situations of both the Empire and the Christian church. By this period Christianity had spread across the whole Empire; an empire which had been suffering from years of civil war, was in something of a crisis, and was in need of assistance from its gods (Clark, 2004). The edict issued by Decius in 249-250 did not specifically target Christianity, though Christian writers chose to interpret it as a direct attack; rather it required all citizens to make sacrifices to the gods and obtain proof of this in the form of a special certificate. It is clear that many Christians did suffer as a result of this edict; Babylas of Antioch and Alexander of Jerusalem were amongst many notable church leaders who lost their lives. Others, however, preferred to go into hiding or buy certificates from friendly magistrates. The impact of this edict was, therefore, twofold: it created a new generation of martyrs from those who refused to sacrifice and were punished for it; and it caused schism within the church regarding what to do about those (mainly in the east) who fled or bought their certificates. Neither had any detrimental effect on the spread of Christianity; martyrs were admired and acted as inspiration for the faithful, and the debate regarding those who went into hiding helped to develop Church doctrine.

As noted above, persecution created martyrs who were held up as examples to be followed: men and women who had endured physical pain and suffering like Jesus on the cross. Christian writers praised their bravery and courage, recording their heroic suffering in Acts and Passions which were copied and disseminated throughout the Christian world, raising them to the status of saints. Martyrdom and the development of the cult of saints are other key topics to consider when looking at the spread of Christianity and its reaction to persecution, but ones which cannot be discussed here. The ideas discussed above (the nature of Christianity, the unity provided by shared sacred texts and church organisation, and the emphasis on charity and personal redemption) are just a few of the reasons this fledgling cult was able to flourish and spread throughout the Roman Empire, covering not only the Eastern Provinces but also those in the west. There has been little room here to give them the full discussion they deserve, or to consider other factors such as the wide-ranging trade routes across the Empire that allowed Christians to travel and spread their faith, or the way families were converted. What can be seen, however, is that Christianity was a religion with a unified belief structure that appealed to a wide cross-section of society and which offered practical help for those in need, including members of society who were often marginalised. Persecution did not stop the spread of Christianity, nor did it drive it underground. In the face of persecution most Christians remained steadfast, secure in the knowledge that their physical pain and suffering in this life would lead to reward in the next.

Reference List:
Primary Sources:

Eusebius, Church History. [Online]. Available from: Christian Classics Ethereal Library: http://www.ccel.org/ccel/schaff/npnf201.iii.viii.i.html [Accessed 14 February 2015]

Chadwick, H. (1953) Origen, Contra Celsum. Cambridge: Cambridge University Press.

Radice, B. (1969) The Letters of the Younger Pliny. Harmondsworth: Penguin.

Secondary Works:

Brown, P. (2013) The Rise of Western Christendom: Triumph and Diversity, A.D. 200-1000. 10th Anniversary Revised Ed. Chichester: Wiley-Blackwell.

Chadwick, H. (1967) The Early Church. London: Penguin.

Clark, G. (2004) Christianity and Roman Society. Cambridge: Cambridge University Press.

Lane Fox, R. (1988) Pagans and Christians in the Mediterranean world from the second century AD to the conversion of Constantine. London: Penguin.

MacMullen, R. (1984) Christianizing the Roman Empire A.D. 100-400. New Haven; London: Yale University Press.

St Croix, G.E.M. de (1963) Why Were Early Christians Persecuted? Past and Present, 26, pp. 6-38.

Work-related stress in healthcare

This work was produced by one of our professional writers as a learning aid to help you with your studies

Stress may be defined as the physical and emotional response to excessive levels of mental or emotional pressure, which may arise from issues in both working and personal life. Stress may cause emotional symptoms such as anxiety, depression, irritability or low self-esteem, or may manifest as physical symptoms including insomnia, headaches, loss of appetite and difficulty concentrating. Individuals experiencing high levels of stress may have difficulty controlling emotions such as anger, and may be more likely to experience illness or to consume increased quantities of alcohol (NHS Choices, 2015). In the UK, a survey undertaken by the Health and Safety Executive (HSE) estimated that in the year 2013-2014, 487,000 cases of work-related illness (39%) could be attributed to work-related stress, anxiety or depression (HSE, 2014). Additionally, the survey found that as many as 11.3 million working days were lost in 2013-2014 as a direct result of work-related stress (HSE, 2014).

Studies have shown that healthcare professionals, particularly nurses and paramedics, are at an increased risk of work-related stress compared with other professionals (Sharma et al., 2014). This is likely to be due to the long hours and the high pressure of maintaining quality care standards inherent in the job, as well as pressures caused by staff shortages, high levels of patient demand, a lack of adequate managerial support, and the risk of aggression or violence towards nurses from patients, relatives or even other staff (Royal College of Nursing (RCN), 2009). Indeed, a 2014 survey of nursing staff by the RCN showed that 71% of staff surveyed worked up to 4 hours more than their contracted hours each week, 80% felt that work-related stress lowered morale, and 72% reported that understaffing occurred frequently in their workplace. As a result of these issues, 66% of respondents had considered leaving the NHS or the nursing profession altogether (RCN, 2014b). A separate report by the RCN suggested that over 30% of absence due to illness was caused by stress, estimated to cost the NHS up to £400 million every year (RCN, 2014a).

In addition to the physical and emotional symptoms of stress previously discussed, studies in this area have shown that nurses experiencing high levels of work-related stress were more likely to be obese and have low levels of physical exercise, factors which increased the likelihood of non-communicable diseases and co-morbidities such as hypertension and type 2 diabetes (Phiri et al., 2014).

Stress and staff absence

Chronic stress has been linked to "burnout" (Khamisa et al., 2015; Dalmolin et al., 2014), a state of emotional exhaustion under extreme stress related to reduced professional fulfilment (Dalmolin et al., 2014), and to "compassion fatigue", where staff have experienced so many upsetting situations that they find it difficult to continue empathising with their patients (Wilkinson, 2014). As previously discussed, reduced staffing levels contribute to stress in nursing staff, and in this way chronic stress within the workplace launches a self-perpetuating cycle: increased stress leads to increased illness, more staff absence and further understaffing. In turn, these negative emotions reduce job satisfaction and prompt many staff to consider leaving the nursing profession, further reducing staffing availability for services (Fitzpatrick and Wallace, 2011).

Reasons for work-related stress amongst healthcare professionals

Studies amongst nursing staff have also reported stress occurring as the result of poor and unsupportive management, poor communication skills amongst team members, institutional and organisational issues (e.g. outdated or restrictive hospital policies), or bullying and harassment (RCN, 2009). Even seemingly minor issues have been reported to exacerbate stress amongst nursing staff, for example a lack of common areas in which to take breaks, changing shift patterns, and even the difficulty and expense of car parking (Happell et al., 2013).

Work-related stress can particularly affect student or newly qualified nurses, who often have high expectations of the job satisfaction to be gained from working in a profession they have worked hard and aspired to join, and who are therefore particularly prone to disappointment on discovering that the role does not deliver the satisfaction they anticipated during their training. Student and newly qualified nurses may also have clear ideas from their recent training on how healthcare organisations should be run and how teams should be managed, and may become disillusioned on discovering that, in reality, many departments could benefit from improvements and from further training for more experienced staff in these areas (Wojtowicz et al., 2014; Stanley and Matchett, 2014). Nursing staff are also likely, on occasion, to find themselves in a clinical situation for which they feel unprepared, or in which they lack the knowledge needed to provide the best possible care for patients, and this may cause stress and anxiety (RCN, 2009). They may also be exposed to upsetting and traumatic situations, particularly in fields such as emergency or intensive care medicine (Wilkinson, 2014).

Moral distress can also cause strong feelings of stress amongst healthcare professionals. This psychological state arises when there is a discrepancy between the action an individual takes and the action the individual feels they should have taken (Fitzpatrick and Wallace, 2011). It may occur if a nurse feels that a patient should receive an intervention in order to experience the best possible care, but is unable to deliver it, for example due to organisational policy constraints or a lack of support from other members of staff (Wojtowicz et al., 2014). For example, a nurse may be providing end of life care to a patient who has recently had an unplanned admission onto a general ward but is expected to die shortly. The nurse may feel that this patient would benefit from having a member of staff sitting with them until they die. However, due to a lack of available staff this does not happen, as the nurse must attend to other patients in urgent need of care. If the patient dies without someone with them, the nurse may experience stress, anger, guilt and unhappiness over the situation, having made the moral judgement that the dying patient "should" have had a member of staff with them but having been unable to provide this without risking the safety of other patients on the ward (Stanley and Matchett, 2014). One large-scale, questionnaire-based study of moral distress amongst healthcare professionals in the USA showed that moral distress is more common amongst nurses than amongst other staff such as physicians or healthcare assistants. The authors suggested that this may be due to a discrepancy between the level of autonomy a nurse has in making care decisions (especially following disagreement with a doctor, who has a high level of autonomy) and the nurse's heightened sense of responsibility for patient wellbeing, compared with healthcare assistants, who were more likely to consider themselves to be following the instructions of the nurses than personally responsible for patient outcomes (Whitehead et al., 2015).

Recommendations for policies to address work related stress

It is acknowledged that many individuals find being asked to perform tasks for which they have not been adequately trained or prepared very stressful. As such, management teams should try to ensure as far as possible that individuals are only assigned roles for which they have adequate training and ability, and should support employees with training to improve skills where necessary (RCN, 2009).

Surveys have frequently reported that organisational issues such as unintuitive work patterns, excessive workloads and an unpleasant working environment can all contribute to work-related stress. Organisations can reduce the impact of these by developing programmes of working hours in consultation with staff and adhering to them, by making any necessary improvements to the environment (e.g. ensuring that malfunctioning air conditioning is fixed), and by reducing incidents of understaffing as far as possible (RCN, 2009). Issues such as insomnia and difficulty adapting to changing shift patterns can also be addressed by occupational health, for example by encouraging healthy eating and exercise (Blau, 2011; RCN, 2005). In 2005, for instance, the RCN published an information booklet for nursing staff explaining the symptoms of stress, ways in which it can be managed (e.g. relaxation through exercise or alternative therapies), and when help for dealing with stress should be sought (RCN, 2005). More recently, internet-based resources have been made available by the NHS to help staff identify whether they need assistance, and how and why it is important to access it (NHS Employers, 2015).

Witnessing or experiencing traumatic or upsetting events is an unavoidable aspect of nursing, and can even result in post-traumatic stress disorder (PTSD). However, there are ways in which staff can be encouraged by their management teams and organisations to deal with the emotions that these circumstances produce, limiting their negative and stressful consequences. This may include measures such as counselling or peer support programmes run through occupational health departments (Wilkinson, 2014). Staff should also be encouraged to use personal support networks (e.g. family), as these can be an important and effective source of support; however, studies have shown that support within the workplace is most beneficial, particularly when combined with a culture in which healthcare professionals are encouraged to express their feelings (Lowery and Stokes, 2005).

One commonly cited cause of work-related stress amongst nurses is the incompetence or unethical behaviour of colleagues, combined with a lack of opportunity to report dangerous or unethical practice without fear of reprisal. It is therefore important that institutions and management teams ensure that an adequate care quality monitoring programme is in place, together with a culture in which concerns can be reported for further investigation without repercussions, particularly with respect to senior staff or doctors (Stanley and Matchett, 2014).

It has been reported that in the year 2012-2013, 1,458 assaults against NHS staff were reported (NHS Business Service Authority, 2013). Violence and abusive behaviour towards nursing staff is an acknowledged cause of stress and even PTSD, and staff have a right to provide care without fear (Nursing Standard News, 2015; Itzhaki et al., 2015). Institutions therefore have a responsibility to provide their staff with security measures such as security personnel, appropriate workplace design (e.g. the location of automatically locking doors), and policies for the treatment of potentially violent patients, e.g. those with a history of violence or substance abuse issues (Gillespie et al., 2013).

As previously discussed, nurses are more likely than other healthcare professionals to experience moral distress as the result of a discrepancy between the actions they believe are correct and the actions they are able to perform (Whitehead et al., 2015). However, there are policies that healthcare organisations can introduce to reduce its occurrence and the severity with which it affects nursing staff. Studies have shown that nurses who were encouraged to acknowledge and explore feelings of moral distress were able to process and overcome them in a less damaging manner than those who did not (Matzo and Sherman, 2009; Deady and McCarthy, 2010). Additionally, moral distress is thought to be less frequent in institutions and teams that encourage staff to discuss ethical issues with a positive attitude (Whitehead et al., 2015). For example, institutions could employ a designated contact person with whom staff can discuss stressful ethical issues, or set up facilities for informal and anonymous group discussion, for example on a restricted-access internet-based discussion board (Matzo and Sherman, 2009).

Conclusion

Work-related stress is responsible for significant costs to the NHS in terms of staffing availability and the financial losses arising from staff absence, whether due to stress itself or to co-morbidities that can be exacerbated by stress (RCN, 2009), such as hypertension and diabetes (Phiri et al., 2014; RCN, 2009, 2014a). The loss of valuable, qualified staff from the profession is also a significant cost to health services, and of course exacerbates the situation by increasing understaffing further, which in turn increases stress for the remaining staff (Hyrkas and Morton, 2013). Stress also exerts a significant cost on the healthcare professionals who experience it, in terms of their ability to work, their personal health, their personal relationships (Augusto Landa et al., 2008) and their job satisfaction (Fitzpatrick and Wallace, 2011). However, organisations can implement recommendations to reduce work-related stress, for example by fostering a positive and supportive culture and by offering interventions such as counselling (Wilkinson, 2014; RCN, 2005). Furthermore, interventions such as encouraging the reporting of unsafe or unethical practice – a commonly cited source of stress amongst nurses (RCN, 2009; Stanley and Matchett, 2014) – may also contribute to improving the quality of patient care.

References

Augusto Landa, J. M., Lopez-Zafra, E., Berrios Martos, M. P. and Aguilar-Luzon, M. D. C. (2008). The relationship between emotional intelligence, occupational stress and health in nurses: a questionnaire survey. International Journal of Nursing Studies, 45 (6), p.888–901. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17509597

Blau, G. (2011). Exploring the impact of sleep-related impairments on the perceived general health and retention intent of an Emergency Medical Services (EMS) sample. Career Development International, 16 (3), p.238–253. [Online]. Available at: http://www.emeraldinsight.com/doi/abs/10.1108/13620431111140147

Dalmolin, G. de L., Lunardi, V. L., Lunardi, G. L., Barlem, E. L. D. and da Silveira, R. S. (2014). Moral distress and Burnout syndrome: are there relationships between these phenomena in nursing workers? Revista Latino-Americana de Enfermagem, 22 (1), p.35–42. [Online]. Available at: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-11692014000100035

Deady, R. and McCarthy, J. (2010). A Study of the Situations, Features, and Coping Mechanisms Experienced by Irish Psychiatric Nurses Experiencing Moral Distress. Perspectives in Psychiatric Care, 46 (3), p.209–220. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/20591128

Fitzpatrick, J. J. and Wallace, M. (2011). Encyclopedia of Nursing Research. 3rd ed. New York: Springer Publishing Company.

Gillespie, G., Gates, D. M. and Berry, P. (2013). Stressful Incidents of Physical Violence Against Emergency Nurses. OJIN: The Online Journal of Issues in Nursing, 18 (1). [Online]. Available at: http://www.nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/Vol-18-2013/No1-Jan-2013/Stressful-Incidents-of-Physical-Violence-against-Emergency-Nurses.html

Happell, B., Dwyer, T., Reid-Searl, K., Burke, K. J., Caperchione, C. M. and Gaskin, C. J. (2013). Nurses and stress: recognizing causes and seeking solutions. Journal of Nursing Management, 21 (4), p.638–647. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23700980

HSE. (2014). Statistics – Stress-related and psychological disorders in Great Britain. Health and Safety Executive. [Online]. Available at: http://www.hse.gov.uk/statistics/causdis/stress/index.htm

Hyrkas, K. and Morton, J. L. (2013). International perspectives on retention, stress and burnout. Journal of Nursing Management, 21 (4), p.603–604. [Online].

Itzhaki, M., Peles-Bortz, A., Kostistky, H., Barnoy, D., Filshtinsky, V. and Bluvstein, I. (2015). Exposure of mental health nurses to violence associated with job stress, life satisfaction, staff resilience, and post-traumatic growth. International Journal of Mental Health Nursing, 24 (5), p.403–412. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26257307

Khamisa, N., Oldenburg, B., Peltzer, K. and Ilic, D. (2015). Work Related Stress, Burnout, Job Satisfaction and General Health of Nurses. International Journal of Environmental Research and Public Health, 12 (1), p.652–666. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4306884/

Lowery, K. and Stokes, M. A. (2005). Role of peer support and emotional expression on posttraumatic stress disorder in student paramedics. Journal of Traumatic Stress, 18 (2), p.171–179. [Online]. Available at: doi:10.1002/jts.20016

Matzo, M. L. and Sherman, D. W. (2009). Palliative Care Nursing: Quality Care to the End of Life. 3rd ed. New York: Springer Publishing Company.

NHS Business Service Authority. (2013). 2012-13 figures released for reported physical assaults against NHS staff. NHS Business Service Authority. [Online]. Available at: http://www.nhsbsa.nhs.uk/4380.aspx

NHS Choices. (2015). Stress, anxiety and depression. NHS Choices. [Online]. Available at: http://www.nhs.uk/conditions/stress-anxiety-depression/pages/understanding-stress.aspx

NHS Employers. (2015). Health work and wellbeing. NHS Employers. Available at: http://www.nhsemployers.org/your-workforce/retain-and-improve/staff-experience/health-work-and-wellbeing

Nursing Standard News. (2015). Stress at work affecting nurses’ health, survey finds. Nursing Standard, 29 (27), p.8. [Online]. Available at: http://journals.rcni.com/doi/10.7748/ns.29.27.8.s6

Phiri, L. P., Draper, C. E., Lambert, E. V. and Kolbe-Alexander, T. L. (2014). Nurses’ lifestyle behaviours, health priorities and barriers to living a healthy lifestyle: a qualitative descriptive study. BMC Nursing, 13. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4264254/

RCN. (2005). Working well initiative: Managing your stress. A guide for nurses. Royal College of Nursing. [Online]. Available at: http://www.rcn.org.uk/__data/assets/pdf_file/0008/78515/001484.pdf

RCN. (2009). Work-related stress. Royal College of Nursing. [Online]. Available at: https://www.rcn.org.uk/__data/assets/pdf_file/0009/274473/003531.pdf

RCN. (2014a). Importance of stress awareness. [Online]. Available at: http://www.rcn.org.uk/newsevents/news/article/uk/importance_of_stress_awareness

RCN. (2014b). Two thirds of staff have considered leaving the NHS. [Online]. Available at: http://www.rcn.org.uk/newsevents/news/article/uk/two_thirds_of_staff_have_considered_leaving_the_nhs

Sharma, P., Davey, A., Davey, S., Shukla, A., Shrivastava, K. and Bansal, R. (2014). Occupational stress among staff nurses: Controlling the risk to health. Indian Journal of Occupational and Environmental Medicine, 18 (2), p.52–56. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4280777/

Stanley, M. J. C. and Matchett, N. J. (2014). Understanding how student nurses experience morally distressing situations: Caring for patients with different values and beliefs in the clinical environment. Journal of Nursing Education and Practice, 4 (10), p.133. [Online]. Available at: doi:10.5430/jnep.v4n10p133

Whitehead, P. B., Herbertson, R. K., Hamric, A. B., Epstein, E. G. and Fisher, J. M. (2015). Moral Distress Among Healthcare Professionals: Report of an Institution-Wide Survey. Journal of Nursing Scholarship, 47 (2), p.117–125. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25440758

Wilkinson, S. (2014). How nurses can cope with stress and avoid burnout: Stephanie Wilkinson offers a literature review on the workplace stressors experienced by emergency and trauma nurses. Emergency Nurse, 22 (7), p.27–31. [Online]. Available at: http://rcnpublishing.com/doi/abs/10.7748/en.22.7.27.e1354

Wojtowicz, B., Hagen, B. and Van Daalen-Smith, C. (2014). No place to turn: Nursing students’ experiences of moral distress in mental health settings. International Journal of Mental Health Nursing, 23 (3), p.257–264. [Online]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23980930

The Effectiveness of Public Health Interventions

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

The health of the whole population is a very important issue. Conditions which affect the whole population or large sections of it are considered to be public health issues and are the subject of specific healthcare promotions and interventions. These can take a range of forms: raising awareness of symptoms or of lifestyle factors implicated in developing a particular condition; managing health conditions to improve quality of life and/or longevity; or promoting recognition of symptoms so that early treatment can be obtained. Public health interventions are developed to address identified public health issues (National Institute for Health and Care Excellence, 2015). Once these are put in place, it is important to be able to assess their impact and effectiveness, both in respect of the present situation and in order to increase the knowledge base for the development of further interventions in the future (Brownson et al., 2010). This essay will consider the ways in which the effectiveness of public health interventions can be determined.

Discussion

One of the main factors that needs to be considered in public health interventions is cost-effectiveness (The King’s Fund, 2014). The NHS faces increasing demands on its services, so when new interventions are developed or existing ones reviewed, cost-effectiveness is a central concern. A further aspect of the effectiveness of public health interventions is the extent to which they have demonstrably achieved the aims set for them (Scutchfield & Keck, 2003). These two areas will now be considered in greater detail.

There is a finite budget available to the NHS to provide healthcare, and it has to be utilised in the most efficient way. The economic constraints that have been in place for some time have created an even greater need for financial efficiency. One way this can be achieved is by reducing the number of people suffering from conditions which are considered avoidable. Conditions such as diabetes and obesity, for example, are considered largely avoidable if people change their lifestyle habits to improve their health. Thus a range of public health interventions has been focused on these types of issue in order to prevent people from becoming ill, as this would represent a substantial saving in the costs of treating subsequent illness. It would also benefit the public, in that people would lead longer, healthier lives. However, preventative interventions present difficulties in measuring their effectiveness. A reduction in the number of people developing diabetes, for instance, may be attributable to a public health intervention, or it may be the result of one or more other factors. The individuals measured might not have developed the condition anyway, and so it cannot be proven that the intervention itself was solely responsible for their remaining well. As it can be difficult to measure the effectiveness of outcomes accurately, cost-effectiveness is also difficult to assess. Historically, preventative health promotion has been a problematic area because of these difficulties in establishing effectiveness, which made obtaining funding for such activities particularly challenging. However, the increasing demand for services has produced a shift in perspective and a greater focus on prevention, and the means of evaluating public health interventions in this area have therefore become important. Although financial implications cannot be the sole driver of health promotion, they are of necessity a major factor, as the NHS is obliged to produce evidence that its funding has been properly and effectively spent.

The effectiveness of health promotion, from the perspective of improving the health of the population, should be the primary motivation for interventions, rather than cost. In order to improve public health, there is a range of options for intervention. The impact of health interventions was described by Frieden (2010) as forming a five-tier pyramid, with the bottom tier being the most effective as it reaches the largest sector of the population and has the greatest potential to improve the social and economic determinants of health. The higher tiers of the pyramid relate to areas where the individual is helped to make healthy choices. Topics within the bottom tier of the pyramid include the improvements in health brought about by changing lifestyle habits such as smoking. Wide-scale promotions and interventions have been in place for many years, reducing the number of people who smoke and encouraging others not to begin. As a result, the risk factors for health issues such as heart conditions have been reduced. Whilst this may not completely prevent some people from developing such conditions, in terms of public health, which takes the wider perspective, a higher proportion of people will be at lower risk. The effectiveness of interventions in this case can thus be measured by the proportion of the population who currently smoke, who have given up smoking, or who have started smoking, by comparison with previous years’ records (Durkin et al., 2012). The number of people coming forward for help through smoking cessation services offered by their GPs can also be measured, together with the effectiveness of those interventions in helping people achieve their goal of stopping smoking.

The longstanding interventions to reduce the number of people with HIV/AIDS also fall within the same category of public health intervention (as just described in respect of smoking), once it was clear that the condition posed a potential risk to a large section of the population. In this instance, there was a large amount of public health promotional activity when the issue first became known in the 1980s, but this has largely subsided, with few if any national high-profile promotions or interventions currently in place (Bertozzi et al., 2006). However, the risk has not been eradicated, and there has been an increase in older people developing the condition (AVERT, 2015). This may be because they do not consider themselves at risk, or because they were not targeted by the original campaigns, which focused on homosexual communities, intravenous drug users and sexually active younger adults; married couples were not then considered a primary target audience. This demonstrates the need for ongoing interventions, particularly in terms of public awareness, to ensure a consistent and improving impact (AVERT, 2015). Unless a health risk has been eradicated, there is likely to be a need for continuing interventions to maintain public knowledge. The ways in which the HIV/AIDS and smoking campaigns are directed at the wider population are examples of the bottom tiers of Frieden’s pyramid.

When interventions are applied in the top tiers of Frieden’s (2010) pyramid, they address individuals more directly, rather than the whole population. It could be argued that such interventions have, overall, a greater impact, as any population-level change ultimately requires each individual to change. Unless each person is reached by an intervention and perceives it as a valuable change for them, publicly directed interventions will have reduced effectiveness. National interventions will of necessity be broadly based and will therefore not reach all those at whom they are aimed, since some may feel the message does not apply to them. Interventions that are more specifically targeted at individuals can take into account their socio-economic status and other factors, making them more easily seen as applicable (Frieden, 2010).

A different view of public health interventions considers the situation of people with terminal or long-term conditions. Many such interventions focus heavily on the medical model and do not take into account the impact on the patient or how they would prefer to be cared for. The medical view of what constitutes good health may be considered a more laboratory-based, theoretical view which does not necessarily reflect the lived experience of individuals (Higgs et al., 2005). Physical incapacity may not impact badly on an individual who has found ways to live a fulfilling life, whilst someone who is considered fit and well may not feel that they have a good quality of life (Asadi-Lari et al., 2004). Therefore, the impact of interventions on the public also needs to be considered. A medically effective intervention may be unpleasant or difficult for the patient to endure, and thus be viewed as less effective. Furthermore, if the intervention is too unpleasant the patient may fail to comply, and thus not obtain the level of effectiveness that the medical model would suggest (Asadi-Lari et al., 2004).

One area of public health that has proved somewhat controversial in recent years is immunisation. The suggested link between the MMR vaccine and autism, for instance, impacted heavily on the number of people having their children immunised (BMJ, 2013). Vaccination is an important branch of public health and relies upon sufficient people being immunised against diseases so that, should isolated cases occur, the disease will not spread. Many parents today will be unaware of the health implications of illnesses such as German measles and mumps, as vaccination has made cases rare. The rarity of cases has also led to the incorrect belief that these illnesses have been eradicated. In this instance, therefore, the effectiveness of the intervention was undermined by the influence of media reports of adverse outcomes. The fear that was generated has been difficult to overcome and resulted in a loss of faith in the process, which in turn reduced the effectiveness of the intervention. It can prove very difficult to restore public support following situations such as this which have continued for a long time. The impact can be measured both in the number of people coming forward to have their children immunised and in the number of cases of the various illnesses occurring each year. Current statistics do suggest, however, that levels of MMR immunisation have now been restored to an appropriate level (NHS, 2013).

The provision of the ‘flu vaccine is another instance where public health interventions may have varying effectiveness. Even a ‘good’ vaccine, where the formulation matches the circulating strains, is not considered to be 100% effective. In 2014, however, the vaccine did not match the main strain of ‘flu that actually circulated, and so little protection was provided (Public Health England, 2015). As a result, it is likely that there will be a downturn in the number of people coming forward to receive the ‘flu vaccination this year, as its value may be perceived as doubtful. This also demonstrates the need to provide the public with correct information so that they are aware of the potential effectiveness of an intervention. In the case of ‘flu, if the vaccine has a 60% chance of preventing the illness, this should perhaps be specifically stated. There may be a level at which the majority of people feel that it is not worth having the vaccination: if, hypothetically, an effectiveness of less than 30% was considered by most people too low to be worthwhile, few people might be immunised and a major epidemic could follow. Therefore, it is important that the information provided is correct and that the intervention itself is seen to be of sufficient value to the individual to warrant their choosing to take advantage of what is offered (NHS, 2015).

Conclusion

This essay has asserted that the effectiveness of public health interventions can be viewed from two main perspectives: the cost-effectiveness of the provision and its impact on the target audience. Whilst the NHS is under considerable financial pressure, this should not be the primary consideration in respect of public health. The aim of public health interventions is to improve the health and well-being of the population as a whole, and a wide range of methods is used to achieve this. Some provisions are aimed at the whole population, while others are designed for individuals or smaller target groups. To be effective, interventions need to reach their target audience and have meaning for them, so that they are encouraged to take the required action. Continuous changes in provision may also be needed to ensure that long-term issues remain in the public awareness.

Bibliography

Asadi-Lari, M., Tamburini, M. & Gray, D., 2004. Patients’ needs, satisfaction, and health related quality of life: Towards a comprehensive model. Health and Quality of Life Outcomes, 2(32).

AVERT, 2015. HIV/AIDS Statistics 2012. [Online] Available at: http://www.avert.org/hiv-aids-uk.htm [Accessed 28 September 2015].

Bertozzi, S.; Padian, N.S.; Wegbreit, J.; DeMaria, L.M.; Feldman, B.; Gayle, H.; Gold, J.; Grant, R.; Isbell, M.T., 2006. Disease Control Priorities in Developing Countries. New York: World Bank.

BMJ, 2013. Measles in the UK: a test of public health competency in a crisis. BMJ, 346(f2793).

Brownson, R.C.; Baker, E.A.; Leet, T.L.; Gillespie, K.N.; True, W.R., 2010. Evidence-Based Public Health. Oxford: Oxford University Press.

Durkin, S., Brennan, E. & Wakefield, M., 2012. Mass media campaigns to promote smoking cessation among adults: an integrative review. Tobacco Control, Volume 21, pp. 127-138.

Frieden, T. R., 2010. A Framework for Public Health Action: The Health Impact Pyramid. American Journal of Public Health, 100(4), pp. 590–595.

Higgs, J., Jones, M., Loftus, S. & Christensen, N., 2005. Clinical Reasoning in the Health Professions. New York: Elsevier Health Sciences.

National Institute for Health and Care Excellence, 2015. Methods for the development of NICE public health guidance (third edition). [Online] Available at: https://www.nice.org.uk/article/pmg4/chapter/1%20introduction [Accessed 28 September 2015].

NHS, 2013. NHS Immunisation Statistics, London: NHS.

NHS, 2015. Flu Plan Winter 2015/16. [Online] Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/418038/Flu_Plan_Winter_2015_to_2016.pdf [Accessed 28 September 2015].

Public Health England, 2015. Flu vaccine shows low effectiveness against the main circulating strain seen so far this season. [Online] Available at: https://www.gov.uk/government/news/flu-vaccine-shows-low-effectiveness-against-the-main-circulating-strain-seen-so-far-this-season [Accessed 28 September 2015].

Scutchfield, F. & Keck, C., 2003. Principles of Public Health Practice. Clifton Park: Delmar Learning.

The King’s Fund, 2014. Making the case for public health interventions. [Online] Available at: http://www.kingsfund.org.uk/audio-video/public-health-spending-roi?gclid=CM_ExbKomcgCFcZuGwodE44Lkg [Accessed 28 September 2015].

Transference Countertransference Therapeutic Relationship

This work was produced by one of our professional writers as a learning aid to help you with your studies

Describe the transference-countertransference element of the therapeutic relationship

An examination of the development of transference and counter-transference as a therapeutic tool with an exploration of the ways in which it can be defined and used in a therapeutic setting, with an overview and brief discussion of the way the concept of transference/counter-transference has been received by different schools of therapy.

Introduction

This essay explores the development of transference and countertransference from their origins in Freud’s work to their current uses in different psychotherapeutic schools. The Kleinian contribution is identified as a major catalyst to re-thinking countertransference as a resource rather than simply an obstacle to treatment.

An unseemly event and a fortuitous discovery

In 1880, the physician Dr Josef Breuer began treating a severely disturbed young woman who became famous in the history of psychoanalysis as “Anna O”. She had developed a set of distressing symptoms, including severe visual disturbances, paralysing muscular spasms, paralyses of her left forearm and hand and of her legs, as well as paralysis of her neck muscles (Breuer, 1895, in Freud and Breuer, 1895/2004, p. 26). Medical science could not explain these phenomena organically, save to designate them as symptoms of what was then known as “hysteria”, so Breuer took the radical step of visiting his young patient twice a day and listening carefully to her as she spoke about her troubles. He was to make a powerful discovery which deeply influenced his younger colleague, Dr Sigmund Freud: whenever Anna found herself spontaneously recounting memories of traumatic events from her early history, memories she had hitherto had no simple access to through conscious introspection, her symptoms began to disappear one by one. But for the purposes of this essay, one event was to be of pivotal importance: just as Breuer was about to conclude his treatment of the young woman as a success, she declared to him that she was in love with him and was pregnant with his child.

Perhaps unsurprisingly, Breuer was traumatised and promptly withdrew from this intimate method of treatment. Freud’s official biographer, Ernest Jones, reports that Breuer and Freud originally described the incident as an “untoward” event (Jones, 1953, p. 250); but where Breuer admonished himself for experimenting with an unethically intimate method which may have made him seem indiscreet to the young woman, Freud studied the phenomenon with scrupulous scientific neutrality. He, too, had experienced spontaneous outbursts of apparent love from his psychotherapeutic patients, but as Jones (1953, p. 250) observes, he was certain that such declarations had little or nothing to do with any magnetic attraction on his part. The concept of transference was born: patients, Freud argued, find themselves re-experiencing intense reactions in the psychotherapeutic relationship which were in origin connected with influential others in their childhoods (such as parents or siblings). Without being aware of doing so, patients tended to transfer their earlier relationship issues onto the person of the therapist.

As Spillius, Milton, Garvey, Couve and Steiner (2011) argue, at the time of the Studies in Hysteria, Freud tended to regard manifestations of transference as a predominantly positive force: the patient’s mistaken affections could be harnessed in the service of a productive alliance between therapist and client to explore and analyse symptoms. But by 1905, his thinking about transference began to undergo a profound change. Although he was already aware that patients could direct unjustifiably hostile feelings toward the analyst as well as affectionate ones, his work with the adolescent “Dora” shook him deeply when she abruptly terminated her analysis in a surprisingly unkind and perfunctory manner (Freud, 1905/2006). He had already worked out that both the positive and negative manifestations of transference functioned as forms of resistance to the often unpleasant business of understanding one’s own part in the events of the past (it is, for example, a good deal easier to lay the blame for one’s present-day failings on “bad” or unsupportive figures from the past or on their selected stand-ins in the present than it is to acknowledge that one rejected or failed to make full use of one’s opportunities). But he began to realise that Dora had actively repeated a pattern of relationship-behaviour with him that had actually arisen from her unacknowledged hostility toward her father, as well as toward a young man she had felt attracted to, because both had failed to show her the affection and consideration she believed herself entitled to.

She took her revenge on Freud – and she was not alone in re-enacting critical relationship scenarios inside the therapeutic relationship; other patients, he began to see, also frequently relived relational patterns in this way while totally unaware that they were repeating established patterns. By 1915, transference was no longer a resistance to recovering hazy and unpleasant memories for Freud; instead, it was an active, lived repetition of earlier relationships based on mistakenly perceived similarities between here-and-now characteristics of the analyst and there-and-then characteristics of previously loved or hated figures (Freud, 1915/2003).

The interplay between psychical reality and social reality

Melanie Klein, a pioneer of child psychoanalysis, accepted Freud’s view of transference as a form of re-enactment, but using her meticulous observations of the free play of very young (and very disturbed) child patients, she began to develop the view that it was not the dim-and-distant past that was re-enacted but, on the contrary, the present. Psychical reality and social reality were not coterminous or even continuous; they were involved instead in a ceaseless dialectical interplay (Likierman, 2001, esp. pp. 136 – 144). Real people may constitute the child’s external world, but for Klein, the only way to make sense of the often violent and disturbing content of the children’s play she observed was to posit the existence of a psychical reality dominated by powerful unconscious phantasies involving frighteningly destructive and magically benevolent inner figures or “objects” (Klein, 1952/1985). Children did not simply re-enact actual interpersonal relationships; they re-enacted relationships between themselves and their own unique unconscious phantasy objects. In spontaneous play, she believed, children were dramatising and seeking to master or domesticate their own worst fears and anxieties.

Klein’s thought has changed the way transference is viewed in adult psychotherapy, too. If transference involves not simply the temporal transfer of unremembered historical beliefs into the present but the immediate transfer of phantasies, in the here-and-now, which are active in the patient’s mind, handling transference becomes a matter of immediate therapeutic concern: one does not have to wait until a contingency in the present evokes an event from the past, nor for the patient to make direct references to the therapist in her associations, because a dynamic and constantly shifting past is part of the present from the first moments of therapy in Kleinian thought. For example, Segal (1986, pp.8 – 10) describes a patient opening her first therapy session by talking about the weather – it’s cold and raining outside. Of all the issues a patient could choose to open a session – the latest political headlines, a currently active family drama, a dream, a quarrel with a work colleague, and so on – it is always significant when a patient “happens” to select a particular theme; for Segal, following Klein, this selection indicates the activity of unconscious phantasy objects. Transference is immediate: Segal asks whether the patient is actually exploring, via displacement onto the weather, her transferential fear that the analyst may be an unfriendly, cold, and joy-dampening figure.

Countertransference, its development and its use by different schools of therapy

The foregoing has focussed on transference, but implicit throughout has been the complementary phenomenon of countertransference, from Breuer’s shocked withdrawal from Anna O’s transferential love to Freud’s distress at being abruptly abandoned by Dora who, he later realised, was re-enacting a revenge scenario. Intensely aware that emotions could be roused all too easily in the analyst during a psychoanalytic treatment, Freud was exceptionally circumspect about any form of expression of these feelings to the patient. In his advice to practitioners, he suggested that the optimal emotional stance for the therapist was one of “impartially suspended attention” (Freud, 1912b/2002, p. 33). He did not, however, intend this to be a stable, unfluctuating position of constantly benevolent interest; he urged therapists to be as free of presuppositions and as open-minded as possible to their patients’ spoken material, to be willing to be surprised at any moment, and to allow themselves the freedom to shift from one frame of mind to another. But he was unambiguous in his advice about how the therapist should comport him- or herself during analysis:

“For the patient, the doctor should remain opaque, and, like a mirror surface, should show nothing but what is shown to him.” (Freud, 1912b, p. 29)

As his paper on technique makes clear, Freud considered the stirring up of intense emotions in the therapist to be inevitable during analytic work; but he also considered these responses to the patient an obstacle to that work, an expression of the therapist’s own psychopathology which required analysis rather than in-session expression. The analyst had an obligation to remove his own blind spots so as to attend to the patient’s associations as fully and as free from prejudice as possible.

By the 1950s, psychoanalysts were beginning to explore countertransference as a potential source of insight into the patient’s mind. As Ogden (1992) draws out in his exploration of the development of Melanie Klein’s notion of projective identification, Kleinian analysts such as Wilfred Bion, Roger Money-Kyrle, Paula Heimann and Heinrich Racker began arguing that it was an interpersonal mechanism rather than an intrapsychic one (as Klein had intended). Patients, they believed, could evoke aspects of their own psychic reality, especially those aspects that they found difficult to bear, inside the mind of the analyst by exerting subtle verbal and behavioural pressures on the therapist. Therapists should not, therefore, dismiss such evoked emotions as purely arising from their own psychopathology, but should treat them as a form of primitive, para- or pre-verbal communication from the patient. As Ogden (a non-Kleinian) puts it:

“Projective identification is that aspect of transference that involves the therapist being enlisted in an interpersonal actualization (an actual enactment between patient and therapist) of a segment of the patient’s internal object world.”
(Ogden, 1992, p. 69)

Countertransference, in other words, when handled carefully and truthfully by the therapist, can be a resource rather than an obstacle, and this view has spread well beyond the Kleinian school. For example, while advocating caution in verbalising countertransference effects in therapy, the Independent psychoanalyst Christopher Bollas (1987) suggests that the analyst’s mind can be used by patients as a potential space, a concept originally developed by Winnicott (1974) to designate a safe, delimited zone free of judgement, advice and emotional interference from others, within which people can creatively express hitherto unexplored aspects of infantile experience. Bollas cites the example of a patient who recurrently broke off in mid-sentence just as she was starting to follow a line of associations, remaining silent for extended periods. Initially baffled and then slightly irritated, Bollas worked on exploring his countertransference response carefully over several months of analytic work. He eventually shared with her a provisional understanding that came from his own experience of feeling himself to be, paradoxically, in the company of someone who was absent: physically present but not emotionally attentive or available. He told her that he had noticed that her prolonged silences left him in a curious state, which he wondered was her attempt to create a kind of absence he was meant to experience. The intervention immediately brought visible relief to the patient, who was eventually able to connect with previously repressed experiences of living her childhood with an emotionally absent mother (Bollas, 1987, pp. 211 – 214).

Other schools of psychoanalytic therapy, such as the Lacanians, remain much more aligned with Freud’s original caution, believing that, useful though countertransference may be, it should never be articulated in therapy but taken to supervision or analysis for deeper understanding (Fink, 2007).

References

Bollas, C. (1987). Expressive uses of the countertransference: notes to the patient from oneself. In C. Bollas, The Shadow of the Object: Psychoanalysis of the Unthought Known (pp. 200 – 235). London: Free Association Books.

Breuer, J. (1895/2004). Fraulein Anna O. In S. Freud & J. Breuer, Studies in Hysteria (pp. 25 – 50). London and New York: Penguin (Modern Classics Series).

Fink, B. (2007). Handling Transference and Countertransference. In B. Fink, Fundamentals of Psychoanalytic Technique: A Lacanian Approach for Practitioners (pp. 126 – 188). New York and London: W.W. Norton & Company.

Freud, S. (1905/2006). Fragment of an Analysis of Hysteria (Dora). In S. Freud, The Psychology of Love (pp. 3 – 109). London and New York: Penguin (Modern Classics Series).

Freud, S. (1912/2002). Advice to Doctors on Psychoanalytic Treatment. In S. Freud, Wild Analysis (pp. 33 – 41). London and New York: Penguin (Modern Classics Series).

Freud, S. (1912/2002). On the Dynamics of Transference. In S. Freud, Wild Analysis (pp. 19 – 30). London and New York: Penguin (Modern Classics Series).

Freud, S. (1915/2003). Remembering, Repeating and Working Through. In S. Freud, Beyond the Pleasure Principle and Other Writings (pp. 31 – 42). London and New York: Penguin (Modern Classics Series).

Jones, E. (1953). The Life and Work of Sigmund Freud: The Formative Years and the Great Discoveries, 1856-1900 – Vol. 1. New York: Basic Books.

Klein, M. (1952/1985). The Origins of Transference. In M. Klein, Envy and Gratitude and Other Works (pp. 48 – 60). London: The Hogarth Press & The Institute of Psycho-Analysis.

Likierman, M. (2001). Melanie Klein: Her Work in Context. London and New York: Continuum.

Ogden, T. (1992). Projective Identification and Psychotherapeutic Technique. London: Maresfield Library.

Segal, H. (1986). Melanie Klein’s Technique. In H. Segal, The Work of Hanna Segal: Delusion and Artistic Creativity & Other Psycho-analytic Essays (pp. 3 – 34). London: Free Association Books/Maresfield Library.

Spillius, E., Milton, J., Garvey, P., Couve, C. & Steiner, D. (2011). The New Dictionary of Kleinian Thought. East Sussex and New York: Routledge.

Winnicott, D. W. (1974). Playing and Reality. London: Pelican.