Does Machiavelli Reduce Politics to Force?

In this essay, I assess whether Machiavelli reduces politics to force. To construct a response, it is necessary to explore what “force” means, since “force” is a philosophically weak concept. To understand “force” as a philosophical concept, we need to separate the concepts of authority and power. Once power and authority are clearly differentiated, the question becomes whether Machiavelli reduces politics to force, where force is equated with power, or whether he rests politics on authority.

In this essay, I argue that, despite Skinner’s attempts to rehabilitate Machiavelli and reconstruct him as a defender of liberty, Machiavelli does not rest power and politics on authority. Instead, Machiavelli argues that power should be utilised for the purpose of “the common good”. For Machiavelli, political necessity allows for incursions on liberty and the use of power, rather than authority. Femia is alive to the implications of “the dark, authoritarian and militaristic element in Machiavelli’s writings” (Femia, 2004, p.15); and, in this essay, I argue that this should not be overlooked.

Goodwin argues that attempting to distinguish rigorously between power and authority “is ultimately doomed to failure” (Goodwin, 1997, p.314), even though “the distinction between power and authority has exercised many philosophers, who feel there should be a sharp demarcation between the two” (Goodwin, 1997, p.306). Whilst a “sharp demarcation” may not be possible, Goodwin does separate the two. She argues that power “is the ability to cause someone to act in a way which she would not choose, [if] left to herself” (Goodwin, 1997, p.307). This can occur in a number of ways, including threats and violence, but also through persuasion, propaganda and advertising. Authority, however, Goodwin argues, has a basis in law; a government has authority if it has legal validity (Goodwin, 1997). A sharp distinction between power and authority may not be possible, and it may help to see the concepts on a sliding scale, with illegitimate power on one side and legitimate authority on the other, with much in-between.

This separation between power and authority is fundamental to this essay, as it is important to understand whether Machiavelli argues that politics ought to rest on authority or whether it can be reduced to maintaining power. Therefore, in an attempt to summarise the “demarcation” between power and authority, I return once more to Goodwin, who says the individual “defers to authority… [but] yields to power” (Goodwin, 1997, p.313). If Machiavelli reduces politics to force/power, his concern is that people must yield to the government; whereas, if Machiavelli argues that politics ought to rest on authority, his concern would be that the people defer to the government and recognise its legal legitimacy.

Machiavelli’s political philosophy is more complex than the often one-dimensional interpretation of him as a self-serving manipulator, promoter of immorality and defender of tyranny. In contrast to this one-dimensional view, which implies that he reduces politics to the maintenance of power and a justification of tyranny, Machiavelli is a defender of a certain kind of liberty. However, Machiavelli’s concept of liberty concerns the liberty of the state or the government. He argues that in order for the people to be free, they must live in a free state – a state free from external servitude. Machiavelli’s concept of liberty prioritises the state in the relationship between the individual and the state: “it is not the well-being of individuals that makes cities great, but the well-being of the community” (Machiavelli, The Discourses: Book II, Discourse 2). For Machiavelli, it is not the individual that is important, but the community or the state. Therefore, the individual must yield to the will of the state for the liberty and well-being of the “common good”.

In his interpretation of Machiavelli’s thought, Skinner emphasises the importance of the free state; and, crucially, he stresses the seriousness of the metaphor of the body politic to neo-roman thought, which meant that Machiavelli could not conceive of a free individual without a free state. This is only one of many interpretations of Machiavelli, and it is not objective, as it underpins Skinner’s thesis that liberty was an important concept for Machiavelli. Machiavelli defines the free state as one that is “removed from any kind of external servitude” (Machiavelli, The Discourses: Book I, Discourse 2). Skinner expands this by relating it to the concept of the body politic, where, “just as individual human bodies are free… only if they are able to act or forbear from acting at will, so the bodies of nations and states are likewise free… only if they are similarly unconstrained from using their powers according to their own wills” (Skinner, 1998, p.25). Skinner’s elaboration means that a state is only free when it follows the collective will of the people; liberty is thereby equated with self-government, so a free state is defined as a community “independent of any authority save that of the community itself” (Skinner, 1981, p.52). Machiavelli stridently defends the free state, arguing that “history reveals the harm that servitude has done to people and cities… [as they] have never increased either in dominion or wealth, unless they have been independent” (Machiavelli, The Discourses: Book II, Discourse 2). This underpins Machiavelli’s perennial fear that freedom is fragile and liberty could succumb to external conquest or internal tyranny.

Skinner pursues this notion, and argues that overt coercion is not necessary for a state to be in a condition of slavery: if the maintenance of civil liberty is dependent upon the good will of arbitrary power, then the individual is already living as a slave (Skinner, 1998). This is a rational consequence of Machiavelli’s bleak interpretation of human nature, where men do not promote the common good, i.e. the preservation of the state’s liberty. Machiavelli argues that humans are: self-motivated – “men never do good unless necessity drives them” (Machiavelli, The Discourses: Book I, Discourse 3); bellicose – “security for man is impossible unless it be conjoined with power” (Machiavelli, The Discourses: Book I, Discourse 1); fickle and untrustworthy – they “will not keep their promises” (Machiavelli, The Prince: Chapter XVIII); and pusillanimous – “when the state needs its citizens, few are to be found” (Machiavelli, The Prince: Chapter IX). These attributes are a hindrance to a state that is trying to preserve its ability to enact the collective will without constraint. Therefore, liberty requires overcoming men’s selfish inclinations, so they can be fit to govern themselves, and this involves engaging in activities which are conducive to “human flourishing” (Skinner, 1990). Given that it is contrary to men’s natural inclinations to pursue the “common good”, it seems that this involves yielding to the power of the state. Skinner’s eloquent term “human flourishing” describes the need to imbue each citizen with a sense of civic virtu – essentially, a public-spirited ethos whereby the individual commits a great deal of time and energy to participating in the affairs of the state, and maintains a vigilance to safeguard its freedom. Skinner admits that civic virtu requires placing “the good of the community above all private interests and ordinary considerations of morality” (Skinner, 1981, p.54).

Machiavelli’s political philosophy rests on valuing the public sphere, with a resulting dismissive attitude toward the private sphere. Thus, the citizens of the state are required to yield to the power of the state, and to relinquish their individual liberty, if it is perceived to be in the “common good”. Machiavelli praises Rome, where those who worked through the public sphere were honoured, but those working through private means were condemned and prosecuted (Machiavelli, The Discourses). Machiavelli argues that a sense of duty to the community, which entails sacrificing the legitimacy of the private sphere, does not curtail liberty but preserves it, as civic virtu is essential to ensuring the state is not constrained from acting upon its own will. He quotes (possibly apocryphally) from ancient history: “they rebelled because when peace means servitude it is more intolerable to free men than war” (Machiavelli, The Discourses: Book III, Discourse 44), which appeals to Machiavelli’s doctrine of public-spiritedness, and his promotion of the well-being of the community.

Machiavelli promotes the ideals of republicanism, and of republican liberty, which entail a need to safeguard the state against internal tyranny through citizens who are active and vigilant, and who participate in the daily running of the community, to ensure that the state is not subjected to the caprices of a minority and that, instead, the community seeks the public interest. Machiavelli criticises the consequences of internal tyranny with empirical reference to the greatness attained by Athens, once “liberated from the tyranny of Pisistratus…. [and] the greatness which Rome attained after freeing itself from its Kings” (Machiavelli, The Discourses: Book II, Discourse 2). Thus, Machiavelli can be read as a defender of liberty, given his belief that the conflict between the nobles and plebs was the primary reason Rome maintained her freedom (Machiavelli, The Discourses), and his assertion that a monarch’s interests are usually harmful to the city (Machiavelli, The Discourses). This interpretation shows that Machiavelli does not unambiguously reduce politics to the use of force and power. Instead, he argues that politics rests on the order of a well-structured government. However, for Machiavelli, a well-structured government and political authority are not necessarily synonymous, since he argues that political order may require the use of force and the wielding of power by a powerful leader.

Machiavelli’s writings are littered with references to his admiration for strong leadership, e.g. “dictatorship was always useful” in Rome (Machiavelli, The Discourses), or his defence of a Prince’s cruelty to keep his subjects united and loyal, as men are wretched and will pursue their own interest unless they fear punishment (Machiavelli, The Prince). There are clearly elements of Machiavelli’s writings that support the idea of the free state and a certain concept of liberty; for instance, he argues that “experience shows that cities have never increased in dominion or riches except while they have been at liberty” (Machiavelli, The Discourses: Book II, Discourse 2). This allows Skinner to construct Machiavelli as a defender of liberty, by arguing that “what Machiavelli primarily has in mind in laying so much emphasis on liberty is that a city bent on greatness must remain free from all forms of political servitude” (Skinner, 1981, p.58). Skinner’s reading suggests that Machiavelli did not reduce politics to force and power, and that, instead, he rested politics on political authority. However, this rehabilitation of Machiavelli by Skinner overlooks a number of passages in Machiavelli’s writing which show that he was clearly prepared to allow force and power to be used without linking them to authority.

Femia takes the view that Machiavelli was not a defender of liberty, and did not place authority at the heart of politics. Femia concludes that Machiavelli’s political thought can be characterised by the belief that “we cannot draw a sharp line between moral virtue and moral vice: the two things often change place. Fair is foul and foul is fair” (Femia, 2004, p.11). For Machiavelli, it is the state that is important, and the individual’s liberty can be subjected to power and force in order for the good of the city to prevail. Machiavelli eradicates the private sphere, which allows Femia to draw a parallel between Machiavelli’s concept of freedom and that of fascists, who also argue that “freedom comes through participating in a great whole… [and] nothing to do with limiting the state’s autonomy” (Femia, 2004, p.8). Machiavelli’s primary concern is maintaining political order, and his advice in The Prince often seems to be more about maintaining power than establishing authority. In places, Machiavelli’s advice is brutal, and seems unambiguously to promote the exercise of force for the purposes of maintaining power.

Machiavelli shows no regard for individual liberties, and allows the state to trample over its citizens when force and power are necessary, arguing that “it should be noted that one must either pamper or do away with men, because they will avenge themselves for minor offences while for more serious ones they cannot” (Machiavelli, The Prince: Chapter III). This brutal, cynical observation is an instance of Machiavelli’s realism. Such observations do not, in themselves, prove that Machiavelli reduces politics to force and power. It is possible to argue that Machiavelli is simply describing politics accurately, drawing the reader to an important piece of wisdom about human nature. However, this does not seem to be Machiavelli’s motivation. He is not merely observing brutal realities, but appears to be advocating their application. He argues that those the ruler “hurts, being dispersed and poor, can never be a threat to him, and all others remain on the one hand unharmed… and on the other afraid of making a mistake, for fear that what happened to those who were dispossessed might happen to them” (Machiavelli, The Prince: Chapter III). The important word here is “fear”. The people fear the ruler, and so obey. This does not imply a ruler who governs by authority. Instead, the implication is that the ruler holds power through force.

Despite the ruthless, brutal and cynical methods that Machiavelli appears to advocate, it is important not to misread him as someone who advocates force and violence merely for the sake of power. Machiavelli is concerned with the “common good”, and thus he argues that the exercise of force – raw power – is only justified if it is exercised in pursuit of that good. Or, more simply, the “ends justify the means”. Machiavelli does not advocate raw power per se; instead, he argues that if the ends are “good”, then the use of force is justified. This blurring of the common good and the use of power to promote it is evident when he argues that “a prince must not worry about the reproach of cruelty when it is a matter of keeping his subjects united and loyal; for with a very few examples of cruelty he will be more compassionate than those who, out of excessive mercy, permit disorders to continue… for these usually harm the community at large” (Machiavelli, The Prince: Chapter XVII). This, however, exposes the paradox in Machiavelli’s thought, where cruelty is justified by the ends. The problem is that Machiavelli’s initial concern is holding power to prevent disobedience and disorder. It is possible that this exercising of power may shift, and become authority; but, in the first instance, politics is about maintaining power.

Machiavelli was a Renaissance writer, and the differentiation between power and authority that Goodwin discusses had not yet become part of political philosophy. To argue, therefore, that Machiavelli sought not political authority but power would be a misrepresentation, as these concepts were not available to him. Nevertheless, for Machiavelli, political necessity dominates, and in a realist vein he allows for incursions on liberty and the use of force, and even cruelty, to hold power. Ultimately, he seeks authority in the common good, and this justifies whatever methods are used to hold on to power.

Machiavelli does not simply reduce politics to force, since force is used to pursue the common good. However, Machiavelli is not concerned with the individual citizen, since he does not differentiate between the public and private realms. Thus, Machiavelli is not concerned with individual liberty and individuals’ rights: even when the “private person may be the loser… there are so many who benefit thereby that the common good can be realized in spite of those few who suffer in consequence” (Machiavelli, The Discourses: Book II, Discourse 2). Without a clear separation of public and private, and between legitimate authority and illegitimate power, the common good can become the arbitrary will of the ruler. The arbitrary will of a ruler – even one who seeks to promote the common good – leaves politics very open to the use of force to maintain power, in the name of the common good. This use of force to maintain power is quite different from the use of force by a government that governs through authority, under the rule of law.

Bibliography

Femia, J (2004) “Machiavelli and Italian Fascism”, History of Political Thought, Volume 25, Issue 1, pp. 1-15

Goodwin, B (1997) Using Political Ideas (4th edition), John Wiley & Sons, Chichester

Machiavelli, N (1984) The Prince (Edited, Introduced and Translated by P Bondanella and M Musa) Oxford University Press, Oxford

Machiavelli, N (1998) The Discourses (Edited, Introduced, Revised and Translated by B Crick, L Walker and B Richardson) Penguin Classics, London

Skinner, Q (1981) Machiavelli, Oxford University Press, Oxford

Skinner, Q (1990) “The republican ideal of political liberty” in Bock, G & Skinner, Q & Viroli, M (editors) Machiavelli and Republicanism, Cambridge University Press, Cambridge, pp. 293-310

Skinner, Q (1998) Liberty Before Liberalism, Cambridge University Press, Cambridge

Discussion on the Validity of The Leftist Intellectual

Famously, in the last of his Theses on Feuerbach (1845), Marx declared that ‘The philosophers have only interpreted the world, in various ways; the point is to change it’ (1974, p.123). This was intended as part of a contribution to a contemporary debate within German philosophy – in this case, over the exact character of existing materialism. However, Marx’s challenge could be said to encapsulate the key question at the heart of the discussion about the role of the ‘philosopher’, or intellectual – what impact do his or her ideas have in the wider world? More plainly, what is the relationship between thought and action? In terms of the communist or socialist left, with which, of course, Marx was most concerned, this question has worked itself out in a number of ways, but perhaps the main focus has been on the issue of the political or social commitment of the intellectual – especially, his or her commitment to a specific ideology and political formation such as the Party. At times in the history of the Leftist intellectual since the 19th century, this has led to a high degree of tension between those who see a specific ideological commitment as the sine qua non of an intellectual position, and those who argue for a more creative, if not more complex conception of the relationship between intellectuals and the practical political sphere.

Thus, for the Left the idea of the intellectual as a figure who stands in some way apart from and above the political fray and offers universally applicable insights into the state of things as they are is problematic. In his book on the intellectual, Legislators and Interpreters (1987), the social theorist Zygmunt Bauman identifies two general conceptions of the intellectual and of intellectual work – modern and postmodern. For the first of these, he writes, the ‘typically modern strategy of intellectual work is one best characterised by the metaphor of the “legislator” role’. This role ‘consists of making authoritative statements which arbitrate in controversies of opinions and which select those opinions which, having been selected, become correct and binding’ (1987, p.4). In this conception, the intellectual has, through his or her ‘superior, objective knowledge’ (1987, p.5), access to an impartial, universal ‘Truth’ which enables him or her to make the right decisions on the part of society or humanity as a whole.

The modern intellectual of whom Bauman is writing has its origins largely in the rationalist philosophes of 18th-century France, who sought to establish modern society on the basis of Reason and rationalist principles. Such a ‘legislative’ intellectual would seem to be anathema to those on the Left, especially the revolutionary Left, who required the intellectual to be aligned with and committed to their particular cause. However, for Bauman, and for other theorists such as Michel Foucault, whose conception of the ‘universal’ intellectual as ‘the master of truth and justice’ (1980, p.126) shares much in common with Bauman’s, the Leftist intellectual in fact operates in much the same way as the figure he describes. Thus, Lenin in What is to be Done? (1902) wrote of the revolutionary intellectual as one who brings theoretical consciousness to the masses, or proletariat, from ‘outside’ (1988, pp.143-4). Lenin argued that the proletariat was incapable of developing such a consciousness spontaneously, on its own, and needed the vanguard intellectual, standing at the head of the class and organised within the tightly disciplined revolutionary party, to make good the shortfall. Although he eventually became persona non grata as far as the Soviet state was concerned, and was assassinated by its agents, Trotsky, like Lenin, argued for the supremacy of the party. In his speech given to mark the founding of the Fourth International in 1938, he signalled the need for complete commitment on the part of revolutionary intellectuals to the party: ‘Our party demands each of us, totally and completely… For a revolutionary to give himself entirely to the party signifies finding himself’ (1974, p.86).

For Trotsky, the experience of persecution at the hands of Stalin did not lead to his disillusionment with the idea of the revolutionary party as the ‘lever of history’ (1974, p.86), the means by which intellectuals such as himself would raise the ‘revolutionary level’ of the masses (1974, p.86). It was in this context, and only in this context, that the intellectual of the Left (specifically, the revolutionary left) had validity, because he or she had political agency. However, for many on the Left the victory of Stalin and totalitarianism in the Soviet Union led them to re-think the relationship between the intellectual and the working class, seeking to address the problem of how to produce intellectuals from and develop revolutionary consciousness more widely and authentically in the working class itself. Perhaps the most convincingly elaborated effort to do so was that of Antonio Gramsci.

Gramsci is best known for his development of the concept of the ‘organic intellectual’. Such an intellectual is distinct from the ‘traditional’ type by dint of the fact that he or she arises out of the ranks of the working class itself, rather than from the bourgeoisie, or ruling class. The ‘traditional intellectuals’, although they thought of themselves as autonomous and ‘endowed with a character of their own’ (Gramsci, 1971, p.8), were rather a stratum which legitimised the rule of the bourgeoisie, which had arisen with that class and functioned to serve its ends in the spheres of culture and ethics. In fact, according to Gramsci, the ‘traditional intellectual’ had itself been the ‘organic intellectual’ of the now ruling class when that class was expanding and elaborating its hegemony over all other classes.

The elaboration of its own ‘organic intellectuals’, therefore, becomes a key task for the working class in its struggle for hegemony, or cultural and political domination, over all other classes. The process whereby such intellectuals are created is not marginal to the achievement of that domination but constitutes the very movement of that process. As the working class ‘distinguishes’ (1971, p.334) itself through the production of such intellectuals, it raises its general level of consciousness and culture and is able to produce more, and more accomplished, intellectuals, which enables it to challenge its competitors across the whole field of culture and society. With the widening and deepening of this process, the working class is able to generate and develop a culture of its own sufficient to the tasks of the revolutionary transformation of society, rather than having to rely upon intellectuals from ‘outside’ to perform those tasks for it.

Such a conception as Gramsci’s would seem to place the intellectual at the very heart of the political and cultural practice of the Left, opening up the possibilities of participation in intellectual action to many members of the working class itself. However, the party was still a centralised and hierarchical structure, and Gramsci had to try to balance the often conflicting demands of party organisation and discipline with the centrifugal forces of popular participation and autonomy. Gramsci borrowed the idea of the Centaur from Machiavelli, which brought together the two sides of ‘force and consent… the individual moment and the universal moment’ (1971, p.170), party and mass. It was his conception that the ‘organic intellectual’ would articulate these two sides, as an intermediate stratum which would ensure the unification of the spontaneous consciousness of the working class, rooted in its experience of oppression and exploitation, and the revolutionary-theoretical consciousness of its ‘leaders’ in the party. However, Gramsci was to die after his long imprisonment, and in the end his project to re-energise the revolutionary party from below was defeated by the bureaucratism of Stalinism, which became more entrenched with the movement towards world war in the 1930s. For Gramsci, the intellectual was not only a valid category but a crucial agent in the victory of socialism over capitalism, although one still to be seen within the context of the party.

The last of the incarnations of the intellectual of the Left I am going to discuss is one which arose within the context of the post-war period and the rise of what came to be known as the ‘New Left’. With the coming of the Cold War and the increasing disillusionment with the Soviet Union of many of those on the Left in the West, many of the latter began to look around for alternatives to the ‘statist’ politics of the Communist Party. This process was hastened by events in Hungary in 1956, where the Soviet Union crushed a rebellion against its client regime, which saw a mass-scale withdrawal by intellectuals and others from the Communist Parties of the West. During this period, immediately after the Second World War, many intellectuals – or those in what might be called the ‘intellectual professions’ – became deeply suspicious of state-level political organisations and sought to found a New Left which connected with the everyday lives and experiences and struggles of ordinary people on the ground.

One may say that this effort had much in common with what Gramsci hoped to do, as discussed above. However, the intellectual of the New Left was concerned with re-founding politics on the basis of ethical commitment rather than with achieving state power through the elaboration and strengthening of the party as an organisation. One figure who was influential both as a model and as an advocate of this altered conception of the Left intellectual was the British historian E.P. Thompson. Thompson argued for an emphasis upon moral responsibility and ethical commitment in the practice of politics. He was less concerned with seizing state power than enabling ordinary people to resist its worst effects.

It is possible here only to touch upon the ideas Thompson developed with regard to the intellectual and his or her commitment to a more ethical politics in the post-war world. Thompson had been a member of the Communist Party of Great Britain as well as a tutor in workers’ and adult education, and each of these experiences could be said to have shaped his particular thinking about the necessary responsibilities of the intellectual. As a former Communist Party member, he believed that such events as those in Hungary demonstrated the bankruptcy of the party’s politics and its failure to connect itself to the wider working class. However, as a tutor Thompson saw education (especially extramural education, taking place outside the formal context of schools and academies) as a key alternative context to that of the party, in which the intellectual could play a vital role in politicising and connecting with ordinary members of the working class. Indeed, when Thompson joined Leeds University as an adult education tutor in 1948, he declared his aim to be ‘to create revolutionaries’ (Searsby et al., 1993, p.3). At the same time, Thompson saw his involvement in adult and workers’ education as a two-way process, insofar as it enabled him to tap into a longstanding tradition within that sphere of independent thought and participation from below.

At this time, then, Thompson was committed both to the party and to workers’ education. However, this dual commitment eventually became impossible. In the wake of the 1956 events, a journal he had co-founded, The Reasoner, was suppressed by the Communist Party, which Thompson then left. From then on he was fully committed to ‘socialist humanism’ (see Thompson, 1957), and to the struggle ‘between competing moralities within the working class’ (Thompson, 1959, p.52). A key site for that struggle was education, where the intellectual of the Left could foster the humanist values necessary to enable his or her students to defend themselves against the corruption of state ideologies and politics, and where the intellectual him- or herself could learn from the lived experience of the working class.

Thompson became one of the most influential figures of the British New Left, and wrote one of the most influential texts of social history ‘from below’ in 1963, The Making of the English Working Class, as well as becoming a key figure in the Campaign for Nuclear Disarmament. For Thompson, as for other New Left figures such as C. Wright Mills, the radical American sociologist, the Leftist intellectual had the most validity and social significance outside of the party and when relating to the struggles of people at the level of their everyday lives. What mattered for them was not ideology and dogma but moral values and experience.

Bibliography

BAUMAN, Z. (1987) Legislators and Interpreters: On Modernity, Post-Modernity and Intellectuals. Cambridge: Polity Press.

FOUCAULT, M. (1980) Power/Knowledge. GORDON, C. (ed.). Brighton: Harvester Press.

GRAMSCI, A. (1971) Selections from Prison Notebooks. HOARE, Q. and NOWELL-SMITH, G. (ed. and trans.). London: Lawrence and Wishart.

LENIN, V. I. (1988) What is to be Done? Harmondsworth: Penguin Books.

MARX, K. and ENGELS, F. (1974) The German Ideology: Students Edition. ARTHUR, C.J. (ed.). London: Lawrence and Wishart.

SEARSBY, P., RULE, J. and MALCOLMSON, R. (1993) Edward Thompson as a Teacher: Yorkshire and Warwick. In RULE, J. and MALCOLMSON, R. (eds.) Protest and Survival: Essays for E.P. Thompson. London: The Merlin Press.

THOMPSON, E.P. (1957) “Socialist Humanism”: An Epistle to the Philistines. The New Reasoner 1, pp.105-43.

THOMPSON, E.P. (1959) Commitment in Politics. Universities and Left Review 6, pp.50-55.

TROTSKY, L. (1974) Writings of Leon Trotsky 1938/9. New York: Pathfinder Press.

Democracy and Democratic Politics

Introduction

Democracy usually refers to a political system that advocates the kratos (Greek for ‘rule’) of the demos (the collectivity of ‘the people’) (Castoriadis 2007, p.122). The demos, which also stands for the political body of the active ‘people’ who mutually contract with each other, is bound to the decisions of the majority (Hobbes 1994, p.119; 1998, p.94 & p.117; 2006, p.103). However, democracy has seen a variety of different definitions and interpretations. For the ancients, democracy was almost synonymous with direct participation in decision making, rejecting tout court any form of expert rule and any delegation of powers to third parties (Castoriadis 1997). Modern democracies, however, function on the principle of representation in parliaments and councils, whose operation abides by the legislation of national constitutions (Zakaria 1997, p.41; Leach & Coxall 2011, p.4) and by jurisdictions that allow a body politic to exercise active surveillance over its representatives, discarding them if they betray their trust, or revoking the powers which they might have abused (Constant 1988, p.326). This essay aims to explore these two diametrically opposite definitions, in order to provide a clear understanding of democracy and democratic politics. In addition, by examining to what extent a state like the United Kingdom may be classified as democratic (taking into account the two different interpretations of democracy), it will expose the theoretical deficiencies of the modern conception. It will finally stress that democracy is better understood as a system of open public consultation and participation (according to the ancient model), acknowledging the modern Swiss paradigm of direct democracy through referendums and popular initiatives as a vital alternative.

The democracy of the ancients compared to that of the moderns

Benjamin Constant, in his speech at the Athénée Royal, addresses two types of liberty: one in Greek and Roman antiquity, and the other after the consolidation of the French Revolution. In this speech Constant (1988) champions modern democracy as a system that respects individual rights and personal freedoms, which, in his view, appear absent from both the ancient Athenian and the Roman model. Respect for individual rights is a fundamental principle of a modern democratic state. But at the same time, such a state bases its institutions upon a complex of liberal-republican values born during the French Revolution, such as the state of justice, the rule of law, the right of the masses freely to elect their own leaders and representatives, freedom of speech, free trade and private property; ideals considered among the highest, able to ensure social peace, stability and prosperity for every human society, ideals that “have remained with us ever since” (Graeber 2012).

Another important feature of modern democracies, however, is the principle of (majoritarian) consent, exercised through the process of electing a government. According to modern democratic theory, “elections give sovereignty or ultimate power to the citizens. It is through elections that the citizen participates in the political process and ultimately determines the personnel and policies of governments. Only a government which is elected by the people is a legitimate government” (Denver & Carman 2012, p.5). The elected governors and statesmen are also accountable to ‘the people’, and, according to John Locke, their power is limited by the people’s demands (Laslett 1988, p.109). If this public consent is neglected, the government should be immediately dissolved. Thus, “it is for the people only to decide whether or when their government trustees have acted contrary to their trust, or their legislative has been changed, and for the people as a whole to act as umpire in any dispute between the governors and a part of their body” (Laslett 1988, p.109). Democracy, therefore, exists to protect people from arbitrary powers, since, as Locke (1988, p.281) stated, “force without Right, upon Man’s Person, makes a State of War”.

Individual rights, consent and protection from arbitrary powers in modern democracies are safeguarded by national constitutions, which are “designed to prevent the accumulation of power and the abuse of office”. This is done “not by simply writing up a list of rights, but by constructing a system in which government will not violate those rights” (Zakaria 1997, p.41). In other words, “the people who in order to enjoy the liberty which suits them resort to the representative system, must exercise an active and constant surveillance over their representatives, the right to discard them if they betray their trust, and to revoke the powers which they might have abused” (Constant 1988, p.326). Constitutionalism also “seeks to protect an individual’s autonomy and dignity against coercion, whatever the source – state, church or society” (Zakaria 1997, p.25-26).

The concept of democracy according to the standards of the pre-modern, or even the ancient, world, however, differs significantly in many aspects. In ancient Greece, where, according to Castoriadis (1997, p.87), one identifies the first emergence of democracy, the idea of representation was unknown, and the idea of elections was considered an aristocratic principle, whereas among the moderns it is at the basis of their political systems (Castoriadis 1997, p.89-90). As Rousseau (2014, p.114) stressed, “the idea of representatives is modern: it comes to us from feudal Government, that iniquitous and absurd Government in which the human species is degraded, and the name of man dishonored”. Further, for the ancients, politics was synonymous with the public sphere, characterized by openness and voluntary participation in the common world of public life, in the making of decisions that determine the function and course of a community (Arendt 1961, p.149). According to the Athenian experience, “freedom itself needed a place where people could come together – the agora, the market-place, or the polis, the political space proper” (Arendt 1990, p.31). The polis, for both Castoriadis and Arendt, was also the self-governed body of active citizens who through open discussions could take upon themselves the creation of institutions “that regulate their own active participation in the running of society” (Straume 2012, p.3).

To summarize: there is, on one hand, the modern approach to democracy, based on the principle of consent and representation (id est, acting and deciding on behalf of the demos), and focusing on the institutions that restrain governments from abuse of office and protect minorities and civil freedoms. On the other hand, there is the definition provided by Castoriadis and Arendt, who have thoroughly elucidated Greek and Roman antiquity; this focuses on direct participation (rather than elections), on common appearance and, above all, on the ability to question laws, norms and institutions (Castoriadis 1997, p.87). Which of the two definitions could be considered more accurate is discussed in the next section, which also examines whether a modern state, such as the United Kingdom, can be classified as democratic. This process will reveal major deficiencies in the modern understanding of democracy.

Which democracy? The UK as a case study

“Britain, along with most states in the modern world, and many others elsewhere, claims to be a democracy” (Leach & Coxall 2011, p.4). Prima facie, one could argue that this statement is valid up to a point. In fact, a brief study of the political institutions of modern Britain shows that all the prerequisites that must be met for a state to be classified as democratic are followed by the British political establishment. There is equality before the law, respect for individual rights and restriction of the powers of the royal family, free elections and freedom of speech, all guaranteed by British legal documents, court judgments, treaties and constitutional conventions (Kavanagh 2000; Norton 2013; Wright 2013). However, “do elected politicians make the real decisions that affect the British people?” ask Leach and Coxall (2011, p.5-6). In other words, does the majoritarian consent and the voice of the demos predominate, or is it exercised only formally?

“More real power and influence may be exercised by individuals who are not part of the formal political process at all”, say Leach and Coxall (2011, p.5-6). Such individuals are “businessmen… bankers, or owners of newspapers, television companies and other media, some of whom may not even be British” (Leach & Coxall 2011, p.5-6). As Roy Greenslade (2011) has also argued, newspapers, despite their steady decline over the past few years, still have the capacity to influence the political process. Thus, on one hand the mass media (owned by powerful entrepreneurs) obstruct independent public commentary by shaping certain opinions (Leach & Coxall 2011, p.5), while on the other “the civil service, the City of London, or multi-national corporations exercise far more effective power and influence in the British political process than any single personality”, claim Leach and Coxall (2011, p.6).

At this point it is important to acknowledge the following well-known quote from Rousseau: “the English people think it is free; it is greatly mistaken, it is free only during the election of Members of Parliament; as soon as they are elected, it is enslaved, it is nothing” (Rousseau 2014, p.114). This quote comes from The Social Contract (1762), where Rousseau exposes the impossibility of the representative system, claiming that only through an ancient model of democracy could popular sovereignty, and therefore freedom, be achieved. Hence, since the English representative system cannot safeguard popular sovereignty, it cannot sustain freedom either, except on the day of the elections, when the public can exercise its vote. After the end of this process the English citizen again becomes subject to the decisions taken by their representatives. Further, since public consciousness in Britain is shaped by powerful media (whose role, as stated above, is contradictory), and most of the decisions of elected politicians are not as influential as those coming from non-accountable institutions, according to Leach and Coxall (2011), then it could arguably be said that the British people are not free even during parliamentary elections, since legislation and laws are influenced by non-political individuals.

Consequently, Britain might be considered a democratic state only formally. It would be more accurate to classify it as a liberal constitutional regime, since freedom of speech and respect for individual rights alone do not entail democracy. But Britain is not an isolated example of a representative democracy that appears insufficient in implementing the will of the people and safeguarding the consent of the majority. Castoriadis, who thoroughly observed the modern occidental world, came to the following conclusion: no western society, including Britain, should be called democratic. Instead, they are liberal oligarchies (Castoriadis 2007, p.122). In his words, modern western societies are “oligarchies since they are dominated by a specific stratum of people [and] liberal because that stratum consents a number of negative or defensive liberties to citizens” (Castoriadis 2007, p.126).

Since, however, direct democracy as Castoriadis and Arendt visualized it (according to the ancient model) could not easily be implemented under current circumstances, a study of the Swiss paradigm can inspire alternative ideas. The political system of Switzerland allows its citizens to participate broadly in decision making (Kriesi & Trechsel 2008; Huber 1968). This happens through referendums, and through open assemblies in many cantons (thus creating a public sphere). More precisely, over 30 referendums are held every year by popular initiative, limiting the power of the parliament, and parties and governments have often been forced to abandon their policies under the pressure of the popular vote (Kriesi & Trechsel 2008, p.34; Huber 1968, p.24-25). Through such procedures power partly remains in the hands of the citizens (as in the ancient types of democratic participation), and this power cannot be bypassed by representatives or by non-political institutions which may hijack the role of the elected representatives. The Swiss paradigm, therefore, being closer to the ancient model of democracy, seems preferable as a means of safeguarding majoritarian consent. It comes closer to the initial definition of democracy: “the power of the people”.

Conclusion

This essay has highlighted the significant differences between the modern and ancient definitions of democracy. By examining the United Kingdom, and the way political representation is easily taken over by powerful centres that invade the domain of politics, influencing important decisions and legislation, one understands the fragility of the modern model. Nonetheless, the UK should not be understood as a unique example of ineffective representation. Although individual rights, freedom of speech and protection from abuse of power are important prerequisites for a democratic state, the same state, in order to be classified as truly democratic, must also fulfil a range of other demands, such as effective participation and public consent, which appear marginalized not only in the UK but almost everywhere in the occidental world, with Switzerland being a notable exception. The theoretical observations conducted above, relying both on the work of Castoriadis and on the Swiss paradigm, not only confirm this reasoning, but at the same time provide vital alternatives showing how open participation (close to the ancient model) can safeguard majoritarian consent, preventing officials and political personnel from bypassing the will of the citizens.

Bibliography

Arendt, H., 1961. Between past and future: six exercises in political thought. London: Faber and Faber.

Arendt, H., 1990. On Revolution. 6th ed. London: Penguin Books.

Castoriadis, C., & Curtis, D. A. 1997. World in fragments: Writings on politics, society, psychoanalysis, and the imagination. Stanford, California: Stanford University Press.

Castoriadis, C., 2007. Figures of the Thinkable. Stanford: Stanford University Press.

Constant, B., 1988. Political Writings. Cambridge: Cambridge University Press.

Denver, D., & Carman, C., 2012. Elections and Voters in Britain. 3rd ed. Hampshire: Palgrave Macmillan.

Garnett, M., & Lynch, P., 2014. Exploring British Politics. London: Routledge.

Graeber, D., 2012. The movement as an end-in-itself? Platipus in New York, [online] 31st of January 2012, Available at: http://platypus1917.org/2012/01/31/interview-with-david-graeber/ [Accessed 17 September 2015].

Greenslade, R., 2011. How newspapers, despite decline, still influence the political process. The Guardian, [online] 21st of June 2011, Available at: http://www.theguardian.com/media/greenslade/2011/jun/21/national-newspapers-newspapers [Accessed 18 September 2015].

Hobbes, T., & Gaskin, J., C., A., 1994. The elements of law, natural and politic: Part I, Human nature, part II, De corpore politico; with Three lives. Oxford: Oxford University Press.

Hobbes, T., Tuck, R., & Silverthorne, M. 1998. On the citizen. Cambridge: Cambridge University Press.

Hobbes, T., 2006. Leviathan. New York: Dover Philosophical Classics.

Huber, H., 1968. How Switzerland is Governed. Switzerland: Schweizer Spiegel Verlag.

Kavanagh, D., 2000. British Politics: Continuities and Change. 4th ed. Oxford: Oxford University Press.

Kriesi, H., & Trechsel, A., 2008. The Politics of Switzerland. Cambridge: Cambridge University Press.

Leach, R., Coxall, B., & Robins, L., 2011. British Politics. 2nd ed. Hampshire: Palgrave Macmillan.

Locke, J., and Laslett, P., 1988. Two Treatises of Government. Student ed. Cambridge: Cambridge University Press.

Norton, P., 2013. Parliament in British Politics. 2nd ed. London: Palgrave Macmillan.

Rousseau, J., J., & Gourevitch, V., 2014. Rousseau: The Social Contract and other later Political Writings. Cambridge: Cambridge University Press.

Straume, S. I., 2012. A common world? Arendt, Castoriadis and political creation. [e-journal] 16(2). Available through: European Journal of Social Theory – Sage Articles http://est.sagepub.com/content/early/2012/03/26/1368431012440870 [Accessed 17 September 2015].

Wright, T., 2013. British Politics: A Very Short Introduction. 2nd ed. Oxford: Oxford University Press.

Zakaria, F., 1997. The Rise of Illiberal Democracy. Foreign Affairs, 76(6), pp.22-43.

Attack Ads in US Presidential Elections

Discuss to what extent attack ads are effective within presidential election campaigns in the U.S., with a focus on the 2012 election

Attack Ads in the 2012 U.S. Presidential Election

Attack ads were a major part of the 2012 presidential election campaign in the U.S. In fact, the Washington Post reports that of the $404 million spent on TV ads in favour of Barack Obama, 85% ($343.4 million) went to negative ads, while of the $492 million spent on TV ads in favour of Mitt Romney, 91% ($447.72 million) went to negative ads (Andrews, Keating, & Yourish, 2012). The attack ad strategies of the two candidates were very similar. In fact, the top ten U.S. states in which the candidates spent campaign funds on negative TV ads were exactly the same, with Florida, Virginia, and Ohio being the top three respectively (Andrews, Keating, & Yourish, 2012). Given that the vast majority of money spent on TV ads was spent on negative ads, it is reasonable to believe that there must be some efficacy to such ads. In this project, scholarly research on the effectiveness of attack ads in the 2012 U.S. presidential campaign is reviewed in order to answer the question: when, and in what circumstances, were attack ads effective during this election?

Interest Group Involvement and Attack Ads

Recent trends in media and campaign ad funding may help explain the high number of attack ads in the 2012 U.S. presidential campaign, as well as the campaign’s high ratio of negative-to-positive ads. While the percentage of negative ads coming directly from the candidates’ campaigns increased significantly from 2008 to 2012, the majority of the increase in negative ads is attributable to the rise in campaign ads that were not funded by the candidates’ campaigns (Fowler, 2012). In fact, 60% of presidential campaign ads in 2012 were funded by groups other than presidential campaign groups (Fowler, 2012). This is a huge change from 2008, in which 97% of ads were funded by presidential candidate campaigns (Fowler, 2012). The number of ads from interest groups increased by 1,100% from 2008 to 2012, while the number of TV ads from political parties increased from zero in 2008 to almost 10,000 in 2012 (Fowler, 2012). Moreover, in 2008, ads from presidential candidates were only 9% negative, while those from interest groups were 25% negative (Fowler, 2012). These numbers quickly changed by 2012, in which 53% of ads from the presidential candidates themselves were negative and 86% of those from interest groups were negative (Fowler, 2012). The increase in the involvement of special interest groups in advertisement campaigns only partially explains the increase in attack ads in 2012. The change in media and the rise of social media may partially explain both the increase in special interest group participation and the increase in attack ads.

Polarized Parties and Polarized Media

Several recent changes in news media may have affected not only the number of political attack ads, but also the efficacy of such ads. One major change in news media is that it now covers political ad campaigns much more than in the past. In fact, from 1960 to 2008, the percentage of political news articles and segments that covered political ads rose by over 500% (Geer, 2012). On one hand, the increased coverage of political ads may be because of the increase in attack ads. After all, attack ads tend to be more controversial and ‘news-worthy’ than positive ads. On the other hand, however, the increase in attack ads may be, in part, the result of an increase in media coverage of negative ads. Geer (2012) argues that “news media now cover negative ads so extensively that they have given candidates and their consultants extra incentive to produce and air them” (p. 423). There may or may not be a mutualistic relationship between attack ads and media coverage of political ads. Nevertheless, the clear increase in both may help to increase the efficacy of attack ads, given that such ads may receive more media coverage.

Even if the media’s willingness to cover negative political ads more than positive ads does, in fact, encourage more attack ads, this does not necessarily increase the efficacy of such ads. Geer (2012) holds that the increase in media coverage of attack ads does not mean that such coverage is in any way influential on voters; that is, it is not typically the goal of news organizations to influence voters. Thus, while an attack ad may receive more public attention because of the media, the increased attention may not necessarily be favourable or unfavourable to any candidate.

Another recent change in news media is its partisanship. Many U.S. news outlets are now partisan or are considered to be partisan by viewers. For example, just as Fox News is considered to be a conservative news organization that promotes Republican politicians over Democratic politicians, MSNBC is considered to be a liberal news organization (Jacobson, 2013). The polarization of the media may actually be the result of the polarization of the current two-party federal political system in the U.S. (Sides & Vavreck, 2014). In the last decade, the Democratic and Republican parties in the U.S. have moved further apart ideologically, resulting in substantial gridlock in Congress (Sides & Vavreck, 2014). Such disagreement and polarization may, on one hand, lead to an increase in attack ads, which may seem more effective when there is such a large ideological divide between the parties. On the other hand, such political polarization has likely contributed to the polarization of news outlets (Sides & Vavreck, 2014), which, in turn, further encourages attack ads. Even with the increase in polarized parties and media outlets, attack ads may not be an effective means of swaying voters towards or away from particular candidates.

Attack Ad Rationale and Efficacy

A meta-analysis of research studies on the effects of political attack ads reveals that attack ads tend to be more memorable and stimulate more knowledge about political campaigns than positive campaign ads (Lau, Sigelman, & Rovner, 2007). Despite these effects, campaign attack ads were not found to be effective at convincing individuals either to change their votes or to vote in an election (Lau, Sigelman, & Rovner, 2007). Moreover, the results of the meta-analysis revealed that attack ads have significant negative effects on individual perceptions of the political system, trust in government, and public mood (Lau, Sigelman, & Rovner, 2007).

A more recent meta-analysis conducted by Fridkin and Kenney (2011) found that in some cases campaign attack ads can be effective at lowering voter evaluations of targeted candidates. However, Fridkin and Kenney (2011) also found that in certain circumstances attack ads lower voter evaluations of the attacking candidates. For an attack ad to be effective, the researchers found, it must raise a relevant issue that is reinforced with fact, or must present the opposing candidate as being uncivil in some significant way. Otherwise, the attack ad may have no effect, or even a negative effect, on voters. Additionally, Fridkin and Kenney (2011) found that the effects of attack ads on voter evaluations of candidates tend to be very small.

Social Media and Attack Ads

The rise of social media has dramatically changed the political advertising landscape. The 2012 presidential campaign featured another strong social media showing by President Obama, who had outspent every other candidate on social media advertising in his successful 2008 presidential run (West, 2013). Social media allowed Obama to reach key demographics much more effectively than general television commercials allowed (West, 2013). Social media allows candidates to circulate a greater number of messages and to aim specific messages at target audiences effectively (West, 2013). This is extremely important at a time in which there are so many issues of disagreement between the two major U.S. political parties and in which transparency is highly valued (West, 2013). Social media outlets serve as a significant platform for all political ads and their content, altering the ways in which we tend to think about politics and the media.

Another important aspect of social media and attack ads is that social media acts as a platform for public discussion of attack ads. Just as the news media tends to cover attack ads more than positive political ads, members of social media sites tend to discuss attack ads more openly than positive political ads (Hong & Nadler, 2012). Thus, the rise of social media may have further encouraged the use of attack ads during the 2012 U.S. presidential election. Even so, as with news media, there is no significant evidence that the increased coverage generated by attack ads alters voter behaviour or attitudes (Hong & Nadler, 2012). As a result, the effectiveness of attack ads cannot be confirmed.

A Deeper Look into the 2012 Election and Its Attack Ads

The 2012 presidential election featured Mitt Romney, who spent significantly more on attack ads than Barack Obama (Andrews, Keating, & Yourish, 2012). Moreover, a greater proportion of Romney's television ads were attack ads (Andrews, Keating, & Yourish, 2012). Nevertheless, Obama won both the election and the popular vote. The results of the 2012 presidential election, however, do not show that attack ads are ineffective. Incumbent candidates are more likely to win elections, including presidential elections, in the U.S. than non-incumbents (Sides & Vavreck, 2014). Thus, the efficacy of the attack ads used by either candidate cannot be determined from the outcome of the election alone.

Of the six most memorable political ads of the 2012 U.S. presidential election, West (2013) argues, five were attack ads. The first is an attack ad from Obama about Romney's Swiss bank account. This ad may have been effective with moderate voters because it singled Romney out as having a major interest in big business rather than in improving the lot of the middle class (West, 2013). Additionally, the ad had high relevance to a real issue, which meets the Fridkin and Kenney (2011) criteria for an ad that may be effective at reducing a candidate's favourability. The second ad is from Romney and targeted Obama's failure to bring unemployment down to acceptable levels (West, 2013). This ad targeted a real issue while offering a positive counterpoint: that Romney has the business experience to create jobs as President. The third attack ad is also from Romney and claimed that Obama's recent tax plan would raise taxes on the middle class (West, 2013). It can be viewed as a direct rebuttal to Obama's attack ad and consequently addresses a real and relevant topic.

The fourth memorable attack ad of the campaign came from American Crossroads, a super political action committee (Super PAC), and targeted Obama's celebrity status (West, 2013). This attack fails to address any real issue and thus, under the Fridkin and Kenney (2011) criteria, should not be expected to influence voter favourability toward Obama. Finally, the Priorities USA Super PAC targeted Romney's record at Bain Capital, again suggesting that Romney has the interests of the upper class rather than the middle class in mind. This attack ad addresses a highly relevant issue.

For the most part, the attack ads of the 2012 U.S. presidential election were likely to be somewhat effective in decreasing voter favourability. While there is no strong evidence that attack ads actually sway voter decisions or voter turnout (Lau, Sigelman, & Rovner, 2007), there is evidence that voter favourability toward a candidate can be decreased by political attack ads when such ads address a relevant issue (Fridkin & Kenney, 2011). Moreover, attack ads tend to generate considerably more media attention than positive political ads. While this may seem, prima facie, to benefit candidates who put out attack ads, there is no evidence that such media coverage influences voter behaviour. Thus, the logic behind one of the primary rationales for attack ads may be flawed. Nevertheless, the 2012 U.S. presidential election featured a number of attack ads, many of which were on-topic and relevant, while others were off-topic and irrelevant. The actual effectiveness of these attack ads is not currently known, though at the very least they likely increased media coverage of the targeted candidates.

References

Andrews, W., Keating, D., & Yourish, K. (2012) Mad Money: TV Ads in the 2012 Presidential Campaign. The Washington Post. Accessed on 15 October 2015 from: http://www.washingtonpost.com/wp-srv/special/politics/track-presidential-campaign-ads-2012/

Fridkin, K. L., & Kenney, P. (2011) Variability in Citizens’ Reactions to Different Types of Negative Campaigns. American Journal of Political Science, 55(2), pp.307-325.

Fowler, E. F. (2012) Presidential Ads 70 Percent Negative in 2012, Up from 9 Percent in 2008. Wesleyan Media Project, 2 May.

Geer, J. G. (2012) The News Media and the Rise of Negativity in Presidential Campaigns. PS: Political Science & Politics, 45(3), pp.422-427.

Hong, S., & Nadler, D. (2012) Which candidates do the public discuss online in an election campaign?: The use of social media by 2012 presidential candidates and its impact on candidate salience. Government Information Quarterly, 29(4), pp.455-461.

Jacobson, G. C. (2013) How the Economy and Partisanship Shaped the 2012 Presidential and Congressional Elections. Political Science Quarterly, 128(1), pp.1-38.

Lau, R. R., Sigelman, L., & Rovner, I. B. (2007) The Effects of Negative Political Campaigns: A Meta-Analytic Reassessment. Journal of Politics, 69(4), pp.1176-1209.

Sides, J., & Vavreck, L. (2014) The Gamble: Choice and Chance in the 2012 Presidential Election. Princeton University Press.

West, D. M. (2013) Air Wars: Television Advertising and Social Media in Election Campaigns, 1952-2012. Sage.

Oxidative Stress in Human Brain Ageing

This work was produced by one of our professional writers as a learning aid to help you with your studies

The human brain is the main source of nerve function in the body. It is the epicentre of the nervous system and controls all of the main neural functions of the human body (Lewis et al, 1998, 479-483). When assessing brain function, many different areas may be addressed, but one main area of concern is the aging of the brain itself. As the brain ages, the functions it performs break down and degrade. The nerves become slower and the motor functions less precise. Short-term and long-term memory are negatively affected, and overall brain function declines.

Many people attribute all of these detrimental effects to old age and poor health, when in reality oxidative stress and free radicals are the main causes of loss of brain function. Throughout this paper, brain function patterns will be examined, followed by some common reasons for the degradation of brain function. Oxidative stress and its effects on the human brain will then be considered, along with a few of the common diseases and health problems associated with brain aging and loss of brain function.

The Brain: an Overview

The human brain is a mass of nerve tissue, synaptic gaps, and nerves (Lewis et al, 1998, 479-483). All of these parts work together to form the human brain. The brain is the main centre of nerve function in the body. The nervous system is controlled by the brain itself, which works as a kind of packaging centre for the messages delivered to each nerve cell by the body. However, the brain would not function properly were it not for the job performed by each cell and its constituent parts. A neuron is made up of the nerve cell body itself, the synapse, and the dendrites. Each dendrite is connected to the next by a small opening that allows the passage of chemicals, such as potassium and sodium ions, required for proper neural functioning. The chemicals move along the dendritic pathway and form a gradient at the synaptic gap. The gradient then allows the chemical to move across the gap, which causes the nerve to deliver its message (usually a message for a muscle to contract). If a gradient does not exist, the message is not sent and the function is not performed properly. If a problem arises in the nervous system, it is usually because the chemical gradient at a particular synaptic gap is incorrect, producing a muscle seizure or some other undesirable reaction.

The main nerve cord of the body, known as the spinal column, is made up of layer upon layer of nerve cells. This mass of nerves serves as the pathway for all of the major neural messages of the body. It allows the chemical messages packaged by the brain to be transported to various parts of the body, and vice versa. All of the neural messages of the human body are delivered in a matter of seconds, which is why there does not seem to be a long delay between a particular stimulus and the consequent reaction. Branching out from the spinal cord are the various nervous pathways of the body. There are nerves that stretch all the way to the fingertips and toes, but they all return to the spinal cord to deliver various stimulus messages. Each of these nervous pathways is also made up of layers of nerve cells. All of the nerve cells of the body work together to form messages that are interpreted by the brain. The brain decides what priority to assign each task and then acts accordingly.

Brain Function

There are essentially three main functions of the brain: memory, interpretation of data, and motor control. Not only is the brain a packaging and interpretation centre for the neural messages of the human body, it is also a storage bank for information. The brain stores information from everyday life using chemical reactions in the cerebrum to create memories. This information is then available for the rest of the brain's life, regardless of whether a person can actually recall it for examination.

The brain serves its main purpose of data interpretation by deciphering the messages and stimulus information that the human body encounters every day. Every piece of information the body comes into contact with is sent through the brain, which either stores it, triggers a reaction to the stimulus, or disregards it. This interpretation process is very exact, yet extremely fast. The entire process seems instantaneous, from the introduction of the information all the way to the interpretation or the reaction to the stimulus.

Finally, the brain controls all of the muscles of the body and consequently all motor control. Every movement, voluntary or involuntary, is controlled by the brain. Each muscle action is coordinated and timed so that the abducting muscles work with the adducting muscles to produce useful movement. The brain coordinates each twitch of every muscle in the musculature so that no energy is wasted in useless movement. Because the body is constantly in a delicate balance, the brain must be even more precise than the world's most sophisticated computer when maintaining the body's homeostasis. The body has many involuntary muscle movements that are necessary for life but need not be consciously initiated each time, such as the contraction and relaxation of the diaphragm to allow respiration, and the beating of the heart. Other muscles and functions are also controlled by the brain, such as the movements of walking, swimming, or running; the contraction of the bladder and other voluntary, yet largely unconsidered, muscle contractions are controlled by the brain as well.

Stressors of the Brain

In every cell of the body there occur what are known as redox reactions (OXIS Research, 2003, 2). A redox reaction is an oxidation-reduction chemical reaction in which one compound is oxidized (loses electrons) and another compound is reduced (gains electrons) (Zumdahl, 1991, 216-220). Redox reactions are essential for survival and for the proper function of various organ systems in the body.
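
As a simple textbook illustration (a standard chemistry example, not drawn from the cited sources):

$$ \mathrm{Zn} + \mathrm{Cu^{2+}} \longrightarrow \mathrm{Zn^{2+}} + \mathrm{Cu} $$

Here zinc is oxidized, losing two electrons, while the copper ion is reduced, gaining them; every redox reaction pairs an oxidation with a reduction in this way.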

While redox reactions may be essential for survival, they can produce what are known as free radicals (OXIS Research, 2003, 2). A free radical is any chemical species capable of independent existence in its own right, without needing to be bound to another chemical (OXIS Research, 2003, 2). Free radicals contain unpaired electrons, which make them very unstable (OXIS Research, 2003, 2). The unpaired electrons tend to pair with any other available electrons to achieve a stable outer electron shell (usually eight electrons). The unstable free radicals are therefore always trying to pair up with any organic chemicals they come into contact with. Free radicals can be increased in the body by exercise and environmental stresses. They tend to be stored in the fat cells of the body and are released when fat is burned. The free radicals are then spread throughout the body, where they can react with other organic substrates (OXIS Research, 2003, 1). These organic substrates include DNA and various proteins (OXIS Research, 2003, 1). The oxidation of these molecules can damage them and cause a great number of diseases (OXIS Research, 2003, 1).

Several organ systems are predisposed to free radical damage, including the circulatory system, the pulmonary system, the eye, the reproductive system, and the brain (OXIS Research, 2003, 2). While it is true that every organ system has an oxidative-stress Achilles' heel, the brain is especially susceptible to free radical damage (OXIS Research, 2003, 2). Oxidative stress is the term used for a build-up of ROS chemicals (OXIS Research, 2003, 2). ROS stands for reactive oxygen species and refers to the many chemical derivatives of oxygen (OXIS Research, 2003, 2). The build-up of these chemicals can cause an imbalance of oxidant activity in a system such as the brain and can lead to several negative health effects, including premature aging of the system and any number of diseases (OXIS Research, 2003, 2).

The oxidative reactions that take place in the body, and especially in the brain, are regulated by a system known as the antioxidant defence system, or ADS for short (OXIS Research, 2003, 2). This system is a conglomerate of many different approaches to keeping the production and accumulation of free radicals to a minimum in the body. The ADS contains antioxidant chemicals as well as a number of enzymes that not only limit and control the overall rate of oxidative reactions but actually target damaged molecules for replacement or repair (OXIS Research, 2003, 2). The antioxidants themselves are either synthesized internally or ingested via various fruits, vegetables, and grains (OXIS Research, 2003, 2). Antioxidants fall into two categories: scavenger antioxidants and prevention antioxidants (OXIS Research, 2003, 2). Scavenger antioxidants remove ROS molecules from the body and include both small antioxidants (such as vitamin C and glutathione) and large antioxidants that must be synthesized by cells before they can protect the organ systems (OXIS Research, 2003, 2). Prevention antioxidants, such as ferritin and myoglobin, are designed to prevent the formation of new oxidants and free radicals (OXIS Research, 2003, 2). They work by binding to free radicals to protect the proteins that are essential to the organ system; this group includes chemicals such as metallothionein, albumin, and transferrin (OXIS Research, 2003, 2).

Free radicals are thus at least a necessary evil in the body when it comes to the completion of certain processes. For the proper functioning of the various life systems of the human body, it is necessary for the by-products of those processes (generally free radicals) to be present in the system. However, this does not mean that free radicals are safe or desirable. Most of the time, the body's removal systems (the ADS and others) will take care of an overabundance of free radicals; at times, however, even the ADS can be overpowered by a great influx of free radicals. This influx can be due to energy production in the mitochondria or some other natural process, but in most cases it is caused by environmental stresses or by proximity to various industrial processes. It is a great concern of researchers today that more free radicals are being released into the environment by industrial activities and other forms of pollution. These free radicals bind easily to food products produced by humans and have detrimental health effects on both animals and humans. If more free radicals are present in the environment than in the past, there is a high risk of ingesting enough oxidants to produce an imbalance that the ADS cannot handle. This would then result in a large epidemic of environmentally caused free radical damage and disease.

Degradation and the Effects on Brain Function

Given the importance of brain function to the body, it is clearly imperative that the brain be kept in good working order. If the brain is allowed to degrade to the point that motor functions and memory are affected, there could be long-term health effects that cause problems beyond brain functioning alone. If everyday muscular and other physiological functions begin to become harder to perform, more serious side effects may be on the horizon. Certain diseases are caused by brain degradation or are themselves causal factors in brain aging and degradation. One such disease is Alzheimer's disease.

Alzheimer's disease is a brain disorder with many symptoms; it causes the loss of memory, of the ability to learn, and of the ability to carry out everyday activities. Towards the end of the disease's progression, Alzheimer's can cause personality changes and even hallucinations and paranoia (Alzheimer's Association, 2005, 2). Alzheimer's is a form of dementia: a category of diseases that cause the systematic destruction of brain cells and lead to a decline in brain function and quality (Alzheimer's Association, 2005, 2). It has many stages and eventually leads to the complete breakdown of the brain, to the point of death (Alzheimer's Association, 2005, 2). A person who has a dementia disease will eventually need full-time care because of the loss of a large portion of brain function (Alzheimer's Association, 2005, 2). While Alzheimer's and dementia are not the only neural disorders that have a progressive effect on brain function, they are two of the main problems faced in countries such as the United States and England.

Researchers have not yet identified a definitive cause of Alzheimer's disease; however, the field has made great strides in the past few years. At present, the disease is linked to a genetic predisposition and to generally poor aging habits (Alzheimer's Association, 2005, 2). There is still some value, though, in the school of thought held by some doctors that diseases like Alzheimer's, dementia, and Parkinson's disease are due not only to genetic factors but also to environmental stresses, including the introduction of free radicals into the body. Free radicals can greatly disrupt brain function, mainly because the neurotransmitters and neurons present in the brain are very delicate and easily destroyed. Free radicals can bind to the proteins used to transmit messages and perform repairs in brain tissue, preventing those proteins from performing their duties and causing a weakened brain state. Proteins are highly specific in their binding properties and will only function correctly if they bind with the correct substrate (Staines et al, 1993, 130). Therefore, if the active site of a protein is disrupted by a free radical, that protein is completely changed and will not perform as intended.

Brain Aging: An Uphill Battle

Many diseases are linked to free radicals and other oxidants; however, another factor needs to be examined to get the full picture of brain function and memory. That factor is, of course, brain aging. It is what some call an unfortunate fact of life: we all grow older. From birth all the way to death, the body is in a constant state of degradation and repair (Ebbing and Gammon, 2002, 809). This is true of every part of the body, including the brain, and it carries great consequences for overall brain function and health. The brain is a delicate organ that stores the information that runs the rest of the body's functions. If it is allowed to age past a certain point without being kept in good health, bodily functions and memory can be detrimentally affected. As the brain ages, it becomes slightly more sluggish and tends to lose its edge. Because of the complexity of the brain, aging tends to have a harsh effect on its ability to function correctly. The aging of the brain is often a major factor in the development of dementia and other diseases of the neural system; the older the brain, the less well it functions. At present there is no particular treatment or cure for dementia. The best that can be done is to make patients comfortable and to make their everyday lives as easy as possible. It is the hope of researchers that, by forging new paths in the field of neural aging, a cure will be found for diseases such as dementia and Alzheimer's.

For years it was common to believe that brain and neural diseases were caused either by environmental stresses or by brain aging. Today, however, the tide is swinging towards the middle rather than either extreme. Researchers are starting to realize that the environment and brain aging together could be factors in the development of certain diseases and disorders. Not only can environmental factors and the age of the brain work together to stress the brain, but some environmental factors can actually cause the brain to age prematurely. Premature aging means that the brain is aging faster than it would naturally; in other words, a brain that should only be five years old would look and function as if it were ten years old or older. The implications of this type of aging are obvious. As the brain ages, neurons and neurotransmitters die or function less well than when the brain was younger, leading to memory loss and slower reaction times.

Brain aging is caused by many factors, including environmental factors, industrial processes, and of course the passage of time. Two of these can be regulated: environmental factors and industrial processes. By regulating certain chemicals and industrial processes, it is possible to cut down on the amount of premature brain aging that occurs (Sharon, 1998, 167). Certain industrial processes, such as the metallurgical processes used in alloy formation and in welding, are known causes of brain degradation and causal factors in diseases such as Parkinson's and manganism (Landis and Yu, 1999, 213-217). Certain chemicals present in these processes are able to penetrate the blood-brain barrier and contact the brain tissue directly. This can lead to tumours and neuron death, which in turn cause cognitive problems as well as problems with bodily function. The only good way to prevent such contamination is to avoid contact with these chemicals entirely. Researchers know this, and that is why environmental laws are being put into place to prevent the release of these chemicals.

Aging of the brain occurs whether or not there are external environmental stressors in a person's surroundings; it occurs throughout the entire lifespan of the organism. Earlier in history it was believed that the aging of the brain caused the degradation of neurons no matter the circumstances; the common belief today, however, is that as long as a few guidelines concerning lifestyle are followed, it is possible for the neurons of the brain to stay healthy and fully functional until death. Brain aging is defined as the breakdown of the brain itself: the grooves in the brain tissue grow wider and the weight of the brain material decreases dramatically. New studies are showing that the plaques and neural tangles previously believed to be the culprits of Alzheimer's disease may not be the main disease-causing factors after all (Brady et al, 2000, 864). A growing school of thought holds that dementia-type diseases actually result from complex chemical reactions in the brain (Brady et al, 2000, 864). This information is very important to neural researchers because it can completely change the focus of their research and, hopefully, eventually lead to a cure for dementia and other diseases of this type.

Conclusions

It is apparent that the aging of the brain is a major concern, especially to researchers studying the effects of specific kinds of neural disease. These diseases could have a myriad of causes, but brain aging may be a contributing factor in several or all of them. The overall aging of the brain is coming to the forefront of modern medicine precisely because so little is known about it. It is becoming evident that what were thought to be facts about brain aging were little more than educated guesses. Now, however, technology is available that allows the actual study of the brain and its functions, giving a better picture of the breakdown of the organ. Once a timeline is established showing the breakdown of a healthy brain, it will be possible to measure quantitatively the degradation of a diseased brain. While this may not seem very important, it is actually very useful information: it can be used to explain to patients what they should expect to experience at specific stages of their disease and can help prepare them for what is to come.

Brain aging information can also be of use to the doctors administering treatment, inasmuch as it would allow the doctor to determine what stage the aging had reached, and therefore what type of treatment to administer.

Oxidative brain stress is a quite different matter from brain aging as far as research is concerned. While more is known about free radicals and their effects on the brain than about the aging process, it is important to understand why research of this kind must continue. The world is constantly changing, and the chemicals and pollutants that are released are in a continuous state of development. It is therefore necessary to study continually the physiological and biological effects of each new chemical that is developed and put on the market. By performing this kind of research early in the development process, it is possible to determine whether a new chemical has any harmful effects. Such preliminary research could lead to less disease and fewer health problems later on.

Overall, the study of oxidative stress and brain aging is a newly emerging field charged with answering age-old questions about brain and neural health. It is important to continue research in both areas so that advancements in modern medicine can be pursued. Society owes a great debt to the researchers who have spent, and will spend, their entire lives studying the effects of brain aging and oxidative stress on the functioning of the brain. Hopefully, in the near future, great advancements will be made in the field of neural medicine, allowing better and more effective treatment of diseases of the nervous system.

Plant Physiology Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

In the study of general biology, a number of fields such as plant anatomy, plant taxonomy, plant physiology, comparative ecosystems, comparative animal physiology, neurophysiology, physiological ecology, endocrinology, and principles of electronic instrumentation may be topics of interest.

In this paper, the writer will discuss plant physiology. The paper defines plant physiology in its different dimensions, notes related fields of study that complement or overlap the topic, and explains the branches (or specific study areas) of the topic, giving examples of what is studied within each subtopic. A general conclusion is given at the end of the presentation.

Definition of plant physiology

Physiology has been defined as 'the science of the normal functions and phenomena of living things'. The current understanding of physiology stems from work in Europe during the Renaissance, with its interest in experimentation on animals. William Harvey (1628), physician to Charles I, described the working of the heart in a sparkling analysis in which observation led to experimental proof of function, an approach that established the importance of physiological analysis as 'physiology'.

Physiology is based on testing hypotheses about the functioning of living phenomena. Harvey's work also emphasized the natural relation between physiology and anatomy (the structure of living things), which makes the former easier to understand. The successive meanings of 'physiology' are illustrated by instances of its use.

Harris (1704), in the Lexicon Technicum, describes physiology as the part of medicine that 'teaches the constitution of the body so far as it is sound, or in its Natural State; and endeavors to find Reasons for its Functions and Operations, by the Help of Anatomy and Natural Philosophy'. Another definition, by Huxley 150 years later, is clearer and closer to the current one: 'whereas that part of biological science which deals with form and structure is called Morphology; that which concerns itself with function is Physiology'. This makes a distinction between structure and function in living organisms.

From the foregoing, plant physiology can be described as the aspect of study that deals with the functioning of plants, both microscopically and macroscopically. It seeks to understand the functioning of plant life within the plant itself and in relation to its immediate environment.

The field of plant physiology relates closely to cell morphology, which studies the development, formation and structures of different species of plant; to ecology, which studies the plant's habitat; to biochemistry, which encompasses the biochemical activities of cells; and to the study of molecular processes inside the cell. All these fields interact or overlap in the study of plant physiology.

The general field of plant physiology involves the study of the physical and chemical processes that constitute life, and elucidates how they occur in plants. The study operates at many levels, encompassing various scales of time and size. At the smallest scale are molecular interactions, including the photosynthetic reactions in the leaves and the diffusion of water in cells.

Diffusion also occurs for minerals and nutrients within the plant. At the larger scale are the concepts of plant development, dormancy, seasonality and reproduction. Other major disciplines of plant physiology include phytopathology, which studies diseases in plants, and the study of the biochemistry of plants, also called phytochemistry. Plant physiology as a unit is divided into many areas of research.

Elementary study of plant physiology mainly concentrates on stomatal function, circadian rhythms, transpiration, respiration, environmental stress physiology, hormone function, photosynthesis, tropisms, photoperiodism, nastic movements, seed germination, dormancy, photomorphogenesis and plant nutrition.

Branches of Plant Physiology

The subtopics of plant physiology can safely be taken as phytochemistry; biological and chemical processes; the interaction of cells, tissues and organs within the plant; the control and regulation of internal functions (anatomy); and the response to external conditions and environmental changes (environmental physiology). In the following sections, these branches of physiology are discussed in detail.

Phytochemistry refers to the chemical actions that take place within and outside the cell. Plants are considered unique in their chemical reactions since, as opposed to animals and other organisms, they must produce the chemical compounds used within the same plant. These chemicals take the form of pigments or enzymes used directly within the plant.

The functions of these chemicals are various. They may be used for defence against external interference from such quarters as herbivores (primary consumers) and pathogens. This mechanism is advanced in plants because they are immobile; plants achieve it through the production of tissue toxins and foul smells.

Toxicity in plants is associated with plant alkaloids, which have pharmacological effects on animals. The Christmas poinsettia, if eaten by dogs, poisons them. Another plant, wolf's bane (of the genus Aconitum; Aconitum carmichaelii), contains in its fresh form the toxic alkaloid aconite, which is known to kill wolves and causes tingling, nausea, numbness of the tongue or vomiting if tasted. Some other plants have secretions or chemical compounds that make them less digestible to animals.

Plants also produce toxins to repel invasion from other plants, for instance when competing for the same nutrients. They produce repellent secretions, thereby maintaining control over the contested resources. The foul smell exhibited by some plants helps to keep herbivores away. The rafflesia (Rafflesia arnoldii) of the division Magnoliophyta has flowers with the distinctive smell of rotting animal flesh, which keeps away herbivores that do not eat flesh.

Toxins or smells can also be produced to guard against the encroachment of disease-causing organisms, or to guard the plant from the effects of drought or unfavourable weather. Enzyme and hormone secretion underlies behaviours such as the preparation of seeds for dormancy, the shedding of leaves by deciduous trees in preparation for dry conditions, and withering in some plants; all of these are driven by chemical reactions within the plant.

Innate immune systems such as those of plants are known to repel pathogenic invasions. In one experiment, a small protein secreted by strains of a fungus allowed it to overcome two of a tomato's disease-resistance genes. A third resistance gene, however, would target this suppressor protein, making the tomato plant fully immune to any fungal strain that produced the protein. With the right combination of resistance genes, tomatoes can overcome fungal invasion despite the fungus's molecular tricks.

The attraction of possible pollinators, for the furtherance of the plant species, also employs the chemical reactions of plants. Some plants, during their reproductive cycles, produce very pleasant smells to attract insects, which then help in pollination. An example is the night rose, or Aloysia triphylla, whose scent attracts insects that symbiotically gain nectar and help to pollinate its flowers.

Phytochemistry involves understanding the metabolic actions of compounds within plant cells. Studies of these metabolic compounds have succeeded through the use of extraction techniques, isolation processes, structural elucidation and chromatography. Modern approaches are numerous and thus expand the field for further study.

Plant cells differ greatly from the cells of other organisms, and this necessitates different behaviour in order to perform their productive actions. Plant cells have rigid cell walls that restrict their shape, whereas animal cells are bounded only by cell membranes. This is primarily responsible for plants' immobility and limited flexibility. The internal cell structures vary according to the specializations required for the plant to adapt to its way of life.

For example, the cell vacuole is responsible for the storage of food material, for intracellular digestion, and for the storage and discharge of waste. It also protects the cell and is fundamental to processes such as the regulation of the cell's turgor pressure in response to fluid uptake.

The chloroplast is responsible for photosynthesis within the cell and produces the sugars of photosynthesis; it is the manufacturer of food for the other organelles. Ribosomes use genetic instructions from ribonucleic acid (RNA) to link amino acids into long polypeptide chains, forming proteins. These plant proteins are very important to plant structures.

Golgi complexes store, package and distribute the proteins received from the endoplasmic reticulum. The smooth endoplasmic reticulum synthesizes lipids, while the rough endoplasmic reticulum synthesizes proteins.

The plastid is found in the cytoplasm and possesses a double membrane; its contents depend on the environmental conditions of the parent plant and the plant's adjustment to those conditions. Plastids store molecules such as the pigments that give flowers and fruits their characteristic colours during plant reproduction, and they also store photosynthetic products.

The plant cell contains chlorophyll, the pigment responsible for the manufacture of the plant's own food. Cell physiology is such that the adaptations of the internal organelles are commensurate with the plant's ability to live in a given environment; cell structure thus plays a major role in plant adaptation.

Plant cells are the smallest units in the system of plant life. Cells make up tissues that specialize in given plant functions, and tissues coordinate to form organs within the plant that respond to environmental needs as required.

The specialization of different types of plant cell, such as parenchyma cells, collenchyma cells and sclerenchyma cells, makes it possible for the plant to coordinate its functions in its habitat. Parenchyma cells are divided into storage cells, which store cell material; photosynthetic chlorenchyma cells, which are adapted to photosynthesis; and transfer cells, which are responsible for the phloem-loading function of moving manufactured food within the plant. These cells have thin cell walls to mediate, or simply allow, the passage of material from cell to cell.

Collenchyma cells also have only a thin cell wall; they mature from the meristems of the plant tissues. Sclerenchyma cells comprise strong sclereids and fibres made rigid by lignin, which provide mechanical support to the plant. This rigidity has also proved valuable in discouraging herbivory.

The tissue systems include the vascular tissues (the xylem and the phloem) as well as the epidermal cells on the outside of the plant. The xylem is made up of cells specialized in the uptake of water and minerals by active transport. The phloem is composed mostly of transfer cells. The epidermal cells are rigid and cuticularized to prevent the loss of fluid and to protect the weaker inner cells.

All these systems perform different functions within the plant, both chemical and physical. For example, the roots and rhizoids hold the plant in position for the advantageous production of its food. In land plants the roots have penetrative power, while aquatic plants have roots that help buoy them in place for mineral acquisition.

The leaves are adapted to trap the sunlight that drives the photosynthetic production of food, and the leaf structure is adapted to the habitat of the plant. The positioning of the stomata in the leaves, for example, regulates the flow of gases. The specialized guard cells that open and close the stomata show just how well specialization fits the functionality of cells, tissues and plant organs.

Plants also possess transport systems that rely on physical processes in the absorption and use of nutrients, air and water, both within and outside the plant. The absorption of minerals depends on a combination of diffusion and active transport, regulated by the plant in its environment; the roots are developed to execute this process successfully.

Further up the plant, the uptake of minerals and water relies on a developed xylem system that uses osmosis, diffusion and even active transport in tissues specially adapted to the task. The phloem system executes the transport of manufactured food from the leaves and stems to other parts of the plant body. The vascular tissues are an indication of how these forms of interaction work for the benefit of the plant.

Plants have internally developed mechanisms that coordinate responses. These mechanisms are built on hormonal systems that are instrumental in the development and maturation of the plant. Examples of hormonal coordination in plants include reproduction in flowering plants, the ripening of fruits and their subsequent release from the mother plant, and the loss of leaves in response to impending drought or a shortage of water, to mention but a few.

The ripening of fruit is associated with its Brix level, a measure of dissolved sugar; the amount in the fruit indicates its degree of ripening. A gas called ethylene is created from methionine, an amino acid. Ethylene increases the intracellular levels of certain enzymes.

Amylases hydrolyse starch into sugars, while pectinases hydrolyse the pectin responsible for the firmness of the fruit; at the same time the green pigment breaks down, and the colour turns orange, red or yellow depending on the plant's pigments. The process of ripening is related to the degree of pollination, such that properly pollinated fruits ripen at maturity, while those not properly pollinated may be shed before maturity.

Abscission in plants is associated with the hormone ethylene; it is now believed that ethylene, and not abscisic acid as was previously thought, stimulates the process. Abscission takes several forms: the falling leaves of deciduous trees, which conserve water; the shedding of branches for reproductive purposes; abscission after fertilization; fruit drop to conserve resources; and the dropping of damaged leaves to conserve water and maintain photosynthetic efficiency.

Paradoxically, ecological physiology is on the one hand a new field of learning in plant ecology, while on the other it is one of the oldest. Environmental physiology is the name of the sub-discipline favoured by botanical physiologists, though it goes by other names in the applied sciences: it is more or less synonymous with ecophysiology, crop ecology, agronomy and horticulture. The discipline overlaps with ecology, since plants respond to their surroundings.

Ecological physiologists scrutinize plant responses to physical factors such as radiation (visible light and ultraviolet radiation from the sun), fire, wind and temperature. Of particular interest are water relations and the stress of water deficiency or inundation, the exchange of gases with the ambient air, and the cycling of nutrients such as nitrogen and carbon.

Ecological physiologists also analyse and examine plant responses to biotic factors. This includes not only unfavourable relations, such as competition, parasitism, disease and herbivory, but also favourable interactions, such as pollination, symbiosis and mutualism.

Plants react to environmental changes in remarkable ways, comparable to the homeostatic processes seen in animals. Environmental changes may affect plants either positively or negatively, and plants have developed systems for responding appropriately. It is, however, important to note that environmental variation may sometimes be too extreme for plants to withstand, leading to their demise or possible extinction. This is best understood through topics such as evolution or, more specifically, ecological succession.

Plants respond to stresses from loss of water in their habitats; since they are usually stationary, the water has to find the plant and not vice versa. An example is the wilting associated with non-woody plants, or the non-woody parts of woody plants. Wilting is a loss of turgidity in the non-lignified cells of the plant, such that the plant loses rigidity; it results from inadequate water. The process modifies the effective photosynthetic area of the leaf, changing the angle of the leaf to the sun so that more erect (erectophile) leaf postures are favoured.

This condition may result from drought, reduced soil moisture, increased salinity, saturated soils, or a blockage of the plant's vascular tissues by bacteria or fungi, a clogging that deprives the leaves of water.

Changes in the composition of the air are another determinant of a plant's reaction to its environment. The greatest effect comes from the amount of water vapour in the air: the humidity of the air influences the rate of photosynthesis. Wind also plays a major role in influencing this rate. Some substances, moreover, are toxic to photosynthetic plants. All of these factors trigger varied responses from plants.

Plants respond both to directional stimuli, such as gravity or sunlight, and to non-directional stimuli. A response to a directional stimulus is called a tropism; a response to a non-directional stimulus, such as humidity or temperature, is called a nastic movement.

Tropisms in plants result from differential cell growth, in which the cells on one side of the plant become longer than those on the other side, causing the plant to bend toward the side with less growth. The most common tropism is phototropism, a bending toward the side the light comes from. This allows the plant to maximize its absorption of much-needed light, or to receive the heat associated with the light source.

Geotropism is the response of a plant's roots to the gravitational pull that acts on all matter; growth is directed downward toward the earth, following the direction of gravity. Tropisms are a direct result of hormonal communication within the plant.

Nastic movements, by contrast, are driven by turgor pressure and may occur within a short period of time. A good example is thigmonasty, the touch response of carnivorous plants such as the Venus flytrap, which snaps shut to trap the insects that serve as its food. The mechanism is a pair of blades bearing sensitive trigger hairs; when touched, the blades shut and trap the invader instantly, providing additional nutrients. The leaf grows slowly between successive catches and must readjust before the next catch.

Another recent and most important area of ecological physiology is the study of the way plants resist or cope with disease. Plants, just like animals and other organisms, are susceptible to a host of pathogenic organisms such as bacteria, fungi and viruses.

The morphology of plants differs from that of animals, which means that their reactions to disease also vary greatly. A plant may react to an invasion simply by shedding its leaves; animals, by contrast, must rely on innate immunity or tackle the intrusion with antibodies.

The disease organisms affecting plants also differ from those that cause disease in animals. Plants cannot usually spread disease among themselves because of their immobility, so infection by physical contact is rare; their pathogens usually spread through spores or are transmitted by animals acting as vectors.

Plant habitat and competitive environmental conditions also necessitate readjustments in plants. Competition for nutrients from encroaching competitors may force a plant to change its morphology or other aspects of its functioning.

Many photoperiodic plants use a photoreceptor protein, such as phytochrome or cryptochrome, to sense seasonal changes in the length of day, which tells them when to flower. Broadly, such plants can be grouped into short-day, long-day and day-neutral plants.

When the day extends past a critical length, so that the night is shorter than the day, a long-day plant flowers. Such plants generally flower during spring or early summer, as the longer days approach. Short-day plants flower when the day is shorter than a critical length, that is, when the night is longer than a critical length; they generally flower during late summer or fall, as the shorter days approach.

Scientists concur that it is the length of the night, not of the day, that controls the pattern of flowering. Thus flowering in a long-day plant is triggered by shorter nights, which mean longer days; conversely, short-day plants flower when the nights grow longer than the critical duration. This has been demonstrated using night-break experiments: a plant that requires a long night (a short-day plant) will not flower if a pulse of, say, ten minutes of artificial light is shone on it in the middle of the night. This interruption does not occur with natural night-time light such as moonlight, fireflies or even lightning, since light from these sources is not sufficiently intense to trigger the response.
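
This night-length logic can be sketched as a small decision function. The sketch below is purely illustrative: the function name, the ten-hour critical night length and the example values are hypothetical, not drawn from any cited source.

```python
# Simplified sketch of photoperiodic flowering logic; the critical
# night length used here is a hypothetical illustrative value.

def will_flower(plant_type: str, night_hours: float,
                critical_night_hours: float = 10.0,
                night_interrupted: bool = False) -> bool:
    """Decide whether a plant flowers for a given night length.

    Short-day (long-night) plants flower when the uninterrupted night
    exceeds the critical length; long-day (short-night) plants flower
    when it does not; day-neutral plants flower regardless.
    """
    if plant_type == "day-neutral":
        return True
    # A night break (even a brief light pulse) resets the dark period,
    # so an interrupted long night behaves like a short night.
    long_night = night_hours > critical_night_hours and not night_interrupted
    if plant_type == "short-day":
        return long_night
    if plant_type == "long-day":
        return not long_night
    raise ValueError(f"unknown plant type: {plant_type}")

# A short-day plant flowers after a 12-hour night, but not if that
# night is interrupted by a ten-minute pulse of light.
print(will_flower("short-day", 12.0))                          # True
print(will_flower("short-day", 12.0, night_interrupted=True))  # False
```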

Day-neutral plants are not affected by photoperiodism: they flower regardless of the length of day or night. Some have adapted to use temperature instead of light as their cue. Long-day and short-day plants, by contrast, will have their flowering enhanced or hindered by variations in the length of day and night.

They will, however, flower under sub-optimal day lengths, and temperature is a likely influence on their flowering time. Contemporary biologists believe that it is the coincidence of the active forms of phytochrome or cryptochrome, produced by light during the day, with the rhythm of the circadian clock that enables plants to measure the length of the night. Other instances of photoperiodism in plants include the growth of stems or roots only in certain seasons, and the loss of leaves in others.

Transpiration and the action of the stomata also greatly affect the plant in almost all of the circumstances cited above. Transpiration is the process by which water evaporates from the plant, usually through the leaves but also from flowers, roots and even stems. The stomata are the major site of transpiration; their opening is regulated by the stomatal guard cells, and the resulting water loss may be considered both unfortunate and necessary.

The stomata open to allow the photosynthetic gas, carbon dioxide, to diffuse in, and to let oxygen out. Transpiration has the dual action of cooling the plant when it overheats and of shedding unwanted water from the plant's system. It also enables the mass flow of mineral nutrients, carried along by the flow of water within the plant. This is a hydrostatic process driven by the diffusion of water out of the stomata.

The rate of transpiration is directly affected by the degree of stomatal opening. The evaporative demand of the atmosphere also influences the release of water: humid conditions do not favour evapotranspiration, while wind enhances its rate. The amount of water lost also depends on the plant's size, the surrounding light intensity, the ambient temperature, the soil water supply and the soil temperature.

Genetic, physical and chemical factors affect all of these environmental responses, internal cell functions and external adjustments. Plant functioning is a complex whole that embraces all aspects of botanical science, and no one aspect can be studied in isolation. The functions may vary from one plant to another depending on cell morphology, anatomy or ecological niche, but essentially, for all photosynthetic plants, the general functions run along similar lines.

Deviations may occur as a result of evolutionary characteristics or adaptations. These deviations, however, have not deterred the organization of the study of plant physiology. Research on the physiology of plants is still developing, and a full understanding of the topic is essential; it is best approached from all aspects of biology as a discipline and may call for the inclusion of other disciplines.


Pigments and Photosynthesis

This work was produced by one of our professional writers as a learning aid to help you with your studies

Lab Four: Plant Pigments and Photosynthesis

Part A

Table 4.1: Distance Moved by Pigment Bands (millimetres)

Band Number    Distance (mm)    Band Colour
1              15               Yellow
2              35               Yellow
3              73               Green
4              172              Olive Green

Distance Solvent Front Moved: 180 mm

Table 4.2: Rf Values

Rf for Carotene (yellow to yellow-orange): 0.083334
Rf for Xanthophyll (yellow): 0.194445
Rf for Chlorophyll a (bright green to blue-green): 0.405556
Rf for Chlorophyll b (yellow-green to olive-green): 0.955556
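
These values follow from the standard retention-factor ratio; as a worked check against Table 4.1 (band 1's 15 mm matches the carotene value above):

$$ R_f = \frac{\text{distance moved by pigment}}{\text{distance moved by solvent front}}, \qquad R_f(\text{band 1}) = \frac{15\ \text{mm}}{180\ \text{mm}} \approx 0.0833 $$

The remaining values are likewise 35/180, 73/180 and 172/180.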

Analysis

Page 47-48 (1-3)

What factors are involved in the separation of the pigments?

The factors involved in the separation of the pigments are the pigments' solubility, the formation of intermolecular bonds, and the size of each pigment particle. Since capillary action is the method by which the solvent moves up the strip of paper, the attraction of the pigment molecules to the paper and to one another is essentially determined by those factors.

Would you expect the Rf value of a pigment to be the same if a different solvent were used? Explain.

No, because the solubility of the pigments would be different in a different solvent, causing the Rf value to differ. A different solvent would also change the rate at which the solvent front moves, and since that rate differs, the distance travelled would be affected, again changing the Rf value.

What type of chlorophyll does the reaction centre contain? What are the roles of the other pigments?

Chlorophyll a is contained in the reaction centre. Because chlorophyll a is the primary photosynthetic pigment in plants, the other pigments (other chlorophyll a molecules, chlorophyll b, and the carotenoids: carotenes and xanthophylls) capture light energy and transfer it to the chlorophyll a at the reaction centre (College Board, 46).

Part B
Purpose

The purpose of this lab is to measure the effect of various chloroplast conditions on the rate of photosynthesis, as reflected in the percentage of light transmittance. Cuvettes were prepared with unboiled chloroplasts in light, unboiled chloroplasts in dark, and boiled chloroplasts in light; DPIP was placed into each cuvette, and a colorimeter was used to measure the level of light transmittance.

Since DPIP is the electron acceptor, the more light is present, the more electrons the DPIP absorbs, reducing the DPIP. Eventually this reduction causes the DPIP to change colour from deep blue to nearly clear.

Variables
Independent Variable

The independent variable in this lab is the condition of the chloroplasts. The conditions used were boiled chloroplasts in light, unboiled chloroplasts in light, and unboiled chloroplasts in dark.

Dependent Variable

The dependent variable in this lab is the level of light transmittance over a period of time, measured by the colorimeter. From this data we can infer the rate of photosynthesis because, as the DPIP accepts excited electrons and is reduced, the colour change indicates the rate of photosynthesis.

Control Variable

The control variables in this lab include the type of cuvette, the size of cuvette, the type of buffer used, the amount of phosphate buffer used (1 mL), and the time intervals (in minutes) used to measure the level of transmittance in the colorimeter.

Measurement

To measure the dependent variable in this lab, a colorimeter and DPIP were used to determine the level of light transmittance. As the electron acceptor, DPIP was placed in each cuvette. After each interval of time, the cuvette was placed into the colorimeter, which determined the level of light transmittance. As electrons were accepted, the DPIP was reduced, causing the colour in the cuvette to change and thus affecting the level of light transmittance measured by the colorimeter.
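The colorimeter readings themselves are not reproduced here, but a minimal sketch of how a transmittance series could be turned into a rate is shown below; the numbers are hypothetical placeholders, not measured data.

```python
# Sketch: estimate the rate of DPIP reduction (a proxy for the rate of
# photosynthesis) as the change in percent transmittance per minute.
# The readings below are hypothetical, for illustration only.

time_min = [0, 5, 10, 15]
percent_transmittance = [28.0, 46.0, 61.0, 70.0]  # e.g. unboiled chloroplasts in light

# Average rate over each 5-minute interval, then overall.
rates = [
    (percent_transmittance[i + 1] - percent_transmittance[i])
    / (time_min[i + 1] - time_min[i])
    for i in range(len(time_min) - 1)
]
overall = (percent_transmittance[-1] - percent_transmittance[0]) / (time_min[-1] - time_min[0])

print("interval rates (%T per min):", rates)
print(f"overall rate: {overall:.2f} %T per min")
```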

Hypothesis

Since photosynthesis is the process by which plants, bacteria, and other autotrophic organisms obtain energy to produce sugars, the right conditions and the right environment are necessary in order to carry out this complex process. Based on prior knowledge and information from this lab, cuvette 3 should show the highest percentage of light transmittance and the highest rate of photosynthesis.

Since photosynthesis requires light and functional chloroplasts to absorb it and produce sugars, without either one the process is interrupted and cannot function properly. Unboiled chloroplasts will give a higher percentage of light transmittance than boiled chloroplasts because of the impact temperature has on the proteins and enzymes of the chloroplast. At high temperatures, such as the boiling point, the heat denatures the enzymes and proteins, preventing them from contributing to photosynthesis.

Without functional chloroplasts to absorb the energy from the light, the electrons will not be bumped to a higher energy level and will not be able to reduce DPIP. Of the two cuvettes with unboiled chloroplasts, the cuvette placed in front of the light will have a higher percentage of light transmittance than the cuvette placed in the dark because, with light, energy can be absorbed, DPIP can be reduced, ATP can be created, and photosynthesis can be carried out.

Like functional chloroplasts, light is an essential component of photosynthesis; without light, photosynthesis cannot occur. Therefore, the cuvette placed in the dark may contain functional chloroplasts, but without light to provide the necessary energy the reaction will occur either very slowly or not at all. Finally, the cuvette with no chloroplasts will not photosynthesise at all, because without chloroplasts to absorb the energy from the light, the solution cannot carry out photosynthesis.

Procedures

First, a beaker of water, acting as a heat sink, was positioned between the samples and the light source. Next, an ice bath was created by filling an ice bucket with ice, to preserve the phosphate buffer and chloroplasts. Then, before the cuvettes could be used, they were wiped with lint-free tissue to ensure that light transmission would be smooth and uninterrupted.

Before anything more was done with the cuvettes, both boiled and unboiled chloroplasts were obtained in pipettes and placed, inverted, in the ice bath. Next, of the five cuvettes labelled 1 to 5, cuvette 2 had a foil sleeve constructed for it to keep light out of the solution. Each cuvette then received the corresponding amount of phosphate buffer, distilled water, and DPIP. The colorimeter was then set up by starting the computer program that would read it, and the two were linked accordingly.

The first cuvette received three drops of unboiled chloroplasts, was shaken, and was placed in the slot of the colorimeter. This first solution served as the first calibration point of reference for the colorimeter, at zero percent light transmittance. Following the setting of the first calibration point, the second calibration point was also set. In cuvette 2, three drops of unboiled chloroplasts were added, a stopwatch was started immediately, and the light transmittance was recorded.

The same cuvette was encased in the foil sleeve created earlier and then placed in the light. Cuvette 3 also received three drops of unboiled chloroplasts, at which point the time and the light transmittance were recorded. Right afterwards, the cuvette was returned to the light. Cuvette 4 received three drops of boiled chloroplasts, at which point the time and the light transmittance were again recorded. Just like cuvette 3, cuvette 4 was returned to the light. Cuvette 5, the control, received no chloroplasts but still had its time and light transmittance recorded. The light transmittance of each cuvette continued to be recorded at five-minute intervals (5 minutes, 10 minutes, 15 minutes) following the same procedure until all data had been collected.

Conclusion

The process of photosynthesis is described as the conversion of light energy to chemical energy that is stored in glucose and other organic compounds. Essential to the development of plants and animals, light from the sun or from an artificial source is necessary for this process to occur and to deliver its benefits. Having performed this lab, the results obtained support this concept and also support my hypothesis.

After gathering all the data, cuvette 3 did have the highest percentage of light transmittance and the fastest rate of photosynthesis. Because the unboiled chloroplasts in the cuvette absorbed the light, and a light source was available to provide energy to reduce the DPIP, the conditions were right for photosynthesis to occur.

In cuvette 3, photosynthesis did occur because, when the light shone on the unboiled chloroplasts, the electrons were excited and moved to a higher energy level.

This energy was then used to produce ATP and to reduce DPIP, causing the solution to change colour and giving a higher, faster rate of photosynthesis and light transmittance. This cuvette essentially showed that light and chloroplasts are needed in order to carry out photosynthesis. Although the graph may show the rate of photosynthesis slowing down, the curve begins to level off not because photosynthesis stops working but because, as the process proceeds, the DPIP begins to be used up, causing the reaction to slow down and level off.

Cuvette 2 showed different results in that no photosynthesis occurred, because there was no light present for the chloroplasts to absorb and use to reduce the DPIP. Photosynthesis requires light, and without light photosynthesis could not occur, causing essentially no change in the cuvette. The data table and graph do show some change in the rate of photosynthesis, but that occurred because the cuvette had to be taken out of its aluminium sleeve to be placed in the colorimeter, and the DPIP broke down slightly during this brief exposure to the light.

However, overall, the data show that because there was no light present, photosynthesis could not occur, causing no change. Cuvette 4 also showed little increase or change in the percentage of light transmittance: since the cuvette contained boiled chloroplasts, the high temperatures had denatured the proteins and enzymes in the chloroplasts, rendering them ineffective. Because the light could not be absorbed by the chloroplasts, photosynthesis either could not occur or occurred at a very slow pace.

As with cuvette 2, the data table and graph also show some change in the percentage of light transmittance in cuvette 4; because the DPIP was briefly exposed to the light, it broke down slightly, causing a small change in the level of light transmittance. Essentially, this cuvette showed that chloroplasts, in addition to light, are required for photosynthesis.

Cuvette 5 also showed no change in the percentage of light transmittance because, without the presence of chloroplasts, the light could not be absorbed to excite the electrons and reduce the DPIP. Without functioning chloroplasts, photosynthesis could not occur, because the DPIP would not be reduced and ATP would not be created. Any fluctuations in the data or graph for cuvette 5 can be explained by human or measurement error.

Analysis

Page 52-53 (1-8)

What is the function of DPIP in this experiment?

The function of the DPIP in this experiment is to act as the electron acceptor, replacing the usual NADP found in plants. When the light shines on the active chloroplasts, the electrons are excited, which causes them to jump to a higher energy level thus reducing the DPIP. As the DPIP is reduced, the colour changes from deep blue to colourless, which affects the rate and level of light transmittance when measured by the colorimeter.

What molecule found in the chloroplasts does DPIP “replace” in this experiment?

DPIP in this experiment "replaces" the electron acceptor NADP.

What is the source of the electrons that will reduce DPIP?

When the light shines on the chloroplast, the light provides enough energy to bump the electrons to a higher energy level thus reducing the DPIP. The source of the electrons can also come from the photolysis of water.

What was measured with the spectrophotometer in this experiment?

The spectrophotometer in this experiment is used to measure the percentage/level of light transmittance through the cuvette based on the amount of photosynthetic activity.

What is the effect of darkness on the reduction of DPIP? Explain.

Because there is an absence of light shining on the chloroplasts, the DPIP could not be reduced: there was no energy, or not enough energy, to excite the electrons and move them to a higher energy level in order to reduce the DPIP.

What is the effect of boiling the chloroplasts on the subsequent reduction of DPIP? Explain.

Similar to the effects of darkness, boiling the chloroplasts denatured their proteins through high temperature, which slowed and inhibited the process of photosynthesis. Because the chloroplasts could not absorb light and perform their job, the DPIP could not be reduced, which kept the level of transmittance low.

What reasons can you give for the difference in the percentage of transmittance between the live chloroplasts that were incubated in the light and those that were kept in the dark?

Because light is essential for photosynthesis, the chloroplasts placed in light were able to reduce DPIP and perform photosynthesis. As the chloroplasts absorbed the light, the energy absorbed pushed the electrons to a higher energy level, which caused the DPIP to be reduced.

As the DPIP was reduced, the colour changed and the level of light transmittance rose. For the chloroplasts kept in the dark, however, because there was no energy source for the chloroplasts to use, the DPIP could not be reduced and the percentages of light transmittance were lower.

Identify the function of each of the cuvettes

Cuvette 1: Cuvette 1 was used to measure how the absence of DPIP and chloroplast affected the percentage of light transmittance. This cuvette was also used to calibrate the colorimeter.

Cuvette 2: Cuvette 2 was used to measure how the lack of light and unboiled chloroplast affected the percentage of light transmittance. It essentially showed how important light was to the process of photosynthesis.

Cuvette 3: Cuvette 3 was used to measure how light and unboiled chloroplast affected the percentage of light transmittance. It essentially showed how light and active chloroplasts are needed to carry out the process of photosynthesis.

Cuvette 4: Cuvette 4 was used to measure how light and boiled chloroplasts affected the percentage of light transmittance. It essentially showed how the denatured proteins in the chloroplasts prevented the light from being absorbed and the process of photosynthesis from being carried out.

Cuvette 5: Cuvette 5 is the control of the experiment and is used to show how the availability of light but absence of chloroplast will prevent the process of photosynthesis from being performed and its effect on the percentage of light transmitted.

Variation of Light Intensity – Inverse Square Law

This work was produced by one of our professional writers as a learning aid to help you with your studies

Background Theory

Light emitted from any kind of source, e.g. the sun or a light bulb, is a form of energy. Everyday problems, such as the lighting required for various forms of labour or for street illumination, require one to be able to determine and evaluate the intensity of light emitted by any light source, or even the illumination of a given surface. A special group of studies is formed around these issues; it is called photometry.

Luminous flux is a scalar quantity which measures the time rate of light flow from the source. Like all measures of energy transferred over a period of time, luminous flux is measured in joules per second, or watts (SI units). It can therefore safely be said that luminous flux is a measure of light power.

Visible light consists of several different colours, each representing a different wavelength of the radiation spectrum. For example, red has a wavelength of 610-700 nm, yellow 550-590 nm and blue 450-500 nm.

The human eye demonstrates different levels of sensitivity to the various colours of the spectrum. More specifically, the maximum sensitivity is observed in the yellow-green region (i.e. 555 nm). From all the above, it is clear that there is a need for a unit associating and standardising the visual sensitivity at the various wavelengths with the light power measured in watts; this special luminous flux unit is called the lumen (lm).

One lumen is equivalent to 1/680 watt of light with a wavelength of 555 nm. This special relationship between illumination and visual response renders the lumen the preferred photometric unit of luminous flux for practical applications. On top of that, one of the most widely used light sources in everyday life, the electric light bulb, emits light which consists of many different wavelengths.

A measure of the luminous strength of any light source is called the light source's intensity. At this point, it should be said that the intensity of a light source depends on the quantity of lumens emitted within a finite angular region, which is described by a solid angle. To give a visual representation of the solid angle, recall that in a two-dimensional plane the plane angle is used for all kinds of angular measurements. A further useful reminder concerns the arc length s; namely, for a circle of radius r, the arc length s is calculated by the formula

s = r * θ – Equation 1

(θ is measured in radians)

Now, in three-dimensional space the solid angle Ω is similarly used for angular measurements. Corresponding to the plane angle θ, each section of surface area A of a sphere of radius r is calculated by using the following formula:

A = r² * Ω – Equation 2

(Remember that Ω is measured in steradians)

By definition one steradian is the solid angle subtended by an area of the spherical surface equal to the square of the radius of the sphere.
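A short numerical illustration of Equation 2 and the steradian definition follows; the radius and patch area are example values chosen for this guide.

```python
import math

# Sketch: the solid angle subtended by a patch of area A on a sphere of
# radius r is Omega = A / r**2 (Equation 2 rearranged). Example values.

r = 2.0            # sphere radius in metres (illustrative)
A = 4.0            # patch area in square metres (illustrative; equals r**2)
omega = A / r**2   # an area equal to r**2 subtends exactly 1 steradian
print(omega)       # 1.0

# The whole sphere (area 4*pi*r**2) subtends 4*pi steradians:
print((4 * math.pi * r**2) / r**2)  # 12.566... = 4*pi
```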

Taking all the above into account, the luminous intensity I of a light source (small enough to be considered a point source) emitting into the solid angle Ω is given by:

I = F / Ω – Equation 3

where F is the flux measured in lumens. It is clear that the unit of luminous intensity is the lumen/steradian. This unit used to be called a candle, as it was defined in the context of light emitted from carbon filament lamps.

Generally speaking, luminous intensity in any particular direction is called the candle power of the source. The corresponding unit in the SI system is called the candela (cd), which is the luminous intensity emitted by 1/60 cm² of platinum at a temperature of 2054 K (the fusion point of platinum).

A uniform light source (small enough to be considered a point source) whose luminous intensity is equal to one candela produces a luminous flux of one lumen through each unit solid angle. The equation shown below is the mathematical expression of the above definition:

F = Ω * I – Equation 4

where I is equal to one cd and Ω is equal to one sr.

In similar terms, the total flux Ft of a uniform light source with an intensity I can be calculated with the aid of the following formula:

Ft = Ωt * I – Equation 5

Taking into account that the total solid angle Ωt of a sphere is 4π sr, the above formula becomes

Ft = 4π * I – Equation 6
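The following sketch evaluates Equations 4 and 6 for an illustrative source; the intensity and solid angle values are assumptions, not measured quantities.

```python
import math

# Sketch: flux through a solid angle (Equation 4) and total flux over the
# whole sphere (Equation 6) for a uniform point source. Illustrative values.

I = 100.0                  # luminous intensity in candela (= lm/sr)
omega = 0.5                # a solid angle of interest, in steradians

F = omega * I              # flux through that solid angle, in lumens (Eq. 4)
F_total = 4 * math.pi * I  # flux over the whole sphere, in lumens (Eq. 6)

print(f"flux through {omega} sr: {F:.1f} lm")
print(f"total flux: {F_total:.1f} lm")   # ~1256.6 lm
```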

When a surface is irradiated with visible light, it is said to be illuminated. For any given surface, the illuminance E (also called illumination) is intuitively understood and defined to be the flux incident on the surface divided by the total area of the surface:

E = F / A – Equation 7

In the case where several light sources are present and illuminate the same surface, the total illuminance is calculated by adding up all of the individual source illuminances. The SI unit allocated to illuminance is the lux (lx), where one lx is equal to 1 lm/m².

Another way of expressing illumination, in terms of the light source's intensity and the distance from the light source, can be derived by combining the last few equations:

E = F / A = I * Ω / A = I / r² – Equation 8

where r is the distance measured from the source, i.e. the radius of a sphere whose total area is A (Ω = A / r²). An important side note at this point is that 1 fc equals 1 lm/ft² and 1 lx equals 1 lm/m².

It is evident that the illumination is inversely proportional to the square of the measured distance from the light source. In the case of constant light source intensity I, it can be said that:

E2 / E1 = r1² / r2² = (r1 / r2)² – Equation 9

In the real world, the incident light is very rarely normal to a surface; nearly always light strikes a surface at an angle of incidence θ.

In this case the illuminance is calculated by:

E = I * cos θ / r² – Equation 10
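Equations 8-10 can be checked numerically, as in the sketch below; the intensity, distances and angle are illustrative assumptions.

```python
import math

# Sketch: illuminance of a point source (Equations 8-10). Example values.

I = 100.0          # source intensity in candela
r1, r2 = 1.0, 2.0  # distances in metres

E1 = I / r1**2     # lux, normal incidence (Eq. 8)
E2 = I / r2**2
print(E2 / E1)     # 0.25 = (r1/r2)**2: doubling the distance quarters E (Eq. 9)

theta = math.radians(30)                 # angle of incidence
E_oblique = I * math.cos(theta) / r2**2  # oblique incidence (Eq. 10)
print(f"{E_oblique:.2f} lx")             # ~21.65 lx
```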

To sum up, there are several methods which can be employed to measure illumination. Nearly all of them are based on the photoelectric effect, famously explained by Albert Einstein (for which he was awarded the Nobel Prize in 1921). In a few words, when light strikes a material, electron emission is observed, and an electric current flows if a circuit is present.

This current is proportional to the incident light flux and depends on the work function of the material; the intensity of the resulting current flow is measured by instruments calibrated in illumination units.

Apparatus Components:

Light Sensor – Light Dependent Resistor (LDR)

Light bulb

Ruler

Power supply

Voltmeter

Ammeter

Connecting wires and Inline conductors

Two Vertical Stands

Black Paper

Experimental Apparatus

The experimental apparatus consisted of various parts. The basis of the light-reception circuit was a Light Dependent Resistor (LDR), the essential part of the apparatus, since it enables the measurement of the light's intensity.

To give a brief introduction to this type of device, it should be said that all materials exhibit some resistance to electric current flow (which, by definition, is an orientated flow of electrons). The particularity of an LDR lies in the fact that its resistance is not constant; instead, its value varies according to the intensity of the light that falls on it. Generally speaking, LDR devices can be categorised into two main divisions: negative and positive coefficient. The former decrease their resistance as the light's intensity grows; the latter increase their resistance as the light's intensity becomes greater.

At the microscopic level, such a device consists of a semiconducting material like doped silicon (the most commonly used material for electronic applications). When light impacts on the device material, its energy is absorbed by the covalently bonded electrons. Subsequently, this excess energy breaks the bonds and creates free electrons inside the material. These electrons are free to move inside the material and hence decrease its resistivity, since they are no longer bonded.

Another essential part of the apparatus is the light source, which in this particular case was an incandescent lamp (these lamps are the most commonly used ones in everyday applications). The basic component of an incandescent lamp is the wire filament, usually made of tungsten, which is sealed in a glass bulb. The bulb itself is filled with a low-pressure mixture of argon and nitrogen in gaseous form. The purpose of these two gases is to delay the evaporation of the metal filament as well as its oxidation.

Once current begins to flow through the tungsten filament, it gets so hot that it glows white. Under these operating conditions the filament itself ranges in temperature from 2500-3000 degrees Celsius. All incandescent lamps have a continuous spectrum which lies primarily in the infrared region of the electromagnetic spectrum. The basic drawback of these devices is their poor efficiency, since more than 95% of the lamp's energy is lost to the ambient environment in the form of heat.

The detailed apparatus used for this investigation is shown schematically in Figure 1. According to this figure, the light source (an incandescent lamp; light bulb's electrical characteristics required here) is placed on a fixed stand and is kept in a vertical upright position pointing upwards. It is evident that once the bulb is switched on, light will be emitted isotropically in all directions. A power supply (power supply's electrical characteristics required here) was used for powering the light bulb and providing variable voltage values. In that way, as will be explained later, the intensity of the light emitted by the bulb will not stay constant, and neither will the voltage across the LDR.

Opposite the light bulb, on another stand, the LDR device was kept fixed in place with the aid of an adhesive material (Blu-Tack). The LDR was placed normal to the light bulb so that the angle of incidence of the light coming off the source remained constant and normal throughout the experimental measurements.

Another observation that can be made from Figure 1 is the interconnection between the LDR device, the voltmeter, the ammeter and the power supply. More specifically, in order for the LDR to function properly, a voltage was applied across the receiver circuit (a 4 V power pack in our case). The voltmeter was connected across the LDR in order to constantly measure the voltage across it. Variations in this voltage were due to alterations in the intensity of the incident light (since the resistance value was changing).

The voltmeter would ideally have infinite resistance; in reality its resistance is finite, and thus small deviations of the indicated voltage from the real value were expected.

Another quantity under monitoring was the current flowing into the LDR device. For this purpose an ammeter was placed in series with the LDR. Its role was very important, since the current flowing into the LDR had to remain constant throughout the experimental measurements. Again, the ideal ammeter would not have any impedance at all. In reality, all ammeters demonstrate a finite, albeit very small, resistance: thus deviation of the indicated value from the actual one should be expected.

(Missing resistance for potential divider?)

A very interesting (and very widely used) configuration for light intensity measurements, using the same components as the ones available for this practical, can be seen in Figure 1 with a little insight. A closer look at the receiver circuit reveals that a potential divider is formed by the way the above-mentioned components are connected. On a side note, measuring the current coming out of the LDR would be feasible and relatively easy, since the output current is directly related to the value of the LDR resistance. A better way is to measure the output voltage, which happens to be the voltage across the LDR (i.e. the value monitored by the voltmeter); in this case the voltage is proportional to the current flowing through the LDR. The second resistance required to form the potential divider comes from the finite internal resistance of the ammeter. The value of the output voltage Vout can be calculated by using the standard potential divider formula shown below:

Vout = RLDR / (RLDR + Rammeter) * Vin – Equation 11

where Vin is the voltage applied across the receiver circuit, and RLDR and Rammeter are the resistance of the LDR device and the internal resistance of the ammeter respectively.
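A minimal sketch of Equation 11 follows, assuming the 4 V supply mentioned above and a guessed ammeter resistance (the real internal resistance was not recorded); the LDR values are also illustrative.

```python
# Sketch of Equation 11: the LDR and the ammeter's internal resistance form
# a potential divider. All resistance values are illustrative assumptions.

V_in = 4.0        # volts across the receiver circuit (the 4 V power pack)
R_ammeter = 100.0 # assumed internal resistance of the ammeter, in ohms

def v_out(R_ldr: float) -> float:
    """Voltage measured across the LDR for a given LDR resistance (Eq. 11)."""
    return R_ldr / (R_ldr + R_ammeter) * V_in

# A negative-coefficient LDR: resistance falls as light intensity rises,
# so V_out falls as the bulb is brought closer.
for R_ldr in (10000.0, 1000.0, 100.0):
    print(f"R_LDR = {R_ldr:7.0f} ohm -> V_out = {v_out(R_ldr):.3f} V")
```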

Since the aim of these measurements is to investigate the relationship between light intensity and distance, both the light bulb and the LDR were kept fixed vertically, but the stand of the light bulb could be translated horizontally. For the purpose of the experiments, the translation of the light bulb was made parallel to a ruler placed between the two stands. This configuration was quite optimal, since it allowed the exact distance between light source and receiver to be known throughout the experiments.

In all optical experiments, one of the most fundamental sources of error is background illumination and the interference of other light sources. For this reason the apparatus was surrounded by black paper.

Experimental Procedure

The LDR sensor and the light bulb have to be at the same vertical height during all experimental measurements. One key point to notice is that, in this way, the light bulb behaves more like a point source of light, justifying the use of the mathematical equations above. The LDR sensor has to point towards the light bulb at all times.

Having set up the experimental apparatus and chosen the range of distances between the light bulb and the LDR sensor, a reference measurement of the LDR sensor was made with the light bulb switched off. Depending on the power of the light bulb, a starting distance of 10 cm was deemed sufficient for calibration purposes. After the calibration, this distance was progressively increased, as explained below. Similarly, the rest of the experimental apparatus's components (i.e. receiver device, voltmeter, ammeter, etc.) were also switched off during this very crucial calibration phase of the practical; generally speaking, it is good and common practice, as well as much more preferable, to carry out the calibration and experimental procedure in conditions of total darkness. The previous step ensured that the background illumination was measured; this value would have to be deducted from all further measurements. Hence the error of the measurements is reduced and their credibility increased by a great degree.

The light bulb was initially switched on by applying a specific voltage across it; subsequently, the exact distance between the light bulb and the LDR was measured using the ruler. The next and most important step at this stage was to measure the potential difference across the LDR for this specific position of the light bulb. For reference, the reading of the ammeter was also recorded.

The position of the light bulb stand was then altered along the ruler in constant, known intervals of distance. For each known distance, the above measurements were repeated. At this stage it is useful to emphasise that the above data can be acquired more than once per known distance r, since averaging the data decreases the error in the experimental measurements obtained. In that way, a comprehensive chart or table can be formed associating distance values (between the two stands) with output voltage values.
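To illustrate the averaging and the inverse-square check described above, the sketch below fits the slope of log(V) against log(r); the voltage readings are invented placeholders, and the fit assumes the LDR output is proportional to illuminance, which is only approximately true in practice.

```python
import math

# Sketch: average repeated voltage readings per distance, then check the
# inverse-square prediction with a log-log fit (the slope should be near -2
# if the measured voltage tracks illuminance). Data are invented placeholders.

readings = {            # distance in cm -> repeated V_out readings in volts
    10: [3.61, 3.58, 3.63],
    20: [0.93, 0.91, 0.92],
    30: [0.41, 0.42, 0.40],
}

means = {r: sum(v) / len(v) for r, v in readings.items()}

# Least-squares slope of log(V) against log(r).
xs = [math.log(r) for r in means]
ys = [math.log(v) for v in means.values()]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
print(f"fitted exponent: {slope:.2f}")   # close to -2 for ideal data
```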

Hamstring Rehabilitation and Injury Prevention

This work was produced by one of our professional writers as a learning aid to help you with your studies

Hamstring injuries can be frustrating. The symptoms are typically persistent and chronic, the healing can be slow, and there is a high rate of exacerbation of the original injury (Petersen J et al. 2005).

The classical hamstring injury is most commonly found in athletes who take part in sports that involve jumping or explosive sprinting (Garrett W E Jr. 1996), but it also has a disproportionately high prevalence in activities such as water skiing and dancing (Askling C et al. 2002).

A brief overview of the literature on the subject shows that the majority of the epidemiological studies in this area have been done in the high-risk areas of Australian and English professional football teams. Various studies have put the incidence of hamstring strain injuries at 12-16% of all injuries in these groups (Hawkins R D et al. 2001). Part of the reason for this intense scrutiny of the football teams is not only the high incidence of the injury, which makes for ease of study, but also the economic implications of the injury.

Some studies (viz. Woods C et al. 2004) record the fact that hamstring injuries occur at a rate of 5-6 injuries per club per season, resulting in an average loss of 15-21 matches per season. In terms of assessing the impact of one hamstring injury, this equates to an average figure of 18 days off playing and about 3.5 matches missed. It should be noted that this is an average figure and individuals may need several months for a complete recovery (Orchard J et al. 2002). The re-injury rate for this group is believed to be in the region of 12-31% (Sherry M A et al. 2004).

The literature is notable for its lack of randomised prospective studies of treatment modalities and therefore the evidence base for treatment is not particularly secure.

If one considers the contribution of the literature to the evidence base on this subject, one is forced to admit that there is a considerable difficulty in terms of comparison of various differences in terminology and classification. Despite these difficulties this essay will take an overview of the subject.

Classification of injuries

To a large extent, the treatment offered will depend on a number of factors, not least of which is the classification of the injury. In broad terms, hamstring injuries can have direct or indirect causation. The direct forms are typically caused by contact sports and comprise contusions and lacerations whereas the indirect variety of injury is a strain which can be either complete or incomplete. This latter group comprises the vast majority of the clinical injuries seen (Clanton T O et al. 1998).

The most extreme form of strain is the muscle rupture which is most commonly seen as an avulsion injury from the ischial tuberosity. Drezner reports that this type of injury is particularly common in water skiers and can either be at the level of the insertion (where it is considered a totally soft tissue injury) or it may detach a sliver of bone from the ischial tuberosity (Drezner J A 2003). Strains are best considered to fall along a spectrum of severity which ranges from a mild muscle cramp to complete rupture, and it includes discrete entities such as partial strain injury and delayed onset muscle soreness (Verrall G M et al. 2001). One has to note that it is, in part, this overlap of terminology which hampers attempts at stratification and comparison of clinical work (Connell D A 2004).

Woods reports that the commonest site of muscle strain is the musculotendinous junction of the biceps femoris (Woods C et al. 2004).

In their exemplary (but now rather old) survey of the treatment options of hamstring injuries, Kujala et al. suggest that hamstring strains can usefully be categorised in terms of severity thus:

Mild strain/contusion (first degree): A tear of a few muscle fibres with minor swelling and discomfort and with no, or only minimal, loss of strength and restriction of movements.

Moderate strain/contusion (second degree): A greater degree of damage to muscle with a clear loss of strength.

Severe strain/contusion (third degree): A tear extending across the whole cross section of the muscle resulting in a total lack of muscle function.

(Kujala U M et al. 1997).

There is considerable debate in the literature relating to the place of the MRI scan in the diagnostic process. Many clinicians appear to be confident in their ability to both diagnose and categorise hamstring injuries on the basis of a careful history and clinical examination. The Woods study, for example, showed that only 5% of cases were referred for any sort of diagnostic imaging (Woods C et al. 2004). The comparative Connell study came to the conclusion that ultrasonography was at least as useful as MRI in terms of diagnosis (though not for pre-operative assessment) and was clearly both easier to obtain and considerably less expensive than the MRI scan (Connell D A 2004).

Before one considers the treatment options, it is worth considering both the mechanism of injury and the various aetiological factors that are relevant to the injury, as these considerations have considerable bearing on the treatment and to a greater extent, the preventative measures that can be invoked.

It appears to be a common factor in papers considering the mechanisms of causation of hamstring injuries that the anatomical deployment of the muscle is a significant factor. It is one of a small group of muscles which functions over two major joints (biarticular muscle) and is therefore influenced by the functional movement at both of these joints. It is a functional flexor at the knee and an extensor of the hip. The problems appear to arise because in the excessive stresses experienced in sport, the movement of flexion of the hip is usually accompanied by flexion of the knee which clearly have opposite effects on the length of the hamstring muscle.

Cinematic studies that have been done specifically within football suggest that the majority of hamstring injuries occur during the latter part of the swing phase of the sprinting stride (viz. Arnason A et al. 1996). It is at this phase of the running cycle that the hamstring muscles are required to act by decelerating knee extension with an eccentric contraction and then promptly act concentrically as a hip joint extensor (Askling C et al. 2002).

Verrall suggests that it is this dramatic change in function that occurs very quickly indeed during sprinting that renders the hamstring muscle particularly vulnerable to injury (Verrall G M et al. 2001).

Consideration of the aetiological factors that are relevant to hamstring injuries is particularly important in formulating a plan to avoid recurrence of the injury.

Bahr, in his recent and well-constructed review of risk factors for sports injuries in general, makes several observations with specific regard to hamstring injuries. He makes the practical observation that the older classification of internal (intrinsic) and external (extrinsic) factors is not nearly so useful in clinical practice as the consideration of the distinction between those factors that are modifiable and those that are non-modifiable (Bahr R et al. 2003).

Bahr reviewed the evidence base for the potential risk factors and found it to be very scanty and “largely based on theoretical assumptions” (Bahr R et al. 2003 pg 385). He lists the non-modifiable factors as older age and being black or Aboriginal in origin (the latter point reflecting the fact that many of the studies have been based on Australian football).

The modifiable factors, which clearly have the greatest import for clinical practice, include an imbalance of strength in the leg muscles with a low H : Q ratio (hamstring to quadriceps ratio) (Clanton T O et al. 1998), hamstring tightness (Witvrouw E et al. 2003), the presence of significant muscle fatigue, (Croisier J L 2004), insufficient time spent during warm-up, (Croisier J L et al. 2002), premature return to sport (Devlin L 2000), and probably the most significant of all, previous injury (Arnason A et al. 2004).

This is not a straightforward additive compilation, however, as the study by Devlin suggests that there appears to be a threshold for each individual risk factor to become relevant, with some (such as a premature return to sport) being far more predictive than others (Devlin L 2000).

There is also some debate in the literature relating to the relevance of the degree of flexibility of the hamstring muscle. One can cite the Witvrouw study of Belgian football players where it was found that those players who had significantly less flexibility in their hamstrings were more likely to get a hamstring injury (Witvrouw E et al. 2003).

If one now considers the treatment options, an overview of the literature suggests that while there is general agreement on the immediate post-injury treatment (rest, ice, compression, and elevation), there is no real consensus on the rehabilitation aspects. To a large extent this reflects the scarcity of good quality data on this issue. The Sherry & Best trial is the only well-constructed comparative treatment trial (Sherry M A et al. 2004), but even this had only 24 athletes randomised to one of the two arms of the trial.

In essence it compared the effects of static stretching, isolated progressive hamstring resistance, and icing (STST group) with a regime of progressive agility and trunk stabilisation exercises and icing (PATS group). The study analysis is both long and complex but, in essence, it demonstrated that there was no significant difference between the two groups in terms of the time required to return to sport (healing time). The real significant differences were seen in the re-injury rates with the ratio of re-injury (STST : PATS) at two weeks being 6 : 0, and at 1 year it was 7 : 1.

In the absence of good quality trials one has to turn to studies like those of Clanton et al. where a treatment regime is derived from theoretical healing times and other papers on the subject. (Clanton T O et al. 1998). This makes for very difficult comparisons, as it cites over 40 papers as authority and these range in evidential level from 1B to level IV. (See appendix). In the absence of more authoritative work one can use this as an illustrative example.

Most papers which suggest treatment regimes classify different phases in terms of time elapsed since the injury. This is useful for comparative purposes but it must be understood that these timings will vary with clinical need and the severity of the initial injury. For consistency this discussion will use the regime outlined by Clanton.

Phase I (acute): 1–7 days

As has already been observed, there appears to be a general consensus that the initial treatment should include rest, ice, compression, and elevation, with the intention of controlling the initial intramuscular haemorrhage and minimising the subsequent inflammatory reaction, thereby reducing pain levels (Worrell T W 2004).

NSAIAs appear to be almost universally recommended with short term regimes (3 – 7 days) starting as soon as possible after the initial injury appearing to be the most commonly advised. (Drezner J A 2003). This is interesting as a theoretically optimal regime might suggest that there is merit in delaying the use of NSAIAs for about 48 hrs because of their inhibitory action on the chemotactic mechanisms of the inflammatory cells which are ultimately responsible for tissue repair and re-modelling. (Clanton T O et al. 1998).

There does appear to be a general consensus that early mobilisation is beneficial to reduce the formation of adhesions between muscle fibres or other tissues, with Worrell suggesting that active knee flexion and extension exercises can be of assistance in this respect and should be used in conjunction with ice to minimise further tissue reaction (Worrell T W 2004).

Phase II (sub-acute): day 3 to >3 weeks

Clanton times the beginning of this phase with the reduction in the clinical signs of inflammation. The goals of this stage are to prevent muscle atrophy and optimise the healing processes. This can be achieved by a graduated programme of concentric strength exercises, but these should not be started until the patient can manage a full range of pain-free movement (Drezner J A 2003).

Clanton, Drezner and Worrell all suggest that “multiple joint angle, sub-maximal isometric contractions” are appropriate as long as they are pain free. If significant pain is encountered then the intensity should be decreased. Clanton and Drezner add that exercises designed to maintain cardiovascular fitness should be encouraged at this time. They suggest “stationary bike riding, swimming, or other controlled resistance activities.”

Phase III (remodelling); 1–6 weeks

After the inflammatory phase, the healing muscle undergoes a phase of scar retraction and re-modelling. This leads to the clinically apparent situation of hamstring shortening or loss of flexibility. (Garrett W E Jr. et al. 1989). To minimise this eventuality, Clanton cites the Malliaropoulos study which was a follow up study with an entry cohort of 80 athletes who had sustained hamstring injuries.

It was neither randomised nor controlled, and the treatment regime was left to the discretion of the clinician in charge. It compared regimes which involved either frequent hamstring stretching (four sessions daily) or fewer sessions (once daily). In essence, the results showed that the athletes who performed the most intensive stretching programme regained range of motion faster and had a shorter period of rehabilitation. Both these differences were found to be significant (Malliaropoulos N et al. 2004).

Verrall suggests that concentric strengthening followed by eccentric strengthening should begin in this phase. The rationale for this timing is that eccentric contractions tend to exert greater forces on the healing muscle and should therefore be delayed to avoid the danger of a rehabilitation-induced re-injury (Verrall G M et al. 2001). We note that Verrall cites evidence for this from his prospective (un-randomised) trial.

Phase IV (functional): 2 weeks to 6 months

This phase is aimed at a safe return to non-competitive sport. It is ideally tailored to the individual athlete and the individual sport. No firm rules can therefore be applied. Worrell advocates graduated pain-free running based activities in this phase and suggests that “Pain-free participation in sports specific activities is the best indicator of readiness to return to play.” (Worrell T W 2004)

Drezner adds the comment that return to competitive play before this has been achieved is associated with a high risk of injury recurrence. (Drezner J A 2003)

Phase V (return to competition): 3 weeks to 6 months

This is the area where there is perhaps the least agreement in the literature. All authorities are agreed that the prime goal is to try to avoid re-injury. Worrell advocates that the emphasis should be on the maintenance of stretching and strengthening exercises (Worrell T W 2004).

For the sake of completeness one must consider the place of surgery in hamstring injuries. It must be immediately noted that surgery is only rarely considered as an option, and then only for very specific indications. Indications which the clinician should be alert to are large intramuscular bleeds which lead to intramuscular haematoma formation as these can give rise to excessive intramuscular fibrosis and occasionally myositis ossificans (Croisier J L 2004).

The only other situations where surgery is contemplated are a complete tendon rupture or a detachment of a bony fragment from either insertion or origin. As Clanton points out, this type of injury appears to be very rare in football and is almost exclusively seen in association with water skiing injuries (Clanton T O et al. 1998).

It is part of the role of the clinician to give advice on the preventative strategies that are available, particularly in the light of studies which suggest that the re-injury rate is substantial (Askling C et al. 2003).

Unfortunately, this area has an even less substantial evidence base than the treatment area. For this reason we will present evidence from the two prospective studies done in this area, by Hartig and by Askling.

Hartig et al. considered the role of flexibility in the prophylaxis of further injury with a non-randomised comparative trial and demonstrated that increasing hamstring flexibility in a cohort of military recruits halved the number of hamstring injuries that were reported over the following 6 months (Hartig D E et al. 1999).

The Askling study was a randomised controlled trial of 30 football players. The intervention group received hamstring strengthening exercises in the ten week pre-season training period. This intervention reduced the number of hamstring injuries by 60% during the following season (Askling C et al. 2003). Although this result achieved statistical significance, it should be noted that it involved a very small entry cohort.

Conclusions

Examination of the literature has proved to be a disappointing exercise. It is easy to find papers which give advice at evidence level IV but there are disappointingly few good quality studies in this area which provide a substantive evidence base. Those that have been found have been presented here but it is accepted that a substantial proportion of what has been included in this essay is little more than advice based on theory and clinical experience.

References

Arnason A, Gudmundsson A, Dahl H A, et al. (1996) Soccer injuries in Iceland. Scand J Med Sci Sports 1996; 6: 40 – 5.

Arnason A, Sigurdsson S B, Gudmundson A, et al. (2004) Risk factors for injuries in football. Am J Sports Med 2004; 32 (1 suppl) :S5 – 16.

Askling C, Lund H, Saartok T, et al. (2002) Self-reported hamstring injuries in student dancers. Scand J Med Sci Sports 2002; 12: 230 – 5.

Askling C, Karlsson J, Thorstensson A. (2003) Hamstring injury occurrence in elite soccer players after preseason strength training with eccentric overload. Scand J Med Sci Sports 2003; 13: 244 – 50.

Bahr R, Holme I. (2003) Risk factors for sports injuries: a methodological approach. Br J Sports Med 2003; 37: 384 – 92.

Clanton T O, Coupe K J. (1998) Hamstring strains in athletes: diagnosis and treatment. J Am Acad Orthop Surg 1998; 6: 237 – 48.

Connell D A, Schneider-Kolsky M E, Hoving J L, et al. (2004) Longitudinal study comparing sonographic and MRI assessments of acute and healing hamstring injuries. AJR Am J Roentgenol 2004; 183: 975 – 84.

Croisier J-L, Forthomme B, Namurois M-H, et al. (2002) Hamstring muscle strain recurrence and strength performance disorders. Am J Sports Med 2002; 30: 199 – 203

Croisier J-L. (2004) Factors associated with recurrent hamstring injuries. Sports Med 2004; 34: 681 – 95.

Devlin L . (2000) Recurrent posterior thigh symptoms detrimental to performance in rugby union: predisposing factors. Sports Med 2000; 29: 273 – 87.

Drezner J A. (2003) Practical management: hamstring muscle injuries. Clin J Sport Med 2003; 13: 48 – 52

Garrett W E Jr, Rich F R, Nikolaou P K, et al. (1989) Computer tomography of hamstring muscle strains. Med Sci Sports Exerc 1989; 21: 506 – 14.

Garrett W E Jr. (1996) Muscle strain injuries. Am J Sports Med 1996; 24 (6 suppl) : S2–8.

Hartig D E, Henderson J M. (1999) Increasing hamstring flexibility decreases lower extremity overuse in military basic trainees. Am J Sports Med 1999; 27: 173 – 6

Hawkins R D, Hulse M A, Wilkinson C, et al. (2001) The association football medical research programme: an audit of injuries in professional football. Br J Sports Med 2001; 35: 43 – 7

Kujala U M, Orava S, Jarvinen M. (1997) Hamstring injuries: current trends in treatment and prevention. Sports Med 1997; 23: 397 – 404

Malliaropoulos N, Papalexandris S, Papalada A, et al. (2004) The role of stretching in rehabilitation of hamstring injuries: 80 athletes follow-up. Med Sci Sports Exerc 2004; 36: 756 – 9.

Orchard J, Seward H. (2002) Epidemiology of injuries in the Australian Football League, season 1997 – 2000. Br J Sports Med 2002; 36: 39 – 44

Petersen J, Holmich P (2005) Evidence based prevention of hamstring injuries in sport Br. J. Sports Med. 2005; 39: 319 – 323

Sherry M A, Best T M. (2004) A comparison of 2 rehabilitation programs in the treatment of acute hamstring strains. J Orthop Sports Phys Ther 2004; 34: 116 – 25

Verrall G M, Slavotinek J P, Barnes P G, et al. (2001) Clinical risk factors for hamstring muscle strain injury: a prospective study with correlation of injury by magnetic resonance imaging. Br J Sports Med 2001; 35: 435 – 9

Witvrouw E, Danneels L, Asselman P, et al. (2003) Muscle flexibility as a risk factor for developing muscle injuries in male professional soccer players. A prospective study. Am J Sports Med 2003; 31: 41 – 6.

Woods C, Hawkins R D, Maltby S, et al. (2004) The football association medical research programme: an audit of injuries in professional football: analysis of hamstring injuries. Br J Sports Med 2004; 38: 36 – 41.

Worrell T W. (2004) Factors associated with hamstring injuries: an approach to treatment and preventative measures. Sports Med 2004; 17: 338 – 45.

Is Machiavelli a Teacher of Evil?

This work was produced by one of our professional writers as a learning aid to help you with your studies

This essay will consider whether or not Machiavelli was a teacher of evil, with specific reference to his text ‘The Prince’. It shall first be shown what it was that Machiavelli taught and how this can only be justified by consequentialism. It shall then be discussed whether consequentialism is a viable ethical theory, in order that it can justify Machiavelli’s teaching. Arguing that this is not the case, it will be concluded that Machiavelli is a teacher of evil.

To begin, it shall be shown what Machiavelli taught or suggested be adopted in order for a ruler to maintain power. To understand this, it is necessary to understand the political landscape of the period.

The Prince was published posthumously in 1532, and was intended as a guidebook for rulers of principalities. Machiavelli was born in Italy and, during that period, there were many wars between the various states which constituted Italy. These states were either republics (governed by an elected body) or principalities (governed by a monarch or single ruler). The Prince was written for, and dedicated to, Lorenzo de Medici, who was in charge of Florence which, though a republic, was run autocratically, like a principality. Machiavelli's work aimed to give Lorenzo de Medici advice on ruling as an autocratic prince (Nederman, 2014).

The ultimate objective at which Machiavelli aims in The Prince is for a prince to remain in power over his subjects. Critics who claim that Machiavelli is evil do not necessarily hold this view because of this ultimate aim, but because of the way in which Machiavelli advises achieving it. This is because, to this ultimate end, Machiavelli holds that no moral or ethical expense need be spared. This theme runs constant through the work. For example, in securing rule over the subjects of a newly acquired principality, previously ruled by another prince, Machiavelli writes:

“… to hold them securely enough is to have destroyed the family of the prince who was ruling them.” (Machiavelli, 1532: 7).

That is, in order to govern a new principality, it is necessary that the family of the previous prince be “destroyed”. Further, the expense of morality is not limited to physical acts, such as the murder advised, but deception and manipulation. An example of this is seen in that Machiavelli claims:

“Therefore it is unnecessary for a prince to have all the good qualities I have enumerated, but it is very necessary to appear to have them. And I shall dare to say this also, that to have them and always to observe them is injurious, and that to appear to have them is useful.” (Machiavelli, 1532: 81).

Here, Machiavelli is claiming that virtues are necessary to a ruler only insomuch as the ruler appears to have them. However, to act only by the virtues will be, ultimately, detrimental to the maintenance of the ruler, as they may often have to act against the virtues to quell a rebellion, for example. A prince must be able to appear just, so that he is trusted, but actually not be so, in order that he may maintain his dominance.

In all pieces of advice, Machiavelli claims that it is better to act in the way he advises, for to do otherwise would lead to worse consequences: the end of the rule. The defence which is to be made for Machiavelli, then, must come from a consequentialist viewpoint.

Consequentialist theory argues that the morality of an action is dependent upon its consequences. If the act or actions create consequences that, ultimately, are better (however that may be measured) than otherwise, the action is good. However, if a different act could, in that situation, have produced better consequences, then the action taken would be immoral.

The classic position of consequentialism is utilitarianism, first argued for by Bentham, who claimed that mankind is governed by two principles, pleasure and pain, and that it is the pursuit of the former and the avoidance of the latter that determines how we act (Bentham, 1789: 14). This is done either on an individual basis or a collective basis, depending on the situation. In the first case, the good action is the one which gives the individual the most pleasure or the least pain. In the second case, the good action is the one which gives the collective group the most pleasure or the least pain. The collective group consists of individuals, and therefore the good action will produce the most pleasure if it does so for the greatest number of people (Bentham, 1789: 15). Therefore, utilitarianism claims that an act is good iff its consequences produce the greatest amount of happiness (or pleasure) for the greatest number of people, or avoid the greatest amount of unhappiness (or pain) for the greatest number of people.

This, now outlined, can be used to defend Machiavelli’s advice. If the ultimate goal is achieved, the consequence of the prince remaining in power must cause more happiness for more of his subjects than would otherwise be the case if he lost power. Secondly, the pain and suffering caused by the prince on the subjects whom he must murder/deceive/steal from must be less than the suffering which would be caused should he lose power. If these two criteria can be satisfied, then consequentialism may justify Machiavelli.

Further, it is practically possible that such a set of circumstances could arise; it is conceivable that the suffering would be less should the prince remain in power. Italy, as stated, was at that time in turmoil, and many wars were being fought. A prince remaining in power would secure internal peace for a principality and its subjects, whereas a prince who lost power would leave the land open to attacks, and there would be greater suffering for the majority of the populace. On the subject, Machiavelli writes:

“As there cannot be good laws where the state is not well armed, it follows that where they are well armed they have good laws.” (Machiavelli, 1532: 55)

This highlights the turmoil of the world at that time, and the importance of power, both military and lawful, for peace. Machiavelli, in searching for the ultimate end for the prince retaining his power, would also secure internal peace and defence of the principality. This would therefore mean that there would be less destruction and suffering for the people.

Defended by consequentialism, the claim that Machiavelli is evil becomes an argument against this moral theory. The criticisms against consequentialism are manifold. A first major concern against consequentialism is that it justifies actions which seem to be intuitively wrong, such as murder or torture, on not just an individual basis, but on a mass scale. Take the following example: in a war situation, the only way to save a million and a half soldiers is to kill a million civilians. Consequentialism justifies killing the million civilians as the suffering will be less than if a million and a half soldiers were to die. If consequentialism must be used in order to justify Machiavelli’s teachings, it must therefore be admitted that this act of mass murder, in the hypothetical situation, would also be justified.

A second major concern is that it uses people as means rather than as ends, something which seems intuitively incorrect, as evidenced by the trolley problem. The trolley problem runs as follows: a train, out of control, is heading towards five workers on the track. The driver has the opportunity to switch to another track, on which there is a single worker. Thomson argues it would be “morally permissible” to change track and kill the one (Thomson, 1985: 1395). The consequentialist, however, must state that “morality requires you” to change track (Thomson, 1985: 1395), as there is less suffering in one dying than in five. The difference between these two stances – permission and requirement – should be noted.

Thomson then provides another situation: the transplant problem. A surgeon is able to transplant any body part to another person without failure. In the hospital where the surgeon works, five people each need a single organ, without which they will die. Another person, visiting for a check-up, is found to be a complete match for all the transplants needed. Thomson asks whether it would be permissible for the surgeon to kill the one and distribute the organs among the five who would otherwise die (Thomson, 1985: 1395-1396). Though she claims that it would not be morally permissible to do so, those who held that changing tracks in the trolley problem was a moral requirement – the consequentialists – must also hold that murdering the one to save the five is a moral requirement, as the most positive outcome would be given to the most people.
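On a simple counting of lives (a deliberately crude sketch, assigning a utility of $-1$ to each death and ignoring every other effect), the two cases are arithmetically identical, which is why the consequentialist cannot distinguish them:

$$U_{\text{trolley}}(\text{divert}) = -1 > -5 = U_{\text{trolley}}(\text{stay}), \qquad U_{\text{transplant}}(\text{kill}) = -1 > -5 = U_{\text{transplant}}(\text{spare}).$$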

Herein lies the major concern for the consequentialist, and therefore for Machiavelli's defence: consequentialism justifies using people as means to an end, and not as ends in themselves. This criticism is famously argued by Kant, who claims that humans are rational beings, and that we do not call them “things” but “persons” (Kant, 1785: 46). Only things may permissibly be used merely as means; persons are in themselves ends (Kant, 1785: 46). To use a person merely as a means rather than as an end is to treat them as something other than a rational agent, which, Kant claims, is immoral.

This must now be applied to Machiavelli. In advising the murder and deception of others, he advocates treating people merely as means, using them to obtain the ultimate end of retaining power. Though this ultimate end may bring about greater peace, and therefore pleasure for a greater number of people, it could be argued that the peace obtained does not outweigh the immoral actions required to create it.

Further, it must also be asked whether Machiavelli's teaching is in pursuit of a prince retaining power in order to bring about peace, or in pursuit of retaining power simply so that the prince may retain power. The former may be justifiable, if consequentialism is accepted; this may not be the case for the latter, even if peace is obtained.

Machiavelli's motives will never be truly known, and this problem points to further criticisms of consequentialism, and therefore of Machiavelli himself. If he was advising the achievement of power for its own sake, the means to this end could not be justified unless the end happened to supply a consequentialist justification – and it would fail to do so if, ultimately, the prince retained power but there was not a greater amount of pleasure than would otherwise have been the case.

To pursue power in order to promote peace is perhaps justifiable. However, as is a major concern with the normative approach of consequentialism, the unpredictability of consequences can lead to unforeseen ends. The hypothetical prince may take Machiavelli’s advice, follow it to the letter, and produce one of three outcomes:

1. Power is obtained and peace is obtained.
2. Power is obtained but peace is not obtained.
3. Neither power nor peace is obtained.

Only in the first of these outcomes can there be any consequentialist justification. There are thus two possible outcomes in which there can be no such justification, and it is impossible to know in advance which outcome will obtain. This is the criticism of both Machiavelli and consequentialism: when the outcomes of actions cannot truly be known, the risk involved in acting – a risk borne by human life, wellbeing and safety – is too great, and the chance of failure leaves the actions unjustifiable. Machiavelli condones using people merely as means to an end without any guarantee that the end will supply a consequentialist justification.
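The point about risk can also be put in expected-utility terms (a sketch using labels of my own, not Machiavelli's: let $p_1, p_2, p_3$ be the probabilities of the three outcomes above, and $U_1, U_2, U_3$ their utilities, with only $U_1$ positive):

$$E[U] = p_1 U_1 + p_2 U_2 + p_3 U_3, \qquad p_1 + p_2 + p_3 = 1.$$

An ex-ante justification would require $p_1 U_1 > -(p_2 U_2 + p_3 U_3)$; but since the probabilities cannot be known, this inequality can never be established before acting.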

In conclusion, it has been briefly demonstrated what Machiavelli put forward as his teachings, and it has been shown that the only available justification for those teachings is a consequentialist one. However, the criticisms put against Machiavelli and consequentialism – the justification of mass atrocities, the use of people as means to ends, and the unpredictability of consequences in practice – show consequentialism to fail as an acceptable justification of his teachings. Therefore, it is concluded that Machiavelli is a teacher of evil.

Reference List

Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Accessed online at: http://socserv.mcmaster.ca/econ/ugcm/3ll3/bentham/morals.pdf. Last accessed on 26/09/2015.

Kant, I. (1785). Groundwork for the Metaphysics of Morals. Edited and Translated by Wood, A. (2002). New York: Vail-Ballou Press.

Machiavelli, N. (1532). The Prince. Translated by Marriott, W. K. (1908). London: David Campbell Publishers.

Nederman, C. (2012). Niccolò Machiavelli. Accessed online at: http://plato.stanford.edu/entries/machiavelli/. Last accessed on 02/10/2015.

Thomson, J. J. (1985). The Trolley Problem. The Yale Law Journal. Vol. 94, No. 6, pp. 1395-1415.