Attack Ads in US Presidential Elections


Discuss to what extent attack ads are effective within presidential election campaigns in the U.S., with a focus on the 2012 election

In the 2012 U.S. Presidential Election

Attack ads were a major part of the 2012 presidential election campaign in the U.S. In fact, the Washington Post reports that of the $404 million spent on TV ads in favour of Barack Obama, 85% ($343.4 million) went to negative ads, while of the $492 million spent on TV ads in favour of Mitt Romney, 91% ($447.72 million) went to negative ads (Andrews, Keating, & Yourish, 2012). The attack ad strategies of both candidates were very similar. In fact, the top ten U.S. states in which the candidates spent campaign funds on negative TV ads were exactly the same, with Florida, Virginia, and Ohio being the top three (Andrews, Keating, & Yourish, 2012). Given that the vast majority of money spent on TV ads went to negative ads, it is reasonable to believe that such ads must have some efficacy. In this project, scholarly research on the effectiveness of attack ads in the 2012 U.S. presidential campaign is reviewed in order to answer the question of when and under what circumstances attack ads were effective during this election.
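
To make the scale of this spending concrete, the dollar figures above can be re-derived directly from the reported totals and percentage shares. The short Python sketch below is purely illustrative (the variable names are ours) and simply reproduces the Washington Post figures cited above.

```python
# Illustrative re-derivation of the Washington Post figures cited above
# (Andrews, Keating, & Yourish, 2012): total TV ad spending per candidate
# and the share of that spending that went to negative ads.
ad_spending = {
    "Obama":  {"total_millions": 404.0, "negative_share": 0.85},
    "Romney": {"total_millions": 492.0, "negative_share": 0.91},
}

for candidate, figures in ad_spending.items():
    negative_millions = figures["total_millions"] * figures["negative_share"]
    print(f"{candidate}: ${negative_millions:.2f}M of "
          f"${figures['total_millions']:.0f}M went to negative ads")

# Expected output:
# Obama: $343.40M of $404M went to negative ads
# Romney: $447.72M of $492M went to negative ads
```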

Interest Group Involvement and Attack Ads

Recent trends in media and campaign ad funding may help explain the high number of attack ads in the 2012 U.S. presidential campaign, as well as the campaign's high ratio of negative to positive ads. While the percentage of negative ads coming directly from the candidates' campaigns increased significantly from 2008 to 2012, the majority of the increase in negative ads is attributable to the rise in campaign ads that were not funded by the candidates' campaigns (Fowler, 2012). In fact, 60% of presidential campaign ads in 2012 were funded by groups other than the candidates' own campaigns (Fowler, 2012). This is a huge shift from 2008, when 97% of ads were funded by the candidates' campaigns (Fowler, 2012). The number of ads from interest groups increased by 1,100% from 2008 to 2012, while the number of TV ads from political parties increased from zero in 2008 to almost 10,000 in 2012 (Fowler, 2012). Moreover, in 2008, ads from presidential candidates were only 9% negative, while those from interest groups were 25% negative (Fowler, 2012). These numbers changed quickly by 2012, when 53% of ads from the presidential candidates themselves were negative and 86% of ads from interest groups were negative (Fowler, 2012). The increased involvement of special interest groups in advertising campaigns only partially explains the rise in attack ads in 2012. Changes in the media landscape and the rise of social media may help explain both the increase in interest-group participation and the increase in attack ads.

Polarized Parties and Polarized Media

Several recent changes in news media may have affected not only the number of political attack ads, but also the efficacy of such ads. One major change is that news media now cover political ad campaigns far more than in the past. In fact, from 1960 to 2008, the share of political news articles and segments that covered political ads rose by over 500% (Geer, 2012). On one hand, the increased coverage of political ads may be a consequence of the increase in attack ads; after all, attack ads tend to be more controversial and 'newsworthy' than positive ads. On the other hand, the increase in attack ads may be, in part, the result of increased media coverage of negative ads. Geer (2012) argues that "news media now cover negative ads so extensively that they have given candidates and their consultants extra incentive to produce and air them" (p. 423). Whether the relationship between attack ads and media coverage is mutually reinforcing remains unclear. Nevertheless, the clear growth in both may enhance the efficacy of attack ads, given that such ads now receive more media coverage.

Even if the media's greater willingness to cover negative political ads than positive ones does encourage more attack ads, this does not necessarily make such ads more effective. Geer (2012) holds that increased media coverage of attack ads does not mean that such coverage influences voters in any particular direction; it is not typically the goal of news organizations to influence voters. Thus, while an attack ad may receive more public attention because of the media, the additional attention is not necessarily favourable or unfavourable to any candidate.

Another recent change in news media is its partisanship. Many U.S. news outlets are now partisan or are considered partisan by viewers. For example, just as Fox News is considered a conservative news organization that promotes Republican politicians over Democratic politicians, MSNBC is considered a liberal news organization (Jacobson, 2013). The polarization of the media may actually be a result of the polarization of the current two-party federal political system in the U.S. (Sides & Vavreck, 2014). In the last decade, the Democratic and Republican parties have moved further apart ideologically, resulting in substantial gridlock in Congress (Sides & Vavreck, 2014). Such disagreement and polarization may, on one hand, lead to an increase in attack ads, which may seem more effective when there is such a large ideological divide between the parties. On the other hand, political polarization has likely contributed to the polarization of news outlets (Sides & Vavreck, 2014), which, in turn, further encourages attack ads. Even with increasingly polarized parties and media outlets, attack ads may not be an effective means of swaying voters towards or away from particular candidates.

Attack Ad Rationale and Efficacy

A meta-analysis of research on the effects of political attack ads reveals that attack ads tend to be more memorable and to stimulate more knowledge about political campaigns than positive campaign ads (Lau, Sigelman, & Rovner, 2007). Despite these effects, attack ads were not found to be effective at convincing individuals either to change their votes or to turn out to vote (Lau, Sigelman, & Rovner, 2007). Moreover, the meta-analysis revealed that attack ads have significant negative effects on individuals' perceptions of the political system, trust in government, and public mood (Lau, Sigelman, & Rovner, 2007).

A more recent study by Fridkin and Kenney (2011) found that in some cases attack ads can be effective at lowering voter evaluations of targeted candidates. However, Fridkin and Kenney (2011) also found that in certain circumstances attack ads lower voter evaluations of the attacking candidates. For an attack ad to be effective, the researchers found, it must raise a relevant issue that is reinforced with fact, or must present the opposing candidate as being uncivil in some significant way. Otherwise, the attack ad may have no effect, or even a negative effect, on voters. Additionally, Fridkin and Kenney (2011) found that the effects of attack ads on voter evaluations of candidates tend to be very small.

Social Media and Attack Ads

The rise of social media has dramatically changed the political advertising landscape. The 2012 presidential campaign featured another strong social media showing by President Obama, who had outspent every other candidate on social media advertising in his successful 2008 presidential run (West, 2013). Social media allowed Obama to reach key demographics much more effectively than general television commercials did (West, 2013). Social media allows candidates to communicate a larger number of messages and to aim specific messages at target audiences effectively (West, 2013). This is extremely important at a time when there are so many issues of disagreement between the two major U.S. political parties and when transparency is highly valued (West, 2013). Social media outlets serve as a significant platform for all political ads and their content, altering the ways in which we tend to think about politics and the media.

Another important aspect of social media and attack ads is that social media acts as a platform for public discussion of attack ads. Just as the news media tend to cover attack ads more than positive political ads, members of social media sites tend to discuss attack ads more openly than positive political ads (Hong & Nadler, 2012). Thus, the rise of social media may have further encouraged the use of attack ads during the 2012 U.S. presidential election. Even so, as with news media, there is no significant evidence that the increased attention generated by attack ads alters voter behaviour or attitudes (Hong & Nadler, 2012). As a result, the effectiveness of attack ads cannot be confirmed.

A Deeper Look into the 2012 Election and its Attack Ads

In the 2012 presidential election, Mitt Romney spent significantly more on attack ads than Barack Obama (Andrews, Keating, & Yourish, 2012). Moreover, a greater proportion of Romney's television ads were attack ads (Andrews, Keating, & Yourish, 2012). Nevertheless, Obama won the election, as well as the popular vote. The results of the 2012 presidential election, however, do not by themselves suggest that attack ads are ineffective. Incumbent candidates are more likely to win elections, including presidential elections, in the U.S. than non-incumbents (Sides & Vavreck, 2014). Thus, the efficacy of the attack ads used by either candidate cannot be determined from the outcome of the election alone.

Of the six most memorable ads of the 2012 U.S. presidential election, West (2013) argues, five were attack ads. The first is an attack ad from Obama about Romney's Swiss bank account. This ad may have been effective with moderate voters because it singled Romney out as having a major interest in big business rather than in improving the lot of the middle class (West, 2013). Additionally, the ad had high relevance to a real issue, which meets Fridkin and Kenney's (2011) criteria for an ad that may be effective at reducing a candidate's favourability. The second ad is from Romney and targeted Obama's failure to bring unemployment down to acceptable levels (West, 2013). This ad targeted a real issue while offering a positive counterpoint: that Romney had the business experience to create jobs as President. The third attack ad, also from Romney, claimed that Obama's recent tax plan would raise taxes on the middle class (West, 2013). This can be viewed as a direct rebuttal to Obama's attack ad and consequently addresses a real and relevant topic.

The fourth memorable attack ad of the campaign came from American Crossroads, a Super Political Action Committee (Super PAC), and targeted Obama's celebrity status (West, 2013). This attack fails to address any real issue and thus, under the Fridkin and Kenney (2011) criteria, should not be expected to influence voter favourability toward Obama. Finally, the Priorities USA Super PAC targeted Romney's record at Bain Capital, again suggesting that Romney had the interests of the upper class, rather than the middle class, in mind. This attack ad addresses a highly relevant issue.

For the most part, the attack ads of the 2012 U.S. Presidential Election were likely somewhat effective in decreasing voter favourability. While there is no strong evidence that attack ads actually sway voter decisions or voter turnout (Lau, Sigelman, & Rovner, 2007), there is evidence that attack ads can decrease voters' favourability toward a candidate when the ads address a relevant issue (Fridkin & Kenney, 2011). Moreover, attack ads tend to generate considerably more media attention than positive political ads. While this may seem, prima facie, to benefit candidates who run attack ads, there is no evidence that such media coverage influences voter behaviour. Thus, the logic behind one of the primary rationales for attack ads may be flawed. Nevertheless, the 2012 U.S. Presidential Election featured a number of attack ads, many of which were on-topic and relevant, while others were off-topic and irrelevant. The actual effectiveness of these attack ads is not currently known, though at the very least they likely increased media coverage of the targeted candidates.

References

Andrews, W., Keating, D., & Yourish, K. (2012) Mad Money: TV Ads in the 2012 Presidential Campaign. The Washington Post. Accessed on 15 October 2015 from: http://www.washingtonpost.com/wp-srv/special/politics/track-presidential-campaign-ads-2012/

Fridkin, K. L., & Kenney, P. (2011) Variability in Citizens’ Reactions to Different Types of Negative Campaigns. American Journal of Political Science, 55(2), pp.307-325.

Fowler, E. F. (2012) Presidential Ads 70 Percent Negative in 2012, Up from 9 Percent in 2008. Wesleyan Media Project, May 2, pp.119-136.

Geer, J. G. (2012) The News Media and the Rise of Negativity in Presidential Campaigns. PS: Political Science & Politics, 45(03), pp.422-427.

Hong, S., & Nadler, D. (2012) Which candidates do the public discuss online in an election campaign?: The use of social media by 2012 presidential candidates and its impact on candidate salience. Government Information Quarterly, 29(4), pp.455-461.

Jacobson, G. C. (2013) How the Economy and Partisanship Shaped the 2012 Presidential and Congressional Elections. Political Science Quarterly, 128(1), pp.1-38.

Lau, R. R., Sigelman, L., & Rovner, I. B. (2007) The Effects of Negative Political Campaigns: A Meta-analytic Reassessment. Journal of Politics, 69(4), pp.1176-1209.

Sides, J., & Vavreck, L. (2014) The Gamble: Choice and Chance in the 2012 Presidential Election. Princeton University Press.

West, D. M. (2013) Air Wars: Television Advertising and Social Media in Election Campaigns, 1952-2012. Sage.

Oxidative Stress in Human Brain Ageing


The human brain is the main source of nerve function in the body. It is the epicentre of the nervous system and controls all of the main neural functions of the human body (Lewis et al, 1998, 479-483). When assessing brain function, there are many different areas to address, but one main area of concern is the aging of the brain itself. As the brain ages, the functions that it performs break down and degrade. The nerves become slower and motor functions become less precise. Short-term and long-term memory are negatively affected, and overall brain function declines.

Many people attribute all of these detrimental effects simply to old age and poor health, when in reality oxidative stress and free radicals are among the main causes of loss of brain function. Throughout this paper, patterns of brain function will be examined, followed by some common reasons for the degradation of brain function. Oxidative stress and its effects on the human brain will then be considered, along with a few of the common diseases and health problems associated with brain aging and loss of brain function.

The Brain: An Overview

The human brain is a mass of nerve tissue, nerve cells, and synaptic connections (Lewis et al, 1998, 479-483). All of these parts work together to form what is known as the human brain, the main centre of nerve function in the body. The nervous system is controlled by the brain, which works as a kind of packaging centre for the messages delivered to each nerve cell of the body. However, the brain would not function properly were it not for the job performed by each nerve cell and its constituent parts. A neuron is made up of the cell body, its axon and dendrites, and the synapses it forms with other cells. Neurons are connected to one another across small gaps (synapses) that allow the passage of ions such as potassium and sodium, which are required for proper neural functioning. These ions move along the neural pathway and form a gradient at the synaptic gap. The gradient then allows the chemical signal to pass across the gap, which causes the nerve to deliver its message (often a message for a muscle to contract). If a gradient does not exist, the message is not sent and the function is not performed properly. When a problem arises in the nervous system, it is often because the chemical gradient at a particular synaptic gap is incorrect, creating a muscle seizure or some other undesirable reaction.

The main nerve cord of the body, known as the spinal cord, is made up of layer upon layer of nerve cells. This mass of nerves serves as the pathway for all of the major neural messages of the body. It allows the chemical messages packaged by the brain to be transported to various parts of the body, and vice versa. All of the neural messages of the human body are delivered in a fraction of a second, which is why there does not seem to be a long delay between a particular stimulus and the consequent reaction. Branching out from the spinal cord are the various nerve pathways of the body. There are nerves that stretch all the way to the fingertips and toes, but they all return to the spinal cord to deliver their stimulus messages. Each of these nerve pathways is also made up of layers of nerve cells. All of the nerve cells of the body work together to form messages that are interpreted by the brain. The brain decides what priority to assign to each task and then acts to carry those tasks out.

Brain Function

The brain has three main functions: memory, interpretation of data, and control of motor function. Not only is the brain a packaging and interpretation centre for the neural messages of the human body, it is also a storage bank for information. The brain stores information from everyday life using chemical changes in the cerebrum to create memories. This information is then available for the rest of the brain's life, regardless of whether a person can actually call it up to examine it.

The brain serves its main purpose of data interpretation by deciphering the messages and stimulus information that the human body encounters every day. Every piece of information that the body comes into contact with is sent through the brain, which either stores the information, triggers a reaction to the stimulus, or disregards it. This interpretation process is very exact, yet extremely fast. The entire process seems instantaneous, from the arrival of the information all the way to the interpretation and the resulting reaction.

Finally, the brain controls all of the muscles of the body and, consequently, all motor control. Every movement, be it voluntary or involuntary, is controlled by the brain. Each muscle action is coordinated and timed so that the abducting muscles work together with the adducting muscles to produce useful movement. The brain coordinates each twitch of every muscle in the musculature so that no energy is wasted in useless movement. Because the body is constantly in a delicate balance, the brain must be even more precise than the world's most sophisticated computer when dealing with the body's homeostasis. The body has many involuntary muscle movements that are necessary for life but need not be consciously thought about each time they are performed; examples include the contraction and relaxation of the diaphragm to allow respiration and the beating of the heart. Other muscles and functions are also controlled by the brain, such as the movements involved in walking, swimming, or running. The contraction of the bladder and other voluntary, yet largely unthought-of, muscle contractions are also controlled by the brain.

Stressors of the Brain

In every cell of the body there are what are known as redox reactions (OXIS Research, 2003, 2). A redox reaction is an oxidation-reduction chemical reaction in which one compound is oxidized (loses electrons) and another compound is reduced (gains electrons) (Zumdahl, 1991, 216-220). Redox reactions are essential for survival and for the proper function of various organ systems in the body.
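
As a standard textbook illustration of this definition (an example added here for clarity, not taken from the cited sources), the reaction between zinc metal and copper(II) ions can be written as an oxidation half-reaction paired with a reduction half-reaction:

```latex
% A standard textbook redox example (added for illustration, not from the
% cited sources): zinc is oxidized (loses electrons) while copper(II) ions
% are reduced (gain electrons).
\begin{align*}
\text{Oxidation:} \quad & \mathrm{Zn \rightarrow Zn^{2+} + 2e^{-}} \\
\text{Reduction:} \quad & \mathrm{Cu^{2+} + 2e^{-} \rightarrow Cu} \\
\text{Overall:}   \quad & \mathrm{Zn + Cu^{2+} \rightarrow Zn^{2+} + Cu}
\end{align*}
```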

While redox reactions may be essential for survival, they can produce what are known as free radicals (OXIS Research, 2003, 2). A free radical is defined as any chemical species capable of independent existence on its own, without needing to be bound to other chemicals (OXIS Research, 2003, 2). Free radicals contain unpaired electrons, which make them very unstable (OXIS Research, 2003, 2). The unpaired electrons tend to pair with any other available electrons in order to achieve a stable outer electron shell (usually eight electrons). The unstable free radicals are therefore always trying to react with any organic chemicals they come into contact with. Free radicals can be increased in the body by exercise and environmental stresses. They tend to be stored in the fat cells of the body and are released when fat is burned. The free radicals are then spread throughout the body, where they can react with other organic substrates (OXIS Research, 2003, 1). These organic substrates include DNA and various proteins (OXIS Research, 2003, 1). The oxidation of these molecules can damage them and cause a great number of diseases (OXIS Research, 2003, 1).

Several organ systems are predisposed to free radical damage, including the circulatory system, the pulmonary system, the eye, the reproductive system, and the brain (OXIS Research, 2003, 2). While an oxidative-stress Achilles' heel could be found in every organ system, the brain is especially susceptible to free radical damage (OXIS Research, 2003, 2). Oxidative stress is the term used for a build-up of ROS chemicals (OXIS Research, 2003, 2). ROS stands for reactive oxygen species and refers to a range of chemically reactive oxygen derivatives (OXIS Research, 2003, 2). The build-up of these chemicals can cause an imbalance of oxidant activity in a system (e.g. the brain) and can lead to several negative health effects, including premature aging of the system and any number of diseases (OXIS Research, 2003, 2).

The oxidative reactions that take place in the body, and especially in the brain, are regulated by a system known as the Antioxidant Defence System, or ADS for short (OXIS Research, 2003, 2). This system is a conglomerate of many different approaches to keeping the production and accumulation of free radicals to a minimum in the body. The ADS contains antioxidant chemicals as well as a number of enzymes that not only limit and control the overall rate of oxidative reactions, but also target damaged molecules for replacement or repair (OXIS Research, 2003, 2). The antioxidants themselves are either synthesized internally or ingested by the organism via various fruits, vegetables, and grains (OXIS Research, 2003, 2). Antioxidants fall into two categories: scavenger antioxidants and prevention antioxidants (OXIS Research, 2003, 2). Scavenger antioxidants remove ROS molecules from the body and include both small antioxidants (such as vitamin C and glutathione) and large antioxidants that must be synthesized by cells before they can protect the organ systems (OXIS Research, 2003, 2). Prevention antioxidants such as ferritin and myoglobin are designed to prevent the formation of new oxidants and free radicals (OXIS Research, 2003, 2). They work by binding species that would otherwise give rise to free radicals, protecting the proteins that are essential to the organ system (OXIS Research, 2003, 2). This group includes chemicals such as metallothionein, albumin, and transferrin (OXIS Research, 2003, 2).

It is obvious that free radicals are at least a necessary evil when it comes to the completion of certain bodily processes. For the various life systems of the human body to function properly, it is necessary to have the by-products of those processes (generally free radicals) present in the system. However, this does not mean that free radicals are safe or needed in excess. Most of the time the body's removal systems (the ADS and others) will take care of any overabundance of free radicals; at times, however, even the ADS can be overpowered by a great influx of free radicals. This influx can be due to energy production by the mitochondria or some other natural process, but in most cases it is caused either by environmental stresses or by proximity to various industrial processes. It is a great concern of researchers today that more free radicals are being released into the environment by industrial activities and other forms of pollution. These free radicals readily bind to various food products produced by humans and have detrimental health effects on both animals and humans. If more free radicals are present in the environment than in the past, there is a high risk of ingesting enough oxidants to produce an imbalance that the ADS cannot handle. This would then result in a large epidemic of environmentally caused free radical damage and disease.

Degradation and the Effects on Brain Function

Given the importance of brain function to the body, it is clear why it is imperative that the brain be kept in good working order, so to speak. If the brain is allowed to degrade to the point that motor functions and memory are affected, there could be long-term health effects that cause problems beyond brain functioning alone. If the brain degrades to a point at which everyday muscular and other physiological functions become harder to perform, then more serious side effects may be on the horizon. Certain diseases are caused by brain degradation or are themselves causal factors in brain aging and degradation. One such disease is Alzheimer's disease.

Alzheimer's disease is a brain disorder that has many symptoms and causes the loss of memory, the ability to learn, and the ability to carry out everyday activities. Towards the end of the disease's progression, Alzheimer's can cause personality changes and even hallucinations and paranoia (Alzheimer's Association, 2005, 2). Alzheimer's is a form of dementia, a category of diseases that cause the systematic destruction of brain cells and lead to a decline in brain function and quality (Alzheimer's Association, 2005, 2). It has many stages and eventually leads to the complete breakdown of the brain, to the point of death (Alzheimer's Association, 2005, 2). A person with a dementia-type disease will eventually need full-time care because of the loss of a large portion of brain function (Alzheimer's Association, 2005, 2). While Alzheimer's and dementia are not the only neural disorders that have a progressive effect on brain function, they are two of the main problems faced in countries such as the United States and England. Researchers have not yet identified a definitive cause of Alzheimer's disease; however, the field has made great strides in the past few years. At present, the disease is linked to genetic predisposition and to generally poor habits in aging (Alzheimer's Association, 2005, 2). There is still some value to the school of thought, adopted by a few doctors, that diseases like Alzheimer's, dementia, and Parkinson's disease are due not only to genetic factors but also to environmental stresses, which would include the introduction of free radicals into the body. Free radicals can greatly disrupt brain function, mainly because the neurotransmitters and neurons present in the brain are very delicate and easily destroyed. Free radicals can bind to the various proteins used to transmit messages and perform repairs in brain tissue, preventing them from performing their duties and causing a weakened brain state. Proteins are themselves very specific in their binding properties and will only function correctly if they bind with the correct substrate (Staines et al, 1993, 130). Therefore, if the active site of a protein is disrupted by a free radical, that protein is completely changed and will not perform as it was intended.

Brain Aging: An Uphill Battle

Many diseases are linked to free radicals and other types of oxidants; however, another factor needs to be examined to get the entire picture concerning brain function and memory. This factor is, of course, brain aging. It is what some call an unfortunate fact of life, but we all grow older. From the time of our birth all the way to our death, our body is in a constant state of degradation and repair (Ebbing and Gammon, 2002, 809). This is true for every part of the body, including the brain, and it carries great consequences for overall brain function and health. The brain is a delicate organ that stores the information that runs the rest of the body's functions. If it is allowed to age past a certain point and is not in good health, then bodily functions and memory can be detrimentally affected. As the brain ages, it becomes slightly more sluggish and tends to lose its edge, so to speak. Because of the complexity of the brain itself, aging tends to have a harsh effect on its ability to function correctly. The aging of the brain is often a major factor in the development of dementia and other diseases of the nervous system. The older the brain, the less well it functions. As of now, there is no particular treatment or cure for dementia. The best that can be done is to make the patient comfortable and to try to make everyday life as easy as possible. It is the hope of researchers that, by forging new paths in the field of neural aging, a cure will be found for diseases such as dementia and Alzheimer's.

For years it was common practice to believe that brain and neural diseases were caused either by environmental stresses or by brain aging. Today, however, the tide is moving towards the middle rather than to either extreme. Researchers are starting to realize that the environment as well as brain aging could both be factors in the development of certain diseases and disorders. Not only can environmental factors and the age of the brain work together to cause stress on the brain, but some environmental factors can actually cause the brain to age prematurely. This premature aging is, in effect, worse than the natural aging process of the human body itself. Premature aging means that the brain is aging faster than it would naturally; in other words, a brain that should only be five years old would look and function as if it were ten years old or older. The implications of this type of aging process are obvious. As the brain ages, neurons and neurotransmitters die or do not function as well as when the brain was younger, leading to memory loss and slower reaction times.

Brain aging is caused by many factors, including environmental factors, industrial processes, and of course the passage of time. Two of these factors can be regulated: environmental factors and industrial processes. By regulating certain chemicals and industrial processes, it is possible to cut down on the amount of premature aging that occurs in the brain (Sharon, 1998, 167). Certain industrial processes, such as the metallurgical processes used in alloy formation and welding, are known causes of brain degradation and causal factors in diseases such as Parkinson's and manganism (Landis and Yu, 1999, 213-217). Certain chemicals present in these processes are able to penetrate the blood-brain barrier and contact the brain tissue directly. This can lead to tumours and neuron death, which then cause cognitive problems as well as problems with bodily function. The only reliable way to prevent such contamination is to avoid contact with these chemicals altogether. Researchers know this, which is why environmental laws are being put in place to prevent the release of these chemicals.

Aging of the brain occurs whether or not there are external environmental stressors present in a person's surroundings; it occurs throughout the entire lifespan of the organism. Earlier in history it was believed that the aging of the brain caused the degradation of neurons no matter the circumstances; however, it is the common belief today that, as long as a few guidelines concerning lifestyle choices are followed, it is possible for the neurons of the brain to stay healthy and fully functional all the way until death. Brain aging is defined as the breakdown of the brain itself: the grooves in the brain tissue grow wider and the actual weight of the brain material decreases dramatically. New studies are showing that the plaques and neural tangles previously believed to be the culprits of Alzheimer's disease may not be the main disease-causing factors after all (Brady et al, 2000, 864). There is a growing school of thought that the actual cause of dementia-type diseases is the result of complex chemical reactions in the brain (Brady et al, 2000, 864). This information is very important to neural researchers because it can completely change the focus of their research and, hopefully, eventually lead to a cure for dementia and other diseases of this type.

Conclusions

It is apparent that the aging of the brain is a major concern, especially to researchers studying the effects of specific kinds of neural diseases. These diseases are believed to have a myriad of causes, but brain aging may be a contributing factor in several or all of them. The overall aging of the brain is coming to the forefront of modern medicine precisely because not much is known about it. It is becoming evident that what were previously thought to be facts concerning brain aging were little more than educated guesses. Now, however, the technology is available to study the brain and its functions directly and to give a better picture of the breakdown of the organ. Once a specific timeline is established that shows the breakdown of a healthy brain, it will be possible to quantitatively measure the degradation of a diseased brain. While this may not seem very important, it is actually very useful information. It can be used to explain to patients what they should expect to experience at specific stages of their disease and could help prepare them for what is to come.

Brain aging information can also be of use to the doctors administering treatment, inasmuch as it would allow the doctor to determine what stage the aging has reached and, therefore, what type of treatment to administer.

Oxidative brain stress is a quite different matter from brain aging as far as research is concerned. While it is true that more is known about free radicals and their effects on the brain than about the aging process, it is important to understand why research of this kind needs to continue. The world is constantly changing, and the chemicals and pollutants that are released are continually evolving. Because of this, it is necessary to continually study the physiological and biological effects of each new chemical that is developed and put onto the market. By performing this kind of research early in the development process, it is possible to determine whether a new chemical has any harmful effects. Such preliminary research could lead to less disease and fewer health problems later on.

Overall, the study of oxidative stress and brain aging is a newly emerging field tasked with answering age-old questions about brain and neural health. It is important to continue research in both of these areas so that advances in modern medicine can be pursued. Society owes a great debt to the researchers who have spent, and will spend, their entire careers studying the effects of brain aging and oxidative stress on the functioning of the brain. Hopefully, in the near future, great advances will be made in the field of neural medicine, allowing better and more effective treatment of certain nervous system diseases.

Plant Physiology Essay


Introduction

In the study of general biology, a number of fields such as plant anatomy, plant taxonomy, plant physiology, comparative ecosystems, comparative animal physiology, neurophysiology, physiological ecology, endocrinology, and principles of electronic instrumentation may be topics of interest.

In this paper, the writer will discuss plant physiology. The paper defines plant physiology from different perspectives, notes related fields of study that complement or overlap with the topic, and explains the branches (or specific study areas) of the topic, giving examples of what is studied within each subtopic. A general conclusion is given at the end of the presentation, and a list of references for the topics discussed is provided at the end of the paper.

Definition of plant physiology

Physiology has been defined as 'the science of the normal functions and phenomena of living things'. The current understanding of physiology stems from work in Europe during the Renaissance, when interest in experimentation on animals grew. William Harvey (1628), physician to Charles I, described the working of the heart in a sparkling analysis in which careful observation led to experimental proof of function, illustrating the importance of this kind of physiological analysis.

Physiology is based on testing hypotheses about the function of living phenomena. Harvey's work also emphasized the natural relationship between physiology and anatomy (the structure of living things), which makes the former easier to understand. The successive meanings of 'physiology' are illustrated by instances of its use.

Harris (1704), in the Lexicon Technicum, describes physiology as the 'part of medicine that teaches the constitution of the body so far as it is sound, or in its Natural State; and endeavors to find Reasons for its Functions and Operations, by the Help of Anatomy and Natural Philosophy'. Another definition, by Huxley 150 years later, is clearer and closer to the current one: 'whereas that part of biological science which deals with form and structure is called Morphology; that which concerns itself with function is Physiology', making a distinction between structure and function in living organisms.

From the foregoing, plant physiology can be described as the branch of study that deals with the functioning of plants, both microscopically and macroscopically. It seeks to understand how plant life functions both within the plant itself and in relation to its immediate environment.

The field of plant physiology relates closely to cell morphology, which studies the development, formation, and structure of different plant species; to ecology, which studies plant habitats; to biochemistry, which covers the biochemical activities of cells; and to the molecular processes inside the cell. All of these fields interact or overlap with the study of plant physiology.

The general field of plant physiology involves the study of the physical and chemical processes that underlie life, and elucidates how they occur in plants. The study operates at many levels, encompassing various scales of time and size. The smallest scale is that of molecular interactions, which include the reactions of photosynthesis in the leaves and the diffusion of water in cells.

Diffusion also occurs for minerals and nutrients within the plant. At the larger scale are concepts of plant development, dormancy, seasonality, and reproduction. Other major disciplines within plant physiology include phytopathology, which studies diseases in plants, and the study of the biochemistry of plants, also called phytochemistry. Plant physiology as a whole is divided into many areas of research.

Elementary study of plant physiology mainly concentrates on stomatal function, circadian rhythms, transpiration, respiration, environmental stress physiology, hormone function, photosynthesis, tropisms, photoperiodism, nastic movements, seed germination, dormancy, photomorphogenesis, and plant nutrition.

Branches of Plant Physiology

The subtopics of plant physiology can usefully be grouped into phytochemistry (the biological and chemical processes of the plant); the interaction of cells, tissues, and organs within the plant; the control and regulation of internal functions; and the response to external conditions and environmental changes (environmental physiology). In the following sections, these branches of physiology are discussed in detail.

Phytochemistry refers to the chemical processes that take place within and around the plant cell. Plants are considered unique in their chemistry because, unlike animals and other organisms, they must produce the chemical compounds they use within the plant itself. These chemicals take the form of pigments or enzymes used directly within the plant.

The functions of these chemicals are varied. They may be used for defence against herbivores and other primary consumers, and against pathogens. This mechanism is well developed in plants because they are immobile; plants achieve it through the production of tissue toxins and foul smells.

Toxicity in plants is associated with plant alkaloids, which have pharmacological effects on animals. The Christmas poinsettia, if eaten by dogs, can poison them. Another plant, wolf's bane (of the genus Aconitum; Aconitum carmichaelii), contains in its fresh form the toxic alkaloid aconite, which is known to kill wolves and causes tingling, nausea, numbness of the tongue, or vomiting if tasted. Some other plants have secretions or chemical compounds that make them less digestible to animals.

Plants also produce toxins to repel invasion from other plants, or in instances of competition for the same nutrients. They produce repellent secretions, thereby maintaining control over contested resources. The foul smell exhibited by some plants helps to keep herbivores away. The rafflesia (Rafflesia arnoldii), of the division Magnoliophyta, has flowers with the distinctive smell of rotting animal flesh, which keeps away herbivores that do not eat flesh.

Toxins or smells can also be produced to guard against encroachment by disease-causing organisms, or to protect the plant from the effects of drought or unfavourable weather. Enzyme and hormone secretion also underlies behaviours such as the preparation of seeds for dormancy, the shedding of leaves by deciduous trees in preparation for dry conditions, and withering in some plants, all of which are driven by chemical reactions within the plant.

Innate immune systems such as those of plants are known to repel pathogenic invasions. In one experiment, a small protein secreted by strains of a fungus allowed it to overcome two of a tomato's disease-resistance genes. A third resistance gene, however, targeted this suppressor protein, making the tomato plant fully immune to any fungal strain that produced the protein. With the right combination of resistance genes, tomatoes can overcome fungal invasion despite the fungus's molecular tricks.

The attraction of potential pollinators, for the furtherance of the species, also relies on the plant's chemistry. Some plants, during their reproductive cycles, are known to produce very pleasant smells to attract insects, which then help in pollination. An example is the night rose (Aloysia triphylla), whose scent attracts insects that gain nectar and, in turn, help pollinate its flowers.

Phytochemistry involves understanding the metabolic actions of compounds within plant cells. Studies of these metabolic compounds have succeeded through the use of extraction techniques, isolation processes, structural elucidation, and chromatography. Modern approaches are numerous and continue to expand the field for further study.

Plant cells differ greatly from the cells of other organisms, and this necessitates different behaviour in order to perform their functions. Plant cells have rigid cell walls that restrict their shape, whereas animal cells are bounded only by cell membranes. This is primarily responsible for plants' immobility and limited flexibility. The internal cell structures vary according to the specializations required for the plant to adapt to its way of life.

For example, the cell vacuole is responsible for the storage of food material, for intracellular digestion, and for the storage and discharge of cell waste. It also protects the cell and is fundamental to processes such as the regulation of the cell's turgor pressure in response to fluid uptake.

The chloroplast is responsible for photosynthesis within the cell; it contains the pigments that drive photosynthesis and stores the sugars it produces, making it the manufacturer of food for the other organelles. Ribosomes use genetic instructions from ribonucleic acid (RNA) to link amino acids into long polypeptide chains, forming proteins. These plant proteins are very important in plant structures.

Golgi complexes store, package, and distribute proteins received from the cell's endoplasmic reticulum. The smooth endoplasmic reticulum synthesizes lipids, while the rough endoplasmic reticulum synthesizes proteins.

Plastids are found in the cytoplasm and are surrounded by double membranes; their development depends on the environmental conditions experienced by the parent plant and the plant's adjustment to those conditions. They store molecules such as the pigments that give flowers and fruits their characteristic colours during plant reproduction, and they also store photosynthetic products.

The plant cell contains chlorophyll, the pigment responsible for the manufacture of the plant's own food. The cell's physiology is such that the adaptations of the different internal organelles are commensurate with the plant's ability to live in a given environment. The cell structure thus plays a major role in plant adaptation.

Plant cells are the smallest units from which the specialized systems of plant life are built. Cells make up tissues that specialize in particular plant functions, and tissues coordinate to form organs within the plant that respond to environmental needs as required.

The specialization of different types of plant cells, such as parenchyma cells, collenchyma cells, and sclerenchyma cells, makes it possible for a plant to coordinate its functions in its habitat. Parenchyma cells are divided into storage cells, which store cell material; photosynthetic chlorenchyma cells, which are adapted to photosynthesis; and transfer cells, which are responsible for phloem loading and the transfer of manufactured food within the plant. These cells have thin cell walls that allow the passage of material from cell to cell.

Collenchyma cells also have only a thin cell wall and mature from the meristems of plant tissues. Sclerenchyma cells comprise strong sclereids and fibres made of lignin, which provide mechanical support to the plant. This rigidity has also proved valuable in discouraging herbivory.

The tissue systems include the vascular tissues, the xylem and the phloem, as well as the epidermal cells on the outside of the plant. The xylem is made up of cells specialized in the uptake and transport of water and minerals, partly by active transport. The phloem is composed largely of transfer cells. The epidermal cells are rigid and covered by a cuticle to prevent the loss of fluid and to protect the weaker inner cells.

All these systems perform different functions within the plant, both chemical and physical. For example, the roots and rhizoids help to hold the plant in position so that it can produce its food. In land plants, the roots have penetrative power, while aquatic plants have roots that help hold them in place for mineral acquisition.

The leaves are adapted to trap the sunlight that drives the photosynthetic process of making food, and leaf structure is adapted to the habitat of the plant. The position of the stomata on the upper or lower surface of the leaf, for example, helps regulate the flow of gases, and the specialized guard cells that open and close the stomata show how closely specialization matches the function of cells, tissues, and organs.

Plants also possess transport systems that rely on physical processes for the absorption and use of nutrients, air, and water within and around the plant. The absorption of minerals depends on a combination of diffusion and active transport regulated by the plant in its environment, and the roots are developed to carry out this process successfully.

Higher up in the plant, the movement of minerals and water relies on a well-developed xylem system that uses osmosis, diffusion, and even active transport in tissues specially adapted to this function. The phloem system carries manufactured food from the leaves and stems to other parts of the plant body. The vascular tissues are an indication of how these forms of interaction work for the benefit of the plant.

Plants have internally developed mechanisms that coordinate their responses. These mechanisms are built on hormonal systems that are instrumental in the development and maturation of the plant. Examples of hormonal coordination in plants include reproduction in flowering plants, the ripening of fruits and their subsequent release from the mother plant, and the loss of leaves in response to impending drought or a shortage of water, to mention just a few.

The ripening of fruit results from chemical changes within the fruit; the sugar content (commonly measured in degrees Brix) and acidity indicate the degree of ripening. A gas called ethylene is produced from methionine, an amino acid, and ethylene increases the levels of certain enzymes within the fruit's cells.

Amylases hydrolyse starch into sugars, while pectinases hydrolyse the pectin responsible for the firmness of the fruit; at the same time the green pigment is broken down and the colour turns orange, red, or yellow, depending on the plant's pigments. The ripening process is also related to the degree of pollination, such that properly pollinated fruits ripen at maturity while those not properly pollinated may be shed before maturity.

Abscission in plants is associated with the hormone ethylene; it is now believed that ethylene, and not abscisic acid as was previously thought, stimulates the process of abscission. Abscission takes the form of deciduous trees dropping leaves to conserve water, the shedding of branches for reproductive purposes, abscission after fertilization, fruit drop to conserve resources, and the dropping of damaged leaves to conserve water and maintain photosynthetic efficiency.

Paradoxically, ecological physiology is on one hand a new field of study within plant ecology, while on the other it is one of the oldest. Environmental physiology is the name favoured for the sub-discipline among plant physiologists, though it goes by other names in the applied sciences. It is more or less synonymous with ecophysiology, and overlaps with crop ecology, agronomy, and horticulture. The discipline overlaps with the field of ecology because plants respond to their surroundings.

Ecological physiologists scrutinize plant responses to physical factors such as radiation (including visible light and ultraviolet radiation from the sun), fire, wind, and temperature. Of particular interest are water relations and the stresses of water deficiency or inundation, the exchange of gases with the ambient air, and the cycling of nitrogen and carbon nutrients.

Ecological physiologists also analyse and examine plant responses to biotic factors. This includes not only unfavourable interactions, such as competition, parasitism, disease, and herbivory, but also favourable interactions, such as pollination, symbiosis, and mutualism.

Plants react to environmental changes in remarkable ways. These reactions are comparable to the homeostatic processes so elegantly displayed in animals. Environmental changes may affect plants either positively or negatively, and plants have developed systems to adjust appropriately. It is, however, important to note that environmental variations may sometimes be too extreme for plants to cope with, leading to their demise or possible extinction. This is better understood through topics such as evolution or, more specifically, ecological succession.

Plants respond to stress from the loss of water in their habitats. Since they are usually stationary, the water usually has to find the plant and not vice versa. An example is the wilting process associated with non-woody plants or the non-woody parts of woody plants. Wilting is a loss of turgidity in the non-lignified cells of the plant, such that the plant loses rigidity; it results from inadequate water. The process modifies the effective photosynthetic area of the leaf by changing the angle at which the leaf is exposed to the sun, favouring more erect (erectophile) leaf positions.

This condition may result from drought, reduced soil moisture, increased salinity, saturated soils, or a blockage of the plant's vascular tissues by bacteria or fungi, which clogs them and deprives the leaves of water.

Changes in the composition of the air are another determinant of a plant's reaction to its environment. The greatest effect comes from the amount of water vapour in the air: the humidity of the air influences the rate of photosynthesis. Wind also plays a major role in modulating the rate of photosynthesis. Some substances are toxic to photosynthetic plants, and these too trigger varied responses.

Plants respond both to directional stimuli, such as gravity or sunlight, and to non-directional stimuli, such as humidity or temperature. A response to a directional stimulus is called a 'tropism', while a response to a non-directional stimulus is called a nastic movement.

Tropisms in plants result from differential cell growth, in which the cells on one side of the plant become longer than those on the other side. This causes a bend toward the side with less growth. The most common tropisms in plants include phototropism, a bend towards the side from which light comes. This allows the plant to maximize its absorption of much-needed light, or to receive the associated heat from the light source.

Geotropism is the response of a plant's roots to the gravitational pull that acts on all matter. This growth is usually directed downwards, towards the earth, enabling the roots to grow in the direction of gravity. Tropism is directly influenced by hormonal communication within the plant.

Nastic movements, by contrast, are driven by changes in turgor pressure and may occur within a short period of time. A good example is thigmonasty, the touch response of a carnivorous plant such as the Venus flytrap, which reacts to touch by trapping insects that serve as its food. The mechanism is a pair of blades with thin, sensitive trigger hairs that snap shut and trap the invader instantly, providing additional nutrients. The leaf then has to regrow slowly between successive catches and reset before the next catch.

Another recent and important area of ecological physiology is the study of how plants resist or cope with disease. Plants, just like animals and other organisms, are susceptible to a host of pathogenic organisms such as bacteria, fungi, and viruses.

The morphology of plants differs from that of animals, and their reactions to disease therefore also vary greatly. A plant may respond to an invasion simply by shedding its leaves, whereas animals rely on innate immunity or tackle the intrusion through antibodies.

The disease-causing organisms that affect plants also differ from those that cause disease in animals. Because plants are immobile, they rarely spread disease through physical contact; their pathogens usually spread through spores or are transmitted by animals acting as vectors.

Plant habitat and competitive environmental conditions also necessitate readjustments. Competition for nutrients from encroaching competitors may force a plant to change its morphology or other aspects of its functioning.

Many flowering plants use a photoreceptor protein such as phytochrome or cryptochrome to sense seasonal changes in day length, which allows them to time their flowering. Broadly, such photoperiodic plants can be grouped into short-day, long-day or day-neutral plants.

A long-day plant flowers when the day extends past a critical length, so that the night is correspondingly short. These plants generally flower during spring or early summer as the days lengthen. Short-day plants flower when the day is shorter than a critical length, that is, when the night is longer than a critical length. These plants generally flower during late summer or autumn as the days shorten.

Scientists concur that it is the length of the night, not that of the day, which controls the pattern of flowering. Flowering in a long-day plant is thus triggered by short nights, which correspond to long days, while the opposite is true of short-day plants, which flower when the nights exceed a critical duration. This has been demonstrated using night-break experiments: a short-day plant kept under long nights, for instance, will not flower if a pulse of roughly ten minutes of artificial light is shone on it in the middle of the night. Natural light sources such as moonlight, fireflies or even lightning cannot produce this effect, since they are not sufficiently intense to trigger the response.

Day-neutral plants are not affected by photoperiodism; they flower regardless of day length or variations in the length of night. Some have adapted to use temperature rather than light as a cue. Long-day and short-day plants, in contrast, have their flowering enhanced or disrupted by variations in the length of day or night.

They will, however, still flower under sub-optimal day lengths, and temperature is likely to influence their flowering time. Contemporary biologists believe that it is the coincidence of the active forms of phytochrome or cryptochrome, produced by light during the day, with the rhythm of the circadian clock that enables plants to measure the duration of the night. Other examples of photoperiodism in plants include the growth of stems or roots in particular seasons and the loss of leaves in others.
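The night-length rule described above can be summarised as a simple decision procedure. The minimal sketch below illustrates it; the critical night length of ten hours is a hypothetical illustrative value, not a measurement for any particular species.

```python
# Minimal sketch of the night-length rule for flowering.
# The critical night length used here is a hypothetical illustrative value.

def will_flower(plant_type, night_hours, critical_night_hours=10.0,
                night_interrupted=False):
    """Return True if flowering is expected under the stated night regime."""
    if plant_type == "day-neutral":
        return True  # flowering is not controlled by photoperiod
    # A brief light pulse in the middle of the night effectively cancels the
    # long dark period, as in the night-break experiments described above.
    effective_night = 0.0 if night_interrupted else night_hours
    if plant_type == "short-day":      # in effect, "long-night" plants
        return effective_night > critical_night_hours
    if plant_type == "long-day":       # in effect, "short-night" plants
        return effective_night < critical_night_hours
    raise ValueError("unknown plant type")

print(will_flower("short-day", night_hours=12))                          # True
print(will_flower("short-day", night_hours=12, night_interrupted=True))  # False
print(will_flower("long-day", night_hours=8))                            # True
```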

Transpiration and stomatal action also greatly affect the plant in almost all of the circumstances cited above. Transpiration is the evaporation of water from the plant, usually through the leaves but also from flowers, stems and even roots. The stomata are the major sites of transpiration; their opening is regulated by guard cells, and the resulting water loss may be considered both costly and necessary.

The stomata open to allow carbon dioxide, the gas required for photosynthesis, to diffuse in and oxygen to diffuse out. Transpiration has the dual role of cooling the plant when it is overheating and of removing excess water from the plant system. It also drives the mass flow of mineral nutrients, carried along by the movement of water through the plant, a hydrostatic process sustained by the diffusion of water vapour out of the stomata.

The rate of transpiration is directly affected by the degree of stomatal opening. The evaporative demand of the atmosphere is another influence on water release: humid conditions do not favour evapotranspiration, whereas wind enhances it. The amount of water lost also depends on the size of the individual plant, the surrounding light intensity, the ambient temperature, the soil water supply and the soil temperature.

Genetic, physical and chemical factors affect all of these environmental responses, internal cell functions and external adjustments. Plant functioning is a complex whole that embraces every aspect of botanical science, and no single aspect can be studied in isolation. Individual functions may vary from one plant to another depending on cell morphology, anatomy or ecological niche, but for all photosynthetic plants the general functions follow similar lines.

Deviations may occur as a result of evolutionary characteristics or adaptations. These deviations, however, have not hindered the systematic study of plant physiology. Research on plant physiology is still developing, and a full understanding of the topic requires that it be approached from all aspects of biology as a discipline, and may call for the inclusion of other disciplines.


Pigments and Photosynthesis

This work was produced by one of our professional writers as a learning aid to help you with your studies

Lab Four: Plant Pigments and Photosynthesis

Part A

Table 4.1: Distance Moved by Pigment Bands (millimetres)

Band Number    Distance (mm)    Band Colour
1              15               Yellow
2              35               Yellow
3              73               Green
4              172              Olive Green

Distance Solvent Front Moved: 180 mm

Table 4.2: Rf Values

Carotene (yellow to yellow-orange):           Rf = 0.083334
Xanthophyll (yellow):                         Rf = 0.194445
Chlorophyll a (bright green to blue-green):   Rf = 0.405556
Chlorophyll b (yellow-green to olive green):  Rf = 0.955556
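Each Rf value in Table 4.2 is the distance moved by a pigment band (Table 4.1) divided by the distance moved by the solvent front (180 mm). A minimal sketch of the calculation, using the measurements recorded above:

```python
# Rf = distance moved by pigment band / distance moved by solvent front
solvent_front_mm = 180
bands_mm = {
    "Carotene": 15,
    "Xanthophyll": 35,
    "Chlorophyll a": 73,
    "Chlorophyll b": 172,
}

for pigment, distance_mm in bands_mm.items():
    rf = distance_mm / solvent_front_mm
    print(f"{pigment}: Rf = {rf:.6f}")
# Output matches the values reported in Table 4.2 (to rounding):
# Carotene 0.083333, Xanthophyll 0.194444,
# Chlorophyll a 0.405556, Chlorophyll b 0.955556
```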

Analysis

Page 47-48 (1-3)

What factors are involved in the separation of the pigments?

The factors involved in the separation of the pigments are the pigments' solubility, the formation of intermolecular bonds, and the size of each pigment molecule. Since capillary action is the means by which the solvent moves up the strip of paper, the attraction of the pigment molecules to the paper and to one another is essentially determined by those factors.

Would you expect the Rf value of a pigment to be the same if a different solvent were used? Explain.

No. In a different solvent the solubility of the pigments would be different, so the Rf values would change. The rate at which the solvent moves would also be affected, and since the distances travelled by both the pigments and the solvent front would differ, the resulting Rf values would differ as well.

What type of chlorophyll does the reaction centre contain? What are the roles of the other pigments?

The reaction centre contains chlorophyll a, which is the primary photosynthetic pigment in plants. The other pigments (accessory chlorophyll a molecules, chlorophyll b, and the carotenoids, i.e. carotenes and xanthophylls) capture light energy and transfer it to the chlorophyll a at the reaction centre (College Board, 46).

Part B
Purpose

The purpose of this lab is to measure the effect of various chloroplast conditions on the rate of photosynthesis, as indicated by the percentage of light transmittance. Unboiled chloroplasts in the light, unboiled chloroplasts in the dark, and boiled chloroplasts in the light were each combined with DPIP in a cuvette, and a colorimeter was used to measure the light transmittance over time.

Since DPIP is the electron acceptor, the more light is present, the more electrons the DPIP accepts and the more it is reduced. This reduction eventually causes the DPIP to change colour from deep blue to nearly colourless.

Variables
Independent Variable

The independent variable in this lab is the condition of the chloroplasts: boiled chloroplasts in the light, unboiled chloroplasts in the light, and unboiled chloroplasts in the dark.

Dependent Variable

The dependent variable is the level of light transmittance over time, as measured by the colorimeter. From these data the rate of photosynthesis can be determined, because as the DPIP is reduced by the excited electrons its colour changes, and the rate of that colour change indicates the rate of photosynthesis.

Control Variable

The control variables in this lab include the type and size of cuvette, the type of buffer used, the amount of phosphate buffer used (1 mL), and the time intervals (in minutes) at which the percentage of transmittance is measured in the colorimeter.

Measurement

To measure the dependent variable, a colorimeter and DPIP were used to determine the level of light transmittance. DPIP, the electron acceptor, was placed in each cuvette. At set intervals, each cuvette was placed in the colorimeter, which recorded the level of light transmittance. As electrons were accepted, the DPIP was reduced and the colour of the solution changed, altering the level of light transmittance measured by the colorimeter.
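As an illustration of how the transmittance readings could be turned into a single rate figure, the sketch below fits a simple least-squares slope to a set of hypothetical five-minute readings. The times and percentages are invented example values, not the measurements from this lab.

```python
# Hypothetical % transmittance readings at 5-minute intervals for one cuvette.
# The rate of photosynthesis is approximated by the average change in
# % transmittance per minute (the slope of a simple least-squares line).
times_min = [0, 5, 10, 15]
transmittance_pct = [28.0, 45.0, 61.0, 72.0]   # invented example values

n = len(times_min)
mean_t = sum(times_min) / n
mean_y = sum(transmittance_pct) / n
slope = (sum((t - mean_t) * (y - mean_y)
             for t, y in zip(times_min, transmittance_pct))
         / sum((t - mean_t) ** 2 for t in times_min))
print(f"Approximate rate: {slope:.2f} % transmittance per minute")
```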

Hypothesis

Since photosynthesis is the process by which plants, bacteria, and other autotrophic organisms obtain energy to produce sugars, the right conditions and the right environment are necessary in order to carry out this complex process. Based on prior knowledge and information from this lab, cuvette 3 will have the highest percent of light transmittance and the highest rate of photosynthesis.

Since photosynthesis requires light and functional chloroplasts to absorb light and produce sugars, without either one the process is interrupted and cannot proceed properly. Unboiled chloroplasts will produce a higher percentage of light transmittance than boiled chloroplasts because of the impact of temperature on the chloroplasts' proteins and enzymes. At high temperatures, such as the boiling point, the heat denatures these enzymes and proteins, reducing their contribution to photosynthesis.

Without functional chloroplasts to absorb the energy from the light, electrons will not be raised to a higher energy level and will not be able to reduce DPIP. Of the two cuvettes with unboiled chloroplasts, the one placed in front of the light will show a higher percentage of light transmittance than the one placed in the dark, because with light the energy can be absorbed, DPIP can be reduced, ATP can be created, and photosynthesis can proceed.

Like functional chloroplasts, light is an essential component of photosynthesis; without light, photosynthesis cannot occur. Therefore, the cuvette placed in the dark may contain functional chloroplasts, but without light to provide the necessary energy the reaction will proceed very slowly or not at all. Finally, the cuvette with no chloroplasts will not photosynthesise at all, because without chloroplasts to absorb the light energy the solution cannot carry out photosynthesis.

Procedures

First, a beaker of water was positioned between the samples and the light source to act as a heat sink. Next, an ice bath was prepared by filling an ice bucket with ice, in order to preserve the phosphate buffer and the chloroplasts. Then, before the cuvettes could be used, they were wiped with lint-free tissue so that light transmittance would not be obstructed.

Before anything more was done with the cuvettes, both boiled and unboiled chloroplasts were drawn into pipettes and placed, inverted, in the ice bath. Next, of the five cuvettes labelled 1 to 5, cuvette 2 was fitted with a foil sleeve to keep light out of the solution. Each cuvette then received the corresponding amounts of phosphate buffer, distilled water, and DPIP. The colorimeter was then set up by starting the computer program that reads it and linking the two accordingly.

The first cuvette received three drops of unboiled chloroplasts, was shaken, and was placed in the slot of the colorimeter. This solution served as the first calibration point of reference for the colorimeter, at zero percent light transmittance. After the first calibration point was set, the second calibration point was also set. Three drops of unboiled chloroplasts were then added to cuvette 2, a stopwatch was started immediately, and the light transmittance was recorded.

The same cuvette was then encased in the foil sleeve made earlier and placed by the light. Cuvette 3 also received three drops of unboiled chloroplasts, and the time and light transmittance were recorded; immediately afterwards the cuvette was returned to the light. Cuvette 4 received three drops of boiled chloroplasts, and the time and light transmittance were likewise recorded; like cuvette 3, it was returned to the light. Cuvette 5, the control, received no chloroplasts, but its time and light transmittance were still recorded. The light transmittance of each cuvette continued to be recorded at five-minute intervals (5, 10, and 15 minutes), following the same procedure, until all data had been collected.

Conclusion

Photosynthesis is the conversion of light energy into chemical energy stored in glucose and other organic compounds. Light from the sun or from an artificial source is necessary for this process, which is essential to the development of plants and, indirectly, animals. The results obtained from this lab support this concept and also support my hypothesis.

After all the data had been gathered, cuvette 3 did indeed show the highest percentage of light transmittance and the fastest rate of photosynthesis. With unboiled chloroplasts absorbing the light and a light source available to provide the energy to reduce the DPIP, the conditions were right for photosynthesis to occur.

In cuvette 3, photosynthesis did occur: when the light shone on the unboiled chloroplasts, electrons were excited and moved to a higher energy level.

This energy was then used to produce ATP and to reduce DPIP, causing the solution to change colour and producing a higher level of light transmittance, reflecting a faster rate of photosynthesis. This cuvette essentially showed that both light and chloroplasts are needed to carry out photosynthesis. Although the graph may appear to show the rate of photosynthesis slowing down, the curve levels off not because photosynthesis stops but because, as the process continues, the DPIP is gradually used up, causing the measured reaction to slow down and level off.

Cuvette 2 showed different results: essentially no photosynthesis occurred, because there was no light for the chloroplasts to absorb and use to reduce the DPIP. Photosynthesis requires light, and without it there was essentially no change in the cuvette. The data table and graph do show some change in the apparent rate of photosynthesis, but this occurred because the cuvette had to be taken out of its aluminium sleeve to be placed in the colorimeter, and a small amount of the DPIP was reduced during the brief exposure to light.

Overall, however, the data show that because no light was present, photosynthesis could not occur and little change took place. Cuvette 4 also showed little change in the percentage of light transmittance: because the cuvette contained boiled chloroplasts, the high temperature had denatured the proteins and enzymes in the chloroplasts, rendering them ineffective. Since the light could not be used by the chloroplasts, photosynthesis either could not occur or occurred at a very slow pace.

As with cuvette 2, the data table and graph show some change in the percentage of light transmittance in cuvette 4, but this was because the DPIP was briefly exposed to the light and a small amount was reduced, causing a slight change in light transmittance. Essentially, this cuvette showed that functional chloroplasts, in addition to light, are required for photosynthesis.

Cuvette 5 also showed no change in the percentage of light transmittance because, without chloroplasts, the light could not be absorbed to excite electrons and reduce the DPIP. Without chloroplasts, photosynthesis could not occur, the DPIP would not be reduced, and ATP would not be created. Any fluctuations in the data or graph for cuvette 5 can be attributed to human or measurement error.

Analysis

Page 52-53 (1-8)

What is the function of DPIP in this experiment?

The function of the DPIP in this experiment is to act as the electron acceptor, replacing the usual NADP found in plants. When the light shines on the active chloroplasts, the electrons are excited, which causes them to jump to a higher energy level thus reducing the DPIP. As the DPIP is reduced, the colour changes from deep blue to colourless, which affects the rate and level of light transmittance when measured by the colorimeter.

What molecule found in the chloroplasts does DPIP “replace” in this experiment?

In this experiment, DPIP “replaces” the electron acceptor NADP.

What is the source of the electrons that will reduce DPIP?

The electrons come from the photolysis of water. When light shines on the chloroplasts, it provides enough energy to raise these electrons to a higher energy level, allowing them to reduce the DPIP.

What was measured with the spectrophotometer in this experiment?

The spectrophotometer in this experiment is used to measure the percentage/level of light transmittance through the cuvette based on the amount of photosynthetic activity.

What is the effect of darkness on the reduction of DPIP? Explain.

In the absence of light shining on the chloroplasts, the DPIP could not be reduced, because there was little or no energy to excite the electrons and move them to the higher energy level needed to reduce the DPIP.

What is the effect of boiling the chloroplasts on the subsequent reduction of DPIP? Explain.

Similar to the effect of darkness, boiling the chloroplasts denatured their proteins through high temperature, which slowed and inhibited the process of photosynthesis. Because the chloroplasts could not absorb light and perform their role, the DPIP could not be reduced, which lowered the percentage of transmittance.

What reasons can you give for the difference in the percentage of transmittance between the live chloroplasts that were incubated in the light and those that were kept in the dark?

Because light is essential for photosynthesis, the chloroplasts placed in the light were able to reduce DPIP and carry out photosynthesis. As the chloroplasts absorbed the light, the absorbed energy pushed electrons to a higher energy level, which reduced the DPIP.

As the DPIP was reduced, the colour changed and the light transmittance rose. For the chloroplasts kept in the dark, however, there was no energy source to use; the DPIP could not be reduced for lack of light energy, and the percentages of light transmittance were therefore lower.

Identify the function of each of the cuvettes

Cuvette 1: Cuvette 1 was used to measure how the absence of DPIP and chloroplast affected the percentage of light transmittance. This cuvette was also used to calibrate the colorimeter.

Cuvette 2: Cuvette 2 was used to measure how the lack of light and unboiled chloroplast affected the percentage of light transmittance. It essentially showed how important light was to the process of photosynthesis.

Cuvette 3: Cuvette 3 was used to measure how light and unboiled chloroplast affected the percentage of light transmittance. It essentially showed how light and active chloroplasts are needed to carry out the process of photosynthesis.

Cuvette 4: Cuvette 4 was used to measure how light and boiled chloroplasts affected the percentage of light transmittance. It essentially showed how the denatured proteins in the chloroplasts prevented the light from being used and photosynthesis from being carried out.

Cuvette 5: Cuvette 5 is the control of the experiment; it shows how the availability of light combined with the absence of chloroplasts prevents photosynthesis from occurring, and what effect this has on the percentage of light transmitted.

Variation of Light Intensity – Inverse Square Law

This work was produced by one of our professional writers as a learning aid to help you with your studies

Background Theory

Light emitted from any kind of source, e.g. the sun or a light bulb, is a form of energy. Everyday problems, such as providing the lighting required for various kinds of work or for street illumination, require one to be able to determine and evaluate the intensity of light emitted by a light source, or the illumination of a given surface. The group of studies formed around these issues is called photometry.

Luminous flux is a scalar quantity which measures the time rate of light flow from a source. Like all measures of energy transferred over a period of time, it can be expressed in joules per second, or watts (SI units). It can therefore safely be said that luminous flux is a measure of light power.

Visible light consists of several different colours, each representing a different wavelength of the radiation spectrum. For example, red has a wavelength of 610-700 nm, yellow 550-590 nm and blue 450-500 nm.

The human eye demonstrates different levels of sensitivity to the various colours of the spectrum; maximum sensitivity is observed in the yellow-green region (i.e. at 555 nm). It is therefore necessary to define a unit that relates the visual sensitivity at the various wavelengths to the light power measured in watts; this unit is the special luminous flux unit, the lumen (lm).

One lumen is equivalent to 1/680 watt of light at a wavelength of 555 nm. This relationship between illumination and visual response makes the lumen the preferred photometric unit of luminous flux for practical applications. Moreover, the most widely used light source in everyday life, the electric light bulb, emits light consisting of many different wavelengths.

A measure of the luminous strength of a light source is called the source's intensity. The intensity of a light source depends on the quantity of lumens emitted within a finite angular region, which is described by a solid angle. To visualise the solid angle, recall that in two dimensions the plane angle is used for angular measurements, and that for a circle of radius r the arc length s is calculated by the formula

s = r * θ – Equation 1

(θ is measured in radians)

In three dimensions the solid angle Ω is similarly used for angular measurements. Corresponding to the plane angle θ, a section of surface area A on a sphere of radius r subtends a solid angle given by the following formula:

A = r² * Ω – Equation 2

(Ω is measured in steradians)

By definition one steradian is the solid angle subtended by an area of the spherical surface equal to the square of the radius of the sphere.

Taking all the above into account, the luminous intensity I of a light source (small enough to be considered a point source) emitting into a solid angle Ω is given by:

I = F / Ω – Equation 3

where F is the flux measured in lumens. The unit of luminous intensity is therefore the lumen per steradian. This unit used to be called the candle, as it was originally defined in terms of the light emitted by carbon filament lamps.

Generally speaking, the luminous intensity in a particular direction is called the candle power of the source. The corresponding SI unit is the candela (cd), which was defined as the luminous intensity emitted by 1/60 cm² of platinum at a temperature of 2054 K (the fusion point of platinum).

A uniform light source (small enough to be considered a point source) whose luminous intensity is equal to one candela produces a luminous flux of one lumen through each steradian of solid angle. The equation below is the mathematical expression of this definition:

F = Ω * I – Equation 4

where I is equal to one cd and Ω is equal to one sr.

Similarly, the total flux Ft of a uniform light source with intensity I can be calculated with the aid of the following formula:

Ft = Ωt * I – Equation 5

Taking into account that the total solid angle Ωt of a sphere is 4π sr, the above formula becomes

Ft = 4π * I – Equation 6
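As a quick numerical illustration of Equations 3 and 6, the short sketch below computes the flux emitted by a uniform point source of assumed intensity; the 100 cd figure is an arbitrary example, not a property of the lamp used later in this practical.

```python
import math

# Assumed example intensity of a uniform point source (arbitrary value).
intensity_cd = 100.0            # candela = lumen / steradian

# Equation 6: total flux emitted into the full sphere of 4*pi steradians.
total_flux_lm = 4 * math.pi * intensity_cd
print(f"Total flux: {total_flux_lm:.1f} lm")   # about 1256.6 lm

# Equation 3 rearranged (F = I * omega): flux through a solid angle of 1 sr.
flux_1sr_lm = intensity_cd * 1.0
print(f"Flux through 1 sr: {flux_1sr_lm:.1f} lm")
```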

When a surface is irradiated with visible light it is said to be illuminated. For a given surface, the illuminance E (also called the illumination) is defined as the flux incident on the surface divided by the total area of the surface:

E = F / A – Equation 7

Where several light sources illuminate the same surface, the total illuminance is calculated by adding up the contributions of the individual sources. The SI unit of illuminance is the lux (lx), where 1 lx is equal to 1 lm/m².

Another way of expressing illumination, in terms of the source's intensity and the distance from the source, can be derived by combining the last few equations:

E = F / A = I * Ω / A = I / r² – Equation 8

where r is the distance measured from the source, i.e. the radius of a sphere whose surface area is A (Ω = A / r²). As a side note, 1 fc equals 1 cd/ft² and 1 lx is equal to 1 cd/m².

It is evident that the illumination is inversely proportional to the square of the measured distance from the light source. In the case of constant light source intensity I, it can be said that:

E2 / E1 = r1² / r2² = (r1 / r2)² – Equation 9

In the real world, the incident light is very rarely normal to a surface; nearly always light strikes a surface at an angle of incidence θ.

In this case the illuminance is calculated by:

E = I * cos θ / r² – Equation 10
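To make Equations 8 to 10 concrete, the following sketch evaluates the illuminance produced by a hypothetical 100 cd point source at a few distances and angles of incidence; all of the numbers are illustrative assumptions rather than measured values.

```python
import math

def illuminance(intensity_cd, distance_m, incidence_deg=0.0):
    """Equation 10: E = I * cos(theta) / r^2, in lux (Equation 8 when theta = 0)."""
    return intensity_cd * math.cos(math.radians(incidence_deg)) / distance_m ** 2

I = 100.0  # assumed example intensity in candela

# Inverse square law: doubling the distance quarters the illuminance (Equation 9).
print(illuminance(I, 1.0))   # 100.0 lx
print(illuminance(I, 2.0))   # 25.0 lx

# Oblique incidence reduces the illuminance by cos(theta) (Equation 10).
print(illuminance(I, 2.0, incidence_deg=60.0))  # 12.5 lx
```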

To sum up, several methods can be employed to measure illumination. Nearly all of them are based on the photoelectric effect, famously explained by Albert Einstein (work for which he was awarded the Nobel Prize in 1921). In brief, when light strikes a suitable material, electron emission is observed and an electric current flows if a circuit is present.

This current is proportional to the incident light flux and depends on the work function of the material; the resulting current is measured by instruments calibrated in illumination units.

Apparatus Components:

Light Sensor – Light Dependent Resistance (LDR)

Light bulb

Ruler

Power supply

Voltmeter

Ammeter

Connecting wires and Inline conductors

Two Vertical Stands

Black Paper

Experimental Apparatus

The experimental apparatus consisted of various parts. The basis of the light-reception circuit was a Light Dependent Resistor (LDR), the essential part of the apparatus since it enables the measurement of the light's intensity.

To give a brief introduction to this type of device: all materials exhibit some resistance to the flow of electric current (which is, by definition, a directed flow of electrons). The particularity of an LDR lies in the fact that its resistance is not constant; instead, it varies according to the intensity of the light falling on it. Generally speaking, LDRs fall into two main categories: negative-coefficient devices, whose resistance decreases as the light intensity grows, and positive-coefficient devices, whose resistance increases as the light intensity grows.

At the microscopic level, such a device consists of a semiconducting material such as doped silicon (the material most commonly used for electronic applications). When light falls on the device, its energy is absorbed by the covalently bonded electrons. This extra energy breaks the bonds and creates free electrons inside the material. Because they are no longer bound, these electrons are free to move and hence increase the conductivity of the material.

Another essential part of the apparatus is the light source, which in this particular case was an incandescent lamp (the type most commonly found in everyday applications). The basic component of an incandescent lamp is a wire filament, usually made of tungsten, sealed in a glass bulb. The bulb itself is filled with a low-pressure mixture of argon and nitrogen gas; these two gases delay the evaporation of the metal filament as well as its oxidation.

Once current begins to flow through the tungsten filament, it becomes so hot that it glows white. Under these operating conditions the filament temperature ranges from about 2500 to 3000 degrees Celsius. All incandescent lamps have a continuous spectrum which lies primarily in the infrared region of the electromagnetic spectrum. The basic drawback of these devices is their poor efficiency, since more than 95% of the lamp's energy is lost to the surroundings in the form of heat.

The detailed apparatus used for this investigation is shown schematically in Figure 1. According to this figure, the light source, an incandescent lamp (light bulb's electrical characteristics required here), is placed on a fixed stand and kept in a vertical, upright position facing upwards. Once the bulb is switched on, light is emitted isotropically in all directions. A power supply (power supply's electrical characteristics required here) was used to power the light bulb and to provide variable voltage values. In that way, as will be explained later, the intensity of the light emitted by the bulb does not stay constant and neither does the voltage across the LDR.

Opposite the light bulb, on another stand, the LDR was kept fixed in place with the aid of an adhesive material (Blu-Tack). The LDR was placed normal to the light bulb so that the angle of incidence of the light coming from the source remained constant, and normal, throughout the experimental measurements.

Another observation that can be made from Figure 1 is the interconnection between the LDR, the voltmeter, the ammeter and the power supply. In order for the LDR to function properly, a voltage was applied across the receiver circuit (a 4 V power pack in our case). The voltmeter was connected across the LDR in order to monitor the voltage across it continuously. The variations in this voltage were due to changes in the intensity of the incident light, since the resistance of the LDR was changing.

Ideally the voltmeter would have infinite resistance; in reality its resistance is finite, and small deviations of the indicated voltage from the true value were therefore expected.

Another quantity monitored was the current flowing through the LDR. For this purpose an ammeter was placed in series with the LDR. Its role was important since the current through the LDR had to remain constant throughout the experimental measurements. Again, an ideal ammeter would have no impedance at all; in reality every ammeter has a finite, albeit very small, resistance, so some deviation of the indicated value from the actual one should be expected.

(Missing resistance for potential divider?)

A very interesting (and very widely used) configuration for light intensity measurements, using the same components as those available for this practical, can be seen in Figure 1 with a little insight. A closer look at the receiver circuit reveals that the way the above-mentioned components are connected forms a potential divider. As a side note, measuring the current through the LDR would be feasible and relatively easy, since that current depends directly on the value of the LDR resistance. A better approach, however, is to measure the output voltage, which happens to be the voltage across the LDR (i.e. the value monitored by the voltmeter); this voltage is proportional to the current flowing through the LDR. The second resistance required to form the potential divider comes from the finite internal resistance of the ammeter. The value of the output voltage Vout can be calculated using the standard potential divider formula shown below:

Vout = RLDR / (RLDR + RAMMETER) * Vin – Equation 11

where Vin is the voltage applied across the receiver circuit, and RLDR and RAMMETER are the resistance of the LDR and the internal resistance of the ammeter respectively.
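A short numerical sketch of Equation 11 is given below. The supply voltage matches the 4 V power pack mentioned above, but the series and LDR resistance values are assumed purely for illustration.

```python
def divider_output(v_in, r_ldr, r_series):
    """Equation 11: output voltage across the LDR in a potential divider."""
    return v_in * r_ldr / (r_ldr + r_series)

V_IN = 4.0          # volts, as supplied by the power pack
R_SERIES = 100.0    # ohms, assumed fixed series (ammeter) resistance

# For a negative-coefficient LDR, brighter light lowers the resistance,
# and the output voltage across the LDR falls with it.
for r_ldr in (10_000.0, 1_000.0, 200.0):   # ohms, assumed example values
    v_out = divider_output(V_IN, r_ldr, R_SERIES)
    print(f"R_LDR = {r_ldr:8.0f} ohm -> Vout = {v_out:.2f} V")
```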

Since the aim of the measurements is to investigate the relationship between light intensity and distance, the stand of the light bulb could be translated horizontally even though both the light bulb and the LDR were kept fixed vertically. For the experiments the light bulb was translated parallel to a ruler placed between the two stands. This configuration was convenient because it allowed the exact distance between source and receiver to be known throughout the experiments.

In all optical experiments one of the most fundamental sources of error is background illumination and interference from other light sources. For this reason the apparatus was surrounded by black paper.

Experimental Procedure

The LDR sensor and the light bulb have to be at the same vertical height during all measurements. One key point is that, in this way, the light bulb behaves more like a point source of light, justifying the use of the equations above. The LDR sensor has to point towards the light bulb at all times.

Having set up the apparatus and chosen the range of distances between the light bulb and the LDR sensor, a reference measurement was taken with the light bulb switched off. Depending on the power of the bulb, a starting distance of 10 cm was deemed sufficient for calibration purposes; after calibration, this distance was progressively increased as explained below. The other components of the apparatus (receiver device, voltmeter, ammeter, etc.) were likewise switched off during this crucial calibration phase; generally speaking it is good and common practice, and much preferable, to carry out the calibration and the experimental procedure in conditions of total darkness. This step ensured that the background illumination was measured, so that its value could be subtracted from all further measurements. In this way the error in the measurements is reduced and their credibility greatly increased.

The light bulb was then switched on by applying a specific voltage across it, and the exact distance between the bulb and the LDR was measured using the ruler. The next and most important step at this stage was to measure the potential difference across the LDR for this position of the bulb. For reference, the ammeter reading was also recorded.

The position of the light bulb stand was then moved along the ruler in constant, known intervals of distance. For each known distance the above measurements were repeated. It is worth emphasising that the data can be acquired more than once per known distance r, since averaging decreases the percentage error in the measurements obtained. In this way a comprehensive chart or table can be formed associating distance values (between the two stands) with output voltage values.
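As an illustration of how such a table of distance against output readings might be checked against the inverse square law, the sketch below averages repeated readings at each distance, subtracts an assumed background value, and tests whether the reading multiplied by r² stays roughly constant. All of the numbers are invented example values, not data from this practical.

```python
# Hypothetical repeated light readings (arbitrary units) at each distance.
# Invented example values used only to illustrate the averaging step and
# the expected inverse-square trend.
readings = {
    0.10: [101.0, 99.0, 100.5],   # distance in metres : repeated readings
    0.20: [25.3, 24.8, 25.1],
    0.30: [11.2, 11.0, 11.3],
    0.40: [6.3, 6.2, 6.4],
}

background = 0.2   # assumed reading with the bulb switched off

print(f"{'r (m)':>8} {'mean':>8} {'mean * r^2':>12}")
for r, values in sorted(readings.items()):
    mean = sum(values) / len(values) - background
    # If the inverse square law holds, mean * r^2 should be roughly constant.
    print(f"{r:8.2f} {mean:8.2f} {mean * r ** 2:12.3f}")
```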

Hamstring Rehabilitation and Injury Prevention

This work was produced by one of our professional writers as a learning aid to help you with your studies

Hamstring injuries can be frustrating injuries. The symptoms are typically persistent and chronic, healing can be slow, and there is a high rate of exacerbation of the original injury (Petersen J et al. 2005).

The classical hamstring injury is most commonly found in athletes who take part in sports that involve jumping or explosive sprinting (Garrett W E Jr. 1996), but it also has a disproportionately high prevalence in activities such as water skiing and dancing (Askling C et al. 2002).

A brief overview of the literature on the subject shows that the majority of the epidemiological studies in this area have been done in the high-risk areas of Australian and English professional football teams. Various studies have put the incidence of hamstring strain injuries at 12 – 16% of all injuries in these groups (Hawkins R D et al. 2001). Part of the reason for this intense scrutiny of the football teams is not only the high incidence of the injury, which makes for ease of study, but also the economic implications of the injury.

Some studies (viz. Woods C et al. 2004) record that hamstring injuries occur at a rate of 5 – 6 injuries per club per season, resulting in an average loss of 15 – 21 matches per season. In terms of assessing the impact of one hamstring injury, this equates to an average figure of 18 days off playing and about 3.5 matches missed. It should be noted that this is an average figure and individuals may need several months for a complete recovery (Orchard J et al. 2002). The re-injury rate for this group is believed to be in the region of 12 – 31% (Sherry M A et al. 2004).

The literature is notable for its lack of randomised prospective studies of treatment modalities and therefore the evidence base for treatment is not particularly secure.

If one considers the contribution of the literature to the evidence base on this subject, one is forced to admit that there is a considerable difficulty in terms of comparison of various differences in terminology and classification. Despite these difficulties this essay will take an overview of the subject.

Classification of injuries

To a large extent, the treatment offered will depend on a number of factors, not least of which is the classification of the injury. In broad terms, hamstring injuries can have direct or indirect causation. The direct forms are typically caused by contact sports and comprise contusions and lacerations whereas the indirect variety of injury is a strain which can be either complete or incomplete. This latter group comprises the vast majority of the clinical injuries seen (Clanton T O et al. 1998).

The most extreme form of strain is the muscle rupture which is most commonly seen as an avulsion injury from the ischial tuberosity. Drezner reports that this type of injury is particularly common in water skiers and can either be at the level of the insertion (where it is considered a totally soft tissue injury) or it may detach a sliver of bone from the ischial tuberosity (Drezner J A 2003). Strains are best considered to fall along a spectrum of severity which ranges from a mild muscle cramp to complete rupture, and it includes discrete entities such as partial strain injury and delayed onset muscle soreness (Verrall G M et al. 2001). One has to note that it is, in part, this overlap of terminology which hampers attempts at stratification and comparison of clinical work (Connell D A 2004).

Woods reports that the commonest site of muscle strain is the musculotendinous junction of the biceps femoris (Woods C et al. 2004).

In their exemplary (but now rather old) survey of the treatment options of hamstring injuries, Kujala et al. suggest that hamstring strains can usefully be categorised in terms of severity thus:

Mild strain/contusion (first degree): A tear of a few muscle fibres with minor swelling and discomfort and with no, or only minimal, loss of strength and restriction of movements.

Moderate strain/contusion (second degree): A greater degree of damage to muscle with a clear loss of strength.

Severe strain/contusion (third degree): A tear extending across the whole cross section of the muscle resulting in a total lack of muscle function.

(Kujala U M et al. 1997).

There is considerable debate in the literature relating to the place of the MRI scan in the diagnostic process. Many clinicians appear to be confident in their ability both to diagnose and to categorise hamstring injuries on the basis of a careful history and clinical examination. The Woods study, for example, showed that only 5% of cases were referred for any sort of diagnostic imaging (Woods C et al. 2004). The comparative Connell study concluded that ultrasonography was at least as useful as MRI in terms of diagnosis (though not for pre-operative assessment) and was clearly both easier to obtain and considerably less expensive than an MRI scan (Connell D A 2004).

Before one considers the treatment options, it is worth considering both the mechanism of injury and the various aetiological factors that are relevant to the injury, as these considerations have considerable bearing on the treatment and to a greater extent, the preventative measures that can be invoked.

It appears to be a common factor in papers considering the mechanisms of causation of hamstring injuries that the anatomical deployment of the muscle is a significant factor. It is one of a small group of muscles which functions over two major joints (biarticular muscle) and is therefore influenced by the functional movement at both of these joints. It is a functional flexor at the knee and an extensor of the hip. The problems appear to arise because in the excessive stresses experienced in sport, the movement of flexion of the hip is usually accompanied by flexion of the knee which clearly have opposite effects on the length of the hamstring muscle.

Cinematic studies that have been done specifically within football suggest that the majority of hamstring injuries occur during the latter part of the swing phase of the sprinting stride (viz. Arnason A et al. 1996). It is at this phase of the running cycle that the hamstring muscles are required to act by decelerating knee extension with an eccentric contraction and then promptly act concentrically as a hip joint extensor (Askling C et al. 2002).

Verrall suggests that it is this dramatic change in function that occurs very quickly indeed during sprinting that renders the hamstring muscle particularly vulnerable to injury (Verrall G M et al. 2001).

Consideration of the aetiological factors that are relevant to hamstring injuries is particularly important in formulating a plan to avoid recurrence of the injury.

Bahr, in his recent and well-constructed review of risk factors for sports injuries in general, makes several observations with specific regard to hamstring injuries. He makes the practical observation that the older classification of internal (intrinsic) and external (extrinsic) factors is not nearly so useful in clinical practice as the consideration of the distinction between those factors that are modifiable and those that are non-modifiable (Bahr R et al. 2003).

Bahr reviewed the evidence base for the potential risk factors and found it to be very scanty and “largely based on theoretical assumptions” (Bahr R et al. 2003 pg 385). He lists the non-modifiable factors as older age and being black or Aboriginal in origin (the latter point reflecting the fact that many of the studies have been based on Australian football).

The modifiable factors, which clearly have the greatest import for clinical practice, include an imbalance of strength in the leg muscles with a low H : Q ratio (hamstring to quadriceps ratio) (Clanton T O et al. 1998), hamstring tightness (Witvrouw E et al. 2003), the presence of significant muscle fatigue, (Croisier J L 2004), insufficient time spent during warm-up, (Croisier J L et al. 2002), premature return to sport (Devlin L 2000), and probably the most significant of all, previous injury (Arnason A et al. 2004).

This is not a straightforward additive compilation however, as the study by Devlin suggests that there appears to be a threshold for each individual risk factor to become relevant, with some (such as a premature return to sport) being far more predictive than others (Devlin L 2000).

There is also some debate in the literature relating to the relevance of the degree of flexibility of the hamstring muscle. One can cite the Witvrouw study of Belgian football players where it was found that those players who had significantly less flexibility in their hamstrings were more likely to get a hamstring injury (Witvrouw E et al. 2003).

If one now considers the treatment options, an overview of the literature suggests that while there is general agreement on the immediate post-injury treatment (rest, ice, compression, and elevation), there is no real consensus on the rehabilitation aspects. To a large extent this reflects the scarcity of good quality data on this issue. The Sherry & Best study (Sherry M A et al. 2004) is the only well-constructed comparative treatment trial, and even this had only 24 athletes randomised to one of the two arms of the trial.

In essence it compared the effects of static stretching, isolated progressive hamstring resistance, and icing (STST group) with a regime of progressive agility and trunk stabilisation exercises and icing (PATS group). The study analysis is both long and complex but, in essence, it demonstrated that there was no significant difference between the two groups in terms of the time required to return to sport (healing time). The real significant differences were seen in the re-injury rates with the ratio of re-injury (STST : PATS) at two weeks being 6 : 0, and at 1 year it was 7 : 1.

In the absence of good quality trials one has to turn to studies like those of Clanton et al. where a treatment regime is derived from theoretical healing times and other papers on the subject. (Clanton T O et al. 1998). This makes for very difficult comparisons, as it cites over 40 papers as authority and these range in evidential level from 1B to level IV. (See appendix). In the absence of more authoritative work one can use this as an illustrative example.

Most papers which suggest treatment regimes classify different phases in terms of time elapsed since the injury. This is useful for comparative purposes but it must be understood that these timings will vary with clinical need and the severity of the initial injury. For consistency this discussion will use the regime outlined by Clanton.

Phase I (acute): 1–7 days

As has already been observed, there appears to be a general consensus that the initial treatment should include rest, ice, compression, and elevation, with the intention of controlling the initial intramuscular haemorrhage, minimising the subsequent inflammatory reaction and thereby reducing pain levels (Worrell T W 2004).

NSAIAs appear to be almost universally recommended with short term regimes (3 – 7 days) starting as soon as possible after the initial injury appearing to be the most commonly advised. (Drezner J A 2003). This is interesting as a theoretically optimal regime might suggest that there is merit in delaying the use of NSAIAs for about 48 hrs because of their inhibitory action on the chemotactic mechanisms of the inflammatory cells which are ultimately responsible for tissue repair and re-modelling. (Clanton T O et al. 1998).

There does appear to be a general consensus that early mobilisation is beneficial to reduce the formation of adhesions between muscle fibres or other tissues, with Worrell suggesting that active knee flexion and extension exercises can be of assistance in this respect and should be used in conjunction with ice to minimise further tissue reaction (Worrell T W 2004).

Phase II (sub-acute): day 3 to >3 weeks

Clanton times the beginning of this phase with the reduction in the clinical signs of inflammation. Goals of this stage are to prevent muscle atrophy and optimise the healing processes. This can be achieved by a graduated programme of concentric strength exercises but should not be started until the patient can manage a full range of pain free movement (Drezner J A 2003).

Clanton, Drezner and Worrell all suggest that “multiple joint angle, sub-maximal isometric contractions” are appropriate as long as they are pain free. If significant pain is encountered then the intensity should be decreased. Clanton and Drezner add that exercises designed to maintain cardiovascular fitness should be encouraged at this time. They suggest “stationary bike riding, swimming, or other controlled resistance activities.”

Phase III (remodelling); 1–6 weeks

After the inflammatory phase, the healing muscle undergoes a phase of scar retraction and re-modelling. This leads to the clinically apparent situation of hamstring shortening or loss of flexibility. (Garrett W E Jr. et al. 1989). To minimise this eventuality, Clanton cites the Malliaropoulos study which was a follow up study with an entry cohort of 80 athletes who had sustained hamstring injuries.

It was neither randomised nor controlled, and the treatment regime was left to the discretion of the clinician in charge. It compared regimes which involved frequent hamstring stretching (four sessions daily) with fewer sessions (once daily). In essence, the results showed that the athletes who performed the most intensive stretching programme regained their range of motion faster and also had a shorter period of rehabilitation. Both these differences were found to be significant (Malliaropoulos N et al. 2004).

Verrall suggests that concentric strengthening followed by eccentric strengthening should begin in this phase. The rationale for this timing is that eccentric contractions tend to exert greater forces on the healing muscle and should therefore be delayed to avoid the danger of a rehabilitation-induced re-injury (Verrall G M et al. 2001). We note that Verrall cites evidence for this from his prospective (un-randomised) trial.

Phase IV (functional): 2 weeks to 6 months

This phase is aimed at a safe return to non-competitive sport. It is ideally tailored to the individual athlete and the individual sport. No firm rules can therefore be applied. Worrell advocates graduated pain-free running based activities in this phase and suggests that “Pain-free participation in sports specific activities is the best indicator of readiness to return to play.” (Worrell T W 2004)

Drezner adds the comment that return to competitive play before this has been achieved is associated with a high risk of injury recurrence. (Drezner J A 2003)

Phase V (return to competition): 3 weeks to 6 months

This is the area where there is perhaps the least agreement in the literature. All authorities are agreed that the prime goal is to try to avoid re-injury. Worrell advocates that the emphasis should be on the maintenance of stretching and strengthening exercises (Worrell T W 2004).

For the sake of completeness one must consider the place of surgery in hamstring injuries. It must be immediately noted that surgery is only rarely considered as an option, and then only for very specific indications. Indications which the clinician should be alert to are large intramuscular bleeds which lead to intramuscular haematoma formation as these can give rise to excessive intramuscular fibrosis and occasionally myositis ossificans (Croisier J L 2004).

The only other situations where surgery is contemplated is a complete tendon rupture or a detachment of a bony fragment from either insertion or origin. As Clanton points out, this type of injury appears to be very rare in football injuries and is almost exclusively seem in association with water skiing injuries (Clanton T O et al. 1998).

It is part of the role of the clinician to give advice on the preventative strategies that are available, particularly in the light of studies which suggest that the re-injury rate is substantial (Askling C et al. 2003).

Unfortunately this area has an even less substantial evidence base than the treatment area. For this reason we will present evidence from the two prospective studies done in this area, those of Hartig and Askling.

Hartig et al. considered the role of flexibility in the prophylaxis of further injury with a non-randomised comparative trial and demonstrated that increasing hamstring flexibility in a cohort of military recruits halved the number of hamstring injuries that were reported over the following 6 months (Hartig D E et al. 1999).

The Askling study was a randomised controlled trial of 30 football players. The intervention group received hamstring strengthening exercises in the ten week pre-season training period. This intervention reduced the number of hamstring injuries by 60% during the following season (Askling C et al. 2003). Although this result achieved statistical significance, it should be noted that it involved a very small entry cohort.

Conclusions.

Examination of the literature has proved to be a disappointing exercise. It is easy to find papers which give advice at evidence level IV but there are disappointingly few good quality studies in this area which provide a substantive evidence base. Those that have been found have been presented here but it is accepted that a substantial proportion of what has been included in this essay is little more than advice based on theory and clinical experience.

References

Arnason A, Gudmundsson A, Dahl H A, et al. (1996) Soccer injuries in Iceland. Scand J Med Sci Sports 1996; 6: 40 – 5.

Arnason A, Sigurdsson S B, Gudmundson A, et al. (2004) Risk factors for injuries in football. Am J Sports Med 2004; 32 (1 suppl) :S5 – 16.

Askling C, Lund H, Saartok T, et al. (2002) Self-reported hamstring injuries in student dancers. Scand J Med Sci Sports 2002; 12: 230 – 5.

Askling C, Karlsson J, Thorstensson A. (2003) Hamstring injury occurrence in elite soccer players after preseason strength training with eccentric overload. Scand J Med Sci Sports 2003; 13: 244 – 50.

Bahr R, Holme I. (2003) Risk factors for sports injuries: a methodological approach. Br J Sports Med 2003; 37: 384 – 92.

Clanton T O, Coupe K J. (1998) Hamstring strains in athletes: diagnosis and treatment. J Am Acad Orthop Surg 1998; 6: 237 – 48.

Connell D A, Schneider-Kolsky M E, Hoving J L, et al. (2004) Longitudinal study comparing sonographic and MRI assessments of acute and healing hamstring injuries. AJR Am J Roentgenol 2004; 183: 975 – 84

Croisier J-L, Forthomme B, Namurois M-H, et al. (2002) Hamstring muscle strain recurrence and strength performance disorders. Am J Sports Med 2002; 30: 199 – 203

Croisier J-L. (2004) Factors associated with recurrent hamstring injuries. Sports Med 2004; 34: 681 – 95.

Deave T (2005) Research nurse or nurse researcher: How much value is placed on research undertaken by nurses? Journal of Research in Nursing, November 1, 2005; 10(6): 649 – 657.

Devlin L . (2000) Recurrent posterior thigh symptoms detrimental to performance in rugby union: predisposing factors. Sports Med 2000; 29: 273 – 87.

Drezner J A. (2003) Practical management: hamstring muscle injuries. Clin J Sport Med 2003; 13: 48 – 52

Garrett W E Jr, Rich F R, Nikolaou P K, et al. (1989) Computer tomography of hamstring muscle strains. Med Sci Sports Exerc 1989; 21: 506 – 14.

Garrett W E Jr. (1996) Muscle strain injuries. Am J Sports Med 1996; 24 (6 suppl) : S2–8.

Hartig D E, Henderson J M. (1999) Increasing hamstring flexibility decreases lower extremity overuse in military basic trainees. Am J Sports Med 1999; 27: 173 – 6

Hawkins R D, Hulse M A, Wilkinson C, et al. (2001) The association football medical research programme: an audit of injuries in professional football. Br J Sports Med 2001; 35: 43 – 7

Kujala U M, Orava S, Jarvinen M. (1997) Hamstring injuries: current trends in treatment and prevention. Sports Med 1997; 23: 397 – 404

Malliaropoulos N, Papalexandris S, Papalada A, et al. (2004) The role of stretching in rehabilitation of hamstring injuries: 80 athletes follow-up. Med Sci Sports Exerc 2004; 36: 756 – 9.

Orchard J, Seward H. (2002) Epidemiology of injuries in the Australian Football League, season 1997 – 2000. Br J Sports Med 2002; 36: 39 – 44

Petersen J, Holmich P (2005) Evidence based prevention of hamstring injuries in sport Br. J. Sports Med. 2005; 39: 319 – 323

Sherry M A, Best T M. (2004) A comparison of 2 rehabilitation programs in the treatment of acute hamstring strains. J Orthop Sports Phys Ther 2004; 34: 116 – 25

Verrall G M, Slavotinek J P, Barnes P G, et al. (2001) Clinical risk factors for hamstring muscle strain injury: a prospective study with correlation of injury by magnetic resonance imaging. Br J Sports Med 2001; 35: 435 – 9

Witvrouw E, Danneels L, Asselman P, et al. (2003) Muscle flexibility as a risk factor for developing muscle injuries in male professional soccer players. A prospective study. Am J Sports Med 2003; 31: 41 – 6.

Woods C, Hawkins R D, Maltby S, et al. (2004) The football association medical research programme: an audit of injuries in professional football: analysis of hamstring injuries. Br J Sports Med 2004; 38: 36 – 41.

Worrell T W. (2004) Factors associated with hamstring injuries: an approach to treatment and preventative measures. Sports Med 2004; 17: 338 – 45.

Is Machiavelli a Teacher of Evil?

This work was produced by one of our professional writers as a learning aid to help you with your studies

This essay will consider whether or not Machiavelli was a teacher of evil, with specific reference to his text ‘The Prince’. It shall first be shown what Machiavelli taught and how this teaching can only be justified by consequentialism. It shall then be discussed whether consequentialism is a viable ethical theory capable of justifying Machiavelli’s teaching. Arguing that it is not, it will be concluded that Machiavelli is a teacher of evil.

To begin, it shall be shown what Machiavelli taught, or suggested should be adopted, in order for a ruler to maintain power. To understand this, it is necessary to understand the political landscape of the period.

The Prince was published posthumously in 1532 and was intended as a guidebook for rulers of principalities. Machiavelli was born in Italy and, during that period, there were many wars between the various states which constituted the country. These states were either republics (governed by an elected body) or principalities (governed by a monarch or single ruler). The Prince was written for and dedicated to Lorenzo de Medici, who was in charge of Florence which, though a republic, was run autocratically, like a principality. Machiavelli’s work aimed to give Lorenzo de Medici advice on ruling as an autocratic prince (Nederman, 2014).

The ultimate objective at which Machiavelli aims in The Prince is for a prince to remain in power over his subjects. Critics who claim that Machiavelli is evil do not necessarily hold this view because of this ultimate aim, but because of the way in which Machiavelli advises achieving it. This is because, in pursuit of this ultimate end, Machiavelli holds that no moral or ethical cost is too great. This theme runs constantly through the work. For example, in securing rule over the subjects of a newly acquired principality, which was previously ruled by another prince, Machiavelli writes:

“… to hold them securely enough is to have destroyed the family of the prince who was ruling them.” (Machiavelli, 1532: 7).

That is, in order to govern a new principality, it is necessary that the family of the previous prince be “destroyed”. Further, the moral cost is not limited to physical acts, such as the murder advised here, but extends to deception and manipulation. An example of this is seen when Machiavelli claims:

“Therefore it is unnecessary for a prince to have all the good qualities I have enumerated, but it is very necessary to appear to have them. And I shall dare to say this also, that to have them and always to observe them is injurious, and that to appear to have them is useful.” (Machiavelli, 1532: 81).

Here, Machiavelli is claiming that virtues are necessary to a ruler only insomuch as the ruler appears to have them. To act only by the virtues will, ultimately, be detrimental to the maintenance of the ruler’s power, as he may often have to act against the virtues to quell a rebellion, for example. A prince must be able to appear just, so that he is trusted, but not actually be so, in order that he may maintain his dominance.

In every piece of advice, Machiavelli claims that it is better to act in the way he advises, for to do otherwise would lead to a worse consequence: the end of the prince’s rule. The defence which is to be made for Machiavelli, then, must come from a consequentialist viewpoint.

Consequentialist theory argues that the morality of an action is dependent upon its consequences. If the act or actions create consequences that, ultimately, are better (however that may be measured) than otherwise, the action is good. However, if a different act could, in that situation, have produced better consequences, then the action taken would be immoral.

The classic position of consequentialism is utilitarianism. It was first argued for by Bentham, who claimed that two principles govern mankind – pleasure and pain – and that it is the pursuit of the former and the avoidance of the latter that determines how we act (Bentham, 1789: 14). This is done either on an individual basis or a collective basis, depending on the situation. In the first case, the good action is the one which gives the individual the most pleasure or the least pain. In the second case, the good action is the one which gives the collective group the most pleasure or the least pain. The collective group consists of individuals, and therefore the good action will produce the most pleasure if it does so for the greatest number of people (Bentham, 1789: 15). Therefore, utilitarianism claims that an act is good iff its consequences produce the greatest amount of happiness (or pleasure) for the greatest number of people, or avoid the greatest amount of unhappiness (or pain) for the greatest number of people.

With utilitarianism now outlined, it can be used to defend Machiavelli’s advice. First, if the ultimate goal is achieved, the consequence of the prince remaining in power must cause more happiness for more of his subjects than would be the case if he lost power. Secondly, the pain and suffering inflicted by the prince on the subjects whom he must murder, deceive or steal from must be less than the suffering which would be caused should he lose power. If these two criteria can be satisfied, then consequentialism may justify Machiavelli.

Further, it is practically possible that such a set of circumstances could arise; it is conceivable that the suffering would be less should the prince remain in power. As stated, Italy at that time was in turmoil and many wars were being fought. A prince remaining in power would secure internal peace for a principality and its subjects, whereas a prince who lost power would leave the land open to attack, and there would be greater suffering for the majority of the populace. On this subject, Machiavelli writes:

“As there cannot be good laws where the state is not well armed, it follows that where they are well armed they have good laws.” (Machiavelli, 1532: 55)

This highlights the turmoil of the world at that time, and the importance of power, both military and lawful, for peace. Machiavelli, in pursuing the ultimate end of the prince retaining his power, would also secure internal peace and the defence of the principality. This would therefore mean less destruction and suffering for the people.

If Machiavelli is defended by consequentialism, the claim that he is evil becomes an argument against this moral theory. The criticisms of consequentialism are manifold. A first major concern is that it justifies actions which seem to be intuitively wrong, such as murder or torture, not just on an individual basis but on a mass scale. Take the following example: in a war situation, the only way to save a million and a half soldiers is to kill a million civilians. Consequentialism justifies killing the million civilians, as the suffering will be less than if a million and a half soldiers were to die. If consequentialism must be used in order to justify Machiavelli’s teachings, it must therefore be admitted that this act of mass murder, in the hypothetical situation, would also be justified.

A second major concern is that it uses people as means rather than ends, which seems intuitively incorrect, as evidenced by the trolley problem. The trolley problem is as follows: a train, out of control, is heading towards five workers on the track. The driver has the opportunity to change to another track, on which there is a single worker. Thomson argues it would be “morally permissible” to change track and kill the one (Thomson, 1985: 1395). However, the consequentialist would here state that “morality requires you” to change track (Thomson, 1985: 1395), as there is less suffering in one dying than in five dying. The difference between these two stances is to be noted.

Thomson then provides another situation: the transplant problem. A surgeon is able to transplant any body part to another person without failure. In the hospital where the surgeon works, five people are each in need of a single organ, without which they will die. Another person, visiting for a check-up, is found to be a complete match for all the transplants needed. Thomson asks whether it would be permissible for the surgeon to kill the one and distribute their organs to those who would otherwise die (Thomson, 1985: 1395-1396). Though she claims that it would not be morally permissible to do so, those who claimed that changing tracks in the trolley problem would be a moral requirement – the consequentialists – would also have to claim that murdering the one to save the five is a moral requirement, as the most positive outcome would be given to the most people.

Herein lies the major concern for the consequentialist, and therefore for Machiavelli’s defence: that consequentialism justifies using people as means to an end, and not as ends in themselves. A criticism of this is famously argued for by Kant, who claims that humans are rational beings, and we do not state that they are “things”, but instead call them “persons” (Kant, 1785: 46). Only things can permissibly be used merely as a means, and not persons, who are in themselves an end (Kant, 1785: 46). To use a person merely as a means rather than an end is to treat them as something other than a rational agent which, Kant claims, is immoral.

This must now be applied to Machiavelli. In advising the murder and deception of others, he is advocating treating people as merely a means, by using them in order to obtain the ultimate end of retaining power. Though this ultimate end may bring about greater peace, and therefore pleasure for a greater number of people, it could be argued that the peace obtained does not outweigh the immoral actions required to create it.

Further, it must also be discussed whether Machiavelli’s teaching is in pursuit of a prince retaining power in order to bring about peace, or whether it is in pursuit of retaining power simply so that the prince may retain power. The former may be justifiable, if consequentialism is accepted. However, this may not be the case for the latter, even if peace is obtained.

Machiavelli’s motives will never be truly known. Such a problem demonstrates further criticisms of consequentialism, and therefore of Machiavelli himself. If he was advising the pursuit of power for the sake of power, he would not be able to justify the means to this end unless the end itself provided a consequentialist justification – which it would fail to do if, ultimately, the prince retained power but no greater amount of pleasure resulted than would otherwise have been the case.

To pursue power in order to promote peace is perhaps justifiable. However, as is a major concern with the normative approach of consequentialism, the unpredictability of consequences can lead to unforeseen ends. The hypothetical prince may take Machiavelli’s advice, follow it to the letter, and produce one of three outcomes:

Power is obtained and peace is obtained.
Power is obtained but peace is not obtained.
Neither power nor peace is obtained.

Only in the first of these outcomes can there be any consequentialist justification. This means that there are two possible outcomes in which there cannot be a consequentialist justification, and it is impossible to know, truly, which outcome will be obtained. This is the criticism of both Machiavelli and consequentialism: that the risk involved in acting is too great, given such a chance of failure and therefore of unjustifiable actions, when it is impossible to truly know the outcomes of actions. The nature of the risk is what makes this unjustifiable, in that the risk is against human life, wellbeing and safety. Machiavelli condones using people as merely a means to an end without the guarantee of a positive end to provide a consequentialist justification.

In conclusion, it has been briefly demonstrated what Machiavelli put forward as his teachings. It was further shown that the only justification for Machiavelli’s teachings is a consequentialist approach. However, the criticisms put against Machiavelli and consequentialism, such as the justification of mass atrocities, the use of people as means to ends, and the unpredictability of practical implementation, show consequentialism to fail as an acceptable justification of his teachings. Therefore, it is concluded that Machiavelli is a teacher of evil.

Reference List

Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Accessed online at: http://socserv.mcmaster.ca/econ/ugcm/3ll3/bentham/morals.pdf. Last accessed on 26/09/2015.

Kant, I. (1785). Groundwork for the Metaphysics of Morals. Edited and Translated by Wood, A. (2002). New York: Vail-Ballou Press.

Machiavelli, N. (1532). The Prince. Translated by Marriott, W. K. (1908). London: David Campbell Publishers.

Nederman, C. (2014). Niccolò Machiavelli. Accessed online at: http://plato.stanford.edu/entries/machiavelli/. Last accessed on 02/10/2015.

Thomson, J. J. (1985). The Trolley Problem. The Yale Law Journal. Vol. 94, No. 6, pp. 1395-1415.

Analysis of Hobbes’ Theory that “People Need to be Governed”

This work was produced by one of our professional writers as a learning aid to help you with your studies

Examine Thomas Hobbes’ theory that people need to be governed and the debate regarding the original nature of the human species…

The debate surrounding our original state of nature, or species-being, has been hotly contested by scholars for centuries and remains a pivotal line of enquiry in contemporary academic circles. In societies across the globe we observe entire populations governed by (religious) laws and practices designed to manage, control and otherwise police the boundaries of individualism whilst accentuating solidarity and protecting the collective norm (Stiglitz 2003). In this essay, we explore the various conceptions that have sought to trace and detail the genealogy of human beings back to their primordial or so-called primitive condition, with particular emphasis on Hobbes’ (2008) proposition that the disposition of human nature is chaos and that, as humans, we are therefore compelled to forgo our instinctual nature and find sanctuary within the realms of social collectivism and central governance. In this vein, we confront the age-old nature versus nurture conundrum: are we social and moral animals by design, altruistic in nature, or does civilisation transpire from an egotistical obligation to co-operate in order to thrive?

As ever-increasing demands are placed on social-scientific research to maintain pace with an ever-changing world, it is commonplace for scholars to forget the (historical) dictums of our primal beginnings; such investigations are often marginalised – afforded little time, finance and credence – in a world seeking solutions to contemporary problems (Benton and Craib 2010). Yet, to paraphrase Marx (1991), the ghosts of the past weigh heavy on the minds of the living; understanding our roots may yet become the greatest social discovery and contribution to forging our future as human beings. Thus, social science, by definition and direction, is arguably obsessed with the social constructs that humans generate, frequently dismissing (perhaps through arrogance) the undeniable fact that we remain animals, imbued with the same instinctual drives and impulses as other species. Indeed, one need only observe the effect of social neglect in the case of feral children: unfettered by societal constraints, we return to being barely recognisable beasts, uncivilised and unconcerned by social pretensions, decorum, normative expectations and values (Candland 1996). For Hobbes (2008), humankind in its original state of being is an evil scourge upon the earth; a ruthless and egotistical creature driven by self-gain and absolute dominance – a survival-of-the-fittest nightmare (Trivers 1985). Thus, paralleling the work of Plato (2014), he asserts that the individual, possessing the principle of reason, must sacrifice free will to preserve their ontological wellbeing, acquired resources, property and way of life, or what he calls a ‘commodious living’ (78). As Berger and Luckmann (1991) argue, we willingly accept social captivity as it offers a protective blanket from otherwise harsh conditions; a remission from the barbarism and bloodshed that transpired previously. This led Hobbes (2008: 44) to assert that ‘people need governed’ under a social contract, or mutual agreement of natural liberty; the promise not to pillage, rape or slaughter was reciprocated and later crystallised and enforced by the state or monarch. Indeed, whilst his belief in the sovereign’s traditional (rather than divine) right to rule was unwavering, he was certain that a despotic kingdom would not ensue, as reason would triumph over narcissism.

In response, Socrates (cited in Johnson 2011) hypothesised that justice was an inherent attribute whereby humans sought peace as a process of self-fulfilment – of regulating the soul – not because of fear of retribution; to paraphrase: ‘the just man is a happy man’ (102). The state would therefore stand as a moral citadel or vanguard against the profane. Similarly, Locke (2014) rejects the nightmarish depiction offered by Hobbes (2008), asserting a romanticised state of nature – permeated with God’s compassion – whereby humans seek liberty above all; not individual thrill-seekers, but rather people banded together by familial bonds and communes – a pre-political conjugal society – possessing parochial values, norms and voluntary arrangements. However, he also appreciated that, without the presence of a central regulatory organisation, conflict could easily emerge and continue unabated. Hence, humanity ascends into a civil contract, the birth of the political, as a means of protecting the status quo of tranquillity, prosperity and ownership. Rousseau (2015) likewise proposes a quixotic rendition of humanity’s social origins, considering such times as simplistic or mechanical (Durkheim 1972) inasmuch as populations were sparse, resources abundant and needs basic, implying that individuals were altruistic by nature and morally pure. Yet the ascension of the state, particularly the mechanisms of privatisation, polluted and contorted humankind’s natural state into something wicked that not only coaxed but promoted tendencies of greed, selfishness and egocentrism. In this account, we find strong parallels with Marx (1991), specifically his critique of capitalism, which is conceptualised as a sadistic mechanism tearing humanity from its species-being – the world of idiosyncratic flair, enchantment and cultural wonder – and placing it into a rat-race of alienation (from one’s fellow being), exploited labour and inequality. As Rousseau (2015) ably contends: ‘man is born free, and everywhere he is in chains’ (78). Thus, government, and the liberalism it allegedly promotes, is a farce, seeking to keep the architectural means to create the social world within the possession of a minority – this he calls the current naturalised social contract. He calls for a new social order premised on consensus, reason and compassion; we must reconnect with ourselves, re-engage with our neighbours and discover who we are as a species.

The supposition of our philosophical ancestors is that we require governance as a process of realisation; we are social animals that demand and reciprocate encounters with others, and alongside the impulse for sustenance and shelter is the yearning for social contact – indeed, love and belonging are included in Maslow’s (2014) hierarchy of needs. Yet within many philosophical transcripts is the deployment of religion as a legitimate form of authority; since antiquity, monarchs, pharaohs, dynasties and early tribal formations have claimed power through divine right or approval. In fact, conviction in a celestial realm has pervaded for epochs – carved in millennia-old cave paintings around the globe (Stiglitz 2003) – and perhaps emerged from an enchanted, speculative and awe-inspired outlook on the world which our ancestors occupied; religion complemented the life-cycle, delineating the sacred from the profane (Foucault 1975). As Schluchter (1989) argues, later missionaries would propagate their dogma; a prime example of this is the upsurge, dissemination and (even today) domination of Christianity as it overran its pagan predecessors, witchdoctors and mystics. Thus, religion has been credited with generating social mores and collectivism and with ushering in the rise of civilisations. Indeed, Elias (2000) details the social evolution of humanity as the animalistic fades to the backstage – with the gradual monopolisation of violence and (political) power – and the presented, civil self takes precedence. Initially this was necessary for survival as people became more interdependent, and it was later significantly influenced by the royal courts, which became a celebrity-like beacon of perfect decorum and taste.

By the 19th century, most of Europe was regarded as civilised, whilst other, developing parts of the world were considered savage lands; the violence, exploitation and subsequent domination of places such as India and Africa by western societies is well documented (Buckinx and Trejo-Mathys 2015). As Elias puts it: “people were forced to live in peace” (2000, 99). This was also accompanied by the advent of the Enlightenment, whereby the rule of logic, rationalisation and pragmatism disrobed and effectively dismantled the prevailing supremacy of religion; though religion remains a powerful force in certain cultures and is frequently accompanied by its own medieval brutality. As Anderson (2008) suggests, in Africa and the Middle East, where Christianity, Judaism and Islam prevail and to varying degrees dominate life, purportedly barbaric acts like (female) genital mutilation, segregation and (domestic) violence – which affect mainly women – as well as public violence and executions are commonplace and sanctioned.

Thus, secularisation and the rise of empiricism unshackled humankind from its beastly beginnings and rehomed it within the embracing idioms of consensus, free will and reciprocal courteousness – humans had undergone a transformation, or courtisation, whereby mannerisms, hygiene and self-restraint became governing tenets; the barbarian was adorned (concealed) with socially acceptable masks, equipped with approved social scripts and the rules of the game – Goffman’s (1990) social actor and his/her presented selves was born. In this conceptualisation, self-governance or policing is a prerequisite for progress and forms the basis of society; enhanced with consciousness, we are capable of resisting our impulsive drives – Freud’s (2010) Eros and Thanatos are forsaken for the greater good – and creating a utilitarian civilisation. Today, in late-capitalist societies, we live in relative prosperity and peace; the elected government and its respective agencies provide sustenance, infrastructure, healthcare, protection and political democracy; this template of humanity is – like our religious proselytisers – distributed globally, perpetuated by the mass media, globalisation and free markets (Stiglitz 2003).

For Nietzsche (2013), this contemporary worldview is tantamount to emptiness: humanity had escaped its animalistic state of being, finding virtue in religion and the will-to-power within to overcome and ascend, but is now found wanting with the demise of faith and the nihilism that has followed (his famous ‘God is dead’ (13) quote). Indeed, he is dismissive of scientific, philosophical and religious idioms, particularly their totalitarian tendencies which (for him) inhibit, enslave or otherwise surrender life-affirming behaviours; similarities may be drawn with Marx and Engels’ (2008) critique of religion as the ‘sigh of the oppressed creature’ (45); religion (like governments or social contracts) demands that individuals relinquish or capitulate part of themselves, and genuflect to the laws, tenets and values that rule. Such things seek to (re)capture or incarcerate our species-being within a straitjacket. Therefore, humanity must re-engage its instinctual resolve – which Nietzsche (2014) regarded as stronger than our urge for sex or survival – and become supermen (Ubermensch) untrammelled by dogma, finding wonder in the fluidity and unpredictability of nature and good conscience by re-evaluating our values, expectations and shortcomings as a species: namely, a stateless civilisation, unhindered by permanency, premised on the continual refinement of the self. Yet, whilst Nietzsche (2014) highlights the stifling effects of dogma, it seems unrealistic to suggest humans are capable of living in constant flux – even a war-torn nation offers consistency (Stiglitz 2003) – insofar as we instinctually seek to structure the surrounding environment in a comprehensible manner; we assign labels, judgements and behavioural codes as we produce order – predictability is the precondition for life and offers humans ontological security and wellbeing (Berger and Luckmann 1991). However, given the asymmetrical nature of society, some possess the architectural means to govern others – a relationship that can be reformulated as a form of symbolic violence or barbarism. For example, consider the credence given to hegemonic masculinities and the subsequent denigration and objectification of women, or the subjugation of nations to western ideals (Mulvey and Rogers 2015). Moreover, the free markets offered by capitalism seek to segregate, exploit and captivate the masses within a consumerist world of shiny prizes (Marcuse 2002), coaxing out our selfish and cut-throat tendencies, whilst so-called liberalist governments attempt to impose their civility globally through violence, bullying and manipulation; a wolf in sheep’s clothing (Kinker 2014). So, even under the rule of government and the presence of civilisations, our so-called animalistic (violent) heritage pervades, like a ghostly presence haunting the present.

Hobbes’ (2008) reasons for why individuals need to be governed – to cage our inner beast – seem defective. As Walsh and Teo (2014) suggest, a major fault with many of the propositions outlined above is the emphasis placed on linearity – government is seen as a progressive necessity – rather than an appreciation that, as social creatures, we are capable of creating communities with their own normative flows, ebbs and fluxes and (more importantly) of governing ourselves, both as a matter of necessity or self-preservation and as a means of self-fulfilment or belonging; contemporary modes of practice have become so integrated and reified that finding a parallel alternative or a “way back” seems implausible. That said, as Browning (2011) argues, in an increasingly interdependent and global world the requirement for centralised states seems unavoidable in order to handle the sheer mass of human activity and to maintain a level of equilibrium; an inevitable course of human progress.

This essay has been both illuminating and problematic; the proposition of whether humans are capable of cohabiting without the requirement of a state or intervening supra-organisation remains a mystery. In fact, such an assertion is premised on how one defines the original state of nature: are we barbaric creatures who engage in a social contract for personal gain, or are we instinctually social and empathic animals whose predisposition is not only to safeguard our interests but to generate genuine communal bonds and interconnections with others? The latter affords more room for manoeuvre towards alternative (flexible) social figurations without government, where humanity can bask in the wonder of difference, variety and levels of unpredictability, whilst the former finds sanctuary only in the incarceration of humanity within defined idioms and laws imposed by a centre of authority and power. It is tempting to concede that, despite Hobbes’ depiction of government as the epitome of civility, it appears on the contrary to be (in this era of modernity) the primary agent of (symbolic) violence and struggle, whether masquerading as a religious, communist or neo-liberal state. Thus, one is reluctant to accept Hobbes’ assertion that people should be governed by a reified or separate entity. Instead, with a level of Nietzschean sentiment, perhaps people should be permitted and empowered to re-evaluate and govern themselves.

Word Count: (2,195)

Bibliography

Anderson, J. 2008. Religion, State and Politics. Cambridge University Press.

Benton, T. and Craib, I. 2010. Philosophy of Social Science: The Philosophical Foundations of Social Thought (Traditions in Social Theory). 2nd edition. Palgrave Macmillan.

Berger, P. and Luckmann, T. 1991. The Social Construction of Reality: A Treatise in the Sociology of Knowledge (Penguin Social Sciences). Penguin Press.

Browning, G. 2011. Global Theory from Kant to Hardt and Negri (International Political Theory). Palgrave Macmillan.

Buckinx, B. and Trejo-Mathys, J. 2015. Domination and Global Political Justice: Conceptual, Historical and Institutional Perspectives (Routledge Studies in Contemporary Philosophy). Routledge.

Candland, D. 1996. Feral Children and Clever Animals: Reflections on Human Nature. Oxford University Press.

Durkheim, E. 1972. Emile Durkheim: Selected Writings, ed. and trans. Giddens, A. Cambridge University Press: Cambridge.

Elias, N. 2000. The Civilizing Process. 2nd edition. Wiley-Blackwell.

Foucault, M. 1975. Discipline and Punish: The Birth of the Prison. Knopf Doubleday Publishing Group.

Freud, S. 2010. Civilization and Its Discontents. Martino Fine Books.

Goffman, E. 1990. Stigma: Notes on the Management of Spoiled Identity. Penguin Press.

Hobbes, T. 2008. Leviathan (Oxford World’s Classics). Oxford Paperbacks.

Johnson, P. 2011. Socrates: A Man for Our Times. Penguin Publishers.

Kinker, S. 2014. The Critical Window: The State and Fate of Humanity. Oxford University Press.

Locke, J. 2014. Two Treatises of Government. CreateSpace Independent Publishing Platform.

Marcuse, H. 2002. One Dimensional Man. Routledge.

Marx, K. and Engels, F. 2008. On Religion. Penguin Press.

Marx, K. 1991. Capital, Volume 3, ed. Mandel, E. Penguin Books (Classics): London.

Maslow, A. 2014. Toward a Psychology of Being. Sublime Books.

Mulvey, L. and Rogers, A. 2015. Feminisms: Diversity, Difference and Multiplicity in Contemporary Film Cultures (Key Debates – Mutations and Appropriations in European Film Studies). Amsterdam University Press.

Nietzsche, F. 2014. Beyond Good and Evil. Penguin Press.

Nietzsche, F. 2013. On the Genealogy of Morals. Penguin Press.

Plato. 2014. The Republic. Reprint. CreateSpace Independent Publishing Platform.

Rousseau, J. 2015. The Social Contract. CreateSpace Independent Publishing Platform.

Schluchter, W. 1989. Rationalism, Religion, and Domination: A Weberian Perspective. University of California Press.

Stiglitz, J. 2003. Globalization and Its Discontents. Penguin Press.

Trivers, R. 1985. Social Evolution. Benjamin-Cummings Publishing Co.

Walsh, R. and Teo, T. 2014. A Critical History and Philosophy of Psychology: Diversity of Context, Thought, and Practice. Cambridge University Press.

Does Virtue Ethics Offer an Account of Being Right?

This work was produced by one of our professional writers as a learning aid to help you with your studies

This essay shall discuss whether or not virtue ethics offers a convincing account of what it is to be morally right. It shall focus on Hursthouse’s version of virtue ethics, which shall be outlined first, along with the positives of this argument: that it allows for different actions in different situations and yet does not justify mass atrocities as a result. Four criticisms shall then be put against virtue ethics: that it is not action guiding; that it does not explain cultural difference; that it offers no guidance in cases of virtue conflict; and that it relies on either a circularity or, at best, on the argument being superfluous. With only one of these criticisms answerable, it shall ultimately be concluded that virtue ethics does not offer a convincing account of what it is to be right.

Hursthouse’s argument for virtue ethics is an updated version of Aristotle’s original work. She claims that an action is right “iff it is what a virtuous agent would characteristically do in the circumstances” (Hursthouse, 1996: 646). Virtue ethics, then, makes an essential reference to the virtuous person, whom Hursthouse describes as one who “acts virtuously … one who has and exercises the virtues” (Hursthouse, 1996: 647). It is a trivial truth that a virtuous person does what is right, according to all moral theories. However, virtue ethics differs from other arguments in that it claims that an action is right in virtue of its being what the virtuous person would do.

The concept of what a virtue is, then, must be established. Here, Hursthouse appeals to Aristotle, arguing that a virtue is “a character trait a human being needs for eudaimonia, to flourish or live well” (Hursthouse, 1996: 647). This links to Aristotle’s work The Nicomachean Ethics, in which he claims eudaimonia is living a flourishing, happy life, which he views as the ultimate end and goal of a person’s life (Aristotle, 340bc). A virtue is any trait which will contribute to this flourishing life, arguably the “positive traits”, such as kindness or charity.

Here, virtue ethics demonstrates a shift away from the deontic concepts of deontology and consequentialism: it does not claim that an action “ought” or “ought not” to be done. Instead, actions are justified in terms of aretaic concepts, by claiming that an action is “kind” or “callous”, for example.

It can now be summarised what makes an action right according to virtue ethics. An action will be right iff it is what a virtuous agent would characteristically do in the circumstances. The virtuous agent would characteristically do the action in the circumstances iff the trait which leads to the action is a virtue. Finally, the trait which leads to the action will be a virtue iff it would increase the eudaimonia of the agent.

There are positive things to be said of Hursthouse’s argument for virtue ethics. Firstly, by stating that an action is right “iff it is what a virtuous agent would characteristically do in the circumstances”, it allows for variation in action dependent on the situation, which is more in line with our pragmatic moral practice. This escapes the rigidity and often counter-intuitive rules of deontology. Secondly, whilst it allows for variation in moral practice, it does not allow for the atrocities which consequentialism justifies as a consequence of its situational variation. This is because virtue ethics’ argument depends on what the virtuous person would do and, arguably, the virtuous agent would not act in the way consequentialism argues for, by allowing mass murder or torture under certain extreme circumstances, for example.

However, there are decisive criticisms against virtue ethics. The first criticism is that it does little to tell us exactly how to act; it is not action guiding. Virtue ethics states that we should act as the virtuous person would. This gives no other instruction than “act virtuously”, which perhaps can be further developed into “act kindly” or “do not act callously”. However, there is no further instruction than this, and nothing to say whether an action will be kind or just; a person is left to rely on their pre-understanding and belief.

Hursthouse’s response to this criticism seems to be that this is all the instruction that we need. She argues:

“We can now see that [virtue ethics] comes up with a large number [of rules] … each virtue generate[s] a prescription – act honestly, charitably, justly.” (Hursthouse, 1996: 648).

When acting, we need only ask ourselves “is this act just?” or “is this act kind?”, and the answer, being either “yes” or “no”, will dictate whether or not the act should be done.

This response does little to answer the original concern, and leads to the second criticism. Hursthouse claims that, in order to determine whether an act is just, or kind, or deceitful, a person should seek out those whom they consider to be their moral and virtuous superiors, and ask their advice (Hursthouse, 1996: 647-648). Not only does this rely on a preconceived measure of virtue (in that we must already have an understanding of what is just in order to decide which acquaintance is most just), it does little to recognise what is a second criticism of virtue ethics: the variation in morality between cultures.

Virtues vary between cultures in three senses. Firstly, cultures may vary on which virtue is to take precedence in cases of virtue conflict (though this is a separate criticism in itself). Secondly, cultures vary in their conception of whether a trait is, indeed, a virtue. Thirdly, cultures vary on what action they believe a given virtue leads to. MacIntyre writes:

“They [various thinkers and cultures] offer us different and incompatible lists of the virtues; they give a different rank order of importance to different virtues; and they have different and incompatible theories of the virtues.” (MacIntyre, 2007: 181).

He gives the example of Homer, who claimed that physical strength was a virtue. This, MacIntyre claims, would never be accepted as a virtue in modern society and, consequently, Homer’s idea of a virtue, or an excellence, is vastly different from ours (MacIntyre, 1981: 27). Though this demonstrates that one trait may be accepted as a virtue by one culture and not by another, it also highlights the third sense of cultural difference: that different cultures can accept the same trait as a virtue, but vary in what they take a virtuous act to be. For example, all societies believe justice to be a virtue, yet one might consider capital punishment to be just and therefore virtuous, whilst another may hold capital punishment to be unjust and therefore not virtuous.

In defence of virtue ethics, Hursthouse claims that the problem is one which is equally shared by deontology, arguing:

“Each theory has to stick out its neck and say, in some cases ‘this person/these people/other cultures are in error’, and find some grounds for saying this.” (Hursthouse, 1991: 229)

Yet this causes concern for virtue theory. Hursthouse is here claiming that some cultures are wrong in believing that certain traits truly lead to an increase in eudaimonia, and are therefore wrong about them being virtues. This presents a circularity in reasoning for virtue ethics.

Before the circularity criticism is discussed, a defence can be made of one aspect of conflict: when two virtues are in conflict, not across cultures, but with one another in a situation. The third criticism is that situations are easily imagined in which two virtues can be in conflict in this manner. For instance, a police officer may apprehend a robber. On hearing the robber’s story, it turns out that he stole food in order to provide for his starving children. The police officer must then decide whether to act on the virtue of justice, and arrest the robber who, despite the circumstances, has committed a crime, or to act on the virtue of sympathy and charity, and allow the robber to take the food and feed the starving children. Hursthouse claims that “in such cases, virtue ethics has nothing helpful to say” (Hursthouse, 1991: 229).

However, a response can be made. The degree of conflict can be very broad, depending on the circumstances. In some situations, the correct answer is obvious; in the above case, it would be hard to justify not allowing a man a stolen loaf of bread to feed his starving children. In other situations, the degree of conflict can be much narrower, making the decision much more difficult. In keeping with the argument of virtue ethics, the correct decision is going to be the one which adds to eudaimonia. If both traits will lead to an increase in eudaimonia, the correct choice will be the one which adds most to eudaimonia. As the difference in the amount of increase narrows, the choice becomes harder, but the moral cost of choosing wrongly will be less. Ultimately, if both virtues will increase eudaimonia equally, then they are equally the correct choice.

However, the most decisive criticism is that the argument which virtue ethics puts forward for what is morally right rests on a circularity. This was brought out when it was demonstrated that virtue ethics requires the existence of some other criterion in order that it can be said that some cultures are right and others wrong in their approach to the implementation of virtues and in what they hold to be a virtue.

If virtue ethics is to explain why some cultures are wrong in their implementation of the virtues, then its argument must work as follows: a culture is wrong because what it is advocating as right would not be done by the virtuous person. It would not be done by the virtuous person because the trait which leads to the action is not a virtue. The trait which leads to the action is not a virtue because it would not add to the person’s eudaimonia. The reason, then, that a culture is wrong is that it is mistaken in assuming that the trait which would lead to the action is a virtue, because it will not add to the person’s eudaimonia.

It must therefore be considered what it takes for a trait to lead to an increase in eudaimonia. To this end, it must be claimed that a trait can only add to eudaimonia, and therefore be a virtue, because of something about the trait: namely, that it is morally right. Herein lies the circularity. Virtue ethics states that an action is right iff it is what the virtuous person would characteristically do in the situation. However, it has already been shown that there must be something about a trait which is morally right in order that it can add to eudaimonia and therefore be a virtue, so that the virtuous person may act on it. To avoid the circularity, for a trait to be morally right, there must be a criterion of rightness other than its being what the virtuous person would characteristically do in the situation. If such a criterion exists, virtue ethics’ argument becomes superfluous as an explanation of what is right.

In conclusion, the argument for virtue ethics’ account of what it is for an action to be right has been set forward. Firstly, the positives of this argument were shown: that it avoids the rigidity of deontology and the atrocities of consequentialism. It was then criticised with four arguments: that it is not action guiding; that it cannot account for the differences in cultures’ morality; that it gives rise to concerns when two or more virtues come into conflict; and that it requires another criterion of rightness which, if accepted, renders virtue ethics unnecessary or, if rejected, leads to a circularity in virtue ethics. Therefore, it is concluded that virtue ethics does not offer a convincing account of what it is for an action to be right.

Reference List

Aristotle. (340bc). The Nicomachean Ethics. Translated by Ross, D. Edited by Brown, L. (2009). Oxford: Oxford University Press.

Hursthouse, R. (1991). Virtue Theory and Abortion. In Philosophy and Public Affairs. Vol. 20, No. 3, pp. 223-246.

Hursthouse, R. (1996). “Normative Virtue Ethics”. In Ethical Theory: An Anthology. Edited by Shafer-Landau, R. (2013). Chichester: John Wiley & Sons, pp. 645-652.

MacIntyre, A. (1981). The Nature of the Virtues. In The Hastings Centre Report. Vol. 11, No. 2, pp. 27-34.

MacIntyre, A. (2007). After Virtue: A Study in Moral Theory. 3rd edition. Notre Dame: University of Notre Dame Press.

Descartes’ First Meditations: Veridical Experiences

This work was produced by one of our professional writers as a learning aid to help you with your studies

The question of whether or not one can know that one is dreaming has been a staple of philosophical discussion since Descartes wrote The Meditations in the 1600s. To someone engaging in philosophy for the first time, this can seem a bizarre question. However, Descartes’ reasoning for doubting the certainty that one is not dreaming is compelling. For Descartes, our ability to perceive reality cannot be guaranteed, since our senses can deceive us (Descartes, 1986). Thus, over the course of the first two Meditations, Descartes concludes that the only thing he is certain of is that there is some being that is “I”. He concludes that this “I”, however, may only be a mind (Descartes, 1986).

Descartes reasons that even our perception of our bodies is a product of the intellect. Therefore, the only thing he feels certain of is that there is a mind doing the thinking. There are two separate questions that arise from this. Firstly, can I know that I am awake? Secondly, can I know that my belief that I am not locked “inside a dream” is not itself a dream? This second question evokes the plot of a sci-fi film, and elicits imagery of being “a brain in a vat”, where everything that one perceives is illusory. The “brain in a vat” is a modern re-imagining of the demon argument produced by Descartes. The “brain in a vat” idea originates with Putnam and, according to Brueckner, inspired The Matrix films (Brueckner). It is this second idea which will be the main focus of this essay – Descartes’ “demon argument”, or the “mind in a vat” argument.

This extreme form of scepticism, where one is merely a “brain in a vat”, is surprisingly difficult to rule out with absolute certainty. However, the implications of this may be less profound than they initially appear. The notion that we do not have a true perception of the external world, because our sensory perceptions are being manipulated by a demon or because we are a “mind in a vat”, may not actually have practical implications for how we live in the world. However, the discussion about whether we can know for certain that we are not dreaming is not purely abstract and esoteric. There is an element of this that pertains to a wider issue than merely dreaming. For instance, as Skirry explains, Descartes supposes that an evil demon may be deceiving him, and so, as long as this supposition remains in place, there is no hope of gaining any absolutely certain knowledge (Skirry). If one cannot be sure that one is not being deceived by a demon, then one can have no absolutely certain knowledge about anything. However, as I will argue in this essay, concerning ourselves with whether we lack true knowledge because we are being manipulated by a demon does not help us to find solutions to the issues in the world in which we believe we are living.

The sceptical account for not knowing whether one is dreaming or not has two levels. First, our perception of what we are currently experiencing does not allow us to determine whether we are awake or dreaming. Dreams can have the same quality as waking experiences, and we can dream that we are awake. Therefore, the experience of being awake is not distinguishable from dreaming. Descartes provides the following example of this situation: “How often, asleep at night, am I convinced of just such familiar events – that I am here in my dressing-gown, sitting by the fire – when in fact I am lying undressed in bed!” (Descartes, 1986, p. 13). Given that in one’s dream, one’s perceptual experiences are not different from those when awake, it may be that I am dreaming that I am typing out this essay.

This sceptical consideration of the possibility of not knowing that one is awake is not as profound or extreme as it first seems. It leaves intact the idea that there are two states: dreaming and being awake. The problem is that when we think we are awake, we may be dreaming. It is for this reason that this essay will leave this discussion aside and move on to the second level of scepticism explored by Descartes.

The second reason for doubting whether we can know if we are dreaming takes scepticism to a deeper level. The sceptical case for doubting our ability to know whether we are awake or dreaming is summed up by Blumenfeld and Blumenfeld as the problem of the possibility of being in a dream within a dream:

for all I know, I may be dreaming… now, then my belief that not all my experiences have been dreams is itself a belief held in a dream, and hence it may be mistaken. If I am dreaming now, then my recollection of having been awake in the past is merely a dreamed recollection and may have no connection whatever with reality. (Blumenfeld and Blumenfeld, 1978, pp. 243-244).

There are two ways of illustrating this dilemma. First is the illustration devised by Descartes, whereby one is being deceived by a demon. The second is the one favoured by sci-fi films, whereby one is merely a “brain in a vat”, and all that we think we are experiencing has no relation to external reality.

This second level of scepticism speculates that all our experiences may be locked within a dream, including our experiences of waking and dreaming. Given the time period in which he was writing, Descartes invokes superstitious and supernatural ideas of a God or a demon to illustrate this. Descartes imagines that there may be:

some malicious demon of the utmost power and cunning [that] has employed all his energies in order to deceive me. I shall think that the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which he has devised to ensnare my judgement. I shall consider myself as not having hands or eyes, or flesh, or blood or senses, but as falsely believing that I have all these things (Descartes, 1986, p. 15).

The modern sci-fi parallel is that I am actually merely “a brain in a vat”, probably millions of miles away, on some distant planet. This is the view that everything we experience of the external world is a deception. This modern, scientific alternative allows the modern reader to see Descartes’ problem more clearly, and prevents us from dismissing it as an anachronism from the time of superstition.

In the Second Meditation, Descartes convinces himself that because he is thinking, he does actually exist. Hence the famous phrase: Cogito, ergo sum, or “I think, therefore I am” (Descartes, 1986, p. 17). This is important, as it does set a limit to scepticism, since Descartes’ conclusion is that “even if I am being deceived by an evil demon, I must exist in order to be deceived at all” (Skirry). The fact that I think is proof that I am at least a mind. However, this does not provide proof that I am also a body. Descartes poses to himself the question: “what am I to say about this mind, or about myself?” (Descartes, 1986, p. 22). But he then tells the reader, “so far, remember, I am not admitting that there is anything else in me except a mind” (Descartes, 1986, p. 22). Descartes’ famous phrase cogito, ergo sum is part of the philosophical canon because it is his demonstration that there are limits to scepticism – I think; therefore, I am a mind. However, the knowledge that I am thinking does not, in itself, rule out the possibility that I am merely a mind, i.e. that I am locked in a dream within a dream, where I am deceived into thinking that I have two states of existing: one being awake, the other being dreaming.

At the beginning of this essay, I said that, to someone engaging in philosophy for the first time, the question “can we know we are not dreaming?” can seem a very bizarre one. This can be seen in Blumenfeld and Blumenfeld’s paper, where they note that “a frequent charge against scepticism is that it shows that we cannot have knowledge only by adopting an implausibly strong definition of knowledge” (Blumenfeld and Blumenfeld, 1978, p. 249). Intuitively, the idea that “I” (whatever I am in this case) am merely “a mind in a vat” is implausible. This is why the question “can we know we are not dreaming?” seems bizarre. It may not be possible to know that we are not dreaming; however, this requires the construction of a rather implausible hypothesis. In other words, only by invoking something that seems implausible can the question “can we know we are not dreaming?” be raised.

However, to dismiss Descartes’ and the sceptics’ argument on these grounds is rather weak. Dismissing the demon argument on the basis that it is implausible does not falsify it; this is just an argument from probability. The argument that it seems more probable that I am not dreaming, and that I do experience an external world, is not sufficiently sound, philosophically, to end the discussion. There is a need to produce a more satisfying philosophical explanation. Blumenfeld and Blumenfeld argue that it is not possible to justify empirical claims on the basis of probability (Blumenfeld and Blumenfeld, 1978). Therefore, they argue that to maintain the argument for an external world, and rule out the demon scenario, the hypothesis of an external world needs to be epistemically superior to the hypothesis of a world constructed by a demon (Blumenfeld and Blumenfeld, 1978). However, Blumenfeld and Blumenfeld are not convinced that the hypothesis of an external world is epistemically superior. They argue:

One might think that this could be argued on grounds of the greater simplicity of the external-world hypothesis. But it is hard to see in what respect the external-world hypothesis is simpler than that of the demon. The latter is committed to the existence of the demon (a spirit) with the means of and a motive for producing sense experiences, to a mind in which these experiences are produced, and to the sense experiences themselves. The external-world hypothesis, on the other hand, is committed to all of the above, except the existence of the demon. But it is committed, in addition, to a physical world with the capability of producing sense experience. So, it is hard to see how the external-world hypothesis is simpler. (Blumenfeld and Blumenfeld, 1978, p. 250).

Therefore, it is surprisingly difficult to rule out the idea that “I” am “a mind in a vat”, and that all my experiences of the external world are based on a deception of my sensory perception.

However, the implications of this may not be as profound as they initially appear. Firstly, the implications of all our experiences of an external world being based on illusion would only arise if the illusion were broken. If there is a demon creating sensory experiences for me, or I am actually just “a brain in a vat”, the implications would only take effect when I became aware of my “real” existence, and of the illusion and deception. Secondly, unless we become aware that all our past experiences, including those of being awake and of dreaming, are part of a dream, we are no better able to deal with the dilemmas of this world than we are currently. It is hard to see what the practical implications of this theory are or, more specifically and more importantly, how they can help us. For example, it is not going to work to tell a Syrian refugee, “don’t worry, go back to Syria, because the war isn’t real. We are actually ‘brains in a vat’, on another planet, many millions of miles away.” It may sound as though I am being facetious; however, the point is a serious one. The question “can you know that you are not dreaming?” may be a valid one – it might be surprisingly difficult to prove that I am not “a brain in a vat”. However, it is not a very helpful question to be concerning ourselves with.

In conclusion, attempting to demonstrate that our sensory experiences are not the trickery of a malicious demon proves unfruitful. Trying to refute the idea satisfactorily fails to recognise that the implications would only matter if we found out that, in the “real” world, we were just “minds in a vat”. Meanwhile, there are practical concerns that require our thought, such as the Syrian refugee crisis. The kinds of questions that scepticism is concerned with do not help us to deal with these practical issues. Scepticism does make us wonder whether these “practical” issues are real; Descartes’ hypothesis makes us ponder the possibility that the Syrian refugee crisis is not real, and is part of the deceptions of a demon. However, this kind of thinking does not help us to respond to the things that we think are important.

Bibliography

Blumenfeld, D. and Blumenfeld, J.B. (1978) “Can I Know that I am not Dreaming?” in Hooker, M. (ed.), Descartes: Critical and Interpretative Essays. Johns Hopkins, Baltimore, pp. 234-253

Brueckner, T. (Retrieved October 15, 2015). “Skepticism and Content Externalism”. Stanford Encyclopedia of Philosophy, Available from: http://plato.stanford.edu/entries/skepticism-content-externalism/#2

Descartes, R. (1986) “First Meditation”, in Cottingham, J (trans.) Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press, Cambridge, pp. 12-15

Descartes, R. (1986) “Second Meditation”, in Cottingham, J (trans.) Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press, Cambridge, pp. 16-23

Descartes, R. (1986) “Objections and Replies [Selections]” in Cottingham, J (trans.) Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press, Cambridge, pp. 63-67

Skirry, J. (Retrieved October 6, 2015), “Rene Descartes (1596—1650)”, Internet Encyclopedia of Philosophy, Available from: http://www.iep.utm.edu/descarte/