Crimes of the Powerful Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Why has the analysis of crimes of the powerful been such a growth area in criminology over the past century?

It is tempting to give a simple or even simplistic answer to the above question: that the analysis and theory of crimes of the powerful have grown so quickly in the last century because the quantity and diversity of such crimes have themselves exploded. As the number of crimes committed by the powerful has risen across the years and continents, so the police forces, crime-prevention agencies and legislators charged with halting these crimes have had to evolve into larger and more complex organizations. For instance, amongst the myriad forms of organized crime that developed in the twentieth century, one pertinent recent example is the efflorescence of high-tech and internet crime, where professional and international gangs manipulate technology to extort or steal large sums of money from the public. High-tech crime is of course a recent phenomenon; it did not exist at the turn of the last century. The analysis of such activities by law agencies has therefore grown to respond to this new threat; moreover, the analysis and prevention of such crimes has had to grow in sophistication and scale just as the crimes themselves have done. Organized crime – be it narcotics trafficking, prostitution rings, corporate crime and so on – has become a massive international business, and it has required larger agencies, equipped with better criminal theory and technology and supported by international cooperation, to deal with it. Moreover, the clear gap between the professionalism and techniques of many criminal organizations and the law agencies that pursue them will require those agencies to catch up in the coming decades – a catch-up that will depend heavily upon advances in criminal theory and analysis.

‘Crimes of the powerful’ are not exclusively concerned with illegal activities of the above description, but also with ‘crimes’ committed by corporations, by governments, by dictators and even, in an interesting new perspective, by patriarchal gender structures that sanction crimes of power against women. The attention of law agencies and legislators to these crimes has produced a mass of new analysis and theory by criminologists on their nature. Likewise, several theories compete to explain the causes of organized crime and crimes of the powerful. One such theory points to social change as the most profound catalyst in both the spread of organized crime and its detection. This theory assimilates the teachings of sociology, psychology, anthropology and history to produce a detailed sociological critique of these causes. In the eighteenth and nineteenth centuries, many acts committed by the powerful that would today be classified as criminal were merely pseudo-illegal or socially disapproved of; they carried no specific criminal offence. But social and legislative advances have made the prosecution of crimes of the powerful easier. For instance, corporate crime is, theoretically at least, far easier to identify and prosecute than it was in the early twentieth century. Moreover, greater media exposure of the life of corporations and governments has magnified their crimes whenever they are committed.

A moment of this essay might be given to discussing exactly what is meant by the phrase ‘crimes of the powerful’. Indeed, a person unfamiliar with the literature of criminology might be forgiven for regarding the term as somewhat amorphous and nebulous: he might argue that nearly any criminal phenomenon could be termed a crime of the powerful. The dictionary defines a crime as ‘an act punishable by law, as being forbidden by statute or injurious to the public welfare … an evil or injurious act; an offence, esp. one of grave character’ (Oxford, 1989). It is difficult to see where the word ‘power’ could not be inserted into this definition and for it still to make sense. There is therefore in the pure ‘black letter’ interpretation of the law a huge shaded area that allows for misinterpretation of the term ‘crime of power’. Can, for instance, a crime of the powerful be a physical act? Or must it be sanctioned by the top levels of an organization? Moreover, the use of the word ‘crime’ is itself ambiguous. The trafficking of drugs or children is clearly illegal and criminal according to the principles of law; but we also speak of corporate crimes against the public (withholding medicines from the dying, adulterating foods and so on) as ‘crimes’ even though they have no explicit recognition as such in law. There is then a near infinite possible extension of the word ‘crime’ when one uses it in the sense of something that ought to be illegal rather than something that is presently illegal. In Smith’s words: ‘If a crime is to be understood simply as law violation, then … no matter how immoral, reprehensible, damaging or dangerous an act is, it is not a crime unless it is made such by the authorities of the state’.

There is, moreover, often the paradoxical situation where a government that commits ‘crimes of power’ against its people can only be legally recognized as doing so if it passes legislation against itself. This is obviously extremely unlikely to happen, and so many such crimes go unnoticed. It is often directly against the interests of certain groups to recognize the existence of certain ‘crimes’, because they would then have to recognize their legal existence also. Recently, however, one spur to the criminological analysis of the powerful has come from the growth of international law, which allows for the international legal recognition of crimes committed by dictators or despots who would never recognize them themselves. For instance, Saddam Hussein is near universally thought to have committed crimes of power against his people; yet such things were never legally recognized as ‘crimes’ until a body such as the United Nations had the international authority to declare illegal the actions of heads of state.

Sociologists and psychologists, amongst other groups (Chesterton, 1997), have argued that the moral, sociological and psychological aspects of ‘crimes of the powerful’ should be recognized by criminologists to a far greater extent. By using such approaches, criminologists can add activities like environmental pollution, insider trading and tax evasion to the public consciousness of what constitutes a crime of the powerful. In Sellin’s (2003) words, ‘if the study of crime is to attain an objective and scientific status, it should not allow itself to be restricted to the terms and boundaries of enquiry established by legislators and politicians’.

According to scholars like Chesterton and Dupont, the intense interest of criminologists in the analysis and prevention of crimes of the powerful is due to the massive growth and myriad new forms of these crimes. Perhaps the most powerful criminals whose crimes are explicitly illegal are the international drug trafficking organizations. In 2004, according to Smith (2004), 550 billion worth of cocaine and other illegal substances were transported illegally across international borders. This trade is therefore larger than the GDP of many African and other third-world countries. Faced with this massive business, and with its catastrophic social consequences, traditional law agencies and their democratic legislators have had to alter radically the way they investigate and prosecute these crimes. The extreme complexity and ingenuity of international drug cartels have meant that governments have had to build equally complex systems of criminological analysis and technique to limit these crimes. Intelligence agencies like MI5 and MI6 in England and the CIA and FBI in the United States now have innumerable specialist groups of scientists, field officers and so on investigating the criminal nature and consequences of organized crime such as drug trafficking, the shipping of illegal weapons and so on.

Perhaps the only organizations on earth with greater power than the above organized crime syndicates are the international corporations of Western countries like Britain, America and so on. Many critics of these organizations (Chomsky, 2003) allege that the secret crimes of these corporations exceed even those of the drug barons. For instance, everyone will be familiar with the recent scandals of Enron, Andersen and Parmalat, where billions of pounds were swindled by these massive companies. Half a century ago this ‘white-collar’ crime was hardly investigated, and such crimes went essentially unnoticed. But greater public consciousness of the activities of these companies through the media has, theoretically at least, imposed a greater accountability and potential punishment upon companies who exploit either their shareholders or their customers. This increased interest in corporate crime has led in turn to the need for a vast number of criminologists to produce theories to explain the causes of such crimes, and then strategies for their prevention.

A further consequence of the media revolution of the past century, and of the changed social assumptions of our society, is that the ‘crimes of governments’ as ‘crimes of power’ are now open to far greater public and professional scrutiny and analysis than ever before. Twenty-four-hour television and instant access to news stories and the daily events of our political life mean that the public can criticise the ‘crimes’ of their governments with greater ease than before. For instance, the vociferous protests in 2003 by citizens of Western democracies against the invasion of Iraq were due to the belief of those citizens that their governments had acted illegally and ‘criminally’ in invading that country. Traditionally, such crimes do not fall into the sphere of ‘criminology’ because of the numerous problems identified in the definition paragraph of this essay. However, criminologists, at least theoretically, and urged by famous opponents of the war like Noam Chomsky and Michael Moore, are coming to analyse and investigate the issues and theoretical difficulties of holding entire governments to account for committing ‘crimes of power’. Many of the principles used by criminologists to analyse the techniques and structures of organized crime are now being suggested for transfer to the analysis of the crimes of government. The analysis of government crime may prove to be one of the most fruitful areas of the coming decades for criminologists.

In this essay, then, the term ‘crimes of the powerful’ refers to such crimes as are carried out by organized criminal gangs (either national or international), by corporations, by governments, and by powerful individuals such as corrupt magnates, businessmen and so on. Such crimes might include corporate fraud and corporate malpractice; the trafficking of illegal narcotics or arms; and high-tech crimes such as computer fraud.

It is necessary for the student of criminology to know something of the state of criminal affairs at the end of the 19th century if he is to find a clear answer for the growth of analysis of crimes of the powerful in the twentieth century. One strong reason why analysis of such crimes was less extensive in, say, 1900 was that many organized crimes did not then exist at all. For instance, the use of narcotics like opium and heroin was widespread amongst all levels of society, but it was also legal; the trade in these drugs was controlled by legally registered companies, and there existed no illegal market for their production or importation. Accordingly, since these acts were not understood as crimes, the British police did not need to analyse their behaviour or causes. Moreover, the size of the police force, as well as its technical and theoretical know-how, was far smaller than it is today in Britain, America, France and so on. Similarly, whilst many companies exploited the Victorian workforce, none did so in the systematic and premeditated fashion characteristic of Andersen, Enron or Parmalat in the past ten years. Other crimes of the powerful, like high-tech computer fraud, obviously required no analysis or theory of criminology since they did not exist at all. In James Smith’s (Smith, 1999, p44) memorable phrase: ‘At the dawn of the twenty-first century the Western world faces a plethora of organised criminality of the like that it has never known before. From the mass trafficking of illegal narcotics, to whole-scale prostitution, to high-tech computer fraud, to corporate offences on a giant scale, the police forces and criminal prevention agencies of the new century will meet challenges as they have never glimpsed in the past.’ And, a little further on: ‘They will no longer compete against petty or isolated crimes of individuals, but against the sophisticated and organized attempts to make vast fortunes by systematically breaking the law.
In this contest between law officer and criminal, the former is now far behind; it remains to be seen whether he will catch up in the near decades’ (Smith, 1999, p44).

Another area of rapid growth in the study of crimes of the powerful has been the feminist critique of domestic violence committed against women by dominant males. Feminists of the last few decades have argued cogently that the term ‘crimes of the powerful’ should also include these domestic abuses, because of the patriarchal structures within our society that promote such abuses. The explosion of such feminist critiques flows from the fact that before this century there was no feminism as such, and domestic abuse was either not considered a crime or was publicly invisible or ignored. The changing social philosophies, such as liberalism, and the attitudes of the twentieth century gave birth to a greater consciousness for women and therefore greater demands by them for social and legal equality. Thus, in the 1960s and 1970s, leading feminists like Germaine Greer campaigned for recognition of the domination of women by societal institutions and conventions that are massively weighted in favour of men. Feminist scholars and theorists argue that the vast majority of these structures and the crimes they inflict upon women go unreported; marital rape is the most frequent abuse, and nearly 80% of women in this predicament are abused repeatedly (Painter, 1991). A whole host of crimes committed by men, supported by social institutions, go unreported and unprosecuted. Some feminists therefore describe a fundamental imbalance in the power structures of Western society, and argue that agencies and organizations should be set up to combat and prevent this crime. In S. Griffin’s words: ‘Men in our culture are taught and encouraged to rape women as the symbolic expression of male power’ (Griffin, 1971); and Brownmiller says eloquently that ‘rapists are the shock troops of patriarchy, necessary for male domination. Some men may not rape, but only because their power over women is already secured by the rapists who have done their work for them’ (Brownmiller, 1976).
This feminist critique therefore demands a considerable extension of the definition of the term ‘crimes of the powerful’ to include all those thousands of incidents of unseen violence issuing from an entire gender that has power over another. In this sense, arguably, feminists have uncovered the greatest ‘crime of the powerful’ of all. According to feminists, the truth of this oppression has been partially recognised by criminological theorists, as reflected in the tides of social legislation passed in recent years to protect women from domestic violence. Nonetheless, they say that criminologists still lack a complete or detailed analytical theory of such violence, a lack itself reflected in the male dominance of criminology.

In the final analysis, the growth of the analysis of crimes of the powerful may be attributed principally to the growth in the number and types of such crimes and the subsequent need to investigate and prevent them. Some crimes of the powerful, such as drug trafficking, are nearly entirely new to our age, and criminologists have had to develop wholly new theories and techniques to combat them. On the other hand, entirely new academic critiques, like those of feminism, sociology and psychology, have identified and produced theories to describe ‘invisible’ crimes of power against groups who before the last century had to suffer in silence. Criminologists too have had to absorb these theories and then learn methods and techniques for applying them to our modern world. Similarly, the rise of the mass media and the extension of democratic institutions have provided citizens with far better information about the behaviour of their corporations and governments; this awareness has in turn led to a consciousness of the similarity in nature between ‘illegal’ crimes like drug smuggling and corporate or governmental crimes like deliberately withholding medicines from the sick or invading a foreign country. These new fields of investigation have given the criminologist much to think about. The student of criminology should not forget that the subject he studies has itself evolved over the last century to become a highly professional and international discipline, and is therefore capable of greater levels of analysis and specialization than ever before.

Housing MMC Construction Essay


Introduction

Modern Methods of Construction (MMC) consist of a range of techniques aimed at improving efficiency in construction.

There is currently a serious shortage of homes in the UK. Mainly as a result of too few homes for sale being built, prices have been forced up to unaffordable levels. It is not possible for many people on average incomes to buy even a cheaper home.

The shortage of homes for rent is causing still greater problems for people on low incomes. Housing waiting lists have lengthened, resulting in more overcrowding and sharing and more homeless families than ever in temporary accommodation.

There is an urgent need for new homes – to make it possible for young families to buy a home, for essential workers in key public services to be able to afford somewhere to live, and for people on low incomes to have a home to rent.

The government is looking to MMC to solve this problem by creating affordable homes quickly, and in 2004 set targets to add an extra 120,000 homes to the housing stock every year for the next ten years. The government’s social housing funding body, the Housing Corporation, has set a target that at least 25% of every new social housing development must be built using MMC techniques.

A shortage of housing in the UK however is by no means a new problem. We’ve been in similar situations before, and looked for similar answers. Many solutions of the past have, in the long-term, failed.

The idea that Modern Methods of Construction could address a low cost housing issue has been used before.

Shortly after the First World War, and the passing of a series of housing acts from 1919, the government became concerned at the high cost of ‘working class’ housing. In 1924 a committee on New Methods of House Construction (a forerunner of what is now called Modern Methods of Construction (MMC)) was set up, which produced a series of reports recommending, among other things, ‘what may be called factory production of houses’.

In the middle of the Second World War, a mission sent to study systems in America urged the wholesale reorganisation of the British building industry; among many other specific points it recommended

Simplification of building design for greater standardisation and mechanisation of constructional work

Much more use of factory produced units and assemblies

55 years later, in 1998, Egan, in his report ‘Rethinking Construction’, recommended exactly the same change of direction. So what went wrong? And has the industry yet listened?

Modern Methods of Construction is the government’s initiative to push firms to look for new technologies, while the government also conducts its own research, because the housing shortage is clearly a serious problem.

The Office of the Deputy Prime Minister defines modern methods of construction as a process to produce more, better quality homes in less time. The Office of the Deputy Prime Minister also offers grants to firms to help them develop new methods, yet despite all this, there is still a problem. The UK is, again, in the middle of a housing crisis.

The housing crisis

Merseyside housing renewal bosses are considering the use of flat-pack BoKlok housing to try to solve the problem of affordable housing in Merseyside. This is of particular interest to me as it is my local area; my immediate concern with this project is whether the housing will actually be ‘affordable’.

House factories – there are currently over 30 house factories in the UK. I will look at Westbury Homes’ Space4 factory near Birmingham, which opened in 2001.

On-site house factories

Despite there being many house factories in the UK, none of them are quite the same as the on-site house factories being used in the US; I plan to see whether these types of factories could be used in the UK.

Problems in the past – quality has often suffered, and aesthetics too

Modern Methods of Construction (MMC)


Post-war prefabricated housing failure

There is a current housing crisis

Volumetric: involves the manufacture of three-dimensional units in factory conditions for delivery to Site. Some units are delivered with all internal and external finishes and services installed.

Panellised: Flat panel units are produced in a factory and assembled on site. These may be ‘open’ panels or frames, to which services, insulation and internal and external cladding are fixed on site, or fully-finished panels incorporating more factory fabrication.

Hybrid: A combination of panellised and volumetric units typically with more highly serviced and repeatable elements (such as bathrooms) supplied as ‘pods’.

Subassemblies and components: Construction methods that incorporate factory-made subassemblies, such as floor cassettes or precast concrete foundations, within otherwise traditional structures can also be classified as MMC.

Non-off-site: Not all MMC is factory based. Some methods, such as those employing lightweight concrete and ‘thin-joint’ mortar construction, are site based.

Prefabricated housing has been used in the UK during periods of high demand, such as after the world wars and during the slum clearances of the 1960s. In total about 1 million prefabricated homes were built during the 20th century, many of which were designed to be temporary. However, problems arose over the quality of building materials and poor workmanship, leading to negative public attitudes towards prefabrication.

In Japan 40% of new housing uses MMC. In other European countries there is also much greater use of MMC, particularly in Scandinavia and Germany. Indeed, some house building companies in Europe have started to export their houses to the UK.

The reasons for greater use of MMC in these countries are uncertain, but suggestions have included:

In colder climates the building season is short due to bad weather

Use of MMC allows quick construction.

MMC building materials, such as timber, are more readily available.

There is a greater tradition of self build housing. MMC appeals because faster construction reduces disruption to neighbours and allows earlier occupancy.

There are cultural preferences for certain house styles, e.g. timber frame in Scandinavia.

Issues surrounding MMC

While the Government is keen to encourage use of MMC for house building, research is still ongoing to assess its benefits. Issues arise over the cost of MMC; the industry capacity; its environmental benefits; the quality of such housing; public acceptance; and planning and building regulations. These questions are considered below.

Cost

Although some house builders argue that MMC is less expensive than traditional methods, industry sources indicate increased costs of around 7-10%. Reasons for the higher costs are difficult to discern because most project financial information is commercially confidential, and traditional masonry building costs vary widely too. It may be that the costs appear high because some benefits of using MMC, such as better quality housing and fewer accidents, are not obviously reflected in project accounts.

MMC housing is faster to build, reducing on-site construction time by up to 50%, and thus reducing labour costs. Quicker construction is an extra benefit for builders of apartments (because viewing often starts only once all flats are finished), and for Housing Associations, who receive rent earlier.

However, it is less important for private house builders as they rarely sell all the properties on a new development at once. An additional consideration is that the majority of factory overhead costs, e.g. labour, are fixed regardless of output. In contrast, site-based construction costs are only incurred if building is taking place. It is therefore less easy with MMC to respond to fluctuating demand.

Industry capacity

Industry capacity may be a barrier to increasing the number of houses built using MMC. Difficulties fall into two categories: a shortage of skills, and the factory capacity to manufacture parts.

Revisions to the Building Regulations

Building Regulations have been a major influence on the design specification for housing. They have been used by the Government to drive up standards, and as the need for more sustainable buildings has increased, two of the regulations have been significantly revised, which has had a large impact on construction methods.

Part L is concerned with the conservation of fuel and power.

Part E is concerned with resistance to the passage of sound, which is becoming more important as dwelling densities increase.

It is not just the improvement in the standards themselves that is exercising the minds of builders but that some aspects of the building’s performance (i.e. air tightness and sound resistance) will now be tested post construction. If the building falls short, expensive remedial work will have to be carried out.

Building performance in these areas is not just dependent on design detail, but also on the repeatability and consistency of good quality construction, aspects that lend themselves to the use of MMC.

Barriers to the use of MMC in housing

In a major survey of the top 100 house builders[8], the following factors were identified as being significant barriers to the introduction of MMC. They are summarised in Box 3 and discussed below in order of importance.

Capital costs

MMC are perceived as being more expensive than traditional methods with economies of scale being hard to achieve. 68% of housebuilders said that this was a barrier to the introduction of MMC. The National Audit Office (NAO)[9] reported that, for Registered Social Landlords (RSLs), open panel systems had a similar cost to traditional methods, but that hybrid and volumetric methods were slightly more expensive. To come to this conclusion, they took into account the following advantages: earlier rental income streams, the Social Housing Grant being drawn down earlier (thereby reducing borrowing and interest payments), reduced defects and reduced inspection. Some of these advantages would also benefit private developers.

The NAO estimated that as the market matured the cost of building elements could be reduced by 15%, which would close the gap in costs between traditional build and volumetric/hybrid MMC. This appears attainable, but is more likely to occur if developers and RSLs partner with manufacturers, so that manufacturers have assurance of the long-term viability of the MMC market. This will enable investment in finding ways of reducing costs through product and process development.

Costs to the developer are also likely to reduce over time as developers become more familiar with MMC and are able to increase the efficiency of on-site trades as a result.

Concern over interfaces

This covers the interfaces (joints) of MMC with traditional build (e.g. how to fit roof trusses to a steel-frame house) and between different MMC systems (e.g. between a steel-frame house and a panellised timber-roof system). This is a genuine concern that must be dealt with head-on. There is of course no reason why interfaces should be more difficult than with traditional construction techniques; they are merely different, and this needs to be planned for.

Early design freeze

MMC does require an early design freeze (when the details of the dwelling are set and cannot be altered) in comparison with traditional build. The timing of the design freeze will depend on the manufacturer’s lead times and this is in part to do with manufacturing capacity. Although MMC providers should do all that they can to minimise their lead times and to build in flexibility, it is likely that users will always experience this problem to some extent.

Planning

The constraint of planning on MMC may be perceived rather than actual. Planners, quite rightly, are keen to make sure that neighbourhoods do not all look the same. Some people’s perception of factory-produced housing is of lines of identical houses, and clearly this should be avoided. The introduction of CAD/CAM techniques makes variation of MMC products relatively simple to achieve, although volumetric units will always have some constraints by their nature.

Having said that, the planners often want to see changes to storey heights, window design and window layout. These can sometimes be agreed at a late stage in the planning application process and can result in extending the factory lead times through:

Delaying the start of the MMC design process, which has to be completed before the MMC product can be produced in the factory.

Delaying the purchase of the fenestration, which is often on long lead times. Where the windows and doors are installed in the MMC product in the factory, the production is either delayed or the product has to be shipped without the fenestration being installed.

In traditional build, windows are fitted much later in the build process and hence their delayed specification is more easily accommodated.

Review and Evaluation of CDM Regulations


Review and evaluate the impact of the proposed Construction (Design and Management) (CDM) Regulations 2006 in the improvement and management of risk.

The proposed changes to the Construction (Design and Management) Regulations 1994 aim to simplify and clarify the existing regulations; to make them more flexible and compatible with procurement requirements; to place the emphasis on the management of health and safety risk rather than on creating paperwork; and to strengthen the co-ordination and co-operation between designers and contractors. The original Regulations were introduced with a view to setting a safety standard, because of the industry’s poor accident record prior to their introduction. The HSC produced a consultation paper explaining the proposed changes on 31 March 2005, with the consultation open until 29 July 2005, though the deadline for response documents was extended to 31 August 2005 because many had not been submitted in time. The Regulations are expected to come into force in October 2006.

The CDM Regulations were made under section 15 of the Health and Safety at Work etc. Act 1974, the principal Act dealing with securing the health and safety of people at work and of those whose health and safety could be affected by work activities. The Regulations came into force on 31 March 1995 and implemented the provisions of European Directive 92/57/EEC, the Temporary or Mobile Construction Sites Directive, which requires a health and safety plan to be adhered to by the key parties involved in a project. These are the client, the planning supervisor, the designer, the principal contractor, and contractors and self-employed persons. The previous approach was statutory, with a view to avoiding unsafe situations.

Under regulation 6, the client or the developer must appoint a planning supervisor and a principal contractor. The planning supervisor must notify the Health and Safety Executive (HSE) of the project, fulfil specific requirements regarding design, and ensure that the health and safety plan complies with the requirements (Regulations 14 and 15). The consultant (or designer) has a duty to design so as to minimise risks in accordance with health and safety legislation. The principal contractor has to co-ordinate all contractors to ensure compliance with the health and safety plan. Contractors and self-employed persons have to co-operate with the principal contractor and to advise of any risks connected with their work.

The CDM Regulations 1994 apply to construction work lasting for more than 30 days or involving more than 500 person days of work; construction work involving five or more people on site at any one time; design risk related to construction and demolition work.

Prior to the introduction of the CDM Regulations 1994, there were around 100 fatal accidents annually in the late 1980s. By 1994 the annual total of fatal accidents had fallen to 75, and from 1994 to 2004 it ranged between 47 and 73. For injuries lasting more than three days, the annual total was 17,177 in 1989/1990, falling to 8,162 in 2003/2004. However, critics of the Regulations pointed out that these figures may reflect a reduction in the amount of construction activity rather than being purely the result of the implementation of the CDM Regulations alone.

There were also inconsistencies in case law; for example, over whether a subcontractor has a duty to warn the contractor of a design defect for which another party was responsible, and over the scope of the implied term as to skill and care owed by a subcontractor to a contractor in performing the contract. In the Court of Appeal case of McCook v Lobo, in which an employee was injured on a construction site after falling from a ladder, it was decided that although the site owner had breached the CDM Regulations 1994 by failing to prepare a health and safety plan in advance of the commencement of the building work, such a plan would have been unlikely to cover the securing of ladders and therefore the breach could not be regarded as having caused the injuries.

In 2002 the Construction Discussion Document (DD) formally recognised that the industry’s health and safety performance needed to change, and the ensuing discussions led to the conclusion that although the principles underpinning the CDM Regulations were accepted, the methods adopted to implement them often left those principles obscured beneath layers of bureaucracy and paperwork.

Therefore, the HSC concluded that the CDM Regulations needed to be revised by refocusing attention on effective but practical planning and management of construction projects. The Health and Safety Commission launched a four-month consultation on its proposals to replace both the Construction (Design and Management) Regulations 1994 and the Construction (Health, Safety and Welfare) Regulations 1996 with a single set of Regulations.

A draft set of amended CDM Regulations has been drafted together with a draft of revisions to the Approved Code of Practice by the HSC and the Construction Industry Advisory Committee (CONIAC). The new CDM Regulations were made available for comment in the hope that a set of Regulations can be formed that properly address the industry’s concerns in relation to health and safety and the inadequacies of the current Regulations.

It was also argued that many of the intended benefits of the current Regulations were not being fully realised. This was contributed to by the difficulty of implementing radical change in the construction industry, the financial implications of full CDM compliance, the structure of the Regulations themselves, and the unsatisfactory role of the planning supervisor, who was not part of the core construction team.

The complexity of the regulations themselves was a problem despite the consensus regarding the underlying ethos remaining valid. The proposed CDM Regulations are intended to be simpler and to remove any uncertainty regarding the nature of the duties imposed. They are also structured differently, setting out precisely what is expected of each duty holder.

The changes introduced by the CDM Regulations 2006 include the following. There will be two types of construction project, notifiable and non-notifiable; a project remains notifiable if it is likely to involve more than 30 days or 500 person days of construction work, and notification to the HSE must be made before design work, planning or preparation for construction begins. The client must ensure that there are suitable project management arrangements for health and safety and must allocate sufficient resources, explicitly including time, to ensure that this can happen. To make sure principal contractors have sufficient time to make proper preparations for work on the site, the co-ordinator has to advise them of the minimum notice allowed between appointment and commencement of work.

The client and the principal contractor must also ensure adequate facilities are in place at the start of the construction phase of the project, by means of a document prepared by the principal contractor setting out the health and safety arrangements and site rules for a project.

The client can no longer appoint an agent to whom these duties are delegated; the provisions on agents will be removed, as they are seen as a means for clients to absolve themselves of their legal obligations. Several clients on the same project can now agree amongst themselves that one of them should be treated as the sole client, the aim being to prevent anyone retaining control while avoiding responsibility.

Furthermore, the client and the principal contractor must ensure that adequate welfare facilities are in place at the commencement of construction. In relation to the planning supervisor, that role is to be replaced by a co-ordinator, who must be appointed before design work commences; designers and contractors cannot be appointed in advance of the co-ordinator.

The designer must eliminate any hazards and reduce risks to the health and safety of persons carrying out construction work, cleaning or maintaining the permanent fixtures or using the structure as a place of work, and provide sufficient information about the design, construction or maintenance of the structure to assist any other designers and the principal contractor fulfil their duties.

Further requirements are specified in relation to competence: no appointment or engagement is to be accepted unless the person concerned is competent, perhaps judged against industry standards. The pre-tender or pre-construction plan is to be replaced with an information pack that focuses attention on communicating the information that designers and contractors need in order to plan and do their work. The health and safety file will be required for a site rather than for each particular project. Demolition has to be planned and carried out in a manner that prevents, as far as possible, unnecessary danger, with the arrangements for demolition work recorded in writing.

The change in civil liability arising from the introduction of the new Regulations is that employees (though not self-employed workers) will now be permitted to take action in the civil courts for injuries resulting from failure to comply with duties under the Regulations.

The new Regulations are regarded as representing a ‘radical and fundamental change in construction health and safety legislation’. They can be regarded as much more detailed and prescriptive than the CDM Regulations 1994 and will impose a wide range of new duties and potential liabilities, with a potentially significant impact on the allocation of risks and responsibilities in the construction industry. It can be argued that the biggest change is in the duties of the client, who now has a number of new responsibilities for health and safety. Furthermore, wider duties have been imposed upon both the designer and the principal contractor than under the CDM Regulations 1994, and all sectors of the construction industry need to be aware of the effect of the proposed Regulations and the significantly increased risk of enforcement action, including prosecutions by the HSE, for all members of the project team.

The purpose of the Regulations is arguably to ensure that responsibility for health and safety is placed with those who are best placed to manage it and to simplify the legislation to make it easier to understand the roles, responsibilities and duties of the various members of the project team.

In evaluating the changes introduced by the CDM Regulations 2006, the consequences are demonstrated by the changes to the client’s responsibilities, made on the basis that the client has the greatest control and influence over a construction project. There is nonetheless a significant onus upon the client in the obligation to appoint a competent co-ordinator and a principal contractor, and the obligation to ensure that the co-ordinator performs his duties under the Regulations. An additional obligation is the duty to ensure that the designer, principal contractor and contractors are given sufficient time to plan and prepare for carrying out construction work.

In relation to co-ordinator duties, it can be seen that the role of the co-ordinator is similar to that of the planning supervisor under CDM Regulations 1994, but with a number of important additional responsibilities which make the role of co-ordinator prominent in the project team.

The co-ordinator’s role is intended to assist the client, designer and principal contractor to achieve better health and safety on site.

The client’s obligation is demonstrated by the need to appoint the co-ordinator at an early stage in the project and before any design work or preparation for construction is carried out.

The obligation of the co-ordinator is to “identify and extract” all the information needed to secure the health and safety of persons engaged in construction work and of those who are liable to be affected by the way in which that construction work is carried out. He is also required to identify and extract information to assist the client, the designer and the principal contractor to perform their duties under the Regulations. Arguably, the co-ordinator has a broader responsibility for design and is required to advise on the “suitability and compatibility” of designs and on any need for modification to those designs. The co-ordinator is also required to liaise with the principal contractor in relation to any design or design changes which affect the construction phase plan.

The obligations upon the designer include the requirement to “eliminate” hazards which may give rise to risks to health and safety (e.g. not specifying the use of materials which could be hazardous, or addressing design issues to minimise the use of scaffolding or working at height). Furthermore, the designer must also take into account the risk to any person using a structure he designs as a place of work in the future when preparing or modifying the design.

The obligations upon the principal contractor include the obligation to ensure that every contractor is given sufficient information to carry out its obligations under the Regulations and to allow the contractor to carry out the work safely. He must also ensure that every worker carrying out construction work is provided with a site induction and any further information and training needed to ensure that a particular element of work is carried out without unnecessary risk to health and safety.

There is also an obligation of co-operation: a duty is imposed upon everyone covered by the CDM Regulations 2006 to co-operate with each other, and to seek the co-operation of others involved in any project involving construction work, so as to enable each party to meet their obligations under the Regulations.

Criticisms of the HSC’s attempt to adapt the original Regulations include the argument that the industry has a poor record of focusing upon the safety rules rather than the paper trail, and that, as demonstrated by the ten-year record of the CDM Regulations 1994, the industry’s record of reaping the benefits of such changes is not good. It has been argued that implementation of such rules should be simple, since it merely involves managing projects from concept to completion, ensuring that there are adequate resources and sufficient time, and integrating health and safety standards into all levels of project management, with the benefits demonstrably ensured as a result; yet the industry has always chosen to focus upon the costs and the unnecessary paperwork, ignoring these benefits.

Other criticisms include the argument that the CDM Regulations 2006 do not go far enough in addressing the underlying causes of the industry’s health and safety record, argued by some sectors as being unacceptable. It has been argued that merely replacing a paper trail system with a system that focuses upon co-operation and management is not going to change much in the statistics regarding health and safety, as in many cases the designers argue that the contractors do not understand their design solutions, and contractors argue that designers do not understand how buildings are built. It is argued that although it is hoped that the planning supervisor can override these problems by bridging the gap, often they cannot because of inadequate fees, lack of authority or a lack of skill.

Therefore, the system of ‘co-operation’ would not work because the more duties are imposed, the more unclear each individual duty appears to be.

It has been acknowledged that the CDM Regulations 2006 could improve matters to some degree in relation to the need for training and debate to increase health and safety awareness, but an alternative solution has been suggested in which the clients procuring the projects should be made ultimately responsible for health and safety issues, as the client is in the ideal position to do so. In this instance, it has been considered that the duties delegated to the client under the CDM Regulations 2006 are vague and relate to matters such as the provision of information. It is therefore argued that there cannot be a significant change of the improvement and management of risk until clients in at least the public and commercial sectors are given more direct responsibility for ensuring that projects are carried out with regard to the safety standards. In ensuring this, reference is made to the need for civil and criminal sanctions.

In conclusion, the proposals made by the HSC are merely an attempt to address many of the main problems of the current Regulations, but as the HSC is willing to admit, they do not deal with all the issues and are intended to be a starting point, to encourage and facilitate discussion by means of responses from members of the construction industry. The delay in submitting responses by the prescribed deadline has not in theory affected the fact that the new Regulations are due to be implemented in October 2006. It can be argued that, contrary to the criticisms levelled at the CDM Regulations 2006, the responsibilities of the client have been increased to an appropriate degree, and that appropriate obligations have also been placed upon other participants in a project in a fair and proportionate manner.

It appears that, even so, the ultimate onus is upon the client to ensure that a planning supervisor is employed with the correct skill and experience to ensure the smooth running of the project and to effectively address the concerns regarding the management of risk.

Cellular Networks and Wireless Data Applications

Introduction

Computers and computer networks have changed the way we live, run our lives and communicate with each other, as well as the way we work and produce what makes every commercial organisation function, reach success within its field and continue on the path of that success.

As stand-alone, isolated machines, computers are nothing more than advanced computing devices; what was required in reality was a way to link computers with each other and to allow users simultaneous access to databases and information, and this is why networks had to be created. Tanenbaum (2003, p.2) explains this fact by stating that “The merging of computers and communications has had a profound influence on the way computer systems are organized. The concept of the ‘computer centre’ as a room with a large computer to which users bring their work for processing is now totally obsolete. The old model of a single computer serving all of the organization’s computational needs has been replaced by one in which a large number of separate but interconnected computers do the job. These systems are called computer networks.”

The main principle behind computer networking is communication between two or more computer systems. Computers within a network might be close to one another (as is the case with Bluetooth, for example) or hundreds of kilometres away from each other (as over the Internet).

The first important step in this field came in 1984, when a completely digitalised, circuit-switched telephony system was introduced: the Integrated Services Digital Network (ISDN), for voice and non-voice data. After that, BellCore started developing the standard for the Synchronous Optical Network (SONET), and by the end of the 1980s Local Area Networks (LANs) had appeared as an effective method of transferring data between a number of local computers, which led telephone companies to replace all their analogue multiplexing with digital multiplexing.

It is also essential to point out the role of the Internet. This internationally linked network, composed of servers and clients all over the world, encouraged the changes in both information technology and mobile computing, and this is why most new products and applications advertise characteristics such as wireless connectivity, Bluetooth links, infrared and much more. Raidl (2003, p.199) states that “mobile cellular networks are by far the most common of all public wireless communication systems. One of the basic principles is to re-use radio resources after a certain distance.”

Walters and Kritzinger (2004) refer to the fact that mobile technology has become one of the fastest, if not the fastest, growing fields in the telecommunications industry.

To give a clearer idea of the change brought to the world and to every one of us, we can refer to the comments of Furht and Ilyas (2003), who state that “just a few years ago, the only way to access the Internet and the Web was by using wireline desktop and laptop computers. Today, however, users are traveling between corporate offices and customer sites, and there is a great need to access the Internet through wireless devices. The wireless revolution started with wireless phones and continued with Web phones and wireless handheld devices that can access the Internet”.

Types of network

Computer networks can vary according to the purpose for which they were created and depending on the area they are supposed to cover geographically. Computer networks can be one of the following:

1) LAN (Local Area Network) is “a small interconnection infrastructure that typically uses a shared transmission medium. Because of such factors as the volume of traffic, the level of security, and cost, the network structure in a local area network can be significantly different from that for a wide area network.” And “LAN is used for communications in a small community in which resources, such as printers, software, and servers, are shared. Each device connected to a LAN has a unique address. Two or more LANs of the same type can also be connected to forward data frames among multiple users of other local area networks” (Mir, 2007, p.102).

2) WAN (Wide Area Network) “spans a large geographical area, often a country or continent. It contains a collection of machines intended for running user (i.e., application) programs” (Tanenbaum, 2003, p.19).

3) CAN (Campus Area Networks) “are the enterprise networks that serve a number of related structures, as in a large company or a college campus” (Lehtinen, Gangemi, Gangemi Sr, and Russel, 2006, p.182).

4) MAN (Metropolitan Area Network) which “covers a city. The best-known example of a MAN is the cable television network available in many cities” (Tanenbaum, p.18).

5) HAN (Home Area Network) is “the connection of a number of devices and terminals in the home on to one or more networks which are themselves connected in such a way that digital information and content can be passed between devices and any access ‘pipe’ to the home” (Turnbull & Garrett, 2003, p.46).

Cellular networks

In their description of the first cellular radio networks in history, Walters and Kritzinger (2004) state that “in 1946, the first car-based telephone was set up in St. Louis, Missouri, USA. The system used a single radio transmitter on top of a tall building. A single channel was used, therefore requiring a button to be pushed to talk, and released to listen. This half duplex system is still used by modern day CB radio systems utilized by police and taxi operators. In the 1960s, the system was improved to a two-channel system, called the improved mobile telephone system (IMTS)… Cellular radio systems, implemented for the first time in the advanced mobile phone system (AMPS), support more users by allowing reuse of frequencies. AMPS is an analogue system, and is part of first generation cellular radio systems.”

Even though they have become one of the most common and popular means of communication in recent years, cellular networks still have no specific definition: “Cellular communications has experienced explosive growth in the past two decades. Today millions of people around the world use cellular phones. Cellular phones allow a person to make or receive a call from almost anywhere. Likewise, a person is allowed to continue the phone conversation while on the move. Cellular communications is supported by an infrastructure called a cellular network, which integrates cellular phones into the public switched telephone network” (Zhang and Stojmenovic, 2005, p.654).

This difficulty in finding a definition is due to the fact that there are different technologies and networking methods used within the frame of cellular networks. Frantz and Carley (2005, p.5) explain that “cellular networks are a distinct and important network topology. Although there is a growing body of work referring to cellular networks, there is no complete formal definition. However, there are several papers that seek to describe characteristics of cellular networks. Cellular networks are a critical topology to formally characterize, in part, as they are thought to be a common form for covert networks.”

Yet, it is possible to find some kind of an explanation of such networks and how they operate:

“Cellular networks use a networked array of transceiver base stations, each located in a cell to cover the networking services in a certain area. Each cell is assigned a small frequency band and is served by a base station. Neighbouring cells are assigned different frequencies to avoid interference. However, the transmitted power is low, and frequencies can be reused over cells separated by large distances” (Mir, 2006 p.42).

To be considered a functional type of communication network, a cellular network relies “on relatively short-range transmitter/receiver (transceiver) base stations that serve small sections (or cells) of a larger service area. Mobile telephone users communicate by acquiring a frequency or time slot in the cell in which they are located. A master switching centre called the ‘mobile transport serving office’ (MTSO) links calls between users in different cells and acts as a gateway to the PSTN” (Muller, 2003, p.50).

Each cellular network is composed of what can be referred to as cells, which are defined by Frantz and Carley (2005) as “a distinct subgroup of actors within a larger cellular network. The presence of at least one cell is fundamental to a network’s distinction of being cellular—without at least one cell, a network is not cellular. Empirically, a cell often consists of relatively few actors and has a distinct topology that is effortless to identify visually. The actors in a cell can be partitioned into two distinct but intertwined subgroups, namely the cell-core and the cell-periphery.” Muller (2003) explains that there is no specific size for cells within a cellular network, because many factors interfere in this element; a cell’s size is determined by the surrounding environment and obstacles: “Cell boundaries are neither uniform nor constant. The usage density in the area, as well as the landscape, the presence of major sources of interference (e.g., power lines, buildings), and the location of competing carrier cells, contributes to the definition of cell size. Cellular boundaries change continuously, with no limit to the number of frequencies available for transmission of cellular calls in an area. As the density of cellular usage increases, individual cells are split to expand capacity. By dividing a service area into small cells with limited-range transceivers, each cellular system can reuse the same frequencies many times.”
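The frequency re-use described above can be illustrated with a small sketch. Under the standard hexagonal-cell idealisation (an assumption not stated in the quoted text, but the usual textbook model), co-channel cells in a cluster of N cells are separated by a reuse distance D = R√(3N), where R is the cell radius. A minimal Python sketch, with hypothetical numbers:

```python
import math

def reuse_distance(cell_radius_km: float, cluster_size: int) -> float:
    """Co-channel reuse distance D = R * sqrt(3N) for an idealised
    hexagonal cell layout with cluster size N."""
    return cell_radius_km * math.sqrt(3 * cluster_size)

# A larger cluster size N spaces co-channel cells further apart,
# reducing interference at the cost of fewer channels per cell.
for n in (3, 7, 12):
    print(f"N={n}: D = {reuse_distance(2.0, n):.2f} km")
```

This is only a sketch of the geometric principle: real cell boundaries, as Muller notes, are neither uniform nor constant.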

According to Muller (2003), a cellular network is composed also of a Master Switching Centre which “operates similar to a telephone central office and provides links to other offices. The switching centre supports trunk lines to the base stations that establish the cells in the service area.” Another component is the transmission channels which are, in most cases, two kinds of channels; a control channel and a traffic channel. And, of course, to close the circle within this network, a cellular phone is needed; “cellular telephones incorporate a combination of multi-access digital communications technology and traditional telephone technology and are designed to appear to the user as familiar residential or business telephone equipment.”
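The control-channel/traffic-channel split described above can be sketched roughly as follows: a call request arrives on the control channel, and the base station assigns a free traffic channel for the conversation, or blocks the call if none is free. The class, method names and channel numbers here are hypothetical, not a real air interface:

```python
class BaseStation:
    """Toy model of a base station's channel assignment: requests are
    heard on a control channel; conversations use traffic channels."""

    def __init__(self, traffic_channels):
        self.free = list(traffic_channels)  # traffic channels not in use
        self.in_use = {}                    # phone id -> assigned channel

    def control_request(self, phone_id):
        """Handle a call-setup request heard on the control channel."""
        if not self.free:
            return None  # all traffic channels busy: the call is blocked
        channel = self.free.pop(0)
        self.in_use[phone_id] = channel
        return channel

    def release(self, phone_id):
        """Call ended: return the traffic channel to the free pool."""
        self.free.append(self.in_use.pop(phone_id))

bs = BaseStation(traffic_channels=[1, 2])
print(bs.control_request("phone-A"))  # 1
print(bs.control_request("phone-B"))  # 2
print(bs.control_request("phone-C"))  # None -- blocked, no free channel
bs.release("phone-A")
print(bs.control_request("phone-C"))  # 1
```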

During their evolution and continuing enhancement, cellular networks went through consecutive levels of development; each of them added more power and functionality to the previous one. Zhang and Stojmenovic (2005, p.654) explain that cellular networks have had three stages that are called generations. The first of those generations is analogue in nature. Then, when more cellular phone subscribers needed to be connected and function simultaneously, digital TDMA (time division multiple access) and CDMA (code division multiple access) technologies appeared and were put to work; and this was the second stage or what is known as the second generation (2G) which was necessary in order to increase the capacity of the cellular network.

“With digital technologies, digitized voice can be coded and encrypted. Therefore, the 2G cellular network is also more secure.” With the growing importance of Internet-related applications, many users demanded more of their cellular devices, and so the third generation (3G) arrived. 3G “integrates cellular phones into the Internet world by providing high speed packet-switching data transmission in addition to circuit-switching voice transmission. The 3G cellular networks have been deployed in some parts of Asia, Europe, and the United States since 2002 and will be widely deployed in the coming years.” There are also expectations regarding fourth generation wireless networks: “These will evolve towards an integrated system, which will produce a common packet-switched (possibly IP-based) platform for wireless systems, offering support for high-speed data applications and transparent integration with the wired networks” (Nicopolitidis, Obaidat, Papadimitriou and Pomportsis, 2003).

Cellular networks make use of certain protocols in order to make communication easier between various entities within the limits of the network. A protocol of communication can be defined as a group of rules which correspond to messages that two or more entities communicate between each other within a network. Protocols used for cellular networks are included within the standard which is covering the service. The first and most popular standard for mobile phones is GSM (Global System for Mobile communications). Other standards are CDMA and TDMA.
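The TDMA principle mentioned above, dividing a shared frequency into recurring time slots with one slot per active user, can be sketched as follows. The eight-slot frame loosely mirrors GSM's frame structure, but the scheduling logic is a deliberate simplification, not a real air-interface implementation:

```python
def tdma_schedule(users, slots_per_frame, num_frames):
    """Assign each user a recurring time slot: user i transmits in
    slot i of every frame. Returns a list of frames, each a list of
    who transmits in each slot (None for idle slots)."""
    if len(users) > slots_per_frame:
        raise ValueError("more users than slots; a real network would block the call")
    frames = []
    for _ in range(num_frames):
        frame = [users[i] if i < len(users) else None
                 for i in range(slots_per_frame)]
        frames.append(frame)
    return frames

schedule = tdma_schedule(["A", "B", "C"], slots_per_frame=8, num_frames=2)
print(schedule[0])  # ['A', 'B', 'C', None, None, None, None, None]
```

CDMA, by contrast, lets users share the whole band simultaneously and separates them by code rather than by time, which a slot table like this cannot represent.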

Another important point concerning cellular networks is what can be called Location Management, which is essential for the network to monitor every registered mobile station’s location so that the mobile station can be able to connect to the network upon request.
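As a rough illustration of location management, the sketch below keeps a registry of each mobile station's last reported cell, so that an incoming call can be paged to the right base station. The class and identifiers are hypothetical; real networks split this role between home and visitor location registers (HLR/VLR) with far more machinery:

```python
class LocationRegister:
    """Minimal sketch of location management: track which cell each
    registered mobile station was last seen in."""

    def __init__(self):
        self._location = {}  # mobile station id -> current cell id

    def location_update(self, station_id, cell_id):
        """Called when a mobile station reports crossing into a new cell."""
        self._location[station_id] = cell_id

    def page(self, station_id):
        """Return the cell to page for an incoming call, or None if unknown."""
        return self._location.get(station_id)

hlr = LocationRegister()
hlr.location_update("07700-900123", "cell-12")
hlr.location_update("07700-900123", "cell-13")  # moved to a neighbouring cell
print(hlr.page("07700-900123"))  # cell-13
```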

It is important to note the similarities between cellular networks and Wireless LANs, but it is also worthwhile noticing the differences between the two: “Goals for third-generation wireless communication, enunciated in the early 1990s by the International Telecommunications Union Task Group IMT-2000, focused on the first two criteria, bit rate and mobility. Third-generation systems should deliver 2 Mbps to stationary or slowly moving terminals, and at least 144 kbps to terminals moving at vehicular speeds. Meanwhile, WLAN development has confined itself to communications with low-mobility (stationary or slowly moving) terminals, and focused on high-speed data transmission. The relationship of bit rate to mobility in cellular and WLAN systems has been commonly represented in two dimensions” (Furht and Ilyas, 2003, p.33).

Wireless data applications

With the continuous growth of mobile devices, different services were created in order to widen the range of functionality of those devices. For such devices to be able to use the newly offered services, specific types of applications had to be created and deployed or installed on the mobile device, be it a cell-phone, a PDA or a notebook computer. “Wireless data services use a mix of terrestrial and satellite-based technologies to meet a wide variety of local (in building or campus settings), metropolitan, regional, national, and international communication needs… A number of wireless data applications, in fact, are being designed with fixed users in mind” (Office of Technology Assessment, 1995).

To understand how wireless data applications work, it is necessary to have a comprehensive view of their delivery methods; as a matter of fact, there are two main ones: “There are two fundamental information delivery methods for wireless data applications: point-to-point access and broadcast. In point-to-point access, a logical channel is established between the client and the server. Queries are submitted to the server and results are returned to the client in much the same way as in a wired network. In broadcast, data are sent simultaneously to all users residing in the broadcast area. It is up to the client to select the data it wants” (Zomaya, 2002).
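The two delivery methods quoted above can be contrasted in a toy sketch: point-to-point answers one client's query over a dedicated logical channel, while broadcast sends everything to all attached clients and leaves selection to each client. All names and data here are illustrative:

```python
class Server:
    """Toy server supporting both delivery methods."""

    def __init__(self, data):
        self.data = data      # e.g. {"weather": "rain", "traffic": "heavy"}
        self.clients = []

    def attach(self, client):
        self.clients.append(client)

    def query(self, key):
        # Point-to-point: the answer goes back to the one querying client.
        return self.data.get(key)

    def broadcast(self):
        # Broadcast: everything is sent to all clients; each filters locally.
        for client in self.clients:
            client.receive(dict(self.data))

class Client:
    def __init__(self, wanted):
        self.wanted = wanted
        self.inbox = None

    def receive(self, payload):
        # The client selects only the item it wants from the broadcast.
        self.inbox = payload.get(self.wanted)

server = Server({"weather": "rain", "traffic": "heavy"})
c = Client("traffic")
server.attach(c)
print(server.query("weather"))  # point-to-point: rain
server.broadcast()
print(c.inbox)                  # broadcast with client-side selection: heavy
```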

Wireless data applications can be divided into two main groups: messaging and remote access. “Messaging applications can generally tolerate low throughput and long transmission delays. Electronic mail (e-mail) often fits this category, but not always: messages with attached files may strain the capacity of wireless messaging networks.” Remote access, in turn, is required to allow access to the resources and services of a network from outside the geographical boundaries of that network’s physical establishment (Brodsky, 1997).

Conclusion

Drawing together the information presented throughout this paper, a cellular network can be accurately defined as follows: “We define a cellular network as a single-component and undirected network of actors and their relationships, strictly consisting entirely of actors who are members of a specific cell, as previously defined; thus a network in which all actors are a member of a cell. For a network to be considered cellular, these conditions must be met: (a) the ties making up the relations in the network may only be undirected, (b) the network consists of a single component, e.g., there are no isolate actors, and (c) the network consists solely of cell subgroups that are connected via spanning ties, e.g., there is no actor in the network who is not a member of a cell subgroup” (Frantz and Carley, 2005, p.10).

As for wireless data applications, Brodsky (1997) stated that if such applications were to become as widespread and popular as simple mobile phones were at the end of the 1990s, users would need to be able to “readily and reliably send and receive data over paging, cellular and PCs”. A decade after those words were written, that is exactly the phase we are experiencing today.

Reference List

Brodsky, I. (1997) Wireless Computing: A Manager’s Guide To Wireless Networking. New York, New York: John Wiley & Sons, Inc.

Furht, B. and Ilyas, M. (2003) Wireless Internet Handbook—Technologies, Standards, and Applications, Boca Raton, Florida: CRC Press LLC.

Frantz, T. and Carley, K. (2005) A Formal Characterization of Cellular Networks, CASOS Report, [Online] September. Available at: http://cos.cs.cmu.edu/publications/papers/CMU-ISRI-05-109.pdf

Lehtinen, R., Russell, D. and Gangemi, G.T. Sr. (2006) Computer Security Basics, Sebastopol, California: O’Reilly & Associates.

Mir, N. (2007) Computer and Communication Networks, Saddle River, New Jersey: Pearson Education, Inc.

Muller, N. (2003) Wireless A to Z, New York, New York: The McGraw-Hill Companies, Inc.

Nicopolitidis, P., Obaidat, M., Papadimitriou, G. and Pomportsis, A. (2003) Wireless Networks, West Sussex, England: John Wiley & Sons Ltd.

Office of Technology Assessment – Congress of the United States. (1995) Wireless technologies and the national information infrastructure. Washington, DC: DIANE Publishing.

Raidl, G. (2003) Applications of Evolutionary Computing, Berlin, Germany: Springer.

Tanenbaum, A. (2003) Computer Networks, Upper Saddle River, New Jersey: Pearson Education, Inc.

Turnbull, J. and Garrett, S. (2003) Broadband Applications and the Digital Home. Stevenage, United Kingdom: The Institution of Electrical Engineers.

Walters, L. and Kritzinger, P. (2004) ‘Cellular Networks: Past, Present, and Future’, Association for Computing Machinery [Online]. Available at: http://www.acm.org/crossroads/xrds7-2/cellular.html

Zhang, J. and Stojmenovic, I. (2005) Cellular networks, Handbook on Security (H. Bidgoli, ed.), Vol. I, Part 2, chapter 45, pp.654-663.

Zomaya, A. (2002) Handbook of Wireless Networks and Mobile Computing, New York, New York: John Wiley & Sons, Inc.

Euripides Hippolytus Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Illustrate the importance of the themes of self-control, shame and desire in Euripides’ Hippolytus. How does Euripides connect these themes to the world of the Athenian audience?

Euripides’ Hippolytus (1972) is a paradoxical play that, at its heart, deals with the outcomes of conflicting human emotion. As Charles Segal suggests in his study Euripides and the Poetics of Sorrow (1993), in common with a great many of the playwright’s other works – Alcestis, Hecuba etc. – Hippolytus “examines the divisions and conflicts of male and female experience (and) all three also experiment with the limits of the tragic form” (Segal, 1993: 3).

There are no clear-cut moral demarcations in Hippolytus; the ethical sense and movement of the piece is symbolised by the figures of Aphrodite and Artemis, who straddle the drama both symbolically and physically, being as they are present in both the first and last scenes. As we shall see, the outcomes of the narrative veer more towards a psychological questioning of what it is to be human than any moral proselytizing, and the characters show both weakness and strength in their dealings with the Gods and their quixotic natures. With this in mind, in this essay I would like to look at this concept in Hippolytus, and more specifically at how it relates to the notions of self-control, shame and desire, all subjects that form an integral part of the drama’s ultimate socio-ethical meaning.

Firstly, I will look at the drama itself, attempting to illustrate and draw out instances of moral thinking within it; then I will move on to examine the ways in which these are blurred and made complicated by Euripides, before going on to suggest ways in which this might have been specifically tailored as both a critique and a lesson to the contemporary Athenian audience.

Aristotle, in his Poetics (1965), calls Euripides our most tragic of poets (1965: 49), chiefly because of the misfortune that befalls many of his leading characters at the conclusions of his dramas. However, Aristotle also criticises Euripides for the faulty management of other aspects of the plot, and the moral and ethical position of his characters must be one of these. Let us, for instance, consider the character of Hippolytus himself. On the surface, he seems to fulfil the rubric set by Aristotle that states a tragic hero must be better than average (Aristotle, 1965: 52) in terms of morality and humanity; Hippolytus is a follower of Artemis, the Greek goddess of constancy and self-control, as is stated by Aphrodite in the opening passages:

that son of Theseus born of the Amazon, Hippolytus, whom holy Pittheus taught, alone of all the dwellers in this land of Troezen, calls me the vilest of the deities. Love he scorns, and, as for marriage, will none of it. (Euripides, 1972: 225)

It is this self-control that is the main focus of the play, as Hippolytus is shown to be, as Aristotle states, of better than average moral worth. However, there are subtle psychological suggestions that beneath the external veneer of moral constancy, Hippolytus is as weak and as human as his audience. We can witness, for example, his misogynistic tirade after the Nurse reveals Phaedra’s actions:

Great Zeus, why didst thou, to man’s sorrow, put woman, evil and counterfeit, to dwell where shines the sun? If thou wert minded that the human race should multiply, it was not from women they should have drawn their stock (Euripides, 1972: 230)

This scene could be interpreted, as indeed Barnes and Sutherland do in Hippolytus in Drama and Myth (1960: 82), as the reaction of an overtly moral consciousness to the very object he sees as threatening it. However, this scene could also be indicative of what Melanie Klein called projection (Klein, 1991; 1997), in which the subject attributes traits and failings of their own self to another. With this in mind, it is easy to see that what one witnesses in Hippolytus’ misogyny is something much deeper than a mere hatred of women: the projection of his own self-hatred, brought about by the constant repression of his desire.

This at once adds a psychological layer of complexity to Euripides’ characters and also distinguishes them from the relatively simplistic tenets of Aristotle.

What then are the outcomes of Hippolytus’ moral conflicts? What are the tragic results? According to Aristotle, the tragedy is characterised by a change in fortune from prosperity to misery (Aristotle, 1965: 48), and we can see this is certainly the case with a number of the characters. Theseus makes this journey in what we could think of as a typically Attic manner. We can note his initial moral position as being one of conviction, as he defends the honour of his wife against the perceived laxity of his son, as in this passage:

Behold this man; he, my own son, hath outraged mine honour, his guilt most clearly proved by my dead wife (Euripides, 1972: 232)

We can also see, however, that this is short-lived, as we become witness to what Aristotle called the anagnorisis, or the discovery; the goddess Artemis being the facilitator of this action. In the character of Phaedra, however, this situation is, to an extent, reversed. She begins the play as an innocent victim of Aphrodite’s wish to wreak revenge on Hippolytus:

Aphrodite: So Phaedra is to die, an honoured death ’tis true, but still to die; for I will not let her suffering outweigh the payment of such a forfeit by my foes as shall satisfy my honour. (Euripides, 1972: 225)

Of course, because of this it is Phaedra’s desire that is the motivating force behind the tragedy. She is, in many ways, the human manifestation of the drives of Aphrodite, as Hippolytus is of Artemis. Like Hippolytus, however, she is caught between the two poles of desire and self-control: firstly, by not acting upon her sexual drives and, secondly, by committing suicide. It is only in her letter, which ultimately damns Hippolytus, that she shows her true nature:

I can no longer keep the cursed tale within the portal of my lips, cruel though its utterance be. Ah me! Hippolytus hath dared by brutal force to violate my honour, recking naught of Zeus, whose awful eye is all over. (Euripides, 1972: 232)

Phaedra’s character here alters from innocent victim of the gods to false accuser. Interpreted in a contemporary light, however, could we not suggest that her actions are the products not of an innate maliciousness but of her own shame? Trapped between the desires instilled in her by Aphrodite and that which she knows is socially correct, she not only chooses to take her own life but, in a psychological sense, refuses to acknowledge her sin. Again Euripides displays the concept of projection, only this time it is Phaedra’s self-loathing and shame that is projected onto Hippolytus.

The enormity of this act – the sexual longing of an older woman for a younger man and the suggestion of an incestuous relationship – is stressed by James Morwood in his essay on Euripides:

The Athenian legal speeches attest to the domestic conflicts to which this could lead. But it could also cause sexual confusion, and the canonical Greek articulation of the illicit love of a married woman for a single man, the famous love of Phaedra for Hippolytus, is compounded by the quasi-incestuous connotations of the step parent-stepchild bond. (Morwood, 1997)

In this, the play must have had a definite political subtext; Euripides serving as a guardian of public morality, suggesting that tragedy arises out of illicit love between near family members.

There is, however, another, deeper meaning to the play, I think, and one that would be just as relevant to an Athenian audience as a warning against incest. What we see in the play’s structure, in its very narrative form, are circles within circles. Each character ultimately suffers, and they suffer not only from their individual desires, shames and lack of self-control but through each other’s. Phaedra suffers through her desire for Hippolytus and through the actions of the Nurse; Hippolytus suffers through the actions of his father and stepmother; and Theseus suffers through the actions of his wife and son. By structuring his narrative in such an interconnected way, Euripides suggests that personal desire and lack of self-control affect not only the individual but those around them; we are, in a sense, connected, and our actions resonate outwards to those around us.

As Sophie Mills suggests in her study Theseus, Tragedy and the Athenian Empire (1997: 19), there is a further thread to the play, one that concerns the relationship man has to the Gods. It must not be forgotten that the tragedy in Hippolytus ultimately emanates from the Goddess Aphrodite; it is her actions, after all, that set in motion the entire drama. The two Goddesses, as I stated in the earlier parts of this paper, form a binary that entraps the main characters of the play and forces them along predestined paths. Euripides’ ultimate philosophical subtext is, then, one of man’s position relative to the Gods and to the fate that they represent, and he achieves this not only through the psychological polarity in which the characters find themselves but also through the physical polarity of the two Goddesses.

As Mills suggests, the character of Theseus, in many ways, represents the very populace of Athens:

Where he is the representative of Athens in tragedy, Theseus embodies Athenian civilization in all its manifestations, so that he is usually less an individual character with his own fate than a symbol of Athenian virtue. He is consistently given characteristics which are considered as especially commendable in Athenian (and often Greek) thought, and such characteristics are usually marked as uniquely Athenian. (Mills, 1997: 57)

Could Euripides be offering a warning to his Athenian audience concerning their own desires and self-control? After all, the sexual desire and control of Hippolytus and Phaedra pales into insignificance when compared to that of Theseus, who loses control and loses a son. Could Euripides also be warning his audience about the vagaries of the Gods and gently reminding them of their humanity, both in terms of their self-restraint and in their mutability?

As we have seen, Euripides’ drama is a complex and surprisingly contemporary play, suggesting as it does a wide variety of critical and psychological areas: from Melanie Klein’s notions on the projection of one’s own frustrations and self-hatred, to Aristotle’s concepts of anagnorisis and tragic heroes; from issues concerning Athenian politics to their moral and ethical systems. It is, however, in the combination of these things that, I think, Euripides achieves the play’s true meaning. The complexity of life is mirrored in Hippolytus by the complexity of the characters’ interconnected lives and finely wrought psychologies, which must have been as affecting to an Athenian audience as to a modern one.

Discrimination in Single Adult Adoption

This work was produced by one of our professional writers as a learning aid to help you with your studies

Discrimination of Single Adults in the Adoption Process: An Interdisciplinary Approach

Introduction

Even though it is legal in all 50 states for a single adult to adopt a child, there is still a negative attitude toward placing adoptee children with single adults in the adoption process. This problem persists even though millions of children remain in the adoption system waiting to be adopted, despite the fact that there are numerous suitable single adults wanting to adopt these children. Since the adoption process is made more difficult for single adults due to discrimination, many children remain without a home.

There are a large number of single adults in the U.S. who are more than willing to adopt, love and care for the unwanted children in the adoption system. The adoption process is made more difficult for single adults because there is still the common belief that “two heads are better than one”, and that children need to be placed in two-parent homes rather than with single adults. If more children in the adoption system can be placed with eligible and loving single adults, then they will have a better chance of a more stable and successful life.

The discrimination that single adults experience when attempting to adopt an unwanted child requires multiple perspectives in order to be fully discussed. This is an interdisciplinary problem because the discrimination of single adults in the adoption process is “too broad and complex to be dealt with adequately by a single discipline or profession” (Repko, 2005).

The unwanted children in the adoption system are a huge societal problem that needs to be addressed, especially when there are people who want to take care of these children. An interdisciplinary approach also needs to be taken because no single area or subject can provide a sufficient solution to this social problem.

The first discipline that will give us a better understanding of this complex social problem is Sociology. According to the Journal of the American Planning Association, the structure of the family has changed over the last 40 years due to several factors, such as the rising divorce rate, the increase in cohabitating couples and rising unemployment rates.

The nuclear family is no longer the norm, and many families are headed by single parents. These factors alone should make the adoption process fairer and more accepting of single-adult homes versus two-parent homes.

The next discipline that can give us a better perspective on the discrimination of single adults in the adoption process is Economics. If single adults were considered equal to two-parent families in the adoption process, then more children could be placed into more homes and the financial burden on the state would be greatly reduced. According to the U.S. Department of Labor, it is estimated that it costs $124,000-$170,000 to raise a child from birth to age 18, depending on the child.

According to the National Council for Adoption (NCFA), as of 1997 about 100,000 children were in need of a home. That is approximately 2 billion dollars that the government has to pay to care for these unwanted children. If more single adults were allowed to adopt, that cost could be greatly reduced.

The third discipline that will help in addressing this problem is Psychology. Children in need of adoption will have a better chance of psychological wellbeing if they are placed in a stable home, even the home of a single adult, rather than staying in the foster care system waiting for a two-parent home. There is a common belief that “two heads are better than one” when it comes to raising a child, but that may not necessarily be true. A child may have just as good a chance of psychological wellbeing in a single-adult home as in a two-parent home.

According to the American Adoption Project, most states have had laws allowing single adults to adopt since the very first adoption laws. Yet even with single adults legally eligible to adopt, there was a negative attitude geared towards them in the adoption process, especially at the beginning of the twentieth century.

During this time period there was a stigma attached to being a single parent, whether the child was born out of wedlock or a single adult was attempting to adopt. According to The Adoption History Project, it wasn’t until 1965 that the Los Angeles Bureau of Adoptions made the first organized effort to enlist single parents to adopt children (www.uoregon.edu/~adoption/topics/singleparentadoptions.htm).

Also according to The Adoption History Project, adoption by single adults has been a growing trend since the 1970s: approximately one-third of children adopted from the public foster care system and one-quarter of all children with special needs are adopted by single individuals today, although far fewer single adults adopt healthy infants domestically or internationally.

The purpose of this paper is to bring to light the ongoing bias against the numerous single adults pursuing adoption and, hopefully, to bring an end to the bias against single adults who want to nurture and provide a loving home for the unwanted children in the foster care system. With 89.6 million singles heading over half of America’s households, according to the 2006 US Census, single adults should have just as equal an opportunity to adopt a child in need of a home as anyone else.

Background

Discrimination against single adults in the adoption process has a negative impact not only on the children in dire need of stable and loving homes, but also on the single, potential parents who are ready and willing to provide a home for children unwanted by the rest of society. This discrimination not only alienates a major population in America; the children in need have a decreased chance at a stable home and end up waiting in the system if no one else adopts them.

As of 2005 there were over 513,000 children in the U.S. in some form of foster care. Of those 513,000 children, 114,000, over half of them male, were waiting to be adopted, meaning the parental rights of their biological parents had been terminated. Almost 700 of these children were runaways, and the rest were divided amongst government institutions and foster homes. 23% of the children waiting to be adopted had been in the foster care system since they were infants (Adoption and Foster Care Analysis and Reporting System, 2005). With two-parent homes being preferred over single parents for an adoptee, many of these children age out of the adoption system without ever being placed in a permanent home.

Nearly 20,000 children each year “age out” of the foster care system, becoming legal adults when they turn 18 and leaving the care of the system. Coming from abusive families, not knowing where they came from, or lacking the stability they needed as children, they can end up becoming unstable adults, which can have a negative impact on society. They may not get the help they need to overcome their unfortunate circumstances, and are therefore less likely to experience stable adulthood. According to a study conducted on foster children who aged out of the system, Aging Out of the Foster Care System: Challenges and Opportunities for the State of Michigan:

Young adults out of foster care are 51 percent more likely to be unemployed, 27 percent more likely to be incarcerated, 42 percent more likely to be teenage parents, and 25 percent more likely to be homeless. Within four years, 60 percent of them will have had a child. (Anderson, 2003)

With the rising numbers of children in the foster care system, the problem of youth ageing out of the system and not succeeding in life will only become worse if nothing is done about it (Anderson, 2003). Being at a disadvantage during childhood and growing up to a disadvantage in adulthood, these young people can have a huge negative impact on society. They may need to be placed on welfare due to their higher chance of being unemployed, which will cost the government and taxpayers even more money in addition to the cost of raising them as children.

It also costs money to keep them in prison, and to support them if they become teenage parents. Making the adoption process fair for single adults increases the likelihood of giving more children in foster care a better childhood and a chance at a successful adulthood, and of easing the financial burden on the U.S. Government and its taxpayers.

Giving qualified single adults the opportunity to provide these children with a better future and a loving home would have a positive impact on all of society. With more stable adults coming from stable homes, the chances of unemployment, teenage pregnancy, and imprisonment decrease.

Imagine growing up in an abusive household, or being given up as a baby without knowing where you came from, and being placed in foster care or an adoption facility. You may be placed in and out of different foster families throughout your life, but never have the permanent and loving home that you need. Some of the foster families you have lived with may have been sufficient; in other foster homes, abusive foster parents or other children in their care may have abused you.

You eventually turn 18, a legal adult, and are told to gather all of your belongings so that you can leave. Imagine being forced out of the only home you knew, without ever having known a stable home or been taught the basic skills of surviving in the everyday world. This happens to over 20,000 young adults coming out of the foster care system all over the U.S.

It is often wondered why these single adults would want to tie themselves down with children, let alone someone else’s child; why they would risk adopting a child who comes from an abusive home and is at risk of mental health problems; or why they would go through the difficult process of competing, as a single adult, with two-parent families to adopt a child.

Single adults may be single through no fault of their own, or they may choose to be single. Either way, a single adult has the same needs and urges to nurture a child, and so pursues parenthood just like any other adult. Single adults who pursue adoption want to love and provide a home for the unwanted children in foster care, even those with special needs. It is estimated that 25% of children with special needs are adopted by single adults (Prowler, 1990).

Not only do single adults have to endure negativity from adoption agencies when seeking to adopt a child, they may also endure criticism from the people closest to them. Family and friends of single adults attempting to adopt can be discouraging, telling them to get married first or that they cannot raise a child on their own.

For many singles, family and friends may be the biggest obstacle to overcome before even beginning adoption procedures (Prowler, 1990). Prowler also states that single men may have it even tougher when it comes to overcoming obstacles: their motives are highly questioned, and they may be asked intimate questions about their sexuality and their reasons for wanting to adopt a child as a single man.

The disciplines used to explain this complex, real-world problem are Sociology, Economics, and Psychology. Sociology is one of the most important disciplines used to address the problem of discrimination of single adults in the adoption process because the family structure in America has drastically changed, and this discipline helps to address that fact. Sociology deals not only with the individual but with family structure as well.

The next discipline used to address this complex issue is Economics. Not allowing eligible single adults to adopt fairly hurts American society financially, and the discipline of Economics helps to address this issue. Sociology and Economics are discussed first because they are the most important of the three disciplines discussed and have the biggest impact on the groups being considered.

Although Sociology and Economics are the more important disciplines, the complex problem of the discrimination of single adults in the adoption process cannot be fully addressed without the discipline of Psychology, which concerns the mental wellbeing of the children in the foster care system. In order to have a good understanding of this complex issue, we must make use of the interdisciplinary process.

For this interdisciplinary problem, of the different models that can be used, the comprehensive model will be employed: the information, facts, and conclusions from each discipline will be presented in order to fully address the complex problem of discrimination against single adults in the adoption process (Repko, 2005).

References

U.S. Singles: The New Nuclear Family. (2007, May 30). Marketing Charts. Retrieved February 9, 2008, from http://www.marketingcharts.com/television/us-singles-the-new-nuclear-family-490.

Ellewood, D. (1993). The Changing Structure of American Families. Journal of the American Planning Association (27) 1, 45-47. Retrieved February 14, 2007, from Academic Search Complete Database.


Anderson, G. (2003). Aging Out of the Foster Care System: Challenges and Opportunities for the State of Michigan. http://www.ippsr.msu.edu/Publications/ARFosterCare.pdf.


Repko, A. (2005). Interdisciplinary Practice: A Student Guide to Research and Writing. Boston, MA: Pearson Custom Publishing.

Prowler, M. (1990). Single Parent Adoption: What You Need to Know. National Adoption Center. Retrieved January 26, 2008, from http://library.adoption.com/single-parent-adoption.

Adoption and Foster Care Analysis and Reporting System. (2005). The AFCARS Report. www.acf.hhs.gov/programs/cb.

Benefits of Outdoor Play in Early Years Settings

This work was produced by one of our professional writers as a learning aid to help you with your studies

In the 18th-19th centuries, industrialisation caused some serious changes in the lives of people (Knight, 2009). In the UK, for instance, industrialisation significantly decreased the schools’ provision of outdoor activities. However, such educators as Friedrich Froebel, Margaret McMillan and Maria Montessori contributed much to the revival of interest in outdoor play. Due to their efforts, the outdoor play provision occupies a crucial place in a contemporary early years setting. This essay analyses the issue of outdoor play in an early years setting. It will start with the definition of the concept of outdoor play and will proceed with the discussion of the first early-years practitioners who accentuated the need to integrate outdoor play into the curriculum. The analysis will then discuss in more detail the significance and use of outdoor play in an early years setting, juxtaposing theoretical and empirical evidence. Finally, the essay will identify the challenges to the successful provision of outdoor play in an early years setting.

In view of the fact that children perceive and interact with the world using different senses, it is essential for early years practitioners to use the methods which provide children with an opportunity to learn through these senses (Ouvry, 2000). Play is especially effective for learning because play evokes positive feelings in children and thus motivates them to learn (Ouvry, 2000). According to Johnston and Nahmad-Williams (2014), it is rather difficult to understand what constitutes play within an early years setting because educators and researchers cannot agree on whether to consider structured play (e.g. play activities developed by early years practitioners) as play. Johnston and Nahmad-Williams (2014, p.273) define outdoor play as “a carefully planned outdoor environment that covers the six areas of learning”. These six areas include: 1) physical development; 2) creative development; 3) social, personal, and emotional development; 4) understanding of the world; 5) literacy, language, and communication; 6) reasoning, problem solving, and numeracy (DCSF, 2008). In contemporary early years settings, two types of outdoor play are used: free play and structured play (Johnston and Nahmad-Williams, 2014). Free play is initiated by children: in free play, children choose the resources and materials to play with, although early years practitioners are responsible for preparing the materials. In free play, early years practitioners do not control play; however, they supervise children and provide necessary support. In this regard, free play reinforces children’s independence and their interactions with each other (Johnston and Nahmad-Williams, 2014). In structured play, it is an early years practitioner who chooses the resources and materials and who prepares specific tasks for children to complete (Johnston and Nahmad-Williams, 2014). 
Although structured play activities are created taking into account children’s interests and needs, structured play is controlled by an early years practitioner who ensures that specific learning outcomes are met.

Friedrich Froebel (1782-1852) was one of the first advocates of outdoor play in an early years setting (Riddall-Leech, 2002; Knight, 2009). According to Froebel, outdoor play contributes to the development of children’s imagination, which is essential for successful learning and healthy growth. Froebel opened a kindergarten in Germany in which outdoor play was integral to the provision, with a significant focus on imaginative play and play with wooden blocks (Tassoni, 2007). In contrast to Froebel, Maria Montessori (1870-1952) did not consider imaginative play crucial for children’s development. A doctor and educator who mainly worked with children with specific learning needs and who opened Children’s Houses for working-class children, Montessori encouraged young children to participate actively in real-life outdoor activities and thereby acquire knowledge and develop diverse skills (Tassoni, 2007). She strongly believed that children learn best through their interactions with the environment. Montessori put a particular emphasis on structured play, providing children with constructive play materials and intentionally designed equipment to facilitate their learning (Tassoni, 2007).

However, according to Montessori’s philosophy, early years practitioners should not interfere in children’s play. In this way, children develop decision-making skills, independent thinking, and confidence through outdoor play activities. Many contemporary early years settings are organised around Montessori’s ideas of a structured outdoor play environment (Tassoni, 2007). Margaret McMillan (1860-1931), a social reformer who worked with children from poor families, also strongly emphasised the value of outdoor play (Knight, 2009; Ouvry, 2000). In McMillan’s view, outdoor play is essential for the healthy development of children; she particularly stressed such aspects of outdoor play as fresh air and movement. McMillan contributed much to the spread of a play-centred approach by opening several outdoor nurseries (Knight, 2009). Her first nursery school was opened in Deptford and was organised as a garden “with children flowing freely between inside and out” (Ouvry, 2000, p.5). For McMillan, a professionally structured outdoor setting satisfied all of children’s learning needs, and one of her major requirements for such a setting was that it create “a provocative challenging environment” (Tovey, 2010, p.79). It is in such a challenging environment that children acquire rich and diverse experience and thus discover their true identities (Garrick, 2009), engage in adventurous and creative activities, gain an understanding of the natural world, and are motivated to learn.

Drawing on the ideas of these early years pioneers, contemporary researchers also widely discuss the significance of outdoor play in an early years setting. For instance, Garrick (2009) acknowledges that outdoor play significantly reinforces children’s physical development. Baldock (2001) highlights the ability of outdoor play to shape children’s spatial and decision-making skills through the independence it affords. In her action research, Nind (2003) drew parallels between independent outdoor play and improved language competence: children who had problems with English as a foreign language engaged more actively in communication in the outdoor play setting. Playing outdoors, they behaved more independently and employed a variety of communication strategies to share their views with peers. However, Manning-Morton and Thorp (2003) and Siraj-Blatchford and Sylva (2004) point to the need to balance outdoor activities planned by early years practitioners with children’s free activities. In free activities, children have the opportunity to explore the outer world and express themselves; planned outdoor activities are also crucial, as they improve children’s cognitive skills, social skills, and creativity. In their study of early years settings, Siraj-Blatchford et al. (2002, p.8) found that outdoor play activities are especially effective if they are based on adult-child interaction, because such interaction reinforces “sustained shared thinking, an episode in which two or more individuals ‘work together’ in an intellectual way to solve a problem, clarify a concept, evaluate activities, extend a narrative”.

Despite the positive effects of outdoor play discussed above, some factors prevent its successful provision. According to Garrick (2009, p.x), although UK statutory guidance recognises outdoor play as a crucial aspect of an early years curriculum, “currently there is no requirement in England to develop outdoor areas as a condition of registration”. The guidance recommends that early years practitioners use parks and similar facilities for outdoor play if their settings lack outdoor areas (DCSF, 2008). Garrick (2009) goes further, claiming that early years settings with outdoor areas are often poorly equipped and thus offer few possibilities for learning. Ouvry (2000), Maynard and Waters (2007), and Johnston and Nahmad-Williams (2014) have found that early years practitioners in English settings are often reluctant to integrate outdoor play provision because of a preoccupation with health and safety: fearing that cold and windy weather is detrimental to children’s health, they rarely allow young children to play outdoors. Ellis’s (2002) study brought to light the opinions of ATL (Association of Teachers and Lecturers) members on outdoor play provision: more than 60 percent of teachers considered it difficult to integrate because of poor management and a lack of adequate support.

Unlike the UK, Scandinavian countries have widely integrated outdoor play provision drawing on Froebel’s philosophy (Knight, 2009). In particular, a range of Forest Schools have been opened in Denmark, Sweden, and other Scandinavian countries. In these schools, the emphasis is placed on free outdoor activities, the development of children’s social skills and creativity, and children’s emotional well-being, and children engage in outdoor activities in all weather conditions (Knight, 2009). In the 1990s, early years practitioners from Bridgwater College visited a Danish Forest School and greatly admired the way children acquired skills and knowledge there (Knight, 2009). On their return, they decided to open a similar school in the UK: they found outdoor areas not far from Bridgwater College and created outdoor play provision for early years children and children with special needs. The provision has improved children’s overall well-being and increased their confidence, self-esteem, and independent thinking (Knight, 2009). For its achievements in the development of young children, Bridgwater College received a Queen’s Anniversary Prize. The difference between Forest Schools in Scandinavian countries and those in the UK is that UK early years practitioners allow children to play outdoors only in warm weather. Moreover, UK practitioners prefer structured play, so learning in these settings is more formal than in Scandinavian Forest Schools (Knight, 2009). According to Tovey (2010, p.79), children in the UK “are limited by a culture of risk aversion, risk anxiety, restrictions on children’s freedoms to play outdoors and increased regulation”. In Scandinavian countries, children have more freedom in their outdoor play.

Some recent research provides compelling evidence that young children prefer playing in challenging, even risky, outdoor settings. For instance, Stephenson (2003) and Sandseter (2007), who studied outdoor play in early years settings in New Zealand and Norway respectively, found that children deliberately chose dangerous, risky, and scary places for their outdoor play. Such places motivated children to explore the unknown and thus overcome their fears. More importantly, both studies found that each time children played, they intentionally increased the risk. Tovey (2010, p.80) notes that these findings suggest that “it is not just the feelings of joy that motivate children but the desire to experience the borderlines of fear and exhilaration”. These findings imply that, instead of creating a merely safe outdoor environment for young children, it is more effective to create a genuinely challenging environment in which children can realise their full potential. While acknowledging the importance of safety, Sandseter (2007, p.104) argues that early years practitioners should pay equal attention to “the benefits of risky play”. Drawing on these findings, UK policy makers and early years practitioners should reconsider their views on outdoor play provision and gradually shift towards creating an environment that benefits children rather than hinders their learning and overall development.

As the essay has shown, both the early years pioneers and contemporary researchers have stressed the significance of outdoor play in children’s development. Outdoor play is thought to benefit children’s spatial skills, social skills, decision-making skills, language competence, and physical health, and the evidence suggests that a combination of structured and free outdoor activities is especially effective. Unfortunately, as the analysis has revealed, there are serious obstacles to the successful integration of outdoor play provision in the UK, including the lack of outdoor areas in early years settings, practitioners’ preoccupation with safety and health issues, inadequate support, and poor management. Scandinavian countries, however, have considerable experience in integrating outdoor play provision, and UK early years practitioners have recently drawn on that experience to open several Forest Schools in which children successfully learn through outdoor play.

Bibliography

Baldock, P. (2001). Regulating early years services. London: David Fulton.

DCSF (2008). Practice guidance for the Early Years Foundation Stage. Nottingham: Department for Children, Schools and Families.

Ellis, N. (2002). Firm foundations? A survey of ATL members working in the Foundation Stage. London: Association of Teachers and Lecturers.

Garrick, R. (2009). Playing outdoors in the early years. London: Continuum International Publishing Group.

Johnston, J. & Nahmad-Williams, L. (2014). Early childhood studies. Abingdon: Routledge.

Knight, S. (2009). Forest schools and outdoor learning in the early years. London: Sage.

Manning-Morton, M. & Thorp, M. (2003). Key times for play. Maidenhead: Open University.

Maynard, T. & Waters, J. (2007). Learning in the outdoor environment: A missed opportunity. Early Years, 27 (3), 255-265.

Nind, M. (2003). Enhancing the communication learning environment of an early years unit through action research. Educational Action Research, 11 (3), 347-363.

Ouvry, M. (2000). Exercising muscles and minds: Outdoor play and the early years curriculum. London: The National Early Years Network.

Riddall-Leech, S. (2002). Childminding: A coursebook for the CACHE certificate in childminding practice (CCP). Oxford: Heinemann.

Sandseter, E. (2007). Children’s expressions of exhilaration and fear in risky play. Contemporary Issues in Early Childhood, 10 (2), 92-106.

Siraj-Blatchford, I. & Sylva, K. (2004). Researching pedagogy in English pre-schools. British Educational Research Journal, 30 (5), 713-730.

Siraj-Blatchford, I., Sylva, K., Muttock, S., Gilden, R., & Bell, D. (2002). Researching effective pedagogy in the early years. London: Institute of Education.

Stephenson, A. (2003). Physical risk taking: Dangerous or endangered. Early Years, 23 (1), 35-43.

Tassoni, P. (2007). Child care and education: Cache level 3. Oxford: Heinemann.

Tovey, H. (2010). Playing on the edge: Perceptions of risk and danger in outdoor play. In: P. Broadhead, J. Howard, & E. Wood (Eds.), Play and learning in the early years: From research to practice (pp.79-94). London: Sage.

Polyurethane Dispersion Coating v Solvent Based Polyurethane

This work was produced by one of our professional writers as a learning aid to help you with your studies

What are the application differences of a Polyurethane dispersion coating compared to a solvent based polyurethane and what are the advantages upon each other?

A polyurethane coating is a versatile product with many advantages over other coating systems. A major disadvantage of classical, solvent-based polyurethane coatings is the volatile organic compounds (VOCs) present in the wet state. New regulations force formulators to keep the VOC content below 350 g/l. Recent developments have addressed this problem: developers have succeeded in making a polyurethane dispersion in water, eliminating most of the volatile organic solvents.

However, the cost of a so-called waterborne polyurethane is higher than that of a solvent-based polyurethane. A question that arises is whether the dispersed polyurethane performs the same way as the classical, solvent-based polyurethanes and whether it is worth the money.

Besides possible differences in performance, processing techniques may differ. The goal of this investigation is to give an overview of the differences in performance, processing and cost between a polyurethane dispersion and solvent-based polyurethanes. The major formulations of both types will be summarized and compared to find the best coating.

The chemistry of PU

A polyurethane (PU) is the step-growth (polyaddition) reaction product of an isocyanate with a polyol: the isocyanate must have at least two isocyanate groups and the polyol at least two hydroxyl groups. The catalyst for the reaction can be a tertiary amine such as dimethylcyclohexylamine or an organometallic compound such as dibutyltin dilaurate. The addition of an isocyanate group to a hydroxyl end-group results in a urethane linkage; both the isocyanate and the diol need to be at least bi-functional to form a polyurethane. The reaction mechanism of the formation of PU catalyzed by a tertiary amine is given by:

Figure 1: reaction mechanism of the catalyzed condensation reaction of PU by a tertiary amine

Many isocyanates can be used, but MDI, aliphatics such as H12MDI, HDI and IPDI, and TDI are the most widely used. MDI consumption exceeds 45% of the total amount of isocyanates used, closely followed by aliphatics (35-40%) and TDI (15%). While the reactivity of the isocyanate determines the rate of the reaction, the main properties of the PU derive from the polyol.

As with the choice of isocyanate, a wide variety of polyols can be used: EG, BDO, DEG, glycerin and TMP are all usable. For hard and weatherable coatings, acrylic and polyester polyols tend to be preferred. Low molecular weight polyols as the main reactant produce polymer chains with more urethane groups, hence a harder and stiffer polymer; high molecular weight polyols produce a more flexible polymer. Likewise, a low-functionality long-chain polyol produces soft and flexible PU, while short-chain polyols with high functionality make more cross-linked, and therefore more rigid, products.

Different types of PU formulations

PU coatings can be divided into two main groups: one- and two-pack systems (1k and 2k). A 1k system basically contains a dissolved, fully reacted PU, whilst a 2k system can contain partially reacted PU and unreacted monomers. Both systems can be solvent-based or waterborne. Furthermore, there are several curing (or drying) systems, each resulting in a different-performing coating.

Two-component or 2k

As mentioned briefly above, 2k systems are reactive, and the primary reaction is that of isocyanate with polyols. The main disadvantage of a 2k system is its limited pot-life: once the isocyanate is added, the mixture begins to react and harden. The main advantage of a reactive system, however, is its outstanding mechanical performance. Because the PU particles react and crosslink, a continuous polymer network forms which is hard and chemically resistant. Two-component systems include solvent-based and waterborne formulations.

Solvent-based 2k coatings are obtained by mixing aliphatic isocyanates with polyester polyols or blends of polyester with acrylic grades. Such formulations cure partially by physical drying and partially by cross-linking with the isocyanate. Solvent-based 2k formulations are mostly used in the automotive and aviation industries as finish coatings.

Waterborne 2k coatings are formulated with dispersible isocyanates and water-dispersible polyols such as polyacrylates or emulsifiable polyesters. The most commonly used isocyanate is an HDI trimer, but IPDI trimers can also be used. Aromatic isocyanates cannot be used in waterborne formulations because they react too readily with water. Dispersible isocyanates can be used as such or can be emulsified by partial reaction with a dispersible polyol, making them easier to mix.

However, some waterborne systems still need up to 10% co-solvent to form a homogeneous finish. These formulations cure partially by physical drying and partially by cross-linking, but can also be thermally cured at temperatures ranging from 20 °C to 80 °C. Waterborne 2k systems are frequently used as protective coatings in the transportation, machinery and furniture industries. Their high flexibility also makes them suitable for use on polymeric and wooden surfaces.

One-component or 1k

A 1k PU coating consists of partially reacted polymers (prepolymers) which are liquid at room temperature. These prepolymers are synthesized by reacting MDI, HDI or TDI with polyether or polyester polyols. The main advantage of a one-component system is that no mixing is required and pot life is not an issue: 1k systems are storage stable, with a shelf life of up to six months. A disadvantage, however, is that most 1k formulations are not cross-linked, making them less hard and more vulnerable to solvents.

1k formulations are broadly used as maintenance and repair coatings for their ease of application and mechanical behavior. They are used for painting steel constructions such as bridges and other large steel structures where corrosion protection is needed.

Solvent-based 1k coatings are obtained by reacting aromatic or aliphatic isocyanates (MDI and IPDI) with polyesters or polyether polyols. This reaction forms high molecular weight linear PUs; chain extenders are commonly used additives. Curing normally occurs by evaporation of the solvent, but 1k systems can also be formulated to cure by oxidation, with moisture, or even by UV-radiation.

Waterborne 1k coatings, better known as PUDs, are fully reacted polyurethane systems. The PU particles have hydrophilic groups in their backbone and are at most a tenth of a micrometer in size, dispersed in water. This makes the mixture both chemically and colloidally stable (Figure 2). A PUD can also be formed by incorporating a surfactant.

PUDs are currently very popular because they are environmentally friendly while still performing reasonably well. Because PUDs are relatively expensive, they are mixed with acrylic grades to lower material costs; however, more acrylic means less hardness. 1k waterborne formulations can cure physically, by oxidation, or by UV-radiation.

There are also formulations on the market containing no solvent at all; these coatings find their application in the building sector. To obtain solvent-free formulations, MDI is reacted with polyether or oil-modified polyester polyols. To obtain higher hardness, chain extenders and catalysts are added to the formulation.

Drying systems

As mentioned above, the way a coating cures strongly affects its final performance, and the way a coating dries depends on its formulation. A 2k system can cure in air, by heat, or under the influence of UV. One-component systems can cure physically, with moisture, by oxidation, under the influence of UV-radiation, or by heat.

Physical drying basically means that the solvent carrying the PU evaporates, leaving the PU to form a film. A major disadvantage of this way of drying is that there is no cross-linking between the PU particles. This drying mechanism applies to some one-component systems.

UV-curing coatings can be formulated as solvent-based or waterborne, and as both 1k and 2k. In a UV-curing formulation the catalyst, a photo-initiator, is inactive in the absence of UV-radiation; when UV-radiation hits it, it unblocks, becomes active and initiates the curing. A schematic representation of this process is shown in Figure 3. UV-curing coatings are predominantly used as automobile finishes, as they offer unmatched hardness and gloss.

Oxidative drying is used with a special type of 1k PU coating. So-called oil-modified PUs (OMUs) are synthesized through an addition reaction of an isocyanate (typically TDI) with a hydroxyl-bearing, fatty-acid-modified ester. To obtain higher cross-link densities, more isocyanate can be added, but this means that more solvent is needed, as much as 550 g/l (not VOC-compliant).

Natural oils like linseed oil can be used as the diol source and mineral spirit as the solvent. An OMU can be solvent- or water-based and cures by reacting with the air surrounding the coating: the fatty acid groups of the oil (attached to the PU) form cross-links with each other by means of oxidation. OMUs have better mechanical and weathering properties than unmodified, non-reactive alkyds, but reactive PU coatings are superior. OMUs are predominantly used as wood finishes for their distinctive yellowing/aging, which some formulators prefer.

Moisture-curing PU (MCPU) coatings are formulated with NCO-terminated PU prepolymers. The NCO groups react with atmospheric moisture to produce an amine group, which further reacts with remaining isocyanate to form highly cross-linked urea networks. MCPU coatings have superior hardness, strength and stiffness, and even though the coating is cross-linked, an MCPU retains relatively high flexibility. Because an MCPU cures with moisture, however, its storage stability is limited.

Thermal-curing formulations are based on a deactivated (blocked) isocyanate mixed with a polyol. This semi-one-component formulation is stable at room temperature, but when heated (100-200 °C) the deactivated isocyanate unblocks and reacts with the polyol in the same way as a reactive 2k coating. The isocyanates (aromatic or aliphatic) all have one active hydrogen. Caprolactam is most commonly used to block the isocyanate; an alternative is to create uretidinedione (dimer) links. Thermally cured coatings find their main use on surfaces which need to withstand excessive heating and cooling cycles.

An overview of all the PU coatings with their distinctive curing system is shown in Figure 5.

Testing the coatings

To compare different types of PU coatings, their performance needs to be tested. Because of the broad range of applications of PU coatings, and the wide variety of characteristics required, the comparison will be narrowed down to floor coatings. Floor coatings are tested on mar and scuff resistance, taber abrasion, chemical resistance, color and Konig hardness. Because thermally cured coatings are not applicable as floor coatings, these formulations will not be used in the comparison. Moisture cured…

Mar and scuff resistance

Mar and scuff resistance, or simply put resistance to marking, can be measured by several methods. One of them is the pendulum method, which consists of a pendulum arm with a hard-wood block attached to the end. The weighted block hits the coated panel four times, and the average 20° gloss of the coating is measured before and after the test. The results are expressed as the percentage of 20° gloss retained and a visual assessment of the panel (scratching and scuffing).
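As a worked illustration (not taken from the source), the gloss-retention figure is simple arithmetic on the two gloss readings; the readings below are hypothetical:

```python
def percent_gloss_retained(gloss_before: float, gloss_after: float) -> float:
    """Percentage of the initial 20-degree gloss retained after the pendulum test."""
    return 100.0 * gloss_after / gloss_before

# Hypothetical readings: 20-degree gloss of 85 before the test and 68 after.
print(percent_gloss_retained(85.0, 68.0))  # -> 80.0 (percent gloss retained)
```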

Taber abrasion

Taber abrasion testing measures wear resistance. To measure abrasion resistance, an arm is loaded with 1000 gram weights and attached to abrasive wheels (mostly consisting of minerals). The arm makes 1000 cycles over the substrate. The initial weight of the coated substrate is compared with the weight after the test, and the results are expressed in milligrams removed.
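A minimal sketch of the weight-loss calculation, with hypothetical panel weights (grams before and after the 1000 cycles):

```python
def taber_wear_mg(weight_before_g: float, weight_after_g: float) -> float:
    """Coating mass removed over the abrasion test, converted from grams to milligrams."""
    return (weight_before_g - weight_after_g) * 1000.0

# Hypothetical weights of the coated substrate before and after 1000 cycles.
print(taber_wear_mg(152.480, 152.455))  # about 25 mg removed
```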

Chemical resistance

Chemical resistance is determined on dry films using eight household stains and chemicals. Test chemicals include MEK, olive oil, several cleaning chemicals, ethanol, white vinegar, water and a 7% ammonia solution. Each chemical is applied to a two-ply square towel on the test film, completely saturating the towel, and the towel with the liquid is immediately covered with a watch glass. After a period of two hours the stains are removed and the panel is rinsed and dried. The impact of the chemicals on the coating is assessed immediately after rinsing: the surface is inspected for discoloration, blistering and softening, and each chemical is rated on a scale of 1 to 10, with 10 being “no effect”.

Color

An important coating property to investigate, especially when evaluating PU coatings, is color, because PUs, and especially OMUs, tend to yellow. The yellowness index of a coating is measured initially and again after a period of light exposure. The difference between the initial and final readings is expressed as Delta E: when this value is below 1.0, the color difference is insignificant; the higher the value, the more yellow the coating has become over that period.
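The pass/fail check against the 1.0 threshold can be sketched as follows. The CIE76 formula and the colour readings here are illustrative assumptions, since the source does not specify which Delta E formula is used:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) measurements."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

initial = (92.0, -1.0, 4.0)  # hypothetical reading before light exposure
aged = (91.5, -0.8, 6.0)     # hypothetical reading after light exposure

dE = delta_e_cie76(initial, aged)
# Below 1.0 the colour difference is insignificant; larger values mean visible yellowing.
print(round(dE, 2), "insignificant" if dE < 1.0 else "noticeable")
```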

Konig hardness

Konig hardness is a method for measuring the hardness of a coating. A pendulum rocks back and forth over the coated substrate; the coating dampens the rocking motion, slowing the pendulum down. Konig hardness results are expressed in seconds: the longer the pendulum rocks, the harder the coating.

Solvent-based vs. waterborne

For this comparison, different formulations of each type are reviewed: solvent-based 2k, solvent-based OMU and 2k UV are compared with waterborne OMU, waterborne 2k and a PUD/acrylic mixture. Data for sb OMU, PUD/acrylic, wb OMU and wb 2k are taken from the article “Oil-modified urethanes for clear wood finishes: Distinction or extinction” by Richard A. Caldwell of Reichhold. Data for sb 2k, 2k UV and 100% PUD formulations are obtained from several commercially available coatings […]. Some values may differ because of the subjective judgement of the investigator and the particular formulation.

Some values are projected where data was missing; these projections include the chemical resistance of the sb 2k, 2k UV and 100% PUD formulations. The taber abrasion resistance of the wb OMU coating, reported at 500 cycles, is multiplied by a statistically estimated factor, derived from known data on the other formulations, to obtain a value for 1000 cycles.

As shown in Graph 1, a 2k UV coating has superior mechanical properties (Konig hardness and taber abrasion) because it has a high degree of cross-linking. This can be related to the 20° gloss of the dried film: high gloss usually indicates a high degree of cross-linking. The solvent-based 2k formulations perform comparably with 2k UV coatings but are slightly less cross-linked, as shown by their gloss and hardness.

The waterborne formulations are all significantly softer but tend to be slightly more mar and scuff resistant; however, their initial gloss values are somewhat lower. The 1k OMU formulations perform better than most non-reactive coatings, but they tend to yellow and perform worse than reactive (2k) coatings, showing that oxidative cross-linking cannot match the reactive cross-linking of 2k formulations. It is somewhat surprising that 100% PUD performs comparably with a wb 2k formulation, although its lower chemical and scuff resistance again shows the benefits of cross-linking.

Conclusion

If a hard coating with high gloss is desired, UV-cured 2k coatings are the best choice. The best mar and scuff resistance is obtained with waterborne formulations, but these show less hardness and chemical resistance. Solvent-based systems have better overall performance than waterborne systems, but VOC regulations restrict the amount of solvent used, limiting the solids content that is possible.

This results in less cross-linking and hence less hardness and chemical resistance. Even though high-VOC solvent-based coatings perform better, VOC regulations are causing a shift to waterborne formulations, whose performance is steadily improving. During the investigation it became clear that a fair comparison between different PU formulations is nearly impossible because of the large number of possible formulations within each class.

Waterborne PU coatings, when properly formulated, can match the performance of solvent-based coatings, especially VOC-compliant solvent-based coatings, albeit at a higher price. As VOC regulations are sharpened further, the market will shift towards waterborne formulations, making them worth the money.

Types of Chemical Reaction Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

The types of chemical reaction considered here are: oxidation, reduction, phosphorylation, hydrolysis, condensation, isomerization, deamination and carboxylation.

Oxidation

Oxidation involves an increase in the oxidation number of a species. This involves the addition of oxygen or the loss of hydrogen or electrons; ultimately the first two can always be viewed as equivalent to loss of electrons. Oxidation always occurs together with reduction as part of a redox reaction. The substance producing the oxidation is termed the oxidant (electron acceptor) and is concomitantly reduced. Likewise the oxidised species (electron donor) can be termed the reductant. (Atkins, 1990)

There are many examples of oxidation reactions in the catabolism of glucose. For example, in the first stage of the glycolytic pathway leading from glucose to pyruvate, after the six-carbon intermediates are cleaved to generate glyceraldehyde 3-phosphate, the enzyme glyceraldehyde 3-phosphate dehydrogenase catalyses the conversion of glyceraldehyde 3-phosphate to 1,3-bisphosphoglycerate (utilizing the cofactor NAD and inorganic phosphate). This oxidation is the first point at which reducing potential, in the form of NADH, is generated in the breakdown of glucose (note that NAD is correspondingly reduced in the process).
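The overall equation for this step, given here as standard biochemistry rather than quoted from the source, can be written as:

```latex
\text{glyceraldehyde 3-phosphate} + \mathrm{NAD^{+}} + \mathrm{P_{i}}
\;\longrightarrow\;
\text{1,3-bisphosphoglycerate} + \mathrm{NADH} + \mathrm{H^{+}}
```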

Reduction

Reduction is a decrease in the oxidation number of a substance resulting from the gain of electrons as part of a redox reaction. This is often, but not necessarily, associated with the loss of oxygen or the gain of hydrogen. (See oxidation above.)

An example of a reduction reaction occurs in the final stage of glycolysis under anaerobic conditions, where pyruvate is reduced to lactate, catalysed by the enzyme lactate dehydrogenase. The reducing potential for this reaction is provided by NADH and H+, and the reaction prevents the cell’s finite supply of NAD from being tied up in the reduced form by reactions such as the oxidation example above. Again, the reductant NADH is oxidised in the process.

Phosphorylation

Phosphorylation involves the addition of a phosphate (PO43-) or phosphoryl (PO32-) group from a donor to a receptor molecule, usually by a nucleophilic displacement at the phosphorus atom by a lone pair on an electronegative heteroatom (e.g. O or N). (Cox, 2004)

In the first reaction of the catabolism of glucose, the glucose molecule is phosphorylated by the high-energy phosphate compound adenosine triphosphate (ATP), catalysed by the enzyme hexokinase. The hydroxyl group on carbon atom 6 of the glucose nucleophilically attacks the terminal phosphate of ATP, displacing a PO32- group, which is added to the glucose, releasing ADP. In addition to priming the molecule with energy, this keeps the glucose in the cytoplasm, as the glucose transporters are specific for free glucose. (Cox, 2004)
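The hexokinase reaction can be written as:

```latex
\[
\text{glucose} + \mathrm{ATP}
\;\xrightarrow{\text{hexokinase}}\;
\text{glucose 6-phosphate} + \mathrm{ADP}
\]
```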

Hydrolysis

Hydrolysis is the reaction of a chemical species (molecule or ion) with water. In biological systems this usually involves addition of the elements of water across a chemical bond to break the bond, resulting in an OH group attached to one atom of the hydrolysed bond and an H atom added to the other atom. This can split the molecule into two separate molecules (see example) or can break a cyclical compound into a linear structure.

The neurochemical transmitter acetylcholine is responsible for conduction of the motor neurone impulse across the synaptic gap. To prevent continuation of the signal and tetanic paralysis of the muscle, the acetylcholine is hydrolysed to acetate and choline by the enzyme acetylcholinesterase. This is an uncomplicated ester hydrolysis in which water nucleophilically attacks the carbonyl group of the acetate component of the ester. (Vander, 2001)
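At physiological pH, the hydrolysis can be summarised as:

```latex
\[
\text{acetylcholine} + \mathrm{H_{2}O}
\;\longrightarrow\;
\text{choline} + \text{acetate} + \mathrm{H^{+}}
\]
```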

Condensation

A condensation reaction occurs when two or more reacting species react to form a single product, eliminating a simple molecule in the process. Where the simple molecule is water, the condensation reaction can be thought of as the opposite of the hydrolysis reaction. Likewise, the two reacting groups can be separate parts of the same molecule, resulting in a cyclization reaction. (McNaught, 1997)

Peptide synthesis occurring on the ribosome and catalysed by its peptidyl transferase activity is an example of a condensation reaction. Condensation takes place between the amino group of the added amino acid and the carboxyl group of the growing peptide chain (activated by an aminoacyl-tRNA linkage), eliminating the elements of water. (Alberts, 1989)
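As a generic sketch (using R1 and R2 to stand for arbitrary side chains), the condensation of a carboxyl group with an amino group to form an amide (peptide) bond can be written as:

```latex
\[
\mathrm{R_{1}{-}COOH} + \mathrm{H_{2}N{-}R_{2}}
\;\longrightarrow\;
\mathrm{R_{1}{-}CO{-}NH{-}R_{2}} + \mathrm{H_{2}O}
\]
```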

Isomerization

In an isomerization reaction the product of the reaction is an isomer of the reactant. In such a reaction there is no net change in the molecular formula between reactants and products (though intermediate steps may involve extra atoms). The isomerization can be the result of molecular or conformational rearrangements. (McNaught, 1997)

The second reaction of the glycolytic breakdown of glucose is an example of an isomerization reaction. Glucose 6-phosphate is converted to fructose 6-phosphate by the enzyme glucose phosphate isomerase. This produces a more symmetrical molecule with a second available primary alcohol group for phosphorylation.
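This aldose-to-ketose isomerization can be summarised simply as:

```latex
\[
\text{glucose 6-phosphate}
\;\rightleftharpoons\;
\text{fructose 6-phosphate}
\]
```

Note that both species share the molecular formula C6H13O9P; only the arrangement of atoms changes.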

Deamination

Deamination is the removal of an amine (NH2) group from a molecule. The nitrogen is usually removed as ammonia, the extra hydrogen coming from water, leaving a ketone group in place of the amine. This reaction also increases the oxidation number of the reacting species and is often termed oxidative deamination. (Cox, 2004)

Oxidative deamination is an important reaction in the degradation of amino acids, especially in the liver. Glutamate, produced from other amino acids by transamination, is converted into α-ketoglutarate and ammonia by the enzyme glutamate dehydrogenase in association with the cofactor NAD+ or NADP+. The ammonia is ultimately excreted via the urea cycle. (Cox, 2004)
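The glutamate dehydrogenase reaction can be summarised as:

```latex
\[
\text{glutamate} + \mathrm{NAD(P)^{+}} + \mathrm{H_{2}O}
\;\rightleftharpoons\;
\alpha\text{-ketoglutarate} + \mathrm{NAD(P)H} + \mathrm{NH_{4}^{+}} + \mathrm{H^{+}}
\]
```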

Carboxylation

Carboxylation is the addition of a carboxylate group to a molecule. This is an important method for increasing the number of carbon atoms in a synthesis. The source of the carbon is typically carbon dioxide, for example in the reaction of a carbonated Grignard reagent (or the bicarbonate ion in aqueous biological systems).

The fixation of carbon dioxide by green plants is an important example of a carboxylation reaction. In plants that use the Calvin cycle, CO2 is incorporated into 3-phosphoglycerate by the enzyme ribulose bisphosphate carboxylase (RuBisCO). This carboxylates the five-carbon ribulose sugar to produce a six-carbon intermediate, which is then hydrolysed to produce two three-carbon molecules of 3-phosphoglycerate.
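The overall carboxylation step can be summarised as:

```latex
\[
\text{ribulose 1,5-bisphosphate} + \mathrm{CO_{2}} + \mathrm{H_{2}O}
\;\longrightarrow\;
2\;\text{(3-phosphoglycerate)} + 2\,\mathrm{H^{+}}
\]
```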

Critically discuss Corporate Social Responsibility (CSR)


What are the implications for a firm that does not conduct CSR?

Corporate Social Responsibility (CSR) is often mistaken for a 21st century buzz phrase when in fact it has been part of the business lexicon for decades. While some argue that the concept dates back to the Industrial Revolution, the first substantive work was written by Peter Drucker in his 1954 book The Practice of Management. Despite the passage of time, there is still no universal definition of CSR. Corporate Social Responsibility, what it is and how it is implemented, differs depending upon the country a business operates within, the regulatory system it is answerable to and even the industry within which it works. These complications aside, it is necessary to settle on a well-rounded definition of CSR in order to critically discuss the concept in this paper. The definition offered by the International Organization for Standardization will be used, as it is general in nature and applicable to most businesses, regardless of their country of operation:

“Social responsibility is the responsibility of an organisation for the impacts of its decisions and activities on society and the environment, through transparent and ethical behaviour that:

contributes to sustainable development, including the health and the welfare of society

takes into account the expectations of stakeholders

is in compliance with applicable law and consistent with international norms of behaviour; and

Is integrated throughout the organization and practised in its relationships.” (International Organization for Standardization, 2010)

The one weakness in this definition is the proposition that CSR is about compliance with applicable law. In Dahlsrud's (2008) analysis of 37 CSR definitions, he identified five critical dimensions. The first dimension is the environment and its consideration in business operations, and the second is the social dimension, which covers businesses taking into account their impact on society. Both of these dimensions are central to our working definition. The third dimension, the economic dimension, which looks for a commitment to integrating CSR into business operations, is also present, as is the fourth dimension, which relates to how businesses should manage all stakeholder groups in a socially responsible manner (Dahlsrud, 2008). The final dimension, voluntariness, is what is missing from the ISO definition. Dahlsrud (2008) defines voluntariness as businesses making decisions and undertaking activities above what is legally required, whereas the ISO definition (International Organization for Standardization, 2010) states that mere compliance is acceptable. It is argued that merely complying with the law is better described as good corporate governance and not of itself an act of corporate social responsibility (Ashley and Crowther, 2012; Benabou and Tirole, 2010).

Central to the CSR debate is the notion of how society defines the role of business, and the resulting responsibilities. The classic roles and responsibilities assigned to business are to harness capital and other resources in production, to provide employment and meaningful jobs, to conduct research, development and innovation, to provide goods and services for sale, and to create wealth for shareholders, employees, customers and society at large (Fitzgerald and Cormack, 2011). These core, growth- and profit-motivated responsibilities do touch on some dimensions of CSR, but comparing them to the responsibilities endowed by CSR shows the amount of change necessary to move towards a socially responsible business model.

One extreme of the CSR debate, often referred to as the neo-classical or traditional conflict approach (Redman, 2005), argues that the only social responsibility of business is to increase profits (Friedman, 1970). The other end of the spectrum is what Redman terms the “true believers” (2005, 78) approach to CSR. This is where a firm has environmental and social commitments in place that are not profit motivated. However, true corporate altruism is rare with evidence suggesting that organisations are more likely to adopt an ‘enlightened self-interest’ approach to CSR (Porter and Kramer, 2006). This is an approach that ties socially responsible activities to profit making activities (Redman, 2005).

Enlightened self-interest has been one of the driving forces behind corporate responsibility in relation to the environment and the utilization of scarce resources. Inputs to production, from raw materials to fossil fuels, are becoming scarce, and businesses have needed to adapt to these changes or risk extinction (Ashley and Crowther, 2012). So while environmental impacts are now of greater concern to business, it could be argued that this is more about the survival of the business than a deliberately socially responsible endeavour (Ashley and Crowther, 2012).

At the same time, society now holds greater expectations of the business community (Scherer and Palazzo, 2011). With higher levels of education (for the most part) and thus knowledge, there is less of a tendency to believe the rhetoric of business. Ashley and Crowther argue that customers are not looking for perfection in business practices, but "they do expect honesty and transparency" (2012, pg. 3).

The rise and rise of social media has also created a fast and ubiquitous means for people to call businesses to account for (perceived) socially irresponsible acts (Fitzgerald and Cormack, 2011). The media also has the ability to provide focus and extensive coverage on businesses that have engaged in dubious practices (Fitzgerald and Cormack, 2011). Companies that use third world (often slave) labour are being named and shamed, and forced to reassess their supply chain practices (Ashley and Crowther, 2012).

Despite these inroads, the last decade has seen examples where self-regulation and responsible corporate behaviour have failed spectacularly (Lynch-Wood et al, 2009), contributing to events such as the Global Financial Crisis. Few, if any, parts of society remain unaffected by these events. The response by policy makers and legislators has been swift and punitive, the net result being greater compliance and reporting requirements across most organisations and industries. Now there exists little distinction between what would have been considered a CSR organisation and one that practises good corporate governance (Money and Scheper, 2007; Mason and Simmonds, 2014).

It would be disingenuous to deny that the CSR movement has had a positive impact on the business community. However, the overwhelming amount of progress in socially responsible action has been sparked by the depletion of natural resources and the need for businesses to diversify operations, by changes in society and societal expectations of business, and by government legislative responses to corporate failings. Being socially responsible is now just good business, an essential component of operational and strategic decision making (Porter and Kramer, 2006). However it has been achieved, there are still consequences for organisations that do not conduct CSR.

Both the perception and reality of company performance can be enhanced by adopting CSR. Some pundits argue the payoff is long term; others argue that there is no payoff at all (McWilliams et al, 2006). Beyond profitability, there are a number of risks organisations face if they do not engage in CSR behaviour. It should be noted that the following is not an exhaustive list, merely the risks with the greatest potential impact.

Reputational damage has always been a key outcome of socially irresponsible business activities (Walker and Dyck, 2014). Reputation can be defined as the aggregate perception of an organisation's internal and external stakeholders (Walker and Dyck, 2014) and represents a firm's single greatest intangible asset. Once reputation is lost, or at least impacted significantly, it is difficult to get back. The speed with which reputation-damaging information can now spread is also of concern to socially irresponsible organisations, as it is much more difficult to hide or deny wrongdoing (Ashley and Crowther, 2012). Further to this, Walker and Dyck's (2014) research showed a positive correlation between a firm's reputation and its corporate social responsibility.

Employee engagement and attracting talent appear to go hand in hand with socially responsible corporate practices (Bhattacharya et al., 2008). The global economy has been described as a 'knowledge economy' (Fitzgerald and Cormack, 2011), with the greatest corporate assets residing in the intellectual endeavour of staff. Bhattacharya et al. (2008) also argue that CSR is a way for a firm to show its values in practice and thereby emotionally engage employees in achieving all of the organisation's goals.

Engaged staff, at all levels of the business, are crucial to competing in a marketplace that is increasingly saturated with products and services. Differentiating the offering of one business from another (Servaes and Tamayo, 2013) is becoming more difficult to achieve, but CSR-related activities provide a point of product differentiation. Environmentally sound goods (such as recyclable plastics) and Fairtrade foodstuffs (such as coffee) are two examples of familiar products that have been differentiated by organisations acting in a more socially responsible manner. Firms that fail to innovate in this way will become followers instead of leaders, potentially impacting their profitability (Blowfield and Murray, 2008).

Smarter product and service development needs to start with managers and leaders thinking outside their traditional product and service offerings (Blowfield and Murray, 2008). The move to a more socially responsible business imperative has opened up new markets and opportunities within which an organisation can expand and prosper (Porter and Kramer, 2006). Those organisations closed to CSR will miss these opportunities and run the risk of being left behind. Even if opportunities are identified, access to capital may become increasingly difficult for non-CSR firms.

With the rise of Socially Responsible Investment, organisations that do not engage in CSR can limit their access to capital and hence, their growth potential (Porter and Kramer, 2006). Furthermore, organisations run the risk of greater regulatory intervention if they do not change to more socially responsible ways.

The recent trend towards regulation of business activities has highlighted the fact that if governments and policy makers identify failures in self-regulation, they are more than willing to step in and regulate business behaviour (Lynch-Wood et al, 2009). Legislation changes and compliance requirements are both restrictive and costly to organisations. If organisations fail to go above and beyond current compliance requirements, they risk further requirements being imposed on their activities (Benabou and Tirole, 2010).

These risks all have the potential to significantly impact an organisation's profitability and, in extreme cases, long-term survival. These considerations alone should be cause enough for businesses to reconsider their default position on CSR initiatives. Whatever the shortcomings of the CSR movement, and the ideologically motivated debates about definition, society and the global economy have radically changed. Being socially responsible is now the only way to do business.