International Co-operation in the Kyoto Protocol

The environment started to be seen as a serious issue by some during the 1970s. Many politicians at that time did not regard the environment as an important issue, although the oil crisis of 1973 did make people think about pollution and resources. Concern for the environment was mainly confined to ecologists and a few fringe environmental groups such as Friends of the Earth. However, growing scientific evidence of environmental damage that could seriously harm the future of the planet placed the environment firmly on the global political agenda (Evans & Newnham, 1998, p.149). Although the environment got onto the political agenda, it has not proved straightforward to gain full international co-operation over taking meaningful measures to reverse, or at least halt, environmental damage. Effective co-operation has been delayed by the reluctance of some countries, such as the United States, to reduce their pollution levels, as doing so would mean lowering their prosperity. There is also resentment from Third World countries at being told they should stop their economic development because the West has already used most of the global resources. There are further issues concerning who owns the remaining natural resources and who pays for the pollution that they cause (Bannock, Baxter & Davis, 2003, p.120).

Ecological movements are not new to the late 20th and early 21st centuries, yet the influence that environmentalists now have is greater than ever. There were people and movements opposed to industrialisation due to its social as well as its environmental impact (Eatwell & Wright, 2003, p.231). Ecological movements would usually have far more expansive plans for reversing environmental damage and would not be popular with political leaders, consumers and voters. Politicians, rather than ecological movements, almost always determine the pace and direction of international co-operation on the environment. The ecological movements have won a partial victory in that the environment looks set to remain on the agenda indefinitely. The difficult part is to make sure that agreements such as the Kyoto Protocol are truly co-operative and effective rather than meaningless gimmicks on the part of the governments that signed up to them (Eatwell & Wright, 2003, p.250).

It was probably no coincidence that modern ecological movements emerged in the West during the 1960s, when growing affluence amongst the young middle classes allowed them the chance to think about the global economy and the impact that it had upon the global environment. In the 1960s many people, if they thought about the environment at all, assumed it meant nothing more or less than making the air and water clean (Hobsbawm, 1994, p. 262). Communism, the main political and economic rival to capitalism, may have had different aims to its capitalist rivals, yet it still aimed at rapid economic growth. Neither capitalism nor communism was or is intended to be a guardian of the environment. However, growing knowledge of the damage being caused to the environment would force countries to co-operate with each other, especially after the fall of communism in Central and Eastern Europe (Brown, 2002, p. 240). The ever-increasing consumption of natural resources and rising levels of pollution arguably intensified the global warming problem and would mean that co-operation over the environment would become a major area of contention. For instance, United States consumption of oil increased by 300 per cent between 1950 and the start of the oil crisis in 1973. The highly inefficient factories of the Soviet Union produced almost as much pollution as those of the United States while manufacturing far fewer goods (Hobsbawm, 1994, pp. 252-253).

Complacency about the environment started to be lifted during the 1970s, eventually leading to international protocols to reduce pollution. The oil crisis of 1973 led to some attempts to find alternatives to fossil fuels, although it did nothing in the long term to reduce oil consumption, even if it did hurt the pockets of Western motorists and Third World governments. As the human population continues to grow beyond 6 billion, the use of resources and the resulting pollution will grow with it (Nicholson, 1998, p.157). Environmental and ecological movements started to make headway in Western Europe and North America with concerns about acid rain, the emission of CFCs reducing the ozone layer and, most significantly, global warming (Brown, 2001, p.252).

Global warming is now a concern of most governments, although they do not have an equal say as to the policies that should be pursued to stop or reverse the process. Rising sea levels are more of a threat to the Netherlands, parts of Britain or Bangladesh than they are to the United States, Russia and China. The relative wealth of the Netherlands and Britain makes their co-operation with the Kyoto Protocol easier to achieve than that of Bangladesh. The exclusion of the United States, Russia, China and India would seriously damage the co-operation needed to bring the Kyoto Protocol anywhere near being effective (Nicholson, 1998, p.165).

It has been encouraging that there has been co-operation between governments over the environment. However, that co-operation has had to be brought about by a process of negotiations and compromises, with little to force countries, especially more powerful ones such as the United States, Russia and China, into agreeing to effective measures to protect the environment. Aside from appealing to sense and reason, there is little way of enforcing measures agreed at the Kyoto Protocol or any other environmental summit. The Kyoto Protocol, like its predecessor the Rio Earth Summit, was the result of long drawn-out talks similar in complexity to the GATT rounds or EU treaties and summits. Co-operation over the environment is often limited to the minimum restrictions and measures that can be agreed rather than the maximum. The agreements over reducing CFC (chlorofluorocarbon) emissions can be regarded as starting the process of international co-operation to slow down environmental damage, although they amply demonstrated that politicians are only willing to take action once there is enough scientific evidence of environmental damage. By that time much damage has already been done (Brown, 2002, p. 240).

The Rio Earth Summit was intended to introduce measures and co-operation to tackle global warming on a greater scale. To a certain extent it succeeded in producing co-operation, even if it was hampered by the unwillingness of the Bush senior administration to agree to the most stringent measures that could have been agreed. The United States remains the world’s largest individual polluting country, yet its governments are generally unwilling to jeopardise American living standards to save the planet. The Bush senior administration did not, however, block the agreement at Rio as it could have done. The United States government came under pressure from other governments to take greater action yet did not yield to it (Brown, 2002, p. 243).

Third World and developing countries were not happy, and remain unhappy, that the United States does not do more to protect the environment, as it has gained most from the way that the global economy operates (Nicholson, 1998, p.173). Other countries, mainly in the EU and Scandinavia, have been more active in seeing the environment as being of vital importance and wished to go further than the agreements reached at Rio. The EU can play its part in protecting the environment as it can pass legislation and regulations that its member states have to conform to (McCormick, 2002, p.128).

EU states, plus Australia and New Zealand, played their part in the Kyoto Protocol. Once again the United States proved reluctant to adopt tough measures. That reluctance persisted despite President Bill Clinton being keen on environmental issues. He was unwilling to cut American living standards and also realised that tough restrictions were unlikely to get through a Republican-controlled Congress (Crystal, 2003, p.513).

European countries such as Britain, France and Germany were unhappy about the lack of United States support for the Kyoto Protocol. Tony Blair was especially disappointed, as he expected Bill Clinton to have been more supportive of the Kyoto Protocol and of protecting the environment (Young, 2003, p.150). George W Bush was even less willing for the United States to be constrained by any parts of the Kyoto Protocol. Indeed, prior to the 9/11 attacks the Bush administration seemed cool to the idea of international co-operation on most issues. Since 9/11 the United States government has been more interested in pursuing the war on terrorism than in co-operation to uphold the Kyoto Protocol or protect the environment. The campaigns in Afghanistan and Iraq have done harm to the environment. The soaring oil prices seen after the invasion of Iraq may, however, boost the moves towards finding alternative fuels as much as any of the targets on reducing emissions agreed within the Kyoto Protocol would have done on their own. OPEC countries seem far happier to cut oil production to maintain high prices, as it is not in their economic interests to co-operate with other countries to reduce oil consumption. Such is the demand for oil that consumption does not decline even when prices are at record levels. Western governments, when talking with OPEC countries, would rather get the oil production quotas raised than discuss co-operation towards reduced consumption (Evans and Newnham, 1998, p. 397).

The need for global co-operation to achieve the 5 per cent emission targets set out in the Kyoto Protocol would seem to be growing, with global warming seeming to contribute to climate changes that are increasingly costly and dangerous (Crystal, 2003, p. 513). Climate changes have made, and will continue to make, floods and droughts more common, whilst the lack of adequate food and water supplies will contribute to greater instances of famine and severe poverty. Whilst countries can take steps to avoid economic problems, there is little they can do to stop a hurricane or tsunami. The costs of reducing pollution, improving irrigation and building up flood protection are high, with no certainty that such measures will be successful (Eatwell and Wright, 2003, p. 251). There was large-scale international co-operation to help the Asian countries devastated by the tsunami of December 2004. Countries can only hope that such disasters are confined to unpopulated or lightly populated areas, keeping death and destruction to a minimum. Aside from such hopes, countries can increase their levels of co-operation by encouraging recycling and energy and water conservation schemes to reduce the emission of greenhouse gases.

Therefore, although the Kyoto Protocol was a sign of global co-operation to start reducing pollution and trying to make the global environment safer, it has had its limitations. The failure of the United States to co-operate with the process means that the world’s greatest polluter is not taking active steps to help protect the environment. Perhaps that might change depending on who succeeds George W Bush to the presidency. Not every country has signed up to the Kyoto Protocol, and of those that did, not all have ratified it. Should the current high oil prices continue, there may be co-operation to find alternative fuels that are cheaper and possibly cause less pollution. Some countries are more committed than others to co-operation in order to protect the environment. The member states of the EU are formally aiming towards sustainable development and the emission reductions agreed as part of the Kyoto Protocol.

The selfishness of the United States and other countries that fail to co-operate to reduce environmental damage will come back to haunt us all and leave our descendants a terrible legacy to deal with. However, the Rio Earth Summit and the Kyoto Protocol have provided a framework for global co-operation that needs to be built upon. Perhaps global co-operation could be increased through the auspices of the United Nations, aided by scientific evidence of the urgent need to act now. People should also consider acting on an individual and community basis to conserve and protect the environment as best they can by recycling and conserving water and energy.

UK and US relations changed after Obama’s election

In his 1946 Iron Curtain speech, Winston Churchill (2015, n. pag.) stated that “[n]either the sure prevention of war, nor the continuous rise of world organisation will be gained without what I have called the fraternal association of the English-speaking peoples. This means a special relationship between the [UK] and the United States.” The end of World War II did indeed mark the start of a special relationship between the two nations and it has been characterised by political, diplomatic, economic and military relations as well as shared values and strategic objectives in the intervening years (Wallace & Phillips, 2009). However, there has been significant coverage and analysis of the special relationship since the 2008 election of Barack Obama as US President, with many political commentators, academics and journalists alike speculating as to how relevant such a relationship may now be as a result of changing strategy on the part of both nations. This essay will establish the state of the special relationship between the UK and the United States prior to the election of Barack Obama before exploring the political, economic and social changes that occurred in the wake of his inauguration. This will be done with a view to concluding that 2008 was a watershed for the special relationship as a result of changing US and UK priorities, transitional leadership and the global financial crisis. However, despite the evolution of the special relationship as a result of a shifting global political and economic climate, it is still a key strategic alliance that is relevant to the security of both states and the international community as a whole today.

Prior to 2008, the special relationship between the UK and US had the purpose of forming close cooperation between the two in terms of nuclear weapons technology, economic activity, trade and military planning and execution, amongst other areas (Wither, 2006). In the immediate aftermath of World War II, it forged mutual recovery and support in the rebuilding of states that had been damaged by the war, with America emerging as the global leader (Arnold, 2014; Friedman, 2007). Lee (2010) argues that the UK’s position as the weaker partner throughout the second half of the 20th century was a result of the fact that the defence cooperation between the two was dominated by the US, which had larger strategic forces and often demanded UK cooperation in initiatives during the Cold War. Wright (2002) supplements this perspective and notes that this imbalance persisted into the 21st century as a result of the UK’s backing of George W. Bush’s actions in the wake of 9/11 in exchange for maintaining British influence internationally. The British support for US foreign and security policy in the aftermath of 9/11, particularly the 2003 invasion of Iraq, suggested that the special relationship remained strong despite the fact that it did not have popular support in Britain and raised significant questions as to whether it was in the best interests of the UK to follow American international strategy (Dumbrell, 2009). However, regardless of the controversy that surrounded the actions of both nations, there can be little doubt that there were strong political, strategic and military links in place prior to the election of Obama in 2008.

Following Obama’s election in 2008, the special relationship has been called into question as a direct result of the ideological disparities between the new President and his predecessor. Obama’s diplomatic objectives and strategic goals departed significantly from the approach taken towards alliances and security by George W. Bush. For example, Dumbrell (2009) notes that Obama was ambivalent towards the protectionist strategy that had previously been employed and towards those applying pressure on the president to continue pursuing it, while also committing to diplomacy with other European states to encourage engagement in Afghanistan. As such, the foundation of the special relationship had become distorted as a direct result of the fact that Obama did not wish to prioritise relations with Britain in order to secure an ally in the international community based on a traditional mutual need. Indeed, both Gordon Brown and Barack Obama sought to address global issues like climate change (Dumbrell, 2013) and the restructuring of international institutions to create an effective and efficient global society (Dumbrell, 2008). These issues were not prioritised on Bush’s agenda and the cooperation between the US and UK on them provides evidence of a shift in focus. However, the fact that both pledged to cooperate on matters of international rather than domestic importance does underscore the fact that diplomatic and political relations were still in place despite Obama’s determination to redefine US foreign policy. As such, the special relationship did change in the field of diplomacy but remained resolutely in place.

In highlighting Obama’s tentative departure from the traditional American protectionist stance, Dumbrell (2009) also drew attention to the fact that Obama sought to build military alliances to strengthen his position in Afghanistan, and this also impacted upon that particular aspect of the special relationship. For example, Self (2010) states that Obama exerted pressure on the British government to commit more troops to the ongoing war in Afghanistan. This was done via coercive rhetoric in a bid to overcome policy differences that were geared towards protecting the British national interest in a time of economic crisis. As such, there was significant conflict in the area of military cooperation because of the circumstances that had changed the priorities of both nations. The military element of the special relationship also evolved after the inauguration of Obama in other ways that were directly linked to the shift in American strategic priorities (Wither, 2006). For example, Wither (2006, p. 47) argues that:

…the longstanding defence partnership is threatened by a number of factors, including interoperability problems, the UK’s national and defence spending priorities, the likely impact of a decision to replace Trident and the decline in the importance of the transatlantic strategic partnership in NATO.

This identifies several areas where US priorities were distinctly different to those of the UK and therefore marks a major disjunction between the policies of both. This had not existed before, as the UK had actively supported the US in its global endeavours, often without question (Dumbrell, 2009). There can be no doubt that the UK was not able to do so to the same extent after 2008. British military capabilities were significantly reduced in the wake of the 2008 global financial crisis, which raised significant questions about the UK’s ability to contribute to global security as well as compromising any future usefulness in collaborative overseas operations (Wither, 2006). This was also paired with a reluctance to aid Obama via bilateral agreements to take action overseas, with a prime example being parliament’s rejection of Prime Minister David Cameron’s proposal to ally with Obama and commit to air strikes in Syria in 2013 and the resultant strained relationship between the two leaders (Rothkopf, 2014). As a consequence, the special relationship had fundamentally changed in numerous ways based upon the reluctance of the UK to toe the American line and the frustration that manifested in the military relationship between the two, at least on a governmental level if not on the ground where cooperation did occur.

Although cooperation may not have been as forthcoming in a military context as it had been prior to 2008, there are areas of policy and the special relationship in which new forms of cooperation flourished. For example, according to Wallace and Phillips (2009, p. 263), the “…US-UK special relationship today has a political and ideological superstructure and an embedded military and intelligence substructure.” This suggests that there is active cooperation between the two in the intelligence sphere, and that is reinforced by the creation of a National Security Strategy Board, which was designed to provide a clear line of communication between officials in the UK and the US to discuss security and strategy as and when necessary (Watt, 2011). In addition, there are ongoing intelligence operations that require cooperation between the two, most notably the running of CIA networks within British communities in conjunction with MI5 in order to prevent terrorist attacks (Svendson, 2010). In effect, stronger links have developed in this particular area of the relationship and illuminate how it has changed based upon need.

The economic aspect of the special relationship also demands scrutiny. Despite the global economic crisis that damaged both the US and UK economies significantly, there is extensive economic activity that ties them together, including trade and investment that renders each the largest investor in the other (Foreign Affairs Committee, 2010). This irrevocably bound the nations together and provided a point of cooperation that was seemingly unaffected by global goals as it benefitted both nations. Indeed, Stacey et al. (2015, n. pag.) note that Obama perceived the US and UK economies as the two that were “standing out at a time when a lot of other countries are having problems” at the beginning of 2015, thus ostensibly reaffirming the special relationship publicly. The implication here is that the strength of both economies reinforced the relationship as a result of the ongoing benefits that both nations were able to reap from the situation. It should be noted that there were points of disagreement, such as the fact that Obama sought to insert clauses designed to protect American industry and jobs into World Trade Organisation and UK stimulus packages. However, these did not actively impact upon the economic support or cooperation that one provided the other. In effect, this particular area of the special relationship changed very little despite the global economic climate and the uncertainty it introduced impacting upon other areas.

However, despite the changes to the special relationship illustrated above, there are certain elements of it that have altered little since 2008. For example, despite the fact that Obama has favoured a partnership with the collective of European states rather than one nation, the UK is still the weaker partner in the relationship: “…relief that [Obama’s] first phone call to a European leader was to Gordon Brown, indicates how dependent Britain’s claim to global status is on Washington’s approval” (Wallace & Phillips, 2009, p. 283). Although the UK is no longer a bridge to Europe as a result of Obama’s desire to establish relationships with the European Union and its individual states (Cameron, 2007), it still maintains the closest relationship of all European states to the US and continues to be its closest ally. This is important in determining how far the special relationship had changed and denotes the presence of common ground that has endured from the end of World War II and is still in place.

In conclusion, the analysis in this essay points to the special relationship between the UK and United States undergoing a fundamental change in the wake of the election of President Barack Obama in 2008. Although the channels of communication remained open and were consolidated as a result of cooperation in the field of intelligence and via the new National Security Strategy Board, the strategic goals of both nations were undoubtedly impacted by economic crisis, involvement on a changing international stage and the need to develop enhanced relations with other European nations. There is also evidence of friction between the two nations, which manifested in an unwillingness to support the other unless initiatives and policies were also in the national interest. These points outline how the special relationship changed on an ideological and a practical level. However, the economic element of the special relationship remained intact, in spite of the attempts by the US government to insert clauses into stimulus agreements to aid the American economy, and this underlined the remaining importance of each power to the other. As such, the analysis reinforces the idea that 2008 was a watershed for the US-UK special relationship as a result of the impact that changing priorities, transitional leadership and the global financial crisis had on both nations. There has certainly been a need for the evolution of the special relationship as a result of a shifting global political and economic climate, but the relations between the US and UK still facilitate the maintenance of a key strategic alliance that is relevant to the security of both states and responds to the demand for global leadership by the international community today.

Bibliography

Arnold, G., (2014). America and Britain: Was There Ever a Special Relationship? London: C. Hurst & Co.

Cameron, F., (2007). An Introduction to European Foreign Policy. London: Routledge.

Churchill, W., (2015). The Sinews of Peace (“Iron Curtain Speech”). The Churchill Centre. [Online] Available at: http://www.winstonchurchill.org/resources/speeches/1946-1963-elder-statesman/the-sinews-of-peace [Accessed 17 October 2015].

Dumbrell, J., (2009). Hating Bush, Supporting Washington: George W. Bush, Anti-Americanism and the US-UK Special Relationship. In J. Dumbrell & A. Schafer eds. America’s ‘Special Relationships’: Foreign and Domestic Aspects of the Politics of Alliance. Abingdon: Routledge, pp. 45-59.

Dumbrell, J., (2013). Personal Diplomacy: Relations Between Prime Ministers and Presidents. In A. Dobson & S. Marsh eds. Anglo-American Relations: Contemporary Perspectives. Abingdon: Routledge, pp. 82-104.

Dumbrell, J., (2008). The US-UK Special Relationship: Taking the 21st Century Temperature. The British Journal of Politics and International Relations, 11, pp. 64-78.

Foreign Affairs Committee, (2010). Global Security: UK-US Relations. London: The Stationery Office.

Friedman, N., (2007). The Fifty-Year War: Conflict and Strategy in the Cold War. Washington DC: Naval Institute Press.

Lee, L., (2010). US Hegemony and International Legitimacy. Abingdon: Routledge.

Rothkopf, D., (2014). National Insecurity: Can Obama’s Foreign Policy Be Saved? Foreign Policy. [Online] Available at: http://foreignpolicy.com/2014/09/09/national-insecurity/ [Accessed 21 October 2015].

Self, R., (2010). British Foreign and Defence Policy Since 1945: Challenges and Dilemmas in a Changing World. Basingstoke: Palgrave Macmillan.

Stacey, K., Dyer, G. & Murphy, M., (2015). David Cameron and Barack Obama Reaffirm Special Relationship. Financial Times. [Online] Available at: http://www.ft.com/cms/s/0/7dfb3402-9d9f-11e4-8946-00144feabdc0.html#axzz3pD1jb3YF [Accessed 20 October 2015].

Svendson, A., (2010). Intelligence Cooperation and the War on Terror: Anglo-American Security Relations After 9/11. Abingdon: Routledge.

Wallace, W. & Phillips, C., (2009). Reassessing the Special Relationship. International Affairs, 85:2, pp. 263-284.

Watt, N., (2011). Barack Obama Agrees to Form Joint National Security Body with UK. The Guardian. [Online] Available at: http://www.theguardian.com/world/2011/may/23/barack-obama-security-board-with-uk [Accessed 20 October 2015].

Wither, J., (2006). An Endangered Partnership: The Anglo American Defence Relationship in the Early Twenty-First Century. European Security, 15:1, pp. 47-65.

Wright, B., (2002). Analysis: Anglo-American ‘Special Relationship’. BBC. [Online] Available at: http://news.bbc.co.uk/1/hi/world/americas/1913522.stm [Accessed 20 October 2015].

Feminist theories and International Law

Feminism is a political movement that seeks to overturn gender inequalities between men and women (Blunt and Wills, 2000: p. 90). It is concerned with the power relations that influence not only how individuals relate to each other, but how spheres of life are gendered in particular ways. Feminism is, therefore, inherently linked to international law and is one of the ways in which it can be theorised. While the international legal system may be broadening in scope, it remains narrow in perspective. In particular, the boundaries and limits of international law can be seen from a critical and feminist perspective. Feminist legal theory comprises two broad strands. The first is to analyse and critically interrogate the implicit and masculinist assumptions of international law in theory and in practice. The second is to reform international law such that it might better serve the interests of women across the world. It has been argued that ‘feminist theories have nothing to add to the study of international law’ (Hunter-Williams, 2009). However, despite this criticism, feminist theories have much to contribute to the study of international law. The importance of feminist theories in international law can be seen through the inadequacies of traditional theories of law and also in the application of feminist theories in areas such as human trafficking and refugee law. The absence of women in international law has distorted the discipline’s boundaries and “produced a narrow and inadequate jurisprudence that has, among other things, legitimised the unequal position of women around the world rather than challenged it” (Charlesworth and Chinkin, 2000: p.1). Feminist theory acts to challenge this situation and thus offers a significant contribution to the study of international law.

Traditional theories of international law have seriously failed to address the situation of women worldwide (Charlesworth and Chinkin, 2000: p.25). Feminist theories, however, contribute to our understanding of international law and the global inequality of women. As such, the remainder of this essay will refute the claim that ‘feminist theories have nothing to add to the study of international law’. It should be stressed that there is no single school of feminist jurisprudence and the categories do overlap in some respects.

Liberal feminists typically accept the language and aims of the existing domestic legal order. Charlesworth and Chinkin explain how liberal feminists “insist that the law fulfil its promise of objective regulation upon which principled decision-making is based” (2000: p.39). Their primary goal is to achieve equality of treatment between women and men in public areas, such as political participation and representation, and equal access to and equality within paid employment, market services and education (Charlesworth and Chinkin, 2000: p.39). Liberal feminism therefore, has something to add to international law in that it seeks to achieve equality between men and women.

Charlesworth and Chinkin define cultural feminism to be “concerned with the identification and rehabilitation of qualities and perspectives identified as particular to women” (2000: p.40). Epistemologically, it is a standpoint theory in that it emphasises the importance of knowledge based upon experience and asserts that women’s subjugated position allows them to formulate more complete and accurate accounts of nature and social life (Harding, 1986: pp.24-29). In this area, the work of Carol Gilligan is particularly relevant. Gilligan investigates whether there is a distinctively feminine way of thinking or solving problems (Gilligan, 1982). She identifies a ‘different’ voice which bases decisions on the values of caring and connection in contrast to a style of decision-making based on abstract logic (Gilligan, 1982: p.24). The former is associated with women and the latter with men (Charlesworth and Chinkin, 2000: p.40). Gilligan’s work has been useful to the critical analysis of legal reasoning, which lays claim to abstract, objective decision making. Accordingly, “if legal reasoning simply reproduces a masculine type of reasoning, its objectivity and authority are reduced” (Charlesworth, et al., 1991: p.615). This illustrates the contribution of cultural feminism to international legal theory.

Radical feminism explains women’s inequality as the product of domination of women by men. Catherine Mackinnon has been a consistent exponent of this view. Her view is that the law keeps women ‘out and down’ (Mackinnon, 1987: p.205) by preserving a hierarchical system based on gender and sex. Radical feminism has paid attention to the public/private dichotomies that also feature in liberal thought. The public realm of the workplace, the law, economics, politics and intellectual and cultural life is regarded as the natural province of men; while the private world of the home, the hearth and children is seen as the appropriate domain of women (Charlesworth et al., 1991: p.626). This dichotomy has led to a debate amongst feminist scholars over whether this distinction often operates to obscure or legitimate men’s domination of women. This dispute could be seen to weaken radical feminist theory. However, the awareness it raises of the domination of women by men and particularly the hierarchical system of international law outweighs its flaws.

Feminist campaigns have not only been restricted to women from the Global North. The term ‘third world feminisms’ refers to approaches developed by women from the Global South and women of colour in the Global North. These approaches explore the differences among, as well as between, men and women. For instance, Alice Walker coined the term ‘womanism’ (1984, quoted in Blunt and Wills, 2000: p. 114) because many black feminists prefer the term ‘womanism’ to ‘feminism’, as the latter has been largely white and largely uncritical of its whiteness. Charlesworth et al. assess third world feminisms in terms of the notion of a ‘different voice’ (1991: p.615) in international law. The authors argue that third world states have challenged international law as either disadvantageous to them or inadequate to their needs (Charlesworth et al., 1991: p.616). However, they also suggest that although the challenge of the ‘different voice’ of the developing nations to international law has been fundamental, it has focused on disparities in economic position and has not questioned the silence of half the world’s population in the creation of international law, or the unequal impact of rules of international law on women (1991: p. 618). Despite the limitations of third world feminisms, they still provide an important contribution to international law in that they highlight the application of Western feminist theories to third world communities and societies (Charlesworth and Chinkin, 2000: p.46).

The importance of the contribution of feminist theories to international law can be seen in practice in relation to human trafficking. In December 2000, over 80 countries signed the United Nations Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children (the Trafficking Protocol) (Doezema, 2002: p.20). The Trafficking Protocol works to conceptualise an international problem; it established the first definition of trafficking in international law and put in place a set of measures for international co-operation to address this problem (Sullivan, 2003: p.68). The Trafficking Protocol defines trafficking in persons as “the recruitment, transportation, transfer, harbouring or receipt of persons, by means of the threat or use of force or other forms of coercion” (United Nations, 2003: p.2). Trafficking in women for the sex industry is highly profitable for those running the trade. The UN estimated that 4 million people were trafficked in 1998, producing a profit of USD 7 billion for criminal groups (Sassen, 2002). Feminists and feminist organisations were particularly involved in discussions about the text of the Trafficking Protocol (Sullivan, 2003: pp.67-68). Feminist lobbying regarding the Protocol was split into two ‘camps’ espousing differing views on prostitution. One group, the Human Rights Caucus, viewed prostitution as legitimate labour. The other, represented by the Coalition Against Trafficking in Women (CATW), considered all forms of prostitution to be a violation of women’s human rights.

Differences about the possibilities of distinguishing between free and forced prostitution divided feminists. Consequently, the definition of trafficking incorporated in the Protocol has some important weaknesses. Furthermore, the debate amongst feminists on this topic has fuelled claims that ‘feminist theories have nothing to add to the study of international law’ (Hunter-Williams, 2009). However, the Protocol does have its strengths. The Trafficking Protocol has had significant worldwide impact on the status of women. As such feminist theory should be seen as making an important contribution to the study of international law.

A further area in which feminist theories are viewed as important in international law is that of refugee law. Carving out territory for refugee women within mainstream legal realms has been one way that feminists have successfully redressed their invisibility within refugee discourse (Oswin, 2001). Efforts have largely focused on eliminating the male bias within the legal definition of ‘refugee’ in order to incorporate the experiences of refugee women into refugee status determination processes. Emphasis has also been placed upon the recognition of violence against women as a ground of persecution. Those feminists who have sought to incorporate women’s experiences into refugee law can claim success on a variety of fronts. For instance, the UNHCR’s Guidelines on the Protection of Refugee Women, adopted in 1991, emphasise the fact that gender-based persecution exists and should be recognised by ‘refugee-receiving’ states as a basis for asylum (Oswin, 2001: p.350). In this way, feminist efforts have been instrumental in putting refugee women’s experiences on the agenda of international refugee law. However, it could be proposed that feminist theories have not had a substantial involvement in refugee law as feminists “have only been granted a small portion of what is already extremely finite territory” (Oswin, 2001: p.347).

A final example of the significant impact that feminist theory has had on the study of international law is that of the United Nations Security Council Resolution 1325. SC1325 is an eighteen-point resolution that develops an agenda for women, peace and security. It calls for the prosecution of crimes against women, increased protection of women and girls during war, the appointment of more women to the UN peacekeeping operations and field missions and an increase in women’s participation in decision making processes at the regional, national and international level (Cohn, et al., 2004: p.130). The resolution was unanimously adopted by the Security Council on 31 October 2000. SC1325 is highly significant because it is the first time the Security Council has devoted an entire session to debating women’s experiences in conflict and post-conflict situations. The resolution was influenced by feminist campaigners and the case highlights the growing influence of feminist theories on international law.

Women are on the margins of the international legal system (Charlesworth and Chinkin, 2000: p.48). Charlesworth and Chinkin comment that: “Women form over half the world’s population, but their voices, in all their variety, have been thoroughly obscured by and within the international legal order” (2000: p.1). Feminist excursions into international law have been reproved for criticising the male-centredness of international law while at the same time invoking the international legal order to improve the situation for women (Charlesworth and Chinkin, 2000: p.59). The implication of this is that “feminists forfeit the right to invoke international law if they point out its biases” (Charlesworth and Chinkin, 2000: p.59). Such claims have led to assertions that ‘feminist theories have nothing to add to the study of international law’. However, the development of feminist jurisprudence in recent years has made a “rich and fruitful contribution to legal theory” (Charlesworth, et al., 1991: p.613). This is highlighted by the inadequacies of traditional theories of international law, and the important contribution of feminist ideas both in theory and in practice, such as in the Trafficking Protocol and refugee law. Consequently, feminist theory can be used to “reshape the way women’s lives are understood in an international context, thus altering the boundaries of international law” (Charlesworth and Chinkin, 2000: p.337).

Bibliography

Blunt, A. and Wills, J. (2000) Dissident Geographies: An Introduction to Radical Ideas and Practice, Harlow: Prentice Hall.

Charlesworth, H. and Chinkin C. (2000) The Boundaries of International Law: A Feminist Analysis, Manchester: Manchester University Press.

Charlesworth, H., Chinkin, C., and Wright, S. (1991) ‘Feminist Approaches to International Law’, American Journal of International Law, 85(4), pp.613-645.

Coalition Against Trafficking in Women (CATW) (1999) ‘Prostitutes Work, But Do They Consent?’, available at http://www.uri.edu/artsci/wms/Hughes/catw

Cohn, C., Kinsella, H. and Gibbings, S. (2004) ‘Women, Peace and Security: Resolution 1325’, International Feminist Journal of Politics, 6(1), pp.130-140.

Doezema, J. (2002) ‘Who Gets to Choose? Coercion, Consent and the UN Trafficking Protocol’, Gender and Development, 10(1), pp.20-27.

Gilligan, C. (1982) In a Different Voice: Psychological Theory and Women’s Development, MA: Harvard University Press.

Harding, S. (1986) The Science Question in Feminism, Milton Keynes: Open University Press.

Hunter-Williams, S. (2009) Feminist Theories Have Nothing to Add to the Study of International Law. Available at: https://simonhunterwilliams.wordpress.com/2009/3/16/feminist-theories-have-nothing-to-add-to-the-study-of-international-law/

Mackinnon, C. (1987) Feminism unmodified, Boston, MA: Harvard University Press.

Oswin, N. (2001) ‘Right Spaces’, International Feminist Journal of Politics, 3(3), pp.347-364.

Sassen, S. (2002) ‘Women’s Burden: Counter-Geographies of Globalization and the Feminization of Survival’, Nordic Journal of International Law, 71, pp.255-274.

Sullivan, B. (2003) ‘Trafficking in Women’, International Feminist Journal of Politics, 5(1), pp.67-91.

United Nations (2003) ‘Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, supplementing the United Nations Convention Against Transnational Organized Crime’, available at http://www.ohchr.org/english/law/pdf/protocoltraffic.pdf.

Example International Relations Essay

What are the main differences between ‘classical realism’ and ‘neo-realism’?
Introduction

Realism has been a foremost theory within international relations for over six decades. Its contemporary construction is attributed to Hans Morgenthau and his work in the late 1940s. Morgenthau utilised previous works from scholars and strategists, including the Ancient Greek scholar Thucydides, Machiavelli, Hobbes and his notions of the anarchic state, and the 1939 work of E. H. Carr. Realism became the primary theory as the discipline of International Relations blossomed, forming political hypotheses based on its philosophies, such as Realpolitik. As International Relations expanded as a discipline with Realism at its centre, the theory was reformed. Kenneth Waltz succeeded in becoming the father of Neo-Realism in the same way Morgenthau had done with Realism thirty years prior. This resulted in a schism in Realist theory between classic Realism and structural (neo) Realism.

The purpose of this essay is to investigate this split and to distinguish the major differences between the two Realist strands. These theories are vast volumes of work that have been considered by the brightest minds of the discipline for several decades; the salient features discussed in this text will offer just a glimpse into their philosophies. The comparison of the two shall be split into two parts, firstly examining the theoretical base and highlighting the noticeable distinctions. The second part will conceptualise these points in a practical sense, attaching them to historical events predominantly from the twentieth century.

Theoretical

Morgenthau’s key principles of Realism consider states as individuals, each a ‘unified actor’. One state represents itself, and these states are primary in international relations. Internal politics and contradictions are irrelevant as states pursue interests defined by power. Power is a further key component of Morgenthau’s paradigm; he believed it central to human nature and therefore to state actors. Morgenthau considered human nature corrupt, dictated by selfishness and ego, resulting in a dangerous world constructed by egotistical, greedy actors. Thus Realism possesses at its core a very pessimistic outlook of constant threat and danger. Logically, therefore, Realism submits as one of its fundamental considerations that state actors are driven by survival and the need for greater dominance and power, creating a favourable balance of power and decreasing the actor’s potential to diminish (Gellman, 1988). Realists consider these attitudes to constitute the national interest, trumping any other concern. Self-help becomes a necessity. Reliance on or trust of other actors is foolish, as Machiavelli describes: “today’s friend is tomorrow’s enemy” (Morgenthau, 1948).

Realism’s success and prominence in international relations naturally exposed it to a series of critiques. Authors and scholars disagreed with its ideological theory and often advocated alternative theories. These included a Liberalist outlook that promotes the importance of democracy and free trade, while Marxists believed international affairs could be understood as a class struggle between capital and labour. Other theories derided the lack of morality and collectivism in Realism, and its simplicity. Despite retaining several of the basic features of classical Realism, including the notions that states are primary unitary actors and that power is dominant, Neo-Realism provided its own criticism of the classic paradigm. Structural Realism directed attention to the structural characteristics of an international system of states rather than to its components (Evans and Newnham, 1998). Kenneth Waltz detached from Morgenthau’s classic Realism, suggesting it to be too ‘reductionist’. He argued that international politics can be thought of as a system with a precisely defined structure; Realism, in his view, was unable to conceptualise the international system in this way due to its various limitations, essentially due to its behavioural methodology (Waltz, 1979). Neo-Realism considers the traditional strand incapable of explaining behaviour at a level above the nation state. Waltz is described as offering a defensive version of Realism, while John Mearsheimer promotes an offensive version, suggesting that Waltz’s analysis fails to chart the aggression that exists in international relations; however, the two are often considered as one through neo or structural Realism (Mearsheimer, 2013).

The idea that international politics can be understood as a system, with an exact construct and separate structure, is both the starting point for international theory and the point of departure from traditional Realism. The fundamental concern for Neo-Realists is why states exhibit similar foreign policy behaviour regardless of their opposing political systems and contrasting ideologies. The Cold War brought two opposing superpowers that, although socially and politically opposite, behaved in a similar manner and did not differ in their pursuit of military power and influence. Realism in Waltz’s view was severely limited, as were other classic theories of international relations. Neo-Realism is designed as a re-examination, a second-tier explanation that fills in the gaps classic theories neglected. For example, traditional Realists remain adamant that actors are individuals in international affairs, referencing the Hobbesian notion that two entities are unable to enjoy the same thing equally and are consequently destined to become enemies. Neo-Realists, by contrast, consider that relative and absolute gains are important and may be attained by collusion through international institutions (Waltz, 1979).

Practical

The salient theoretical differences exhibited in the first section will be strengthened in this second section by applying the theory to practical situations in order to enhance understanding of the degree of separation. As discussed, traditional Realists consider that the foundation of international affairs is war, perpetrated by states. A Realist doctrine is exhibited by the actions and musings of Richard Nixon and Henry Kissinger, during their time together in Nixon’s presidency and through Kissinger’s influence on Nixon’s successor, Gerald Ford. Within the theatre of the Cold War, they attempted to maximise American power in order to safeguard American security against fellow actors. Incursions in Vietnam and Korea were designed, at a basic level, to keep the United States the primary superpower and increase American dominance. Nixon’s presidency was also associated with his administration’s dialogue with China, and its keenness to exploit the Sino-Soviet split in order to tip the balance of power in America’s favour, all illustrating a classic Realist mentality of international relations: that it is constructed entirely of state interactions and a grasp for power (Nye, 2007).

Another example that depicts this mentality is Thucydides’ work concerning the Peloponnesian War, an example often utilised by traditional Realists. Thucydides in his works expresses an unrelenting Athenian desire to pursue self-interest, achieved through the use of force and hard power. He famously wrote, “The strong do what they have the power to do and the weak accept what they have to accept” (Thucydides, 1972, p402). Thucydides’ sentiments illustrate the Realist notion of human nature being motivated primarily by power, a pattern repeated in subsequent wars throughout human history. Colin Gray, a modern scholar, concurs with the Realist outlook, suggesting an inherent human characteristic still drives states in the same way it did in 400 BC (Gray, 2009).

Neo-Realists tend to distance themselves from this notion of a corruptible human nature. They blame the outbreak of the Second World War not on innate human corruptibility, but on the failure to achieve a recognised international system. They disagree with Realist logic that the primary reason for the Second World War was Hitler’s lust to institute his power and influence across Europe. In their estimation, the disorder produced by the Treaty of Versailles was principal in throwing the world back into war. Its adoption at the behest of the French, British and American states provided the opportunity and the catalyst for the Nazi Party to flourish. Resentment in Germany of the allied powers, coupled with a weak nation unable to recover because of this ‘diktat’, rendered the German economy and military perpetually weak, all contributing to Hitler’s ability to snatch power and consequently produce the elements to start a world war. In Neo-Realist estimations, the world was failed by the lack of a substantial international system (Jervis, 1994).

The response classic Realists provide to Neo-Realists is that their re-worked form of the theory is simply presented in a way that is more structural and scientific, but with a core maintaining the original doctrines offered by traditional Realism. Neo-Realists do not deny that their ideology is extremely similar, regarding it as an improvement on the original theory that offers a more structured and formulated paradigm; Realists, however, argue that those alterations, including the structural formations, are what inhibit the new theory. Richard Ashley is one author who concurs with these sentiments, stating that traditional Realism provides a more advanced concept of analysis (Ashley, 1984). For example, even if the Treaty of Versailles did create bleak conditions in Germany that incited the Nazis’ upsurge, the fundamental lust for power Hitler exhibited in the extreme was still predominant in starting World War Two, regardless of structural factors. This analysis echoes Colin Gray’s opinions regarding the characteristics exhibited in the Peloponnesian War still being relevant in the twentieth and twenty-first centuries, and illustrates Realism’s continued relevance.

A further crucial difference between the two strands is the role of political belief or governance. Classic Realism has always emphasised this consideration. Hitler, Mussolini, Franco and Hirohito all headed what were classed as undemocratic governments. Stalin, with a similarly totalitarian system, had initially signed a pact with Hitler; it was only the latter’s covetousness for supremacy that scuppered that particular alliance, illustrating the pessimistic nature of traditional Realism in not being able to trust other actors. Conversely, Neo-Realists, led by Waltz, concluded that there is no “differentiation of function between different units, i.e. all states perform roughly the same role” (Halliday, 1994). Neo-Realism came at a time when the system had altered from what classic Realism was founded upon, a pre-war world of several great powers. The Cold War heralded a bi-polar system, dominated by nuclear weapons, rendering the differing ideologies and political regimes irrelevant; it was the system that prevailed. Furthermore, America propped up highly undemocratic regimes throughout the Cold War in Asia, South America and the Middle East, suggesting that the classic Realist argument about systems of governance is incomplete (Mearsheimer, 2013).

Traditional Realism witnessed a degree of resurgence post-9/11; the event itself and the subsequent fallout were deemed textbook classic Realism. Actors had to employ self-help and act unilaterally to stop attacks and an assault on the state’s survival. 9/11 produced a real illustration of the strength non-state actors can have in international relations. Although Neo-Realism maintains the classic theory’s consideration of state primacy, it does reference non-state actors as relevant in the international system. Additional actors, however, must adapt to the actions of states, Waltz suggests: “When the crunch comes, states remake the rules by which other actors operate.” (Waltz, 1979, p94) Furthermore, America’s democratic crusade dubbed ‘the war on terror’ was viewed as traditional Realism in action, echoing Morgenthau’s consideration of autocracy versus democracy. However, Neo-Realists will reference American support for very non-democratic states, such as its unwavering support for Saudi Arabia, as the system still triumphing over the state and its form of governance. The actions of the US tie in with Mearsheimer’s offensive Realist outlook to seek hegemony: “great powers recognize that the best way to ensure their security is to achieve hegemony now, thus eliminating any possibility of a challenge by another great power. Only a misguided state would pass up an opportunity to be the hegemon in the system because it thought it already had sufficient power to survive.” (Mearsheimer, 2001)

Conclusion

In conclusion, both strands of Realism remain constant in key areas such as the anarchic state, unitary actors and the importance of power. Neo-Realism, however, presents a shift away from the traditional theories, offering a tangible alternative to corruptible human nature as the root cause of conflict, as exemplified aptly by the debate on the outbreak of World War Two. The crucial point of departure that Neo-Realism provides is the importance given to the international system over the state, claiming that traditional Realism is inhibited by its methodology, failing to explain the behaviour of any entity above the nation state. Neo-Realism allows for co-operation among states at a higher level than Realism permits, providing an opportunity to achieve absolute and relative gains. The concept flourished during the Cold War, rejecting Morgenthau’s system-of-governance analysis and suggesting that states behave the same regardless of whether they are democratic or not. Neo-Realists still maintain this is relevant. Classic Realists disagree, using the events of this century to argue that their methodology was always correct.

In sum, the two differ fundamentally on approach. Neo-Realism seeks to offer the systematic and scientific approach that its proponents believe is lacking in traditional Realism; according to them, it complements the original theory by correcting its fallacies, building on classic Realism’s emphasis on self-interest, power and the state while challenging the human nature concept and explaining behaviour above the state level.

Bibliography

Ashley, R K, 1984. ‘The Poverty of Neo-Realism’. International Organization, 38(2), pp. 255-286.

Donnelly, J, 2000. Realism and International Relations. 1st ed. Cambridge: Cambridge University Press.

Evans, G and Newnham, R, 1998. The Penguin Dictionary of International Relations. 1st ed. London: Penguin.

Fox, W, 1985. ‘E H Carr and Political Realism: Vision and Revision’. Review of International Studies, 11(1), pp. 1-16.

Gellman, P, 1988. ‘Hans J. Morgenthau and the Legacy of Political Realism’. Review of International Studies, 14(4), pp. 247-266.

Gray, C S, 2009. ‘The 21st Century security environment and the future of war’. Parameters, XXXVIII(4), pp. 14-26.

Halliday, F, 1994. Rethinking International Relations. 1st ed. Hampshire: Palgrave Macmillan.

Harrison, E, 2002. ‘Waltz, Kant and Systemic Approaches to International Relations’. Review of International Studies, 28(1).

Jervis, R, 1994. ‘Hans Morgenthau, Realism and the Study of International Politics’. Social Research, 61 (Winter), pp. 853-876.

Mearsheimer, J J, 2013. ‘Structural Realism’, in Dunne, T, Kurki, M and Smith, S (eds), International Relations Theories: Discipline and Diversity. 3rd ed. Oxford: Oxford University Press, pp. 77-93.

Mearsheimer, J J, 2001. The Tragedy of Great Power Politics. 1st ed. New York: W. W. Norton & Company.

Morgenthau, H J, 1948. Politics Among Nations. 1st ed. New York: Knopf.

Nye, J, 2007. Understanding International Conflicts: An Introduction to Theory and History. 6th ed. New York: Pearson Longman.

Thucydides, 1972. History of the Peloponnesian War. Finley, M I (ed), translated by Rex Warner. London: Penguin.

Waltz, K, 2001. Man, the State and War: A Theoretical Analysis. 2nd revised ed. New York: Columbia University Press.

Waltz, K, 2000. ‘Structural Realism after the Cold War’. International Security, 25(1).

Waltz, K, 1979. Theory of International Politics. 1st ed. New York: McGraw-Hill.

The Role of Military Force in Promoting Humanitarian Values

This work was produced by one of our professional writers as a learning aid to help you with your studies

Recent years have seen an increase in military force being used as a tool for increasing the scope for humanitarian values within conflict zones. This paper assesses this trend, using a number of conflict case studies as a vehicle for evaluating the premise. In doing so, it considers that the Libyan intervention in 2011 offers a case study which suggests that state-led humanitarian intervention is born of a political, as opposed to a humanitarian, need. This undermines the promotion of humanitarian values.

The concept of military-led humanitarian intervention can be found within a highly subjective area of academic and political thought. Some commentators, such as Waxman (2013: n.p.), consider that military-led humanitarian intervention consists of “the use of military force to protect foreign populations from mass atrocities or gross human rights abuses”, whilst others, including Marjanovic (2011: n.p.), see this particular course of action as “a state using military force against another state when the chief publicly declared aim of that military action is ending human-rights violations being perpetrated by the state against which it is directed”. Within this subjectivity there is a series of overlapping concepts that help to further the debate in this area. These overlaps can be found within a number of conceptual areas, including war and conflict, within which humanitarian values are negatively impacted by activities that harm non-combatants, such as human rights abuses. Where humanitarian values are considered, the International Committee of the Red Cross (ICRC) (2013) holds the perspective that these comprise aspirations relating to humanity, neutrality, independence and impartiality. In this regard, therefore, one can suggest that where military forces are deployed in order to promote or support humanitarian operations, it is necessary that these forces act within the boundaries of these guiding principles. In their totality, therefore, it is arguable that a number of factors need to be present where a situation occurs that requires military-led humanitarian assistance.

With regards to the reasoning underpinning humanitarian interventions, Weiss (2012: 1) believes that an underlying notion of a “responsibility to protect” is a dominating factor in contemporary geo-political thinking; however, instead of this doctrinal approach being applied across the globe, Weiss (2012) believes that the global community tends to cherry-pick the conflicts in which it intervenes, a point discussed elsewhere in this paper. That said, Minear and Weiss (1995) had previously indicated that any military intervention seeking to promote humanitarian values should incorporate a post-war recovery planning and redevelopment programme. Recent decades, particularly since the end of the Cold War, have seen an increase in the number of military-led humanitarian interventions related to “activities undertaken to improve the human condition” (Weiss, 2012: 1). This latter issue, concerning the human condition, suggests that there has been a genuine shift in the contemporary conflict environment, based primarily on the progression from conventional warfare to asymmetric warfare involving a number of non-state actors and combatants. This is a factor that has not been ignored by Weiss (2012), who suggests that, today, only state-led military interventions can promote humanitarian values because non-state actors are not bound by regulations and international protocols regarding the dynamics and conduct of war. Indeed, this particular perspective gains an increased level of support when the current post-Cold War conflict environment is considered.

For Pattison (2010), the years following the end of the Cold War have resulted in a vastly increased number of military operations designed to support humanitarian values through intervention. These interventions have occurred in a plethora of collapsed or failed states and include, but are not limited to, post-Gulf War (1991–2003) Iraq, Bosnia–Serbia (1995), the Balkans and Kosovo (1992-1999), East Timor (1999), Somalia (2002), Haiti (2004), and Libya (2011). For some, these interventions also include the post-9/11 era’s intervention into Afghanistan and latterly Iraq (2003-2010) (Pattison, 2010). In this regard, Weiss (2012) believes that the underlying concept of humanitarian intervention has helped to increase the potential for international interventions into other states because of a need to increase the level of protection offered to non-combatants in conflict. However, the earlier indication of cherry-picking conflicts offers a greater insight into the nature of the political discourses which take place at the United Nations (UN) Security Council with regards to these conflicts, where state-led political aspirations are an overbearing factor in the intervention tools and choices made by states. Indeed, one can argue that the current and ongoing conflict in Syria offers a case in point, particularly since all state actors which have intervened possess their own aspirations in shaping the future of that particular country (Haaretz, 2014; Press TV, 2013; Ruthven, 2014; Time, 2015). In some respects, therefore, the issue of humanitarian intervention and its related values base is being abused in order that these political aspirations can be furthered (Dagher, 2014). This aspect, however, is a perpetual factor in the international arena, particularly where realist agendas are taken into consideration (Bayliss & Smith, 2001). One area where international intervention has been encouraged is in relation to ethnic conflict.

Kaldor (1998) recognises that the end of the Cold War resulted in an increase in the frequency of ethnically charged conflicts and that these types of conflict have been offered as a rationale for international humanitarian-based interventions. In respect of this, Kaldor (1998) argues that changes within conflict dynamics, which have resulted in belligerent forces not being constrained by international regulations, including the Geneva Convention protocols, the Laws of Armed Conflict or relevant United Nations Charters, have led to humanitarian values being used as an excuse to further the political aspirations of a number of states. This changed dynamic has perpetuated itself and spread to a number of conflict zones around the world. It has also led to an increased reliance upon conventional forces whose role is to offer peacekeeping and security services to non-governmental organisations (NGOs) in support of their operations. In this respect, Christoplos, Longley, and Slaymaker (2004) consider that intervention strategies have also altered in recent years. They note that the underpinning intervention programmes now seek to promote humanitarian values, as evidenced by the creation of a tripartite doctrinal system which utilises areas of national and personal rehabilitation; added to this are post-war recovery programmes intended to help redevelop both state and social infrastructures; finally, there is the central issue of relief programmes that seek to maintain the fabric of civil society during crisis periods. For Seybolt (2007), this perspective adds weight to the argument that military humanitarian interventions can assist NGOs in their duties via the provision of security. However, it is also recognised that adding external military forces into a combat zone can lead to further complications, primarily because military operations carry the potential for using force when deemed necessary (Davidson, 2012; Ministry of Defence, 2011).

The perspective that deployed military forces may utilise force is well grounded in military doctrine. For example, the UK Ministry of Defence promotes a policy whereby “The peacekeeper fulfils a mandate with the strategic consent of the main warring parties, allowing a degree of freedom to fulfil its task in an impartial manner, while a sustainable peace settlement is pursued.” (Ministry of Defence, 2011: 1.1). This suggests that, for military personnel deployed in such a role, assisting NGOs as part of the promotion of humanitarian values may in fact be a secondary consideration. Ultimately, the use of military force within humanitarian interventions is a political choice intended to help reshape the political landscape of the affected region or state in the post-conflict environment. With regards to the current Syrian conflict, one can argue that the divergent and conflicting political perspectives and aspirations are a factor which will undermine the potential for any real focus upon the promotion of humanitarian values. Indeed, this eventuality does little to promote the principles of humanitarianism as argued by the likes of the ICRC (2013). In effect, the possibility that military forces can conduct purely military operations, or war-phase fighting, during a humanitarian intervention undermines any utilitarian or altruistic claims made by the respective political powers. In its totality, this suggests that the aforementioned issue of political realism is both present and ongoing. Indeed, such an argument can be backed up by a policy review of the recent and ongoing Afghan conflict.

A review of UK doctrinal papers supports this paper’s contention that military operations incorporate the possibility of war fighting, as well as security duties, as a contingent factor in the preparations of any military force. Stabilisation programmes in the Afghanistan intervention occurred in an environment where the UK’s military “had the consent of the host nation government but no other warring party (Afghanistan: Taliban 2001 – present)… A military force may decide in such situations that the defeat of a specific enemy is essential to the success of the operation.” (Ministry of Defence, 2011: 1.1). Essentially, therefore, in political terms it is feasible that political intentions can undermine any altruistic argument in relation to the deployment of military forces to carry out humanitarian operations. For some, the recent ‘humanitarian’ intervention into Libya is an example of this outcome.

The recent UN-backed military intervention in Libya was mandated as a humanitarian intervention intended to provide relief and assistance (United Nations, 2011). The intervention was supposed to further the seven values of humanitarian intervention promoted by the ICRC (2013); however, one can argue that it was mainly politically motivated, since there is sufficient evidence to indicate that Gaddafi’s regime had been a long-time foe of the states which executed the intervention (the USA, UK and France) (Boulton, 2008). In promoting their intervention, the USA, UK and France had argued that a failure to intervene would result in a humanitarian crisis caused by the perpetuation of conflict. However, Kuperman (2011) argues that the resultant UN Resolution 1973 (United Nations, 2011) created conditions where the intervening military forces could operate beyond its realms. These included, for example, allowing the USA, UK and France to conduct stabilisation operations so that the authority of the Gaddafi regime could be undermined, thereby helping to bring the conflict to a swift conclusion. In layman’s terms, this meant military intervention via war fighting. Kuperman (2011) also argues that Libyan state functions were impacted, including the freezing of its financial and economic assets. It has further been argued that the intervening forces of the USA, France and the UK oversaw the deployment of private military contractors whose role was to undertake anti-Gaddafi operations aimed at overthrowing his regime (RT News, 2012). In effect, the usage of humanitarian justifications for military intervention in conflict can be defined in terms of the actions and justifications of the states whose forces have been committed to operate in those areas and regions.

In its totality, therefore, the usage of military force as an effective instrument for the promotion of humanitarian values is limited. These limitations can be found within the underlying political rationales of states that are prepared to commit forces to these operations, particularly where those states have an interest in the realisation of a particular outcome. Whilst humanitarian-led interventions have become a mainstay of the post-Cold War climate, one can argue that the promotion of the seven humanitarian values espoused by the ICRC (2013) is undermined by the intervening forces because of their ability both to flout their mandate and to conduct war-fighting operations under the guise of humanitarianism. In essence, therefore, one can argue that there are genuine limits to the ability of military forces to promote humanitarian values; however, these limitations are not factors which states consider when seeking to intervene in any conflict.

Bibliography

Bayliss, J., & Smith, S., (2001), The Globalisation of World Politics. Oxford, Oxford University Press.

Boulton, A., (2008), Memoirs of the Blair Administration: Tony’s Ten Years, London: Simon & Schuster.

Christoplos, I., Longley, C. and Slaymaker, T., (2004), The Changing Roles of Agricultural Rehabilitation: Linking Relief, Development and Support to Rural Livelihoods, available at http://odi.org.uk/wpp/publications_pdfs/Agricultural_rehabilitation.pdf, (accessed on 17/10/15).

Dagher, S., (2014), Kurds Fight Islamic State to Claim a Piece of Syria, (online), available at http://online.wsj.com/articles/kurds-fight-islamic-state-to-claim-a-piece-of-syria-1415843557, (accessed on 17/10/15).

Davidson, J., (2012), Principles of Modern American Counterinsurgency: Evolution and Debate, Washington DC: Brookings Institute.

Haaretz, (2014), Russia demands Israeli explanation of air strikes in Syria, (online), available at http://www.haaretz.com/news/diplomacy-defense/1.630584, (accessed on 20/10/15).

International Committee of the Red Cross, (2013), Humanitarian Values and Response to Crisis, (online), available at https://www.icrc.org/eng/resources/documents/misc/57jmlz.htm, (accessed on 17/10/15).

Kuperman, A., (2011), False Pretence for war in Libya, available at http://www.boston.com/bostonglobe/editorial_opinion/oped/articles/2011/04/14/false_pretense_for_war_in_libya/, (accessed on 17/10/15).

Marjanovic, M., (2011), Is Humanitarian War the Exception?, (online), available at http://mises.org/daily/5160/Is-Humanitarian-War-the-Exception, (accessed on 17/10/15).

Minear, L and Weiss, T.G., (1995), Mercy Under Fire: War and the Global Humanitarian Community, Boulder: Westview Press.

Ministry of Defence, (2011), Peacekeeping: An evolving Role for the Military, London: HMSO.

Pattison, J., (2010), Humanitarian Intervention and the Responsibility To Protect: Who Should Intervene?, Oxford: Oxford University Press.

Press TV, (2013), Hezbollah to remain in Syria: Official, (online), available at http://www.presstv.ir/detail/2014/02/10/350058/hezbollah-to-remain-in-syria-official/, (accessed on 20/10/15).

RT News, (2012), Stratfor: Blackwater helps regime Change, (online), available at http://www.rt.com/news/stratfor-syria-regime-change-063/, (accessed on 17/10/15).

Ruthven, M., (2014), The Map ISIS Hates, (online), available at http://www.nybooks.com/blogs/nyrblog/2014/jun/25/map-isis-hates/, (accessed on 20/10/15).

Seybolt, T., (2007), Humanitarian Military Intervention: The Conditions for Success and Failure, Oxford: Oxford University Press.

Time, (2015), Iran Looms Over ISIS Fight as Baghdad-Tehran Alliance Moves Into Tikrit, (online), available at http://time.com/3741427/isis-iran-iraq-tikrit/, (accessed on 20/10/15).

United Nations, (2011), Resolution 1973, (online), available at http://www.un.org/press/en/2011/sc10200.doc.htm#Resolution, (accessed on 17/10/15).

Waxman, M., (2013), Is humanitarian military intervention against international law, or are there exceptions?, (online), available at http://www.cfr.org/international-law/humanitarian-military-intervention-against-international-law-there-exceptions/p31017, (accessed on 17/10/15).

Weiss, T., (2012), Humanitarian Intervention, Cambridge: Polity Press.

Cities, poverty and inequality in the UK

This work was produced by one of our professional writers as a learning aid to help you with your studies

With reference to London, Manchester and Glasgow in the UK
Introduction

Debates on poverty and inequality have always been heated and topical. In the aftermath of the global financial crisis and the dogma of austerity, poverty and inequality received newfound attention from academic and policy circles alike. What is especially interesting, for the purposes of this essay, is to look at the bare version of austerity politics, how it has fed into existing socioeconomic privation, and how it aligns with more deep-seated politics dating back to Thatcherite economics and Voodoo economics (Harvey, 2005).

This essay will look at the UK, and specifically London, Manchester and Glasgow, to tease out themes around poverty and inequality and how they have been animated as a direct result of policy and decision-making at Westminster. By and large, poverty and inequality are multifaceted concepts and should not be seen as purely economic. They intersect with legacies and collective memories, and the relationship between cities and inequality is therefore dynamic and complicated.

This essay first turns to delineating what cities, poverty and inequality are taken to mean, and locates this discussion within a larger theoretical current and critique. The argument proposed is that poverty and inequality are, put simply, manifested to their fullest extent in global cities, as these are the immediate receptors of government policy and dialogue. Although regional cities and towns are also affected, the ‘contagion’ of policy is a lot weaker there and the relationship more obscure. To provide evidence for this argument, this essay will examine three socioeconomic phenomena that have stark implications for poverty and inequality, namely neoliberal austerity politics, a protracted housing crisis, and deindustrialisation coupled with a one-sided focus on the City. The essay concludes with a couple of policy recommendations as to how to curtail the rise of inequality in cities.

Why global cities?

As briefly mentioned in the introduction, this essay identifies and looks into global cities as opposed to the nation as a whole. This is because the latter is more abstract and generalised, relying on macroeconomic assumptions. In contrast, the former is the ‘playground’ of policy and dialogue, being its proximate receptor and locus (Musterd and Ostendorf, 2013). That is to say, global cities, in a way, symbolise what policy underlies and is about. The direct consequences that accrue allow an observer to make more credible and robust points about policy’s relationship to inequality and poverty (Sassen, 2011). For example, if this essay were to take up national inequality, measured by the Gini coefficient, the concepts would become harder to discern and the implications unclear. Much of the theoretical literature has homed in on the root causes of inequality and how this deleterious phenomenon has come about (see Atkinson, 2015). Although this essay will later touch on and attempt to trace why inequality exists and is magnified in cities, it is noteworthy that most of the research into inequality shies away from looking at the direct results it has on life in global cities.

How do we explain poverty and inequality?

Next, this essay turns to defining poverty and inequality. There is a presumption in favour of reducing these two to purely economic phenomena to be addressed by economic solutions. However, as will be examined, the case study of Glasgow is a powerful rejoinder to this conflation: it is a city with competitive economic infrastructure and results, and yet it lags behind on other crucial holistic social measurements. More broadly, poverty and inequality are, as stated, complex and multifaceted. That is why it is suggested that the Gini coefficient is a fundamentally limited and misguided measurement to marshal in this essay. What would be more relevant is to look at the likes of Amartya Sen (2005) and his work on human capabilities and how potential can be frustrated in myriad non-economic ways. For this reason, this essay cannot properly infer from London’s high economic performance that it adequately caters to the problems of inequality and poverty. Put simply, that a global city grows does not mean that the least well off are benefiting as well. By taking this comprehensive approach, this essay will discuss how complex policy has complex consequences for people’s lives and general levels of contentedness.
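For reference, a standard textbook formulation of the measure being set aside here (not one drawn from this essay’s sources) defines the Gini coefficient in terms of the Lorenz curve:

    G = A / (A + B)

where A is the area between the line of perfect equality and the Lorenz curve, and B is the area under the Lorenz curve, so that G = 0 denotes perfect income equality and G = 1 maximal inequality. The purely distributional, income-based character of the formula is precisely why this essay treats it as too narrow to capture the non-economic deprivation that Sen describes.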

The trajectory of inequality

Inequality is by no means novel. This discussion is embedded in a global debate about what gives rise and momentum to inequality, especially following the global financial crisis of 2008. In the core of the Western world, inequality has run amok in the past few decades, despite the fact that these economies have, in general, delivered modest economic growth (Piketty, 2014). This puzzling reality has been the subject of much academic debate; some scholars have suggested that inequality is not only inevitable but, in fact, beneficial, as it makes people more driven and aspirational, and more likely to celebrate and mimic such role models as Mark Zuckerberg and Warren Buffett (Lippmann et al., 2005). According to this line of argument, inequality is seen as a by-product of entrepreneurial ability and prowess.

However, it is unlikely that this line of thought captures the deep and perplexing character of inequality. To rebut the claim that inequality is a fair reflection of talent and ability, this essay contends that it is rather the result of collective, deliberate decision-making (Stiglitz, 2012). This becomes particularly evident in global cities, where the contradictions therein highlight that it must be more than just a lack of talent or luck that is holding people back on such a large scale. London, for example, boasts the City, which is undeniably the globe’s foremost financial centre, and also the Silicon Roundabout, a very promising and booming hub of entrepreneurs. Yet it also has areas like Peckham. Inquiring into the latter’s residents’ attitudes, it becomes plain that they feel disillusioned and failed by the capital of the United Kingdom (Glaeser et al., 2009). This area offers another side to London’s ‘success story’, as it tends to be host to endemic crime, destitution, childhood obesity and other negative manifestations. Therefore, to say that inequality is down to the genes you are endowed with and the aspirations you form is too simplistic a story for global cities.

Another instance in which people are adversely affected by phenomena outside of their control is the prolonged housing crisis that London is witnessing (Harford, 2014). Due to unprecedented demand and people looking to move in, house prices have been on a perpetual rise. What has enabled this rise is the power that landlords have: they can charge disproportionate amounts to tenants and can also fund their own mortgages by letting out properties (Harford, 2014). This translates very negatively for people from lower socioeconomic strata, as they lack comparable access to credit to begin with. That is why they turn to the state and the council houses which cater to them. However, the latter have also been penetrated by private landlords, leading to the perverse situation whereby council housing is owned privately and can also be overcharged. This is down to political choices regarding the right to buy these kinds of properties, but also the creation of a generally more permissive framework to buy and let property. At the same time, those at the top end of the economic spectrum have benefitted from more generous inheritance and property tax treatment, offering a glimpse into how glaring inequality can become in global cities. By way of contrast, note that Berlin has recently introduced rent controls to avoid a similar scenario (Vasagar, 2015).

It is therefore clear that people living in London have vastly different and unequal access to the most important asset of their consumption lives, namely their house, which has negative implications for their psychological wellbeing and the extent to which they can provide for their families sustainably. Big cities cannot afford to have these kinds of contradictions run within them, whereby lower strata segregate from the mainstream in their own communities and refuse to engage with political decision-making and active citizenship (Wheeler, 2005). This, in turn, exacerbates the already unsteady relationship between cities and inequality, as these groups lose the morale and incentive to engage with common goals and agendas.

Neoliberalism

In the wake of the global financial crisis, the United Kingdom’s government has made a heated case in favour of austerity politics. The government has engaged in discretionary benefit cuts and has also increased tuition fees for tertiary education, both of which disproportionately hurt the poor and therefore augment inequality. In seeing benefits reduced, a person in a big city faces profound adversity. Compounded by the housing crisis and general inflation, such a person is likely to have their livelihood eroded. Their children will also have to take out bloated student loans, and that is if they can afford to hold off working immediately after school. Recently, the UK government has engaged in a bait-and-switch policy whereby benefits to the poor were cut, supposedly counteracted by the introduction of a ‘living’ wage (O’Connor and Gordon, 2015). Again, this example demonstrates that inequality is not an inevitable result of human nature and a random distribution of talent, but is created and magnified by governments and collective communities that have bought into the austerity dogma. This has been criticised by high-profile academics such as Piketty (2014), Stiglitz (2012) and Atkinson (2015).

The seeds of inequality were perhaps planted by Thatcherite economics and a legacy of tough love towards trade unions, workers and the welfare state. Following Thatcher’s election, the government introduced a series of neoliberal reforms that placed socioeconomically vulnerable people in an even more precarious situation: stripped of participation in unions, of their jobs if they worked for a factory that closed down, and of their livelihoods as regressive taxation took its toll (Harvey, 2005). One of the most important features relevant for the purposes of this essay is deindustrialisation and how it has engendered a deep and persistent north-south divide in the UK that is difficult to address. Through a strong and remorseless focus on the service industry, which was hailed as forward-looking, efficient and innovative, the UK’s industrial base, concentrated in cities like Manchester and (less so) Glasgow, took a back seat to the City of London (The Equity Trust, 2014). The latter has been consistently nurtured with state support and policy ever since, at the expense of other sectors such as manufacturing, which used to make up the backbone of the British economy. Instead, manufacturing is now, broadly speaking, lagging behind in terms of productivity, as the latest findings of the CDI show (The Equity Trust, 2014).

[Figure: pay gaps between the highest and lowest paid across UK regions (source: The Equity Trust, 2014)]

The graph above shows pay gaps between the rich and the poor in different regions of the UK. The pay gap in London is the most glaring, even though London is by far the fastest-growing city. This is because the service industry caters mainly to the wealthy and lacks the traditionally job-creating economic multipliers of the industrial and manufacturing sectors that have suffered.

Conclusion

In conclusion, this essay first took up the ambitious task of delineating what is meant by poverty and inequality, which are inherently complicated concepts. It then attempted to come to grips with global cities and why they should be viewed as the main reference point in any policy discussion about poverty and inequality. The relationship that this essay identified is by no account static; rather, it evolves with time and with changes in government and collective dialogue. This essay has also aimed to dispel the assumed association between economic growth and reduced inequality by pointing to the examples of London and Glasgow, both of which should alert the reader to the holistic and insidious ways in which inequality and poverty work. The roots of inequality and poverty have also been briefly explored, showing that they are not novel but the result of long-lasting legacies and ingrained ways of political thinking. The essay finally turned to how important and telling the current context is, in terms of how inequality-sustaining policies have been legitimised under the guise of austerity and in the name of balanced budgets.

Bibliography

Atkinson, A.B., 2015. Inequality: What Can Be Done? Cambridge, MA: Harvard University Press.

Glaeser, E.L., Resseger, M., Tobio, K., 2009. Inequality in Cities. Journal of Regional Science 49, 617–646.

Harford, T., 2014. Why a house-price bubble means trouble. Financial Times. Available at: http://www.ft.com/cms/s/0/66189a7a-6f76-11e4-b50f-00144feabdc0.html (accessed 15/10/2015).

Harvey, D., 2005. A Brief History of Neoliberalism. Oxford: Oxford University Press.

Lippmann, S., Davis, A., Aldrich, H., 2005. Entrepreneurship and Inequality. In: Lisa Keister (Ed.) Entrepreneurship, Research in the Sociology of Work. London: Emerald Group Publishing Limited, pp. 3–31.

Musterd, S., Ostendorf, W., 2013. Urban Segregation and the Welfare State: Inequality and Exclusion in Western Cities. London: Routledge.

O’Connor, S., Gordon, S., 2015. Summer Budget: Osborne makes bold bet with “living wage.” Financial Times. Available at: http://www.ft.com/cms/s/0/611460a8-2584-11e5-9c4e-a775d2b173ca.html#axzz3oeeh6xid (accessed 15/10/2015).

Piketty, T., 2014. Capital in the 21st Century. Cambridge, MA: Harvard University Press.

Sassen, S., 2011. Cities in a World Economy. London: SAGE Publications.

Sen, A., 2005. Human rights and capabilities. Journal of Human Development 6, 151–166.

Stiglitz, J., 2012. The Price of Inequality. London: Penguin.

The Equity Trust, 2014. A Divided Britain? – Inequality Within and Between Regions. London: The Equity Trust.

Vasagar, J., 2015. Germany caps rents to tackle rise in housing costs. Financial Times. Available at: http://www.ft.com/cms/s/0/27efd4b2-c33a-11e4-ac3d-00144feab7de.html#axzz3oeeh6xid (accessed 14/10/2015).

Wheeler, C.H., 2005. Cities, Skills, and Inequality. Growth and Change 36, 329–353.

Changes in US Foreign Policy after 9/11

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

On September 20th, 2001, President George W. Bush (2001, n. pag.) gave a speech addressing the events of nine days before: “On September the 11th, enemies of freedom committed an act of war against our country. Americans have known wars, but for the past 136 years they have been wars on foreign soil, except for one Sunday in 1941.” The speech drew upon the notion that America had been attacked and laid the blame firmly at the door of terrorism, whilst interpreting it as an act of war. Although the emotive rhetoric was designed to stir support for a response, it also heralded a new era in US foreign policy. With 9/11 defined as a “foreign policy crisis” by Bolton (2008, p. 6), it was inevitable that it would elicit a response from American policymakers, but the extent to which it has changed US foreign policy has been hotly debated. As such, this essay will discuss the changes in post-9/11 US foreign policy, identifying areas that marked a departure from the policy in place prior to 9/11. It will analyse each to determine the extent to which it was a direct response to the terrorist attack and evaluate how the change impacted upon long-term foreign policy strategy. This will be done with a view to concluding that many of the changes to US foreign policy in the post-9/11 era were a response to the evolving security threat posed by terrorism and did force policy to evolve in order to accommodate strategies that address modern problems. However, those changes made an immediate impact but did little to alter the long-term course of US foreign policy.

Foreign policy arguably changed direction within days of 9/11 with the most immediate and most obvious change being the shift in focus towards terrorism. Bentley and Holland (2013) highlight that the focus had been foreign economic policy under Clinton but 9/11 produced a dramatic movement away from diplomacy and towards military solutions via the War on Terror. There was also movement away from policy that prioritised relations with the great powers of Russia and China. Earlier unilateralism had negatively impacted upon relations with both nations, thus causing deterioration that extended beyond the Cold War era hostilities and prevented effective relations between East and West (Cameron, 2006; Nadkarni, 2010). However, the American desire to create a “world-wide anti-terrorism alliance” (Nadkarni, 2010, p. 60) brought about a relative thaw between the nations and facilitated discourse in order to cater for shared security concerns. This change provides evidence of an immediate shift in US interests and this manifested in foreign policy. As such, this is an extremely important change that occurred post-9/11, especially as it emerged out of the first response to the attack and served to dictate US actions abroad for more than a decade afterwards.

The shift of focus from the great powers towards terrorism provided policy space to address security threats via the three pillars of the Bush administration’s national security policy, which had become a fundamental element of foreign policy as, for the first time since World War II, an attack on American soil brought both ostensibly dichotomous strands of policy together. The pillars were missile defence (a continuation of policy prior to 9/11), pre-emption and homeland security, the latter two embraced after 9/11 in response to it (Lebovic, 2007). Although elements of this were rooted in domestic policy, the pre-emption aspect also manifested in foreign policy because non-state terrorist groups and rogue states became inextricably linked to US foreign policy as targets to be dealt with under the new priorities outlined in the wake of the terror attacks, although this linkage emerged somewhat more gradually than the initial shift in focus to terrorism. Indeed, the Bush Doctrine marked a fundamental shift towards policy that incorporated both pre-emptive and preventative action, marking the decline of the reliance on containment and deterrence that had dictated policy from the Cold War era onwards (Jentleson, 2013; Levy, 2013). The pre-emptive strikes were indicative of a strategy that sought to defend by attacking those who posed an immediate security threat to the US and allowed policy to justify the unilateral military pursuit of specifically American interests. This suggests that 9/11 was used as an effective excuse to create foreign policy that better mirrored the ideology of the government than that in place in the months prior to the attack.

There is extensive criticism of the policy that reinforces the assumption that the government manipulated foreign policy to suit its own ends. For example, Ryan (2008, p. 49) argues that Iraq, which was labelled a rogue state, was already a focal point of foreign policy, but the events of 9/11 allowed policymakers to push their specific agenda: “Influential strategists within the Bush administration seized on the horror to gain assent from liberal Americans to move the country towards a war in Iraq that neoconservative strategists desired, but that many within the US… shunned.” Holland (2012) concurs, arguing that coercive rhetoric was used extensively in order to sell the War on Terror via culturally embedded discourse. In addition, Miles (2013, p. 110) contends that “…Bush’s placement of rogue states at the centre of America’s response to 9/11 was welcomed as an opportunity to overthrow a number of old threats and terror loving tyrannies who stood in the way of democracy and freedom.” This perspective certainly offers a credible insight into how 9/11 was manipulated in order to push foreign policy in a certain direction, and indeed one that was a continuation of what had gone before. However, the need to manipulate public opinion is indicative of the fact that foreign policy had deviated from that in place directly prior to the terrorist attack on the World Trade Centre.

US foreign policy also responded to the increased demand, following 9/11, for humanitarian assistance to aid failed states and for nation building to ensure their reconstruction. Shannon (2009) points out that the reconstruction of Afghanistan following the US invasion essentially helped to prevent the failure of the state, improved the quality of life of its people, introduced freedoms and democratic processes that were absent before, and helped prevent the state from falling under terrorist control. This was certainly a change from previous foreign policy: “Before 9/11, nation building was often caricatured as a form of idealistic altruism divorced from hard-headed foreign policy realism… In the post-9/11 era, nation-building has a hard-headed strategic rationale: to prevent weak or failing states from falling prey to terrorist groups” (Litwak, 2007, p. 313). This summary of the extent to which attitudes changed highlights the fact that a greater role in states requiring humanitarian assistance was incorporated into foreign policy out of necessity rather than ideological choice. There was a distinct need to limit terrorist activity as far as possible and this actively manifested in this element of foreign policy. As Litwak (2007) points out, humanitarian action had not been a staple element of American foreign policy by any means, and so this, more than any other element of foreign policy, does signal that a change occurred within the strategic objectives inherent in the War on Terror. However, there are criticisms of this particular change because the US is charged with failing to follow through with humanitarian aid to the extent that it should have done. For example, Johnstone and Laville (2010) suggest that the reconstruction of Afghanistan was effectively abandoned, with a failure to create institutions that would withstand future threats to freedom and democracy. This suggests that this particular area of strategy was not well thought out and did not achieve its ultimate aims. Nonetheless, the fact that it was included in US foreign policy post-9/11 suggests that there was a concerted effort to implement a multifaceted policy to tackle terrorism as a new and dangerous global strategic threat.

However, despite the fact that the analysis here points to a change of direction for US foreign policy in the wake of 9/11 that was specifically designed to tackle the causes of and security threat posed by terrorism, some critical areas of policy did not change. For example, the long term objectives of the US were still manifest within new policy but they appeared in a different form that essentially provided a response to a different threat. Leffler (2011, n. pag.) argues that 9/11:

…did not change the world or transform the long-term trajectory of US grand strategy. The United States’ quest for primacy, its desire to lead the world, its preference for an open door and free markets, its concern with military supremacy, its readiness to act unilaterally when deemed necessary, its eclectic merger of interests and values, its sense of indispensability – all these remained, and remain, unchanged.

This summary of the ultimate goals of US foreign policy draws attention to the fact that very little has changed. Although the British government supported the invasion of Iraq in the wake of 9/11, the fact that the United Nations Security Council refused to pass a resolution condoning the use of force did not prevent the launch of Operation Iraqi Freedom (Hybel, 2014). This is evidence of the readiness to act unilaterally where it serves American interests. Gaddis (2004) concurs, noting that US self-interest remained the same, with very little consideration of the long-term strategy that intervention elsewhere would require. Bolton (2008, p. 6), on the other hand, agrees that many of the changes to US foreign policy were made immediately but disagrees with the assertions of Leffler and Gaddis concerning their long-term impact. Bolton (2008, pp. 6-7) asserts that the changes have had a longer-term impact, albeit one that has diminished over time, as a result of the enduring nature of the national security policy and its evolution to accommodate the threat of terrorism in the wake of 9/11. Although this provides a dissenting voice in one respect, it demonstrates consensus on the fact that the changes in US foreign policy post-9/11 were a direct response to a new global threat but were implemented alongside existing strategic goals. In effect, the approach may have changed but the ultimate objective had not.

Conclusion

In conclusion, the analysis here has identified and discussed several changes that occurred within US foreign policy post-9/11. There can be little doubt that there was a distinct shift in focus to the need to deal with terrorism after the first attack on American soil for sixty years. Similarly, the policy content evolved to adopt a more humanitarian approach to global crises and a proactive and pre-emptive approach to potential threats. All of these changes marked a departure from what had gone before in some way. However, although the majority of changes were incorporated into foreign policy within two years and were all undoubtedly a response to the attack and its causes, there is significant evidence to suggest that such actions provided an extension of the foreign policy doctrine that had gone before. For example, although the focus of foreign policy shifted from the old Cold War objectives of containment and deterrence to terrorism, the interest policymakers took in rogue states like Iraq was simply a continuation of established ideologies of ensuring freedom and democracy. Similarly, the US administration of foreign policy changed very little in terms of its determination to act unilaterally where necessary and to lead the world in a battle against the latest threat to global security. As such, it is possible to conclude that many of the changes to US foreign policy in the post-9/11 era were a response to the evolving security threat posed by terrorism. Furthermore, it was necessary for policy to evolve in order to accommodate strategies addressing modern problems that were not as much of a priority in the late 20th century. However, whilst those changes made an immediate impact on foreign policy, they did not alter its long-term course, which remained firmly focused on the outcomes of action elsewhere in the world in relation to American interests.

Bibliography

Bentley, M. & Holland, J., (2013). Obama’s Foreign Policy: Ending the War on Terror. Abingdon: Routledge.

Bolton, M., (2008). US National Security and Foreign Policymaking After 9/11. Lanham: Rowman & Littlefield.

Bush, G., (2001). President Bush Addresses the Nation. The Washington Post. [Online] Available at: http://www.washingtonpost.com/wp-srv/nation/specials/attacked/transcripts/bushaddress_092001.html [Accessed 3 October 2015].

Cameron, F., (2006). US Foreign Policy After the Cold War. Abingdon: Routledge.

Gaddis, J., (2004). Surprise, Security and the American Experience. Cambridge, MA: Harvard University Press.

Holland, J., (2012). Selling the War on Terror: Foreign Policy Discourses After 9/11. Abingdon: Routledge.

Hybel, A., (2014). US Foreign Policy Decision-Making from Kennedy to Obama. Basingstoke: Palgrave Macmillan.

Jentleson, B., (2013). American Foreign Policy. 5th Edition. New York: W. W. Norton.

Johnstone, A. & Laville, H., (2010). The US Public and American Foreign Policy. Abingdon: Routledge.

Lebovic, J., (2007). Deterring International Terrorism and Rogue States. Abingdon: Routledge.

Leffler, M., (2011). September 11 in Retrospect: George W. Bush’s Grand Strategy Reconsidered. Foreign Affairs. [Online] Available at: https://www.foreignaffairs.com/articles/2011-08-19/september-11-retrospect [Accessed 3 October 2015].

Levy, J., (2013). Preventative War and the Bush Doctrine. In S. Renshon & P. Suedfeld eds. Understanding the Bush Doctrine: Psychology and Strategy in an Age of Terrorism. Abingdon: Routledge, pp. 175-200.

Litwak, R., (2007). Regime Change: US Strategy Through the Prism of 9/11. Baltimore: JHU Press.

Miles, A., (2013). US Foreign Policy and the Rogue State Doctrine. Abingdon: Routledge.

Nadkarni, V., (2010). Strategic Partnerships in Asia: Balancing Without Alliances. Abingdon: Routledge.

Ryan, D., (2008). 9/11 and US Foreign Policy. In M. Halliwell & C. Morley eds. American Thought and Culture in the Twenty First Century. Edinburgh: Edinburgh University Press.

Shannon, R., (2009). Playing with Principles in an Era of Securitized Aid: Negotiating Humanitarian Space in Post-9/11 Afghanistan. Progress in Development Studies. 9:1, pp. 15-36.

Software Engineering Groups Behaviour Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Factors And Issues That Influence The Behaviour Of Software Engineering Groups

Most presentations on software engineering highlight the historically high failure rates of software projects, of up to eighty percent. Failure takes the guise of budget overruns, delivery of solutions not compliant with specifications, late delivery and the like. More often than not, these failure rates are used to motivate the use of software engineering practices, the premise being that if adequate engineering practices were utilised, failure would become the exception rather than the rule. Best practices and lifecycles have been proposed and tailored to the various paradigms that the computer and information sciences throw up in rapid succession. There is extensive debate on what works and what does not, within academia and without; the consensus is that what is best depends on the problem at hand and the expertise of those working on the problem.

A few software engineering group models have been popular in the history of software development. Earlier groups tended to be hierarchical, along the lines of traditional management teams. The project manager in charge did not necessarily contribute in a non-managerial capacity; they were responsible for putting teams together, had the last word on accepting recommendations, and delegated work to team members. Later groups worked around one or more chief programmers or specialists. The specialists took charge of core components themselves and were assisted by other group members in testing, producing documentation and deployment. More recently, collegial groups have become common. Here, people with varied specialisations form groups wherein they organise themselves informally by assuming roles as needs arise.

The advantage of a particular model over the others becomes evident only in the context of specific projects. The hierarchical model is best suited to relatively large projects that are decomposable into sub-goals that can be addressed by near-independent teams. This is usually possible for software tasks that are very well defined and that need reliable, quality-controlled solutions, particularly those that are mission critical. A large project may inherently require many people working on it to complete it successfully, if it were to be deployed in multiple sites, for instance. Alternatively, a large group may be assembled to expedite delivery. In either case, structured organisation and well-defined roles facilitate coordination at a high level.

A central problem with adding people to expedite delivery, or otherwise, is that the effectiveness of a group does not scale linearly. One person joining another does not mean that they are collectively twice as productive. More importantly, the contribution of the seventh person in a seven-person group is a fraction of the contribution of the second person in a two-person group. This is due to additional overheads in communication and coordination as group size increases, and to the dilution of the tasks assigned to each individual member. As is evident, this is a problem for any group; however, in very large groups the problem is exacerbated, as the sketch below illustrates.
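A minimal sketch of this overhead, assuming that every pair of members forms one potential communication channel (the standard n(n-1)/2 approximation; the function names here are illustrative, not drawn from any source):

    def communication_channels(group_size: int) -> int:
        # Each unordered pair of members is one potential channel.
        return group_size * (group_size - 1) // 2

    for n in range(2, 8):
        new_channels = communication_channels(n) - communication_channels(n - 1)
        print(f"{n} members: {communication_channels(n)} channels, "
              f"{new_channels} added by the newest member")

Run as written, this shows that the second member adds a single channel while the seventh adds six, which is one concrete way of seeing why each marginal contributor is progressively swallowed by coordination overhead.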

In hierarchical settings, group members do not have a sense of ownership of the bigger solution, and this may be reflected in their productivity. Because decision-making powers are concentrated in particular individuals according to the hierarchy, the success of processes ultimately lies with them. A lot rides on their ability to pick the best practices and recommendations, delegate effectively and keep track of the bigger picture. In quality-controlled or mission-critical settings, there are not many alternatives to having large hierarchical groups with redundant contributors.

Primarily in non-commercial settings, a single specialist engineers a complete software solution. Invariably, the solution, being a prototype, is accessible only to other specialists. In addition, it is not designed for general consumption and is put together without going through most of the recommended processes in software engineering lifecycles. Single programmers tend to practise evolutionary programming. This involves producing a quick working solution followed by repeated reworking of the solution to make it more accessible for future review, incremental development and peer review or development; a small sketch of this style follows. If demand for such a software solution gains momentum, for either its general utility or its commercial viability, the core solution will most likely be adopted for further development by a larger software engineering group. It stands to reason that the core developer, who is most familiar with the solution, retains the last word on further technical development; other members organise themselves around this chief programmer.
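As a hypothetical illustration of evolutionary programming (the scenario and names are invented for this example, not drawn from any source), a first pass favours getting something working, and a later rework separates concerns without changing behaviour:

    # Iteration 1: quick working prototype; reading, filtering and
    # formatting are tangled together in a single function.
    def report(path):
        lines = []
        for line in open(path):
            name, qty = line.strip().split(",")
            if int(qty) > 100:
                lines.append(f"{name}: {qty}")
        return "\n".join(lines)

    # Iteration 2: the same behaviour, reworked so that each step can
    # be reviewed, tested and extended independently.
    def parse_rows(path):
        with open(path) as f:
            return [line.strip().split(",") for line in f]

    def large_orders(rows, threshold=100):
        return [(name, qty) for name, qty in rows if int(qty) > threshold]

    def format_report(orders):
        return "\n".join(f"{name}: {qty}" for name, qty in orders)

Each rework lowers the cost of the next change, which is the point of the evolutionary style.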

In general, some form of incremental development, and periodic redevelopment from scratch, of software solutions is common regardless of group model. The first incrementally developed solution tends to be the least well-engineered and is a patchwork of poorly designed and tightly coupled components. This reflects the difficulty of producing quick solutions with new tools and techniques and with inexperienced software engineers. Faced with a high immediate cost barrier to reworking solutions, incumbents from previous software development cycles spend a lot of their post-deployment time supporting and patching what they produced.

In collegial groups formed in smaller organisations or departments, software engineers assume roles as needs arise. Brainstorming may be carried out by all members and the design approved by consensus, but development may be carried out by a few individual members while the others gain feedback from end-users, keep track of competitor solutions and the like. In the initial phases of a software development lifecycle (the problem definition, feasibility study and system analysis phases), end-users of the system and independent specialists may form part of the group. During the design and implementation phases, a disjoint group of outsiders could merge with the team. The external members may then be invited back for their feedback post-implementation, during the quality assurance and maintenance phases. Generally, best practice suggests that groups should be adaptive or loosely structured during the creative phases and become more structured as the design becomes clearer.

Groups with loosely defined structures are the most flexible in adapting to changing user needs. However, the greatest risk of project cancellations and overruns is ill-defined and changing requirements. Adaptiveness, to an extent, is crucial: given that users change requirements so compulsively, lacking adaptiveness completely would make an engineering group unviable. If group size is variable, the learning curve of new entrants must be kept in mind. A project manager hiring additional developers late in the software development cycle, after missing some deadline say, must factor in delayed contributions from the newcomers as a result of the time they take to familiarise themselves with the project and the time lost in coordinating their joining the group.

After changing requirements, the next most common cause of failure is poor planning or management. If the person taking on the role of project manager has poor management or planning skills (the likelihood of which is heightened by the fact that each group member is called upon to serve in diverse capacities), projects are destined to fall over.

A number of reasonable software engineering guidelines are commonly ignored by software engineers. When programming, using descriptive names for variables is a good example. A section of program code will immediately make sense to its author, when reviewed, for a reasonably long period. However, if the code is not documented sufficiently, which includes using descriptive variable names with the correct intended audience in mind, it will take another programmer a considerable amount of time to understand what has been implemented. In the extreme, some programmers obfuscate simply because they can, or to ensure that only they will ever understand what they have written, thereby making themselves indispensable. The potential for doing a half-hearted job of writing code is obvious, in that poorly structured and poorly designed code is functionally indistinct from well-structured code and is a less demanding task, as the example below shows. If software projects were evaluated only on their functionality this would not pose a problem, but upgrades and patches require someone to review the code and add to it or repair it in the future. The long-term cost of maintaining software that is not well designed and documented may rise exponentially as older technologies are phased out and the pool of people competent to carry out repair and review shrinks. In essence, this is an instance of a quality control problem.
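A small, hypothetical illustration of the naming point (the identifiers are invented for this example): the two functions below compute the same result and are therefore functionally indistinct, but only one documents its own intent.

    # Opaque to the next reviewer, though perfectly functional.
    def f(a, b):
        return a + a * b

    # The same computation, largely documented by its names alone.
    def price_with_tax(net_price, tax_rate):
        return net_price + net_price * tax_rate

The short-term effort saved by the first version is paid back, with interest, by every future reader.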

Uncontrolled quality problems are the third most common cause of cancellations and overruns of software projects. It is convenient to group documentation along with quality control as they should be reviewed in tandem in a software development lifecycle. The first casualties of a late running project are quality control and documentation. The long-term costs of skimping on either have been illustrated by example above but there are short-term costs as well. In both evolutionary engineering common among specialist-centred groups and component engineering commonly employed by hierarchical groups, the quality of each revision or component affects the quality of subsequent revisions or combined components.

The next most common causes of failure are unrealistic or inaccurate estimates and naive adoption of emerging technologies. The blame for the former rests with both users and planners or project managers. Most engineering groups are unrealistically optimistic about the speed with which they can deliver solutions. Their estimates may be accurate for prototypes, but in actual deployment, conformance to specifications, human-computer interfaces, quality control, training and change management are all essential and all take time. Users have a poor sense of how incomplete their specifications are and far too often assume that implementers are already familiar with the environments in which they work and intend to use the system. Project managers and implementers, for their part, are drawn to emerging technologies, neglecting core competencies that are more likely to lie in established, proven technologies.

Success among software engineering groups is a function of planning and execution. The responsibility for planning falls on the project manager. A manager must draw on the best a group has to offer, appreciate software and technical concerns, facilitate communication and coordinate the group's effort. Enforcing quality standards from the beginning, by adopting design and programming guidelines for example, helps formalise expectations. A project manager with a technical background has the advantage of understanding the position of the other technical members, is likely to communicate more effectively with them and has the opportunity to lead by example. Given the emphasis on planning, it is worth noting that it can be overdone: over-engineering is not ideal engineering. It is often convenient for a single developer to take the lead on coding, while other developers and end-users concurrently test the developing solution for functionality, usability and quality. Execution in isolation is likely to result in solutions that developers are comfortable with, and even proud of, but that end-users find lacking. The various stakeholders must be simultaneously and consistently involved throughout the development cycle.

The greater the communication between specialist designers and specialist implementers, the more successful the group will be in terms of the quality and ease of use of its solutions. The technical crowd in a software engineering group tends to see the problem narrowly, in terms of simplifying their own contribution or making it more elegant. The design crowd balances this perspective by offering an alternative view, one more likely to be aligned with that of end-users and unconstrained by technical considerations. Ultimately, end-users must be given the opportunity to have their say. The solution is theirs.

Changing requirements and specifications may be an acceptable excuse, from the user's perspective, for delays in final delivery. Many projects are twenty percent complete after eighty percent of the initially estimated time; more people are brought in to expedite the process, budget overruns follow and sub-par solutions are delivered, albeit late. Given how frequently this pattern recurs, project managers should factor possible requirement changes into their estimates before commencing projects, so as to arrive at figures that are more realistic.

Call Dropping Syndrome with Mobile Routers

This work was produced by one of our professional writers as a learning aid to help you with your studies

Research Call Dropping Syndrome in a Mobile Router Connection Used in a Vehicular Environment

Abstract

With the emergence of mobile automobile internet routers over the past five years, theorists and visionaries have begun to picture a world of widespread application. From transportation infrastructure and inter-vehicular communication to mobile conferencing and business applications, the ability to access the internet in transit is an increasingly valued capability. Yet mobile phones offer internet services and cellular providers offer 3G and 4G broadband options, so why, amidst all of this integrated technology, does the mobile router become such a key component? Efficiency and performance. By leveraging the strengths of an integrated urban infrastructure and utilising multiple access points, the bandwidth and quality of service associated with mobile internet routing is rapidly increasing. Owing to the rapid rate of motion and exchange, one of the chief inefficiencies in mobile routing is handover latency, a potential lag in network resources during which packets of information are exchanged between the mobile router and the new access point.

This research will provide a broad spectrum of theory and evidence regarding opportunities for moving towards a soft handover, reducing the performance losses and network degradation associated with hard handover switching behaviour. Further, predictions will be made for the future of mobile automobile routing services, highlighting particular concerns that must be remedied in the coming years in order to enhance industry performance.

Introduction

1.1 Research Problem

As internet integration and communications convergence bear increasingly on everyday life, the exploitation of new and emergent technologies for ever more mobile applications continues. One of the most debated advances of recent years relates directly to the integration of mobile internet into automobiles. With one leading firm, Autonet Mobile, currently supplying a proprietary technology to several key automobile manufacturers, the merits of mobile routing continue to be validated through commercial value and consumer investment. From a technical standpoint, router-network communication protocol is relatively standard when a static relationship is established; however, once this relationship becomes mobile, the handover requirements imposed by changing access points can result in a breakdown in quality of service (QoS) and connection dropping behaviour. Using the NEMO basic support protocol, a mobile router is able to use Mobile IPv6 to ‘establish and maintain a session with its home agent…using bidirectional tunnelling between the mobile router and home agent to provide a path through which nodes attached to links in the mobile network can maintain connectivity with nodes not in the NEMO’. This brief description of a network architecture designed to maintain mobile consistency and reduce signal dropping behaviour is indicative of emergent technology in the mobile routing field, a capability with wide-scale applications across automobiles, trains, buses, and other ground transportation networks.

Although Autonet Mobile is the most public corporation currently working towards the development and implementation of mobile internet in automobiles, it is unlikely that such market supremacy will continue into the future. With expectations of more integrated automobile systems, particularly those related to navigation and intra-traffic vehicular communication (accident reduction schemes), academics such as Carney and Delphus are already predicting a rich, network-integrated future for mobile computing and internet applications. Considering that QoS for other communication options, including VoIP, remains of particular concern in the mobile computing community, more in-depth analysis of connection management and performance in a mobile environment is needed. The concept of mobile routers and a mobile internet connection through intra-vehicular integration is foreign to many consumers, even in this era of diverse technologies and increasingly advanced network architecture. Therefore, the fundamental value of this dissertation lies in predictive analysis of future applications and systemic evolutions of these emergent technologies. Through a comprehensive review of the existing academic evidence in this field, as well as several examples of mobile routing technologies that are either currently in production or being field tested, the following chapters will establish a rich, evidence-based perspective on technological viability, updates and next-generation versions, and the future of mobile internet routing.

1.2 Aims and Objectives

Although wireless technologies have a longstanding history in internet protocol and architecture, the complexity of handover behaviour and connectivity in mobile router service continues to challenge developers to reconsider the merits of their design. Accordingly, as 3G and 4G mobile broadband networks are expanded across metropolitan and surrounding areas, the flexibility of mobile routers and intra-vehicular internet use is increasing significantly. Simultaneously, alternative technologies including the Autonet Mobile router exploit such interconnectedness to maximise wireless performance, conducting mobile handoffs of service as vehicles pass from one cell tower to another. The scope of this investigation is based on emergent technologies and connection dropping behaviour during in-motion computing. Therefore, a variety of sources will be explored over the subsequent chapters in order to evaluate the progress made in this field, practical applications and their relative reliability, and future opportunities for redesign and reconfiguration of mobile routers. In order to govern the scope and scale of this investigative process, the following research aim has been defined:

To evaluate the emergence of wireless router technologies for automobiles, comparing the connection dropping behaviour of mobile broadband networks and tower switching protocol in order to predict the viability of future applications and technologies.

Based on the variables addressed in this particular research aim, this investigation involves three primary data streams including evidence regarding the performance of mobile broadband routers and cards, the evidence regarding the performance of hand-off-based mobile internet access routers, and the opportunities for expanding this technology beyond its currently limited scope in future applications. As this investigative process involves the analysis of a broad spectrum of empirical findings in this field, secondary academic evidence forms the theoretical foundations of the background to this mobile internet problem. In addition, empirical evidence from actual network architecture is retrieved from existing applications of these distinctive technologies. Throughout the collection and analysis of this evidence, the following research objectives will be accomplished:

To identify the underlying conditions which contribute to connection dropping behaviour in mobile internet usage

To evaluate the structure of mobile internet architecture, highlighting the benefits and limitations associated with the various technologies

To highlight theoretical and emergent applications for mobile internet connections, expanding the scope of usage beyond just web surfing whilst driving

To offer recommendations based on the optimisation of network architecture according to a purpose-oriented protocol

1.3 Research Questions

Based on the aforementioned research aims and objectives, there are several key research questions that will be answered over the following chapters:

What expectations are manifested regarding mobile internet usage in vehicles, and how is such performance evaluated?

What opportunities are there for integrating mobile internet on a broader scale for more strategic, vehicular purposes (i.e. navigation, multi-vehicle communication, etc.)?

Are there specific benefits of a mobile broadband connection over tower handover behaviour and vice versa?

What will determine the future of mobile internet and how will such variables be identified and incorporated into the network architecture?

1.4 Structure of Dissertation

This dissertation has been structured in order to progress from a more general, theoretical background to specific mobile internet routing evidence. The following is a brief explanation of the primary objectives for each of the subsequent chapters:

Chapter 2: Literature Review: Highlighting the academic precedent established in this field over the past two to three years, empirical studies and theoretical findings are presented and compared in direct relation to the research aims and objectives.

Chapter 3: Research Methodology: This chapter seeks to demonstrate the foundations of the research model and the variables considered in the definition of the analytical research methodology.

Chapter 4: Data Presentation: Findings from an empirical review of existing mobile router architecture are presented, highlighting particular conditions, standards, and performance monitoring that govern functionality and performance.

Chapter 5: Discussion and Analysis: Returning to the academic background presented in Chapter 2, the research findings are discussed in detail, offering insights into the challenges and opportunities associated with current network architecture and mobile internet protocol.

Chapter 6: Conclusions and Recommendations: In this final chapter, summative conclusions are offered based on the entirety of the collected evidence, and recommendations for future mobile internet routing solutions are provided.

Chapter 2: Literature Review

2.1 Introduction

There is a broad spectrum of academic evidence relating to mobile internet, network architecture, and operational protocol. This chapter seeks to extract the most relevant studies from this wealth of theoretical and empirical findings in order to identify the key conditions and components associated with effective and high performing mobile internet in automobiles. Further, evidence regarding connection dropping syndrome is investigated in order to highlight those deficient characteristics that continue to detract from the overall performance of these various networks. Ultimately, this chapter provides the background findings that will be compared with practical applications of mobile internet routers in vehicular scenarios in Chapter 4. This analysis is designed to not only introduce the academic arguments regarding the functional architecture of mobile routing and its widespread potential applications, but to also compare the principles and practices that have been discussed across a diverse range of technological interpretations.

2.2 The Background of Mobile Automotive Routers

In 2009, emergent technology inspired by growing social demand for internet mobility and integrated online resources in automobiles began to make its way to the market. Carney reported on an American-based firm, Autonet Mobile, which viewed the future of integrated mobile wireless as handoff-based through existing cell towers rather than driven by mobile broadband cards. In essence, this proprietary technology leverages a communications standard similar to the 3G and 4G broadband routers offered by mobile phone providers such as AT&T, Verizon and Sprint. Consumer analysis by Autonet determined that over 50% of consumers surveyed reported a desire for internet service in their cars, compared with just 16% who were interested in such technologies in the early part of the 21st century. Practical applications of mobile internet routers include direct streaming of navigation tools such as MapQuest and Google Maps to the vehicle, and benefits for business customers such as mail and file transfer capabilities or online information sourcing. Uconnect Web is the service provider which ultimately links the user through the Autonet router to the internet, offering data speeds reported as comparable to 3G technologies. By default, the broadcast range is around 150 feet from the car, differentiating the flexibility of this technology from PAN architecture.

Although the uptake of the Autonet router by such automotive producers as Chrysler and Cadillac was widely publicised, the general public reaction was not necessarily market-shifting. In fact, a leading analyst at direct competitor Ford criticised the Autonet router early in its lifecycle, suggesting that many consumers would not see value in investing in technology similar to that which they already pay for on their other mobile devices, especially when it is tied to the architecture of the vehicle. In spite of such predictions, by February of 2009 the Autonet router had received its first award, from Good Housekeeping magazine, for Very Innovative Products (VIP), a recognition directly oriented towards the new product's potential value for families in integrating multiple devices within a single wireless hub. In 2010, Delphus reported significant increases in subscriber statistics, from around 3,000 vehicles in 2009 to over 10,000 by mid-2010, the direct result of strategic partnerships with such rental car giants as Avis and continued OEM partnering with Chrysler, GM, Volkswagen, and Subaru. In spite of the more commercial value of this concept, what is most relevant to the scope of this investigation is the proprietary handover management technology that has emerged in the Autonet operating protocol. In fact, Delphus reports that, because of contractual partnering with multiple wireless telecom providers, Autonet is able to maintain consistent web streaming with minimal ‘between tower’ signal fading in urban spaces. Considering that handover processing and seamless transfer of addresses between towers is one of the technologies developed under the NEMO project previously introduced by Lorchat et al., the commercial value of such initiatives could potentially be expanded into a much more integrated traffic architecture and communication network.

In his exploratory evaluation of NEMO as a handover framework for mobile internet routing (particularly in multi-nodal vehicular applications for traffic navigation and communication), Ernst highlights particular challenges in maintaining quality of service under mobile conditions. In particular, he recognises that addresses must be topologically correct, which involves addressing specific to the access network in question, the ability to change IP subnet, and ultimately changes of location and routing directive. In order to maintain sessions and quality of service, Ernst introduces a communicative architecture based on a bi-directional tunnel between the home agent (HA) and the mobile node (MN), a connection which must remain dynamic and automatic whilst receiving bandwidth allocation from the access network. Such early work on the NEMO architecture established specific performance requirements, including permanent and uninterrupted access to the internet, the need to connect simultaneously to the internet via several access networks, and the ability to switch to the best available access technology as needed. By design, this flexible architecture offers the following predicted benefits:

Redundancy, which reduces link failures that arise in mobile environments

Ubiquity, which allows for a wide area of coverage and permanent, uninterrupted connectivity

Flexibility, which accommodates specific policies from users/applications and price-oriented competition amongst providers

Load sharing, to efficiently allocate bandwidth, limiting delays and signal losses

Load balancing

Bi-casting

The value of the NEMO protocol is that it allows for shifting points of attachment in order to achieve optimal internet connectivity. When a mobile node is on a foreign network, it obtains a local address termed the Care-of Address (CoA), which is then sent to the home agent for binding. Once the binding is complete, the HA ‘intercepts and forwards packets that arrive for the MN to the MN’ via the bi-directional tunnel to the CoA. It is this binding and re-binding of different CoAs during mobility that ultimately allows for improved QoS, restricting the number of dropped connections and maintaining persistent internet connectivity wherever cell towers can be accessed. Within this architecture, binding updates are used to notify HAs of a new CoA, whereupon the HA sends a binding acknowledgement that may be either implicit (no mobile network prefix option) or explicit (one or more mobile network prefix options). It is the underlying use of the IPv6 architecture which Moceri argues allows for more efficient tunnelling and more consistent security than IPv4, owing to IPsec, the tunnelling mechanism, and optional foreign agent usage.
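The binding mechanism just described can be pictured with a toy model. The following Python sketch is purely illustrative (the class and addresses are invented for this example and do not reproduce any NEMO implementation): a home agent records each care-of address reported by the mobile node and tunnels intercepted packets to it.

class HomeAgent:
    """Toy home agent: keeps one binding per mobile node's home address."""

    def __init__(self):
        self.bindings = {}  # home address -> current care-of address (CoA)

    def binding_update(self, home_address, care_of_address):
        # Record the MN's new location and acknowledge, as in the text.
        self.bindings[home_address] = care_of_address
        return "binding acknowledgement"

    def forward(self, packet):
        # Intercept packets addressed to the home address and tunnel them
        # to wherever the mobile node currently is.
        coa = self.bindings.get(packet["dst"])
        if coa is None:
            return None  # no binding: the packet cannot be delivered
        return {"tunnel_dst": coa, "inner": packet}

ha = HomeAgent()
ha.binding_update("mn.home.example", "coa1.visited.example")
print(ha.forward({"dst": "mn.home.example", "payload": "hello"}))
# After a handover the MN re-binds, and traffic follows it to the new CoA:
ha.binding_update("mn.home.example", "coa2.visited.example")
print(ha.forward({"dst": "mn.home.example", "payload": "hello again"}))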

2.3 Mobile Routing and Network Architecture

One of the more recent evolutions of mobile routing protocol is NEMO (Network Mobility), an architecture designed to flexibly manage one or more connections to the internet, even in motion. Following the standardisation of its protocol and architectural features by the IETF in recent years, NEMO is quickly becoming a viable means of extending internet services, diversifying online communication, and establishing a mobile link between variable nodes. In their analysis of this architecture, Lorchat et al. suggest that IPv6 was designated the best-fit solution to the network mobility problem, allowing the mobile router to change its point of attachment to the IPv6 internet infrastructure whilst maintaining all current connections transparently. The authors introduce a model-in-development application of the NEMO architecture in which a single home agent acts as a maintenance and exchange module, retaining information about the permanent addresses of mobile routers, temporary addresses, and mobile network prefixes. The primary challenge associated with intra-vehicular mobility of an internet connection in this context is that the automobile needs to perform handovers between wireless access points. Although such research is valuable from an early architectural standpoint (i.e. 2006 technology), the accessibility of wireless service provided over mobile telephony suites via 3G and 4G technology is far advanced from a point-to-point handover protocol.

In more in-depth reviews of the NEMO technology, other researchers have endeavoured to identify the key limitations and opportunities associated with particular orientations and architectural standards. Chen et al., for example, based their research on the viability of applying NEMO BSP within public transportation in order to provide mobile internet for all passengers. This research is extremely valuable for the development of effective router protocol in the future, as the authors propose that, in order to overcome the multihoming problem (i.e. the need to access multiple types of network in order to reduce downtime and connection dropping), multiple routers, each equipped with just one type of interface, could be linked to improve quality of service. For their research, the mobile router is equipped with WLAN, GPRS, and CDMA interfaces simultaneously; an inter-interface handover algorithm is proposed for the signal exchange, and performance during handover is measured and analysed. To accomplish such network architecture, the authors needed to introduce multiple CoA registration, under which bi-directional tunnels could be established for each of the three networks without having to identify one network as primary over the others. Their conclusions suggest that MIPv6 and NEMO BSP are inappropriate for ‘delay sensitive applications due to handover latency of more than 1.5s’; however, multiple interfaces and different internet service providers can offer a means of transferring traffic smoothly from one interface to another.
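A policy of the general kind Chen et al. describe can be sketched as follows. This Python fragment is a hedged illustration only; the preference order and the usability threshold are assumptions made for this example, not values from their study.

PREFERENCE = ["WLAN", "CDMA", "GPRS"]  # assumed order: fastest interface first

def choose_interface(signal_quality, minimum=0.3):
    """Return the most preferred interface whose signal is usable.

    signal_quality maps an interface name to a quality score in [0, 1];
    the 0.3 usability threshold is an illustrative assumption.
    """
    for interface in PREFERENCE:
        if signal_quality.get(interface, 0.0) >= minimum:
            return interface
    return None  # no usable interface: the connection drops

# In WLAN range, WLAN wins; driving out of range hands over to CDMA
# instead of dropping the session:
print(choose_interface({"WLAN": 0.8, "CDMA": 0.6, "GPRS": 0.5}))  # WLAN
print(choose_interface({"WLAN": 0.1, "CDMA": 0.6, "GPRS": 0.5}))  # CDMA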

2.4 Alternative Schemes and Personal Area Networks

In spite of a narrower broadcast scope, wireless personal area networks (WPANs) are increasing in popularity, basing short-range wireless communications on two distinct standards: IEEE 802.15.3 (High-Rate WPAN) and IEEE 802.15.4 (Low-Rate WPAN). Accordingly, WPANs are defined around a limited-range personal operating space (POS) that traditionally extends up to 10m in all directions around a person or object, whether stationary or in motion. LR-WPANs are typically characterised by a limited data transmission rate of between 20 kb/s and 250 kb/s, requiring only minimal battery power and providing a transfer service for specific applications including industrial and medical monitoring. Conversely, HR-WPANs offer a much higher rate of data transmission, from 11 Mb/s to 55 Mb/s, and are suitable for the transmission of real-time video or audio, providing the foundation for more interactive gaming technologies. In HR-WPAN protocol, the formation, called a piconet, requires a single node to assume the role of Piconet Coordinator (PNC), designed to synchronise other piconet nodes, support QoS, and manage nodal power control and channel access control mechanisms. Node functionality in the piconet architecture is defined as follows, with a brief illustrative sketch after the list:

Independent Piconet: Stand-alone HRWPAN with a single network coordinator and one or more network nodes. Network coordinator transmits periodic beacon frames which other network nodes use to synchronise and communicate with network coordinator.

Parent Piconet: HRWPAN that controls functionality of one or more piconets. Manages communication of network nodes and controls operations of one or more dependent network coordinators.

Dependent Piconet: Involves a ‘child piconet’ which is created by a node from a parent piconet to extend network coverage and/or to provide computational and memory resources to the parent.
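As promised above, a minimal Python sketch of these roles follows; the structure and names are assumptions made for illustration, not the IEEE 802.15.3 specification.

from dataclasses import dataclass, field

@dataclass
class Piconet:
    coordinator: str               # the node acting as PNC
    nodes: list = field(default_factory=list)
    parent: object = None          # set only for dependent (child) piconets

    def is_dependent(self):
        return self.parent is not None

# An independent piconet, and a child piconet created by one of its nodes
# to extend coverage, as in the dependent case above:
parent = Piconet(coordinator="pnc-0", nodes=["n1", "n2"])
child = Piconet(coordinator="n2", nodes=["n3"], parent=parent)
print(child.is_dependent())  # True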

The value of the PAN architecture lies in its high mobility and innate flexibility, allowing a single device to operate as a mobile router providing internet access to multiple devices. Moceri predicts that by integrating NEMO protocol into PAN network architecture, it is possible to use a particular device, such as a mobile phone, to provide continuous access to a variety of other devices. Ultimately, the future of this technology is directly linked to the inherent efficiencies of network operations and architecture. Ali and Mouftah reiterate that in order to maximise PAN uptake in the future, a variety of protocol-based concerns must be remedied and transmissions must become increasingly efficient. One instance of inefficiency identified by their empirical analysis is a threshold value for the exchange of packets which, once exceeded, results in an accelerated rate of rejection. This is a serious concern that must be addressed in the design and development of the PAN standard.

2.5 Summary

This chapter has introduced the background concepts associated with mobile wireless internet in modern automobiles. In spite of the fact that the market is currently dominated by one strong, integrated firm, it is evident that over the long term, opportunities for competition and alliance among other providers and service agencies are increasing. Consumers continue to demand additional connectivity and a higher standard of internet access. Unprecedented potential for redefining the future of internet mobility is manifesting itself throughout this industry, and as leading agencies such as the IETF continue to expand their investigations, expectations of advancement are high. Ultimately, one of the first challenges that must be addressed within this field is handover technology, the area of mobile internet responsible for the majority of performance losses. By focusing on these key transitional periods in the access process, the opportunity for systemic optimisation will be greatly enhanced. The following chapter will provide background on the research methods employed in the analysis and discussion of practical handover problems in this field.

Chapter 3: Research Methodology

3.1 Introduction

This chapter presents the research methods that were employed in the collection and analysis of evidence regarding the viability of mobile routing technologies across intra-vehicular applications. Focusing on an academic precedence in this field as well as guidance from theorists focusing specifically on data collection methods and analytical techniques, background is offered to validate the methodological decisions made over the course of this process. Ultimately, specific evidence regarding research architecture and the various components integrated into the research process will be addressed, as well as particular, strategic and incidental limitations affiliated with the focus of this study and a multi-stream analysis of complex data.

3.2 Research Methods

The majority of research in this field focuses on case study evidence in which network architecture, internet protocol, and various limitations and opportunities are investigated through worked examples. Chen et al., for example, utilised three different mobile routers in order to investigate handover behaviour and network performance in a mobile vehicular network. Such experimental data serves to validate best-fit programming and architectural features, measuring handover time for packets of information across different conditions, including between GPRS and CDMA and between mobile routers in NEMO BSP. Although the value of such analysis was recognised early in this research process, the focus here is on differentiating between hard and soft handover architecture, a condition that can be evaluated within the context of existing technology. The experimental research method was therefore judged to prescribe too wide a scope for this study and was eliminated from the available options.

Other academics have leveraged the past theories and studies of empiricists in order to conceptualise the foundations of a future defined by mobile vehicular internet connections. Gerla and Kleinrock, for example, explored a variety of concepts in inter-vehicle communications (IVCs) and their applications in hypothetical transportation system architecture. Such research involved content analysis of past studies, in which empirical findings and theories are cited as a means of predicting future adaptation and adjustment within the global architecture. Based on the research model presented in that study, it was determined that a comprehensive review of leading theories and findings in this field, directly linked to the aims and objectives of this research, would be a valuable methodology.

Based on the review of past academic methodologies, a comprehensive content analysis of recent findings from empiricists and academics in this field was determined to provide a best fit research methodology. Krippendorff argues that analytical constructs ‘ensure that an analysis of given texts models the texts’ context of use’, limiting any violations regarding what is known about the conditions surrounding the text. Due to the complexity and technological variability of this topic, it was important to restrict interpretation of the findings and academic perspectives to their relative context, the foundations of which were ultimately defined early in the reports. From the application of mobile routing in pedestrian circles to vehicular mobile routing for public transportation purposes, the context was determined to be a driving factor in the protocol and architecture chosen for handover schemes in mobile internet connections. A total of six unique studies were identified as directly relevant to the investigation of soft handover technology and applications, and the general findings from these studies were then extracted and integrated into the following chapter. This data is directly relevant in predicting evolution in this industry and detailing opportunities for integrating soft handover technologies in order to optimise system performance in the future.

3.3 Ethical Concerns and Limitations

The evidence presented in the content analysis was all extracted from journal publications that are widely available to the public through multiple databases and online retrieval sites. It was therefore determined that this research method raised no ethical concerns relating to the data. There were, however, limitations imposed on the scope of the studies reviewed, in order to ensure that the focal point of these analyses was directly oriented towards handover protocol and mobile routing architecture. The imposition of such limitations proved valuable because it allowed the research to focus on the specific conditions, outcomes, and opportunities that will be most relevant to future developments in this field.

3.4 Summary

This chapter has presented the research methods that were employed in the collection and analysis of secondary evidence regarding this widely debated topic. Recognising that inconsistencies in the review of one or two studies could result in innate research bias, six different studies were chosen from varying areas of focus in mobile routing technology. The findings are presented and discussed in direct relation to their independent context, with the exception of a few correlations that were drawn in order to link concepts and industry standards. The following chapter will present the findings from this content analysis in detail.

Chapter 4: Data Presentation

4.1 Introduction

This chapter presents a broad spectrum of academic theories, evidence, and predictions regarding the evolution of the mobile internet architecture. Whilst oriented towards the application of this technology in modern automobiles, the findings from a review of leading theorists in this field have demonstrated that the concept of handover management and strategic redefinition in mobile networks transcends the limited scope of this problem. Therefore, although current routing systems available in the marketplace may integrate different technologies or architecture than those discussed here, the focus of this research is ultimately on the evolution of the mobile handover between access points from a hard, delay-limited process to a soft, dynamic and integrated process.

4.2 The Current Problem

The modern consumer demands immediacy in all aspects of life, from food procurement to entertainment to communication. As the heterogeneous architecture of an integrated, internet-oriented society continues to shape product choices and consumer values, functional, high-performing vehicular routers have quickly been adopted by several leading automotive producers in the past several years. Labiod et al. define a mobile router as ‘a mobile node which can change its point of attachment to the internet and then its IP address’. Similarly, mobile networks involve a ‘set of hosts that move collectively as a unit’, leveraging a mobile router to perform gateway tasks and prov

Semiotic Framework Method at CSA

This work was produced by one of our professional writers as a learning aid to help you with your studies

Children Support Agency Case Study
Introduction

The use of the Semiotic Framework method, and its potential benefit to system developers, is examined against the case study information supplied regarding the Children Support Agency (CSA).

Analysis of Semiotic Framework

The framework as described by Kangassalo et al (1995) (1) refers to the work of Stamper (1987) (2) as it applies to information systems, and distinguishes four levels of properties: empirics, syntactics, semantics, and pragmatics. This is likened to a semiotic ladder running from the physical world to the social (business) world.

The semiotic framework consists of two main components: the human information functions and the IT platform considerations. Each is split into three sub-components.

Social World, developer activities would be:

To determine how best to match the negative responses of some staff to new technology with the high expectations of others, by designing a system which takes account of both.

To ensure that legal aspects, such as compliance with the Data Protection Act (DPA) (3), are addressed.

To ensure contractual information is protected in transmission.

To meet the cultural standards held by those who work in an organisation whose purpose is to support disadvantaged young people.

Issues are:

Lack of computer literacy among some CSA staff.

The CSA's status as a charity, which will probably restrict the funding available for the system.

Protectiveness voiced over financial data versus an apparent lack of concern about the personal data of vulnerable young people.

The wish to accommodate IT training for young people, without regard for the possibility that access may give any with anti-social tendencies the opportunity to affect the overall operation of the system.

The lack of realisation that today's young people in the age range 12 to 24, whether or not from a deprived or difficult family background, may be conversant with the use of computers.

_________________________
1. Kangassalo et al, (1995), p 358.
2. Stamper et al, (1987), p 43-78.
3. Data Protection Act 1998.

Pragmatics, developer activities would be:

To attempt to resolve the conflicting attitudes expressed in conversation about the value of the system, and to consider capital and revenue funding for the new system.

Issues are:

To determine how the system would be supported, and who would be responsible for that support.

Semantics, developer activities would be:

To model, in a machine-independent manner, how the syntactic structures, which are by nature technical concerns, are matched to the semantics, which concern the real world.

Issues are:

Reconciling security concerns, which are people-related, with system issues, which are software-dependent.

Syntactics, developer activities would be:

To formalise the documentation of the system specification and outline the programming requirements. This is the bridge between the conceptual level and the formal rules governing system development.

Issues are:

The documentation may only be understood by the IT people who create the system.

Empirics, developer activities would be:

To estimate the number of data fields required, their volume, the speed with which they must be transmitted, and the overall performance as perceived by the user.

Issues are:

Limited information available, combined with inability of potential users to express these attributes.

Physical World developer activities would be:

To analyse existing systems, networks, hardware and software, and to estimate storage and data retention requirements, the physical condition of the room housing system equipment and communications, power supply, entrance restrictions to sensitive areas, policy on removal of media from buildings, printout handling, and access by young people to IT equipment.

Issues are:

Replacement of existing communication links, introduction of encrypted traffic, offsite storage of backups, disaster recovery, software licences, fire detection and suppression, volumes of data transmitted and stored, and the separation of young people's IT equipment.

System requirements specification

Hass et al (2007) (4) explain requirements analysis and specification as the activities involved in assessing information gathered from the organisation regarding the business need and the scope of the desired solution. The specification is the representation of the requirements in the form of diagrams and structured text documents.

Tan (2005) (5) describes the use of a rich picture as ‘a structural’ as opposed to a ‘pictorial’ description. It allows the practitioners to use any form of familiar symbols to depict activities and events, plus taking into consideration, conflicting views.

A use case is defined (Seffah et al 1999) (6) as a simplified, abstract, generalised use case that captures the intentions of a user in a technology- and implementation-independent manner. Use case modelling is today one of the most widely used software engineering techniques for specifying user requirements. Dittrich et al (2002) (7) suggest a new approach to development they term ‘systems development as networking’. They go on to suggest that the key question to ask is ‘How do systems developers recruit and mobilise enough allies to forge a network that will bring out and support the use of the system’.

Unified Modelling Language (UML) is described by Arrington and Rayhan (2003) (8) as a method employing use case and activity diagrams for requirements gathering. They state that the use case diagram serves as a view of the overall use of the system, allowing both developers and stakeholders to navigate the documented requirements.

_________________________
4. Hass et al, (2007), p 4.
5. Tan (2005), p 67.
6. Seffah et al (1999), p 21.
7. Dittrich et al, (2002), p 311.
8. Arrington and Rayhan, (2003), p 28.

Rich Picture

[The rich picture itself is not reproduced here; it depicted the people and activities involved, contrasting the current system with the future system.]

Use Case Diagram

See Appendix A

Primary Scenario

The likely outcome when the project specification is delivered is that the funding body will agree to the bid, but subject to some changes, which will reduce the overall cost.

This will involve a degree of compromise in the design of the new system. Suggestions may be made to re-enter the Excel data and to delay the phasing out of the financial system.

This would mean a phased project with an all-encompassing solution left to a later stage.

The impact may be additional effort on the part of CSA staff.

The system needs to be delivered in phases, with core functionality first. The successful delivery of core components will assist acceptance.

A key component is the security of information stored and transmitted, as much of it is of a sensitive, personal nature. The protection of this information will require conformance with the requirements of the DPA (3).

Due to the number of area offices, each with few staff, the data repository will need to be centralised, probably at HQ. This simplifies backups, which should comprise a weekly full backup stored offsite and daily incremental backups, with the most recent day's backup stored on site.
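To illustrate the proposed regime, the short Python sketch below works out which backup sets a restore would need, assuming the weekly full backup is taken on Sundays; the scheduling details are an assumption for illustration only.

from datetime import date, timedelta

def restore_chain(restore_day):
    """Backups needed to restore to the end of restore_day, assuming a
    full backup every Sunday and an incremental on every other day."""
    days_since_full = (restore_day.weekday() + 1) % 7  # Monday is 0, Sunday is 6
    last_full = restore_day - timedelta(days=days_since_full)
    incrementals = [last_full + timedelta(days=i)
                    for i in range(1, days_since_full + 1)]
    return last_full, incrementals

full, incrementals = restore_chain(date(2024, 5, 15))  # a Wednesday
print(full)                # 2024-05-12, the preceding Sunday's full backup
print(len(incrementals))   # 3 daily incrementals (Mon, Tue, Wed)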

Communications between HQ and the branches need to be encrypted, and e-mail will require protected Internet access.

Anti-virus, anti-spyware, anti-phishing and spam filtering software will be required, and a firewall introduced between the Internet-facing component and the main system.

Strict validation of field input will be required to prevent erroneous numbers or characters.

Menus will be restricted so that selected functions are available to some users and denied to others, while Admin-level (privileged) accounts will be able to access all menus.
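A minimal sketch of these two requirements, strict field validation and role-restricted menus, might look as follows in Python; the field format, role names and menu names are hypothetical, chosen only to illustrate the mechanism.

import re

def valid_phone(value):
    """Accept only digits and spaces (optionally a leading +); the exact
    format rule here is a hypothetical example of strict field input."""
    return re.fullmatch(r"\+?[\d ]{7,15}", value) is not None

# Hypothetical role-to-menu mapping; Admin sees everything.
MENUS_BY_ROLE = {
    "admin": {"clients", "finance", "reports", "user_admin"},
    "case_worker": {"clients", "reports"},
    "volunteer": {"reports"},
}

def menu_allowed(role, menu):
    return menu in MENUS_BY_ROLE.get(role, set())

print(valid_phone("0131 496 0123"))          # True
print(valid_phone("0131-496-O123"))          # False: rejected characters
print(menu_allowed("volunteer", "finance"))  # False: menu denied
print(menu_allowed("admin", "finance"))      # True: privileged access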

The training for IT clients will need to be on a separate network segment from the main systems.

Compatibility between the existing financial system and the new system will need to be established, and the system will require the capability to import Excel data.

The system will be required to replace the functionality currently provided by the Excel spreadsheets.

Questions
Developer questions to CSA staff:

How much funding is available for the proposed system and who are the stakeholders?

What facilities for computer systems exist at HQ: power, space, fire suppression, telecoms, operating staff, storage required, and records retained?

Who will support the new system when delivered?

What configuration does the finance system have: hardware, operating system, application software, network links, storage, number of users, support?

Will staff time be available for training?

Will only CSA staff use the new system and will they use it from home?

Will there be allocation of CSA staff for user acceptance?

Discussion of requirements analysis tools

The usefulness of the semiotic framework is that it offers the system developer an insight into the attitudes and feelings of the people who will use the proposed system. This aids the developer, in that he or she is more likely to pay attention to the human-computer interface (HCI) aspects of the system. Properly delivered, this should make the new system easier to use and, consequently, better received than might otherwise be the case. A key message is that core aspects should be delivered first, rather than the full functionality required, which may win more converts. Also revealed by the use of the Semiotic Framework was the attitude of some of the staff, who see the new system as superfluous to their ‘real’ work and consequently want no contact with it because they are too ‘busy’. This helps the developer, as it brings home the need to employ techniques that make the system simple to use and not forbidding in the error messages it may produce in response to inexperienced use.

What the round of interviews in the case study revealed was a set of conflicting attitudes among CSA staff. A key example was the mention of the need to protect financial information, while the requirement to protect the personal data of CSA clients, some of whom may have criminal records, went unmentioned. Given that failure to protect this type of information could do the individual more harm than any help they may receive from the CSA, this is cause for concern, and seems to indicate that some CSA staff have lost sight of the organisation's mission.

The interview process behind the case study report failed to uncover vital information that any system developer would require to produce a workable system. Basic items were missed: for example, there is no information on the number of users, estimates of the amount of data to be stored, how long it is to be retained, or what kinds of systems are currently in use. The availability of capital and revenue funding was not established, and it may well be that funding will depend on the proposed design's capital costs and running costs.

The use of the rich picture and use case diagram illuminates the overall view of the required system, allowing the developer and the recipients of the system to see the whole picture and gain a better understanding of the likely finished product. It also clarifies the dependencies and collaboration required in a pictorial form, which makes the ‘big picture’ easier to understand.

The importance of the Semiotic Framework is that it helps shed light on areas which the developer, using traditional systems development methodologies, might neglect. It concentrates the mind on the human-computer interface required, and influences the design attributes which need to be built in to gain user acceptance. Taking the step-wise approach down through the levels brings home to the systems developer the need to start with the social needs, which focus on the human aspirations (or otherwise) towards the proposed system. Working through the Pragmatics is very revealing of the conflicting attitudes of the potential users in conversation, and should lead the developer to make compromises between technical elegance in the design and obtaining a favourable reaction from at least a majority of the eventual users. The scope of the system to be developed was not revealed during the case study, which impedes the developer's ability to estimate the size and nature of the hardware and software required.

The Syntactics level assists the design in that it forces concentration on the logical handling of data input, with the system's response to incorrect entry handled not with abrupt error messages but with friendlier advice messages and suggestions for data re-entry. This tracks back to the importance of human reaction learnt from the Social World level. The choice of software should be influenced by the Pragmatics level, in that it should reflect the fact that the CSA is a charity: both hardware and applications should be in the affordable range for an institution dependent upon charitable funding.

The Empirics portion of the framework should include the estimation of required system performance, speed of telecommunications, volume of data to be stored, and response times of the system. In the CSA case study there is no information which can be used to project such requirements, so the developer would be required to utilise an educated guess, based only on the existing finance system, which could be measured, or practical experience. Some of the required information may be gathered by contact with whichever vendor delivered the existing finance system. The framework also draws attention to peripheral items, such as the Excel spreadsheet, which may well contain valuable data, not subject to strict input criteria, and possibly not backed up.

The Physical World portion of the framework focuses the developer's mind on what will be acceptable to users in terms of speed of response, the time and effort potentially saved, and the information-reporting capabilities of the system. It emphasises that there need to be demonstrable benefits in the way of management information, and therefore capability to respond, which would otherwise have been unavailable. From a system developer's point of view, this is probably the section he or she would feel most comfortable with, as it consists of tangibles which can be translated into MIPS, baud rates, gigabytes, and other terms with which IT developers are expected to be completely conversant.

Probably the most difficult aspect of the framework for the developer is the Semantics level.

The reason for this is that it tends towards the abstract, and system developers as a breed operate mostly in a practical, exact, measurable fashion. The developer acts as a translator between the business requirements expressed by the stakeholders and eventual users, and the technical people who deliver the code, hardware and communications to realise the stated needs. The developer has to perform a balancing act between sometimes conflicting requirements and technical possibilities. This requires the ability to converse with, and understand, both sets of participants in the overall project to deliver the required system. The use of the Semiotic Framework leads the developer to address these issues and to attempt to develop a clear understanding of the CSA's business activities, as opposed to trying to force-fit them into a prejudged idea of the system.

The developer may reflect that the application of the Semiotic Framework forces undue attention on the people-related aspects of systems engineering, to the detriment of a design which embodies good technical practice and the protective measures necessary to comply with any legal obligations. Against this, the developer's pursuit of elegance and efficiency in design may be meaningless to the users of the system, whose main concerns are help in capturing information, ease of retrieval and the management information the system can produce: in short, how it can improve the users' work practices and make life easier for them.

References

Arrington, C.T., Rayhan, S.H., (2003), Enterprise Java with UML, Second Edition, Wiley Publishing, Inc., Indiana, USA, p 28.

Clarke, S., Coakes, E., et al, (2003), Socio-Technical and Human Cognition Elements of Information Systems, Idea Group Inc, p 8.

The Data Protection Act 1998.
Available from: http://www.ico.gov.uk/what_we_cover/data_protection.aspx

Hass, K.B., et al, (2007), Getting it Right, Management Concepts, p 4.

Kangassalo, H., et al, (1995), Information Modelling and Knowledge Bases, IOS Press, p 358.

Seffah, A., et al, (2005), Human-Centered Software Engineering: Integrating Usability in the Software Development Lifecycle, Springer, p 21.

Stamper, R., et al, (1987), Critical Issues in Information Systems Research, Wiley, Chichester, pp 47-78.

Tan, J.K., (2005), E-Health Care Information Systems, John Wiley and Sons, p 67.

Tipton, H.F., Krause, M., (2007), Information Security Management Handbook, 6th Edition, CRC Press.