Example International Relations Essay

What are the main differences between ‘classical realism’ and ‘neo-realism’?
Introduction

Realism has been a foremost theory within international relations for over six decades. Its contemporary construction is attributed to Hans Morgenthau and his work in the late 1940s. Morgenthau drew on earlier scholars and strategists, including the Ancient Greek historian Thucydides, Machiavelli, Hobbes and his notion of the anarchic state, and the 1939 work of E.H. Carr. Realism became the primary theory as the discipline of International Relations blossomed, informing political practice founded on its philosophies, such as Realpolitik. As International Relations expanded as a discipline with Realism at its centre, the theory was itself reformed. Kenneth Waltz became the father of Neo-Realism in the same way Morgenthau had done with Realism thirty years earlier. This resulted in a schism in Realist theory between classical Realism and structural (neo-)Realism.

The purpose of this essay is to investigate this split and to distinguish the major differences between the two Realist strands. These theories are vast bodies of work that have been considered by the brightest minds of the discipline for several decades; the salient features discussed in this text offer only a glimpse into their philosophies. The comparison is split into two parts: the first examines the theoretical base and highlights the noticeable distinctions, while the second conceptualises these points in a practical sense, attaching them to historical events drawn predominantly from the twentieth century.

Theoretical

Morgenthau’s key principles of Realism treat states as individuals, each a ‘unified actor.’ One state represents itself, and these states are the primary units of international relations. Internal politics and contradictions are irrelevant as states pursue interests defined by power. Power is a further key component of Morgenthau’s paradigm; he believed it central to human nature and therefore to state actors. Morgenthau considered human nature corrupt, dictated by selfishness and ego, resulting in a dangerous world constructed by egotistical, greedy actors. Realism therefore possesses at its core a very pessimistic outlook of constant threat and danger; logically, it submits as one of its fundamental considerations that state actors are driven by survival and by the need for greater dominance and power, so as to create a favourable balance of power and reduce the actor’s potential to diminish (Gellman, 1988). Realists consider these concerns to constitute the national interest, trumping any other consideration. Self-help becomes a necessity, and reliance on or trust of other actors is foolish; as Machiavelli describes, “today’s friend is tomorrow’s enemy” (Morgenthau, 1948).

Realism’s success and prominence in international relations naturally exposed it to a series of critiques. Authors and scholars disagreed with its ideological basis and often advocated alternative theories. These included a Liberal outlook that promotes the importance of democracy and free trade, while Marxists believed international affairs could be understood as a class struggle between capital and labour. Other critiques derided Realism’s lack of morality and collectivism, and its simplicity. Neo-Realism, despite retaining several of the basic features of classical Realism, including the notions that states are the primary unitary actors and that power is dominant, also provided criticism of the classic paradigm. Structural Realism directed attention to the structural characteristics of an international system of states rather than to its components (Evans and Newham, 1998). Kenneth Waltz detached from Morgenthau’s classic Realism, suggesting it to be too ‘reductionist’. He argued that international politics can be thought of as a system with a precisely defined structure; Realism, in his view, is unable to conceptualise the international system in this way owing to its various limitations, essentially its behavioural methodology (Waltz, 1979). Neo-Realism considers the traditional strand incapable of explaining behaviour at a level above the nation state. Waltz is described as offering a defensive version of Realism, while John Mearsheimer promotes an offensive version, suggesting Waltz’s analysis fails to chart the aggression that exists in international relations; however, the two are often considered as one under the label of neo- or structural Realism (Mearsheimer, 2013).

The idea that international politics can be understood as a system, with an exact construct and a distinct structure, is both the starting point for Neo-Realist theory and its point of departure from traditional Realism. The fundamental concern for Neo-Realists is why states exhibit similar foreign policy behaviour despite their opposing political systems and contrasting ideologies. The Cold War brought two opposing superpowers that, although socially and politically opposite, behaved in a similar manner and were indistinguishable in their pursuit of military power and influence. Realism in Waltz’s view was severely limited, as were other classic approaches to international relations. Neo-Realism is designed as a re-examination, a second-tier explanation that fills the gaps the classic theories neglected. For example, traditional Realists remain adamant that actors are individuals in international affairs, referencing the Hobbesian notion that two entities are unable to enjoy the same thing equally and are consequently destined to become enemies. Neo-Realists, by contrast, consider that relative and absolute gains are important and may be attained through cooperation within international institutions (Waltz, 1979).

Practical

The salient theoretical differences exhibited in the first section will be reinforced in this second section by applying the theory to practical situations, in order to enhance understanding of the degree of separation. As discussed above, traditional Realists consider that the foundation of international affairs is war, perpetrated by states. A Realist doctrine is exhibited by the actions and musings of Richard Nixon and Henry Kissinger, both during Nixon’s presidency and through Kissinger’s influence on Nixon’s successor, Gerald Ford. Within the theatre of the Cold War they attempted to maximise American power in order to safeguard American security against fellow actors. Incursions in Vietnam and Korea were designed, at a basic level, to preserve America’s position as the primary superpower and to increase American dominance. Nixon’s presidency was also associated with his administration’s dialogue with China and its keenness to exploit the Sino-Soviet split in order to tip the balance of power in America’s favour, all illustrating a classic Realist mentality of international relations: that it is constructed entirely of state interactions and the grasp for power (Nye, 2007).

Another example that depicts this mentality is Thucydides’ work on the Peloponnesian War, an example often utilised by traditional Realists. Thucydides expresses an unrelenting Athenian desire to pursue self-interest, achieved through the use of force and hard power. He famously wrote, “The strong do what they have the power to do and the weak accept what they have to accept” (Thucydides, 1972, p402). Thucydides’ sentiments illustrate the Realist notion of human nature being motivated primarily by power, a motivation mirrored in subsequent wars throughout human history. Colin Gray, a modern scholar, concurs with the Realist outlook, suggesting an inherent human characteristic that still drives states in the same way it did in 400 B.C. (Gray, 2009).

Neo-Realists tend to distance themselves from this notion of a corruptible human nature. They blame the outbreak of the Second World War not on innate human corruptibility, but on the failure to achieve a recognised international system. They disagree with the Realist logic that the primary reason for the Second World War was Hitler’s lust to impose his power and influence across Europe. In their estimation, the disorder produced by the Treaty of Versailles was principal in throwing the world back into war. Its adoption at the behest of the French, British and American states provided the opportunity and the catalyst for the Nazi Party to flourish. Resentment in Germany of the allied powers, coupled with a nation unable to recover because of this ‘diktat’, rendered the German economy and military perpetually weak, all contributing to Hitler’s ability to seize power and consequently produce the conditions for a world war. In Neo-Realist estimations, the world was failed by the lack of a substantial international system (Jervis, 1994).

The response classic Realists provide to Neo-Realists is that their reworked form of the theory is simply presented in a more structural and scientific way, while its core maintains the original doctrines offered by traditional Realism. Neo-Realists do not deny that their ideology is extremely similar, maintaining that it improves on the original theory by offering a more structured and formulated paradigm; Realists argue, however, that those very alterations, including the structural formations, are what inhibit the new theory. Richard Ashley is one author who concurs with these sentiments, stating that traditional Realism provides a more advanced concept of analysis (Ashley, 1984). For example, even if the Treaty of Versailles did impose bleak conditions on Germany that incited the Nazis’ upsurge, the fundamental lust for power Hitler exhibited in the extreme was still the predominant cause of the Second World War, regardless of structural factors. This analysis echoes Colin Gray’s opinion that the characteristics exhibited in the Peloponnesian War remain relevant in the twentieth and twenty-first centuries, and illustrates Realism’s continued relevance.

A further crucial difference between the two strands is the role of political belief or governance. Classic Realism has always emphasised this consideration: Hitler, Mussolini, Franco and Hirohito all presided over what were classed as undemocratic systems of governance. Stalin, with a similar totalitarian system, had initially signed a pact with Hitler, and it was only the latter’s covetousness for supremacy that scuppered that particular alliance, illustrating the pessimistic nature of traditional Realism and its inability to trust other actors. Conversely, Neo-Realists, led by Waltz, concluded that there is no “differentiation of function between different units, i.e. all states perform roughly the same role” (Halliday, 1994). Neo-Realism emerged at a time when the system had altered from that upon which classic Realism was founded, a pre-war world of several great powers. The Cold War heralded a bipolar system, dominated by nuclear weapons, rendering differing ideologies and political regimes irrelevant; it was the system that prevailed. Furthermore, America propped up highly undemocratic regimes throughout the Cold War in Asia, South America and the Middle East, suggesting that the classic Realist argument about systems of governance is incomplete (Mearsheimer, 2013).

Traditional Realism witnessed a degree of resurgence post-9/11; the event itself and the subsequent fallout were deemed a textbook case of classic Realism. Actors had to employ self-help and act unilaterally to stop attacks and an assault on the state’s survival. 9/11 provided a real illustration of the influence non-state actors can have on international relations. Although Neo-Realism maintains the classic theory’s emphasis on state primacy, it does acknowledge non-state actors as relevant in the international system. Such additional actors, however, must adapt to the actions of states, as Waltz suggests: “When the crunch comes, states remake the rules by which other actors operate.” (Waltz, 1979, p94) Furthermore, America’s democratic crusade, dubbed ‘the war on terror’, was viewed as traditional Realism in action, echoing Morgenthau’s consideration of autocracy versus democracy. However, Neo-Realists point to American support for distinctly undemocratic states, such as its unwavering backing of Saudi Arabia, as evidence of the system still triumphing over the state and its form of governance. The actions of the US tie in with Mearsheimer’s offensive Realist outlook of seeking hegemony: “great powers recognize that the best way to ensure their security is to achieve hegemony now, thus eliminating any possibility of a challenge by another great power. Only a misguided state would pass up an opportunity to be the hegemon in the system because it thought it already had sufficient power to survive.” (Mearsheimer, 2001)

Conclusion

In conclusion, whilst both strands of Realism remain constant in key areas such as the anarchic state, unitary actors and the importance of power, Neo-Realism presents a shift away from the traditional theory, offering a tangible alternative to corruptible human nature as the root cause of conflict, as exemplified aptly by the debate on the outbreak of the Second World War. The crucial point of departure that Neo-Realism provides, however, is the importance given to the international system over the state, claiming that traditional Realism is inhibited by its methodology and fails to explain the behaviour of any entity above the nation state. Neo-Realism allows for co-operation among states at a higher level than Realism permits, providing an opportunity to achieve absolute and relative gains. The concept flourished during the Cold War, rejecting Morgenthau’s analysis of systems of governance and suggesting that states behave in the same way regardless of whether they are democratic or not. Neo-Realists still maintain that this is relevant; classic Realists disagree, using the events of this century to argue that their methodology was always correct.

In sum, the two differ fundamentally in approach. Neo-Realism seeks to offer the systematic and scientific treatment its proponents believe is lacking in traditional Realism; according to them it complements the original theory by correcting its fallacies, building on classic Realism’s emphasis on self-interest, power and the state while challenging its account of human nature and of behaviour above the state level.

Bibliography

Ashley, R. K., 1984. ‘The Poverty of Neo-Realism’. International Organization, 38(2), pp. 255-286.

Donnelly, J., 2000. Realism and International Relations. 1st ed. Cambridge: Cambridge University Press.

Evans, G. and Newham, R., 1998. The Penguin Dictionary of International Relations. 1st ed. London: Penguin.

Fox, W., 1985. ‘E. H. Carr and Political Realism: Vision and Revision’. Review of International Studies, 11(1), pp. 1-16.

Gellman, P., 1988. ‘Hans J. Morgenthau and the Legacy of Political Realism’. Review of International Studies, 14(4), pp. 247-266.

Gray, C. S., 2009. ‘The 21st century security environment and the future of war’. Parameters, XXXVIII(4), pp. 14-26.

Halliday, F, 1994. Rethinking International Relations. 1st ed. Hampshire: Palgrave Macmillan

Harrison, E., 2002. ‘Waltz, Kant and Systemic Approaches to International Relations’. Review of International Studies, 28(1).

Jervis, R., 1994. ‘Hans Morgenthau, Realism and the Study of International Politics’. Social Research, 61(Winter), pp. 853-876.

Mearsheimer, J. J., 2013. ‘Structural Realism’, in Dunne, T., Kurki, M. and Smith, S. (eds.), International Relations Theories: Discipline and Diversity. 3rd ed. Oxford: Oxford University Press, pp. 77-93.

Mearsheimer, J J, 2001. The Tragedy of Great Power Politics. 1st ed. New York: W.W. Norton & Company

Morgenthau, H. J., 1948. Politics Among Nations. 1st ed. New York: Knopf.

Nye, J, 2007. Understanding International Conflicts: An Introduction to Theory and History in political science. 6th ed. Pearson Longman: New York.

Thucydides, 1972. History of the Peloponnesian War. M. I. Finley (ed.), translated by Rex Warner. London: Penguin.

Waltz, K., 2001. Man, the State and War: A Theoretical Analysis. 2nd revised ed. New York: Columbia University Press.

Waltz, K. 2000, ‘Structural Realism after the Cold War’, International Security, 25(1).

Waltz, K., 1979. Theory of International Politics. 1st ed. New York: McGraw-Hill.

The Role of Military Force in Promoting Humanitarian Values

Recent years have seen an increase in military force being used as a tool for increasing the scope for humanitarian values within conflict zones. This paper assesses this trend, and uses a number of conflict case studies as a vehicle for evaluating this premise. In doing so, this paper considers that the Libyan intervention in 2011 offers a case study which argues that state led humanitarian intervention is borne out of a political, as opposed to a humanitarian, need. This undermines the promotion of humanitarian values.

The concept of military-led humanitarian intervention sits within a highly subjective area of academic and political thought. Some commentators, such as Waxman (2013: n.p.), consider that military-led humanitarian intervention consists of “the use of military force to protect foreign populations from mass atrocities or gross human rights abuses”, whilst others, including Marjanovic (2012: n.p.), see this particular course of action as “a state using military force against another state when the chief publicly declared aim of that military action is ending human-rights violations being perpetrated by the state against which it is directed”. Within this subjectivity there are a number of overlapping concepts that help to further the debate. These include war and conflict, within which humanitarian values are negatively affected by activities that impact upon non-combatants, including human rights abuses. Where humanitarian values are considered, the International Committee of the Red Cross (ICRC) (2013) holds the perspective that these comprise aspirations relating to humanity, neutrality, independence and impartiality. In this regard, one can suggest that where military forces are deployed in order to promote or support humanitarian operations, it is necessary that these forces act within the boundaries of these guiding principles. In their totality, therefore, it is arguable that a number of factors need to be present where a situation requires military-led humanitarian assistance.

With regards to the underpinning rationale for humanitarian interventions, Weiss (2012: 1) believes that an underlying notion of a “responsibility to protect” is a dominating factor in contemporary geo-political thinking. However, instead of this doctrinal approach being applied across the globe, Weiss (2012) believes that the global community tends to cherry-pick the conflicts in which it intervenes, a point discussed elsewhere in this paper. That said, Minear and Weiss (1995) had previously indicated that any military intervention that seeks to promote humanitarian values should incorporate a post-war recovery planning and redevelopment programme. Recent decades, particularly since the end of the Cold War, have nonetheless seen an increase in the number of military-led humanitarian interventions related to “activities undertaken to improve the human condition” (Weiss, 2012: 1). This latter issue, concerning the human condition, suggests that there has been a genuine shift in the contemporary conflict environment. This shift is primarily based on the progression from conventional warfare to asymmetric warfare involving a number of non-state actors and combatants, a factor that has not been ignored by Weiss (2012). Here it is suggested that, today, only state-led military interventions can promote humanitarian values, because non-state actors are not bound by regulations and international protocols regarding the dynamics and conduct of war. Indeed, this particular perspective gains an increased level of support when the current post-Cold War conflict environment is considered.

For Pattison (2010) the years following the end of the Cold War have resulted in a vastly increased number of military operations designed to support humanitarian values through intervention. These interventions have occurred in a plethora of collapsed or failed states and include, but are not limited to, post-Gulf War (1991–2003) Iraq, Bosnia-Serbia (1995), the Balkans and Kosovo (1992-1999), East Timor (1999), Somalia (2002), Haiti (2004) and Libya (2011). For some, these interventions also include the post-9/11 era’s intervention in Afghanistan and latterly in Iraq (2003-2010) (Pattison, 2010). In this regard, Weiss (2012) believes that the underlying concept of humanitarian intervention has helped to increase the potential for international interventions into other states because of a need to increase the level of protection offered to non-combatants in conflict. However, the earlier indication of cherry-picking conflicts offers a greater insight into the nature of the political discourses which take place at the United Nations (UN) Security Council with regards to these conflicts, where state-led political aspirations are an overbearing factor in the intervention tools and choices made by states. Indeed, one can argue that the current and ongoing conflict in Syria offers a case in point, particularly since all state actors which have intervened possess their own aspirations for shaping the future of that particular country (Haaretz, 2014; Press TV, 2013; Ruthven, 2014; Time, 2015). In some respects, therefore, the issue of humanitarian intervention and its related values base is being abused in order that these political aspirations can be furthered (Dagher, 2014). This, however, is a perpetual factor in the international arena, particularly where realist agendas are taken into consideration (Bayliss & Smith, 2001). One area where international intervention has been encouraged is in relation to ethnic conflict.

Kaldor (1998) recognises that the end of the Cold War resulted in an increase in the frequency of ethnically charged conflicts and that these types of conflict have been offered as a rationale for international humanitarian-based interventions. In respect of this, Kaldor (1998) argues that the changes that have taken place within conflict dynamics, which have resulted in belligerent forces not being constrained by international regulations, including the Geneva Convention protocols, the Laws of Armed Conflict or relevant United Nations Charters, have led to humanitarian values being used as an excuse to further the political aspirations of a number of states. This changed dynamic has perpetuated itself and spread to a number of conflict zones around the world. It has, however, led to an increased reliance upon conventional forces whose role has been to offer peacekeeping and security services to non-governmental organisations (NGOs) in support of their operations. In this respect it is noted that Christoplos, Longley and Slaymaker (2004) consider that intervention strategies have also altered in recent years. Here, they note that the underpinning intervention programmes now seek to promote humanitarian values, evidenced by the creation of a tripartite doctrinal system: national and personal rehabilitation; post-war recovery programmes intended to help redevelop both state and social infrastructures; and relief programmes that seek to maintain the fabric of civil society during crisis periods. For Seybolt (2007) this perspective adds weight to the argument that military humanitarian interventions can assist NGOs in their duties via the provision of security. However, it is also recognised that adding external military forces to a combat zone can lead to further complications, primarily because military operations carry the potential for using force when necessary (Davidson, 2012; Ministry of Defence, 2011).

The perspective that deployed military forces can utilise force is well grounded in military doctrines. For example, the UK Ministry of Defence promotes a policy whereby “The peacekeeper fulfils a mandate with the strategic consent of the main warring parties, allowing a degree of freedom to fulfil its task in an impartial manner, while a sustainable peace settlement is pursued.” (Ministry of Defence, 2011: 1.1). This suggests that, for deployed military personnel, assisting NGOs as part of the promotion of humanitarian values is in fact a secondary consideration. Ultimately, the use of military force within humanitarian interventions is a purely political choice intended to help reshape the political landscape of the affected region or state in the post-conflict environment. With regards to the current Syrian conflict, one can argue that the divergent and conflicting political perspectives and aspirations are a factor which will undermine the potential for any real focus upon the promotion of humanitarian values. Indeed, this eventuality does little to promote the principles of humanitarianism as argued by the likes of the ICRC (2013). In effect, the possibility that military forces can conduct purely military operations, or war-phase fighting, during a humanitarian intervention undermines any utilitarian or altruistic claims made by the respective political powers. In its totality, this suggests that the aforementioned issue of political realism is both present and ongoing. Indeed, such an argument can be backed up by a policy review of the recent and ongoing Afghan conflict.

A review of UK doctrinal papers supports this paper’s contention that military operations incorporate the possibility of war fighting, as well as security duties, as a contingent factor in the preparations of any military force. Stabilisation programmes in the Afghanistan intervention occurred in an environment where the UK’s military “had the consent of the host nation government but no other warring party (Afghanistan: Taliban 2001 – present)… A military force may decide in such situations that the defeat of a specific enemy is essential to the success of the operation.” (Ministry of Defence, 2011: 1.1). Essentially, therefore, in political terms it is feasible that political intentions can undermine any altruistic argument in relation to the deployment of military forces to carry out humanitarian operations. For some, the recent ‘humanitarian’ intervention into Libya is an example of this outcome.

The recent UN-backed military intervention in Libya was mandated as a humanitarian intervention intended to provide relief and assistance (United Nations, 2011). The intervention was supposed to further the seven values of humanitarian intervention promoted by the ICRC (2013); however, one can argue that it was mainly politically motivated, because there is sufficient evidence to indicate that Gaddafi’s regime had been a long-time foe of the states which executed the intervention (USA, UK and France) (Boulton, 2008). In promotion of their intervention, the USA, UK and France had argued that a failure to intervene would result in a humanitarian crisis caused by the perpetuation of conflict. However, Kuperman (2011) argues that the resultant UN Resolution 1973 (United Nations, 2011) created conditions whereby the intervening military forces could operate beyond the realms of the Resolution. These included, for example, allowing the USA, UK and France to conduct stabilisation operations so that the authority of the Gaddafi regime could be undermined, thereby helping to bring the conflict to a swift conclusion. In layman’s terms, this meant military intervention via war fighting. With regards to this, Kuperman (2011) also argues that Libyan state functions were impacted, including the freezing of its financial and economic assets. It was also argued that the intervening forces of the USA, France and the UK oversaw the deployment of private military contractors whose role was to undertake anti-Gaddafi operations, thereby seeking to overthrow his regime (RT News, 2012). In effect, the usage of humanitarian justifications for military intervention in conflict can be defined in terms of the actions and justifications of the states whose forces have been committed to operate in those areas and regions.

In its totality, therefore, the usage of military force as an effective instrument for the promotion of humanitarian values is limited. These limitations can be found within the underlying political rationales of states that are prepared to commit forces for these operations, particularly where those states have an interest in the realisation of a particular outcome. Whilst humanitarian-led interventions have become a mainstay of the post-Cold War climate, one can argue that the seven humanitarian values promoted by the ICRC (2013) are undermined by the intervening forces because of their ability both to flout their mandate and to conduct war-fighting operations under the guise of humanitarianism. In essence, therefore, one can argue that there are genuine limits to the ability of military forces to promote humanitarian values; however, these limitations are not factors which states consider when seeking to intervene in any conflict.

Bibliography

Bayliss, J., & Smith, S., (2001), The Globalisation of World Politics. Oxford, Oxford University Press.

Boulton, A., (2008), Memoirs of the Blair Administration: Tony’s Ten Years, London: Simon & Schuster.

Christoplos, I., Longley, C. and Slaymaker, T., (2004), The Changing Roles of Agricultural Rehabilitation: Linking Relief, Development and Support to Rural Livelihoods, available at http://odi.org.uk/wpp/publications_pdfs/Agricultural_rehabilitation.pdf, (accessed on 17/10/15).

Dagher, S., (2014), Kurds Fight Islamic State to Claim a Piece of Syria, (online), available at http://online.wsj.com/articles/kurds-fight-islamic-state-to-claim-a-piece-of-syria-1415843557, (accessed on 17/10/15).

Davidson, J., (2012), Principles of Modern American Counterinsurgency: Evolution and Debate, Washington DC: Brookings Institute.

Haaretz, (2014), Russia demands Israeli explanation of air strikes in Syria, (online), available at http://www.haaretz.com/news/diplomacy-defense/1.630584, (accessed on 20/10/15).

International Committee of the Red Cross, (2013), Humanitarian Values and Response to Crisis, (online), available at https://www.icrc.org/eng/resources/documents/misc/57jmlz.htm, (accessed on 17/10/15).

Kuperman, A., (2011), False Pretence for war in Libya, available at http://www.boston.com/bostonglobe/editorial_opinion/oped/articles/2011/04/14/false_pretense_for_war_in_libya/, (accessed on 17/10/15).

Marjanovic, M., (2011), Is Humanitarian War the Exception?, (online), available at http://mises.org/daily/5160/Is-Humanitarian-War-the-Exception, (accessed on 17/10/15).

Minear, L and Weiss, T.G., (1995), Mercy Under Fire: War and the Global Humanitarian Community, Boulder: Westview Press.

Ministry of Defence, (2011), Peacekeeping: An evolving Role for the Military, London: HMSO.

Pattison, J., (2010), Humanitarian Intervention and the Responsibility To Protect: Who Should Intervene?, Oxford: Oxford University Press.

Press TV, (2013), Hezbollah to remain in Syria: Official, (online), available at http://www.presstv.ir/detail/2014/02/10/350058/hezbollah-to-remain-in-syria-official/, (accessed on 20/10/15).

RT News, (2012), Stratfor: Blackwater helps regime Change, (online), available at http://www.rt.com/news/stratfor-syria-regime-change-063/, (accessed on 17/10/15).

Ruthven, M., (2014), The Map ISIS Hates, (online), available at http://www.nybooks.com/blogs/nyrblog/2014/jun/25/map-isis-hates/, (accessed on 20/10/15).

Seybolt, T., (2007), Humanitarian Military Intervention: The Conditions for Success and Failure, Oxford: Oxford University Press.

Time, (2015), Iran Looms Over ISIS Fight as Baghdad-Tehran Alliance Moves Into Tikrit, (online), available at http://time.com/3741427/isis-iran-iraq-tikrit/, (accessed on 20/10/15).

United Nations, (2011), Resolution 1973, (online), available at http://www.un.org/press/en/2011/sc10200.doc.htm#Resolution, (accessed on 17/10/15).

Waxman, M., (2013), Is humanitarian military intervention against international law, or are there exceptions?, (online), available at http://www.cfr.org/international-law/humanitarian-military-intervention-against-international-law-there-exceptions/p31017, (accessed on 17/10/15).

Weiss, T., (2012), Humanitarian Intervention, Cambridge: Polity Press.

Cities, poverty and inequality in the UK

With reference to London, Manchester and Glasgow in the UK
Introduction

Debates on poverty and inequality have always been heated and topical. In the aftermath of the global financial crisis and the dogma of austerity, poverty and inequality received newfound attention from academic and policy circles alike. What is especially interesting, for the purposes of this essay, is to look at the stark form of austerity politics, how it has fed into existing socioeconomic privation, and how it aligns with more deep-seated politics dating back to Thatcherite economics and ‘voodoo economics’ (Harvey, 2005).

This essay will look at the UK and specifically London, Manchester and Glasgow, and tease out themes around poverty and inequality and how they have been animated as a direct result of policy as well as decision-making at Westminster. By and large, poverty and inequality are multifaceted concepts and should not be seen as purely economic. They intersect with legacies and collective memories and the relationship between cities and inequality is therefore going to be dynamic and complicated.

This essay first turns to delineating what cities, poverty and inequality are taken to mean and to locating this discussion within a larger theoretical current and critique. The argument proposed is that poverty and inequality are, put simply, manifested to their fullest extent in global cities, as these are the immediate receptors of government policy and dialogue. Although regional cities and towns are also affected, the ‘contagion’ of policy is a lot weaker there and its relationship to them more obscure. To provide evidence for this argument, this essay will examine three socioeconomic phenomena that have stark implications for poverty and inequality, namely neoliberal austerity politics, a protracted housing crisis, and finally, deindustrialization and a one-sided focus on the City. The essay concludes with a couple of policy recommendations as to how to curtail the rise of inequality in cities.

Why global cities?

As briefly mentioned in the introduction, this essay identifies and looks into global cities as opposed to the nation as a whole. This is because the latter is more abstract and generalized, and relies on more macroeconomic assumptions. In contrast, the former are the ‘playground’ of policy and dialogue, being its proximate receptor and its locus (Musterd and Ostendorf, 2013). That is to say, global cities, in a way, symbolize what policy is about and what underlies it. The direct consequences that accrue there allow an observer to make more credible and robust points about policy’s relationship to inequality and poverty (Sassen, 2011). For example, if this essay were to take up national inequality, measured by the Gini coefficient, the concepts would become harder to discern and the implications unclear. Much of the theoretical literature has homed in on the root causes of inequality and how this deleterious phenomenon has come about (see Atkinson, 2015). Although this essay will later touch on and attempt to trace why inequality exists and is magnified in cities, it is noteworthy that most of the research into inequality shies away from looking at the direct results it has on life in global cities.

How do we explain poverty and inequality?

Next, this essay turns to defining poverty and inequality. There is a presumption in favour of conflating these two into purely economic phenomena to be addressed by economic solutions. However, as will be examined, the case study of Glasgow is a powerful rejoinder to this conflation: it is a city that has competitive economic infrastructure and results, and yet lags behind on other crucial holistic social measurements. More broadly, poverty and inequality, as stated, are complex and multifaceted. That is why it is suggested that the Gini coefficient is a fundamentally limited and misguided measurement to marshal in this essay. What would be more relevant is to look at the likes of Amartya Sen (2005) and his work on human capabilities and how potential can be frustrated in myriad non-economic ways. For this reason, this essay cannot properly infer from London’s high economic performance that it adequately caters to the problems of inequality and poverty. Put simply, the fact that a global city grows does not mean that the least well off are benefiting as well. By taking this comprehensive approach, this essay will discuss how complex policy has complex consequences on people’s lives and general levels of contentedness.
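
To make concrete the kind of measurement the essay is setting aside, the short sketch below illustrates how a Gini coefficient is conventionally computed from an income distribution. It is a minimal illustration only, not drawn from the essay or its sources: the income figures are hypothetical, and the point is simply that the coefficient condenses an entire distribution into a single number, which is precisely the reduction this essay argues is too narrow.

```python
# Minimal sketch of a conventional Gini coefficient calculation.
# The incomes below are hypothetical and purely illustrative.

def gini(incomes):
    """Return the Gini coefficient: 0 = perfect equality, values near 1 = extreme inequality."""
    values = sorted(incomes)
    n = len(values)
    total = sum(values)
    # Rank-weighted sum of the ordered incomes.
    weighted_sum = sum(rank * income for rank, income in enumerate(values, start=1))
    # Standard formula for the Gini coefficient of a discrete distribution.
    return (2.0 * weighted_sum) / (n * total) - (n + 1.0) / n

# Hypothetical annual household incomes (in pounds).
sample = [12_000, 18_000, 22_000, 35_000, 150_000]
print(round(gini(sample), 3))  # prints 0.495 for this sample
```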

The trajectory of inequality

Inequality is by no means novel. This discussion is embedded in a global debate about what gives rise and momentum to inequality, especially following the global financial crisis of 2008. In the core of the Western world, inequality has run amok in the past few decades, despite the fact that those decades have delivered modest economic growth in general (Piketty, 2014). This puzzling reality has been the subject of much academic debate; some scholars have suggested that inequality is not only inevitable but, in fact, beneficial, as it makes people more driven and aspirational, and more likely to celebrate and mimic such role models as Mark Zuckerberg and Warren Buffett (Lippmann et al., 2005). According to this line of argument, inequality is seen to be a by-product of entrepreneurial ability and prowess.

However, it is unlikely that this line of thought captures the deep and perplexing character of inequality. To rebut the claim that inequality is a fair reflection of talent and ability, this essay contends that it is rather the result of collective, deliberate decision-making (Stiglitz, 2012). This becomes particularly evident in global cities, where the contradictions therein highlight that it must be more than just a lack of talent or luck that is holding people back on such a large scale. London, for example, boasts the City, undeniably the globe’s foremost financial centre, and also the Silicon Roundabout, a very promising and booming hub of entrepreneurs. Yet it also has areas like Peckham. Inquiring into the latter’s residents’ attitudes, it becomes plain that they feel disillusioned and failed by the capital of the United Kingdom (Glaeser et al., 2009). This area offers another side to London’s ‘success story’, as it tends to be host to endemic crime, destitution, childhood obesity and other negative manifestations. Therefore, to say that inequality is down to the genes you are endowed with and the aspirations you form is too simplistic a story for global cities.

Another instance in which people are adversely affected by phenomena outside of their control is the prolonged housing crisis that London is witnessing (Harford, 2014). Due to unprecedented demand and people looking to move in, house prices have been on a perpetual rise. What has enabled this rise is the power that landlords have: they can charge disproportionate amounts to tenants, but they can also fund their own mortgages by letting out properties (Harford, 2014). This translates very negatively for people from lower socioeconomic strata, as they lack comparable access to credit to begin with. That is why they turn to the state and to council housing, which caters to this need. However, the latter has also been penetrated by private landlords, leading to the perverse situation whereby council housing is owned privately and can also be overcharged for. This is down to political choices regarding the right to buy these kinds of properties, but also a generally more permissive framework for buying and letting property. At the same time, those at the top end of the economic spectrum have benefitted from more generous treatment of inheritance and high-value property, offering a glimpse into how glaring inequality can become in global cities. By contrast, note that Berlin has recently introduced rent controls to avoid a similar scenario (Vasagar, 2015).

It is therefore clear that people living in London have vastly different and unequal access to the most important asset of their consumption lives, namely their house, which has negative implications for their psychological wellbeing and for the extent to which they can provide for their families sustainably. Big cities cannot afford to have these kinds of contradictions run within them, whereby lower strata segregate from the mainstream in their own communities and refuse to engage with political decision-making and active citizenship (Wheeler, 2005). This, in turn, exacerbates the already unsteady relationship between cities and inequality, as these groups lose the morale and incentive to engage with common goals and agendas.

Neoliberalism

The global financial crisis has sharpened the case, heatedly made by the United Kingdom’s government, in favour of austerity politics. The government has engaged in discretionary benefit cuts and has also increased tuition fees for tertiary education, both of which disproportionately hurt the poor and therefore augment inequality. In seeing benefits reduced, a person in a big city faces profound adversity. Compounded by the housing crisis and general inflation, this person is likely to have their livelihood eroded. Their children will also have to take out bloated student loans, and that is if they can afford to hold off working immediately after school. Recently, the UK government has engaged in a bait-and-switch policy whereby benefits to the poor were cut, supposedly counteracted by the introduction of a ‘living’ wage (O’Connor and Gordon, 2015). Again, this example demonstrates that inequality is not an inevitable result of human nature and a random distribution of talent, but is created and magnified by governments and collective communities that have bought into the austerity dogma. This has been criticised by high-profile academics such as Piketty (2014), Stiglitz (2012) and Atkinson (2015).

The seeds of inequality were perhaps planted by Thatcherite economics and a legacy of tough love towards trade unions, workers and the welfare state. Following Thatcher’s election, the government introduced a series of neoliberal reforms that placed socioeconomically vulnerable people in an even more precarious situation, stripped of participation in unions, of their jobs if they worked for a factory that closed down, and of their livelihoods as regressive taxation took its toll (Harvey, 2005). One of the most important features relevant for the purposes of this essay is deindustrialization and how it has engendered a deep north-south divide in the UK that is persistent and difficult to address. Through a strong and remorseless focus on the service industry, which was hailed as forward-looking, efficient and innovative, the UK’s industrial base, concentrated in cities like Manchester and (less so) Glasgow, took a back seat to the City of London (The Equity Trust, 2014). The latter has been consistently nurtured with state support and policy ever since, at the expense of other sectors such as manufacturing, which used to make up the backbone of the British economy. Instead, manufacturing is now, broadly speaking, lagging behind in terms of productivity, as the latest findings of the CDI show (The Equity Trust, 2014).

[Graph omitted: pay gaps between rich and poor across UK regions (source: The Equity Trust, 2014)]

The graph above shows pay gaps between the rich and the poor in different regions of the UK. The pay gap in London is the most glaring, even though London is by far the fastest-growing city. This is because the service industry caters mainly to the wealthy and lacks the traditionally job-creating economic multipliers of the industrial and manufacturing sectors that have suffered.

Conclusion

In conclusion, this essay first took up the ambitious task of delineating what is meant by poverty and inequality, which are inherently complicated concepts. It has also attempted to come to grips with global cities and why they should be viewed as the main reference point in any policy discussion about poverty and inequality. The relationship this essay identified is by no means static; rather, it evolves with time and with changes in government and collective dialogue. This essay has also aimed to dispel associations between growth and inequality by pointing to the examples of London and Glasgow, both of which should alert the reader to the holistic and insidious ways in which inequality and poverty work. The roots of inequality and poverty have also been briefly explored, looking at how they are not novel but the result of long-lasting legacies and ingrained ways of political thinking. It has finally turned to how important and telling the current context is in terms of how inequality-sustaining policies have been legitimised under the guise of austerity and in the name of balanced budgets.

Bibliography

Atkinson, A.B., 2015. Inequality: What Can Be Done? Cambridge, MA: Harvard University Press.

Glaeser, E.L., Resseger, M., Tobio, K., 2009. Inequality in Cities. Journal of Regional Science 49, 617–646.

Harford, T., 2014. Why a house-price bubble means trouble. Financial Times. Available at: http://www.ft.com/cms/s/0/66189a7a-6f76-11e4-b50f-00144feabdc0.html (accessed 15/10/2015).

Harvey, D., 2005. A Brief History of Neoliberalism. Oxford: Oxford University Press.

Lippmann, S., Davis, A., Aldrich, H., 2005. Entrepreneurship and Inequality. In: Lisa Keister (Ed.) Entrepreneurship, Research in the Sociology of Work. London: Emerald Group Publishing Limited, pp. 3–31.

Musterd, S., Ostendorf, W., 2013. Urban Segregation and the Welfare State: Inequality and Exclusion in Western Cities. London: Routledge.

O’Connor, S., Gordon, S., 2015. Summer Budget: Osborne makes bold bet with “living wage.” Financial Times. Available at: http://www.ft.com/cms/s/0/611460a8-2584-11e5-9c4e-a775d2b173ca.html#axzz3oeeh6xid (accessed 15/10/2015).

Piketty, T., 2014. Capital in the 21st Century. Cambridge, MA: Harvard University Press.

Sassen, S., 2011. Cities in a World Economy. London: SAGE Publications.

Sen, A., 2005. Human rights and capabilities. Journal of Human Development 6, 151–166.

The Equity Trust, 2014. A Divided Britain? – Inequality Within and Between Regions. London: The Equity Trust.

Stiglitz, J., 2012. The price of inequality. London: Penguin.

Vasagar, J., 2015. Germany caps rents to tackle rise in housing costs. Financial Times. Available at: http://www.ft.com/cms/s/0/27efd4b2-c33a-11e4-ac3d-00144feab7de.html#axzz3oeeh6xid (accessed 14/10/2015).

Wheeler, C.H., 2005. Cities, Skills, and Inequality. Growth and Change 36, 329–353.

Changes in US Foreign Policy after 9/11

Introduction

On September 20th, 2001, President George W. Bush (2001, n. pag.) gave a speech addressing the events of nine days before: “On September the 11th, enemies of freedom committed an act of war against our country. Americans have known wars, but for the past 136 years they have been wars on foreign soil, except for one Sunday in 1941.” The speech drew upon the notion that America had been attacked, laid the blame firmly at the door of terrorism and interpreted the attack as an act of war. Although the emotive rhetoric was designed to stir support for a response, it also heralded a new era in US foreign policy. Defined by Bolton (2008, p. 6) as a “foreign policy crisis”, the attack inevitably elicited a response from American policymakers, but the extent to which it changed US foreign policy has been hotly debated. As such, this essay will discuss the changes in post-9/11 US foreign policy, identifying areas that marked a departure from the policy in place prior to 9/11. It will analyse each to determine the extent to which it was a direct response to the terrorist attack and evaluate how the change impacted upon long-term foreign policy strategy. This will be done with a view to concluding that many of the changes to US foreign policy in the post-9/11 era were a response to the evolving security threat posed by terrorism and did force policy to evolve in order to accommodate strategies that address modern problems; however, while those changes made an immediate impact, they did little to alter the long-term course of US foreign policy.

Foreign policy arguably changed direction within days of 9/11 with the most immediate and most obvious change being the shift in focus towards terrorism. Bentley and Holland (2013) highlight that the focus had been foreign economic policy under Clinton but 9/11 produced a dramatic movement away from diplomacy and towards military solutions via the War on Terror. There was also movement away from policy that prioritised relations with the great powers of Russia and China. Earlier unilateralism had negatively impacted upon relations with both nations, thus causing deterioration that extended beyond the Cold War era hostilities and prevented effective relations between East and West (Cameron, 2006; Nadkarni, 2010). However, the American desire to create a “world-wide anti-terrorism alliance” (Nadkarni, 2010, p. 60) brought about a relative thaw between the nations and facilitated discourse in order to cater for shared security concerns. This change provides evidence of an immediate shift in US interests and this manifested in foreign policy. As such, this is an extremely important change that occurred post-9/11, especially as it emerged out of the first response to the attack and served to dictate US actions abroad for more than a decade afterwards.

The shift of focus from the great powers towards terrorism provided policy space to address security threats via the three pillars of the Bush administration’s national security policy, which had become a fundamental element of foreign policy as, for the first time since World War II, an attack on American soil brought the two ostensibly dichotomous strands of policy together. The pillars were missile defence (a continuation of policy prior to 9/11), pre-emption and homeland security, the latter two of which were embraced after 9/11 in response to it (Lebovic, 2007). Although elements of this were rooted in domestic policy, the pre-emption aspect was also manifest in foreign policy because non-state terrorist groups and rogue states became inextricably linked to US foreign policy as targets to be dealt with under the new priorities outlined in the wake of the terror attacks, although this was somewhat more gradual than the initial shift of focus to terrorism. Indeed, the Bush Doctrine marked a fundamental shift towards policy that incorporates both pre-emptive and preventative action, signalling the decline of the reliance on containment and deterrence that had dictated policy from the Cold War era onwards (Jentleson, 2013; Levy, 2013). The pre-emptive strikes were indicative of a strategy that sought to defend by attacking those who posed an immediate security threat to the US and allowed policy to justify the unilateral military pursuit of specifically American interests. This suggests that 9/11 was used as an effective excuse to create foreign policy that better mirrored the ideology of the government than the policy in place in the months prior to the attack.

There is extensive criticism of the policy that reinforces the assumption that the government manipulated foreign policy to suit its own ends. For example, Ryan (2008, p. 49) argues that Iraq, which was labelled a rogue state, was already a focal point of foreign policy but the events of 9/11 allowed policymakers to push their specific agenda: “Influential strategists within the Bush administration seized on the horror to gain assent from liberal Americans to move the country towards a war in Iraq that neoconservative strategists desired, but that many within the US… shunned.” Holland (2012) concurs, arguing that coercive rhetoric was used extensively in order to sell the War on Terror via culturally embedded discourse. In addition, Miles (2013, p. 110) argues that “…Bush’s placement of rogue states at the centre of America’s response to 9/11 was welcomed as an opportunity to overthrow a number of old threats and terror loving tyrannies who stood in the way of democracy and freedom.” This perspective certainly offers a credible insight into how 9/11 was manipulated in order to push foreign policy in a certain direction, and indeed one that was a continuation of what had gone before. However, the need to manipulate public opinion is indicative of the fact that foreign policy had deviated from that in place directly prior to the terrorist attack on the World Trade Centre.

US foreign policy also responded to the increased demand, following 9/11, for humanitarian assistance to aid failed states and for nation building to ensure their reconstruction. Shannon (2009) points out that the reconstruction of Afghanistan following the US invasion essentially helped to prevent the failure of the state, improved the quality of life for its people, introduced freedoms and democratic processes that were absent before and helped keep the state from falling under terrorist control. This was certainly a change from previous foreign policy: “Before 9/11, nation building was often caricatured as a form of idealistic altruism divorced from hard-headed foreign policy realism… In the post-9/11 era, nation-building has a hard-headed strategic rationale: to prevent weak or failing states from falling prey to terrorist groups” (Litwak, 2007, p. 313). This summary of the extent to which attitudes changed highlights the fact that a greater role in states requiring humanitarian assistance was incorporated into foreign policy out of necessity rather than ideological choice. There was a distinct need to limit terrorist activity as far as possible and this actively manifested in this element of foreign policy. As Litwak (2007) points out, humanitarian action was not a staple element of American foreign policy by any means, and so this, more than any other element, signals that a change occurred within the strategic objectives inherent in the War on Terror. However, there are criticisms of this particular change because the US is charged with failing to follow through with humanitarian aid to the extent that it should have done. For example, Johnstone and Laville (2010) suggest that the reconstruction of Afghanistan was effectively abandoned, with a failure to create institutions that would withstand future threats to freedom and democracy. This suggests that this particular area of strategy was not well thought out and did not achieve its ultimate aims. Nevertheless, the fact that it was included in US foreign policy post-9/11 suggests that there was a concerted effort to implement a multifaceted policy to tackle terrorism as a new and dangerous global strategic threat.

However, despite the fact that the analysis here points to a change of direction for US foreign policy in the wake of 9/11 that was specifically designed to tackle the causes of and security threat posed by terrorism, some critical areas of policy did not change. For example, the long term objectives of the US were still manifest within new policy but they appeared in a different form that essentially provided a response to a different threat. Leffler (2011, n. pag.) argues that 9/11:

…did not change the world or transform the long-term trajectory of US grand strategy. The United States’ quest for primacy, its desire to lead the world, its preference for an open door and free markets, its concern with military supremacy, its readiness to act unilaterally when deemed necessary, its eclectic merger of interests and values, its sense of indispensability – all these remained, and remain, unchanged.

This summary of the ultimate goals of US foreign policy draws attention to the fact that very little has changed. Although the British government supported the invasion of Iraq in the wake of 9/11, the fact that the United Nations Security Council refused to pass a resolution condoning the use of force did not prevent the launch of Operation Iraqi Freedom (Hybel, 2014). This is evidence of the readiness to act unilaterally if it serves US interests. Gaddis (2004) concurs, noting that US self-interest remained the same, with very little consideration of the long-term strategy that intervention elsewhere would require. Bolton (2008, p. 6), on the other hand, agrees that many of the changes to US foreign policy were made immediately but disagrees with the assertions of Leffler and Gaddis concerning their long-term impact. Bolton (2008, pp. 6-7) asserts that the changes have had a longer-term impact, albeit one that has diminished over time as a result of the enduring nature of the national security policy and its evolution to accommodate the threat of terrorism in the wake of 9/11. Although this provides a dissenting voice in one respect, it demonstrates consensus on the fact that the changes in US foreign policy post-9/11 were a direct response to a new global threat but were implemented alongside existing strategic goals. In effect, the approach may have changed but the ultimate objective had not.

Conclusion

In conclusion, the analysis here has identified and discussed several changes that occurred within US foreign policy post-9/11. There can be little doubt that there was a distinct shift in focus to the need to deal with terrorism after the first attack on American soil for decades. Similarly, the policy content evolved to adopt a more humanitarian approach to global crises and a proactive and pre-emptive approach to potential threats. All of these changes did mark a departure from what had gone before in some way. However, although the majority of changes were incorporated into foreign policy within two years and were all undoubtedly a response to the attack and its causes, there is significant evidence to suggest that such actions provided an extension of foreign policy doctrine that had gone before. For example, although the focus of foreign policy shifted from the old Cold War objectives of containment and deterrence to terrorism, the interest policymakers took in some rogue states like Iraq was simply a continuation of established ideologies of ensuring freedom and democracy. Similarly, the US administration of foreign policy changed very little in terms of its determination to act unilaterally where necessary and lead the world in a battle against the latest threat to global security. As such, it is possible to conclude that many of the changes to US foreign policy in the post-9/11 era have been a response to the evolving security threat posed by terrorism. Furthermore, it was necessary for policy to evolve in order to accommodate strategies that address modern problems that were not as much of a priority in the late 20th century. However, whilst those changes made an immediate impact on foreign policy, they did not alter the long-term course of US foreign policy, because that remained firmly focused on the outcomes of action elsewhere in the world in relation to American interests.

Bibliography

Bentley, M. & Holland, J., (2013). Obama’s Foreign Policy: Ending the War on Terror. Abingdon: Routledge.

Bolton, M., (2008). US National Security and Foreign Policymaking After 9/11. Lanham: Rowman & Littlefield.

Bush, G., (2001). President Bush Addresses the Nation. The Washington Post. [Online] Available at: http://www.washingtonpost.com/wp-srv/nation/specials/attacked/transcripts/bushaddress_092001.html [Accessed 3 October 2015].

Cameron, F., (2006). US Foreign Policy After the Cold War. Abingdon: Routledge.

Gaddis, J., (2004). Surprise, Security and the American Experience. Cambridge, MA: Harvard University Press.

Holland, J., (2012). Selling the War on Terror: Foreign Policy Discourses After 9/11. Abingdon: Routledge.

Hybel, A., (2014). US Foreign Policy Decision-Making from Kennedy to Obama. Basingstoke: Palgrave Macmillan.

Jentleson, B., (2013). American Foreign Policy. 5th Edition. New York: W. W. Norton.

Johnstone, A. & Laville, H., (2010). The US Public and American Foreign Policy. Abingdon: Routledge.

Lebovic, J., (2007). Deterring International Terrorism and Rogue States. Abingdon: Routledge.

Leffler, M., (2011). September 11 in Retrospect: George W. Bush’s Grand Strategy Reconsidered. Foreign Affairs. [Online] Available at: https://www.foreignaffairs.com/articles/2011-08-19/september-11-retrospect [Accessed 3 October 2015].

Levy, J., (2013). Preventative War and the Bush Doctrine. In S. Renshon & P. Suedfeld eds. Understanding the Bush Doctrine: Psychology and Strategy in an Age of Terrorism. Abingdon: Routledge, pp. 175-200.

Litwak, R., (2007). Regime Change: US Strategy Through the Prism of 9/11. Baltimore: JHU Press.

Miles, A., (2013). US Foreign Policy and the Rogue State Doctrine. Abingdon: Routledge.

Nadkarni, V., (2010). Strategic Partnerships in Asia: Balancing Without Alliances. Abingdon: Routledge.

Ryan, D., (2008). 9/11 and US Foreign Policy. In M. Halliwell & C. Morley eds. American Thought and Culture in the Twenty First Century. Edinburgh: Edinburgh University Press.

Shannon, R., (2009). Playing with Principles in an Era of Securitized Aid: Negotiating Humanitarian Space in Post-9/11 Afghanistan. Progress in Development Studies. 9:1, pp. 15-36.

Software Engineering Groups Behaviour Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Factors And Issues That Influence The Behaviour Of Software Engineering Groups

Most presentations on software engineering highlight the historically high failure rates of software projects, of up to eighty percent: failure in the guise of budget overruns, delivery of solutions not compliant with specifications, late delivery and the like. More often than not, these failure rates are used to motivate the use of software engineering practices, the premise being that if adequate engineering practices were utilised, failure would become the exception rather than the rule. Best practice and lifecycles have been proposed and tailored to the various paradigms that the computer and information sciences throw up in rapid succession. There is extensive debate, within academia and without, on what works and what does not; the consensus is that what is best depends on the problem at hand and the expertise of those working on the problem.

A few software engineering group models have been popular in the history of software development. Earlier groups tended to be hierarchical, along the lines of traditional management teams. The project manager in charge did not necessarily contribute in a non-managerial capacity; he or she was responsible for putting teams together, had the last word on accepting recommendations, and delegated work to team members. Later groups worked around one or more chief programmers or specialists. The specialists took charge of core components themselves and were assisted by other group members in testing, producing documentation and deployment. More recently, collegial groups have become common. Here, people with varied specialisations form groups wherein they organise themselves informally by assuming roles as needs arise.

The advantage of a particular model over the others becomes evident only in the context of specific projects. The hierarchical model is best suited to relatively large projects that are decomposable into sub-goals that can be addressed by near independent teams. This is usually possible for software tasks that are very well defined, that need reliable and quality controlled solutions, particularly those that are mission critical. A large project may inherently require many people working on it to successfully complete it, if it were to be deployed in multiple sites, for instance. Alternatively, a large group may be assembled to expedite delivery. In either case, structured organisation and well-defined roles facilitate coordination at a high level.

A central problem with adding people to expedite delivery, or otherwise, is that the effectiveness of a group does not scale linearly. One person joining another does not mean that they are collectively twice as productive. More importantly, the contribution of the seventh person in a seven-person group is a fraction of the contribution of the second person in a two-person group. This is due to additional overheads in communication and coordination as group size increases and to the dilution of tasks assigned to individual members. As is evident, this is a problem for any group; however, in very large groups the problem is exacerbated.
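
As a rough illustration of this non-linear scaling, the sketch below models group output with a pairwise communication overhead; the per-channel cost figure is an arbitrary assumption chosen only to make the diminishing returns visible, not a measured value.

# Illustrative sketch only: a crude model of why group output does not scale
# linearly, assuming coordination overhead grows with the number of pairwise
# communication channels, n * (n - 1) / 2.

def effective_output(group_size, per_person_output=1.0, overhead_per_channel=0.03):
    """Estimate group output after subtracting pairwise coordination overhead."""
    channels = group_size * (group_size - 1) / 2  # every pair of members is a channel
    raw_output = group_size * per_person_output
    return max(0.0, raw_output - channels * overhead_per_channel)

for size in (1, 2, 6, 7, 15):
    print(size, round(effective_output(size), 2))
# The gain from adding the 7th member (about 0.82) is smaller than the gain
# from adding the 2nd (about 0.97), and it keeps shrinking as the group grows.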

In hierarchical settings, group members do not have a sense of ownership of the bigger solution. This may be reflected in their productivity. Because of the concentration of decision-making powers in particular individuals according to some hierarchy, the success of processes ultimately lies with them. A lot rides on their ability to pick the best practices and recommendations, delegate effectively and keep track of the bigger picture. In quality-controlled or mission-critical settings, there are not many alternatives to having large hierarchical groups with redundant contributors.

Primarily in non-commercial settings, a single specialist engineers a complete software solution. Invariably, the solution, being a prototype, is accessible only to other specialists. In addition, it is not designed for general consumption and is put together without going through most recommended processes in software engineering lifecycles. Single programmers tend to practise evolutionary programming. This involves producing a quick working solution followed by repeated reworking of the solution to make it more accessible to the programmer for future review, incremental development and peer review or development. If demand for such a software solution gains momentum, for either its general utility or its commercial viability, the core solution will most likely be adopted for further development by a larger software engineering group. It stands to reason that the core developer, who is most familiar with the solution, retains the last word on further technical development. Other members organise themselves around the chief programmer.

In general, some form of incremental development and periodic redevelopment from scratch of software solutions are common regardless of group model. The first incrementally developed solution tends to be the least well-engineered and is a patchwork of poorly designed and tightly coupled components. This reflects the difficulty involved in producing quick solutions using new tools and techniques and inexperienced software engineers. Faced with a high immediate cost barrier to reworking solutions, incumbents from previous software development cycles spend a lot of their post-deployment time supporting and patching what they produced.

In collegial groups formed in smaller organisations or departments, software engineers assume roles as needs arise. Brainstorming may be carried out by all members and design approved by consensus but development may be carried out by a few individual members, while the others gain feedback from end-users, keep track of competitor solutions and the like. In the initial phases of a software development life cycle, the problem definition, feasibility study and system analysis phases, end users of the system and independent specialists may form part of the group. During the design and implementation phases, a disjoint group of outsiders could merge with the team. The external members may then be invited for their feedback post implementation during the quality assurance and maintenance phases. Generally, best practise suggests that groups should be adaptive or loosely structured during the creative phases and become more structured as the design becomes clearer.

Groups with loosely defined structures are the most flexible in adapting to changing user needs. However, the greatest risk of project cancellations and overruns is ill-defined and changing requirements. Adaptiveness to an extent is crucial: given that users change requirements so compulsively, lacking adaptiveness completely would make an engineering group unviable. If group size is variable, the learning curve of new entrants must be kept in mind. A project manager hiring additional developers late in the software development cycle, after missing a deadline say, must factor in delayed contributions from the newcomers as a result of the time they take to familiarise themselves with the project and the time lost in coordinating their joining the group.

Following this, the next most common cause of failure is poor planning or management. If the person taking on the role of project manager has poor management or planning skills (the likelihood of which is heightened by the fact that each group member is called upon to serve in diverse capacities), projects are destined to fall over.

A number of reasonable software engineering guidelines are commonly ignored by software engineers. When programming, using descriptive names for variables is a good example. A section of program code will immediately make sense to its author for a reasonably long period when reviewed. However, if the code is not documented sufficiently, which includes using descriptive variable names and writing with the correct intended audience in mind, it will take another programmer a considerable amount of time to understand what has been implemented. In the extreme, some programmers obfuscate because they can, or to ensure that only they will ever understand what they have written, thereby making themselves indispensable. The potential for doing a half-hearted job of writing code is obvious in that poorly structured and poorly designed code is functionally indistinct from well-structured code and is a less demanding task. If software projects were evaluated only on their functionality, this would not pose a problem, but upgrades and patches require someone to review the code and add to it or repair it in the future. The long-term cost of maintaining software that is not well designed and documented may rise exponentially as older technologies are phased out and the pool of people competent to carry out repair and review shrinks. In essence, this is an instance of a quality control problem.
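
The point about descriptive names and documentation can be made concrete with a small, hypothetical example (not drawn from any particular project): the two functions below are functionally indistinct, but only the second tells a future maintainer what it does and who it is written for.

# Hypothetical example of the same calculation written twice: first with terse
# names, then with descriptive names and a comment stating purpose and audience.

def f(a, b, c):
    return a * b * (1 - c)

def gross_invoice_total(unit_price, quantity, discount_rate):
    """Return the invoice total after applying a fractional discount.

    Intended for maintainers of the billing module; discount_rate is a
    fraction between 0 and 1, not a percentage.
    """
    return unit_price * quantity * (1 - discount_rate)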

Uncontrolled quality problems are the third most common cause of cancellations and overruns of software projects. It is convenient to group documentation along with quality control, as they should be reviewed in tandem in a software development lifecycle. The first casualties of a late-running project are quality control and documentation. The long-term costs of skimping on either have been illustrated by example above, but there are short-term costs as well. In both evolutionary engineering, common among specialist-centred groups, and component engineering, commonly employed by hierarchical groups, the quality of each revision or component affects the quality of subsequent revisions or combined components.

The next most common causes of failure are unrealistic or inaccurate estimates and naive adoption of emerging technologies. The blame for the former rests with both users and planners or project managers. Most engineering groups are unrealistically optimistic about the speed with which they can deliver solutions. Their estimates may be accurate for prototypes, but in actual deployment, conformance to specifications, human-computer interfaces, quality control, training and change management are essential and take time. Users have a poor understanding of how descriptive their specifications are and much too often assume that implementers are contextually familiar with the environments in which they work and intend to use the system. Project managers and implementers have an affinity for emerging technologies, ignoring their core competencies, which are more likely to lie in established, proven technologies.

Success among software engineering groups is a function of planning and execution. The responsibility of planning falls on a project manager. A manager must draw on the best a group has to offer, appreciate software and technical concerns, facilitate communication and coordinate a group's effort. Enforcing quality standards from the beginning by adopting design and programming guidelines, for example, helps formalise expectations. A project manager with a technical background has the advantage of understanding the position of other technical members, is likely to communicate more effectively with them, and has the opportunity of leading by example. Given the emphasis on planning, it is worthwhile noting that it can be overdone. Over-engineering is not ideal engineering. It is often convenient for a single developer to take the lead for coding. Other developers and end-users should concurrently test the developing solution for functionality, usability and quality. Execution in isolation is likely to result in solutions that developers are comfortable with and even proud of but that end-users find lacking. The various stakeholders of the project must be simultaneously and consistently involved throughout the development cycle of software projects.

The greater the communication between specialist designers and specialist implementers, the more successful the group would be in terms of quality and ease-of-use of solutions. The technical crowd in a software engineering group sees the problem uniquely in terms of simplifying or making more elegant their contribution. The design crowd balances out this perspective by offering an alternative view, which is more likely to be aligned with that held by end-users, uncurtailed by technical considerations. Ultimately, end-users must be given an opportunity to have their say. The solution is theirs.

Changing requirements and specifications may be an acceptable excuse from the user’s perspective for delays in final solution delivery. Many projects are twenty percent complete after eighty percent of the initially estimated time. More people are brought in to expedite the process, budget overruns follow and sub-par solutions are delivered, albeit, late. Given the historical frequency, project managers should factor in possible requirement changes to arrive at estimates that are more realistic before commencing projects.
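
As a purely illustrative sketch, the calculation below pads an optimistic estimate with an assumed historical overrun factor and an allowance for requirement churn; the specific figures are invented for illustration and are not drawn from the essay or any cited study.

# Illustrative sketch only: scaling an initial estimate by a historical
# overrun factor and an allowance for expected requirement changes.
# The default figures are assumptions, not measured values.

def realistic_estimate(initial_weeks, historical_overrun=1.4, requirement_churn=0.2):
    """Scale an optimistic estimate by a past overrun factor and expected churn."""
    return initial_weeks * historical_overrun * (1 + requirement_churn)

print(realistic_estimate(20))  # an 'optimistic' 20 weeks becomes roughly 33.6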

Call Dropping Syndrome with Mobile Routers

This work was produced by one of our professional writers as a learning aid to help you with your studies

Research Call Dropping Syndrome in a Mobile Router Connection Used in a Vehicular Environment

Abstract

With the emergence of mobile automobile internet routers in the past five years, theorists and visionaries have begun to picture a world of widespread applications for them. From transportation infrastructure and inter-vehicular communication to mobile conferencing and business applications, the ability to access the internet during transportation is an increasingly valued concept. Yet mobile phones have internet services and cellular providers offer broadband 3G and 4G options, so why amidst all of this integrated technology does the mobile router become such a key component? Efficiency and performance. By leveraging the strengths of an integrated urban infrastructure and utilising multiple access points, the bandwidth and quality of service associated with mobile internet routing is rapidly increasing. Due to the rapid rate of motion and exchange, one of the greatest inefficiencies within mobile routing is handover latency, a potential lag in network resources while packets of information are exchanged between the mobile router and the new access point.

This research will provide a broad spectrum of theory and evidence regarding opportunities for moving towards a soft handover, mitigating the performance losses and network degradation associated with hard handover switching behaviour. Further, predictions will be made for the future of mobile automobile routing services, highlighting particular concerns that must be remedied in the coming years in order to enhance industry performance.

Introduction

1.1 Research Problem

As internet integration and communications convergence become increasingly impactful on human existence, the exploitation of new and emergent technologies for increasingly mobile applications continues. One of the most debated advances in recent years relates directly to the integration of mobile internet into automobiles. With one leading firm, Autonet Mobile, currently supplying a proprietary technology to several key automobile manufacturers, the merits of mobile routing continue to be validated through commercial value and consumer investment. From a technical standpoint, router-network communication protocol is relatively standard when a static relationship is established; however, once this relationship becomes mobile, the handover requirements arising from movement between access points can result in a breakdown in quality of service (QoS) and connection dropping behaviour. Using the NEMO basic support protocol, a mobile router is able to use Mobile IPv6 to ‘establish and maintain a session with its home agent…using bidirectional tunnelling between the mobile router and home agent to provide a path through which nodes attached to links in the mobile network can maintain connectivity with nodes not in the NEMO’. This brief explanation of a network architecture designed to maintain mobile consistency and reduce signal dropping behaviour is indicative of emergent technology in the mobile routing field, a capability with wide-scale applications across automobiles, trains, buses, and other ground transportation networks.

Although Autonet Mobile is the most public corporation currently working towards the development and implementation of mobile internet in automobiles, it is unlikely that such market supremacy will continue into the future. With expectations of more integrated automobile systems, particularly those related to navigation and intra-traffic vehicular communication (accident reduction schemes), academics such as Carney and Delphus are already predicting a rich, network-integrated future for mobile computing and internet applications. Considering that QoS for other diverse communication options including VoIP remains of particular concern in the mobile computing community, more in-depth analysis of connection management and performance in a mobile environment is needed. The concept of mobile routers and a mobile internet connection through intra-vehicular integration is foreign to many consumers, even in this era of diverse technologies and increasingly advanced network architecture. Therefore, the fundamental value of this dissertation may be linked to more predictive analysis of future applications and systemic evolutions regarding these emergent technologies. Through a comprehensive review of the existing academic evidence in this field as well as several examples of mobile routing technologies that are either currently in production or being field tested, the following chapters will firmly establish a rich, evidence-based perspective regarding technological viability, updates and version 2.0, and the future of mobile internet routing.

1.2 Aims and Objectives

Although wireless technologies have a longstanding history in internet protocol and architecture, the complexity of handover behaviour and connectivity in mobile router service continues to challenge developers to reconsider the merits of their design. Accordingly, as 3G and 4G mobile broadband networks are expanded across metropolitan and surrounding areas, the flexibility of mobile routers and intra-vehicular internet use is increasing significantly. Simultaneously, alternative technologies including the Autonet Mobile router exploit such interconnectedness to maximise wireless performance, conducting mobile handoffs of service as vehicles pass from one cell tower to another. The scope of this investigation is based on emergent technologies and connection dropping behaviour during in-motion computing. Therefore, a variety of sources will be explored over the subsequent chapters in order to evaluate the progress made in this field, practical applications and their relative reliability, and future opportunities for redesign and reconfiguration of mobile routers. In order to govern the scope and scale of this investigative process, the following research aim has been defined:

To evaluate the emergence of wireless router technologies for automobiles, comparing the connection dropping behaviour of mobile broadband networks and tower switching protocol in order to predict the viability of future applications and technologies.

Based on the variables addressed in this particular research aim, this investigation involves three primary data streams including evidence regarding the performance of mobile broadband routers and cards, the evidence regarding the performance of hand-off-based mobile internet access routers, and the opportunities for expanding this technology beyond its currently limited scope in future applications. As this investigative process involves the analysis of a broad spectrum of empirical findings in this field, secondary academic evidence forms the theoretical foundations of the background to this mobile internet problem. In addition, empirical evidence from actual network architecture is retrieved from existing applications of these distinctive technologies. Throughout the collection and analysis of this evidence, the following research objectives will be accomplished:

To identify the underlying conditions which contribute to connection dropping behaviour in mobile internet usage

To evaluate the structure of mobile internet architecture, highlighting the benefits and limitations associated with the various technologies

To highlight theoretical and emergent applications for mobile internet connections, expanding the scope of usage beyond just web surfing whilst driving

To offer recommendations based on the optimisation of network architecture according to a purpose-oriented protocol

1.3 Research Questions

Based on the aforementioned research aims and objectives, there are several key research questions that will be answered over the following chapters:

What expectations are manifested regarding mobile internet usage in vehicles, and how is such performance evaluated?

What opportunities are there for integrating mobile internet on a broader scale for more strategic, vehicular purposes (i.e. navigation, multi-vehicle communication, etc.)?

Are there specific benefits of a mobile broadband connection over tower handover behaviour and vice versa?

What will determine the future of mobile internet and how will such variables be identified and incorporated into the network architecture?

1.4 Structure of Dissertation

This dissertation has been structured in order to progress from a more general, theoretical background to specific mobile internet routing evidence. The following is a brief explanation of the primary objectives for each of the subsequent chapters:

Chapter 2: Literature Review: Highlighting an academic precedence established in this field over the past two to three years, empirical studies and theoretical findings are presented and compared in direct relation to the research aims and objectives.

Chapter 3: Research Methodology: This chapter seeks to demonstrate the foundations of the research model and the variables considered in the definition of the analytical research methodology.

Chapter 4: Data Presentation: Findings from an empirical review of existing mobile router architecture are presented, highlighting particular conditions, standards, and performance monitoring that govern functionality and performance.

Chapter 5: Discussion and Analysis: Returning to the academic background presented in Chapter 2, the research findings are discussed in detail, offering insights into the challenges and opportunities associated with current network architecture and mobile internet protocol.

Chapter 6: Conclusions and Recommendations: In this final chapter, summative conclusions are offered based on the entirety of the collected evidence, and recommendations for future mobile internet routing solutions are provided.

Chapter 2: Literature Review

2.1 Introduction

There is a broad spectrum of academic evidence relating to mobile internet, network architecture, and operational protocol. This chapter seeks to extract the most relevant studies from this wealth of theoretical and empirical findings in order to identify the key conditions and components associated with effective and high performing mobile internet in automobiles. Further, evidence regarding connection dropping syndrome is investigated in order to highlight those deficient characteristics that continue to detract from the overall performance of these various networks. Ultimately, this chapter provides the background findings that will be compared with practical applications of mobile internet routers in vehicular scenarios in Chapter 4. This analysis is designed to not only introduce the academic arguments regarding the functional architecture of mobile routing and its widespread potential applications, but to also compare the principles and practices that have been discussed across a diverse range of technological interpretations.

2.2 The Background of Mobile Automotive Routers

In 2009, emergent technology inspired by an increasing social demand for internet mobility and integrated online resources in automobiles began to make its way to the market. Carney reported on an American-based firm, Autonet Mobile, which viewed the future of integrated mobile wireless as handoff-based through existing cell towers rather than driven by mobile broadband cards. In essence, this proprietary technology leverages a similar communications standard to the 3G and 4G broadband routers that continue to be offered by mobile phone providers AT&T, Verizon, Sprint, and others. Consumer analysis by Autonet determined that over 50% of consumers surveyed reported a desire for internet service in their cars, in comparison with just 16% who were interested in such technologies in the early part of the 21st century. Practical applications of mobile internet routers include direct streaming of navigation tools such as MapQuest and Google Maps to the vehicle, and benefits for business customers which include mail and file transfer capabilities or even online information sourcing. Uconnect Web is the service provider which ultimately links the user through the Autonet router to the internet, offering data speeds that have been reported as comparable to 3G technologies. By default, the broadcast range is around 150 feet from the car, differentiating the flexibility of use of this technology from PAN architecture.

Although the uptake of the Autonet router by such automotive producers as Chrysler and Cadillac was widely publicised, the general public reaction was not necessarily a market-shifting response. In fact, a leading analyst at direct competitor Ford criticised the Autonet router early in its lifecycle, suggesting that many consumers would not see value in investing in technology similar to that which they already pay for on their other mobile devices, especially when it is limited to the architecture of the vehicle. In spite of such predictions, by February of 2009 the Autonet router had received its first award, from Good Housekeeping magazine for Very Innovative Products (VIP), a recognition directly oriented towards this new product's potential value for families in its integration of multiple devices within a single wireless hub. In 2010, Delphus reported significant increases in subscriber statistics, from around 3,000 vehicles in 2009 to over 10,000 by mid-2010, the direct result of strategic partnerships with such rental car giants as Avis and continued OEM partnering with Chrysler, GM, Volkswagen, and Subaru. In spite of the more commercial value of this concept, what is most relevant to the scope of this investigation is the proprietary handover management technology that has emerged in the Autonet operating protocol. In fact, Delphus reports that because of contractual partnering with multiple wireless telecom providers, Autonet is able to maintain consistent web streaming with very minimal ‘between tower’ signal fading in urban spaces. Considering that handover processing and seamless transfer of addresses between towers is one of the technologies developed under the NEMO project previously introduced by Lorchat et al., the commercial value of such initiatives could potentially be expanded to include a much more integrated traffic architecture and communication network.

In his exploratory evaluation of NEMO as a handover framework for mobile internet routing (particularly in multi-nodal vehicular applications for traffic navigation/communication), Ernst highlights particular challenges with maintaining quality of service under mobile conditions. In particular, he recognises that addresses must be topologically correct, involving specific language designed to interface with a particular tower, an ability to change the IP subnet, and ultimately a change of location and routing directive. In order to maintain sessions and quality of service, Ernst introduces a communicative architecture based on a bi-directional tunnel between the home agent (HA) and the mobile node (MN), a connection which must remain dynamic and automatic whilst receiving bandwidth allocation from the access network. In particular, such early work on the NEMO architecture established specific performance requirements which included permanent and uninterrupted access to the internet, the need to connect simultaneously to the internet via several access networks, and the ability to switch to the best available access technology as needed. By default, this flexible architecture provides the following predicted benefits:

Redundancy which reduces link failures that arise in mobile environments

Ubiquity which allows for a wide area of coverage and permanent and uninterrupted connectivity

Flexibility that receives specific policies from users/applications and price-oriented competition amongst providers

Load sharing to efficiently allocate bandwidth, limiting delays and signal losses

Load balancing

Bi-casting

The value of the NEMO protocol is that it allows for shifting points of attachment in order to achieve optimal internet connectivity. When a mobile node is on a foreign network, it obtains a local address termed a Care-of Address (CoA), which is then sent to the home agent for binding. Once the binding is complete, the HA ‘intercepts and forwards packets that arrive for the MN to the MN’ via the ubiquitous tunnel to the CoA. It is this binding and re-binding of different CoAs during mobility that ultimately allows for improved QoS, restricting the number of dropped connections and maintaining persistent internet connectivity in all areas where cell towers can be accessed. Within this architecture, binding updates are used to notify HAs of a new CoA, whereby the HAs send a binding acknowledgement that may be either implicit (no mobile network prefix option) or explicit (one or more mobile network prefix options). It is the underlying use of the IPv6 architecture which Moceri argues allows for more efficient tunnelling and more consistent security than IPv4 options, due to IPSec, the tunnelling mechanism, and the optional foreign agent usage.
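
A highly simplified sketch of this binding idea is given below; it is illustrative only and is not an implementation of the NEMO Basic Support Protocol or Mobile IPv6. The addresses are invented, and the home agent is reduced to a dictionary that maps a mobile node's home address to its current CoA.

# Simplified, illustrative sketch of the binding idea described above; not the
# NEMO Basic Support Protocol itself. The home agent maps a mobile node's
# permanent home address to its current care-of address (CoA) and forwards
# traffic addressed to the home address through that binding.

class HomeAgent:
    def __init__(self):
        self.binding_cache = {}  # home address -> current care-of address

    def binding_update(self, home_address, care_of_address):
        """Register or refresh a binding and return an acknowledgement."""
        self.binding_cache[home_address] = care_of_address
        return {"type": "binding_ack", "home_address": home_address}

    def forward(self, destination_home_address, packet):
        """Tunnel a packet to the mobile node's current point of attachment."""
        coa = self.binding_cache.get(destination_home_address)
        if coa is None:
            return None  # no binding: the packet cannot be delivered
        return {"tunnel_to": coa, "payload": packet}

ha = HomeAgent()
ha.binding_update("2001:db8::1", "2001:db8:aaaa::7")   # vehicle attaches to a new access network
print(ha.forward("2001:db8::1", b"hello"))
ha.binding_update("2001:db8::1", "2001:db8:bbbb::3")   # handover: a new CoA re-binds the same home address
print(ha.forward("2001:db8::1", b"hello again"))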

2.3 Mobile Routing and Network Architecture

One of the more recent evolutions of the mobile routing protocol is based on NEMO (Network Mobility), an architecture that is designed to flexibly manage a single or multiple connections to the internet, even during motion. Based on the standardisation of protocol and architectural features by the IETF in recent years, NEMO is quickly becoming a viable means of extending internet services, diversifying online communication, and establishing a mobile link between variable nodes. In their recent analysis of this architecture, Lorchat et al. suggest that IPv6 was designated as the best fit solution to the network mobility problem, allowing the mobile router to change its point of attachment to the IPv6 internet infrastructure whilst maintaining all current connections transparently. The authors introduce a model-in-development application of the NEMO architecture, suggesting that a singular home agent would act as a maintenance and exchange module, retaining information regarding permanent addresses of mobile routers, temporary addresses, and mobile network prefixes. The primary challenge associated with intra-vehicular mobility of an internet connection in this particular model is that the automobile needs to perform handovers between wireless access points. Although such research is valuable from an early architectural standpoint (i.e. 2006 technology), the accessibility of wireless technology provided over mobile telephony suites via 3G and 4G technology is far advanced from a point-to-point handover protocol.

In a more in-depth review of the NEMO technology, other researchers have endeavoured to identify the key limitations and opportunities associated with particular orientations and architectural standards. Chen et al., for example, based their research on the viability of applying NEMO BSP within public transportation in order to provide mobile internet for all passengers. This research is extremely valuable for the development of effective router protocol in the future, as the authors propose that in order to overcome the multihoming problem (i.e. the need to access multiple types of networks in order to reduce downtime and connection dropping), multiple routers, each equipped with just one type of interface, could be linked to improve quality of service. For their research, the mobile router is equipped with WLAN, GPRS, and CDMA interfaces simultaneously, an inter-interface handover algorithm is proposed for the signal exchange, and performance during handover is measured and analysed. To accomplish such network architecture, the authors needed to introduce multiple CoA registration, under which bi-directional tunnels could be established for each of the three networks without having to identify one network as primary over the others. Post-analytical conclusions suggest that MIPv6 and NEMO BSP are inappropriate for ‘delay sensitive applications due to handover latency of more than 1.5s’; however, multiple interfaces and different internet service providers can offer a means of transferring traffic smoothly from one interface to another.
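
The multihoming idea behind this setup can be illustrated with the toy sketch below; it is not Chen et al.'s inter-interface handover algorithm, merely an assumed selection rule that steers traffic to the lowest-latency interface currently in coverage and re-evaluates the choice as measurements change.

# Toy illustration of multihoming across several interfaces; not Chen et al.'s
# algorithm. Each interface reports whether it currently has coverage and a
# measured round-trip latency; traffic is steered to the best available one.

def select_interface(interfaces):
    """Return the name of the lowest-latency interface that has coverage."""
    available = [i for i in interfaces if i["up"]]
    if not available:
        return None
    return min(available, key=lambda i: i["latency_ms"])["name"]

measurements = [
    {"name": "WLAN", "up": True, "latency_ms": 40},
    {"name": "CDMA", "up": True, "latency_ms": 180},
    {"name": "GPRS", "up": True, "latency_ms": 600},
]
print(select_interface(measurements))   # WLAN while in hotspot coverage

measurements[0]["up"] = False           # vehicle leaves WLAN coverage
print(select_interface(measurements))   # traffic shifts to the CDMA interface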

2.4 Alternative Schemes and Personal Access Networks

In spite of a more narrowed broadcast scope, wireless personal access networks (WPANs) are increasing in popularity, basing short range wireless communications on two distinct standards: IEEE 802.15.3 (High-Rate WPAN) and IEEE 802.15.4 (Low-Rate WPAN). Accordingly, WPANs are defined around a limited-range personal operating space (POS) that is traditionally extended up to 10m in all directions around a person or object, whether stationary or in motion. LRWPANs are typically characterised by a limited data transmission rate of between 20 kb/s and 250 kb/s, requiring only minimal battery power and providing a transfer service for specific applications including industrial and medical monitoring. Conversely, HRWPANs offer a much higher rate of data transmission, from 11 Mb/s to 55 Mb/s, and are suitable for the transmission of real-time video or audio, providing the foundation for more interactive gaming technologies. In HRWPAN protocol, the formation, called a piconet, requires a single node to assume the role of the Piconet Coordinator (PNC), which is designed to synchronise other piconet nodes, support QoS, and manage nodal power control and channel access control mechanisms. Node functionality in the piconet architecture is defined as follows:

Independent Piconet: Stand-alone HRWPAN with a single network coordinator and one or more network nodes. Network coordinator transmits periodic beacon frames which other network nodes use to synchronise and communicate with network coordinator.

Parent Piconet: HRWPAN that controls functionality of one or more piconets. Manages communication of network nodes and controls operations of one or more dependent network coordinators.

Dependent Piconet: Involves a ‘child piconet’ which is created by a node from a parent piconet to extend network coverage and/or to provide computational and memory resources to the parent.

The value of the PAN architecture is based on its high mobility and innate flexibility, allowing single devices to operate as mobile routers and provide internet access to multiple devices. Moceri predicts that by integrating NEMO protocol into PAN network architecture, it is possible to use a particular device such as a mobile phone to provide continuous access to a variety of other devices. Ultimately, the future of this technology is directly linked to the inherent efficiencies associated with network operations and architecture; Ali and Mouftah reiterate that in order to maximise PAN uptake in the future, a variety of protocol-based concerns must be remedied and transmissions should become increasingly efficient. One instance of inefficiency identified by their empirical analysis is a threshold value for the exchange of packets that, once violated, results in an accelerated rate of rejection. This is a serious concern that must be addressed through design and development of the PAN standard.

2.5 Summary

This chapter has introduced the background concepts associated with mobile wireless internet in modern automobiles. In spite of the fact that the market is limited to just one strong and integrated firm, it is evident that over the long term, opportunities for competition and alliance from other providers and service agencies are increasing. Consumers continue to demand additional connectivity and an increased standard of internet access. Unprecedented potential for redefining the future of internet mobility is currently manifesting itself throughout this industry, and as such leading agencies as the IETF continue to expand their investigative processes, the expectation of advancement is rampant. Ultimately, one of the first challenges that must be addressed within this field is that of handover technologies, an area of mobile internet that involves the majority of performance-based losses. By focusing on such key transitional periods in the access process, the opportunity for systemic optimisation will be greatly enhanced. The following chapter will provide background regarding the research methods that were employed in the analysis and discussion of practical handover problems and their review in this field.

Chapter 3: Research Methodology

3.1 Introduction

This chapter presents the research methods that were employed in the collection and analysis of evidence regarding the viability of mobile routing technologies across intra-vehicular applications. Focusing on an academic precedence in this field as well as guidance from theorists focusing specifically on data collection methods and analytical techniques, background is offered to validate the methodological decisions made over the course of this process. Ultimately, specific evidence regarding research architecture and the various components integrated into the research process will be addressed, as well as particular, strategic and incidental limitations affiliated with the focus of this study and a multi-stream analysis of complex data.

3.2 Research Methods

The majority of research in this field focuses on case study evidence in which network architecture, internet protocol, and various limitations and opportunities are investigated via case study examples. Chen et al., for example, utilised three different mobile routers in order to investigate handover behaviour and network performance in a mobile vehicular network. Such experimental data serves to validate best fit programming and architectural features, measuring handover time for packets of information across different conditions, including between GPRS and CDMA and the MR in NEMO BSP. Although the value of such analysis was recognised early in this research process, the focus of this analysis is to differentiate between hard and soft handover architecture, a condition that can be evaluated within the context of existing technology. Therefore, the experimental research method was determined to prescribe too wide a scope of research for this study and was eliminated from the available options.

Other academics have leveraged the past theories and studies of other empiricists in order to conceptualise the foundations of a future defined by mobile vehicular internet connections. Gerla and Kleinrock, for example, explored a variety of different concepts in inter-vehicle communications (IVCs) and their applications in hypothetical transportation system architecture. Such research involved content analysis of past studies in which empirical findings and theories are cited as a means of predicting future adaptation and adjustment within the global architecture. Based on the research model presented in this study, it was determined that a comprehensive review of leading theories and findings in this field that were directly linked to the aims and objectives of this research would be a valuable research methodology.

Based on the review of past academic methodologies, a comprehensive content analysis of recent findings from empiricists and academics in this field was determined to provide a best fit research methodology. Krippendorff argues that analytical constructs ‘ensure that an analysis of given texts models the texts’ context of use’, limiting any violations regarding what is known about the conditions surrounding the text. Due to the complexity and technological variability of this topic, it was important to restrict interpretation of the findings and academic perspectives to their relative context, the foundations of which were ultimately defined early in the reports. From the application of mobile routing in pedestrian circles to vehicular mobile routing for public transportation purposes, the context was determined to be a driving factor in the protocol and architecture chosen for handover schemes in mobile internet connections. A total of six unique studies were identified as directly relevant to the investigation of soft handover technology and applications, and the general findings from these studies were then extracted and integrated into the following chapter. This data is directly relevant in predicting evolution in this industry and detailing opportunities for integrating soft handover technologies in order to optimise system performance in the future.

3.3 Ethical Concerns and Limitations

The evidence presented in the content analysis was all extracted from journal publications that are widely available to the public through multiple databases and online retrieval sites. Therefore, it was determined that, based on this research method, there were no ethical concerns relating to the data. There were, however, limitations imposed on the scope of the studies researched in order to ensure that the focal point of these analyses was directly oriented towards handover protocol and mobile routing architecture. The imposition of such limitations proved valuable because they allowed the research to be focused on specific conditions, outcomes, and opportunities regarding this topic that will be extremely relevant in future developments in this field.

3.4 Summary

This chapter has presented the research methods that were employed in the collection and analysis of secondary evidence regarding this widely debated topic. Recognising that inconsistencies in the review of one or two studies could result in innate research bias, six different studies were chosen from varying areas of focus in mobile routing technology. The findings are presented and discussed in direct relation to their independent context, with the exception of a few correlations that were drawn in order to link concepts and industry standards. The following chapter will present the findings from this content analysis in detail.

Chapter 4: Data Presentation

4.1 Introduction

This chapter presents a broad spectrum of academic theories, evidence, and predictions regarding the evolution of the mobile internet architecture. Whilst oriented towards the application of this technology in modern automobiles, the findings from a review of leading theorists in this field have demonstrated that the concept of handover management and strategic redefinition in mobile networks transcends the limited scope of this problem. Therefore, although current routing systems available in the marketplace may integrate different technologies or architecture than those discussed here, the focus of this research is ultimately on the evolution of the mobile handover between access points from a hard, delay-limited process to a soft, dynamic and integrated process.

4.2 The Current Problem

The modern consumer demands immediacy in all aspects of their life, from food procurement to entertainment to communication. As the heterogeneous architecture of an integrated, internet-oriented society continues to affect product choices and consumer values, the notion of a functional, high performing vehicular router has quickly become integrated into several leading automotive producers in the past several years. Labiod et al. define a mobile router as ‘a mobile node which can change its point of attachment to the internet and then its IP address’. Similarly, mobile networks involve a ‘set of hosts that move collectively as a unit’, leveraging a mobile router to perform gateway tasks and prov

Semiotic Framework Method at CSA

This work was produced by one of our professional writers as a learning aid to help you with your studies

Children Support Agency Case Study
Introduction

This report examines the use of the Semiotic Framework method, and its potential benefit to system developers, as applied to the case study information supplied regarding the Children Support Agency (CSA).

Analysis of Semiotic Framework

The framework, as described by Kangassalo et al (1995) (1), refers to the work of Stamper (1987) (2) as it applies to information systems, and distinguishes four levels of properties: empirics, syntactics, semantics, and pragmatics. This is likened to a semiotic ladder running from the physical world to the social (business) world.

The semiotic framework consists of two main components, these being the human information functions and the IT platform considerations. Each is split into three sub-components.

Social World, developer activities would be:

To determine how best to match the negative responses of some staff to new technology with the high expectations of others, by designing a system which takes account of both. To ensure the legal aspects, such as compliance with the Data Protection Act (DPA) (3), are addressed. To ensure contractual information is protected in transmission. To meet the cultural standards held by those who work in an organisation whose purpose is to support disadvantaged young people.

Issues are:

Lack of computer literacy among some CSA staff, and the fact that its status as a charity will probably restrict the funding available for the system. Feelings of protection for financial data versus the lack of (apparent) concern voiced about the personal data of vulnerable young people. The wish to accommodate training in IT for young people, without concern that this may give any who have anti-social tendencies the access needed to affect the overall operation of the system. The lack of realisation that today's young people in the age range 12 to 24, whether from a deprived or difficult family background or not, may be conversant with the use of computers.

_________________________
1. Kangassalo et al, (1995), p 358.
2. Stamper et al, (1987), p 43-78.
3. Data Protection Act 1998.

Pragmatics, developer activities would be:

To attempt to resolve the conflicting attitudes expressed in conversation about the value of the system, and to consider capital and revenue funding for the new system.

Issues are:

To determine how the system would be supported, and who would be responsible for that support.

Semantics, developer activities would be:

To model how the syntactic structures, which are by nature the technical concerns, are matched to the semantics, which concern the real world, in a machine-independent manner.

Issues are:

Matching security concerns, which are people-related, with system issues, which are software-dependent.

Syntactics, developer activities would be:

The formalisation and documentation of the system specification, and the outlining of the programming requirements. This is the bridge between the conceptual and the formal rules governing system development.

Issues are:

The documentation may only be understood by the IT people who create the system.

Empirics, developer activities would be:

To estimate the number of data fields required, their volume, the speed with which they require to be transmitted, and the overall performance as perceived by the user.

Issues are:

Limited information available, combined with inability of potential users to express these attributes.

Physical World developer activities would be:

To analyse existing systems, networks, hardware and software; to estimate storage and data retention requirements; and to assess the physical condition of the room housing system equipment and communications, the power supply, entrance restrictions to sensitive areas, policy on removal of media from buildings, printout handling, and access by young people to IT equipment.

Issues are:

Replacement of existing communication links, introduction of encrypted traffic, offsite storage of backups, disaster recovery, software licences, fire detection and suppression, volumes of data transmitted and stored, and the separation of young people's IT equipment.

System requirements specification

Hass et al (2007) (4) explain requirements analysis and specification as the activities involved in assessing information gathered from the organisation regarding the business need and the scope of the desired solution. The specification is the representation of the requirements in the form of diagrams and structured text documents.

Tan (2005) (5) describes the use of a rich picture as ‘a structural’ as opposed to a ‘pictorial’ description. It allows practitioners to use any form of familiar symbols to depict activities and events, while taking conflicting views into consideration.

The definition of a use case (Seffah et al 1999) (6) is a simplified, abstract, generalised use case that captures the intentions of a user in a technology- and implementation-independent manner. Use case modelling is today one of the most widely used software engineering techniques to specify user requirements. Dittrich et al (2002) (7) suggest a new approach to development, which they term ‘systems development as networking’. They go on to suggest that the key question to ask is ‘How do systems developers recruit and mobilise enough allies to forge a network that will bring out and support the use of the system?’

Unified Modelling Language (UML) is described by Arrington and Rayhan (2003) (8) as a method employing use case and activity diagrams for requirements gathering. They state that the use case diagram serves as a view of the overall use of the system, allowing both developers and stakeholders to navigate the documented requirements.

_________________________
4. Hass et al, (2007), p 4.
5. Tan (2005), p 67.
6. Seffah et al (1999), p 21.
7. Dittrich et al, (2002), p 311.
8. Arrington and Rayhan, (2003), p 28.

Rich Picture

(Rich picture not reproduced here: it depicts the people and activities involved in both the current system and the future system.)

Use Case Diagram

See Appendix A

Primary Scenario

The likely outcome when the project specification is delivered is that the funding body will agree to the bid, but subject to some changes that reduce the overall cost.

This will involve a degree of compromise in the design of the new system. Suggestions may be made to re-enter the Excel data and to delay the phasing out of the financial system.

This would mean a phased project with an all-encompassing solution left to a later stage.

The impact may be additional effort on the part of CSA staff.

The system needs to be delivered in phases, with core functionality first. The successful delivery of core components will assist acceptance.

A key component is the security of information stored and transmitted, as much of it is of a sensitive, personal nature. The protection of information will require conforming to the requirements of the DPA (1).

Because of the number of area offices, each with few staff, the data repository will need to be centralised, probably at HQ. This simplifies backups, which should consist of a weekly full backup stored offsite and daily incremental backups, with the most recent day’s backup stored on site.
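As a rough illustration of how such a backup rotation might be scheduled, the following Python sketch chooses between a weekly full backup and daily incrementals. The directory paths, the use of tar, and the Sunday-for-full rule are assumptions made purely for illustration; the case study does not specify any tooling.

import datetime
import subprocess

# Hypothetical locations; the case study does not name the data repository or
# the backup target.
DATA_DIR = "/var/csa/data"
BACKUP_DIR = "/backup/csa"

def backup_type(today: datetime.date) -> str:
    """Full backup once a week (Sunday assumed), incremental every other day."""
    return "full" if today.weekday() == 6 else "incremental"

def run_backup(today: datetime.date) -> None:
    kind = backup_type(today)
    archive = f"{BACKUP_DIR}/{today.isoformat()}-{kind}.tar.gz"
    cmd = ["tar", "-czf", archive, DATA_DIR]
    if kind == "incremental":
        # The snapshot file lets tar archive only files changed since the last run.
        cmd = ["tar", f"--listed-incremental={BACKUP_DIR}/snapshot.snar",
               "-czf", archive, DATA_DIR]
    subprocess.run(cmd, check=True)
    # The weekly full archive would then be copied offsite; the most recent
    # incremental stays on site, as suggested in the scenario above.

if __name__ == "__main__":
    run_backup(datetime.date.today())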

Communications between HQ and the branches will need to be encrypted, and e-mail will require protected Internet access.

Anti-virus, anti-spyware, anti-phishing and spam-filtering software will be required, and a firewall will be introduced between the Internet-facing component and the main system.

Strict field input validation will be required to prevent erroneous numbers or characters.
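A minimal sketch of the kind of rigid field validation intended here, assuming (purely hypothetically) a Python front end; the field names and rules below are illustrative and are not drawn from the case study:

import re

# Hypothetical field rules: each field maps to a pattern the input must match in full.
FIELD_RULES = {
    "client_id": re.compile(r"^\d{6}$"),             # exactly six digits, as an example
    "postcode": re.compile(r"^[A-Za-z0-9 ]{5,8}$"),  # simple UK-style postcode shape
    "amount": re.compile(r"^\d+(\.\d{2})?$"),        # whole pounds, or pounds and pence
}

def validate(field: str, value: str) -> tuple[bool, str]:
    """Return (ok, message); the message is advisory rather than an abrupt error."""
    rule = FIELD_RULES.get(field)
    if rule is None:
        return False, f"Unknown field '{field}'."
    if rule.fullmatch(value.strip()):
        return True, "OK"
    return False, f"'{value}' is not a valid {field}; please check it and re-enter."

# Example: validate("amount", "12.5") returns (False, "'12.5' is not a valid amount; ...")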

Menus will be restricted so that users can access only selected functions and are denied the rest, while Admin-level (privileged) users will be able to access all menus.
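The menu restriction described above could be expressed as a simple role-to-function mapping, sketched below in Python; the role names and menu items are assumptions, not part of the case study:

# Hypothetical roles and menu items; Admin sees everything, other roles see a subset.
ALL_MENUS = {"clients", "finance", "reports", "user_admin", "backups"}

ROLE_MENUS = {
    "admin": ALL_MENUS,                         # privileged access to all menus
    "finance_officer": {"finance", "reports"},
    "caseworker": {"clients", "reports"},
}

def menus_for(role: str) -> set:
    """Return only the menu items the given role is permitted to see."""
    return ROLE_MENUS.get(role, set())

# Example: menus_for("caseworker") returns {"clients", "reports"}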

The training for IT clients will need to be on a separate network segment from the main systems.

Compatibility between the existing financial system and the new system will need to be established, and the system will require the capability to import Excel data.

The system will be required to replace the functionality currently provided by the Excel spreadsheets.
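To illustrate what the Excel import capability might look like, a minimal sketch follows. It assumes the openpyxl library is available, that the first spreadsheet row holds column names, and that SQLite stands in for the eventual database; none of these choices is stated in the case study.

import sqlite3
from openpyxl import load_workbook  # assumed third-party dependency

def import_excel(xlsx_path: str, db_path: str, table: str = "legacy_records") -> int:
    """Copy rows from the first worksheet into a SQLite table; returns the row count."""
    wb = load_workbook(xlsx_path, read_only=True)
    ws = wb.active
    rows = ws.iter_rows(values_only=True)
    header = next(rows)                              # first row assumed to hold column names
    cols = ", ".join(f'"{name}"' for name in header)
    placeholders = ", ".join("?" for _ in header)
    con = sqlite3.connect(db_path)
    con.execute(f"CREATE TABLE IF NOT EXISTS {table} ({cols})")
    count = 0
    for row in rows:
        con.execute(f"INSERT INTO {table} ({cols}) VALUES ({placeholders})", row)
        count += 1
    con.commit()
    con.close()
    return count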

Questions
Developer questions to CSA staff:

How much funding is available for the proposed system and who are the stakeholders?

What facilities for computer systems exist at HQ: power, space, fire suppression, telecoms, operating staff, storage required, and records retained?

Who will support the new system when delivered?

What configuration does the finance system have: hardware, operating system, application software, network links, storage, number of users, support?

Will staff time be available for training?

Will only CSA staff use the new system and will they use it from home?

Will there be allocation of CSA staff for user acceptance?

Discussion of requirements analysis tools

The usefulness of the semiotic framework is that it offers the system developer an insight into the attitudes and feelings of the people who will use the proposed system. This aids the developer, who is then more likely to pay attention to the human-computer interface (HCI) aspects of the system. This should, if properly delivered, make the new system easier to use and, consequently, received with more enthusiasm than might otherwise be the case. A key message, that core aspects should be delivered first rather than the full functionality required, may also win more converts. The use of the Semiotic Framework also revealed the attitude of some of the staff, who see the requirement for the new system as superfluous to their ‘real’ work and consequently wish no contact with it as they are too ‘busy’. This helps the developer, as it brings home the need to employ techniques that make the system simple to use and not forbidding in terms of the error messages it may produce when used by the inexperienced.

What the round of interviews in the case study revealed was some conflicting attitudes among CSA staff. A key example was the mention of the need to protect financial information, while the requirement to protect the personal data of the CSA’s clients, some of whom may have criminal records, was not mentioned. Given that failure to protect this type of information could lead to more harm to the individual than any help they may receive from the CSA, this is cause for concern, and seems to indicate that some of the CSA staff have lost sight of the organisation’s mission.

The interview process that produced the case study report yielded a lack of the vital information any system developer would require to produce a workable system. Basic items were not uncovered: for example, there is no information on the number of users, estimates of the amount of data to be stored, how long it is to be retained, or what kinds of systems are currently in use. The availability of capital and revenue funding was not established, and it may well be that funding will depend upon the capital and operating costs of the proposed design.

The use of the rich picture and use case diagram illuminates the overall view of the required system, allowing the developer and the recipients of the system to see the whole picture and gain a better understanding of the likely finished product. It also simplifies the dependencies and collaboration required in a pictorial form, which makes the ‘big picture’ easier to understand.

The importance of the Semiotic Framework is that it helps shed light on areas which the developer, using traditional systems development methodologies, may neglect. It concentrates the mind on the human-computer interface required, and influences the design attributes which need to be built in to gain user acceptability. Taking the step-wise approach down through the levels brings home to the systems developer the need to start with the social needs, which focus on the human aspirations (or otherwise) for the proposed system. Working through the Pragmatics is very revealing of the contradictory attitudes of the potential users in conversation, and should lead the developer to make compromises between technical elegance in the design and obtaining a favourable reaction from at least a majority of the eventual users. The scope of the system to be developed was not revealed during the case study, which impedes the developer’s ability to estimate the size and nature of the hardware and software required.

The Syntactics level assists the design in that it forces concentration on the logical handling of data input, with the system’s response to incorrect entry being handled not with abrupt error messages, but with friendlier advice messages and suggestions for re-entering the data. This tracks back to the importance of human reaction learnt from the Social World level. The choice of software should be influenced by the Pragmatics level, in that it should reflect the fact that the CSA is a charity, and both hardware and applications should be in the affordable range for an institution dependent upon charitable funding.

The Empirics portion of the framework should include the estimation of required system performance, the speed of telecommunications, the volume of data to be stored, and the response times of the system. In the CSA case study there is no information which can be used to project such requirements, so the developer would be required to make an educated guess, based only on the existing finance system, which can be measured, or on practical experience. Some of the required information may be gathered by contacting whichever vendor delivered the existing finance system. The framework also draws attention to peripheral items, such as the Excel spreadsheet, which may well contain valuable data that is not subject to strict input criteria and is possibly not backed up.
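As a worked example of the kind of educated guess described above, the arithmetic might run roughly as follows; every figure (offices, users, record counts, record size, retention period) is an assumption, since the case study supplies none of them.

# All figures below are illustrative assumptions, not data from the case study.
offices = 10                    # HQ plus area offices
users_per_office = 4
records_per_user_per_day = 50
bytes_per_record = 2_000        # roughly 2 KB of structured text per record
working_days_per_year = 250
retention_years = 6             # assumed retention period

records_per_year = offices * users_per_office * records_per_user_per_day * working_days_per_year
storage_per_year_mb = records_per_year * bytes_per_record / 1_000_000
total_storage_gb = storage_per_year_mb * retention_years / 1_000

print(f"Records per year: {records_per_year:,}")                      # 500,000
print(f"Storage per year: {storage_per_year_mb:,.0f} MB")             # 1,000 MB
print(f"Storage over retention period: {total_storage_gb:,.1f} GB")   # 6.0 GB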

The Physical World portion of the framework focuses the developer’s mind on what will be acceptable to the users in terms of speed of response, the time and effort potentially saved, and the information-reporting capabilities of the system. It emphasises that there need to be demonstrable benefits in the way of management information, and therefore capability to respond, which would otherwise have been unavailable. From a system developer’s point of view, this is probably the section he/she would feel most comfortable with, as it consists of tangibles, which can be translated into MIPS, baud rates, gigabytes, and other terms with which IT developers are expected to be completely conversant.

Probably the most difficult aspect of the framework for the developer is the Semantics level. The reason for this is that it tends towards the abstract, and system developers, as a breed, operate mostly in a practical, exact, measurable fashion. They act as a translator between the business requirements as expressed by the stakeholders and eventual users, and the technical people who deliver the code, hardware and communications to realise the stated needs. The developer has to perform a balancing act between what are sometimes conflicting requirements and technical possibilities. This requires the ability to converse with, and understand, both participants in the overall project to deliver the required system. The use of the Semiotic Framework leads the developer to address these issues and attempt to develop a clear understanding of the CSA’s business activities, as opposed to trying to force-fit them into a prejudged idea of the system.

The developer may reflect that the application of the Semiotic Framework forces undue attention on the people-related aspects of systems engineering, to the detriment of a design which embodies good technical practice and the protective measures required to comply with any legal obligations. Against this, the developer’s aim of attaining elegance and efficiency in design may be meaningless to the users of the system, whose main concerns are assistance in capturing information, its ease of retrieval and the management information it can produce; in short, how it can help improve the users’ work practices and make life easier for them.

References

Arrington, C.T., Rayhan, S.H., (2003), Enterprise Java with UML, Second Edition, Wiley Publishing, Inc., Indiana, USA, p 28.

Clarke, S., Elayne, (2003), Socio-technical and Human Cognition Elements of Information Systems, Idea Group Inc, p 8.

The Data Protection Act
Available from: http://www.ico.gov.uk/what_we_cover/data_protection.aspx.

Hass, K.B., et al, (2007), Getting it Right, Management Concepts, p 4.

Kangassalo, H.,et al, (1995), Information Modelling and Knowledge Bases, IOS Press, p 358.

Seffah, A. et al, (2005), Human-Centered Software Engineering –Integrating Usability in the Software, Springer, p 21.

Stamper, R. et al, (1987), Critical Issues in Information Systems Research, Wiley, Chichester, pp 47-78.

Tan, J.K., (2005), E-health Care Information Systems, John Wiley and Sons, p 67.

Tipton, H.F., Krause, M., (2007), Information Security Management Handbook, 6th Edition, CRC Press.

MIS Within Security Department During London 2012 Olympics

This work was produced by one of our professional writers as a learning aid to help you with your studies

This report analyses the need and the reasoning for a management information system (MIS) for the security department during the London 2012 Olympics. It looks at the functions of the security department and how they will benefit from an effective management information system. Furthermore, the report discusses how management information systems are used for decision making and the importance of implementing such systems within any organization.

Executive summary

One of the most fundamental functions in any organization is the decision making process. Considering the economy we face today, many organizations have come to appreciate the importance of being able to challenge competitors, gain advantages and make intelligent use of their resources. The core element of this is the process of making decisions. Information can be central to achieving management goals successfully. To facilitate decision making it is imperative that managers have the correct information at the correct time, to overcome the gap between needs and prospects. Furthermore, to aid improvements in the communication of information, adequate management information systems (MIS) are indispensable. Thus it is vital to have an appreciation of the management information systems used in an organization, and to have effective integration by all levels of management. It is only then that there will be effective, profitable and constructive decision making.

Terms of reference

On the instruction of the senior manager, the security department was asked to evaluate and analyse its requirements for the duration of the London 2012 Olympics. Detailing the importance of the information required, and what information will be required, plays an important role in reporting back to the senior manager.

Introduction

Regardless of its nature, every organization is filled with information. The information content of organizations is what makes the business function. The role of information in an organization is crucial. Information is important in allowing an organization to plan, control, measure performance, record movements and assist with decision making. Management information systems are the conversion and collaboration of this information, from both internal and external data sources, into a format which is easily communicated and understood by managers at all levels. Ensuring that information is well structured and effectively stored allows ease of access and timely and effective decision making. Larry Long and Nancy Long (2005, p. 370) describe an information system as follows: “a system is any group of components (functions, people, activities, events, and so on) that interface with and complement one another to achieve one or more predefined goals” (Donald, 2005). An information system may also be considered a generic reference to a technology-based system that does two things: it provides information processing capabilities and it provides the information people need to make better, more informed decisions (Donald, 2005).

Management information systems are the result of a combination of internal and external sources; they provide a means by which data/ information can be easily manipulated, stored, amended etc. Furthermore, management information systems coalesce all the essentials which assist in the decision making process.

Security is by no means limited to any one aspect of an organization, particularly when one considers an event as large and as globally involving as the London 2012 Olympics. For any organization, security covers the physical security of those involved, the security of buildings and offices, and the security of information technology, both physical equipment and cyber security. Assistant Commissioner Chris Allison released a brief on the security issues and concerns surrounding London 2012; his brief included all the ordinary security concerns, such as terrorism and petty crime, but also the danger of online ticket scams, potential protesters hijacking Olympic websites, and the more sinister criminals (Hervey, 2010).

The overall vision for the London 2012 Olympic Games and Paralympic Games, agreed by the Olympic Board, is ‘to host an inspirational, safe and inclusive Olympic Games and Paralympic Games and leave a sustainable legacy for London and the UK’ (London2012, 2010). In order to achieve this, there are many threats, and many angles from which threats can arise, which need to be taken into consideration. Furthermore, in order to manage and ensure security, the information systems implemented must allow for effective decision making prior to the event and, most importantly, in the event of an untoward happening.

Findings and analysis

The security department cannot be limited to one specific function. The security department, especially for the London 2012 Olympics, will be involved in handling many aspects of potential threats to the people and systems involved in the Olympics. There are two primary areas for which the security department will be responsible: firstly, cyber security, and secondly, the security of the public.

Cyber security

As technology, and its uses and abuses, expands at a hasty rate, so does the level of threat faced by organizations and their information systems. Information technology forms an important feature of the information systems in place today. Information systems define what needs to be done or managed, and the information technology aspect is how this is done. Therefore, technological advancements and the increase in their abuse are a major threat where the London 2012 Olympics are concerned.

A case study by students of the Pennsylvania State University looked into some of the major IT threats which organizations face. These included wireless network security, cryptography, access control, privacy control, risk management and operating system security, including server threats, vulnerabilities and firewalls. These are just a handful of examples (Bogolea & Wijekuma, n.d.). Amongst and beyond these examples are many others which are an obvious cause for concern for the London 2012 Olympics.

For any organization it is imperative to exercise control over its computer-based information systems. London 2012 needs to ensure that the computer-based systems, and those which rely on IT, are protected from threats, as the cost of errors and irregularities that may arise in these systems can be high and may even challenge the very survival of the event. An organization’s ability to survive can be severely undermined through corruption or destruction of its database; decision making errors caused by poor-quality information systems; losses incurred through computer abuses; and loss of computer assets and of control over how computers are used within the organization (Mandol & Verma, 2004).

Cyber security expert Professor Peter Sommer of the London School of Economics warned that computer security would be extremely important during the Games (Hervey, 2010). A case study which looks at the tragic deaths of two boys, 18 and 10 years of age, discusses how cyber security was at issue in relation to the gasoline leak from the Olympic Pipelines pipeline (Abrams & Weiss, n.d.). This is an example of the devastation to human life which cyber threats can cause, and when one considers this on the scale of London 2012, the number of people depending on optimum security becomes clear.

In order to combat this threat, information needs to be obtained from both internal and external sources. External information may range from information from professionals in the cyber security industry to information from intelligence agencies. Terrorism is as much of a cyber threat as is the computer virus or any other infection. Information systems will only be able to cope with and combat these threats if they are well informed through risk assessments of potential dangers. Furthermore, in order to overcome any unexpected threats, contingency planning forms an essential element of information systems development.

Risk assessment is an important step in a risk management procedure. Risk assessment is the determination of the quantitative or qualitative value of a risk related to an actual situation and a recognized threat. Maroochy Water Services, Australia, is an organization a world apart from the London 2012 Olympics; however, for the purpose of its cyber security improvement programme, the establishment of risk assessments played a key role (Abrams & Weiss, 2008). This example shows that the importance of risk assessments is by no means limited by industry, size of organization, or any other feature for that matter. Risk assessments provide a means for any organization to help avoid potential threats through prior consideration.

In addition to the information required for a risk assessment, there is the information required for a contingency plan. A contingency plan is a plan of action for when things go wrong; it is a backup plan. In order to overcome any type of disaster, information must be collated into a contingency plan. This would again form an essential part of the information systems, as it would be crucial in the event of a disaster.

People

The London 2012 Olympics will see several thousands of people from all over the globe in London. Amongst the visitors will be athletes, key visitors and reporters. Those visiting, added to those who already reside in the UK, amount to an increase in population, and thus there is a risk of an increase in crime. The crime can range from petty crime to terrorism. The common factor amongst all of it is that people need to be protected.

Security has been a crucial concern at the Olympics since the killing of 11 Israeli athletes and coaches at the 1972 Munich Games. Olympic planners have ramped up security following the September 11, 2001, attacks in the United States (Wilson, 2010). Inefficient management of the people involved in the Olympics, and of the public, can have devastating effects. This is a major concern at a time, and in a city, where terrorism is a real and present threat.

In order to implement information systems which can cope with, and appreciate, the security requirements, information needs to be collated from many sources. First of all, prediction is one of the very first decision making elements which needs to be supported by an information system in this situation. Information will be required regarding the number of athletes expected to be present during the Olympics, statistics from previous Olympic Games on the number of spectators and visitors the host country received, and finally the number of security staff and resources currently available. By means of computer-based prediction and analysis, the ability to protect and serve the public can be achieved. The information system may be used to estimate the number of staff who will be required to patrol the streets, and the number of security staff who will need to be put in place to sufficiently protect the athletes and their trainers. Also, the locations which are expected to be busiest can be recognized, and these will require more staff and a concentration of CCTV cameras.
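A simple illustration of how such a staffing prediction might be computed from prior-Games statistics is sketched below in Python; the visitor figures, staffing ratio and available-staff count are hypothetical placeholders, not Olympic data.

# Hypothetical figures for illustration only; real planning would draw on
# statistics from previous Games and on current resource registers.
expected_daily_spectators = 400_000
expected_athletes_and_officials = 25_000
security_staff_per_1000_people = 5      # assumed staffing ratio
currently_available_staff = 1_500

total_people = expected_daily_spectators + expected_athletes_and_officials
staff_required = round(total_people / 1_000 * security_staff_per_1000_people)
shortfall = max(0, staff_required - currently_available_staff)

print(f"Estimated security staff required per day: {staff_required:,}")   # 2,125
print(f"Additional staff to recruit or redeploy: {shortfall:,}")          # 625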

In addition to predictions, there is the actual information which will need to be included in an information system: information about the number of police officers or security guards in other areas or cities besides London that can assist in providing security. This information may well form a part of the contingency planning.

Where CCTV cameras, access controls, ID badges and the like are concerned, there is a need for information systems to collate and manage all of this information. Systems will be needed to record who accessed which area or building, and at which time, for access/ID cards; CCTV will need to keep a recording of all activities captured; and databases will be needed to log the people working for the period of the Olympics and the athletes. This information will help to deter crime, provide an element of security and protect people.
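One possible shape for the access-event and personnel logging described above, sketched as a SQLite schema in Python; the table and column names are assumptions rather than any specification from the report.

import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS person (
    badge_id     TEXT PRIMARY KEY,
    name         TEXT NOT NULL,
    role         TEXT NOT NULL          -- e.g. athlete, staff, contractor
);
CREATE TABLE IF NOT EXISTS access_event (
    event_id     INTEGER PRIMARY KEY AUTOINCREMENT,
    badge_id     TEXT NOT NULL REFERENCES person(badge_id),
    location     TEXT NOT NULL,         -- building or area entered
    entered_at   TEXT NOT NULL          -- ISO-8601 timestamp
);
CREATE TABLE IF NOT EXISTS cctv_recording (
    recording_id INTEGER PRIMARY KEY AUTOINCREMENT,
    camera_id    TEXT NOT NULL,
    started_at   TEXT NOT NULL,
    ended_at     TEXT,
    storage_path TEXT NOT NULL          -- where the archived footage is kept
);
"""

def init_db(path: str = "security_mis.db") -> sqlite3.Connection:
    """Create the logging tables if they do not already exist."""
    con = sqlite3.connect(path)
    con.executescript(SCHEMA)
    return con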

Conclusions

Information systems come in no set type or standard. They are the collation of several information sets to provide a well-integrated system used to make decisions. The London 2012 Olympics are like no other organization, and are on a scale grander and vaster than most other organizations deal with. It is this grandness, and this large-scale involvement of people, which in turn increases the risks and potential threats. The London 2012 Olympics is an enormous event and is expected to employ several thousands of people, and furthermore to attract several thousand spectators, reporters and others. An effective and accurate management information system is essential in order to ensure that the city hosting the event is able to effectively plan, control, record people and protect systems. Hudson Bank managed to overcome the problems it faced with the adaptation of its information system; some of this was done using off-the-shelf software and the majority through the establishment of customer requirements and communication essentials (Anon., 2008).

The security department is involved with many people and many types of threats, the most important two being securing people and securing systems. Cyber threats can not only damage systems, but can even halt the functioning of the event. In order to avoid this it is important that the potential risks are assessed, that everything that can be done to prevent them striking is done, and that contingency plans are set for action.

Another important aspect is protecting people. In order to do so, more staff would be required: police, community support officers, security guards and so on. This involves a large amount of moving people around, including staff from other cities and new recruits. Therefore it is vital that this information is managed efficiently. The information systems should be able to cope with large numbers of people and provide effective and accurate predictions and decision making results. As with all information systems, the number of information sources will need to be extensive in order to provide optimum results.

Recommendations

Taking into consideration the need for, and scope of, the management information systems for the London 2012 Olympics, particularly the security department’s involvement and requirements, the following recommendations are made:

The security department needs to ensure that all staff involved in using the information systems are fully trained. Any glitch can have dire effects on the rest of the system, and ignoring any warnings of threats can also have horrific consequences. Training is not limited to the staff working with the systems; it is also important that staff working with people are trained to handle large numbers of people, overcome any problems, identify potential threats, and maintain the cooperation of people in the event of a disaster.

Risk assessments and contingency plans should be in place for every aspect of security. Furthermore, all staff should be made aware of both of these documents, particularly the contingency plans. This will help them do their job better and overcome any disasters. Informing staff will provide a more thoroughly aware workforce and help maintain security in the event of a disaster.

References

Abrams, M. & Weiss, J., 2008. Malicious Control System Cyber Security Attack Case Study. Case study. Australia: Maroochy Water Services.
Abrams, M. & Weiss, J., n.d. Bellingham, Washington, Control System Cyber Security Case Study. Case study.
Anon., 2008. Banking on Customer Service. New Jersey: Hudson Bank.
Bogolea, B. & Wijekuma, K., n.d. Information Security Creation. Case study. Pennsylvania: The Pennsylvania State University.
Donald, A., 2005. Mastering Information Management. Prentice Hall.
Fitzpatrick, K., Fujimoto, Y., Hartel, C. & Strybosch, V., 2007. Human Resource Management: Transforming Theory into Innovative Practice. Malaysia: Pearson Australia Group Pte Ltd.
Hervey, L., 2010. Sky News. [Online] Available at: http://news.sky.com/skynews/home/twin-terror-threat-to-London-Olympics-security-expert-warn/article/201003415579707 [Accessed 16 August 2010].
London2012, 2010. London 2012 Sustainability Policy. [Online] London 2012. Available at: http://www.London2012.com/documents/locog-publications/London-2012-sustainability-policy.pdf [Accessed 6 August 2010].
Mandol, P. & Verma, M., 2004. Formulation of IT Auditing Standards. Case study. China: National Audit Office.
Wilson, S., 2010. Yahoo News. [Online] Available at: http://news.yahoo.com/s/ap/20100806/ap_on_sp_ol/oly_London2012_security [Accessed 16 August 2010].


Legal, Ethical and Social Issues on Information Systems

This work was produced by one of our professional writers as a learning aid to help you with your studies

Dissertation Proposal: An Examination of Legal, Ethical and Social Issues on Information Systems

Provisional Title

The Provisional Title of the Dissertation is as follows: “An Examination of Legal, Ethical and Social Issues on Information Systems”.

Brief Review of the Related Literature

We will begin our review of the related literature with a close examination of the literature concerning the definition of Information Systems. A clear definition of the concept of Information Systems is vital, because as Currie shows there is a great disparity between the extents to which clear concepts apply in a field such as chemistry compared with the academic discipline of management. “For example, physical chemists know exactly what they mean by ‘entropy’. Would-be scholars in the management field, on the other hand, have no shared precise meaning for many of their relevant concepts, for example ‘role’, ‘norm’, ‘culture’ or ‘information system’ all these terms are often fuzzy as a result of their unreflective use in everyday chat” (Currie 1999: pp.46). In this passage Currie eloquently sums up the task before us when we attempt to define the concept of Information Systems. The conceptual haziness and lazy use of concepts such as Information Systems, in everyday usage as well as in academic circles, have led to a situation in which providing a clear definition of the concept of Information Systems is a highly complex undertaking. For this reason it is probably not possible to provide a rigid and narrow definition of the concept of Information Systems, because any such definition will be criticised for its inability to incorporate the broad spectrum of features that management scholars understand by the term Information Systems. Many management scholars prefer this approach to the concept of Information Systems, and the approach of Rainer is a clear example of this. She understands the concept of Information Systems to be a broad concept incorporating any number of activities that include the use of information technology to support management operations. “It has been said that the purpose of information systems is to get the right information to the right people at the right time in the right amount and in the right format” (Rainer 2009: pp.10). She looks closely at a range of concepts that fall under the umbrella term of Information Systems and argues that “one of the primary goals of information systems is to economically process data into information and knowledge” (Rainer 2009: pp.10).

The UK Academy for Information Systems agrees with the type of broad definition offered by Rainer and defines Information Systems as “the means by which people and organisations, utilising technologies, gather, process, store, use and disseminate information” (UK Academy for Information Systems 1999: pp.1). It is clear, therefore, that the term information Systems can be used and applied to a wide variety of activities.

Information Systems can denote the interaction between people, data, technology and knowledge and as a result Buckland also argues that a broad definition of the concept is desirable. As he explains, “information systems deal with data, texts and objects, with millions of these objects on endless miles of shelving, in untold filing cabinets, and on innumerable magnetic and optical devices with enormous data storage capacities” (Buckland 1991: pp.69). Buckland goes on to specify one of the most important reasons why a clear and concise definition of Information Systems is so difficult to attain. He argues that “any significant change in the nature or characteristics of the technology for handling the representations of knowledge, facts and beliefs could have profound effects on information systems and information services” (Buckland 1991: pp.69). In other words, Information Systems are likely to be affected by such an enormous variety of factors that a concise definition of the concept will probably always fail to include some important elements of the concept. It is for this reason that it is advisable for the purposes of this investigation to proceed in the same manner as the vast majority of the literature and therefore operate with a very broad and inclusive definition of the concept of Information Systems.

The next challenge that lies before us is to illustrate some of the most salient and prominent legal issues associated with Information Systems. Sacca defines one of the major challenges in the relationship between Information Systems and legal issues when he states that “first of all, the Rule of Law is based on these unavoidable elements, among others: equality and freedom of citizens. How can the legal system put this element into effect in a highly technological society?” (Sacca 2009: pp.29). Sacca argues that legislation governing the use of Information Systems has existed for a long time, stretching back as far as the 1970s, but that such legislation must constantly be updated in order to be able to keep up with the pace of innovation. He therefore proposes, for example, a “dialogue between institutions and citizens based upon a ‘digital citizenship’” in order to fully exploit the relationship between Information Systems, the government and people and set up an e-government in which everybody who has access to a computer and Internet can participate.

As Sacca states, “democratic legal systems have to foster and promote civil and political rights also with reference to the use of ICT, against digital divide” (Sacca 2009: pp.29). However, the issue of electronic democracy is only one of many legal issues that has been raised by the development of Information Systems. Pollack argues that “we are living in an era in which we routinely deal with issues such as privacy, digital security, identity theft, spyware, phishing, Internet pornography and spam. These costly and time consuming concerns were completely foreign to the American public only a few years ago” (Pollack 2006: pp.172). It is clear, therefore, that there are a multitude of legal issues surrounding Information Systems and Adamski argues that how we deal with information and data is a critical part of how we function as a modern liberal democracy and that the legal system must reflect this emphasis upon freedom of information. “Information, being an intangible and an entity that can be possessed, shared and reproduced by many, is not capable of being property as most corporeal objects do. Unlike corporeal objects, which are more exclusively attributed to certain persons, information is rather a public good. As such it must principally flow freely in a free society” (Adamski 2007: pp.1). It is clear, therefore, that legal issues are of vital importance with regard to Information Systems and that a multitude of issues must be examined in order to fully understand the relationship between Information Systems and the Rule of Law.

In the next section we will examine the extent to which ethical issues impact upon Information Systems. A study on the relationship between ethics and Information Systems has defined ethics as “the principles of right and wrong that individuals, acting as free moral agents, use to make choices that guide their behaviours” (Ethical and Social Issues 2010: pp.128). The study argues that the development of Information Systems has fundamentally transformed the relationship between management and ethics because new Information Systems give rise to a series of new ethical dilemmas. The study argues that “information systems raise new ethical questions for both individuals and societies because they create opportunities for intense social change, and thus threaten existing distributions of power, money, rights, and obligations” (Ethical and Social Issues 2010: pp.128).

Many of the ethical problems of Information Systems were foreseen by Mason in a famous study conducted in 1986 entitled ‘Four Ethical Issues of the Information Age’. In this study Mason argues that, above all, four ethical issues will dominate the era of Information Systems. He defined these four ethical issues as “privacy, accuracy, property and accessibility” (Mason 1986: pp.5). Mason raised a number of pertinent questions that are indeed still relevant today and help us greatly in our quest to fully understand the relationship between legal, ethical and social issues and Information Systems. For example, with regard to privacy Mason asked, “What information about one’s self or one’s associations must a person reveal to others, under what conditions and with what safeguards? What things can people keep to themselves and not be forced to reveal to others?” (Mason 1986: pp.5). At this point it is important to point out that whilst such questions are clearly ethical in nature, the answers that society provides to them have clear and profound social dimensions, and therefore ethical and social issues are inextricably linked with regard to Information Systems. As the study on Ethical and Social Issues points out, “like other technologies, such as steam engines, electricity, the telephone, and the radio, information technology can be used to achieve social progress, but it can also be used to commit crimes and threaten cherished social values. The development of information technology will produce benefits for many and costs for others” (Ethical and Social Issues 2010: pp.128).

Despite the fact that ethical and social issues are inextricably intertwined, it is important that we delineate between the two concepts, and in the final section of this dissertation we will focus upon the social issues relating to Information Systems. Here we will examine some of the most prominent social issues that arise when dealing with Information Systems. Some of the social questions we will examine concern the extent to which society is affected by a move toward computer-based systems. What costs do societies incur by doing so and what benefits do they accrue as a result? Do increased levels of automation affect employment patterns and cause people in lower social classes to lose employment opportunities? Will the rise of Information Systems serve to strengthen or dilute class divisions? It is possible to argue that Information Systems serve only to expand the power of the rich, because they reinforce existing prejudices against the poor. As Wilson argues, “the economic climate and the differential stratification of resources will define some work environments as ‘information-poor’ and others as ‘information-rich’, with consequent effects upon the probability of information-seeking behaviour and the choice of channel of communication” (Wilson 2006: pp.665). Another important social issue concerns the extent to which Information Systems will give rise to greater Identity Theft, in which ordinary citizens are the victims; the great rise in the number of Identity Theft victims shows that a large number of negative social issues have arisen since the birth of Information Systems.

Aims and objectives of the research

The aim of this dissertation is to encompass a broad spectrum of academic research in order to fully examine the legal, ethical and social issues on Information Systems. In order to be able to complete this task competently, we must first of all begin by outlining a clear structure of how this dissertation will be completed. We will conduct this investigation in five distinct sections. In the first section we will seek to define the concept of Information Systems. This is a vital task in this dissertation because in order to be able to fully and adequately analyse the legal, ethical and social issues on Information Systems we must first of all clearly define the concept of Information Systems in order to be able to proceed any further. In the next three sections we will focus upon the legal, ethical and social issues on Information Systems. We will examine each one of these issues in turn and begin by defining some of the most important issues that are relevant to Information Systems in each field. Once we have defined the relevant concepts in this dissertation we will move on to apply the concepts to an organisation that clearly reflects a number of pertinent issues raised by the literature review.

We have chosen to focus upon the firm Panasonic, because it is an example of an organisation that has been greatly affected by the developments of Information Systems over the last few decades and will allow us to fully explore the social, ethical and legal issues that arise when dealing with Information Systems.

Statement of the Design and Methodology

This investigation will allow us to critically evaluate the impact of legal, social and ethical issues upon Information Systems, focusing particularly on the organisation of Panasonic. It is likely that this dissertation will take a considerable amount of time and we will need to ensure that we have access to the relevant data and statistics that will be necessary in order to support and justify our findings. The aim of this dissertation is to clearly present a theoretical framework from which we can critically examine and evaluate the most important concepts within the title of this investigation. Once the internal theoretical framework has been established we will move on to apply the theoretical framework to the external world in order to analyse the extent to which this theoretical framework is supported by the realities of running a modern organisation in the real world. This will allow us to transfer the internal theoretical framework to the external world where such theoretical concepts operate.

Sources and Acquisition of Data

Throughout this dissertation we will focus primarily upon primary and secondary academic literature in order to establish the theoretical framework upon which this investigation will be based. If possible, it would also be useful to conduct some first-hand interviews with employees and managers of Panasonic in order to ascertain the impact that our theoretical framework has upon the company.

Method of Data Analysis

Throughout this dissertation we will employ both deductive and quantitative techniques as well as inductive and qualitative techniques. The literature review will be primarily based upon qualitative techniques, but we will also focus upon quantitative techniques in order to be able to compare the data and statistics that we found in our literature review with the evidence we will assemble from the firm Panasonic. We will also use both deductive and inductive techniques throughout this investigation and allow for the fact that the conclusions we reach may be false in nature. This type of hypothetical reasoning will strengthen our ultimate conclusions and findings.

Form of Presentation

The dissertation will be presented in written form, but where necessary relevant graphs, tables, charts and illustrations will be included in order to provide statistical data that support and justify the conclusions reached in this investigation.

References and Bibliography

Adamski, A., 2007. Information Management: Legal and Security Issues, pp.1-17

Buckland, M., 1991. Information and Information Systems. London: Greenwood Publishing

Currie, W., 1999. Rethinking Management Information Systems. Oxford: Oxford University Press

Ethical and Social Issues in Information Systems, 2010, pp.124-165. http://www.prenhall.com/behindthebook/0132304619/pdf/laudon%20MIS10_CH-04%20FINAL.pdf Accessed 26/07/2010

Mason, R., 1986. Four Ethical Issues of the Information Age. Management Information Systems Quarterly, 10 (1), pp.5-12

Pollack, T., 2006. Ethical and Legal Issues for the Information Systems Professional. Proceedings of the 2006 ASCUE Conference, pp.172-180

Rainer, K., 2009. Introduction to Information Systems: Enabling and Transforming Business. London: Wiley Publishing

Sacca, D., 2009. Information Systems: People, Organisations, Institutions and Technologies. New York: Springer Publishing

UK Academy for Information Systems, 1999. The Definition of Information Systems, pp.1-6

Wilson, T., 2006. On User Studies and Information Needs. Journal of Documentation 62 (6), pp.658-670

HMIS Research Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Open Health: A research prospectus on HMIS research
Introduction

Change management decision models based on shifts within the global economic order have forced administrators to seek new systems and relationships of oversight as organizations switch from traditional vertical work relationships to horizontal interactions. Much of the insight built into recommendations toward better change management models has been developed in scientific fields of practice. The interest in the management of knowledge by science communities, and especially the integration of practice into localized IT systems, has long been promoted by consultants and advisors to those fields, who look to channels of facilitation as viable strategies toward competition in the context of change. The popularity of IT systems management as a strategic model for practice field growth, as well as a core competency for institutional change, is well established. Cost-cutting and innovative IT knowledge-sharing networks expand the options of institutions and professionals. Competitiveness now equates with interface with the highest calibre artificial intelligence in advancement of human potential toward global solutions that promise to enhance a new generation in oversight.

Andrew Grove, former CEO of Intel, once observed that “only paranoid firms survive, primarily because they continuously analyse their external environments and competition, but also because they continuously innovate” (Hitt et al. 1995). Grove’s assertions are echoed by many corporate executives, who have become sold on the constancy of research and development as the single most powerful source of competitive capital in organizations faced with ‘new market’ competition. For instance, the equity of ‘value’ is a price statement or ‘proposition,’ as well as a method of translating brand identity within the market through the illustrated performance of a product. For service organizations, structural response to delivery is still inherent to value. Practice settings are environments that seek synthetic opportunities to forge alliances between internal and external forces as they navigate against risk. Value increases continuously, and incrementally, as capitalization is realized in relation to those activities.

Early responses to the local-global equation looked to structural articulation in what became known as ‘matrix organizations’, which allowed for the retention of rational-analytical choice models, with modified response through process-oriented incremental decision making. More recent organizational approaches, especially in capital-intensive fields such as IT, offer support for the benefit of incremental decision making, with the salient distinction between the form and function of decisions. Content in both cases is driven by challenges to productivity, and executive direction is now, more than before, forced to consider incremental decision making as a strategic option, despite the fact that rational choice inevitably overrides constant reinvention (Tiwana, A. et al. 2006).

Responsive to the aforementioned challenges in the emergent healthcare environment, leaders looking to new IT HMIS operations systems are seeking change management solutions that will enable them to forge lean and agile strategic growth models in settings known for fiscal and resource waste.

Six Sigma approaches to analysis have allowed businesses to streamline operations through combined methodologies of analysis (Edgeman and Dugan 2008). In the past ten years there has been increased demand for seamless service between hospitals, clinics and multidisciplinary teams concerned with the wellbeing of patients and their families. Healthcare organizations seeking competitive and more efficient options to serving patients now look to IT Healthcare Management Information Systems (HMIS) for optimizing capacity both in terms of finance and in standard of care to patients (Tan and Payton 2010).

Despite the upfront costs of planning and implementation that go into the introduction of new IT systems into an existing HMIS setting, integrated operations enable the advancement of fiscal and other controls not previously realized due to time lapse, as well as precision in every step of the service provision process, from the decoupling point between allocations to the actual delivery of patient services. If efficiency in information is directly linked to a ‘duty of a reasonable standard of care’ within hospitals and healthcare institutions, the benefits to those organizations, in terms of direction and better control of liability issues through information channels, offer new promise in terms of comprehensive patient care through “patient-centric management systems,” and ultimately sustainable organizational growth (Tan and Payton 2010). The research proposal that follows outlines the development of HMIS in the medical field of practice in the United Kingdom.

Literature Review

The 1990s marked the dawn of knowledge sharing systems in the space science industry, and the landmark mission deployed by NASA IT engineers in the development of what would come to be known as a Competency Management System (an online system that maps individuals to their competencies). Out of that seed project, the NASA Engineering Network (NEN) was initiated in 2005 under the Office of the Chief Engineer in furtherance of the space agency’s knowledge-sharing capacity. Coinciding with a benchmarking study involving the U.S. Navy, U.S. Army Company Command, the U.S. Department of Commerce, and Boeing Corporation, the NEN network enables engineers to connect with “peers and experts through communities of practice, search multiple repositories from one central search engine, and find experts” (Topousis, D.E. et al. 2009). The research study follows this idea, and proposes to contribute to three (3) bodies of literature pertinent to the field of knowledge sharing: 1) the general history of IT integration as a change management strategy for the advancement of purpose in science; 2) studies on the development of IT networks of practice within the health science community in particular, and the development of health management information systems (HMIS); and 3) literature dedicated to risk mitigation and compliance within legislative policy, and elements of security within institutional networks subject to oversight by chief information officers (CIO).

The invitation of recognized Technical Fellows, noted in their disciplines, to facilitate their respective communities of practice within the network set the pace for portal integration of human resource tools, such as jSpace. The platform can be utilized as a communication and research source for professional recruitment to projects and permanent roles. Links to related associations and professional societies offer participating fellows and partners access to an integrated contact source of engineers, “while fostering an environment of sharing across geographical and cultural boundaries.” The next step for the NASA NEN is incorporation into the larger NASA Enterprise Search, and the potential accommodation of oft-requested ITAR-restricted information. The extension of the NASA space science knowledge sharing concept has done two things: 1) furthered the advancement of space science objectives through KMS (Knowledge Management Systems) and PMS (Plan Management Systems) toward the design and launch of multinational space missions; and 2) extended the idea of an IT-integrated field of scientific practice to other scientists in distinct fields of practice throughout the scientific community (Quattrone and Hopper 2004).

The emergent emphasis in organizational theory on IT Healthcare Management Information Systems (HMIS), as presented by Tan and Payton (2010), initiates enquiry into the integration of extended practice setting networks. For those interested in the advancement of IT platforms and software-driven databases as a solution to change operations in global institutions, approaches that succeed at meeting core competencies through risk reduction and resource maximization are the most sought-after technologies for the betterment of the ‘total’ organization. The new IT systems offer interconnectivity between operational units within healthcare institutions, and link human intelligence to logistics data analysis for in-depth insight into the history of expenditures and allocation requests. Some institutions have joined supply chain cooperatives in their region to further enhance the use of network logistics and stem the flow of fiscal waste – a persistent concern within healthcare organizations – saving literally hundreds of millions of dollars annually (Healthcare Finance News, 2010).

Healthcare Management Information Systems (HMIS) offer integrated systems platforms and applications to the entire range of chain operations management activities within and between institutions that provide patient care. Consistent with the emergent interests in organizational knowledge sharing networks, healthcare institutions are looking to IT solutions for a number of reasons, and especially the growing impetus toward: 1) healthcare provider connectivity; 2) increased focus in tracking and management of chronic diseases; 3) heightened patient expectations regarding personal input in care process; 4) market pressures driving hospital-physician alignment; and 5) advances in the technological facilitation of systems operability in this area (Tan and Payton, 2010).

Design of systems architecture from institution to institution still varies, as data management and interconnectivity may be distinct and also subject to existing ‘legacy systems’ issues that might be incorporated in the new HMIS model. The core competency of HMIS is the more ephemeral side of systems planning, which is the knowledge sharing path – where data and information become meaningful. The other key components for consideration in HMIS integration include: 1) the basic hardware, software and network schema; 2) process, task and system(s); 3) integration and interoperability aspects; and 4) user, administration and/or management inputs and oversight. For instance, IT HMIS designed to enhance the networking of financial operations in hospital institutions must be especially responsive to the growing complications in the US insurance industry, as product options such as bundled claims force institutions into synchronous attention to patients’ demands. Convenience and competitive pressures to supply those services supersede mere fiscal allocation in service to patients amidst conglomerate interests in the healthcare industry (Monegain, 2010).

Chief Information Officers (CIO) are critical to the administration and planning of HMIS, and in particular to security measures and the oversight of privacy protections. Unlike Chief Executive Officers (CEO), who serve as the primary responsible party for general governance, the CIO is more directly involved in the scientific praxis of organizational management, as precision in the systems that retain data for record, and for analysis toward organizational growth, is in their hands. CIOs are increasingly drawn into this external environment based on the nature of transactional relationships, as they are called upon to find IT systems of accountability within their own institutions (cio.com, 2010). Regulation of computer and telecommunications activities in the UK’s Computer Misuse Act (CMA) of 1990 has an impact with regard to the stipulations pertaining to definitions of personal and professional use of HMIS by employees, partners and clients (Crown Prosecution Service Advice on CMA 1990).

Aims and Objectives of the Study

The aim of the research is to study successful approaches to knowledge sharing, risk reduction and resource maximization through HMIS IT systemization. The most sought-after technologies are those that expedite a ‘total’ organizational approach to information management. The goal of the research is to conduct a Six Sigma analysis of an IT-based knowledge sharing infrastructure of a scientific community of practice. While the value of space science serves as a critical beginning for baseline assumptions, the study proposes to survey the development of HMIS in the medical field in the United Kingdom. The three (3) core objectives of the study on healthcare IT infrastructure will be: 1) a review of HMIS infrastructure as it is understood by healthcare administration in contract with systems engineers; 2) an examination of fiscal accountability, toward the goal of projected and actual capitalization on IT systemization in the practice setting; and 3) an assessment of the significance of quality control of those systems in relation to government reporting and policy.

Methodological Considerations

The study's methodology is designed to build a portfolio of practice on HMIS in the British healthcare industry, based on data drawn from the following sources:

1. Survey of lead UK health institutions

The structured Survey instrument will comprise fifty (50) questions and will be circulated within the HMIS practice community in the UK. A series of open questions at the end of the Survey will give CIOs and IT administrators an opportunity to contribute unique knowledge about their systems.

2. Interviews with CIOs

In-depth content for the research will be drawn from two (2) semi-structured Interviews with CIOs, selected on the basis of the data generated by the Survey. Findings on the development of HMIS on site in the chosen institutions will open a new line of inquiry into the actual challenges faced in the planning, implementation and ongoing maintenance of architectural systems as new enterprise systems come onto the market. Policy and procedure will also be discussed, as well as extended referral networks.
3. Internet Research
a. Patient Research. Review of patient interfaces with HMIS portals at lead organizations and community healthcare providers.
b. Aggregate Index. Research data collected from healthcare industry indexes to support trend analyses.
c. Risk Management. Recommended best practices, policies and security protocols for the risk management of fiscal information, institutional and staff privacy, and non-disclosure of the patient record will be investigated. Open source software will be reviewed as a protective measure, alongside adequate firewalls, intrusion detection and encryption (a brief encryption sketch follows this list).
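As one illustrative example of the kind of protective measure the risk management review will consider, the sketch below uses the open source Python 'cryptography' package (a real, widely used library, installed with pip install cryptography) to encrypt a patient record field at rest with symmetric (Fernet) encryption. The field name and value are invented for illustration, and in a real HMIS deployment key management would be handled by dedicated infrastructure rather than generated in the script.

    # Minimal sketch: symmetric encryption of a patient record field at rest.
    from cryptography.fernet import Fernet

    # In practice the key would come from a managed key store or hardware
    # security module, never be generated ad hoc like this.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record_field = b"NHS number: 943 476 5919"   # invented example value
    token = cipher.encrypt(record_field)          # ciphertext safe to store
    print(token)

    # Authorized retrieval: decryption restores the original bytes.
    assert cipher.decrypt(token) == record_field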

Sources and Acquisition of Data

Data acquisition for the study will be conducted in three phases: 1) Survey; 2) Interviews; and 3) Internet research. Phases 1 and 2 will focus on CIOs and other lead IT staff in selected UK healthcare institutions, incorporating information from the two instruments together with information on the engineering consultancies those institutions have worked with and institutional documentation on HMIS and unit databases. Phase 3 will follow the first two phases and will supplement the project with policy and other supporting details.

Data Analysis

Data analysis will be furthered by an examination of the standardized taxonomies of the open source database repositories used in HMIS: Customer Relations Management (CRM); Electronic Health Records (EHR); Enterprise Resource Planning (ERP); Personal Health Records (PHR); and Supply Chain Management (SCM), dedicated to total operations management control, patient referral and professional knowledge sharing (Tan and Payton, 2010). Analysis of the project data will follow a Six Sigma, solutions-oriented approach.
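By way of illustration, a Six Sigma treatment of the survey data might convert counts of process failures (for example, records returned by one of the repositories above that fail a completeness check) into a defects-per-million-opportunities (DPMO) figure and an approximate sigma level. The sketch below shows the standard calculation with invented counts; it assumes the conventional 1.5-sigma shift and is not drawn from the study data.

    # Minimal sketch: DPMO and approximate sigma level from hypothetical counts.
    from statistics import NormalDist

    def sigma_level(defects: int, units: int, opportunities_per_unit: int):
        """Return (DPMO, sigma level), using the conventional 1.5-sigma shift."""
        dpmo = defects / (units * opportunities_per_unit) * 1_000_000
        sigma = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
        return dpmo, sigma

    # Invented example: 120 incomplete records among 2,000 EHR extracts,
    # each checked against 5 completeness criteria (opportunities).
    dpmo, sigma = sigma_level(defects=120, units=2_000, opportunities_per_unit=5)
    print(f"DPMO = {dpmo:.0f}, approximate sigma level = {sigma:.2f}")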

Table 1

Approach        | Description                                          | ITIL Area
----------------|------------------------------------------------------|----------------------------------------------------------
Charter         | Defines the case, project goals of the organization  | Policy and Procedures
Drill Down Tree | Process Drill Down Tree                              | Engineering Process & Unit Oversight
FMEA            | Failure Modes & Effects Analysis                     | Risk Assessment
QFD             | Quality Function Deployment                          | Compliance
SWOT            | Strengths, Weaknesses, Opportunities, Threats        | Planning and Implementation (ongoing for future inputs)
Trend Analysis  | Aggregate Narrative                                  | HMIS industry trends

Table 1: Six Sigma methodologies for analysis of HMIS survey, interview and internet archive sources.
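Of the approaches in Table 1, FMEA is the most readily quantified: each failure mode is scored for severity, likelihood of occurrence and likelihood of detection, and the product of the three gives a risk priority number (RPN) used to rank risks. The following is a minimal sketch of that calculation with invented failure modes and scores; it illustrates the standard FMEA scoring convention rather than findings from the study.

    # Minimal FMEA sketch: rank hypothetical HMIS failure modes by risk priority number.
    # Each factor is scored 1 (best) to 10 (worst); RPN = severity * occurrence * detection.
    failure_modes = [
        # (failure mode, severity, occurrence, detection)
        ("interface outage between EHR and billing system", 8, 3, 4),
        ("unauthorised access to patient records",          10, 2, 6),
        ("duplicate patient identifiers on referral",        6, 5, 3),
    ]

    ranked = sorted(
        ((name, s * o * d) for name, s, o, d in failure_modes),
        key=lambda item: item[1],
        reverse=True,
    )
    for name, rpn in ranked:
        print(f"RPN {rpn:3d}  {name}")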

References

Computer Misuse Law, 2006. Parliament UK. Available at: http://www.publications.parliament.uk/pa/cm200809/cmhansrd/cm090916/text/90916w0015.htm#09091614000131
Crown Prosecution Service Advice on CMA 1990. Available at: http://www.cps.gov.uk/legal/a_to_c/computer_misuse_act_1990
CIO.com, 2010. The imperative to be customer-centric IT leaders. Available at: www.cio.com
Edgeman, R.L. and Dugan, J.P., 2008. Six Sigma for Government IT: Strategy & Tactics for Washington D.C. Available at: http://www.webpages.uidaho.edu/~redgeman/RLE/PUBS/Edgeman-Dugan.pdf
Hitt, Black & Porter, 1995. Management. Upper Saddle River: Pearson Education, Prentice Hall.
Jones, R.E., et al., 1994. Strategic decision processes in matrix organizations. European Journal of Operational Research, 78 (2), 192-203.
Monegain, B., 2010. N.C. health system to launch bundled payment pilot. Healthcare Finance News, 22 June 2010. Available at: http://www.healthcarefinancenews.com
Quattrone, P. and Hopper, T., 2004. A 'time-space odyssey': management control systems in two multinational organizations. Accounting, Organizations and Society, 30, 735-754.
Tan, J. and Payton, F.C., 2010. Adaptive Health Management Information Systems: Concepts, Cases, & Practical Applications, Third Edition. Sudbury, MA: Jones & Bartlett Learning.
Tiwana, A., et al., 2006. Information Systems Project Continuation in Escalation Situations: A Real Options Model. Decision Sciences, 37 (3), 357-391.
Topousis, D.E., et al., 2009. Enhancing Collaboration Among NASA Engineers through a Knowledge Sharing System. Third IEEE International Conference on Space Mission Challenges for Information Technology. Pasadena, CA: Jet Propulsion Laboratory.