Cities, poverty and inequality in the UK

With reference to London, Manchester and Glasgow in the UK
Introduction

Debates on poverty and inequality have always been heated and topical. In the aftermath of the global financial crisis and the dogma of austerity, poverty and inequality received newfound attention from academic and policy circles alike. What is especially interesting, for the purposes of this essay, is to look at austerity politics in its starkest form, at how it has fed into existing socioeconomic deprivation, and at how it aligns with deeper-seated politics dating back to Thatcherite and 'voodoo' economics (Harvey, 2005).

This essay will look at the UK, and specifically at London, Manchester and Glasgow, and tease out themes around poverty and inequality and how they have been animated as a direct result of policy and decision-making at Westminster. By and large, poverty and inequality are multifaceted concepts and should not be seen as purely economic. They intersect with legacies and collective memories, and the relationship between cities and inequality is therefore dynamic and complicated.

This essay first turns to delineating what cities, poverty and inequality are taken to mean, locating this discussion within a larger theoretical current and critique. The argument proposed is that poverty and inequality are, put simply, manifested to their fullest extent in global cities, as these are the immediate receptors of government policy and dialogue. Although regional cities and towns are also affected, the 'contagion' of policy is far weaker there and its relationship to them more obscure. To provide evidence for this argument, this essay will examine three socioeconomic phenomena that have stark implications for poverty and inequality, namely neoliberal austerity politics, a protracted housing crisis, and finally, deindustrialisation and a one-sided focus on the City. The essay concludes with policy recommendations as to how the rise of inequality in cities might be curtailed.

Why global cities?

As briefly mentioned in the introduction, this essay identifies and looks into global cities as opposed to the nation as a whole. This is because the latter is more abstract and generalised, and relies on more macroeconomic assumptions. In contrast, the former is the 'playground' of policy and dialogue, being its proximate receptor and locus (Musterd and Ostendorf, 2013). That is to say, global cities, in a way, embody what policy is about and what underlies it. The direct consequences that accrue allow an observer to make more credible and robust points about their relationship to inequality and poverty (Sassen, 2011). For example, if this essay were to take up national inequality, measured by the Gini coefficient, the concepts would become harder to discern and the implications unclear. Much of the theoretical literature has homed in on the root causes of inequality and how this deleterious phenomenon has come about (see Atkinson, 2015). Although this essay will later touch on and attempt to trace why inequality exists and is magnified in cities, it is noteworthy that most research into inequality shies away from looking at the direct results it has on life in global cities.

How do we explain poverty and inequality?

Next, this essay turns to defining poverty and inequality. There is a presumption in favour of reducing these two to purely economic phenomena, to be addressed by economic solutions. However, as will be examined, the case study of Glasgow is a powerful rejoinder to this conflation: it is a city with competitive economic infrastructure and results, and yet it lags behind on other crucial, more holistic social measures. More broadly, poverty and inequality, as stated, are complex and multifaceted. That is why it is suggested that the Gini coefficient is a fundamentally limited and misguided measurement to marshal in this essay. Instead, it is more relevant to look at the likes of Amartya Sen (2005) and his work on human capabilities and how potential can be frustrated in myriad non-economic ways. For this reason, this essay cannot properly infer from London's high economic performance that it adequately caters to the problems of inequality and poverty. Put simply, the fact that a global city grows does not mean that the least well off are benefiting as well. By taking this comprehensive approach, this essay will discuss how complex policy has complex consequences for people's lives and general levels of contentment.

The trajectory of inequality

Inequality is by no means novel. This discussion is embedded in a global debate about what gives rise and momentum to inequality, especially following the global financial crisis of 2008. In the core of the Western world, inequality has run amok in the past few decades, despite the fact that these economies have delivered modest economic growth overall (Piketty, 2014). This puzzling reality has been the subject of much academic debate; some scholars have suggested that inequality is not only inevitable but, in fact, beneficial, as it makes people more driven and aspirational, and more likely to celebrate and mimic such role models as Mark Zuckerberg and Warren Buffett (Lippmann et al., 2005). According to this line of argument, inequality is seen as a by-product of entrepreneurial ability and prowess.

However, it is unlikely that this line of thought captures the deep and perplexing character of inequality. To rebut the claim that inequality is a fair reflection of talent and ability, this essay contends that it is rather the result of collective, deliberate decision-making (Stiglitz, 2012). This becomes particularly evident in global cities, where the contradictions therein highlight that it must be more than just a lack of talent or luck that is holding people back on such a large scale. London, for example, boasts the City, which is undeniably the globe's foremost financial centre, as well as the Silicon Roundabout, a very promising and booming hub of entrepreneurs. Yet it also has areas like Peckham. Inquiring into the latter's residents' attitudes, it becomes plain that they feel disillusioned and failed by the capital of the United Kingdom (Glaeser et al., 2009). Such areas offer another side to London's 'success story', as they tend to host endemic crime, destitution, childhood obesity and other negative manifestations. Therefore, to say that inequality is down to the genes you are endowed with and the aspirations you form is too simplistic a story for global cities.

Another instance in which people are adversely affected by phenomena outside their control is the prolonged housing crisis that London is witnessing (Harford, 2014). Due to unprecedented demand and people looking to move in, house prices have been on a perpetual rise. What has enabled this rise is the power that landlords have: they can charge tenants disproportionate amounts and they can also fund their own mortgages by letting out properties (Harford, 2014). This translates very negatively for people from lower socioeconomic strata, as they lack comparable access to credit to begin with. That is why they turn to the state and to council housing, which is meant to cater to them. However, the latter has also been penetrated by private landlords, leading to the perverse situation whereby council housing is owned privately and can also be overcharged. This is down to political choices, both in allowing the right to buy these kinds of properties and in creating a generally more permissive framework for buying and letting property. At the same time, those at the top end of the economic spectrum have benefited from more generous inheritance and property tax arrangements, offering a glimpse into how glaring inequality can become in global cities. By contrast, Berlin has recently introduced rent controls to avoid a similar scenario (Vasagar, 2015).

It is therefore clear that people living in London have vastly different and unequal access to the most important asset of their consumption lives, namely their home, which has negative implications for their psychological wellbeing and for the extent to which they can provide for their families sustainably. Big cities cannot afford to have these kinds of contradictions run within them, whereby lower strata segregate from the mainstream in their own communities and refuse to engage with political decision-making and active citizenship (Wheeler, 2005). This, in turn, exacerbates the already unsteady relationship between cities and inequality, as these groups lose the morale and incentive to engage with common goals and agendas.

Neoliberalism

In the aftermath of the global financial crisis, the United Kingdom's government has made a heated case in favour of austerity politics. The government has engaged in discretionary benefit cuts and has also increased tuition fees for tertiary education, both of which disproportionately hurt the poor and therefore augment inequality. In seeing benefits reduced, a person in a big city faces profound adversity. Compounded by the housing crisis and general inflation, such a person is likely to have their livelihood eroded. Their children will also have to take on bloated student loans, and that is if they can afford to hold off working immediately after school. Recently, the UK government has engaged in a bait-and-switch policy whereby benefits to the poor were cut, supposedly counteracted by the introduction of a 'living' wage (O'Connor and Gordon, 2015). Again, this example demonstrates that inequality is not an inevitable result of human nature and a random distribution of talent, but is created and magnified by governments and collective communities that have bought into the austerity dogma. This has been criticised by high-profile academics such as Piketty (2014), Stiglitz (2012) and Atkinson (2015).

The seeds of inequality were perhaps planted by Thatcherite economics and a legacy of tough love towards trade unions, workers and the welfare state. Following Thatcher's election, the government introduced a series of neoliberal reforms that placed socioeconomically vulnerable people in an even more precarious situation, stripped of union participation, of their jobs if they worked for a factory that closed down, and of their livelihoods as regressive taxation took its toll (Harvey, 2005). One of the features most relevant for the purposes of this essay is deindustrialisation and how it has engendered a deep and persistent north-south divide in the UK that is difficult to address. Through a strong and remorseless focus on the service industry, which was hailed as forward-looking, efficient and innovative, the UK's industrial base, concentrated in cities like Manchester and, to a lesser extent, Glasgow, took a back seat to the City of London (The Equity Trust, 2014). The latter has been consistently nurtured with state support and policy ever since, at the expense of other sectors, such as manufacturing, which used to form the backbone of the British economy. Instead, manufacturing is now, broadly speaking, lagging behind in terms of productivity, as the latest findings of the CDI show (The Equity Trust, 2014).

[Graph: pay gaps between the rich and the poor across UK regions (source: The Equity Trust, 2014)]

The graph above shows pay gaps between the rich and the poor in different regions of the UK. The pay gap in London is the most glaring, even though London is by far the fastest-growing city. This is because the service industry caters mainly to the wealthy and lacks the traditional job-creating economic multipliers of the industrial and manufacturing sectors that have suffered.

Conclusion

In conclusion, this essay first took up the ambitious task of delineating what is meant by poverty and inequality, which are inherently complicated concepts. It then attempted to come to grips with global cities and why they should be viewed as the main reference point in any policy discussion about poverty and inequality. The relationship this essay identified is by no means static; rather, it evolves with time and with changes in government and collective dialogue. This essay has also aimed to dispel the assumed link between growth and reduced inequality by pointing to the examples of London and Glasgow, both of which should alert the reader to the holistic and insidious ways in which inequality and poverty work. The roots of inequality and poverty have also been briefly explored, showing that they are not novel but the result of long-lasting legacies and ingrained ways of political thinking. Finally, the essay turned to how important and telling the current context is, in terms of how inequality-sustaining policies have been legitimised under the guise of austerity and in the name of balanced budgets.

Bibliography

Atkinson, A.B., 2015. Inequality: What Can Be Done? Cambridge, MA: Harvard University Press.

Glaeser, E.L., Resseger, M., Tobio, K., 2009. Inequality in Cities. Journal of Regional Science 49, 617–646.

Harford, T., 2014. Why a house-price bubble means trouble. Financial Times. Available at: http://www.ft.com/cms/s/0/66189a7a-6f76-11e4-b50f-00144feabdc0.html (accessed 15/10/2015).

Harvey, D., 2005. A Brief History of Neoliberalism. Oxford: Oxford University Press.

Lippmann, S., Davis, A., Aldrich, H., 2005. Entrepreneurship and Inequality. In: Lisa Keister (Ed.) Entrepreneurship, Research in the Sociology of Work. London: Emerald Group Publishing Limited, pp. 3–31.

Musterd, S., Ostendorf, W., 2013. Urban Segregation and the Welfare State: Inequality and Exclusion in Western Cities. London: Routledge.

O’Connor, S., Gordon, S., 2015. Summer Budget: Osborne makes bold bet with “living wage.” Financial Times. Available at: http://www.ft.com/cms/s/0/611460a8-2584-11e5-9c4e-a775d2b173ca.html#axzz3oeeh6xid (accessed 15/10/2015).

Piketty, T., 2014. Capital in the 21st Century. Cambridge, MA: Harvard University Press.

Sassen, S., 2011. Cities in a World Economy. London: SAGE Publications.

Sen, A., 2005. Human rights and capabilities. Journal of Human Development 6, 151–166.

Stiglitz, J., 2012. The Price of Inequality. London: Penguin.

The Equity Trust, 2014. A Divided Britain? – Inequality Within and Between Regions. London: The Equity Trust.

Vasagar, J., 2015. Germany caps rents to tackle rise in housing costs. Financial Times. Available at: http://www.ft.com/cms/s/0/27efd4b2-c33a-11e4-ac3d-00144feab7de.html#axzz3oeeh6xid (accessed 14/10/2015).

Wheeler, C.H., 2005. Cities, Skills, and Inequality. Growth and Change 36, 329–353.

Changes in US Foreign Policy after 9/11

Introduction

On September 20th, 2001, President George W. Bush (2001, n. pag.) gave a speech addressing the events of nine days before: “On September the 11th, enemies of freedom committed an act of war against our country. Americans have known wars, but for the past 136 years they have been wars on foreign soil, except for one Sunday in 1941.” The speech drew upon the notion that America had been attacked, laid the blame firmly at the door of terrorism and interpreted the attack as an act of war. Although the emotive rhetoric was designed to stir support for a response, it also heralded a new era in US foreign policy. Defined as a “foreign policy crisis” by Bolton (2008, p. 6), the attack inevitably elicited a response from American policymakers, but the extent to which it has changed US foreign policy has been hotly debated. As such, this essay will discuss the changes in post-9/11 US foreign policy, identifying areas that marked a departure from the policy in place prior to 9/11. It will analyse each to determine the extent to which it was a direct response to the terrorist attack and evaluate how the change impacted upon long-term foreign policy strategy. This will be done with a view to concluding that many of the changes to US foreign policy in the post-9/11 era were a response to the evolving security threat posed by terrorism and did force policy to evolve in order to accommodate strategies that address modern problems. However, whilst those changes made an immediate impact, they did little to alter the long-term course of US foreign policy.

Foreign policy arguably changed direction within days of 9/11 with the most immediate and most obvious change being the shift in focus towards terrorism. Bentley and Holland (2013) highlight that the focus had been foreign economic policy under Clinton but 9/11 produced a dramatic movement away from diplomacy and towards military solutions via the War on Terror. There was also movement away from policy that prioritised relations with the great powers of Russia and China. Earlier unilateralism had negatively impacted upon relations with both nations, thus causing deterioration that extended beyond the Cold War era hostilities and prevented effective relations between East and West (Cameron, 2006; Nadkarni, 2010). However, the American desire to create a “world-wide anti-terrorism alliance” (Nadkarni, 2010, p. 60) brought about a relative thaw between the nations and facilitated discourse in order to cater for shared security concerns. This change provides evidence of an immediate shift in US interests and this manifested in foreign policy. As such, this is an extremely important change that occurred post-9/11, especially as it emerged out of the first response to the attack and served to dictate US actions abroad for more than a decade afterwards.

The shift of focus away from the great powers and towards terrorism provided policy space to address security threats via the three pillars of the Bush administration's national security policy, which had become a fundamental element of foreign policy as, for the first time since World War II, an attack on American soil brought these ostensibly dichotomous strands of policy together. The pillars were missile defence (a continuation of policy prior to 9/11), pre-emption and homeland security, the latter two of which were embraced after 9/11 in response to it (Lebovic, 2007). Although elements of this were rooted in domestic policy, the pre-emption aspect was also manifest in foreign policy because non-state terrorist groups and rogue states became inextricably linked to US foreign policy as targets to be dealt with under the new priorities outlined in the wake of the terror attacks, although this was a somewhat more gradual development than the initial shift of focus to terrorism. Indeed, the Bush Doctrine marked a fundamental shift towards policy that incorporates both pre-emptive and preventative action, marking the decline of the reliance on containment and deterrence that had dictated policy from the Cold War era onwards (Jentleson, 2013; Levy, 2013). Pre-emptive strikes were indicative of a strategy that sought to defend by attacking those who posed an immediate security threat to the US, and allowed policy to justify the unilateral military pursuit of specifically American interests. This suggests that 9/11 was used as an effective excuse to create foreign policy that better mirrored the ideology of the government than the policy in place in the months prior to the attack.

There is extensive criticism of the policy that reinforces the assumption that the government manipulated foreign policy to suit its own ends. For example, Ryan (2008, p. 49) argues that Iraq, which was labelled a rogue state, was already a focal point of foreign policy but the events of 9/11 allowed policymakers to push their specific agenda: “Influential strategists within the Bush administration seized on the horror to gain assent from liberal Americans to move the country towards a war in Iraq that neoconservative strategists desired, but that many within the US… shunned.” Holland (2012) concurs, arguing that coercive rhetoric was used extensively in order to sell the War on Terror via culturally embedded discourse. In addition, Miles (2013, p. 110) argues that “…Bush’s placement of rogue states at the centre of America’s response to 9/11 was welcomed as an opportunity to overthrow a number of old threats and terror loving tyrannies who stood in the way of democracy and freedom.” This perspective certainly offers a credible insight into how 9/11 was manipulated in order to push foreign policy in a certain direction, and indeed one that was a continuation of what had gone before. However, the need to manipulate public opinion is indicative of the fact that foreign policy had deviated from that in place directly prior to the terrorist attack on the World Trade Centre.

US foreign policy has also responded to the increased demand for humanitarian assistance to aid failed states, and for nation building to ensure their reconstruction, following 9/11. Shannon (2009) points out that the reconstruction of Afghanistan following the US invasion has essentially helped to prevent the failure of the state, improved the quality of life of its people, introduced freedoms and democratic processes that were absent before, and helped to keep the state from falling under terrorist control. This was certainly a change from previous foreign policy: “Before 9/11, nation building was often caricatured as a form of idealistic altruism divorced from hard-headed foreign policy realism… In the post-9/11 era, nation-building has a hard-headed strategic rationale: to prevent weak or failing states from falling prey to terrorist groups” (Litwak, 2007, p. 313). This summary of the extent to which attitudes changed highlights the fact that a greater role in states requiring humanitarian assistance was incorporated into foreign policy out of necessity rather than ideological choice. There was a distinct need to limit terrorist activity as far as possible and this actively manifested in this element of foreign policy. As Litwak (2007) points out, humanitarian action was not a staple element of American foreign policy by any means, and so this, more than any other element of foreign policy, does signal that a change occurred within the strategic objectives inherent in the War on Terror. However, there are criticisms of this particular change because the US is charged with failing to follow through with humanitarian aid to the extent that it should have done. For example, Johnstone and Laville (2010) suggest that the reconstruction of Afghanistan was effectively abandoned, with a failure to create institutions that would withstand future threats to freedom and democracy. This suggests that this particular area of strategy was not well thought out and did not achieve its ultimate aims. However, the fact that it was included in US foreign policy post-9/11 suggests that there was a concerted effort to implement a multifaceted policy to tackle terrorism as a new and dangerous global strategic threat.

However, despite the fact that the analysis here points to a change of direction for US foreign policy in the wake of 9/11 that was specifically designed to tackle the causes of and security threat posed by terrorism, some critical areas of policy did not change. For example, the long term objectives of the US were still manifest within new policy but they appeared in a different form that essentially provided a response to a different threat. Leffler (2011, n. pag.) argues that 9/11:

…did not change the world or transform the long-term trajectory of US grand strategy. The United States’ quest for primacy, its desire to lead the world, its preference for an open door and free markets, its concern with military supremacy, its readiness to act unilaterally when deemed necessary, its eclectic merger of interests and values, its sense of indispensability – all these remained, and remain, unchanged.

This summary of the ultimate goals of US foreign policy draws attention to the fact that very little has changed. Although the British government supported the invasion of Iraq in the wake of 9/11, the fact that the United Nations Security Council refused to pass a resolution condoning the use of force did not prevent the launch of Operation Iraqi Freedom (Hybel, 2014). This is evidence of the readiness to act unilaterally when it serves American interests. Gaddis (2004) concurs, noting that US self-interest remained the same, with very little consideration of the long-term strategy that intervention elsewhere would require. Bolton (2008, p. 6), on the other hand, agrees that many of the changes to US foreign policy were made immediately but disagrees with the assertions of Leffler and Gaddis concerning their long-term impact. Bolton (2008, pp. 6-7) asserts that the changes have had a longer-term impact, albeit one that has diminished over time, as a result of the enduring nature of the national security policy and its evolution to accommodate the threat of terrorism in the wake of 9/11. Although this provides a dissenting voice in one respect, it demonstrates consensus on the fact that the changes in US foreign policy post-9/11 were a direct response to a new global threat but were implemented alongside existing strategic goals. In effect, the approach may have changed but the ultimate objective had not.

Conclusion

In conclusion, the analysis here has identified and discussed several changes that occurred within US foreign policy post-9/11. There can be little doubt that there was a distinct shift in focus towards the need to deal with terrorism after the first attack on American soil in sixty years. Similarly, the policy content evolved to adopt a more humanitarian approach to global crises and a proactive and pre-emptive approach to potential threats. All of these changes did mark a departure from what had gone before in some way. However, although the majority of changes were incorporated into foreign policy within two years and were all undoubtedly a response to the attack and its causes, there is significant evidence to suggest that such actions provided an extension of the foreign policy doctrine that had gone before. For example, although the focus of foreign policy shifted from the old Cold War objectives of containment and deterrence to terrorism, the interest policymakers took in rogue states like Iraq was simply a continuation of established ideologies of ensuring freedom and democracy. Similarly, the US administration of foreign policy changed very little in terms of its determination to act unilaterally where necessary and to lead the world in a battle against the latest threat to global security. As such, it is possible to conclude that many of the changes to US foreign policy in the post-9/11 era were a response to the evolving security threat posed by terrorism. Furthermore, it was necessary for policy to evolve in order to accommodate strategies that address modern problems that were not as much of a priority in the late 20th century. However, whilst those changes made an immediate impact on foreign policy, they did not alter its long-term course, because that remained firmly focused on the outcomes of action elsewhere in the world in relation to American interests.

Bibliography

Bentley, M. & Holland, J., (2013). Obama’s Foreign Policy: Ending the War on Terror. Abingdon: Routledge.

Bolton, M., (2008). US National Security and Foreign Policymaking After 9/11. Lanham: Rowman & Littlefield.

Bush, G., (2001). President Bush Addresses the Nation. The Washington Post. [Online] Available at: http://www.washingtonpost.com/wp-srv/nation/specials/attacked/transcripts/bushaddress_092001.html [Accessed 3 October 2015].

Cameron, F., (2006). US Foreign Policy After the Cold War. Abingdon: Routledge.

Gaddis, J., (2004). Surprise, Security and the American Experience. Cambridge, MA: Harvard University Press.

Holland, J., (2012). Selling the War on Terror: Foreign Policy Discourses After 9/11. Abingdon: Routledge.

Hybel, A., (2014). US Foreign Policy Decision-Making from Kennedy to Obama. Basingstoke: Palgrave Macmillan.

Jentleson, B., (2013). American Foreign Policy. 5th Edition. New York: W. W. Norton.

Johnstone, A. & Laville, H., (2010). The US Public and American Foreign Policy. Abingdon: Routledge.

Lebovic, J., (2007). Deterring International Terrorism and Rogue States. Abingdon: Routledge.

Leffler, M., (2011). September 11 in Retrospect: George W. Bush’s Grand Strategy Reconsidered. Foreign Affairs. [Online] Available at: https://www.foreignaffairs.com/articles/2011-08-19/september-11-retrospect [Accessed 3 October 2015].

Levy, J., (2013). Preventative War and the Bush Doctrine. In S. Renshon & P. Suedfeld eds. Understanding the Bush Doctrine: Psychology and Strategy in an Age of Terrorism. Abingdon: Routledge, pp. 175-200.

Litwak, R., (2007). Regime Change: US Strategy Through the Prism of 9/11. Baltimore: Johns Hopkins University Press.

Miles, A., (2013). US Foreign Policy and the Rogue State Doctrine. Abingdon: Routledge.

Nadkarni, V., (2010). Strategic Partnerships in Asia: Balancing Without Alliances. Abingdon: Routledge.

Ryan, D., (2008). 9/11 and US Foreign Policy. In M. Halliwell & C. Morley eds. American Thought and Culture in the Twenty First Century. Edinburgh: Edinburgh University Press.

Shannon, R., (2009). Playing with Principles in an Era of Securitized Aid: Negotiating Humanitarian Space in Post-9/11 Afghanistan. Progress in Development Studies. 9:1, pp. 15-36.

Software Engineering Groups Behaviour Essay

Factors And Issues That Influence The Behaviour Of Software Engineering Groups

Most presentations on software engineering highlight the historically high failure rates of software projects, of up to eighty percent. Failure takes the form of budget overruns, delivery of solutions that do not comply with specifications, late delivery and the like. More often than not, these failure rates are used to motivate the use of software engineering practices, the premise being that if adequate engineering practices were utilised, failure would become the exception rather than the rule. Best practices and lifecycles have been proposed and tailored to the various paradigms that the computer and information sciences throw up in rapid succession. There is extensive debate, within academia and beyond, on what works and what does not. The consensus is that what is best depends on the problem at hand and the expertise of those working on it.

A few software engineering group models have been popular in the history of software development. Earlier groups tended to be hierarchical, along the lines of traditional management teams. The project manager in charge did not necessarily contribute in a non-managerial capacity; he or she was responsible for putting teams together, had the last word on accepting recommendations, and delegated work to team members. Later groups worked around one or more chief programmers or specialists. The specialists took charge of core components themselves and were assisted by other group members in testing, producing documentation and deployment. More recently, collegial groups have become common. Here, people with varied specialisations form groups in which they organise themselves informally, assuming roles as needs arise.

The advantage of a particular model over the others becomes evident only in the context of specific projects. The hierarchical model is best suited to relatively large projects that are decomposable into sub-goals that can be addressed by near independent teams. This is usually possible for software tasks that are very well defined, that need reliable and quality controlled solutions, particularly those that are mission critical. A large project may inherently require many people working on it to successfully complete it, if it were to be deployed in multiple sites, for instance. Alternatively, a large group may be assembled to expedite delivery. In either case, structured organisation and well-defined roles facilitate coordination at a high level.

A central problem with adding people to expedite delivery, or otherwise, is that the effectiveness of a group does not scale linearly. One person joining another does not mean that the two are collectively twice as productive. More importantly, the contribution of the seventh person in a seven-person group is a fraction of the contribution of the second person in a two-person group. This is due to the additional overheads in communication and coordination as group size increases, and to the dilution of the tasks assigned to each individual member. As is evident, this is a problem for any group; however, in very large groups the problem is exacerbated.
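
To make the scaling argument concrete, the short sketch below (Python, purely illustrative, with hypothetical group sizes) counts the pairwise communication channels in a group, assuming every member must coordinate directly with every other member. Head-count grows linearly while coordination overhead grows roughly quadratically, which is one reason the seventh person adds less net output than the second.

```python
# Illustrative only: counts pairwise communication channels, assuming
# every group member must coordinate directly with every other member.
def communication_channels(group_size: int) -> int:
    """Number of distinct pairs in a group of the given size."""
    return group_size * (group_size - 1) // 2

if __name__ == "__main__":
    for size in (2, 3, 5, 7, 10):
        print(f"{size:2d} members -> {communication_channels(size):3d} channels")
    # 2 members -> 1 channel, 7 members -> 21, 10 members -> 45: coordination
    # overhead outpaces head-count, so per-person contribution is diluted.
```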

In hierarchical settings, group members do not have a sense of ownership of the bigger solution, and this may be reflected in their productivity. Because decision-making power is concentrated in particular individuals according to the hierarchy, the success of processes ultimately lies with them. A lot rides on their ability to pick the best practices and recommendations, delegate effectively and keep track of the bigger picture. In quality-controlled or mission-critical settings, there are not many alternatives to having large hierarchical groups with redundant contributors.

Primarily in non-commercial settings, a single specialist engineers a complete software solution. Invariably, the solution, being a prototype, is accessible only to other specialists; it is not designed for general consumption and is put together without going through most of the recommended processes in software engineering lifecycles. Single programmers tend to practise evolutionary programming. This involves producing a quick working solution followed by repeated reworking to make it more accessible for future review, incremental development and peer review or development. If demand for such a solution gains momentum, for either its general utility or its commercial viability, the core solution will most likely be adopted for further development by a larger software engineering group. It stands to reason that the core developer, who is most familiar with the solution, retains the last word on further technical development, with other members organising themselves around the chief programmer.

In general, some form of incremental development, and periodic redevelopment from scratch, of software solutions is common regardless of group model. The first incrementally developed solution tends to be the least well-engineered and is a patchwork of poorly designed and tightly coupled components. This reflects the difficulty involved in producing quick solutions with new tools and techniques and inexperienced software engineers. Held back by the high immediate cost of reworking solutions, incumbents from previous software development cycles spend a lot of their post-deployment time supporting and patching what they produced.

In collegial groups formed in smaller organisations or departments, software engineers assume roles as needs arise. Brainstorming may be carried out by all members and design approved by consensus but development may be carried out by a few individual members, while the others gain feedback from end-users, keep track of competitor solutions and the like. In the initial phases of a software development life cycle, the problem definition, feasibility study and system analysis phases, end users of the system and independent specialists may form part of the group. During the design and implementation phases, a disjoint group of outsiders could merge with the team. The external members may then be invited for their feedback post implementation during the quality assurance and maintenance phases. Generally, best practise suggests that groups should be adaptive or loosely structured during the creative phases and become more structured as the design becomes clearer.

Groups with loosely defined structures are the most flexible in adapting to changing user needs. However, the greatest risk of project cancellations and overruns is ill-defined and changing requirements. Adaptiveness, to an extent, is crucial: given that users change requirements so compulsively, lacking adaptiveness completely would make an engineering group unviable. If group size is variable, the learning curve of new entrants must be kept in mind. A project manager hiring additional developers late in the software development cycle, after missing a deadline say, must factor in delayed contributions from the newcomers as a result of the time they take to familiarise themselves with the project and the time lost in coordinating their joining the group.

Following this, the next most common cause of failure is poor planning or management. If the person taking on the role of project manager has poor management or planning skills, the likelihood of which is heightened by the fact that each group member is called upon to serve in diverse capacities, projects are destined to fall over.

A number of reasonable software engineering guidelines are commonly ignored by software engineers. Using descriptive names for variables when programming is a good example. A section of program code will, when reviewed, make immediate sense to its author for a reasonably long period. However, if the code is not documented sufficiently, which includes using descriptive variable names and writing with the correct intended audience in mind, it takes another programmer a considerable amount of time to understand what has been implemented. In the extreme, some programmers obfuscate because they can, or to ensure that only they will ever understand what they have written, thereby making themselves indispensable. The potential for doing a half-hearted job of writing code is obvious, in that poorly structured and poorly designed code is functionally indistinct from well-structured code and is a less demanding task to produce. If software projects were evaluated only on their functionality, this would not pose a problem, but upgrades and patches require someone to review the code and add to it or repair it in the future. The long-term cost of maintaining software that is not well designed and documented may rise exponentially as older technologies are phased out and the pool of people competent to carry out repair and review shrinks. In essence, this is an instance of a quality control problem.
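
As a hedged illustration of the point about descriptive names and documentation (the function, names and figures below are invented for the example rather than drawn from any real project), compare the two equivalent Python fragments: both compute the same result, but only the second is immediately reviewable by a programmer other than its author.

```python
# Two functionally identical fragments; only the second explains itself.

# Poorly named and undocumented: the intent is opaque to a later reviewer.
def f(a, b, c):
    return a * b * (1 + c)

# Descriptively named and documented: the same calculation, now reviewable.
def order_total(unit_price: float, quantity: int, tax_rate: float) -> float:
    """Return the total cost of an order including tax.

    tax_rate is a fraction, e.g. 0.2 for a 20% tax.
    """
    return unit_price * quantity * (1 + tax_rate)

# Both produce the same value; only one documents what that value means.
assert f(10.0, 3, 0.2) == order_total(10.0, 3, 0.2)
```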

Uncontrolled quality problems are the third most common cause of cancellations and overruns of software projects. It is convenient to group documentation along with quality control, as they should be reviewed in tandem in a software development lifecycle. The first casualties of a late-running project are quality control and documentation. The long-term costs of skimping on either have been illustrated above, but there are short-term costs as well. In both evolutionary engineering, common among specialist-centred groups, and component engineering, commonly employed by hierarchical groups, the quality of each revision or component affects the quality of subsequent revisions or combined components.

The next most common causes of failure are unrealistic or inaccurate estimates and the naive adoption of emerging technologies. The blame for the former rests with both users and planners or project managers. Most engineering groups are unrealistically optimistic about the speed with which they can deliver solutions. Their estimates may be accurate for prototypes, but in actual deployment, conformance to specifications, human-computer interfaces, quality control, training and change management are essential and take time. Users have a poor understanding of how descriptive their specifications are and much too often assume that implementers are contextually familiar with the environments in which they work and intend to use the system. Project managers and implementers have an affinity for emerging technologies, ignoring their core competencies, which are more likely to lie in established, proven technologies.

Success among software engineering groups is a function of planning and execution. The responsibility for planning falls on a project manager. A manager must draw on the best a group has to offer, appreciate software and technical concerns, facilitate communication and coordinate the group's effort. Enforcing quality standards from the beginning, by adopting design and programming guidelines for example, helps formalise expectations. A project manager with a technical background has the advantage of understanding the position of other technical members, is likely to communicate more effectively with them and has the opportunity to lead by example. Given the emphasis on planning, it is worth noting that it can be overdone: over-engineering is not ideal engineering. It is often convenient for a single developer to take the lead on coding, while other developers and end-users concurrently test the developing solution for functionality, usability and quality. Execution in isolation is likely to result in solutions that developers are comfortable with, and even proud of, but that end-users find lacking. The various stakeholders must be simultaneously and consistently involved throughout the development cycle of software projects.

The greater the communication between specialist designers and specialist implementers, the more successful the group will be in terms of the quality and ease of use of its solutions. The technical crowd in a software engineering group tends to see the problem purely in terms of simplifying, or making more elegant, their own contribution. The design crowd balances out this perspective by offering an alternative view, one more likely to be aligned with that held by end-users and unconstrained by technical considerations. Ultimately, end-users must be given an opportunity to have their say: the solution is theirs.

Changing requirements and specifications may be an acceptable excuse, from the user's perspective, for delays in final solution delivery. Many projects are twenty percent complete after eighty percent of the initially estimated time; more people are brought in to expedite the process, budget overruns follow and sub-par solutions are delivered, albeit late. Given how frequently this happens, project managers should factor possible requirement changes into their estimates before commencing projects, to arrive at figures that are more realistic, as in the toy calculation below.
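
The following toy calculation (Python, with made-up figures and a simple expected-value adjustment that is an assumption of this sketch, not a published estimation model) shows one way a manager might fold the historical likelihood of requirement changes into a baseline estimate.

```python
# Toy schedule estimate: pad a baseline figure by an allowance for the
# historically observed likelihood and cost of requirement changes.
def padded_estimate(baseline_weeks: float,
                    change_probability: float,
                    rework_fraction: float) -> float:
    """Expected duration if a requirements change adds rework_fraction extra
    work with probability change_probability (all inputs are illustrative)."""
    return baseline_weeks * (1 + change_probability * rework_fraction)

if __name__ == "__main__":
    # e.g. a 20-week baseline, a 70% chance of significant requirement change,
    # each change adding roughly 40% rework: quote about 25.6 weeks, not 20.
    print(padded_estimate(20, 0.7, 0.4))
```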

Call Dropping Syndrome with Mobile Routers

Research Call Dropping Syndrome in a Mobile Router Connection Used in a Vehicular Environment

Abstract

With the emergence of mobile automobile internet routers in the past five years, theorists and visionaries have begun to picture a world of widespread applications for them. From transportation infrastructure and inter-vehicular communication to mobile conferencing and business applications, the ability to access the internet during transportation is an increasingly valued concept. Yet mobile phones have internet services and cellular providers offer broadband 3G and 4G options, so why, amidst all of this integrated technology, does the mobile router become such a key component? Efficiency and performance. By leveraging the strengths of an integrated urban infrastructure and utilising multiple access points, the bandwidth and quality of service associated with mobile internet routing is rapidly increasing. Due to the rapid rate of motion and exchange, one of the most significant inefficiencies within mobile routing is handover latency, a potential lag in network resources while packets of information are exchanged between the mobile router and the new access point.

This research will provide a broad spectrum of theory and evidence regarding opportunities for moving towards a soft handover, reducing the performance losses and network degradation associated with hard handover switching behaviour. Further, predictions will be made for the future of mobile automobile routing services, highlighting particular concerns that must be remedied in the coming years in order to enhance industry performance.

Introduction

1.1 Research Problem

As internet integration and communications convergence have an ever greater impact on human existence, the exploitation of new and emergent technologies for increasingly mobile applications continues. One of the most debated advances in recent years is directly related to the integration of mobile internet into automobiles. With one leading firm, Autonet Mobile, currently supplying a proprietary technology to several key automobile manufacturers, the merits of mobile routing continue to be validated through commercial value and consumer investment. From a technical standpoint, router-network communication protocol is relatively standard when a static relationship is established; however, once this relationship becomes mobile, the handover requirements arising from changing access points can result in a breakdown in quality of service (QoS) and connection dropping behaviour. Using the NEMO basic support protocol, a mobile router is able to use Mobile IPv6 to ‘establish and maintain a session with its home agent…using bidirectional tunnelling between the mobile router and home agent to provide a path through which nodes attached to links in the mobile network can maintain connectivity with nodes not in the NEMO’. This brief explanation of a network architecture designed to maintain mobile consistency and reduce signal dropping behaviour is indicative of emergent technology in the mobile routing field, a capability with wide-scale applications across automobiles, trains, buses and other ground transportation networks.

Although Autonet Mobile is the most public corporation currently working towards the development and implementation of mobile internet in automobiles, it is unlikely that such market supremacy will continue into the future. With expectations of more integrated automobile systems, particularly those related to navigation and intra-traffic vehicular communication (accident reduction schemes), academics such as Carney and Delphus are already predicting a rich, network-integrated future for mobile computing and internet applications. Considering that QoS for other diverse communication options, including VoIP, remains of particular concern in the mobile computing community, more in-depth analysis of connection management and performance in a mobile environment is needed. The concept of mobile routers and a mobile internet connection through intra-vehicular integration is foreign to many consumers, even in this era of diverse technologies and increasingly advanced network architecture. Therefore, the fundamental value of this dissertation may be linked to more predictive analysis of future applications and systemic evolutions regarding these emergent technologies. Through a comprehensive review of the existing academic evidence in this field, as well as several examples of mobile routing technologies that are either currently in production or being field tested, the following chapters will firmly establish a rich, evidence-based perspective regarding technological viability, updates and version 2.0, and the future of mobile internet routing.

1.2 Aims and Objectives

Although wireless technologies have a longstanding history in internet protocol and architecture, the complexity of handover behaviour and connectivity in mobile router service continues to challenge developers to reconsider the merits of their design. Accordingly, as 3G and 4G mobile broadband networks are expanded across metropolitan and surrounding areas, the flexibility of mobile routers and intra-vehicular internet use is increasing significantly. Simultaneously, alternative technologies including the Autonet Mobile router exploit such interconnectedness to maximise wireless performance, conducting mobile handoffs of service as vehicles pass from one cell tower to another. The scope of this investigation is based on emergent technologies and connection dropping behaviour during in-motion computing. Therefore, a variety of sources will be explored over the subsequent chapters in order to evaluate the progress made in this field, practical applications and their relative reliability, and future opportunities for redesign and reconfiguration of mobile routers. In order to govern the scope and scale of this investigative process, the following research aim has been defined:

To evaluate the emergence of wireless router technologies for automobiles, comparing the connection dropping behaviour of mobile broadband networks and tower switching protocol in order to predict the viability of future applications and technologies.

Based on the variables addressed in this particular research aim, this investigation involves three primary data streams including evidence regarding the performance of mobile broadband routers and cards, the evidence regarding the performance of hand-off-based mobile internet access routers, and the opportunities for expanding this technology beyond its currently limited scope in future applications. As this investigative process involves the analysis of a broad spectrum of empirical findings in this field, secondary academic evidence forms the theoretical foundations of the background to this mobile internet problem. In addition, empirical evidence from actual network architecture is retrieved from existing applications of these distinctive technologies. Throughout the collection and analysis of this evidence, the following research objectives will be accomplished:

To identify the underlying conditions which contribute to connection dropping behaviour in mobile internet usage

To evaluate the structure of mobile internet architecture, highlighting the benefits and limitations associated with the various technologies

To highlight theoretical and emergent applications for mobile internet connections, expanding the scope of usage beyond just web surfing whilst driving

To offer recommendations based on the optimisation of network architecture according to a purpose-oriented protocol

1.3 Research Questions

Based on the aforementioned research aims and objectives, there are several key research questions that will be answered over the following chapters:

What expectations are manifested regarding mobile internet usage in vehicles, and how is such performance evaluated?

What opportunities are there for integrating mobile internet on a broader scale for more strategic, vehicular purposes (i.e. navigation, multi-vehicle communication, etc.)?

Are there specific benefits of a mobile broadband connection over tower handover behaviour and vice versa?

What will determine the future of mobile internet and how will such variables be identified and incorporated into the network architecture?

1.4 Structure of Dissertation

This dissertation has been structured in order to progress from a more general, theoretical background to specific mobile internet routing evidence. The following is a brief explanation of the primary objectives for each of the subsequent chapters:

Chapter 2: Literature Review: Highlighting an academic precedence established in this field over the past two to three years, empirical studies and theoretical findings are presented and compared in direct relation to the research aims and objectives.

Chapter 3: Research Methodology: This chapter seeks to demonstrate the foundations of the research model and the variables considered in the definition of the analytical research methodology.

Chapter 4: Data Presentation: Findings from an empirical review of existing mobile router architecture are presented, highlighting particular conditions, standards, and performance monitoring that govern functionality and performance.

Chapter 5: Discussion and Analysis: Returning to the academic background presented in Chapter 2, the research findings are discussed in detail, offering insights into the challenges and opportunities associated with current network architecture and mobile internet protocol.

Chapter 6: Conclusions and Recommendations: In this final chapter, summative conclusions are offered based on the entirety of the collected evidence, and recommendations for future mobile internet routing solutions are provided.

Chapter 2: Literature Review

2.1 Introduction

There is a broad spectrum of academic evidence relating to mobile internet, network architecture, and operational protocol. This chapter seeks to extract the most relevant studies from this wealth of theoretical and empirical findings in order to identify the key conditions and components associated with effective and high performing mobile internet in automobiles. Further, evidence regarding connection dropping syndrome is investigated in order to highlight those deficient characteristics that continue to detract from the overall performance of these various networks. Ultimately, this chapter provides the background findings that will be compared with practical applications of mobile internet routers in vehicular scenarios in Chapter 4. This analysis is designed to not only introduce the academic arguments regarding the functional architecture of mobile routing and its widespread potential applications, but to also compare the principles and practices that have been discussed across a diverse range of technological interpretations.

2.2 The Background of Mobile Automotive Routers

In 2009, emergent technology inspired by an increasing social demand for internet mobility and integrated online resources in automobiles began to make its way to the market. Carney reported on an American firm, Autonet Mobile, which viewed the future of integrated mobile wireless as handoff-based through existing cell towers rather than driven by mobile broadband cards. In essence, this proprietary technology leverages a similar communications standard to the 3G and 4G broadband routers that continue to be offered by mobile phone providers AT&T, Verizon, Sprint and others. Consumer analysis by Autonet determined that over 50% of consumers surveyed reported a desire for internet service in their cars, in comparison with just 16% who were interested in such technologies in the early part of the 21st century. Practical applications of mobile internet routers include direct streaming of navigation tools such as MapQuest and Google Maps to the vehicle, and benefits for business customers which include mail and file transfer capabilities or even online information sourcing. Uconnect Web is the service provider which ultimately links the user through the Autonet router to the internet, offering data speeds that have been reported as comparable to 3G technologies. By default, the broadcast range is around 150 feet from the car, differentiating the flexibility of use of this technology from PAN architecture.

Although the uptake of the Autonet router by such automotive producers as Chrysler and Cadillac was widely publicised, the general public reaction was not necessarily a market-shifting response. In fact, a leading analyst at direct competitor Ford criticised the Autonet router early in its lifecycle, suggesting that many consumers would not see value in investing in technology similar to that which they already pay for on their other mobile devices, especially when it is limited to the architecture of the vehicle. In spite of such predictions, by February of 2009 the Autonet router had received its first award, from Good Housekeeping magazine, for Very Innovative Products (VIP), a recognition directly oriented towards the new product's potential value for families in its integration of multiple devices within a single wireless hub. In 2010, Delphus reported significant increases in subscriber statistics, from around 3,000 vehicles in 2009 to over 10,000 by mid-2010, the direct result of strategic partnerships with such rental car giants as Avis and continued OEM partnering with Chrysler, GM, Volkswagen and Subaru. In spite of the more commercial value of this concept, what is most relevant to the scope of this investigation is the proprietary handover management technology that has emerged in the Autonet operating protocol. In fact, Delphus reports that, because of contractual partnering with multiple wireless telecom providers, Autonet is able to maintain consistent web streaming with very minimal ‘between tower’ signal fading in urban spaces. Considering that handover processing and seamless transfer of addresses between towers is one of the technologies developed under the NEMO project previously introduced by Lorchat et al., the commercial value of such initiatives could potentially be expanded to include a much more integrated traffic architecture and communication network.

In his exploratory evaluation of NEMO as a handover framework for mobile internet routing (particularly in multi-nodal vehicular applications for traffic navigation/communication), Ernst highlights particular challenges with maintaining quality of service under mobile conditions. In particular, he recognises that addresses must remain topologically correct, which involves addressing specific to the access network currently in use, the ability to change the IP subnet, and ultimately a change of location and routing directive. In order to maintain sessions and quality of service, Ernst introduces a communicative architecture based on a bi-directional tunnel between the home agent (HA) and the mobile node (MN), a connection which must remain dynamic and automatic whilst receiving bandwidth allocation from the access network. Such early work on the NEMO architecture established specific performance requirements, which included permanent and uninterrupted access to the internet, the need to connect simultaneously to the internet via several access networks, and the ability to switch to the best available access technology as needed. By default, this flexible architecture provides the following predicted benefits:

Redundancy, which reduces the link failures that arise in mobile environments

Ubiquity, which allows for a wide area of coverage and permanent, uninterrupted connectivity

Flexibility, which accommodates specific policies from users/applications and price-oriented competition amongst providers

Load sharing, to efficiently allocate bandwidth, limiting delays and signal losses

Load balancing

Bi-casting

The value of the NEMO protocol is that it allows for shifting points of attachment in order to achieve optimal internet connectivity. When a mobile node is on a foreign network, it is able to obtain a local address termed the Care-of Address (CoA), which is then sent to the home agent for binding. Once the binding is complete, the HA ‘intercepts and forwards packets that arrive for the MN to the MN’ via the bi-directional tunnel to the CoA. It is this binding and re-binding of different CoAs during mobility that ultimately allows for improved QoS, restricting the number of dropped connections and maintaining persistent internet connectivity in all areas where cell towers can be accessed. Within this architecture, binding updates are used to notify HAs of a new CoA, whereby the HAs send a binding acknowledgement that may be either implicit (no mobile network prefix option) or explicit (one or more mobile network prefix options). It is the underlying use of the IPv6 architecture which Moceri argues allows for more efficient tunnelling and more consistent security than IPv4 options, due to IPSec, the tunnelling mechanism, and the optional use of foreign agents.
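
To make the binding and forwarding behaviour described above more concrete, the following minimal Python sketch models a home agent that accepts binding updates and tunnels intercepted packets to the current CoA. It is an illustration of the general idea only; the class names, example IPv6 addresses and acknowledgement labels are assumptions made for this sketch and are not drawn from any actual NEMO implementation.

    # Minimal sketch of the binding and tunnelling behaviour described above.
    # Names and addresses are illustrative, not taken from a real NEMO stack.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class BindingCacheEntry:
        care_of_address: str                  # CoA obtained on the foreign network
        mobile_network_prefixes: List[str]    # prefixes announced by the mobile router

    @dataclass
    class HomeAgent:
        binding_cache: Dict[str, BindingCacheEntry] = field(default_factory=dict)

        def receive_binding_update(self, home_address: str, coa: str,
                                   prefixes: Optional[List[str]] = None) -> str:
            """Record (or refresh) the binding of a home address to a new CoA."""
            self.binding_cache[home_address] = BindingCacheEntry(coa, prefixes or [])
            # An explicit acknowledgement carries one or more mobile network prefix
            # options; an implicit acknowledgement carries none.
            return "explicit-ack" if prefixes else "implicit-ack"

        def forward_packet(self, destination_home_address: str, payload: bytes) -> str:
            """Intercept a packet addressed to the MN and tunnel it to the current CoA."""
            entry = self.binding_cache.get(destination_home_address)
            if entry is None:
                return "dropped: no binding for destination"
            return f"tunnelled {len(payload)} bytes to {entry.care_of_address}"

    # Example: the mobile router moves to a new access network and re-binds.
    ha = HomeAgent()
    ha.receive_binding_update("2001:db8:home::1", "2001:db8:cell-a::42", ["2001:db8:car::/64"])
    print(ha.forward_packet("2001:db8:home::1", b"navigation update"))
    ha.receive_binding_update("2001:db8:home::1", "2001:db8:cell-b::7")   # handover to a new tower
    print(ha.forward_packet("2001:db8:home::1", b"streamed audio"))

In this toy model, re-binding to a new CoA is all that is needed for traffic to keep flowing after a handover, which is the property the NEMO literature describes as persistent connectivity during movement.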

2.3 Mobile Routing and Network Architecture

One of the more recent evolutions of the mobile routing protocol is based on NEMO (Network Mobility), an architecture that is designed to flexibly manage a single connection or multiple connections to the internet, even during motion. Based on the standardisation of protocol and architectural features by the IETF in recent years, NEMO is quickly becoming a viable means of extending internet services, diversifying online communication, and establishing a mobile link between variable nodes. In their recent analysis of this architecture, Lorchat et al. suggest that IPv6 was designated as the best-fit solution to the network mobility problem, allowing the mobile router to change its point of attachment to the IPv6 internet infrastructure whilst maintaining all current connections transparently. The authors introduce a model-in-development application of the NEMO architecture, suggesting that a single home agent would act as a maintenance and exchange module, retaining information regarding the permanent addresses of mobile routers, their temporary addresses, and their mobile network prefixes. The primary challenge associated with intra-vehicular mobility of an internet connection in this model is that the automobile needs to perform handovers between wireless access points. Although such research is valuable from an early architectural standpoint (i.e. 2006 technology), the accessibility of wireless technology provided over mobile telephony suites via 3G and 4G technology is far advanced from a point-to-point handover protocol.

In a more in-depth review of the NEMO technology, other researchers have endeavoured to identify the key limitations and opportunities that are associated with particular orientations and architectural standards. Chen et al., for example, based their research on the viability of applying NEMO BSP within public transportation in order to provide mobile internet for all passengers. This research is extremely valuable for the development of effective router protocol in the future, as the authors propose that, in order to overcome the multihoming problem (i.e. a need to access multiple types of networks in order to reduce downtime and connection dropping), multiple routers, each equipped with just one type of interface, could be linked to improve quality of service. For their research, the mobile router is equipped with WLAN, GPRS, and CDMA interfaces simultaneously and an inter-interface handover algorithm is proposed for the signal exchange, whilst performance during handover is measured and analysed. To accomplish such network architecture, the authors needed to introduce multiple CoA registration, under which bi-directional tunnels could be established for each of the three networks without having to identify one network as primary over the others. Post-analytical conclusions suggest that MIPv6 and NEMO BSP are inappropriate for ‘delay sensitive applications due to handover latency of more than 1.5s’; however, multiple interfaces and different internet service providers can offer a means of transferring traffic smoothly from one interface to another.
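
The multihoming idea discussed above can be pictured with a short Python sketch that chooses among WLAN, GPRS and CDMA interfaces, preferring links whose handover latency stays below the 1.5s figure that Chen et al. associate with delay-sensitive applications. The scoring rule and data structures are invented for illustration and should not be read as the inter-interface handover algorithm actually proposed by Chen et al.

    # Simplified illustration of choosing among multiple registered interfaces.
    # The selection rule is an assumption for this sketch, not Chen et al.'s algorithm.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Interface:
        name: str
        signal_strength: float    # normalised 0.0 - 1.0
        handover_latency_s: float
        available: bool = True

    def select_interface(interfaces: List[Interface],
                         max_latency_s: float = 1.5) -> Optional[Interface]:
        """Prefer available interfaces whose handover latency stays below the
        threshold associated with delay-sensitive applications."""
        candidates = [i for i in interfaces
                      if i.available and i.handover_latency_s < max_latency_s]
        if not candidates:
            return None
        return max(candidates, key=lambda i: i.signal_strength)

    links = [
        Interface("WLAN", signal_strength=0.9, handover_latency_s=0.3),
        Interface("GPRS", signal_strength=0.6, handover_latency_s=1.8),
        Interface("CDMA", signal_strength=0.7, handover_latency_s=0.9),
    ]
    best = select_interface(links)
    print(best.name if best else "no usable interface")   # -> WLAN

Because each interface keeps its own bi-directional tunnel under multiple CoA registration, switching traffic to whichever interface this kind of rule selects is what allows a session to continue while one link fades.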

2.4 Alternative Schemes and Personal Area Networks

In spite of a more narrow broadcast scope, wireless personal area networks (WPANs) are increasing in popularity, basing short-range wireless communications on two distinct standards: IEEE 802.15.3 (High-Rate WPAN) and IEEE 802.15.4 (Low-Rate WPAN). Accordingly, WPANs are defined around a limited-range personal operating space (POS) that traditionally extends up to 10m in all directions around a person or object, whether stationary or in motion. LRWPANs are typically characterised by a limited data transmission rate of between 20 kb/s and 250 kb/s, requiring only minimal battery power and providing a transfer service for specific applications including industrial and medical monitoring. Conversely, HRWPANs offer a much higher rate of data transmission, from 11 Mb/s to 55 Mb/s, and are suitable for the transmission of real-time video or audio, providing the foundation for more interactive gaming technologies. In HRWPAN protocol, the formation, called a piconet, requires a single node to assume the role of the Piconet Coordinator (PNC), which is designed to synchronise other piconet nodes, support QoS, and manage nodal power control and channel access control mechanisms. Node functionality in the piconet architecture is defined as follows:

Independent Piconet: Stand-alone HRWPAN with a single network coordinator and one or more network nodes. Network coordinator transmits periodic beacon frames which other network nodes use to synchronise and communicate with network coordinator.

Parent Piconet: HRWPAN that controls functionality of one or more piconets. Manages communication of network nodes and controls operations of one or more dependent network coordinators.

Dependent Piconet: Involves a ‘child piconet’ which is created by a node from a parent piconet to extend network coverage and/or to provide computational and memory resources to the parent.

The value of the PAN architecture is based on its high mobility and innate flexibility, allowing single devices to operate as mobile routers and provide internet access to multiple devices. Moceri predicts that by integrating NEMO protocol into PAN network architecture, it is possible to use a particular device, such as a mobile phone, to provide continuous access to a variety of other devices. Ultimately, the future of this technology is directly linked to the inherent efficiencies associated with network operations and architecture. Ali and Mouftah reiterate that in order to maximise PAN uptake in the future, a variety of protocol-based concerns must be remedied and transmissions should become increasingly efficient. One instance of inefficiency identified by their empirical analysis is a threshold value for the exchange of packets which, once violated, results in an accelerated rate of rejection. This is a serious concern that must be addressed through design and development of the PAN standard.
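
The threshold effect reported by Ali and Mouftah can be pictured with the following toy Python model, in which a piconet coordinator carries offered traffic up to a per-beacon threshold and rejects the excess, so that rejections accelerate once the threshold is violated. The threshold value and the rejection rule here are illustrative assumptions, not figures taken from their analysis.

    # Toy model of the packet-exchange threshold effect: traffic up to the
    # threshold is carried, while the excess is rejected. Values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class PiconetCoordinator:
        threshold_packets_per_beacon: int = 50
        accepted: int = 0
        rejected: int = 0

        def offer_traffic(self, packets: int) -> None:
            if packets <= self.threshold_packets_per_beacon:
                self.accepted += packets
            else:
                # Everything up to the threshold is carried; the excess is rejected,
                # so rejection grows quickly once the threshold is exceeded.
                self.accepted += self.threshold_packets_per_beacon
                self.rejected += packets - self.threshold_packets_per_beacon

    pnc = PiconetCoordinator()
    for load in (20, 40, 60, 120):            # offered load per beacon interval
        pnc.offer_traffic(load)
    print(pnc.accepted, pnc.rejected)          # rejections appear only above the threshold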

2.5 Summary

This chapter has introduced the background concepts associated with mobile wireless internet in modern automobiles. In spite of the fact that the market is currently limited to just one strong, integrated firm, it is evident that, over the long term, opportunities for competition and alliance from other providers and service agencies are increasing. Consumers continue to demand additional connectivity and an increased standard of internet access. Unprecedented potential for redefining the future of internet mobility is currently manifesting itself throughout this industry, and as such leading agencies as the IETF continue to expand their investigative processes, the expectation of advancement remains high. Ultimately, one of the first challenges that must be addressed within this field is that of handover technologies, an area of mobile internet that accounts for the majority of performance-based losses. By focusing on such key transitional periods in the access process, the opportunity for systemic optimisation will be greatly enhanced. The following chapter will provide background regarding the research methods that were employed in the analysis and discussion of practical handover problems and their review in this field.

Chapter 3: Research Methodology

3.1 Introduction

This chapter presents the research methods that were employed in the collection and analysis of evidence regarding the viability of mobile routing technologies across intra-vehicular applications. Drawing on academic precedent in this field as well as guidance from theorists focusing specifically on data collection methods and analytical techniques, background is offered to validate the methodological decisions made over the course of this process. Ultimately, specific evidence regarding the research architecture and the various components integrated into the research process will be addressed, as well as the particular strategic and incidental limitations associated with the focus of this study and a multi-stream analysis of complex data.

3.2 Research Methods

The majority of research in this field focuses on case study evidence in which network architecture, internet protocol, and various limitations and opportunities are investigated via case study examples. Chen et al., for example, utilised three different mobile routers in order to investigate handover behaviour and network performance in a mobile vehicular network. Such experimental data serves to validate best-fit programming and architectural features, measuring handover time for packets of information across different conditions, including handover between the GPRS and CDMA interfaces of the MR in NEMO BSP. Although the value of such analysis was recognised early in this research process, the focus of this analysis is to differentiate between hard and soft handover architecture, a condition that can be evaluated within the context of existing technology. Therefore, the experimental research method was determined to prescribe too wide a scope of research for this study and was eliminated from the available options.

Other academics have leveraged the past theories and studies of other empiricists in order to conceptualise the foundations of a future defined by mobile vehicular internet connections. Gerla and Kleinrock, for example, explored a variety of different concepts on inter-vehicle communications (IVCs) and their applications in hypothetical transportation system architecture. Such research involved content analysis of past studies in which empirical findings and theories are cited as a means of predicting future adaptation and adjustment within the global architecture. Based on the research model presented in this study, it was determined that a comprehensive review of leading theories and findings in this field that were directly linked to the aims and objectives of this research would be a valuable research methodology.

Based on the review of past academic methodologies, a comprehensive content analysis of recent findings from empiricists and academics in this field was determined to provide a best fit research methodology. Krippendorff argues that analytical constructs ‘ensure that an analysis of given texts models the texts’ context of use’, limiting any violations regarding what is known about the conditions surrounding the text. Due to the complexity and technological variability of this topic, it was important to restrict interpretation of the findings and academic perspectives to their relative context, the foundations of which were ultimately defined early in the reports. From the application of mobile routing in pedestrian circles to vehicular mobile routing for public transportation purposes, the context was determined to be a driving factor in the protocol and architecture chosen for handover schemes in mobile internet connections. A total of six unique studies were identified as directly relevant to the investigation of soft handover technology and applications, and the general findings from these studies were then extracted and integrated into the following chapter. This data is directly relevant in predicting evolution in this industry and detailing opportunities for integrating soft handover technologies in order to optimise system performance in the future.

3.3 Ethical Concerns and Limitations

The evidence presented in the content analysis was all extracted from journal publications that are widely available to the public through multiple databases and online retrieval sites. Therefore, it was determined that, based on this research method, there were no ethical concerns relating to the data. There were, however, limitations imposed on the scope of the studies researched in order to ensure that the focal point of these analyses was directly oriented towards handover protocol and mobile routing architecture. The imposition of such limitations proved valuable because it allowed the research to be focused on specific conditions, outcomes, and opportunities regarding this topic that will be extremely relevant to future developments in this field.

3.4 Summary

This chapter has presented the research methods that were employed in the collection and analysis of secondary evidence regarding this widely debated topic. Recognising that inconsistencies in the review of one or two studies could result in innate research bias, six different studies were chosen from varying areas of focus in mobile routing technology. The findings are presented and discussed in direct relation to their independent context, with the exception of a few correlations that were drawn in order to link concepts and industry standards. The following chapter will present the findings from this content analysis in detail.

Chapter 4: Data Presentation

4.1 Introduction

This chapter presents a broad spectrum of academic theories, evidence, and predictions regarding the evolution of the mobile internet architecture. Whilst oriented towards the application of this technology in modern automobiles, the findings from a review of leading theorists in this field have demonstrated that the concept of handover management and strategic redefinition in mobile networks transcends the limited scope of this problem. Therefore, although current routing systems available in the marketplace may integrate different technologies or architecture than those discussed here, the focus of this research is ultimately on the evolution of the mobile handover between access points from a hard, delay-limited process to a soft, dynamic and integrated process.

4.2 The Current Problem

The modern consumer demands immediacy in all aspects of life, from food procurement to entertainment to communication. As the heterogeneous architecture of an integrated, internet-oriented society continues to affect product choices and consumer values, the notion of a functional, high-performing vehicular router has quickly been adopted by several leading automotive producers in the past several years. Labiod et al. define a mobile router as ‘a mobile node which can change its point of attachment to the internet and then its IP address’. Similarly, mobile networks involve a ‘set of hosts that move collectively as a unit’, leveraging a mobile router to perform gateway tasks and provide connectivity for the nodes that it serves.

Semiotic Framework Method at CSA


Children Support Agency Case Study
Introduction

The use and potential benefit of the Semiotic Framework method to system developers is examined by applying it to the case study information supplied regarding the Children Support Agency (CSA).

Analysis of Semiotic Framework

The framework as described by Kangassalo et al (1995) (1) refers to the work of Stamper (1987) (2) as it applies to information systems, and distinguishes four levels of properties: empirics, syntactics, semantics, and pragmatics. This is likened to a semiotic ladder running from the physical world to the social (business) world.

The semiotic framework consists of two main components, these being the human information functions and the IT platform considerations. Each of these is split into three sub-components.

Social World, developer activities would be:

To determine how best to match the negative responses of some staff to new technology with the high expectations of others, by designing a system which takes account of both. To ensure that legal aspects, such as compliance with the Data Protection Act (DPA) (3), are addressed. To ensure contractual information is protected in transmission. To meet the cultural standards held by those who work in an organisation whose purpose is to support disadvantaged young people.

Issues are:

Lack of computer literacy among some CSA staff; the organisation's status as a charity, which will probably restrict the funding available for the system; concern expressed for the protection of financial data versus an apparent lack of concern about the personal data of vulnerable young people; the wish to accommodate IT training for young people without recognising that this may give any who have anti-social tendencies access that could affect the overall operation of the system; and a lack of realisation that today's young people in the age range 12 to 24, whether or not from a deprived or difficult family background, may be conversant with the use of computers.

_________________________
1. Kangassalo et al, (1995), p 358.
2. Stamper et al, (1987), p 43-78.
3. Data Protection Act 1998.

Pragmatics, developer activities would be:

To attempt to resolve the conflicting attitudes expressed in conversation about the value of the system, and to consider capital and revenue funding for the new system.

Issues are:

To determine how the system would be supported, and who would be responsible for that support.

Semantics, developer activities would be:

To attempt to model how the syntactic structures, which are by nature technical concerns, are matched to the semantic structures, which concern the real world, in a machine-independent manner.

Issues are:

Reconciling security concerns, which are people-related, with system issues, which are software-dependent.

Syntactics, developer activities would be:

To formalise the documentation of the system specification and outline the programming requirements. This is the bridge between the conceptual and the formal rules governing system development.

Issues are:

The documentation may only be understood by the IT people who create the system.

Empirics, developer activities would be:

To estimate the number of data fields required, their volume, the speed with which they require to be transmitted, and the overall performance as perceived by the user.

Issues are:

Limited information available, combined with inability of potential users to express these attributes.

Physical World, developer activities would be:

To analyse existing systems, networks, hardware and software; to estimate storage and data-retention requirements; and to assess the physical condition of the room housing system equipment and communications, the power supply, entrance restrictions to sensitive areas, policy on the removal of media from buildings, printout handling, and access by young people to IT equipment.

Issues are:

Replacement of existing communication links, introduction of encrypted traffic, offsite storage of backups, disaster recovery, software licences, fire detection and suppression, volumes of data transmitted and stored, and the separation of young people's IT equipment.

System requirements specification

Hass et al (2007) (4) explain requirements analysis and specification as the activities involved in assessing information gathered from the organisation regarding the business need and the scope of the desired solution. The specification is the representation of the requirements in the form of diagrams and structured text documents.

Tan (2005) (5) describes the use of a rich picture as ‘a structural’ as opposed to a ‘pictorial’ description. It allows the practitioners to use any form of familiar symbols to depict activities and events, plus taking into consideration, conflicting views.

The definition of a use case (Seffah et al 1999) (6) is a simplified, abstract, generalised use case that captures the intentions of a user in a technology- and implementation-independent manner. Use case modelling is today one of the most widely used software engineering techniques for specifying user requirements. Dittrich et al (2002) (7) suggest a new approach to development which they term ‘systems development as networking’. They go on to suggest that the key question to ask is ‘How do systems developers recruit and mobilise enough allies to forge a network that will bring out and support the use of the system’.

Unified Modelling Language (UML) is described by Arrington and Rayhan (2003) (8) as a method employing use case and activity diagrams for requirements gathering. They state that use cases serve as a view of the overall use of the system, allowing both developers and stakeholders to navigate the documented requirements.

_________________________
4. Hass et al, (2007), p 4.
5. Tan, (2005), p 67.
6. Seffah et al, (1999), p 21.
7. Dittrich et al, (2002), p 311.
8. Arrington and Rayhan, (2003), p 28.

Rich Picture

(The rich picture produced for the CSA depicts the people and activities involved, in both the current system and the proposed future system; the diagram itself is not reproduced here.)

Use Case Diagram

See Appendix A

Primary Scenario

The likely outcome when the project specification is delivered is that the funding body will agree to the bid, but subject to some changes, which will reduce the overall cost.

This will involve a degree of compromise in the design of the new system. Suggestions may be made to re-enter the Excel data and to delay the phasing out of the financial system.

This would mean a phased project with an all-encompassing solution left to a later stage.

The impact may be additional effort on the part of CSA staff.

The system needs to be delivered in phases, with core functionality first. The successful delivery of core components will assist acceptance.

A key component is the security of information stored and transmitted, as much of it is of a sensitive, personal nature. The protection of information will require conforming to the requirements of the DPA (1).

Due to the number of area offices, each with few staff, the data repository will need to be centralised, probably at HQ. This is to simplify backups, which will need to consist of a weekly full backup stored offsite and daily incremental backups, with the most recent day's backup stored on site.

Communications between HQ and the branches need to be encrypted, and e-mail will require protected Internet access.

Anti-virus, anti-spyware, anti-phishing and spam filtering software will be required, and a firewall introduced between the Internet-facing component and the main system.

Rigid field input validation will be required to avoid erroneous numbers or characters.

Menus will be restricted so that users can access selected functions and are denied others, while Admin-level (privileged) accounts will be able to access all menus (an illustrative sketch of these controls follows at the end of this scenario).

The training for IT clients will need to be on a separate network segment from the main systems.

Compatibility between the existing financial system and the new system will need to be established, and the system will require the capability to import Excel data.

The system will be required to replace the functionality currently provided by the Excel data.
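
As noted in the scenario above, rigid field validation and role-restricted menus are key controls for the proposed system. The following Python sketch illustrates, under assumed field names, roles and menu entries that are not part of the CSA case study, how such controls might be expressed.

    # Illustrative sketch only: rigid field validation and role-restricted menus.
    # Field names, roles and menu entries are hypothetical, not from the CSA case study.
    import re

    FIELD_RULES = {
        "client_id": re.compile(r"^\d{6}$"),          # digits only, fixed length
        "postcode": re.compile(r"^[A-Z0-9 ]{5,8}$"),  # no erroneous characters
    }

    MENUS_BY_ROLE = {
        "admin": {"clients", "finance", "reports", "user_admin"},   # privileged: all menus
        "case_worker": {"clients", "reports"},
        "volunteer": {"reports"},
    }

    def validate_field(name: str, value: str) -> bool:
        """Reject input that does not match the rigid format defined for the field."""
        rule = FIELD_RULES.get(name)
        return bool(rule and rule.fullmatch(value))

    def allowed_menus(role: str) -> set:
        """Return the restricted menu set for a role; unknown roles get nothing."""
        return MENUS_BY_ROLE.get(role, set())

    print(validate_field("client_id", "123456"))    # True
    print(validate_field("client_id", "12A456"))    # False - erroneous character rejected
    print(allowed_menus("case_worker"))             # restricted menu set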

Questions
Developer questions to CSA staff:

How much funding is available for the proposed system and who are the stakeholders?

What facilities for computer systems exist at HQ: power, space, fire suppression, telecoms, operating staff, storage required, and records retained?

Who will support the new system when delivered?

What configuration does the finance system have: hardware, operating system, application software, network links, storage, number of users, support?

Will staff time be available for training?

Will only CSA staff use the new system and will they use it from home?

Will there be allocation of CSA staff for user acceptance?

Discussion of requirements analysis tools

The usefulness of the semiotic framework is that it offers the system developer an insight into the attitudes and feelings of the people who will use the proposed system. This aids the developer, in that he/she is more likely to pay attention to the human-computer interface (HCI) aspects of the system. This should, if properly delivered, make the new system easier to use and, consequently, received with more enthusiasm than might otherwise be the case. A key message, that core aspects should be delivered first rather than the full functionality required, may win more converts than might otherwise be the case. Also revealed by the use of the Semiotic Framework was the attitude of some of the staff, who see the requirement for the new system as superfluous to their ‘real’ work, and consequently wish no contact with it as they are too ‘busy’. This helps the developer, as it brings home the need to employ techniques that make the system simple to use and not forbidding in terms of the error messages it may produce in response to inexperience.

What the round of interviews in the case study revealed was some conflicting attitudes among CSA staff. A key example was mention of the need for protection of financial information, but the requirement to protect personal data of clients of the CSA, some of whom may have criminal records, was not mentioned. Given that failure to protect this type of information could lead to more harm to the individual than any help they may receive from the CSA, this is cause for concern, and seems to indicate that some of the CSA staff have lost sight of the organisation’s mission in life.

The interview process that produced the case study report failed to capture vital information that any system developer would require to produce a workable system. Basic items were not uncovered. For example, there is no information on the number of users, estimates of the amount of data to be stored, how long it is to be retained, or what kind of systems are currently in use. The availability of capital and revenue funding was not discovered, and it may well be that funding will be dependent upon the proposed design in terms of its capital cost and the revenue cost to operate.

The use of the rich picture and use case diagram illuminates the overall view of the required system, allowing the developer and the recipients of the system to see the whole picture and gain a better understanding of the likely finished product. It also simplifies the dependencies and collaboration required in a pictorial form which makes the ‘big picture’ easier to understand.

The importance of the Semiotic Framework is that it helps shed light on areas which the developer, using traditional systems development methodologies, may neglect. It concentrates the mind on the human-computer interface required, and influences the design attributes which need to be built in, in order to gain user acceptability. Taking the step-wise approach down through the levels brings home to the systems developer the need to start with the social needs, which focus on the human aspirations (or not) for the proposed system. Working through the Pragmatics is very revealing of the contradictory attitudes of the potential users in conversation, and should lead to the developer making compromises between technical elegance in the design and being able to obtain a favourable reaction from at least a majority of the eventual users. The scope of the system to be developed has not been revealed during the case study, which impedes the ability of the developer to estimate the size and nature of the hardware or software required.

The Syntactics level assists the design in that it forces concentration on the logical handling of data input, with the system's response to incorrect entry being handled not with abrupt error messages but with more friendly advice messages and suggestions on data re-entry. This tracks back to the importance of human reaction learnt from the Social World level. The software chosen should be influenced by the Pragmatics level, in that the choice should reflect the fact that the CSA is a charity, and both hardware and applications should be in the affordable range for an institution dependent upon charitable funding.

The Empirics portion of the framework should include the estimation of required system performance, speed of telecommunications, volume of data to be stored, and response times of the system. In the CSA case study there is no information which can be used to project such requirements, so the developer would be required to make an educated guess, based either on the existing finance system, which could be measured, or on practical experience. Some of the required information may be gathered by contact with whichever vendor delivered the existing finance system. The framework also draws attention to peripheral items, such as the Excel spreadsheet, which may well contain valuable data, not subject to strict input criteria, and possibly not backed up.

The Physical World portion of the framework focuses the developer's mind on what will be acceptable to the users in terms of speed of response, the time and effort potentially to be saved, and the information-reporting capabilities of the system. It emphasises that there need to be demonstrable benefits in the way of management information, and therefore the capability to respond, which would otherwise have been unavailable. From a system developer's point of view, this is probably the section he/she would feel most comfortable with, as it consists of tangibles which can be translated into MIPS, baud rates, gigabytes, and other terms with which IT developers are expected to be completely conversant.

Probably the most difficult aspect of the framework for the developer is the Semantics level.

The reason for this is that it tends towards the abstract, and system developers, as a breed, operate mostly in a practical, exact, measurable fashion. They act as a translator between the business requirements as expressed by the stakeholders and eventual users, and the technical people who deliver the code, hardware and communications to realise the stated needs. The developer has to perform a balancing act between what are sometimes conflicting requirements and technical possibilities. This requires the ability to converse with, and understand, both participants in the overall project to deliver the required system. The use of the Semiotic Framework leads the developer to address these issues and attempt to develop a clear understanding of the CSA's business activities, as opposed to trying to force-fit them into a prejudged idea of the system.

The developer may reflect that the application of the Semiotic Framework forces undue attention on the people-related aspects of systems engineering, to the detriment of a design which embodies good technical practice and the protective aspects necessary to comply with any legal obligations. Against this, the developer's aim of attaining elegance and efficiency in design may be meaningless to the users of the system, whose main concerns are assistance in the capture of information, its ease of retrieval, and the management information it can produce. In short, how it can help improve the users' work practices and make life easier for them.

References

Arrington, C.T., Rayhan, S.H., (2003), Enterprise Java with UML, Second Edition, Wiley Publishing, Inc., Indiana, USA, p 28.

Clarke, S., et al, (2003), Socio-technical and Human Cognition Elements of Information Systems, Idea Group Inc, p 8.

The Data Protection Act 1998.
Available from: http://www.ico.gov.uk.what_we_cover/data_protection.aspx.

Hass, K.B., et al, (2007), Getting it Right, Management Concepts, p 4.

Kangassalo, H., et al, (1995), Information Modelling and Knowledge Bases, IOS Press, p 358.

Seffah, A., et al, (2005), Human-Centered Software Engineering - Integrating Usability in the Software Development Lifecycle, Springer, p 21.

Stamper, R., et al, (1987), Critical Issues in Information Systems Research, Wiley, Chichester, p 47-78.

Tan, J.K., (2005), E-health Care Information Systems, John Wiley and Sons, p 67.

Tipton, H.F., Krause, M., (2007), Information Security Management Handbook, 6th Edition, CRC Press.

MIS Within Security Department During London 2012 Olympics


This report analyses the need and the reasoning for a management information system (MIS) for the security department during the London 2012 Olympics. The report looks at the functions of the security department and how they will benefit from an effective management information system. Furthermore, the report discusses how management information systems are used for decision making and the importance of implementing such systems within any organization.

Executive summary

One of the most fundamental functions in any organization is the decision-making process. When one considers the economy we face today, many organizations come to appreciate the importance of being able to challenge competitors, gain advantages and make intelligent use of their resources. The core element of this is the process of making decisions. Information can be central in achieving management goals successfully. To facilitate decision making, it is imperative that managers have the correct information at the correct time to overcome the gap between needs and prospects. Furthermore, to aid improvements in the communication of information, adequate management information systems (MIS) are indispensable. Thus it is vital to have an appreciation of the management information systems used in an organization and to have effective integration by all levels of management. It is only then that there will be effective, profitable and constructive decision making.

Terms of reference

On the instruction of the senior manager, the security department was asked to evaluate and analyse its requirements for the duration of the London 2012 Olympics. Detailing what information will be required, and why it is important, plays a central role in reporting back to the senior manager.

Introduction

Regardless of the nature of an organization, every organization is filled with information. The information content of organizations is what makes the business function. The role of information in an organization is crucial. Information is important in order to allow an organization to plan, control, measure performance, record movements and assist with decision making. Management information systems are the conversion and collaboration of this information, from both internal and external data sources, into a format which is easily communicated to and understood by managers at all levels. Ensuring that information is well structured and effectively stored allows ease of access and timely and effective decision making. Larry Long and Nancy Long (2005, p. 370) describe an information system as: “a system is any group of components (functions, people, activities, events, and so on) that interface with and complement one another to achieve one or more predefined goals” (Donald, 2005). An information system may also be considered to be a generic reference to a technology-based system that does two things: providing information processing capabilities and providing information people need to make better, more informed decisions (Donald, 2005).

Management information systems are the result of a combination of internal and external sources; they provide a means by which data/ information can be easily manipulated, stored, amended etc. Furthermore, management information systems coalesce all the essentials which assist in the decision making process.

Security is by no means limited to any one aspect of an organization, particularly when one considers an event as large and as globally involving as the London 2012 Olympics. For any organization, security covers the physical security of those involved, the security of buildings and offices, and the security of information technology, both physical equipment and cyber security. Assistant Commissioner Chris Allison released a brief on the security issues and concerns surrounding London 2012; his brief included all the ordinary security concerns, such as terrorism and petty crime, but also the danger of online ticket scams, potential protesters hijacking Olympic websites, and more sinister criminals (Hervey, 2010).

The overall vision for the London 2012 Olympic Games and Paralympic Games, agreed by the Olympic Board, is ‘to host an inspirational, safe and inclusive Olympic Games and Paralympic Games and leave a sustainable legacy for London and the UK’ (London2012, 2010). In order to achieve this, there are many threats, and many angles from which threats can occur, which need to be taken into consideration. Furthermore, in order to manage and ensure security, the information systems implemented must allow for effective decision making prior to the event and, most importantly, in the event of an untoward happening.

Findings and analysis

The security department cannot be limited to one specific function. The security department, especially for the London 2012 Olympics, will be involved in handling many aspects of potential threats to the people and systems involved in the Olympics. There are two primary areas for which the security department will be responsible: firstly, cyber security, and secondly, the security of the public.

Cyber security

As technology, its uses and its abuses expand at a hasty rate, so does the level of threat faced by organizations and their information systems. Information technology forms an important feature of the information systems in place today. Information systems define what needs to be done or managed, and the information technology aspect is how this is done. Therefore, technological advancements and the increase in their abuse are a major threat where the London 2012 Olympics is concerned.

A case study by students of the Pennsylvania State University looked into some of the major threats which organizations face in the form of IT threats. These included wireless network security, cryptography, access control, privacy control, risk management, and operating system security, including server threats, vulnerabilities and firewalls. These are just a handful of examples (Bogolea & Wijekuma, n.d.). Amongst these examples, and besides them, are many others which are a clear cause for concern for the London 2012 Olympics.

For any organization it is imperative to exercise control over its computer-based information systems. London 2012 needs to ensure that the computer-based systems, and those which rely on IT, are protected from threats, as the cost of errors and irregularities that may arise in these systems can be high and may even challenge the very survival of the event. An organization's ability to survive can be severely undermined through corruption or destruction of its database; decision-making errors caused by poor-quality information systems; losses incurred through computer abuse; and loss of computer assets and of control over how computers are used within the organization (Mandol & Verma, 2004).

Cyber security expert Professor Peter Sommer of the London School of Economics warned that computer security would be extremely important during the Games (Hervey, 2010). A case study which looks at the tragic deaths of two boys, 18 and 10 years of age, discusses how cyber security was at issue in relation to the Olympic Pipeline gasoline leak (Abrams & Weiss, n.d.). This is an example of the devastation to human life which cyber threats can cause, and when one considers this on the scale of London 2012, the number of people depending on optimum security becomes clear.

In order to combat this threat, information needs to be obtained from both internal and external sources. External information may range from information supplied by professionals in the cyber security industry to information from intelligence agencies. Terrorism is as much of a cyber-threat as is the computer virus or any other infection. Information systems will only be able to cope with and combat these threats if they are kept well informed through risk assessments of potential dangers. Furthermore, in order to overcome any unexpected threats, contingency planning forms an essential element of information systems development.

Risk assessment is an important step in a risk management procedure. Risk assessment is the determination of the quantitative or qualitative value of a risk related to an actual situation and a recognized threat. Maroochy Water Services, Australia, is an organization a world apart from London 2012; however, for the purposes of its cyber security improvement programme, the establishment of risk assessments played a key role (Abrams & Weiss, 2008). This example shows that the importance of risk assessments is by no means limited by industry, size of organization, or any other feature for that matter. Risk assessments provide a means for any organization to help avoid potential threats through prior consideration.

In addition to the information required for a risk assessment, there is the information required for a contingency plan. A contingency plan is a plan of action for when things go wrong; it is a backup plan. In order to overcome any type of disaster, information must be collated into a contingency plan. This would again form an essential part of the information systems, as it would be crucial in the event of a disaster.

People

The London 2012 Olympics will see several thousands of people from all over the globe in London. Amongst the visitors will be athletes, key visitors and reporters. Those visiting, together with those who already reside in the UK, add up to an increase in population, and thus there is a risk of an increase in crime. The crime can range from petty crime to terrorism. The common factor amongst all of it is that people need to be protected.

Security has been a crucial concern at the Olympics since the killing of 11 Israeli athletes and coaches at the 1972 Munich Games. Olympic planners have ramped up security following the September 11, 2001, attacks in the United States (Wilson, 2010). Inefficient management of the people involved in the Olympics, and of the public, can have devastating effects. This is a major concern at a time, and in a city, where terrorism is a real and potential threat.

In order to be able to implement information systems which can cope with, and appreciate, the requirements with regard to security, information needs to be collated from many sources. First of all, prediction is one of the very first decision-making elements which needs to be fulfilled by an information system in this situation. Information will be required regarding the number of athletes expected to be present during the Olympics, statistics from previous Olympic Games regarding the number of spectators and visitors the host country received, and finally the number of security staff and resources available at present. By means of computer-based prediction and analysis, the ability to protect and serve the public can be achieved. The information system may be used to obtain information concerning the number of staff required to patrol the streets, and the number of security staff needed to sufficiently protect the athletes and their trainers. Also, the locations which are expected to be busiest can be recognized, and thus will require more staff and a greater concentration of CCTV cameras.

In addition to predictions, there is the actual information which will need to be included in an information system: information about the number of police officers or security guards, in other areas or cities besides London, who can assist in providing security in this situation. This information may well form part of the contingency planning.

Where CCTV cameras, access controls, ID badges and the like are concerned, there is a need for information systems to collate and manage all of this information. Systems will be needed to record who accessed which area or building at which time through access/ID cards, CCTV will need to keep a recording of all activities captured, and databases will be needed to log the people working for the period of the Olympics and the athletes. This information will help to deter crime, provide an element of security and protect people.
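
As an illustration of the kind of access-control record keeping described above, the following Python sketch logs which badge accessed which area and when, and supports later review of the events for a given area. The field names and example values are hypothetical and are not drawn from any actual London 2012 system.

    # Hedged sketch of an access-control log: who accessed which area, and when.
    # Field names and example identifiers are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class AccessEvent:
        badge_id: str
        area: str
        timestamp: datetime

    class AccessLog:
        def __init__(self) -> None:
            self._events: List[AccessEvent] = []

        def record(self, badge_id: str, area: str) -> None:
            """Store one access event with the time at which it occurred."""
            self._events.append(AccessEvent(badge_id, area, datetime.utcnow()))

        def events_for_area(self, area: str) -> List[AccessEvent]:
            """Support later review, for example after an incident at a venue."""
            return [e for e in self._events if e.area == area]

    log = AccessLog()
    log.record("BADGE-0042", "stadium-gate-3")
    log.record("BADGE-0099", "media-centre")
    print(len(log.events_for_area("stadium-gate-3")))   # -> 1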

Conclusions

Information systems come in no set type or standard. They are the collation of several information sets to provide a well-integrated system used to make decisions. The London 2012 Olympics are like no other organization, and are on a scale grander and vaster than most other organizations deal with. It is this scale, and this large involvement of people, which in turn increases the risks and potential threats. The London 2012 Olympics is an enormous event and is expected to employ several thousands of people, and furthermore to attract several thousand spectators, reporters and others. An effective and accurate management information system is essential in order to ensure that the city hosting the event is able to effectively plan, control, record people and protect systems. Hudson Bank, for example, managed to overcome the problems it faced through adaptation of its information system; some of this was done using off-the-shelf software and the majority through establishment of customer requirements and communication essentials (Anon., 2008).

The security department is involved with many people and many types of threats, the most important two being securing people and securing systems. Cyber threats can not only damage systems but can even halt the functioning of the event. In order to avoid this, it is important that the potential risks are assessed, that all that can be done to avoid them striking is done, and that contingency plans are set for action.

Another important aspect is protecting people. In order to do so, more staff would be required: police, community support officers, security guards and so on. This involves a large amount of moving people around, including staff from other cities and new recruits, so it is vital that this information is managed efficiently. The information systems should be able to cope with large numbers of people and provide effective and accurate predictions and decision-making results. As with all information systems, the number of information sources will need to be extensive in order to provide optimum results.

Recommendations

Taking into consideration the need for, and scope of, the management information systems for the London 2012 Olympics, and particularly the security department's involvement and requirements, the following recommendations are made:

The security department needs to ensure that all staff involved with the use of its information systems are fully trained. Any glitch can have dire effects on the rest of the system, and ignorance of any warning of a threat can also have horrific consequences. Training is not limited to the staff working with the systems; it is also important that staff working with people are trained to handle large numbers of people, overcome any problems, identify potential threats, and maintain the cooperation of the public in the event of a disaster.

Risk assessments and contingency plans should be in place for each and every aspect of security. Furthermore, all staff should be made aware of both of these reports, particularly the contingency planning. This will only help them do their jobs better and overcome any disasters. Informing staff will provide a more thoroughly aware workforce and the maintenance of security in the event of a disaster.

References

Abrams, M. & Weiss, J., 2008. Malicious control system cyber security attack case study. Case study. Australia: Maroochy Water Services.
Abrams, M. & Weiss, J., n.d. Bellingham, Washington, control system cyber security case study. Case study.
Anon., 2008. Banking on customer service. New Jersey: Hudson Bank.
Bogolea, B. & Wijekuma, K., n.d. Information security creation. Case study. Pennsylvania: The Pennsylvania State University.
Donald, A., 2005. Mastering information management. Prentice Hall.
Fitzpatrick, K., Fujimoto, Y., Hartel, C. & Strybosch, V., 2007. Human resource management: transforming theory into innovative practice. Malaysia: Pearson Australia Group Pte Ltd.
Hervey, L., 2010. Sky News. [online] Available at: http://news.sky.com/skynews/home/twin-terror-threat-to-London-Olympics-security-expert-warn/article/201003415579707 [Accessed 16 August 2010].
London2012, 2010. London 2012 sustainability policy. [online] London 2012. Available at: http://www.London2012.com/documents/locog-publications/London-2012-sustainability-policy.pdf [Accessed 6 August 2010].
Mandol, P. & Verma, M., 2004. Formulation of IT auditing standards. Case study. China: National Audit Office.
Wilson, S., 2010. Yahoo News. [online] Available at: http://news.yahoo.com/s/ap/20100806/ap_on_sp_ol/oly_London2012_security [Accessed 16 August 2010].


Legal, Ethical and Social Issues on Information Systems


Dissertation Proposal: An Examination of Legal, Ethical and Social Issues on Information Systems

Provisional Title

The Provisional Title of the Dissertation is as follows: “An Examination of Legal, Ethical and Social Issues on Information Systems”.

Brief Review of the Related Literature

We will begin our review of the related literature with a close examination of the literature concerning the definition of Information Systems. A clear definition of the concept of Information Systems is vital, because, as Currie shows, there is a great disparity between the extents to which clear concepts apply in a field such as chemistry compared with the academic discipline of management. “For example, physical chemists know exactly what they mean by ‘entropy’. Would-be scholars in the management field, on the other hand, have no shared precise meaning for many of their relevant concepts, for example ‘role’, ‘norm’, ‘culture’ or ‘information system’ all these terms are often fuzzy as a result of their unreflective use in everyday chat” (Currie 1999: pp.46). In this passage Currie eloquently sums up the task before us when we attempt to define the concept of Information Systems. The conceptual haziness and lazy use of concepts such as Information Systems in everyday usage as well as in academic circles has led to a situation in which providing a clear definition of the concept of Information Systems is a highly complex undertaking. For this reason it is probably not possible to provide a rigid and narrow definition of the concept of Information Systems, because any such definition will be criticised for its inability to incorporate the broad spectrum of features that management scholars understand by the term Information Systems. Many management scholars prefer this approach to the concept of Information Systems and the approach of Rainer is a clear example of this. She understands the concept of Information Systems to be a broad concept incorporating any number of activities that include the use of information technology to support management operations. “It has been said that the purpose of information systems is to get the right information to the right people at the right time in the right amount and in the right format” (Rainer 2009: pp.10). She looks closely at a range of concepts that fall under the umbrella term of Information Systems and argues that “one of the primary goals of information systems is to economically process data into information and knowledge” (Rainer 2009: pp.10).

The UK Academy for Information Systems agrees with the type of broad definition offered by Rainer and defines Information Systems as “the means by which people and organisations, utilising technologies, gather, process, store, use and disseminate information” (UK Academy for Information Systems 1999: pp.1). It is clear, therefore, that the term information Systems can be used and applied to a wide variety of activities.

Information Systems can denote the interaction between people, data, technology and knowledge and as a result Buckland also argues that a broad definition of the concept is desirable. As he explains, “information systems deal with data, texts and objects, with millions of these objects on endless miles of shelving, in untold filing cabinets, and on innumerable magnetic and optical devices with enormous data storage capacities” (Buckland 1991: pp.69). Buckland goes on to specify one of the most important reasons why a clear and concise definition of Information Systems is so difficult to attain. He argues that “any significant change in the nature or characteristics of the technology for handling the representations of knowledge, facts and beliefs could have profound effects on information systems and information services” (Buckland 1991: pp.69). In other words, Information Systems are likely to be affected by such an enormous variety of factors that a concise definition of the concept will probably always fail to include some important elements of the concept. It is for this reason that it is advisable for the purposes of this investigation to proceed in the same manner as the vast majority of the literature and therefore operate with a very broad and inclusive definition of the concept of Information Systems.

The next challenge that lies before us is to illustrate some of the most salient and prominent legal issues associated with Information Systems. Sacca defines one of the major challenges in the relationship between Information Systems and legal issues when he states that “first of all, the Rule of Law is based on these unavoidable elements, among others: equality and freedom of citizens. How can the legal system put this element into effect in a highly technological society?” (Sacca 2009: pp.29). Sacca argues that legislation governing the use of Information Systems has existed for a long time, stretching back as far as the 1970s, but that such legislation must constantly be updated in order to be able to keep up with the pace of innovation. He therefore proposes, for example, a “dialogue between institutions and citizens based upon a ‘digital citizenship’” in order to fully exploit the relationship between Information Systems, the government and people and set up an e-government in which everybody who has access to a computer and Internet can participate.

As Sacca states, “democratic legal systems have to foster and promote civil and political rights also with reference to the use of ICT, against digital divide” (Sacca 2009: pp.29). However, the issue of electronic democracy is only one of many legal issues that have been raised by the development of Information Systems. Pollack argues that “we are living in an era in which we routinely deal with issues such as privacy, digital security, identity theft, spyware, phishing, Internet pornography and spam. These costly and time consuming concerns were completely foreign to the American public only a few years ago” (Pollack 2006: pp.172). It is clear, therefore, that there are a multitude of legal issues surrounding Information Systems. Adamski argues that how we deal with information and data is a critical part of how we function as a modern liberal democracy, and that the legal system must reflect this emphasis upon freedom of information. “Information, being an intangible and an entity that can be possessed, shared and reproduced by many, is not capable of being property as most corporeal objects do. Unlike corporeal objects, which are more exclusively attributed to certain persons, information is rather a public good. As such it must principally flow freely in a free society” (Adamski 2007: pp.1). It is clear, therefore, that legal issues are of vital importance with regard to Information Systems and that a multitude of issues must be examined in order to fully understand the relationship between Information Systems and the Rule of Law.

In the next section we will examine the extent to which ethical issues impact upon Information Systems. A study on the relationship between ethics and Information Systems has defined ethics as “the principles of right and wrong that individuals, acting as free moral agents, use to make choices that guide their behaviours” (Ethical and Social Issues 2010: pp.128). The study argues that the development of Information Systems has fundamentally transformed the relationship between management and ethics because new Information Systems give rise to a series of new ethical dilemmas. The study argues that “information systems raise new ethical questions for both individuals and societies because they create opportunities for intense social change, and thus threaten existing distributions of power, money, rights, and obligations” (Ethical and Social Issues 2010: pp.128).

Many of the ethical problems of Information Systems were foreseen by Mason in a famous study conducted in 1986 entitled ‘Four Ethical Issues of the Information Age’. In this study Mason argues that four ethical issues above all will dominate the information age, namely “privacy, accuracy, property and accessibility” (Mason 1986: pp.5). Mason raised a number of pertinent questions that are indeed still relevant today and help us greatly in our quest to fully understand the relationship between legal, ethical and social issues and Information Systems. For example, with regard to privacy Mason asked, “What information about one’s self or one’s associations must a person reveal to others, under what conditions and with what safeguards? What things can people keep to themselves and not be forced to reveal to others?” (Mason 1986: pp.5). At this point it is important to point out that whilst such questions are clearly ethical in nature, the answers that society provides to them have clear and profound social dimensions, and therefore ethical and social issues are inextricably linked with regard to Information Systems. As the study on Ethical and Social Issues points out, “like other technologies, such as steam engines, electricity, the telephone, and the radio, information technology can be used to achieve social progress, but it can also be used to commit crimes and threaten cherished social values. The development of information technology will produce benefits for many and costs for others” (Ethical and Social Issues 2010: pp.128).

Despite the fact that ethical and social issues are inextricably intertwined, it is important that we distinguish between the two concepts, and in the final section of this dissertation we will focus upon the social issues relating to Information Systems. Here we will examine some of the most prominent social issues that arise when dealing with Information Systems. Some of the social questions we will examine concern the extent to which society is affected by a move toward computer-based systems. What costs do societies incur by doing so and what benefits do they accrue as a result? Do increased levels of automation affect employment patterns and cause people in lower social classes to lose employment opportunities? Will the rise of Information Systems serve to strengthen or dilute class divisions? It is possible to argue that Information Systems serve only to expand the power of the rich, because they re-enforce existing prejudices against the poor. As Wilson argues, “the economic climate and the differential stratification of resources will define some work environments as ‘information-poor’ and others as ‘information-rich’, with consequent effects upon the probability of information-seeking behaviour and the choice of channel of communication” (Wilson 2006: pp.665). Another important social issue concerns the extent to which Information Systems will give rise to greater identity theft, in which ordinary citizens are the victims; the marked rise in the number of identity theft victims suggests that a number of negative social consequences have accompanied the birth of Information Systems.

Aims and objectives of the research

The aim of this dissertation is to encompass a broad spectrum of academic research in order to fully examine the legal, ethical and social issues surrounding Information Systems. In order to complete this task competently, we must first outline a clear structure for the dissertation. We will conduct this investigation in five distinct sections. In the first section we will seek to define the concept of Information Systems. This is a vital task, because we cannot adequately analyse the legal, ethical and social issues surrounding Information Systems without first clearly defining the concept itself. In the next three sections we will focus upon the legal, ethical and social issues in turn, beginning each section by defining some of the most important issues that are relevant to Information Systems in that field. Once we have defined the relevant concepts, we will move on to apply them to an organisation that clearly reflects a number of pertinent issues raised by the literature review.

We have chosen to focus upon the firm Panasonic, because it is an example of an organisation that has been greatly affected by the development of Information Systems over the last few decades and will allow us to fully explore the social, ethical and legal issues that arise when dealing with Information Systems.

Statement of the Design and Methodology

This investigation will allow us to critically evaluate the impact of legal, social and ethical issues upon Information Systems, focusing particularly on Panasonic. It is likely that this dissertation will take a considerable amount of time, and we will need to ensure that we have access to the relevant data and statistics necessary to support and justify our findings. The aim of this dissertation is to present a clear theoretical framework from which we can critically examine and evaluate the most important concepts within the title of this investigation. Once the theoretical framework has been established, we will apply it to the external world in order to analyse the extent to which it is supported by the realities of running a modern organisation.

Sources and Acquisition of Data

Throughout this dissertation we will focus primarily upon primary and secondary academic literature in order to establish the theoretical framework upon which this investigation will be based. If possible, it would also be useful to conduct some first-hand interviews with employees and managers of Panasonic in order to ascertain the impact that our theoretical framework has upon the company.

Method of Data Analysis

Throughout this dissertation we will employ both deductive and quantitative techniques as well as inductive and qualitative techniques. The literature review will be based primarily upon qualitative techniques, but we will also use quantitative techniques in order to compare the data and statistics found in our literature review with the evidence we assemble from the firm Panasonic. We will also use both deductive and inductive reasoning throughout this investigation and allow for the fact that the conclusions we reach may prove to be false. This type of hypothetical reasoning will strengthen our ultimate conclusions and findings.

Form of Presentation

The dissertation will be presented in written form, but where necessary relevant graphs, tables, charts and illustrations will be included in order to provide statistical data that support and justify the conclusions reached in this investigation.

References and Bibliography

Adamski, A., 2007. Information Management: Legal and Security Issues, pp.1-17

Buckland, M., 1991. Information and Information Systems. London: Greenwood Publishing

Currie, W., 1999. Rethinking Management Information Systems. Oxford: Oxford University Press

Ethical and Social Issues in Information Systems, 2010, pp.124-165. http://www.prenhall.com/behindthebook/0132304619/pdf/laudon%20MIS10_CH-04%20FINAL.pdf Accessed 26/07/2010

Mason, R., 1986. Four Ethical Issues of the Information Age. Management Information Systems Quarterly, 10 (1), pp.5-12

Pollack, T., 2006. Ethical and Legal Issues for the Information Systems Professional. Proceedings of the 2006 ASCUE Conference, pp.172-180

Rainer, K., 2009. Introduction to Information Systems: Enabling and Transforming Business. London: Wiley Publishing

Sacca, D., 2009. Information Systems: People, Organisations, Institutions and Technologies. New York: Springer Publishing

UK Academy for Information Systems, 1999. The Definition of Information Systems, pp.1-6

Wilson, T., 2006. On User Studies and Information Needs. Journal of Documentation 62 (6), pp.658-670

HMIS Research Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Open Health: A research prospectus on HMIS research
Introduction

Change management decision models based on shifts within the global economic order have forced administrators to seek new systems and relationships of oversight as organizations switch from traditional vertical work relationships to horizontal interactions. Much of the insight built into recommendations toward better change management models has been developed in scientific fields of practice. The interest in the management of knowledge by science communities, and especially the integration of practice into localized IT systems, has long been promoted by consultants and advisors to those fields, who look to channels of facilitation as viable strategies toward competition in the context of change. The popularity of IT systems management as a strategic model for practice field growth, as well as a core competency for institutional change, is well established. Cost-cutting, innovative IT knowledge-sharing networks expand the options of institutions and professionals. Competitiveness now equates with interfacing with the highest-calibre artificial intelligence in the advancement of human potential toward global solutions that promise to enhance a new generation of oversight.

Andrew Grove, former CEO of Intel, once observed that “only paranoid firms survive, primarily because they continuously analyse their external environments and competition, but also because they continuously innovate” (Hitt et al. 1995). Grove’s assertions are echoed by many corporate executives, who have become sold on the constancy of research and development as the single most powerful source of competitive capital in organizations faced with ‘new market’ competition. For instance, the equity of ‘value’ is a price statement or ‘proposition’, as well as a method of translating brand identity within the market through the illustrated performance of a product. For service organizations, structural response to delivery is still inherent to value. Practice settings are environments that seek opportunities to forge alliances between internal and external forces as they navigate against risk. Value increases continuously and incrementally as capitalization is realized in relation to those activities.

Early responses to the local-global equation looked to structural articulation in what became known as ‘matrix organizations’, which allowed for the retention of rational-analytical choice models with modified response through process-oriented incremental decision making. More recent organizational approaches, especially in capital intensive fields such as IT, offer support for the benefit of incremental decision making, with the salient distinction between the form and function of decisions. Content in both cases is driven by challenges to productivity, and executive direction is now more than ever forced to consider incremental decision making as a strategic option, despite the fact that rational choice inevitably overrides constant reinvention (Tiwana et al. 2006).

Responsive to the aforementioned challenges in the emergent healthcare environment, leaders looking to new IT HMIS operations systems are seeking change management solutions that will enable them to forge lean and agile strategic growth models in settings known for fiscal and resource waste.
Six Sigma approaches to analysis have allowed businesses to streamline operations through combined methodologies of analysis (Edgeman and Dugan 2008). In the past ten years there has been increased demand for seamless service between hospitals, clinics and multidisciplinary teams concerned with the wellbeing of patients and their families. Healthcare organizations seeking competitive and more efficient options for serving patients now look to IT Healthcare Management Information Systems (HMIS) for optimizing capacity, both in terms of finance and in the standard of care to patients (Tan and Payton 2010).

Despite the upfront costs of planning and implementation that go into the introduction of new IT systems into an existing HMIS setting, integrated operations enable the advancement of fiscal and other controls not previously realized due to time lapse, as well as precision in every step of the service provision process, from the decoupling point between allocations to the actual delivery of patient services. If efficiency in information is directly linked to a ‘duty of a reasonable standard of care’ within hospitals and healthcare institutions, the benefits to those organizations in terms of direction and better control of liability issues through information channels offer new promise in terms of comprehensive patient care through “patient-centric management systems,” and ultimately sustainable organizational growth (Tan and Payton 2010). The following research proposal outlines the development of HMIS in the medical field of practice in the United Kingdom.

Literature Review

The 1990s marked the dawn of knowledge sharing systems in the space science industry, with a landmark effort by NASA IT engineers to develop what would come to be known as a Competency Management System (an online system that maps individuals to their competencies). Out of that seed project, the NASA Engineering Network (NEN) was initiated in 2005 under the Office of the Chief Engineer in furtherance of the space agency’s knowledge-sharing capacity. Coinciding with a benchmarking study with the U.S. Navy, the U.S. Army Company Command, the U.S. Department of Commerce, and Boeing Corporation, the NEN enables engineers to reach “peers and experts through communities of practice, search multiple repositories from one central search engine, and find experts” (Topousis et al. 2009). The research study follows this idea and proposes to contribute to three (3) bodies of literature pertinent to the field of knowledge sharing: 1) the general history of IT integration as a change management strategy for the advancement of purpose in science; 2) studies on the development of IT networks of practice within the health science community in particular, and the development of health management information systems (HMIS); 3) literature dedicated to risk mitigation and compliance within legislative policy, and elements of security within institutional networks subject to oversight by chief information officers (CIO).

The invitation of recognized Technical Fellows, noted in their disciplines, to facilitate their respective communities of practice within the network set the pace for the portal integration of human resource tools such as jSpace. The platform can be utilized as a communication and research source for professional recruitment to projects and permanent roles. Links to related associations and professional societies offer participating fellows and partners access to an integrated contact source of engineers, “while fostering an environment of sharing across geographical and cultural boundaries.” The next step in the NASA NEN is incorporation into the larger NASA Enterprise Search, and the potential accommodation of oft-requested ITAR-restricted information. The extension of the NASA space science knowledge sharing concept has done two things: 1) furthered the advancement of space science objectives through KMS (Knowledge Management Systems) and PMS (Plan Management Systems) toward the design and launch of multinational space missions; and 2) extended the idea of an IT integrated field of scientific practice to scientists in distinct fields throughout the scientific community (Quattrone and Hopper 2004).

The emergent emphasis in organizational theory on IT Healthcare Management Information Systems (HMIS), as presented by Tan and Payton (2010), prompts inquiry into the integration of extended practice setting networks. For institutions interested in the advancement of IT platforms and software-driven databases as a solution to changing operations, approaches that succeed at meeting core competencies through risk reduction and resource maximization are the most sought-after technologies for the betterment of the ‘total’ organization. The new IT systems offer interconnectivity between operational units within healthcare institutions, and link human intelligence to logistics data analysis for in-depth insight into the history of expenditures and allocation requests. Some institutions have joined supply chain cooperatives in their region to further enhance the use of network logistics and stem the flow of fiscal waste – a persistent concern within healthcare organizations – saving literally hundreds of millions of dollars annually (Healthcare Finance News, 2010).

Healthcare Management Information Systems (HMIS) offer integrated systems platforms and applications to the entire range of chain operations management activities within and between institutions that provide patient care. Consistent with the emergent interests in organizational knowledge sharing networks, healthcare institutions are looking to IT solutions for a number of reasons, and especially the growing impetus toward: 1) healthcare provider connectivity; 2) increased focus in tracking and management of chronic diseases; 3) heightened patient expectations regarding personal input in care process; 4) market pressures driving hospital-physician alignment; and 5) advances in the technological facilitation of systems operability in this area (Tan and Payton, 2010).

Design of systems architecture still varies from institution to institution, as data management and interconnectivity may be distinct and also subject to existing ‘legacy systems’ issues that might be incorporated in the new HMIS model. The core competency of HMIS is the more ephemeral side of systems planning, the knowledge sharing path – where data and information become meaningful. The other key components of HMIS integration include: 1) the basic hardware, software and network schema; 2) process, task and system(s); 3) integration and interoperability aspects; and 4) user, administration and/or management inputs and oversight. For instance, IT HMIS designed to enhance the networking of financial operations in hospital institutions must be especially responsive to the growing complications in the US insurance industry, as product options such as bundled claims force institutions into synchronous attention to patients’ demands. Convenience and competitive pressures to supply those services supersede mere fiscal allocation in service to patients amidst conglomerate interests in the healthcare industry (Monegain, 2010).

Chief Information Officers (CIO) are critical to the administration and planning of HMIS systems, and in particular to security measures and the oversight of privacy protections. Unlike Chief Executive Officers (CEO), who serve as the primary responsible party for general governance, the CIO is more directly involved in the scientific praxis of organizational management, as precision in systems that retain data for record and for analysis toward organizational growth is in their hands. CIOs are increasingly drawn into this external environment based on the nature of transactional relationships, as they are called upon to find IT systems of accountability within their own institutions (cio.com, 2010). Regulation of computer and telecommunications activities in the UK’s Computer Misuse Act (CMA) of 1990 has an impact with regard to the stipulations pertaining to definitions of personal and professional use of HMIS by employees, partners and clients (Crown Prosecution Service Advice on CMA 1990).

Aims and Objectives of the Study

The aim of the research is to study successful approaches to knowledge sharing, risk reduction and resource maximization through HMIS IT systemization. The most sought-after technologies are those that expedite a ‘total’ organizational approach to information management. The goal of the research is to conduct a Six Sigma analysis of an IT based knowledge sharing infrastructure of a scientific community of practice. While space science provides a critical beginning for baseline assumptions, the study proposes to survey the development of HMIS in the medical field in the United Kingdom. The three (3) core objectives of the study on healthcare IT infrastructure will be: 1) a review of HMIS infrastructure as it is understood by healthcare administration in contract with systems engineers; 2) an assessment of fiscal accountability toward the goal of projected and actual capitalization on IT systemization in the practice setting; and 3) an examination of the significance of quality control of those systems in relation to government reporting and policy.

Methodological Considerations

Methodologies for the study will be implemented toward building a portfolio of practice on HMIS in the British healthcare industry, based on data drawn from the following sources:

1. Survey of lead UK health institutions

The structured Survey instrument will comprise fifty (50) questions and will be circulated in the HMIS practice community in the UK. A series of open queries at the end of the Survey will offer CIOs and IT administrators an opportunity to contribute unique knowledge about their systems.

2. Interviews with CIOs

Depth content for the research will be drawn from two (2) semi-structured Interviews with CIOs selected on the basis of data generated in the Survey. Findings on the development of HMIS onsite in those chosen institutions will open up a new field of inquiry into the actual challenges faced in the planning, implementation and ongoing maintenance of architectural systems as new enterprise systems come on the market. Policy and procedure will also be discussed, as well as extended referral networks.
3. Internet Research
a. Patient Research. Review of patient interface with HMIS portals at lead organizations and community healthcare providers.
b. Aggregate Index. Research data collected from healthcare industry indexes toward the furtherance of trend analyses.
c. Risk Management. Recommended best practices, policy and security protocols toward the risk management of fiscal information, institutional and staff privacy and the non-disclosure of patient records will be investigated. This will include a review of open source software as a protective measure, as well as sufficient firewalls, intrusion detection, and encryption (a minimal illustration of the encryption element is sketched below).
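
As a minimal illustration of the encryption element in item (c), the sketch below uses the open source Python cryptography package to protect a single patient record field with a symmetric key. It is a hypothetical example included for this prospectus rather than a prescribed part of the methodology, and the identifier shown is an invented placeholder.

# Minimal sketch of field-level encryption for a patient record value,
# using the open source 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key would be held in a key management service
cipher = Fernet(key)

patient_identifier = "0000000000"    # invented placeholder, for illustration only
token = cipher.encrypt(patient_identifier.encode("utf-8"))

# Only holders of the key can recover the original value from the stored ciphertext.
assert cipher.decrypt(token).decode("utf-8") == patient_identifier

Reviewing whether institutions combine this kind of field-level protection with firewalls and intrusion detection would form part of the risk management strand of the internet research.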

Sources and Acquisition of Data

Acquisition of data for the study will be conducted in three phases: 1) Survey; 2) Interviews; and 3) Internet. Phases 1 and 2 will focus on CIOs and other lead IT staff in selected UK healthcare institutions, and will incorporate information from the two instruments, augmented by information on the engineering consultancies they have worked with and by institutional documentation on HMIS and unit databases. Phase 3 will be conducted after the first two phases, to supplement the project with policy and other details.

Data Analysis

Examination of the standardized taxonomies of open source database repositories used in HMIS will serve to further the data analysis: Customer Relations Management (CRM); Electronic Health Records (EHR); Enterprise Resource Planning (ERP); Personal Health Records (PHR); and Supply Chain Management (SCM), dedicated to total operations management control, patient referral and professional knowledge sharing (Tan and Payton, 2010). Analysis of data on the project will be based on a Six Sigma, solutions-oriented approach, summarized in Table 1.

Table 1

Approach        | Description                                         | ITIL Area
Charter         | Defines the case, project goals of the organization | Policy and Procedures
Drill Down Tree | Process Drill Down Tree                             | Engineering Process & Unit Oversight
FMEA            | Failure Modes & Effects Analysis                    | Risk Assessment
QFD             | Quality Function Deployment                         | Compliance
SWOT            | Strengths, Weaknesses, Opportunities, Threats       | Planning and Implementation (ongoing for future inputs)
Trend Analysis  | Aggregate Narrative                                 | HMIS industry trends

Table 1: Six Sigma methodologies for analysis of HMIS survey, interview and internet archive sources.
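
To make the FMEA row of Table 1 concrete, the short sketch below shows how failure modes identified during an HMIS risk assessment might be scored with a Risk Priority Number (severity x occurrence x detection), the standard FMEA calculation used to rank which risks to address first. The failure modes and ratings are invented for illustration and are not findings of the proposed study.

# Hypothetical FMEA scoring for HMIS failure modes (illustrative only).
# Each factor is rated 1-10; RPN = severity * occurrence * detection.
failure_modes = [
    {"mode": "Patient record field left unencrypted", "severity": 9, "occurrence": 3, "detection": 4},
    {"mode": "Interface outage between EHR and ERP", "severity": 7, "occurrence": 5, "detection": 2},
    {"mode": "Duplicate patient identifiers created", "severity": 6, "occurrence": 4, "detection": 6},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank failure modes so the highest-risk items are remediated first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(fm["mode"], "RPN =", fm["rpn"])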

References

Computer Misuse Law, 2006. Parliament UK. Available at: http://www.publications.parliament.uk/pa/cm200809/cmhansrd/cm090916/text/90916w0015.htm#09091614000131
Crown Prosecution Service Advice on CMA 1990. Available at:
http://www.cps.gov.uk/legal/a_to_c/computer_misuse_act_1990
Edgeman, Rick L. and Dugan, J. P., 2008. Six Sigma for Government IT: Strategy & Tactics for Washington D.C. Available at: http://www.webpages.uidaho.edu/~redgeman/RLE/PUBS/Edgeman-Dugan.pdf
Hitt, Black & Porter, 1995. Management. Upper Saddle River: Pearson Education, Prentice Hall.
Jones, R.E., et al., 1994. Strategic decision processes in matrix organizations. European Journal of Operational Research, 78 (2), 192-203
Monegain, B. N.C. health system to launch bundled payment pilot. Healthcare Finance News, 22 June 2010. Available at: http://www.healthcarefinancenews.com
Quattrone, Paolo and Hopper, T., 2004. A ‘time-space odyssey’: management control systems in two multinational organizations. Accounting Organizations and Society 30, 735-754.
The imperative to be customer-centric IT leaders (2010). CIO.com. Available at: www.cio.com
Tan, J. and Payton, F.C., 2010. Adaptive Health Management Information Systems: Concepts, Cases, & Practical Applications, Third Edition. Sudbury, MA: Jones & Bartlett Learning.
Tiwana, A. et al. (2006). Information Systems Project Continuation in Escalation Situations: A Real Options Model. Decision Sciences, 37 (3), 357-391.
Topousis, D.E. et al., 2009. Enhancing Collaboration Among NASA Engineers through a Knowledge Sharing System. Third IEEE International Conference on Space Mission Challenges for Information Technology. Pasadena, CA: Jet Propulsion Laboratory.

Rights of Individuals Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

This essay will discuss three assertions: (i) that international law was not intended to deal with rights of individuals; (ii) that international law is not equipped to deal with rights of individuals; (iii) that individual rights should be the concern of domestic legal systems only.

We will deal with each of these in turn, with reference to international legal instruments and bodies. We will observe first of all how the rights of individuals, although falling outside the province of international law as it was conceived in the 1600s, began to seep into the framework of international legal rules over the centuries, eventually coming to prominence during the ‘human rights era’ that followed the end of the Second World War. We will consider secondly the various mechanisms that have been put in place by the international community in order to deal with the enforcement and observance of individual rights enshrined in international legal instruments. Lastly, we will critically assess the claim that questions about individual rights should be the sole concern of domestic legal systems.

The scholars who laid the intellectual foundations of international law in the Western world, like Hugo Grotius (1625) and John Locke (1690), all stressed in their writings that legal systems, be they domestic or international, were founded in natural law and commonly accepted standards of (Christian) morality. It may seem surprising, therefore, that for centuries the rights of individuals played no significant role in the framework of international law. International law, as the name suggests, was the body of legal rules governing the relations between states – ‘the law of nations’. Nation states, and not individuals, were the ‘subjects’ of international law. The behaviour of a state towards individuals within its own territorial boundaries was governed by its domestic legal system. Any interference by one state in the internal affairs of another, for whatever reason, was viewed as a violation of state sovereignty, and as a threat to stability in international relations.

It did not take long, however, for international law to begin to concern itself with the welfare of individual human beings. When this did start to occur, it was not because human compassion and religious morality had risen to the fore in international relations; it was motivated rather by the reciprocal political and economic interests of states. An early concern of nation states was the manner in which their diplomats and other nationals were treated when residing and conducting their business in the territory of another state, as noted by Louis Henkin (1989):

Of course, every State was legitimately concerned with what happened to its diplomats, to its diplomatic mission and to its property in the territory of another State. States were concerned, and developed norms to assure, that their nationals (and the property of their nationals) in the territory of another State be treated reasonably, ‘fairly’, and the system and the law early identified an international standard of justice by which a State must abide in its treatment of foreign nationals.

Once such norms were agreed between two states, it was no longer possible for either of them to assert that the treatment of individuals within its borders was a matter exclusively to be dealt with by its domestic legal system, a point that was stressed in an Advisory Opinion on Nationality Decrees Issued in Tunis and Morocco (1923) of the Permanent Court of International Justice (the forerunner to the International Court of Justice). However, although the rights of individuals were thus ‘internationalised’ to a limited extent, the international agreements in question did not permit states to take action against any state that was deemed to be violating the rights of its own nationals. The position under international law in this respect began to change with the developing doctrine of humanitarian intervention.

First expounded by Hugo Grotius (1625), the doctrine of humanitarian intervention allowed for limited exceptions to the rule that states were prohibited from interfering with the internal affairs of other states for the benefit of individuals within those other states. This could be done to stop the maltreatment by a state of its own nationals ‘when that conduct was so brutal and large-scale as to shock the conscience of the community of nations’ (Stowell 1921). The doctrine has been much abused throughout history, and is often invoked as a pretext for the invasion or occupation of weaker countries. However, it shows that states were becoming concerned with the welfare of individuals even when this was not directly linked to political and economic interests to be derived at the state level.

As we moved into the nineteenth century, a new wave of concern for human welfare sparked changes within the international system. European and American states abolished slavery and the slave trade, and international agreements were put in place to govern the conduct of war between states in such a way as to minimise cruelty and brutality in international conflicts. The Hague Regulations (1899) sought to codify principles of customary international law that had developed over time in relation to land warfare, making provisions to outlaw certain weapons that had proved particularly destructive to individuals on the battlefield and civilians, and to protect the welfare of prisoners of war.

This could not stop the catastrophe that was to unfold in the course of the First World War, which claimed more lives than any previous conflict in the history of humankind. In the aftermath of the War, the Covenant of the League of Nations (1920) came into force. This established the League and served as its constitution. Although it contained no express provisions dealing with human rights, it marked a substantial step forward in terms of international law recognising the rights of individuals, in three important respects. Firstly, it recognised the rights of individuals living in the colonial territories of the states that were defeated in the War, transforming these territories into League Mandates and affirming in article 22 ‘the principle that the well-being and development of [the native] peoples form a sacred trust of civilization.’ Secondly, article 23 of the Covenant stressed the need for ‘fair and humane conditions of labour for men, women and children.’ This was to pave the way for the creation of the International Labour Organisation under the Treaty of Versailles (1919). Many scholars, including Leary (1981), have stressed the importance of the ILO in improving working conditions for millions around the globe, and in turn making a significant contribution to the development of international human rights law. Finally, the League of Nations established a system for dealing with the protection of minority groups within certain states. A series of special treaties were concluded for the protection of ethnic, linguistic and religious minorities in several countries in central and eastern Europe (Hannum 1990). These treaties were supported by a relatively sophisticated (and successful) system of enforcement, whereby a committee accepted petitions concerning allegations that minority rights had been violated, with the possibility of the Permanent Court of International Justice rendering an Advisory Opinion on the legal merits (Stone 1934). International law showed itself to be more than equipped to deal with the rights of individuals belonging to minority groups during a short period between the two World Wars. This success was to prove short-lived.

The events of the Second World War, and in particular the systematic extermination of over six million European Jews by Hitler’s Nazi Germany, were to shock the world’s conscience. The notion of human rights, never before made explicit under international law, was to find its way into the Charter of the United Nations (1945), which was ratified after the War by most members of the international community. Although the rights accorded to individuals under the Charter were not as extensive as some had hoped (Robinson 1946), it nevertheless began its Preamble with the words ‘We the peoples’ of the United Nations – human beings, as well as nation states, had now become subjects of international law. Article 1(3) of the UN Charter states that one of the purposes of the UN is:

To achieve international co-operation in solving problems of an economic, social, cultural, or humanitarian character, and in promoting and encouraging respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion.

Article 55(c) also stresses the need for the UN to promote ‘universal respect for, and observance of, human rights and fundamental freedoms for all.’ The UN Charter was followed in 1948 by the Universal Declaration of Human Rights, which draws on documents like the French Declaration of the Rights of Man and the American Declaration of Independence (Eide 1992). This instrument, which has proved a driving force in the human rights movement, proclaims in article 1 that ‘all human beings are born free and equal in dignity and rights.’ The Universal Declaration of Human Rights was followed in 1966 by the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR). These Covenants create binding legal obligations for the states that have ratified them. Henkin (1977) emphasises that these states are therefore no longer free to claim that the rights contained in the Covenants fall exclusively within their domestic jurisdiction. International law has come a long way since the days of Grotius; there can be no doubt that individual rights are firmly enshrined within its framework.

If individual rights are plainly part of today’s international system, the next question that falls to be considered is whether international law is ‘equipped’ to deal with individual rights. We observed earlier how the League of Nations put in place a system of enforcement and observance for the minorities regime that existed during the inter-war years, overseen ultimately by the Permanent Court of International Justice. Various other mechanisms exist within the international system, and they have enjoyed varying degrees of success.

One of the most successful human rights enforcement mechanisms is the Human Rights Committee established under the ICCPR. The Committee exists to ensure that states that have ratified the ICCPR comply with the obligations they have assumed under it. State parties are required under article 40(1) ‘to submit reports on the measures they have adopted which give effect to the rights recognised [in the Covenant] and on the progress made in the enjoyment of those rights.’ Under the First Protocol to the ICCPR, the Committee will also accept petitions from individuals alleging that their rights under the Covenant have been violated by a state that has ratified the Protocol. The Committee has developed an extensive body of jurisprudence, which serves as a valuable tool in helping with the interpretation of the rights under the Covenant (McGoldrick 1994).

Regional human rights systems have also shown that it is possible to enforce the observance of individual rights in an arena other than the domestic legal system of a nation state. The European Court of Human Rights hears applications from individuals in member states of the Council of Europe concerning alleged violations of the European Convention on Human Rights (1950), a document that draws heavily from the Universal Declaration of Human Rights. Since the passing of the Human Rights Act (1998) into UK law, the English courts are obliged to take account of rulings of the European Court of Human Rights, which presents us with an interesting example of the interplay between domestic and international law in relation to the rights of individuals. Other regional bodies include the Inter-American Court of Human Rights and the African Commission on Human and Peoples’ Rights. Although less prolific and powerful than their European counterpart, these bodies have demonstrated that it is possible to enforce individual rights under international law.

Many who argue that international law is not ‘equipped’ to deal with individual rights point to the so-called ‘non-justiciability’ of economic, social and cultural rights, as well as third generation peoples’ rights. They aim to show, in other words, that by their very nature such rights are not capable of being determined judicially, unlike the sort of rights that arise ordinarily within domestic legal systems. In the context of civil and political rights, the argument goes, the individual holds a clearly defined right against the state, the violation of which can be tested in a court of law. However, it is said that ‘economic and social rights are not suitable for judicial consideration because of the wide range of issues that have to be taken into account and the uncertainty surrounding effective means of achieving the ends in question.’ While Article 2(1) of the ICCPR states: ‘Each Party to the present Covenant undertakes to respect and to ensure to all individuals within its territory and subject to its jurisdiction the rights recognized in the present Covenant, without distinction of any kind,’ Article 2(1) of the ICESCR states: ‘Each State Party to the present Covenant undertakes to take steps to the maximum of its available resources, with a view to achieving progressively the full realization of the rights recognized in the present Covenant by all appropriate means, including particularly the adoption of legislative measures.’

However, the Committee that oversees the ICESCR has rejected the ‘non-justiciability’ argument. In its General Comment No. 3 (1990), the Committee insists that Article 2 of the Covenant imposes concrete legal obligations, requiring states to realise minimum standards relating to each of the rights, utilising available resources in an effective manner. It follows, therefore, that although economic and social rights under international law may be different from the sort of rights normally found within a domestic legal system, that is not to say that they are not capable of enforcement. Methods of enforcement do need to become more effective, but several international bodies have shown that they are equipped to perform this role, often with very positive results.

We finish by dealing with the assertion that questions about individual rights should be the concern of domestic legal systems only. We can safely dismiss this assertion as ill founded with the help of an unlikely source: Hermann Goering, during the Nuremberg trials that took place in the wake of the Second World War, exclaimed: ‘But that was our right! We were a sovereign state and that was strictly our business.’ Germany’s sovereignty, in Goering’s view, shielded individuals involved in the atrocities of the Holocaust from accountability.

When domestic legal systems (like that in Nazi Germany) fail to prevent the murder and ill-treatment of prisoners of war, the murder and ill-treatment of the civilian population, and a policy of slave labour and the persecution and murder of Jews, it is right that the international community should step in to protect the rights of the individuals concerned. There can be no doubt that the international system is often ill-equipped to deal with atrocities that occur within state borders; the genocide in Rwanda in 1994 is a case in point. However, that is not to say that we should not keep striving to perfect the systems that do exist under international law. It may not have been conceived to deal with such issues, but international law has evolved into a corpus of rules with huge potential as a mechanism for the enforcement and protection of individual rights. Nation states would be wise to build on this potential rather than ignore it.

Various Methods of Remunerating Employees

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

Rewards can be defined as the compensation and benefits received by an employee in exchange for their services (Torrington et al., 2014). Remuneration forms an important subset of total rewards and comprises those elements that can be valued in monetary terms (Jiang et al., 2009). An effective remuneration strategy often underpins the success of a business, as it is considered one of the key factors in attracting and motivating human capital. Herzberg (1993) asserts that inadequate remuneration is one of the key factors causing dissatisfaction amongst employees.

The term remuneration is often associated with basic pay. However, remuneration is a much broader term and can encompass a wide range of techniques for rewarding employees in the form of salaries, bonuses, piece based remuneration, commission, employee stock options, fringe benefits, deferred considerations, performance related pay and profit sharing, amongst many others (Torrington et al., 2014).

One of the major challenges for organisations in the contemporary business environment is that of employee motivation. Motivation can be defined as ‘the degree to which individual wants and chooses to engage in certain specified behaviour’ (Mullins, 2002, p.418).

The purpose of this essay is to critically examine various methods of employee remuneration and assess their role in the strategic management of human resources for an organisation, by shedding light on their key advantages and disadvantages. The essay will conclude by analysing whether a particular method of remunerating employees fits all situations or should be preferred over other means of remuneration.

Different methods of remuneration
Performance related pay schemes

As the name suggests, performance related pay schemes reward employees by linking the level of reward with the performance of the employees (Perry, Engbers, and Jun, 2009). Typical examples of performance related pay include bonuses, commissions and deferred considerations. One of the key advantages of performance related remuneration is that it provides an effective means of rewarding by distinguishing between good and poor performers (Torrington et al., 2014). Other advantages of performance related pay are increased motivation amongst employees to improve performance, the ability to attract and retain high performers and talented individuals, and ultimately improved corporate performance (Torrington et al., 2014).

Nonetheless, empirical evidence highlights that performance related pay has often been ineffective (Frey and Osterloh, 2012). Frey and Osterloh (2012) also highlight that the link between increases in performance related pay and corporate performance has remained weak. Performance related pay is also criticised for inciting employees to take dysfunctional decisions, as it acts as an inducement for employees to take greater risks which could put an organisation’s survival at stake (Frey and Osterloh, 2012). Performance related pay may also lead to a conflict of interest for employees by inducing them to focus exclusively on areas that impact their pay and ignore other important tasks that may be in the long term interest of the company. Performance related pay might also suppress the intrinsic motivation of employees (Frey and Osterloh, 2012, p.2). Amabile (1998, p.79) asserts that intrinsic motivation reflects employees’ passion and interest in their work, which has a stronger impact on the performance of an employee and the business. Lastly, Maslow’s theory of motivation elucidates that within every individual there is a hierarchy of five needs – basic physiological needs, safety needs, belongingness needs, esteem needs and self-actualisation needs (Maslow, 1943). Maslow (1943, p.363) asserts that the needs lower than the esteem needs can be satisfied through remuneration, whereas the higher level needs of esteem and self-actualisation for senior management are unlikely to be achieved through extrinsic rewards, such as performance related pay. Thus, it may not act as a motivational factor for senior management.

Profit Sharing

In contemporary times, an increasing number of business organisations have started linking the level of remuneration offered to employees with the profits of the organisation (Torrington et al., 2014). Stock options are a common example of this type of remuneration. One of the key advantages of this remuneration policy is deemed to be a higher level of commitment by the employee towards the company because of an increased level of mutual interest (Torrington et al., 2014). Another common advantage of profit sharing schemes is deemed to be a change in the attitude of workers due to an increased sense of belongingness with the company (Rappaport, 1999). Amabile (1998) asserts that this feeling of increased belongingness leads to intrinsic motivation, which has a more direct and stronger relationship with a company’s performance.

However, empirical evidence highlights a lack of relationship between this type of remuneration and the performance of the company (Rappaport, 1999). One of the key criticisms of this type of remuneration is that any improvement in the company’s performance will reward both good and bad performers, resulting in poor motivation for high performers, as they may feel that part of the reward they deserve is being enjoyed by low performers (Rappaport, 1999). Furthermore, profit based remuneration policies might sometimes fail to motivate employees, as they often feel share prices are undervalued despite the business outperforming forecasts (Rappaport, 1999). Lastly, Kohn (1993) argues that shareholders expect the board to reward employees when the company has outperformed the market. However, empirical evidence highlights that for executives to exercise options profitably, the performance of the company need not be superior, and executives can easily benefit in times of a rising market (Rappaport, 1999). Thus, if employees feel that movements in share prices are independent of their performance, there is a risk that a profit based remuneration scheme may not act as a motivational factor.

Piece Based Remuneration

Piece based remuneration is historically one of the most commonly used incentive schemes in practice for manual workers and is based on the number of items they produce or the number of hours worked by them (Torrington et al., 2014). Typical examples of piece based remuneration schemes include individual time saving schemes, measured day work schemes, group incentives, plant wide bonus schemes and commissions (Torrington et al., 2014). Advantages of such schemes typically include an increased level of control by management over the production process, and they also act as a cost control measure because the workers’ main goal is to do the task expediently and efficiently in order to achieve the target (Kohn, 1993). Furthermore, Maslow’s theory of motivation (1943), as mentioned above, highlights that extrinsic rewards, such as piece based remuneration, might act as a motivational factor for manual workers because these workers are likely to be at the lower levels of need in Maslow’s hierarchy.
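
As a simple arithmetic illustration of how a piece based scheme translates output into pay, the sketch below applies a fixed rate per completed item with a guaranteed hourly floor of the kind many measured day work arrangements retain. The rates are invented for illustration only and are not drawn from the literature cited above.

# Hypothetical piece-rate calculation with a guaranteed hourly minimum.
# All rates are invented for illustration; real schemes are set by the employer.
RATE_PER_ITEM = 0.60       # amount paid per completed item
HOURLY_MINIMUM = 9.50      # guaranteed floor per hour worked

def weekly_pay(items_produced: int, hours_worked: float) -> float:
    piece_earnings = items_produced * RATE_PER_ITEM
    guaranteed_floor = hours_worked * HOURLY_MINIMUM
    # The worker receives whichever is higher: piece earnings or the guaranteed floor.
    return round(max(piece_earnings, guaranteed_floor), 2)

print(weekly_pay(items_produced=700, hours_worked=38))   # 420.0 - piece earnings exceed the floor
print(weekly_pay(items_produced=400, hours_worked=38))   # 361.0 - the guaranteed floor applies

The guaranteed floor in this sketch is also where the control and cost arguments above meet: output above the floor is rewarded directly, while the employer's minimum liability per hour remains fixed.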

Like other types of remuneration, piece based remuneration has its own set of disadvantages. Schemes such as group incentives and plant wide bonus schemes lead to additional pressure on employees and can create interpersonal animosities when high performers are unable to receive the incentives because of low performers in the group (Torrington et al., 2014). Furthermore, time saving schemes and measured day work schemes may act as a deterrent to creativity, as the individual employee’s focus is on standardisation and predictability in order to complete the work in the minimum possible time (Kohn, 1993). Herzberg’s (2003) motivation-hygiene theory suggests that job satisfaction and job dissatisfaction are two independent experiences. Whilst extrinsic rewards, such as piece based remuneration, can help manual workers to avoid job dissatisfaction, they might not lead to job satisfaction, as the employees are not intrinsically motivated by the work itself.

Skill Based Pay

This is a policy under which employees are remunerated based on the skills and competencies they possess (Armstrong, 2002). One of the biggest advantages of skill based remuneration is that it encourages employees to acquire multiple skills, thereby offering flexibility to the organisation in terms of using the same employees for various purposes and responding to customer needs more efficiently (Torrington et al., 2014). Such remuneration schemes also enable organisations to attract and retain skilled employees more easily than their competitors, as people are likely to be rewarded appropriately for the skills they possess under this scheme (Torrington et al., 2014).

A potential disadvantage of this scheme is that the costs often outweigh the benefits if the increase in productivity is not enough to compensate for the increased cost of hiring and training skilled employees (Armstrong, 2002). As businesses operate in a dynamic environment, there is a risk of skills obsolescence and an associated high cost of training. Lastly, the business might also bear the risk of losing a skilled employee, on whom it has invested a significant amount in training, to a competitor due to a highly competitive labour market (Torrington et al., 2014).

Flexible and Fringe Remuneration

Fringe benefits can be defined as benefits in kind provided to employees and have grown substantially in recent years (Armstrong, 2002). The value of the fringe benefits paid to employees reflects approximately twenty to fifty percent of remuneration and typically includes benefits like pensions, company cars, sick pay, private health insurance, mobile phones, staff discounts, maternity or paternity pay, creche facilities and relocation expenses, amongst many others (Torrington et al., 2014).

Flexible benefits provide options for employees to decide how their remuneration should be structured (Torrington et al., 2014). Under such schemes, the gross value of the remuneration package is determined by the employer; however, the employees have the flexibility to choose the mix of cash and other benefits as part of the remuneration package (Dychtwald, Erickson, and Morison, 2006). Examples of flexible benefits include the option to choose between additional holidays, access to a company creche, childcare vouchers or cash, amongst many others. The advantages of flexible benefits include the potential for increased employee motivation, as employees end up getting the rewards they desire. Savings in social security taxes can also be made by exchanging salary for desired benefits that attract a lower level of tax (Thomsons, 2015). Furthermore, research has highlighted that flexible remuneration programmes contribute to attracting new employees, improve retention of existing employees and improve employee engagement (Thomsons, 2015).
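
To make the tax point above concrete, the sketch below works through a hypothetical salary sacrifice arrangement. The 12 per cent employee National Insurance rate is an assumption used purely for illustration; actual rates, thresholds and eligible benefits vary by year and jurisdiction.

# Hypothetical illustration of the saving when salary is exchanged for a
# non-cash benefit (e.g. childcare vouchers or pension contributions).
# The 12% rate is assumed for illustration only; real rates and thresholds vary.
NI_RATE = 0.12

def annual_ni_saving(salary_sacrificed: float) -> float:
    # In this simplified model, pay given up as a benefit is not subject to employee NI.
    return round(salary_sacrificed * NI_RATE, 2)

print(annual_ni_saving(1500.0))   # 180.0 saved on 1,500 of salary exchanged for benefits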

The primary disadvantage of flexible benefits schemes is the increased cost burden for the employer due to the rise in the amount of administrative work related to managing the individual choices of employees (Armstrong, 2002). Another criticism of flexible remuneration policy is that the expensive company cars and glamorous lifestyle provided to employees have contributed little towards developing long term commitment towards the business organisation and the retention of employees (Thorpe and Homan, 2000). Empirical evidence highlights that employees do not completely understand the value of flexible benefits, and there is little evidence of a positive motivational impact of these remuneration policies on employees (Torrington et al., 2014). Nonetheless, this does not indicate that employees do not value the presence of these benefits, and they are likely to resist their removal (Torrington et al., 2014).

Conclusion

Based on the discussions in the sections above, it is evident that each method of remunerating employees has certain advantages associated with it. However, Maslow’s theory of motivation and Herzberg’s hygiene factors, as discussed above, highlight a common issue across all forms of remuneration, i.e. the extent to which extrinsic rewards can contribute to motivating an individual employee and thereby improving the company’s performance. Kohn (1993, p.1) asserts that whether remuneration is performance based, profit based or piece based, it might motivate employees in the short run, but it will not contribute to long term commitment towards the company.

Nonetheless, it is undeniable that remuneration plays an important role in influencing an employee’s decision regarding long term commitment to the company. However, no single method of remuneration can be recommended over another, and a business might use a combination of methods to remunerate employees according to their needs and motivations. Employees at lower levels might be motivated by the prospect of better remuneration through different tools; however, for senior management, self-esteem and self-actualisation needs would need to be satisfied in order to motivate them. Thus, rewards need to be carefully crafted to support one another and to incorporate both financial and non-financial remuneration.

References

Armstrong, M., 2002, Employee Reward: People and Organisation, Chartered Institute of Personnel and Development, pp. 410 – 420

Dychtwald, K., Erickson, T., and Morison, R., 2006, Flexible Compensation and Benefits: Why Variety Will Rule and How to Leverage it, Harvard Business Review, pp. 1 – 9.

Frey, B. and Osterloh, M., 2012, Stop Tying Pay to Performance. The Evidence is Overwhelming: It doesn’t work, Harvard Business Review, pp.1 -7.

Herzberg, F., 2003, One More Time How Do You Motivate Employees?, Harvard Business Review on Motivating People, Harvard Business School Press, pp. 45-71

Jiang, Z., Xiao, Q., Qi, H., and Xiao, L., 2009, Total Reward Strategy: A Human Resources Management Strategy Going with the Trends of the Times, International Journal of Business and Management, vol. 4 (no. 11), pp. 177 – 183.

Kohn, A., 1993, Why Incentive Plans Cannot Work, Harvard Business Review, pp. 1 – 19.

Maslow, A., 1943, A Theory of Human Motivation, Psychological Review, vol. 50 (no. 4), pp. 370 – 396.

Mullins, L.,2002, Management and Organisational Behaviour, Edinburgh Gate: Pearson Education Limited, pp. 410 – 420.

Perry, J., Engbers, T. and Jun, S., 2009, Back to the Future? Performance-Related Pay, Empirical Research, and the Perils of Persistence, Public Administration Review, vol. 69 (no. 1), pp. 39-51

Rappaport, A., 1999, New Thinking on How to Link Executive Pay with Performance, Harvard Business Review, pp. 1 -23.

Thomsons, 2015, Introduction to Flexible Benefits. Accessed on 25th May 2015 at : http://www.thomsons.com/resources/guides/intro-flexible-benefits

Thorpe, R. and Homan, G., 2000, Strategic Reward Management, Prentice Hall, pp. 378 – 390.

Torrington, D., Hall, L., Taylor, S. and Atkinson, C., 2014, Human Resource Management Ninth Edition, Pearson Education Limited, pp. 412 – 460.