How Is Human Nature Changed by Technology?

In the late twentieth century, the Internet, combined with the World Wide Web, began to transform every society by making it possible to share information worldwide. There is no doubt that computer networks have a strong impact on people through their capacity to gather and deliver information. However, because networks carry every kind of message, people searching for useful information also receive much that is useless, including messages designed to persuade: propaganda. The growth of propaganda can be traced to two sources: the advantages offered by technological enhancement and the workings of human nature. Although advancing technology provides many benefits to society, it also creates problems, and human nature acts as a catalyst that amplifies human greed and desire. Propaganda, an important tool advertisers use to lure and influence audiences, has developed rapidly as technological enhancement and human nature continue to shape people's decisions.

Technological enhancement is the chief assistant that helps propaganda spread faster than it did decades before. In "Computers and the Pursuit of Happiness," David Gelernter states, "But using technology to defeat distance has been another goal of the industrial revolution from the start, from railroads through the Panama Canal and onward" (138). Gelernter's claim is that in every period, people have been willing to use technology to connect across long distances, and propaganda illustrates this well. Suppose the president of a large country wants to announce a political decision through propaganda as quickly as possible, but the only channels available are traditional ones, such as railways or waterways, which would certainly delay the announcement. With the assistance of technology, however, people can use radio, television, computers and networks, and finally the Internet, which connects the whole world, overcomes geography, and delivers information almost instantly. Why would advertisers not take advantage of such a technological society by producing propaganda? Technology builds a broad, straight road for advertisers to broadcast all kinds of propaganda without restrictions of time or region. Furthermore, technology makes the forms of propaganda more copious and colorful: advertisers have abundant technical tools for creating propaganda that is ever more attractive and alluring. Few people can say no to glamorous, creative propaganda that captures their attention from the very beginning, and advertisers know this better than anyone else.
While technological enhancement improves the appearance and the machinery of propaganda, the reason people are so easily allured by it lies in human nature, which always follows the same patterns and is all too easily exploited.

From ancient times to the present, human nature has shown a conflicting phenomenon: in some respects it remains the same, while in others it changes. In "Propaganda Techniques in Today's Advertising," Ann McClintock describes a common experience of propaganda: "We are victims, content – even eager – to be victimized. We read advertisers' propaganda messages in newspapers and magazines; we watch their alluring images on television ... We all do it – even those of us who claim to see through advertisers' tricks and therefore feel immune to advertising's charm" (158). McClintock uses the word "victim" for people who truly buy the lies of propaganda in order to reveal a truth: people are vulnerable when faced with fascinating propaganda, and no matter how firmly they convince themselves that they know all of its tricks, they are still easily fooled by advertisers. The reason this happens lies in the features of human nature. A situation common to almost everyone's experience is choosing between two similar products; most of the time, people choose the one with the fancier appearance. This tendency to be attracted by appearances rather than inner qualities is one aspect of human nature that has not changed in decades and will not change in the future. For this reason, advertisers work as hard as they can to improve the appearance of products or figures rather than their intrinsic qualities. As for the changes in human nature, they show through the development of the human mind and society: as communication between people everywhere faces fewer limits and people grow more open to adopting new things from other countries and cultures, advertisers gain more opportunities to profit worldwide by broadcasting suitable propaganda.
Considering both the changes and the constancy of human nature, advertisers can always find a way to promote their "products," whether those are objects, figures or opinions.

In the war between audience and advertiser, the winner is always the advertiser, especially when advertisers can use technology flexibly and know human nature like the backs of their hands. In "Propaganda Techniques in Today's Advertising," McClintock points out that "Every day, we are bombarded with slogans, print ads, commercials, packaging claims, billboards, trademarks, logos, and designer brands – all forms of propaganda" (160-161). Most of these media are connected to technology, because technology has made them more and more common in society, and technology continues to give advertisers new advantages in refining their tactics for producing propaganda. Nobody doubts human ingenuity, which means nobody can stop the development of technology; in that case, audiences will become ever more vulnerable before well-decorated propaganda. Similarly, in "Computers and the Pursuit of Happiness," David Gelernter observes that "Human nature does not change; human needs and wants remain basically the same. Human ingenuity dreams up a new technology, and we put it to use – doing in a new way something we have always done in some other way" (140). He claims that because the desires of human nature never change, we tend to invite more high-tech devices to replace work once done by humans. In some respects this is good for human life; at the same time, it is how propaganda converts from print into other forms, such as radio, video, or even lighting. Human nature is like a flaw in a precious jade: everyone can see it, but no one can fix it. To resist the messages of propaganda while continuing to develop technology, we can try to concentrate on the essence of human nature and remain aware of advertisers' deliberate deception.
If advertisers put technology and human nature to good use in propaganda, people run a high risk that propaganda will control their personal decisions and judgments.

Although people know that most propaganda is fictitious and deceitful, the majority will still buy it and trust it. Some may assert that all these faults and influences come from the development of technology and have no relevance to human nature. However, because human nature lies deep inside the human mind, people do not want to admit that they themselves are one reason they are deceived by propaganda. On the one hand, technology does make human life more convenient and efficient, so society cannot blame technological change alone. On the other hand, human nature is hard to change because it has been inherited from generation to generation. In some respects, the world could not operate well without propaganda: producers need it to sell their products, candidates need it to win campaigns, and even politicians and scholars need it to express their viewpoints. To truly benefit from technology without being fooled by the deceptions of propaganda, people need to be more cautious and resist the pull of human nature when they encounter propaganda in its many forms. If everyone could see through advertisers' strategies, the winner of the propaganda war would be the audience instead of the producers.

Works Cited

Gelernter, David. "Computers and the Pursuit of Happiness." New Directions. New York: Cambridge University Press, 2005.

McClintock, Ann. "Propaganda Techniques in Today's Advertising." New Directions. New York: Cambridge University Press, 2005.

How has Technology Contributed to Globalisation?

Explain how changes in technology have contributed towards the globalisation of markets and of production.

Technology has dramatically changed people's way of life all over the world, and the world today has become a true manifestation of a global village. Not only has the frequency of international travel increased manifold, but the possibilities for cross-border trading of goods and services have also grown exponentially. These impacts are collectively known as globalization.

Hill (2009) defines globalisation as a process which enables individuals, organisations and governments from different nations to come across each other and interact in an integrative manner. The end result of such integration would be an integrated, globalised market system which can act as a melting pot of the individual economies of different nations.

There are two ways in which globalisation can be envisaged: from the production perspective and from the market perspective. Hill (2009) defines the globalisation of markets as the melting down and convergence of individually independent marketplaces into an amalgamated marketplace. Sharing sources of production across different geographical locations in order to leverage the quality and cost of the goods and services produced is the idea behind the globalisation of production (Hill, 2009).

Many institutions have been formed to help manage, regulate and police globalization and to promote the establishment of transnational treaties for global trade. A few are the following:

The World Trade Organization (WTO)
The International Monetary Fund (IMF)
The World Bank
The United Nations (UN)

These institutions act at an international level to regulate and tackle problems that countries, companies and individuals may face when undergoing globalization; for example, the IMF provides monetary services and acts as a lender of last resort for members in financial distress (Gitman, 2008).

Now the question is how rather than what. How does globalization happen? What drives it? There are many drivers, or rather changes, that result in globalization. Generally, there are two macro drivers: the declining trade and investment barriers between countries, and changes in technology.

Organisations across the world now face lower obstacles to investing and trading in foreign lands. This flexibility allows firms to choose global locations where production costs are minimal and returns maximal by strategically siting their production facilities and their service and product outlets. A design can thus be created in one global location, production carried out at a second site, and the niche market can be a distant market at the other end of the world. The globalisation of production thus exploits cheap labour in third-world markets and rich buyers in first-world markets (Arribas, 2009).

Technological change is not limited to the automation of the production line; it also includes advances in infrastructure and connectivity. The most important innovation has been the microprocessor. Developments in communication technologies such as wireless, optical fibre, satellite communications and the rapid growth of the internet have taken global business to a previously unimagined level. Improvements have also occurred in transportation technology, notably the development of the commercial jet aircraft, which has reduced transit times.

Globalization does not result only from declining trade barriers or changes in technology; on closer scrutiny, two other factors come into play: foreign direct investment (FDI) and increasing international trade.

Globalization is not a single event; it has been maturing for many decades, and its implications are being strongly felt now. The process has been under way since the 1960s, when the US dominated the global economy and the international trade picture, led the way in FDI, and its multinationals ranked high in international business (Hill, 2009). This has all changed with globalization, as other countries, firms and individuals have risen to compete in the global marketplace.

Much has changed in the demographics of the world when looking at world GDP and trade. China had no share of the world's output in 1963, yet held 11.5% of world GDP in 2007 and 7.2% of the world's exports in 2006, which shows the tremendous effect of globalization on the current world marketplace. In 2008, China was listed as the third-largest economy by nominal GDP. The share of world output generated by third-world countries has steadily increased since the 1960s. There has also been persistent growth in cross-border flows of FDI, and it comes as no surprise that China has been the largest recipient (Hill, 2009).

There are many facets to globalization, and on a closer look one of them is the multinational enterprise. A multinational enterprise (MNE), also referred to as an international corporation, is a business with operations in two or more countries. MNEs have a powerful influence over local as well as global economies and play an important role in international relations and globalization.

In the past, many economies were closed to Western business, but that trend has changed and many markets have opened up for Western investment. The collapse of communism in Eastern Europe has created a host of opportunities for export and investment. The biggest opportunity has emerged in China, thanks to economic development even under continuing communist control. The turn to democracy and free-market reforms in Latin America have likewise created possibilities for foreign investors.

Considering all that globalization has to offer, a question comes to mind: is the shift towards a global marketplace a good thing? There are many views on this question. Many experts believe that globalization promotes prosperity by providing more jobs and lower prices for labour, materials and land, and thus greater profitability. Other experts counter that globalization is not so beneficial, because managers of transnational and multinational organizations must take far more factors into account than typical administrators (Hill, 2009). Managing an international business differs from managing a typical business in four notable areas:

Differences between countries require companies to employ different practices in each country.

Administrators face a greater and more complex range of problems.

Companies must observe the different limits imposed by governments in different countries and work within them.

International business requires converting funds and is very susceptible to exchange-rate fluctuations.

To cope with these complications of managing international organizations, managers must employ less structured solutions and practices that may require additional resources in terms of labour, capital and land. This brings us to our next thought: why are so many experts against what globalization has to offer? (Artis, 2009)

Globalization has occasionally been regarded as a solution to problems like underdevelopment, malnutrition and violation of human rights, and important human rights institutions have been set up and incorporated into the global human rights regime. Governments are finding it increasingly difficult to violate their citizens’ human rights without attracting the attention of the media and international organizations as a result of developed telecommunications and global interdependence. Indeed, overall human rights practices have improved worldwide during the last decade or so. However, this improvement has neither been universal nor linear. (Bardhan, 2006)

The contemporary world order owes its existence to a large degree to the information power unleashed by the free flow of ideas and communications across geographical boundaries, without restriction or obstacle, with the help of the latest communication technologies. While globalisation has made it possible for human rights bodies to react to abuses in the remotest societies of the world, the same globalisation has also exposed autonomous societies to human rights abuses at the hands of the more powerful actors on the global stage. What is collateral damage for a powerful actor in the emerging world order may, for the recipient, be a human rights abuse involving the victimisation of defenceless children and women. With respect to human rights, then, globalisation is a double-edged weapon that can cut both ways. Not only do the weaker players in this world order risk the raw power of the stronger actors, but multinationals and conglomerates tend to act as mighty powers in their own right. The citizens of weaker nations are left at the mercy of powerful yet unelected global giants such as the IMF, the World Bank, peacekeeping forces and first-world NGOs, which increasingly control the lives and fates of their people.

We have discussed what globalization is, what its key drivers are, and how it affects the production process. In doing so we have looked at MNEs and at how world demographics have changed since globalization began. This has also given us a picture of how managers in transnational organizations take different factors into account in their planning, organizing and leading decisions. Advancement in technology did not by itself globalize production and the marketplace, but it has increased the momentum of globalization manifold. Although globalization is widely considered a positive phenomenon, everything, as always, has its virtues and vices; it all depends on the perspective one employs when looking at globalization.

References

Anon., 2010. International Labor Organization. [Online] Available at: http://www.ilo.org/ [Accessed 27 February 2010].

Pitelis, C. & Sugden, R., 2000. The Nature of the Transnational Firm. Routledge.

Wikipedia contributors, 2010. Multinational corporation. [Online] Available at: http://en.wikipedia.org/w/index.php?title=Multinational_corporation&oldid=345942736 [Accessed 27 February 2010].

Dunning, J.H., 1998. Location and the multinational enterprise: A neglected factor? Journal of International Business Studies, 29(1), pp.45-66.

Hill, C., 2009. International Business.

Levitt, T., 1984. The globalization of markets. The McKinsey Quarterly.

Luo, Y. & Tung, R.L., 2007. International expansion of emerging market enterprises: A springboard perspective. Journal of International Business Studies, 38, pp.481-498.

Sullivan, D., 1994. Measuring the degree of internationalization of a firm. Journal of International Business Studies, 25(2), pp.325-42.

Home Energy Management System

1) INTRODUCTION

A network of electricity that intelligently integrates the actions of the users connected to it in order to deliver a sustainable, economical and secure supply is called a smart grid [1]. The smart grid is called smart because of its fast communication and networking capabilities. It plays an important role in adjusting the energy structure, coping with climate change and supporting economic development [2].

Since 1982, energy demand during peak hours has been increasing by approximately 25% every year [3]. New intelligent devices must be used to fulfil these energy requirements, and new technologies must be developed to add this intelligence. Electrical intelligence is essential for reducing operational cost and energy consumption [4]. These new technologies should be able to smooth the difference between peak and off-peak load, flatten the demand-supply curve, and reduce environmental pollution. In the smart grid, the user plays a vital role in reducing and optimizing energy consumption, thereby improving system efficiency; with the smart grid, household CO2 emissions and energy consumption are reduced by 9% and 10% respectively [5].

To improve electricity consumption while keeping the consumer's needs in view, an optimal solution is required, and different optimization techniques can be used for energy management. Technologies such as home area networks, home automation, advanced metering infrastructure and bidirectional communication have been introduced by the smart grid over the past few years [6]. Nowadays, ZigBee and sensor networks not only monitor power quality but also provide a powerful strategy for communication and distribution and for selling locally generated energy back to the grid [7], [8].

Demand side management (DSM) is an important aspect of utilization efficiency which has long been ignored because of the complex dynamics of consumption, the random behaviour of consumers and the lack of communication technology. Advances in communication technology have revolutionized the power sector and introduced the concept of a modernized electrical system called the smart grid [9]. The concept of demand side management was first introduced in the late 1970s; it reduces greenhouse gas emissions and provides reliable energy while reducing the cost of electricity [10]. Traditional grids include DSM but cannot offer such reliability to users because they lack sensors and have inefficient automation and communication. The smart grid is more efficient thanks to the introduction of low-cost sensors, smart meters and the integration of ICT [11]. The challenges faced by the smart grid are discussed below.

Interoperability is satisfied if multiple communication networks coexist in the smart grid. Scalability means increasing the number of hardware components in proportion to the others; with growing demand this becomes a major issue, which can be addressed by using sensing networks. The smart grid serves many different communities and societies, which raises an integration issue; resolving it requires integrating the power system with actuation, security and communication networks. Security is a central issue in the smart grid, because a hacker could intercept smart grid data and gain access to smart meters; this can be addressed by modernizing security mechanisms and by data hiding [12].

The smart grid includes home energy management systems (HEMS) that enable demand response and demand side management. Demand response is responsible for altering and managing energy on the basis of power supply, while demand side management (DSM) controls the planning, techniques and implementation of policies [13]. There are two major concerns in HEMS: one is communication and the second is optimization [14].

Home energy management systems consist of three basic functional blocks [15]:

HEMS software
Home energy management center (HEMC)
Load scheduler.

The HEMC provides customers with a user-friendly graphical interface which not only assists them but also gives them control over various loads through the load scheduler.

The HEMC software is built with the LabVIEW development tool and delivers the necessary information to customers using the ZigBee protocol. It has two main sections: a) a home tab and b) a data tab.

Load control, line control and on/off control information is provided by the home tab; the on/off control is used in switching sequences, and its major application is detecting any abnormality in the hardware. The logging of current and voltage over time is presented in the data tab.

The load scheduler treats the problem as a bundle of single knapsacks, known as multiple knapsacks, through which customers become aware of the peak load at specific times and its intervals of occurrence. The load scheduler distinguishes not only critical and non-critical loads but also time-dependent loads; in emergency conditions it also controls various loads, and it stores electricity consumption data around the clock.
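The multiple-knapsack view described above can be sketched in a few lines: each time slot is treated as a knapsack whose capacity is the permitted peak load, and loads are placed first-fit. The appliance names and power ratings below are invented for illustration, and the sketch ignores the critical and time-dependent distinctions a real scheduler must handle.

```python
def schedule_loads(loads, slot_capacity, num_slots):
    """First-fit sketch of the scheduler's multiple-knapsack view: each
    time slot is a knapsack with capacity equal to the allowed peak (W),
    and each load goes into the first slot that still has room."""
    slots = [[] for _ in range(num_slots)]
    used = [0] * num_slots
    # Place the heaviest loads first so they are least likely to be left out.
    for name, power in sorted(loads.items(), key=lambda kv: -kv[1]):
        for t in range(num_slots):
            if used[t] + power <= slot_capacity:
                slots[t].append(name)
                used[t] += power
                break
    return slots

# Hypothetical loads in watts, with a 2200 W peak limit per slot:
appliances = {"oven": 2000, "washer": 1500, "fan": 200, "lamp": 100}
print(schedule_loads(appliances, slot_capacity=2200, num_slots=3))
# [['oven', 'fan'], ['washer', 'lamp'], []]
```

The first-fit-decreasing order is a classic bin-packing heuristic; an exact scheduler would solve the multiple-knapsack problem directly.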

Energy management is the organized and systematic coordination of the procurement, conversion, distribution and use of energy to meet requirements while taking environmental and economic objectives into account [16]. Energy management systems are computer-aided tools used by utility grid operators for controlling, monitoring and optimizing the performance of the generation and transmission system.

In efficient energy consumption, the participation of the end user is as important as that of the supplier. In modern power supply systems, end users are provided with real-time (RT) feedback via website portals, in-home displays (IHDs), or other feedback devices such as mobile phones [17]. Providing feedback motivates end users to alter their energy usage in order to minimize their electricity bills.

In the present smart grid system, end users are pushed to shift their energy usage from peak to off-peak hours regardless of their comfort and lifestyle [17]. All over the world, the major portion of energy is consumed by residential users, so they can play an important role in energy optimization. Research shows that energy consumption is reduced by 12% by installing an energy consumption information system that displays the consumption of the whole unit [18].

Due to advances in technology, industry has grown and energy demand has increased, resulting in load shedding and blackouts, along with rising consumption of fossil fuels that will soon be exhausted. The world is now turning towards non-conventional energy resources such as solar photovoltaic cells; meanwhile, to provide customers with information about their electricity, smart HEMS use analogue and digital systems, which is an efficient methodology.

Home energy management systems play a central role in distributing energy optimally between the conventional grid and homes.

With the increasing demand for energy, communication over both wired and wireless media has increased. Internet and intranet connections make low-cost smart homes possible. ZigBee is one of the devices used for communication between smart homes and the smart grid.

The ZigBee Alliance introduced the ZigBee standard protocol, based on the IEEE 802.15.4 standard set by the IEEE New Standards Committee (NesCom) for low-rate wireless personal area networks [15]. ZigBee consists of four layers: the physical (PHY) and medium access control (MAC) layers are taken from IEEE 802.15.4, while the network and application layers are defined by the ZigBee Alliance platform.

There is a great difference between energy production and consumption; low production causes deficiencies in the electricity supply. A pie chart of world energy production illustrates this imbalance.

Many optimization techniques exist, one of which is the knapsack problem. Many algorithms can be used to obtain the best optimized knapsack result. In this paper we make a comparative study of several knapsack algorithms and identify the best solution obtained from them.

Comparison of different home energy management schemes:
Optimization based residential energy management:

This is a linear programming model whose basic purpose is to minimize the cost of electricity in residential areas [1]. In this scheme, a day of 24 hours is divided into time slots of equal length, each with its own electricity price, as in time-of-use (TOU) tariffs. The objective function shows that with proper scheduling we can reduce energy expenses by assigning home appliances to time slots.

The objective function is defined as

minimize Σ_{i=1}^{I} Σ_{t=1}^{T} E_i · x_{i,t} · U_t,   subject to Σ_{t=1}^{T} x_{i,t} = D_i for each appliance i,

where I is the number of appliances, E_i is the per-slot energy consumption of appliance i, J is the number of days, D_i is the length of the operation cycle of appliance i, K is the number of requests, U_t is the unit price for slot t, T is the number of time slots, and x_{i,t} ∈ {0, 1} indicates whether appliance i runs in slot t.
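As a small illustration of how the TOU tariff shapes the schedule, the sketch below brute-forces the cheapest start slot for a single appliance. The tariff and appliance figures are invented, and a real residential model would solve the full program over all appliances at once.

```python
def schedule_cost(energy_per_slot, cycle_len, prices):
    """Cost (in the tariff's units) of running one appliance for
    cycle_len consecutive slots, started at the cheapest feasible time."""
    T = len(prices)
    return min(
        sum(energy_per_slot * prices[t] for t in range(s, s + cycle_len))
        for s in range(T - cycle_len + 1)
    )

# Hypothetical TOU tariff (cents per kWh) over six slots; the appliance
# draws 2 kWh per slot and needs 2 consecutive slots:
tou = [10, 10, 30, 30, 15, 12]
print(schedule_cost(2, 2, tou))  # 40 (start in the first slot)
```

Shifting the same cycle into the mid-day peak slots would cost 120 under this tariff, which is exactly the saving the objective function captures.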

In-home energy management (iHEM):

This scheme uses smart components such as an energy management unit (EMU) and wireless sensor home area networks (WSHANs) for communication between appliances. It also uses the ZigBee protocol, wireless sensor networks and a cluster-tree topology. Under this scheme, a customer may turn on an appliance at any moment without worrying about peak hours: iHEM suggests a suitable time for using the appliance.

iHEM works as follows: at the start, a request packet is sent by the appliance to the EMU; on receiving the packet, the EMU communicates with the storage system to learn the available energy, and then communicates with the smart meter for current prices. The storage unit sends an availability reply containing information about the stored energy; when the EMU receives this packet, it schedules a suitable start time according to the iHEM algorithm. The scheme also reduces carbon emissions and the cost of energy consumption.
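The request/reply exchange above can be sketched as follows. The class, field and method names here are hypothetical, not part of the iHEM specification; the sketch simply starts an appliance immediately when storage can cover its cycle and otherwise defers it to the cheapest slot reported by the smart meter.

```python
class EMU:
    """Toy energy management unit mirroring the iHEM message flow."""

    def __init__(self, stored_energy, prices):
        self.stored_energy = stored_energy  # availability reply from storage (kWh)
        self.prices = prices                # current tariff from the smart meter

    def handle_request(self, demand_per_slot, duration):
        # If local storage can cover the whole cycle, start now (slot 0);
        # otherwise schedule the appliance into the cheapest slot.
        if self.stored_energy >= demand_per_slot * duration:
            return 0
        return min(range(len(self.prices)), key=lambda t: self.prices[t])

emu = EMU(stored_energy=1.0, prices=[30, 12, 25, 10])
print(emu.handle_request(demand_per_slot=2.0, duration=2))  # 3 (cheapest slot)
```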

2) KNAPSACK:

The single knapsack problem is a combinatorial optimization problem in which objects with values and weights are packed into a knapsack of limited capacity so that the total value of the packed objects is maximized [19]. The multiple knapsack problem is a generalization of the single knapsack problem; it is a resource allocation problem consisting of M resources and a set of N objects [19], [20]. A knapsack problem is basically a set of items with different weights and values, and items must be chosen so that the total value is maximized without the total weight exceeding the capacity.

The knapsack approach allows a community to use energy efficiently in order to achieve its goals. It not only minimizes customers' bills but encourages them to run their heavy appliances outside peak hours. It is estimated that worldwide energy demand is increasing by 25 percent per year, so meeting this demand is a great challenge [21]. To cope with the increasing demand, we need to optimize energy usage, and the knapsack problem is one of the optimization techniques used for this purpose.

2.1) Types of Knapsack: There are different types of knapsack problem: 1) the 0-1 knapsack problem, 2) the bounded knapsack problem, and 3) the unbounded knapsack problem.

2.1.1) 0-1 Knapsack problem (binary knapsack): In this case an item is either taken or not taken (accepted/rejected); there are no other possibilities. Suppose a set of n items with weights w_i and values v_i, where x_i is the number of copies of item i packed. In mathematical form:

Maximize Σ_{i=1}^{n} v_i x_i

subject to Σ_{i=1}^{n} w_i x_i ≤ W, where x_i ∈ {0, 1}.

2.1.2) Bounded Knapsack Problem: In the bounded knapsack problem (BKP) the binary restriction on x_i is relaxed: x_i is an integer, but the number of copies of each item i is bounded by some integer c_i. Mathematically:

Maximize Σ_{i=1}^{n} v_i x_i

subject to Σ_{i=1}^{n} w_i x_i ≤ W, where x_i ∈ {0, 1, …, c_i}.

2.1.3) Unbounded Knapsack Problem: In the unbounded knapsack problem no upper bound is placed on x_i. Mathematically:

Maximize Σ_{i=1}^{n} v_i x_i

subject to Σ_{i=1}^{n} w_i x_i ≤ W, where x_i ≥ 0 is an integer.

2.2) Algorithms for Knapsack: There are many algorithms for solving 0-1 knapsack problems, including the following [22]:

2.2.1) Brute Force: This is a straightforward approach based directly on the problem statement and the definitions of the concepts involved. If there are n items to choose from, there are 2^n possible combinations of items for the knapsack. Each combination is a bit string of 0s and 1s: if an item's bit is 1 it is chosen, and if 0 it is not.
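A minimal implementation of this bit-string enumeration (with invented example items) might look like this:

```python
from itertools import product

def knapsack_brute_force(values, weights, capacity):
    """Try every 0/1 bit string over n items and keep the best feasible one."""
    n = len(values)
    best_value, best_choice = 0, (0,) * n
    for bits in product((0, 1), repeat=n):  # all 2**n selections
        weight = sum(w * b for w, b in zip(weights, bits))
        if weight <= capacity:
            value = sum(v * b for v, b in zip(values, bits))
            if value > best_value:
                best_value, best_choice = value, bits
    return best_value, best_choice

print(knapsack_brute_force([60, 100, 120], [1, 2, 3], 5))  # (220, (0, 1, 1))
```

Exhaustive search is only practical for small n, since the running time grows as 2^n.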

2.2.2) Dynamic Programming: This technique divides a complex problem into smaller subproblems. The subproblems are solved once and their results recorded in a table; the table is then used to solve the original problem.

For the 0-1 knapsack the table has one entry per item-capacity pair, so the running time is O(nW), which is pseudo-polynomial in the capacity W.
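A minimal Python sketch of the table-based approach on an illustrative instance (names and data here are assumptions, not code from the cited work):

```python
def knapsack_dp(values, weights, capacity):
    """Classic 0-1 knapsack table: dp[c] = best value achievable with capacity c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_dp([10, 40, 30, 50], [5, 4, 6, 3], 8))  # -> 90
```

The two nested loops (n items times W capacities) are the O(nW) cost noted above.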

2.2.3) Greedy Algorithm: This algorithm applies a simple heuristic rule rather than exhaustive search. It is a decision-making process that may use one of the following rules:

1) Choose the item with the maximum value.

2) Choose the item with the least weight.

3) Choose the item with the highest value-to-weight ratio.
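Rule 3 (highest value-to-weight ratio) can be sketched as follows; again an illustrative Python sketch rather than code from the cited work:

```python
def knapsack_greedy(values, weights, capacity):
    """Take items in decreasing order of value-to-weight ratio while they fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:
            remaining -= weights[i]
            total_value += values[i]
    return total_value

print(knapsack_greedy([10, 40, 30, 50], [5, 4, 6, 3], 8))
```

The sort dominates the cost, O(n log n), but for the 0-1 problem the greedy answer is not guaranteed to be optimal.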

2.2.4) Genetic Algorithm: This algorithm searches for the best solution among the many possible solutions of a problem. It begins with a set of candidate solutions called a population; a new population is produced from the old one by selecting individuals according to a specified fitness measure.
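A toy sketch of this loop for the 0-1 knapsack (the selection, crossover, and mutation details below are common textbook choices, not taken from the cited work):

```python
import random

def knapsack_ga(values, weights, capacity, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm: bit-string individuals,
    fitness = total value if the weight fits, otherwise 0."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(values)

    def fitness(ind):
        w = sum(b * x for b, x in zip(ind, weights))
        return sum(b * x for b, x in zip(ind, values)) if w <= capacity else 0

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best_val, best_ind = 0, [0] * n  # the empty knapsack is always feasible
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best_val:
            best_val, best_ind = fitness(pop[0]), pop[0][:]
        parents = pop[: pop_size // 2]          # selection by fitness
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1        # single-bit mutation
            children.append(child)
        pop = parents + children
    return best_val, best_ind
```

Unlike the dynamic-programming approach, this gives no optimality guarantee; it returns the best feasible solution seen during the search.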

3) Appliance Usage Control: According to [23], electricity consumption is not always rational; it also depends on human psychology. Different people use different appliances at different time slots. Considering some common appliances, we can classify them into the following categories:

1) Must-run appliances: appliances whose time slot cannot be changed; they must run.

2) Fixed-run appliances: appliances that run only once a day at a specific time.

3) Flexible appliances: appliances that can run at any time of day; they have no fixed slot.

Some of the household devices considered are the oven, lamp, refrigerator, and fan.

Considering the human psychology factor, the use of appliances can be categorized as:

1) Emergency use: appliances used in some sort of emergency.

2) Welfare: appliances used for welfare.

3) Enjoyment: appliances used for enjoyment.

Take, for example, a personal computer: a student may use it for work or in an emergency, while an adult may use it for enjoyment.

The decision on which appliance should be used during peak hours follows the analytic hierarchy process (AHP).
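As a hedged illustration of the AHP priority step (reference [23] does not give numbers, so the pairwise comparison values below are hypothetical): appliance-use categories are compared pairwise, and the normalized column average of the comparison matrix approximates the priority vector.

```python
def ahp_priorities(matrix):
    """Approximate AHP priority vector: normalize each column, then average each row."""
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    normalized = [[matrix[r][c] / col_sums[c] for c in range(n)] for r in range(n)]
    return [sum(row) / n for row in normalized]

# Hypothetical pairwise judgments: emergency vs welfare vs enjoyment use
comparison = [
    [1,   3,   5],    # emergency rated 3x welfare, 5x enjoyment
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]
weights = ahp_priorities(comparison)
print([round(w, 3) for w in weights])
```

The resulting weights sum to 1 and rank emergency use highest, matching the intuition that it should win during peak hours.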

Bibliography

[1] P. Nabbuurs, "Strategic Deployment Document for Europe's Electricity Networks of the Future," 2008.

[2] Y. Zhou, Y. Chen, G. Xu, Q. Zhang, and L. Krundel, "Home energy management with PSO in smart grid," 2014.

[3] US Department of Energy, "The Smart Grid: An Introduction," 2009.

[4] S. Electric, "Leading the way in energy efficiency," 2009.

[5] http://www.smartgrids.eu/.

[6] US Department of Energy, www.oe.energy.gov.

[7] S. Massoud Amin and B. F. Wollenberg, "Toward a smart grid: power delivery for the 21st century," IEEE Power and Energy Mag., vol. 3.

[8] A. Vaccaro and D. Villacci, "Performance analysis of low earth orbit satellites for power system communication," Electric Power Systems Research, vol. 73, pp. 287-294, 2005.

[9] Y. Shi, "Particle swarm optimization," Electronic Data Systems, Inc., pp. 8-13, 2004.

[10] A. Ghassemi, S. Bavarian, and L. Lampe, "Cognitive radio for smart grid communications," First IEEE International Conference on Smart Grid Communications, pp. 297-302, 2010.

[11] F. Rahimi and A. Ipakchi, "Demand response as a market resource under the smart grid paradigm," IEEE Trans. Smart Grid, pp. 82-88, June 2010.

[12] U.S. Department of Energy, "A vision for the modern grid," March 2007.

[13] Y. Yan, Y. Qian, H. Sharif, and D. Tipper, "A survey on smart grid communication infrastructures: motivations, requirements and challenges."

[14] IEC TC 57, "Communication networks and systems in substations - Part 1: Introduction and overview," Edition 1.0, 2003.

[15] S. Alzwar, P. Palraj, and R. Kumar, "A novel energy management of smart home," vol. 3, May 2014.

[16] "VDI-Guideline," VDI 4602, p. 3, 2007.

[17] O. Ameri Sianaki, O. Hussain, and A. Rajabian Tabesh, "A knapsack problem approach for achieving efficient energy consumption in smart grid for end users' life style," 2010.

[18] T. Ueno, R. Inada, O. Saeki, and K. Tsuji, "Effectiveness of displaying energy consumption data in residential houses - analysis on how the residents respond," Proceedings of ECEEE 2005 Summer Study, 2005.

[19] H. Kellerer, U. Pferschy, and D. Pisinger, "Knapsack Problems," 2004.

[20] T. Ueno, R. Inada, O. Saeki, and K. Tsuji, "Effectiveness of an energy consumption information system for residential buildings."

[21] T. Matsuyama, "Creating safe, secure, and environment-friendly lifestyles through i-Energy," vol. 21, pp. 1-8, 2009.

[22] M. Hristakeva and D. Shrestha, "Different Approaches to Solve the 0/1 Knapsack Problem."

[23] Y. Yamamoto, A. Suzuki, Y. Fuwa, and T. Sato, "Decision making in electrical appliance use in home," Energy Policy, vol. 36, pp. 1679-1686, 2008.

Helical CT Scan in Comparison to MRI Scans

Introduction

Helical CT is also known as spiral CT; the two terms are interchangeable (Kalender, 1994). Both MRI and helical CT were introduced into clinical practice ahead of any evidence of cost-effective improvement in clinical care, and both technologies are still evolving. For instance, vascular 3D imaging is a newly expanding indication within CT. Although helical CT is replacing conventional CT, the question arises as to whether it will replace MRI.

1) Equipment

Helical CT began in the 1990s. It is a fast technique; data are collected continuously at less than one second per 10 mm slice. It is called helical because the patient moves continuously through the machine while the X-ray tube rotates around them. Slip-ring technology enables the scanner, mounted on a gantry, to keep rotating in the same direction around the patient while still maintaining its power supply and X-ray capability. Because it is so quick, breathing does not affect the quality of the final image, making it an excellent way to view the lungs and liver; the continuous rotation also allows patient translation and data acquisition to take place at the same time. Helical CT requires completely different equipment from conventional CT, necessitating replacement of the entire unit, not just an upgrade. A multislice CT scanner works on the same principle as a helical scanner but is faster still and contains more detection elements. Although data acquisition is much faster with multislice, the time required to process the image is lengthy (so patient throughput will be no faster), and the amount of data storage space required for multislice images is vast and may overload the capability of the hospital's existing PACS system.

The equipment for MRI consists of a large, heavy magnet which creates the magnetic field. Magnetic shielding of the room is necessary, together with stringent safety precautions to avoid accidents, for instance with flying metal objects in the room. The scanning tube in which the patient must lie is relatively enclosed, which can create problems with claustrophobia, and the equipment is very noisy, which may unnerve the patient. MRI requires more extensive software for viewing the images than does CT. Some MRI machinery is more open, permitting greater patient access, even to the extent of allowing simultaneous surgery (Gould and Darzi, 1997).

2) Techniques

MRI involves the person being placed in a large magnet whose magnetic field causes all the protons (the nuclei of hydrogen atoms) in the body to line up and oscillate at a certain frequency (the precession frequency). Radiofrequency pulses are emitted from the machinery at the precession frequency, causing the protons to come out of alignment for a brief time and subsequently realign, emitting energy in the process. The radiofrequency of these emissions is specific to the type of tissue (since it reflects the hydrogen content) and is then computed to form an image. Patient movement is a major problem with the MRI technique since data acquisition is quite slow, so it is not as good as helical CT for moving organs such as the lungs and liver. MRI scans are also more expensive to produce than helical CT. The major advantages of MRI over helical CT are that MRI involves no X-ray exposure and that certain structures, such as the brain and musculoskeletal system, image better with MRI. MRI is definitely the best test for acoustic neuroma (Renowden and Anslow, 1993). CT is better than MRI for imaging brain trauma and is better in the abdomen for the bowel (on account of it being a moving structure), whereas MRI is better in the pelvis. Helical CT is finding a place in the diagnosis of pulmonary embolism (Roy, 2005). The disadvantages of CT are the X-ray dose and the nephrotoxicity of some contrast agents.

In 1993 the Royal College of Radiologists' guidelines recommended MRI for investigations of the brain, musculoskeletal system, oncology, and paediatrics; the 1995 version of the guidelines recommended that back pain lasting beyond six weeks be investigated by MRI. The Royal College of Radiologists' document on oncology (1999) provides graded, evidence-based recommendations on which scanning modality to use according to tumour site.

3) Staff

Staff training is necessary for both modalities of scanning, and MRI staffing costs are higher than those for CT. Because MRI scans are in such demand and scanning times are long, it is often necessary to run the machines in the evenings and at weekends (Moore & Golding, 1992). Multislice CT can involve an increased radiologist workload.

4) Patient

Patients with metal implants or pacemakers, or who are claustrophobic, are unsuitable for MRI; mechanical ventilation is a relative contraindication. Patients with acute major trauma, including head injury, are unlikely to be suitable for MRI because of the duration of scanning. The increased X-ray dose to patients (and to the community) of the later-generation CT scanners is of concern (National Radiological Protection Board, 1990); for this reason MRI is the preferred modality for children and fetuses (Duncan, 1996). Patients requiring interventional procedures may be suitable for CT fluoroscopy (Wagner, 2001).

5) Quality of results

MRI is preferred for the brain and spine (where it is of overriding advantage), orthopaedics, and the pelvis. MRI produces very accurate images of soft tissues, but imaging time is longer and artefacts are caused by patient movement. It is likely to have reduced the number of knee arthroscopies (Stoner, 1995) and is anticipated to reduce the number of invasive radiological investigations such as angiograms. MRI may develop a clinical role in investigating the actual function of the brain in neuropsychiatry (Callicott and Weinberger, 1999). CT is preferable for bone; in brain trauma, subarachnoid haemorrhage, and acute cerebrovascular disease MRI is not as good as CT.

Spiral CT is used for the lungs, abdomen, and pelvis. It is valuable in detecting small lesions and is helpful in trauma patients since the procedure is so quick. Spiral CT does lose a little resolution compared with conventional CT, so for structures that are not moving conventional CT or MRI has the advantage.

6) Cost

Cost considerations include those of initial purchase (or lease) and set-up as well as running costs. Assistance in the procurement process is available from the Diagnostic Medical Equipment team, which is working closely with the Department of Health to optimise value for money in the replacement of all pre-1997 MRI and CT scanners. A 16-slice multislice CT scanner costs approximately £500,000 whereas an MRI scanner costs more, at £800,000; running costs are also higher with MRI (Frank, 2003). Bowens and Smith (writing in 1999) state that the costs of an MRI scanner run from £400,000 for a 0.5 T machine to £750,000 for a 1.5 T machine, that service contracts are around £50,000 per year, and that leasing a machine costs about £120,000 per year. MRI may be more expensive to install since the magnet is large and heavy; the site may be unsuitable with regard to load bearing or access, and in any case expense will be incurred in magnetic shielding. MRI is a relatively expensive imaging modality. Fletcher (1999) has analysed the costs of acquiring and operating MRI in the NHS over a seven-year machine lifespan: its staffing, upgrade, maintenance, and running costs are all high. The cost of an MRI scan varies from £30 to £180 (Bowens and Smith, 1999).

In evaluating costs it is necessary to look at the whole picture. The running costs of isolated MRI machines will be higher than where machines are grouped together. Smaller MRI scanners just for joint scanning use may prove cost effective (Marti-Bonmati & Kormano, 1997). If a more expensive scanning modality saves on the costs of surgery then overall there may be economic gain. For instance MRI may avoid knee joint surgery (Bui-Mansfield 1997). It is important to ensure that it is actually replacing other investigations or surgery and not just adding to them (Hailey & Marshall, 1995). Overall the cost effectiveness will depend on how appropriately the imaging modality is used.

Regarding CT, the X-ray tubes are expensive. A helical scanner is likely to need one X-ray tube replacement per year (possibly more frequently in the case of a multislice scanner), which will cost approximately £30,000-£40,000 (Garvey and Hanlon, 2002). Berry (1999) performed a systematic review and found little clinical or economic impact of spiral CT.

Conclusion

Although there has been a move away from MRI to helical CT in some clinical situations, units will need access to both types of scan. Cooperation between different units is important in order to provide a comprehensive service to the population; some patients, such as orthopaedic outpatients, may need to travel to another unit for their scan, and computerised reporting makes off-site scanning realistically closer. The choice of scanning modality is ultimately likely to depend on collaboration with local units to develop a hub-and-spoke approach that provides services which are cost effective as well as effective and convenient for patients.

References
Book

Fishman EK Jeffrey RB Spiral CT. Principles, Techniques and Clinical Applications. 2nd edition. 1998 Philadelphia. Lippincourt Raven.

Articles

Berry E et al A systematic literature review of spiral and electron beam computed tomography: with particular reference to clinical applications in hepatic lesions, pulmonary embolus and coronary artery disease. Health Technology Assessment, 1999; 3(18)

Bui-Mansfield LT et al Potential cost savings of MR imaging obtained before arthroscopy of the knee: evaluation of 50 consecutive patients. American Journal of Roentgenology 1997; 168: 913-18

Callicott JH and Weinberger DR Neuropsychiatric dynamics: the study of mental illness using functional magnetic resonance imaging. European Journal of Radiology, 1999: 30(2): 95-104

Garvey CJ and Hanlon R. Computed tomography in clinical practice. BMJ 2002;324:1077-1080

Fletcher J et al The cost of MRI: changes in costs 1989-1996. British Journal of Radiology 1999; 72(5): 432-437

Duncan KR. The development of magnetic resonance imaging in obstetrics. British Journal of Hospital Medicine, 1996; 55(4): 178-81

Frank J. Introduction to imaging. Student BMJ 2003;11:393-436

Gould SW and Darzi A The interventional magnetic resonance unit – the minimal access operating theatre of the future? British Journal of Radiology 1997; 70 (Special issue): S89-97

Kalender WA Spiral or helical CT; right or wrong?[letter] Radiology 1994; 193:583.

Hailey D and Marshall D The place of magnetic resonance imaging in health care. Health Policy, 1995; 31: 43-52

Marti-Bonmati L & Kormano M. MR equipment acquisition strategies: low-field or high-field scanners. European Radiology 1997; 7(Supplement 5): 263-68

Moore NR and Golding SJ Increasing patient throughput in magnetic resonance imaging: a practical approach. British Journal of Radiology, 1992; 470-75 26

National Radiological Protection Board. Patient dose reduction in diagnostic radiology. Didcot, 1990:1(3).

Renowden SA and Anslow P. The effective use of magnetic resonance imaging in the diagnosis of acoustic neuromas. Clinical Radiology 1993; 48(1): 25-8

Roy P-M, Colombet I, Durieux P et al. Systematic review and meta-analysis of strategies for the diagnosis of suspected pulmonary embolism. BMJ 2005;331:259

Royal College of Radiologists. A guide to the practical use of MRI in oncology. London – RCR, 1999b

Royal College of Radiologists. Making the best use of a department of clinical radiology: guidelines for doctors (2nd edition). London – RCR, (3rd edition) 1993, (4th edition) 1998, (5th edition) 2003.

Stoner DW. The knee. In: Seminars in Roentgenology 1995; 30: 277-93

Wagner LK. CT fluoroscopy: another advancement with additional challenges in radiation management. Radiology 2001; 216: 9-10

Reports

Bowens A Smith I Magnetic resonance imaging: current provision and future demands. Nuffield Portfolio programme Report No3. Northern and Yorkshire R&D Portfolio programme at the Nuffield Institute for Health. December 1999. Available at http://www.nuffield.leeds.ac.uk/downloads/portfolio/mri.pdf

Royal College of Radiologists. Making the Best Use of a Department of Clinical Radiology: Guidelines for Doctors. Fifth Edition, 2003. BFCR(03)3

Websites

British Association of MR Radiographers http://www.bamrr.net/

Department of Health www.dh.gov.uk

Diagnostic Medical Equipment team http://www.pasa.doh.gov.uk/dme/radiology/mr.stm

Hand Glove Controller Rehabilitation Aids Technology

DEVELOPMENT OF A HAND GLOVE CONTROLLER FOR REHABILITATION AIDS
Muhammad Hafizudin Bin Abdul Manas
Abstract

A hand glove with sensors is developed as a rehabilitation aid. The aim of this paper is to demonstrate a hand glove whose sensor output can be used as an input signal for rehabilitation purposes. In this project a wheelchair is used as the mechanism, and it is controlled by flex sensors. The sensors guide the wheelchair in the forward, reverse, right, and left directions using a hand-gesture algorithm. An Arduino UNO (ATmega328) acts as the interface between the flex sensors and the wheelchair: it converts the analog resistance readings from the sensors to digital values, then responds with pulse width modulation (PWM) through a motor driver to accelerate the DC motors. The output provides the instructions for wheelchair motion: forward, reverse, left, and right.

Keywords – Hand Glove, Flex Sensor, Wheelchair, Microcontroller Arduino UNO, Hand Gesture

INTRODUCTION

In real life there are many people with physical disabilities who cannot communicate, listen, walk, and more. For disabled people, rehabilitative aids are important for maintaining the same daily activities as able-bodied people. Every year stroke affects almost one million people, and 80% of survivors are left with weakened limbs and hands. Many designs and methods have been developed to help them regain hand movement and strength, and rehabilitation systems have been designed to help patients and disabled people [4]. One of the most common human physical activities is walking, which plays an important role in daily life; patients and disabled people with walking problems need rehabilitation aids. For example, the wheelchair is one mechanism that helps disabled people continue their daily activities. Most wheelchairs in use today are controlled manually by hand or require assistance to move. Many evolutions have transformed wheelchair control, such as powered wheelchairs and wheelchairs controlled by voice commands, joysticks, or push buttons.

Wheelchair control by voice command is one method of helping disabled people. This mechanism is designed around vocal commands for several conditions, such as forward, backward, right, left, stop, light on, and light off. Vocal commands are limited and the same voice must be used to control the motion of the wheelchair; control is more reliable in a silent environment than in a noisy one [1]. Push buttons can also control a wheelchair: the buttons are programmed to move the wheelchair to particular places. For example, pressing button 1 moves it to the kitchen and pressing button 2 moves it to the living room. The wheelchair uses a sensor to detect obstacles in its path and follows a line-following algorithm based on black and white surfaces on the floor [2]. Building on these ideas, this project consists of:

Development of Control Algorithm
High Level programming language
Development of Control Algorithm

An algorithm is any computation performed under a set of rules applied to numbers in decimal form; it can represent the solution to a problem. The control algorithm is the most important characteristic and the one to consider first: it defines the true nature of the output as a function of the input.

An example of such an algorithm is hand-gesture control, where gestures produce the output; hand gestures are used for controlling robot motion and in video games [3].

High Level programming language

In order to develop a control algorithm, a high-level programming language is used. High-level languages are designed to be easier to understand than assembly languages; their source code is translated into object code before the program can run.

For example, a wheelchair using a MEMS sensor has been controlled with C-language programming on a PIC microcontroller [4].

LITERATURE REVIEW
Hand Glove Control

In modern biomedical technology, robotic systems such as the soft robotic glove are used for physical assistance and rehabilitation. The glove is a portable device the patient can wear to exercise individual fingers and minimize the stresses on the hand during therapy. Newer assistive technologies include the PowerFoot One and the Luke Arm. The PowerFoot One is an advanced complete ankle and foot prosthesis that mimics the human foot, so the user's legs move like those of a person walking normally. The Luke Arm is designed to provide a partially articulated robotic arm that uses foot pads for control [4]. Hand gloves have also been used to control machine toys such as helicopters and remote-control cars: the person wears the glove as the controller and moves the machine through motion variables, much like a joystick [7]. The same concept is applied in this project, where the hand glove acts as a controller, moving the wheelchair according to the movement of the fingers.

Flex Sensor

Today robots are used to move objects and perform tasks in place of human beings, and sensors play an important role in robotics. A sensor is a device that can measure motion to a high degree of accuracy. Flex sensors are analog resistors that work as variable analog voltage dividers; inside a flex sensor are carbon resistive elements on a thin flexible substrate. Figure 1 shows the degree of bending for a flex sensor: the smaller the bend radius, the higher the resistance value [8]. Prior research designed a powered wheelchair controlled by hand movement, using two fingers of a hand glove, with two photodiodes on the upper side and two sensors on the opposite side for those fingers. A microcontroller programmed with different code combinations acts as a converter from input signal to output, decoding the signal into the appropriate wheelchair movement by accelerating the DC motors [3].

Figure 1: Flex sensor offers variable resistance readings. Retrieved from Flex Sensor Based Robotic Arm Controller Using Micro Controller [8].

ALGORITHM FOR HAND GESTURES

In daily life many people frequently use hand gestures to communicate, such as a thumbs-up for "good" and two fingers in a "V" shape for "peace". Many researchers [4][8] have used hand gestures to identify and recognize actions without speaking. Joyeeta Singha's project recognizes sign-language hand gestures step by step; the system consists of five steps - skin filtering, palm cropping, edge detection, feature extraction, and classification - and achieves almost 90% recognition for different symbols [4]. Y. Tabata developed hand gestures for finger spelling in the Japanese language, creating gestures for the alphabet and numbers [8]. The same hand-gesture concept is applied here in the algorithm that controls the wheelchair's movement and motion.

Figure 2: Example of hand gesture that cropping [4]

METHODOLOGY
Project Overview

Figure 3: Block Diagram of Hand Glove System

Figure 3 shows a block diagram of the hand glove system, comprising a 2.2-inch flex sensor, an amplifier, an ATmega328 (Arduino UNO) microcontroller, a dual-channel 10 A motor driver, and DC motors. The input is the flex sensor and the output is the DC motor.

The microcontroller is the core of the whole system: it generates the pulse width modulation (PWM) signal that controls the dual-channel motor driver, and the motor driver converts the PWM signal into the output that drives the DC motor slowly or quickly. The Arduino also converts the resistance input from the sensors from an analog reading to a digital value in binary.

Flex Sensor with amplifier

Figure 4 shows the flex sensor connection circuit for the resistance-to-voltage converter. The input resistance range of this flex sensor is 0 to 10 kΩ.

Figure 4: Resistance to Voltage converter connection

A negative reference voltage gives a positive output; the output value is produced when the sensor is at a low degree of bending. The op-amp carries the signal from input to output as a voltage over a wide range. This project uses the LM324 as the amplifier; it consists of four independent, high-gain, internally frequency-compensated operational amplifiers. Figure 5 shows a performance characteristic of the LM324: the input voltage range versus the supply voltage.

Figure 5: Input Voltage Range vs Supply Voltage
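The resistance-to-voltage step can be modelled as follows. This is a simplified passive-divider sketch in Python, not the Figure 4 op-amp circuit itself; the supply voltage and fixed resistor value are assumptions, chosen to match the +5 V supply and 0-10 kΩ sensor range stated in the text.

```python
def flex_divider_vout(v_supply, r_fixed, r_flex):
    """Voltage-divider output measured across the flex sensor."""
    return v_supply * r_flex / (r_fixed + r_flex)

# Assumed 5 V supply and 10 kΩ fixed resistor; flex sensor swings 0-10 kΩ
for r_flex in (0, 5_000, 10_000):
    print(r_flex, round(flex_divider_vout(5.0, 10_000, r_flex), 2))
```

As the sensor bends and its resistance rises, the output voltage rises from 0 V toward half the supply, which the amplifier stage then conditions for the microcontroller.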

Atmega328 Microcontroller

Figure 6: ATmega 328 microcontroller

Figure 6 shows the pins of the ATmega328 microcontroller that trigger the signal to the motor driver. The generated pulse width modulation (PWM) signal is sent to the motor driver from a specific pin port. The microcontroller also converts the resistance input from the sensors from an analog reading to a digital value in binary.
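The analog-to-PWM step can be sketched as follows. This is a Python model of the mapping rather than the project's Arduino C code; the 10-bit ADC range (0-1023) and 8-bit PWM range (0-255) are the standard ATmega328 ones.

```python
def adc_to_pwm(adc_value):
    """Map a 10-bit analog reading (0-1023) to an 8-bit PWM duty value (0-255)."""
    if not 0 <= adc_value <= 1023:
        raise ValueError("10-bit ADC reading expected")
    return adc_value * 255 // 1023

print(adc_to_pwm(0), adc_to_pwm(512), adc_to_pwm(1023))  # -> 0 127 255
```

A larger sensor reading thus yields a proportionally larger duty cycle, which the motor driver turns into faster motor rotation.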

EXPERIMENT SETUP

Figure 7: Block diagram of the hand glove control system

The hand glove controller has been developed to help disabled people drive the wheelchair. Figure 7 shows the five main parts that combine to let the hand glove control the wheelchair's movement; the Arduino board is the key electronic device.

Figure 8 : Hand Glove with Flex Sensor

Figure 8 shows the assembled hand glove with flex sensors. It is portable and easy to handle. One hand is used, and four flex sensors are attached to the glove, one fitted along the length of each finger except the thumb. Each finger, through its sensor, has its own function in controlling the movement of the wheelchair; the required supply voltage is +5 V, and the resistance input from each sensor is converted from analog to digital. Table 1 shows the function and motion for each sensor.

Table 1: Function of each sensor

No | Sensor | Motion of wheelchair
1 | A | Forward
2 | B | Backward
3 | C | Right
4 | D | Left

RESULT AND DISCUSSION

Based on the experimental setup, a new algorithm is developed for the hand glove system to control the wheelchair. The new hand glove controller consists of the Arduino, PWM, and the wheelchair system. In this project the C language is used as the programming medium and is programmed into the Arduino UNO. The source code implements the decision-making and control algorithm based on hand gestures. Figure 9 shows the flowchart of the algorithm that drives the movement of the wheelchair.

Figure 9: Flow Chart for Algorithm Hand Gesture

After the source code has been programmed into the ATmega328 microcontroller (Arduino UNO board), the hand glove controller system is tested in two different ways. The first test runs the program without the DC motor; the second test runs it with the DC motor, in order to see whether the developed control algorithm functions correctly.

Figure 10: The assembled hand glove system with the wheelchair

Figure 10 shows the real situation, where a person sits in the wheelchair with the hand controller system. The experiment used two healthy male subjects, aged 25, with weights in the range 50-70 kg. In the second test the performance was good and the mechanism functioned well: the DC motors responded to the signal provided by the dual-channel motor driver under the algorithm. Table 2 shows the hand gestures and the direction of wheelchair movement.

Table 2: Hand gesture and direction of movement (the hand-gesture and LCD-display images are not reproduced here)

- Wheelchair stop
- Wheelchair forward, slow or fast, according to the degree of bend of the fingers
- Wheelchair reverse, slow or fast, according to the degree of bend of the fingers
- Wheelchair right, slow or fast, according to the degree of bend of the fingers
- Wheelchair left, slow or fast, according to the degree of bend of the fingers

Table 2 shows the hand-gesture algorithm and the direction of the wheelchair when it is applied. The tests show that the control system implemented with this hand glove works quite successfully with the wheelchair system, allowing disabled people to use the wheelchair in their daily activities without assistance. The DC motors are controlled by the motor driver interfaced to the Arduino, which was programmed in C.

Table 3: Reading of flex sensor and condition of wheelchair

Sensor | Condition of wheelchair movement | Reading of flex sensor (Hz)
A, B, C and D | Stop | each sensor < 50 or > 150
A | Forward | 50 < A < 150
B | Backward | 50 < B < 150
C | Turn right | 50 < C < 150
D | Turn left | 50 < D < 150

Table 3 shows the readings of the flex sensors and the condition of the wheelchair when the algorithm is applied. The wheelchair stops when every sensor reads below 50 Hz or above 150 Hz. For forward motion, slow or fast, the motors accelerate when sensor A reads between 50 Hz and 150 Hz; sensors B, C, and D use the same range to produce the other directions.
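The stop/move decision can be sketched as a threshold check. This is a Python model of the decision logic using the 50-150 window from the text, not the project's Arduino C code; the function and sensor names are illustrative.

```python
# Sensor letter -> wheelchair direction, as in Table 1
DIRECTIONS = {"A": "forward", "B": "backward", "C": "right", "D": "left"}

def wheelchair_command(readings):
    """readings: dict of sensor letter -> value; move only when a reading
    falls inside the 50-150 window, otherwise stop."""
    for sensor, direction in DIRECTIONS.items():
        value = readings.get(sensor, 0)
        if 50 < value < 150:
            return direction          # first sensor inside the window wins
    return "stop"                     # all readings below 50 or above 150

print(wheelchair_command({"A": 100, "B": 20, "C": 20, "D": 20}))  # -> forward
print(wheelchair_command({"A": 20, "B": 20, "C": 20, "D": 20}))   # -> stop
```

In the real system the in-window reading would additionally set the PWM duty, so the wheelchair moves slowly or quickly according to the degree of bend.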

CONCLUSION

In this project, the development of a hand glove controller for rehabilitation aids is proposed to assist disabled people who have problems such as difficulty walking, helping them control the wheelchair more easily. The implementation of the control algorithm and devices in the hand glove system was successful; with this application, disabled people and patients can control the wheelchair by themselves. In addition, the hand-gesture algorithm makes the system easier and more comfortable to control. The objectives of this project can therefore be considered met: the algorithm gives the instructions for the motion of the wheelchair.

REFERENCES
R.Puviarasi, MrithaRamalingam, Elanchezhian Chinnavan (2013), Low Cost Self-assistive Voice Controlled Technology for Disabled People Department of Electrical and Electronics Engineering, Saveetha University, India
S. Shaheen, A. Umamakeswari (2013), Intelligent Wheelchair for People with Disabilities, University, Thanjavur, Tamil Nadu, India
Solanki Krunal M (2013), Indian Sign Languages using Flex Sensor Glove, Department of Biomedical Engineering, Govt. Engineering College, Gandhinagar, India
Michael A Delph II, Sarah A Fischer, Phillip W Gauthier ,Carlos H. Martinez Luna, Edward A. Clancy, Gregory S. Fischer,(2013) A Soft Robotic Exomusculature Glove with Integrated sEMG Sensing for Hand Rehabilitation, AIM Lab, Worcester Polytechnic Institute, Worcester, MA, USA,.
Dr.Shaik Meeravali, Aparna,(2013), Design and Development of a Hanglove Controlled Wheel Chair Based on MEMS, Department of Electronics and communication Engineering, RRS College of Engineering and Technology, Muthangi, Faculty of Electronics and Communication Engineering, Jawaharlal Technological University, Hyderabad, India.
Joyeeta Singha, Karen Das (2012), Hand Gesture Recognition Based on Karhunen-Loeve Transform, Department of Electronics and Communication Engineering 1,2Assam Don Bosco University, Guwahati, Assam, India
Jeremyblum (2010), Hardware Control Using Hand Gesture from http://www.jeremyblum.com/2010/05/09/sudoglove/ retrieved 20/11/2014
Abidhusain Syed1, Zamrrud Taj H. Agasbal, Thimmannagouday Melligeri 1, Bheemesh Gudur1 (2011), Flex Sensor Based Robotic Arm Controller Using Micro Controller,Department of Electronics and Communication, BLDEA College of Engg Bijapur-3, India; 2Department of Electronics and Com- munication, KBN College of Engg Gulbarga-4, India.
SensorWiki.org (2013),Flexion retrieved from http://www.sensorwiki.org/doku.php/sensors/flexion
Y Tabata, T Kuroda, K Okamoto (2012), Development of a glove-type input device with the minimum number of sensors for Japanese finger spelling,Department of Radiation Technology, Kyoto College of Medical Science, 1-3 Imakita, Oyama-higashi, Sonobe, Nantan, JAPAN.
GlobalSpec, (2013), Safety Gloves Information retrieved from http://www.globalspec.com/learnmore/manufacturing_process_equipment/personal_protective_equipment/gloves_clothing


DISSERTATION PROPOSAL
GOOGLE: AT THE FORE OF A PUBLISHING REVOLUTION
Abstract

Google Inc. is poised to ignite a technological revolution in publishing, a revolution that will establish the company as a leader in the publishing industry. This thesis will be supported by applying existing theories on industry and organisational life cycles, technology, and business strategy to the current state of the publishing industry vis-a-vis internal factors at Google.

Hypothesis

Google Inc. is strategically poised to ignite a technological revolution in the publishing industry, a move that will permit Google, already proclaimed as “the top search engine in the world” (Piper 2004), to become the dominant player in the electronic publishing, or e-publishing, industry and a major force in the broader publishing industry.

Importance of the Topic

Today, the world is witnessing the beginnings of a technological challenge to traditional ‘paper and ink’ publishing. This challenge, which is not unlike that posed by Internet enterprises to traditional ‘brick and mortar’ retail establishments, banks, and service organisations, is being led by Google Inc.

Based on a theoretical foundation, this research will explore the convergence of new technologies and organisational factors that Google is strategically leveraging to revolutionise publishing and to achieve leadership status in the publishing industry.

Theoretical Base for the Research

Research will be based on industry and organisational life cycle theories as well as classic technology theory and its relevance to the life cycle theories and business strategy. Industry life cycle theory suggests that industries pass through a series of stages which affect factors such as competition, consumer demand, and strategy. Organisational life cycle theory suggests that all organisations evolve through a typically predictable set of sequential stages in which their thinking and behaviour change. The concept of technology in this context refers to the methods and mechanisms that organisations use to transform inputs into outputs. The application of technology, through the implementation of organisational strategies, can affect industry and organisational life cycles. The theory of competitive position suggests that organisations adopt strategies that reflect their positions in the market.

Prior Research on the Topic

Google’s Web site (n.d.) states that its mission is “to organize the world’s information and make it universally accessible and useful”. Deutschman (2005) reports that Google, founded in 1998, has experienced phenomenal sales growth of more than 400,000% in the past five years, making it the fastest growing company in history. He states that the market value of the company is US$80 billion. Wikipedia (n.d.) traces Google’s history from its inception as a research project in 1996 through today and furnishes insight into management and salaries, the corporate culture, acquisitions, and legal and social issues. Google has been described as “more than a search engine, less than a god” (Piper 2004) and as “the 800-pound octopus that is filling potential rivals with dread and envy”, implicitly threatening competitors with acquisition or elimination. (Elgin and Hesseldahl 2005). Glover (2004) summarises Google’s business model as one which offers its services to the public at no cost, earning its revenue from advertisers who post links to their own Web sites then pay fees to Google based on the number of people who make the link from Google to the advertisers’ sites.

Elgin and Hesseldahl (2005) provide significant insight into Google’s ambitious business expansion plans and its challenges to major industry players. Notess (2005) reports on Google’s first entry into the e-publishing arena with Google Answers. Pike (2005) describes Google Scholar and the Google Library Project as continuing forays into the e-publishing industry; M2 Presswire (2004) explains Google Library in more depth. Notess (2005) compares Google Scholar with Scirus, a competitive product. Peek (2004) reports on Google’s relationship with DSpace, a company devoted to capturing, storing, indexing, preserving, and redistributing university research results, and the Electronic Education Report (2003) describes Google’s relationship with DK Publishing in a joint effort to install an encyclopaedia for young people on the Web. Jesdanun (2005) reports on the impact Google Library is having on the publishing industry. Ferguson (2005) and Dodson (2005) provide in-depth analyses of Google’s cross-industry plans for the future as well as plans specifically relating to the publishing industry. Finally, Carvajal (2005), Degtyareva (2005), Liedtke (May and August 2005), and PR Newswire (2005) pose global issues that Google as well as publishers and authors are facing with regard to Google’s e-publishing plans.

In addition to the sources surveyed for information about Google, research was conducted into the history of publishing, industry life cycle, technology theory, organisational life cycle, and business strategy formulation. Feather (1990) and Millgate (1987) write extensively about the history of publishing. The Columbia Encyclopedia (2004) defines publishing in a broad sense as “making something publicly known” then continues by describing its history, the emergence of publishing firms, new technologies, and mergers and acquisitions. Proctor (2000), in advising that industry life cycle is a key factor in business strategy planning, identifies and describes in detail the three stages that comprise an industry’s life cycle – growth, maturity, and decline – as well as the characteristics of industries at each stage. Pitt (2000) explores the philosophical meaning of and various definitions for technology. Daft (1998, citing Rousseau 1979 and Perrow 1967) defines technology as “the tools, techniques, and actions used to transform inputs into outputs”. Daft (1998) likens the life cycle of an organisation to that of a person (i.e. birth, growth, and death), citing the following as stages through which an organisation passes during its development: entrepreneurial stage, collectivity stage, formalisation stage, and elaboration stage. Smith and colleagues (1991) provide a conceptual framework and a comprehensive methodology for developing and implementing business strategies.

Research Approach

The selected approach involves using secondary research to support the stated thesis. The interrelationships among industry and organisational life cycle, technology, and business strategy theories will be explored and then related to the history of the publishing industry to explain why the industry is at a stage where it is susceptible to fundamental change. Finally, based on the established theoretical foundation and the publishing industry’s susceptibility to change, data collected about Google Inc.’s history, mission, business model, financial status, competitors, challenges, technologies, and plans for the future will be used to show that the company is uniquely positioned to take advantage of that susceptibility by fundamentally changing publishing technology, thereby becoming the dominant player in the electronic publishing, or e-publishing, industry and a major force in the broader publishing industry.

Limitations and Key Assumptions

This project will not involve the use of primary research as sufficient secondary data exists. The research will allude to the many business opportunities that Google is currently exploring to highlight the extent of the company’s expansion plans, but will concentrate on the company’s e-publishing initiatives. Only the theories identified in this proposal will be used to prove the thesis.

No assumptions are being made.

Contribution to Knowledge

The contribution to knowledge resulting from this research will be to use industry and organisational life cycle, technology, and business strategy theories and their interrelationships to demonstrate how Google can leverage its position and new technologies to fundamentally change a major existing industry and to establish a leadership position in that industry.

Proposed Chapters

It is envisioned that the dissertation will consist of six chapters: (1) introduction, (2) survey of prior research, (3) research methodology, (4) research results, (5) analysis of results, and (6) summary and conclusions.

References

Carvajal, Doreen (2005) ‘German publishers, Google challenge’, International Herald Tribune, June 6, 2005.

(The) Columbia Encyclopedia (2004) Book publishing.

Daft, Richard L. (1998) Organization Theory and Design, Cincinnati, Ohio: South-Western College Publishing.

Degtyareva, Victoria (2005) ‘New Google digital library hits copyright roadblocks’, University Wire, September 21, 2005.

Deutschman, Alan (2005) ‘Can Google stay Google’, Fast Company, August 1, 2005.

Dodson, Angela P. (2005) ‘A whole new meaning for the verb to Google – Between the lines: The inside scoop on what’s happening in the publishing industry’. Black Issues Book Review, March 1, 2005.

Electronic Education Report (2003) ‘DK Publishing teams with Google to launch new e-encyclopedia’, August 29, 2003.

Elgin, Ben and Hesseldahl, Arik (2005) ‘Google’s grand ambitions’, Business Week, September 5, 2005.

Feather, John (1990) ‘The printed book’ and ‘Publishing before 1800’, Coyle, Martin et al., eds., Encyclopaedia of Literature and Criticism, London: Routledge.

Ferguson, Charles H. (2005) What’s Next for Google, January 2005. Web Site: http://www.technologyreview.com/articles/05/01/issue/ferguson0105.0.asp, [Accessed: October 18, 2005].

Glover, Tony (2004) ‘Google IPO locks out foreign investors – or does it?’, Knight Ridder/Tribune Business News, May 9, 2004.

Google (n.d.), Corporate Information: Company Overview, Web site: http://www.google.com/intl/en/corporate/index.html, [Accessed: October 19, 2005].

Jesdanun, Anick (2005) ‘Google project shakes up book publishing’, Wisconsin State Journal, September 22, 2005.

Liedtke, Michael (2005) ‘Google halts scanning of copyrighted books’, Associated Press, August 13, 2005.

Liedtke, Michael (2005) ‘Publishers protest Google Library project’, Associated Press, May 24, 2005.

M2 Presswire (2004) ‘Google checks out library books; The Libraries of Harvard, Stanford, the University of Michigan, the University of Oxford, and The New York Public Library join with Google to digitally scan library books and make them searchable online’, December 14, 2004.

Millgate, Jane (1987) Scott’s Last Edition: A Study in Publishing History, Edinburgh: Edinburgh University Press.

Notess, Greg (2005) ‘Scholarly Web searching: Google Scholar and Scirus’. Online, July 1, 2005.

Peek, Robin (2004) ‘Googling DSpace’, Information Today, June 1, 2004.

Perrow, Charles (1967) ‘A framework for the comparative analysis of organizations’, American Sociological Review 32. Cited in Daft (1998).

Pike, George H. (2005) ‘All Google, all the time’, Information Today, February 1, 2005.

Piper, Paul S. (2004) ‘Google spawn: The culture surrounding Google’, Searcher, June 1, 2004.

Pitt, Joseph C. (2000) Thinking about Technology: Foundations of the Philosophy of Technology, New York: Seven Bridges Press.

PR Newswire (2005) ‘Google Library project raises serious questions for publishers and authors’, August 12, 2005.

Proctor, Tony (2000) Strategic Marketing: An Introduction, London: Routledge.

Rousseau, Denise M. (1979) ‘Assessment of technology in organizations: Closed versus open systems approaches’, Academy of Management Review 4. Cited in Daft (1998).

Smith, Garry D. et al. (1991) Business Strategy and Policy, Boston, Massachusetts: Houghton Mifflin Company.

Wikipedia (n.d.) Google, Web site: http://en.wikipedia.org/wiki/Google, [Accessed: October 18, 2005].

Global Warming: Technical Solutions

Evaluate the Technological Solutions Available to Ameliorate Global Warming
Introduction

Global warming has been proven to be the direct result of anthropogenic causes or man-made interventions with nature. Starting with the Industrial Revolution of the late 18th Century, technologies have been developed that resulted in the accumulation of greenhouse gases in the atmosphere which trap the sun’s radiant energy. This enhanced greenhouse effect gradually raises the earth’s surface temperatures and is projected to create irregular environmental conditions, namely: the melting of polar ice caps, rising of sea level, profound agricultural changes resulting from climate change, extinction of species, abnormal weather conditions, increased incidence of tropical diseases, disappearance of ecological niches and disruption of drinking water supply, (“Global Warming,” 2004).

Since global warming offers a great potential to create catastrophic effects on the environment as a whole, it becomes a global issue, requiring the involvement of the whole international community in finding ways to ameliorate its adverse effects, (Baird, 2006). Global greenhouse gas emissions that are causing global warming come from different sectors. Figure I below shows the global greenhouse gas emissions by sector data: Land use change and forestry contribute the highest greenhouse gas emission rate (19%); followed by electricity (16%); agriculture (14%); transport (13%); other fuel combustion (11%); manufacturing and construction (10%); waste (4%); and industrial (3%) and combined heat and power (3%). The Pew Center on Global Climate Change (Undated, p. 1) asserts that “because there are so many sources of these gases, there are also many options for reducing emission.”

This paper evaluates the available technological solutions to ameliorate global warming by presenting the advantages and disadvantages of each option. Moreover, such solutions will be presented on a sectoral basis, starting with land use, forestry and agriculture; followed by electricity, and finally by the transportation sector.

Land Use, Forestry and Agriculture Sector Technology

Land use and forestry technology includes carbon accounting, sequestration, and biofuel production.

1. Carbon Accounting and Sequestration

According to the Intergovernmental Panel on Climate Change or IPCC (2000), carbon stock enhancements from land use, land-use change and forestry activities are reversible and therefore require careful accounting. Carbon accounting technology, which involves land-based accounting and activity-based accounting, provides accurate and transparent data on carbon stocks and/or changes in greenhouse gas emissions by sources and removals by sinks. These data are required to assess compliance with the commitments under the Kyoto Protocol. Moreover, carbon accounting will help determine relevant carbon pools that can be used in the production of an alternative source of fuel, such as biofuels. Changes in carbon stocks can be technically determined with the use of activity data, remote-sensing techniques, models derived from statistical analysis, flux measurement, soil sampling and ecological surveys. However, the cost of carbon accounting increases as precision and landscape heterogeneity increase, (IPCC, 2000). As a result of careful carbon accounting, excess carbon can then be captured or sequestered in order to be utilized as a fuel source. An example of carbon sequestration technology is the Integrated Gasification and Combined Cycle Process or IGCC, which allows for easy sequestration of carbon for long-term storage in underground geological formations. However, the Pew Center on Global Climate Change (Undated) cautions that further research is needed to test the viability of large-scale underground storage of carbon over the long term.

2. Biofuel Technology

Biofuel production or biomass gasification ensures lower greenhouse gas emission levels by converting waste wood and biomass into biofuels that could replace fossil fuels. The report of the Pew Center on Global Climate Change (Undated, p. 4) maintains that agricultural lands can be planted with carbon-dioxide-fixing trees that can be used for fuel production. This will result in land use changes that may have multiple indirect benefits such as improvement of soil, air and water quality, and an increase in wildlife habitat. However, study findings suggest that the cultivation of corn and soybeans for biofuel production produces adverse environmental impacts, such as the leaching of pesticides and of nitrogen and phosphorus from fertilizers into water resources, (Manuel, 2007). Moreover, biofuels are from two to four times more expensive than fossil fuels and are not believed to compete well in the marketplace. For example, “a fuel-cost comparison indicates that while gasoline could be refined for 15 to 16 cents per liter (in the late 1980s), the cost of biofuels ranged from an average of about 30 cents per liter (for methanol derived from biomass) to 63 cents per liter (for ethanol derived from beets in the United Kingdom)”, (Barbier et al. 1991, p. 142; cited in Johansen, 2002, p. 266).

Electricity Sector Technology

According to the Pew Center on Global Climate Change Report (Undated), power plants and coal combustion that supply electric power account for the greenhouse gas emissions on the electricity sector. Technological solutions available for this sector to address global warming include:

1. Integrated Gasification and Combined Cycle Process

The Integrated Gasification and Combined Cycle Process, or IGCC, is a power generation technology that improves the efficiency of electric power and heat generation with the use of a combination of fossil fuels and renewable energy. It enables clean gas production and the reduction of carbon dioxide emissions with the use of high-performance gas turbines, (Abela, et al., 2007). Moreover, air pollutants such as particulate matter, sulphur, nitrogen and mercury are removed from the gasified coal before combustion, (Abela, et al., 2007). However, the major disadvantage of using this technology is its high cost of operation, which is about 20% more than the operating cost of a traditional coal plant, (Wikipedia, undated).

2. Renewable Energy Sources

Renewable energy sources such as the wind, solar and water can produce electricity without releasing greenhouse gases and are thus important in the amelioration of global warming.

a. Wind Power

Wind power technology harnesses the power of the wind, which is an indirect form of solar power, to supply energy. Some turbines have propeller-type devices, while others have vertical-axis designs, which can accept wind from any direction. According to Elliott (2003), wind power is already an essential source of energy; by 2002, total generating capacity had reached 24,000 megawatts, with costs decreasing significantly as the technology develops. However, this technology often has large space requirements, because wind turbines need to be grouped together in wind farms in order to share connections to the power grid. Moreover, there should be a separation of about 5 to 15 blade diameters between individual wind turbines, in order to “prevent turbulent interactions in wind farm arrays”, (Elliott, 2003, p. 135).

b. Solar Power

Radiant energy can be captured and utilized to generate electricity, which may be used to operate solar batteries or may be transmitted along normal transmission lines. Radiant energy is collected in a photovoltaic cell, a bimetallic unit that allows direct conversion of sunlight to electricity. The main drawback of photovoltaic cells is their high cost. However, recent “developments in the semiconductor industry have significantly brought down prices”, (Elliott, 2003, p. 132). Electric power generation has also been accomplished with the use of big solar heat-concentrating mirrors and parabolic troughs and dishes that track the sun across the sky and focus its rays so as to raise steam, (Elliott, 2003, p. 130), and consequently produce electricity. One major disadvantage of solar power technology is that it works only during the day and requires electrical storage mechanisms at night. Additionally, radiant heat is insufficient in cold regions and in areas with extensive cloudy periods, resulting in a low amount of energy collection.

c. Water Power

Hydropower is the world’s biggest renewable source of energy. It is deemed one of the most acceptable and cleanest technologies, “whereby a unit of water produces hydropower cumulatively by passing through the turbines of many dams along the descent of a river”, (Gibbons, 1986, p. 86). According to Elliott (2003, p. 151), “there is around 650 GW of installed capacity in place, mostly in 300 large projects. However, in recent years, there have been social and environmental concerns about large hydros, and some new projects have met with opposition”. Its adverse environmental impacts include the destruction of large areas of natural vegetation and agricultural land for water storage; biodiversity loss; flooding; and displacement of population, (Elliott, 2003).

3. Geothermal Power

Geothermal power is not considered a renewable resource when used at rates of extraction greater than their natural replenishment. With sustainable use, however, geothermal power can be effectively harnessed to provide electricity. Geothermal energy comes from the heat of the earth and can be categorized into geopressured, magma, hydrothermal and hot dry rock, (Wright, 2002, p. 362). According to Hobbs (1995, cited in Wright, 2002, p. 362), commercial operations are mostly in the form of hydrothermal systems “where wells are about 2000 metres deep with reservoir temperatures of 180 to 270°C.” Although geothermal systems produce less than 0.2 percent of the carbon dioxide produced by coal or oil-fired plant, they also emit non-condensable gases such as small quantities of sulphur dioxide, methane, hydrogen sulphide, nitrogen and hydrogen. Additionally, such systems cause induced seismicity and ground subsidence. They are also capital-intensive investments that require financial and technical assistance, (Wright, 2002, p. 362).

Transport System Technology

The transportation sector has one of the highest greenhouse gas emission rates, after the land use and forestry, electricity and agriculture sectors. The Pew Center on Global Climate Change (Undated) recommends the use of “off the shelf” technologies that are currently available in the market and that significantly reduce the greenhouse gas emissions of conventional cars and trucks. These “off the shelf” technologies focus on increasing energy efficiency, fuel blending and the use of advanced diesels and hybrids. Additionally, long-term technological options to reduce greenhouse gas emissions are gradually being developed, including biofuels, electric vehicles and hydrogen fuel cells.

a. Fuel Blending

Fuel blending involves mixing ethanol and other biofuels with gasoline to produce more environment-friendly fuels. The Pew Center on Global Climate Change (Undated, p. 4) asserts that corn-based ethanol can reduce greenhouse emissions by at least 30% “for each gallon of regular gasoline that it replaces”.

b. Diesels and Hybrids

Diesel and hybrid engines offer excellent fuel economy and overall fuel efficiency. However, they also emit air pollutants such as nitrogen oxides and particulates. Newer diesel engine models, however, “use very sophisticated fuel-injection systems, which result in vehicles that have better acceleration with reduced emissions, vibration, and noise”, (Doyle, 2000, p. 383). Moreover, because diesels and hybrids afford excellent fuel economy, they use less gas on a per mile basis, thereby producing less greenhouse gas emissions compared to conventional cars and trucks. “When both technologies are combined in a diesel hybrid vehicle, it can yield a 65-percent reduction in greenhouse gas emissions per mile”, (Green and Schafer, 2003; cited in The Pew Center on Global Climate Change Undated, p.6).

c. Biofuels

As previously mentioned, biofuels offer cleaner emissions than regular gasoline. Agricultural and forest products can be processed to produce ethanol that may be combined with gasoline and enable significant reductions in greenhouse gas emissions. Corn-based, cellulosic and sugar-cane-based ethanols have been proven to significantly reduce emissions, (The Pew Center on Global Climate Change, Undated).

d. Electric Vehicles

Electric vehicles offer cleaner emissions by reducing the amount of pollutants and greenhouse gases released into the air. They release “30 percent less hydrocarbons and 15 percent less nitrogen oxides” than conventional vehicles, (Doyle, 2000, p. 289). In the past, electric cars needed advances in battery storage. Thus, the “plug-in” hybrid was developed to solve the battery storage problem. The “plug-in” hybrid “is a gas-electric vehicle that can be charged at home overnight”, (The Pew Center on Global Climate Change, Undated).

e. Hydrogen Fuel Cells

Hydrogen fuel cells “produce power by combining oxygen with hydrogen to create water”, (The Pew Center on Global Climate Change, Undated, p.6). Hydrogen is obtained from natural gas by reforming and is combined with oxygen that is readily available in the air, which generates electricity continuously. The fuel cells replace combustion turbines in integrated cycles, resulting in increased fuel efficiency of 46-55 percent. However, there is a need to find ways to produce hydrogen with minimal emissions, (The Pew Center on Global Climate Change, Undated).

Conclusion

A careful analysis of global greenhouse gas emissions by sector is essential in identifying the technological solutions needed to curb or reduce emissions. By concentrating on reducing the emissions of the higher-contributing sectors, overall efforts to address global warming can be effectively channeled. It is therefore imperative to focus on the available technologies that address the adverse effects of global warming in the following sectors: land use and forestry, electricity, agriculture and transport. In its comprehensive report on technological solutions for climate change amelioration, the Pew Center on Global Climate Change (Undated, p. 2) claims that “there is no single, silver bullet technology that will deliver the reductions in emissions that are needed to protect the climate”. It further recommends the integration of a portfolio of solutions wherein the identification of useful technologies should be based on the analysis of key economic sectors. Moreover, it suggests that policy makers should prioritize the creation of incentives that will release the power of the marketplace in developing solutions. In the final analysis, further research and development of a more exact and cost-effective portfolio of technologies to ameliorate global warming effects must be advocated.

References:
Abela, M., Bonavita, N., Martini, R., 2007. Advanced process control at an integrated gasification combined cycle plant. Available from: http://library.abb.com/GLOBAL/SCOT/scot267.nsf/VerityDisplay/62CF14177B1A39D2852572FB004B4EB3/$File/AC2%20ISAB_ABB.pdf. [Accessed: 11 August 2007].
Baird, S. L., 2006. Climate Change: A Runaway Train? The Human Species Has Reshaped Earth’s Landscapes on an Ever-Larger and Lasting Scale. The Technology Teacher, 66(4), 14+
Doyle, J. 2000. Taken for a Ride: Detroit’s Big Three and the Politics of Pollution. New York: Four Walls Eight Windows.
Elliott, D., 2003. Energy, Society & Environment. New York: Routledge.
Gibbons, D. C., 1986. The Economic Value of Water. Washington, DC: Resources for the Future.
Global Warming. 2004. In the Columbia Encyclopedia (6th Ed.). New York: Columbia University Press
Intergovernmental Panel on Climate Change (IPCC), 2000. IPCC Special Report: Land Use, Land Use Change and Forestry. Summary for Policy Makers. Available from: http://www.grida.no/climate/ipcc/spmpdf/srl-e.pdf. [Accessed: 10 August 2007].
Johansen, B. E., 2002. The Global Warming Desk Reference. Westport, CT: Greenwood Press.
Manuel, J., 2007. Battle of the Biofuels. Environmental Health Perspectives, 115(2), 92+.
The Pew Center on Global Climate Change. Undated. Climate Data: A Sectoral Perspective. Climate Change 101: Understanding and Responding to Global Climate Change. Available from: http://www.pewtrusts.org/pdf/pew_climate_101_techsolutions.pdf. [Accessed: 10 August 2007].
Wikipedia. Undated. Combined Cycle. Available from: http://en.wikipedia.org/wiki/Combined_cycle#_note-0. [Accessed: 11 August 2007].
Wright, R. M., 2002. Energy and Sustainable Development. In Natural Resource Management for Sustainable Development in the Caribbean, Goodbody, I. & Thomas-Hope, E. (Eds.) (pp. 307-385). Barbados: Canoe Press.

Global System for Mobile Communications (GSM) Technology

Investigation on the physical layer technologies employed in the GSM System
Absyarie Syafiq Bin Shahrin
Abstract

In this paper, we give an overview of GSM (Global System for Mobile Communications), focusing on the technologies employed at the physical layer of the GSM system. GSM is an interesting topic because it revolutionized the way we communicate and is still in use to this day. It is the 2nd Generation (2G) wireless system: it is digital rather than analog, and it deploys Time Division Multiple Access (TDMA) on multiple frequency subbands, i.e. Frequency Division Multiple Access (FDMA). The GMSK modulation and demodulation technique is discussed, together with how it works and its advantages and disadvantages. The problem of intersymbol interference (ISI) in GSM systems is also addressed, along with how to mitigate ISI using channel equalization. Finally, we give a simple explanation of how speech coding is accomplished in GSM transceivers.

Keywords: Gaussian Pulse, GMSK, ISI, channel equalizer, ISI equalizer, speech coding

I. Introduction

GSM is a standard developed by ETSI (the European Telecommunications Standards Institute) to describe the protocols of the second-generation (2G) communication technology used by mobile networks and cell phones. It was first launched in Finland, with data speeds of up to 64 kbps. GSM is termed 2G because it was something completely new compared to the first generation (1G), using digital signals instead of analog. It was designed from scratch, with no backward compatibility with the previous 1G technology. With 124 frequency channels, each carrier can accommodate up to 8 users through a combination of TDMA and FDMA [1], though some channels are reserved for control signalling. GSM also introduced the SIM (Subscriber Identity Module) card, which allows for roaming calls. At first it was designed only for operation in the 900 MHz band, but it was later adapted for 1800 MHz. GSM remains a very popular standard today, with over 90% market share and availability in over 219 countries and territories worldwide.
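
A quick arithmetic sketch of the capacity figures above (the 25 MHz band and 200 kHz carrier spacing are standard GSM-900 parameters, stated here as assumptions since the text does not give them):

```python
# Back-of-the-envelope check of the GSM-900 capacity figures quoted above.
# The 25 MHz uplink band is divided into 200 kHz FDMA carriers, one slot at
# the band edge is left as a guard, and each carrier is time-shared by 8
# users via TDMA.

band_width_hz = 25e6        # 890-915 MHz uplink allocation
carrier_spacing_hz = 200e3  # GSM channel spacing
timeslots_per_carrier = 8   # TDMA slots per RF carrier

carriers = int(band_width_hz / carrier_spacing_hz) - 1  # guard band removed
users = carriers * timeslots_per_carrier

print(carriers)  # 124 RF channels
print(users)     # 992 simultaneous users, before control-channel overhead
```

The 992 figure overstates usable capacity slightly, since, as the text notes, some channels carry control signalling.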

Originally, GSM was developed with the intention of replacing the first-generation analog networks with digital, circuit-switched networks optimized for full-duplex voice telephony. Over time, the GSM system was further developed to include data communications, first over circuit-switched transport and later over packet-switched transport via GPRS (General Packet Radio Service) and EDGE (Enhanced Data Rates for GSM Evolution).

In GSM, Gaussian pulse shaping is used together with Gaussian Minimum Shift Keying (GMSK) as the modulation/demodulation technique, with a modulation index of 0.5 [2]. This modulation method, however, gives rise to intersymbol interference. Intersymbol Interference (ISI) in the GSM system is usually caused by two factors: multipath propagation and band-limited channels. An ISI equalizer is used to solve this problem by implementing Maximum Likelihood Sequence Estimation (MLSE) via the Viterbi algorithm. To make things easier to understand, Figure 1 relates the GSM system to the OSI (Open Systems Interconnection) model. We will, however, focus on the physical layer of the GSM system.

Figure 1: How the GSM system maps onto the OSI model [7].

Pulse Shaping

In digital telecommunication systems, the transmitted data stream has significant low-frequency content, which requires a lowpass channel with bandwidth sufficient to accommodate it. The Gaussian pulse fits this requirement well. The speciality of this waveshape is that the pulse rises and falls smoothly until it settles to its final value [14]. This is a valuable property, as it avoids problems such as precursors, overshoot and ringing in the pulse signal [14]; those effects cause uncertainty about the actual signal value and are therefore very troublesome. Applying a Gaussian filter symbol by symbol also addresses two requirements of communication systems: operation over band-limited channels and reduced intersymbol interference (ISI). A perfect brick-wall spectrum is unachievable in practice, since the corresponding sinc pulse in the time domain would need to be infinitely long; we can only approximate it. ISI can also still occur if control is not exercised over the pulse shaping.
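
As a small illustration of these properties, the sketch below builds the discrete taps of a Gaussian filter (BT = 0.3, the value standardized for GSM, is an assumption not stated in this paragraph) and checks that its step response rises monotonically, i.e. no precursors, overshoot or ringing:

```python
import math

def gaussian_taps(bt=0.3, sps=8, span=3):
    """Discrete taps of a Gaussian filter h(t) ~ exp(-2*pi^2*B^2*t^2 / ln 2),
    sampled at sps samples per symbol and normalized to unit DC gain."""
    n = span * sps
    taps = [math.exp(-2 * (math.pi * bt * (i / sps)) ** 2 / math.log(2))
            for i in range(-n, n + 1)]
    s = sum(taps)
    return [t / s for t in taps]

taps = gaussian_taps()
step = [sum(taps[:i + 1]) for i in range(len(taps))]  # step response
# All taps are positive, so the step response rises monotonically:
# no precursors, no overshoot, no ringing.
assert all(a <= b for a, b in zip(step, step[1:]))
print(round(step[-1], 6))  # 1.0 -- settles smoothly to the final value
```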

Figure 2: An impulse response of a Gaussian Filter [15]

In GSM, Gaussian filtering is applied in Gaussian Minimum Shift Keying (GMSK), the modulation technique. GMSK is basically similar to Minimum Shift Keying (MSK), except that the data stream must first go through pulse shaping via a Gaussian filter before being applied to the modulator. MSK is already a good modulation scheme, as it possesses a constant envelope and maintains phase continuity. GMSK further reduces the sideband power, which reduces out-of-band interference between signal carriers in adjacent frequency channels. The GMSK technique has the advantage of carrying data while still using the spectrum efficiently. The reduced sideband power of GMSK is very useful for mobile phones in particular, since lower battery consumption is needed for operation [16]. The drawback of GMSK is that it requires more modulation memory in the system and introduces ISI.

There are two ways to generate GMSK modulation. The most basic way is to apply a Gaussian filter to the input signal and then a frequency modulator with a modulation index of 0.5 [2] [16]. The problem with this method is that the modulation index must be exactly 0.5; in the real world this is impossible, as component tolerances drift and vary [16].
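
The first method can be sketched numerically. To keep the phase arithmetic exact, the toy modulator below uses rectangular (pure MSK) shaping; a Gaussian filter would smooth the frequency pulse without changing the ±π/2 per-symbol phase increment that the 0.5 modulation index produces:

```python
import math

# Minimal MSK-style frequency modulator: NRZ bits are integrated into phase
# with modulation index h = 0.5, so the phase advances by +/- pi/2 per symbol.
# Gaussian pre-filtering (which would make this GMSK) is omitted here to keep
# the phase values exact.

def msk_phase(bits, sps=4):
    h = 0.5                       # modulation index
    phase, out = 0.0, []
    for b in bits:
        f = 1 if b else -1        # NRZ symbol
        for _ in range(sps):      # integrate frequency into phase
            phase += math.pi * h * f / sps
            out.append(phase)
    return out

phase = msk_phase([1, 1, 0, 1])
# Phase increment after the first symbol is pi/2 in magnitude:
print(round(phase[3] / (math.pi / 2), 3))  # 1.0
```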

Figure: Flow chart of GMSK modulation using a Gaussian filter and Voltage controlled oscillator

The second method is more realistic and widely used. It employs the quadrature (I-Q) modulator. The operation starts by separating the Gaussian-filtered data into two parts: an in-phase component (I) and a quadrature-phase component (Q). The I and Q components are then mixed up to the RF carrier frequency to produce the modulated RF signal. This kind of modulator maintains a 0.5 modulation index without requiring any modifications. The performance of this quadrature modulator depends on the accurate creation of the I and Q components. For demodulation, the scheme can be used in reverse [16].
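
The reason the I-Q structure holds the modulation index exactly can be seen from the trig identity it implements; a minimal sketch (the carrier frequency and phase values are arbitrary illustrations):

```python
import math

# The I-Q modulator splits the filtered phase phi into I = cos(phi) and
# Q = sin(phi) and mixes them with the carrier.  Because
#   I*cos(wt) - Q*sin(wt) == cos(wt + phi),
# the transmitted phase equals phi exactly, with no VCO tuning required.

def iq_modulate(phi, w, t):
    i = math.cos(phi)             # in-phase component
    q = math.sin(phi)             # quadrature component
    return i * math.cos(w * t) - q * math.sin(w * t)

w, t, phi = 2 * math.pi * 900e6, 1e-10, 0.7   # illustrative values
direct = math.cos(w * t + phi)                # ideal phase-modulated carrier
assert abs(iq_modulate(phi, w, t) - direct) < 1e-9
```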

X – mixer or multiplier

LO – Local oscillator

Figure 3: Block diagram of I-Q modulator

Inter symbol interference and channel equalization

ISI in the GSM system is mainly caused by multipath propagation. Multipath propagation arises when copies of a signal arrive at different times (with different delays) because the signal does not travel along the line of sight (LOS). In reality, a connection is rarely LOS all the time, so signals travel along different paths, being reflected or refracted from various objects before reaching the destination. When signals travel through multiple paths, they arrive at different times depending on the routes taken; it is also possible for reflected signals to overlap with subsequent symbols [13]. The result is distortion of the received signal, because the superimposed copies all have different delays. This happens both from mobile station to base station and vice versa. Since the delay spread exceeds the symbol time, frequency-selective fading occurs.
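
A toy numerical example of the mechanism (the two-tap channel is a made-up illustration, not a GSM channel model):

```python
# Toy multipath channel: a delayed, attenuated echo of each symbol lands on
# top of the next one -- exactly the ISI mechanism described above.

def multipath(symbols, taps):
    """Convolve the symbol stream with a discrete channel impulse response."""
    out = [0.0] * (len(symbols) + len(taps) - 1)
    for i, s in enumerate(symbols):
        for j, h in enumerate(taps):
            out[i + j] += s * h
    return out

tx = [1, -1, 1, 1]
rx = multipath(tx, [1.0, 0.5])   # direct path + echo one symbol later
print(rx)  # [1.0, -0.5, 0.5, 1.5, 0.5] -- each sample mixes two symbols
```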

Figure 4: An example of multipath propagation

Figure 5: ISI as a result of multipath distortion [13]

To combat the problem of multipath propagation, we use an ISI equalizer. This equalization technique is based on MLSE, which uses the Viterbi algorithm [3] [10]. The figure below shows the block diagram of the ISI equalizer.

Figure 6: Block diagram on how ISI equalizer is used in GSM environment

When the base station or the mobile station transmits a TDMA burst, not all of it is user data. Instead, 26 bits are allocated to a training sequence known to the receiver (whether mobile station or base station). The training sequence is unique to a given transmitter and is repeated in every transmission burst. The figure below shows the normal burst structure in GSM.
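
The burst bookkeeping implied by Figure 7 can be checked with a few lines (the field widths are the standard normal-burst values; they are stated here as assumptions since the figure itself is not reproduced):

```python
# Normal-burst bookkeeping: 26 of the 156.25 bit periods in each burst are
# the known training sequence, leaving 2 x 57 bits of user data per burst.

burst = {
    "tail": 3, "data1": 57, "flag1": 1, "training": 26,
    "flag2": 1, "data2": 57, "tail2": 3, "guard": 8.25,
}
assert sum(burst.values()) == 156.25
print(burst["data1"] + burst["data2"])  # 114 payload bits per burst
```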

Figure 7: GSM Normal Burst Structure

A channel estimator is needed because MLSE requires knowledge of the CIR (Channel Impulse Response). The channel estimator estimates the CIR for each burst by comparing the known transmitted bits with the received signal to produce he(t) [10]. Channel estimation in GSM uses a Linear MMSE (Minimum Mean Square Error) estimator [11]. Since the matched filter operates in the time domain, r(t) is convolved with the estimate he(t) obtained from the channel estimator to create a model signal y(t). The model signal can then be used to estimate the transmitted bits from the received bits by performing MLSE. This last step uses the Viterbi algorithm, hence the name Viterbi equalization [2] [9].
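
The estimation step can be sketched as follows. A plain least-squares fit stands in for the Linear MMSE estimator cited above (the two coincide in this noiseless toy); the 8-bit training sequence and 2-tap channel are made-up illustration values, far shorter than GSM's 26-bit sequence:

```python
# Sketch of channel estimation: the known training bits let the receiver
# fit a short channel impulse response he(t) to what was actually received.

def estimate_cir(rx, train, n_taps=2):
    """Least-squares fit of a 2-tap CIR via the 2x2 normal equations."""
    m = len(rx)
    col = [[(train[i - d] if 0 <= i - d < len(train) else 0)
            for i in range(m)] for d in range(n_taps)]
    a = [[sum(col[p][i] * col[q][i] for i in range(m)) for q in range(2)]
         for p in range(2)]
    b = [sum(col[p][i] * rx[i] for i in range(m)) for p in range(2)]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

train = [1, -1, 1, 1, -1, -1, 1, -1]   # toy +/-1 training bits
true_cir = [1.0, 0.4]                  # the unknown 2-tap multipath channel
rx = [sum(h * (train[i - j] if 0 <= i - j < len(train) else 0)
          for j, h in enumerate(true_cir))
      for i in range(len(train) + 1)]
print([round(h, 3) for h in estimate_cir(rx, train)])  # [1.0, 0.4]
```

With a noiseless received burst the fit recovers the channel exactly; with noise, the longer 26-bit sequence gives the averaging gain.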

Speech coding in GSM transceivers

Speech is analog in nature, while GSM is a digital system. To use the speech information, we need to run it through a series of processes known as speech processing. Figure 8 shows how speech processing is done in a GSM system.

For speech coding, the GSM system has used a variety of codecs to fit 3.1 kHz audio into between 6.5 and 13 kbit/s. The first two codecs used were called Half Rate (5.8 kbit/s) and Full Rate (13 kbit/s) [4]. Both of these codecs use LPC (Linear Predictive Coding), where voice signals are digitized and then secured using encryption over a narrow voice channel. Later, the GSM system was further developed to use the Enhanced Full Rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel.

Figure 8: Flow-diagram on GSM speech processing [8]

The full-rate speech coder belongs to the family of Regular Pulse Excitation – Long Term Prediction (RPE-LTP) coders [4]. The speech encoder takes as input a 13-bit uniform PCM signal, either from the audio part of the mobile station (MS) or from the Public Switched Telephone Network (PSTN) side via an 8-bit A-law to 13-bit uniform PCM conversion. The encoded speech is then delivered to the channel coding function, which produces an encoded block of 456 bits with a gross bit rate of 22.8 kbps [4] [5]; the remaining 9.8 kbps is used for error protection. The reverse operations are performed for decoding. During encoding, each frame of 160 samples, taken at a sampling rate of 8000 samples/s, is encoded into a block of 260 bits, hence the bit rate of 13 kbps [5]. During decoding, each 260-bit encoded block is mapped back to a reconstructed output speech frame of 160 samples.
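
The bit-rate figures in this paragraph are easy to verify:

```python
# Checking the full-rate codec arithmetic quoted above.
samples_per_frame = 160      # one 20 ms speech frame at 8000 samples/s
sample_rate = 8000
coded_bits = 260             # speech coder output per frame
channel_bits = 456           # per frame after channel coding

speech_rate = coded_bits * sample_rate / samples_per_frame
gross_rate = channel_bits * sample_rate / samples_per_frame
print(speech_rate)                 # 13000.0 -> 13 kbit/s
print(gross_rate)                  # 22800.0 -> 22.8 kbit/s
print(gross_rate - speech_rate)    # 9800.0  -> 9.8 kbit/s error protection
```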

EFR (Enhanced Full Rate) is a newer version of the speech codec, using the ACELP (Algebraic Code Excited Linear Prediction) algorithm. The motivation for its development was the mediocre quality of the GSM full-rate codec. EFR is a step up from FR because it provides speech quality equivalent or close to that of wireline telephony using 32 kbps ADPCM (Adaptive Differential Pulse Code Modulation) [6], in both error-prone and error-free conditions [6]. The EFR traffic channel is bi-directional and can carry both speech and data [9].

Figure 9: shows how error correction is done at layer 1 of the GSM air interface

Conclusion

All in all, this paper has helped me to better understand the GSM system and how it works at the physical layer. GSM has many sources, including but not limited to books, journals, application notes, lecture notes, documentation and survey papers. After reading from various sources, I learned to read efficiently and think critically, as the papers are quite hard and require meticulous reading to thoroughly understand what is being presented. I acquired basic research and development (R&D) skills and technical writing skills after almost a month of heavy reading and research. I also came to understand how the physical layer in the GSM system works. The signal must first be shaped by a Gaussian filter in the GMSK modulator. The quadrature modulator scheme is used because it maintains a 0.5 modulation index without modification. ISI in GSM is mostly caused by multipath propagation, which gives rise to frequency-selective fading; this fading occurs because the delay spread exceeds the symbol time, with symbols arriving at different times. To address ISI, we need an ISI equalizer, which consists of components such as the matched filter and MLSE via the Viterbi algorithm. I also learned that there are two speech coding options: the full-rate speech coder and EFR. All these components are essential when building a GSM system.

References

[1] Guifen Gu, Guili Peng “The Survey of GSM Wireless Communication System” International Conference on Computer and Information Application (ICCIA) , 2010

[2] B. Baggini, L. Coppero, G. Gazzoli, L. Sforzini, F. Maloberti, G. Palmisano “Integrated Digital Modulator and Analog Front-End for GSM Digital Cellular Mobile Radio System”, Proc. IEEE 1991 CICC vol. 31, pp. 7.6.1–7.6.4, Mar. 1991.

[3] M. Drutarovsky, “GSM Channel Equalization Algorithm – Modern DSP Coprocessor Approach” Radioengineering Vol. 8, No 4, December 1999.

[4] Besacier, L.; Grassi, S.; Dufaux, A; Ansorge, M.; Pellandini, F., “GSM speech coding and speaker recognition,”Acoustics, Speech, and Signal Processing, 2000. ICASSP ’00. Proceedings. 2000 IEEE International Conference on, vol.2, no., pp.II1085,II1088 vol.2, 2000

[5] www.etsi.org, “European digital cellular telecommunications system (Phase 1); Speech Processing Functions; General Description (GSM 06.01)”, GTS 06.01 version 3.0.0, January 1991.

[6] Jarvinen, K.; Vainio, J.; Kapanen, P.; Honkanen, T.; Haavisto, P.; Salami, R.; Laflamme, C.; Adoul, J.-P., “GSM enhanced full rate speech codec,” Acoustics, Speech, and Signal Processing, 1997. ICASSP-97., 1997 IEEE International Conference on , vol.2, no., pp.771,774 vol.2, 21-24 Apr 1997

[7] “Fundamentals: Signalling at the Air-Interface” Rohde and Schwartz Training Center v1.0

[8] http://www.rfwireless-world.com/Tutorials/gsm-speech-processing.html

[9] “GSM Air Interface & Network Planning” Training Document, Nokia Networks Oy, Finland, Jan 2002

[10] Vipin Pathak,“MLSE BASED EQUALIZATION AND FADING CHANNEL MODELING FOR GSM” (Hughes Software systems, Delhi), pp. 100-104, 2003

[11] Manoj Bapat, Dov Levenglick, and Odi Dahan, “GSM Channel Equalization, Decoding, and SOVA on the MSC8126 Viterbi Coprocessor (VCOP)” Freescale Semiconductor Application Note, Rev.0, 2005

[12] Baltersee, J.; Fock, G.; Meyr, H.; Yiin, L., “Linear MMSE channel estimation for GSM,” Global Telecommunications Conference, 1999. GLOBECOM ’99 , vol.5, no., pp.2523,2527 vol.5, 1999

[13] Kang, A. S., and Vishal Sharma. “Pulse Shape Filtering in Wireless Communication-A Critical Analysis.” Pulse 2, no. 3 (2011).

[14] James R. Andrews, “Low-Pass Risetime Filters for Time Domain Applications”, Picosecond Pulse Labs, Application Note AN-7a, March 1999.

[15] http://www.ni.com/white-paper/3876/en/

[16] http://www.radio-electronics.com/info/rf-technology-design/pm-phase-modulation/what-is-gmsk-gaussian-minimum-shift-keying-tutorial.php

[17] Fred Kostedt, James C. Kemerling, “Practical GMSK Data Transmission”, MX.com, INC, Application Note GMSK, 1998.

Examples of micro operations, microinstructions, micro programs and micro code

Q1. Give an Example of micro operations, microinstruction, micro program, micro code.

Sol :- Following are the examples of micro operations:-

Bus and Memory Transfers
Arithmetic Microoperations
Logic Microoperations

Example of Microinstruction:-

For Fetching Data:-

IF interrupt pending THEN branch to the interrupt-service micro routine

ELSE fetch the next instruction and map its opcode to the start of its micro routine

Example of micro program :-

sp := sp + (-1);

mar := sp; mbr := ac; wr;

wr;

This pushes the AC value onto the stack (a memory write spans two microinstruction cycles, which is why wr is issued twice)

Example of Micro code:-

mar := sp; rd;

sp := sp + 1; rd;

ac := mbr;

Pop a number from the stack and place it in the AC (again, rd is issued twice because a memory read spans two cycles)
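
The two micro programs can be mirrored in a toy register-transfer simulation (the memory contents and addresses are made-up illustration values):

```python
# Toy register-transfer simulation of the push/pop micro programs above,
# with a small memory, stack pointer (sp), memory address/buffer registers
# (mar, mbr) and accumulator (ac).  Names mirror the micro code.

mem = {9: 0, 10: 42}       # tiny memory; the stack grows downward
sp, ac = 10, 7

# push AC:  sp := sp + (-1); mar := sp; mbr := ac; wr
sp = sp + (-1)
mar = sp
mbr = ac
mem[mar] = mbr             # wr
assert mem[9] == 7 and sp == 9

# pop to AC:  mar := sp; rd; sp := sp + 1; ac := mbr
mar = sp
mbr = mem[mar]             # rd
sp = sp + 1
ac = mbr
print(sp, ac)              # 10 7 -- AC restored, stack pointer back
```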

Q2 How Information Technology can be used for strategic advantages in business?

Ans – Globalisation- IT has not only brought the world closer, but it has allowed the world’s economy to become a single interdependent system. We not only share information quickly and efficiently, but also bring down barriers of linguistic and geographic boundaries. The world has developed into a global village with the help of information technology, allowing countries like Chile and Japan, separated not only by distance but also by language, to share ideas and information with each other.

Communication- With the help of information technology, communication has also become cheaper, quicker, and more efficient. We can now communicate with anyone around the globe by simply text messaging them or sending them an email, for an almost instantaneous response. The internet has also opened up direct, face-to-face communication between different parts of the world thanks to the help of video conferencing.

Cost effectiveness- Information technology has helped to computerize the business process, thus streamlining businesses to make them extremely cost-effective money-making machines. This in turn increases productivity, which ultimately gives rise to profits and means better pay and less strenuous working conditions.

More time – IT has made it possible for businesses to be open 24×7 all over the globe. This means that a business can be open anytime, anywhere, making purchases from different countries easier and more convenient. It also means that you can have your goods delivered right to your doorstep without having to move a single muscle.

Q3. What Characteristics of software make it different from other engineering products?

Ans :- Characteristics of software products :-

Software products may be

Generic – developed to be sold to a range of different customers.

Custom – developed for a single customer according to their specification.

Q4. What are different addressing modes available?

Sol :- Various Addressing Modes are :-

(1) Immediate Addressing Mode :-

Immediate addressing is used to load constants into registers and to use constants as operands.
The constant is part of the instruction word
e.g. ADD 5
Add 5 to contents of accumulator
5 is operand
Limited range

(2) Direct Addressing Mode :-

With direct addressing the address is part of the instruction
Usually the OpCode is one word and address is the succeeding word or words. Effective address (EA) = address field (A)
e.g. ADD A
Add contents of cell A to accumulator
Look in memory at address A for operand
Single memory reference to access data
No additional calculations to work out effective address
Limited address space

(3) Indirect Addressing Mode :-

Memory cell pointed to by the address field contains the address of (a pointer to) the operand
EA = (A)

Look in A, find address (A) and look there for operand

e.g. ADD (A)

Add contents of cell pointed to by contents of A to accumulator

(4) Register Direct Addressing Mode :-

Limited number of registers
Very small address field needed
Shorter instructions
Faster instruction fetch
No memory access
Very fast execution
Very limited address space
Multiple registers helps performance
Requires good assembly programming or compiler writing

(5) Register Indirect Addressing Mode :-

The instruction specifies a register which contains the address of the operand

MOVE #$1000,R7 ;R7 = $1000

As there are usually only a small number of internal registers, the register address is easily contained in the instruction word.
It is efficient and very useful for working with arrays and pointers. The address field names a register whose contents are the address of the operand: EA = (R).
If an array of numbers is stored at $1000, it can be accessed in sequence by adding 1 to the register after each access.
Operand is in the memory cell pointed to by the contents of register R
Large address space (2^n)
One fewer memory access than indirect addressing

(6) Displacement (Indexed) Addressing Mode :-

EA = A + (R)
The address field holds two values:

A = base value

R = register that holds the displacement

or vice versa
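
A toy interpreter makes the differences between the modes concrete (memory contents, register names and addresses are made-up illustration values):

```python
# Sketch of how the effective address (EA) differs across the modes above,
# using a toy memory and register file.

mem = {100: 5, 200: 7, 300: 200}
reg = {"R1": 200, "R2": 50}

def operand(mode, field):
    if mode == "immediate":          # operand is the field itself
        return field
    if mode == "direct":             # EA = A
        return mem[field]
    if mode == "indirect":           # EA = (A)
        return mem[mem[field]]
    if mode == "register":           # operand held in the named register
        return reg[field]
    if mode == "register_indirect":  # EA = (R)
        return mem[reg[field]]
    if mode == "displacement":       # EA = A + (R)
        base, r = field
        return mem[base + reg[r]]
    raise ValueError(mode)

print(operand("immediate", 5))              # 5
print(operand("direct", 100))               # 5
print(operand("indirect", 300))             # mem[mem[300]] = mem[200] = 7
print(operand("register", "R1"))            # 200
print(operand("register_indirect", "R1"))   # mem[200] = 7
print(operand("displacement", (50, "R2")))  # mem[50 + 50] = 5
```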

Q5 How will you differentiate between Arrays and Stacks? Explain by giving an example.

Ans- An array can be defined as a finite collection of homogeneous elements, any of which can be accessed directly by index. A stack is a data structure in which all insertions and deletions are done at the same end, called the top. It is often called a last-in, first-out (LIFO) data structure.
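
The contrast in code:

```python
# An array allows access at any index; a stack restricts insertion and
# deletion to the top (LIFO).

arr = [10, 20, 30]
assert arr[1] == 20            # array: random access by index

stack = []
stack.append(10)               # push
stack.append(20)
stack.append(30)
assert stack.pop() == 30       # pop returns the LAST element pushed
assert stack.pop() == 20       # ...last in, first out
```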

Q6. How does a translator differ from a compiler?

Ans :- Translator- a program that converts code from one language to another without changing its meaning. Compilers, interpreters and assemblers are all translators.

Compiler :- a specific kind of translator that reads the entire source program and converts it to object code. It reports errors for the entire program at once, not one line at a time. Compilers are preferred when the program is large, because the compiled object program executes quickly; distributing only the object code also provides security for the source.

Q7 Out of Linear and Binary Search, which one is preferred where and why?

Ans- In linear search, we access each element of the array one by one, sequentially. In binary search, we find the element in a minimum number of steps, but the elements have to be in sorted form.

Binary search is preferred over linear search for large sorted arrays because it takes O(log n) comparisons where linear search takes O(n); linear search is preferred when the array is unsorted or very small.
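
A quick comparison that counts the comparisons each search performs:

```python
# Linear search scans sequentially (O(n)); binary search halves the range
# each step (O(log n)) but requires sorted input.  Each function returns the
# number of comparisons used, or -1 if the key is absent.

def linear_search(a, x):
    for steps, v in enumerate(a, 1):
        if v == x:
            return steps
    return -1

def binary_search(a, x):
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == x:
            return steps
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

a = list(range(1, 1025))       # sorted array of 1024 elements
print(linear_search(a, 1000))  # 1000 comparisons
print(binary_search(a, 1000))  # at most 11 comparisons
```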

Evaluating The Effectiveness Of Iris Recognition And Afis Technology Essay

Introduction

Biometric scanning technology is a great revolution for contemporary society. Many applications have been built on biometric scanning technologies and can be used in many different ways; some of these applications are complex. This report evaluates the effectiveness of iris recognition and AFIS for controlling access to a secure workplace.

Background

Human beings have used biometric information for a long time: for example, fingerprints for signatures, or identifying someone by their gait. By the middle of the nineteenth century, many features of the human body were being used to identify criminals (Jain and Ross, 2004). With the progress of human society, a practical, modern biometric scanning system is becoming more and more important.

Today there are many security problems, such as controlling access to a secure workplace and protecting privacy and data security, that can be solved by biometric scanning systems. Many practical biometric applications already exist, so evaluating their effectiveness and finding out which is preferable is a necessary and meaningful exercise.

Definition

A biometric scanning system is a kind of technology that uses features of the human body. Jain and Ross (2004) describe a biometric scanning system as one that establishes the identity of a person based on physical or behavioural features. Physical attributes are static features such as fingerprint, face, retina, iris, vein pattern, and hand and finger geometry. Behavioural attributes are dynamic features such as voice, gait and signature.

Outline

This report discusses the use of biometric scanning systems to control access to a secure workplace from two aspects: iris recognition and automatic fingerprint identification scanning. Both aspects are examined critically, in order to show both the positives and the negatives. Hopefully, the outcome of this work can serve as a guide for those who wish to select a biometric scanning system to protect access to a secure workplace.

The AFIS

In fact, controlling access to a secure workplace is a category of identification and authentication, and there are many methods that can achieve this objective; AFIS is one of them. This section first defines AFIS, then looks at the uniqueness property of fingerprint identification, then at its reliability, convenience and availability, and lastly notes that fingerprint identification can be affordable.

What is the AFIS?

AFIS is an abbreviation of Automatic Fingerprint Identification Scanning. It is a kind of biometric system that uses people’s fingerprints for identification. An AFIS may use a database to store data including fingerprint images and detailed features of the fingerprints of all ten fingers. It can be a tool for identification and authentication of large population groups: the system searches the database to match the correct person. Maltoni and Cappelli (2008) describe it as using a computer to process the fingerprint data entered into it, achieving identification and authentication through a series of actions such as sorting, locating, analysing and comparing.

The uniqueness property

Automatic fingerprint identification scanning has a uniqueness property because it is based on the fingerprint, a physical attribute of the human body that was used even in ancient times. Using a fingerprint for identification and authentication probably goes back more than a hundred years (Jain and Ross, 2004). At present it is used in many areas for identification and authentication because it is a simple, unique method: for example, door locks, car locks, computer access, internet access, attendance recording and bank account access. Many further uses of this method for identification and authentication can be found.

Some still do not think the uniqueness property is a good basis for identification and authentication, because of the possibility of copying. When people use a fingerprint for identification and authentication, the finger must touch the sensor, leaving a residual fingerprint on the panel of the fingerprint machine which can be copied easily. If this is done by someone with ulterior motives, the individual’s information can be used illegally, which is very dangerous for personal information (Maltoni and Cappelli, 2008).

However, although making a copy from someone’s residual fingerprint is feasible, such a copy, a so-called artificial fingerprint, is easy to detect. Compared with a real fingerprint taken directly from the body, the copy is unclear, incomplete in shape, dull and one-dimensional. Therefore, users do not need to worry too much about this case.

Reliability, convenience and availability

Automatic fingerprint identification scanning is reliable, convenient and available because fingerprint identification and authentication is a mature biometric technology (Komarinski, 2005). Firstly, the fingerprint is reliable because, despite being only a small part of the human body, it contains a great deal of information and persists for a person’s whole life. Secondly, it is a convenient way to provide the information used for identification and authentication: a fingerprint cannot be changed, it can be used at any time, and people do not need to worry about forgetting things like cards or keys.

The reliability of fingerprint matching could be a problem. Automatic fingerprint identification scanning uses a computer to compare and match fingerprints, and this comparison may refer only to simple information about the fingerprint, such as its shape. Meanwhile, the performance of the computer affects the reliability of the matching as well, so the results might not be completely accurate (Maltoni & Cappelli, 2008).

In fact, this is not a drawback of fingerprint identification itself, because the issue is caused by the performance of the computer. It merely demonstrates that the information contained in a fingerprint is extremely large, enough to keep even a computer busy; that is an advantage rather than a drawback. Computing technology is upgrading rapidly, so there is little need to worry: the reliability of the matching algorithms will improve.

Smaller Equipments and Cost Effective

Fingerprint identification scanning needs a fingerprint identification machine, which is a fairly small piece of equipment. Generally speaking, compared with its effectiveness, the price of this kind of equipment is not too high; it is affordable for organizations and even for individuals. In addition, Maltoni and Cappelli (2008) note that there are many types of fingerprint identification machine on the market for different situations. It is much like a common appliance, such as a microwave oven, TV or computer. Because it is a machine, there can be occasional faults, but that is not a fault of fingerprint identification itself, and it should not be used as an excuse to claim that the disadvantages of fingerprint identification outweigh the advantages.

Iris Recognition

Iris recognition is another method of controlling access to a secure workplace. This section first defines iris recognition, then discusses its higher reliability, and lastly notes that iris recognition is difficult to hoodwink.

What Is Iris Recognition?

As in the earlier definition of biometrics, the iris is a static human physical attribute. The iris, an externally visible coloured tissue, is an internal component of the eye. Each iris contains a unique pattern with many features such as filaments, spots, structure, concave points, rays, wrinkles and stripes. Patel (2008) claims that the iris can be used for biometric identification and authentication: its key features are highly complex and unique, and no two irises are the same. Iris recognition equipment includes a fully automatic camera that looks for the user’s eyes; when the camera finds the iris, it focuses and takes a high-quality image of it.

Higher Reliability

The most important feature of iris recognition is that it is a relatively stable and highly reliable method of controlling access to a secure workplace. Firstly, the iris is highly unique and located inside the eye, containing abundant information. Secondly, iris recognition is highly stable, because the appearance of the iris is difficult to change once formed. Thirdly, recognition accuracy is high: Shoniregun and Stephen (2008) argue that the correct-recognition rate of iris recognition is relatively high compared with other biometric solutions. Lastly, it is a quick biometric security scanning system: in most situations it needs only about one second per person, much less than fingerprint identification scanning. Given these features, there is a strong possibility that iris recognition could be the better way of controlling access to a secure workplace.

Every biometric scanning technology has its drawbacks, and iris recognition, as a relatively new technology, is no exception. It is costly to use, because the technology is relatively new and probably not as mature as fingerprint identification. Why is iris recognition costly? The main reason is that it needs an extremely high-quality camera lens, and this required core component is very expensive. Furthermore, perhaps the most important drawback is that very dark eyes are difficult to scan and read; doing so requires a good-quality light source.

However, an essential point must not be forgotten: security is the most important consideration when using a biometric scanning system to control access to a secure workplace. Compared with other biometric scanning systems, iris recognition is a more secure, stable, reliable, convenient and fast way to protect a secure workplace. As the technology develops, the cost should fall, and users will benefit even more from iris recognition.

Difficult to hoodwink

Using iris recognition can prevent hoodwinking, because the iris is a specific part of the eye and cannot be touched. Vacca (2007) notes that when it is used for identification and authentication there is no physical contact at all, which is a very important feature for protecting individual biometric information; in this respect iris recognition is better than automatic fingerprint identification scanning. If someone wanted to change the appearance of their iris, they would need a very delicate operation, with a serious risk to their sight.

Conclusion

To sum up, the advantages of using a biometric scanning system outweigh the disadvantages. It is no exaggeration to say that the benefits of automatic fingerprint identification scanning are very attractive for individuals and organizations (Patel, 2008; Shoniregun and Stephen, 2008). This report has critically examined two biometric scanning methods: automatic fingerprint identification scanning and iris recognition. Although both methods have some possible issues, the advantages are clearly the main part. Firstly, for AFIS, the report discussed its uniqueness, reliability, convenience, availability and cost effectiveness. Secondly, regarding iris recognition, it discussed the higher reliability and the difficulty of hoodwinking it. As a suggestion, where security requirements are high, for example at airports or immigration checks, iris recognition should be used; for common uses such as entering an office, classroom or computer room, automatic fingerprint identification scanning is adequate.

(Words count: 1928)