Cell Phone Industry

INTRODUCTION

The purpose of this study is to describe an innovation and evaluate its benefits within a particular industry. The subject of my assignment is the company Research In Motion (RIM) and RIM's product, the Blackberry, in the cell phone industry. Every industry needs to create new products or improve existing ones to satisfy its customers; for this reason, innovation is a central issue in today's competitive business world.

RIM's innovation was the wireless e-mail system, and on this strength Blackberries are among the best smartphones in the cell phone sector. This assignment also makes some estimates about the Blackberry and its position in that sector.

THE HISTORY OF CELL PHONES INDUSTRY

In past centuries, today's communication technology was only a dream. Cell phones, internet chat and e-mail would have seemed extraordinary machines or systems. With the development of communication technology, however, all of these dreams have become real. (M. Woods – 2005)

During the 1940s, radio technology was developed, and this constituted the first step in the history of cell phones. The beginnings of the cell phone were based on radio communication, which was used especially in taxi cabs, police cars and other vehicles to provide two-way communication between vehicles or with a central base. Individual radios also helped to advance cell phone technology by patching into a phone line via a live operator to place a call. The Swedish police used the first official mobile phone in 1946. The new technology was linked to the telephone network and was very different from a two-way radio system, but it was not efficient: only six phone calls could be made before the battery was exhausted.

Modern cellular technology dates from the proposal of hexagonal cells in 1947. The development of cell phone technology naturally followed wider improvements in electronics, and the first electronic cell phones were developed during the 1960s. The problem with these phones was that the user had to stay within one cell area, because the base stations serving each cell were unable to hand off calls from one station to another: you could make a phone call, but you could not continue it once you moved beyond a set range. This problem was solved by A. E. Joel in 1970; his call handoff system allowed the user to carry a call from one place to another without the connection being dropped. Over the following decades cell phone technology continued to improve, and cell phones can be classified into three groups: first, second and third generation.

First Generation Cell Phones: In 1983, the first portable cell phone, the Motorola DynaTAC 8000X, was presented to the world by Motorola. Its research and development cost over 100 million dollars and took about 15 years to bring to market. It looked like a brick and weighed about 28 ounces. Through the beginning of the 1990s the popularity of cell phones increased thanks to innovations in cellular networks; however, because of its size, the most common use was as a car phone.

Second Generation Cell Phones: During the 1990s, with new systems such as GSM, IS-136 and IS-95, cell phones moved into the second generation (2G). The new digital mobile phones came into use in the United States in 1990 and in Europe by 1991. 2G phones acquired the network signal faster, call quality was better, and the number of dropped calls fell thanks to the digital system. 2G phones were also markedly smaller than the "brick" phones of the previous decade, typically in the range of 100 to 200 grams, and most of them did not need large batteries. These innovations brought a remarkable increase in the number of cell phone users.

Third Generation Cell Phones: The third generation is commonly known as 3G, and these are the phones available today. 3G phones appeared only a few years after 2G. Because of the many innovations in technology and services, 3G standards usually differ depending on the network. Today's cell phones are not used only for text messages and contact books: new phones include GPS, Wi-Fi connectivity and motion sensors, and they can receive e-mail, connect to computers and support video calls. According to Andy Jones, head of information security research at British Telecommunications, "Mobile phones are becoming a bigger part of our lives".

“RESEARCH IN MOTION” AND BLACKBERRY

The History of Research in Motion: In 1984, a pair of engineering students, one each from the Universities of Waterloo and Windsor, founded Research in Motion. RIM's technology created the now ubiquitous Blackberry mobile communication tool. RIM was the first wireless data technology developer in North America and created several wireless products, including wireless point-of-sale devices, radio modems, and the first two-way messaging pager. In 1998, RIM produced a small portable wireless handheld called the RIM 950, which handled e-mail, contacts and calendaring with a built-in QWERTY keyboard. The first two-way Blackberry was produced in 1999, but push e-mail and SMS only became available worldwide in 2002 with the 5810. Businesses soon saw the power of the QWERTY-keypad handsets as an office tool and have not looked back since, with the first colour models arriving in 2003 and the first Wi-Fi models in 2004.

Today the Blackberry is one of the leading cell phones in the market, and its popularity has been increasing day by day. Blackberry users gain significant advantages, especially business people who need an advanced phone to communicate with their office while away from it. The Blackberry earned the confidence of many professionals by answering their needs. The business world is not the only market in which the Blackberry is sold: many people are interested in using a smartphone to connect to the internet, use GPS, and take and share photos.

INNOVATION

According to management expert Peter Drucker, innovation is change that creates a new dimension of performance. Creativity is one of the most important ingredients of innovation. The foundation of innovation is new ideas and new criteria, as well as personal achievement; the source of creativity is the statement of new ideas and new solutions. In addition, an innovation needs acceptance from others to be successful, and the idea behind it influences other people's ideas. Every industry needs continual innovation to extend its reach and meet its targets. Customers always have expectations for the future, and organizations want to answer those demands with new innovations. Recognizing and evaluating new opportunities plays a significant role in making an innovation successful. Creativity involves transformation: a dramatic change in the form, structure, process, appearance or character of a person, a product, or an environment. In other words, creativity involves making the change from the present situation to a future one for greater returns. From creativity, new life begins in the business world. (Marci Segal – 2003)

The results of innovation are hard to measure and complicated to understand, since innovation has many dimensions covering a diversity of activities. The main point of an innovation is that it should be new. It may be a new product, a new production process, a substitution of a cheaper material, an improvement to an existing product, or something that opens the way to further innovations. The process of turning an innovation into technological and economic gains is also complex. (R. Landau, N. Rosenberg – 1986)

INNOVATION OF BLACKBERRY

The Blackberry raises the question of whether it is a radical innovation or a step-change innovation. Although e-mails had long been sent by computer, it is significant that the Blackberry provides the opportunity to send and receive e-mails while on the move or out of the office. The Blackberry is also very small; it is precisely a smartphone, and it offers important advantages to users. As a result, the Blackberry is a radical innovation for the corporate executives, government officials and emergency-driven professionals for whom the way of doing business has changed.

Jim Balsillie, RIM's co-CEO, has stated an opinion about the importance of timing. If the Blackberry had been introduced earlier, it might not have been very popular, because e-mail was not yet in common use. "It was the right time for us to do that because the offering and the market opportunity and the value proposition and uniqueness stood on its own merit. We did it at the time and we certainly have no regrets. It appears, in hindsight, to have been a very wise strategy," he says.

The benefits of the Blackberry compared with the laptop and the mobile phone are a little more complicated. The Blackberry sits between the laptop and the mobile phone; in other words, its value lies in the middle of these two devices. However, the security of data, which can be erased directly if the Blackberry is lost, and its handling of sophisticated images go beyond both the laptop and the mobile phone. Moreover, time is very important in today's competitive business world, and the Blackberry helps people use it more efficiently by letting them check e-mails and complete transactions out of the office without a computer. (Swastik Nigam – 2007)

It is certain that the Blackberry is one of the most powerful devices bringing wireless technology into daily use. A person carrying this remarkable machine need never be out of contact. Research In Motion (RIM), the producer of the Blackberry, has become a market phenomenon since its first device, called the Interactive Pager, hit the market in the summer of 1998.

THE REPORT OF RIM AND FUTURE EXPECTATIONS

As mentioned before, the first RIM product hit the market in 1998, and a succession of Blackberry models followed. In a short time the Blackberry became popular and RIM's sales increased sharply. RIM first sold the Blackberry in the USA, and after a period it spread all over the world.

Worldwide smartphone sales to end users by vendor (Gartner; thousands of units):

Company               1Q 2009 Sales   1Q 2009 Share (%)   1Q 2008 Sales   1Q 2008 Share (%)
Nokia                 14,911.2        42.1                14,588.6        45.1
Research in Motion     7,233.6        19.9                 4,311.8        13.3
Apple                  3,938.8        10.8                 1,725.3         5.3
HTC                    1,957.3         5.4                 1,276.9         4.0
Fujitsu                1,387.0         3.8                 1,317.5         4.1
Others                 6,896.4        18.8                 9,094.8        28.1

The table above indicates that smartphone sales increased from the first quarter of 2008 to the first quarter of 2009. Gartner analysts highlight the notable success of Research in Motion and Apple. "Much of the smart phone growth during the first quarter of 2009 was driven by touch screen products, both in midtier and high-end devices," said Roberta Cozza, principal analyst at Gartner, based in Egham, UK. The touch screen is not the only reason for this increase: with the technological development of the smartphone industry, people all over the world want mobile e-mail, music services and internet access from their phones. (Stamford, Conn., May 20, 2009)

Comparison with Apple: The major competitor of Research in Motion is Apple's iPhone. The latest Blackberry smartphone, the Blackberry Storm, has a higher production cost than the iPhone 3G: according to iSuppli, a research firm in the electronics market, one Blackberry Storm unit costs $202.89 to build, while an iPhone costs $174.33.

Despite the cost difference and the popularity of Apple's best-selling device, RIM still tops the league of smartphone sales. RIM said that it shipped 4.4 million Blackberry handsets in the fourth quarter alone, bumping up total numbers to about 14 million for the 2008 fiscal year, and more than doubling sales of 6.4 million for fiscal 2007. By comparison, Apple has said that it sold 2.3 million iPhones in the three months to December 29, and a total of about 4 million in the six months since the device's US launch. The Blackberry accounted for 41 per cent of all smartphones sold in the US in the fourth quarter, compared with the iPhone's 28 per cent share, according to the Reading-based researcher Canalys. With this successful year behind it, RIM has recently begun targeting leadership in the consumer market. RIM's chief executive Jim Balsillie added that, despite the global credit crunch, the company aims to increase revenue and make more profit from Blackberry sales. (Times Online, April 03, 2008)

Blackberry producer Research in Motion declared that it beat analysts' expectations in the fourth quarter of 2009. RIM made a $518.3 million profit in the fourth quarter, a 26 percent increase from the same period the previous year, and revenue for the quarter was $3.46bn, up 84 percent year on year. According to RIM co-chief executive Jim Balsillie, the company is very pleased with the success of Blackberry sales in the quarter. (Tom Young, Computing, 03 April 2009)

Future Plans of Research in Motion: RIM also has plans to improve its products and reach its targets. The company wants to add new capabilities to the Blackberry, including Java applications and Adobe Reader, to give customers a better smartphone, especially for the internet. With these applications, Blackberry users could access all kinds of information easily, and data formats would no longer be a problem for Blackberry owners. The most significant development that may occur in the future is a deal RIM is considering with T-Mobile to provide customers with better service. Despite the credit crunch, RIM's co-chief executive says the company will try not to lose its market position and will continue improving its products.

CONCLUSION

Today, Research In Motion Limited's Blackberry is the leading product for wireless e-mail on the smartphone. The Blackberry is sold across North America and Europe and is now taking root in the Asia Pacific region. The Blackberry is a major innovation for the cell phone industry, giving customers the easiest way to reach their e-mail wirelessly.

As far as I am concerned, innovation should include new ideas and creativity. Although wireless technology was already being used by computers, RIM's Blackberry brought wireless e-mail to the smartphone for the first time. Now, every smartphone user buying a new phone expects it to connect to the internet and to let them check their e-mail.

RIM was founded in 1984 by two engineering students. The firm started to produce the Blackberry in 1998 and quickly became popular in the smartphone market. RIM's Blackberry has competitors in today's market, such as Nokia and the iPhone, but the company's target is to become the leader in this sector.

These days, RIM is looking for new innovations to improve the Blackberry. All industries need to develop new innovations to answer customers' expectations. As technology improves, products keep getting better, and if an organization wants to reach the top or increase its sales, it must innovate.

References

1. Michael Woods. “The History Of Communication”, Lerner Publication, First Edition – 2005

2. http://www.tech-faq.com/history-of-cell-phones.shtml

3. http://www.newscientist.com/article/mg20427301.100-the-pocket-spy-will-your-smartphone-rat-you-out.html?full=true

4. http://www.brighthub.com/office/collaboration/articles/8041.aspx

5. http://experts.thelink.co.uk/2008/12/19/a-brief-history-of-blackberry/

6. Kathryn Vercillo. “History of Major Blackberry Models from RIM 96”, available at: http://hubpages.com/hub/History-of-Major-Blackberry-Models-from-RIM

7. Marci Segal. “Quick Guide to the Four Temperaments and Creativity: A Psychological Understanding of Innovation”, Telos Publication, 2003.

8. Ralph Landau, Nathan Rosenberg, National Academy of Engineering. “The Positive sum strategy: harnessing technology for economic growth”, Conference publication, 1986.

9. Swastik Nigam, “Understanding New Product Innovation: Research in Motion’s Blackberry” ,available at: http://insightory.com/view/231//understanding_new_product_innovation:_research_in_motion%27s_blackberry

10. http://www.businessweek.com/innovate/content/apr2008/id2008044_416784.htm

11. Tom Young Computing, “Blackberry maker sees strong profit growth”, 03 April 2009, available at: http://www.computing.co.uk/computing/news/2239782/blackberry-maker-sees-strong

12. Stamford, Conn., "Gartner Says Worldwide Mobile Phone Sales Declined 8.6 Per Cent and Smartphones Grew 12.7 Per Cent in First Quarter of 2009", available at: http://www.gartner.com/it/page.jsp?id=985912

13. http://laptopcom.blogspot.com/2009/01/apple-iphones-are-less-costly-to.htm

14. http://business.timesonline.co.uk/tol/business/industry_sectors/telecoms/article3676963.ece

Autocad Vs Microstation: Summary and Evaluation

Patricia Ferreras

Table of Contents

What is CAD?

AutoCAD and MicroStation History

Research

2D Design Features

3D Design Features

Interoperability

Conclusion

Bibliography

What is CAD?

CAD is an abbreviation of Computer Aided Design, and refers to software used to create detailed, precise drawings and technical illustrations. CAD software is capable of creating two-dimensional (2D) or three-dimensional (3D) models. (WhatIs.com, 2011)

AutoCAD and MicroStation History:

AutoCAD was introduced in 1982 as a desktop application. Since 2010, it has evolved into a mobile web and cloud based application, currently marketed as AutoCAD 360. (Wikipedia, 2014)

MicroStation was introduced early in 1987 with the capacity to write to design files with the extension ".DGN". In its early days it had simple modification abilities, and it was capable of displaying each element in its intermediate states during placement. MicroStation V8i (SELECTseries 2), released in July 2010, added integrated point cloud support. (Bentley, 2014)

Research

The scope of my research is to compare the two leading software packages in design, both of which are used by a wide range of professionals, mainly in the fields of Engineering, Architecture and Industrial Design.

Different people have different needs and preferences, but I want to keep my research as objective as possible; that is why I am going to focus on three key aspects of the software: 2D design, 3D design and interoperability.

2D Design Features:

The comparison is based on the latest versions of each program and on the functions and tools each provides for managing 2D designs. (Chief, 2012)

AutoCAD 2013

In 2D design, what makes users happy are the features and tools that AutoCAD provides, such as:

A Sketch tool which allows 2D drafts to be drawn intuitively
A customizable tool palette, color palette and command log
Tape Measure, Text Box and Snap to Grid functionality
Generation of 2D drawings from 3D models

MicroStation V8i

Even though MicroStation provides many useful tools, it lacks some key features for 2D design. Its available features include:

An advanced sketch tool
A color palette, tool palettes and command logs, all of which are customizable
Generation of 2D drawings from 3D models

3D Design Features:

These include simulation, 3D modeling and animation using the capabilities provided by the CAD software. (Chief, 2012) (enggcyclopedia, 2012)

AutoCAD 2013

AutoCAD 2013 includes the following features and tools for 3D modeling, rendering and animation:

Parametric modeling tools
Material changes can be viewed in real time as they occur
3D models can be extruded from 2D drawings
Photorealistic models can be created
Basic animation projects are enabled through its animation features
AutoCAD is recognized as a business-oriented design tool and is regarded as following industry standards.

MicroStation V8i

MicroStation provides a more thorough platform for 3D modeling and animation, and in some respects it is more advanced than AutoCAD:

Parametric modeling tools and features
Real-time modifications can be previewed as they are implemented
MicroStation can "extrude" 3D models from 2D drawings with 3D geometric surfaces
Photorealistic models can be achieved
3D printing is supported
MicroStation claims to respect its users by providing them with a CAD environment built to cater for all their needs.

Interoperability:

This factor relates to collaboration functionality with other CAD Applications, the ability for more than one designer to work simultaneously on one platform, cloud features, and a number of other functions. (Chief, 2012) (Alvarez, 2006)

AutoCAD 2013

AutoCAD 2013’s full-featured user-interface and interoperability features include the following:

Support for readable and writable file formats such as DWG, PDF, 3DS and DWF
The Autodesk cloud feature, Autodesk 360, allows designers using the CAD workspace to work away from the office. It provides each user with roughly 3 GB of space, and this figure can be increased to cater for file sharing.
Integration of Google mapping
Users felt that the redesign introduced with Land Development was very different from previous applications such as CivilCAD and Softdesk.

MicroStation V8i

MicroStation offers full integration with other Bentley applications and with external applications developed in the MicroStation environment; it wins the interoperability contest. Its main features include the following:

Support for readable and writable file formats including SketchUp, DWF, Revit, PDF and DWG
An AutoCAD interoperability function which allows designers to work with all of Autodesk's CAD versions
Geo-location support, which provides designers with coordinates for real-life buildings, allows integration of Google Maps, and allows sharing and networking between designers
Protection of designers' work from intellectual theft via a digital signature feature
Users find the different MicroStation applications quite easy to use, since their designs are very similar.

Conclusion

Which one is better? Well, from a personal point of view, AutoCAD is a clear winner when it comes to 2D design. This is possibly because it was the first CAD software that I learned how to use, and the one that I have used most often. But I am not the only one who thinks this: many professionals who use AutoCAD agree that "AutoCAD still trumps MicroStation with its advanced 2D Drafting capabilities" (Chief, 2012).

When it comes to 3D support, I have to give this to MicroStation. I lost count of how many coffees I had while the computer was rendering a 3D design; it takes practically forever, so the best thing to do was to let the machine do the magic and take a break. (Alvarez, 2006)

In conclusion, both CAD tools provide features which classify them as advanced drafting tools. These tools can be used by CAD designers to draw and design both 2D and 3D Designs, independent of their complexity. (Prakoso, 2011)

In summary, I would regard AutoCAD as a better drafting tool, but MicroStation as a better CAD platform.

Bibliography

History of MicroStation – MicroStation Wiki – Be Communities by Bentley. 2014. [ONLINE] Available at: http://communities.bentley.com/products/microstation/w/microstation__wiki/3164.history-of-microstation.aspx. [Accessed 10 March 2014].

Autodesk – Company. 2014. [ONLINE] Available at: http://usa.autodesk.com/company/. [Accessed 10 March 2014].

AutoCAD – Wikipedia, the free encyclopedia. 2014. [ONLINE] Available at: http://en.wikipedia.org/wiki/AutoCAD. [Accessed 15 March 2014].

BE Magazine En Espanol – Volume 1, Issue 12. 2014. [ONLINE] Available at: http://www.nxtbook.com/fx/books/bemagazine/vol1issue1spanmexico/index.php?startpage=12. [Accessed 15 March 2014].

MicroStation® vs. AutoCAD® – which is better. 2014. [ONLINE] Available at: http://www.indiacadworks.com/blog/microstation-vs-autocad-comparing-features/. [Accessed 15 March 2014].

What's the Difference Between AutoCAD and Other 3D programs? 2014. [ONLINE] Available at: http://animation.about.com/od/faqs/f/Whats-The-Difference-Between-Autocad-And-Other-3d-Programs.htm. [Accessed 15 March 2014].

Microstation or Revit... what to choose? | Forum | Archinect. 2014. [ONLINE] Available at: http://archinect.com/forum/thread/96142/microstation-or-revit-what-to-choose. [Accessed 18 March 2014].

AutoCAD versus MicroStation, which one is the best? | CAD Notes. 2014. [ONLINE] Available at: http://www.cad-notes.com/autocad-versus-microstation-which-one-is-the-best/. [Accessed 21 March 2014].

AutoCAD vs MicroStation V8i. 2014. [ONLINE] Available at: http://cad-software.findthebest.com/compare/5-19/AutoCAD-vs-MicroStation-V8i. [Accessed 21 March 2014].

MIS Related Issues

ASOS.com & MIS RELATED ISSUES

"Despite the spectacular dot-com bust a few years ago, the Internet has markedly changed the way we do business." (Reynolds, 2004, 78)

Conducting business in the digital economy refers to the use of Web-based systems, on the Internet and other electronic networks, to accomplish some form of e-commerce. Networked computing is helping some companies excel and is helping others simply to survive. Generally, the collection of computing systems used by an organization is defined as Information Technology (IT). In the developed countries, almost all medium and large organizations use information technologies, including e-commerce, to support their operations. IT, in its narrow definition, refers to the technological side of an Information System (IS); it includes the hardware, software, databases, networks, etc. An IS collects, processes, stores, analyzes, and disseminates information for a specific purpose. It processes the inputs (data, instructions) by using IT and produces outputs (reports, calculations). It includes people, procedures, facilities and it operates within an environment.

MIS refers to the management of ISs, and it raises many concerns, including global issues, e-commerce, software system choices, and ethical, social and operational strategy issues. (Turban et al, 2001)

The aim of this paper is to examine five selected MIS topics and analyze specifically one or two of the occurring issues of each topic; then, connect these issues to the ASOS.com enterprise.

The paper begins with a brief observation of the background of ASOS.com. Initially, the selected topics and issues, derived from relevant literature, will be described. Next, a discussion of whether each issue is a problem for ASOS.com or not is presented. Furthermore, a reflective paragraph follows. The paper ends with the conclusion section and the references.

ASOS.com

Asos.com is the UK's largest online-only fashion and beauty store, attracting over one million visitors a week. Its name stands for 'As Seen On Screen' and was chosen to signal the brand's intention to supply the public with outfits similar to the styles of celebrities. Asos.com targets 16 to 34 year olds, offering women's fashion, menswear, accessories, jewelry and beauty products. It provides potential customers browsing its content with a number of unique features, for example individual catwalk model videos of most clothing items on the site and a fashion blog that is frequently updated with articles relating to celebrity and entertainment.

Asos.com is greatly admired for its large variety of fashion and beauty goods and for the speed at which it keeps up with the latest fashion trends. (http://www.asos.com/)

Asos.com's headquarters are located in Camden Town, in North London. The company was launched by Nick Robertson in 2000. Since then, the online company has seen significant growth. Over the Christmas season of 2008 it reported a 100% increase in sales, and for the financial year ending 31 March 2009 it reported revenue of £165,395,000.

Asos.com is being run by a board of three directors and two non-executive directors.

It is a PLC, quoted on the AIM (Alternative Investment Market) part of the London Stock Exchange. AIM is not as strict in its rules as the main market; it therefore, helps smaller companies to raise capital through the sale of shares.

Asos.com chose to use the web channel, since research has shown that online sales have been increasing faster than any other sector. It has targeted young people, for these represent around 60% of online shoppers. To attract them, it offers a diverse range of brands and products. Above all, it offers a pleasurable shopping experience by ensuring that the website provides much more than a customer would expect from a shop. The site also provides more choices, competitive prices, new styles and, above all, convenience.

While growing, Asos.com has developed a more complex structure. It has worked hard to keep up with changes in technology. The website is being kept up-to-date by constantly adding new products and product lines.

Moreover, Asos.com uses other communication channels to drive growth. These include a monthly magazine of 116 pages and an e-mail newsletter that is being sent to 1.8 million users each week. In addition, it distributes PR pieces in other publications and encourages word-of-mouth recommendation. (http://www.asos.com/)

GLOBAL MIS/RISKS
Designing websites for a global audience

The design of successful websites that present information about products and services is a relatively recent occupation, which has introduced new issues and challenges for designers.

It is well known that some websites seem to be more effective than others: they are visited more often and more purchases occur. Relatively little is reported in the literature on web design and on evaluating the factors that make a successful website. However, many studies have been carried out to empirically test the features, often mentioned by trade journals and vendors, that are regarded as critical to designing a successful website.

The roles of a website are many and important; among them are marketing research, marketing tool, public relations machine and means of payment. Websites are basically an interface between the customer and the firm.

Websites are designed to help organizations carry out business activities using Internet technology. On its website, each organization promotes and sells its products or services, provides catalogs and technical support, and obtains useful feedback from its consumers. (Udo and Marquis, 2001/2)

According to Udo and Marquis, there are eight factors that contribute to the design of an effective web site:

* Download Time (response time)

* Navigation

* Graphics Usage

* Interactivity

* Cohesion

* Consistency

* Use of Frames

* Amount of Advertisements

(Udo and Marquis, 2001/2)

According to Tilson, principles such as simplicity, satisfaction (feedback) and versatility (flexibility) are also very important in designing e-commerce sites.

In fact, Tilson describes eight factors, with which the designer achieves the following:

* Simplicity: doesn’t compromise usability for function.

* Support: user is in control with proactive assistance.

* Obviousness: makes objects and their controls visible and intuitive.

* Encouragement: makes actions predictable and reversible.

* Satisfaction (feedback): creates a feeling of progress and achievement.

* Accessibility: makes all objects accessible at all times.

* Versatility (flexibility): supports alternate interaction techniques.

* Personalization: allows users to customize.

(Tilson et al, 1998)

Asos.com has very successfully designed its website for a global audience.

As already mentioned, the website offers potential customers a large variety of unique features, such as catwalk videos, a fashion blog, a diverse range of brands and products, a pleasurable shopping experience, competitive prices, new styles and convenience, in order to attract more users and set itself apart from other similar websites. The company tries to keep up with changes in technology by frequently updating the website, which is also kept up to date by constantly adding new products and product lines according to the latest fashion trends.

Its global success has been recognized through many awards such as:

* Sep 2002: E-commerce Awards – Highly Commended

* Feb 2005: More Magazine Fashion Awards – Most Addictive Online Shopping

* Oct 2008: AIM Awards – Company of the Year

* Nov 2008: Company High Street Awards – Best Online Shopping

* Mar 2009: Cosmopolitan Online Fashion Awards – Best Online Retailer

(http://www.asos.com/)

E-COMMERCE AND ITS ISSUES

"Electronic commerce is taking off both in terms of the number of users shopping, as well as the total amount people are spending via Internet-based transactions."

(Tilson et al, 1998)

E-commerce is rapidly gaining importance in today's business environment. The practice of e-commerce has existed since 1965 and has attracted the interest of many pundits. Companies accepted and adopted e-commerce technology faster than any other technology in the history of mankind. The reason is that the benefits are plentiful: by creating a website they can be seen all around the world, reach new customers, lower their transaction costs, meet their customers' expectations and needs, provide new services and products and therefore remain competitive.

(Khan, 2008)

E-commerce presents enormous opportunities for both consumers and businesses around the world. The self-service it enables allows consumers to conduct a wide range of activities: they can access thousands of online sites and purchase anything from groceries to books, cars, credit cards and loans. As Mark Hurst stated: "It's ease of use, it's ease of use. Why doesn't the industry get that?" (Andrews, 1998)

Legal and Ethical issues in E-Business

Internet technology has posed new challenges for the protection of individual privacy.

Computer information regarding Internet users is generated every day through credit card purchases, telephone calls, magazine subscriptions, video rentals, mail-order purchases, banking records and local/state/federal government records. If this information is put together and mined properly, it could reveal a user's credit information, driving habits, tastes, associations, political interests and much more.

It is possible to record many online activities, including which websites a user has visited, which newsgroups or files he has accessed and what items he has purchased over the Web. Some organizations use this information to better target their offerings. Others monitor the Internet usage of their employees to see how they are using company network resources.

A new data analysis technology called NORA (non-obvious relationship awareness) offers even more powerful profiling capabilities to the government and the private sector. NORA can take information about people from sources like employment applications, telephone records, customer listings and 'wanted' lists, and correlate relationships to find obscure, hidden connections that might help identify criminals or terrorists.

Cookies are another noteworthy issue. Cookies are tiny files deposited on a computer hard drive, when a user visits certain websites. Cookies identify the visitor’s Web-browser software and track visits to the website. When the visitor returns to the site, the website software will search the visitor’s computer, find the cookie and know what that person has done in the past. Then the site can customize its contents for each visitor’s interests.
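As a small, illustrative sketch of this mechanism (using Python's standard http.cookies module; the cookie name and value are invented for illustration and are not specific to any real site):

```python
# Sketch of how a site deposits a cookie and recognizes it on a later visit.
# The cookie name and value here are hypothetical.
from http.cookies import SimpleCookie

# First response: the server deposits a small identifier on the visitor's machine.
outgoing = SimpleCookie()
outgoing["visitor_id"] = "abc123"
outgoing["visitor_id"]["path"] = "/"
print(outgoing.output())              # Set-Cookie: visitor_id=abc123; Path=/

# Next visit: the browser sends the cookie back and the site recognizes the visitor.
incoming = SimpleCookie("visitor_id=abc123")
print(incoming["visitor_id"].value)   # abc123
```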

There are several more ethical and legal concerns arising from the use of IT. Another issue, for example, is the protection of intellectual property such as software, digital books, digital music, or digitized video; it is extremely difficult to protect intellectual property when it can so easily be copied and distributed. In addition, there is the matter of 'spam' messages and computer theft/fraud (stealing personal credit card information). (Laudon and Laudon, 2005)

Asos.com guarantees privacy and security over credit card purchases for website visitors. There have been strong political movements to protect Web users' privacy and security; in addition, the online industry has preferred self-regulation to privacy legislation for protecting consumers. In 1998, the online industry formed the Online Privacy Alliance to encourage self-regulation and to develop a set of privacy guidelines for its members. The group promotes the use of online seals, like TRUSTe, certifying websites that adhere to certain privacy principles. A further industry association, the Network Advertising Initiative (NAI), has also been created to develop its own privacy policies, help consumers opt out of advertising network programs and provide consumers with redress from abuses.

On top of the legislation, new technologies are available to protect user privacy during interactions with websites. Many of these tools are used for encrypting e-mail, making surfing and e-mail activity appear anonymous, preventing client computers from accepting cookies, and so on. Asos.com plans to adopt such technologies in the short run, to ensure its visitors' privacy. (Laudon and Laudon, 2005)

SOFTWARE SYSTEM CHOICES
To buy off-the-shelf packages or develop from scratch

The acquisition of new software and hardware can bring dramatic applications that will change an organization. Managers must be prepared to make risky acquisitions that will have a significant impact on the firm. There has been some movement towards outsourcing and strategic alliances to reduce the time required to develop vital applications. Buying instead of making is one strategy to bring change more quickly in the firm. However, the purchase decision is one that usually warrants advice from systems professionals. Software can be bought from a large number of companies. Manufacturers of large computers often sell proprietary software for them, especially operating systems. Companies like Computer Associates sell a great deal of software for large-scale computers.

The main attraction of buying off-the-shelf packages is to avoid having to develop a custom system. Custom programming is expensive and time-consuming; therefore, when a package is available, it should be considered.

The major advantage of using a package is cost savings. The package developer expects to sell a number of packages to recover the investment in developing it. The cost is thus amortized over a number of users. The cost to the developer though is usually higher than the development of a single application would be, since the package must be general enough to be used by a number of customers. This increased generality makes the package larger, more complex and often less efficient to operate than an application specifically developed for a single application. (Henry and Lucas, 2000)

The 'make or buy' decision is always a difficult one for management. The availability of new technologies in the marketplace and a movement by firms to get back to their core competencies have led many companies to select the 'buy' option.

Because of the high cost and long time required to develop software, most managers look first at whether they can buy existing software and modify it, if necessary, to avoid programming an application from scratch. (Henry and Lucas, 2000)

Asos.com's managers had to face all these difficulties in deciding whether to develop their own custom system or to buy existing software and modify it according to their needs. They could have depended on outsourcing to develop and operate their applications, or chosen to retain part of their IT functions and partially outsource some activities, which I believe would be the best decision for the company. Unfortunately, there is not much information on the Asos.com website about how the organization's software systems were developed.

ETHICS / SOCIAL ISSUES

Information ethics relates to standards of right and wrong in information processing practices. Organizations must deal with ethical issues relating to their employees, customers and suppliers. Ethical issues are of high importance, for they have the power to damage the image of an organization and to destroy the morale of the employees.

Ethics is a complicated area, since ethical issues are not cut and dried; they vary between people, cultures and countries. What may be regarded as ethical by one person may be regarded as unethical by another. (Turban et al, 2001)

Displacement of employees with Information Technology

IT offers many benefits to organizations. A key benefit, for example, is the reengineering of work, which brings advantages such as the elimination of production waste and the reduction of operating costs. On the other hand, redesigning business processes could cause millions of employees to lose their jobs.

As Rifkin (1993) said:

"We will create a society run by a small high-tech elite of corporate professionals […] in a nation of the permanently unemployed." (Laudon and Laudon, 2005)

Some argue that releasing bright, educated workers from reengineered jobs will result in their moving to better jobs in fast-growth industries. However, this does not apply to unskilled, blue-collar workers or to less well-educated, older managers.

Consequently, IT has created new ethical dilemmas in which one set of interests is pitted against another. For example, many of the large telephone companies in the USA are using IT to reduce the size of their workforces: voice recognition software reduces the need for human operators by enabling computers to recognize a customer's responses to a series of computerized questions. Competing values are therefore at work, and groups line up on either side of the debate. Companies argue that displacing employees with IT is ethical, since they have the right to use ISs to increase their productivity and reduce the size of their workforce in order to lower expenses and remain in business. The employees being displaced argue that their employers have responsibilities for their welfare and that their displacement by IT is unethical. (Laudon and Laudon, 2005)

Asos.com, as an online corporation, employs far fewer staff than traditional fashion and beauty businesses. The company hires highly educated IT experts to run the operating software and keep the website updated with the most innovative features. Displacement of employees by IT is not an issue for Asos.com itself, although the broader movement of businesses into the digital economy could become an issue, since fewer and fewer workers are needed.

STRATEGY AND IS / INITIATIVES

"Competitive Advantage is at the core of a firm's success or failure." (Turban et al, 2001)

Ensure continued, powerful Competitive Advantage

Computer-based IS’s have been enhancing competitiveness and creating strategic advantage for several decades.

A competitive strategy is defined as a broad-based formula for how a business is going to compete, what its goals should be and what plans or policies will be required to carry out those goals. Through its competitive strategy, an organization seeks a competitive advantage in an industry.

A Competitive Advantage represents an advantage over competitors in some measure such as cost, quality, or speed. A strategic IS can assist an organization to gain a competitive advantage through contribution to its strategic goals and the ability to considerably increase performance and productivity. (Turban et al, 2001)

M. Porter’s competitive forces model is the most popular framework for analyzing competitiveness. It is used to develop strategies for organizations, with the purpose of increasing their competitive edge. Porter’s model identifies five major forces that could endanger an organization’s position in a given industry.

These forces are:

1. The threat of entry of new competitors

2. The bargaining power of suppliers

3. The bargaining power of customers

4. The threat of substitute products or services

5. The rivalry among existing firms in the industry (Turban et al, 2001)

Asos.com is an online-only corporation; this fact alone provides the company with a strong competitive advantage over its traditional clothing-store competitors. Asos.com possesses a competitive advantage mainly in terms of cost, since it does not own stores and does not employ shop staff (sales assistants, cleaning staff, security, etc.). For Asos.com, Internet technologies offer very powerful tools that can increase success through traditional sources of competitive advantage. For example, apart from low cost, Asos.com has excellent customer service and superior supply chain management. Low costs contribute to further advantages such as competitive prices and high quality for the company's products.

Consider and Decide IT Strategy: be a leader, a follower or an experimenter

Strategic management refers to the conduct of drafting, implementing and evaluating cross-functional decisions that will enable an organization to achieve its long-term objectives.

IT contributes to strategic management in many ways. For example, it can contribute through innovative applications, competitive weapons, changes in processes, links with business partners, cost reductions, relationships with suppliers and customers, new products and competitive intelligence.

Porter’s model identifies the forces that influence competitive advantage in the marketplace. Managers are interested in the development of a strategy that aims to establish a profitable and sustainable position against these five forces. To accomplish this, a company needs to develop a strategy of performing activities differently from a competitor.

There are many different strategies that managers can choose, according to which copes best with their operation. Some of them are the following:

Cost Leadership Strategy: Produce products at the lowest cost in the industry. A firm achieves cost-leadership in its industry by thrifty buying practices, efficient business processes, forcing up the prices paid by competitors and helping customers/suppliers reduce their costs.

Differentiation Strategy: Offer different products, services, or product features. By offering different-better products companies can charge higher prices, sell more, or both.

Niche Strategy: Select a narrow-scope segment (niche market) and be the best in quality, speed or cost in that market.

Innovation Strategy: Introduce new products and services, put new features in existing ones, or develop new ways to produce them.

There are many other strategies such as Growth, Alliance, Operational Effectiveness, Time Strategy, etc. (Turban et al, 2001)

Asos.com has tried to implement a niche strategy. It has chosen a relatively small market segment (consumers aged 16 to 34) in the clothing, fashion and beauty industry and has tried to be the best in cost, quality and speed of delivery (a same-day delivery service through the use of MetaPack delivery management software and CitySprint's SameDay Courier solution). Given that it is an online enterprise and can therefore keep expenses low, it offers very competitive prices and a very wide range of products and brands. It also maintains high standards of quality for its products. (http://www.asos.com/)

REFLECTIVE PARAGRAPH

I believe that the main lesson learnt here is that the use of IT has many benefits, but also many drawbacks. IT and information systems can bring rapid change to organizations, enhance productivity and reduce costs. A firm established on the World Wide Web has countless advantages, which is why most medium and large organizations, and even small ones, create a website. On the other hand, numerous concerns derive from the exploitation of IT: global issues, software system choices, e-commerce, strategy and IS initiatives, and ethical, social and political questions are only some of the subjects that raise a great many issues. However, there are many options to be considered in the use of IT, and most of these matters can be resolved.

CONCLUSION

To conclude, a brief observation of the background of ASOS.com has been given. The five selected MIS topics have been examined and the occurring issues have been analyzed. Moreover, a discussion of whether each described issue is a problem or not for ASOS.com has been presented. Finally, my reflective thoughts have been outlined, in the reflective paragraph.

Last of all, it is essential to mention that in the 21st century we find ourselves living in the age of computerization, and there is plenty of room for the future development of IT and IS. Information and Communication Technologies for Development, for example, refers to the application of information and communication technologies within the field of socioeconomic or international development, and the concept is intimately associated with applications in developing nations. It is concerned with the direct application of IT approaches to poverty reduction. Information and communication technologies can be applied either in a direct sense (their use directly benefits the disadvantaged population) or in an indirect sense (they facilitate the improvement of general socio-economic conditions). In many indigent regions of the world, legislative and political measures are required to facilitate or enable the application of information and communication technologies.

References

Andrews, W. (1998) 'At Far Too Many Sites "Buyer Be Lost" Applies', Internet World, Vol. 98, Issue 6

Henry, C. and Lucas, Jr. (2000) Information Technology for Management, McGraw-Hill, 7th edition, Ch. 15-17

Khan, K. M. (2008) Managing Web Service Quality: Measuring Outcomes and Effectiveness, Information Science Reference, Ch. 1,3

Laudon, K. and Laudon, J. (2005) Management Information Systems: Managing the Digital Firm, Prentice Hall, 9th edition, Ch. 5

Reynolds, Janice (2004) The Complete E-commerce Book: Design, Build and Maintain a Successful Web-Based Business, CMP, 2nd edition, p.76-79

Tilson, R., Dong, J., Martins, S. and Kieke, E. (1998) 'Factors and Principles Affecting the Usability of Four E-commerce Sites', Proc. of the 4th Conference on Human Factors and the Web

Turban, E., McLean, E., Wetherbe, J. (2001) Information Technology for Management: Transforming Business in the Digital Economy, Wiley, 3rd edition, Ch. 1, 2

Udo, Godwin J. and Marquis, Gerald P. (2001/2), 'Factors Affecting E-commerce Web Site Effectiveness', Journal of Computer Information Systems, Vol. 42, Issue 2, 10-17

The ASOS.com web-site: http://www.asos.com/

Voice User Interface (VUI)

ABSTRACT:

A Voice User Interface (VUI) is an interface that acts on the user's demands given in the form of speech. The speech engine recognizes a keyword among the many ambiguous words in the surroundings and carries out the user's demand. A basic VUI is constructed with the XML language.

The keyword has to be recorded in the speech engine at the time of construction. The basic security of the VUI comes from the confidentiality of the keyword for each specific task; the keyword acts as a password, separate for each task the system has to perform.

The new idea proposed here is to make the system understand our commands and act on them with the same precision, the only difference being that no keywords have to be stored at initialization.

A voice tester attached to the speech engine provides security: it tests the voice frequency of the user and allows only the authorized user to access the engine. This gives high security that cannot be broken easily.

To achieve this, three different components have to be included in the library functions of the XML language:

User Language,
Translator,
Phonetics.

The user language is set according to the user, and it should be included in the library functions.

The user has the option of giving the command in a different language, and the translator converts it to the machine language.

Phonetics is the language of pronunciation. It includes all the pronunciations that the user may produce, but the user's pronunciation should be accurate.

The security of this VUI is high, and it must be initialized: only the authorized voice given by the user can access the system. If the user wants to increase security, voice modulation along with a keyword can also be stored in the system, which raises the security level further. The user can specify the number of additional users who may also access the system with the specific keyword.

The VUI finds many applications in voice mail, home appliances, entertainment, and so on.

Kinect is a special device used to sense the user's voice. With it, the VUI can be applied cheaply and accurately: by connecting the VUI through the speech engine to any appliance, we can make that appliance user-friendly.

By implementing this concept in appliances and other systems, we can make our work simpler, cheaper and quicker. A system connected to a speech engine is very helpful and very easy to handle.

INTRODUCTION

A Voice User Interface (VUI) is an interface that acts on the user's demands, which are given in the form of speech.

The VUI concept was first introduced to make devices more secure: the voice can serve as a highly secure element that cannot easily be hacked.

It also has the major advantages of ease of use and time saving: the user may have many demands, and they can be carried out easily by the system.

Kinect is a speech device used to record the user's speech and convert it into words.

This device is applied in the field of the VUI to record the voice and send it to the system.

BASIC CONCEPT OF VUI

The basic concept of a VUI is to record the voice, convert it into words, and act according to them.
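As a minimal sketch of this record-convert-act loop (written in Python rather than the XML described in this report; the transcription step is a hypothetical placeholder for a real speech engine):

```python
# Sketch of the basic VUI loop: audio is converted into words, and the
# system acts only when one of the stored keywords appears among them.

def transcribe(audio: bytes) -> str:
    """Placeholder for the speech engine's speech-to-text step."""
    raise NotImplementedError

def act_on(text: str, keywords: dict) -> str:
    # Compare each recognized word against the stored keywords.
    for word in text.lower().split():
        if word in keywords:
            return keywords[word]     # the command to carry out
    return "NO_KEYWORD_RECOGNIZED"

print(act_on("please run f1 now", {"f1": "FAN_ON"}))   # FAN_ON
```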

WORKING OF BASIC VUI

A basic VUI works in user-system interfaces where human help is not required. For example, it can be used in the customer care services of mobile phone companies.

Example

Consider a user calling customer care because they need some information.

The recorded voice of the system welcomes the user, offers a set of choices and explains what each choice means.
The user presses a number according to his need; the system recognizes the number and connects the call to the respective module.

User: – //Calling for Customer Care//

Customer Care:- Welcome Sir,

Your choices are

Security code,
New Schemes,
Sim details
Balance enquiry

User: – //pressing no.2//

Customer Care: – Sir your choice is to know about the new schemes. The call is connected to our secretary.

//Now, the call is connected to the secretary//

In this example the user specifies a number, which is then processed, and the system gives the corresponding details. If the demand cannot be solved by the system, it connects the call to a human who can clear up all of the user's doubts.
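A minimal sketch of this menu dispatch (the menu labels and handlers below are hypothetical, not taken from any real customer-care system):

```python
# Sketch of the customer-care menu: each choice number maps to a handler,
# and anything the system cannot resolve is routed to a human operator.

def new_schemes() -> str:
    return "Sir, your choice is to know about the new schemes. Connecting you to our secretary."

MENU = {
    "1": lambda: "Your security code has been sent to you.",
    "2": new_schemes,
    "3": lambda: "Here are your SIM details.",
    "4": lambda: "Here is your balance enquiry.",
}

def handle_choice(choice: str) -> str:
    handler = MENU.get(choice.strip())
    if handler is None:
        return "Connecting you to a human operator."
    return handler()

print(handle_choice("2"))
```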

Drawbacks:

Pressing the number takes time.
If our demand is choice number 10, we have to wait until the 10th choice is announced, which is a waste of time.

IMPROVEMENT OF VUI

The VUI has been improved through its applications, and the applications have improved along with the VUI itself.

There are three basic applications at this stage of the VUI:

Customer Care,
Home appliances,
Mobile applications.

i) Customer Care:

The improvement at this stage of the VUI is in time saving: the user specifies his choice by voice according to his demand, instead of pressing a number.

Ex:

User: //calling for customer care//

Customer Care : Welcome Sir, your choices are

1. Security code

2. New schemes

3. Sim details

4. Balance enquiry

User: 2 //It is specified by speech//

Customer Care: Ok Sir, Now your call is connected to our chief.

In this example, the user specifies his choice by voice rather than by pressing a number.

Drawbacks:

Here too, the user has to listen to all the choices until his own choice is announced, which also wastes time.

ii) HOME APPLIANCES:

The VUI has found application in the field of home appliances. Here, the appliances work according to the user's demand given through voice.
The voice keyword must be different for each device, and the keyword is specified at the time of initialization.

Ex:

If the user wants to switch the fan on and off, keywords for these actions must be specified.

ON – F1

OFF- F2

Now, the user has to say "F1" if he wants to switch the fan ON, and "F2" if he wants to switch it OFF.

In the same way, a keyword has to be stored for each appliance at the time of initialization.
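The sketch below shows one possible way to hold such a keyword table and act on it; only the fan keywords F1 and F2 come from the example above, while the light keywords and the printed actions are illustrative assumptions.

```python
# Keyword table defined at initialisation time, as described above.
# "F1"/"F2" follow the fan example; the other keywords are illustrative only.
KEYWORDS = {
    "F1": ("fan", "on"),
    "F2": ("fan", "off"),
    "L1": ("light", "on"),
    "L2": ("light", "off"),
}

def handle_keyword(spoken: str) -> str:
    action = KEYWORDS.get(spoken.strip().upper())
    if action is None:
        return "Keyword not recognised."
    appliance, state = action
    # In a real system this would drive a relay or a smart-home API.
    return f"Turning the {appliance} {state}."

print(handle_keyword("F1"))  # Turning the fan on.
print(handle_keyword("f2"))  # Turning the fan off.
```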

Drawback:

Remembering a different keyword for every device is difficult.
The security of the keyword depends only on how confidential we keep it.

iii) Mobile applications:-

The VUI is used in some mobile applications. There are mobile apps that can react to our questions and act as a companion to us.

Ex:-

User: Hey

App: Hey

User: What is your name?

App: My name is X

User: Do you like me?

App: Yes, I like you.

In this example, the mobile app interacts with the user according to his questions.

Drawbacks:

The answers given by the mobile app are only stored in the form of templates, so the reaction is similar for most of the questions asked by the user.
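A minimal sketch of this template-based behaviour, using the dialogue above as the template table (the fallback answer is an assumption):

```python
# A template-based companion app of the kind criticised above: every
# question is matched against a fixed table, so unknown questions all
# receive the same canned reaction.
TEMPLATES = {
    "hey": "Hey",
    "what is your name?": "My name is X",
    "do you like me?": "Yes, I like you.",
}

def reply(question: str) -> str:
    return TEMPLATES.get(question.strip().lower(), "Sorry, I did not get that.")

print(reply("What is your name?"))   # matches a stored template
print(reply("Will it rain today?"))  # falls through to the canned answer
```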

NEW CONCEPT ABOUT VUI:

My new idea for the VUI is to make the system understand the user's demand and act on it with the same precision.

For that, the library functions of the XML language have to be extended with some extra functions.

There are three main functions to be included in the module:

User Language,
Translator,
Phonetics

i) USER LANGUAGE:

The user can place demands in any language he is able to speak.

The only condition is that the language must be included in the library functions. The user language should be a specific one; if we want to change the user language, we have to say the keyword for switching to that language.

The keyword has to be specified at the time of initialization of the speech system.
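A small sketch of how such a language-switching keyword might be handled is given below; the keyword phrases and the language codes are assumptions made for illustration.

```python
# Sketch of switching the user language with a spoken keyword, as described
# above. The keywords and the set of supported languages are assumptions.
LANGUAGE_KEYWORDS = {
    "switch to english": "en",
    "switch to tamil": "ta",
    "switch to hindi": "hi",
}

current_language = "en"  # set at initialisation of the speech system

def process_utterance(utterance: str) -> str:
    global current_language
    key = utterance.strip().lower()
    if key in LANGUAGE_KEYWORDS:
        current_language = LANGUAGE_KEYWORDS[key]
        return f"User language changed to '{current_language}'."
    # Otherwise the utterance is treated as a normal demand in the
    # currently selected language.
    return f"Handling demand in language '{current_language}': {utterance}"

print(process_utterance("switch to tamil"))
print(process_utterance("turn on the fan"))
```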

ii) TRANSLATOR:

The translator can also be called a convertor. Its main job is to convert spoken words into words that can be understood by the system.

The speech engine records the speech as spoken words and sends it to the computer in a form the computer can understand.

This convertor is also included in the XML library functions. It does not require any keyword and is executed automatically.
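The sketch below illustrates the convertor idea in a very reduced form: free spoken text is normalised into tokens the rest of the system understands. The vocabulary table is an assumption; it stands in for the library functions described above.

```python
# Sketch of the "translator" stage: spoken words arrive as free text and are
# normalised into tokens the rest of the system understands. The mapping
# below is purely illustrative.
NORMALISE = {
    "switch": "turn",
    "turn": "turn",
    "on": "on",
    "off": "off",
    "fan": "fan",
    "light": "light",
}

def translate(spoken_text: str) -> list[str]:
    tokens = spoken_text.lower().replace(",", " ").split()
    # Keep only words the system knows, mapped to their canonical form.
    return [NORMALISE[t] for t in tokens if t in NORMALISE]

print(translate("Switch on the fan"))   # -> ['turn', 'on', 'fan']
```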

iii) PHONETICS:

Phonetics is the representation of spoken words in the form of special symbols, which can also be included in the library functions.

It is very important here because it specifies the pronunciation of words and their meaning.

The meaning of the word differs according to the pronunciation.

Ex: “read”

Is it present tense or past tense?

That depends entirely on how the user pronounces the word.

The speech engine has to record the voice, check it against the pronunciation, and execute the command according to the meaning.

When placing a demand, the user has to use the correct pronunciation; otherwise the system responds to whatever meaning that pronunciation carries.
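As a toy illustration of pronunciation-based disambiguation, the sketch below keys the word "read" on two simplified phonetic transcriptions; the symbols and the lookup table are assumptions, since a real system would use the speech engine's own phoneme set.

```python
# Sketch of pronunciation-based disambiguation for the word "read".
# The simplified phonetic keys and the mapping are illustrative assumptions.
PRONUNCIATIONS = {
    "ri:d": ("read", "present tense"),   # rhymes with "reed"
    "red":  ("read", "past tense"),      # rhymes with "red"
}

def interpret(phonemes: str) -> str:
    entry = PRONUNCIATIONS.get(phonemes)
    if entry is None:
        return "Pronunciation not recognised."
    word, meaning = entry
    return f"The word '{word}' was understood as {meaning}."

print(interpret("ri:d"))
print(interpret("red"))
```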

Some of the applications to be implemented with VUI are:

1. CUSTOMER CARE:

With this concept it is very easy to select a choice in customer care.
The user does not have to wait for the desired choice number; he simply states his need and the system responds to the demand.


Ex: User: //calling customer care//

Customer Care: Welcome Sir, What do you like to know sir?

User: About balance amount.

Customer Care: Please wait, sir; we will send your balance information via SMS.

Here there is no need to wait for a human agent to solve the problem.

Advantage:

i) The user does not have to wait for all the choices to be read out, so no time is wasted.

2. Home appliances:

The VUI has also found applications in home appliances. With this concept a fixed keyword is not necessary.
The system can understand our demand and respond to it accordingly.
The statement given by the user may vary, but the pronunciation must be accurate. Ex: If the user wants to switch on the fan, there are many possibilities:
Switch on the fan, (or)
ON the fan, (or)
Turn on the fan.

All these statements are accepted by the system, but the pronunciation matters.
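A minimal sketch of this keyword-free matching, using the three phrasings listed above (the action labels are assumptions):

```python
# Sketch of keyword-free command matching: several different phrasings of
# the same demand map to one action.
FAN_ON_PHRASES = {"switch on the fan", "on the fan", "turn on the fan"}
FAN_OFF_PHRASES = {"switch off the fan", "off the fan", "turn off the fan"}

def interpret_command(utterance: str) -> str:
    text = utterance.strip().lower()
    if text in FAN_ON_PHRASES:
        return "fan -> ON"
    if text in FAN_OFF_PHRASES:
        return "fan -> OFF"
    return "command not understood"

for phrase in ("Switch on the fan", "Turn on the fan", "ON the fan"):
    print(phrase, "=>", interpret_command(phrase))
```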

MOBILE APPLICATIONS:
If we use this concept in the VUI, the mobile app can respond to our questions without relying on fixed templates.
The answers given by the mobile app become reasonable, and it can be a great companion to us.
VEHICLES:
For example, a car can be started by a keyword that we store ourselves. The car starts only if the voice is identified correctly, which provides high security.
The tape or FM radio in the car can be switched on through the VUI, and the FM channel can be changed without any physical or eye contact, so it does not distract from driving.

SECURITY:

The security of this VUI depends mainly on voice modulation: the frequency characteristics of the user's voice decide whether the device is switched on or off.

The frequency profile of the user's voice is stored in the system at the time of initialization, so the system can be accessed only by the authorized user.

To increase security we can also include passwords, as well as special sounds that are peculiar to ourselves.

Breaking all of these security layers is extremely difficult, so the system is well protected from hacking.
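The toy sketch below illustrates the layered check described above, combining a stored voice-frequency profile with a spoken password; the single-number pitch model, the tolerance value and the password are simplifying assumptions.

```python
# Toy sketch of the layered security described above: a stored voice
# "frequency" profile plus an optional spoken password.
STORED_PROFILE = {"mean_pitch_hz": 182.0, "password": "open sesame"}
PITCH_TOLERANCE_HZ = 10.0

def authorise(measured_pitch_hz: float, spoken_password: str) -> bool:
    pitch_ok = abs(measured_pitch_hz - STORED_PROFILE["mean_pitch_hz"]) <= PITCH_TOLERANCE_HZ
    password_ok = spoken_password.strip().lower() == STORED_PROFILE["password"]
    return pitch_ok and password_ok

print(authorise(185.0, "open sesame"))   # True: both checks pass
print(authorise(240.0, "open sesame"))   # False: voice does not match
```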

CONCLUSION:-

By implementing this concept in real-life applications, we can do our jobs faster, more effectively and more securely.

By connecting it to a computer, we can make the computer more user-friendly.

In this way, we can improve the security of device access and enable the user to complete all their work in a shorter span of time.

Artificial Intelligence in Business Applications

Artificial Intelligence and Robotics

Business functions that can/cannot be automated

INTRODUCTION

Computer systems today are a part of almost all businesses; this is because they provide us, along with the added use of the Internet, with a variety of means that made business operations easier, productivity higher, and communication processes faster. Computers and the programs (or the software applications that are installed on them) along with the robotic systems do a great amount of the tasks that were previously performed by the employees and/or workers themselves. This transformation, towards an automated work environment, saved businesses a lot of unwanted expenses, a lot of time, and caused profits to increase steadily. Computers substituted, in different business structures, classical machines and tools, such as the calculator, the fax, the telephone, the photocopier and many more. The automation of different business functions led many organisations and companies to a higher level in what concerns production and management.

But the point that should be understood is that even though many processes and functions related to businesses and organisations have been automated, there are still many aspects that are not, or that cannot be, automated for a wide range of reasons.

BUSINESS FUNCTIONS

The main objective of any business is to achieve success. To be able to reach success, an organisation needs to have an effective structure because any entity depends exclusively on two factors which are management and use of information. An efficient use of information systems can allow an easier and faster access to data that are essential for the workflow and for the quality of that work and, therefore, can assist the management in performing its duties in the best possible manner and in making the right decisions at the right times. In order to achieve such objectives, specific business functions should be established and specific tasks should be performed.

Every kind of business and every organisation, depending on the nature of their operations, the products or services that are provided by them, their geographic location, and depending on the management and production schools that they relate to, have different business functions, but there are certain generic functions that apply to all kinds of businesses all over the world. These functions are usually general management, information management, operations management, marketing, finance and accounting, and human resources.

Lan and Unhelkar (2005) identify the various generic business functions by stating that they are the function of Management and Administration which is the department whose tasks are to “corporate resources, corporate image, quality in all aspects, industrial relations, stakeholders relations, productivity, [and] promotion,” the function of Human Resources that should deal with “job analysis, position classification, employee training, employee selection, employee auditing and promotion” in addition to other related tasks, the function of Finance and Accounts that is responsible for “the capital operations required by the entire enterprise activities… the funds required by management, administration, sales, marketing, human resources, [and] purchasing,” the function of Purchase and Procurement, the function of Sales and Marketing, and the function of Customer Care or Customer Support.

According to another source, “business functions are universal and apply to every type of business. The most essential business functions are marketing, operations (production of goods and services), finance, and human resource management” (Plunkett, Attner, and Allen 2005). Here, we find a view according to which all functions are the same regardless of the type of business.

The main question is to understand whether the above mentioned functions can be in whole or in part automated and/or computerised. In other words, can all the tasks concerning the business functions be transferred to intelligent electronic or robotic agents reaching the level of efficiency and proficiency in which humans are capable of performing them?

AUTOMATION AND ARTIFICIAL INTELLIGENCE

In order to understand if all (or only some) business functions can be automated, it is important to understand the meaning of the concept itself. According to MSN Encarta (2005) automation is a “system of manufacture designed to extend the capacity of machines to perform certain tasks formerly done by humans, and to control sequences of operations without human intervention. The term automation has also been used to describe non-manufacturing systems in which programmed or automatic devices can operate independently or nearly independently of human control. In the fields of communications, aviation, and astronautics, for example, such devices as automatic telephone switching equipment, automatic pilots, and automated guidance and control systems are used to perform various operations much faster or better than could be accomplished by humans.”

For us to reach such a system, a certain computerised aspect should be developed; an aspect which enables machines to execute given tasks according to the desired level. For such an objective, experts and programmers should be able to produce information systems that possess some of the characteristics of intelligence; this is why such systems are referred to as systems of artificial intelligence, or simply intelligent machines; in other words, computerised systems that are pre-programmed to perform a certain mission with the same level of accuracy of a trained human being. It is the science of creating machines that are intelligent, and in a more specific context, intelligent computer software-programs functioning according to the present hardware. It attempts to comprehend the mechanisms in which human intelligence works and then imitates it in the way the prospective intelligent machines should work, avoiding the limitations of biologically related weaknesses.

Bailey (1992) describes his understanding of intelligence as the ability to reason, to think logically, and to have an effect on the environment; this requires a good level of acquired knowledge. To be able to simulate humans, machines should possess the capability of understanding the world. Computers, or intelligent machines, should be knowledgeable on a level that is even deeper and more detailed than ours. Given such knowledge, intelligent machines (or computers or robots) would be in a position to answer any of our questions, perform any task efficiently, and solve complex and difficult problems much more rapidly.

Bailey also states that another important feature that intelligent machines should have is connectivity to each other all around the world through the use of networks, which will make it even easier for them to gain more knowledge and to communicate it to one another. Then there is another feature that is the ability to establish an effective level of communication between intelligent computers and humans through both written and verbal means and not through commands typed through a keyboard and a screen. Finally Bailey puts the physical qualities, such as vision, hearing, as the final of his desired intelligent computer or robot through the use of visual and auditory sensors similar to, or better than, those of humans.

AUTOMATING BUSINESS FUNCTIONS

The organisational structure is the setting that defines all the departments of the organisation, identifies the responsibilities and duties of each department, regulates the relationship between the various departments and explains how each of them should interact with the others in the way that guarantees the achievement of the desired outcome.

As Clarke and Anderson explain, “an organizational role is defined as a set of functions together with a set of obligations and authorities. The same human or artificial agent can play several roles” [within that specific organisation] (187).

The various tools of Information Technology can assist the company in gathering, elaborating, processing, storing/documenting, and distributing all the information that is needed for planning, decision making, and control. The use of computers and the simplicity they offer are important elements in what concerns the enhancement of all the mentioned processes. This fact explains how information technology influences the way in which organisations tend to arrange the tasks and processes within them. Ross (2003) explains that “information technology (IT) has progressively become key link integrating the business enterprise and its logistics capabilities with its customers and supplies… Simply, the organization’s ability to create, collect, assimilate, access, and transfer information must be in alignment with the velocity of the activities necessary to execute effectively supplier, customer service, logistics and financial processes.”

As mentioned earlier, many aspects related to the various tasks of businesses are now computerised and/or automated. Accounting and financial processes, for example, are not done only on paper as they once used to be; instead complete computer systems that rely on software applications are those that elaborate, document, communicate, and distribute the various pieces of information among different employees working in different departments. Another example is that related to the processes of sales and marketing which depend heavily on the Internet and the means of communication offered by it. “Sales force automation modules in CRM [Customer Relation Management] systems help sales staff increase their productivity by focusing sales efforts on the most profitable customers, those who are good candidates for sales and services. CRM systems provide sales prospect and contact information, product information, product configuration capabilities, and sales quote generation capabilities” (Laudon and Laudon 2006).

For what concerns the accounts and finance function, there are clear indications that many of its tasks have been computerised. “Large and medium-sized businesses are using ASPs [Application Service Providers] for enterprise systems, sales force automation, or financial management, and small businesses are using them for functions such as invoicing, tax calculations, electronic calendars, and accounting” (Laudon and Laudon 2006). Another form of automation in this context is presented by Sanghvi (2007) as he states that “online technologies have enabled payroll services to become a popular way for accounting firms to improve client service, enhance loyalty, and gain incremental business… Many small business owners turn to their accountant for back-office services while they focus on growing their businesses,” and this means that, through online systems, they can provide the external accountants with all the information needed in order to produce their legally accurate and acceptable financial documentation.

Concerning human resources management, there are certain computerised systems that are capable of performing the main parts of the process that are related to that function. Torres-Coronas and Arias-Oliva (2005) refer to what they define as e-recruiting; which consists of the “practices and activities carried on by the organization that utilizes a variety of electronic means to fill open positions effectively and efficiently. The e-recruiting process consists of the following iterative steps: identification of hiring needs; submission of job requisition; approval of the job requisition via a job database; job posting on the Internet; online search of the job database by job seekers, online pre-screening/online self-assessment; submission of applications by applicants directly into an applicant database; online search of the applicant database for candidate selection; online evaluation of resume/application; interviewing by recruiters/hiring managers; online pre-employment screening; and job offer and employment contract”

Another example of a computerised business function, which is auditing, is presented by Caster and Verardo (2007): “The increasing prevalence of complex computer information systems and electronic data interchanges has made most business transactions electronic in nature… Technological advances have altered not only the actual form of evidential matter required to be obtained by auditors, but also the competence of this evidence. Technology has had a significant impact on audit evidence, and existing auditing procedures could be improved in many ways.” The authors indicated that new technologically related regimes of audits have been created to automate the auditing process.

Laudon and Laudon (2006) explain that certain businesses took enormous steps towards the automation of the entire processes related to their core activity: “The management of UPS decided to use automation to increase the ease of sending a package using UPS and of checking its delivery status, thereby reducing delivery costs and increasing sales revenues. The technology supporting this system consists of handheld computers, barcode scanners, wired and wireless communications networks, desktop computers, UPS’s central computer, storage technology for the package delivery data, UPS in-house package tracking software, and software to access the World Wide Web.” The authors indicate that the various processes of UPS have improved substantially thanks to the computerisation and inter-connectivity of their functions.

When we study the potentials of automation for what concerns business functions, it should be clearly stated that each function is a separate case with its own factors and qualities, which can allow or limit the possibilities of full computerisation of its different processes and tasks.

Dorf and Kusiak (1994) state that almost every aspect of the manufacturing process can be automated: “Most manufacturing operations can be automated. Given the large number of manufacturing processes and assembly operations used in industry (the number is in the thousands) and the many possible ways in which any given operation can be automated.” The authors give different examples of automated systems, such as Automated Production Lines (“a production system consisting of a series of automated workstations connected by an automatic parts transfer mechanism”), Position and Motion Control Systems (which are required to position “a work head or tool relative to a work part to accomplish a process”), and Industrial Robotics (a robot being a “general-purpose programmable machine possessing certain anthropomorphic characteristics”).

When the other business functions are examined, we find that almost every single task within the realm of each function can be automated. Information concerning the major issues related to the business as a whole can be produced by computer systems on a regular basis and passed on to management for examination and study before the right decisions are reached concerning the survival and progress of the organisation. Accountancy and financial processes can be handled completely by intelligent systems that can, for example, calculate wages according to working hours, process payments to institutions and banks electronically over the Internet, produce invoices and receipts for customers and suppliers, and manage shareholders’ issues. In the human resources function, information and requests can be handled electronically, but the final step, which is employee selection, cannot be performed by automated systems, because here the human factor and human interactivity is, and most probably will always be, the determining point. This is also valid for sales and marketing: the computerised system can perform all that is needed except the stages related to policy making and to the physical delivery of products, as here the human factor is still required.

There are certain missing parts if the desired objective is to reach a total automated business; such parts can be overcome only if (or when) we manage to solve deep and important problems in what concerns artificial intelligence. Creating systems that can ‘think’ as humans and can perform tasks related to the human factor will not be a fast endeavour, as we are still in the beginning of what concerns understanding and imitating intelligence.

CONCLUSION

As mentioned earlier, most of the tasks related to virtually all business functions can be computerised and/or automated, but the most important element is still the human factor. At the present level of technology, we are unable to create a fully automated business and we cannot transform an existing business entirely into a computerised one. Some business functions, such as accountancy and information management, can be fully automated; some others, such as human resources and sales and marketing, can be computerised to a very high level; while other functions, such as general management, cannot be automated.

Another reason, beside the technological limitations of the field of artificial intelligence today, is that people (whether customers or suppliers) are still not accustomed to dealing solely with machines.

Works Cited

Bailey, C. (1992) Truly Intelligent Computers. Coalition for Networked Information [online]. Available from: [cited 13 April 2007].

Caster, P. and Verardo, D. (2007) Technology Changes the Form and Competence of Audit Evidence. The CPA Journal, 77(1), pp. 68-70.

Clarke, R. and Anderson, P. (2001) Information, Organisation, and Technology: Studies in organisational Semiotics. Norwell, Massachusetts: Kluwer Academic Publishers.

Dorf, R.C. and Kusiak, A. (1994) Handbook of Design, Manufacturing and Automation. Hoboken, NJ: John Wiley & Sons, Inc.

Lan, Y.C. and Unhelkar, B. (2005) Global Enterprise Transitions: Managing the Process. Hershey, PA: Idea Group Publishing Inc.

Laudon, J. and Laudon, K. (2006) Management Information Systems: Managing the Digital Firm 10th ed. Upper Saddle River, NJ: Prentice Hall.

Microsoft Encarta 2006. (2005) Automation. [CD-ROM]. Microsoft Corporation.

Plunkett, W. R. Attner, R. F. and Allen, G. (2005) Management: Meeting and Exceeding Customer Expectations. Mason, Ohio: Thomson South-Western – Publisher.

Ross, D. F. (2003) Distribution: Planning and Control 6th ed. Norwell, Massachusetts: Kluwer Academic Publishers.

Sanghvi, A. (2007) Improving Service Through Online Payroll. The CPA Journal, 77(3), pp. 11.

Torres-Coronas, T. and Arias-Oliva, M. (2005) e-Human Resources Management: Managing Knowledge People. Hershey, PA: Idea Group Publishing.

Are Computers Really Intelligent?

Are computers really intelligent?

Computer intelligence has been in hot debate since the 1950s, when Alan Turing invented the Turing Test. The argument over the years has taken two forms: strong AI versus weak AI. Strong AI hypothesises that some forms of artificial intelligence can truly reason and solve problems, with computers having an element of self-awareness, but not necessarily exhibiting human-like thought processes (http://en.wikipedia.org/wiki/Strong_AI). Weak AI, by contrast, argues that computers can only appear to think and are not actually conscious in the same way as human brains are (http://www.philosophyonline.co.uk/pom/pom_functionalism_AI.htm).

These areas of thinking cause fundamental questions to arise, such as:

‘Can a man-made artefact be conscious?’ and ‘What constitutes consciousness?’

Turing’s 1948 and 1950 papers followed the construction of universal logical computing machines, introducing the prospect that computers could be programmed to execute tasks which would be called intelligent when performed by humans (Warner 1994: 118). Turing’s idea was to create an imitation game on which to base the concept of a computer having its own intelligence. A man (A) and a woman (B) are separated from an interrogator, who has to decipher who is the man and who is the woman. A’s objective is to trick the interrogator, while B tries to help the interrogator in discovering the identities of the other two players (Goldkind 1987: 4). Turing asks the question:

‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?’ (Quoted from Goldkind 1987: 4).

Turing’s test offered a simple means of testing for computer intelligence, one that neatly avoided dealing with the mind-body problem (Millican, P.J.R., 1996: 11). Among its shortfalls were the facts that it did not introduce variables and that it was conducted in a controlled environment. Robert French, in his evaluation of the test in 1996, stated the following: ‘The philosophical claim translates elegantly into an operational definition of intelligence: whatever acts sufficiently intelligent is intelligent.’ However, as he perceived, the test failed to explore the fundamental areas of human cognition, and could be passed ‘only by things that have experienced the world as we have experienced it.’ He thus concluded that ‘the Test provides a guarantee not of intelligence but of culturally-oriented human intelligence.’ (Ibid: 12).

Turing postulated that a machine would one day be created to pass his test and would thus be considered intelligent. However, as years of research have explored the complexities of the human brain, the pioneer scientists who promoted the idea of the ‘electronic brain’ have had to re-scale their ideals to create machines which assist human activity rather than challenge or equal our intelligence.

John Searle, in his 1980 Chinese Room experiment argued that a computer could not be attributed with the intelligence of a human brain as the processes were too different. In an interview he describes his original experiment:

Just imagine that you’re the computer, and you’re carrying out the steps in a program for something you don’t understand. I don’t understand Chinese, so I imagine I’m locked in a room shuffling Chinese symbols according to a computer program, and I can give the right answers to the right questions in Chinese, but all the same, I don’t understand Chinese. All I’m doing is shuffling symbols. And now, and this is the crucial point: if I don’t understand Chinese on the basis of implementing the program for understanding Chinese, then neither does any other digital computer on that basis because no computer’s got anything I don’t have. (Free Inquiry 1998: 39).

John Searle does not believe that consciousness can be reproduced to an equivalent of the human capacity. Instead, it is the biological processes which are responsible for our unique make-up. He says that ‘consciousness is a biological phenomenon like any other and ultimately our understanding of it is most likely to come through biological investigation’ (Searle, 1990: 58-59, quoted from McCarthy, 2001, http://www-formal.stanford.edu/jmc/). Considered this way, it is indeed far-fetched to think that the product of millions of years of biological adaptation can be equalled by the product of a few decades of human thinking. John McCarthy, Professor Emeritus of Computer Science at Stanford University, advocates the potential for computational systems to reproduce a state of consciousness, viewing the latter as an ‘abstract phenomenon, currently best realized in biology,’ but arguing that consciousness can be realised by ‘causal systems of the right structure.’ (McCarthy, 2001, http://www-formal.stanford.edu/jmc/)

The famous defeat of Garry Kasparov, the world chess champion, in 1997 by IBM’s computer, Deep Blue, promoted a flurry of debate about whether Deep Blue could be considered as intelligent. When asked for his opinion, Herbert Simon, a Carnegie Mellon psychology professor who helped originate the fields of AI and computer chess in the 1950s, said it depended on the definition of intelligence used. AI uses two definitions for intelligence: “What are the tasks, which when done by humans, lead us to impute intelligence?” and “What are the processes humans use to act intelligently?”

Measured against the first definition, Simon says, Deep Blue “certainly is intelligent” (http://whyfiles.org/040chess/main3.html). According to the second definition, he claims it partly qualifies (Ibid).

The trouble with the latter definition of intelligence is that scientists don’t as yet know exactly what mechanisms constitute consciousness. John McCarthy, Emeritus professor at Stanford University explains that intelligence is the ‘computational part of the ability to attain goals in the world.’ He emphasises that problems in AI arise as ‘we cannot yet characterise in general what computational procedures we want to call intelligent.’ (McCarthy 2003: 3). To date, computers can perform a good understanding of specific mechanisms through the running of certain programs; what McCarthy deems ‘somewhat intelligent.’ (McCarthy 2004: 3).

Computing languages have made leaps and bounds during the last century, from the first machine code to mnemonic ‘words’. By the 1990s, so-called high-level languages were the standard for programming, Fortran having been the first compiler language. Considering the rapid progress of computer technology since it first began over a hundred years ago, it is likely that unforeseeable developments will occur over the next decade. A simulation of the human imagination might go a long way to convincing people of computer intelligence.

However, many believe that it is unlikely that a machine will ever equal the intelligence of the being who created it. Arguably it is the way that computers process information and the speed with which they do it that constitutes its intelligence, thus causing computer performance to appear more impressive than it really is. Programs trace pathways at an amazing rate – for example, each move in a game of chess, or each section of a maze can be completed almost instantly. Yet the relatively simple process – of trying each potential path – fails to impress once it’s realised. (Reed, 2003: 09). Thus, the intelligence is not in the computer, but in the program.

For practical purposes, and certainly in the business world, the answer seems to be that if it seems to be intelligent, it doesn’t matter whether it really is (Reed 2003: 09). However, computational research will have a difficult task to explore simulation of, or emulation of, the areas of human cognition. Research continues into the relationship between the mathematical descriptions of human thought and computer thought, hoping to create an identical form (Wagman, M., 1991: 2). Yet the limits of computer intelligence are still very much at the surface of the technology. In contrast, the flexibility of the human imagination that creates the computer can have little or no limitations. What does this mean for computer intelligence? It means that scientists need to go beyond the mechanisms of the human psyche, and perhaps beyond programming, if they are to identify a type of machine consciousness that would correlate with that of a human.

References

Goldkind, J., 1987, Machines and Intelligence: A Critique of Arguments against the Possibility of Artificial Intelligence. New York: Greenwood Press

Free Inquiry, 1998, Council for Democratic and Secular Humanism. Volume 18, Issue 4, p. 39+

McCarthy, J., 2001, ‘What is Artificial Intelligence?’ Available online from:

‘http://www-formal.stanford.edu/jmc/’

[Accessed 14/11/06]

Millican, P.J.R., 1996, The Legacy of Alan Turing. (Volume1). Oxford: Clarendon Press

Online Encyclopedia. Available online from:

‘http://en.wikipedia.org/wiki/Strong_AI.’

[Accessed 17/11/06]

Reed, F., 2003, ‘Artificial Intellect Really Thinking?’. The Washington Times. May 1, 2003. p. B09

Wagman, M., 1991, Artificial Intelligence and Human Cognition: A Theoretical Intercomparison of Two Realms of Intellect. New York: Praeger

Warner, J, 1994, From Writing to Computers. New York: Routledge

URL’S

‘http://www.philosophyonline.co.uk/pom/pom_functionalism_AI.htm’

[Accessed 17/11/06]

‘http://whyfiles.org/040chess/main3.html’

[Accessed 14/11/06]

Further Reading

DeLancey, C., 2002, Passionate Engines: What Emotions Reveal about Mind and Artificial Intelligence. New York: Oxford University Press

Wagman, M., 2000, Scientific Discovery Processes in Humans and Computers: Theory and Research in Psychology and Artificial Intelligence. Westport, CT: Praeger

Applications of Group Technology

1.0 Introduction

Nowadays the global economy is growing and improving, and countries, nations and citizens are developing at a fast pace. Manufacturing processes therefore have to improve so that good-quality products can be produced at lower cost, and this is where the term group technology was introduced. Group technology refers to a technique that classifies manufactured parts (grouped into what are called part families) according to size, shape, process or length. Some companies may group parts by other criteria, such as the product's function, as long as this makes the company's work easier. Group technology can genuinely help to reduce production costs, because grouping parts together into families saves transportation costs and production time, which in turn increases productivity; group technology is therefore very useful to many of us.

People have been using group technology in manufacturing since the 1920s. We can also see examples of group technology in daily life: in a school library, for instance, all the books are classified according to faculty and title. This method not only makes the work of the library staff easier but also enables students and lecturers to find the books they need quickly.

In past decades, group technology was normally used simply to reduce production costs. The situation has now changed: group technology has become an important strategy for most manufacturing processes, improving productivity and helping companies to develop.

2.0 Method to develop family part

In order to use the concept of group technology, a company has to decide how to classify its parts. The first step is to classify the parts (part families) according to two different kinds of attributes: geometric characteristics and production process characteristics.

Under geometric characteristics, the products are classified according to size, shape and length. If the products are classified by operation process instead, then the machine used for production, the method of processing and the tools used to hold the products are all taken into consideration. Both approaches group parts on the basis of their attributes, so the company should study its own products and determine the best way to classify them.

For example, for a company that owns a factory producing screws or bolts and nuts, the best method is geometric classification, since the products vary in shape and size. However, for a health-care company producing body shampoo, toothpaste and mouthwash, the better way to classify the products is by operation process, because the products are all similar in size and shape but vary in flavour and smell, which makes their operation processes differ from one another.

The next step is to determine the method used to form the part families. Normally one of three methods is used: classification and coding, manual visual inspection, or production flow analysis. Different products may suit different methods, so the company should find the best way to group its parts to obtain the best result.

2.1 Classification and coding

Classification is the process of grouping all related parts into a group, sometimes known as a part family. Coding means that the part families are assigned a symbol, and people can obtain information about a particular product from its code; in some schemes each character of the code also depends on the information carried by the previous character. Of the three methods, this is the most effective and accurate way to apply group technology.

Nowadays there are more than 100 types of classification and coding systems in use around the world, and the coding system may vary from one company to another. Many companies have hired experts to improve their coding systems, but no system has gained universal acceptance so far. Some people may think that coding is a simple task that can be done easily; in fact it is a difficult and complex problem, and much time and energy have to be spent to find the coding system best suited to the company.

As stated above, there are about 100 kinds of coding system available. However, all classification and coding systems can be grouped into three kinds: hierarchical (monocode), attribute (polycode) and hybrid (mixed code).

Hierarchical (monocode)

Hierarchical (monocode) is a coding system in which the meaning of each character depends on, and carries along, the information of the previous character. This kind of system forms a tree-structured pattern. The benefit of using this coding system is that the code is compact and all the information can be obtained from the code itself. Hierarchical (monocode) systems are especially useful when products differ in shape, size or other geometric characteristics.

Forming a good hierarchical system can be very difficult, but there are some guidelines for companies that wish to build one. When forming the codes, the company should ask itself a series of questions; the answers to these questions will enable it to form an effective coding system.

The benefit of this kind of coding system is that it lets us capture a great deal of information in a code of only a few digits. Furthermore, parts of the code carry useful summarising information for the company. The disadvantages are that it is almost impossible to produce a perfect hierarchical system, and that sub-groups can branch into many different sub-sub-groups, which leaves some code positions blank.

The figure above is an example of a hierarchical (monocode) system, and we can see some imperfection in it because some empty code positions exist. Research has been done to improve the efficiency of hierarchical systems.
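As a toy illustration of the tree structure and of the blank positions mentioned above, the sketch below walks a monocode digit by digit; the part categories are invented purely for illustration.

```python
# Toy hierarchical (monocode) scheme: the meaning of each digit depends on
# the digits before it, forming a tree. The categories are illustrative only.
MONOCODE_TREE = {
    "1": "rotational part",
    "11": "rotational part / plain cylinder",
    "12": "rotational part / threaded",
    "121": "rotational part / threaded / metric thread",
    "2": "prismatic part",
    "21": "prismatic part / machined block",
}

def describe(code: str) -> str:
    # Walk the code one digit at a time; every prefix must exist in the tree.
    description = None
    for i in range(1, len(code) + 1):
        prefix = code[:i]
        if prefix not in MONOCODE_TREE:
            return f"'{code}': position {i} is a blank (undefined) code."
        description = MONOCODE_TREE[prefix]
    return f"'{code}': {description}"

print(describe("121"))  # fully defined branch of the tree
print(describe("22"))   # hits an empty position, as discussed above
```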

Attribute (Polycode)

The attribute code system is sometimes known as a polycode or chain code system. It differs from the monocode in the meaning of its digits: in a polycode, each digit carries its own information, independent of the others, and each digit represents a different attribute of the part.

The advantage of the attribute (polycode) system is that it is very easy to understand, because each character is independent of the others; the company can read the details directly from the code. In a hierarchical code, by contrast, each character depends on the previous information, so only users familiar with the scheme can read the exact details. This kind of coding system also has a significant disadvantage: the codes formed tend to be very long.

The table below is an example of an attribute code system. From this table we can clearly see that each character represents a different attribute of the product. For example, a product with the code '32123' has the following characteristics: boxlike external shape, a centre hole as internal shape, no holes, cross-type holes, and external spur gear teeth.

Digit 1 – External shape: 1 = Cylindrical without deviation, 2 = Cylindrical with deviation, 3 = Boxlike
Digit 2 – Internal shape: 1 = None, 2 = Center hole, 3 = Blind center hole
Digit 3 – Number of holes: 1 = 0, 2 = 1-2, 3 = 3-5
Digit 4 – Type of holes: 1 = Axial, 2 = Cross, 3 = Axial cross
Digit 5 – Gear teeth: 1 = Worm, 2 = Internal spur, 3 = External spur

(The original table header allowed for a fourth possible value per digit, but no fourth values appear in the source.)
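The short sketch below decodes a polycode against the table above and reproduces the '32123' example from the text; only the decoding logic is added here.

```python
# Decoder for the attribute (polycode) table above: each digit is independent
# and indexes one attribute of the part.
POLYCODE = [
    ("External shape",  {"1": "Cylindrical without deviation",
                         "2": "Cylindrical with deviation",
                         "3": "Boxlike"}),
    ("Internal shape",  {"1": "None", "2": "Center hole", "3": "Blind center hole"}),
    ("Number of holes", {"1": "0", "2": "1-2", "3": "3-5"}),
    ("Type of holes",   {"1": "Axial", "2": "Cross", "3": "Axial cross"}),
    ("Gear teeth",      {"1": "Worm", "2": "Internal spur", "3": "External spur"}),
]

def decode(code: str) -> dict:
    return {feature: values.get(digit, "unknown")
            for digit, (feature, values) in zip(code, POLYCODE)}

# The example from the text: code '32123'.
for feature, value in decode("32123").items():
    print(f"{feature}: {value}")
```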

Hybrid (mixed code)

The final category of coding system is the hybrid (mixed code), which is the kind most companies prefer. It combines the advantages of the hierarchical (monocode) and attribute (polycode) systems, and can be regarded as a more advanced code. An example of such a code is 'A12131B120'. The first letter 'A' represents the type of part, such as a gear or a screw. The next five digits represent attributes of the part. The 'B' represents another subgroup, such as the material or the design, and the following digits describe the attributes of that subgroup. This shows that the system partly relies on the preceding character but also has digits that are independent and describe an attribute by themselves.

Some organizations and companies use special kinds of hybrid (mixed code) systems, known as DCLASS or MICLASS codes, which contain up to 8-12 digits. However, only certain organizations, such as TNO (the Netherlands Organization for Applied Scientific Research), use these special hybrid code systems.
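A small sketch of how such a mixed code could be split into its letter-keyed sections and their digit blocks; the interpretation printed for each section is an assumption made for illustration.

```python
# Sketch of splitting a hybrid (mixed) code such as 'A12131B120' into its
# monocode-style letters and the polycode-style digit blocks that follow them.
import re

def split_hybrid(code: str):
    # Each group is one letter followed by its run of attribute digits.
    return re.findall(r"([A-Z])(\d+)", code)

for letter, digits in split_hybrid("A12131B120"):
    print(f"section '{letter}' (e.g. part type or material) with attribute digits {list(digits)}")
```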

2.2 Manual visual inspection

Manual visual inspection is one of the methods used to classify part families. It is less accurate than coding or production flow analysis, because the products are classified based on physical appearance. Sometimes companies or organizations classify products only from photographs, arranging them by their visible features. We can therefore conclude that this is the least accurate of the three methods of grouping part families.

However, for some organizations this approach is perfectly suitable. One company in the United States reportedly saved itself from bankruptcy by using manual visual inspection to organise its group technology. In the end, any of the methods can be effective as long as it suits the company's production.

2.3 Production flow analysis

The other method of determining part families is production flow analysis. This classification system focuses mainly on the production process: products that must be manufactured by the same working process are classified into a part family and then processed by the same group of machines. The group of machines used to manufacture a part family is known as a cell, and cellular manufacturing refers to manufacturing in which part families are produced by such cells. The advantage is that production flow analysis requires less effort than classification and coding.

When classifying products with this system, we need to form a matrix known as a machine-component chart: an M x N matrix where M is the number of machines and N is the number of parts. The two figures below illustrate production flow analysis. The grouping is based on the operation process, and the aim is to find the optimum arrangement. For example, machine D is duplicated in the production flow analysis because it has to produce too many parts; if only a single machine D were used, group 1 and group 2 would have to be combined, but their routings are too dissimilar.

[Figure: machine-component chart – an 11 x 19 matrix with machines A to K as rows and parts 1 to 19 as columns, in which a 1 marks each part that requires a given machine. The individual entries are not reproduced here.]

[Figure: production flow analysis – the same matrix rearranged so that the machines and parts fall into three groups (roughly, Group 1 with machines H, D and F; Group 2 with machines A, B, E and a duplicated D; Group 3 with machines C, I, G, J and K).]
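As a toy illustration of production flow analysis, the sketch below groups parts that share the same machine routing into families; the small matrix is invented for illustration, and real charts such as the one above are usually rearranged with algorithms like rank order clustering rather than by exact routing matches.

```python
# Toy production flow analysis: parts with the same machine routing are
# grouped into one family, and the machines they share form the cell.
machine_part = {          # machine -> parts that visit it (illustrative data)
    "A": {1, 4},
    "B": {1, 4, 6},
    "C": {2, 3, 5},
    "D": {2, 3, 5, 6},
}

# Build each part's routing (the set of machines it needs).
parts = sorted({p for ps in machine_part.values() for p in ps})
routing = {p: frozenset(m for m, ps in machine_part.items() if p in ps) for p in parts}

# Parts sharing a routing become a part family; their machines form a cell.
families: dict[frozenset, list[int]] = {}
for part, route in routing.items():
    families.setdefault(route, []).append(part)

for i, (cell, members) in enumerate(families.items(), start=1):
    # Note that a machine may appear in more than one cell, which mirrors
    # the duplicated machine D discussed in the text.
    print(f"Group {i}: machines {sorted(cell)} -> parts {members}")
```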

3.0 Flow Chart in forming group technology

[Flow chart for forming group technology – not reproduced here.]

4.0 Advantages and disadvantages of group technology

Group technology will play an important role in future production plants, although it has not yet achieved widespread application. Using a group technology strategy is beneficial not only to the company but also to citizens and users.

The advantages of group technology for the manufacturing process are that the production process itself can be improved: part control becomes tighter, the physical layout of the machine groups can be improved, and ordering tied to production can be streamlined. Another advantage is that design and redesign become more systematic, which reduces process planning time and setup time. A further advantage is that material purchasing costs can be reduced, because materials are purchased in large quantities, and the accuracy of cost estimation also improves. All of this brings improvement to the company and helps organizations develop faster.

The improvement can also be clearly seen in the product itself when group technology is used: product quality improves because the number of completely new designs is reduced, so workers become more familiar with the parts they handle, which leads to better-quality products.

The advantage for the customer or user is that the cost of the product is reduced when group technology is introduced. This is because the transportation fee to tr

Application Layer Protocols in TCP/IP Model

Chapter 1: Introduction
1.1: Background

Information is deemed to be the backbone of business stability and sustainability in the twenty-first century, irrespective of the size or magnitude of the business, as argued by Todd and Johnson (2001)[1]. This is naturally because of the increasing use of information technology and the dependence on information communication over the Internet with entities associated with the business in geographically separated locations, which is accomplished through the effective use of secure communication strategies across the Internet using the TCP/IP model. The prospect of saving costs through electronic transactions across the Internet, which not only avoids costs associated with traditional business processes but also makes transfers instant and thus overcomes the time constraints associated with the procurement and distribution of goods and services by an organization, has made it critical to utilise secure communication methodologies in order to sustain the organization's communications strategy. Furthermore, it is also critical to ensure that the organization conforms to the legal requirements on security infrastructure for information privacy and the protection of personal and sensitive data of individuals concerned with the organization, as argued by Todd and Johnson (2001). This makes it clear that a secure communication infrastructure is essential if an organization is to harness the potential of Information Technology effectively.

Public Key Infrastructure (PKI) is an increasingly utilised method of authenticating data communication, used with various application layer protocols for secure communication in a client-server environment across the Internet (Nash et al, 2001[2]). The growth in the number of application layer protocols of the TCP/IP model in the twenty-first century, including HTTP (Hyper Text Transfer Protocol), SSL (Secure Sockets Layer) and TLS (Transport Layer Security), which are intended to provide the desired level of security for the data being communicated, makes it clear that there is still potential for hacking and for unauthorised access to authentication information by hackers and other malicious users while one of these protocols is in use. The increasing level of attacks on servers by unauthorised users seeking access to sensitive information, even where these protocols are deployed, has increased the need to assess their weaknesses: the potential areas where a hacker can attack the data communication, decipher the intercepted information and eventually abuse it for personal gain. In this report a critical overview of the areas of weakness in the application layer protocols of the TCP/IP model, in the light of using PKI, is presented to the reader.

1.2: Aim and Objectives

Aim: The aim of this report is to identify the key weaknesses of the application layer protocols of the TCP/IP model in the implementation of the PKI for secure data communication over the Internet.

Objectives:

The above aim is accomplished by steering the research conducted in this report towards the following objectives:

To conduct a background overview of PKI and the five layers of the TCP/IP model.
To conduct a critical overview of the key components that enable effective authentication and secure communication using a given protocol in the PKI infrastructure.
To perform an analysis of the key application layer protocols that are used in the TCP/IP model implementing the PKI architecture.
To assess the SSL/TLS protocol and its key weaknesses in terms of the areas where potential attacks by an unauthorised user or hacker are possible without the knowledge of the user.
To assess the Secure Electronic Transaction (SET) protocol and its key weaknesses in terms of the components of the protocol that can be manipulated by hackers for unauthorised access to personal and sensitive information.
1.3: Research Methodology

A qualitative approach to the research is taken, analysing the published information on the protocols and the RFC (Request for Comments) documents describing them. This approach is deemed effective because testing the protocols directly would require a substantial commitment of resources and funds to establish the infrastructure needed to simulate the test environment reliably. Secondary research resources from journals, books and other web resources are used to construct the analytical arguments of the research conducted in this report.

1.4: Research Scope

The TCP/IP communication model is deemed to be a critical platform for effective communication across the Internet. As each of the five layers of the TCP/IP model can be implemented using a variety of protocols, the scope of this research is restricted to the application layer protocols in the light of implementing PKI. This is due to the fact that the entire landscape of protocols comprised by the five layers of the TCP/IP model is extensive in nature and analysis of all the layers would not only require commitment of resources and funds but also the time necessary to perform the research. Hence the research scope is restricted to the application layer protocols of the TCP/IP model focusing specifically on the TLS and SET protocols.

1.5: Chapter Overview
Chapter 1: Introduction

This is the current chapter that introduces the reader to the aim, objectives, research methodology and scope of the research conducted in this report. This chapter is mainly to set the stage for the research presented in this report.

Chapter 2: Literature Review

This chapter presents a critical analysis of the Public Key Infrastructure (PKI). The overview throws light on the key components of PKI along with the benefits and constraints associated with its implementation. This is followed by a review of the five layers of the TCP/IP model. The purpose of the review of the TCP/IP model is mainly to provide an insight into the various levels of security implemented within the model prior to analysing the security components related to the application layer. The review of the application layer components is mainly focused on the technical elements associated with the implementation of the protocol and the authentication process, such as the algorithms and authentication methods. This review forms the basis for reviewing the application layer protocols in subsequent chapters, although protocol-specific components will be dealt with in their respective analyses.

Chapter 3: The TLS Protocol

In this chapter a comprehensive overview of the SSL/TLS protocol architecture is presented to the reader. This overview is followed by an assessment of the security implementation and the major weaknesses associated with the protocol architecture that form potential entry points for network hackers and attacks. The analysis also presents examples from the encryption algorithms and code samples from open-source SSL implementations showing how code-level network attacks can be conducted on the SSL/TLS architecture. The exploitation of the PKI set-up in terms of the CA and RA, which forms the basis for man-in-the-middle attacks, is also reviewed in this chapter in the light of TLS encryption and the transfer of information across the Internet between client and server. The chapter concludes with a review of client- and server-side attacks on the web-application environment, addressing the rising concerns about the weaknesses of SSL/TLS being exploited by hackers and eventually affecting electronic commerce transactions. The chapter also reviews the TLS weaknesses in the light of short public keys, 40-bit bulk encryption keys, anonymous servers, the authentication procedure, the authentication algorithms and the weaknesses associated with choosing one algorithm over another. The research also focuses on the cryptographic functions and their role in the security infrastructure implemented using the protocol.

Chapter 4: The SET protocol

This chapter, like chapter 3, commences with a comprehensive overview of the SET architecture and its implementation procedure in the electronic commerce environment. This is followed by a code-level analysis of the major areas of weakness in the protocol's encryption strategy and the main issues that leave room for hackers to decrypt or even alter the information. The chapter then examines the SET architecture's encryption weaknesses in terms of vulnerability to intrusion, spoofing, PKI implementation issues and use over the UDP protocol.

Chapter 5: Discussion and conclusion

This chapter commences with a discussion of the research conducted in chapters 3 and 4. The discussion aims to summarise the key weaknesses and the extent to which they can be overcome using security measures such as authentication algorithms and certificates. This is followed by a review of the research objectives in order to establish the consistency of the research conducted against the objectives set at the beginning of the report. The chapter then concludes the research and offers recommendations for further work on the topic.

Chapter 2: Literature Review
2.1: Security Trends

Todd and Johnson (2001) argue that the early Internet applications intended for electronic commerce and information sharing, although capable of delivering the desired service, lacked serious protection for the information being transferred and left stolen information open to abuse by unauthorised users for personal gain. This naturally made security a priority element affecting the growth of electronic commerce from the dawn of the Internet in the late twentieth century. Todd and Johnson (2001) further argue that with the increase in the availability of network access, security became a matter of creating a hardened outer wall, i.e. preventing unauthorised access to the information systems, rather than access control implemented on individual information systems exposed directly to the network.

Encryption is a term used extensively in data protection: securing information in transit from sender to receiver across a network, as well as securing information at rest on the server or client computer where it resides. Its importance has grown with the increase in security infringements caused by unauthorised users hacking into the communication channel, resulting in the loss of sensitive information (Burnett and Paine, 2001[3]). There are numerous methods of encrypting information to enable secure communication between receiver and sender over the Internet, two of which are particularly popular: symmetric and asymmetric cryptography (Burnett and Paine, 2001). The former is synonymous with the private key (secret key) encryption system, in which a single encryption key is shared between the communicating parties to encrypt and decrypt the information being transferred. Its major weakness is the threat of losing the private key, which, if discovered, renders the strategy ineffective, as it exposes the communication channel and the information being transferred to the hacker or intruder who has gained unauthorised access (Nash et al, 2001).
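To make the shared-secret idea above concrete, the following minimal sketch uses Python's cryptography package (an illustrative library choice, not one prescribed by the cited authors); it shows that whoever holds the single shared key can both encrypt and decrypt, which is exactly the exposure described when the key is lost.

# Symmetric (secret-key) encryption sketch; illustrative only.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()        # must be distributed to both parties
cipher = Fernet(shared_key)

token = cipher.encrypt(b"sensitive transaction details")
print(cipher.decrypt(token))              # anyone holding shared_key can do this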

2.2: Public Key Infrastructure – an overview

Asymmetric cryptography, mentioned in section 2.1, forms the basis of the Public Key Infrastructure (PKI). This is an encryption strategy involving a public and a private key: the public key is used for encryption by users in the public domain to send information to the server, which alone holds the private key needed to decrypt the information and authenticate the user (Todd and Johnson, 2001). PKI is a successful and widely trusted approach for enabling secure communication through Trusted Third Party authentication and approval of the overall data communication process.
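As a hedged illustration of the public/private split described above, the sketch below generates an RSA key pair with Python's cryptography package (again an illustrative choice, not a method specified by the cited authors): anyone may encrypt with the public key, but only the private-key holder, such as the bank's server, can decrypt.

# Asymmetric (public/private key) encryption sketch; illustrative only.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()     # may be published in a directory

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"account verification data", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)   # only the key holder can do this
assert plaintext == b"account verification data"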

The key components of a successful PKI are described as follows:

Certificate Authority (CA) – This is the controller for issuing the public key and the digital certificate, and for their verification whilst communication is being established between a sender and a receiver. The role of the CA is to generate the public and private keys for a given user and to issue and verify the digital certificate. The CA's effective operation is therefore a pivotal element for successful, secure communication between the server and the client in a PKI environment. The CA is typically a company, or group of companies, independent of the users/organisations involved in the communication, playing a Trusted Third Party role that enables security through independent verification of the digital certificates (Todd and Johnson, 2001).
Registration Authority (RA) – The RA acts as the verifier for the certificate authority before a digital certificate is issued to a requestor. This process is one of the key independent authorisation strategies deployed by the PKI and is both a security measure and a key weakness in the overall effectiveness of the PKI strategy itself. PKI as such is deployed as a methodology to enable a secure handshake that establishes an exclusive (or secure) communication channel between the sender and the receiver (Burnett and Paine, 2001). It is in this handshake that the CA and the RA play a pivotal role in verifying the validity of the communicating parties in order to complete the communication process. For instance, a credit card transaction over the Internet requires the bank, the card-issuing authority and the payment processing authorities to independently verify the identity of the buyer using the credit card details supplied. This is conducted using the PKI handshake process, where the public key provided by the vendor is accessed by the CAs and RAs to validate the transaction between the buyer and the vendor. In a real-world scenario, the CA and RA host separate servers holding the respective public keys that are generated.
Directories – These are the locations on the Internet domain where the public keys are held. Typically the public keys are held at more than one directory in order to enable quick access to the information as well as a double check on the key retrieved in terms of its validity and accuracy.
Certificate Management System (CMS) – This is the software application that controls or monitors the overall certificate issue and verification process. As this is a package, it varies from one authority to another depending upon the choice of the infrastructure by the certifying authority. So the CA and the RA that host the directory for the public keys and the digital certificates issued for the users using the keys are managed using a CMS.

The operation of PKI in a typical banking example is presented below to enable a better appreciation of the overall PKI concept.

In the credit card transaction described under the RA above, the CA issues a digital certificate for the details supplied by the card holder, encrypted using the public key provided by the vendor, which in turn is verified by the RA before being sent to the bank. The bank holds the private key, which is used to decrypt the information provided along with the certificate in order to validate the transaction. The acknowledgement from the bank or financing institution is then encrypted using the private key and sent back to the user, who can decrypt it using the public key in order to view the status. In the case of TCP/IP, this process is carried out by the application layer protocols, with the data encrypted using standards in line with those of the PKI to achieve the secure transaction process described above.

Yet another example that helps in appreciating PKI is the typical Internet banking service provided by banking institutions to their account holders. The account holder enters the verification information on the bank's Internet banking site; this is encrypted using the public key stored in the public directory by an approved CA and then sent to the bank, which decrypts the information for authentication and, on successful authorisation, allows the user to view the bank account. The subtle difference between authentication and authorisation is that the former is the process of establishing the connection, whilst the latter is the actual validation process dedicated to verifying the user within the established connection so that the user's specific information can be accessed (Nash et al, 2001).
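To illustrate the client's view of the certificate-based verification described in these examples, the short sketch below uses Python's standard ssl and socket modules to retrieve the certificate a server presents and print the issuer (the CA) and subject fields; the host name example.com is only a placeholder, not a site discussed in the sources.

# Retrieve and inspect the digital certificate presented by a server (sketch).
import socket
import ssl

context = ssl.create_default_context()      # uses the platform's trusted root CAs

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()             # parsed X.509 fields

print("issuer :", cert["issuer"])            # the certifying authority (CA)
print("subject:", cert["subject"])           # the party the CA vouches for
print("expires:", cert["notAfter"])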

The key security strategy is the sharing of the public key whilst the private key, generated simultaneously using the same algorithm, is retained, as argued by Burnett and Paine (2001). Because the private key is provided only to the requester, the requester (the bank, in the case of Internet banking) can not only verify the user but also authenticate the server to the client using the private key, providing room for establishing a secure communication channel for data communication. The security established using PKI is therefore predominantly dependent on the following key entities of the infrastructure:

CA and RA – The validity and reliability of the authorities involved is a critical aspect of the successful implementation of PKI. The client or user sending verification information from an unsecured computer depends entirely on the certifying authority to protect the information transferred. Hence an attack on the server hosting the directory and the public keys used for issuing the digital certificates would give the hacker a suite of opportunities to abuse sensitive information, from stealing information to mounting man-in-the-middle attacks that use the initial verification information to lure further information from the user. These are discussed further in subsequent chapters.
Encryption Algorithm – The encryption algorithm used for issuing the public and private keys is the second, and most critical, element in the effectiveness of the PKI. This is because the security is only as strong as the weakest point of the encryption algorithm, as argued by Nash et al (2001). The reliability and protection of the data transferred by protocols using PKI therefore face a single point of failure: the weakness of the encryption algorithm used by the issuing authority to generate the keys.

Benefits

The major benefits of the PKI include the following

Security due to verification by Trusted Third Parties (TTPs), in the form of the CAs and RAs, which issue and verify the digital certificates.
Continuous development of the algorithms generating the public and private keys means that any weakness found in an existing algorithm can be fixed in the next version being developed. The exponential rate at which electronic commerce is growing has made PKI a popular and reliable authentication process offered by established vendors such as Verisign (Nash et al, 2001).
The security infrastructure associated with the storage of the public keys and the issue of the digital certificates by the CA and RA makes the verification process secure, owing to the presence of independent verification authorities apart from the CA. This naturally limits attacks, as failure to pass authorisation at the RA terminates the connection or prevents further communication with the target computer.

Constraints, Weaknesses and Threats

The involvement of TTPs increases the costs associated with infrastructure set-up and maintenance (Todd and Johnson, 2001). This naturally affects the overall development and the continuous security verification process, as the verification authorities face high ongoing costs for security measures covering both storage and communication.
The encryption applied by the communication protocols does not protect against communication interference; changes to header contents through monitoring of the network traffic are therefore possible, resulting in network attacks on the client computer. The man-in-the-middle attack mentioned earlier is a classic example: the ability to mask the header information on the data packets enables the hacker to mislead the Internet user into revealing sensitive information without the user realising that he/she is actually communicating with the hacker rather than the vendor or intended provider. This is dealt with at the encryption and algorithm level in chapters 3 and 4.
The weakness of the encryption algorithm used for generating the keys and the digital certificates is yet another issue that threatens the security enforced by PKI. The encryption applied when issuing the digital certificate for authentication protects only the data itself, so destination and source details can be altered by hackers to spoof the parties involved into divulging sensitive information. As the weakness of an encryption algorithm largely reflects developments in the hacking methods used to penetrate it, continuous research and development is needed to ensure that the algorithm implemented remains sufficiently secure, which necessitates a commitment of funds and resources.
The authentication algorithm used by the CA and the RA is yet another area of weakness affecting the security infrastructure implemented using PKI. The authentication algorithm is not merely the encryption algorithm for generating the keys and issuing the digital certificates; it also governs the process of authenticating the CA and RA to the server computer of the vendor or receiver with the details of the user or client. This naturally gives hackers room to attack the data communication at the authentication level, if not directly at the encryption algorithm used for key generation (Nash et al, 2001).
The fact that PKI can be implemented successfully only within the TCP/IP model makes it a weak, or at least limited, security strategy for protocols that are not supported at the TCP/IP application layer. PKI is thus limited to the relatively few application layer protocols of the TCP/IP stack when it comes to enabling secure data communication.
2.3: TCP/IP Model

Blank (2004) (p1)[4] argues that ‘TCP/ IP is a set of rules that defines how two computers address each other and send data to each other’. This makes it clear that the TCP/IP is merely the communication framework that dictates the methods to be deployed in order to achieve secure communication between two computers. Rayns et al (2003)[5] further argue that the use of TCP/IP in the network communication is mainly due to the platform independence of the framework and the room for development of new protocols and encryption methodologies in each of the five layers of the TCP/IP model. TCP/IP forms the standard for a protocol stack that can enable secure communication through enabling multiple protocols to work together within the TCP/IP framework. This approach is the primary architectural feature of the TCP/IP standard that makes it popular due to the fact that security can be implemented at multiple levels of the communication stack through introducing protocols at each layer of the TCP/IP model (Rayns et al, 2003). An overview on the layers of the TCP/IP model and the various elements of security implemented are presented below.

The five layers of the TCP/IP model are

Application Layer
Transport Layer
Network Layer
Data Link Layer and
Physical Layer.

The stack of the layers in the TCP/IP model and the key protocols that are normally used in these layers of the communication framework of the protocol suite is shown in fig 1 below.

From fig 1 it is evident that the overall TCP/IP implementation in a given network can be established using any number of protocols to enable security and speed of data transfer between computers. The reader must also note that the protocols mentioned in each layer shown in Fig 1 are merely a sample of the overall protocol suite as the number of protocols in each layer is extensive in nature with specific application purpose as well as interoperable and scalable properties as argued by Rayns et al (2003).

Blank (2004) further argues that the layers are arranged logically: protocols closer to the top, such as HTTP, SSL, BOOTP and SMTP, relate to the user application and prepare (and, where applicable, encrypt) the data that forms the payload of the packets transferred by the TCP/IP stack, whilst those in the bottom layers, such as ARP and Ethernet, provide the actual procedures for authentication and for establishing connections between the computers in the network. The application developer can therefore quickly identify the protocol appropriate to his/her communication purpose at the desired level of data granularity.

This level of abstraction also gives the user the ability to isolate the process of encrypting and securing the data from the actual process of transferring information from computer to computer. Effective implementation of protocols to encrypt data and to enable secure information transfer between computers is therefore possible by choosing the right combination of protocols from each layer to form the protocol stack of the TCP/IP suite (Blank, 2004). Each of the five layers is discussed in detail below.

Application Layer – This is the topmost layer of the TCP/IP stack and provides user applications with a suite of protocols for encrypting and communicating information from one computer to another. The application layer is also the level at which the web applications and the business logic associated with the data transfer are incorporated prior to encryption. From fig 1 it can be seen that the SSL/TLS and Secure Electronic Transaction (SET) protocols are not shown at the application layer. This is because these protocols are not tied to the encryption of a particular user application's information or to a dedicated client interface; they are independent protocols that encrypt the information being sent by one of the application layer protocols. Their position is therefore effectively between the Application Layer and the Transport Layer of the TCP/IP stack, where encryption of the data is completed before the information is handed to one of the transport layer protocols. Because security protocols such as SSL and TLS encrypt the data being communicated prior to transport by the appropriate transport layer protocol, they are classified as application layer protocols (Blank, 2004). The role of the application layer in the TCP/IP stack is thus to enable interaction between the front end, or user interface, of the applications on the client computer and the transfer of information from one computer to another in a given application. Hence one can argue that application layer protocols are used predominantly in client-server communication applications where data is transferred between the client and the server in full-duplex mode (Feit, 1998[6]).
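A brief, hedged sketch of the layering just described: the application-layer payload (here a plain HTTP request) is encrypted by TLS before being handed to the TCP transport layer. Python's standard socket and ssl modules are used purely for illustration, and example.com is a placeholder host rather than anything named in the sources.

# Application data -> TLS encryption -> TCP transport (illustrative sketch).
import socket
import ssl

request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as tcp_sock:   # transport layer
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(request)            # TLS encrypts the application payload
        reply = tls_sock.recv(4096)          # and decrypts the server's response

print(reply.split(b"\r\n", 1)[0])            # e.g. the HTTP status line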

Transport Layer – The transport layer provides end-to-end message transfer capabilities for the network infrastructure, independent of the underlying network, alongside error tracking, data fragmentation and flow control, as argued by Feit (1998). It is in the transport layer that the header information for the packet, i.e. the details of the fragment of the overall information being sent from the source computer to the receiver, is added. The header therefore contains the details of the packet, such as its position in the overall data sequence and the source and target addresses, enabling the network routers to deliver the packet to the appropriate destination computer in the network.

The two major classifications of the transport layer, in terms of transmission of information and connectivity, include:

Connection-Oriented Implementation – This is accomplished using TCP (Transmission Control Protocol), where a connection must be established between the two communicating computers, in conformance with the authentication and association rules, before data transfer is enabled (a minimal socket-level sketch of this behaviour is given after the list below). Feit (1998) further argues that data transfer in a connection-oriented implementation completes successfully only while the established connection is live and active between the two computers. A connection must therefore be established using sessions, so that security can be ensured by terminating the session in case of user inactivity and by providing authentication facilities to the desired security level. The implementation of PKI is one of the security strategies accomplished using the connection-oriented facilities of the transport layer in order to enable secure communication between the client and the server. The header of each packet must therefore contain details of the session in order to ensure that the transmission is indeed part of the established reliable byte stream.

The key security-related aspects of the connection-oriented approach include:

Sequential data transfer – the data received by the target computer arrives in the same order in which it was transmitted. This also means that implementing a connection-oriented strategy for large data transfers can hamper performance in terms of speed and session time-out issues.
Higher level of error control – The connection-oriented approach ensures that there is a live communication channel between sender and receiver throughout the transmission process, controlling the loss of packets or data segments and thus minimising the error level in the data being transferred.
Duplication Control – Duplicate data is discarded and also controlled to a minimal level due to the synchronous data transfer methodology implemented by the process.
Congestion Control – Network traffic is monitored effectively by the TCP protocol as part of the transport layer tasks, thus ensuring that congestion across the network is kept under control during data transfer.
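As flagged above, the following minimal sketch (Python's standard socket module, with the port chosen by the operating system) shows the connection-oriented behaviour in practice: listen()/accept() and create_connection() must complete the TCP handshake before any ordered, error-checked bytes are exchanged.

# Connection-oriented (TCP) sketch: a connection must be established before data flows.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                   # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once() -> None:
    conn, _addr = srv.accept()               # blocks until the client connects
    with conn:
        conn.sendall(conn.recv(1024))        # echo the bytes back, in order

threading.Thread(target=echo_once, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:   # TCP handshake
    client.sendall(b"sequenced, error-checked payload")
    print(client.recv(1024))                 # delivered in the order it was sent

srv.close()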

Annotated Bibliography: Automated Brain Tumor Detection

V. Zeljkovic et al. (2014) [1] proposed a computer-aided method for automated brain tumor detection in MRI images. The technique enables segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation. The results show 93.33% accuracy on abnormal images and total accuracy on healthy brain MR images. The technique also provides information about the tumor's specific location and records its shape. As a result, this assistive technique improves diagnostic efficiency and reduces the chance of human error and misdiagnosis.

S. Ghanavati et al. (2012) [2] developed a recent multi-modality framework for automated tumor detection, fusing different Magnetic Resonance Imaging sequences including T1-weighted, T2-weighted, and T1 with gadolinium contrast agent. Intensity, shape deformation, symmetry, and texture features were extracted from each image.

H. Yang et al. (2013) [3] experimented with many segmentation strategies; no single approach can segment all brain tumor data sets. Clustering and classification approaches are very sensitive to their initial parameters. Some clustering strategies are point operations and do not maintain connectivity among regions. Training data and the appearance of the tumor strongly affect the results of atlas-based segmentation, and the edge-based deformable contour model is sensitive to the initialisation of the evolving curve as well as to noise.

H. Kaur et al. (2014) [4] focused on brain tumor detection strategies. Brain tumor detection is an essential vision application in the medical field. The work first presents an evaluation of a variety of well-known strategies for automated segmentation of heterogeneous image data, as a step towards bridging the gap between bottom-up affinity-based segmentation techniques and top-down generative-model-based strategies. The key purpose of the work is to investigate a variety of ways to detect brain tumors efficiently. It was found that most existing techniques ignore poor-quality images, such as images with noise or poor brightness, and that many techniques targeting tumor detection neglect object-based segmentation. To overcome the limits of previous work, a new strategy is offered in this research work.

I. Maiti et al. (2012) [5] proposed a new method for brain tumor detection in which the watershed method is used in combination with edge detection. It is a colour-based brain tumor detection method using colour brain MRI images in the HSV colour space. The RGB image is converted into an HSV image, in which the image is split into hue, saturation and intensity components. After contrast enhancement, the watershed algorithm is applied to each component, a Canny edge detector is applied to the result, and the three images are combined to obtain the final segmented brain tumor image.

M. S. R. et al. (2014) [6] combined segmentation and k-means clustering for the improved evaluation of MR images. The results suggest that unsupervised segmentation techniques perform better than supervised ones. Supervised segmentation methods require pre-processing of the images as well as training and testing data, which significantly complicates the process, whereas the well-known k-means clustering process is straightforward in comparison with the fuzzy clustering techniques used.
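As a hedged illustration of the intensity-based k-means approach summarised above (not the cited authors' code), the sketch below clusters the pixel intensities of a single MR slice into a small number of tissue classes with scikit-learn; the file name and the choice of four clusters are assumptions for illustration.

# Intensity-based k-means segmentation sketch (illustrative placeholders throughout).
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

image = io.imread("brain_slice.png", as_gray=True)       # 2-D intensity array
pixels = image.reshape(-1, 1)                             # one feature: intensity

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(image.shape)              # per-pixel tissue class

# The brightest cluster is often inspected as a candidate abnormality region.
brightest = int(np.argmax(kmeans.cluster_centers_))
candidate_mask = labels == brightest
print("candidate region pixels:", int(candidate_mask.sum()))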

H. Aejaz Aslam et al. (2013) [7] suggested a new approach to image segmentation using the Pillar k-means algorithm. The system applies a k-means algorithm optimised by the Pillar algorithm, which positions the initial centroids as far from each other as possible, much as pillars are placed to withstand a distributed load, with the number of pillars matching the number of centroids in the data distribution. This algorithm optimises k-means clustering for image segmentation in terms of precision and computation time.

A. Al-Badarneh et al. (2012) [8] proposed an automatic classification method for tumors in MRI images. Automatic classification of MRI images demands high accuracy, since inaccurate or delayed diagnosis increases the prevalence of more serious conditions. The work demonstrates the performance of neural network (NN) and k-nearest neighbour (k-NN) algorithms on brain tumors; the results show that the technique achieves 100% classification accuracy using k-NN and 98.92% using NN.
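A hedged sketch of the k-NN classification step described in [8], written with scikit-learn on synthetic feature vectors; the features and labels here are placeholders rather than the authors' data.

# k-NN classification sketch on synthetic features (placeholders, not the data in [8]).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 6))                          # pretend intensity/texture statistics
labels = (features[:, 0] + features[:, 1] > 0).astype(int)    # 0 = normal, 1 = tumor

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))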

K. Sharma et al. (2014) [9] discussed magnetic resonance imaging as an important imaging technique used in the detection of brain tumors. A brain tumor is one of the most harmful diseases occurring among many people, and brain MRI plays an essential role in helping radiologists detect and treat brain tumor patients. Analysis of the medical image by the radiologist is a difficult process whose accuracy depends on his or her experience; computer-aided techniques therefore become very important, as they overcome these limitations. Many automated methods have been proposed, but automating this process is extremely challenging because of the varied appearance of tumors among different patients. There are many feature extraction and classification methods used for the detection of brain tumors from MRI images.

S. Roy et al. (2013) [10] reviewed several recent brain tumor segmentation and diagnosis methods for brain MRI. MRI is an advanced medical imaging method providing rich information about human soft-tissue structure. There are different methods for diagnosing and segmenting a brain tumor from MR images. These detection and segmentation strategies are evaluated with emphasis on their informative advantages and drawbacks, and the use of MRI image detection and segmentation in different techniques is described.

Natarajan et al. (2012) [11] proposed a brain tumor detection method for MRI brain images. The MRI brain images are first pre-processed using a median filter, the image is then segmented using threshold segmentation, and morphological operations are applied to obtain the tumor region. This method recovers the accurate shape of the tumor within the MRI brain image.
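The pipeline summarised above (median filtering, threshold segmentation, then morphological operations) can be sketched as follows with OpenCV; this is an illustrative reconstruction under assumed file names and kernel sizes, not the authors' implementation.

# Median filter -> Otsu threshold -> morphological clean-up (illustrative sketch).
import cv2
import numpy as np

image = cv2.imread("brain_slice.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

denoised = cv2.medianBlur(image, 5)                           # suppress impulse noise
_, mask = cv2.threshold(denoised, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # global threshold

kernel = np.ones((5, 5), np.uint8)
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)           # drop small speckles
tumor_region = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes

cv2.imwrite("tumor_mask.png", tumor_region)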

Manoj et al. (2012) [12] explained that information about tumor size plays a critical role in the treatment of malignant tumors. Manual segmentation of brain tumors from Magnetic Resonance images is a challenging and time-consuming task. Their method detects tumors in the brain through segmentation and histogram thresholding. The proposed process can be used efficiently to identify the contour of the tumor and its geometrical description, and can be a helpful tool for experts, especially doctors working in this field.

Roopali et al. (2014) [13] discussed a segmentation strategy based on threshold segmentation, watershed segmentation and morphological operators. The proposed method was tested on MRI-scanned images of human brains in order to locate tumors: samples of human brains were scanned using MRI and processed through the segmentation methods, giving efficient end results.

Nisha et al. (2014) [14] described a method aimed at the automatic detection and classification of brain tumors as benign or malignant. The reported performance of the system is 96%. The method concentrates on the segmentation of MRI and assists the automatic detection of brain tumors using the level set method, with classification of the tumor as benign or malignant performed by an artificial neural network.

Kanimozhi and Dhanalakshmi (2013) [15] described a basic algorithm for detecting the type and outline of a tumor in brain MR images. Usually, a CT scan or MRI directed at the intracranial cavity produces a complete image of the brain, which is visually examined by a specialist for identification and assessment of the brain tumor. To avoid this, a computer-aided technique segments (detects) the brain tumor on the basis of two algorithms, allowing segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation while also reducing examination time. At the end of the process the tumor is extracted from the MR image and its exact location and outline are identified; the extent of the tumor is indicated by the amount of region determined in the cluster.

Njeh, Ines et al. (2014) [16] investigated a fast distribution-matching, data-driven algorithm for 3D multimodal MRI brain glioma tumor and edema segmentation across several modalities. They learned non-parametric model distributions that characterise the normal regions in the current image, and then stated their segmentation problems as the optimisation of several cost functions of the same form, each containing two terms: a distribution-matching prior, which evaluates a global similarity between distributions, and a smoothness prior that prevents the occurrence of small, isolated regions in the solution. Using recent bound-relaxation results, the optima of the cost functions yield the complement of the tumor region and the edema region in nearly real time. Because it is based on global rather than pixel-wise information, the proposed algorithm does not need to be trained on a large, manually segmented training set, as is the case with state-of-the-art methods, so the results are independent of the choice of training set. Quantitative evaluations on the publicly available training and testing data sets from the MICCAI multimodal brain tumor segmentation challenge (BraTS 2012) demonstrated that the algorithm achieves a highly competitive performance for complete edema and tumor segmentation among nine existing methods, with a desirable computing time (less than 0.5 s per image).


Roy, Sudipta et al. (2013) [17] noted that tumor segmentation from magnetic resonance imaging (MRI) data is an essential but difficult manual task performed by medical professionals. Automating this procedure is challenging due to the high diversity in the appearance of tumor tissue between patients and, in many cases, its similarity to normal tissue. MRI is an advanced medical imaging technique providing rich information about human tissue anatomy. There are various techniques for detecting and segmenting a brain tumor from MRI images; these detection and segmentation methods are evaluated with emphasis on their advantages and drawbacks, and the use of MRI image detection and segmentation in several methods is explained. The article gives a brief overview of various segmentation methods for detecting brain tumors in brain MRI.

Sapra, Pankaj et al. [18] described and compared techniques for the automated detection of brain tumors from Magnetic Resonance images (MRI) used at various stages of a Computer-Aided Detection (CAD) process. Brain image classification approaches are studied; existing strategies are broadly divided into region-based and contour-based methods, usually focused on fully enhanced tumors or specific kinds of tumors. In this paper, modified image segmentation approaches were applied to MRI scan images in order to detect brain tumors, and a modified Probabilistic Neural Network (PNN) model based on learning vector quantization (LVQ), together with image and data analysis and processing approaches, was proposed to carry out automated brain tumor classification using MRI scans. The performance of the modified PNN classifier was measured in terms of training performance, classification accuracy and computational time. The simulation results showed that the modified PNN provides fast and accurate classification compared with the image processing and previously published conventional PNN approaches, outperforming the corresponding PNN process and handling brain tumor classification in MRI images with 100% reported accuracy.

Harati, Vida et al. (2011) [19] proposed an improved fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. The algorithm is independent of the tumor type in terms of pixel intensity. Tumor segmentation evaluation results based on similarity criteria show better performance of the proposed approach compared with common methods, especially on MR images with low-contrast tumor areas. Thus, the recommended technique is suitable for improving automated estimation of tumor size and position in brain tissue, which supports more accurate planning of surgery, chemotherapy and radiotherapy.

CONCLUSION AND FUTURE WORK

Brain tumor detection is a critical application of medical image processing. The literature survey has shown that most existing methods ignore poor-quality images, such as noisy images or images with poor brightness, and that much of the presented work on tumor detection ignores object-based segmentation. The general goal of this research is therefore to detect brain tumors efficiently using object detection and a roundness metric. This work first presented an evaluation of various well-known approaches to automatic segmentation of heterogeneous image data, as a step towards bridging the gap between bottom-up affinity-based segmentation approaches and top-down generative-model-based techniques; its key contribution is to explore various approaches to detecting brain tumors in an efficient way. To overcome the limitations of earlier work, this research has proposed a new object-based brain tumor detection method combined with decision-based median filtering. The design and implementation of the proposed algorithm was carried out in MATLAB using the Image Processing Toolbox. The evaluation showed that the proposed method achieved around 94% accuracy, compared with 78% for the neural-network-based technique, and that for highly corrupted noisy images the proposed method gave relatively effective results, whereas the neural-network-based detection fails in some such cases. In the near future we shall propose an improved brain tumor detection approach that further increases the accuracy of tumor detection using fuzzy-neuron-based image segmentation, and the proposed technique will also be extended to breast cancer and skin detection.
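Since the conclusion relies on a roundness metric, a hedged sketch of how such a metric is commonly computed (4*pi*area / perimeter^2 for each detected object) is given below in Python/OpenCV as a stand-in for the MATLAB implementation referred to above; the file name, the OpenCV 4.x return convention and the cut-off values are assumptions for illustration.

# Roundness metric sketch over a binary tumor mask (illustrative, not the MATLAB code).
import math
import cv2

mask = cv2.imread("tumor_mask.png", cv2.IMREAD_GRAYSCALE)     # placeholder path
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, contour in enumerate(contours):
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0:
        continue
    roundness = 4.0 * math.pi * area / (perimeter ** 2)       # 1.0 for a perfect circle
    if roundness > 0.7 and area > 100:                        # assumed cut-offs
        print(f"object {i}: area={area:.0f}, roundness={roundness:.2f} -> tumor candidate")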