Post-War Technological Advances | Essay

By the summer of 1945, Hitler was dead and the war in Europe was over. Japanese forces were being driven back across the Asian territories under their occupation and were determined to defend their homeland to the last man. Japanese kamikaze attacks and the costly battle of Okinawa had driven home the message that a military invasion of Japan would be very dear in terms of human life and could take months to achieve. Official estimates pegged likely Allied casualties at between 1.4 and 4 million soldiers. The Japanese were obdurate in their decision not to surrender.

On August 6 and 9, 1945, the Americans revealed the potential of their weapons technology. Two atom bombs, "Little Boy" and "Fat Man", were dropped on the Japanese cities of Hiroshima and Nagasaki. The Allies did not need to negotiate any further: Emperor Hirohito announced Japan's surrender within weeks. The episode, however ghastly, drives home as nothing else does the tremendous ability of technological innovation to increase bargaining power.

The post-war period has seen the emergence of stunning technological innovations across diverse areas of science and technology. Many of these have arisen in weaponry and space science and have effected major changes in power centres and national equations on a global scale. Technological innovations in other areas have given rise to a slew of products, created billions of pounds worth of assets, shaped huge corporations and generated massive economic empires. The names of Sony, Microsoft, Apple, Google and Nokia, to name but a few, flash through the mental landscape when the issue of innovation comes up.

Bargaining power, while often associated with unionism, is more specifically a tool to enhance control over or influence economic decisions such as "the setting of prices or wages, or to restrict the amount of production, sales, or employment, or the quality of a good or a service; and, in the case of monopoly, the ability to exclude competitors from the market." (Power, 2006)

Technical innovations have been principal drivers of change in human society since prehistory and have often created huge economic advantages for their creators or owners. The principal reason for this is exclusivity: the owner of the innovation is the sole possessor of a particular technological item that can be used to achieve significant economic returns.

This exclusivity also gives the owners sharply increased bargaining power through access to a technology outside the reach of others and meant for the possessor's sole discretionary use. The owners of the innovation are able to use this bargaining power in various ways, which include speed to market, first-mover advantage, the setting of prices, the fixing of credit terms, the negotiation of contracts, the asking of advances, obtaining supplier credit, accessing venture capital or institutional funds and organising alliances with large corporates. The ability to innovate technologically has on many occasions given its owner enormous economic clout and led to the formation of giant corporations. It has proven to be a great leveller in the marketplace: witness the rocket-like growth of Microsoft and Google and the slow decline of numerous economic giants that have not been able to come up with anything new or worthwhile.

When discussing the bargaining power of technological innovation it is appropriate to refer to Intel Corp and the manner in which it used its technological knowledge of chips to strike highly favourable contracts with IBM and other PC manufacturers, thereby transforming itself from a small start-up into a successful and respected corporation with an international footprint.

Jane Katz, in a 1996 article for Regional Review called "To Market, to Market", elaborates on the Intel story. IBM, at one time far behind Apple in the PC race, entered into alliances with Intel and Microsoft for microprocessors and operating systems, and also took the decision to adopt an open architecture, allowing other firms to develop compatible products and avoiding possible antitrust issues. Intel at that time was an untested company, and IBM, concerned that Intel might be unable to meet its supply commitments, forced Intel to give up its right to license to others in order to supply Big Blue. PC sales did very well and Intel grew fast. This success let Intel quickly develop the next generation of chips, and with the number of new players having grown rapidly thanks to IBM's open-architecture policy, Intel's bargaining power grew significantly with all PC makers.

Thus, the balance of power shifted. When it came time to produce the 286 generation of chips, Intel was able to limit licensing to five companies and retain a 75 percent market share. For the 386 chip and beyond, Intel regained most of its monopoly, granting a single license to IBM, good only for internal use. The market for PCs grew, and Intel became fixed as the industry standard. Ultimately, IBM turned to Apple and Motorola in a belated and still struggling effort to create a competitor to Intel chips, the Power PC. (Katz, 1996)

Technological innovation, of course, places very significant power in the hands of its owners. It needs to be remembered, however, that an innovation is no more than another valuable possession, comparable to significant capital, excellent technical skills or valuable confidential information. It takes great commercial acumen, business foresight and knowledge of human psychology to convert this asset into an effective bargaining tool for obtaining a competitive edge or significant economic benefits. All too often it is squandered because of inadequate knowledge of law or business, and it is left to others to pick up the pieces and enjoy the benefits.

In most cases, innovation is not one huge, tremor-causing development. It is a series of small innovations in the technological development of a product that at some stage results in the emergence of a product sharply differentiated from the others available in the marketplace; a product impossible to emulate or match in the immediate future. A truly innovative technological development is one that makes a giant leap in the benefits-to-cost ratio in some field of human enterprise. It is this quality that sets up the platform for the emergence of real bargaining power.

Another way of putting this is that an innovation lowers the costs and/or increases the benefits of a task. A wildly successful innovation increases the benefits-to-costs ratio to such an extent that it enables you to do something it seemed you couldn’t do at all before or didn’t even know you wanted to do. Think of the following examples in these terms: the printing press, the camera, the telephone, the car, the airplane, the television, the computer, the electrostatic copier, the Macintosh, Federal Express, email, fax and finally the web. (Yost, 1995)

This power conferred by technological innovation is used by different people in diverse ways. It often comes the way of young and brilliant techies who decide to sell, using their bargaining power to get the best possible price for their product from the available bidders. Sabeer Bhatia and Jack Smith launched Hotmail, a free web-based email service accessible from anywhere in the world and designed specifically to give users freedom from restrictive ISPs. The service notched up subscribers rapidly, and soon after Bhatia secured venture capital backing he got a summons from the office of Bill Gates.

When he was only 28, Sabeer Bhatia got the call every Silicon Valley entrepreneur dreams of: Bill Gates wants to buy your company. Bhatia was ushered in. Bill liked his firm. He hoped they could work together. He wished him well. Bhatia was ushered out. “Next thing is we’re taken into a conference room where there are 12 Microsoft negotiators,” Bhatia recalls. “Very intimidating.” Microsoft’s determined dozen put an offer on the table: $160 million. Take it or leave it. Bhatia played it cool. “I’ll get back to you,” he said. Eighteen months later Sabeer Bhatia has taken his place among San Francisco’s ultra-rich. He recently purchased a $2-million apartment in rarified Pacific Heights. Ten floors below, the city slopes away in all directions. The Golden Gate Bridge, and beyond it the Pacific, lie on the horizon. A month after Bhatia walked away from the table, Microsoft ponied up $400 million for his startup. Today Hotmail, the ubiquitous Web-based e-mail service, boasts 50 million subscribers – one quarter of all Internet users. Bhatia is worth $200 million. (Whitmore, 2001)

Sometimes technological innovation gives a person the power to refuse 160 million dollars, confident in the knowledge that he will be able to bargain for more!

While many individual developers and smaller companies choose to take Bhatia's route, preferring to cash the cheque first, others go for more, developing the product and trying to take it to its full economic potential. The biggest hurdle to the exclusivity of a product comes from clandestine copying, as Microsoft and the drug majors have found in South East Asia and China. Rampant piracy and copyright breach lead to a situation where copies of the latest software and drugs are available within weeks of their release in the market.

While this problem is being addressed at the national level, with both India and China beginning to take stringent action on IPR protection, the lesson to be learnt is that the bargaining power of a technological development will vanish if its exclusivity cannot be maintained. While retaining all of its excellence and its potential to effect change and bring about improvement, a technological investment loses all of its economic advantage and bargaining power the moment it loses its exclusivity. Humanity gets served, possibly even at a lower price, but the creator, whether individual or organisation, ends up unrewarded and short-changed for all the sacrifice, talent, expenditure and effort incurred in the development of the product or service.

It thus becomes critical to protect the exclusivity of the innovation if it is to be used for economic advantage. This is generally done in various ways, an important route being to keep working at further innovations to add value and to ensure that a significant differentiation always exists between the product and similar products in the marketplace. Microsoft and Google are excellent examples of this approach, where continuous R&D efforts create a slew of features that are difficult to emulate and thereby continue to provide the bargaining edge.

In conclusion, the importance of hard-nosed business acumen in protecting a technological innovation needs to be stressed. Measures for this include adequate security to protect the product or service from espionage and cloning, sufficient care in licensing and similar arrangements, and the adoption of the necessary business and commercial safeguards for appropriate trademark, copyright, patent and other IPR protection.

References

Katz, J. (1996). To Market, to Market. Regional Review. Retrieved September 28, 2006, from www.bos.frb.org/economic/nerr/rr1996/fall/katz96_4.htm

Power. (2006). Wikipedia. Retrieved September 28, 2006, from en.wikipedia.org/wiki/Power

Whitmore, S. (2001). Driving Ambition. Asiaweek.com. Retrieved September 28, 2006, from www.asiaweek.com/asiaweek/technology/990625/bhatia.html

Yost, D. A. (1995). What Is Innovation. Retrieved September 28, 2006, from yost.com/misc/innovation.html

Operating systems in Nokia phones

Introduction:

An operating system acts as an interface between the user and the hardware. A mobile operating system, also known as a mobile OS or a handheld operating system, controls a mobile device. It works on the same principle as a desktop operating system such as Windows, which controls desktop computers; however, mobile operating systems are generally simpler than desktop operating systems.

Various operating systems used in smartphones include:
Symbian OS
iPhone OS
RIM's BlackBerry OS
Linux
Palm webOS
Android
Windows Mobile
The various operating systems, along with their details, are:
1) Symbian OS: The Symbian operating system is designed for mobile devices, with associated libraries, a user interface and a framework.

It is used in various phone models; around 100 models use it. It consists of the kernel and middleware components of the software stack. The upper layers are supplied by application platforms such as S60, UIQ and MOAP.

Figure: the Nokia N92, which runs Symbian OS.

Reasons for designing Symbian OS:

To ensure the integrity and security of user data,
To make efficient use of the user's time,
To treat all resources as scarce.
Designing of Symbian OS:

It uses a microkernel, which has a request-and-callback approach to services. It maintains a separation between the user interface and the engine. Model-view-controller is the object-oriented design pattern used by applications and the OS. The OS is optimised for low-power, battery-based devices and for ROM-based systems.

The Symbian kernel supports sufficiently-fast real time response to build a single-core phone around it—that is, a phone in which a single processor core executes both the user applications and the signaling stack.

Structure of Symbian model:
UI Framework Layer
Application services layer
Java ME
OS services layer
Generic OS services
Communication services
Multimedia and graphics services
Connectivity services
Base services layer
Kernel services and hardware interface layer.

It uses a microkernel architecture, i.e., it includes only the necessary parts in order to maximise robustness, responsiveness and availability. The kernel contains the scheduler, memory management and device drivers. Symbian is designed to emphasise compatibility with other devices, especially removable-media file systems.

There is a large networking and communication subsystem, which has three main servers: ETEL (EPOC telephony), ESOCK (EPOC sockets) and C32, which is responsible for serial communication. Each of these has a plug-in scheme. All native Symbian C++ applications are built up from three framework classes defined by the application architecture: an application class, a document class and an application user interface class. These classes create the fundamental application behaviour.
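To make this three-class pattern concrete, here is a minimal sketch in Python of how an application, document and application UI class might collaborate. It is purely illustrative: the class names and the placeholder UID are stand-ins, not the actual Symbian C++ framework classes, which in real S60/UIQ code are C++ classes derived from the platform's application framework.

```python
# Illustrative sketch only: this mirrors the three-class Symbian application
# pattern (application / document / application UI) in plain Python. The class
# names are hypothetical stand-ins, not the real Symbian C++ framework classes.

class Document:
    """Holds the application's persistent data (the 'model')."""
    def __init__(self):
        self.data = {}

    def create_app_ui(self):
        # The document is responsible for creating the application UI.
        return AppUi(self)


class AppUi:
    """Handles user interaction and dispatches commands (the 'controller/view')."""
    def __init__(self, document):
        self.document = document

    def handle_command(self, command):
        print(f"Handling command: {command}")


class Application:
    """Entry point: identifies the application and creates its document."""
    uid = 0x0FFFFFFF  # placeholder UID, analogous to a Symbian application UID

    def create_document(self):
        return Document()


if __name__ == "__main__":
    app = Application()
    doc = app.create_document()
    ui = doc.create_app_ui()
    ui.handle_command("open")
```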

Symbian includes a reference user-interface called “TechView”. It provides a basis for starting customization and is the environment in which much Symbian test and example code runs.

Versions of Symbian OS:
Symbian OS v6.0 and 6.1
Symbian OS 7.0 and 7.0s
Symbian OS 8.0
Symbian OS 8.1
Symbian OS 9.0
Symbian OS 9.1
Symbian OS 9.2
Symbian OS 9.3
Symbian OS 9.4
Symbian OS 9.5
2) iPhone OS:

The iPhone is an Internet- and multimedia-enabled mobile phone designed by Apple Inc. It functions as a camera phone, a portable media player and an Internet client.

iPhone OS is the operating system that runs on the iPhone. It is based on the same Darwin operating system used in Mac OS X and is responsible for the interface's motion graphics. The operating system takes up less than half a gigabyte of the device's total storage (4 to 32 GB). It is capable of supporting bundled and future applications from Apple, as well as from third-party developers. Software applications cannot be copied directly from Mac OS X but must be written and compiled specifically for iPhone OS.

Like the iPod, the iPhone is managed with iTunes. The earliest versions of iPhone OS required iTunes 7.3 or later, which is compatible with Mac OS X version 10.4.10 Tiger or later and with 32-bit Windows XP or Vista. The release of iTunes 7.6 expanded this support to include 64-bit versions of XP and Vista, and a workaround has been discovered for earlier 64-bit Windows operating systems. Apple provides free updates to iPhone OS through iTunes, and major updates have historically accompanied new models. Such updates often require a newer version of iTunes (for example, the 3.0 update requires iTunes 8.2), but the iTunes system requirements have stayed the same. Updates include both security patches and new features; for example, iPhone 3G users initially experienced dropped calls until an update was issued.

3) Android OS:

Android is a mobile operating system running on the Linux kernel. It allows developers to write managed code in the Java language, controlling the device via Google-developed Java libraries.

The Android distribution was unveiled on 5 November 2007 with the founding of the Open Handset Alliance, a consortium of 47 hardware, software and telecom companies devoted to advancing open standards for mobile devices.

4) Palm webOS:

It is a mobile operating system running on the Linux kernel with proprietary components developed by Palm.

The Palm Pre smartphone was the first device to launch with webOS, and both were introduced to the public at the Consumer Electronics Show. The Palm Pre and webOS were released on June 6, 2009. The second device to use the operating system, the Palm Pixi, was released on November 15, 2009. webOS features significant online social network and Web 2.0 integration.

Features:

webOS's graphical user interface is designed for use on devices with touch screens. It includes a suite of applications for personal information management and makes use of a number of web technologies such as HTML5, JavaScript and CSS. Palm claims that designing around these existing technologies was intended to spare developers from learning a new programming language. The Palm Pre, released on June 6, 2009, was the first device to run this platform.

5) RIM's BlackBerry OS:

RIM provides a proprietary multi-tasking operating system (OS) for the BlackBerry, which makes heavy use of the device's specialised input devices, particularly the scroll wheel or, more recently, the trackball and trackpad. The OS provides support for Java MIDP 1.0 and WAP 1.2. Previous versions allowed wireless synchronisation with Microsoft Exchange Server's email and calendar. The current OS 4 provides a subset of MIDP 2.0 and allows complete wireless activation and synchronisation with Exchange's email, calendar, tasks, notes and contacts.

Third-party developers can write software using these APIs, as well as proprietary BlackBerry APIs, but any application that makes use of certain restricted functionality must be digitally signed so that it can be associated with a developer account at RIM. This guarantees only the authorship of an application, not the quality or security of the code.

Figure: a BlackBerry 7250 displaying the icons provided by its proprietary multi-tasking operating system.

6) Windows Mobile operating system:

Windows Mobile is a compact operating system developed by Microsoft, and designed for use in smartphones and mobile devices.

It is based on Windows CE, and features a suite of basic applications developed using the Microsoft Windows API. It is designed to be somewhat similar to desktop versions of Windows, feature-wise and aesthetically. Additionally, third-party software development is available for Windows Mobile, and software can be purchased via the Windows Marketplace for Mobile.

Originally appearing as the Pocket PC 2000 operating system, Windows Mobile has been updated multiple times, with the current version being Windows Mobile 6.5. Most Windows Mobile phones come with a stylus pen, which is used to enter commands by tapping it on the screen.

Windows Mobile's share of the smartphone market has fallen year on year, decreasing 20% in Q3 2009. It is the fourth most popular smartphone operating system, with a 7.9% share of the worldwide smartphone market.

Figure: Windows Mobile running on a smartphone.

Operating Systems Course: Reflection Essay

I have learned many new concepts about telecommunications and networking in depth in this course. It is one of the most interesting courses I have taken so far in IT. I feel it was worth doing this course online, as there was a chance to learn so many concepts through our assignments. The Wireshark labs were very interesting, and we gained practical knowledge of how networking works in real scenarios.

There were many topics I found interesting throughout the course, but the topic 'Modes of Network Operation', the discussion topic in the sixth week, left me with an 'aha' moment.

In the infrastructure mode of network operation, communication occurs between a set of computers equipped with wireless adapters, and with a wired network, by going through a wireless access point (AP). Infrastructure refers to switches, routers, firewalls and access points (APs). Access points are responsible for handling traffic between wireless networks and wired networks. There is no peer-to-peer communication in this mode. A wireless network in infrastructure mode that is also connected to a wired network is called a BSS (Basic Service Set). A set of two or more BSSs forms an Extended Service Set (ESS).

The BSSID is a 48-bit number of the same format as a MAC address. This field uniquely identifies each BSS. The value of this field is the MAC address of the AP.
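As a small illustration of that format, the sketch below (plain Python; the helper names are mine, not part of any standard library) converts a colon-separated BSSID string, such as the MAC address of an AP, to and from the raw 48-bit integer it represents.

```python
# A BSSID has the same 48-bit format as a MAC address; in infrastructure mode
# it carries the MAC address of the access point. This small sketch converts
# between the usual colon-separated form and the raw 48-bit integer.

def bssid_to_int(bssid: str) -> int:
    """Parse 'aa:bb:cc:dd:ee:ff' into a 48-bit integer."""
    octets = bssid.split(":")
    if len(octets) != 6:
        raise ValueError("a BSSID must contain exactly six octets")
    return int("".join(octets), 16)

def int_to_bssid(value: int) -> str:
    """Format a 48-bit integer back into colon-separated hex octets."""
    if not 0 <= value < 2 ** 48:
        raise ValueError("a BSSID is a 48-bit value")
    raw = f"{value:012x}"
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

if __name__ == "__main__":
    ap_mac = "00:1a:2b:3c:4d:5e"          # example AP MAC address
    assert int_to_bssid(bssid_to_int(ap_mac)) == ap_mac
    print(f"BSSID {ap_mac} as 48-bit integer: {bssid_to_int(ap_mac)}")
```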

Advantages of Infrastructure mode:

Wide areas can be covered by utilising the higher transmit power of an access point in infrastructure mode.
The learning curve for understanding wireless strengths and weaknesses is smaller with infrastructure mode.
A large number of clients can be supported in this mode of operation, as additional access points can be added to the WLAN to increase the reach of the infrastructure and support further wireless clients.
Infrastructure mode networks offer the advantages of scalability, centralised security management and improved reach.

Disadvantages:

The main disadvantage of an infrastructure wireless network is the additional cost of purchasing AP hardware.

Ad Hoc Mode: In this mode, each station is a peer to the other stations and communicates directly with them within the network. No access points are required.

Advantages:

Because Ad Hoc Mode does not require an access point, it’s easier to set up, especially in a small or temporary network.

Disadvantages:

In ad hoc mode, connections (for example between two laptops) are limited by the power available in the laptops.
Because the network layout (the network topology) in ad hoc mode changes regularly, system resources are consumed just to maintain connectivity.
In an ad hoc network with many computers, the amount of interference for all computers goes up, since each is trying to use the same frequency channel.
In ad hoc mode, chains of computers will connect to pass your data if your computer is not directly in range. However, you do not have control over the path your data takes; the automatic configuration routines may send your data through several computers, causing significant network delays.

Conclusion

Both modes of operation offer advantages as well as disadvantages. Depending on the need, one may opt for ad hoc mode, where set-up is easy and no access points are required, whereas infrastructure mode is well suited to wireless networks that must support many clients, offering scalability, centralised security and improved reach.

I have learnt many concepts of operating systems in depth from the Operating Systems course; it is one of the most important subjects to know for anyone heading into the software industry. However, I feel, and have always felt, that it is important to understand where we came from and how we got here in order to understand where we are going. The technology on which operating systems run, and the mechanics of the OS itself, have progressed more in the last 30 years than could have been imagined. By understanding how that progress was made, we can apply the same lessons and make equal progress in the future.

There were many interesting topics in the discussions and journal entries throughout the course, but the first discussion, "Microsoft Windows 8: One Size Fits All?", remained my favourite. Being the first discussion topic, it also made me feel how interesting the entire Operating Systems course would be. The topic grabbed my attention because I had been using operating systems, mostly Microsoft Windows, for many years without knowing what exactly was happening behind the scenes. The pros and cons of Windows 8 were summed up in the "One size fits all" discussion.

This made me think about the practical application of an OS by comparing it with the features of other operating systems. I felt it was not possible to develop a single OS that could be efficient on both tablets and PCs, and that was the first time I disagreed with, or was not satisfied by, a Microsoft decision. Microsoft has long dominated the OS market, and Windows 8 made drastic changes to the platform and user interface of the operating system. Microsoft had a smartphone before the Apple iPhone revolution came along, and it was pushing tablet PCs before the Apple iPad made them cool. But as long as Microsoft's history with mobile devices is, so is its stubborn desire to make everything about its Windows OS.

Nowadays we cannot even imagine the world without computers, as they are part of everyday life, but many of us do not care about what is actually happening when we use a system. Though I had little knowledge about operating systems earlier, and even though I still do not know everything, I am sure that I have learnt a lot about the functioning of an operating system, the many types of management techniques in various operating systems, scheduling algorithms, and protection and security mechanisms in the OS.

An operating system is a program that manages a computer's hardware, and knowledge of operating systems is necessary to start a career in the software industry. An operating system provides a basis for application programs, acts as an intermediary between users and computer hardware, and optimises the utilisation of hardware. It must exist in order for other programs to run.

I will definitely continue my career as a software developer after completing my Masters in IT, as I have worked as a software developer before. As a developer I will be building software applications, and having in-depth knowledge of operating systems is always necessary. An operating system provides a software platform on which other application programs can run, so in scenarios such as asynchronous function calls in a program, I will understand the execution of the program much better knowing how the operating system works.

I am pretty sure that next time I buy a laptop or an electronic device, I will not be lost among the technical specifications. Indeed, I will be more interested in knowing the features and will discuss the specifications confidently.

Finally, I would say that learning about operating systems will help every IT professional in their career.

Online Technologies: Opportunities for Charities

Information technology and developments in non-profit organisations: how online technologies offer new opportunities for growth to charity organisations

Table of Contents

Chapter 1 – Introduction

1.1 Introduction

1.2 Aims and Objectives

1.3 Overview

Chapter 2 Literature Review

2.1 Introduction

2.2 Charities

Chapter 1 – Introduction
1.1 Introduction

As Sargeant and Jay (2004, p.2) have commented, the concept of charity, with its mission of raising funds to help the poor and needy, has been around for centuries. However, both the number and the complexity of charity organisations have multiplied significantly over recent decades. Sargeant and Tofallis (2000) confirmed reports from the NCVO that in the UK, as of 1998, the number of NGOs exceeded half a million, of which 40% could be designated as charity-based organisations. This group was then reported to have a collective estimated turnover approaching £20 billion. Both of these statistics will have grown dramatically over the past decade.

The main mission of charities is to deliver practical and constructive assistance to those in need, providing information on issues such as health problems and disability or promoting the message for fairer laws. These missions can relate to human activity, preservation of the natural environment and its wildlife, or seeking justice for those who are oppressed. However, charities currently face a number of obstacles in effectively performing the tasks for which they have been set up, most of which arise in two particular areas. Firstly, with the increasing growth of needy causes, there is a rise in the number of charitable organisations emerging to address these issues, increasing the competition for funds proportionately. Secondly, there is little doubt from the research that has been undertaken that the charity giver is becoming increasingly discerning about the impact of their donations. This concern centres on the desire to ensure that the gift has the maximum impact; it is therefore important to the donor that the minimum amount of the gift is used for the charity's internal administrative purposes.

Despite the fact that the "mission" of a charity has in the past often been deemed more important than "economic intentions" (Hussey and Perrin 2003, p.200), the current climate within this sector requires charities to become more efficient if they wish to sustain the objectives of their cause. This means that they have to look for ways in which they can improve the effectiveness and efficiency of their operations. In this regard, although somewhat belatedly when compared with the move by commercial corporations, the charity sector is increasingly studying the benefits of using information technology processes as a means of achieving the efficiencies that are required.

However, as Hackler and Saxton (2007) observe, although some charities are incorporating information technology within their organisations, the extent, the areas of the business covered and the effectiveness of these developments have not yet been perfected in a significant number of cases. In fact, in some charities it is considered that technology may even be reducing efficiency. Indeed, the research conducted by Sargeant and Tofallis (2000) concluded that "the performance of many charities would appear to fall well short of the efficient frontier with no immediately obvious explanation forthcoming for why this might be so." They could also find no pattern to the causes of these failures.

It is the issue of information technology, in particular its effective and efficient use in charity organisations, that inspired this research project. Of specific interest is the intention to assess the impact that this technology has upon the dual targets of increasing financial efficiency and improving the delivery of the main services and missions of the charity.

1.2 Aims and Objectives

As stated previously, the aim of this research is to identify the ways in which information technologies can be used to improve the efficiencies of charity operations. In this regard it is intended to focus the research upon the usage of IT in the online environment. Thus the research question or hypothesis that has been set for this study is as follows: –

"Online information technology processes can offer charities opportunities for growth and expansion in terms of the revenue and the message- and mission-generating areas of their operations."

To assist with the achievement of this goal the research will use the following framework of objectives: –

Growth and maximisation of revenue

It is intended to determine the extent to which a charity can make use of the IT opportunities available using the Internet to grow its revenue base and the methods by which this can be achieved.

Cost reduction and efficiency

Using the same premise as that included within the previous objective it is also the intention of this paper to address the issue of the appropriate IT methods that can be employed for increasing the efficiency of the charity organisation in terms of cost control and reduction where appropriate.

Mission and programmes

Bearing in mind the unique purpose of the charity format, which is that it has a mission to serve a specific cause, the research will also ensure that, in addition to the financial objectives outlined above, the information processes examined are compatible with enhancing the message that charities need to communicate. This applies to both the potential donor and the recipient of their services.

The research itself will use a mixture of data to address the research question. This will include reference to the extensive range of financial statements which are available from individual charity websites or the Charities Commission (2008) online resources, although only a sample of these reports will be utilised. To address the issues and concerns of the individual charities more directly, individual interviews will be conducted with a number of representatives from this sector.

1.3 Overview

The management and presentation of the research paper follows a logical format. Chapter two presents a review of the existing literature that relates to the issues being addressed by the researcher. This includes publications and comments by academics, professional observers and other interested stakeholders. Following this critical review, chapter three concentrates upon the methodology that has been applied to this project. It provides an overview of the available methods and the reasons for the method that has been adopted in this instance. Chapter four provides the in-depth results of the research findings gathered from both primary and secondary resources, and these are analysed and discussed in more detail in chapter five. Finally, the research project reaches a conclusion in chapter six and, where considered feasible and appropriate, the researcher's recommendations are presented and explained.

Included at the end of this study, although separated from the main body, will be additional information. This will include a bibliography of the various resources that have been referred to or used to assist with the development of the project. In addition, attached appendices contain information that is considered of further value in understanding the issues raised and the examinations undertaken, including the transcripts of interviews.

Chapter 2 – Literature Review
2.1 Introduction

To assess the issues of charities' use of online information technology, it is important to perform a critical review of the existing literature relating to the various elements. In this case, that includes providing a brief understanding of the charity environment. In addition, it includes a review of information technology processes and their advantages, as well as the areas where charities have been found to have deficiencies, either in the usage of these technologies or in the extent to which they have availed themselves of the technology itself. The chapter has been sectioned in a manner that appropriately addresses these areas.

2.2 Charities

As many academics have observed, in comparison with commercial organisations the charity is a complex organisation, not least because of its structure and mode of operations (Wenham et al 2004, Hussey and Perrin 2003, James 1983). Charities even differ from the other types of non-profit organisations referred to by Hackler and Saxton (2007), such as those that are often formed to regulate the decisions and objectives of various parts of national and international political policy. An example would be the various organisations that have been set up in the UK to deal with the reduction of carbon emissions, such as The Carbon Trust.

The differences attributable to the charity organisation can be observed in many areas of its operation. For a start, one of the main requirements for an organisation to qualify as a charity is that it has a non-profit-making objective (Hussey and Perrin 2003). Secondly, its mission, which in the corporate sense would be classed as a strategic objective, is directed to the service of the external stakeholder or user (Hussey and Perrin 2003). In other words, where the purpose of the commercial organisation is to achieve financial success that will enable it to return additional value to shareholders and potential investors, the charity's financial aim is to utilise its funds specifically for the benefit of those whose demands and needs it is intending to address. Often, because of the break-even requirement, the charity will take on projects that are of no immediate benefit but will have the effect of helping it to subsidise other, more highly valued activities (James 1983, p.351).

Another difference in organisational processes is that the charity's revenue-generating activities rely heavily upon the voluntary donor (Wenham et al 2004), making revenue difficult to predict. In addition, this places constraints upon administrative expenditure in areas such as computers and other modern equipment (Sargeant and Jay 2004). Furthermore, because of the purpose of the charity and the need to concentrate its expenditure upon projects that are determined within its mission statement, together with the fact that funds may be limited, many charities are heavily reliant upon the efforts of voluntary employees. Many of these employees may have limited knowledge of the operational processes that are required for an efficient organisation, which can be a disadvantage (Galaskiewicz et al 2006, p.338). This is especially true if there is a sizeable organisation to manage.

Irrespective of these differences, to remain true to its mission statement and stated aims, every charity still has to create a strategy that allows it to address three specific operational procedures: maximising incoming funds; minimising administrative costs so that the recipients of its objectives, in terms of projects and services, receive the maximum benefit; and effective marketing designed to attract donors and service users (Wenham et al 2004). It is therefore important for the charity to be organised in terms of its mission, which means having the right strategies in place (Hussey and Perrin 2003, p.215 and 218) and assessing their appropriateness. As Hackler and Saxton (2007) acknowledge, it is in these areas that the use of information technology can be considered.

All charities have to be registered with the Charities Commission (2008), irrespective of their size. An integral part of this registration is the need to provide regular financial statements which

Online Self Testing for Real Time Systems

A Survey on Different Architectures Used in Online Self Testing for Real Time Systems

I.ABSTRACT

On-line self-testing is a solution for detecting permanent and intermittent faults in non-safety-critical, real-time embedded multiprocessors. This paper describes three scheduling and allocation policies for on-line self-testing.

Keywords: MPSoC, on-line self-testing, DSM technology

II.INTRODUCTION

Real-time systems are now an important part of our day-to-day lives. The time aspect of computation has been studied for decades, but in recent years interest has increased sharply among researchers, and there has been an eye-catching growth in the number of real-time systems used in domestic and industrial settings. A real-time system is a system whose correctness depends not only on the correctness of its results but also on the time at which those results are produced. Examples of real-time systems include chemical and nuclear plant control, space missions, flight control systems, military systems, telecommunications and multimedia systems, all of which make use of real-time technologies.

Testing is a fundamental step in any development process. It consists of applying a set of experiments to a system (the system under test, or SUT), with multiple aims, from checking correct functionality to measuring performance. In this paper, we are interested in so-called black-box conformance testing, where the aim is to check conformance of the SUT to a given specification. The SUT is a "black box" in the sense that we do not have a model of it and can therefore rely only on its observable input/output behaviour.
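The following minimal Python sketch illustrates the black-box idea under simplified assumptions: the SUT is an opaque callable, the specification is a reference function, and conformance means their observable outputs agree on a chosen set of test inputs. The function names and the saturating-add example are invented for illustration, not taken from any real test framework.

```python
# Minimal sketch of black-box conformance testing: the system under test (SUT)
# is treated as an opaque function, and its observable outputs are compared
# against a reference specification over a set of test inputs. The names here
# are hypothetical placeholders.

def spec_saturating_add(a: int, b: int) -> int:
    """Specification: addition that saturates at 255 (e.g. an 8-bit register)."""
    return min(a + b, 255)

def sut_saturating_add(a: int, b: int) -> int:
    """Black-box implementation under test (imagine we cannot see inside it)."""
    return min(a + b, 255)

def conformance_test(sut, spec, test_inputs):
    """Apply each experiment to the SUT and check the output against the spec."""
    failures = []
    for a, b in test_inputs:
        observed, expected = sut(a, b), spec(a, b)
        if observed != expected:
            failures.append((a, b, observed, expected))
    return failures

if __name__ == "__main__":
    inputs = [(0, 0), (100, 100), (200, 200), (255, 1)]
    print("failures:", conformance_test(sut_saturating_add, spec_saturating_add, inputs))
```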

Real time is measured by the quantitative use of a clock (a real clock) [1]. Whenever we quantify time by using the real clock, we are using real time. A system is called a real-time system when we need a quantitative expression of time to describe its behaviour. In our daily lives, we rely on systems that have underlying temporal constraints, including avionic control systems, medical devices, network processors, digital video recording devices, and many other systems and devices. In each of these systems there is a potential penalty or consequence associated with the violation of a temporal constraint.

a. ONLINE SELF TESTING

On-line self-testing is a cost-effective technique used to ensure correct operation of microprocessor-based systems in the field, and it also improves their dependability in the presence of failures caused by component ageing.

b. DSM TECHNOLOGIES

Deep submicron (DSM) technology refers to the use of smaller transistors with faster switching rates [2]. As Moore's law observes, the number of transistors on a chip roughly doubles every couple of years, so the technology has to fit this increase in transistors into a small area while delivering better performance and lower power [4].

III. Different Architectures used in Online Self Testing in Real Time Systems.

1. The Architecture of the DIVA Processing In Memory Chip

The DIVA system architecture was specifically designed to support a smooth migration path for application software by integrating PIMs (processing-in-memory chips) into conventional systems as seamlessly as possible. At their interfaces, DIVA PIMs resemble commercial DRAMs, enabling PIM memory to be accessed by host software either as smart-memory coprocessors or as conventional memory [2]. A separate memory-to-memory interconnect enables communication between memories without involving the host processor.

Fig. 1: DIVA system architecture (PIM array with PIM-to-PIM interconnect).

A parcel is closely related to an active message: it is a relatively lightweight communication mechanism containing a reference to a function to be invoked when the parcel is received. Parcels are transmitted through a separate PIM-to-PIM interconnect to enable communication without interfering with host memory traffic. This interconnect must support the dense packing requirements of memory devices and allow the addition or removal of devices from the system.

Each DIVA PIM chip is a VLSI memory device augmented with general-purpose computing and communication hardware [3]. A PIM may consist of multiple nodes, each of which primarily comprises a few megabytes of memory and a node processor.

2. Chip Multiprocessor Architecture (CMP Architecture)

Chip multiprocessors, also called multi-core microprocessors or CMPs for short, are now the only way to build high-performance microprocessors, for a number of reasons [6]. Certain drawbacks have, however, limited the acceptance of CMPs in some types of systems.

Fig. 2: CMP architecture [6].

3. SCMP Architecture: An Asymmetric Multiprocessor System-on-Chip

Future systems will have to support multiple, concurrent, dynamic, compute-intensive applications while respecting real-time and energy-consumption constraints. Within this framework, an architecture named SCMP has been presented [5]. This asymmetric multiprocessor can support dynamic migration and preemption of tasks, thanks to concurrent control of tasks, while offering a specific data-sharing solution. Its tasks are controlled by a dedicated HW-RTOS that allows online scheduling of independent real-time and non-real-time tasks. By incorporating a connected-component labelling algorithm into this platform, its authors were able to measure its benefits for real-time and dynamic image processing.

In response to an ever-increasing demand for computational efficiency, the performance of embedded system architectures has improved constantly over the years. This has been made possible through fewer gates per pipeline stage, deeper pipelines, better circuit designs, faster transistors with new manufacturing processes, and enhanced instruction-level or data-level parallelism (ILP or DLP) [7].

An increase in the level of parallelism requires the integration of larger cache memories and more sophisticated branch prediction systems. It therefore has a negative impact on transistor efficiency, since the fraction of transistors that performs computations is gradually reduced. Switching time and transistor size are also reaching their minimum limits.

The SCMP architecture has a CMP structure and uses migration and fast preemption mechanisms to eliminate idle execution slots. Although this means bigger switching penalties, it ensures greater flexibility and reactivity for real-time systems.

Programming Model

The programming model for the SCMP architecture is specifically adapted to dynamic applications and global scheduling methods. The proposed programming model is based on the explicit separation of the control and computation parts. Computation tasks and the control task are extracted from the application so that each task is a standalone program. The control task handles computation task scheduling and other control functionalities, such as synchronisation and shared resource management. Each embedded application can be divided into a set of independent threads, from which explicit execution dependencies are extracted. Each thread can in turn be divided into a finite set of tasks. The more independent, parallel tasks that are extracted, the more the application can be accelerated at runtime.
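A toy Python sketch of this control/computation split is given below. It is only an illustration of the programming model, not the SCMP HW-RTOS interface: a control routine dispatches standalone computation tasks as soon as their explicit execution dependencies are satisfied, and the task names are invented.

```python
# Toy sketch of the control/computation split described above: a "control task"
# dispatches standalone computation tasks as soon as their explicit execution
# dependencies are satisfied. This is a plain-Python illustration, not the
# actual SCMP HW-RTOS interface.

from collections import deque

def control_task(tasks, dependencies):
    """tasks: name -> callable; dependencies: name -> set of prerequisite names."""
    remaining = {name: set(deps) for name, deps in dependencies.items()}
    ready = deque(name for name, deps in remaining.items() if not deps)
    finished = set()
    while ready:
        name = ready.popleft()
        tasks[name]()                       # run the standalone computation task
        finished.add(name)
        for other, deps in remaining.items():
            if other not in finished and deps <= finished and other not in ready:
                ready.append(other)

if __name__ == "__main__":
    tasks = {
        "grab_frame":  lambda: print("grab frame"),
        "label_blobs": lambda: print("label connected components"),
        "report":      lambda: print("report results"),
    }
    deps = {"grab_frame": set(), "label_blobs": {"grab_frame"}, "report": {"label_blobs"}}
    control_task(tasks, deps)
```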


SCMP Processing

As shown in Fig. 4, the SCMP architecture is made up of multiple PEs and I/O controllers. This architecture is designed to provide real-time guarantees while optimising resource utilisation and energy consumption. The next section describes the execution of applications on an SCMP architecture.

When the OSoC receives an execution order for an application, its Petri net representation is built in the Task Execution and Synchronisation Management Unit (TSMU) of the OSoC. Then, execution and configuration demands are sent to the Selection Unit according to application status. They contain the lists of active tasks that can be executed and of upcoming active tasks that can be prefetched. Scheduling of all active tasks must then incorporate the tasks of the newly loaded application. If a non-configured task is ready and waiting for execution, or a free resource is available, the PE and Memory Allocation Unit sends a configuration primitive to the Configuration Unit.

Fig. 4: SCMP architecture [5].

Table of Comparison

Paper: The Architecture of the DIVA Processing In Memory Chip
Year of publication: 2002
Authors: Jeff Draper, Jacqueline Chame, Mary Hall, Craig Steele, Tim Barrett, Jeff LaCoss, John Granacki, Jaewook Shin, Chun Chen, Chang Woo Kang, Ihn Kim, Gokhan Daglikoca
Limits: The paper gives a detailed description of the DIVA PIM architecture, but leaves open issues in exploiting memory bandwidth, particularly the memory interface and controller, instruction set features for fine-grained parallel operation, and the mechanism for address translation.

Paper: Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency
Year of publication: 2007
Authors: Kunle Olukotun, Lance Hammond, James Laudon
Limits: The work provides a solid foundation for future exploration in the area of defect-tolerant design. The authors plan to investigate the use of spare components, based on wearout profiles, to provide more sparing for the most vulnerable components. Further, a CMP switch is only a first step toward the overarching goal of designing a defect-tolerant CMP system.

Paper: SCMP Architecture: An Asymmetric Multiprocessor System-on-Chip for Dynamic Applications
Year of publication: 2010
Authors: Nicolas Ventroux, Raphael David
Limits: The architecture, called SCMP, consists of a hardware real-time operating system accelerator (HW-RTOS) and multiple computing, memory and input/output resources. The overhead due to control and execution management is limited by a highly efficient task and data-sharing management scheme, despite the use of centralised control. Future work will focus on the development of tools to ease the programming of the SCMP architecture.

Conclusion

We have surveyed how on-line self-testing can be controlled in a real-time embedded multiprocessor for dynamic but non-safety-critical applications using different architectures. We analysed the impact of three on-line self-testing architectures in terms of performance penalty and fault detection probability. As long as the architecture load remains under a certain threshold, the performance penalty is low and an aggressive self-test policy, as proposed in [8], can be applied to such an architecture. Otherwise, on-line self-testing should take the scheduling decision into account to mitigate the overhead at the expense of fault detection probability. It was shown that a policy that periodically applies a test to each processor in a way that accounts for the idle states of the processors, the test history and the task priority offers a good trade-off between performance and fault detection probability. The principle and methodology can be generalised to other multiprocessor architectures.
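As a rough illustration of the kind of policy just described, the Python sketch below scores processor cores by idle state, time since their last test and the priority of the task they are running, and picks the best self-test candidate. The scoring weights and core data are invented for illustration and are not taken from the surveyed papers.

```python
# Toy sketch of an on-line self-test scheduling policy: the next processor to
# test is chosen by favouring idle cores, cores that have not been tested for a
# long time, and cores running low-priority tasks. Weights are illustrative only.

from dataclasses import dataclass

@dataclass
class Core:
    name: str
    idle: bool
    cycles_since_test: int
    running_task_priority: int   # higher value = more urgent task

def pick_core_to_test(cores, w_idle=1000, w_age=1, w_prio=50):
    """Score each core and return the best self-test candidate."""
    def score(c):
        penalty = 0 if c.idle else w_prio * c.running_task_priority
        return w_idle * c.idle + w_age * c.cycles_since_test - penalty
    return max(cores, key=score)

if __name__ == "__main__":
    cores = [
        Core("cpu0", idle=False, cycles_since_test=5000, running_task_priority=9),
        Core("cpu1", idle=True,  cycles_since_test=1200, running_task_priority=0),
        Core("cpu2", idle=False, cycles_since_test=9000, running_task_priority=1),
    ]
    print("test next:", pick_core_to_test(cores).name)
```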

References

[1] R. Mall, Real-Time Systems: Theory and Practice, Pearson Education, 3rd Edition, 2008.

[2] O. Heron, J. Guilhemsang, N. Ventroux et al., "Analysis of On-Line Self-Testing Policies for Real-Time Embedded Multiprocessors in DSM Technologies", IEEE, 2010.

[3] J. Draper et al., "The Architecture of the DIVA Processing In Memory Chip", ICS'02, June 2002.

[4] C. Constantinescu, “Impact of deep submicron technology on dependability of VLSI circuits”, IEEE DSN, pp. 205-209, 2002.

[5] Nicolas Ventroux and Raphael David, “SCMP architecture: An Asymmetric Multiprocessor System-on-Chip for Dynamic Applications”, ACM Second International Forum on Next Generation Multicore/Many core Technologies, Saint Malo, France, 2010.

[6] K. Olukotun, L. Hammond and J. Laudon, Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency, 2007.

[7] A. Paschalis and D. Gizopoulos, "Effective Software-Based Self-Test Strategies for On-Line Periodic Testing of Embedded Processors", DATE, pp. 578-583, 2004.

[8] D. Gizopoulos et al., "Systematic Software-Based Self-Test for Pipelined Processors", IEEE Trans. on VLSI Systems, vol. 16, pp. 1441-1453, 2008.


Negative Effects of Technology on Society

Negative Effects of Technology

There is no doubt that technology plays a critical role in developing societies, as countries depend on it in all disciplines of life. Countries all over the globe are competing to invent and develop the most advanced technological devices that can maintain the highest efficiency and accuracy of work. Starting from the 1980s, people began to use technology every day, and its use kept rising so dramatically that people came to use it for even the smallest things. That overuse has resulted in many negatives. There are many negative effects of the overuse of technology on societies, but the three major ones are health problems, privacy problems and social problems.

One of the negative effects of the rapid spread of technology is the health issue. There is no doubt that technology is improving and spreading around the world, and societies deal with it almost every day to get their work done, which results in many issues. These issues can be divided into two main categories: mental health problems and physical health problems. First are the physical health problems. There are many physical health issues caused by dealing with technology, but a critical one is weakened eyesight. People who deal with computers for long periods of time, such as programmers, suffer from blurry vision and eye soreness. According to Tripp (2013), people who use computers for long periods experience serious issues such as soreness of the eyes and blurring of vision. This occurs because the eyes have to concentrate on the screen for long stretches, which strains them, makes them water and can leave vision blurry, and over the long term causes soreness. In addition, technology can cause mental problems for some people. Some people who spend hours with TVs and computers without interacting with others become discouraged and withdrawn, which can lead to a fear of talking to people and, in turn, to anxiety. Unfortunately, parents nowadays expose their children to technology without taking into consideration the harmful effects of such an action, causing their children to suffer from mental disorders. Crawford (2011) mentioned that, due to spending huge amounts of time with technology, a huge number of children were diagnosed with bipolar disorder, anxiety and depression, resulting in the use of enormous amounts of psychotropic medications. This happens because parents do not take into consideration that technology isolates the child from the outer world, and so the child develops mental problems. To sum up, technology has double-edged effects, both mental and physical.

The second negative effect of the overuse of technology on society is the loss of privacy and security. As the world experiences many advances in technology, it also faces problems of privacy and security that can strip people of their personal information. Firstly, privacy issues. Privacy problems are among the critical issues; they concern the tracking of locations and the spying on of information. It is very easy for professionals to trace and spy on any electronic device that connects to a network, simply by tracking its IP location using programs and then establishing connections by exploiting weaknesses, resulting in access to the user's data. For example, some advertising websites can track location, watch what users do and see what users like and dislike by surveying which products are preferred, while some countries spy on other countries to maintain their internal security and to spy on extremely important information. According to the Thai serves group (2012), the minister of communication in Iran said that the West is spying on the Internet, resulting in the spreading of corruption. This is because the West has more advanced instruments that can track and spy on anyone in the world with high accuracy. In addition to privacy problems, security problems can negatively affect anyone in the world; they concern the stealing of information. It is known that the information of a website's users is saved in the website's cloud storage, and professionals aim for that cloud storage. If it is not well secured, all of the users' information, including IDs and passwords, is threatened. For example, if a professional hacker breaks into a bank database, he can cause fatal damage to the users and the bank, getting away while leaving almost no trace. "Computer predators" (n.d.) mentions that while computers are connected to the Internet, hackers send malware to seek out financial information and transfer it to themselves. Many hackers have used such malware to penetrate banks' systems in the West. Wrapping up, technology dramatically affects privacy and security.

The third negative effect of the overuse of technology on societies is the social issue. Over the years technology has become more advanced, not only in business and work but in numerous other fields. One of these important fields, which earns enormous profits, is gaming, which has been refined many times to make games more realistic and more similar to real life. Games are now so realistic that killing and other disturbing scenes are included in them. The serious effects of games can be divided into two main categories: temper fluctuations and a lack of social skills. Firstly, due to killing and blood scenes in games, aggression has spread among teenagers. According to Alamy (2012), teens who regularly play brutal games become extremely aggressive. Watching such dangerous scenes makes teenagers or children more likely to attack people or cause harm to their friends at school or in the neighbourhood. In addition to aggression, a lack of social skills is another result of the overuse of technology. Sitting all day working on electronics such as computers can isolate a person. It is clear that technology has boosted communications, making it possible to contact anyone at the press of a button, but it is also responsible for the rapid loss of relationships. Nowadays some people on a date may be sitting next to each other yet in reality have no real contact. According to Howarth (2014), children's social skills are decreasing dramatically because they spend long periods of time interacting with technology. The more a child is attached to technology, the weaker his or her social skills will be. Wrapping up, technology affects the socialisation of children.

Finally, technology was made to serve the world, but people have used it so intensively that it now causes serious health, privacy and social problems. Health problems are critical issues that affect both the mental and the physical wellbeing of the user. Privacy is undermined by spying and by the theft of users' information, and socialisation suffers through rapid changes of temper and a lack of social skills. People should spend less time on their devices, communicate with each other directly, and use technology in moderation so that it does not cause further harm in the future.

Word Count -1198

Modelling of β-turns using Hidden Markov Model

Nivedita Rao
Ms. Sunila Godara

Abstract— One of the major tasks in predicting the secondary structure of a protein is to locate its β-turns. The functional and structural traits of a globular protein can be better understood through its turns, as they play an important role in the structure, and β-turns in particular are important in protein folding. β-turns constitute, on average, 25% of the residues in all protein chains and are the most common form of non-repetitive structure. It is already known that helices and β-sheets are among the most important elements stabilising protein structures. In this paper we use a hidden Markov model (HMM) to predict the β-turns in proteins based on amino acid composition, and compare it with other existing methods.

Keywords- β-turns, amino acid composition, hidden Markov model, residue.

I. Introduction

Bioinformatics has become a vital part of many areas of biology. In molecular biology, bioinformatics techniques such as signal processing or image processing allow useful results to be mined from large volumes of raw data. In the fields of genetics and genomics, it helps in sequencing and annotating genomes and their observed mutations. It plays an important part in the analysis of protein expression, gene expression and their regulation. It also supports the literature mining of biological text and the development of biological and gene ontologies for organising and querying biological data. Bioinformatics tools aid in the comparison of genetic and genomic data and, more generally, in the understanding of the evolutionary aspects of molecular biology. At a more integrative level, bioinformatics helps in analysing and categorising the biological pathways and networks that are a significant part of systems biology. In structural biology, it assists in the understanding, simulation and modelling of RNA, DNA and protein structures as well as molecular binding.

Advances in genome sequencing have accelerated radically over recent years, resulting in explosive growth of biological data and a widening gap between the number of protein sequences stored in databases and the experimental annotation of their functions.

There are many types of tight turns, which are classified by the number of atoms forming the turn [1]. Among them is the β-turn, one of the important components of protein structure, as it plays an important part in molecular structure and protein folding. A β-turn involves four consecutive residues where the polypeptide chain folds back on itself by about 180 degrees [2].

Basically, these chain reversals are what give a protein its globularity rather than linearity. β-turns can be further classified into different types. According to Venkatachalam [3], β-turns can be divided into 10 types based on their phi and psi angles and other criteria. Richardson [4] suggested only 6 distinct types (I, I′, II, II′, VIa and VIb) on the basis of phi, psi ranges, along with a new category IV. At present, the classification by Richardson is the most widely used.

Turns can be considered an important part of globular proteins with respect to both structure and function. Without turns, a polypeptide chain cannot fold itself into a compact structure. Also, turns normally occur on the exposed surface of proteins and therefore often represent antigenic sites or are involved in molecular recognition. For these reasons, the prediction of β-turns in proteins is an important element of secondary structure prediction.

II. RELATED WORK

A lot of work has been done on the prediction of β-turns. To determine the chain-reversal regions of a globular protein, Chou et al. [5] used conformational parameters. Chou et al. [6] gave a residue-coupled model for predicting β-turns in proteins. Chou et al. [7] used tetrapeptide sequences. Chou [8] again predicted tight turns and their types in proteins using amino acid residues. Guruprasad et al. [9] predicted β-turns and γ-turns in proteins using a new set of amino acid and hydrogen-bond preferences. Hutchinson et al. [10] created a program called PROMOTIF to identify and analyse structural motifs in proteins. Shepherd et al. [11] used neural networks to predict the location and type of β-turns. Wilmot et al. [12] analysed and predicted the different types of β-turn in proteins using phi, psi angles and central residues. Wilmot et al. [13] proposed a new nomenclature, GORBTURN 1.0, for predicting β-turns and their distortions.

This study uses a hidden Markov model to predict the β-turns in a protein. HMMs have been widely used as tools in computational biology.

Figure 1.1: (a) Type-I β-turn and (b) Type-II β-turn. Hydrogen bonds are denoted by dashed lines. [14]

III. Materials and methods
A. Dataset

The dataset used in the experiment is the non-redundant dataset previously described by Guruprasad and Rajkumar [9]. It contains 426 non-homologous protein chains, no two of which share more than 25% sequence similarity; this ensures that there is very little correlation within the training set. Each protein chain in the dataset contains at least one β-turn and has an X-ray crystallographic structure at a resolution of 2.0 Å or better.

The dataset covers ten main structural classes; the remaining chains belong to combinations of these ten classes.

Table 1: Dataset description [14]

α-proteins (class a): 68
β-proteins (class b): 97
α/β-proteins (class c): 102
α+β-proteins (class d): 86
Multiple-domain proteins (class e): 9
Small proteins (class f): 2
Coiled proteins (class g): 22
Low-resolution proteins (class h): 0
Peptides (class i): 0
Designed proteins (class j): 1
Proteins in both classes a and b: 3
Proteins in both classes a and c: 7
Proteins in both classes a and d: 5
Proteins in both classes b and c: 6
Proteins in both classes b and d: 4
Proteins in both classes b and f: 1
Proteins in both classes c and d: 10
Proteins in both classes c and g: 1
Proteins in classes b, c and d: 2

B. Hidden markov model

In our work, we use the probabilistic framework of an HMM for β-turn prediction. The model assumes that the protein sequence is generated by a stochastic process that alternates between two hidden states: "turn" and "non-turn". The HMM is trained using 20 protein sequences.

The transition probability matrix is 2×2, corresponding to the two states (turn and non-turn), and the emission probability matrix is 2×20, since there are 2 states and 20 amino acids. We initialised the transition and emission matrices using our prior knowledge of the dataset, namely that non-turn residues are more frequent than β-turn residues in a protein sequence, and using the per-residue probabilities taken from Chou [7] as parameters.
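As an illustration of this setup, the short sketch below builds a 2×2 transition matrix and a 2×20 emission matrix; the specific probabilities shown are assumptions for demonstration only, not the values actually fitted in this work.

```python
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")   # the 20 standard residues
STATES = ["turn", "non-turn"]

# Illustrative starting values (not the fitted parameters of this paper):
# non-turn residues are assumed to dominate, so the non-turn self-transition
# probability is set high.
transition = np.array([[0.55, 0.45],    # turn     -> turn, non-turn
                       [0.15, 0.85]])   # non-turn -> turn, non-turn

# 2 x 20 emission matrix: probability of observing each residue in each state.
# Initialised uniformly here; in practice the rows would be filled from
# residue propensities (e.g. those of Chou) and normalised to sum to 1.
emission = np.full((len(STATES), len(AMINO_ACIDS)), 1.0 / len(AMINO_ACIDS))

# Each row of both matrices must be a probability distribution.
assert np.allclose(transition.sum(axis=1), 1.0)
assert np.allclose(emission.sum(axis=1), 1.0)
```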

Although the dataset contains more than ten structural classes, the HMM parameters are estimated over these two super-states, and training is performed on this basis.

Let P be a protein sequence of length n, which can be expressed as

P = r1 r2 … rn,

where ri is the amino acid residue at sequence position i. In the hidden Markov model the sequence is considered to be generated from r1 to rn. The model is trained using the Baum-Welch algorithm [15].

The Baum-Welch algorithm is the standard method for finding maximum-likelihood estimates of HMM parameters; posterior probabilities are computed using the forward and backward algorithms, which are used to build the state transition probability and emission probability matrices.
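For reference, a minimal forward pass (one half of the forward-backward computation used by Baum-Welch) might look like the sketch below; `pi` is the initial state distribution and the matrices are those defined above. This is an assumed, simplified illustration, not the training code used in the paper.

```python
import numpy as np

def forward(obs, transition, emission, pi):
    """Forward algorithm: alpha[t, i] = P(o_1..o_t, state_t = i).

    obs is a sequence of residue indices (0..19); the total sequence
    likelihood is alpha[-1].sum()."""
    T, N = len(obs), transition.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * emission[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ transition) * emission[:, obs[t]]
    return alpha
```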

The initial probabilities are calculated taking into account the correlation between residues at different positions. The most probable state path is then calculated using the Viterbi algorithm [16], which automatically segments the protein into its component regions.
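A log-space Viterbi decoder for the two-state model could be sketched as follows; it is a generic textbook implementation given here only to make the segmentation step concrete, not the exact code used in this study.

```python
import numpy as np

def viterbi(obs, transition, emission, pi):
    """Return the most probable hidden-state path (0 = turn, 1 = non-turn)
    for a protein sequence encoded as residue indices."""
    T, N = len(obs), transition.shape[0]
    log_a, log_b = np.log(transition), np.log(emission)
    delta = np.zeros((T, N))             # best log-probability ending in each state
    backptr = np.zeros((T, N), dtype=int)
    delta[0] = np.log(pi) + log_b[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_a       # scores[i, j]: state i -> j
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_b[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                   # trace the path back
        path[t] = backptr[t + 1, path[t + 1]]
    return path
```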

The probability of a residue in the protein sequence, used to generate the emission matrix, is given by

P(r) = m / n,

where m is the number of occurrences of residue r in the protein sequence and n is the total number of residues in the protein sequence.

C. Accuracy measures

Once β-turns have been predicted using the hidden Markov model, an appropriate measure of prediction quality is needed. Four different scalar measures are used to assess the model's performance [17]. These measures are derived from four quantities:

TP (true positive), p, is the number of correctly classified β-turn residues.

TN (true negative), n, is the number of correctly classified non-β-turn residues.

FP (false positive), m, is the number of non-β-turn residues incorrectly classified as β-turn residues.

FN (false negative), o, is the number of β-turn residues incorrectly classified as non-β-turn residues.

The predictive performance of the HMM model can be expressed by the following parameters:

Qtotal gives the percentage of correctly classified residues.

MCC (Matthews Correlation Coefficient) [18] is a measure that accounts for both over- and under-prediction.

Qpredicted is the percentage of β-turn predictions that are correct.

Qobserved is the percentage of observed β-turns that are correctly predicted. A short sketch computing these four measures is given below.
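Using the standard definitions of these measures (with p, n, m, o as defined above), they can be computed as in the following sketch; the formulas are the usual ones from the β-turn prediction literature rather than anything specific to this implementation.

```python
import math

def prediction_scores(p, n, m, o):
    """p = TP, n = TN, m = FP, o = FN, following the notation above."""
    total = p + n + m + o
    q_total = 100.0 * (p + n) / total                      # % residues correct
    q_predicted = 100.0 * p / (p + m) if (p + m) else 0.0  # precision
    q_observed = 100.0 * p / (p + o) if (p + o) else 0.0   # coverage / recall
    denom = math.sqrt((p + m) * (p + o) * (n + m) * (n + o))
    mcc = (p * n - m * o) / denom if denom else 0.0        # Matthews correlation
    return q_total, q_predicted, q_observed, mcc
```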

IV. Results and Discussion
A. Results

The model used to predict β-turns is based on a hidden Markov model. There are two classes, turn and non-turn, and the model predicts one protein sequence at a time. It has been observed to perform better than some existing prediction methods.

B. Comparison with other methods

In order to examine the performance of this method, it has been compared with other existing methods, as shown in Table 2.

For now, the comparison is done on a single protein sequence. The comparison is for protein sequence with PDB code 1ah7.

Figure 2 shows the comparison of Qtotal across the different algorithms, Figure 3 that of Qpredicted, Figure 4 that of Qobserved and Figure 5 that of MCC. The HMM-based method shows better results than some of the existing prediction algorithms.

Figure 2. Comparison of Qtotal with different algorithms

Figure 3. Comparison of Qpredicted with different algorithms

Figure 4. Comparison of Qobserved with different algorithms

Figure 5. Comparison of MCC with different algorithms

Table 2: Comparison with other methods

Prediction method               Qtotal (%)   Qpredicted (%)   Qobserved (%)   MCC
Chou-Fasman algorithm           56.5         26.7             76.1            0.22
Thornton's algorithm            62.2         30.9             82.6            0.31
1-4 & 2-3 correlation model     53.7         24.2             69.6            0.15
Sequence coupled model          51.8         21.4             58.7            0.07
GORBTURN                        77.6         40.8             43.5            0.28
HMM based method                54.2         28.5             67.3            0.27

V. Conclusion

In this paper, we presented a way in which an HMM can be used to predict β-turns in a protein chain. Our method predicts turns and non-turns for a single protein sequence at a time, and the results obtained are better than those of some other existing methods. The performance of β-turn prediction can be further improved by considering other techniques, such as using predicted secondary structures and dihedral angles from multiple predictors, applying feature selection [19], or combining several features together. Different machine learning techniques can also be combined to improve prediction performance.

References

[1] Chou, Kuo-Chen. "Prediction of tight turns and their types in proteins." Analytical Biochemistry 286.1 (2000): 1-16.
[2] Chou, P. Y., and G. D. Fasman. "Conformational parameters for amino acids in helical, beta-sheet and random coil regions calculated from proteins." Biochemistry 13 (1974): 211-222.
[3] Venkatachalam, C. M. "Stereochemical criteria for polypeptides and proteins. V. Conformation of a system of three linked peptide units." Biopolymers 6.10 (1968): 1425-1436.
[4] Richardson, Jane S. "The anatomy and taxonomy of protein structure." Advances in Protein Chemistry 34 (1981): 167-339.
[5] Chou, P. Y., and G. D. Fasman. "Prediction of beta-turns." Biophysical Journal 26.3 (1979): 367-383.
[6] Chou, K. C. "Prediction of beta-turns." Journal of Peptide Research (1997): 120-144.
[7] Chou, Kuo-Chen, and James R. Blinn. "Classification and prediction of β-turn types." Journal of Protein Chemistry 16.6 (1997): 575-595.
[8] Chou, Kuo-Chen. "Prediction of tight turns and their types in proteins." Analytical Biochemistry 286.1 (2000): 1-16.
[9] Guruprasad, Kunchur, and Sasidharan Rajkumar. "Beta- and gamma-turns in proteins revisited: a new set of amino acid turn-type dependent positional preferences and potentials." Journal of Biosciences 25.2 (2000): 143.
[10] Hutchinson, E. Gail, and Janet M. Thornton. "PROMOTIF—a program to identify and analyze structural motifs in proteins." Protein Science 5.2 (1996): 212-220.
[11] Shepherd, Adrian J., Denise Gorse, and Janet M. Thornton. "Prediction of the location and type of β-turns in proteins using neural networks." Protein Science 8.5 (1999): 1045-1055.
[12] Wilmot, C. M., and J. M. Thornton. "Analysis and prediction of the different types of β-turn in proteins." Journal of Molecular Biology 203.1 (1988): 221-232.
[13] Wilmot, C. M., and J. M. Thornton. "β-Turns and their distortions: a proposed new nomenclature." Protein Engineering 3.6 (1990): 479-493.
[14] Available from: http://imtech.res.in/raghava/betatpred/intro.html
[15] Welch, Lloyd R. "Hidden Markov models and the Baum-Welch algorithm." IEEE Information Theory Society Newsletter 53.4 (2003): 10-13.
[16] Lou, Hui-Ling. "Implementing the Viterbi algorithm." IEEE Signal Processing Magazine 12.5 (1995): 42-52.
[17] Fuchs, Patrick F. J., and Alain J. P. Alix. "High accuracy prediction of β-turns and their types using propensities and multiple alignments." Proteins: Structure, Function, and Bioinformatics 59.4 (2005): 828-839.
[18] Matthews, Brian W. "Comparison of the predicted and observed secondary structure of T4 phage lysozyme." Biochimica et Biophysica Acta (BBA) - Protein Structure 405.2 (1975): 442-451.
[19] Saeys, Yvan, Iñaki Inza, and Pedro Larrañaga. "A review of feature selection techniques in bioinformatics." Bioinformatics 23.19 (2007): 2507-2517.

Mesh Generation

Describe general methods (structured, unstructured, hybrid, adaptive, etc.) and discuss their key features and applications

A key step of the finite element method for numerical computation is mesh generation. One is given a domain (such as a polygon or polyhedron; more realistic versions of the problem allow curved domain boundaries) and must partition it into simple "elements" meeting in well-defined ways. There should be few elements, but some portions of the domain may need small elements so that the computation is more accurate there. All elements should be "well shaped" (which means different things in different situations, but generally involves bounds on the angles or aspect ratio of the elements). One distinguishes "structured" and "unstructured" meshes by the way the elements meet; a structured mesh is one in which the elements have the topology of a regular grid. Structured meshes are typically easier to compute with (saving a constant factor in runtime) but may require more elements or worse-shaped elements. Unstructured meshes are often computed using quadtrees, or by Delaunay triangulation of point sets; however, there are quite varied approaches for selecting the points to be triangulated.

The simplest algorithms compute nodal placement directly from some given function; these are referred to as algebraic algorithms. Many algorithms for the generation of structured meshes are descendants of "numerical grid generation" algorithms, in which a differential equation is solved to determine the nodal placement of the grid. In many cases the system solved is elliptic, so these methods are often referred to as elliptic methods.
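As a concrete example of an algebraic method, the sketch below performs simple bilinear transfinite interpolation between four user-supplied boundary curves; the boundary functions used in the example are placeholders chosen purely for illustration.

```python
import numpy as np

def transfinite_grid(bottom, top, left, right, ni, nj):
    """Algebraic structured grid by transfinite interpolation.

    Each boundary is a function of a parameter in [0, 1] returning (x, y);
    grid point (j, i) blends the four boundary curves and subtracts the
    doubly counted corner contributions."""
    xi, eta = np.linspace(0, 1, ni), np.linspace(0, 1, nj)
    grid = np.zeros((nj, ni, 2))
    for j, e in enumerate(eta):
        for i, s in enumerate(xi):
            edges = ((1 - e) * np.array(bottom(s)) + e * np.array(top(s))
                     + (1 - s) * np.array(left(e)) + s * np.array(right(e)))
            corners = ((1 - s) * (1 - e) * np.array(bottom(0))
                       + s * (1 - e) * np.array(bottom(1))
                       + (1 - s) * e * np.array(top(0))
                       + s * e * np.array(top(1)))
            grid[j, i] = edges - corners
    return grid

# Example: a unit square with a bulged top boundary (illustrative only).
mesh = transfinite_grid(bottom=lambda s: (s, 0.0),
                        top=lambda s: (s, 1.0 + 0.2 * np.sin(np.pi * s)),
                        left=lambda t: (0.0, t),
                        right=lambda t: (1.0, t),
                        ni=21, nj=11)
```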

It is difficult to make general statements about unstructured mesh generation algorithms because the most prominent methods are very different in nature. The most popular family of algorithms is based on Delaunay triangulation, but other methods, such as quadtree/octree approaches, are also used.

Delaunay Methods

Many of the commonly used unstructured mesh generation techniques are based on the properties of the Delaunay triangulation and its dual, the Voronoi diagram. Given a set of points in a plane, a Delaunay triangulation of these points is the set of triangles such that no point lies inside the circumcircle of any triangle. The triangulation is unique if no three points are collinear and no four points are cocircular. A similar definition holds in higher dimensions, with tetrahedra replacing triangles in 3D.
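A minimal example of triangulating a planar point set, here using SciPy's Delaunay wrapper around Qhull, would be:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((50, 2))        # an arbitrary planar point set
tri = Delaunay(points)              # Delaunay triangulation of the points

# tri.simplices is an (n_triangles, 3) array of vertex indices; no point of
# `points` lies strictly inside the circumcircle of any of these triangles.
print(tri.simplices[:5])
```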

Quadtree/Octree Methods

Quadtree and octree methods recursively subdivide an enclosing square (or cube in 3D) into quadrants (or octants) until each cell satisfies a size or quality criterion, and then warp or clip the resulting cells to conform to the domain boundary, producing well-shaped, naturally graded elements away from the boundary.

Mesh Adaptation

Mesh adaptation, often referred to as Adaptive Mesh Refinement (AMR), is the modification of an existing mesh so as to accurately capture flow features. Generally, the goal of these modifications is to improve resolution of flow features without an excessive increase in computational effort. We briefly discuss some of the concepts important in mesh adaptation.

Mesh adaptation strategies can usually be classified as one of three general types: r-refinement, h-refinement, or p-refinement. Combinations of these are also possible, for example hp-refinement and hr-refinement. We summarise these types of refinement below.

r-refinement is the modification of mesh resolution without changing the number of nodes or cells present in a mesh or the connectivity of the mesh. The increase in resolution is achieved by moving the grid points into regions of activity, which results in a greater clustering of points in those regions. The movement of the nodes can be controlled in various ways. One common technique is to treat the mesh as if it were an elastic solid and solve a system of equations (subject to some forcing) that deforms the original mesh. Care must be taken, however, that no problems arise due to excessive grid skewness; a simple one-dimensional analogue of node movement is sketched below.
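As a toy illustration of r-refinement, the sketch below redistributes a fixed number of nodes in one dimension so that a monitor function (for example, 1 + |solution gradient|) is equidistributed between neighbouring nodes. This is only an assumed, simplified analogue of the elastic-solid approach described above.

```python
import numpy as np

def equidistribute(x_old, monitor, n_nodes):
    """Move n_nodes over [x_old[0], x_old[-1]] so the integral of the monitor
    function between consecutive nodes is the same (1-D r-refinement)."""
    w = np.asarray(monitor, dtype=float)
    # cumulative integral of the monitor function (trapezoidal rule)
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_old))))
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, x_old)   # nodes cluster where the monitor is large

# Example: cluster 21 nodes near a sharp feature at x = 0.5 (illustrative only).
x = np.linspace(0.0, 1.0, 200)
nodes = equidistribute(x, 1.0 + 50.0 * np.exp(-((x - 0.5) / 0.05) ** 2), 21)
```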

h-refinement is the modification of mesh resolution by changing the mesh connectivity. Depending upon the technique used, this may not result in a change in the overall number of grid cells or grid points. The simplest strategy for this type of refinement subdivides cells, while more complex procedures may insert or remove nodes (or cells) to change the overall mesh topology.

In the subdivision case, every "parent cell" is divided into "child cells". The choice of which cells are to be divided is addressed below. For every parent cell, a new point is added on each face; for 2-D quadrilaterals a new point is also added at the cell centroid. Joining these points yields 4 new "child cells", so every quad parent gives rise to four offspring. The advantage of such a procedure is that the overall mesh topology remains the same, with the child cells taking the place of the parent cell in the connectivity arrangement. The subdivision process is similar for a triangular parent cell. It is easy to see that subdivision increases both the number of points and the number of cells; a minimal sketch of quad subdivision follows.
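The routine below is an assumption-level sketch of this parent-to-child quad refinement, ignoring connectivity bookkeeping and hanging-node treatment.

```python
def subdivide_quad(quad):
    """Split one quadrilateral, given as four (x, y) corners in
    counter-clockwise order, into four child quads using the edge
    midpoints and the cell centroid."""
    p0, p1, p2, p3 = quad
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    m01, m12, m23, m30 = mid(p0, p1), mid(p1, p2), mid(p2, p3), mid(p3, p0)
    c = (sum(p[0] for p in quad) / 4.0, sum(p[1] for p in quad) / 4.0)
    return [[p0, m01, c, m30],       # each child keeps counter-clockwise order
            [m01, p1, m12, c],
            [c, m12, p2, m23],
            [m30, c, m23, p3]]

children = subdivide_quad([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
```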

p-refinement, a very popular approach in Finite Element Modelling (FEM) rather than in Finite Volume Modelling (FVM), achieves increased resolution by increasing the order of accuracy of the polynomial in each element (or cell).

In AMR, the selection of "parent cells" to be divided is made on the basis of regions where there is appreciable flow activity. It is well known that in compressible flows the major features include shocks, boundary layers, shear layers, vortex flows, Mach stems, expansion fans and the like. Each feature has some "physical signature" that can be exploited numerically. For example, shocks always involve a density/pressure jump and can be detected from their gradients, whereas boundary layers are always associated with rotationality and hence can be detected using the curl of the velocity. In compressible flows the velocity divergence, which is a measure of compressibility, is also a good choice for detecting shocks and expansions. These sensing parameters, which indicate the regions of the flow where there is activity, are referred to as ERROR INDICATORS and are very popular in AMR for CFD; a simple gradient-based indicator is sketched below.
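A very simple gradient-based error indicator of this kind, written for a structured grid of cell-centred densities, might look like the following; the threshold is an assumed user parameter.

```python
import numpy as np

def shock_indicator(rho, dx, dy, threshold):
    """Flag cells where the density-gradient magnitude exceeds a threshold.

    rho is a 2-D array of cell-centred densities indexed as rho[j, i],
    with grid spacings dy (axis 0) and dx (axis 1)."""
    drho_dy, drho_dx = np.gradient(rho, dy, dx)
    grad_mag = np.hypot(drho_dx, drho_dy)
    return grad_mag > threshold        # boolean refinement flags per cell
```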

While refinement can be driven by ERROR INDICATORS as described above, certain other issues also become relevant. Error indicators detect regions for refinement, but they do not actually tell whether the resolution is good enough at any given time. The issue is particularly severe for shocks: the smaller the cell, the steeper the gradient, and the indicator will keep picking the same region unless a threshold value is provided. Further, many users choose conservative values while refining a domain and generally end up refining more than the essential portion of the grid, though not the complete domain. These over-refined regions are unnecessary and, strictly speaking, contribute only additional computational effort. It is at this juncture that reliable and reasonable measures of cell error become necessary to drive the process of "coarsening", which reduces this unnecessary refinement with a view towards generating an "optimal mesh". Such measures are provided by sensors referred to as ERROR ESTIMATORS, on which the literature is abundant in FEM, though they are rare in FVM.

Control of the refinement and/or coarsening via the error indicators is often undertaken using either the solution gradient or the solution curvature. Hence the refinement variable, the refinement method and its limits all need to be considered when applying mesh adaptation.

Hybrid Meshes

A hybrid model contains two or more subsurface layers of hexahedral elements, with tetrahedral elements filling the interior. The transition between the subsurface hexahedral and interior tetrahedral elements is made using degenerate hexahedral (pyramid) elements.

High quality stress results demand high quality elements, i.e., aspect ratios and internal angles as close to 1:1 and 90°, respectively, as possible. High quality elements are particularly important at the surface. To accommodate features within a component, the quality of elements at the surface of a hexahedral model generally suffers, e.g., they become skewed. Mating components, when node-to-node contact is desired, can also adversely affect the model's element quality. Producing a tetrahedral model that contains high quality subsurface elements is even more difficult. In a hybrid model, the hexahedral elements are affected only by the surface mesh, so creating high quality elements is easy.

Minimal effort is required to convert CAD data into surface grids using the automated processes of pro-surf. These surface grids are read by pro-am. The surface grid is used to extrude the subsurface hexahedral elements. The thickness of each extruded element is controlled so that high quality elements are generated. The interior is filled automatically with tetrahedral elements. The pyramid elements that make the transition are also generated automatically.

A hybrid model will generally contain many more elements than an all-hexahedral model thus increasing analysis run-time. However, the time saved in the model construction phase – the more labor intensive phase – more than makes up for the increased run-time. Overall project time is reduced considerably. Also, as computing power increases, this “disadvantage” will eventually disappear.

Hexahedral Meshing

ANSYS Meshing provides multiple methods to generate a pure hex or hex dominant mesh. Depending on the model complexity, desired mesh quality and type, and how much time a user is able to spend meshing, a user has a scalable solution to generate a quick automatic hex or hex dominant mesh, or a highly controlled hex mesh for optimal solution efficiency and accuracy.

Mesh Methods:

Automated Sweep meshing

Sweepable bodies are automatically detected and meshed with hex mesh when possible
Edge increment assignment and side matching/mapping is done automatically
Sweep paths found automatically for all regions/bodies in a multibody part
Defined inflation is swept through connected swept bodies
User can add sizing controls, mapped controls, and select source faces to modify and take control over the automated sweeping
Adding/Modifying geometry slices/decomposition to the model also greatly aids in the automation of getting a pure hex mesh.

Thin Solid Sweep meshing

This mesh method quickly generates a hex mesh for thin solid parts that have multiple faces as source and target.
Can be used in conjunction with other mesh methods
User can add sizing controls, mapped controls, and select source faces to modify and take control over the automated sweeping

MultiZone Sweep meshing

This advanced sweeping approach uses automated topology decomposition behind the scenes to attempt to automatically create a pure hex or mostly hex mesh on complicated geometries
Decomposed topology is meshed with a mapped mesh or a swept mesh if possible. A user has the option to allow for free mesh in sub-topologies that can’t be mapped or swept.
Supports multiple source/target selection
Defined inflation is swept through connected swept bodies
User can add sizing controls, mapped controls and select source faces to modify and take control over the automated meshing

Hex-dominant meshing

This mesh method uses an unstructured meshing approach to generate a quad dominant surface mesh and then fill it with a hex dominant mesh
This approach generally gives nice hex elements on the boundary of a chunky part with a hybrid hex, prism, pyramid, tet mesh internally
Tetrahedral Meshing

The combination of robust and automated surface, inflation and tet meshing using default physics controls to ensure a high-quality mesh suitable for the defined simulation allows for push-button meshing. Local control for sizing, matching, mapping, virtual topology, pinch and other controls provide additional flexibility, if needed.

Mesh Methods:

Patch conforming mesh method:

Bottom-up approach (creates surface mesh, then volume mesh)
Multiple triangular surface meshing algorithms are employed behind the scenes to ensure a high quality surface mesh is generated, the first time
From that inflation layers can be grown using several techniques
The remaining volume is meshed with a Delaunay-Advancing Front approach which combines the speed of a Delaunay approach with the smooth-transitioned mesh of an advancing front approach
Throughout this meshing process are advanced size functions that maintain control over the refinement, smoothness and quality of the mesh

Patch independent mesh method:

Top-down approach (creates volume mesh and extracts surface mesh from boundaries)
Many common meshing problems stem from bad geometry; if bad geometry is used as the basis to create the surface mesh, the mesh will often be bad too (poor quality, connectivity, etc.)
The patch independent method uses the geometry only to associate the boundary faces of the mesh to the regions of interest thereby ignoring gaps, overlaps and other issues that give other meshing tools countless problems.
Inflation is done as a post step into the volume mesh. Since the volume mesh already exists, collisions and other common problems for inflation are known ahead of time.

Note: For volume meshing, a tetrahedral mesh generally provides a more automatic solution with the ability to add mesh controls to improve the accuracy in critical regions. On the contrary, a hexahedral mesh generally provides a more accurate solution, but is more difficult to generate.

Shell and Beam Meshing

For 2-D planar (axisymmetric), shell and beam models, ANSYS Meshing provides efficient tools for quickly generating a high quality mesh to accurately simplify the physics.

Mesh Methods for shell models:

Default surface meshing

Multiple surface meshing engines are used behind the scenes to provide a robust, automated surface mesh consisting of all quad, quad dominant or all tri surface mesh.
User can add sizing controls, and mapped controls to modify and take control over the automated meshing

Uniform surface meshing

Orthogonal, uniform meshing algorithm that attempts to force an all quad or quad dominant surface mesh that ignores small features to provide optimum control over the edge length
Describe key features of ALL existing meshing options in Ansys Mesh module and discuss their applications

The meshing tools in ANSYS Workbench were designed to follow some guiding principles:

Parametric: Parameters drive system
Persistent: Model updates passed through system
Highly-automated: Baseline simulation w/limited input
Flexible: Able to add additional control w/out complicating the workflow
Physics aware: Key off physics to automate modelling and simulation throughout system
Adaptive architecture: Open system that can be tied to a customer’s process

CAD neutral, meshing neutral, solver neutral, etc.

By integrating best in class meshing technology into a simulation driven workflow, ANSYS Meshing provides a next generation meshing solution.

London-based Seatwave

Part I

1. Introduction

London-based Seatwave.com was founded in January 2006 by Joe Cohen, who is currently President and CEO, while Ged Waring is VP of Technology and James Hamlin is Director of Online Marketing. The site has undergone continuous growth since launch and currently serves tens of thousands of monthly visitors.

Seatwave.com is a specialised online marketplace where fans can buy and sell tickets for concerts, theatre, sports and other live events, and is the largest online ticket marketplace in Europe. It works by allowing ticket sellers to post the tickets they have for sale on the Seatwave site and letting buyers bid on them. Tickets go to the highest bidder; the site guarantees delivery of tickets to the winner and takes a small percentage of the sale price as compensation.

2. Seatwave Information Technology & applications

Seatwave's success depends on its use of technology to help drive the supply and value chains of the business. In the three years since its inception, Seatwave has gone from strength to strength, winning numerous accolades, including being named 'Best Technology Media Company 2009' by The Guardian.

Pure 360 emailing Technology

Seatwave decided to partner with progressive email marketing providers, Pure360, to create and deliver a highly effective, cost efficient, email marketing campaign by capitalizing on cutting edge email marketing technology.

Seatwave’s ongoing email marketing campaign uses Pure360’s ‘Intelligent Time Sending’ tool to analyze when each of its customers is most likely to open their emails, and click-through to the website. This information is used to ensure emails arrive in recipients’ inboxes at the time he or she is most receptive – an essential tool as Seatwave’s success is dependent on the audience responding quickly and purchasing tickets.

Timing is everything for Seatwave, and it is imperative that it sends out the latest information about events and ticket availability as quickly and efficiently as possible.

Pure360's 'Automatic Message Import' tool makes it possible for Seatwave to send out the latest offers by uploading web content automatically into its email marketing campaign, meaning minimal resources are used.

Seatwave Mobile Application

LONDON, ENG (Seatwave) 14 January 2009 – Seatwave, Europe’s largest online fan-to-fan ticket exchange, today announced its newest partnership with mobile platform provider, Snaptu. As part of the company’s continued expansion into the mobile environment and in a first for fan-to- fan ticket exchanges, the new application will provide a seamless mobile box office experience for fans. It will allow them to navigate through the full range of European concert dates on their mobile phone, and purchase with one call.

Cookies

“When you view our Site we may store information on the hard drive of your computer in the form of a “cookie” (essentially a small text file). Cookies allow us to tailor the Site to your interests and preferences”. (Seatwave.com 2009)

IP Addresses

“We study visitor trends since we are interested in the successful dissemination of information through the Site. Our server creates log files of information such as the Internet Protocol (IP) address from your network, what pages were explored and the length of your visit. Analysis software is used to generate reports, which helps us to learn more about how we can enhance your experience with the Site. This information is not used to develop a personal profile of you. The log files are regularly deleted” (Seatwave.com 2009).

Seatwave Ticket finder

Seatwave has secured a deal with MSN whereby MSN portal users can use Seatwave’s ‘Ticket Finder’ to search for secondhand tickets. The new objective for the online activity is to increase sales of tickets and encourage more people to sell tickets on the site.

Secure Online Account

As a buyer, you can review all your previous orders and track their status within My Account. For sellers, My Account allows you to view or amend your listings, track your sales and send out your tickets.

Seatwave Ticket Cover

TicketCover is a new kind of insurance; Seatwave will be the first UK company to ensure that consumers are refunded the cost of a seat at sporting, music and other live entertainment events if unforeseen circumstances arise. Such circumstances include motor breakdown on the way to the venue, illness, injury, jury service and a range of other occurrences that could keep a person away from their chosen event.

The service will be administered by Mondial Insurance and the cost of the cover will be included in the price of all Seatwave tickets.

Seatwave Ticket Integrity

Seatwave guarantees that its tickets come only from legitimate sources and that they are represented accurately and honestly. It also guarantees that buyers will receive the tickets they ordered (or comparable ones) by the day of the event. If this commitment is not kept, Seatwave will take reasonable steps to source replacement tickets of an equivalent value so the buyer does not miss out, and if suitable replacement tickets (determined solely at its discretion) cannot be found, it will refund 100% of the price paid, no questions asked.

Additionally, the Seatwave TicketIntegrity™ guarantee is a two-way street: sellers are promised prompt payment from Seatwave for all orders that are confirmed and fulfilled.

3. Seatwave E-Business Models

Seatwave has adopted a combination of two e-business models: a transaction-fee revenue model and an e-auction model. Schneider (2009) explains that in the transaction-fee revenue model, "businesses offer services for which they charge a fee that is based on the number or size of transactions they process".

Seatwave has successfully adopted this model, earning a commission on every ticket sold. Tickets can be sold at any price selected by the seller, including below or above the face value printed on the ticket, and Seatwave charges buyers a 15% service charge and sellers a 10% success fee. Seatwave has additionally adopted an e-auction model, or, as it puts it, a fan-to-fan online ticket exchange.

The online auction business model is one in which participants bid for products and services over the internet.

When one thinks of online auctions one typically thinks of eBay, the largest online auction site. Like most auction companies, eBay does not actually sell goods that it owns itself. It merely facilitates the process of listing and displaying goods, bidding on items, and paying for them. It acts as a marketplace for individuals and businesses that use the site to auction off goods and services.

Several types of online auctions are possible. In an English auction the initial price starts low and is bid up by successive bidders. In a Dutch auction the price starts high and is reduced until someone buys the item. EBay also offers fixed price listings.

4. Seatwave Financial Performance

In January 2009 Seatwave had a 35% market share, GetMeIn (a UK start-up founded by James Gray and acquired by Ticketmaster) had 25%, and Viagogo had 14%. The principal industry area to which Seatwave belongs is events ticketing. Unfortunately Seatwave does not publish its financial statements, but it does provide growth margins, which make it easier to gauge the success and profitability of the company.

Europe’s Leading Ticket exchange increases lead on the field

London: 05 May 2009 – Seatwave, Europe's leading fan-to-fan ticket exchange, today announced explosive growth for Q1, supported by ComScore's latest report confirming that Seatwave is Europe's largest ticket exchange by a factor of more than 2 to 1 versus its nearest competitor. March sales alone grew by 287% year on year, one of many indicators of the company's increasing success.

[Insert courtesy of Seatwave.com]

Seatwave's success can be attributed to two main factors: "superior customer service and an excellent online customer experience, coupled with a great business model". The company's site demonstrates how seriously it takes the customer's online experience. Burgess believes that the specific online experience offered to customers is an essential ingredient of the company's success: the site is streamlined, easy to navigate and smartly designed to maintain its efficiency and functionality. The smart design is evident in the ability to quickly buy or sell tickets, and the business model adopted is excellent because there are no time or geographical constraints; tickets can be bought and sold at any time, 24/7, and sellers and bidders can participate from anywhere with internet access. This makes auctions more accessible and reduces the cost of "attending" one.

5. Seatwave Strategy

Seatwave’s biggest market is the UK and its long term strategy for growth is based on three key parts.

Growth by global Expansion
Offering Market-Leading Consumer Protection
Partnerships and Affiliations

1. The Seatwave business is growing rapidly and is the market leader in all the markets they operate within.

Seatwave operates in nine countries outside the UK: Germany, the Netherlands, Italy, Spain, France, Austria, Belgium, Switzerland and Ireland
Over 700,000 tickets on sale at any one time
Customer base of over 1.9 million unique active users
1.7 million tickets for events in over 38 countries
European secondary ticket market worth $6.8–9.7 billion
Bi-modal approach, i.e. a transactional revenue model and an e-auction model
Seatwave Corporate Sales, a dedicated service for corporate entertainment needs

2. Utilizing Technology & Offering Consumer Protection services

TicketIntegrity™ guarantees that buyers will receive the tickets they ordered in good time for the event, or offers a full refund.
TicketCover™ which provides a full refund if an event is cancelled. This refund includes the full price of the tickets purchased.
TicketCover™ Premium, which covers buyers for a range of other circumstances that may prevent them from attending their performance, such as transport failure or severe illness.
TicketFinder, a search application used on the MSN portal to help users find secondhand tickets.

3. Partnerships and Affiliations

Official Ticket exchange partner of 9 different sport clubs
Affiliations with 4 separate music groups e.g. MTV.co.uk, MOBO Awards and Live Nation
In partnership with major media organizations e.g. MSN, Virgin Media and a new partnership with HMV
Seatwave donates a portion of every sporting ticket sold online to Sparks and is also a member of the Action for Brazil's Children Trust.
Seatwave are in partnership with UPS to help facilitate and ensure a reliable ticket exchange transaction between buyers and sellers.
Part II
6. Suggested Evaluation Criteria

The methodology used for the evaluation of Seatwave.com is based on WebQual, an instrument for assessing the usability, information quality and service-interaction quality of internet websites, particularly those offering e-commerce facilities (Webqual.co.uk homepage 2009).

WebQual (www.webqual.co.uk) is based on quality function deployment (QFD), a "structured and disciplined process that provides a means to identify and carry the voice of the customer through each stage of product and or service development and implementation" (Slabey, 1990). In the context of WebQual for traditional websites, users are asked to rate target sites against each of a range of qualities using a 7-point scale. The users are also asked to rate each of the qualities for importance (again using a 7-point scale), which helps in understanding which qualities the user considers most important in any given situation.

In order to build a profile for Seatwave.com, the data was summarised by questionnaire subcategory, and the total score for each subcategory was indexed against the maximum score (based on the importance rating for each question multiplied by three). The results suggested that the information quality and usability aspects of the website rated extremely well, at 100% each. Seatwave has achieved this by providing tools that help the seller choose an appropriate selling price, comparing the average sale price of tickets sold for the particular event, and by taking a simple but structured approach to the design of the website and the way it presents information to its customers.

Additionally, the service-interaction weighted score was slightly lower than the other categories, at 92.8%, but still suggested a strong sense of community, personalisation and security in processing transactions; however, the lack of confidence in the website caused by the ethical issues surrounding the ticket-resale industry, and the thin line of communication to the organisation, robbed the site of a maximum score. It must be noted, though, that while WebQual provides a valuable profile of users' perception of e-commerce quality, it was not particularly useful for evaluating the technical aspects of the site, so an independent evaluation of its technical viability as an e-commerce site was carried out.

One problem noticed when evaluating the site from a technical point of view is that it failed W3C markup validation. This means there is no guarantee that the site will look the same in different browsers, or even that it will work correctly; it also means that non-graphical browsers and HTML translators, such as those used by blind people, may not be able to present the site properly.

The site has an XHTML Transitional DOCTYPE header, and this standard should be strictly adhered to in order to avoid the problems outlined above. By doing so the company can be sure that it is reaching the widest possible audience, as its site would then work with the vast majority of viewing technologies.

Part III
Proposed future strategy for Seatwave
Future Strategy

Seatwave can enhance its future e-commerce business by aligning its current strategy with the primary ticket-selling industry, which will build up its reputation and strengthen its brand image. According to Katie Allen of the Guardian, Seatwave and rivals such as Viagogo have been accused of encouraging the growth of "bedroom touts", who snap up tickets with the sole purpose of making a profit by selling them on; in addition, Seatwave, as a secondary-market ticket supplier, has no way of verifying whether tickets are valid, counterfeit or genuine. By becoming the leader in both the primary and the secondary online ticketing markets, Seatwave will be able to improve its image as a reputable brand, penetrate new markets and hence increase its profit margins.

New Business Strategy Key Factors
New Potential Packages

As the two entities further combine their operations, they could begin to offer more packages to consumers such as discounted bundles of tickets and recorded music, and could offer corporate sponsors more attractive terms, too. At the same time, a vertically integrated behemoth could have the power to dictate higher prices.

Power to Dictate Price (Due to Economies of Scale)

Because it would be so vertically integrated, the new company would also be able to muscle out competing concert promoters and have more power to dictate ticket prices to consumers. The new company would have close ties to an array of artists and boast affiliation and new partnerships right across the entertainment spectrum

Expansion into the Americas

The new business strategy will allow Seatwave to venture out into the Americas, which are tightly regulated against ticket reselling. The new strategy will also allow the business to segment its services geographically.

Alliances with other companies

The new strategy would merge Europe’s Largest ticketing exchange facilitator with a dominant ticketing and artist-management company. The resulting firm would be able to manage everything from recorded music to ticket sales and tour sponsorship. It could package artists in new ways, for example, allowing corporations such as a mobile phone provider to sponsor a concert tour and to sell an exclusive download of a song.

Conclusion

In conclusion, Seatwave has dominated the European market and has attracted investment whilst protecting its customers; however, the ticket-resale market remains shrouded in suspicion, unfair practices and dodgy dealings. Seatwave and its two main rivals in the UK operate in a controversial area: ticket touts have a bad reputation, and Seatwave and its peers are, according to their critics, merely electronic equivalents of the spivs who hang around the doors of music and sporting venues offering dubiously acquired products.

Mr. Cohen points out that his venture offers a risk-free channel to those who are genuinely unable to obtain the tickets they want; however, a sales account manager for Seatwave, Lee Lake, was caught purchasing tickets for various concerts and gigs using four different addresses and four different credit cards, selling the same tickets through Seatwave at significantly higher prices than face value and not declaring that he was an employee of Seatwave in the transactions. In response, Chief Executive Joe Cohen allegedly stated that the tickets were purchased as "backstop" tickets in case "fans" selling on Seatwave let people down. This shows that Seatwave's strategy needs a revamp to earn a better reputation and remove the stigma that has dogged the industry for so long; the integration of its already strong e-commerce offering with an improved business strategy will be the pied piper that draws large audiences to the site and puts its critics to rest.


Literature Review on Metamaterial

Left-Handed Metamaterials (LHMs) have a few unique properties, such as negative refraction and backward-wave propagation. In this chapter, the basic theory behind these unique properties is presented and some applications of LHMs to antennas are discussed.

DEFINITION & BACKGROUND OF LEFT-HANDED METAMATERIAL

An electromagnetic metamaterial can be defined as an artificial, effectively homogeneous electromagnetic structure with unusual properties not readily found in nature. A Left-Handed Metamaterial (LHM) [17][18], or Double-Negative (DNG) metamaterial, is an electromagnetic metamaterial that exhibits negative permittivity and negative permeability. This behaviour is characterised by a negative refractive index and an anti-parallel phase velocity, also known as a backward wave.

HISTORY OF LEFT-HANDED METAMATERIAL (LHM)

The initial work on LHMs was started by V. G. Veselago of the Lebedev Physical Institute in Moscow, who made a theoretical speculation about an artificial material exhibiting negative permittivity and negative permeability. Veselago's speculation remained dormant for 29 years until 1996, when J. B. Pendry of Imperial College London and his co-authors from GEC-Marconi published a paper about an artificial metallic construction exhibiting negative permittivity and negative permeability. Following this discovery, the first experimental verification was made in 2001 by Shelby, Smith and Schultz at the University of California. Their left-handed material structure consists of split ring resonators and thin wires inspired by J. B. Pendry, as shown in figure 3.1.

Figure 3.1: First experimental LHM structure

Since the introduction of LHMs twelve years ago, many researchers have been interested in investigating this artificial material, and several have used LHMs to improve the properties of microwave devices such as antennas and filters. Many papers have been published on LHMs integrated with antennas, and their properties have been analysed. The focusing effect of an LHM can make a low-gain antenna directive and increase its gain.

FEATURES OF METAMATERIAL
Improvement in the performance of a small monopole antenna, realized via the use of an ENG envelope that compensates for its high capacitive reactance.
Lens effect produced by DNG slabs, useful for enhancing the directivities of small antennas, e.g. dipoles and Microstrip patches, by collimating the cylindrical waves emanating from these antennas and focusing them at infinity.
Creation of super lenses which can have a spatial resolution below that of the wavelength.
UNIQUE PROPERTIES OF LEFT-HANDED METAMATERIALS

Negative Refractive Index: For a conventional material with εr > 0 and μr > 0, the refractive index is given by n = √(μr εr), so the material possesses a positive refractive index. A left-handed metamaterial, however, has both negative permittivity (εr < 0) and negative permeability (μr < 0), so the refractive index n takes a negative value [8]. Inverse Snell's law: an incident ray entering a left-handed metamaterial from a right-handed medium undergoes refraction, but in the opposite sense to that usually observed between two right-handed media.

Snell's law is

n1 sin θ1 = n2 sin θ2,                                          (3.3.1a)

where θ1 is the angle of incidence and θ2 is the angle of refraction. Supposing medium I and medium II are conventional materials with n1 > 0 and n2 > 0 respectively, the refracted ray is bent at a positive angle with respect to the normal line OO′, as indicated by the fourth ray in figure 3.2. If medium II is a left-handed metamaterial with n2 < 0, the refracted ray is bent the opposite way, at a negative angle with respect to OO′, as indicated by the third ray in figure 3.2.
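A small numerical sketch of this inverse Snell behaviour (with assumed, purely illustrative values) is shown below: for a negative-index second medium the computed refraction angle comes out negative, i.e. the refracted ray lies on the same side of the normal as the incident ray.

```python
import numpy as np

def refraction_angle(n1, n2, theta_incident_deg):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2); returns theta2 in degrees."""
    theta1 = np.radians(theta_incident_deg)
    return np.degrees(np.arcsin(n1 * np.sin(theta1) / n2))

print(refraction_angle(1.0, 1.5, 30.0))    # conventional medium: positive angle
print(refraction_angle(1.0, -1.5, 30.0))   # left-handed medium: negative angle
```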

Figure 3.2: Passage of a light ray through the boundary between medium I, with positive refractive index n1 > 0, and medium II, with refractive index n2.

The phase velocity expression vp = c/n shows that the phase velocity vp is tied to the index of refraction, where c denotes the speed of light in vacuum. Since an LHM has a negative refractive index (n < 0), the phase velocity is negative. In an LHM the phase velocity is directed opposite to the energy flow, in the sense that energy leaves the source in waves whose phase velocity points backward, as shown in figure 3.3.

Figure 3.3: The energy flow and group velocity propagate forward in LHMs but the phase velocity is backward

Veselago also predicted that the Doppler and Cerenkov effects would be reversed in an LHM: an approaching source appears to radiate at a lower frequency, and charged particles moving faster than the speed of light in the medium radiate in a backward cone rather than a forward cone. These two exotic properties are not employed in this dissertation, but details about them can be found in the literature.

LEFT-HANDED METAMATERIAL STRUCTURE

The first LHM structure consists of split ring resonators (SRRs) and thin wires (TWs) or capacitance-loaded strips (CLSs) [19]. The SRR exhibits a negative value of permeability, while the CLS and TW exhibit a negative value of permittivity over a certain frequency range.

Split Ring Resonator (SRR)

Figure 3.4: (a) Circular split ring resonator and (b) square split ring resonator

A split ring resonator (SRR), as shown in figure 3.4, is the part of the LHM structure that exhibits a negative value of permeability. When the excitation magnetic field is perpendicular to the plane of the structure, it generates a magnetic dipole moment. The SRR is a highly conductive structure in which the capacitance between the two rings balances its inductance, and the high induced current density creates a large magnetic moment.

Capacitance Loaded Strip (CLS) and Thin Wire (TW)

Figure 3.5: (a) Capacitance-loaded strip (CLS) and (b) thin wire (TW)

Figure 3.5(a) shows the capacitance-loaded strip (CLS) and figure 3.5(b) shows the thin wire (TW). The CLS and TW produce a strong dielectric-like response: as the electric field propagates parallel to the TW or CLS, it induces a current along them, which generates an electric dipole moment in the structure and yields a plasmonic-type permittivity frequency response.

CST SOFTWARE

CST was founded in 1992 by Thomas Weiland. The main product of CST is CST STUDIO SUITE, which comprises various modules dedicated to specific application areas. There are modules for microwave & RF applications, summarised in CST MICROWAVE STUDIO, for low frequency (CST EM STUDIO), PCBs and packages (CST PCB STUDIO), cable harnesses (CST CABLE STUDIO), temperature and mechanical stress (CST MPHYSICS STUDIO), and for the simulation of the interaction of charged particles and electromagnetic fields (CST PARTICLE STUDIO). All modules are integrated with a system and circuit simulator (CST DESIGN STUDIO). The version used in this work is CST Microwave Studio 2010.

Figure 3.6 CST Microwave Studio

In the next chapter, the design of the LHM is discussed and the procedure for simulating the LHM using the CST software is elaborated thoroughly. The designs of the metamaterial structures and Microstrip patch antennas are also presented.

CST Microwave Studio is a fully featured software package for electromagnetic analysis and design in the high frequency range. It simplifies the process of entering the structure by providing a powerful solid modelling front-end based on the ACIS modelling kernel. Strong graphical feedback simplifies the definition of the device even further. After the component has been modelled, a fully automatic meshing procedure (based on an expert system) is applied before the simulation engine is started. The simulators feature the Perfect Boundary Approximation (PBA™) method and its Thin Sheet Technique (TST™) extension, which increase the accuracy of the simulation by an order of magnitude in comparison to conventional simulators. Since no single method works equally well in all application domains, the software contains four different simulation techniques (transient solver, frequency domain solver, eigenmode solver, modal analysis solver) that best fit their particular applications.

The most flexible tool is the transient solver, which can obtain the entire broadband frequency behavior of the simulated device from only one calculation run (in contrast to the frequency stepping approach of many other simulators). This solver is very efficient for most kinds of high frequency applications such as connectors, transmission lines, filters, antennas and many more.