Study on Phone Usage for Financial Services

A STUDY ON USAGE OF MOBILE PHONE IN THE ACCESS OF FINANCIAL SERVICES AMONG RESIDENTS OF KANGUNDO CONSTITUENCY

Background to the study

The use of mobile phones has been touted as the next big thing in empowering communities. ICT plays a major role in virtually all spheres of life, which explains why the government has supported the laying of ICT infrastructure across the country. It is reported that the Central Bank of Kenya’s enabling regulatory approach allows 23 million people (74% of the adult population) to use mobile financial services via 90,000 agents (Alliance for Financial Inclusion, 2013, p. 20).

Expansion of the ICT sector contributes directly to a society’s access to information and, subsequently, to its empowerment. The use of the mobile phone has revolutionized banking in the recent past by netting the previously unbanked. Innovations in mobile telephony have made tremendous contributions to the advancement of financial services. Banks have jostled to outsmart each other by launching varied mobile banking services, as have the mobile network operators. Such bank services depend on the platforms of the existing mobile network operators and are accessed through USSD, WAP applications and internet banking. We have seen the emergence of mobile bank accounts such as M-benki (KCB), M-Shwari (CBA), M-Kesho (Equity Bank) and Pesa Mob (Family Bank), and there have been partnership deals between these banks and mobile network operators. Customers are able to access credit facilities through these mobile bank accounts as well as make loan repayments. Other services include funds transfer, airtime top-up, credit card payment, mini-statements, balance enquiries and even stoppage of cheques. Agency banking, which was meant to bring banking services closer to customers, equally relies heavily on the use of mobile phones; examples include KCB Mtaani and Co-op Jirani. However, it is notable that there is a variation in usage of the mobile phone platform between urban and rural areas. This study seeks to establish the trends in the usage of mobile phones to access financial services by residents of Kangundo Constituency.

Statement of the problem

This is a study to gauge whether residents of Kangundo constituency have embraced mobile phone technology to access financial services.

Significance of study

This study seeks to appreciate the use of technology to ease access to financial services. Traditional methods of visiting bank branches have long been overtaken by advances in technology. The study will therefore explore whether the residents of this constituency have taken advantage of the more convenient methods of financial services provision, now commonly known as paperless and branchless banking. Access to banking services includes access to credit facilities, which are a key catalyst for economic empowerment. The findings of this survey will prove useful to the constituents of Kangundo, financial services providers as well as mobile network operators.

Purpose of the study

The objectives of this study are:

To establish the number of residents who own mobile phones
To establish the number of residents who have registered for mobile money services such as M-Pesa, Airtel Money, Yu Cash and Orange Cash
To establish the number of residents who have opened mobile bank accounts
To establish applications used to access mobile banking services: USSD, WAP, Internet banking
To establish the usage of bank agents in access of financial services
To establish demographic trends in access of Banking services (Age, sex, education, employment status)

Definition of concepts

USSD- Unstructured Supplementary Service Data

WAP– Wireless Application Protocol

Unbanked– By definition, unbanked customers have no checking, savings, credit, or insurance account with a traditional, regulated depository institution (Deloitte, 2012, p. 2)

Literature Review

The government recognizes ICT as a foundation for economic development. Kenya’s vision of a knowledge-based economy aims at shifting the current industrial development path towards innovation, where the creation, adoption, adaptation and use of knowledge remain the key source of economic growth. Knowledge is a critical tool for expanding human skills and rests largely on a system of producing, distributing and utilizing information, which in turn drives productivity and economic prosperity (Government of Kenya, 2013, p. 21). One such ICT tool is the mobile phone, which continues to offer a myriad of opportunities, specifically in the financial sphere. To leverage the above, the Government is betting on growth in communication to spur economic growth in tandem with the Vision 2030 blueprint.

As Watts (2001) observes, “some clients may prefer to access services at a distance. Increasingly, in all fields, consumers want a service to be available when they identify a need for it, with minimum delay and minimum effort: they want it here, and they want it now” (p. 6). The urge to access services urgently and at minimum cost is making more people gravitate towards technology-based products that are available through the mobile phone. The use of this gadget has simplified life: transactions can be initiated and completed at one’s convenience, and it is notable that settlement of these transactions is instant.

ICTs increase efficiency, productivity, and access to goods, services, information, and markets, and demand for these benefits is high. If the right complements (power, connectivity, content, skills and support systems, functional markets and supportive policy frameworks) can be put in place, demand for ICT will be correspondingly high (Kramer, Jenkins, & Katz, 2007, p. 9). With Kangundo being a rural area, we shall be interested in knowing how the use of the mobile phone has impacted its residents, and whether they have taken full advantage of this revolutionary tool that continues to transform lives across the globe.

Mobile phones have come to characterize the everyday life of Kenyans. Cheap Chinese phones have found their way into the market, easing the affordability of this ICT tool. Mobile ownership at the household level is almost as high as access: approximately 75% of households have at least one member who owns a mobile phone. In rural areas ownership is 67%, while in urban areas it reaches 90% (CCK, 2011, p. 13).

It is essential for banks to sensitize customers on mobile banking and to ensure that customers maximize its use, bearing in mind the capital invested (Korir, 2012, p. 43). Information is power, and banks have a role to play if they are to penetrate and crack open the mobile banking market. Banks will rely heavily on studies such as this one to inform their decisions on the best way to tap into this market. The government has indeed been at the forefront in championing ease of access to banking services for all citizens.

Branchless banking through retail agents is made possible by the information and communication technologies that customers, retail agents and mobile network operators use to record and communicate transaction details quickly, reliably and cheaply over great distances. Among the first mobile network operators in the world to offer branchless banking were Globe Telecom and SMART in the Philippines: SMART launched its SmartMoney service in 2000 (in conjunction with Banco de Oro), followed by Globe’s G-Cash service in 2004. Customers can store cash, send funds from person to person, pay bills, make loan repayments and purchase goods at shops, although they primarily use G-Cash to buy airtime and to send money to friends and family (Financial Sector Deepening, 2009a, p. 1).

Mobile banking represents a more cost-efficient channel for the banks, allowing them to charge less for transactions and permitting the consumer to have immediate access to information related to their bank accounts (p. 3).

Worldwide, more people now own a mobile phone than a bank account, and a revolution in mobile phone payments is taking place. The way mobile devices are evolving makes it difficult for banks to find the right solution to manage complex technologies and provide a consistent service to customers (http://www.cr2.com/solutions/mobile-banking/mobile-banking-solution.html).


References

Alliance for Financial Inclusion. (2013). A High Level Conference on Kenya’s Economic Successes, Prospects and Challenges – Making Inclusive Growth a Reality. Retrieved February 22, 2014.

Deloitte. (2012). Banking the Unbanked: Prepaid Cards, Mobile Payments, and Global Opportunities in Mobile Banking. Retrieved February 22, 2014, from https://www.deloitte.com/assets/DcomunitedStates/Local%20Assets/Documents/FSI/US_FSI_Bankingtheunbanked_043012.pdf

Why Star Topology is Best

1.0 SYNOPSIS

This study focused on the star network topology. A star network is a local area network in which all devices are directly linked to a central point called a hub, so that the layout resembles a star with the hub at its centre.

The findings from the study revealed that in a star topology every computer is connected to a central node called a hub or a switch. A hub is a device where all the linking media come together, and the data transmitted between the network nodes passes through this central hub.

The project further explains the advantages, disadvantages and usage of the star network topology. The centralized nature of a star network provides ease of management while also isolating each device in the network. The disadvantage of a star topology, however, is that network transmission relies heavily on the central hub: if the central hub fails, the whole network is out of action.

Star networks are one of the most common computer network topologies used in homes and offices. In a star network topology it is possible to keep backups of all the important data on the hub in a private folder; if one computer fails, its data can still be reached from the next computer in the network by accessing the backup files on the hub. This centralized arrangement is often regarded as offering more privacy than other network layouts.

2.0 INTRODUCTION

The main objective of this project is to discuss the advantages, disadvantages and usage of the star network topology. A topology is the physical structure of a network. A star topology is a network structure comprising a central node to which all other devices attach directly and through which all other devices intercommunicate (http://www.yourdictionary.com/telecom/star-topology). The hub, the leaf nodes and the transmission lines between them form a graph with the topology of a star.

The star is one of the oldest and most common topologies in local area networks. Its design comes from the telephone system, in which all calls are managed by a central switching station. In the same way, each workstation of a star network is connected to a central node known as a hub. The hub is the device where all the linking media come together; it coordinates the activities of the network and also acts as a repeater for the data flow. Generally, when building a network of two or more computers, you need a hub. It is possible to connect two computers to each other directly without a hub, but as soon as a third computer is added to the network, a hub is needed to allow proper data communication within the network. In a star network the whole network is reliant on the hub. (http://www.buzzle.com/editorials/2-6-2005-65413.asp)

Devices such as file servers, workstations and peripherals are all linked to the hub, and all data passes through it. When a packet arrives at the hub, the hub forwards it to all the nodes linked to it, but only the node the packet is addressed to accepts it; data on a star network therefore passes through the hub before continuing to its target. Different types of cables are used to link computers, such as twisted pair, coaxial cable and fibre optics, the most common cable media for star topologies being unshielded or shielded twisted pair copper cabling. One end of the cable is plugged into the local area network card while the other end is connected to the hub.
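
As a rough, illustrative sketch of this hub-and-spoke behaviour (the class and node names below are hypothetical, not drawn from any real networking library), a hub can be modelled as a device that repeats every incoming frame to all attached nodes, with only the addressed node keeping it:

```python
# Minimal sketch of hub behaviour in a star topology (hypothetical names).
class Node:
    def __init__(self, name):
        self.name = name
        self.received = []

    def accept(self, frame):
        # Only the addressed node keeps the frame; the others drop it.
        if frame["dst"] == self.name:
            self.received.append(frame)

class Hub:
    def __init__(self):
        self.nodes = []          # one dedicated link per attached device

    def attach(self, node):
        self.nodes.append(node)

    def transmit(self, frame):
        # A classic hub simply repeats the frame out of every port.
        for node in self.nodes:
            node.accept(frame)

hub = Hub()
alice, bob, printer = Node("alice"), Node("bob"), Node("printer")
for n in (alice, bob, printer):
    hub.attach(n)

hub.transmit({"src": "alice", "dst": "printer", "payload": "print job"})
print(len(printer.received), len(bob.received))   # 1 0
```

If the hub object is removed from this sketch, no node can reach any other, which mirrors the dependence on the central hub described above.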

The centralization of the star topology makes the network easy to monitor and manage, which is a major advantage. Since the whole network is reliant on the hub, if the entire network is not working then the problem most likely lies with the hub. The hub thus makes troubleshooting easy by offering a single point for error detection, although the dependence on that single point is also very high. The central function is cost-effective and easy to maintain.

Star topology also has some drawbacks. If the hub encounters a problem, the whole network fails. As noted above, however, keeping backups of important data on the hub means that if an individual computer fails its data can still be reached from another computer on the network.

3.0 BACKGROUND STUDY

In this section the researcher clarifies and explains in detail some of the advantages, disadvantages and usages of the star topology. These three aspects form the core of this project.

3.1 ADVANTAGES OF STAR NETWORK

3.1.1 Isolation of devices: Each device is isolated by the link that connects it to the hub, which makes isolating an individual device simple and prevents any non-centralized failure from affecting the rest of the network. In a star network, a cable failure will isolate the workstation that it links to the central computer, but only that workstation will be isolated; all the other workstations will continue to function normally, except that they will not be able to communicate with the isolated workstation (http://en.wikipedia.org/wiki/Star_network).

3.1.2 Simplicity: The topology is easy to understand, establish and navigate. Its simplicity obviates the need for complex routing or message-passing protocols and, as noted earlier, the isolation and centralization simplify fault detection, as each link or device can be probed individually. Due to its centralized nature, the topology offers simplicity of operation (http://en.wikipedia.org/wiki/Star_network).

3.1.3 A single cable fault does not affect the whole network: In a star topology, each network device has a home run of cabling back to the network hub, giving each device a separate connection to the network. If there is a problem with a cable, it will generally not affect the rest of the network. The most common cable media in use for star topologies is unshielded twisted pair copper cabling. If small numbers of devices are used in this topology the data rate will be high, and it is best suited to short distances (http://fallsconnect.com/topology.htm#a).

3.1.4 New computers or devices can easily be added without interrupting other nodes: The star network topology works well when computers are at scattered points, and it is easy to add or remove computers. New devices or nodes can be added to the star network simply by extending a cable from the hub. If a device such as a printer or a fax machine is attached to the hub, all the other computers on the network can access it by simply going through the hub; the device need not be installed on every computer in the network. The central function is cost-effective and easier to maintain. The topology also suits situations where the computers are reasonably close to the vertices of a convex polygon and the system requirements are modest, and the failure of one computer does not affect communication among the rest (http://searchnetworking.techtarget.com/dictionary/definition/what-is-star-network.html).

3.1.5 Centralization: Star topologies reduce the chance of network failure by linking all of the computers to a central node, so all computers can communicate with each other by transmitting to and receiving from the central node only. Because the central hub is the bottleneck, increasing the capacity of the central hub, or adding additional devices to the star, allows the network to be scaled very easily. The central node also allows the traffic flowing through the network to be inspected, which helps in evaluating the traffic and identifying suspicious behaviour (http://www.buzzle.com/articles/advantages-and-disadvantages-of-different-network-topologies.html).

3.1.6 Easy to troubleshoot: In a star network the whole network is reliant on the hub, so if the entire network is not working the problem most likely lies with the hub. This makes troubleshooting easy by offering a single point for error detection, although at the same time the dependence on that single point is very high.

3.1.7 Better performance: A star network prevents unnecessary passing of data packets through intermediate nodes; at most three devices and two links are involved in any communication between two devices in this topology. The topology does concentrate a heavy load on the central hub, but if the hub has plenty of capacity, very high network use by one device does not affect the other devices. Data packets are delivered quickly because they do not have to travel through any unnecessary nodes. The big advantage of the star network is therefore speed, since each computer terminal is attached directly to the central computer (http://en.wikipedia.org/wiki/Star_network).

3.1.8 Easy installation: Installation is simple, inexpensive and fast because of the flexible cable and the modular connector.

3.2 DISADVANTAGES OF STAR NETWORK

3.2.1 If the hub or concentrator fails, nodes attached are disabled: The primary disadvantage of a star topology is the high dependence of the system on the functioning of the central hub. While the failure of an individual link only results in the isolation of a single node, the failure of the central hub renders the network inoperable, immediately isolating all nodes. (http://www.buzzle.com/articles/advantages-and-disadvantages-of-different-network-topologies.html )

3.2.2 The performance and scalability of the network also depend on the capabilities of the hub. Network size is limited by the number of connections that can be made to the hub, and performance for the whole network is limited by its throughput. While in theory traffic between the hub and a node is isolated from other nodes on the network, other nodes may see a performance drop if traffic to another node occupies a significant portion of the central node’s processing capability or throughput (http://en.wikipedia.org/wiki/Star_network).

Furthermore, wiring up of the system can be very complex.

3.2.3 The primary disadvantage of the star topology is that the hub is a single point of failure: If the hub fails, the whole network fails, because every computer on the network connects through it, and communication between the computers breaks down.

3.2.4 Star topology requires more cable length: Extending the network requires additional cables, which makes installation more intricate.

3.2.5 More expensive than other topologies: The star is expensive because of the cost of the hub, and it uses a lot of cable, making it one of the most costly networks to set up; trunking is also needed to keep the cables out of harm’s way. Every computer requires a separate cable to form the network. A common cable used in star networks is UTP, the unshielded twisted pair cable, typically terminated with RJ45 connectors for Ethernet.
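
To make the extra cabling concrete, the following small calculation (the distances are invented example figures, not measurements) compares the total cable run of a star layout, where every workstation has its own home run to the hub, with a single shared bus-style run past the same machines:

```python
# Illustrative cable-length comparison (invented distances, in metres).
# In a star, each workstation needs its own cable back to the hub.
distances_to_hub = [12, 18, 25, 9, 30, 22]   # one home run per workstation
star_total = sum(distances_to_hub)           # 116 m of cable

# A single daisy-chained (bus-style) run past the same six desks,
# assumed here to be roughly the span of the room plus short drops.
bus_total = 40                               # 40 m of cable

print(f"star cabling: {star_total} m, bus cabling: {bus_total} m")
```

Even with modest distances, the per-device home runs add up, which is why trunking and cable cost weigh against the star layout.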

3.3 USAGES OF STAR NETWORK

Star topology is a networking setup used with 10BASE-T cabling (also called UTP or twisted-pair) and a hub. Each item on the network is connected to the hub like the points of a star. The protocols used with star configurations are usually Ethernet or LocalTalk. Token Ring uses a similar topology, called the star-wired ring (http://fallsconnect.com/topology.htm#a).

Star topology is the most common type of network topology used in homes and offices. In the star topology there is a central connection point called the hub, which may be a dedicated hub device or simply a switch. The best advantage of a star network is that when a cable fails, only one computer is affected rather than the entire network.

Star topology is used to reduce the probability of network failure by connecting all of the systems to a central node. This central hub rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only (http://en.wikipedia.org/wiki/Star_network).

A star network transmits data between the network nodes across the central hub. When a packet arrives at the hub, the hub passes it to all nodes connected to it, but only the destination node accepts it.

In local area networks where the star topology is used, each machine is connected to a central hub. In contrast to the bus topology, the star topology gives each machine a point-to-point connection to the central hub, so there is no shared cable to act as a single point of failure (the hub itself, of course, remains one). All of the traffic that traverses the network passes through the central hub, which acts as a signal booster or repeater and in turn allows the signal to travel greater distances.

When it is important that your network have increased stability and speed, the star topology should be considered. When you use a hub, you get centralized administration and security control, low configuration costs and easy troubleshooting. When one node or workstation goes down, the rest of your network will still be functional.

4.0 APPENDIX

As the name suggests, this layout is similar to a star. The illustration shows a star network with five workstations (or six, if the central computer acts as a workstation). Each workstation is shown as a sphere, the central computer, which is the hub, is shown as a larger sphere, and connections are shown as thin flexible cables; the connections can be wired or wireless links. The hub is central to a star topology, and the network cannot function without it. It connects to each separate node directly through a thin flexible cable (10BASE-T cable): one end of the cable is plugged into the connector on the network adapter card (either internal or external to the computer) and the other end connects directly to the hub. The number of nodes you can connect to a hub is determined by the hub.

5.0 CONCLUSION

A star network is a local area network in which all computers are directly connected to a common central computer. Every workstation is indirectly connected to every other workstation through the central computer, and in some star networks the central computer can also operate as a workstation.

A star network topology is best suited to smaller networks and works efficiently when there is a limited number of nodes. One has to ensure that the hub or central node is always working, and extra security features should be added to the hub because it is the heart of the network. To expand a star topology network, you add another hub and move to a “star of stars” topology.

In a star network topology it is possible to keep backups of all the important data on the hub in a private folder; if one computer fails, its data can still be reached from another computer in the network by accessing the backup files on the hub.

6.0 REFERENCES
Available on http://en.wikipedia.org/wiki/Star_network
Available on http://en.wikipedia.org/wiki/Very_small_aperture_terminal
Available on http://fallsconnect.com/topology.htm#a
Available on http://searchnetworking.techtarget.com/dictionary/definition/what-is-star-network.html
Available on http://www.answers.com/topic/star_network
Available on http://www.buzzle.com/articles/advantages-and-disadvantages-of-different-network-topologies.html
Available on http://www.buzzle.com/editorials/2-6-2005-65413.asp
Available on http://www.blurtit.com/q826101.html

Speaker Driver: Comparison of Options

Speaker driver choice is a very important consideration, since the transducers themselves are the most fundamental part of the speaker. Regardless of other factors, one can never expect inferior drivers, and hence the system built around them, to perform well.

There are two main options when choosing drivers: electrostatic or conventional voice-coil designs. Although many seem to be under the impression that electrostatic loudspeakers are a modern invention, this is not the case; Janszen was granted the first U.S. patent for such a device in 1953 [1]. Considering the relatively small market penetration of electrostatic transducers and the fact that they tend to appear largely in high-end designs, one might be led to assume that electrostatic panels are superior to conventional drivers. This, however, is only partially true.

One advantage of electrostatic panels is that full-range designs are possible, eliminating the need for crossovers and hence the associated problems with frequency and phase response in the crossover band. Another advantage is that the electrostatic panel is generally very light and hence offers excellent transient response, whilst also offering very good directionality and imaging. The latter may also be seen as a disadvantage, since it effectively makes the ideal listening position rather narrow.

In terms of disadvantages, the chief problem with electrostatic designs is a difficulty in reproducing bass frequencies at high SPLs. Generally the panel excursion is small, which makes it hard for electrostatic transducers to move the required volume of air at low frequencies. Furthermore, since electrostatic transducers are not meant for use with an enclosure, phase cancellation is an issue, again resulting in reduced bass performance. Audiostatic, a company that manufactures audiophile full-range electrostatic speakers, admits of its own devices with regard to bass that “Obviously because of the limited membrane excursion they won’t produce ear shattering levels at that frequency” [2].

As a result of the aforementioned bass performance, many high-end electrostatic speakers are in fact hybrids, using voice-coil woofers for low frequencies with electrostatic panels covering the mid and high range. One example is the Martin Logan Summit[3], which whilst described as “our most advanced and sophisticated full-range loudspeaker” nevertheless makes use of two 10” woofers for low-end reproduction. Of course in this situation a crossover is still required, so the advantage of the possibility of a full-range design is often nullified in practice. Still, electrostatics may prove very attractive as high quality mid to high frequency drivers, although they are certainly not cheap.

In choosing conventional voice-coil drivers, there are many factors to consider. In terms of quality, it is certainly true that one does indeed get what one pays for. Whilst high quality manufacturers such as SEAS[4] are happy to provide detailed frequency response plots and Thiele-Small parameters for their transducers, many cheaper manufacturers are less transparent about their devices.

One common trick to beware of, often used by less scrupulous manufacturers, is the quoting of a recommended frequency range without stating the variation in output (in dB) across this range. A recommended operating range without any indication of the actual performance within the frequency band is virtually meaningless. Many assume a ±3dB range is implied when reading such data; it is unwise to make such assumptions.

Furthermore, even if frequency response across a range is qualified with the variation in output in dB, this is still not ideal. Obviously one desires that any variation in output magnitude will be a smooth variation; one still has no idea of how “lumpy” the response might be. For these reasons it is best to choose drivers that are accompanied by frequency plots, since this gives a far more accurate representation of true performance.

Another important consideration in choosing a driver is the application for which it is intended. For example, a woofer with a high maximum cone excursion and low Fs may perform very well in a large sealed cabinet but be totally unsuited to a ported implementation (Dickason, 1995). One can make use of the quoted Thiele-Small parameters to ascertain whether a driver is suitable for its intended purpose; a rough check is sketched below.
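
One widely used rule of thumb based on those parameters is the efficiency bandwidth product, EBP = Fs / Qes: values below roughly 50 point towards a sealed enclosure and values above roughly 100 towards a ported one, with the region in between workable either way. The sketch below applies this rule to invented example parameters, not to any real driver’s datasheet:

```python
# Rough Thiele-Small sanity check using the EBP (= Fs / Qes) rule of thumb.
# The driver parameters below are invented example values, not datasheet data.
def suggested_alignment(fs_hz: float, qes: float) -> str:
    ebp = fs_hz / qes
    if ebp < 50:
        return f"EBP = {ebp:.0f}: better suited to a sealed enclosure"
    if ebp > 100:
        return f"EBP = {ebp:.0f}: better suited to a ported enclosure"
    return f"EBP = {ebp:.0f}: workable in either a sealed or a ported box"

print(suggested_alignment(fs_hz=22.0, qes=0.55))   # low Fs, higher Qes -> sealed
print(suggested_alignment(fs_hz=45.0, qes=0.35))   # -> ported
```

Such a check is only a starting point; full alignment modelling with the complete Thiele-Small set is still needed before committing to a cabinet design.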

Construction materials also give an indication of how the driver may sound. In terms of woofer and midrange drivers, for example, an aluminium cone may indicate greater bass precision than an otherwise equivalent transducer with a paper cone; softer cones are associated with greater distortion than their stiffer counterparts. However, as Larsen (2003) notes “cone break-up behaviour and frequency response was shown to be strongly dependant on the Geometrical Stiffness of the Cone”. Hence the geometry of the design may be more important than the material used.

Diameter of the driver is also a hugely important factor for woofers, although of minor importance for tweeters. To reproduce bass frequencies at good SPLs, a large volume of air must be moved by the driver. To this end, there is absolutely no way a 6” driver can compete with a 12” driver of similar quality in terms of bass extension; it is simply not physically possible.
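
The point about diameter can be put in rough numbers through the displacement volume Vd = Sd × Xmax (effective cone area times maximum linear excursion), since at a given low frequency the achievable SPL scales with the volume of air moved. The figures below are rounded assumptions for generic 6” and 12” drivers, not data for specific products:

```python
import math

# Displacement volume Vd = Sd * Xmax; the cone diameters and excursions below
# are rounded, assumed values for generic 6" and 12" woofers.
def vd_litres(effective_cone_diameter_cm: float, xmax_mm: float) -> float:
    sd_cm2 = math.pi * (effective_cone_diameter_cm / 2) ** 2   # cone area
    return sd_cm2 * (xmax_mm / 10) / 1000                      # cm^3 -> litres

small = vd_litres(13, 5)    # roughly a 6" driver
large = vd_litres(26, 7)    # roughly a 12" driver

# Pressure at a fixed low frequency scales with Vd, so the level advantage is:
gain_db = 20 * math.log10(large / small)
print(f"6-inch Vd ~ {small:.2f} l, 12-inch Vd ~ {large:.2f} l, "
      f"advantage ~ {gain_db:.0f} dB")
```

On these assumptions the larger driver moves several times more air, worth on the order of 10 to 15 dB of maximum bass output, which is why no amount of clever design lets a small woofer match a large one of similar quality.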

Power handling is another consideration that must be given thought when choosing a driver; the peak short-term power dissipated by a transducer can easily be double its long-term rating. Naturally for the best performance it is desirable to ensure that the driver is not operating too close to its quoted limits. One should think carefully about how hard the driver is likely to be driven and ensure its power handling is adequate; overdriving a unit at best will result in distortion and at worst may cause irreversible damage. In many cases users overdrive and damage units in an attempt to achieve a higher SPL, particularly in the bass region. If the system requirements are adequately specified and designed for, this should not happen.

For the high-budget client, the best solution will either be high-quality voice-coil drivers carefully selected to complement each other, or a hybrid electrostatic implementation. It is difficult to recommend a fully electrostatic solution due to the associated problems with low frequency performance, although for some clients this may be acceptable.

For the low-budget client, standard voice-coil drivers are the only solution. The quality of the drivers used will largely be influenced by pricing; one should carefully consider all factors and attempt to find the best solution within budget. Datasheets should be closely scrutinised to identify the strengths and weaknesses of each option before a solution is chosen.

In conclusion, notwithstanding the electrostatic debate, driver choice is largely influenced by price and performance. In general, the better specified the driver, the more expensive it is likely to be. If working with a high budget, one is likely to simply choose the best specified drivers. Conversely, with a limited amount of capital, one must make the best compromise that can be reached within budget.

Sources

Larsen, Peter. (2003). Geometrical Stiffness of Loudspeaker Cones, Loudsoft.

Borwick, John. (2001). Loudspeaker and Headphone Handbook, Focal Press.

Dickason, V. (1995). The Loudspeaker Design Cookbook, Audio Amateur Publications.

Rossing, T. (1990). The Science of Sound, Addison-Wesley.


Social Media as Emerging Technology

Investigate emerging IT technologies: Social Networks appear to be all the rage at the moment.

Introduction

Psychology is classically defined as “… the science of behavior …”, which in the case of human beings manifests itself when others are present, thus representing behavioural instances of social interaction (Kenny, 1996). The phenomena of socialization and networking have been extended by the global presence of the Internet, whereby individuals, through specific social networking websites, have access to a broad context of other individuals that is further defined by the type of website, each of which has a differing population, age and constituency composition (Freeman, 2004, pp. 10-29). The Internet, through email, instant messaging, online dating and blogging, has created a relatively secure means for people to engage in socialized behaviour while feeling relatively safe with respect to personality differences and other areas, in a way that might not be the case when they are exposed directly to individuals with whom they might not share common interests or outlooks (Ethier, 2004).

All of the preceding factors are components that have given rise to the dramatic increase in the popularity of online social network services. Classmates.com, started in 1995, represented the first social network website; it was followed in 1997 by Company of Friends, the online network of the magazine Fast Company, which began the era of business networking (FastCompany.com, 2004). The promise of privacy, like-minded interests and the ability to socialize saw online social networking become extremely popular in 2002 and grow to the point where there are presently over 200 of these types of websites globally (RateItAll.com, 2007). And as with any type of activity that attracts large numbers of people, social networking is big business. The Internet has offered, and continues to offer, firms in this sphere an advantage in bringing together distinct profiles of individuals with marketing potential beyond any fees or charges to the members (Robson, 1996, pp. 250-260). However, that business segment, social networking, is increasingly taking on the look of the dot-com frenzy that gripped the Internet in 2001 (Madslien, 2005). The questions asked then loom again now for online social networking: What are their business models? What type of revenue are they generating? What is their profitability? What are their differences, and will the phenomenon last? These are the areas that will be explored herein.

Online social networks are forums whereby people can meet new individuals, network, and initiate or maintain contact with old acquaintances through the relative privacy of the Internet, thus enabling business- or socially-minded people to enlarge their spheres by providing and exchanging information on themselves (Epic.org, 2006). Facebook (2007) is a system comprised of a number of networks, each one based around a region, a company, a high school or a college, that permits its users to share information on themselves; this allows a broad range of types and demographics of people to use its social networks, as opposed to offering contacts geared to a specific profile. It thus provides a more diverse population and appeals to advertisers drawn to this expanded user profile. The different networks within Facebook are independent as well as being closed off to non-affiliated users, thereby providing control over content to specific group profiles. Facebook is an English-language website that enjoys popularity among college students as its largest profile group, numbering in excess of 17 million, or roughly 85% of all U.S. college students (Arrington, 2006). Facebook is free for users, relying on advertising, banner ads and sponsored groups for revenues that are estimated to be in the area of $53,000,000 annually (Arrington, 2006).

Another type of social networking website is LinkedIn, which is business oriented, established primarily to enable professional networking (Dragan, 2004). The company’s 40,000-member list includes such high-profile individuals as over 700 company vice presidents, over 500 chief executive officers and 140 chief treasury officers (Dragan, 2004). Not yet generating a profit, LinkedIn charges a fee for its basic service and levies an additional charge on what it terms ‘power users’, namely executive recruiters, investment professionals and sales representatives who use the service to tap into its network (Liedtke, 2004). Many members use their personal contacts and associates to find and fill jobs and to increase their sales, giving the site a highly select user profile that also generates income from advertisers; however, the business model has yet to prove profitable (Liedtke, 2004). Founded in 2003, it has become a sort of ‘in’ place for professionals, increasingly identifying its members as belonging to a special group of movers and shakers, as it is termed (Copeland, 2006). At present, LinkedIn has existed on venture capital funding of almost $15 million USD from investors such as Sequoia Capital along with Greylock, with the company’s business model, based upon advertising revenue and fees, projected to generate $100 million in revenues by the year 2008 (Copeland, 2006). The goal is to increase the website’s membership, making it the number one professional resource for business and networking, job referrals, references, experts and whatever else professionals need (Copeland, 2006).

The younger generation of teens and those in their early twenties tend to use hi5, which has over 40 million members in the pattern of a MySpace-style social network (Mashable.com, 2006). The massive traffic the website generates makes it the eighth most visited social network website in the United States, but it is losing market share to rivals such as Facebook, Bebo, Piczo, Tagworld, Multiply and others that also covet this user group, with MySpace the dominant performer, taking market share from all of these rivals (Mashable.com, 2006). In keeping with the general social network format, hi5 offers profile pages with basic services provided for free, and the site, like others, generates revenue from advertising, banner ads and referrals to music and other websites such as iTunes for music downloads. This social network lets users connect with their friends, introduce themselves to new ones and invite others to join (hi5.com, 2007). Still in the venture-capital-backed stage, hi5 does not provide information on its revenues or related data. Bebo (2007), as is the case with social networking sites geared at the younger generation, offers users the ability to post their pictures, write blogs and, of course, send messages. A relative newcomer, launched in 2005, Bebo, like hi5, Facebook, Tagworld, Multiply and others, allows users to post their talents on their personal pages in a special “New Music Only on Bebo” section (Bebo, 2007).

Any discussion of online social networks must of course include MySpace, the largest website of its kind, attracting almost 80% of online visits in this category (Answers.com, 2007a). With over 125 million users, the site is targeted at the teenage and under-thirty crowd and, in typical fashion, allows users to create their own personal profile pages, which can be enhanced with HTML code to turn them into multimedia pages (Answers.com, 2007a). This lets users showcase their talents, videos, music and paintings, and the site’s success was confirmed by its purchase by News Corporation for in excess of 500 million USD (Answers.com, 2007a). MySpace’s business model of advertising revenues, banners and fees has achieved success as a result of the site’s size, the determining factor in Internet-related businesses.

Friends Reunited in the UK represents a combination of all of the other online social networking sites discussed. It encourages friends, family and individuals to connect for reunions, communication, genealogy, socializing and dating, and, like LinkedIn, it offers job searches and job hunting (Friends Reunited, 2007). Going one better than its American counterparts, the site offers television broadcasts via its parent company ITV’s network as well as the popular format of music CD collections; all of these facets are revenue generators that users can access free (Answers.com, 2007b). With 15 million members, Friends Reunited has access to almost half of all UK households with Internet service. It was founded on the idea of its owners, Steve and Julie Pankhurst, who were looking for old classmates and found a friend lost for 30 years (Answers.com, 2007b). The success of this multiple-interest website, combining all of the features found in the highly successful U.S. social networks with fresh new wrinkles of its own such as television broadcasts, followed from the purchase of the company from the Pankhursts by ITV in December 2005 for £120,000,000.

As would be expected, online social networks have become a global phenomenon, and they have taken off particularly in the Asian region. Japan’s top social networking site, Mixi, is a highly organized website, in Japanese fashion, that is something of a MySpace knock-off in the Japanese language, using the same advertising, banner-ad and music-referral business model (Kageyama, 2007). The cultural nuance is apparent in that “MySpace is about me, me, me and look at me …”, whereas “… Mixi is not all about me. It’s all about us”, reflecting the more reserved nature of Japanese culture (Kageyama, 2007). Social networks of the non-online variety have long been a fixture of Asian societies, and in Korea CyWorld has grown to the point where it is launching a U.S. version with an initial investment of $10 million USD and a pledge to spend whatever it takes to be successful (Kirkpatrick, 2006). With versions in Japan as well as China and Taiwan, CyWorld is an example of the universal nature of the social networking business model. The formulas used globally are basically the same: free access, bring in large numbers of people, charge advertisers, and diversify the revenue stream through music, television access, movie CDs and other sources.

Conclusion

As was and is the case in the United States with MySpace, market share and dominance determine value to advertisers, investors and buyers. Friends Reunited is the largest social networking site in the UK and commanded the same interest from a large corporation that MySpace did in the U.S. Success translates into having a commanding percentage of a nation’s user profile, which helps the website attract more and better advertisers at increased rates, along with banner ads, music website referrals and other revenue streams. The venture-capital-backed nature of the online social network sites makes access to their profitability figures elusive, but the most popular sites have either been acquired by large corporations (MySpace and Friends Reunited, for example) or are expanding aggressively (CyWorld and MySpace), indicating that revenues and profits must be adequate if not substantial. As eBay and Yahoo have proven, market dominance does translate into revenues, but there is a lag time that takes well-heeled investors or corporations to underwrite.

And the stakes have made the game hotter as new entrants and current players alike up the ante (Hicks, 2004). That is not all bad news, since “… not all online social networks are the same …” (Jacobs, 2006); while the sites differ in demographics, profiles, appeal and niche, the tremendous online numbers allow for those distinctions (Jacobs, 2006). As is the case with dominant-sized competitors, the largest sites have the clout to slowly eat into their smaller competitors, increasing their size advantage, or to accomplish the same through acquisition. This raises the other side of the coin: with most online social network sites funded by venture capitalists who are in it for a sell-off to another company or a stock play, is the phenomenon a bubble ready to burst (seomoz.org, 2006)? MySpace has yet to justify the $580 million investment by Rupert Murdoch’s News Corporation despite its size, and the venture capital market, which has pumped more than $824 million into the sector since 2001, is still awaiting returns on most of that money (Rosmarin, 2006). But with MySpace and Friends Reunited pulling in almost half of their respective countries’ Internet access subscribers, the potential for huge profits represents a bet most companies have opted not to miss out on; privately held Facebook’s recent rejection of a $750 million offer demonstrates the point (Rosenbush, 2006). The jury is still out; as the industry grows and some consolidation occurs, the real story will reveal itself in terms of profitability as well as staying power.

Bibliography

Answers.com (2007b) Friends Reunited. Retrieved on 24 February 2007 from http://www.answers.com/topic/friends-reunited

Answers.com (2007a) MySpace. Retrieved on 24 February 2007 from http://www.answers.com/topic/myspace

Arrington (2006) 85% of College Students Use Facebook. Retrieved on 24 February 2007 from http://www.techcrunch.com/2005/09/07/85-of-college-students-use-facebook/

Bebo (2007) Bebo. Retrieved on 24 February 2007 from http://www.bebo.com/

Copeland, M. (2006) A MySpace for grown-ups. 4 December 2006. Retrieved on 24 February 2007 from http://money.cnn.com/magazines/business2/business2_archive/2006/12/01/8394967/index.htm?postversion=2006120415

Dragan, R. (2004) LinkedIn. Retrieved on 23 February 2007 from http://www.pcmag.com/article2/0,4149,1418686,00.asp

Epic.org (2006) Social Networking Privacy. Retrieved on 23 February 2007 from http://www.epic.org/privacy/socialnet/default.html

Ethier, J. (2004) Current Research in Social Network Theory. Retrieved on 22 February 2007 from http://www.ccs.neu.edu/home/perrolle/archive/Ethier-SocialNetworks.html

Facebook (2007) Facebook. Retrieved on 23 February 2007 from http://www.facebook.com/

FastCompany.com (2004) What the Heck is Social Networking. 16 March 2004. Retrieved on 22 February 2007 from http://blog.fastcompany.com/archives/2004/03/16/what_the_heck_is_social_networking.html

Freeman, L. (2004) The Development of Social Network Analysis: A Study in the Sociology of Science. Empirical Press

Friends Reunited (2007) Welcome to Friends Reunited – what are your old friends doing now. Retrieved on 24 February 2007 from http://www.friendsreunited.co.uk/friendsreunited.asp?WCI=FRMain&show=Y&page=UK&randomiser=4

hi5.com (2007) hi5. Retrieved on 24 February 2007 from http://www.hi5.com/

Hicks, M. (2004) Social Networking Keeps Buzzing. 15 October 2004. Retrieved 24 February 2007 from http://www.eweek.com/article2/0,1895,1677508,00.asp

Jacobs, D. (2006) Different Online Social Networks Draw Different Age Groups: Report. 7 October 2006. Retrieved on 24 February 2007 from http://www.ibtimes.com/articles/20061007/myspace-friendster-xanga-facebook.htm

Kageyama, Y. (2007) MySpace faces stiff competition in Japan. 18 February 2007. Retrieved on 24 February 2007 from http://news.yahoo.com/s/ap/20070219/ap_on_hi_te/japan_social_networking

Kenny, D. (1996) The Design and Analysis of Social-Interaction Research. Vol. 47. Annual Review of Psychology

Kirkpatrick, M. (2006) Massive Korean Social Network CyWorld Launches in U.S. 27 July 2006. Retrieved on 24 February 2007 from http://www.techcrunch.com/2006/07/27/this-is-nuts-cyworld-us-opens-for-use/

Liedtke, M. (2004) Networking site LinkedIn Causes Buzz – but can it be profitable? 25 October 2004. Retrieved on 24 February 2007 from http://seattlepi.nwsource.com/business/196580_linkedin25.html

Madslien, J. (2005) Dotcom Shares Still Spook Investors. Retrieved on 22 February 2007 from http://news.bbc.co.uk/1/hi/business/4333899.stm

Mashable.com (2006) hi5, Another Massive Social Network. Retrieved on 24 February 2007 from http://mashable.com/2006/07/16/hi5-another-massive-social-network/

RateItAll.com (2007) Social Networking Web Sites. Retrieved on 22 February 2007 from http://www.rateitall.com/t-1900-social-networking-web-sites.aspx?age=&zipcode=&gender=&sort=0&pagesize=all

Robson, W. (1996) Strategic Management and Information Systems: An Integrated Approach. Trans-Atlantic Publications

Rosenbush, S. (2006) Facebook’s on the Block. 28 March 2006. Retrieved on 24 February 2007 from http://www.businessweek.com/technology/content/mar2006/tc20060327_215976.htm?chan=technology_technology+index+page_today’s+top+stories

Rosmarin, R. (2006) The MySpace Bubble. 29 June 2006. Retrieved on 24 February 2007 from http://www.forbes.com/home/digitalentertainment/2006/06/29/myspace-network-facebook_cx_rr_0629socialnetwork.html

seomoz.org. (2006) Is Social Networking a Dotcom Bubble Waiting to Burst? 28 September 2006. Retrieved on 24 February 2007 from http://www.seomoz.org/blog/is-social-networking-a-dotcom-bubble-waiting-to-burst

Snoopy Tool Evaluation

Snoopy is a tool for designing and animating hierarchical graphs, among them Petri nets. It provides facilities to construct Petri nets and to animate and simulate the resulting token flow. The tool is used to verify technical systems, especially software-based systems, and to model natural systems such as signal transduction and biochemical networks, e.g. metabolic and gene regulatory networks. Snoopy supports consideration of the qualitative network structure of a model, of specific kinetic aspects of the chosen Petri net class, and the investigation of Petri net models in several complementary ways. The simultaneous use of different Petri net classes is one of Snoopy’s outstanding features. Other features are:

It is extensible: its generic design aids the implementation of new Petri net classes.
It is adaptive: numerous models can be used simultaneously.
It is platform independent: it is executable on all common operating systems, e.g. Linux, Mac OS and Windows.

Two particular types of nodes, logical nodes and macro nodes, support the systematic construction, neat arrangement and design of large Petri nets. Logical nodes act as connectors for multiply used places or transitions that share the same meaning or function. Macro nodes allow a Petri net to be designed hierarchically. Snoopy allows editing and colouring of all elements in each Petri net class, and the network layout can be changed manually or automatically. The graphical editor helps prevent syntactical errors in the network structure of a Petri net.

Editor Mode:

Start Snoopy and go to File / New or press the New button in the toolbar. This opens a template dialogue that allows selection of the document template.

File: New/Open/Close Window/Save/Save as, Print, Export/Import, Preferences (change the default visualization) and Exit.

Edit: Undo/Redo, Select All/Copy/Copy in new net/Paste/Cut, Clear/Clear all, Hide/Unhide, Edit selected elements/Transform Shapes, Layout (automatic layout function), Sort Nodes (by ID or name), Check Net (duplicate nodes, syntax, consistency) and Convert to.

View: Zoom 100%/Zoom In/Zoom Out, Net Information (number of each element used in the model), Toggle Graphelements/Hierarchy browser/Filebar/Log window, Show Attributes (choose for each element which attributes are shown in the model), Start Anim-Mode/Simulation-Mode/Steering-Mode.

Elements (list of all available elements): Select/ Place/Transition/ Coarse Place/Coarse Transition/ Immediate Transition/Deterministic Transition/Scheduled Transition/Parameter/Coarse Parameter/LookupTable, Edge/Read Edge/Inhibitor Edge/Reset Edge/Equal Edge/Modifier Edge and Comment.

Hierarchy (edit and browse hierarchy): Coarse (chosen elements are encapsulated in a macro node)/Flatten and Go Up in Hierarchy/Go To First Child in Hierarchy/Go To Next Sibling in Hierarchy/Go To Previous Sibling in Hierarchy.

Search: Search nodes (by ID or name).

Extra: Load node sets (visualize, e.g., T-invariants, P-invariants, siphons and traps), Interaction and General Information (title, author, description, literature).

Window (arrange all opened windows): Cascade/Tile Horizontally/Tile vertically, Arrange Icons/Next/Previous and Open Files.

Help: Help, About (current version), check update.

The toolbar holds four shortcuts, which allow the user to:

Open a new document.

Load a document.

Save a document.

Select an element.

All elements available in the current net class are displayed in the graph elements panel. A left-click on one of these elements selects it for use; a right-click on the respective element allows the user to edit it or to select all elements of the same class. All levels are displayed in the hierarchy browser, and any hierarchical level can be opened in a new window by a left-click. The editor pane can be considered the canvas on which the user draws the network. A left-click on the editor pane places the currently chosen element on the canvas. To draw an arc between two nodes, left-click on one node, hold the button down, drag the line to the other node and release. To add bends to an arc, press the CTRL key and left-click on the arc; the new bend point can then be dragged with another left-click. The grid in the canvas tab can also be used for better orientation. The user can also pick the edge style, i.e. line or spline, in the preference dialogue under the elements tab.

Elements:

Nodes:

The available node types are: standard place, standard transition, coarse place, coarse transition, immediate transition, deterministic transition and scheduled transition (the graphical symbols shown in the original table are not reproduced here).

Immediate Transition: Immediate transitions fire as soon as they are enabled. The waiting time is equal to zero.

Standard Transition (Timed Transition): A waiting time is computed as soon as the transition becomes enabled. The transition fires when the timer reaches zero, provided the transition is still enabled.

Deterministic Transition: Deterministic transitions fire as soon as the fixed time interval elapses during the entire simulation run time. The respective deterministic transitions must be enabled at the end of each repeated interval.

Scheduled Transition: Scheduled transitions fire as soon as the fixed time interval has elapsed, at the given time points. The respective scheduled transitions must be enabled at the end of each repeated interval.

Edges:

Elements

Graphics

Description

Standard edge

The transition is enabled and may fire if both pre-places and are sufficiently marked by tokens. After firing of the transition, tokens are removed from the pre-places and new tokens are produced on post place.

Read edge

The transition is enabled and may fire if both pre-places A and B are sufficiently marked by tokens. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A, new tokens are produced on post place. The firing of the transition does not change the amount of tokens on pre-place A.

Inhibitor edge

The transition is enabled and may fire if pre-place B is sufficiently marked by tokens. The amount of tokens on pre-place A must be smaller than the given arc weight. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A; new tokens are produced on place C. The firing of the transition does not change the amount of tokens on pre-place A.

Reset edge

The transition is enabled and may fire if pre-place B is sufficiently marked by tokens. The amount of tokens on pre-place A has no effect on the ability to enable the transition and affects only the kinetics. After firing of the transition, tokens are removed from the pre-place B according the arc weight and all tokens on pre-places A are deleted; new tokens are produced on place C.

Equal edge

The transition is enabled and may fire if number of tokens on pre-place A is equal to the corresponding arc weight and place B is sufficiently marked. After firing of the transition, tokens are removed from the pre-place B but not from preplace A; new tokens are produced on place C. The firing of the transition does not change the amount of tokens on pre-place A.

Modifier edge

The transition is enabled and may fire if pre-place B is sufficiently marked with tokens. The amount of tokens on pre-place A has no effect on the ability to enable the transition and affects only the kinetics. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A; new tokens are produced on place C. The firing of the transition does not change the amount of tokens on pre-place A.
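To make these firing semantics concrete, the following minimal sketch (illustrative Python, not Snoopy code) encodes the standard and read edge rules for a single hypothetical transition with pre-places A (read edge) and B (standard edge) and post-place C; the marking and arc weights are assumed values.

    # Hypothetical marking: integer token counts per place.
    marking = {"A": 2, "B": 1, "C": 0}

    def enabled(m, read_w=1, standard_w=1):
        # Read edge from A: A must hold at least read_w tokens.
        # Standard edge from B: B must hold at least standard_w tokens.
        return m["A"] >= read_w and m["B"] >= standard_w

    def fire(m, standard_w=1, produce_w=1):
        # Firing consumes tokens only over the standard edge (from B),
        # leaves A untouched, and produces tokens on the post-place C.
        if not enabled(m):
            raise ValueError("This transition is not enabled")
        m["B"] -= standard_w
        m["C"] += produce_w
        return m

    print(fire(marking))   # {'A': 2, 'B': 0, 'C': 1}

An inhibitor edge would add the opposite test (the marking of its pre-place must stay below the arc weight), and a reset edge would clear its pre-place completely on firing.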

Functions:

BioMassAction(.): Stochastic law of mass action; tokens are interpreted as single molecules.
BioLevelInterpretation(.): Stochastic law of mass action; tokens are interpreted as concentration levels.
ImmediateFiring(.): Refers to immediate transitions.
TimedFiring(.): Refers to deterministic transitions.
FixedTimedFiring Single(.): Refers to deterministic transitions that fire only once after a given time point.
FixedTimedFiring(., ., .): Refers to scheduled transitions.
abs(.): Absolute value
acos(.): Arc cosine function
asin(.): Arc sine function
atan(.): Arc tangent function
ceil(.): Rounding up
cos(.): Cosine function
exp(.): Exponential function
sin(.): Sine function
sqrt(.): Square root
tan(.): Tangent function
floor(.): Rounding down
log(.): Natural logarithm with constant e as base
log10(.): Common logarithm with constant 10 as base
pow(.): Power function
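As an illustration of how such a rate function can be read, the sketch below (an assumption for explanatory purposes, not Snoopy's internal implementation) computes a stochastic mass-action propensity in the spirit of BioMassAction: a rate constant multiplied by the number of ways the required tokens can be drawn from the pre-places.

    import math

    def bio_mass_action(c, pre_places):
        # pre_places maps a place name to (current tokens, arc weight).
        rate = c
        for tokens, weight in pre_places.values():
            rate *= math.comb(tokens, weight)   # 0 if tokens < weight
        return rate

    # Hypothetical transition with pre-places A (3 tokens, weight 1)
    # and B (2 tokens, weight 2):
    print(bio_mass_action(0.5, {"A": (3, 1), "B": (2, 2)}))   # 0.5 * 3 * 1 = 1.5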

Parameters:

Parameters are used to define constants that can be referenced in rate or weight functions; they cannot be used to define the number of tokens on a particular place. A third group of macro elements is the coarse parameter, which encapsulates parameters. Large numbers of parameters can be hidden from the top level or categorized by the use of coarse parameters.

Animation mode:

Snoopy allows the user to observe the token flow in animation mode, which is started by pressing F5 or by going to View and then Start Animation Mode. A new window opens that allows the user to steer the animation. This part of Snoopy is very useful for getting a first impression of the causality of a model and of how it works, as it also provides information about the transitions. Playing with the token flow proves worthwhile for understanding the modelled mechanism. The token flow can be animated manually by a single click on a transition. If the user tries to fire a transition that is not enabled, a message box is displayed with the message "This transition is not enabled". Left-clicking on a place adds a token and right-clicking removes one. Animation of the token flow can also be controlled by the radio buttons on the animation steering panel: step-wise forward and backward, or sequentially for as long as at least one transition can be enabled; otherwise the notification "Dead State: There are no more enabled transitions" is displayed.

Simulation Mode

Stochastic simulations with the current model in the active window can be started in three ways: pressing F6, going to View/Start Simulation, or using the stochastic simulation button on the animation control panel. This mode simulates the time-dependent dynamic behaviour of the model, indicated by the token flow and the firing frequency of the transitions. The token flow indicates the fluctuating concentration levels or the discrete numbers of the components over time. This provides an impression of the time-dependent changes in the model under consideration, which is helpful in understanding the wet-lab system. Several simulation studies can be performed with the model by manipulating its structure and perturbing the initial state and kinetics. All results can be exported manually or automatically in the standard *.csv format and analysed in other mathematical programs.
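The token flow produced by such a stochastic simulation can be reproduced outside Snoopy with Gillespie-style sampling; the sketch below (illustrative only, with an assumed single transition A -> B and an assumed rate constant) shows the idea, including a *.csv export of the trace.

    import csv
    import random

    def simulate(a=100, b=0, c=0.1, t_end=50.0):
        # Single transition A -> B with mass-action propensity c * #A.
        t, trace = 0.0, [(0.0, a, b)]
        while t < t_end and a > 0:
            t += random.expovariate(c * a)   # waiting time until the next firing
            a, b = a - 1, b + 1              # fire the transition once
            trace.append((round(t, 4), a, b))
        return trace

    with open("trace.csv", "w", newline="") as f:    # manual csv export
        csv.writer(f).writerows([("time", "A", "B")] + simulate())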

Simulation Control:

The simulation control allows selection of the main settings and individual options for the simulation. It splits into four panels:

Configuration Sets: Configuration sets can be modified by editing single entries or adding new sets, and the configuration set suitable for the simulation run can be selected.
Simulation Properties: This includes setting the interval start (the time point where the simulation starts), the interval end (the time point where the simulation ends) and the output step count (the number of time points to be displayed in the given interval).
Export Properties: Various automatic export settings to the *.csv format are available.
Start Simulation: This starts the simulation with the selected settings and properties. The progress of the simulation is indicated by the bar and the required time is displayed below.

Viewer/Node Choice:

It offers choices for displaying the simulation results and is divided into two panels:

Viewer Choice: The user can select between data tables and data plots. Buttons in the panel allow the user to add, edit and delete data tables and data plots. The token flow (places) or the firing frequency (transitions) can be displayed in a data table or data plot.
Place Choice: The user can choose which nodes should be displayed in the data table or data plot.

Display:

This panel displays the simulation results as a data table or data plot. If the data table is selected, the token flow for the selected places is presented in a table; some options used for model checking are located at the bottom of the window. If the data plot is chosen, the x-axis displays the time interval and the y-axis indicates the average number of tokens. The view of the plot can be altered via the buttons located below, i.e. compress/stretch x-axis, compress/stretch y-axis, zoom in/out and centre view. A csv export button allows the user to export the simulation results of the selected places manually, and an image of the current plot can be saved using the print button.

Model Checking Mode:

Snoopy can perform model checking of linear-time properties based on the stochastic simulation. A subset of probabilistic linear-time temporal logic (PLTL) is employed to formulate and verify properties; several properties can be checked at the same time. In order to perform model checking in Snoopy, the user needs to open the simulation window and select the table view. To perform model checking on all simulation traces, the user has to enter or load a property, which is then checked while simulating the time-dependent dynamic behaviour. The simulation window offers the following options:

Enter State Property: User can specify a property in the dialogue box and no model checking is performed if it is empty.
Load state property: User can load a property which is defined in a text file.
Check state property: It refers to model checking which is performed on the basis of average behavior of the previous simulation.

The simulation run count states the number of simulation traces to which model checking is applied. It splits into two cases:

Default value 1 run: The user only obtains the information whether the defined property holds or not.
Arbitrary number of runs: The number of simulation runs allows the probability of the defined property to be estimated; high accuracy calls for a high number of simulation runs.

The user can set the time interval over which model checking should be applied with the help of interval start and interval end.

A log window displays the model checking results, which include the following elements:

Formula displays the formula checked during simulation.

Runs indicates the number of simulation runs performed.

Runtime shows the time required for the simulation.

Threads displays the number of threads used for simulation.

Prop indicates the computed probability for the formula.

S^2 displays the variance of the probability.

Confidence Interval indicates the size of the confidence interval.

[a,b] reveals the interval of the probability that is calculated from the confidence interval.
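The quantities reported in this log window can be related as follows; the sketch below is a rough illustration (not Snoopy's implementation) of how the probability of a state property, its sample variance and a confidence interval can be estimated from repeated simulation runs, using assumed stand-ins for the simulator and the property.

    import math
    import random

    def estimate_property(run_simulation, holds, n=100, z=1.96):
        successes = sum(holds(run_simulation()) for _ in range(n))
        p = successes / n                     # Prop
        s2 = p * (1 - p) * n / (n - 1)        # S^2, the sample variance
        half = z * math.sqrt(s2 / n)          # half-width of the confidence interval
        return p, s2, (max(0.0, p - half), min(1.0, p + half))   # Prop, S^2, [a, b]

    # Toy usage with hypothetical stand-ins for one simulation run and one property.
    run = lambda: [random.randint(0, 5) for _ in range(10)]   # token counts over time
    prop = lambda trace: max(trace) >= 4                      # "the place reaches 4 tokens"
    print(estimate_property(run, prop))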

Sleep Monitoring System Technology

Fully equipped quilt

–Improve your sleep with new media and communication technology

The explosion of information and the advance of technology have pushed our lives into a faster pace than ever before. This fast-paced life increases the productivity of the whole society but, at the same time, brings health problems to an increasing number of individuals.

Sleep is one of the most important aspects of our daily life, as well as a critical period for repairing the damage we do to our bodies in the daytime. However, more and more people are losing sleep because of pressure from the outside world, which leads to a vicious circle for their health. The product introduced in this article, a quilt, is devoted to improving the sleep and health of people who have sleep disorders and potential health problems. It makes use of known health indicators and existing technologies to report the current health condition of the user and to help the user get a better sleeping experience.

Health Indicator Detector

Of all the things people use every day, the one with the longest consecutive period of use is the quilt. The quilt I design takes advantage of this characteristic, using sensors that touch the skin to detect and monitor the user's health indicators in a quiet and precise way.

One of the basic functions of this part is measuring body temperature. A traditional mercury thermometer will not be used; an infrared thermometer is a better way to go. An infrared thermometer is a thermometer which infers temperature from a portion of the thermal radiation emitted by the object being measured. An aiming aid is sometimes included, and the distance-to-spot ratio describes the device's ability to measure temperature from a certain distance. By knowing the amount of infrared energy emitted by the object and its emissivity, the object's temperature can often be determined (Infrared Thermometer).

Another basic function of this part is measuring the heartbeat. The pulse corresponds to the heart's contractions: each heartbeat raises blood pressure and makes the arteries expand, and counting these expansions per minute gives the heart rate. The heart rate is therefore a vital health parameter that is directly related to the soundness of the human cardiovascular system (Microcontroller Measures).

A microcontroller together with a fingertip sensor will be used to measure the heartbeat. Each heartbeat pumps blood through the body, which also changes the blood volume inside the finger artery. This fluctuation can be picked up by an optical sensing mechanism placed on the fingertip, and the microcontroller amplifies the signal and calculates the heart rate from it.

(Microcontroller Measures)

Both devices, the heartbeat monitor and the infrared thermometer, will be placed in a Wi-Fi environment, and the data collected from the user will be transmitted to a personal computer. Users can set thresholds for each indicator; if a reading exceeds the normal values, the system will warn the user to go to the hospital and check whether they have health problems.
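As a rough illustration of this processing step on the personal computer, the sketch below (Python; all thresholds and numbers are assumptions, not clinical values) derives beats per minute from detected pulse peaks and flags readings outside a user-defined normal range.

    def heart_rate_bpm(peak_times):
        # peak_times: timestamps (in seconds) of successive pulse peaks.
        intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
        return 60.0 / (sum(intervals) / len(intervals))

    def check_indicators(bpm, temp_c, bpm_range=(50, 100), temp_range=(35.5, 37.5)):
        warnings = []
        if not bpm_range[0] <= bpm <= bpm_range[1]:
            warnings.append("heart rate outside normal range")
        if not temp_range[0] <= temp_c <= temp_range[1]:
            warnings.append("body temperature outside normal range")
        return warnings or ["all indicators normal"]

    peaks = [0.0, 0.8, 1.6, 2.5, 3.3]          # hypothetical peak timestamps
    print(check_indicators(heart_rate_bpm(peaks), 36.8))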

Wireless Sleeping Respiration Monitor

To achieve the goal of monitoring the user's respiration while sleeping, a respiration sound sensor is needed in the quilt. In addition, after detecting sleeping hazards like respiration arrest or sleep apnea, a vibration module hidden inside the quilt will wake up the user to prevent further risk.

The whole system consists of three major parts: a monitoring and data collection device, a wireless communication device and a vibrating wake-up device. The monitoring and data collection device, also known as the respiration sound sensor, senses the user's respiration sound, reduces the noise, amplifies and digitizes the signal and passes it to the wireless communication module. The wireless communication device takes over the collected data, encapsulates it and then sends it via either a Wi-Fi connection or a Bluetooth connection. The connection terminates at a device with data-processing power; our smartphones easily serve the purpose. After processing the received data, the data processing device informs the vibration unit whether it needs to wake up the user, based on a comparison of the result with the data in an online database. If the vibration module receives the signal to wake up the user, it sends pulse vibrations to the user's body to wake them up.
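A very simplified version of the wake-up decision might look like the sketch below (illustrative only; the window length, sampling rate and threshold are assumptions rather than values from the cited patent).

    def should_wake(samples, sample_rate_hz=100, window_s=10, threshold=0.05):
        # True if no breath-like sound exceeded the threshold in the last window_s
        # seconds of normalized amplitude samples.
        window = samples[-(sample_rate_hz * window_s):]
        return max(abs(s) for s in window) < threshold

    def on_new_data(samples, vibrate):
        if should_wake(samples):
            vibrate()          # send the pulse-vibration command to the quilt

    # Toy usage with a hypothetical vibration callback and ten seconds of silence.
    on_new_data([0.0] * 1000, vibrate=lambda: print("waking user"))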

The significant advantage is that the sensors can be close to the user's nose and mouth to collect much better and more precise data without interrupting or hindering the proper sleeping process. This improves the user experience remarkably compared with head-worn devices with similar functions. In addition, the data is amplified and sent to the user's smartphone, a hand-held device, which increases the scalability. We can achieve more usage of the existing sensors by programming applications on those hand-held devices, and keep them working properly, and even better, by distributing software and firmware updates over the internet. What if the computing device were installed into the quilt instead? Users would find it hard to fall asleep because the quilt would be too heavy, and there could be hazards when the computing unit overheats during a period of intensive data processing. So using an external hand-held device reduces the total weight and the potential hazards of the quilt while increasing the processing power and stability (Patent).

Sleep Monitoring System

There is no doubt about the importance of sleep, but in a fast-paced, high-pressure environment many of us cannot sleep well, which threatens our health. With the development of advanced technology, however, we can trace our sleeping status, find the problems and solve them, in order to improve our sleeping quality.

Many such applications already exist on the phone, on the iOS platform, in the Android system and on Windows Phone. The principle by which smartphones and applications monitor sleep status is similar: an accelerometer in the phone senses actions like turning over while you sleep, in order to determine the depth of sleep. Of course, compared with professional equipment, monitoring with mobile phones is not very accurate, but the cost is much lower. However, studies suggest that the radiation phones emit may affect the quality of our sleep. Following the same working principle as the smartphone, an accelerometer can also be placed in the quilt, where it will perceive the user's motion during sleep more precisely. There will also be a small microphone in the quilt, which can record the sounds the user makes while sleeping. As mentioned in the Health Indicator Detector section, the recorder is also connected via Wi-Fi or Bluetooth, and its data is transmitted to the personal computer or to an application on the smartphone.

Besides this, a tiny speaker can also be placed next to the microphone. Since the system can be connected to the personal computer, the user can play white noise through the speaker, which can be used to block snoring and other unwanted sounds, leading to deeper, more restful sleep (White Noise). When the system finds that the user has entered deep sleep, it pauses the white noise.
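Putting the accelerometer and the speaker together, a simple actigraphy-style rule could drive the white noise; the sketch below is purely illustrative, and the epoch length and movement threshold are assumptions.

    def sleep_stage(movement_counts, deep_threshold=2):
        # movement_counts: movements detected per 30-second epoch, most recent last.
        recent = movement_counts[-4:]                 # roughly the last two minutes
        return "deep" if all(c <= deep_threshold for c in recent) else "light"

    def update_white_noise(movement_counts, player):
        if sleep_stage(movement_counts) == "deep":
            player.pause()                            # user appears to be in deep sleep
        else:
            player.play()

    class FakePlayer:                                 # stand-in for the real speaker
        def play(self):  print("white noise on")
        def pause(self): print("white noise off")

    update_white_noise([5, 3, 1, 0, 1, 0], FakePlayer())   # prints "white noise off"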

This system can be used by anyone, even someone with little knowledge of scientific facts about sleep or biology, to easily track their sleeping status. Reports can also be generated for a day, a week or a month from the data stored on the personal computing device. This is really helpful for users in figuring out the trend of their sleeping quality, whether it is becoming better or worse. Additionally, the reports can be used when consulting doctors and physicians about what they need to do to improve their sleep.

Conclusion

This quilt is not a product built on totally new media or communication technology. It is a practical product that combines many existing technologies to provide an innovative service, and all the technologies used in this product have been in use for years and are very mature in the market. However, no existing product provides the same convenience as the quilt does. Tossing and turning will no longer be a huge obstacle to falling asleep with the help of white noise, and potential health hazards can be found by the various monitoring systems and data-analysing devices. We can imagine that if this quilt arrives in stores, it could help an enormous group of people with sleeping problems, using media and communication technology to serve people and help them feel better when the night comes.

Citations

“Infrared Thermometer.” Infrared Thermometer. OMEGA Engineering Inc., 1999. Web. 19 Apr. 2014.

“Microcontroller Measures Heart Rate through Fingertip.” Instructables.com. Instructables, n.d. Web. 19 Apr. 2014.

“White Noise Benefits and Uses.” White Noise Cds.com. N.p., n.d. Web. 19 Apr. 2014.

“Patent CN102860840A – Wireless Sleeping Respiration Monitor.” Google Books. Ed. Yujuan Quan, Qingnan Liu, Lei Wang, and Deming Chen. Google Patents, 09 Jan. 2013. Web. 19 Apr. 2014.

Sixth Sense Technology Introduction

Abstract: ‘Sixth Sense’ is a wearable gesture interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. This technology gives the user a new way of seeing the world, with information at their fingertips, and has been classified under the category of ‘wearable computing’. The true power of Sixth Sense lies in its potential to connect the real world with the Internet and to overlay the information on the world itself. The key is that Sixth Sense recognizes the objects around you, displaying information automatically and letting you access it in any way you want, in the simplest way possible. This paper gives an introduction to Sixth Sense and makes the reader familiar with a technology that provides the freedom of interacting with the digital world using hand gestures. The Sixth Sense prototype comprises a pocket projector, a mirror, mobile components, colour markers and a camera. The Sixth Sense technology is all about interacting with the digital world in the most efficient and direct way. Sixth Sense devices are very different from computers, and they will be a new topic for hackers and for other people as well. Everyone can get a general idea of Sixth Sense technology by reading this paper.

Keywords: Sixth Sense, wearable computing, Augmented Reality, Gesture Recognition, Computer Vision


1. INTRODUCTION

We’ve evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses, through the eyes, ears, nose, tongue and skin, to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information that can help us make the right decision is not naturally perceivable with our five senses, namely the data, information and knowledge that mankind has accumulated about everything and which is increasingly all available online.

Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is confined traditionally on paper or digitally on a screen.

Sixth Sense bridges this gap, bringing intangible, digital information out into the tangible world, and allowing us to interact with this information via natural hand gestures. ‘Sixth Sense’ frees information from its confines by seamlessly integrating it with reality, and thus making the entire world your computer.

All of us are aware of the five basic senses – seeing, feeling, smelling, tasting and hearing. But there is also another sense, called the sixth sense. It is basically a connection to something greater than what our physical senses are able to perceive. To a layman, it would be something supernatural; some might just consider it a superstition or something psychological. But the invention of Sixth Sense technology has completely shocked the world. Although it is not widely known as of now, the time is not far when this technology will change our perception of the world.

Fig. 1.1: Six Senses

Sixth Sense is a wearable “gesture based” device that augments the physical world with digital information and lets people use natural hand gestures to interact with that information.

Right now, we use our devices (computers, mobile phones, tablets, etc.) to go onto the internet and get the information that we want. With Sixth Sense we will use a device no bigger than current cell phones, and probably eventually as small as a button on our shirts, to bring the internet to us in order to interact with our world. Sixth Sense will allow us to interact with our world like never before. We can get information on anything we want, from anywhere, within a few moments. We will not only be able to interact with things on a whole new level but also with people. One great part of the device is its ability to scan objects or even people and project information regarding what you are looking at.

1.1 History and Evolution of Sixth Sense Technology

Steve Mann, who made a wearable computer in 1990, is considered the father of Sixth Sense. The Sixth Sense technology was first implemented as a neck-worn projector and camera system; he was a Media Lab student at that time. Thereafter it was taken up and implemented by Pranav Mistry, an Indian researcher who has become very famous in recent years. Sixth Sense technology has only a short history so far, but a long future ahead of it.

1.2 Why choose Sixth Sense Technology

This sixth sense technology provides us with the freedom of interacting with the digital world using hand gestures. This technology has a wide application in the field of artificial intelligence. This methodology can aid in synthesis of bots that will be able to interact with humans. This technology enables people to interact in the digital world as if they are interacting in the real world. The Sixth Sense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system [4].

2. CONSTRUCTION AND WORKING

The Sixth Sense prototype comprises a pocket projector, a mirror and a camera contained in a pendant-like, wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision based techniques. The software program processes the video stream data captured by the camera and tracks the locations of the coloured markers (visual tracking fiducials) at the tips of the user's fingers. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. Sixth Sense supports multi-touch and multi-user interaction.
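As an illustration of the marker-tracking step described above, the fragment below is a minimal sketch in Python with OpenCV (an assumption for explanatory purposes, not the prototype's own code); the HSV bounds are placeholders for a red marker and would need tuning.

    import cv2
    import numpy as np

    LOWER = np.array([0, 120, 120])     # assumed HSV bounds for a red fingertip marker
    UPPER = np.array([10, 255, 255])

    def marker_position(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        # OpenCV 4 signature: returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])   # (x, y) centroid

A stream of such positions over successive frames can then be matched against gesture templates (for example the framing gesture) to produce interaction instructions.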

Fig. 2.1: Sixth Sense Technology Working

3. TECHNOLOGIES THAT ARE RELATED TO SIXTH SENSE DEVICES

3.1. Augmented Reality

Augmented reality is a visualization technology that allows the user to experience virtual content added on top of the real world in real time. Augmented reality adds graphics, sounds, haptic feedback and smell to the natural world as it exists [3].

3.2. Gesture Recognition

It is a technology aimed at interpreting human gestures with the help of mathematical algorithms. Gesture recognition techniques often rely on special hand gloves which provide information about hand position, orientation and the flex of the fingers [3].

3.3. Computer Vision

Computer Vision is the technology by which machines are able to interpret the necessary information from an image. This technology includes various fields like image processing, image analysis and machine vision, and it draws on certain aspects of artificial intelligence techniques like pattern recognition [3].

3.4. Radio Frequency Identification

Radio Frequency Identification systems transmit the identity of an object wirelessly, using radio-frequency electromagnetic waves. The main purpose of this technology is to enable the transfer of data via a portable device. This technology is widely used in fields like asset tracking, supply chain management, manufacturing, payment systems etc. [3].

4. APPLICATIONS

The Sixth Sense device has a huge number of applications. The following are a few of the applications of Sixth Sense technology:

4.1. Viewing Map:

With the help of a map application the user can call up any map of his or her choice and navigate through it by projecting the map onto any surface. By using thumb and index finger movements the user can zoom in, zoom out or pan the selected map [2].

Fig -4.1: Viewing Map

4.2. Taking Pictures:

Another application of Sixth Sense devices is the implementation of a gestural camera. This camera takes a photo of the location the user is looking at by detecting the framing gesture. After taking the desired number of photos, we can project them onto any surface and then use gestures to sort through the photos, organize them and resize them [2].

Fig 4.2: Taking Pictures

4.3. Drawing Application:

The drawing application allows the user to draw on any surface by tracking the fingertip movements of the user's index finger. The pictures drawn by the user can be stored and placed again on any other surface. The user can also shuffle through various pictures and drawings by using hand gesture movements [2].

Fig -4.3: Drawing Application

4.4. Making Calls:

We can make calls with the help of the Sixth Sense device. The Sixth Sense device is used to project a keypad onto your palm, and using that virtual keypad we can make calls to anyone [2].

Fig -4.4. Making Calls

4.5. Interacting with Physical Objects:

The Sixth Sense system also helps us interact with the physical objects we use in a better way. It augments physical objects by projecting more information about them onto the objects themselves. For example, a gesture of drawing a circle on the user's wrist projects a watch onto the user's hand. Similarly, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper [2].

Fig -4.5: Watching News

4.6. Flight Updates:

The system will recognize your boarding pass and let you know whether your flight is on time and if the gate has changed[2].

Fig – 4.6: Flight Updates

4.7. Other Applications:

Sixth Sense also lets the user draw icons or symbols in the air using the movement of the index finger and recognizes those symbols as interaction instructions. For example, drawing a magnifying-glass symbol takes the user to the map application, and drawing an '@' symbol lets the user check his mail [2].

5. KEY FEATURES OF SIXTHSENSE

Sixth Sense is a user friendly interface which integrates digital information into the physical world and its objects, making the entire world your computer.
Sixth Sense does not change human habits but causes computer and other machines to adapt to human needs.
It uses hand gestures to interact with digital information, supports multi-touch and multi-user interaction.
It provides direct, real-time data access from the machine. It is open source and cost effective, and the user can mind-map the idea anywhere.
It is a gesture-controlled wearable computing device that feeds us relevant information and turns any surface into an interactive display.
It is portable and easy to carry, as we can wear it around the neck.
The device can be used by anyone without even a basic knowledge of a keyboard or mouse. There is no need to carry a camera anymore.
If we are going on holiday, from now onwards it will be easy to capture photos using mere finger gestures.

CONCLUSION

As this technology emerges, new devices and, hence, new markets will evolve. This technology enables one to access, compute and browse data on any piece of paper we can find around us. Sixth Sense devices are very different from computers, and they will be a new topic for hackers and for other people as well. The first priority is to provide security for Sixth Sense applications and devices; a lot of good technologies have come and died because of security threats.

There are some weaknesses that can reduce the accuracy of the data; one of them is the on-palm phone keypad, which allows the user to dial a phone number using the keypad projected on the palm. There will also be significant market competition for Sixth Sense technology, since it still requires some hardware involvement from the user.

REFERENCES

http://www.pranavmistry.com/projects/sixthsense/
http://dspace.cusat.ac.in/jspui/bitstream/123456789/2207/1/SIXTH%20SENSE%20TECHNOLOGY.pdf
http://en.wikipedia.org/wiki/SixthSense
http://www.engineersgarage.com/articles/sixth-sense-technology
http://www.ted.com/talks/pranav_mistry_the_thrilling_potential_of_sixthsense_technology.html

Role of Firms in Science and Technology | Essay

What roles do firms play in the generation and diffusion of new scientific and technological knowledge? Illustrate your answer by reference to one or more examples.

Introduction:

The differences in the types of organisations, their structures, their goals and perspectives, and the way they recognise and face challenges can breed a lot of opportunities and avenues for producing and distributing new information to the world. Technology and science have worked wonders for almost everyone living on this planet. They have changed the way we live. They have also introduced new sets of problems and issues which must be strategically addressed.

Firms are already in the forefront of responding to changes and challenges in their environment. They respond to these challenges through strategies that make use of support systems like technology and scientific research.

Today’s business and social transactions are being supported more and more by technological and scientific innovations and strategies. Knowledge of advanced technologies in the different sciences and frontiers has largely advanced most careers and business prospects. According to Dorf (2001, p. 39), [1] the purpose of a business firm is to create value for all of its stakeholders. As the firm tries to create new wealth for its shareholders, valuable products and services for its customers, it is already in the process of generating and distributing new sets of information. This includes the generation of new scientific and technological knowledge which would eventually be adopted by the society and other businesses as well. A firm then leads its market through effective technical and scientific innovation, sound business management of resources, and a solid technological strategy for the success of its business.

Improved technology and increased scientific knowledge will help increase food production, enable efficient management of resources, allow faster access to relevant and mission-critical information, and enhance business competitiveness. Technology has the most potential to deliver business sustainability and viability through the many opportunities for research and innovation.

While it cannot be denied that firms today have a very definite and pivotal role to play in the generation of scientific and technological knowledge, much of their contribution centres on how they formulate strategies to introduce new knowledge into their business functions.

Technology has been known to support a lot of business and decision-making processes, and technology strategy should be considered a vital part of any strategic planning. Incorporating high-end technology without careful consideration of other organisational issues is a sure formula for failure. The growth of technology has presented managers with a complex variety of alternatives, and many executives and managers are using the advent of technology as an opportunity to reconsider their business operations (Irving and Higgins, 1991).[2] Unfortunately, many executives still see technology and any available scientific knowledge as a panacea for various organisational ills, when in fact the introduction of technology may sometimes increase organisational and societal problems.

Firms have a definite role when it comes to the way technology and scientific knowledge is generated and distributed. With their technological and scientific knowledge at hand, they can be technology enhancers, identifiers of new markets, sources of customer exploration, and a gateway for information interchange. However, powerful technologies and scientific knowledge can have the potential for great harm or great good to mankind (O’Brien, 2001).[3]

Competition in the business environment has led to a lot of advanced technological and scientific research and development. Investment of large monetary and manpower resources has increased the need for firms to compete with each other in the introduction of new technologies which may alter the political, economic, and social landscape. Gene Amdahl was interested in starting a new computer firm to compete with International Business Machines (Goodman and Lawless, p. 66).[4] He understood quite clearly that he needed a new technological design, a service and support system, and a good library of software. He chose to design his computer to be IBM-compatible: regardless of the technological wonders he designed into his new computer, it would operate all the existing IBM software. This strategy greatly enhanced his customers' access to new IBM technologies as well as his own. While his company tailored itself to another company's technology, it was able to create and generate a new set of ideas which not only enhanced his company's image but IBM's as well.

High technology firms who generate a lot of technological and scientific knowledge have been able to identify new markets in the fields of computers, biotechnology, genetic engineering, robotics, and other markets. These firms depend heavily on advanced scientific and engineering knowledge.

Michael Dell, for example, started building personal computers in his University of Texas dorm room at age 19 (Ferrell and Hirt, 1996).[5] His innovative ideas and prototyping techniques have made Dell Computer one of the leading PC companies in the world, with sales of $2.9 billion. Because of his company's capacity to use technology to support decision-making and focus on new customer demands and tastes, he was able to identify strategic markets for his PC company all around the world in different contexts. When he shifted to new markets, other industry players followed, and these industry players created another set of opportunities to explore other means. Through the early 1990s, Dell sold directly to the consumer through its toll-free telephone line (Schneider and Perry, 1990).[6] Eventually, it expanded its sales to the Internet and has logged a significant percentage of its overall sales from the Internet. This strategy has lowered overhead for the company. The web site is a significant part of Dell's strategy for moving into the new millennium; company officials predict that within the next few years, more than half of their sales will come from the web. Supporting such booming online sales is a robust infrastructure of communication devices and networks, Dell servers, and electronic commerce software from Microsoft.

Just as with the globalisation of markets, change due to advances in technology is not new to business marketing. Yet technological change is expected to create new ways of marketing that have not existed before (Dwyer and Tanner, 1999).[7] Du Pont, for example, has developed a Rapid Market Assessment technology that enables the company to determine whether a market (usually a country or region previously not served) warrants development (Bob, 1996).[8] The result of the analysis is a customer-focused understanding of the foreign market, independent of the level of economic development of that country or region.

Technology is changing the nature of business-customer interaction. If applied well, benefits increase to both parties. In the area of retail marketing for example, technology can be used to enhance interaction between retailers and customers.

Point-of-sale scanning equipment is widely utilized by supermarkets, department stores, specialty stores, membership clubs, and others, hundreds of thousands of firms in all. Retailers can quickly complete customer transactions, amass sales data, reduce costs, and adjust inventory figures (Berman and Evans, 1998).[9] At some restaurants, when dinner is over, the waiter brings the check along with a sleek box that opens like the check presentation folder used by many restaurants, revealing buttons and a mini-screen. The waiter brings it over and disappears discreetly. Following the instructions on the screen, you verify the tab, select the payment type (credit card or ATM card), insert the card into a slot, and enter your personal identification number or PIN. You can then enter a tip, either a specific amount or, if you want the device to figure the tip, a percentage. Completing the transaction triggers a blinking light, which summons the waiter, who then removes the device; the receipt is printed on another terminal (Berman and Evans, 1998).[10] In this manner, the restaurant, as a firm, was able to innovate on new ways for customers to explore and apply this new mechanism, which in turn introduced another set of mechanisms for billing customers in other business settings (such as electricity and water bills). This illustrates how innovation on a new technology can be of great help to different industry players.

With signature capture, shoppers sign their names right on a computer screen. At Sears, the cardholder uses a special pen to sign a paper receipt, which becomes the cardholder copy, on top of a pressure-sensitive pad that captures the signature, stores it, and displays it on the checkout terminal screen so a clerk can compare it with the one on the back of the credit card. Sears has a brochure explaining that the procedure is entirely voluntary and that electronic signatures are not stored separately and can be printed only along with the entire sales receipt. Again, innovation centred on how customers can be better served has generated a whole new set of ideas for other firms to research.

Gateway for Information Interchange

The web, or the Internet, has generated a lot of research interest nowadays. People rely on the web for retrieving and sending information, and it is being used for almost all sorts of business and personal transactions, such as in the areas of learning and commerce. Stanford University Library's HighWire Press began in early 1995 with the online production of the weekly Journal of Biological Chemistry (JBC). By March 2001, it was producing 240 online journals giving access to 237,711 articles (Chowdhury and Chowdhury, 2001).[11] The journals focus on science, technology, medicine and other scientific fields. HighWire's strategy for online publishing of scholarly journals is not simply to mount electronic images of printed pages; rather, by adding links among authors, articles and citations, advanced searching techniques, high-resolution images and multimedia, and interactivity, the electronic versions provide added dimensions to the information found in printed journals. These dimensions give readers boundless opportunities to follow up on what they initially started. The role of firms here is magnified quite a bit: technical and scientific information can be distributed in the least possible time and to as many people as possible.

In another setting, consider the tremendous savings now that millions of Internet users are able to work from home, or at least dial into the office rather than drive there. Many offices are using the Internet to save office space, materials, and transportation costs. Using email and other electronic documents also saves energy by saving paper. People who are online are able to explore most of the advantages technology and science have to offer them; it gives them the power to filter out what is and what is not useful. Newspapers are also going online. Arguably, of all the technologies, telecommunications and the Internet, along with renewable energy, have the most potential to deliver sustainability, and the vision of integrated optical communication networks is compelling enough for people to understand the underlying role that technology firms play in today's technology-based society. Computer networks and the Internet have been among the biggest technological breakthroughs of the century, and the possibilities are growing even bigger for firms to do more to leverage their use.

Conclusion:

Firms play a very important role in the generation of new information and its eventual diffusion into the overall structure of businesses and society. Firms are seen as responsible generators of new ideas which not only help them attain competitive advantage over their rivals but also, often unconsciously, improve the lives of people in different places around the globe. Competing firms explore different technical and scientific innovations which match their business strategy, especially in a globalised business setting. The rate at which firms do research and development has spawned the need for further collaboration and cooperation, even among competitors, in order to protect their strategic advantage. The introduction of technological and scientific standards has helped guide the introduction of new knowledge in a definite direction. Firms also serve as a window to many more opportunities for information exchange and interaction between customers and even their competitors. The Internet has been the biggest contributor to the generation, infusion, and distribution of knowledge. It has also provided a lot of opportunities for firms to invest their time and resources in order to facilitate easier access to their products and services, and it has created new commerce and learning methods which allow more and more people to get involved even when time and distance present challenges. The driving force behind all of these innovations is change; without it, firms would not be motivated to introduce new sets of ideas and distribute them. Knowledge is empowerment. Acquiring technical and scientific knowledge through the initiatives of different organizations not only increases competition but also improves the political, social, and economic dimensions of society. The generation and diffusion of scientific and technological knowledge would not be possible if firms were not aware of the changes that are constantly shaping their business landscape. Today's challenge is not how technological and scientific information can be generated and distributed; it is how to use this knowledge in the right place and at the right time.

Bibliography
Books

Berman, B and Evans, J (1998), Retail Management: A Strategic Approach, Prentice Hall, New Jersey.

Bob, Donarth (1996), Global Marketing Management: New Challenges Reshape Worldwide Competition.

Chowdhury, G and Chowdhury, S (2001), Information Sources and Searching on the World Wide Web, Library Association Publishing, London.

Dorf, Richard (2001), Technology, Humans, and Society: Towards A Sustainable World, Academic Press, San Diego, California.

Dwyer, F and Tanner, J (1999), Business Marketing: Connecting Strategy, Relationships, and Learning, McGraw-Hill, Singapore.

Ferrell O and Hirt, G (1996), Business: A Changing World, 2nd edn, Times New Mirror Higher Education.

Goodman, R and Lawless, M (1994), Technology and Strategy: Conceptual Models and Diagnostics, Oxford University Press, New York.

Irving, R and Higgins, C (1991), Office Information Systems: Management Issues and Methods, John Wiley and Sons, Ontario.

O’Brien, James (2001), Introduction to Information Systems: Essentials for the Internetworked E-Business, McGraw-Hill, Singapore.

Schneider, G & Perry, J (1990), Electronic Commerce, Thomson Learning, Singapore.

Reasoning in Artificial Intelligence (AI): A Review

1: Introduction

Artificial Intelligence (AI) is one of the developing areas in computer science that aims to design and develop intelligent machines that can demonstrate a higher level of resilience in complex decision-making environments (Lopez, 2005[1]). The computations that at any time make it possible to assist users to perceive, reason, and act form the basis for effective Artificial Intelligence (National Research Council Staff, 1997[2]) in any given computational device (e.g. computers, robots, etc.). This makes it clear that AI in a given environment can be accomplished only through the simulation of real-world scenarios into logical cases with associated reasoning, in order to enable the computational device to deliver the appropriate decision for the given state of the environment (Lopez, 2005). Reasoning is thus one of the key elements that contribute to the collection of computations for AI. It is also interesting to note that the effectiveness of the reasoning in the world of AI has a significant bearing on the ability of the machine to interpret and react to the environmental status or the problem it is facing (Ruiz et al, 2005[3]). In this report a critical review of the application of reasoning as a component of effective AI is presented to the reader. The report first presents a critical overview of the concept of reasoning and its application in Artificial Intelligence programming for the design and development of intelligent computational devices. This is followed by a critical review of selected research material on the chosen topic, before presenting an overview of the topic including progress made to date, key problems faced and future directions.

2: Reasoning in Artificial Intelligence
2.1: About Reasoning

Reasoning is deemed the key logical element that provides the ability for human interaction in a given social environment, as argued by Sincak et al (2004)[4]. The key aspect associated with reasoning is the fact that the perception of a given individual is based on the reasons derived from the facts relative to the environment, as interpreted by the individual involved. This makes it clear that in a computational environment involving electronic devices or machines, the ability of the machine to deliver a given reason depends on the extent to which the social environment is quantified into logical conclusions with the help of a reason or combination of reasons, as argued by Sincak et al (2004).

The major aspect associated with human reasoning is that it is accompanied by introspection, which allows the individual to interpret the reason through self-observation and reporting of consciousness. This naturally provides the ability to develop resilience to exceptional situations in the social environment, thus enabling a non-feeble-minded human to react in one way or another to a situation that is unique in the given environment. It is also critical to appreciate that reasoning from a mathematical perspective mainly corresponds to the extent to which a given environmental status can be interpreted using probability, in order to help predict the reaction or consequence in any given situation through a sequence of actions, as argued by Sincak et al (2004).

The aforementioned corresponds to the case of uncertainty in the environment, which challenges the normal reasoning approach used to derive a specific conclusion or decision by the individual involved. The introspective nature developed in humans and some animals provides the ability to cope with uncertainty in the environment. This adaptive nature of the non-feeble-minded human is the key ingredient that provides the ability to interpret the reasons for a given situation, as opposed to merely following the logical path that results from the reasoning process. Reasoning in the case of AI, which aims to develop the aforementioned abilities in electronic devices so that they can perform complex tasks with minimal human intervention, is presented in the next section.

2.2: Reasoning in Artificial Intelligence

Reasoning is deemed to be one of the key components in enabling effective AI programs to tackle complex decision-making problems using machines, as argued by Sincak et al (2004). This is naturally because the logical path followed by a program to derive a specific decision is mainly dependent on the ability of the program to handle exceptions in the process of delivering the decision. This makes it clear that the effective use of logical reasoning to define the past, present and future states of the given problem, alongside plausible exception handlers, is the basis for successfully delivering the decision for a given problem in the chosen environment. The key areas of challenge in the case of reasoning are discussed below (National Research Council Staff, 1997).

Adaptive Software – This is the area of computer programming under Artificial Intelligence that faces the major challenge of enabling effective decision-making by machines. The key aspect associated with adaptive software development is the need for effective identification of the various exceptions and the ability to enable dynamic exception handling based on a set of generic rules, as argued by Yuen et al (2002)[5]. The concepts of fuzzy matching and de-duplication that are popular in software tools used for data cleansing in the business environment follow the above-mentioned concept of adaptive software. This is the case where the ability of the software to decide the best possible outcome for a given situation is programmed using a basic set of directory rules that are further enhanced using references to a variety of combinations that comprise the database of logical combinations of reasons that can be applied to a given situation (Yuen et al, 2002). The concept of fuzzy matching is also deemed to be a major breakthrough in the implementation of adaptive programming of machines and computing devices in Artificial Intelligence, because the program can not only refer to a set of rules and associated references but also interpret the combination of reasons derived relative to the given situation prior to arriving at a specific decision. From the aforementioned it is evident that the effective development of adaptive software for an AI device, in order to perform effective decision-making in the given environment, mainly depends on the extent to which the software is able to interpret the reasons prior to deriving the decision (Yuen et al, 2002). This makes it clear that adaptive software programming in artificial intelligence is not only an area of challenge but also one with extensive scope for development to enable the simulation of complex real-world problems using Artificial Intelligence.
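As a concrete, if simplified, illustration of fuzzy matching and de-duplication, the sketch below uses Python's standard difflib rather than any particular commercial data-cleansing tool; the reference values and thresholds are made up for the example.

    import difflib

    reference = ["Acme Limited", "Beta Industries", "Gamma Holdings"]

    def suggest(value, choices=reference, cutoff=0.6):
        # Return the closest matches to `value`, best first.
        return difflib.get_close_matches(value, choices, n=3, cutoff=cutoff)

    def deduplicate(records, threshold=0.9):
        # Keep only records that are not near-duplicates of one already kept.
        kept = []
        for r in records:
            if not any(difflib.SequenceMatcher(None, r, k).ratio() >= threshold
                       for k in kept):
                kept.append(r)
        return kept

    print(suggest("Acme Ltd"))                                   # ['Acme Limited']
    print(deduplicate(["Acme Limited", "Acme Limited.", "Beta Industries"]))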

It is also critical to appreciate that adaptive software programming in the case of Artificial Intelligence is mainly focused on the ability not only to identify and interpret the reasons using a set of rules and combinations of outcomes but also to demonstrate a degree of introspection. In other words, the adaptive software in the case of Artificial Intelligence is expected to enable the device to become a learning machine as opposed to a mere efficient exception handler, as argued by Yuen et al (2002). This further opens room for exploring knowledge management as part of the AI device, to accomplish a certain degree of introspection similar to that of a non-feeble-minded human.

Speech Synthesis/Recognition – This area of Artificial Intelligence can be deemed a derivative of adaptive software, whereby the device deciphers the message from the captured speech/audio stream and performs the appropriate task (Yuen et al, 2002). Speech recognition in the AI field poses key issues of matching, reasoning to enable access control/decision-making, and exception handling, on top of the traditional issues of noise filtering and isolation of the speaker's voice for interpretation. These issues arise in speech recognition, whereas in speech synthesis using computers the major issue is decision-making, as only a decision reached through logical reasoning can help produce the appropriate response to be synthesised into speech by the machine.

Speech synthesis, as opposed to speech recognition, depends only on the adaptive nature of the software involved, as argued by Yuen et al (2002). This is because the reasons derived from the interpretation of the captured input, using the decision-making rules and combinations for fuzzy matching, form the basis for the actual synthesis of the sentences that comprise the speech. The grammar associated with the sentences so framed, and their reproduction, depends heavily on the initial decision of the adaptive software using the logical reasons identified for the given environmental situation. Hence the complexity of speech synthesis and recognition poses a great challenge for effective reasoning in Artificial Intelligence.

Neural Networks – This is deemed to be yet another key challenge faced by Artificial Intelligence programming using reasoning, because neural networks aim to implement the local behaviour observed in the human brain (Jones, 2008)[6]. They involve layers of perceptrons and the complexity that arises from the interaction between the different layers, alongside decision-making through logical reasoning (Jones, 2008). This makes it clear that computation of the decision using the neural network strategy is aimed at solving highly complex problems with a greater level of external influence from uncertainties that interact with each other or demonstrate a significant level of dependency on one another. The adaptive software approach to the development of reasoned decision-making in machines therefore forms the basis for neural networks with a significant level of complexity and many dependencies involved (Jones, 2008).

The Single Layer Perceptrons (SLPs) discussed by Jones (2008), and the representation of Boolean expressions using SLPs, further make it clear that the effective deployment of neural networks can help simulate complex problems and also provide the ability to develop resilience within the machine. The learning capability, and the extent to which knowledge management can be incorporated as a component of the AI machine, can be defined successfully through identification and simulation of the SLPs and their interaction with each other in a given problem environment (Jones, 2008).
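To ground the SLP discussion, the following sketch (illustrative only, not taken from Jones, 2008) trains a single-layer perceptron with the classic perceptron learning rule to represent the Boolean AND expression.

    def train_perceptron(samples, epochs=20, lr=0.1):
        # Perceptron rule: adjust weights and bias by the prediction error.
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(AND)
    for (x1, x2), target in AND:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        print((x1, x2), out, "expected", target)

A single layer suffices here because AND is linearly separable; a function such as XOR, by contrast, requires the multi-layer networks and backpropagation discussed next.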

The case of neural networks also opens up the possibility of handling multi-layer perceptrons as part of adaptive software programming, by independently programming each layer before enabling interaction between the layers as part of the reasoning for decision-making (Jones, 2008). The key influential element for the aforementioned is the ability of the programmer(s) to identify the key input and output components for generating the reasons that facilitate decision-making.

The backpropagation or backward error propagation algorithm deployed in the neural networks is a salient feature that helps achieve the major aspect of learning from mistakes and errors in a given computer program as argued by Jones (2008). The backpropagation algorithm in the multi-layer networks is one of the major areas where the adaptive capabilities of the AI application program can be strengthened to reflect the real-world problem solving skills of the non-feeble minded human as argued by Jones (2008).

From the aforementioned it is clear that the neural networks implementation of AI applications can be achieved to a sustainable level using the backpropagation error correction technique. This self-correcting and learning system using the neural networks approach is one of the major elements that can help implement complex problems’ simulation using AI applications. The case of reasoning discussed earlier in the light of the neural networks proves that the effective use of the layer-based approach to simulate the problems in order to allow for the interaction will help achieve reliable AI application development methodologies.

The discussion presented also reveals that reasoning is one of the major elements that can help simulate real-world problems using computers or robotics regardless of the complexity of the problems.

2.3: Issues in the philosophy of Artificial Intelligence

The first and foremost issue faced in the case of AI implementation for simulating complex real-world problems is the need to replicate the real-world environment in the computer/artificial world so that the device can compute the reasons and arrive at a decision. This is because the simulation process involved in replicating the environment for a real-world problem cannot always account for exceptions that arise due to unique human behaviour in the interaction process (Jones, 2008). The lack of this facility, and the fact that the environment so created cannot alter itself fundamentally apart from being altered by changes in the state of the entities interacting within it, makes this a major hurdle for effective AI application development.
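
The following minimal sketch, with an invented world model, illustrates the point: the simulated environment changes state only through the interactions of the entities modelled within it, and it can only reject behaviour it has not been programmed to anticipate.

class SimulatedEnvironment:
    """A fixed world model: state changes only through modelled interactions."""

    def __init__(self):
        self.state = {"door": "closed", "light": "off"}
        self.allowed_actions = {"open_door", "close_door", "toggle_light"}

    def act(self, action: str) -> dict:
        if action not in self.allowed_actions:
            # Unanticipated human behaviour: the environment cannot adapt itself,
            # it can only reject the interaction.
            raise ValueError("unmodelled behaviour: " + action)
        if action == "open_door":
            self.state["door"] = "open"
        elif action == "close_door":
            self.state["door"] = "closed"
        else:
            self.state["light"] = "on" if self.state["light"] == "off" else "off"
        return self.state

env = SimulatedEnvironment()
print(env.act("open_door"))     # a modelled interaction changes the state
print(env.act("toggle_light"))  # the environment itself never restructures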

Apart from real-world environment replication, another issue faced by AI programmers is that the reasoning processes, and the exhaustiveness of the reasoning, are limited to the knowledge and skills of the analysts involved. This makes it clear that the process of reasoning, which depends on a non-feeble-minded human's response to a given problem in the real world, varies from one individual to another. Hence the reasons that can be simulated in the AI application can only be the fundamental logical reasons; the complex derivation of combinations of reasons, which is dependent on the individual, cannot be replicated effectively in a computer, as argued by Lopez (2005).

Finally, reasoning in the world of Artificial Intelligence is expected to provide a mathematical combination that delivers the desired results, which cannot be accomplished in many cases due to the uniqueness of the decision made by the non-feeble-minded individual involved. This poses a great challenge to the successful implementation of AI in computers and robotics, especially for complex problems that have various possible outcomes to choose from.

3: Critical Summary of Research
3.1: Paper 1 – Programs with Common Sense by Dr McCarthy

The rather ambitious paper presented by Dr McCarthy aims to provide an AI application that can help overcome the issues in speech recognition and logical reasoning that pose significant hurdles to AI application development. However, the approach to delivering the aforementioned in the form of an advice taker is a rather feeble approach to the AI representation of a solution to a problem of greater magnitude. Even though the paper aims to provide an Artificial Intelligence application for verbal reasoning processes that are simple in nature, the interpretation of verbal reasoning in the light of a given problem relative to an environment is not a simple component that can be simulated with ease prior to achieving the desired outcome, as discussed in section 2.

“One will be able to assume that the advice taker will have available to it a fairly wide class of immediate logical consequences of anything it is told and its previous knowledge” (Dr McCarthy, Pg 2). This statement by the author provides room for the argument that the advice taker program proposed by Dr McCarthy is intended to deliver an AI application using knowledge management as a core component of logical reasoning. This is because the statement implies that the advice taker program will deliver its decisions through access to a wide range of immediate logical consequences of anything it is told, together with its previous knowledge. This also makes it clear that the advice taker is not an unviable approach, since the knowledge-management strategy for logical reasoning is a component under both debate and development across a wide range of scientific problems simulated using AI. The Two Stage Fuzzy Clustering based on knowledge discovery presented by Qain in Da (2006)[7] is a classical example of the aforementioned. It is also interesting to note that the knowledge-management aspect of Artificial Intelligence programming is mainly dependent on the speed of accessing and processing the information needed to deliver the appropriate decision for the given problem (Yuen et al, 2002). A classical example would be the use of fuzzy matching for validation or suggestion-list generation in an Online Transaction Processing (OLTP) application on a real-time basis. This is the scenario where a portion of the data provided by the user is interpreted using fuzzy matching to arrive at a set of concrete choices for the user to choose from (Jones, 2008). The process of choosing the appropriate option from the suggestion list, normally performed by the individual user, is the component that Artificial Intelligence replaces in machines by choosing the best fit for the given problem. The aforementioned is evident in the case of the advice taker program, which aims to provide a solution for responding to the verbal reasoning processes of the day-to-day life of a non-feeble-minded individual.
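
A minimal sketch of the suggestion-list scenario, using Python's standard difflib module and invented account names rather than any real OLTP data, might look as follows.

import difflib

# A partial user entry is fuzzily matched against known values and the
# best-fit candidate is chosen automatically in place of the human user.
known_accounts = ["John Mwangi", "Joan Mwende", "Jonah Maina", "Grace Wanjiru"]

def suggest(partial_entry, candidates, limit=3):
    """Return up to `limit` candidates ranked by string similarity."""
    return difflib.get_close_matches(partial_entry, candidates, n=limit, cutoff=0.4)

suggestions = suggest("Jon Mwangi", known_accounts)
print(suggestions)                   # ranked suggestion list shown to the user
best_fit = suggestions[0] if suggestions else None
print("auto-selected:", best_fit)    # the choice an AI component would make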

The author's objective 'to make programs that learn from their experience as effectively as humans do' makes it clear that the advice taker follows a knowledge-management approach in which the program uses a database-type storage option to store and access its knowledge and previous experiences as part of the process. The advice taker software may therefore be a viable option if the processing speed required to store and retrieve information from a database of such magnitude, which will grow in size at an exponential rate, can be made available to the AI application. This could be achieved through the use of grid computing technology as well as other processing capabilities, given the availability of electronic components at affordable prices on the market. The major issue, however, is the design of such an application and the logical reasoning processes for retrieving the information needed to arrive at a decision for a given problem. From the discussion presented in section 2 it is evident that greater complexity in the level of logical reasoning results in a higher level of computation to account for external variants while providing the decision appropriate to the given problem. This cannot be accomplished without the ability to process the existing logical reasons held in the application's knowledge base. Hence the processing speed and efficiency of computation, in terms of both the architecture and the software capability, is a question that must be addressed to implement such a system.
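
As a purely illustrative sketch of such a database-type knowledge store (the table and column names are our own assumptions, not part of the paper), previous experiences could be stored and retrieved along the following lines.

import sqlite3

# Prior experiences are keyed by a problem description and retrieved before
# a new decision is made.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE knowledge (problem TEXT PRIMARY KEY, decision TEXT)")

def remember(problem: str, decision: str) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO knowledge VALUES (?, ?)", (problem, decision)
    )

def recall(problem: str):
    row = conn.execute(
        "SELECT decision FROM knowledge WHERE problem = ?", (problem,)
    ).fetchone()
    return row[0] if row else None

remember("lift arm blocked", "reduce motor torque and retry")
print(recall("lift arm blocked"))   # previous experience informs the decision
print(recall("unseen problem"))     # nothing stored yet -> None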

Although the advice taker software is viable from a hardware architecture perspective, the hurdle is the software component, which must be capable of delivering the level of abstraction discussed by the author. The ability to change the behaviour of the system merely by providing verbal commands from the user is the main challenge faced by AI application developers. This is because the effective implementation of the aforementioned can be achieved only with effective use of the speech recognition and logical reasoning already available to the software, so that a new logical reason can be incorporated as an improvement or correction to the existing set-up of the application. This approach is the major hurdle, and it also poses the challenge of identifying which speech patterns are to be treated as such corrective commands, as opposed to the statements the user provides merely to supply information to the application. From the above arguments it can be concluded that the author's statement that “If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent this abstraction in some relative simple way” is not a task that is easily realisable. It is also necessary to recognise that the abstractions realised by the user can be realised by an AI application only if the application already has a set of reasons, or room for learning new reasons from existing ones, prior to decision-making. This can be accomplished only through complex algorithms, including the error propagation algorithms discussed in section 2. It is therefore clear that realising the advice taker's capability to represent any abstraction in a relatively simple way is far-fetched without the appropriate implementation of self-corrective and learning algorithms. The fact that learning means not only capturing the application's previous actions in similar scenarios but also generating logical reasons from new information provided by the users is an aspect of AI that is still under development, yet it is a necessary ingredient for the advice taker. However, considering the timeline of the research presented by Dr McCarthy and the developments to date, one can say that AI application development has reached a higher level in interpreting information from the user and providing an appropriate decision using the logical reasoning approach. The author's suggestion that, for a machine to learn arbitrary behaviour, one should simulate the possible arbitrary behaviours and try them out is a method used extensively in the twenty-first-century implementation of Artificial Intelligence in computers and robotics. The knowledge developed in machines programmed using AI comes mainly from the arbitrary behaviours simulated and their results, which are loaded into the machine as logical reasons for the AI application to consult when faced with a given problem.

The author's arguments on the five features necessary for an AI application remain viable in the current AI application development environment, although the ability of a system to create subroutines that can be included into procedures as units is still a complex task. The magnitude of the processor speed and the related requirements on the hardware architecture are the problems faced by developers, as opposed to the actual development of such a system. The author's statement that 'In order for a program to be capable of learning something it must first be capable of being told it' describes one of the many components of AI application development that have seen tremendous progress since the dawn of the twenty-first century (Jones, 2008). The multiple-layer processing strategy used in the current state of AI application development to address complex real-world problems, which have influential variants both in the input provided and in the output, is synonymous with the above statement by Dr McCarthy.

The neural networks for adaptive behaviour presented in great detail by Pfeifer and Scheier (2001)[8] further justify the aforementioned. This also opens room for discussion on the extent to which the advice taker application could learn from experience through the use of neural networks as an adaptive-behaviour component for programming robots and other devices facing complex real-world problems. This is the kind of adaptive behaviour represented by the advice taker application, which Dr McCarthy described nearly half a century ago. The viability of using neural networks to take input in the form of sentences (imperative or declarative) is plausible with the use of the adaptive-behaviour strategy described above.

Finally, the construction of the advice taker described by the author can be met in the current AI application development environment, although its viability would have been an enormous challenge at the time the paper was published. The advice taker in the twenty-first-century AI environment could be constructed using either a combination of computers and robotics or one of the two as the sole operating environment, so its development on either platform is plausible depending upon the delivery scope of the application and its operational environment. Some of the hurdles faced, however, would be speech recognition and the ability to distinguish imperative from declarative sentences. The second issue is the scope of the application, since the simulation of the various instances needed to generate the knowledge database is plausible only within the defined scope of the application's target environment, as opposed to the non-feeble human mind, which can interact with multiple environments with ease. The multiple-layer neural-network approach may help tackle this problem only to a certain level, as the ability to distinguish between different environments when they are formed as layers is not easily plausible without knowledge of their interpretation being stored within the system. Finally, a self-corrective AI system is plausible in the twenty-first century, but a self-learning system using the logical reasons provided is still scarce and requires a greater level of design resilience to account for input and output variants of the system. The stimulus-response forms described by the author in the paper are realisable using a multiple-layer neural-network implementation, with the limitation that the scope of the advice taker is restricted to a specific problem or set of problems. The adaptive behaviour simulated using the neural networks mentioned earlier justifies the ability to achieve the aforementioned.

3.2: Paper 2 – A Logic for Default Reasoning

Default reasoning in twenty-first-century AI applications is one of the major elements that allow systems to function effectively without terminating unexpectedly when they are unable to handle an exception raised by a combination of logic, as argued by Pfeifer and Scheier (2001). This is because the effective use of default reasoning in the current AI application development environment aims to supply a reason by default when the list of simulated reasons and rule combinations, however exhaustively managed, does not cover the case at hand. However, the definition of an exhaustive list for development in a given environment is limited to the number of simulations that the designers can develop at the time of AI application design and to the adaptive capabilities of the AI system post-implementation (Pfeifer and Scheier, 2001). This makes it clear that the effective use of default reasoning in AI application development can be achieved only by handling the wide variety of exceptional conditions that arise in the normal operating environment of the problem being simulated (Pfeifer and Scheier, 2001). In the light of the above arguments, the author's assertion that default reasons are beliefs which may well be modified or rejected by subsequent observations holds true in the current AI development environment.

The default reasoning strategy described by the author is deemed a critical component in AI application development, mainly because defaulting reasons are aimed not only at preventing unhandled exceptions from causing abnormal termination of the program, but also at supporting the learning-from-experience strategy implemented within the application. The learning from experience described in section 2, together with the discussion presented in section 3.1, reveals that assigning a default reason in an adaptive AI application provides room for identifying the exceptions that occur in the course of solving problems, thus capturing new exceptions that can replace the existing default value. At the same time, heavy reliance on the default reasoning strategy can also limit the learning capabilities of the application in cases where the adaptive behaviour of the system is not effective, even though the default reason prevents abnormal termination of the system.
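
A minimal sketch of this idea, using the familiar birds-can-fly illustration rather than the author's formal notation, shows how a default belief is assumed in the absence of contrary information while unmatched cases are recorded for later incorporation as new rules.

# Specific rules are consulted first; in the absence of information to the
# contrary it is consistent to assume the default belief.
specific_rules = {
    "penguin": "cannot fly",
    "ostrich": "cannot fly",
}
DEFAULT_BELIEF = "can fly"          # held unless contradicted by a rule
unhandled_cases = []                # candidates for new rules ("learning")

def conclude(bird: str) -> str:
    if bird in specific_rules:
        return specific_rules[bird]
    # No contrary information: apply the default and note the case for review.
    unhandled_cases.append(bird)
    return DEFAULT_BELIEF

print("sparrow:", conclude("sparrow"))   # default applies
print("penguin:", conclude("penguin"))   # exception overrides the default
print("review later:", unhandled_cases)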

The logical representation of exceptions and defaults, and the author's interpretation of the phrase 'in the absence of any information to the contrary' as 'consistent to assume', justifies the aforementioned. It is further evident from the author's arguments that the creation of a default reason, and its implementation in a neural network as a set of logical reasons, is more complex than the typical case-wise conditional analysis of whether a given condition holds true for the situation at hand. Another interesting factor is that the definition of the conditions must incorporate room for partial success, since the typical logical outcomes of success or failure do not always apply to AI application problems. Hence it is necessary to ensure that the application can accommodate partial success as well as assign a concrete value to the given problem in order to generate an appropriate decision. The discussion on the non-monotonic character of the application defines the ability to formulate the condition for default reasoning effectively, rather than merely defaulting because the system fails to accommodate changes in the environment, as argued by Pfeifer and Scheier (2001). Carbonell (1980)[9] further argues that type hierarchies and their influence on the AI system have a significant bearing on the default reasoning strategies defined for a given AI application. This is because introducing type hierarchies allows the application not only to interpret the problem against the set of rules and reference data stored as reasons, but also to place it within the hierarchy in order to establish whether applying a default reason to the given problem is viable. The arguments of Carbonell (1980) on Single-Type and Multi-Type inclusion with either strict or non-strict partitioning justify the above argument. It is further critical to appreciate that the effective implementation of a type hierarchy in a logical reasoning environment gives the AI application a greater level of granularity in the definition and interpretation of the reasons pertaining to a given problem (Pfeifer and Scheier, 2001). It is this state of the AI application that can help achieve a significant level of independence and the ability to interact effectively with the environment with minimal human intervention. The discussion of inheritance mechanisms presented by Carbonell (1980), alongside the use of inheritance properties as the basis for implementing twenty-first-century AI systems (Pfeifer and Scheier, 2001), further justifies the need for default reasoning as an interactive component, as opposed to a problem-solving constant used merely to prevent abnormal termination.

Rapid developments in technology

Global trends and technological development and their effect on strategy and technology on organisations, with a focus on the Sony Corporation.

Abstract

In recent years there have been rapid developments in technology which have led to the opening up of a global market. This has brought both opportunities and challenges to enterprises. Enterprises that want to operate globally have to plan appropriate business strategies. When formulating these strategies they have to consider the importance of both the domestic and the global situation of the enterprise. This study examines the effect of technological progress and global changes, with a particular focus on how they have affected the Sony Corporation. There is a discussion of Sony’s business strategies and their strong points and shortcomings. The study ends with suggestions as to how Sony could resolve some of its recent problems.

Introduction

In recent years the phenomenon of globalization has taken place. This has come about because of rapid progress in technology and communications. Now the world has become one marketplace, and goods and services which in the past were available only in one place can now be bought almost anywhere in the world. This has many advantages for industries as it has expanded their market, but it has also brought many challenges. Among the challenges which must be dealt with by companies wishing to enter the global market are tariffs and international competition, particularly from newly industrializing countries (NICs) such as Malaysia, China and so forth. This has led to many enterprises formulating global strategies, and many of them have achieved success in the global market. However, to succeed in the global market it is not sufficient to have good global strategies; it is also necessary to be able to use these strategies in a balanced manner. The domestic market and the local culture are key elements which must be carefully taken into account in global strategies.

Many enterprises look to the example of Japanese companies when determining their global strategies, as it is generally considered that their global strategies have been very successful and have permitted them to enter and succeed in many international markets.

The principal focus of this study will be the Sony Corporation. There will be a discussion of Sony’s management of new technology and globalization. Examples will be given of Sony’s global strategies, and the advantages and disadvantages they have encountered due to these strategies will be presented and discussed.

Globalization

Every firm should understand the implications of globalization in order to develop a global strategy successfully. The term “globalization” signifies the increased worldwide mobility of goods, services, manpower and technology. Globalization may be described as a process by which countries all over the world are joined in a worldwide interdependent community. This process is driven by a combination of economic, technological, socio-cultural and political factors.

Raskin (2002) defined globalization as the worldwide integration of economic, cultural, political, religious, and social systems. He added that globalization, through the increasing integration of economies and lifestyles worldwide, leads to similarities in production and consumption patterns, and hence to cultural homogenization. From an economic perspective, globalization signifies the convergence of prices, products, wages, rates of interest and profits towards the standards of developed countries (Ismail, 2003). Similarly, Theodore (1983) argued that the main factors driving globalization of the economy are movement of the labour force; international trade; movement of capital; integration of financial markets; cross-border transactions; and free movement of international capital.

Basic components of globalization are the globalization of markets and the globalization of production. The former signifies a move away from a system in which national markets are separate entities, divided by trade barriers and barriers of distance, time and culture, towards the merging of national markets into a single global market. The latter, globalization of production, refers to a tendency by individual companies to spread their production processes over various locations around the world in order to benefit from differences in cost and quality of elements of production (Hill, 2007).

Drivers of globalization

The principal driving forces that facilitate or support the extension of globalization are the following.

Advances in transportation: A reduction in the cost of transporting goods and services from country to country assists in bringing prices in the country of manufacture nearer to prices in the export market. Developments in transport technology have led to a reduction in the cost of transport as well as to an improvement in the speed and reliability of transporting both goods and people. This has meant that it has become cost-effective to access new and expanding markets, thus enabling companies to extend their business further than would have been feasible in the past.

Technological advances: The huge reduction in the cost of transmitting and communicating information in recent years has played a vital role in the global growth of enterprises. This phenomenon has been called “the death of distance”, and is particularly noticeable in the growth of trade in knowledge products through the Internet.

De-regulation of financial markets: The process of de-regulating financial markets has led to the abolition of capital controls in many countries. Capital markets have opened up in both developed and developing countries, facilitating foreign direct investment and encouraging the flow of money across national borders.

Avoidance of import protection: Many enterprises seek to avoid the tariff and non-tariff barriers imposed by regional trading blocs in order to gain more competitive access to rapidly-growing economies such as those in the emerging markets.

Economies of scale: Many economists take the view that there has been a rise in the estimated minimum efficient scale (MES) related to particular industries. Technological changes, innovation and invention in various markets have been factors contributing to this increase. An increase in the MES means that the domestic market may be considered as not being large enough for the selling needs of these industries, making expansion into overseas markets essential.

The effect of globalization on international business

In recent years, companies have been required to deal with business issues in an international context due to the move towards globalization and internationalization as well as the nature of competition. The principal aspects of global business environments are the following.

The forces of globalization

Every aspect of the global business environment is affected by the drivers of globalization. Although globalization increases business opportunities, it also leads to an increase in competition. Companies must be aware of the basic and often sweeping changes in both society and commerce resulting from globalization (Wild, Wild and Han, 2008).

National business environment

Although globalization has initiated a process of homogenization among the different cultures, political systems, economic systems, legal systems, and levels of economic development in different countries, many of these differences remain marked and enduring. Any enterprise wishing to expand overseas must be aware of these differences, and be able to formulate and implement appropriate policies and strategies to deal with them successfully (Hill, 2006).

International business environment

The international business environment has both a direct and an indirect effect on how firms carry out their operations. As can be seen from the long-term movement towards less rigid national borders, no business can remain entirely isolated from occurrences in the international business environment. As globalization processes lead to the increasing interrelation of the flows of trade and investment, companies are required to seek production bases and new markets at the same time. Firms must monitor the international business environment closely to determine the impact it may have on their business activities (Wild, Wild, and Han, 2008).

Management of international companies

The management of a completely domestic firm is not at all the same as the management of a transnational one, as market rules differ and firms must take these differences into account. Thus, it is national business environments which define the context of managing an international firm (Wild, Wild and Han, 2008).

Competitive Advantage in the Global Market

In the global marketplace, it is vital for companies to sustain competitive advantage. The term competitive advantage was first used by Michael Porter of the Harvard Business School in the U.S.A. Basically, it means the position a company holds in relation to its competitors in the same industry. Firms seek to obtain a competitive advantage and then to sustain it. According to Porter (1998), there are three ways that a firm can do these things. The first way is cost leadership, which means that a firm will have a cost advantage if it can offer the same goods or services as its competitors, but at less cost. The second way is differentiation. The differentiation advantage refers to when a company can offer better goods or services than its competitors, but for the same price. This company will then become a leader in the industry. The third way is focus. This means that a company can concentrate on a narrow part of the market, known as a market niche, to obtain competitive advantage. Some firms may focus on cost and some may focus on differentiation (Porter, 1998). However, it is not easy for a firm to gain competitive advantage and it is even more difficult to keep it (Passemard and Kleiner, 2000). This is because, if a company has a differentiation competitive advantage, another company will soon find out how to make the same product with the same quality. If a company has a cost competitive advantage, then other companies will look for ways to make their products as cheaply (ibid).

However, there are several factors that contribute to a firm obtaining competitive advantage. One of these factors is having good resources. Another factor is having a skilled work force. Countries’ governments also can affect firms, as taxes vary much from country to country and some governments may offer tax incentives or subsidies to companies (Passemard and Kleiner, 2000).

The advent of globalization has provided companies with markets all over the world. This has offered many opportunities to expand, but it has also presented them with challenges. According to Ari (2008), globalization is “a process of increasing interconnectedness, integration and interdependence among not just economies but also societies, cultures and political institutions”. He adds that a result of globalisation is that “the borders between countries lose their significance and can no longer deter trade and communication”. Regarding business and economics, globalization means that there is liberalisation of trade and creation of world markets (ibid). However, it also means that global industries are competing with all industries in the world.

There are many strategies industries can use to obtain and keep competitive advantage in the global market. According to Porter (1998), companies should base their strategy on a thorough analysis of the industry’s structure; whether operating nationally or internationally, there are five forces that they should consider carefully, as follows:

The threat from new firms in their industry.
The threat of products that could replace their products.
The bargaining power of suppliers.
The bargaining power of customers.
Competition between companies in the same sector.

Segal-Horn (1996) points out that companies must be very careful when they are planning global strategy because some strategies which are effective in one country are not effective in another country. Companies have to decide if they want to have one product and marketing strategy for every country or if they have to adapt their strategy for different countries. Adaptation is more necessary for some industries than for others. For example, requirements of steel are more or less the same globally, but there will be large differences for consumer products and food and drinks. Companies have to consider this very carefully. For example, if they can use the same advertisement all over the world it is much cheaper for them, but the advertisement may not be effective in some countries, so they would lose money (ibid). To make such a strategy it is necessary for companies to have very good information about the country they want to sell their products in, which is called market intelligence (ibid). They have to be careful not to miss the differentiation advantage in any country (ibid). To have such information, they must do much market research. Many companies find that it is useful to have a joint venture with a local company in the country because that company already has good information and expertise about the market there.

De Toni et al (2008) state that “In global industries, competitive advantage derives in large part from the integration and co-ordination on an international scale of various activities”. According to Ward et al (1990) companies in a global market should have five competitive priorities, which are cost; delivery performance (dependability and speed); quality; flexibility (product mix and volume); and innovativeness.

If companies are looking for cost advantage there can be many benefits to them from globalization. This is because they can choose to buy their supplies from the cheapest supplier in any country in the world and are not limited to suppliers in their own country, as they were before globalization facilitated communication and transport (Ari, 2008). In addition, they can choose to produce their products in a country where labour costs are lower than in their own country (ibid). Moreover, they can also sell their products through the Internet and reach millions of customers that it was impossible for them to reach in the past.

Sony Corporation Profile

Sony was founded in Japan just after the Second World War by Ibuka and Morita and was known initially as the Tokyo Telecommunications Engineering Company. At first their business consisted of radio repairs and manufacturing voltmeters in small quantities. However, Ibuka and Morita were interested in innovative electronics products and were also aware of the importance of international markets. They developed Sony into an international brand, expanding their business first into the U.S.A. and then into Europe. The company’s name was changed to Sony Corporation in 1958.

Currently, the Sony Corporation employs more than 150,000 people worldwide. It is one of the largest media conglomerates in the world and has six operating divisions: electronics, games, music, films, financial services and miscellaneous. Sony Electronics is one of the world’s foremost makers of electronic products for both the business and individual consumer markets, while its games division produces, among other products, the Playstation, and its music division is the second largest such company in the world. Sony’s film division produces and distributes films for the cinema as well as for TV and computers, and its financial services segment includes savings and loans. Under the miscellaneous division, Sony is involved in advertising and Internet-related business.

For the financial year 2007-2008, Sony reported combined annual sales of ¥8,871.4 billion with a net income of ¥369.4 billion.

Historical background

The Sony Corporation has long been in the forefront of technological innovation and has devoted a considerable portion of its budget to research and development (R&D) in order to obtain and keep its competitive advantage. Some of Sony’s main developments were the following:

Sony developed a prototype magnetic tape recorder in 1949 and introduced paper-based recording tape a year later. In 1955, the company introduced Japan’s first transistor radio and was listed on the Tokyo Stock Exchange. The Sony Corporation of America (SONAM) was subsequently set up in the U.S.A., and the world’s first direct-view portable TV was introduced in 1960. Also in that year, Sony Overseas S.A. was set up in Switzerland, while a year later Sony became the first Japanese company to offer shares on the New York Stock Exchange. Further technological innovations followed throughout the 1960s, including the world’s smallest and lightest transistor television and the Trinitron colour television. Since then, the Sony Corporation has developed and produced the world’s first personal cassette player, the Sony Walkman, introduced in 1979, and the world’s first CD player, launched in 1982. More recent innovations include the home-use PC VAIO in 1997, the Blu-ray Disc drive notebook PC in 2006 and the OLED television in 2007. The Sony Corporation also expanded into the mobile telecommunications business in 2001 with the establishment of Sony Ericsson Mobile Communications, while a year later it acquired one of its rival companies, Aiwa, through a merger.

Sony’s Global Strategies
The World Marketplace

In the 1950s Japanese products suffered from a poor reputation. In an effort to overturn this, one of the company’s founders, Mr. Morita, travelled to the United States to learn from companies there, with a view to introducing his company’s products to the American market and beyond. In 1958, having obtained the licensing rights to the transistor patent from the U.S. company AT&T, the company developed the world’s smallest transistor radio, which it launched in both Japan and the U.S.A. It was at this point that the decision was taken to change the company’s name to Sony, as it was short, easy to pronounce and memorable. The intention was to make Sony an internationally recognised brand, and in this they have succeeded; according to Richard (2002), Sony has become one of the most widely recognized brands in the world.

Global marketing and operations

According to Kikkawa (1995), only nine major Japanese companies succeeded in becoming truly global: Sony; Toyota; Honda; Nippon Steel; Toray; Teijin; Sumitomo Chemical; Shin-Etsu Chemical; and Matsushita. Kikkawa argued that these companies succeeded in the international marketplace by supplying products globally and/or carrying out global operations. Sony’s products have been developed to fulfil the requirements of consumers worldwide; therefore, the corporation can offer the same products all over the world. One instance of this is the Sony Playstation, which appeals to consumers in every country in the world. In its ability to anticipate and fulfil the requirements of consumers, Sony has gained an advantage over its rivals.

The strategy of innovation

Masaru Ibuka, one of the founders of the Sony Corporation, stated that the key to Sony’s success was “never to follow the others”. In effect, the company’s central strategic advantage in its global strategy has always been continual innovation.

Global expansion and market selection

As far as global expansion is concerned, Sony has always given careful consideration to operating in markets it considered to be important and where it had reason to believe the company’s products would be most in demand (Richard, 2002). This led to the initial decision to expand first to the United States, where they could market their products while at the same time learning from U.S. technology. The rationale behind this was that it would be easier to expand to other markets once they had established a strong brand name in the United States. This in fact proved to be the case and expansion to European markets soon followed, as mentioned previously.

Advantages of Global Strategy
Reducing costs

Sony has used several elements of global strategy to its advantage. For instance, every Sony factory is able to produce at full capacity because Sony products are sold all over the world; this results in a reduction in production costs. In addition, although Sony has numerous product lines, they are standard worldwide. This means that Sony does not have the expense of producing several versions of a single product to suit various markets.

Worldwide recognition

As Sony’s products are known, sold and serviced all over the world, brand recognition among consumers is extremely high. This results in increased sales, as consumers feel secure about purchasing Sony products.

Enhancing competitive advantage

In addition, in recent years Sony has been an enthusiastic participant in the Sustainable Energy Europe Campaign, making efforts to produce energy-efficient products. The corporation is also involved in social and environmental concerns through its active and high-profile Corporate Social Responsibility (CSR) programme. These activities have contributed greatly to Sony’s ability to increase its competitive advantage over its rivals.

Sony’s CSR programme

Sony developed their Corporate Social Responsibility (CSR) programme in the awareness that the corporation’s business has direct and indirect effects on society and the environment in which their business is conducted. The programme is concerned with the interests of all the corporation’s stakeholders, such as shareholders, customers, employees, suppliers, business partners, and local communities. This has contributed to the improvement of Sony’s corporate value.

The European Commission awarded Sony a Sustainable Energy Europe Award in early 2007, in acknowledgement of Sony’s efforts towards increasing the energy efficiency of its products and its participation in the Sustainable Energy Europe Campaign. By 2007, Sony had modified all its TV sets to consume less energy than the market average. This was a result of its research and development and led to Sony TV sets increasing their market share. In this way, consumers can be satisfied that their television viewing is consuming a good deal less energy than previously, other stakeholders such as shareholders and suppliers are satisfied by the increase in sales of Sony TVs, and electricity consumption also decreases.

Another element in Sony’s CSR programme is the improvement of its system for employees to take leave to look after their children. Sony modified this system in the spring of 2007, with the aim of establishing a working environment in which taking child care leave was facilitated. It also attempted to encourage fathers to become more involved in caring for their children. This modification has led to an enhancement of the work-home life balance of Sony employees.

It can be seen from these examples that Sony has made use of the advantages of globalization in its CSR programme to achieve a competitive advantage over its rivals.

Disadvantages of Global Strategy

While global strategy offers many advantages for international enterprises, it also brings certain disadvantages. These consist mainly of costs related to greater coordination, reporting requirements, and added staff. In addition, international enterprises must be careful to avoid the pitfall of allowing over-centralization to reduce the quality of management in individual countries, as this can damage the motivation and drive of local employees. There is also a risk inherent in offering standardised products, as such products may prove to be less appropriate in some countries than in others. Similarly, the use of standardised marketing strategies may not always be successful, as, without cultural adaptation, certain strategies may be inappropriate in specific countries.

Finally, the over-use of global strategies may also result in unnecessary or inefficient expenditure. In the case of Sony, a considerable portion of the corporation’s budget is spent on R&D to fulfil international requirements, and this may have led Sony to over-diversify. In order to compete with global competitors, Sony has ‘a finger in every pie’, so to speak, and this may have led the corporation to stray too far from its core competency, which is expertise in electronics products. Moreover, the possibility exists that over-diversification may result in clouding consumers’ perceptions of the brand.

Currently, Sony is facing a challenge to its market supremacy from the Samsung Company. In contrast to Sony, Samsung’s global strategy consists of limiting its diversification and focusing its resources on a small number of dominant businesses. This strategy has so far proved very successful for Samsung.

Recommendations

Although the Sony Corporation has succeeded in building one of the most widely recognised brand names in the world, its market dominance appears to be based on increasingly unsteady ground. This is indicated by the fact that Sony’s net profit for the third quarter of 2006 fell by 94% to ¥1.7 billion, compared to ¥28.5 billion for the same period in 2005 (Benson, 2006). This dramatic fall in profits may be attributed to the crucial strategic concerns confronting Sony.

Sony’s manufacturing process is in need of restructuring, as the quality of some Sony products has declined. This has resulted in damage to their reputation and a consequent decrease in the competitiveness of their products. For instance, Forbes magazine reported in October 2006 that 9.6 million Sony laptop batteries had had to be recalled as they were prone to overheating and were therefore dangerous. In addition, Japanese consumers expressed their dissatisfaction with the new system of the Sony PS3 (Wonova, 2006). It would appear from these examples that Sony’s quality control system is not always as efficient as it should be.

Apart from quality control issues, Sony has shown itself unable to respond rapidly and effectively to changes in market demand, and its competitive advantage is therefore compromised. One example of this is the delay in the European launch of the PS3 because of manufacturing problems (BBC, 2006). Sony was unable to satisfy market demand, leaving the way open for rivals in the field such as Nintendo and Microsoft to increase their market share. Moreover, Sony did not respond as quickly as certain other television manufacturers to the increasing demand for plasma televisions and therefore allowed its competitors to gain a head start in this market. Mintzberg et al. (1999) pointed out that “the first mover may gain advantages in building distribution channels, in tying up specialized suppliers or in gaining the attention of customers”, adding that “the first product of a class to engage in mass advertising tends to impress itself more deeply in people’s minds than the second, third or fourth”. Hence, Sony forfeited its competitive advantage and a considerable part of the market share in the games and television markets. It is evident that Sony’s operational strategy is deficient and requires improvement.

In order to address these issues, Sony is putting into practice strategies from both the “inside out” resource-based perspective (Hamel and Prahalad, 1990; Barney, 1991) and “outside in” positioning perspective (Porter, 1980; Mintzberg et al., 1998), also known as the market-based perspective (Finlay, 2000). It has been suggested that combining these perspectives can optimise an enterprise’s capabilities and result in achieving and maintaining greater competitive advantages (Finlay, 2000; Thompson and Strickland, 2003; Johnson et al. 2005; Lynch, 2006). According to Hatch (1997) competitive strategy necessitates the exploitation of a company’s existing internal and external firm specific capabilities and the cultivation of new capabilities. Sony should determine appropriate methods for managing external changes in the constantly shifting business environment, and also determine how to make full use of their existing capabilities and resources to respond effectively to this environment. Moreover, Sony must be attentive to potential threats in the future and put in place the mechanisms required to neutralise these.

Conclusion

It can be seen that globalization brings both advantages and disadvantages for businesses. On one hand, they can sell their products in almost any country in the world, while progress in communication and transport means that they can choose cheaper suppliers and make their products in countries where labour costs are lower. On the other hand, it brings disadvantages in that they also have competitors from all over the world.

Appropriate planning and implementation of global strategies within the constantly evolving environment of technology can provide enterprises with opportunities for survival and expansion in an increasingly competitive market. However, inappropriate global strategies which are not well-conceived or well-implemented can result in losses. Several factors could contribute to such losses, including increased costs due to additional staff and insufficient attention to the requirements of the local market. It is vital that enterprises find an appropriate balance between over-globalisation and under-globalisation, although there are no precise guidelines for determining such a balance. Among the keys to obtaining and sustaining competitive advantage in a global market is careful planning and strategy, which includes obtaining detailed information about the target country and focusing on cost or differentiation advantage.

References
Ari, A. (2008), Globalisation, online at http://www.geocities.com/anil.ari_global/index.html# (accessed 10 August 2009).
Barney, J. B. (1991), “Firm resources and sustained competitive advantage”, Journal of Management, Vol. 17, No. 1, pp. 99-120.
Barney, J. B. (2001), “Is the resource-based “view” a useful perspective for strategic management research? Yes”, Academy of Management Review, Vol. 26, No. 1, pp. 41-56.
De Toni, A., Filippini, R. and Forza, R. (1999), International Journal of Operations and Production Management, Vol. 12, No. 4, p. 718.
Passemard, D. and Kleiner, B.H. (2000), “Competitive Advantage in Global Industries”, Management Research News, Vol. 23, Issue 7/8, pp. 111-117.
Finlay, P. (2000), Strategic Management: An introduction to business and corporate strategy, Prentice Hall.
Hamel, G. and Prahalad, C.K. (1990), “Capabilities-Based Competition”, Harvard Business Review, Vol. 70, No. 3.
Hamel, G. and Prahalad, C.K. (1994), “Competing for the future”, Harvard Business School Press.
Hatch, M.J. (1997), Organization Theory: Modern Symbolic and Postmodern Perspectives, Oxford University Press.
Hill, C.W.L. (2007), International business: competing in the global marketplace, Boston: McGraw-Hill/Irwin.
Johnson, G. (2005), Exploring Corporate Strategy: Text and Cases, 7th Edition, Prentice Hall.
Kikkawa, T. (1995), Growth in cluster of entrepreneurs: The case of Honda Motor and Sony
Lynch, R. (2006), Corporate Strategy, 4th Edition, Prentice Hall.