Speaker Driver: Comparison of Options

Speaker driver choice is a critical consideration, since the transducers themselves are the most fundamental part of the speaker. Regardless of other factors, a system built on inferior drivers can never be expected to perform well.

There are two main options when choosing drivers: electrostatic or conventional voice-coil designs. Although many seem to be under the impression that electrostatic loudspeakers are a modern invention, this is not the case; Janszen was granted the first U.S. patent for such a device in 1953[1]. Considering the relatively small market penetration of electrostatic transducers and the fact that they tend to appear largely in high-end designs, one might be led to assume that electrostatic panels are superior to conventional drivers. This, however, is only partially true.

One advantage of electrostatic panels is that full-range designs are possible, eliminating the need for crossovers and hence the associated problems with frequency and phase response in the crossover band. Another advantage is that the electrostatic panel is generally very light and hence offers excellent transient response, whilst also offering very good directionality and imaging. The latter may also be seen as a disadvantage, since it effectively makes the ideal listening position rather narrow.

In terms of disadvantages, the chief problem with electrostatic designs is difficulty in reproducing bass frequencies at high SPLs. Generally the panel excursion is small, which makes it hard for electrostatic transducers to move the required volume of air at low frequencies. Furthermore, since electrostatic transducers are not meant for use with an enclosure, phase cancellation is an issue, again resulting in reduced bass performance. Audiostatic, a company that manufactures audiophile full-range electrostatic speakers, admits of its own devices' bass performance that “Obviously because of the limited membrane excursion they won’t produce ear shattering levels at that frequency”[2].

As a result of the aforementioned bass performance, many high-end electrostatic speakers are in fact hybrids, using voice-coil woofers for low frequencies with electrostatic panels covering the mid and high range. One example is the Martin Logan Summit[3], which whilst described as “our most advanced and sophisticated full-range loudspeaker” nevertheless makes use of two 10” woofers for low-end reproduction. Of course in this situation a crossover is still required, so the advantage of the possibility of a full-range design is often nullified in practice. Still, electrostatics may prove very attractive as high quality mid to high frequency drivers, although they are certainly not cheap.

In choosing conventional voice-coil drivers, there are many factors to consider. In terms of quality, it is certainly true that one does indeed get what one pays for. Whilst high quality manufacturers such as SEAS[4] are happy to provide detailed frequency response plots and Thiele-Small parameters for their transducers, many cheaper manufacturers are less transparent about their devices.

One common trick to beware of, often used by less scrupulous manufacturers, is the quoting of a recommended frequency range without stating the variation in output (in dB) across this range. A recommended operating range without any indication of the actual performance within the frequency band is virtually meaningless. Many assume a ±3dB range is implied when reading such data; it is unwise to make such assumptions.

Furthermore, even if frequency response across a range is qualified with the variation in output in dB, this is still not ideal. Obviously one desires that any variation in output magnitude will be a smooth variation; one still has no idea of how “lumpy” the response might be. For these reasons it is best to choose drivers that are accompanied by frequency plots, since this gives a far more accurate representation of true performance.

Another important consideration in choosing a driver is the application for which it is intended. For example, a woofer with a high maximum cone excursion and low Fs may perform very well in a large sealed cabinet but be totally unsuited to a ported implementation (Dickason, 2000). One can make use of the quoted Thiele-Small parameters to ascertain whether the driver is suitable for its intended purpose.
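As a rough illustration of this kind of check (a sketch only, not a procedure taken from Dickason; the driver figures are hypothetical), the commonly quoted efficiency bandwidth product, EBP = Fs/Qes, can be computed directly from the datasheet parameters to suggest a suitable alignment:

```python
# Illustrative sketch: using two Thiele-Small parameters (Fs, Qes) to judge
# whether a woofer is better suited to a sealed or a ported (vented) cabinet.
# Rule of thumb: EBP below ~50 suggests sealed, above ~100 suggests ported.

def efficiency_bandwidth_product(fs_hz: float, qes: float) -> float:
    """EBP = Fs / Qes, both taken from the driver datasheet."""
    return fs_hz / qes

def suggested_alignment(fs_hz: float, qes: float) -> str:
    ebp = efficiency_bandwidth_product(fs_hz, qes)
    if ebp < 50:
        return f"EBP = {ebp:.0f}: better suited to a sealed enclosure"
    if ebp > 100:
        return f"EBP = {ebp:.0f}: better suited to a ported enclosure"
    return f"EBP = {ebp:.0f}: usable in either alignment"

# Hypothetical woofer: Fs = 25 Hz, Qes = 0.35
print(suggested_alignment(25.0, 0.35))   # EBP is roughly 71: usable in either alignment
```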

Construction materials also give an indication of how the driver may sound. In terms of woofer and midrange drivers, for example, an aluminium cone may indicate greater bass precision than an otherwise equivalent transducer with a paper cone; softer cones are associated with greater distortion than their stiffer counterparts. However, as Larsen (2003) notes “cone break-up behaviour and frequency response was shown to be strongly dependant on the Geometrical Stiffness of the Cone”. Hence the geometry of the design may be more important than the material used.

Diameter of the driver is also a hugely important factor for woofers, although of minor importance for tweeters. To reproduce bass frequencies at good SPLs, a large volume of air must be moved by the driver. To this end, there is absolutely no way a 6” driver can compete with a 12” driver of similar quality in terms of bass extension; it is simply not physically possible.

Power handling is another consideration that must be given thought when choosing a driver; the peak short-term power dissipated by a transducer can easily be double its long-term rating. Naturally for the best performance it is desirable to ensure that the driver is not operating too close to its quoted limits. One should think carefully about how hard the driver is likely to be driven and ensure its power handling is adequate; overdriving a unit at best will result in distortion and at worst may cause irreversible damage. In many cases users overdrive and damage units in an attempt to achieve a higher SPL, particularly in the bass region. If the system requirements are adequately specified and designed for, this should not happen.
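As a rough sanity check of this point (an illustrative sketch; the sensitivity figure, target SPL and power rating below are hypothetical, and room gain and power compression are ignored), one can estimate the continuous power needed to reach a target SPL at 1 m from a sensitivity quoted at 1 W/1 m:

```python
# Rough sanity check (illustrative only; figures are hypothetical): estimate the
# continuous power needed to reach a target SPL at 1 m and compare it with the
# driver's long-term power rating.

def power_for_target_spl(sensitivity_db_1w_1m: float, target_spl_db: float) -> float:
    """Continuous electrical power (W) needed to reach the target SPL at 1 m."""
    return 10 ** ((target_spl_db - sensitivity_db_1w_1m) / 10)

sensitivity_db = 88.0        # hypothetical woofer sensitivity (dB SPL, 1 W / 1 m)
long_term_rating_w = 100.0   # hypothetical long-term power rating
required_w = power_for_target_spl(sensitivity_db, 105.0)

print(f"Approximately {required_w:.0f} W needed for 105 dB SPL at 1 m")
if required_w > 0.5 * long_term_rating_w:
    print("Little headroom left relative to the long-term rating; consider a larger or more sensitive driver.")
```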

For the high-budget client, the best solution will either be high-quality voice-coil drivers carefully selected to complement each other, or a hybrid electrostatic implementation. It is difficult to recommend a fully electrostatic solution due to the associated problems with low frequency performance, although for some clients this may be acceptable.

For the low-budget client, standard voice-coil drivers are the only solution. The quality of the drivers used will largely be influenced by pricing; one should carefully consider all factors and attempt to find the best solution within budget. Datasheets should be closely scrutinised to identify the strengths and weaknesses of each option before a solution is chosen.

In conclusion, notwithstanding the electrostatic debate, driver choice is largely influenced by price and performance. In general, the better specified the driver, the more expensive it is likely to be. If working with a high budget, one is likely to simply choose the best specified drivers. Conversely, with a limited amount of capital, one must make the best compromise that can be reached within budget.

Sources

Larsen, Peter. (2003). Geometrical Stiffness of Loudspeaker Cones, Loudsoft.

Borwick, John. (2001). Loudspeaker and Headphone Handbook, Focal Press.

Dickason, V. (1995). The Loudspeaker Design Cookbook, Audio Amateur Publications.

Rossing, T. (1990). The Science of Sound, Addison-Wesley.


Social Media as Emerging Technology

Investigate emerging IT technologies: Social Networks appear to be all the rage at the moment.

Introduction

Psychology is classically defined as “… The science of behavior …”, which in the case of human beings manifests itself when others are present, thus representing behavioral instances in social interaction (Kenny, 1996). The phenomena of socialization and networking have been extended by the global presence of the Internet, whereby individuals, through specific social networking websites, have access to a broad range of other individuals, further shaped by the type of website, each with differing population, age and constituency compositions (Freeman, 2004, pp. 10-29). Through email, instant messaging, online dating and blogging, the Internet has created a relatively secure means for people to engage in socialized behavior while feeling relatively safe with regard to personality differences and other issues that might arise when they are exposed directly to individuals with whom they might not share common interests or outlooks (Ethier, 2004).

All of the preceding factors have contributed to the dramatic increase in the popularity of online social network services. Classmates.com, started in 1995, was the first social network website; it was followed in 1997 by Company of Friends, the online network of the magazine Fast Company, which began the era of business networking (FastCompany.com, 2004). The promise of privacy, like-minded interests and the ability to socialize saw online social networking become extremely popular in 2002, and it has grown to the point where there are now over 200 such web sites globally (RateItAll.com, 2007). As with any activity that attracts large numbers of people, social networking is big business. The Internet offers firms in this sphere an advantage in bringing together distinct profiles of individuals with marketing potential beyond any fees or charges to the members (Robson, 1996, pp. 250-260). However, the social networking segment is increasingly taking on the look of the dot-com frenzy that gripped the Internet in 2001 (Madslien, 2005). The questions that loomed then loom again now over online social networking. What are their business models? What revenue are they generating? What is their profitability? How do they differ, and will the phenomenon last? These are the areas that will be explored herein.

Online social networks are forums where people can meet new individuals, network and initiate or maintain contact with old acquaintances through the relative privacy of the Internet, enabling business- or socially-minded people to enlarge their spheres by providing and exchanging information about themselves (Epic.org, 2006). Facebook (2007) is a system comprised of a number of networks, each based around a region, company, high school or college, that permits its users to share information about themselves. This allows a broad range of types and demographics of people to use the social network, rather than offering contacts geared to one specific profile, and thus provides a more diverse population with greater appeal to advertisers. The different networks within Facebook are independent and closed off to non-affiliated users, providing control over the content available to specific group profiles. Facebook is an English-language web site that enjoys particular popularity among college students, its largest profile group, numbering in excess of 17 million, or roughly 85% of all U.S. college students (Arrington, 2006). Facebook is free for users, relying on advertising, banner ads and sponsored groups for revenues estimated to be in the area of $53,000,000 annually (Arrington, 2006).

Another type of social networking web site is LinkedIn, which is business oriented and primarily established to enable professional networking (Dragan, 2004). The company’s 40,000-member list includes such high-profile individuals as over 700 company vice presidents, over 500 chief executive officers and 140 chief treasury officers (Dragan, 2004). Not yet generating a profit, LinkedIn charges a fee for its basic service and levies an additional charge on what it terms ‘power users’, the executive recruiters, investment professionals and sales representatives who use the service to tap into its network (Liedtke, 2004). Many members use their personal contacts and associates to find and fill jobs and to increase their sales, giving the site a very select user profile that also generates income from advertisers; however, the business model has yet to prove profitable (Liedtke, 2004). Founded in 2003, it has become something of an ‘in’ place for professionals, increasingly identifying its members as belonging to a special group of movers and shakers, as it is termed (Copeland, 2006). At present, LinkedIn has existed on venture capital funding of almost $15 million USD from investors such as Sequoia Capital and Greylock, with the company’s business model, based upon advertising revenue and fees, projected to generate $100 million in revenues by the year 2008 (Copeland, 2006). The goal is to increase the web site’s membership, making it the number one professional resource for business and networking, job referrals, references, experts and whatever else professionals need (Copeland, 2006).

The younger generation of teens and those in their early twenties tend to use hi5, which has over 40 million members and follows the pattern of the MySpace social network (Mashable.com, 2006). The massive traffic the web site generates makes it the eighth most visited social network web site in the United States, but it is losing market share to rivals such as Facebook, Bebo, Piczo, Tagworld, Multiply and others that also covet this user group, with MySpace the dominant performer, stealing market share from all of these rivals (Mashable.com, 2006). In keeping with the general social network format, hi5 offers profile pages with basic services offered for free, and the site, like others, generates revenues from advertising, banner ads and referrals to music and other web sites such as iTunes for music downloads. The social network allows users to connect to their friends, build and introduce themselves to new ones, and invite their own contacts (hi5.com, 2007). Still in the venture-capital-backed stage, hi5 does not provide information on its revenues or related data. Bebo (2007), as is the case with social networking sites geared at the younger generation, offers users the ability to post their pictures, write blogs and of course send messages. A relative newcomer, founded in 2005, Bebo, like hi5, Facebook, Tagworld, Multiply and others, allows users to post their talents on their personal pages in a special “New Music Only on Bebo” section (Bebo, 2007).

Any discussion of online social networks must of course include MySpace, the largest web site of its kind, accounting for almost 80% of online visits in this category (Answers.com, 2007a). With over 125 million users, the site is targeted at the teenage and under-thirty crowd and, in typical fashion, allows users to create their own personal profile pages that can be enhanced with HTML code to make them into multimedia pages (Answers.com, 2007a). This allows users to showcase themselves by posting their talents, videos, music and paintings, and its success is evidenced by its purchase by News Corporation for in excess of 500 million USD (Answers.com, 2007a). MySpace’s business model of advertising revenues, banners and fees has achieved success as a result of its size, the determining factor in Internet-related businesses.

Friends Reunited in the UK represents a combination of all of the other online social networking sites discussed. It encourages friends, family and individuals to connect for reunions, communication, genealogy, socializing and dating, and, like LinkedIn, it offers job searches and job hunting (Friends Reunited, 2007). Going one better than its American counterparts, the site offers television broadcasts via its parent company's ITV network as well as the popular format of music CD collections. All of these facets are revenue generators that users can access free (Answers.com, 2007b). With 15 million members, Friends Reunited has access to almost half of all UK households with Internet service; it was founded by Steve and Julie Pankhurst, who were looking for old classmates and found a friend lost for 30 years (Answers.com, 2007b). The success of this multiple-interest web site, combining all of the features found in the highly successful U.S. social networks along with fresh new wrinkles of its own such as television broadcasts, resulted in the purchase of the company from the Pankhursts by ITV in December of 2005 for £120,000,000.

As would be expected, online social networks have become a global phenomenon that has taken off particularly in the Asian region. Japan's top social networking site, Mixi, is a highly organized, in Japanese fashion, web site that is a kind of MySpace knock-off in the Japanese language, utilizing the same advertising, banner ad and music referral business model (Kageyama, 2007). The cultural nuance is apparent in that “MySpace is about me, me, me and look at me …”, whereas “… Mixi, is not all about me. It’s all about us”, reflecting the more reserved nature of Japanese culture (Kageyama, 2007). Social networks of the non-online variety have long been a fixture of Asian societies, and in Korea CyWorld has grown to the point where it is launching a U.S. version with an initial investment of $10 million USD and a pledge to spend whatever it takes to be successful (Kirkpatrick, 2006). With versions in Japan as well as China and Taiwan, CyWorld is an example of the universal nature of the social networking business model. The formulas utilized globally are basically the same: free access, bring in large numbers of people, charge advertisers, and diversify the revenue stream through music, television access, movie CDs and other sources.

Conclusion

As was and is the case in the United States, as represented by MySpace, market share and dominance determine value to advertisers, investors and buyers. Friends Reunited is the largest social networking site in the UK and commanded the same interest from a large corporation that MySpace did in the U.S. Success translates into having a commanding percentage of a nation's user profile, which helps a web site attract more and better advertisers at increased rates, along with banner ads, music web site referrals and other revenue streams. The venture-capital-backed nature of online social network sites makes access to their profitability elusive, yet the most popular sites have either been acquired by large corporations (MySpace and Friends Reunited, for example) or are expanding aggressively (CyWorld and MySpace), indicating that revenues and profits must be adequate if not substantial. As eBay and Yahoo have proven, market dominance does translate into revenues, but there is a lag time that takes well-heeled investors or corporations to underwrite.

And the stakes have made the game hotter as new entrants and current players alike up the ante (Hicks, 2004). But that is not all bad news, as “… not all online social networks are the same …” (Jacobs, 2006). And while the demographics, profiles, appeal and niches are broadly similar, the tremendous online numbers allow for the distinctions (Jacobs, 2006). As is the case with dominant-sized competitors, they have the clout to slowly dip into their smaller competitors' share, thus increasing their size advantage, or to accomplish the same through acquisition. This brings up the other side of the coin: with most of the online social network sites funded by venture capitalists who are in it for the sell-off to another company or a stock play, is the phenomenon one that is ready to burst (seomoz.org, 2006)? MySpace has yet to prove out the $580 million investment by Rupert Murdoch's News Corporation despite its size, and the venture capital market, which has pumped more than $824 million into the sector since 2001, is still awaiting returns on most of that money (Rosmarin, 2006). But with MySpace and Friends Reunited pulling in almost half of their respective countries' Internet access subscribers, the potential for huge profits represents a bet that most companies have opted not to miss out on. Privately held Facebook's recent rejection of a $750 million offer is a demonstration of this point (Rosenbush, 2006). The jury is still out; as the industry grows and some consolidation occurs, the real story will reveal itself in terms of profitability as well as staying power.

Bibliography

Answers.com (2007b) Friends Reunited. Retrieved on 24 February 2007 from http://www.answers.com/topic/friends-reunited

Answers.com (2007a) MySpace. Retrieved on 24 February 2007 from http://www.answers.com/topic/myspace

Arrington (2006) 85% of College Students Use Facebook. Retrieved on 24 February 2007 from http://www.techcrunch.com/2005/09/07/85-of-college-students-use-facebook/

Bebo (2007) Bebo. Retrieved on 24 February 2007 from http://www.bebo.com/

Copeland, M. (2006) A MySpace for grown-ups. 4 December 2006. Retrieved on 24 February 2007 from http://money.cnn.com/magazines/business2/business2_archive/2006/12/01/8394967/index.htm?postversion=2006120415

Dragan, R. (2004) LinkedIn. Retrieved on 23 February 2007 from http://www.pcmag.com/article2/0,4149,1418686,00.asp

Epic.org (2006) Social Networking Privacy. Retrieved on 23 February 2007 from http://www.epic.org/privacy/socialnet/default.html

Ethier, J. (2004) Current Research in Social Network Theory. Retrieved on 22 February 2007 from http://www.ccs.neu.edu/home/perrolle/archive/Ethier-SocialNetworks.html

Facebook (2007) Facebook. Retrieved on 23 February 2007 from http://www.facebook.com/

FastCompany.com (2004) What the Heck is Social Networking. 16 March 2004. Retrieved on 22 February 2007 from http://blog.fastcompany.com/archives/2004/03/16/what_the_heck_is_social_networking.html

Freeman, L. (2004) The Development of Social Network Analysis: A Study in the Sociology of Science. Empirical Press

Friends Reunited (2007) Welcome to Friends Reunited – what are your old friends doing now. Retrieved on 24 February 2007 from http://www.friendsreunited.co.uk/friendsreunited.asp?WCI=FRMain&show=Y&page=UK&randomiser=4

hi5.com (2007) hi5. Retrieved on 24 February 2007 from http://www.hi5.com/

Hicks, M. (2004) Social Networking Keeps Buzzing. 15 October 2004. Retrieved 24 February 2007 from http://www.eweek.com/article2/0,1895,1677508,00.asp

Jacobs, D. (2006) Different Online Social Networks Draw Different Age Groups: Report. 7 October 2006. Retrieved on 24 February 2007 from http://www.ibtimes.com/articles/20061007/myspace-friendster-xanga-facebook.htm

Kageyama, Y. (2007) MySpace faces stiff competition in Japan. 18 February 2007. Retrieved on 24 February 2007 from http://news.yahoo.com/s/ap/20070219/ap_on_hi_te/japan_social_networking

Kenny, D. (1996) The Design and Analysis of Social-Interaction Research. Vol. 47. Annual Review of Psychology

Kirkpatrick, M. (2006) Massive Korean Social Network CyWorld Launches in U.S. 27 July 2006. Retrieved on 24 February 2007 from http://www.techcrunch.com/2006/07/27/this-is-nuts-cyworld-us-opens-for-use/

Liedtke, M. (2004) Networking site LinkedIn Causes Buzz – but can it be profitable? 25 October 2004. Retrieved on 24 February 2007 from http://seattlepi.nwsource.com/business/196580_linkedin25.html

Madslien, J. (2005) Dotcom Shares Still Spook Investors. Retrieved on 22 February 2007 from http://news.bbc.co.uk/1/hi/business/4333899.stm

Mashable.com (2006) hi5, Another Massive Social Network. Retrieved on 24 February 2007 from http://mashable.com/2006/07/16/hi5-another-massive-social-network/

RateItAll.com (2007) Social Networking Web Sites. Retrieved on 22 February 2007 from http://www.rateitall.com/t-1900-social-networking-web-sites.aspx?age=&zipcode=&gender=&sort=0&pagesize=all

Robson, W. (1996) Strategic Management and Information Systems: An Integrated Approach. Trans-Atlantic Publications

Rosenbush, S. (2006) Facebook’s on the Block. 28 March 2006. Retrieved on 24 February 2007 from http://www.businessweek.com/technology/content/mar2006/tc20060327_215976.htm?chan=technology_technology+index+page_today’s+top+stories

Rosmarin, R. (2006) The MySpace Bubble. 29 June 2006. Retrieved on 24 February 2007 from http://www.forbes.com/home/digitalentertainment/2006/06/29/myspace-network-facebook_cx_rr_0629socialnetwork.html

seomoz.org. (2006) Is Social Networking a Dotcom Bubble Waiting to Burst? 28 September 2006. Retrieved on 24 February 2007 from http://www.seomoz.org/blog/is-social-networking-a-dotcom-bubble-waiting-to-burst

Snoopy Tool Evaluation

Snoopy is a tool for designing and animating hierarchical graphs, among them Petri nets. Snoopy also provides facilities to construct Petri nets and allows animation and simulation of the resulting token flow. The tool is used to verify technical systems, specifically software-based systems, and natural systems, e.g. signal transduction and biochemical networks such as metabolic and gene regulatory networks. Snoopy is used to consider the qualitative network structure of a model, under the specific kinetic aspects of the chosen Petri net class, and to investigate Petri net models in several complementary ways. Simultaneous use of different Petri net classes is one of Snoopy's outstanding features. Other features are:

It is extensible, as its generic design aids the implementation of new Petri net classes.
It is adaptive, as numerous models can be used simultaneously.
It is platform independent, as it is executable on all common operating systems, e.g. Linux, Mac OS and Windows.

Two particular types of nodes, logical nodes and macro nodes, support the systematic construction, neat arrangement and design of large Petri nets. Logical nodes act as connectors: places or transitions used in multiple locations that share the same name and function. Macro nodes allow hierarchical design of a Petri net. Snoopy allows editing and coloring of all elements in each Petri net class, and manual or automatic change of the network layout. The graphical editor helps prevent syntactical errors in the network structure of a Petri net.

Editor Mode:

Start Snoopy and go to File > New or press the New button in the tool bar. This opens a template dialogue that allows selection of the document template.

File: New/Open/Close Window/Save/Save as, Print, Export/Import, Preferences (change the default visualization) and Exit.

Edit: Undo/Redo, Select All/Copy/Copy in new net/Paste/Cut, Clear/Clear all, Hide/Unhide, Edit selected elements/Transform Shapes, Layout (automatic layout function), Sort Nodes (by ID or name), Check Net (duplicate nodes, syntax, consistency) and Convert to.

View: Zoom 100%/Zoom In/Zoom Out, Net Information (number of each element used in the model), Toggle Graphelements/Hierarchy browser/Filebar/Log window, Show Attributes (choose for each element which attributes are shown in the model), Start Anim-Mode/Simulation-Mode/Steering-Mode.

Elements (list of all available elements): Select/ Place/Transition/ Coarse Place/Coarse Transition/ Immediate Transition/Deterministic Transition/Scheduled Transition/Parameter/Coarse Parameter/LookupTable, Edge/Read Edge/Inhibitor Edge/Reset Edge/Equal Edge/Modifier Edge and Comment.

Hierarchy (edit and browse hierarchy): Coarse (chosen elements are encapsulated in a macro node)/Flatten and Go Up in Hierarchy/Go To First Child in Hierarchy/Go To Next Sibling in Hierarchy/Go To Previous Sibling in Hierarchy.

Search : Search nodes (by ID or name).

Extra : Load node sets (visualize, e.g., T-, P-invariants, siphons and traps), Interaction and General Information (title, author, description, literature).

Window (arrange all opened windows): Cascade/Tile Horizontally/Tile vertically, Arrange Icons/Next/Previous and Open Files.

Help: Help, About (current version), check update.

The tool bar holds four shortcuts that facilitate:

Open a new document.

Load a document.

Save a document.

Select an element.

All elements accessible in the current net class are displayed in the graph elements panel. A left-click on one of the elements enables the user to use that element; a right-click on the respective element allows the user to edit it or to select all elements of the same class. All levels are displayed in the hierarchy browser, and any hierarchical level can be opened in a new window by a left-click. The editor pane can be considered the canvas on which the user draws the network. A left-click on the editor pane places the currently chosen element on the canvas. To draw an arc between two nodes, left-click on one node, hold the button, drag the line to the other node and release. To add bend points to an arc, press the CTRL key and left-click on the arc, which allows the user to drag the new point with another left-click. The grid in the canvas tab can also be used for better orientation. The user can also pick edge styles, i.e. line or spline, in the preference dialogue under the elements tab.

Elements:

Nodes:

The following node types are available (graphics omitted): standard place, standard transition, coarse place, coarse transition, immediate transition, deterministic transition and scheduled transition.

Immediate Transition: Immediate transitions fire as soon as they are enabled. The waiting time is equal to zero.

Standard Transition (Timed Transition): A waiting time is computed as soon as the transition is enabled. The transition fires when the timer reaches zero and the transition is still enabled.

Deterministic Transition: Deterministic transitions fire as soon as the fixed time interval elapses during the entire simulation run time. The respective deterministic transitions must be enabled at the end of each repeated interval.

Scheduled Transition: Scheduled transitions fire at the given time points, once the fixed time interval has elapsed. The respective scheduled transitions must be enabled at the end of each repeated interval.

Edges:

The following edge types are available (graphics omitted); a small illustrative sketch of the firing rules follows the list of descriptions.

Standard edge

The transition is enabled and may fire if both pre-places A and B are sufficiently marked by tokens. After firing of the transition, tokens are removed from the pre-places and new tokens are produced on the post-place.

Read edge

The transition is enabled and may fire if both pre-places A and B are sufficiently marked by tokens. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A, new tokens are produced on post place. The firing of the transition does not change the amount of tokens on pre-place A.

Inhibitor edge

The transition is enabled and may fire if pre-place B is sufficiently marked by tokens. The amount of tokens on pre-place A must be smaller than the given arc weight. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A; new tokens are produced on place C. The firing of the transition does not change the amount of tokens on pre-place A.

Reset edge

The transition is enabled and may fire if pre-place B is sufficiently marked by tokens. The amount of tokens on pre-place A has no effect on the ability to enable the transition and affects only the kinetics. After firing of the transition, tokens are removed from the pre-place B according to the arc weight and all tokens on pre-place A are deleted; new tokens are produced on place C.

Equal edge

The transition is enabled and may fire if the number of tokens on pre-place A is equal to the corresponding arc weight and place B is sufficiently marked. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A; new tokens are produced on place C. The firing of the transition does not change the amount of tokens on pre-place A.

Modifier edge

The transition is enabled and may fire if pre-place B is sufficiently marked with tokens. The amount of tokens on pre-place A has no effect on the ability to enable the transition and affects only the kinetics. After firing of the transition, tokens are removed from the pre-place B but not from pre-place A; new tokens are produced on place C. The firing of the transition does not change the amount of tokens on pre-place A.
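To make the firing rules above concrete, the following minimal sketch (plain Python, not Snoopy code) models a single transition with pre-places A and B, post-place C and unit arc weights, and shows how the standard, read and inhibitor edge types differ in enabling and firing:

```python
# Minimal sketch of the firing rules described above for a single transition
# with pre-places A and B, post-place C, and unit arc weights throughout.

marking = {"A": 1, "B": 2, "C": 0}

def enabled(marking, kind):
    if kind in ("standard", "read"):
        return marking["A"] >= 1 and marking["B"] >= 1
    if kind == "inhibitor":          # A must hold fewer tokens than the arc weight
        return marking["A"] < 1 and marking["B"] >= 1
    raise ValueError(kind)

def fire(marking, kind):
    m = dict(marking)
    m["B"] -= 1                      # B is always consumed from
    if kind == "standard":
        m["A"] -= 1                  # read/inhibitor edges leave A untouched
    m["C"] += 1                      # produce on the post-place
    return m

if enabled(marking, "standard"):
    print(fire(marking, "standard"))   # {'A': 0, 'B': 1, 'C': 1}
```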

Functions:

BioMassAction(.): Stochastic law of mass action; tokens are interpreted as single molecules.
BioLevelInterpretation(.): Stochastic law of mass action; tokens are interpreted as concentration levels.
ImmediateFiring(.): Refers to immediate transitions.
TimedFiring(.): Refers to deterministic transitions.
FixedTimedFiring Single(.): Refers to deterministic transitions that only fire once after a given time point.
FixedTimedFiring(., ., .): Refers to scheduled transitions.
abs(.): Absolute value.
acos(.): Arc cosine function.
asin(.): Arc sine function.
atan(.): Arc tangent function.
ceil(.): Rounding up.
cos(.): Cosine function.
exp(.): Exponential function.
sin(.): Sine function.
sqrt(.): Square root.
tan(.): Tangent function.
floor(.): Rounding down.
log(.): Natural logarithm with constant e as base.
log10(.): Common logarithm with 10 as base.
pow(.): Exponentiation.
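As an illustration of what a mass-action rate function of the BioMassAction kind computes (a sketch of the usual stochastic mass-action propensity, not Snoopy's actual implementation), the propensity of a transition can be derived from its rate constant, the arc weights of its pre-places and the current marking:

```python
from math import comb

# Illustrative sketch only (not Snoopy's implementation): a stochastic
# mass-action propensity for a transition, in the usual Gillespie sense.
# 'pre' maps each pre-place to its arc weight, 'marking' to its token count,
# and 'c' is the stochastic rate constant supplied as a parameter.

def mass_action_propensity(c, pre, marking):
    a = c
    for place, weight in pre.items():
        a *= comb(marking[place], weight)   # number of distinct reactant combinations
    return a

# Hypothetical reaction 2A + B -> C with c = 0.1, 10 tokens on A and 4 on B
print(mass_action_propensity(0.1, {"A": 2, "B": 1}, {"A": 10, "B": 4}))  # 18.0
```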

Parameters:

Parameters are used to define individual constants for rate or weight functions, but they cannot define the number of tokens on a particular place. A third group of macro elements, coarse parameters, encapsulates parameters: large numbers of parameters need not be visible on the top level and can be categorized through the use of coarse parameters.

Animation mode:

Snoopy allows the user to observe the token flow in animation mode, which is started by pressing F5 or going to View and then Start Anim-Mode. This opens a new window which allows the user to steer the animation. This part of Snoopy is very useful for getting a first impression of the causality of a model and its workings, as it also provides information about the transitions. Playing with the token flow proves worthwhile for understanding the modelled mechanism. The token flow can be animated manually by a single click on a transition. A message box with the message “This transition is not enabled” is displayed when the user tries to fire a transition that is not enabled. Left-clicking on a place adds tokens; right-clicking removes them. Animation of the token flow can also be controlled using the radio buttons on the animation steering panel: step-wise forwards and backwards, or sequentially for as long as at least one transition can be enabled, otherwise the notification “Dead State: There are no more enabled transitions” is displayed on screen.

Simulation Mode

Pressing F6, going to View/Start Simulation, or using the stochastic simulation button on the animation control panel are three ways to perform stochastic simulations of the current model in the active window. This mode simulates the time-dependent dynamic behaviour of the model, expressed as the token flow and the firing frequency of the transitions. The token flow indicates the fluctuating concentration levels, or the discrete number of components, over time. This gives an impression of the time-dependent changes in the model under consideration, which is helpful in understanding the wet-lab system. Several simulation studies can be performed with the model by manipulating the structure and perturbing the initial state and the kinetics. All results can be exported manually or automatically in the standard *.csv format and can be analysed in other mathematical programs.
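As an example of such external analysis (a sketch only; the file name and column layout are assumptions about the exported trace), an exported *.csv file can be read and plotted with standard Python tools:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Sketch of post-processing an exported simulation trace outside Snoopy.
# Assumes the *.csv file has a time column followed by one column per place;
# the file name and column layout here are hypothetical.

trace = pd.read_csv("simulation_trace.csv")
time_col = trace.columns[0]            # first column assumed to be the time axis

for place in trace.columns[1:]:
    plt.plot(trace[time_col], trace[place], label=place)

plt.xlabel("time")
plt.ylabel("average number of tokens")
plt.legend()
plt.show()
```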

Simulation Control:

The simulation control allows selection of main settings and individualities for the simulation. It splits further into four panels:

Configuration Sets: Configuration sets are modified by editing single entries or adding new sets, and by picking the configuration set that is suitable for the simulation run.
Simulation Properties: It includes setting interval start i.e. time point where simulation starts, interval end i.e. time points where simulation ends and output step count i.e. number of time-points that should be displayed in the given interval.
Export Properties: Various automatic export settings are accessible to the *.csv-format.
Start Simulation: It will initiate simulation with the selected settings and properties. Progress of simulation is indicated by the bar and the required time is displayed below.

Viewer/Node Choice:

It facilitates the user by providing choices for displaying simulation results. It is divided into two panels:

Viewer Choice: It lets the user choose between data tables and data plots. Buttons in the panel allow the user to add, edit and delete data tables and data plots. The token flow (places) or the firing frequency (transitions) can be displayed in a data table or data plot.
Place Choice: The user can choose the nodes which should be displayed in the data table or data plot.

Display:

This panel displays the simulation results as a data table or a data plot. If the data table is selected, the token flow for the selected places is presented in a table; some options used for model checking are located at the bottom of the window. If the data plot is chosen, the x-axis displays the time interval and the y-axis indicates the average number of tokens. The view of the plot can be altered via the buttons located below, i.e. compress/stretch x-axis, compress/stretch y-axis, zoom in/out and centre view. A CSV export button allows the user to export the simulation results of the selected places manually, and an image of the current plot can be saved using the print button.

Model Checking Mode:

Snoopy can perform model checking of linear-time properties based on stochastic simulation. A subset of probabilistic linear-time temporal logic (PLTL) is employed to formulate and verify properties, and several properties can be checked at the same time. To perform model checking in Snoopy, the user opens the simulation window and selects the table view. To apply model checking to the simulation traces, the user enters or loads a property, which is then checked against the simulated time-dependent dynamic behaviour. The simulation window offers the following options:

Enter state property: The user can specify a property in the dialogue box; no model checking is performed if it is empty.
Load state property: The user can load a property defined in a text file.
Check state property: Model checking is performed on the basis of the average behaviour of the previous simulation.

The simulation run count specifies the number of simulation traces to which model checking is applied. There are two cases:

Default value of 1 run: The user only learns whether the defined property held or not on that single trace.
Arbitrary number of runs: Multiple simulation runs allow the probability of the defined property to be estimated; higher accuracy calls for a higher number of simulation runs.

The user can set the time interval to which model checking should be applied with the help of interval start and interval end.
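The following sketch illustrates the general idea behind using multiple runs (it mirrors the quantities reported in the log window described below, but is not taken from Snoopy's source): the probability of a property is estimated as the fraction of runs on which it held, together with a normal-approximation confidence interval:

```python
import math

# Illustration only: estimate the probability that a property holds from the
# outcomes of N simulation runs, plus a normal-approximation confidence interval.

def estimate(results, z=1.96):
    """results: list of booleans, one per simulation run (property held or not)."""
    n = len(results)
    p = sum(results) / n                       # Prop: estimated probability
    var = p * (1 - p) / n                      # variance of the estimator (S^2)
    half_width = z * math.sqrt(var)            # half the 95% confidence interval
    return p, var, (max(0.0, p - half_width), min(1.0, p + half_width))

print(estimate([True] * 85 + [False] * 15))    # p = 0.85 with its [a, b] interval
```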

A log window displays the model checking results, which include the following elements:

Formula displays the formula checked during simulation.
Runs indicates the number of simulation runs performed.
Runtime shows the time required for the simulation.
Threads displays the number of threads used for simulation.
Prop indicates the computed probability for the formula.
S^2 displays the variance of the probability estimate.
Confidence Interval indicates the size of the confidence interval.
[a,b] gives the interval for the probability, calculated from the confidence interval.

Sleep Monitoring System Technology

Fully equipped quilt

–Improve your sleep with new media and communication technology

The explosion of information and the advance of technology have pushed our lives into a faster pace than ever before. This fast-paced life increases the productivity of society as a whole but, at the same time, brings health problems to an increasing number of individuals.

Sleep is one of the most important aspects of our daily life, as well as a critical period for repairing the damage done to our bodies during the daytime. However, more and more people are losing sleep due to pressure from the outside world, leading to a vicious circle for their health. The product introduced in this article, a quilt, is devoted to improving the sleep and health of people who have sleep disorders and potential health problems. It makes use of known health indicators and existing technologies to inform users of their current health condition and to help them get a better sleeping experience.

Health Indicator Detector

Of the many things people use every day, the one with the longest period of continuous use is the quilt. The quilt I design takes advantage of this characteristic, making use of skin-contact sensors to detect and monitor the user's health indicators in a quiet and precise way.

One of the basic functions of this part is measuring body temperature. A traditional mercury thermometer will not be used; an infrared thermometer is a better way to go. An infrared thermometer infers temperature from a portion of the thermal radiation emitted by the object being measured, so it can take a reading from a certain distance without contact. By knowing the amount of infrared energy emitted by the object and its emissivity, the object's temperature can often be determined (Infrared Thermometer).

Another basic function for this part is measuring the heartbeat. The pulse corresponds to the heartbeat: each contraction raises the blood pressure and makes the arteries expand, and counting these expansions per minute gives the heart rate. The heartbeat is therefore a vital health parameter directly related to the soundness of the human cardiovascular system (Microcontroller Measures).

A microcontroller together with a fingertip sensor will be used for measuring the heartbeat. Each heartbeat pumps blood through the body and changes the blood volume inside the finger artery; this fluctuation can be picked up by an optical sensing mechanism placed on the fingertip. The microcontroller amplifies this signal and calculates the heart rate from its fluctuations.

(Microcontroller Measures)

Both devices, the heartbeat sensor and the infrared thermometer, will be placed in a Wi-Fi environment, and the data collected from the user will be transmitted to a personal computer. Users can set thresholds for each indicator; if the data exceeds the normal values, the system will warn the user to go to the hospital to check whether they have health problems.
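A minimal sketch of this alert logic is shown below; the normal ranges, field names and the way readings arrive are assumptions made purely for illustration:

```python
# Minimal sketch of the alert logic described above. The normal ranges and the
# way readings arrive are assumptions for illustration only.

NORMAL_RANGES = {
    "body_temperature_c": (36.0, 37.5),   # assumed normal range in Celsius
    "heart_rate_bpm": (50, 100),          # assumed resting range in beats per minute
}

def check_reading(indicator, value):
    low, high = NORMAL_RANGES[indicator]
    if value < low or value > high:
        return f"Warning: {indicator} = {value} is outside {low}-{high}; consider a medical check-up."
    return f"{indicator} = {value} is within the normal range."

print(check_reading("body_temperature_c", 38.2))
print(check_reading("heart_rate_bpm", 72))
```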

Wireless Sleeping Respiration Monitor

To achieve the goal of monitoring the user's respiration while sleeping, a respiration sound sensor is needed in the quilt. In addition, after detecting sleeping hazards such as respiration arrest or sleep apnea, a vibration module hidden inside the quilt will wake up the user to prevent further risk.

The whole system consists of three major parts: a monitoring and data collection device, a wireless communication device and a vibrating wake-up device. The monitoring and data collection device, also known as the respiration sound sensor, senses the user's respiration sound, reduces the noise, amplifies and digitizes the signal and passes it to the wireless communication module. The wireless communication device takes over the collected data, encapsulates it and sends it via either a Wi-Fi or a Bluetooth connection to a device with data-processing power; a smartphone easily serves the purpose. After processing the received data and comparing the result with data in an online database, the data-processing device tells the vibration unit whether it needs to wake up the user. If the vibration module receives the signal to wake the user, it sends pulsed vibrations to the user's body.
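The wake-up decision can be sketched very simply (the threshold, data format and function name below are illustrative assumptions, not the patented design): if no breath sound has been detected for longer than a set interval, the vibration module is triggered:

```python
# Sketch of the wake-up decision described above: if no breath has been detected
# for longer than a threshold, signal the vibration module. The threshold and
# the function name are illustrative assumptions, not the patented design.

APNEA_THRESHOLD_S = 10.0   # assumed: no breath sound for 10 s counts as an event

def should_vibrate(breath_timestamps, now):
    """breath_timestamps: times (in seconds) at which breath sounds were detected."""
    if not breath_timestamps:
        return True
    return (now - breath_timestamps[-1]) > APNEA_THRESHOLD_S

# Breaths detected at t = 0, 4 and 8 s; it is now t = 20 s, so wake the user.
print(should_vibrate([0.0, 4.0, 8.0], 20.0))   # True
```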

The significant advantage is that the sensors can be close to the user's nose and mouth, collecting much better and more precise data without interrupting or hindering the proper sleeping process. This improves the user experience by a remarkable amount compared with head-worn devices with similar functions. In addition, the data is amplified and sent to the user's smartphone, a hand-held device, which increases scalability: more use can be made of the existing sensors by programming applications on those hand-held devices, and they can be kept working properly, or even better, by distributing software/firmware updates over the Internet. What if the computing device were installed in the quilt instead? Users would find it hard to fall asleep because it would be too heavy, and there could be hazards if the computing unit overheated during a period of intensive data processing. Using an external hand-held device therefore reduces the total weight and the potential hazards of the quilt while increasing the processing power and stability (Patent).

Sleep Monitoring System

There is no doubt about the importance of sleep, but because of our fast-paced, high-pressure environment, many of us cannot get a good night's sleep, which threatens our health. With the development of advanced technology, however, we can trace our sleeping status, find problems and solve them, in order to improve our sleep quality.

Many applications of this kind exist on the phone, on the iOS platform, the Android system and Windows Phone. The principle by which smartphones and applications monitor sleep status is similar across platforms: the phone's accelerometer senses actions such as turning over while you sleep, in order to estimate the depth of sleep. Of course, compared with professional equipment, monitoring with mobile phones is not very accurate, but the cost is much lower. However, studies show that the radiation phones emit can affect the quality of our sleep. Following the working principle of the smartphone, an accelerometer can also be placed in the quilt, where it will perceive the user's motion during sleep more precisely. There will also be a small microphone in the quilt, which can record the sounds the user makes while sleeping. As mentioned in the Health Indicator Detector section, the recorder is also connected via Wi-Fi or Bluetooth, and its data is transmitted to the personal computer or to the application on the smartphone.
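The following sketch illustrates actigraphy-style processing of the quilt's accelerometer data (the epoch length, sampling rate and movement threshold are assumptions for illustration only): movement is counted in fixed epochs, and quiet epochs are labeled as deeper sleep:

```python
# Illustrative sketch of actigraphy-style processing for the quilt's
# accelerometer: count movement in fixed epochs and label quiet epochs as
# deeper sleep. Epoch length and threshold are assumptions for illustration.

EPOCH_S = 60          # one-minute epochs
MOVE_THRESHOLD = 0.1  # assumed movement magnitude above which we count "activity"

def label_epochs(samples, sample_rate_hz=10):
    """samples: list of movement magnitudes from the accelerometer."""
    per_epoch = EPOCH_S * sample_rate_hz
    labels = []
    for i in range(0, len(samples), per_epoch):
        epoch = samples[i:i + per_epoch]
        activity = sum(1 for s in epoch if s > MOVE_THRESHOLD)
        labels.append("light/awake" if activity > 5 else "deep sleep")
    return labels

# Seven minutes of data at 10 Hz with a restless minute in the middle:
samples = [0.0] * 1800 + [0.5] * 600 + [0.0] * 1800
print(label_epochs(samples))   # mostly "deep sleep" with one "light/awake" epoch
```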

Besides this, a tiny speaker can be placed with the microphone. Since the system can be connected to the personal computer, the user can play white noise through the speaker, which can be used to mask snoring and other unwanted sounds, leading to deeper, more restful sleep (White Noise). When the system finds that the user has entered deep sleep, it will pause the white noise.

This system can be used by anyone, even with little knowledge of the scientific facts about sleep or biology, to easily track their sleeping status. Reports can also be generated for a day, a week or a month based on the data stored on the personal computing device. This helps users see the trend in their sleep quality, whether it is becoming better or worse. Additionally, the reports can be used when consulting doctors and physicians about what they need to improve their sleep.

Conclusion

This quilt is not a product built on totally new media or communication technology. It is a practical product that combines many existing technologies to provide an innovative service, and all the technologies used in this product have been in use for years and are very mature in the market. However, no existing product provides convenience similar to this quilt. Tossing and turning will no longer be a huge obstacle to falling asleep with the help of white noise, and potential health hazards can be found by the various monitoring systems and data-analysing devices. We can imagine that if this quilt arrives in stores, it could help an enormous group of people with sleeping problems, using media and communication technology to serve people and to help them feel better when the night comes.

Citations

“Infrared Thermometer.” Infrared Thermometer. OMEGA Engineering Inc., 1999. Web. 19 Apr. 2014.

“Microcontroller Measures Heart Rate through Fingertip.” Instructables.com. Instructables, n.d. Web. 19 Apr. 2014.

“White Noise Benefits and Uses.” White Noise Cds.com. N.p., n.d. Web. 19 Apr. 2014.

“Patent CN102860840A – Wireless Sleeping Respiration Monitor.” Google Books. Ed. Yujuan Quan, Qingnan Liu, Lei Wang, and Deming Chen. Google Patents, 09 Jan. 2013. Web. 19 Apr. 2014.

Sixth Sense Technology Introduction

Abstract: ‘Sixth Sense’ is a wearable gesture interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. This technology gives the user a new way of seeing the world, with information at their fingertips, and it has been classified under the category of ‘wearable computing’. The true power of Sixth Sense lies in its potential to connect the real world with the Internet and to overlay the information on the world itself. The key is that Sixth Sense recognizes the objects around you, displaying information automatically and letting you access it in any way you want, in the simplest way possible. This paper gives an introduction to Sixth Sense and makes the reader familiar with a technology that provides the freedom of interacting with the digital world using hand gestures. The Sixth Sense prototype comprises a pocket projector, a mirror, mobile components, color markers and a camera. Sixth Sense technology is all about interacting with the digital world in the most efficient and direct way. Sixth Sense devices are very different from computers, and they will be a new topic for hackers and other people alike. Anyone can get a general idea of Sixth Sense technology by reading this paper.

Keywords: Sixth Sense, wearable computing, Augmented Reality, Gesture Recognition, Computer Vision


1. INTRODUCTION

We have evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses of sight, hearing, smell, taste and touch to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information that can help us make the right decision is not naturally perceivable with our five senses, namely the data, information and knowledge that mankind has accumulated about everything, which is increasingly all available online.

Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper, or digitally to a screen.

Sixth Sense bridges this gap, bringing intangible, digital information out into the tangible world, and allowing us to interact with this information via natural hand gestures. ‘Sixth Sense’ frees information from its confines by seamlessly integrating it with reality, and thus making the entire world your computer.

All of us are aware of the five basic senses: seeing, feeling, smelling, tasting and hearing. But there is also another sense, called the sixth sense. It is basically a connection to something greater than what our physical senses are able to perceive. To a layman, it would be something supernatural. Some might just consider it to be a superstition or something psychological. But the invention of sixth sense technology has shocked the world. Although it is not widely known as of now, the time is not far off when this technology will change our perception of the world.

Fig. 1.1: Six Senses

Sixth Sense is a wearable “gesture based” device that augments the physical world with digital information and lets people use natural hand gestures to interact with that information.

Right now, we use our devices (computers, mobile phones, tablets, etc.) to go onto the Internet and get the information that we want. With Sixth Sense we will use a device no bigger than current cell phones, and probably eventually as small as a button on our shirts, to bring the Internet to us so that we can interact with our world. Sixth Sense will allow us to interact with our world like never before: we can get information on anything we want, from anywhere, within a few moments. We will not only be able to interact with things on a whole new level but also with people. One great part of the device is its ability to scan objects or even people and project information regarding what you are looking at.

1.1 History and Evolution of Sixth Sense Technology

Steve Mann, who built a wearable computer in 1990 while he was a Media Lab student, is considered the father of Sixth Sense; the technology was first implemented as a neck-worn projector and camera system. It was subsequently taken up and implemented by Pranav Mistry, an Indian researcher who has become very famous in recent years. Sixth Sense technology has had only a short history, but it has a long future ahead of it.

1.2 Why choose Sixth Sense Technology

This sixth sense technology provides us with the freedom of interacting with the digital world using hand gestures. The technology has wide application in the field of artificial intelligence, and this methodology can aid in the synthesis of bots that will be able to interact with humans. It enables people to interact with the digital world as if they were interacting in the real world. The Sixth Sense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system [4].

2. CONSTRUCTION AND WORKING

The Sixth Sense prototype comprises a pocket projector, a mirror and a camera contained in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision based techniques. The software program processes the video stream data captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user's fingers. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. Sixth Sense supports multi-touch and multi-user interaction.

Fig. 2.1: Sixth Sense Technology Working
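As an illustration of the marker-tracking step described above (a sketch using the OpenCV library, not Pranav Mistry's actual code; the HSV color range for the marker and the use of OpenCV 4 are assumptions), one colored fingertip marker can be segmented per frame and its centroid passed on to a gesture engine:

```python
import cv2
import numpy as np

# Illustrative sketch of colored-marker tracking (assumes OpenCV 4): threshold
# one fingertip marker in HSV space and report its centroid for each frame.
# The HSV range below is an assumption for a green marker.

LOWER = np.array([40, 80, 80])
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)         # the camera worn as a pendant in the prototype
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            print("marker at", cx, cy)   # a gesture engine would consume these positions
    if cv2.waitKey(1) == 27:             # Esc to quit
        break
cap.release()
```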

3. TECHNOLOGIES THAT ARE RELATED TO SIXTH SENSE DEVICES

3.1. Augmented Reality

Augmented reality is a visualization technology that allows the user to experience virtual content added over the real world in real time. Augmented reality adds graphics, sounds, haptic feedback and smell to the natural world as it exists [3].

3.2. Gesture Recognition

It is a technology aimed at interpreting human gestures with the help of mathematical algorithms. One common gesture recognition technique uses a special type of hand glove which provides information about hand position, orientation and the flex of the fingers [3].

3.3. Computer Vision

Computer Vision is the technology in which machines are able to interpret necessary information from an image. This technology includes various fields like image processing, image analysis and machine vision. It includes certain aspect of artificial intelligence techniques like pattern recognition [3].

3.4. Radio Frequency Identification

Radio Frequency Identification systems transmit the identity of an object wirelessly using radio waves. The main purpose of this technology is to enable the transfer of data via a portable device. It is widely used in fields such as asset tracking, supply chain management, manufacturing and payment systems [3].

4. APPLICATIONS

The Sixth Sense device has a huge number of applications. The following are a few of them:

4.1. Viewing Map:

With the help of a map application the user can call up any map of his or her choice and navigate through it by projecting the map onto any surface. Using thumb and index finger movements the user can zoom in, zoom out or pan the selected map [2].

Fig. 4.1: Viewing Map
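The pinch-to-zoom and pan behaviour described above can be sketched in a few lines: given the thumb and index fingertip positions in two consecutive frames (for example from a marker tracker like the one sketched earlier), the change in their separation gives a zoom factor and the movement of their midpoint gives a pan offset. The function below is a hypothetical illustration, not code from the prototype.

from math import dist

def zoom_and_pan(prev_thumb, prev_index, thumb, index):
    # Points are (x, y) pixel tuples from two consecutive frames.
    prev_spread = dist(prev_thumb, prev_index)
    spread = dist(thumb, index)
    zoom = spread / prev_spread if prev_spread else 1.0   # pinch apart > 1, pinch together < 1

    # Pan follows the movement of the midpoint between the two fingertips.
    prev_mid = ((prev_thumb[0] + prev_index[0]) / 2, (prev_thumb[1] + prev_index[1]) / 2)
    mid = ((thumb[0] + index[0]) / 2, (thumb[1] + index[1]) / 2)
    pan = (mid[0] - prev_mid[0], mid[1] - prev_mid[1])
    return zoom, pan

# Example: the fingers move slightly apart while drifting to the right.
print(zoom_and_pan((100, 200), (160, 200), (95, 205), (175, 205)))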

4.2. Taking Pictures:

Another application of the Sixth Sense device is the implementation of a gestural camera. The camera takes a photo of the scene the user is looking at when it detects the framing gesture. After taking the desired number of photos, the user can project them onto any surface and then use gestures to sort through, organize and resize them [2].

Fig. 4.2: Taking Pictures

4.3. Drawing Application:

The drawing application allows the user to draw on any surface by tracking the movements of the user's index fingertip. The pictures drawn by the user can be stored and displayed again on any other surface, and the user can shuffle through the various pictures and drawings using hand gestures [2].

Fig. 4.3: Drawing Application

4.4. Making Calls:

We can also make calls with the help of the Sixth Sense device. The device projects a keypad onto the palm, and using that virtual keypad we can call anyone [2].

Fig. 4.4: Making Calls

4.5. Interacting with Physical Objects:

The Sixth Sense system also helps us interact with the physical objects we use in a better way. It augments physical objects by projecting additional information about them directly onto them. For example, the gesture of drawing a circle on the user's wrist projects a watch onto the user's hand. Similarly, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper [2].

Fig. 4.5: Watching News

4.6. Flight Updates:

The system recognizes your boarding pass and lets you know whether your flight is on time and whether the gate has changed [2].

Fig. 4.6: Flight Updates

4.7. Other Applications:

Sixth Sense also lets the user draw icons or symbols in the air using the movement of the index finger and recognizes those symbols as interaction instructions. For example, drawing a magnifying-glass symbol takes the user to the map application, while drawing an '@' symbol lets the user check e-mail [2].

5. KEY FEATURES OF SIXTHSENSE

Sixth Sense is a user-friendly interface which integrates digital information into the physical world and its objects, making the entire world your computer.
Sixth Sense does not change human habits; instead it makes computers and other machines adapt to human needs.
It uses hand gestures to interact with digital information and supports multi-touch and multi-user interaction.
It gives direct, real-time access to data from the machine. It is open source and cost effective, and ideas can be mind-mapped anywhere.
It is a gesture-controlled wearable computing device that feeds us relevant information and turns any surface into an interactive display.
It is portable and easy to carry, as it can be worn around the neck.
The device can be used by anyone without even a basic knowledge of a keyboard or mouse, and there is no longer any need to carry a separate camera.
When going on holiday, for example, photos can be captured using mere finger gestures.

CONCLUSION

As this technology emerges, new devices and with them new markets will evolve. It enables one to account for, compute and browse data on any piece of paper we can find around us. Sixth Sense devices are very different from conventional computers, so they will also become a new target for hackers and other attackers. The first priority is therefore to provide security for Sixth Sense applications and devices; many good technologies have come and died because of security threats.

There are also some weaknesses that can reduce the accuracy of the data. One of them is the on-palm phone keypad, which lets the user dial a phone number using a keypad projected on the palm but is prone to recognition errors. Established devices will also remain significant market competitors to Sixth Sense technology, since it still requires the user to wear dedicated hardware.

REFERENCES

http://www.pranavmistry.com/projects/sixthsense/
http://dspace.cusat.ac.in/jspui/bitstream/123456789/2207/1/SIXTH%20SENSE%20TECHNOLOGY.pdf
http://en.wikipedia.org/wiki/SixthSense
http://www.engineersgarage.com/articles/sixth-sense-technology
http://www.ted.com/talks/pranav_mistry_the_thrilling_potential_of_sixthsense_technology.html

Role of Firms in Science and Technology | Essay

What roles do firms play in the generation and diffusion of new scientific and technological knowledge? Illustrate your answer by reference to one or more example.

Introduction:

The differences in the types of organisations, their structures, their goals and perspectives, and the ways they recognise and face challenges can create many opportunities and avenues for producing and distributing new information to the world. Technology and science have worked wonders for almost everyone living on this planet. They have changed the way we live, but they have also introduced new sets of problems and issues which must be strategically addressed.

Firms are already in the forefront of responding to changes and challenges in their environment. They respond to these challenges through strategies that make use of support systems like technology and scientific research.

Today's business and social transactions are supported more and more by technological and scientific innovations and strategies, and knowledge of advanced technologies across the sciences has greatly advanced many careers and business prospects. According to Dorf (2001, p. 39),[1] the purpose of a business firm is to create value for all of its stakeholders. As the firm tries to create new wealth for its shareholders and valuable products and services for its customers, it is already in the process of generating and distributing new sets of information. This includes the generation of new scientific and technological knowledge which will eventually be adopted by society and other businesses as well. A firm then leads its market through effective technical and scientific innovation, sound management of resources, and a solid technological strategy for the success of its business.

Improved technology and increased scientific knowledge help to increase food production, manage resources efficiently, provide faster access to relevant and mission-critical information, and enhance business competitiveness. Technology has the greatest potential to deliver business sustainability and viability through its many opportunities for research and innovation.

While it cannot be denied that today's firms play a very definite and pivotal role in the generation of scientific and technological knowledge, much of their contribution centres on how they formulate strategies to introduce new knowledge into their business functions.

Technology is known to support a great many business and decision-making processes, and technology strategy should be considered a vital part of any strategic planning. Incorporating high-end technology without careful consideration of other organisational issues is a sure formula for failure. The growth of technology has presented managers with a complex variety of alternatives, and many executives and managers use the advent of new technology as an opportunity to reconsider their business operations (Irving and Higgins, 1991).[2] Unfortunately, many still see technology as a panacea for various organisational ills, when in some cases the introduction of technology can actually increase organisational and societal problems.

Firms have a definite role when it comes to the way technology and scientific knowledge is generated and distributed. With their technological and scientific knowledge at hand, they can be technology enhancers, identifiers of new markets, sources of customer exploration, and a gateway for information interchange. However, powerful technologies and scientific knowledge can have the potential for great harm or great good to mankind (O’Brien, 2001).[3]

Competition in the business environment has led to a great deal of advanced technological and scientific research and development. Heavy investment of money and manpower has increased the pressure on firms to compete with each other in introducing new technologies which may alter the political, economic, and social landscape. Gene Amdahl, for example, was interested in starting a new computer firm to compete with International Business Machines (Goodman and Lawless, p. 66).[4] He understood quite clearly that he needed a new technological design, a service and support system, and good library software. He chose to design his computer to be IBM-compatible: regardless of the technological wonders he designed into his new machine, it would run all the existing IBM software. This strategy greatly enhanced his customers' access to new IBM technologies as well as his own. While his company tailored itself to another company's technology, it was able to create and generate a new set of ideas which enhanced not only his own company's image but IBM's as well.

High technology firms who generate a lot of technological and scientific knowledge have been able to identify new markets in the fields of computers, biotechnology, genetic engineering, robotics, and other markets. These firms depend heavily on advanced scientific and engineering knowledge.

Michael Dell, for example, started building personal computers in his University of Texas dorm room at age 19 (Ferrell and Hirt, 1996).[5] His innovative ideas and prototyping techniques made Dell Computer one of the leading PC companies in the world, with sales of $2.9 billion. Because of his company's capacity to use technology in decision-making and to focus on new customer demands and tastes, he was able to identify strategic markets for his PC company all around the world in different contexts. When he shifted to new markets, other industry players followed, and these players created further opportunities in turn. Through the early 1990s, Dell sold directly to the consumer through its toll-free telephone line (Schneider and Perry, 1990).[6] Eventually it expanded its sales to the Internet and has logged a significant percentage of its overall sales online, a strategy that has lowered overhead for the company. The web site is a significant part of Dell's strategy for moving into the new millennium: company officials predict that within the next few years more than half of their sales will come from the web. Supporting such booming online sales is a robust infrastructure of communication devices and networks, Dell servers, and electronic commerce software from Microsoft.

Just as with the globalisation of markets, change due to advances in technology is not new to business marketing. Yet technological change is expected to create ways of marketing that have not existed before (Dwyer and Tanner, 1999).[7] Du Pont, for example, has developed a Rapid Market Assessment technology that enables the company to determine whether a market (usually a country or region previously not served) warrants development (Bob, 1996).[8] The result of the analysis is a customer-focused understanding of the foreign market, independent of the level of economic development of that country or region.

Technology is changing the nature of business-customer interaction. If applied well, benefits increase to both parties. In the area of retail marketing for example, technology can be used to enhance interaction between retailers and customers.

Point-of-sale scanning equipment is widely utilized by supermarkets, department stores, specialty stores, membership clubs, and others, hundreds of thousands of firms in all. Retailers can quickly complete customer transactions, amass sales data, reduce costs, and adjust inventory figures (Berman and Evans, 1998).[9] At some restaurants, when dinner is over, the waiter brings the check together with a sleek box that opens like the check presentation folder used by many restaurants, revealing buttons and a mini screen. The waiter brings it over and disappears discreetly. Following instructions on the screen, you verify the tab, select the payment type (credit card or ATM card), insert the card into a slot, and enter your personal identification number, or PIN. You can then enter a tip, either a specific amount or, if you want the device to figure the tip, a percentage. Completing the transaction triggers a blinking light, which summons the waiter, who removes the device, and the receipt is printed on another terminal (Berman and Evans, 1998).[10] In this manner the restaurant, as a firm, was able to innovate a new way for customers to explore and apply this mechanism, which in turn introduced another set of mechanisms for billing customers in other business settings (such as electricity and water bills). As this illustration shows, innovation on a new technology can be of great help to different industry players.

With signature capture, shoppers sign their names right on a computer screen. At Sears, the cardholder uses a special pen to sign a paper receipt, which becomes the cardholder copy, on top of a pressure-sensitive pad that captures the signature, stores it, and displays it on the checkout terminal screen so a clerk can compare it with the one on the back of the credit card. Sears has a brochure explaining that the procedure is entirely voluntary and that electronic signatures are not stored separately and can be printed only along with the entire sales receipt. Again, innovation centred on serving customers better has generated a whole new set of ideas for other firms to research.

Gateway for Information Interchange

The web, or the Internet, has generated a great deal of research interest. People rely on the web for retrieving and sending information, and it is used for almost all sorts of business and personal transactions, for example in the areas of learning and commerce. Stanford University Library's HighWire Press began in early 1995 with the online production of the weekly Journal of Biological Chemistry (JBC). By March 2001 it was producing 240 online journals giving access to 237,711 articles (Chowdhury and Chowdhury, 2001).[11] The journals focus on science, technology, medicine and other scientific fields. HighWire's strategy for online publishing of scholarly journals is not simply to mount electronic images of printed pages; rather, by adding links among authors, articles and citations, advanced searching techniques, high-resolution images and multimedia, and interactivity, the electronic versions add dimensions to the information provided in printed journals. These dimensions allow readers boundless opportunities to follow up what they have started. The role of firms here is magnified considerably: technical and scientific information can be distributed in the least possible time and to as many people as possible.

In another setting, consider the tremendous savings now that millions of Internet users are able to work from home, or at least dial into the office rather than drive there. Many offices are using the Internet to save office space, materials, and transportation costs, and using email and other electronic documents also saves energy by saving paper. People who are online are able to explore most of the advantages technology and science have to offer them; it gives them the power to filter out what is and what is not useful. Newspapers are also going online. Arguably, of all the technologies, telecommunications and the Internet, along with renewable energy, have the most potential to deliver sustainability, and the vision of integrated optical communication networks is compelling enough for people to understand the underlying role that technology firms play in today's technology-based society. Computer networks and the Internet have been among the biggest technological breakthroughs of the century, and the possibilities for firms to do more to leverage them are still growing.

Conclusion:

Firms play a very important role in the generation of new information and its eventual diffusion into the overall structure of businesses and society. Firms are seen as responsible generators of new ideas which not only help them attain competitive advantage over their rivals but also, often without intending to, improve the lives of people around the globe. Competing firms explore different technical and scientific innovations which match their business strategy, especially in a globalised business setting. The rate at which firms do research and development has spawned the need for further collaboration and cooperation, even among competitors, in order to protect their strategic advantage. The introduction of technological and scientific standards has helped give the introduction of new knowledge a definite direction. Firms also serve as a window to many more opportunities for information exchange and interaction between customers and even their competitors. The Internet has been the biggest contributor to the generation, infusion, and distribution of knowledge. It has also provided many opportunities for firms to invest their time and resources in order to facilitate easier access to their products and services, and it has created new methods of commerce and learning which allow more and more people to get involved even where time and distance present challenges. The driving force behind all of these innovations is change; without it, firms would not be motivated to introduce new sets of ideas and distribute them. Knowledge is empowerment. Acquiring technical and scientific knowledge through the initiatives of different organizations not only increases competition but also improves the political, social, and economic dimensions of society. The generation and diffusion of scientific and technological knowledge would not be possible if firms were not aware of the changes that are constantly shaping their business landscape. Today's challenge is not how technological and scientific information can be generated and distributed; it is how to use this knowledge in the right place and at the right time.

Bibliography
Books

Berman, B and Evans, J (1998), Retail Management: A Strategic Approach, Prentice Hall, New Jersey.

Bob, Donarth (1996), Global Marketing Management: New Challenges Reshape Worldwide Competition.

Chowdhury, G and Chowdhury, S (2001), Information Sources and Searching on the World Wide Web, Library Association Publishing, London.

Dorf, Richard (2001), Technology, Humans, and Society: Towards A Sustainable World, Academic Press, San Diego, California.

Dwyer, F and Tanner, J (1999), Business Marketing: Connecting Strategy, Relationships, and Learning, McGraw-Hill, Singapore.

Ferrell O and Hirt, G (1996), Business: A Changing World, 2nd edn, Times New Mirror Higher Education.

Goodman, R and Lawless, M (1994), Technology and Strategy: Conceptual Models and Diagnostics, Oxford University Press, New York.

Irving, R and Higgins, C (1991), Office Information Systems: Management Issues and Methods, John Wiley and Sons, Ontario.

O’Brien, James (2001), Introduction to Information Systems: Essentials for the Internetworked E-Business, McGraw-Hill, Singapore.

Schneider, G & Perry, J (1990), Electronic Commerce, Thomson Learning, Singapore.

Reasoning in Artificial Intelligence (AI): A Review

1: Introduction

Artificial Intelligence (AI) is one of the developing areas of computer science that aims to design and develop intelligent machines that can demonstrate a high level of resilience in complex decision-making environments (Lopez, 2005[1]). The computations that make it possible to perceive, reason, and act form the basis of effective Artificial Intelligence (National Research Council Staff, 1997[2]) in any computational device (e.g. computers, robots, etc.). This makes it clear that AI in a given environment can be accomplished only by simulating real-world scenarios as logical cases with associated reasoning, so that the computational device can deliver an appropriate decision for the given state of the environment (Lopez, 2005). Reasoning is therefore one of the key elements contributing to the collection of computations that constitute AI. It is also interesting to note that the effectiveness of reasoning in AI has a significant bearing on the ability of the machine to interpret and react to the state of the environment or the problem it is facing (Ruiz et al, 2005[3]). In this report a critical review of the application of reasoning as a component of effective AI is presented. The report first gives a critical overview of the concept of reasoning and its application in Artificial Intelligence programming for the design and development of intelligent computational devices. This is followed by a critical review of selected research material on the chosen topic, before presenting an overview of the topic including progress made to date, key problems faced and future directions.

2: Reasoning in Artificial Intelligence
2.1: About Reasoning

Reasoning is deemed the key logical element that provides the ability for human interaction in a given social environment, as argued by Sincak et al (2004)[4]. The key aspect associated with reasoning is that the perception of a given individual is based on reasons derived from facts about the environment as interpreted by that individual. This makes it clear that in a computational environment involving electronic devices or machines, the ability of the machine to deliver a given reason depends on the extent to which the social environment can be quantified as logical conclusions with the help of a reason or combination of reasons, as argued by Sincak et al (2004).

A major aspect of human reasoning is that it is accompanied by introspection, which allows the individual to interpret the reason through self-observation and the reporting of consciousness. This naturally provides the ability to develop resilience to exceptional situations in the social environment, allowing a non-feeble-minded human to react in one way or another to a situation that is unique in the given environment. It is also critical to appreciate that reasoning in the mathematical perspective mainly corresponds to the extent to which a given environmental state can be interpreted using probability, in order to help predict the reaction or consequence of any given situation through a sequence of actions, as argued by Sincak et al (2004).

The aforementioned corresponds to the case of uncertainty in the environment, which challenges the normal reasoning approach of deriving a specific conclusion or decision. The introspective nature developed in humans and some animals provides the ability to cope with uncertainty in the environment. This adaptive nature of the non-feeble-minded human is the key ingredient that provides the ability to interpret the reasons for a given situation, as opposed to merely following the logical path that results from the reasoning process. Reasoning in AI, which aims to develop these capabilities in electronic devices so that they can perform complex tasks with minimal human intervention, is presented in the next section.

2.2: Reasoning in Artificial Intelligence

Reasoning is deemed to be one of the key components enabling effective AI programs to tackle complex decision-making problems using machines, as argued by Sincak et al (2004). This is because the logical path followed by a program to derive a specific decision depends mainly on the ability of the program to handle exceptions in the process of delivering the decision. Effective use of logical reasoning to define the past, present and future states of a given problem, alongside plausible exception handlers, is therefore the basis for successfully delivering the decision for a given problem in the chosen environment. The key areas of challenge in the case of reasoning are discussed below (National Research Council Staff, 1997).

Adaptive Software – This is the area of computer programming under Artificial Intelligence that faces the major challenge of enabling effective decision-making by machines. The key aspect of adaptive software development is the need to identify the various exceptions and to enable dynamic exception handling based on a set of generic rules, as argued by Yuen et al (2002)[5]. The concepts of fuzzy matching and de-duplication, popular in software tools used for data cleansing in the business environment, follow this idea of adaptive software. This is the case where the ability of the software to decide the best possible outcome for a given situation is programmed using a basic set of directory rules, which are further enhanced using references to a variety of combinations that comprise the database of logical reasons that can be applied to a given situation (Yuen et al, 2002). The concept of fuzzy matching is also deemed a major breakthrough in the implementation of adaptive programming of machines and computing devices in Artificial Intelligence, because the program does not only refer to a set of rules and associated references but also interprets the combination of reasons derived for the given situation prior to arriving at a specific decision. From the aforementioned it is evident that the effective development of adaptive software for an AI device, in order to perform effective decision-making in the given environment, depends mainly on the extent to which the software is able to interpret the reasons prior to deriving the decision (Yuen et al, 2002). This makes it clear that adaptive software programming in artificial intelligence is not only an area of challenge but also one with extensive scope for development, enabling the simulation of complex real-world problems using Artificial Intelligence.
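As a rough illustration of the fuzzy matching and de-duplication idea mentioned above, the toy sketch below scores string similarity with Python's standard difflib module and drops records whose names are near-duplicates of an already-seen record; the field name and the 0.85 threshold are purely illustrative assumptions, not taken from any particular cleansing tool.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Return a similarity score between 0 and 1 for two strings.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def deduplicate(records, threshold=0.85):
    # Greedily keep only records whose 'name' is not a near-duplicate of one already kept.
    unique = []
    for record in records:
        is_duplicate = any(similarity(kept["name"], record["name"]) >= threshold
                           for kept in unique)
        if not is_duplicate:
            unique.append(record)
    return unique

customers = [
    {"name": "Acme Industries Ltd"},
    {"name": "ACME Industries Limited"},   # near-duplicate, should be dropped
    {"name": "Apex Manufacturing"},
]
print(deduplicate(customers))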

It is also critical to appreciate that adaptive software programming in Artificial Intelligence is focused not only on the ability to identify and interpret reasons using a set of rules and combinations of outcomes, but also on demonstrating a degree of introspection. In other words, adaptive software in Artificial Intelligence is expected to make the device a learning machine rather than merely an efficient exception handler, as argued by Yuen et al (2002). This further opens room for exploring knowledge management as part of the AI device, to accomplish a degree of introspection similar to that of a non-feeble-minded human.

Speech Synthesis/Recognition – This area of Artificial Intelligence can be deemed a derivative of adaptive software, whereby the device deciphers the message in the captured speech/audio stream and performs the appropriate task (Yuen et al, 2002). Speech recognition in AI poses the key issues of matching, reasoning to enable access control and decision-making, and exception handling, on top of the traditional issues of noise filtering and isolating the speaker's voice for interpretation. These issues arise in speech recognition, whilst in speech synthesis the major issue is decision-making, since only the decision reached through logical reasoning can produce the appropriate response to be synthesised into speech by the machine.

Speech synthesis, as opposed to speech recognition, depends only on the adaptive nature of the software involved, as argued by Yuen et al (2002). This is because the reasons derived from the interpretation of the captured input, using the decision-making rules and fuzzy-matching combinations, form the basis for the actual synthesis of the sentences that comprise the speech. The grammar of the sentences so framed, and their reproduction, depends heavily on the initial decision of the adaptive software using the logical reasons identified for the given environmental situation. Hence the complexity of speech synthesis and recognition poses a great challenge for effective reasoning in Artificial Intelligence.

Neural Networks – This is deemed to be yet another key challenge faced by Artificial Intelligence programming using reasoning, because neural networks aim to implement the local behaviour observed in the human brain, as argued by Jones (2008)[6]. They involve layers of perception, and a level of complexity that arises through the interaction between those layers alongside decision-making through logical reasoning (Jones, 2008). Computation of decisions using the neural networks strategy is therefore aimed at solving highly complex problems with a greater level of external influence, due to uncertainties that interact with or depend significantly upon one another. The adaptive software approach to developing reasoned decision-making in machines thus forms the basis for neural networks with a significant level of complexity and interdependency, as argued by Pfeifer and Scheier (2001).

The Single Layer Perceptrons (SLPs) discussed by Jones (2008), and the representation of Boolean expressions using SLPs, further make it clear that the effective deployment of neural networks can help simulate complex problems and also provide the ability to develop resilience within the machine. The learning capability, and the extent to which knowledge management can be incorporated as a component of the AI machine, can be defined successfully through identification and simulation of the SLPs and their interaction with each other in a given problem environment (Jones, 2008).
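To make the single-layer perceptron idea concrete, the toy sketch below, which is an illustrative example rather than one taken from Jones (2008), trains a single perceptron with the classic perceptron learning rule to represent the Boolean AND expression; the learning rate and the number of passes are arbitrary assumptions.

# Single-layer perceptron learning Boolean AND (illustrative sketch).
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                       # truth table for AND

w = [0.0, 0.0]                               # one weight per input
b = 0.0                                      # bias
lr = 0.1                                     # learning rate (arbitrary)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                          # a few passes suffice for a separable function
    for x, t in zip(inputs, targets):
        error = t - predict(x)
        w[0] += lr * error * x[0]            # perceptron learning rule
        w[1] += lr * error * x[1]
        b    += lr * error

print([predict(x) for x in inputs])          # expected output: [0, 0, 0, 1]

A single-layer perceptron can only represent linearly separable expressions such as AND and OR; XOR requires the multi-layer networks discussed next.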

The case of neural networks also opens the possibility of handling multi-layer perceptrons as part of adaptive software programming, by programming each layer independently before enabling interaction between the layers as part of the reasoning for decision-making (Jones, 2008). The key influence on this is the ability of the programmers to identify the key input and output components for generating the reasons that facilitate the decision-making.

The backpropagation, or backward error propagation, algorithm deployed in neural networks is a salient feature that helps a program learn from its mistakes and errors, as argued by Jones (2008). The backpropagation algorithm in multi-layer networks is one of the major areas where the adaptive capabilities of an AI application can be strengthened to reflect the real-world problem-solving skills of the non-feeble-minded human (Jones, 2008).
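The sketch below gives a minimal feel for backward error propagation: a small two-layer network trained on XOR using plain NumPy. It is an illustrative toy under assumed hyperparameters (hidden size, learning rate, epoch count, random seed), not a description of any system discussed in the papers reviewed here.

import numpy as np

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))    # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))    # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # outputs should approach [0, 1, 1, 0]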

From the aforementioned it is clear that neural-network implementations of AI applications can be achieved to a sustainable level using the backpropagation error-correction technique. This self-correcting and learning system is one of the major elements that can help implement the simulation of complex problems using AI applications. The case of reasoning discussed earlier, in the light of neural networks, shows that effective use of the layer-based approach to simulate problems and allow for interaction will help achieve reliable AI application development methodologies.

The discussion presented also reveals that reasoning is one of the major elements that can help simulate real-world problems using computers or robotics regardless of the complexity of the problems.

2.3: Issues in the philosophy of Artificial Intelligence

The first and foremost issue faced when using AI to simulate complex real-world problems is the need to replicate the real-world environment in the computer, or artificial, world so that the device can compute the reasons and arrive at a decision. This is because the simulation process involved in replicating the environment of the real-world problem cannot always account for exceptions that arise from unique human behaviour in the interaction process (Jones, 2008). The lack of this facility, and the fact that the environment so created cannot alter itself fundamentally, apart from being altered by changes in the state of the entities interacting within the simulated environment, is a major hurdle for effective AI application development.

Apart from replicating the real-world environment, the issue faced by AI programmers is that the reasoning processes, and the exhaustiveness of the reasoning, are limited to the knowledge and skills of the analysts involved. The process of reasoning, which depends upon a non-feeble-minded human's response to a given real-world problem, varies from one individual to another. Hence only the fundamental logical reasons can be simulated in the AI application, while the complex derivation of combinations of reasons, which depends on the individual, cannot be replicated effectively in a computer, as argued by Lopez (2005).

Finally, reasoning in Artificial Intelligence is expected to provide a mathematical combination that delivers the desired results, which cannot be accomplished in many cases owing to the uniqueness of the decision made by the non-feeble-minded individual involved. This poses a great challenge to the successful implementation of AI in computers and robotics, especially for complex problems that have various possible outcomes to choose from.

3: Critical Summary of Research
3.1: Paper 1 – Programs with Common Sense by Dr McCarthy

The rather ambitious paper presented by Dr McCarthy aims to describe an AI application that can help overcome the issues in speech recognition and logical reasoning that pose significant hurdles to logical reasoning in AI application development. However, the approach of delivering this in the form of an 'advice taker' is a rather feeble approach to the AI representation of a solution to a problem of far greater magnitude. Even though the paper aims to provide an Artificial Intelligence application for verbal reasoning processes that are simple in nature, the interpretation of verbal reasoning in the light of a given problem relative to an environment is not a simple component to simulate with ease prior to achieving the desired outcome, as discussed in section 2.

"One will be able to assume that the advice taker will have available to it a fairly wide class of immediate logical consequences of anything it is told and its previous knowledge" (Dr McCarthy, Pg 2). This statement by the author suggests that the advice taker program proposed by Dr McCarthy is intended to deliver an AI application using knowledge management as a core component of logical reasoning, since the statement implies that the advice taker will deliver its decisions through access to a wide range of immediate logical consequences of anything it is told together with its previous knowledge. This makes it clear that the advice taker is not an unworkable approach, since the knowledge management strategy for logical reasoning is a component under debate as well as development across a wide range of scientific problems simulated using AI. The Two Stage Fuzzy Clustering based on knowledge discovery presented by Qain in Da (2006)[7] is a classical example of this. It is also interesting to note that the knowledge management aspect of artificial intelligence programming depends mainly on the speed of access to, and processing of, the information needed to deliver the appropriate decision for the given problem (Yuen et al, 2002). A classical example would be the use of fuzzy matching for validation or suggestion-list generation in an Online Transaction Processing (OLTP) application on a real-time basis. This is the scenario where a portion of the data provided by the user is interpreted using fuzzy matching to arrive at a set of concrete choices for the user to choose from (Jones, 2008). The process of choosing the appropriate option from the given suggestion list by the individual user is the component being replaced by Artificial Intelligence in machines that choose the best fit for the given problem. This is evident in the advice taker program, which aims to provide a means of responding to the verbal reasoning processes of the day-to-day life of a non-feeble-minded individual.

The author's objective 'to make programs that learn from their experience as effectively as humans do' makes it clear that the knowledge management approach assumes the program can use database-style storage to hold and access its knowledge and previous experiences as part of the process. The advice taker may therefore be a viable option if the processing speed required for storing and retrieving information from a database of such magnitude, which will grow in size at an exponential rate, is made available to the AI application. This could be achieved through the use of grid computing technology and other processing capabilities, given the availability of electronic components at affordable prices on the market. The major issue, however, is the design of such an application and the logical reasoning processes for retrieving the information needed to arrive at a decision for a given problem. From the discussion presented in section 2 it is evident that greater complexity in the logical reasoning results in a higher level of computation to account for external variants before a decision appropriate to the given problem can be provided. This cannot be accomplished without the ability to process the existing logical reasons from the application's knowledge base. Hence the processing speed and the efficiency of computation, in terms of both the architecture and the software capability, are questions that must be addressed to implement such a system.

Although the advice taker is viable from a hardware architecture perspective, the hurdle is the software component, which must be capable of delivering the level of abstraction discussed by the author. The ability to change the behaviour of the system merely by giving it verbal commands is the main challenge faced by AI application developers, because it can be achieved only by effective use of the speech recognition and logical reasoning already available to the software, so that a new logical reason can be incorporated as an improvement or correction to the application's existing set-up. This approach is the major hurdle, and it also poses the challenge of identifying which speech patterns are to be treated as corrective commands as opposed to the declarative statements the user provides as information to the application. From the above arguments it can be concluded that the author's statement, "If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent this abstraction in some relatively simple way", is not a task that is easily realisable. It is also necessary to note that abstractions the user can recognise can be recognised by an AI application only if the application already has a set of reasons, or room to learn new reasons from existing ones, prior to decision-making. This can be accomplished only through complex algorithms, including the error propagation algorithms discussed earlier, so realising the advice taker's capability to represent any abstraction in a relatively simple way is far-fetched without the appropriate implementation of self-corrective and learning algorithms. Learning not only by capturing the application's previous actions in similar scenarios but also by generating new logical reasons from the information provided by users is an aspect of AI that is still under development, yet it is a necessary ingredient for the advice taker. However, considering the timeline of Dr McCarthy's research and the developments since, one can say that AI application development has reached a much higher level of interpreting information from the user and providing an appropriate decision using the logical reasoning approach. The author's argument that, for a machine to learn arbitrary behaviour, one should simulate the possible arbitrary behaviours and try them out is a method extensively used in twenty-first-century implementations of artificial intelligence for computers and robotics: the knowledge developed in machines programmed using AI comes mainly from arbitrary behaviours simulated in advance, with their results loaded into the machine as logical reasons for the AI application to consult when faced with a given problem.

From the author's arguments, the five features necessary for an AI application remain valid in the current AI development environment, although the ability of a system to create subroutines that can be included in procedures as units is still a complex task; the required processor speed and hardware architecture are now more of a problem for developers than the actual development of such a system. The author's statement that 'In order for a program to be capable of learning something it must first be capable of being told it' describes one of the many components of AI application development that has seen tremendous progress since the dawn of the twenty-first century (Jones, 2008). The multiple-layer processing strategy used in current AI development to address complex real-world problems, with influential variants in both the input provided and the output, is synonymous with the above statement by Dr McCarthy.

The neural networks for adaptive behaviour presented in great detail by Pfeifer and Scheier (2001)[8] further justify the aforementioned. This also opens room for discussion on the extent to which the advice taker could learn from experience through the use of neural networks as an adaptive-behaviour component for programming robots and other devices facing complex real-world problems. This is the kind of adaptive behaviour represented by the advice taker that Dr McCarthy described nearly half a century ago. Using neural networks to take comments in the form of sentences (imperative or declarative) is plausible with the adaptive-behaviour strategy described above.

Finally, the construction of the advice taker described by the author can be met in the current AI application development environment, although achieving it would have been an enormous challenge at the time the paper was published. The advice taker could now be built using a combination of computers and robotics, or either one as the sole operating environment, depending on the delivery scope of the application and its operational environment. Some of the hurdles faced would be speech recognition and the ability to distinguish imperative sentences from declarative sentences. A second issue is the scope of the application, since simulating the various instances needed to generate the knowledge database is plausible only within the defined scope of the application's target environment, as opposed to the non-feeble human mind that can interact with multiple environments with ease. The multiple-layer neural networks approach may help tackle this problem only to a certain level, since the ability to distinguish between different environments formed as layers is not easily achievable without knowledge of their interpretation stored within the system. Finally, a self-corrective AI system is plausible in the twenty-first century, but a self-learning system using the logical reasons provided is still scarce and requires a greater level of design resilience to account for input and output variants. The stimulus-response forms described by the author are realisable using a multiple-layer neural network implementation, with the limitation that the scope of the advice taker is restricted to a specific problem or set of problems. The adaptive behaviour simulated using the neural networks mentioned earlier justifies the ability to achieve this.

3.2: Paper 2 – A Logic for Default Reasoning

Default reasoning in twenty-first-century AI applications is one of the major elements that allow systems to keep functioning without terminating unexpectedly when an exception arises from a combination of logic, as argued by Pfeifer and Scheier (2001). Effective use of default reasoning in current AI development aims to provide a default reason when the list of simulated reasons and rule combinations, however exhaustive, does not cover the case at hand. However, the definition of 'exhaustive', or the perception of an exhaustive list for development in a given environment, is limited by the number of simulations the developers can build at design time and by the adaptive capabilities of the AI system after implementation (Pfeifer and Scheier, 2001). Effective use of default reasoning in AI application development can therefore be achieved only by handling the wide variety of exceptional conditions that arise in the normal operating environment of the problem being simulated (Pfeifer and Scheier, 2001). In the light of the above arguments, the author's assertion that default reasons are beliefs which may well be modified or rejected by subsequent observations holds true in the current AI development environment.

The default reasoning strategy described by the author is deemed a critical component of AI application development mainly because default reasons are aimed not only at preventing unhandled exceptions leading to abnormal termination of the program but also at supporting the learning-from-experience strategy implemented within the application. The learning from experience described in section 2, as well as the discussion presented in section 3.1, reveals that assigning a default reason in an adaptive AI application provides room for identifying the exceptions that occur in the course of solving problems, thus capturing new exceptions that can replace the existing default value. At the same time, where the adaptive behaviour of the system is not effective, heavy reliance on the default reasoning strategy limits the learning capabilities of the application, even though it prevents abnormal termination of the system.

The logical representation of exceptions and defaults, and the author's interpretation of the phrase 'in the absence of any information to the contrary' as 'consistent to assume', justifies the aforementioned. It is further evident from the author's arguments that creating a default reason and implementing it in a neural network as a set of logical reasons is more complex than the typical case-wise conditional analysis of establishing whether a given condition holds for the situation at hand. Another interesting factor is that the definition of the conditions must allow room for partial success, since the typical logical outcomes of success or failure do not always apply to AI application problems. Hence it is necessary to ensure that the application can accommodate partial success as well as arrive at a concrete answer to the given problem in order to generate an appropriate decision. The discussion of the non-monotonic character of the application defines the ability to formulate the condition for default reasoning effectively, rather than merely defaulting because the system fails to accommodate changes in the environment, as argued by Pfeifer and Scheier (2001). Carbonell (1980)[9] further argues that type hierarchies and their influence on the AI system have a significant bearing on the default reasoning strategies defined for a given AI application. This is because introducing type hierarchies allows the application not only to interpret the problem against the set of rules and reference data stored as reasons, but also to place it within the hierarchy in order to decide whether a default reason can be applied to the given problem. Carbonell's (1980) arguments on single-type and multi-type inclusion, with either strict or non-strict partitioning, justify this. Effective implementation of a type hierarchy in a logical reasoning environment also gives the AI application a greater level of granularity in defining and interpreting the reasons pertaining to a given problem (Pfeifer and Scheier, 2001). It is this state of the AI application that can help achieve a significant level of independence and the ability to interact effectively in the environment with minimal human intervention. The discussion of inheritance mechanisms presented by Carbonell (1980), alongside the implementation of inheritance properties as the basis for twenty-first-century AI systems (Pfeifer and Scheier, 2001), further justifies the need for default reasoning as an interactive component, as opposed to a problem-solving constant used merely to prevent abnormal termination.
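As a loose illustration of default reasoning as a fallback rather than a failure, the toy sketch below (an invented example, not drawn from the paper under review) walks an ordered rule list and, when no rule's condition holds for the observed facts, returns a revisable default conclusion; the bird/penguin rules and the facts are purely illustrative.

# Toy default-reasoning engine: ordered rules with a revisable fallback conclusion.
rules = [
    (lambda facts: facts.get("is_penguin"), "cannot fly"),   # specific exception first
    (lambda facts: facts.get("is_bird"),    "can fly"),      # typical (default-like) case
]
DEFAULT = "flight ability unknown"   # default belief, open to revision by later observations

def conclude(facts):
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return DEFAULT    # no rule fired: fall back instead of terminating abnormally

print(conclude({"is_bird": True}))                       # can fly
print(conclude({"is_bird": True, "is_penguin": True}))   # cannot fly
print(conclude({"is_fish": True}))                       # flight ability unknown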

Rapid developments in technology

Global trends and technological development and their effect on strategy and technology on organisations, with a focus on the Sony Corporation.

Abstract

In recent years there have been rapid developments in technology which have led to the opening up of a global market. This has brought both opportunities and challenges to enterprises. Enterprises that want to operate globally have to plan appropriate business strategies, and when formulating these strategies they have to consider both the domestic and the global situation of the enterprise. This study examines the effect of technological progress and global change, with a particular focus on how they have affected the Sony Corporation. There is a discussion of Sony's business strategies and their strong points and shortcomings. The study ends with suggestions as to how Sony could resolve some of its recent problems.

Introduction

In recent years the phenomenon of globalization has taken hold. This has come about because of rapid progress in technology and communications. The world has become one marketplace, and goods and services which were available in only one place in the past can now be bought almost anywhere in the world. This has many advantages for industries, as it has expanded their market, but it has also brought many challenges. Among the challenges which must be dealt with by companies wishing to enter the global market are tariffs and international competition, particularly from newly industrializing countries (NICs) such as Malaysia and China. This has led many enterprises to formulate global strategies, and many of them have achieved success in the global market. However, to succeed in the global market it is not sufficient to have good global strategies; it is also necessary to be able to use these strategies in a balanced manner. The domestic market and the local culture are key elements which must be carefully taken into account in global strategies.

Many enterprises look to the example of Japanese companies when determining their global strategies, as it is generally considered that their global strategies have been very successful and have permitted them to enter and succeed in many international markets.

The principal focus of this study will be the Sony Corporation. There will be a discussion of Sony’s management of new technology and globalization. Examples will be given of Sony’s global strategies, and the advantages and disadvantages they have encountered due to these strategies will be presented and discussed.

Globalization

Every firm should understand the implications of globalization in order to develop a global strategy successfully. The term "globalization" signifies the increased worldwide mobility of goods, services, manpower and technology. Globalization may be described as a process by which countries all over the world are joined in a worldwide interdependent community. This process is driven by a combination of economic, technological, socio-cultural and political factors.

Raskin (2002) defined globalization as the worldwide integration of economic, cultural, political, religious, and social systems. He added that globalization, through the increasing integration of economies and lifestyles worldwide, leads to similarities in production and consumption patterns, and hence cultural homogenization. From an economic perspective, globalization signifies the convergence of prices, products, wages, rates of interest and profits towards the standards of developed countries (Ismail, 2003). Similarly, Theodore (1983) argued that the main factors driving economic globalization are the movement of the labour force; international trade; the movement of capital; the integration of financial markets; cross-border transactions; and the free movement of international capital.

Basic components of globalization are the globalization of markets and the globalization of production. The former signifies a move away from a system in which national markets are separate entities, divided by trade barriers and barriers of distance, time and culture, towards the merging of national markets into a single global market. The latter, globalization of production, refers to a tendency by individual companies to spread their production processes over various locations around the world in order to benefit from differences in cost and quality of elements of production (Hill, 2007).

Drivers of globalization

The principal driving forces that facilitate or support the extension of globalization are the following.

Advances in transportation: A reduction in the cost of transporting goods and services from country to country helps bring prices in the country of manufacture nearer to prices in the export market. Developments in transport technology have led to a reduction in the cost of transport as well as to improvements in the speed and reliability of transporting both goods and people. This has made it cost-effective to access new and expanding markets, enabling companies to extend their business further than would have been feasible in the past.

Technological advances: The huge reduction in the cost of transmitting and communicating information in recent years has played a vital role in the global growth of enterprises. This phenomenon has been called “the death of distance”, and is particularly noticeable in the growth of trade in knowledge products through the Internet.

De-regulation of financial markets: The process of the de-regulation of financial markets has led to the abolition of capital controls in many countries. Capital markets have opened up in both developed and developing countries, facilitating foreign direct investment and encouraging the flow of money across national borders.

Avoidance of import protection: Many enterprises seek to avoid the tariff and non-tariff barriers imposed by regional trading blocs in order to gain more competitive access to rapidly-growing economies such as those in the emerging markets.

Economies of scale: Many economists take the view that there has been a rise in the estimated minimum efficient scale (MES) related to particular industries. Technological changes, innovation and invention in various markets have been factors contributing to this increase. An increase in the MES means that the domestic market may be considered as not being large enough for the selling needs of these industries, making expansion into overseas markets essential.

The effect of globalization on international business

In recent years, companies have been required to deal with business issues in an international context due to the move towards globalization and internationalization as well as the nature of competition. The principal aspects of global business environments are the following.

The forces of globalization

Every aspect of the global business environment is affected by the drivers of globalization. Although globalization increases business opportunities, it also leads to an increase in competition. Companies must be aware of the basic and often sweeping changes in both society and commerce resulting from globalization (Wild, Wild and Han, 2008).

National business environment

Although globalization has initiated a process of homogenization among the different cultures, political systems, economic systems, legal systems, and levels of economic development of different countries, many of these differences remain marked and enduring. Any enterprise wishing to expand overseas must be aware of these differences, and be able to formulate and implement appropriate policies and strategies to deal with them successfully (Hill, 2006).

International business environment

The international business environment has both a direct and an indirect effect on how firms carry out their operations. As can be seen from the long-term movement towards less rigid national borders, no business can remain entirely isolated from events in the international business environment. As globalization leads to the increasing interrelation of flows of trade and investment, companies are required to seek production bases and new markets at the same time. Firms must monitor the international business environment closely to determine the impact it may have on their business activities (Wild, Wild and Han, 2008).

Management of international companies

The management of a completely domestic firm is not at all the same as the management of a transnational one, as market rules differ and firms must take these differences into account. Thus, it is national business environments which define the context of managing an international firm (Wild, Wild and Han, 2008).

Competitive Advantage in the Global Market

In the global marketplace, it is vital for companies to sustain competitive advantage. The term competitive advantage was first used by Michael Porter of the Harvard Business School in the U.S.A. Essentially, it refers to the position a company holds in relation to its competitors in the same industry. Firms seek to obtain a competitive advantage and then to sustain it. According to Porter (1998), there are three ways in which a firm can do this. The first is cost leadership, which means that a firm has a cost advantage if it can offer the same goods or services as its competitors, but at a lower cost. The second is differentiation, which refers to a company offering better goods or services than its competitors for the same price; such a company can then become a leader in its industry. The third is focus, which means that a company concentrates on a narrow part of the market, known as a market niche, to obtain competitive advantage; within that niche it may focus either on cost or on differentiation (Porter, 1998). However, it is not easy for a firm to gain competitive advantage, and it is even more difficult to keep it (Passemard and Kleiner, 2000). If a company has a differentiation advantage, another company will soon discover how to make the same product with the same quality; if a company has a cost advantage, other companies will look for ways to make their products equally cheap (ibid).

However, several factors contribute to a firm obtaining competitive advantage. One of these is having good resources. Another is having a skilled workforce. Governments can also affect firms, as taxes vary considerably from country to country and some governments may offer tax incentives or subsidies to companies (Passemard and Kleiner, 2000).

The advent of globalization has provided companies with markets all over the world. This has offered many opportunities to expand, but it has also confronted them with challenges. According to Ari (2008), globalization is “a process of increasing interconnectedness, integration and interdependence among not just economies but also societies, cultures and political institutions”. He adds that a result of globalisation is that “the borders between countries lose their significance and can no longer deter trade and communication”. In business and economic terms, globalization means the liberalisation of trade and the creation of world markets (ibid). However, it also means that global industries are competing with all other industries in the world.

There are many strategies industries can use to obtain and keep competitive advantage in the global market. According to Porter (1998), companies should base their strategy on a thorough analysis of the structure of their industry; whether competing nationally or internationally, there are five forces that they should consider carefully, as follows:

The threat from new firms in their industry.
The threat of products that could replace their products.
The bargaining power of suppliers.
The bargaining power of customers.
Competition between companies in the same sector.

Segal-Horn (1996) points out that companies must be very careful when planning a global strategy, because some strategies which are effective in one country are not effective in another. Companies have to decide whether to have a single product and marketing strategy for every country or to adapt their strategy for different countries. Adaptation is more necessary for some industries than for others. For example, requirements for steel are more or less the same globally, but there will be large differences for consumer products and food and drink. Companies have to consider this very carefully. For example, if they can use the same advertisement all over the world it is much cheaper for them, but the advertisement may not be effective in some countries, so they would lose money (ibid). To devise such a strategy it is necessary for companies to have very good information about the country in which they want to sell their products, which is called market intelligence (ibid). They have to be careful not to miss the differentiation advantage in any country (ibid). To obtain such information, they must do a great deal of market research. Many companies find it useful to form a joint venture with a local company in the target country, because that company already has good information about and expertise in the market there.

De Toni et al (2008) state that “In global industries, competitive advantage derives in large part from the integration and co-ordination on an international scale of various activities”. According to Ward et al (1990) companies in a global market should have five competitive priorities, which are cost; delivery performance (dependability and speed); quality; flexibility (product mix and volume); and innovativeness.

If companies are looking for cost advantage, globalization can bring them many benefits. This is because they can choose to buy their supplies from the cheapest supplier in any country in the world and are not limited to suppliers in their own country, as they were before globalization facilitated communication and transport (Ari, 2008). In addition, they can choose to produce their products in a country where labour costs are lower than in their own (ibid). Moreover, they can sell their products through the Internet and reach millions of customers who were impossible for them to reach in the past.

Sony Corporation Profile

Sony was founded in Japan just after the Second World War by Ibuka and Morita and was known initially as the Tokyo Telecommunications Engineering Company. At first their business consisted of radio repairs and manufacturing voltmeters in small quantities. However, Ibuka and Morita were interested in innovative electronics products and were also aware of the importance of international markets. They developed Sony into an international brand, expanding their business first into the U.S.A. and then into Europe. The company’s name was changed to Sony Corporation in 1958.

Currently, the Sony Corporation employs more than 150,000 people worldwide. It is one of the largest media conglomerates in the world and has six operating divisions: electronics, games, music, films, financial services and miscellaneous. Sony Electronics is one of the world’s foremost makers of electronic products for both the business and individual consumer markets, while its games division produces, among other products, the PlayStation, and its music division is the second largest such company in the world. Sony’s film division produces and distributes films for the cinema as well as for TV and computers, and its financial services segment includes savings and loans. Under the miscellaneous division, Sony is involved in advertising and Internet-related business.

For the financial year 2007-2008, Sony reported combined annual sales of ¥8,871.4 billion with a net income of ¥369.4 billion.

Historical background

The Sony Corporation has long been in the forefront of technological innovation and has devoted a considerable portion of its budget to research and development (R&D) in order to obtain and keep its competitive advantage. Some of Sony’s main developments were the following:

Sony developed a prototype magnetic tape recorder in 1949 and introduced paper-based recording tape a year later. In 1955, the company introduced Japan’s first transistor radio and was listed on the Tokyo Stock Exchange. The Sony Corporation of America (SONAM) was subsequently set up in the U.S.A. and the world’s first direct-view portable TV was introduced in 1960. Also in that year, Sony Overseas S.A. was set up in Switzerland, while a year later Sony became the first Japanese company to offer shares on the New York Stock Exchange. Further technological innovations followed throughout the 1960s, including the world’s smallest and lightest transistor television and the Trinitron colour television. Since then, the Sony Corporation has developed and produced the world’s first personal cassette player, the Sony Walkman, introduced in 1979, and the world’s first CD player, launched in 1982. More recent innovations include the home-use PC VAIO in 1997, a Blu-ray Disc drive notebook PC in 2006 and the OLED television in 2007. The Sony Corporation also expanded into the mobile telecommunications business in 2001 with the establishment of Sony Ericsson Mobile Communications, while a year later it acquired one of its rival companies, Aiwa, through a merger.

Sony’s Global Strategies
The World Marketplace

In the 1950s Japanese products suffered from a poor reputation. In an effort to overturn this, one of the company’s founders, Mr. Morita, travelled to the United States to learn from companies there, with a view to introducing his company’s products to the American market and beyond. In 1958, having obtained the licensing rights to the transistor patent from the U.S. company AT&T, the company developed the world’s smallest transistor radio, which it launched in both Japan and the U.S.A. It was at this point that the decision was taken to change the company’s name to Sony, as it was short, easy to pronounce and memorable. The intention was to make Sony an internationally recognised brand, and in this they have succeeded: according to Richard (2002), Sony has become one of the most widely recognized brands in the world.

Global marketing and operations

According to Kikkawa (1995), only nine major Japanese companies succeeded in establishing themselves globally: Sony, Toyota, Honda, Nippon Steel, Toray, Teijin, Sumitomo Chemical, Shin-Etsu Chemical, and Matsushita. Kikkawa argued that these companies succeeded in the international marketplace by supplying products globally and/or carrying out global operations. Sony’s products have been developed to fulfil the requirements of consumers worldwide; therefore, the corporation can offer the same products all over the world. One instance of this is the Sony PlayStation, which appeals to consumers in every country in the world. In its ability to anticipate and fulfil the requirements of consumers, Sony has gained an advantage over its rivals.

The strategy of innovation

Masaru Ibuka, one of the founders of the Sony Corporation, stated that the key to Sony’s success was “never to follow the others”. In effect, the company’s central strategic advantage in its global strategy has always been continual innovation.

Global expansion and market selection

As far as global expansion is concerned, Sony has always given careful consideration to operating in markets it considered to be important and where it had reason to believe the company’s products would be most in demand (Richard, 2002). This led to the initial decision to expand first to the United States, where the company could market its products while at the same time learning from U.S. technology. The rationale was that it would be easier to expand to other markets once a strong brand name had been established in the United States. This in fact proved to be the case, and expansion to European markets soon followed, as mentioned previously.

Advantages of Global Strategy
Reducing costs

Sony has used several elements of global strategy to its advantage. For instance, every Sony factory is able to produce at full capacity because Sony products are sold all over the world; this results in a reduction in production costs. In addition, although Sony has numerous product lines, they are standard worldwide. This means that Sony does not have the expense of producing several versions of a single product to suit various markets.

Worldwide recognition

As Sony’s products are known, sold and serviced all over the world, brand recognition among consumers is extremely high. This results in increased sales, as consumers feel secure about purchasing Sony products.

Enhancing competitive advantage

In addition, in recent years Sony has been an enthusiastic participant in the Sustainable Energy Europe Campaign, making efforts to produce energy-efficient products. The corporation is also involved in social and environmental concerns through its active and high-profile Corporate Social Responsibility (CSR) programme. These activities have contributed greatly to Sony’s ability to increase its competitive advantage over its rivals.

Sony’s CSR programme

Sony developed its Corporate Social Responsibility (CSR) programme in the awareness that the corporation’s business has direct and indirect effects on society and the environment in which that business is conducted. The programme is concerned with the interests of all the corporation’s stakeholders, such as shareholders, customers, employees, suppliers, business partners, and local communities. This has contributed to the improvement of Sony’s corporate value.

The European Commission awarded Sony a Sustainable Energy Europe Award in early 2007, in acknowledgement of Sony’s efforts towards increasing the energy efficiency of its products and its participation in the Sustainable Energy Europe Campaign. By 2007, Sony had modified all its TV sets to consume less energy than the market average. This was a result of its research and development and led to Sony TV sets increasing their market share. In this way, consumers can be satisfied that their television viewing consumes considerably less energy than previously, other stakeholders such as shareholders and suppliers are satisfied by the increase in sales of Sony TVs, and electricity consumption also decreases.

Another element in Sony’s CSR programme is the improvement of its system for employees to take leave to look after their children. Sony modified this system in the spring of 2007, with the aim of establishing a working environment in which taking child care leave was facilitated. It also attempted to encourage fathers to become more involved in caring for their children. This modification has led to an enhancement of the work-home life balance of Sony employees.

It can be seen from these examples that Sony has made use of the advantages of globalization in its CSR programme to achieve a competitive advantage over its rivals.

Disadvantages of Global Strategy

While global strategy offers many advantages for international enterprises, it also brings with it certain disadvantages. These consist mainly of costs related to greater coordination, reporting requirements, and added staff. In addition, international enterprises must be careful to avoid the pitfall of allowing over-centralization to reduce the quality of management in individual countries, as this can damage the motivation and drive of local employees. There is also a risk inherent in offering standardised products, as such products may prove to be less appropriate in some countries than in others. Similarly, the use of standardised marketing strategies may not always be successful, as, without cultural adaptation, certain strategies may be inappropriate in specific countries.

Finally, the over-use of global strategies may also result in unnecessary or inefficient expenditure. In the case of Sony, a considerable portion of the corporation’s budget is spent on R&D to fulfil international requirements, and this may have led Sony to over-diversify. In order to compete with global competitors, Sony has ‘a finger in every pie’, so to speak, and this may have led the corporation to stray too far from its core competency, which is expertise in electronics products. Moreover, the possibility exists that over-diversification may cloud consumers’ perceptions of the brand.

Currently, Sony is facing a challenge to its market supremacy from the Samsung Company. In contrast to Sony, Samsung’s global strategy consists of limiting its diversification and focusing its resources on a small number of dominant businesses. This strategy has so far proved very successful for Samsung.

Recommendations

Although the Sony Corporation has succeeded in building one of the most widely recognised brand names in the world, its market dominance appears to rest on increasingly unsteady ground. This is indicated by the fact that Sony’s net profit for the third quarter of 2006 fell by 94% to ¥1.7 billion, compared to ¥28.5 billion for the same period in 2005 (Benson, 2006). This dramatic fall in profits may be attributed to the crucial strategic concerns confronting Sony.

Sony’s manufacturing process is in need of restructuring, as the quality of some Sony products has declined. This has resulted in damage to the company’s reputation and a consequent decrease in the competitiveness of its products. For instance, Forbes magazine reported in October 2006 that 9.6 million Sony laptop batteries had had to be recalled because they were prone to overheating and were therefore dangerous. In addition, Japanese consumers expressed their dissatisfaction with the new system of the Sony PS3 (Wonova, 2006). It would appear from these examples that Sony’s quality control system is not always as efficient as it should be.

Apart from quality control issues, Sony has shown itself unable to respond rapidly and effectively to changes in market demand, and its competitive advantage is therefore compromised. One example of this is the delay in the European launch of the PS3 because of manufacturing problems (BBC, 2006). Sony was unable to satisfy market demand, leaving the way open for rivals in the field such as Nintendo and Microsoft to increase their market share. Moreover, Sony did not respond as quickly as certain other television manufacturers to the increasing demand for plasma televisions, and therefore allowed its competitors to gain a head start in this market. Mintzberg et al. (1999) pointed out that “the first mover may gain advantages in building distribution channels, in tying up specialized suppliers or in gaining the attention of customers”, adding that “the first product of a class to engage in mass advertising tends to impress itself more deeply in people’s minds than the second, third or fourth”. Hence, Sony forfeited its competitive advantage and a considerable part of its market share in the games and television markets. It is evident that Sony’s operational strategy is deficient and requires improvement.

In order to address these issues, Sony is putting into practice strategies from both the “inside out” resource-based perspective (Hamel and Prahalad, 1990; Barney, 1991) and the “outside in” positioning perspective (Porter, 1980; Mintzberg et al., 1998), also known as the market-based perspective (Finlay, 2000). It has been suggested that combining these perspectives can optimise an enterprise’s capabilities and result in achieving and maintaining greater competitive advantage (Finlay, 2000; Thompson and Strickland, 2003; Johnson et al., 2005; Lynch, 2006). According to Hatch (1997), competitive strategy necessitates the exploitation of a company’s existing internal and external firm-specific capabilities and the cultivation of new capabilities. Sony should determine appropriate methods for managing external changes in the constantly shifting business environment, and also determine how to make full use of its existing capabilities and resources to respond effectively to this environment. Moreover, Sony must be attentive to potential future threats and put in place the mechanisms required to neutralise them.

Conclusion

It can be seen that globalization brings both advantages and disadvantages for businesses. On the one hand, they can sell their products in almost any country in the world, while progress in communication and transport means that they can choose cheaper suppliers and make their products in countries where labour costs are lower. On the other hand, it brings the disadvantage that they also face competitors from all over the world.

Appropriate planning and implementation of global strategies within the constantly evolving environment of technology can provide enterprises with opportunities for survival and expansion in an increasingly competitive market. However, global strategies which are not well conceived or well implemented can result in losses. Several factors could contribute to such losses, including increased costs due to additional staff and insufficient attention to the requirements of the local market. It is vital that enterprises find an appropriate balance between over-globalisation and under-globalisation, although there are no precise guidelines for determining such a balance. Among the keys to obtaining and sustaining competitive advantage in a global market is careful planning and strategy, which includes obtaining detailed information about the target country and focusing on cost or differentiation advantage.

References
Ari, A. (2008). Globalisation. Online at http://www.geocities.com/anil.ari_global/index.html# Accessed on 10th August, 2009
Barney, J. B. (1991), “Firm resources and sustained competitive advantage”, Journal of Management, Vol. 17, No. 1, pp. 99-120.
Barney, J. B. (2001), “Is the resource-based “view” a useful perspective for strategic management research? Yes”, Academy of Management Review, Vol. 26, No. 1, pp. 41-56.
De Toni, A., Filippini, R. and Forza, R. (1999), International Journal of Operations and Production Management, Vol. 12, No. 4, p. 718.
Passemard, D. and Kleiner, B.H. (2000), “Competitive Advantage in Global Industries”, Management Research News, Vol. 23, No. 7/8, pp. 111-117.
Finlay, P. (2000), Strategic Management: An introduction to business and corporate strategy, Prentice Hall.
Hamel, G. and Prahalad, C.K. (1990), “Capabilities-Based Competition”, Harvard Business Review, Vol. 70, No. 3.
Hamel, G. and Prahalad, C.K. (1994), “Competing for the future”, Harvard Business School Press.
Hatch, M.J. (1997), Organization Theory: Modern Symbolic and Postmodern Perspectives, Oxford University Press.
Hill, C.W.L. (2007), International business: competing in the global marketplace, Boston: McGraw-Hill/Irwin.
Johnson, G. (2005), Exploring Corporate Strategy: Text and Cases, 7th Edition, Prentice Hall.
Kikkawa, T. (1995), Growth in cluster of entrepreneurs: The case of Honda Motor and Sony
Lynch, R. (2006), Corporate Strategy, 4th Edition, Prentice Hall.

Quality of Service (QoS): Issues and Recommendations

The Effects Of Movement On QoS –

As the mobile device moves from a cell covered by one base station to an adjoining cell covered by a different base station during a connection, handover takes place. This handover may result in a short loss of communication, which may not be noticeable for voice interaction but can result in loss of data for other applications. For mobile computing, the base station may have to provide regional processing, storage or other services as well as communication.

Variations in link quality will also be caused by atmospheric conditions such as rain or lightning. These effects require more sophisticated dynamic QoS management than fixed systems.

It is therefore the variation in QoS that is the crucial distinction between mobile systems and communications based on wired networks. This calls for adaptive QoS management that specifies a range of acceptable QoS levels, instead of attempting to guarantee specific values. The QoS management is also responsible for cooperating with QoS-aware applications to support adaptation, instead of insulating applications from variation in underlying QoS. The effects of mobility on QoS then require that the algorithms used be capable of managing the frequent loss and reappearance of mobile devices within the network, and that overhead should be reduced in periods of low connectivity. This is in contrast to traditional distributed applications, where reasonably stable presence and consistently high network quality are usually assumed.
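As a rough illustration of this range-based approach, the following Python sketch (all names and thresholds are invented for the example, not taken from the literature) represents a QoS requirement as an acceptable band per parameter rather than a single guaranteed value, and classifies an observed measurement accordingly.

```python
from dataclasses import dataclass

@dataclass
class QoSRange:
    """Acceptable range for one QoS parameter (hypothetical field names)."""
    preferred: float          # value the application would like to receive
    worst_acceptable: float   # beyond this the flow must renegotiate or stop
    higher_is_better: bool = True   # True for bandwidth, False for delay/jitter

    def assess(self, observed: float) -> str:
        """Classify an observed value against the acceptable range."""
        if self.higher_is_better:
            if observed >= self.preferred:
                return "preferred"
            return "acceptable" if observed >= self.worst_acceptable else "unacceptable"
        # For delay-like parameters, lower observed values are better.
        if observed <= self.preferred:
            return "preferred"
        return "acceptable" if observed <= self.worst_acceptable else "unacceptable"

# Example: a video stream that can scale between 2 Mbit/s and 256 kbit/s.
bandwidth = QoSRange(preferred=2_000_000, worst_acceptable=256_000)
print(bandwidth.assess(500_000))   # "acceptable" -> trigger adaptation, not failure
```

An "acceptable" result would prompt adaptation (e.g. a lower-quality encoding) rather than being treated as a violated guarantee, which is the essential difference from fixed-value QoS contracts.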

The Restrictions Of Portable Devices On QoS –

Portability of the mobile computing device imposes a variety of problems that place limitations on QoS. The main limitation is the physical size of mobile computers. Systems are usually designed with the limitations of batteries in mind. Current battery technology still requires appreciable space and weight for modest power reserves, and is not expected to become significantly more compact in the future. This places limits on the design, due to the need to make low power consumption a primary design goal: low-power processors, displays and peripherals, and the practice of having systems powered down or “sleeping” when not in active use, are common measures to reduce power consumption in portable PCs (personal computers) and PDAs (personal digital assistants). Low power consumption components are usually grades of processing power below their higher-consumption desktop counterparts, thus limiting the complexity of tasks performed. The practice of intermittent activity may appear as frequent failures in some situations. Similarly, mobile technology requires significant power, notably for transmission, so the network connection may be intermittent.

The second issue is that of user interfaces: large screens, full-size keyboards, and sophisticated, easy-to-use pointing systems are commonplace in a desktop environment. These facilitate information-rich, complex user interfaces with precise user control. In portable computers, screen size is reduced, keyboards are typically more cramped, and pointing devices less sophisticated. PDAs have small, low-resolution screens that are usually more suited to text than graphics and may only be monochrome. They have stripped-down miniature keyboards, and pen-based, voice, or simple cursor input and selection devices. These limitations in input and display technology require a considerably different approach to user interface design. In such environments, where users may use a variety of systems in different situations, the interface to applications may then be heterogeneous.

QoS management in a mobile environment should allow for scaling of delivered information, and also simpler user interfaces, when connecting using a common combination of portable devices and higher-powered non-portable devices [1, 6]. The field of context-aware computing provides groundwork in this area, where, instead of treating only the geographical context (as for mobility), one can treat the choice of end system as giving a resource context.

The Effects On Other Non-Functional Parameters –

Any form of remote access increases security risks, but wireless-based communication is particularly susceptible to undetected interception, so mobility complicates traditional security mechanisms. Even nomadic systems may make use of less secure telephone and Internet-based communications than office systems using LANs. Some organizations may place restrictions on what data or services can be accessed remotely, or require more sophisticated security than is needed for office systems. In addition, there are legal and ethical issues raised by the monitoring of users’ locations.

Cost is another parameter that may be affected by the use of mobile communications. However, while wireless connections are frequently more expensive, the basic principles of QoS management with respect to cost are the same as for fixed systems. The only major additional complexity arises from the possibility of a larger range of connection, and therefore cost, options, and the possibility of having to perform accounting in multiple currencies.

WORK ON MANAGEMENT OF QoS IN MOBILE ENVIRONMENTS

Management Adaptivity – As stated in the section “The Effects of Movement on QoS,” one of the key ideas in managing QoS for mobile environments is adaptation to changes in QoS. In the following we discuss three categories of change that have to be catered for; a small dispatch sketch over these categories follows the list below.

Large-grained change is characterized as change due to the variety of end systems or network connections in use. Generally these vary infrequently, often only between sessions, and are therefore managed mostly at the initialization of interaction with applications, possibly by means of context awareness.
Hideable changes are those minor fluctuations, some of which may be peculiar to mobile systems, that are sufficiently small in degree and duration to be managed by traditional media-aware buffering and filtering techniques. Buffering is often used to remove noise by smoothing a variable (bit or frame) rate stream into a constant rate stream. Filtering of packets may differentiate between those containing base and enhancement levels of information in multimedia streams, e.g., moving from colour to black and white images, and is similar to that in fixed network systems [35]. However, as mobile systems move, connections with different base stations have to be set up and connections to remote servers re-routed via the new base stations. This requires moving or installing filters for these connections; a different connection may not provide the same QoS as the previous one, so the required filter technique may differ. Managing this requires an extension of the traditional interactions for migrating connections between base stations. The choice and handover management should take account of the offered QoS, the required QoS, and the capability of the network to accommodate any required filters. Where the network cannot maintain the current level of service, base stations should initiate adaptation in conjunction with handover [14, 41].
Fine-grained changes are those that are often transient, but significant enough in range of variation and duration to be outside the range of effects that can be hidden by traditional QoS management methods. These include:
Environmental effects in wireless networks.
Other flows beginning and stopping in a part of the system, thereby affecting the resources available.
Changes in available power causing power management functions to be initiated, or degradation in functions such as radio transmission.
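As a minimal illustration of the three categories above, the sketch below (Python; the category names, event fields and handler actions are placeholders rather than an implementation from the cited work) routes a detected QoS change to the mechanism suggested for its category.

```python
from enum import Enum, auto

class ChangeKind(Enum):
    LARGE_GRAINED = auto()   # new end system or network type, usually between sessions
    HIDEABLE = auto()        # small, brief fluctuations
    FINE_GRAINED = auto()    # transient but too large to hide

def handle_change(kind: ChangeKind, event: dict) -> str:
    """Route a detected QoS change to the appropriate mechanism (illustrative only)."""
    if kind is ChangeKind.LARGE_GRAINED:
        # Re-initialize the session using context information about the
        # new end system or network connection.
        return "renegotiate session parameters via context awareness"
    if kind is ChangeKind.HIDEABLE:
        # Absorb the fluctuation with conventional media-aware buffering/filtering.
        return "apply buffering and filtering"
    # Fine-grained: outside what buffering can hide, so the application
    # itself must be informed and asked to adapt.
    return f"notify application of {event.get('cause', 'change')} and trigger adaptation"

print(handle_change(ChangeKind.FINE_GRAINED, {"cause": "drop in radio link quality"}))
```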

These types of change must be communicated to the applications involved, as they require interaction between QoS management and the application for adaptation.

In many conditions it is reasonable to assume that the wireless connection will determine the overall QoS. However, end-to-end QoS management is still needed, especially for multicast systems and those using the Internet for their connection. The impact of cost on patterns of desired adaptivity also becomes more pronounced in mobile systems, where connections usually carry a charge per unit time or per unit of data.

Adaptation paths associated with QoS management ought to be able to describe how much users are willing to pay for a certain level of presentation quality or timeliness. The heterogeneity inherent in systems that may offer network access through more than one medium will also be an issue here, as certain types of connection cost more than others, and the cost of a connection will vary according to telecoms providers’ tariff structures.
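A minimal sketch of such an adaptation path, assuming invented quality levels and per-minute costs, might simply select the highest quality level that the user’s stated budget allows:

```python
# Each adaptation step pairs a presentation quality with an estimated cost per
# minute; the names and figures are hypothetical.
adaptation_path = [
    {"quality": "HD video",     "cost_per_min": 0.08},
    {"quality": "SD video",     "cost_per_min": 0.03},
    {"quality": "audio only",   "cost_per_min": 0.01},
    {"quality": "text summary", "cost_per_min": 0.001},
]

def best_affordable(path, budget_per_min: float):
    """Return the highest-quality step whose cost fits the user's stated budget."""
    for step in path:          # list is ordered from highest to lowest quality
        if step["cost_per_min"] <= budget_per_min:
            return step["quality"]
    return None                # nothing affordable: refuse or renegotiate

print(best_affordable(adaptation_path, budget_per_min=0.02))  # "audio only"
```

In practice the path would also encode timeliness and connection type, since the same quality level may cost different amounts over different media.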

Resource Management And Reservation –

Some researchers contend that resource reservation is not relevant in mobile systems, as the available bandwidth in connections is too highly variable for a reservation to be meaningful. However, some resource allocation and admission control would appear reasonable when resources are scarce, even if hard guarantees of resource provision are not practical. [44, 47] propose that guarantees be made in admission control on lower bounds of requirements, while providing best-effort service beyond this. This is achieved by making advance reservations of minimum levels of resources in the next predicted cell to ensure availability and smooth handoff, and by maintaining a portion of resources to handle unforeseen events. The issue of resource reservation is given some thought by those working on base stations and the wired elements of mobile infrastructures, as these high-bandwidth elements must be shared by many users, so the traditional resource management approach still applies.
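The admission-control idea attributed to [44, 47] might be sketched roughly as follows; the `Cell` structure, capacities and reserve fraction here are hypothetical and greatly simplified, intended only to show the shape of the scheme (guaranteed lower bounds, a held-back reserve, and advance reservation in the predicted next cell).

```python
from dataclasses import dataclass

@dataclass
class Cell:
    capacity: float                 # total bandwidth available in the cell (bit/s)
    reserve_fraction: float = 0.1   # kept back for unforeseen events
    committed: float = 0.0          # sum of guaranteed minimum rates admitted so far

    def usable(self) -> float:
        return self.capacity * (1 - self.reserve_fraction)

    def admit(self, guaranteed_min: float) -> bool:
        """Admit a flow only if its guaranteed minimum still fits.
        Bandwidth above the minimum is served best-effort and is not counted here."""
        if self.committed + guaranteed_min <= self.usable():
            self.committed += guaranteed_min
            return True
        return False

def prepare_handoff(predicted_next: Cell, guaranteed_min: float) -> bool:
    """Reserve the flow's minimum in the predicted next cell in advance of handoff."""
    return predicted_next.admit(guaranteed_min)

cell_a, cell_b = Cell(capacity=10e6), Cell(capacity=6e6)
cell_a.admit(1e6)                                   # flow admitted in the current cell
print(prepare_handoff(cell_b, guaranteed_min=1e6))  # True if the next cell can hold the minimum
```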

Context Awareness –

A further aspect of resource management is that of large-grained adaptivity and context awareness. [49] defines situation as “the entire set of circumstances surrounding an agent, including the agent’s own internal state”, and from this defines context as “the elements of the situation that ought to impact behavior”. Context-aware adaptation may include migrating data between systems as a result of mobility; changing a user interface to reflect location-dependent information of interest; choosing a local printer; or power-conscious scheduling of actions in portable environments. The QoS experienced is also dependent on awareness of context, and appropriate adaptation to that context [11]. A fundamental paper on context awareness is [13], which emphasizes that context depends on more than location, e.g., proximity to other users and resources, or environmental conditions such as lighting, noise or social situations. In consideration of QoS presentation, the issues of network connectivity, communications cost and bandwidth, and location are obvious factors affecting data for interactions, as are how end systems are used and users’ preferences. For instance, network bandwidth may be available to deliver spoken messages on a PDA (personal digital assistant) with audio capability, but in many situations text display would still be the most appropriate delivery mechanism: speech may not be intelligible on a noisy factory floor, and discretion may be required in meetings with customers. “Quality” will therefore cover all non-functional characteristics of information affecting any aspect of perceived quality.
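As a toy example of context-aware delivery selection along the lines of the PDA scenario above, the following sketch (the context fields and thresholds are invented for illustration) chooses between speech and text output:

```python
def choose_delivery(bandwidth_bps: int, has_audio: bool,
                    ambient_noise_db: float, in_meeting: bool) -> str:
    """Pick a delivery mechanism for a message from simple context cues.
    Thresholds and context fields are hypothetical."""
    audio_feasible = has_audio and bandwidth_bps >= 32_000   # enough capacity for speech
    audio_appropriate = ambient_noise_db < 70 and not in_meeting
    if audio_feasible and audio_appropriate:
        return "speech"
    return "text"

# On a noisy factory floor audio is feasible but not appropriate, so text wins.
print(choose_delivery(bandwidth_bps=64_000, has_audio=True,
                      ambient_noise_db=85.0, in_meeting=False))  # "text"
```

The point is that the decision depends on social and environmental context as much as on raw network resources, which is why "quality" here covers more than bandwidth.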

CONCLUSION

We have discussed the critical issues faced by QoS in a mobile environment, when those challenges emerged, and the techniques that have been put forward in the literature to tackle them.

Psychosocial Effects of Technology

Olivia Di Giulio
Introduction

As members of a modern society, we are used to technology being present in almost every area of our everyday lives; indeed, it is almost impossible to live a normal life without it. Technologies such as laptop computers and cell phones have become fixtures of modern culture, affecting how we communicate, work, and spend our free time. Though the effects appear minimal on the surface, technology can alter an individual’s psychological state. Technology affects how we view ourselves, our relationships with others, and the ways in which we communicate, thereby creating negative psychosocial effects in the lives of individuals.

Though technology is meant to promote the positive aspects of human connection, it creates an abundance of negative effects and backlashes. Technology has been created and manifested in numerous forms throughout the twenty-first century. Technology is a large umbrella term, covering thousands of creations: from physical devices such as laptops and cell phones, to virtual creations such as the Internet, its various websites, and the social media applications that can be accessed from both cell phones and computers. The Internet, which can be accessed from numerous technological devices, allows individuals to participate fully in its virtual world by sharing pictures, joining online chat forums, writing blog posts, and documenting their lives and daily activities through social media. Through these various avenues, the Internet allows users to create virtual relationships and communicable ties. Though all of these facets seem extremely positive, the negative impacts outweigh the benefits; every positive feature, in turn, creates a negative psychological impact in some shape or form. Technology can affect our individual mental states, how we view ourselves, the ways in which we communicate, and our relationships with others, which are some of the most important features of our human existence. Through technology we have redefined acceptable behaviors and moral norms, the basis of communication, and who we are as a culture.

One might ask why it matters that technology has affected our psychosocial states of being. It matters because we are mentally no longer the same culture that we were before these technological advancements. As a society, our mental states have changed negatively. We have become lazy, dependent on technology, isolated, and unable to put down our technological devices. Though technology can be extremely helpful, these are not positive changes, and they have affected the human brain, human interaction, and communication culture as a whole. We must be observant as a culture about how often, and in what ways, we use our technology, in order to lessen its negative psychosocial effects; otherwise, we will not be able to live without it. In order to be proactive and lessen these effects, we must look at the devices that have forever changed the face of communication and the negative ways in which they affect our mental state and the social aspects of society.

There are numerous technological advancements that have entirely redefined communication as a whole and the ways in which our society communicates. These advancements include cell phones, which allow instant communication through texting, and computers, which allow for the download of various communication software, applications, and social media apps (which can be found on both devices). Frequent use of these devices and applications has allowed methods of communication to be entirely redefined, because most elements of communication can now take place virtually. Technology is extremely convenient and appealing, making it difficult for users to resist it or to prefer face-to-face communication. A survey of undergraduate students showed that 85 percent use technology and social media to stay in touch with friends as opposed to other forms of communication (HumanKinetics.com). Due to its convenience and easy accessibility, technological communication has become a staple of our society and has redefined not only the way in which we communicate but also our relationships, since communication plays a significant role in the creation of human ties.

Technology Negatively Affecting Personal Relationships

The quality and logistics of human relationships have suffered negative effects due to technology use. Communication is a huge aspect of relationship building, and when the basis of communication changes, the basis of relationship building changes as well. Communication plays a fundamental role in producing “the common understandings” that help create moral norms and “social value systems” (Bruce Drake, Kristi Yuthas, Jesse Dillard). Within technological avenues such as texting, communication is entirely virtual and many elements of conversation are lost, such as body language, tone, and facial expressions, allowing conversation to become extremely impersonal and to lack depth (Psychcentral.com). According to psychologist Sherry Turkle, technological communication, such as texting, ironically interrupts relationship building and does not foster the conditions necessary to build a true connection with another individual (Psychcentral.com). Because individuals are constantly connected through texting, they do not receive the proper time alone, which is necessary in developing a connection with others (Psychcentral.com). A recent study found that the interruption of texting during a physical conversation “inhibits the development of closeness and trust” and reduces the empathy that one can feel for others (Wbur.org).

Technology does not substitute for the quality of physical conversation and does not reach the same heights and depths that physical conversation can. Through conversation, individuals search for and create moral norms, whereas technology prevents the possibility of having these in-depth conversations (Bruce Drake, Kristi Yuthas, Jesse Dillard). Physical conversation provides the tools necessary for people to develop “personal identity, build close relationships, solidarity and community”, elements that are all lost within technological communication (Bruce Drake, Kristi Yuthas, Jesse Dillard). Instead, communication and relationships fostered through technology are largely lacking in substance, because it is difficult to kindle a true connection in a virtual world, have in-depth conversations, and rely on virtual fulfillment. Therefore, technological relations have numerous backlashes. Like real relationships, the relationships created through technology give individuals reassurance and validation. If these needs are not fulfilled through virtual interaction, one can be left feeling empty. This is especially likely when one relies on virtual validation, because such needs are less likely to be fulfilled instantly than through physical contact. Relationships and the process of relationship building have changed, due to our society’s shift in dialogue brought about by technology.

What we say and how we say it have been entirely changed by technology, which has reinvented the technicalities of language. Cell phones and computers that operate over a wireless connection provide users with extremely fast technological communication, allowing messages to be delivered at speed. Abbreviations and colloquial language allow users to type fast messages in texts and chat rooms. Though these aspects seem extremely positive, they can be extremely dangerous for communication culture. Wireless connections and new conversational mechanisms provide the perfect equation for entirely redefining the face of communication. Users have become so accustomed to this type of fast-paced communication that they can no longer live without it, due to its convenience and simplicity. Technology makes users treat speed as an essential need, which is extremely detrimental to quality communication. Technological communication, such as texting and online chat rooms, has virtually destroyed the English language, and uses of its correct forms within these media have become few and far between. Individuals no longer take the time to place emphasis on certain expressions or to be grammatically correct, because it is simply easier and faster to speak colloquially, therefore preventing quality communication (Donovan A. McFarlane). Quality communication requires effort, and without it there are various misunderstandings (Donovan A. McFarlane). When communication is misunderstood, it is no longer efficient and does not achieve its purpose (Donovan A. McFarlane). In our society speed is often mistaken for efficiency (Donovan A. McFarlane). Individuals would rather summarize what they are saying than properly explain their ideas, due to the need for speed that technology instils (Donovan A. McFarlane). Though it is meant to simplify communication, technology has made communication more difficult, due to its impersonal nature and lack of quality, which promote ineffectiveness rather than cohesive dialogue (Donovan A. McFarlane).

Technology Affecting Behavior, Mental Health, and Mental Processes

As a culture, behavior has also been redefined through what is now seen as morally correct and acceptable. Technology has set these new standards of behavior and implemented entirely new social boundaries. It has been said that technology such as the Internet does not promote social integration (Kraut, Patterson, Lundmark). Over the last 35 years, “Citizens vote less, go to church less, discuss government with their neighbors less, are members of fewer voluntary organizations, have fewer dinner parties, and generally get together less for civic and social purposes” (Kraut, Patterson, Lundmark), partly due to technology, thereby enabling social disengagement and a less unified society. According to HumanKinetics.com, frequent use of technology can cause one to feel “distracted, overly stressed, and isolated”. Technological avenues such as texting further manifest negative behavioral habits by hindering our ability to confront situations, allowing individuals to hide behind the screen of their phone (Psychcentral.com). Bernard Guerney Jr., founder of the National Institute of Relationship Enhancement, believes that texting creates a “lack of courage” to approach an intense or awkward situation, because it is simply easier to hide behind a screen, which can hinder one’s social growth (Psychcentral.com). One can grow from certain life experiences, which have now become obsolete through the advent of texting (Psychcentral.com). Technology also fosters lazy behavior (Insidetechnology360.com). Technology’s numerous functions enable most manual work to be done digitally, making the lives of individuals much easier and ultimately making them lazier. As technology evolves, devices are able to do more and more for users (Insidetechnology360.com). For example, Apple’s iPhone feature Siri allows users to press a button and talk into the phone to request an action such as surfing the web or making a phone call. As if making a phone call or surfing the web were not easy enough, Apple has made it all the easier by allowing users to perform these actions at the push of a button. Features like this, in addition to many other features of technology, breed a lazy society, because we no longer have to perform many actions ourselves when technology can simply do them for us.

Additionally, technology enables the development of more severe personality disorders. With features that enable users to create a profile of their life on social media sites such as Facebook, and features that allow users to post up-to-the-minute pictures of their daily activities on social media apps such as Instagram, users can become fixated on their appearance and reputation. Users will therefore often post only their best traits via the Internet, enabling the manifestation of behavioral conditions such as narcissism (Humankinetics.com). The more one is engrossed, the more likely one is to experience physiological, emotional, and behavioral changes such as narcissism (Yi-Fen Chen). Certain activities and interactions a user partakes in increase the likelihood that psychological traces of the virtual environment will be left behind within the individual after experiencing it (Yi-Fen Chen).

The negative effects of technology which are visible to the human eye appear minimal. These effects can be seen in the way communication has changed and in the narcissistic ways we portray ourselves via the Internet, and they do not seem extremely harmful. The effects which we cannot see, such as those that affect the brain, are the most detrimental, because they target our mental health. Negative effects of technology have further manifested themselves in the forms of possible addictions and mental illnesses. Because technology is extremely present in our lives and convenient, it is hard for some to live without it, creating an inseparable and unhealthy relationship between the user and technology in the form of an addiction. Though it is not a disorder recognized by the American Psychiatric Association, there has been much speculation about including Internet Addiction in the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (U.S. National Library of Medicine), due to the manifestation of unhealthy relationships between users and technology. Internet Addiction is seen as an impulsive “spectrum disorder” which consists of “online and/or offline computer usage and consists of at least three subtypes: excessive gaming, sexual preoccupations, and e-mail/text messaging” (U.S. National Library of Medicine). A 2012 study done by the Department of Psychology and Neuroscience at the University of Colorado in Boulder, Colorado, showed a strong correlation between problematic Internet use and psychotic-like experiences (U.S. National Library of Medicine).

As a society, we must be extremely conscious of and attentive to our technology use, due to its damaging psychosocial effects. Because technology is so positively promoted within our society, most individuals would never suspect its damaging backlashes. We must be proactive about how, and how often, we use technology in order to prevent serious changes to our behavior, mental health, relationships, and the ways in which we communicate. These effects are extremely detrimental to our society, and if we do not act upon them by monitoring our technology use, then communication, social interaction, and our own mental health will only grow worse, and we will face a communication crisis.

Works Cited
Adler, Iris. “How Our Digital Devices Are Affecting Our Personal Relationships.” wbur.org. 2013. Web. 02 Nov. 2014. http://www.wbur.org/2013/01/17/digital-lives-i

Chen, Yi-Fen. “See you on Facebook: exploring influences on Facebook continuous usage”. Behaviour & Information Technology 39 (2014): 59–70. Web.

Drake, Bruce, Yuthas, Kristi, Dillard, Jesse. “It’s Only Words – Impacts of Information Technology on Moral Dialogue.” Journal of Business Ethics 23 (2000): 41-59. Web.

Human Kinetics. “Technology can have positive and negative impact on social interactions.” humankinetics.com. Web. 02 Nov. 2014. http://www.humankinetics.com/excerpts/excerpts/technology-can-have-positive-and-negative-impact-on-social-interactions

Kraut, Robert, Patterson, Michael, Lundmark, Vicki. “Internet Paradox: A Social Technology That Reduces Social Involvement and Psychological Well-Being?” American Psychologist 9 (1998): Web.

McFarlane, Donovan. “Social Communication in a Technology-Driven Society: A Philosophical Exploration of Factor-Impacts and Consequences.” American Communication Journal 12 (2010): 1-2.Web.

Mittal VA, Dean DJ, Pelletier, A. “Internet addiction, reality substitution and longitudinal changes in psychotic-like experiences in young adults.” Early Intervention Psychiatry 3 (2013): 1751-7893. Web.

Mohan, Bharath. “Is Technology Making Humans More Lazy – Yes.” Insidetechnology360.com. R.R. Donnelley, 20 Feb. 2011. Web. 24 Nov. 2014. http://www.insidetechnology360.com/index.php/is-technology-making-humans-more-lazy-yes-5968/

Pies, Ronald. “Should DSM-V Designate ‘Internet Addiction’ a Mental Disorder?” Psychiatry 2 (2009): 31-37. Web.

Suval, Lauren. “Does Texting Hinder Social Skills?” PsychCentral.com. Psych Central, 2012. Web. 02 Nov. 2014. http://psychcentral.com/blog/archives/2012/05/02/does-texting-hinder-social-skills/
