Effect of Technology on Public Transportation

What evidence of the social shaping of technology, if any, is provided by the history of public transport in London & Paris (1820-1990)?

The following will discuss the evidence, or otherwise, of the social shaping of technology with regard to public transport in London and Paris between 1820 and 1990. During this period technological advances in public transport were pronounced, and whether they shaped social changes will be outlined below. London and Paris are apt examples to use as they developed rapidly during the 19th century and continued to change until the end of the period.

In 1820 both London and Paris were expanding cities, yet their transport systems, with the exception of canals serving London, had hardly changed in hundreds of years. However, the impact of industrialisation and urbanisation meant that London and Paris would need improvements in public transport to get their populations to work, school and home again. These advances in technology would in turn bolster the social and economic changes that had fostered them in the first place. The British population increased from 10 million in 1800 to 36 million in 1900, whilst that of France went from 27 million to 40 million (Roberts, 1996, p.322). In the same period the population of London went from 900,000 to 4.7 million, whilst that of Paris went from 600,000 to 3.6 million. Most of the rise in the London and Paris populations resulted from the increased migration promoted by public transport (Roberts, 1996, p.322).

The term ‘commuter’ came into everyday use during the 1850s to describe the people who travelled into and around London daily to work. These commuters travelled by train and on any of the 800 horse-drawn bus services. After 1862 commuters could travel on the first complete section of the underground, from Paddington to Farringdon Street. The underground was developed and built by partners including the City of London and the Great Western Railway. The construction of such systems in London and Paris showed great engineering skill, not least because of the need to tunnel under or bridge the Thames and the Seine respectively (Evans, 2000, p.101). The Paris metro opened on 19 July 1900, when it ran only from Porte de Vincennes to Porte Maillot. Like the London underground, the metro was extended far beyond the original line; Line 1, for example, now runs from Château de Vincennes to La Défense. The Paris metro gained a reputation for being not only more efficient than the London underground but also more elegant, the result of the engineering know-how of Fulgence Bienvenüe and the architectural elegance of Hector Guimard. The metro has 211 kilometres (130 miles) of track serving 380 stations, which means that any point within Paris is no more than about 500 metres from the nearest station. The metro is only slightly more than half the size of the London underground, yet has a hundred more stations (Mills, 1997-2005).

Improvements in technology meant that more people travelled to London and Paris to live and work, and more of them could travel within and beyond the city limits, owing to the increased provision of public transport. The early part of the period 1820 to 1990 saw the advent of the railways. The first successful rail service, between Stockton and Darlington, developed by George Stephenson, provided the impetus for a great expansion of railways (Hobsbawm, 1962, p.187). As the respective capital cities, London and Paris were logically at the centre of their national rail networks. Technically speaking, although the train services into, out of and within London provided a public service, they were privately owned until after 1945. Britain had a head start over France in the amount and density of rail track, not only in the capital but nationally as well: over 750 kilometres squared compared with between 250 and 499 kilometres squared for France (Hobsbawm, 1975, p.310). The advent of the railways meant that Londoners and Parisians had better links with the provinces, and cities such as Newcastle and Marseilles were easier to reach. The railways also meant that other parts of their own cities were easier to get to (Hobsbawm, 1975, p.56). Southern Railway, which ran the majority of train services in and around London, was the only private rail operator (before nationalisation) that was regularly in profit (Black, 2000, p.89).

Linked to the spread of the railways was the adoption of underground systems in both London and Paris. The underground and metro systems offered the capacity to carry millions of commuters daily without causing as much disruption as placing all the rail tracks above ground. London opened its first deep-level electric underground line in 1890, and Paris, alongside other cities, followed within a decade. Since the completion of its last extension in 1999, the London underground, with 392 kilometres (244 miles) of track and 280 stations, is roughly double the size of the Paris metro (Crystal, 2003, p.950). In contrast to the railways, the London underground continued to expand during the 1960s and beyond: the new Victoria Line of the 1960s was followed by the Jubilee Line and the extension of the system to Heathrow Airport in the 1970s (Black, 2000, p. 91). The underground systems had the advantage of transporting more people with greater speed than other forms of private and public transport. When they were built, cars and motor buses barely existed, and even as cars became more common they remained out of the price range of many Londoners and Parisians until the 1950s. Using public transport had the advantage of being cheaper, without the need to worry about parking or being stuck in traffic jams (Black, 2000, p.86).

Another way in which public transport contributed to the social shaping of technology in London and Paris was through the role of buses. Prior to the invention of the internal combustion engine there had been the horse-drawn bus. Buses driven by petrol or diesel engines, however, were able to carry more passengers further than their horse-drawn predecessors, and could pick passengers up from places where the train and the underground did not go. Motor buses were introduced into London and other British cities from 1898 (Black, 2000, p.87). Buses tended to operate later services than the trains did in London, and both within London and outside it, train companies before the Second World War often ran bus services. The Second World War left London’s transport infrastructure badly damaged, whilst Paris escaped heavy bombing, although other parts of the French rail and road networks were destroyed (Black, 2000, p.88).

In most respects the coming of the railways amply demonstrated the social shaping of technology. It helped to speed the movement of people from smaller towns and villages to major cities such as London and Paris. The railways allowed goods and people to travel much faster and also generated great wealth for their investors, wealth displayed in elegant stations such as King’s Cross and the Gare du Nord. The railways employed thousands directly or indirectly whilst transporting millions more (Hobsbawm, 1987, p.27). France had been slower than Britain in building railways, yet managed to double the amount of track it had between 1880 and 1913 (Hobsbawm, 1987, p.52). Railway workers and other transport workers shaped society through their use of that technology and, in times of industrial dispute, through their refusal to use it. Both British and French transport workers had a reputation for radical trade unionism. In the British General Strike of May 1926 support amongst London’s transport workers was solid, and not a bus, train or underground train ran for nine days (Brendon, 2000, pp. 46-47). France tended to be more prone to strikes than Britain: in the summer of 1936, Paris and the rest of the country came to a halt after a series of strikes, which had started at Renault, spread to the transport workers (Brendon, 2000, p. 296). Even in more recent times strikes on the metro are frequent, especially if the French trade unions are unhappy with their government; unlike their counterparts in London, most Parisians can walk to work when that happens (Mills, 1997-2005).

There was another development in public transport that allowed some social shaping due to technology: the aircraft. At first air travel was restricted to the rich, the military and cargo carriers. However, the increasing cheapness of flights and the opening of airports such as Charles de Gaulle and Heathrow, near Paris and London respectively, made package holidays and internal business flights easier (Hobsbawm, 1994, p. 15). It was in the production of the supersonic airliner Concorde that both countries collaborated to show how technologically advanced they were. Concorde allowed people to travel to and from London and Paris in luxury as well as being good for national prestige (Crystal, 2003, p. 214). Whilst the French have made efforts to maintain and modernise their rail network in Paris and nationally, the decline of the British railways has been marked: the total mileage of track halved between 1945 and 1992, whilst the number of car owners increased twentyfold in the same period. As a result, public transport was taken more seriously in Paris than in London (Black, 2000, pp. 90-92).

Therefore, it can be argued that the social shaping of technology was evidenced by public transport in London and Paris between 1820 and 1990. It was the development and expansion of the railways that greatly contributed to the expansion of London and Paris during the 19th century. The railways generated wealth and trade as well as bringing people and jobs to both cities. The development of underground systems also contributed to social shaping, as more and more people were able to commute to work and school. Public transport was further enhanced with the introduction of motor buses, whilst the availability of cycles and later cars meant that not everybody had to rely on public transport. Whilst the greater availability of public transport brought socially shaping changes, the wider availability of cars led to more people moving out of the city centres into the suburbs. Public transport still remains vital for millions of Londoners and Parisians, and for the commuters who travel from further afield to go about their everyday business in London or Paris.

Bibliography

Black, J (2000) Modern British History since 1900, Macmillan Foundations, Macmillan, London

Brendon, P (2000) The Dark Valley – A Panorama of the 1930s, Jonathan Cape, London

Crystal, D (2003) The Penguin Concise Encyclopaedia, Penguin Group, London

Hobsbawm, E (1962) The Age of Revolution 1789-1848, Weidenfeld & Nicolson, London

Hobsbawm, E (1975) The Age of Capital 1848-1875, Weidenfeld & Nicolson, London

Hobsbawm, E (1994) The Age of Extremes – the short Twentieth Century 1914-1991, Michael Joseph, London

Mills, I C (1997-2005) The Paris metro – www.discoverfrance

(Barry Vale)

Effects of Violent Video Games | Essay

Violent computer games and their possible effects on players.

Introduction

The Problems

Social Effects of Playing Computer Games

Conclusion

Bibliography

Introduction

Feeding children’s passion for computers, billions of dollars in both public and private funds are being spent to give children access in school, at home, and in the community. Nearly every school is now equipped with computers (Fisch, 2004, p. 2), and over two-thirds of the nation’s children have access at home (Fisch, 2004, p. 4). But is computer technology actually improving their lives? Computer technology has transformed society in a number of profound ways. For better or worse, the increasing pervasiveness of computer technology is a reality no one can ignore or stop, not that one would want to. Computers are fast becoming integrated into nearly every aspect of daily living, from school to work, to banking and shopping, to paying taxes and even voting. They provide access to a wide range of information without a trip to the library. They convey personal messages in place of the post office or telephone. And they compete with newspapers, radio, and television in providing entertainment and news of the day.

Computer technology also has a profound effect on our economy. Not only are computers changing the way goods and services are manufactured, distributed, and purchased, but they are also changing the skills workers need to be productive and earn a living. This climate sets the stage for us to encourage our children to use computers, which represent not the world of tomorrow but the world of today, and thus children need to be computer literate. The public generally agrees that for children to participate socially, economically, and politically in this new and different world, they must acquire a certain level of comfort and competence in using computers. National polls indicate widespread support for providing children with access to computers to enable them to learn adequate computer skills and improve their education (Trotter, 1998, pp. 6-9). In surveys, most parents and children report that they view computers and the Internet as a positive force in their lives, despite concerns about exposure to inappropriate commercial, sexual, and violent content (National School Boards Foundation, 2000). Most parents believe that the Internet can help children with their homework and allow them to discover fascinating, useful things, and that children without access are disadvantaged compared to those with access.

The scenarios described above represent the current generation of parents, as opposed to their children: a generation that grew up on computer and video games when their own parents had no idea what they were playing, or even what the technology was. Thus, there was a real understanding and involvement gap (Brougere et al, 2004, pp. 1-4). Those largely unsupervised children are now adults, adults who grew up selecting their own video and computer games and developing their own culture without guidance as to what was good or bad for them, interested simply in the experience of a new technology changing the world of play and relaxation. Thus it was the game itself, almost regardless of what it was, and not its content that ruled (Brougere et al, 2004, pp. 1-4).

When discussing violence in computer games against this background, there are three groups to consider: the children of parents who grew up playing games and basically picking them out themselves; those whose parents supervised what they played and purchased, a small minority; and lastly, those adults who had neither video games nor computers in their home. The assumption is that almost all of today’s adults played video or computer games when they were children, if not on their own machines then on a friend’s console or computer. They are children of a generation whose parents were perhaps exposed somewhat to computers at work, but more often than not were not.

Thus, the problem of violence becomes one that rests on the shoulders of game developers, manufacturers and designers, based upon industry research, educational and emotional findings, as well as studies concerning the effects of violence on children. The parents of today’s adults knew about the creeping violence on television that they themselves grew up with and which was publicised when they were children and teenagers. But the circumstances are different today, as there is no television-style standards board making noise about PC and video game content. Thus the level of acceptable violence, as well as the controls, industry oversight and general standards as to what is and is not acceptable, comes into play. If you question this underlying foundation, think about the popularity of Madden football: like it or not, that U.S. sport tops boxing for all-out mayhem, violence, competitive spirit and aggression. Thus the dilemma of trying to gauge the level of violence and its effect rests upon a generation that really had no boundaries.

The jury is still out on the subject of the effects of computer games on children, teenagers, and young adults, and it is seemingly hopelessly divided. An “overwhelming number of parents”, pegged at 96 percent in a survey conducted by the Interactive Digital Software Association, indicate that they pay attention to the video and computer game content that their children play (Business Wire, 2003). The same survey indicated that 44 percent of parents in homes that own video game consoles or computers stated that they themselves used to play interactive games and that they play with their children on a daily or weekly basis (Business Wire, 2003). The returns from that survey found that, all in all, over 60% of the responding parents play interactive games with their children at least once a month (Business Wire, 2003). Eighty-nine percent of responding parents stated that they were present when games were purchased for children under the age of 18.

The survey revealed some interesting trends, including that the children now playing computer and video games are the offspring of former and present players themselves. This raises the question of whether these parents’ threshold for what constitutes violence in computer and video games is somewhat jaded. In fact, the majority of gamers, as they are termed, are adults, according to the Interactive Digital Software Association survey, and the survey revealed that the entire universe of game players is getting older. Players under the age of 18 made up just 30 percent of the gamer population, down from the 34 percent recorded in 2002 (Business Wire, 2003). However, the survey avoided the critical issue: the extent of violence in the games the parents indicated that they were supervising the buying of, as well as playing with their children. The survey did state that 36 percent of the games played on computers were action oriented, which tied with puzzle, board, and card games for the top spot (Business Wire, 2003). In fact, the preferences were almost evenly divided across the four categories, with driving and racing games scoring 36 percent, and sports 32 percent.

Excessive, unmonitored use of computers, especially when combined with use of other screen technologies, such as television, can place children at risk of harmful effects on their physical, social, and psychological development. Children need physical activity, social interaction, and the love and guidance of caring adults to be healthy, happy, and productive (Hartmann and Brougere, 2004, pp. 37-41). Too much time in front of a screen can deprive children of time for organized sports and other social activities that are beneficial to child development (Hartmann and Brougere, 2004, pp. 37-41). In addition, children may be exposed to violent, sexual, or commercial content beyond their years, with long-term negative effects (Brougere et al, 2004, p. 8). At present, excessive use of computers among children, especially younger children, is not typical. National survey data gathered in spring 2000 indicated that children ages 2 to 17 spent about 34 minutes per day, on average, using computers at home, with use increasing with age: preschoolers ages 2 to 5 averaged 27 minutes per day, school-age children ages 6 to 11 averaged 49 minutes per day, and teens ages 12 to 17 averaged 63 minutes per day (Brougere et al, 2004, p. 9). Available data on computer use at school suggest that exposure in the early primary grades, at least, is relatively modest. A spring 1999 survey of 26 elementary schools in the heart of Silicon Valley, where computer use might be expected to be high, found that although 70% of teachers in kindergarten through third grade had their students do some work on computers, the students’ computer time averaged less than 10 minutes per day (Brougere et al, 2004, p. 11). These data suggest that younger children in particular are not currently using computers for excessive amounts of time.

In the case of video games, even their critics acknowledge that they are instructing our children. The critics just do not like the form and the sometimes violent and sexually explicit content of the instruction, which they believe teaches children aggressive behaviors (Suellentrop, 2006). Yet if such games are nothing more than “murder simulators,” as one critic has called them, why is it, as gaming enthusiasts never tire of pointing out, that the murder rate has declined in recent years even though there are more video games, and more violent ones, than ever (Suellentrop, 2006)? The important thing to find out about video games is not whether they are teachers; “The question is,” as game designer Ralph Koster writes in A Theory of Fun for Game Design (2004), “what do they teach?” (Suellentrop, 2006). The marketing strategies of game companies link closely to Hollywood action movies as a means of reaching more gamers.

Cinema has emerged as the most prominent influence on games. Cinema and games are superficially alike, in that both are relatively modern media that deliver audio-visual content to paying audiences. The similarities the media share have meant that some artistic strategies can be transferred between the two. However, there is a limit to the extent to which artistic techniques can be taken from one and used in the other, and game designers are increasingly using unsuitable cinematic conventions in the creation of their games. Activision, a Santa Monica based game manufacturer, produced the Fantastic 4 game in agreement with the film studio, whereby you can “assume the persona of Mr. Fantastic/Reed Richards, Invisible Woman/Sue Storm, Human Torch/Johnny Storm, or Thing/Ben Grimm and master their individual attributes and unique powers to solve puzzles, overcome obstacles, and defeat enemies. Another option is to control the Fantastic 4 together as a team and dynamically switch between characters during their adventures, and combine super powers in order to level more devastating attacks and accomplish missions” (Society for the Advancement of Education, 2005). The trend includes almost any Hollywood movie that can be converted to action, with the Fantastic 4 representing a mild version of what the industry has to offer. The basic theme is the ‘good guys’ against the ‘bad guys’, as in re-creations such as “X-Men Legends II: Rise of Apocalypse”, in which “the rival X-Men and Brotherhood … bonded by a common enemy, fight side by side for the first time, allowing players to switch instantly between super-power wielding teammates as they overcome obstacles, solve puzzles, and defeat more than 100 types of enemies” (Society for the Advancement of Education, 2005).

Violence is a popular form of entertainment: “a crowd of onlookers enjoys a street fight just as the Romans enjoyed the gladiators, and wrestling is a popular spectator sport not only in the United States, but in many countries in the Middle East” (Centerwall, 1989, p. 23). Local news shows provide extensive coverage of violent crimes in order to increase their ratings. Technological advances have dramatically increased the availability of violent entertainment. The introduction of television was critical, particularly in making violent entertainment more available to children. More recently, cable systems, videocassette recorders, and video games have increased exposure, and hand-held cameras and video monitors now permit filming of actual crimes in progress. Economic competition for viewers, particularly young viewers, has placed a premium on media depictions of violence, as their attention translates into sales.

The Problems

The level of acceptable violence in computer games, and of violence in itself, thus frames the question, as so many of the top-selling computer and video games have been violence based. While the non-violent ‘Sims’ simulation game proved to be the top seller at 16 million copies, the next four games totaled 32 million (Wikipedia, 2007). Of those games StarCraft, with 9.5 million copies sold, is a strategy war game played in space, and one can get a good idea of its content from the name of its expansion pack, ‘StarCraft: Brood War’ (Blizzard Entertainment, 2007). Half-Life (Planet Half-Life, 2007), at 8 million copies, is a first-person shooter game featuring blood spatters and other effects. Of the top ten computer games four are violence based, and of the next ten, 11 through 20, five are violence based (Wikipedia, 2007). Thus the ethics are driven by sales, as well as by the creative foundation and premise from which the games are fashioned. The differing themes represent directions in game development, what each manufacturer has built its reputation on, and the gamer profile it appeals to. On the preceding basis the industry is split down the middle, with half going for violence and the other half utilizing non-violent content.

There is considerable evidence that violence on television, in video and in computer games is harmful to children (Hope, 2005). And just as the current generation of parents became adjusted to certain levels of violence in their own exposure decades ago, the effect has magnified for their offspring, according to lecturer Lesley Murphy of Robert Gordon University (Grant, 2006). The preceding calls for a scientific psychology of the effects violent games had on the parents, in order to understand the level to which their children are now being exposed. Such work should not only help us to understand the parents’ own tolerance of violence, it should help to determine where all this stands in the realm of what is, relatively speaking, normal. Playing computer games can be an important building block to computer literacy, because it enhances children’s ability to read and visualize images in three-dimensional space and to track multiple images simultaneously, and the limited evidence available also indicates that home computer use is linked to slightly better academic performance (Alington et al, 1992, pp. 539-553).

Dominick (1984, pp. 136-147) expresses concern over findings that playing violent computer games may increase aggressiveness and desensitize a child to suffering, and that the use of computers may blur a child’s ability to distinguish real life from simulation. Compared to girls, boys spend more than twice as much time per week playing computer games (Funk, 1993, pp. 86-89) and are five times more likely to own a computer game system (Griffiths and Hunt, 1995, pp. 189-193). In a study of self-reported leisure time activities of 2,200 third and fourth graders, computer games topped the list of activities among boys: 33% of boys reported playing computer games, compared with fewer than 10% of girls (Harrel et al, 1997, pp. 246-253). Initially it was thought that this disparity was the result of the games’ violent themes and lack of female protagonists (Malone, 1981, pp. 333-370). A more likely reason, however, is the difference between the genders in their play preferences: boys tend to prefer pretend play based on fantasy, whereas girls tend to prefer pretend play based on reality, a rare theme for computer games, even those designed specifically for girls.

Social Effects of Playing Computer Games

As mentioned earlier, game playing has long been the predominant use of home computers among children–especially among younger boys. Although the available research indicates that moderate game playing has little social impact on children, concerns nonetheless have been raised about excessive game playing, especially when the games contain violence. Research suggests that playing violent computer games can increase children’s aggressive behavior in other situations.

Existing research indicates that moderate game playing does not significantly impact children’s social skills and relationships with friends and family either positively or negatively. Studies often found no differences in the “sociability” and social interactions of computer game players versus non-players, (Phillips et al, 1995, pp. 687-691) but a few studies found some mildly positive effects. For example, one study found that frequent game players met friends outside school more often than less frequent players. (Colwell et al, 1995, pp. 195-206) Another study of 20 families with new home computer game sets explored the benefits and dangers of playing games and found that computer games tended to bring family members together for shared play and interaction. (Mitchell, 1998, pp. 121-135)

Less is known, however, about the long-term effects of excessive computer use among the 7% to 9% of children who play computer games for 30 hours per week or more. (Griffiths and Hunt, 1995, pp. 189-193). It has been suggested that spending a disproportionate amount of time on any one leisure activity at the expense of others will hamper social and educational development. (Griffiths and Hunt, 1995, pp. 189-193) Indeed, one study of fourth- to twelfth-grade students found that those who reported playing arcade video games or programming their home computer for more than an hour per day, on average, tended to believe they had less control over their lives compared with their peers. (Wiggins, 1997) In addition, some evidence suggests that repeated playing of violent computer games may lead to increased aggressiveness and hostility and desensitize children to violence. (Provenzo, 2001, pp. 231-234)

Although educational software for home computer use includes many games that encourage positive, pro-social behaviors by rewarding players who cooperate or share, the most popular entertainment software often involves games with competition and aggression, and the amount of aggression and violence has increased with each new generation of games. A content analysis of recent popular Nintendo and Sega Genesis computer games found that nearly 80% of the games had aggression or violence as an objective (Dietz, 1998, pp. 425-442). One survey of seventh- and eighth-grade students found that half of their favorite games had violent themes (Funk, 1993, pp. 86-89). Yet parents often are unaware of even the most popular violent titles, despite the rating system from the Entertainment Software Rating Board.

In a 1998 survey, 80% of junior high students said they were familiar with Duke Nukem, a violent computer game rated “mature” (containing animated blood, gore, and violence and strong sexual content), but fewer than 5% of parents had heard of it (Oldberg, 1998). Numerous studies have shown that watching violent television programs and films increases children’s and adults’ aggression and hostility (Friedrich-Cofer and Huston, 2000, pp. 364-371); thus, it is plausible that playing violent computer games would have similar effects. The research on violent computer games suggests that there is, indeed, an association between playing such games and increased aggression, and that the critical variable is a preference for playing aggressive games, rather than the amount of time spent playing (Friedrich-Cofer and Huston, 2000, pp. 364-371).

Several experimental studies suggest that playing a violent game, even for brief periods, has short-term transfer effects, such as increased aggression in children’s free play (Friedrich-Cofer and Huston, 2000, pp. 364-371), hostile responses to ambiguous questions, and aggressive thoughts. For example, one study of third and fourth graders found that children who played a violent game (Mortal Kombat II) responded more violently to three of six open-ended questions than did children who played a nonviolent computer game (basketball) (Friedrich-Cofer and Huston, 2000, pp. 364-371). Furthermore, it has been found that children who prefer and play aggressive computer games demonstrate less pro-social behavior, such as donating money or helping someone (Friedrich-Cofer and Huston, 2000, pp. 364-371).

Studies of television have found that continued exposure to violence and aggression desensitizes children to others’ suffering, (Rule and Ferguson, 2001, pp. 29-50) but studies of computer games have not yet explored such a link. At least since the 1980s, however, both the U.S. and British military have used violent video games for training, reportedly to desensitize soldiers to the suffering of their targets and to make them more willing to kill. (Kiddoo, 2000, pp. 80-82).

Conclusion

The foundation of violence in computer games stems from the fascination with violence spawned by the movies as well as television. These media have become an overbearing influence on game development, and their expressive methods are being applied in a game context. A look at the graphics of any video game reveals the similarities, as well as the attempt to capture as much realism as possible. Such is a natural evolution of the product and the technology, but it also continually blurs the fantasy atmosphere that used to be clearly delineated. The violence that exists in over 50 percent of computer and video games is not so much a product of the designers and manufacturers as a product of society, in that the function of their businesses is to fulfill a need. And since the foundation for that need is there, they continue to create games to fill it.

The problem starts and persists with the consumer market, one that is a product of a television and cinema culture that had been at work long before computer and video games arrived. There is now a genuine understanding that these effects have become deeply rooted facets of industrialized cultures; games cannot be blamed alone, yet they, along with other entertainment media, are contributing to the problem. Youth violence affects us all, and a reversal of the process is going to be a difficult undertaking given the historical context from which it came.

A look at the top selling video game categories reveals the extent of the problem:

Table – Top Games Genres

(Wikipedia, 2007)

Rank    Genre
  1     Strategy / RPG
  2     Action
  3     Sport Games
  4     Racing
  5     All Shooter Games
  6     Simulations
  7     Family Entertainment
  8     Children’s Entertainment
  9     Fighting
 10     Other Games
 11     Edutainment

The following games are rated as all-time favorites; many, though not all, feature violent content:

Donkey Kong, 1981, Nintendo Co. Ltd, Nintendo of America, Inc., Arcade.
Doom, 1993, id Software, id Software, P.C. DOS.
Dragon’s Lair, 1983. Magicom Multimedia, Cinematronics, Arcade.
Duke Nukem, 1991, Apogee Software Ltd., Apogee Software Ltd., PC DOS.
E.T.: The Extraterrestrial, 1982, Atari, Inc., Atari Inc., Atari 2600.
Final Fantasy series I – IX, 1990 – 2003, Square Enix Co., Sony Computer Entertainment America, Inc., Nintendo Entertainment System, Super Nintendo Entertainment System, PlayStation, PlayStation 2.
Final Fantasy VII, 1997, Square Co., Sony Computer Entertainment America, Inc., PlayStation.
Grand Theft Auto III, 2001, DMA Design Ltd., Rockstar Games, PlayStation 2
Half-Life, 1998, Valve Software, Sierra On-Line, Inc., P.C. Win. ’95.
Legend of Zelda: The Wind Waker, 2003, Nintendo Co. Ltd, Nintendo of Europe, Inc., GameCube.
Mario Bros I-VII, 1983 –2003, Nintendo Co. Ltd, Nintendo of America, Inc., Nintendo Entertainment System, Super Nintendo Entertainment System, GameCube.
Max Payne, 2001, Remedy Entertainment Ltd., GodGames, Win. ’95.
Metal Gear Solid, 1998, Konami Computer Entertainment Japan Co., Ltd., Konami of America, Inc., PlayStation.
Metal Gear Solid 2: Sons of Liberty, 2001, Konami Computer Entertainment Japan Co., Ltd., Konami of America, Inc., PlayStation 2.
Myst, 1993, Cyan, Inc., Broderbund Software, Macintosh.
Pac-Man, 1980, Namco Ltd., Midway Mfg. Co., Arcade.
Perfect Dark, 2000, Rare Ltd., Rare Ltd., Nintendo 64.
Pokemon, 1998, Game Freak, Inc., Nintendo of America, Inc., Game Boy.
Pong, 1972, Atari, Inc., Atari Inc., Arcade.
Resident Evil, 2002, Capcom Co., Ltd., Capcom U.S.A., Inc., GameCube.
Rogue Leader, 2001, Factor 5, Lucas Arts, GameCube.
Silent Hill, 1999, Konami Computer Entertainment Kobe (KCEK), Konami of America, Inc., PlayStation.
Space Invaders, 1978, Taito Corporation, Taito America Corp., Arcade.
Spacewar, 1962, Russell, S.
Street Fighter II, 1991, Capcom Co., Ltd., Capcom U.S.A., Inc., Arcade.
Super Mario Bros., 1985, Nintendo Co. Ltd, Nintendo of America, Inc., Nintendo Entertainment System.
Tekken 3, 1998, Namco Ltd., Namco Hometek, Inc., PlayStation.
Tennis for Two, 1958. Higinbotham, W.
Tetris, 1989, Pajitnov, A., Nintendo of America, Inc., Game Boy.
Tomb Raider, 1996, Core Design Ltd., Eidos Interactive, PlayStation.
Tomb Raider: The Angel of Darkness, 2003, Core Design Ltd., Eidos Interactive, PlayStation 2.
Winning Eleven 6: Final Evolution, 2003, Konami Computer Entertainment Kobe (KCEK), Konami Computer Entertainment Japan Co., Ltd., GameCube.
Wolfenstein 3D, 1992, id Software, Apogee Software Ltd., PC DOS
Zelda I –VI, 1987-2003, Nintendo Co. Ltd, Nintendo of America, Inc., Nintendo Entertainment System, Super Nintendo Entertainment System, GameCube.

As a business, the economics of return on investment figure importantly in the reasons why so many violent games are produced: simply speaking, there is a market for them. The high cost of producing games engenders a desire within the companies financing games production to ensure a return on their investment. In most popular mass culture, this has meant a cautious approach to creating content. There has been a streamlining of the creation of content, be it music, films or games, that has seen the removal of as many variables as possible in order to produce content that can be easily quantified and accounted for. Companies are reluctant to take risks, and the simplest way of avoiding them is to repeat previously profitable formulae or, in the case of a developing medium such as games, to adopt the techniques of the more developed and superficially similar medium of cinema. Designers are reliant upon the finance provided by publishing companies to create games. This has seen the production of numerous games based on Hollywood films and characters, or the construction of games that can be marketed and sold on the strength of their cinematic aesthetics and sensibilities.

Computer and video game companies base their decisions about what to produce upon careful market research and raw numbers, and the fact is that, since 50% of the market has been and continues to be buying violent game content, they will continue to design and market these types of games.

And while the problem is deep seated, there is a logical and easy solution, if only the adults will play along. The survey conducted by the Interactive Digital Software Association (2001) indicated the following statistics:

adults purchase 90% of all games sold

And that is the only statistic that will be utilized to make the point. As the controlling variable in the purchase, it is the adults who need to be reached. The problem is how. Educating adults about the problem is the logical answer, but as the primary buyers of games overall, they are also heavy buyers of violent game content themselves. The preceding is more than an ethical dilemma, it is a cultural one, one whereby the cycle needs to be broken with the same vigor and force that instilled it in the first place. But that took decades and billions in advertising and marketing dollars to put into place. Thus it seems that the only force large enough to have an impact upon this situation is government. Therein lies the ethical problem, for this speaks of yet another regulation in a world that is fast becoming over-regulated in order to save ourselves from ourselves.

The solution that the preceding is leading up to is the same as has been applied to cigarette smoking: mandated warning labels on each box. Could the foundation for this approach be similar to the health risk cited in the instance of cigarettes, only in this instance framed as a societal risk? That is an extremely touchy subject, as it seemingly encroaches upon freedom of choice. The warning labels and legislation to curtail smoking achieved success as a result of the non-smokers who did not wish to inhale second-hand smoke in restaurants, offices and other public indoor locales; these restrictions did not and do not stop smokers from smoking. Thus, why would similar measures stop violent game players from playing?

Thus, could a violent game tax be the solution? This would, or might, represent a choice in that the extra money so charged would be put into a victims’ and marketing fund to finance additional education on the dangers of violence. Seemingly, that might create an outcry as well; however, as is the case with any type of social change, the majority wins out, and thus the non-violent lobby would have to organize itself for a long campaign.

Effect of Phone Type on Texting Frequency

Texting and Mobile Phones among Fourth Year High School Students in Saint Augustine’s School
Ballocanag, Brian Emil
Dungan, DonEllise
Francisco, Ralph Vincent
Jacinto, Arvin Jhay
Javillionar, Kevin Jayson
Laplana, Clifford Sean
Lite, Gwynette
Manzano, Aixel
Nicolas, Rinalyn
Tacho, Mariella Stephanie Lyne

Abstract

This study was conducted to answer the question of whether the type of mobile phone has a significant effect on the frequency of texting. The researchers distributed 24 copies of a questionnaire to the Junior and Senior students of Saint Augustine’s School, school year 2014-2015, to determine how many times they text daily using the type of mobile phone that they have. The Chi-Square Test of Independence was used to test the null hypothesis. The researchers retained the null hypothesis since the P-value was greater than the 0.05 significance level. Thus, it was concluded that the frequency of texting is not dependent on the type of mobile phone.

Introduction

Mobile phones are great for talking to someone without seeing his or her face. But they are also great for messaging, especially text messaging, which lets us get in touch with our loved ones and even some strangers without making a phone call, which costs more.

Often, we flaunt our mobile phones simply because they are smartphones and were manufactured by some of the famous companies in the gadget industry. We care less for phones that are locally made and basic. Sometimes we are fond of using popular-branded smartphones because they are advertised on television and we do not want to be left behind by the high-tech, industrialized world.

At present, we are attracted to expensive, high-class brands of mobile phones. We often believe cell phone companies when they claim that their products are better than their competitors’. We are then persuaded and lured into patronizing and buying their mobile phones without much hesitation. And our biggest and most specific reason is that we text more when using them than when using older, ordinary mobile phones.

Is there really a relationship between texting and the type of mobile phone?

Teenagers from wealthier households who own smartphones from the top five mobile phone manufacturers text slightly more frequently than teens from lower-income households who own low-end standard mobile phones (PewInternet, 2009).

The objective of this study was to determine if there is a relationship between texting and the type of mobile phone.

This study did not consider the authenticity of the interviewees’ mobile phones; it did not matter whether they were imitations or not.

The Grade 9 and fourth-year students of Saint Augustine’s School, school year 2014-2015, were the ones interviewed.

Mobile Phones

Smartphones

A smartphone is a mobile phone that works like a personal computer and has an independent operating system. Users can install software and games provided by third-party service providers in order to extend the functions of the phone, and it can connect to the mobile Internet through a mobile communication network (Kumar, 2012).

Texting

Frequency

The volume of texting among teens has risen from 50 texts a day in 2009 to 60 texts for the median teen text user. Older teens, boys, and blacks are leading the increase. Texting is the dominant daily mode of communication between teens and all those with whom they communicate (Lenhart, 2012).

Teen texting

The Pew Internet survey shows that the heaviest texters are also the heaviest talkers. The heaviest texters (those who exchange more than 100 texts a day) are much more likely than lighter texters to say that they talk on their cell phone daily. Some 69% of heavy texters talk daily on their cell phones, compared with 46% of medium texters (those exchanging 21-100 texts a day) and 43% of light texters (those exchanging 0-20 texts a day) (Lenhart, 2012).

The null hypothesis was that there is no significant effect of the type of mobile phone on the frequency of texting.

The alternative hypothesis was that there is a significant effect of the type of mobile phone on the frequency of texting.

Methodology
Participants

The participants of this investigatory project were the 243 out of 276 Junior and Senior students of Saint Augustine’s School (SAS) who have mobile phones and who answered the questionnaire. The materials used were a computer with Internet access, from which the articles, journals and data regarding the study were taken, 24 copies of the questionnaire, and the gathered facts about texting and mobile phones.

Procedure

The 24 copies of the questionnaire were distributed to every column of each classroom of the Juniors and Seniors on November 24, 2014. Through the questionnaire, the researchers asked for the total number of students who have smartphones and those who have regular phones. They were asked how many times they text daily: 1-5 times, 6-10 times, 11-15 times or 16-20 times. The results of the survey were summarized in a 2×4 table but later simplified to a 2×2 table, because those who text 1-5 and 6-10 times a day were combined, as were those who text 11-15 and 16-20 times, in order to make the solution to the problem less complicated (see the illustrative sketch below).
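To make the collapsing step concrete, the following is a minimal sketch in Python. The per-bin counts are invented purely for illustration, since only the collapsed 2×2 totals (67, 115, 24, 37) are reported in the appendix; the point is simply the merging of adjacent frequency bins.

```python
# Illustrative sketch of collapsing the 2x4 survey table into the 2x2
# table used for the chi-square test. The per-bin counts below are
# hypothetical; only the collapsed values (67, 115, 24, 37) come from
# the study's raw data.
import numpy as np

# rows: smartphone, regular phone
# columns: 1-5, 6-10, 11-15, 16-20 texts per day
counts_2x4 = np.array([
    [30, 37, 60, 55],   # hypothetical split of 67 and 115
    [10, 14, 20, 17],   # hypothetical split of 24 and 37
])

# Merge adjacent bins: (1-5 with 6-10) and (11-15 with 16-20).
counts_2x2 = counts_2x4.reshape(2, 2, 2).sum(axis=2)
print(counts_2x2)   # [[ 67 115]
                    #  [ 24  37]]
```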

Data Analysis

A chi-square test of independence was performed to test the null hypothesis of no association between type of mobile phone and frequency of texting.

Results

The P-value, 0.25, which was greater than the 0.05 significance level, provided no evidence that the frequency of texting depends on the type of mobile phone. Thus, the researchers retained the null hypothesis, and it was proper to conclude that the type of mobile phone, smartphone or regular phone, has no significant effect on the frequency of texting.

Discussions

All the textual data were based on online articles, which were borrowed, read, analyzed, and summarized. The numerical data, which were gathered through the questionnaire, were summed up in a 2×2 table for a more concise and comprehensible presentation. They were originally summarized in a 2×4 table, but to make it easier and faster to arrive at the answer, the researchers chose to combine those who text 1-5 and 6-10 times a day, and those who text 11-15 and 16-20 times, in both the smartphone and regular phone rows. Approximately 12% of the respondents did not answer the questionnaire, whether intentionally or unintentionally, but this did not stop the researchers from proceeding to the next step.

Using the numerical data and the Chi-Square Test of Independence as the statistical tool, the researchers computed the degrees of freedom (DF), expected frequencies (E(r,c)) and test statistic (χ²). E(r,c) and χ² were rounded off to the nearest hundredth.

The researchers used the chi-square distribution table to find the P-value, which turned out to be 0.25. The null hypothesis, stating that the type of mobile phone has no significant effect on the frequency of texting, was retained because the P-value was far higher than the 0.05 significance level.

Appendices
Raw Data

Mobile Phone Type    1-10 times    11-20 times    Row Total
Smartphone               67            115            182
Regular phone            24             37             61
Column Total             91            152            243

(Cell entries are the numbers of student texters.)

*Students who have mobile phone: 243

*Students who did not answer: 33

*Total population: 276

B. Statistical Computations

Using the numerical data, the researchers computed the degrees of freedom, the expected frequencies, the test statistic, and the approximate P-value associated with the test statistic and degrees of freedom.

Degrees of Freedom

DF = (r – 1) * (c – 1)

where r is the number of levels for one categorical variable, and c is the number of levels for the other categorical variable.

DF = (r – 1) * (c – 1) = (2 – 1) * (2 – 1) = 1

Expected Frequencies

E(r,c) = (n_r * n_c) / n

where E(r,c) is the expected frequency count for level r of Variable A and level c of Variable B, n_r is the total number of sample observations at level r of Variable A, n_c is the total number of sample observations at level c of Variable B, and n is the total sample size.

E(1,1) = (182 * 91) / 243 = 68.16        E(1,2) = (182 * 152) / 243 = 113.84
E(2,1) = (61 * 91) / 243 = 22.84         E(2,2) = (61 * 152) / 243 = 38.16

Test Statistic

χ² = Σ [ (O(r,c) – E(r,c))² / E(r,c) ]

where O(r,c) is the observed frequency count at level r of Variable A and level c of Variable B, and E(r,c) is the expected frequency count at level r of Variable A and level c of Variable B.

χ² = (67 – 68.16)²/68.16 + (115 – 113.84)²/113.84 + (24 – 22.84)²/22.84 + (37 – 38.16)²/38.16
   = 0.10 + 0.01 + 0.06 + 0.34
   = 0.51

P-value

Using the Chi-Square Distribution Table

In the row for DF = 1, the first value greater than the test statistic, reading to the right, was 1.32; the P-value at the top of that column was 0.25.
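For readers who want to reproduce the computation in software, the following is a minimal sketch in Python using scipy's chi-square test of independence on the 2×2 table from the raw data in Appendix A. Because scipy works with unrounded expected frequencies, the values it returns may not match the rounded hand computation above exactly.

```python
# Minimal sketch: chi-square test of independence on the 2x2 table
# reported in Appendix A (rows: smartphone, regular phone;
# columns: 1-10 and 11-20 texts per day).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [67, 115],   # smartphone
    [24,  37],   # regular phone
])

# correction=False uses the plain Pearson formula shown above; by
# default scipy applies Yates' continuity correction to 2x2 tables.
chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)

print("degrees of freedom:", dof)        # (2 - 1) * (2 - 1) = 1
print("expected frequencies:\n", expected)
print("chi-square statistic:", chi2)
print("p-value:", p_value)

# The null hypothesis of independence is retained whenever the
# p-value exceeds the 0.05 significance level.
```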

C. Questionnaire

To all the Juniors and Seniors,

This questionnaire is very much needed for the completion of our 3rd Grading Investigatory project. We ask for your active participation and honesty in answering the given questions. Thank you!

Yours Truly,

Group 2 of IV-2

Year and Section:

How many are you in your classroom?

Per column:

1. How many are you in your column?

2. Who are the students who own smartphones and non-smartphones?

For number 2, follow the format below.

Students with smartphones (phones with access to the internet, camera, etc.)

Name             1-5 times    6-10 times    11-15 times    16-20 times
Ex. Marielle

Students with regular phones (phones intended for messaging and calling, without access to the internet and without downloadable applications)

Name             1-5 times    6-10 times    11-15 times    16-20 times
Ex. Marielle

References

Central Intelligence Agency (2011). The world factbook. Retrieved September 14, 2014, from https://www.cia.gov/library/publications/the-world-factbook/index.html

Chartered Institute of Personnel and Development. Pestle analysis. Retrieved from http://www.cipd.co.uk/hr-resources/factsheets/pestle-analysis.aspx

Kumar, Dinesh (2012, March). An empirical study of brand preference of mobile phones among college and university students.

Lenhart, Amanda (2012). Teens, smartphones & texting. Retrieved from http://www.pewinternet.org/2012/03/19/teens-smartphones-texting/

Mika Husso (2011). Analysis of competition in the mobile phone markets of the United States and Europe. http://epub.lib.aalto/ethesis/pdf/12638/hse_ethesis_12638.pdf.fi/en

Nurullah, A.S. (2009). The cell phone as an agent of social change. Retrieved from http://ualberta.academia.edu/AbuSadatNurullah/Papers/109273/The_Cell_Phone_as_an_Agent_of_Social_Change

Sharma, S., Gopal, V., Sharama, R., & Sharma, N. (Eds.) (2012). Study on mobile phones brand pattern among the college students of Delhi-NCR. Retrieved from http://www.slideshare.net/monikakumari1971/a-study-on-mobile-phones-brand switching-pattern-among-the-college-students-of-delhincr-33612332631pb

The Carphone Warehouse (2006). The mobile life youth report 2006: The impact of the mobile phone on the lives of young people. Retrieved from http://www.mobilelife2006.co.uk/

Effect of Microsoft’s Monopolistic Approach

The Effect of Microsoft’s Monopolistic Approach to Software Bundling on Innovation and Competition.
Contents

Chapter 1 – Introduction

Chapter 2 – Literature Review

2.1 Monopolist or Fierce Competitor

2.2 Bundling, Innovative or Stifling Competition

2.2.1 Bundling Examples in Other Industries

2.3 The Case Against Microsoft

Chapter 3 – Analysis

3.1 Bundling, Competitive or Market Restrictive?

3.2 Strategies to Gain Market Share

3.3 Microsoft and The European Union

Chapter 4 – Conclusion

Bibliography

Chapter 1 – Introduction

When mentioning Microsoft, one’s thoughts naturally turn to computers, as the two are inextricably tied together. And while each needs the other, software was the later development in this marriage of needs. Computers are based upon digits, which serve as the foundation for their computations (Berdayes, 2000, p. 76). A digit is a “… numeral … that represents an integer …” and includes “… any one of the decimal characters ‘0’ through ‘9’ …” as well as “… either of the binary characters ‘0’ or ‘1’ …” (Atis, 2005). Computers use digits under the ‘base-2 number system’, also termed the ‘binary number system’ (Berdayes, 2000, p. 3). The base-2 system is used in computers because it is easier to implement with present-day technology; a base-10 system could be used, but its cost in terms of technology would make computers prohibitively expensive (Berdayes, 2000, pp. 53-56). Because binary rather than decimal digits are used, bits have only two values, ‘0’ and ‘1’ (Barfield and Caudell, 2001, pp. 344, 368). The preceding is important in understanding the relationship of numbers to computers, as well as Microsoft’s later entrance into this world. The following provides a visual understanding of how this works:

Table 1 – Decimal Numbers in the Binary System

(Swarthmore University, 2005)

Decimal Number    Binary Number
      0        =        0
      1        =        1
      2        =        10
      3        =        11
      4        =        100
      5        =        101
      6        =        110
      7        =        111
      8        =        1000
      9        =        1001
     10        =        1010
     11        =        1011
     12        =        1100
     13        =        1101
     14        =        1110
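As an informal check on Table 1, the short Python sketch below converts each decimal value to its binary form with the built-in format() function and back again with int(); it is only an illustration of the base-2 idea described above, not part of the original text.

```python
# Reproduce the decimal-to-binary mapping of Table 1 and verify that
# the conversion round-trips.
for n in range(15):
    binary = format(n, "b")        # e.g. 5 -> "101"
    assert int(binary, 2) == n     # converting back gives the original value
    print(f"{n:>2} = {binary}")
```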

In computers, bits are grouped into 8-bit bytes; the table below shows how two such bytes (16 bits) represent decimal values:

Table 2 – 8 Bit Bytes

(Barfield and Caudell, 2001. pp. 50-54)

Decimal Number         Bytes
      0        =       0000000000000000
      1        =       0000000000000001
      2        =       0000000000000010
  65534        =       1111111111111110
  65535        =       1111111111111111
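To make the two-byte layout of Table 2 concrete, the following Python sketch splits each value into its high and low bytes; the byte split is an illustrative addition, not something stated in the original table.

```python
# Show the 16-bit patterns of Table 2 as a high byte and a low byte.
for n in [0, 1, 2, 65534, 65535]:
    high, low = divmod(n, 256)    # high byte, low byte
    print(f"{n:>5} = {n:016b}  (high byte {high:08b}, low byte {low:08b})")
```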

The earliest computer has been traced back to the ‘abax’, the Greek word for a ‘calculating board’ or ‘calculating table’. Invented in China and known as the abacus, it was also used in ancient Greece, the Roman Empire, Russia and Japan, and is still in use by the blind (qi-journal.com, 2005). Operating much as the bits and bytes in a modern computer do, the abacus has vertical rows of beads that represent multiples of ten: 1, 10, 100, 1,000 and so forth (qi-journal.com, 2005). The basic principle of the abacus is much the same as that of the modern computer: numerical representation. The first generations of modern computers were huge in comparison with today’s small, powerful and fast machines, and needed air-conditioned rooms to dissipate the heat. Programming the first commercial computer, the UNIVAC of 1951, meant writing the mathematical equations that drove its related mechanisms in order for it to work on problems (hagar.up.ac.za, 2006). It would take another six years for the first personal computer to be developed, the IBM 610 Auto-Point, which was termed a ‘personal computer’ because it took only one individual to operate it; however, its 1957 price of $55,000 translates into well over $100,000 in today’s value (maximon.com, 2006).

The year 1975 saw the introduction of the Altair 8800, which sold for $439 with 256 bytes of RAM; it was also the year in which Bill Gates, along with Paul Allen, founded Microsoft (maximon.com, 2006). The Altair’s maker was seeking a computer language, which Gates and Allen delivered in a program called BASIC on 23 July 1975, giving the company “… exclusive worldwide rights to … for 10 years” (Rich, 2003, p. 34). Sold as an add-on with the Altair 8800 for $75, this provided the revenue underpinnings for Microsoft (Rich, 2003, p. 35). Generating just $381,715 in 1977, Microsoft was upstaged by Apple Computer, which made machines as well as its own operating system (Rich, 2003, p. 36). Apple’s success caught the attention of IBM, which was not yet in the personal computer market; this was the means by which Gates entered the picture with IBM, on the basis of DOS, a program Microsoft secured from Seattle Computer for just $50,000, and it heralded the beginnings of the industry giant (Rich, 2003, p. 51). Microsoft’s MS-DOS provided the foundation for the company’s early financial strength, which would enable it to develop Windows 95 and successive versions leading to Vista in 2007. Along the way, Microsoft has been accused, rightly or wrongly, of a monopolistic approach to software bundling that has stifled competition and innovation. This paper will examine this claim, its effects, how the situation arose, and its ramifications.

Chapter 2 – Literature Review
2.1 Monopolist or Fierce Competitor

In “Trust on Trial: How the Microsoft Case is Reframing the Rules of Competition”, Richard McKenzie (2000, p. 1) reflects that over the last 25 years Microsoft has become “… the world’s premier software company, dominating many of the markets it has entered and developed…” and now also finds itself “…under legal assault …” for monopolist behaviour. McKenzie (2000, p. 2) indicates that in the United States “…it’s the Justice Department against Microsoft, but behind the courtroom scenes there has been a good deal of political maneuvering by other major American corporate high-tech combatants – Sun Microsystems, Oracle, Netscape, IBM, and America Online, to name just a few – who would like nothing better than to see their market rival, Microsoft, get its comeuppance in the court of law”. In this instance the “…efficacy of antitrust law enforcement has been on trial”, as the Microsoft case represents “…the first large-scale antitrust proceedings of the digital age” (McKenzie, 2000, p. 2). McKenzie (2000, p. x) reflects upon the government case against Microsoft as a monopolist, asking whether the fact that its operating system comes “ … preloaded on at least nine of every ten computers containing Intel microprocessors sold in the country, if not the world” is what made the company a monopolist.

Microsoft’s market dominance, in the form of an operating system preloaded on over 90% of the computers sold, was summed up by the former United States Republican presidential candidate Robert Dole, who stated: “Microsoft’s goal appears to be to extend the monopoly it has enjoyed in the PC operating system marketplace to the Internet as a whole, and to control the direction of innovation.” (McKenzie, 2000, p. 28). This view was repeated by the media as well as by New York Attorney General Dennis Vacco, who saw that Microsoft’s “…product development strategies are evidence of monopoly power: …” in that the “ … Windows operating system has become almost the sole entry point to cyberspace” (McKenzie, 2000, p. 29). It is without question that the dominance Microsoft derives from preloaded operating software gives it an advantage in introducing other forms of software. But is that simply good business practice or predatory behaviour? For consideration, McKenzie (2000, p. 47) points to Judge Bork’s book “The Antitrust Paradox”, which states repeatedly that “… antitrust should not interfere with any firm size created by internal growth …”. Like it or not, internal growth is how Microsoft reached the position it now enjoys. But amid all the rhetoric there is another facet to Microsoft’s dominance: the PC manufacturers themselves. As stated by the manufacturers, there simply is no other choice (McKenzie, 2000, p. 29).

Eric Browning, the chief executive of PC manufacturer Micron, has said: “I am not aware of any other non-Microsoft operating system product to which Micron could or would turn as a substitute for Windows 95 at this time” (McKenzie, 2000, p. 30). This sentiment was echoed by John Romano, an executive at Hewlett-Packard, who advised “… we don’t have a choice …” (McKenzie, 2000, p. 30). The tie between monopoly power and market dominance has been explained by Franklin Fisher, the chief economist for the Justice Department, as follows: “Monopoly power is a substantial degree of market power,” or the ability of a firm “(a) to charge a price significantly in excess of competitive levels and (b) to do so over a significant period of time” (McKenzie, 2000, p. 30). Fisher further asserts that Microsoft’s dominance in the market “… is protected by “barriers to entry” in the form of “economies of scale in production,” “network effects,” and “switching costs …” (McKenzie, 2000, p. 30). Fisher adds that “There are no reasonable substitutes for Microsoft’s Windows operating system for Intel-compatible desktop PCs. Operating systems for non-Intel-compatible computers are not a reasonable substitute for Microsoft’s Windows operating system” because there would be high costs in switching to non-Intel-compatible computers such as the Mac and Unix machines (McKenzie, 2000, p. 30).

However, Microsoft’s alleged monopolistic tendencies have not resulted in the company charging higher prices on the strength of its dominant position. The chief economic consultant for the state attorneys general, Warren-Boulton, argued that “…the absence of viable competitors in Intel-compatible operating systems means that Microsoft doesn’t have to worry about raising its price or using its economic weight in other ways …” (McKenzie, 2000, p. 30), and that “ … a monopolist would continue to raise its price so long as its profits rose. …” (McKenzie, 2000, p. 31). This is something Microsoft has not done, and it is inconsistent with the manner in which monopolists behave. The line of reasoning is that “…the cost of the operating system represents on average 2.5 percent of the price of personal computers (and at most 10 percent for very inexpensive personal computers), so “even a 10 percent increase in the price of the OS [operating system] would result at most in a 1 percent increase in the price of even inexpensive PCs …” (McKenzie, 2000, p. 31). Warren-Boulton thus concludes “…that Microsoft’s price for Windows is very likely far below the monopoly price …”, a conclusion resting on “…the so-called “coefficient” of the price elasticity of demand facing any firm (the ratio of the percentage change in the quantity to the percentage change in the price) …” (McKenzie, 2000, p. 31).
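To make the arithmetic of this pass-through argument concrete, the following sketch is illustrative only: the percentage figures are those quoted above from McKenzie, while the dollar amount is a hypothetical example.

```python
# Illustrative arithmetic for the pass-through argument quoted from McKenzie (2000, p. 31).
# The shares below are those cited in the text; the dollar figure is a hypothetical example.

pc_price = 1000.0            # hypothetical PC price in dollars
os_share = 0.025             # OS is ~2.5% of the PC price on average (up to 10% for cheap PCs)
os_price = pc_price * os_share

os_increase = 0.10           # a 10% increase in the OS price
new_pc_price = pc_price + os_price * os_increase

pc_increase = (new_pc_price - pc_price) / pc_price
print(f"OS price: ${os_price:.2f}")
print(f"PC price rises by {pc_increase:.2%}")   # ~0.25%, i.e. at most ~1% even for cheap PCs
```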

Therefore, argues McKenzie (2000, p. 32), a monopolist would not price its product in the very low range “…because a very low elasticity implies that a price increase will increase profits …”; thus the government’s case contains opposing views of Microsoft’s monopoly position, a telling facet when considering the overall implications for the company. The foregoing directly contradicts the claim by Franklin Fisher, the chief economist for the Justice Department, that Microsoft earns “ … superhigh profits …”, a claim its low prices do not support (McKenzie, 2000, p. 32). Thus, for a so-called monopolist, Microsoft’s pricing policies do not reflect the behaviour of one. The complicated market, competitive, product and business realities facing Microsoft must also be viewed as the company taking actions to protect its position through new product introductions, as well as by making it difficult for competitors to gain an edge, which is the manner in which all firms operate if they intend to remain in business and continue as market leaders. The fact that Microsoft provides its Internet browser free along with its operating system serves the interest of customers, in that the feature is already available when they purchase their computers. It also represents a competitive action that limits other browsers from gaining an edge in the market.

McKenzie (2000, p. 32) aptly points out that “ … Any firm that is dominant in a software market isn’t likely to want to give up its dominance, especially if there are substantial economies of scale in production and network effects in demand …”, something both Fisher and Warren-Boulton indicate is true of the software industry. McKenzie (2000, p. 32) adds that if Microsoft were to start losing market share for its operating system “…it could anticipate problems in keeping its applications network intact, which could mean its market share could spiral downward as a new market entrant makes sales and those sales lead to more and more applications being written for the new operating system …”. The flaw in the monopolist argument, as pointed out by McKenzie (2000, p. 34), is that even if a company had a 100% share of the market “…it must price and develop its product as though it actually had market rivals because the firm has to fear the entry of potential competitors …”. To make his point, McKenzie (2000, p. 34) refers to the classic microeconomics textbooks, which teach that a monopolist is a ‘single producer’ “…that is capable of restricting output, raising its prices above competitive levels, and imposing its will on buyers …”; on the U.S. Justice Department’s reasoning, Microsoft’s high (90%) market share makes it a near or almost monopoly, which McKenzie (2000, p. 34) aptly likens to being almost pregnant: you either are or you aren’t.

To illustrate his point, McKenzie (2000, p. 34) points to a company called Signature Software, which at the time had “…100 percent of the market for a program that allows computer users to type their letters and e-mails in a font that is derived from their own handwriting”. He adds that despite being the sole producer in that market, the company “…prices its software very modestly, simply because the program can be duplicated with relative ease.” McKenzie (2000, p. 34) also points out that Netscape at one time almost completely dominated the browser market, yet did not exploit its advantage by pricing in monopolist fashion. In protecting its position, Microsoft developed and introduced new products, something any other firm had the opportunity to do, yet such innovation did not happen elsewhere. McKenzie (2000, p. 137) asserts that Microsoft’s aggressive development of new products was in defense of its market position as well as being good marketing and customer-satisfaction practice. He points to the following innovations by Microsoft that helped to cement its market dominance and stave off competitive inroads, all of which could have been created by other firms (McKenzie, 2000, p. 137):

1975 – Microsoft develops BASIC, the first programming language written for the PC; a feat that could have been accomplished by another firm had it innovated and secured the initial contract with Altair for the 8800.

1983 – Microsoft develops the first mouse-based PC word processing program, Word.

1985 – The company develops the first PC-based word processing system to support the use of a laser printer.

1987 – Microsoft’s Windows/386 becomes the first operating system to utilize the new Intel 32-bit 80386 processor.

1987 – Microsoft introduces Excel, the first spreadsheet designed for Windows.

1989 – Word becomes the first word processing system to offer tables.

1989 – Microsoft Office becomes the first business productivity application offering a full suite of office tools.

1991 – Word becomes the first productivity program to incorporate multimedia into its operation.

1991 – Word version 2.0 becomes the first word processing program to provide drag-and-drop capability.

1995 – Internet Explorer becomes the first browser to support multimedia and 3D graphics.

1996 – Microsoft’s Intellimouse is the first pointing device to utilize a wheel to aid navigation.

1996 – Microsoft introduces Picture It, the first program to permit consumers to create, enhance and share photo-quality images on their PCs.

1997 – DirectX becomes the first multimedia architecture to integrate Internet-ready services.

1998 – Microsoft’s WebTV, in conjunction with the hit television show Baywatch, becomes the first internationally syndicated Internet-enhanced season finale.

1999 – Windows 2000, the successor to the Windows NT line, adds the following firsts for a PC operating system:

Text-to-speech engine,
Reliable multicast protocol algorithms,
Improvements in the performance registry,
Inclusion of DirectX,
Vision-based user interfaces,
Handwriting recognition,

and a number of other innovations to enhance its operating system and maintain as well as increase its market position.

The preceding represents examples of innovation spurred by Microsoft that could have been introduced first by its competitors in the various fields, but were not. Thus Microsoft in these instances, as well as in others, introduced consumer-enhancing innovations that furthered its market dominance through aggressive new product development, a path that was open to others as well.

2.2 Bundling, Innovative or Stifling Competition

Rosenbaum’s (1998) book “Market Dominance: How Firms Gain, Hold, or Lose it and the Impact on Economic Performance” provides a perspective on the means by which companies gain as well as lose market share, and the tactics they employ to best their competition. Few people remember that when Microsoft introduced Word and Excel, the dominant software programs for word processing and spreadsheets were Lotus 1-2-3 and WordPerfect (Rosenbaum, 1998, p. 168). In fact, WordPerfect was the application found in all businesses, period (Rosenbaum, 1998, p. 168). Each of the preceding applications cost approximately $300, which Microsoft bested by selling its Office suite for $250. By providing limited-use Word programs in Windows, consumers had the chance to test Word before buying it (Rosenbaum, 1998, p. 168). More importantly, Microsoft’s spreadsheet, word processing and presentation programs were simply better and easier to use than the competition. By innovatively offering a free limited version of Word with the operating system, Microsoft induced trial, which it then had to follow up with a better product.

In looking at competitive practices and competition analysis, there is a relationship between the structure of the market and innovation, to which Hope (2000, p. 35) poses the question of “…whether monopoly is more conducive to innovation than competition …”. Hope (2000, p. 35) indicates that there is no “…clear-cut answer, probably because there is none …”. Hope (2000, p. 35) puts forth the view that “…Most economists, and virtually all designers of competition policy, take market structure as their starting point – as something which is somehow, almost exogenously, given (although it may be affected by competition policy), and which produces results in terms of costs, prices, innovations, etc …”. However, Hope (2000, p. 35) tells us that this is wrong on the basis of elementary microeconomics, as “…Market structure is inherently endogenous… (and is) … determined by the behaviour of existing firms and by entry of new ones, simultaneously with costs, prices, product ranges, and investments in R&D and marketing”. Exogenous variables, if they in fact exist in a particular situation, represent fundamentals such as “…production processes, entry conditions, the initial preferences of the consumers, variables determined in other markets, and government policy …” (Hope, 2000, p. 35). As a result, Hope (2000, p. 35) advises that the question of whether “…there will be more innovation with monopoly than with competition is no more meaningful than to ask whether price-cost margins will be higher if costs are high than if they are low …”.

2.2.1 Bundling Examples in Other Industries

Aron and Wildman (1999, p. 2) draw an analogy between Microsoft’s bundling methodology and cable television, whereby a broadcaster who owns a “… marquee channel can preclude competition in thematic channels (such as comedy or science fiction channels) by bundling their own thematic channels with the …” marquee channel. This illustrates the idea that consumers tend to value channels such as HBO, Cinemax and Showtime, whose reputations lead consumers to consider the other programme platforms they offer. These channels advertise their other channels on their marquee stations and vice versa, offering bundles of channels at reduced prices to encourage purchase. Aron and Wildman (1999, p. 2) offer the logic that “…a provider that attempts to compete by offering a thematic channel on a stand-alone basis, without an anchor channel, would not be able to survive the competitive pressure of a rival with an anchor.” The argument that having a marquee, or anchor, channel is key to the viability of broadcasters is supported by the development of pay television in the United Kingdom (Aron and Wildman, 1999, p. 2). The dominant pay television supplier is BSkyB, which controls “…most of the critical programming rights in Britain, enabling it to use bundled pricing to execute a price squeeze against rivals …”; the lesson of which, as in the case of Microsoft, is that a firm that monopolizes one product (here, an anchor channel) can effectively leverage that monopoly to preclude competition in another product market by using bundled pricing (Aron and Wildman, 1999, p. 2).

Aron and Wildman (1999, p. 3) provide another example of how firms utilize bundling to inhibit their competition: the case of the Abbott and Ortho laboratories, which produce blood-screening tests used to check donated blood for viruses. Abbott produced all five of the tests used to check for viruses, whereas Ortho produced only three; Abbott therefore bundled the five tests in a manner with which Ortho was unable to compete, effectively making Abbott a monopolist (Aron and Wildman, 1999, p. 3). Was this simply good business practice that enabled Abbott to increase its market share at the expense of a company that had not innovated by producing all five tests? Ortho claimed that since “…Abbott was effectively a monopolist in two of the tests”, Abbott “could and did use a bundled pricing strategy to leverage its monopoly into the other non-monopolized tests and preclude competition there” (Aron and Wildman, 1999, p. 3).

The preceding examples show “…that a monopolist can preclude competition using a bundled pricing strategy …” (Aron and Wildman, 1999, p. 3), and that in so doing it can do this without charging prices in excess of what is reasonable for its customers; this makes sound business sense, in that capturing the market removes the need for such pricing and also prevents competitors from re-entering the market at lower prices. It is thus rational for a monopolist to behave as if competitors exist, which in fact they will if it provides such an opportunity through increased pricing. The examples show that “ … it is indeed possible in equilibrium for a provider who monopolizes one product (or set of products) to profitably execute a fatal price squeeze against a rival in another product by using a bundled pricing strategy” (Aron and Wildman, 1999, p. 3).
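A simple numerical sketch can make the bundled price squeeze concrete. The figures below are entirely hypothetical (they do not come from Aron and Wildman); they only illustrate the mechanism by which bundling an anchor product with a contested one can make stand-alone entry unprofitable:

```python
# Hypothetical illustration of a bundled price squeeze (numbers are invented for exposition).
# A monopolist sells product A (anchor, no rival) and product B (contested by an entrant).

value_A = 100.0        # consumers' willingness to pay for the anchor product A
value_B = 40.0         # willingness to pay for product B (either seller's version)
entrant_cost_B = 25.0  # entrant's unit cost for its version of B

# The monopolist prices the bundle A+B just below the combined willingness to pay,
# while quoting a high stand-alone price for A.
bundle_price = 135.0
standalone_A = 130.0

# A consumer buying A alone plus the entrant's B pays standalone_A + entrant's price.
# For the entrant to win the sale: standalone_A + p_entrant <= bundle_price.
max_entrant_price = bundle_price - standalone_A
print(f"Entrant can charge at most {max_entrant_price:.2f} for B")                 # 5.00
print(f"Entrant margin at that price: {max_entrant_price - entrant_cost_B:.2f}")   # -20.00, a loss

# The entrant cannot cover its cost, so it exits; yet the bundle still leaves consumers
# with surplus (value_A + value_B - bundle_price = 5), so the squeeze does not require
# charging more than consumers are willing to pay.
```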

2.3 The Case Against Microsoft

Spinello (2002, p. 83), in his work “Regulating Cyberspace: The Policies and Technologies of Control”, informs us that there are four distinct aspects to the United States government’s case, which is based upon violations of the Sherman Act, as follows:

The company’s monopolization of the PC operating systems market was achieved via anticompetitive means, specifically in the utilization of its browser, in violation of “Section 2 of the Sherman Act, which declares that it is unlawful for a person or firm to “monopolize…any part of the trade or commerce among the several States, or with foreign nations” (Spinello, 2002, p. 83).
That Microsoft engaged in “…Unlawful exclusive dealing arrangements in violation of Sections 1 and 2 of the Sherman Act (this category includes Microsoft’s exclusive deal with America Online)” (Spinello, 2002, p. 83).
That Microsoft, in its attempt to maintain its competitive edge in browser software, “…attempted to illegally amass monopoly power in the browser market) in violation of Section 2 of the Sherman Act …” (Spinello, 2002, p. 83).
And that the bundling of its browser with the operating system was in violation of “…Section 1 of the Sherman Act (Section 1 of this act prohibits contracts, combinations, and conspiracies in restraint of trade, and this includes tying arrangements) …” (Spinello, 2002, p. 83).

Spinello (2002, p. 89) provides an analysis of the Department of Justice’s case against the company using the distinct example of Netscape. He contends that consumer choice was never inhibited by Microsoft, and that Netscape’s own practices contributed to the decline in popularity of its browser.

Chapter 3 – Analysis
3.1 Bundling, Competitive or Market Restrictive?

The Concise Dictionary of Business Management (Statt, 1999, p. 109) defines a monopoly as “A situation in which a market is under the control or domination of a single organization …”. The Dictionary continues that “This condition is generally considered to be met at one-quarter to one-third of the market in question … (and that) … A monopoly is contrary to the ideal of the free market and is therefore subject to legal sanctions in all industrialized countries with a capitalist or mixed economy”. In addressing this facet of the Microsoft case, McKenzie (2000, p. 27) elaborates that Microsoft’s position as a ‘single seller’ in the market represents “… latent, if not kinetic, monopoly power”, and that in the opinion of the judge presiding over the case the company is “…illegally exploiting its market power in various ways to its own advantage and to the detriment of existing and potential market rivals and, more important, consumers”. This goes to the heart of the assertion that Microsoft’s monopolist approach stifles competition and innovation, since its bundling practices effectively prevent software such as Netscape’s from becoming an option, because the Internet Explorer browser comes preloaded with the Windows and Vista operating software. This view was publicly asserted by the United States Attorney General at the time, Janet Reno, at a 1997 press conference, where she stated on behalf of the Justice Department that “Microsoft is unlawfully taking advantage of its Windows monopoly to protect and extend that monopoly” (McKenzie, 2000, p. 27).

Gillett and Vogelsang (1999, p. xiv), in “Competition, Regulation, and Convergence: Current Trends in Telecommunications Policy Research”, advise that “…Bundling is a contentious element of software competition that has been at the heart of the Microsoft antitrust litigation”, and it is an integral aspect of examining how and whether Microsoft’s monopolistic approach to software bundling affects innovation and competition. They state that a monopolist, “ … through bundling, can profitably extend this monopoly to another product, for which it faces competition from a firm offering a superior product (in the sense that it would generate more surplus than the product offered by the monopolist)” (Gillett and Vogelsang, 1999, p. xiv). They continue that “…Bundling the two products turns out to be an equilibrium outcome that makes society in general and consumers in particular worse off than they would be with competition without bundling …”. Gillett and Vogelsang (1999, p. xiv) offer the view that “…bundling is likely to be welfare reducing and that unbundling would not be a suitable remedy …”.

Aron and Wildman (1999, p. 1) advise that through the use of bundling a company can exclude its rivals through combined pricing, thus successfully leveraging its monopoly power. They continue that the preceding represents part of an equilibrium strategy by which the monopolist can profitably exclude a rival from the contested market.

Digital image processing

Vision is the most dynamic of all our senses, since it provides us with a huge amount of information about our surroundings. It is not surprising that the ancient Chinese proverb “A picture is worth a thousand words” is still widely used. All this information is valuable for simple procedures (for example, planning our everyday activities), but also for more complex processes such as the development of our intelligence. At the level of social organization, images are also important as a means of transmitting information, and almost all of today’s media are based on our vision.

The huge amount of visual information and the need for its processing led scientists and engineers to research ways of storing and processing digital images using computers. This effort resulted in a new information engineering field called “Digital Image Processing and Analysis”. Although the field began to grow only some fifteen years ago, it has developed rapidly, especially in recent years, and is considered a science and technology with a promising future and much potential.

As the title indicates, digital image processing concentrates on digital images and their processing by a computer; both the input and the output of this process are digital images. Digital image processing can be used for various purposes: improving the quality of images, filtering noise introduced during transmission, compressing image information, and storing and digitally transmitting images. Digital image analysis, on the other hand, deals with the description and recognition of the content of an image. This description is usually symbolic; the input of digital image analysis is a digital image and the output is a symbolic description. Image analysis principally tries to mimic human vision, so an equivalent term that is often used is “Computer Vision”.

It has to be underlined that human vision is a complex neuro-physiological mechanism driven by upper-level knowledge (high-level vision). The characteristics of this mechanism are not fully known, and existing mathematical models are not yet adequately accurate. As a result, it is difficult to simulate high-level vision on a computer, and the methods used for image analysis in machine vision and in human vision differ significantly. Image analysis is easier in applications where the environment, the objects and the lighting conditions are fixed, which is usually the case for a production process in industry; the branch of computer vision used in industry is called “Robotic Vision”. The analysis is much more difficult in applications where the environment is unknown, where there is a large number of objects, or where the objects are unclear or difficult to separate (for example in biomedical applications or in outdoor, natural scenes). In such applications even experts find it difficult to recognize objects. For these reasons it is still difficult to build a general image analysis system, and most existing systems are designed for specialized applications.

OTHER RELATED RESEARCH AREAS

Digital image processing and analysis are related to various other scientific areas because of their subject of research. Recently there has been a tendency, at least in terms of applications, for digital image processing to become an interdisciplinary field. Some related research areas are:

Digital Signal Processing
Graphics
Pattern Recognition
Artificial Intelligence
Telecommunications and Media
Multimedia Systems

We will examine the relation of each of these areas to digital image processing and image analysis separately, since the way they are related is not always obvious.

Digital Image Processing Vs Digital Signal Processing

Every image can be described as a two-dimensional signal. Therefore, all the techniques of digital signal processing can be used for the analysis and processing of digital images; this area provides the theoretical and programming basis for image processing.
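As a minimal illustration of treating an image as a two-dimensional signal, the sketch below (assuming NumPy is available; the function name convolve2d_naive is an illustrative choice, not a library API) applies a small smoothing kernel to an image array by direct 2D convolution:

```python
import numpy as np

def convolve2d_naive(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct 2D convolution of a grayscale image with a small kernel (zero padding)."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant')
    flipped = kernel[::-1, ::-1]               # convolution flips the kernel
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Example: a 3x3 averaging (smoothing) filter applied to a small test image.
image = np.random.randint(0, 256, (8, 8)).astype(float)
kernel = np.ones((3, 3)) / 9.0
smoothed = convolve2d_naive(image, kernel)
print(smoothed.shape)   # (8, 8)
```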

Digital Image Processing Vs Graphics

Fundamentally, the subject of graphics is digital image synthesis: the input is a symbolic description and the output is a digital image. For this purpose a geometric model of the object to be displayed is created, together with a digital description of the lighting conditions, and the object’s appearance is then produced digitally for the assumed position of the camera.

Digital Image Processing Vs Pattern Recognition

Pattern recognition deals with the classification of an object into a class of models (pattern class): for example, recognizing whether a new object is a resistor, a capacitor or an integrated circuit. For this purpose an object has to be described using certain characteristics (features), mostly numbers (for example, diameter and area), and it can then be classified based on these characteristics.
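A minimal sketch of this idea, assuming NumPy and entirely made-up feature values, is a nearest-centroid classifier that assigns a new object to the component class whose average features it most resembles:

```python
import numpy as np

# Hypothetical training data: each object described by two features (diameter, area).
# The numbers are invented purely for illustration.
training = {
    "resistor":  np.array([[2.0, 6.0], [2.2, 6.5], [1.9, 5.8]]),
    "capacitor": np.array([[5.0, 20.0], [5.5, 22.0], [4.8, 19.0]]),
    "ic":        np.array([[10.0, 80.0], [11.0, 85.0], [9.5, 78.0]]),
}

# One centroid (mean feature vector) per class.
centroids = {label: samples.mean(axis=0) for label, samples in training.items()}

def classify(features: np.ndarray) -> str:
    """Assign the object to the class with the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

print(classify(np.array([5.1, 21.0])))   # -> 'capacitor'
```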

Digital Image Processing Vs Artificial Intelligence

Artificial intelligence and image understanding are areas where a symbolic representation of an image is converted to another more complex representation or a representation more easily comprehensible to humans. Usually, techniques for representation of human knowledge (knowledge representation) and reasoning (inference) are used for this purpose. The analysis of a “scene” requires higher cognitive processes and that is why it is also known as high-level vision. On the other hand, image processing is more related to the lower levels of vision, that take place in the human eye and optic nerve and as a result it is also known as low level vision.

Digital Image Processing Vs Telecommunications

The field of telecommunications is related to digital image transmission over telecommunication networks that also carry voice and data; the resulting networks are called Integrated Services Digital Networks (ISDN). A key problem in image transmission is the compression of the image content, since an uncompressed colour image requires about 750 Kbytes to describe. The construction of special algorithms for coding and decoding is also required. Digital image processing is also directly connected to HDTV (High-Definition TV), whose basic aims are the compression of the vast amount of information and the improvement of the quality of the received images.
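The 750 Kbyte figure can be checked with a little arithmetic; the sketch below assumes a 512×512 image with three 8-bit colour channels (the resolution is the one given in the image-capture discussion later in this text):

```python
# Back-of-the-envelope check of the "about 750 Kbytes" figure for an uncompressed colour image.
width, height = 512, 512          # assumed resolution (see the image-capture section)
channels = 3                      # R, G, B
bytes_per_channel = 1             # 8 bits per channel

size_bytes = width * height * channels * bytes_per_channel
print(size_bytes)                 # 786432 bytes
print(size_bytes / 1024)          # 768 Kbytes, i.e. roughly 750 Kbytes
```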

Digital Image Processing Vs New Generation Databases

The new generation of databases includes image, signal (voice) and data storage. In this field, digital image processing deals with image coding and analysis, and with finding smart ways of retrieving images.

DIFFERENT AREAS OF DIGITAL IMAGE PROCESSING

Digital image processing includes several areas that are closely related. Some of those areas are mentioned below:

Capture of the image
Digital Filtering of the image
Edge Detection
Region Segmentation
Shape Description
Texture Analysis
Motion Analysis
Stereoscopy

It is clearly not possible to describe all these areas in a short presentation; indeed, the literature is so extensive that several books would be needed to cover digital image processing adequately. Moreover, image processing is a cognitive area that makes extensive use of specialized mathematics, which makes it difficult to present to a general audience. For this reason the description of the areas given here is purely qualitative.

Capture of the Image

The first thing to describe is the mechanism for capturing images. The most classic means of capturing an image is a photographic camera and film. However, this technique is not very useful in the field of digital image processing, since the captured image cannot easily be processed by a computer. Electronic capture, on the other hand, is particularly interesting because the image can be digitized and then processed by a computer. For this reason conventional electronic video cameras are widely used. Electronic video cameras scan the image and produce an electrical signal as output; various camera technologies exist (for example Orthicon, Vidicon, CCD).

The electrical signal produced by the camera is then fed to a frame grabber. During digitization the analogue signal is converted to a digital signal using an A/D converter. The image is thus converted into a matrix of 256×256 or 512×512 points (pixels). Each point is typically represented by 8 bits, i.e. 256 levels of brightness. However, a common technique in some fields (e.g. robotics) is a binary representation of images that uses only 1 bit per position; this representation is used to save memory and gain speed in simple applications. In other cases, where the colour of an image is critical, colour cameras and three A/D converters are used, and the three primary RGB colours (red, green, blue) are stored with 3×8 bits per position. As a result, digital image processing has large memory requirements, even for black-and-white images.

The digitized image is stored as a file on the computer’s local disk. To view the image, it must be transferred to a special RAM memory (image memory) connected to a monitor. Such monitors may be black-and-white or colour (RGB); colour monitors are mostly used even in black-and-white applications because they can display “pseudocolours”. Finally, in any image processing program the image appears as a two-dimensional array of 256×256 or 512×512 elements, which is filled from the computer’s local disk or from the image memory in which the image is stored.
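The following sketch (NumPy assumed; the threshold value is an arbitrary illustrative choice) shows the memory implications of the representations just described and how an 8-bit grayscale image is reduced to the 1-bit binary form mentioned for robotics applications:

```python
import numpy as np

rows, cols = 512, 512

# Memory for the representations described above (ignoring file-format overhead).
gray_bytes  = rows * cols * 1        # 8 bits per pixel
rgb_bytes   = rows * cols * 3        # 3 x 8 bits per pixel
binary_bits = rows * cols            # 1 bit per pixel
print(gray_bytes, rgb_bytes, binary_bits // 8)   # 262144, 786432, 32768 bytes

# Reducing an 8-bit grayscale image to a binary image by thresholding.
gray = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)  # stand-in for a captured image
threshold = 128                                                  # arbitrary choice
binary = (gray >= threshold).astype(np.uint8)                    # values 0 or 1 only
print(binary.min(), binary.max())
```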

The process of capturing an image can cause the following distortions:

Blurring
Noise
Geometric Distortions

Therefore, before any application, correction of these distortions is essential. Geometric corrections are mostly needed where geometric information is important, e.g. in stereoscopy and topography. Blurring is reduced through the process of restoration, which is particularly important in applications where there is movement (e.g. a ‘scene’ of a road), because motion introduces blurring. In most cases filtering of the image is also very important in order to remove noise. This can be done with various linear or nonlinear filters; nonlinear filters are mostly used because they maintain the contrast of edges, which is very important for human vision. The overall image contrast can also be improved by special non-linear techniques (contrast enhancement).
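One widely used nonlinear, edge-preserving filter of the kind mentioned above is the median filter; the sketch below (NumPy assumed, window size chosen arbitrarily) replaces each pixel by the median of its neighbourhood:

```python
import numpy as np

def median_filter(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Nonlinear median filter: each output pixel is the median of its size x size neighbourhood.
    Unlike linear smoothing, this removes impulse noise while largely preserving edge contrast."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Example: remove salt-and-pepper noise from a small grayscale image.
img = np.full((16, 16), 100, dtype=np.uint8)
img[4, 4], img[10, 7] = 255, 0          # two impulse-noise pixels
clean = median_filter(img)
print(clean[4, 4], clean[10, 7])        # both restored to 100
```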

Edge Detection

Another important process in image analysis is the detection (tracing) of contours. Many techniques can be used for edge detection; their development was driven by the important information about objects, useful for identification, that is contained in contours. The dual problem of edge detection is the recognition of regions in an image, known as image segmentation. Usually the different regions of an image are coloured with “pseudocolours”.
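A classic example of such a technique is the Sobel operator, shown below as a minimal sketch (NumPy assumed); it estimates the horizontal and vertical intensity gradients and combines them into an edge-strength map:

```python
import numpy as np

# Sobel kernels estimating the horizontal (GX) and vertical (GY) intensity gradients.
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = GX.T

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Return an edge-strength map: the gradient magnitude at each interior pixel."""
    h, w = image.shape
    edges = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(GX * patch)
            gy = np.sum(GY * patch)
            edges[i, j] = np.hypot(gx, gy)
    return edges

# Example: a dark-to-bright step produces strong responses along the vertical boundary.
img = np.zeros((8, 8)); img[:, 4:] = 255.0
print(sobel_edges(img)[4])   # large values around columns 3-4, zero elsewhere
```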

Texture Analysis

In several industrial applications the recognition (or analysis) of the texture is very important. An example of the importance of texture recognition in industrial applications is its use in recognition of different fabrics, or recognition of flaws in a cloth.

Motion Analysis

Motion analysis is also a very important field of computer vision with many applications, e.g. traffic monitoring, automatic driving, recognition of moving objects, digital television, videoconferencing, telephony with image compression and broadcast animation. It should be noted that motion analysis has large memory requirements for storage and real-time processing; this can only be achieved through parallel image processing and the use of special VLSI chips.

Shape Description

Another area of computer vision which is particularly useful in pattern recognition is the description of shape (shape representation). A shape is described either by its border or by the area it covers. The border of a shape can be described in different ways, e.g. by Fourier descriptors or splines, while the area of a shape can be described by methods of mathematical morphology, decomposition into simple shapes, and so on. These methods are used either for the storage of a shape or for its identification.
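As a brief illustration of one of these boundary descriptions, the sketch below (NumPy assumed) computes simple Fourier descriptors of a closed contour by treating each boundary point as a complex number and taking the FFT; normalizing the coefficients gives a representation that is insensitive to scale:

```python
import numpy as np

def fourier_descriptors(contour_xy: np.ndarray, keep: int = 8) -> np.ndarray:
    """Compute scale-normalized Fourier descriptors of a closed contour.
    contour_xy is an (N, 2) array of boundary points in order along the contour."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # drop the DC term (position)
    coeffs = coeffs / np.abs(coeffs[1])            # normalize by the first harmonic (scale)
    return np.abs(coeffs[1:keep + 1])              # magnitudes are rotation/start-point invariant

# Example: an ellipse sampled along its boundary; scaling it does not change the descriptors.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ellipse = np.stack([2.0 * np.cos(t), np.sin(t)], axis=1)
print(np.allclose(fourier_descriptors(ellipse), fourier_descriptors(5.0 * ellipse)))  # True
```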

Stereoscopy

Many applications require the measurement of depth. In such cases stereoscopy with two cameras can be used. Stereoscopy is particularly useful in photogrammetry and in robot movement in three-dimensional space.
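A minimal sketch of how two cameras yield depth, assuming an idealized rectified stereo pair (the focal length, baseline and disparity values below are hypothetical), uses the standard triangulation relation depth = focal length × baseline / disparity:

```python
# Idealized depth-from-disparity calculation for a rectified stereo camera pair.
# All numbers are hypothetical and serve only to illustrate the relation Z = f * B / d.

focal_length_px = 800.0    # focal length expressed in pixels (assumed)
baseline_m = 0.12          # distance between the two cameras in metres (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Depth of a scene point whose image shifts by 'disparity_px' pixels between the two views."""
    return focal_length_px * baseline_m / disparity_px

for d in (4.0, 8.0, 16.0):
    print(f"disparity {d:4.1f} px -> depth {depth_from_disparity(d):5.2f} m")
# Larger disparity means the point is closer; this is the basis of stereoscopic depth measurement.
```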

Digital Convergence – Processing and Transferring Data

Introduction

Digital convergence (DC) is the proliferation of information in digitized form (bits) and the efficient flow of that information across the digital network; it covers the various ways in which digitized data are processed and transferred [1]. The knowledge economy is driven by DC, in which digital systems are embedded ubiquitously in business processes and help users to exchange information, store and access data, collaborate, communicate, learn and trade in real time. Digital information can also be accessed from, and stored in, remote locations, which supports workers who are mobile and/or located in distant places. DC is facilitated by the internet, access networks (such as 3G, 4G, wireless LAN and wireless broadband) and high network connectivity, leading to a surge in the virtualization of the computing and storage functions applied to digitized data [2]. The easy communication, information exchange and collaboration made possible over the global digital network with the aid of digital convergence has caused a surge in cloud computing, in which digitized data and the computational platform and infrastructure enabled by the digital platform are stored in the “cloud”, outside the walled premises of the organization, on a sharable platform [2].

Digital convergence is the current trend in pervasive computing, which follows the mantra of access to information anywhere, anytime. Gartner Research states that worldwide cloud services revenue enabled by digitized data was estimated to exceed $56.3 billion in 2009, a jump of 21.3 percent from the $46.4 billion spent on the cloud the previous year [3]. Furthermore, Gartner analysts predict that by 2013 cloud service revenue will reach $150.1 billion [4]. Hence digital convergence (DC) is an important paradigm in information technology. The theory of digital options suggests that IT indirectly supports agility by offering firms digital options [5], described as a set of IT-enabled capabilities in the form of digitized work processes and knowledge systems. This theory emphasizes that IT enhances the reach and richness of a firm’s knowledge, which is processed to help the firm improve its agility, i.e. its ability to sense and respond to environmental change.

The term ‘digital options’ denotes that a firm may apply its IT-enabled capabilities in the form of digitized work processes to emerging opportunities, or they may remain unused, depending on the dynamic capabilities of the firm [6]. In a dynamic environment competitive advantage is short-lived; hence firms continuously generate competitive actions to achieve a series of short-term competitive advantages, and firms with a greater number and variety of competitive actions achieve a stronger competitive position [7-9]. Attempts have been made to identify the factors that lead to competitiveness, but there has so far been no formal empirical study that investigates the link between digital convergence and competitive advantage.

Research justification and research questions

Dynamic capabilities of a firm are composed of adaptive, absorptive and innovative capability [10]. Prior research has shown that knowledge sharing and the absorptive capability of the firm (the ability and motivation of the firm’s employees to utilize knowledge) improve the innovation capability of the firm [11]. A review of previous IS research suggests that continuously generating competitive actions, knowledge management and agility are important for achieving a competitive position, but there has been no formal empirical study that examines the role of innovation capability in improving a firm’s business process agility, or the role of digital convergence in leveraging innovation capability in competitive actions.

There have been several calls for research to examine the relationships between organizational capabilities, agility, digital systems and competitive actions. The specific research problems include examining the relationship between digital systems and competitive actions, the firm and network capabilities for leveraging digital systems in competitive actions [12], and the question of which IT capabilities are vital to business success in the contemporary digital environment [13]. There has also been a call for research to study the next wave of nomadic computing, including digital convergence, which enables organizations to mobilize information, share it, and develop new forms of organizational structure, capability and agility [1]. In response to these calls this study proposes to address the following research questions; the research model is illustrated in Fig 1.

1. Does the innovation capability of the firm help make it more agile?
2. What role does digital convergence play in influencing the strength of the proposed relationship between innovation capability and agility?
3. Does digital convergence help in developing digital collaboration (both external and internal)?
4. What role does the location of the partner play in building the innovation alliance network? In other words, are a firm's digital collaborators more locally dispersed, more globally dispersed, or roughly equally dispersed between local and global locations, and does digital convergence inhibit or facilitate digital collaboration between local and global partners?
5. Does digital collaboration (for example, between competitors) have any role in shaping business process agility?
6. Does improving the business process agility of the firm make the firm more competitive?
7. Which type of digital collaboration is perceived to be the most valuable for the enterprise's innovation activities?

Literature Review
Digital convergence

The prevalence of digitized data has resulted in digital convergence (DC) [14]. The digital network today connects IP phones, IP cameras, IP TV, point-of-sale systems, digital learning devices, portable medical devices and other technologies that provide unified communication and collaboration tools even to workers who are mobile. “When all media is digital…Bits co-mingle effortlessly. They start to get mixed up and can be used and re-used separately or together.” [15]; in other words, DC makes the use and reuse of information easier. The definition of digital convergence has evolved over time; the assimilation of concepts on digital convergence from the review of the literature is outlined below. In 1977 Japan’s NEC Corporation first defined DC as communications merging with computers.

Digital convergence requires ubiquitous and powerful computers that can handle communications with digitized content [16, 17]. DC is the convergence of content (character, sound, text, motion and picture into a bit stream) and the convergence of transmission (bits that can be managed and transmitted quickly, efficiently and in large volumes), enabled by distributed computing and internetworking [18]. DC can also be classified as network convergence: fixed-to-mobile convergence (FMC) is the seamless distribution of digitized content over mobile and fixed technologies, enabling the collapse of the boundary between fixed network operators and mobile network operators. It provides access to the digitized service irrespective of location and device.

FMC means that a single device can connect and be switched between wired and wireless networks [19]. Digital convergence can also be viewed as business process convergence or integration: the ability to represent audio, video, text and other media in digital form, manage this rich digital content, and tie it to transactional capability and interactive services [20]. For example, in a doctor’s office the patient’s signature can be captured digitally; business transactions such as patient scheduling, the recording of information about the procedure performed and the rate for the services, payment collection, the processing of insurance claims and patient medical records can all be managed digitally; and those records can later be accessed by management to track the performance of the clinic efficiently.

Business process convergence can also help a business provide personalized, interactive products for consumers. DC is the ability to integrate and converge enterprise-wide business processes with a single point of access, 24×7, where digitized data are stored in a shared repository and managed by enterprise-wide software such as enterprise resource planning (ERP) software. DC is also device convergence, where the same digital device can be used for multiple forms of digital content and complementary services; a mobile phone, for instance, can be used as a video player, music player, sound recorder, GPS unit, email client and web search tool [16]. It has been defined as the convergence of computing, communication and consumer electronics [21]. In the current scenario, future digital convergence means producing digital environments that are aware of, receptive to and adaptive to humans connected in a network; the interacting computational devices connected to such a pervasive, human-centred computing network are able to communicate with each other [22]. Digital convergence can support working from home and conducting live meetings without travelling, using video conferencing. Based on past research, digital convergence can be summarized as the convergence of: a) digital content, b) network/transmission, c) business process/service, d) digital devices and e) infrastructure supporting pervasive computing.

Innovation Capability

Past research on the innovation capability of a firm has concluded that it includes the firm’s product innovation capability, process innovation capability and market innovation capability, which are summarized below; the role of environmental innovation capability and organizational innovation capability in shaping firm agility has not been studied so far. Product innovation capability is the ability to develop new products or services [23-25], the ability to be a first mover in the market [26] and the ability to introduce more new products than other firms [26]. Process innovation capability is the ability of the firm to develop new methods of production [23-25], develop new organizational forms [23], seek new and novel solutions to problems [23] and discover new methods and sources of supply [23]. Market innovativeness is the ability to identify new markets [23]. The Organisation for Economic Co-operation and Development (OECD), headquartered in Paris, administers the Community Innovation Survey (CIS). The CIS was most recently updated in 2008 and now lists organizational innovation capability and environmental innovation capability as new measures of innovativeness [27]. Innovation surveys were first trialled in several Western European countries but have since been conducted in many other countries, including Canada, all EU countries, Switzerland, Russia, Turkey, Australia, New Zealand, South Korea, South Africa and most Latin American countries.

Organizational Innovation Capability

As per the CIS 2008 [27], organizational innovation capability is the ability of the firm to adopt new organizational methods in its business practice. These include new business practices for organising work or procedures (supply chain management, business re-engineering, lean production, quality management, education/training systems, etc.), new knowledge management systems to better use or exchange information, knowledge and skills within and outside the enterprise, new methods of workplace organisation for distributing responsibilities and decision making (first use of a new system of employee responsibilities, team work, decentralisation, integration or de-integration of departments, etc.), and new methods of organising external relations with other firms or public institutions (first use of partnerships, outsourcing, alliances or sub-contracting, etc.). Environmental innovation capability, as per the CIS 2008 [27], is the ability to produce a new or significantly improved product (good or service), process, organizational method or marketing method that generates an environmental advantage compared with alternatives. The CIS 2008 also suggests that a firm’s marketing innovation capability includes the ability to make significant changes to product design or packaging, to develop new media or techniques for product promotion, to develop new sales channels and to develop new methods of pricing goods. Product innovation capability also includes the ability of the firm to develop products adapted to the needs of the customer, while process innovation capability includes the ability to develop new or improved supporting activities for business processes and new methods of providing staff welfare (employees are provided with incentives and encouraged to behave in novel and original ways, and key executives are encouraged to take new risks) [27].

Competition:

Knowledge management (KM) theory and the science of competitiveness suggest that KM improves competitive position by improving productivity, agility, innovation and/or reputation (PAIR) [28, 29]. In dynamic markets, knowledge assets become critical as a source of competition [30]. Along with KM, greater agility will breed superior organizational performance [31]. Entrepreneurial agility (the ability to anticipate and proactively take competitive actions) and adaptive agility (the ability to sense and react to change) are both significant predictors of sustainable competitive advantage [32], and there is also a significant relationship between sustainable competitive advantage and profitability [32]. Dynamic capabilities: in fast-evolving markets competition is a moving target, and firms need dynamic capabilities to gain competitive advantage [6]. Drawing on previous research, dynamic capability is composed of adaptive capability, absorptive capability and innovative capability [10]. The literature defines a competitive action [and response] as an “externally directed, specific, and observable competitive move initiated by a firm to enhance its relative competitive position” [33]. Previous research has concluded that knowledge assets, agility and dynamic capability are important for being competitive, but the key question this study investigates is the relationship of digital convergence with innovation capability, the building of innovation co-operation, business process agility and competitive advantage.

Agility

The different types of agility identified in the literature are: operational (internally focused initiatives), partnering (supply chain initiatives) and customer (demand-side initiatives) [5]; entrepreneurial and adaptive [32]; strategic [34]; and business-process [35]. Operational agility has been defined in the literature as the ability to sense and seize business opportunities quickly, accurately and cost-efficiently. Customer agility is the ability to adapt to customers, identify new business opportunities and implement these opportunities with customers; the role of IT in customer agility is to facilitate the development of virtual customer communities for new product design, feedback and testing. Partnering agility is the ability to leverage a partner’s knowledge, competencies and assets in order to identify and implement new business opportunities. Individual firms do not have all the resources required to compete effectively, and value creation for the firm can be better leveraged through the pooling of assets between partners; the role of IT in partnering agility is to support inter-organizational networks for collaboration, communication and the integration of business processes.

Organizational agility is important for business success [36], and the agility of an organization is significantly determined by its operational ability. Greater agility is achieved when the inter-organizational system used has task and strategic fit, has been assimilated into the organization and has been adopted network-wide [31]. Organizations that are agile, i.e. able to take competitive actions continuously, perform better than organizations that are not [37]. Business-process agility can be classified as process-level agility, which is how quickly an organization can add new capabilities to its standard processes (e.g. how quickly a company can acquire an AJAX capability in its ordering process), and transaction-level agility, which measures how well the organization can customize capabilities for individual customer transactions (for example, how well a company can customize an AJAX ordering capability to include a bar-code label on the box, an RFID tag on a certain type of container, and a paper invoice with bulk billing for an individual customer transaction) [38].

Theory and Propositions

The resource-based view (RBV) of the firm [39, 40] suggests that valuable, rare, inimitable and non-substitutable (VRIN) resources and capabilities are the source of competitive advantage. An extension of this is the theory of dynamic capabilities, which emphasizes that the development of organizational capabilities over time, and their constant renewal by management, can be a source of competitive advantage. In contrast to the earlier view that IT infrastructure and IT investment provide the source of competitive advantage, dynamic capabilities theory emphasizes that the consistent development of the capability to apply IT allows firms to be flexible and to innovate continuously, looking out for emerging opportunities and countervailing threats from competitors, helping to shape a superior firm [41]. The theory of capability-based competition lists dynamic capabilities, core competencies and resources as the basis for the superior performance of a firm [42]. According to dynamic capabilities theory, it is not just the availability of resources that matters, but also the high-performance routines operating inside the firm and embedded in the firm’s processes that utilize them [43].

The theory proposes that a firm’s IT applications can be imitated across firms, but the firm’s capability to apply IT strategically can be inimitable [44]. On this basis the innovation capability of a firm cannot be easily replicated by other firms and will help the firm achieve competitive advantage.
Innovation was described by Schumpeter (1934) as the development of new products, new methods of production, new sources of supply, the opening of new markets and new ways of organizing businesses. As per the OECD’s CIS 2008 survey, innovation ability is the ability to implement a new or significantly improved product (good or service) or process, a new marketing method, a new organisational method, or a new environmentally friendly product or process in business practices, workplace organisation or external relations. It has been suggested that a firm’s radical or incremental innovation drives it to respond to market changes and opportunities [45]. This study investigates this empirically by proposing the following.

Proposition 1: Innovation capability will be related to Organizational Agility.

Digital systems are pervasive: they can make knowledge accessible through intranets, digital knowledge repositories and databases, and can make that knowledge richer through video conferencing and digital collaboration facilities. Digitization offers firms significant opportunities to achieve greater agility [46], and digital convergence allows digitized information to be transferred in different ways. Information theory provides a rational explanation for competitive action: those who have the information will be most aware, motivated and capable of responding. The effects of ICT use on multifactor productivity (MFP) growth are typically linked to firms’ experience in innovation [47]. It has been suggested that several firm capabilities, including the firm’s digital platform, are important enablers of agility [5]. This study therefore proposes that innovation capability will drive agility more for firms that have digital convergence than for firms that do not.

Proposition 2: Relationship between Innovation capability and organizational Agility will be moderated by digital convergence

Firms that have digitized their processes have digital options that can help create new channels to access customers, build real-time integration within the supply chain network, gain efficiencies in internal operations and offer new digital products or services [48]. This study proposes that firms that have digitized their processes will have a level of digital convergence that can support digital collaboration with customers, with other members of the supply chain network, with other firms in the industry, with competitors and with other units within the enterprise, both locally and globally.

Proposition 4: Digital convergence will promote both local and international innovation partnership.

It has been suggested that digital collaborations will result in co-evolution among businesses, implying flexibility in the asset mix, capabilities and knowledge, and thus agility [49]. Knowledge management is related to organizational agility [50], and conducting knowledge management leads to five types of knowledge manipulation activity: knowledge acquisition, selection, generation, assimilation and emission [28].

Proposition 5: Digital Collaboration for Innovation has a direct relationship with Agility.

Proposition 6: Organizational Agility will be related to a higher level of competitive position/competitive advantage.

Research Design
Data Sample

The proposed research model will be empirically tested using data gathered from managers of companies. The target respondent list will be compiled from the Dun & Bradstreet database and will consist of large organizations, both public and private, operating in North America that face a certain level of market uncertainty and competition. As per the OECD's definition there are two types of innovation-intensive industries: a) high-tech industrial companies such as manufacturing, and b) companies that provide knowledge-intensive services such as IT consultancy, telecom services, banking and finance, retail, insurance, health care and education. The diverse sample from both the public and the private sector will help increase the generalizability of the results of this study. The focus of this study is digital convergence. Although a surge in digital convergence of varying strength is seen across all industry sectors and all firm sizes, this study focuses on medium to large companies with a large number of employees, because for large companies the availability of finance makes it easier to invest in digital systems.

Methodology

A pilot study with IS academics and graduate students will be conducted for a preliminary assessment of the proposed scale for each construct and to identify ambiguous questions and instructions. Cronbach's alpha (α) will be computed for each multi-item scale to test reliability. An alpha greater than 0.7 is generally considered acceptable reliability [51].
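As an illustration of this reliability check, the following is a minimal sketch of computing Cronbach's alpha for one multi-item scale; the item names and responses are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, cols = items)."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses for a 3-item product innovation scale
responses = pd.DataFrame({
    "prod_innov_1": [5, 6, 4, 7, 5, 6],
    "prod_innov_2": [5, 7, 4, 6, 5, 6],
    "prod_innov_3": [4, 6, 5, 7, 5, 7],
})
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")  # accept if > 0.7
```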
It is important to assess the bias that results from using a single method, a mail survey administered at a single point in time, to measure the constructs proposed in this study, i.e. Common Method Variance (CMV). Harman's single-factor test will be used to assess CMV [52].
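The sketch below illustrates one common way Harman's single-factor test is operationalized: all survey items are loaded onto a single unrotated component and CMV is suspected if that one factor explains the majority of the variance. The item matrix here is randomly generated purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor_share(item_matrix: np.ndarray) -> float:
    """Share of total variance explained by the first unrotated component."""
    # Standardize items so each contributes equally
    z = (item_matrix - item_matrix.mean(axis=0)) / item_matrix.std(axis=0, ddof=1)
    pca = PCA()
    pca.fit(z)
    return pca.explained_variance_ratio_[0]

# Hypothetical matrix of all survey items (rows = respondents, cols = items)
rng = np.random.default_rng(0)
all_items = rng.integers(1, 8, size=(200, 20)).astype(float)

share = harman_single_factor_share(all_items)
print(f"First factor explains {share:.0%} of the variance")
# A single dominant factor (e.g. well above 50%) would suggest common method variance.
```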

Measures are being taken to elicit information about all the variables under study. Wherever possible existing scales will be used, but a new scale to measure digital convergence will be developed. A seven-point Likert scale (1 = very weak, 7 = very strong) will be used to measure the constructs. This study adapts previously validated scales used to measure organizational innovation capability [25-27, 53]. The adapted scale consists of: product innovation (3 questions), process innovation (4), organizational innovation (4), marketing innovation (4) and environmental innovation (2), as shown in the Appendix.

Digital collaboration for innovation is active participation with other enterprises or non-commercial institutions on innovation activities using a digital platform. This type of collaboration does not require the collaborator to benefit commercially, and pure contracting-out of work with no active co-operation is excluded from the definition. The measure is adapted from the OECD Community Innovation Survey, 2008, and consists of selecting the different types of collaborators and their location, as shown in the Appendix.

Eight measures of business process agility will be used from a previously validated instrument [35], which was developed based on the conceptual framework provided by prior research [5, 54]. These items measure how quickly and how well firms can undertake key business actions such as responding to changes in aggregate demand, customizing a product to a specific customer or market, reacting to new product or service launches by competitors, changing prices or product mix, moving into or retrenching from markets, adopting new processes and redesigning the supply chain.

Little empirical work has been done on Digital Convergence, and this proposal synthesizes concepts from the current IS literature on Digital Convergence to develop the operationalization of the Digital Convergence construct.

This study proposes breaking down Digital Convergence into six first-order constructs: content convergence, transmission convergence, network convergence, business process convergence or integration, device convergence and pervasive digital environment, which will be easier to operationalize. The next step will be to operationalize these variables and to transform the propositions into formal hypotheses for empirical testing.

This study proposes to measure the competitive position of a company based on the performance of the company relative to its major competitors, using a seven-point Likert scale (1 = significantly decreased, 7 = significantly increased), in terms of market share, sales volume and customer satisfaction. The self-reported results will be validated by calculating their correlation with accounting-related measures available from financial reports. Previous literature supports the use of accounting measures such as Return on Sales (ROS) and Return on Assets (ROA), often used as proxies for efficiency, and operating income to measure a company's position to compete.
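As a sketch of this validation step, the following correlates hypothetical self-reported competitive position scores with ROA figures assumed to come from the firms' financial reports.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: mean self-reported competitive position (1-7 Likert)
# and Return on Assets (%) for the same firms, taken from financial reports.
self_reported = np.array([5.3, 4.1, 6.2, 3.8, 5.9, 4.7, 6.5, 3.2])
roa_percent   = np.array([7.1, 4.0, 9.3, 3.5, 8.2, 5.1, 10.0, 2.9])

r, p = pearsonr(self_reported, roa_percent)
rho, p_s = spearmanr(self_reported, roa_percent)
print(f"Pearson r = {r:.2f} (p = {p:.3f}), Spearman rho = {rho:.2f} (p = {p_s:.3f})")
# A significant positive correlation would support the validity of the self-reported measure.
```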

Data Analysis

This study proposes to use PLS to estimate the research model, as it is common in the behavioural literature to use multiple-item measures for latent constructs. A PLS path model will be used for interpreting the main results because this study uses perceptual measures from a single respondent for constructs that require multiple item indicators.
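The study specifies PLS for estimation; purely as a simpler illustration of how the moderation in Proposition 2 could be examined, the sketch below fits an ordinary least squares model with an interaction term between innovation capability and digital convergence on simulated construct scores.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical, mean-centred construct scores per firm
rng = np.random.default_rng(1)
n = 150
innovation = rng.normal(size=n)      # innovation capability
convergence = rng.normal(size=n)     # digital convergence
agility = (0.5 * innovation + 0.3 * convergence
           + 0.4 * innovation * convergence + rng.normal(scale=0.8, size=n))

df = pd.DataFrame({
    "innovation": innovation,
    "convergence": convergence,
    "interaction": innovation * convergence,  # moderation term (Proposition 2)
    "agility": agility,
})

X = sm.add_constant(df[["innovation", "convergence", "interaction"]])
model = sm.OLS(df["agility"], X).fit()
print(model.summary())
# A significant coefficient on 'interaction' would be consistent with
# digital convergence moderating the innovation-agility relationship.
```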

Significance of this research

Innovation and agility are seen as important across many industries, especially those operating in a dynamic and globally competitive environment. The impact of digital convergence on a firm's ability to compete in such an environment therefore has important implications for managers. The relationship between innovation capability and business process agility has not been studied empirically in the context of competitive advantage. The results of this proposed study can provide guidance to managers on questions such as: should managers develop environmental, organizational and marketing innovation capability to gain more agility?

Should managers invest in digital convergence to build digital collaboration for innovation? Is there any gain from collaborating for innovation (even with competitors) in improving firm agility? Does business process agility provide competitive advantage for large companies? Will the benefit of developing innovation capability increase with investment in digital convergence?

This proposed study is important to researchers as it adds to the growing body of literature linking a firm's capabilities and agility. It draws on the resource-based view of the firm and dynamic capabilities theory to explain the relationship between a firm's innovation capability and its competitive performance, and it provides an empirical test of the relationship between business process agility and competitiveness. The study also offers an operationalization of digital convergence. Finally, the results of this proposed study are important to respondents, as they indicate whether leveraging innovation capability for competitive advantage is contingent on investment in digital convergence.

Differences Between Virtual and Real Worlds

The nearer the virtual world comes to replicating the ‘real’ world the more we will have to question what, if any, the difference continues to be. Discuss.

Introduction

The concept of the virtual has been around for much longer than the technology which now makes the virtual part of our lives. In various forms, such as that put forward by Descartes, the concept of the virtual has long permeated the thoughts of scholars. However, it is only in very recent times that the concept has really taken flight: with emerging technologies and a world that is more unstable than ever, the world of the virtual has grown rapidly. The virtual is getting closer and closer to reality, and soon we may not be able to distinguish these worlds from each other. Whilst it is clear that right now we have some idea of the difference between the real world and the virtual world, the differences between the two are far smaller than they were before.

Traditionally, the difference between the virtual and the real worlds was the concept of identity or body – that in the virtual world your mind was separated from your body and therefore the difference between the virtual and the real was the physical form. However, this view is slowly being eroded away and there are many who now believe the virtual cannot be separated from the real in terms of body or physicality. This is a view that the researcher shares, and this concept shall be one of the main points of focus in the essay.

Another area which could measure the distinction between the virtual and the real is the concept of risk. Risk is still one of the few concepts which seems to differ in the virtual and the real worlds, at least in a number of cases. Although this difference is being reduced, currently it seems possible to distinguish the virtual and real worlds using the concept of risk.

The first part of this essay will look at the way in which the virtual has superseded many aspects of the real world in our lives, and how these areas can no longer be easily distinguished. These normal, everyday events are now both real and virtual simultaneously, and there is no discernible distinction. The second part of the essay will deal with the concept of physicality in the virtual world, and how the evidence now suggests that the virtual and real worlds cannot be distinguished through the identity of the physical body.

The final part of the essay will show that despite this convergence between the virtual and the real, there are some distinctions to be made. The way in which the virtual can be distinguished from the real comes down to the concept of risk, although this distinction is also being reduced and perhaps with the introduction of future technologies this distinguishing feature will also be completely eroded.

Information, branding and the virtual

One area of our lives that has seen a dramatic shift from the real to the virtual over the last ten years is information and branding. Information used to be stored predominantly in 'real' forms such as paper books. However, most information is now accessed in virtual form, through computers, radio, television and other digital media. Books and music are now accessed as digital facsimiles of the original, and our world in terms of information has shifted from real to virtual (Argyle and Shields, 1996). This process has come so far that there can be no real distinction between the real and virtual worlds of information. We see books and music in paper form in the same way we do digital e-books and mp3 music. They are perceptually very similar and contain the same information that we require. Our world has shifted from the real and local to the virtual and the global.

The shift from real to virtual has also occurred in terms of branding. Brand concepts for many real objects are now so strong that we no longer think of the real object itself but instead treat the virtual brand as the key identity: for example, vacuum cleaners being called 'Hoovers', tissues being called 'Kleenex' and digital music players being called 'iPods' (Shields, 2003). The brand stands as a virtual representation of the actual real object, but is virtually indistinguishable from it, particularly in linguistic terms. The virtual, as Shields puts it, has moved from being something simply transformative and vague to something banal and common. The virtual has permeated or even superseded the real in some aspects of life, and the virtual is now for all intents and purposes indistinguishable from the real.

Whilst it could be argued that these virtual forms differ from their real counterparts by way of physical form (i.e. the virtual item has no physical tangible form, whereas the real form does), in the cases of information and branding this distinction of physicality is too small to be clear. For example, the virtual or digital form of a book does have physical form, in that the information can be printed out or used in a physical way just like the real paper book can. This is the same for branding, where the virtual brand names are so synonymous with the real physical objects that they cannot be separated. In these cases, the virtual has superseded the real and completely destroyed the boundaries between them. Therefore, this is an area where there cannot be said to be any real distinction between the virtual world and the real world.

Physical form, identity within the virtual world

The most commonly argued distinction between the real and virtual worlds is the concept of physical form and identity. It is argued by many that in the virtual world you do not have a complete physical identity and that your mind being removed from the physical body is the distinction between the real and virtual. In other words, in the virtual world you have no real identity or physical interaction, whereas in the real world you do.

Although it seems clear that the physical characteristics that identify our physical selves are not obviously visible in the virtual world, the physical self is not completely left behind, and we do in fact experience physical manifestations within the virtual world.

Katie Argyle and Rob Shields (1996) point out that 'presence' does not simply vanish in the virtual world; the technology merely mediates our physical presence. With technology as it is today, and sure to improve in the future, we can now act holistically through our bodies within the virtual world. Although it might seem that our bodies are not part of the virtual, in truth we cannot actually escape them: we do not lose our body within the virtual world, but rather experience and interact in the virtual world through our bodies.

Argyle points out that the emotions we feel whilst interacting in the virtual world are in fact real and physical. She gives the example of her online presence 'Kitty' (Argyle and Shields, 1996, in Kolko, p. 66), saying that although the interactions she has with people take place in a virtual world, the emotions and feelings are real and felt in her body. Argyle's online interaction shows no separation from the body, and therefore suggests that the physical body cannot be used as a distinguishing feature between real and virtual.

This view is supported by Ellen Ullman, who describes how she fell in love via email. She only knew the person through the virtual world of email, not through physical interaction, yet the virtual world elicited real and physical feelings within her. Ullman does differentiate her online body from her 'real' one by saying that the love she felt was through her 'virtual' body, but she does not separate these two bodies, and the physical form remains linked to the online body or persona (Ullman, 1996).

It seems that when interacting online, it is extremely difficult or nearly impossible to completely separate your online persona from your real, physical persona. For instance, in online games where people can send messages to each other in character, they may also ask about the person in 'real life', such as how old they are and where they come from. People often get confused about which 'life' the other person is talking about, and so acronyms have been created to stop the confusion, such as IC (in character) and IRL (in real life). This confusion arises because there will always be a part of our persona within the virtual that cannot be separated, and therefore the body cannot be a distinguishing feature between the virtual and the real.

Not only does the physical form remain within the virtual world by virtue of its interaction and emotions experienced in the virtual, but many of the laws and rights that govern the real are now also governing the virtual. Even if you want to make a distinction between the virtual body, such as an avatar or image, and the real physical body, many of the outcomes are the same. The laws and rights of the virtual are now mirroring the real, and blurring the lines further between the two worlds.

For instance, take the rights of avatars in the virtual world. Whilst avatars cannot have the same basic rights as real bodies, namely the right to life (they are not alive, after all), their controllers may feel many of the same hazards and wish for the same rights as their real counterparts. Take the example of sexual harassment within the virtual world: whilst the legal definitions of sexual assault and rape are generally not met by virtual encounters, much of the feeling of intrusion, fear or disgust can be the same. Consider Julian Dibbell's account of Mr Bungle's 'rape' of the characters Legba and Starsinger in LambdaMOO (Dibbell, 1996). The 'rape' was virtual and involved an avatar seizing two other female avatars and graphically describing the actions performed. Whilst this may not constitute 'real' rape, the trauma for the two controllers of the female avatars was indeed real. This is because the body cannot be completely separated when in the virtual world, and even if the exact actions are not the same, it cannot really be said that physicality is a distinguishing factor between the real and the virtual, as the body is affected similarly in both worlds.

The lack of risk within the virtual world

Whilst the traditionally held view that the real and the virtual can be distinguished by the separation of the mind and the body has been shown to be flawed, there is definite promise in the concept of risk as a distinguishing feature. The reason is that risk is something which can only truly be experienced through the physical, without the mediating virtual world. Of course, certain risks, such as financial and emotional risk, are possible in the virtual world, as they have similar, if slightly dulled, consequences there.

However, the notion of risk to the physical body, or fear, is a feature that separates the virtual and the real. For instance, creating a virtual business that has real financial consequences may seem to have the same features as a real business, but there are differences. Using the virtual world, all products can be tested, mapped and made completely safe before they are sold on. Experiments can be conducted without the potential risks to human life or to property. The concept of risk is in many ways eliminated from the design process, aside from some of the same financial concerns over design costs. Identities and properties can be re-used, re-hashed or completely changed in the virtual world, meaning there is less risk of catastrophic failure.

However, perhaps the biggest issue in terms of risk is the fact that fear of physical harm is almost completely removed in the virtual world. Emotions are still prevalent and part of our physicality remains in the virtual. However, the risks associated with physical damage or damage to property cannot be properly appreciated within the virtual world. There are no risks in this sense in the virtual world, whereas there are in the real world.

For instance, take the example of realistic computer games such as Forza Motorsport (Lockergnome, 2005). Developed as one of the most advanced driving games for console play, it was used in an experiment to see the difference in performance and reactions between the virtual and real worlds. Six performance cars were driven round the same Atlanta circuit by drivers in the game and on the real-life circuit. The results were startling in that the performances in the virtual and real worlds were extremely similar: the braking and turning points were very close, and the performance in each car was almost the same whether in the virtual or the real world (Lockergnome, 2005). However, the times in the real world were slower than in the game, much of which was attributed to the risk involved in driving the car in the real world. In the game, the drivers could push their cars to the limit in the knowledge that if they crashed there would be no damage to the car. This removed the risk factor and allowed them to achieve better times. On the real track they had to be more careful, for crashing at 150 mph in a high-powered car would be far too risky. This slowed them down, and showed that, despite all other factors being seemingly indistinguishable, the major difference between the real and the virtual was the risk factor.

This can also be seen in the prevalence of online dating and sexual play among people who would not normally engage in such activities in the real world. In the rapidly growing virtual world called Second Life (Linden Research Inc, 2007), there are many groups of role-players who act out fantasies that they would not act out in real life. This is not because it is impossible for them to do so in real life (although perhaps for some it is), but because of the risk factors involved in real life. The feelings they receive from these interactions may be almost identical to the ones they would get from real-life interaction, but the crucial difference is risk. Whilst for some interactions this will not affect the sensation, it does for others, such as those involved in fantasies of pain or domination. The risk factor is the one which can distinguish the virtual from the real world.

Conclusion

Whilst we are still living in a world where the real and the virtual seem in most cases to be separate, this is perhaps less true than we think. The real world of information has become indistinguishable from the virtual, and with new technologies the virtual world has become far more expansive and convincing. This change has meant that it is now impossible to separate all of our real physical identity from the virtual representation of ourselves in the virtual world. However, one factor that still lets us distinguish the virtual from the real is the concept of risk, which is not yet fully realised within the virtual world. However, if in future the technology of the virtual increases to the point where physical sensations can be experienced fully in the virtual world, then physical risk to the body and other items of value will be possible. If risk can be fully replicated in the virtual world, then it seems the two worlds would be very close to being completely indistinguishable.

Bibliography

Argyle, K., and Shields, R. (1996) Is there a body in the net?, Living Bodies, Chapter 4, SAGE Publications Ltd, London.

Dibbell, J. (1996) My Dinner With Catharine MacKinnon and Other Hazards of Theorizing Virtual Rape, available online at http://www.juliandibbell.com/texts/mydinner.html

Kolko, B.E. We Are Not Just (Electronic) Words: Learning the Literacies of Culture, Body, and Politics. http://bethkolko.com/includes/pdfs/wearenotjustelectronicwords.pdf

Levy, P. (2002) Cyberculture. Minneapolis: University of Minnesota Press.

Shapiro, M.A., and McDonald, D.G. (1992) I’m Not a Real Doctor, but I Play One in Virtual Reality: Implications of Virtual Reality for Judgments about Reality Journal of Communication 42 (4), 94–114.

Shields, R. (2003) The Virtual, Cultures of internet, Virtual Spaces, Real Histories Chapter 2, Routledge, London.

Ullman, E. (1996) “Come in, CQ: The Body on the Wire.” Wired-Women: Gender and New Realities in Cyberspace. Ed. Lynn Cherny and Elizabeth Reba Wiese. Seattle: Seal, 1996. 3-23.

Development of Microprocessor Based Automatic Gate

ABSTRACT

In this paper, we give detailed information about the development of a microprocessor-based automatic gate. Manual gate operation frequently causes problems, and a microprocessor-based automatic gate can remove these problems easily. The automatic gate is intended for use in automatic car parking: it senses a vehicle as it approaches, opens automatically, waits for a definite time, and closes after the time has passed. The system also keeps count of the number of vehicles that have entered the parking area and calculates the remaining space in the area. The automatic gate developed in this paper is controlled by software, which can be modified whenever the system needs a change.

Keywords: automatic gate, microprocessor, automobile, traffic controllers.

INTRODUCTION

The need for automatic gates is increasing rapidly. This paper describes the use of a microprocessor as the gate controller. The automatic gate is an alternative to the manual gate; manual systems are costly and time consuming. Microcontrolled designs are also used in sound systems, robots, automatic braking systems and similar applications.

This automatic gate can be used for parking at residential homes, organizations and public car parks. The system automatically controls the opening and closing of the gate for parking, and it opens the gate only when space is available.

The automatic gate described here is not intended for security purposes; it is developed only to eliminate the problems of the older manual method.

SYSTEM OVERVIEW

The system presented here is a microprocessor-based automatic gate. The microprocessor controls the sensor, which provides information about the available space. The system opens, holds and closes the gate for each car, and counts the number of cars that enter or exit. It consists of a trigger circuit, sensor, CPU and memory module, display, gate and power supply unit. The sensor provides the input signal to the system: it is an optical sensor whose signal goes HIGH when a car crosses it and remains LOW otherwise. The trigger circuit is responsible for these HIGH and LOW levels and converts the analog sensor signal to a digital one. If the signal is HIGH, the trigger sends a signal to the interface unit and the car enters the parking area; if the signal is LOW, no car is admitted. The power supply unit provides the DC voltage for the system.

Figure: Block diagram of the system
HARDWARE AND SOFTWARE DESIGN

The system design is divided into two parts:

Hardware design.
Software design.

Hardware design

Sensor unit
Trigger circuit
CPU module
Memory module
Display unit
Gate control unit
Power supply unit

1. Sensor Unit:-

The sensor is an optical sensor, a light dependent resistor (LDR) whose resistance changes with the intensity of light. In this system an ORP12 is used, which has a dark resistance of about 10 MΩ. When light falls on the sensor the resistance is low, and when the light is interrupted the resistance rises towards the dark resistance. Two sensors are used, one for the entrance gate and one for the exit gate. The sensor unit sends its output to the trigger circuit: when the light beam is uninterrupted the output voltages are V01 and V02, and when the light is interrupted the voltage rises to 5 V.
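A minimal sketch of the sensing logic follows, assuming the LDR sits in a simple voltage divider with a fixed resistor across a 5 V supply; the resistor value and threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical LDR voltage-divider model for the optical sensor.
V_SUPPLY = 5.0          # supply voltage in volts
R_FIXED = 10_000.0      # fixed divider resistor in ohms (assumed value)
THRESHOLD_V = 2.5       # trigger threshold in volts (assumed value)

def sensor_output_voltage(r_ldr_ohms: float) -> float:
    """Divider tap voltage with the LDR on the ground side: low when lit, near 5 V when dark."""
    return V_SUPPLY * r_ldr_ohms / (r_ldr_ohms + R_FIXED)

def beam_interrupted(r_ldr_ohms: float) -> bool:
    """True (logic HIGH) when a vehicle blocks the light beam."""
    return sensor_output_voltage(r_ldr_ohms) > THRESHOLD_V

print(beam_interrupted(500.0))        # lit LDR, low resistance   -> False (LOW)
print(beam_interrupted(1_000_000.0))  # dark LDR, high resistance -> True (HIGH)
```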

2. Trigger Circuit:-

This circuit is built from a trigger and two-input NAND gates. It receives the output from the sensor unit; only when there is an output from the sensor unit does the trigger circuit go HIGH, otherwise it remains LOW.

3. CPU Module:-

This module provides the system clock, reset and access to the address, data and control buses. Two additional circuits are used:

Clock circuit.
Reset circuit.

Clock circuit: a crystal oscillator is used to implement the clock circuit, as it is more reliable for a high-level output voltage. The CPU used in this design requires a clock signal, so the crystal oscillator output is passed through a flip-flop.

Reset circuit: after power is supplied, this circuit initializes the CPU and recovers it if a halt occurs. When the CPU is reset, execution starts and any pending interrupt is cleared.

4. Memory Module:-

Two addressing techniques are used in this module: linear select and full decoding. In linear select each address bit selects a device; it suits small systems and needs no decoding hardware, but it is time consuming. In full decoding the complete memory address is decoded to select a memory device.

Address decoder: it determines which region of memory corresponds to the address issued by the microprocessor. Combinational circuits are used, and the decoder has multiple enable inputs; only when all enables are active does the decoder drive its active-low outputs.

5. Display Unit:-

The display unit shows values in decimal and hexadecimal format.

The display unit consists of:

Z80 PIO: provides an 8-bit I/O port. It needs a driver to feed its output to the 7-segment display; whenever a vehicle crosses the gate, this unit sends a signal to the driver.
BCD to 7-segment decoder: takes a 4-bit BCD input to display a decimal digit.
7-segment display

6. Gate Control Unit:-

Gate control unit is made up of —

PNP and NPN transistor
Diodes
Motor.

Transistors are used to control the opening of the gate through the motor. There is a time interval of 10 seconds between the opening and closing of the gate.

Diodes are used to protect the transistors from reverse bias and to improve switching.

A DC motor is used to open and close the gate.

7. Power Supply Unit:-

The power supply unit is designed to deliver 5 V DC, and its output does not change even if the AC input voltage varies. The components of the power supply unit are:

Transformer: a 220 V or 240 V mains transformer.

Diodes: rectify the AC input to DC.

Filter capacitor: used to reduce ripple voltage.

Regulator: receives the rectified DC input and delivers a regulated 5 V output.

Software design

Software design refers to the coding of the system. The program is organized into the following modules:

Main Program
Sensor Subroutine
Delay Subroutine
Output Subroutine

The steps involved in the software design are:

Algorithm
Flow Chart
Coding

Algorithm

START

1. cnt1 = 0, cnt2 = 0, lim = 20

2. Read the sensor bit.

3. Compare the sensor bit with the entry code and the exit code:
a. If sensor bit = entry code, go to step 5.
b. Else if sensor bit = exit code, go to step 6.

4. Go to step 2.

5. a. Open, wait and close the gate.
b. Increment cnt1 and display it.
c. Go to step 7.

6. a. Open, wait and close the gate.
b. Increment cnt2 and display it.

7. Subtract cnt2 from cnt1.

8. Compare the result with lim:
a. If result = lim, go to step 9.
b. Else go to step 2.

9. Fetch the sensor bit.

10. Compare the sensor bit:
a. If it equals the exit code, go to step 6.
b. Else raise the alarm.

11. Go to step 9.
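The following is a minimal Python sketch of the control loop described by the algorithm above; the gate, display and sensor routines are hypothetical stand-ins for the assembly subroutines (sensor, delay and output) used in the actual system.

```python
import time
from typing import Iterable

ENTRY_CODE, EXIT_CODE, NO_CAR = 0b01, 0b10, 0b00
LIMIT = 20                 # capacity of the parking area (lim in the algorithm)
GATE_OPEN_SECONDS = 10     # interval between opening and closing the gate

def operate_gate() -> None:
    """Open the gate, wait for the set interval, then close it (delay subroutine)."""
    print("gate opening")
    time.sleep(GATE_OPEN_SECONDS)
    print("gate closing")

def display(entered: int, exited: int) -> None:
    """Stand-in for the output subroutine driving the 7-segment display."""
    print(f"in: {entered}  out: {exited}  free: {LIMIT - (entered - exited)}")

def control_loop(sensor_readings: Iterable[int]) -> None:
    """Mirror of the algorithm: count entries and exits, raise an alarm when full."""
    cnt1 = cnt2 = 0        # cars entered / cars exited
    for bits in sensor_readings:
        if cnt1 - cnt2 >= LIMIT:           # steps 8-10: parking area is full
            if bits == EXIT_CODE:
                operate_gate(); cnt2 += 1; display(cnt1, cnt2)
            elif bits == ENTRY_CODE:
                print("ALARM: parking area full")
        elif bits == ENTRY_CODE:           # step 5: admit a car
            operate_gate(); cnt1 += 1; display(cnt1, cnt2)
        elif bits == EXIT_CODE:            # step 6: let a car out
            operate_gate(); cnt2 += 1; display(cnt1, cnt2)

# Simulated sensor readings in place of the real optical sensors and trigger circuit
control_loop([ENTRY_CODE, NO_CAR, ENTRY_CODE, EXIT_CODE])
```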

CONCLUSION

With this microprocessor-controlled gate, the project's goal has been achieved. The design is applicable to any kind of system that needs a sensor; in this parking system the sensor plays the most important part. For the system to be effective, one should have proper knowledge of the sensor, the microprocessor and assembly language.

The sensor works most effectively when it operates in a high intensity of light. This automatic gate can be used in organizations, public car parks and similar settings, but the system is not designed for security purposes.

Development of Digital Television Technology

Digital TV broadcasting and HDTV
Introduction

While Guglielmo Marconi is known as the inventor of wireless telegraphy in 1897 (Winston, 1998, p. 70), identifying the inventor of television is more complicated, as it entailed an evolution of over ten years to move from concept to actual picture transmission and reception. The patent for the electronic scanning tube, termed the iconoscope, was held by Vladimir Zworykin, a Russian-born inventor who worked for Westinghouse in 1923; however, Westinghouse did not see the utility in his invention and ordered Zworykin onto other projects (Bogart, 1956, p. 8, 348). Philo Farnsworth (Horvitz, 2002, p. 9, 92) advanced the concept, and it was John Logie Baird who accomplished the first transmissions of face shapes in 1924 and who is also credited with the first television broadcast in 1926 (Horvitz, 2002, p. 101). From there the development of television escalated, with analog broadcasting remaining the transmission method used in television until 2000, which began the age of digital television and radio broadcasting (Huff, 2001, pp. 4, 8, 69).

To understand digital television, one needs a basic understanding of the manner in which analog television works. In the analog system a video camera takes pictures at 30 frames per second, which are then rasterized into rows of individual dots, termed pixels, each assigned a specific color and intensity (howstuffworks.com, 2007a). These pixel rows are then combined with synchronization signals, termed horizontal and vertical sync, which allow the receiving television set to understand how the rows should be displayed (howstuffworks.com, 2007a). The final signal containing the preceding is the composite video signal, which is separate from the sound (howstuffworks.com, 2007a). One difference between analog and digital television is the aspect ratio: the analog system has a 4:3 aspect ratio, meaning the screen is four units wide by three units high, so a 25-inch analog television measured diagonally is 15 inches high by 20 inches wide, whereas a digital television has a 16:9 aspect ratio (Metallinos, 1996, pp. 27, 206-207).
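A small worked check of the screen-size arithmetic above: given a diagonal measurement and an aspect ratio, the width and height follow from the Pythagorean theorem.

```python
import math

def screen_dimensions(diagonal_inches: float, ratio_w: int, ratio_h: int) -> tuple[float, float]:
    """Return (width, height) in inches for a given diagonal and aspect ratio."""
    unit = diagonal_inches / math.hypot(ratio_w, ratio_h)
    return ratio_w * unit, ratio_h * unit

print(screen_dimensions(25, 4, 3))    # 4:3 set, 25" diagonal  -> (20.0, 15.0)
print(screen_dimensions(25, 16, 9))   # 16:9 set, 25" diagonal -> roughly (21.8, 12.3)
```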

Digital broadcasting, as is the case for all broadcast formats, including radio, utilizes part of the electromagnetic spectrum (Montgomery and Powell, 1985, pp. 20, 237). Electromagnetic wave frequencies consist of radio, infrared, visible light, ultraviolet, x-rays, gamma rays and cosmic rays, in order from the lowest frequency to the highest (Weber, 1961, pp. 105, 184). In reality, digital television broadcasting is a subset of digital radio broadcasting under the 'one-way digital radio standards', which include not only digital radio and television broadcasting but also digital terrestrial television, DVB-T, ISDB-T, ATSC, T-DMB, mobile TV, satellite TV and radio pagers, as well as the Eureka 147 standard (DAB), to name a few (Levy, 2001, pp. 7, 10, 11, 33). This examination delves into an understanding of digital television broadcasting, DAB, DVB-T and HDTV, and their deployment in Europe as well as the United States.

Television’s New Age

The advantages of digital television are that it offers a broader array of viewing options for both consumers and broadcast stations, in that it provides a clearer picture and sharper sound, along with the ability for broadcasters to offer multiple sub-channels as a result of its formats (Levy, 2001, p. 71).

There are three tiers of formats. The first is 480i, which is 704x480 pixels broadcast at 60 interlaced frames a second (30 complete frames each second), together with 480p, which is 704x480 pixels broadcast at 60 complete frames each second. The second is 720p, in which the picture is 1280x720 pixels broadcast at 60 complete frames a second. The third is 1080i, in which the picture is 1920x1080 pixels sent at 60 interlaced frames each second (30 complete frames each second), together with 1080p, in which the picture is 1920x1080 pixels broadcast at 60 complete frames each second (howstuffworks.com, 2007b).
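To make the differences between these formats concrete, the short calculation below gives the pixel count of each frame size named above.

```python
# Frame sizes named in the text, in (width, height) pixels
formats = {
    "480i/480p": (704, 480),
    "720p": (1280, 720),
    "1080i/1080p": (1920, 1080),
}

for name, (width, height) in formats.items():
    print(f"{name}: {width}x{height} = {width * height:,} pixels per frame")
```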

Note: for comparison, analog sets use 525 horizontal scan lines, each containing approximately 680 pixels. Each pixel represents one element of the picture and contains three areas of red, green and blue phosphor, which may be either rectangles or dots. The electron guns send out beams that strike the phosphors and cause them to glow, with electromagnets located near the guns directing the beams in sequence to each pixel; the broadcast signal provides information on how bright the phosphors should be made, at what time and in what sequence.

As digital television broadcasting and digital audio broadcasting (DAB) are both based on the same electromagnetic wave principle, they work in the same manner, with DAB providing a broader range of digital channels that are not available on FM, less hiss and interference, tuning by station name or format, and support for scrolling radio text, MP3 playback, and pause and rewind features (Scott, 1998, p. 9, 210).

DVB-T represents the European standard for the broadcast of digital terrestrial television. In DVB-T, or Digital Video Broadcasting - Terrestrial, the compressed digital audio and video data stream is transmitted using OFDM modulation with concatenated channel coding (Levy, 2001, pp. 3-21). Al-Askary et al. (2005) note that the convolutional coding used with OFDM cannot adapt to variations in the fading properties of individual sub-channels; the concatenated coding is intended to provide clearer, less distorted signals and reception. In the DVB-T method, the signals transmitted by broadcasters are relayed from one aerial antenna to another using a signal blaster to reach home receivers (White, 2007). The broadcast is transmitted as a compressed digital audio-video stream based on the MPEG-2 standard, which is formed by combining one or more 'Packetised Elementary Streams' (Chiariglione, 2000).

Note: in summary, the source-coded streams are multiplexed into programme streams, with one or more of these joined to create an MPEG-2 Transport Stream that is transmitted to set-top boxes in the home. The system can accommodate 6 MHz to 8 MHz wide channels.

Digital Audio Broadcasting (DAB), also termed 'Eureka 147', is the technology employed for broadcasting audio by digital radio transmission (Huff, 2001, pp. 67-78). To achieve the sound reproduction quality attributed to DAB, the bit rate must be high enough for the MPEG Layer 2 audio codec to deliver the quality inherent in the system, as well as high enough to support the error-correction coding (digitalradiotech.co.uk, 2007). Both the DAB and DVB-T systems use orthogonal frequency division multiplexing (OFDM) modulation, with each system able to handle 1536 sub-carriers (digitalradiotech.co.uk, 2007). DAB and DVB-T also use the QPSK signal constellation to modulate the subcarriers, transmitting 2 bits per symbol on each subcarrier (digitalradiotech.co.uk, 2007).
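As a rough illustration of the OFDM principle shared by DAB and DVB-T, the sketch below maps a bit stream onto QPSK symbols (2 bits per symbol) across a block of sub-carriers and forms one OFDM symbol with an inverse FFT; the 1536-carrier figure follows the text, while everything else is illustrative.

```python
import numpy as np

N_SUBCARRIERS = 1536      # sub-carrier count quoted in the text
BITS_PER_SYMBOL = 2       # QPSK carries 2 bits per symbol

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=N_SUBCARRIERS * BITS_PER_SYMBOL)

# Map bit pairs to QPSK constellation points (unit energy)
pairs = bits.reshape(-1, 2)
symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

# One OFDM symbol: place one QPSK symbol on each sub-carrier and take the IFFT
time_domain = np.fft.ifft(symbols)

# A receiver undoes this with an FFT and recovers the constellation points
recovered = np.fft.fft(time_domain)
print(np.allclose(recovered, symbols))   # True: the sub-carriers remain orthogonal
```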

DAB is particularly suited to multimedia transmission systems covering sound, moving pictures, text and data (Levy, 2001, p. 177). As a radio frequency signal, DAB's ability to be picked up by ordinary radio receivers represents an advantage over DVB-T, whose mobile reception signal "… is significantly affected by …" the fast-changing nature of the transmission channel, making it necessary to use two antennas on the receiver along with more complex and "… elaborate signal processing for … channel tracking" (Lauterjung, 1999). And while DVB-T was originally developed for stationary reception using a roof-top directional antenna or a non-directional antenna on a portable receiver, it has been adapted for mobile reception as indicated (Lauterjung, 1999). Recent tests conducted in Germany and Singapore have shown that DVB-T can be used for mobile reception, although the drawback is battery life as a result of power consumption (dvb.org, 2004).

HDTV, high-definition television, uses approximately ten times the number of pixels of a standard analog television set, with a high-end resolution of 1920x1080 pixels against an analog set's 704x480 pixels (Huff, 2001, pp. 140-141).

The high resolution of HDTV requires greater bandwidth, which forces broadcast operators to make a major financial commitment to deploy the new standard (Brown and Picard, 2005, pp. 47-49). The deployment problem is that, to make the system work with their current infrastructure, operators would have to reduce the number of channels offered, a marketing and customer problem given that operators have built their competitive positions on offering a greater number of channel selections. Brown and Picard (2005, p. 336) advise: "The significance of the SDTV/HDTV issue is that, because the transmission of HDTV requires much more spectrum than SDTV, a trade-off is involved for any DTV system between a greater number of SDTV channels and a smaller number of HDTV channels (currently 4 to 6 SDTV channels can be transmitted within the amount of spectrum required for one HDTV channel)".

In addition to the foregoing, there is a lack of uniform standards: "Standardization, compatibility, interoperability and application portability are essential pillars in the erection of a successful and competitive European digital television system" (Nolan, 1997, p. 610). The National Association of Broadcasters estimates that the cost of the new equipment needed to carry HDTV while retaining the existing number of stations will be between $10 and $40 million, depending on station size (Pommier, 1995). Deployment also presents a problem in that the wider TV format is cut off on standard, squarer televisions, so consumers must switch to wide-screen receivers in addition to the special HDTV receiver needed to watch high-definition broadcasts, which can be received over cable or satellite (Brown and Picard, 2005, pp. 110-115). The HD receiver sold at £299 by UK broadcaster BSkyB, along with an added £10 for the service on top of the basic subscription charge, is another example of the factors inhibiting deployment (O'Brien, 2006).

HDTV basically represents what Dietrich Westerkamp, the worldwide director of broadcast standards at the electronics giant Thomson, the largest European manufacturer of HD satellite receivers, calls "… a chicken and egg situation" (O'Brien, 2006). This has been the case with HDTV in the United States as well as Europe, with broadcasters waiting to see enough purchasers of the new television sets before committing to equipment changes, and consumers waiting to see stations become available before committing to new HDTV sets. The answer could come from television manufacturers, who are starting to turn out HD-compatible sets; Samsung, for example, has announced that two-thirds of its flat-panel production will be HD compatible (O'Brien, 2006). Something will be needed to jump-start the HDTV situation, as at present the size of the potential viewing audience is too small to justify the conversion expense, explains Rudi Kuffner, spokesperson for Germany's largest broadcaster ARD (O'Brien, 2006).

Conclusion

Since the first transmission of face shapes by John Logie Baird in 1924 and the first television broadcast in 1926 (Horvitz, 2002, p. 101), television has come a long way. The introduction of digital television and radio broadcasting in 2000 has improved the viewing experience by providing a broader array of channels and better signal clarity and sound, while giving broadcasters an expanded marketing option of more to offer consumers in a highly competitive market. New flat-panel television sets and digital broadcasting have changed the ways in which consumers and broadcasters view the market, and mobile television systems and new digital radio channels offering playback and other features give entertainment another boost. The biggest new development, HDTV, has been around for over four years and is set to further enhance broadcasting and viewing once the financial justification reaches the required investment levels. Despite all of its problems, HDTV represents the next quantum leap in television. Technology keeps improving the sphere of entertainment, and it is ultimately consumers who benefit.

Bibliography

Al-Askary, O., Sidiropoulos, L., Kunz, L., Vouzas, C., Nassif, C. (2005) Adaptive Coding for OFDM Based Systems using Generalized Concatenated Codes. Radio Communications Systems, Stockholm, Sweden

Bogart, L. (1956) The Age of Television: A Study of Viewing Habits and the Impact of Television on American Life. Frederick Ungar Publishing, New York, United States

Brown, A., Picard, R. (2005) Digital Terrestrial Television in Europe. Lawrence Erlbaum Associates. Mahwah, N.J., United States

Chiariglione, L. (2000) MPEG-2. Retrieved on 2 April 2007 from http://www.chiariglione.org/mpeg/standards/mpeg-2/mpeg-2.htm

digitalradiotech.co.uk (2007) Comparison of the DAB, DMB & DvB-H Systems. Retrieved on 2 April 2007 from http://www.digitalradiotech.co.uk/dvb-h_dab_dmb.htm

dvdaust.com (2007) Aspect Ratios. Retrieved on 30 March 2007 from http://www.dvdaust.com/aspect.htm

dvb.org (2004) DVB-H Handheld. Retrieved on 2 April 2007 from http://www.dvb.org/documents/white-papers/wp07.DVB-H.final.pdf

Horvitz, L. (2002) Eureka! Stories of Scientific Discovery. Wiley, New York, United States

howstuffworks.com (2007b) How Digital Television Works. Retrieved on 31 March 2007 from http://www.howstuffworks.com/dtv3.htm

howstuffworks.com (2007a) Understanding Analog TV. Retrieved on 30 March 2007 from http://electronics.howstuffworks.com/dtv1.htm

Huff, A. (2001) Regulating the Future: Broadcasting Technology and Governmental Control. Greenwood Press, Westport, CT, United States

Kiiski, A. (2004) Mobile Virtual Network Operators. Research Seminar on Telecommunications Business, Helsinki University of Technology

Levy, D. (2001) Europe’s Digital Revolution: Broadcasting Regulation, the EU and Nation State. Routledge, London, United Kingdom

Lawrence Berkeley National Lab (2004) Electromagnetic Spectrum. Retrieved on 2 April 2007 from http://www.lbl.gov/MicroWorlds/ALSTool/EMSpec/EMSpec2.html

Lauterjung, J. (1999) An enhanced testbed for mobile DVB-T receivers. Retrieved on 2 April 2007 from http://www.rohde-schwarz.com/www/dev_center.nsf/frameset?OpenAgent&website=com&content=/www/dev_center.nsf/html/artikeldvb-t

Metallinos, N. (1996) Television Aesthetics: Perceptual, Cognitive, and Compositional Bases. Lawrence Erlbaum Associates. Mahwah, New Jersey, United States

Montgomery, H., Powell, J. (1985) International Broadcasting by Satellite: Issues of Regulation, Barriers to Communication. Quorum Books, Westport, CT., United States

Nolan, D. (1997) Bottlenecks in pay TV: Impact on market development in Europe. Vol. 21, No. 7. Telecommunications Policy

O’Brien (2006) Broadcasters shrink from taking HDTV leap. 30 August 2006

PBS.org. (2006b) Electronic TV. Retrieved on 30 March 2007 from http://www.pbs.org/opb/crashcourse/tv_grows_up/electronictv.html

PBS.org (2006a) Mechanical TV. Retrieved on 30 March 2007 from http://www.pbs.org/opb/crashcourse/tv_grows_up/mechanicaltv.html

PBS.org (2006b) Widescreen. Retrieved on 2 April 2007 from http://www.pbs.org/opb/crashcourse/aspect_ratio/widescreen.html

Pommier, G. (1995) High Definition Television (HDTV). Retrieved on 3 April 2007 from http://gabriel.franciscan.edu/com326/gpommier.html

Scott, R. (1998) Human Resource Management in the Electronic Media. Quorum Books, Westport, CT, United States

University of Toledo (2005) Television. Retrieved on 2 April 2007 from http://www.physics.utoledo.edu/~lsa/_color/31_tv.htm

Weber, J. (1961) General Relativity and Gravitational Waves. Interscience Publishers, New York, United States

White, D. (2007) What is DVB-T? Retrieved on 1 April 2007 from http://www.wisegeek.com/what-is-dvb-t.htm

Winston, B. (1998) Media Technology and Society: A History From the Telegraph to the Internet. Routledge, London, United Kingdom