Contemporary Indian Theatre and Habib Tanvir

Habib Tanvir and Naya Theatre are two inseparable names that will always be remembered in the modern theatrical scene in India. It has been a year since the death of Habib Tanvir, one of India's most popular Hindi and Urdu playwrights, a poet, theatre director, and actor, yet the majority of theatregoers in India still remember famous works of his such as Agra Bazar and Charandas Chor. The country will always recall him as a founding father of contemporary Indian theatre. But before we go into the details of his life and work, let us take a quick look at the evolution of theatre in India, which is broadly divided into three forms:

The traditional theatre,

The classical or Sanskrit theatre, and

The modern theatre.

Contemporary Indian theatre, as we know it today, has been widely influenced by changes in India's political landscape. During the 200 years of British rule, Indian theatre came into direct contact with Western theatre. With the consolidation of British power in Maharashtra, Tamil Nadu and Bengal, it was in the metropolises of Bombay, Madras and Calcutta that the British first introduced their style of theatre, modelled primarily on the London stage.

This genre of theatre began to expand in the 1850s as more enthusiasts started to stage their own plays in different languages in the Western style. With the growth of this new form, the older conventional forms of theatre came under pressure. Theatre started being ticketed from the 1870s, and by the early 20th century and the First World War it had become a product for sale, confined to the auditorium.

As the Indian freedom movement gained momentum, the creative side of theatre suffered a setback. The Communist Party of India was founded in the 1920s, and the Indian People's Theatre Association (IPTA), formed in 1943, worked as its cultural wing. IPTA took the initiative of portable theatre, built around various political agendas directed primarily against British rule. Indian theatre was turning into a medium of social and political change, concerned above all with reaching out to the common people.

Post-Independence, Indian theatre gained a fresh and broader outlook through an apt mixing of styles from medieval, Sanskrit, and Western theatre. This newfound vitality was further enhanced by the formation of the Sangeet Natak Akademi in January 1953 and of the National School of Drama, New Delhi, led by Ebrahim Alkazi, in 1959. This dramatic revival brought many pioneers to the theatrical front, among whom Habib Tanvir was one of the most popular playwright-directors in Hindi and Urdu. Along with B.V. Karanth (1928-2002), Ebrahim Alkazi (born 1925), Utpal Dutt (1929-1993) and Satyadev Dubey (born 1936), Tanvir shaped the structure of modern theatre in India.

The distinctiveness of Tanvir's theatre lay in showing how Indian theatre could blend traditional and contemporary elements. His theatre was never fixed to any one form as a whole. His works harnessed the skills and energies of folk performance and made them relevant to a secular, democratic perspective. The result was work as challenging as it was entertaining. During his five decades in theatre, Tanvir gave such memorable productions as Agra Bazar (1954), Mitti ki Gadi (1958), Gaon ka Naam Sasural, Mor Naam Damaad (1973), Charandas Chor (1975), Jis Lahore Nai Dekhya (1990), and Rajrakt (2006), many of which are regarded as classics of the contemporary Indian stage.

In popular culture, the name of Habib Tanvir is closely associated with folk theatre. However, Tanvir's engagement with the 'folk' was motivated by the folk performers themselves, who brought their own styles with them. His plays involved actors who could sing and dance. His project from the start had been to use elements of folk performance as an instrument to produce theatre that appealed to the general masses.

Habib Ahmed Khan was born in Raipur, Chhattisgarh, to Hafiz Ahmed Khan, who hailed from Peshawar. 'Tanvir' was a pen name he adopted later, when he started writing poetry. Raipur at that time was a small town surrounded by villages. As a child, Tanvir had many opportunities to visit the villages, interact with the residents and listen to the songs of the locals. He was so taken by those melodies that he even memorized some of them.

Tanvir completed his schooling at Laurie Municipal High School in Raipur and his BA at Morris College, Nagpur, in 1944. After pursuing his master's degree for a year at Aligarh Muslim University (AMU), Tanvir moved to Bombay in 1945 and joined All India Radio (AIR). He also joined the Progressive Writers' Association (PWA) and became an essential part of the Indian People's Theatre Association (IPTA) as an actor. When the Communist Party of India was banned, many IPTA members were jailed or went underground, and from 1948 to 1950 Tanvir single-handedly ran the organization.

In 1954, Tanvir moved to Delhi, worked with the Hindustani Theatre formed by Qudsia Zaidi, and authored many plays. It was in this period that he met Moneeka Misra, also an actor-director, whom he later married. In the same year, he produced Agra Bazar, based on the life and times of the 18th-century Urdu poet Nazir Akbarabadi, a predecessor of Mirza Ghalib. He cast students of Jamia Millia Islamia alongside local residents and folk artists from Okhla village, creating an ambience never seen before in Indian theatre. The play was staged not in a restricted space but in a bazaar, a marketplace. In 1956, on a Government of India scholarship, Tanvir went to England, where he trained at the Royal Academy of Dramatic Art (RADA) and the British Drama League, gaining exposure to Western drama and production styles.

He travelled extensively throughout Europe, watching theatre. In 1956 he spent about eight months in Berlin and saw numerous productions by Bertolt Brecht. This was Tanvir's first exposure to the German playwright-director's work, and he was quickly influenced by it. Simplicity and directness were the hallmarks of the Berliner Ensemble's productions, and they reminded Tanvir of Sanskrit drama, with its simplicity of technique and presentation. By the time he returned to India, he was determined to unlearn much of what he had learnt at RADA, thus following a path of development opposite to that of other Indian directors trained in Britain.

Soon after returning from Europe, he worked with folk artists of Chhattisgarh and tried to understand their forms and techniques. His first production after his return, Mitti ki Gadi, included six folk actors from Chhattisgarh in the cast. To give it a distinctly Indian form and style, he also used the conventions and techniques of the folk stage. The play, now performed entirely by village artists, is still considered one of the best modern renderings of the Sanskrit classic.

Tanvir and his wife Moneeka Misra founded Naya Theatre in 1959. During this stage of his career, Tanvir's interest in folk traditions and performers continued to grow, but it was not until the early 1970s that this association reached a new and more sustained phase.

Tanvir was not entirely satisfied with the way the folk actors worked, and he identified two 'faults' in his own approach to them. First, the rural artists could neither read nor write, nor could they remember predetermined movements on stage, so it was unwise to fix their movements in advance. Second, making them speak standard Hindi in Hindustani plays was a severe handicap that restricted their freedom of expression and creativity in performance.

To remedy these faults, the folk actors were allowed to speak in their native Chhattisgarhi dialect, and Tanvir worked intensively with the rural performers on their delivery and style of performance. To build their confidence on stage, he also gave them portions to deliver in their own traditional way. The second breakthrough came when Tanvir conducted a nacha workshop in Raipur in 1972, in which more than a hundred folk participants took part in a month-long exercise. During this workshop, three traditional comedies were selected and combined, and further improvisations linked them into a single story, leading to the full-length stage play Gaon ka Naam Sasural, Mor Naam Damaad.

This play marked a turning point in Tanvir's career, not only because it was a grand success in Delhi but also because he had finally found the form and style he had been searching for since his directorial debut. From then on, he continued to construct and cast his plays through improvisation. Through this method he produced what is arguably his best work, Charandas Chor (1975), which remains an evergreen favourite among theatregoers.

Tanvir's Naya Theatre worked almost entirely with folk actors, and even his occasional productions with other theatre groups were marked by the style he developed through this work. Yet this style was not 'folk theatre' in any sense. He remained an urban artist with a modern sensibility and outlook and a strong sense of history and politics. His unique style and content always reflected his commitment to the common people and their causes, owing largely to his involvement with the leftist cultural movement in his early years.

Tanvir's fascination with the "folk" was motivated by his belief that a huge reserve of artistic and creative energy is inherent in these traditions. He borrowed techniques, music and themes from them as and when required; his theatre never belonged wholly to any one form or tradition. From the beginning, his plays used elements of folk traditions as a tool, giving them new, contemporary meanings and creating an art form with the touch of the soil in it.

The performance styles of the actors remained rooted in their conventional nacha background, but the plays were not original nacha productions. While a nacha play usually has only two or three actors, the rest being background dancers and singers, Tanvir's plays involved a full cast of actors, some of whom could sing and dance. His productions always had a structure that one does not associate with the original form of the nacha.

Another significant difference is that while nacha songs mostly serve as musical interludes, in Tanvir's plays they were closely embedded as an important part of the play's theme. This is best displayed in some of his adaptations, such as The Good Woman of Szechwan (Shaajapur ki Shantibai) and A Midsummer Night's Dream (Kamdeo ka Apna Basant Ritu ka Sapna). Tanvir not only gave his poetic compositions the freshness of the original but also fitted his words to native tunes with ease and skill.

However, Tanvir was always careful not to set his own educated mind above the unlettered creative minds of his actors. One example of this approach is the way he wove his poetry into traditional tribal and folk music, retaining its imaginative power without in any way devaluing the latter. Another is the way he let his actors and their skills stand out by keeping the lighting and stage design uncomplicated.

Thus, in contrast to the stylized genre of drama on one side and 'traditional' theatre on the other, Habib Tanvir, with his own blend of tradition, folk creativity and critical consciousness, offered a fresh and innovative model for the field of dramatics. It is this rich blend that made his art so memorable.

Even after Tanvir's death, his innovative art form and style are being carried forward through newer productions of Naya Theatre. Seeing the recent performances of Naya Theatre actors in the film Peepli Live, one might say that Tanvir's art form is gradually crossing the boundaries of contemporary theatre and moving towards mainstream cinema.

Blackbird: Play Review and Analysis

Fifteen years after they were separated, Una comes looking for Ray at his workplace, having discovered his picture in a magazine. They once had an illicit relationship, and have been suffering the consequences ever since. What transpires next is a series of chilling twists and turns as details of their sordid past begin to unravel.

Blackbird is essentially a 75-minute duologue between two tormented souls, set in an extremely filthy and under-maintained office pantry which Ray calls a "pigsty". This intense confrontation, the focal point of the entire play, unfolds in a confined space. The claustrophobia is evident from the beginning of the play, when Ray keeps finding excuses to leave the pantry.

Director Tracie Pang's artistic choices add a dimension of compelling realism that would otherwise be missing from the near-claustrophobic confrontation onstage. The minimalist set design by Nicholas Li (just a dim fluorescent tube light, a dispensing machine, a clogged litter bin, a few lockers, one table and four chairs) echoes Ray's repressed life. The barbed wire lining the top of the set is a fitting reminder of the entrapment Una has felt throughout her life. The subtle sound design by Darren Ng (the constant buzz of a dully running office) also contributes to the play's mellow tone.

The most sublime scene in the play fully transports the audience to relive that fateful moment of elopement 15 years ago. The interplay between actors, set, lights and sound is at its best. Darren Ng’s sound design (seagulls on a beach, a bell tolling midnight) balances perfectly with the action onstage, teasing out the nuances during that scene. The projection of symbolic images on the pantry windows also creates a stunning effect.

It is no surprise that David Harrower's script has received the critical acclaim it has. The beauty of the script lies in its emotive capture of the juvenile mindset. The lines written for Una's flashback to her younger days (the yearning thoughts, the defence mechanisms, the way a young girl would see the world) are spot-on and succinct. I am impressed by how Harrower slowly teases the audience, revealing morsels of new information about the pair's past as the plot unfolds and thus keeping the audience constantly engaged.

Every line of dialogue between Una and Ray is fraught with a dark emotion that blurs the boundaries between right and wrong. The audience plunges deep into the damaged and disturbed psyches of Harrower's two characters, who seek answers but arrive at none. Like most plays dealing with illicit affairs, Blackbird leaves the audience questioning: Who is the culprit? Who is the victim? Is there necessarily a clear-cut right and wrong in their relationship? It is Una who discovered Ray's whereabouts and sought him out, but to what purpose: revenge, reconciliation or resolution?

Augusto Boal, the founder of the Theatre of the Oppressed, sees theatre as "the passionate combat of two human beings on a platform" (Boal, 1995). Boal's approach attempts to substitute empowerment for passivity, dialogue for monologue. Monologue creates a relationship of oppressor and oppressed, as the person talking forces his counterpart into listening. Any relationship can tend towards monologue: between a man and a woman, one tends to become the actor and the other the spectator. A human relationship should be a dialogue, but often one party becomes active and the other passive. Oppression, then, is this: every dialogue that becomes a monologue (Boal, 1979).

In Blackbird, the roles of oppressor and oppressed are constantly reversed as Una and Ray attempt to assume power over each other. Their confrontation starts at a frenetic pace with Una as the oppressor, circling Ray like a vulture and forcing him into a corner with words that cut like a scalpel. Ray keeps finding excuses to leave the pantry, as he suspects Una of hiding a weapon.

However, the tables are turned (literally) when Ray starts to justify his wrongdoings in an assertive tone, leaning towards Una with clenched fists, while she tries to avoid him by facing the wall. During Una's flashback monologue, she clutches her bag tightly as she recalls her suffering, while Ray collapses into a chair, burying his head in his palms in remorse. The tug-of-war continues as they dig up the past through passion-laden monologues and exchanges.

Blackbird is a dialogue of hurt and wayward passion, told with superb onstage chemistry. Credit goes to Daniel Jenkins and Emma Yong for digging deep to produce extraordinarily layered performances. Their excellent turns bring Harrower’s deservedly-acclaimed script to life. I specifically wish to highlight Emma Yong’s performance.

Yong's connection to her character Una is exceptional. She shows remarkable versatility as a 27-year-old who experienced deep tragedy as a child. The illicit affair resurfaces after 15 years in which Ray has moved on to a new life while Una has been left to drown in shame. She remains stone-faced almost the entire time, but her eyes express a myriad of emotions, from hatred to madness to confusion to yearning.

Yong's tears of conflicted pain during her flashback monologue are beautifully heart-wrenching. She ably navigates the complex psychology of Una's character and conveys the emotional range required of a character who had sexual intimacy with a man at a tender age. One minor flaw would be her pace of line delivery, which sounds rushed at times.

Jenkins plays his character Ray with equal passion. His pace, in contrast to Yong's, is more measured. He delivers his performance with gusto, engaging the audience and leading them to sympathise with his plight as the drama unfolds. I was surprised that Jenkins was not initially cast as the male lead.

Blackbird was postponed from March 2010 to September due to the unusual circumstance of actor Patrick Teoh quitting the production. Teoh felt that he was unable to fulfil the demands of the role. After watching the play, one could probably see where he was coming from.

It is essentially just two people in the same space for 75 minutes, but truth be told, it did not feel that long at all. The playing time is filled to the brim with palpable tension and raw emotion, and when the cliffhanger climax ended with a truly unexpected twist, I found myself on the edge of my seat.

Quoting Una’s opening line: “Shocked?”

Yes indeed.

Blackbird appears to be a simple situation begging for a simple judgment: “It was abuse, was it not?” But the complicated tangle of emotions leaves one with a feeling of disquiet and unease which is hard to shake off, even after the curtain falls.

Plastic Theatre in A Streetcar Named Desire

1. Introduction

“I don’t want realism. […] I want […] magic!” (Williams, Streetcar Named Desire 130)

It is Blanche DuBois who speaks these words in Tennessee Williams' A Streetcar Named Desire. In this drama from 1947, two worlds clash, embodied by the characters of Blanche DuBois and Stanley Kowalski. The conflict between realism and a romantic view of things is visible throughout the play, increasing from scene to scene, and reaches its peak in Stanley's rape of Blanche in Scene Ten. After that suppression of romanticism, with Blanche committed to an asylum, one might think that the realistic point of view triumphs; but in my opinion her departure and her behaviour, still relying on the "kindness of strangers" (Williams, Streetcar Named Desire 159), suggest the survival of her fantasy world. She simply "escapes from the demonic night world and completes the cycle of romance" (Thompson 28). Yet I do not think that her illusions win over Stanley's realism, as she is "a Romantic protagonist committed to the ideal but living in the modern age, a broken world" (Holditch 147).

In Williams' play A Streetcar Named Desire, things are not always called by their names; instead, he creates a sense of indirectness. With the aid of telling names and the particular attitudes of the characters, he points to a truth behind things. This is not restricted to the protagonists and their lines but extends to the play itself, including the stage directions. The feeling of hidden truths is supported by effects and motifs, for example the use of light and music or the gestures of the actors. This realization of a play on stage is called the "Plastic Theatre", as the audience becomes more involved through the use of different senses, which leads to a vivid impression of the protagonists' feelings and thoughts. Williams himself coined the term "Plastic Theatre" in his production notes to The Glass Menagerie. There he writes of a "conception of a new, plastic theatre which must take the place of the exhausted theatre of realistic conventions if the theatre is to resume vitality as a part of our culture" (Williams, Glass Menagerie 4).

2. Definitions

To provide a solid basis for the following discussion of the characters of A Streetcar Named Desire and their points of view, I want to introduce and briefly explain the two terms "realism" and "romanticism". Both can also be seen as epochs in American literature, but here I focus on their general meaning. In addition, I want to provide further information about the idea of the "Plastic Theatre".

2.1. Realism

In the Longman Dictionary of Contemporary English, realism is described as "accepting and dealing with life and its problems in a practical way, without being influenced by feelings or false ideas". This means that one takes things as they are, evaluating situations only on the basis of visible facts, not relying on false hopes or following unrealistic ideals. From a realistic viewpoint, human reason has a higher value and is more important than emotions or spontaneous impressions.

2.2. Romanticism

The romantic perspective stands in contrast to the realistic one. Romanticism is associated with "highly imaginative or impractical" (Longman Dictionary, "Romantic.") attitudes, admiring ideals that are unrealistic or even unachievable. In romanticism, feelings and emotions are valued more highly than rational thinking and human reason, not only in matters of love but also in the way of dealing with situations and problems. Impressions are based not on visible facts but on ideal conceptions, which may sometimes be quite fictional or utopian.

2.3. The Plastic Theatre

“To express his universal truths Williams created what he termed plastic theater, a distinctive new style of drama. He insisted that setting, properties, music, sound, and visual effects – all the elements of staging – must combine to reflect and enhance the action, theme, characters, and language” (Griffin 22).

Like Griffin, many authors, including Tennessee Williams himself, have tried to explain the Plastic Theatre, yet it was barely discussed in public; after establishing the idea in the production notes to The Glass Menagerie, Williams never publicly discussed it again. But from that moment on, his plays were highly theatrical, with lyrical and poetic language, scenic descriptions that "draw on metaphors from the world of art and painting", and a markedly symbolic use of sound and light (Kramer).

3. A Streetcar Named Desire: The Truth Behind Things

In Williams' play A Streetcar Named Desire, the audience gets the impression that facts are stated not just within the text but between the lines. The characters are often described better through their behaviour and gestures than through their actual lines. From scene to scene it becomes clearer that Blanche and Stanley embody two sharply contrasting views of life: extreme romanticism and down-to-earth realism. This is also visible through various symbolic motifs which recur throughout the play. Combined with a highly evocative use of music and light and with many telling names from the very beginning, the whole play seems conspicuously allusive.

3.1. Romanticism and Realism in A Streetcar Named Desire

We are presented in A Streetcar Named Desire with “two polar ways of looking at experience: the realistic view of Stanley Kowalski and the ‘non-realistic’ view of his sister-in-law, Blanche DuBois” (Kernan 17). Williams brings the two views into conflict immediately.

3.1.1. Blanche DuBois as the Romantic Protagonist

When the audience meets Blanche, her appearance is described as "incongruous to this setting" (Williams, Streetcar Named Desire 8). In Scene One she arrives at Elysian Fields, where her sister Stella and her brother-in-law Stanley Kowalski live. Her clothes are white and fluffy, looking very delicate, "as if she were arriving at a summer tea or cocktail party in the garden district" (Williams, Streetcar Named Desire 9). She is shocked by her sister's living quarters and calls it a "horrible place" (Williams, Streetcar Named Desire 13). The reader is instantly confronted with her distorted self-perception when she asks Stella to turn off the "merciless" (Williams, Streetcar Named Desire 13) light, because she does not want to be looked at in bright light. This behaviour is visible throughout the play: Blanche always tries to avoid harsh light and glare. Her vanity about her looks is also apparent in the way she presents her figure to her sister, fishing for compliments and claiming that she has the same figure as ten years ago (Williams, Streetcar Named Desire 18). She makes very romantic statements throughout the play, e.g. concerning the pretty sky, where she "ought to go […] on a rocket that never comes down" (Williams, Streetcar Named Desire 44).

When the relationship between Blanche and Mitch, a friend of Stanley's, becomes more intimate, the audience gets an impression of Blanche's romantic self-conception. She calls him her "Rosenkavalier" and wants him to bow, just as the gentlemen of the Old South would do (Williams, Streetcar Named Desire 90). Although she was married once, she tries to behave as if she were untouched, a virgin, which she obviously is not. When Mitch says that he cannot understand French, she asks, "Voulez-vous coucher avec moi ce soir?" ("Would you like to sleep with me tonight?") (Williams, Streetcar Named Desire 95). The information about her past, that she entertained many men in a hotel called the Flamingo, and the way she speaks about her relationship with Mitch, that she does not love him but just wants a man with whom she can rest, confirm this for the audience.

So Blanche's character can be described as a deeply romantic one. For her, outward appearance is very important, and to appear delicate and pure she is not afraid of telling lies. She is a fake, a person who likes to appear better than she actually is, living in a fantasy world that has nothing to do with real life. "Already damaged by […] the harsh realities of disease and death, Blanche's Romanticism is reduced in some moments to nothing more than sentimentality" (Holditch 155).

3.1.2. Stanley Kowalski as the Realistic Protagonist

Stanley Kowalski appears as the embodiment of a "real man": opposed to or ignorant of the transcendent, and intensely sexual and physical. When the audience first encounters him, he carries a package of meat and throws it to his wife Stella. He is described as "strongly, compactly built. Animal joy in his being is implicit in all his movements and attitudes" (Williams, Streetcar Named Desire 24). His relationship with his wife is a very sexual one: Stanley treats her in a very physical way, and Stella states that she is strongly attracted to him. When Blanche leaves for the asylum and Stella cries, he consoles her by touching her in a sexual way (Williams, Streetcar Named Desire 160), which is characteristic of their relationship.

His view of things is a thoroughly realistic one. When Blanche informs Stanley and Stella that she has lost their family plantation, Belle Reve, Stanley suspects that she did not in fact lose it but perhaps sold it without giving them their share of the money. For him, this would be an affront against himself, since the property of his wife Stella is his own, too. He thinks Blanche bought jewelry and clothes, like a "solid-gold dress" and "fox-pieces" (Williams, Streetcar Named Desire 32), with the proceeds from the plantation. In reality, the furs are "inexpensive summer furs" (Williams, Streetcar Named Desire 33) and the jewelry is glass. This mistake is "the mistake of the realist who trusts to literal appearance, to his senses alone" (Kernan 18).

Stanley's realistic view of things is the one which works in the modern, broken world. He embodies this harsh world with all its physical, material and sexual aspects. His strong presence and his human reason are all he needs to get along in the real world.

3.1.3. Conflict between Romanticism and Realism

The two points of view clash from the beginning of the play until the end: Blanche embodies the romantic one, whereas Stanley stands for realism.

“In the course of the play Williams manages to identify this realism with the harsh light of the naked electric bulb which Blanche covers with a Japanese lantern. It reveals pitilessly every line in Blanche’s face, every tawdry aspect of the set. And in just this way Stanley’s pitiless and probing realism manages to reveal every line in Blanche’s soul by cutting through all the soft illusions with which she has covered herself” (Kernan 18).

Kernan describes the relationship between the two protagonists very vividly. Stanley does not treat Blanche with much respect, which is visible in the way he talks about her bathing and her way of dressing. Blanche, in turn, has an aversion to him, calling him "sub-human – something not quite to the stage of humanity yet" (Williams, Streetcar Named Desire 74). For her, Stanley is a threat, because he is able to destroy her fantasy world and to uncover her past and her real face. The conflict increases from scene to scene and reaches its peak in the rape of Blanche. Stanley has to prove his dominance and therefore rapes her, forcing his reality upon her. But she is not broken after the rape; she withdraws even deeper into her fantasy world, as shown by the way she trusts the doctor, holding tight to his arm, still depending on "the kindness of strangers" (Williams, Streetcar Named Desire 159).

Finally the audience gets the impression that the “realistic point of view has the advantage of being workable. Blanche’s romantic way of looking at things, sensitive as it may be, has a fatal weakness: it exists only by ignoring certain positions of reality” (Kernan 18).

3.2. The Plastic Theatre in A Streetcar Named Desire

Williams tried to communicate circumstances not only through the acting of the protagonists but also through symbols and various effects. "The setting, lighting, props, costumes, sound effects, and music, along with the play's dominant symbols, the bath and the light bulb, provide direct access to the private lives of the characters" (Corrigan 50). The many telling names in the play give additional information and reinforce the impression of a truth behind things. In the following subchapters I discuss, by way of example, Blanche's bathing, the use of music and sounds, and the telling names.

3.2.1. Blanche’s Bathing

Blanche bathes very often in this play; she obviously wants to wash herself clean of her past. After bathing, she feels "all freshly […] and […] like a brand new human being" (Williams, Streetcar Named Desire 35). Every time she is confronted with the real, brutal world, she wants to escape into her dream world, which is strongly connected with bathing. In Scene Three, when the men hold their poker night and Stanley "gives a loud whack of his hand" on Stella's thigh, she instantly says, "I think I will bathe" (Williams, Streetcar Named Desire 49). In Scene Seven she bathes again, "little breathless cries and peals of laughter are heard as if a child were frolicking in the tub" (Williams, Streetcar Named Desire 110), while Stanley tells Stella about Blanche's past and her affairs with a seventeen-year-old boy and many other men. The song Blanche sings while bathing is It's Only a Paper Moon, described as a "saccharine popular ballad which is used contrapuntally with Stanley's speech" (Williams, Streetcar Named Desire 106). Especially the verse "But it wouldn't be make-believe / If you believed in me!" (Williams, Streetcar Named Desire 107) is very ironic, because Blanche does not seem trustworthy at all, and so the song even accentuates her disreputable past. After the rape, she bathes again in Scene Eleven and is very worried about her hair, as if the soap would not wash out completely.

The many baths in the play show that Blanche will never be done with bathing, because she is always confronted with the real world and cannot cleanse herself of her past. Bathing gives her “a brand new outlook on life” (Williams, Streetcar Named Desire 115), but it cannot really change her life.

3.2.2. Music and Sounds

The use of music and sounds is also very theatrical in the play. The Blue Piano “expresses the spirit of the life which goes on” (Williams, Streetcar Named Desire 6) and is always heard when the conflict between the real world and Blanche’s fantasy world seems to intensify. It is heard, for example, when Blanche arrives at Elysian Fields, and grows louder when she informs Stella about the loss of Belle Reve as well as when Stanley tells her that Stella is going to have a baby. It also suggests the fall of Blanche, as it swells when Stanley rapes her and afterwards when he consoles Stella, who cries because of Blanche’s leaving.

Another piece of music, strongly connected with Blanche’s past, is the polka. It is always heard when Blanche talks about her dead husband, and it emerges for the first time when Stanley mentions that Blanche was married once (Williams, Streetcar Named Desire 28). She tells Mitch the story of her husband’s death: he shot himself after dancing with Blanche in a casino. He was homosexual; she had discovered him with another man and, while they danced, told him that he disgusted her (Williams, Streetcar Named Desire 103), whereupon he shot himself. The polka also appears when Stanley gives Blanche a ticket back to Laurel, where she used to live, and when he takes Stella to the hospital while Blanche remains in the flat. The polka thus foreshadows Blanche’s downfall, as it is always heard when she is haunted by her past.

3.2.3. Telling Names

There are various telling names in Williams’s play. Blanche’s name itself is quite telling: “blanche” is French for “white”, which is very fitting for her character. The name of her plantation, “Belle Reve”, is also French, meaning “beautiful dream”. Blanche behaves as if she still lived in this dream, refusing to face the truth and the real world.

There are many more telling names, but I want to concentrate now on perhaps the most important one, the “Streetcar Named Desire” that gives the play its title. Blanche takes the “streetcar named Desire” (Williams, Streetcar Named Desire 9) to get to the apartment of the Kowalskis. This is telling in itself: the audience gradually learns about her past and that she left Laurel a broken woman, yet her desire to live as an elegant, trustworthy and honest woman is still present. So she tries to live a life that is, for her, desirable, and she hopes to find it in New Orleans.

Through the telling names, which are present from the very beginning of the play, the use of music, and the recurring symbols, the play appears highly theatrical and plastic. The audience gets an impression of the characters and the circumstances in various ways.

4. Conclusion

In Tennessee Williams’ A Streetcar Named Desire, the conflict between Romanticism and Realism, embodied by the two protagonists Blanche DuBois and Stanley Kowalski, is the major theme of the play. With the aid of the characterization of these protagonists and the explanation of the conflict between them I was able to verify this thesis. The two characters are strongly polarized, as is visible in their points of view, their behavior and their gestures. But in the end only one point of view is workable, namely Stanley’s realistic one. Blanche lives in her dream world even at the end, after her rape. Stanley is not able to crush her entirely, but she can only survive in her romantic fantasy world, which leaves the impression that she cannot exist in the modern age.

The truth behind things in this play is also made visible through the “Plastic Theatre”. Williams exposed this hidden truth by the use of music and sounds, symbols and motifs, and telling names. My observations on Blanche’s bathing, on the Blue Piano and the polka, and on the telling names served as examples of this plastic and sculptural theatre; through them I showed the existence of a truth behind things and that the term “Plastic Theatre” fits A Streetcar Named Desire.

A Raisin in the Sun by Lorraine Hansberry | Analysis


“To be young, gifted and black” is a phrase commonly associated with Lorraine Hansberry; it comes from the collection of autobiographical pieces her ex-husband assembled in her honor after her death. Throughout the years, individuals from all walks of life have come to America with dreams of a better life: social, educational and economic opportunity as well as political and religious freedom. For some, the phrase “life, liberty and the pursuit of happiness” (Mitchell), which to many embodies the American dream, becomes reality; for many others it remains a harsh reminder of a dream they are never able to reach. Hansberry wrote the play A Raisin in the Sun to show that supporting friends and family members through hard and trying times matters, and that if you work hard and truly believe in yourself, dreams can come true in one form or another. The American dream has a different meaning for each individual, regardless of age, race or gender. A Raisin in the Sun is important because it engages the continued debate over racial and gender issues, both in the period when the play was written and in the present day.

Lorraine Hansberry was born in Chicago in 1930. In her early years, Hansberry’s parents sent her to public school rather than private school as a protest against segregation laws. In 1938, the Hansberrys were one of the first African American families to move into an all-white neighborhood. After they moved in, the neighbors threatened them with violence and legal action, but the family would not put up with it, and Hansberry’s father later took his case all the way to the Supreme Court. When she went to college, she studied at multiple schools, including the “University of Chicago; at the Art Institute of Chicago; at the New School of Social Research in New York; in Guadalajara, Mexico; and at the University of Wisconsin” (Encyclopedia of World Biography on Lorraine Vivian Hansberry). While attending college, she saw a school performance of a play by the playwright Sean O’Casey and decided to become a writer. In 1950 she dropped out of college and moved to New York, where she took writing classes at the New School for Social Research and worked as an associate editor of Paul Robeson’s newspaper Freedom. During this period she met many leading African American intellectuals, activists and writers, among them Langston Hughes. In 1953 Hansberry married Robert Nemiroff, a white graduate student in Jewish literature and a songwriter, who took part in the political events of the time, protesting discrimination at New York University. Nemiroff gained success with his hit song ‘Cindy, Oh Cindy’, and after that success, and Hansberry’s many part-time jobs, she was able to settle down and devote herself entirely to writing. Her writing eventually took the form of a play, whose title came from a poem by Langston Hughes called “Harlem”.
A Raisin in the Sun was named best play of the year, making Lorraine Hansberry the first African American and the youngest American to win the New York Drama Critics Circle Award. “She used her new fame to help bring attention to the American civil rights movement as well as African struggles for independence from colonialism” (A Raisin in the Sun). Years later, Hansberry had marital problems with Nemiroff and the two divorced in 1964. Hansberry lived long enough to see only one other play besides A Raisin in the Sun produced. On January 12, 1965, she died of pancreatic cancer at the young age of thirty-four. She was one of the first playwrights to portray real African American characters and the struggles of day-to-day African American life, drawing inspiration in the play from her own family’s legal battles against segregated housing laws during her childhood.

The working title of A Raisin in the Sun was originally ‘The Crystal Stair’, after a line in an earlier poem by Langston Hughes, the African American playwright, poet, novelist and short story writer. Hansberry later changed the title, drawing on another of Hughes’s poems, which asks:

“What happens to a dream deferred?

Does it dry up like a raisin in the sun?

Or fester like a sore-and then run?

Does it stink like rotten meat?

Or crust and sugar over-like a syrupy sweet?

Maybe it just sags like a heavy load.

Or does it explode?” (Hughes)

Finished in 1957, A Raisin in the Sun was the first drama by a black woman to be produced on Broadway. It took two years after completion; in March 1959 the play opened on Broadway at the Ethel Barrymore Theatre. From there the production moved to the Belasco Theatre and ran for 530 performances, earning many awards along the way. The play is unique in many aspects and covers many important issues: it was the first play on Broadway written by an African American woman, the first directed by an African American, and it featured a largely black cast. It gained huge success even though its producer, Phil Rose, had never produced a play before and large investors were initially uninterested. Everywhere it was shown, in New York, Chicago and Philadelphia, audiences loved it, and it quickly became a major success. With that fame it enjoyed a long theatrical run, was turned into a movie, and was later adapted into a Broadway musical.

The play A Raisin in the Sun is important in many different aspects of everyday life. Growing up where and when she did, Lorraine Hansberry knew all about disappointment, false hope and despair. Her ancestors likewise knew hard times: exploitation, frustration, and dreams turning into dreadful nightmares as they came north hoping to find a better life. Hansberry records the history of those nightmares in A Raisin in the Sun, portraying a classic story of the Younger family struggling to realize their dreams and escape ghetto life. Her screenplay tells the story of one family, but it actually reveals the plight of all families and individuals who have experienced, or are now living in, despair, who have lost hope and whose dreams and goals have failed. Her immense dedication to this play gives it its power for all who read it and for those who deal with such struggles in everyday life. The play is an excellent choice for many different types of classes, such as literature, drama, history and film. Its action, dialogue and cast of dynamic characters hold the attention of audiences from high school students through college students up to adult readers. Young people endure many frustrations with their lifestyle and rebel against parents, which can bring little gratification at times. Yet beneath the cynical surface of the adolescent hides the one who wants to believe that dreams really do come true: the heartbeat of the true idealist.

“Through Hansberry’s careful craftsmanship, the universal themes of the importance of dreams and the frustration of dreams deferred, the strength of family, the importance of not selling out, the problems of conflicting expectations, the belief that love and trust will win over deceit and selfishness, and the dangers of prejudice and stereotyping are as powerful today as they were nearly four decades ago when she wrote the play” (TeacherVision).

Adolescents come from many different families, with different types of problems and family structures, so they need exposure to the values shown within a traditional family, and this play delivers that without lecturing or preaching. Another reason A Raisin in the Sun is important is its historical value. The play reflects racial attitudes and the challenges and conflicts they provoked, from the 1950s to the present. Prejudice appears in many forms, and the characters in Hansberry’s play, along with the screenplay’s visuals, bring this theme to life as little else could.

This play represents life in the racial or ethnic community in many different and unique ways. It is considered a turning point in American art because it addresses so many important issues and conflicts of the 1950s, when it was produced. The 1950s brought the stereotypical age of the happy housewife and portrayed African Americans as comfortable with their inferior status. These stereotypes fed the social resentment that would eventually find public voice in the civil rights movement and in later movements such as the feminist movement of the 1960s. The play was also a revolutionary work for its time, as shown by the way Hansberry created the African American Younger family, offering one of the first real and honest depictions of a black family on an American stage. Usually, African American groups or individuals in a play were cast in typical ethnic stereotypes and displayed as minor and comedic, but this play portrays a united black family in a realistic light, far from the comedic style most people might expect. Hansberry uses black dialect throughout and introduces important issues, questions and concerns that many families faced then and still face today, such as poverty, discrimination, and the creation of African American racial identity. The play looks at the racial tensions between the black and white communities in addition to exploring tensions within the black community itself, as when the family, striving toward its goals despite the poverty and racism all around, puts a down payment on a house in an all-white suburban neighborhood and is shortly afterwards confronted with racism in an unusual form from the white community.
Throughout the play, Hansberry asks difficult and thought-provoking questions about assimilation and finding one’s true identity. One way this is shown is by exposing Beneatha to a trend of celebrating African heritage through the character of Asagai (her boyfriend and perhaps future husband). The play also addresses feminist questions about another important issue, marriage. The topic comes up for Beneatha toward the end, and Hansberry portrays marriage as not being necessary for every woman, suggesting that a woman should pursue ambitious career goals rather than give up on her dreams before she has a chance to fight for them. Hansberry also approaches the abortion debate, touched on at a time when abortion was not allowed and still a source of controversy today. That this play was written and produced when it was, by someone young, black and female in the 1950s, was a huge success in itself. It showed how much she had overcome as a woman, how much people were beginning to accept change, and how people were starting to engage with important topics that needed to be addressed. No matter a person’s age, race or gender, the play shows how important the idealism of a single person is in the pursuit of dreams, and how crucial dreams are in an individual’s life. While the play focuses primarily on dreams, and on how dreams drive and motivate the main characters’ actions, emotions and feelings, it also reveals what happens to people out in the real world: dreams turn sour when people place stress and importance on objects rather than on family pride and happiness.
As the main point of this play says, if everyone attempts to support and encourage their family, focusing not only on themselves but acting selflessly, they can lift each other up through the toughest of times. This can happen if you never give up hope in each other and never give up on your own dreams.

This play focuses on major issues such as racism between white and black communities, abortion, marriage, assimilation and finding one’s true identity, but in the end it boils down to a timeless point: dreams are what make each person, white or black, push on in life and live each day as if it were their last. A Raisin in the Sun is central to the continued debate over racial and gender concerns, making it a critical cultural document from an essential period of American history.

MLA Citation

“A Raisin in the Sun.” 2009. SparkNotes. 15 November 2009 .

“A Raisin in the Sun.” 2000-2009. TeacherVision. 14 November 2009 .

“A Raisin in the Sun: The Quest for the American Dream.” EDSITEment. 4 December 2009 .

“Encyclopedia of World Biography on Lorraine Vivian Hansberry.” 2005-2006. BookRags. 14 November 2009 .

Hughes, Langston. “Harlem (A Dream Deferred).” Lorraine Hansberry 15 November 2009: 1040.

Liukkonen, Petri. Lorraine Hansberry. 2008. 14 November 2009 .

“Lorraine Hansberry.” 15 November 2009: 1037.

Mitchell, Diana. “A Teacher’s Guide to Lorraine Hansberry’s A Raisin in the Sun.” A Teacher’s Guide to the Signet and Plume Editions of the Screenplay Lorraine Hansberry’s Raisin in the Sun. 2 December 2009 .

Moon, Andrea and Cathy Hartenstein. “A Raisin in the Sun Study Guide.” The Cleveland Play House. 4 December 2009 .

Wireless networks: Security

WIRELESS networks, due to ease of installation, cost benefits and the capability of connectivity, and hence communication, anywhere, have become the most popular way of setting up a network in this 21st century. With the increasing need for mobile systems, the electronics market has been flooded with laptops, PDAs, RFID devices, healthcare devices and wireless VoIP (Voice over IP) devices that are Wi-Fi (Wireless Fidelity) enabled. With the 3G (Third Generation) and 4G (Fourth Generation) cellular wireless standards, mobile phones are also Wi-Fi enabled, with very high speeds provided for data upload and download. Nowadays malls and public areas, not to mention entire cities, are Wi-Fi capable, enabling a person to access the Internet, or even contact a remote server at the office, from anywhere in the city, or from a mobile phone while just strolling down the road.

But as every good technology has its drawbacks, so do wireless networks. Just like wired networks, they are prone to intruder attacks, more commonly known as wireless hacking, which compromise the network’s security, integrity and privacy. The basic reason is that when wireless networking was first introduced, it was considered to have security and privacy built into the system while transmitting data. This misconception arose because wireless transmitters and receivers used spread-spectrum systems, whose signals occupy a wide transmission band. Since the RF (Radio Frequency) receivers of the time could only intercept signals in a narrow transmission band, these wireless signals were considered to be in the safe zone. But it did not take long for devices to be invented that could intercept these wireless signals as well; hence the integrity of data sent over wireless networks could easily be compromised. As technology has developed, the methods by which a network can be attacked have become more vicious.

Fig-1: WLAN (Wireless Local Area Network)

Securing wireless networks against such vicious attacks has therefore become a priority for the network industry. Not all networks are equally secure; the required security depends on where the network is used. For example, if the requirement is to provide a wireless hotspot in a shopping mall, security is of little concern, but a corporate network will have its own security authentication and user access control implemented.

II. WHY WIRELESS NETWORKS ARE PRONE TO ATTACKS

There are a number of reasons why wireless networks are prone to malicious attacks. These are the most challenging aspects to be considered when a secure wireless network has to be established.

a) Wireless networks are open networks: There is no physical medium protecting these networks. Any packet transmitted or received can be intercepted if the receiver is tuned to the same frequency as the transmitter used by the wireless network. There is also a common misconception that if authentication and encryption are properly used the network cannot be compromised. But what about the messages sent back and forth before authentication and encryption come into play?

b) Distance and location: The attacker can attack from any distance and location, limited only by the power of the transmitter. Special devices have been designed that can attack even short-distance networks such as Bluetooth.

c) Identity of the attacker: The attacker can always remain unidentified, by using a series of antennas or other compromised networks before reaching the actual target. This makes wireless network attackers very difficult to track.

Some of the reasons such attacks are so common are the easy availability of information from none other than the Internet, cheap and easy-to-use technology, and of course the motivation to hack.

III. WIRELESS HACKING – STEP BY STEP

To understand the security protocols currently in use for wireless networks, it is first important to understand the methods by which a weak network is attacked by a hacker. These are also known as wireless intrusion methods.

A. Enumeration:

Also known as network enumeration, this is the first and foremost step in hacking: finding the wireless network. The target could be a specific network, or any random weak network that can be compromised and used to attack other end systems or networks. This is achieved using network discovery software, nowadays available online in plenty; to name a few, Kismet and Network Stumbler.

To gather more information about the network, the packets sent and received by it can be sniffed using network analyzers, also known as sniffers. A large amount of information can be obtained this way, including IP addresses, SSIDs and even sensitive information such as MAC addresses, the type of information carried, and the other networks the compromised end system is connected to.
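To illustrate how much a passive sniffer can recover, the sketch below parses the addresses out of a single raw Ethernet-style frame. The frame bytes are fabricated for the example; a real sniffer would capture frames from a monitor-mode interface rather than build them by hand.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Extract destination MAC, source MAC, and EtherType from a raw frame.

    Shows how much a passive observer can read from any unencrypted frame:
    the header fields travel in the clear.
    """
    dst, src, ethertype = struct.unpack("!6s6s2s", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype.hex()

# Fabricated frame: broadcast destination, made-up source, IPv4 EtherType.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# -> ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0800')
```

The same principle extends to SSIDs and management frames in 802.11: anything not encrypted is readable by whoever captures it.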

Yet another problem is the use of network mappers, which can be used to find the servers that run these compromised networks, and hence to attack those servers as well, affecting proper functioning and information transfer between the servers and the other networks connected to them.

B. Vulnerability Assessment:

This is mainly done by the hacker using a vulnerability scanner. After finding the network he wants to attack, the hacker uses such a program to detect the weaknesses of the computer, computer systems, networks or even applications. The intruder then decides on the most promising means of entry into the network.

C. Means of Entry:
IV. TYPES OF THREATS & ATTACKS

A. Eavesdropping and Traffic Analysis:

This form of attack makes use of weak encryption in the network and always compromises the network’s integrity and security. Attacks such as war driving, war chalking, packet sniffing and traffic analysis all fall under this category.

B. Message Modification:

These attacks are mainly used to modify the data sent across a network. The modification might give wrong information or add malicious content to a data packet sent from one station to another. This compromises the integrity and privacy of the data.

C. Rogue Devices:

These could be devices such as APs or application software programs that have been compromised by the intruder and made to function at his or her command. Such devices can compromise the integrity of the network as well as the data sent across it. They can also launch replay attacks and associate the network with malicious websites or content.

D. Session Hijacking:

This attack occurs after a valid session has been established between a node and the AP. The attacker poses as a valid AP to the node trying to establish the connection, and as a valid node to the AP. The attacker can then send malicious or false information to the node with which the connection has been established, while the legitimate node believes that the AP has terminated the connection. The hacker can then use this connection to obtain sensitive information from the network or the node.

E. Man In the Middle Attacks:

This is similar to session hijacking, but in this case a rogue AP acts as a valid client to the legitimate AP and as a valid AP to the legitimate client. Once this is established, the rogue AP can access all information passing through it, intercept communication, and send malicious information to other clients.

These are just a few of the security threats and attacks in wireless environments. With advancing technologies there are many more possible security threats that these networks may face in the future.

V. BASIC REQUIREMENTS IN WIRELESS NETWORK SECURITY

Given the vulnerability of wireless networks, security and the countering of such malicious attacks have become top priorities for enterprises and corporations as well as for IT research. There are many points to be considered where the security of a network is concerned, the most important of which are authentication, accountability and encryption.

A. Authentication:

Authentication is familiar to anyone using a network in the workplace, or even accessing email on the Internet, and it is the very first step in building a secure wireless network. There are many different ways of authenticating, and many different tools and methods have been used over the years to make this primary process more reliable and foolproof. Some of the most widely used methods are:

a) Username and password combinations, generally described as something that a person knows.

b) Smart card, RFID and token technologies, also known as something that a person has.

c) Biometric solutions such as fingerprinting and retina scanning, generally described as something that a person is.

The reliability of each of these methods varies with the level at which it is implemented. In very low-level authentication, only one kind of method is used to secure the network. One of the weakest forms of authentication is the use of an ID card or token alone: if a person loses it, the security of the network is compromised. Even with a username and password, the authentication is only as strong as the complexity of the username and password themselves; people generally prefer passwords that are easy to remember, which are often also known to many other people inside or outside the organization. A much better way of securing a network through authentication is to use biometric solutions such as fingerprinting or retina scanning, though technology has advanced to the extent that even fingerprints and retinas can be forged. Nowadays combinations of methods are used, with high-security premises or networks guarded by two or three kinds of authentication.
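A minimal sketch may make the idea of combining factors concrete. Everything below is invented for illustration (the user record, the secrets, and the simplified token scheme, which is only loosely modeled on event-based one-time passwords, not the real HOTP algorithm of RFC 4226); the point is simply that two independent factors must both pass.

```python
import hashlib
import hmac

# Hypothetical stored credentials; names and values are illustrative only.
USERS = {
    "alice": {
        "pw_hash": hashlib.sha256(b"correct horse").hexdigest(),  # something she knows
        "token_secret": b"token-secret",                          # something she has
    }
}

def verify_password(user: str, password: str) -> bool:
    rec = USERS.get(user)
    return bool(rec) and hmac.compare_digest(
        rec["pw_hash"], hashlib.sha256(password.encode()).hexdigest())

def verify_token(user: str, token_code: str, counter: int) -> bool:
    """Toy event-based token: HMAC of a counter, truncated to 6 digits.
    Simplified for illustration; not the real HOTP construction."""
    rec = USERS.get(user)
    if not rec:
        return False
    digest = hmac.new(rec["token_secret"], str(counter).encode(),
                      hashlib.sha256).hexdigest()
    expected = str(int(digest, 16) % 1_000_000).zfill(6)
    return hmac.compare_digest(expected, token_code)

def authenticate(user: str, password: str, token_code: str, counter: int) -> bool:
    # Two independent factors must BOTH pass: losing the token alone,
    # or guessing the password alone, is not enough.
    return verify_password(user, password) and verify_token(user, token_code, counter)
```

Adding a biometric check would follow the same pattern as a third independent predicate.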

B. Accountability

After a user has been authenticated to use the network, it is important to be able to track the computer usage of each person on it, so that in case of any foul play the person responsible can be held accountable. When networks were very small it was easy for a network administrator to track each person’s usage, but with huge networks, remote access facilities and of course wireless networks, it has become quite a difficult task. As mentioned earlier, there are many ways in which a hacker can make himself difficult to track down. Much software and firmware has been created to be used in conjunction with the authentication protocols in order to make wireless networks more secure and robust.

C. Encryption:

This is the most important step in building a strong, secure wireless network infrastructure. The guidelines generally followed are:

a) Methods based on public key infrastructure (PKI)

b) Using high bit encryption scheme

c) The algorithm used for encryption must be well known and proven to be resistant to attack.

Current wireless network security solutions can be classified into three broad categories:

a) unencrypted solutions

b) encrypted solutions

c) combinations of the two.

As explained in the abstract, the emphasis in this paper will be on encrypted solutions for wireless security. A brief discussion of the unencrypted methods is still given for basic understanding.

In the case of encryption-based security protocols, this paper gives a detailed description of the ones commonly used in wireless LANs, after which the latest and developing technologies will be discussed. The three major generations of security existing today, as cited in many papers, journals and magazines, are as follows:

1) WEP (Wired Equivalent Privacy)

2) WPA (Wi-Fi Protected Access)

3) WPA2

The image below shows the layer at which the wireless network security protocols come into play, which is of course the link layer:

Fig-2: 802.11 AND THE OSI MODEL

VI. WIRELESS SECURITY – UNENCRYPTED

A. MAC Registration:

This is one of the weakest methods of network security. MAC registration was basically used to secure university residential networks such as college apartments or dorm rooms. The basic approach is to configure DHCP (Dynamic Host Configuration Protocol) to lease IP addresses only to a known set of MAC addresses, which can be obtained manually or by running automated scripts on a network server; so basically any person with a valid registration can enter the network. Session logs also cannot be generated, so accounting becomes impossible. Last but not least, since this method of securing was designed for switched, wired networks, encryption was never included.
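The DHCP-based filtering just described reduces to a few lines, which also makes the weakness obvious. This is a hypothetical sketch; the allowlist and the address pool are invented for the example.

```python
# Illustrative MAC allowlist, standing in for the registered set a real
# DHCP server would be configured with.
REGISTERED_MACS = {"00:11:22:33:44:55", "66:77:88:99:aa:bb"}

def lease_ip(mac: str, pool: list):
    """Lease an address only to registered MACs -- the entire 'security' check."""
    if mac.lower() not in REGISTERED_MACS:
        return None            # unknown hardware gets no lease
    return pool.pop(0)         # no user identity, no session log, no encryption

pool = ["10.0.0.10", "10.0.0.11"]
print(lease_ip("00:11:22:33:44:55", pool))  # registered -> 10.0.0.10
print(lease_ip("de:ad:be:ef:00:01", pool))  # unregistered -> None

# Weakness: an attacker who sniffs a registered MAC can simply spoof it;
# the filter cannot tell the clone from the real device.
```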

B. Firewalls:

In this method, network authentication is done through either HTTP (Hypertext Transfer Protocol), HTTPS or telnet. When an authentication request is received by the network, it is directed to the authentication server. On validating the authentication, the firewall adds rules for the IP address assigned to that user; the IP address also has a timer attached to indicate when its rules time out. When executed over HTTPS this is a session-based as well as a secure process, but any process adapted from switched, wired network firewalls does not provide encryption.

C. Wireless Firewall Gateways :

One of the latest and considerably more foolproof methods among unencrypted solutions is the Wireless Firewall Gateway, or WFG. Here a single wireless gateway integrates a firewall, router, web server and DHCP server, and it is because all of these reside in one system that the WFG is a comparatively secure wireless security solution. When a user connects to the WFG, he/she receives an IP address from the DHCP server. Then the web server (HTTPS) asks for a username and password, and this is handled by PHP (Hypertext Preprocessor). Address spoofing and unauthorized connections are avoided because the DHCP logs are constantly compared with the current ARP (Address Resolution Protocol) table. This verifies that the computer connected to the network is using the IP address that was leased to it by the DHCP server. This information is then passed on to the authentication server, which in turn adds rules for this IP address. Upon expiration of the DHCP lease, the session is terminated. The WFG hence makes the authentication and accountability part of the network more reliable, but as this is also an unencrypted method, it lacks the most important aspect of security.
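The DHCP-versus-ARP cross-check described above can be sketched as follows (the table contents and function name are invented for illustration):

```python
# Hypothetical snapshots of the two tables, both mapping IP -> MAC.
dhcp_leases = {"10.0.0.5": "aa:bb:cc:00:00:01"}
arp_table = {"10.0.0.5": "aa:bb:cc:00:00:01", "10.0.0.9": "de:ad:be:ef:00:02"}

def is_legitimate(ip: str) -> bool:
    # The WFG compares the live ARP entry with the DHCP lease log: an IP
    # that was never leased, or whose MAC differs from the lease, is
    # treated as address spoofing and rejected.
    return ip in dhcp_leases and arp_table.get(ip) == dhcp_leases[ip]
```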

VII. WEP-WIRED EQUIVALENT PRIVACY

This protocol was written in accordance with the security requirements of the IEEE 802.11 wireless LAN protocol. It is adapted from the wired LAN system, and hence the security and privacy it provides are intended to be equivalent to the security and privacy provided by a wired LAN. Though it is an optional part of wireless network security, it gives a considerably more secure networking environment.

The algorithm used in WEP is known as RC4 (Rivest Cipher 4). In this method a pseudo-random keystream is generated using encryption keys of varying lengths. This is then combined with the data bits using the exclusive-OR (XOR) function in order to generate the encrypted data, which is then sent. To look at it in more detail:

A. Sender Side:

The pseudo-random number is generated using the 24-bit IV (Initialization Vector) given by the network administrator together with a 40- or 104-bit secret key, or WEP key, given by the wireless device itself. These are concatenated and passed on to the WEP PRNG (Pseudo-Random Number Generator). At the same time, the plain text is run through an integrity algorithm to form the ICV (Integrity Check Value). The keystream and the plain text plus ICV are then combined through RC4 to form the cipher text. This cipher text is then combined with the IV to form the final encrypted message, which is then sent.

Fig-2: WEP SENDER SIDE

B. Receiver Side:

On the receiver side the message is decrypted in five steps. First, the preshared key and the IV from the encrypted message are combined and passed through the same PRNG. The resulting keystream is applied to the cipher text through the RC4 algorithm, which retrieves the plain text. The plain text is then run through the integrity algorithm to form a new ICV, which is compared with the received ICV to check for integrity.

Fig-3: WEP RECEIVER SIDE
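The sender and receiver flows above can be sketched end to end in Python. This is a simplified model for illustration only; real WEP frames carry a key-ID byte and other header fields that are omitted here:

```python
import zlib

def _rc4(key: bytes, data: bytes) -> bytes:
    # RC4 keystream XORed with the data; the same call encrypts and decrypts.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(wep_key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Sender: ICV = CRC-32 of the plaintext; per-packet RC4 key = IV || WEP key.
    icv = zlib.crc32(plaintext).to_bytes(4, "little")
    return iv + _rc4(iv + wep_key, plaintext + icv)

def wep_decrypt(wep_key: bytes, frame: bytes) -> bytes:
    # Receiver: split off the 24-bit (3-byte) IV, decrypt, then recheck the ICV.
    iv, body = frame[:3], frame[3:]
    data = _rc4(iv + wep_key, body)
    plaintext, icv = data[:-4], data[-4:]
    if zlib.crc32(plaintext).to_bytes(4, "little") != icv:
        raise ValueError("ICV mismatch: frame corrupted or forged")
    return plaintext
```

A 40-bit key is 5 bytes, so `wep_encrypt(b"\x01" * 5, b"\x00\x00\x01", b"hello")` round-trips through `wep_decrypt` with the same key.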

C. Brief Descriptions:

a) Initialization Vector: basically random bits, the size of which is generally 24 bits, though it also depends on the encryption algorithm. The IV is also sent to the receiver side, as it is required for decrypting the data.

b) Preshared Key: more or less like a password. It is provided by the network administrator and is shared between the access point and all network users.

c) Pseudo Random Number Generator: this creates a unique secret key for each packet sent through the network. It does so by using the 5 to at most 13 characters of the preshared key together with characters taken from the IV.

d) ICV and Integrity Algorithm: this is used to protect the plain text or data by creating a check value which the receiver side can compare against the ICV it generates itself. This is done using a CRC (Cyclic Redundancy Code) technique to create a checksum. For WEP, CRC-32 of the CRC family is used.

D. RC4 Algorithm:

The RC4 algorithm is not proprietary to WEP. It can also be described as a pseudo-random generator or a stream cipher. Developed at RSA Laboratories in 1987, this algorithm uses logical functions, to be specific XOR, to add the key to the data.

Figure 5: RC4 Algorithm
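A compact Python version of RC4's two stages (key scheduling, then keystream generation); since XOR is its own inverse, applying the function twice with the same key recovers the plaintext:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the 256-byte state with the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

For example, `rc4(b"Key", b"Plaintext")` yields the well-known test vector `bbf316e8d940af0ad3`.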

E. Drawbacks of WEP:

There are many drawbacks associated with WEP encryption. There are also programs now available in the market which can easily break this encryption, leaving networks using WEP vulnerable to malicious attacks:

Some of the problems faced by WEP are:

WEP does not prevent forgery of packets.
WEP does not prevent replay attacks. An attacker can simply record and replay packets as desired, and they will be accepted as legitimate.
WEP uses RC4 improperly. The keys used are very weak, and can be brute-forced on standard computers in hours to minutes, using freely available software.
WEP reuses initialization vectors. A variety of available cryptanalytic methods can decrypt data without knowing the encryption key.
WEP allows an attacker to undetectably modify a message without knowing the encryption key.
Key management is lacking and key updating is poor.
There are weaknesses in the RC4 algorithm itself.
Authentication messages can easily be forged.
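The IV-reuse weakness in the list above is easy to demonstrate: when two frames are encrypted under the same keystream, XORing the ciphertexts cancels the keystream entirely (a made-up keystream stands in for RC4 output here):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(32))   # stand-in for RC4 output under a reused IV
p1 = b"attack at dawn"
p2 = b"hold your fire"
c1, c2 = xor(p1, keystream), xor(p2, keystream)
# The keystream cancels: an eavesdropper learns p1 XOR p2 without the key,
# which leaks plaintext structure and enables known-plaintext recovery.
assert xor(c1, c2) == xor(p1, p2)
```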
VIII. WPA -WIFI PROTECTED ACCESS

WPA was developed by the Wi-Fi Alliance to overcome most of the disadvantages of WEP. The advantage for users is that they do not have to change their hardware when making the change from WEP to WPA.

The WPA protocol gives more complex encryption than WEP by using TKIP, and with the MIC it also helps counter the bit-flipping attacks which hackers used against WEP, by using a method known as hashing. The figure below shows the WPA encryption method.

Figure 6: WPA Encryption Algorithm (TKIP)

As seen, it is almost the same as the WEP technique, enhanced by TKIP: a hash is added before the RC4 algorithm is used to generate the keystream. The IV is duplicated, and a copy of it is sent on to the next step. The copy is also combined with the base key in order to generate a second, per-packet key. This, along with the hashed IV, is used by RC4 to generate the sequential key. This key is then combined with the data or plain text using the XOR function. The final message is then sent, and it is decrypted by inverting this process.

A. TKIP (Temporal Key Integrity Protocol):

The confidentiality and integrity of the network are maintained in WPA by using improved data encryption through TKIP. This is achieved by using a hashing function algorithm and also an additional integrity feature to make sure that the message has not been tampered with.

TKIP introduces four new algorithms that perform various security functions:

a) MIC or Michael: this is a coding system which improves the integrity of data transfer over WPA. The MIC integrity code is 64 bits long but is divided into two 32-bit little-endian words, say (K0, K1). This method is basically used to ensure that the data does not get forged.

b) Countering Replay: there is one particular kind of forgery that cannot be detected by the MIC, called a replayed packet. Hackers forge a particular packet and then send it again at another instant of time. To counter this, each packet sent by the system carries a sequence number, which is achieved by reusing the IV field. If a packet arrives at the receiver with an out-of-order or smaller sequence number than the packet received before it, it is considered a replay and is discarded by the system.
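The sequence-number rule described above can be sketched as a small receiver-side check (the class and method names are invented for illustration):

```python
class ReplayCheck:
    """Discard any frame whose sequence number is not strictly greater
    than the last accepted one, as TKIP's replay countermeasure does."""

    def __init__(self) -> None:
        self.last_seq = -1

    def accept(self, seq: int) -> bool:
        # Out-of-order or repeated sequence numbers are treated as replays.
        if seq <= self.last_seq:
            return False
        self.last_seq = seq
        return True
```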

c) Key Mixing: in WEP, a per-packet key is generated by concatenating the base key, a 40- or 104-bit sequence obtained from the wireless device, with the 24-bit IV obtained from the administrator or the network. In the case of TKIP, the base key is replaced by a temporal key which has a limited lifetime and changes from one destination to another. This is explained in Phase 1 of the two phases of key mixing.

In Phase 1, the MAC address of the end system or wireless router is mixed with the temporal base key. The resulting key hence keeps changing as the packet moves from one destination to another, since the MAC address of any router, gateway or destination is unique.

In Phase 2, the per-packet sequence key is also encrypted by adding a small cipher using RC4 to it. This keeps a hacker from deciphering the IV or the per-packet sequence number.
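The two phases can be modelled as follows. This only illustrates the idea of binding the key to the MAC address and to the packet counter; the real TKIP mixing uses an S-box construction, not a hash:

```python
import hashlib

def phase1_mix(temporal_key: bytes, mac_address: bytes) -> bytes:
    # Phase 1: bind the temporal key to the transmitter's MAC address,
    # so every station ends up with a different intermediate key.
    return hashlib.sha256(temporal_key + mac_address).digest()[:16]

def phase2_mix(phase1_key: bytes, packet_seq: int) -> bytes:
    # Phase 2: mix in the 48-bit per-packet sequence counter so that
    # each frame is encrypted under a unique per-packet key.
    return hashlib.sha256(phase1_key + packet_seq.to_bytes(6, "big")).digest()[:16]
```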

d) Countering Key Collision Attacks, or Rekeying: this provides fresh sequences of keys which can then be used by the TKIP algorithms. Temporal keys, which have a limited lifetime, have already been mentioned. The other two types of keys provided are the encryption keys and the master keys. The temporal keys are the ones used by the TKIP privacy and authentication algorithms.

B. Advantages of WPA:

The advantages of WPA over WEP can be clearly understood from the above descriptions. Summarising a few:

a) Forgeries of the data are avoided by using the MIC.

b) WPA actively prevents packet replay by giving each packet a unique sequence number.

c) Key mixing generates temporal keys that change at every station, along with per-packet sequence key encryption.

d) Rekeying provides fresh, unique keys for consumption by the various TKIP algorithms.

IX. WPA2-WIFI PROTECTED ACCESS 2

WPA2, as the name suggests, is a modified version of WPA in which Michael has been replaced with an AES-based algorithm known as CCMP, used instead of TKIP. WPA2 can operate in two modes: the home mode and the enterprise mode. In the home mode, all users are required to use a 64-bit passphrase when accessing the network. This is the sort of encryption used in wireless routers at home or even in very small offices. The home version has the same problems faced by users of WEP and the original WPA security protocol.

The enterprise version is, of course, for use by larger organisations where the security of the network is too valuable to be compromised. It is based on the 802.1X wireless architecture, an authentication framework known as RADIUS, another authentication protocol from the EAP (Extensible Authentication Protocol) family, namely EAP-TLS, and a secure key.

A. 802.1X:

Figure 7: 802.1X Authentication Protocol

In order to understand the security protocols used in WPA2, it is important to know a little about the 802.1X authentication architecture. This was developed in order to overcome many of the security issues in the 802.11b protocol. It provides much better security for the transmission of data, and its key strength is of course authentication. There are three important entities in the 802.1X protocol: the client, the authenticator, and the authentication server.

a) Client: the STA (station) in a wireless network which is trying to access the network. This station could be fixed, portable or even mobile. It requires client software which helps it connect to the network.

b) Authenticator: this is another name for the AP (Access Point). The AP receives the signal from the client and passes it on to the network the client wants to connect to. There are two parts to the AP, the uncontrolled port and the controlled port, which is more of a logical partitioning than a physical one. The uncontrolled port receives the signal and checks its authentication to see if the particular client is allowed to connect to the network. If the authentication is approved, the controlled port of the AP is opened for the client to connect with the network.

c) Authentication server: the RADIUS (Remote Authentication Dial In User Service) server. This has its own user database table listing the users that have access to the network, which makes things easier for the APs, as the user information database need not be stored in the AP. Authentication in RADIUS is user-based rather than device-based. RADIUS makes the security system more scalable and manageable.

Figure 8: EAP/RADIUS Message Exchange

B. EAP (Extensible Authentication Protocol):

The key management protocol used in WPA2 is EAP (Extensible Authentication Protocol). It can also be called EAPOW (EAP over Wireless). Since there are many versions of this protocol in the EAP family, it is advisable to choose the EAP protocol best suited to the particular network. The diagram and the steps following it describe how a suitable EAP can be selected for a network:

a) Step 1: by checking the previous communication records of the node using a network analyser program, it can easily be detected whether any malicious or compromising packets have been sent to, or received from, other nodes.

b) Step 2: by checking the previous logs of the authentication protocols used, the most commonly used and the most successful authentication protocols can be identified.

Figure 9: EAP Authentication with Method Selection Mechanism

c) Step 3: the specifications of the node itself have to be understood, such as the operating system used, the hardware and software, and even the certificate availability of the node.

After all this has been examined, the following procedure can be run in order to determine and execute the most suitable EAP authentication protocol:

1. Start
2. if (communication_record available) then
       read communication_record;
       if (any_suspicious_packets_from_the_other_node) then
           abort authentication;
           go to 5;
       else if (authentication_record available) then
           read authentication_record;
           if (successful_authentication available) then
               read current_node_resources;
               if (current_node_resources comply with last_successful_method) then
                   method = last_successful_method;
                   go to 4;
               else if (current_node_resources comply with most_successful_method) then
                   method = most_successful_method;
                   go to 4;
               else
                   go to 3;
           else
               go to 3;
       else
           go to 3;
   else
       go to 3;
3. read current_node_resources;
   execute method_selection(current_node_resources);
4. execute authentication_process;
5. End
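Assuming the node's records are available as plain dictionaries (the data layout here is invented; only the control flow follows the steps above), the procedure can be sketched in Python:

```python
def select_eap_method(node_history: dict, node_resources: set):
    # Steps 1-2: consult the communication record; abort on suspicious traffic.
    if node_history.get("communication_record"):
        if node_history.get("suspicious_packets"):
            return None  # step 5: abort authentication
        auth = node_history.get("authentication_record") or {}
        # Prefer the last successful method, then the most successful one,
        # whenever the node's resources comply with it (step 4).
        for key in ("last_successful_method", "most_successful_method"):
            method = auth.get(key)
            if method in node_resources:
                return method
    # Step 3: fall back to selecting a method from the node's own resources.
    return min(node_resources) if node_resources else None
```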

X. RSN-ROBUST SECURITY NETWORKS

RSN was developed with reference to the IEEE 802.11i wireless protocol. It can provide security ranging from moderate to high-level encryption schemes. The main entities of 802.11i are the same as those of the 802.1X protocol: the STA (client), the AP, and the AS (authentication server). RSN uses TKIP or CCMP for confidentiality and integrity protection of the data, while EAP is used as the authentication protocol.

RSN is link-layer security, i.e. it provides encryption from a wireless station to its AP or from one wireless station to another. It does not provide end-to-end security. It can only be used for wireless networks, and in the case of hybrid networks, only for the wireless part of the network.

The following features of a secure network are supported by RSN (WRITE REFERENCE NUMBER HERE):

a) Enhanced user authentication mechanisms

b) Cryptographic key management

c) Data Confidentiality

d) Data Origin and Authentication Integrity

e) Replay Protection.

A. Phases of RSN:

RSN protocol functioning can be divided into five distinct phases. The figure as well as the steps below describe the phases in brief:

a) Discovery Phase: this can also be called network and security capability discovery. In this phase the AP advertises that it uses the IEEE 802.11i security policy. An STA which wishes to communicate with a WLAN using this protocol will, upon receiving this advertisement, communicate with the AP. The AP gives the STA a choice of the cipher suite and authentication mechanism it wishes to use during its communication with the wireless network.

Figure 9: Security States of RSN

b) Authentication Phase: also known as the Authentication and Association Phase. In the authentication phase, the AP uses its uncontrolled port to check the authentication provided by the STA with the AS. Any data other than authentication data is blocked by the AP until the AS returns the message that the authentication provided by the STA is valid. During this phase the client has no direct connection to the RADIUS server.

c) Key Generation and Distribution: during this phase, cryptographic keys are generated by both the AP and the STA. Communication takes place only between the AP and the STA during this phase.

d) Protected Data Transfer Phase: as the name suggests, this is the phase during which data is transferred, through the AP, from the STA that initiated the connection to the STA on the other end of the network.

e) Connection Termination Phase: again as the name suggests, the data exchanged is purely between the AP and the STA, to tear down the connection.
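The five phases above proceed strictly in order; a minimal sketch of that ordering (the enum names paraphrase the phase titles in the text):

```python
from enum import Enum
from typing import Optional

class RsnPhase(Enum):
    DISCOVERY = 1
    AUTHENTICATION = 2
    KEY_GENERATION = 3
    DATA_TRANSFER = 4
    TERMINATION = 5

def next_phase(current: RsnPhase) -> Optional[RsnPhase]:
    # Each phase hands off to exactly one successor; termination ends it.
    order = list(RsnPhase)
    idx = order.index(current)
    return order[idx + 1] if idx + 1 < len(order) else None
```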

Wireless Communication: Applications and Limitations

Wireless Communications
INTRODUCTION

First of all, the meaning of wireless must be clearly identified: Wireless communications are the technology that uses any type of waves to substitute the use of cables and wires in order to create links (or a certain kind of connectivity) between different devices; such waves can be radio waves, infrared waves, or microwaves.

Even though many people think that wireless communication is a new form of technology, the truth is that many devices that already existed for many decades use wireless technology, in one way or another, to accomplish the tasks or to deliver the services that they were designed for. Such devices include radio and television transmission-reception devices, military communication devices and many more. Mostly, such technology was being utilised only by governments and large organisations.

The difference that appeared in the last few years is the one that involves computer systems and other related pieces of equipment and that which involves telephony and communications, which made it possible for individuals and small and medium organisations to have access to such technology and to be able to use it for specific and personalised uses.

WIRELESS COMMUNICATIONS

Today, wireless communications are growing steadily in almost all sectors, including home and individual uses, organisational and governmental uses, and scientific and research institutional uses as well. This is evident in every aspect of connectivity available to each of us; mobile phones (especially those classified as smart-phones) are the most widespread devices that utilise wireless communications for almost all the requirements that wireless technology provides; this includes wireless voice connections, data and messaging connections, and multimedia (audio and video) exchange links. To accomplish this, those devices cover most of the types of frequencies available through the use of technologies such as infrared, Bluetooth, WiFi, GSM and many more.

Wireless Local Area Networks (WLANs) are the most significant technological advancement since the beginning of the computer (and the Internet) age. Such technology provides the possibility of active connectivity to companies, universities, schools, research institutions, and even entities of a far smaller nature. A growing number of homes are now deploying WLANs because they can provide users with the same kind of service without the need for cables.

Prasad and Ruggieri (2003) give more details about WLANs by stating that “WLAN systems are a technology that can provide very high data rate applications and individual links (e.g., in company campus areas, conference centers, airports) and represents an attractive way of setting up computers networks in environments where cable installation is expensive or not feasible. They represent the coming together of two of the fastest-growing segments of the computer industry: LANs and mobile computing, thus recalling the attention of equipment manufactures. This shows their high potential and justifies the big attention paid to WLAN by equipment manufacturers. Whereas in the early beginning of WLANs several proprietary products existed, nowadays they are mostly conform to the Institute of Electrical and Electronics Engineering (IEEE) 802.11b (also known as Wi-Fi) standard. It operates in the unlicenced 2.4-GHz band at 11 Mbps and it is currently extended to reach 20 Mbps.”

When talking about wireless computer connectivity, it must be stated that there are two methods in which it operates today. The first is the “Ad Hoc mode, or Independent Basic Service Set (IBSS). This is a peer-to-peer wireless network. This means that it does not have an access point controlling the conversation.” This method is usually used for small networks which consist of five or less users. Its point of access which “manages the conversions is gone and the clients send beacons to each other… These beacons contain a timer synchronization function (TSF) to ensure that the timing is correct. This function is usually handled in the access point.” The second method is “the Infrastructure mode, which is called Extended Basic Service Set (EBSS). This is the main type of wireless network. In an EBSS, an access point controls all traffic. Setting up a wireless network in this category requires a piece of networking equipment referred to as an access point. This access point is where the Ethernet data is converted into a wireless signal that is then transferred out through the access point’s antenna. To hear and understand this signal, a wireless network interface card is needed. This card has a small antenna inside it and can hear the wireless signal and transfer it to the computer” (Earle, 2006).

As is the case for what concerns wired computer networks, wireless networks are either Wide Area Networks (WAN) or Local Area Networks (LAN).

As for the wireless LANs, Vacca (2003) explains that “wireless data local-area networks (WiFi LANs) have surged in popularity. WiFi LANs provide network access only for approximately 300 ft around each access point, but provide for bandwidth up to 11 Mbps for the IEEE 802.11b protocol, and up to 100 Mbps for the emerging 802.11a protocol. Best of all, the technology is available now and affordable”, and the author explains that their reduced cost of deployment, compared to that of the wired LANs, made them more attractive in what concerns the enlargement of corporate networks to other locations: “The wireless data LAN is a ‘nice and clean’ extension to an office’s wired LAN. Wireless data LANs are attractive to offices that want to enable workers to take laptops into a conference room. Wireless data has a place now.” According to the author, “WiFi is especially popular in the manufacturing, distribution, and retail industries.”

Liska (2003) explains that the main purpose of using wireless WAN technology is to enable the connection to the Internet and to allow the connection between different offices (of a certain company, for example) that are located in different geographical locations. The author states that “Wireless WANs have emerged as a low-cost alternative to a traditional method of Internet access. Wireless WAN connection can offer the same amount of bandwidth as a T1, at a fraction of the cost. Wireless connections are also being deployed in areas where cable and DSL access is not available.”

Another form of wireless network is the Wireless Personal Area Network (WPAN). In this type, an infrastructure network is not required, as there is no need for a central link (or main connection of reference): the connection is created between specific small devices and users within a given location. The basic idea behind a personal area network is the possibility of inter-connecting two or more user devices within a space of small coverage (no more than 10 m) where ad hoc communication takes place, also called the personal operating space (POS). “The network is aimed at interconnecting portable and mobile computing devices such as laptops, personal digital assistants (PDAs), peripherals, cellular phones, digital cameras, headsets, and other electronics devices” (Prasad and Ruggieri, 2003).

To give more details about this kind of wireless network, Vacca (2003) states that “the term ad hoc connectivity refers to both the ability for a device to assume either master or slave functionality and the ease in which devices may join or leave an existing network. The Bluetooth radio system has emerged as the first technology addressing WDPAN applications with its salient features of low power consumption, small package size, and low cost. Wireless data rates for Bluetooth devices are limited to 1 Mbps, although actual throughput is about half this data rate. A Bluetooth communication link also supports up to three voice channels with very limited or no additional bandwidth for bursty wireless data traffic.”

For what concerns the standards of wireless networking, we find that for many years the systems were dependent on manufacturers, and this created problems regarding the compatibility of different systems with one another; that is why many standards are now present for use with wireless systems. “This made the industry push the IEEE to make some wireless standards and help facilitate the growth of wireless with common standards that allowed various manufacturer cards to work with various manufacturer wireless networks.” The standards used today include the 802.11 standard, which “was the first WLAN standard accepted by multiple vendors as a true industry standard.” Other standards are 802.11a, 802.11b, 802.11c, and 802.11g, which was approved by the IEEE in 2003. There are other standards such as “The 802.11i standard [which is] a security standard that can apply to other 802.11 standards” and there is 802.11j, which is “for use in Japan only” (Earle, 2006).

According to Pallato (2004), a unified 802.11n Wi-Fi will be widely used soon. The 802.11n “is based on a new radio technology called MIMO (multiple input/multiple output) that allows the transmission of up to 100M bps over a much wider range than the earlier versions.” This will certainly be a step in the right direction in attempting to unify all the wireless standards into one technology that can be accessible to everyone anywhere. But this unification is facing problems and delays: Reardon (2006) explains that “the new standard that will allow notebook users to connect to wireless access points at much faster speeds than is currently available [will be delayed].. The IEEE approved a draft version of the standard called 802.11n, after much controversy and infighting among chipmakers. A second draft was due for the standard by late fall of this year [2006], but now a new draft won’t likely be ready until January 2007. This could push back the final ratification of the standard until 2008… The delay in adopting a standard has been caused by the nearly 12,000 changes to the draft that have been submitted to the standards group”.

The future of wireless communications technologies is promising; this is because more mobility and speed are the most required factors in what concerns inter-connectivity. They are certainly more desired than the wired options, especially as the cost and the security options are being improved constantly. “Though still an imperfect technology, wireless data LANs are, nonetheless, booming and remain at least one market segment that’s expected to achieve its anticipated growth rate. IDC forecasts worldwide wireless data LAN semiconductor revenue alone to grow at a 30 percent compound annual growth rate during the next 4 years. And, 68 percent of networking solution providers already deploy wireless data LANs and WANs” (Vacca, 2003).

As can be seen by now, wireless technologies are becoming more requested and more used by all sectors of users, from large organisations to schools to home and office users. The overwhelming success of mobile phone devices (especially smart-phones with Wi-Fi, Bluetooth, and Infrared links) will force the industry to grow faster and to provide the instruments and hardware needed for its propagation at lower prices. One of the emerging realities of today is the so-called ‘Wi-Fi covered town’: the more hotspots (or Wi-Fi access points) become available in homes, offices, coffee shops, restaurants, and bookstores, the more the ‘covered city’ concept can be put into practice. And with the arrival of the $100 smartphone by the year 2008, more and more people will find themselves directly inside the wireless age. Some are already talking about providing the Wi-Fi service through traditional radio frequencies; with that, what we were accustomed to in TV reception could be used for wireless connections to the Internet, and through that to the entire world. According to Long (2006), “One of the latest WLAN technologies [and one of those that are expected to flourish within the next 5 years], MIMO, or multiple-input multiple-output, splits the connection workload [within a LAN] into multiple data streams for increased range and throughput. Another technology, OFDM, or orthogonal frequency division multiplexing, is a technique for transmitting large amounts of digital data over radio waves.”

The future of communications is already known: every individual will be able to get connected to any group of users he/she chooses, and will be connected to his/her work network on the move, at home, and even when he/she is on vacation with no PDA or laptop. This is why the future seems to revolve around the WPANs. Wireless network installation and operation will become so cheap that continuing with wired networks will be totally unacceptable by all means.

APPLICATIONS

Even though the beginning of wireless applications was focused on applications related to vertical markets such as retail, warehousing, and manufacturing, “current growth is being driven by other market segments. These include enterprise, small office/home office, telecommunications/Internet service provider (ISP) and the public access throughput compared to cellular mobiles networks are the lead drivers for wireless LAN deployment. Voice over IP (VoIP) is also expected to drive this technology in the future” (Smyth, 2004).

Vacca (2003) explains that an entire range of applications and services are either dependent on wireless technology or are to be deployed depending on it. The author mentions the service of Triangulation which can be (and is being) used to locate the position of a mobile device through measuring the distance from two or more known points. Another application is Assisted GPS for determining the exact geographic position of the device in use. One important service that is also mentioned is the High-Resolution Maps service. Another important application of wireless networks is the one given to rural areas and locations where no cable or wire related new technologies can reach, for this the wireless technology can be deployed through satellite. “A new breed of satellite technologies and services allows providers to bring high-speed, always-on, two-way access to the planet’s farthest reaches. For example, McLean, Virginia–based StarBand Communications (a joint venture of Israeli satellite powerhouse Gilat Satellite Networks, EchoStar Communications, and Microsoft) is the first company to launch two-way consumer service in the United States.”

The potential for wireless applications is endless in virtually all sectors. Such applications can be used for workers and sales employees, for warehouse personnel who order parts, for accountants who generate invoices, and for transportation companies such as DHL and UPS. “New applications are appearing at an ever-increasing rate. Mobile workers, such as salespeople, field service technicians, and delivery people, are an obvious target for new wireless applications…. Wireless technology applications can arm these workers with tools and data access capabilities that were previously limited to desk-bound employees” (Hayes, 2003).

With wireless communications, data transfer (especially of larger files, such as multimedia – audio and video – and large reports and presentations) will become easier as mobile devices improve to exploit higher bandwidths and faster access. TV and video streaming to wireless devices has already started, and the improvements will keep appearing. There are no limits to the applications of wireless technology. We have already reached the technological know-how to realise almost all the desired wireless applications, and the cost of their deployment and use will keep dropping until wireless becomes more common than anything else. Any innovation in the wireless technology camp will be profitable to the manufacturers and desired by the users; mobility is becoming essential for any organisation or individual that aims at success.

“Wireless Applications are in their Internet infancy and awaiting broader bandwidth. As this becomes available the scope for applications on a cost-per-view basis will increase. Of particular interest for the future are the attempts to commercialize WWW by offering software, which relies on the WWW’s free infrastructure to be viable, on pay-per-use basis” (Bidgoli, 2004).

PROBLEMS

As mentioned earlier, one of the most important problems facing wireless technology today is the different standards used by different manufacturers, but this is a problem that is expected to be resolved shortly.

The real important issue is security. Earle (2006) mentions some of the security related issues such as Analysis (“the viewing, recording, or eavesdropping of a signal that is not intended for the party who is performing the analysis”), Spoofing (“impersonating an authorized client, device, or user to gain access to a resource that is protected by some form of authentication or authorization”), Wireless Denial-of-Service (“achieved with small signal jammers”), and Malicious Code (which can be used to “infect and corrupt network devices”). These risks are present in both wireless computer networks and in mobile devices such as mobile phones and PDAs.

The major solutions here are encryption and authentication, in various kinds and modalities. Still, security remains the most important reason for the delay in moving applications and services into the wireless realm.

Another problem is bandwidth: most mobile devices need further development before using them feels similar to using desktop computers and wired-LAN-connected devices.

Works Cited

Prasad, R. and Ruggieri, M. (2003) Technology Trends in Wireless Communications. Boston, MA: Artech House Publishers.

Earle, A. E. (2006) Wireless Security Handbook. Boca Raton, FL: Auerbach Publications.

Liska, A. (2003) The Practice of Network Security: deployment strategies for production environments. Upper Saddle River, NJ: Pearson Education, Inc.

Vacca, J. R. (2003) Wireless Data Demystified. New York, NY: McGraw-Hill Companies, Inc.

Pallato, J. (2004) Unified 802.11n Wi-Fi Standard to Emerge in Mid-2006. eWeek.com. [Accessed 22nd January 2007]. Available from World Wide Web:

Reardon, M. (2006). New Wi-Fi standard delayed again. ZDNet Tech News. [Accessed 21st January 2007]. Available from World Wide Web:

Long, M. (2006) The Future of Wireless Networks. Newfactor.com. [Accessed 20th January 2007]. Available from World Wide Web:

Smyth, P. (2004) Mobile and Wireless Communications: Key Technologies and Future Applications. London, UK: The Institution of Electrical Engineers.

Hayes, I. S. (2003) Just Enough Wireless Computing. Upper Saddle River, NJ: Pearson Education, Inc.

Bidgoli, H. (2004) The Internet Encyclopedia. Hoboken, NJ: John Wiley & Sons, Inc.

Virtual University System Limitations

Virtual University: Literature Review

Technology today allows us to record, analyze, and evaluate the physical world to an unprecedented degree. Enterprises in the new millennium are increasingly relying on technology to ensure that they meet their mission requirements. It is important to note here that, “Educational organizations have been referred to as complex and arcane enterprises” (Massy, 1999). For educational institutions, this reliance on technology will require new mission statements, revised catalogs and other materials, different learning environments and methods of instruction, and, perhaps most significantly, new standards for measuring success. To achieve these objectives, several initiatives in the form of web based systems, simulations, games etcetera are being developed and tested. Among these approaches, simulations and games are found to be the most effective ones (Massy, 1999). The author will review one such initiative, namely ‘Virtual U’ also known as Virtual University (Virtual U Project, 2003). The author will begin with a brief review of the use of simulation and gaming approaches in educational institutions.

In the last decade, the behaviorist approach has given way to the constructivist approach in the field of instructional design. The behaviorist approach is instructor-led: formal concepts and systems are transmitted to students through formal descriptions combined with the presentation of examples (Leemkuil et al., 2000). The constructivist approach, on the other hand, is student-led: students learn through activity or social interaction such as games, simulations, and case studies (Jacques, 1995).

Gaming is considered to produce a wide range of learning benefits, such as improved practical reasoning skills, higher levels of continuing motivation, and reduced training time and instructor load (Jacobs & Dempsey, 1993). Games are effective communication tools because they are fun and engaging (Conte, 2003). Simulations are close to games: both contain a model of some kind of system, and learners can provide input and observe the consequences of their actions (Leemkuil et al., 2000).

Virtual U was conceived and designed by William F. Massy, a professor and university administrator and the president of the Jackson Hole Higher Education Group (PR Newswire, 2000). The project was funded by $1 million from the Alfred P. Sloan Foundation in New York. Data were provided by the Institute for Research on Higher Education at the University of Pennsylvania (Waters and Toft, 2001). In designing the game, Massy and Ausubel (Program Director, The Alfred P. Sloan Foundation) included detailed data from 1,200 U.S. academic institutions, as well as information culled from government sources (Schevitz, 2000). The first version of Virtual U which was released in the year 2000 was produced by Enlight Software of Hong Kong and was sold commercially for about $129 (Goldie, 2000).

The Virtual University system was developed along the lines of the popular game ‘SimCity’. The primary objective of the Virtual U game is to develop players’ skills in managing an educational institution. According to Moore and Williams (2002), ‘Virtual U will let you test your skill, judgment, and decisions’ while managing an educational institution. This game-based environment has been designed specifically to enable any person to tackle various scenarios and problems that are usually encountered in an educational institution. “The game is driven by a powerful simulation engine that uses a combination of micro-analytic and system dynamics methods and draws on an extensive compilation of data on the U.S. higher education system” (Massy, 1999). Technically, the system was developed in C++ in a Windows-based environment. Virtual U in its current state does not run on Macintosh-based systems because of its use of proprietary Windows graphics; however, the authors envisage that a Macintosh version will be developed in the near future.

The Virtual U game employs several strategies and allows the player to choose among them as per his/her requirements (Rainwater et al., 2003). In general, the player is appointed University president and manages the university as a whole, concerned with institution-level policies, the budget, and so on. There are also scenario-based strategies, such as improving teaching or research performance in a particular faculty, where the player assumes the role of a faculty head (Rainwater et al., 2003). Lastly there are 18 possible chance cards: emergency situations that arise during play and require immediate attention. Overall, Virtual University not only lets players explore the secondary and tertiary effects of a couple of years’ worth of actions they might take as academic administrators, but also lets them customize the game by adjusting everything from the size of the faculty and student body to the cost of maintaining campus roads and buildings (Conte, 2003).

Moore and Williams (2002) identify a few limitations in the Virtual University system.

1. One needs extensive administrative knowledge or experience to play Virtual U effectively. The amount of prior knowledge required may prevent some audiences from using the system.

2. The second limitation pertains to performance indicators: the game lacks assessment-informed decision making. “Teach better” is one of the game scenarios, yet there is no link between teacher quality and student learning.

3. Educational quality and prestige are the two performance indicators the developer advises the player to pay close attention to. Within the educational quality framework, one has access to quantitative inputs and outputs (for example, the number of degrees granted) rather than measures of quality. There is also only a limited number of variables a player can choose or adjust (course mix, number of students shut out of courses, level of faculty teaching talent, class size, faculty morale, and faculty time devoted to teaching activities). The prestige indicator is even more limited.

4. A final Virtual U limitation identified by Moore and Williams (2002) is its lack of flexibility in the area of faculty management. While a player may reallocate departmental resources, teaching loads, and priorities in hiring new faculty, he cannot actually fire or remove faculty.

The developers acknowledge on several occasions that the game is fairly complex and is not easy for beginners (Massy, 1999). The author of this review believes that learning a complex game will be fairly difficult and time consuming for administrative users who are already on a tight schedule. Even postgraduate research students seldom have time for, or interest in, games unrelated to their own research. Younger students would be easily attracted to such complex games and would learn them quickly, even though the games might not be of much use to them in the short term. In addition, the availability of a Windows-only version of the system will exclude the ever-growing community of Macintosh users in United States educational institutions. Despite the above-mentioned limitations, Virtual U is a useful and laudable effort (Moore and Williams, 2002). On the whole, Virtual U is a good introduction for those who wish to get a feel for the day-to-day operation of a university (Waters and Toft, 2001).

References

Conte, C. (2003). Honey, I shrunk the deficit! Retrieved February 17, 2006, from http://proquest.umi.com/pqdweb?did=77042147&Fmt=7&clientId=8189&RQT=309&VName=PQD

Ellington, H.I. & Earl, S. (1998). Using games, simulations and interactive case studies: a practical guide for tertiary-level teachers. Birmingham: SEDA Publications.

Goldie, B. (2000). A computer game lets you manage the university. The Chronicle of Higher Education. Retrieved February 17, 2006, from http://proquest.umi.com/pqdweb?did=47712857&Fmt=7&clientId=8189&RQT=309&VName=PQD

Jacobs, J.W. & Dempsey, J.V. (1993). Simulation and gaming: Fidelity, feedback and motivation. In: Leemkuil, H., Jong, T. d., & Ootes, S. (2000). Review Of Educational Use Of Games And Simulations. Retrieved February 17, 2006, from http://kits.edte.utwente.nl/documents/D1.pdf

Jacques, D. (1995). Games, simulations and case studies – a review. In: Leemkuil, H., Jong, T. d., & Ootes, S. (2000). Review of Educational Use Of Games And Simulations. Retrieved February 17, 2006, from http://kits.edte.utwente.nl/documents/D1.pdf

Leemkuil, H., Jong, T. d., & Ootes, S. (2000). Review Of Educational Use Of Games And Simulations. Retrieved February 17, 2006, from http://kits.edte.utwente.nl/documents/D1.pdf

Massy, W. F. (1999). Virtual U: The University Simulation Game. Retrieved February 17, 2006, from http://www.virtual-u.org/documentation/educause.asp

Moore, D. L., & Williams, K. (2002). Virtual U. Assessment Update. Retrieved February 17, 2006, from http://search.epnet.com/login.aspx?direct=true&db=aph&an=10350107&loginpage=Login.asp&site=ehost

PR Newswire, (2000). Virtual U Released; University Management Goes High Tech Computer Simulation Tackles the Management Challenges of Higher Education. Retrieved February 17, 2006, from http://proquest.umi.com/pqdweb?did=55540413&Fmt=7&clientId=8189&RQT=309&VName=PQD

Rainwater, T., Salkind, N., Sawyer, B., & Massy, W. (2003). Virtual U 1.0 Strategy Guide. Retrieved February 17, 2006, from http://www.virtual-u.org/downloads/vu-strategy-guide.pdf

Schevitz, T. (2000). University Game Plan / Professor emeritus’ computer simulation lets players test skills as college administrators. San Francisco Chronicle. Retrieved February 17, 2006, from http://proquest.umi.com/pqdweb?did=47957859&Fmt=7&clientId=8189&RQT=309&VName=PQD

Virtual U Project. (2003). Virtual U. Retrieved February 17, 2006, from http://www.virtual-u.org

Waters, B., & Toft, I. (2001, October). Virtual U: A University Systems Simulation. Conflict Management in Higher Education Report. Retrieved February 17, 2006, from http://www.campus-adr.org/CMHER/ReportResources/Edition2_1/VirtualU2_1.html

Virtual Supermarket Technology: Advantages and Disadvantages

Advantages of Hypermarket Technology
Virtual Subway Store

Virtual subway stores make shopping easier and save time. Customers can scan the barcode of any product they have at hand to place an order for another one. This makes it easier to reorder food and office supplies that have run out, instead of having to remember to put them on a list for a future trip to the store. The customer receives the product by setting a delivery time and location when making the payment.

Retailers set up kiosks with life-size images of real product displays in high-traffic areas such as subway stations, attracting customers while they wait around. The display serves as an advertisement for the retailer, and it also saves the retailer the money that would otherwise have been spent opening a physical store.

Card Scanning Technology

Card scanning technology is more secure. This simple technology uses encryption and authentication, which have increased the level of security associated with payment cards. The microprocessor chip embedded at the heart of the smart card requires contact with the card reader, and certain areas of the chip can be programmed for specific industries. The cards are also safer to transport: they give the holder the ability to carry larger amounts of money and reduce the problem of stolen cash, since a stolen card is nearly impossible to use without its PIN. Finally, card scanning avoids long queues; a payment takes only a few seconds, giving the customer a fast and simple shopping experience.
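The authentication role of the embedded chip can be illustrated with a challenge–response sketch. This is not the real smart card (EMV) protocol, only a minimal illustration of the principle; the key value below is a made-up placeholder.

```python
import hashlib
import hmac
import os

# Shared secret programmed into the card's chip at issuance (placeholder value)
CARD_KEY = b"example-issuer-key"

def card_respond(key, challenge):
    """The chip signs the reader's challenge with its embedded key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(key, challenge, response):
    """The terminal recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                # a fresh nonce prevents replay attacks
response = card_respond(CARD_KEY, challenge)
print(reader_verify(CARD_KEY, challenge, response))  # True for a genuine card
```

Because the reader issues a fresh random challenge each time, an eavesdropper who records one exchange cannot replay it later, which is the property the text attributes to chip cards over magnetic stripes.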

Intelligent retail and purchases

An advantage of intelligent retail and purchasing is that it helps customers find items faster and more easily. The system stores the location of each item; after the customer keys in an item, the system returns the shelf location and can give a route to it. Next, customers can list the items they want before they go to the hypermarket. Planning before shopping is a good habit, and it helps avoid buying items that are not really wanted. Besides that, the technology shows the hypermarket’s promotions, which attracts customers to visit and helps them find the latest promotional products. Last but not least, intelligent retail and purchasing can suggest products or send notifications about promotions on a customer’s favourites: because the system records each customer’s purchases, it can base its suggestions on previous purchases.
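The “suggestions from previous purchases” step can be sketched with a simple frequency count. This toy example (the item and category names are invented) ranks promoted items by how often the customer buys from each category; a real hypermarket system would use far richer recommendation techniques.

```python
from collections import Counter

def suggest(purchase_history, catalog_promotions, top_n=3):
    """Rank promoted items so that categories the customer buys most
    often come first. purchase_history and catalog_promotions are lists
    of (item_name, category) pairs."""
    category_counts = Counter(category for _, category in purchase_history)
    favourites = [c for c, _ in category_counts.most_common()]
    ranked = sorted(
        catalog_promotions,
        key=lambda promo: favourites.index(promo[1])
        if promo[1] in favourites else len(favourites),
    )
    return [name for name, _ in ranked[:top_n]]

history = [("milk", "dairy"), ("yoghurt", "dairy"), ("bread", "bakery")]
promos = [("cheddar", "dairy"), ("croissant", "bakery"), ("soap", "household")]
print(suggest(history, promos))  # the dairy promotion ranks first
```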

Touch Screen Purchasing

An advantage of touch screen purchasing is fast and simple purchasing. The customer only needs to stand in front of a touch screen kiosk in the hypermarket, key in the name of the product, and select the brand wanted; after paying, the customer receives the product. This saves the time spent searching for where items are shelved and comparing prices of similar products, since the customer can get that information through the kiosk. The technology is also very easy to use: the customer keys in the item, and the kiosk displays the product information. No personal device is needed, so everyone can use the technology easily.

Disadvantages of Hypermarket Technology

A disadvantage of the virtual subway store is possible late delivery. In this concept delivery plays an important role: the customer receives the product by delivery, and if the delivery company suddenly runs into an issue, for example bad weather or personal factors, the delivery time is affected. Next, the virtual shopping mode may increase the rate of returned products. Since customers do not see the actual product they will receive, they cannot make sure it is in good condition before they purchase. Another disadvantage of the virtual subway mode is that additional fees, such as shipping, may remain hidden until late in the checkout process. Retailers sometimes hide shipping costs to appear more attractive, and customers discover only when the order is almost finished that they must pay shipping fees as well.

The disadvantage of card scanning technology is that cards are easily lost. Because a card is small, a careless person can easily lose it, and since a single card may serve two or more purposes, losing it is very inconvenient. Registering for a replacement card involves many procedures and is troublesome for the user. Next is the possible risk of identity theft. Consolidating functions onto one card makes the holder’s life easier, but for criminals seeking a new identity such cards are like gold, given the amount of personal information they can contain.

A disadvantage of intelligent retail and purchasing is the unnecessary surveys provided after shopping. Because of how the system works, a survey is presented before the customer leaves; retailers are always seeking to improve their business, retain customers, and find ways to build customer loyalty. Next, intelligent retail and purchasing is too dependent on mobile devices. The hypermarket’s system can only be reached through a mobile device, so the customer needs both a suitable device and the application to connect with the hypermarket system; those whose devices do not match the system cannot use the technology.

A disadvantage of touch screen purchasing is low privacy. A customer’s personal information may be seen on the large kiosk screen: while the customer stands in front of the kiosk making a purchase, someone standing behind may see it. There is also the lost delight of shopping. Some customers enjoy searching for products and touching and feeling them, so as the hypermarket becomes more advanced, that pleasure is lost. Touch screen purchasing also still requires customers to go to the hypermarket to buy things themselves; the kiosk only saves the time spent finding where items are shelved.

Implementation challenges

The main challenge for many retailers is implementing technology, such as smart screens, in-store tablets, and near-field communication (NFC) for contactless payments, into existing stores to become multichannel. A further implementation challenge for the virtual subway store is that good mobile connectivity is required: orders are placed by using a mobile phone to scan a product’s barcode, but in real life not everyone is practised with these mobile functions.

An implementation challenge for card scanning technology is slow adoption. If used as a payment card, not every store or restaurant will have the hardware necessary to accept these cards. One reason is that, because the technology is more secure, it is also more expensive to produce and use; some stores may therefore charge a basic minimum fee for paying by smart card rather than cash.

The challenge for intelligent retail and purchasing is that not everyone has a mobile device. It is very hard for the hypermarket to implement this system when some customers lack the device or the application. Setting up the system also requires accurate data, which has to be prepared in advance, and this is costly.

The implementation challenge for touch screen purchasing is set-up cost. The hypermarket needs to install many touch screen kiosks, and kiosks are expensive. This type of technology only saves the space used to display products; the hypermarket still needs to keep an inventory of every product it sells, and the inventory costs for this technology are high.

Security Issues

On the security side, customers should always look for the address-bar padlock symbol. A webpage should always be Secure Sockets Layer (SSL)-encrypted when you plan to use credit card information to shop; SSL encryption ensures privacy by restricting which computers can access the data being transferred, limiting access to the user and the online retailer exclusively. Next, never give out a credit card number over email. Legitimate retailers will never ask for credit card information or other sensitive personal details by email; the only time card information should be entered is on an SSL-encrypted webpage operated by a trusted retailer. If shopping on a mobile device, stick to apps you know: mobile shopping presents its own set of security issues, so it is better to use apps that come directly from retailers and to make purchases inside those apps.
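The padlock advice boils down to checking that the page is served over an https:// URL before entering card details. A trivial helper (illustrative only, not part of any retailer’s system; the URLs are made up) makes the rule concrete:

```python
from urllib.parse import urlparse

def safe_for_card_entry(url):
    """Only submit card details when the page is served over SSL/TLS;
    the address-bar padlock corresponds to an https:// scheme."""
    return urlparse(url).scheme.lower() == "https"

print(safe_for_card_entry("https://shop.example.com/checkout"))  # True
print(safe_for_card_entry("http://shop.example.com/checkout"))   # False
```

Real browsers also verify the site’s certificate before showing the padlock, so the scheme check is a necessary but not sufficient condition.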

Lastly, never make purchases over public, unsecured Wi-Fi; doing so could leave your personal information at risk.

Smart card deployments also face broader security challenges: integrating security practices across agencies, a task requiring collaboration among separate and dissimilar internal organizations; achieving smart card interoperability across the government; and maintaining the security of smart card systems and the privacy of personal information. According to Worku (2010), e-payment and e-banking applications represent a security challenge because they depend heavily on critical ICT systems that create vulnerabilities in financial institutions and businesses and can potentially harm customers.

Customer information is stored on the mobile device, so if the device is lost, personal information may be disclosed. The system also holds records of customers’ purchases; the retailer can use these to send annoying promotional messages, or even sell the purchase records to other industries.

Customer personal information is displayed on the touch screen kiosk, where interested parties can easily obtain it for immoral purposes, which is very dangerous for the customer. Besides that, customers should know their rights: consumers are required to write a physical letter to the retailer within 60 days detailing any complaint, with a return receipt acting as proof that the creditor received the letter.

The Federal Trade Commission provides an example letter, so all you need to do is fill in the blanks with your information.

Virtual Reality in Today’s Society

Virtual reality is a computer-generated simulation of the real world. This simulation is not static; instead it responds to the user’s input, whether vocal or tactile, in real time. To achieve this interactivity, the computer must constantly monitor the user’s movements or verbal commands and react instantaneously, changing the synthetic world in response to him or her. [1] By making use of all of a human’s sensory experience in this way, virtual reality takes the quality of interactivity achieved, say, in a computer game one stage further: users of virtual reality can not only see and move objects, they can also touch and feel them. [2] This essay explores the evolution of virtual realities and the many uses of virtual reality in society today, as well as considering its ethical implications.

Burdea and Coiffet comment that the history of virtual reality dates back more than forty years. The Sensorama Simulator virtual reality arcade game was invented by Morton Heilig in 1962. This game had the capability to simulate a motorcycle ride through a city, using 3-D effects, seat vibrations, appropriate smells, sounds, and wind effects produced by fans. [3] Head-mounted displays were introduced in 1966 by Ivan Sutherland, but were heavy and uncomfortable. In 1985, Michael McGreevey of NASA developed a cheaper and lighter version of the helmet, fitted with mini display screens and sensors to track movement. The sensory glove had been designed in the early 1980s, but it was in 1986 that Jaron Lanier designed a new glove to fit in with the helmet to create a full virtual reality. [4] Advancements continued to be made in graphics, and in 1993 virtual reality became the theme of a major conference of the Institute of Electrical and Electronics Engineers (IEEE) in Seattle, making it clear that virtual reality had entered the mainstream scientific community. [5]

Since the end of the 1980s, new interfaces have communicated three-dimensional images using the head-mounted display (HMD), with video cameras tracking the image of the user in a virtual world where he can manipulate objects. More recently there has been a development called CAVE (Cave Automatic Virtual Environment), where the user is enclosed in a six-sided environment surrounded by projection screens viewed through light stereo glasses, giving the impression of 3-D. [6] The suggestive impression is one of ‘immersing oneself in the image space, moving and interacting there in “real time”, and intervening creatively’. [7] However, Burdea and Coiffet point out that with the swift advancements in technology, ‘virtual reality today is done mostly without head-mounted displays, by using large projection screens or desktop PCs’, and sensing gloves are now regularly replaced with joysticks. [8]

The world of computer games has become a major area of importance for virtual reality, where the sense of immersion is important for gaming excitement. This creation of interactive virtual worlds has used grand, sweeping cinematic sequences and other techniques used in traditional cinema, such as ‘the expressive use of camera angles and depth of field, and dramatic lighting of 3-D computer generated sets to create mood and atmosphere’. [9] Actors could be used, superimposed over 3-D backgrounds, or as the games became more advanced, synthetic characters were created moving in real time. [10] This means that the space in which the characters move can now change over time, rendering the same space different when visited at a later time during the game. These changes enabled computer designers to integrate the player more deeply into the gaming world cinematically and to create a sense of visual reality.

The immersion experienced when playing a computer game is made a much more total and intense experience when the player becomes a part of the game, that is, physically enters a virtual world. Virtual reality ‘provides the subject with the illusion of being present in a simulated world.’ [11] This virtual world, unlike the purely visual engagement of a computer game, allows for bodily engagement with the synthetic world. Virtual reality also allows the user to change elements of this simulated world: it gives an added feeling of control. Virtual reality allows people to experience elements of life without any physical commitments, possible dangers or general inconveniences of a real experience.

Lev Manovich comments that virtual worlds are sometimes put forward as the logical successors of cinema, that they are ‘the key cultural form of the twenty-first century just as cinema was the key cultural form of the twentieth century’. [12] Indeed, Grau and Custance compare virtual reality with film, saying: ‘virtual reality now makes it possible to represent space as dependent on the direction of the observer’s gaze: the viewpoint is no longer static or dynamically linear, as in the film, but theoretically includes an infinite number of possible perspectives.’ [13]

Technically, virtual reality ‘utilises the same framing’ as a cinema rectangular frame. This kind of frame only allows a partial view of a wider space. The virtual camera, as with a cinema screen, moves around in relation to the viewer in order to reveal different parts of the shot. [14] This framing device is vital to the virtual reality world in that it gives a small shot of a larger world, thereby providing a wholly subjective and totally personal viewing experience.

While Manovich looks to cinema as a basis for virtual technology, Grau and Custance look to art. They argue that the idea of virtual reality ‘rests firmly on historical art traditions, which belong to a discontinuous movement of seeking illusionary image spaces’. [15] Taking into account the lack of technology further back in history, Grau and Custance believe that ‘the idea stretches back at least as far as classical antiquity and is alive again today in the immersive visualization strategies of virtual reality art.’ [16] Indeed, for Grau and Custance, this basic idea of finding these ‘immersive spaces of illusion’ is threaded through the history of art.

Grau and Custance also point out the lack of natural involvement with the world through the technological illusion of power and control. They say, ironically that ‘the adherents of virtual reality … have often reiterated their claim that immersion in virtual reality intensifies their relationship with nature’. [17] Indeed, an experience so totally reliant on technology and devoid of anything natural can bring about this feeling of connection to nature due to its resemblance of the real world.

Manovich too comments on the illusive quality of any ‘natural’ involvement or control. He says that the user is only altering things that are already inside the computer, the data and memory of the virtual world. [18] The realm of virtual reality is driven by the desire to find a perfect recreation of the real world, a perfect illusion. The ideal interface seems to be one in which the interface or computer itself is entirely invisible, it seeks to block out the very means of creation of the virtual world, making the existence of the user in the virtual world seem totally ‘natural’. [19]

The experience means that the user is totally isolated from the actual world whilst at the same time being given this feeling of total ‘natural’ immersion in a new world, as well as a sense of omnipotence. The user in effect becomes a kind of fictional character of their own creation, doing whatever they like, whenever they like, always with a sense of immortality. There are ethical problems relating to the potential decrease in real physical interaction and normal human relationships, as people may come to prefer their virtual world to their real life. Indeed, in virtual reality the physical world no longer exists at all, as all ‘real’ action takes place in virtual space. [20] There is another ethical concern: the possibility of children accessing unsuitable experiences in a virtual world, as censorship would be difficult. This is similar to the problem of violence and adult themes in films and on the internet being available to children today. Virtual reality is an area of even greater concern, however, as children would have the opportunity to take part in the action themselves. A further concern is that criminals could practise their crimes in a virtual world before acting in reality.

There are many positive uses for virtual reality today in areas such as medicine, education, entertainment and psychology. For example, virtual reality can provide flight and driving simulation and operation simulation, and it can help with architectural design or the treatment of phobias. These things can be practised realistically without the risk of anything going wrong in flight training, driving or surgery. Virtual reality can also potentially be used in medicine to evaluate a patient and diagnose problems, as well as to aid in operations. Disabled people have the opportunity to join in activities not usually available to them. An architect can use the method to plan out a building before starting to construct it: using virtual reality avoids the need to build several different prototypes. Someone afraid of spiders can meet one in a virtual world under careful programming to reduce sensitivity over a period of time; indeed, any phobia could be treated using this kind of virtual reality exposure therapy. The field of education is a huge potential area of use for virtual reality; it can even be used to practise sport.

There is another important use for virtual reality that is not related to entertainment or education. Telepresence is an ever-increasing part of the digital and virtual world. Telepresence combines three kinds of technology: robotics, telecommunications and virtual reality. With telepresence, ‘the user of a virtual environment, for example, can intervene in the environment via telecommunication and a remote robot and, in the opposite direction, to receive sensory feedback, a sensory experience of a remote event’. [21]

Manovich calls telepresence a ‘much more radical technology than virtual reality, or computer simulations in general’. [22] Indeed, Manovich explains that with virtual reality, the user controls a simulated world, that is, the computer data. In contrast, ‘telepresence allows the subject to control not just the simulation but reality itself’ because it allows the user to ‘manipulate remotely physical reality in real time through its image’, [23] that is, the user’s actions affect what happens, at that very moment, in a separate place. This is useful for tasks such as, Manovich suggests, ‘repairing a space station’; [24] the technique can also be used in battle to direct missiles. [25]

So, virtual reality operates on two very opposing grounds. On the one hand it allows great freedom for the user, who feels he can move anywhere through space with the camera; at the same time, virtual reality totally confines the body in its simulated world. Manovich recognises that the physical world is subordinated in this way when he says virtual reality renders ‘physical space … totally disregarded’. [26] However, with telepresence, the physical world is very much regarded. Indeed, Mark Hansen thinks Manovich’s comment on the lack of physicality overlooks the experience of space in the potential of virtual reality, even if the body is actually confined. [27] Hansen uses the example of telepresence to explain how simulation and space can coincide to be effective. Indeed, with telepresence, the physical actions, although limited in the space where the user resides, do have an effect at another location. In this way space has been found and used: if not in the same location as the user, their movements have still had a physical effect somewhere else. [28]

It seems that virtual reality has many uses in society today, from entertainment to medicine, from psychology to architecture. Telepresence is now a powerful and extremely useful part of the virtual and digital world. With the continuing advancement of technology and the many great uses virtual reality can surely have in society, it is important to bear in mind the negative consequences if virtual reality techniques are not closely monitored, especially as they become more widely available. A society always plugged into its private, virtual worlds would not be a positive development for human relationships; children also need to be protected from an environment where anything and everything can appear real and personal to the user. However, as long as we are aware of the potential negative implications, the development of advanced virtual reality has great potential benefits for society.

Sources Used

Burdea, G. C. and Coiffet, P. (2003). Virtual Reality Technology. Chichester: Wiley-IEEE

Grau, O. and Custance, G. (2004). Virtual Art: From Illusion to Immersion. Cambridge: MIT Press

Hansen, M. B. N. (2004). New Philosophy for New Media. Cambridge: MIT Press

Heim, M. (1994). The Metaphysics of Virtual Reality. Oxford: Oxford University Press

Manovich, L. (2002). The Language of New Media. Cambridge: MIT Press

Sherman, W. R. and Craig, A. B. (2003). Understanding Virtual Reality: Interface, Application, and Design. San Francisco: Morgan Kaufmann

http://library.thinkquest.org/26890/virtualrealityt.htm

Video Conferencing: Advantages and Disadvantages

The exponential growth of the knowledge-based society, triggered by the equally strong impact of information technology and its various tools, has expanded human intellectual creativity. The information technology portal has enabled both the analysis and the development of ideas and concepts between individuals equipped with nothing more than a simple computer and a telephone connection. The combination of a computer, a telephone and the services of an Internet Service Provider has enabled large numbers of users to accomplish targets previously deemed impossible. The synergy of information technology and the people behind the computer has resulted in the accomplishment of goals, in turn providing excellent results for their respective organizations. One such area of this new mode of exchanging information is video-conferencing, a development which has further reduced the cost and time needed to take decisions, meet people, interact, learn and teach, even from the comfort of the living room or boardroom. Certainly one of the most informative modes of telecommuting, video-conferencing has emerged as a strong tool for exchanging information, imparting training, and teaching and learning varied courses in both business and academic environments. The following paper will strive to present some of the salient aspects and characteristics of video-conferencing, its uses, advantages and disadvantages, and to analyse it from the perspective of business organizations, with a particular focus on the use of video-conferencing as a means of communication for venue providers and event management organizations.

Our present-day environment is evidence of an era in which time is of the essence and, in the majority of instances, of crucial importance. This is true both of the fiercely competitive business environment and of the ever faster pace of the knowledge-based industries. A brief overview of the developments of the last two decades reveals that the global economy has shown a set of trends somewhat similar to those witnessed during the era of industrialization some three centuries ago. Thus, one can easily observe the gradual transition from industrial-based economies to the present-day knowledge-based economy. This can be evidenced in practically every sphere of life, including but not limited to business, private and social life. The onset and spread of information technology and its various modes are largely responsible for this significant transition. Today, access to information is no longer the domain of a few groups, regions or individuals, nor can it easily be manipulated; instead, access to information is now possible through a personal computer, a telephone connection and the services of an Internet Service Provider. This has transformed information into one of the defining challenges of a fully developed knowledge-based economy. Those with the latest information in their respective disciplines are assumed to be successful, and this is only possible through the appropriate use of the modern tools of information technology, video-conferencing being one such tool. Such is the gravity of the need to acquire knowledge that one has to stay practically a few steps ahead of one's nearest competitor simply to exist in the present-day competitive environment. The market dynamics and realities of respective industries practically force individuals and organizations alike to stay abreast and to compete successfully in the face of the allied challenges.
This is only possible by accepting challenges, however intricate and large they may be, and converting them into an effective source of knowledge. Using technology as a conduit for access to this knowledge not only saves significant resources; the crucial factor of time is also fully exploited and saved. It is this saving of time and resources that has given rise to such tools as video-conferencing, giving an edge to the patterns of doing business and living a successful life. Though marred by a number of drawbacks and disadvantages, video-conferencing has nevertheless emerged as one of the most effective tools of communication in the present-day business environment, and it is this mode of modern communication which will comprise the larger segment of the following paper.

According to information accessed from the web pages of www.whatis.com, a videoconference is a means of communication between two or more groups of people in separate locations. Generally, a video-conference involves the use of audio, video and ancillary equipment enabling the groups of people to see, hear and converse with each other from multiple locations. Whether conducted from a boardroom, a classroom or a manufacturing site, video-conferencing allows each party to interact with the others as if they were sitting in the same room. The single most important advantage of video-conferencing has been the provision or enhancement of speed for business processes and operations, just as the use of e-mail and facsimile has speeded up access to information. Some of the major benefits derived from video-conferencing include, but are not limited to, cost savings in travel, accommodation and staff time, and greater and enhanced communication among employees at distant locations, and between suppliers and customers. (Video Conferencing UK, 2005)

As briefly outlined in the opening paragraphs, it is access to information and knowledge that has enabled individuals and organizations to stay abreast of their nearest competitors, an aspect that is true for businesses and academia alike. Simply put, a business organization cannot remain competitive if it does not have access to advance information in its respective industry; similarly, a teacher cannot impart education or training to pupils if he or she remains behind the latest research in the respective subject. Acknowledging the fact that the present-day era comprises, in effect, a networked environment, the importance of video-conferencing takes on truly dynamic dimensions. This is all the more true in the face of global events which can leave a devastating effect on the local and international economy, and over which no individual, organization or country can command any measure of control.

Examples of such global events that have shattered economies, devastated entire countrysides, and left a trail of human misery and loss of property include the tragic events of September 11; the SARS (Severe Acute Respiratory Syndrome) virus of South East Asia; and the devastating tsunami waves that destroyed precious life and property from the islands of the Maldives in South Asia to the shores of Dar-es-Salaam in the East African country of Tanzania. It is events such as these which make the communication tool of video-conferencing ever more critical in the present-day environment.

The need for information technology tools such as video-conferencing is further precipitated by the diverse nature of our societies across the globe, which in turn gives rise to political, economic and social risks, the threat of global diseases, and terrorism, including bio-terrorism, any of which pose a significant challenge not only to the productivity and economics of a nation, but to individuals and organizations across the globe as well. Just as advances in medical research have triggered a revolution in the treatment and care of a variety of diseases, the revolution in information technology has accomplished similar results, providing and collecting crucial data and information from every corner of the globe for the general benefit of global populations. Information technology tools such as video-conferencing have thus made it possible to achieve better productivity and enhanced performance in our organizations, allowing general populations to take preventive and corrective action in the face of emergencies and crisis situations, or to use the same tools to raise production levels and launch new and better products in the face of severe competition. Video-conferencing thus aids in the accomplishment of performance excellence and provides an advance information portal to ward off threats of disease, the spread of viruses, and incoming natural calamities, including storms and cyclones such as those witnessed in the tsunami of December 2004. It is thus essential for practically all businesses, academic institutions, government agencies and general populations to develop their respective multi-cultural and technology-supported communication systems, so that they are better able to address any of the said contingencies, and to engage and use information technology tools, including video-conferencing, to accomplish the same. (Andersen, 2004)

Though the above sections have briefly outlined the growing importance of video-conferencing as a tool of information technology, the following review of articles is a further attempt to provide evidence in this respect. The first article is titled “Online In the Outback: The Use of Videoconferencing by Australian Aborigines”, authored by Mark Hodges and published in the April 1996 issue of “Technology Review”.

Upon reading the said article by Mark Hodges, it is evident that while the use of video-conferencing still remained a remote idea, and its application under-utilized, in countries such as the United States of America and in Europe, the Warlpiri aborigines of the Tanami region of Australia’s Northern Territory had been using this technology effectively since 1993. The video-conferencing network, given the name of the ‘Tanami Network’ after the region, links some four settlements of Warlpiri aborigines with each other as well as with the major Australian cities of Sydney, Darwin and Alice Springs.

The use of video-conferencing has proved such a successful venture that the aborigines are able to communicate with and gain vital information from a number of government service providers located in the said cities; at the same time, video-conferencing has provided these Warlpiri aborigines with access to customers and business organizations for Warlpiri arts and crafts, and has established links with other Australian aborigines and with indigenous populations living in other countries of the world.

Also used for consultations amongst aborigine leaders to arrive at important decisions on their traditional ceremonies and community-related issues, video-conferencing has successfully been expanded into such applications as access to educational programmes, including adult and secondary education, teacher training, legal assistance, social security, and access to remote health care.

In essence, the Tanami Network, using the video-conferencing tool of information technology, has provided these Australian aborigines with an excellent portal for enhancing the quality of their family and community life. Perhaps the single most important advantage gained by the Australian aborigines from video-conferencing technology has been overcoming the lack of communication within the close circle of family and friends, which even today stands threatened by the alarming influence of Australian western culture as well as by the geographic isolation of these fragile aborigine communities across the Australian continent.

Thus, video-conferencing has been successfully used in the areas of education, ceremonial functions, decision-making, access to health care, promotion of Aboriginal artifacts, arts and culture, and access to businesses located in urban areas of Australia, as well as in places as far off as London and the United States of America. The link created by video-conferencing with aborigines living in other parts of the world is yet another major accomplishment of this technology. The use of video-conferencing has resulted in the creation of a close network with the Saami of Scandinavia, the Inupiat of Alaska, the Inuit of Canada, and the Little Red Cree Nation of the province of Alberta in Canada.

A similar video-conferencing network, also in Australia, provided aborigine students of New South Wales with the opportunity to continue secondary education. With a link between four schools situated in remote locations, the students use video-conferencing technology to finish the final two years of their education, as against the options of either dropping out of school or taking the more expensive route of joining a boarding school located at a distance of 200 to 400 kilometers. In addition to the crucial opportunity to continue education, video-conferencing technology also provides these populations with topics and subjects otherwise not available within the confines of the aborigine community. (Hodges, 1996; Fischer, 1992; Munn, 1973; Young, 1995)

The above sections have briefly outlined some of the salient features and uses of video-conferencing in the present-day environment, and have touched upon some of the situations in which video-conferencing as a tool of information technology can save precious lives and property. The following section comprises a brief overview of the development of video-conferencing, particularly over the last five years, and of its introduction as an important tool for exchanging information over the last few decades.

A brief look at the development of information technology over the last three decades shows that video-conferencing has indeed emerged as one of the most viable forms of communication, as compared to the standard telephone originally created by Alexander Graham Bell. Early impressions of video-conferencing were that it was expensive; that it did not portray images as well as might be required; that it might not work owing to inadequate bandwidth or the unavailability of a suitable phone connection; that setting up the ancillary equipment, such as the monitors and the network of cords and wires, was difficult; and even such simple objections as the way people would actually appear on a monitor screen. The list could go on.

Yet all these and other excuses are now history, as the last five years have witnessed tremendous growth and the development of an entirely new set of equipment, together with relevant advances in telecommunication technology. This has made the video-conferencing mode of communication not only cost-effective; the hardware and software now in use are also fairly easy to operate, with a minimum of training required. This has fulfilled the two most important demands of business circles across the globe: first, video-conferencing has brought a significant reduction in travel expenses, and secondly, it has made communication between people scattered across continents fairly simple and within the grasp of general populations and communities.

In fact, studies carried out by Wainhouse Research have noted that since the advent of easy-to-use software, cost-effective hardware and ready access to telephone lines in the last two years, there has been steady growth of approximately 30 percent in annual revenues across the video-conferencing industry.

The availability of equipment such as the web camera is yet another evolution, one which has turned the simple desktop computer into a ‘digital media’ hub, moving traditional video-conferencing technology into a new spectrum and providing practically everyone who has a desktop computer, a telephone line and a good Internet connection with modern video-conferencing technology.

The last five years have also witnessed the introduction of Internet Protocol (IP) systems alongside networks based on the Integrated Services Digital Network (ISDN), even though the latter still dominate the majority of the videoconference industry across the globe. Studies carried out by Frost & Sullivan on the use of the Internet noted that more than 95 percent of videoconferences used ISDN networks; the same study also noted that 20 percent of all video-conferencing by groups and organizations was done through the Internet Protocol, while more than 92 percent of personal video-conferencing was IP-based.

A brief comparison between IP-based and ISDN-based networking for video-conferencing shows that IP-based networking is more economical, provides for a better exchange of information and data, offers easy integration of video-conferencing with desktop computers, and allows for a better-managed video-conferencing network. The same study also shows that by next year the differences between ISDN-based and IP-based networks for video-conferencing will be practically eliminated.

Another major development in the video-conferencing industry is the growing demand among organizations for managing video-conferencing at their own premises, using their own staff. Employees in information technology departments now handle video-conferencing in addition to the responsibilities they already hold, such as data storage and e-mail management. This addition of video-conferencing to traditional networking functions is indeed a major shift in the industry. The new trend of using desktop computers as hubs for video-conferencing is also a source of worry for companies providing specialised software and equipment for the video-conferencing industry. Some of the organizations worth mentioning that are involved in products and services for the video-conferencing industry include Avaya, Cisco, Microsoft, and Nortel Networks.

With the desktop computer already in use as a hub for video-conferencing, the industry is constantly coming up with new developments and technologies in search of upgrades to the quality of both the audio and the video images transmitted over the network.

Some of the modern tools introduced include the videophone, a product launched by Motorola/World Gate Communications, which transmits full-motion video images with excellent audio levels. It requires a high-speed Internet connection, yet in appearance it is simply a cellular (mobile) phone.

The LCD-integrated display is yet another modern tool for communication: an advanced combination of integrated video-conferencing codecs, cameras, microphones and speakers, all installed within the desktop display. Three major manufacturers, namely Polycom, Sony and Tandberg, have each successfully launched products featuring these characteristics for videoconferencing. Sony’s model PCS-TL50 perhaps stands out as the most advanced version, as it can perform the double function of a desktop computer display while being easily switched over to a video-conference monitor.

Another development is software-based video-conferencing technology. Polycom’s desktop model PVX is one example of this new technology, which requires only a USB web-cam, a desktop computer, and software from one of the vendors in the video-conferencing industry. The significant feature of software-based video-conferencing is that it offers high-resolution pictures and high audio quality. Polycom’s PVX model offers pictures at 30 frames per second, with sound quality at 14 kHz, making it one of the best-performing information technology tools in video-conferencing. (Regenold, 2005)

As reiterated in the above sections of the paper, the information technology portal of video-conferencing has proved its worth through its tremendous potential to reach anywhere at any time. In addition, the need for physical presence is totally eliminated when imparting training or education, or merely exchanging information with employees of the same organization. The different situations and sectors where video-conferencing is widely applied include education and professional training, though it is also used in vital meetings amongst board members of an organization situated in distant locations across the globe.

Though professional training and corporate application in business organizations are said to be the most important applications of video-conferencing, it is in the arena of education that its application has proved most beneficial. As described in the above case studies of the Aborigines of Australia, receiving feedback and information from locations as distant as London and the United States of America, or receiving education within the vast territories of the Australian continent, video-conferencing has truly added new dimensions to the discipline of education.

One may note that though video-conferencing in the arena of education has been in practice for a number of years, its combination with online forms of education has added significant value to the discipline. Together, video-conferencing and online technologies have not only improved the quality of education, since visual cues and body language are utilized in video-conferencing; their pairing has also allowed for the provision of education by experts without the need for their physical presence. Thus, both the factors of time and place have been made independent, while bringing a significant reduction in the travel costs that would otherwise be required to move experts from one location to another. (Reed & Woodruff, 1995; Willis, 1996)

From the above, it is evident that video-conferencing and the online mode of education, when combined, truly offer an excellent form of imparting education, without the numerous obstacles that would arise in the absence of both of the said technology portals. However, numerous studies provide significant evidence that video-conferencing, even when combined with the online form of education, has its own set of limitations, and perhaps these limitations are the reasons for the failure to make video-conferencing an unqualified success.

One such limitation, and perhaps the greatest obstacle, is the lack of interaction amongst the participants of a video-conference. Sometimes termed the “talking heads” format, this way of imparting education and training is observed to lose its viability in the absence of true interaction, or when it fails to encourage participants to take an active part in the respective education or training programme. In this context, one may observe that even a face-to-face presentation of no less than 50 minutes is a tiring experience for the participants; to bear such a lecture through video-conferencing is a practically impossible exercise.

As is evident from a number of studies, a one-sided lecture remains productive, and the majority of participants remain active listeners, for a maximum of only 20 minutes. After approximately 20 minutes of a one-sided lecture, an atmosphere of drowsiness can be witnessed amongst the participants. It is because of this fact that video-conferencing, even with the assistance of online technology, has not really become a favoured form of imparting education or training.

There are, however, two methods or solutions for addressing such dilemmas as the lack of interaction amongst the participants. The first is the pedagogical approach, while the second works through the effective use of technological aids.

In the pedagogical approach to addressing the lack of interaction amongst the participants, there are three basic principles which can provide avenues for active participation.

The first principle is breaking the ice: the creation of an atmosphere which provides a motivating factor, in turn pushing the participants to take an active part in the ongoing lecture while they are in a video-conference. This motivation and atmosphere also allow participants to overcome feelings of self-consciousness.

Secondly, the shorter and more focused a lecture is, the better the outcome in terms of interaction by the participants, and the easier the transfer of the knowledge or training text. One way to accomplish this, and to keep presentations short, is to provide a break after every 20 minutes and engage the participants in some form of activity.

The third principle, and perhaps the most important, is obliging participants to get involved in the interaction, rather than leaving it to them to decide whether or not to participate. This factor is also important as it allows both for breaking the ice and for breaking the lecture or training session into a number of segments, each supported by a separate form of activity from the participants. Involving participants and engaging them in active interaction can be accomplished by involving them in debates between a number of experts in the same discipline, through the adoption of role models or role-playing, or by putting controversial questions to the participants so that they are able to offer a variety of answers to the same question, instead of asking a question which has only one answer. This third principle also implies that interaction amongst the participants has to be pre-planned prior to the actual video-conference session, and cannot simply be improvised during the respective session or educational text. Though this form of inviting and engaging the participants is truly effective in delivering a successful lecture or training programme, whether professional or educational, its single largest drawback lies in the fact that it can only be practiced and implemented in a live presentation or videoconference.

The failure to participate actively in a videoconference can also be addressed from a technological perspective through the use of recorded messages or training programmes. In this manner, participants can gain access to the educational or training material at their own convenience, normally over the Internet (Shearer, 2003; Kunz, 2000).

The advantages of video-conferencing include the following:

It allows for the utilization of existing and proven technologies.

It requires relatively little training.

Video-conferencing can be used in a number of settings, environments, and configurations.

It is one of the most practical tools for creating a direct liaison with both audio as well as visual linkages amongst the participants.

Operating costs are comparatively low, though they depend on the distance and the number of sites involved.

Taking the case of an interview of a potential candidate by a committee of officials within an organization (such as interviewing a candidate to fill a faculty position in an academic institution) shows that the advantages of video-conferencing far outweigh the disadvantages. First, the convenience of the applicant is at the forefront, followed by a significant reduction in travel costs and in the time that would otherwise be taken from primary responsibilities. There is also the additional advantage of videotaping the entire proceedings of the interview, for later screening and for those officials who cannot be present at the interview.

One of the most profound and proven advantages of video-conferencing has been observed in the teaching/learning environment of academic institutions. With the exponential growth of online forms of education, video-conferencing has added new dimensions to teaching and learning situations. Though specialised equipment and dedicated personnel can be employed, the basic requirements are only an Internet service provider, a computer or laptop, and a web camera.

Video-conferencing has also found tremendous favour amongst teachers and pupils for one-to-one teaching and for communication with small groups of students in distant locations, particularly since the advent of the Internet as a means of direct communication. The same application has proved equally valuable for business communications, both for long-distance meetings and for one-to-one contact with employees in distant branches of an organization.

Though relatively less common, ISDN conferencing is an advanced form of video-conferencing that provides significantly better quality of both audio and video. Its principal use is in the teaching/learning environment where there is a need to 'ask the expert': it is for calling upon external experts in far-off locations that ISDN video-conferencing is best applied. Another advantage of this form of video-conferencing is its capacity to support an entire group of professionals or students and involve them in the teaching/learning environment through direct interaction.

One of the disadvantages of video-conferencing is its initial establishment cost, which can be high compared with traditional modes of meeting.

Video-conferencing is still considered an evolving technology, hence standardization of its usage is yet to be fully developed.

One of the major restraining factors and disadvantages of video-conferencing is the inadequate infrastructure of local telephone networks, which are one of its prime requisites.

Expansion of video-conferencing facilities and locations requires substantial financing, hence its utility remains limited.

The operational costs of video-conferencing also serve as an impediment.

Taking the same example of an interview of a candidate by a team of officials of an organization, there are also disadvantages to video-conferencing; these include potential technical difficulties such as problems with the software or hardware, and failure of the network. Though the systems can be tested prior to the actual event, such as the interview, there is always the possibility that an unexpected technical problem will emerge before or even during the video-conference itself.

A major impediment in video-conferencing is the lack of personal interaction, a factor often regarded as an important feature of any meeting, interview or feedback session. A prime example is the ever-important handshake, considered an important aspect of the conclusion of a business meeting or the successful completion of an interview.

Then there is the aspect of eye contact, which also remains absent during a videoconference; eye contact serves as an important feature in the physical assessment of an individual, such as an applicant during an interview.

Another disadvantage observed during a videoconference is the absence of trained support personnel, which creates a host of problems for participants unfamiliar with the video-conferencing equipment and environment, with the result that the videoconference hinders rather than helps the participants.

The disadvantages of the ISDN form of video-conferencing are the relatively high costs of installation, rental and calls. In addition, the specific equipment required to support ISDN video-conferencing is costly, and data collaboration over ISDN is difficult to use, which is a further disadvantage.

Conclusion

This paper has sought to present the topic of video-conferencing from a number of perspectives, and to provide evidence of the popularity of one of the most advanced forms of communication prevalent today across various industries. Whether in academia, business organizations, professional training or government offices, video-conferencing has established itself as an effective medium of communication.