Development of Microprocessor Based Automatic Gate

ABSTRACT

In this paper, we give detailed information about the development of a microprocessor-based automatic gate. Many problems occur in ordinary gate operation; with a microprocessor-based automatic gate, these problems can be removed easily. We apply this automatic gate to automatic car parking. The gate senses vehicles as they come near it, opens automatically, waits for a definite time, and closes after that time has passed. The system also keeps count of the number of vehicles that have entered the parking area and calculates the space remaining. The automatic gate developed in this paper is controlled by software, which can be modified whenever the system needs to change.

Keywords: automatic gate, microprocessor, automobile, traffic controllers.

INTRODUCTION

The need for automatic gates is increasing rapidly day by day. This paper describes the use of a microprocessor as the controller. The automatic gate is an alternative to the manual gate: manual systems are costly and time consuming. Microcontrolled gates are also used in sound systems, robots, automatic braking systems, etc.

This automatic gate can be used in the parking areas of residential homes and organisations, and in public car parks. The system provides automatic control to open and close the door for parking, and it opens the door only when space is available.

The automatic gate used here is not intended for security purposes. It is developed simply to eliminate the problems faced by the older manual method.

SYSTEM OVERVIEW

The system presented here is a microprocessor-based automatic gate. The microprocessor controls the sensor, which gives information about the space limit. The system opens, waits, and closes the door for each car, and counts the number of cars that enter or exit. It consists of a trigger circuit, sensor, CPU and memory module, display, gate, and power supply unit. First, the sensor gives the input signal to the system. The sensor is optical: when a car crosses it, the signal is HIGH; otherwise it is LOW. The trigger circuit is responsible for these HIGH and LOW signals, converting the analog sensor signal to digital. If the signal is HIGH, the trigger sends the signal to the interface unit and the car enters the parking area; if the signal is LOW, no car enters. The power supply unit supplies DC voltage for the system.

Block diagram of the system
HARDWARE AND SOFTWARE DESIGN

The system design is divided into two parts:

Hardware design.
Software design.

Hardware design

Sensor unit
Trigger circuit
CPU module
Memory module
Display unit
Gate control unit
Power supply unit

1. Sensor Unit:-

It is an optical sensor based on a light-dependent resistor (LDR), which changes its resistance with the intensity of light. In this system we use the ORP12, which has a dark resistance of about 10 MΩ. When the light beam is focused on it, the resistance is low; when the light is interrupted, the resistance rises toward the dark resistance. Two pairs of sensors are used: one for the entrance gate and another for the exit gate. The sensor unit sends its output to the trigger circuit. When the light beam is focused, the output voltages are V01 and V02; when the light is interrupted, the voltage rises to 5 V.
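As an illustration of how the LDR produces those two voltage levels, it can be placed as the lower leg of a simple voltage divider so that the output swings toward 5 V when the beam is broken. The following Python sketch shows the idea; the divider arrangement and the fixed-resistor value are assumptions for illustration, not taken from the original design.

    # Voltage divider with the LDR as the lower leg: the output is taken
    # across the LDR, so darkness (high resistance) pulls the output high.
    def divider_out(vcc, r_fixed, r_ldr):
        return vcc * r_ldr / (r_fixed + r_ldr)

    print(divider_out(5.0, 10e3, 400.0))  # beam on sensor:   ~0.19 V (LOW)
    print(divider_out(5.0, 10e3, 10e6))   # beam interrupted: ~5 V   (HIGH)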

2. Trigger Circuit:-

This circuit is made up of a trigger and a two-input NAND gate. It receives the output from the sensor unit: only when there is output from the sensor unit does the trigger circuit go HIGH; otherwise it remains at the LOW level.

3. CPU Module:-

This provides the system clock, reset, and access to the address, data, and control buses. The additional circuits used are:

Clock circuit.
Reset circuit.

Clock circuit: A crystal oscillator is used to implement the clock circuit, since a crystal oscillator gives a more reliable high-level output voltage. The CPU used in this design requires a clock signal, so the crystal oscillator output is passed through a flip-flop.

Reset Circuit: After power is supplied, this circuit initialises the CPU, and it can restart the CPU if a halt occurs. When the CPU is reset, execution starts from the beginning and pending interrupts are cleared.

4. Memory Module:-

Two addressing techniques are used in this module: linear select and full decoding. In linear select, each address bit selects a device; this can be done in a small system and needs no decoding hardware, but it wastes address space. In full decoding, the complete memory address is decoded to select a memory device.

Address Decoder: It maps the address issued by the microprocessor to the corresponding memory device. Combinational circuits with multiple enable inputs are used; only when all enables are active does the decoder assert its active-low outputs.
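As a rough software model of full decoding (the addresses, device sizes, and active-low convention here are illustrative assumptions, not the paper's actual memory map), a chip-select line is asserted only when the complete address falls inside a device's range:

    # Full decoding: the complete address participates in device selection.
    def chip_select(address, base, size):
        """Return 0 (active low) when address falls in [base, base + size)."""
        return 0 if base <= address < base + size else 1

    # Hypothetical 2 KB ROM at 0x0000 and 1 KB RAM at 0x2000:
    print(chip_select(0x07FF, base=0x0000, size=0x0800))  # 0 -> ROM selected
    print(chip_select(0x23FF, base=0x2000, size=0x0400))  # 0 -> RAM selected
    print(chip_select(0x1000, base=0x0000, size=0x0800))  # 1 -> not selected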

5. Display Unit:-

The display unit uses decimal and hexadecimal formats for display purposes.

Display unit consists of-

Z80 PIO: provides an 8-bit I/O port. It needs a driver to feed its output to the 7-segment display; whenever a vehicle crosses the gate, this unit sends a signal to the driver.
BCD to 7-segment decoder: for displaying a decimal digit, the decoder takes a 4-bit BCD input (a segment-map sketch follows the list below).
7-segment display
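As a sketch of what the BCD-to-7-segment decoding amounts to in software terms (the gfedcba bit order and common-cathode patterns are a common convention, assumed here rather than taken from the paper's hardware):

    # Common-cathode segment patterns, bit order gfedcba (bit 0 = segment a).
    SEGMENTS = {
        0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
        4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
        8: 0b1111111, 9: 0b1101111,
    }

    def display_count(count):
        """Return the segment patterns for a two-digit vehicle count."""
        tens, units = divmod(count % 100, 10)
        return SEGMENTS[tens], SEGMENTS[units]

    print([bin(p) for p in display_count(42)])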

6. Gate Control Unit:-

The gate control unit is made up of:

PNP and NPN transistor
Diodes
Motor.

Transistors are used to control the opening of the gate through the motor. There is a time interval of 10 seconds between the opening and closing of the gate.

Diodes are used to protect the transistors from reverse-bias transients and to improve switching behaviour.

A DC motor is used to control the opening and closing of the gate.

7. Power Supply Unit:-

The power supply unit is designed to deliver 5 V DC, and its output does not change even if there is variation in the AC voltage. The components of the power supply unit are:

Transformer: steps down the 220 or 240 V AC mains.

Diodes: rectify the AC into DC.

Filter Capacitor: used to reduce ripple voltage.

Regulator: receives the unregulated DC input and delivers a steady regulated output.

Software design

Software design refers to the coding; here we program the system. The program modules are:

Main Program
Sensor Subroutine
Delay Subroutine
Output Subroutine

Steps involving in software design:

Algorithm
Flow Chart
Coding

Algorithm

START

1. cnt1 = 0, cnt2 = 0, lim = 20

2. Read the sensor bit

3. Compare sensor bit with entry code and exit code.

a. If sensor bit = entry code then go to step 5

b. Else if sensor bit = exit code then go to step 6

4. Go to step 2

5.

a. Open, wait and close

b. Increment cnt1 and display

c. Go to step 7

6.

a. Open, wait and close

b. Increment cnt2 and display

7. Subtract cnt2 from cnt1

8. Compare result with lim

a. If result = lim then go to step 9

b. Else go to step 2

9. Fetch sensor bit

10. Compare sensor bit

a. If status = exit code then go to step 6

b. Else raise alarm

11. Go to step 9.
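The original implementation is written in assembly language; purely as an illustrative sketch, the same control flow can be expressed in Python. The sensor codes, the elided open/wait/close behaviour, and the simulated event stream below are assumptions for illustration, not the paper's actual code.

    ENTRY_CODE, EXIT_CODE = "entry", "exit"
    LIM = 20  # lim: capacity of the parking area

    def run(events):
        """Simulate the algorithm; events is a sequence of sensor readings."""
        cnt1 = cnt2 = alarms = 0  # cars entered, cars exited, alarms raised
        for status in events:
            if status == EXIT_CODE:
                # open, wait ~10 s, close (delay subroutine elided here)
                cnt2 += 1
            elif status == ENTRY_CODE:
                if cnt1 - cnt2 >= LIM:
                    alarms += 1   # park full: non-exit signals raise the alarm
                else:
                    cnt1 += 1     # open/wait/close, then increment and display
        return cnt1, cnt2, alarms

    print(run([ENTRY_CODE] * 21 + [EXIT_CODE]))  # -> (20, 1, 1)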

CONCLUSION

With this system, the goal of the microprocessor-controlled gate project is achieved. The design is applicable to any kind of system that needs a sensor; in this parking system, the sensor plays the most important part. For effective use, one should have proper knowledge of the sensor, the microprocessor, and assembly language.

The sensor works effectively when operated under high light intensity. This automatic gate can be used in organisations, public car parks, etc.; the system is not designed for security purposes.

Development of Digital Television Technology

Digital TV broadcasting and HDTV
Introduction

While Guglielmo Marconi is known as the inventor of wireless telegraphy in 1897 (Winston, 1998, p. 70), identifying the inventor of television is a little more complicated, as it entailed an evolution of over ten years to move from concept to actual picture transmission and reception. The patent for the electronic scanning tube, termed the iconoscope, was held by Vladimir Zworykin, a Russian-born inventor who worked for Westinghouse in 1923; however, Westinghouse did not see the utility in his invention and ordered Zworykin onto other projects (Bogart, 1956, p. 8, 348). Philo Farnsworth (Horvitz, 2002, p. 9, 92) advanced the concept, and it was John Logie Baird who accomplished the first transmissions of face shapes in 1924 and who is also credited with the first television broadcast in 1926 (Horvitz, 2002, p. 101). From there, the development of television escalated, with analog broadcasting remaining the transmission method used in television until 2000 began the age of digital television and radio broadcasting (Huff, 2001, pp. 4, 8, 69).

To understand digital television, one needs a basic understanding of the manner in which analog television works. In the analog system a video camera takes pictures at 30 frames per second, which are then rasterized into rows of individual dots, termed pixels, that are assigned a specific color and intensity (howstuffworks.com, 2007a). Next, these pixel rows are combined with synchronization signals, termed horizontal and vertical sync, which let the receiving television set determine how the rows should be displayed (howstuffworks.com, 2007a). The final signal containing the preceding is the composite video signal, which is separate from the sound (howstuffworks.com, 2007a). A further difference between analog and digital television is aspect ratio: the analog system has a 4:3 aspect ratio, meaning the screen is four units wide by three units high, so a 25 inch analog television measured diagonally is 15 inches high by 20 inches wide, whereas digital television uses a 16:9 aspect ratio (Metallinos, 1996, pp. 27, 206-207).

Digital broadcasting, as is the case with all broadcast formats, including radio, utilizes part of the electromagnetic spectrum (Montgomery and Powell, 1985, pp. 20, 237). Electromagnetic wave frequencies consist of radio, infrared, visible light, ultraviolet, X-rays, gamma rays, and cosmic rays, in order from lowest to highest (Weber, 1961, pp. 105, 184). In reality, digital television broadcasting is a subset of digital radio broadcasting under the 'one-way digital radio standards', which include not only digital radio and television broadcasting but also digital terrestrial television, DVB-T, ISDB-T, ATSC, T-DMB, mobile TV, satellite TV, radio pagers, and the Eureka 147 standard (DAB), to name a few (Levy, 2001, pp. 7, 10, 11, 33). This examination shall delve into an understanding of digital television broadcasting, DAB, DVB-T, HDTV, and its deployment in Europe as well as the United States.

Television’s New Age

The advantage of digital television is that it offers a broader array of viewing options for both consumers and broadcast stations: it provides a clearer picture and sharper sound, along with the ability for broadcasters to offer multiple sub-channels as a result of its formats (Levy, 2001, p. 71).

There are three format families (howstuffworks.com, 2007b):

1. 480i, 704x480 pixels broadcast at 60 interlaced frames per second (30 complete frames per second), and 480p, 704x480 pixels broadcast at 60 complete frames per second.

2. 720p, in which the picture is 1280x720 pixels broadcast at 60 complete frames per second.

3. 1080i, in which the picture is 1920x1080 pixels sent at 60 interlaced frames per second (30 complete frames per second), and 1080p, in which the picture is 1920x1080 pixels broadcast at 60 complete frames per second.

Note: The above relates to the 525 horizontal line scans, each of which contains approximately 680 pixels. Each pixel represents one element of the picture and contains three areas of red, green, and blue phosphor, which may be either rectangles or dots. The electron gun sends out electron beams that strike the phosphors, causing them to glow, with electromagnets located near the guns directing the beams in sequence to each pixel and the broadcast signal providing information on how bright the phosphors should be made, at what time, and in what sequence.

As digital television broadcasting and digital audio broadcasting, DAB, are both based upon the electromagnetic wave principle, they work in the same manner, with DAB providing a broader range of digital channels that are not available on FM, as well as less hiss and interference, tuning to a station format or name and the support of scrolling radio text, MP3 playback and pause and rewind features (Scott, 1998, p. 9, 210).

DVB-T represents the European standard for broadcast of digital terrestrial television. DVB-T, or Digital Video Broadcasting - Terrestrial, is a system in which the compressed digital audio and video data stream is transmitted using OFDM modulation with concatenated channel coding (Levy, 2001, pp. 3-21). Al-Askary et al. (2005) advise that OFDM with plain convolutional coding lacks the capability to adapt to variations in the fading properties of individual sub-channels, which motivates coding schemes that provide clearer, less distorted signals and reception. In the DVB-T method, the broadcaster's signals are sent from one aerial antenna to another using a signal blaster to the home receivers (White, 2007). The broadcast is transmitted as a compressed digital audio-video stream based on the MPEG-2 standard, formed by combining one or more 'Packetised Elementary Streams' (Chiariglione, 2000).

Note: In summary, the source-coded streams are multiplexed into programme streams, and one or more of these are joined to create an MPEG-2 Transport Stream that is transmitted to set-top boxes in the home. The system can accommodate six to eight MHz wide channels.

Digital Audio Broadcasting (DAB), also termed 'Eureka 147', represents the technology employed for broadcasting audio through digital radio transmission (Huff, 2001, pp. 67-78). In order to achieve the sound reproduction quality attributable to DAB, the bit rate must be high enough for the MPEG Layer 2 audio codec to provide the quality inherent in the system, as well as high enough to enable the error correction coding (digitalradiotech.co.uk, 2007). Both the DAB and DVB-T systems utilize 'orthogonal frequency division multiplexing' (OFDM) modulation, with each system able to handle 1536 sub-carriers (digitalradiotech.co.uk, 2007). DAB and DVB-T also use the QPSK signal constellation to modulate the subcarriers, transmitting 2 bits per symbol on each of the subcarriers (digitalradiotech.co.uk, 2007).
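To make the modulation concrete, here is a toy Python/NumPy sketch of QPSK on OFDM: 2 bits per symbol on each subcarrier, with 1536 subcarriers as in DAB transmission mode I. Guard intervals, channel coding, and pilot carriers are omitted, and the Gray mapping chosen here is illustrative rather than the exact mapping of either standard.

    import numpy as np

    N_SUBCARRIERS = 1536  # as used by DAB

    def ofdm_symbol(bits):
        """Map 2*N bits onto N QPSK subcarriers; return one time-domain symbol."""
        pairs = np.asarray(bits).reshape(-1, 2)
        # 2 bits -> one QPSK constellation point per subcarrier (unit energy)
        points = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)
        return np.fft.ifft(points)  # OFDM: one IFFT across all subcarriers

    rng = np.random.default_rng(0)
    tx = ofdm_symbol(rng.integers(0, 2, size=2 * N_SUBCARRIERS))
    print(tx.shape)  # (1536,) complex baseband samples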

DAB (Digital Audio Broadcasting) is particularly suited to use in multimedia transmission systems carrying sound, moving pictures, and text along with data (Levy, 2001, p. 177). As a radio frequency signal, DAB's ability to be picked up by ordinary radio receivers represents an advantage over DVB-T, whose mobile reception signal "… is significantly affected by …" the fast-changing nature of the transmission channel, making it necessary to use two antennas on the receiver along with more complex and "… elaborate signal processing for … channel tracking" (Lauterjung, 1999). And while DVB-T was developed originally for stationary reception using a roof-top directional antenna or a non-directional antenna on a portable receiver, it has been adapted for mobile reception as indicated (Lauterjung, 1999). Recent tests conducted in Germany as well as Singapore have shown that DVB-T can be used for mobile reception; however, the drawback is battery life as a result of power consumption (dvb.org, 2004).

HDTV, high-definition television, utilizes approximately ten times the number of pixels of a standard analog television set: a high-end 1920x1080 pixels, against an analog television set's 704x480 pixels (Huff, 2001, pp. 140-141).

The high resolution of HDTV requires greater bandwidth, forcing broadcast operators to make a major financial commitment to deploy the new standard (Brown and Picard, 2005, pp. 47-49). The deployment problem is that, in order to make the system work with their current infrastructure, operators would have to reduce the number of channels offered; this is a marketing and customer problem, since operators have built their competitive systems on offering a greater number of channel selections. Brown and Picard (2005, p. 336) advise: "The significance of the SDTV/HDTV issue is that, because the transmission of HDTV requires much more spectrum than SDTV, a trade-off is involved for any DTV system between a greater number of SDTV channels and a smaller number of HDTV channels (currently 4 to 6 SDTV channels can be transmitted within the amount of spectrum required for one HDTV channel)".

In addition to the foregoing, there is a lack of uniform standards: "Standardization, compatibility, interoperability and application portability are essential pillars in the erection of a successful and competitive European digital television system" (Nolan, 1997, p. 610). The National Association of Broadcasters estimates that the cost of the new equipment to carry HDTV and retain the number of stations will be between $10 and $40 million, depending on station size (Pommier, 1995). Deployment also presents a problem in that the wider TV format will be cut off on standard square-type televisions, necessitating that consumers switch to wide-screen television receivers in addition to the special HDTV receiver needed to watch high-definition broadcasts, which can be received over cable or satellite (Brown and Picard, 2005, pp. 110-115). The HD receiver being sold at £299 by UK broadcaster BSkyB, along with an added £10 for the service on top of the basic subscription charge, is another example of the inhibiting factors in deployment (O'Brien, 2006).

HDTV basically represents what Dietrich Westerkamp, worldwide director of broadcast standards at the electronics giant Thomson, the largest European manufacturer of HD satellite receivers, calls "… a chicken and egg situation" (O'Brien, 2006). This has been the case with HDTV in the United States as well as Europe, with broadcasters waiting to see enough purchasers of the new television sets before making the financial commitment to equipment changes, and consumers waiting to see stations available before making the financial commitment to new HDTV sets. The answer could come from television manufacturers, who are starting to turn out HD-compatible sets. One such example is Samsung, which has announced that two-thirds of its flat panel production will be HD compatible (O'Brien, 2006). Something will be needed to help jump-start the HDTV situation, as presently the size of the potential viewing audience is too small to justify the conversion expense, explains Rudi Kuffner, spokesperson for Germany's largest broadcaster ARD (O'Brien, 2006).

Conclusion

Since the first transmissions of face shapes by John Logie Baird in 1924 and the first television broadcast in 1926 (Horvitz, 2002, p. 101), television has come a long way. The introduction of digital television and radio broadcasting in 2000 has enhanced the viewing experience, providing a broader array of channels and better signal clarity and sound, as well as giving broadcasters the expanded marketing option of more to offer consumers in a highly competitive market. The new flat panel television sets and digital broadcasting have expanded the ways in which consumers as well as broadcasters view the market. With mobile television systems and the new digital radio channels offering playback and other features, entertainment is getting another big boost. The biggest new development, HDTV, which has been around for over four years, is set to further enhance broadcasting and viewing pleasure once the financial justifications reach the required investment levels; despite all of its problems, it represents the next quantum leap in television. Technology keeps improving the sphere of entertainment, and it is ultimately consumers who benefit.

Bibliography

Al-Askary, O., Sidiropoulos, L., Kunz, L., Vouzas, C., Nassif, C. (2005) Adaptive Coding for OFDM Based Systems using Generalized Concatenated Codes. Radio Communications Systems, Stockholm, Sweden

Bogart, L. (1956) The Age of Television: A Study of Viewing Habits and the Impact of Television on American Life. Frederick Ungar Publishing, New York, United States

Brown, A., Picard, R. (2005) Digital Terrestrial Television in Europe. Lawrence Erlbaum Associates. Mahwah, N.J., United States

Chiariglione, L. (2000) MPEG-2. Retrieved on 2 April 2007 from http://www.chiariglione.org/mpeg/standards/mpeg-2/mpeg-2.htm

digitalradiotech.co.uk (2007) Comparison of the DAB, DMB & DvB-H Systems. Retrieved on 2 April 2007 from http://www.digitalradiotech.co.uk/dvb-h_dab_dmb.htm

dvdaust.com (2007) Aspect Ratios. Retrieved on 30 March 2007 from http://www.dvdaust.com/aspect.htm

dvb.org (2004) DVB-H Handheld. Retrieved on 2 April 2007 from http://www.dvb.org/documents/white-papers/wp07.DVB-H.final.pdf

Horvitz, L. (2002) Eureka! Stories of Scientific Discovery. Wiley, New York, United States

howstuffworks.com (2007b) How Digital Television Works. Retrieved on 31 March 2007 from http://www.howstuffworks.com/dtv3.htm

howstuffworks.com (2007a) Understanding Analog TV. Retrieved on 30 March 2007 from http://electronics.howstuffworks.com/dtv1.htm

Huff, A. (2001) Regulating the Future: Broadcasting Technology and Governmental Control. Greenwood Press, Westport, CT, United States

Kiiski, A. (2004) Mobile Virtual Network Operators. Research Seminar on Telecommunications Business, Helsinki University of Technology

Levy, D. (2001) Europe’s Digital Revolution: Broadcasting Regulation, the EU and Nation State. Routledge, London, United Kingdom

Lawrence Berkeley National Lab (2004) Electromagnetic Spectrum. Retrieved on 2 April 2007 from http://www.lbl.gov/MicroWorlds/ALSTool/EMSpec/EMSpec2.html

Lauterjung, J. (1999) An enhanced testbed for mobile DVB-T receivers. Retrieved on 2 April 2007 from http://www.rohde-schwarz.com/www/dev_center.nsf/frameset?OpenAgent&website=com&content=/www/dev_center.nsf/html/artikeldvb-t

Metallinos, N. (1996) Television Aesthetics: Perceptual, Cognitive, and Compositional Bases. Lawrence Erlbaum Associates. Mahwah, New Jersey, United States

Montgomery, H., Powell, J. (1985) International Broadcasting by Satellite: Issues of Regulation, Barriers to Communication. Quorum Books, Westport, CT., United States

Nolan, D. (1997) Bottlenecks in pay TV: Impact on market development in Europe. Vol. 21, No. 7. Telecommunications Policy

O’Brien (2006) Broadcasters shrink from taking HDTV leap. 30 August 2006

PBS.org (2006b) Electronic TV. Retrieved on 30 March 2007 from http://www.pbs.org/opb/crashcourse/tv_grows_up/electronictv.html

PBS.org (2006a) Mechanical TV. Retrieved on 30 March 2007 from http://www.pbs.org/opb/crashcourse/tv_grows_up/mechanicaltv.html

PBS.org (2006b) Widescreen. Retrieved on 2 April 2007 from http://www.pbs.org/opb/crashcourse/aspect_ratio/widescreen.html

Pommier, G. (1995) High Definition Television (HDTV). Retrieved on 3 April 2007 from http://gabriel.franciscan.edu/com326/gpommier.html

Scott, R. (1998) Human Resource Management in the Electronic Media. Quorum Books, Westport, CT, United States

University of Toledo (2005) Television. Retrieved on 2 April 2007 from http://www.physics.utoledo.edu/~lsa/_color/31_tv.htm

Weber, J. (1961) General Relativity and Gravitational Waves. Interscience Publishers, New York, United States

White, D. (2007) What is DVB-T? Retrieved on 1 April 2007 from http://www.wisegeek.com/what-is-dvb-t.htm

Winston, B. (1998) Media Technology and Society: A History From the Telegraph to the Internet. Routledge, London, United Kingdom

Developing Sensor Technology

Abstract

The need for sensor devices has been growing as new applications develop in several technological fields. This paper reviews the current state of the art of the sensor technology used in modern electronic nose designs. The sensing chamber of the E-Nose is to be upgraded mainly to reduce nuisance alarms and to improve the reliability of detecting smoke caused by fire and non-fire particles. The paper gives a brief state of the art of the different fire and non-fire particles that emit smoke, the various chemical gas sensors used to detect smoke, and a fire detection algorithm.

Keywords: sensors; smoke; electronic noses; fire detection algorithm; fire particles; non-fire particles

Introduction

The concept of an electronic nose may appear to be an up-to-date technology, but scientists first developed an artificial nose in the 1930s, using sensors to measure levels of ultraviolet light emitted by mercury. Currently these devices are employed in numerous technological fields for various applications.

Presently these devices are used in modern fire detection frameworks for the simultaneous estimation of carbon monoxide (CO), carbon dioxide (CO2), and smoke. Measuring the concentrations of CO and CO2 in smoke offers a path to reducing the frequency of nuisance alarms and so increasing the reliability of smoke detectors. The sensors incorporated in this fire detection system, together with a fire detection algorithm, detect smoke caused by fire or non-fire particles and raise the alarm accordingly.

Previous fire detection systems used sensors for measuring temperature, smoke, and combustion products, including oxygen (O2), carbon monoxide (CO), carbon dioxide (CO2), water vapor (H2O), hydrogen cyanide (HCN), acetylene (C2H2), and nitric oxide (NO), but they did not give reliable results. Some systems analyzed smoke using Gas Chromatography-Mass Spectrometry (GC-MS) along with Fourier Transform Infrared (FTIR) Spectroscopy [1].

Advances in fire detection systems are being sought to decrease detection time and the frequency of unnecessary alarms. Most research toward these goals has been done with multi-sensor detectors, because smoke detectors with a single sensor have trouble discriminating smoke produced by fire sources from smoke produced by non-fire sources. The 95% rate of unnecessary alarms reported for smoke detectors in the U.S. during the 1980s is due to that limitation.

Section 1 briefly introduces the fire detection system incorporated in an electronic nose, and Section 2 describes the different gas sensors that detect smoke. Section 3 gives a brief description of the fire and non-fire particles, and Section 4 explains how the sensory system is designed in an E-Nose to prevent fire accidents. Finally, Section 5 concludes.

Chemical Gas Sensors

The environment needs to be monitored [2] from time to time, as many accidents have taken place for lack of such monitoring. In order to control industrial processes, chemical sensing technologies have emerged that mainly emphasize:

Control of combustion processes (oxygen).

Detection of flammable gases, to protect against fire and explosion.

Detection of toxic gases, for environmental monitoring.

Solid Electrolyte Sensor

The solid electrolyte (SE) sensor [3][4] is based on the principle of electrochemical gas detection, which is used to detect chemicals or gases that can be oxidized or reduced in chemical reactions.

It mainly contains three electrodes:

A sensing or working electrode, which reacts to the target gas when it is present by either oxidizing or reducing it.

A counter electrode, which provides a corresponding reverse reaction to the one occurring at the sensing electrode so that a net current flows.

A reference electrode, which stays unaffected by the chemical reactions occurring on the sensing and counter electrodes and provides a stable potential against which measurements can be made.

Figure 1. Solid Electrolyte Sensor

SE sensors (Figure 1) are used in millions of vehicles to monitor exhaust gases and minimize toxic emissions.
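In amperometric use, the sensing current is roughly proportional to gas concentration. A minimal conversion sketch follows; the zero offset and sensitivity figures are hypothetical calibration constants, not values from the cited sensors.

    def gas_ppm(current_ua, zero_ua, sensitivity_na_per_ppm):
        """Convert electrode current (uA) to gas concentration (ppm)."""
        return (current_ua - zero_ua) * 1000.0 / sensitivity_na_per_ppm

    # e.g. a CO cell with 50 nA/ppm sensitivity reading 2.6 uA over a 0.1 uA zero
    print(gas_ppm(2.6, 0.1, 50.0))  # -> 50.0 ppm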

Thermal-Chemical Sensors

Thermal-chemical sensors [2] work on the principle that there is a change in temperature (ΔT) when heat energy (ΔEh) is released or absorbed. The pellistor is the most common thermal-chemical sensor (other thermal sensors are based either on thermistors or on thermopiles). They are used for monitoring combustible gases.

Figure 2.Thermal-Chemical Sensors

Gravimetric Chemical Sensors

They are also known as piezoelectric sensors [5]. Two types are used for gas sensing: the Surface Acoustic Wave (SAW) device (Figure 3) and the Quartz Crystal Microbalance (QCM) (Figure 4).

Figure 3. SAW Device

Figure 4. Quartz Crystal Balance

The SAW device produces a wave that travels along the surface of the sensor, while the QCM produces a wave that travels through the bulk of the sensor, as shown in Figure 3. Both work on the principle that a change in the mass of the piezoelectric sensor's coating, caused by gas absorption on exposure to a vapor, results in a change in the resonant frequency.
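For the QCM, this mass-to-frequency relationship is commonly quantified by the Sauerbrey relation; the paper does not give it, but it is the standard result for quartz microbalances and makes the sensing principle concrete.

    def sauerbrey_shift(f0_hz, added_mass_g, area_cm2):
        """Sauerbrey relation for an AT-cut quartz crystal:
        df = -2.26e-6 * f0^2 * (dm / A), with f0 in Hz, dm in g, A in cm^2."""
        return -2.26e-6 * f0_hz ** 2 * added_mass_g / area_cm2

    # 10 MHz crystal, 1 microgram adsorbed over 1 cm^2 -> roughly -226 Hz
    print(sauerbrey_shift(10e6, 1e-6, 1.0))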

Conducting Polymer Sensor:

Conducting polymers [2] are plastics that change their resistance as they adsorb or desorb specific chemicals (Figure 5). The adsorption of these chemicals depends mainly on their polarity (charge) and their molecular structure (shape and size).

Figure 5. Conducting Polymer Sensor

Due to their high sensitivity, low price, and rapid response time at room temperature, conducting polymer sensors are well suited to chemical sensing.

IR Spectroscopy Sensors:

Spectroscopic sensors [2] determine the concentrations of several gases at a time. They work on the principle that gases absorb infrared radiation at specific wavelengths determined by their natural molecular vibrations. Systems with narrow-band interference filters or laser light sources tuned to a specific gas (such as CO2) are termed monochromatic systems.

Figure 6. IR Spectroscopy Sensors

In Figure 6 above, some of the CO2 present in the sample gas absorbs infrared light at a wavelength of 4.3 µm as the light is periodically emitted from the source, and the remainder reaches the infrared detector. These sensors are most suitable for CO2 and show low cross-sensitivity to other gases; they are moderate in response speed and fairly good in accuracy and linearity, but are cumbersome and costly.
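The underlying relationship is the Beer-Lambert law: the fraction of light transmitted falls exponentially with concentration and path length. A small sketch follows; the absorptivity value is a placeholder for illustration, not a real CO2 line constant.

    import math

    def transmitted_fraction(absorptivity, concentration, path_cm):
        """Beer-Lambert law: I/I0 = exp(-a * c * L), the basis of NDIR sensing."""
        return math.exp(-absorptivity * concentration * path_cm)

    # Higher concentration -> less light reaches the detector at 4.3 um
    print(transmitted_fraction(0.02, 1.0, 10.0))  # ~0.82
    print(transmitted_fraction(0.02, 5.0, 10.0))  # ~0.37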

Optical Fiber Sensors

The optical fiber used in these sensors [6] is coated with a fluorescent dye. On contact with the vapor, polarity variations in the fluorescent dye change the dye's optical properties, such as a wavelength shift in fluorescence and changes in intensity and spectrum. These optical changes (Figure 7) are used as the response mechanism for the gas.

Figure 7. Optical Fiber Sensor

Optical gas sensors are mostly used to detect concentrations of ammonia (NH3). They have very fast response times, less than 10 microseconds for sampling and analysis; they are compact and lightweight, can be multiplexed on a single fiber network, are immune to electromagnetic interference (EMI), and can operate in high-radiation areas.

MOSFET Sensors:

Metal oxide semiconductor field-effect transistor (MOSFET) sensors [4, 7] are based on a change of electrostatic potential. They comprise three layers: a catalytic metal gate (typically palladium, platinum, iridium, or rhodium), a silicon oxide insulator, and a silicon semiconductor, as in Figure 8. When polar compounds interact with the metal gate, the current flowing through the sensor is modified.

Figure 8. MOSFET Sensor [7]

With a thick metal layer, molecules such as ammonia or carbon monoxide cannot be detected because no hydrogen atoms are released, but it is possible to detect them when the metal gate is thinned. These MOSFET (or MOS) sensors are very robust but have relatively low sensitivity.

E-Nose as Fire Detection System

An electronic or artificial nose can sense different types of chemicals and even distinguish particles, which is useful not only for identifying individuals but also for detecting fire. Such devices work on the principle that smoke is made up of different chemical compounds, and they contain dozens of sensors that sense the different compounds found in the air. Some of the materials whose smoke can lead to flames are discussed below.

Smoke

It is a collection of airborne solid and liquid particulates and gases emitted when a material undergoes combustion or pyrolysis [8]. It is commonly an unwanted by-product of fires (including candles, stoves, fireplaces, and oil lamps), but it may also be used deliberately: for fumigation (pest control), for communication over long distances (smoke signals to transmit news or gather people in one place), for offensive and defensive purposes in the military (smoke screens), in cooking, and in smoking (marijuana, tobacco, etc.).

Heptane:

It is a non-polar solvent and a minor component of gasoline [9], with chemical formula H3C(CH2)5CH3, or C7H16. It is a colorless, highly hazardous liquid with a petrol-like odor. The structure of heptane is shown in Figure 9.

Figure 9. Heptane Structure

It is commercially available as mixed isomers for use in paints and coatings, and is mainly applied in pharmaceutical manufacturing laboratories and in research and development. It has a melting point of −91.0 to −90.1 °C (−131.7 to −130.3 °F; 182.2 to 183.0 K).

Toluene

It is an aromatic hydrocarbon (its IUPAC systematic name is methylbenzene) [10], broadly used as a solvent and as an industrial feedstock. It is a water-insoluble clear liquid with the typical smell of paint thinners. In some cases toluene is also used as an inhalant drug for its intoxicating properties; however, inhaling toluene can cause serious neurological damage.

Figure 10. Toluene Structure

Toluene (Figure 10) is principally used as a precursor to benzene. The second-ranked application is its disproportionation to a mixture of benzene and xylene.

Methanol

Methanol is the simplest alcohol: a light, volatile, colorless, flammable liquid with a distinctive odor similar to, but slightly sweeter than, that of drinking alcohol (ethanol) [11]. It is otherwise referred to as methyl alcohol, wood alcohol, wood naphtha, or wood spirits, as it was produced as a by-product of the destructive distillation of wood. Its formula is CH3OH (often abbreviated MeOH), with the structure shown in Figure 11.

Figure 11. Structure of Methanol

It is also used for producing biodiesel via the transesterification reaction. At room temperature it is a polar liquid, and it is used as an antifreeze, solvent, fuel, and denaturant for ethanol. Methanol is created naturally in the anaerobic metabolism of many varieties of bacteria and is normally present in small amounts in the environment.

HDPE Beads

High Density Polyethylene (HDPE) beads [1] are a white thermoplastic base resin with a wax-like appearance; HDPE is also used as insulation for electric wire.

Figure 12. HDPE Beads

HDPE beads (Figure 12) are used for extrusion of packaging film, rope, woven bags, fishing nets, and water pipes; for injection moulding of low-end commodities, housings, non-load-bearing components, plastic boxes, and turnover boxes; and for extrusion blow moulding of containers, hollow products, and bottles. Its Society of the Plastics Industry resin ID code is 2.

Mixed Plastics

Mixed plastic [12], shown in Figure 13, is a term that covers all non-container plastic packaging sourced from household waste, and it includes rigid and flexible plastic items of various polymer types and colors. It excludes plastic bottles and non-packaging items.

Figure 13. Mixed Plastics

Dry Ice:

Figure 14 shows dry ice, the solid form of carbon dioxide, used fundamentally as a cooling agent. It sublimes at −78.5 °C (−109.3 °F) at Earth atmospheric pressure. This extreme cold makes the solid perilous to handle without protection, owing to burns caused by freezing (frostbite). It is also referred to as "card ice" [13].

Figure 14. Dry Ice

Fire Detection Mechanism in E-Nose

A novel technique should be employed in the E-Nose to respond immediately whenever a fire accident takes place [14]. The main objective of this mechanism is to reduce nuisance alarms. Several experiments were conducted on various materials that cause smoke, observing how the materials burn once ignited. Table 1 indicates the ignition method and fire type (how the material burns) for each material that causes fire.

Every E-Nose contains a sensory system and a pattern recognition system [15], and we need to enhance the sensory system so that it can be used as the fire detection system. In the sensory system, one of the gas sensors described above is selected to detect a particular material's smoke, and a classification algorithm then determines whether the smoke comes from fire or non-fire particles.

Table 1: List of Particles That Cause Fire

Sl. No | Material       | Ignition Method | Fire Type
1      | Heptane        | Lighter         | Flaming
2      | Toluene        | Lighter         | Flaming
3      | Methanol       | Lighter         | Flaming
4      | HDPE Beads     | Lighter         | Smouldering
5      | Mixed Plastics | Coil + Pilot    | Flaming
6      | Dry Ice        | N/A             | N/A

Figure 15 below shows the internal design of the sensory system to be deployed in the E-Nose to reduce nuisance alarms and to react appropriately to the material causing a fire.

Figure. 15: Mechanism of Fire Detection System

Based on the type of these chemical compounds, the system can give information to its users about fire and non-fire particles [16]. By classifying fire and non-fire particles, the system performs the appropriate action, ringing the alarm and enabling a fire extinguisher to limit the spread of the fire; a sketch of such a classification rule follows. Table 2 then gives a brief description of the distinct fire extinguishers available on the market.
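The paper leaves the exact classifier to future work; purely as an illustrative sketch, a rule-based discriminator over CO/CO2 rise rates and smoke obscuration might look like the following. All thresholds and signal names here are invented for illustration, not the paper's actual algorithm.

    def classify(co_ppm_rate, co2_ppm_rate, smoke_obscuration):
        """Label a smoke event as flaming fire, smouldering fire, or nuisance."""
        if co2_ppm_rate > 50 and co_ppm_rate > 5:
            return "flaming fire"      # fast CO/CO2 rise accompanying smoke
        if co_ppm_rate > 5 and smoke_obscuration > 0.05:
            return "smouldering fire"  # CO-rich, slow-burning source
        return "nuisance"              # e.g. dry-ice fog: smoke-like, no CO rise

    def respond(label):
        if label.endswith("fire"):
            return "ring alarm and enable extinguisher"
        return "suppress alarm"

    print(respond(classify(co_ppm_rate=8, co2_ppm_rate=120, smoke_obscuration=0.1)))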

Table 2: Types of Fire Extinguishers

Extinguisher               | Protection Against                                     | Used For
CO2 fire extinguisher      | Class B fires - petrol, oil, paints, fats              | Live electrical equipment
Water fire extinguisher    | Class A fires - combustible materials like wood, paper | Common household purposes
Powder fire extinguisher   | Class A, B, C fires                                    | General purpose
Foam fire extinguisher     | Class A, B fires - liquids or materials that liquefy   | Shopping malls
Wet chemical extinguishers | Fires involving cooking oil or fats                    | Professional kitchens
Eco fire extinguishers     | Fires - water and foam                                 | Environment

Each extinguisher is rated against particular classes of fire; Table 3 below details the contents to which each class applies.

Table 3: Classes of Fire

Class Name | Contents
Class IA   | Diethyl ether, ethylene oxide, some light crude oils
Class IB   | Motor and aviation gasoline, toluene, lacquers, lacquer thinner
Class IC   | Xylene, some paints, some solvent-based cements
Class II   | Diesel fuel, paint thinner
Class IIIA | Home heating oil
Class IIIB | Cooking oils, lubricating oils, motor oils

Conclusion and Future Work

Presently many fire accidents take place, and many alarm activations are nuisance alarms, i.e., the sensors that detect smoke ring the alarm even though it is not necessary. To overcome this problem, this paper has described a technology that holds the potential to give numerous benefits with regard to fire accidents: reducing nuisance alarms and increasing the reliability of the sensors.

This mechanism not only reduces false alarms but also prevents danger by enabling the built-in extinguisher whenever a fire particle is sensed. In future work, we intend to develop a precise classifier algorithm to distinguish the smoke of fire particles from that of non-fire particles.

Design Limitations for Speakers

Introduction

There are many factors which determine the characteristics of a loudspeaker; to produce a successful design a careful balance of many factors must be achieved. Most of the challenges and considerations of loudspeaker design stem from the inherent limitations of the drivers themselves.

Desirable Characteristics & Real-World Implementation

For a coherent approach to loudspeaker design to be established, one may elucidate the problem by considering two main sets of criteria: the desired characteristics of the finished system, and the limitations which impinge on the achievement of those characteristics. The key desirable characteristics for the finished system are listed below.

Reproduction of all frequencies across input range
Flat frequency response across input range
Adequate Damping
Good Efficiency
Adequate SPL or perceived loudness
Minimal distortion
Minimal noise

Many of the above considerations are quite obvious. In terms of frequency response it is desired that the response of the system as a whole should be as flat as possible, since to truthfully reproduce a signal all frequencies across the input range should be represented equally. Weems (2000, p.14) notes that "smoothness of response is more important than range". Naturally noise and distortion are undesirable for accurate signal reproduction. Damping is an important concern; when a signal is no longer applied to a loudspeaker there will be a natural tendency for the cone to continue to move under its own inertia. Thus damping must be employed in order to ensure that the SPL generated by such movement is sufficiently low and relatively inaudible. Rossing (1990, p.31) refers to damping as "loss of energy of a vibrator, usually through friction". This is a simplification, however: the back EMF generated by the driver, and the varying impedance that the crossover/driver network presents to the amplifier, also play an important role. As Weems (2000, p.17) rightly says, "there are two types of damping, mechanical and electrical".

Another quite obvious consideration is that the loudspeaker must indeed be loud enough. This is related to the issue of efficiency, since the more inefficient the speaker, the more power will be needed to drive it. The choice of enclosure design plays quite a significant role here, as will be seen shortly.

In terms of limitations, there are several immediate problems posed by the nature of the drivers themselves that must be addressed. Firstly, the sound from the back of the speaker cone is 180 degrees out of phase with the sound from the front. This phase separation means the sounds will cancel each other at lower frequencies, or interfere with each other in a more complex manner at high frequencies; clearly neither is desirable.

In some senses it would be ideal to mount the drivers in a wall with a large room behind, the so-called “infinite baffle”, having the sound from the rear of the cone dissipate in a large separate space, being thus unable to interfere with the sound produced by the front. In reality this is impractical; however some provision must be made to isolate sound from the rear of the cone. To this end, some sort of enclosure must be made for the drivers, yet this presents a new set of considerations.

Without an enclosure, a loudspeaker is very inefficient when the sound wavelengths to be produced are longer than the speaker diameter. This results in an inadequate bass response; for an 8 inch speaker this equates to anything below around 1700 Hz[1]. So the infinite baffle is terribly inefficient in terms of the SPL produced at lower frequencies. Furthermore, the free cone resonance of the speaker works against the flat frequency response that is desired; input frequencies close to the resonant frequency will be represented too forcefully.

Another real-world complication is the fact that for high-fidelity applications, no one loudspeaker will be able to handle the entire range of input frequencies; “the requirements for low frequency sound are the opposite to those for high frequencies” (Weems, 2000, p.13). Higher frequencies require less power to be reproduced, but the driver must respond more quickly, whereas low frequencies require a larger driver and hence greater power to be effectively realised.

In view of the above, multiple drivers must be used, with each producing a certain frequency range of the input signal; at the very least a woofer and tweeter are required. In order to deliver only the appropriate frequencies to each driver, a device known as a crossover must be implemented. This can take the form of passive filter circuits within the speaker itself, or active circuitry that filters the signal prior to amplification. In the latter case, multiple amplifiers are needed, making this a more costly approach. The fundamentals of crossover design will be dealt with in a separate document and are hence not dealt with in detail here.

Enclosure Design

Faced with the reality that an enclosure is in almost all cases a practical necessity, perhaps the most important aspect of speaker design is the design of the enclosure itself. The first step in producing a successful design is to decide upon the drivers to be used and use this as a basis for choosing a cabinet design, or to decide upon the desired cabinet type first and allow this to inform the choice of driver. In general, most of the design work with regard to the cabinet is focused firmly toward the woofer, since the enclosure design is most critical with regard to midrange/bass performance. In typical 2-way designs, the tweeter is mounted in the same box as the woofer, but it is the latter which largely defines the cabinet dimensions.

In the past the design of enclosures was often something of a hit-or-miss affair, however the research of Thiele (1971) and Small (1973) has led to a much more organised design process. Most transducers today are accompanied by a comprehensive datasheet of Thiele-Small parameters, which allow most of the guess work to be taken out of enclosure design.

Ignoring more exotic enclosure designs, the first question is whether the enclosure should be ported or sealed (it should be noted that in reality even “sealed” enclosures are very slightly open or “leaky” in order to allow the internal pressure to equalise with the surroundings). If a driver has already been chosen, this can be determined from the Efficiency Bandwidth Product, which is defined as:

EBP = Fs / Qes    (1)

where Fs is the free-air resonance of the driver and Qes the electrical Q, or damping. In general, an EBP of 50 or less indicates a sealed box, whilst an EBP above 90 suggests a ported enclosure (Dickason, 2000). In between, the choice of enclosure lies more or less with the designer, and a driver that falls in the middle should perform acceptably in either a closed or a ported situation.
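A one-line helper makes the rule of thumb explicit; this is only a sketch, with the 50/90 break points following Dickason as quoted above and the sample driver parameters invented for illustration.

    def suggest_enclosure(fs_hz, qes):
        """EBP = Fs / Qes; <= 50 suggests sealed, >= 90 suggests ported."""
        ebp = fs_hz / qes
        if ebp <= 50:
            return ebp, "sealed"
        if ebp >= 90:
            return ebp, "ported"
        return ebp, "either"

    print(suggest_enclosure(25.0, 0.35))  # -> (71.4..., 'either')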

So, what are the advantages and disadvantages of sealed vs ported enclosures? A sealed enclosure is very simple to build, whilst a ported enclosure requires some degree of tuning to ensure the port is matched correctly to the driver – in the ported or “bass reflex” design a tube extends into the cabinet allowing some air to escape from inside; if correctly tuned the air that leaves the port is delayed in phase by 180 degrees, hence reinforcing the sound from the front of the cone.

With a sealed enclosure the air inside acts as an approximately linear spring for the transducer cone and assuming the driver has a low Fs, a healthy bass extension with a gentle roll-off of -12dB per octave can be expected. The disadvantages are several; the enclosure may need to be quite large to achieve an acceptable Qtc (the damping value for a sealed system) and efficiency is poor. Further, with a sealed enclosure the driver reaches maximum excursion at resonance, which translates to greater distortion. Therefore a driver for use in a sealed enclosure requires quite a large linear throw to perform well. By contrast, in correctly tuned ported enclosures the driver is maximally damped at resonance, so a large linear throw is not critical and distortion is lower as a result. The basic methods of sealed and ported cabinet design shall now be explained.

Sealed Enclosure Design

To design a sealed enclosure the basic methodology is quite straightforward; the essential challenge is simply to find the optimum volume of the cabinet for the chosen driver. First one must decide on the value of the damping constant Qtc; the optimum value is 0.707 since it gives the lowest -3 dB break frequency and hence the best potential for bass extension, as well as good transient response. If the enclosure size is too large at this optimum value then Qtc may be increased, resulting in a trade-off between bass performance, transient response and enclosure volume. However, the more Qtc is increased, the more boomy and muddy the sound will become.

Depending on the application, the enclosure size may not be important; in this case an optimum Qtc is encouraged. Once Qtc is known, the constant α may be calculated using the formula below, where Qts is the total Q factor of the driver at resonance (this may be obtained from the manufacturer's data sheet).

α = (Qtc / Qts)² − 1    (2)

Having calculated α, the correct enclosure volume Vb is trivial to determine using the relationship below. Note that Vas is the equivalent volume of air that has the same acoustic compliance as the driver; again this may be obtained from the datasheet or experimentally. Note from equation (2) that a lower Qts will result in a higher α, and hence a smaller enclosure: for two transducers with equivalent acoustic compliance, the one with the lower Qts will allow a smaller enclosure.

Vb = Vas / α    (3)

Assuming the required box volume is acceptable, one may then also calculate the resonant frequency of the system (fs is the free-air resonant frequency of the driver):

fc = fs × √(α + 1)    (4)

Once fc is known, the -3 dB break frequency may also be found:

f3 = fc × √[ A + √(A² + 1) ],  where A = (1/Qtc² − 2) / 2    (5)

Recall that below this frequency the roll-off is -12 dB per octave, so one can gain a fairly good impression of the bass performance to be expected. Naturally it is desirable for f3 to be low for maximum extension into the bass region, hence a low fs is a characteristic one should look for when choosing a driver for sealed enclosure use. If it is felt that the break frequency is too high, then a different driver must be selected for the sealed implementation.
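Pulling equations (2) to (5) together, the sealed-box workflow can be sketched in a few lines of Python. This is only a sketch of the method above; the sample driver parameters are invented for illustration.

    import math

    def sealed_box(qtc, qts, vas_litres, fs_hz):
        """Sealed enclosure design from Thiele-Small parameters, eqs. (2)-(5)."""
        alpha = (qtc / qts) ** 2 - 1                     # eq. (2)
        vb = vas_litres / alpha                          # eq. (3): box volume
        fc = fs_hz * math.sqrt(alpha + 1)                # eq. (4): system resonance
        a = (1.0 / qtc ** 2 - 2.0) / 2.0
        f3 = fc * math.sqrt(a + math.sqrt(a * a + 1.0))  # eq. (5): -3 dB point
        return vb, fc, f3

    # Hypothetical driver: Qts = 0.4, Vas = 60 L, fs = 28 Hz, target Qtc = 0.707
    vb, fc, f3 = sealed_box(0.707, 0.4, 60.0, 28.0)
    print(f"Vb = {vb:.1f} L, fc = {fc:.1f} Hz, f3 = {f3:.1f} Hz")

Note that at the optimum Qtc of 0.707, f3 equals fc, as the text above implies.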

Ported Enclosure Design

For ported cabinet design, the equations are more complex and it is generally not practical to attempt to design such an enclosure by hand. Instead there are a number of free and commercial software calculators available that simplify the process. One good freeware calculator is AJ Vented Designer[2]. Using such a program enables the designer to quickly ascertain what size enclosure and port is required for a given driver and whether this is feasible – for certain combinations the port may not physically fit within the enclosure for example. In addition, the program also plots the theoretical frequency response of the design, which simplifies matters greatly.

Acoustic Damping and Avoiding Resonance

In addition to the type of enclosure and the calculation of the required volume, diameter and size of ports (if ported), there are several other design considerations. Firstly, standing waves within the enclosure must be minimised. Thus enclosures are often stuffed with fibreglass, long-fibre wool or polyurethane foam.

In addition to standing waves and the resonance of the enclosure, one must also bear in mind the possibility of dimensional resonances with sealed designs. To avoid this it is prudent to ensure that length, width and height of the enclosure are all different and to not centrally mount the drivers.

The choice of cabinet material and thickness are also factors that require careful consideration; in general wood is the most appropriate material and a thicker structure is likely to be more rugged and be less susceptible to undesirable vibration. The structure should also be isolated from the floor since vibrations passed to a floor (especially a wooden floor) can cause the floor to vibrate which will muddy or colour the sound. Spikes or stands are commonly used to achieve this.

Conclusion

There are many factors that affect speaker design but perhaps the most important is that of the enclosure itself. More exotic enclosures such as band-pass and transmission line configurations are beyond the scope of this document, however it should be noted that there are many different approaches beyond the common sealed or ported methodologies. As with any engineering problem, successful speaker design requires a careful balance of many often opposing factors to be reached.

Sources

Borwick, John. (2001). Loudspeaker and Headphone Handbook, Focal Press.

Dickason, V. (1995). The Loudspeaker Design Cookbook, Audio Amateur Publications.

Rosenthal, M. (1979). How to select and use loudspeakers and enclosures, SAMS.

Rossing, T. (1990). The Science of Sound, Addison-Wesley.

Weems, D. (2000). Great sound stereo speaker manual, McGraw-Hill.


Dangers of the Internet | Essay

Abstract

This essay presents a critical debate on whether the Internet is as dangerous as the physical world. First, the unique dangers posed by the Internet are outlined. This is followed by an examination of some of the major threats to safety that are present in the physical world but not in the virtual world. In the conclusion, the report also looks into how the virtual world might shape in the future.

Is the World of Networked Computing as Dangerous as the Physical World?
Introduction

In cyberspace, no one hears your screams.

(Merkow and Breithaupt, 2000)

Modern society depends on the technology of networked computing more than ever. Whether it is the Internet, the World Wide Web (WWW), or other less well-known networks, people around the world depend on it for multifarious reasons, from work and entertainment to essentials of life such as medical life support. Networked computing offers easy access and a large degree of anonymity; while this presents us with unique opportunities, it also presents us with unique dangers. In light of the increasing use of and even dependence on networked computing, it is pertinent to examine the social, physical and ethical dangers presented by it. This essay critically debates the issue of whether the world of networked computing is as dangerous as the physical world.

The Dangers on the Internet
Preying by Paedophiles

One of the most disturbing crimes on the Internet today is 'grooming'. Child grooming is an act in which a paedophile befriends a child, or forms an intimate relationship, in order to lower the child's sexual inhibitions. Grooming typically begins in chat rooms designed for children and teenagers, and sometimes through emails, where an adult poses as a teenager, but it often moves on to instant messaging services so that the paedophile can talk the victim into sending images and even using a webcam. Research conducted by the Cyberspace Research Unit at the University of Central Lancashire states that "another of the frequent topics concerned on-line grooming and in particular, ways in which to avoid detection" (O'Connell, 2003). While this statement raises the concern that paedophiles may be able to escape without notice, the report goes on to say, "Throughout each of the stages there are clear and easily identifiable differences in the patterns of behaviour of the individuals." The stages referred to here are the 'friendship forming stage', where the paedophile just spends time getting to know the child; the 'relationship forming stage', where the paedophile starts to ask questions about things such as school and home life; the 'risk assessment stage', where the paedophile asks the child questions such as who else uses the computer; the 'exclusivity stage', where the victim is encouraged to trust the paedophile; and the 'sexual stage', where the paedophile asks the child about previous intimate experiences.

Bullying and Other Negative Electronic Relationships

The virtual world is home to some seriously negative and destructive electronic relationships. Cyber bullying, one of the more common ones, is mainly targeted at school pupils, in addition to actual physical and verbal bullying. Carnell (2007) points to evidence that many pupils are being targeted in their own homes, by phone texts, silent calls, instant messenger, and abusive websites and forums, some set up with the specific intention of causing humiliation and embarrassment. This shows the severity of cyber bullying in society today.

Griffiths (1998) offers the following explanation. The Internet is easy to access from home or work. It is becoming quite affordable and has always offered anonymity. For some people it offers an emotional and mental escape from real life, and this is especially true for individuals who are shy or feel trapped in unhappy relationships. It is also true for individuals who work long hours and have little opportunity for a social life. Electronic (or Internet) relationships started when chat rooms were introduced and really boomed after the creation of instant messaging. A person can enter a chat room, use an alias, and talk to other members without revealing their true identity. However, this raises an important question: if you can do all that without revealing your true identity, can you really trust the person you are talking to? Can you be certain that they are being honest with you? Some say that it is not real and therefore they do not really worry about it, while others suggest that Internet relationships have a way of tapping into deep feelings and it is easier to get hurt. Katz and Rice (2002, p. 286) suggest, "students are meeting and "dating" on the internet…they even have monogamous relationships this way, telling others who might ask that they will not go out with them because they are "dating" someone." Various studies suggest that it is increasingly common for young people to meet and date people using the Internet, and that it is becoming more widely accepted as a social meeting point. This, however, raises concerns about why people are choosing to use the Internet for this purpose. Many people feel more comfortable talking about feelings over instant messaging, and this is especially true of shy people or people who feel trapped in an offline relationship.

Addictions

The Internet also has the notoriety of helping to create unhealthy addictions. The majority of UK bookmakers now run websites on which people can make exactly the same bets they would in the betting shop, but from the comfort of their own home. The rate at which the online gambling industry is being commercialised is astonishing: from 2005 to 2006 the sector became the fifth largest advertiser online, jumping to 2.5 billion from 911 million ads in the last year (Schepp, 2002), and this is without counting TV, magazine and radio advertising. As a result, the majority of people in society now see online gambling as more acceptable than in recent years. Besides the increased risk of fraud on the Internet, online gambling also poses the serious problem of an easier route to addiction: it is far easier to sit in front of a computer and gamble than to walk to the nearest betting shop in the cold winter to place a bet. Gambling is, however, just one of the addictions people are vulnerable to online. Mitchell (2000) uses the term ‘Internet addiction’ to indicate the spectrum of addictions that one is susceptible to on the Internet. He states that although there is some disagreement about whether Internet addiction is a real diagnosis, compulsive Internet use has psychological dangers, and reports that such behaviour can result in users suffering withdrawal symptoms, depression, social phobia, impulse control disorder, attention deficit disorder, and so on.

Viruses and Hacking

In 2000, the number of worldwide email mailboxes was put at 505 million, and this was expected to increase to 1.2 billion by 2005 (Interactive Data Corporation, 2001). Schofield (2001) points out that more than 100 million people use instant messaging (IM) programs on the net, and a growing number among them also use them to transfer files. These numbers are obviously growing, but they show that online communication is becoming a much more widely used method of communication. Online communication such as email and instant messaging does not come without problems. Hindocha (2003) states that instant messengers can transfer viruses and other malicious software, as they provide the ability to transfer text as well as files. Viruses and malicious software can be transferred to the recipient’s computer without the knowledge of the user, which makes them very dangerous. As the use of online communications becomes more widespread, it is seen as an opportunity for people to gain access to the files on a computer. Hindocha (2003) gives the example of hackers using instant messaging to gain unauthorised access to computers, bypassing desktop and perimeter firewall implementations. This is a real concern for most users, especially as instant messaging and email clients are ‘trusted’ software; for a home user, the risks are to personal information stored on the computer, such as Internet banking security details and identifying information that could be used in identity theft. Such software is also often used in businesses, in which case extensive records of financial information are vulnerable. Hindocha (2003) goes on to say about instant messaging systems, “finding victims doesn’t require scanning unknown IP addresses, but rather simply selecting from an updated directory of buddy lists.” This raises serious concerns.

Theft and Fraud

Electronic commerce faces the major threats of theft and fraud. Online theft commonly occurs in the form of identity theft and, less commonly, outright theft, for example by unauthorised access to bank accounts. Camp (2000) points out that while it may seem a big leap to exchange a bill of paper money for machine-readable data streams, the value bound to the paper abstraction of wealth is simply a reflection of trust in the method of abstraction, widely shared and built over centuries. Money on the Internet is simply a different abstraction of wealth, and has similar issues of trust and risk to traditional money, together with the additional dangers posed by the virtual nature of the environment. Because all communication on the Internet is vulnerable to unauthorised access, it is relatively easy to commit fraud, and where legislation fails to deter, technology on its own offers little more. Credit card fraud and theft, electronic banking theft and the like are some of the more common crimes committed online involving money.

What Makes It Safer Than The Physical World?
Safe from Immediate Physical Harm

Perhaps the only upper hand the virtual world has is that its inhabitants are immune to the immediate threat of physical violence; one cannot be randomly mugged online. However, vulnerable people are still susceptible to physical violence and harm, perhaps more to self-harm; there are many websites that promote anorexia, suicide and self-harm, and this can leave a big impact on impressionable minds.

Presence of Strong Safeguards

The main safeguards on the Internet are policing with the accompanying legislation, and technology itself. There are organisations in place to deal with the abusive websites and forums, appropriate legislation to prevent child pornography, paedophilia, theft, fraud and a variety of other online crime. There is also a vast array of technology that can help keep adults and children safe online, from parental control software that can restrict the websites viewed by children, to anti-virus and cryptography software and firewalls that help prevent hacking and viruses and keep data safe.

Conclusion
Staying safe online

It is commonly accepted that the Internet provides us with opportunities that were hitherto unavailable. Many sing the praises of this so-called information superhighway; however, it is prudent not to be lulled into a false sense of security by the promising opportunities. People should be made aware of the dangers lurking on the Internet, and be given the education and means to stay safe online. Just as children are taught not to speak to strangers in the real world, they should be taught not to speak to strangers online as well. Education in schools should include how to stay safe online; just as children are taught that eating fruit and vegetables is healthy, they should also be taught that excessive online activity can lead to addiction, with various negative consequences. This is because the virtual world is not very different from the physical world, both in terms of people waiting to take advantage of the weak and vulnerable, and with respect to dangers such as addiction.

The future of the virtual world

In many ways, the virtual world is a reflection of the real world. After all, the people who inhabit the real world are the same people who inhabit the virtual world. It follows, therefore, that what people do and want to do in the real world, they will try to do in the virtual world too. Where the physical constraints of the virtual world restrict them, they will try to find ways around them. The rapid development of technology also gives rise to new means by which people can do things, beneficial or harmful. The development of virtual reality may mean that one day, people in the virtual world may not be immune to immediate physical harm either. However, the technology by itself is neither good nor bad; it is the way the technology is put to use that creates positive and negative consequences for human beings. In the end, it can be said that the virtual world is perhaps just as dangerous as the physical world.

References

Camp, L. J. (2000) Trust and Risk in Internet Commerce. Cambridge, Mass.: MIT Press.

Carnell, L. (2007) Pupils Internet Safety Online. Bullying Online [online]. Available at: http://www.bullying.co.uk/pupils/internet_safety.php (last accessed Aug 2007)

Griffiths, M.D. (2002) The Social Impact of Internet Gambling Social Science Computer Review, Vol. 20, No. 3, 312-320 (2002) SAGE Publications

Griffiths, M. (1998) Does Internet and computer “addiction” exist? Some case study evidence International Conference: 25-27 March 1998, Bristol, UK

IRISS ’98: Conference Papers (Available online at http://www.intute.ac.uk/socialsciences/archive/iriss/papers/paper47.htm – last accessed Aug 2007)

Griffiths, M.D. (2000) Cyber Affairs. Psychology Review, 7, p28.

Hindocha, N. (2003) Threats to Instant Messaging. Symantec Security Response, p3.

Interactive Data Corporation (2001) Email mailboxes to increase to 1.2 billion worldwide by 2005 CNN.com (Available online at http://archives.cnn.com/2001/TECH/internet/09/19/email.usage.idg/ – last accessed Aug 2007)

Katz, J.E. and Rice, R.E. (2002) Social Consequences of Internet Use. Massachusetts Institute of Technology. p286.

Merkow, M. S. and Breithaupt, J. (2000) The Complete Guide to Internet Security New York AMACOM Books

Mitchell, P. (2000) Internet addiction: genuine diagnosis or not? The Lancet, Volume 355, Issue 9204, p. 632.

O’Connell, R. (n.d.) A Typology of Child Cyber Sexploitation and Online Grooming Practices. Cyberspace Research Unit UCLAN, p7-9.

Schepp, D. (2002) Internet Gambling Hots Up BBC Online (Available online at http://news.bbc.co.uk/2/hi/business/1834545.stm – last accessed Aug 2007)

Smith, J. and Machin, A.M. (2004) Youth Culture and New Technologies. New Media Technologies, QUT.

Crossover Design for Speakers

Crossover Design

In terms of crossover design, there are two distinct options: passive or active crossovers. Passive crossovers are the most common implementation, since only one amplifier is required. In this case, filters comprising passive components (inductors, capacitors and resistors) are used to ensure that the correct frequency range is supplied to each driver. Low-pass, high-pass and band-pass filters are commonly used and need to be matched to ensure that the frequency roll-offs complement each other, such that in the crossover zone(s) the combined acoustic output of the drivers maintains a flat frequency response.

In terms of these passive filters, it is the order of the filters used that is the primary consideration. A first order filter has a roll-off of -6dB per octave and a Butterworth characteristic. First order filters are undesirable for two reasons: a +3dB peak is introduced at the centre of the crossover band, and the crossover bandwidth is large due to the gentle roll-off, which means the drivers need to be capable of handling a greater frequency range. However, first order filters require the fewest components, incur less power loss as a result and do not introduce a phase change in the output.

Second order filters are the most commonly used type in passive crossovers, since they are relatively simple but solve the problems associated with first order filters. The roll-off is -12dB per octave and the filters may be designed with a Linkwitz-Riley characteristic which maintains a flat frequency response across the crossover band, unlike the combination of Butterworth filters.
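To make the passive option concrete, the following minimal sketch (in Python) computes component values for the two filter types discussed above. It assumes an idealised, purely resistive driver load; real drivers present a complex impedance, so these values are starting points rather than final designs.

import math

def first_order_butterworth(f_c, r_load):
    # Component values for a 1st-order (-6 dB/octave) Butterworth crossover.
    L = r_load / (2 * math.pi * f_c)      # series inductor feeding the woofer
    C = 1 / (2 * math.pi * f_c * r_load)  # series capacitor feeding the tweeter
    return L, C

def second_order_linkwitz_riley(f_c, r_load):
    # Component values for a 2nd-order (-12 dB/octave) Linkwitz-Riley crossover (Q = 0.5).
    L = r_load / (math.pi * f_c)          # series L (low-pass) or shunt L (high-pass)
    C = 1 / (4 * math.pi * f_c * r_load)  # shunt C (low-pass) or series C (high-pass)
    return L, C

f_c, r = 2500.0, 8.0  # e.g. a 2.5 kHz crossover into a nominal 8 ohm driver
L1, C1 = first_order_butterworth(f_c, r)
L2, C2 = second_order_linkwitz_riley(f_c, r)
print("1st order: L = %.3f mH, C = %.3f uF" % (L1 * 1e3, C1 * 1e6))
print("2nd order LR: L = %.3f mH, C = %.3f uF" % (L2 * 1e3, C2 * 1e6))

For the example values this gives roughly 0.51 mH and 7.96 uF for the first order case, and 1.02 mH and 3.98 uF for the second order Linkwitz-Riley case, in line with published crossover tables.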

Third order filters offer a roll-off of -18dB per octave; however, there is a problem of phase separation: in a two-way configuration there is a phase shift of 270 degrees, which “can result in lobing and tilting of the coverage pattern” (DellaSala, G. 2004). Some designs, such as the D’Appolito configuration[1], which uses three drivers, actually make use of this phase separation in order to minimise lobing; however, the D’Appolito configuration is notoriously complex and difficult to implement well without precise driver measurements.

If a high-order crossover is desired, fourth order filters are perhaps the best choice. Although they are more complex in terms of design and require more components, the advantages are a small crossover bandwidth (roll-off is -24dB per octave) and a 360 degree phase shift; hence no phase correction is required. Passive crossovers beyond fourth order are generally not considered. Borwick (2001, p.267) notes these “are seldom used in passive crossover designs because of their complexity, cost and insertion losses”.

The other approach to crossover design is the active crossover. In this case active filters (normally based around op-amps) are used to divide the input signal into the required frequency bands prior to amplification; the crossover has multiple outputs and a separate power amplifier is needed for each frequency band. Some audiophiles complain that active crossovers (which normally employ high-order active filters) are not a good choice, due to the poor transient response of high order filters. However as Elliot (2004) notes, “the additional control that the amp has over the driver’s behaviour improves the transient performance, and especially so at (or near) the crossover frequency – the most critical frequency point(s) in the design of any loudspeaker”.

Apart from the increased complexity and multiple power amplifier requirement, active crossovers are far superior to their passive counterparts in almost every way, although some purists may disagree. Good quality op-amps are cheap, as are the required resistors and capacitors (since these do not need to handle much power). The active solution means frequency response is no longer defined by the quite complicated combined resistive, capacitive and inductive load of the passive crossover and drivers. Thus the frequency response of the crossover is independent of dynamic changes in the load. Furthermore, the active crossover makes it easy to tune the crossover dynamically; with most commercially available active crossovers one can simply dial in the required frequency bands.

Efficiency is improved with active crossovers, since no power is lost by the amplifier in driving passive inductors or resistors. The amplifier also has the best possible control over transient response, since there is nothing between it and the driver other than cable. Thus the amplifier can respond directly and “presents the maximum damping factor at all times, regardless of frequency” (Elliot R. 2004).
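As an illustration of the active approach, the sketch below builds a digital 4th-order Linkwitz-Riley crossover as two cascaded 2nd-order Butterworth sections per band and verifies that the low-pass and high-pass outputs sum to an essentially flat response. This is a digital approximation for demonstration (assuming NumPy and SciPy are available); an analogue active crossover would realise the same transfer functions with op-amp stages such as Sallen-Key filters.

import numpy as np
from scipy import signal

fs = 48000   # sample rate, Hz
f_c = 2500   # crossover frequency, Hz

# A 4th-order Linkwitz-Riley filter is two cascaded 2nd-order Butterworth
# sections; the low-pass and high-pass outputs are in phase and sum flat.
sos_lp = np.vstack([signal.butter(2, f_c, 'lowpass', fs=fs, output='sos')] * 2)
sos_hp = np.vstack([signal.butter(2, f_c, 'highpass', fs=fs, output='sos')] * 2)

w, h_lp = signal.sosfreqz(sos_lp, worN=4096, fs=fs)
_, h_hp = signal.sosfreqz(sos_hp, worN=4096, fs=fs)

summed_db = 20 * np.log10(np.abs(h_lp + h_hp))
print("summed response stays within %.4f dB of flat" % np.abs(summed_db).max())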

In view of the above one may then wonder why passive crossovers continue to remain so popular, since it seems far more logical to implement frequency division before amplifying the signal. Ease of installation is perhaps the main factor. Almost all commonly available hi-fi systems use speakers with passive crossovers. For the consumer this makes things easy; the speakers are simply connected to the amplifier and installation is complete.

In contrast, turnkey active solutions for the average consumer are not forthcoming, although rack-mounted “professional” active crossovers can be obtained for quite reasonable prices (around £150 for a 4th order 2 way Linkwitz-Riley design)[2]. However, these require a fair amount of audio engineering expertise to set up correctly, and the typical home listener simply does not possess this knowledge.

For the high-budget client seeking the best audio reproduction, active crossovers are certainly the best option; the technical advantages have been seen to be numerous. This is offset by the fact that the system will be far more complicated to correctly install, but it is assumed in this case that complexity of installation is of little concern to the high-budget client who is unlikely to handle the installation themselves in any case.

For the low-budget client, the best solution is the passive crossover. It is a simple option, only requires one amplifier and yet produces acceptable sound quality. It is far from the best solution, but adequate if a competitive price point is desired.

In conclusion, all but a few dyed-in-the-wool purists will agree that the active crossover is a superior solution in terms of quality and control. What it lacks in simplicity is outweighed by a far superior level of control over frequency response and the drivers themselves. However, due to issues of complexity one can expect that the traditional passive crossover shall continue to lead a healthy existence in the majority of loudspeaker designs.

Sources

Borwick, John. (2001). Loudspeaker and Headphone Handbook, Focal Press.

DellaSala, G. (2004). Filter & Crossover Types for Loudspeakers, Audioholics Magazine.

Dickason, V. (1995). The Loudspeaker Design Cookbook, Audio Amateur Publications.

Elliot R. (2004). Active vs Passive Crossovers, Elliot Sound Products.

Rossing, T. (1990). The Science of Sound, Addison-Wesley.


Critical Success Factors of Cable TV (Pay-TV) Against Other Competitors in Hong Kong

Abstract

In this proposal, we hope to learn about real business strategies through the findings of the research, and to give some suggestions to help these companies increase their sales and profit.

The research proposal is structured as follows.

First, we introduce the background of the Pay-TV companies and their industry, to give a basic knowledge of the industry, past and present.

Second, we list the objectives that will help achieve the proposal’s aim.

Third, we critically review the relevant literature from books, articles, the Internet and magazines, discussing how business theory applies in the real business world; in this case, we can see which strategies the company is using and what the success factors are. Most importantly, we can come to understand marketing strategies in a real situation from the results of the research.

Additionally, we describe the research methods to be used, including the data collection method, the sampling method and the sample size.

Using a questionnaire, 100 to 150 people will be asked, in order to find out the competitive advantage of Cable TV. The relationship between factors (the quality of TV programmes, price, customer support service) and people’s attitudes towards each Pay-TV provider will be established.

Aim

This work aims to identify the attractiveness and competitiveness of Pay-TV and, through the research, to find out its success factors (competitive advantage over the main competitor), treating the findings as a lesson in business strategy. It also aims to provide suggestions and evidence on how to attract more potential customers, in order to increase the sales and profit of these companies.

Background

Some may not understand why Pay-TV has existed in Hong Kong for a long time and holds a stable market share. In fact, the majority choose to watch free-to-air TV such as TVB and ATV. However, free-to-air TV programmes cannot satisfy everyone. Pay-TV focuses on this market: the providers produce special TV programmes and buy the copyright of overseas TV programmes that free-to-air TV does not provide. Another selling point of Pay-TV is the provision of live sports broadcasts, such as football and the NBA.

In recent years, competition has become fiercer as more and more Pay-TV service companies have entered the market. However, Cable TV, which was the first company to successfully obtain a Subscription Television Broadcasting Licence from the Government, has maintained a stable market share over these years. Its main competitor is Now TV, a subsidiary of PCCW. (REVIEW OF PAY TV MARKET)

The following is the background of Cable Pay-TV and Now TV.

I-Cable

The Pay-TV service is operated by Hong Kong Cable Television Limited, a wholly-owned subsidiary of the Group. The Group successfully obtained a Subscription Television Broadcasting Licence from the Government in 1993; the Pay-TV service launched in the same year set the trend of multi-channel pay-television service for Hong Kong.

Hong Kong Cable currently produces over 10,000 hours of programming a year, making it the largest television programme producer in Hong Kong. Throughout the years, it has established a leading position in news, movies and sports programming, and will continue to introduce innovative local and international programmes for customers. (http://www.i-cable.com)

Now TV

Now TV is a 24-hour pay-TV service provider in Hong Kong. It is transmitted through the company’s Netvigator broadband network via an IPTV service, with a total of 175 channels: 156 Now Broadband TV channels (including eight high-definition channels and 15 music channels) and 19 TVB Pay Vision channels, spanning 17 categories, plus a VOD service. Launched in September 2003, the service is operated by the leading Hong Kong fixed-line telecom operator PCCW, through its subsidiary, PCCW VOD Limited. As of June 2009, the service had around 990,000 users, over 700,000 of them paying customers.

However, I-Cable has managed to maintain its market share against the challenge of Now TV. In order to understand clearly the success factors of I-Cable (business strategies, promotion, price, programme quality, support service), we need to ask a number of questions. (http://www.now-tv.com)

Cable TV vs Now TV
Why do people choose Pay-TV?
Through which channels do people come into contact with Pay-TV?
Which provider is more famous?
What is the relationship between these factors and people’s attitudes towards watching Cable TV or Now TV?
How are people’s needs changing?
Can Cable TV and Now TV meet these changing needs?

The answers will be found in the following sections.

Objective and research questions

Below are the main objectives of this research:

Study the general demographics of target customers.
Study the TV watching behaviour of customers.
Determine customers’ preferences across various kinds of TV programme.
Identify the reasons for choosing Pay-TV.
Evaluate which attributes of Pay-TV are important to customers.
Identify the most effective promotion channel.
Examine through which channels people get Pay-TV information.
Examine the reasons why they buy Pay-TV services through those channels.
Examine the impact of price and live sports broadcasts on customers’ attitudes towards Pay-TV.
Examine the support services of Pay-TV.

We will analyse market theories such as the 7Ps of marketing strategy from the results of the research. The answers to the above are based on the relevant literature and the sampling interviews. The details are as follows.

Critical review of relevant literature

The critical review has six parts: the first four present findings from the relevant literature, and the last two introduce the companies’ market strategies and review them.

1. The main difference between free-to-air TV and Pay-TV

According to the literature, free-to-air TV offers mainly entertainment programmes, most of them made in-house, and its programming targets popular tastes. Pay-TV, by contrast, offers over 100 overseas TV channels and live sports broadcasts, and some of its programming is informational, offering professional knowledge and material for special interests. (Kotler, P. and G. Armstrong (2008))

In recent years, more and more people have become willing to pay to watch Pay-TV. The reasons are easy to understand: the two local free-to-air broadcasters cannot satisfy everyone, and the needs of young people aged around 20 are changing. In the past, people treated TV as their main daily entertainment; today young people have many other entertainments and watch TV mainly to follow sports and to get information. This means Pay-TV still has a great potential market in the coming years.

2. The current competition in the Pay-TV market in Hong Kong
3. The promotion strategies of the two Pay-TV companies

Their promotion strategies are similar: both focus on potential customers who have special interests (cooking, religion, drama) or who want to watch non-local TV programmes (Discovery Channel, CCTV). Their selling points are also similar: the number of TV channels and live sports broadcasts.

Now TV currently puts more emphasis on promotion to attract potential customers, whereas Cable TV simply maintains the quality of its original service. In fact, people are used to watching Cable TV because of its longer history and better-known programme quality. In marketing terms, Cable TV is like a cash cow.

4. Famous TV programmes

Cable TV has excellent news programming and live English league broadcasts, which is one of the reasons why it can maintain its market share. The English league may not be the highest-level football league (many agree the Spanish league has become the strongest in recent years, and Spanish league broadcasts will be offered by Now TV for the next three years); however, the majority of Hong Kong people prefer to watch the English league.

Additionally, Cable TV also holds the live broadcast rights to the Champions League and the 2010 World Cup. This is a great competitive advantage over Now TV this year and for the coming three years. (This reflects the newest changes in customer needs.)

5. The relationships between factors and the attitude of young people towards I-Cable/Now-TV

Several factors influence how young people choose a Pay-TV provider.

a) Price (extended)

Cable TV adopts non-selective pricing (service packaging): customers must buy a number of channels at the same time. Now TV offers selective pricing: customers pay a basic fee and are then charged extra per channel, though Now TV also offers a price for a package of all services.

According to one news report, a great number of people were dissatisfied when Cable TV increased its basic charge from $239 to $259, plus an extra charge for live football broadcasts. (http://hk.news.yahoo.com/article/091124/4/fbx5.html)

b) Promotion
c) Live sports broadcasts (extended)

This is one of the important factors in Cable TV’s success, and it explains why the company can raise prices even in a bad economy. Cable TV spends heavily to acquire the rights to live football broadcasts, and raises prices to cover the cost. That is its strategy. However, it may be ignoring the changing needs of young people.

In recent years the English league has been successful in Hong Kong, owing to factors such as kick-off times and star players. However, the Spanish league will start earlier next year, and many stars have transferred from the English league to the Spanish league. This may make people prefer to watch the Spanish league. (http://hk.news.yahoo.com/article/091120/4/fa0e.html)

d) Technical support and customer service (extended)

Cable TV developed its support system early but has not improved it since. Now TV, by contrast, regularly upgrades its technical support system. I believe Now TV will have a better-developed system than Cable TV within the coming several years.

6. Market strategies (extended)

Pay-TV adopts Concentrated Marketing (Kotler, P. and G. Armstrong 2008), where the organisation concentrates its marketing effort on one particular segment and develops a product that caters for the needs of that particular group.

The detailed marketing theory and suggestions will be described after the sampling interviews.

Research methods/ Methodologies

Category: Options

The degree to which the research question has been crystallized: Exploratory study; Formal study
The method of data collection: Monitoring; Communication study
The power of the researcher to produce effects in the variables under study: Ex post facto
The purpose of the study: Reporting; Descriptive; Causal-explanatory
The time dimension: Cross-sectional
The topical scope (breadth and depth) of the study: Statistical
The research environment: Field setting
The participants’ perceptional awareness of the research activity: Actual routine

The main purpose of our study is to compare Cable TV and Now TV. We need to collect both primary and secondary data to analyse the success factors of each.

First, we collect secondary data from the Internet to learn the backgrounds, histories and annual reports of each Pay-TV company, together with other useful information from articles and the relevant literature.

Second, we use personal interviews (questionnaires) to collect the primary data, ensuring that the information gathered is related to our objectives. We will design a questionnaire around the 7Ps.

The sample will be drawn from different regions of Hong Kong (Hong Kong Island, Kowloon and the New Territories), half male and half female, to avoid an unrepresentative sample. The sample size will be 100 to 150, with ages ranging from about 18 to 65. Our survey method is the face-to-face interview; after the interview we will give respondents a small gift (such as a coupon).
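As a rough check on this sample size, the short calculation below (a sketch assuming simple random sampling and maximum variance, which this quota design only approximates) gives the worst-case margin of error at 95% confidence for samples of 100 and 150:

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Worst-case margin of error for a simple random sample of size n,
    # at 95% confidence (z = 1.96) and maximum variance (p = 0.5).
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 150):
    print("n = %d: +/- %.1f percentage points" % (n, margin_of_error(n) * 100))

A sample of 100 gives roughly plus or minus 9.8 percentage points and 150 gives roughly plus or minus 8.0, which is coarse but arguably adequate for an exploratory comparison of attitudes.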

Through the results we can understand the competitive advantage of each provider and make recommendations on how to maintain market share and which services need improvement. However, since the secondary data is limited, our information will come mainly from the primary data.

Project Plan

Refer to page 15 or the Excel [project plan].

The Convergence of Business and Technology

While technological convergence is no longer a new idea, the fascination with the subject lies in the capabilities and applications of both hybrid and brand new technological platforms, and the ways previously stand-alone industries have been reconfigured and thereby mobilised to provide enhanced service delivery. Such convergence pertains to the “digitisation of communications and the ways discrete media formats have become accessible to other media forms; have been further factors in this process” (Saltzis, 2007). In technical terms, Saltzis (2007) reminds us that “the new technologies convergence can be attributed to developments in digitization, bandwidth and compression; as well as interactivity”.

Moreover, the rapidity and pervasiveness of technological convergence has seized the entrepreneurial imagination and arrested the attention of economic rationalists, with respect to “the devices used by institutions within the communications and media industries, as well as the information they process, distribute, and exchange over and through these devices” (Mosco and McKercher 2008: 37). Such convergence also focuses upon the “integration of or interface between and among different media systems and organizations, made possible by the development of new technologies” (Mosco and McKercher 2008: 37).

With this being said, a more fertile field to explore, derives from the recognition that while technology continues to converge, so does the corporate world. The nub of this issue is the nature and extent of the link between these two types of convergence, and the nuanced ways in which one shapes and is shaped by the other. Corporate convergence, according to Babe (1996:284-285) refers to the “mergers, amalgamations, and diversifications, whereby media organisations come to operate across previously distinct industry boundaries.” Babe extends this explanation stating that corporate convergence refers to the non-technical features of convergence, which also “contribute to the blurring of industry boundaries” (Babe 1996: 284-285). Examples he cites in the 1990’s from his Canadian context include “ Time Warner combining book publishing, music recording, and movie making, not to mention cable television, (while) Rogers Communications, Inc. engage in newspaper and magazine publishing, long-distance and cellular telephony, cable television, and radio/television broadcasting” (Babe 1996: 284-285).

While it is self-evident that “corporate convergence promotes and is promoted by technological convergence” (Mosco and McKercher 2008: 37), closer attention is warranted to examine the nature of this promotion and the ways these two significant convergences influence each other. It is illuminating, as we do this, to itemise dimensions of technological convergence, to begin to pinpoint the areas of synergy between technology and corporate enterprise. The International Telecommunications Union (ITU) has been helpful in its examination of convergence, singling out ‘device convergence,’ ‘network convergence,’ ‘service convergence’ and ‘regulatory convergence’ (ITU 2008). The ITU’s examples of device convergence include the mobile phone as camera and Internet access device, while network examples include fixed-mobile convergence and next-generation networks (ITU 2008). Service convergence is exemplified by voice services over the Internet, and regulatory convergence by bodies spanning broadcasting and telecommunications, such as the Office of Communications (Ofcom) in the United Kingdom (ITU 2008).

The view of convergence from the corporate stakeholder, according to Andriole (2005:28), is ideally a “multi-disciplinary, anticipatory, adaptive and cautious” one, no longer about “early adoption of unproven technology,” but instead about questions of “business technology acquisition, deployment and management” (Andriole 2005: 28). The sense that the momentum has changed within the corporate sector, prompting corporate leaders to be ready to have ‘convergence conversations’, is clearly articulated by Andriole (2005), who advocates that companies will benefit by thinking in terms of “business technology convergence plans” (Andriole 2005: 28). Instead of technology being a footnote or a discrete department within a corporation, through its own array of convergences it now occupies a central position in underpinning corporate cultures. In response to this generational shift in consciousness, business planning now closely consults technology providers, shaping corporate decisions and goals. This change of thought has spawned a new series of business planning questions which demonstrate some of the links between technological and corporate convergence, including: “‘How does technology define and enable profitable transactions?’; ‘What business models and processes are underserved by technology?’; ‘Which are adequately or over-served by technology?’” (Andriole 2005: 29)

Now, when strategic planning is tabled as an agenda item within companies, the matter of technological capability is taken seriously, as corporations realise that sidelining technological innovation is a stepping stone towards giving away market edge to one’s competitors. Indeed, Andriole (2005: 30) forewarns of the perils of business-technology segmentation. Instead of a new business initiative being conceived and only then asking what technological capability exists to support it, Andriole (2005: 30) argues that technologists must be present as part of the materialisation of a company’s development goals and strategies.

One fundamental marker of a business model which values efficiency and effectiveness is the calibre of its internal and external communications systems and infrastructure. In the 21st-century business context of global interfacing, communications which are “pervasive, secure and reliable” (Andriole 2005: 30) are a baseline issue. The incentive to acquire such state-of-the-art systems is one factor driving further technological convergence, as market demand fosters technological innovation to bring market edge to communications. The airline industry is a practical case in point, with specific international airlines’ branding being fostered by the level of their onboard entertainment systems for travelling customers. Some international airlines have invested heavily in this component of their corporate identity to enhance their market niche, displaying convergence through multi-media, multi-channel video and music on demand in personalised entertainment systems which now permit replay and playback functions (Yu 2008).

Saltzis (2007:2) reminds us that a large area of compatibility and synchronicity between technological and corporate convergence relates to the classical knowledge networks, such as universities, corporations and investors, which derive great benefits from convergence, finding more penetrating ways to exchange information and knowledge, their primary resource. Additionally, since political, economic and financial power is derived from shared information, the value of corporate convergence to the stock markets and to companies is self-evident. In relation to the priming of information flow via the synergy between corporate and technological convergence, some observers are beginning to draw attention to the sociological trend that knowledge, through these processes, has become less of a community resource and increasingly a commodity. As information is commodified, it is packaged to target specific interest groups and economic stakeholders, who prize specific knowledge for specific outcomes, in terms of client need and demand. This instance of the knowledge superhighway shows that knowledge can be ‘positioned’ within the market with greater precision through convergence, yet, in so doing, may easily lose the original contextual underpinnings that imbued it with richer nuances of meaning in the first place. This phenomenon is perhaps nowhere more evident than in cable television, where networks and individual channels are devoted to specific content delivery 24 hours a day. The downside, of course, is that information must be assimilated rapidly on the take-up side by the media corporation, just as it is foisted upon the consumer on a ‘force-feed’ pretext, to make room for the next feed. Convergent capabilities that permit ‘bites’ of knowledge to be transferred digitally, globally and instantaneously allow knowledge to be stripped of the framework in which it emerged, just as it is quickly yet superficially digested by the global consumer. When information held the status of a community resource, rather than a global commodity, it could be used at the will of the consumer, for their own determined purpose, rather than for the commodified purpose preselected by the respective media conglomerates that perpetuate the promulgation of endless information.

Further challenges to technological and corporate convergence trends, apart from the dilution of meaning due to the multiplicity and potential splintering of sources, concern, according to the ITU (2008), “content distribution and management, sustainability and scalability, innovation management, competitive dynamics, tariff policies, network security, regulatory coherence and consumer protection” (ITU 2008). While the broadening of avenues for content distribution has the allure of versatility, the revolutionary distribution of music in the past decade illustrates the potency of convergence, threatening to undermine the very industry it was seeking to promote. iTunes and other legal Internet-based distribution pathways for music radically altered the income and revenue streams of popular music providers globally. While the consumer benefited through open access to music (just as the educational market was reconfigured once educational corporations began to exploit the potential of online delivery of educational content at school and university level), the demand for live music globally initially declined, yet has now been buoyed up by the benefits of enhanced global exposure, on account of the global penetration capacity of online music.

Another aspect of this link that has pressurised corporations like never before has been how to safeguard the integrity of informational, entertainment or intellectually creative products once they are so widely available via the world wide web. The proliferation of cloned products tends to diminish the quality, reputation or demand for the original. Corporations have had to weigh the benefits of more universal distribution against this tendency for the integrity of a product to be compromised. This, in one sense, has been as much about re-education of the consumer, who remains driven by the desire for quality in many instances, overlooking the detracting influence of YouTube look-alike bands’ renditions of hit singles by either reputable or promising new talent.

Patently, issues of security remain paramount in this race towards virally changing convergences, whether it is the protection of personal data by entertainment companies, the finance sector, or an individual relying upon social networking websites to foster new relationships. Banks’ reputation for safety, once built at the store front alone, has now shifted, if they are to remain competitive amid their market rivals, to the quality and integrity of their web presence. The same notion extends, of course, to an ever-growing margin of the retail and sporting sectors, which realise that within the 21st-century era of new media users, ‘digital native’ populations will increasingly rely upon web-based sources for their interfacing with the world. Ironically, even large-scale media conglomerates recognise that technological convergence can allow the operator of a camera-equipped mobile phone, who happens to be in the right place at the right time, to film an international crisis and post it on the web, embarrassingly before a major news corporation has the time or the infrastructure to outrun them. This realisation has brought a new recognition from major news broadcasters of the power and penetration of websites like YouTube, creating in journalists a scrutinising eye for such alternative-culture havens to assist the construction of mainstream breaking news stories.

The future looks bright for the ongoing convergence of technologies and corporate agendas. We are reminded of the profound benefits of the digitization revolution, yielding “enormous gains in transmission speed and flexibility over earlier forms of electronic communication,” (Mosco & McKercher 2008: 38) “extending the range of opportunities to measure and monitor, package and repackage entertainment and knowledge” (Mosco & Mckercher 2008: 38).

Nonetheless, the need to balance economic welfare and human welfare continues to be of concern, and is one of the many implications of the increasing reciprocity between technological and corporate convergence. In the field of media journalism and news production convergence, Klinenburg reiterates that convergence facilitates a more rapid confluence of sources impinging upon an event or a story, yet it also intensifies the pressure upon the journalist’s time to “conduct interviews, go out into the field, research and write” (2007: 128). The processing time available at the human level continually diminishes, and when technical speed is permitted to eclipse the human processes of digesting knowledge and subsequently reflecting upon it, the result may ironically, in spite of a seemingly infinitely greater number of sources, be inferior, less newsworthy and more insubstantial than it would have been had the journalist relied upon more traditional methods of crafting a story to be broadcast or published.

While we have such warnings of convergence being manifest as a “concentration of technological ownership, in the form of the global media conglomerates” (Saltzis 2007), occurring in tandem “at the three levels of networks, production and distribution” (Saltzis 2007), it is prudent to be cognisant of the fact that such monopolisation can create a hegemonic corporate empire, allowing such media outlets in effect to be massive funnels for particular ideological positions. Divergence of ownership, on the other hand, may be a way to democratise control and use of these powerful message delivery mechanisms; yet without inbuilt checks and balances, the corporate stakeholder will rarely consider that their over-influence in the marketplace of ideas is detrimental to society.

Since convergence researchers are ambivalent about the relative degree to which the “conglomeration of the global media has been the causal factor of technical convergence, or whether it is its by-product” (Saltzis 2007), there remains much to scrutinise as we move globally to yet more convergent means of conducting business, as well as producing, disseminating and consuming information for diverse purposes. Saltzis’s observations seem pertinent in the final analysis. While the “benefits of these transitions include the merging of consumer bases; the creation of synergies with shared resources (utilising economies of scope and scale); as well as cross-promotion, the instability of the global media system, with its intense competition, advertising, peer-to-peer file sharing technologies, have established significant challenges for both the music and film industries” (Saltzis 2007). The matter of e-regulation is, as Saltzis asserts, “in its infancy” (2007), with many more competing political, economic and ethical questions to consider as the global marketplace continues to converge.

Bibliography

Mosco, V. & McKercher, C. (2008) The Laboring of Communication: Will Knowledge Workers of the World Unite? Rowman & Littlefield

Saltzis, K. (2007) Corporate and Technological Convergence (Lecture 8): New Media and the Wired World MS2007.

International Telecommunications Union (2008) World Telecommunications Policy Forum 2009 ‘Convergence’, accessed December 13, 2008 from http://www.itu.int/osg/csd/wtpf/wtpf2009/convergence.html

Yu, R (2008) Airlines Upgrade Entertainment in Economy Cabin USA Today retrieved from http://www.usatoday.com/travel/flights/2008-05-05-inflight-entertainment_N.htm December 13, 2008.

Controller Area Network Sensors: Applications in Automobiles

1: Introduction

This paper presents an overview of Controller Area Network (CAN) sensors and their real-world application in automobiles. Given that controller area networks employ various sensors and actuators to monitor the overall performance of a car (K.H. Johansson et al, 2001[1]), this paper focuses only on the sensors and their role in supporting CAN performance.

2: Laser Speed Velocimetry (LSV) sensor

This sensor is applied in the Renault range of vehicles, where the company is incorporating the LSV as an on-board sensor to measure ground speed with better than 0.1 km/h accuracy (LM Info, 2006[2]). LSV technology is an approach to measuring the ground speed of a moving automobile with greater accuracy, thus ensuring better on-road performance, as argued by K.H. Johansson et al (2001). The purpose of the technology is to measure the real-time speed of an automobile to an accuracy of 0.1 km/h. The LSV system comprises a sensor that continuously records the interference pattern from the moving surface, which is fed back to the controller in the system that computes the speed of the car. The diagram in fig 1 below explains this.

The above is the schematic representation of the mounting of the LSV 065 sensor head (Source: www.polytech.com). It further clarifies that the LSV system not only provides an effective and accurate measurement of speed, but can also provide effective control over the performance and velocity of an automobile.

The LSV systems from Polytech, whose schematic was presented in this section, “combine a sensor head, a controller and software into a rugged industrial package that makes precision measurements from standstill to speeds of more than 7,200 m/min in either direction” (LM Info Special Issue, 2006).
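As an illustration of the measurement principle, the sketch below converts a measured Doppler frequency into a surface velocity for a dual-beam laser velocimeter. The laser wavelength and beam half-angle are illustrative assumptions, not the actual LSV 065 specifications:

import math

def lsv_velocity(f_doppler_hz, wavelength_m=780e-9, half_angle_deg=15.0):
    # Dual-beam laser Doppler velocimetry: the two beams form a fringe
    # pattern of spacing wavelength / (2 * sin(half angle)); surface
    # structure crossing the fringes scatters light modulated at the
    # Doppler frequency, so velocity = Doppler frequency * fringe spacing.
    fringe_spacing = wavelength_m / (2 * math.sin(math.radians(half_angle_deg)))
    return f_doppler_hz * fringe_spacing

v = lsv_velocity(10e6)  # a 10 MHz Doppler frequency
print("ground speed: %.2f m/s (%.1f km/h)" % (v, v * 3.6))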

3: Braking System Sensors and Speed Sensors

The ABS utilizes multiple sensors to prevent the wheels from locking while braking at high speed. The main sensors used in this set-up are:

3.1: Speed Sensor

A speed sensor is fitted to each wheel of the automobile. Its purpose is to detect wheel slip while braking, which is then fed back to the ABS controller unit for control. The speed sensor records the rotational speed of the wheel; when one or more wheels are found to be rotating at a considerably lower speed, the ABS control unit reduces the pressure at the pressure valves, thus ensuring that braking does not lock the wheel. Speed sensors come in various models and can be mounted in different positions on an automobile to facilitate the measurement of speed. The application of the speed sensor in the ABS is one of many applications of speed measurement.
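A minimal sketch of the slip-detection logic described above follows; the 20% slip threshold and the speed values are illustrative assumptions, not figures from any production ABS:

def slip_ratio(vehicle_speed, wheel_speed):
    # Longitudinal slip: 0 = free rolling, 1 = fully locked wheel.
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_cycle(vehicle_speed, wheel_speeds, slip_threshold=0.2):
    # Pressure command per wheel: reduce brake pressure on any wheel
    # whose slip exceeds the threshold, otherwise hold it.
    return ["REDUCE" if slip_ratio(vehicle_speed, w) > slip_threshold else "HOLD"
            for w in wheel_speeds]

# One wheel (25 km/h) rotating far slower than the vehicle (80 km/h): near lockup.
print(abs_cycle(80.0, [79.0, 78.5, 25.0, 79.5]))
# ['HOLD', 'HOLD', 'REDUCE', 'HOLD']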

The ECM method of measuring speed using speed sensors is increasingly popular as part of ABS technology. It is also regarded as a later evolution of ABS that overcomes the fundamental sensor-positioning flaws in the original ABS design.

The ECM uses the Pulse Code Modulation technique to communicate with the sensor and with the control system of the automobile’s Controller Area Network.

From the figure above it is clear that the ECM plays a critical role as the controller that captures the sensor signals and transmits them to the master controller area network Electronic Control Unit (ECU) for overall control of the automobile. It is also evident that the sensor plays a vital role in speed measurement and in the efficient operation of the ABS.
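To illustrate how such a reading might travel over the bus, the sketch below uses the python-can library on a Linux SocketCAN interface; the arbitration ID, payload layout and scaling are illustrative assumptions, not a real manufacturer's message definition:

import struct
import can  # python-can library

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Encode a wheel speed of 78.5 km/h as a big-endian 16-bit value in units
# of 0.01 km/h, padded to an 8-byte CAN frame (hypothetical layout).
msg = can.Message(arbitration_id=0x1A0,
                  data=struct.pack(">H6x", int(78.5 * 100)),
                  is_extended_id=False)
bus.send(msg)

# On the receiving node (e.g. the master ECU), the frame is decoded symmetrically:
rx = bus.recv(timeout=1.0)
if rx is not None and rx.arbitration_id == 0x1A0:
    speed_kmh = struct.unpack(">H6x", bytes(rx.data))[0] / 100.0
    print("wheel speed on bus: %.2f km/h" % speed_kmh)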

The fundamental difference between the VSS (vehicle speed sensor) and the WSS (wheel speed sensor) is that the VSS is part of the controller area network and is connected directly to the ECU, whilst the WSS feeds into the ABS controller unit, which is in turn connected to the CAN of the car under consideration.

The VSS is also a successful and flexible method for motorbikes and other two-wheeled vehicles, as its mounting is simpler than the WSS mounting for the ABS that is popular in cars.

VSS units mounted in the transaxle or the transmission serve the purpose of velocity measurement effectively, providing near-accurate readings for efficient speed control by the driver of the car or the rider of the bike.

In the case of the VSS mounted in the transmission, the sensor sends a 4-pulse signal at regular intervals to the combination meter, which then sends the signal to the ECU of the CAN in the car. The signal so sent is recorded as the speed and shown to the driver as the velocity of the car. This approach is more accurate than the traditional analogue approach to speed measurement and management.
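The conversion from pulse counts to a displayed speed is simple arithmetic, sketched below; the pulses-per-revolution and tyre diameter are illustrative assumptions, since real meters are scaled per vehicle:

import math

def speed_from_pulses(pulse_count, interval_s, pulses_per_rev=4,
                      tyre_diameter_m=0.62):
    # Revolutions per second from the pulse train, then distance per
    # revolution from the tyre circumference, converted to km/h.
    revs_per_s = pulse_count / pulses_per_rev / interval_s
    circumference_m = math.pi * tyre_diameter_m
    return revs_per_s * circumference_m * 3.6

print("%.1f km/h" % speed_from_pulses(20, 0.5))  # 20 pulses in 0.5 s -> ~70 km/h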

The above schematic makes it clear that although mounting the sensor on the transaxle provides an efficient method of measuring velocity, the sensor’s response can be degraded by the mechanical wear and tear directly associated with the transaxle in a car.

The VSS mounted in the transmission is perceived to have resolved this issue by mounting the sensor near the core rotor and using a magnetic field to hold the sensor in position. This approach is agreed to be more effective than the former, where mechanical wear and tear was a critical drawback to the overall performance of the system. The schematic mounting of the VSS in the transmission is presented in fig 4 below.

The above mounting schematic further justifies that positioning the sensor by the rotor helps measure the speed effectively and more accurately.

4: Differential Hall Effect Sensors

Daniel Dwyer (2007)[3] argues that differential Hall effect sensors are not only capable of accurately measuring speed but also provide safety measures by effectively controlling speed. Hall effect sensors utilize the fundamental principle behind the Hall effect, which is described as follows:

“When a bias voltage is applied to the silicon plate via two current contacts, an electric field is created and a current is forced.” … Daniel Dwyer (2007).

This principle is utilized in gear tooth profiling and in speed measurement through gear tooth sensing, in both the linear and the differential cases. The differential arrangement is argued to be the more successful, especially in automobiles with automatic transmission, because of the need to control the speed of the car effectively.

Another interesting aspect of differential Hall effect sensors is that the sensor positioning is robust and its wear and tear is minimal.

The differential element sensing that is key to differential Hall effect sensors utilizes the fundamental Hall effect. The sensor also “eliminates the undesired effects of the back-biased field through the process of subtraction” (Daniel Dwyer, 2007). The differential baseline field for the sensor is made close to zero gauss, since each of the two Hall elements on the IC (the sensor) sees approximately the same back-biased field, as argued by Daniel Dwyer (2007). A schematic representation of the differential element sensing is presented in fig 5 below.

The major feature of the differential Hall effect sensor is its production in the form of an integrated circuit that can respond to magnetic field interference and to differential effects due to changes in speed and the gear tooth position in the magnetic field.
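The subtraction described above can be illustrated with a short sketch; the bias level, signal shape and threshold are illustrative assumptions used only to show how the common back-bias cancels and tooth edges are detected:

def differential_output(hall_a, hall_b):
    # Subtract the two Hall element readings: the common back-bias field
    # cancels, leaving only the differential signal from a passing gear tooth.
    return [a - b for a, b in zip(hall_a, hall_b)]

def tooth_edges(diff_signal, threshold):
    # Detect gear-tooth edges as upward threshold crossings of the
    # differential signal; edge rate over time gives rotational speed.
    edges = []
    for i in range(1, len(diff_signal)):
        if diff_signal[i - 1] < threshold <= diff_signal[i]:
            edges.append(i)
    return edges

# Both elements see the same ~500 gauss back-bias; only element A sees teeth.
bias = 500.0
hall_a = [bias + s for s in (0, 2, 30, 80, 30, 2, 0, 2, 30, 80, 30, 2, 0)]
hall_b = [bias] * len(hall_a)
diff = differential_output(hall_a, hall_b)
print(tooth_edges(diff, threshold=50.0))  # sample indices of passing tooth edges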

The differential element sensing and speed measurement are accomplished through the peak holding of the integrated circuit (IC) in the field. Although the traditional peak-detecting scheme could resolve the issue of peak holding, the sensor requires an external capacitor for peak holding in order to control the overall automobile speed effectively.

A large gain is required to generate a signal strong enough to overcome the air gap in the case of the Hall effect sensor, and this introduces drawbacks in timing accuracy and duty cycle performance associated with the slope of the magnetic signal strength, as argued by Daniel Dwyer (2007).

From the above arguments it is clear that the Hall effect sensor is a successful but expensive sensor for performing measurements, and it can be programmed as part of the overall CAN of the automobile.

To conclude, four sensors were discussed in this paper. The Laser Speed Velocimetry (LSV) sensor, with the LSV 065 module as an example, proves to be successful and accurate speed measurement equipment, but mounting and safety-related issues pose a significant drawback to its commercial application. The wheel speed sensor for the ABS was then discussed, followed by an analysis of the vehicle speed sensor. Finally, the differential Hall effect sensor was discussed; it can be mounted easily in an automobile and performs effectively to provide accurate measurements, but its higher cost and maintenance requirements make it a secondary choice to the traditional VSS and WSS sensors used in most cars.

Context Aware Academic Planner Design

Designing a Context Aware Academic Planner
Al Khan bin Abdul Gani

Abstract

An academic calendar planner is an application that can give tremendous advantages to students, particularly university students, and to academic personnel. Using the academic calendar planner, students and academic personnel can manage their academic schedule anytime, anywhere, and keep their calendar entries up to date. Beyond that, users can interact with one another, in particular lecturers with students. One ability that cannot be found in other academic calendar planners is the ability to switch the view between monthly, weekly, daily and per-semester bases according to user preference. The academic calendar planner also allows users to create groups in which each user can see the schedules of the other users.

Keywords— Academic Planner, social application

Introduction

The aim of this paper is to determine the context awareness to be considered in developing an academic planner, by reviewing previous papers and by conducting a survey of students and lecturers to gauge their responses regarding such a planner. This paper focuses on a proposed academic planner for UiTM with the features summarised in the abstract above: schedule management anytime and anywhere, up-to-date calendar entries, interaction between lecturers and students, switchable monthly, weekly, daily and per-semester views, and groups in which users can see one another’s schedules.

Background

This application is developed for students, lecturers, and academic personnel who are looking for a full-featured application to manage their academic calendar. The current system in a university, for example UiTM, provides only a non-dynamic academic application to students and lecturers. They rely entirely on the academic calendar to help them manage their academic schedules, but the existing academic calendar is limited to certain activities:

Only academic personnel have the right to add new academic plans, university events, public holidays, and so on.
Lecturers and students can only view the calendar. They do not have the authorization to update or change any of the calendar information.
Sometimes a lecturer wants to cancel a class and arrange a replacement. Because of the limited functionality of the current academic calendar, this leads to unreliable calendar information.
In certain circumstances, students need to meet their lecturer, but the lecturer is not around. This is due to unreliable calendar information about availability status.
METHODOLOGY

This research aims to determine the key areas of a requirement specification to be considered in designing a context-aware academic planner. Two approaches were used to identify the appropriate elements and features: a literature survey and questionnaires.

[Figure 1: Research mission. The current issues, the current needs of students and lecturers, and the common elements feed into the framework of element/feature application.]

Figure 1 represents the methods used to determine the features before designing the application.

Literature review

A literature review needed to be done in order to continue the study of this topic. A literature survey was conducted to investigate the current issues and the common elements and features involved in developing a context-aware academic planner. Table 2 is a draft of the element functions involved in the academic planner system.

TABLE 2: DRAFT FROM LITERATURE SURVEY

Elements/functions            Sources
Academic-related matters
  Course registration         [4], [9], [10], [11], [12], [13]
  Course selection            [3], [4], [6], [9], [10], [11], [12]
  Academic progress           [8], [13]
  Course information          [4], [6]
  Scheduling                  [3], [12]
  Plan of study               [7], [8]
  Academic calendar           [13]
Delivery methods
  Bulletin board              [6]
  Online discussion           [13]
  Chat                        [14]
Support system
  Appointment                 [6]

The existing planners

Several existing planners were also reviewed in order to position the proposed academic planner:

                        Google Calendar        Thunderbird           Microsoft Outlook
Features                Easy to access;        Robust calendaring    Sync and great
                        fast and reliable      tool; email           collaboration tools
                                               integration
Limits                  No full integration    -                     Hefty price
                        with client email
                        and contacts
Platform                Web-based              All platforms         Windows
Conversation with       No support             No support            No support
other contacts

Context awareness

Ubiquitous computing (pervasive systems) was first proposed by Weiser (1991). Context-aware systems are a type of pervasive system and are viewed by computer scientists as a mature technology [1, 2]. A definition of context is given by Dey in [3]: “context is any information that can be used to characterize the situation of an entity; an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves”. Context-aware systems are able to gather contextual information from a variety of sources without explicit user interaction and adapt their operation accordingly [4]. Context-aware systems can integrate easily with any service domain, such as healthcare, commerce, learning, and transport.

A context-aware system must include three essential elements: sensors, processing and action. Three types of sensors are defined: physical, virtual and logical [5]. A physical sensor, such as a camera or thermometer, captures information about its local environment [6]. In contrast, virtual sensors extract information from virtual space, which is defined as the set of data, applications and tools created and deployed by the user. Logical sensors combine physical and virtual sensors to extract context information. For example, a company can infer that an employee is working from home using login information (a virtual sensor) and a camera (physical sensor) [1].
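As a concrete illustration of this taxonomy, the following minimal Python sketch (all class and attribute names are hypothetical, not from the cited papers) shows a logical sensor combining a virtual sensor (remote login records) with a physical sensor (an office camera) to infer the working-from-home context:

```python
# Minimal sketch of the physical/virtual/logical sensor taxonomy.
# All names here are hypothetical illustrations, not an API from the paper.

class PhysicalSensor:
    """Captures information from the local environment, e.g. a camera."""
    def __init__(self, person_detected: bool):
        self.person_detected = person_detected

class VirtualSensor:
    """Extracts information from virtual space, e.g. login records."""
    def __init__(self, logged_in_remotely: bool):
        self.logged_in_remotely = logged_in_remotely

class LogicalSensor:
    """Combines physical and virtual sensors to derive higher-level context."""
    def __init__(self, camera: PhysicalSensor, login: VirtualSensor):
        self.camera = camera
        self.login = login

    def working_from_home(self) -> bool:
        # Remote login plus no presence in the office suggests home working.
        return self.login.logged_in_remotely and not self.camera.person_detected

office_cam = PhysicalSensor(person_detected=False)
vpn_login = VirtualSensor(logged_in_remotely=True)
print(LogicalSensor(office_cam, vpn_login).working_from_home())  # True
```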

Context-aware user interfaces facilitate user interaction by suggesting or prefilling data derived from the user’s current context. This raises the problem of determining which context information can be used as input for which interaction element in the user interface. The task is especially challenging because the texts that describe the elements, e.g. their labels, often differ in the terminology used. To facilitate interaction with an application, we need user interfaces (UIs) that provide proactive assistance, for example by suggesting which values to enter into a form.

In her paper, Melanie presents a novel mapping process for this purpose, which combines the advantages of string-based and semantic similarity measures to bridge the vocabulary gap between context and UI elements, and which is able to extend its vocabulary automatically by observing the user’s interactions. Her research shows that these two features dramatically increase the quality of the resulting mapping. Unlike previous approaches, the proposed mapping process does not require any training or manually tagged data. Furthermore, it does not use only the label to describe the context and UI elements, but also additional texts such as their tooltips.
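The following Python sketch illustrates the general idea only, not the paper's actual algorithm: it maps a context attribute onto the best-matching form field by taking the stronger of a string-based score (difflib's SequenceMatcher) and a crude synonym table standing in for a real semantic measure; the synonym table and field names are invented for the example:

```python
# Sketch of the general idea only (not the paper's actual algorithm):
# combine string similarity with a crude synonym-based "semantic" score
# to map a context attribute onto the best-matching form field.
from difflib import SequenceMatcher

# Hypothetical synonym table standing in for a real semantic measure.
SYNONYMS = {"surname": {"last name", "family name"},
            "mobile": {"cell phone", "phone number"}}

def string_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a: str, b: str) -> float:
    a, b = a.lower(), b.lower()
    if b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set()):
        return 1.0
    return 0.0

def best_field(context_attr: str, fields: list[str]) -> str:
    # Take the stronger of the two measures for each candidate field.
    return max(fields, key=lambda f: max(string_sim(context_attr, f),
                                         semantic_sim(context_attr, f)))

print(best_field("surname", ["First name", "Last name", "Email"]))  # Last name
```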

Context-aware applications are expected to become a remarkable application area within future mobile computing. As mobile phones form a natural tool for interaction between people, the influence of the current context on collaboration should be taken into account to enhance the efficiency and quality of the interaction [1].

Context-aware mobile devices have so far been investigated mainly from the technological point of view, examining context recognition and sensor technologies, inference logic, system architectures, or infrastructure. There have also been examples where contextual information has been used to facilitate co-operation between mobile users. A user’s personal information, such as reminders, phonebook contacts, or calendar notes, can be used as an information source when creating location-sensitive messages, as done with CybreMinder [2]. Schmidt et al. [3] introduced a context-aware phonebook which indicates the availability of a contact the user wants to call. Location is probably the most commonly used context attribute, and it has been used to develop numerous location-aware mobile systems, such as the GUIDE tour guide in Lancaster [4] or the visitor’s guide at Tate Gallery, London [5].

Cloud Application

A cloud application (or cloud app) is an application program that functions in the cloud, with some characteristics of a pure desktop app and some characteristics of a pure Web app. A desktop app resides entirely on a single device at the user’s location (it doesn’t necessarily have to be a desktop computer). A Web app is stored entirely on a remote server and is delivered over the Internet through a browser interface.

Like desktop apps, cloud apps can provide fast responsiveness and can work offline. Like Web apps, cloud apps need not permanently reside on the local device, and they can easily be kept up to date online. Cloud apps are therefore under the user’s constant control, yet they need not always consume storage space on the user’s computer or communications device. Assuming the user has a reasonably fast Internet connection, a well-written cloud app offers all the interactivity of a desktop app along with the portability of a Web app. A cloud app can be used by anyone with a Web browser and a communications device that can connect to the Internet. While the app’s tools and data live in, and can be modified in, the cloud, the actual user interface runs on the local device. The user can cache data locally, enabling a full offline mode when desired. A cloud app, unlike a Web app, can be used on board an aircraft or in any other sensitive situation where wireless devices are not allowed, because the app functions even when the Internet connection is disabled. In addition, cloud apps can provide some functionality even when no Internet connection is available for extended periods (while camping in a remote wilderness, for example).
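A minimal sketch of the offline behaviour described above, assuming a hypothetical fetch_remote_events() network call and a local JSON cache file (both invented for illustration):

```python
# Illustrative sketch of the offline behaviour described above: a cloud app
# reads from a local cache when the network is unavailable and refreshes the
# cache when it is. The fetch function and file name are hypothetical.
import json, os

CACHE_FILE = "calendar_cache.json"

def fetch_remote_events():
    """Stand-in for a real network call; raises OSError when offline."""
    raise OSError("offline")  # pretend the Internet connection is disabled

def load_events():
    try:
        events = fetch_remote_events()
        with open(CACHE_FILE, "w") as f:
            json.dump(events, f)          # refresh the local cache
        return events
    except OSError:
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as f:   # fall back to cached data offline
                return json.load(f)
        return []                         # first run with no connection

print(load_events())  # [] here, or previously cached events when offline
```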

Cloud apps have become popular among people who share content on the Internet. Linebreak S.L., based in Spain, offers a cloud app named (appropriately enough) “CloudApp,” which allows subscribers to share files, images, links, music, and videos. Amazon Web Services offers an “AppStore” that facilitates quick and easy deployment of programs and applications stored in the cloud. Google offers a solution called “AppEngine” that allows users to develop and run their own applications on Google’s infrastructure. Google also offers a popular calendar (scheduling) cloud app.

FINDINGS
Questionnaire Analysis

The questionnaire was answered by 361 students and 155 lecturers, who agreed or disagreed (in %) with each of the following elements/functions:

Academic-related matters: course registration, course selection, academic progress, course information, scheduling, plan of study.
Academic planner: reminder, appointment, sync with major calendar applications, context awareness.

Proposed Features in the Academic Planner

After studying traditional and existing planners related to the academic planner, reviewing the literature, and analyzing the questionnaire, new features are introduced to improve the academic planner.

Optimizing class scheduling in collaborative mobile systems through distributed voting

Decision making through distributed voting can help automate routine collaborative tasks such as scheduling classes, appointments, and events. In this paper the authors concentrate on how distributed voting strategies can be used for scheduling meetings in mobile and pervasive environments. Their work focuses on optimizing the meeting-scheduling result for each participant in a mobile team by using user-specific preferences and the information available on their devices. The negotiation is done in a distributed manner, directly between the peers. The authors describe different approaches to the decision-making strategy, drawing on voting theory to balance the different user preferences and availabilities. The weight of the votes from each participant can also be adjusted according to their importance or necessity in the given meeting. They also briefly introduce an approach to support distributed decision-making strategies pervasively using a lightweight Web-based platform. To conclude, they give their views on future development directions and evaluation plans, and extend the approach to other related domains [1].
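To make the idea concrete, here is a minimal Python sketch of weighted preference voting over candidate time slots. It is a generic illustration, not the cited paper's negotiation protocol; the participants, weights, and preference scores are invented:

```python
# Sketch of the general idea (not the cited paper's exact algorithm):
# each participant scores candidate time slots by preference, votes are
# weighted by the participant's importance, and the best slot wins.
# All names, weights, and preference values are hypothetical.

def schedule_meeting(preferences: dict[str, dict[str, float]],
                     weights: dict[str, float]) -> str:
    """Return the slot with the highest weighted preference total."""
    totals: dict[str, float] = {}
    for person, prefs in preferences.items():
        for slot, score in prefs.items():
            totals[slot] = totals.get(slot, 0.0) + weights[person] * score
    return max(totals, key=totals.get)

prefs = {
    "lecturer": {"Mon 10:00": 0.9, "Tue 14:00": 0.4},
    "student_a": {"Mon 10:00": 0.2, "Tue 14:00": 0.8},
    "student_b": {"Mon 10:00": 0.5, "Tue 14:00": 0.7},
}
weights = {"lecturer": 2.0, "student_a": 1.0, "student_b": 1.0}

print(schedule_meeting(prefs, weights))  # Mon 10:00 (the lecturer's vote weighs more)
```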

Categorizing Task Occurrence Patterns

When we make a future plan of our work, we can predict or forecast upcoming tasks, because we know that a fair number of our tasks will recur just as they occurred in the last year or month. In addition, we have many dependent tasks; for example, a series of regular meetings with the office staff requires various auxiliary tasks to be completed, such as Announcement, Setting up Room, and Sending Minutes. These related tasks fall on approximately the same time grid as their corresponding tasks. This type of regularity is called a Task Occurrence Pattern, which arises from the repetition of tasks and the alignment of related tasks [4]. To confirm how closely real tasks follow the Task Occurrence Pattern, all the tasks of one user, a graduate student, were gathered over a year and inspected from the viewpoints of dependence and recurrence.
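The following Python sketch shows one simple way such an inspection could be automated, assuming hypothetical task data: tasks are grouped by title and flagged as recurrent when the gaps between occurrences are nearly constant, i.e. when they sit on the regular time grid described above:

```python
# Sketch of recurrence detection in the spirit of the Task Occurrence
# Pattern: group a user's tasks by title and flag a task as recurrent when
# the gaps between its occurrences are (nearly) constant. Hypothetical data.
from collections import defaultdict
from datetime import date

def is_recurrent(dates: list[date], tolerance_days: int = 2) -> bool:
    """True if consecutive occurrences are spaced almost evenly."""
    if len(dates) < 3:
        return False
    dates = sorted(dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return max(gaps) - min(gaps) <= tolerance_days

tasks = [("Staff meeting", date(2024, 1, 8)), ("Staff meeting", date(2024, 1, 15)),
         ("Staff meeting", date(2024, 1, 22)), ("Sending Minutes", date(2024, 1, 9)),
         ("Sending Minutes", date(2024, 1, 16)), ("Sending Minutes", date(2024, 1, 23))]

by_title = defaultdict(list)
for title, day in tasks:
    by_title[title].append(day)

for title, days in by_title.items():
    print(title, "recurrent" if is_recurrent(days) else "one-off")
```

Note how the auxiliary task (Sending Minutes) lands one day after each meeting, illustrating the alignment of dependent tasks on the same weekly grid.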