Developing Sensor Technology

Abstract

The need for sensor devices has been growing as new applications are developed in several technological fields. The current state of the art of this sensor technology is used in modern electronic nose designs, which operate in a different manner from earlier detectors. The chamber of the E-Nose sensor is to be upgraded mainly to reduce nuisance alarms and to improve the reliability of detecting smoke caused by fire and non-fire particles. This paper gives a brief state of the art of the different fire and non-fire particles that emit smoke, the various chemical gas sensors used to detect smoke, and a fire detection algorithm.

Keywords- Sensors; Smoke; Electronic Noses; Fire Detection Algorithm; Fire Particles; Non-Fire Particles

Introduction

The concept of an electronic nose may appear to be an up-to-date technology, but scientists first developed an artificial nose in the 1930s that used sensors to measure levels of ultraviolet light found in mercury. Currently these devices are employed in numerous technological fields for various applications.

Presently these devices are used in modern fire detection frameworks for the simultaneous estimation of carbon monoxide (CO), carbon dioxide (CO2), and smoke. The relative concentrations of CO and CO2 in smoke offer a way to reduce the frequency of nuisance alarms and so increase the reliability of smoke detectors. The sensors incorporated in this fire detection system, together with a fire detection algorithm, detect smoke caused by fire or non-fire particles and raise the alarm accordingly.
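
To make this idea concrete, the short Python sketch below applies a simple ratio-and-threshold test to CO, CO2 and smoke readings; the function name, sensor inputs and threshold values are hypothetical placeholders for illustration only, not parameters of the detector described in this paper.

```python
def is_likely_fire(co_ppm, co2_ppm, smoke_obscuration,
                   ratio_threshold=0.02, smoke_threshold=0.05):
    """Illustrative rule: treat smoke as a real fire only when it is
    accompanied by a raised CO/CO2 ratio; otherwise assume a nuisance
    source (dust, steam, cooking). All thresholds are made-up values."""
    if smoke_obscuration < smoke_threshold:
        return False              # not enough smoke to act on at all
    if co2_ppm <= 0:
        return False              # guard against a bad CO2 reading
    return (co_ppm / co2_ppm) >= ratio_threshold

# Example: smoky sample with combustion gases present -> likely fire
print(is_likely_fire(co_ppm=40, co2_ppm=1200, smoke_obscuration=0.12))
```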

Previous fire detection systems used sensors for measuring temperature, smoke, and combustion products including oxygen (O2), carbon monoxide (CO), carbon dioxide (CO2), water vapor (H2O), hydrogen cyanide (HCN), acetylene (C2H2), and nitric oxide (NO), but they did not give reliable results. Some systems analyzed smoke using Gas Chromatography-Mass Spectrometry (GC-MS) along with Fourier Transform Infrared (FTIR) Spectroscopy [1].

Advances in fire detection systems are being sought to decrease the detection time and the frequency of unnecessary alarms. Most of the research toward these goals has been done with multi-sensor detectors, because smoke detectors with a single sensor have trouble discriminating smoke produced by fire sources from that produced by non-fire sources. The 95% rate of unnecessary alarms reported for smoke detectors in the U.S. during the 1980s is attributed to that limitation.

Section 1 briefly introduces the fire detection system incorporated in an electronic nose, and Section 2 covers the different gas sensors that detect smoke. Section 3 gives a brief description of the fire and non-fire particles, and Section 4 describes how the sensory system is designed in an E-Nose for preventing fire accidents. Finally, we conclude in Section 5.

Chemical Gas Sensors

The environment needs to be monitored [2] from time to time, as many accidents have taken place for lack of such monitoring. In order to control industrial processes, chemical sensing technologies have been emerging that mainly emphasize:

Control of combustion processes (oxygen)

Detection of flammable gases, to protect against fire and explosion.

Detection of toxic gases, for environmental monitoring.

Solid Electrolyte Sensor

The solid electrolyte (SE) sensor [3][4] is based on the principle of electrochemical gas detection, and is used to detect chemicals or gases that can be oxidized or reduced in chemical reactions.

It mainly contains three electrodes:

A sensing or working electrode, which reacts when gas is present by either oxidizing or reducing the target gas.

A counter electrode, which provides a corresponding reverse reaction to the one occurring at the sensing electrode so as to provide a net current flow.

A reference electrode, which stays unaffected by the chemical reactions occurring on the sensing and counter electrodes and provides a stable potential against which measurements can be made.

Figure 1. Solid Electrolyte Sensor

SE sensors (Figure 1) are used in millions of vehicles to monitor exhaust gases and minimize toxic emissions.

Thermal-Chemical Sensors

Thermal-chemical sensors [2] work on the principle that there will be a change in temperature (ΔT) when heat energy (ΔEh) is released or absorbed. The pellistor is the most common thermal-chemical sensor (other thermal sensors are based either on thermistors or on thermopiles). They are used for monitoring combustible gases.

Figure 2.Thermal-Chemical Sensors

Gravimetric Chemical Sensors

They are also known as piezoelectric sensors [5]. Two types are used for gas sensing: the Surface Acoustic Wave (SAW) device and the Quartz Crystal Microbalance (QCM), as in Figure 4.

Figure 3. SAW Device

Figure 4. Quartz Crystal Balance

The SAW device produces a wave that travels along the surface of the sensor, while the QCM produces a wave that travels through the bulk of the sensor, as shown in Figure 3. Both work on the principle that a change in the mass of the piezoelectric sensor coating, due to gas absorption, results in a change in the resonant frequency on exposure to a vapor.
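
For the QCM case, the link between adsorbed mass and frequency shift is commonly quantified with the Sauerbrey relation. That relation is not stated in this paper, so the Python sketch below should be read only as an illustrative calculation using typical AT-cut quartz constants.

```python
import math

def sauerbrey_shift(delta_mass_g, area_cm2, f0_hz):
    """Approximate QCM frequency shift (Hz) for a small, rigid mass load:
    df = -2 * f0^2 * dm / (A * sqrt(rho_q * mu_q)).
    rho_q and mu_q are typical constants for AT-cut quartz."""
    rho_q = 2.648        # quartz density, g/cm^3
    mu_q = 2.947e11      # quartz shear modulus, g/(cm*s^2)
    return -2.0 * f0_hz ** 2 * delta_mass_g / (area_cm2 * math.sqrt(rho_q * mu_q))

# Example: 1 microgram adsorbed over 1 cm^2 on a 5 MHz crystal -> about -57 Hz
print(round(sauerbrey_shift(1e-6, 1.0, 5e6), 1))
```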

Conducting Polymer Sensor:

Conducting polymers [2] are plastics that change their resistance as they adsorb or desorb specific chemicals (Figure 5). The adsorption of these chemicals depends mainly on their polarity (charge) and their molecular structure (shape and size).

Figure 5. Conducting Polymer Sensor

Due to their high sensitivity, low price and rapid response time at room temperature, conducting polymer sensors are well suited to chemical sensing.

IR Spectroscopy Sensors:

Spectroscopic sensors [2] determine the concentration of several gases at a time. They work on the principle that gases absorb infrared radiation at specific wavelengths due to their natural molecular vibrations. Systems with narrow-band interference filters or laser light sources for a specific gas (such as CO2) are termed monochromatic systems.

Figure 6. IR Spectroscopy Sensors

In Figure 6 above, the CO2 present in the sample gas absorbs part of the infrared light, which is periodically emitted from the light source, at a wavelength of 4.3 µm before it reaches the infrared detector. These sensors are most suitable for CO2, show low cross-sensitivity to other gases, have a moderate response speed and fairly good accuracy and linearity, but are bulky and costly.
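
To make the absorption principle concrete, the sketch below inverts a Beer-Lambert style attenuation model to estimate a gas concentration from the detector signal; the absorption coefficient and path length are invented example values, not the specification of any real NDIR sensor.

```python
import math

def ndir_concentration(i_measured, i_reference, absorption_coeff, path_length_cm):
    """Estimate concentration from IR attenuation, assuming Beer-Lambert:
    I = I0 * exp(-k * c * L)  =>  c = ln(I0 / I) / (k * L).
    k (absorption_coeff) and L (path_length_cm) are illustrative values."""
    if i_measured <= 0 or i_measured > i_reference:
        raise ValueError("measured intensity must lie in (0, I0]")
    return math.log(i_reference / i_measured) / (absorption_coeff * path_length_cm)

# Example with made-up numbers: 20% attenuation over a 5 cm optical path
print(ndir_concentration(i_measured=0.8, i_reference=1.0,
                         absorption_coeff=0.002, path_length_cm=5.0))
```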

Optical Fiber Sensors

The optical fiber used in these sensors [6] is coated with a fluorescent dye. On interaction with the vapor, polarity variations within the fluorescent dye change the dye's optical properties, such as a wavelength shift in fluorescence, and intensity and spectrum changes. These optical changes (Figure 7) are used as the response mechanism for the gas.

Figure 7. Optical Fiber Sensor

Optical gas sensors are mostly used to detect concentrations of ammonia (NH3). They have very fast response times (less than 10 microseconds for sampling and analysis), are compact and lightweight, can be multiplexed on a single fiber network, are immune to electromagnetic interference (EMI) and can operate in high-radiation areas.

MOSFET Sensors:

Metal oxide semiconductor field-effect transistor (MOSFET) sensors [4, 7] are based on a change of electrostatic potential. They comprise three layers: a catalytic metal gate (such as palladium, platinum, iridium or rhodium), a silicon oxide insulator, and a silicon semiconductor, as in Figure 8. When polar compounds interact with this metal gate, the current flowing through the sensor is modified.

Figure 8. MOSFET Sensor [7]

As no hydrogen atoms are released, molecules such as ammonia or carbon monoxide cannot be detected with a thick metal layer, but it is possible to detect them when the metal gate is thinned. These MOSFET (or MOS) sensors are very robust but have a relatively low sensitivity.

E-Nose as Fire Detection System

An electronic or artificial nose can sense different types of chemicals and even distinguish between particles, and can be used not only for identification tasks but also for the detection of fire. These devices work on the principle that smoke is made up of different chemical compounds, and they consist of dozens of sensors that sense the different types of chemical compounds found in the air. Some of the materials whose smoke can lead to flames are discussed below.

Smoke

Smoke is a collection of airborne solid and liquid particulates and gases emitted when a material undergoes combustion or pyrolysis [8]. It is commonly an unwanted by-product of fires (including candles, stoves and oil lamps), but may also be used for fumigation (pest control), for communication over long distances (smoke signals to transmit news or to call people to gather in a place), for offensive and defensive capabilities in the military (smoke screens), for cooking, or for smoking (tobacco, marijuana, etc.).

Heptane:

Heptane is a non-polar solvent and a minor component of gasoline [9] with the chemical formula H3C(CH2)5CH3, or C7H16. It is a colorless liquid and a very hazardous chemical with a petrol-like odor. The structure of heptane is shown in Figure 9.

Figure 9. Heptane Structure

It is commercially available as mixed isomers for use in paints and coatings, and is mainly applied in pharmaceutical manufacturing laboratories and in research and development. It has a melting point of -91.0 to -90.1 °C (-131.7 to -130.3 °F; 182.2 to 183.0 K).

Toluene

Toluene is an aromatic hydrocarbon (its IUPAC systematic name is methylbenzene) [10] that is broadly used as a solvent and as an industrial feedstock. It is a water-insoluble clear liquid with the typical smell of paint thinners. In some cases toluene is also used as an inhalant drug for its intoxicating properties; on the other hand, inhaling toluene can cause serious neurological damage.

Figure 10. Toluene Structure

Toluene (Figure 10) is principally used as a precursor to benzene. The second-ranked application is its disproportionation to a mixture of benzene and xylene.

Methanol

Methanol is the simplest alcohol: a light, volatile, colorless, flammable liquid with a distinctive odor similar to, though slightly sweeter than, that of drinking alcohol (ethanol) [11]. It is otherwise referred to as methyl alcohol or wood alcohol, being produced as a by-product of the destructive distillation of wood (wood naphtha or wood spirits), with the formula CH3OH (often abbreviated MeOH); the structure is shown in Figure 11.

Figure 11. Structure of Methanol

It is also used for producing biodiesel via the transesterification reaction. At room temperature it is a polar liquid, and it is used as an antifreeze, a solvent, a fuel, and as a denaturant for ethanol. Methanol is produced naturally in the anaerobic metabolism of many varieties of bacteria, and is normally present in small amounts in the environment.

HDPE Beads

High Density Polyethylene (HDPE) beads [1] are a white thermoplastic base resin; they look like wax and have properties suitable for electric wire insulation.

Figure 12. HDPE Beads

HDPE beads (Figure 12) are used for extrusion of packaging film, rope, woven bags, fishing nets and water pipes; injection moulding of low-end commodities and housings, non-load-bearing components, plastic boxes and turnover boxes; and extrusion blow moulding of containers, hollow products and bottles. The Society of the Plastics Industry resin ID code for HDPE is 2.

Mixed Plastics

Mixed plastic [12], shown in Figure 13, is a term that covers all non-container plastic packaging sourced from household waste; it includes rigid and flexible plastic items of various polymer types and colours. It excludes plastic bottles and non-packaging items.

Figure 13. Mixed Plastics

Dry Ice:

Figure 14 shows dry ice, the solid form of carbon dioxide, which is mainly used as a cooling agent. It sublimes at -78.5 °C (-109.3 °F) at Earth atmospheric pressure. This extreme cold makes the solid dangerous to handle without protection, due to burns caused by freezing (frostbite). It is also referred to as "card ice" [13].

Figure 14. Dry Ice

Fire Detection Mechanism in E-Nose

A novel technique should be employed in the E-Nose to respond immediately whenever fire accidents take place [14]. The main objective of this mechanism is to reduce nuisance alarms. Several experiments were conducted on various materials that cause smoke, observing how the materials burn once ignited. Table 1 indicates the ignition method and fire type (how the material burns) for each material that causes fire.

Every E-Nose contains a sensory system and a pattern recognition system [15], and the sensory system needs to be enhanced so that it can be used as the fire detection system. In the sensory system, one of the above-mentioned gas sensors is selected to detect a particular material's smoke, and a classification algorithm differentiates whether the smoke is from fire or non-fire particles.
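
A minimal sketch of how such a sensory-system pipeline might be wired together is given below; the feature vectors, the nearest-neighbour matching and every numeric value are hypothetical placeholders standing in for whichever classifier algorithm is finally adopted.

```python
# Hypothetical sketch: read gas-sensor features, classify the smoke source,
# and raise the alarm only for materials known to burn (see Table 1).
# Dry ice is the non-fire example from Table 1.

FIRE_MATERIALS = {"Heptane", "Toluene", "Methanol", "HDPE Beads", "Mixed Plastics"}

def classify_smoke(features, reference_signatures):
    """Nearest-neighbour match of a sensor feature vector against stored
    reference signatures (material -> feature vector). Purely illustrative."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference_signatures, key=lambda m: distance(features, reference_signatures[m]))

def handle_sample(features, reference_signatures):
    material = classify_smoke(features, reference_signatures)
    if material in FIRE_MATERIALS:
        return f"FIRE ({material}): sound alarm and enable extinguisher"
    return f"Non-fire source ({material}): suppress nuisance alarm"

# Example with made-up two-element feature vectors (e.g. CO level, obscuration)
signatures = {"Heptane": (40.0, 0.30), "Methanol": (25.0, 0.20), "Dry Ice": (1.0, 0.15)}
print(handle_sample((2.0, 0.16), signatures))
```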

Table 1: List of Particles That Cause Fire

Sl. No   Material         Ignition Method   Fire Type
1        Heptane          Lighter           Flaming
2        Toluene          Lighter           Flaming
3        Methanol         Lighter           Flaming
4        HDPE Beads       Lighter           Smouldering
5        Mixed Plastics   Coil + Pilot      Flaming
6        Dry Ice          N/A               N/A

Figure 15 below shows the internal design of the sensory system to be deployed in the E-Nose to reduce nuisance alarms and to react appropriately to the material that causes a fire.

Figure. 15: Mechanism of Fire Detection System

Based on the type of these chemical compounds, the system can give information to the users about fire and non-fire particles [16]. The system performs the appropriate action by ringing the alarm and enabling the fire extinguisher to limit the spread of the fire, according to the classification into fire and non-fire particles. Table 2 gives a brief description of the distinct fire extinguishers available on the market.

Table 2: Types of Fire Extinguishers

Extinguisher                 Protection Against                                           Used For
CO2 Fire Extinguisher        Class B Fires – Petrol, Oil, Paints, Fats                    Live electrical equipment
Water Fire Extinguisher      Class A Fires – combustible materials like wood and paper   Common household purposes
Powder Fire Extinguisher     Class A, B, C Fires                                          General purpose
Foam Fire Extinguisher       Class A, B Fires – liquids or materials that liquefy        Shopping malls
Wet Chemical Extinguishers   Fires – Cooking Oil or Fats                                  Professional kitchens
Eco Fire Extinguishers       Fires – Water and Foam                                       Environment

Each extinguisher covers specific classes of fire; Table 3 below lists these classes with details of the materials they include.

Table 3: Classes of Fire

Class Name   Contents
Class IA     Di-ethyl Ether, Ethylene Oxide, Some Light Crude Oils
Class IB     Motor and Aviation Gasoline, Toluene, Lacquers, Lacquer Thinner
Class IC     Xylene, Some Paints, Some Solvent-based Cements
Class II     Diesel Fuel, Paint Thinner
Class IIIA   Home Heating Oil
Class IIIB   Cooking Oils, Lubricating Oils, Motor Oils

Conclusion and Future Work

Many fire alarms nowadays are nuisance alarms, i.e., the sensors that detect smoke ring the alarm even when it is not necessary. In order to overcome this problem, this paper has presented a technology that holds the potential to give numerous benefits in fire detection, such as reducing nuisance alarms and increasing the reliability of the sensors.

This mechanism not only reduces false alarms, but also prevents danger by enabling the built-in extinguisher whenever a fire particle is sensed. In future work, we intend to develop a precise classifier algorithm to distinguish smoke from fire and non-fire particles.

Design Limitations for Speakers

Introduction

There are many factors which determine the characteristics of a loudspeaker; to produce a successful design a careful balance of many factors must be achieved. Most of the challenges and considerations of loudspeaker design stem from the inherent limitations of the drivers themselves.

Desirable Characteristics & Real-World Implementation

For a coherent approach to loudspeaker design to be established, one may elucidate the problem by considering two main sets of criteria; the desired characteristics of the finished system and the limitations which impinge on the achievement of these desired characteristics. The key desirable characteristics for the finished system are listed below.

Reproduction of all frequencies across input range
Flat frequency response across input range
Adequate Damping
Good Efficiency
Adequate SPL or perceived loudness
Minimal distortion
Minimal noise

Many of the above considerations are quite obvious. In terms of frequency response it is desired that the response of the system as a whole should be as flat as possible, since to truthfully reproduce a signal all frequencies across the input range should be represented equally. Weems (2000, p.14) notes that “smoothness of response is more important than range”. Naturally noise and distortion are undesirable for accurate signal reproduction. Damping is an important concern; when a signal is no longer applied to a loudspeaker there will be a natural tendency for the cone to continue to move under its own inertia. Thus damping must be employed in order to ensure that the SPL generated by such movement is sufficiently low and relatively inaudible. Rossing (1990, p.31) refers to damping as “loss of energy of a vibrator, usually through friction”. This is a simplification, however: the back EMF generated by the driver and the varying impedance of the crossover/driver network seen by the amplifier also play an important role. As Weems (2000, p.17) rightly says “there are two types of damping, mechanical and electrical”.

Another quite obvious consideration is that the loudspeaker must indeed be loud enough. This is related to the issue of efficiency, since the more inefficient the speaker, the more power will be needed to drive it. The choice of enclosure design plays quite a significant role here, as will be seen shortly.

In terms of limitations, there are several immediate problems posed by the nature of the drivers themselves that must be addressed. Firstly, the sound from the back of the speaker cone is 180 degrees out of phase with the sound from the front. This phase separation means the sounds will cancel each other at lower frequencies, or interfere with each other in a more complex manner at high frequencies; clearly neither is desirable.

In some senses it would be ideal to mount the drivers in a wall with a large room behind, the so-called “infinite baffle”, having the sound from the rear of the cone dissipate in a large separate space, being thus unable to interfere with the sound produced by the front. In reality this is impractical; however some provision must be made to isolate sound from the rear of the cone. To this end, some sort of enclosure must be made for the drivers, yet this presents a new set of considerations.

Without an enclosure, a loudspeaker is very inefficient when the sound wavelengths to be produced are longer than the speaker diameter. This results in an inadequate bass response; for an 8 inch speaker this equates to anything below around 1700Hz[1]. So the infinite baffle is terribly inefficient in terms of the SPL produced at lower frequencies. Furthermore, the free cone resonance of the speaker works against the flat frequency response that is desired; input frequencies close to the resonant frequency will be represented too forcefully.

Another real-world complication is the fact that for high-fidelity applications, no one loudspeaker will be able to handle the entire range of input frequencies; “the requirements for low frequency sound are the opposite to those for high frequencies” (Weems, 2000, p.13). Higher frequencies require less power to be reproduced, but the driver must respond more quickly, whereas low frequencies require a larger driver and hence greater power to be effectively realised.

In view of the above, multiple drivers must be used, with each producing a certain frequency range of the input signal; at the very least a woofer and tweeter are required. In order to deliver only the appropriate frequencies to each driver, a device known as a crossover must be implemented. This can take the form of passive filter circuits within the speaker itself, or active circuitry that filters the signal prior to amplification. In the latter case, multiple amplifiers are needed, making this a more costly approach. The fundamentals of crossover design will be dealt with in a separate document and are hence not dealt with in detail here.

Enclosure Design

Faced with the reality that an enclosure is in almost all cases a practical necessity, perhaps the most important aspect of speaker design is the design of the enclosure itself. The first step in producing a successful design is to decide upon the drivers to be used and use this as a basis for choosing a cabinet design, or to decide upon the desired cabinet type first and allow this to inform the choice of driver. In general, most of the design work with regard to the cabinet is focused firmly toward the woofer, since the enclosure design is most critical with regard to midrange/bass performance. In typical 2-way designs, the tweeter is mounted in the same box as the woofer, but it is the latter which largely defines the cabinet dimensions.

In the past the design of enclosures was often something of a hit-or-miss affair, however the research of Thiele (1971) and Small (1973) has led to a much more organised design process. Most transducers today are accompanied by a comprehensive datasheet of Thiele-Small parameters, which allow most of the guess work to be taken out of enclosure design.

Ignoring more exotic enclosure designs, the first question is whether the enclosure should be ported or sealed (it should be noted that in reality even “sealed” enclosures are very slightly open or “leaky” in order to allow the internal pressure to equalise with the surroundings). If a driver has already been chosen, this can be determined from the Efficiency Bandwidth Product, which is defined as:

EBP = Fs / Qes    (1)

Where Fs is the free air resonance of the driver and Qes the electrical Q or damping. In general, an EBP of 50 or less indicates a sealed box, whilst an EBP above 90 suggests a ported enclosure (Dickason, 2000). In between, the choice of enclosure lies more or less with the designer and a driver that falls in the middle should perform acceptably in either closed or ported situations.
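
As a small illustration of this rule of thumb, the Python sketch below computes the EBP and returns a suggestion; the driver figures in the example are invented.

```python
def suggest_enclosure(fs_hz, qes):
    """Suggest an enclosure type from the Efficiency Bandwidth Product,
    EBP = Fs / Qes (rule of thumb: <= 50 sealed, > 90 ported)."""
    ebp = fs_hz / qes
    if ebp <= 50:
        return ebp, "sealed enclosure suggested"
    if ebp > 90:
        return ebp, "ported enclosure suggested"
    return ebp, "either enclosure type should work acceptably"

# Example with made-up driver parameters: Fs = 30 Hz, Qes = 0.4 -> EBP = 75
print(suggest_enclosure(30.0, 0.4))
```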

So, what are the advantages and disadvantages of sealed vs ported enclosures? A sealed enclosure is very simple to build, whilst a ported enclosure requires some degree of tuning to ensure the port is matched correctly to the driver – in the ported or “bass reflex” design a tube extends into the cabinet allowing some air to escape from inside; if correctly tuned the air that leaves the port is delayed in phase by 180 degrees, hence reinforcing the sound from the front of the cone.

With a sealed enclosure the air inside acts as an approximately linear spring for the transducer cone and assuming the driver has a low Fs, a healthy bass extension with a gentle roll-off of -12dB per octave can be expected. The disadvantages are several; the enclosure may need to be quite large to achieve an acceptable Qtc (the damping value for a sealed system) and efficiency is poor. Further, with a sealed enclosure the driver reaches maximum excursion at resonance, which translates to greater distortion. Therefore a driver for use in a sealed enclosure requires quite a large linear throw to perform well. By contrast, in correctly tuned ported enclosures the driver is maximally damped at resonance, so a large linear throw is not critical and distortion is lower as a result. The basic methods of sealed and ported cabinet design shall now be explained.

Sealed Enclosure Design

To design a sealed enclosure the basic methodology is quite straightforward; the essential challenge is simply to find the optimum volume for the cabinet for the chosen driver. First one must decide on the value of the damping constant Qtc; the optimum value is 0.707 since it gives the lowest -3db break frequency and hence the best potential for bass extension, as well as good transient response. If the enclosure size is too large at this optimum value then Qtc may be increased, resulting in a trade-off between bass performance, transient response and enclosure volume. However, the more Qtc is increased, the more boomy and muddy the sound will become.

Depending on the application, the enclosure size may not be important; in this case an optimum Qtc is encouraged. Once Qtc is known, the constant α may be calculated using the formula below, where Qts is the total Q factor of the driver at resonance (this may be obtained from the manufacturer's data sheet).

α = [Qtc/Qts]² – 1    (2)

Having calculated α, the correct enclosure volume Vb is trivial to determine using the relationship below. Note that Vas is the equivalent volume of air that has the same acoustic compliance as the driver; again this may be obtained from the datasheet or experimentally. Note from equation (2) that a lower Qts will result in a higher α, and hence a smaller enclosure. Thus for two transducers with equivalent acoustic compliance, the one with the lower Qts will result in a smaller enclosure.

Vb = Vas / α    (3)

Assuming the required box volume is acceptable, one may then also calculate the resonant frequency of the system (fs is the free-air resonant frequency of the driver):

fc = fs √(α + 1)    (4)

Once fc is known the -3db break frequency may also be found:

f3 = fc √( [ (1/Qtc² – 2) + √((1/Qtc² – 2)² + 4) ] / 2 )    (5)

Recall that below this frequency the roll-off is -12dB per octave and one can gain a fairly good impression of the bass performance to be expected. Naturally it is desirable for f3 to be low for maximum extension into the bass area, hence a low fs is a characteristic one should look for when choosing a driver for sealed enclosure use. If it is felt that the break frequency is too high, then a different driver must be selected for the sealed implementation.
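
The whole sealed-enclosure procedure collapses into a few lines of code; the sketch below assumes the standard closed-box relations summarised in equations (2) to (5), and the driver parameters in the example are invented for illustration.

```python
import math

def sealed_box_design(qtc, qts, vas_litres, fs_hz):
    """Closed-box design from Thiele-Small parameters:
    alpha = (Qtc/Qts)^2 - 1, Vb = Vas/alpha, fc = fs*sqrt(alpha + 1),
    f3 = fc*sqrt((A + sqrt(A^2 + 4)) / 2) where A = 1/Qtc^2 - 2."""
    alpha = (qtc / qts) ** 2 - 1.0
    vb = vas_litres / alpha
    fc = fs_hz * math.sqrt(alpha + 1.0)
    a = 1.0 / qtc ** 2 - 2.0
    f3 = fc * math.sqrt((a + math.sqrt(a ** 2 + 4.0)) / 2.0)
    return {"alpha": round(alpha, 2), "Vb_litres": round(vb, 1),
            "fc_Hz": round(fc, 1), "f3_Hz": round(f3, 1)}

# Example with made-up driver data: Qts = 0.35, Vas = 60 l, fs = 28 Hz, target Qtc = 0.707
print(sealed_box_design(0.707, 0.35, 60.0, 28.0))
```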

Ported Enclosure Design

For ported cabinet design, the equations are more complex and it is generally not practical to attempt to design such an enclosure by hand. Instead there are a number of free and commercial software calculators available that simplify the process. One good freeware calculator is AJ Vented Designer[2]. Using such a program enables the designer to quickly ascertain what size enclosure and port is required for a given driver and whether this is feasible – for certain combinations the port may not physically fit within the enclosure for example. In addition, the program also plots the theoretical frequency response of the design, which simplifies matters greatly.

Acoustic Damping and Avoiding Resonance

In addition to the type of enclosure and the calculation of the required volume, diameter and size of ports (if ported), there are several other design considerations. Firstly, standing waves within the enclosure must be minimised. Thus enclosures are often stuffed with fibreglass, long-fibre wool or polyurethane foam.

In addition to standing waves and the resonance of the enclosure, one must also bear in mind the possibility of dimensional resonances with sealed designs. To avoid this it is prudent to ensure that length, width and height of the enclosure are all different and to not centrally mount the drivers.

The choice of cabinet material and thickness are also factors that require careful consideration; in general wood is the most appropriate material and a thicker structure is likely to be more rugged and be less susceptible to undesirable vibration. The structure should also be isolated from the floor since vibrations passed to a floor (especially a wooden floor) can cause the floor to vibrate which will muddy or colour the sound. Spikes or stands are commonly used to achieve this.

Conclusion

There are many factors that affect speaker design but perhaps the most important is that of the enclosure itself. More exotic enclosures such as band-pass and transmission line configurations are beyond the scope of this document, however it should be noted that there are many different approaches beyond the common sealed or ported methodologies. As with any engineering problem, successful speaker design requires a careful balance of many often opposing factors to be reached.

Sources

Borwick, John. (2001). Loudspeaker and Headphone Handbook, Focal Press.

Dickason, V. (1995). The Loudspeaker Design Cookbook, Audio Amateur Publications.

Rosenthal, M. (1979). How to select and use loudspeakers and enclosures, SAMS.

Rossing, T. (1990). The Science of Sound, Addison-Wesley.

Weems, D. (2000). Great sound stereo speaker manual, McGraw-Hill.


Dangers of the Internet | Essay

Abstract

This essay presents a critical debate on whether the Internet is as dangerous as the physical world. First, the unique dangers posed by the Internet are outlined. This is followed by an examination of some of the major threats to safety that are present in the physical world but not in the virtual world. In the conclusion, the report also looks into how the virtual world might shape in the future.

Is the World of Networked Computing as Dangerous as the Physical World?
Introduction

In cyberspace, no one hears your screams.

(Merkow and Breithaupt, 2000)

Modern society depends on the technology of networked computing more than ever. Whether it is the Internet, the World Wide Web (WWW), or other less well-known networks, people around the world depend on it for multifarious reasons, from work and entertainment to essentials of life such as life support in medicine. Networked computing offers easy access and a large degree of anonymity, and while this presents us with unique opportunities, it also presents us with unique dangers. In light of the increasing use of and even dependence on networked computing, it is pertinent to examine the social, physical and ethical dangers presented by it. This essay critically debates the issue of whether the world of networked computing is as dangerous as the physical world.

The Dangers on the Internet
Preying by Paedophiles

One of the most disturbing crimes on the Internet today is ‘grooming’. Child grooming is an act where a paedophile will befriend a child, or form an intimate relationship, in order to lower a child’s sexual inhibitions. Grooming will typically initiate in chat rooms designed for children and teenagers, and sometimes through emails, where an adult will pose as a teenager, but will often move into using instant messaging services so that the paedophile can talk the victim into sending images and even using a webcam. Research conducted by the Cyberspace Research Unit at the University of Central Lancashire states “another of the frequent topics concerned on-line grooming and in particular, ways in which to avoid detection” (O’Connell, 2003). While this statement gives concern that paedophiles may be able to escape without notice, the report goes on to say, “Throughout each of the stages there are clear and easily identifiable differences in the patterns of behaviour of the individuals.” The stages that are talked about here are known as the ‘Friendship forming stage’ where the paedophile will just spend time getting to know the child, the ‘Relationship forming stage’ where the paedophile will start to ask questions about things such as school and home life, the ‘Risk assessment stage’ where the paedophile will ask the child questions like who else uses the computer, the ‘Exclusivity stage’ where the victim is encouraged to trust the paedophile, and the ‘Sexual stage’ where the paedophile will ask the child about previous intimate experiences.

Bullying and Other Negative Electronic Relationships

The virtual world is home to some serious negative and destructive electronic relationships. Cyber bullying, one of the more common ones, mainly targets school pupils in addition to actual physical and verbal bullying. Carnell (2007) points to evidence that many pupils are being targeted in their own homes, by phone texts, silent calls, on instant messenger, and by abusive websites and forums, some set up with the specific intention of causing humiliation and embarrassment. This shows the severity of cyber bullying in society today.

Griffiths, M.D. (1998) offers the following explanation. The Internet is easy to access from home or work. It is becoming quite affordable and has always offered anonymity. For some people it offers an emotional and mental escape from real life, and this is especially true for individuals who are shy or feel trapped in unhappy relationships. It is also true for individuals who work long hours and have little opportunity for social life. Electronic (or internet) relationships started off when chatrooms were introduced and have really boomed since the creation of Instant Messaging. A person can enter a chatroom, use an alias, and talk to other members without revealing their true identity. However, this raises an important question. If you can do all that without revealing your true identity, can you really trust the person you are talking to? Can you be certain that they are being honest with you? Some say that it’s not real and therefore they don’t really worry about it, while others suggest that Internet relationships have a way of tapping into deep feelings and it’s easier to get hurt. Katz and Rice (2002, p286) suggest, “students are meeting and “dating” on the internet…they even have monogamous relationships this way, telling others who might ask that they will not go out with them because they are “dating” someone.” Various studies suggest that it is more common for young people to meet and date people using the Internet and that it is becoming more widely accepted as a social meeting point. This, however, causes concerns about why people are choosing to use the Internet for this reason. Many people feel more comfortable talking about feelings over instant messaging, and this is especially true of shy people or people that feel trapped in an offline relationship.

Addictions

The Internet also has the notoriety of helping to create unhealthy addictions. The majority of UK bookmakers now run online websites in which people can make exactly the same bets they would in the betting shop, but from the comfort of their own home. The rate at which the online gambling industry is commercialised today is astronomical. From 2005 to 2006 the sector became the fifth largest advertiser online, jumping to 2.5 billion from 911 million ads in the last year (Schepp, 2002). And this is without the likes of TV ads, magazine ads, and adverts on the radio. This means that the majority of people in society now see online gambling as more acceptable than in recent years. Besides the increased risk of fraud on the Internet, online gambling also poses the serious problem of an easier way to get addicted. This is because it is relatively easier to sit in front of a computer and gamble than to walk to the nearest betting shop in the cold winter to make a bet. Gambling is, however, just one of the addictions people are vulnerable to online. Mitchell (2000) uses the term ‘Internet addiction’ to indicate the spectrum of addictions that one is susceptible to on the Internet. He states that although there is some disagreement about whether Internet addiction is a real diagnosis, compulsive Internet use has psychological dangers, and reports that such behaviour can result in users having withdrawal symptoms, depression, social phobia, impulse control disorder, attention deficit disorder, etc.

Viruses and Hacking

In 2000, the number of worldwide email mailboxes was put at 505 million, and this was expected to increase to 1.2 billion in 2005 (Interactive Data Corporation, 2001). Schofield (2001) points out that more than 100 million people use instant messaging (IM) programs on the net, and a growing number among them also use it to transfer files. This number is obviously growing, but this example shows that online communication is becoming a much more widely used method of communication. Online communication such as email and instant messaging does not come without problems. Hindocha (2003) states that instant messengers can transfer viruses and other malicious software as they provide the ability to transfer text as well as files. Viruses and malicious software can be transferred to the recipient’s computer without the knowledge of the user. This makes them very dangerous. As the use of online communications becomes more widespread, it is seen as an opportunity for people to gain access to the files on a computer. Hindocha (2003) gives the example of hackers using instant messaging to gain unauthorised access to computers, bypassing desktop and perimeter firewall implementations. This is a real concern for most users, especially as instant messaging and email client software are ‘trusted’ software; for a home user, the risks concern personal information stored on the computer, such as internet banking security details and identifying information that could be used in identity theft. However, such online communication software is also often used in businesses, and in this case extensive records of financial information are vulnerable. Hindocha (2003) goes on to say about instant messaging systems, “finding victims doesn’t require scanning unknown IP addresses, but rather simply selecting from an updated directory of buddy lists.” This throws up serious concerns.

Theft and Fraud

Electronic commerce faces the major threats of theft and fraud. Online theft commonly occurs in the form of identity theft, and less commonly, outright theft, for example by unauthorised access to bank accounts. Camp (2000) points out that while it may seem a big leap to exchange a bill of paper money for machine readable data streams, the value bound to the paper abstraction of wealth is simply a reflection of trust in the method of abstraction that is widely shared and built over centuries. Money on the Internet is simply a different abstraction of wealth, and has similar issues with trust and risk as traditional money, together with the additional dangers posed by the virtual nature of the environment. Because all communication on the Internet is vulnerable to unauthorised access, it is relatively easy to commit fraud. Where legislation is not a deterrent, technology offers almost none. Credit card fraud and theft, electronic banking theft, etc. are some of the more common crimes committed online involving money.

What Makes It Safer Than The Physical World?
Safe from Immediate Physical Harm

Perhaps the only upper hand the virtual world has is that its inhabitants are immune to the immediate threat of physical violence; one cannot be randomly mugged online. However, vulnerable people are still susceptible to physical violence and harm, perhaps more to self-harm; there are many websites that promote anorexia, suicide and self-harm, and this can leave a big impact on impressionable minds.

Presence of Strong Safeguards

The main safeguards on the Internet are policing with the accompanying legislation, and technology itself. There are organisations in place to deal with the abusive websites and forums, appropriate legislation to prevent child pornography, paedophilia, theft, fraud and a variety of other online crime. There is also a vast array of technology that can help keep adults and children safe online, from parental control software that can restrict the websites viewed by children, to anti-virus and cryptography software and firewalls that help prevent hacking and viruses and keep data safe.

Conclusion
Staying safe online

It is commonly accepted that the Internet provides us with opportunities that have been hitherto unavailable. Many sing the praises of this so-called information superhighway; however, it is prudent not to be lulled into a false sense of security by the promising opportunities. People should be made aware of the dangers lurking on the Internet, and be given the education and means to take steps to stay safe online. Just as children are taught not to speak to strangers in the real world, they should be taught not to speak to strangers online as well. Education in schools should include education about how to stay safe online; just as children are taught that eating fruit and vegetables is healthy, they should also be taught that excessive online activities can lead to addiction, with various negative consequences. This is because the virtual world is not very different from the physical world in terms of people waiting to take advantage of the weak and vulnerable, and also with respect to dangers such as addiction.

The future of the virtual world

In many ways, the virtual world is a reflection of the real world. After all, the people who inhabit the real world are the same people that also inhabit the virtual world. It follows, therefore, that what people do and want to do in the real world, they would try to do in the virtual world too. Where the physical constraints of the virtual world restrict them, they would try to find ways to get around it. The rapid development of technology also gives rise to new means by which people can do things, beneficial or harmful. The development of virtual reality may mean that one day, people in the virtual world may not be immune to immediate physical harm either. However, the technology by itself is neither good nor bad; it is the way the technology is put to use that creates positive and negative consequences for human beings. In the end, it can be said that the virtual world is perhaps just as dangerous as the physical world.

References

Camp, L. J. (2000) Trust and Risk in Internet Commerce. Cambridge, Mass.: MIT Press.

Carnell, L. (2007) Pupils Internet Safety Online. Bullying Online [online]. Available at: http://www.bullying.co.uk/pupils/internet_safety.php (last accessed Aug 2007)

Griffiths, M.D. (2002) The Social Impact of Internet Gambling. Social Science Computer Review, Vol. 20, No. 3, 312-320. SAGE Publications.

Griffiths, M. (1998) Does Internet and computer “addiction” exist? Some case study evidence. International Conference: 25-27 March 1998, Bristol, UK. IRISS ’98: Conference Papers. (Available online at http://www.intute.ac.uk/socialsciences/archive/iriss/papers/paper47.htm – last accessed Aug 2007)

Griffiths, M.D. (2000) Cyber Affairs. Psychology Review, 7, p28.

Hindocha, N. (2003) Threats to Instant Messaging. Symantec Security Response, p3.

Interactive Data Corporation (2001) Email mailboxes to increase to 1.2 billion worldwide by 2005. CNN.com. (Available online at http://archives.cnn.com/2001/TECH/internet/09/19/email.usage.idg/ – last accessed Aug 2007)

Katz, J.E. and Rice, R.E. (2002) Social Consequences of Internet Use. Massachusetts Institute of Technology. p286.

Merkow, M. S. and Breithaupt, J. (2000) The Complete Guide to Internet Security. New York: AMACOM Books.

Mitchell, P. (2000) Internet addiction: genuine diagnosis or not? The Lancet, Volume 355, Issue 9204, p. 632.

O’Connell, R. (n.d.) A Typology of Child Cyber Sexploitation and Online Grooming Practices. Cyberspace Research Unit, UCLAN, p7-9.

Schepp, D. (2002) Internet Gambling Hots Up. BBC Online. (Available online at http://news.bbc.co.uk/2/hi/business/1834545.stm – last accessed Aug 2007)

Smith, J. and Machin, A.M. (2004) Youth Culture and New Technologies. New Media Technologies, QUT.

Crossover Design for Speakers

Crossover Design

In terms of crossover design, there are two distinct options; passive or active crossovers. Passive crossovers are the most common implementation, since only one amplifier is required. In this case, filters comprising passive components (inductors, capacitors and resistors) are used to ensure that the correct frequency range is supplied to each driver. Low-pass, high-pass and band-pass filters are commonly used and need to be matched to ensure that the frequency roll-offs complement each other, such that in the crossover zone(s) the combined acoustic output of the drivers maintains a flat frequency response.

In terms of these passive filters, it is the order of the filters used that is the primary consideration. A first order filter has a roll-off of -6dB per Octave and a Butterworth characteristic. First order filters are undesirable for two reasons; a +3dB peak is introduced at the centre of the crossover band and the crossover bandwidth is large due to the gentle roll-off, which means the drivers need to be capable of handling a greater frequency range. However, first order filters require the least components, incur less power loss as a result and do not introduce a phase change in the output.

Second order filters are the most commonly used type in passive crossovers, since they are relatively simple but solve the problems associated with first order filters. The roll-off is -12dB per octave and the filters may be designed with a Linkwitz-Riley characteristic which maintains a flat frequency response across the crossover band, unlike the combination of Butterworth filters.
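
For illustration, a second-order Linkwitz-Riley section is simple enough to size by hand. The sketch below computes textbook component values assuming the driver behaves as a purely resistive nominal load (a simplification, since real drivers present a reactive impedance); the crossover frequency and impedance in the example are chosen arbitrarily.

```python
import math

def lr2_components(crossover_hz, nominal_ohms):
    """Textbook 2nd-order Linkwitz-Riley values for a resistive load:
    the low-pass uses a series L = R/(pi*f) and shunt C = 1/(4*pi*f*R);
    the high-pass section uses the same values with the positions of
    L and C swapped (series C, shunt L)."""
    l_henries = nominal_ohms / (math.pi * crossover_hz)
    c_farads = 1.0 / (4.0 * math.pi * crossover_hz * nominal_ohms)
    return {"L_mH": round(l_henries * 1e3, 2), "C_uF": round(c_farads * 1e6, 2)}

# Example: 2.5 kHz crossover into a nominal 8 ohm driver -> about 1.02 mH and 3.98 uF
print(lr2_components(2500.0, 8.0))
```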

Third order filters offer a roll-off of -18dB per octave, however there is a problem of phase separation; in a two-way configuration there is a phase shift of 270 degrees which “can result in lobing and tilting of the coverage pattern” (DellaSala, G. 2004). Some designs such as the D’Appolito configuration[1], which uses three drivers, actually make use of this phase separation in order to minimise lobing, however the D’Appolito configuration is notoriously complex and difficult to implement well without precise driver measurements.

If a high-order crossover is desired, fourth order filters are perhaps the best choice. Although they are more complex in terms of design and require more components, the advantages are a small crossover bandwidth (roll-off is -24dB per octave) and a 360 degree phase shift; hence no phase correction is required. Passive crossovers beyond fourth order are generally not considered. Borwick (2001, p.267) notes these “are seldom used in passive crossover designs because of their complexity, cost and insertion losses”.

The other approach to crossover design is the active crossover. In this case active filters (normally based around op-amps) are used to divide the input signal into the required frequency bands prior to amplification; the crossover has multiple outputs and a separate power amplifier is needed for each frequency band. Some audiophiles complain that active crossovers (which normally employ high-order active filters) are not a good choice, due to the poor transient response of high order filters. However as Elliot (2004) notes, “the additional control that the amp has over the driver’s behaviour improves the transient performance, and especially so at (or near) the crossover frequency – the most critical frequency point(s) in the design of any loudspeaker”.

Apart from the increased complexity and multiple power amplifier requirement, active crossovers are far superior to their passive counterparts in almost every way, although some purists may disagree. Good quality op-amps are cheap, as are the required resistors and capacitors (since these do not need to handle much power). The active solution means frequency response is no longer defined by the quite complicated combined resistive, capacitive and inductive load of the passive crossover and drivers. Thus the frequency response of the crossover is independent of dynamic changes in the load. Furthermore, the active crossover makes it easy to tune the crossover dynamically; with most commercially available active crossovers one can simply dial in the required frequency bands.

Efficiency is improved with active crossovers, since no power is lost by the amplifier in driving passive inductors or resistors. The amplifier also has the best possible control over transient response, since there is nothing between it and the driver other than cable. Thus the amplifier can respond directly and “presents the maximum damping factor at all times, regardless of frequency” (Elliot R. 2004).

In view of the above one may then wonder why passive crossovers continue to remain so popular, since it seems far more logical to implement frequency division before amplifying the signal. Ease of installation is perhaps the main factor. Almost all commonly available hi-fi systems use speakers with passive crossovers. For the consumer this makes things easy; the speakers are simply connected to the amplifier and installation is complete.

In contrast, turnkey active solutions for the average consumer are not forthcoming, although rack-mounted “professional” active crossovers can be obtained for quite reasonable prices (around £150 for a 4th order 2 way Linkwitz-Riley design)[2]. However, these require a fair amount of audio engineering expertise to set up correctly and the typical home listener simply does not possess this knowledge.

For the high-budget client seeking the best audio reproduction, active crossovers are certainly the best option; the technical advantages have been seen to be numerous. This is offset by the fact that the system will be far more complicated to correctly install, but it is assumed in this case that complexity of installation is of little concern to the high-budget client who is unlikely to handle the installation themselves in any case.

For the low-budget client, the best solution is the passive crossover. It is a simple option, only requires one amplifier and yet produces acceptable sound quality. It is far from the best solution, but adequate if a competitive price point is desired.

In conclusion, all but a few dyed-in-the-wool purists will agree that the active crossover is a superior solution in terms of quality and control. What it lacks in simplicity is outweighed by a far superior level of control over frequency response and the drivers themselves. However, due to issues of complexity one can expect that the traditional passive crossover shall continue to lead a healthy existence in the majority of loudspeaker designs.

Sources

Borwick, John. (2001). Loudspeaker and Headphone Handbook, Focal Press.

DellaSala, G. (2004). Filter & Crossover Types for Loudspeakers, Audioholics Magazine.

Dickason, V. (1995). The Loudspeaker Design Cookbook, Audio Amateur Publications.

Elliot R. (2004). Active vs Passive Crossovers, Elliot Sound Products.

Rossing, T. (1990). The Science of Sound, Addison-Wesley.


Critical Success Factors of Cable TV (Pay-TV) Against Other Competitors in Hong Kong

Abstract

In this proposal, we hope to learn the real business strategies through the findings of the research, and to give some suggestions for these companies to increase their sales and profit.

The flow of the research proposal is as follows.

First, the background of the Pay-TV companies and their industry is introduced, to give a basic knowledge of this industry in the past and at present.

Second, the objectives that help achieve the aim of the proposal are listed.

Third, a critical review of relevant literature from books, articles, the internet and magazines is presented, discussing how business theory applies in the real business world; in this case, we can see which strategies the company is using and what its success factors are. Most importantly, the results of the research help us to understand clearly the marketing strategies used in a real situation.

Additionally, the research method used is described, including the data collection method, the sampling method and the sample size.

Using a questionnaire, 100 to 150 people will be asked, in order to find out the competitive advantage of Cable TV. The relationship between certain factors (the quality of TV programmes, the price, customer support service) and people's attitudes towards each Pay-TV provider will be established.

Aim

This work aims to point out the attractiveness and competitiveness of Pay-TV and, through the research, to find out the companies' success factors (competitive advantage over the main competitor) and to treat the findings as lessons in business strategy. Besides, it aims to provide some suggestions and evidence on how to gain more potential customers, in order to increase the sales and profit of these companies.

Background

Some may not understand why Pay-TV has been able to exist in Hong Kong for a long time and maintain a stable market share. In fact, the majority choose to watch free-to-air TV such as TVB and ATV. However, free-to-air TV programmes cannot satisfy some people, and Pay-TV focuses on this market: providers produce special TV programmes and buy the rights to overseas TV programmes that free-to-air TV does not provide. Another selling point of Pay-TV is live sports broadcasts such as football and the NBA.

In recent years, fiercer competition has been caused by more and more pay-TV service companies entering this market. However, Cable TV, which was the first company to successfully obtain a Subscription Television Broadcasting Licence from the Government, has maintained a stable market share over these years. Its main competitor is Now TV, a subsidiary of PCCW. (Review of Pay TV Market)

The following is the background of Cable Pay-TV and Now TV.

I-Cable

The Pay-TV service is operated by Hong Kong Cable Television Limited, a wholly-owned subsidiary of the Group. The Group successfully obtained a Subscription Television Broadcasting Licence from the Government in 1993, and the Pay-TV service launched in the same year set the trend of multi-channel pay-television service for Hong Kong.

Hong Kong Cable currently produces over 10,000 hours of programming a year, making it the largest television programme producer in Hong Kong. Throughout the years, it has successfully established a leading position in news, movies and sports programming and will continue to introduce innovative local and international programmes for customers. (http://www.i-cable.com)

Now TV

Now TV is a 24-hour pay-TV service provider in Hong Kong. It is transmitted through the company's Netvigator broadband network via an IPTV service, with a total of 175 channels, of which 156 are Now Broadband TV channels (including eight high-definition channels and 15 music channels), 19 are pure TVB Pay Vision channels, and there are another 17 categories and a VOD service. Launched in September 2003, the service is operated by the leading Hong Kong fixed-line telecom operator PCCW, through its subsidiary, PCCW VOD Limited. As of June 2009, the service had around 990,000 users, of whom over 700,000 were paying customers.

However, I-Cable looks likely to successfully maintain its market share against the challenge of Now TV. In order to understand clearly the success factors of I-Cable (business strategies, promotion, price, programme quality, support service), we need to ask a number of questions. (http://www.now-tv.com)

Cable TV vs Now TV
Why do people choose Pay-TV?
Through which channels do people come into contact with Pay-TV?
Which one is more famous?
What is the relationship between these factors and people's attitudes towards watching Cable TV or Now TV?
How are people's needs changing?
Can Cable TV or Now TV meet these changing needs?

The answers will be found in the following sections.

Objective and research questions

Below are the main points of the objectives of this research:

Study the general demographics of the target customers.
Study the TV-watching behaviour of customers.
Determine customers' preferences for various kinds of TV programmes.
Identify the reasons for choosing Pay-TV.
Evaluate which attributes of Pay-TV are important to customers.
Identify the most effective promotion channel.
Examine the channels through which people get Pay-TV information.
Examine the reasons why they buy Pay-TV services through those channels.
Examine the impact of price and live sports broadcasts on customers' attitudes towards Pay-TV.
Examine the support service of Pay-TV.

We will analyse marketing theories such as the 7Ps of marketing strategy based on the results of the research. The answers to the above questions are based on the relevant literature and the sampling interviews. The details are as follows.

Critical review of relevant literature

The critical review has six parts: the first four present findings from the relevant literature, and the last two introduce the providers' marketing strategies and review this material.

1. The main differences between free-to-air TV and Pay-TV

According to the literature, free-to-air TV offers mainly entertainment programmes, most of which are produced in-house, and its programming caters to popular tastes.

Pay-TV, by contrast, offers over 100 overseas TV channels and live sports broadcasts, and some of its programming is informational, providing professional knowledge and content for special interests. (Kotler, P. and G. Armstrong (2008))

In recent years, more and more people have been willing to pay to watch Pay-TV. The reasons are easy to understand: the two local free-to-air broadcasters cannot satisfy everyone, and the viewing needs of young people aged around 20 are changing. In the past, people treated TV as their main daily entertainment; today, young people have many other forms of entertainment and watch TV mainly to follow sports competitions and obtain information. This means Pay-TV still has a great potential market in the coming years.

2. The current competition in the Hong Kong Pay-TV market
3. The promotion strategies of the two Pay-TV companies

The promotion strategies of the two providers are similar: both target potential customers who have special interests (cooking, religion, drama) or who want to watch non-local TV programmes (Discovery Channel, CCTV). Their selling points are also similar, namely the number of TV channels and live sports broadcasts.

Now TV currently puts more emphasis on promotion to attract potential customers, while Cable TV simply maintains the quality of its existing service. In fact, people are used to watching Cable TV because of its longer history and better-known programme quality; in marketing terms, Cable TV is like a cash cow.

4. Famous TV programmes

Cable TV has excellent news programming and live broadcasts of the English league, which is one reason it can maintain its market share. The English league may no longer be the highest-level football league (many people agree the Spanish league has become the strongest in recent years, and its live broadcasts will be offered by Now TV for the next three years), but most Hong Kong people still prefer watching the English league.

Additionally, Cable TV also holds the live broadcast rights for the Champions League and the 2010 World Cup, which gives it a great competitive advantage over Now TV this year and in the coming three years. (The latest situation of changing customer needs)

5. The relationships between factors and the attitude of young people towards I-Cable/Now-TV

Several factors influence how young people choose between the two Pay-TV providers.

a) Price (extended)

Cable TV adopts non-selective pricing (service packaging): customers must buy a bundle of channels at the same time. Now TV offers selective pricing: customers pay a basic fee and are then charged per additional channel, although Now TV also offers a price for a package of all services.

According to one news report, a large number of people were dissatisfied when Cable TV increased the basic charge from $239 to $259 and added an extra charge for live football broadcasts. (http://hk.news.yahoo.com/article/091124/4/fbx5.html)

b) Promotion
c) Live sports broadcasts (extended)

This is one of the important reasons why Cable TV has succeeded and has been able to raise prices even in a poor economy. Cable TV spends heavily to secure the rights to live football broadcasts and raises prices to cover the cost; that is its strategy. However, it may be ignoring the changing needs of young viewers.

In recent years the English league has been successful in Hong Kong for reasons such as kick-off times and star players. However, the Spanish league is willing to start matches earlier next year, and many stars have transferred from the English league to the Spanish league, which may lead people to prefer watching the Spanish league. (http://hk.news.yahoo.com/article/091120/4/fa0e.html)

d) Technical support and customer service (extended)

Cable TV developed its support system early but has not improved it since. Now TV, by contrast, keeps improving its technical support system, and it is likely to have a better-developed system than Cable TV within the next few years.

6. Market strategies (extended)

Pay-TV adopts concentrated marketing (Kotler, P. and G. Armstrong 2008), in which the organisation concentrates its marketing effort on one particular segment and develops a product that caters for the needs of that particular group.

The detailed marketing theory and suggestions will be described after the sampling interviews.

Research methods/ Methodologies

Category: Options

The degree to which the research question has been crystallized: exploratory study; formal study

The method of data collection: monitoring; communication study

The power of the researcher to produce effects in the variables under study: ex post facto

The purpose of the study: reporting; descriptive; causal-explanatory

The time dimension: cross-sectional

The topical scope (breadth and depth) of the study: statistical

The research environment: field setting

The participants' perceptional awareness of the research activity: actual routine

The main purpose of our study is to compare Cable TV and Now TV. We need to collect primary and secondary data to analyse the success factors of each.

First, we collect secondary data from the internet to learn the background, history and annual reports of each Pay-TV company, along with other useful information from websites, articles and the relevant literature.

Second, we collect primary data through personal interviews using a questionnaire designed around the 7Ps, so that the information gathered is directly related to our objectives.

Sampling will be carried out across different regions of Hong Kong (Hong Kong Island, Kowloon and the New Territories), with an even split of male and female respondents to avoid an unbalanced sample. The sample size will be 100 to 150, with ages ranging from about 18 to 65. The survey method is the face-to-face interview, and respondents will receive a small gift (such as a coupon) afterwards.

From the results we can understand each provider's competitive advantages and make recommendations on how to maintain market share and which services need to be improved. However, since secondary data are limited, the information will come mainly from primary data.

Project Plan

Refer to page 15 or the Excel file [project plan].

The Convergence of Business and Technology

While technological convergence is no longer a new idea, the fascination with the subject lies with the capabilities and applications of both hybrid and brand-new technological platforms, and the ways previously stand-alone industries have been reconfigured and thereby mobilised to provide enhanced service delivery. Such convergence pertains to the “digitisation of communications and the ways discrete media formats have become accessible to other media forms”, which “have been further factors in this process” (Saltzis, 2007). In technical terms, Saltzis (2007) reminds us that “the new technologies convergence can be attributed to developments in digitization, bandwidth and compression; as well as interactivity”.

Moreover, the rapidity and pervasiveness of technological convergence has seized the entrepreneurial imagination and arrested the attention of economic rationalists, with respect to “the devices used by institutions within the communications and media industries, as well as the information they process, distribute, and exchange over and through these devices” (Mosco and McKercher 2008: 37). Such convergence also focuses upon the “integration of or interface between and among different media systems and organizations, made possible by the development of new technologies” (Mosco and McKercher 2008: 37).

With this being said, a more fertile field to explore, derives from the recognition that while technology continues to converge, so does the corporate world. The nub of this issue is the nature and extent of the link between these two types of convergence, and the nuanced ways in which one shapes and is shaped by the other. Corporate convergence, according to Babe (1996:284-285) refers to the “mergers, amalgamations, and diversifications, whereby media organisations come to operate across previously distinct industry boundaries.” Babe extends this explanation stating that corporate convergence refers to the non-technical features of convergence, which also “contribute to the blurring of industry boundaries” (Babe 1996: 284-285). Examples he cites in the 1990’s from his Canadian context include “ Time Warner combining book publishing, music recording, and movie making, not to mention cable television, (while) Rogers Communications, Inc. engage in newspaper and magazine publishing, long-distance and cellular telephony, cable television, and radio/television broadcasting” (Babe 1996: 284-285).

While it is self-evident that “corporate convergence promotes and is promoted by technological convergence” (Mosco and McKercher 2008: 37), closer attention is warranted to examine the nature of this promotion and the ways these two significant convergences influence each other. It is illuminating, as we do this, to itemise dimensions of technological convergence, to begin to pinpoint the areas of synergy between technology and corporate enterprise. The International Telecommunications Union (ITU) has been helpful in its examination of convergence by singling out ‘device convergence’, ‘network convergence’, ‘service convergence’ and ‘regulatory convergence’ (ITU 2008). The ITU’s examples of device convergence include the mobile phone, camera and internet access device, while its network examples include fixed-mobile convergence and next-generation networks (ITU 2008). Service convergence is exemplified by voice services over the internet, and regulatory convergence by joint regulation of broadcasting and telecommunications, as with the Office of Communications (Ofcom) in the United Kingdom (ITU 2008).

The view of convergence from the corporate stakeholder, according to Andriole (2005: 28), is ideally a “multi-disciplinary, anticipatory, adaptive and cautious” one, no longer about “early adoption of unproven technology,” but instead about questions of “business technology acquisition, deployment and management” (Andriole 2005: 28). The sense that the momentum has changed within the corporate sector, prompting corporate leaders to be ready to have ‘convergence conversations’, is clearly articulated by Andriole (2005). It is advocated that companies will benefit by thinking in terms of “business technology convergence plans” (Andriole 2005: 28). Instead of technology being a footnote or a discrete department within a corporation, through its own array of convergences it now occupies a central position in underpinning corporate cultures. As a response to this generational shift in consciousness, business planning now closely consults with technological providers, shaping corporate decisions and goals. This change of thought has spawned a new series of business planning questions, which demonstrate some of the links between technological and corporate convergence. Questions which illustrate this include: “‘How does technology define and enable profitable transactions?’; ‘What business models and processes are underserved by technology?’; ‘Which are adequately or over-served by technology?’” (Andriole 2005: 29)

Now when strategic planning is tabled as an agenda item within companies, the matter of technological capabilities is taken seriously, as corporations realise that sidelining technological innovation, is a stepping stone towards giving away market edge to one’s competitors. Indeed, Andriole (2005: 30) forewarns of the perils of business technology segmentation. Instead of a new business initiative being conceived then asking what technological capability exist to support it, Andriole (2005: 30) argues that technologists must be present as part of the materialisation process of a company’s development goals and strategies.

One fundamental concern for a business model which values efficiency and effectiveness is the calibre of its internal and external communications systems and infrastructure. In the 21st-century business context of global interfacing, communications which are “pervasive, secure and reliable” (Andriole 2005: 30) are a baseline issue. The incentive to acquire such state-of-the-art systems is one factor driving further technological convergence, as market demand fosters technological innovation to bring market edge to communications. The airline industry is a practical case in point, with specific international airlines’ branding being fostered by the level of their onboard entertainment systems for travelling customers. Some international airlines have invested heavily in this component of their corporate identity to enhance their market niche, displaying convergence through multi-media, multi-channel video and music on demand, personalised entertainment systems which now permit replay and playback functions (Yu 2008).

We are reminded that a large area of compatibility and synchronicity between technological and corporate convergence relates to the classical knowledge networks, such as universities, corporations and investors, who derive great benefits from convergence, finding more penetrating ways to exchange information and knowledge, their primary resource (Saltzis 2007: 2). Additionally, since political, economic and financial power is derived from shared information, the value of corporate convergence to the stock markets and to companies is self-evident. In relation to the priming of information flow via the synergy between corporate and technological convergence, some observers are beginning to draw attention to the sociological trend that knowledge, through these processes, has become less of a community resource and increasingly a commodity. As information is commodified, it is packaged to target specific interest groups and economic stakeholders, who prize specific knowledge for specific outcomes, in terms of client need and demand. This instance of the knowledge superhighway shows that knowledge can be ‘positioned’ within the market with greater precision through convergence, yet, in so doing, may easily lose the original contextual underpinnings that imbued it with richer nuances of meaning in the first place. This phenomenon is perhaps nowhere more evident than in cable television, where networks and individual channels are devoted to specific content delivery 24 hours a day. The downside, of course, is that information must be assimilated rapidly on the take-up side by the media corporation, just as it is foisted upon the consumer with a ‘forced-feed’ pretext, to make room for the next feed. Information, through such convergent capabilities that permit ‘bites’ of knowledge to be digitally transferred globally and instantaneously, is stripped of the framework in which it emerged, just as it is quickly yet superficially digested by the global consumer. When information held the status of a community resource, rather than a global commodity, it could be used at the will of the consumer, for their own determined purpose, rather than the commodified purpose preselected by the respective media conglomerates that perpetuate the promulgation of endless information.

Further challenges to technological and corporate convergence trends, apart from dilution of meaning due to the multiplicity and potentially splintering of sources, according to ITU (2008) concerns, “content distribution and management, sustainability and scalability, innovation management, competitive dynamics, tariff policies, network security, regulatory coherence and consumer protection” (ITU 2008). While the broadening of avenues for content distribution has the allure of versatility, the revolutionary distribution of music in the past decade illustrates the potency of convergence, threatening to undermine the very industry it was seeking to promote. I-Tunes and other legal internet based distribution pathways for music radically altered the income and revenue streams derived from popular music providers globally. While the consumer was benefited through the open door of access to music, (just as the educational market was reconfigured once educational corporations began to exploit the potentialities of online delivery of educational content at school and university level), the demand for live music globally initially declined, yet has now been buoyed up by the benefits of enhanced global exposure, on account of the global penetration capacity of online music.

Another aspect of this link that has pressurised corporations like never before has been how to safeguard the integrity of informational, entertainment or intellectually creative products, once they are so widely available via the world wide web. The proliferation of cloned products has the tendency to diminish the quality, reputation or demand for the original. Corporations have had to weigh the benefits of more universal distribution, against this tendency to have the integrity of a product compromised. This, in one sense has been as much about re-education of the consumer, who remains driven by the desire for quality in many instances, overlooking the detracting influence of You-Tube look alike musical bands renditions of hit singles by either reputable or promising new talent.

Patently, issues of security remain paramount in this race towards virally changing convergences, whether it is the protection of personal data by entertainment companies, the finance sector or an individual relying upon social networking websites to foster their new relationships. Banks’ reputation for safety, once built at the store front only, has now shifted, to remain competitive amid market rivals, to the quality and integrity of their web presence. This same notion extends, of course, to an ever-growing margin of the retail and sporting sectors, which realise that within the 21st-century era of new media users, the ‘digital native’ populations will increasingly rely upon web-based sources for their interfacing with the world. Ironically, even large-scale media conglomerates recognise that technological convergence can allow the operator of a mobile phone with a camera to drive world-changing conditions, in the event that somebody happens to be at the right place at the right time, films an international crisis on the phone, and then posts it on the web, embarrassingly before a major news corporation has the time or the infrastructure to outrun them. This realisation has brought a new sense of recognition from major news broadcasters of the power and penetration of websites like YouTube, creating in journalists a scrutinising eye for such alternative culture havens to assist the construction of mainstream breaking news stories.

The future looks bright for the ongoing convergence of technologies and corporate agendas. We are reminded of the profound benefits of the digitization revolution, yielding “enormous gains in transmission speed and flexibility over earlier forms of electronic communication,” (Mosco & McKercher 2008: 38) “extending the range of opportunities to measure and monitor, package and repackage entertainment and knowledge” (Mosco & Mckercher 2008: 38).

Nonetheless, the need to balance economic welfare and human welfare continues to be of concern, and is one of the many implications of the increasing reciprocity between technological and corporate convergence. In the field of media journalism and news production convergence, Klinenburg reiterates that convergence facilitates a more rapid confluence of sources impinging upon an event or a story, yet it also intensifies the pressure on the journalist’s time to “conduct interviews, go out into the field, research and write” (2007: 128). The processing time available at the human level continually diminishes, and when technical speed is permitted to eclipse the human processes of digesting knowledge and subsequent reflection, the result may, ironically, in spite of a seemingly infinitely greater number of sources, be inferior, less newsworthy and more insubstantial than it would have been if the journalist had relied upon more traditional methods of crafting a story to be broadcast or published.

While we have such warnings of convergence being manifest as a “concentration of technological ownership, in the form of the global media conglomerates” (Saltzis 2007), occurring in tandem “at the three levels of networks, production and distribution” (Saltzis 2007), it is prudent to be cognisant of the fact that such monopolisation can create a hegemonic corporate empire, allowing such media outlets in effect to be massive funnels for particular ideological positions. Divergence of ownership, on the other hand, may be a way to democratise control and use of these powerful message delivery mechanisms; yet without inbuilt checks and balances, the corporate stakeholder will rarely consider that their over-influence in the marketplace of ideas is detrimental to society.

Since convergence researchers are ambivalent about the relative degree to which the “conglomeration of the global media has been the causal factor of technical convergence, or whether it is its by-product” (Saltzis 2007), there remains much to scrutinise as we move globally to ever more convergent means of conducting business, as well as of producing, disseminating and consuming information for diverse purposes. Saltzis’s observations seem pertinent in the final analysis: the “benefits of these transitions include the merging of consumer bases; the creation of synergies with shared resources (utilising economies of scope and scale); as well as cross-promotion”, while “the instability of the global media system, with its intense competition, advertising, peer-to-peer file sharing technologies, have established significant challenges for both the music and film industries” (Saltzis 2007). The matter of e-regulation is, as Saltzis asserts, “in its infancy” (2007), with many more competing political, economic and ethical questions to consider as the global marketplace continues to converge.

Bibliography

Mosco, V. & McKercher, C. (2008) The Laboring of Communication: Will Knowledge Workers of the World Unite? Rowman & Littlefield.

Saltzis, K. (2007) Corporate and Technological Convergence (Lecture 8), New Media and the Wired World, MS2007.

International Telecommunications Union (2008) World Telecommunications Policy Forum 2009 ‘Convergence’, accessed December 13, 2008 from http://www.itu.int/osg/csd/wtpf/wtpf2009/convergence.html

Yu, R. (2008) Airlines Upgrade Entertainment in Economy Cabin, USA Today, retrieved December 13, 2008 from http://www.usatoday.com/travel/flights/2008-05-05-inflight-entertainment_N.htm

Controller Area Network Sensors: Applications in Automobiles

1: Introduction

This paper presents an overview of Controller Area Network (CAN) sensors and their real-world applications in automobiles. Although controller area networks employ various sensors and actuators to monitor the overall performance of a car (K.H. Johansson et al., 2001 [1]), this paper focuses only on the sensors and their role in supporting CAN performance.

2: Laser Speed Velocimetry (LSV) sensor

This sensor is used in the Renault range of vehicles, where the company incorporates the LSV as an on-board sensor to measure ground speed with better than 0.1 km/h accuracy (LM Info, 2006 [2]). LSV technology measures the ground speed of a moving automobile with greater accuracy, thereby ensuring better on-road performance, as argued by K.H. Johansson et al. (2001). The system comprises a sensor that continuously records the optical interference from the road surface; this signal is fed back to the controller, which computes the speed of the car. The diagram in Fig. 1 below illustrates this.

Fig. 1 is a schematic representation of the mounting of the LSV 065 sensor head (source: www.polytech.com). The diagram further shows that the LSV system not only provides an accurate measurement of speed but can also support effective control over the performance and velocity of an automobile.

The LSV systems from Polytech, for which the schematic was presented in this section, “combine a sensor head, a controller and software into a rugged industrial package that makes precision measurements from standstill to speeds of more than 7,200 m/min in either direction” (LM Info Special Issue, 2006).
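The relation below is a minimal sketch of how a laser surface velocimeter of this kind typically converts a measured Doppler (fringe) frequency into ground speed; the wavelength and beam half-angle used here are illustrative assumptions, not LSV 065 specifications.

```python
import math

def lsv_speed(doppler_freq_hz: float, wavelength_m: float, half_angle_rad: float) -> float:
    """Convert a measured Doppler (fringe) frequency to surface speed.

    Standard differential laser-Doppler relation: two beams crossing at a
    half-angle phi form a fringe pattern of spacing d = lambda / (2*sin(phi));
    a surface moving at speed v produces a signal frequency f = v / d.
    """
    fringe_spacing = wavelength_m / (2.0 * math.sin(half_angle_rad))
    return doppler_freq_hz * fringe_spacing

# Illustrative values only (not taken from the LSV 065 datasheet):
speed = lsv_speed(doppler_freq_hz=250_000, wavelength_m=780e-9, half_angle_rad=math.radians(10))
print(f"{speed * 3.6:.2f} km/h")
```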

3: Braking system sensors and speed sensors

The ABS utilizes multiple sensors to prevent the wheels from locking while braking at high speed. The main sensors used in this set-up are:

3.1: Speed sensor

A speed sensor is fitted to each wheel of the automobile. Its purpose is to detect wheel slip while braking, which is then fed back to the ABS controller unit. The speed sensor records the rotational speed of the wheel; when one or more wheels are found to be rotating considerably more slowly than the others, the ABS control unit reduces the pressure at the pressure valves, ensuring that braking does not lock the wheel. Speed sensors come in various models and can be mounted at different positions on an automobile to measure speed. The application of the speed sensor in the ABS is only one of many applications of speed measurement.
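As an illustration of the slip-detection logic described above, the sketch below computes a per-wheel slip ratio and flags wheels whose speed has dropped far below the vehicle speed. The 0.2 slip threshold and the function names are illustrative assumptions, not values from any production ABS controller.

```python
def slip_ratio(vehicle_speed: float, wheel_speed: float) -> float:
    """Longitudinal slip while braking: 0 = free rolling, 1 = locked wheel."""
    if vehicle_speed <= 0.1:          # avoid division by zero near standstill
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def abs_pressure_command(vehicle_speed: float, wheel_speeds: list[float],
                         slip_threshold: float = 0.2) -> list[str]:
    """Per-wheel command: 'hold' normally, 'release' when the wheel is
    decelerating much faster than the vehicle (imminent lock-up)."""
    return ["release" if slip_ratio(vehicle_speed, w) > slip_threshold else "hold"
            for w in wheel_speeds]

# Example: the third wheel has nearly locked while braking from 25 m/s (90 km/h).
print(abs_pressure_command(25.0, [24.5, 24.0, 5.0, 23.8]))   # speeds in m/s
```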

The ECM method of measuring speed using speed sensors is increasingly popular as part of ABS technology. It is also regarded as a later version of the ABS that overcomes the fundamental sensor-positioning flaws of earlier systems.

The ECM uses the Pulse Code Modulation technique to communicate with the sensor and the control system of the Controller Area Network of the automobile.

From the figure above it is clear that the ECM plays a critical role as the controller that captures the sensor signals and transmits them to the master CAN Electronic Control Unit (ECU) for overall control of the automobile. It is also evident that the sensor plays a vital role in speed measurement and in the efficient operation of the ABS.
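To illustrate the kind of message passing between the ECM and the master ECU described above, the sketch below packs a wheel-speed reading into an 8-byte CAN frame payload; the frame identifier, payload layout and scaling are illustrative assumptions rather than any real ECU's message definition.

```python
import struct

def pack_wheel_speed_frame(wheel_id: int, speed_kmh: float, frame_id: int = 0x1A0) -> tuple[int, bytes]:
    """Encode one wheel-speed reading as (CAN identifier, 8-byte payload).

    Payload layout (illustrative): byte 0 = wheel index, bytes 1-2 = speed in
    0.01 km/h units (big-endian), remaining five bytes reserved/zero.
    """
    raw = int(round(speed_kmh * 100))            # 0.01 km/h resolution
    payload = struct.pack(">BH5x", wheel_id, raw)
    return frame_id, payload

frame_id, data = pack_wheel_speed_frame(wheel_id=2, speed_kmh=87.5)
print(hex(frame_id), data.hex())                 # 0x1a0 02222e0000000000
```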

The fundamental difference between the Vehicle Speed Sensor (VSS) and the Wheel Speed Sensor (WSS) is that the VSS is part of the controller area network and is connected directly to the network's ECU, whereas the WSS feeds into the ABS controller unit, which is in turn connected to the CAN of the car or automobile under consideration.

The VSS is also a successful and flexible option for motorbikes and other two-wheeled vehicles, as its mounting is simpler than the WSS mounting used for the ABS in cars.

VSS units mounted in the transaxle or the transmission serve the purpose of velocity measurement effectively and provide near-accurate readings for efficient speed control by the driver of the car or the rider of the bike.

When the VSS is mounted in the transmission, the sensor sends a four-pulse signal at regular intervals to the combination meter, which then passes the signal to the ECU of the car's CAN. The signal is recorded as the speed and displayed to the driver as the vehicle's velocity. This approach is more accurate than the traditional analogue approach to speed measurement and management.
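A minimal sketch of the pulse-to-speed conversion just described is given below, assuming a fixed number of pulses per revolution and an effective rolling circumference; real systems also account for final-drive gearing, which is omitted here for simplicity.

```python
def vehicle_speed_kmh(pulse_count: int, interval_s: float,
                      pulses_per_rev: int = 4, rolling_circumference_m: float = 1.95) -> float:
    """Estimate road speed from VSS pulses counted over a fixed interval.

    The transmission-mounted VSS emits a fixed number of pulses per
    revolution; the combination meter (or ECU) converts the pulse rate
    into distance covered per unit time.
    """
    revolutions = pulse_count / pulses_per_rev
    distance_m = revolutions * rolling_circumference_m
    return (distance_m / interval_s) * 3.6

# e.g. 40 pulses counted in 0.5 s with the assumed geometry:
print(round(vehicle_speed_kmh(40, 0.5), 1), "km/h")   # 140.4 km/h
```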

The schematic above makes it clear that although mounting the sensor on the transaxle provides an efficient method of measuring velocity, the sensor's response can degrade because of the mechanical wear and tear directly associated with the transaxle in a car.

Mounting the VSS in the transmission is perceived to have resolved this issue by placing the sensor near the core rotor and using a magnetic field to hold it in position. This approach is considered more effective than the former, where mechanical wear and tear was a critical drawback to overall system performance. The schematic mounting of the VSS in the transmission is presented in Fig. 4 below.

The mounting schematic in the figure further shows that positioning the sensor by the rotor helps measure the speed effectively and more accurately.

4: Differential Hall-effect sensors

Daniel Dwyer (2007) [3] argues that differential Hall-effect sensors are capable not only of accurately measuring speed but also of providing safety measures by effectively controlling the speed. Hall-effect sensors exploit the fundamental principle of the Hall effect, which is described as follows:

“When a bias voltage is applied to the silicon plate via two current contacts, an electric field is created and a current is forced.” … Daniel Dwyer (2007).

This principle is utilized in gear-tooth profiling and in speed measurement through gear-tooth sensing, in both the linear and the differential cases. The differential case is argued to be more successful, especially in automobiles with automatic transmission, because of the need to control the speed of the car effectively.

Another interesting aspect of differential Hall-effect sensors is that the sensor positioning is robust and its wear and tear is minimal.

The differential element sensing that is key to differential Hall-effect sensors relies on the fundamental Hall effect. The sensor also "eliminates the undesired effects of the back-biased field through the process of subtraction" (Daniel Dwyer, 2007). The differential baseline field of the sensor is made close to zero gauss, since each of the two Hall elements on the IC (the sensor) sees approximately the same back-biased field, as argued by Daniel Dwyer (2007). A schematic representation of differential element sensing is presented in Fig. 5 below.

A major feature of the differential Hall-effect sensor is that it is produced as an integrated circuit that can respond to magnetic field interference and to the differential effects caused by changes in speed and by the gear-tooth position in the magnetic field.
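The sketch below illustrates the subtraction-based differential sensing described above: the common back-bias field cancels in the difference of the two Hall-element readings, and passing tooth edges appear as threshold crossings. The sample values and the switching threshold are illustrative only.

```python
def differential_signal(hall_a: list[float], hall_b: list[float]) -> list[float]:
    """Subtract the two Hall-element readings so the common back-bias field
    cancels and only the gear-tooth-induced difference remains."""
    return [a - b for a, b in zip(hall_a, hall_b)]

def tooth_edge_count(diff: list[float], threshold: float = 0.0) -> int:
    """Count rising crossings of the switching threshold, i.e. passing tooth edges."""
    return sum(1 for i in range(1, len(diff)) if diff[i - 1] <= threshold < diff[i])

# Illustrative samples: a 50-gauss back-bias field common to both elements,
# with a small tooth-induced difference superimposed on element A.
a = [50.0, 50.4, 51.0, 50.5, 50.0, 50.6, 51.1, 50.4]
b = [50.0] * 8
edges = tooth_edge_count(differential_signal(a, b), threshold=0.5)
print(edges)   # 2 tooth edges seen in this window
```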

Differential element sensing and speed measurement are accomplished through peak holding by the integrated circuit (IC) in the field. Although a traditional peak-detecting scheme could address peak holding, the sensor requires an external capacitor for peak holding in order to control the overall automobile speed effectively.

A large gain is required to generate a signal strong enough to overcome the air gap in the case of the Hall-effect sensor, which leads to drawbacks in timing accuracy and duty-cycle performance on the slope of the magnetic signal strength, as argued by Daniel Dwyer (2007).

From the above arguments it is clear that the Hall-effect sensor is a successful but expensive sensor for performing measurements and for being programmed as part of the overall CAN of the automobile.

To conclude, four sensors were discussed in this paper. The Laser Speed Velocimetry (LSV) sensor, with the LSV 065 module as an example, is a successful and accurate speed-measurement device, but its mounting and safety-related aspects are a significant drawback for commercial application. The wheel speed sensor for the ABS in an automobile was then discussed, followed by an analysis of the vehicle speed sensor. Finally, the differential Hall-effect sensor was examined: it can be mounted easily in an automobile and performs effectively to provide accurate measurements, but its higher cost and maintenance requirements make it a secondary choice to the traditional VSS and WSS sensors used in most cars.

Context Aware Academic Planner Design

Designing a Context Aware Academic Planner
Al Khan bin Abdul Gani

Abstract

An academic calendar planner is an application that can give tremendous advantages to students, particularly university students, and to academic personnel. Using the academic calendar planner, students and academic personnel can manage their academic schedule anytime, anywhere. The planner lets users edit and amend their calendar activities and keep them up to date. Beyond that, users can interact with one another, for example lecturers with students. One ability that cannot be found in other academic calendar planners is the ability to switch the view between monthly, weekly, daily and per-semester layouts based on user preference. In addition, the planner allows users to create groups in which each member can see the schedules of the other members.

Keywords— Academic Planner, social application

Introduction

The aim of this paper is to determine the context-aware features to be considered in developing an academic planner, by reviewing previous papers and by conducting a survey of students and lecturers to gather their responses regarding such a planner. The paper focuses on a proposed academic planner for UiTM that offers the capabilities summarised in the abstract: schedule management anytime and anywhere, up-to-date editing of calendar activities, interaction between lecturers and students, switchable monthly, weekly, daily and per-semester views, and user groups in which members can see each other's schedules.

Background

This application is developed for students, lecturers and academic personnel who are looking for a full-featured application to manage their academic calendar. The current system in a university such as UiTM provides only a static academic calendar to students and lecturers, on which they rely entirely to manage their academic schedules. The problem with the existing academic calendar is that it is limited to certain activities:

Only academic personnel have the right to add new academic plans, university events, public holidays, etc.
Lecturers and students can only view the calendar; they have no authorization to update or change any calendar information.
Sometimes a lecturer wants to cancel a class and arrange a replacement. Because of the limited functionality of the current academic calendar, this leads to unreliable calendar information.
In certain circumstances a student needs to meet a lecturer, but the lecturer is not around. This is due to unreliable calendar information about availability status.
METHODOLOGY

This research determines the key areas of a requirements specification to be considered in designing a context-aware academic planner. Two approaches have been used to identify the appropriate elements and features: a literature survey and questionnaires.

Figure 1: Research mission. Current issues, the current needs of students and lecturers, and common elements feed into the framework of elements/features for the application.

Figure 1 represents the method used to determine the features before designing the application.

Literature review

A literature review needs to be done in order to continue the study on this topic. A literature survey was conducted to investigate the current issues and the common elements and features involved in developing a context-aware system. Table 2 is a draft of the element functions involved in the academic planner system.

TABLE 2: DRAFT FROM LITERATURE SURVEY

Elements/functions: Sources

Academic related matters:
Course registration: [4], [9], [10], [11], [12], [13]
Course selection: [3], [6], [4], [9], [10], [11], [12]
Academic progress: [8], [13]
Course information: [4], [6]
Scheduling: [3], [12]
Plan of study: [7], [8]
Academic calendar: [13]

Delivery methods:
Bulletin board: [6]
Online discussion: [13]
Chat: [14]

Support system:
Appointment: [6]

Other existing planners

Existing calendar and planning applications were also reviewed as part of the study. The comparison below covers Google Calendar, Thunderbird and Microsoft Outlook.

Google Calendar. Features: easy to access, fast and reliable. Limit: does not support full integration with client email and contacts. Platform: web-based. Conversation with other contacts: no support.

Thunderbird. Features: robust calendaring tool with email integration. Platform: all platforms. Conversation with other contacts: no support.

Microsoft Outlook. Features: sync and great collaboration tools. Limit: hefty price. Platform: Windows. Conversation with other contacts: no support.

Context awareness

Ubiquitous computing (pervasive systems) was first proposed by Weiser (1991). Context-aware systems are a type of pervasive system and are viewed by computer scientists as a mature technology [1, 2]. A definition of context is given by Dey in [3]: "context is any information that can be used to characterize the situation of an entity; an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves". Context-aware systems are able to gather contextual information from a variety of sources without explicit user interaction and adapt their operation accordingly [4]. Context-aware systems can integrate easily with any service domain, such as healthcare, commerce, learning and transport.

A context-aware system must include three essential elements: sensors, processing and action. Three types of sensors are defined: physical, virtual and logical [5]. A physical sensor, such as a camera or thermometer, captures information about its local environment [6]. In contrast, virtual sensors extract information from virtual space, which is defined as the set of data, applications and tools created and deployed by the user. Logical sensors combine physical and virtual sensors to extract context information. For example, a company can infer that an employee is working from home using login information (a virtual sensor) and a camera (physical sensor) [1].
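As a minimal illustration of a logical sensor in this sense, the sketch below fuses a virtual sensor (login records) with a physical sensor (motion detected at the home desk) to infer the working-from-home context mentioned above; the rule, threshold hours and attribute names are illustrative assumptions.

```python
from datetime import datetime

def is_working_from_home(vpn_logged_in: bool, badge_scanned_today: bool,
                         motion_at_home_desk: bool, now: datetime) -> bool:
    """Logical sensor: combine a virtual sensor (VPN/login records) with a
    physical sensor (motion or camera at the home desk) to infer a
    higher-level context, as in the working-from-home example above."""
    within_office_hours = 8 <= now.hour < 18
    return (within_office_hours and vpn_logged_in
            and motion_at_home_desk and not badge_scanned_today)

print(is_working_from_home(True, False, True, datetime(2024, 3, 4, 10, 30)))  # True
```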

Context-aware user interfaces facilitate user interaction by suggesting or prefilling data derived from the user's current context. This raises the problem of determining which piece of context information can be used as input for which interaction element in the user interface. The task is especially challenging because the texts that describe the elements, e.g. their labels, often differ in terminology. To facilitate interaction with an application, we need user interfaces (UIs) that provide proactive assistance, for example by suggesting which values to enter in a form.

Melanie's paper presents a novel mapping process for this purpose, which combines the advantages of string-based and semantic similarity measures to bridge the vocabulary gap between context and UI elements, and which is able to extend its vocabulary automatically by observing the user's interactions. Their research shows that these two features dramatically increase the quality of the resulting mapping. Unlike previous approaches, the proposed mapping process does not require any training or manually tagged data. Furthermore, it does not only use the label to describe the context and UI elements, but also additional texts such as their tooltips.
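The sketch below is not the mapping process from the cited paper; it is only a minimal illustration of the underlying idea of combining a string-based similarity with a semantic one, using a small hand-made synonym table as a stand-in for a real semantic measure. All names and values are illustrative.

```python
from difflib import SequenceMatcher

# Tiny stand-in for a semantic measure: groups of terms treated as synonyms.
SYNONYMS = [
    {"venue", "location", "place", "room"},
    {"start", "starts", "begin", "begins", "time"},
    {"lecturer", "teacher", "instructor"},
]

def string_sim(a: str, b: str) -> float:
    """Character-level string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a: str, b: str) -> float:
    """1.0 if the two texts share a synonym group, else 0.0."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 if any(ta & g and tb & g for g in SYNONYMS) else 0.0

def map_context_to_fields(context: dict[str, str], ui_labels: list[str], w: float = 0.5) -> dict[str, str]:
    """For each UI label, suggest the value of the best-matching context attribute."""
    suggestion = {}
    for label in ui_labels:
        best_key = max(context, key=lambda k: w * string_sim(label, k) + (1 - w) * semantic_sim(label, k))
        suggestion[label] = context[best_key]
    return suggestion

ctx = {"location": "Room DK-3", "start time": "09:00", "teacher": "Dr. Rahim"}
print(map_context_to_fields(ctx, ["Venue", "Begins", "Lecturer"]))
```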

Context-aware applications are expected to become a significant application area within future mobile computing. As mobile phones are a natural tool for interaction between people, it is desirable to take the influence of the current context on collaboration into account, to enhance the efficiency and quality of the interaction [1].

Context-aware mobile devices have so far been investigated mainly from the technological point of view, examining context-recognition and sensor technologies, inference logic, system architectures and infrastructure. There have also been examples where contextual information has been used to facilitate cooperation between mobile users. A user's personal information, such as reminders, phonebook contacts or calendar notes, can be used as an information source when creating location-sensitive messages, as done with CybreMinder [2]. Schmidt et al. [3] introduced a context-aware phonebook, which indicates the availability of a contact the user wants to call. Location is probably the most commonly used context attribute, and it has been used to develop numerous location-aware mobile systems, such as the GUIDE tour guide in Lancaster [4] and the visitor's guide at Tate Gallery, London [5].

Cloud Application

A cloud application (or cloud app) is an application program that functions in the cloud, with some characteristics of a pure desktop app and some characteristics of a pure Web app. A desktop app resides entirely on a single device at the user’s location (it doesn’t necessarily have to be a desktop computer). A Web app is stored entirely on a remote server and is delivered over the Internet through a browser interface.

Like desktop apps, cloud apps can provide fast responsiveness and can work offline. Like web apps, cloud apps need not permanently reside on the local device, but they can be easily updated online. Cloud apps are therefore under the user’s constant control, yet they need not always consume storage space on the user’s computer or communications device. Assuming that the user has a reasonably fast Internet connection, a well-written cloud app offers all the interactivity of a desktop app along with the portability of a Web app. If you have a cloud app, it can be used by anyone with a Web browser and a communications device that can connect to the Internet. While tools exist and can be modified in the cloud, the actual user interface exists on the local device. The user can cache data locally, enabling full offline mode when desired. A cloud app, unlike a Web app, can be used on board an aircraft or in any other sensitive situation where wireless devices are not allowed, because the app will function even when the Internet connection is disabled. In addition, cloud apps can provide some functionality even when no Internet connection is available for extended periods (while camping in a remote wilderness, for example).
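As a minimal sketch of this offline-capable behaviour, the class below caches planner data in a local file so the app remains usable without a connection, and pushes locally changed entries through a supplied upload callback when a connection is available. The file name and the callback are illustrative assumptions, not any particular cloud service's API.

```python
import json, os

class CloudAppCache:
    """Minimal sketch of the local-cache behaviour described above: data is
    kept on the device so the app still works offline, and locally changed
    entries are pushed to the remote service when online."""

    def __init__(self, path: str = "planner_cache.json"):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as fh:
                self.data = json.load(fh)
        self.dirty: set[str] = set()

    def put(self, key: str, value) -> None:
        """Store a value locally; the app stays fully usable offline."""
        self.data[key] = value
        self.dirty.add(key)
        with open(self.path, "w") as fh:
            json.dump(self.data, fh)

    def sync(self, upload) -> None:
        """Push every locally changed entry via upload(key, value), a stand-in
        for whatever network call the real cloud service exposes."""
        for key in list(self.dirty):
            upload(key, self.data[key])
            self.dirty.discard(key)

cache = CloudAppCache()
cache.put("CSC301 lecture", "Mon 10:00, Room DK-3")
cache.sync(lambda k, v: print("uploaded:", k, "->", v))
```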

Cloud apps have become popular among people who share content on the Internet. Linebreak S.L., based in Spain, offers a cloud app named (appropriately enough) “CloudApp,” which allows subscribers to share files, images, links, music, and videos. Amazon Web Services offers an “AppStore” that facilitates quick and easy deployment of programs and applications stored in the cloud. Google offers a solution called “AppEngine” that allows users to develop and run their own applications on Google’s infrastructure. Google also offers a popular calendar (scheduling) cloud app.

FINDINGS
Questionnaires Analysis

Elements/functions, with the percentage of students (361) and lecturers (155) who agree or disagree:

Academic related matter: course registration; course selection; academic progress; course information; scheduling; plan of study

Academic Planner: reminder; appointment; sync with major calendar application; context awareness

Proposed Feature In Academic Planner

After studying traditional and existing planners related to the academic planner, reviewing the literature and analysing the questionnaire, the following new features are introduced to improve the academic planner.

Optimizing class scheduling in collaborative mobile systems through distributed voting

Decision making through distributed voting can help automate routine collaborative tasks such as class scheduling, appointments and events. The cited paper concentrates on how distributed voting strategies can be used to schedule meetings in mobile and pervasive environments. The work focuses on optimizing the scheduling result for each participant in a mobile team by using user-specific preferences and information available on their devices, with the negotiation carried out in a distributed manner directly between the peers. The authors describe different approaches to the decision-making strategy, drawing on voting theory, to balance the different user preferences and availabilities. The weight of each participant's vote can also be adjusted according to their importance or necessity in the given meeting. They also briefly introduce an approach to support distributed decision-making strategies pervasively using a lightweight Web-based platform, and conclude with their views on future development directions, evaluation plans and extensions of the approach to other related domains [1].
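The sketch below is a strongly simplified, centralized illustration of the weighted-voting idea summarized above (the cited work performs the negotiation in a distributed manner between peers): each participant's slot preferences are scaled by an importance weight and the highest-scoring slot wins. The names and values are illustrative.

```python
from collections import defaultdict

def schedule_by_weighted_voting(votes: dict[str, dict[str, float]],
                                weights: dict[str, float]) -> str:
    """Pick the meeting slot with the highest weighted preference score.

    `votes` maps each participant to their preference (0..1) for each slot,
    built locally from calendar availability; `weights` reflects how
    necessary each participant is for the meeting.
    """
    totals: dict[str, float] = defaultdict(float)
    for person, prefs in votes.items():
        for slot, pref in prefs.items():
            totals[slot] += weights.get(person, 1.0) * pref
    return max(totals, key=totals.get)

votes = {
    "alice": {"Mon 10:00": 1.0, "Tue 14:00": 0.4},
    "bob":   {"Mon 10:00": 0.2, "Tue 14:00": 0.9},
    "carol": {"Mon 10:00": 0.7, "Tue 14:00": 0.6},
}
print(schedule_by_weighted_voting(votes, {"alice": 2.0, "bob": 1.0, "carol": 1.0}))  # Mon 10:00
```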

Categorizing Task Occurrence Pattern

When we plan future work, we can predict or forecast upcoming tasks, because a fair proportion of our tasks recur just as they occurred in the previous year or month. In addition, we have many dependent tasks; for example, a series of regular meetings with the office staff requires various auxiliary tasks to be completed, such as the announcement, setting up the room and sending the minutes. These related tasks fall approximately on the same time grid as their corresponding tasks. This type of regularity is called a Task Occurrence Pattern, which arises from the repetition of tasks and the alignment of related tasks [4]. To confirm how closely real tasks follow the Task Occurrence Pattern, one year of tasks of a user, a graduate student, was gathered and inspected from the viewpoint of dependence and recurrence.
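A minimal sketch of detecting such a time grid is shown below: tasks whose successive occurrences are spaced roughly one period apart are flagged as recurring. The weekly period, tolerance and minimum number of occurrences are illustrative assumptions, not parameters from the cited study.

```python
from collections import defaultdict
from datetime import date, timedelta

def recurring_tasks(tasks: list[tuple[str, date]], period_days: int = 7,
                    tolerance_days: int = 1, min_occurrences: int = 3) -> set[str]:
    """Flag task names whose occurrences fall on (roughly) the same time grid,
    i.e. whose successive gaps are close to the chosen period."""
    by_name = defaultdict(list)
    for name, day in tasks:
        by_name[name].append(day)
    recurring = set()
    for name, days in by_name.items():
        days.sort()
        gaps = [(b - a).days for a, b in zip(days, days[1:])]
        if len(days) >= min_occurrences and all(abs(g - period_days) <= tolerance_days for g in gaps):
            recurring.add(name)
    return recurring

meetings = [("staff meeting", date(2024, 1, 8) + timedelta(weeks=k)) for k in range(4)]
print(recurring_tasks(meetings + [("thesis draft", date(2024, 1, 20))]))  # {'staff meeting'}
```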

Comparison of Social Media Use in Different Countries

Evolution and history of social media:

Social media can be described as "forms of electronic communication (websites for social networking and blogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)". The same source defines networking as "the exchange of information or services among individuals, groups, or institutions; specifically, the cultivation of productive relationships for employment or business." (Edosomwan & Seymour, 2011)

There are numerous ideas about the first occurrence of social media. Throughout much of human history, people have created innovations that make it easier to communicate with one another. In the late 1800s, the radio and the telephone were used for social interaction, though only in a limited, one-way fashion in the case of the radio (Rimskii, 2011).

Social networks have evolved over the years into today's variety, which uses digital media. However, social media is not that new, and it did not begin with the computer but rather with the telephone. During the 1950s, phone phreaking, the term used for the rogue exploration of the telephone network, began. It was carried out using home-made electronic devices that enabled unauthorized access to the telephone system in order to make free calls. Phreaks could find telephone company test lines and conference circuits to accomplish their task. Brett Borders states that phreaks could hack into unused corporate voice mailboxes to host the first blogs and podcasts (Borders, 2009).

The public saw the advent of email during the 1960s (Borders, 2009). Nonetheless, the web was not available to the general public until 1991. Email was initially a way to exchange messages from one computer to another, but both computers had to be online. Today, email servers accept and store messages, allowing recipients to access their email whenever convenient. In 1969, ARPANET, created by the Advanced Research Projects Agency (ARPA), a U.S. government organization, was developed. ARPANET was an "early network of time-sharing machines that formed the backbone of the internet." CompuServe, the third development of the 1960s, was also created in 1969 with a mission to provide time-sharing services by renting time on its computers. With its high charges, this service was too expensive for many (Rimskii, 2011) (Ritholz).

Computer Technologies

Social media developed further throughout the 1970s. MUD, originally known as Multi-User Dungeon, Multi-User Dimension, or Multi-User Domain, was a real-time virtual world with role-playing games, interactive fiction, and online chat. MUD is primarily text based, requiring users to type commands in a natural language. BBS was created in 1978, the same year as MUD. BBS stands for bulletin board system; users log into the system to upload and download software, read news, or exchange messages with others. In the early years, bulletin boards were accessed through a modem over a telephone line by one person at a time, and initially they had neither colour nor graphics. Bulletin boards were the precursors of the World Wide Web. Conceived in 1979 and established in 1980, Usenet is similar to a BBS. Usenet is a system for posting articles or news. The difference from a BBS is that Usenet does not have a central server or dedicated administrator; messages are sent to different servers via news feeds (Ritholz).

Numerous social networking sites were created in the 1990s. Examples include SixDegrees, BlackPlanet, AsianAvenue, and MoveOn. These are, or have been, online niche social sites where people can interact, including sites for public policy advocacy and social networks based on a web-of-contacts model. Blogging services such as Blogger and Epinions were also created; Epinions is a site where consumers can read or write reviews of products. ThirdVoice and Napster were two software applications created in the 90s that have since been removed from the market. ThirdVoice was a free plug-in that allowed users to post comments on web pages; opponents of the software argued that the comments were frequently obscene or offensive. Napster was a software application that allowed peer-to-peer file sharing. Users were able to share music files, bypassing normal distribution channels, which was ultimately found to be a violation of copyright law (Edosomwan & Seymour, 2011).

In 2000, social media received a great boost with the launch of many social networking sites. This greatly supported and transformed the interaction of individuals and organizations who share a common interest in music, education, films, and friendship, based on social networking. Among the sites launched were LunarStorm, SixDegrees, Cyworld, Ryze, and Wikipedia. In 2001, Fotolog, Skyblog and Friendster were launched, and in 2003, MySpace, LinkedIn, Last.fm, Tribe.net, Hi5 and others followed. In 2004, prominent names like Facebook (at Harvard), Dogster and Mixi emerged. During 2005, big names like Yahoo! 360, YouTube, Cyworld, and BlackPlanet all developed (Junco, Heibergert & Loken, 2011).

Facebook

Facebook is a social networking website launched in February 2004, privately operated by Facebook, Inc. Facebook was founded by Mark Zuckerberg and others while he was a student at Harvard; when the site was initially launched, it was restricted to Harvard students only. Access was later extended to high school students and later to everyone aged 13 or older (Boyd, 2007).

As of July 2010, Facebook had more than 500 million active users. In January 2009, Facebook was ranked the most used social network worldwide. In addition, in May 2010, Google reported that more people visited Facebook than any other website in the world, a finding drawn from data on 1,000 sites across the globe (TIMES, 2010).

Users may create a personal profile, add other users as friends, and exchange messages, including automatic notifications, photographs and comments when they update their profile. Furthermore, Facebook users may join common-interest user groups, organized by workplace, school, college, or other characteristics. Facebook permits anyone who is at least 13 years old to become a registered user of the website. Traffic to Facebook is rising every day. Facebook has also become the top social network across eight individual markets in Asia: the Philippines, Australia, Indonesia, Malaysia, Singapore, New Zealand, Hong Kong and Vietnam. On October 24, 2007, Microsoft announced that it had acquired a 1.6% share of Facebook for $240 million, giving Facebook a total implied value of around $15 billion. Microsoft's purchase included the right to place international ads on Facebook; other organizations have followed the same pattern (STONE, 2007).

The overall status of Facebook
Egypt:

With 16 million users, Egypt is ranked first among the Arab countries that use Facebook, and seventeenth worldwide in terms of audience size, according to a recently issued report. This represents 1.4% of Facebook users worldwide (DailyNews Egypt, 2013).

According to the e-marketing Egypt Online Competitiveness Intelligence report, Egypt's Facebook community grew by about 41% compared with the previous year, the number of users on 21 July 2012 being 11.3 million. This implies that the current number of users is 18.84% of Egypt's population.

As of 21 July 2013, there were 61 million Facebook users in the Arab world, 26% of them Egyptian. The report reads: "48.11% of internet users in Egypt are Facebook users."

The report, published in August 2013, stated that 12 million Egyptian Facebook users are under 30 years of age. The gender distribution of Facebook users in Egypt shows that female users are much younger, with females under 30 representing 81% of all female users in Egypt. The number of male users is around 10 million, representing about 63% of all users (DailyNews Egypt, 2013).

Hong Kong:

A recent survey has revealed that Hong Kongers are spending more time reading and writing blogs, and on social media sites such as Facebook and YouTube, than people in other markets around the world, including the US. Whether over a home broadband service or a mobile phone, Hong Kongers are spending an increasing amount of time watching video, gaming, shopping and selling online (Ketchum, 2011).

On a weekly basis, 77% of the Hong Kongers surveyed were reading blogs, 52% were writing blogs and 92% were engaged on Facebook, demonstrating a significantly higher level of online participation than in the other markets surveyed around the globe.

Social media is playing a part in addressing the challenges of work-life balance in Hong Kong, a place well known for its work-hard, play-hard lifestyle. 68% of those surveyed spend the same amount of time or more with friends online as they do in person. Social networking sites are also as influential on purchasing choices as traditional media: half of respondents had made a purchase based on blog recommendations (Ketchum, 2011).

India:

The report estimates 243 million internet users in the country by June 2014, overtaking the US as the world's second largest internet user base after China (Times of India, 2013).

Indians primarily use the internet for communication, mostly email; social media is also an important driver of internet use in India. This aspect of the IMAI report can be corroborated with data from other sources such as Facebook, according to which India had 82 million monthly active users by June 30, 2013, the second largest geographic region for Facebook after the US and Canada. Facebook does not operate in China.

Internet penetration in India is driven largely by mobile phones, with some of the cheapest and most basic handsets today offering access to the web. India has 110 million mobile internet users, of which 25 million are in rural India. The growth of internet penetration in rural India is driven largely by the mobile phone: 70% of rural India's active internet population access the web via mobile phones (Times of India, 2013).

Ukraine:

The overall status of Facebook in Ukraine can be judged from the protests that took place across the country. The first notable pattern is that Facebook is used considerably more actively than Twitter. The official EuroMaidan Facebook page, started on Nov. 21, now has more than 126,000 likes. Almost all of the information on this page is in Ukrainian, suggesting that it is geared towards locals rather than the international community, and there is evidence of vibrant engagement. A look at the most popular Facebook posts on this page confirms this intuition: many posts give news updates that generate intense discussion, but the page is also used to provide important logistical information to protestors. There are, for instance, posts with maps of places to get free tea and access to warm spaces, advice on how to avoid being provoked by government agents, flyers to print and distribute around the city, and information on where protesters will be gathering (Barbera & Metzger, 2013).

Strategy for Facebook going forward

Considering the situations in the selected countries above, it can be concluded that Facebook is very popular and its user base is increasing day by day. Looking at Egypt and Ukraine, it is clear that Facebook has played a vital role in revolutions and movements. Strategy for these two countries should therefore be made very carefully, so that Facebook Inc. does not become involved in political matters. The strategy for India and Hong Kong, however, may be the same: in both countries the site provides services for business and social needs in a positive way. All of these countries are very populous and their user numbers are large, so Facebook can make a handsome profit from its advertising business. Its strategy for mobile users is a real issue: the large number of users is due to mobile internet access, but display formats differ across handsets, which creates problems for ad display (Cho, 2013).

Optical Fiber Sensors and Conventional Sensors

ABSTRACT

This study deals with the comparison of two types of sensors widely used in civil engineering, namely conventional sensors and optical fiber sensors. Temperature and displacement are the two principal parameters measured with the aid of fiber optic sensors. Bragg grating, interferometric, intensity-based sensing, and optical time domain reflectometry (OTDR) are some of the techniques used. In this study, various case studies have been reviewed and analyzed, and with their aid a detailed analysis and comparison of the sensors is carried out.

Chapter 1: INTRODUCTION

In the last two decades, the world has witnessed a revolution in optoelectronics and fiber optic communications. Products such as laser printers and bar code scanners, which have become part of our daily lives, are a result of this technical revolution. The reasons for the phenomenal growth of fiber optics are many, the most conspicuous being the ability of fiber optics to provide high-performance, highly reliable communication links at a very low cost per unit of bandwidth. As the optoelectronic and fiber communications industries have progressed, fiber optic sensors have benefited greatly from these developments: mass production in these industries has made fiber optic sensors available at low cost in recent years. With their availability at affordable prices, fiber optic sensors have been able to enter a domain that was otherwise ruled by traditional sensors.

In recent years, the demand for new materials to strengthen, upgrade and retrofit existing aged and deteriorated concrete structures has increased rapidly. The continuing deterioration and functional deficiency of existing civil infrastructure represents one of the most significant challenges facing the world's construction and civil engineers. Deficiencies in existing concrete structures caused by flawed initial design, insufficient detailing at the time of construction, aggressive chemical attack and ageing of structural elements create an urgent need to find an effective means of improving the performance of these structures without additionally increasing the overall weight, maintenance cost and time. In the last 50 years, a large number of civil concrete structures have been built; many of these structures, particularly in off-shore regions, have now deteriorated and require repair within a short period of time.

Moreover, the increase in traffic volume and population in many developing countries is increasing the demand to upgrade existing concrete structures. Damage to reinforced concrete (RC) structures through reinforcement corrosion, and the assessment of their residual capacity, are among the most important issues that concern engineers. These problems occur not only in as-constructed concrete structures but also in structures strengthened by externally bonded steel reinforcements.

In the past, the external steel plate bonding method has been used to improve strength in the tensile region of concrete structures with an epoxy adhesive, and it has proved successful over a period of 20 years. However, the use of steel plates and bars as external reinforcement has its disadvantages, including high corrosion rates, which can adversely affect bond strength and cause surface spalling of the concrete due to the volumetric change of the corroded steel reinforcements. Since the early 1980s, fibre-reinforced plastic (FRP) materials have been used as a replacement for conventional steel materials in concrete strengthening applications. In recent years, interest in utilizing FRP materials in the civil concrete industry, in the form of rods, plates, grids and jackets, has grown increasingly. When an FRP plate with high tensile strength is bonded to the concrete surface, it can strengthen the structure with minimal change to its weight and dimensions. FRP offers substantial improvement in solving many practical problems that conventional materials cannot solve in order to provide a satisfactory service life of the structure. Unlike conventional steel materials, FRP is corrosion resistant. The beneficial characteristics of using FRP in concrete construction include its high strength-to-weight ratio, low labour requirement, ease of application, reduced traffic interruption during repair, and cost reductions in both transportation and in-situ maintenance over a long-term strategy. Its high damping characteristic also attracts structural engineers to use these materials for seismic retrofitting.

Due to the increasing use of FRP-plate bonding techniques in strengthening civil concrete structures, interest in finding a suitable means of monitoring the structural health of these strengthened structures has increased substantially. Since strengthened structures are covered by the FRP plates, the mechanical properties of the concrete cannot be measured or detected easily through conventional non-destructive evaluation (NDE) methods, such as strain measurement using surface-mounted strain gauges or extensometers, radiography, thermography and acoustic emission methods, particularly in areas with microcracks and debonds underneath the externally bonded plate. Besides, these structural inspection technologies, in certain cases, require special surface preparation or a high degree of flatness in the concrete surface. These requirements may be hard to achieve, particularly for an area that is exposed to a harsh environment.

During the 1990s, a multi-disciplinary field of engineering known as 'Smart Structures' developed into one of the most important research topics in the field. Such a structure is formed by a marriage of engineering materials with structurally integrated sensor systems, and the system is capable of assessing damage and warning of impending weakness in the structural integrity of the structure. Fibre-optic sensor technology is one of the most attractive technologies currently used in the aerospace and aircraft industry for on-line monitoring of large-scale FRP structures. The development of distributed fibre-optic sensors, which provide information on a large number of continuously distributed parameters such as strain and temperature, is of great interest in most engineering applications. The sensors are embedded into a structure to form a self-strain-monitoring system, i.e. the system can detect its own health status and send response signals to operators during any marginal situation in service. The embedded sensor, due to its extremely small physical size, can provide this information with high accuracy and resolution without influencing the dimensions and mechanical properties of the structure. Fibre-optic sensors present a number of advantages over conventional strain measuring devices: (a) providing an absolute measurement that is insensitive to fluctuations in the irradiance of the illuminating source; (b) enabling the measurement of strain at different locations along one single optical fibre by using multiplexing techniques; (c) having a low manufacturing cost for mass production; and (d) the ability to be embedded inside a structure without influencing the mechanical properties of the host material.

The development of 'Smart materials and structures' has been driven by a strong demand for high performance in recent years. A system integrated into a structure and able to monitor its host's physical and mechanical properties, such as temperature and strain, during service is known as a 'Smart structural health monitoring system'. The term smart material and structure is widely used to describe the unique marriage of material and structural engineering with fibre-optic sensing and actuation control technology. A smart structure is constructed of materials that can continuously monitor their own mechanical and physical properties, and can thereby assess damage and warn of impending weakness in structural integrity. This design concept results in improved safety and in economic benefits through weight saving and the avoidance of over-design of the structure in the long run. Fig. 1 schematically illustrates the possibilities created by the confluence of the four disciplines: a structure equipped with actuating, sensing and neural networking systems forms a new class of adaptive structures. A structure with an integrated sensor or actuator system is able to provide self-structural health monitoring or an actuating response, respectively. If both systems are integrated into one structure, the sensors and actuators can act as nervous and muscular systems, like those of a human body, sensing conditions such as mechanical strain and temperature of the structure (a smart structure) and providing control of changes in stiffness, shape and vibration mode (a controlled structure). The combination of these two systems in one structure is called a 'Smart adaptive structure'. Such a structure with a built-in neural networking system, like a brain, can then evaluate its own condition based on changes in structural parameters, thermal conditions and the ambient environment, and make an appropriate mechanical adjustment. This structure is commonly called an 'Intelligent adaptive structure'.

1.1 BACKGROUND OF THE STUDY

There has been unprecedented development in the fields of optoelectronics and fiber optic communications. This, in turn, has brought about a revolution in telecommunications and various other industries, made possible by high-performance, reliable communication links with a low cost per unit of bandwidth.

Optical fibers have numerous advantages and some disadvantages. The advantages include their small size, resistance to electromagnetic interference and high sensitivity. On the other hand, some of its disadvantages are their high cost and unfamiliarity to the end user. But its great advantages completely overshadow its minor disadvantages. So, in this study an attempt is being made to compare the modern age fiber optic sensors with the conventional sensors. Also, with the aid of the case studies, the impact of fiber optic sensor technology on monitoring of civil structures is studied (McKinley and Boswell 2002).

1.2 PROBLEM STATEMENT

In the past, various kinds of sensors have been used in civil engineering for measuring temperature, pressure, stress, strain and other parameters. As optical fiber sensors spread their wings, civil engineering is bound to gain a great deal from these modern sensors.

Presently, a number of problems exist with civil infrastructure. Civil infrastructure such as bridges has a long service life that may amount to several decades or even a hundred years. During this period, these structures suffer from corrosion, fatigue and extreme loading. Since concrete is the predominant material in such infrastructure, its degradation is a major issue all over the world.

The amount of degradation, and the time at which it starts, depend on various factors, but degradation itself is inevitable. In order to keep these civil structures in good condition, it is therefore necessary that their condition be monitored and adequate steps be taken. We need sensors which can monitor these structures throughout their service lives. In this study, the impact of fiber optic sensors on civil structures is therefore studied.

1.3 OBJECTIVES

There are a few objectives that are planned to be achieved by the end of this project. These are:

A general discussion on the present state of structural monitoring and the need of fiber optic sensors in this field

A general study on Comparison between Conventional Sensors and Optical Fiber Sensors

Review of Case Studies on Fiber Optic Sensors application in Civil Engineering Structures

1.4 WORK PLAN

Discussion, reading and observation

Problem identification through reading, discussion and observation of the area studied

Understand and identify the background of problem

Studying feasibility and needs to carry out the investigation

Identification of the Title for the project

Identify the aim, objective and scope of the project

Literature Review

Understanding the background of the problem

Understanding the history of the sensor technology in structural monitoring

Carrying out literature survey on generic technologies of sensors for concrete structures

Identify the types of sensor involved in the monitoring of structures in civil engineering

Identify the technique used and the working principle for each type of sensors (in particular optical fiber sensors)

Case Study

Choose the relevant and related case study for discussion

Describe important aspects of case study

Analyze the use of sensors in the case study

Discussion, Conclusion and Recommendations

Discuss the similarities and differences

Discuss the technical facets of sensor application

Draw the overall conclusion for this project

Give some recommendations for future work

Chapter 2: APPLICATIONS

These days the fiber optic sensors are being used for a variety of applications, the most prominent of them being:

Measurement of rotation and acceleration of bodies

Measurement of electric and magnetic fields

Measurement of temperature and pressure of bodies

Measurement of acoustics and vibrations of various bodies

Measurement of strain, viscosity and chemical properties of materials

Measurement of surface condition and tactile sensing

Measurement of sound, speed and proximity of bodies

Determination of color and weight of different objects

Measurement of linear and angular positions, which is widely utilized in civil engineering structures

2.1 ADVANTAGES OF FIBER OPTIC SENSORS

As with any other technology, there are both advantages and disadvantages to using fiber optic sensors. The prominent advantages are:

Fiber optic sensors are lightweight and this is of great importance in case of engineered structures

Fiber optic sensors are of smaller size as compared to the traditional sensors

Also, fiber optic sensors consume less power as compared to the traditional sensors

Along with this, these sensors show high resistance to electromagnetic interference as compared to the traditional sensors

On top of this, fiber optic sensors enjoy high bandwidth and high sensitivity as compared to their traditional counterparts

Fiber optic sensors are usually embedded in objects and due to this, these sensors can gain access to areas which till date remain inaccessible with the aid of traditional sensors

Also, these sensors are accurate over a greater dynamic range as compared to the traditional sensors

Fiber optic sensors are also capable of being multiplexed which again is a further advantage over their traditional counterparts

Also, fiber optic sensors are capable of distributed measurements which gives them an edge over and above the traditional sensors

Last but not the least, they also show greater environmental ruggedness as compared to the traditional sensors

2.2 DISADVANTAGES OF FIBER OPTIC SENSORS

But all this is just one side of the coin. Although these advantages might suggest that fiber optic sensors are far more advanced than the traditional ones, that is not the whole picture. Fiber optic sensors also have some disadvantages, which have somewhat curtailed their adoption in today's world. The major disadvantages of fiber optic sensors are:

Fiber optic sensors are quite costly as compared to the traditional sensors. Due to this, many people still consider traditional sensors to be a better option in cases where cost is a major consideration.

Secondly, these sensors have come into prominence only in the last two decades. Due to this, people appear to be somewhat less educated regarding their usage and operations. And this unfamiliarity with the usage of these sensors, has proved to be a major hurdle in being able to capture the whole market.

Also, these sensors are considered to be more fragile as compared to the traditional sensors which raises a question over their adaptability in extreme conditions

Also, fiber optic sensors suffer from the inherent ingress/egress difficulty

Fiber optic sensors usually have a non-linear output which is a cause for concern in some applications

From the above discussion, we can see that, as is the case with any other new technology, fiber optic sensors have both merits and demerits. What is worth noting, however, is that the advantages of this technology clearly outweigh its disadvantages. Moreover, the demerits mentioned here are likely to diminish as the technology develops and gains more prominence.

2.3 APPLICATIONS IN CIVIL ENGINEERING

Now we come to the discussion of the need for and applications of fiber optic sensors in civil engineering structures. The monitoring of civil structures has great significance in today's world. Today, we not only need to construct reliable and strong civil structures, but we also need to monitor these structures in order to ensure their proper functioning and their safety. With the aid of the monitoring of various parameters of a structure, we gain knowledge about the state of the building and can use this data to plan the maintenance schedule for the structure (McKinley, 2000). This data can also give us an insight into the real behaviour of the structure, allowing us to make important decisions regarding the optimization of similar structures to be constructed in the future.

The maintenance of the structures can be approached in one of the two ways, namely:

Material point of view- In this approach, monitoring is concentrated on local properties of the materials which are used in the construction. In this approach, we observe the behavior of the construction materials under the conditions of load, temperature etc. In this approach, short base length sensors are usually utilized. Also, it is possible to get the information about the whole structure with the aid of extrapolation of the data obtained from these sensors.

Structural point of view- In this approach of measurement, the structure is viewed from a geometrical point of view. In this approach, long gauge length sensors appear to be the ideal choice. In this approach, we will be able to detect material degradation only if this material degradation has an impact on the form of the structure.

In recent years, most of the research work carried out in the field of fiber optic sensors has concerned material monitoring rather than structural monitoring. It is also worth mentioning that more sensors are required for material monitoring than for structural monitoring.

We know that civil engineering requires sensors that can be embedded in concrete, mortars, steel, rocks, soil, road pavements and so on, and can measure various parameters reliably. These sensors should also be easy to install and should not hamper the construction work or the properties of the structure in any way. It is also common knowledge that civil engineering sites involve unavoidable dust, pollution, electromagnetic disturbance and unskilled labour. The sensors used in these cases therefore need to be rugged, inert to harsh environmental conditions, and simple enough that their installation can be carried out by unskilled labour. Along with all this, it is imperative that these sensors survive a period of at least ten years so that they allow constant monitoring of the ageing of the structure. Thus, fiber optic sensors can prove to be very useful in civil engineering applications and structures. In the past, various kinds of sensors have been used in civil engineering for measuring temperature, pressure, stress, strain and so on, and as optical fiber sensors spread their wings, civil engineering is bound to gain a great deal from these modern sensors (Vurpillot et al., 1998).

Chapter 3: LITERATURE REVIEW ON FIBER OPTIC SENSORS

Fiber optic sensors are of many kinds, but they can be broadly classified into two types, namely, extrinsic fiber optic sensors and intrinsic fiber optic sensors. There is a great deal of difference between these two types of fiber optic sensors and this difference is discussed in detail below.

3.1 EXTRINSIC FIBER OPTIC SENSORS

This type of fiber optic sensor is also known as a hybrid fiber optic sensor.

As shown in the figure above, an input fiber carries light into a black box, where information is impressed upon the light beam. This can be done in various ways; usually the information is impressed upon the light beam in terms of frequency or polarization. The light, now carrying the information, is guided by an optical fiber to an electronic processor, where the information brought along by the fiber is extracted and processed (Vurpillot et al., 1998). Although separate input and output fibers can be used, in some cases it is preferred that the same fiber serve as both the input and the output fiber.

3.2 INTRINSIC FIBER OPTIC SENSORS

The other type of fiber optic sensor is the intrinsic fiber sensor. An example of an intrinsic fiber sensor is shown in the figure below. The working of intrinsic fiber sensors differs somewhat from that of extrinsic fiber sensors: in intrinsic sensors, the light beam is modulated within the fiber itself, and we rely on this modulation in the fiber in order to carry out the measurement.

In the figure above, we can see an intrinsic fiber sensor, also known as an all-fiber sensor. The two types of sensor are compared below.

Intrinsic fiber optic sensors: the fiber itself acts as the sensing medium, and the light never leaves the fiber.

Extrinsic fiber optic sensors: the fiber does not act as the sensing medium; it merely serves as a light delivery and collection system. The light leaves the fiber, is altered in some way, and is then collected by another fiber.

3.3 INTENSITY BASED FIBER OPTIC SENSORS

While various kinds of fiber optic sensors exist today, the most common is the hybrid type, which depends upon intensity modulation in order to carry out its measurements (Zako et al., 1995).

In the figure below, we can see a vibration sensor. This vibration sensor uses two optical fibers placed close to one another.

The functioning of this fiber optic sensor is quite simple. Light enters through one fiber and, when it exits from the fiber's end face, it spreads out in the form of a cone. The angle of this cone depends on two parameters:

Firstly, it depends on the index of refraction of the core

Secondly, it depends on the index of refraction of the cladding of the optical fiber

Also, the amount of light captured by the second optical fiber depends on a number of factors.

The prominent factors on which the amount of captured light depends are:

It depends on the acceptance angle of the receiving fiber

It also depends on the distance "d" between the optical fibers (a simple numerical sketch of these relationships is given directly after this list)
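To make these relationships concrete, the short Python sketch below computes the numerical aperture and exit-cone half-angle from the core and cladding refractive indices, and uses a deliberately simplified geometric model to show how the fraction of light captured by the second fiber falls as the gap "d" grows. The index values, core radius and gaps are assumed for illustration only; they are not taken from the study.

    import math

    # Illustrative only: simplified geometric model of the two-fiber sensor.
    n_core = 1.48        # assumed core refractive index
    n_clad = 1.46        # assumed cladding refractive index
    core_radius = 50e-6  # assumed core radius (m)

    # Numerical aperture and acceptance (exit-cone) half-angle of the fiber
    na = math.sqrt(n_core**2 - n_clad**2)
    theta = math.asin(na)  # half-angle of the exit cone (radians)

    def coupled_fraction(d):
        """Crude estimate of the fraction of light captured by the second
        fiber across a gap d (m): the exit cone widens with d, so the
        captured fraction falls."""
        spot_radius = core_radius + d * math.tan(theta)
        return (core_radius / spot_radius) ** 2

    for d in (0.0, 25e-6, 50e-6, 100e-6):
        print(f"gap {d*1e6:5.1f} um -> captured fraction {coupled_fraction(d):.3f}")

In a vibration sensor of this kind, any vibration that changes the gap or alignment of the two fibers therefore shows up directly as a change in the received intensity.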

Another type of fiber optic sensor is the flexible mounted mirror sensor. The important characteristics of this sensor are:

In this case, a mirror is mounted which is used to respond to external parameters such as pressure.

The modulation in intensity is caused by shifts in the mirror position.

These sensors are used in a variety of applications such as door closures. In a door closure, a reflective strip is used.

These sensors are used to measure small variations and displacements

3.4 LINEAR POSITION SENSORS

In today’s world, linear position sensors have become widely applicable. They are being used for various purposes (Zako et al., 1995). In many of the linear positioning sensors, wavelength division multiplexing is used. An illustration of the linear position sensor is shown in the figure below.

The various components of this linear position sensor are:

It consists of a broadband light source

It consists of various detectors as shown in the figure above

It also consists of a wavelength division multiplexing element, which acts as the principal component of this instrument.

It also consists of an encoder card

In the example above, a broadband light source is utilized. The light from this broadband source is carried to the wavelength division multiplexing system with the aid of a single optical fiber. The wavelength division multiplexing system is then used to determine the linear position.
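One plausible way to read position in such a wavelength-division scheme is sketched below; this is an assumption for illustration, not a description of the specific instrument above. Each wavelength channel is taken to interrogate one track of a digitally coded (here Gray-coded) encoder card, so the set of high and low detector readings identifies the card position.

    # Illustrative sketch: decode a position from per-wavelength detector
    # readings, assuming each channel reads one bit of a Gray-coded card.

    def gray_to_binary(gray: int) -> int:
        """Convert a Gray-coded integer to its binary value."""
        binary = gray
        while gray:
            gray >>= 1
            binary ^= gray
        return binary

    def decode_position(channel_levels, threshold=0.5):
        """channel_levels: detector readings, one per wavelength channel,
        most significant bit first. Returns the decoded card position."""
        gray = 0
        for level in channel_levels:
            gray = (gray << 1) | (1 if level > threshold else 0)
        return gray_to_binary(gray)

    # Example: four wavelength channels returning high/low light levels.
    print(decode_position([0.9, 0.1, 0.8, 0.2]))  # Gray code 1010 -> position 12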

Another linear motion sensing method which is very widely used today and is quite similar to the method discussed above is known as the time division multiplexing method. This method is illustrated with the aid of a figure shown below.

In this method, a light pulse is used instead of a broadband light source. The signals returned from the encoder card arrive at different times and are combined, and the resulting net signal is used to read off the position of the encoder card, as the sketch below illustrates.
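A minimal sketch of the timing relation that such a pulsed, time-division scheme relies on (and that also underlies the OTDR technique mentioned in the abstract): the round-trip delay of each returned signal maps to a distance along the fibre, which is how returns from different points can be told apart. The group index used below is an assumed typical value for silica fibre, not a figure from the study.

    # Illustrative only: convert round-trip pulse delay to fibre distance.
    C = 2.998e8        # speed of light in vacuum (m/s)
    N_GROUP = 1.468    # assumed group refractive index of silica fibre

    def delay_to_distance(round_trip_delay_s: float) -> float:
        """Distance along the fibre at which a reflection occurred,
        given the measured round-trip delay of the returned pulse."""
        return C * round_trip_delay_s / (2.0 * N_GROUP)

    # A pulse returning after 100 ns was reflected from roughly 10.2 m away.
    print(f"{delay_to_distance(100e-9):.2f} m")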

The main areas in which these intensity based fiber optic sensors have found application are:

In commercial aircrafts

In military aircrafts

In these applications, these modern sensors have performed quite well and are on par with conventional sensors. Because of the various advantages they enjoy over conventional sensors, these modern sensors are likely to replace conventional sensors in the years to come.

3.5 LIQUID LEVEL SENSORS

This is another type of intensity-based fiber optic sensor. Its functioning relies on the principle of total internal reflection, so the refractive index of the glass sensing tip relative to that of the surrounding medium plays the pivotal role.
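A minimal numerical sketch of this principle, using assumed refractive indices and incidence angle rather than values from the study: when the sensing tip is in air, the incidence angle exceeds the critical angle and the light is totally internally reflected back along the fibre; immersion in a liquid raises the critical angle, so the light escapes and the returned intensity drops, signalling that liquid has reached that level.

    import math

    # Illustrative only: total internal reflection at the sensor tip.
    n_glass = 1.50             # assumed index of the glass tip
    incidence_angle_deg = 45.0 # assumed angle of incidence at the boundary

    def critical_angle_deg(n_outside: float) -> float:
        """Critical angle at the glass/outside-medium boundary; returns
        90 degrees (no total internal reflection) if n_outside >= n_glass."""
        if n_outside >= n_glass:
            return 90.0
        return math.degrees(math.asin(n_outside / n_glass))

    for medium, n in (("air", 1.00), ("water", 1.33)):
        tir = incidence_angle_deg > critical_angle_deg(n)
        state = "light reflected back (no liquid)" if tir else "light escapes (liquid present)"
        print(f"tip in {medium}: critical angle {critical_angle_deg(n):.1f} deg -> {state}")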

These sensors can be utilized for a variety of purposes. The most prominent of its applications are:

Measurement of pressure changes in gels

Measurement of pressure changes in various liquids

Measurement of refractive index changes in gels

Measurement of refractive index changes in different types of liquids

Measurement of the level of a liquid in a vessel and this application is utilized in various industries to measure liquid levels

These sensors have an accuracy of about 5 percent and are gaining importance in various industries for their usefulness.

3.6 SOFO SENSORS

SOFO sensors are fiber optic sensors used for strain measurement. They have become quite popular owing to their inherent merits, and of all the fiber optic sensors they are the ones most extensively used today, for example to measure curvature and various other parameters in large civil structures. These sensors are interferometric (Vurpillot et al., 1998) and are able to measure parameters in an absolute manner using low-coherence light. Their important properties are:

These fiber optic sensors enjoy a high resolution: the resolution of these sensors is 2 µm

These sensors can be of varied lengths. Their length can be as small as 0.2m or can be as large as 20m.

Also, these sensors have the property of being temperature compensated (a short numerical illustration of what the quoted resolution means in terms of strain follows this list)
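The following short calculation is based only on the figures quoted above: a length resolution of 2 µm translates into different strain resolutions depending on the gauge length of the sensor, across the stated 0.2 m to 20 m range.

    # Illustrative arithmetic: strain resolution = length resolution / gauge length.
    LENGTH_RESOLUTION = 2e-6   # 2 um, as stated above

    def strain_resolution(gauge_length_m: float) -> float:
        """Smallest resolvable strain for a given gauge length (m)."""
        return LENGTH_RESOLUTION / gauge_length_m

    for gauge in (0.2, 2.0, 20.0):   # gauge lengths within the 0.2-20 m range above
        print(f"{gauge:5.1f} m gauge -> {strain_resolution(gauge)*1e6:6.2f} microstrain")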

The SOFO system setup consists of a number of components. The main components of the SOFO system setup are:

It consists of a fiber optic sensor, which forms the crux of the monitoring system and is its most important component; the sensor is built as a sensor chain with partial reflectors.

One terminal of this sensor is connected to the coupler

Another terminal of the sensor chain with partial reflectors is connected to the LED.

The coupler in turn is connected to the photo diode and a mobile mirror.

This whole portable reading unit is connected to a portable computer terminal. This ensures that the whole monitoring system can be taken to the location and used directly on site.

These sensors can be utilized in two ways. They can either be embedded in the structure at the time of its construction, or they can be anchored externally to measure the various parameters.

Although the performance of the sensors remains the same in both cases, that is, embedding and external anchoring, embedding is preferred in modern smart structures (Perez, 2001).

This is because embedded sensors measure the parameters continuously and are easy to manage, whereas in older structures, where embedding is not an option, external anchoring is used.

Chapter 4: CASE STUDIES
Case study 1: Monitoring of San Giorgio pier

San Giorgio pier is a massive concrete structure. Its length is about 400 metres. It