Application Layer Protocols in TCP/IP Model

Chapter 1: Introduction
1.1: Background

Information is deemed to be the backbone of business stability and sustainability in the twenty-first century, irrespective of the size of the business, as argued by Todd and Johnson (2001)[1]. This is because the growing use of information technology, and the dependence on communication over the Internet between business entities in geographically separated locations, rests on the effective use of secure communication strategies built on the TCP/IP model. Electronic transactions across the Internet not only save the costs associated with traditional business processes but also make transfers instant, overcoming the time constraints associated with the procurement and distribution of goods and services. This prospect has made it critical for organizations to adopt secure communication methodologies as part of a sustainable communications strategy. Furthermore, an organization must also conform to legal requirements on security infrastructure so as to protect the privacy of personal and sensitive information belonging to the individuals concerned, as argued by Todd and Johnson (2001). A secure communication infrastructure is thus essential for an organization to harness the potential of Information Technology effectively.

Public Key Infrastructure (PKI) is an increasingly widely used method of authenticating data communication, employed by various application layer protocols for secure client-server communication across the Internet (Nash et al., 2001[2]). Protocols such as HTTP (Hypertext Transfer Protocol), SSL (Secure Sockets Layer) and TLS (Transport Layer Security) aim to provide the desired level of security for the data being communicated, yet there remains potential for hacking and unauthorised access to the authentication information while any of these protocols is in use. The increasing level of attacks on servers, in which unauthorised users access sensitive information despite the existence of these protocols, has increased the need to assess their weaknesses: the areas where a hacker can attack the data communication, decipher the intercepted information and eventually abuse it for personal gain. This report presents a critical overview of the areas of weakness of the application layer protocols of the TCP/IP model in the light of PKI.

1.2: Aim and Objectives

Aim: The aim of this report is to identify the key weaknesses of the application layer protocols of the TCP/IP model in the implementation of the PKI for secure data communication over the Internet.

Objectives:

The above aim is accomplished by steering the research conducted in this report towards the following objectives:

To conduct a background overview of PKI and the five layers of the TCP/IP model.
To conduct a critical overview of the key components that enable effective authentication and secure communication using a given protocol in the PKI infrastructure.
To perform an analysis of the key application layer protocols used in the TCP/IP model when implementing the PKI architecture.
To assess the SSL/TLS protocol and its key weaknesses in terms of the areas where potential attacks by an unauthorised user or hacker are possible without the knowledge of the user.
To assess the Secure Electronic Transaction (SET) protocol and its key weaknesses in terms of the components of the protocol that can be manipulated by hackers for unauthorised access to personal and sensitive information.
1.3: Research Methodology

A qualitative approach to the research is taken, analysing published information on the protocols and the RFC (Request for Comments) documents that define them. This approach is deemed effective because testing the protocols directly would require a substantial commitment of resources and funds to establish the infrastructure needed to simulate a test environment capable of producing reliable results. Secondary research resources from journals, books and other web resources are used to construct the analytical arguments of this report.

1.4: Research Scope

The TCP/IP communication model is deemed to be a critical platform for effective communication across the Internet. As each of the five layers of the TCP/IP model can be implemented using a variety of protocols, the scope of this research is restricted to the application layer protocols in the light of implementing PKI. The full landscape of protocols across the five layers is extensive, and analysing all of them would require not only resources and funds but also more time than is available for this research. Hence the scope is restricted to the application layer protocols of the TCP/IP model, focusing specifically on the TLS and SET protocols.

1.5: Chapter Overview
Chapter 1: Introduction

This is the current chapter that introduces the reader to the aim, objectives, research methodology and scope of the research conducted in this report. This chapter is mainly to set the stage for the research presented in this report.

Chapter 2: Literature Review

This chapter presents a critical analysis of the Public Key Infrastructure (PKI). The overview throws light on the key components of PKI along with the benefits and constraints associated with its implementation. This is followed by a review of the five layers of the TCP/IP model, intended to provide insight into the various levels of security implemented within the model before the application layer security components are analysed. The review of the application layer components focuses on the technical elements associated with the implementation of a protocol and its authentication process, such as the algorithms and authentication methods used. This review forms the basis for the examination of the application layer protocols in subsequent chapters, although protocol-specific components are dealt with in their respective analyses.

Chapter 3: The TLS Protocol

In this chapter a comprehensive overview of the SSL/TLS protocol architecture is presented. This is followed by an assessment of the security implementation and the major weaknesses of the protocol architecture that form potential entry points for network hackers and attacks. The analysis also presents examples from the encryption algorithms, and code samples from open-source SSL implementations, showing how code-level attacks on the SSL/TLS architecture can be conducted. The exploitation of the PKI set-up, in terms of the CA and RA, that forms the basis for man-in-the-middle attacks is also reviewed in the light of TLS encryption and the transfer of information between client and server across the Internet. The chapter concludes with a review of client- and server-side attacks in the web application environment, addressing the rising concern that weaknesses of SSL/TLS are being exploited by hackers, eventually affecting electronic commerce transactions. The chapter also reviews TLS weaknesses relating to short public keys, 40-bit bulk encryption keys, anonymous servers, the authentication procedure, the authentication algorithms and the relative weaknesses of choosing one algorithm over another. The research further examines the cryptographic functions and their role in the security infrastructure implemented using the protocol.

Chapter 4: The SET protocol

Like chapter 3, this chapter commences with a comprehensive overview of the SET architecture and its implementation procedure in the electronic commerce environment. This is followed by a code-level analysis of the major areas of weakness in the protocol's encryption strategy and the principal issues that leave room for hackers to decrypt, and even alter, the information. The chapter then reviews the weaknesses of the SET architecture and its encryption in terms of vulnerability to intrusion, spoofing, PKI implementation issues and its use over the UDP protocol.

Chapter 5: Discussion and conclusion

This chapter commences with a discussion on the research conducted in chapters 3 and 4. The discussion aims to summarise the key weaknesses and the extent to which they can be overcome using security measures in terms of authentication algorithms, certificates etc. This is followed by a review of the objectives in the research in order to identify the consistency of the research conducted against the objectives set at the beginning of the report. The chapter then concludes the research followed by recommendations on further research on the topic.

Chapter 2: Literature Review
2.1: Security Trends

Todd and Johnson (2001) argue that the early Internet applications intended for electronic commerce and information sharing, although capable of delivering the desired service, were seriously lacking in security, both in protecting the information being transferred and in preventing the abuse of stolen information by unauthorised users for personal gain. This naturally made security a priority that affected the growth of electronic commerce from the dawn of the Internet in the twentieth century. Todd and Johnson (2001) further argue that with the increase in the availability of network access, security became a matter of creating a hardened outer wall, i.e. preventing unauthorised access to the information systems, rather than relying on access control implemented on individual information systems exposed directly to the network.

Encryption is used extensively for data protection: securing information in transit from sender to receiver in a network environment, as well as securing information at rest on the server or client computer where it resides. This is a natural response to the increase in security infringements in which unauthorised users hack into the communication channel, resulting in the loss of sensitive information (Burnett and Paine, 2001[3]). There are numerous methods of encrypting information to enable secure communication between sender and receiver over the Internet, two of which are deemed popular: symmetric and asymmetric cryptography (Burnett and Paine, 2001). The former is synonymous with private key encryption, where a single secret key shared between the communicating parties is used to encrypt and decrypt the information being transferred. Its major weakness is the threat of losing the private key: once discovered, the key renders the strategy ineffective, exposing the communication channel and the information being transferred to the hacker or intruder who has gained unauthorised access (Nash et al., 2001).
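The shared-key scheme described above can be sketched in a few lines of Python. This is a toy construction for illustration only (a SHA-256-derived keystream XORed with the message), not a production cipher; it shows that the same secret both encrypts and decrypts, so anyone who obtains the key can read the traffic.

```python
import hashlib

def keystream(secret: bytes, length: int) -> bytes:
    """Derive a repeatable keystream from the shared secret (toy construction)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor_crypt(secret: bytes, data: bytes) -> bytes:
    """XOR with the keystream is its own inverse: one function encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(secret, len(data))))

shared_key = b"secret known to both sender and receiver"
ciphertext = xor_crypt(shared_key, b"transfer 100 GBP to account 12345")
plaintext = xor_crypt(shared_key, ciphertext)   # the same key recovers the message
```

The symmetry of `xor_crypt` is precisely the weakness noted above: possession of `shared_key` is the only barrier between an eavesdropper and the plaintext.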

2.2: Public Key Infrastructure – an overview

The asymmetric cryptography mentioned in section 2.1 forms the basis of the Public Key Infrastructure (PKI). This is an encryption strategy involving a public and a private key: the public key is used for encryption by users in the public domain to send information to the server, which alone uses the private key to decrypt the information and authenticate the user (Todd and Johnson, 2001). PKI is a successful and widely trusted approach to secure communication, relying on Trusted Third Party authentication and approval of the overall data communication process.

The key components that form a successful PKI are described as follows:

Certificate Authority (CA) – The CA is deemed to be the controller for issuing the public key and the digital certificate, and for their verification while communication is being established between a sender and a receiver. The role of the CA is to generate the public and private keys for a given user, alongside issuing and verifying the digital certificate. The CA's effective operation is therefore pivotal to successful and secure communication between server and client in a PKI environment. The CA is typically a company, or group of companies, independent of the users and organizations involved in the communication, thus playing a Trusted Third Party role that enables security through independent verification of the digital certificates (Todd and Johnson, 2001).
Registration Authority (RA) – The RA acts as the verifier for the certificate authority before a digital certificate is issued to a requestor. This independent authorisation step is both a key security measure of the PKI and, as later chapters show, a key weakness of the overall strategy. PKI itself is deployed to enable a secure handshake that establishes an exclusive (secure) communication channel between sender and receiver (Burnett and Paine, 2001). It is in this handshake that the CA and the RA play their pivotal role, verifying the validity of the communicating parties so that the communication process can complete. For instance, a credit card transaction over the Internet requires the bank, the card-issuing authority and the payment processing authorities to independently verify the identity of the buyer from the credit card details supplied. This is conducted through the PKI handshake, where the public key provided by the vendor is accessed by the CAs and RAs to validate the transaction between buyer and vendor. In a real-world scenario the CA and RA host separate servers holding the respective public keys that are generated.
Directories – These are the locations on the Internet domain where the public keys are held. Typically the public keys are held at more than one directory in order to enable quick access to the information as well as a double check on the key retrieved in terms of its validity and accuracy.
Certificate Management System (CMS) – This is the software application that controls and monitors the overall certificate issuing and verification process. As a software package it varies from one authority to another, depending on the infrastructure chosen by the certifying authority. The CA and RA that host the directory for the public keys, and the digital certificates issued against those keys, are thus managed using a CMS.
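The public/private key pair at the heart of these components can be illustrated with a toy RSA example in Python. The primes below are deliberately tiny so the arithmetic is visible; real CAs generate keys of 2048 bits or more using vetted cryptographic libraries.

```python
# Toy RSA key pair with tiny primes -- illustrative only.
p, q = 61, 53
n = p * q                  # public modulus, part of both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

def encrypt(m: int) -> int:
    """Anyone holding the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private key (d, n) can decrypt."""
    return pow(c, d, n)
```

The asymmetry is the point: `e` and `n` can be published in a directory, while `d` never leaves its owner, which is what lets the CA model described above work at all.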

The operation of PKI in a typical banking example is presented below to enable a better appreciation of the overall PKI concept.

In the credit card transaction described under the RA above, the CA issues a digital certificate for the details supplied by the card holder, encrypted using the public key provided by the vendor, which is in turn verified by the RA before being sent to the bank. The bank holds the private key, which it uses to decrypt the information accompanying the certificate and so validate the transaction. The acknowledgement from the bank or financing institution is then encrypted using the private key and sent back to the user, who can decrypt it using the public key to view the status. In TCP/IP this process is conducted by the application layer protocols, where the data is encrypted using encryption standards consistent with those of the PKI to achieve the secure transaction process described above.

Yet another example that illustrates PKI effectively is the typical Internet banking service provided by banking institutions to their account holders. The account holder enters verification information on the bank's Internet banking site; this is encrypted using the public key stored in the public directory by an approved CA and sent to the bank, which decrypts the information for authentication and, on successful authorisation, allows the user to view the bank account. The subtle difference between authentication and authorisation is that the former is the process of establishing the connection, whilst the latter is the validation process, within the established connection, that verifies the user's right to access their specific information (Nash et al., 2001).
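The distinction between the two checks can be sketched as follows. The user names, password store and permission sets here are hypothetical, and a real system would store salted password hashes rather than plaintext passwords:

```python
# Hypothetical credential and permission stores -- illustration only.
USERS = {"alice": "correct-horse"}
PERMISSIONS = {"alice": {"view_balance"}}

def authenticate(username: str, password: str) -> bool:
    """Authentication: establish who the caller is."""
    return USERS.get(username) == password

def authorise(username: str, action: str) -> bool:
    """Authorisation: decide what the verified caller may do."""
    return action in PERMISSIONS.get(username, set())
```

A session would call `authenticate` once when the connection is established and `authorise` on every subsequent request within that connection, mirroring the two-stage process described above.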

The key security strategy is the sharing of the public key whilst retaining the private key generated simultaneously by the same algorithm, as argued by Burnett and Paine (2001). Because the private key is provided only to the requester, the requester (the bank, in the case of Internet banking) can establish an effective means of secure communication, not only verifying the user but also authenticating the server to the client using the private key, thus providing the basis for a secure communication channel. The security established using PKI is therefore predominantly dependent on the following key entities of the infrastructure:

CA and RA – The validity and reliability of the authorities involved is critical to the successful implementation of PKI. A client or user sending verification information from an unsecured computer depends entirely on the certifying authority to protect the information transferred. An attack on the server hosting the directory and the public keys used for issuing digital certificates would therefore give the hacker a suite of opportunities to abuse sensitive information, from stealing information to mounting man-in-the-middle attacks that use the initial verification information to lure further information from the user. These are discussed further in subsequent chapters.
Encryption Algorithm – The encryption algorithm used for issuing the public and private keys is the second, and most critical, element in the effectiveness of the PKI. The security is only as strong as the encryption algorithm is weak, as argued by Nash et al. (2001). The reliability and protection of the data transferred by protocols using PKI thus face a single point of failure: the weakness of the encryption algorithm used by the issuing authority to generate the keys.
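A rough back-of-the-envelope calculation illustrates why algorithm and key strength matter. The attacker speed assumed below (one billion guesses per second) is purely illustrative, not a measured figure:

```python
# Assumed attacker speed -- illustrative only.
guesses_per_second = 10**9

seconds_40_bit = 2**40 / guesses_per_second    # export-grade bulk key
seconds_128_bit = 2**128 / guesses_per_second  # modern bulk key
years_128_bit = seconds_128_bit / (3600 * 24 * 365)
# At this rate a 40-bit key space falls in under twenty minutes, while a
# 128-bit key space requires on the order of 10**22 years of search.
```

The disparity is why chapter 3 singles out 40-bit bulk encryption keys as a weakness: the algorithm may be sound, yet a short key leaves brute force within reach.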

Benefits

The major benefits of PKI include the following:

Security due to verification by Trusted Third Parties (TTPs), in the form of the CAs and RAs that issue and verify the digital certificates.
Continuous development of the algorithms that generate the public and private keys provides room to capture any weakness in an existing algorithm, which can then be fixed in the latest version being developed. The exponential rate at which electronic commerce is growing has made PKI a popular and reliable authentication process offered by well-known vendors such as Verisign (Nash et al., 2001).
The security infrastructure associated with the storage of the public keys and the issuing of the digital certificates by the CA and RA makes the verification process secure, owing to the presence of independent verification authorities in addition to the CA. This naturally limits attacks, as failure to pass authorisation at the RA terminates the connection or prevents further communication with the target computer.

Constraints, Weaknesses and threats

The involvement of TTPs increases the costs associated with infrastructure set-up and maintenance (Todd and Johnson, 2001). This naturally affects the overall development and continuous security verification process, as the verification authorities face high maintenance costs for the security measures applied to storage and communication.
The encryption applied by the communication protocols does not protect against communication interference, so changes to the header contents, made by monitoring the network traffic, are plausible and can result in network attacks on the client computer. The man-in-the-middle attack mentioned earlier is the classic example: the ability to mask the header information of the data packets enables the hacker to mislead the Internet user into revealing sensitive information, without the user's knowledge that he or she is actually communicating with the hacker rather than the intended vendor or provider. This is dealt with at the encryption and algorithm level in chapters 3 and 4.
The weakness of the encryption algorithm used for generating the keys and the digital certificates is yet another issue that threatens the security enforced by PKI. The encryption applied when issuing a digital certificate for authentication protects only the data itself, so destination and source details can be altered by hackers to spoof the parties involved into divulging sensitive information. As an algorithm's weakness is chiefly exposed by advances in hacking methods, continuous research and development is needed to ensure that the encryption algorithm implemented remains secure, which in turn necessitates a commitment of funds and resources.
The authentication algorithm used by the CA and the RA is a further area of weakness in the security infrastructure implemented using PKI. The authentication algorithm is not merely the encryption algorithm used for generating keys and issuing digital certificates; it also covers the process of authenticating the CA and RA to the vendor's or receiver's server with the details of the user or client. This naturally gives hackers room to attack the data communication at the authentication level if an attack on the encryption algorithm used for key generation is unsuccessful (Nash et al., 2001).
The fact that PKI can be implemented successfully only within the TCP/IP model makes it a weak or inapplicable security strategy for protocols that are not supported at the TCP/IP application layer. This limits PKI to the few application layer protocols of the TCP/IP stack that can use it for secure data communication.
2.3: TCP/IP Model

Blank (2004, p. 1)[4] argues that ‘TCP/IP is a set of rules that defines how two computers address each other and send data to each other’. TCP/IP is thus the communication framework that dictates the methods to be deployed to achieve secure communication between two computers. Rayns et al. (2003)[5] further argue that the use of TCP/IP in network communication is mainly due to the platform independence of the framework and the room it leaves for the development of new protocols and encryption methodologies in each of the five layers of the model. TCP/IP forms the standard for a protocol stack in which multiple protocols work together to enable secure communication. This is the primary architectural feature that makes the TCP/IP standard popular: security can be implemented at multiple levels of the communication stack by introducing protocols at each layer of the model (Rayns et al., 2003). An overview of the layers of the TCP/IP model and the various elements of security implemented in them is presented below.

The five layers of the TCP/IP model are:

Application Layer
Transport Layer
Network Layer
Data Link Layer and
Physical Layer.

The stack of layers in the TCP/IP model, and the key protocols normally used in each layer of the suite's communication framework, is shown in fig 1 below.

From fig 1 it is evident that a TCP/IP implementation in a given network can be established using any number of protocols to provide security and speed of data transfer between computers. The reader should note that the protocols listed for each layer in fig 1 are merely a sample of the overall suite: the number of protocols in each layer is extensive, each with a specific application purpose as well as interoperable and scalable properties, as argued by Rayns et al. (2003).

Blank (2004) further argues that the layers are arranged logically: towards the top sit the protocols associated with user applications, such as HTTP, SSL, BOOTP and SMTP, which encrypt data to form the payload of the packets transferred by the TCP/IP protocol stack, whilst the bottom layers, with protocols such as ARP and Ethernet, provide the actual procedures for authentication and for establishing connections between the computers in the network. The application developer can therefore quickly identify the protocol appropriate for a given communication purpose at the desired level of data granularity.

This level of abstraction also allows the user to isolate the encryption and security of the data from the actual process of transferring information from computer to computer. Effective implementation, both of protocols to encrypt data and of secure information transfer between computers, is thus achievable by choosing the right combination of protocols from each layer to form the protocol stack of the TCP/IP suite (Blank, 2004). Each of the five layers is discussed in detail below.

Application Layer – This is the topmost layer of the TCP/IP stack, providing user applications with a suite of protocols for encrypting and communicating information from one computer to another. The application layer is also the level at which the web applications and the business logic associated with the data transfer are incorporated prior to encryption. From fig 1 it can be seen that the SSL/TLS and Secure Electronic Transaction (SET) protocols are not shown at the application layer. This is because these protocols do not themselves belong to the user application, nor are they dedicated to a client interface; they are independent protocols that encrypt the information being sent by one of the application layer protocols. Their position is therefore actually between the application layer and the transport layer of the TCP/IP stack, where encryption of the data is completed before the information is handed to one of the transport layer protocols. Nevertheless, because security protocols such as SSL and TLS encrypt the data being communicated prior to transport by the appropriate transport layer protocol, they are classified as application layer protocols (Blank, 2004). The role of the application layer in the TCP/IP stack is thus to enable interaction between the front end, or user interface, of the applications on the client computer and the transfer of information from one computer to another. One can therefore argue that the application layer protocols are predominantly used in client-server communication applications where data is transferred between client and server in full-duplex mode (Feit, 1998[6]).
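The layering described here is visible in Python's standard library, where a TLS context is created separately from, and then wrapped around, an ordinary transport-layer socket. The defaults shown below reflect the certificate checks that CA-issued certificates make possible; this is a minimal sketch, not a complete client.

```python
import ssl

# A client-side TLS context from Python's standard library. The defaults
# require a valid certificate chain and a matching hostname -- precisely
# the verification that the PKI components described earlier provide.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED   # peer cert must verify
assert context.check_hostname                     # name must match the cert

# Wrapping an ordinary TCP socket then layers the TLS handshake and
# record encryption between the application data and the transport:
#   tls_sock = context.wrap_socket(sock, server_hostname="example.org")
```

The commented `wrap_socket` call is where the protocol sits "between" the layers: the application writes plaintext, the wrapped socket emits TLS records, and the underlying TCP socket carries them unchanged.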

Transport Layer – The transport layer provides end-to-end message transfer capabilities independent of the underlying network, alongside error tracking, data fragmentation and flow control, as argued by Feit (1998). It is in the transport layer that the header information for the packet is added, i.e. the details of the fragment of the overall data being transferred from the source computer to the receiver. The header therefore contains details such as the position of the packet in the overall data sequence and the source and target addresses, enabling the network routers to deliver the packet to the appropriate destination computer.

The two major classifications of the transport layer, in terms of transmission of information and connectivity, are:

Connection-Oriented Implementation – This is accomplished using TCP (Transmission Control Protocol), where a connection must be established between the two communicating computers, in conformance with the authentication and association rules, before data transfer can begin. Feit (1998) further argues that data transfer in a connection-oriented implementation completes successfully only while the established connection is live and active between the two computers. A connection must therefore be established using sessions, ensuring security by terminating the session on user inactivity and by providing facilities for authentication to the desired security level. The implementation of PKI is one of the security strategies accomplished using the connection-oriented facilities of the transport layer to enable secure communication between client and server. The header of each packet must consequently contain session details to ensure that the transmission is indeed part of the established reliable byte stream.

The key security-related aspects of the connection-oriented approach include:

Sequential data transfer – the data received by the target computer arrives in the same order in which it was transmitted. Implementing a connection-oriented strategy for large data transfers can therefore hamper performance in terms of speed and session time-out issues.
Higher level of error control – the connection-oriented approach ensures that there is a live communication channel between sender and receiver throughout the transmission process, controlling the loss of packets or data segments and so minimising errors in the data being transferred.
Duplication control – duplicate data is discarded and kept to a minimal level by the synchronous data transfer methodology implemented by the process.
Congestion control – network traffic is monitored effectively by the TCP protocol as part of the transport layer tasks, thus ensuring that the transmission rate is adjusted to avoid overloading the network.
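The connection-oriented behaviour described in this list can be sketched with Python's standard socket module: a connection is established before any data flows, and the bytes arrive in the order they were sent. The loopback echo server below is illustrative only.

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    """Accept one connection and echo its payload back."""
    conn, _ = server.accept()          # blocks until the TCP handshake completes
    with conn:
        conn.sendall(conn.recv(1024))

# An ephemeral loopback port keeps the sketch self-contained.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# The client must establish the connection before any data can flow.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ordered, reliable byte stream")
    reply = client.recv(1024)
```

`create_connection` performs the three-way handshake described above; only once it returns can `sendall` place bytes on the established, ordered stream.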

Annotated Bibliography: Automated Brain Tumor Detection

V. Zeljkovic et al. (2014)[1] proposed a computer-aided method for automated brain tumor detection in MRI images. The technique enables segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation. The results show 93.33% precision on abnormal images and full accuracy on healthy brain MR images. The method also reports the tumor's specific location and documents its shape. This assistive technique therefore improves diagnostic efficiency and reduces the chance of human error and misdiagnosis.

S. Ghanavati et al. (2012) [2] developed a multi-modality framework for automated tumor detection that fuses different Magnetic Resonance Imaging sequences, including T1-weighted, T2-weighted, and T1 with gadolinium contrast agent. Intensity, shape deformation, symmetry, and texture features were extracted from each image.

H. Yang et al. (2013) [3] experimented with many segmentation strategies and found that no single approach can segment all brain tumor data sets. Clustering and classification approaches are very sensitive to their initial parameters. Some clustering strategies are point operations and do not maintain connectivity among regions. Training data and the appearance of the tumor strongly affect the results of atlas-based segmentation. The edge-based deformable contour model is sensitive to the initialization of the evolving curve as well as to noise.

H. Kaur et al. (2014) [4] focused on brain tumor detection strategies, an essential vision application in the medical field. The work first presents an evaluation of a variety of well-known strategies for automated segmentation of heterogeneous image data, a step towards bridging the gap between bottom-up affinity-based segmentation techniques and top-down generative model-based strategies. The key purpose of the work is to survey ways of detecting brain tumors efficiently. The authors found that most existing techniques have ignored poor-quality images, such as images with noise or bad brightness, and that many tumor detection techniques have neglected object-based segmentation. To overcome the limits of previous work, a new strategy is offered in this research work.

I. Maiti et al. (2012) [5] offered a new method for brain tumor detection in which the watershed method is used in combination with edge detection. It is a colour-based brain tumor detection method using colour brain MRI images in HSV colour space. The RGB image is converted into an HSV colour image, which splits the image into hue, saturation, and value planes. After contrast enhancement, the watershed algorithm is applied to each plane, and a Canny edge detector is applied to the result. After combining the three images, the final segmented brain tumor image is obtained.
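The RGB-to-HSV decomposition that starts this pipeline can be sketched with Python's standard colorsys module. This is a toy per-pixel version; the paper operates on full MRI images, and the contrast enhancement, watershed, and Canny steps are omitted here.

```python
import colorsys

def rgb_image_to_hsv(rgb_pixels):
    """Convert a list of (r, g, b) pixels (0-255 each) to (h, s, v) tuples.

    This mirrors the first step of the colour-based method above:
    splitting the image into hue, saturation, and value planes before
    the later segmentation stages are applied to each plane.
    """
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for (r, g, b) in rgb_pixels]
```

For example, a pure-red pixel maps to hue 0 with full saturation and value, so the hue plane alone already separates it from grey tissue.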

M. S. R. et al. (2014) [6] proposed combining segmentation and k-means clustering for improved evaluation of MR images. The results suggest that the unsupervised segmentation techniques perform better than the supervised ones. Pre-processing is needed to prepare images for the supervised segmentation methods, and the training and testing data significantly complicate the process, whereas image analysis with the well-known k-means clustering process is straightforward compared with the fuzzy clustering techniques used.
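A minimal sketch of the intensity-based k-means clustering these entries refer to, in pure Python. Real MR segmentation works on full 2-D or 3-D images and often on richer feature vectors; the evenly spread initial centroids here are an implementation assumption, not taken from the papers.

```python
def kmeans_1d(intensities, k=3, iters=20):
    """Cluster scalar pixel intensities into k groups with plain k-means.

    Centroids start evenly spread across the intensity range; each
    iteration assigns every pixel to its nearest centroid and then moves
    each centroid to the mean of its cluster.
    """
    lo, hi = min(intensities), max(intensities)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in intensities:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # Keep a centroid in place if its cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda i: abs(x - centroids[i]))
              for x in intensities]
    return centroids, labels
```

In a tumor-detection pipeline the brightest cluster would typically be taken as the candidate tumor region before any morphological post-processing.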

H. Aejaz Aslam et al. (2013) [7] suggested a new method of image segmentation using the Pillar k-means algorithm, in which k-means is optimized by the Pillar algorithm. The Pillar algorithm places the initial centroids as far from each other as possible, in analogy with pillars positioned to distribute the load of a ceiling, with the number of pillars matching the number of centroids over the data distribution. This optimizes k-means clustering for image segmentation in terms of both precision and computation time.

A. Al-Badarneh et al. (2012) [8] suggested an automatic classification method for tumors in MRI images. Automatic classification of MRI images demands extreme accuracy, since an inaccurate examination, or a delay in delivering an accurate one, would raise the prevalence of more serious conditions. This work demonstrates the effects of neural network (NN) and K-Nearest Neighbour (K-NN) algorithms on brain tumor classification; the results show that the technique achieves 100% classification accuracy using K-NN and 98.92% using NN.

K. Sharma et al. (2014) [9] discussed magnetic resonance imaging as an important imaging technique for the detection of brain tumors, one of the most harmful diseases occurring among many people. Brain MRI plays an essential role for radiologists in detecting and treating brain tumor patients. Analysis of a medical image by a radiologist is a difficult process whose accuracy depends on his or her experience, so computer-aided techniques become very important as they overcome these limitations. Many automated methods have been offered, but automating this process is extremely challenging because of the varied appearance of tumors among different patients. There are many feature extraction and classification methods used for the detection of brain tumors in MRI pictures.

S. Roy et al. (2013) [10] reviewed several recent brain tumor segmentation and diagnosis methodologies for MRI brain images. MRI is an advanced medical imaging method providing rich information about the human soft-tissue structure. There are different brain tumor diagnosis and segmentation methods for finding and segmenting a brain tumor in MR images. These detection and segmentation strategies are evaluated with significance placed on the informative advantages and drawbacks of each method for brain tumor diagnosis and segmentation, and the use of MRI image detection and segmentation in the different techniques is described.

Natarajan et al. (2012) [11] proposed a brain tumor detection method for MRI brain images. The MRI brain images are first pre-processed using median filtering, then segmented using threshold segmentation, and morphological operations are applied to obtain the tumor region. This method recovers the accurate shape of the tumor within the MRI brain image.
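The median-filtering pre-processing step can be illustrated with a pure-Python 1-D sketch. Real pipelines slide a 2-D window over the image, but the principle of replacing each sample with its neighbourhood median, which removes impulse noise while preserving edges, is the same.

```python
def median_filter_1d(signal, window=3):
    """Replace each sample with the median of its neighbourhood.

    A 1-D illustration of median filtering; the 2-D version used on MR
    images slides a square window over the pixels in the same way.
    """
    half = window // 2
    out = []
    for i in range(len(signal)):
        # Slicing clamps at the ends, so borders use a smaller window.
        neighbourhood = sorted(signal[max(0, i - half):i + half + 1])
        out.append(neighbourhood[len(neighbourhood) // 2])
    return out
```

A single bright spike surrounded by uniform values is removed entirely, which is why median filtering precedes thresholding in the method above.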

Manoj et al. (2012) [12] explained that information about the size of a tumor plays a critical role in the treatment of malignant tumors. Manual segmentation of brain tumors from magnetic resonance images is a challenging and time-consuming task. Their method detects tumors in the brain by segmentation and histogram thresholding; the proposed process can efficiently identify the contour of the tumor and its geometrical description. It can be a helpful application for experts, especially doctors engaged in this field.
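Histogram thresholding of the kind referred to above can be sketched with the classic iterative (isodata) threshold-selection scheme. This particular scheme is an illustrative stand-in: the entry does not say how the threshold is chosen, only that histogram thresholding is used.

```python
def isodata_threshold(pixels, eps=0.5):
    """Pick a global threshold by the iterative (isodata) scheme.

    Start at the overall mean intensity, then repeatedly move the
    threshold to the midpoint of the two class means (background vs.
    foreground) until it stabilises to within eps.
    """
    t = sum(pixels) / len(pixels)
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if low and high:
            new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        else:
            new_t = t  # degenerate split: keep the current threshold
        if abs(new_t - t) < eps:
            return new_t
        t = new_t
```

Pixels above the returned threshold would form the candidate tumor mask passed on to the contour-extraction step.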

Roopali et al. (2014) [13] discussed a segmentation strategy carried out using threshold segmentation, watershed segmentation, and morphological operators. The proposed segmentation method was tested on MRI-scanned images of human brains to find tumors in the images. Samples of human brains were scanned using the MRI process and processed through the segmentation methods, providing efficient end results.

Nisha et al. (2014) [14] described a method aimed at the automatic detection and classification of brain tumors as benign or malignant. The performance reported for the system is 96%. The proposed method concentrates on the segmentation of MRI and helps in the automatic detection of brain tumors with the assistance of the level set method, classifying tumors as benign or malignant using an artificial neural network.

Kanimozhi and Dhanalakshmi (2013) [15] described a basic algorithm for detecting the variety and outline of tumors in brain MR images. Usually, a CT scan or MRI directed at the intracranial cavity produces a complete image of the brain, which is visually examined by a specialist for identification and analysis of a brain tumor. To avoid that, the authors use a computer-aided technique for segmentation (detection) of brain tumors based on two algorithms. This allows segmentation of tumor tissues with correctness and reproducibility comparable to manual segmentation, and it also reduces the time needed for analysis. At the end of the process, the tumor is extracted from the MR image and its exact location and outline identified; the degree of the tumor is indicated by the amount of area determined in the cluster.

Njeh, Ines et al. (2014) [16] researched a fast distribution-matching, data-driven algorithm for 3D multimodal MRI brain glioma tumor and edema segmentation. They learned non-parametric model distributions that characterize the typical regions in the current data, then stated their segmentation problems as the optimisation of several cost functions of the same form, each containing two terms: a distribution-matching prior, which evaluates a global similarity between distributions, and a smoothness prior to prevent the occurrence of small, isolated regions in the solution. Obtained via recent bound-relaxation results, the optima of the cost functions yield the complement of the tumor region and the edema region in nearly real time. Because it relies on global rather than pixel-wise information, the proposed algorithm does not require learning from a large, manually segmented training set, as is the case with current methods, so the results are independent of the choice of a training set. Quantitative evaluations on the publicly available training and testing data sets from the MICCAI multimodal brain tumor segmentation challenge (BraTS 2012) showed that the algorithm yields a highly competitive performance for complete edema and tumor segmentation among nine existing methods, with a desirable computing execution time (less than 0.5 s per image).


Roy, Sudipta et al. (2013) [17] noted that tumor segmentation from magnetic resonance imaging (MRI) data is an essential but difficult manual task performed by medical professionals. Automating this procedure is challenging because of the high diversity in the appearance of tumor tissues among different patients and, often, their similarity to normal tissue. MRI is an advanced medical imaging technique providing rich information about human tissue anatomy. There are various brain tumor detection and segmentation techniques for detecting and segmenting a brain tumor in MRI images. These detection and segmentation methods are evaluated with significance placed on highlighting their advantages and drawbacks, and the use of MRI image detection and segmentation in the different methods is explained. The article gives a brief overview of various segmentation approaches for the detection of brain tumors in MRI of the brain.

Sapra, Pankaj et al. [18] described and compared techniques for automated detection of brain tumors from magnetic resonance images (MRI) used in the various stages of a Computer-Aided Detection (CAD) process. Brain image classification approaches are studied; existing strategies are broadly divided into region-based and contour-based methods, usually focused on fully enhanced tumors or specific kinds of tumors. The number of features needed to describe the large amount of information is selected for tissue segmentation. In this paper, modified image segmentation approaches were applied to MRI scan images in order to detect brain tumors. Also, a modified Probabilistic Neural Network (PNN) model, built on learning vector quantization (LVQ) together with image and data analysis and processing approaches, was proposed to carry out automated brain tumor classification using MRI scans. The performance of the modified PNN classifier was measured in terms of training performance, classification accuracy, and computational time. The simulation results showed that the modified PNN provides fast and accurate classification compared with the image-processing and previously published conventional PNN approaches, outperforming the corresponding PNN process and successfully handling brain tumor classification in MRI images with 100% accuracy.

Harati, Vida et al. (2011) [19] offered an improved fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. The algorithm is independent of the tumor type with respect to pixel intensity. Tumor segmentation evaluation results based on similarity criteria show better effectiveness for the proposed approach than for the common methods, especially on MR images with low-contrast tumor areas. The recommended technique is therefore suitable for improving automated estimation of tumor size and position in brain tissue, which supports more accurate planning of necessary surgery, chemotherapy, and radiotherapy.

CONCLUSION AND FUTURE WORK

Brain tumor detection is a critical application of medical image processing. This work first presented an evaluation of various well-known approaches for automatic segmentation of heterogeneous image data, as a step toward bridging the gap between bottom-up affinity-based segmentation approaches and top-down generative model-based techniques; its key contribution is to survey the various approaches to detecting brain tumors efficiently. The literature survey has shown that most existing methods ignore poor-quality images, such as noisy images or images with poor brightness, and that most of the presented work on tumor detection ignores object-based segmentation. The overall goal of the research is therefore to detect the brain tumor efficiently using object detection and a roundness metric. This work has proposed a new object-based brain tumor detection method combined with decision-based median filtering, which has shown relatively efficient results compared with a neural-based tumor detection technique. The design and implementation of the proposed algorithm were done in MATLAB using the Image Processing Toolbox. The evaluation showed that the proposed method achieved around 94% accuracy, against 78% for the neural-based technique, and for highly corrupted noisy images the proposed method remained relatively effective in cases where neural-based tumor detection fails.
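The conclusion relies on a roundness metric for candidate regions but does not define it; a common choice (assumed here, not stated in the text) is the circularity measure 4*pi*A/P^2, which equals 1 for a perfect circle and falls toward 0 for irregular shapes:

```python
import math

def roundness(area, perimeter):
    """Circularity measure 4*pi*A / P^2 for a segmented region.

    Equals 1.0 for a perfect circle and decreases for elongated or
    irregular shapes, so roughly round candidate regions score highest.
    """
    return 4 * math.pi * area / (perimeter ** 2)
```

A unit circle (area pi, perimeter 2*pi) scores exactly 1, while a unit square (area 1, perimeter 4) scores pi/4, so a threshold between the two can filter non-round candidate regions.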
In the near future we shall propose a new, improved brain tumor detection approach that will further improve the accuracy of tumor detection techniques using fuzzy-neuron-based image segmentation. The use of the proposed algorithm will also be extended by applying the proposed technique to breast cancer and skin detection.

Airport Tracking Device for Blind and Partially Sighted

Expanding ambient technologies for blind and partially sighted people has rapidly grown over the last few years, enabling people to become more independent in their daily lives. Ambient intelligence is already becoming commonplace in the environment through the widespread use of computing, mobile devices, and information appliances, thereby increasing the ease of communication “between individuals, between individuals and things, and between things.”[1]

A new ICT device has been developed that will help those with impaired or no sight safely navigate through airports. This report will explain some of the technology that will go into this device and how it will work in terms of providing directional assistance in a place like an airport where one’s surroundings are not familiar and confusion is easy when there is a lot of noise and movement.

Understanding Ambient Technologies

The field of ambient technologies focuses on providing “greater user-friendliness, more efficient services support, user empowerment, and support for human interactions.”[2] In this way, the devices that come from the use of this technology will offer “opportunities for social integration and independent living for elderly people and people who are disabled.”[3] Interestingly enough, this technology may also advance in terms of being able to tune into any cognitive limitations, and the devices incorporating ambient technology can then adjust themselves to that person’s specific abilities and limitations.[4]

This emerging type of technology is supporting a new way for human beings and technology to interact so that “devices will no longer be perceived as computers, but rather as augmented elements of the physical environment.”[5] The movement to an information-based society will “be populated by a multitude of hand-held and wearable micro-devices and computational power and interaction peripherals.”[6] This provides a host of opportunities for many people who might not have been able to be as interactive with their environment due to some physical disability.

Complexity and Challenges

Ambient technologies still have a long way to go in terms of reaching their objectives. There are a number of overriding challenges to this emerging technology. Generally, it is perceived that ambient technologies must be “reliable, continuously available in space and time, consistent in its functionalities and interaction in private and (crowded and potentially hazardous) public spaces.”[7]

In 2005, IBM researchers identified a number of areas that would need to be addressed before ambient technologies could achieve some of the aforementioned benefits. These challenges include “the distribution of interaction over devices and modalities, the balance between automation and adaption and direct control, the identification of contextual dependencies among services, health and safety issues, privacy and security, and social interaction in ambient intelligence environments.”[8] A number of other challenges are present that must address the unique qualities of each user of the device, including their “abilities, needs, requirements, and preferences.”[9]

The complexity and challenge of designing a device that can be used for an airport is extensive. The device must take a lot of external factors into consideration, including noise, language barriers, security, and communications interference. It is hoped that improvements in voice synthesis and recognition will help in noisy environments as well as assist those who might not be able to use keyboards or other object manipulation inputs.[10] Additional enhancements will be needed to incorporate the development of an automatic language translation component,[11] which would be imperative in certain situations, such as at an international airport or during international travel.

A further challenge is to introduce this device into the existing information system environments within airports and have it integrate with any number of different system environments. Currently, there is no standardised operating system across global environments. In terms of a device that would be helpful in an airport, there would need to be the “deployment of networks of sensors in closed spaces” that would help with GPS localisation capabilities.[12] Universal access is also a critical issue and challenge because it is vital that these types of devices be affordable and available to all who might need them to compensate for their physical limitations.

Device Capabilities and Benefits

The user of the device can configure it so that the device understands the user’s specific requirements related to their physical disability of blindness. Having this capability will allow the user to make appropriate decisions, feel more confident, and achieve greater independence and social interaction.[13] This is done through a voice recognition system, which is considered a user adaptive interface that allows the user to interact with the device so that it can also verbally navigate the user in the right direction.[14] The device is then used as if it was an electronic guide dog that can help the person by letting them know about “nonfamiliar physical obstacles”[15] as well as provide the proper directions on how to get to the right destination. This device would also interact with other ambient technologies that may incorporate other user adaptive interfaces, such as scent recognition and output and tactile recognition and output[16] that can be used to fulfil other personal needs usually done with one’s physical eyes.

The device is able to overcome some of those external factors found in an airport. It will be a micro-device that the user can conveniently wear in some fashion around their neck or wrist so that they can continue to carry their luggage or belongings but still be guided by a device that can work with an airport information system to guide them through the airport to their appropriate gate or other destination whilst navigating certain objects that are not visible to the sight-impaired person, such as people, baggage, and signs.[17]

The device can integrate multimedia content, including sound and graphics[18] to help those with partial or no sight find their way, with interactive sensorial and motor abilities[19] which allows the device to interact with the travellers as if they were getting help from another human being. In other words, partially sighted or blind travellers will be able to ask the device questions and receive a response that will help them navigate through the airport. It is important that the user interface on this device be as “straightforward and meaningful without the user being overwhelmed by options and menus.”[20]

To address the various information system environments in airports, an environmental-level adaption can be used because it “extends the scope of accessibility to cover potentially all applications running under the same interactive environment rather than a single application.”[21] This will enable the device to run successfully in all environments, thereby reducing some of the insecurity for the user who may be apprehensive about how the device will affect their experiences.

Device Enhancements

As ambient technologies further progress, devices using this technology will be characterised by “increasing ubiquity, mobility and personalization.” The devices could be reconfigured,[22] according to which network the user has come in contact with – at an airport, a store, a bank, etc. This will be important because of the critical need to solve some of the cognitive overload, confusion and frustrations[23] that will result as human beings — visually impaired or otherwise — try and adapt to a new way of interacting with each other and their surrounding environment.

Ambient technologies must also advance in their alignment with other technologies in terms of “miniaturization, low power devices, wireless devices, security and encryption, biosensors and scalability.”[24] Many of these other technologies could hold the answer in terms of advancing the goal of ambient technologies to meld the idea of technology and human interaction into one action. Further research is also being conducted on an open source and standard for networks that will allow for widespread accessibility and adoption of ambient technology devices as well as more effective communication regardless of their location[25] so that these can be used in such public and global places as airports.

As with most technology, there will be many glitches that will need to be overcome. It can be difficult, especially for those who are partially or completely blind, to learn to depend on a device to overcome their physical limitations only to find that it has malfunctions.[26] Therefore, it is imperative that a number of tests be conducted and backup information systems be developed to minimise any technical glitches. Other technical issues related to security and privacy can arise from a device’s network being compromised by viruses and worms if great care is not taken to ensure that the networks are not vulnerable to attack.[27] This would involve further research into how numerous protective tactics now in place, such as proxy firewalls and intrusion detection systems,[28] can be integrated with ambient technology in devices to keep people safe, especially in public areas where larger networks may be breached.

To further the development of ambient technologies for such devices as an airport device for the blind and partially blind, it is recommended that candidates for the device be involved in the design life cycle and testing phase to ensure that the user interface is capable of delivering on its objective and that the subject using the device feels confident that it will improve their interaction with their external environment.

Conclusions

There is a wide demand for devices like the one developed for use in an airport because there are far-ranging benefits involved in its creation and implementation in the marketplace. However, there are many technology, legal, privacy, and security issues to overcome as well as detailed explanations about these devices so that those who need them the most can quickly feel comfortable with the idea of interacting with technology in a way that also responds to them and their cognitive abilities and limitations.

However, it is clear that as devices come to market, such as the airport-enabled solution, more people will feel comfortable using them to enhance their interaction with others and provide a more independent way of travelling for those who might have felt previously inhibited. Although standardisation can be a slow process, this will provide time to achieve greater enhancements to various devices, such as the airport information and navigation device, so that some of the other challenges can already be solved to make implementation more likely. The growth in this market is explosive and real opportunity will be realised as ambient technology delivers lower cost and user-friendly devices.

References

Emiliani, P.L. and Stephanidis, C. (2005). Universal access to ambient intelligence environments: opportunities and challenges for people with disabilities. IBM Systems Journal, 605-619. Available from: http://researchweb.watson.ibm.com/journal/sj/443/emiliani.html.

Gill, J., ed. (2008). Ambient intelligence: Paving the way. Cost 219. Available from: http://www.tiresias.org/cost219ter/ambient_intelligence/Ambient_Intelligence.pdf.

Gill, J., ed. (2005). Making life easier: How new telecommunications services could benefit people with disabilities. Cost219. Available from: http://www.tiresias.org/cost219ter/making_life_easier/making_life_easier.pdf.

Raisinghani, M.S., Benoit, A., Ding, J., Gomez, M., Gupta, K., Gusila, V., Power, D., and Schmedding, O. (2004). Ambient intelligence: Changing forms of human-computer interaction and their social implications. Journal of Digital Information. Available from: http://journals.tdl.org/jodi/rt/printerFriendly/jodi-155/147.


Advantages and Disadvantages of Technology Today

Chirag Patel

The world has come very far with respect to technology. In reality, technology, social media, and smartphones have become mainstays of our everyday lives in a short period of time. Gone are the days of cassette and VHS tapes. Gone are the days of typewriters and cursive handwriting. Those outdated technologies have been replaced with tablets, smartphones, and social media websites like Facebook. The same types of technologies have found their way into healthcare. Lambert, K., Barry, P., & Stokes, G. (2012) state that “Social media has infiltrated all of our lives both personally and professionally.” For better or for worse, these technologies have blended into our everyday lives with no end in sight; knowing they are not going away, how we use them will say as much about us as individuals as it does about society as a whole.

Today’s technologies allow us to be more connected to one another. Both patients and healthcare providers have information available at their fingertips, including a patient’s personal health information (PHI). One concern could be stated as: “How safe are today’s technologies, and will patients’ personal health information be compromised?” The U.S. Department of Health and Human Services created HIPAA (the Health Insurance Portability and Accountability Act), a federal law that protects patients’ medical information and is enforced by the Office for Civil Rights.

According to Lambert, K., Barry, P., & Stokes, G. (2012), “The use of social media may expose professionals and healthcare entities to liability under the Health Insurance Portability and Accountability Act (HIPAA) as well as individual state privacy laws. HIPAA, as modified by the Health Information Technology for Economic and Clinical Health Act (HITECH), governs the permitted use and disclosure of PHI by covered entities, including hospitals, physicians and other healthcare providers. The HITECH Act provides breach notification requirements and expands various requirements to business associates.”

So why take the risks? Like anything in this world, sometimes you have to take the good with the bad. One advantage of today’s technologies and social media sites is accessibility. Patients can now take an active role in their own health care. Social media and various apps allow individuals to research their own conditions, which can give a patient a feeling of empowerment where they might otherwise feel helpless. They also allow individuals to connect with support groups and message boards that lend much-needed empathy from people going through similar situations. We have all heard stories of bullying on Facebook, but on the flip side there are stories of triumph and support when it is used in a way that garners sympathy and empathy. Facebook can be both an advantage and a disadvantage depending on how it is used.

Doctors, nurses, and other healthcare providers have advantages as well. They too can have the latest research and decision-support tools available in the palm of their hands. Access to real-time information, such as the latest prescription recall or the most recent white paper on medical breakthroughs, benefits both the healthcare provider and, ultimately, the patient. This collective online and mobile brain trust allows healthcare providers to create robust medical strategies that support decision making. Online and smartphone resources include mobile apps like Epocrates, Medscape, and even AHRQ ePSS, an app designed by the United States Department of Health & Human Services (HHS). Online communities such as the American Medical Association provide resources on varying topics, including managing your practice, medical ethics, legal issues, and career development.

In general, most individuals prefer to keep their health status confidential; hence the patient-doctor confidentiality relationship. But with smartphones and the use of social media, the totality of a person’s health information could be vulnerable if safeguards are not in place. Solomon et al. (2012) suggest that healthcare providers who have access to patient information should be made aware of strategies and facility policies for safeguarding patient privacy. They should also be mindful not to place themselves in situations where access is vulnerable, such as leaving a computer on and unsecured. Solomon et al. (2012) state emphatically that “Confidentiality is a legal right for clients as well as a professional ethical responsibility of providers.” A break in trust weakens the relationship between the healthcare provider and the patient.

Let’s go back to our original scenario. The nurse worked a night shift while her friend attended a concert; the lead singer from the concert the nurse missed is now her patient. At the end of her shift, what does she do? Our group chose the following conclusion:

You go on Facebook, on your day off, and talk about the night you had at work and how you didn’t really feel as bad having to miss the concert, because you actually got to meet Jerod in person and even “Got his number!” You then post a picture of Jerod on Facebook and Instagram, figuring that most of your contacts would never recognize him anyway. It’s your day off and your personal time, so no harm, no foul, right?

The scenario above is a plausible outcome in the world in which we live. However, there is a lot wrong with the nurse’s line of thinking. It is not unreasonable for her to think that her personal Facebook page is her private business. However, in Romano v. Steelcase (2010), the New York Supreme Court stated: “It is reasonable to infer from the limited postings on plaintiff’s public Facebook® and MySpace® profile pages that her private pages may contain material and information that are relevant to her claims or that may lead to the disclosure of admissible evidence. To deny defendant an opportunity to access these sites not only would go against the liberal discovery policies of New York favoring pretrial disclosure, but would condone plaintiff’s attempt to hide relevant information behind self-regulated privacy settings.” In other words, what the nurse posts on Facebook could be used against her for several reasons: (1) the photo was taken while she was working; (2) the photo violates a patient’s right to privacy and confidentiality; and (3) by violating that right, she could be setting herself up for termination of employment and/or criminal or civil liability. Those consequences may prove costly in the long run. The nurse should stop and ask herself whether the notoriety would be worth losing her reputation and career over.

In summary, there are many advantages and disadvantages to smartphones and the use of social media. Advantages include active engagement for patients in their health status, readily available real-time resources, and ease of use and accessibility for all users, both front-end and back-end. Disadvantages include lack of privacy, accountability for posts on personal social media sites, and data integrity and vulnerability. Do the pros outweigh the cons? One can only say that training, awareness, and professional and ethical responsibility should dictate an individual’s actions. As the old saying goes, “Just because you can doesn’t mean you should.” That is a warning that should be heeded by all.

References

Kosieradzki, J. (2011). Social media and privacy: when personal posts intersect with the business of litigation. Journal of Legal Studies in Business. (17), 51-64

Lambert, K., Barry, P., & Stokes, G. (2012). Risk management and legal issues with the use of social media in the healthcare setting. American Society for Healthcare Risk Management. 31(4), 41–47. doi: 10.1002/jhrm.20103

Romano v. Steelcase, 907 N.Y.S.2d 650, 658 (N.Y. Sup. 2010)

Solomon, P., Molinaro, M., Mannion, E., & Cantwell, K. (2012). Confidentiality Policies and Practices in Regard to Family Involvement: Does Training Make a Difference?. American Journal Of Psychiatric Rehabilitation, 15(1), 97-115. doi:10.1080/15487768.2012.655648

Virtual 3D Thermal Human Modelling

In recent years, with revolutionary changes and remarkable innovations in functional and intelligent materials, a growing range of functional and smart wearable products has been introduced and accepted by the market. Clothing is one of the most common branches. For fit or tight-fit functional clothing, design increasingly draws on human anatomy, physiology, pathophysiology and biomechanics to deliver special functions such as body protection, recovery, rehabilitation, treatment, shaping and performance enhancement.

Mannequins, also known as human models, are efficient design tools frequently used by fashion designers, patternmakers and manufacturers, providing them with a tangible or virtual 3D model. Digital 3D human models are increasingly adopted to enhance efficiency and sustainability in these human-centred disciplines. The geometric human model (G-model) presents the basic dimensional information of the human body. Faced with these new trends in design and technology, the traditional G-model may not respond well to the emerging requirements of functional clothing design and manufacturing. A digital human body applied to fashion and functional design and manufacturing needs to be endowed with richer information about the human body. There is therefore a need for a functional human model as an accelerating, enhancing and inspiring tool for fashionable and functional product design, especially functional clothing design.

Body temperature is a vital feature of human beings, indicating the comfort and health status of the human body. As a heat transfer system, the human body requires a well-balanced thermoregulatory control loop. Clothing is often regarded as the second skin of the human body; it helps balance heat and moisture conditions and thus provides thermal comfort.

Besides, problems such as sub-health and an ageing population are gradually attracting more attention to healthcare. Given the significant importance of body temperature in indicating the pathophysiologic features of the human body, as emphasized by medical researchers in clinical practice, functional clothing with thermal functions such as rehabilitation and treatment would be a meaningful, practical and innovative product for taking care of the human body, like a special wearable medicine.

Science and technology are changing human life. Unquestionably, functional and smart products are the ongoing trend for the future. The thinking of insiders, such as researchers, designers and product developers, must keep pace with the tide of advances in science and technology.

To develop thermal-related functional products, a visualised and quantified human model is essential and a prerequisite. Without accurate and reliable thermal information revealing the inner workings of the human body, the process of functional product development is like a blind man feeling an elephant. A workman must sharpen his tools if he is to do his work well. There is a knowledge gap and a missing tool for accurate and visualised functional design and manufacturing, so launching the thermal human model (T-model) is a far-sighted and necessary step.

With the rapid development of medical imaging and anthropometric technology, 3D body scanning and 2D infrared thermography (IRT) provide relatively accurate and visible information about the human body, helping us to further understand it from physical, physiological and pathophysiologic aspects. 3D body scanners, instruments that capture the whole body and create a set of dimensionally accurate data, are widely used in many areas, such as human modelling and human-centred product development in the fashion industry. IRT has been used as an effective and non-invasive medical diagnostic tool that monitors skin temperature distribution and evaluates the health condition of the human body visually. These two facilities lay a solid foundation for the practicability of thermal human modelling.

1.2 Aims and objectives

Given the knowledge gap and the missing tool for accurate and visualised functional design and manufacturing, and the practicability afforded by increasingly advanced medical imaging and anthropometric technology, five major aims and objectives were set for the research, as listed below.

An in-depth investigation of the potential relationships between anthropometric parameters and physiological properties such as body temperatures.
A systematic approach to constructing a visualised, quantified and individualized 3D thermal human model (Ti-model) with physiological features.
A systematic approach to constructing a visualised, quantified and individualized 3D thermal human model (Ti-model) with pathophysiologic features.
Averaged 2D thermal images that can be compared with an individual’s IR images to detect invisible abnormalities for healthcare and disease monitoring.
A 3D thermal human model (Ta-model) that can be compared with the Ti-model to detect invisible abnormalities for healthcare and disease monitoring.
1.3 The significance of the research

This study will provide new and far-sighted solutions for the accurate development of functional products, especially functional clothing, using quantified and visualised T-models. This multi-disciplinary research broadens our field of vision and presents the connections between physical, physiological and pathophysiologic features from statistical, 2D and 3D perspectives. The significance of the research lies in two aspects.

As a theoretical foundation, this research will build a linkage between the physical and physiological features of the human being, prompting further quantitative study of the relationship between them and providing advisable indices for functional design applications. Besides, this study will advance knowledge of the commonality of skin temperature distributions across human beings, which has great meaning for physiological study, clinical diagnosis and ergonomic design applications.

For individual applications, this new model can be applied to functional product development. Functional clothing developers, in particular, will be able to perform 3D functional design, 3D pattern making and virtual fitting in an accurate, efficient and traceable way. In the era of 3D printing technology, it is an indispensable tool and platform for upstream parties in healthcare areas.

1.4 Research methodology

A multidisciplinary methodology crossing thermal physiology, medical diagnosis, computer graphics, ergonomics and functional design is adopted to accomplish the aims and objectives of the study. Medical imaging and anthropometry technology help to acquire physical and physiological data of the human body from individual experiments. From statistical, 2D and 3D viewpoints, generalized and individual approaches to human thermal properties are analysed by means of statistical software, mathematical programming software and 3D design software.

1.5 Thesis organization

This study has been conducted in the background stated above. The overall organization of this thesis is shown in Figure 1-1.

Chapter 1 is the general introduction to the whole thesis, covering the background of the research, its aims and objectives, its significance and the research methodology.

Chapter 2 is the theoretical foundation of the whole thesis, comprising a literature review of the two core research areas: human thermal function and clothing, and 3D human models applied to computer-aided design in the fashion industry.

Chapter 3 introduces the research methodology of this study. Developments in medical imaging and anthropometry technology provide an in-depth and specific foundation for further understanding the human body in physical and physiological aspects, while computer graphics technology provides dependable software and tools to achieve the research aims and complete the interdisciplinary research.

The data acquired from the individual experiments are employed in Chapters 4, 5 and 6, which together constitute a systematic platform for understanding the human body from statistical, 2D and 3D viewpoints and in terms of the universality and individuality of thermal properties.

In Chapter 4, statistical analysis by means of correlation analysis and Principal Component Analysis (PCA) is conducted to find the relationships between anthropometric parameters and body temperatures, building quantified connections between the physical and physiological features of human beings.
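The statistical step described above can be sketched as follows. The anthropometric variables, sample size and randomly generated values below are purely illustrative placeholders, not the study’s measurements, and the PCA is computed with a plain SVD rather than any specific statistical package.

```python
# Illustrative sketch: correlate hypothetical anthropometric parameters
# with mean skin temperature, then apply PCA to the parameters.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 40

# Columns: height (cm), chest girth (cm), BMI -- hypothetical measurements.
anthro = np.column_stack([
    rng.normal(170, 8, n_subjects),
    rng.normal(92, 6, n_subjects),
    rng.normal(23, 3, n_subjects),
])
# Mean skin temperature per subject (deg C) -- hypothetical.
skin_temp = rng.normal(33.0, 0.6, n_subjects)

# Pearson correlation of each parameter with skin temperature.
data = np.column_stack([anthro, skin_temp])
corr = np.corrcoef(data, rowvar=False)[-1, :-1]
print("correlations with skin temperature:", corr.round(2))

# PCA via SVD on the standardised anthropometric parameters.
z = (anthro - anthro.mean(axis=0)) / anthro.std(axis=0)
_, s, _ = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("explained variance ratios:", explained.round(2))
```

With real data, the correlation coefficients would indicate which anthropometric parameters track skin temperature, and the explained-variance ratios would show how many principal components summarize the body measurements.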

Individualized thermal human modelling methods and results are introduced in Chapter 5. This chapter presents systematic, step-by-step introductions and illustrations of how to construct a Ti-model with physiological and pathophysiologic features. The first step is to pre-process the 3D body scanning data, including data alignment, data cleaning and component selection. Simultaneously, the skin temperature data sets are pre-processed by plotting them as 2D thermal images with physiological or pathophysiologic features using mathematical programming. The last step is thermal model construction: the 2D thermal images are projected onto the 3D human body to create the corresponding Ti-model with physiological or pathophysiologic features. This chapter quantifies and visualises the invisible body code conveyed by skin temperatures in a three-dimensional, individualized manner.
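The projection step at the core of this construction can be sketched minimally as follows. The frontal thermal image, vertex data, resolution and orthographic mapping are all hypothetical simplifications for illustration; the thesis’s actual pipeline may use a different registration and projection method.

```python
# Minimal sketch: assign each vertex of a 3D body scan a temperature by
# orthographic projection onto a frontal 2D thermal image.
import numpy as np

# Hypothetical frontal thermal image: temperatures (deg C) on a pixel grid.
H, W = 120, 60
thermal_2d = 31.0 + 2.0 * np.random.rand(H, W)

# Hypothetical 3D scan vertices (x, y, z) in metres.
verts = np.random.rand(500, 3) * np.array([0.5, 1.7, 0.3])

# Orthographic projection onto the x-y plane, scaled to pixel indices
# (row 0 corresponds to the highest y value, i.e. the top of the body).
x, y = verts[:, 0], verts[:, 1]
cols = np.clip(((x - x.min()) / np.ptp(x) * (W - 1)).astype(int), 0, W - 1)
rows = np.clip(((y.max() - y) / np.ptp(y) * (H - 1)).astype(int), 0, H - 1)

# Per-vertex temperature: the Ti-model is the mesh plus this attribute.
vert_temp = thermal_2d[rows, cols]
print(vert_temp.shape)
```

In practice, one projection per camera view (front, back, sides) would be blended so that every vertex of the closed body surface receives a temperature value.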

Chapter 6 introduces studies on the distribution regularity of skin temperature in two dimensions, using averaged IR images, and its application to three-dimensional thermal human modelling (the Ta-model). In the Matlab programming environment, mathematical calculation and anatomical landmarks of the human body are combined to find the regularity of skin temperature distributions across human beings. The Ta-model is created by mapping 2D IR images onto the 3D individual G-model. Examples of comparison with an individual’s IR images and Ti-model are presented, which help to detect invisible individual differences for healthcare and disease monitoring. This research bridges the gap between scientific research and technological application in thermal studies with quantified and visualised methods.
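The averaging-and-comparison idea behind this chapter can be sketched as follows. The image stack, noise levels, the simulated hot spot and the 1.0 deg C flagging threshold are all hypothetical choices for illustration, not values taken from the study, and the images are assumed to be already spatially aligned.

```python
# Sketch: average aligned IR images into a reference map, then flag the
# pixels where an individual image deviates beyond a chosen threshold.
import numpy as np

rng = np.random.default_rng(1)
H, W, n = 64, 32, 25

# Hypothetical stack of aligned IR images from n subjects (deg C).
stack = 32.0 + 0.4 * rng.standard_normal((n, H, W))
reference = stack.mean(axis=0)          # averaged IR image

# Individual image with a simulated 2 deg C hot spot.
individual = 32.0 + 0.4 * rng.standard_normal((H, W))
individual[10:14, 5:9] += 2.0

# Pixels deviating by more than the threshold are flagged as abnormal.
abnormal = np.abs(individual - reference) > 1.0
print("flagged pixels:", int(abnormal.sum()))
```

The flagged region corresponds to the “invisible abnormity” the thesis aims to surface: a deviation from the population-averaged thermal map rather than an absolute temperature limit.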

Chapter 7 presents the conclusions of the work and suggestions for future research. The bibliography and Appendices I to V for the previous chapters follow Chapter 7.

Use of Films for ESOL Learners

CHAPTER I

INTRODUCTION

This chapter presents the background of the study, the statement of the problem, the purpose of the study, the significance of the study, the scope and limitation of the research, and the definitions of key terms.

Background of The Study

Writing is a complicated skill, and one that language teachers must teach to their students. It is also very important because writing can give students the chance to express their personalities and to master and develop their English ability (Scott and Ytreberg, 1992). In addition, through writing, learners learn to communicate with other people so that they will understand each other, whether they need to read a message or to write one (Raimes, 1983). That is why writing will benefit students if they master it.

Writing is not a skill that students acquire easily and naturally; English as a Foreign Language learners do not simply know how to write a good narrative story in English. Moreover, teaching writing is not only about grammar, the mechanics of the alphabet or spelling; learners also need to be able to express ideas and concepts in English.

Lack of vocabulary is also a problem when the teacher asks students to write. The students look confused and ask their friends for the English equivalents of some words.

High school students are asked to write simple and short sentences, messages, short announcements, narratives, and other types of paragraphs (Depdiknas, 2006). By that standard, the teaching of writing at high school is simple. However, although writing is a productive skill, like speaking, it still looks like a complicated skill for SMA students to master: it is a complex activity that requires a variety of skills.

Due to that condition, the researcher tries to find a technique that can help students write sentences or a simple paragraph and encourage them in the writing activity. The researcher assumes that one good way of teaching writing is by using media. Instructional media are important in teaching and learning processes because they can enhance and promote learning and support the teacher’s instruction. The use of media needs to be planned carefully.

There are many kinds of media that can be used in teaching writing. One of them is the short movie. Short movies can address the most difficult part of the task: motivating students to write. As media, short movies are very useful for teaching English writing, especially for attracting and holding the students’ attention and delivering information. So, in teaching writing, the teacher can use a short movie to motivate, help, stimulate and guide students to write a narrative paragraph.

In this research, the researcher tries to implement the short-movie strategy in the teaching of the narrative paragraph. A narrative paragraph retells events that happened in the past. It focuses on individual participants, uses correct grammar in the past tense, focuses on a sequence of events, and uses action clauses.

To help students produce a good narrative paragraph, it is better if teachers use a short movie to make the learning process clear and understandable, so that students can arrange their sentences in good chronological order.

The researcher believes that the short movie is applicable for the students of SMAN 1 MANYAR GRESIK because it may guide, help, motivate and encourage the students to express their ideas, opinions, and thoughts on paper.

1.2 Statement of The Problem

The research problem of this study is formulated as a question: “How can the 11th grade students of SMAN 1 MANYAR improve their ability in writing narrative paragraphs by using short movies?”

1.3 Purpose of The Study

In line with the problem above, this research aims to describe how the 11th grade students’ ability in writing narrative paragraphs at SMAN 1 MANYAR can be improved by using short movies.

1.4 Significance of The Study

The findings of this research can be useful for teachers and other researchers. For teachers, the findings offer an alternative technique for teaching the writing of narrative texts.

1.5 Scope and Limitation of The Study

The research focuses on the teaching and learning process involving the 11th grade students of SMAN 1 MANYAR GRESIK, using short movies to improve their ability to write narrative texts. The improvement is assessed on four components: organization, vocabulary, grammar, and mechanics. Those components are analyzed using an analytic scoring rubric for writing.

1.6 Definition of Key Terms

To avoid misunderstanding, the researcher defines several important terms used in this proposal:

A short movie is a movie with a short duration, about 15-20 minutes in length.
A narrative is a piece of text which tells a story and whose generic structure consists of orientation, complication, and resolution.
Writing ability is the skill of communicating a message to a reader to express ideas, thoughts and feelings.
To improve is to make something better, moving from a low level to a higher one.

CHAPTER II

REVIEW OF RELATED LITERATURE

This chapter provides a review of the literature related to the teaching of English in Indonesia, the problem of writing, previous research, and media.

2.1 The Teaching of English in Indonesia

English is an international language used in communication and in everyday activities, and mastering it is becoming ever more important. In Indonesia, English is a compulsory subject, but it seems that English as a Foreign Language is often taught not to enable students to communicate, but only to prepare them to pass the national examination (Kam & Wong, 2004:181).

Nowadays, however, many teachers and learners realize that learning English is not only a skill needed to pass exams but also a means of communication. Saukah (2000) states that the purpose of teaching English as a foreign language in Indonesia is for learners to master the use of English for communication, whether in written or spoken language. The ability to communicate is how we understand and express things.

Writing is one of the four language skills, and it plays an important role in teaching English as a Foreign Language. According to Brown (2001), writing is, simply put, putting ideas or concepts on paper. Compared to speaking, writing is more difficult because written language has characteristics that are more complex than those of spoken language, such as its degree of formality.

Naturally, the writing process requires a distinct set of competencies and skills that not every writer has. As beginners, Senior High School students cannot, of course, be expected to master and apply all of those writing skills.

The students still have many problems expressing their ideas in written form. The curriculum expects students to be able to write simple messages and simple paragraphs at Junior High School. This expectation has not yet been achieved because the students still find it difficult to express their ideas in written language, especially in English. This is based on the fact that most of the students’ papers cannot be understood well because they contain so many errors.

2.2 Previous Research

Research on the short-movie strategy has been conducted by several researchers. Sumarsih (2006) carried out a study using short movies to teach English to the XI IPA-1 students of SMA Negeri 8 Medan. The study showed that the students’ first test score was 42.5, and the total improvement from the first competency test to the third was 68.75%. The conclusion is that student achievement was improved by using media such as short movies.

The points we can conclude about using the short-movie strategy in teaching writing are that it (1) stimulates the students to be active in English classes during the activity, (2) activates the four language skills (speaking, listening, reading and writing) at the same time, (3) produces a fun English class as the best way to learn English, and (4) increases students’ achievement.

Media for Teaching Writing

According to Listiyaningsih (2002), several kinds of media can be used to facilitate the teaching and learning process in interesting ways. In fact, teaching and learning activities are communication processes, so using media in teaching writing is a good way to encourage and stimulate the students to be actively involved during the teaching and learning processes.

The media are:

Short movie
Speaker
Projector

CHAPTER III

RESEARCH METHODOLOGY

This chapter contains the description of the research methodology. It includes research design, population and sample, subject and setting, data collection, and data analysis.

3.1 Research Design

In this study, the researcher uses Classroom Action Research (CAR) because the researcher wants to improve the students’ writing skill, using short movies as instructional media. This will be introduced by the researcher as a new teaching technique in the class. In particular, the aim of this study is to find a new strategy or technique for learning English writing which can help the teacher solve classroom problems.

The researcher implemented the CAR model of Kemmis and McTaggart (1998). There are four phases, or steps, in this action research: (1) planning an action, (2) implementing the action, (3) observing, and (4) reflecting.

3.2 Population and Sample

The population of the study consisted of 360 11th grade students at SMAN 1 MANYAR Gresik: 124 male students and 236 female students.

The sample of this study consisted of 36 students of class XI IPS 2 (16 male students and 20 female students), chosen by cluster sampling from the 11th grade of SMAN 1 MANYAR Gresik.

3.2.1 Subject and Setting

This research was conducted with the 11th grade of SMAN 1 MANYAR Gresik. The school had thirty (30) classes, ten (10) at each level. The subjects of this study were the students of class XI IPS 2 in the academic year 2013/2014; the class consists of thirty-six (36) students.

The researcher chose this class because it had the most problems in writing.

3.3 Data Collection

3.3.1 Instruments

The instruments of this research were as follows.

First, document collection was conducted by gathering the students’ papers at the end of each step for evaluation. Two sets of data were tested, one in cycle 1 and one in cycle 2; each cycle used a test based on a different movie shown to the students. The papers the students submitted included not just the final narrative paragraph but also all of their drafts, so that their progress during writing could be evaluated.

Second, field notes were used as instruments to record what happened, such as the condition and setting of the class, the atmosphere of the classroom, and any other unexpected events.

Third, interviews were conducted of two types: at the beginning of the study, to gather data about the students’ problems in writing, and at the end, to find out the students’ understanding of the implementation of the narrative paragraph using the short-movie strategy.

Finally, questionnaires were administered at the end of each cycle to learn about the students’ responses to and attitudes toward the implementation of the approach.

3.3.2 The Procedure of Collecting Data

The researcher followed the steps proposed by Kemmis and McTaggart (1998), as illustrated below.

The researcher explains the research procedures, starting from the preliminary study through the research implementation, including planning, implementation, observation, and reflection, in line with the illustration above.

Planning: the researcher prepares the lesson plan, materials, media, and instruments.
Implementation: the teaching and learning processes are carried out by the researcher, helped by a collaborating teacher who observes the students’ progress during the learning process.
Observation: the process of recording and gathering all of the data during the teaching and learning processes.
Reflection: the researcher and the collaborating teacher discuss whether the implementation was successful or not.

3.4 Data Analysis

In evaluating the students’ writing scores and results, the researcher uses an analytic scoring rubric in which the components of writing are scored separately: content, language use, and mechanics. The researcher set a minimum target score of at least 60 for the students.

Table 1. Scoring Rubric of Evaluating the Students’Writing Products

Components of Writing | Level | Scale and Descriptor

Content (vocabulary, chronological order):
4: The content is relevant to the topic and easy to understand.
3: The content is almost complete and relevant to the topic.
2: The content is relevant to the topic but not quite easy to understand.
1: The content is not quite relevant to the topic.

Language use (use of past tense):
4: No grammatical inaccuracies.
3: Some grammatical inaccuracies.
2: Several grammatical inaccuracies.
1: Frequent grammatical inaccuracies.

Mechanics (spelling, punctuation, capitalization):
4: Correct spelling, good punctuation, and capitalization.
3: Occasional errors of spelling, punctuation, and capitalization.
2: Frequent errors of spelling, punctuation, and capitalization.
1: No mastery of conventions; dominated by errors of spelling, punctuation, and capitalization.

Adapted from J.B. Heaton (1990:111) with some modification.

From the scoring rubric for writing narrative in Table 1, the maximum score is 12 (3 x 4) and the minimum is 3 (3 x 1). The final score of each student’s achievement in writing narrative is then identified based on the score categories in the table.
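The text does not state the conversion formula explicitly, so the sketch below assumes a simple rescaling of the rubric total (3-12) onto a 0-100 range; the function name and the exact mapping are assumptions for illustration only.

```python
# Hypothetical sketch: convert the three rubric components (each 1-4)
# into a 0-100 final score, assuming a linear rescaling of the total.
def final_score(content: int, language_use: int, mechanics: int) -> float:
    """Each component is scored 1-4 on the analytic rubric."""
    for c in (content, language_use, mechanics):
        if not 1 <= c <= 4:
            raise ValueError("each rubric component must be 1-4")
    total = content + language_use + mechanics  # ranges from 3 to 12
    return round(total / 12 * 100, 1)

print(final_score(4, 4, 4))  # 100.0 (maximum: 12/12)
print(final_score(1, 1, 1))  # 25.0  (minimum: 3/12)
print(final_score(3, 2, 3))  # 66.7, above the 60-point target
```

Under this assumed mapping, a student would need a rubric total of at least 8 out of 12 to reach the minimum target score of 60.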

Data Display

Four kinds of data were collected in this research, most of them qualitative: documents, field notes, interviews, and questionnaires.


What It Takes To Be a Teacher

“Choosing a career is a challenging, exciting, and perhaps even a threatening task for most today” (Morales, 1994, para. 1). “You may have a clear idea about a career you’d like to pursue. Then again, you might not have a clue” (Mariani, 2011). In today’s society there are thousands of careers to choose from. Women, as well as men, are open to career options from computer engineering to teaching. As our society advances, many careers may one day be taken over by more advanced technology, such as computers. Teaching, however, is a career that will always be in demand. Teachers are responsible for teaching fundamentals that are needed in everyday life. Teaching is not only a promising career; it is also rewarding and beneficial. Pursuing a career as a teacher is very demanding; however, it can be a rewarding one.

“Teacher: one whose occupation is to instruct” (Merriam-Webster’s, 1993, p. 1059). “The teaching process can be broadly defined as the transmission of knowledge” (Morales, 1994, para. 14). “Teaching developed into a profession after the early 1800’s when the first teacher training was founded in Europe” (The World Book Encyclopedia, 2011, p. 68). Since the 1800’s, teaching has evolved greatly and become extremely important to society. “Whether in elementary or high schools or in private or public schools, teachers provide the tools and the environment for their students to develop into responsible adults” (U.S. Department of Labor, 2009). Teaching is a career in which I have always found interest. “A puzzling question comes to mind: Why would anyone choose to teach in this day and age when there is such a wide range of careers from which to choose and when becoming a teacher is being made tougher and tougher?” (Morales, 1994, para. 3). For me the answer to this question is simple. Teaching is a rewarding and beneficial career. There is so much more to teaching than showing students how to read and write. According to the United States Department of Labor (2009), “Teachers play an important role in fostering the intellectual and social development of children during their formative years”.

The path to becoming a teacher requires years of schooling. “The traditional route to becoming a public school teacher involves completing a bachelor’s degree from a teacher education program and then obtaining a license” (U.S. Department of Labor, 2009). “Aspiring secondary school teachers most often major in the subject they plan to teach, while also taking a program of study in teacher preparation” (U.S. Department of Labor, 2009). Along with years of schooling, “Every state requires public elementary and high school teachers to obtain a teaching certificate before teaching in that state” (The World Book Encyclopedia, 2011, p. 68). As technology continues to grow and people become more knowledgeable, the requirements for becoming a teacher are gradually becoming more demanding. “Evidence of tougher certification requirements is widespread. State legislators are mandating teacher accountability by passing laws that make it more difficult to enter the teaching profession” (Morales, 1994, para. 4). Being able to teach is not the only skill teachers need in order to land a teaching position. “In addition to being knowledgeable about the subjects they teach, teachers must have the ability to communicate, inspire trust and confidence, and motivate students, as well as understand the students’ educational and emotional needs” (U.S. Department of Labor, 2009).

Although teaching may look simple, a teacher holds many responsibilities. “They plan, evaluate, and assign lessons; prepare, administer, and grade tests; listen to oral presentations; and maintain classroom discipline” (U.S. Department of Labor, 2009). Teachers also hold another responsibility, as stated by April Whatley: “Teacher educators are those individuals responsible for the development of future teachers” (2009). When one decides to become a teacher, one must first realize there are certain job conditions to contend with on a daily basis. “Teachers may experience stress in dealing with large classes, heavy workloads, or old schools that are run down and lack modern amenities” (U.S. Department of Labor, 2009). A positive aspect of being a teacher is the hours and vacations you receive.

Unlike workers in many other jobs, most teachers work a normal 40-hour work week but have two months of paid vacation. “Many teachers work more than 40 hours a week, including school duties performed outside the classroom” (U.S. Department of Labor, 2009). During the summer, teachers have the advantage of a long vacation. “Most teachers work the traditional 10-month school year, with a 2-month vacation during the summer” (U.S. Department of Labor, 2009).

Teachers’ salaries range widely depending upon where one works, how much one works, and what degree one holds. “Median annual earnings of kindergarten, elementary, middle, and secondary school teachers range from $47,100 to $51,180” (Krasna, 2010). Throughout the day teachers deal with students who can often cause stress when they become disobedient. There are also other factors that teachers deal with on a daily basis that can cause stress, such as grading large amounts of work. “Teachers may experience stress in dealing with large classes, heavy workloads, or old schools that are run down and lack modern amenities” (U.S. Department of Labor, 2009). Throughout most of the day teachers are working with students. “Teachers are sometimes isolated from their colleagues because they work alone in a classroom of students” (U.S. Department of Labor, 2009).

Like any other career, there are many positive and negative aspects to becoming a teacher. One large advantage is all the paid vacation time a teacher has. All this extra time allows teachers to pursue other things. “During the vacation break, those on the 10-month schedule may teach in the summer sessions, take other jobs, travel or pursue personal interests” (U.S. Department of Labor, 2009). Being a teacher also has its disadvantages: “One challenge is that there isn’t always a clear answer to the questions people face” (Krasna, 2010). In today’s society it is becoming more difficult to land a job as a teacher, and the credentials required to become one are becoming harder to attain. Although it is getting more difficult to land a job as a teacher, teaching is a career that will always be needed, regardless of the time period or where one is located in the world. “Schools in the United States and Canada hire new teachers each year. Some opportunities occur because experienced teachers retire or leave to pursue other career paths” (The World Book Encyclopedia, 2011, p. 68). There are many opportunities to advance your position as a teacher. “Master of education programs typically prepare their recipients to be elementary, secondary, or special education teachers and can offer courses in teaching methods, curriculum and instruction, classroom management and mathematics” (Krasna, 2010). According to the U.S. Department of Labor (2009):

With further preparation, teachers may move into such positions as school librarians, reading specialists, instructional coordinators, and guidance counselors. Teachers may become administrators or supervisors. In some systems, highly qualified experienced teachers can become senior or mentor teachers, with higher pay and additional responsibilities. They guide and assist less experienced teachers while keeping most of their own teaching responsibilities.

Being a teacher is an extremely beneficial career, although it is definitely a career that is harder than it looks. Throughout all the research I have done, I have come to realize this is definitely a career I want to pursue and commit my studies to. There are many benefits to becoming a teacher. Teachers impact many lives and help many people. To attain my goal of one day becoming a successful teacher, I will need to earn my degree in teaching. I hope to one day be able to lend the world my knowledge and be considered a teacher. “Teaching offers inner rewards; a sense of having contributed to the betterment of humanity, a sense of having made a difference in this ever-changing world” (Morales, 1994, para. 14).

Benefits of Video Conferencing

WHAT IS VIDEO CONFERENCING?

A video conference (also known as a video teleconference) is a set of interactive telecommunication technologies which allow two or more locations to interact via two-way video and audio transmissions simultaneously. It has also been called ‘visual collaboration’ and is a type of groupware.

In other words, video conferencing is a communications technology that integrates video and voice to connect remote users with each other as if they were in the same room. Each user needs a computer, web cam, microphone, and broadband internet connection to participate in video conferencing. Users see and hear each other in real time, allowing natural conversations.

Video conferencing differs from a videophone call in that it is designed to serve a conference rather than individuals. It is an intermediate form of video telephony, first deployed commercially by AT&T during the early 1970s using their Picturephone technology.

Video Conferencing is becoming increasingly popular as a way to facilitate meetings, and save time and money on travel and accommodation.

HOW IT WORKS

Video conferencing can be used in a host of different environments, which is one of the reasons the technology is so popular. General uses for video conferencing include business meetings, educational training or instruction, and collaboration among health officials or other representatives. Thus far, video conferencing has been helpful in many different spheres of life. The most common uses of video conferencing are:

Interviewing prospective students and staff
Presentations
Seminar presentations to remote audiences
Business meeting
Distance Learning
Telecommuting
Telemedicine
BENEFITS OF VIDEO CONFERENCING

The biggest advantage or benefit video conferencing has to offer is the ability to meet with people in remote locations without incurring travel expenses or other expenses associated with face-to-face communication. Business meetings, educational meetings, healthcare conferences and more can all be easily conducted thanks to video conferencing technology. Individuals living in remote areas can also use video conferencing to keep in touch with the world at large.

More people are easily accessed and contacted using video conferencing. Because of this technology information and knowledge are often disseminated at more rapid rates, and collaboration between people occurs more willingly and freely. Students can take advantage of video conferencing to take classes at distant locations that would normally be unavailable. They can also take classes that will accommodate busy schedules.

Video Conferencing can stimulate better brainstorming, knowledge sharing and information gathering. Businesses can use video conferencing to provide presentations to key members of an organization or to solicit new clients in a professional manner, regardless of their location. The possibilities for communication are virtually endless thanks to video conferencing technologies.

Video conferencing provides the ability to meet and to work with others over a distance. The following list includes several examples of the benefits for businesses using video conferencing:

Reduce travel costs.
Improve use of executive time.
Speed up decision-making.
Keep meetings brief and more focused than face-to-face meetings.
Enable top management to quickly and effectively communicate with employees sitting in multiple locations.
Allow virtual project management via video and data conferencing with geographically dispersed peer groups at short notice.
Provide an effective way of delivering cost-efficient training to individuals without the requirement to consistently travel to central locations.
Create a medium for conducting interviews.

Working from home has never been easier or more practical. Video conferencing makes it possible to stay connected with people in a very real way, and allows users to save resources by meeting with clients and/or colleagues remotely. This reduces travel expenses while maintaining face-to-face contact.

For a minimal cost, it is possible to set up a fully functional video conferencing system that works in a professional and reliable way from your home office.

HOW TO DO THIS

Video conferencing used to be something of a black art. Today, easy-to-use and easy-to-manage technology means that users need to know little about how the equipment actually works. What’s important is what it can do, not how it does it.

Video conferencing has become popular over the last decade. It takes place when two or more parties in separate locations communicate in real time with both video and audio signals. The technology used in video conferencing includes:

Video Input

Video Output

Audio Input

Audio Output

Data Transfer

Data Compression Software

Acoustic Echo Cancellation

These are the technologies and software components typically used for video conferencing. So start video conferencing in your own line of work and make your world easy and trouble-free.

Victims of Bullying

Schools offer more than educational opportunities; they offer many opportunities for social interaction for youth. These social interactions, however, also create opportunities for children to become victims of bullying. In the last ten years, there has been a dramatic rise in research on bullying in the United States. This research has been spurred by continued extreme school violence in which the perpetrators had been victims of bullying.

Bullying encompasses a range of aggressive behaviors targeted at an identified victim (Espelage, 2002). It is differentiated from fighting because it involves an imbalance in strength such that the targeted individual has difficulty defending him- or herself. Bullying has been a common obstacle of childhood for many generations (Olweus, 1995). Many people believe that bullying is a natural part of growing up that does not cause serious harm but helps to toughen children up (Pianta & Walsh, 1995). On the other hand, extensive research in this area has identified consequences for the victims of bullying (Olweus, 1995).

There have been many high profile cases of victims of bullying who have retaliated by horrific school shootings (Kumpulamen, Rasanen, & Puura, 2001). A number of recent studies have investigated the immediate and short-term effects of peer victimization (Espelage, 2002; Espelage & Swearer, 2003; Nansel, Overpeck, Pilla, Ruan, Simons-Morton, & Scheidt, 2001). Rejection from a peer group has been linked to adverse psychological and physical consequences (Kumpulamen et al., 2001). Victims have been noted to be at risk for increased levels of depression, anxiety, and psychosomatic symptoms (Nansel et al., 2001). School avoidance and feelings of isolation are common among victims. Furthermore, it has been reported that these victims of bullying are developing post-traumatic stress disorder (Kumpulamen et al., 2001). This reveals the detrimental impact that peer rejection may have on youth and the importance of more research on the long-term impact bullying has on victims.

The media has portrayed “bullies” and “nerds or geeks” in numerous films, thus bringing awareness of childhood social hierarchies and the desire to be accepted as part of a group. The “nerds” are social outcasts who are commonly victimized by their peers and often blamed for not being tough enough. Recent research and pop culture movies like “Mean Girls” have brought more attention to girls and their bullying behaviors. There is limited research on the prevalence and effects bullying has on girls (Brinson, 2005).

Many bullies experience mental health difficulties. One study found that one-third of bullies have attention-deficit disorder, 12.5% were suffering from depression, and 12.5% had oppositional-conduct disorder (Kumpulamen et al., 2001). Bullies in turn take out their frustrations on someone they see as weaker than themselves. These bullies are also seeking to impress their peers. The rejection felt by the victim can have a direct impact on his or her life.

Several authors suggest that youth who are continually victimized may be at risk for poorer psychological functioning as adults (Espelage, 2002; Nansel et al., 2001). There has not been much research in this particular area, and little is known about how these victims function as adults. Research suggests that adolescents do not simply grow out of emotional problems with age, which implies that youth who have poor social skills may continue to experience difficulty maintaining relationships as adults (Nansel et al., 2001). Espelage (2002) found that many victims of bullying continue to think about their experiences of being bullied and recall painful memories well into adulthood.

Depression and suicidal ideation have been found to be common outcomes of being bullied for both boys and girls. Bullies themselves have been prone to depression (Espelage, 2002). Bullying behaviors have similarly been found to transfer from the classroom to the streets, with male bullies found to be seventeen times more likely to be frequently violent outside of the classroom and female bullies over one hundred times more likely to be frequently violent on the streets (Brinson, 2005). Longitudinal research has found that bullying and aggressive behavior were characteristics of those students who later became involved in criminal behavior (Nansel et al., 2001).

Statement of Problem

There have been limited mixed-methods studies on the phenomenon of bullying (Espelage & Swearer, 2003). No research has attempted to explore the long-term effects of bullying on individuals who have experienced it. This study will use a mixed-methods approach to explore the long-term effects of bullying on individuals who were bullied in their youth.

Statement of Purpose

The purpose of this concurrent, mixed-methods study is to explore and generate themes about the long-term effects that bullying in childhood has on men and women. The quantitative research questions will address the prevalence of bullying that male and female participants encountered at school when they were in their teens. Qualitative open-ended questions will be used to probe significant resilience factors by exploring aspects of the bullying experiences and how they impacted the person’s adult life.

Theoretical Framework

Several theories have sought to explain the existence of bullying behavior. Some developmental theorists perceive bullying as a child’s attempt to establish social dominance over other children. This dominance is established through developmentally appropriate actions; in the early years, when children lack complex social skills, they bully using physical means. As these overt acts are punished by disciplinarians, and as children develop a larger repertoire of verbal language, bullying becomes more verbal in nature. Finally, when children gain the skills to understand and participate in intricate social relationships, they begin to use these relationships as a more covert type of bullying in order to establish power and social dominance (Smith, 2001).

Resilience is defined as a person’s ability to cope with or adapt to stressful situations. In different environments, resilience can have different meanings; in a high-crime neighborhood, resilience could mean simply surviving the violence unscathed. It is the ability to overcome a challenging set of circumstances with success. Studies in resilience theory demonstrate that resilient individuals are those who grow and develop as a result of trauma. Rather than being stunted by life difficulties, they recover from traumatic events with an increased sense of empathy and enhanced coping skills (Pianta & Walsh, 1998).

Peer rejection theory provides an important context for socialization, which fosters the social skills that children learn and use throughout their lives. Rejection theory is based on the premise that children who are rejected by their peers are not given the same opportunities to socialize and develop socialization skills, which further distances them from their peers (Cole & Gillenssen, 1993).

The life course perspective is an appropriate lens through which to review bullying and its after-effects on victims. Research has shown that bullying can cause victims to have varying degrees of posttraumatic stress syndrome (Houbre et al., 2006). Elder (1998) researched the social pathways in the life course. This research revealed that individuals’ lives are influenced by the ever-changing effects of their experiences.

Research Questions/Null Hypothesis

Research Question #1:

How are men and women impacted by the bullying they encountered as youth?

Null Hypothesis #1:

There will be no statistically significant difference in how men and women are impacted by bullying that they encountered as youth, as measured by the Revised Olweus Bully/Victim Questionnaire.

Research Question #2:

How did bullying as a youth affect men?

Null Hypothesis #2:

There will be no evidence that being bullied in their youth has an impact on men’s adult lives.

Research Question #3:

How did bullying as a youth affect women?

Null Hypothesis #3:

There will be no evidence that being bullied in their youth has an impact on women’s adult lives.

Research Question #4:

What are the implications in their current life that they feel resulted from the bullying they encountered as youth?

Null Hypothesis #4:

There will be no statistically significant evidence of implications in their current life that resulted from the bullying they encountered as youth.

Research Question #5:

How do they feel their bullying experiences impact their ability to socialize with people now?

Null Hypothesis #5:

There will be no evidence that bullying experiences in their past will impact an adult’s ability to socialize with other people.

Definition of Terms

Bully/victims: individuals who both bully others and are victims of bullying (Espelage & Swearer, 2003).

Bullying: aggressive behavior that occurs repeatedly over time and includes both physical and emotional acts that are directed towards another individual with the intent to inflict harm or discomfort (Olweus, 1993).

Bystander: individual who observes a bullying incident (Olweus, 1993).

Emotional Scarring: the association of negative feelings with the recollection of painful memories of being bullied (Espelage, 2002).

Peer: an individual belonging to the same groups based on age, grade, and status (Olweus, 1993).

Victim of Bullying: an individual who is exposed repeatedly over time to aggressive behavior that is inflicted by his peers with the intent to cause harm or discomfort (Espelage, 2002; Olweus, 1993).

Assumptions

It is assumed that the participants in this study are of sound mind to participate.

It is assumed that all of the participants will answer the web survey honestly.

It is assumed that all of the participants were bullied in their youth.

Delimitations

The researcher recognizes the following delimitations for the study:

The sample size will be dependent on the number of people who respond to the email of inquiry about this study.
All respondents are mentally competent to answer the questions in the online survey.
The participants have the potential to be spread out across the United States.

Limitations

Quantitative research looks for generalizability of the research findings to the larger population (Creswell, 2005). Generalizability is not as important in qualitative research, which seeks to explore a phenomenon and the impact it has. If more men than women respond to this survey, the sample will not be equally distributed. Socio-economic status is not asked about in this study.

Using Drama to Teach Literacy

Abstract:

The term ‘oracy’ meaning:

‘the ability to speak fluently and articulately and to understand and respond to what other people say’.

was first used by Wilkinson in 1965 (Definition, Microsoft Encarta World English Dictionary).

Since that time, it has been increasingly recognised that oracy is central to all aspects of the learning process and to the activities in which children engage in school. The development of talking and listening skills is central to the reading process and to participation in all curricular areas.

This term my focus was teaching oracy and literacy to year 4 children in an interactive and communicative environment created through the use of drama.

By the end of the series of lessons I wanted children in year 4 to be able to identify social, moral and cultural issues in stories. Drama was employed as a tool to create roles showing how behaviour could be interpreted from different points of view.

I shall present a discussion of the rationale behind the activities I have chosen, the ways in which the children engaged with them and the success of this approach to the teaching of oracy. I shall support my work with research evidence in the areas of talking and listening, the wider area of literacy, and research pertaining to effective teaching and learning generally.

I will discuss what I found when I assessed the progress made by the children and the implication this has for my future role as a teacher by linking my work with the Professional Standards for Qualified Teacher Status and Requirements for Initial Teacher Training.

Introduction:

The acquisition of language, a complex process, is essential for effective communication throughout life. Creating opportunities for the development of oracy in the classroom is essential if children are to develop the ability to communicate. With research showing that children are increasingly spending time in solitary activities related to computers (MacGilchrist et al., 2006, p.12), thereby reducing opportunities for talking in the home, it is essential for schools to act as facilitators in the development of talking and listening.

The National Literacy Strategy defines literacy thus:

‘Literacy unites the important skills of reading and writing. It also involves speaking and listening which, although they are not separately identified in the framework, are an essential part of it. Good oral work enhances pupils’ understanding of language in both oral and written forms and of the way language can be used to communicate. It is also an important part of the process through which pupils read and compose texts.’

(National Literacy Strategy: Framework for Teaching, p.3).

The lack of reference to talking and listening as a separate area has been addressed in later recommendations with an acknowledgement that ‘language is an integral part of most learning and oral language in particular has a key role in classroom teaching and learning’ (DfES, 2003, p.3). The document is highly prescriptive in the means through which contexts for talk should be established.

This paper will present work carried out with a year 4 class in respect of oracy taught through drama. I will evaluate the opportunities given to children for developing oracy and the ways in which children responded to the tasks.

The role of talking and listening:

For the past fifty years researchers have been making a clear case for the importance of talk in the learning process. The psychologists Vygotsky and Bruner have demonstrated the fundamental importance to cognitive processes and learning of speaking and listening (Lambirth, 2006, p.59).

Talk is both a medium for teaching and learning and one of the materials from which a child constructs meaning (Edwards & Mercer, 1987, p.20). I wanted the talking and listening activities to act as a medium for teaching and learning through the children’s interaction. My aim was that they would be teaching and learning from each other through their discussion group work. Their construction of meaning would come about as a result of their understanding of the text and the dilemmas faced by David (see appendix 2).

Opportunities for developing talking and listening:

Developing talking and listening skills is a complex process which must be carefully managed in the classroom. In all curricular areas, oral skills should be constantly developed through a range of activities and, like other areas of the curriculum, these activities should be differentiated to allow for a range of abilities within the class (see appendix 2). Different subjects offer opportunities for different kinds of talk (DfES, 2003, p.4). It is therefore a very important feature of effective teaching to give children as many opportunities as possible to engage in a variety of types of talk. Children make sense of the world as they learn the communication skills to interact with others in their culture (Lambirth, 2006, p.62).

Light and Glachan have shown that children working together and sharing their ideas orally can develop solutions to problems that they could not manage to solve independently (Light & Glachan, 1985). Carnell and Lodge suggest that more school learning should be based on talk and dialogue between pupils as ‘it has the power to engage learners in learning conversations, keeps them open to new ideas and requires both honesty and trust’ (Carnell & Lodge, 2002, p.15).

Planning the activities:

When planning the activities I sought to involve the following aspects:

Modelling appropriate speaking and listening;
Encouraging sensitive interaction;
Ensuring goals are set with clear criteria for success;
Planning opportunities for children to investigate, apply and reflect on language in use.

(DfES, 2003, p.19) (see planning appendices 1 & 2).

I chose to provide opportunities for talk in the context of drama, giving the children opportunities to engage with one another. Research has shown that children learn more effectively when given opportunities to share ideas. Grugeon points out that this is a skill, like others, and must be taught: ‘Children who are expected to work together in groups need to be taught how to talk to one another. They need talk skills which enable them to get the best out of their own thinking and that of all other members of the group’ (Grugeon et al., 2001, p.95). For this reason I modelled the activities for the children so that they would have a clear understanding of what they were required to do and how best to go about the tasks in hand (see appendix 2). Some of the children were tentative in their engagement at the beginning of the exercise, but the group work gave them opportunities to develop their confidence and self-esteem.

Developing appropriate talking and listening:

It is important to be aware of the difference between incidental talk, in which children engage in the course of an activity but which is not directly related to the learning intentions, and talk which is a main focus of the activity. In my drama activities, I wanted children to be focused on their talk through appropriate activities which would engage them and hold their interest. When planning the activities I was aware of the need to engage pupils on the basis of their prior knowledge: ‘To prompt learning, you’ve got to begin with the process of going from inside to outside. The first influence on new learning is not what teachers do pedagogically but the learning that is already inside their heads’ (Gagnon, 2001, p.51). It was with this in mind that I decided on David’s dilemma. I felt that the children would have sufficient previous knowledge of the ideas presented to be able to identify with the characters and the dilemmas faced by them (see appendix 2).

Establishing Rules:

In all conversations there are rules, for example, only one person talking at a time. Cordon suggests that ‘children receive little help in understanding and appreciating the ground rules for group discussion’ (Cordon, 2000, p.86), an issue that I felt it was important to address through the establishment of guidance for the children. This is vital to the process so that all children have equal opportunities to participate in the talking and listening activities.

Aims:

My aims in the drama activities were:

To encourage purposeful talk, the skills associated with which the children could later transfer to other areas of their learning.
To develop children’s ability to work in a group.
To enable children to develop the confidence and competence to present their work to a group of their peers.
To develop children’s skills in forming opinions, responding to other children’s opinions and oral presentation skills.

Drama as a tool for developing talking and listening:

I chose to approach the teaching of speaking and listening through drama as it affords many opportunities for children to develop their speaking and listening skills. Drama helps children to understand their world more deeply and allows them an opportunity to find ways to explore and share that understanding (Wyse & Jones, 2001, p.213).

Research about learning has shown that children learn most effectively when learning is meaningful to them. Learning happens in the process of coming to new understandings in relation to existing knowledge (MacGilchrist et al., 2006, p.52). For this reason I gave children the opportunity to create their own scenarios in acting out David’s dilemma. In the group activities I wanted the talk to be open-ended so that the children could question, disagree with, extend and qualify each other’s utterances (DfES, 2003, p.7).

After their group activities children had the opportunity to share their ideas with the class, giving them important experiences in presenting their opinions and listening to the views of others. Children were actively engaged in tasks which gave validity to all of their ideas and opinions. When given opportunities, children are keen to engage with issues in a text and challenge the conventions of the story (Baumfield & Mroz, 2004, p.55). I wanted children to have experience of challenging the ideas they were faced with by developing their own responses to scenarios and the behaviour of characters.

Links with reading:

The development of effective talking and listening skills is vital to the reading process. Before their oral work, children were finding main ideas in the text to support their viewpoints (see appendix 1). Only after the children had established the supporting information they wished to use were they in a position to verbalise their ideas. Reading and talking were also linked through the requirement that the children orally summarise the salient points in a written argument. Through a discussion of the ways in which authors are able to develop their ideas, children can develop ways in which to present their own ideas to an audience. Effective questioning was essential to this part of the process to provide a framework for the development of the children’s ideas in the correct context. As children have more experience and gain more confidence in this type of activity they are able to act as effective peer questioners, a very useful aspect of pupil self-assessment. Through this process children can measure the success of their own learning. Baumfield and Mroz advocate the development of a community of inquiry to develop pupils’ critical analysis of text (Baumfield & Mroz, 2004, p.58).

Developing opportunities for talk:

In the classroom a variety of types of talk occur throughout the day. The way in which children interact with each other is very different from the way in which they interact with the teacher, who does 70% of the talking in the course of a day (Baumfield & Mroz, 2004, p.49). This clearly means that children are not being given sufficient opportunities to develop talking and listening skills critical to success in all other areas. To enhance the role of talk in shaping and developing learning requires a reduction in the teacher’s role as classroom controller and a shift towards that of an enabler of talk for thinking (Myhill, 2006, p.19). After the initial modelling and discussion, it was important for me to let the groups work, as far as possible, along the problem path independently.

It was my intention to give children a variety of opportunities to engage in different types of talk. They had opportunities to talk in small groups when working on their scenarios and afterwards had opportunities to present their work to the whole class.

Talking in groups:

Working in groups has been shown to develop a sense of belonging in children, something which I regard as very important in the classroom. Osterman has pointed out that, ‘There is substantial evidence showing or suggesting that the sense of belonging influences achievement through its effects on engagement’ (Osterman, 2000, p.341). She goes on to say that children with a well-developed sense of belonging in school tend to have more positive attitudes to school and each other. As shown in appendix 3, some of the children were lacking in confidence in the initial stages of the activities, something which I would seek to develop in children through more exposure to this type of activity.

Resnick has pointed out that while the majority of learning in schools is individualistic in its nature, this is contrary to other aspects of life such as work and leisure activities, which are much more social in nature (Resnick, 1987). It is essential, therefore, that children develop the skills needed for group work so that they have the ability to engage in participatory aspects of education. When planning the group activities for the children I was conscious of making sure that each child had a part to play in the development and presentation of each activity. Francis has pointed out that the majority of talking and listening activities involve the teacher doing most of the talking, with the children interjecting at suitable gaps in the teacher discourse (Francis, 2002, p.29), something which I wanted to avoid by giving the children ownership of the activities. This would ensure that all children were engaged in the process and less likely to be passive. At the same time children had to be able to listen quietly to the views of others, thereby developing strategies for turn-taking. All the children engaged in the process very well.

Assessment:

Assessment for learning is a very important aspect of the teaching and learning process, and from the point of view of my own professional development the ability to assess pupil learning effectively is a very important competence to have. As Dann has pointed out, ‘if assessment genuinely seeks to give some indication of pupils’ level of learning, pupils will need to understand and contribute to the process’ (Dann, 2002, p.2). In assessing the effectiveness of the activities it is important to assess the appropriateness of the children’s talk for the task. The children participated in the assessment process through their involvement in the plenary sessions. This was coupled with my observations of children’s success on the task (see appendix 3). All of the children achieved the objectives and reported that they enjoyed the activities. Children’s talk is a very good indicator of their understanding of a task. The fact that all the children experienced success with the tasks and were able to carry them out using appropriate language was demonstrative of their understanding of the characters and dilemmas with which they were faced. Talking and listening is very valuable for assessing understanding, particularly with children who have special educational needs and who may have difficulty with written tasks.

Myers has presented research carried out in primary schools which suggests that children who participate in group work enjoy the experience of working with others and find it very helpful in the learning process (Myers, 2001, cited in MacGilchrist et al., 2006, p.159). My evaluation of the drama activities leads me to agree with this, particularly in light of the comment made by one of the children: ‘I wish we could always do drama with English’ (see appendix 3).

Children’s language, like most of their learning, responds to encouragement (Fontana, 1994, p.78). This is an important idea to bear in mind when giving the children feedback, and it is important to praise their efforts at contributing. I would hope that this would encourage those children who were initially reluctant participants to contribute more readily in the future.

What I have Learnt:

I have developed a greater degree of understanding of the role of talking and listening in the curriculum as well as an understanding of how children progress in this area and what they should be expected to achieve. I hope to build on this in my future development and feel that I have made progress in terms of the standards laid out by the Training and Development Agency.

Appendix 1: Literacy Planning

Appendix 2: Lesson Observation Sheets

Appendix 3: Evaluation

Evaluation: Week 2

All groups were very engaged and enjoyed the task. They said that they wished they could always do drama with English.

Possible action to be taken:

More use of drama when teaching English.

Assessments

Child’s Name | Group | Objective achieved? | Comments | Action
Andrei | More able | ✓ | Very animated – leader of group | Speaking and listening skills
Leo | Middle group | ✓ | Co-operative | –
Robert | Middle group | ✓ | Tentative at first – more engaged with script | Confidence building
Oona | Middle group | ✓ | Good directional skills; use of props (desk); good team player | –
Danielle | More able | ✓ | Works well in her team | –
Alexandra | SEN | ✓ | Tentative – very aware of being stared at | Confidence building

References:

Baumfield, V. & Mroz, M. (2004) Investigating pupils’ questions in the primary classroom, in E.C. Wragg (Ed.) (2004) The RoutledgeFalmer Reader in Teaching and Learning. London: RoutledgeFalmer.

Burns, C. & Myhill, D. (2004) Interactive or inactive? A consideration of the nature of interaction in whole class teaching. Cambridge Journal of Education, 34, 1, 35-49.

Carnell, E. & Lodge, C. (2002) Supporting Effective Learning. London: Paul Chapman Publishing.

Cooper, P. & McIntyre, D. (1996) Effective Teaching and Learning. Buckingham: Open University Press.

Cordon, R. (2000) Literacy and Learning Through Talk: Strategies for the Primary Classroom. Buckingham: Open University Press.

Dann, R. (2002) Promoting Assessment as Learning. London: RoutledgeFalmer.

Department for Education and Employment (1998) The National Literacy Strategy: Framework for Teaching. London: DfEE.

Department for Education and Employment (2003) Speaking, Listening and Learning Handbook. London: DfEE.

Department for Education and Skills (2003) Speaking, Listening, Learning: Working with children in key stages 1 and 2. London: DfES.

Edwards, D. & Mercer, N. (1987) Common Knowledge. London: Methuen.

Francis, P. (2002) Get on with your talk. Secondary English Magazine, 5, 4, 28-30.

Gagnon, G.W. (2001) Designing for Learning. London: Paul Chapman Publishing.

Grugeon, E., Hubbard, L., Smith, C. & Dawes, L. (2001) Teaching Speaking and Listening in the Primary School (2nd edition). London: David Fulton.

Lambirth, A. (2006) Challenging the laws of talk: ground rules, social reproduction and the curriculum. The Curriculum Journal, 17, 1, 59-71.

Light, P. & Glachan, M. (1985) Facilitation of individual problem-solving through peer group interaction. Journal of Educational Psychology, 5, 3-4.

MacGilchrist, B., Myers, K. & Reed, J. (2006) The Intelligent School. London: Sage Publications.

Myhill, D. (2006) Talk, talk, talk: teaching and learning in whole class discourse. Research Papers in Education, 21, 1, 19-41.

Osterman, K. (2000) Students’ need for belonging in the school community. Review of Educational Research, 70, 3, 323-367.

Resnick, L.B. (1987) Learning in school and out. Educational Researcher, 16, 9, 13-40.

Training and Development Agency (2002) Qualifying to Teach: Professional Standards for Qualified Teacher Status and Requirements for Initial Teacher Training. London: Training and Development Agency for Schools.

Thompson, P. (2006) Towards a sociocognitive model of progression in spoken English, Cambridge Journal of Education, 36, 2, 207-220.

Vygotsky, L. (1972) Thought and Language. Cambridge, MA: MIT Press.

Wyse, D. & Jones, R. (2001) Teaching English Language and Literacy. London: RoutledgeFalmer.