Acquisition and Participation Metaphors of Learning

Introduction

A wealth of research has been devoted to understanding the array of theories of learning that have emerged within the last 50 years. The focus of this paper is two specific paradigms within which learning is now understood: the acquisition metaphor and the participatory metaphor of learning. The relative merits of each paradigm have been evinced through a coalescence of scientific research, appropriating findings from an array of emerging fields of inquiry. Greeno (1997: 14) notes that progress in the field of cognitive science has illuminated our understanding of the “processes of problem-solving, reasoning, understanding and memory”, whilst advances in understanding social interaction derive from “ethnography, ethnomethodology, symbolic interactionism, discourse analysis, and sociocultural psychology.” In broad terms, these two distinct lines of inquiry have fuelled the alternate metaphors of acquisition and participation as ways of thinking about the nature of learning. When paradigms such as these develop, they bring with them the distinctive terminology characteristic of the intellectual currents which spawn them. Griffin (2003: 68) helpfully acknowledges that part of the reason the lines of inquiry about learning have been divergent is that “different authors have used different terminology to describe the types of learning that they have studied.” Greeno (1997: 14) rightly concedes that the “prospects for theoretical advancement” are improved if the scientific agenda prizes synthesis. The proverbial maxim that ‘iron sharpens iron’ is relevant here, where the two metaphors of learning have lived through an intellectual period in binary opposition, illustrated by aspects of Brown, Collins and Duguid (1989); Anderson, Reder and Simon (1996); and Greeno (1997).
Indeed, as Greeno (1997: 15) notes in his concluding remarks, “the cognitive and situative perspectives are both valuable for informing discussions of educational practice, but in rather different ways.”

The prismatic dimensions of learning have allowed it to be categorised variously, reflecting a variety of operating paradigms. Binary categorisations including “single or double loop” (Argyris and Schon, 1978); “maintenance or innovative” (Botkin et al, 1979); “banking or problem-posing” (Freire, 1972); “reflective or non-reflective” (Jarvis, 1992); “formative or transformative” (Mezirow, 1991); and “surface or deep” (Marton, 1982) are all noted by Griffin (2003: 68-72). These theoretical constructions of learning can, in part at least, be subsumed within the ambit of the two metaphors in question, namely learning as ‘acquisition’ or learning as ‘participation.’

Jonassen and Land (2000: 28) note that “Resnick (1987), in her presidential address to the American Educational Research Association, examined the practices in schools, which are predicated most strongly on the acquisition metaphor, comparing them to how individuals learn and use knowledge outside of schools. Her analysis focused attention on the collaborative, contextualised, and concrete character of learning outside of school, as opposed to the individual and abstract character of learning that occurs inside of school. Arguably, it was this analysis that served as one of the principal stimuli for the development of the participatory perspective with its emphasis on situated activity.”

The Participatory Metaphor

While the field of cognitive psychology is well established, social psychology and cultural studies are emergent fields. The participatory metaphor of learning has grown out of these more recently emerging psychological and sociological disciplines. Brown, Collins and Duguid (1989) observed that methods of learning which try to teach abstract concepts independent of authentic situations overlook the way understanding is acquired and developed through continued, situated use. These researchers also assert that “understanding is reliant upon complex social interactions and negotiations”. Brown, Collins and Duguid’s (1989) assertion that the nature of language acquisition is analogous to the nature of all knowledge acquisition is a useful interpretive device. Vocabulary acquisition is a relatively rapid and efficient process when learners are participants in ‘authentic situations’, explained here as situations where a genuine functional need for language exists in order for individuals to participate in the flow of real-life conversations. Herein, learners are active participants with ‘practitioners’, indeed ‘cognitive apprentices’ as Brown, Collins and Duguid (1989) postulate. An authentic language acquisition environment encourages awareness of nuance and the practice of negotiation to deal promptly with uncertainty, an option arguably not as available to students in conventional classroom settings. By way of contrast, Brown, Collins and Duguid (1989) describe typical language acquisition approaches in schools as extremely inefficient, due to their level of contrivance, belying the value of formal definition and memorisation without regular practice.

According to Brown, Collins and Duguid (1989: 1), “knowing … is inextricably situated in the physical and social context of its acquisition and use.” This representation of knowing resonates with Jonassen and Land’s (2000: 28) comments that “knowing about refers to an activity – not a thing. Knowing about is always contextualised – not abstract; knowing about is reciprocally constructed within the individual-environment interaction – not objectively defined or subjectively created; and knowing about is a functional stance on the interaction – not a ‘truth’.” Participatory advocates underline the “inseparability of knowing and doing”, an assertion which, if widely true, raises enormous challenges for schools and other formalised educational institutions.

Brown, Collins and Duguid (1989) explore the enticing notion of ‘cognitive apprenticeship’, positioning teachers as masters of apprentices who utilise authentic domain activity. They make the astute observation that “social interaction, social construction of knowledge is significant, therefore conversation, narrative and anecdote, should not be dismissed as noise.” Furthermore, they assert that ‘legitimate peripheral participation’ is significant, for it often involves apprentices attempting to enter the culture. This articulation of genuine learning imbibes the sociological significance of the learning framework. The participatory metaphor of learning empowers both the individual and the social group within the learning context. Other terms common among situated cognition adherents, such as participating, brokering and negotiating, elevate the status and significance of the learner within the learning environment, implying an active, engaged and enculturated role for the learner in the learning process. These concepts indicate the premise that learning is an active process, and certainly not an inert, static product such as an intact body of rarefied knowledge, permanently beyond dispute or modification. This framework for understanding learning has real currency at a time when geo-political shifts in an increasingly globalised world expose the tentative nature of knowledge which may have been perceived as immutably fixed in previous centuries. The elevation of the learner’s status in relation to the act and process of knowing is an appealing way to view the nature of learning.

A logical extension of this interpretation of learning is its predilection, according to Brown, Collins and Duguid (1989), for “collective problem solving, enacting multiple roles, confronting ineffective strategies, and utilising collaborative work skills.” The corresponding conviction that learning is a transaction also pinpoints a set of false assumptions. In this light, it is deemed false “that knowledge is individual and self-structured, that schools are neutral in terms of what is learned in them, that concepts are abstract and immutable, and are independent of the context in which they are acquired, that (JPF) behaviour should be discouraged” (where JPF denotes ‘just plain folks’, Brown, Collins and Duguid’s term for everyday practitioners).

Jonassen and Land (2000: 84) note that ‘situated cognition’ (or SitCog to its pundits), while holding some advantages over previous foundations, does not presently offer a comprehensive account of cognition: “For SitCog to fully serve as an integrating framework, a means of accommodating multiple perspectives needs to be developed, to allow inclusion of selected ideas and practices from behaviourism, symbolic cognition, and other theories, both psychological and non-psychological.”

Jonassen and Land (2000) note that SitCog also presents an opportunity to define the designer’s role in new ways. The design task is seen in interactional, or participatory (rather than rational-planning), terms. They assert (2000: 84) that “design and control become situated within the political and social context of actual learning environments. Rather than applying the best learning theory, designers and participants of learning environments honour the constraints and affordances of the local situation. A situated view of design, then, is one that supports the worthy practices of participants and stakeholders, using whatever theories, tools, or technologies at their disposal.”

New situations continually recast concepts in a more densely textured form: concepts are ever evolving, always under construction, and defy categorical description.

Brown et al (1989) provide a clear account of situated cognition, a term noted frequently in the literature, which draws attention to the critical role of situation or context in the process of learning. The concept of situativity is a key component of the participatory metaphor of learning. It asserts that knowledge is a product of a specific learning situation, embodying a set of cultural assumptions which facilitate the cultural construction of knowledge.

The researchers advocate the “inseparability of knowing and doing”, which has enormous implications for education and learning if their further assertion is correct: that conventional educational settings and theories of mind dissociate knowing and doing as two distinct practices.

This conceptualisation of learning acknowledges the significance of the activity, whereby authentic activities are defined as the ordinary activities of the practitioners of a culture. Brown, Collins and Duguid (1989) indicate that school activities are hybrid: framed within the values of one culture (school) while attributed to the culture of another domain, such as that of the historian or the mathematician. Proponents of authentic learning activities applaud the participatory metaphor of learning. These researchers desire learning activities congruent with what practitioners do, a noble aspiration embracing the insights of the apprenticeship model of admission and enculturation into the beliefs and practices of particular learning communities. The corollary, amongst some situative theorists, most notably Lave, is regrettably a fairly strident exposé of the limitations of schooling: since knowing becomes transmuted within school contexts, school culture replaces, rather than allows access to, the authentic domain of knowledge.

Brown, Collins and Duguid (1989: 1) further assert that a growing body of research into cognition undermines the notion that abstract knowledge can readily be transferred from the minds of teachers to the minds of students: “Knowing … is inextricably situated in the physical and social context of its acquisition and use.” If extracted from these contexts, it is irretrievably transformed.

Anderson, Reder and Simon (1996) attempt to distil four key claims posed by situative learning proponents, then systematically dismantle each one from a viewpoint more akin to the acquisition metaphor of learning. To complicate the debate, Greeno’s (1997) rejoinder asserts that Anderson et al (1996) misread the paradigm of situative cognition, providing an overly simplistic distillation of the case for the ‘SitCogs’.

Anderson et al (1996) state that the ‘SitCogs’ claim all knowledge is context-specific or context-bound, a claim they regard as going too far. Their rebuttal suggests that knowledge may be made more transferable when the transferability of concepts is explicitly articulated and valued in initial instruction. They also found that some research failed to find evidence of context specificity in learning, and that how tightly knowledge is bound to context depends upon the nature of the knowledge. Furthermore, they concluded that knowledge is more context-bound when taught in a single context; moreover, links between school-based competencies and workplace competencies show some correlation, defusing a degree of the potency of some situated learning advocates’ claims.

The Acquisition Metaphor

The consolidated field of cognitive psychology, which has shaped theories of learning over several decades, espouses the view that knowledge is a product capable of consumption and acquisition. This longer-standing understanding of learning has, not surprisingly, felt threatened by the situative cognition view. It is seen by many as a conservative or conventional conceptualisation of learning, attuned to the enculturation process of traditional schooling.

A belief within this camp is in the existence and value of abstract knowledge, deemed valuable for its supposed dexterity: its capacity to reappear for reapplication in additional contexts in ways meaningful for learners. Greeno (1997: 15) admits that, while more drawn to the situative learning paradigm, nonetheless “the cognitive perspective clarifies aspects of intellectual performance and learning, with its emphasis on and clarification of informational structures of skill, knowledge, strategies and understanding.”

While the situative camp has to some extent charged knowledge with an inability to be transferred once stripped of the original context in which it is learnt, Greeno (1997) defends the participatory model, suggesting it recognises that the notion of transferability of knowledge must be examined with greater subtlety and nuance. Anderson et al (1996) cite evidence from studies spanning the full gamut of opinion about the degree to which knowledge transfers, which superficially appears to undermine the situative, participatory view that knowledge removed from its context is diminished. A further claim attributed by Anderson et al (1996) to the situative view, and a seeming attack upon the acquisition pundits, is the assertion that training by abstraction is of little use. The writers support the use of abstract instruction combined with concrete examples as a powerful approach to knowledge acquisition, citing studies which purport to demonstrate the efficacy of abstract knowledge. Finally, they pose the claim by situative proponents that instruction needs to occur in complex social environments. To counter this, Anderson et al note that part training is often more effective than holistic training, exemplified by tax code being better learnt when removed from the social context of interaction with a tax client – thereby removed from the social environment. Furthermore, while cooperative, group learning is deemed to be inclusive, studies do not categorically show group learning to be necessarily superior.

Recommendations and Conclusions

Brown, Collins and Duguid (1989) recommend that, since situated learning postulates that activity and perception precede conceptualisation, these therefore need to be better understood. In line with this, the key terms used to bolster both the participatory and the acquisition metaphors of learning need more precise definition.

It seems that both conceptualisations of learning recognise much of the merit in the opposing camp, as well as (at least intuitively) the artificiality of binary opposition in fields of academic research and inquiry. The dialectical approach to research within the relevant scientific disciplines appears to recognise the value and goal of synthesis, so that robust progress in understanding the nature of learning can occur.

Bibliography
Books

Griffin, C. et al (2003) The Theory & Practice of Learning. London: Kogan Page

Jonassen, D. H. and Land, S. M. (2000) Theoretical Foundations of Learning Environments. Mahwah, NJ: Lawrence Erlbaum Associates

Journal Articles

Anderson, J. R., Reder, L. M. & Simon, H. A. (1996). Situated learning and education. Educational Researcher, Vol 25, No. 4, pp. 5-11. American Educational Research Association

Brown, J. S., Collins, A. & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, Vol 18, No. 1, pp. 32-42. American Educational Research Association

Greeno, J. G. (1997). Response: On claims that answer the wrong questions. Educational Researcher, Vol 26, No. 1, pp. 5-17. American Educational Research Association

Accommodations for Intellectually Disabled Students

Abstract:

This research paper describes intellectual disability (ID), its limitations, and some of its common characteristics. It also provides a comprehensive view of modifications, accommodations, assistive technology, and transition planning for students with disabilities. Relevant agencies and inclusion tips are also mentioned. The paper concludes with final suggestions.

Definition:

Intellectual disability (ID), formerly known as mental retardation, is characterized by below-average intelligence or mental ability and a deficit in the skills required for everyday living. People with intellectual disabilities can learn new skills, but they learn them more slowly. There are different degrees of intellectual disability, from mild to severe. The disability originates before the age of 18. (Definition of Intellectual Disability, n.d.)

Common characteristics of Intellectual disability:

There are many signs of intellectual disability. For example, individuals with intellectual disability may:

Have trouble speaking,
Find it hard to remember things,
Not understand how things work,
Have difficulty understanding social rules,
Have difficulty seeing the result of their actions,
Have trouble solving problems, and/or
Have trouble thinking logically.
Limitations of Intellectual Disability:

Someone with Intellectual disability has limitations in two areas. These areas are:

Intellectual Functioning: Also known as IQ, this refers to a person’s ability to learn, reason, make decisions, and solve problems.
Adaptive Behavior: the collection of conceptual, social, and practical skills that people learn and perform in their daily lives, such as communicating effectively, cooperating with others, and taking care of oneself. These skills are defined as:
Conceptual skills: Literacy and language; time, money, number concepts; and self-direction.
Social skills: Social responsibility, interpersonal skills, self-esteem, acceptance, caution, social problem solving, and the ability to follow rules/obey laws and to avoid being victimized.
Practical skills: activities of daily living (personal care), work-related skills, healthcare, travel/transportation, schedules/routines, safety, use of money, use of the telephone. (Tracy)
Analysis of ways for addressing the needs of students within this disability category:

There are many ways that disabilities can affect the ability to perform effectively on the job. Levels of disability and ability are unique to each individual. Most accommodations are simple, creative alternatives to traditional ways of doing things. The following are some of the strategies, accommodations, modifications, and assistive technology that can help people with intellectual disabilities participate fully in work-based learning experiences. (Dwyer)

Strategies to address the needs of individual with intellectual disability:

It is important to implement strategies that address the needs of the individual. The following are a few strategies that can help address the needs of an individual with intellectual disability:

Understanding the Needs of Individuals with Disabilities
Managing Time and Classroom Activities
Teaching Techniques
Assessment Practices (Doka)

Accommodations for students with disabilities: It is very important to accommodate individuals who have intellectual disabilities, so that they have access to a supportive environment in which they can participate as fully as possible. The following are some of the modifications and accommodations for such individuals:

Assistive Technology: Implementing accommodations involves anticipating problems students with disabilities may have with instruction or assessment activities. Students may need to use some type of assistive technology to overcome or mitigate the effects of their disability. Assistive technology encompasses a wide range of tools and techniques. Some low-tech tools include pencil and tool grips, color-coding, and picture diagrams. High-tech tools include electronic equipment, such as a talking calculator, computer with word prediction software, and variable speech control audio recorder for playback. (Assistive Technology, Accommodations, and the Americans with Disabilities Act, 2001)

Instruction and Assessment: Suggestions for accommodations in specific areas of instruction and assessment are as follows:

Reading
Listening
Writing
Mathematics
Completing assignments
Test preparation
Taking tests

Learning and Work Environment: Accommodations may be needed that involve:

Changes to the physical features or organization of the school or classroom,
Changes to the learning environment, including alterations to grouping arrangements and behavioral expectations,
Classroom management procedures, and
The physical setting.

Job Requirements: Job accommodations are defined on an individual basis. Some accommodations involve simple adaptations, while others require more sophisticated equipment or adjustments to physical facilities. The instructor and employer will need to analyze job tasks, basic qualifications and skills needed to perform the tasks, and the kinds of adjustments that can be made to ensure that performance standards will be met.

Modifications for students with disabilities:

Modifications to the expectations or outcomes of the curriculum may be necessary for a student with a disability. Modifications may include modified program or course requirements, concepts or skills significantly below the targeted grade level, or alternate curriculum goals.

Impact of Modifications: When considering modifications, it is important to evaluate the long-range impact of changing expectations. Students with disabilities who are not challenged to reach the same level of achievement as their nondisabled peers may not be able to earn a standard diploma in high school or a career certificate or degree from a postsecondary institution. Modifications may also limit the types of careers and occupations in which students can find work. (HOW TO SELECT, ADMINISTER, AND EVALUATE ACCOMMODATIONS FOR INSTRUCTION OF STUDENTS WITH DISABILITIES, 2011)

Modified Occupational Completion Points: Career education programs are different at the high school level. The student performance standards may be modified as long as they are aimed at fulfilling the requirements of the specific job selected by the individual student. Teams may modify the curriculum and identify a completion point that falls between established completion points, known as modified occupational completion points.

Transition planning for students with Intellectual disabilities:

Transition is usually described as a coordinated set of activities for a student, designed to promote successful progression into, through, and out of school. Transition relates to entry into and exit from each educational level, such as pre-school to elementary school, elementary school to secondary school, and secondary school to post-school activities, including postsecondary education (both university and college), vocational training, apprenticeships, employment, adult education, independent living and community participation. Successful transition for all students, including those who have learning disabilities, is based on:

the student’s identified needs
the student’s recognized strengths, skills and competencies
the student’s interests
the student’s preferences
the student’s short and long term goals
the student’s past experiences, including academic achievements, co-curricular and volunteer involvements at school and in the community. (Tracy)
Agencies available for intellectually disabled:

There are many agencies around the world catering to the needs of individuals with intellectual disability, including:

National Intellectual Disability Care Agency (NIDCA)
U.S. Organizations for People with Intellectual Disabilities:
The ArcLink
Find my roommate
MOSAIC
Think College and many more.
Intellectual Disabilities’ agency of the New River Valley (IDA)
Inclusion Tips:

The tips below are general guidelines to help make simple accommodations:

Academic Accommodations: Teachers may need to make adaptations to the curriculum and learning activities in order to fully include these students.
Physical and Sensory Accommodations: This includes hearing impairments, visual impairments and physical disabilities.
Behavioral Accommodations: It is important to have a well-managed and consistent behavioral plan in order to help students learn more appropriate behaviors.
Conclusion:

Intellectual disability is a common disability, and its impact can be reduced by using the different techniques described in this paper. Children with intellectual disabilities should be accommodated accordingly, and there are many ways through which a child can receive accommodations. These pupils need special care and attention. People with such disabilities are often not seen as full citizens of society; there should be a movement for self-advocacy, self-determination, and self-direction by people with intellectual disabilities. Their needs can be addressed with the help of assistive technology and the provision of comprehensive support.

Works Cited

(2001). Assistive Technology, Accommodations, and the Americans with Disabilities Act. National Institute on Disability and Rehabilitation Research. Cornell University. Retrieved February 16, 2014, from http://www.ilr.cornell.edu/extension/files/download/Assistive_Tech.pdf

Definition of Intellectual Disability. (n.d.). Retrieved February 16, 2014, from Aaidd.org: http://aaidd.org/intellectual-disability/definition#.UwCy9vmSxvA

Doka, K. J. (n.d.). Individuals with intellectual disabilities: Struggling with loss and grief. Retrieved February 16, 2014, from http://www.rescarenz.org.nz/Publications & Papers/ciwid.pdf

Dwyer, K. P. (n.d.). Disciplining Students With Disabilities. National Association of School Psychologists. Retrieved February 16, 2014, from http://www.wrightslaw.com/info/discipline.stud.dis.dwyer.pdf

(2011). HOW TO SELECT, ADMINISTER, AND EVALUATE ACCOMMODATIONS FOR INSTRUCTION OF STUDENTS WITH DISABILITIES. Nebraska Department of Education. Retrieved February 16, 2014, from http://www.education.ne.gov/assessment/pdfs/Accommodations_Guidelines_Students_Disabilities_Nov_2011.pdf

Tracy, J. (n.d.). Intellectual disability. Centre for Developmental Health Victoria. Retrieved February 16, 2014, from http://www.cddh.monash.org/assets/documents/intellectual-disability.pdf

Vessel Traffic Management System (VTMS)

Literature Review

The aim of this chapter is to capture the main idea of the research in depth, review the literature related to the study, consider the ideas of various authors on the relevance of the study, and establish the need for the research.

2.1 Evolution of the Vessel Traffic Management System

A vessel traffic management system (VTMS) is a marine traffic monitoring system established by harbor or port authorities. According to TRANSAS (2014), the VTMS utilizes information collected by advanced sensors, for example, radar, AIS, closed-circuit television (CCTV), Meteo-Hydro sensors and other electronic object detection systems. The primary purposes of VTMS are to improve the safety and efficiency of navigation, improve port services, protect life at sea and safeguard the marine environment.

In 1946, a demonstration was carried out in Liverpool to establish the usefulness of a shore-based radar system. The first harbour-controlled radar was installed at the end of Victoria Pier, Douglas, Isle of Man, in 1948. (Hughes, 2009)

With the rapid growth of the marine industry, marine safety and efficient navigation have been identified as issues requiring major consideration. Different methods for improving marine safety have been developed over the past few decades, including radio communications, navigation rules, electronic chart systems and identification systems. (Goralski, Ray, & Gold, 2011)

Goralski et al. (2011) further describe how the most recent technological developments in vessel traffic management include radar, electronic charting such as the Electronic Chart Display and Information System (ECDIS), vessel traffic control and management (VTMS), the automatic identification system (AIS), and communication. Several sources of data, from sensors such as GPS, radar and AIS, are combined in order to improve vessel traffic monitoring. The final objective is to offer a more precise understanding of navigational situations.

Many developed countries utilize the services of highly sophisticated VTMS. The Port of London, one of the UK’s busiest ports, utilizes an exceptionally advanced VTMS in which the data from radars are combined with a mass of other data inside a very advanced computer system. This gives an ongoing picture and a thorough record of all vessel movements at the Port of London. (Goldman, 2011)

2.2 Vessel Traffic Management Systems in Commercial Setting

As described by Goralski, Ray, & Gold (2011), many researchers have presented theories for developing efficient vessel monitoring systems. The need to diminish human error and decrease the number and danger of accidents at sea must be addressed, yet developing such a system for use in real-time situations is a challenging task, and not much research has been done in this area.

The world’s first three-dimensional ECDIS prototype was demonstrated in Brest in 2007, in research led by Dr. Rafal Goralski and his team. It is possible to incorporate data from many sensors around a port to produce a real-time, three-dimensional traffic management visualization tool. (Goralski, Ray, & Gold, 2011)

As stated by Goralski, Ray, & Gold (2011), an interface has been developed and is presently being trialled in the Port of Milford Haven. The system is used in real time for navigation monitoring and control, and is considered to be the first commercial operation of a 3D VTS.

Transas Marine Limited and GeoVS Limited offer 3D vessel traffic monitoring solutions. The Transas Group is a worldwide pioneer in marine navigation systems; Transas presented its initial 3D vessel traffic monitoring system to the industry in 2008. The system gives maximum support to VTS operators. (TRANSAS, 2011)

Sri Lanka’s first home-developed vessel traffic management system was the result of research led by the modelling and simulation group of the University of Colombo School of Computing. The system includes two-dimensional and three-dimensional views of the harbour. The three-dimensional VTMS was established at the Colombo South harbour on 5th August 2013. (UCSC, 2014)

2.3 Need for an Improved Vessel Traffic Management System

The commercial 3D VTMS mentioned above are closed, proprietary and extremely expensive solutions. This raised the need to implement a novel vessel traffic monitoring solution. The Modelling and Simulation Group of the University of Colombo School of Computing therefore developed Sri Lanka’s first home-grown vessel traffic management system, built entirely on free and open-source frameworks (Sandaruwan et al., 2013). The existing solution has limitations: although the real-time movements of ships can be visualised, the path of a moving ship is not continuous.

Goldman (2011) notes that one of the major considerations in improving a VTMS is to enhance the use of the Automatic Identification System (AIS), the objective being to provide more data about vessels’ positions. A further significant aspect of VTMS upgrading has been to increase the continuity and resilience of the vessel display.

Popovich, Christophe, Vasily, Cyril, Tianzhen, & Dmitry (2009) identify some of the important issues to consider in a VTMS: operability, and the accuracy and completeness of vessel positioning and movement. A key problem in estimating a vessel’s location is also addressed: where the estimated location differs from the actual location of the vessel, the system should detect and avoid such circumstances.

2.4 Automatic Identification System (AIS)

Under the SOLAS (Safety of Life at Sea) Convention of the IMO (International Maritime Organization), the Automatic Identification System (AIS) is an automatic system used on ships and other vessels for identifying and locating vessels by electronically exchanging information with other nearby vessels, AIS base stations, and satellites.

AIS plays a vital role in managing vessel traffic and improving maritime security. For vessels engaged on international voyages, AIS is required from a registered tonnage (RT) of 300; for vessels travelling in national waters, it is required from an RT of 500 (SOLAS Chapter V, 2002).

AIS information is classified into two types: static and dynamic. Vessel name, call sign, MMSI number (user ID), IMO number, dimensions and ship type are static information; position, course over ground, speed over ground, true heading and rate of turn are dynamic information (Vesseltracker, n.d.).
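
The static/dynamic split described above can be sketched as two simple records. This is a minimal illustration only; all field names and values below are made up for the example.

```python
# Minimal illustration of the two AIS message classes (hypothetical values).
static_info = {
    "name": "MV EXAMPLE",          # vessel name
    "call_sign": "ABCD1",
    "mmsi": 209123456,             # MMSI number (user ID)
    "imo": 9123456,                # IMO number
    "length_m": 180,               # dimensions
    "beam_m": 28,
    "ship_type": "cargo",
}
dynamic_info = {
    "position": (6.9497, 79.8440),  # latitude/longitude
    "cog_deg": 215.0,               # course over ground
    "sog_knots": 12.3,              # speed over ground
    "heading_deg": 214.0,           # true heading
    "rot_deg_min": 0.5,             # rate of turn
}
```

A receiver would typically store the static record once per vessel and refresh the dynamic record with every report.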

AIS transponders automatically transmit information at regular intervals through a VHF radio integrated with the AIS. The position and speed originate from the ship’s GPS or, if that fails, from another GPS receiver. Other information is entered when the AIS transponder is installed on the ship. (Weatherdock, 2014)

The AIS signals are then received by shore-based facilities such as a VTMS, or by nearby vessels. The received information is used to display ships on two-dimensional marine charts, which helps in observing ship activity and enables ports and coastal states to recognise ships in their waters and regulate vessel movements (Weatherdock, n.d.). In Sri Lanka, receivers located at Colombo and Mirissa receive AIS signals emitted from vessels at Colombo harbour. This information is used to display the vessels on two-dimensional marine charts, where the ships are represented by arrowheads.

2.5 Applications of AIS

There are several uses of AIS data:

To enhance the safety of nautical activities
To safeguard the maritime environment
To support collision avoidance
To manage vessel traffic in busy harbours

2.6 State Estimation Problems

The objective of state estimation is to estimate the states of a dynamic system sequentially, using a set of noisy measurements. Orlande et al. (2012) describe how, in state estimation problems, the available measured data are used together with prior knowledge of the physical phenomena, the estimation being carried out by minimising the error.

State estimation has numerous applications in many fields. Orlande et al. (2012) describe how the position of an aircraft can be found using estimation: the position may be measured with a GPS system and an altimeter, but these measurements are not always accurate. State estimation combines the model predictions and the GPS measurements to obtain more accurate estimates of the aircraft’s position. This idea can be incorporated in the present research, since measurements are available during the course of the ship’s voyage: it is possible to estimate the ship’s location at the points where measurements are missing, and to check whether the estimates are consistent with the measurements.
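
The core idea of combining a model prediction with a noisy measurement can be illustrated by inverse-variance weighting. The sketch below uses hypothetical numbers; the Kalman Filter discussed in the next section applies the same kind of update recursively.

```python
def fuse(pred, var_pred, meas, var_meas):
    """Inverse-variance fusion of a model prediction and a measurement.
    The fused variance is always smaller than either input variance."""
    gain = var_pred / (var_pred + var_meas)   # weight given to the measurement
    estimate = pred + gain * (meas - pred)
    variance = (1.0 - gain) * var_pred
    return estimate, variance

# Hypothetical numbers: the motion model predicts the ship at x = 100 m
# (variance 25 m^2), while GPS reports x = 106 m (variance 16 m^2).
est, var = fuse(100.0, 25.0, 106.0, 16.0)
```

The fused estimate lies between the two sources, closer to the less uncertain one, and its variance is smaller than either input variance.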

2.7 Kalman Filter

The Kalman Filter, also known as linear quadratic estimation, was developed by Rudolf E. Kalman around 1960. Peter Swerling had developed a similar algorithm in 1958, and Richard S. Bucy of the University of Southern California contributed to the theory, which is why it is often called the Kalman–Bucy filter.

As stated by Madhumitha & Aich (2010), the Kalman Filter is a mathematical procedure used to correct observed values that contain inaccuracies and other disturbances, producing values that are nearer to the true values. It is widely used in many military and space applications. The fundamental operation performed by the Kalman Filter is to produce estimates of the true values alongside the measured values; the uncertainty is then calculated, along with a weighted average of the estimated and measured values.
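
A minimal one-dimensional sketch of this predict/correct cycle is given below, assuming a stationary vessel and made-up noise values; a real VTMS filter would track position and velocity jointly.

```python
import random

def kalman_1d(zs, q=0.01, r=4.0):
    """Minimal 1-D Kalman filter with a random-walk model.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = zs[0], r              # initialise from the first measurement
    estimates = []
    for z in zs:
        p = p + q                # predict: uncertainty grows between updates
        k = p / (p + r)          # Kalman gain: weight given to the measurement
        x = x + k * (z - x)      # update: weighted average of estimate and z
        p = (1.0 - k) * p        # uncertainty shrinks after each update
        estimates.append(x)
    return estimates

# Hypothetical scenario: a ship at anchor at position 10.0 m,
# observed through measurements with roughly 2 m standard deviation.
random.seed(0)
zs = [10.0 + random.gauss(0.0, 2.0) for _ in range(50)]
est = kalman_1d(zs)
```

The sequence of estimates settles near the true position even though each individual measurement is noisy, which is exactly the smoothing behaviour described above.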

A considerable amount of literature reports different variants of the Kalman Filter. Discussions such as that by Madhumitha & Aich (2010) present variants including the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). The EKF is an extended variant of the original Kalman Filter in which the requirement of linear equations for the measurement and state-transition models is relaxed; instead, the models may be nonlinear. The UKF is an improved alternative to the EKF for a variety of applications. According to Kandepu, Bjarne, & Lars (2008), the performance of the UKF is better than that of the EKF in terms of robustness and speed of convergence, while the computational effort of the two is almost the same.

Webb, Prazenica, Kurdila & Lind (2007) address the problem of obtaining robust, real-time estimates of aircraft states from a set of measurements. The solution is obtained by implementing an implicit extended Kalman filter, a variation of the classical Kalman Filter, to provide reliable state estimation. The resulting estimates are implicit functions of the aircraft states, the tracked feature points, and the camera parameters.

In research carried out by Freeston (2002), the Kalman Filter has been implemented for robot localisation, the process by which a robot determines its own position in the world in which it operates. Measurements of the robot’s x and y position components and its orientation are available and can be represented by a state vector. To find its position, the robot uses beacon distance and angle measurements together with kinetic data, all of which contain error. The Kalman Filter is one of the better methods of incorporating such measurements into estimates: it recognises that the measurements are noisy, occasionally discards them, and identifies measurements that have only a small effect on the state estimate. The filter smooths out the uneven effects of noise in the state variables being estimated by drawing more information from trustworthy data than from untrustworthy data. The user can supply the error in the data and in the system as inputs to the filter, and the Kalman Filter computes an estimate of the position that takes this noise into account.

The Kalman Filter algorithm can be used to combine measurements from different sources, such as vision measurements and kinetic information, and from different time updates as a robot moves. In addition, the algorithm provides an estimate of the uncertainty of the state variable vector, which is a measure of how accurate the estimate is. This situation is somewhat similar to that addressed in the present research, and the idea can be utilised to obtain better estimates of the state variables by minimising the effect of noisy measurements (Freeston, 2002).

2.8 Particle Filter

The Kalman Filter (KF) has proved tremendously useful, but it makes strict assumptions about linearity and Gaussian noise which are not always satisfied in real-world applications. In such situations the Particle Filter can be used to obtain solutions (Orlande et al., 2012).

The Particle Filter method is a Monte Carlo technique that can be used to obtain the solution of a state estimation problem. Particle filtering methods can be applied in situations which are non-linear and/or non-Gaussian. The Particle Filter is otherwise known as the bootstrap filter, condensation algorithm, interacting particle approximation, or survival of the fittest (Orlande et al., 2012).
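
A minimal bootstrap (sampling-importance-resampling) sketch of the method is given below, with hypothetical noise values. Each cycle propagates the particles, weights them by the measurement likelihood, and resamples them in proportion to their weights ("survival of the fittest").

```python
import math
import random

def bootstrap_filter(zs, n=500, proc_sigma=0.5, meas_sigma=2.0):
    """Minimal bootstrap (SIR) particle filter for a 1-D random walk."""
    particles = [random.gauss(zs[0], meas_sigma) for _ in range(n)]
    estimates = []
    for z in zs:
        # predict: propagate every particle through the process model
        particles = [p + random.gauss(0.0, proc_sigma) for p in particles]
        # weight: Gaussian likelihood of the measurement given each particle
        weights = [math.exp(-0.5 * ((z - p) / meas_sigma) ** 2)
                   for p in particles]
        # resample: draw particles in proportion to their weights
        particles = random.choices(particles, weights=weights, k=n)
        # point estimate: mean of the resampled particle cloud
        estimates.append(sum(particles) / n)
    return estimates

# Hypothetical scenario: a target holding position 5.0, noisy readings.
random.seed(1)
zs = [5.0 + random.gauss(0.0, 2.0) for _ in range(30)]
est = bootstrap_filter(zs)
```

Because nothing in this scheme requires a linear model or Gaussian noise in general, the same loop can accommodate nonlinear process and measurement models by changing the prediction and weighting steps.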

In Karlsson (2005) the Particle Filter is adapted to several positioning and tracking applications, in contrast to approaches built on a linearised model and a Gaussian noise assumption. A method is developed for estimating the position of industrial equipment that works underwater, with data collected from a sonar sensor and a surface direction-finding system using radar readings and sea-chart data. The problem is approached using Bayesian methods, and data collected from maps are used to improve the estimation performance. A real-time application of the Particle Filter, together with hypothesis testing, is presented for a collision-avoidance application.

A situation somewhat similar to the one addressed in this research is discussed by Ceranka & Niedzwiecki (2003). A navigation system for estimating a pedestrian’s position, based on evidence from sources such as GPS, is created using the Particle Filter approach. Although GPS provides accurate information, obstacles such as high buildings, trees and bridges may weaken or reflect the signals, leading to a significant growth in errors or even complete loss of the GPS signal. The Particle Filtering approach is suggested as suitable in this situation for estimating the missing locations and ensuring that the estimates comply with the constraints of the digital map.

2.9 Chapter Summary

In this chapter, past studies and findings presented by various researchers in relation to this research were discussed. The development of vessel traffic management systems (VTMS) up to the present-day commercial systems was described, and the problems associated with VTMS were addressed. Facts about AIS data were then presented, and the chapter discussed solutions for improving VTMS, such as state estimation. The theoretical background of the Kalman Filter was presented as a solution to the state estimation problem, and for the instances where the Kalman Filter is not applicable, the Particle Filter was presented as a better approach.

Strategy Planning and Implementation

Task 1a) The organisation of my choice for discussing the Strategy Planning and Implementation assignment is Pantaloon Retail India Limited. The reasons for choosing this organisation are as follows:

i) I am an ex-employee of Pantaloon Retail India Limited, having been on the rolls of the company for nearly 4 years overseeing the company’s Marketing & Business Operations in the state of Gujarat, India, encompassing 5 Pantaloon Retail lifestyle stores in the cities of Ahmedabad, Baroda, Surat & Rajkot in Gujarat.

ii) Organised retailing is emerging in the Indian sub-continent, with Pantaloon Retail India Limited being the forerunner.

iii) Started from a humble beginning in the late 20th century with a single-outlet operation, today it is a Rs. 10 billion turnover company, with a presence in over 30 cities through a combination of 500 mega stores, super stores and lifestyle stores, and over 20,000 employees.

Task 1b) Stakeholders are the persons or communities (groups of people) who are directly or indirectly associated with an organisation in attaining its objectives and are directly or indirectly affected by the actions, decisions and policies made by the organisation. The stakeholders of a company are its directors, employees, creditors, customers, vendors, government agencies, owners and shareholders. In other words, all the human entities directly or indirectly associated with the organisation are its stakeholders.

In particular, the major stakeholders of my chosen organisation are the Employees, Customers and Vendors.

Retail is a man-intensive industry, and hence the role and importance of teamwork is the essence of providing an international-standard shopping experience to its customers.

Customers are the kings of the retail business. Customers are listened to, obliged, serviced and given primary importance in Pantaloon Retail India Limited. It is believed here that if customers are happy then the company will survive. The company follows M. K. Gandhi’s famous quote: “A customer is the most important visitor on our premises, he is not dependent on us. We are dependent on him. He is not an interruption in our work. He is the purpose of it. He is not an outsider in our business. He is part of it. We are not doing him a favor by serving him. He is doing us a favor by giving us an opportunity to do so.”

Vendors are the bloodline of the organisation. Vendors provide the organisation with the right product in the desired quantity. They also support the organisation in terms of payback periods, giving it an edge in maintaining healthy cash flows for development.

Hence, we observe that these stakeholders are of considerable importance to the organisation’s growth, to combating fierce competition and to meeting customer satisfaction. The specific considerations that the company has for these stakeholders are as follows:

Employees: The employees should be suitably compensated in terms of monetary and other intangible benefits so that a high level of enthusiasm towards work and customer focus is maintained. The happiness and retention of customers can be ensured only with satisfied employees.

Customers: As mentioned above, the customer is the key focal point of Pantaloon; hence all business perspectives should be suitably oriented towards customer needs and wants. The company’s consideration is to provide a value-for-money experience to its customers and to reach them with products wherever and whenever they want.

Vendors: The company considers vendors as partners for growth. The company has taken measurable steps towards assessing them, streamlining procurement procedures and payments, and dealing with grievances. It has gone a step further by incorporating electronic touch points to minimise lengthy procedures and ensure time saving.

Task 1c) Organised retail is aggressive and is becoming more challenging every moment. All the strategies formulated in retail are centred around the most important entity, ‘the customer’. The company has always focused on attracting customers, retaining existing customers and giving customers an experience that makes them come back again and again. The criteria that Pantaloon has focused on are:

1. Cost: The most challenging factor in today’s business. A good earnings-to-cost ratio will ultimately decide the fate of a retail business. No matter what top line or bottom line a company desires, Pantaloon has always focused on a cost strategy to offer maximum benefit to its customers.

2. Market Penetration: After opening its various retail formats in the metro cities of India, the company adopted a cluster development strategy, viz. an 8-city strategy, for market penetration. In this way the company focuses on a cluster of cities for market reach in a step-wise manner.

3. New Product Strategy: The company’s strategy has always been based on ideas that give its customers something new and unique. The company looks to achieve a healthy share of each rupee spent by its customers. Thus the company, which started with garment retailing, has moved into food retail, fast food and speciality cuisine, gaming, the hypermarket segment, home products, e-shopping and the insurance sector.

4. Square-foot Sales: The strategic objective of the company is to seek healthy square-foot sales in order to maintain a good profit margin and attain a healthy top line.

5. Private Brands: One of the most important criteria for attaining the strategic management objective is introducing private-label brands, which ensure a healthy bottom-line margin.

6. Vendor Strategy: Touch-screen, single-point operation for vendor selection, product identification and payment procedures. The company believes vendors and manufacturers are partners in the business, and hence vendor management is of key importance to the company.

Task 1d) Pantaloon Retail India Limited was formed to deliver organised garment retailing in India. The company wanted to blend fashion with affordability. Owing to its fast expansion and growth, the garment traders and the distribution channel (middlemen) also earned huge margins. These were the traders who sourced material from the manufacturers and stored and supplied the merchandise to Pantaloon.

There was a sudden demand from the intermediary channel to raise the margin on the merchandise. Pantaloon used to source 80% of its merchandise of reputed brands from these channels. This sudden rise could not be passed on directly to the customers, as an increase in the price of the final product would have meant losing business; hence the company initially bore this loss. After some time the company sought to discuss price renegotiation with these intermediate channels, failing which the supply of goods was stopped, creating a vacuum in supply. This resulted in poor merchandise and customer complaints, which continued for several months until the company had renegotiated with a new set of intermediary channels, including contacting the manufacturers directly. Through this, the company recognised the necessity of having its own private brands, including manufacturing, and of acquiring the manufacturing set-ups of a few other companies. The company took around 6 months for this consolidation exercise, after which Pantaloon relaunched its retail stores with nearly 80% privately manufactured merchandise.

Task 2) Develop Vision, Mission, Objectives & Measures

a. For your chosen organisation, list down its ethical, cultural, environmental, social and business objectives. How are these influenced by the current business and economic climate?

Pantaloon Retail India Limited, with its multi-retail business in various sectors, has consolidated its operations under the umbrella of the Future Group. The company has laid down and observes the following values to cover its ethical, cultural, environmental, social and business objectives:

Ethical

i) Respect & Humility: Respect for every individual associated with the business, and humility towards all. This value embodies the core people function. The company pays the utmost respect to, listens to, and acts on input from its customers, employees and vendors through various channels. Senior management evaluates and acts on any grievance, comment or suggestion made by customers, employees and vendors.

ii) Openness: To be open and receptive to new ideas, knowledge and information. The company has various platforms for communication with its people, analyses all ideas and comments, and shares its opinion with the stakeholders. For example, the company has a ‘Share With Us’ book placed in all the retail stores, through which a customer can communicate with senior management.

Cultural

i) Valuing & Nurturing Relationships: To build long-term relationships. Business, particularly retail business, is strategic in nature and relies completely on building relationships and nurturing emotions. It is observed that it costs more to attract new customers than to retain and satisfy existing ones.

ii) Simplicity & Positivity: Simplicity in thought, business and actions. Thinking simply yet positively sends positive vibrations into the economy and earns respect from stakeholders.

Environmental

i) Flow: To respect and understand the universal laws of nature. The company follows the natural way of business, respects and adheres to the rules and policies laid down from time to time, and acts according to the current economic conditions, taking relevant steps to satisfy the needs of its stakeholders.

Social

i) Indianness: Confidence in ourselves and in our products. The company respects Indian culture and offers products that suit Indian households.

ii) Adaptability: To be adaptive and flexible in meeting new challenges. The only thing that is constant in the universe is change; hence the company is always open to change and modifies its product lines according to the needs and demands of the present market.

Business Objectives

i) Introspection: Leading to purposeful thinking. The company from time to time conducts a meaningful examination of its entire process in order to audit it and take corrective action where needed. Based on the outcome, the company re-strategises its processes or functions.

ii) Leadership: Leadership in thought and idea, and its application in business. The company believes in being number 1 in whatever business it is in, and works hard to retain that position.

(Source: Pantaloon Retail India Limited website, http://www.pantaloon.com/corporate_state.asp, dated October 26, 2009)

Based on the above, it is clear that Pantaloon Retail India Limited is equipped with a broad range of measures to handle a stressful business environment and any economic changes that may occur. In particular, changes in macro-economic policy are treated with the utmost sincerity, and the needed changes are implemented to overcome them. Thus the recent economic crisis had very little effect on Pantaloon, as the company could foresee the problem and took specific measures to overcome it. The percentage of leased premises relative to its total retail outlets was nearly 65% towards the beginning of 2008, and the company’s outflow on rental expenditure was a large sum of money. The company foresaw the challenge of offering competitive pricing during the era of economic meltdown, and accordingly began acquiring its own premises by creating a special vehicle, Future Capital Holdings, a 100% subsidiary of Pantaloon Retail India Limited. This has brought down not only the rental outflow but has also ensured better offerings to customers, in terms of pricing that is competitive with its rivals.

The four levels of measurements

1. Explain briefly how you would use number properties to describe the four levels of measurement.

Answer: Measurements can be classified into four different types of scales. These are:

Nominal
Ordinal
Interval
Ratio
Nominal scale:

Nominal measurement consists of assigning items to groups or categories. No quantitative information is conveyed and no ordering of the items is implied. Religious preference, race, and sex are all examples of nominal scales. Frequency distributions are usually used to analyse data measured on a nominal scale. Categorical data, and numbers that are simply used as identifiers or names, represent a nominal scale of measurement. The number on the back of a baseball jersey and a social security number are examples of nominal data.

At the nominal scale, i.e., for a nominal category, one uses labels; for example, rocks can be generally categorized as igneous, sedimentary and metamorphic. For this scale some valid operations are equivalence and set membership. Nominal measures offer names or labels for certain characteristics.

The central tendency of a nominal attribute is given by its mode; neither the mean nor the median can be defined.

Ordinal scale:

An ordinal scale is a measurement scale that assigns values to objects based on their ranking with respect to one another. For example, a doctor might use a scale of 0-10 to indicate degree of improvement in some condition, from 0 (no improvement) to 10 (disappearance of the condition).

An ordinal scale of measurement represents an ordered series of relationships or rank order. Individuals competing in a contest may be fortunate to achieve first, second, or third place. First, second, and third place represent ordinal data.

In this scale type, the numbers assigned to objects or events represent the rank order (1st, 2nd, 3rd, etc.) of the entities assessed. An example of ordinal measurement is the result of a horse race, which says only which horses arrived first, second, third, etc., but includes no information about times.

The central tendency of an ordinal attribute can be represented by its mode or its median, but the mean cannot be defined.

Interval scale:

Quantitative attributes are all measurable on interval scales, as any difference between the levels of an attribute can be multiplied by any real number to exceed or equal another difference. A highly familiar example of interval scale measurement is temperature with the Celsius scale. In this particular scale, the unit of measurement is 1/100 of the difference between the melting temperature and the boiling temperature of water at atmospheric pressure. The “zero point” on an interval scale is arbitrary; and negative values can be used. The formal mathematical term is an affine space (in this case an affine line). Variables measured at the interval level are called “interval variables” or sometimes “scaled variables” as they have units of measurement.

Ratios between numbers on the scale are not meaningful, so operations such as multiplication and division cannot be carried out directly. But ratios of differences can be expressed; for example, one difference can be twice another.

The central tendency of a variable measured at the interval level can be represented by its mode, its median, or its arithmetic mean. Statistical dispersion can be measured in most of the usual ways that involve only differences or averaging, such as the range, the inter-quartile range, and the standard deviation. Since one cannot divide, one cannot define measures that require a ratio, such as the studentized range or the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are useful, since the choice of origin is arbitrary and not meaningful. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.

Ratio scale:

The ratio scale of measurement is the most informative scale. It is an interval scale with the additional property that its zero position indicates the absence of the quantity being measured. You can think of a ratio scale as the three earlier scales rolled up in one.

The ratio scale of measurement is similar to the interval scale in that it also represents quantity and has equality of units. However, this scale also has an absolute zero (no numbers exist below the zero).

A ratio scale is a measurement scale in which a certain distance along the scale means the same thing no matter where on the scale you are, and where “0” on the scale represents the absence of the thing being measured.

Most measurement in the physical sciences and engineering is done on ratio scales. Mass, length, time, plane angle, energy and electric charge are examples of physical measures that are ratio scales. The scale type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind. Informally, the distinguishing feature of a ratio scale is the possession of a non-arbitrary zero value. For example, the Kelvin temperature scale has a non-arbitrary zero point of absolute zero, which is denoted 0 K and is equal to -273.15 degrees Celsius. This zero point is non-arbitrary, as the particles that compose matter at this temperature have zero kinetic energy.

All statistical measures can be used for a variable measured at the ratio level, as all necessary mathematical operations are defined. The central tendency of a variable measured at the ratio level can be represented by, in addition to its mode, its median, or its arithmetic mean, also its geometric mean or harmonic mean. In addition to the measures of statistical dispersion defined for interval variables, such as range and standard deviation, for ratio variables one can also define measures that require a ratio, such as studentized range or coefficient of variation.
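
The valid measures of central tendency at each level can be illustrated with Python's statistics module; the data values below are made up for the example.

```python
from statistics import mean, median, mode

# Made-up data at three of the four levels of measurement
rock_types = ["igneous", "sedimentary", "igneous", "metamorphic"]  # nominal
race_places = [1, 2, 2, 3, 1, 3, 3]                                # ordinal
temps_c = [18.5, 21.0, 19.5, 22.0]                                 # interval

typical_rock = mode(rock_types)      # only the mode is defined for nominal data
typical_place = median(race_places)  # median (or mode) is valid for ordinal data
typical_temp = mean(temps_c)         # the mean becomes meaningful at interval level
```

At the ratio level all of these, plus ratio-based measures such as the geometric and harmonic means discussed later, become available.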

2. Define the terms direct measurement and indirect measurement. Describe briefly how you would make use of indirect measurement of psychological traits.

Answer: Two types of measurement techniques have been developed to measure the quality or characteristics of attributes. The first is quantitative and the second is qualitative. Quantitative attributes can be measured directly; qualitative attributes cannot. The height and weight of a person can be measured directly with scales in feet/metres and kilograms, but a qualitative variable cannot be measured with such scales. For example, the kindness, love and intelligence of a person cannot be measured directly; indirect measurement must be used in these cases, through indirect measures such as answers to questions or IQ tests. Indirect measurements are mostly used in social science: richness, happiness, quality of life, poverty and so on can be measured with the support of different indirect indicators.

In order to measure psychological traits we use behaviours as the basis of measurement. The qualities of an individual can be measured indirectly through psychological testing by developing indicators. In a standard psychological test we develop a set of standards, such as a questionnaire or guidance for scoring the attributes or traits. We largely use objective types of questions and interpret the answers according to the scoring guidance. Human behaviour cannot be measured in the way physical quantities such as height and weight are; qualitative aspects like perception, emotion and retention can be measured through indirect measurement, based on a pre-defined set of standards.

3. What will happen if you use ordinal measurements as though they were interval or ratio measurements?

Answer: Ordinal data are non-parametric, while interval and ratio data are parametric, so ordinal measurements should not be treated as interval or ratio measurements; the two differ from each other. To make measurement more reliable, it is important to select the appropriate statistical tools according to the nature of the data. If we apply interval/ratio measures when the data are on an ordinal scale, it may lead to false decisions.

4. Which method, census or sampling, do you prefer for describing the reality of Nepali classroom teaching and learning? Explain in brief.

Answer: The sampling method is more applicable than the census method for describing the reality of Nepali classroom teaching and learning. To study promotion, failure and drop-out rates, the census method can be used; however, for presenting the reality of classroom practice, the census method is not convenient.

Through the census method each and every unit of the population is taken into consideration, but this is highly time- and money-consuming. The sampling method makes the whole process faster and cheaper. When fixing the sample size, inclusion and representation are important, i.e. ethnic group, caste, religion, geographic zone, gender, etc. From an educational perspective, different grades and both private and public schools/colleges should be included, and the sample should be representative.

5. In a group of 50 children, the 8 children who took longer than 3 hours to complete a performance test in the sent-up examination were marked as DNC (did not complete). In computing a measure of central tendency for this distribution of scores, which measure should we use and why?

Answer: The median should be used in computing a measure of central tendency for this distribution of scores. The median, being a positional average, is not affected by extreme or missing values, whereas the arithmetic mean is: here the mean cannot even be computed, since the 8 DNC children have no recorded score. With the median we can still obtain a correct value of central tendency.
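A sketch of the scenario, with hypothetical scores: 50 children, 8 of whom did not complete. The mean is uncomputable without their scores, but since the DNC cases all lie at one extreme of the ordering, the middle positions still fall among the completed scores.

```python
# Hypothetical scores for the 42 children who completed the test.
completed = sorted([52, 55, 58, 60, 61, 63, 64, 66, 68, 70] * 4 + [72, 75])
n_dnc = 8  # DNC children sit at the slow/low end of the ordering

n = len(completed) + n_dnc  # 50 children in total
# Place the 8 unknown DNC scores below every completed score; the two
# middle positions of 50 still land inside the completed scores.
ordered = [None] * n_dnc + completed
median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
print(median)
```

The mean would require all 50 numeric values; the median only needs to know which side of the ordering the incomplete cases fall on.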

6. Give some examples where you need geometric and harmonic mean. Give geometrical interpretation of A.M., G.M. and H.M.

Answer:

Geometric Mean (G.M):

The geometric mean (G.M.) is widely used in averaging ratios and percentages and in computing average rates of increase or decrease. It is also used advantageously in the construction of index numbers, since it gives equal weight to equal ratios of change. It is used to compute the average rate of growth or decline of population, or the average increase or decrease of production, profit, sales, etc. When we need to give more weight to smaller items and less weight to larger ones (e.g. in social and economic problems), the G.M. can be used.

Harmonic Mean (H.M.):

The harmonic mean (H.M.) is used in computing averages relating to rates and ratios, such as velocity and speed, where the time factor is the variable. It is also used in constructing the Human Development Index (HDI).

Geometrical interpretation of A.M., G.M., and H.M.

Draw a semicircle on the segment AB as diameter, with AD = a and DB = b, centre O, and P the topmost point of the semicircle (so OP is a radius perpendicular to AB). Erect a perpendicular to AB at D meeting the circle at Q, and drop a perpendicular from D onto OQ meeting it at M.

Then (a + b)/2 represents the radius of the semicircle.

Hence radius OP = (a + b)/2, which gives the value of the A.M.

Similarly radius OQ = (a + b)/2. Now OD = OB - DB = (a + b)/2 - b = (a - b)/2.

Now DQ² = OQ² - OD² = {(a + b)/2}² - {(a - b)/2}² = ab

Hence DQ = √(ab), which represents the G.M.

Now, in the right-angled triangle ODM, DM² = OD² - OM²

And in the right-angled triangle DMQ, DM² = DQ² - MQ²

Hence OD² - OM² = DQ² - MQ²

Here OQ = (a + b)/2. Let OM = x; then MQ = (a + b)/2 - x, and

{(a - b)/2}² - x² = ab - {(a + b)/2 - x}²

Solving, x = (a - b)² / 2(a + b)

Hence MQ = (a + b)/2 - (a - b)²/2(a + b) = 2ab/(a + b), which represents the H.M.

From the above it is clear that OP = A.M., DQ = G.M. and MQ = H.M.

From the figure, OP > DQ > MQ, with equality only when a = b (i.e. when D coincides with O).

Hence we can say that A.M. ≥ G.M. ≥ H.M.
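The geometric result can be checked numerically. The sketch below (hypothetical values) verifies A.M. ≥ G.M. ≥ H.M. for several pairs, along with the fact that the G.M. is itself the geometric mean of the A.M. and H.M.

```python
# Numerical check of OP >= DQ >= MQ, i.e. AM >= GM >= HM.
import math

def am(a, b): return (a + b) / 2          # radius OP
def gm(a, b): return math.sqrt(a * b)     # perpendicular DQ
def hm(a, b): return 2 * a * b / (a + b)  # segment MQ

for a, b in [(4, 9), (1, 100), (7, 7)]:
    assert am(a, b) >= gm(a, b) >= hm(a, b)
    # GM is also the geometric mean of AM and HM: GM^2 = AM * HM
    assert math.isclose(gm(a, b) ** 2, am(a, b) * hm(a, b))
print(am(4, 9), gm(4, 9), hm(4, 9))
```

Equality holds only when a = b, matching the degenerate case where D coincides with O.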

7. Give geometrical meaning of the formula used for Median and Mode for grouped data.

Answer: Geometrical meaning of the formula used for the Median:

Consider the following continuous frequency distribution (x1 < x2 < … < xn+1):

Class interval: x1-x2, x2-x3, …, xk-xk+1, …, xn-xn+1
Frequency:      f1,    f2,    …, fk,      …, fn

The cumulative frequency distribution is given by:

Class interval:        x1-x2, x2-x3, …, xk-xk+1, …, xn-xn+1
Cumulative frequency:  F1,    F2,    …, Fk,      …, Fn

where Fi = f1 + f2 + … + fi. The class xk-xk+1 is the median class if and only if

Fk-1 < N/2 ≤ Fk.

Now, if we assume that the variate values are uniformly distributed over the median class, which implies that the ogive is a straight line within the median class, then from fig. 1 (with l the lower limit of the median class, h its width, B the point of the ogive at height Fk-1 above l, T the median point on the x-axis and S the foot of the perpendicular from the ogive at the median),

tan θ = (Fk - Fk-1)/h = (N/2 - Fk-1)/BS

i.e. fk/h = (N/2 - Fk-1)/BS

where fk is the frequency and h the magnitude of the median class.

Hence, BS = (N/2 - Fk-1) × h / fk

Hence, Median = OT = OP + PT = OP + BS = l + (N/2 - Fk-1) × h / fk

This is the required formula.
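The derived formula can be applied directly in code. The sketch below uses a hypothetical grouped distribution; the interpolation step is exactly the straight-line ogive assumption above.

```python
# Median of grouped data: l + (N/2 - F_{k-1}) / f_k * h
def grouped_median(classes, freqs):
    """classes: list of (lower, upper) pairs; freqs: matching frequencies."""
    n = sum(freqs)
    cum = 0
    for (l, u), f in zip(classes, freqs):
        if cum + f >= n / 2:               # median class found
            h = u - l
            return l + (n / 2 - cum) / f * h
        cum += f

classes = [(0, 10), (10, 20), (20, 30), (30, 40)]
freqs = [5, 15, 20, 10]                    # N = 50, N/2 = 25
print(grouped_median(classes, freqs))      # 20 + (25 - 20)/20 * 10 = 22.5
```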

Geometrical meaning of the formula used for the Mode:

Let us consider the continuous frequency distribution:

Class interval: x1-x2, x2-x3, …, xk-xk+1, …, xn-xn+1
Frequency:      f1,    f2,    …, fk,      …, fn

If fk is the maximum of all the frequencies, then the modal class is (xk, xk+1).

Let us further consider a portion of the histogram, namely the rectangles erected on the modal class and the two adjacent classes. The mode is the value of x for which the frequency curve has its maximum. Let the modal point be Q (fig. 2), obtained by joining the tops of the modal rectangle to the tops of the adjacent rectangles and projecting their intersection onto the x-axis. Let l be the lower limit of the modal class, h its magnitude, and LM the horizontal distance from l to the mode.

From the figure, we have tan β = (fk - fk-1)/LM

and tan α = (fk - fk+1)/(h - LM)

or, (fk - fk-1)/LM = (fk - fk+1)/(h - LM)

Thus solving for LM, we get

LM = (fk - fk-1) × h / (2fk - fk-1 - fk+1)

Hence, Mode = OQ = OP + PQ = OP + LM

= l + (fk - fk-1) × h / (2fk - fk-1 - fk+1)
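As with the median, the mode formula translates directly into code. A sketch on a hypothetical grouped distribution:

```python
# Mode of grouped data: l + (f_k - f_{k-1}) / (2f_k - f_{k-1} - f_{k+1}) * h
def grouped_mode(classes, freqs):
    k = freqs.index(max(freqs))            # modal class index
    l, u = classes[k]
    h = u - l
    f0 = freqs[k - 1] if k > 0 else 0      # frequency before the modal class
    f1 = freqs[k]
    f2 = freqs[k + 1] if k + 1 < len(freqs) else 0
    return l + (f1 - f0) / (2 * f1 - f0 - f2) * h

classes = [(0, 10), (10, 20), (20, 30), (30, 40)]
freqs = [5, 15, 20, 10]
print(grouped_mode(classes, freqs))        # 20 + (20-15)/(40-15-10) * 10
```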

8. Squaring deviations and then taking the square root seems to be useless. Why do we square?

Answer: Squaring deviations and then taking the square root may seem useless, but it has a definite purpose. The signed deviations (x - x̄) always sum to zero, so squaring removes the drawback of having to ignore the signs of the deviations (as is done in the mean deviation): squared deviations are always positive. Squaring does give a unit that is the square of the unit the quantity is measured in, which is exactly why we then take the square root, returning the standard deviation to the original units. Squaring also makes the measure suitable for further mathematical treatment.
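A short sketch of the argument on hypothetical data: the signed deviations cancel to zero, the squared deviations do not, and the square root brings the result back to the original units.

```python
# Signed deviations sum to zero, so they carry no spread information
# until the signs are removed; squaring removes them smoothly.
xs = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(xs) / len(xs)

signed = sum(x - mean for x in xs)                # always ~0
variance = sum((x - mean) ** 2 for x in xs) / len(xs)
sd = variance ** 0.5                              # back in the units of x
print(signed, variance, sd)
```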

9. Study the following summary statistics of the scores of two grades, VI and VII. Answer the following questions and give figures to support your answers.

a. Which class had the larger number of pupils?

Answer: Grade VI had the larger number of pupils.

b. Which class on the average had the higher scores?

Answer: Grade VII on the average had the higher scores.

c. In which class were the scores more scattered? (Given four different statistics to show the difference in scatter.)

Answer: For Grade VI, the scores are more scattered. Four different measures showing the difference in scatter are as follows:

Interquartile range
Coefficient of S.D.
Coefficient of M.D. from mean
Coefficient of variation
d. Are the distributions of scores about the mean symmetrical? What is your evidence? If not, which class has high scores not balanced by similar low scores?

The distributions of scores about the mean in both classes are not symmetrical, since the condition Mean = Median = Mode is not satisfied for either grade.

In grade VI, since Mean < Median, the distribution is negatively skewed: there is greater variation towards the lower values of the variable.

In grade VII, since Mean > Median > Mode, the distribution is positively skewed: there is greater variation towards the higher values, so grade VII has high scores not balanced by similar low scores.

10. Take one set of data grouped into different frequencies and calculate different measures of central tendency (arithmetic mean, median and mode) and measures of dispersion (Q.D., M.D. and S.D.). Give your judgement about the symmetry of your data.

Answer: Suppose the weights of 75 students of a class are classified into classes of width 10, summarised by N = 75, Σfd′ = -1 and Σfd′² = 89 with assumed mean A = 65.

For the Mean:

Mean = A + (Σfd′/N) × h = 65 + (-1/75) × 10 = 64.87

Hence, Mean = 64.87

For the Median:

The cumulative frequency just exceeding N/2 = 37.5 is reached in the class 60-70, so the median lies in the class 60-70.

Median = l + (N/2 - c.f.)/f × h = 60 + … = 66.2

Hence, Median = 66.2

For the Mode:

Since the maximum frequency occurs in two classes, the given distribution is bimodal, so we use the empirical relation:

Mode = 3 Median - 2 Mean

= 3 × 66.2 - 2 × 64.87

= 198.6 - 129.74 = 68.86

Hence, Mode = 68.86

For the Quartile Deviation:

Position of Q1 = N/4 = 18.75

Hence the first quartile (Q1) lies in the interval 50-60.

Now, Q1 = l + (N/4 - c.f.)/f × h = 50 + … = 57.28

Hence, First Quartile (Q1) = 57.28

Position of Q3 = 3N/4 = 56.25

Hence the third quartile (Q3) lies in the interval 70-80.

Again, Q3 = l + (3N/4 - c.f.)/f × h = 70 + … = 73.62

Hence, Third Quartile (Q3) = 73.62

Now, Q.D. = (Q3 - Q1)/2 = (73.62 - 57.28)/2 = 8.17

Hence, Quartile Deviation (Q.D.) = 8.17

For the Mean Deviation:

Mean deviation from the mean = Σf|x - x̄| / N (the column of |x - x̄| values is omitted here).

Calculation of the Standard Deviation:

Now, N = 75, Σfd′ = -1, Σfd′² = 89

σ = h × √(Σfd′²/N - (Σfd′/N)²)

= 10 × √(89/75 - (-1/75)²)

= 10 × 1.089

= 10.89

To identify Symmetry:

Here, Mean = 64.87, Median = 66.2 and Mode = 68.86. Since these are not equal, the curve is not symmetrical.

Calculation of Skewness:

Sk = (Mean - Mode)/σ = (64.87 - 68.86)/σ ≈ -0.37, which is negative, so the distribution is negatively skewed.
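The skewness figure can be recomputed from the step-deviation sums used above (Σfd′ = -1, Σfd′² = 89, N = 75, h = 10), as a sketch:

```python
# Pearson's first coefficient of skewness, Sk = (Mean - Mode) / SD,
# recomputing the SD from the step-deviation sums given above.
import math

n, sfd, sfd2, h = 75, -1, 89, 10
sd = h * math.sqrt(sfd2 / n - (sfd / n) ** 2)

mean, mode = 64.87, 68.86
sk = (mean - mode) / sd
print(round(sd, 2), round(sk, 2))  # negative Sk => negatively skewed
```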

The Decision-Making Process at Toyota

The minor assessment is centred around Toyota’s annual report. Each student is expected to submit a case report, based also on the analysis of relevant background readings in addition to the case study itself, addressing the following issues:

· Explain what is meant by the term “decision-making” and analyse it in connection with the concepts of risk and uncertainty.
· Discuss the decision-making process at Toyota.
· Briefly analyse the automotive industry and explain how its dynamics influence Toyota’s managers in making decisions.
· Apply forecasting models to the Toyota case study (e.g. provide a 2-year moving average graph using sales data).

Table of contents

Introduction: The decision-making process

Risks and uncertainties in decision-making process

Case study: Decision-making process at Toyota

Automotive industry analysis

Influence of automotive industry in Toyota’s decision making process

Financial analysis

Forecasting model: 2-year moving average graph

Forecasting model: weighted moving average and exponential smoothing

Conclusion: Toyota heading towards Sustainable Growth

References and Sources

Introduction: The decision-making process

We can define decision-making as a conscious human process, involving both individual and social phenomena: an ongoing process of evaluating alternatives for meeting an objective, and of selecting the course of action most likely to result in attaining that objective.

The decision-making process allows us to raise our vision beyond our immediate concerns and, in turn, to evaluate our existing beliefs and actions in a new light in order to make an important and useful decision. Achieving an objective requires action leading to a desired outcome. In theory, how one proceeds should inevitably affect what one achieves, and in turn this should affect future actions.

Risks and uncertainties in decision-making process

The ability of a firm to absorb, transfer, and manage risk is critical in management’s decision-making process when risky outcomes are involved. This will often define management’s risk appetite and help to determine, once risks are identified and quantified, whether risky outcomes may be tolerated. For example, many financial risks can be absorbed or transferred through the use of a hedge, while legal risks might be mitigated through unique contract language. If managers believe that the firm is suited to absorb potential losses in the event the negative outcome occurs, they will have a larger appetite for risk given their capabilities to manage it.

Managing uncertainty in decision-making relies on identifying, quantifying, and analyzing the factors that can affect outcomes. This enables managers to identify likely risks and their potential impact.

Decision makers are used to assessing risk because decision-making is usually associated with some degree of risk taking, but not all outcomes are easily assessed. Some unknown outcomes may not previously have been seen or experienced, and so they are uncertain. In theory an outcome may have a low probability of occurring, but if it did happen it could be troublesome.

It is therefore important for every company, especially in ever-changing and competitive markets, to deal with risks using an ever-better decision-making process. All decisions are ultimately taken by individuals, so the strategy for risk avoidance is tied to a personal reference point, and it is fundamental nowadays for big corporations to have excellent people in this area. The importance of the risk analysis will depend on the objectives of the decision, the skills and needs of the decision-maker, and the role of the decision within the organization. A wise approach to decision-making might seek contributions from different angles: the importance placed on data analysis, management skills, organizational awareness, and custom and practice in the assessment of risk is vital. In this field, without any doubt, Toyota is one of the finest players in the market, with a top-notch decision-making process.

Case study: Decision-making process at Toyota

Automotive industry analysis

The worldwide automotive market is highly competitive. Toyota faces intense competition from automotive manufacturers in the markets in which it operates. Although the global economy continues to recover gradually, competition in the automotive industry has further intensified amid difficult overall market conditions. In addition, competition is likely to intensify further due to continuing globalization in the worldwide automotive industry, possibly resulting in further industry reorganization. Factors affecting competition include product quality and features, safety, reliability, fuel economy, the amount of time required for innovation and development, pricing, customer service and financing. Increased competition may lead to lower vehicle unit sales, which may result in further downward price pressure and adversely affect Toyota’s financial condition and results of operations. Toyota’s ability to respond adequately to the recent rapid changes in the automotive market and to maintain its competitiveness will be fundamental to its future success in existing and new markets and to maintaining its market share. There can be no assurance that Toyota will be able to compete successfully in the future; that is the risk connected with every business activity. Through these uncertainties Toyota has to navigate with top-notch management.

Each of the markets in which Toyota competes has been subject to considerable volatility in demand, so the risk is becoming even higher year after year affecting all business decisions.

Demand for vehicles depends on social, political and economic conditions in a given market and the introduction of new vehicles and technologies.

As Toyota’s revenues are derived from sales in markets worldwide, economic conditions in those markets are particularly important to Toyota. In Japan, the economy gradually recovered due to increasing personal consumption and last-minute demand ahead of the increase in the consumption tax. In the United States, the economy has seen a constant gradual recovery mainly due to increasing personal consumption, and the European economy has shown signs of recovery too. In the meantime, growth in emerging markets slowed down due to weakening emerging-market currencies, interest-rate increases intended to protect the local currencies, and political instability in some nations. The shifts in demand for automobiles are continuing, and it is unclear how this situation will evolve in the future.

Influence of automotive industry in Toyota’s decision making process

Toyota’s future success depends on its ability to offer new, innovative and competitive products that meet customer demand on a timely basis. Continuous innovation is part of its corporate DNA, ensuring that tomorrow’s Toyota is even better than today’s.

Toyota’s current management structure is based on the structure introduced in April 2011. In order to fulfill the Toyota Global Vision, Toyota reduced the Board of Directors and decision-making layers, changing the management process from the ground-up, facilitating rapid management decision-making.

In April 2013, Toyota made organizational changes with the goal of further increasing the speed of decision making by clarifying responsibilities for operations and earnings.

In detail, Toyota divided the automotive business into the following four units: Lexus International (Lexus business); Toyota No. 1 (North America, Europe and Japan); Toyota No. 2 (China, Asia & the Middle East, East Asia & Oceania, Africa, Latin America & the Caribbean); and Unit Center (engine, transmission and other “unit”-related operations).

Meeting customer demand by introducing attractive new vehicles and reducing the amount of time required for product development are critical to automotive producers. In particular, it is critical to meet customer demand with respect to quality, safety and reliability. The timely introduction of new vehicle models, at competitive prices, meeting rapidly changing customer preferences and demand is more fundamental to Toyota’s success than ever, as the automotive market is rapidly transforming in light of the changing global economy.

Toyota has to be ready for every eventuality in this ever-changing global economy, and its managers weigh such eventualities every year. Within a managerial decision-making context, a risk might be viewed as the chance of a negative outcome for a decision which has a possible uncertainty element, usually on the downside.

Financial Analysis

In terms of finances, the carmaker boosted its profit forecast for the current fiscal year ending March, expecting net income to rise to 2.0 trillion yen ($16.97 billion, 14.7 billion euros). It also said revenue would come in at 26.5 trillion yen. Toyota Motor Corporation had revenues for the full year 2014 of 25.692 trillion yen, 16.44% above the prior year’s results.

Regarding the competition between Toyota, Volkswagen and Ford, the top players in the market, Toyota shows on average a positive trend. Moreover, Toyota has posted its highest income since 2009.

Forecasting model: 2-year moving average graph

Year | Sales | 2-yr moving average | Error
2006 | 21036909.00 | – | –
2007 | 23948091.00 | – | –
2008 | 26289240.00 | 22492500 | 3796740
2009 | 20529570.00 | 25118665.5 | -4589095.5
2010 | 18950973.00 | 23409405 | -4458432
2011 | 18993688.00 | 19740271.5 | -746583.5
2012 | 18583653.00 | 18972330.5 | -388677.5
2013 | 22064192.00 | 18788670.5 | 3275521.5
2014 | 25691911.00 | 20323922.5 | 5367988.5
2015 (forecast) | – | 23878051.5 | –
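The 2-year moving-average figures can be reproduced from the sales column (figures in millions of yen, from Toyota’s consolidated revenue history):

```python
# Each forecast averages the two preceding years' sales; the error is
# actual minus forecast.
sales = {2006: 21036909, 2007: 23948091, 2008: 26289240, 2009: 20529570,
         2010: 18950973, 2011: 18993688, 2012: 18583653, 2013: 22064192,
         2014: 25691911}

forecasts = {y: (sales[y - 1] + sales[y - 2]) / 2 for y in range(2008, 2016)}
errors = {y: sales[y] - forecasts[y] for y in range(2008, 2015)}
print(forecasts[2015])   # 23878051.5
print(errors[2008])      # 3796740.0
```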

Forecasting model: weighted moving average and exponential smoothing

We could instead use different methods. The simple moving average does not take into consideration the weight, or relative importance, that each observation has. To overcome this issue we can adopt the “weighted moving average method” and the “exponential smoothing method”.

Using the “weighted moving average method”, I take into consideration the 3 most recent years, which I consider the most important. The value of the weights is based on the percentage growth in each year.

Year | Sales | Weight (3-yr weighted moving avg) | % growth
2006 | 21036909.00 | – | –
2007 | 23948091.00 | – | 12%
2008 | 26289240.00 | – | 9%
2009 | 20529570.00 | – | -28%
2010 | 18950973.00 | – | -8%
2011 | 18993688.00 | – | 0%
2012 | 18583653.00 | 2% | -2%
2013 | 22064192.00 | 26% | 16%
2014 | 25691911.00 | 72% | 14%
2015 (forecast) | 24606538.9 | 100% | –
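The weighted-moving-average forecast can be reproduced from the last three years of sales and the chosen weights (2%, 26% and 72%, summing to 100%):

```python
# Weighted moving average: more recent years carry larger weights.
sales = {2012: 18583653, 2013: 22064192, 2014: 25691911}
weights = {2012: 0.02, 2013: 0.26, 2014: 0.72}   # sum to 1.00

forecast_2015 = sum(sales[y] * weights[y] for y in sales)
print(round(forecast_2015, 1))   # 24606538.9
```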

Using the weighted moving average we obtain a better forecast. However, a still better forecast can be obtained with the exponential smoothing method.

Here I take into consideration all of the years from 2008 onwards. It turns out that the smoothing factor is quite high, 0.99.

Year | Sales | Forecast with smoothing factor
2006 | 21036909 | –
2007 | 23948091 | –
2008 | 26289240 | –
2009 | 20529570 | –
2010 | 18950973 | –
2011 | 18993688 | –
2012 | 18583653 | –
2013 | 22064192 | –
2014 | 25691911 | –
2015 (forecast, exponential smoothing) | – | 25691729.6
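A sketch of simple exponential smoothing on the same sales series. The initialisation and alpha value here are assumptions (the report's exact setup is not reproduced), but the key behaviour is visible: an alpha near 1 makes the forecast track the most recent actuals, which is why the report's 2015 figure lies so close to 2014 sales.

```python
# Simple exponential smoothing: F_t = alpha * A_{t-1} + (1 - alpha) * F_{t-1}
sales = [21036909, 23948091, 26289240, 20529570, 18950973,
         18993688, 18583653, 22064192, 25691911]          # 2006-2014

def exp_smooth_forecast(series, alpha):
    forecast = series[0]                  # initialise with the first actual
    for actual in series[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast                       # one-step-ahead forecast for next year

print(round(exp_smooth_forecast(sales, 0.99)))
print(round(exp_smooth_forecast(sales, 0.5)))
```

With alpha = 0.99 the forecast sits very close to the 2014 actual; with alpha = 0.5 older, lower sales years drag the forecast down.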

I took 0.99 as my alpha because in this particular case a higher alpha means that recent history carries more weight in the forecast calculation. As shown on page 26/68, I took into consideration Toyota’s Consolidated Performance (U.S. GAAP).

I think the last method is the most appropriate for producing the most realistic forecast for the next year.

Of course the calculation has been made “ceteris paribus”, so everything is assumed to stay the same next year; but as shown before, this particular market is subject to constant change. For this reason, and because of random error, the actual figure could be higher or lower, but we can clearly see a positive trend in Toyota’s business.

Thanks to the tireless efforts of all concerned, today Toyota’s group can take pride in the strengths of its management practices and culture. Even its president is convinced that they are now in a position to take a definitive step forward toward sustainable growth.

Conclusion: Toyota heading towards Sustainable Growth

So is Toyota heading towards sustainable growth? What is the engine for sustainable growth? Toyota has learned from experience that it can achieve sustainable growth only if it manages to create great cars that bring smiles, and only if it fosters the human resources needed to make this a reality. At the same time, ever-better cars can be produced only through the efforts of employees on the front line. Individuals must take ownership of their work and place the utmost emphasis on local manufacturing, swift decision making and immediate action. As the company continues to grow, however, tasks that were once routine may become increasingly difficult to perform. As I see it, Toyota’s current situation is particularly critical, as it is now entering another expansion phase. This is a really important moment for Toyota. Because of the risks associated with the future, Toyota should continue to seek perfection in its manufacturing, and especially in its management, where the decision-making process plays a fundamental part.

References and sources

For further readings…

– Ken Segall, Insanely Simple, the obsession that drives Apple’s success, Published by Portfolio Trade, 2013

– Robbins, De cenzo & Coulter, Fundamentals of management, Global edition, 8th Edition, Pearson Higher Education, (2014 version)

– Burns and Stalker, The Management of Innovation, Tavistock Publications, London, 1961

Some internet sites…

ADAPT OR DIE, by John S. McCallum – Ivey Business Journal about management [accessed November 18, 2014] http://iveybusinessjournal.com/topics/strategy/adapt-or-die#.VGvDZDSG_ng


Fear of Crime Survey Results

Data

The data set analysed consisted of results from residents (N = 300) who participated in the 2014 Gold Coast Community Survey on fear of crime and the factors associated with individual perceptions of what contributes to that fear. The data gathered from the survey comprise groups of categorical variables covering fear, demographic characteristics, news and information, and community characteristics. Fear and news and information are each captured by a single variable with multiple values, whereas demographic and community characteristics are each represented by several individual variables, again with multiple values. Demographic characteristics include gender, age, income and education level. Community characteristics include collective efficacy and social cohesion. A detailed description of the data set, including values, is shown in Table 1. The primary focus of this analysis is to determine the association between fear and the other factors; fear is therefore the categorical dependent variable and the remaining variables are independent variables.

Table 1
Sub-sample sizes and frequencies of variables (N = 300)

Variable | n | % of variable
Age
  15–24 | 20 | 6.7
  25–54 | 56 | 18.7
  55–64 | 49 | 16.3
  65+ | 175 | 58.3
Gender
  Male | 130 | 43.3
  Female | 170 | 56.7
Income
  Under 50k | 136 | 45.3
  Above 50k | 164 | 54.7
Highest Level of Education Completed
  Year 11 or 12 or equivalent | 171 | 57.0
  Degree | 87 | 29.0
  Higher Degree | 42 | 14.0
Primary Source of News and Information
  Television | 190 | 63.3
  Radio | 23 | 7.7
  Print | 52 | 17.3
  Internet | 30 | 10.0
  Other | 5 | 1.7
Collective Efficacy
  Low | 80 | 26.7
  Moderate | 148 | 49.3
  High | 72 | 24.0
Social Cohesion
  Low | 70 | 23.3
  Moderate | 153 | 51.0
  High | 77 | 25.7

Methods

To determine whether there is an association between fear of crime and the various factors that could influence each individual’s perceptions, a chi-square r × c test for independence was conducted on the assembled data. This test was chosen because all the variables used are categorical (nominal or ordinal) with multiple values, which meets the test’s first two assumptions: that the variables are categorical, and that there are two or more of them. The other primary assumption, that the expected frequency should not drop below five in more than 25% of the cells of a contingency table, was also met: only two cells (4.55%) fell below an expected count of five, with a minimum of 2.08, which does not exceed 25% of the cells.

Results

A chi-square r × c test for independence was performed to examine the association between fear of crime and the various factors contributing to each participant’s perceptions. Multiple variables were examined; the significant findings are discussed here, with further detail in Table 2. Within the age variable, 48% of participants over the age of 65 were fearful of crime, compared with 2.3% of participants aged 55–64, 4.7% of participants aged 25–54, and 3.3% of participants aged 15–24. The relation between the dependent variable (fearful / not fearful) and age was significant (χ²(3, N = 300) = 106.59, p < .001). Cramer’s V was .596, meaning approximately 35% of the variance in the frequencies of fear can be explained by the variance in age. Within the news and information variable, 46.7% of participants perceived that television increased fear of crime, compared with 3.3% for radio, 7.0% for print, 1.3% for the internet and 0% for other sources. The relation between the dependent variable and news and information was also significant (χ²(4, N = 300) = 59.39, p < .001). Cramer’s V was .445, meaning approximately 20% of the variance in the frequencies of fear can be explained by news and information. Both the age variable and the news and information variable showed markedly stronger associations with fear of crime than the other variables, particularly the community characteristics. Further detailed results are shown in Table 2.

Table 2
Results of chi-square tests on variables associated with fear of crime

Variable | df | χ² | p value | V | Variance %
Age | 3 | 106.59 | <.001 | .596 | 35%
Gender | 1 | 8.27 | .004 | .166 | 3%
Income | 1 | 0.74 | .388 | -.050 | .25%
Schooling | 2 | 16.00 | <.001 | .231 | 5%
News and Information | 4 | 59.39 | <.001 | .445 | 19%
Collective Efficacy | 2 | 18.16 | <.001 | .246 | 6%
Social Cohesion | 2 | 19.63 | <.001 | .256 | 6%
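The chi-square statistic and Cramer's V used throughout can be computed from scratch on any contingency table. The sketch below uses a small hypothetical fear-by-gender table (not the survey's raw data, which is not reproduced here):

```python
# Chi-square test of independence and Cramer's V on an r x c table.
import math

def chi_square(table):
    rows, cols = len(table), len(table[0])
    row_tot = [sum(r) for r in table]
    col_tot = [sum(table[i][j] for i in range(rows)) for j in range(cols)]
    n = sum(row_tot)
    chi2 = 0.0
    for i in range(rows):
        for j in range(cols):
            expected = row_tot[i] * col_tot[j] / n  # under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    v = math.sqrt(chi2 / (n * (min(rows, cols) - 1)))  # Cramer's V
    return chi2, v

# Rows: fearful / not fearful; columns: male / female (hypothetical counts).
chi2, v = chi_square([[30, 70], [100, 100]])
print(round(chi2, 2), round(v, 3))
```

Squaring Cramer's V gives the shared-variance percentages reported in Table 2 (e.g. .596 squared is about .355, i.e. 35%).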

Conclusion

The variables age and news and information both have a significant association with fear of crime within the Gold Coast community. Addressing the research questions, the preceding data demonstrate that demographic characteristics and news and information are both related to residents’ fear of crime; the answers to research questions one and two are therefore that a relation exists. For the third research question, concerning the relationship between community characteristics and residents’ fear of crime, the data show a slight relationship, but it is not as strong as for the other variables. It is therefore suggested that strategies address residents’ fear of crime by focusing on the factor of age and on the production of news and information about crime, in order to alter perceptions.

Survey of Satisfaction with College Facilities

Assignment 1
1. Plan for collection of Primary data and secondary data

Primary data are collected directly from the field, i.e. first-hand; secondary data are collected from some other source, i.e. second-hand. In this problem, primary data can be collected through interviews with the students and staff of the college. A questionnaire will be prepared and filled in from the responses of these individuals; the database will be built from the information they provide, and thus the primary data will be collected.

In the case of secondary data, the data could be collected from any organization or department which collects school/college data, or from a journal or another researcher.

2. Present the survey methodology and sampling frame used

There are different areas of the college which are used both by the students and by the staff. The questionnaire is prepared considering those areas along with a few others. The survey will then be conducted on that basis, with a sample selected randomly from the students and the staff. Since one has to plan a survey methodology, the first thing to do is to identify the sample members. For this purpose a total of 50 individuals may be selected from 70 students and 30 staff, taking 50% from each group (that is, 35 + 15 = 50). The interview method will be used for the collection of data, and data on the satisfaction level of each individual will be collected on different variables. Once the sampling units are finalized, the sampling frame needs to be defined: the sampling frame is basically the area or space from which the sample is drawn. Here one has to check whether all units in the population are available to the sample; the lists of students and staff must be representative of all classes and segments of the college.

The level of satisfaction will be coded as 5= very good, 4= good, 3= average, 2= bad, 1= very bad, in five categories, following Likert scaling.

3. Design a questionnaire to know the opinion of students and staff on the matter
Gender: ………
Origin: ………
Age: ………

Rate each item as: Very good / Good / Average / Bad / Very bad

1. How satisfied are you with the overall infrastructure of the college?
2. How satisfied would you be if laundry facilities were available?
3. How satisfied are you with the hostel facilities of the college?
4. How satisfied are you with the gym?
5. How satisfied are you with the parking facility of the college?
6. How satisfied are you with the toilet facilities?
7. How satisfied are you with the structure of the labs in the college?

4. Information for decision making by summarizing data using representative values

The data collected after the survey were recoded per the Likert scaling and are shown below; each row is one respondent and the columns correspond to questions 1–7 of the questionnaire.

Questions (columns Q1–Q7):
Q1. How satisfied are you with the overall infrastructure of the college?
Q2. How satisfied would you be if laundry facilities were available?
Q3. How satisfied are you with the hostel facilities of the college?
Q4. How satisfied are you with the gym?
Q5. How satisfied are you with the parking facility of the college?
Q6. How satisfied are you with the toilet facilities?
Q7. How satisfied are you with the structure of the labs in the college?

Respondent | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7
1 | 4 | 2 | 2 | 5 | 2 | 2 | 4
2 | 5 | 4 | 4 | 2 | 3 | 4 | 2
3 | 3 | 2 | 2 | 4 | 2 | 2 | 4
4 | 5 | 1 | 1 | 2 | 4 | 3 | 4
5 | 4 | 2 | 2 | 1 | 2 | 5 | 4
6 | 1 | 4 | 2 | 2 | 1 | 2 | 4
7 | 2 | 2 | 4 | 4 | 2 | 4 | 2
8 | 5 | 3 | 2 | 2 | 4 | 5 | 5
9 | 3 | 5 | 1 | 3 | 2 | 2 | 4
10 | 4 | 2 | 2 | 5 | 3 | 2 | 2
11 | 4 | 4 | 4 | 4 | 5 | 4 | 4
12 | 5 | 5 | 2 | 5 | 2 | 4 | 2
13 | 2 | 2 | 3 | 4 | 4 | 1 | 4
14 | 4 | 2 | 2 | 2 | 2 | 2 | 4
15 | 5 | 4 | 4 | 4 | 4 | 3 | 4
16 | 3 | 4 | 2 | 4 | 2 | 2 | 4
17 | 2 | 1 | 1 | 4 | 1 | 1 | 2
18 | 4 | 2 | 2 | 5 | 2 | 2 | 5
19 | 5 | 3 | 4 | 4 | 4 | 3 | 4
20 | 1 | 2 | 2 | 2 | 2 | 2 | 2
21 | 4 | 1 | 3 | 4 | 3 | 1 | 2
22 | 5 | 2 | 5 | 4 | 5 | 2 | 2
23 | 2 | 3 | 2 | 4 | 2 | 3 | 4
24 | 5 | 2 | 4 | 4 | 4 | 4 | 2
25 | 4 | 1 | 2 | 2 | 5 | 4 | 3
26 | 5 | 2 | 4 | 5 | 2 | 4 | 5
27 | 4 | 3 | 2 | 4 | 4 | 2 | 2
28 | 3 | 4 | 1 | 2 | 2 | 3 | 4
29 | 5 | 4 | 2 | 1 | 1 | 5 | 5
30 | 4 | 4 | 4 | 2 | 2 | 2 | 2
31 | 3 | 4 | 2 | 4 | 5 | 4 | 2
32 | 4 | 2 | 3 | 2 | 2 | 2 | 4
33 | 5 | 3 | 5 | 3 | 4 | 4 | 4
34 | 3 | 2 | 2 | 2 | 2 | 2 | 1
35 | 5 | 4 | 4 | 4 | 1 | 1 | 2
36 | 4 | 3 | 5 | 2 | 2 | 2 | 3
37 | 3 | 2 | 2 | 1 | 4 | 4 | 2
38 | 4 | 4 | 2 | 2 | 2 | 2 | 1
39 | 5 | 4 | 4 | 4 | 3 | 3 | 2
40 | 3 | 2 | 4 | 2 | 5 | 5 | 3
41 | 5 | 5 | 1 | 3 | 4 | 2 | 2
42 | 4 | 4 | 2 | 5 | 5 | 2 | 1
43 | 5 | 2 | 3 | 4 | 4 | 1 | 2
44 | 3 | 4 | 2 | 2 | 2 | 2 | 3
45 | 5 | 4 | 1 | 4 | 4 | 2 | 4
46 | 4 | 4 | 2 | 4 | 4 | 1 | 2
47 | 5 | 4 | 3 | 4 | 4 | 2 | 4
48 | 3 | 2 | 2 | 4 | 4 | 4 | 2
49 | 5 | 5 | 4 | 2 | 3 | 2 | 3
50 | 4 | 4 | 3 | 5 | 3 | 3 | 5

5. For analytical purposes the variables are denoted as below:

Question                                                                   Variable
How much satisfied are you on the overall infrastructure of the college?   Overall infrastructure
How much satisfied will you be if laundry facilities are available?        Laundry facilities
How much satisfied are you with the hostel facilities of the college?      Hostel facilities
How much satisfied are you with the gym?                                   Gym
How much satisfied are you with the parking facility of the college?       Parking facility
How much satisfied are you on the toilet facilities?                       Toilet facilities
How much satisfied are you on the structure of labs in the college?        Structure of labs

Variable                 Mean
Overall infrastructure   3.88
Laundry facilities       3.00
Hostel facilities        2.66
Gym                      3.26
Parking facility         3.00
Toilet facilities        2.70
Structure of labs        3.06
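These means can be reproduced without a spreadsheet; as a cross-check, a short Python sketch (the data rows are transcribed from the table in section 4, one tuple per respondent in question order):

```python
# Cross-check of the section-5 means from the recoded survey responses.
rows = [
    (4,2,2,5,2,2,4),(5,4,4,2,3,4,2),(3,2,2,4,2,2,4),(5,1,1,2,4,3,4),(4,2,2,1,2,5,4),
    (1,4,2,2,1,2,4),(2,2,4,4,2,4,2),(5,3,2,2,4,5,5),(3,5,1,3,2,2,4),(4,2,2,5,3,2,2),
    (4,4,4,4,5,4,4),(5,5,2,5,2,4,2),(2,2,3,4,4,1,4),(4,2,2,2,2,2,4),(5,4,4,4,4,3,4),
    (3,4,2,4,2,2,4),(2,1,1,4,1,1,2),(4,2,2,5,2,2,5),(5,3,4,4,4,3,4),(1,2,2,2,2,2,2),
    (4,1,3,4,3,1,2),(5,2,5,4,5,2,2),(2,3,2,4,2,3,4),(5,2,4,4,4,4,2),(4,1,2,2,5,4,3),
    (5,2,4,5,2,4,5),(4,3,2,4,4,2,2),(3,4,1,2,2,3,4),(5,4,2,1,1,5,5),(4,4,4,2,2,2,2),
    (3,4,2,4,5,4,2),(4,2,3,2,2,2,4),(5,3,5,3,4,4,4),(3,2,2,2,2,2,1),(5,4,4,4,1,1,2),
    (4,3,5,2,2,2,3),(3,2,2,1,4,4,2),(4,4,2,2,2,2,1),(5,4,4,4,3,3,2),(3,2,4,2,5,5,3),
    (5,5,1,3,4,2,2),(4,4,2,5,5,2,1),(5,2,3,4,4,1,2),(3,4,2,2,2,2,3),(5,4,1,4,4,2,4),
    (4,4,2,4,4,1,2),(5,4,3,4,4,2,4),(3,2,2,4,4,4,2),(5,5,4,2,3,2,3),(4,4,3,5,3,3,5),
]
names = ["overall infrastructure", "laundry", "hostel", "gym",
         "parking", "toilet", "labs"]
# zip(*rows) transposes the respondent rows into the seven question columns.
means = [sum(col) / len(col) for col in zip(*rows)]
for name, m in zip(names, means):
    print(f"{name}: {m:.2f}")
```

Running this reproduces the means in the table above (3.88, 3.00, 2.66, 3.26, 3.00, 2.70, 3.06).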

The mean for "Overall infrastructure" is 3.88, which indicates that on average respondents rate the overall infrastructure as almost "Good".

When asked "How much satisfied will you be if laundry facilities are available?", the mean response is 3, i.e. "Average". This is quite sensible because this facility does not yet exist in the college.

Regarding "Hostel facilities", the mean response (2.66) is below average, which indicates an urgent need to improve this sector.

Satisfaction with the gym (3.26) is slightly above "Average": better than "Average" but short of "Good".

For the "parking facility" the satisfaction level is exactly "Average" (3.00), which indicates that there is scope to improve this sector.

The satisfaction level with the "toilet facilities" (2.70) is below average, which also requires the urgent attention of the college authority.

The structure of the labs (3.06) likewise requires some attention.

6. Drawing valid conclusions based on information derived from the survey

Laundry

The diagram above shows that 36% rate the laundry facility "Good", while an equal 36% rate it "Bad"; only 8% say "Very Good" and 8% "Very Bad". A roughly symmetrical situation is observed here; it seems that the service provider pays good attention only to selected individuals.

Hostel facilities

In the case of the hostel, 46% say "Bad", which is a matter of concern; the mean value indicates the same urgency. At the same time, 24% (the second-largest share) say "Good", which may indicate that some portion of the hostel is in better condition than the rest. A further 6% describe their accommodation as "Very Good".

Gym

Regarding the gym, which is yet to be established, 48% are in favour of it: 42% rate it as a "Good" facility and 6% as "Very Good".

Parking

The diagram above shows that 38% rate the parking facility "Bad", while 30% rate it "Good"; only 8% say "Very Bad" and 12% "Very Good".

Toilet

In the case of the toilets, 44% say "Bad", which is also a matter of concern; the mean value indicates the same urgency. At the same time, 22% say "Good", which may indicate that conditions are better in places, and 8% even say "Very Good".

Lab

In the case of the labs, which relate most directly to education, 46% are in favour: 36% rate them as a "Good" facility and 10% as "Very Good".
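The percentages quoted in this section come from simple frequency counts of each column; for example, for the laundry question (values transcribed from the second response column of the data table):

```python
from collections import Counter

# Percentage breakdown for the laundry question; labels follow the
# Likert recoding used in this report.
labels = {1: "Very Bad", 2: "Bad", 3: "Average", 4: "Good", 5: "Very Good"}
laundry = [2,4,2,1,2,4,2,3,5,2,4,5,2,2,4,4,1,2,3,2,
           1,2,3,2,1,2,3,4,4,4,4,2,3,2,4,3,2,4,4,2,
           5,4,2,4,4,4,4,2,5,4]

counts = Counter(laundry)
pct = {labels[k]: 100 * counts[k] / len(laundry) for k in sorted(counts)}
print(pct)  # "Good" and "Bad" both come out at 36%
```

The same counting reproduces the breakdowns quoted for the other facilities.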

7. Trend lines

Now, as per the given question, the trend lines have to be created in the spreadsheet graphs. For this purpose the intercept is set to zero (0), and the equation is then shown along with the scatter plot and the trend line.

Here the first variable, "Overall infrastructure", is considered the dependent variable, and the other six are independent variables. Taking each independent variable separately, the trend line is created along with its graph.

Case 1. Overall infrastructure and Satisfaction on Laundry:

As shown in the graph, the required equation is Y = 0.732X.

Case 2. Overall infrastructure and Satisfaction on Hostel:

The required equation is Y = 0.659X.

Case 3. Overall infrastructure and Satisfaction on Gym:

The required equation is Y = 0.787X.

Case 4. Overall infrastructure and Satisfaction on Parking facility:

The equation here is Y = 0.740X.

Case 5. Overall infrastructure and Satisfaction on Toilet facility:

Here the equation is Y = 0.656X.

Case 6. Overall infrastructure and Satisfaction on Structure of Labs:

Here the equation is Y = 0.735X.
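These zero-intercept equations can be checked without a spreadsheet: with the intercept forced to zero, the least-squares slope is Σxy / Σx². Note that the quoted slopes are reproduced when the overall-infrastructure score is placed on the x-axis; for Case 1 (columns transcribed from the data table):

```python
# Trend line with the intercept fixed at zero: slope b = sum(x*y) / sum(x*x).
overall = [4,5,3,5,4,1,2,5,3,4,4,5,2,4,5,3,2,4,5,1,
           4,5,2,5,4,5,4,3,5,4,3,4,5,3,5,4,3,4,5,3,
           5,4,5,3,5,4,5,3,5,4]
laundry = [2,4,2,1,2,4,2,3,5,2,4,5,2,2,4,4,1,2,3,2,
           1,2,3,2,1,2,3,4,4,4,4,2,3,2,4,3,2,4,4,2,
           5,4,2,4,4,4,4,2,5,4]

def slope_through_origin(x, y):
    # Least-squares slope of y = b*x (regression forced through the origin).
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

b = slope_through_origin(overall, laundry)
print(f"Y = {b:.3f}X")  # Y = 0.732X, matching Case 1
```

The remaining cases follow by substituting the other response columns for `laundry`.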

9. Business Report

All the equations are formed by considering the variables separately. In each equation, if the value of X is given, the estimated value of Y is obtained by solving the equation with a simple calculation. The dependent variable is "Overall infrastructure", which in effect indicates whether there is really any need to refurbish the whole college or not. This dependent variable depends on the several other issues/factors considered as independent variables.

So, taking care of, or giving attention to, those areas will actually help the project decide whether or not this should be done. The above analysis tells which areas need the utmost attention and which areas are acceptable for now. Based on the analysis, one can say that two issues, the toilets and the hostel, need to be addressed seriously.

Assignment 2
Question No.1

X = scores of a market survey regarding the acceptability of a new product launched by a company

Frequency table with a class interval of 5:

Class interval   Frequency
5-10             2
10-15            0
15-20            1
20-25            1
25-30            6
30-35            2
35-40            10
40-45            8
45-50            2

Mean, variance and standard deviation

Mean: x̄ = (Σ fᵢxᵢ) / N, where N = Σ fᵢ
Variance: σ² = (Σ fᵢxᵢ²) / N − x̄²
Standard deviation: σ = square root of the variance

Here xᵢ is the mid value of the class interval. The following table is constructed for the required calculations.

Class interval   Mid value (xᵢ)   Frequency (fᵢ)
5-10             7.5              2
10-15            12.5             0
15-20            17.5             1
20-25            22.5             1
25-30            27.5             6
30-35            32.5             2
35-40            37.5             10
40-45            42.5             8
45-50            47.5             2
Total            Σxᵢ = 247.5      N = Σfᵢ = 32

Here:

Mid value (xᵢ)   Frequency (fᵢ)   xᵢ²       fᵢxᵢ    fᵢxᵢ²
7.5              2                56.25     15      112.5
12.5             0                156.25    0       0
17.5             1                306.25    17.5    306.25
22.5             1                506.25    22.5    506.25
27.5             6                756.25    165     4537.5
32.5             2                1056.25   65      2112.5
37.5             10               1406.25   375     14062.5
42.5             8                1806.25   340     14450
47.5             2                2256.25   95      4512.5

Totals: Σxᵢ = 247.5, N = Σfᵢ = 32, Σxᵢ² = 8306.25, Σfᵢxᵢ = 1095, Σfᵢxᵢ² = 40600

Mean: x̄ = Σfᵢxᵢ / N = 1095/32 = 34.22

Variance: σ² = (40600/32) − (34.22)² = 97.83

Standard deviation: σ = √97.83 ≈ 9.89
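The grouped-data calculation above can be verified with a few lines of Python (using the unrounded mean throughout, the standard deviation comes out at 9.89):

```python
from math import sqrt

# Grouped-data mean, variance and standard deviation for the frequency
# table above (class mid values x_i with frequencies f_i).
mid  = [7.5, 12.5, 17.5, 22.5, 27.5, 32.5, 37.5, 42.5, 47.5]
freq = [2, 0, 1, 1, 6, 2, 10, 8, 2]

n        = sum(freq)                                   # N = 32
mean     = sum(f * x for f, x in zip(freq, mid)) / n   # 1095 / 32
variance = sum(f * x * x for f, x in zip(freq, mid)) / n - mean ** 2
sd       = sqrt(variance)

print(round(mean, 2), round(variance, 2), round(sd, 2))  # 34.22 97.83 9.89
```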

Score corresponding to the 50th percentile

The 50th percentile is also the median. The data set has to be written in increasing order:

8, 8, 18, 25, 26, 26, 27, 27, 29, 30, 32, 35, 36, 37, 38, 39,
39, 39, 40, 40, 40, 40, 41, 41, 42, 43, 44, 44, 45, 45, 48, 49

There are 32 observations in all, so there are two middle values (the 16th and 17th). The average of those two middle values is the value corresponding to the 50th percentile, i.e. the median. Since both middle values are 39, their average is also 39.

So, it can be said that the score 39 corresponds to the 50th percentile.
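The same median can be obtained directly with Python's statistics module, which for an even number of observations averages the two middle values, exactly as done above:

```python
import statistics

scores = [8, 8, 18, 25, 26, 26, 27, 27, 29, 30, 32, 35, 36, 37, 38, 39,
          39, 39, 40, 40, 40, 40, 41, 41, 42, 43, 44, 44, 45, 45, 48, 49]

# With 32 observations, the median is the mean of the 16th and 17th values.
print(statistics.median(scores))  # 39.0
```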

Calculate the location of the third quartile.

Rewriting the data set in increasing order:

8, 8, 18, 25, 26, 26, 27

Statistical Analysis on Crime Rate in Nigeria

CHAPTER TWO

2.1 INTRODUCTION: In this chapter we are going to review some research work which has been carried out.

Crime is one of the continuous problems that bedevil the existence of mankind. Since the early days, crime has been a disturbing threat to man's person, property and lawful authority (Louis et al., 1981). Today, in the modern complex world, the situation is highly disturbing. Crime started in the primitive days as a simple and less organized issue, and has ended up today as very complex and organized. The existence of crime and its problems have therefore spanned the history of mankind. Nigeria has one of the most alarming crime rates in the world (Uche, 2008 and Financial, 2011). Cases of armed robbery attacks, pickpocketing, shoplifting and 419 fraud have increased due to increased poverty among the population (Lagos, undated). In the year 2011, armed robbers killed at least 12 people, and possibly more, in attacks on a bank and a police station in North-Eastern Nigeria (Nossiter, 2011). However, Maritz (2010) has considered that image to be mere exaggeration. He added that, as is the case in the rest of the world, Nigeria's metropolitan areas have more problems with crime than the rural areas, and that most crimes are purely a result of poverty. Despite the fact that crime is inevitable in society (Durkheim, 1933), various controlling and preventive measures have been taken, and are still being taken, to reduce the menace. However, crime control and prevention are still bedeviled by numerous complex problems. When an opportunity for crime is blocked, an offender has several alternative types of displacement (Gabor, 1978). Nevertheless, the introduction of modern scientific and technical methods in crime prevention and control has proved to be effective. The application of multivariate statistics has made contributions to many criminological explanations (Kpedekpo and Arya, 1981 and Printcom, 2003).

Principal Component Analysis (PCA) is very useful in crime analysis because of its robustness in data reduction and in determining the overall criminality in a given geographical area. PCA is a data analysis tool that is usually used to reduce the dimensionality (number of variables) of a large number of interrelated variables while retaining as much of the information (variation) as possible. The computation of PCA reduces to an eigenvalue-eigenvector problem, performed on either a correlation or a covariance matrix. If some group of measures constitutes the scores of numerous variables, researchers may wish to combine the scores of the numerous variables into a smaller number of super-variables to form the group of measures (Jolliffe, 2002). This situation arises, for example, in determining the relationship between socio-economic factors and crime incidence. PCA uses the correlation among the variables to develop a small set of components that empirically summarise that correlation. In a study examining the statistical relationship between crime and socio-economic status in Ottawa and Saskatoon, PCA was employed to replace a set of variables with a smaller number of components made up of inter-correlated variables, representing as much of the original data set as possible (Exp, 2008). Principal component analysis can also be used to determine overall criminality: when the first eigenvector shows approximately equal loadings on all variables, the first PC measures the overall crime rate. In Printcom (2003), for 1997 US crime data, the overall crime rate was determined from the first PC, and the same result was achieved by Hardle and Zdenek (2007) for the 1985 US crime data. The second PC, which is interpreted as a "type of crime" component, successfully classified the seven crimes into violent and property crime.
Usman et al. (2012) carried out research on 'An Investigation on the Rate of Crime in Sokoto Using Principal Component Analysis'. From the results, three principal components were retained from seven, using the scree plot and loading plot, indicating that correlation exists between crimes against persons and crimes against property. Yan Fang (2011) used multivariate methods to analyse crime data in Los Angeles communities; from the findings, principal component analysis was successfully applied to the data by extracting five PCs out of the 15 original variables, which implies a great dimensionality reduction. In addition, these five PCs retained 85% of the variance of the original dataset, so not much information was lost. Shehu et al. (2009) researched the analysis of crime data using principal component analysis: a case study of Katsina State. The paper considers the average of eight major crimes reported to the police for the period 2006-2008. The crimes consist of robbery, auto theft, house and store breakings, theft, grievous hurt and wounding, murder, rape, and assault. A correlation matrix and principal component analysis were developed to explain the correlation between the crimes and to determine the distribution of the crimes over the Local Government Areas of the State.

2.2 Classification of Crime

The classification of crime differs from one country to another. In the United States, the Federal Bureau of Investigation tabulates the annual crime data as Uniform Crime Reports (UCR). Violations of laws which derive from common law are classified as Part I (index) crimes in UCR data, further categorized as violent or property crimes. Part I violent crimes include murder and criminal homicide (voluntary manslaughter), forcible rape, aggravated assault, and robbery, while Part I property crimes include burglary, arson, larceny/theft, and motor vehicle theft. All other crimes count as Part II crimes (Wiki/Cr. 2009). In Nigeria, the police classification of crime likewise depends on what the law prescribes. In the Nigeria Police Abstract of Statistics (NPACS), offences are categorized into four main categories:

i. Offences against persons include: manslaughter, murder and attempted murder, assault, rape, child stealing, grievous hurt and wounding, etc.

ii. Offences against property include: armed robbery, house and store breakings, forgery, theft/stealing, etc.

iii. Offences against lawful authority include: forgery of currency notes, gambling, breach of the peace, bribery and corruption, etc.

iv. Offences against local acts include: traffic offences, liquor offences, etc.

2.3 Causes of Crimes

Criminal behaviour cannot be explained by a single factor, because human behaviour is a complex interaction between genetic, environmental, social, psychological and cultural factors. Different types of crimes are committed by different types of people, at different times, in different places, and under different circumstances (Danbazau, 2007). Here we discuss some of the causes of crime:

Biogenetic factors: Some criminologists are of the opinion that criminal activity is due to the effect of biologically caused or inherited factors (Pratt and Cullen, 2000). According to Lombroso (1911), a criminal is born, not made; criminals are the products of a genetic constitution unlike that found in the non-criminal population.

Social and environmental factors (Sutherland, 1939): The environment is said to play a significant role in determining criminal behaviour. Factors within the environment that most influence criminal behaviour include poverty, unemployment, corruption, urbanization, family, moral decadence, poor education, technology, child abuse, drug trafficking and abuse, and architectural or environmental design. Oyebanji (1982) and Akpan (2002) have attributed the current crime problem in Nigeria to urbanisation, industrialisation and lack of education. Kutigi (2008) has said that the factors behind crime in Nigeria are poverty and ignorance, which is also the opinion of many Nigerians (Azaburke, 2007). In another dimension, according to Ayoola (2008), lack of integrity, transparency and accountability in the management of public funds, especially at all levels of government, has been identified as the factor responsible for the endemic corruption that has eaten deep into the fabric of Nigerian society over the years.

2.4 The Nigerian Police

The most important aspect of the criminal justice system is the police. The criminal justice system can be defined as the procedure for processing a person accused of committing a crime, from arrest to the final disposal of the case (Danbazau, 2007). However, for the past three decades there has been serious dissatisfaction and public criticism over the conduct of the police (Danbazau, 2007). What, then, are the causes of the police failure in preventing and controlling crime? Many factors can be attributed to the problem: inadequate manpower, equipment and professionalism (Danbazau, 2007), corruption (Al-Ghazali, 2004) and poor public perception of the Nigeria Police (Okeroko, 1993), which has consequently made the Nigerian public unwilling to cooperate with the police in crime prevention and control.

2.5 Statistics of Crimes in Nigeria

Nigeria has one of the highest crime rates in the world. Murder often accompanies even minor burglaries. Rich Nigerians live in high-security compounds. Police in some states are empowered to "shoot on sight" violent criminals (Financial Times, 2009). In the 1980s, serious crime grew to nearly epidemic proportions, particularly in Lagos and other urbanized areas characterised by rapid growth and change, by stark economic inequality and deprivation, by social disorganisation, and by inadequate government services and law enforcement capabilities (Nigeria, 1991). Annual crime rates fluctuated at around 200 per 100,000 population until the early 1960s and then steadily increased to more than 300 per 100,000 by the mid-1970s. Available data from the 1980s indicated a continuing increase. Total reported crime rose from almost 211,000 in 1981 to between 330,000 and 355,000 during 1984-85. The British High Commission in Lagos cited more than 3,000 cases of forgery annually (Nigeria, 1991). In the early 1990s, the number of robberies grew from 1,937 in 1990 to 2,419 in 1996, before declining to 2,291 in 1999. Throughout the 1990s, assault and theft constituted the larger categories of crime. Overall, reported crime grew from 244,354 in 1991 to 289,156 in 1993 (Cleen, 1993) and then declined from 241,091 in 1994 to 167,492 in 1999 (Cleen, 2003). The number of crimes declined slightly to 162,039 in 2006, a reduction of 8 percent from 2005 (Cleen, 2006).

2.6 Principal Component Analysis Theories

Having a large number of variables in a study makes it difficult to decipher patterns of association. Variables sometimes tend to repeat themselves; repetition is a sign of multicollinearity, meaning that the variables may be presenting some of the same information. Principal components analysis simplifies multivariate data in that it reduces the dimensionality of the data. It does so by using mainly the primary variables to explain the majority of the information provided by the data set. Analysis of a smaller number of variables always makes for a simpler process.

Simply stated, in principal components analysis we take linear combinations of all of the original variables so that we may reduce the number of variables from p to m, where the number m of principal components is less than p. Further, the method allows us to take the principal components and use them to gain information about the entire data set via the correlation between the principal components and the original variables. Matrices of correlations or loadings matrices show which principal component each variable is most highly associated with. The first principal component is determined by the linear combination that has the highest variance. Variance measures the diffusion of the data.

After the first principal component is obtained, we must determine whether or not it provides a sufficient amount of or all of the information displayed by the data set. If it does not provide adequate information, then the linear combination that displays the highest variance accounted for after the first principal component’s variation is removed is designated as the second principal component. This process goes on until an ample amount of information/variance is accounted for. Each principal component accounts for a dimension and the process continues only on the remaining dimensions. Designating a dimension as a principal component often reveals information about correlations between remaining variables which at first was not readily available.

The main objective of principal components analysis is to locate the linear combinations yᵢ = ℓᵢᵀx with the greatest variance. We want Var(yᵢ) = ℓᵢᵀΣℓᵢ, where Σ is the covariance matrix, to be the maximum among all normalized coefficient vectors ℓᵢ (ℓᵢᵀℓᵢ = 1). This result is achieved by way of Lagrange multipliers. Taking the partial derivative with respect to ℓᵢ of Var(yᵢ) − λ(ℓᵢᵀℓᵢ − 1), where λ is the Lagrange multiplier, results in the equation

(Σ − λᵢI)ℓᵢ = 0,

where ℓᵢ is not equal to the zero vector. From the above equation it can easily be verified that λᵢ is a characteristic root of Σ and that λᵢ is equal to the variance of yᵢ, where λ₁ > λ₂ > … > λₚ are the characteristic roots. Note that they are positive. The characteristic vector corresponding to λ₁, the root that accounts for the maximum variance, is ℓ₁. The percentage of variance that any particular principal component accounts for can be calculated by dividing the variance of that component by the sum of all the variances, i.e.

λᵢ / (λ₁ + λ₂ + … + λₚ) × 100%.

We use the high correlations between the principal components and the original variables to define which components we will utilize and which ones we will discard. One device that assists us in this decision process is a scree plot. Scree plots are graphs of the variance (eigenvalue) of each principal component in descending order. A point called an “elbow” is designated. Below this point is where the graph becomes somewhat horizontal. Any principal components whose variances lie above this point are kept and the others are discarded. The original variables that are highly correlated with each principal component that is kept determine what the label of that particular component will be.
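The procedure described above can be sketched in a few lines of NumPy; the small data matrix here is purely illustrative (not crime data): eigendecompose the correlation matrix, sort the characteristic roots in descending order, and report the proportion of variance each component accounts for.

```python
import numpy as np

# Illustrative data matrix: 8 observations of 3 variables.
X = np.array([[2.0,  4.1, 1.0],
              [3.0,  6.2, 0.8],
              [4.0,  7.9, 1.5],
              [5.0, 10.3, 1.1],
              [6.0, 12.1, 2.0],
              [7.0, 13.8, 1.7],
              [8.0, 16.2, 2.4],
              [9.0, 17.9, 2.1]])

R = np.corrcoef(X, rowvar=False)        # correlation matrix of the variables
eigvals, eigvecs = np.linalg.eigh(R)    # characteristic roots and vectors
order = np.argsort(eigvals)[::-1]       # sort roots in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

proportion = eigvals / eigvals.sum()    # share of variance per component
print(proportion)                       # first entry is the largest share
```

A scree plot is simply these eigenvalues drawn in the order printed; components above the "elbow" are retained.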

Methods of Risk Analysis and Management

RISK ANALYSIS METHODS

Risk management can be divided into four steps: risk identification, risk assessment, risk control, and risk records. In recent years, studies have mostly focused on risk assessment. Risk assessment analyzes and measures the size of risks in order to provide information for risk control. Four steps are included in the risk assessment:

1. According to the results of risk identification, build an appropriate mathematical model.
2. Through expert surveys, historical records, extrapolation, etc., obtain the necessary basic information or data, and then choose appropriate mathematical methods to quantify the information.
3. Choose proper models and analysis methods to deal with the data, and adjust the models according to the specific circumstances.
4. Determine the size of risks according to certain criteria.

In risk assessment, extrapolation, subjective estimation, probability distribution analysis and other methods are used to obtain the basic data or information. Further data analysis often uses the following theories and methods: the analytic hierarchy process, fuzzy logic analysis, Monte Carlo simulation, grey system theory, artificial neural networks, fault tree analysis, Bayesian theory, influence diagrams and Markov process theory.

We can divide the methods into qualitative analysis and quantitative analysis.

Qualitative analysis:

1. Fault Tree Analysis

Fault Tree Analysis (FTA) can be used for qualitative analysis of risk and can also be used for quantitative analysis. It is mainly used for reliability and safety analysis of large-scale complicated systems, and it is an effective method for unifying reliability and safety analysis across hardware, software, environment and human factors. FTA draws out the various possibilities of failure in a system failure analysis, from the whole down to its parts, according to a tree structure. Using the tree form, the failures of the components and of the composed system are connected.

Fault trees are used in both qualitative and quantitative risk analysis. The difference between them is that a quantitative fault tree must be well structured and requires the same rigorous logic as a formal fault tree, while a qualitative fault tree does not. Fault tree analysis starts from the event that is hoped not to happen (called the top event) and analyzes, one level down from the top event, the direct causes of that event (called lower events); according to the logical relationships between the upper and lower levels, the analysis results are obtained.
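For the quantitative use of a fault tree, the top-event probability is obtained by combining basic-event probabilities through the gates. A minimal sketch, assuming independent basic events and purely illustrative probabilities:

```python
# AND gate: all inputs must fail; OR gate: at least one input fails
# (probabilities assume independent basic events).
def gate_and(*p):
    prob = 1.0
    for x in p:
        prob *= x
    return prob

def gate_or(*p):
    prob = 1.0
    for x in p:
        prob *= (1.0 - x)
    return 1.0 - prob

# Hypothetical top event = (A AND B) OR C, with illustrative probabilities
# P(A)=0.1, P(B)=0.2, P(C)=0.05.
p_top = gate_or(gate_and(0.1, 0.2), 0.05)
print(p_top)  # ≈ 0.069
```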

2. Event Tree Analysis

Event tree analysis (ETA), also known as decision tree analysis, is another important method of risk analysis. Given an initiating event in a system, it analyzes the series of outcomes that the event may cause, and thus evaluates the possible behaviour of the system. An event tree lays out all possible ways and means by which an initial event can develop. Every branch point of the event tree (except the top incident) represents a measure implemented to prevent accidents, and each has binary outcomes (success or failure). The event tree thus illustrates the accident sequence groups leading from the initiating incident to the various consequences. Through the intermediate steps in each accident sequence group, the complex relationship between the initiating incident and the risk-reduction measures can be organized, the accident sequence groups identified, and the probability of each key sequence of events calculated.
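The sequence probabilities of a simple event tree are obtained by multiplying along each branch. A sketch with two protective barriers and illustrative (hypothetical) failure probabilities:

```python
# Initiating event frequency and the failure probabilities of two barriers
# (illustrative numbers only; branches assumed independent).
p_init = 0.01
p_fail_1, p_fail_2 = 0.1, 0.2

sequences = {
    "both barriers succeed":          p_init * (1 - p_fail_1) * (1 - p_fail_2),
    "barrier 1 ok, barrier 2 fails":  p_init * (1 - p_fail_1) * p_fail_2,
    "barrier 1 fails, barrier 2 ok":  p_init * p_fail_1 * (1 - p_fail_2),
    "both barriers fail":             p_init * p_fail_1 * p_fail_2,
}
for name, p in sequences.items():
    print(f"{name}: {p:.4f}")
# The four sequence probabilities sum back to the initiating frequency (0.01).
```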

3. Cause-Consequence Analysis

Cause-consequence analysis (CCA) is a combination of fault tree analysis and event tree analysis. It combines cause analysis (fault tree analysis) with consequence analysis (event tree analysis). CCA aims to identify the chains of events leading to unexpected consequences; according to the probabilities of occurrence of the different events in the CCA diagram, the probabilities of the different results can be calculated, and the risk level of the system thereby determined.

4. Preliminary Risk Analysis

Preliminary risk analysis, or hazard analysis, is a qualitative technique which involves a disciplined analysis of the event sequences which could transform a potential hazard into an accident. In this technique, the possible undesirable events are identified first and then analyzed separately. For each undesirable event or hazard, possible improvements or preventive measures are then formulated.

This method provides a basis for determining hazard categories and which analysis methods are most suitable. It has proved valuable in working surroundings where activities lacking safety measures can be readily identified.

5. Hazard and Operability studies (HAZOP)

The HAZOP technique originated in the early 1970s with Imperial Chemical Industries Ltd. HAZOP was first defined as the application of a formal, systematic, critical examination of the process and engineering intentions of new or existing facilities, to assess the hazard potential that arises from deviations from design specifications and their consequential effects on the facilities as a whole.

This technique is usually performed using a set of guide words: NO/NOT, MORE OF/LESS OF, AS WELL AS, PART OF, REVERSE, and OTHER THAN. From these guide words, a scenario that may result in a hazard or an operational problem is identified. Considering possible flow problems in a process line, the guide word MORE OF corresponds to a high flow rate, while LESS OF corresponds to a low flow rate. The consequences of the hazard and measures to reduce the frequency with which the hazard will occur are then discussed. This technique is widely accepted in the process industries and is generally regarded as an effective tool for plant safety and operability improvements. Detailed procedures on how to perform the technique are available in the relevant literature.

Quantitative Analysis:
Fault Tree Analysis

This was explained under the qualitative analysis above.

Expected value

Expected value is each possible outcome multiplied by the probability of its occurrence, summed over all outcomes. An expected value shows the yield that can be anticipated from a target in a business.
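As a worked example (all figures illustrative): if a venture yields 100 with probability 0.3, 50 with probability 0.5, and −20 with probability 0.2, the expected value is the probability-weighted sum:

```python
# Expected value = sum of (probability x outcome) over all outcomes.
outcomes      = [100, 50, -20]
probabilities = [0.3, 0.5, 0.2]

expected_value = sum(p * x for p, x in zip(probabilities, outcomes))
print(round(expected_value, 2))  # 51.0
```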

Sensitivity analysis

Sensitivity analysis shows how the outcome changes in response to a change in a particular variable. One can obtain results for optimistic, most likely and pessimistic values. An example of inputs for sensitivity analysis is material and labour costs, which can fluctuate considerably.
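A minimal sensitivity sketch (all figures illustrative): hold revenue and labour cost fixed and vary only the material cost over optimistic, most likely and pessimistic values to see how profit responds:

```python
# One-variable sensitivity analysis on a simple profit model.
revenue, labour = 1000.0, 300.0

# Vary only the material cost; everything else is held fixed.
material_scenarios = {"optimistic": 350.0, "most likely": 400.0, "pessimistic": 500.0}

profit = {name: revenue - labour - material
          for name, material in material_scenarios.items()}
print(profit)  # profit swings from 350 down to 200 as material cost rises
```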