The Decision-Making Process at Toyota

The minor assessment is centred around Toyota's annual report. Each student is expected to submit a case report, based on the analysis of relevant background readings in addition to the case study itself, addressing the following issues:

· Explain what is meant by the term "decision-making" and analyse it in connection with the concepts of risk and uncertainty.
· Discuss the decision-making process at Toyota.
· Briefly analyse the automotive industry and explain how its dynamics influence Toyota's managers in making decisions.
· Apply forecasting models to the Toyota case study (e.g. provide a 2-year moving average graph using sales data).

Table of contents

Introduction: The decision-making process

Risks and uncertainties in the decision-making process

Case study: Decision-making process at Toyota

Automotive industry analysis

Influence of the automotive industry on Toyota's decision-making process

Financial analysis

Forecasting model: 2-year moving average graph

Forecasting model: weighted moving average and exponential smoothing

Conclusion: Toyota heading towards Sustainable Growth

References and sources

Introduction: The decision-making process

Decision-making can be defined as a conscious human process, involving both individual and social phenomena: an ongoing evaluation of alternatives for meeting an objective, culminating in the selection of the course of action most likely to attain that objective.

The decision-making process allows us to raise our vision beyond our immediate concerns and, in turn, to evaluate our existing beliefs and actions in a new light so as to make an important and useful decision. Achieving an objective requires action leading to a desired outcome. In theory, how one proceeds should inevitably affect what one achieves, and this in turn should affect future actions.

Risks and uncertainties in decision-making process

The ability of a firm to absorb, transfer, and manage risk is critical in management’s decision-making process when risky outcomes are involved. This will often define management’s risk appetite and help to determine, once risks are identified and quantified, whether risky outcomes may be tolerated. For example, many financial risks can be absorbed or transferred through the use of a hedge, while legal risks might be mitigated through unique contract language. If managers believe that the firm is suited to absorb potential losses in the event the negative outcome occurs, they will have a larger appetite for risk given their capabilities to manage it.

Managing uncertainty in decision-making relies on identifying, quantifying, and analyzing the factors that can affect outcomes. This enables managers to identify likely risks and their potential impact.

Decision makers are used to assessing risk because decision-making is usually associated with some degree of risk taking, but not all outcomes are easily assessed. Some outcomes may not previously have been seen or experienced, and so they are uncertain. In theory an outcome may have a low probability of occurring, yet be very troublesome if it does occur.

It is therefore important for every company, especially in ever-changing and competitive markets, to deal with risk through an ever-better decision-making process. All decisions are ultimately taken by individuals, so the strategy for risk avoidance is tied to a personal reference point, and it is fundamental nowadays for big corporations to employ highly capable people in this area. How much weight risk analysis carries will depend on the skills and needs of the decision-maker, the role of the decision within the organization, and the objectives of the decision. A wise approach to decision-making seeks contributions from different angles: data analysis, management skills, organizational awareness, and custom and practice in the assessment of risk are all vital. In this field Toyota is without doubt one of the finest players in the market, with a top-notch decision-making process.

Case study: Decision-making process at Toyota

Automotive industry analysis

The worldwide automotive market is highly competitive. Toyota faces intense competition from automotive manufacturers in the markets in which it operates. Although the global economy continues to recover gradually, competition in the automotive industry has further intensified amid difficult overall market conditions. In addition, competition is likely to intensify further due to continuing globalization in the worldwide automotive industry, possibly resulting in further industry reorganization. Factors affecting competition include product quality and features, safety, reliability, fuel economy, the amount of time required for innovation and development, pricing, customer service and financing. Increased competition may lead to lower vehicle unit sales, which may result in further downward price pressure and adversely affect Toyota's financial condition and results of operations. Toyota's ability to respond adequately to the recent rapid changes in the automotive market, and to maintain its competitiveness, will be fundamental to its future success in existing and new markets and to maintaining its market share. There can be no assurance that Toyota will be able to compete successfully in the future; that is the risk connected with every business activity, and it is through top-notch management that Toyota has to deal with these uncertainties.

Each of the markets in which Toyota competes has been subject to considerable volatility in demand, so the risk grows year after year and affects all business decisions.

Demand for vehicles depends on social, political and economic conditions in a given market and the introduction of new vehicles and technologies.

As Toyota's revenues are derived from sales in markets worldwide, economic conditions in those markets are particularly important to Toyota. In Japan, the economy gradually recovered due to increasing personal consumption and last-minute demand ahead of the consumption-tax increase. In the United States, the economy has seen a steady gradual recovery, mainly driven by personal consumption, and the European economy has shown signs of recovery too. In the meantime, growth in emerging markets slowed due to weakening currencies, interest-rate increases intended to protect the local currencies, and political instability in some nations. The shifts in demand for automobiles are continuing, and it is unclear how this situation will evolve in the future.

Influence of the automotive industry on Toyota's decision-making process

Toyota's future success depends on its ability to offer new, innovative and competitive products that meet customer demand on a timely basis. Continuous innovation is part of its corporate DNA, ensuring that tomorrow's Toyota is even better than today's.

Toyota’s current management structure is based on the structure introduced in April 2011. In order to fulfill the Toyota Global Vision, Toyota reduced the Board of Directors and decision-making layers, changing the management process from the ground-up, facilitating rapid management decision-making.

In April 2013, Toyota made organizational changes with the goal of further increasing the speed of decision-making by clarifying responsibilities for operations and earnings.

In detail, Toyota divided the automotive business into the following four units: Lexus International (Lexus business); Toyota No. 1 (North America, Europe and Japan); Toyota No. 2 (China, Asia and the Middle East, East Asia and Oceania, Africa, Latin America and the Caribbean); and Unit Center (engine, transmission and other "unit"-related operations).

Meeting customer demand by introducing attractive new vehicles and reducing the amount of time required for product development are critical to automotive producers. In particular, it is critical to meet customer demand with respect to quality, safety and reliability. The timely introduction of new vehicle models, at competitive prices, meeting rapidly changing customer preferences and demand is more fundamental to Toyota’s success than ever, as the automotive market is rapidly transforming in light of the changing global economy.

Toyota has to be ready for every eventuality in this ever-changing global economy, and its managers weigh these eventualities every year. Within a managerial decision-making context, a risk can be viewed as the chance of a negative outcome for a decision that carries an element of uncertainty, usually on the downside.

Financial Analysis

In terms of finances, the carmaker boosted its profit forecast for the current fiscal year ending March, expecting net income to rise to 2.0 trillion yen ($16.97 billion, 14.7 billion euros). It also said revenue would come in at 26.5 trillion yen. Toyota Motor Corporation had revenues of ¥25.692 trillion for the full year 2014, 16.44% above the prior year's result.

Regarding the competition between Toyota, Volkswagen and Ford, the top players in the market, Toyota shows a positive trend on average. Moreover, Toyota has posted its highest income since 2009.

Forecasting model: 2-year moving average graph

year | sales | 2-yr moving average | error
2006 | 21036909 | – | –
2007 | 23948091 | – | –
2008 | 26289240 | 22492500 | 3796740
2009 | 20529570 | 25118665.5 | -4589095.5
2010 | 18950973 | 23409405 | -4458432
2011 | 18993688 | 19740271.5 | -746583.5
2012 | 18583653 | 18972330.5 | -388677.5
2013 | 22064192 | 18788670.5 | 3275521.5
2014 | 25691911 | 20323922.5 | 5367988.5
2015 (forecast) | – | 23878051.5 | –
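The 2-year moving-average figures above can be reproduced with a short script (a minimal sketch; the sales figures, in millions of yen, are those from the table):

```python
# Sales figures (millions of yen) from the table above.
sales = {2006: 21036909, 2007: 23948091, 2008: 26289240, 2009: 20529570,
         2010: 18950973, 2011: 18993688, 2012: 18583653, 2013: 22064192,
         2014: 25691911}

def two_year_moving_average(series, year):
    """Forecast for `year` = mean of the two preceding years' sales."""
    return (series[year - 2] + series[year - 1]) / 2

print(two_year_moving_average(sales, 2008))  # 22492500.0
print(two_year_moving_average(sales, 2015))  # 23878051.5 (the 2015 forecast)
```

The error column of the table is simply the actual sales minus this forecast in each year where a forecast exists.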

Forecasting model: weighted moving average and exponential smoothing

We could instead use different methods. The moving average is a simple method that does not take into consideration the weight, or real importance, of each observation. To overcome this issue we can adopt the "weighted moving average" method and the "exponential smoothing" method.

Using the weighted moving average method, I take into consideration the three most recent years, which I consider the most important. The value of the weights is based on the percentage growth in each year.

year | sales | 3-yr weight | % growth
2006 | 21036909 | – | –
2007 | 23948091 | – | 12%
2008 | 26289240 | – | 9%
2009 | 20529570 | – | -28%
2010 | 18950973 | – | -8%
2011 | 18993688 | – | 0%
2012 | 18583653 | 2% | -2%
2013 | 22064192 | 26% | 16%
2014 | 25691911 | 72% | 14%
2015 (forecast) | 24606538.9 | 100% (total) | –
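The weighted moving average above can be sketched in a few lines (using the weights from the table: 2% on 2012, 26% on 2013, 72% on 2014, summing to 100%):

```python
# Weighted 3-year moving average with the weights from the table above.
sales = {2012: 18583653, 2013: 22064192, 2014: 25691911}
weights = {2012: 0.02, 2013: 0.26, 2014: 0.72}  # must sum to 1.0

forecast_2015 = sum(weights[y] * sales[y] for y in sales)
print(round(forecast_2015, 1))  # 24606538.9
```

Because the most recent year carries 72% of the weight, the forecast sits much closer to the 2014 figure than the simple moving average does.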

Using the weighted moving average, we obtain a better forecast. An even better forecast, however, can be obtained with the exponential smoothing method.

Here I take into consideration all the years from 2008 onwards. The smoothing factor turns out to be quite high, 0.99.

year | sales
2006 | 21036909
2007 | 23948091
2008 | 26289240
2009 | 20529570
2010 | 18950973
2011 | 18993688
2012 | 18583653
2013 | 22064192
2014 | 25691911
2015 forecast (exponential smoothing) | 25691729.6
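Exponential smoothing updates the forecast as F(t+1) = α·A(t) + (1−α)·F(t). The exact 2015 figure depends on the chosen α and on how the first forecast is initialised; the sketch below uses α = 0.9 and initialises the forecast at the first observed value (one common convention), so it illustrates the method rather than reproducing the table's 25691729.6 exactly.

```python
def exponential_smoothing(actuals, alpha):
    """One-step-ahead forecast after seeing all actuals.

    F(t+1) = alpha * A(t) + (1 - alpha) * F(t), with F initialised
    to the first actual value (one common convention).
    """
    forecast = actuals[0]
    for a in actuals:
        forecast = alpha * a + (1 - alpha) * forecast
    return forecast

# Sales from 2008 to 2014 (millions of yen), as in the tables above.
sales = [26289240, 20529570, 18950973, 18993688, 18583653, 22064192, 25691911]
print(round(exponential_smoothing(sales, 0.9)))
```

With α close to 1 the forecast tracks the most recent actual almost exactly, which is why a high smoothing factor produces a 2015 forecast so near the 2014 sales figure.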

I chose 0.9 as my alpha because, in this particular case, a higher alpha gives recent history more weight in the forecast calculation. As shown on page 26/68 of the annual report, I used Toyota's Consolidated Performance (U.S. GAAP) figures.

I think the last method is the most appropriate for producing a realistic forecast for the next year.

Of course, the calculation has been made ceteris paribus, so everything is assumed to stay the same next year; but as shown before, this particular market is subject to constant change. For this reason, and because of random error, the actual figure could be higher or lower, but we can nevertheless see a clearly positive trend in Toyota's business.

Thanks to the tireless efforts of all concerned, today Toyota’s group can take pride in the strengths of its management practices and culture. Even its president is convinced that they are now in a position to take a definitive step forward toward sustainable growth.

Conclusion: Toyota heading towards Sustainable Growth

So is Toyota heading towards sustainable growth? What is the engine of sustainable growth? Toyota has learned from experience that it can achieve sustainable growth only if it manages to create great cars that bring smiles, and if it fosters the human resources needed to make this a reality. At the same time, ever-better cars can be produced only through the efforts of employees on the front line. Individuals must take ownership of their work and place the utmost emphasis on local manufacturing, swift decision-making and immediate action. As the company continues to grow, however, tasks that were once routine may become increasingly difficult to perform. As I see it, Toyota's current situation is particularly critical as it enters another expansion phase. This is a very important moment for Toyota. Because of the risks associated with the future, Toyota should continue to seek perfection in its manufacturing, but especially in its management, where the decision-making process plays a fundamental part.

References and sources

For further readings…

– Ken Segall, Insanely Simple: The Obsession That Drives Apple's Success, Portfolio Trade, 2013

– Robbins, DeCenzo and Coulter, Fundamentals of Management, Global Edition, 8th Edition, Pearson Higher Education, 2014

– Burns and Stalker, The Management of Innovation, Tavistock Publications, London, 1961

Some internet sites…

ADAPT OR DIE, by John S. McCallum – Ivey Business Journal about management [accessed November 18, 2014] http://iveybusinessjournal.com/topics/strategy/adapt-or-die#.VGvDZDSG_ng


Fear of Crime Survey Results

Data

The data set analysed consists of responses from residents (N = 300) who participated in the 2014 Gold Coast Community Survey on fear of crime and the factors associated with individual perceptions of what contributes to that fear. The survey gathered groups of categorical variables covering fear, demographic characteristics, news and information, and community characteristics. Fear and news and information are each captured by a single variable with multiple values, whereas demographic and community characteristics are represented by several individual variables, each with multiple values. Demographic characteristics include gender, age, income and education level; community characteristics include collective efficiency and social cohesion. A detailed description of the data set, including values, is shown in Table 1. The primary focus of this analysis is to determine the association between fear and the other factors, so fear is the categorical dependent variable and the remaining variables are independent variables.

Table 1

Sub-sample sizes and frequencies of variables (N = 300)

Variable | n | % of sample
Age
  15–24 | 20 | 6.7
  25–54 | 56 | 18.7
  55–64 | 49 | 16.3
  65+ | 175 | 58.3
Gender
  Male | 130 | 43.3
  Female | 170 | 56.7
Income
  Under 50k | 136 | 45.3
  Above 50k | 164 | 54.7
Highest level of education completed
  Year 11 or 12 or equivalent | 171 | 57.0
  Degree | 87 | 29.0
  Higher degree | 42 | 14.0
Primary source of news and information
  Television | 190 | 63.3
  Radio | 23 | 7.7
  Print | 52 | 17.3
  Internet | 30 | 10.0
  Other | 5 | 1.7
Collective efficiency
  Low | 80 | 26.7
  Moderate | 148 | 49.3
  High | 72 | 24.0
Social cohesion
  Low | 70 | 23.3
  Moderate | 153 | 51.0
  High | 77 | 25.7

Methods

To determine whether there was an association between fear of crime and the various factors that could influence each individual's perceptions, a chi-square r x c test for independence was conducted on the assembled data. This test was chosen because all the variables used are categorical with multiple values, meeting two assumptions of the chi-square test for independence: the variables are categorical (nominal or ordinal), and there are two or more of them. The other primary assumption, that the expected frequency should not drop below five in more than 25% of the cells of a contingency table, was also met: only two cells (4.55%) fell below an expected count of five, with a minimum of 2.08, which is well under 25% of the cells.
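The test compares each observed cell count with the count expected under independence, E = (row total × column total) / N, and Cramér's V = sqrt(χ² / (N × (min(r, c) − 1))) then expresses effect size. A minimal sketch follows; the 2 × 2 contingency table used here is invented for illustration, not taken from the survey:

```python
from math import sqrt

def chi_square_independence(table):
    """Chi-square r x c test of independence on a list-of-lists table.

    Returns (chi2, df, cramers_v).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    v = sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))
    return chi2, df, v

# Hypothetical fearful / not-fearful by group table (NOT the survey data).
chi2, df, v = chi_square_independence([[10, 20], [30, 40]])
print(round(chi2, 3), df, round(v, 3))  # 0.794 1 0.089

# Sanity check of a reported effect size: with chi2 = 106.59 and N = 300,
# Cramer's V = sqrt(106.59 / 300) = 0.596, matching the age result.
print(round(sqrt(106.59 / 300), 3))
```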

Results

A chi-square r x c test for independence was performed to examine the association between fear of crime and the various factors contributing to participants' perceptions. Several variables were examined; the significant findings are discussed before the results in Table 2. Within the age variable, 48% of participants over the age of 65 were fearful of crime, compared with 2.3% of participants aged 55–64, 4.7% of participants aged 25–54, and 3.3% of participants aged 15–24. The relation between the dependent variable (fearful / not fearful) and age was significant (χ²(3, N = 300) = 106.59, p < .001). Cramer's V was .596, so approximately 35% of the variance in the frequencies of fear can be explained by the variance in age. Within the news and information variable, 46.7% of participants perceived that television increased fear of crime, compared with 3.3% for radio, 7.0% for print, 1.3% for the internet and 0% for other sources. The relation between the dependent variable and news and information was also significant (χ²(4, N = 300) = 59.39, p < .001). Cramer's V was .445, so approximately 20% of the variance in the frequencies of fear can be explained by news and information. Both age and news and information showed markedly stronger associations with fear of crime than the other variables, particularly the community characteristics. Further detailed results are shown in Table 2.

Table 2

Results of chi-square tests on variables associated with fear of crime

Variable | df | χ² | p value | V | Variance %
Age | 3 | 106.59 | <.001 | .596 | 35%
Gender | 1 | 8.27 | .004 | .166 | 3%
Income | 1 | 0.74 | .388 | -.050 | 0.25%
Schooling | 2 | 16.00 | <.001 | .231 | 5%
News and information | 4 | 59.39 | <.001 | .445 | 19%
Collective efficiency | 2 | 18.16 | <.001 | .246 | 6%
Social cohesion | 2 | 19.63 | <.001 | .256 | 6%

Conclusion

The variables age and news and information both have a significant association with fear of crime within the Gold Coast community. Addressing the research questions, the preceding data demonstrate that demographic characteristics and news and information are both related to residents' fear of crime; the answers to research questions one and two are therefore affirmative. As for the third research question, on the relationship between community characteristics and residents' fear of crime, the data show a slight relationship, but it is not as strong as for the other variables. It is therefore suggested that strategies address residents' fear of crime by focusing on age and on the production of news and information about crime, in order to alter perceptions.

Survey of Satisfaction with College Facilities

Assignment 1
1. Plan for collection of Primary data and secondary data

Primary data are collected directly from the field, i.e. first-hand, while secondary data are collected from some other source, i.e. second-hand. For this problem, primary data can be collected by interviewing the students and staff of the college. A questionnaire will be prepared and filled in from the responses of these individuals; the database built from their answers constitutes the primary data.

In case of secondary data, the data could be collected from any organization / department which collects the school/college data or from any journal or from any Researcher.

2. Present the survey methodology and sampling frame used

There are different areas of the college that are used by both the students and the staff. The questionnaire is prepared around those areas, along with a few others, and the survey is then conducted on that basis, with a sample selected randomly from the students and the staff. Since a survey methodology has to be planned, the first step is to identify the sample members. For this purpose a total of 50 individuals may be selected from 70 students and 30 staff, taking 50% from each group (35 + 15 = 50). The interview method will be used for data collection, and each individual's satisfaction level will be recorded on several variables. Once the sampling units are finalized, the sampling frame is needed: the sampling frame is the area, or list, from which the sample is drawn. One has to check whether all units of the population are available in the frame; the lists of students and staff must be representative of all classes and segments of the college.

The level of satisfaction will be coded as 5= very good, 4= good, 3= average, 2= bad, 1= very bad, in five categories, following Likert scaling.
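The 50%-per-stratum selection described above (35 of 70 students, 15 of 30 staff) can be sketched as a stratified random draw; the IDs below are made up purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
students = [f"student_{i}" for i in range(1, 71)]  # 70 students
staff = [f"staff_{i}" for i in range(1, 31)]       # 30 staff

# 50% from each stratum: 35 students + 15 staff = 50 respondents.
sample = random.sample(students, 35) + random.sample(staff, 15)
print(len(sample))  # 50
```

Sampling each stratum separately guarantees that both groups are represented in the agreed proportions, which a simple random draw from the pooled list would not.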

3. Design a questionnaire to know the opinion of students and staff on the matter
Gender………………………………………
Origin ……………………………………….
Age……………………………………………..

Very good

Good

Average

Bad

Very Bad

How much satisfied are you on the overall infrastructure of the college?

How much satisfied will you be if Laundry facilities are available?

How much satisfied are you with the Hostel facilities of the college?

How much satisfied are you with the gym?

How much satisfied are you with the parking facility of the college?

How much satisfied are you on the toilet facilities?

How much satisfied are you on the structure of labs in the college?

4. Information for decision making by summarizing data using representative values

The data collected after the survey, recoded according to the Likert scale, are as below:

The seven columns correspond, in order, to the seven questionnaire items: overall infrastructure, laundry facilities, hostel facilities, gym, parking facility, toilet facilities, structure of labs.

Respondent | Infrastructure | Laundry | Hostel | Gym | Parking | Toilet | Labs
1 | 4 | 2 | 2 | 5 | 2 | 2 | 4
2 | 5 | 4 | 4 | 2 | 3 | 4 | 2
3 | 3 | 2 | 2 | 4 | 2 | 2 | 4
4 | 5 | 1 | 1 | 2 | 4 | 3 | 4
5 | 4 | 2 | 2 | 1 | 2 | 5 | 4
6 | 1 | 4 | 2 | 2 | 1 | 2 | 4
7 | 2 | 2 | 4 | 4 | 2 | 4 | 2
8 | 5 | 3 | 2 | 2 | 4 | 5 | 5
9 | 3 | 5 | 1 | 3 | 2 | 2 | 4
10 | 4 | 2 | 2 | 5 | 3 | 2 | 2
11 | 4 | 4 | 4 | 4 | 5 | 4 | 4
12 | 5 | 5 | 2 | 5 | 2 | 4 | 2
13 | 2 | 2 | 3 | 4 | 4 | 1 | 4
14 | 4 | 2 | 2 | 2 | 2 | 2 | 4
15 | 5 | 4 | 4 | 4 | 4 | 3 | 4
16 | 3 | 4 | 2 | 4 | 2 | 2 | 4
17 | 2 | 1 | 1 | 4 | 1 | 1 | 2
18 | 4 | 2 | 2 | 5 | 2 | 2 | 5
19 | 5 | 3 | 4 | 4 | 4 | 3 | 4
20 | 1 | 2 | 2 | 2 | 2 | 2 | 2
21 | 4 | 1 | 3 | 4 | 3 | 1 | 2
22 | 5 | 2 | 5 | 4 | 5 | 2 | 2
23 | 2 | 3 | 2 | 4 | 2 | 3 | 4
24 | 5 | 2 | 4 | 4 | 4 | 4 | 2
25 | 4 | 1 | 2 | 2 | 5 | 4 | 3
26 | 5 | 2 | 4 | 5 | 2 | 4 | 5
27 | 4 | 3 | 2 | 4 | 4 | 2 | 2
28 | 3 | 4 | 1 | 2 | 2 | 3 | 4
29 | 5 | 4 | 2 | 1 | 1 | 5 | 5
30 | 4 | 4 | 4 | 2 | 2 | 2 | 2
31 | 3 | 4 | 2 | 4 | 5 | 4 | 2
32 | 4 | 2 | 3 | 2 | 2 | 2 | 4
33 | 5 | 3 | 5 | 3 | 4 | 4 | 4
34 | 3 | 2 | 2 | 2 | 2 | 2 | 1
35 | 5 | 4 | 4 | 4 | 1 | 1 | 2
36 | 4 | 3 | 5 | 2 | 2 | 2 | 3
37 | 3 | 2 | 2 | 1 | 4 | 4 | 2
38 | 4 | 4 | 2 | 2 | 2 | 2 | 1
39 | 5 | 4 | 4 | 4 | 3 | 3 | 2
40 | 3 | 2 | 4 | 2 | 5 | 5 | 3
41 | 5 | 5 | 1 | 3 | 4 | 2 | 2
42 | 4 | 4 | 2 | 5 | 5 | 2 | 1
43 | 5 | 2 | 3 | 4 | 4 | 1 | 2
44 | 3 | 4 | 2 | 2 | 2 | 2 | 3
45 | 5 | 4 | 1 | 4 | 4 | 2 | 4
46 | 4 | 4 | 2 | 4 | 4 | 1 | 2
47 | 5 | 4 | 3 | 4 | 4 | 2 | 4
48 | 3 | 2 | 2 | 4 | 4 | 4 | 2
49 | 5 | 5 | 4 | 2 | 3 | 2 | 3
50 | 4 | 4 | 3 | 5 | 3 | 3 | 5

5. For analytical purposes, the variables are denoted as below:

"How much satisfied are you with the overall infrastructure of the college?" → overall infrastructure
"How much satisfied will you be if laundry facilities are available?" → laundry facilities
"How much satisfied are you with the hostel facilities of the college?" → hostel facilities
"How much satisfied are you with the gym?" → gym
"How much satisfied are you with the parking facility of the college?" → parking facility
"How much satisfied are you with the toilet facilities?" → toilet facilities
"How much satisfied are you with the structure of labs in the college?" → structure of labs

Variable | Mean
overall infrastructure | 3.88
laundry facilities | 3.00
hostel facilities | 2.66
gym | 3.26
parking facility | 3.00
toilet facilities | 2.70
structure of labs | 3.06
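The means can be recomputed directly from the 50 recoded responses (the same data as the table in section 4, columns in the order infrastructure, laundry, hostel, gym, parking, toilet, labs):

```python
# Each tuple is one respondent's seven Likert scores, in the column order
# (infrastructure, laundry, hostel, gym, parking, toilet, labs).
responses = [
    (4,2,2,5,2,2,4),(5,4,4,2,3,4,2),(3,2,2,4,2,2,4),(5,1,1,2,4,3,4),(4,2,2,1,2,5,4),
    (1,4,2,2,1,2,4),(2,2,4,4,2,4,2),(5,3,2,2,4,5,5),(3,5,1,3,2,2,4),(4,2,2,5,3,2,2),
    (4,4,4,4,5,4,4),(5,5,2,5,2,4,2),(2,2,3,4,4,1,4),(4,2,2,2,2,2,4),(5,4,4,4,4,3,4),
    (3,4,2,4,2,2,4),(2,1,1,4,1,1,2),(4,2,2,5,2,2,5),(5,3,4,4,4,3,4),(1,2,2,2,2,2,2),
    (4,1,3,4,3,1,2),(5,2,5,4,5,2,2),(2,3,2,4,2,3,4),(5,2,4,4,4,4,2),(4,1,2,2,5,4,3),
    (5,2,4,5,2,4,5),(4,3,2,4,4,2,2),(3,4,1,2,2,3,4),(5,4,2,1,1,5,5),(4,4,4,2,2,2,2),
    (3,4,2,4,5,4,2),(4,2,3,2,2,2,4),(5,3,5,3,4,4,4),(3,2,2,2,2,2,1),(5,4,4,4,1,1,2),
    (4,3,5,2,2,2,3),(3,2,2,1,4,4,2),(4,4,2,2,2,2,1),(5,4,4,4,3,3,2),(3,2,4,2,5,5,3),
    (5,5,1,3,4,2,2),(4,4,2,5,5,2,1),(5,2,3,4,4,1,2),(3,4,2,2,2,2,3),(5,4,1,4,4,2,4),
    (4,4,2,4,4,1,2),(5,4,3,4,4,2,4),(3,2,2,4,4,4,2),(5,5,4,2,3,2,3),(4,4,3,5,3,3,5),
]
means = [sum(col) / len(responses) for col in zip(*responses)]
print([round(m, 2) for m in means])  # [3.88, 3.0, 2.66, 3.26, 3.0, 2.7, 3.06]
```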

The mean for overall infrastructure is 3.88, which indicates that on average people rate the overall infrastructure as almost "Good".

When asked "How much satisfied will you be if laundry facilities are available?", the mean response is 3, i.e. "Average". This is quite sensible, because this facility does not yet exist in the college.

Regarding the hostel facilities, the average response is below "Average", which indicates an urgent need for improvement in this area.

The gym rating is slightly above "Average": better than "Average" but less than "Good".

For the parking facility the satisfaction level is exactly "Average", which indicates there is scope to improve this area.

Satisfaction with the toilet facilities is below "Average", which also requires the urgent attention of the college authorities.

The structure of the labs also requires some attention.

6. Drawing valid conclusions based on information derived from the survey

Laundry

The above diagram shows that 36% rate the laundry facility as "Good" while another 36% rate it as "Bad"; only 8% say "Very Good" and 8% say "Very Bad". A somewhat symmetrical picture emerges, which suggests the service provider is paying good attention only to selected individuals.

Hostel facilities

In the case of the hostel, 46% say "Bad", which is a matter of concern; the mean value indicates the same urgency. At the same time, 24%, the second-largest share, say "Good", which may indicate that some portion of the hostel is in a better condition than the rest. A further 6% rate their accommodation "Very Good".

Gym

Regarding the gym, which is yet to be established, 48% are in favour of it: 42% rate it as a "Good" facility and 6% as a "Very Good" facility.

Parking

The above diagram shows that 38% rate the parking facility "Bad" while 30% rate it "Good"; only 8% say "Very Bad" and 12% say "Very Good".

Toilet

In the case of the toilets, 44% say "Bad", which is also a matter of concern; the mean value indicates the same urgency. At the same time, 22% say "Good", which may indicate that some areas are better maintained, and 8% say "Very Good".

Lab

In the case of the labs, which relate most directly to education, 46% are in favour: 36% rate them as a "Good" facility and 10% as "Very Good".

7. Trend lines

As per the given question, trend lines have to be created in the spreadsheet graphs. For this purpose the intercept is set to zero, and the equation is shown along with the scatter plot and the trend line.

Here the first variable, overall infrastructure, is considered the dependent variable, and there are six other independent variables. Taking each independent variable separately, the trend line and graph are created.

Case 1. Overall infrastructure and satisfaction with laundry:

As shown in the graph, the required equation is Y = 0.732X

Case 2. Overall infrastructure and satisfaction with hostel:

The required equation is Y = 0.659X

Case 3. Overall infrastructure and satisfaction with gym:

The required equation is Y = 0.787X

Case 4. Overall infrastructure and satisfaction with parking facility:

The equation here is Y = 0.740X

Case 5. Overall infrastructure and satisfaction with toilet facility:

Here the equation is Y = 0.656X

Case 6. Overall infrastructure and satisfaction with labs:

Here the equation is Y = 0.735X
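A trend line forced through the origin has slope Σxy / Σx². Numerically, the reported slopes are reproduced (to within rounding) by using the infrastructure scores as the x-series in each pair, which is what this sketch computes from the survey data above:

```python
# Survey responses, columns: infrastructure, laundry, hostel, gym,
# parking, toilet, labs (same data as the table in section 4).
responses = [
    (4,2,2,5,2,2,4),(5,4,4,2,3,4,2),(3,2,2,4,2,2,4),(5,1,1,2,4,3,4),(4,2,2,1,2,5,4),
    (1,4,2,2,1,2,4),(2,2,4,4,2,4,2),(5,3,2,2,4,5,5),(3,5,1,3,2,2,4),(4,2,2,5,3,2,2),
    (4,4,4,4,5,4,4),(5,5,2,5,2,4,2),(2,2,3,4,4,1,4),(4,2,2,2,2,2,4),(5,4,4,4,4,3,4),
    (3,4,2,4,2,2,4),(2,1,1,4,1,1,2),(4,2,2,5,2,2,5),(5,3,4,4,4,3,4),(1,2,2,2,2,2,2),
    (4,1,3,4,3,1,2),(5,2,5,4,5,2,2),(2,3,2,4,2,3,4),(5,2,4,4,4,4,2),(4,1,2,2,5,4,3),
    (5,2,4,5,2,4,5),(4,3,2,4,4,2,2),(3,4,1,2,2,3,4),(5,4,2,1,1,5,5),(4,4,4,2,2,2,2),
    (3,4,2,4,5,4,2),(4,2,3,2,2,2,4),(5,3,5,3,4,4,4),(3,2,2,2,2,2,1),(5,4,4,4,1,1,2),
    (4,3,5,2,2,2,3),(3,2,2,1,4,4,2),(4,4,2,2,2,2,1),(5,4,4,4,3,3,2),(3,2,4,2,5,5,3),
    (5,5,1,3,4,2,2),(4,4,2,5,5,2,1),(5,2,3,4,4,1,2),(3,4,2,2,2,2,3),(5,4,1,4,4,2,4),
    (4,4,2,4,4,1,2),(5,4,3,4,4,2,4),(3,2,2,4,4,4,2),(5,5,4,2,3,2,3),(4,4,3,5,3,3,5),
]

def slope_through_origin(x, y):
    """Least-squares slope with the intercept fixed at zero: sum(xy) / sum(x^2)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

cols = list(zip(*responses))
infra = cols[0]
slopes = [slope_through_origin(infra, cols[j]) for j in range(1, 7)]
print([round(s, 3) for s in slopes])
```

The six printed values match the equations of Cases 1 to 6 above, up to the rounding used in the text.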

8. Business Report

All the equations are formed by considering the variables pairwise. In each equation, given a value of x, the estimated value of Y is obtained by solving the equation with a simple calculation. The dependent variable is "overall infrastructure", which in effect indicates whether there is really any need to refurbish the whole college. This dependent variable depends on the other issues and factors treated as independent variables.

So paying attention to those areas helps the project decide whether or not refurbishment should go ahead. The above analysis tells us which areas need the most attention and which are adequate for now. Based on the analysis, two issues, the toilets and the hostel, need to be addressed seriously.

Assignment 2
Question No.1

X = scores of a market survey regarding the acceptability of a new product launch by a company

Frequency Table with a class interval of 5

Class interval | 5–10 | 10–15 | 15–20 | 20–25 | 25–30 | 30–35 | 35–40 | 40–45 | 45–50
Frequency | 2 | 0 | 1 | 1 | 6 | 2 | 10 | 8 | 2

Mean, variance and standard deviation.

Mean (x̄) = Σfixi / N, where N = Σfi, and Variance = σ² = (Σfixi²) / N − x̄²

Standard deviation = the square root of the variance.

Here xi is the mid value of the class interval. The following table is constructed for the required calculations.

Class interval | mid value (xi) | frequency (fi)
5–10 | 7.5 | 2
10–15 | 12.5 | 0
15–20 | 17.5 | 1
20–25 | 22.5 | 1
25–30 | 27.5 | 6
30–35 | 32.5 | 2
35–40 | 37.5 | 10
40–45 | 42.5 | 8
45–50 | 47.5 | 2
Total | 247.5 | N = 32

Here,

xi | fi | xi² | fixi | fixi²
7.5 | 2 | 56.25 | 15 | 112.5
12.5 | 0 | 156.25 | 0 | 0
17.5 | 1 | 306.25 | 17.5 | 306.25
22.5 | 1 | 506.25 | 22.5 | 506.25
27.5 | 6 | 756.25 | 165 | 4537.5
32.5 | 2 | 1056.25 | 65 | 2112.5
37.5 | 10 | 1406.25 | 375 | 14062.5
42.5 | 8 | 1806.25 | 340 | 14450
47.5 | 2 | 2256.25 | 95 | 4512.5
Σxi = 247.5 | N = Σfi = 32 | Σxi² = 8306.25 | Σfixi = 1095 | Σfixi² = 40600

Mean = x̄ = Σfixi / N = 1095/32 = 34.22

Variance = σ² = (Σfixi²) / N − x̄² = (40600/32) − (34.22)² = 97.83

Standard deviation = √97.83 = 9.89
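The grouped-data computation above can be checked in a few lines, using the mid values and frequencies exactly as in the table:

```python
from math import sqrt

mids = [7.5, 12.5, 17.5, 22.5, 27.5, 32.5, 37.5, 42.5, 47.5]
freqs = [2, 0, 1, 1, 6, 2, 10, 8, 2]

n = sum(freqs)                                      # N = 32
mean = sum(f * x for f, x in zip(freqs, mids)) / n  # 1095 / 32
variance = sum(f * x * x for f, x in zip(freqs, mids)) / n - mean ** 2
print(round(mean, 2), round(variance, 2), round(sqrt(variance), 2))
# -> 34.22 97.83 9.89
```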

Score corresponding to the 50th percentile.

The 50th percentile is the median.

The data set has to be written in increasing order:

8, 8, 18, 25, 26, 26, 27, 27, 29, 30, 32, 35, 36, 37, 38, 39, 39, 39, 40, 40, 40, 40, 41, 41, 42, 43, 44, 44, 45, 45, 48, 49

There are 32 observations in all, so there are two middle values (the 16th and 17th). The average of those two middle values is the median, i.e. the score at the 50th percentile. Since both middle values are 39, the average is also 39, so the score 39 corresponds to the 50th percentile.
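The median, and the location of the third quartile asked for in the next part, can be sketched as follows (the Q3 position uses the common (n + 1) · p convention; other conventions give slightly different locations):

```python
scores = [8, 8, 18, 25, 26, 26, 27, 27, 29, 30, 32, 35, 36, 37, 38, 39,
          39, 39, 40, 40, 40, 40, 41, 41, 42, 43, 44, 44, 45, 45, 48, 49]
n = len(scores)  # 32 observations, already sorted

# Median: average of the two middle values (16th and 17th).
median = (scores[n // 2 - 1] + scores[n // 2]) / 2
print(median)  # 39.0

# Location of the third quartile under the (n + 1) * 3/4 convention.
q3_position = 3 * (n + 1) / 4
print(q3_position)  # 24.75
```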

Calculate the location of the third quartile.

Rewriting the data set in increasing order:

8, 8, 18, 25, 26, 26, 27, …

Statistical Analysis on Crime Rate in Nigeria

CHAPTER TWO

2.1 INTRODUCTION: In this chapter we review some of the research work that has been carried out.

Crime is one of the continuing problems that bedevil the existence of mankind. Since the earliest days, crime has been a disturbing threat to man's personality, property and lawful authority (Louis et al., 1981). Today, in the modern complex world, the situation is highly disturbing. Crime started in primitive times as a simple and loosely organized matter and has become very complex and organized today; the existence of crime and its problems has thus spanned the history of mankind. Nigeria has one of the most alarming crime rates in the world (Uche, 2008; Financial, 2011). Cases of armed robbery attacks, pickpocketing, shoplifting and 419 fraud have increased with rising poverty in the population (Lagos, undated). In 2011, armed robbers killed at least 12 people, and possibly more, in attacks on a bank and a police station in North-Eastern Nigeria (Nossiter, 2011). However, Maritz (2010) considered that image to be merely an exaggeration, adding that, as in the rest of the world, Nigeria's metropolitan areas have more problems with crime than the rural areas, and that most crime is purely a result of poverty. Despite the fact that crime is inevitable in a society (Durkheim, 1933), various control and prevention measures have been, and are still being, taken to reduce the menace. Crime control and prevention, however, are still bedeviled by numerous complex problems: when an opportunity for crime is blocked, an offender has several alternative types of displacement (Gabor, 1978). Nonetheless, the introduction of modern scientific and technical methods in crime prevention and control has proved effective, and the application of multivariate statistics has contributed to many criminological explanations (Kpedekpo and Arya, 1981; Printcom, 2003).

Principal Component Analysis (PCA) is very useful in crime analysis because of its robustness in data reduction and in determining the overall criminality in a given geographical area. PCA is a data analysis tool usually used to reduce the dimensionality (number of variables) of a large number of interrelated variables while retaining as much of the information (variation) as possible. The computation of PCA reduces to an eigenvalue-eigenvector problem, performed on either a correlation or a covariance matrix. If some group of measures constitutes the scores of numerous variables, researchers may wish to combine the scores of the numerous variables into a smaller number of super-variables to form the group of measures (Jolliffe, 2002). This problem mostly arises in determining the relationship between socio-economic factors and crime incidence. PCA uses the correlation among the variables to develop a small set of components that empirically summarise that correlation. In a study examining the statistical relationship between crime and socio-economic status in Ottawa and Saskatoon, PCA was employed to replace a set of variables with a smaller number of components made up of inter-correlated variables, representing as much of the original data set as possible (Exp, 2008). Principal component analysis can also be used to determine overall criminality: when the first eigenvector shows approximately equal loadings on all variables, the first PC measures the overall crime rate. In Printcom (2003), for the 1997 US crime data, the overall crime rate was determined from the first PC, and the same result was achieved by Hardle and Zdenek (2007) for the 1985 US crime data. The second PC, which is interpreted as a "type of crime" component, successfully classified the seven crimes into violent and property crime.
Usman et al. (2012) carried out research on the rate of crime in Sokoto using Principal Component Analysis. From the results, three principal components were retained out of seven, using the scree plot and loading plot, indicating that correlation exists between crimes against persons and crimes against property. Yan Fang (2011) used multivariate methods to analyse crime data in Los Angeles communities; from the findings, Principal Component Analysis was successfully applied to the data by extracting five PCs out of the 15 original variables, which implies a great dimensionality reduction. In addition, these five PCs retained 85% of the variance of the original dataset, so not much information was lost. Shehu et al. (2009) researched the analysis of crime data using principal component analysis in a case study of Katsina State. The paper considers the average of eight major crimes reported to the Police for the period 2006-2008: robbery, auto theft, house and store breakings, theft, grievous hurt and wounding, murder, rape, and assault. A correlation matrix and principal component analysis were developed to explain the correlation between the crimes and to determine the distribution of the crimes over the Local Government Areas of the State.

2.2 Classification of Crime

The classification of crime differs from one country to another. In the United States, the Federal Bureau of Investigation tabulates the annual crime data as Uniform Crime Reports (UCR). Violations of laws which derive from common law are classified as Part I (index) crimes in UCR data, further categorized as violent or property crimes. Part I violent crimes include murder and criminal homicide (voluntary manslaughter), forcible rape, aggravated assault, and robbery, while Part I property crimes include burglary, arson, larceny/theft, and motor vehicle theft. All other crimes count as Part II crimes (Wiki/Cr., 2009). In Nigeria, the Police classification of crime also depends on what the law prescribes. In the Nigeria Police Abstract of Statistics (NPACS), offences are categorized into four main categories:

i. Offences against persons include: manslaughter, murder and attempted murder, assault, rape, child stealing, grievous hurt and wounding, etc.

ii. Offences against property include: armed robbery, house and store breakings, forgery, theft/stealing, etc.

iii. Offences against lawful authority include: forgery of current notes, gambling, breach of peace, bribery and corruption, etc.

iv. Offences against local act include: traffic offences, liquor offences, etc.

2.3 Causes of Crimes

Criminal behaviour cannot be explained by a single factor, because human behaviour is a complex interaction between genetic, environmental, social, psychological and cultural factors. Different types of crimes are committed by different types of people, at different times, in different places, and under different circumstances (Danbazau, 2007). Here we discuss some of the causes of crime:

Biogenetic factors: Some criminologists are of the opinion that criminal activity is due to the effect of biologically caused or inherited factors (Pratt and Cullen, 2000). According to Lombroso (1911), a criminal is born, not made; criminals were the products of a genetic constitution unlike that found in the non-criminal population.

Social and environmental factors (Sutherland, 1939): The environment is said to play a significant role in determining criminal behaviour. Factors within the environment that most influence criminal behaviour include poverty, unemployment, corruption, urbanization, family, moral decadence, poor education, technology, child abuse, drug trafficking and abuse, and architectural or environmental design. Oyebanji (1982) and Akpan (2002) have attributed the current crime problem in Nigeria to urbanisation, industrialisation and lack of education. Kutigi (2008) has said that the factors behind crime in Nigeria are poverty and ignorance, which is also the opinion of many Nigerians (Azaburke, 2007). In another dimension, according to Ayoola (2008), lack of integrity, transparency and accountability in the management of public funds, especially at all levels of government, has been identified as the factor responsible for the endemic corruption that has eaten deep into the fabric of Nigerian society over the years.

2.4 The Nigerian Police

The most important aspect of the criminal justice system is the police. The criminal justice system can be defined as the procedure for processing a person accused of committing a crime, from arrest to the final disposal of the case (Danbazau, 2007). However, for the past three decades there has been serious dissatisfaction and public criticism over the conduct of the police (Danbazau, 2007). What, then, are the causes of the police failure in preventing and controlling crime? Many factors can be attributed to the problem: inadequate manpower, equipment and professionalism (Danbazau, 2007), corruption (Al-Ghazali, 2004) and poor public perception of the Nigeria Police (Okeroko, 1993), which has consequently made the Nigerian public unwilling to cooperate with the police in crime prevention and control.

2.5 Statistics of Crimes in Nigeria

Nigeria has one of the highest crime rates in the world. Murder often accompanies minor burglaries. Rich Nigerians live in high-security compounds. Police in some states are empowered to "shoot on sight" violent criminals (Financial Times, 2009). In the 1980s, serious crime grew to nearly epidemic proportions, particularly in Lagos and other urbanized areas characterised by rapid growth and change, stark economic inequality and deprivation, social disorganisation, and inadequate government services and law enforcement capabilities (Nigeria, 1991). Annual crime rates fluctuated at around 200 per 100,000 population until the early 1960s and then steadily increased to more than 300 per 100,000 by the mid-1970s. Available data from the 1980s indicated a continuing increase. Total reported crime rose from almost 211,000 in 1981 to between 330,000 and 355,000 during 1984-85. The British High Commission in Lagos cited more than 3,000 cases of forgery annually (Nigeria, 1991). In the early 1990s, there was a growing number of robberies, from 1,937 in 1990 to 2,419 in 1996; the figure later declined to 2,291 in 1999. Throughout the 1990s, assault and theft constituted the largest categories of crime. Overall, reported crime grew from 244,354 in 1991 to 289,156 in 1993 (Cleen, 1993) and then declined from 241,091 in 1994 to 167,492 in 1999 (Cleen, 2003). The number of crimes declined slightly to 162,039 in 2006, a reduction of 8 percent from 2005 (Cleen, 2006).

2.6 Principal Component Analysis Theories

Having a large number of variables in a study makes it difficult to decipher patterns of association. Variables sometimes tend to repeat themselves. Repetition is a sign of multicollinearity among variables, meaning that the variables may be presenting some of the same information. Principal Components Analysis simplifies multivariate data in that it reduces the dimensionality of the data. It does so by using mainly the primary variables to explain the majority of the information provided by the data set. Analysis of a smaller number of variables always makes for a simpler process.

Simply stated, in principal components analysis we take linear combinations of all of the original variables so that we may reduce the number of variables from p to m, where the number m of principal components is less than p. Further, the method allows us to take the principal components and use them to gain information about the entire data set via the correlation between the principal components and the original variables. Matrices of correlations or loadings matrices show which principal component each variable is most highly associated with. The first principal component is determined by the linear combination that has the highest variance. Variance measures the diffusion of the data.

After the first principal component is obtained, we must determine whether or not it provides a sufficient amount of or all of the information displayed by the data set. If it does not provide adequate information, then the linear combination that displays the highest variance accounted for after the first principal component’s variation is removed is designated as the second principal component. This process goes on until an ample amount of information/variance is accounted for. Each principal component accounts for a dimension and the process continues only on the remaining dimensions. Designating a dimension as a principal component often reveals information about correlations between remaining variables which at first was not readily available.

The main objective of Principal Components Analysis is to locate the linear combinations $y_i = \ell_i^T x$ with the greatest variance. We want $\mathrm{Var}(y_i) = \ell_i^T \Sigma \ell_i$, where $\Sigma$ is the covariance matrix, to be the maximum among all normalized coefficient vectors $\ell_i$ (that is, $\ell_i^T \ell_i = 1$). This result is achieved by way of Lagrange multipliers. Taking the partial derivative with respect to $\ell_i$ of $\mathrm{Var}(y_i) - \lambda(\ell_i^T \ell_i - 1)$, where $\lambda$ is the Lagrange multiplier, results in the equation

$(\Sigma - \lambda I)\,\ell_i = 0,$

where $\ell_i$ is not equal to the zero vector. From the above equations it can easily be verified that $\lambda$ is a characteristic root of $\Sigma$ and that $\lambda_i$ is equal to the variance of $y_i$, where $\lambda_1 > \lambda_2 > \dots > \lambda_p$ are the characteristic roots. Note that they are positive. The characteristic vector corresponding to $\lambda_1$, the root that accounts for the maximum variance, is $\ell_1$. The percentage of variance that any particular principal component accounts for can be calculated by dividing the variance of that component by the sum of all the variances, i.e.

$\lambda_i \Big/ \sum_{j=1}^{p} \lambda_j.$

We use the high correlations between the principal components and the original variables to define which components we will utilize and which ones we will discard. One device that assists us in this decision process is a scree plot. Scree plots are graphs of the variance (eigenvalue) of each principal component in descending order. A point called an “elbow” is designated. Below this point is where the graph becomes somewhat horizontal. Any principal components whose variances lie above this point are kept and the others are discarded. The original variables that are highly correlated with each principal component that is kept determine what the label of that particular component will be.
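The eigenvalue computation described above can be sketched in closed form for a two-variable case. This is a minimal illustration only: the covariance values are invented, not taken from any crime data set.

```python
import math

# Hedged sketch: principal components of a 2x2 covariance matrix,
# computed in closed form (illustrative numbers, not study data).
cov = [[4.0, 1.2],
       [1.2, 1.0]]

a, b, c = cov[0][0], cov[0][1], cov[1][1]

# Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
disc = math.sqrt((a - c) ** 2 + 4 * b ** 2)
lam1 = (a + c + disc) / 2   # variance of the first principal component
lam2 = (a + c - disc) / 2   # variance of the second

# Proportion of total variance accounted for by the first PC,
# i.e. lambda_1 / (lambda_1 + lambda_2)
explained = lam1 / (lam1 + lam2)
print(round(explained, 3))
```

Note that the two eigenvalues sum to the total variance (the trace of the covariance matrix), which is exactly why the ratio above reads as "percentage of variance explained".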

Methods of Risk Analysis and Management

RISK ANALYSIS METHODS

Risk management can be divided into four steps: risk identification, risk assessment, risk control, and risk records. In recent years, studies have mostly focused on risk assessment. Risk assessment analyzes and measures the size of risks in order to provide information for risk control. Four steps are included in risk assessment:

1. Build an appropriate mathematical model according to the results of risk identification.
2. Obtain the necessary basic information or data through expert surveys, historical records, extrapolation, etc., and then choose appropriate mathematical methods to quantify that information.
3. Choose proper models and analysis methods to process the data, and adjust the models according to the specific circumstances.
4. Determine the size of risks according to certain criteria.

In risk assessment, extrapolation, subjective estimation, probability distribution analysis and other methods are used to obtain basic data or information. Further data analysis often uses the following theories and methods: the analytic hierarchy process, fuzzy logic analysis, Monte Carlo simulation, grey system theory, artificial neural networks, fault tree analysis, Bayesian theory, influence diagrams and Markov process theory.
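Monte Carlo simulation, one of the methods named above, can be sketched very simply. The normal cost model, its parameters, and the budget figure below are all illustrative assumptions, not values from any real assessment.

```python
import random

random.seed(42)

# Hedged sketch: Monte Carlo estimate of the probability that a
# project's cost exceeds a 120-unit budget, assuming cost ~ N(100, 15).
def prob_overrun(mean=100.0, sd=15.0, budget=120.0, trials=100_000):
    overruns = sum(1 for _ in range(trials)
                   if random.gauss(mean, sd) > budget)
    return overruns / trials

# Should land roughly near the analytic value P(Z > 4/3) ~ 0.09
print(prob_overrun())
```

In a real assessment the distribution and its parameters would come from the expert surveys and historical records described in step 2.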

These methods can be divided into qualitative analysis and quantitative analysis.

Qualitative analysis:
1. Fault Tree Analysis
Fault Tree Analysis (FTA) can be used for qualitative analysis of risk and can also be used for quantitative analysis. It is mainly used for reliability and safety analysis of large, complicated systems, and it is an effective method for unified reliability and safety analysis across hardware, software, environment and human factors. FTA draws out the various possible ways a system can fail, from the whole down to its parts, in a tree structure in which the failures of individual components and of the composed system are connected.

The difference between the two uses is that a quantitative fault tree must be well structured and requires the same rigorous logic as a formal fault tree, while a qualitative fault tree does not. Fault tree analysis starts from the event that is hoped not to happen (called the top event) and analyses, one level down at a time, the direct causes of that event (called lower events); the analysis results are obtained according to the logical relationships between the upper and lower events.
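The quantitative side of fault tree evaluation reduces to combining basic-event probabilities through AND and OR gates. The component names and probabilities below are invented for illustration, and independence of the events is assumed.

```python
# Hedged sketch: quantitative fault tree gates, assuming independent
# basic events with illustrative failure probabilities.
def and_gate(*probs):
    # Top event requires ALL inputs to occur
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    # Top event occurs if ANY input occurs
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

pump_fails, valve_fails, backup_fails = 0.01, 0.02, 0.05

# Top event: (pump fails OR valve fails) AND the backup also fails
top = and_gate(or_gate(pump_fails, valve_fails), backup_fails)
print(top)
```

Working upward gate by gate like this is exactly the "from whole to part" tree structure described above, read in reverse.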

2. Event Tree Analysis

Event tree analysis (ETA), also known as decision tree analysis, is another important method of risk analysis. Starting from a given initiating event in a system, it analyses the series of outcomes the event may cause, and thereby evaluates the possible states of the system. An event tree lays out all the possible ways an initiating event can develop; every node of the event tree (except the top event) represents a measure intended to prevent the accident, and each has a binary outcome (success or failure). The event tree thus illustrates the accident sequence groups arising from the various causes. Through the intermediate steps in an accident sequence group, one can organize the complex relationship between the initiating event and the risk-reduction measures, identify the accident sequence groups, and calculate the probability of each key sequence of events.
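The sequence probabilities described above are simple products along each branch. The initiating frequency and the two safety functions (alarm, sprinkler) below are illustrative assumptions, not a real system.

```python
# Hedged sketch: event tree sequence probabilities for one initiating
# event and two binary safety functions (illustrative numbers).
initiating_freq = 1e-3      # initiating events per year (assumed)
p_alarm = 0.95              # probability the alarm works
p_sprinkler = 0.90          # probability the sprinkler works

sequences = {
    "alarm ok, sprinkler ok":       initiating_freq * p_alarm * p_sprinkler,
    "alarm ok, sprinkler fails":    initiating_freq * p_alarm * (1 - p_sprinkler),
    "alarm fails, sprinkler ok":    initiating_freq * (1 - p_alarm) * p_sprinkler,
    "alarm fails, sprinkler fails": initiating_freq * (1 - p_alarm) * (1 - p_sprinkler),
}

# The sequence probabilities of a complete tree sum back to the
# initiating frequency
print(sum(sequences.values()))
```

The worst sequence (both measures fail) is the one a risk-reduction programme would target first.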

3. Cause-Consequence Analysis

Cause-consequence analysis (CCA) is a combination of fault tree analysis and event tree analysis: it uses cause analysis (fault tree analysis) together with consequence analysis (event tree analysis). CCA aims to identify the chains of events leading to unexpected consequences; according to the probabilities of occurrence of the different events in the CCA diagram, the probabilities of the different outcomes can be calculated, and the risk level of the system can then be determined.

4. Preliminary Risk Analysis

Preliminary risk analysis, or hazard analysis, is a qualitative technique which involves a disciplined analysis of the event sequences which could transform a potential hazard into an accident. In this technique, the possible undesirable events are identified first and then analyzed separately. For each undesirable event or hazard, possible improvements or preventive measures are then formulated.

This method provides a basis for determining which hazard categories and analysis methods are most suitable. It has proved valuable in work environments where activities lacking safety measures can be readily identified.

5. Hazard and Operability studies (HAZOP)

The HAZOP technique originated in the early 1970s with Imperial Chemical Industries Ltd. HAZOP was first defined as the application of a formal, systematic, critical examination of the process and engineering intentions of new or existing facilities, to assess the hazard potential arising from deviations from design specifications and their consequential effects on the facility as a whole.

The technique is usually performed using a set of guidewords: NO/NOT, MORE OF/LESS OF, AS WELL AS, PART OF, REVERSE and OTHER THAN. Using these guidewords, scenarios that may result in a hazard or an operational problem are identified. Considering the possible flow problems in a process line, the guideword MORE OF corresponds to a high flow rate, while LESS OF corresponds to a low flow rate. The consequences of the hazard, and measures to reduce the frequency with which it will occur, are then discussed. The technique is widely accepted in the process industries and is generally regarded as an effective tool for plant safety and operability improvements. Detailed procedures on how to perform the technique are available in the relevant literature.

Quantitative Analysis:
Fault Tree Analysis

This method is explained under the qualitative analysis above.

Expected value

Expected value is each possible outcome multiplied by the probability of its occurrence, summed over all outcomes. The expected value indicates the long-run average result a business can anticipate from a decision aimed at a target.
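As a minimal sketch of that definition, with invented probabilities and payoffs:

```python
# Hedged sketch: expected value as the probability-weighted sum of
# outcomes (illustrative payoffs for a business decision).
outcomes = [
    (0.5, 200_000),    # (probability, payoff): strong demand
    (0.3, 50_000),     # moderate demand
    (0.2, -100_000),   # project fails
]

expected = sum(p * payoff for p, payoff in outcomes)
print(expected)
```

A positive expected value does not by itself make the decision safe; the 20% chance of a 100,000 loss is exactly the kind of outcome the risk-absorption discussion earlier is about.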

Sensitivity analysis

Sensitivity analysis shows how the outcome changes in response to a change in a particular variable. One can derive results from optimistic, most likely and pessimistic values. Examples of inputs for sensitivity analysis are material and labour costs, which can fluctuate considerably.
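A one-variable sensitivity analysis on material cost can be sketched as follows; the revenue, labour cost and the three scenario values are illustrative assumptions.

```python
# Hedged sketch: sensitivity of project profit to material cost under
# optimistic / most likely / pessimistic assumptions (invented figures).
revenue = 500_000
labor_cost = 150_000

def profit(material_cost):
    return revenue - labor_cost - material_cost

scenarios = {
    "optimistic": 120_000,
    "most likely": 150_000,
    "pessimistic": 200_000,
}

for name, cost in scenarios.items():
    print(name, profit(cost))
```

Varying one input at a time while holding the others fixed is what distinguishes this simple form of sensitivity analysis from a full Monte Carlo treatment.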

Employee Performance Analysis

Project Outline:

This research is about employee performance in an organization. Data on several factors, such as employee productivity, customer satisfaction scores, accuracy scores, and the experience and age of employees, is taken into consideration. Statistical methods are used to identify whether the age and experience of employees have any impact on productivity, customer satisfaction and accuracy.

Theoretical Framework:

XYZ Corporation, operating out of Illinois, US, wants to find out whether the age and experience of employees have an impact on their performance. It has hired an external consultant to study the impact of these two factors (age and experience) on the performance metrics of its employees. Based on the results of the research conducted by this external consultant, XYZ Corporation will design a strategy for recruiting the talent likely to perform best.

Design and Methodology:

The design and methodology used by the external consultant included identifying the performance factors common across the different businesses within XYZ Corporation. The performance measures common to all businesses included:

Customer Satisfaction Scores
Accuracy Scores
Productivity

The consultants decided to study the impact of age of employees and their experience on the above factors by using statistical methods.

Details on participants and sampling methods:

Sampling Methods:

Sampling is the process of selecting a small number of elements from a larger defined target group of elements. The population is the total group of elements we want to study; the sample is the subgroup of the population we actually study. A sample here would mean a group of n employees chosen randomly from an organization with population N. Sampling is done in situations like:

We sample when the process involves destructive testing, e.g. taste tests, car crash tests, etc.
We sample when there are constraints of time and costs
We sample when the populations cannot be easily captured

Sampling is NOT done in situations like:

We cannot sample when the events and products are unique and cannot be replicated

Sampling can be done by using several methods including: Simple random sampling, Stratified random sampling, Systematic sampling and Cluster sampling. These are Probability Sampling Methods. Sampling can also be done using methods such as Convenience sampling, Judgment sampling, Quota sampling and Snowball sampling. These are non-probability methods of sampling.

Simple random sampling is a method of sampling in which every unit has an equal chance of being selected. Stratified random sampling is a method in which strata (groups) are created and units are then picked randomly from each. Systematic sampling is a method in which every nth unit is selected from the population. Cluster sampling is a method in which clusters are randomly selected and the units within them are sampled.

For the non-probability methods, Convenience sampling relies upon convenience and access. Judgment sampling relies upon belief that participants fit characteristics. Quota sampling emphasizes representation of specific characteristics. Snowball sampling relies upon respondent referrals of others with like characteristics.
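Two of the probability methods above, simple random sampling and systematic sampling, can be sketched directly. The population size and sample size of 75 mirror the study's setup, but the employee IDs are illustrative.

```python
import random

random.seed(1)

# Hedged sketch: simple random vs. systematic sampling from an
# assumed population of 300 employee IDs.
population = list(range(1, 301))   # N = 300
n = 75                             # desired sample size

# Simple random sampling: every unit has an equal chance of selection
srs = random.sample(population, n)

# Systematic sampling: pick a random start, then every k-th unit
k = len(population) // n           # sampling interval
start = random.randrange(k)
systematic = population[start::k]

print(len(srs), len(systematic))
```

Systematic sampling is cheaper to administer from an ordered employee roster, but it behaves like simple random sampling only if the ordering carries no periodic pattern.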

In our research, the consulting organization used a simple random sampling method to conduct the study: it chose about 75 random employees and gathered data on their age, experience, Customer Satisfaction scores, Accuracy scores and Productivity scores.

The employees were divided into three age groups, namely 20 – 30 years, 30 – 40 years and 40 – 50 years. Similarly, they were also divided into three experience groups, namely 0 – 10 years, 10 – 20 years and 20 – 30 years.

Data Analysis:

Below are the different data analysis options used by the consultant:

Impact of Age on Accuracy
Impact of Experience on Accuracy
Impact of Age on Customer Satisfaction
Impact of Experience on Customer Satisfaction
Impact of Age on Productivity
Impact of Experience on Productivity

For each of the above analyses, we need to use hypothesis testing methods. Hypothesis testing tells us whether there is a statistically significant difference between data sets, i.e. whether we should consider them to represent different distributions. The differences that can be detected using hypothesis testing are:

Continuous Data
Difference in Average
Difference in Variation
Discrete Data
Difference in Proportion Defective

We follow the below steps for Hypothesis testing:

Step 1 : Determine appropriate Hypothesis test
Step 2 : State the Null Hypothesis Ho and Alternate Hypothesis Ha
Step 3 : Calculate Test Statistics / P-value against table value of test statistic
Step 4 : Interpret results – Accept or reject Ho

The mechanism of Hypothesis testing involves the following:

Ho = Null Hypothesis – There is No statistically significant difference between the two groups
Ha = Alternate Hypothesis – There is statistically significant difference between the two groups

We also have different types of errors that can be caused if we are using hypothesis testing. The errors are as noted below:

Type I Error – P (Reject Ho when Ho is true) = α
Type II Error – P (Accept Ho when Ho is false) = β

P-value – a statistical measure which indicates the probability of making an α (Type I) error. The value ranges between 0 and 1, and α should be specified before the hypothesis test is conducted. We normally work with a 5% alpha risk: if the p-value is less than 0.05 we reject the null hypothesis and accept the alternate hypothesis (a statistically significant difference between the groups); if the p-value is greater than 0.05 we fail to reject the null hypothesis, and there is no evidence of a difference between the groups.
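That decision rule can be stated as a tiny helper; the 0.05 cutoff is the 5% alpha risk already assumed in the text, and the p-values passed in are illustrative.

```python
# Hedged sketch: the p-value decision rule with alpha fixed in advance.
def decide(p_value, alpha=0.05):
    # Reject the null hypothesis only when p < alpha
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.003))   # strong evidence of a group difference
print(decide(0.41))    # no evidence of a difference
```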

We will also discuss about the types of hypothesis testing:

1-Sample t-test: It’s used when we have Normal Continuous Y and Discrete X. It is used for comparing a population mean against a given standard. For example: Is the mean Turn Around Time ≤ 15 minutes?
2-Sample t-test: It’s used when we have Normal Continuous Y and Discrete X. It is used for comparing means of two different populations. For example: Is the mean performance of morning shift = mean performance of night shift.
ANOVA: It’s used when we have Normal Continuous Y and Discrete X. It is used for comparing the means of more than two populations. For example: Is the mean performance of staff A = mean performance of staff B = mean performance of staff C.
Homogeneity Of Variance: It’s used when we have Normal Continuous Y and Discrete X. It is used for comparing the variance of two or more than two populations. For example: Is the variation of staff A = variation of staff B = variation of staff C.
Mood’s Median Test: It’s used when we have Non-normal Continuous Y and Discrete X. It is used for Comparing the medians of two or more than two populations. For example: Is the median of staff A = median of staff B = median of staff C.
Simple Linear Regression: It’s used when we have Continuous Y and Continuous X. It is used to see how output (Y) changes as the input (X) changes. For example: If we need to find out how staff A’s accuracy is related to his number of years spent in the process.
Chi-square Test of Independence: It’s used when we have Discrete Y and Discrete X. It is used to see how output counts (Y) from two or more sub-groups (X) differ. For example: If we want to find out whether defects from morning shift are significantly different from defects in the evening shift.
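One-way ANOVA, the tool used in every analysis below, can be sketched from first principles. The three groups here are toy data, not the study's employee scores.

```python
# Hedged sketch: one-way ANOVA F statistic computed by hand
# (illustrative groups, not the consultant's data).
def one_way_anova_f(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares: spread of group means around the
    # grand mean, weighted by group size
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their
    # own group mean
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    msb = ssb / (k - 1)           # mean square between (df = k - 1)
    msw = ssw / (n_total - k)     # mean square within  (df = N - k)
    return msb / msw

groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(one_way_anova_f(groups))
```

The F value is then compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain the p-values reported in the Minitab outputs that follow.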

Let’s look at each of the analysis for our research:

Impact of Age on Accuracy

Practical Problem: Is Accuracy impacted by the Age of Employees?

Hypothesis: H0: Accuracy is independent of the Age of Employees. H1: Accuracy is impacted by the Age of Employees.

Statistical Tool Used: One-Way ANOVA

Conclusion: p-value < 0.05 indicates that the performance measure of accuracy is impacted by the age factor

One-way ANOVA: Accuracy versus Age Bucket

Source DF SS MS F P

Age Bucket 2 0.50616 0.25308 67.62 0.000

Error 72 0.26946 0.00374

Total 74 0.77562

S = 0.06118 R-Sq = 65.26% R-Sq(adj) = 64.29%

Individual 95% CIs for the mean, based on pooled StDev:

Level N Mean StDev
20 – 30 years 26 0.75448 0.06376
30 – 40 years 26 0.85078 0.07069
40 – 50 years 23 0.95813 0.04416

Pooled StDev = 0.06118

Boxplot of Accuracy by Age Bucket

Conclusion: P-value of the above analysis < 0.05 which indicates that we reject the null hypothesis and thus, the performance measure of accuracy is impacted by age of employees. As the age increases, we observe that the accuracy of the employees also increases.

Impact of Experience on Accuracy

Practical Problem: Is Accuracy impacted by the Experience of Employees?

Hypothesis: H0: Accuracy is independent of the Experience of Employees. H1: Accuracy is impacted by the Experience of Employees.

Statistical Tool Used: One-Way ANOVA

Conclusion: p-value < 0.05 indicates that the performance measure of accuracy is impacted by the experience factor

One-way ANOVA: Accuracy versus Experience Bucket

Source DF SS MS F P

Experience Bucket 2 0.53371 0.26685 79.42 0.000

Error 72 0.24191 0.00336

Total 74 0.77562

S = 0.05796 R-Sq = 68.81% R-Sq(adj) = 67.94%

Individual 95% CIs for the mean, based on pooled StDev:

Level N Mean StDev
0 – 10 years 24 0.74403 0.05069
10 – 20 years 23 0.84357 0.05354
20 – 30 years 28 0.94696 0.06660

Pooled StDev = 0.05796

Boxplot of Accuracy by Experience Bucket

Conclusion: P-value of the above analysis < 0.05 which indicates that we reject the null hypothesis and thus, the performance measure of accuracy is impacted by experience of employees. As the experience increases, we observe that the accuracy of the employees also increases.

Impact of Age on Customer Satisfaction

Practical Problem: Is the Customer Satisfaction Score impacted by the Age of Employees?

Hypothesis: H0: The Customer Satisfaction Score is independent of the Age of Employees. H1: The Customer Satisfaction Score is impacted by the Age of Employees.

Statistical Tool Used: One-Way ANOVA

Conclusion: p-value < 0.05 indicates that the performance measure of Customer Satisfaction Score is impacted by the age factor

One-way ANOVA: Customer Satisfaction versus Age Bucket

Source DF SS MS F P

Age Bucket 2 49.51 24.75 18.92 0.000

Error 72 94.23 1.31

Total 74 143.74

S = 1.144 R-Sq = 34.44% R-Sq(adj) = 32.62%

Individual 95% CIs for the mean, based on pooled StDev:

Level N Mean StDev
20 – 30 years 26 6.906 1.164
30 – 40 years 26 8.041 1.156
40 – 50 years 23 8.907 1.107

Pooled StDev = 1.144

Boxplot of Customer Satisfaction by Age Bucket

Conclusion: P-value of the above analysis < 0.05 which indicates that we reject the null hypothesis and thus, the performance measure of Customer Satisfaction Score is impacted by age of employees. As the age increases, we observe that the Customer Satisfaction Score of the employees also increases.

Impact of Experience on Customer Satisfaction

Practical Problem: Is the Customer Satisfaction Score impacted by the Experience of Employees?

Hypothesis: H0: The Customer Satisfaction Score is independent of the Experience of Employees. H1: The Customer Satisfaction Score is impacted by the Experience of Employees.

Statistical Tool Used: One-Way ANOVA

Conclusion: p-value < 0.05 indicates that the performance measure of Customer Satisfaction Score is impacted by the experience factor

One-way ANOVA: Customer Satisfaction versus Experience Bucket

Source DF SS MS F P

Experience Bucket 2 51.20 25.60 19.92 0.000

Error 72 92.54 1.29

Total 74 143.74

S = 1.134 R-Sq = 35.62% R-Sq(adj) = 33.83%

Individual 95% CIs for the mean, based on pooled StDev:

Level N Mean StDev
0 – 10 years 24 7.035 1.277
10 – 20 years 23 7.570 0.922
20 – 30 years 28 8.948 1.160

Pooled StDev = 1.134

Boxplot of Customer Satisfaction by Experience Bucket

Conclusion: P-value of the above analysis < 0.05 which indicates that we reject the null hypothesis and thus, the performance measure of Customer Satisfaction Score is impacted by experience of employees. As the experience increases, we observe that the Customer Satisfaction Score of the employees also increases.

Impact of Age on Productivity

Practical Problem: Is Productivity impacted by the Age of Employees?

Hypothesis: H0: Productivity is independent of the Age of Employees. H1: Productivity is impacted by the Age of Employees.

Statistical Tool Used: One-Way ANOVA

Conclusion: p-value < 0.05 indicates that the performance measure of Productivity is impacted by the age factor

One-way ANOVA: Productivity versus Age Bucket

Source      DF       SS       MS       F      P
Age Bucket   2  0.74389  0.37194  194.56  0.000
Error       72  0.13765  0.00191
Total       74  0.88153

S = 0.04372   R-Sq = 84.39%   R-Sq(adj) = 83.95%

Individual 95% CIs for the mean, based on pooled StDev (plot omitted):

Level          N   Mean     StDev
20 – 30 years  26  0.93959  0.04287
30 – 40 years  26  0.81511  0.05831
40 – 50 years  23  0.69291  0.01747

Pooled StDev = 0.04372

Boxplot of Productivity by Age Bucket

Conclusion: The p-value of the above analysis is < 0.05, which indicates that we reject the null hypothesis; thus the performance measure of Productivity is impacted by the age of employees. As age increases, we observe that the Productivity of the employees decreases.

Impact of Experience on Productivity

Practical Problem

Hypothesis

Statistical Tool Used

Conclusion

Is Productivity impacted by Experience of Employees

H0: Productivity is independent of the Experience of Employees

H1: Productivity is impacted by Experience of Employees

One-Way ANOVA

p-value < 0.05 indicates that performance measure of Productivity is impacted by experience factor

One-way ANOVA: Productivity versus Experience Bucket

Source             DF       SS       MS       F      P
Experience Bucket   2  0.74024  0.37012  188.61  0.000
Error              72  0.14129  0.00196
Total              74  0.88153

S = 0.04430   R-Sq = 83.97%   R-Sq(adj) = 83.53%

Individual 95% CIs for the mean, based on pooled StDev (plot omitted):

Level          N   Mean     StDev
0 – 10 years   24  0.94474  0.03139
10 – 20 years  23  0.83120  0.05754
20 – 30 years  28  0.70599  0.04118

Pooled StDev = 0.04430

Boxplot of Productivity by Experience Bucket

Conclusion: The p-value of the above analysis is < 0.05, which indicates that we reject the null hypothesis; thus the performance measure of Productivity is impacted by the experience of employees. As experience increases, we observe that the Productivity of the employees decreases.

Conclusion of the Analysis:

As Age and Experience increase, the Accuracy and Customer Satisfaction Scores of Employees increase
As Age and Experience increase, the Productivity of Employees decreases
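All of the hypothesis tests above rely on one-way ANOVA. As a minimal sketch of that procedure only (using scipy.stats.f_oneway with synthetic groups drawn to mimic the reported Customer Satisfaction means by experience bucket, not the study's own self-created data):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic Customer Satisfaction scores for three experience buckets,
# drawn to mimic the group means reported above (assumed, illustrative data).
g1 = rng.normal(7.0, 1.1, 24)   # 0 - 10 years
g2 = rng.normal(7.6, 1.1, 23)   # 10 - 20 years
g3 = rng.normal(8.9, 1.1, 28)   # 20 - 30 years

F, p = f_oneway(g1, g2, g3)     # one-way ANOVA across the three groups
reject_h0 = p < 0.05            # decision at the 5% significance level
print(round(F, 1), reject_h0)
```

With group means this far apart relative to the within-group spread, the F statistic is large and the null hypothesis is rejected, mirroring the pattern in the Minitab output above.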

Bibliography:

The data used in this analysis is self-created data using statistical software.

Research Schedule (Gantt Chart) of the Project:

Quantitative Reasoning and Analysis: An Overview

Frances Roulet

State the statistical assumptions for this test.

Frankfort-Nachmias & Nachmias (2008) refer to statistical inference as the procedure of drawing conclusions about population characteristics based on a sample result. To understand some of these characteristics of the population, a random sample is taken and its properties are studied; the analysis concludes by indicating whether the sample is representative of the population.

An estimator function must be chosen for the population characteristic under study. Once the estimator function is applied to the sample, the result is an estimate. Using the appropriate statistical test, it can be determined whether this estimate is due only to chance; the hypothesis that it is, is called the null hypothesis and symbolized H0 (Frankfort-Nachmias & Nachmias, 2008). This is the hypothesis that is tested directly, and if it is rejected as unlikely, the research hypothesis is supported. The complement of the null hypothesis is known as the alternative hypothesis, symbolized Ha. The two hypotheses are complementary; therefore, it is sufficient to define the null hypothesis.

According to Frankfort-Nachmias & Nachmias (2008), the need for these two complementary hypotheses arises out of logical necessity. The null hypothesis supports negative inference: to avoid the fallacy of affirming the consequent, the researcher is required to eliminate false hypotheses rather than accept true ones.

Once the null hypothesis has been formulated, the researcher proceeds to test it against the sample result. The investigator tests the null hypothesis by comparing the sample result to a statistical model that provides the probability of observing such a result. This statistical model is called the sampling distribution (Frankfort-Nachmias & Nachmias, 2008). The sampling distribution allows the researcher to estimate the probability of obtaining the sample result. The threshold probability is known as the level of significance, symbolically designated α (alpha); it is also the probability of rejecting a true null hypothesis, that is, rejecting H0 even though it is true (a false positive), which constitutes a Type I error. Normally, a significance level of α = .05 is used (though other levels, such as α = .01, may be used at times). This means that we are willing to tolerate Type I errors up to 5% of the time. The probability value (p-value) of the statistic used to test the null hypothesis is then compared with α: the null hypothesis is rejected when p ≤ α.

The most common approach for testing a null hypothesis is to select a statistic based on a sample of fixed size, calculate the value of the statistic for the sample, and then reject the null hypothesis if and only if the statistic falls in the critical region. The statistical test may be one-tailed or two-tailed. A one-tailed hypothesis test specifies a direction for the statistical test: extreme results lead to the rejection of the null hypothesis and can be located at either tail (Zaiontz, 2015). An example of this is observed in the following graphic:

Figure 1 – Critical region is the right tail

The critical region here is the right (or upper) tail. It is quite possible to have one-sided tests where the critical region is the left (or lower) tail.

In a two-tailed test, the region of rejection is located in both the left and right tails; a two-tailed hypothesis test does not specify a direction of the test.

An example of this is illustrated graphically as follows:

Figure 2 – Critical region is the left tail.

This possibility is handled as a two-tailed test, with the critical region consisting of both the upper and lower tails. The null hypothesis is rejected if the test statistic falls in either side of the critical region, and to achieve a significance level of α, the critical region in each tail must have size α/2.
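The one- and two-tailed decision rules can be sketched numerically. The snippet below is an illustration only, using scipy.stats.norm and an arbitrary z value of 1.75 (not a statistic from this paper); it shows how the same statistic can be significant one-tailed but not two-tailed at α = .05:

```python
from scipy.stats import norm

def p_values(z):
    """Return (right-tailed, left-tailed, two-tailed) p-values for a z statistic."""
    right = norm.sf(z)           # P(Z >= z): critical region in the right tail
    left = norm.cdf(z)           # P(Z <= z): critical region in the left tail
    two = 2 * norm.sf(abs(z))    # both tails, each of size alpha/2
    return right, left, two

z = 1.75
right, left, two = p_values(z)
print(round(right, 4), round(two, 4))  # 0.0401 0.0801
```

Here the right-tailed p-value (about .04) falls below α = .05 while the two-tailed p-value (about .08) does not, because the two-tailed test splits α between the tails.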

The statistical power is 1 − β: the probability of rejecting a false null hypothesis. While a significance level of α = .05 is typically used for Type I error, the target for β is generally .20 or .10, so .80 or .90 is used as the target value for power (Zaiontz, 2015).

When reading about effect size, it is important to understand that an effect is the portion of the variance explained by the statistical model, as opposed to the error, which is the variance not explained by the model. The effect size is a standardized measure of the magnitude of an effect; because it is standardized, effects can be compared across studies with different variables and different scales. For example, the difference in means between two groups can be expressed in terms of the standard deviation: an effect size of 0.5 signifies that the difference between the means is half a standard deviation. The most common measures of effect size are Cohen’s d, Pearson’s correlation coefficient r, and the odds ratio, though other measures are also used.

Cohen’s d is a statistic independent of the sample size and is defined as d = (m1 − m2) / σpooled, where m1 and m2 represent the two means and σpooled is a combined value for the standard deviation (Zaiontz, 2015).

The effect size given by d is normally viewed as small, medium or large as follows:

d = 0.20 – small effect

d = 0.50 – medium effect

d = 0.80 – large effect

In a single-sample hypothesis test of the mean, d takes the value d = (m − µ0) / s, where m is the sample mean, µ0 is the hypothesized population mean, and s is the sample standard deviation.

The main goal is to provide a solid sense of whether a difference between two groups is meaningfully large, independent of whether the difference is statistically significant.
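As a sketch of the two-group formula above, the helper below (cohens_d is a hypothetical name, not from the source) computes Cohen's d with a pooled standard deviation; the example plugs in the group statistics from the independent-samples test reported later in this paper (means 45.20 and 22.07, SDs 24.969 and 27.136, n = 15 per group):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Group statistics taken from the SPSS output reported later in this paper.
d = cohens_d(45.20, 24.969, 15, 22.07, 27.136, 15)
print(round(d, 2))  # 0.89, a large effect by the benchmarks above
```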

On the other hand, t Test effect size indicates whether or not the difference between two groups’ averages is large enough to have practical meaning, whether or not it is statistically significant.

A t test asks whether a difference between two groups’ averages is unlikely to have occurred because of random chance in sample selection. The difference is more likely to be meaningful and “real” if:

the difference between the averages is large,
the sample size is large, and,
responses are consistently close to the average values and not widely spread out (the standard deviation is low).

A statistically significant t test result is one in which a difference between two groups is unlikely to have occurred because the sample happened to be atypical. Statistical significance is determined by the size of the difference between the group averages, the sample size, and the standard deviations of the groups. For practical purposes, statistical significance suggests that the two larger populations from which the samples were drawn are “actually” different (Zaiontz, 2015).

The t test’s statistical significance and the t test’s effect size are the two primary outputs of the t test. The t statistic is used to test hypotheses about an unknown population mean (µ) when the value of the population variance (σ²) is unknown; it uses the sample variance (s²) as an estimate of the population variance (σ²) (Zaiontz, 2015).

In the t test, the following assumptions must be met in order to justify the statistical test:

Sample observations must be independent; in other words, there is no relationship between or among any of the observations (scores) in the sample.
The population from which the sample has been obtained must be normally distributed.
The dependent variable must be continuous.
The dependent variable has a normal distribution, with the same variance, σ², in each group (as though the distribution for group A were merely shifted over to become the distribution for group B, without changing shape).


Note: σ, “sigma”, the scale parameter of the normal distribution, also known as the population standard deviation, is easy to see in a picture of a normal curve. Located one σ to the left or right of the mean are the two places where the curve changes from convex to concave (where the second derivative is zero) (Zaiontz, 2015).

The data set selected was from lesson 24.

The independent variable: Talk.

The dependent variable: Stress.

Hypotheses.

Null hypothesis: H0: µ1 − µ2 = 0;

There is no difference in talk between the low- and high-stress groups.

For Levene’s Test for equality of variances, the null hypothesis is H0: σ1² = σ2² (the group variances are equal).

Alternative hypothesis: Ha: µ1 − µ2 ≠ 0;

There is a difference in talk between the low- and high-stress groups.

For Levene’s Test for equality of variances, the alternative hypothesis is Ha: σ1² ≠ σ2² (the group variances differ).

Statistical Report

The group statistics for the independent-samples test indicated that the low-stress group (n = 15, M = 45.20, SD = 24.969, SE = 6.447) scored higher on talk than the high-stress group (n = 15, M = 22.07, SD = 27.136, SE = 7.006).

Group Statistics

      stress       N   Mean   Std. Deviation   Std. Error Mean
talk  Low Stress   15  45.20  24.969           6.447
      High Stress  15  22.07  27.136           7.006

The sample size for these results is N = 30, and the test statistic is t(28) = 2.430 with a two-tailed p-value of .022 < .05, which provides evidence to reject the null hypothesis: the result is statistically significant. (The value of .881 is the significance of Levene’s test; because .881 > .05, the variances, or standard deviations, are in all probability the same across groups.)

Independent Samples Test

                                     Levene’s Test    t-test for Equality of Means
                                     F      Sig.      t      df      Sig.      Mean    Std. Error  95% CI  95% CI
                                                                     (2-tail)  Diff.   Diff.       Lower   Upper
talk  Equal variances assumed        .023   .881      2.430  28      .022      23.133  9.521       3.630   42.637
      Equal variances not assumed                     2.430  27.808  .022      23.133  9.521       3.624   42.643

Levene’s Test for equality of variances reports p = .881 > .05, so the test is not significant and it is assumed that the variances are equal; there is no evidence that the variances of the two groups differ from each other. For the t test itself, comparing the two-tailed p-value of .022 to .05 shows that there is evidence to reject the null hypothesis of equal means: H0 is rejected.
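The equal-variances (pooled) t test can be reproduced directly from the summary statistics in the Group Statistics table. As a sketch using scipy.stats.ttest_ind_from_stats (the SPSS data file itself is not needed):

```python
from scipy.stats import ttest_ind_from_stats

# Pooled-variance t test from the group statistics reported above
# (low stress: M=45.20, SD=24.969; high stress: M=22.07, SD=27.136; n=15 each).
t, p = ttest_ind_from_stats(mean1=45.20, std1=24.969, nobs1=15,
                            mean2=22.07, std2=27.136, nobs2=15,
                            equal_var=True)
print(round(t, 2), round(p, 3))  # 2.43 0.022, matching the SPSS table
```

Because p = .022 < .05, the null hypothesis of equal means is rejected, consistent with the t-test row of the SPSS output.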

Graph.

SPSS syntax and output files.

T-Test

T-TEST GROUPS=stress(1 2)

/MISSING=ANALYSIS

/VARIABLES=talk

/CRITERIA=CI(.95).

Notes

Output Created: 28-JAN-2015 00:27:21
Input Data: C:\Users\Frances Roul\AppData\Local\Temp\Temp1_new_datasets_7e-10.zip\new_datasets_7e\new_datasets_7e\Lesson 24 Data File 1.sav
Active Dataset: DataSet3
N of Rows in Working Data File: 30
Definition of Missing: User-defined missing values are treated as missing.
Cases Used: Statistics for each analysis are based on the cases with no missing or out-of-range data for any variable in the analysis.
Syntax: T-TEST GROUPS=stress(1 2) /MISSING=ANALYSIS /VARIABLES=talk /CRITERIA=CI(.95).
Processor Time: 00:00:00.02
Elapsed Time: 00:00:00.01

[DataSet3] C:\Users\Frances Roul\AppData\Local\Temp\Temp1_new_datasets_7e-10.zip\new_datasets_7e\new_datasets_7e\Lesson 24 Data File 1.sav

GGraph

* Chart Builder.

GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=stress talk MISSING=LISTWISE REPORTMISSING=NO
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: stress=col(source(s), name("stress"), unit.category())
  DATA: talk=col(source(s), name("talk"))
  GUIDE: axis(dim(1), label("stress"))
  GUIDE: axis(dim(2), label("talk"))
  GUIDE: text.title(label("Independent-Samples t-Test Graph"))
  SCALE: cat(dim(1), include("1", "2"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: interval(position(stress*talk), shape.interior(shape.square))
END GPL.

Notes

Output Created: 28-JAN-2015 01:31:16
Input Data: C:\Users\Frances Roul\AppData\Local\Temp\Temp1_new_datasets_7e-10.zip\new_datasets_7e\new_datasets_7e\Lesson 24 Data File 1.sav
Active Dataset: DataSet3
N of Rows in Working Data File: 30
Syntax: GGRAPH /GRAPHDATASET NAME="graphdataset" VARIABLES=stress talk MISSING=LISTWISE REPORTMISSING=NO /GRAPHSPEC SOURCE=INLINE, followed by the GPL block shown above.
Processor Time: 00:00:00.14
Elapsed Time: 00:00:00.14

[DataSet3] C:\Users\Frances Roul\AppData\Local\Temp\Temp1_new_datasets_7e-10.zip\new_datasets_7e\new_datasets_7e\Lesson 24 Data File 1.sav

References

Frankfort-Nachmias, C., & Nachmias, D. (2008). Research methods in the social sciences (7th ed.). New York, NY: Worth Publishers.

Zaiontz, C. (2015). Real statistics using Excel. www.real-statistics.com

Laureate Education, Inc. (Executive Producer). (2009n). The t test for related samples. Baltimore: Author.


Parenting Styles in early childhood

Parenting Style as a Mediator between Children’s Negative Emotionality and Problematic Behavior in Early Childhood

Abstract

Parenting style is of particular interest in the negative emotional development that leads to difficult behavior in children. This paper evaluates research focused on the impact parenting has on children’s negative behavior. The objective was to determine the effects of authoritative and authoritarian parenting as they relate to negative behavior in children. Comparisons are made to several studies showing similar results. The objective, procedures, and results are evaluated to determine the strength of the research conducted and the validity of the study. Even with limitations, the research does in fact support that authoritative parenting – which is firm but loving – is more effective at helping children not act out than is authoritarian parenting, which emphasizes compliance and conformity.

Introduction

Anyone who has ever spent time with preschool children knows that the lives of such young people are marked both by negative emotions and by acting out (often described as “temper tantrums”). Both are typical and age appropriate. However, also age appropriate to the preschool cohort is the need to begin to learn how to regulate their behavior. While young children have some ability to be self-regulating (as opposed to infants), they lack the cognitive and emotional skills to do so on their own in any consistent manner. Thus one of the tasks of parenting preschool-aged children is to help them learn to separate negative emotions from negative actions.

Key to this process is teaching children that negative emotions are perfectly acceptable. The parenting style that is best geared to teaching both aspects of this – that negative emotions are natural but that negative acting out is not acceptable – is the authoritative parenting style. In contrast, an authoritarian parenting style can be fundamentally harmful to the process of teaching young children to honor but contain their negative emotions such as anger, fear, and dislike.

Authoritarian parenting is marked by the parents’ having very high expectations of compliance with the rules that they put into place and a high level of conformity to the parents’ beliefs. Authoritarian parents tend to give commands rather than explanations. Authoritative parents also set standards and hold expectations for their children, but they allow an appropriate amount of independence on the part of the child and allow for questioning and discussion.

Statement of the problem

The problem explored by the research examined here is how parents may help young children learn to separate their negative emotions (especially anger and frustration, both very common – and entirely acceptable – emotions at this stage of life) from negative actions. Parents may often find themselves both angry and frustrated at the child who turns around and bites a friend on the playground or who collapses onto the grocery store floor when denied an especially sugary treat, and they may respond in much the same way as their children – yelling back and losing their own tempers. This is hardly an effective response.

The most effective response, according to the research examined here, is for parents to help their children understand their emotions, put words to those emotions, and to find appropriate ways to act out their emotions – perhaps by tearing paper into small pieces, building up towers of blocks and knocking them over, etc. Parents who help their children separate negative emotions from negative actions are authoritative, allowing children to ask questions and receive honest answers. Parents who insist on compliance and conformity tend to exacerbate their children’s negative behavior.

The hypothesis that this paper examines is the following: An authoritative parenting style helps reduce negative behaviors in preschool children that are associated with negative emotions.

Literature Review

The research summarized here fully supports the idea that parents using an authoritative style are more successful at helping their children reduce their negative behaviors than are parents using an authoritarian style. Paulessen-Hoogeboom et al (2008) found that while young children will act out in negative ways at times regardless of parenting style (this is only to be expected at this developmental stage), authoritative parenting helped reduce this behavior; in the authors’ words, “the relations between child negative emotionality and internalizing and externalizing behaviors were partially mediated by mothers’ authoritative parenting style” (p. 209).

Moreover, when the authors used confirmatory factor analysis to decontaminate possible overlap in item content between measures assessing temperament and problematic behavior, the association between negative emotionality and internalizing behavior was fully mediated by authoritative parenting. (p.209)

The researchers used the following definition for authoritative parenting: “Authoritative parenting is characterized by a combination of high warmth, firm but fair control, and the use of explanations and reasoning” (p. 212). They observed 98 male and 98 female children from two and a half to four years in Dutch daycare centers. They assessed the parents’ style of interaction with their children and determined how effective authoritarian and authoritative parents were in terms of helping their children disconnect negative emotions from negative “externalization”. They found that there was a statistically positive correlation between authoritative parenting and children’s ability to disconnect negative feelings from negative actions.

The study attempts to provide insight by measuring maternal perceptions of children as they relate to problematic behaviors, both internalizing and externalizing. In an effort to fill gaps in previous research, the focus was on three-year-old toddlers. In collaboration with child health centers in Holland, 196 preschool children and their mothers were randomly selected through a letter distributed to 750 families from the health centers. The researchers set out to find direct associations between authoritarian parenting (compared with authoritative parenting) and higher levels of negative emotionality; to relate problematic behavior indirectly to parenting style; and, lastly, to show the association between lower SES, the level of authoritative parenting, and internalizing and externalizing behaviors. (Figure 1, 2008)

Findings

Paulessen-Hoogeboom et al (2008) present us with a number of key findings that have such pervasive implications for parenting. All toddlers engage in behaviors such as biting, hitting, screaming, or otherwise acting out. Such behaviors arise as a result of negative emotions. Parents often find these behaviors hard to deal with – along with other children and other caregivers. The response by others in the children’s world may be highly negative itself and may thus provoke additional negative feelings, which in turn provoke additional negative behaviors. This is a cycle that is bad for all concerned.

Paulessen-Hoogeboom et al (2008) further validated the finding of others that an authoritarian parenting style is aimed at getting children to stop these negative behaviors by commanding them to follow parental orders. However, they also found, such a parenting style ignores the underlying emotions and so is ineffective in preventing the negative behaviors involved. Authoritative parents talk with their children about these emotions, help them understand that such emotions are natural and appropriate, and that there are better ways to express these feelings that will not be seen as negative by others. It is this key part – acknowledging emotions while helping children disconnect emotions from actions – that makes authoritative parenting effective in reducing negative actions.

In other words, parents and young children can work together (with the far greater amount of work being done by the parents, of course) to create a positive feedback system in which children learn to value their emotions while moderating their behavior.

The next important finding by Paulessen-Hoogeboom et al (2008) was that whatever elements of “personality” or “temperament” are innate, any inborn tendency to act out negatively is far less important than parenting style for children’s behavior. In other words, Paulessen-Hoogeboom et al (2008) found that authoritative parenting can overcome innate tendencies in children to act out. This is a very important finding for parents and other caregivers.

In this longitudinal study, the research showed that while young children will act out in negative ways at times regardless of parenting style, authoritative parenting helped reduce this behavior (Paulessen-Hoogeboom, et al, 2008). Preliminary analyses using correlation and covariance showed no significant differences in the mean scores based on gender or birth-order variables. Using a variety of statistical tools, including chi-square and AGFI to measure the amount of variance and covariance, the results indicated a good fit. The adjusted model, which omitted certain paths, removed authoritarian parenting from the model and revealed a negative association between emotionality and maternal authoritative parenting. (Figure 2, 2008)

Discussion

The study set out to determine possible causes of and links to children’s negative emotionality and problematic behavior through a sample drawn from the general population. There was evidence that a child’s negative emotions and problematic behavior are related to parenting and are mediated by authoritative parenting from the maternal parent.

This research is echoed by others and in fact substantiates the body of research in this area. Similar findings were reported by Kochanska, Murray, & Coy (1997), who found that mothers who scored high on sensitivity measures and responded quickly to requests made by their toddlers (that is, mothers who used an authoritative parenting style) were effective in limiting negative behavior on the part of their children. Both sensitivity and speed of response were directed at children expressing negative emotions in words: the maternal response emphasized and supported the children’s use of verbal expression rather than physical acting out when the child felt negative emotions.

In this longitudinal study, one year after the researchers initially observed the toddlers, they found that the children of responsive parents rated higher on cooperativeness and prosocial behavior than did children who had parents with a less responsive style.

Kochanska, Murray, & Coy (1997) found that both outgoing and shy toddlers benefited from a responsive but firm parenting style. This finding is important because it suggests that parenting style can at least in some measure trump temperament or personality, or “Different socialization experiences can predict the same developmental outcomes for children with different predispositions, and a given socialization experience can predict divergent developmental for different children.”

Another study that laid the groundwork for the work by Paulessen-Hoogeboom et al was Clark & Ladd (2000). Observing kindergarten-aged children and their mothers, they assessed the level of mutual warmth, happiness, reciprocity, and engagement. (They used these terms to operationalize the concept of authoritative parenting.) They found that children and mothers who scored high on all of these measures (and who thus met the requirements for an authoritative family) scored much higher on positive behavior regardless of internal emotional state. Both teachers and peers described these children as more empathetic, more socially accepting and acceptable, as having more friends, and as having more harmonious relationships with both other children and adults.

The body of research in this area was confirmed and consolidated by Paulessen-Hoogeboom et al (2008). All three of these studies find clear, statistically significant associations between an authoritative parenting style and the ability of young children to contain negative emotions in an appropriate way. Paulessen-Hoogeboom et al (2008) summarized their findings:

The finding that an authoritative parenting style mediates the relations between negative emotionality and problematic behaviors underscores the importance of providing effective parenting support to parents who have difficulties in dealing with their young child’s negative emotionality on a daily basis.

When parents can be trained and encouraged to react to their children’s negative emotionality in an adaptive way, parent-child interactions may become more enjoyable, thereby reducing the occurrence of problematic behaviors and preventing more serious behavioral problems later in life (Campbell, 1995; Patterson, 1982). We note that even in general population samples, a substantial percentage of children (up to 10%) may develop internalizing- and externalizing-behavior problems in the clinical range. (p. 226)

In any research, limitations that may affect the results must be considered. In this study, there were several limitations to be noted: the correlational design limits causal interpretation, some findings may be accounted for by genetics, the sample lacked diversity in socioeconomic backgrounds, and the study focused on only one parent. The findings also revealed a significant association between increased negative emotionality and less supportive parenting, which was more prevalent in lower socioeconomic backgrounds (Paulussen-Hoogeboom, Stams, Hermanns, & Peetsma, 2007).

Conclusion

The findings of Paulessen-Hoogeboom et al (2008) reveal that authoritative parenting can help young children disengage negative emotions from negative behavior, a lesson that has immense value across the entire lifespan. Through authoritative parenting, mothers were able to help their children understand that such emotions are natural and appropriate and that there are better ways to express these feelings that will not be seen as negative by others. These findings are consistent with other studies that have been done. The study is not without limitations but still successfully supports the hypothesis presented.

References

Grazyna Kochanska,Kathleen Murray,&Katherine C Coy.(1997). Inhibitory control as a contributor to conscience in childhood: From toddler to early school age.Child Development,68(2),263-277. Retrieved February 23, 2010, from Career and Technical Education. (Document ID:12543990).

Karen E Clark,&Gary W Ladd.(2000). Connectedness and autonomy support in parent-child relationships: Links to children’s socioemotional orientation and peer relationships.Developmental Psychology,36(4),485-498. Retrieved February 23, 2010, from Research Library. (Document ID:56531644).

Marja C Paulussen-Hoogeboom,Geert Jan J M Stams,Jo M A Hermanns,&Thea T D Peetsma.(2007). Child Negative Emotionality and Parenting From Infancy to Preschool: A Meta-Analytic Review.Developmental Psychology,43(2),438. Retrieved February 23, 2010, from Research Library. (Document ID:1249797641).

Paulussen-Hoogeboom, M., Stams, G., Hermanns, J., Peetsma, T., & van den Wittenboer, G. (2008). Parenting style as a mediator between children’s negative emotionality and problematic behavior in early childhood. The Journal of Genetic Psychology, 169(3), 209-226. Retrieved February 23, 2010, from Research Library. (Document ID: 1548809441).

Analysis of Obesity in the UK

Obesity in England: Reason & Consequences

The objective of this statistics report is to evaluate obesity in England.

1.0 Abstract

The main purpose of this report is to present a statistical analysis of obesity in England, based specifically on the physical activity and lifestyles of people in England. A further objective is to highlight that people’s physical activity and lifestyles are changing year by year. The report will analyse obesity statistics for the population of England and then discuss the population’s physical activity in relation to obesity. To aid the reader’s understanding, historical tables and pie charts are included, which also allow comparisons between obesity rates, physical activity and lifestyle statistics.

2.0 Introduction

Figure 1 presents the BMI formula in different units of measurement. Mass can be expressed in kilograms (kg), pounds (lbs) or stones (st); however, the SI unit of mass used for BMI remains the kilogram.

(Figure 1)

Obesity can be defined as being overweight with a significant degree of body fat (NHS, 2012). Over the past twenty-five years, the measured prevalence of obesity in England has doubled (Public Health England, 2014). Several factors can cause obesity; the two main ones are lack of physical activity and lifestyle. Obesity is undoubtedly harmful to an individual’s health: an obese individual may encounter severe health issues such as diabetes, strokes, heart disease and even common cancers such as breast or colon cancer (NHS, 2012). The question is, how can one determine whether an individual is obese?

An individual’s weight can be measured in various ways to determine the severity of excess weight. According to the United Kingdom’s National Health Service (NHS), the most widely practised method is the body mass index (BMI). Using the calculation in Figure 1, individuals can establish whether they are overweight or obese. BMI is separated into several categories: individuals with a BMI of 25-29 are considered overweight, those with a BMI between 30 and 40 are considered obese, and those with a BMI over 40 are considered severely obese (NHS, 2012).
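The calculation and the bands above can be sketched as a small helper; a minimal illustration (the function names here are ours, not the NHS’s):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: mass in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classify a BMI value using the NHS bands described above."""
    if value < 25:
        return "not overweight"
    elif value < 30:
        return "overweight"       # BMI 25-29
    elif value <= 40:
        return "obese"            # BMI 30-40
    else:
        return "severely obese"   # BMI over 40

print(round(bmi(85, 1.75), 1))      # 27.8
print(bmi_category(bmi(85, 1.75)))  # overweight
```

For example, an 85 kg person who is 1.75 m tall has a BMI of about 27.8 and so falls into the overweight band.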

This report provides essential statistical data to give readers a bigger picture of obesity in England. The statistics are supported by graphs, tables and pie charts to better illustrate the comparisons between the variables. By the end of the report, readers will understand the potential causes of obesity in terms of physical activity, as well as its consequences.

3.0 Methodology

The information used in this report was collected from various types of secondary sources, including online journals, articles, websites and books. Several reliable websites and the annual reports of official institutions were used to interpret and analyse the data, among them the Guardian, the Telegraph and the National Health Service (NHS). The obesity data were mainly obtained from reports published by the NHS, in order to improve the credibility and reliability of this report.

4.0 Findings

4.1 Statistics of obesity in England by age group (2002 to 2012)

(Graph 1)

Source: Hospital Episode Statistics (HES), Health & Social Care Information Centre (2014).

Graph 1 shows the statistics of obesity in England from 2002 to 2013 by age group, from age 16 upwards. The statistics show that the recorded obese population in England trended upwards from 2002 to 2013 across all age groups (16 to 74 and over). In 2002 there was a record of 29,237 people facing obesity, while in 2003 the figure had increased significantly to 33,546 people, a 14.74% change. The recorded population then rose rapidly from 2004 to 2009, with increases of 21.45%, 27.68%, 29.20%, 20.39%, 27.28% and 38.90% respectively, taking the number of people recorded as suffering from obesity from 40,741 up to 142,219.

Compared with 2009, the percentage change reached its peak of 48.91% in 2010, with a record of 211,783 obese individuals aged 16 to 74 and over. The figure then climbed to 266,666 in 2011, a 25.91% change on 2010. Finally, the total recorded obese population in England in 2012 reached 292,404, although this increase accounted for only a 9.65% change. In the bigger picture, the recorded obese population in England escalated by a massive 900% between 2002 and 2013.
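The year-on-year changes quoted above follow the standard percentage-change formula, (new − old) / old × 100. A quick check against the counts read off Graph 1:

```python
def pct_change(old: float, new: float) -> float:
    """Year-on-year percentage change between two counts."""
    return (new - old) / old * 100

# Counts read off Graph 1 for selected years
print(round(pct_change(29237, 33546), 2))    # 2002 -> 2003: 14.74
print(round(pct_change(211783, 266666), 2))  # 2010 -> 2011: 25.91
print(round(pct_change(266666, 292404), 2))  # 2011 -> 2012: 9.65
print(round(pct_change(29237, 292404), 1))   # 2002 -> 2012 overall: 900.1
```

The computed values reproduce the 14.74%, 25.91% and 9.65% changes cited in the text, and confirm the roughly 900% overall increase.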

4.2 Obesity between men and women in England (Year 2002-2012)

(Graph 2)

Source: Hospital Episode Statistics (HES), Health & Social Care Information Centre (2014).

Graph 2 represents the obese populations of men and women in England and shows a significant uptrend in the recorded statistics. The numbers of obese men and obese women both increased year by year. In 2002, the number of women suffering from obesity (17,169) was 5,101 higher than the number of obese men (12,068).

Furthermore, in 2007 the number of obese women (48,829) exceeded the number of obese men by 16,749, roughly triple the 2002 gap. The most striking figures were recorded in 2012, when the number of obese women (192,795) was approximately twice the number of obese men (99,579).

As a result, we can conclude that, within England’s recorded obese population, more women than men suffered from obesity. According to the research, lack of physical activity was a cause of this obesity.

5.0 Physical activity

Physical activity is known to bring health benefits and has been shown to reduce the incidence of many chronic conditions, including obesity (HSCIC, 2012). Conversely, individuals who lack physical activity may suffer from obesity.

5.1 Physical activity guidelines

Category                      MPA (minutes/week)   VPA (minutes/week)

Active                        ≥ 150                ≥ 75

Some activity                 60-149               30-74

Low activity (Overweight)     30-59                15-29

Inactive (Obese)              < 30                 < 15

MPA: Moderate intensity Physical Activity

VPA: Vigorous intensity Physical Activity

(Figure 2)

Source: Hospital Episode Statistics (HES), Health & Social Care Information Centre (2014).

HSCIC (2012) set a standard of physical activity guidelines, as shown in Figure 2. Activity is divided into four categories to determine whether an individual is active or inactive; an individual falls into a category by meeting its MPA threshold, its VPA threshold, or both.
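Those thresholds can be expressed as a small classifier; a sketch assuming, as the text states, that meeting either the MPA or the VPA threshold places a person in a band (the function name is ours):

```python
def activity_category(mpa_minutes: float, vpa_minutes: float) -> str:
    """Classify weekly activity using the Figure 2 bands; a person
    qualifies for a band by meeting its MPA or its VPA threshold."""
    if mpa_minutes >= 150 or vpa_minutes >= 75:
        return "Active"
    if mpa_minutes >= 60 or vpa_minutes >= 30:
        return "Some activity"
    if mpa_minutes >= 30 or vpa_minutes >= 15:
        return "Low activity (overweight)"
    return "Inactive (obese)"

print(activity_category(160, 0))  # Active
print(activity_category(45, 0))   # Low activity (overweight)
print(activity_category(10, 10))  # Inactive (obese)
```

Checking the higher bands first matters: a person with 160 MPA minutes also exceeds the lower bands, so the function must return the most active band they qualify for.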

5.2 Self-reported physical activity of men and women

(Chart 1) (Chart 2)

Source: Hospital Episode Statistics (HES), Health & Social Care Information Centre (2014).

HSCIC (2012) stated that individuals must undertake at least 30 minutes of MPA per week to avoid obesity; low-activity and inactive individuals are considered overweight and obese respectively. Charts 1 and 2 are pie charts representing the self-reported physical activity data collected by HSCIC (2012). According to the two charts, the percentage of physically active men (67%) is clearly higher than that of women (55%), a difference of 12 percentage points. By contrast, 26% of women in 2012 were inactive, whereas only 19% of men were, 7 percentage points lower. Furthermore, the percentage of women with low activity was slightly (2 percentage points) higher than that of men.

In comparison, the percentage of inactive women is higher than that of inactive men, whereas the percentage of active men is higher than that of active women. Since individuals in the ‘low activity’ and ‘inactive’ categories are considered overweight and obese, we can conclude, referring to Figure 2, that lack of physical activity is one of the main causes of obesity, and this also helps explain why the obese population of women exceeded that of men from 2002 to 2012.

6.0 Comparative rates of adults’ obesity in 2010

(Graph 3)

Source: National Obesity Observatory, International Comparisons of Obesity Prevalence, available at: www.noo.org.uk/NOO_about_obesity/international/

Graph 3 shows the latest data on comparative rates of adult obesity, for 2010. The country with the highest obesity prevalence is the United States (35.70%), followed by Mexico, Scotland and New Zealand in second, third and fourth place with 30%, 28.20% and 26.50% respectively. England’s obesity prevalence is 26.10%, which is high compared with countries such as Australia (24.60%), Northern Ireland (23%), Luxembourg (22.50%) and the Slovak Republic (16.90%). Japan and Korea have the lowest obesity prevalence among the countries shown, at 3.90% and 3.80% respectively. Ultimately, the graph shows that England’s obesity level can be considered severe.

6.1 Map of excess weight of England

Map 1 shows the percentage of adults who are overweight or obese in different regions of England.

The Guardian (2014) stated that, across all regions, an average of 64% of adults in England are overweight or obese.

(Map 1)

7.0 Cost of Obesity

The cost of obesity consists of the human cost and the National Health Service (NHS) cost. This section discusses both.

Figure 2 shows the relative risk for women and men of diseases caused by obesity, including Type 2 diabetes, hypertension, stroke and cancer. The relative risk figures for women are generally higher than those for men, especially for Type 2 diabetes, where the risk is more than twice as high. Type 2 diabetes can seriously shorten life and affects mortality (NAO, 2011).

7.1 Human Cost of obesity

Disease                   Relative risk – Women   Relative risk – Men

Type 2 Diabetes           12.7                    5.2

Hypertension              4.2                     2.6

Myocardial Infarction     3.2                     1.5

Cancer of the Colon       2.7                     3.0

Angina                    1.8                     1.8

Gall Bladder Diseases     1.8                     1.8

Ovarian Cancer            1.7                     n/a

Osteoarthritis            1.4                     1.9

Stroke                    1.3                     1.3

(Figure 2)

Source: National Audit Office estimates based on literature review

7.2 NHS Cost of Obesity

(Graph 4)

Source: National Audit Office estimates (2012)

Graph 4 shows the estimated cost of obesity in 2012. The estimated spending of £457m on obesity is a burden on England’s economy. The NAO (2012) estimated that the cost of obesity will increase dramatically, to £6.3 billion by 2015 and up to £9.7 billion by 2050. The cost is so high largely because of the indirect cost of lost output in the economy: the NAO (2001) stated that sickness and death among England’s workforce caused by obesity reduce economic output. The consequences of obesity must therefore not be ignored, but taken into serious consideration.

8.0 Conclusion

In short, the statistics in this report identify some important details regarding obesity in England. The number of people with obesity has doubled over the past 25 years, and the trend for all age groups in England showed an increase from 2002 to 2013. The differences between the genders, in particular the higher inactivity of women relative to men, help explain the rise in obesity in relation to physical activity. Importantly, this report set out the consequences of obesity, namely severe and potentially fatal illnesses, with the related risk statistics for men and women. Lastly, the report compared obesity rates with those of other countries, showed the percentage of obesity across the regions of England, and presented the human and NHS costs of obesity.

9.0 Recommendations

As mentioned above, the level of obesity in England is becoming more significant year by year. The government should run more campaigns to fight obesity, as these would give individuals and families more information about the importance of physical activity. In addition, the government should continue to subsidise the NHS ‘Health Check programme’ in order to prevent severe diseases such as heart disease, stroke and cancer.

Moreover, the government should not focus only on physical activity; it must also address other causes of obesity, such as diet and lifestyle. It could implement policy measures to fight obesity, such as increasing taxation on fatty foods to discourage people from buying unhealthy products. Last but not least, the government could also run more healthy-living campaigns and advertise the disadvantages of obesity to encourage people to avoid it.

10.0 References:

Boseley, S. (2014). The Guardian: Almost two-thirds of adults in England classed as overweight by health body. [Online] Available at: http://www.theguardian.com/society/2014/feb/04/two-thirds-adults-overweight-england-public-health [Accessed 28th March 2014].

National Health Service. (2014). Obesity: Introduction. [Online] Available at: http://www.nhs.uk/conditions/Obesity/Pages/Introduction.aspx [Accessed 27th March 2014].

Public Health England. (2014). Trends in Obesity Prevalence. [Online] Available at: http://www.noo.org.uk/NOO_about_obesity/trends [Accessed 20th March 2014].

Figure 1: Source: http://healthy-living.knoji.com/does-your-bmi-really-matter/

HSCIC. (2014). Statistics on Obesity, Physical Activity and Diet: England 2014. [Online] Available at: http://www.hscic.gov.uk/catalogue/PUB13648/Obes-phys-acti-diet-eng-2014-rep.pdf [Accessed 20th March 2014].

HSCIC. (2012). Physical activity in Adults. [Online] Available at: http://www.hscic.gov.uk/catalogue/PUB13218/HSE2012-Ch2-Phys-act-adults.pdf [Accessed 24th March 2014].

NAO. (2012). An Update on the Government’s Approach to Tackling Obesity. [Online] Available at: http://www.nao.org.uk/wp-content/uploads/2012/07/tackling_obesity_update.pdf [Accessed 24th March 2014].

HSCIC. (2012). Chapter 7: Health Outcomes. [Online] Available at: http://www.hscic.gov.uk/searchcatalogue?productid=13887&returnid=3945 [Accessed 24th March 2014].

NAO. (2001). Tackling Obesity in England. [Online] Available at: http://www.nao.org.uk/wp-content/uploads/2001/02/0001220.pdf [Accessed 28th March 2014].

Public Health England. (2013). Social Care and Obesity: A Discussion Paper. [Online] Available at: http://www.local.gov.uk/documents/10180/11463/Social+care+and+obesity+-+a+discussion+paper+-+file+1/3fc07c39-27b4-4534-a81b-93aa6b8426af [Accessed 29th March 2014].

HSCIC. (2012). Statistics on Obesity, Physical Activity and Diet: England, 2012. [Online] Available at: http://www.hscic.gov.uk/catalogue/PUB05131/obes-phys-acti-diet-eng-2012-rep.pdf [Accessed 20th March 2014].

HSCIC. (2013). Statistics on Obesity, Physical Activity and Diet: England, 2013. [Online] Available at: http://www.bhfactive.org.uk/userfiles/Documents/obes-phys-acti-diet-eng-2013-rep.pdf [Accessed 20th March 2014].