Impact of Smartphones on Students

Problem Statement

With today's advanced technology, the smartphone is viewed as an important device and an integral part of Malaysian society. According to a report in The Sun Daily, an analysis concluded last year revealed that Malaysia's smartphone penetration increased from 47% in 2012 to 63% in 2013, while tablet penetration almost tripled, from 14% to 39% (Afrizal, 2013). University students are among the highest contributors to the increasing number of smartphone sales (Jacob & Isaac, 2008). However, frequent smartphone use can become a habit or dependency for students and indirectly affect their lifestyle. Several general aspects of lifestyle have been categorized, such as health, education, psychology, socialization and security, each of which may be affected positively or negatively.

Regarding the impact of smartphones on the business field, Rashedul Islam, Rofiqul Islam and Tahidul Arafhin Mazumder (2010) state that the drastic growth of businesses during the past few years is mainly due to the rising use of smartphones and mobile applications. The smartphone has made advertising more interesting and effective for the business sector. However, smartphones have had a negative impact on the PC market: survey results for 2011 show that smartphone shipments for that full year reached 487.7 million, exceeding PC shipments by 17.63%. Smartphones today are far more formidable than the PCs of ten years ago, and people now use them to check news feeds, update statuses and post photos as well (Mogg, 2012). The Microsoft-Intel alliance, which long dominated the PC market, has also faced pressure to enter the mobile device market. PCs may eventually be replaced by smartphones, as smartphone growth looks optimistic for the future even though millions of PCs are still sold every year (eWeek, 2012).

According to health surveys on smartphones by Sarwar and Soomro (2013), most users in the USA use the smartphone to search for health-related information and facilities. Many mobile health applications are available that assist users with prescription management, suggest alternative treatment options, and offer price comparison and verification of prescriptions as well. However, Russian and Eastern European scientists issued the earliest reports that low-level exposure to the RF radiation of smartphones could cause a wide range of health effects, including behavioural changes, effects on the immune system, reproductive effects, changes in hormone levels, headaches, irritability, fatigue, and cardiovascular effects (Russian National Committee, n.d.). In addition, research by the World Health Organization suggested this behaviour is similar to a compulsive-impulsive disorder, whereby an inability to access the services is associated with negative health consequences, including withdrawal and depression, and other negative repercussions such as social isolation and fatigue (WHO, 2011). According to Coleman (2013), smartphones can also contribute to the deterioration of our eyes, squash our spines, give us saggy jowls, damage our hearing, disrupt our sleep cycle and cause dark circles under our eyes.

Meanwhile, in terms of education, Sarwar and Soomro (2013) indicated that smartphones have exposed society to a huge amount of educational and learning material, thanks to internet availability and the increasing demand for smartphones. According to a survey by King (2012), the majority of American adults think smartphone usage contributes positively to young people's education in America, e.g. e-readers for study purposes. With the help of technology, students are able to access educational programs (Font, 2013); for instance, Dell has launched Youth Learning, an alphabetization initiative that supports learning programs. Besides that, the smartphone serves a basic human need by helping students relieve boredom and decompress between tasks (Knight, 2012). However, smartphone dependency also has negative effects on education. Over-dependency can lead to addiction, in which a person constantly seeks communication with the outside world through social networks even when there is no real need to communicate (Lee, 2012). According to The Times of India: Health (2013), experts say that although the smartphone makes life more convenient and easier, it reduces our memory and kills cognitive thinking. People now depend so heavily on search engines accessed through the smartphone that they are becoming poorer thinkers and lazier than before.

Regarding the impact on psychology, another study by Sarwar and Soomro (2013) found a positive psychological effect: the smartphone is used to reduce the tension of working life, and keeping up to date with the latest news is nowadays a vital way of reducing tension. However, smartphone dependency also has negative effects. Spending more than seven hours a day using a smartphone, and experiencing symptoms such as anxiety, insomnia and depression when cut off from the device, is considered addiction (Nam, 2013). Students who are addicted to smartphones not only distract themselves from their studies but also damage their interpersonal skills. According to Sarwar and Soomro (2013), addiction to smartphones affects our quality of sleep and creates friction in our social and family life.

As for the socialization aspect of lifestyle, a survey by Yi-Fan Chen conducted at U.S. colleges shows that students have several strong socialization motives for using the mobile phone to contact both family and friends (Chen, 2007). With smartphone features such as text-to-speech, GPS and social websites, people can easily remain integrated with society, especially those with special needs and the elderly (Sarwar & Soomro, 2013). However, a report by Amanda (2012) shows that, as a consequence of over-dependence on smartphones, only 35% of teens who own a smartphone socialize face-to-face outside of school. According to Teoh (2011), Americans socialize on their mobile devices for an average of 2.7 hours per day, which is twice the time spent eating and more than one third of the time spent sleeping per day.

Regarding the impact of smartphones on security, Sarwar and Soomro (2013) stated that parents can keep track of their children's safety through a smartphone's internet connection. Furthermore, setting up password security can protect the sensitive data inside the smartphone and restrict access in case the phone is lost or stolen (BullGuard Security Centre, 2013). According to ENISA's report (2010), data leakage from a smartphone can affect assets such as personal data, corporate intellectual property, classified information and financial assets. If a user loses the smartphone and no appropriate security solution is in place, information such as addresses, e-mail, web browser log data and SMS (Short Message Service) messages can all be exposed (Smith, 2011). Smartphones and social networking sites are likely to be the next targets for criminal attacks (Sarwar & Soomro, 2013). According to WhoCalledMyPhone.Net (as cited in Darrell, 2013), 24% of smartphone users check their phone while driving, which can directly cause accidents, including fatal ones.

In short, the smartphone has contributed positive impacts to people's lives, but too much dependence on it also causes negative consequences. Hence, our study will focus on the impacts of smartphone dependency on lifestyle. The smartphone affects various fields such as business, health, education, psychology, socialization and security. However, since the target of our study is undergraduate students at UUM, some fields, such as business, are not applicable to students. Accordingly, only five lifestyle aspects will be used in our survey: health, education, psychology, socialization and security.

References

Afrizal. (2013, September 5). Malaysia's smartphone penetration rises by 16%. The Sun Daily. Retrieved March 2, 2014, from http://www.thesundaily.my/news/820932

Amanda, L. (2012). Teens, smartphones & texting. Pew Research Center's Internet & American Life Project, pp. 1-34.

BullGuard Security Centre. (2013). Eight ways to keep your smartphone safe: Mobile Security. Retrieved March 23, 2014, from http://www.bullguard.com/bullguard-security-center/mobile-security/mobile-protection-resources/8-ways-to-keep-your-smartphone-safe.aspx

Chen, Y.-F. (2007). The mobile phone and socialization: The consequences of mobile phone use in transitions from family to school life of U.S. college students. Journal of Cyber Culture and Information Society, pp. 1-152.

Coleman, C. (2013, July 21). How your mobile can give you acne… not to mention a saggy jaw and sleepless nights. Daily Mail. Retrieved March 18, 2014, from http://www.dailymail.co.uk/femail/article-2372752/How-MOBILE-acne–mention-saggy-jaw-sleepless nights.html?ITO=1490&ns_mchannel=rss&ns_campaign=1490

Darrell, R. (2013). The impressive effects of smartphones on society (infographic). Bit Rebels. Retrieved March 18, 2014, from http://www.bitrebels.com/technology/the-effects-of-smartphones-on-society/

eWeek. (2012, September 5). Intel Microsoft influence declining as smartphones tablets rise analysts 342948. Retrieved from http://business.highbeam.com/137475/article-1G1-301713950/intelmicrosoft-influence-declining-smartphones-tablets

ENISA. (n.d.). Top ten smartphone risks. Retrieved March 17, 2014, from http://www.enisa.europa.eu/activities/Resilience-and-CIIP/critical-applications/smartphone-security-1/top-ten-risks

Gehi, R. (2013, December 3). Your smartphone is destroying your memory. The Times of India. Retrieved March 23, 2014, from http://timesofindia.indiatimes.com/life-style/health-fitness/health/Your-smartphone-is-destroying-your-memory/articleshow/19412724.cms

Jacob, S. M., & Isaac, B. (2008). The mobile devices and its mobile learning usage analysis. Proceedings of the International Multi-Conference of Engineers and Computer Scientists, Hong Kong, Vol. 1, March 19-21, pp. 782-787.

King, R. (2012). Mobile devices have positive impact on education, survey says. Retrieved from http://www.zdnet.com/blog/btl/mobile-devices-have-positive-impact-on-education-survey-says/68028

Knight, S. (2012, September 26). Smartphones cure boredom, but is that necessarily a good thing? TechSpot. Retrieved March 17, 2014, from http://www.techspot.com/news/50310-smartphones-cure-boredom-but-is-that-necessarily-a-good-thing.html

Lee, C.-S. (2012). Smartphone addiction: Disease or obsession? Korea Times. Retrieved March 18, 2014, from http://www.koreatimes.co.kr/www/news/opinon/2012/11/298_117506.html

Md. Rashedul Islam, Md. Rofiqul Islam, & Tahidul Arafhin Mazumder. (2010). Mobile application and its global impact. International Journal of Engineering & Technology, IJET-IJENS, 10(6). Retrieved from http://www.ijens.org/107506-0909%20ijet-ijens.pdf

Mogg, T. (2012). Smartphone sales exceed those of PCs for first time, Apple smashes record. Digital Trends. Retrieved from http://www.digitaltrends.com/mobile/smartphone-sales-exceed-those-of-pcs-for-first-time-apple-smashes-record/

Nam, I. (2013, July 23). A rising addiction among youths: Smartphones. Wall Street Journal (Online). Retrieved March 18, 2014, from http://eserv.uum.edu.my/docview/1411097432?accountid=42599

Russian National Committee on Non-Ionizing Radiation Protection. (n.d.). Sanitary Rules of the Ministry of Health (Russia): SanPiN 2.1.8/2.2.4.1190-03, point 6.9.

Sarwar, M., & Soomro, T. R. (2013, March). Impact of smartphone's on society. European Journal of Scientific Research, 98(2), 216-226. Retrieved March 18, 2014, from http://www.europeanjournalofscientificresearch.com/

Smith, M. (2011). A practical analysis of smartphone security. In Salvendy (Ed.), Human Interface, Part I, pp. 311-320.

Font, S. (2013). How smartphones narrow the achievement gap in education. Retrieved March 23, 2014, from http://mobileworldcapital.com/en/article/78

Teoh, L. (2011). Mobile stats 2011: 91% use mobile phones to socialize. Retrieved March 16, 2014, from http://www.biztechday.com/mobile-stats-2011-91-use-mobile-phones-to-socialize/

WHO. (2011). Mobile phone use: A growing problem of driver distraction. World Health Organization, pp. 1-50.

How Sleeping Hours Affect Students’ Studies

STATISTICAL TECHNIQUES FOR BEHAVIORAL SCIENCE I

HOW SLEEPING HOURS AFFECT STUDENTS' STUDIES IN UTAR PERAK CAMPUS

FACULTY OF ARTS AND SOCIAL SCIENCE


Marks

1. Abstract: /10 marks
2. Chapter 1 Introduction: /10 marks
3. Chapter 2 Literature Review: /15 marks
4. Chapter 3 Method: /15 marks
5. Chapter 4 Data Analysis and Result: /25 marks
6. Chapter 5 Conclusion: /15 marks
7. References: /10 marks
8. Penalty for _____________________ (if any)
Total: /100 marks


Abstract

Sleep deprivation and poor sleep quality affect students' study performance. The purpose of this statistical study is to determine whether the amount of sleep affects the studies of students at UTAR Perak Campus. It is hypothesized that participants who experience less sleep deprivation and higher sleep quality will perform better in their studies than those who experience more sleep deprivation and lower sleep quality.

Introduction

According to Gilbert and Weaver (2010), human bodies require not only the basic needs of air, water and food to function well but also sufficient sleep, as sleep is important for learning, memory consolidation, critical thinking and decision making. Sleep is essential for optimal academic functioning.

Sleep deprivation is now widely recognized as a significant public health issue, not only among students but among people of all ages and groups. Some people show excessive sleepiness, which is related not to the quantity of sleep obtained but to its quality (Gilbert & Weaver, 2010).

Both sleep deprivation and poor sleep quality are prominent among students because they often have irregular sleep patterns due to the workloads from their study schedules and club activities. This results in short sleep lengths on weekdays and later wake-up times on weekends (Gilbert & Weaver, 2010).

University psychologists recognize that student academic performance is negatively affected by poor sleep quality and/or sleep deprivation. Although depression is also one of the factors that affect students' academic performance, sleep quality may be an even more significant factor than depression (Gilbert & Weaver, 2010).

It has been found that sleepiness has a large impact on mood, as students who fall asleep during class report higher negative mood states.

Research Questions

Will sleeping hours affect the academic performance of students of UTAR Perak Campus?

The researchers want to find out how different amounts of sleep affect students' studies.

What are the factors that affect the quantity of students' sleeping hours?

The researchers are interested in finding the factors that affect both the quality and the quantity of students' sleep, which in turn affect students' studies.

Will a student's sleeping habits be influenced by friends and family?

The researchers are keen to know the extent to which friends and family affect a student's sleeping habits.

How many hours of sleep do male and female students need per day?

The researchers want to study the amount of sleep required by female and male students.

What are the differences in the CGPA scores of male and female students according to the amount of sleep they have?

The researchers are keen to study the differences in CGPA scores obtained by students of both genders according to the amount of sleep they have.

Literature Review

Sleep is very important to a human being's health. The consequences of sleep manifest in both health and performance, and the relationship between sleep and performance has been studied in many different fields, including human science, medicine, psychology, education and business. Sleep-related variables such as sleep deficiency, sleep quality and sleep habits have been shown to influence the performance of students (Lack, 1986; Mulgrew et al., 2007; National Sleep Foundation, 2008; Pilcher & Huffcutt, 1996; Rosekind et al., 2010). According to Weitzman et al. (1981), Delayed Sleep Phase Syndrome (DSPS) is characterized by three main features: long sleep latency on weekdays (normally falling asleep between 2 a.m. and 6 a.m.), normal sleep length on weekends (usually sleeping late and waking up late), and difficulty in staying asleep. These sleep problems are common and present in students around the world.

Results indicate that in the U.S., 11.5% of undergraduate students were found to have DSPS (Brown, Soper, & Buboltz, 2001). Not only that, Australian studies found the prevalence of DSPS in students (17%) to be higher than in adults (6-7%) (Lack, 1986; Lack, Miller, & Turner, 1988). Studies related to DSPS have also been conducted in other countries, such as Japan, Norway and Taiwan (Hazama, Inoue, Kojima, Ueta, & Nakagome, 2008; Schrader, Bovim, & Sand, 1993; Yang, Wu, Hsieh, Liu, & Lu, 2003). Furthermore, in Lack's (1986) study, the DSPS group experienced sleepiness on weekdays more often than the non-DSPS group. In addition, it was found that members of the DSPS group performed at a lower level academically than the non-DSPS group when course grades were examined. In a more recent study, Trockel et al. (2000) found that first-year college students with lower GPAs reported later bedtimes and later wake-up times on both weekdays and weekends.

On the other hand, the relationship between sleep and academic performance has been reviewed in other studies. Curcio, Ferrara and Gennaro (2006) reviewed approximately 103 studies related to sleep loss, learning capacity and academic performance, with samples drawn from students at different universities. According to Curcio, Ferrara and Gennaro (2006), sleep loss was negatively correlated with academic performance. Results indicate that sleep-deprived students performed poorly on learning capacity skills such as attention, memory and problem-solving tasks, and that the lack of sleep therefore indirectly affected their academic performance. Sleep deprivation is a term meaning loss of sleep (Drummond & McKenna, 2009). Moreover, sleep loss resulted in daytime sleepiness, which was also correlated with poor academic performance; studies showed a significant relationship between lower GPA and lack of sleep among college students. The Multiple Sleep Latency Test is an instrument used to evaluate daytime sleepiness, and it has been used by previous researchers (Carskadon, Harvey, & Dement, 1981; Fallone, Acebo, Arnedt, Seifer, & Carskadon, 2001; Randazzo, Muehlbach, Schweitzer, & Walsh, 1998).

On the other hand, another study was conducted to determine the various sleep patterns of medical students sitting ongoing professional examinations at Shifa College of Medicine, Islamabad, and to find out the relationship between the number of hours of sleep before an examination and academic performance in the relevant exam. The majority of the students had reduced sleep on exam days, the reason being studying late at night before the paper. There are various reasons for decreased sleep among university and college students, including watching TV and using the internet. A study done in a Pakistani medical university indicated that, among the 58.9 per cent of students who slept less than 8 hours a day, the most common causes of sleep deprivation were watching television and listening to music. In addition, stress is also a very significant contributing factor in students' inability to sleep at night. Consuming caffeine or painkillers, substance abuse and smoking at night to stay awake is another trend seen among students. This greatly contributes to sleeplessness at night and adversely affects their academic performance (Oshodi, Aina, & Onajole; Omvik, Pallesen, Bjorvatn, Thayer, & Nordhus; Qureshi, Ali, Hafeez, & Ahmed). Moreover, the study showed that students who achieved good grades (A, B) were those who slept for more than 7 hours, while the majority of those who failed the exam were mostly those who slept less.

However, a similar study done in the USA showed that students with struggling grades (C's, D's and F's) slept significantly less than those who scored A and B grades (Wolfson & Carskadon). According to the study, students slept an average of only 4.74 hours before the exam, and females slept less (4.71 ± 1.82 hours) than males (4.77 ± 3.27 hours). This was similar to a cross-sectional study done in Sao Paulo, which showed that boys slept about 390 minutes; however, their academic performance was not affected by the disturbance in the sleep cycle.

Furthermore, another study was done with a sample of 103 undergraduates at the University of Minnesota. This study separated unhealthy sleep habits into two categories, quality and quantity of sleep. The survey asked questions related to sleep habits in terms of quality and quantity, measured separately in order to break up the term "unhealthy sleep habits", and analysed the topic using a different method from past research. The researchers found that sleep quantity and academic performance are related: there was a relationship between the sleep deprivation measures for the average week, the average amount of sleep obtained in a night, and GPA. This result offers practical applications for college students. From this research, the researchers found that the amount of sleep and academic success are positively correlated, although one cannot conclude that sleeping better causes better exam scores.

Methodology

Participants of the Study

There are 50 participants in this study. They are degree students from Universiti Tunku Abdul Rahman (Kampar) drawn from the Faculty of Arts & Social Science (FAS), the Faculty of Business and Finance (FBF), the Faculty of Information Communication and Technology (FICT), the Institute of Chinese Studies (ICS) and the Faculty of Science (FSC). Their ages range from 20 to 24, and there are 5 males and 5 females from each faculty.

Instruments

Our questionnaire consists of 15 closed-ended questions, and the questions involve different levels of measurement: nominal, ordinal, interval and ratio scales. The demographic details comprised in the questionnaire are gender, age, course of study and CGPA.

'Sleep measures' consist of measurements of Total Sleep Time (TST), Sleep Onset Latency (SOL), Sleep Efficiency (SE) and Wake After Sleep Onset (WASO), as determined by Cole-Kripke (1992). Total Sleep Time (TST) is the duration of time actigraphically determined as "sleep" within a 24-hour period, including daytime and nighttime periods of sleep. Sleep Onset Latency (SOL) is the time between getting into bed and falling asleep, calculated as the time from the start of actigraphically determined "inactivity" to the first minute scored as sleep. Of the four measurements, we refer only to TST and SOL in our questionnaire (questions 2 and 3).
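As a minimal sketch of how these two measures could be computed from per-minute actigraphy scores (the one-minute epoch length, the 0/1 list encoding and the bedtime index are illustrative assumptions, not the Cole-Kripke algorithm itself):

    # TST: count of one-minute epochs scored as sleep within a 24-hour record.
    def total_sleep_time(epochs):
        return sum(epochs)

    # SOL: minutes from getting into bed to the first epoch scored as sleep.
    def sleep_onset_latency(epochs, bedtime_index):
        for i in range(bedtime_index, len(epochs)):
            if epochs[i] == 1:
                return i - bedtime_index
        return None  # never fell asleep in this record

    # Example: in bed at epoch 0, asleep from the 12th minute onward.
    record = [0] * 12 + [1] * 420 + [0] * 30
    print(total_sleep_time(record))         # 420 minutes
    print(sleep_onset_latency(record, 0))   # 12 minutes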

To measure the sleep quality of students, we decided to use the Adult Sleep-Wake Scale (ADSWS). It is a self-report pencil-and-paper measure of sleep quality consisting of five behavioural dimensions: Going to Bed, Falling Asleep, Maintaining Sleep, Reinitiating Sleep, and Returning to Wakefulness. The questionnaire covers the time taken to fall asleep at night (ranging from <10 minutes to >1 hour), the amount of sleep required in order to function well the following day (ranging from <5 hours to 8 hours), the factors affecting the quality or quantity of a student's sleep, and the perception that academic performance is influenced by insufficient sleep.

Procedure

We were curious about how sleeping hours affect students' studies, so we formulated our research questions. After that, we prepared our questionnaires and printed them out for the participants. We randomly selected 5 males and 5 females from each faculty. Our questionnaire also included informed consent for the participants. On average, each participant took about 10 to 15 minutes to complete the questionnaire, and we collected the data immediately upon completion.

Data Analysis

Figure 1. Amount of sleeping hours and students' CGPA scores

Sleeping hours   0.00-1.99   2.00-2.19   2.20-2.99   3.00-3.49   3.50-4.00
<5 hours             0           0           2           1           0
5-6 hours            0           3           3           0           1
6-7 hours            0           7           8           0           1
7-8 hours            2           3          16           3           0

Figure 1 shows the CGPA scores obtained by students of the UTAR Kampar campus according to the amount of sleep they have. The largest group of students (16) slept seven to eight hours per day and obtained CGPA scores in the 2.20-2.99 range; only two students in the sample reached the highest CGPA range of 3.50-4.00, one sleeping 5-6 hours and one sleeping 6-7 hours.

Figure 2. Factors affecting the quality and quantity of students' sleeping hours

Factor             % of UTAR Kampar students (total 50 students)
Night owl          18
Homework           34
Friends            14
Co-curriculum      6
Time management    14

Figure 2 shows the factors that affect the quality and quantity of students' sleeping hours. Of the 50 students, 34% chose homework as the biggest factor affecting their sleep, whereas only 6% chose co-curricular activities. Other factors include being a night owl, socializing with friends and time management.

Figure 3. Students' sleeping habits influenced by friends and family

Influenced by friends and family   Male   Female
Yes                                11     13
No                                 4      7

Figure 3 shows that 11 male and 13 female students stated that their sleeping hours were influenced by friends and family, while 4 male and 7 female students stated that they were not.
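As an illustrative extension not in the original report, the Figure 3 counts could be tested for an association between gender and reported influence using a chi-square test of independence (scipy is assumed to be available):

    from scipy.stats import chi2_contingency

    # Figure 3 counts: rows = influenced yes/no, columns = male/female.
    table = [[11, 13],
             [4, 7]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
    # A large p-value would suggest no detectable association between
    # gender and whether sleep habits are reported as influenced.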

Figure 4. Amount of sleep required by students of each gender

Sleeping hours   Male   Female
<5 hours         2      1
5-6 hours        4      3
6-7 hours        6      10
7-8 hours        13     11

Figure 4 shows that 13 male and 11 female students stated that they require seven to eight hours of sleep per day, while only 2 male students and 1 female student require less than five hours of sleep per day.

Figure 5. CGPA scores obtained by male and female students according to the amount of sleep per day

Male

Sleeping hours   0.00-1.99   2.00-2.19   2.20-2.99   3.00-3.49   3.50-4.00
<5 hours             0           0           2           0           0
5-6 hours            0           1           3           0           0
6-7 hours            0           3           2           0           1
7-8 hours            2           1          10           0           0

Female

Sleeping hours   0.00-1.99   2.00-2.19   2.20-2.99   3.00-3.49   3.50-4.00
<5 hours             0           0           0           1           0
5-6 hours            0           2           0           0           1
6-7 hours            0           4           6           0           0
7-8 hours            0           2           6           3           0

Figure 5 shows that 10 male and 6 female students who had seven to eight hours of sleep per day scored a CGPA in the range of 2.20-2.99, while 2 male students with the same amount of sleep scored in the lowest CGPA range of 0.00-1.99. Two male students who had less than five hours of sleep had CGPA scores in the 2.20-2.99 range, and 1 female student with the same amount of sleep had a CGPA score in the 3.00-3.49 range.


History of Statistics and its Significance

Statistics is a relatively new subject, which branched from probability theory and is widely used in areas such as economics and astronomy. It is a logic and methodology for measuring uncertainty and for drawing inferences under uncertainty (Stigler, 1986). The history of statistics can be traced back to the 1600s. John Graunt (1620-1674) could be considered the pioneer of statistics and the author of the first book on the subject. He published Natural and Political Observations on the Bills of Mortality in 1662, in which, at the request of the King, he studied the plague outbreak in London at the time. Graunt was asked to come up with a system that would allow threats of further outbreaks to be detected, by keeping records of mortality and causes of death and estimating the population. By forming a life table, Graunt discovered that, statistically, the ratio of males to females is almost equal. Then, in 1666, he collected data and began to examine life expectancies. All of this was fundamental, as he was arguably the first to condense a large body of data into a life table and analyse it; such tables are widely used in life insurance today, showing the importance and significance of Graunt's work (Verduin, 2009). His work is also significant because it demonstrated the value of data collection (Stigler, 1986). Then, in 1693, Edmond Halley extended Graunt's ideas and formed the first mortality table that statistically related age to death rates. Again, this is used in life insurance (Verduin, 2009).

Another contributor to the formation of statistics was Abraham de Moivre (1667-1754). He was the first person to identify the properties of the normal curve, and in 1711 he introduced the notion of statistical independence (Verduin, 2009). In 1724, de Moivre studied mortality statistics and laid the foundations of the theory of annuities, inspired by the work of Halley. This is significant, as annuities are widely used in the finance industry today, in particular when forming actuarial tables in life insurance. De Moivre then went on to describe the normal distribution, which can be used to approximate the binomial distribution (O'Connor and Robertson, 2004).

William Playfair (1759-1823) invented statistical graphics, including the line graph and the bar chart in 1786 and the pie chart in 1801. He believed that charts were a better way to represent data and claimed he was "driven to this invention by a lack of data". This was a milestone, as these graphical representations are used everywhere today, the most notable being the time-series graph: a graph of many data points measured at successive uniform intervals over a period of time. Such graphs can be used to examine data such as share prices, and can be used to forecast future values (Robyn, 1978).

Adolphe Quetelet (1796-1874) was the first person to apply probability and statistics to the social sciences, in 1835. He was interested in studying human characteristics and suggested that the law of errors, commonly used in astronomy, could be applied to the study of people, so that assumptions or predictions could be made regarding a person's physical and intellectual features. Through his studies, Quetelet discovered that the distribution of certain characteristics, when diagrammed, took the shape of a bell curve. This was a significant discovery, as Quetelet later went on to formulate properties of the normal distribution curve, a vital concept in statistics today. Using his concept of the "average man", Quetelet examined other social issues such as crime rates and marriage rates. He is also well known for coming up with a formula called the Quetelet Index, more commonly known as the Body Mass Index (BMI), an indicator of obesity. It is still used today: your BMI is your weight in kilograms divided by the square of your height in metres, and an index of more than 30 means a person is officially obese (O'Connor and Robertson, 2006).

Other figures who made smaller but significant contributions to statistics are Carl Gauss and Florence Nightingale. Gauss was the first to work with the least squares estimation method, when, interested in astronomy, he attempted to predict the position of a planet. He later justified the method by assuming the errors are normally distributed. The method of least squares is widely used today, in astronomy for example, to minimise error and improve the accuracy of results and calculations (O'Connor and Robertson, 1996). It was also the most commonly used method before 1827 for combining inconsistent equations (Stigler, 1986). Nightingale was inspired by Quetelet's work on statistical graphics and produced a chart detailing the deaths of the soldiers where she worked. She later went on to analyse the state and care of medical facilities in India. This was significant, as Nightingale applied statistics to health problems and this led to improvements in medical healthcare. Her important work was recognised when she became the first woman to be a member of the Royal Statistical Society (Cohen, 1984).

One of the greatest contributors was Francis Galton (1822-1911), who helped create a statistical revolution that laid the foundations for future statisticians like Karl Pearson and Charles Spearman (Stigler, 1986). He was related to Charles Darwin and had many interests, including eugenics and anthropology. He came up with a number of vital concepts, including regression, the standard deviation and correlation, which arose when Galton was studying sweet peas. He discovered that successive generations of sweet peas were of different sizes but regressed towards the mean size of their parents' distribution (Tredoux, 2007). He later worked on the idea of correlation while studying the heights of parents and those of their children once the children reached adulthood; he diagrammed his findings and found an obvious correlation between the two. He then performed a few other experiments and concluded that the correlation index indicates the degree to which two variables are related to one another. His studies were significant, as they are all fundamental in statistics today, and these methods are used for data analysis in many areas, especially for extracting meaningful relationships between different factors (O'Connor and Robertson, 2003).

References

Stigler, S. M. (1990). The History of Statistics: The Measurement of Uncertainty before 1900. Belknap Press of Harvard University Press. pp. 1, 4, 40, 266.

Verduin, K. (2009). A Short History of Probability and Statistics. Last updated March 2009. Retrieved April 2, 2010, from http://www.leidenuniv.nl/fsw/verduin/stathist/stathist.htm

O'Connor, J. J., & Robertson, E. F. (2004). Abraham de Moivre. The MacTutor History of Mathematics archive. Retrieved April 5, 2010, from http://www-history.mcs.st-and.ac.uk/Biographies/De_Moivre.html

Beniger, J. R., & Robyn, D. L. (1978). Quantitative graphics in statistics: A brief history. The American Statistician, 32(1), 1-11.

O'Connor, J. J., & Robertson, E. F. (2006). Adolphe Quetelet. The MacTutor History of Mathematics archive. Retrieved April 6, 2010, from http://www-groups.dcs.st-andrews.ac.uk/~history/Biographies/Quetelet.html

O'Connor, J. J., & Robertson, E. F. (1996). Carl Friedrich Gauss. The MacTutor History of Mathematics archive. Retrieved April 6, 2010, from http://www-history.mcs.st-and.ac.uk/Biographies/Gauss.html

Cohen, I. B. (1984, March). Florence Nightingale. Scientific American, 250, pp. 128-137 (pp. 98-107, depending on country of sale).

Tredoux, G. (Ed.). Francis Galton. Last updated November 12, 2007. Retrieved April 7, 2010, from http://galton.org/

O'Connor, J. J., & Robertson, E. F. (2003). Francis Galton. The MacTutor History of Mathematics archive. Retrieved April 7, 2010, from http://www-history.mcs.st-and.ac.uk/Biographies/Galton.html

Statistics Essays – Histogram


We use Excel to generate a box plot that represents both the original and the corrected sets of data (the resulting diagram is not reproduced here).

The different methods of diagrammatic representation of statistical data are the bar chart, the histogram, the stem-and-leaf plot, and the lineplot. The bar chart is more appropriate for data from a discrete distribution summarised using a frequency distribution. A histogram is often used for representing data from a continuous variable summarised as a grouped frequency distribution; a histogram is therefore similar to a bar chart, but is used to present continuous data. The stem-and-leaf plot gives a visual representation similar to the histogram but has the advantage that it does not lose the detail of the individual data points in the grouping. All these diagrams serve to examine the general shape of the distribution of the data and help in making conjectures about quantities such as the median, the mean or the interquartile range. The last one, the lineplot, is often appropriate for smaller data sets, and can be useful, for example, to check whether two data sets have a common variance.
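As an illustrative sketch using made-up numbers (the essay's actual data sets did not survive in this copy), the same comparison could be drawn in Python with matplotlib instead of Excel:

    import matplotlib.pyplot as plt

    # Made-up values standing in for the 'original' and 'corrected' sets.
    original = [3.2, 3.4, 3.5, 3.6, 3.8, 3.9, 4.1, 4.3]
    corrected = [3.2, 3.4, 3.5, 3.6, 3.8, 3.9, 4.1, 3.7]

    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.boxplot([original, corrected])          # side-by-side box plots
    ax1.set_xticklabels(["original", "corrected"])
    ax1.set_title("Box plots")
    ax2.hist(original, bins=4)                  # grouped frequency view
    ax2.set_title("Histogram (original set)")
    plt.show()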

We denote by $\bar{x}$ and $\bar{y}$ the mean of the original set and the corrected set respectively, where for a data set of $n$ values the mean is

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$

Since we have an even number of observations, the median in each case is the midpoint of the two middle observations, computed for the original set and for the corrected set alike.

The standard deviation of each data set is given by

$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2},$

where $x_1, \dots, x_n$ are the different values in the data set; this is computed separately for the original set and for the corrected set.

The lower quartile is defined to be the $\frac{1}{4}(n+1)$th observation counting from below, and the upper quartile is the same but counting from above. The interquartile range is simply the difference between the upper and the lower quartile. We have the results in the following table.

                  Original set   Corrected set
Upper quartile    3.815          3.7475
Lower quartile    3.3925         3.3925
Interquartile     0.4225         0.355
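A small sketch of the quartile rule defined above, on made-up data (the essay's own values are not recoverable); the $\frac{1}{4}(n+1)$th position generally falls between two observations, so linear interpolation, one common convention, is assumed here:

    # Value at a (possibly fractional) 1-based position in a sorted list.
    def quantile_position(sorted_xs, k):
        lo = int(k) - 1              # index of the observation just below
        frac = k - int(k)
        if lo + 1 >= len(sorted_xs):
            return sorted_xs[-1]
        return sorted_xs[lo] + frac * (sorted_xs[lo + 1] - sorted_xs[lo])

    data = sorted([3.2, 3.4, 3.5, 3.6, 3.8, 3.9, 4.1, 4.3])
    n = len(data)
    lower_q = quantile_position(data, (n + 1) / 4)                  # from below
    upper_q = quantile_position(sorted(data, reverse=True), (n + 1) / 4)  # from above
    print(lower_q, upper_q, upper_q - lower_q)   # quartiles and interquartile range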

Question 2

Theoretically, the fact that 9 and 12 can be made up in as many ways as 10 and 11 suggests that both sets of numbers should have the same probability of appearing. The first thing to note is that this holds only if, when we throw a die, all the numbers have the same probability of appearing, which is not always the case in practice: we need to allow for considerations such as the non-uniformity of the surface on which the die is thrown, the angle and velocity at which it is thrown, and even any deformation of the die, all of which have an effect on the number we get. This problem thus highlights that probability cannot be an absolutely precise science, as opposed to the other branches of mathematics.
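This appears to be the classic three-dice problem famously analysed by Galileo, where the sums 9, 10, 11 and 12 each admit six unordered partitions; as a hedged check not present in the original, a direct enumeration of ordered outcomes shows how the probabilities actually compare for ideal fair dice:

    from itertools import product

    # Count ordered outcomes of three fair dice for each target sum.
    counts = {s: 0 for s in (9, 10, 11, 12)}
    for roll in product(range(1, 7), repeat=3):
        if sum(roll) in counts:
            counts[sum(roll)] += 1

    for s, c in counts.items():
        print(f"sum {s}: {c}/216")   # 9 and 12: 25/216; 10 and 11: 27/216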

Question 3

Let $p$ denote the probability that a film processed on machine X is of good quality. The quality of a film is independent of the quality of all the films processed before it; thus the probability that three films randomly chosen from a batch coming from machine X are all of good quality is simply $p^3$.

Let us denote by $A$ the event "the batch came from machine X" and by $B$ the event "the three films are all of good quality". Clearly, what we are asked for is the probability that $A$ and $B$ occur at the same time, which is the probability that the three films are all of good quality and the batch came from machine X. Using the theory of conditional probabilities, we have

$P(A \cap B) = P(B \mid A)\,P(A).$

Here $P(A)$ is the proportion of all films processed on machine X, and $P(B \mid A) = p^3$ is the probability that we calculated above. Hence

$P(A \cap B) = p^3\,P(A).$
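The numerical values in this question did not survive in this copy, so the figures below are purely illustrative assumptions used to show the calculation:

    # Illustrative numbers only; the problem's actual values are assumptions.
    p_good = 0.92          # assumed P(a film from machine X is good)
    p_machine_x = 0.7      # assumed P(batch came from machine X)

    p_three_good_given_x = p_good ** 3            # independence of film quality
    p_joint = p_three_good_given_x * p_machine_x  # P(A and B) = P(B|A) P(A)
    print(round(p_joint, 4))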

Question 4

At each question only two things can happen:
1. the student answers correctly, and we denote by $p$ the probability that this happens;
2. the student chooses a wrong outcome among the five possible, and we denote by $q$ the probability that this happens.

Obviously we must have $p + q = 1$. Given that five outcomes are available at each question, only one of which is correct, we have $p = \frac{1}{5}$ and $q = \frac{4}{5}$.

The experiment consisting of answering a single question can therefore be viewed as a Bernoulli experiment with parameter $p$. Hence, taking the whole multiple-choice examination can be viewed as a binomial experiment with parameters $(n, p)$, where $n$ is the number of questions. Let $X$ be the random variable representing the number of correct answers achieved by the student; clearly, the distribution of $X$ is binomial with parameters $(n, p)$. The probability that the student passes the test is $P(X \ge k)$ for the pass mark $k$, which is equivalent to $1 - P(X \le k - 1)$. But

$P(X \le k - 1) = \sum_{i=0}^{k-1} P(X = i),$

where for each $i$, $P(X = i) = \binom{n}{i} p^i q^{n-i}$.

Hence

$P(X \ge k) = 1 - \sum_{i=0}^{k-1} \binom{n}{i} p^i q^{n-i},$

which gives the probability that the student passes the test.
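The number of questions and the pass mark also did not survive extraction, so $n = 10$ and $k = 5$ below are illustrative assumptions; scipy's binomial distribution computes the tail sum derived above:

    from scipy.stats import binom

    n, p = 10, 1 / 5   # assumed number of questions; one correct option in five
    k = 5              # assumed pass mark

    # P(X >= k) = 1 - P(X <= k-1), the binomial tail derived above.
    p_pass = 1 - binom.cdf(k - 1, n, p)
    print(round(p_pass, 4))   # about 0.0328 under these assumed numbers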

Question 5

Bayes' Formula
Let $E$, $F$ be subsets of some sample space $S$, and let $F^c$ be the complement of $F$ in $S$. We can express $E$ as

$E = (E \cap F) \cup (E \cap F^c),$

because in order for a point to be in $E$ it must be either in both $E$ and $F$, or in $E$ but not in $F$. As $E \cap F$ and $E \cap F^c$ are mutually exclusive, we can write

$P(E) = P(E \mid F)P(F) + P(E \mid F^c)P(F^c).$

Applying this to the conditional probability equation gives

$P(F \mid E) = \frac{P(E \mid F)P(F)}{P(E \mid F)P(F) + P(E \mid F^c)P(F^c)}.$

Consider the following problem:

We have three boxes labelled U1, U2 and U3. Each of them contains a mix of white and red balls. The proportion of white balls in each of them is as follows: 30% for U1, 60% for U2, 40% for U3.
We draw one ball from U1; if it is a white ball then we draw a ball from U2, otherwise we draw a ball from U3.
We would like to find the probability that the first draw gives a red ball knowing that the second draw has given a white ball.

We denote by $B_i$ the event "the second draw is made in the box $U_i$" and by $W$ the event "the second draw gives a white ball".

Clearly, if the first draw gives a red ball, then the second draw can be made only in U3. Thus the probability that the first draw gives a red ball knowing that the second draw has given a white one is exactly the same as the probability that the second ball comes from U3 knowing that it is white, which is nothing other than $P(B_3 \mid W)$. Using Bayes' formula, we have

$P(B_3 \mid W) = \frac{P(W \mid B_3)P(B_3)}{P(W \mid B_2)P(B_2) + P(W \mid B_3)P(B_3)}. \qquad (1)$

It can easily be seen that $B_2$ and $B_3$ are mutually exclusive, as the second draw cannot happen in both U2 and U3 simultaneously. Also, since the second draw can happen only in either U2 or U3, $B_2 \cup B_3$ covers all the possibilities of where the second draw can happen. That is why

$P(W) = P(W \mid B_2)P(B_2) + P(W \mid B_3)P(B_3).$

The numerator of fraction (1) is simply an application of conditional probability.

Hence, with $P(B_2) = 0.3$ (the first ball is white), $P(B_3) = 0.7$, $P(W \mid B_2) = 0.6$ and $P(W \mid B_3) = 0.4$:

$P(B_3 \mid W) = \frac{0.4 \times 0.7}{0.6 \times 0.3 + 0.4 \times 0.7} = \frac{0.28}{0.46} = \frac{14}{23} \approx 0.609.$
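Since the numerical conclusion above was reconstructed from the stated proportions, a quick Monte Carlo simulation (not part of the original solution) can sanity-check the 14/23 result:

    import random

    # Simulate the two-stage draw; estimate P(first ball red | second white).
    trials, red_then_white, second_white = 200_000, 0, 0
    for _ in range(trials):
        first_white = random.random() < 0.3      # U1 is 30% white
        if first_white:
            white2 = random.random() < 0.6       # draw from U2 (60% white)
        else:
            white2 = random.random() < 0.4       # draw from U3 (40% white)
        if white2:
            second_white += 1
            if not first_white:
                red_then_white += 1

    print(red_then_white / second_white)   # approaches 14/23 ~ 0.6087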

Gambling Addiction Literature Review

Literature Review

Chapter 2: Literature Review

2.1 Introduction

This chapter covers a review of past literature pertaining to the topic under study. As an opening, it brings into the limelight the backbone of gambling: several definitions of gambling and the rationale behind them are put forward as described by several authors. Following this, the different types of gambling activities adopted by university students are highlighted, namely poker, sports wagering and lotteries, for example. Furthermore, gamblers' responses towards gambling activities and their problems are reviewed and contrasted.

2.2 What is gambling?

Gambling is the wagering of money or something of material value (the stakes) on an event with an uncertain outcome, with the primary intent of winning additional money or goods. The three key elements in gambling are consideration, chance and prize (I. N. Rose, 2013).

A McGill University review defines gambling as any game or activity in which one risks money or a valuable object in order to win money.

The elements present in gambling are, firstly, that one needs to realize that by gambling something valuable is being put at risk; secondly, that the outcome of the game is determined by chance; and finally, that once a bet is made it is irreversible.

2.3 History of gambling:

Gambling is one of mankind's oldest activities, as indicated by writings and equipment found in tombs and other places. The origin of gambling is considered to be divinatory: by casting marked sticks and other objects and interpreting the outcome, man sought knowledge of the future and the intentions of the gods.

Anthropologists have also pointed to the fact that gambling is more rampant in societies where there is an extensive belief in gods and spirits whose benevolence may be sought. With the advent of legal gambling houses in the 17th century, mathematicians began to take a serious interest in games with randomizing equipment, such as dice and cards, out of which grew the field of probability theory.

Organised, approved sports betting dates back to the late 18th century, when there was a swing in the official stance towards gambling: from considering it a sin, to considering it a vice and a human weakness, and lastly to seeing it as a mostly harmless and even entertaining activity.

By the start of the 21st century, approximately four out of five people in western nations gambled at least weekly.

2.4 Who is a gambler?

A person who wagers money on the outcome of games or sporting events can be categorized as a gambler. Gamblers can visit gambling houses, or use any other facility, to place their bets and hope for a win. There are three common types of gambler: the social gambler, the professional gambler and the problem gambler. Professional gamblers are the rarest type; they depend not on luck but on games of skill to make a living, and they have full control over the money, time and energy they spend on the game. The social gambler considers gambling a recreational activity, maintains control of the betting, energy and time spent on the game, and considers the betting a price paid for entertainment. Problem gambling involves continued involvement in gambling despite negative consequences, and it can lead to other health and social problems.

2.5 Gambling across the globe

2.5.1 Gambling age

The gambling age varies greatly across the globe. In some countries and areas gambling is proscribed altogether; in others it is only authorized for foreigners. In some areas everyone is allowed to play, but the age requirement is not the same for citizens as for foreigners. An example of such a country is Portugal, where foreigners are allowed to enter all casinos at the age of 18, while citizens need to be 21 or 25 depending on the gaming house.

The most common gambling age across the globe is 18 years, and more than 50% of western countries have this gambling age. There are nonetheless plenty of examples of countries with a higher limit, such as Greece and Germany. Germany is a good model of how thorny the question of gambling age really is, as Germany, just like the USA, has different ages in different states within the nation. Most German states require you to be 18 years old, but some have placed the age constraint at 21 years instead.

Generally speaking, one can see a trend of countries and states lowering the gambling age from the once-dominant age of 21 years to just 18 years. This trend has been going on for quite some time and across large parts of the world.

2.5.2 Top of the world

Certain countries are, as a whole, keen on gambling. Measured in terms of losses per capita of the adult population, the two top nations stand head and shoulders above the rest of the world. Those two famous gambling meccas are Australia and Singapore (American Gaming Association, 2006).

The top five countries in gambling losses per capita of the adult population are Australia, Singapore, Ireland, Canada and Finland. The average net yearly per-adult expenditure on gambling for these nations runs from $1,275 down to $540 (American Gaming Association, 2006).

2.6 Gambling in Mauritius:

It was recently declared that the Council of Ministers in Mauritius endorsed the resolution that bookmakers operating out of the Champ de Mars racecourse are now permitted to work only on Fridays and Saturdays. Until now, they were allowed to take bets upon publication of the official programme of races on Thursdays. The rationale given for this decision is that it will help reduce the influence of gambling on Mauritians.

Gambling has become part of the foundation of Mauritian society over the years. This includes casino gambling, online gambling, horse race betting and the "loterie verte". Although horse racing is still a popular betting sport, the Lotto, since its introduction on 7 November 2009 as the new national lottery, has exceeded it in standing. We only have to listen to the radio for a few minutes, or glance at the billboards when driving on the public road, to learn about the jackpot for the coming draw. There are more than 500 counters across the island, in supermarkets, petrol stations and shops, enabling customers to play the Lotto. Around 12 scratch cards have also been introduced, giving people the prospect of winning instant money. Where people primarily used to place their hard-earned money on horses, they are now being lured into spending it on the Lotto. A considerable number of people are already gripped by "jackpot fever", spending more than usual when the jackpot gets bigger.

2.7 Types of gambling:

Gambling is a vast world comprising many branches in which people try their luck in the hope of making more money, or simply for the thrill of the game. In Mauritius you can easily find casinos, gaming houses (smaller than casinos but offering the same services for middle-class players) and shops where you can gamble. Some of the forms of gambling available on the island are:

2.7.1 The lottery. The "loterie verte" and the Lotto are the most common and most profitable types of gambling for the government in Mauritius. The "loterie verte" is a monthly lottery for which you buy tickets at a retailer, which can be found everywhere, and then wait for the end of the month to check your results and see whether you have won. Tickets cost Rs 10 each, and prizes range from Rs 100,000 to Rs 10 million. The Lotto, on the other hand, established itself in Mauritius more recently and is now the new craze among Mauritians. The idea is to select 6 numbers out of 40 (each number can be selected only once) and then go to any supermarket or retailer to validate your 6 numbers. Each ticket costs Rs 20, and you can play as many tickets as you want. Lottotech, the company that runs the Lotto, makes a public draw, on air, on the national channel every Saturday. The Lotto has a cumulative jackpot: if no one wins the jackpot one week, it is added to a new jackpot the next week, so each time you lose you have the chance of winning a bigger one. This jackpot starts at Rs 5 million and can go up to Rs 70 million (the biggest jackpot won so far).

2.7.2 Horse racing. Horse racing has been anchored in our society for ages and nowadays forms part of our cultural and historical heritage. It was introduced in Mauritius by the English before independence, and it is still going strong. In the beginning, horse racing was more about fame and social status than about making money and gambling. Later, to make the horse industry run and thrive, the board introduced betting on the races, which was also a good opportunity for the government to collect tax money. Horse racing is a huge event in Mauritius: every Saturday, and on some special occasions on Sundays, there is racing at the Champ de Mars, the race track in the capital, Port Louis. Nowadays, in every rural and urban area, you can find bookmakers who will take your bets on the horses from Friday onward, and on racing days a huge crowd converges on the Champ de Mars for the fun and in the hope of making money.

2.7.3 Casino. A casino is a facility that accommodates certain types of gambling activities, such as slot machines, poker, blackjack, big or small, van lak, dice and roulette. Casinos are situated in strategic areas to lure more and more clients; such areas might be near hotels or tourist attractions, or in a city or town frequented by many people. In Mauritius there are many casinos and gaming houses (smaller casinos, but still well frequented), found in urban areas such as Rose-Hill, Vacoas and Port-Louis and in tourist spots such as Grand Baie. Most games played have mathematically determined odds that ensure the house has an overall advantage over the players at all times. This can be expressed more precisely by the notion of expected value, which is uniformly negative from the player's perspective. This advantage is called the house edge, hence the adage "the house always wins". In Mauritius nowadays we can witness more and more casinos being granted a licence and opening their doors to the public. The government knows this is a prolific market: if gambling can be made accessible to more tourists and people, it will surely be an advantage, since the casinos have to pay large taxes and fees to obtain their licence. Several tournaments are organized in Mauritius, such as the World Poker Tour National Mauritius, which lures people from all over Africa and the Indian Ocean to come to Mauritius just to play poker. When the hotels advertise the island, they now also advertise the casinos to attract more tourists: a new clientele and a really good strategy that differs from other hotels, as they increasingly target high-class "gambling tourists", a very profitable market.
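To make the expected-value point concrete (an illustration added here, not from the original text): a single-number bet on a European roulette wheel pays 35 to 1 and there are 37 equally likely pockets, so the expected value per unit staked works out to the familiar house edge of about 2.7%:

    # Expected value of a 1-unit single-number bet on European roulette:
    # win 35 with probability 1/37, lose 1 with probability 36/37.
    p_win = 1 / 37
    expected_value = 35 * p_win + (-1) * (1 - p_win)
    print(f"{expected_value:.4f}")   # about -0.027, i.e. a ~2.7% house edge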

2.7.4 Scratch cards. This is the new craze among Mauritians. Scratch cards are simple and easily available across the whole island. The rule is simple: buy one and scratch off the opaque surface that conceals the information; if you get the required symbols, you win. The most attractive parts are the opportunity to win instantly, compared with a lottery where you have to wait for the draw, the prices at which the cards are sold, and the prizes you can win from them. Cards range from Rs 20 to Rs 100, and prizes vary from Rs 200,000 to Rs 1 million. The scratch cards are supervised by Lottotech, the same company that manages the Lotto in Mauritius.

2.7.5 Online gambling. Easy, available and affordable are the words usually associated with online gambling. It is easy to log in to some betting sites: no account needs to be created and no fees have to be paid. It is available in the sense that it is all over the internet; you do not have to look far to find online gambling sites, and banner ads and pop-ups can be found on almost every high-traffic site. It is affordable since some sites let you bet for free, requiring you to pay in only if you win and want to keep playing, while others let you choose how much you want to bet and give you live odds according to what is happening, which cannot be found elsewhere. Online gambling often targets teenagers through a strategy called "grooming", whereby teenagers are made to feel acquainted with the attractiveness of the game so that, when they are older, they remain potential players and sources of income.

2.8 Gambling among university students

Gambling is omnipresent among university students, as research has demonstrated. The vast majority of students gamble without experiencing ill effects, yet almost 8% of university students may develop a gambling problem (Derevensky & Gupta, 2007). Gambling was once an acceptable form of entertainment on campuses, but with the new laws it is now forbidden to participate in any kind of gambling activity, although it can still be found everywhere. However, the warning signs of a developing gambling problem are not publicized in the way they are for other potentially addictive behaviors, such as drug use and alcohol consumption. With the growth in gambling venues, the social acceptance of gambling, and access to extensive and inexpensive means of gambling, it is not astounding that studies have found high rates of gambling-related problems among college students.

2.9 Problem gambling

Problem gambling, or ludomania, is an urge to gamble continuously despite harmful negative consequences or a desire to stop. The prevalence of problem gambling has been estimated at 7.8% among university students, considerably higher than the roughly 5% rate found in the general population (Blinn-Pike, Worthy, & Jonkman, 2006). Students facing problem gambling show many signs, including isolating behavior, lowered academic performance, poor impulse control, extreme overconfidence, and participation in other high-risk behaviors such as alcohol, tobacco and marijuana use and risky sexual behavior (LaBrie et al., 2003; Goodie, 2005). Environmental factors also contribute to problem gambling: a student's surroundings are a key factor in determining whether he or she is prone to it. If students live in an area where gambling opportunities are available and social normative beliefs are supportive of gambling activities, the likelihood of gambling participation and of developing a gambling problem increases. Staff who are conscious of the environmental conditions that may contribute to problem gambling can develop policies to help these students (Wehner, 2007).

2.9.1 Gambling Addiction and Problem Gambling

Whether you bet on scratch cards, sports, poker, roulette or slots, in a casino or online, problem gambling can strain relationships, interfere with work and lead to financial disaster. You may even do things you never thought you would, like stealing money to gamble or to repay your debts. You may believe you cannot stop but, with the right help, you can overcome a gambling problem or compulsion and regain control of your life. The first step is recognizing and acknowledging the problem. Gambling addiction is sometimes referred to as the “hidden illness” because there are no apparent physical signs or symptoms as there are in drug or alcohol addiction. Problem gamblers typically deny or minimize the problem, and they go to great lengths to hide their gambling habits: they often withdraw from their loved ones, sneak around, and lie about where they have been and what they have been up to (Segal, Smith & Robinson, 2013).

DuPont Enterprise Financial Analysis

With the fast pace of modern society, competition between companies is becoming ever fiercer. To keep up with financial progress, rational analyses are required for an enterprise to understand its financial situation and operational efficiency. Based on these analyses, entrepreneurs can judge their enterprise's competitive position in the industry and its capacity for sustainable development. DuPont analysis and factor analysis have been widely applied in enterprise financial analysis. These methods can accurately calculate the direction and extent of the influence of various factors on financial indicators, help enterprises plan in advance, provide in-process control and after-the-fact supervision, promote goal management and improve the overall level of enterprise management (Casella & Berger, 2002). Which analysis method is more informative for analysing corporate financial information? Admittedly, DuPont analysis plays a necessary role in financial analysis, while some experts argue against this view, claiming that factor analysis has a wider range of applications. This essay aims to explore the application of DuPont analysis in corporate financial management and to ask whether it is more feasible than factor analysis in terms of enterprise development.

There is no doubt that DuPont analysis must first be introduced in order to assess its feasibility. Taking into account the inner links between financial indicators, DuPont analysis uses the relationships between several major financial ratios to analyse the financial position of the enterprise synthetically. It is a classical method for evaluating company profitability and the level of shareholders' equity returns, and for evaluating enterprise performance from a financial perspective (Angelico & Nikbakht, 2000). The basic idea of DuPont analysis is to decompose the enterprise's return on net assets into the product of a number of financial ratios, which supports an in-depth analysis of business performance. The most significant feature of the DuPont model is that it connects the several ratios used to evaluate corporate efficiency and financial condition according to their inner links, forms them into a complete index system, and finally summarises the enterprise comprehensively through the return on equity (Angelico & Nikbakht, 2000). This method makes financial ratio analysis clearer, better organized and more prominent, providing financial statement analysts with a view of the enterprise's operations and profitability. DuPont analysis arranges the related values according to their inner links in the DuPont chart, with return on equity as the core value. Three key points should be noted when applying DuPont analysis (Bartholomew, Steele, et al., 2008): first, the net profit margin on sales reflects the relationship between net profit and sales income, and it depends on sales revenue and total cost. Second, total assets are an important factor influencing the asset turnover ratio and the return on equity. Third, the equity multiplier is influenced by the asset-liability ratio. In sum, the DuPont analysis system can explain the reasons for, and trends in, factor changes.
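To make the decomposition concrete, the following minimal sketch (in Python, with purely hypothetical figures) computes return on equity as the product of the three component ratios named above:

    # Minimal sketch of the DuPont decomposition; all figures are hypothetical.
    net_profit = 120_000.0
    sales = 1_500_000.0
    total_assets = 2_000_000.0
    shareholders_equity = 800_000.0

    net_profit_margin = net_profit / sales                   # profitability of sales
    asset_turnover = sales / total_assets                    # efficiency of asset use
    equity_multiplier = total_assets / shareholders_equity   # financial leverage

    # Return on equity as the product of the three component ratios.
    roe = net_profit_margin * asset_turnover * equity_multiplier
    assert abs(roe - net_profit / shareholders_equity) < 1e-12

    print(f"margin {net_profit_margin:.2%}, turnover {asset_turnover:.2f}, "
          f"multiplier {equity_multiplier:.2f} -> ROE {roe:.2%}")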

Though DuPont analysis has many advantages and is widely applied, it also has limitations. From the perspective of performance evaluation, DuPont analysis can only show financial information and cannot reflect the full strength of an enterprise (Harman, 1976). Primarily, it focuses on short-term financial results and ignores long-term value creation. Moreover, its financial indicators reflect the enterprise's past operating performance, which suited the measurement needs of industrial enterprises in an earlier era; in the current information age, however, customers, suppliers, employees and technology innovators have a growing influence on operating performance, and DuPont analysis is powerless in these respects. In addition, DuPont analysis cannot solve the problem of valuing intangible assets, which is very important for enhancing the long-term competitiveness of enterprises.

Despite these drawbacks, DuPont analysis remains one of the most prevalent tools in enterprises around the world. The main reason is that enterprises nowadays combine classical DuPont analysis with modern financial management goals, designing new DuPont analysis methods based on combining the goal of enterprise value maximization with the goal of maximizing stakeholders' interests. Viewed this way, stakeholders include not only the shareholders of an enterprise but also creditors, business operators, customers, suppliers, employees and government. All these parties are essential for corporate financial management: damage to the interests of any one group of stakeholders is conducive neither to the sustainable development of the company nor to the maximization of enterprise value. In other words, the ultimate aim of the new DuPont analysis is, within the framework of law and morality and under the premise of harmonious development, to balance the corporate stakeholders' interests effectively and realize the maximization of enterprise value. On top of that, new DuPont

However, factor analysis is feasible in areas where DuPont analysis is not. Factor analysis is mainly used to determine the direction and degree of influence of every factor on the total change in some economic phenomenon affected by many factors (Bartholomew, Steele, et al., 2008). It is an application and development of the index method principle. When analysing a change influenced by many factors, in order to observe the effect of one factor's change, all other factors are held fixed and the factors are then analysed and replaced item by item; for this reason the method is also known as the sequential substitution method (Harman, 1976). Built on comparative analysis, factor analysis is frequently used to find differences in the comparison process and to explore their causes further (Larsen & Warne, 2010). Using the factor analysis method, the first step is to study the formation process of the object of analysis and identify its various factors; the factors are then compared with the corresponding criterion item by item to determine how much each contributes to the differences, which helps to find the main contradiction and indicates the main direction for solving the problem in the next step. For instance, the relationship between a financial value and its related factors can be represented as: actual value P1 = A1 x B1 x C1; standard value P2 = A2 x B2 x C2. The overall variance between the actual value and the standard value is P1 - P2, and it is affected by the three factors A, B and C. The degree of influence of each factor can be calculated as: influence of factor A: (A1 - A2) x B2 x C2; influence of factor B: A1 x (B1 - B2) x C2; influence of factor C: A1 x B1 x (C1 - C2). Adding these influence values together gives the overall variance P1 - P2. From this analysis it can be seen that factor analysis supports a detailed analysis of the degree of influence and can better guide decision makers in finding financial issues and proposing solutions.
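The worked example above can be reproduced directly. The following short sketch (with hypothetical factor values) implements the sequential substitution calculations and checks that the three influences sum to the overall variance:

    # Sequential substitution as described above; factor values are hypothetical.
    A1, B1, C1 = 1.2, 0.9, 1.1   # actual values of the three factors
    A2, B2, C2 = 1.0, 1.0, 1.0   # standard (baseline) values

    P1 = A1 * B1 * C1            # actual value
    P2 = A2 * B2 * C2            # standard value
    overall_variance = P1 - P2

    influence_A = (A1 - A2) * B2 * C2
    influence_B = A1 * (B1 - B2) * C2
    influence_C = A1 * B1 * (C1 - C2)

    # The three influences sum exactly to the overall variance.
    assert abs(influence_A + influence_B + influence_C - overall_variance) < 1e-12
    print(f"P1 - P2 = {overall_variance:+.3f}: A {influence_A:+.3f}, "
          f"B {influence_B:+.3f}, C {influence_C:+.3f}")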

In conclusion, DuPont analysis and factor analysis each have their own range of application. Although the DuPont analysis system is better at explaining the reasons for and trends in financial index changes, factor analysis is stronger in enterprises' financial analysis: it can be used for a more detailed analysis of the degree of influence and can better guide decision makers in ultimately finding financial issues and proposing fundamental solutions. In sum, the factor analysis method has the more extensive scope of application.

Design Thinking and Decision Analysis

Topic:

How can decision analysis support the decision making process in design thinking in selecting the most promising properties during the transition from divergent to convergent thinking phases?

Executive Summary

Table of Contents

Executive Summary

List of Figures

List of Tables

Index of Abbreviations

1. Introduction

2. Overview of Design Thinking and Decision Analysis

2.1. A New Approach to Problem Solving

2.1.1. What is a Design Thinker?

2.1.2. The Iterative Steps of Design Thinking

2.2. Decision Analysis

2.2.1. Decision Analysis Process

2.2.2. Multi Attribute Decision Making

3. Application Based on a Case Study

3.1. The Design Challenge

3.2. The Static Model

3.2.1. The Alternatives

3.2.2. Objectives and Measures of Effectiveness

3.2.3. Multi Attribute Decision Making

3.2.4. Sensitivity Analysis of the SAW Method

3.3. The Case Study's Solution

4. Conclusion

List of Literature

Statutory Declaration

Appendix

List of Figures

Figure 1: The IDEO process – a five step method (Kelley and Littman, 2004: 6-7)

Figure 2: The HPI process – a six step model (Plattner et al., 2009)

Figure 3: Fundamentals of Decision Analysis (Ralph L. Keeney, 1982)

Figure 4: Schematic form of a decision tree (Keeney and Raiffa, 1993)

Figure 5: A choice problem between two lotteries (Keeney and Raiffa, 1993)

Figure 6: MADM Decision Matrix

Figure 7: The three main idea clusters

Figure 8: Decision Making Matrix

Figure 9: Decision Maker Matrix for the Design Challenge

List of Tables

Table 1: Different ways of describing design thinking (Lucy Kimbell, 2011)

Table 2: Realization of attributes in alternatives scale

Index of Abbreviations

DA Decision Analysis

DC Design Challenge

DM Decision Maker

DT Design Thinking

HPI Hasso-Plattner-Institute

HMW How Might We

IWITM I Wonder If That Means

MADM Multi Attribute Decision Making

MCDM Multi Criteria Decision Making

SAW Simple Additive Weighting

1. Introduction

Everyone needs to make decisions every day. Those decisions may be shaped by an outstanding problem that simply needs to be solved, or they may be as mundane as the question of whether or not to buy a new pair of shoes. Moreover, a problem may be easily solvable with a simple equation, or it may first need to be formulated because the difficulty is too diffuse to grasp. Given the huge variety of problems our society faces every day, each with divergent requirements for a solution process, there is a constant need to draft and identify methods that support people in making decisions. Undoubtedly, many methodologies and approaches exist that support decision making, from small daily decisions to life-changing ones. Decision analysis (DA), one of the formal methods, and design thinking (DT), one of the innovative methodologies, are two instances of such problem solving methods.

Both methods have been applied in similar fields, such as business, technology and personal life, but with divergent intentions. On the one hand, there is DT, one of the more recent methodologies, which helps to get from a problem to a solution through a finite number of iterative steps that the design thinker follows. Brown, the CEO of IDEO, describes DT as a method so powerful and accessible that it can be used by teams across the globe to create impactful, innovative ideas that can be realized by society or companies (Brown, 2010: 3). On the other hand, there is DA, an approach comprising a variety of procedures that help to find a formal solution to an identified problem and create a more structured solution procedure. Howard coined the term DA in 1964 and has been irreplaceable for its development (Roebuck, 2011: 99).

This paper combines DA and DT to investigate whether DA can leverage the DT process in order to find the most viable solution to a problem, and whether the two approaches can profit from each other. Selected procedures of DA will be integrated into the DT process by reference to a case study. In addition, the solution generated by the DA technique will be compared with the alternative chosen in the case study, which followed the regular DT process. By comparing these two outcomes, the paper will work out whether or not DA can support the DT process.

The second chapter describes the fundamentals of DA and DT. After this outline of the foundations, the third chapter applies chosen DA procedures to the DT process on the basis of a case study; the alternative chosen by the design thinking team in the case study will also be analysed. In the final chapter, the major findings are summarized and evaluated.

2. Overview of Design Thinking and Decision Analysis
2.1. A New Approach to Problem Solving

Design Thinking is an iterative and innovative approach to solving problems of all kinds that society is facing. It is a human-centred and at the same time investigatory process that puts its emphasis on collaboration, prototyping and field research (Lockwood, 2010: xi). It is a set of fundamentals that can be applied by different people and to a huge range of problems (Brown, 2010: 7). DT is not a linear but an iterative process, in which the designers constantly learn from mistakes and improve their ideas. Designers hope to find a linear model that will help them understand the logic behind a design process; hence the constant search for decision making protocols that would support designers' processes (Buchanan, 1992). In sum, DT is a user-centred approach to solving a variety of problems, with the aim of integrating people from various fields, ranging from consumers and business people to designers.

There are a variety of ways to describe DT, as illustrated in Table 1. According to Brown, DT is an organisational resource with the goal of creating innovation. Cross describes the method as a cognitive style whose purpose is problem solving. Another well-known definition concludes that “Design Thinking means thinking like a designer would” (Roger, 2009). However, the purpose and aim of DT are at their core identical, whether one applies the processes shaped by Cross or by Brown (Plattner et al., 2009: 113).

Design thinking as a cognitive style

Key texts: Cross 1982; Schon 1986; Rowe [1987] 1998; Lawson 1997; Cross 2006; Dorst 2016
Focus: Individual designers, especially experts
Design's purposes: Problem solving
Key concepts: Design ability as a form of intelligence; reflection-in-action; abductive thinking
Nature of design problems: Design problems are ill-structured; problem and solution co-evolve
Sites of design expertise and activity: Traditional design disciplines

Design Thinking as a general theory of design

Key texts: Buchanan 1992
Focus: Design as a field or discipline
Design's purposes: Taming wicked problems
Key concepts: Design has no special subject matter of its own
Nature of design problems: Design problems are wicked problems
Sites of design expertise and activity: Four orders of design

Design Thinking as an organizational resource

Key texts: Dunne and Martin 2006; Bauer and Eagan 2008; Brown 2009; Martin 2009
Focus: Businesses and other organizations in need of innovation
Design's purposes: Innovations
Key concepts: Visualization, prototyping, empathy, integrative thinking, abductive thinking
Nature of design problems: Organizational problems are design problems
Sites of design expertise and activity: Any context from healthcare to access to clean water (Brown and Wyatt 2010)

Table 1: Different ways of describing design thinking (Lucy Kimbell, 2011)

Over the last five years, the term DT has become very present in our society. DT is a new term in design and management circles, which shows the demand for creative and innovative methods across various sectors (Kimbell, 2011). Nevertheless, the method is still underdeveloped when it comes to applying design methods at the management level (Dunne and Roger, 2006). But why is the interest in design growing, and why has the term become ubiquitous? Society is facing many challenges, from educational problems to global warming and economic crises. Brown sees DT as a powerful approach that can be applied to a huge variety of problems and consequently creates impactful solutions to these challenges; on top of that, he argues that design has become nothing short of a strategy for viability (Brown, 2010: 3). The method is not limited to the creation and design of a physical product: it can also result in the conception of a process, of tools to communicate, or of a service (Brown, 2010: 7). It is therefore a method that helps to learn from mistakes and to find impactful and sustainable solutions.

2.1.1. What is a Design Thinker?

Many individuals have their own personal picture of what a designer is and mostly would not associate themselves with the term. Nevertheless, the expression designer is not limited to creative graphic designers working in agencies. Many professionals fall under the term, from people working in corporations who are trying to implement a new, innovative way of thinking to people who are creating a new customer experience (Porcini, 2009). Mauro Porcini puts a lot of emphasis on the fact that describing design is a huge challenge, since design can be anything from recognizing impactful solutions to shaping the personal experience those solutions will create (Porcini, 2009).

According to Brown, design thinkers have four characteristics in common (Brown, 2008):

Empathy

Design thinkers have the ability to walk in the shoes of someone else; they view situations from the perspective of other people. This talent allows them to see many things that others are not able to observe, which leads to solutions especially tailor-made for the users.

Integrative thinking

Integrative thinking allows the design thinker to go beyond simple solutions by seeing and assembling all the noticeable connections into a solution. The ability not to rely on processes characterized by an A-or-B choice allows them to consider even contradictory solutions.

Optimism and Experimentalism

Design thinkers are individuals who are confident that for each existing solution there is another one that is more impactful and feasible for the corresponding stakeholders. By experimenting with new information and the existing circumstances, and by asking the most powerful questions, design thinkers are able to arrive at long-lasting innovations.

Collaboration

Another key aspect of the design thinking process is the ability to collaborate with experts from a variety of fields. This talent makes it possible to integrate not only the designers and producers but also the end user. Moreover, design thinkers themselves have experience in many different fields and are not only experts in DT.

2.1.2. The Iterative Steps of Design Thinking

As already mentioned above, there are many ways to describe DT. Furthermore, the process is variously described in the literature as having three, five or six steps. For example, at IDEO, one of the leading design consultancies in the world, the designers work with a five step model (Kelley and Littman, 2004: 6-7).

Figure 1: The IDEO process – a five step method (Kelley and Littman, 2004: 6-7)

At the Hasso-Plattner-Institute in Potsdam, however, the process consists of six steps. The two processes differ in the number of their steps, but only in their emphasis within the overall process and in their descriptions, not in their principles (Plattner et al., 2009: 113). In order to describe the process that will later be applied to a case study, this thesis focuses on the six step process described by Plattner et al. (Plattner et al., 2009: 114).

Figure 2: The HPI process – a six step model (Plattner et al., 2009)

Understand

The iterative DT process starts with a phase called understanding, which includes defining the problem and delimiting the scope. Defining the so-called Design Challenge (DC) is crucial for the success of the method, since the whole team working on the challenge needs to have the same understanding of the problem to be solved. Moreover, the team needs to identify the target group in order to be able to move to the next phase. In this first phase, the emphasis is on obtaining the knowledge required to solve the formulated DC.

Observe

The aim of the second phase is to become an expert. The DT team observes all the existing solutions to the identified problem and challenges them; more specifically, the team tries to improve its understanding of why there has not yet been an adequate solution. The team tries to get a 360° view of the problem, integrating all participants and people affected. One of the main activities in this phase is direct contact with the future users or clients of the product or service for the intended solution. It is crucial to involve the future users, since these people form the target group and know their own wishes, requirements, ways of behaviour and needs. In addition, the team needs to examine the relevant processes and behaviours carefully; to do so, the team needs to walk in the shoes of the end users. In sum, the second phase emphasises the need to reproduce the end users' ways of behaviour while fully understanding their perspective.

Point of View

The third phase, called point of view, is the stage at which all the findings from the previous phases are interpreted and evaluated. Since in most cases the team has branched out in the second phase, this phase brings everyone together to exchange findings. The team separates the relevant facts from the dispensable information; this separation helps to define the point of view more precisely, which will ease the fourth phase for the whole team. A method often used at this stage is the creation of a persona, a fictive and ideal-typical end user of the product or service. During this exercise the whole team deploys its findings from the second, observing phase, with the aim of finding the right viewing angle on the DC. For the purpose of finding the right perspective, it is important to question and realign the problem from a huge variety of different viewing angles. In summary, during the third phase the team assembles the key aspects from the end users in order to be able to start generating ideas in the next phase.

Ideate

The ideation phase is characterized by the reorientation of the team’s thinking process from divergent to convergent thinking. In the beginning of the phase, the team is still in a divergent thought process – the group of people is generating as many ideas for a solution as possible. All these concepts should contain a potential solution to the DC and should not be debated by the team in the beginning. It is a phase during which the team experiments with a variety of ideas and invests in the creative thinking process by leaving as much room as possible for everyone to generate constructive ideas.

In contrast to the first half of the ideation phase, the second half is shaped by the convergent method. During the convergent thought process, the team's goal is to identify the single best solution or solutions to the DC. This process consists of logical steps towards narrowing down the solutions. There are several creative techniques for narrowing down the ideas in the ideation phase, for example (Center for Care Innovation, 2013-2014):

Sticky note voting: Every team member gets three stickers and places those next to the ideas that are most viable and feasible to him/her. The ideas with the most stickers will be prototyped in the next phase.
Idea morphing: Each idea is presented in front of the whole team. After each presentation the team looks for synergies, merging some ideas or mixing some of their elements.

In sum, during this phase the team generates ideas for the exploration of solutions with the help of the information gathered during the last three phases.

Prototype

This phase appears to many people to be really different from what they have been used to in solution-oriented processes. The aim of this phase is to visualize the ideas for the users; thereby, the users are able to give feedback more easily and may also be able to test the idea. The prototype should not be the perfect embodiment of the idea; rather, the preproduction model should transfer the message, show the strengths and weaknesses of the idea and, moreover, help the team to improve the idea even further. It is a visualization of the idea with the use of, for example, modelling clay, paper, Lego figures or any material that might be within reach. If the solution is a service function, the prototype might be a theatrical performance. Moreover, some teams create a virtual prototype if the idea cannot be visualized in a real model. All in all, the intention of the phase is to make an idea come alive and visible to the users.

Test

During the testing phase the idea is tried out with the user. The most important part of this step is that the idea is tested with the end users and not only within the DT team itself. The testing phase is about identifying the idea's strengths and weaknesses together with the end user. It is about identifying mistakes, because only from these misconceptions can the team learn and further improve the idea, since it is all about the user who will be making use of it. Therefore, the team has to put a lot of emphasis on learning from that experience.

2.2. Decision Analysis

Every human being constantly makes decisions throughout the day. On the one hand, there are many minor decisions, from the choice of food each day and the question of whether to stay in bed or not, to the colour of the clothes someone wants to wear. On the other hand, people face situations where they have to decide whether to take a job or which car to purchase. Some decisions have a larger and more significant impact than others; it is therefore important to understand the consequences of the decisions being made (Gregory, 1988: 2).

Decision Analysis is designed to help when dealing with difficult decisions by offering more structure and guidance (Clemen, 1996: 4). DA supports the decision making process: it helps the decision maker (DM) to understand better and more fully the obstacles connected with having to make a decision and, on top of that, helps to make better choices (Clemen, 1996: 3). Moreover, DA permits the operator to make decisions in a more effective and consistent way (Clemen, 1996: 4). In consequence, DA is a framework as well as a toolkit for approaching various decisions. Nevertheless, judgement differs from person to person: one DM may have a preference which manifests itself in the chosen attributes and alternatives, another may not, and judgement skills vary from DM to DM as well (Hwang and Yoon, 1981: 8).

According to Keeney, the DA approach concentrates on five fundamental issues that are elementary for all decisions (Keeney, 1982):

Figure 3: Fundamentals of Decision Analysis (Ralph L. Keeney, 1982)

In order to address multidisciplinary problems, the decision problem is divided into several parts which are analysed and integrated during the DA process (Keeney, 1982). Over the years, various approaches have been identified, such as the DA process shaped by Keeney or the Multi Attribute Decision Making (MADM) method. The latter supports decision making when a finite number of alternatives have been identified with various, mostly conflicting, attributes.
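To illustrate how a MADM method narrows a finite set of alternatives, the following minimal sketch applies Simple Additive Weighting (SAW), the method listed in the index of abbreviations above; the alternatives, attribute scores and weights are hypothetical:

    import numpy as np

    # Hypothetical decision matrix: rows = alternatives, columns = attributes,
    # all treated as benefit attributes (larger is better).
    scores = np.array([
        [7.0, 4.0, 9.0],   # alternative A
        [8.0, 6.0, 5.0],   # alternative B
        [5.0, 9.0, 6.0],   # alternative C
    ])
    weights = np.array([0.5, 0.3, 0.2])   # importance weights summing to 1

    # Normalise each attribute so differently scaled measures become comparable.
    normalised = scores / scores.max(axis=0)

    # SAW: weighted sum of the normalised attribute values per alternative.
    saw = normalised @ weights
    ranking = {name: round(float(s), 3) for name, s in zip("ABC", saw)}
    print(ranking, "-> choose", max(ranking, key=ranking.get))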

2.2.1. Decision Analysis Process

Over the last decades, many analysts have worked on modifying and improving the steps included in the DA process; there are therefore many procedures available, all with a common purpose: choosing the best alternative. Keeney describes the DA process in five major steps (Keeney and Raiffa, 1993: 5-6):

Preanalysis

During the first phase the focus is on gathering the alternatives and clarifying the objectives. The DM faces a situation in which there is indecisiveness about the steps that are relevant to solving the problem. At this stage the problem is already at hand.

Structural analysis

At this stage the DM is confronted with structuring the problem. There are several questions that the DM will need to answer: for example, what decisions can be made now? Which decisions can be delayed? Is there specific information that supports the choices that could be made? Figure 4 shows a decision tree in which the abovementioned questions are systematically put into place. The decision nodes, displayed as 1 and 3 (squares), are the nodes controlled by the DM; the chance nodes, shown as 2 and 4 (circles), are the nodes not fully controlled by the DM.

Figure 4: Schematic form of a decision tree (Keeney and Raiffa, 1993)

Uncertainty Analysis

The third phase, the Uncertainty Analysis, starts with assigning probabilities to each path branching off from the chance nodes (in Figure 4, these are the paths left and right of points 2 and 4). The assignment of probabilities to the branches of the decision tree is a subjective procedure (Keeney and Raiffa, 1993: 6; Gregory, 1988: 172). Nevertheless, the DM makes the assignments using a variety of techniques based on experimental data, and these assignments are then checked for consistency.

Utility or Value Analysis

The objective of the fourth step is the assignment of so-called utility values to each path of the decision tree, where these values represent the consequences connected to that path. The decision path shown in Figure 4 represents only one plausible path. In a real problem, many factors will be associated with a path, such as economic costs, psychological costs and the benefits that the DM considers relevant.
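The rollback logic behind such a tree can be sketched compactly. The example below (with a hypothetical structure and invented probabilities and utilities, loosely mirroring Figure 4) computes expected values at the chance nodes and takes the best alternative at the decision node:

    # Hypothetical decision tree rolled back by expected value: chance nodes
    # average over their branches, the decision node takes the best alternative.

    def chance(branches):
        """Expected value of a chance node: probability-weighted branch values."""
        return sum(p * v for p, v in branches)

    def decision(alternatives):
        """Value of a decision node: the best available alternative."""
        return max(alternatives.values())

    # Branch probabilities (third DA step) and utilities (fourth DA step) are
    # invented here purely for illustration.
    act_now = chance([(0.6, 100.0), (0.4, -20.0)])   # e.g. node 2 in Figure 4
    wait = chance([(0.3, 150.0), (0.7, 20.0)])       # e.g. node 4 in Figure 4

    root_value = decision({"act now": act_now, "wait": wait})
    print(f"act now: {act_now:.1f}, wait: {wait:.1f} -> best value: {root_value:.1f}")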

Construction of a Research Questionnaire

Construction of appropriate questionnaire items

Section 2, Question 3

Describe what is involved in testing and validating a research questionnaire. (The answer to question 3 should be no fewer than 6 pages, including references)

The following criteria will be used in assessing question 3:

Construction of appropriate questionnaire items
Sophistication of understanding of crucial design issues
Plan for use of appropriate sampling method and sample
Plan to address validity and reliability in a manner appropriate to methodology

In order to construct an appropriate research questionnaire, it is imperative to first have a clear understanding of the scope of the research project. It would be most beneficial to solidify these research goals in written form, and then focus the direction of the study to address the research questions. After developing the research questions, the researcher would further read the related literature regarding the research topic, specifically searching for ideas and theories based on the analysis of the construct(s) to be measured. Constructs are essentially “mathematical descriptions or theories of how our test behavior is either likely to change following or during certain situations” (Kubiszyn & Borich, 2007, p. 311). It is important to know what the literature says about these construct(s) and the most accurate, concise ways to measure them. Constructs are psychological in nature and are not tangible, concrete variables because they cannot be observed directly (Gay & Airasian, 2003). Hopkins (1998) explains that “psychological constructs are unobservable, postulated variables that have evolved either informally or from psychological theory” (p. 99). Hopkins also maintains that when developing the items to measure the construct(s), it is imperative to ask multiple items per construct to ensure they are being adequately measured. Another important aspect in developing items for a questionnaire is to find an appropriate scale for all the items to be measured (Gay & Airasian, 2003). Again, this requires researching survey instruments similar to the one being developed for the current study and also determining what the literature says about how to best measure these constructs.

The next step in designing the research questionnaire is to validate it, that is, to ensure it is measuring what it is intended to measure. In this case, the researcher would first establish construct validity evidence, which is ensuring that the research questionnaire is measuring the ideas and theories related to the research project. An instrument has construct validity evidence if “its relationship to other information corresponds well with some theory” (Kubiszyn & Borich, 2007, p. 309). Another reason to go through the validation process is to minimize factors that can weaken the validity of a research instrument, including unclear test directions, confusing and/or ambiguous test items, and vocabulary and sentence structures too difficult for test takers (Gay & Airasian, 2003).

After developing a rough draft of the questionnaire, including the items that measure the construct(s) for this study, the researcher should then gather a small focus group that is representative of the population to be studied (Johnson, 2007). The purpose of this focus group is to discuss the research topic, to gain additional perspectives about the study, and to consider new ideas about how to improve the research questionnaire so it is measuring the constructs accurately. This focus group provides the researcher with insight on what questions to revise and what questions should be added or deleted, if any. The focus group can also provide important information as to what type of language and vocabulary is appropriate for the group to be studied and how to best approach them (Krueger & Casey, 2009). All of this group’s feedback would be recorded and used to make changes, edits, and revisions to the research questionnaire.

Another step in the validation process is to let a panel of experts (fellow researchers, professors, those who have expertise in the field of study) read and review the survey instrument, checking it for grammatical errors, wording issues, unclear items (loaded questions, biased questions), and offer their feedback. Also, their input regarding the validity of the items is vital. As with the other focus group, any feedback should be recorded and used to make changes, edits, and revisions to the research questionnaire (Johnson, 2007).

The next step entails referring to the feedback received from the focus group and panel of experts. Any issues detected by the groups must be addressed so the research questionnaire can serve its purpose (Johnson, 2007). Next, the researcher should revise the questions and research questionnaire, considering all the input obtained and make any other changes that would improve the instrument. Any feedback obtained regarding the wording of items must be carefully considered, because the participants in the study must understand exactly what the questions are asking so they can respond accurately and honestly. It is also imperative to consider the feedback regarding the directions and wording of the research questionnaire. The directions of the questionnaire should be clear and concise, leaving nothing to personal interpretation (Suskie, 1996). The goal is that all participants should be able to read the directions and know precisely how to respond and complete the questionnaire. To better ensure honesty of responses, it is imperative to state in the directions that answers are anonymous (if applicable), and if they mistakenly write any identifying marks on the questionnaire, those marks will be immediately erased. If that type of scenario is not possible in the design of the study, the researcher should still communicate the confidentiality of the information obtained in this study and how their personal answers and other information will not be shared with anyone. Whatever the case or research design, the idea is to have participants answer the questions honestly so the most accurate results are obtained. Assuring anonymity and/or confidentiality to participants is another way to help ensure that valid data are collected.

The next phase entails pilot-testing the research questionnaire on a sample of people similar to the population on which the survey will ultimately be administered. This group should comprise approximately 20 people (Johnson, 2007), and the instrument should be administered under conditions similar to those of the actual study. The purpose of this pilot-test is two-fold: the first reason is to once again check the validity of the instrument by obtaining feedback from this group, and the second is to perform a reliability analysis. Reliability is basically “the degree to which a test consistently measures whatever it is measuring” (Gay & Airasian, 2003, p. 141). A reliability analysis is essential when developing a research questionnaire because a research instrument lacking reliability cannot measure any variable better than chance alone (Hopkins, 1998). Hopkins goes on to say that reliability is an essential prerequisite to validity, because a research instrument must consistently yield reliable scores for there to be any confidence in validity. After administering the research questionnaire to this small group, a reliability analysis of the results must be done. The reliability analysis to be used is Cronbach's alpha (Hopkins, 1998), which allows an overall reliability coefficient to be calculated, as well as coefficients for each of the sub-constructs (if any). The overall instrument, as well as the sub-constructs, should yield alpha statistics greater than .70 (Johnson, 2007). This analysis determines whether the researcher needs to revise the items or can proceed with administering the instrument to the target population. The researcher should also use the feedback obtained from this group to ensure that the questions are clear and present no ambiguity, and any other feedback should be used to address any remaining problems with the research questionnaire. Should there be problems with particular items, the necessary changes would be made to ensure each item is measuring what it is supposed to measure. However, should an entire construct yield reliability and/or validity problems, the instrument would have to be revised, reviewed again by the panel of experts, and retested on another small group. After the instrument has gone through this process and has been corrected and refined with acceptable validity and reliability, it is time to begin planning to administer it to the target population.
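As a concrete illustration of the reliability analysis just described, the following sketch computes Cronbach's alpha for a small, entirely hypothetical pilot data set and checks it against the .70 threshold:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a respondents x items matrix of scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return float((k / (k - 1)) * (1 - item_variances.sum() / total_variance))

    # Hypothetical pilot data: 6 respondents, one 4-item construct on a 1-5 scale.
    pilot = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 4, 5],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [3, 2, 3, 3],
    ])
    alpha = cronbach_alpha(pilot)
    print(f"alpha = {alpha:.3f}", "(acceptable)" if alpha > 0.70 else "(revise items)")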

After the research questionnaire has established validity and reliability, the next step is to begin planning how to administer it to the participants of the study. To begin this process, it is imperative to define who the target population of the study is. Unfortunately, it is often impossible to gather data from everyone in a population due to feasibility and costs. Therefore, sampling must be used to collect data. According to Gay and Airasian (2003), “Sampling is the process of selecting a number of participants for a study in such a way that they represent the larger group from which they were selected” (p. 101). This larger group that the authors refer to is the population, and the population is the group to which the results will ideally generalize. However, out of any population, the researcher will have to determine those who are accessible or available. In most studies, the chosen population for study is usually a realistic choice and not always the target one (Gay & Airasian, 2003). After choosing the population to be studied, it is important to define that population so the reader will know how to apply the findings to that population.

The next step in the research study is to select a sample, and the quality of this sample will ultimately determine the integrity and generalizability of the results. Ultimately, the researcher should desire a sample that is representative of the defined population to be studied. Ideally, the researcher wants to minimize sampling error by using random sampling techniques. Random sampling techniques include simple random sampling, stratified sampling, cluster sampling, and systematic sampling (Gay & Airasian, 2003). According to the authors, these sampling techniques operate just as they are named: simple random sampling uses a means to randomly select an adequate sample of participants from a population; stratified random sampling allows a researcher to sample subgroups in such a way that they are proportional to how they exist in the population; and cluster sampling randomly selects groups from a larger population (Gay & Airasian, 2003). Systematic sampling is a form of simple random sampling, where the researcher simply selects every tenth person, for example. These four random sampling techniques, or variations thereof, are the most widely used random sampling procedures. While random sampling offers the best chance to obtain unbiased samples, it is not always possible. The researcher then resorts to nonrandom sampling techniques, which include convenience sampling, purposive sampling, and quota sampling (Gay & Airasian, 2003). Convenience sampling is simply sampling whoever happens to be available, while purposive sampling is where the researcher selects a sample based on knowledge of the group to be sampled (Gay & Airasian, 2003). Lastly, quota sampling is a technique used in large-scale surveys when a population of interest is too large to define. With quota sampling, the researcher usually targets a specific number of participants with specific demographics (Gay & Airasian, 2003).
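The following sketch illustrates three of the random sampling techniques just described on a hypothetical sampling frame; the population, strata and sample sizes are invented purely for illustration:

    import random

    population = list(range(1, 1001))   # hypothetical sampling frame of 1000 IDs
    strata = {                          # hypothetical subgroups of the population
        "year1": population[:400],
        "year2": population[400:700],
        "year3": population[700:],
    }

    # Simple random sampling: every member has an equal chance of selection.
    simple = random.sample(population, k=100)

    # Systematic sampling: a random start, then every tenth member.
    start = random.randrange(10)
    systematic = population[start::10]

    # Stratified sampling: each subgroup sampled in proportion to its share.
    fraction = 0.10
    stratified = [unit for group in strata.values()
                  for unit in random.sample(group, k=int(len(group) * fraction))]

    print(len(simple), len(systematic), len(stratified))  # 100 100 100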

The sampling method ultimately chosen will depend upon the population determined to be studied. In an ideal scenario, random sampling would be employed, which improves the strength and generalizability of the results. However, should random sampling not be possible, the researcher would most likely resort to convenience sampling. Although not as powerful as random sampling, convenience sampling is used quite a bit and can be useful in educational research (Johnson, 2007). Of course, whatever sampling method is employed, it is imperative to have an adequate sample size. As a general rule, the larger the population size, the smaller the percentage of the population required to get a representative sample (Gay & Airasian, 2003). The researcher would determine the size of the population being studied (if possible) and then determine an adequate sample size (Krejcie & Morgan, 1970, p. 608). Ultimately, it is desirable to obtain as many participants as possible and not merely to achieve a minimum (Gay & Airasian, 2003). Lastly, after an adequate sample size for the study has been determined, the researcher should proceed with the administration of the research questionnaire until the desired sample size is obtained. The research questionnaire should be administered under similar conditions, and potential participants should know and understand that they are not obligated in any way to participate and that they will not be penalized for not participating (Suskie, 1996). Also, participants should know how to contact the researcher should they have questions about the research project, including the ultimate dissemination of the data and the results of the study. The researcher should exhaust all efforts to ensure participants understand what is being asked so they can make a clear judgment regarding their consent to participate in the study. Should any of the potential participants be under the age of 18, the researcher would need to obtain parental permission in order for them to participate. Lastly, it is imperative that the researcher obtain approval from the Institutional Review Board (IRB) before the instrument is field-tested and administered to the participants. People who participate in the study should understand that the research project has been approved through the university's IRB process.

References

Gay, L. R., & Airasian, P. (2003). Educational research: Competencies for analysis and applications (7th ed.). Upper Saddle River, NJ: Pearson Education, Inc.

Hopkins, K. D. (1998). Educational and psychological measurement and evaluation (8th ed.). Boston: Allyn & Bacon.

Johnson, J. T. (2007). Instrument development and validation [Class handout]. Department of Educational Leadership & Research, The University of Southern Mississippi.

Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, 607-610.

Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied research (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.

Kubiszyn, T., & Borich, G. (2007). Educational testing and measurement: Classroom application and practice (8th ed.). Hoboken, NJ: John Wiley & Sons.

Suskie, L. A. (1996). Questionnaire survey research: What works (2nd ed.). Tallahassee, FL: Association for Institutional Research.

What is churn? An overview

Churn is the phenomenon where a customer switches from one service to a competitor's service (Tsai & Chen, 2009:2). There are two main types of churn, namely voluntary churn and involuntary churn. Voluntary churn is when the customer initiates the service termination. Involuntary churn means the company suspends the customer's service, usually because of non-payment or service abuse.

Companies in various industries have recently started to realise that their client base is their most valuable asset, and that retaining existing clients is the best marketing strategy. Numerous studies have confirmed this by showing that it is more profitable to keep existing clients satisfied than to constantly attract new ones (Van Den Poel & Lariviere, 2004:197; Coussement & Van Den Poel, 2008:313).

According to Van Den Poel and Lariviere (2004:197), successful customer retention has more than just financial benefits:

Successful customer retention programs free the organisation to focus on existing customers’ needs and the building of relationships.
It lowers the need to find new customers with uncertain levels of risk.
Long term customers tend to buy more and provide positive advertising through word-of-mouth.
The company has better knowledge of long term customers, and serving them is less expensive, with lower uncertainty and risk.
Customers with longer tenures are less likely to be influenced by competitive marketing strategies.
Sales may decrease if customers churn, due to lost opportunities. These customers also need to be replaced, which can cost five to six times more than simply retaining the customer.
1.1. Growth in Fixed-line Markets

According to Agrawal (2009), the high growth phase in the telecommunications market is over: in the future, wealth in the industry will be divided among the existing companies, and the revenues of telecommunication companies are declining around the world. Figure 2 shows Telkom's fixed-line customer base and customer growth rate for the previous seven years; the number of lines is used as an estimate of the number of fixed-line customers.

Figure 2 - Telkom's fixed-line annual customer base (idea adapted from Ahn, Han & Lee (2006:554))

With the lower customer growth worldwide, it is becoming vital to prevent customers from churning.

1.2. Preventing Customer Churn

The two basic approaches to churn management are untargeted and targeted approaches. Untargeted approaches rely on superior products and mass advertising to decrease churn (Neslin, Gupta, Kamakura, Lu & Mason, 2004:3).

Targeted approaches rely on identifying customers who are likely to churn and then customising a service plan or incentive to prevent it from happening. Targeted approaches can be further divided into proactive and reactive approaches.

With a proactive approach the company identifies customers who are likely to churn at a future date. These customers are then targeted with incentives or special programs to attempt to retain them.

In a reactive targeted approach the company waits until the customer cancels the account and then offers the customer an incentive (Neslin et al., 2004:4).

A proactive targeted approach has the advantage of lower incentive costs (because the customer is not “bribed” at the last minute to stay with the company). It also prevents a culture where customers threaten to churn in order to negotiate a better deal with the company (Neslin et al., 2004:4).

The proactive, targeted approach depends on a predictive statistical technique that can identify churners with high accuracy; otherwise the company's funds may be wasted on unnecessary programs aimed at incorrectly identified customers.

1.3. Main Churn Predictors

According to Chu, Tsai and Ho (2007:704), the main contributors to churn in the telecommunications industry are price, coverage, quality and customer service. Their contributions to churn can be seen in Figure 3.

Figure 3 indicates that the primary reason for churn is price related (47% of the sample): the customer churns because a cheaper service or product is available, through no fault of the company. This means that a perfect retention strategy based on customer satisfaction can only prevent 53% of churners (Chu et al., 2007:704).

1.4. Churn Management Framework

Datta, Masand, Mani and Li (2001:486) proposed a five stage framework for customer churn management (Figure 4).

The first stage is to identify suitable data for the modelling process. The quality of this data is extremely important. Poor data quality can cause large losses in money, time and opportunities (Olson, 2003:1). It is also important to determine if all the available historical data, or only the most recent data, is going to be used.

The second stage addresses the data semantics problem, which is directly linked to the first stage: to complete the first stage successfully, a complete understanding of the data and of the variables' information is required. Data quality issues are linked to data semantics because they often influence data interpretation directly and frequently lead to data misinterpretation (Dasu & Johnson, 2003:100).

Stage three handles feature selection. Cios, Pedrycz, Swiniarski and Kurgan (2007:207) define feature selection as “a process of finding a subset of features, from the original set of features forming patterns in a given data set…”. It is important to select a sufficient number of diverse features for the modelling phase. Section 5.5.3 discusses some of the most important features found in the literature.

Stage four is the predictive model development stage. There are many alternative methods available. Figure 5 shows the number of times a statistical technique was mentioned in the papers the author read. These methods are discussed in detail in Section 6.

The final stage is the model validation process. The goal of this stage is to ensure that the model delivers accurate predictions.

5.5.1 Stage one – Identify data

Usually a churn indicator flag must be derived in order to define churners. Currently, no standard accepted definition of churn exists (Attaa, 2009). One popular definition states that a customer is considered churned if he or she has had no active products for three consecutive months (Attaa, 2009; Virgin Media, 2009; Orascom Telecom, 2008). Once a target variable is derived, the set of best features (variables) can be determined.
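A churn flag of this kind can be derived mechanically. The sketch below, using the pandas library on hypothetical toy data, flags a customer as churned when three consecutive months show no active products:

    import pandas as pd

    # Hypothetical monthly activity table: one row per customer per month.
    activity = pd.DataFrame({
        "customer_id":     [1, 1, 1, 1, 2, 2, 2, 2],
        "month":           [1, 2, 3, 4, 1, 2, 3, 4],
        "active_products": [2, 0, 0, 0, 1, 1, 0, 1],
    }).sort_values(["customer_id", "month"])

    def churned(products: pd.Series, window: int = 3) -> bool:
        """True if there is any run of `window` consecutive inactive months."""
        inactive = products.eq(0).astype(int)
        return bool(inactive.rolling(window).sum().eq(window).any())

    churn_flag = activity.groupby("customer_id")["active_products"].apply(churned)
    print(churn_flag)  # customer 1: True, customer 2: False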

5.5.2 Stage two – Data semantics

Data semantics is the process of understanding the context of the data. Certain variables are difficult to interpret and must be studied carefully. It is also important to use consistent data definitions in the database. Datta et al. (2001) claim that this phase is extremely important.

5.5.3 Stage three – Feature selection

Feature selection is another important stage; the variables selected here are used in the modelling stage. It consists of two phases: first, an initial feature subset is determined; second, the subset is evaluated based on a certain criterion.
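The two phases can be illustrated with a simple correlation filter; the candidate features, target construction and threshold below are hypothetical choices for illustration only, not those used in this study:

    import numpy as np
    import pandas as pd

    # Hypothetical candidate features for 500 customers.
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "minutes_of_use":    rng.normal(300, 80, 500),
        "months_of_service": rng.integers(1, 60, 500),
        "complaints":        rng.poisson(0.5, 500),
    })
    # A toy churn target loosely driven by two of the features.
    logit = -2 + 0.8 * data["complaints"] - 0.02 * data["months_of_service"]
    data["churn"] = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

    # Phase 1: the initial subset here is simply all candidate features.
    # Phase 2: evaluate them against a criterion - a simple correlation filter.
    correlations = data.drop(columns="churn").corrwith(data["churn"]).abs()
    selected = correlations[correlations > 0.05].index.tolist()
    print(correlations.round(3).to_dict(), "->", selected)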

Ahn et al. (2006:554) describe four main types of determinants of churn. These determinants should be included in the initial feature subset.

Customer dissatisfaction is the first determinant of churn mentioned. It is driven by network and call quality. Service failures have also been identified as “triggers” that accelerate churn. Customers who are unhappy can have an extended negative influence on a company: they can spread negative word-of-mouth and also appeal to third-party consumer affairs bodies (Ahn et al., 2006:555).

Cost of switching is the second main determinant. Customers maintain their relationships with a company based on one of two reasons: they “have to” stay (constraint) or they “want to” stay (loyalty). Companies can use loyalty programs or membership cards to encourage their customers to “want to” stay (Ahn et al., 2006:556).

Service usage is the third main determinant. A customer's service usage can broadly be described by minutes of use, frequency of use and the total number of distinct numbers used. Service usage is one of the most popular predictors in churn models, yet it is still unclear whether the correlation between churn and service usage is positive or negative (Ahn et al., 2006:556).

The final main determinant is customer status. According to Ahn et al. (2006:556), customers seldom churn suddenly from a service provider. Customers are usually suspended for a while due to payment issues, or they decide not to use the service for a while, before they churn.

Wei and Chiu (2002:105) use length of service and payment method as further possible predictors of churn. Customers with a longer service history are less likely to churn. Customers who authorise direct payment from their bank accounts are also expected to be less likely to churn.

Qi, Zhang, Shu, Li and Ge (2004?:2) derived various growth rates and counts of abnormal fluctuations as variables to model churn. Customers with growing usage are less likely to churn, and customers with high abnormal fluctuation are more likely to churn.

5.5.4 Stage four – Model development

It is clear from Figure 5 that decision tree models are the most frequently used models. The second most popular technique is logistic regression, followed closely by neural networks and survival analysis. The technique that featured in the least number of papers is discriminant analysis.

Discriminant analysis is a multivariate technique that classifies observations into existing categories. A mathematical function is derived from a set of continuous variables that best discriminates among the set of categories (Meilgaard, Civille & Carr, 1999:323).

According to Cohen and Cohen (2002:485) discriminant analysis makes stronger modelling assumptions than logistic regression. These include that the predictor variables must be multivariate normally distributed and the within-group covariance matrix must be homogeneous. These assumptions are rarely met in practice.

According to Harrell (2001:217), even if these assumptions are met, the results obtained from logistic regression are still as accurate as those obtained from discriminant analysis. Discriminant analysis will therefore not be considered.

A neural network is a parallel data processing structure that possesses the ability to learn; the concept is roughly based on the human brain (Hadden, Tiwari, Roy & Ruta, 2006:2). Most neural networks are based on the perceptron architecture, in which a weighted linear combination of inputs is sent through a nonlinear function.
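As a minimal sketch of that architecture, the following computes the output of a single perceptron unit: a weighted linear combination of hypothetical inputs passed through a sigmoid nonlinearity (inputs and weights are invented for illustration):

    import numpy as np

    def perceptron_unit(x: np.ndarray, w: np.ndarray, b: float) -> float:
        """One unit: weighted linear combination of inputs, then a nonlinearity."""
        z = float(np.dot(w, x) + b)               # weighted linear combination
        return float(1.0 / (1.0 + np.exp(-z)))    # sigmoid nonlinear function

    # Hypothetical inputs and learned weights, for illustration only.
    x = np.array([0.5, 1.2, -0.3])
    w = np.array([0.8, -0.4, 0.2])
    print(f"activation = {perceptron_unit(x, w, b=0.1):.3f}")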

According to de Waal and du Toit (2006:1), neural networks have been known to offer accurate predictions that are difficult to interpret. Understanding the drivers of churn is one of the main goals of churn modelling and, unfortunately, traditional neural networks provide limited understanding of the model.

Yang and Chiu (2007:319) confirm this by stating that neural networks use an internal weight scheme that does not provide any insight into why the solution is valid. It is often called a black-box methodology, and neural networks are therefore also not considered in this study.

The statistical methodologies used in this study are decision trees, logistic regression and survival analysis. Decision tree modelling is discussed in Section 6.1, logistic regression in Sections 6.2 and 6.3 and survival analysis is discussed in Section 6.4.

5.5.5 Stage five – Validation of results

Each modelling technique has its own, specific validation method. To compare the models, accuracy will be used. However, a high accuracy on the training and validation data sets does not automatically result in accurate predictions on the population dataset. It is important to take the impact of oversampling into account. Section 5.6 discusses oversampling and the adjustments that need to be made.

5.6 Adjustments for Target Level Imbalances

From Telkom’s data it is clear that churn is a rare event of great interest and great value (Gupta, Hanssens, Hardie, Kahn, Kumar, Lin & Sriram, 2006:152).

If the event is rare, using a sample with the same proportion of events and non-events as the population is not ideal. Assume a decision tree is developed from such a sample and the event rate (x%) is very low. A prediction model could obtain a high accuracy (1-x%) by simply assigning all the cases to the majority level (e.g. predict all customers are non-churners) (Wei & Chiu, 2002:106). A sample with more balanced levels of the target is required.
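The trap described above is easy to demonstrate numerically; in this minimal sketch a 2% event rate is assumed purely for illustration:

# With a rare event, always predicting the majority class looks accurate.
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(10_000) < 0.02      # 2% churners (assumed rate for illustration)
pred = np.zeros_like(y)            # naive model: nobody churns

accuracy = (pred == y).mean()
print(f"Accuracy: {accuracy:.1%}, churners identified: {int(pred[y].sum())}")

The printed accuracy is about 98% even though not a single churner is identified.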

Basic sampling methods to decrease the level of class imbalances include under-sampling and over-sampling. Under-sampling eliminates some of the majority-class cases by randomly selecting a lower percentage of them for the sample. Over-sampling duplicates minority-class cases by including a randomly selected case more than once (Burez & Van Den Poel, 2009:4630).
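Both schemes can be sketched with plain random sampling in pandas; the 5% churn rate and the exact balancing targets below are assumptions for illustration only:

# Random under-sampling and over-sampling to balance a binary target.
import pandas as pd

df = pd.DataFrame({"churn": [1] * 50 + [0] * 950})   # hypothetical 5% churn rate
minority = df[df["churn"] == 1]
majority = df[df["churn"] == 0]

# Under-sampling: keep only as many majority cases as there are minority cases.
under = pd.concat([minority, majority.sample(len(minority), random_state=0)])

# Over-sampling: duplicate minority cases until they match the majority count.
over = pd.concat([majority, minority.sample(len(majority), replace=True, random_state=0)])

print(under["churn"].value_counts(), over["churn"].value_counts(), sep="\n")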

Under-sampling has the drawback that potentially useful information goes unused. Over-sampling has the drawback that it might lead to over-fitting because cases are duplicated; studies have shown that over-sampling is ineffective at improving recognition of the minority class (Drummond & Holte, 2003:8). According to Chen, Liaw and Breiman (2004:2), under-sampling therefore has an edge over over-sampling.

However, if the probability of an event (target variable equals one) in the population differs from the probability of an event in the sample, it is necessary to make adjustments for the prior probabilities. Otherwise the probability of the event will be overestimated. This will lead to score graphs and statistics that are inaccurate or misleading (Georges, 2007:456).

Therefore, decision-based statistics based on accuracy (or misclassification) misrepresent the model performance on the population. A model developed on this sample will identify more churners than there actually are (high false alarm rate). Without an adjustment for prior probabilities, the estimates for the event will be overestimated.

According to Potts (2001:72) the accuracy can be adjusted with equation 1, which takes the prior probabilities into account:

$$\text{adjusted accuracy} = \frac{\pi_0}{\rho_0} \cdot \frac{TN}{n} + \frac{\pi_1}{\rho_1} \cdot \frac{TP}{n} \tag{1}$$

with:

$\pi_0$: the population proportion of non-churners
$\pi_1$: the population proportion of churners
$\rho_0$: the sample proportion of non-churners
$\rho_1$: the sample proportion of churners
$TN$: the number of true negatives (number of correctly predicted non-churners)
$TP$: the number of true positives (number of correctly predicted churners)
$n$: the number of instances in the sample
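As a quick check of equation 1, the sketch below computes the adjusted accuracy directly from confusion-matrix counts; the counts and proportions are made-up illustrative values, not figures from this study:

# Accuracy adjusted for prior probabilities (equation 1).
def adjusted_accuracy(tn, tp, n, pi0, pi1, rho0, rho1):
    return (pi0 / rho0) * (tn / n) + (pi1 / rho1) * (tp / n)

# Illustrative numbers: a 50/50 sample drawn from a population with 2% churn.
print(adjusted_accuracy(tn=450, tp=400, n=1000,
                        pi0=0.98, pi1=0.02, rho0=0.5, rho1=0.5))

With these numbers the raw sample accuracy of 85% becomes an estimated population accuracy of about 89.8%, because correct non-churner predictions carry far more weight in the population than in the balanced sample.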

However, the accuracy of a model trained on an under-sampled dataset depends on the classification threshold, and this threshold is influenced by the class imbalance between the sample and the population (Burez & Van Den Poel, 2009:4626).

Business decision making in different ways

1.0 Introduction

This project has been done not only for the sake of submission but also to gain knowledge in both practical and theoretical ways. Textbooks and study guides cannot give complete knowledge to any student, and I believe that assignments are given so that students can gain extra practical knowledge from the wide world around them. In the study of business decision making we mainly focus on the different methods of data analysis, their usefulness in a business context, and the presentation of data in an appropriate way to make decisions and predictions. Its purpose is to build a better understanding of different business issues and of the ways to tackle them. This report covers the wide field of business decision making in an organization. We have discussed representative measures and measures of dispersion, the difference between them, and how they are used to interpret information in a useful manner. We then use graphs to present the data in an accessible form and draw some conclusions from them for business purposes. Finally, we give some solutions for a company which is encountering problems in telecommunications and inventory control. I have discussed the usefulness of an intranet in the process of inventory control to overcome poor inventory management, and I have provided some solutions by comparing two proposals using DCF and IRR techniques, clearly mentioning which proposal the company should adopt in order to enhance its inventory control capacity effectively. This report helped me to apply theoretical knowledge to real-world examples, evaluate advantages and disadvantages, and make business decisions.

2.1 Collecting and maintaining medical data and medical records

In modern clinics and hospitals, and in many public health departments, data in each of these categories can be found in the records of individuals who have received services there, but not all the data are in the same file. Administrative and economic data are usually in separate files from clinical data; both are linked by personal identifying information. Behavioural information, such as the fact that an individual did not obtain prescribed medication or failed to keep appointments, can be extracted by linking facts in a clinical record with the records of medications dispensed and/or appointments kept. Records in hospitals and clinics are mostly computer-processed and stored, so it is technically feasible to extract and analyze the relevant information, for instance occupation, diagnosis, and method of payment for the service that was provided, or behavioural information. Such analyses are often conducted for routine or research purposes, although there are some ethical constraints to protect the privacy and preserve the confidentiality of individuals.

Primary sources-

Primary data sources are those from which you have collected the data yourself, rather than relying on someone else’s. For example, a questionnaire created by you and handed out to specific people is a primary source. You can then use the responses to test a hypothesis or explain a situation. Examples include:

Statistics

Surveys

Opinion polls

Scientific data

Transcripts

Records of organizations and government agencies

Secondary data-

Secondary data are indispensable for most organizational research. Secondary data refer to information gathered by someone other than the researcher conducting the current study.

Books

Periodicals

Government publications of economic indicators

Census data

Statistical abstracts

Databases

The media

Annual reports of companies

Case studies

Other archival records

2.2 Data collection methodology and Questionnaire

Records of Births and Deaths

Vital records (certifications of births and deaths) are similarly computer-stored and can be analyzed in many ways. Collection of data for birth and death certificates relies on the fact that recording of both births and deaths is a legal obligation—and individuals have powerful reasons, including financial incentives such as collection of insurance benefits, for completing all the formal procedures for certification of these vital events. The paper records that individuals require for various purposes are collected and collated in regional and national offices, such as the U.S. National Center for Health Statistics, and published in monthly bulletins and annual reports.

Birth certificates record details such as full name, birthdate, names and ages of parents, birthplace, and birthweight. These items of information can be used to construct a unique sequence of numbers and alphabet letters to identify each individual with a high degree of precision.

Death certificates contain a great deal of valuable information: name at birth as well as at death, age, sex, place of birth as well as death, and cause of death. The personal identifying information can be used to link the death certificate to other health records.

The reliability of death certificate data varies according to the cause and place: Deaths in hospitals have usually been preceded by a sufficient opportunity for investigations to yield a reliable diagnosis, but deaths at home may be associated with illnesses that have not been investigated, so they may have only patchy and incomplete old medical records or the family doctor’s working diagnosis, which may be no more than an educated guess. Deaths in other places, such as on the street or at work, are usually investigated by a coroner or medical examiner, so the information is reasonably reliable. Other vital records, for example, marriages and divorces and dissolution of marriages, have less direct utility for health purposes but do shed some light on aspects of social health.

Health Surveys

Unlike births and deaths, health surveys are experienced by only a sample of the people; but if it is a statistically representative sample, inferences about findings can be generalized with some confidence. Survey data may be collected by asking questions either in an oral interview or over the telephone, or by giving the respondents a written questionnaire and collecting their answers. The survey data are collated, checked, edited for consistency, processed and analyzed generally by means of a package computer program. A very wide variety of data can be collected this way, covering details such as past medical events, personal habits, family history, occupation, income, social status, family and other support networks, and so on. In the U.S. National Health and Nutrition Surveys, physical examinations, such as blood pressure measurement, and laboratory tests, such as blood chemistry and counts, are carried out on a subsample.

Records of medical examinations on school children, military recruits, or applicants for employment in many industries are potentially another useful source of data, but these records tend to be scattered over many different sites and it is logistically difficult to collect and collate them centrally.

Health Research Data

The depth, range, and scope of data collected in health are diverse and complex, so they cannot be considered in detail here. Research in fields as diverse as biochemistry, psychology, genetics, and sports physiology has usefully illuminated aspects of population health, but the problems of central collection and collation and of making valid generalizations reduce the usefulness of most data from health-related research for the purpose of delineating aspects of national health.

Unobtrusive Data Sources and Methods of Collection

Unobtrusive and indirect methods can be a rich source of information from which it is sometimes possible to make important inferences about the health of the population or samples thereof. Economic statistics such as sales of tobacco and alcohol reveal national consumption patterns; counting cigarette butts in school playgrounds is an unobtrusive way to get a very rough measure of cigarette consumption by school children. Calls to the police to settle domestic disturbances provide a rough measure of the prevalence of family violence. Traffic crashes involving police reports and/or insurance claims reveal much about risk-taking behavior, for example the dangerous practice of using cell phones while driving. These are among many examples of unobtrusive data sources, offered merely to illustrate the potential value of this approach.

The questionnaire contains something in each of the following categories:

Personal identifying data: name, age (birth date), sex, and so on.
Socio-demographic data: sex, age, occupation, place of residence.
Clinical data: medical history, investigations, diagnoses, treatment regimens.
Administrative data: referrals, sites of care.
Economic data: insurance coverage, method of payment.
Behavioral data: adherence to the recommended regimen (or otherwise).
3.0 Data Analysis
Representative Values.

These are also called measures of location or measures of central tendency. They indicate where the centre or most typical value of a data set lies. There are three important measures: mean, median and mode. The mean and median can only be applied to quantitative data, but the mode can be used with either quantitative or qualitative data.

Mean

This is the most commonly used measure: the average of a data set, obtained as the sum of the observations divided by the number of observations.

Advantages:
Objective
Easy to calculate
Easy to understand
Calculated from all the data

Disadvantages:
Affected by outlying values
May be some distance from most values

Median

The median of a data set is the number that divides the bottom 50% of the data from the top 50%.

Advantages:
Easy to understand
Gives a value that actually occurred
Not affected by outlying values

Disadvantages:
Does not consider all the data
Can be used only with cardinal data
Not easy to use in further analyses

Mode

The mode of a data set is the value that occurs most frequently; there can be more than one.

Advantages:
Is an actual value
Not affected by outlying values

Disadvantages:
There can be more than one mode, or none at all
Does not consider all the data
Cannot be used in further analyses
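All three representative values can be computed with Python’s statistics module; the price list below is illustrative only, not the garage’s actual October data:

# Computing the three representative values for a small price list.
import statistics

prices = [430, 323, 250, 430, 335, 290, 410]   # illustrative values only
print("mean:", statistics.mean(prices))
print("median:", statistics.median(prices))
print("mode:", statistics.mode(prices))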

Comparison of mean, median and mode

For this garage, the representative values are as follows:

Mean- 335

Median- 323

Mode- 430

As we can see, the mean and median do not differ drastically, but the mode does.

Here the owner has to select which of these values to charge as the price.

The mode is very high and does not consider all the values; if the owner charges £430 it will be expensive and customers may switch to competitors. Therefore the owner should not choose the mode.

The selection is now between the mean and the median. Both look reasonable and are close to most of the costs in October. The median is usually preferred when the data set has more extreme observations; otherwise the mean is preferred because it considers all the data.

From an overview of the October costs there are no extreme values at all, so the mean would not have been affected much.

Therefore it is advisable that the owner chooses the mean value of £335.

Measures of Dispersion

Representative measures only indicate the location of a set of data, and two data sets can have the same mean, median and mode. In that case we cannot distinguish them using representative values alone. To describe the difference we use descriptive measures that indicate the amount of variation, known as measures of dispersion or measures of spread.

This includes the following measurements (illustrated in the sketch after this list):

Range – the difference between the highest value and the lowest value. It is easy to calculate and understand, but it considers only the largest and smallest values, ignores all the others, and is highly affected by extreme values.
Quartile range – the difference between the 3rd quartile and the 1st quartile. It is also easy to calculate, but it does not consider all the values in a data set, so it is not a good indicator on its own.
Variance and standard deviation – the variance measures how far the observations are from the mean. It is the more important statistic because it considers all the observations and is used in further analysis. The standard deviation is the square root of the variance. Both provide useful information for decision making and for making comparisons.
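A short sketch computing these dispersion measures on the same illustrative price list used earlier (so the figures will not match the garage’s):

# Range, interquartile range, variance and standard deviation.
import statistics

prices = [430, 323, 250, 430, 335, 290, 410]    # illustrative values only
q1, q2, q3 = statistics.quantiles(prices, n=4)  # the three quartiles

print("range:", max(prices) - min(prices))
print("interquartile range:", q3 - q1)
print("variance:", statistics.variance(prices))         # sample variance
print("standard deviation:", statistics.stdev(prices))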

From the calculations, the range is £284 and the quartile range is £170, but because of their defects we cannot use them to derive further decisions. The variance is 8426.9 and the standard deviation is 91.79; these figures show that the observations deviate considerably from the mean. Variance and standard deviation are used to compare two data sets, so the owner of this garage can compare these figures with those of a similar garage, or with the November costs, and make decisions such as selecting the price with the smaller variance and standard deviation.

Quartiles and percentiles are also like representative measures. They indicate the percentage of values that lie below a certain value; e.g. the 3rd quartile indicates that 75% of the observations are below a certain amount and 25% are above it.

Quartiles and percentile values of the garage:

Quartile 1: 248.5
Quartile 2: 322.5
Quartile 3: 418.5

75th percentile: 418.5
50th percentile: 322.5
60th percentile: 349.4
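Quartiles and other percentiles can be read off directly with numpy’s percentile function (again on illustrative data, so the figures will not match the garage’s):

# Quartiles are just the 25th, 50th and 75th percentiles.
import numpy as np

prices = np.array([430, 323, 250, 430, 335, 290, 410])  # illustrative values
print(np.percentile(prices, [25, 50, 60, 75]))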

From the above figures we can see that only 25% of the values are above £418.5, so the owner should not charge a price higher than that; doing so would lose many customers. Likewise, only 25% of the observations are below £248.5, so a price should be selected between £248 and £418. Earlier we found the mean, £335, which lies between the 2nd quartile (the median) and the 60th percentile. So, using the quartiles and percentiles, we can select £335 as the service price. Thus quartiles and percentiles help us in decision making.

The correlation coefficient measures the strength of the linear relationship between two variables. It is denoted by “r”, and its value always lies between -1 and +1. If “r” is close to +1, the two variables have a strong positive relationship; if it is close to -1, a strong negative one. The correlation coefficient therefore also helps in making business decisions.
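Pearson’s r can be computed with numpy; the commission figures below are invented stand-ins, since only the resulting coefficient (0.9744) is quoted later in the text:

# Pearson correlation coefficient between year and commission.
import numpy as np

year = np.array([2005, 2006, 2007, 2008])
commission = np.array([9000, 11000, 14000, 16500])   # illustrative figures

r = np.corrcoef(year, commission)[0, 1]
print(f"r = {r:.4f}")   # close to +1 => strong positive relationship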

4.0 Presentation of Information

Tables are good at presenting a lot of information, but it can still be difficult to identify the underlying patterns. Therefore charts and graphs play an important part in presenting data effectively. Graphical methods include scatter graphs, bar charts, line charts, pie charts and histograms.

Pie charts

They are simple diagrams that give a summary of categorical data. Each slice of the circle represents one category. Pie charts are very simple and can make an impact, but they can show only small amounts of data; with more data they become complicated and confusing. Pie charts can, however, be used to make comparisons. Here we can see that the amount of commission Trevor plc paid is increasing, because 2008 takes a bigger proportion of the circle than 2007, 2006 and 2005, so we can expect the amount for the next year to be higher than in 2008.

Bar Charts

Like pie charts, bar charts show the number of observations in different categories. Each category is represented by a separate bar, and the length of the bar is proportional to the number of observations. In contrast to pie charts, more data can be plotted in bar charts, and it is easy to make comparisons across different periods with different observations. Here the sales of BMW and Mercedes are increasing continuously while the sales of the other cars fluctuate. We can also see the overall turnover increasing year by year.

Line Chart

This is another way of presenting data, using lines rather than bars or circles. A line chart is easy to draw, and it is easy to understand the underlying trend and make predictions. An area chart is like a line chart, but it shows the whole amount and presents each category as an area; it can be used both to understand the trend and to make comparisons. The line chart of Trevor plc indicates that, except for Lexus, the sales of all cars are increasing, with Mercedes showing a dramatic increase from 2006 to 2008. Between 2005 and 2006 car sales tended to be steady. From the outcome of this line chart, Trevor plc should mainly focus on BMW and Mercedes to increase its turnover in the forthcoming years. The area chart indicates the same result as the line chart.

Scatter Diagram and the trend line

A scatter diagram is drawn using two variables; here we plot commission against year, with commission on the y-axis and year on the x-axis. A scatter diagram shows the relationship between two variables: whether they are positively or negatively correlated, and whether the relationship is strong or weak. Commission has a positive relationship with year for Trevor plc, and the relationship is strong because most of the observations lie close to a straight line. We have calculated the correlation coefficient between commission and year as 0.9744, which indicates a strong positive relationship.

Trend lines are used to understand the underlying trend and to make useful forecasts. The trend line of Trevor plc shows an upward trend in commission over the years. We can predict that the commission in 2009 would be approximately £18,000 to £19,000.
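A trend line of this kind can be sketched with numpy’s polyfit, fitting a straight line and extrapolating one year ahead (the same invented commission figures are used, so the forecast is illustrative only):

# Fit a straight trend line and extrapolate one year ahead.
import numpy as np

year = np.array([2005, 2006, 2007, 2008])
commission = np.array([9000, 11000, 14000, 16500])   # illustrative figures

slope, intercept = np.polyfit(year, commission, deg=1)
forecast_2009 = slope * 2009 + intercept
print(f"Forecast for 2009: {forecast_2009:.0f}")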

6.0 Intranet

To: The Board of Directors

From: Management Consultant

Date: 20.12.2009

Subject: Intranet and its evaluation

An intranet is a private network contained within an enterprise. It may consist of many interlinked local area networks and typically includes connections through one or more gateway computers to the outside Internet. The main purpose of an intranet is to share company information and computing resources among employees, and to share information between the branches of the same organisation.

Advantages:
Easy access to internal and external information
Improves communication
Increases collaboration and coordination
Supports links with customers and partners
Can capture and share knowledge
Productivity can be increased
Margins of error will be reduced
High flexibility
Provides timely and accurate information
Allows communication between the branches of the organisation
Disadvantages:
Installation and maintenance can be expensive.
It may reduce face-to-face meetings with clients or business partners.
7.0 Management Information System

A management information system (MIS) is a system that provides managers with the information they need to make decisions for the successful operation of a business. Management information systems consist of the computer resources, people, and procedures used in the modern business enterprise. MIS also refers to the organizational function that develops and maintains most or all of the computer systems in the enterprise so that managers can make decisions. The goal of the MIS organization is to deliver information systems to managers at the strategic, tactical and operational levels.

Types of Information vary according to the levels of management.

Strategic management needs information for long-term planning and corporate strategy; this information is less structured.

Tactical management needs to take short-term decisions, focusing on improving profitability and performance.

Operational management needs information on the day-to-day operations of the organisation.

11.0 Conclusion

Finally, I would like to conclude my report on business decision making. I started with the various methods of data collection and the analysis of the gathered data, and prepared a sample questionnaire based on the example used. I then discussed the presentation of data through tables and graphs, and continued with the use of information for decision making. Afterwards, I evaluated the advantages and disadvantages of an intranet and its usefulness in controlling inventory, and discussed various inventory control methods used by organisations. Finally, I drew a conclusion on the investment decision scenario given.

This report helped me to clearly understand the subject areas I learnt in the lectures, and I found it useful.