Methodology Research Data

Introduction

According to Walliman (2001), a methodology explains the theory behind the research methods or approaches. This chapter highlights the theories behind the methodology employed and examines the research methods that are most appropriate for this research, helping to build a better understanding of the topic under investigation.

This research undertakes an analytical review of customer retention techniques of Indian banks, using Citibank as a case study. This chapter outlines how this analysis is undertaken and describes the rationale behind the choice of research design and the construction of the method.

Research Method Construction

Much of the research undertaken in the social sciences is primary. It is based on the collection of primary data, that is, data originated by the researcher for the purpose of the investigation at hand (Stewart and Kamins, 1993). Primary analysis is the original analysis of data in a research study and is what one typically imagines as the application of statistical methods.

However, not every study or research undertaking must begin with the collection of primary data. In some cases, the information required is already available from published sources. This is called secondary research – the summation, collation, and/or synthesis of existing research. Secondary information consists of sources of information collected by others and archived in some form. These sources include reports, industry studies, as well as books and journals.

The collection, generation and dissemination of information continue to grow, which means that a tremendous amount of secondary data exists that is relevant to today’s decision-making problems. Knowledge accumulation increasingly relies on the integration of previous studies and findings. Glass (1976) argues that when the literature on a topic grows and knowledge lies untapped in completed research studies, “this endeavour (of research synthesis) deserves higher priority … than adding a new experiment or survey to the pile” (Glass, 1976, p. 4).

One of the main reasons to value secondary data is the ease with which it can be collected for research use (Houston, 2004). Such information can be of considerable importance for two reasons.

Time savings – typically, the time involved in searching secondary sources is much less than that needed to complete primary data collection.

Cost effectiveness – similarly, secondary data collection in general is less costly than primary data collection. For the same level of research budget a thorough examination of secondary sources can yield a great deal more information than can be had through a primary data collection exercise.

Another, and perhaps more important, benefit to researchers from employing secondary data is that alternative types of data can provide multi-method triangulation to other research findings (Houston, 2004). This is because the knowledge bases regarding many constructs, such as retention and loyalty, have been built heavily through survey research approaches. All things being equal, secondary data should be used if it helps researchers to solve the research problem (Saunders et al., 2006).

If there exists data that solves or lends insight into the research problem, then little primary research has to be conducted. Because resource constraints are always a problem for the researcher, it makes good sense to exhaust secondary data sources before proceeding to the active collection of primary data. In addition, secondary data may be available which is entirely appropriate and wholly adequate to draw conclusions and answer the question or solve the problem.

This secondary analysis may involve the combination of one data set with another, address new questions or use new analytical methods for evaluation. Secondary analysis is the re-analysis of data for the purpose of answering the original research question with different statistical techniques, or answering new questions with old data. Secondary analysis is an important feature of the research and evaluation landscape. Generally, secondary research is used in problem recognition and problem clarification.

However, in addition to being helpful in the definition and development of a problem, secondary data is often insufficient in generating a problem solution (Davis, 2000). Whilst the benefits of secondary sources are considerable, their shortcomings have to be acknowledged. There is a need to evaluate the quality of both the source of the data and the data itself. The first problem relates to definitions. The researcher has to be careful, when making use of secondary data, of the definitions used by those responsible for its preparation.

Another relates to source bias. Researchers have to be aware of vested interests when they consult secondary sources. Those responsible for their compilation may have reasons for wishing to present a more optimistic or pessimistic set of results for their organisation. Also, secondary data can be general and vague and therefore may not help with decision-making. In addition, data may be incomplete. Finally, the time period during which secondary data was first compiled may have a substantial effect upon the nature of the data.

Considering these shortcomings, a primary data collection strategy was also adopted after the collection and analysis of secondary data. This was done purposely, as the author wanted to analyze previous similar research before drafting a primary data collection questionnaire. In constructing the primary data collection method, data needs were first specified. Primary data was collected in the form of interviews with Citibank operational and branch managers, and focus groups were also conducted with a sample of Citibank customers.

These methods were considered to be the most appropriate in terms of achieving the objectives of the study and worked best within the time and cost constraints. Semi-structured probing interviews with Citibank management staff revealed in-depth information and insights on customer retention and relationship banking. Focus groups conducted with Citibank customers were the best way to draw information out of them, as ideas from one person sparked off ideas from another and the group gelled together very well.

Also, facial expressions and body language conveyed a great deal in a focus group. It was not feasible to conduct telephone interviews or video-conferencing due to the costs involved. Some thought was initially given to conducting telephone interviews with Citibank employees, but the idea was later shelved because of time and cost constraints.

Secondary data for this research concentrated on books, journals, online publications, white papers, previous research, newspapers (Economic Times), taped interviews, websites, research databases and so on. Secondary data was collected and partially analyzed before embarking on primary data collection methods so that the focus group and interview questions could be framed properly.

Although most of the secondary data had been collected by the time primary data collection began, secondary data collection did not stop altogether. In this way, the data collected from secondary sources and the data gathered from field research helped in triangulation. The field research also helped in testing the hypothesis that was developed after studying the concepts and theories (deductive approach).

Also, having gained sufficient insight into the topic, the author found it easier to frame the questionnaire: first the questions to test the hypothesis were framed, and then specific questions were framed which would have helped in forming a hypothesis (inductive approach). Primary research tried to delve as deeply as possible into areas which could not have been covered by secondary research and where first-hand information was absolutely necessary to come to a definitive conclusion.

Research Approach

The qualitative method is a kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification. It is based on meanings expressed through words (Saunders, 2006).

Qualitative research often provides rich descriptive data and is exploratory in nature. Quantitative methods, on the other hand, use numbers and statistical techniques and tend to be based on meanings derived from numbers.

The research approach used for this research is primarily qualitative. Both of the primary data collection methods concentrate on qualitative data collection. However, quantitative data was also collated in the form of company reports, which were reviewed to analyse the effect of retention measures on management accounts.

Thus both quantitative and qualitative data collection techniques are applied, although the major part of the research relies upon qualitative data and its analysis. Qualitative secondary information was gathered from a variety of sources, such as Citibank case studies, web pages, reference books, journals, online journals, newspaper and magazine articles, taped interviews, business news channel views and research agency databases. Quantitative data from Citibank company reports and from other banks was collected and analyzed to compare and contrast the effect of various retention initiatives.

The Research Design

A research design is the framework or plan for a study, used as a guide to collect and analyse data; it is the blueprint that is followed (Churchill and Iacobucci, 2005, p. 73). Kerlinger (1996, p. 102) defines it as ‘a plan and structure of investigation to obtain answers to research questions.’ The plan here means the overall scheme or programme of the research and includes an outline of what the researcher seeks to do, from hypothesis testing to the final analysis of the data.

A structure is the framework, organisation or configuration of the relations among the variables of a study (Robson, 2002, p. 73). The research design expresses both the structure of the research and the plan of investigation used to obtain empirical evidence on the relations of the problem. Some of the common approaches to research design include exploratory research, descriptive research and causal research.

For the purpose of this research, an exploratory study is conducted, as little previous research is available on customer retention in Indian banks. Hence there is little information, and there are few past references, on how to approach the research problem. The focus of this study is on gaining insights into, and familiarity with, the subject area of customer retention for more rigorous investigation at a later stage.

The approach is very open, so a wide range of data and information can be gathered to work towards an answer to the problem defined. This research will study which existing theories or concepts with regard to customer retention can be applied to the problem defined. It will rely on extensive face-to-face interviews conducted with bank managers at Citibank to understand the concept of customer retention and how it is implemented. One of the reasons for carrying out an exploratory study is that some facts about customer retention are known by the author, but more information is required to build a theoretical framework.

Sample

Sample selection in this study was driven by the need to allow maximum variation in conceptions. Individual managers were interviewed according to their expected level of insight regarding customer retention. In total, five interviews were conducted; all participants were employed by Citibank in India.

In addition, two interviewees had been directly involved in developing the retention strategy while the other three had gained experience in implementing retention strategies. Thus, the likelihood of uncovering a range of variations between conceptions of retention was increased.

Focus group participants banked with Citigroup in some form or another (current accounts, credit cards, loans, etc.). These participants represented a mix of genders, ages, banking experience, disciplines and experience of banking with different banks.

Method of Data Collection

Data was collected using a semi-structured interview technique, which is characterized by Booth (1997) as being both open and deep. Open refers to the fact that the researcher is open to being guided by the responses made by the interviewee (Marton, 1994; Booth, 1997). Deep describes how, during the interview, individual interviewees are encouraged to discuss their conceptions in depth until both the researcher and the interviewee reach a mutual understanding about the phenomenon in question (Booth, 1997; Svensson, 1997).

In this study, this facilitated the prompting of interviewees to move beyond the concept of retention and into relationship building and loyalty. All face-to-face interviews were conducted one-to-one in the participant’s office, with each interview lasting between 30 and 40 minutes. Initially a “community of interpretation” (Apel, 1972) between the researcher and participant was established, with the researcher explaining that the objective of the research was to understand what constitutes an effective retention strategy and the importance of retention within the banking community.

The questions encouraged the participants to reflect upon and articulate their own lived experience of retention. They also focused on the “structural-how” aspects of customer retention: by asking about the roles and activities related to retention, the interviews sought to uncover the ‘how’ component of retention. The interviews progressed around these topics, with participants guiding the agenda based on the extent of their interest in the topic.

For example, the majority of interviewees drew upon comparisons with the American banking system when expressing their views on the retention process. In addition to the primary questions, follow-up questions were asked as appropriate. Examples included “What do you mean by that?”, “What happens?”, and “Is that how you see your role?” These questions encouraged individual participants to elaborate on the meaning of customer retention.

Data Analysis

All five interviews and focus group sessions were taped and then transcribed verbatim. The transcripts were then analysed by the research team using investigator triangulation (Janesick, 1994). In line with the suggestions of Francis (1996), a structural framework for organizing the transcripts was first developed.

This prevented the research team from getting lost in the enormous amount of text contained in each transcript and ensured that we focused on the underlying meaning of the statements in the text, rather than on the specific content of particular statements.

The components of the framework were dimensions of supervisors’ conceptions, which were not predetermined by the researchers but were revealed in the texts. The phenomenographic approach seeks to identify and describe the qualitatively different ways of experiencing a specific aspect of reality (Marton, 1981, 1986, 1988, 1994, 1995; Van Rossum & Schenk, 1984; Johansson, Marton, et al., 1985; Säljö, 1988; Sandberg, 1994, 1997, 2000, 2001; Svensson, 1997).

These experiences and understandings, or ways of making sense of the world, are labelled as conceptions or understandings. The emphasis in phenomenography is on how things appear to people in their world and the way in which people explain to themselves and to others what goes on around them, including how these explanations change (Sandberg, 1994).

The framework we used to organize the data in each transcript comprised four dimensions of the explanations that supervisors used to make sense of their world, as expressed by them in the interview:

(a) What the interviewee’s conception of supervision meant to the interviewee in terms of the goal of supervision (referential-what);

(b) How the conception was translated by the interviewee into roles and activities (structural-how);

(c) What the conception meant to the interviewee in terms of the desired outcomes of the PhD supervision (referential-what); and

(d) What factors influenced the interviewee’s conception (external influences).

The organizing framework was then used to reduce the text in each interview transcript to its essential meaning. Each researcher reread the first interview transcript. Discussion, debate, and negotiation then followed as we applied the components of our organizing framework to the first interviewee. Where differences of opinion occurred, a researcher attempted to convince the others of the veracity of their claims.

As a result of this ongoing and open exchange, we reached agreement about the components of the first interviewee’s conceptions that we believed were most faithful to the interviewee’s understanding of their lived experience of PhD supervision, as represented by their interview transcript. We then repeated this process for the next interviewee until all of the transcripts had been reduced into the organizing framework.

Conceptions began to emerge from our organizing framework as we alternated between what the interviewees considered PhD supervision to be, how they enacted supervision in their roles and activities, and why they had come to this understanding of supervision. Once these conceptions emerged, we tentatively grouped together interviewees who shared conceptions of supervision that were similar to each other and were different from those conceptions expressed by other supervisors. We then cross-examined our interpretations of each interviewee’s understanding of supervision by proposing and debating alternative interpretations.

This cross-examination continued until we, as a group, reached agreement on two issues: First, we believed we had established the most authentic interpretation of each interviewee’s understanding.

Second, we believed we had grouped interviewees expressing qualitatively similar understandings into the same category of description and had grouped interviewees expressing qualitatively different conceptions into different categories of description. Five categories of description, which we labelled as Conceptions 1 through to 5, emerged from this process.

Through the same iterative process, and through open dialogue and debate between the members of the research team, we were then able to map the five conceptions into an outcome space. The outcome space illustrates the relationships between the differing conceptions in two ways: First, the outcome space illustrates the outcome of higher priority sought by the supervisor (completion of the PhD or new insight).

Second, it distinguishes the fundamental approach to supervision as either pushing (the student is a self-directed learner) or pulling (the student is a managed learner) the student through the process. Table 2 summarizes the techniques we applied, as derived from the literature, to improve the validity and reliability of our interpretations of the interviewee’s experiences as expressed in the transcripts.

Example French Essay

What were the factors behind the rise of the Front National and its success in the 2002 presidential elections?

After the 2002 presidential elections and the qualification of Jean-Marie Le Pen, leader of the Front National, in the first round on 21 April, France was thrown into confusion. Nobody, it is true, had expected it, but with hindsight this partial victory can be explained by numerous factors that were at once psychological, political and social.

1. The number of presidential candidates

An incredible total of sixteen candidates stood in the 2002 elections! An extensive choice, some might think; too extensive, others would say. Everyone expected that, as usual, Jacques Chirac, the sitting president, and Lionel Jospin, the Prime Minister at the time, would face each other in the second round. The two parties had already been cohabiting in government for several years. It was going to be the battle of the century. But the Left was fragmented into numerous small parties (the far left, the ecologists, the new left), weakening it in relation to the other parties. The French, disillusioned with this divided left, turned to the only alternatives that seemed plausible to them: the right, the far right or abstention. The Front National received 11.75% of the votes, but its electorate is thought to have reached as much as 16%. The party's share of the vote had never been so high, and this demonstration of the electorate's power was an alarming wake-up call for the divided parties of the socialist left.

2. The abstention rate

It is thought that, in addition to the large number of candidates, it was the high abstention rate that cost Prime Minister Lionel Jospin his place in the second round. Indeed, he gave up the leadership of the socialist left after this defeat, which was so humiliating for his party and so unexpected. Being beaten by such a small margin was taken as a vote of no confidence in his party. It is true that the traditionally left-leaning French voters who had abstained in the first round in protest quickly rallied in the second round and voted en masse for Chirac out of fear of the very real rise of the far right in France. But the damage, unfortunately, had already been done. It is also thought that the rise in the abstention rate was linked to the spread of anarchist theories coming from Spain and Corsica, but this is only speculation intended to explain the gradual disillusionment of the French. Abstainers were called upon to vote en masse in the second round to eliminate whatever chance Le Pen had, however improbable, of reaching the presidency.

3. Insecurity, violence and unemployment in France

The political agenda in France in 2002 was also crucial in determining the future President of the Republic. The insecurity created by the terrorist threat following 11 September, urban violence and rising unemployment in France pushed an ageing population to place its trust in a man opposed to mass immigration and racial integration. Le Pen is known for his anti-Semitic remarks, in particular the claim that ‘mass immigration is the worst danger we have ever faced in our history’. The influx of immigrants from all over the world into France, a country renowned for its solidarity with political refugees, is thought to be one of the direct causes of rising unemployment in the country. In this context, sympathy for Le Pen’s causes is evident. Similarly, violence in the suburbs of Paris and of other large French cities such as Marseille and Lille is also thought to be linked to immigration and to the rise of a younger generation that has not yet absorbed French values.

4. The popularity of Le Pen's daughter

The charisma of Le Pen’s daughter, Marine, a 34-year-old lawyer, is also thought to have had a considerable effect on the Front National’s success in the 2002 elections. She was brought in as a member of the party after her success in the regional elections. In April 2003 Jean-Marie Le Pen appointed his daughter as one of the five vice-presidents of the Front National. This move is thought to have been a manoeuvre to reduce the influence of Bruno Gollnisch, who was positioning himself to replace Le Pen upon his eventual retirement. This transfer of power closely resembles the succession of a monarchy. However, Marine Le Pen’s influence on the party’s success is thought to predate the elections. As a woman, she inspired confidence in the female electorate, which until then had represented only a small percentage of the party. She became a media figure, appearing regularly on national television to defend her father’s ideas. The party’s strategy of recruiting a younger electorate was also furthered by her public appearances.

Fortunately, Le Pen was soundly beaten by Jacques Chirac in the second round on 5 May, but the shock of that first round endures. The French, however, did not hesitate to express their disappointment at having been forced to vote for a president who might not have won had Lionel Jospin gone through in the first round. As Le Québécois Libre put it: ‘In this confrontation between the so-called right-wing and far-right candidates (though both offer largely statist programmes), what is the best course of conduct to adopt in order to advance liberty?’ What we now face is a strategy of demonisation of the Front National, initiated by the socialists and the right, pitted against a strategy of normalisation of the party led by the Front National with Marine Le Pen at its head.

What is Meant by Market Efficiency?

Market efficiency has been a central topic of interest and debate amongst financial economists for more than five decades. Indeed, two of the recipients of the Nobel Memorial Prize in Economic Sciences in 2013, Eugene Fama and Robert Shiller, have debated the efficiency of markets since the 1980s. Concerns about market efficiency were catapulted to prominence most recently by the financial crisis of 2007-8. Efficient capital markets are foundational to economic theories that posit the allocative efficiency of free markets, which requires informationally efficient capital allocation markets, such as those for equity and fixed income trading. An extended line of research has uncovered evidence of various anomalies which seem to challenge notions of market efficiency, and has also attempted to explain the causes of one such anomaly, the so-called “size effect.” Though there appears to be substantial evidence that the size effect is real and persistent, no substantial evidence supports the claim that it violates the efficient market hypothesis.

“In an allocationally efficient market, scarce savings are optimally allocated to productive investments in a way that benefits everyone” (Copeland, et al., 2005, p. 353). To provide optimal investment allocation, capital prices must provide market participants with accurate signals, and therefore prices must fully and instantaneously reflect all available relevant information (Copeland, et al., 2005). In advanced economies, secondary stock markets play an indirect role in capital allocation by revealing investment opportunities and information about managers’ past investment decisions (Dow & Gorton, 1997). For secondary stock markets, and other formal capital markets, to efficiently and effectively fulfill these two roles, securities prices must “be good indicators of value” (Fama, 1976, p. 133). Therefore, allocative market efficiency requires capital market prices to be informationally efficient.

Informational efficiency implies no-arbitrage pricing of tradeable securities and entails several defining characteristics that form the basis of the efficient market hypothesis. Generally, “A market is efficient with respect to information set θ_t if it is impossible to make economic profits by trading on the basis of information set θ_t” (Jensen, 1978, p. 98), where economic profits are defined as risk-adjusted returns minus trading and other costs. If security prices reflect all available relevant information, such as P/E ratios and past return variances, then it would be impossible to use such information to profitably trade these securities. Therefore tests of the possibility of using publicly available information to earn economic profits constitute tests of informational efficiency.
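
Jensen’s definition can be sketched a little more formally as follows (an illustrative formulation only; the symbols π, r* and c are generic notation introduced here and do not appear in the original source):

E[\pi_{t+1} \mid \theta_t] \le 0, \qquad \text{where } \pi_{t+1} = (r_{t+1} - r^{*}_{t+1}) - c_{t+1}

Here r_{t+1} is the realised return on a trading strategy conditioned on the information set θ_t, r*_{t+1} is the risk-adjusted benchmark return, and c_{t+1} represents trading and other costs, so that no strategy based only on θ_t is expected to earn a positive economic profit.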

Tests of informational market efficiency generally take three forms, and comprise the elements of the efficient market hypothesis. Fama (1970) defined the three forms of market efficiency as the weak, semi-strong and strong form, with each form characterised by the nature of the information central to its application. Weak form efficiency tests are tests of the viability of using the past price history of the market to predict future returns (which is a necessary, but not sufficient, condition for trading for economic profits). The semi-strong form of the efficient market hypothesis tests whether all publicly available information could be used by traders to earn economic profits. And finally, the strong form of market efficiency tests the viability of using all information, public as well as private, to generate economic profits. In the literature and amongst practitioners, it is the semi-strong form which “represents the accepted paradigm and is what is generally meant by unqualified references in the literature to the ‘Efficient Market Hypothesis’” (Jensen, 1978, p. 99). And though some references to ‘market efficiency’ allude to the allocative efficiency of markets, the term market efficiency usually refers to informational efficiency as operationally defined by Fama’s efficient market hypothesis, specifically the semi-strong formulation.

Since its formulation in the late 1960s, researchers have conducted thousands of tests of the efficient market hypothesis and have found various anomalies, such as the size effect, which appear to violate market efficiency. Banz (1981) examined NYSE-listed common stock returns between 1936 and 1975 and found that stocks with the smallest market capitalisation earned a risk-adjusted return 0.40% per month higher than the remaining firms in his sample, which was the first evidence that the ‘size effect’ posed a challenge to semi-strong form efficiency. Analysing a sample of 566 NYSE and AMEX stocks over the 1963–1977 period, Reinganum (1981) found that portfolios constructed based on size exhibited predictability of future returns, with the smallest-sized portfolio outperforming the largest decile by 1.77% per month. Keim (1983), testing NYSE and AMEX stocks over the 1963-1979 period, reported a size premium of approximately 2.5% per month. Lamoureux & Sanger (1989) found a size premium for NASDAQ stocks (2.0% per month) and for NYSE/AMEX stocks (1.7% per month) over the 1973 to 1985 period. Fama & French (1992, p.438) concluded, “The size effect (smaller stocks have higher average returns) is robust in the 1963-1990 returns on NYSE, AMEX, and NASDAQ stocks.” Though evidence continued to mount of a size effect, which entails that average stock returns of firms with small market capitalisation were significantly higher than the average returns for large capitalisation firms, Fama and French’s paper preceded decades of research regarding explanations for the size effect and its possible implications.

Over the years researchers have offered a variety of empirical explanations, some of them mutually exclusive, for the size effect. Robert Merton (1987) argued that smaller firms have smaller investor bases and are less likely than larger firms to enjoy an institutional following amongst investors, making smaller firms less liquid and cheaper, which results in greater risk-adjusted returns. Chan & Chen (1991) asserted that smaller firms are more likely than large firms to be either distressed or only marginally profitable, and therefore small firms’ prices are more responsive to changing business conditions, a risk exposure that gives rise to the size effect. Fama & French (1993, p.5) formed 25 portfolios of securities based on size and book-to-market and found that these “portfolios constructed to mimic risk factors related to size and BE/ME capture strong common variation in returns, no matter what else is in the time-series regressions. This is evidence that size and book-to-market equity indeed proxy for sensitivity to common risk factors in stock returns.” Supporting their argument that the size effect was a proxy for common risk factors, Fama & French (1995) found evidence that firm size loaded profitability risk into the cross-section of stock returns. These, and other, empirical findings shed light on possible reasons for the size effect, but a consensus explanation never developed around a single cause.

In contrast to the empirical and economic explanations for the size effect, some researchers questioned whether the size effect existed at all. Shumway & Warther (1999) argued that the small firm effect is essentially a statistical illusion, related not to actual share prices but to market microstructure issues which inhibit proper measurement of price movements. They examined prices of NASDAQ-listed firms from 1972 to 1995, a period that previous research had associated with a significant size effect, and found that after considering delisting bias (by accounting for delisted firms’ final price movements before removal from the sample), the size effect disappeared completely. Wang (2000) argued along similar lines, contending that the size effect resulted from survivorship bias. He argued that small stocks are relatively more volatile and therefore more likely than large firms to be delisted due to bankruptcy or failing to meet listing requirements. These delisted stocks are often excluded from the samples studied for the size effect, which would bias the returns of small stocks upwards. Wang (2000) used simulation experiments to test for the likelihood of the small firm effect under such circumstances and concluded that the effect was spurious. Examining all of the above explanations and others, van Dijk (2011, p. 3272) concludes, “The empirical evidence for the size effect is consistent at first sight, but fragile at closer inspection. I believe that more empirical research is needed to establish the validity of the size effect.” Though the causes of the size effect are interesting and remain an important topic of debate, more important are the possible implications of the size anomaly for the efficient market hypothesis.

The size anomaly appears to present a violation of efficient markets, especially to those observers who wrongly presume that market efficiency implies stock prices must follow a random walk; however, no researcher has yet shown that information related to firm size can be leveraged by traders to earn economic profits. Recalling Jensen’s (1978) definition of informational efficiency, the size effect violates market efficiency only if such information could be used to generate risk-adjusted abnormal returns net of costs. Though the size effect may indicate that stock returns are predictable, “if transaction costs are very high, predictability is no longer ruled out by arbitrage, since it would be too expensive to take advantage of even a large, predictable component in returns” (Timmermann & Granger, 2004, p. 19). Therefore return predictability invalidates market efficiency only when it produces risk-adjusted returns that exceed transaction costs. Stoll and Whaley (1983), who test whether the size anomaly can be exploited to earn risk-adjusted returns greater than transaction costs, find that it is not possible for the sample of NYSE-listed firms examined over the 1960 to 1979 period. This is due in part to the relative insignificance of small firms in relation to the market as a whole. As noted by Fama (1991, p. 1589), “the bottom quintile [of stocks] is only 1.5% of the combined value of NYSE, AMEX, and NASDAQ stocks. In contrast, the largest quintile has 389 stocks (7.6% of the total), but it is 77.2% of market wealth.” So, even if the size effect is granted perfect validity, it does not necessarily negate the efficient market hypothesis.

A final set of reasons ameliorating concerns about the size effect’s threat to market efficiency is related to model specification. Abstracting from the specific arguments related to size effects, consideration of the joint hypothesis problem dampens concerns that size effects could be determined to violate market efficiency. Roll (1976) noted that the pricing models used to test market efficiency were also necessarily testing the validity of the specification of the market model (specifically, the validity of the market model proxy), which means that researchers’ models were necessarily underspecified. Violations seemingly attributable to the size effect, or any other apparent anomaly, can always be attributed to misspecification of the market model or mismeasurement of the market proxy, making it impossible to definitively infer anomalous behavior as evidence of market inefficiency. Additionally, as Fama (1991, pp. 1588-9) pointed out, “small-stock returns…are sensitive to small changes (imposed by rational trading) in the way small-stock portfolios are defined. This suggests that, until we know more about the pricing (and economic fundamentals) of small stocks, inferences should be cautious for the many anomalies where small stocks play a large role…”. Therefore, though there seems to be robust evidence for a size effect, transaction costs overwhelm risk-adjusted returns and model specification concerns generally blunt notions that size effects can be shown to disprove market efficiency.

The global financial crisis of 2007-8 renewed prominent calls for dispensing with the notion of efficient markets, as the allocative efficiency of markets seemed in doubt after so much capital appeared to be wasted on ill-advised investments. But efficient market allocation of investments relies not on ex post views of past downturns, but on ex ante decisions about future investment opportunities. Efficient markets imply that all relevant information is impounded in current asset prices, maximising market participants’ ability to allocate investment, which necessarily implies that the future is unpredictable; market efficiency precludes the ability to forecast events such as the financial crisis, exactly as the model predicts. Alternatively, a long line of research has examined the possibility that anomalies, such as the size effect, disprove market efficiency. The size effect, however, though an interesting puzzle regarding the cross-section of stock returns, does not disprove market efficiency.

References
Banz, R., 1981. The relationship between return and market value of common stocks. Journal of Financial Economics, 9(1), pp. 3-18.
Chan, K. & Chen, N., 1991. Structural and Return Characteristics of Small and Large Firms. The Journal of Finance, 46(4), pp. 1467-84.
Copeland, T., Watson, J. & Shastri, K., 2005. Financial Theory and Corporate Policy. Fourth ed. London: Pearson.
Dijk, M. A. v., 2011. Is size dead? A review of the size effect in equity returns. Journal of Banking & Finance, 35(12), pp. 3263-74.
Dow, J. & Gorton, G., 1997. Stock Market Efficiency and Economic Efficiency: Is There a Connection?. The Journal of Finance, 52(3), pp. 1087-1129.
Fama, E., 1970. Efficient Capital Markets: A Review of Theory and Empirical Work. The Journal of Finance, 25(2), pp. 383-417.
Fama, E., 1976. Foundations of Finance. New York: Basic Books.
Fama, E., 1991. Efficient Capital Markets: II. The Journal of Finance, 46(5), pp. 1575-1617.
Fama, E. F. & French, K. R., 1992. The Cross-Section of Expected Stock Returns. The Journal of Finance, 47(2), pp. 427-465.
Fama, E. F. & French, K. R., 1993. Common risk factors in the returns on stocks and Bonds. Journal of Financial Economics, 33(1), pp. 3-65.
Fama, E. F. & French, K. R., 1995. Size and Book-to-Market Factors in Earnings and Returns. The Journal of Finance, 50(1), pp. 131-155.
Jensen, M., 1978. Some Anomalous Evidence Regarding Market Efficiency. Journal of Financial Economics, 6(2/3), pp. 95-101.
Keim, D. B., 1983. Size-related anomalies and stock return seasonality: Further empirical evidence. Journal of Financial Economics, 12(1), pp. 13-32.
Lamoureux, C. G. & Sanger, G. C., 1989. Firm Size and Turn-of-the-Year Effects in the OTC/NASDAQ Market. The Journal of Finance, 44(5), pp. 1219-1245.
Merton, R., 1987. A Simple Model of Capital Market Equilibrium with Incomplete Information. The Journal of Finance, 42(3), pp. 483-510.
Reinganum, M. R., 1981. Misspecification of capital asset pricing: Empirical anomalies based on earnings’ yields and market values. Journal of Financial Economics, 9(1), pp. 19-46.
Shumway, T. & Warther, V. A., 1999. The Delisting Bias in CRSP’s Nasdaq Data and Its Implications for the Size Effect. The Journal of Finance, 54(6), pp. 2361-79.
Timmermann, A. & Granger, C. W., 2004. Efficient market hypothesis and forecasting. International Journal of Forecasting, 20 (1), pp. 15-27.
Wang, X., 2000. Size effect, book-to-market effect, and survival. Journal of Multinational Financial Management, 10(3-4), pp. 257-73.

Mergers and Acquisitions Essay

Mergers and Acquisitions (M&A) occur when two or more organisations join together all or part of their operations (Coyle, 2000). Strictly defined, a corporate takeover refers to one business acquiring another by taking ownership of a controlling stake of another business, or taking over a business operation and its assets (Coyle, 2000). Corporate takeovers have been occurring for many decades, and have historically occurred on a cyclical basis, increasing and decreasing in volume in what have been termed merger waves since the late 1800s (Sudarsanam, 2010). There can be a number of distinct motives for corporate M&A, and this short essay will discuss a number of these, drawing on financial theory as well as empirical evidence to explain their rationale.

The first group of motives to be discussed are those that relate to, and can be explained by, the classical approach to financial theory (Icke, 2014). These motives assume that firms do not make mistakes and acquire other companies because they believe that doing so will result in increased profitability (Baker & Nofsinger, 2010), as acquisitions allow for the achievement of enhanced economies of scale or scope (Lipczynski et al., 2009). This theoretical perspective can be used to explain a number of motives.

First, corporate takeovers can be used as a route to achieving geographic expansion. By acquiring another company in a different country or with more geographically-diverse operations, an acquiring company can expand its markets and thus expand its sales opportunities. The larger business post-acquisition can then, if implemented efficiently, benefit from economies of scale associated with reducing unit input costs, ultimately increasing profitability.

A second reason for completing a takeover could be to increase market share within a market a firm is already operating in. This can result in increased profits through again allowing for increased economies of scale through decreasing unit costs and can also increase profitability by reducing the number of competitors in a market.

Thirdly, acquiring businesses at different stages in the supply chain, known as vertical integration (Icke, 2014), can allow for enhanced profitability as it can facilitate enhanced value in the supply chain and the potential to exercise control and scale benefits over inputs to production and the overall cost of output.

Other motives for corporate takeovers can be categorised as being more consistent with the behavioural school of thought. This considers that M&A is driven by factors other than pure profit maximization (Icke, 2014; Martynova and Renneboog, 2008). There are a number of reasons why M&A may take place where the opportunity to benefit from scale economies is not the key driver. First, a company may engage in an acquisition in a bid to increase its size to prevent bids from other companies. This is consistent with the concept of ‘eat or be eaten’ (Gorton et al., 2005), which hypothesizes that during waves of M&A activity, firms feel vulnerable to takeover bids and as such feel compelled to engage in their own M&A activity in order to increase their size and minimize interest from potential bidders.

A second motive for M&A that relates more to the behavioural school (but does possess some economic basis) is the opportunistic M&A activity associated with management taking advantage of a relative increase in the value of its stock to acquire a target in an equity-funded acquisition. In this case, it is the perceived opportunity to buy another company ‘cheaply’ that drives the acquisition, rather than the profit motive if all other variables are held equal.

What empirical evidence do we have in regard to value creation following a takeover for:
the bidder firm’s shareholders
the acquired firm’s shareholders

Mergers and Acquisitions (M&A) occur when two or more organisations join together all or part of their operations (Coyle, 2000). A number of empirical studies have been performed in order to ascertain the extent of value creation following a takeover for both the bidder firm and the acquired firm. Shareholders of the acquired firm have consistently experienced positive value impacts (Icke, 2012; Martynova and Renneboog, 2008) following completion of a takeover, while evidence of value creation following a takeover for the acquirer has been inconsistent and is broadly considered to be inconclusive (Angwin, 2007). This essay will discuss the empirical evidence of the value impact following corporate takeovers for both parties, looking at a broad range of evidence spanning the time following announcement to the fiscal years following completion of a takeover. The essay will briefly discuss the limitations of the evidence based on the highly differentiated nature of the M&A landscape and the presence of significant independent variables. It will then evaluate the results before arguing that for the bidder firm’s shareholders evidence of value creation is broadly inconclusive and that it appears that any value creation that is witnessed differs depending on the type and motives for the acquisition, as well as when it is taking place. It will argue that, as is consistent with the majority of empirical studies, value creation for the acquired company’s shareholders is positive (Martynova and Renneboog, 2008).

The value creation experienced by shareholders in the bidder firm following a takeover can be considered both post-announcement and in the years following completion and integration of the businesses. Value impacts at announcement are most visible in share price movements, while performance-based metrics, such as profitability, can be used to assess value impacts following completion of the takeover (Icke, 2012).
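
As a brief, hedged illustration of how such announcement effects are typically measured in event studies (the notation is generic and not drawn from the specific studies cited here):

AR_{i,t} = R_{i,t} - E[R_{i,t}], \qquad CAR_{i}(t_1, t_2) = \sum_{t=t_1}^{t_2} AR_{i,t}

where R_{i,t} is the realised return of firm i on day t, E[R_{i,t}] is its expected return from a benchmark such as a market model, and the cumulative abnormal return (CAR) over the event window from t_1 to t_2 is read as the announcement-period value effect for that firm’s shareholders.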

First, looking at the empirical evidence that supports positive value creation for the acquiring shareholders, it is clear that there are a number of studies that demonstrate the positive value-creating effects of a cross-section of transaction types. Looking at the US and Europe, Martynova and Renneboog (2008) measure the value impacts following a takeover by studying a century of historical M&A transactions. The evidence indicates that in the case of European cross-border transactions, value is created in terms of post-acquisition performance. Looking at developing countries, Kumar (2009) finds that in developing economies, acquirer shareholders tend to experience better returns in both the short and long term following an acquisition than in developed economies. Gugler et al. (2003) look specifically at the impact of a takeover on sales and profitability and find that acquisitions have a statistically significant impact on the profit of the acquiring company. Chari et al. (2010) look at cross-border transactions and provide evidence that acquirers will experience improved post-merger performance, but that this is dependent on having intangible asset advantages that can be exploited abroad. Villalonga (2004) studied diversification takeovers by reviewing the share price performance of diversified conglomerates versus non-diversified trading peers in the years following the transaction. The evidence reveals that diversified firms actually trade at a large and significant premium to their peers, suggesting that this type of acquisition can drive long-term value gains for shareholders in the post-acquisition entity. Draper and Paudyal (2006) studied the value-creating impacts of private versus public takeovers and found that value creation for the acquiring company when the target is private is broadly positive. An empirical study by Icke (2014) looks at European and US M&A transactions by motive for takeover and finds that, in terms of announcement effects on share price, transactions driven by an increase in market share, research and development synergies, and vertical integration are rewarding for the acquiring company. In terms of longer-term gains, Icke (2014) shows that transactions driven by increases in market share, geographic expansion, vertical integration and diversification all have a positive effect.

In contrast, a wide body of empirical research challenges the findings of the above studies and covers a range of different M&A situations where value is in fact destroyed for the acquiring company’s shareholders, both in terms of share price at announcement and in terms of post-integration performance. In a study that considers a broad range of takeover motivations, Walker (2000) finds that acquiring companies experience overall negative value impacts, and the anomalies in which acquirers actually gain in the longer term are so infrequent that they are considered to be statistically insignificant. When Martynova and Renneboog (2008) study US transactions aimed at achieving diversification, the evidence indicates that post-acquisition value is destroyed for acquiring shareholders following a transaction and that wealth effects at announcement for acquirers are inconclusive. Powell and Stark (2005) find that post-acquisition performance, in terms of sales impact, is actually positive for the acquirer; however, when this is controlled for extra working capital, the effect is inconclusive and likely a net negative result. Looking at vertical integration takeovers, both Kedia et al. (2011) and Walker (2000) find that, in the case of US transactions, takeovers result in value destruction for acquiring company shareholders. Icke (2014) also found that R&D-driven takeovers have a negative effect.

The empirical evidence in relation to target firms’ shareholder value creation is significantly more conclusive across the spectrum of types of M&A. Empirical studies, which tend to focus on value creation for the owners of target companies, primarily look at shareholder value at announcement (Icke, 2014), in the form of share price rises and the premiums acquirers pay. Martynova and Renneboog (2008) find that targets gain value from the announcement of a takeover and, furthermore, find that this gain is consistent across merger cycles, regardless of whether the takeover occurs during the peak or the low of the merger waves witnessed throughout the past century (Martynova and Renneboog, 2008). Their study of US takeovers demonstrates that the value creation is significant in size, often reaching double-digit growth on the value prior to announcement. In a study of hostile versus friendly takeovers, Schwert (1996) found that target shareholders experience significant gains from a takeover that has come about as a result of a tender process, rather than a hostile single-party bid, although the results were broadly positive across both types for target shareholders. Likewise, studying the method of payment and its impact on value creation for target shareholders, Goergen and Renneboog (2004) found that all-cash offers trigger abnormal returns (ARs) of almost 10 percent upon announcement, whereas all-equity bids or offers combining cash, equity and loan notes generated a return of only 6 percent, but still resulted in positive value creation for the target company.

Empirical studies have also been conducted on transaction data based around the concept of merger waves. That is to look at transactions not as isolated occurrences but as events that have taken place within one of the six identified waves of M&A activity since the late 1800s (Sudarsanam, 2010). By looking at takeovers from the perspective of when they occurred, it is possible to identify more consistent patterns in value creation and to derive theories of attribution for these gains. Icke (2014) reviews a number of studies and finds that value creation for shareholders in both the target and the acquiring company varies depending on the wave in which it occurs when other variables are considered to be constant. Icke (2014) shows that the third wave generated largely positive returns for parties engaged in takeovers, while the fourth was broadly negative and the value impacts were indistinguishable during the fifth. This evidence of environment-sensitivity adds further complexity to the evidence surrounding value creation in takeovers.

Overall, there is a wealth of empirical evidence available on the value impacts of corporate takeovers; however, the evidence is broadly inconclusive in determining the value-creating opportunities for acquirers, while it is broadly conclusive that target company shareholders will gain (Martynova and Renneboog, 2008). The inconclusive nature is caused by methodological inconsistencies as a result of mixed methods, the difficulty of capturing operational change, the different time periods and sample size distortions (Icke, 2014), as well as the vastly differentiated base of empirical evidence that exists, as discussed in this essay. As Icke (2014) states, the value effects of takeovers are, ultimately, non-conclusive. However, based on the empirical evidence discussed in this essay and drawing on Wang and Moini (2012), the general conclusion can be seen to be that in short-term event studies (addressing the impacts post-announcement) acquirers will either experience normal returns or significant losses, while target firms have been shown to consistently experience positive value creation in the same timeframe. Post-acquisition performance is extremely difficult to measure and the evidence has been mixed. Furthermore, as Angwin (2007) argues, strategic motivations are essential for understanding post-takeover performance and for measuring the isolated effects of the takeover.

In conclusion, there exist a number of studies and a diverse body of empirical evidence on the value-creating effects of takeovers for both target and acquirer shareholders. For target shareholders, studies focus on the announcement effects and are broadly positive, while for acquirer shareholders, studies look at both announcement and post-transaction performance and show a broadly negative value impact, with some evidence of positive value creation in certain types of M&A scenario and during certain periods (waves) in history.

Bibliography
Angwin, D (2007). Motive Archetypes in Mergers and Acquisitions (M&A): The implications of a Configurational Approach to Performance. Advances in Mergers and Acquisitions. 6. pp77-105.
Baker, K.H. and Nofsinger, J.R. (2010). Behavioral Finance: Investors, Corporations, and Markets. Hoboken, NJ: John Wiley & Sons Inc.
Chari, A., Ouimet, P.P. and Tesar, L.L. (2010). The value of control in emerging markets. Review of Financial Studies. 23(4). pp1741-1770.
Coyle, B (2000). Mergers and Acquisitions. Kent: Global Professional Publishing.
Draper P. and Paudyal K. (2006). Acquisitions: Private versus Public. European Financial Management. 12(1). pp57-80.
Goergen, M and Renneboog, L (2004). Shareholder Wealth Effects of European Domestic and CrossBorder Takeover Bids. European Financial Management. 10(1). pp9-45.
Gugler, K., D.C. Mueller, B.B. Yurtoglu and Ch. Zulehner (2003). The Effect of Mergers: An International Comparison. International Journal of Industrial Organization. 21(5). pp625-653.
Icke, D (2014). An Empirical Study of Value Creation through Mergers & Acquisitions – A Strategic Focus. Aarhus University, Business & Social Sciences [online]. Available at: http://pure.au.dk/portal/files/68137404/Final_Thesis_31.12.2013_Daniel_Michael_Icke.pdf
Kedia, S., Ravid, S.A., Pons, V. (2011).When Do Vertical Mergers Create Value?. Financial Management. 40(4). 845-877.
Kumar, N. (2009). How emerging giants are rewriting the rules of M&A. Harvard Business Review. 87(5). pp115-121.
Lipczynski, J., Wilson, O.S. and Goddard, J. (2009). Industrial Organization: Competition, Strategy, Policy. Third edition. Essex, England: Pearson Education Limited.
Martynova, M and Renneboog, L (2008). A Century of Corporate Takeovers: What Have We Learned and Where Do We Stand?. Journal of Banking & Finance. 32(10). pp2148-2177.
Powell, RG. and Stark, AW. (2005). Does operating performance increase post-takeover for UK takeovers? A comparison of performance measures and benchmarks. Journal of Corporate Finance. 11(1-2), pp293-317.
Schwert G.W. (1996). Markup Pricing in Mergers and Acquisitions. Journal of Financial Economics. 41(2). pp153-192.
Sudarsanam, S (2010). Creating Value from Mergers and Acquisitions – The Challenges. Essex, England: Pearson Hall.
Villalonga, B.N. (2004). Diversification Discount or Premium? New Evidence from the Business Information Tracking Series. The Journal of Finance. 59(2). pp 479-506.
Wang, D. and Moini, H. (2012). Performance Assessment of Mergers and Acquisitions: Evidence from Denmark. [online]. Available at: http://www.g-casa.com/conferences/berlin/papers/Wang.pdf
Walker, MM (2000). Corporate Takeovers, Strategic Objectives, and Acquiring-Firm Shareholder Wealth. Financial Management. 29(200). pp55-66.

Overvaluation of the Stock Market Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

Stock markets are considered to be among the most preferred investment platforms by investors, as they generate a high return on investment (Fong, 2014). There are many underlying reasons for this high return, one of which may be the valuation of the financial commodities traded in the stock market (Chang, 2005). Some financial analysts believe that stock markets are extremely overvalued (Phoenix, 2014), while others consider them only slightly overvalued (Rosenberg, 2010). Another school of thought holds that they are fairly valued (Wolf, 2008), while some hold the opinion that they are undervalued (Pan, 2009). These differences in viewpoint make it difficult to gauge the extent to which stock markets are overvalued. The reasons for the differences include different geographical locations (Tan, Gan and Li, 2010), different assumptions made in comparisons (Cheng and Li, 2015) and the different methods used for valuation, each of which has its merits and demerits (Khan, 2002). Stock market overvaluation may have severe negative effects, including a market crash or an increase in an organisation's agency costs, which need to be considered by managers in organisation-wide strategic management (Jensen, 2005).

Methods used for Stock Valuation

Various methods are used for stock valuation; some of the most common are the Price to Earnings ratio (Stowe et al., 2008), Knowledge Capital Earnings (Ujwary-Gil, 2014) and the Dividend Discount Model (Adiya, 2010). The price to earnings ratio is the most common method used to evaluate stock markets, whereby the company's current stock price is compared with the earnings it is predicted to yield in future (Stowe et al., 2008). Knowledge Capital Earnings (KCE) is another method through which a company's intellectual capital can be gauged and the extent of its overvaluation interpreted (Ujwary-Gil, 2014). The KCE method is, however, particularly subjective where the analyst is interested in estimating the potential future earnings of an organisation (Ujwary-Gil, 2014).

The Dividend Discount Model is based on the assumption that the price of a stock at equilibrium will equal the sum of all its future dividend payments discounted back to their present value (Ivanovski, Ivanovska and Narasanov, 2015). One of the shortcomings of this model concerns the estimation of the company's growth rate, where averaged historical rates do not provide an accurate picture because they ignore current economic conditions and changes taking place within the company (Ivanovski, Ivanovska and Narasanov, 2015). Another issue, identified by Mishkin, Matthews and Giuliodori (2013), relates to the accuracy of dividends forecast from the company's past performance and the predicted future trends of the market; critics cast doubt on these figures, as they are based purely on analysts' estimates and may not always be correct.
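
Both the P/E ratio and the constant-growth form of the Dividend Discount Model reduce to short formulae, so a minimal sketch may help make the mechanics concrete. The figures below are purely hypothetical assumptions for illustration and are not drawn from any study cited in this essay.

```python
# Illustrative only: hypothetical inputs, not data from the works cited above.

def price_earnings_ratio(price_per_share: float, earnings_per_share: float) -> float:
    """Trailing P/E: market price divided by earnings per share."""
    return price_per_share / earnings_per_share

def gordon_growth_value(next_dividend: float, required_return: float, growth: float) -> float:
    """Constant-growth Dividend Discount Model: P0 = D1 / (r - g)."""
    if required_return <= growth:
        raise ValueError("required return must exceed the growth rate")
    return next_dividend / (required_return - growth)

pe = price_earnings_ratio(price_per_share=50.0, earnings_per_share=2.5)                  # 20.0
fair_value = gordon_growth_value(next_dividend=1.80, required_return=0.08, growth=0.03)  # 36.0
print(f"P/E = {pe:.1f}, DDM fair value = {fair_value:.2f}")
```

Comparing the model value with the quoted market price gives one, heavily assumption-dependent, indication of over- or under-valuation, which is precisely why the critics cited above treat any single estimate with caution.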

Stock Markets are Extremely Overvalued

Hussman (2014), who is well known for his accurate insights into financial markets, comments in one of his speeches that, owing to their zero interest rate and quantitative easing policies, central banks have driven stock prices up to twice as high as they should be, implying that stock markets are overvalued by 100%. While different authors argue that every valuation metric has its merits and demerits, which makes it difficult to conclude whether stock markets are overvalued when measured by a specific metric, a Phoenix (2014) report provides evidence that stock markets are overvalued by almost every metric used for valuation. According to Autore, Boulton and Alves (2015), short interest is also a determinant of stock valuation: the lower the short interest in the initial stock, the more overvalued the stock will be.

An example is the U.S. stock market, which one analysis finds to be overvalued by 55% (Lombardi, 2014), while another estimates it to be overvalued by as much as 80% (Heyes, 2015). Lombardi (2014) attributes this overvaluation to the increasing presence of bullish stock advisors relative to bearish advisors, which leaves investors complacent rather than anxious about a large market sell-off. Evaluating the market through various methods, Tenebrarum (2015) concludes that the U.S. stock market is valued at its highest peak to date. Additionally, Lombardi (2014) notes that these indicators resemble those seen before the stock market crash of 2007, which suggests that history might repeat itself, as specialists have already predicted a forthcoming crash (Heyes, 2015).

Reasons behind Extreme Overvaluation

Moenning (2014) suggests that one reason for stock overvaluation is investors' tendency to invest on the basis of stock valuation rather than business cycles. He further elaborates that, because investors are drawn to highly valued stocks, companies raise their stock prices to make their stock appear more attractive than that of other companies. Qian (2014) proposes a remedy: if investors are discouraged from considering stock valuation alone when looking for investment options, companies will have no incentive to systematically misprice their stocks, which is what produces overvaluation.

Another reason for stock market overvaluation has been suggested by Autore, Boulton and Alves (2015), who argue that stocks are overvalued to a greater extent where levels of failure to deliver are higher. Three major exchanges report a large number of failures to deliver in their daily listings, approximately equal to 10,000 shares or 0.5% of overall outstanding shares, which helps explain the extreme overvaluation of stock markets (Autore, Boulton and Alves, 2015).

Stock Markets are Slightly Overvalued

Some analysts estimate the stock markets to be only slightly overvalued relative to what their value should be. Rosenberg (2010) supports this view with research indicating that stock markets are overvalued by 35%, while a Newstex (2010) report suggests the market may be overvalued by around 26%. Specialists from this school of thought believe that stock overvaluation may only result in a temporary disruption in the market, which can be resolved by cautiously reducing stock prices.

Stock Markets are Fairly Valued

The ideal situation is one in which stock markets are appropriately valued, which Wolf (2008) identifies as an opportunity. He argues that fairly priced stock markets are favoured by investors and by risk-seeking governments, since this is the situation with the least uncertainty. With an overall market yield of 4%, Paler (2012) judged the stock markets to be fairly valued, regarding them as a suitable investment platform. For example, Newstex (2015) reported Amazon's stock price to be fairly valued at $295 per share as opposed to $380, because financial analysts believed that factors such as a potential decline in annual revenue growth, a reduction in operating profit margins due to rising technology, marketing and other costs, and increased investment in growth strategies, such as international expansion, needed to be accounted for when valuing the stock. It follows that overvalued stocks pose a threat to financial markets because investors lose confidence, which results in a drop in revenue growth (Akbulut, 2013). Slightly overvalued stock markets may, however, correct relatively painlessly if a decline in central bank rates feeds through into the wider interest rate spectrum (Saler, 1998).

Stock Markets are Undervalued

Pan (2009) supports the claim that stock markets are undervalued, citing the example of the Asian stock markets, which he estimates to be approximately 30% undervalued; one reason he identifies is political instability. Another example is provided by Pawsey (2009), whose analysis suggests that much of the UK stock market is undervalued to an extent not seen in decades, because stocks have been undersold relative to sales estimates. So while the U.S. stock market is viewed by some as extremely overvalued, the U.K. stock market is seen as severely undervalued. It can thus be seen that geographical location plays a large role in the differences of opinion about the overvaluation and undervaluation of stock markets (Tan, Gan and Li, 2010).

There are some specific markets that remain undervalued for sustained periods. The Russian stock market is one example: Putin (2008) reported Russian companies to be severely undervalued, and Caldwell's (2015) analysis also placed the Russian stock market among the three most undervalued markets globally. That analysis also predicted that Russian stocks might fall further, so investors need to be cautious before investing in such markets.

Reasons behind Stock Undervaluation

One of the reasons behind the undervaluation of stock markets is investors' inclination towards highly valued stocks. Although some companies set their stock at a lower price to make it seem cheaper and more attractive for investors to buy, investors often do the opposite, opting for highly valued stocks in anticipation of higher returns (Warner, 2010).

Reasons for Different Viewpoints
Different Assumptions and Valuation Methodology

The different viewpoints on stock valuation rest on different assumptions (Cheng and Li, 2015) and different valuation methods (Khan, 2002). These methods have their own advantages and disadvantages which, if accounted for, may yield a different perspective. For example, the price to earnings ratio is considered a worthless tool by some analysts because of its overoptimistic estimates (Tenebrarum, 2015). Taboga (2011) identifies a further issue with this ratio: it is strongly influenced by fluctuations in earnings caused by business cycle oscillations. He therefore argues that relying on this method alone may not give a true picture of the extent to which the stock market is overvalued.
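
One common remedy for the cyclical distortion Taboga (2011) describes is to smooth earnings over a full business cycle before computing the multiple, as in a cyclically adjusted P/E. The sketch below uses ten years of hypothetical inflation-adjusted earnings per share; it illustrates the general idea rather than the exact method of any author cited here.

```python
# A minimal cyclically adjusted P/E sketch, assuming ten years of hypothetical
# inflation-adjusted earnings per share (oldest to newest).
real_eps_history = [3.1, 2.4, 1.2, 2.0, 2.8, 3.3, 3.6, 2.9, 3.4, 3.8]
current_price = 60.0

smoothed_eps = sum(real_eps_history) / len(real_eps_history)   # average over the cycle
cape = current_price / smoothed_eps
trailing_pe = current_price / real_eps_history[-1]

print(f"Trailing P/E = {trailing_pe:.1f}, cyclically adjusted P/E = {cape:.1f}")
```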

Implications of Overvaluation

Hunter, Kaufman and Pomerleano (2005) explain that extreme overvaluation of the stock market should be taken seriously and a solution devised, otherwise there is a higher probability of a crash. Liao (2014) likewise found a positive relationship between highly overvalued markets and the possibility of a crash, and a negative relationship between extreme overvaluation and future share price jumps. Jensen's (2005) study revealed that the overvaluation of a company's stock gives rise to organisational forces which become difficult for management to handle, damaging the organisation's core value either partially or entirely.

On the one hand, overvalued stock markets pose threats to financial markets; on the other, they help boost aggregate demand and supply, and this positive effect may partially offset the negative one (Cecchetti et al., 2000). Jones and Netter (2012) argue that mispriced stocks encourage investors to trade, as a result of which prices revert to more reasonable levels.

Conclusion

The valuation of stock markets has long been an area of concern for financial institutions and analysts. Differences in valuations, and in opinions about valuation, arise from the different methods used for calculation and estimation. Each method has its pros and cons, and research suggests that no single method can provide a true picture of the degree to which stock markets are overvalued or undervalued. There is evidence that stock markets are extremely overvalued, and an equal amount of research suggesting that they are fairly valued or undervalued. Considering the differences in the methods used and the variation in the geographical locations where these studies are conducted, it is difficult to hold a strong opinion about the extent to which stock markets are overvalued or undervalued, because critics of each school of thought have logical arguments exposing the limitations of the valuation methods used by other analysts. It is therefore advisable for analysts to use a combination of two or more valuation methods, so that the doubts of the critics may be reduced and transparency in financial data analysis ensured.

References
Adiya, B. (2010). Discuss the Main Theories Underlying the Valuation of the Stock. Critically Assess the Role of Fundamental and Technical Analysis in Stock Market Valuation. EC 247 Term Paper, University of Essex.
Akbulut, M.E. (2013). Do Overvaluation-driven Stock Acquisitions Really Benefit Acquirer Shareholders? Journal of Financial and Quantitative Analysis, Vol. 48, No. 4, pp. 1025-1055.
Autore, D.M., Boulton, T.J., and Alves, M.V. (2015). Failure to Deliver, Short Sale Constraints, and Stock Overvaluation. Financial Review, Vol. 50, No. 2, pp. 143-172.
Caldwell, K. (2015). Revealed: The World’s Cheapest Stock Markets 2015. The Telegraph. 6th June. [Online] Available at: http://www.telegraph.co.uk/finance/personalfinance/investing/11654508/Revealed-The-worlds-cheapest-stock-markets-2015.html
Cecchetti, S.G., Genberg, H., Lipsky, J., Wadhwani, S. (2000). Asset Prices and Central Bank Policy. Geneva: International Center for Monetary and Banking Studies.
Chang, J. (2005). Shares Feature Attractive Valuations. Chemical Market Reporter, Vol. 268, No. 18, pp. 15.
Cheng, S., and Li, Z. (2015). The Chinese Stock Market Volume II: Evaluation and Prospects. London: Palgrave Macmillan.
Fong, W.M. (2014). The Lottery Mindset: Investors, Gambling and the Stock Market. London: Palgrave Macmillan.
Heyes, J.D. (2015). Stock Market is 50% to 80% Overvalued; Experts Warn Historical Crash now Imminent. Natural News. 17th September. [Online] Available at: http://www.naturalnews.com/051202_economic_predictions_stock_market_crash_James_Dale_Davidson.html
Hunter, W.C., Kaufman, G.G., and Pomerleano, M. (2005). Asset Price Bubbles: The Implications for Monetary, Regulatory and International Policies. London: MIT Press
Hussman, J. (2014). John Hussman: The Stock Market is overvalued by 100%. Phil’s Stock World. Newstex. Retrieved from ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1621993284/fulltext?accountid=15977
Ivanovski, Z., Ivanovska, N., and Narasanov, Z. (2015). Application of Dividend Discount Model Valuation at Macedonian Stock-Exchange. UTMS Journal of Economics, Vol. 6, No. 1, pp. 147-154.
Jensen, M.C. (2005). Agency Costs of Overvalued Equity. Financial Management, Vol. 34, No. 1, pp. 5-19.
Jones, S.L., and Netter, J.M. (2012). Efficient Capital Markets. [Online] Available at: http://www.econlib.org/library/Enc/EfficientCapitalMarkets.html
Khan, A. (2002). 501 Stock Market Tips and Guidelines. USA: Writers Club Press.
Liao, Q. (2014). Overvaluation and Stock Price Crashes: The Effects of Earnings Management. PhD Dissertation, University of Texas.
Lombardi, M. (2014). EconMatters: U.S. Stock Market Overvalued by 55%? Newstex Global Business Blogs. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1641263053?pq-origsite=summon
Mishkin, F.S., Matthews, K., and Giuliodori, M. (2013). The Economics of Money, Banking and Financial Markets. European Edition. Barcelona: Pearson Education Limited.
Moenning, D. (2014). EconMatters: How Much are Stocks Overvalued? Newstex Global Business Blogs. Newstex. Retrieved from ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1639491656?pq-origsite=summon
Newstex (2010). Is the Stock Market 26% Overvalued? Phil’s Stock World. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/189661843?pq-origsite=summon
Paler, N. (2012). Fidelity’s Roberts: Equity Markets Fair to Slightly Overvalued but better than Cash. Investment Week. 26th March. pp. 28. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/963553204?pq-origsite=summon
Pan, A. (2009). Asian Stock Markets Seen almost 30% Undervalued. Asiamoney. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/206616845?pq-origsite=summon
Pawsey, D. (2009). UK Stocks are ‘Significantly Undervalued’. Financial Advisor. Retrieved from ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/195110261?pq-origsite=summon
Phoenix Capital Research (2014). Stocks Are Severely Overvalued By Almost Every Predictive Metric. Phil’s Stock World. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1546016887?pq-origsite=summon
Putin (2008). Putin Says Russian Stock Market Undervalued. Daily News Bulletin. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/456062919?pq-origsite=summon
Qian, X. (2014). Small Investor Sentiment, Differences of Opinion and Stock Overvaluation. Journal of Financial Markets, Vol. 19, No. 1, pp. 219-246.
Rosenberg, D. (2010). Rosenberg: Stocks 35% Overvalued. Phil’s Stock World. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/189666557?pq-origsite=summon
Saler, T. (1998). Fed could Rescue Slightly Overvalued Large-cap Stocks. Milwaukee Journal Sentinel. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/260844752?pq-origsite=summon
Stowe, J.D., Robinson, T.R., Pinto, J.E., McLeavey, T.W. (2008). Equity Asset Valuation Workbook. New Jersey: John Wiley & Sons, Inc.
Taboga, M. (2011). Under/Over-Valuation of the Stock Market and Cyclically Adjusted Earnings. International Finance, Vol. 14, No. 1, pp. 135-164.
Tan, Z.H., Gan, C., and Li, Z. (2010). An Empirical Analysis of the Chinese Stock Market: Overvalued/Undervalued. International Journal of Applied Economics & Econometrics, Vol. 18, No. 1, pp. 44-74.
Tenebrarum, P. (2015). EconMatters: The U.S. Stock Market is at its Most Overvalued in History. Newstex Global Business Blogs. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1656537926?pq-origsite=summon
Ujwary-Gil, A. (2014). Knowledge Capital Earnings of a Company Listed on Warsaw Stock Exchange. European Conference on Knowledge Management. Kidmore: Academic Conferences International Limited.
Warner, J. (2010). Why Stock Markets are still Undervalued? The Daily Telegraph. 19th January, pp. 4. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/321739234?pq-origsite=summon
Wolf, M. (2008). Why Fairly Valued Stock Markets are an Opportunity? Financial Times. 26th November, pp. 13.

The Development of the Balanced Scorecard

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

The intention of this essay is to analyse the ‘Balanced Scorecard’ and to review its effectiveness as a performance management tool. It will review briefly the short history of the ‘Balanced Scorecard’ and then analyse each of the different aspects of the management tool and describe how they link together.

History of the Balanced Scorecard

The notion of the 'Balanced Scorecard' first appeared in the Harvard Business Review in 1992, in an article titled "The Balanced Scorecard – Measures that Drive Performance," authored by Robert Kaplan and David Norton (Kaplan and Norton, 1992). They had conducted a year-long study with "12 companies at the leading edge of performance measurement, [and] devised a 'balanced scorecard'" as a result of their research (Kaplan and Norton, 1992, p.71).

A 'Balanced Scorecard' is a "strategic planning and management system that is used to align business activities to the vision and strategy of the organisation, improve internal and external communications, and monitor organisation performance against strategic goals" (Balanced Scorecard Institute, Unknown). It arose from the need to include non-financial indicators in performance measurement, where in the past businesses and managers had focused primarily on financially based indicators. These financially based performance measurement systems "worked well for the industrial era, but they are out of step with the skills and competencies companies are trying to master today" (Kaplan and Norton, 1992, p.71).

After spending a year with various companies, Norton and Kaplan realised that "Managers want a balanced presentation of both financial and operational measures" (Kaplan and Norton, 1992, p.71). The recognition of the importance of operational measures was a milestone in performance measurement, as financially based measures indicate the final outcomes of actions and processes already in place, whilst operational measures help drive future financial performance.

Since its inception in 1992 the 'Balanced Scorecard' has been "adopted by thousands of private, public, and non-profit enterprises around the world" (Kaplan, 2010, p. 2), which is testament to its importance and effectiveness as a performance management system. It is likely that businesses implementing the system have seen marked effects on their profit margins and on the motivation and innovativeness of their workforce.

The Four Perspectives

The scorecard itself is made up of four perspectives: Financial, Customer, Internal Business Processes, and Learning & Growth. By looking at these different perspectives the balanced scorecard "provide[s] answers to four basic questions: How do customers see us? What must we excel at? Can we continue to improve and create value? How do we look to shareholders?" (Kaplan and Norton, 1992, p.72). By providing senior managers with information from four important perspectives, the scorecard also minimises information overload by "add[ing] value by providing both relevant and balanced information in a concise way for managers" (Mooraj, Oyon and Hostettler, 1999, p.489).

To understand more completely how the interaction of the perspectives helps an organisation create additional financial value, whilst also supporting learning and growth, internal business processes and customer satisfaction, see fig.1 and fig.2 in the appendix. The four perspectives and the way they interconnect are important, so it is also worth analysing each of them individually; first, it must be recognised that each perspective is made up of Objectives, Measurements, Targets and, finally, Programmes.

Each of these areas within a perspective helps identify and measure a way in which a company can achieve its stated objective through the implementation of a programme. A basic example for the customer perspective would be as follows:

Objective: Reduce staff turnover
Measurement: Staff turnover ratio
Target: A ratio of less than 6 months
Programme: Implement staff feedback and satisfaction surveys with the aim of creating an environment in which staff feel productive and appreciated
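
For organisations tracking many such entries, the four elements above map naturally onto a simple record structure. The sketch below is only an illustrative data representation, assuming hypothetical field names; it is not part of Kaplan and Norton's framework itself.

```python
from dataclasses import dataclass

@dataclass
class ScorecardEntry:
    objective: str
    measurement: str
    target: str
    programme: str

entry = ScorecardEntry(
    objective="Reduce staff turnover",
    measurement="Staff turnover ratio",
    target="A ratio of less than 6 months",
    programme="Staff feedback and satisfaction surveys to create an environment "
              "in which staff feel productive and appreciated",
)
print(entry.objective, "->", entry.measurement)
```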

Learning & Growth Perspective

This perspective is the starting point of the scorecard and, in conjunction with the cause and effect hypothesis (Fig.2), is arguably its most important aspect, as it is "intended to drive improvement in financial, customer and internal process performance" (Kaplan and Norton, 1993). It focuses primarily on innovation and the development of work-level employees, essentially creating more efficiency within the internal business processes. However, achieving the required innovation and efficiency improvements demands a motivated and empowered workforce; one method of assessing this is to implement a "staff attitude survey, a metric for the number of employee suggestions measured whether or not such a climate was being created" (Kaplan and Norton, 1993). Other possible measures include revenue per employee, which can be observed and recorded year on year against a pre-set objective, thus fulfilling each of the required facets of the balanced scorecard in relation to this perspective.

By implementing a programme, in the form of a survey or other such measures, "it [can] identify strategic initiatives and related measures, these gaps can then be addressed and closed by initiatives such as staff training and development" (Mooraj, Oyon and Hostettler, 1999, p. 483). Once workforce empowerment is achieved, employees are happy and informed about their roles and the overall strategic aim of the organisation, and methods of observing, recording and measuring are in place, the organisation can focus on the next stage of the balanced scorecard.

Internal Process Perspective

Once an empowered and informed workforce is in place and employees are working to their full potential, this perspective focuses primarily on making business and/or manufacturing processes more efficient, creating more output for the same input. To achieve these improvements a business may implement many changes that "may range from moderate and localized changes to wide-scale changes in business process, the elimination of paperwork and steps in processes, and introduction of automation and improved technology" (Balanced Scorecard Institute, 2002).

To achieve this increase in efficiency, "managers must devise measures that are influenced by employees' actions. Since much of the action takes place at the department and work-station levels, managers need to decompose overall cycle time, quality, product, and cost measures to local levels" (Kaplan and Norton, 1992, p.75). By devising measurements aimed at work-station level, such as delivery turnaround time or reductions in waste produced, managers can observe increases or decreases in efficiency and locate where they stem from. Once a suitable measurement system is in place, managers can set targets and, finally, implement programmes designed to meet those targets.

By implementing a programme that is easily communicated, achievable and produces results that can be monitored at all relevant levels, an organisation will find that employees benefit from seeing the results they produce, further motivating the workforce to increase efficiency. Once efficiency within the internal business processes has been achieved, with an objective, a measurement system, pre-set targets and a successfully implemented programme in place, the organisation can consider whether the increase in innovation, empowerment and efficiency has had its intended effect on the customer.

Customer Perspective

The next perspective is the customer perspective, which could be argued to be one of, if not the, most important aspects, as this is where an increase in sales revenue, and thus income, is generated. Creating an empowered, informed workforce and improving the efficiency of business processes should "lead to improved products and services" (Balanced Scorecard Institute, 2002), which in turn should improve quality and, ideally, with the reduced costs from greater efficiency, lower the price of the products and services offered to customers.

To achieve this increase in customer satisfaction or market share, a similar approach is needed: the organisation must first set an objective, such as increasing market share by 10% or maintaining or increasing repeat purchases. Once an objective is in place, the organisation must implement a measurement system that can be reviewed annually, monthly or even weekly; examples include the percentage increase in customer loyalty cards or in sales revenue. Finally, a programme must be implemented to drive towards the objective; examples include increased market research to explore new market opportunities, or investment in a new marketing campaign and special offers directed at repeat customers.

Financial Perspective

The final perspective is the financial perspective, which in the eyes of shareholders is by far the most important, and where the effort in the earlier facets of the balanced scorecard culminates in improved profit margins and ratios such as Return on Investment (ROI). This perspective "included three measures of importance to the shareholder. Return-on-capital employed and cash flow reflected preferences for short-term results, while forecast reliability signalled the corporate parent's desire to reduce the historical uncertainty caused by unexpected variations in performance" (Kaplan and Norton, 1993). The first two are self-evidently important to shareholders, generating returns and cash flow that feed into larger profits, while reducing the uncertainty caused by variation in performance is something that can only be achieved by getting every employee focused and aligned with the overall strategic aims of the company: an informed, focused and appreciated workforce, efficient internal business processes, and a satisfied customer base.

The Cause and Effect Relationship

It is clear that linkages are the most important aspect of the balanced scorecard and that the cause and effect relationship (fig.2) allows for strategic alignment throughout an organisation. This has been seen to be "the common thread to the successful implementation of the balanced scorecard" (Murby and Gould, 2005, p.10). Another key element of the balanced scorecard is ensuring "that all employees understand [the] strategy and conduct their business in a way that contributes to its mission and objectives" (Murby and Gould, 2005, p.5).

The cause and effect relationship, combined with ensuring that every employee is aware of the overall company strategy, allows an organisation to create a foundation for success: the learning & growth facet provides the company with an informed, innovative and enthusiastic workforce which positions it to progress into the future. A final key point is allowing managers to "introduce four new processes that help companies make [an] important link" (Kaplan and Norton, 2007). By translating the vision, communicating the strategy and linking it to departmental and individual goals, integrating business plans with financial goals and, finally, giving each employee the ability to provide feedback, a company creates an environment in which it can adjust and augment at each level should managers feel the need to.

Conclusion

In conclusion, this essay has covered the short history and fundamentals of the 'Balanced Scorecard' and has shown how it is made up of different perspectives which confront management with basic questions about important stakeholders. The scorecard also provides management with a detailed measurement system and the ability to observe progress, or regression, within each perspective through the inclusion of objectives, measurement tools and targets set by management themselves. This allows management to make changes where necessary to ensure that the overall strategic vision of the company is still being pursued. The essay has also highlighted the importance of the cause and effect relationship and provides the "strategy map" within the appendix, which illustrates how the balanced scorecard, in conjunction with the cause and effect relationship, can turn an empowered workforce into a long-term, financially stable organisation. Finally, it has covered the importance of communication, something many organisations overlook when they exclude work-level employees from the overall strategic vision and inform only upper-level management.

Bibliography
Balanced Scorecard Institute, (2002). "The Balanced Scorecard and Knowledge Management." Available at: https://balancedscorecard.org/BSC-Knowledge-Management
Balanced Scorecard Institute, (Unknown). "Balanced Scorecard Basics." Available at: http://balancedscorecard.org/Resources/About-the-Balanced-Scorecard
Kaplan, R.S. (2010). "Conceptual Foundations of the Balanced Scorecard," Harvard Business School, pp. 1-36 [Online]. Available at: http://www.hbs.edu/faculty/Publication%20Files/10-074.pdf
Kaplan, R.S. and Norton, D.P. (1993). Putting the Balanced Scorecard to Work. [Online] Available at: https://hbr.org/1993/09/putting-the-balanced-scorecard-to-work
Kaplan, R.S. and Norton, D.P. (1992). "The Balanced Scorecard – Measures that Drive Performance," Harvard Business Review, pp.70-80 [Online]. Available at: www.alnap.org/pool/files/balanced-scorecard.pdf
Kaplan, R.S. and Norton, D.P. (2007). Using the Balanced Scorecard as a Strategic Management System [Online]. Available at: https://hbr.org/2007/07/using-the-balanced-scorecard-as-a-strategic-management-system
Mooraj, S., Oyon, D. and Hostettler, D. (1999). "The Balanced Scorecard: a Necessary Good or an Unnecessary Evil?" European Management Journal, 17(5), pp.481-491. [Online]. Available at: http://members.home.nl/j.s.sterk/AQM/The%20balanced%20scorecard%20a%20necessary%20good%20or%20an%20unnecessary%20evil.pdf
Murby, L.M. and Gould, S.T. (2005). Effective Performance Management with the Balanced Scorecard – Technical Report, CIMA, pp.1-43 [Online]. Available at: http://www.cimaglobal.com/Documents/ImportedDocuments/Tech_rept_Effective_Performance_Mgt_with_Balanced_Scd_July_2005.pdf
Illustrations
Balanced Scorecard Institute, (2002). “Cause and Effect Hypothesis”. [Online] Available at: https://balancedscorecard.org/BSC-Knowledge-Management
Kaplan, R.S. (2010). "The Strategy Map links intangible assets and critical processes to the value proposition and customer and financial outcomes." Page 23. [Online] Available at: http://www.hbs.edu/faculty/Publication%20Files/10-074.pdf
Appendix

(Figure 1)

(Figure 2)

Principles of Corporate Finance

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

The question of whether or not to proceed with a project requiring significant capital expenditure involves considerations running the gamut of issues facing the firm. From a purely financial perspective, Fisher's Separation Theorem requires the firm to return the maximum amount of wealth to shareholders (Fisher, reprinted 1977). In the modern firm, ownership is separated from control: the capital of the company is held, traditionally at least, by shareholders who have little to do with the day-to-day running of the firm, which is entrusted to the Directors appointed to the board by the trustees and shareholders. As such, modern finance faces a considerable agency problem, whereby the owners of the firm's capital are separated by some degree from the control of their capital (Fama, 1978). It is therefore expected, and enforced by the market through investors' willingness to place capital under a firm's control, that a firm will return wealth commensurate with an acceptable degree of risk. Indeed, it is the risk of an investment which drives the importance of investment appraisal in firms, and understanding the difference between systematic and unsystematic risk underpins much of the following discussion of the investment appraisal process (Hirshliefer, 1961).

Unsystematic risk is the risk associated with the unique operations and conditions of the firm and is relatively unimportant (at least in terms of financial theory), whilst systematic risk, especially as represented by the Beta of the firm (more of which later), is the risk of the class of share within the market (Pogue, 2004). The theory is that a share's price is determined by its relation to the capital market line, in terms of the random walk theory, which governs the movement of the share with the market. Shares generally move as the market moves, and how much they move represents the systematic risk to the shareholder. Beta is now one of the most common inputs to the cost of equity capital and is also used heavily in portfolio theory. It is not without controversy or criticism: Betas are estimated from a wide range of historical financial data, and many commentators have argued that Beta therefore has little to tell us about the future. There are also significant problems in translating accounting data into price-relevant information; in particular there is, at best, a tenuous link between earnings and book values of assets and the prices observed in the market, notwithstanding the Ohlson model, which is argued to demonstrate a relationship between these figures and price (and which is assumed to reconcile both the Modigliani and Miller irrelevancy hypothesis and Gordon and Shapiro's valuation metrics) (Pogue, 2004). Notwithstanding these criticisms, and the considerable and accepted criticisms of the random walk theory, Beta is still widely accepted as a way of dealing with systematic risk.
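
In practice, Beta is usually estimated from historical return data as the covariance of the share's returns with the market's, divided by the variance of the market's returns (equivalently, the slope of an OLS regression of share returns on market returns). The sketch below uses simulated, hypothetical return series purely to illustrate the calculation.

```python
# A minimal Beta estimation sketch, assuming hypothetical weekly return series.
import numpy as np

rng = np.random.default_rng(0)
market_returns = rng.normal(0.002, 0.02, size=260)                  # hypothetical market returns
share_returns = 1.2 * market_returns + rng.normal(0.0, 0.01, 260)   # share co-moves with the market

covariance = np.cov(share_returns, market_returns)[0, 1]
beta = covariance / np.var(market_returns, ddof=1)                  # Beta = Cov(r_i, r_m) / Var(r_m)
print(f"Estimated Beta: {beta:.2f}")                                # close to 1.2 by construction
```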

What does this mean for investment appraisal techniques? In terms of the accepted methodology of investment appraisal, the goal of such appraisal has to be an increase in the wealth of shareholders, and as such many of the techniques readily deployed by managers have no theoretical basis. In the following appraisal of the project a number of techniques are used to give decision-relevant information about the project (Graham, 2001). The company has two criteria it uses to judge the acceptability of a project: the Return on Investment, which it states must be above 15%, and the payback period, which must be within three years. These methods give information in terms of, in the first case, accounting data, and in the second, a rule of thumb for recouping the initial investment within a specified time period. Neither tells us much about the financial and wealth-creating aspects of the project in question (Hajddasinski, 1993). Payback is simply a measure of the time it takes to recoup the initial investment and, as such, has little to do with maximising shareholder wealth: it is entirely possible for a project to recoup the initial investment very quickly but then go on to destroy wealth in later years, particularly when a project runs for a significant period of time. The accounting rate of return similarly tells us little about the wealth creation of the project, based as it usually is on non-cash accounting items such as depreciation which have little to do with the amount of actual wealth returned to shareholders. Neither technique takes into consideration systematic risk to shareholders, and as such both ignore a fundamental aspect of modern finance theory. Indeed, it is only Net Present Value (NPV) which can tell us about the wealth-creating (or destroying) aspects of a project, and as such it is this technique (along with the related Internal Rate of Return (IRR)) which can give decision-relevant information in terms of shareholder wealth (Lefley, 2004).
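
The company's two criteria are mechanical to compute. The sketch below shows the payback period and accounting rate of return for a set of hypothetical cashflows; the figures are assumptions for illustration, not the project's own numbers.

```python
# Payback period and accounting rate of return (ARR) on hypothetical figures.

def payback_period(initial_outlay: float, cashflows: list[float]) -> float:
    """Years until cumulative inflows recover the outlay (interpolated within the year)."""
    cumulative = 0.0
    for year, cash in enumerate(cashflows, start=1):
        if cumulative + cash >= initial_outlay:
            return year - 1 + (initial_outlay - cumulative) / cash
        cumulative += cash
    return float("inf")  # outlay never recovered

def accounting_rate_of_return(total_profit: float, initial_outlay: float, years: int) -> float:
    """Average annual accounting profit as a percentage of the initial investment."""
    return (total_profit / years) / initial_outlay * 100

flows = [1_800_000, 2_000_000, 1_500_000, 1_200_000]
print(payback_period(5_000_000, flows))                    # about 2.8 years
print(accounting_rate_of_return(1_500_000, 5_000_000, 4))  # 7.5 (% per annum)
```
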
Briefly, NPV uses a discount factor, based upon the Weighted Average Cost of Capital (WACC), which adjusts the incremental net cashflows of a project for systematic risk, thus ensuring that the wealth created for the company reflects the time value of money (Amran, 1999). Much of the methodology of NPV requires one to recognise incremental cashflows and to remove those which have no relevance to wealth creation, particularly accounting items such as depreciation. Other cashflows which need not be included are sunk costs and costs which would exist regardless of the project's acceptance. The analysis thus concentrates on the wealth-creating (or destroying) aspects of a project rather than on book conventions and the ephemera of other techniques. It results in a cash figure for the wealth added or destroyed by accepting the project and is particularly useful for ranking projects in times of capital rationing. NPV is a powerful decision-making tool, but it is not without considerable problems of its own. It requires the firm to estimate future cashflows and, as will be seen, the accuracy of these cashflows is of significant importance to the viability of the project (Amran, 1999). Further, the use of NPV is considered by many to be far more complex than most other techniques, and non-specialists may find the results, and even the preparation of the analysis, a significant challenge. The discount factor itself is also often controversial: WACC is only one of a range of rates which can be used, although it is the most theoretically correct (as will be seen in the later discussion of capital gearing theory), and without an accurate discount factor the analysis is at serious risk of error (Hillier, 1963). Notwithstanding these problems, NPV is one of the most relevant and reliable tools of investment appraisal and satisfies much of the theoretical underpinning of the subject of finance. This report finds that the project returns a positive NPV and satisfies all of the other investment criteria, and therefore should be undertaken (Graham, 2001).
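
As a minimal sketch of the two discounted techniques, assuming hypothetical net cashflows (year 0 first) rather than the project's own appendix figures, NPV discounts each flow at the cost of capital, and IRR is found as the rate at which the NPV falls to zero (here by simple bisection, which suffices for a conventional outflow-then-inflow pattern).

```python
# NPV and IRR on hypothetical cashflows; year 0 (the outlay) comes first.

def npv(rate: float, cashflows: list[float]) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows: list[float], low: float = -0.99, high: float = 1.0) -> float:
    """Bisection on the discount rate, assuming NPV falls as the rate rises."""
    for _ in range(100):
        mid = (low + high) / 2
        if npv(mid, cashflows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2

flows = [-5_000_000, 2_000_000, 1_900_000, 1_700_000, 1_500_000]
print(f"NPV at 6%: {npv(0.06, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```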

Results & Findings

Please see appendix A for the full derivation of the results and findings.

Net Profit (£) | 2,792,009
Payback | 2.5 years
ARR% | 55.84
NPV | 1,767,785
IRR | 21%

This is based on a cost of equity capital of 6%, which in turn is based on the calculation of the cost of equity under the Capital Asset Pricing Model (CAPM):

Ke = rf + β(rm − rf)

Where Ke is the cost of equity capital, rf is the risk-free rate (often gilts), β is the assigned Beta of the share and rm is the expected return on the market. For the company this equates to 5.31%, which has been rounded up to the nearest whole number (on the assumption that it is better to err on the side of caution).
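
The CAPM calculation itself is a one-line formula. The inputs below are illustrative assumptions chosen only so that the result reproduces a figure of roughly 5.31%; they are not the values used in the appendix.

```python
# A minimal CAPM sketch; the inputs are assumed for illustration, not taken from appendix A.

def cost_of_equity(risk_free: float, beta: float, market_return: float) -> float:
    """CAPM: Ke = rf + beta * (rm - rf)."""
    return risk_free + beta * (market_return - risk_free)

ke = cost_of_equity(risk_free=0.045, beta=0.30, market_return=0.072)
print(f"Cost of equity: {ke:.2%}")   # 5.31%, rounded up to 6% in the appraisal above
```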

Discussion and Analysis

As established in the introduction, the NPV technique carries a significant theoretical advantage over other methods. IRR is based on the same methodology and gives the cost of capital at which the NPV of the project would be zero; as such it provides the maximum cost of capital at which the project would be viable. It would seem that this project is worth undertaking: not only does it satisfy all of the firm's existing criteria, but it also returns wealth to the shareholders given the risk class of the share. The problems of NPV have, however, to be considered in light of the predictability of the cashflows and the sensitivity of the project to the accuracy of those cashflows (Kim S.H. & Crick, 1986). If the number of cars per day through the toll booths were a thousand lower, the project would return a negative NPV and in effect destroy shareholder wealth:

Time | 0 | 1 | 2 | 3 | 4
Income
– Vehicles (Est) p/day | 0 | 2,000 | 1,400 | 1,200 | 1,000
– Toll p/car (£) | 0 | 4.00 | 5.00 | 5.50 | 6.00
– Income p/day | 0 | 8,000 | 7,000 | 6,600 | 6,000
– Income p/annum | 0 | 2,920,000 | 2,555,000 | 2,409,000 | 2,190,000
Expenditure
– Operating costs (@£ p/vehicle) | 0 | 2.00 | 2.50 | 3.00 | 3.50
– Total operating costs | 0 | 1,460,000 | 1,277,500 | 1,314,000 | 1,277,500
– Wages (@£288 p/day * 365) | 0 | 105,120 | 105,120 | 105,120 | 105,120
– Outlay | 5,000,000
Total Expenditure | 5,000,000 | 1,565,122 | 1,382,623 | 1,419,123 | 1,382,624
Net Income | -5,000,000 | 1,354,878 | 1,172,378 | 989,877 | 807,376.5
Net Profit | -675,491
ARR% | -13.5098
Discount @ 6% (Cost of Equity Capital) | – | 0.942 | 0.888 | 0.8375 | 0.7903
DCF | -5,000,000 | 1,276,295 | 1,041,071 | 829,022 | 638,069.6
NPV | -1,215,542

As wages are fixed, this cost is not sensitive to change, but other costs may be; if operating costs rise by 50%, then the project also destroys wealth:

Time | 0 | 1 | 2 | 3 | 4
Income
– Vehicles (Est) p/day | 0 | 3,000 | 2,400 | 2,200 | 2,000
– Toll p/car (£) | 0 | 4.00 | 5.00 | 5.50 | 6.00
– Income p/day | 0 | 12,000 | 12,000 | 12,100 | 12,000
– Income p/annum | 0 | 4,380,000 | 4,380,000 | 4,416,500 | 4,380,000
Expenditure
– Operating costs (@£ p/vehicle) | 0 | 3.00 | 3.75 | 4.50 | 4.75
– Total operating costs | 0 | 3,285,000 | 3,285,000 | 3,613,500 | 3,467,500
– Wages (@£288 p/day * 365) | 0 | 105,120 | 105,120 | 105,120 | 105,120
– Outlay | 5,000,000
Total Expenditure | 5,000,000 | 3,390,123 | 3,390,124 | 3,718,625 | 3,572,625
Net Income | -5,000,000 | 989,877 | 989,876.3 | 697,875.5 | 807,375.3
Net Profit | -1,514,996
ARR% | -30.2999
Discount @ 6% (Cost of Equity Capital) | – | 0.942 | 0.888 | 0.8375 | 0.7903
DCF | -5,000,000 | 932,464.1 | 879,010.1 | 584,470.7 | 638,068.7
NPV | -1,965,986
IRR | -14%

In both of these scenarios the changes to the cashflows have a devastating effect on the viability of the project, one which is not communicated adequately (especially in terms of the costs) by ARR, or even payback. Consider the non-quantitative factors that might cause these scenarios to happen: drivers may believe that the price of the toll is too high and find alternative routes to avoid paying it, while on the cost side, hikes in energy prices or other operating costs could easily affect viability. These quick examples demonstrate the dangers of making assumptions about the future, and as such one must be very careful about the assumptions made in cashflow forecasts. One way of adjusting for these unsystematic risks is to conduct sensitivity analysis and to use statistical techniques to adjust the NPV; this is often termed Expected Net Present Value (ENPV) and uses the standard deviation to adjust for risk. Further, the cost of capital is a significant factor in the reliability of NPV (Pogue, 2004). Here the cost of equity capital is used; as the firm is financed entirely by equity this is probably a realistic cost of capital, but perhaps investors see the direction of the firm as particularly risky and require further compensation. Using WACC is only one option for managers, and indeed WACC does not always adequately adjust for the risk that is seen as inherently greater as cashflows move further forward in time. Careful consideration therefore needs to be given to the discount factor used.
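
As a minimal sketch of the ENPV idea, the scenario NPVs below echo the base case and the two sensitivities shown above, but the probabilities attached to them are assumptions made purely for illustration.

```python
# Expected NPV (ENPV): probability-weighted NPV across discrete scenarios.
import math

scenarios = [  # (label, assumed probability, scenario NPV)
    ("base case",              0.60,  1_767_785),
    ("1,000 fewer cars a day", 0.25, -1_215_542),
    ("operating costs +50%",   0.15, -1_965_986),
]

enpv = sum(p * value for _, p, value in scenarios)
std_dev = math.sqrt(sum(p * (value - enpv) ** 2 for _, p, value in scenarios))

print(f"ENPV: {enpv:,.0f}")                   # probability-weighted NPV
print(f"Standard deviation: {std_dev:,.0f}")  # dispersion of outcomes around the ENPV
```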

Lastly, and with particular reference to the WACC, it is important to consider the nature of the capital structure of the company (Harris, 1991). Capital structure generally refers to the mixture of debt and equity which makes up the capital of the company, known as gearing, and represented as a proportional ratio. Assume that the company has £5,000,000 of equity, as stated, and no debt. As this is a large capital project, the company faces a decision as to how to finance it. Assuming that the only options are a rights issue to generate more equity, or debt (in the form of debentures, typical of long-term borrowing), a decision needs to be made as to which course is better for the company as a whole. Gearing is another contentious issue in finance, with no correct answer to the problem of optimal gearing. A number of theoretical approaches can be applied to the problem, most notably the work of Modigliani and Miller (MM) and their irrelevancy propositions (Modigliani, 1958). To understand this, it is important to understand a number of features of both debt and equity. Equity, as has been said, is priced by the risk it represents for equity holders, often expressed through Beta; debt is not governed by this, and is instead a cost in terms of the interest payments over the life of the debenture and the repayment of the capital sum at the end of the loan. Debt is therefore often cheaper than equity, as the lender's risk is considered lower than that of a shareholder: looking at an income statement, one can clearly see that interest is payable regardless of the profit attributable to shareholders; in effect, the bank gets paid first. Further, there is a tax shield on interest payments, as these are a cost to the company and therefore reduce the amount of corporation tax payable. Consider the following example. The company currently has £5m in equity and requires a further £5m to finance the toll booth project. Its cost of equity capital is 6%, but it is able to borrow at 5% through debentures, and the rate of corporation tax is 30%. As it stands the WACC is 6%, and if the company issues a further £5m of equity to finance the project it will remain so; if, however, the company borrows the £5m, the following holds:

 | Debt | Equity
% Cost | 5 | 6
Gearing | 0.5 | 0.5
WACC | 5.5
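
As a minimal sketch, assuming the 50/50 split and the 6% and 5% costs above, the figure in this table can be reproduced as follows; the after-tax variant anticipates the tax shield discussed next, and the exact after-tax number depends on how the shield is applied.

```python
# Weighted average cost of capital, with an optional tax shield on the debt component.

def wacc(equity_weight: float, cost_equity: float,
         debt_weight: float, cost_debt: float, tax_rate: float = 0.0) -> float:
    return equity_weight * cost_equity + debt_weight * cost_debt * (1 - tax_rate)

print(f"Pre-tax WACC:   {wacc(0.5, 0.06, 0.5, 0.05):.2%}")        # 5.50%, as in the table
print(f"After-tax WACC: {wacc(0.5, 0.06, 0.5, 0.05, 0.30):.2%}")  # lower, reflecting the tax shield
```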

With a further reduction of (1 − T) applied to represent the tax shield, this figure becomes 5.15%: the cost of capital has been effectively lowered. This means that future projects (and it is important to use the existing cost of capital for NPV calculations, regardless of how the project is to be financed) will return more wealth to shareholders. The work of MM, however, pointed out that in a theoretically perfect world (no tax, symmetry of information and of borrowing rates, as well as other theoretical suppositions) the reduction is exactly offset by the increased risk from extending borrowing, as follows:

(Figure omitted. Source: G. Arnold, Corporate Financial Management, 3rd Edition, London: Prentice Hall)

Therefore there is an increase of risk to equity shareholders with the increase in gearing.

Estimation of Optimal Hedge Ratios – Strategies

This work was produced by one of our professional writers as a learning aid to help you with your studies

The naive, or one-to-one, hedge assumes that futures and cash prices move closely together. In this traditional view of hedging, the holding of the initial spot asset and of the futures contract used to offset its risk are of equal magnitude but in opposite directions, so the hedge ratio (h) is minus one (one-to-one, or a unit hedge) over the period of the hedge.

This approach fails to recognise that the correlation between spot and futures prices is less than perfect, and also fails to consider the stochastic nature of futures and spot prices and the resulting time variation in hedge ratios (Miffre, City University).

The beta hedge recognises that the cash portfolio to be hedged may not match the portfolio underlying the futures contract. With the beta hedge strategy, h is calculated as the negative of the beta of the cash portfolio.

Thus, for example, if the cash portfolio beta is 1.5, the hedge ratio will be -1.5, since the cash portfolio is expected to move by 1.5 times the movement in the futures contract. Where the cash portfolio is the one that underlies the futures contract, the traditional strategy and the beta strategy yield the same value for h (Butterworth and Holmes, 2001).

The Minimum Variance Hedge Ratio (MVHR) was proposed by Johnson (1960) and Stein (1961). This approach takes into account the imperfect correlation between spot and futures markets and was developed further by Ederington (1979). According to Ederington, the objective of a hedge is to minimise risk, where risk is measured by the variance of the portfolio return. The hedge ratio is identified as:

h* = −σS,F / σ²F (1)

Where σ²F is the variance of the futures position and σS,F is the covariance between the spot and futures positions. The negative sign means that hedging a long stock position requires a short position in the futures market. The relation between spot and futures can be represented as:

St = α + h*Ft + εt (2)

Eq. (2), which is expressed in levels, can also be written in price differences as:

St − St-1 = α + h*(Ft − Ft-1) + εt (3)

or in price returns as:

(St − St-1)/St-1 = α + h*((Ft − Ft-1)/Ft-1) + εt (4)

Eq. (4) can be approximated by:

logSt − logSt-1 = α + h*(logFt − logFt-1) + εt (5)

Eq. (5) can be re-written as:

RSt = α + h*RFt + εt (6)

Where RSt and RFt are the returns on the spot and futures positions at time t.

Equations (2) and (3) assume a linear relationship between the spot and futures, while eq. (4)-(6) assume that the two prices follow a log-linear relation. In equations (2)-(3), the hedge ratio represents the ratio of the number of units of futures to the number of units of spot that must be hedged, whereas in eq. (4) the hedge ratio is the ratio of the value of futures to the value of spot (Scarpa and Manera, 2006).

Eq. (2) can easily produce autocorrelated and heteroskedastic residuals (Ederington, 1979; Myers and Thompson, 1989: cited in Scarpa and Manera, 2006). For this reason, some authors suggest the use of eq. (3)-(6), so that the classical OLS assumption of no correlation in the error terms is not violated.

Empirically, the optimal hedge ratio h* can be obtained by a simple Ordinary Least Squares (OLS) approach, where the coefficient estimate on the futures returns gives the hedge ratio. This can only be done when there is no co-integration between spot and futures prices/values and the conditional variance-covariance matrix is time invariant (Casillo, XXXX). Even though the application of the MVHR relies on unrealistic assumptions, it provides an unambiguous benchmark against which to assess hedging performance (Butterworth and Holmes, 2001).
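
As a minimal sketch of the OLS estimation of eq. (6), the series below are simulated, hypothetical spot and futures returns; the regression slope gives the size of the hedge, with the short futures position implied by the negative sign in eq. (1).

```python
# Minimum variance hedge ratio by OLS on hypothetical return series.
import numpy as np

rng = np.random.default_rng(1)
futures_returns = rng.normal(0.0, 0.012, size=500)
spot_returns = 0.9 * futures_returns + rng.normal(0.0, 0.004, size=500)   # imperfect correlation

# The OLS slope of spot on futures equals Cov(RS, RF) / Var(RF), i.e. the MVHR.
h_star = np.cov(spot_returns, futures_returns)[0, 1] / np.var(futures_returns, ddof=1)
print(f"Estimated minimum variance hedge ratio: {h_star:.3f}")   # close to 0.9 by construction
```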

The Error Correction Model (ECM) approach for determining the optimal hedge ratio takes into account the important role played by co-integration between the futures and spot markets, which is ignored by the MVHR (Casillo, XXXX). The theory of co-integration was developed by Engle and Granger (1987), who show that if two series are co-integrated, there must exist an error correction representation that permits the inclusion of both the short-run dynamics and the long-run information.

The ECM approach augments the standard OLS regression used for the MVHR by incorporating an error correction term (the residual from the long-run relationship) and lagged difference terms, to capture deviations from the long-run equilibrium relationship and the short-run dynamics respectively (XXXXect). The efficient market hypothesis and the absence of arbitrage opportunities imply that spot and futures are co-integrated and that an error correction representation must exist (Casillo, XXXX), of the following form:

ΔSt = ρet-1 + βΔFt + Σ(i=1..p) θiΔFt-i + Σ(j=1..q) δjΔSt-j + ut (7)

Where β is the optimal hedge ratio and et-1 = St-1 − γFt-1 is the lagged residual from the long-run (co-integrating) relationship between spot and futures.
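
A minimal sketch of eq. (7), assuming one lag of each difference and simulated, hypothetical (log) price series that are co-integrated by construction, might look as follows; the coefficient on ΔFt is the hedge ratio.

```python
# Error correction estimation of the hedge ratio on hypothetical co-integrated prices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
futures = np.cumsum(rng.normal(0.0, 0.01, n))   # random-walk (log) futures price
spot = futures + rng.normal(0.0, 0.005, n)      # spot tied to futures: co-integrated

# Step 1: co-integrating regression S_t = const + gamma * F_t, keeping the residual e_t.
e = sm.OLS(spot, sm.add_constant(futures)).fit().resid

dS, dF = np.diff(spot), np.diff(futures)

# Step 2: regress dS_t on e_{t-1}, dF_t, dF_{t-1} and dS_{t-1} (one lag of each difference).
y = dS[1:]
X = sm.add_constant(np.column_stack([e[1:-1], dF[1:], dF[:-1], dS[:-1]]))
ecm = sm.OLS(y, X).fit()
print(f"ECM hedge ratio (coefficient on dF_t): {ecm.params[2]:.3f}")
```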

All of the above approaches employ a constant variance and covariance to measure the hedge ratio, which creates problems: the return series of many financial securities exhibit non-constant variance, besides having skewed distributions. This has been demonstrated by Engle (1982), Lamoureux and Lastrapes (1990), Glosten, Jagannathan and Runkle (1993), Sentana (1995), Lee and Brorsen (1997) and Lee, Chen and Rui (2001) (Rose et al., 2005).

Non-constant variance, linked to unexpected events, is considered to be uncertainty or risk, and this uncertainty is particularly important to investors who wish to minimise risk. To cope with these problems, Engle (1982) introduced the Autoregressive Conditional Heteroskedasticity (ARCH) model to estimate conditional variance; it takes account of changing variance over time by imposing an autoregressive structure on the conditional variance. Bollerslev, Engle and Wooldridge (1988) extended the univariate GARCH framework to a multivariate setting, to measure simultaneously the conditional variances and covariances of more than one time series. The multivariate GARCH model can thus be applied to calculate a dynamic hedge ratio that varies over time according to the variance-covariance structure of the series (Rose et al., 2005).
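
Full multivariate GARCH estimation is beyond a short example, but a simplified stand-in with the same flavour is to let the covariance and variance decay exponentially, giving a hedge ratio that changes each period. The sketch below uses hypothetical returns and an assumed decay factor; it is an EWMA approximation, not the Bollerslev, Engle and Wooldridge model itself.

```python
# Time-varying hedge ratio from exponentially weighted covariance and variance
# (a simplified, EWMA-style stand-in for a multivariate GARCH hedge ratio).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
futures_ret = pd.Series(rng.normal(0.0, 0.012, 500))
spot_ret = 0.9 * futures_ret + pd.Series(rng.normal(0.0, 0.004, 500))

lam = 0.94                                      # assumed RiskMetrics-style decay factor
ewm_cov = spot_ret.ewm(alpha=1 - lam).cov(futures_ret)
ewm_var = futures_ret.ewm(alpha=1 - lam).var()
hedge_ratio_t = ewm_cov / ewm_var               # one hedge ratio per observation date

print(hedge_ratio_t.tail())                     # the ratio drifts through time
```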

Finally, other researchers have proposed more complex techniques, as well as special cases of the above techniques, for estimating the OHR. Among these are the random coefficient autoregressive model of Bera et al. (1997), the Fractional Cointegrated Error Correction model of Lien and Tse (1999), the Exponentially Weighted Moving Average estimator of Harris and Shen (2002), and the asymmetric GARCH of Brooks et al. (2002) (Casillo, XXXX).

Despite the extensive literature on all of the above approaches, no unanimous conclusion has been reached regarding the superiority of any particular methodology for determining the optimal hedge ratio. It is therefore reasonable to suggest that the choice of strategy for deriving the optimal hedge ratio should rest on a subjective assessment made in relation to investor preferences (Butterworth and Holmes, 2001).

Development of Research:

Figlewski (1984) conducted the first analysis of the hedging effectiveness of stock index futures in the US. He examined the hedging effectiveness of Standard and Poor's 500 stock index futures against underlying portfolios based on five major stock indexes for the period June 1, 1982 to September 20, 1983. All five indexes represented diversified portfolios, but they differed in character from one another. The Standard and Poor's 500 index and the New York Stock Exchange (NYSE) Composite included only the largest capitalization stocks. The American Stock Exchange composite (AMEX) and the National Association of Securities Dealers Automated Quotation System (NASDAQ) index of over-the-counter stocks contained only smaller companies, which move somewhat independently of the Standard and Poor's index. Finally, the Dow Jones portfolio contained only 30 stocks of very large firms. The return series for the analysis included dividend payments, since the risk associated with dividends on the portfolio is presumably one of the many sources of basis risk in a hedged position; however, it was found that their inclusion did not alter the results. Consequently, and given the relatively stable and predictable nature of dividends, subsequent studies have excluded dividends. Figlewski used beta hedge and minimum variance hedge strategies and showed that the latter can be estimated by Ordinary Least Squares (OLS) regression using historical data. He found that, for all indexes, hedging performance using the minimum variance hedge ratio (MVHR) was better than when the beta hedge ratio was used: the MVHR resulted in lower risk and higher return. When the MVHR was used, risk was reduced by 70%-80% for large capitalization portfolios, but hedging performance was considerably weaker for portfolios of smaller stocks. Hedging performance was also better for one-week and four-week hedges than for overnight hedges.

Figlewski (1985) studied the hedging effectiveness of three US index futures (the S&P500, NYSE Composite and Value Line Composite Index (VLCI)) in hedging five US indices (the S&P500, NYSE Composite, AMEX Composite, NASDAQ and DJIA). Data were collected for 1982. He analysed hedging effectiveness for holding periods ranging from one day to three weeks, using the standard deviation of the hedged position divided by the standard deviation of the un-hedged position as the measure of hedging effectiveness. Hedge ratios were derived using the beta strategy and the MVHR. Assuming constant dividends, the weekly returns of each of the five indices were regressed on the returns of the indices underlying the three futures. Daily data were used to compute ex post risk-minimizing hedge ratios. In nearly every case the risk-minimizing hedge ratio outperformed the beta hedge in terms of hedging effectiveness; for both types of hedge ratio, hedges of under a week were found to be not very effective. It was also found that hedging was more effective for the S&P500, NYSE Composite and DJIA than for the NASDAQ and AMEX Composite. In other words, once again, portfolios of small stocks were hedged less effectively than those comprising large stocks.

Junkus and Lee (1985) used daily spot and futures closing prices for the period 1982 to 1983 for three US indices: the S&P500, NYSE Composite and VLCI. They investigated the effectiveness of various hedging strategies, including the MVHR and the one-to-one hedge ratio. For each index, data for a month were used to compute the hedge ratio applied during that same month in hedging the spot value of the corresponding index. MVHRs were computed by regressing changes in the spot price on changes in the futures price. The average MVHR was 0.50, while the average effectiveness, measured as the variance of the un-hedged position minus the variance of the hedged position divided by the variance of the un-hedged position (HE), was 0.72 for the S&P500 and the NYSE Composite, and 0.52 for the VLCI. The effectiveness of the one-to-one hedge ratio was poor, leading to an increase in risk for the VLCI and the NYSE Composite and an effectiveness measure of only 0.23 for the S&P500. In other words, the MVHR was found to be most effective in reducing the risk of a cash portfolio comprising the index underlying the futures contract. There was little evidence of a relationship between contract maturity and effectiveness.

Peters (1986) examined the use of S&P500 futures to hedge three share portfolios: the NYSE Composite, the DJIA and the S&P500 itself. The MVHR and beta hedge strategies were applied to data for the period 1984 to 1985. For each of the portfolios, the MVHR gave a hedged position with lower risk than did the beta hedge.

Graham and Jennings (1987) were the first to examine hedging effectiveness for cash portfolios not matching an index. They classified US companies into nine categories according to their betas and dividend yields. For each beta-dividend yield category, ten equally weighted portfolios of ten shares each were constructed and weekly returns computed for 1982-83. They then investigated the performance of S&P500 futures in hedging these portfolios over periods of one, two and four weeks. Three alternative hedge ratios were used: one-to-one, beta and MVHR. The MVHR produced hedged positions with returns that were about 75% higher than for the other two hedge ratios. The measure of hedging effectiveness HE ranged from 0.16 to 0.33. For the one-week and two-week hedges, the MVHR hedge was more effective, that is, it had a higher HE value.

Morris (1989) investigated the performance of S&P500 futures in hedging the risk of a portfolio of the largest firms in the NYSE. The data was monthly from 1982 to 1987. The MVHR was estimated using data for the entire period, and gave a HE value of 0.91.

Lindahl (1992) investigated hedge duration and hedge expiration effects for the MMI and S&P500 futures contracts. The results showed that the MVHR increased towards unity as the hedging duration increased. For the S&P500, hedge ratios were found to be 0.927, 0.965 and 0.970 for one-, two- and four-week hedge durations respectively. It was concluded that the hedge ratio and hedging effectiveness increase as duration increases. Lindahl's examination of the hedge expiration effect is based on the fact that futures prices converge towards spot prices as expiration approaches. On this basis the MVHR can be expected to converge towards the naive hedge ratio if futures prices also exhibit less volatility when approaching expiration. It was concluded, however, that there was no obvious pattern in risk reduction in relation to time to expiration.

Unlike previous studies, which only investigated ex post hedging effectiveness, Holmes (1995) was the first in the UK to examine the hedging effectiveness of the FTSE-100 stock index futures contract using an ex ante minimum variance hedge ratio strategy. The cash portfolio being hedged mirrored the FTSE-100 stock index. Spot and futures data were collected for the period July 1984 to June 1992 for hedge durations of one and two weeks. The results again demonstrated the superiority of the MVHR over beta hedges and showed that the ex ante hedge strategy resulted in risk reduction of over 80%. Greater risk reduction was also shown to be achieved by estimating hedge ratios over longer periods.

Holmes (1996) examined ex post hedging effectiveness for the same data and return series used in the earlier study and showed that the standard OLS-estimated MVHR provided the most effective hedge when compared with the beta hedge strategy, the error correction method and GARCH estimation. The results also suggested an increase in hedging effectiveness with an increase in hedging duration. This can be explained by the fact that the variance of returns increases with duration, reducing the proportion of total risk accounted for by basis risk.

Butterworth and Holmes (2001) provided an unprecedented insight into the hedging effectiveness of investment trust companies (ITCs) using the FTSE Mid250 and FTSE100 stock index futures contracts, the former having been introduced in February 1994 with the aim of providing better hedging for small capitalization stocks. The analysis is based on daily and weekly hedge durations for the cash and futures return data of thirty-two ITCs and four indices for the period February 1994 to December 1996, with FTSE100 and FTSE Mid250 index futures used to hedge the cash positions. Apart from the well-established OLS approach, consideration is also given to the Least Trimmed Squares (LTS) approach, which trims the regression by excluding outliers. Four hedging strategies, namely the traditional hedge, beta hedge, minimum variance hedge and composite hedge, were compared on the basis of within-sample performance. The composite hedge ratio was generated by considering returns on a synthetic index future formed as a weighted average of returns on the FTSE100 and FTSE Mid250 contracts. The results demonstrated that the traditional and beta hedges performed worst. The MVHR strategy for daily and weekly hedges using the Mid250 contract outperformed the same strategy using the FTSE100 contract in terms of risk reduction for ITCs; however, the superiority of the Mid250 over the FTSE100 is significantly smaller for cash portfolios based on broad market indexes. The composite hedge strategy demonstrated only minor improvements over the results of the Mid250 contract, and the LTS approach produced results similar to OLS.

Seelajaroen (2000) investigated the effectiveness of the All Ordinaries Share Price Index (SPI) futures contract in reducing the price risk of an All Ordinaries Index (AOI) portfolio in the Australian financial market. Hedging effectiveness was investigated for one-, two- and four-week hedge durations. Hedge ratios were generated using Working's model and the minimum variance model, and their effectiveness was determined by comparison with the naive strategy. Data for the analysis consisted of daily closing prices of the SPI and AOI for the period January 1992 to July 1998. The minimum variance model was applied on both an ex post and an ex ante basis. The results demonstrated the superiority of both Working's model and the minimum variance model over the naive hedge strategy. Working's strategy was found to be more effective in the long run; in the short run, however, it is more sensitive to the basis level used in the decision rule. The minimum variance strategy was also found to be highly effective, as even the standard use of a hedge ratio derived from past data achieved risk reduction of almost 90%. Longer-duration hedges were found to be more viable than short-duration hedges, and the effect of time to expiration on the hedge ratio and hedging effectiveness was found to be ambiguous.

DATA & METHODOLOGY:

This paper examines the cross-hedging effectiveness of five of the world's most actively traded stock index futures in reducing the risk of the KSE100 index. The five stock index futures are the S&P500, NASDAQ100, FTSE100, HANG SENG and NIKKEI 225. All five index futures and the KSE100 are market-capitalisation-weighted arithmetic indices. The analysis is based on daily and weekly hedge durations, using spot and futures return data for the period from 1st January 2003 to 31st July 2008. Because of sample-size limitations, hedge durations of more than one week are not considered. Each daily return series consists of 1457 observations, of which the last 157 (from 1st January 2008 to 31st July 2008) are used to calculate out-of-sample (ex ante) hedging performance. Each weekly series consists of 292 observations, of which the last 31 (from 1st January 2008 to 31st July 2008) are used to measure ex ante hedging performance. The return series for each index is calculated as a logarithmic value change:

Rt = logVt – logVt-1 (2)

Where, Rt is the daily or weekly return on either the spot or futures position and Vt is the value of the index at time t.

Vt is the daily or weekly closing value of each of the six indices. All data were obtained from Datastream.
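As a simple illustration of equation (2), the return series can be computed from a vector of closing values along the following lines; the function and variable names are assumptions for illustration only:

    import numpy as np
    import pandas as pd

    def log_returns(closing_values):
        """Rt = log(Vt) - log(Vt-1) for a series of daily or weekly closing values."""
        v = pd.Series(closing_values, dtype=float)
        return np.log(v).diff().dropna()

    # e.g. spot_ret = log_returns(kse100_close); fut_ret = log_returns(sp500_fut_close)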

Two hedging strategies are considered: first, the MVHR; and second, an extension of the first strategy that applies the theory of co-integration, formally known as the Error Correction Model.

The MVHR is estimated by regressing spot returns (KSE100 in this case) on futures returns using historical information:

RSt = α + bRFt + et (3)

Where, RSt is the return on the KSE100 index in time period t; RFt is the return on the futures contract in time period t; et is the error term; and α and b are regression parameters.
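Equation (3) is estimated by OLS (the study uses EViews, as noted below). Purely as an illustration, an equivalent estimate could be obtained in Python with statsmodels as sketched here, assuming spot_ret and fut_ret are aligned NumPy arrays of KSE100 and futures returns:

    import statsmodels.api as sm

    def minimum_variance_hedge_ratio(spot_ret, fut_ret):
        """Regress spot returns on futures returns; the slope b is the MVHR h*."""
        X = sm.add_constant(fut_ret)            # adds the intercept term (alpha)
        results = sm.OLS(spot_ret, X).fit()
        return results.params[1], results       # slope coefficient b and the full fit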

The value of b obtained from running the above regression in EViews is the hedge ratio h* shown earlier in equation (1). This hedge ratio is then used to determine the risk reduction achieved. The effectiveness of the minimum variance hedge is determined by examining the percentage of risk reduced by the hedge (Ederington, 1979; Yang, 2001). Accordingly, hedging effectiveness is measured as the variance of the un-hedged position minus the variance of the hedged position, divided by the variance of the un-hedged position (Floros and Vougas, 2006).

Var(u) = σ²S (4)

Var(h) = σ²S + h²σ²F – 2hσS,F (5)

Hedging Effectiveness (HE) = (Var(u) – Var(h)) / Var(u) (6)

Where, Var(u) is the variance of the un-hedged position (KSE100); Var(h) is the variance of the hedged position; σS and σF are the standard deviations of spot (KSE100) and futures returns respectively; h is the value of the hedge ratio (b in equation 3); and σS,F is the covariance between spot and futures returns.
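Equations (4)-(6) translate directly into code. The following sketch (variable names assumed) computes the hedged-position variance and HE from the two return series and a given hedge ratio h:

    import numpy as np

    def hedging_effectiveness(spot_ret, fut_ret, h):
        """HE = (Var(u) - Var(h)) / Var(u), following equations (4)-(6)."""
        var_u = np.var(spot_ret, ddof=1)
        var_hedged = (var_u + h ** 2 * np.var(fut_ret, ddof=1)
                      - 2 * h * np.cov(spot_ret, fut_ret)[0, 1])
        return (var_u - var_hedged) / var_u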

The Error Correction Model (ECM) approach requires testing for co-integration. The return series are checked for co-integration by following the simple two-step approach suggested by Engle and Granger. Consider two time series Xt and Yt, both of which are integrated of order one (i.e. I(1)). Usually, any linear combination of Xt and Yt will also be I(1). However, if there exists a linear combination (Yt – γXt) which is I(0), then according to Engle and Granger, Xt and Yt are co-integrated, with co-integrating parameter γ.

Generally, if Xt is I(d) and Yt is I(d) but their linear combination (Yt – γXt) is I(d–b), where b > 0, then Xt and Yt are said to be co-integrated. Co-integration links the long-run relationship between integrated financial variables to a statistical model of those variables (XYZ, 200N).

In order to test for co-integration, it is essential to check that each series is I(1). The first step, therefore, is to determine the order of integration of each series, which is done by testing for a unit root using the Augmented Dickey-Fuller (ADF) test. A variable Xt is I(1) if it requires differencing once to make it stationary. The null of a unit root is rejected when the probability is less than the critical level of 5%. The following OLS regression is then estimated:

RSt = α + bRFt + et

Where, the variables are the same as in equation (3).

The empirical existence of co-integration is tested by constructing test statistics from the residuals of the above equation. If the two series are co-integrated, then et will be I(0). This is checked by testing the residuals for a unit root using the ADF test; the null of a unit root is rejected if the probability is less than 5%.
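The residual-based step can be sketched as follows; this follows the procedure exactly as described above (an OLS regression followed by an ADF test on its residuals) and is illustrative only, with assumed variable names. Note that statsmodels also provides statsmodels.tsa.stattools.coint, which applies the appropriate Engle-Granger critical values, and adfuller can likewise be applied to each return series first to confirm its order of integration.

    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    def engle_granger_residual_test(spot_ret, fut_ret, alpha=0.05):
        """Estimate RSt = a + b*RFt + et by OLS and apply an ADF test to the
        residuals; rejecting the unit-root null suggests co-integration."""
        X = sm.add_constant(fut_ret)
        resid = sm.OLS(spot_ret, X).fit().resid
        adf_stat, p_value = adfuller(resid)[:2]
        return resid, p_value < alpha           # True => residuals look I(0)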

Once it is established that the series are co-integrated, their dynamic structure can be exploited for further investigation in step two. Engle and Granger show that co-integration implies, and is implied by, the existence of an error correction representation of the series involved. The error correction model (ECM) captures both the short-run and the long-run information in modelling the data (XYZ, 200N). The relevant ECM to be estimated for generating the optimal hedge ratio is given by:

RSt = αet-1 + βRFt + Σ(i=1 to n) δiRFt-i + Σ(j=1 to m) θjRSt-j + ut (7)

Where, et-1 is the error correction term; n and m are large enough to make ut white noise; and β is the hedge ratio. The appropriate values of n and m are chosen by the Akaike information criterion (AIC) (Akaike, 1974).

In short, returns on the KSE100 are regressed on futures returns by OLS and the residuals are collected; the ECM with the appropriate lags is then estimated by OLS in the second stage.
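A sketch of this second stage, with the lag length chosen by minimum AIC as in equation (7), might look as follows. It is illustrative only: spot_ret, fut_ret and resid are assumed to be aligned NumPy arrays, with resid taken from the first-stage regression above, and a common lag length n = m is used for simplicity.

    import numpy as np
    import statsmodels.api as sm

    def estimate_ecm(spot_ret, fut_ret, resid, max_lag=5):
        """Estimate equation (7): RS_t = a*e_{t-1} + b*RF_t + sum_i d_i*RF_{t-i}
        + sum_j c_j*RS_{t-j} + u_t, choosing the lag length by minimum AIC.
        All candidate models use the same sample so the AICs are comparable.
        The coefficient on RF_t (params[1]) is the ECM hedge ratio."""
        start = max_lag
        y = spot_ret[start:]
        best = None
        for lag in range(1, max_lag + 1):
            cols = [resid[start - 1:-1], fut_ret[start:]]                 # e_{t-1}, RF_t
            cols += [fut_ret[start - i:-i] for i in range(1, lag + 1)]    # RF_{t-i}
            cols += [spot_ret[start - j:-j] for j in range(1, lag + 1)]   # RS_{t-j}
            fit = sm.OLS(y, np.column_stack(cols)).fit()
            if best is None or fit.aic < best.aic:
                best = fit
        return best    # hedge ratio = best.params[1]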

The next phase is to determine which of the two models, MVHR or ECM, used to obtain the hedge ratios b and β respectively, is superior. This is achieved by conducting a Wald coefficient test on model (7). If any of the lags in model (7) turn out to be significant, then the optimal hedge ratio obtained through model (7) will be superior to the hedge ratio obtained through model (3), signalling the superiority of the ECM over the MVHR. Significance is tested with the following hypotheses:

H0: C(1) = C(2) = … = C(i) = 0

H1: at least one C(i) ≠ 0

The null is rejected if the probability of the Chi-square statistic is less than the critical level of 5%.
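The joint significance of the lagged terms in model (7) can be tested with a Wald restriction on the fitted ECM. The sketch below reuses the estimate_ecm() fit from above; the helper name and the chi-square form of the test are assumptions for illustration.

    import numpy as np

    def wald_lag_test(ecm_fit, n_lagged, alpha=0.05):
        """Wald test that the coefficients on the n_lagged lagged regressors
        (the last columns of the ECM design matrix) are jointly zero."""
        k = len(ecm_fit.params)
        R = np.zeros((n_lagged, k))
        R[:, k - n_lagged:] = np.eye(n_lagged)      # restrict only the lagged coefficients
        test = ecm_fit.wald_test(R, use_f=False)    # chi-square form of the Wald test
        return float(test.pvalue), float(test.pvalue) < alpha   # True => reject H0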

Lastly, the superior hedge ratio will be used to determine ex ante performance. The hedging effectiveness of the superior hedge ratio will be based on the measure of risk reduction achieved through equation (6).

Importance of Strategic Readiness of Intangible Assets


In 2000, the market-to-book ratio, that is, the ratio of stock-market value to accounting value, of the largest 500 companies in the U.S. rose to 6.3. In simple terms, this means that for every six dollars of market value, only one dollar appeared on the balance sheet as a physical or financial asset. This large difference has been attributed to the rise in the value of intangible assets (Source: Getting a Grip on Intangible Assets, Harvard Management Update).

In the past decade, there has been increasing academic and corporate focus on intangible assets, offering clarity to business leaders on how to measure and manage these assets in the context of a business's strategic goals. On the regulatory front, the European Union is soon to introduce standards for reporting on intangible assets.

Our report aims to analyse one such academic framework, developed by Robert S. Kaplan and David P. Norton, which highlights the importance of the strategic readiness of intangible assets. The methodology of this conceptual framework is the creation of a strategy map on which intangible assets are mapped and measured.

Three key things emerge from the analysis of this work, entitled Measuring the Strategic Readiness of Intangible Assets and written for the Harvard Business Review in 2004:

1. Identification of the important intangible assets in a business organization.

2. Mapping these intangible assets to a business’s strategy.

3. Understanding the factors that enable these intangible assets to contribute to the success of the business.

Introduction

As the example at the beginning shows, in the 21st century's knowledge-driven, services-dominated economy it is intangible assets, rather than physical and financial assets, that play an increasingly important role in shaping a business's success. At the same time, management recognises the need to evaluate objectively the readiness of these intangible assets to enable a business to achieve its strategy.

For the benefit of analysis, we start by defining intangible assets as any nonphysical assets that can produce economic benefits. These cover intellectual capital, knowledge assets, human capital and organizational capital as well as more specific attributes like quality of corporate governance and customer loyalty. (Zadrozny, Wlodrek).

So what is required to map and manage these assets for the success of a business’s strategy?

Analysis of Situation

According to Kaplan and Norton, while developing the Balanced Scorecard (a concept for measuring a company's activities in terms of its vision and strategies, which helps give managers a comprehensive view of the performance of a business), they identified three major categories of intangible assets:

No.  Intangible Asset        Encompassing Elements
1    Human Capital           Skills; Training; Knowledge
2    Information Capital     Systems; Databases; Networks
3    Organization Capital    Culture; Leadership; Alignment; Teamwork

Further, in examining the critical success factors that transform a business organization into a performing, strategy-focused entity, the article discusses how these assets need to be mapped to the organization's strategy on a framework called the strategy map. Finally, it explains how quantitative values can be assigned that help an organization understand the readiness of these assets to enable it to achieve its strategy.

Discussions and Findings

As we discover, there are unique features of intangible assets that make their behaviour different from the physical and financial assets. These are:

1. Intangible assets mostly cannot create value for an organization on a standalone basis; they need to be combined with other assets. This limits a firm's ability to assign a value to these assets in isolation.

2. These assets rarely affect financial performance directly, unlike physical or financial assets which immediately start paying off. Intangible assets contribute indirectly through a chain of cause and effect. For example, the investment in training a team in total quality management may decrease defects and therefore may give rise to customer satisfaction and heighten positive brand perception.

3. While human capital and information capital are easier to map and manage, organizational capital is much more difficult.

4. Human capital may be measured by mapping jobs and identifying the strategic job families before focusing on getting these jobs ready for strategy implementation. Information capital may be developed by identifying and creating a portfolio of transactional, analytical and transformational computer applications, together with a sturdy network infrastructure, that gives the business an edge in the way it operates. One such example is the complete transformation of retail banking through the deployment of information systems that greatly empower the customer.

5. Organizational capital is the most challenging element to map and manage because of the complete behavioural change required in conducting business at all levels. Changing the base culture (the employees' shared attitudes and beliefs) and the climate (the shared perception of the organization's policies, procedures and practices) requires a grip on the deep-rooted socio-psychological dynamics at work within the organization. For example, changing the National Health Service (NHS) culture from budget-oriented operations to dynamic, business-plan-oriented operations focused on the health consumer is more challenging than mapping strategic jobs and putting state-of-the-art information capital in place. In bringing about organizational capital readiness, leadership plays a very important role, as do communication and knowledge-sharing.

6. Once these intangible assets are brought into a state of strategic readiness, they start contributing to cash generation for the business. For example, if McDonald's sets a service response time of 30 seconds and trains its human capital to achieve this target, customer turnover at the counter will increase and lead to higher revenues.

7. Finally, for these assets to reach a state of strategic readiness, they need to be aligned with the organization's strategy; misalignment can lead to chaos. For example, if McDonald's promises its customers 30-second service but fails to bring its human, information and organizational assets up to the required standard, there will be widespread dissonance amongst its customer base and a very high risk of erosion in brand value.

Marginal and Absorption Costing of Income Statements


This paper aims to look at how income statements are prepared using marginal and absorption costing. The absorption costing method charges all direct costs to product costs, as well as a share of indirect costs. The indirect costs are charged to products using a single overhead absorption rate, which is calculated by dividing the total cost centre overhead by the total volume of budgeted production (ACCA, 2006; Drury, 2006; Blocher et al., 2005). Under marginal costing, on the other hand, only variable costs are charged to cost units; fixed costs are written off to the profit and loss account as period costs (Drury, 2006; Blocher et al., 2005). Sections a) and b) below show the marginal and absorption costing income statements respectively for H Ltd, which manufactures and sells a single product, for the years ending 2006 and 2007. It is assumed that the company uses the first-in-first-out (FIFO) method for valuing inventories, that it employs a single overhead absorption rate each year based on budgeted units, and that actual units exactly equalled budgeted units in both years.
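To make the mechanics concrete, the short sketch below shows how a single overhead absorption rate feeds into unit product cost under the two methods. The figures are purely hypothetical for illustration and are not taken from H Ltd's statements.

    # Hypothetical figures, for illustration only (not H Ltd's data)
    budgeted_fixed_overhead = 700_000     # total cost centre fixed production overhead (pounds)
    budgeted_production_units = 3_500     # budgeted production volume (units)
    variable_cost_per_unit = 400          # direct materials, labour and variable overhead

    absorption_rate = budgeted_fixed_overhead / budgeted_production_units   # overhead per unit
    marginal_unit_cost = variable_cost_per_unit                             # marginal costing
    absorption_unit_cost = variable_cost_per_unit + absorption_rate         # absorption costing

    print(absorption_rate, marginal_unit_cost, absorption_unit_cost)        # 200.0 400 600.0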

Marginal Costing
H Ltd Income Statement (Marginal Costing)

                                             2006       2007
                                            £'000      £'000
Sales Revenue                                3000       3600
Cost of Sales:
  Opening Stock                                 0        400
  Production cost (W1, W2)                    700        500
  Variable Marketing and Admin               1000       1200
  Cost of Goods available for sale           1700       2100
  Less: Ending inventory (W3, W4)             200        100
                                             1500       2000
Contribution Margin                          1500       1600
Less: Fixed costs
  Marketing and Admin                         400        400
  Production overheads                        700        700
                                             1100       1100
Operating profit                              400        500

Absorption Costing

H Ltd Income Statement (Absorption Costing)

                                             2006       2007
                                            £'000      £'000
Sales                                        3000       3600
Cost of Sales:
  Beginning Inventory                           0        400
  Production Cost (W5, W6)                   1400       1200
  Less: Ending Inventory (W7, W8)             400        240
                                             1000       1360
Gross Profit                                 2000       2240
Marketing and Admin Expenses:
  Fixed                                       400        400
  Variable                                   1000       1200
                                             1400       1600
Operating profit                              600        640

Reconciliation of Net Income under Absorption and Marginal Costing

                                                       2006       2007
                                                      £'000      £'000
Absorption operating profit                             600        640
Less: Fixed overhead cost in ending inventory (W9)      200        140
Marginal Costing net income                             400        500

Under marginal costing, inventory of finished goods, as well as work in progress, is valued at variable cost only. By contrast, absorption costing values stocks of finished goods and work in progress at variable cost plus an absorbed amount for fixed production overheads (ACCA, 2006; Lucy, 2002). In the case of H Ltd, under marginal costing only variable costs are included in the ending inventory figure, which results in a profit of £400,000. Absorption costing, on the other hand, includes an additional £200,000 of fixed overhead in the ending inventory for 2006, so the absorption operating profit is higher by £200,000 in 2006. In like manner, the profit under absorption costing is higher by £140,000 in 2007 owing to the inclusion of £140,000 of fixed overhead cost in that year's ending inventory. To reconcile the profits under absorption costing and marginal costing, we may either subtract the fixed overhead included in ending inventory from the absorption costing operating profit to arrive at the marginal costing operating profit, or add the fixed overhead in ending inventory to the marginal costing operating profit to arrive at the absorption costing operating profit.
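The reconciliation arithmetic can be checked with a few lines of code using the figures from the statements above (a minimal sketch; the dictionary layout is an assumption):

    # Figures in thousands of pounds, taken from the statements above
    absorption_profit = {2006: 600, 2007: 640}
    fixed_overhead_in_ending_inventory = {2006: 200, 2007: 140}

    marginal_profit = {year: absorption_profit[year] - fixed_overhead_in_ending_inventory[year]
                       for year in absorption_profit}
    print(marginal_profit)   # {2006: 400, 2007: 500}, matching the marginal costing statements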

Stock Build-ups

Stock build-ups may result from using absorption costing for performance measurement purposes because inventory is valued at both fixed and variable costs. Firstly, profit is overstated: absorption costing enables income manipulation because, when inventory increases, fixed costs of the current year can be deferred to later years, so current net income is overstated. This produces financial statements that do not present fairly and thereby affects the decisions of their users. Secondly, maintaining high levels of inventory may result in obsolescence and hence declines in future profitability arising from the loss in value of the inventory (Blocher et al., 2005; Storey, 2002).

Advantages of Absorption Costing and Marginal Costing

According to ACCA (2006) the following arguments have been advanced for using absorption costing:

It is necessary to include fixed overhead in stock values for financial statements. This is because routine cost accounting using absorption costing produces stock values which include a share of fixed overhead. Based on this argument, financial statements prepared using absorption costing present a true and faithful representation of the actual results of operation of the company.
For a small jobbing business, overhead allotment is the only practicable way of obtaining job costs for estimating and profit analysis.
Analysis of under/over-absorbed overhead is useful to identify inefficient utilisation of production resources.

ACCA (2006) also identifies a number of arguments in favour of marginal costing. Preparation of routine cost accounting statements using marginal costing is considered more informative to management for the following reasons:

Contribution per unit represents a direct measure of how profit and volume relate; profit per unit is a misleading figure.
The build-up or run-down of stocks of finished goods can distort comparisons of operating profit statements. Under marginal costing, closing inventory is valued only at variable cost per unit, so reported profit is the same whether or not there is closing inventory, which enables operating profit statements to be compared over time.
Unlike absorption costing, marginal costing avoids the arbitrary apportionment of fixed costs, which can result in misleading product cost comparisons.
Bibliography
ACCA (2006). Paper 2.4 Financial Management and Control: Study Text 2006/2007. www.kaplanfoulslynch.com
Blocher, E., Chen, K., Cokins, G. and Lin, T. (2005). Cost Management: A Strategic Emphasis. 3rd Edition. McGraw-Hill.
Drury, C. (2004). Management and Cost Accounting. 6th Edition. Thomson Learning, London.
Lucy, T. (2002). Costing. 6th Edition. Continuum.
Storey, P. (2002). Introduction to Cost and Management Accounting. Palgrave Macmillan.