What is Meant by Market Efficiency?

Market efficiency has been a central topic of interest and debate amongst financial economists for more than five decades. Indeed, two of the recipients of the Nobel Memorial Prize in Economic Sciences in 2013, Eugene Fama and Robert Shiller, have debated the efficiency of markets since the 1980s. Concerns about market efficiency were catapulted to prominence most recently by the financial crisis of 2007-8. Efficient capital markets are foundational to economic theories that posit the allocative efficiency of free markets, which requires informationally efficient capital markets, such as those for equity and fixed income trading. An extended line of research has uncovered evidence of various anomalies which seem to challenge notions of market efficiency, and has also attempted to explain the causes of one such anomaly, the so-called “size effect.” Though there appears to be substantial evidence that the size effect is real and persistent, there is no substantial evidence that it violates the efficient market hypothesis.

“In an allocationally efficient market, scarce savings are optimally allocated to productive investments in a way that benefits everyone” (Copeland, et al., 2005, p. 353). To provide optimal investment allocation, capital prices must provide market participants with accurate signals, and therefore prices must fully and instantaneously reflect all available relevant information (Copeland, et al., 2005). In advanced economies, secondary stock markets play an indirect role in capital allocation by revealing investment opportunities and information about managers’ past investment decisions (Dow & Gorton, 1997). For secondary stock markets, and other formal capital markets, to efficiently and effectively fulfil these two roles, securities prices must “be good indicators of value” (Fama, 1976, p. 133). Therefore, allocative market efficiency requires capital market prices to be informationally efficient.

Informational efficiency implies no-arbitrage pricing of tradeable securities and entails several defining characteristics that form the basis of the efficient market hypothesis. Generally, “A market is efficient with respect to information set θ_t if it is impossible to make economic profits by trading on the basis of information set θ_t” (Jensen, 1978, p. 98), where economic profits are defined as risk-adjusted returns minus trading and other costs. If security prices reflect all available relevant information, such as P/E ratios and past return variances, then it would be impossible to use such information to profitably trade these securities. Therefore tests of the possibility of using publicly available information to earn economic profits constitute tests of informational efficiency.

Tests of informational market efficiency generally take three forms, and comprise the elements of the efficient market hypothesis. Fama (1970) defined the three forms of market efficiency as the weak, semi-strong and strong form, with each form characterised by the nature of the information central to its application. Weak form efficiency tests are tests of the viability of using the past price history of the market to predict future returns (predictability being a necessary, but not sufficient, condition for trading for economic profits). The semi-strong form of the efficient market hypothesis tests whether all publicly available information could be used by traders to earn economic profits. And finally, the strong form of market efficiency tests the viability of using all information, public as well as private, to generate economic profits. In the literature and amongst practitioners, it is the semi-strong form which “represents the accepted paradigm and is what is generally meant by unqualified references in the literature to the ‘Efficient Market Hypothesis’” (Jensen, 1978, p. 99). And though some references to ‘market efficiency’ allude to the allocative efficiency of markets, the term market efficiency usually refers to informational efficiency as operationally defined by Fama’s efficient market hypothesis, specifically the semi-strong formulation.
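To make the weak form concrete, the sketch below illustrates one common style of weak-form test: checking whether past returns carry statistically detectable information about future returns. It is a minimal illustration, assuming a hypothetical price file; a small p-value would indicate predictability, although, as discussed later, statistical predictability alone does not establish economic profits once costs are considered.

```python
# Minimal weak-form test sketch: under weak-form efficiency, return
# autocorrelations should be statistically indistinguishable from zero.
# 'prices.csv' with columns 'date' and 'close' is a hypothetical input.
import pandas as pd
from statsmodels.stats.diagnostic import acorr_ljungbox

prices = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")
returns = prices["close"].pct_change().dropna()

# Ljung-Box test of the joint significance of the first 10 autocorrelations.
print(acorr_ljungbox(returns, lags=[10]))  # small p-value => predictability
```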

Since its formulation in the late 1960s, researchers have conducted thousands of tests of the efficient market hypothesis and have found various anomalies, such as the size effect, which appear to violate market efficiency. Banz (1981) examined NYSE-listed common stock returns between 1936 and 1975 and found that stocks with the smallest market capitalisation earned a risk-adjusted return 0.40% per month higher than the remaining firms in his sample, the first evidence that the ‘size effect’ posed a challenge to semi-strong form efficiency. Analysing a sample of 566 NYSE and AMEX stocks over the 1963-1977 period, Reinganum (1981) found that portfolios constructed based on size exhibited predictability of future returns, with the smallest decile portfolio outperforming the largest by 1.77% per month. Keim (1983), testing NYSE and AMEX stocks over the 1963-1979 period, reported a size premium of approximately 2.5% per month. Lamoureux & Sanger (1989) found a size premium for NASDAQ stocks (2.0% per month) and for NYSE/AMEX stocks (1.7% per month) over the 1973 to 1985 period. Fama & French (1992, p. 438) concluded, “The size effect (smaller stocks have higher average returns) is robust in the 1963-1990 returns on NYSE, AMEX, and NASDAQ stocks.” As evidence continued to mount for a size effect, whereby the average stock returns of firms with small market capitalisation were significantly higher than the average returns of large capitalisation firms, Fama and French’s paper ushered in decades of research into explanations for the size effect and its possible implications.
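The size-effect evidence above rests on a simple portfolio-sort methodology. The following sketch shows a Banz-style decile sort; the data file and column names ('month', 'mktcap', 'ret') are illustrative assumptions rather than the original studies' data.

```python
# Size-effect sort sketch: each month, rank stocks into market-cap deciles
# and compare the smallest decile's return with the largest's.
import pandas as pd

panel = pd.read_csv("monthly_stock_panel.csv")  # hypothetical data file

# Decile 1 = smallest market cap within each month, decile 10 = largest.
panel["decile"] = panel.groupby("month")["mktcap"].transform(
    lambda caps: pd.qcut(caps, 10, labels=False) + 1
)

# Equal-weighted mean return per decile, then the small-minus-big spread.
decile_ret = panel.groupby(["month", "decile"])["ret"].mean().unstack()
print((decile_ret[1] - decile_ret[10]).mean())  # studies report ~0.4-2.5%/month
```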

Over the years researchers have offered a variety of empirical explanations, some of them mutually exclusive, for the size effect. Robert Merton (1987) argued that smaller firms have smaller investor bases and are less likely than larger firms to enjoy an institutional following amongst investors, making smaller firms less liquid and more cheaply priced, which results in greater risk-adjusted returns. Chan & Chen (1991) asserted that smaller firms are more likely than large firms to be either distressed or only marginally profitable, and therefore small firms’ prices are more responsive to changing business conditions, which drives the size effect. Fama & French (1993, p. 5) formed 25 portfolios of securities based on size and book-to-market and found that these “portfolios constructed to mimic risk factors related to size and BE/ME capture strong common variation in returns, no matter what else is in the time-series regressions. This is evidence that size and book-to-market equity indeed proxy for sensitivity to common risk factors in stock returns.” Supporting their argument that the size effect proxies for common risk factors, Fama & French (1995) found evidence that firm size loads profitability risk into the cross-section of stock returns. These, and other, empirical findings shed light on possible reasons for the size effect, but a consensus never developed around a single cause.

In contrast to the empirical and economic explanations for the size effect, some researchers questioned whether the size effect existed at all. Shumway & Warther (1999) argued that the small firm effect is essentially a statistical illusion, related not to actual share prices but to market microstructure issues which inhibit proper measurement of price movements. They examined prices of NASDAQ-listed firms from 1972 to 1995, a period previous research had associated with a significant size effect, and found that after correcting for delisting bias (by accounting for delisted firms’ final price movements before removal from the sample), the size effect disappeared completely. Wang (2000) argued along similar lines, contending the size effect resulted from survivorship bias. He argued that small stocks are relatively more volatile and therefore more likely than large firms to be delisted due to bankruptcy or failure to meet listing requirements. These delisted stocks are often excluded from the samples studied for the size effect, which would bias the returns of small stocks upwards. Wang (2000) used simulation experiments to test for the likelihood of the small firm effect under such circumstances and concluded that the effect was spurious. Examining all of the above explanations and others, van Dijk (2011, p. 3272) concludes, “The empirical evidence for the size effect is consistent at first sight, but fragile at closer inspection. I believe that more empirical research is needed to establish the validity of the size effect.” Though the causes of the size effect are interesting and remain an important topic of debate, more important are the possible implications of the size anomaly for the efficient market hypothesis.

The size anomaly appears to present a violation of efficient markets, especially to those observers who wrongly presume that market efficiency implies stock prices must follow a random walk; however, no researcher has yet shown that information related to firm size can be leveraged by traders to earn economic profits. Recalling Jensen’s (1978) definition of informational efficiency, the size effect violates market efficiency only if such information could be used to generate risk-adjusted abnormal returns. Though the size effect may indicate that stock returns are predictable, “if transaction costs are very high, predictability is no longer ruled out by arbitrage, since it would be too expensive to take advantage of even a large, predictable component in returns” (Timmermann & Granger, 2004, p. 19). Therefore return predictability invalidates market efficiency only when it produces risk-adjusted returns that exceed transaction costs. Stoll and Whaley (1983), who test whether the size anomaly can be exploited to earn risk-adjusted returns greater than transaction costs, find that it is not possible for the sample of NYSE-listed firms examined over the 1960 to 1979 period. This is due in part to the relative insignificance of small firms in relation to the market as a whole. As noted by Fama (1991, p. 1589), “the bottom quintile [of stocks] is only 1.5% of the combined value of NYSE, AMEX, and NASDAQ stocks. In contrast, the largest quintile has 389 stocks (7.6% of the total), but it is 77.2% of market wealth.” So, even if the size effect is granted perfect validity, it does not necessarily negate the efficient market hypothesis.
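Jensen’s criterion reduces to a back-of-the-envelope check: does the anomaly’s risk-adjusted return survive the cost of trading it? The figures in the sketch below are illustrative assumptions, not estimates from any of the papers cited.

```python
# Economic-profit check sketch: an anomaly violates efficiency only if its
# risk-adjusted return exceeds the cost of exploiting it (Jensen, 1978).
def economic_profit(monthly_alpha, one_way_cost, monthly_turnover):
    """Monthly risk-adjusted return net of round-trip transaction costs."""
    trading_cost = 2 * one_way_cost * monthly_turnover  # buy leg + sell leg
    return monthly_alpha - trading_cost

# E.g. a 0.40% monthly size premium, 1% one-way costs (plausible for small,
# illiquid stocks) and 25% monthly portfolio turnover:
print(economic_profit(0.0040, 0.01, 0.25))  # -0.001: the premium is consumed
```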

A final set of reasons ameliorating concerns about the size effect’s threat to market efficiency relates to model specification. Abstracting from the specific arguments related to size effects, consideration of the joint hypothesis problem dampens concerns that size effects could be shown to violate market efficiency. Roll (1977) noted that any test of market efficiency that relies on a pricing model is necessarily also a test of the specification of that market model (specifically, of the validity of the market proxy). Violations seemingly attributable to the size effect, or any other apparent anomaly, can therefore always be attributed to misspecification of the market model or mismeasurement of the market proxy, making it impossible to definitively infer anomalous behaviour as evidence of market inefficiency. Additionally, as Fama (1991, pp. 1588-9) pointed out, “small-stock returns…are sensitive to small changes (imposed by rational trading) in the way small-stock portfolios are defined. This suggests that, until we know more about the pricing (and economic fundamentals) of small stocks, inferences should be cautious for the many anomalies where small stocks play a large role…”. Therefore, though there seems to be robust evidence for a size effect, transaction costs overwhelm risk-adjusted returns, and model specification concerns generally blunt notions that size effects can be shown to disprove market efficiency.
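The joint hypothesis problem can be seen directly in how an ‘abnormal’ return is measured. In the sketch below, which uses simulated data, the estimated alpha is only defined relative to the chosen market model and market proxy; swap the proxy and the measured anomaly can change.

```python
# Joint-hypothesis sketch: alpha is estimated *relative to* a market model
# and a market proxy, so a nonzero alpha may reflect a misspecified model
# or mismeasured proxy rather than inefficiency. Data here are simulated.
import numpy as np
import statsmodels.api as sm

def jensens_alpha(portfolio_excess, market_excess):
    """OLS intercept of portfolio excess returns on a chosen market proxy."""
    fit = sm.OLS(portfolio_excess, sm.add_constant(market_excess)).fit()
    return fit.params[0]

rng = np.random.default_rng(0)
market = rng.normal(0.005, 0.04, 240)                        # 20 years, monthly
portfolio = 0.002 + 1.2 * market + rng.normal(0, 0.02, 240)
print(jensens_alpha(portfolio, market))  # changes if the proxy changes
```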

The global financial crisis of 2007-8 renewed prominent calls for dispensing with the notion of efficient markets, as the allocative efficiency of markets seemed in doubt after so much capital appeared to be wasted on ill-advised investments. But efficient market allocation of investments relies not on ex post views of past downturns, but on ex ante decisions about future investment opportunities. Efficient markets imply that all relevant information is impounded in current asset prices, maximising market participants’ ability to allocate investment, which necessarily implies that the future is unpredictable; the inability to forecast the financial crisis is exactly what the theory predicts. Alternatively, a long line of research has examined the possibility that anomalies, such as the size effect, disprove market efficiency. The size effect, however, though an interesting puzzle regarding the cross-section of stock returns, does not disprove market efficiency.

References
Banz, R., 1981. The relationship between return and market value of common stocks. Journal of Financial Economics, 9(1), pp. 3-18.
Chan, K. & Chen, N., 1991. Structural and Return Characteristics of Small and Large Firms. The Journal of Finance, 46(4), pp. 1467-84.
Copeland, T., Watson, J. & Shastri, K., 2005. Financial Theory and Corporate Policy. Fourth ed. London: Pearson.
Dijk, M. A. v., 2011. Is size dead? A review of the size effect in equity returns. Journal of Banking & Finance, 35(12), pp. 3263-74.
Dow, J. & Gorton, G., 1997. Stock Market Efficiency and Economic Efficiency: Is There a Connection?. The Journal of Finance, 52(3), pp. 1087-1129.
Fama, E., 1970. Efficient Capital Markets: A Review of Theory and Empirical Work. The Journal of Finance, 25(2), pp. 383-417.
Fama, E., 1976. Foundations of Finance. New York: Basic Books.
Fama, E., 1991. Efficient Capital Markets: II. The Journal of Finance, 46(5), pp. 1575-1617.
Fama, E. F. & French, K. R., 1992. The Cross-Section of Expected Stock Returns. The Journal of Finance, 47(2), pp. 427-465.
Fama, E. F. & French, K. R., 1993. Common risk factors in the returns on stocks and Bonds. Journal of Financial Economics, 33(1), pp. 3-65.
Fama, E. F. & French, K. R., 1995. Size and Book-to-Market Factors in Earnings and Returns. The Journal of Finance, 50(1), pp. 131-155.
Jensen, M., 1978. Some Anomalous Evidence Regarding Market Efficiency. Journal of Financial Economics, 6(2/3), pp. 95-101.
Keim, D. B., 1983. Size-related anomalies and stock return seasonality: Further empirical evidence. Journal of Financial Economics, 12(1), pp. 13-32.
Lamoureux, C. G. & Sanger, G. C., 1989. Firm Size and Turn-of-the-Year Effects in the OTC/NASDAQ Market. The Journal of Finance, 44(5), pp. 1219-1245.
Merton, R., 1987. A Simple Model of Capital Market Equilibrium with Incomplete Information. The Journal of Finance, 42(3), pp. 483-510.
Reinganum, M. R., 1981. Misspecification of capital asset pricing: Empirical anomalies based on earnings’ yields and market values. Journal of Financial Economics, 9(1), pp. 19-46.
Roll, R., 1977. A Critique of the Asset Pricing Theory’s Tests Part I: On Past and Potential Testability of the Theory. Journal of Financial Economics, 4(2), pp. 129-176.
Shumway, T. & Warther, V. A., 1999. The Delisting Bias in CRSP’s Nasdaq Data and Its Implications for the Size Effect. The Journal of Finance, 54(6), pp. 2361-79.
Stoll, H. R. & Whaley, R. E., 1983. Transaction costs and the small firm effect. Journal of Financial Economics, 12(1), pp. 57-79.
Timmermann, A. & Granger, C. W., 2004. Efficient market hypothesis and forecasting. International Journal of Forecasting, 20 (1), pp. 15-27.
Wang, X., 2000. Size effect, book-to-market effect, and survival. Journal of Multinational Financial Management, 10(3-4), pp. 257-73.

Mergers and Acquisitions Essay

Mergers and Acquisitions (M&A) occur when two or more organisations join together all or part of their operations (Coyle, 2000). Strictly defined, a corporate takeover refers to one business acquiring another by taking ownership of a controlling stake of another business, or taking over a business operation and its assets (Coyle, 2000). Corporate takeovers have been occurring for many decades, and have historically occurred on a cyclical basis, increasing and decreasing in volume in what have been termed merger waves since the late 1800s (Sudarsanam, 2010). There can be a number of distinct motives for corporate M&A, and this short essay will discuss a number of these, drawing on economic and financial theory as well as empirical evidence to explain their rationale.

The first group of motives to be discussed are those that relate to, and can be explained by, the classical approach to financial theory (Icke, 2014). These motives assume that firms do not make mistakes and acquire other companies because they believe that doing so will result in increased profitability (Baker & Nofsinger, 2010), as acquisitions allow for the achievement of enhanced economies of scale or scope (Lipczynski et al., 2009). This theoretical perspective can be used to explain a number of motives.

First, corporate takeovers can be used as a route to achieving geographic expansion. By acquiring another company in a different country or with more geographically-diverse operations, an acquiring company can expand its markets and thus expand its sales opportunities. The larger business post-acquisition can then, if implemented efficiently, benefit from economies of scale associated with reducing unit input costs, ultimately increasing profitability.

A second reason for completing a takeover could be to increase market share within a market in which the firm already operates. This can increase profits by allowing for greater economies of scale through decreasing unit costs, and can also increase profitability by reducing the number of competitors in the market.

Thirdly, acquiring businesses at different stages of the supply chain, known as vertical integration (Icke, 2014), can allow for enhanced profitability, as it can create additional value in the supply chain and give the acquirer control and scale benefits over inputs to production and the overall cost of output.

Other motives for corporate takeovers can be categorised as being more consistent with the behavioural school of thought, which considers that M&A is driven by factors other than pure profit maximisation (Icke, 2014; Martynova and Renneboog, 2008). There are a number of reasons why M&A may take place where the opportunity to benefit from scale economies is not the key driver. First, a company may engage in an acquisition in a bid to increase its size to prevent bids from other companies. This is consistent with the concept of ‘eat or be eaten’ (Gorton et al., 2005), which hypothesises that during waves of M&A activity, firms feel vulnerable to takeover bids and as such feel compelled to engage in their own M&A activity in order to increase their size and minimise interest from potential bidders.

A second motive for M&A that relates more to the behavioural school (but does possess some economic basis) is opportunistic M&A activity, whereby management takes advantage of a relative increase in the value of its stock to acquire a target in an equity-funded acquisition. In this case, it is the perceived opportunity to buy another company ‘cheaply’ that drives the acquisition, rather than the profit motive, all other variables held equal.

What empirical evidence do we have in regard to value creation following a takeover for:
the bidder firm’s shareholders
the acquired firm’s shareholders

Mergers and Acquisitions (M&A) occur when two or more organisations join together all or part of their operations (Coyle, 2000). A number of empirical studies have been performed in order to ascertain the extent of value creation following a takeover for both the bidder firm and the acquired firm. Shareholders of the acquired firm have consistently experienced positive value impacts following completion of a takeover (Icke, 2014; Martynova and Renneboog, 2008), while evidence of value creation for the acquirer has been inconsistent and is broadly considered to be inconclusive (Angwin, 2007). This essay will discuss the empirical evidence of the value impact of corporate takeovers for both parties, looking at a broad range of evidence spanning the period from announcement to the fiscal years following completion of a takeover. The essay will briefly discuss the limitations of the evidence, given the highly differentiated nature of the M&A landscape and the presence of significant independent variables. It will then evaluate the results before arguing that, for the bidder firm’s shareholders, evidence of value creation is broadly inconclusive, and that any value creation that is witnessed appears to differ depending on the type of and motives for the acquisition, as well as when it takes place. It will argue that, consistent with the majority of empirical studies, value creation for the acquired company’s shareholders is positive (Martynova and Renneboog, 2008).

The value creation experienced by shareholders in the bidder firm following a takeover can be considered both at announcement and in the years following completion and integration of the businesses. Value impacts at announcement are most visible in share price movements, while performance-based metrics, such as profitability, can be used to assess value impacts following completion (Icke, 2014).
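Announcement effects of this kind are typically measured with a short-term event study: fit a market model over a pre-event estimation window, then cumulate abnormal returns (actual minus model-expected) over a window around the announcement. The sketch below is a minimal illustration, with the window choices and input series as assumptions.

```python
# Event-study sketch: cumulative abnormal return (CAR) around an
# announcement, using a market model fitted on a pre-event window.
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, est_window, event_window):
    """CAR over event_window; beta and alpha come from OLS on est_window."""
    beta, alpha = np.polyfit(market_ret[est_window], stock_ret[est_window], 1)
    abnormal = stock_ret[event_window] - (alpha + beta * market_ret[event_window])
    return abnormal.sum()

# E.g. days 0-199 as the estimation window and days 200-210 around the
# announcement (all illustrative choices):
# car = cumulative_abnormal_return(r_stock, r_mkt, slice(0, 200), slice(200, 211))
```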

First looking at the empirical evidence that supports positive value creation for acquiring shareholders, it is clear that a number of studies demonstrate the positive value-creating effects of a cross-section of transaction types. Looking at the US and Europe, Martynova and Renneboog (2008) measure the value impacts of takeovers by studying a century of historical M&A transactions. The evidence indicates that in the case of European cross-border transactions, value is created in terms of post-acquisition performance. Looking at developing countries, Kumar (2009) finds that acquirer shareholders in developing economies tend to experience better returns in both the short and long term following an acquisition than in developed economies. Gugler et al. (2003) look specifically at the impact of a takeover on sales and profitability and find that acquisitions have a statistically significant impact on the profit of the acquiring company. Chari et al. (2010) look at cross-border transactions and provide evidence that acquirers experience improved post-merger performance, but that this is dependent on having intangible asset advantages that can be exploited abroad. Villalonga (2004) studied diversification takeovers by reviewing the share price performance of diversified conglomerates versus non-diversified trading peers in the years following the transaction. The evidence reveals that diversified firms actually trade at a large and significant premium to their peers, suggesting that this type of acquisition can drive long-term value gains for shareholders in the post-acquisition entity. Draper and Paudyal (2006) studied the value-creating impacts of private versus public takeovers and found that value creation for the acquiring company when the target is private is broadly positive. An empirical study by Icke (2014) examines European and US M&A transactions by motive for takeover and finds that, in terms of announcement effects on share price, transactions driven by an increase in market share, research & development synergies and vertical integration are rewarding for the acquiring company. In terms of longer-term gains, Icke (2014) shows that transactions driven by increases in market share, geographic expansion, vertical integration and diversification all have a positive effect.

In contrast, a wide body of empirical research covers a range of M&A situations in which value is in fact destroyed for the acquiring company’s shareholders, both in terms of share price at announcement and in terms of post-integration performance. In a study that considers a broad range of takeover motivations, Walker (2000) finds that acquiring companies experience overall negative value impacts, and that the anomalous cases in which acquirers actually gain in the longer term are so infrequent as to be statistically insignificant. When Martynova and Renneboog (2008) study US transactions aimed at achieving diversification, the evidence indicates that post-acquisition value is destroyed for acquiring shareholders and that wealth effects at announcement for acquirers are inconclusive. Powell and Stark (2005) find that post-acquisition performance, in terms of sales impact, is actually positive for the acquirer; however, when this is controlled for extra working capital, the effect is inconclusive and likely a net negative. Looking at vertical integration takeovers, both Kedia et al. (2011) and Walker (2000) find that, in the case of US transactions, takeovers result in value destruction for acquiring company shareholders. Icke (2014) also found that R&D-driven takeovers have a negative effect.

The empirical evidence in relation to target firms’ shareholder value creation is significantly more conclusive across the spectrum of types of M&A. Empirical studies of value creation for the owners of target companies primarily look at shareholder value at announcement (Icke, 2014), in the form of share price rises and the premiums acquirers pay. Martynova and Renneboog (2008) find that targets gain value from the announcement of a takeover, and furthermore find that this gain is consistent across merger cycles, regardless of whether the takeover occurs during the peak or the trough of the merger waves witnessed throughout the past century. Their study of US takeovers demonstrates that the value creation is significant in size, often reaching double-digit growth on the value prior to announcement. In a study of hostile versus friendly takeovers, Schwert (1996) found that target shareholders experience significant gains from a takeover that has come about as a result of a tender process, rather than a hostile single-party bidding round, although the results were broadly positive for target shareholders across both types. Likewise, studying the method of payment and its impact on value creation for target shareholders, Goergen and Renneboog (2004) found that all-cash offers trigger abnormal returns (ARs) of almost 10 percent upon announcement, whereas all-equity bids or offers combining cash, equity and loan notes generated a return of only 6 percent, but still resulted in positive value creation for the target company.

Empirical studies have also been conducted on transaction data organised around the concept of merger waves: that is, looking at transactions not as isolated occurrences but as events that have taken place within one of the six identified waves of M&A activity since the late 1800s (Sudarsanam, 2010). By looking at takeovers from the perspective of when they occurred, it is possible to identify more consistent patterns in value creation and to derive theories attributing these gains. Icke (2014) reviews a number of studies and finds that value creation for shareholders in both the target and the acquiring company varies depending on the wave in which the transaction occurs, other variables held constant. Icke (2014) shows that the third wave generated largely positive returns for parties engaged in takeovers, while the fourth was broadly negative and the value impacts were indistinguishable during the fifth. This evidence of environment-sensitivity adds further complexity to the evidence surrounding value creation in takeovers.

Overall there is a wealth of empirical evidence on the value impacts of corporate takeovers; however, the evidence is broadly inconclusive in determining the value-creating opportunities for acquirers, while it is broadly conclusive that target company shareholders gain (Martynova and Renneboog, 2008). The inconclusiveness is caused by methodological inconsistencies arising from mixed methods, the difficulty of capturing operational change, differing time periods and sample size distortions (Icke, 2014), as well as the vastly differentiated base of empirical evidence that exists, as discussed in this essay. As Icke (2014) states, the value effects of takeovers are, ultimately, non-conclusive. However, based on the empirical evidence discussed in this essay and drawing on Wang and Moini (2012), the general conclusion can be seen to be that in short-term event studies (addressing the impacts post-announcement) acquirers will experience either normal returns or significant losses, while target firms have been shown to consistently experience positive value creation in the same timeframe. Post-acquisition performance is extremely difficult to measure and the evidence has been mixed. Furthermore, as Angwin (2007) argues, strategic motivations are essential for understanding post-takeover performance and for measuring the isolated effects of the takeover.

In conclusion, there exists a diverse body of empirical evidence on the value-creating effects of takeovers for both target and acquirer shareholders. For target shareholders, studies focus on announcement effects and are broadly positive, while for acquirer shareholders, studies look at both announcement and post-transaction performance and show a broadly negative value impact, with some evidence of positive value creation in certain types of M&A scenario and during certain periods (waves) in history.

Bibliography
Angwin, D (2007). Motive Archetypes in Mergers and Acquisitions (M&A): The implications of a Configurational Approach to Performance. Advances in Mergers and Acquisitions. 6. pp77-105.
Baker, KH and Nofsinger, JR. (2010). Behavioral Finance: Investors, Corporations and Markets. Hoboken, NJ: John Wiley & Sons Inc.
Chari, A., Ouimet, P.P. and Tesar, L.L. (2010). The value of control in emerging markets. Review of Financial Studies. 23(4). pp1741-1770.
Coyle, B (2000). Mergers and Acquisitions. Kent: Global Professional Publishing.
Draper P. and Paudyal K. (2006). Acquisitions: Private versus Public. European Financial Management. 12(1). pp57-80.
Goergen, M and Renneboog, L (2004). Shareholder Wealth Effects of European Domestic and Cross-Border Takeover Bids. European Financial Management. 10(1). pp9-45.
Gorton, G., Kahl, M. and Rosen, R. (2005). Eat or Be Eaten: A Theory of Mergers and Merger Waves. NBER Working Paper Series. Cambridge, MA: National Bureau of Economic Research.
Gugler, K., D.C. Mueller, B.B. Yurtoglu and Ch. Zulehner (2003). The Effect of Mergers: An International Comparison. International Journal of Industrial Organization. 21(5). pp625-653.
Icke, D (2014). An Empirical Study of Value Creation through Mergers & Acquisitions – A Strategic Focus. Aarhus University, Business & Social Sciences [online]. Available at: http://pure.au.dk/portal/files/68137404/Final_Thesis_31.12.2013_Daniel_Michael_Icke.pdf
Kedia, S., Ravid, S.A., Pons, V. (2011). When Do Vertical Mergers Create Value?. Financial Management. 40(4). pp845-877.
Kumar, N. (2009). How emerging giants are rewriting the rules of M&A. Harvard Business Review. 87(5). pp115-121.
Lipczynski, J., Wilson, O.S. and Goddard, J. (2009). Industrial Organization: Competition, Strategy, Policy. Third edition. Essex, England: Pearson Education Limited.
Martynova, M and Renneboog, L (2008). A Century of Corporate Takeovers: What Have We Learned and Where Do We Stand?. Journal of Banking & Finance. 32(10). pp2148-2177.
Powell, RG. and Stark, AW. (2005). Does operating performance increase post-takeover for UK takeovers? A comparison of performance measures and benchmarks. Journal of Corporate Finance. 11(1-2), pp293-317.
Schwert G.W. (1996). Markup Pricing in Mergers and Acquisitions. Journal of Financial Economics. 41(2). pp153-192.
Sudarsanam, S (2010). Creating Value from Mergers and Acquisitions – The Challenges. Essex, England: Pearson Hall.
Villalonga, B.N. (2004). Diversification Discount or Premium? New Evidence from the Business Information Tracking Series. The Journal of Finance. 59(2). pp 479-506.
Wang, D. and Moini, H. (2012). Performance Assessment of Mergers and Acquisitions: Evidence from Denmark. [online]. Available at: http://www.g-casa.com/conferences/berlin/papers/Wang.pdf
Walker, MM (2000). Corporate Takeovers, Strategic Objectives, and Acquiring-Firm Shareholder Wealth. Financial Management. 29(200). pp55-66.

Overvaluation of the Stock Market Essay

Introduction

Stock markets are considered to be among the most preferred investment platforms by investors, as they can generate a high return on investment (Fong, 2014). There are many underlying reasons for this high return, one of which may be the valuation of the financial commodities traded in the stock market (Chang, 2005). Some financial analysts believe that stock markets are extremely overvalued (Phoenix Capital Research, 2014), while others consider them to be only slightly overvalued (Rosenberg, 2010). Another school of thought holds that they are fairly valued (Wolf, 2008), while some maintain that they are undervalued (Pan, 2009). Owing to these differences in viewpoint, it becomes difficult to gauge the extent to which stock markets are overvalued. The reasons for these differences of opinion include the different geographical locations studied (Tan, Gan and Li, 2010) and the different assumptions made in comparisons (Cheng and Li, 2015). The difference in the methods used for valuation is also a factor, as every method has its merits and demerits (Khan, 2002). Stock market overvaluation may have severe negative effects, including a market crash or increased agency costs for organisations, which need to be considered by managers in organisation-wide strategic management (Jensen, 2005).

Methods used for Stock Valuation

Various methods are used for stock valuation; some of the common ones include the Price to Earnings (P/E) ratio (Stowe et al., 2008), Knowledge Capital Earnings (Ujwary-Gil, 2014) and the Dividend Discount Model (Adiya, 2010). The price to earnings ratio is the most common method used to evaluate stock markets, whereby a company’s current stock price is compared with the earnings it is predicted to yield in future (Stowe et al., 2008). Knowledge Capital Earnings (KCE) is another method, through which a company’s intellectual capital can be gauged and an interpretation of the extent to which the company is overvalued can be given (Ujwary-Gil, 2014). The KCE method, however, is particularly subjective when the analyst is interested in estimating the potential future earnings of an organisation (Ujwary-Gil, 2014).

The Dividend Discount Model is based on the assumption that the price of a stock at equilibrium will be equal to the sum of all its future dividend payments discounted back to their present value (Ivanovski, Ivanovska and Narasanov, 2015). One of the shortcomings of this model concerns the estimation of the company’s growth, for which averaged historical rates do not provide an accurate picture, as they ignore current economic conditions and the changes that take place within the company (Ivanovski, Ivanovska and Narasanov, 2015). Another issue, identified by Mishkin, Matthews and Giuliodori (2013), relates to the accuracy of dividends forecasted on the basis of the company’s past performance and predicted market trends; critics cast doubt on these figures, as they are based purely on analysts’ estimates and may not always be correct.
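As a minimal illustration of the model, the constant-growth (Gordon) special case prices a stock as next year’s dividend divided by the gap between the discount rate and the growth rate. The numbers in the sketch below are assumptions chosen purely for illustration.

```python
# Dividend Discount Model sketch, constant-growth (Gordon) special case:
# P0 = D1 / (r - g), i.e. the present value of all future dividends when
# dividends grow at a constant rate g and are discounted at rate r.
def gordon_growth_price(next_dividend, discount_rate, growth_rate):
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return next_dividend / (discount_rate - growth_rate)

# E.g. a 2.00 dividend next year, 8% required return, 3% assumed growth:
print(gordon_growth_price(2.00, 0.08, 0.03))  # 40.00
# With 4% assumed growth the implied price jumps to 50.00, showing how
# sensitive the model is to the growth estimate criticised above.
```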

Stock Markets are Extremely Overvalued

Hussman (2014), who is well known for his insights into the financial markets, comments in one of his speeches that, through their zero interest rate and quantitative easing policies, central banks have driven stock prices up to twice as high as they should be, implying that stock markets are overvalued by 100%. While different authors argue that every valuation metric has its merits and demerits, which makes it difficult to conclude whether stock markets are overvalued when assessed via any single metric, a Phoenix Capital Research (2014) report provides evidence that stock markets are overvalued by almost every valuation metric. According to Autore, Boulton and Alves (2015), short interest is also a determinant of stock valuation; the lower the short interest in a stock, the more overvalued the stock is likely to be.

An example is the U.S. stock market, which one analysis finds to be overvalued by 55% (Lombardi, 2014), while another estimates it to be overvalued by as much as 80% (Heyes, 2015). Lombardi (2014) attributes this overvaluation to the increasing presence of bullish stock advisors relative to bearish advisors, which leaves investors complacent rather than anxious about a large market sell-off. Evaluating the market through various methods, Tenebrarum (2015) concluded that the U.S. stock market is valued at its highest peak to date. Additionally, Lombardi (2014) recognises these indicators as similar to those seen before the stock market crash of 2007. This may lead to the prediction that history will repeat itself, as some specialists already expect a forthcoming crash (Heyes, 2015).

Reasons behind Extreme Overvaluation

Moenning (2014) explains that one of the reasons behind stock overvaluation is investors’ tendency to fall into the trap of investing on the basis of stock valuation rather than business cycles. He further elaborates that, because of investors’ inclination towards highly valued stocks, companies push their stock prices up to make their stock seem more attractive than that of other companies. Qian (2014) identifies a solution: if investors are discouraged from considering stock valuation alone when looking for investment options, companies will have no incentive to undertake the systematic mispricing of their stocks that results in overvaluation.

Another reason behind the overvaluation of stock markets has been suggested by Autore, Boulton and Alves (2015), according to whom stocks are overvalued to a great extent due to higher levels of failures to deliver. The three major exchanges report, in their daily listings, a large number of failures to deliver of approximately 10,000 shares or 0.5% of overall shares outstanding, which further explains the extreme overvaluation of stock markets (Autore, Boulton and Alves, 2015).

Stock Markets are Slightly Overvalued

Some analysts estimate that stock markets are only slightly overvalued relative to what their value should be. Rosenberg (2010) supports this view with research revealing that stock markets are overvalued by 35%, while a Newstex (2010) report considers evidence that the market is overvalued by 26%. Specialists from this school of thought believe that stock overvaluation may result only in a temporary disruption to the market, which may be resolved by a cautious reduction in stock prices.

Stock Markets are Fairly Valued

The ideal situation is one in which stock markets are appropriately valued, which Wolf (2008) identifies as an opportunity. He argues that fairly priced stock markets are favoured by investors and risk-seeking governments, as this is the situation with the least uncertainty. With an overall market yield of 4%, Paler (2012) judged the stock markets to be fairly valued, regarding them as a suitable investment platform. For example, Newstex (2015) reported Amazon’s stock price to be fairly valued at $295 per share as opposed to $380, because financial analysts believed that factors such as a potential decline in annual revenue growth, reduced operating profit margins due to increasing technology, marketing and other costs, and increased investment in growth strategies, such as international expansion, need to be accounted for when valuing the stock. It can thus be understood that overvalued stocks pose a threat to financial markets because investors lose confidence, which results in a drop in revenue growth (Akbulut, 2013). Slightly overvalued stock markets may find an easy escape if a decline in central bank rates results in a decrease across the wider interest rate spectrum (Saler, 1998).

Stock Markets are Undervalued

Pan (2009) supports the claim that stock markets are undervalued, giving the example of the Asian stock markets, which he estimates to be approximately 30% undervalued; one of the reasons he identifies is political instability. Another example is provided by Pawsey (2009), who found that much of the UK stock market was more undervalued than it had been in decades, because stocks were being undersold relative to sales estimates. So while the U.S. stock market is viewed by some as extremely overvalued, the U.K. stock market is viewed as severely undervalued. It can thus be seen that geographical location plays a great role in the differences of opinion about the overvaluation and undervaluation of stock markets (Tan, Gan and Li, 2010).

There are some specific markets which remain undervalued for long periods of time. An example is the Russian stock market: in 2008 Vladimir Putin asserted that Russian companies were severely undervalued (Putin, 2008), and Caldwell’s (2015) analysis likewise placed the Russian stock market among the three most undervalued markets globally. That analysis also predicted that Russian stocks might fall further, so investors need to be wary before investing in such markets.

Reasons behind Stock Undervaluation

One of the reasons behind the undervaluation of stock markets is investors’ inclination towards highly valued stocks. Although some companies set their stock at a lower price to make it seem cheaper and more attractive for investors to buy, they find investors doing the opposite, i.e. opting for highly valued stocks in anticipation of higher returns (Warner, 2010).

Reasons for Different Viewpoints
Different Assumptions and Valuation Methodology

The different viewpoints about stock valuation arise from the different assumptions made (Cheng and Li, 2015) and the different methods used for valuation (Khan, 2002). These methods have their own advantages and disadvantages which, if accounted for, may result in a different perspective. For example, the price to earnings ratio is considered a worthless tool by some analysts because of its overoptimistic estimates (Tenebrarum, 2015). Taboga (2011) identifies another issue with this ratio: it is heavily influenced by fluctuations in earnings due to business cycle oscillations. Hence he argues that relying merely on this method may not provide a true picture of the extent to which the stock market is overvalued.
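Taboga’s point about business-cycle fluctuations can be illustrated by contrasting a trailing P/E with a cyclically adjusted one that averages several years of inflation-adjusted earnings. The earnings series in the sketch below is an illustrative assumption.

```python
# P/E vs cyclically adjusted P/E sketch: averaging a multi-year real
# earnings series dampens business-cycle swings in the denominator.
import numpy as np

def trailing_pe(price, last_earnings):
    return price / last_earnings

def cyclically_adjusted_pe(price, real_earnings_history):
    return price / np.mean(real_earnings_history)

earnings = [5.0, 6.2, 3.1, 5.8, 6.0, 2.9, 5.5, 6.1, 5.9, 4.4]  # 10 years
print(trailing_pe(100.0, earnings[-1]))         # ~22.7, swings with the cycle
print(cyclically_adjusted_pe(100.0, earnings))  # ~19.6, a smoother measure
```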

Implications of Overvaluation

Hunter, Kaufman and Pomerleano (2005) explain that extreme overvaluation of the stock market should be taken seriously and a solution devised for it, as otherwise there is a higher probability of a crash. Liao (2014) likewise found a positive relationship between highly overvalued markets and the possibility of a crash, and a negative relationship between extreme overvaluation and future share price jumps. Jensen’s (2005) study revealed that the overvaluation of a company’s stock gives rise to certain organisational forces which become difficult for management to handle, damaging the organisation’s core value either partially or entirely.

On the one hand, overvalued stock markets pose threats to financial markets; on the other, they help boost aggregate demand and supply, such that this positive effect may potentially offset the negative effect (Cecchetti et al., 2000). Jones and Netter (2012) believe that mispriced stocks encourage investors to trade, as a result of which prices revert to reasonable levels.

Conclusion

The valuation of stock markets has long been an area of concern for financial institutions and analysts. Differences in valuations, and in the opinions regarding them, occur because of differences in the methods used for calculation and estimation. Each method has its pros and cons, and research suggests that no single method can provide a true picture of the degree to which stock markets are overvalued or undervalued. There is evidence of stock markets being extremely overvalued, and there is also a comparable amount of research suggesting they are fairly valued or undervalued. Considering the differences in the methods used and the variation in the geographical locations where these studies are conducted, it is difficult to hold a strong opinion about the extent to which stock markets are overvalued or undervalued, because critics of each school of thought have logical arguments exposing the limitations of the valuation methods used by other analysts. It is therefore advisable for analysts to use a combination of two or more methods for stock valuation, so that the doubts of the critics may be reduced, ensuring transparency in financial data analysis.

References
Adiya, B. (2010). Discuss the Main Theories Underlying the Valuation of the Stock. Critically Assess the Role of Fundamental and Technical Analysis in Stock Market Valuation. EC 247 Term Paper, University of Essex.
Akbulut, M.E. (2013). Do Overvaluation-driven Stock Acquisitions Really Benefit Acquirer Shareholders? Journal of Financial and Quantitative Analysis, Vol. 48, No. 4, pp. 1025-1055.
Autore, D.M., Boulton, T.J., and Alves, M.V. (2015). Failure to Deliver, Short Sale Constraints, and Stock Overvaluation. Financial Review, Vol. 50, No. 2, pp. 143-172.
Caldwell, K. (2015). Revealed: The World’s Cheapest Stock Markets 2015. The Telegraph. 6th June. [Online] Available at: http://www.telegraph.co.uk/finance/personalfinance/investing/11654508/Revealed-The-worlds-cheapest-stock-markets-2015.html
Cecchetti, S.G., Genberg, H., Lipsky, J., Wadhwani, S. (2000). Asset Prices and Central Bank Policy. Geneva: International Center for Monetary and Banking Studies.
Chang, J. (2005). Shares Feature Attractive Valuations. Chemical Market Reporter, Vol. 268, No. 18, pp. 15.
Cheng, S., and Li, Z. (2015). The Chinese Stock Market Volume II: Evaluation and Prospects. London: Palgrave Macmillan.
Fong, W.M. (2014). The Lottery Mindset: Investors, Gambling and the Stock Market. London: Palgrave Macmillan.
Heyes, J.D. (2015). Stock Market is 50% to 80% Overvalued; Experts Warn Historical Crash now Imminent. Natural News. 17th September. [Online] Available at: http://www.naturalnews.com/051202_economic_predictions_stock_market_crash_James_Dale_Davidson.html
Hunter, W.C., Kaufman, G.G., and Pomerleano, M. (2005). Asset Price Bubbles: The Implications for Monetary, Regulatory and International Policies. London: MIT Press
Hussman, J. (2014). John Hussman: The Stock Market is overvalued by 100%. Phil’s Stock World. Newstex. Retrieved from ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1621993284/fulltext?accountid=15977
Ivanovski, Z., Ivanovska, N., and Narasanov, Z. (2015). Application of Dividend Discount Model Valuation at Macedonian Stock-Exchange. UTMS Journal of Economics, Vol. 6, No. 1, pp. 147-154.
Jensen, M.C. (2005). Agency Costs of Overvalued Equity. Financial Management, Vol. 34, No. 1, pp. 5-19.
Jones, S.L., and Netter, J.M. (2012). Efficient Capital Markets. [Online] Available at: http://www.econlib.org/library/Enc/EfficientCapitalMarkets.html
Khan, A. (2002). 501 Stock Market Tips and Guidelines. USA: Writers Club Press.
Liao, Q. (2014). Overvaluation and Stock Price Crashes: The Effects of Earnings Management. PhD Dissertation, University of Texas.
Lombardi, M. (2014). EconMatters: U.S. Stock Market Overvalued by 55%? Newstex Global Business Blogs. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1641263053?pq-origsite=summon
Mishkin, F.S., Matthews, K., and Giuliodori, M. (2013). The Economics of Money, Banking and Financial Markets. European Edition. Barcelona: Pearson Education Limited.
Moenning, D. (2014). EconMatters: How Much are Stocks Overvalued? Newstex Global Business Blogs. Newstex. Retrieved from ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1639491656?pq-origsite=summon
Newstex (2010). Is the Stock Market 26% Overvalued? Phil’s Stock World. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/189661843?pq-origsite=summon
Paler, N. (2012). Fidelity’s Roberts: Equity Markets Fair to Slightly Overvalued but better than Cash. Investment Week. 26th March. pp. 28. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/963553204?pq-origsite=summon
Pan, A. (2009). Asian Stock Markets Seen almost 30% Undervalued. Asiamoney. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/206616845?pq-origsite=summon
Pawsey, D. (2009). UK Stocks are ‘Significantly Undervalued’. Financial Advisor. Retrieved from ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/195110261?pq-origsite=summon
Phoenix Capital Research (2014). Stocks Are Severely Overvalued By Almost Every Predictive Metric. Phil’s Stock World. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1546016887?pq-origsite=summon
Putin (2008). Putin Says Russian Stock Market Undervalued. Daily News Bulletin. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/456062919?pq-origsite=summon
Qian, X. (2014). Small Investor Sentiment, Differences of Opinion and Stock Overvaluation. Journal of Financial Markets, Vol. 19, No. 1, pp. 219-246.
Rosenberg, D. (2010). Rosenberg: Stocks 35% Overvalued. Phil’s Stock World. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/189666557?pq-origsite=summon
Saler, T. (1998). Fed could Rescue Slightly Overvalued Large-cap Stocks. Milwaukee Journal Sentinel. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/260844752?pq-origsite=summon
Stowe, J.D., Robinson, T.R., Pinto, J.E., McLeavey, T.W. (2008). Equity Asset Valuation Workbook. New Jersey: John Wiley & Sons, Inc.
Taboga, M. (2011). Under/Over-Valuation of the Stock Market and Cyclically Adjusted Earnings. International Finance, Vol. 14, No. 1, pp. 135-164.
Tan, Z.H., Gan, C., and Li, Z. (2010). An Empirical Analysis of the Chinese Stock Market: Overvalued/Undervalued. International Journal of Applied Economics & Econometrics, Vol. 18, No. 1, pp. 44-74.
Tenebrarum, P. (2015). EconMatters: The U.S. Stock Market is at its Most Overvalued in History. Newstex Global Business Blogs. Newstex. Retrieved from: ProQuest. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/1656537926?pq-origsite=summon
Ujwary-Gil, A. (2014). Knowledge Capital Earnings of a Company Listed on Warsaw Stock Exchange. European Conference on Knowledge Management. Kidmore: Academic Conferences International Limited.
Warner, J. (2010). Why Stock Markets are still Undervalued? The Daily Telegraph. 19th January, pp. 4. [Online] Available at: http://search.proquest.com.gcu.idm.oclc.org/docview/321739234?pq-origsite=summon
Wolf, M. (2008). Why Fairly Valued Stock Markets are an Opportunity? Financial Times. 26th November, pp. 13.

The Development of the Balanced Scorecard

Introduction

The intention of this essay is to analyse the ‘Balanced Scorecard’ and to review its effectiveness as a performance management tool. It will briefly review the short history of the ‘Balanced Scorecard’ and then analyse each of the different aspects of the management tool, describing how they link together.

History of the Balanced Scorecard

The notion of the ‘Balanced Scorecard’ first appeared in the Harvard Business Review in 1992 in an article titled “The Balanced Scorecard – Measures that Drive Performance”, authored by Robert Kaplan and David Norton (Kaplan and Norton, 1992). They had conducted a year-long study with “12 companies at the leading edge of performance measurement, [and] devised a ‘balanced scorecard’” as a result of their research (Kaplan and Norton, 1992, p.71).

A ‘Balanced Scorecard’ is a “strategic planning and management system that is used to align business activities to the vision and strategy of the organisation, improve internal and external communications, and monitor organisation performance against strategic goals” (Balanced Scorecard Institute, n.d.). It was born out of the necessity to include non-financial indicators in performance measurement, where in the past businesses and managers had focused primarily on financially-based indicators. These financially-based performance measurement systems “worked well for the industrial era, but they are out of step with the skills and competencies companies are trying to master today” (Kaplan and Norton, 1992, p.71).

After spending a year with various companies, Norton and Kaplan realised that “Managers want a balanced presentation of both financial and operational measures” (Kaplan and Norton, 1992, p.71). The recognition of the importance of operational measures was a milestone in performance measurement, as financially-based measurements indicate the final outcomes of actions and processes already in place, whilst operational measures help drive future financial performance.

Since its inception in 1992 the ‘Balanced Scorecard’ has been “adopted by thousands of private, public, and non-profit enterprises around the world” (Kaplan, 2010, p. 2), which provides testament to its importance and effectiveness as a performance management system; it is likely that businesses that have implemented the system have seen profound impacts on their profit margins and on the happiness and innovativeness of their workforce.

The Four Perspectives

The scorecard itself is made up of four different perspectives: Financial, Customer, Internal Business Processes, and Learning & Growth. By looking at these different perspectives the balanced scorecard “provide[s] answers to four basic questions: How do customers see us? What must we excel at? Can we continue to improve and create value? How do we look to shareholders?” (Kaplan and Norton, 1992, p.72). Beyond providing senior managers with information from four important perspectives, another benefit of implementing a scorecard is that it minimises information overload by “add[ing] value by providing both relevant and balanced information in a concise way for managers” (Mooraj, Oyon and Hostettler, 1999, p.489).

To understand more completely how the interaction of the perspectives helps an organisation create additional financial value whilst aiding the learning and growth, internal business processes and customer satisfaction perspectives, see the appendix for fig.1 and fig.2. The four different perspectives, and the way they interconnect, are important, so each of them is analysed on an individual basis below; first it must be recognised that each perspective is made up of Objectives, Measurements, Targets and, finally, Programmes.

Each of these areas within a perspective helps identify and measure a way in which a company can achieve its stated objective through the implementation of a programme. A basic example, drawn here from the learning and growth perspective, would be as follows:

Objective: Reduce staff turnover
Measurement: Staff turnover ratio
Target: A ratio of less than 6 months
Programme: Implement staff feedback and satisfaction surveys with the aim of creating an environment in which staff feel productive and appreciated
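As a minimal sketch of how such an entry might be captured in a reporting tool (the field names simply mirror the four facets above and are illustrative, not part of Kaplan and Norton’s specification):

```python
# Scorecard-entry sketch: one record per perspective, holding the four
# facets the essay describes (objective, measurement, target, programme).
from dataclasses import dataclass

@dataclass
class ScorecardEntry:
    perspective: str
    objective: str
    measurement: str
    target: str
    programme: str

entry = ScorecardEntry(
    perspective="Learning & Growth",
    objective="Reduce staff turnover",
    measurement="Staff turnover ratio",
    target="A ratio of less than 6 months",
    programme="Staff feedback and satisfaction surveys",
)
print(f"{entry.objective}: measured by {entry.measurement}")
```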

Learning & Growth Perspective

This perspective is the beginning of the scorecard and, in conjunction with the cause and effect hypothesis (Fig.2), makes up arguably its most important aspect, as it is “intended to drive improvement in financial, customer and internal process performance” (Kaplan and Norton, 1993). This aspect focuses primarily on innovation and the improvement of work-level employees, essentially creating more efficiency within the internal business processes. However, in order to achieve the required innovation and improvements in efficiency, a motivated and empowered workforce is essential; one method of gauging this is to implement a “staff attitude survey, [where] a metric for the number of employee suggestions measured whether or not such a climate was being created” (Kaplan and Norton, 1993). Other methods which could be implemented include calculating revenue per employee, creating a measurement which can be observed and recorded year on year against a pre-set objective, thus fulfilling each of the required facets of the balanced scorecard in relation to this perspective.

By implementing a programme, in the form of a survey or other such measures, “it [can] identify strategic initiatives and related measures; these gaps can then be addressed and closed by initiatives such as staff training and development” (Mooraj, Oyon and Hostettler, 1999, p. 483). Once workforce empowerment is achieved, employees are happy and informed about their roles and the overall strategic aim of the organisation, and methods of observing, recording and measuring are in place, the organisation can focus on the next stage of the balanced scorecard.

Internal Process Perspective

Once an empowered and informed workforce is achieved and employees are working to their full potential, this perspective focuses primarily on making business and/or manufacturing processes more efficient, creating more output for a given input. In order to achieve these improvements a business may implement many changes, which “may range from moderate and localized changes to wide-scale changes in business process, the elimination of paperwork and steps in processes, and introduction of automation and improved technology” (Balanced Scorecard Institute, 2002).

In order to achieve this increase in efficiency, “managers must devise measures that are influenced by employees’ actions. Since much of the action takes place at the department and work-station levels, managers need to decompose overall cycle time, quality, product, and cost measures to local levels” (Kaplan and Norton, 1992, p.75). By devising measurements aimed at work-station levels, such as delivery turnaround time or decreases in waste produced, managers are able to observe and monitor increases or decreases in efficiency and also locate where those changes stem from. Once a suitable measurement system is in place, managers are able to set targets and, finally, programmes to implement in an attempt to meet the pre-set targets.

A programme which is easily communicated, achievable and produces results that can be monitored by all levels relevant to the process allows employees to see the results they produce, with the intention of further motivating the workforce to increase efficiency. Once efficiency within the internal business processes has been achieved, with an objective, a measurement system, pre-set targets and a successfully implemented programme in place, the organisation can focus on whether or not the increase in innovation and empowerment, combined with efficiency, has had its intended effect on the customer.

Customer Perspective

The next perspective is that of the customer, which could be argued to be one of the most important aspects, if not the most important, as this is where an increase in sales revenue, and thus an increase in income, is generated. Creating an empowered, informed workforce and improving the efficiency of business processes should “lead to improved products and services” (Balanced Scorecard Institute, 2002), and ideally, with the reduced costs gained from efficiency, lower the cost of products and services offered to customers.

In order to achieve this increase in customer satisfaction or market share, a similar method is needed: an organisation must first create an objective, such as increasing market share by 10% or maintaining or increasing repeat purchases. Once an objective is set, the organisation must create a measurement system, one which can be reviewed annually, monthly or even weekly; examples include the percentage increase in customer loyalty cards or in sales revenue. Finally, a programme must be implemented in order to drive toward the objective; examples include increased market research to explore new market opportunities, or investment in a new marketing campaign and special offers directed at repeat customers.

Financial Perspective

The final perspective is the financial perspective. In the eyes of the shareholders this is by far the most important aspect, and it is where the effort in the earlier facets of the balanced scorecard culminates, in improved profit margins and ratios such as Return on Investment (ROI). This perspective “included three measures of importance to the shareholder. Return-on-capital employed and cash flow reflected preferences for short-term results, while forecast reliability signalled the corporate parent’s desire to reduce the historical uncertainty caused by unexpected variations in performance” (Kaplan and Norton, 1993). The first two are self-evidently of importance to shareholders, generating a return for shareholders and cash flow results which feed larger profits. Reducing the risk of uncertainty caused by variations in performance is of particular importance, and is something that can only be achieved by getting every employee focused on and aligned with the overall strategic aims of the company: through an informed, focused and appreciated workforce, an efficient internal business process, and a satisfied customer base.

The Cause and Effect Relationship

It is clear that linkages are the most important aspect of the balanced scorecard and that the cause and effect relationship (fig.2) allows for strategic alignment throughout an organisation. This has been seen to be “the common thread to the successful implementation of the balanced scorecard” (Murby and Gould, 2005, p.10). Another key element of the balanced scorecard is making sure “that all employees understand [the] strategy and conduct their business in a way that contributes to its mission and objectives” (Murby and Gould, 2005, p.5).

The importance of the cause and effect relationship, in conjunction with ensuring that each and every employee is aware of the overall company strategy, allows an organisation to create a foundation for success: the learning & growth facet provides a company with an informed, innovative and enthusiastic workforce, which puts the company in a position to progress into the future. A final key point is allowing managers the ability to “introduce four new processes that help companies make [an] important link” (Kaplan and Norton, 2007). By being in a position to translate the vision, communicate the strategy and link it to departmental and individual goals, integrate business plans with financial goals, and finally give each employee the ability to provide feedback, a company creates an environment which it can adjust and augment at each level should managers feel the need to.

Conclusion

In conclusion, the essay has covered the short history and fundamentals of the Balanced Scorecard and has shown how it is made up of different perspectives which confront management with basic questions regarding important stakeholders. It also provides management with a detailed measurement system and an ability to observe progress, or regression, within each of the different perspectives via the inclusion of objectives, measurement tools and targets which are created by management themselves. This also allows management to make changes where necessary in order to ensure that the overall strategic vision of the company is still being pursued. The essay has also highlighted the importance of the cause and effect relationship, and provides the “strategy map” within the appendix, which can help give an illustrative view of how the balanced scorecard, in conjunction with the cause and effect relationship, can turn an empowered workforce into a long-term, financially stable organisation. Finally, it covers the importance of communication, something that many organisations overlook, as can be seen by the removal of the work-level employee from the overall strategic vision, and something that many organisations feel only upper-level management should be informed of.

Bibliography
Balanced Scorecard Institute (2002). “The Balanced Scorecard and Knowledge Management.” Available at: https://balancedscorecard.org/BSC-Knowledge-Management
Balanced Scorecard Institute (n.d.). “Balanced Scorecard Basics.” Available at: http://balancedscorecard.org/Resources/About-the-Balanced-Scorecard
Kaplan, R.S. (2010). “Conceptual Foundations of the Balanced Scorecard,” Harvard Business School, pp. 1-36. Available at: http://www.hbs.edu/faculty/Publication%20Files/10-074.pdf
Kaplan, R.S. and Norton, D.P. (1992). “The Balanced Scorecard – Measures that Drive Performance,” Harvard Business Review, pp. 70-80. Available at: www.alnap.org/pool/files/balanced-scorecard.pdf
Kaplan, R.S. and Norton, D.P. (1993). “Putting the Balanced Scorecard to Work,” Harvard Business Review. Available at: https://hbr.org/1993/09/putting-the-balanced-scorecard-to-work
Kaplan, R.S. and Norton, D.P. (2007). “Using the Balanced Scorecard as a Strategic Management System,” Harvard Business Review. Available at: https://hbr.org/2007/07/using-the-balanced-scorecard-as-a-strategic-management-system
Mooraj, S., Oyon, D. and Hostettler, D. (1999). “The Balanced Scorecard: a Necessary Good or an Unnecessary Evil?” European Management Journal, 17(5), pp. 481-491. Available at: http://members.home.nl/j.s.sterk/AQM/The%20balanced%20scorecard%20a%20necessary%20good%20or%20an%20unnecessary%20evil.pdf
Murby, L. and Gould, S. (2005). Effective Performance Management with the Balanced Scorecard – Technical Report, CIMA, pp. 1-43. Available at: http://www.cimaglobal.com/Documents/ImportedDocuments/Tech_rept_Effective_Performance_Mgt_with_Balanced_Scd_July_2005.pdf
Illustrations
Balanced Scorecard Institute (2002). “Cause and Effect Hypothesis.” Available at: https://balancedscorecard.org/BSC-Knowledge-Management
Kaplan, R.S. (2010). “The Strategy Map links intangible assets and critical processes to the value proposition and customer and financial outcomes,” p. 23. Available at: http://www.hbs.edu/faculty/Publication%20Files/10-074.pdf
Appendix

(Figure 1)

(Figure 2)

Principles of Corporate Finance

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

The question of whether or not to proceed with a project requiring significant capital expenditure is one which involves considerations running the gamut of issues facing the firm. Taking a purely financial perspective, the firm is required by Fisher’s Separation Theorem to return the maximum amount of wealth to shareholders (Fisher, reprinted 1977). In the modern firm, ownership is separated from control: the capital of the company is held, traditionally at least, by shareholders who have little to do with the day-to-day running of the firm, this being entrusted to the Directors appointed to the board by the trustees and shareholders. As such, in the modern finance world there is a considerable agency problem, whereby the owners of the firm’s capital have a degree of separation from the control of their capital (Fama, 1978). It is therefore expected, and enforced by the market through the willingness of investors to place capital under a firm’s control, that a firm will return wealth commensurate with an acceptable degree of risk. Indeed it is the risk of an investment which drives the importance of investment appraisal in firms, and understanding the difference between systematic and unsystematic risk underpins much of the following discussion of the investment appraisal process (Hirshleifer, 1961).

Unsystematic risk is the risk associated with the unique operations and conditions of the firm and is relatively unimportant (at least in terms of financial theory), whilst systematic risk, especially as represented by the Beta of the firm (more of which later), is the risk of the class of share within the market (Pogue, 2004). The theory is that a share price is determined by its relation to the capital market line, in terms of the random walk theory, which governs the movement of the share with the market. Shares move as the market moves, generally speaking, and so how much they move represents the systematic risk to the shareholder. Beta is now one of the most common ways to measure the cost of equity capital and is also used heavily in portfolio theory. It is not without controversy or criticism. Betas are worked out using a wide range of past financial data, and as such many commentators have argued that Beta has little to tell us about the future. There are also significant problems with translating accounting data into price-relevant information; in particular, there is at best a tenuous link between earnings and book values of assets and the prices observed in the market. A notable exception is the Ohlson model, which it is argued demonstrates a coefficient between these accounting figures and price (and which, it is assumed, makes sense of both the Modigliani and Miller relevancy hypothesis and Gordon and Shapiro’s value metrics) (Pogue, 2004). Notwithstanding these criticisms, and the accepted criticisms of the random walk theory, which are considerable, Beta is still widely accepted as a way of dealing with systematic risk.

What does this mean for investment appraisal techniques? In terms of the accepted methodology of investment appraisal, the goal of such appraisal has to be the increase in wealth of the shareholders, and as such many of the techniques which are readily deployed by managers have no theoretical basis. In the following appraisal of the project a number of techniques are used to give decision-relevant information on the project (Graham, 2001). The company has two criteria which it uses to judge the acceptability of a project: the Return on Investment, which it states must be above 15%, and the payback period, which must be within three years. These methods give information in terms of, in the first case, accounting data, and in the second, a rule of thumb for recouping the initial investment within a specified time period. Neither of these methods tells us much about the financial and wealth-creating aspects of the project in question (Hajdasinski, 1993). Payback is simply a measure of the amount of time it takes to recoup the initial investment, and as such has little to do with maximising shareholder wealth: it is entirely possible for a project to recoup the initial investment very quickly but then go on to actually destroy wealth in later years, particularly when a project runs for a significant period of time. The accounting rate of return similarly tells us little about the wealth creation of the project, considering as it usually does non-cash items such as depreciation which have little to do with the amount of actual wealth returned to shareholders. Neither of these techniques takes into consideration systematic risk to shareholders, and as such each ignores an important and fundamental aspect of modern finance theory. Indeed it is only Net Present Value (NPV) which can tell us about the wealth-creating and wealth-destroying aspects of a project, and as such it is this technique (along with the similar technique of the Internal Rate of Return (IRR)) which can give decision-relevant information in terms of shareholder wealth (Lefley, 2004).
Briefly, NPV uses a discount factor, based upon the Weighted Average Cost of Capital (WACC), which adjusts the incremental net cashflows of a project for systematic risk, thus ensuring that the wealth created for the company reflects the time value of money (Amran, 1999). Much of the methodology of NPV requires one to recognise incremental cashflows and to remove those which have no relevance to wealth creation, particularly accounting derivations such as depreciation. Other cashflows which need not be included are sunk costs and other costs which would exist regardless of the project’s acceptance. Thus the analysis concentrates on the wealth-creating (or wealth-destroying) aspects of projects rather than the book conventions and ephemera of other techniques. It results in a cash figure, in terms of either wealth added or destroyed by the acceptance of the project, and is particularly useful for the ranking of projects in times of capital rationing. NPV is a powerful decision-making tool, but it is not without considerable problems in and of itself. NPV requires the firm to estimate future cashflows and, as will be seen, the accuracy of these cashflows is of significant importance to the viability of the project (Amran, 1999). Further, the use of NPV is considered by many to be far more complex than most other techniques, and non-specialists may find the results, and even the preparation of this analysis, a significant challenge. Further, the discount factor itself is often controversial: WACC is only one of a range of factors which can be used, though it is the most theoretically correct (as will be seen in the discussion later of capital gearing theory), but without a very accurate discount factor the analysis is at serious risk of error (Hillier, 1963). Notwithstanding these problems, NPV is one of the most relevant and reliable tools of investment appraisal and satisfies much of the theoretical underpinning of the subject of finance. This report finds that the project returns a positive NPV and satisfies all of the other investment criteria, and therefore should be undertaken (Graham, 2001).
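To make the mechanics concrete, a minimal Python sketch of NPV and a bisection-based IRR is given below; the cashflow figures are placeholders, not the project’s own.

    def npv(rate, cashflows):
        # cashflows[0] is the time-0 outlay (negative); later entries are the
        # incremental net cashflows for years 1, 2, ...
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-7):
        # Bisection search for the rate at which NPV is zero; assumes a
        # conventional pattern (one outflow followed by inflows), for which
        # NPV falls as the rate rises.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cashflows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Hypothetical project: a 5m outlay followed by four years of net cashflows
    flows = [-5_000_000, 1_800_000, 1_700_000, 1_600_000, 1_500_000]
    print(round(npv(0.06, flows)))     # positive NPV: the project adds wealth
    print(round(irr(flows) * 100, 1))  # the discount rate at which NPV is zero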

Results & Findings

Please see appendix A for the full derivation of the results and findings.

Net Profit (£)   2792009
Payback          2.5 years
ARR %            55.84018
NPV (£)          1767785
IRR              21%

This is based on a cost of equity capital of 6%, which in turn is based on the calculation for equity capital under the Capital Asset Pricing Model (CAPM):

Ke = rf + β(rm – rf)

where Ke is the cost of equity capital, rf is the risk-free rate (often gilts), β is the assigned Beta of the share and rm is the return on the market. For the company this equates to 5.31%, which has been rounded up to the nearest whole percentage point (under the assumption that it is better to err on the side of caution).
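As a sketch, the calculation can be checked in code; the inputs below are hypothetical, since the report does not state the risk-free rate, Beta or market return it used.

    def cost_of_equity(rf, beta, rm):
        # CAPM: Ke = rf + beta * (rm - rf)
        return rf + beta * (rm - rf)

    # Hypothetical inputs for illustration only
    print(cost_of_equity(rf=0.04, beta=0.7, rm=0.059))  # 0.0533, i.e. roughly 5.3%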

Discussion and Analysis

As has been established in the introduction, the primacy of the NPV technique carries with it a significant theoretical advantage over other methods. IRR too is based on the same methodology and gives the cost of capital at which the NPV of the project would be zero; as such it provides the maximum cost of capital at which the project would be viable. It would seem that this project is worth undertaking: not only does it satisfy all of the existing criteria for the firm, but it also returns wealth to the shareholders given the risk class of the share. The problems of NPV have, however, to be considered in line with the predictability of cashflows and the sensitivity of the project to the accuracy of these cashflows (Kim S.H. & Crick, 1986). If there were a thousand fewer cars per day through the toll booths, the project returns a negative NPV and in effect destroys wealth for shareholders:

Time                                        0          1          2          3          4
Income
– Vehicles (Est) p/day                      0       2000       1400       1200       1000
– Toll p/car (£)                            0          4          5        5.5          6
– Income p/day                              0       8000       7000       6600       6000
– Income p/annum                            0    2920000    2555000    2409000    2190000
Expenditure
– Operating costs (@£ p/vehicle)            0          2        2.5          3        3.5
– Total Operating costs                     0    1460000    1277500    1314000    1277500
– Wages (@£288 p/day * 365)                 0     105120     105120     105120     105120
– Outlay                              5000000
Total Expenditure                     5000000    1565122    1382623    1419123    1382624
Net Income                           -5000000    1354878    1172378     989877   807376.5
Net Profit                            -675491
ARR %                                -13.5098
Discount @ 6% (Cost of Equity Capital)            0.942      0.888     0.8375     0.7903
DCF                                  -5000000    1276295    1041071     829022   638069.6
NPV                                  -1215542

As wages are fixed, this cost is not sensitive to change, but other costs may be; if operating costs rise by 50% then the project also destroys wealth:

Time                                        0          1          2          3          4
Income
– Vehicles (Est) p/day                      0       3000       2400       2200       2000
– Toll p/car (£)                            0          4          5        5.5          6
– Income p/day                              0      12000      12000      12100      12000
– Income p/annum                            0    4380000    4380000    4416500    4380000
Expenditure
– Operating costs (@£ p/vehicle)            0          3       3.75        4.5       4.75
– Total Operating costs                     0    3285000    3285000    3613500    3467500
– Wages (@£288 p/day * 365)                 0     105120     105120     105120     105120
– Outlay                              5000000
Total Expenditure                     5000000    3390123    3390124    3718625    3572625
Net Income                           -5000000     989877   989876.3   697875.5   807375.3
Net Profit                           -1514996
ARR %                                -30.2999
Discount @ 6% (Cost of Equity Capital)            0.942      0.888     0.8375     0.7903
DCF                                  -5000000   932464.1   879010.1   584470.7   638068.7
NPV                                  -1965986
IRR                                      -14%

In both these scenarios the changes to the cashflows have a devastating effect on the viability of the project, one which is not communicated adequately (especially in terms of the costs) by ARR, or even payback. It is easy to imagine qualitative factors that could cause these scenarios: drivers may believe that the price of the toll is too high and find alternative routes to avoid paying it; in the case of costs, hikes in energy prices or other operating costs could easily impact on viability. These quick examples demonstrate the dangers of making assumptions about the future, and as such one must be very careful about the assumptions made in cashflows. One way of adjusting for these unsystematic risks is to conduct sensitivity analysis, and to use statistical techniques to adjust the NPV; this is often termed Expected Net Present Value (ENPV) and uses standard deviation to adjust for risk. Further, the cost of capital is a significant factor in the reliability of NPV (Pogue, 2004). Herein the cost of equity capital is used; as the firm is geared to all equity this is probably a realistic cost of capital, but perhaps investors see the direction of the firm as particularly risky and require further compensation. Using WACC is only one option for managers, and indeed the use of WACC does not always adequately adjust for the risk seen as inherently greater as cashflows move forward in time. Consideration needs to be given to the discount factor used.
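This kind of sensitivity analysis is straightforward to automate. The sketch below re-prices the project under different traffic assumptions, using the toll and operating cost schedules from the tables above; the results differ slightly from the tabulated NPVs because the report used rounded discount factors.

    def project_npv(vehicles_per_day, rate=0.06,
                    tolls=(4, 5, 5.5, 6), op_costs=(2, 2.5, 3, 3.5),
                    wages=105_120, outlay=5_000_000):
        # Yearly net cashflow = 365 days * vehicles * (toll - operating cost) - wages
        flows = [365 * v * (t - c) - wages
                 for v, t, c in zip(vehicles_per_day, tolls, op_costs)]
        # Discount each year's cashflow and net off the initial outlay
        return -outlay + sum(cf / (1 + rate) ** (y + 1)
                             for y, cf in enumerate(flows))

    print(round(project_npv([3000, 2400, 2200, 2000])))  # base case: positive NPV
    print(round(project_npv([2000, 1400, 1200, 1000])))  # 1,000 fewer cars: negative NPV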

Lastly, and in particular reference to the WACC, it is important to consider the nature of the capital structure of the company (Harris, 1991). Capital structure generally refers to the mixture of debt and equity which goes to make up the capital of the company, known as gearing, and represented as a proportional ratio. Assume that the company has £5,000,000 of capital, as is stated, in the form of equity and has no debt. As this is a large capital project, the company is faced with a decision as to how to finance it. Assuming that the only options are a rights issue to generate more equity, or debt (in the form of debentures, typical of long-term borrowing), then a decision needs to be made as to which course is better for the company as a whole. Gearing is another contentious issue in finance, with no correct answer to the problem of optimal gearing. A number of theoretical approaches can be applied to the problem, most notably the work of Modigliani and Miller (MM) and their irrelevancy propositions (Modigliani, 1958). To understand this, it is important to understand a number of features of both debt and equity. Equity, as has been said, is priced according to the risk it represents for equity holders, often in the form of Beta; debt is not governed by this and is rather a cost in terms of the interest payments over the life of the debenture and the repayment of the capital sum at the end of the loan. Therefore debt is often cheaper than equity, as the risk is considered lower than that borne by a shareholder. If one thinks of an Income Statement from a set of accounts, one can clearly see that interest is payable regardless of the profit attributable to shareholders; in effect the bank gets paid first. Further, there is a tax shield on interest payments, as these are a cost to the company and therefore reduce the amount of corporation tax payable. Therefore consider the following example. The company currently has £5m in equity and requires a further £5m to finance the toll booth project. Its cost of equity capital is 6% but it is able to borrow at 5% through debentures, and the rate of corporation tax is 30%. As it stands the WACC is 6%, and if the company issues a further £5m of equity to finance the project it will remain so; if however the company borrows the £5m the following holds:

             Debt    Equity
% Cost          5         6
Gearing       0.5       0.5
WACC          5.5
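In code, the ungeared and geared figures can be checked as follows; this is a sketch using the textbook after-tax formula, in which (1-T) scales only the debt component, so the tax-shielded figure it produces (4.75%) differs slightly from the 5.15% quoted below, depending on how the shield is applied.

    def wacc(equity, debt, ke, kd, tax=0.0):
        # Weighted average cost of capital; the tax shield reduces the
        # effective cost of debt by the factor (1 - T)
        v = equity + debt
        return (equity / v) * ke + (debt / v) * kd * (1 - tax)

    print(wacc(5_000_000, 5_000_000, ke=0.06, kd=0.05))            # 0.055, i.e. 5.5%
    print(wacc(5_000_000, 5_000_000, ke=0.06, kd=0.05, tax=0.30))  # 0.0475, i.e. 4.75%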

With a further reduction of (1-T) to represent the tax shield this figure becomes 5.15%: the cost of capital has been effectively lowered. This means that future projects (and it is important to use the existing cost of capital for investment appraisal, regardless of how the project is to be financed, for NPV calculations) will return more wealth to shareholders. The work of MM, however, pointed out that in a theoretically perfect world (no tax, symmetry of information and of borrowing rates, as well as other theoretical suppositions) the reduction is exactly offset by the increased risk from extending borrowing, as follows:

(Source: G. Arnold, Corporate Financial Management, 3rd Edition, London: Prentice Hall)

Therefore there is an increase of risk to equity shareholders with the increase in gearing.

Estimation of Optimal Hedge Ratios – Strategies

This work was produced by one of our professional writers as a learning aid to help you with your studies

The naive or one-to-one hedge assumes that futures and cash prices move closely together. In this traditional view of hedging, the holding of the futures contract used to offset the risk of the initial spot asset is of equal magnitude to that asset but in the opposite direction. In this case the hedge ratio (h) is one-to-one, or unity, (-1) over the period of the hedge.

This approach fails to recognize that the correlation between spot and futures prices is less than perfect and also fails to consider the stochastic nature of futures and spot prices and resulting time variation in hedge ratios (Miffre, City University).

The beta hedge recognizes that the cash portfolio to be hedged may not match the portfolio underlying the futures contract. With the beta hedge strategy, h is calculated as the negative of the beta of the cash portfolio.

Thus, for example, if the cash portfolio beta is 1.5, the hedge ratio will be -1.5, since the cash portfolio is expected to move by 1.5 times the movement in the futures contract. Where the cash portfolio is that which underlies the futures contract, the traditional strategy and the beta strategy yield the same value for h (Butterworth and Holmes, 2001).

The Minimum Variance Hedge Ratio (MVHR) was proposed by Johnson (1960) and Stein (1961). This approach takes into account the imperfect correlation between spot and futures markets and was developed further by Ederington (1979). According to Ederington, the objective of a hedge is to minimize risk, where risk is measured by the variance of the portfolio return. The hedge ratio is identified as:

h* = – σS,F / σ²F    (1)

Where σ²F is the variance of the futures contract and σS,F is the covariance between the spot and futures positions. The negative sign means that the hedging of a long stock position requires a short position in the futures market. The relation between spot and futures can be represented as:

St = α + h*Ft + et    (2)

Eq. (2), which is expressed in levels, can also be written in price differences as:

St – St-1 = α + h*(Ft – Ft-1) + εt    (3)

or in price returns as:

(St – St-1) / St-1 = α + h*((Ft – Ft-1) / Ft-1) + εt    (4)

Eq. (4) can be approximated by:

logSt – logSt-1 = α + h*(logFt – logFt-1) + εt    (5)

Eq. (5) can be re-written as:

RSt = α + h*RFt + εt    (6)

Where RSt and RFt are the returns on the spot and futures positions at time t.

Equations (2) and (3) assume a linear relationship between the spot and futures, while eqs. (4)-(6) assume that the two prices follow a log-linear relation. Relative to equations (2)-(3), the hedge ratio represents the ratio of the number of units of futures to the number of units of spot that must be hedged, whereas, relative to eq. (4), the hedge ratio is the ratio of the value of futures to the value of spot (Scarpa and Manera, 2006).

Eq. (2) can easily produce autocorrelated and heteroskedastic residuals (Ederington, 1979; Myers and Thompson, 1989: cited in Scarpa and Manera, 2006). For this reason, some authors suggest the use of eqs. (3)-(6), so that the classical OLS assumption of no correlation in the error terms is not violated.

Empirically, the optimal hedge ratio h* can be obtained by the simple Ordinary Least Squares (OLS) approach, where the coefficient estimate on the futures returns gives the hedge ratio. This can only be done when there is no co-integration between spot and futures prices/values and the conditional variance-covariance matrix is time invariant (Casillo, XXXX). Even though the application of MVHR relies on unrealistic assumptions, it provides an unambiguous benchmark against which to assess hedging performance (Butterworth and Holmes, 2001).
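As a sketch of the estimation (placeholder data, and assuming numpy is available):

    import numpy as np

    def mvhr(spot_returns, futures_returns):
        # OLS slope of spot returns on futures returns, i.e. the minimum
        # variance hedge ratio of eq. (1): cov(S, F) / var(F). By the sign
        # convention above, a long spot position is hedged by going short
        # h* units of futures.
        s = np.asarray(spot_returns)
        f = np.asarray(futures_returns)
        return np.cov(s, f)[0, 1] / np.var(f, ddof=1)

    # Placeholder data: spot returns built to co-move with futures returns
    # with a true slope of 0.8
    rng = np.random.default_rng(0)
    f = rng.normal(0.0, 0.01, 1000)
    s = 0.8 * f + rng.normal(0.0, 0.005, 1000)
    print(mvhr(s, f))  # close to 0.8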

The Error Correction Model (ECM) approach to determining the optimal hedge ratio takes into account the important role played by the theory of co-integration between the futures and spot markets, which is ignored by the MVHR (Casillo, XXXX). The theory of co-integration was developed by Engle and Granger (1981), who show that if two series are co-integrated, there must exist an error correction representation that permits the inclusion of both the short-run dynamics and the long-run information.

The ECM approach augments the standard OLS regression used in the MVHR by incorporating an error correction term (the residual) and lagged variables, to capture deviations from the long-run equilibrium relationship and short-run dynamics respectively (XXXXect). The efficient market hypothesis and the absence of arbitrage opportunities imply that spot and futures prices are co-integrated and that an error correction representation must exist (Casillo, XXXX), of the following form:

ΔSt = λet-1 + γΔFt + Σ(i=1 to n) δiΔFt-i + Σ(j=1 to m) θjΔSt-j + ut    (7)

Where γ is the optimal hedge ratio and et-1 = St-1 – βFt-1.

All the above-mentioned approaches employ a constant variance and covariance to measure the hedge ratio, which is problematic: the return series of many financial securities exhibit non-constant variance, besides having a skewed distribution. This has been demonstrated by Engle 1982, Lamoureux and Lastrapes 1990, Glosten, Jagannathan and Runkle 1993, Sentana 1995, Lee and Brorsen 1997 and Lee, Chen and Rui 2001 (Rose, et al., 2005).

Non-constant variance, linked to unexpected events, is considered to be uncertainty or risk, and this uncertainty is particularly important to investors who wish to minimize risks. In order to cope with these problems, Engle (1982) introduced the Autoregressive Conditional Heteroskedasticity (ARCH) model to estimate conditional variance. It takes into account changing variance over time by imposing an autoregressive structure on the conditional variance; Bollerslev (1986) generalised this to the GARCH model. Bollerslev, Engle and Wooldridge (1988) expanded the univariate GARCH to a multivariate dimension to simultaneously measure the conditional variances and covariances of more than one time series. Thus, the multivariate GARCH model is applied to calculate a dynamic hedge ratio that varies over time based upon the variance-covariance between the series (Rose, et al., 2005).

Finally, other researchers have proposed more complex techniques, and some special cases of the above techniques, for the estimation of the OHR. Among these we mention the random coefficient autoregressive model offered by Bera et al. (1997), the Fractionally Cointegrated Error Correction model by Lien and Tse (1999), the Exponentially Weighted Moving Average estimator by Harris and Shen (2002), and the asymmetric GARCH by Brooks et al. (2002) (Casillo, XXXX).
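To illustrate the flavour of these time-varying approaches, the following is a minimal exponentially weighted (EWMA) hedge ratio sketch, in the spirit of, though not identical to, the Harris and Shen (2002) estimator.

    import numpy as np

    def ewma_hedge_ratios(spot_returns, futures_returns, lam=0.94):
        # EWMA estimates of the conditional covariance and variance give a
        # hedge ratio h_t = cov_t / var_t that varies through time
        cov = float(np.cov(spot_returns, futures_returns)[0, 1])  # initialise with
        var = float(np.var(futures_returns, ddof=1))              # sample moments
        ratios = []
        for s, f in zip(spot_returns, futures_returns):
            cov = lam * cov + (1 - lam) * s * f
            var = lam * var + (1 - lam) * f * f
            ratios.append(cov / var)
        return np.array(ratios)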

Despite the existence of a massive literature on all the above approaches, no unanimous conclusion has been reached regarding the superiority of a particular methodology for determining the optimal hedge ratio. However, it would be wise to suggest that the choice of a strategy for deriving the optimal hedge ratio should be based on a subjective assessment made in relation to investor preferences (Butterworth and Holmes, 2001).

Development of Research:

Figlewski (1984) conducted the first analysis of the hedging effectiveness of stock index futures in the US. He examined the hedging effectiveness of Standard and Poor’s 500 stock index futures against the underlying portfolios of five major stock indexes for the period June 1, 1982 to September 20, 1983. All five indexes represented diversified portfolios; however, they were different in character from one another. The Standard and Poor’s 500 index and the New York Stock Exchange (NYSE) Composite included only the largest capitalization stocks. The American Stock Exchange composite (AMEX) and the National Association of Securities Dealers Automated Quotation System (NASDAQ) index of over-the-counter stocks contained only small companies, which move somewhat independently of the Standard and Poor’s index. Finally, the Dow Jones portfolio contained only 30 stocks of very large firms. Return series for the analysis included dividend payments, as the risk associated with dividends on the portfolio is presumably one of many sources that give rise to basis risk in a hedged position. However, it was found that their inclusion did not alter the results. Consequently, and given the relatively stable and predictable nature of dividends, subsequent studies have excluded dividends. Figlewski used beta hedge and minimum variance hedge strategies and showed that the latter can be estimated by the Ordinary Least Squares (OLS) approach using historical data. He found that for all indexes hedging performance was better when the minimum variance hedge ratio (MVHR) was used than when the beta hedge ratio was used: the MVHR resulted in lower risk and higher return. When the MVHR was used, risk was reduced by 70%-80% for large capitalization portfolios. However, hedging performance was considerably reduced for smaller stock portfolios. Also, hedging performance was better for one week and four week hedges when compared with overnight hedges.

Figlewski (1985) studied the hedging effectiveness of three US index futures (S&P500, NYSE Composite and the Value Line Composite Index (VLCI)) in hedging five US indices (S&P500, NYSE Composite, AMEX Composite, NASDAQ and DJIA). Data was collected for 1982. He analyzed hedging effectiveness for holding periods ranging from one day to three weeks, using the standard deviation of the hedged position divided by the standard deviation of the un-hedged position as a measure of hedging effectiveness. Hedge ratios were derived using the beta strategy and the MVHR. Assuming constant dividends, the weekly returns of each of the five indices were regressed on the returns of the indices underlying the three futures. Daily data was used to compute ex post risk-minimizing hedge ratios. In nearly every case the risk-minimizing hedge ratio outperformed the other in terms of hedging effectiveness, although for both types of hedge ratio it was found that hedges of under a week were not very effective. It was also found that hedging was more effective for the S&P500, NYSE Composite and the DJIA than for the NASDAQ and AMEX Composite. In other words, once again, portfolios of small stocks were hedged less effectively than were those comprising large stocks.

Junkus and Lee (1985) used daily spot and futures closing prices for the period 1982 to 1983 for three US indices: the S&P500, NYSE Composite and VLCI. They investigated the effectiveness of various hedging strategies, including the MVHR and the one-to-one hedge ratio. This was done for each index using data for a month to compute the hedge ratio used during that same month in hedging the spot value of the corresponding index. MVHRs were computed by regressing changes in the spot price on changes in the futures price. The average MVHR was 0.50, while the average effectiveness, as measured by the variance of the un-hedged position minus the variance of the hedged position divided by the variance of the un-hedged position (HE), was 0.72 for the S&P500 and the NYSE Composite, and 0.52 for the VLCI. The effectiveness of the one-to-one hedge ratio was poor, leading to an increase in risk for the VLCI and the NYSE Composite, and an effectiveness measure of 0.23 for the S&P500. In other words, the MVHR was found to be most effective in reducing the risk of a cash portfolio comprising the index underlying the futures contract. There was little evidence of a relationship between contract maturity and effectiveness.

Peters (1986) examined the use of S&P500 futures to hedge three share portfolios: the NYSE Composite, the DJIA and the S&P500 itself. The MVHR and beta hedge strategies were applied to data for the period 1984 to 1985. For each of the portfolios, the MVHR gave a hedged position with a lower risk than did the beta hedge.

Graham and Jennings (1987) were the first to examine hedging effectiveness for cash portfolios not matching an index. They classified US companies into nine categories according to their betas and dividend yields. For each beta-dividend yield category, ten equally weighted portfolios of ten shares each were constructed. Weekly returns were computed for each portfolio for 1982-83. They then investigated the performance of S&P500 futures in hedging these portfolios over periods of one, two and four weeks. Three alternative hedge ratios were used: one-to-one, beta and MVHR. The MVHR produced hedged positions with returns that were about 75% higher than for the other two hedge ratios. The measure of hedging effectiveness HE ranged from 0.16 to 0.33. For the one and two week hedges, the MVHR hedge was more effective, that is, it had a higher HE value.

Morris (1989) investigated the performance of S&P500 futures in hedging the risk of a portfolio of the largest firms in the NYSE. The data was monthly from 1982 to 1987. The MVHR was estimated using data for the entire period, and gave a HE value of 0.91.

Lindahl (1992) investigated hedge duration and hedge expiration effects for the MMI and S&P500 futures contracts. Results showed that the MVHR increased towards unity with an increase in hedging duration. For the S&P500, hedge ratios were found to be 0.927, 0.965 and 0.970 for one, two and four week hedge durations respectively. It was concluded that the hedge ratio and hedging effectiveness increase as duration increases. Lindahl’s examination of the hedge expiration effect is based on the fact that futures prices converge towards spot prices as expiration approaches. On this basis the MVHR can be expected to converge towards the naive hedge ratio if futures prices also exhibit less volatility when approaching expiration. It was concluded that there was no obvious pattern in terms of risk reduction in relation to time to expiration.

Unlike previous studies, which only investigate ex post hedging effectiveness, Holmes (1995) was the first in the UK to examine the hedging effectiveness of the FTSE-100 stock index futures contract using an ex ante minimum variance hedge ratio strategy. The cash portfolio being hedged mirrored the FTSE-100 stock index. Data for the spot and futures series was collected for the period July 1984 to June 1992, for hedging durations of one and two weeks. The results again demonstrated the superiority of the MVHR over beta hedges and showed that the ex ante hedge strategy resulted in risk reduction of over 80%. Greater risk reduction was also shown to be achieved by estimating hedge ratios over longer periods.

Holmes (1996) examined ex post hedging effectiveness for the same data and return series used in the earlier study (1995) and showed that the standard OLS-estimated MVHR provided the most effective hedge when compared to the beta hedge strategy, the error correction method and GARCH estimation. Results also suggested an increase in hedging effectiveness with an increase in hedging duration. This can be explained as the variance of returns increasing with duration, resulting in a reduction in the proportion of total risk accounted for by basis risk.

Butterworth and Holmes (2001) provided an unprecedented insight into the hedging effectiveness of investment trust companies (ITCs) using the Mid250 and FTSE100 stock index futures contracts, the former having been introduced in February 1994 with the aim of providing better hedging for small capitalization stocks. Their analysis is based on daily and weekly hedge durations for the cash and futures return data of thirty-two ITCs and four indices for the period February 1994 to December 1996, with FTSE100 and FTSE Mid250 index futures used to hedge cash positions. Apart from the well-established OLS approach, consideration is also given to the Least Trimmed Squares (LTS) approach to estimation, which involves trimming the regression by excluding outliers. Four hedging strategies, comprising the traditional hedge, beta hedge, minimum variance hedge and composite hedge, were compared on the basis of within-sample performance. The composite hedge ratio was generated by considering returns on a synthetic index future formed as a weighted average of returns on the FTSE100 and FTSE-Mid250 contracts. Results demonstrated that the traditional and beta hedges performed worst. The MVHR strategy for daily and weekly hedges using Mid250 contracts outperformed the same strategy using FTSE100 contracts in terms of risk reduction for ITCs. However, the superiority of the Mid250 over the FTSE100 is significantly less for cash portfolios based on broad market indexes. The composite hedge strategy demonstrated only minor improvements over the results of the Mid250 contract. The LTS approach suggested similar results to OLS.

Seelajaroen (2000) investigated the hedging effectiveness of All Ordinaries Share Price Index (SPI) futures in reducing the price risk of an All Ordinaries Index (AOI) portfolio in the Australian financial market. Hedging effectiveness was investigated for one, two and four week hedge durations. Hedge ratios were generated using Working’s model and the minimum variance model, and their effectiveness was determined by comparison with the naive strategy. Data for the analysis consisted of daily closing prices of the SPI and AOI for the period January 1992 to July 1998. The minimum variance model covered both ex post and ex ante approaches. Results demonstrated the superiority of both Working’s model and the minimum variance model over the naive hedge strategy. Working’s strategy was found to be more effective in the long run; however, in the short run the strategy is more sensitive to the basis level used in the decision rule. The minimum variance strategy was also found to be highly effective, as even the standard use of the hedge ratio derived from past data was able to achieve risk reduction of almost 90%. Also, longer duration hedges were found to be more viable than short duration hedges, and finally the effects of time to expiration on the hedge ratio and effectiveness were found to be ambiguous.

DATA & METHODOLOGY:

This paper examines the cross hedging effectiveness of five of the world’s most actively traded stock index futures in reducing the risk of the KSE100 index. The five stock index futures are the S&P500, NASDAQ100, FTSE100, HANG SENG and NIKKEI 225. All five stock index futures and the KSE100 index are arithmetic weighted indexes, where the weights are market capitalization. Analysis is based on daily and weekly hedge durations, using spot and futures return data for the period from 1st January 2003 to 31st July 2008. Due to problems of sample size, hedge durations of more than one week are not considered. Each daily return series consists of 1457 observations, of which the last 157 (from 1st January 2008 to 31st July 2008) are used to calculate out-of-sample (ex ante) hedging performance. Each weekly series consists of 292 observations, of which the last 31 (from 1st January 2008 to 31st July 2008) are used to measure ex ante hedging performance. The return series for each index is calculated as a logarithmic value change:

Rt = logVt – logVt-1 (2)

Where, Rt is the daily or weekly return on either the spot or futures position and Vt is the value of the index at time t.

The value used is the daily or weekly closing value of each of the six indexes. All data was obtained from Datastream.

Two hedging strategies are considered: the first is the MVHR, and the second is an extension of the first that applies the theory of co-integration, formally known as the Error Correction Model.

MVHR is estimated by regressing spot returns (KSE 100 in this case) on futures returns using historical information:

RSt = α + bRFt + et    (3)

Where RSt is the return on the KSE100 index in time period t; RFt is the return on the futures contract in time period t; et is the error term; and α and b are regression parameters.

The value of b, which is the hedge ratio h* shown earlier in equation (1), is obtained by running the above regression in EViews. This hedge ratio is used in further calculations for determining risk reduction. The effectiveness of the minimum variance hedge is determined by examining the percentage of risk reduced by the hedge (Ederington, 1979; Yang, 2001). Consequently, hedging effectiveness is measured by the ratio of the variance of the un-hedged position minus the variance of the hedged position, divided by the variance of the un-hedged position (Floros, Vougas 2006).

Var(u) = σ²S    (4)

Var(h) = σ²S + h²σ²F – 2hσS,F    (5)

Hedging Effectiveness (HE) = (Var(u) – Var(h)) / Var(u)    (6)

Where Var(u) is the variance of the un-hedged position (KSE100); Var(h) is the variance of the hedged position; σS and σF are the standard deviations of spot (KSE100) and futures returns respectively; h is the hedge ratio (b in equation 3); and σS,F is the covariance between spot and futures returns.
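A sketch of the effectiveness calculation in eqs. (4)-(6), assuming numpy is available:

    import numpy as np

    def hedging_effectiveness(spot_returns, futures_returns, h):
        # HE = (Var(u) - Var(h)) / Var(u): the proportion of the un-hedged
        # variance removed by holding -h units of futures per unit of spot
        s = np.asarray(spot_returns)
        f = np.asarray(futures_returns)
        var_u = np.var(s, ddof=1)
        var_h = np.var(s - h * f, ddof=1)
        return (var_u - var_h) / var_u

    # For example, using the mvhr() estimate from the earlier sketch:
    # hedging_effectiveness(s, f, mvhr(s, f))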

The Error Correction Model (ECM) approach requires testing for co-integration. The return series are checked for co-integration by following the simple two-step approach suggested by Engle and Granger. Consider two time series Xt and Yt, both of which are integrated of order one (i.e. I(1)). Usually, any linear combination of Xt and Yt will also be I(1). However, if there exists a linear combination (Yt – βXt) which is I(0), then according to Engle and Granger, Xt and Yt are co-integrated, with co-integrating parameter β.

Generally, if Xt is I(d) and Yt is I(d) but their linear combination (Yt – βXt) is I(d-b), where b>0, then Xt and Yt are said to be co-integrated. Co-integration conjoins the long-run relationship between integrated financial variables to a statistical model of those variables (XYZ, 200N).

In order to test for co-integration, it is essential to check that each series is I(1). Therefore the first step is to determine the order of integration of each series. The order of integration is determined by testing for a unit root using the Augmented Dickey Fuller (ADF) test. A variable Xt is I(1) if it requires differencing once to make it stationary. The null of a unit root is rejected when the probability is less than the critical level of 5%. Then the following OLS regression is estimated:

RSt = α + bRFt + et

Where the variables are the same as in equation (3).

The empirical existence of co-integration is tested by constructing test statistics from the residuals of the above equation. If the two series are co-integrated then et will be I(0). This is established by testing the residuals for a unit root using the ADF test. The null of a unit root is rejected if the probability is less than 5%.

Once it is established that the series are co-integrated, their dynamic structure can be exploited for further investigation in step two. Engle and Granger show that co-integration implies, and is implied by, the existence of an error correction representation of the series involved. The error correction model (ECM) abstracts the short- and long-run information in modeling the data (XYZ, 200N). The relevant ECM to be estimated for generation of the optimal hedge ratio is given by:

RSt = λet-1 + γRFt + Σ(i=1 to n) δiRFt-i + Σ(j=1 to m) θjRSt-j + ut    (7)

Where et-1 is the error correction term, n and m are large enough to make ut white noise, and γ is the hedge ratio. The appropriate values of n and m are chosen by the Akaike information criterion (AIC) (Akaike 1974).

In short, returns on the KSE100 are regressed on futures returns and the residuals are collected using OLS. The ECM with appropriate lags is then estimated by OLS in the second stage.
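A sketch of the two-step procedure, assuming statsmodels is available; variable names are illustrative and the AIC-based selection of lagged terms is elided.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    def engle_granger_hedge(spot, futures):
        # Step 1: levels regression; the residuals proxy the long-run
        # equilibrium error. (Strictly, Engle-Granger critical values rather
        # than ordinary ADF p-values should be used on these residuals.)
        spot, futures = np.asarray(spot), np.asarray(futures)
        levels = sm.OLS(spot, sm.add_constant(futures)).fit()
        resid = levels.resid
        print("ADF p-value on residuals:", adfuller(resid)[1])
        # Step 2: ECM in first differences with the lagged error correction
        # term; lagged difference terms chosen by AIC are omitted for brevity
        d_spot, d_fut = np.diff(spot), np.diff(futures)
        X = sm.add_constant(np.column_stack([d_fut, resid[:-1]]))
        ecm = sm.OLS(d_spot, X).fit()
        return ecm.params[1]  # coefficient on the futures return: the hedge ratio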

The next phase is to determine the superiority of the two models, MVHR and ECM, which were used to obtain the hedge ratios b and γ respectively. This is achieved by conducting a Wald Test of Coefficients on model (7). If any one of the lags in model (7) turns out to be significant, then the optimal hedge ratio obtained through model (7) will be superior to the hedge ratio obtained through model (3), hence signalling the superiority of the ECM over the MVHR. The significance is tested by the hypothesis:

H0: C(1) = C(2) = … = C(i) = 0

H1: at least one C(i) ≠ 0

The null is rejected if the probability of Chi-square statistic is less than the critical value of 5%.

Lastly, the superior hedge ratio will be used to determine ex ante performance. The hedging effectiveness of the superior hedge ratio will be based on the measure of risk reduction achieved through equation (6).

Importance of Strategic Readiness of Intangible Assets

This work was produced by one of our professional writers as a learning aid to help you with your studies

In 2000, the market-to-book value, that is, the ratio of stock-market value to accounting value, of the 500 largest companies in the U.S. increased to 6.3. In simple words, this means that for every six dollars of market value, only one dollar appeared on the balance sheet as a physical or financial asset. The cause of this large difference has been attributed to the rise in the value of intangible assets. (Source: Getting a Grip on Intangible Assets, Harvard Management Update)

In the past decade there has been an increasing academic and corporate focus on the subject of intangible assets, offering clarity to business leaders on ways to measure and manage these assets in the context of a business’s strategic goals. On the regulatory front, the European Union is soon to introduce standards for reporting on intangible assets.

Our report aims to analyse one such academic framework, developed by Robert S. Kaplan and David P. Norton, which highlights the importance of the strategic readiness of intangible assets. The methodology of this conceptual framework is the creation of a Strategy Map on which intangible assets are mapped and measured.

Three key things emerge from the analysis of this work, entitled Measuring the Strategic Readiness of Intangible Assets and written for Harvard Business Review in 2004:

1. Identification of the important intangible assets in a business organization.

2. Mapping these intangible assets to a business’s strategy.

3. Understanding the factors that enable these intangible assets to contribute to the success of the business.

Introduction

It is increasingly clear from the example at the beginning that, in the 21st century’s knowledge-driven, services-dominated economy, it is intangible assets, and not so much physical and financial assets, which play an increasingly important role in shaping a business’s success. At the same time, management has realized that there is a need to objectively evaluate the readiness of these intangible assets to enable a business to achieve its strategy.

For the benefit of analysis, we start by defining intangible assets as any nonphysical assets that can produce economic benefits. These cover intellectual capital, knowledge assets, human capital and organizational capital as well as more specific attributes like quality of corporate governance and customer loyalty. (Zadrozny, Wlodrek).

So what is required to map and manage these assets for the success of a business’s strategy?

Analysis of Situation

According to Kaplan and Norton, while developing the Balanced Scorecard (a concept for measuring a company’s activities in terms of its vision and strategies, which helps give managers a comprehensive view of the performance of a business), they identified three major categories of intangible assets:

No.   Intangible Asset       Encompassing Elements
1     Human Capital          Skills; Training; Knowledge
2     Information Capital    Systems; Databases; Networks
3     Organization Capital   Culture; Leadership; Alignment; Teamwork

Further, in understanding the critical success factors that transform a business organization into a performing, strategy-focused entity, the article discusses how these assets need to be mapped to the organization’s strategy on a framework called the strategy map. Finally, it explains how quantitative values can be assigned which help an organization understand the readiness of these assets to enable it to achieve its strategy.

Discussions and Findings

As we discover, there are unique features of intangible assets that make their behaviour different from the physical and financial assets. These are:

1. Intangible assets mostly cannot create value for an organization in a standalone form; they need to be combined with other assets. This limits a firm’s ability to assign a value to these assets on a standalone basis.

2. These assets rarely affect financial performance directly, unlike physical or financial assets which immediately start paying off. Intangible assets contribute indirectly through a chain of cause and effect. For example, the investment in training a team in total quality management may decrease defects and therefore may give rise to customer satisfaction and heighten positive brand perception.

3. While human capital and information capital are easier to map and manage, organizational capital is much more difficult.

4. Human capital may be measured by mapping the jobs and identifying the strategic job families before focusing on getting these jobs ready for strategy implementation. Information capital may be developed by identifying and creating a portfolio of transactional, analytical and transformational computer applications, and a sturdy network infrastructure, that give a positive edge to the manner in which business is conducted. One such example is the complete transformation of retail banking through the deployment of information systems that empower customers exponentially.

5. Organizational capital is the most challenging element to map and manage because of the complete behavioural change required in conducting business at all levels. Changing the base culture, which involves the employees’ shared attitudes and beliefs, and the climate, which comprises the shared perception of the organization’s policies, procedures and practices, requires a grip on the deep-rooted socio-psychological dynamics at work within the organization. For example, changing the National Health Service’s (NHS) culture from budget-oriented operations to dynamic, business-plan-oriented operations focused on the health consumer is more challenging than mapping the strategic jobs and putting state-of-the-art information capital in place. For bringing about organizational capital readiness, leadership plays a very important role, as do communication and knowledge-sharing.

6. Once these intangible assets are brought in state of strategic readiness, they start contributing in generating cash for the business. For example, if McDonalds sets a service response time of 30 seconds and trains its human capital to achieve this target, the customer turnover at the counter will increase and lead to higher revenues.

7. Finally, for these assets to come into a state of strategic readiness, they need to be aligned with the organization’s strategy. If they are not properly aligned, it can lead to chaos. For example, if McDonalds promises its customers a 30 seconds service but does not care to bring its human, information and organizational assets up to required standards, there will be widespread dissonance amongst its customer base and the risk of erosion in brand value will be very high.

Marginal and Absorption Costing of Income Statements

This work was produced by one of our professional writers as a learning aid to help you with your studies

This paper aims to look at how income statements are prepared using marginal and absorption costing. The absorption costing method charges all direct costs to products, as well as a share of indirect costs. The indirect costs are charged to products using a single overhead absorption rate, which is calculated by dividing the total cost centre overhead by the total volume of budgeted production (ACCA, 2006; Drury, 2006; Blocher et al., 2005). Under marginal costing, on the other hand, only variable costs are charged to cost units; fixed costs are written off to the profit and loss account as period costs (Drury, 2006; Blocher et al., 2005). Sections a) and b) below show the marginal and absorption costing income statements respectively for H Ltd, which manufactures and sells a single product, for the years ending 2006 and 2007. It is assumed that the company uses the first-in-first-out (FIFO) method for valuing inventories. In addition, it is assumed that the company employs a single overhead absorption rate each year based on budgeted units, and that actual units exactly equalled budgeted units in both years.
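As a sketch of the absorption mechanics, the single overhead absorption rate described above divides budgeted overhead by budgeted volume; the figures below are hypothetical.

    # Single overhead absorption rate (OAR), hypothetical figures
    budgeted_overhead = 700_000   # total cost centre overhead for the year (£)
    budgeted_units = 100_000      # budgeted production volume (units)
    oar = budgeted_overhead / budgeted_units
    print(oar)  # overhead charged to each unit produced: £7 per unit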

Marginal Costing
H Ltd Income Statement (Marginal Costing)

                                          2006 (£'000)   2007 (£'000)
Sales Revenue                                     3000           3600
Cost of Sales:
  Opening Stock                                      0            400
  Production Cost (W1, W2)                         700            500
  Variable Marketing and Admin                    1000           1200
  Cost of Goods Available for Sale                1700           2100
  Less: Ending Inventory (W3, W4)                 (200)          (100)
                                                  1500           2000
Contribution Margin                               1500           1600
Less Fixed Costs:
  Marketing and Admin                              400            400
  Production Overheads                             700            700
                                                  1100           1100
Operating Profit                                   400            500

Absorption Costing

H Ltd Income Statement (Absorption Costing)

                                          2006 (£'000)   2007 (£'000)
Sales                                             3000           3600
Cost of Sales:
  Beginning Inventory                                0            400
  Production Cost (W5, W6)                        1400           1200
  Less: Ending Inventory (W7, W8)                 (400)          (240)
                                                  1000           1360
Gross Profit                                      2000           2240
Marketing and Admin Expenses:
  Fixed                                            400            400
  Variable                                        1000           1200
                                                  1400           1600
Operating Profit                                   600            640

Reconciliation of Net Income under Absorption and Marginal Costing

                                                   2006 (£'000)   2007 (£'000)
Absorption operating profit                                 600            640
Less: Fixed overhead cost in ending inventory (W9)         (200)          (140)
Marginal costing net income                                 400            500

Under marginal costing, inventories of finished goods and work in progress are valued at variable cost only. By contrast, absorption costing values stocks of finished goods and work in progress at variable cost plus an absorbed amount for fixed production overheads (ACCA, 2006; Lucey, 2002). In the case of H Ltd, under marginal costing only variable costs are included in the ending inventory figure, which results in a 2006 profit of £400,000. Absorption costing, on the other hand, includes an additional £200,000 of fixed overhead in the 2006 ending inventory, so the absorption operating profit is overstated by £200,000 in 2006. In like manner, the operating profit under absorption costing is overstated by £140,000 in 2007 due to the inclusion of £140,000 of fixed overhead cost in that year's ending inventory figure. To reconcile the two profit figures, we may either subtract the fixed overhead included in ending inventory from the absorption costing operating profit to arrive at the marginal costing operating profit, or add it to the marginal costing operating profit to arrive at the absorption costing operating profit.
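
The reconciliation is a one-line adjustment; the Python sketch below reproduces it using the figures from the statements above (in £'000).

```python
# Reconciliation of absorption and marginal costing profit (figures in £'000,
# taken from the H Ltd statements above).
absorption_profit = {2006: 600, 2007: 640}
fixed_overhead_in_ending_inventory = {2006: 200, 2007: 140}  # per working W9

for year in (2006, 2007):
    marginal_profit = absorption_profit[year] - fixed_overhead_in_ending_inventory[year]
    print(f"{year}: marginal costing profit = {marginal_profit}")  # 400, then 500
```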

Stock Build-ups

Stock build-ups may result from using absorption costing for performance measurement purposes because inventory is valued at both fixed and variable costs. Firstly, profit is overstated: absorption costing enables income manipulation because, when inventory increases, fixed costs of the current year can be deferred to later years, overstating current net income. The financial statements then fail to present results fairly, which affects the decisions of their users. Secondly, maintaining high levels of inventory may result in obsolescence, and hence in declines in future profitability as the inventory loses value (Blocher et al., 2005; Storey, 2002).

Advantages of Absorption Costing and Marginal Costing

According to ACCA (2006) the following arguments have been advanced for using absorption costing:

It is necessary to include fixed overhead in stock values for financial statements. Routine cost accounting using absorption costing produces stock values which include a share of fixed overhead. On this argument, financial statements prepared using absorption costing give a true and fair view of the actual results of the company's operations.
For a small jobbing business, overhead allotment is the only practicable way of obtaining job costs for estimating and profit analysis.
Analysis of under/over-absorbed overhead is useful to identify inefficient utilisation of production resources.

ACCA (2006) also identifies a number of arguments in favour of marginal costing. Preparation of routine cost accounting statements using marginal costing is considered more informative to management for the following reasons:

Contribution per unit represents a direct measure of how profit and volume relate; profit per unit is a misleading figure.
Build-up or run-down of stocks of finished goods does not distort the comparison of operating profit statements. Closing inventory is valued only at variable cost per unit, so the profit reported when there is closing inventory is the same as when there is none, enabling operating profit statements to be compared over time.
Unlike absorption costing, marginal costing avoids the arbitrary apportionment of fixed costs, which can result in misleading product cost comparisons.
Bibliography
ACCA (2006). Paper 2.4 Financial Management and Control: Study Text 2006/2007. www.kaplanfoulslynch.com
Blocher, E., Chen, K., Cokins, G., and Lin, T. (2005). Cost Management: A Strategic Emphasis. 3rd Edition. McGraw-Hill.
Drury, C. (2004). Management and Cost Accounting. 6th Edition. Thomson Learning, London.
Lucey, T. (2002). Costing. 6th Edition. Continuum.
Storey, P. (2002). Introduction to Cost and Management Accounting. Palgrave Macmillan.

Non-audit Services Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

“Recent expansion of nonaudit services by public accounting firms has caused some to question whether auditors who provide nonaudit services to audit clients can remain independent of their clients”

Introduction

The increasing level of fraud and scandal in the corporate sector has resulted in an upsurge in regulation of audit firms, whose independence is called into question due to the non-audit services they offer to their audit clients (IOSCO, 2007). Many public accounting firms provide such services to their clients because of convenience, their existing knowledge of the clients' financial statements, and the time saved by not dealing with audit and non-audit services separately (Muir, 2014). However, financial statement users often perceive such services as impairing the auditor's independence (Al-Ajmi and Saudagaran, 2011). Different views exist about the impact of providing non-audit services to audit clients: they may have a negative effect (Quick and Rasmussen, 2015), a positive effect (Wang and Hay, 2013) or no effect on the auditor's independence (Jenkins and Krawczyk, 2001). As such, this essay will explore whether the provision of non-audit services affects auditors' independence.

Definition and Role of Non-audit Services

Adeyemi and Olowookere (2012) regard non-audit services as any services provided by an auditor other than their core audit function. These services may include bookkeeping (Jenkins and Krawczyk, 2001), management consultancy (ICAEW, 2015), tax advisory services (PwC, 2014), human resource consultancy (APB, 2004) and others. Jenkins and Krawczyk (2001) found that bookkeeping has a negative impact on perceptions of auditor independence, while management consultancy and tax advisory services have a positive impact. The differences arise from an expectation gap between auditing professionals and financial statement users (Jenkins and Krawczyk, 2001).

From a marketing perspective, organisations providing additional value to their customers beyond their core service are considered highly competitive and tend to be more successful than their competitors (Hoffman, 2009). That is exactly what audit firms strive for when they offer additional services to their clients in the hope of strengthening the relationship with them (Ismail, Hasnah, Ibrahim and Isa, 2006). However, critics object to the income received from non-audit services because its impact on the objectivity of the auditor has long been considered a potential threat to the auditing process and the financial system as a whole (Adeyemi and Olowookere, 2012). Okaro and Okafor (2009) pointed out that an audit firm auditing its own work is not regarded as independent, and the objectivity of its work may be questioned at any point by financial statement users. To avoid such criticism from their stakeholders, audit firms need to be particular about their audit quality, which is considered high when stakeholders can be assured that there is no uncertainty or ambiguity in the financial statements prepared by management (Krishan, Zhang and Sami, 2005).

Negative Impact on Auditor’s Independence
Threat to Audit Quality

The work of an audit firm is to act as an investment guide, assisting in clients' valuation and in predicting bankruptcy (Salehi, 2009a). Research suggests that there is a strong relationship between the credibility of the statements produced by an audit firm and the investment decisions taken by the client (Salehi, 2009a). Therefore, the economic development of the client is often dependent upon the credibility of the documents prepared by the audit firm, which depict the client's financial standing (Wahdan et al., 2005). Sori and Karbhari (2006) believe that auditor independence may be affected by this economic bonding between the auditor and the client. Under increasing pressure from the client for consultancy on investment decisions, the auditor may unintentionally neglect the quality of the actual audit services.

Gwilliam (2010) mentioned a classic example of audit failure: Ernst & Young's audit of ERF, a UK truck manufacturing company. In that case, the provision of non-audit services impaired the audit quality to such an extent that the firm faced a series of lawsuits. Part of the case concerned the company accountant's fabrication of VAT returns so that repayments could be obtained from Customs and Excise. Moreover, the audit team did not work on the VAT separately; they relied upon the figures received from the VAT specialists. This compromise in the quality of the audit services, resulting from the intrusion of additional services, negatively affected the independence of Ernst & Young.

Threat due to the Provision of Joint Services

Another problem arises when the audit and non-audit services are provided in conjunction with each other, whereby the focus on the core service may be lost (Sori and Karbhari, 2006). Swanger and Chewning (2001) recommended a solution to this issue: the personnel performing the audit and non-audit services should be separate. Regulatory authorities, however, believe that it would be difficult to track performance if this solution were implemented, and hence that audit firms should be banned from providing any additional services to their clients (Chadbourne and Parke, 2003). Additionally, the Securities and Exchange Commission adopted rules limiting the compensation audit firms may derive from joint services to their clients (Chadbourne and Parke, 2003).

Threat of Higher Non-audit Fee

Research indicates that auditor independence is adversely affected when the fee paid for non-audit services is high compared with that for audit services (Frankel, Johnson and Nelson, 2002). Due to this threat, the Securities and Exchange Commission devised rules enforcing the disclosure of all fees paid to auditors by their clients (Chadbourne and Parke, 2003). Chen, Elder and Liu (2005) found a negative relationship between non-audit services and the degree to which the client accepted the auditor's recommendations. This implies that more extensive additional services result in a lower likelihood of acceptance by the client, because of the equally high fees attached to them (Reynolds, Deis and Francis, 2004). It may therefore prove hazardous to the audit firm's independence if it attempts to introduce even more extensive non-audit services, further complicating the legal requirements it faces.

Threat from Relationship with Management

Perhaps the greatest detrimental effect of non-audit services on auditor independence relates to the relationship between the auditor and client management and the way it affects the audit approach (Gwilliam, Teng and Marnet, 2014). Despite its economic dependence on its clients, an audit firm's independence is greatly strengthened where there is less competition to serve those clients (Quick and Rasmussen, 2015).

Positive Impact on Auditor’s Independence
Strengthening Audit Quality

Wang and Hay (2013) provided evidence of a positive relationship between the provision of non-audit services and auditor independence, indicating that these additional services help audit firms distinguish themselves from their competitors by demonstrating their uniqueness to clients. Some authors support this claim, arguing that the auditor's objectivity is strengthened by non-audit services because they help the auditor form a better understanding of the client (Jenkins and Krawczyk, 2001). Proponents of this view explain that audit quality is in fact enhanced by the provision of non-audit services, because the auditors are then able to develop a better understanding of their clients' industry, competitive position, strategies, business model and the risks they face (Ernst & Young, 2013). Gwilliam, Teng and Marnet (2014) mentioned that, because of economies of scope, the joint provision of audit and non-audit services has economic benefits for both the auditor and the client, mainly through knowledge spillovers; limiting audit firms from providing non-audit services would therefore result in economic inefficiency.

Ernst & Young (2013), for example, takes advantage of its non-audit services through knowledge spillovers; i.e. it uses the financial information gained from auditing its clients to provide advisory and consultancy services to the same clients related to their investment decisions, recruitment, strategic direction and other such internal matters.

While there are concerns that clients pay higher fees when they opt for joint provision of both types of services (Frankel, Johnson and Nelson, 2002), another school of thought directs financial statement users to first compare the relative usage of audit and non-audit services before drawing any such conclusions (Ezzamel, Gwilliam and Holland, 2002). This implies that firms paying higher fees may simply be using more non-audit services than audit services.

An example of the positive effect of non-audit services can be gauged from the recent guidelines of the Financial Reporting Council (FRC, 2015), which introduced revised Auditing Standards ensuring that auditors can obtain consultancy and advice regarding the provision of non-audit services. Along with explaining its regulations, the FRC also claims to provide guidance to audit firms on how to use these supplementary services to their advantage while remaining within the ethical code of conduct. Even under pressure from a client regarding non-audit services, the auditor must assure its stakeholders that it produces completely transparent financial statements and must not become involved in suspicious practices, as in the KPMG case, where the firm's accountants were suspected of involvement in tax dodging, which the firm eventually had to admit publicly. KPMG then avoided the lawsuits by paying a huge penalty and accepting the conditions imposed by the US Justice Department (Gwilliam, Teng and Marnet, 2014).

Positive Reputation Effects

Supporters of non-audit services do not dispute the laws related to these services; they in fact believe that if the services are provided with appropriate measures to safeguard auditor independence, they will prove favourable for both the auditor and the client (Ernst & Young, 2013). Advocates of this viewpoint also found that the income received from providing non-audit services helps enhance the auditor's reputational capital, the firm's goodwill in the market (Wang and Hay, 2013). Thus, to sustain their goodwill, audit firms would refrain from surrendering to their clients' demands. Evidence from economic models suggests that audit firms may be willing to forgo short-term increases in earnings from non-independent behaviour in anticipation of building a better reputation in the long run, leading to higher economic returns (Gwilliam, Teng and Marnet, 2014). Firms would, therefore, abide by the rules, as these prove a powerful safeguard against any violation of independence.

Enhancement in Audit Training

Some researchers believe that if auditing personnel are involved in providing non-audit services, they will be unable to perform their audit tasks in a complex business environment (Sori and Karbhari, 2006). On the contrary, proponents of non-audit services argue that by performing these additional services, junior auditors and audit trainees learn many skills which help them become more competent accountants, favourably impacting the audit firm's independence and audit quality (Gwilliam, Teng and Marnet, 2014).

No Impact on Auditor’s Independence

Some researchers believe that there is no relationship between the provision of non-audit services and auditor independence (Jenkins and Krawczyk, 2001). Reviewing 20 years of literature, Salehi (2009b) did not find enough evidence that investors are concerned about non-audit services. Quick and Rasmussen (2009) also discovered a lack of evidence supporting the claim that non-audit services impair auditor independence. Tepalagul and Lin's (2014) study revealed that providing consultancy services to audit clients does not materially affect the perceptions of financial statement users regarding the auditor's credibility and independence; in fact, it helps enhance the client organisation's internal control systems.

Conclusion

There are many reasons why an auditor's reliability and independence may be compromised, one of which is often said to be the additional non-audit services provided by audit firms to their clients. Some researchers believe that these services threaten audit quality and independence through the joint provision of both service types, higher non-audit fees and the relationship with management. Others believe that these services positively influence auditor independence, whereby audit quality is strengthened and the audit firm enjoys better reputational capital and enhanced audit training. Still other researchers find non-audit services to have no impact on auditor independence. Numerous examples of firms support each of the three viewpoints; much depends upon the strategic moves by which the auditor strives to safeguard its independence and the reliability of its work. The doubts financial statement users have about auditors' performance can be addressed through standardised processes and the transparency of information provided by audit firms.

References
APB (2004). Ethical Standards 5: Non-Audit Services Provided to Audit Clients. The Auditing Practices Board. [Online] Available at: https://www.frc.org.uk/Our-Work/Publications/APB/ES-5-Non-audit-services-provided-to-audit-clients.pdf
Adeyemi, S.B., and Olowookere, J.K. (2012). Non-audit Services and Auditor Independence – Investors’ Perspective in Nigeria. Business and Management Review, Vol. 2, No. 5, pp. 89-97.
Al-Ajmi, J., and Saudagaran, S. (2011). Perceptions of Auditors and Financial-Statement Users Regarding Auditor Independence in Bahrain. Managerial Auditing Journal, Vol. 26, No. 2, pp. 130-160.
Chadbourne and Parke (2003). SEC Adopts Final Rules on Auditor Independence. [Online] Available at: http://www.chadbourne.com/files/Publication/40905c7b-de76-481d-a771-124300cc04ba/Presentation/PublicationAttachment/891d9f5a-b897-44ed-a141-004f2d922405/SECAdoptsFinalRulesonAuditorIndependence.pdf
Chen, K.Y., Elder, R.J., and Liu, J.L. (2005). Auditor Independence, Audit Quality and Auditor-Client Negotiation Outcomes: Some Evidence from Taiwan. Journal of Contemporary Accounting & Economics, Vol. 1, No. 2, pp. 119-146.
Ernst & Young (2013). Q&A on Non-audit Services. Point of View: Our Perspective on Issues of Concern. [Online] Available at: http://www.ey.com/Publication/vwLUAssets/EY-qa-on-non-audit-services-march2013/$FILE/EY-qa-on-non-audit-services-march2013.pdf
Ezzamel, M., Gwilliam, D., and Holland, K. (2002). The Relationship between Categories of Non-audit Services and Audit Fees: Evidence from the UK. International Journal of Auditing, Vol. 6, No. 1, pp. 13-35.
Frankel, R.M., Johnson, M.F., and Nelson, K.K. (2002). The Relation between Auditors’ Fees for Non-audit Services and Earnings Management. The Accounting Review: Supplement 2002, Vol. 77, No. s-1, pp. 71-105.
FRC (2015). FRC’s Work to Enhance Justifiable Confidence in audit through Implementation of the EU Audit Regulation and Directive. [Online] Available at: https://www.frc.org.uk/News-and-Events/FRC-Press/Press/2015/September/FRC-s-work-to-enhance-justifiable-confidence-in-au.aspx
Gwilliam, D. (2010). Trucking on: Audit in the Real World? (Man V Freightliner and Ernst & Young). Journal of Professional Negligence, Vol. 26, No. 4, pp. 180-193.
Gwilliam, D., Teng, C.M., and Marnet, O. (2014). How does Joint Provision of Audit and Non-audit Services Affect Audit Quality and Independence? A Review. Chartered Accountants’ Trustees Limited. [Online] Available at: http://www.icaew.com/en/products/audit-and-assurance-publications/~/media/481bd2be6ac7414cb4248996d259f8f5.ashx
Hoffman, K.D. (2009). Services Marketing: Concepts, Strategies & Cases. Third International Edition. Mason: South-Western.
ICAEW (2015). The Provision of Non-audit Services to Audit Clients. [Online] Available at: http://www.icaew.com/en/technical/ethics/auditor-independence/provision-of-non-audit-services-to-audit-clients
Ismail, I., Hasnah, H., Ibrahim, D.N., and Isa, S.M. (2006). Managerial Auditing Journal, Vol. 21, No. 7, pp. 738-756.
IOSCO (2007). A Survey of the Regulation of Non-Audit Services Provided by Auditors to Audited Companies – Summary Report. [Online] Available at: https://www.iosco.org/library/pubdocs/pdf/IOSCOPD231.pdf
Jenkins, J. G., and Krawczyk, K. (2001). The Influence of Nonaudit Services on Perceptions of Auditor Independence. Journal of Applied Business Research, Vol. 17, No. 3, pp. 73-78.
Krishan, J., Zhang, Y., and Sami, H. (2005). Does the Provision of Non-audit Services affect Investor Perceptions of Auditor Independence? Auditing: A Journal of Practice & Theory, Vol. 24, No. 2, pp. 111-135.
Muir, S. (2014). The Provision of Non-Audit Services to Audit Clients – Still a Difficult Circle to Square. CAPITA Asset Services. [Online] Available at: http://www.capitaassetservices.com/assets/media/SS14384_Non_audit_services_article-v4.pdf
Okaro, S.C., and Okafor, G.O. (2009). Stemming the Tide of Audit Failures in Nigeria. ICAN Students’ Journal January/March, Vol. 13, No. 1, pp. 11-17.
PwC (2014). EU Audit Reform: Providing Non-audit Services. [Online] Available at: http://www.pwc.co.uk/assets/pdf/pwc-uk-eu-audit-reform-client-briefing-nas-7-aug-2014.pdf
Quick, R., and Rasmussen, B.W. (2009). Auditor Independence and the Provision of Non-audit Services: Perceptions by German Investors. International Journal of Accounting, Vol. 13, No. 1, pp. 141-162.
Quick, R., and Rasmussen, B.W. (2015). An Experimental Analysis of the Effects of Non-audit Services on Auditor Independence in Appearance in the European Union: Evidence from Germany. Journal of International Financial Management & Accounting, Vol. 26, No. 2, pp. 150-187.
Reynolds, J.K., Deis, D.R., and Francis, J.R. (2004). Professional Service Fees and Auditor Objectivity. Auditing: A Journal of Practice & Theory, Vol. 23, No. 1, pp. 29-52.
Salehi, M. (2009a). Non-audit Service and Audit Independence: Evidence from Iran. International Journal of Business and Management, Vol. 4, No. 2, pp. 142-152.
Salehi, M. (2009b). In the Name of Independence: With Regard to Practicing Non-audit Services by External Auditors. International Business Research, Vol. 2, No. 2, pp. 137-147.
Sori, Z.M., and Karbhari, Y. (2006). Audit, Non-Audit Services and Auditor Independence. Staff Paper Universiti Putra Malaysia. [Online] Available at: http://www.researchgate.net/publication/237379279_Audit_Non-Audit_Services_and_Auditor_Independence
Swanger, S.L., and Chewning, E.G. (2001). The Effect of Internal Audit Outsourcing on Financial Analysts’ Perception of External Auditor Independence. Auditing: A Journal of Practice and Theory, Vol. 20, No. 2, pp. 115-129.
Tepalagul, N., and Lin, L. (2014). Auditor Independence and Auditor Quality: A Literature Review. Journal of Accounting, Auditing & Finance, Vol. 3, No. 1, pp. 101-121.
Wahdan, M.A., Spronck, P.S., Ali, H.F., Vaassen, E.V., Herik, H.J. (2005). Auditing in Egypt: A Study of Challenges, Problems and Possibility of an Automatic Formulation of the Auditor’s Report. [Online] Available at: http://ilk.uvt.nl/~pspronck/pubs/Wahdan2005c.pdf
Wang, S.W., and Hay, D. (2013). Auditor Independence in New Zealand: Further Evidence on the Role of Non-audit Services. Accounting and Management Information Systems, Vol. 12, No. 2, pp. 235-262.

Financial Ratio Analysis Essay

This work was produced by one of our professional writers as a learning aid to help you with your studies

Financial statements are useful because they can be used to predict future indicators for a firm through financial ratio analysis. From an investor's perspective, financial statement analysis aims at predicting the future profitability and viability of a company, while from the management's point of view ratio analysis is important because it helps anticipate the future conditions in which the firm should expect to operate and facilitates strategic decision-making (Brigham and Houston 2007, p. 77).

Profitability analysis

Harry's Hamsters Limited (HHL) experienced growth in its profitability from 2007 to 2008; however, its net income reduced significantly during 2009. The return on equity (ROE) was 4.24 percent in 2007, increased to 14.68 percent in 2008 and decreased back to 5.10 percent in 2009. Similarly, the return on assets (ROA) also initially increased and later declined in 2009; the decline was sharper than that of the ROE, as the 2009 ROA of 1.73 percent is lower than the 2.08 percent of 2007. The ROE comprises two main components: the return on net operating assets (RNOA) and the return on debt (ROD). The RNOA for HHL has also deteriorated, decreasing from 16.61 percent in 2008 to 5.08 percent in 2009; it is used to gauge the overall performance of HHL's management. The ROD component of the ROE has likewise deteriorated, from 13.68 percent in 2008 to negative 3.32 percent in 2009 (Kemsley 2009, pp. 12-16).
The ROCE was highest in 2008, at an estimated 11.39 percent. This implies that the capital employed by HHL yielded high returns before the expansion period and that the company was significantly profitable. The considerable decline to 4.82 percent in 2009 could be unfavourable for investors; however, as the company has not sold its shares to the public, a temporary reduction in this ratio is not a major concern for the current owners.
The operating profit margin for HHL initially increased from 10 percent in 2007 to 17.45 percent in 2008; however, the company reported a much lower margin of 8.53 percent in 2009. The decline in HHL's operating profit margin is largely attributable to the increase in costs associated with the expansion of the business. The operating margin is expected to recover over the next year, assuming that the new operations become profitable as sales increase. The cost of goods sold has increased in absolute terms, but the overall gross profit margin has improved from 35 percent in 2007 to 42.01 percent in 2009. This implies that the company is managing its supplier relations effectively and has kept control over the costs of buying hamsters for breeding, while operating costs have increased due to the low sales activity of the new operations.
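
A minimal Python sketch of how these profitability ratios are computed follows; the input figures are placeholders, since HHL's underlying financial statements are not reproduced in this essay.

```python
# Profitability ratios as used in the analysis above. The inputs would come
# from HHL's financial statements, which are not reproduced here.
def profitability_ratios(net_income, equity, total_assets,
                         operating_profit, gross_profit, sales):
    return {
        "ROE": net_income / equity,
        "ROA": net_income / total_assets,
        "operating profit margin": operating_profit / sales,
        "gross profit margin": gross_profit / sales,
    }

# Hypothetical example: £8,500 net income on £200,000 equity gives ROE of 4.25%.
example = profitability_ratios(net_income=8_500, equity=200_000,
                               total_assets=410_000, operating_profit=10_000,
                               gross_profit=35_000, sales=100_000)
print({name: f"{value:.2%}" for name, value in example.items()})
```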

Liquidity analysis

The current ratio of HHL remains above the minimum threshold of one and is currently 1.22; historically, the ratio has remained between 2.73 and 3.25 times. However, the quick ratio for the company reveals serious concerns, as it has decreased from 1.67 in 2008 to 0.22 in 2009. The low quick ratio implies that a considerable portion of the company's current assets is tied up in inventory (Bragg 2007, pp. 14-16). This could also mean that HHL might be unable to sell the hamsters and that sales might be suffering. The company must increase its working capital to meet its near-term current liabilities and remain solvent (Brigham and Houston 2007, p. 42).
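
The two liquidity measures differ only in their treatment of inventory, as the sketch below illustrates; the input figures are hypothetical, chosen only to be consistent with HHL's reported 2009 ratios of 1.22 and 0.22.

```python
# Current vs quick ratio: the quick ratio strips out inventory, so a large gap
# between the two signals that current assets are tied up in stock.
def liquidity_ratios(current_assets, inventory, current_liabilities):
    return {
        "current ratio": current_assets / current_liabilities,
        "quick ratio": (current_assets - inventory) / current_liabilities,
    }

# Hypothetical figures (£) consistent with HHL's 2009 position:
print(liquidity_ratios(current_assets=122_000, inventory=100_000,
                       current_liabilities=100_000))
# -> {'current ratio': 1.22, 'quick ratio': 0.22}
```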

Efficiency analysis

The firm's efficiency has not necessarily decreased during the last year; an analysis of the efficiency ratios suggests a trend different from that seen in the profitability and liquidity ratios. The inventory turnover has slightly deteriorated, from 3.00 in 2007 to 2.89 in 2009, correspondingly lengthening the days' inventory on hand from 121.67 to 126.35 over the same period. The long inventory holding period suggests that the company needs to improve its liquidity position to maintain its efficiency, and should aim to shorten its inventory holding period significantly (Brigham and Ehrhardt 2008, pp. 57-62). The days of accounts receivable have reduced from 45.63 in 2007 to 40.05 in 2009, while the days of accounts payable have reduced even more drastically, from 40.56 to 28.08. The operating asset turnover for HHL has deteriorated considerably, from 0.87 in 2007 to 0.60 in 2009, owing to the long inventory holding period and the quick payment of accounts payable.
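
These efficiency measures follow from a handful of standard turnover formulas; the sketch below states them in Python, again with placeholder inputs rather than HHL's actual figures.

```python
# Standard efficiency ratio formulas used in the analysis above.
def efficiency_ratios(cogs, inventory, receivables, payables,
                      sales, operating_assets):
    inventory_turnover = cogs / inventory
    return {
        "inventory turnover": inventory_turnover,
        "days inventory on hand": 365 / inventory_turnover,
        "days receivables": receivables / (sales / 365),
        "days payables": payables / (cogs / 365),
        "operating asset turnover": sales / operating_assets,
    }

# e.g. a turnover of 3.00 implies 365 / 3.00 = 121.67 days of inventory on hand,
# matching the 2007 figure quoted above (other inputs are placeholders).
ratios = efficiency_ratios(cogs=300, inventory=100, receivables=12.5,
                           payables=11.1, sales=100, operating_assets=115)
print(f"{ratios['days inventory on hand']:.2f} days")  # 121.67 days
```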

Capital structure analysis

The capital structure has changed significantly over the past two years, as HHL has increased its financial leverage and is using considerable debt to finance its expansion activities. The debt ratio of the firm has increased from 0.47 in 2007 to 0.60 in 2009, implying that HHL now funds 60 percent of its assets through debt (Berry 2006, pp. 68-71). The interest coverage ratio had improved considerably to 4.29 in 2008, but it has since deteriorated to 1.89, raising additional concerns for the banks. The ROD for the company has reduced considerably but remains positive, implying that the current level of financial leverage is generating additional returns for the company. Operating cash flows (OCFs) remain negative, which is typical of young firms experiencing high growth, but HHL's ability to raise additional financing is limited; the negative OCFs therefore raise serious concerns for the bank's management.
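
The two gearing measures used here can likewise be stated in a few lines; the inputs below are hypothetical, chosen only to be consistent with HHL's reported 2009 debt ratio of 0.60 and interest coverage of 1.89.

```python
# Capital structure (gearing) ratios from the analysis above.
def capital_structure_ratios(total_debt, total_assets, ebit, interest_expense):
    return {
        "debt ratio": total_debt / total_assets,       # 0.60 => 60% debt-funded
        "interest coverage": ebit / interest_expense,  # times interest earned
    }

# Hypothetical figures (£) consistent with HHL's 2009 position:
print(capital_structure_ratios(total_debt=300_000, total_assets=500_000,
                               ebit=18_900, interest_expense=10_000))
# -> {'debt ratio': 0.6, 'interest coverage': 1.89}
```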

Report to credit committee
Analysis for reasons of results

HHL has a long-term debt facility of £0.45 million and has also drawn an overdraft of about £35,000 on its current facility. The company performed exceptionally well during 2008, which recently led to an increase in its debt facility from £0.275 million to £0.45 million. The recent financial results revealed a tightening credit position during 2009, which raised concerns regarding the company's excess usage of the overdraft facility. Recent communication with the company reveals that it is facing liquidity problems due to its ambitious expansion programme; however, the problem can be solved, depending on the ability of the management to realise the seriousness of the situation (Madura 2006, pp. 17-32).
The company is running an overdraft without any immediate plan for repaying this short-term borrowing. The overdraft is being used to fund working capital needs which the company did not anticipate during its expansion into southern England. The success or failure of the new operations is yet to be seen, and the position will only become clear by next year. The current assets largely comprise the company's inventory, while the inventory cycle is long and the stock is not in a position to be liquidated at short notice. The company needs to introduce additional capital in order to solve its working capital problems.
The working capital position of HHL could also improve through an increase in the days of accounts payable or a reduction in the inventory cycle, if possible (Myers 1984, pp. 126-128). However, both options seem unlikely, leading us to prescribe alternative solutions. The company has seen a deterioration in its profitability ratios, which has reduced its ability to pay the interest commitments on the outstanding loan. However, the company still maintains an interest coverage ratio of 1.89 and should be able to regain its position once the new operations become profitable.
The efficiency ratios of the firm have remained relatively stable, with a slight decrease in inventory turnover, an improvement in accounts receivable turnover and a significant drop in operating asset turnover. The company maintains a high debt ratio, with about 60 percent of its assets funded using debt; however, this is typical of most firms in an initial expansion phase.
The company remains committed to making profits but has not considered raising outside capital by going public in the near future; the only way to maintain its current pace of growth will be either through an injection of personal equity or through an offering of company stock to the public (Ronen and Yaari 2007). The owners have invested most of their life savings in the business, and the company cannot realistically raise any further internal financing.

Recommendations regarding bank arrangements

The credit committee is recommended to raise concerns regarding the current liquidity position of the company and to prepare a schedule for the repayment of the overdraft amount over the next six months. The company is expected to recover from the current situation during the next year, but it is important to remain cautious until the sales position appears to improve. Applying a degree of pressure on the management should also clearly communicate the bank's position to the firm (Gibson 2009, pp. 212-216). The intention is to educate the company's management about the gravity of the situation and to ensure that it is able to recover smoothly from the liquidity crunch, while at the same time minimising the bank's exposure to the business risk HHL is facing.
The Managing Director of HHL has been consistent in maintaining regular contact with the bank; we therefore need to educate him about the possible solutions for recovering from the credit crunch faced by the company. The recommended solutions include a consolidation of the business before considering any further expansion projects, a reduction in the days' inventory on hand, an increase in the days of accounts payable, the retention of profits in the business with no dividend payments over the next quarters, an injection of equity from any other available sources, an increase in collateral to support the bank's claims, and a phasing out of the bank overdraft over the next six months as sales revenues are realised (Harvard Business School 2006, pp. 3-12).

Recommendations to management about improving finances of the company

Mr. Michael,
Thank you for your quick response regarding the overdraft issue. We have analysed the situation faced by HHL based on the recent financial statements and the qualitative information received during our recent correspondence. It is understood that your company has recently undergone a major expansion and that the short-term impacts are apparent in the financial results in the form of the anticipated lower profitability. The concern raised by the bank is not directly related to the profitability of your company; rather, we remain concerned about the liquidity position of HHL in the months to follow (Bissessur 2008, pp. 142-146).
The understanding between the bank and the company was that the expansion would be fully funded by the increase in the loan facility. This increase in the loan was to support both the fixed investment in the expansion project and the working capital needs of HHL. However, the actual expansion investment has exceeded the anticipated amounts and the company is facing a severe liquidity crunch that needs to be resolved.
The credit committee is concerned about the profitability of the expansion project and is not prepared to enhance the overdraft limit until the latest results for the company become available. HHL will have to solve this liquidity crunch independently, either through an injection of equity to meet the increased working capital requirements or by raising additional external capital. The company's intention to continue with its expansion projects can best be facilitated through a public listing to raise additional capital (Hill and Jones 2009, pp. 28-29).
The bank requires the company to repay the entire overdraft drawn, in instalments over the next six months. This payment schedule has been drafted after careful consideration of your firm's credit history with the bank; in usual circumstances we would have required immediate repayment of the whole overdraft. Moreover, it must be understood that this correction is in the best interest of your company, as it serves to underline the gravity of the situation faced by HHL.
A large proportion of the current assets held by HHL is tied up in inventory, and the company has no cash reserves available to pay its maturing current liabilities, including the bank's interest payments. It is important to understand that the company would have had to file for bankruptcy if the current overdraft were not available. It is therefore a very serious concern which should be resolved as soon as possible (Capon 1990, p. 1145).
The company can adopt some emergency measures to improve its cash position immediately, including delaying payments to creditors as far as possible without significantly harming supplier relations, recovering accounts receivable more quickly without significantly harming the sales position, and selling ready inventory immediately at a cash payment discount (David 2006; Ebert and Griffin 2005). Moreover, the company must not withdraw any retained earnings in the form of dividends until the liquidity position is resolved.
Waiting for your response,
Nick Cameron

Bibliography

Berry, A. (2006). Accounting in a Business Context. Brighton: Cengage Learning.
Bissessur, S. (2008). Earnings Quality and Earnings Management: The Role of Accounting Accruals. Rosenberg Publishers.
Bragg, S. (2007). Business Ratios and Formulas: A Comprehensive Guide. New Jersey: John Wiley and Sons.
Brigham, E., and Ehrhardt, M. (2008). Financial Management: Theory and Practice. Mason: Thomson Higher Education.
Brigham, E., and Houston, J. (2007). Fundamentals of Financial Management. Mason: Thomson Publishing.
Capon, N., et al. (1990). Determinants of Financial Performance: A Meta-Analysis. Management Science, Vol. 36 (10), pp. 1143-1159.
David, F. (2006). Strategic Management: Concepts and Cases. 10th ed. Hong Kong: Pearson Education.
Dominguez, K. (2006). Exchange Rate Exposure. Journal of International Economics, 68 (1), pp. 188-218.
Ebert, R., and Griffin, R. (2005). Business Essentials. Prentice Hall.
Finnerty, J. (2007). Project Financing: Asset-Based Financial Engineering. New Jersey: John Wiley and Sons.
Gibson, C. (2009). Financial Reporting and Analysis: Using Financial Accounting Information. Mason: Cengage Learning.
Harvard Business School (2006). Essentials of Strategy. Boston: Harvard Business School Press.
Helfert, E. (2001). Financial Analysis: Tools and Techniques, a Guide for Managers. New York: McGraw-Hill.
Hill, C., and Jones, G. (2009). Strategic Management Theory: An Integrated Approach. Mason: Cengage Learning.
Kemsley, D. (2009). Financial Accounting Seminar: Practical Equity and Credit Analysis. New Orleans: Tulane University.
Kumar, K., and Dissel, V. (1996). Sustainable Collaboration: Managing Conflict and Cooperation in Systems. Journal of Information Management, 20 (3), pp. 279-300.
Madura, J. (2006). Introduction to Business. Mason: Thomson Publishing.
McDonald, B., and Morris, M. (1984). The Statistical Validity of the Ratio Method in Financial Analysis: An Empirical Examination. Journal of Business Finance and Accounting, 11 (1), pp. 89-104.
Myers, S. (1984). Finance Theory and Finance Strategy. Interfaces, 14 (1), pp. 126-137.
Ronen, J., and Yaari, V. (2007). Earnings Management: Emerging Insights in Theory, Practice and Research. New York: Springer.