Development and Popularity of the Keyboard Concerto


Describe the development of the keyboard concerto from c.1710-1790, and assess why the form became so popular with both composers and public.

This essay explores the development of the keyboard concerto during the 18th century, considering its precursors, its social and economic context, and the advent of the piano. By exploring the work of key composers during the 18th century, it will be shown how musical and social shifts created an environment in which enduring, popular and technically adventurous piano concertos could emerge.

Early Concertos

Concertos are typically defined as instrumental works where a smaller group (in a concerto grosso) or soloist (in a solo concerto) contrasts against the sonority of a larger grouping. This technique was used in orchestration during the 17th century in works such as canzonas (Grout 1988: 473), with the concerto form emerging towards the end of the 17th century. Possibly the most influential composers of early concertos were Corelli, Torelli and Vivaldi. Wellesz and Sternfield (1973: 435) trace the emergence of the early concerto form through these three composers.

Corelli’s twelve Opus 6 concerti grossi were written at the end of the 17th century using a structure consisting of a somewhat random alternation of slow and fast movements. Movements were ritornello-based (a ritornello is like a refrain), with alternating tutti and concertino passages showing limited decoration or exploration of thematic material.

Torelli, composing at the turn of the century, wrote concerti grossi and solo concerti. He established the three-movement (fast-slow-fast) structure that was widely adopted. Torelli also explored the use of contrasting thematic elements within concertos and increased the complexity of solo lines.

Vivaldi, writing in the early 18th century, refined the form, with more exploration of thematic contrasts, although Kolneder (1986b: 307-8) argues that Vivaldi’s material is perhaps better described as motifs than themes.

Although these three composers were key to the emergence of the concerto form, their instrumentation focused on strings. Vivaldi wrote some flute and bassoon concerti, and orchestras would typically include a continuo keyboard part, but the first composers to use solo keyboard in concertos were Bach, Handel and Babell.

The First Keyboard Concertos

There is debate over which piece of music qualifies as the first keyboard concerto. Handel wrote the first organ concertos, with a set of six published in 1738, but used a concerto-like structure very much earlier, in his cantata ‘Il trionfo del tempo e del disinganno’ of 1707, contrasting the organ with the orchestra in a ritornello structure.

Bach’s Brandenburg Concerto no. 5, composed around 1720, is widely held to be the first harpsichord concerto, and develops the concept of the virtuoso soloist, featuring an extensive solo harpsichord cadenza towards the end of the first movement.

However, recent research suggests that, even earlier than this, William Babell was writing concerted movements for harpsichord. The dates of composition are uncertain, but appear to be at least as early as 1718, and possibly five or six years prior to that (Holman 2003).

Handel’s work, in addition to developing the keyboard concerto, provides interesting insights into the nature of performance and developments in amateur music-making at the time. Handel had moved to London, where he spent most of his adult life, in 1712, establishing himself as something of a celebrity. He initially found success with Italian-style opera, but the wane in the popularity of the form caused him to switch to oratorios. The virtuoso castrati, who had played a major role in opera, were not appropriate for oratorios, where virtuoso performance was considered not to be in the spirit of the work. By composing organ concerti to be performed alongside the oratorios, Handel preserved an element of virtuoso performance popular with audiences, and as one of the leading organists of his day, he was able to showcase his skills through these works.

As the English organ had no pedals, music written for it transferred easily to the harpsichord, and Handel’s publisher could promote his second set of organ concerti as ‘for harpsichord or organ’, broadening its appeal (Rochester 1997).

Mid-Century Developments

The popularity of the Baroque concerto may have hindered the development of the concerto form. Wellesz and Sternfield argue that ‘even such original composers as Sammartini and C. P. E. Bach could not rid their minds of Baroque preconceptions’ (1973: 434).

C P E Bach regularly used the Baroque structure, with a number of tuttis punctuating solo passages in the ritornello style, but was innovative in other respects: his device of running one movement into another is more often associated with 19th century music.

Wellesz and Sternfield establish three main elements where there is a clear differentiation in style between Classical and Baroque concerto forms: tonality, form and co-ordination of musical elements (1973: 435-6).

Classical concerto style develops the concept of opposing tonalities, placing tonic and dominant against each other, while the Baroque style, though often using modulation, maintains more stability.

In the Baroque concerto, exposition and development are often combined, while in the Classical era there is clearer demarcation, pointing towards sonata rather than ritornello form.

The Baroque form entwines contrapuntal elements over a more independent bassline, while the Classical form prefers all elements – including harmony, melody, orchestration and rhythm – to be held together within the same overall plan.

Also key to the development of the keyboard concerto was the emergence of the piano. The prototype instrument was developed by Bartolomeo Cristofori in the final years of the 17th century and called the gravicembalo col piano e forte, meaning harpsichord with soft and loud, although the dulcimer, where strings are hit by hammers, was more of an inspiration than the plucked harpsichord. This gave scope to develop a keyboard instrument with greater dynamic versatility. However, composers were initially sceptical. In 1736, Gottfried Silbermann invited J S Bach to try one of his instruments. Bach was critical, but Silbermann worked to improve his piano, and Bach subsequently acted as an intermediary in its sales.

The new instrument also found success in Britain. During the 18th century, Britain, and especially London, was cosmopolitan: Handel had had great success, and records show that many musicians from the continent made Britain home. Britain offered an environment of relative political stability compared with many areas of Europe. There was a keen appreciation of music among the upper classes, and a growing middle class with money to spend on leisure pursuits – including music.

However, in 1740 there was only one piano in the country. In 1756, the Seven Years War resulted in an exodus from Saxony to Britain, and among the émigrés was a group of harpsichord makers, one of whom, Zumpe, began to make pianos and invented the square piano. It had advantages over the harpsichord and over other types of piano, which were a similar shape to the harpsichord: it was quicker and cheaper to manufacture, and it remained popular until the middle of the next century.

Johann Christian Bach, son of J S and younger brother of C P E, arrived in London in 1762. He developed a range of commercial interests, and became Zumpe’s London agent, providing an incentive to write material to show the instrument to its best advantage. He had other business interests too: on arrival in London, he shared lodgings with Carl Abel, also a German composer. They developed a partnership running subscription concerts, which proved hugely popular until after J C Bach’s death in 1782, and had a stake in the Hanover Square Rooms, which they used as a venue for their concerts.

Johann Christian had been a pupil of his older brother Carl Philipp Emanuel, but it was the younger brother who was the more influential on the development of the concerto form, particularly with regard to exposition themes. He often used a triadic primary theme and a more cantabile secondary theme, suggesting elements of sonata form, although ritornello style is still evident.

J C Bach wrote around 40 keyboard concertos between 1763 and 1777 (Grout 1987: 560). Midway through this period, dating from 1770, are the Opus 7 concertos: ‘Sei concerti per il cembalo o piano e forte’ (six concertos for harpsichord or piano). The title itself is significant. Harpsichord manufacture was still on the increase in the 1770s, but the instrument was soon to be overtaken in popularity by the square piano, and Bach was the first to use the instrument for public performance (Grout 1987: 562). Grout suggests that the E flat major concerto, no. 5 of the set, has significant structural similarities to Mozart’s K488 (Piano Concerto No. 23 in A major), with a similar combination of Baroque ritornello structure and sonata form, contrasting keys and thematic material.

While Johann Christian’s work goes some way to realising the Classical concerto form, it was Mozart who pushed the form forward to create a precedent for concerto composition in subsequent centuries:

Mozart’s concertos are incomparable. Not even the symphonies reveal such wealth of invention, such breadth and vigor of conception, such insight and resource in the working out of musical ideas.

(Grout 1987: 614)

Mozart’s Piano Concertos

Mozart’s move to Vienna from Salzburg in 1781 heralds musical developments and reflects social changes. On 9 May 1781, he wrote to his father, ‘I am no longer so unfortunate as to be in Salzburg service’ (Mersmann 1938: 161): he had been frustrated by the limited opportunities of his employment at court. The joy of leaving Salzburg for Vienna seems to have been musically inspiring, and the next few years were prolific, not least in the composition of piano concertos: Mozart wrote 12 between 1784 and 1786.

The influence of J C Bach on Mozart was significant. The two had met in London in 1764, when Mozart was still a boy. In 1772, Mozart created his first three piano concertos by rearranging three of J C Bach’s sonatas. Beyond the concerto structure, the detail of Mozart’s music suggests Bach’s influence: his subtle ornamentation and clever use of suspensions and ambiguities of tonality also characterise J C Bach’s work.

Mozart’s use of keys is particularly innovative: in the first movement of the A major Piano Concerto K488, the development section incorporates a passage of dialogue between the winds and a larger grouping of piano and strings, modulating through E minor at bar 156, C major at bar 160, A minor at bar 164 and then through F major at bar 166 to D minor at bar 168. The more obvious, related tonalities for a work in A major would be D and E major, the subdominant and dominant keys, and F# minor, the relative minor key. This type of harmonic device gives a strong sense of departure from the safety and stability of the home key, making its eventual return in the recapitulation stronger and more satisfying.

This passage also shows examples of Mozart’s innovative orchestration: the small group-large group contrast of earlier concertos becomes a three-way interchange, with piano, winds and strings forming three groups which are united and contrasted in a range of combinations.

Conclusion

Mozart’s innovations took the keyboard concerto to a new level, and give some indication of why the form became so popular with composers and the public.

For the composer, working patterns were changing, away from the often creatively restrictive nature of patronage to an environment of more freedom, with composers having more control of performances as events – J C Bach is a particularly good example of this. With many composers also being gifted performers, who could attract audiences by way of their virtuosity, the concerto offered scope to write exciting, challenging passages within the context of a major work, giving their performances real impact.

Yet the economic reality was that income depended on the success of concerts and the ability to please a fickle audience. Mozart was clearly aware of the need to please a range of Viennese listeners, writing of the three concertos written for the 1782-3 season:

There are passages here and there from which connoisseurs alone can derive satisfaction, but they are written so that the non-connoisseurs cannot fail to be pleased even if they don’t know why. (Quoted by Steinberg 1998: 279)

Taking the above into account, it is surely not insignificant that Mozart’s piano concertos are, 200 years after their composition, enjoyed by a huge audience and also highly regarded by musicologists.

The development of the keyboard concerto in the 18th century demonstrates how changes in the social landscape and innovations in instrument technology planted the seeds of a vibrant music industry. This helped set up the piano concerto to become an indispensable ingredient in the concert hall and a contributing factor in the phenomenon of the virtuoso in the 19th century and beyond.

Role of UN and WTO In Regulating Global Media


The issue of politics and the regulation of media is not a new debate. Discussions around communications have a long history, and governance and policy around telecommunications is a well-established topic at both national and international levels (Fylverboom, 2011). Even before national governments realised that international mechanisms were required to manage global issues such as trade or the environment, many realised that the benefits of international telecommunications would only be apparent if there were shared rules of the game in terms of governing how national networks would connect with each other (Fylverboom, 2011).

This study looks at the role of two key intergovernmental organisations and their role in regulating the global media. The United Nations (UN) and World Trade Organisation (WTO) both exert considerable influence on the world stage and it can be argued that both are influential in the regulation of media around the globe.

The United Nations was established in 1945 to bring together the nations of the world to promote peace and security (United Nations, 2015). It is involved in missions around the world ranging from peacekeeping and sustainable development to fighting terrorism and addressing climate change.

In relation to global media, UNESCO is an arm of the UN with a focus on education, cultural understanding and promoting freedom of expression and democracy. It states that it has a specific mandate to “foster freedom of expression and to promote the free flow of ideas by word and image”. The UNESCO webpage states that the organisation “works to foster free, independent and pluralistic media in print, broadcast and online. Media development in this mode enhances freedom of expression, and it contributes to peace, sustainability, poverty eradication and human rights” (UNESCO, 2015).

Interestingly, UNESCO itself has entered the debate around media regulation, although more with a view on the contrast between media self-regulation and state regulation than on the issue of media regulation by NGOs. A UNESCO report by Pudephatt (2011) provides a useful description of a media environment that supports freedom of expression, stating: “it will be a diverse media environment, part public, part private and part community; a plurality of different media outlets; and a system that is broadly self-regulating with the exception of broadcast media (where spectrum has been limited and a regulatory body allocates bandwidth)” (Pudephatt, 2011, p10).

Pudephatt debates a central question around media regulation, which is whether it threatens or supports democracy. Some argue that minimal state interference in the media is necessary for a media environment that supports democracy, whilst others argue that state intervention is required to promote a pluralist and diverse media (Pudephatt, 2011). A good example to support this argument would be a democratic state in which a small number of wealthy individuals bought up most of the media outlets and used this near monopoly to promote one political or economic view, with the result that democratic debate became stifled. Pudephatt (2011) makes the point that in the past many states have looked to prevent a company from occupying a dominant market share of the media in order to ensure freedom of expression.

It can certainly be argued that many arms of the UN do oppose most forms of media regulation. Human rights instruments such as the UN Charter, the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights can all be seen as tools to support rather than suppress freedom of expression.

O’Siochru et al. (2002) suggest that there have been three broad phases in the development of global media regulation. The initial, pre-UN phase was driven by the economic and industrial revolution and accommodated the societal concerns of the time. A second phase came with the emergence of the UN and closer international relations. The increased presence of developing nations within the UN and its bodies, and calls for societal and human rights, saw freedom of expression placed on the UN agenda, and through bodies such as UNESCO it encouraged greater freedom of expression in both national and international media (O’Siochru et al., 2002). The third phase is represented by a weakening of the UN role in global media governance, and one in which big business, with a focus on the commercial rewards of global media, looks to undercut national regulation and looks to free trade proponents such as the WTO to support this.

The WTO was established with a narrower agenda. It is an international body established to promote free trade through the abolition of tariffs and other trade barriers. It is closely linked with the ideas of globalisation and faces criticisms that it is too powerful, indifferent to workers’ rights, biased towards the rich and lacking in democratic accountability (BBC, 2012). These criticisms can arguably be extended to the WTO’s relationship with global media, particularly when consideration is given to the expansion of powerful media conglomerates which benefit from the trade liberalisation ethos of the WTO.

The WTO has a great deal of power in relation to global economic matters, and its influence on global media has also grown as media organisations become bigger business and increasingly commercialised (Leicester University, 2015). One of the key concerns for the WTO is promoting free trade and addressing the pre-existing barriers that national sovereignty can put in the way of global media expansion.

Hackett and Zhao (2005) argue that the WTO has become an organisation which “straddles key areas of communication and is set to extend its mandate further” (p212). The WTO appears to be as supportive of the liberalisation of media and telecommunications as it is of the liberalisation of other areas of trade. With its rulings on formal trade complaints enforceable in international law, it is increasingly being seen by the largest media organisations as an ally as they look to expand into new markets (Mansell and Raboy, 2011). The difficulties that companies such as Google have in establishing a presence in states where there is more rigid censorship serve as one example of this.

Global broadcasting has been happening for decades, yet whilst organisations such as the WTO have long had success in securing international agreements which liberalised trade, cultural industries have often been afforded greater protection by governments, and certain restrictions have been placed on the importation of cultural industry products and media services (Mansell and Raboy, 2011).

One consequence of the globalisation of the media, however, has been an increased commercialisation of the industry (Mansell and Raboy, 2011). Essentially, global media is now big business and there are potentially huge profits to be made if the largest media corporations can overcome national media regulation and expand into new territories. As Mansell and Raboy (2011) state, “global markets in broadcasting are commercial, even when they involve trade among national broadcasters” (p55). In many individual nations, national broadcasters have seen their market share decline.

The WTO’s influence can be seen in the growing dominance of a small number of market leaders in the media industry, and much of the context for this can be found in the US media. In 1984 around 50 corporations controlled the vast majority of news media in the US; this in itself seemed a small number, but by 2004, with similar criteria applied, this number had reduced to five huge organisations controlling over 90 per cent of mass media in the US (Mansell and Raboy, 2011). These organisations are looking to expand their influence globally, and the liberalisation of trade under the WTO is enabling this to happen.

The dominance of a small number of corporations, the majority of which share a similar world view, brings us back to the question as to whether regulation of the media is a good or bad thing in terms of promoting democracy and freedom of expression. As these huge corporations use their market power as leverage to reduce traditional national-interest public broadcasting, there are questions as to whether broadcasting in the public interest is still happening. Mansell and Raboy (2011) suggest that there is no global forum with sufficient influence to tackle these questions; in essence, the liberalising power of the WTO is overcoming national attempts to regulate media and also the efforts of organisations such as UNESCO to promote a diverse global media.

One of the key media developments for both the UN and the WTO to address has been the growth of the Internet over the last two decades. Whereas global media had always been subject to some form of governance, the Internet has been portrayed as outside of the reach of regulation due to its global and decentralised nature (Fylverboom, 2011). Its ever-expanding nature and its versatile technical platform have left it for some time outside of global media governance, but there is some evidence that this is beginning to change, and the UN and the WTO have both had at least some involvement in this.

In 2003 a UN World Summit on the Information Society (WSIS) debated issues around global governance of the Internet. It looked at the status quo at the time, where national governments were largely regulating the Internet within their own boundaries, and discussed whether some form of global governance was possible (Mansell and Raboy, 2011). It was clear at this point that both democratic and authoritarian governments were taking steps to deny their citizens access to content that was seen as illegal or objectionable. The WSIS was ultimately unable to make much progress on this issue, finding that the conditions for access to the Internet would continue to be determined by the national government policies that were established and the effectiveness of their implementation (Mansell and Raboy, 2011). This suggests that the UN at least is limited in its capability to regulate some parts of digital media on a global scale. Whilst the Internet is a global tool, national governments so far are able to maintain a certain level of control over the level of access that their citizens have.

Unsurprisingly, the WTO has looked to extend its influence into the complex area of Internet regulation. One of the difficulties it faces is that the complexity of Internet regulation makes it an issue much larger than something that simply comes under the remit of free trade or trade liberalisation. The whole area of content regulation has to be addressed; the attitude of China, for example, towards Internet content from the democratic West is not something that can easily be resolved (Kong, 2002). There remains a possibility that the WTO may be asked to step into legal battles around freedom of trade related to the provision of Internet services. There has been discussion recently that companies such as Google might look to sue governments such as China’s for discrimination because of harsh web filtering conditions (World Trade Law, 2015). If this were to happen, the WTO’s role in Internet governance would expand rapidly.

Issues around Internet content make ideas of global regulation difficult; different cultures have different views on acceptable content. For the WTO, there are other issues around the Internet which it can more easily address. It may be able to coordinate regulation and standard-setting in areas such as data protection and enabling access to financial services (IP Watch, 2015).

A summary of the current position would be that the UN’s influence over global media regulation diminishes as the globalised nature of the Internet develops further, and as powerful media conglomerates exert influence to facilitate their own plans for future dominance. The WTO, with its commitment to free trade, has enabled some of these huge corporations to grow; the challenge it faces in future will be to resolve the ongoing issues between these media giants and the national governments which wish to maintain a level of control over the media access open to their citizens. Given its nature, the likelihood is that the WTO will support the media organisations; the question is whether individual governments will adhere to its decisions.

Bibliography

BBC (2012) World Trade Organisation [Online] Available: http://news.bbc.co.uk/1/hi/world/europe/country_profiles/2429503.stm

Fylverboom, M. (2011) The Power of Networks: Organizing the Global Politics of the Internet. London: Elgar

Hackett, R. and Zhao, Y. (2005) Democratizing Global Media. Oxford: Rowman and Littlefield

IP Watch (2015) Panels: WTO Could Play Crucial Role In Challenges Facing Global Digital Trade. [Online] Available: http://www.ip-watch.org/2015/10/08/panels-wto-could-play-crucial-role-in-challenges-facing-global-digital-trade/

Kong, Q. (2002) China and the World Trade Organization: A Legal Perspective. New Jersey: New Scientific

Mansell, R. and Raboy, M. (2011) The Handbook of Global Media and Communication Policy. Colchester: John Wiley and Sons

O Siochru, S., Girard, B. and Mahan, A. (2002) Global Media Governance: A Beginner’s Guide. Oxford: Rowman and Littlefield

Pudephatt, A. (2011) The Importance of Self-Regulation of the Media in Upholding Freedom of Expression

UNESCO (2015) Fostering Freedom of Expression [Online] Available: http://en.unesco.org/themes/fostering-freedom-expression

United Nations (2015) UN News Centre. [Online] Available: http://www.un.org/apps/news/story.asp?NewsID=52315#.ViaXlDZRHIW

World Trade Law (2015) Google, China and the WTO. [Online] Available: http://worldtradelaw.typepad.com/ielpblog/2010/01/google-china-and-the-wto.html

News Media & Popular Journalism


Does Popular Journalism Reach Out or Dumb Down?

The news media has a responsibility to be objective, a responsibility it is often criticised for overlooking. Likewise the mass media, with its huge audience, has an opportunity to educate – this is not to say that commercial television should fill its schedules with GCSE Bitesize revision programmes, but that there is much to learn through great writing, great acting and great comedy, and that these are popular art-forms. Most mass media products do not seize this opportunity. Instead, a trend of pandering to the (perceived) base pleasures of certain mass markets is, more often than not, apparent, a trend that can be seen to reinforce stereotypes – an idea I will explore in this essay.

The real question then is: does the media reach out by dumbing down, or does it pander and condescend to its audience? In answering this question I intend to examine the ascendance of reality television since the late 1990s and question why such programmes came to prominence, and to analyse the differing approaches of various news products in their selection and presentation of the news and how this can relate to the notion of dumbing down.

The often criticised emergence of reality television came about in the late 1990s as a way of cutting down production costs whilst increasing output. With its stylistic roots in the documentary format but based on the concept of reality TV, the docu-soap came to prominence, and notoriety, with the unexpected successes of shows such as Driving School, Airport and Fairground, which followed members of the public as they went about their jobs and their everyday lives. The ‘stars’ of the shows often went on to enjoy minor, short-lived celebrity status, releasing pop records and guesting on other shows.

The unexpected success of the programmes opened the door to mass production, and a spate of copycat shows flooded both terrestrial and subscription channels – Sky One was notable for its successful Ibiza Uncovered series, following holiday-makers in the hugely popular club-based Ibiza night-life, which spawned countless Uncovered sequels. This was a dream come true for broadcasters, who had stumbled upon the scheduler’s Holy Grail – a format that was cheap, popular, and quick to produce. The use of ‘real people’ cut out the roles – and the fees – of writer and actor, as well as the valuable production time taken up by the writing and rehearsing process. The shows also cut equipment costs by using natural lighting and a documentary-style single-camera format. Therefore high volumes of programmes could be churned out for little money, in little time.

The criticism which arose against the docu-soap phenomenon centred upon the flimsy content of the shows, the canonisation of trivial incidents, the lack of narrative, and the lack of any documentary-style insight into the lives of the protagonists. Many of the shows tended to make unwitting fools of their stars. Others would take the most trivial elements of their stars’ jobs – say, a routine check of an aeroplane toilet by a member of flight staff – and make it a central narrative of the show.

However, what was perhaps particularly galling was that all of the terrestrial channels pounced so fervently upon the fad. Of course any broadcaster has a lower end of entertainment, cheaper shows with lower production values than its flagship products, made quickly and cheaply to bulk out the schedule – but the docu-soap managed to find itself straddled across the channels in prime-time slots, as well as bulking out daytime schedules. For the BBC in particular, which has such a proud history of incendiary documentary film-making and social realism – this is the channel that screened Cathy Come Home (Ken Loach, 1966) – this seemed to reflect far too great a willingness to sacrifice standards of content.

But in their presentation of real people in their real lives, were the docu-soaps ‘reaching out’ to the viewing public? It could be argued that the shows reflected their audience, that they made stars out of the viewing public, turned everyday events into prime-time viewing, took genuine events from genuine lives and put them on screen, and thus reflected the social realities of their audiences to a greater degree than ever before.

However, the stars of these shows were not comic characters penned for a cheap sitcom; they were human beings, with pasts and families, tragedies, hopes, futures – but that’s not how they were presented. To the viewer, they were clowns and stooges, caricatures. The tools of the programmes may have been founded in reality, but the sum of the parts was as stripped down and simplistic as journalism can get. The plot of an episode of Airport: a member of staff going about his job. The point: mild amusement at his expense. And with the elimination of the creative process, the value of cheap, mild amusement at the expense of an unwitting stooge is hard to quantify. In truth, the shows had little more than stylistics in common with documentary.

And yet their effect is great. Whilst the docu-soap fad may have petered out, its influence can still be seen in the more recent popularity of reality antique and property make-over shows, and through its canonisation of members of the public it can even be seen to have paved the way for shows such as Big Brother and The X Factor, the new royalty of reality television.

This is a reality of the digital revolution. Products such as Freeview, Sky and ITV Digital compete partly on the promise of more channels with greater choice than their rivals. More channels means more shows must be put into production, and unless the company wishes to go bankrupt, that means lower production values, less experienced talent both on and off screen, and more copycat shows – antique shows, reality shows, re-runs, and repeat showings. This leads to less experienced people making cheaper shows, and spreading them over a wider array of channels.

So we can see that the dumbing down of commercial television in the wake of the digital revolution is rooted not in the value system of the entertainment industry but in the economic reality of it. Writing talent, acting talent, directing talent, production values – all these elements cost both money and time, and when cheaply and quickly produced products are just as popular, the talent becomes expendable.

So what about the broadcast news: has it also undergone a process of dumbing down? Firstly, it is important to remember the role of humanity in news reportage. For example, what constitutes a news ‘event’? Is a motorway pile-up the event of the crash or the aftermath of it? Is the ‘event’ of a political speech the content of the speech or the reaction to it? Journalism relies on journalists, who rely on their own skills of interpretation, and it is generally accepted that every potential news story is judged on a certain set of ‘news values’. One of the most recognised interpretations of these values was made by Johan Galtung and Mari Holmboe Ruge in 1965, who identified eleven distinct values.

This process is merely a reality of news reportage: not every event that happens in the world can be covered, and so events must be judged on their ‘importance’. However, problems can arise when this intangible ‘importance’ becomes linked not to theoretical values, but to the perceived values of a generalised target audience. In a Newsnight investigation into the process of selecting stories for news coverage (screened in October 1999, BBC 2), a journalist from the News Of The World told the film crew that a story about white youths dying from drugs sold to them by a black man was more likely to be reported by their newspaper than a story about black youths dying from drugs sold to them by a white man.

This is based on the assumption by the newspaper that their audience is not interested in the problems of drug culture affecting the black community, whereas the representation of non-whites selling drugs to white youths reinforces racial stereotypes, and as such is more appealing, less challenging, and provides a greater sales guarantee. This is not just dumbing down, it is systemised media bias; it is a news service that bases its reportage on reinforcing the stereotypical values of a massively generalised target audience in order to maintain circulation, and hence, profit. Even if the News Of The World have judged their audience correctly, it speaks of a worrying cycle of ignorance – if their audience members characterise non-whites as detrimental to whites, and their news reinforces this, how can they be expected to change their views?

So we can see that journalistic practice runs into serious problems when it considers its target audience in its methods of reportage. This is a particular problem for commercial television stations, who garner almost all their profits from advertising sales – sales which rely on the selling of a target market, one which is shared by the channel and its potential advertisers. What then happens if a certain news story does not appeal to a news product’s target market – is it tailored to be more attractive? Is it left out altogether? If the news product does not suit its target market, ratings drop, and advertisers pull out. ITV is a channel which survives on accessibility, so if its news is not accessible, does not reflect the tone and style of the rest of its programming, it risks losing viewers.

Let us examine the coverage of the first annual May Day protests in London. In May 2000 Trafalgar Square was occupied by members of the anti-globalisation protest group Reclaim the Streets, in protest against the practices of multinational corporations and the climate of brand power. As the demonstrations went on, a small minority of vandals along for the ride embarked on a low-scale wave of petty violence, which was denounced by RTS as contrary to their values.

RTS are a young political group, tapping into the youth culture trends of anti-capitalism and the deification of counterculture. Looking at the scenes of the protestors, they were young men and women, almost all in the 18-25 age group. The only terrestrial news channel to give any air-time to a member of Reclaim The Streets, or to even mention their name, was Channel 4 – the channel whose programming is aimed at the youth market to a greater degree than the others – in fact a channel which is, somewhat contradictorily, required to be ‘alternative’. The BBC news focused on the graffiti tagging of the Cenotaph, and ITV news focused on the small-scale vandalism and violence incited by a small minority of “protestors” who had crashed the party. Both news products characterised the protestors as ‘anarchists’ and ‘rioters’ (true of just a tiny minority). In this case, it is not hard to see how each news product’s target audience affected the reporting of the event. On the other hand, ‘Select’, an alternative music magazine, ran a 12-page special on the inspiration behind the protests and the issues at the heart of Reclaim the Streets, and interviewed popular protagonists of the anti-capitalist sub-culture – comedian Mark Thomas and theorist Naomi Klein.

This does not necessarily suggest a greater moral credibility on the part of ‘Select’, but simply that they were in a position to make such a report. The style and tone fitted in perfectly with their target market, and the piece also ran interviews with various alternative musicians, such as Zack de la Rocha of the politically outspoken anti-capitalist funk-rock group Rage Against The Machine. So whilst all of the terrestrial television news programmes can be seen to be dumbing down the event, it would be more accurate to say that they were catering their product to the perceived expectations of their target market, and ‘Select’ did exactly the same. It is hard to see the BBC devoting 10 minutes of a 30-minute broadcast to a history of anti-capitalist theory and demonstration, but on the other hand this is a channel that recently gave prime-time half-hour debates to the leaders of the three major political parties in the run-up to the general election. ‘Select’ gave comprehensive coverage to the history of RTS and the theory behind the demonstration, but they may not have given so many column inches to, for instance, a pro-hunting group. Their coverage of the May Day protests may have been more in-depth and comprehensive, but in the same way as the BBC and Channel 4, they covered what would sell.

So then we can see that ‘dumbing down’, within news reporting at least, perhaps has less to do with appealing to the lowest common denominator and more to do with appealing to a target audience. This can be seen to be a rather exclusive approach – appealing to a particular, and generalised, target audience excludes audience members who do not subscribe to the values of that target audience, and in this way we can see how popular news reinforces social stereotypes. It is, for instance, a rather galling assumption that a viewer of the BBC news is less interested in the motivations behind a political demonstration by a peaceful political group (which denounced the small-scale vandalism of a small minority as being contrary to their protest – at least it did when given air-time) than in a stereotyped representation of anarchic youths run amok.

Is Print Media Dead in the 21st Century?


With the emergence of digital media, the relevance of print media has been fiercely debated (Gomez, 2008; Leatherbarrow, 2012). The advocates of digital media supremacy bring to light the idea of the death of print media. In an attempt to persuade the public of the ultimate end of print newspapers, magazines, and books, the advocates present print media as fully outdated, expensive, and impractical (Anderson, 2014). What becomes evident from their pressure on the public is that they have initiated “a zero-sum game – print must die for digital to prevail” (Anderson, 2014, n.p.). This essay is aimed at discussing whether print media are really dead in the 21st century. Drawing on recent research evidence and authoritative opinions, the essay attempts to generate an in-depth analysis of this important issue.

Gomez (2008) asserts that print media (especially print books) continue to preserve popularity among the reading public because readers greatly appreciate how print media look and smell. Drawing parallels between people’s devotion to print media and patriotic feelings, Gomez (2008) poses a reasonable question: “how can books ever be replaced, let alone disappear?” (p.13). However, the author also claims that print media are significantly threatened by the widespread adoption of digital media and that the sales of print media are declining. Discussing the position of print media in the digital era, Hooper (2012) expresses the view that “reports of the ‘death of print’ have been greatly exaggerated” (n.p.). To support his opinion, Hooper (2012) mentions some examples of the increasing interest in print media. For instance, he claims that some sites and online services (e.g. Google, Moshi Monsters, and Net-A-Porter) have recently started to publish print magazines to attract new partners and customers and realise new strategic goals. Hooper (2012) also discusses the case of the famous Berlin magazine 032c. This magazine was created by Joerg Koch to advertise the website. However, the print magazine has acquired so much popularity among readers that the website was transformed into an archival repository. Moreover, as West (2009) specifies, many famous newspapers (e.g. The New York Times, Washington Post, Time, and The Guardian) are still published because “the quality of journalism produced by traditional print media is still well ahead of the combined might of all the bloggers that inhabit cyberspace” (n.p.). In the viewpoint of West (2009), digital media will not replace print media until the quality of digital media is increased. Likewise, Anderson (2014) mentions that even computational and scientific fields heavily rely on print media. For instance, in the field of medicine, print journals are considered crucial and reliable resources widely used by health care professionals.

Nossek, Adoni, and Nimrod (2015) conducted interesting research into print media reading in nine European countries. The countries chosen for the research were technologically similar, but culturally different. The evidence acquired clearly revealed that print media preserve their popularity in the 21st century: about half of the European respondents acknowledged that they read either print books or print newspapers. However, the findings of Zickuhr and Rainie (2014) and Desilver (2014) have shown a gradual displacement of print books by digital books. Despite these findings, Nossek, Adoni, and Nimrod (2015) claim that “this displacement, if actualized, will only be partial” (p.379). Although digital books are less expensive and more accessible than print books, the tradition of reading print books is too powerful (Liu, 2008). Moreover, in the viewpoint of Nossek, Adoni, and Nimrod (2015), readers greatly enjoy the design and artistic worth of print books. On the other hand, Nossek, Adoni, and Nimrod (2015) acknowledge that print newspapers are more likely to be displaced than print books. This is explained by two major factors: 1) print newspapers are less popular among readers than print books and 2) digital media provide readers with a range of new opportunities (e.g. socialisation, immediate access to national and international news, and co-creation of news) (Nossek, Adoni, and Nimrod, 2015). West (2009) acknowledges that some large newspapers will certainly fail to survive in the highly digital world because of the loss of monopoly.

While the mentioned reasons for displacement can hardly be considered disputable, print media outperform digital media in the depth of media coverage, the accuracy of information, and the diversity and quality of content (West, 2009; Nossek, Adoni, and Nimrod, 2015). With regard to the latter aspect, the articles published in print newspapers and magazines are written by professional journalists who not only produce grammatically correct writing, but also tend to discuss an issue or event from different perspectives, positions, and angles (West, 2009). In addition, Kitch (2009) specifies that people continue to view print media as crucial material objects which help them preserve memories of events. Adoni and Nossek (2001) also point out that those readers who are unable to develop digital skills certainly prefer print books to digital books. What the research of Nossek, Adoni, and Nimrod (2015) has brought to light is that “the majority of Internet users balance their time spent reading different media” (p.381). Actually, the choice of digital or print media depends on readers’ needs and purposes of reading (Liu, 2008). For instance, those people who attempt to receive authoritative and detailed information on certain events choose print newspapers or print books, while those people who want to satisfy their psychosocial needs or want to be entertained choose digital media. In view of the fact that digital media and print media endow readers with diverse kinds of experience (Liu, 2008; Hooper, 2012), it is wrong to reject either of the two. This is supported by the survey of trade magazine editors conducted by Leatherbarrow (2012). According to the survey findings, editors strongly believe that their print magazines benefit from online versions and that people of different ages, professions, experiences, and skills prefer different kinds of media. In the words of one respondent, “My sector has a traditional older, less technically-literate reader base. They spend 12 hours a day in their shops, and want to relax with a magazine they can hold, not in front of a screen” (Leatherbarrow, 2012, n.p.). What is evident from this particular testimony is that print texts and digital texts cannot be differentiated simply on the premise of their different formats. It is the difference in experience that matters (Catone, 2013).

However, as Richtel and Bosman (2011) acknowledge in their article, reading of print media is widespread not only among the older generation, but also among the young. Although parents are obsessed with digital devices and digital media, they attempt to inspire their children’s interest in reading print books. According to Richtel and Bosman (2011), parents hold the view that the experience of reading print books is unique and contributes much to the overall development of their children. This unique experience is explained by the fact that children establish emotional ties with print books (something which cannot be achieved with digital books). Through these emotional ties, they engage all five senses and acquire different skills. In addition to children and older people without appropriate digital skills, researchers and scholars also contribute much to the survival of print media. As Berger (2006) specifies, academic authors prefer print publications to digital publications. Print books can be sold, distributed among friends, relatives, colleagues, and students, and used for citations. In the process of writing an academic paper, scholars and students tend to rely heavily on print books because “online resources do not guarantee any longevity for citation as books and analog journals do” (Berger, 2006, p.152). This assertion is consistent with the findings of Ramirez (2003), who investigated the reading preferences of students from the National University of Mexico and found that 78 percent of students read and better understood print materials, while only 18 percent preferred digital materials. Even when students read a digital text, they cannot read it for more than two hours (Ramirez, 2003). According to Liu (2008), students tend to choose print media when a text or book is rather lengthy, when they need to investigate a specific issue or area in depth, and when they need to take notes.

What should be understood is that those who insist on the death of print media speak from the position of significant technological changes, fully disregarding the social aspects of print media reading. Griswold, Lenaghan, and Naffziger (2011) express the view that digital media “are not bringing about the death of reading, or a postprint age, or the disappearance of the book in ink-on-dead-trees form, but are changing the nature and type of reading experiences available” (p.31). Following this line of argument, it becomes evident that modern readers do not have to dismiss print media for the sake of digital media. Instead, they have an opportunity to choose among different types of media. Moreover, by bringing to light the debate about the death of print media and by comparing print media to digital media, authors, researchers, and scholars unintentionally revive interest in print media (Sutherland and Deegan, 2012). The debate has a great impact on people’s minds and makes them reconsider their attitudes to the issue of print media. When in 1999 the British Library microfilmed and then destroyed American newspapers published after 1850, this decision was negatively perceived by both the English and the international public (Chartier, 2004). As a result of this negative perception, American and English libraries were forced to stop destroying print newspapers and magazines. This particular example proves that people are not ready to easily reject print media, even though they widely read digital media. In the process of reading print and digital versions of the same text, readers use different methods and strategies of reading. Catone (2013) compares reading of digital media to watching a film version of a live performance. Those who understand the beauty and value of a print book certainly continue to invest in books to enrich their collections (Agresta, 2012).

The recent survey of English and American readers conducted by Publishing Technology (2015) has demonstrated that readers aged between 18 and 34 understand and highly appreciate the value of print books. According to the evidence acquired, 79 percent of American respondents and 64 percent of English respondents had read print books in the previous year. The research also found that the majority of English and American readers tended to buy their print books in bookstores instead of using Internet stores (e.g. Amazon). The findings of this survey and the studies mentioned above provide conclusive evidence that print media are still alive. Moreover, in the viewpoint of Josefowicz (2009), the myth about the death of print media is created by information junkies who use digital media and reject print media because they want to receive news quickly. However, Josefowicz (2009) claims that information junkies constitute a minor group of people. On the other hand, their voices are so loud that it may seem that the view of the death of print media is shared by the majority. In contrast to the research findings discussed in this essay, the opinions expressed by information junkies and digital media lovers are based on anecdotal evidence. Unquestionably, such evidence can hardly be considered trustworthy and reliable. As Josefowicz (2009) rightfully asserts, “the ‘end of print’ is a meme that has gained ascendancy in an environment of disruptive change in the communication ecology” (n.p.).

As the essay has clearly shown, print media are not dead in the 21st century. Despite the increase in reading digital media, the findings of the recent studies prove that people continue to read print newspapers, magazines, and books. In view of these findings, it is more appropriate to speak not about the death of print media, but about “the evolution of a new functional division of labour among print media and their digital equivalents” (Nossek, Adoni, and Nimrod, 2015, p.381). To satisfy their diverse needs, readers may successfully combine reading of print media and reading of digital media. The views of authors and critics mentioned in this essay reveal “the main reasons why printed publications are destined to survive” (West, 2009, n.p.).

Bibliography

Agresta, M. (2012). What will become of the paper book? Slate, 8 May. Available from: http://www.slate.com/articles/arts/design/2012/05/will_paper_books_exist_in_the_future_yes_but_they_ll_look_different_.single.html [Accessed 24 October, 2015]

Adoni, H. & Nossek, H. (2001). The new media consumers: Media convergence and the displacement effect. Communications. The European Journal of Communication Research, 26 (1), 59-83.

Anderson, K. (2014). Identity crisis – Does print need to die for online to flourish? The Scholarly Kitchen, 20 May. Available from: http://scholarlykitchen.sspnet.org/2014/05/20/identity-crisis-does-print-need-to-die-for-online-to-flourish/ [Accessed 24 October, 2015]

Berger, S. (2006). The future of publishing in the digital age. In: P. Messaris & L. Humphreys (Eds.), Digital media: Transformations in human communication (pp.147-158). New York: Peter Lang.

Catone, J. (2013). Why printed books will never die. Mashable, 16 January. Available from: http://mashable.com/2013/01/16/e-books-vs-print/#y7rqkMStoPqi [Accessed 24 October, 2015]

Chartier, R. (2004). Languages, books, and reading from the printed word to the digital text. Critical Inquiry, 133-152.

Desilver, D. (2014). Overall book readership stable but e-books becoming more popular. Pew Internet Research Project. Available from: http://www.pewresearch.org/fact-tank/2014/01/21/overall-book-readership-stable-but-e-books-becoming-more-popular/ [Accessed 22 October, 2015]

Gomez, J. (2008). Print is dead: Books in our digital age. Basingstoke: Palgrave Macmillan.

Griswold, W., Lenaghan, E., and Naffziger, M. (2011). Readers as audiences. In: V. Nightingale (Ed.), The handbook of media audiences (pp.19-40). Chichester: John Wiley & Sons.

Hooper, M. (2012). Who says print is dead? The Guardian, 3 June. Available from: http://www.theguardian.com/media/2012/jun/03/who-says-print-is-dead [Accessed 23 October, 2015]

Josefowicz, M. (2009). The fallacy of the ‘print is dead’ meme. Mediashift, 27 April. Available from: http://mediashift.org/2009/04/the-fallacy-of-the-print-is-dead-meme117/ [Accessed 24 October, 2015]

Kitch, C. (2009). The afterlife of print. Journalism, 10 (3), 340-342.

Leatherbarrow, T. (2012). Do trade magazines have a future? White Paper of WRP Agency. Available from: http://www.wpragency.co.uk/wp-content/uploads/2012/11/WPR-Whitepaper.pdf [Accessed 24 October, 2015]

Liu, Z. (2008). Paper to digital: Documents in the information age. Westport: Libraries Unlimited.

Nossek, H., Adoni, H., & Nimrod, G. (2015). Is print really dying? The state of print media use in Europe. International Journal of Communication, 9, 365-385.

Publishing Technology (2015). New research reveals print habits die hard with millennial readers. Available from: http://www.publishingtechnology.com/2015/03/new-research-reveals-print-habits-die-hard-with-millennial-readers/ [Accessed 24 October, 2015]

Ramirez, E. (2003). The impact of the Internet on the reading practices of a university community: The case of UNAM. World Library and Information Congress: 69th IFLA General Conference and Council, August 1-9, 2003. Berlin. Available from: http://archive.ifla.org/IV/ifla69/papers/019e-Ramirez.pdf [Accessed 24 October, 2015]

Richtel, M. & Bosman, J. (2011). For their children, many e-book fans insist on paper. The New York Times, 20 November. Available from: http://www.nytimes.com/2011/11/21/business/for-their-children-many-e-book-readers-insist-on-paper.html?_r=2& [Accessed 24 October, 2015]

Sutherland, K. & Deegan, M. (2012). Transferred illusions: Digital technology and the forms of print. Farnham: Ashgate Publishing.

West, W. (2009). Print media will survive. Mercatornet, 3 September. Available from: http://www.mercatornet.com/articles/view/print_media_will_survive/5735 [Accessed 24 October, 2015]

Zickuhr, K. & Rainie, L. (2014). A snapshot of reading in America in 2013, Pew Internet Research Project. Available from: http://www.pewinternet.org/2014/01/16/a-snapshot-of-reading-in-america-in-2013 [Accessed 22 October, 2015]

How has Social Media affected media regulation?

This work was produced by one of our professional writers as a learning aid to help you with your studies

Traditional media regulation is becoming significantly challenged by the user-centricity that is a feature of the contemporary media environment (Van Dijck, 2013). Social media means that users are able to exercise far greater control over the types of media that they wish to consume, and can also actively produce content (Vardeman-Winter & Place, 2015). The traditional approach to media regulation assumes a relatively small number of producers of media, coupled with a large number of consumers who are powerless to directly influence the content (Van Dijck, 2013). This means that the regulatory framework previously used, founded on a command and control model, is inappropriate for a situation in which a substantial proportion of users also produce content (Lievens & Valcke, 2013). Regulatory action in social media is typically focused upon disclosure of interest, protection of children, codes of practice and the prohibition of offensive material (Van Dijck, 2013). This will be investigated as follows. First, the impact of social media upon media regulation will be discussed. Secondly, the approaches to self-regulation will be considered. Thirdly, the challenge of educating users, which is necessary to achieve self-regulation, will be discussed. Finally, the challenges posed to greater regulation of the media will be considered.

The current model of media regulation has focused more upon the use of alternative regulatory instruments (ARIs). These are considered to be more effective in a fast-changing media environment. ARIs are defined as a collection of instruments, such as self- and co-regulation, and their prominence has increased as they have been referred to in media policy documents from the 1990s onwards (Lievens & Valcke, 2013). However, in practical terms there is less clarity about what is meant by these types of regulatory instruments (Van Dijck, 2013). There is a sense in which they involve the use of non-governmental players and stand as an alternative to the governmental approach (Lievens & Valcke, 2013). ARIs tend to refer to a regulatory framework that is distinct from the traditional form, and this tends to point towards self-regulation.

Self-regulation is often seen as a solution through which the freedom of the internet can be maintained alongside a desire to reduce the impact of legislative regulation (Van Dijck, 2013). This means that regulation is effectively enforced by a group of actors within the social media, without any influence emanating from outside the group (Lievens & Valcke, 2013). Given that social media users are also those who produce media content, there is an intuitive attraction to their being involved in the regulatory procedure (Fuchs et al., 2013). Furthermore, the users of media have traditionally been involved in the regulatory mechanism, such as through their representation in the bodies of public service broadcasters, or through audience research (Croteau & Hoynes, 2013). Self-regulation also provides an empowerment to the users of social media, which is consonant with their position in the social media universe (Lievens & Valcke, 2013). This allows the regulation of social media to be fitted to the features of its use.

Education is, however, a requirement for effective social media regulation, in order to ensure that the rights and responsibilities of using social media are understood (O’Keeffe & Clarke-Pearson, 2011). Relying on a site’s terms and conditions to set out what content is prohibited is not an effective means of educating users, as these are rarely read (Fuchs et al., 2013). The publicity that ensues when a social media user unwittingly commits a crime often has the effect of educating users. It has been noted, for example, that for many users of social media the intricacies of defamation may not be as widely appreciated as they are in the newspaper industry (O’Keeffe & Clarke-Pearson, 2011). There have thus been cases where people have been prosecuted for retweeting a defamatory statement, simply because it was not widely understood that broadcasting such information could be illegal regardless of its provenance (Campbell et al., 2014). However, this publicity then at least ensures that there is a wider appreciation of what constitutes defamation in such cases, and thus functions as a method of education (Fuchs et al., 2013). Furthermore, the extent to which self-regulation can apply to some of the key concerns of regulatory bodies, such as the protection of children or the removal of hate speech, may be challenged (Campbell et al., 2014). There is an argument that the greater consumer choice exercised in the case of social media should result in a reduced level of regulation, to take into account the role that the choice exercised by the user can play (Van Dijck, 2013). Consumers may thus be put in greater control of their own choices, but in order to do so they need to be aware of the dangers that can arise through a lack of knowledge of appropriate behaviour.

Education is more commonly provided as a result of the user’s inappropriate behaviour being corrected by the social media site (Lievens & Valcke, 2013). This means that where material is posted that concerns other viewers, it may be flagged as inappropriate, with the viewers being asked why they find it objectionable. The content is then reviewed by the regulatory body of the site, which can either approve or remove the content (Lievens & Valcke, 2013). This relies upon the users of the site to establish whether the material is likely to need regulating, rather than the site reviewing content individually (Van Dijck, 2013). A significant drawback of this method is that it is an ex post approach, allowing the material to remain online for as long as it takes to be reported (Lievens & Valcke, 2013). This means that where copyright is compromised or sensitive material is posted, the content remains public, allowing it to be copied (Buckingham & Willett, 2013). Such examples may be seen in cases where the rules are broken: by the time the posting is taken down on the original account, it is already too late and the information may have been reposted repeatedly (Lievens & Valcke, 2013).

This characteristic of social media regulation means that the regulation of material is significantly limited, as reporting content as offensive cannot prevent it from being broadcast in the first place (Lievens & Valcke, 2013). Such regulation does not extend as far as that applied to traditional media: stories that are entirely false, and would not be permitted in a newspaper, can be distributed freely through social media (Van Dijck, 2013). Although individuals may report them, they are often not removed unless they exhibit features that are against the terms of the use agreement (Baron, 2015). The process of reporting such content after it is published is therefore not a fully effective way to regulate content and, moreover, involves looser regulation than is generally accepted for journalistic standards (Lievens & Valcke, 2013). At the same time, censorship is not applied simply on the basis that the information presented may be false or misleading (Van Dijck, 2013). Although this model does tend to empower users, the extent to which it provides an effective model of regulation can be questioned, as, unlike traditional media, it cannot prevent false material from being published.

The AVMS Directive was published by the European Commission in 2007, complemented by a Communication on media literacy (Lievens & Valcke, 2013). It was suggested that the promotion of media literacy was a more appropriate approach than the imposition of advertising bans (Bertot et al., 2012). This has been explored particularly in cases where social media policies are used to develop employees’ approach to social media in governmental or corporate contexts (Lievens & Valcke, 2013). Internal social media policies are usually created, and advice given on how best they may be used to elicit consumer or citizen engagement. However, there are divisions between how social media is used in an official capacity and how employees use it as individuals, and these differences can undermine the effectiveness of such regulation (Bertot et al., 2012). This illustrates that the trend towards self-regulation is largely effective only in contexts where social media is already well understood by the user. For the majority of users, regulation is perhaps undermined by a lack of the education that has been argued to be essential for its effective use.

Despite the calls for greater regulation, resistance has come from the belief that social media presents significant economic opportunities. Barriers to the regulation of audiovisual content on sites such as YouTube have been seen as tantamount to reducing choice for viewers (Lievens & Valcke, 2013). Parallels are drawn with the highly regulated television broadcasting environment of the 1980s, which reduced the level of choice for viewers. Furthermore, the use of social media to promote products and services poses a number of challenges to the regulatory environment, in that it is not always easy to establish whether an individual is acting in a personal or a commercial capacity (Van Dijck, 2013). If an individual promotes a brand and does not conform to the regulation that governs advertising, the extent to which they may be liable for omission or exaggeration poses a regulatory challenge (Evans, 2012). For example, situations where an employee represents themselves as a consumer can undermine the validity of the media regulation (Evans, 2012).

This lack of regulation can thus have significant effects on the veracity of other media. In April 2013, a bomb was detonated near the finishing line of the Boston Marathon (Lievens & Valcke, 2013). Social media played a significant role in disseminating information about the bombing, much of which was accurate. However, there was also a range of misleading information that included significant factual errors. A tweet suggesting that an arrest had been made was retweeted 13,930 times and reported as fact by major news corporations (Lievens & Valcke, 2013). This is an example where the lack of regulation allowed assertions to be made, which could then circulate as fact without verification. Social media can thus perpetuate misinformation, and the absence of any regulation requiring users to broadcast only verified material exacerbates the problem (Dabbagh & Kitsantas, 2012).

A similar issue surrounding social media use is the potential for it to be used for bullying (Creech, 2013). For example, some individuals who have posted insensitive comments may find themselves receiving death threats, and in other contexts their home locations may be shared (Croteau & Hoynes, 2013). This means there is an apparent propensity of social media to enable a kind of mob rule. Unfortunately, because these situations escalate relatively quickly, the type of ex post regulation that is usually applied is ineffective, as it is impossible to challenge a fast-moving story that is repeated thousands of times (Jewell, 2013). This means that social media challenges the traditional gatekeeping process of journalism, but is less regulated, undermining the reliability of the information that is disseminated (Vardeman-Winter & Place, 2015).

A final key area in which social media is likely to pose significant challenges to the existing model of media regulation is its international nature (Van Dijck, 2013). Media regulation has previously operated on a national basis, so material deemed unsuitable for broadcast was easily prevented from dissemination. For example, allegations surrounding the royal family have often been prevented from dissemination in the UK, but are freely disseminated abroad. Social media allows such allegations to be freely disseminated (Lievens & Valcke, 2013). In many cases, traditional broadcasters can be restricted, even where they are situated abroad and are cable operators (Lievens & Valcke, 2013). Social media effectively undermines the potential for such restrictions to be enforced, meaning that its effect on the regulatory environment extends to undermining existing regulation that is organised on a national basis (Van Dijck, 2013). Social media thus not only challenges the reach of media regulation in terms of its nature; it also acts to undermine the effect of existing legislation.

In conclusion, social media has had a significant impact upon media regulation. It does not fit clearly into traditional models of regulation, and this undermines how such media may be regulated. Because it can blur the edges of different media types, in that it can provide news or advertising at the same time, it can also challenge regulatory frameworks based upon such media remaining discrete. Self-regulation is suited to the nature of the medium, but poses significant challenges to existing regulatory frameworks, as it does not prevent the dissemination of sensitive or false material; it simply allows it to be removed after the event. Social media also undermines the extent to which existing regulatory frameworks may be applied on a national basis, as any information that is disseminated is available globally. These features have effectively reduced the impact of regulation, and thus far the focus on self-regulation has done little to prevent the wholesale diminution of media regulation.

References

Baron, R. J. (2015). Professional self-regulation in a changing world: old problems need new approaches. JAMA, 313(18), pp.1807-1808.

Bertot, J. C., Jaeger, P. T., & Hansen, D. (2012). The impact of policies on government social media usage: Issues, challenges, and recommendations. Government Information Quarterly, 29(1), pp.30-40.

Buckingham, D., & Willett, R. (2013). Digital Generations: Children, young people, and the New Media. London: Routledge.

Campbell, K., Ellingson, D. A., Notbohm, M. A., & Gaynor, G. (2014). The SEC’s Regulation Fair Disclosure and Social Media. The CPA Journal, 84(11), pp.26-35.

Creech, K. C. (2013). Electronic Media Law and Regulation. London: Routledge.

Croteau, D., & Hoynes, W. (2013). Media/society: Industries, images, and audiences. London: Sage Publications.

Dabbagh, N., & Kitsantas, A. (2012). Personal Learning Environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. The Internet and Higher Education, 15(1), pp.3-8.

Evans, D. (2012). Social Media Marketing: An hour a day. London: John Wiley & Sons.

Fuchs, C., Boersma, K., Albrechtslund, A., & Sandoval, M. (2013). Internet and surveillance: The challenges of Web 2.0 and social media. London: Routledge.

Jewell, M. (2013). Self-regulation, teenagers and social media use: Inquiry into online behaviour and the influence of digital architecture. Available from: http://matt.pm/assets/self-regulation-social-media-use.pdf [Accessed 17 October, 2015]

Lievens, E. & Valcke, P. (2013). Regulatory trends in a social media context. In: M. E. Price, S. G. Verhulst, & L. Morgan (Eds.), Routledge Handbook of Media Law. London: Routledge, pp.557-580.

O’Keeffe, G. S., & Clarke-Pearson, K. (2011). The impact of social media on children, adolescents, and families. Pediatrics, 127(4), pp.800-804.

Van Dijck, J. (2013). The Culture of Connectivity: A critical history of social media. Oxford: Oxford University Press.

Vardeman-Winter, J., & Place, K. (2015). Public relations culture, social media, and regulation. Journal of Communication Management, 19(4), pp.19-38.

Example Media Essay – Greenpeace vs Lego / Shell

This work was produced by one of our professional writers as a learning aid to help you with your studies

Greenpeace Save the Arctic campaign – LEGO and Shell

Greenpeace has had environmental issues at the core of its mission since it was founded in 1971, when a small group set sail from Vancouver, Canada to witness nuclear testing (Greenpeace, 2014a). Now a large international organisation, Greenpeace has several main branches of environmental activism and campaigning. One of its major campaigns is ‘Save the Arctic’, which has been running for 15 years. The campaign is concerned with climate change in general and the shrinking Arctic, but also more specifically with the plans of oil companies to drill in the Arctic. According to Greenpeace, the harsh conditions and remoteness would mean “an oil spill would be almost impossible to deal with. It’s a catastrophe waiting to happen” (Greenpeace, 2014b). Climate change can be a nebulous and esoteric problem that the public feel increasingly helpless to do anything about (Nordhaus and Shellenberger, 2009), but by focussing on a specific aspect, with a specific enemy, Greenpeace are providing people with an avenue for tangible action and results.

Currently, the campaign targets the oil company Shell, but throughout its history it has run targeted campaigns against a number of oil companies. Increasingly, companies are coming under scrutiny for their environmental credibility as consumers become more aware of damaging practices and become more discerning with their purchasing power (Miles and Covin, 2000). Greenpeace previously used this knowledge in a successful campaign called ‘StopEsso’ that impacted the social credibility of ExxonMobil (Esso) and caused negative consumer perceptions about the company in regard to the issue of climate change (Gueterbock, 2004).

However, in its most recent ‘Save the Arctic’ campaign, Greenpeace tried another new tactic by targeting the toy company LEGO. LEGO has had a partnership with Shell since the 1960s that saw LEGO toy sets branded with the Shell logo distributed from Shell petrol stations in several countries. Instead of targeting Shell for its plans to drill in the Arctic, Greenpeace targeted LEGO for its partnership with Shell.

Oil companies are now well known for their poor environmental credibility, so environmental campaigns need new ways to bring attention to specific issues. LEGO is a much-beloved toy company, and Greenpeace hoped that by linking LEGO directly to Shell’s Arctic drilling plans they could damage LEGO’s environmental credibility. For a company that had not faced this kind of criticism before, the attention could potentially be very damaging (Cho et al., 2012), so Greenpeace hoped this would force them to end their partnership with Shell. This would further damage Shell by ending a lucrative partnership and denying them the credibility by association with a popular toy company.

Through its partnership with LEGO, Shell had reached a new audience by putting its logo in the hands of children and making it seem more family-friendly and caring (Greenpeace, 2014c). Greenpeace’s targeted campaign also helped them reach the new audience of children by making them an integral part of the campaign mission. Throughout the campaign, Greenpeace pointed to LEGO’s mission to “leave a better world for children”: a promise it is not fulfilling by supporting Shell.

Greenpeace’s campaign went beyond the rhetoric of securing the environment for our children’s future however; it actively used children in several of its marketing stunts. In one event, children built giant LEGO-block Arctic animals outside LEGO’s London headquarters. When justifying the use of children in their campaign, Greenpeace stated: “Children love the Arctic, and its unique wildlife like polar bears, narwhals, walrus and many other species that are completely dependent on the Arctic sea ice. They wouldn’t want to see them threatened.” (Greenpeace 2014c). When assessing the use of emotion in social campaigns aimed at engaging youth, Hirzalla and Van Zoonen (2010) identified the appeal to empathy with animals and identification of animals’ ‘coolness and cuddliness’ as key constructs. While appealing to children through the use of animals, Greenpeace also strengthened its message of saving the planet for future generations by using seemingly self-motivated children in its campaign.

Many of the tactics used in Greenpeace’s campaign against LEGO followed guerilla marketing principles. While traditional guerilla marketing campaigns aimed at selling products focus on the element of surprise and unconventional techniques, Greenpeace’s campaign style could be more closely compared to guerilla warfare, composed of a series of ambushes and sabotages (Creative Guerrilla Marketing, 2015). For example, a band of Greenpeace activists descended on a LEGO factory in the Czech Republic and decorated it with a Shell logo and an oil spill with giant unhappy minifigures (LEGO characters) cleaning it up. Later, activists appeared outside LEGO’s headquarters in Denmark with a series of giant bricks representing the signatures of petitioners to stop the partnership between LEGO and Shell. Greenpeace’s global reach and local bands of enthusiastic demonstrators allow it to run campaigns multinational companies can only dream of; they can produce targeted marketing stunts quickly and at little cost.

A related tactic used in the campaign is viral marketing. Again aimed at creating buzz at lower cost, viral marketing is “an Internet-based ‘word-of-mouth’ marketing technique” (Woerndl et al., 2008). Greenpeace had an online petition calling on LEGO to sever its connection with Shell that was easy to sign and share, providing a low barrier to participation for people who might want to join the campaign but not to go out and engage in guerilla activities. Its progress was also easily measured. Often, visible metrics of success can further increase the likelihood of a viral campaign being shared more widely as its credibility is established (Woerndl et al., 2008). For example, the number of hits on a YouTube video can influence the likelihood of someone watching and sharing the video.

In fact, the centrepiece of Greenpeace’s viral marketing campaign was a video. Ryan and Jones (2011) said: “Online video is so powerful because well-executed video can be incredibly engaging and entertaining, demands little effort to consume and packs a lot of information into a relatively short space of time in comparison to other media. It’s also incredibly easy to share.” Greenpeace’s video, launched at the start of the campaign, now has nearly seven million views on YouTube (Greenpeace, 2014d). It centres on a direct parody of LEGO’s recent smash-hit movie and its iconic song ‘Everything is Awesome’. The song is sung not in its original high-energy upbeat style, but as a slow lament, as an Arctic created out of LEGO slowly drowns under a tide of oil leaking from Shell’s drilling platforms.

The video is extremely evocative, showing Arctic animals and ways of life drowning, as well as eventually our way of life too. By constructing the set out of LEGO bricks and using the popular song from the movie, the focus is very much on LEGO, while also capitalising on its recent surge in popularity thanks to the movie. Emotional appeals in marketing are shown to be more effective in eliciting a response from viewers (Franzen, 1994). It can be a risky strategy to appeal to negative feelings, however, unless the ‘product’ being marketed offers a solution. Greenpeace’s encouragement to people to sign the petition and make LEGO end their partnership with Shell prevents the campaign from creating purely negative feelings that could work against viral potential by providing a concrete, actionable solution.

The campaign was launched at the end of June 2014. After two weeks of guerilla tactics and the launch of the video, LEGO at first seemed unwilling to change its position, stating that: “We expect that Shell lives up to their responsibilities wherever they operate and take appropriate action to any potential claims should this not be the case.” LEGO maintained that Greenpeace’s dispute was with Shell and not them. However, for Greenpeace, LEGO’s trust in the oil company’s responsibility was not enough, and the campaign intensified. Finally, in October 2014, LEGO announced that it would not renew its partnership with Shell (Vaughan, 2014a).

However, in LEGO’s statement on the termination of the partnership, it was still reserved in its messaging and maintained that it did not agree with Greenpeace’s tactics against them: “We do not want to be part of Greenpeace’s campaign and we will not comment any further on the campaign. We will continue to deliver creative and inspiring LEGO play experiences to children all over the world.” (LEGO, 2014).

The Greenpeace campaign attracted criticism for targeting LEGO specifically. Some individuals pointed to the hypocrisy of the focus on the dissolution of the partnership as a partway solution to Arctic drilling, considering that LEGO bricks are made of plastic, a by-product of oil (Skapinker, 2014). However, LEGO is currently searching for a sustainable alternative material for its bricks, and hopes to replace oil entirely by 2030 (Miel, 2014).

The narrow focus on targeting LEGO also drew criticism for its simplicity in not dealing with the larger issue of energy generation. Chris Rapley, former director of the Science Museum (who opened a gallery in partnership with Shell), said the campaign “might attract headlines and make them feel good, but does not address the real issues and will not deliver the changes we all need.” (Vaughan, 2014b) Additionally, it has been argued that we all use energy and products of oil in our everyday lives, so we are all ‘implicated’, and any action against individual companies is hypocritical (Skapinker, 2014). Both argue that oil companies are also those most heavily involved in renewable energy development, being more truly ‘energy’ companies than purely ‘oil’ companies.

However, a blogger for The Economist (identified just as M.S.) praised Greenpeace’s campaign, saying that just because we are all sinners does not mean we cannot pressure others to behave better, and it is just these sorts of campaigns that encouraged energy companies to invest in renewable energy research in the first place (S., 2014). M.S. also praised the tactics of Greenpeace’s campaign, saying it leveraged the weight of environmental credibility to produce a concrete result: “If Shell comes to fear that drilling in arctic waters will damage its brand and encourage other well-regarded companies to distance themselves from it, that may help dissuade it from further drilling.” The viral tactics of the campaign were lauded by M.S., who identified it as a breakthrough campaign for Greenpeace as they left their roots of unfurling banners from buildings behind and produced a “wickedly clever campaign that feels entirely of this moment”.

In conclusion, the Greenpeace campaign was a success because it combined virality with up-to-date guerilla tactics in order to challenge the environmental credibility and social licence of a globally-recognised and popular toy company. Future Greenpeace campaigns look set to repeat the strategy, and time will tell if they remain successful. Following the announcement that LEGO terminated their partnership with Shell, executive director of Greenpeace UK John Sauven said: “Clearly Shell is trying to piggy back on the credibility of other brands. It’s a good PR strategy if you can get away with it. But as we’ve shown, if you can’t get away with it, that social licence is taken away. It does damage them a lot.” (Vaughan, 2014a).

References

Cho, C., Guidry, R., Hageman, A. and Patten, D. (2012). Do actions speak louder than words? An empirical investigation of corporate environmental reputation. Accounting, Organizations and Society, 37(1), pp.14-25.

Creative Guerrilla Marketing, (2015). What Is Guerrilla Marketing? [online] Available at: http://www.creativeguerrillamarketing.com/what-is-guerrilla-marketing/ [Accessed 24 Jan. 2015].

Franzen, G. (1994). Advertising effectiveness. Henley-on-Thames, Oxfordshire: NTC Publications.

Greenpeace, (2014a). Our History. [online] Available at: http://www.greenpeace.org/usa/en/campaigns/history/ [Accessed 24 Jan. 2015].

Greenpeace, (2014b). Save the Arctic. [online] Available at: http://greenpeace.org.uk/climate/arctic [Accessed 24 Jan. 2015].

Greenpeace, (2014c). Lego and Shell – FAQs. [online] Available at: http://greenpeace.org.uk/blog/climate/lego-and-shell-faqs-20140630 [Accessed 24 Jan. 2015].

Greenpeace, (2014d). LEGO: Everything is NOT awesome. Available at: https://www.youtube.com/watch?v=qhbliUq0_r4 [Accessed 24 Jan. 2015].

Gueterbock, R. (2004). Greenpeace campaign case study — StopEsso. Journal of Consumer Behaviour, 3(3), pp.265-271.

Hirzalla, F. and Van Zoonen, L. (2010). Affective Political Marketing Online: Emotionality in the Youth Sites of Greenpeace and WWF. International Journal of Learning and Media, 2(1), pp.39-54.

LEGO, (2014). Comment on Greenpeace campaign and the LEGO® brand. [online] Available at: http://www.lego.com/en-GB/aboutus/news-room/2014/october/comment-on-the-greenpeace-campaign-and-the-lego-brand [Accessed 24 Jan. 2015].

Miel, R. (2014). Lego looking for a sustainable replacement for ABS. Plastics News. [online] Available at: http://www.plasticsnews.com/article/20140218/NEWS/140219915/lego-looking-for-a-sustainable-replacement-for-abs [Accessed 24 Jan. 2015].

Miles, M. and Covin, J. (2000). Environmental Marketing: A Source of Reputational, Competitive, and Financial Advantage. Journal of Business Ethics, 23(3), pp.299-311.

Nordhaus, T. and Shellenberger, M. (2009). Apocalypse Fatigue: Losing the Public on Climate Change. Yale environment 360. [online] Available at: http://e360.yale.edu/feature/apocalypse_fatigue_losing_the_public_on_climate_change/2210/ [Accessed 24 Jan. 2015].

Ryan, D. and Jones, C. (2011). The best digital marketing campaigns in the world. London: Kogan Page.

S., M. (2014). Childish arguments. The Economist. [online] Available at: http://www.economist.com/blogs/democracyinamerica/2014/10/greenpeace-lego-and-shell [Accessed 24 Jan. 2015].

Skapinker, M. (2014). Everything is not awesome about Greenpeace’s assault on Lego. Financial Times. [online] Available at: http://www.ft.com/cms/s/0/7a8885fc-538c-11e4-8285-00144feab7de.html?siteedition=uk#axzz3PlQsJ8QR [Accessed 24 Jan. 2015].

Vaughan, A. (2014a). Lego ends Shell partnership following Greenpeace campaign. The Guardian. [online] Available at: http://www.theguardian.com/environment/2014/oct/09/lego-ends-shell-partnership-following-greenpeace-campaign [Accessed 24 Jan. 2015].

Vaughan, A. (2014b). Science Museum former head gives Greenpeace Lego campaign ‘0 out of 10’. The Guardian. [online] Available at: http://www.theguardian.com/environment/2014/oct/09/science-museum-former-head-gives-greenpeaces-lego-campaign-0-out-of-10 [Accessed 24 Jan. 2015].

Woerndl, M., Papagiannidis, S., Bourlakis, M. and Li, F. (2008). Internet-induced marketing techniques: Critical factors in viral marketing campaigns. International Journal of Business Science and Applied Management, 3(1), pp.33-45.

Cross Dressing Can Support as Well as Undermine Gender Norms

This work was produced by one of our professional writers as a learning aid to help you with your studies

Discuss with reference to 2/3 films.

The representation of stereotypical gender identities in filmmaking has evolved throughout cinema history, primarily in accordance with changes in political and social values. The traditional gender stereotyping of the dominant male (the all-powerful, masculine hero) and the spectacle of an emotional, submissive but desirable female counterpart continues to dominate the filmmaker’s approach to image and narrative in mainstream commercial cinema. However, there are examples of films which break with this stereotype, blurring the boundaries that define the traditional roles of the male and female.

Many film critics have considered the essential appeal of cinema in relation to audience participation and the viewer’s willingness to temporarily suspend their views and judgments; to draw parallels with, and make assumptions and interpretations about, the film’s fictionalised ‘reality’. The importance of the relationship between the spectacle and the spectator, the viewed and the viewer, continues to be integral to film theory and criticism. The viewer watches a film with pre-determined thoughts, values, expectations and prejudices. It is the purpose of the filmmaker to draw upon, guide and manipulate the audience’s emotions and sense of ‘realism’. As David Bordwell and Kristin Thompson observe, “Film form can make us perceive things anew, shaking us out of our accustomed habits and suggesting fresh ways of hearing, seeing, feeling, and thinking.”

The audience’s interpretation of a film, and the way in which we identify with the characters, is, as is often the case in life, judged upon initial appearance. The mise-en-scene of a film, namely the use of setting, lighting and costume together with the movement of the actors, visually dictates the story and the viewer’s sense of ‘realism’. These elements are of equal importance and are as influential as the filmmaker’s use of camera shot, movement, technique and frame composition. Costume, props and make-up function as a guide in a film, contributing to the narrative and the creation of a specific mood. Assumptions can be made about a character before they have even spoken, based entirely upon their physical appearance. Film genres play with costume, props and make-up extensively, typically for the purpose of creating realism, or to give impact to an image.

The representation of cross-dressing in commercial mainstream cinema has conventionally been avoided or included for comic purposes. The disguise adopted by the divorced husband played by Robin Williams as a female housekeeper in ‘Mrs Doubtfire’ (1993) typifies the humorous and inoffensive approach to the taboo subject which had previously been explored in films such as ‘Some Like It Hot’ (1959) and ‘Tootsie’ (1982). These were roles in which the male protagonists find it necessary to disguise themselves as women so as to ensure their success and happiness in life, and are not meant as representations of gender confusion or sexual ambivalence. Each dresses in drag for comic effect; it is visual clown comedy. Mrs Euphegenia Doubtfire is a divorced man determined to remain with his children in any way possible, so he becomes their female nanny. In ‘Tootsie’ an unemployed actor disguises himself as a woman to get a role in a soap opera and becomes a star. In ‘Some Like It Hot’ two musicians witness a mob hit and escape by disguising themselves as women in an all-female band. The audience are in on the joke alongside the men (played by Jack Lemmon and Tony Curtis) while the fellow characters remain humorously oblivious. The light-hearted, harmless, and unquestionably unrealistic approach to gender identity in such films reflects cinema’s historical aesthetic tradition of telling a story which is the ‘norm’, familiar to its audiences, and marketed as entertainment for mass appeal.

The portrayal of cross-dressing in relation to gender and sexual confusion in cinema is stereotypically of a character tormented by pain and uncertainty. The film is subjective, following their personal journey as they seek personal happiness and fulfilment, and a release from their fears. Such gender identity is typically explored by filmmakers through psychoanalytical representation. A film which exemplifies such depiction is Alfred Hitchcock’s ‘Psycho’ (1960). The film tells the story of Norman Bates, a crazed individual whose obsessive need of his mother (he literally preserves her body in his basement) leads him to become her. The silhouette of Norman wearing a dress and wig as he raises his arm and slashes the defenceless heroine of the film as she showers is perhaps the most well-known image of cross-dressing in cinema history. A psychiatrist explains to the viewer as the film ends, “He was simply doing everything possible to keep alive the illusion of his mother being alive. And when reality came too close, when danger or desire threatened that illusion, he dressed up, even in a cheap wig he’d bought. He’d walk about the house, sit in her chair, and speak in her voice. He tried to be his mother.” Hitchcock successfully manipulates his audience into identifying with each of the film’s victims in turn; first with his female protagonist Marion Crane and then with the male/female antagonist Norman Bates. The viewer’s emotions are shifted as Hitchcock forces us into exploring and comprehending the complex world of Norman’s mind, reconsidering his identity and our interpretation of him.

The gender coding of masculine restraint, with the emphasis upon physique rather than emotional charge, is evocatively explored in ‘Boys Don’t Cry’ (1999), a film which powerfully addresses the issue of sexual identity and gender roles. The film tells the story of Brandon Teena (played by Oscar-winning Hilary Swank), a young woman who successfully integrates herself into a small-town Nebraska community as a man, has a loving relationship with a woman, and is later raped and murdered when it is discovered that he is in fact biologically female, given the birth name of Teena Brandon. Based upon a true story, filmmaker Kimberly Peirce explores not what it means to be a lesbian but what it is to be a woman who feels that she is a man. Teena cuts her hair, tapes her breasts, and puts a sock down her trousers, hiding her female identity, and making not a sexual but a social transformation. The film is a graphic portrayal of the manifestation of hate, ignorance and ultimately the use of violence as a display of “manhood”. Significantly, it is not Teena who is represented as being crazed, but her attackers as they brutally rape her and shoot into her defenceless body. The viewer is forced to confront their own biases and prejudice as Peirce positions us without remission or apology throughout the shockingly explicit ordeals that Teena Brandon suffers. Peirce said of her film, “I think it’s a universal story that affects people regardless of their sexual orientation … the point is to engage the audience as deeply as possible with all the characters and allow the audience to see itself reflected in all of them, in the tragedy as a whole.” What makes the film so hauntingly frightening is its believability; the rape and murder were so predetermined and could so easily happen again if a similar situation were to arise. Peirce asks the viewer to consider this.

Cinema has the capacity to shift and change an audience’s understanding and evaluation of a subject matter. The individual expression of an artistic vision by the filmmaker is open to a flexibility which invites interpretation and rethinking. The varied representations of cross-dressing in films throughout cinema history, through to the present-day direct addressing of the taboo in films such as ‘The Crying Game’ (Neil Jordan, 1992) and ‘Boys Don’t Cry’, exemplify how complex subject matter does not necessarily alienate film audiences.

Bibliography:

Bordwell, David & Thompson, Kristin. Film Art, New York: McGraw Hill. 1990.

Miller, Francesca. Putting Teena Brandon’s Story on Film. Gay & Lesbian Review Worldwide, 7(4), 2000.

Turbocharger Petrol Engine

This work was produced by one of our professional writers as a learning aid to help you with your studies

The quest for higher efficiency of the internal combustion engine will always be pursued. Increasingly stringent emission regulations are forcing manufacturers to downsize engine displacement and increase specific power. By adding a turbocharger, the mass of air flowing through the engine, and hence the specific power, can be increased.

The advantage of a small turbocharged engine over a naturally aspirated (NA) engine of similar power is that it is lighter, has better part-load efficiency when operating at the same load, and produces lower emissions.

The objective of this study is to investigate the installation of a turbocharger on a naturally aspirated engine, including testing the engine before the turbocharger is fitted.

Boost refers to the increase in intake manifold pressure generated by the turbocharger, specifically the amount by which it exceeds normal atmospheric pressure. This study also aims to develop a strategy for the control of boost for the engine.

1.0 Introduction
1.1 Background

Turbocharged spark ignition engines have been around since the 1970s, but their popularity outside the motorsport sector was limited until recent advances in engine control. The lack of popularity could partly be due to the drivability issues associated with early turbocharged engines. The engine’s response to a sudden increase in driver demand was delayed due to turbocharger lag.

The lag was then usually followed by a rapid increase in power, which resulted in loss of traction and possible loss of control over the car. The developments made in the electronic control and management of the internal combustion engine have made it possible to overcome most of these drivability limitations. Passenger vehicles with turbocharged SI engines are now becoming more common. A number of companies such as Audi and Volvo now offer passenger vehicle models with turbocharged SI engines, whereas Mercedes offers supercharged as well as turbocharged engines.

The operating principle of the turbocharger is to use energy recovered from the exhaust gases to force more air into the combustion chamber. This increases the amount of oxygen in the combustion chamber, and hence more fuel can be burned and more power produced. Therefore a turbocharged engine can produce more power than a similarly sized naturally aspirated engine. It is claimed that the displacement of a turbocharged engine can be reduced by up to 40% relative to an NA engine without compromising power output. Thus the turbocharged engine can be smaller, lighter and more fuel efficient, as well as producing lower emissions.

1.2 Aim

To design and specify a turbocharger for a Vauxhall 2.2 litre engine.

1.3 Objectives:

Critical literature review of the project.

To investigate turbocharging systems, develop a system for the Vauxhall 2.2, and produce drawings and a design.

To test the engine before installation of the turbocharger.

To investigate and develop a strategy for the control of boost for the engine over a wide range of conditions.
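
As an illustration only, and not the control scheme actually developed in this project, one common boost-control strategy is a closed-loop controller that compares measured manifold pressure against a target value and trims a wastegate actuator. The short Python sketch below shows such a proportional-integral loop; the signal names, gains and duty-cycle wastegate are assumptions made purely for illustration.

# Minimal proportional-integral boost controller (illustrative sketch only).
# Assumes a wastegate driven by a 0-100% duty cycle and a manifold pressure
# sensor sampled at a fixed interval; all names and gains are hypothetical.
def make_boost_controller(kp, ki, dt, duty_min=0.0, duty_max=100.0):
    integral = 0.0

    def update(target_kpa, measured_kpa):
        nonlocal integral
        error = target_kpa - measured_kpa      # positive when under-boosting
        integral += error * dt
        duty = kp * error + ki * integral      # PI control law
        # Clamp to the actuator's physical range and unwind the integrator
        # when saturated (a simple anti-windup measure).
        if duty > duty_max or duty < duty_min:
            integral -= error * dt
            duty = max(duty_min, min(duty, duty_max))
        return duty

    return update

# Example: hold 150 kPa absolute manifold pressure.
controller = make_boost_controller(kp=0.8, ki=0.2, dt=0.01)
wastegate_duty = controller(150.0, 132.5)  # wastegate duty demand for this sample

In practice the target would come from a speed/load map and the output would drive the wastegate solenoid, but the closed-loop structure would be broadly the same.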

2.0 Initial Critical Review of Literature

This project is related to the turbocharging of a four-stroke petrol engine. A turbocharged four-stroke diesel engine will also be covered briefly and the differences highlighted. Two-stroke engines are not discussed, due to their different gas exchange processes.

Supercharging

The term supercharging refers to increasing the air density by increasing its pressure prior to entering the cylinder. This allows a proportional increase in the fuel that can be burned and hence raises the potential power output. Three basic categories of method are used to accomplish this.

The first is mechanical supercharging, where a separate pump or compressor, usually driven by power taken from the engine, provides the compressed air. The second is turbocharging, where a turbocharger (a compressor and turbine on a single shaft) is used to boost the inlet air density. The third is pressure wave supercharging, which uses wave action in the intake and exhaust systems to compress the intake mixture.

The main advantage of turbocharging over mechanical supercharging is that turbocharging uses energy in the exhaust gas that would otherwise be lost. Supercharging uses power taken from the engine’s crankshaft, so less power is available for propulsion.
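
The density gain that all three methods exploit follows from the ideal gas law. The figures below are a rough worked example with assumed conditions (100 kPa and 293 K ambient; 150 kPa and roughly 340 K after compression), not measurements from this project:

\[
\rho = \frac{p}{RT}, \qquad
\frac{\rho_{\mathrm{boost}}}{\rho_{\mathrm{amb}}}
= \frac{p_{\mathrm{boost}}}{p_{\mathrm{amb}}}\cdot\frac{T_{\mathrm{amb}}}{T_{\mathrm{boost}}}
= \frac{150}{100}\times\frac{293}{340} \approx 1.3
\]

In other words, a 50% rise in charge pressure delivers only around 30% more air mass per cycle once the temperature rise across the compressor is accounted for.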

Turbocharging

The author acknowledges that the theory represented in this section is extracted from Watson and Janota (1982).

The exhaust-driven turbocharger was invented by a Swiss engineer named Alfred Buchi, whose patent of 1905 applied it to a diesel engine. It took a long time for the turbocharger to become established, but it has since been proved that its characteristics are well suited to the diesel engine, the reason being that only air is compressed and no throttling is used.

A turbocharger consists basically of a compressor and turbine coupled on a common shaft. The exhaust gases from the engine are directed by the turbine inlet casing on the blades of the turbine and subsequently discharged to atmosphere through a turbine outlet casing. The exhaust gases are utilized in the turbine to drive the compressor, which compresses the air and directs it to the engine induction manifold, to supply the engine cylinder with air of higher density than is available to a naturally aspirated engine.

Figure 1: Automotive Turbocharger

Since diesel engines have no knock limitation, the maximum allowable boost on CI engines depends only on the mechanical strength of the engine. On an SI engine, the boost pressure is limited by knock. Thus if boost pressure is high on an SI engine, the compression ratio must be kept low, a high octane number fuel must be used, or the ignition timing must be retarded.

Turbocharger Theory

The operating characteristics of turbomachines such as turbines and compressors are quite different from those of the reciprocating internal combustion engine. The most common turbocharging assembly used in the automotive industry is a radial compressor coupled to a radial turbine. Between the two is a wide supporting plain journal bearing, because an ordinary roller bearing would not survive the high rotational speeds, of up to 25,000 rev/min, of which a small turbine is capable. For racing applications, ceramic ball bearings are being used more frequently.

An axial turbine coupled with a radial compressor is another common configuration. Axial turbines are preferred for their efficiency, which is superior to that of a radial turbine, but according to manufacturers radial flow turbines are simpler and cheaper to manufacture. The operating range of radial flow compressors is also limited to certain pressure ratios, because a high pressure ratio will cause supersonic flow and shock waves at the inlet, which impair the efficiency of the compressor.
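
For reference, the temperature rise associated with a given compressor pressure ratio follows the standard isentropic relation corrected by the compressor’s isentropic efficiency; the symbols and the sample figures below are generic textbook values, not data from this project:

\[
\frac{T_{2s}}{T_1}=\left(\frac{p_2}{p_1}\right)^{\frac{\gamma-1}{\gamma}},\qquad
\eta_c=\frac{T_{2s}-T_1}{T_2-T_1}
\quad\Rightarrow\quad
T_2=T_1\left[1+\frac{1}{\eta_c}\left(\left(\frac{p_2}{p_1}\right)^{\frac{\gamma-1}{\gamma}}-1\right)\right]
\]

Taking, for example, a pressure ratio of 1.5, gamma = 1.4, an inlet temperature of 293 K and a compressor efficiency of about 0.75 gives a delivery temperature of roughly 340 K, i.e. a rise of nearly 50 K.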

Turbocharging Diesel (CI) or Petrol (SI) engines

Today turbocharged diesel engines are common but turbocharged petrol engines are rare. There are sound reasons, both technical and economic for this situation. The principal reasons stem from the difference between the combustion systems of petrol and diesel engines. The petrol engine uses a carburettor or fuel injection system to mix air and fuel in the inlet manifold so that a homogeneous mixture is compressed in the cylinder.

A spark is used to control the initiation of combustion which then spreads throughout the mixture. This is because the mixture temperature during the compression must be kept below the self-ignition temperature of the fuel. Once the combustion has started it takes time for the flame front to move across the combustion chamber burning the fuel. During this time unburnt ‘end gas’ is heated by further compression and the radiation from the flame front.

If it reaches the self-ignition temperature before the flame front arrives, a large quantity of mixture may burn extremely rapidly, producing severe pressure waves in the combustion chamber. This situation is commonly referred to as ‘knock’ and may result in severe cylinder head and piston damage. It is for this reason that the compression ratio of the engine must be low enough to prevent knock from occurring.

In the CI engine cylinder, air alone is compressed. Fuel is sprayed directly into the combustion chamber from an injector only when combustion is required. This fuel self-ignites, since in a diesel engine the compression ratio must be high enough for the air temperature on compression to exceed the self-ignition temperature of the fuel. As injection takes time, only some of the fuel is in the combustion chamber when ignition starts, and the rapid burning of this small quantity is not as damaging as the knocking situation in a petrol engine.

The maximum CR of the petrol engine, but not the diesel engine, is therefore limited by the ignition properties of the fuel. The minimum CR is limited by the resulting low efficiency. Turbocharging results not only in a higher boost pressure, but also a higher inlet temperature. Unless the compression ratio of a petrol engine is reduced, the temperature at the end of the compression stroke will be too high and the engine will knock.
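
The effect of a hotter, denser charge on the end-of-compression temperature can be estimated from the isentropic compression relation; the numbers below are rough illustrative values (taking gamma of about 1.35 for a fuel-air mixture), not results from this project:

\[
T_{\mathrm{comp}} \approx T_{\mathrm{inlet}}\, r^{\,\gamma-1}
\]

With a compression ratio r = 10, a naturally aspirated inlet temperature of about 293 K gives an end-of-compression temperature of roughly 650 K, whereas a boosted (and therefore hotter) inlet of about 340 K gives roughly 760 K. This is why the compression ratio usually has to be reduced when a petrol engine is turbocharged.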

The engine may remain knock free under mild boost, but only if there is a sufficient safe knock-free margin, or a fuel of higher self-ignition temperature/octane number is used. Thus the potential power output of a turbocharged petrol engine is limited. The diesel engine has no such limitation and can therefore use a much higher boost pressure.

Petrol engines cost substantially less to produce than diesel engines of equivalent power output. The cost of the turbocharger on a diesel engine is more than offset by the reduced engine size required for a specified power output (with the exception of very small engines). This situation will rarely occur in the case of the petrol engine.

Energy Available in the Exhaust Gas

Figure 2 shows the ideal limited pressure engine cycle, in terms of a pressure/volume diagram, for the naturally aspirated engine. Superimposed is a line representing isentropic expansion from point 5, at which the exhaust valve opens, down to the ambient pressure (Pa), which could be obtained by further expansion if the piston were allowed to move to point 6. The maximum theoretical energy that could be extracted from the exhaust system is represented by the shaded area 1-5-6. This energy is known as ‘blow-down’ energy.

Figure 2: Naturally Aspirated Ideal Pressure Limited Cycle (Watson and Janota, 1982)

Considering the supercharged engine, the ideal four-stroke pressure/volume diagram would appear as shown in Figure 3, where P1 is the supercharging pressure and P7 is the engine back pressure in the exhaust manifold. Process 12-1 is the induction stroke, during which fresh charge at the compressor delivery pressure enters the cylinder. Process 5-1-13-11 represents the exhaust process.

When the exhaust valve first opens (point 5) some of the gas in the cylinder escapes to the exhaust manifold expanding along line 5-7 if the expansion is isentropic. Thus the remaining gas in the cylinder is at P7, when the piston moves towards the TDC, displacing the cylinder contents through the exhaust valve into the exhaust pipe against the back pressure.

At the end of the exhaust stroke the cylinder retains a volume (Vcl) of residual combustion products, which for simplicity can be assumed to remain there. The maximum possible energy that could be extracted during the expulsion stroke will be represented by area 7-8-10-11, where 7-8 represents isentropic expansion down to the ambient pressure.

Figure 3: Turbocharged Ideal Pressure Limited Cycle (Watson and Janota, 1982)

There are two distinct areas in figure 3 representing energy available from the exhaust gas, the blow down energy (area 5-8-9) and the work done by the piston (area 13-9-10-11). The maximum possible energy available to drive a turbocharger turbine will clearly be the sum of these two areas. Although the energy associated with one area is easier to harness than the other, it is difficult to devise a system that will harness all of the energy.

To achieve that, the turbine inlet pressure must rise instantaneously to P5 when the exhaust valve opens, followed by isentropic expansion of the exhaust gas through P7 to the ambient pressure (P8 = Pa). During the displacement part of the exhaust process, the turbine inlet pressure must be held at P7. Such a series of processes is impracticable.

Consider the simpler process in which a large chamber is fitted between the engine and the turbine inlet in order to damp down the pulsating exhaust gas flow. By forming a restriction to the flow, the turbine may maintain its inlet pressure at P7 for the whole cycle. The available work at the turbine will then be given by area 7-8-10-11. This is the ideal constant pressure system. Next consider an alternative system, in which a turbine wheel is placed directly downstream of the engine, close to the exhaust valve.

If there were no losses in the port, the gas would expand directly out through the turbine along line 5-6-7-8, assuming isentropic expansion. If the turbine area were sufficiently large, both the cylinder and turbine inlet pressures would drop to P9 before the piston had moved significantly up the bore.

Hence the available energy at the turbine would be given by area 5-8-9. This can be considered the ideal pulse system. The systems commonly used, referred to as ‘constant pressure’ and ‘pulse’, are based on the above principles, but in practice they differ from these ideals.

Constant Pressure Turbocharging

In constant pressure turbocharging exhaust ports from all the cylinders are connected to a single exhaust manifold, whose volume is sufficiently large to dampen down the unsteady flow entering from each cylinder. When the exhaust valve of a cylinder opens, the gas expands down to the (constant) pressure in the exhaust manifold without doing useful work.

However, not all of the pulse energy is lost. From the law of conservation of energy, the only energy actually lost between the cylinder and turbine will be due to heat transfer. With a well-insulated manifold, this loss will be very small and can be neglected.

Consider what happens to the gas leaving the cylinder, expanding down into the exhaust manifold, and then flowing through the turbine. At the moment of the exhaust valve opening, the cylinder pressure will be much higher than the exhaust manifold pressure. During the early stages of valve opening (when the effective throat area of the valve is very small) the pressure ratio across the valve will be above the choked value.

Hence gas flow will accelerate to sonic velocity in the throat followed by the shock wave at the valve throat and sudden expansion to the exhaust manifold pressure. Due to the turbulent mixing and throttling, no pressure recovery occurs. The stagnation enthalpy remains unchanged and hence flow from the valve to turbine is accompanied by an increase in entropy.

As the valve continues to open the cylinder pressure will fall, and the flow through the valve becomes subsonic. The flow will continue to accelerate through the valve throat and then expand to the pressure in the exhaust manifold. The energy available for useful work in the turbine is given by the isentropic enthalpy change across the turbine, whereas the actual energy recovered is given by the actual enthalpy change across the turbine.
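
In the usual steady-flow notation (generic textbook symbols, not values measured in this project), these two quantities can be written as:

\[
\Delta h_{s}=c_{p}\,T_{03}\left[1-\left(\frac{p_{4}}{p_{03}}\right)^{\frac{\gamma-1}{\gamma}}\right],
\qquad
w_{t}=\eta_{t}\,\Delta h_{s}
\]

where T03 and p03 are the stagnation temperature and pressure at turbine entry, p4 is the turbine exit pressure and eta_t is the turbine isentropic efficiency.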

Clearly it is the lack of recovery of the kinetic energy leaving the valve throat, together with the throttling of the gases, that leads to poor exhaust gas energy utilization with the constant pressure system.

If the exhaust manifold is not sufficiently large, the blow down or the first part of the exhaust pulse from the cylinder will raise the general pressure in the manifold. If the engine has more than three cylinders, it is inevitable that at the moment when the blow down pulse from the cylinder arrives in the manifold, another cylinder is nearing the end of its exhaust process.

The pressure in the latter cylinder will be low; hence any increase in exhaust manifold pressure will impede or even reverse its exhaust processes. This will be particularly important where the cylinder has both intake and exhaust valves partially open and is relying on a through-flow of air for scavenging of the burnt combustion products.

There are some advantages and disadvantages of using a constant pressure system:

Conditions at the turbine entry are steady with time. Therefore losses in the turbine that result from unsteady flow are absent.

A single entry turbine may be used, eliminating ‘end of sector losses’.

A single turbocharger can be used on any multi-cylinder engine; because it is a single, relatively large unit, it has low leakage losses and hence higher efficiency. Turbines designed for constant pressure turbocharging have a high degree of reaction (around 50%) which, coupled with an exhaust diffuser, brings additional gains in efficiency.

From a practical point of view, the exhaust manifold is simple to construct, although it may be rather bulky, particularly on small engines with few cylinders.

Transient response of the system is poor. Due to the large volume of gas in the exhaust manifold, the pressure is slow to rise, resulting in poor engine response and making it unsuitable for applications with frequent load or speed changes.

Pulse turbocharging

Although the constant pressure system is commonly used on certain types of engines, the vast majority of turbocharged engines in Europe use a pulse turbocharging system. In the practical pulse system an attempt is made to utilize the energy represented by both pulse and constant pressure areas of figure 2.

The objective is to make maximum use of the high pressure and temperature existing in the cylinder at the moment the exhaust valve opens, even at the expense of creating highly unsteady flow through the turbine. In most cases the benefit from increasing the available energy will more than offset the loss in turbine efficiency due to the unsteady flow.

Now consider a small exhaust manifold, as shown in the figure. Due to the small volume of the exhaust manifold, a pressure build-up will occur during the exhaust blow-down period. This results from the flow rate of gas entering the manifold through the valve exceeding that of the gas leaving through the turbine.

At the moment the exhaust valve starts to open, the pressure in the cylinder will be 6 to 10 times the atmospheric pressure, whereas the pressure in the exhaust manifold will be close to atmospheric. Therefore the initial pressure ratio across the valve is above the critical value at which choking occurs, and the flow will be sonic.

Further expansion of the gas to the exhaust manifold pressure occurs by sudden expansion at entry to the exhaust manifold; because of the turbulent mixing, no pressure recovery occurs. The stagnation enthalpy remains constant, hence the flow from the valve throat is accompanied by an increase in entropy.

Finally the gas expands through the turbine to atmospheric pressure, doing useful work. The out-flowing gas from the cylinder loses a very large part of its available energy in throttling and turbulence after passing the minimum section of the exhaust valve. If the ratio of valve throat area to manifold cross-sectional area is very small, then during the initial stages of valve opening the throttling losses are very large and the pressure drop across the valve is very large.

With further opening of the exhaust valve, the pressure in the exhaust manifold rises and the valve throat area increases, reducing the throttling losses across the valve. The pressure drop across the turbine is now much larger, so the energy transferred to the turbine represents a much larger proportion of the energy available in the cylinder.

Towards the end of valve opening the flow is subsonic, and the throttling loss is reduced to the equivalent of the kinetic energy at entry to the exhaust manifold. During the exhaust stroke, the flow process follows approximately the constant pressure pattern described in the previous section. By the end of the exhaust process, the pressure in the exhaust manifold approaches the atmospheric value.

With pulse operation, a much larger portion of the exhaust energy can be made available to the turbine by considerably reducing throttling losses across the exhaust valve. The speed at which the exhaust valve opens to its full area and the size of the exhaust manifold become important factors as far as energy utilization is concerned. If the exhaust valve can be made to open faster, the throttling losses become smaller during the initial exhaust period.

Furthermore, if the cross-sectional area of the exhaust manifold is smaller, the rise in exhaust manifold pressure will be faster, contributing to a further reduction in throttling losses in the early stages of the blow-down period. A small exhaust manifold also causes a much more rapid fall in pressure towards the end of the exhaust process, improving scavenging and reducing pumping work. So far this discussion has considered a single cylinder connected to the exhaust manifold.

In the case of a multi-cylinder engine the problem becomes more complicated. Because the turbocharger may be located at one end of the engine, narrow pipes are used to connect the cylinders to the turbine in order to keep the exhaust manifold volume as small as possible. By using narrow pipes, the area increase following the valve throat is greatly reduced, keeping throttling losses to a minimum.

Figure 7.2

Consider again a single cylinder engine, connected to a turbine by a long narrow pipe, as shown in the figure. Since a large quantity of exhaust energy becomes available in the form of a pressure wave, which travels along the pipe to the turbine at approximately sonic velocity, the conditions at the exhaust valve and at the turbine are not the same at any given time.

Therefore the flow processes at the exhaust-valve end and at the turbine end of the pipe have to be presented separately, as shown in the figure. For simplicity, pressure wave reflections in the pipe are ignored. During the first part of the exhaust process, in the choked region of flow through the valve, the gas is accelerated to sonic velocity at the throat. Since the contents of the pipe are initially at rest at atmospheric pressure, sudden expansion takes place downstream of the valve throat. However, some of the kinetic energy is retained, the amount depending on the ratio of valve throat area to pipe cross-sectional area.

As the valve opens further the pressure at the exhaust pipe entry rises rapidly. This is firstly because a certain amount of time is required for the acceleration of the outgoing gases, and secondly because the gases enter the exhaust pipe from the cylinder at a higher rate than they are leaving the exhaust pipe at the turbine end.

The sudden pressure rise at the pipe entry is transmitted along the pipe in the form of a pressure wave and arrives at the turbine displaced in time. This displacement is a function of the length of the pipe and the properties of the gas. The pressure drop across the valve is noticeably reduced, owing to the rapid drop in cylinder pressure, the rise in pipe pressure, and the increase in the ratio of valve throat area to pipe area. Both effects considerably reduce throttling losses. The velocity at the turbine end of the pipe is greater than the velocity just downstream of the valve, due to the arrival of the high-pressure wave at the turbine end.
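The time displacement can be estimated from the pipe length and the local speed of sound. The short calculation below uses assumed, illustrative values rather than figures from this text.

```python
# Estimate the time taken for the blow-down pressure wave to reach the turbine.
# Values are illustrative assumptions.
import math

gamma = 1.35      # ratio of specific heats for exhaust gas (assumed)
R = 287.0         # gas constant, J/(kg.K), approximating exhaust gas as air
T = 850.0         # exhaust gas temperature in the pipe, K (assumed)
L = 0.6           # exhaust pipe length from valve to turbine, m (assumed)

a = math.sqrt(gamma * R * T)   # local speed of sound
t = L / a                      # transit time of the pressure wave

print(f"speed of sound ~ {a:.0f} m/s, wave arrives after ~ {t*1000:.2f} ms")
```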

In the subcritical flow region of the blow-down period, the pressure in the exhaust pipe falls together with that in the cylinder. The velocity at the valve throat is equal to the velocity in the pipe, since the valve is fully open. At the turbine the exhaust gas expands to atmospheric pressure, doing useful work.

It has been established that the pulse turbocharging system results in greater energy availability at the turbine. As the pressure wave travels through the pipe, it carries a large portion of the energy as pressure energy and a small portion as kinetic energy, which is reduced by friction. The gain obtained through the use of a narrow exhaust pipe is achieved partly by reducing throttling losses in the early stages of the blow-down period and partly by preserving kinetic energy.

Thus a small-diameter exhaust pipe is essential, because it preserves a high gas velocity from the valve to the turbine. However, if pipes are made too narrow, viscous friction at the pipe wall becomes excessive. The optimum exhaust manifold pipe diameter is therefore a compromise, but the cross-sectional area should not be significantly greater than the geometric valve area at full lift.
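As a simple numerical illustration of this sizing rule, the sketch below takes an assumed geometric valve flow area at full lift and finds the pipe diameter whose cross-section matches it. The valve area used is hypothetical.

```python
# Pick an exhaust pipe diameter so that the pipe cross-section roughly matches
# the geometric valve flow area at full lift (illustrative, assumed value).
import math

valve_area_full_lift = 1.2e-3   # m^2, assumed geometric valve area at full lift
pipe_diameter = math.sqrt(4.0 * valve_area_full_lift / math.pi)

print(f"pipe diameter ~ {pipe_diameter*1000:.1f} mm "
      f"(pipe area = valve area = {valve_area_full_lift*1e6:.0f} mm^2)")
```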

The actual flow through a pulse exhaust system is highly unsteady and is affected by pulse reflections from the turbine and from closed exhaust valves. It will be evident that the effective time of arrival of a reflected pulse changes with engine speed. Hence the exhaust pipe length is critical and must be optimized to suit the speed range of the engine.

The interference of reflected pressure waves with the scavenging process is the most critical aspect of a pulse turbocharging system, particularly on an engine with a very long valve overlap. Because of this phenomenon it is impossible to connect an engine with more than three cylinders to the same turbine without using a twin- or multi-entry turbine or introducing losses in the intake or exhaust processes.

The advantage of pulse over constant pressure turbocharging is that the energy available for conversion to useful work in the turbine is greater. The ideal pulse turbocharging system must have the following characteristics:

The peak of the blow-down pulse must occur just before the bottom dead centre of the cylinder, followed by a rapid pressure drop to below boost pressure.

The boost pressure must be above the exhaust manifold pressure to aid the scavenging process during the valve overlap.

The effectiveness of the pulse system is governed by the gas exchange process and by the overall efficiency of the turbocharger under unsteady flow conditions.

Pulse converters in turbocharging

The pulse turbocharging system has been found to be superior to the constant pressure system on the majority of today's diesel engines. The previous section made clear that pulse turbocharging is most effective when groups of three cylinders are connected to each turbine entry.

When only one or two cylinders are connected to a turbine entry, the average turbine efficiency and expansion ratio tend to fall due to the wide spacing of the exhaust gas pulses. To overcome some of these disadvantages, the 'pulse converter' has been developed.

Birmann was the first to use the term 'pulse converter'. His main objective was to design a device that retained the benefits of the unsteady flow of gases from the cylinder during the exhaust and valve overlap periods, yet maintained a steady flow at the turbine, so that both good scavenging and high turbine efficiency might be achieved. For good scavenging he proposed a 'jet pump' system, using the high velocity of gas issuing from a central nozzle to decrease the pressure in short pipes at the exhaust valves.

The system shown in figure 8.1 has the following disadvantages:

Each nozzle must be larger than the last, which results in high manufacturing cost.

The whole installation is bulky and complex.

Because much of the exhaust gas will pass through several ejectors and diffusers, the frictional and diffusion losses will be high.

There is insufficient length between exhaust ports to permit efficient pressure recovery in the diffusers.

The majority of pulse converters in use today are based on the concept of minimum energy loss, even if this means not only the loss of all suction effect, but also some pressure wave interference during scavenging. To avoid high mixing losses at the junction, the area reduction in the inlet nozzles is usually small (junction area greater than 50% of pipe area), while the mixing length, the plenum and often even the diffuser are omitted completely, as suggested by Petak (as cited in Watson and Janota, 1982).

These simple pulse converters have the added advantage of adding little overall length to the exhaust system. A typical example from a four-stroke engine is shown in figure 8.4. The pulse converter is specified by its nozzle and throat area ratios. Clearly such a pulse converter will generate no suction, but the flow losses through it will be very much less than in more complex designs.

Tests on model pulse converters by Watson and Janota (1971) have shown that the area reduction at the nozzles has to be severe to reduce pulse propagation substantially. The penalty accompanying large area reductions in the inlet nozzles is higher internal losses, which reduce the amount of energy available for useful expansion through the turbine. In practice this means that the minimum possible area reduction is used, consistent with reasonable scavenging.

It follows that the design of the pulse converter is a compromise between minimum losses and reduction of the pulse interaction between the inlet branches. The compromise adopted may vary from one engine design to another, depending on the amount of pulse interference, etc.

8.0 References

Watson, N. and Janota, M., 1982, Turbocharging the Internal Combustion Engine, Macmillan, Great Britain.

Heywood, J. B., 1988, Internal Combustion Engine Fundamentals, McGraw-Hill.

Stone, R., 1992, Introduction to Internal Combustion Engines, Macmillan, Great Britain.

Azzoni, P, Moro, D, Ponti, F & Rizzoni, G, 1998, Engine and load torque estimation with application to electronic throttle control, SAE paper No. 980795, Society of Automotive Engineers.


Various Models of Consumer Behaviour

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

This study found that the field of consumer behaviour represents a broad-ranging category in which marketers seek to understand individual and group motivations, reactions and responses to varied product and service situations (Solomon et al, 2009). It also examined the processes and activities undertaken by consumers during the stages and steps of the decision and buying process (Gupta et al, 2004).

The cognitive approach was found to focus on consumer perceptions when processing information, acknowledging social and environmental experiences (Watson and Spence, 2007). Humanistic models delve into behavioural perspectives, as opposed to the perception focus of the cognitive approach.

The above summary of the two major categories provides the basis for understanding how they guide the differing models and theories under each. It was ascertained that the varied theories and models under the cognitive and humanistic headings each have special attributes. These attributes form the basis for selecting the theory or model suited to individual product or service categories.

Cognitive Models

Bray (2008) explained that the cognitive approach is credited to Hebb's stimulus-organism-response model, introduced in the early 1950s. Hebb's model states that there is a linear relationship concerning the impact of stimuli. This concept has been criticised by Kahle and Close (2006), along with Tyagi and Kumar (2004), who state that Hebb's model lacks the capacity to account for past experiences.

Bray (2008) explains that people usually respond to the cognitive aspects of their environments, and that these cognitive areas are related to the parameters and processes of learning. He adds that feelings, behaviours and thoughts are connected in a causal manner. The behavioural approach, by contrast, emphasises processes connected to human behaviour, such as environmental stimuli and the behavioural responses to them (Zimmerman, 2008).

Cognitive Model – Consumer Decision

Consumer decisions under the cognitive designation are addressed by three models (Bauer et al, 2006): utility theory, the satisficing model and prospect theory (Steel and Konig, 2006). Utility theory proposes that people make their decisions based upon expected outcomes (Steel and Konig, 2006). It views consumers as rational actors able to foresee or estimate the potential outcomes of the decisions they make, including the potential for uncertainty. This assumption is a flaw in utility theory, as the end utility of a purchase is often unknown and debatable at best.
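A minimal sketch of the expected-outcome weighting that utility theory attributes to consumers is given below. The options, probabilities and utility scores are hypothetical and serve only to illustrate how outcomes are weighted by their likelihood.

```python
# Hypothetical expected-utility comparison of two purchase options.
# Each option is a list of (probability, utility-of-outcome) pairs.
options = {
    "budget laptop":  [(0.7, 60), (0.3, 20)],   # works fine vs. disappoints
    "premium laptop": [(0.9, 80), (0.1, 40)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of outcome utilities."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")
# Utility theory predicts the consumer picks the option with the higher expected utility.
```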

Cognitive Model – Satisficing Model

The satisficing model is a newer alternative to the utility model, which was first proposed in the 1700s by Nicholas Bernoulli (Richrme, 2005). It addresses larger and longer-term consumer decisions. In many cases, 'good enough' is the explanation for this type of decision process (Richrme, 2005). Elements of utility theory are included in the satisficing model, but since the decision is made less often, consumers tend to accept a different basis for purchase or decision making than for those products that are replaced more often (Richrme, 2005).

The limitation of funds to purchase exactly what one prefers is a constraint under this consumer behavioural model, which in most cases causes consumers to settle rather than optimise their decision to meet all of their wants and desires (Bray, 2008). Bray (2008) argues that in many cases the optimal purchase cannot be determined, and that consumers tend to lack the capacity to find the best purchase option.

Cognitive Model – Prospect Theory

Prospect theory was developed in the mid-1970s by Amos Tversky and Daniel Kahneman (Sirakaya and Woodside, 2005). It replaced utility with value. Value represents a point of reference that consumers can use to determine the gains or losses from a purchase (Camerer et al, 2011).

Prospect theory helps to explain aspects of consumer behaviour that are not completely explained under utility theory, namely the emotional connection and the possibility that the extent of a problem is not fully understood (Sirakaya and Woodside, 2005). These are strengths of prospect theory over the utility and satisficing theories. Camerer et al (2011) found, however, that prospect theory may predict inaccurate outcomes because it does not consider the characteristics of decision makers, such as their past history, or the context of the decision as represented by the type of purchase (large versus small ticket items in terms of price or frequency).
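The reference-point idea described above is commonly formalised as a value function that treats gains and losses asymmetrically. The sketch below uses the parameter estimates often quoted from Tversky and Kahneman's later work (curvature of about 0.88 and a loss-aversion coefficient of about 2.25); these figures and the example outcomes are illustrative, not values drawn from the sources cited in this essay.

```python
# Illustrative prospect-theory value function; parameters and outcomes are assumed.
ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion coefficient

def value(x: float) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

print(f"value of a 100-unit gain: {value(100):.1f}")
print(f"value of a 100-unit loss: {value(-100):.1f}")  # losses loom larger than gains
```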

Cognitive Models – Theory of Buyer Behaviour

The theory of buyer behaviour represents an approach to analysing and predicting the method that consumers use in making their purchase decisions (Pickton and Broderick, 2005). In many cases, a consumer will use a generic decision-making model (Pickton and Broderick, 2005). The first step generally involves the consumer conducting research on varied products and prices. In most cases the process is prompted by the need to replace a product the consumer already owns that has become outdated or no longer works (Calonius, 2006). In order to understand the considerations in the process, the following provides an illustration of the factors:

Figure 1 – Model of Consumer Buying Considerations Affecting the Buying Process

(Friesner, 2014, p. 1)

Friesner (2014) adds that understanding consumer buyer behaviour entails understanding how it links to the marketing mix represented by price, place, promotion and product. He advises that marketers and consumers are intertwined: the former modify their approaches to create a climate for consumer action, and the reaction of consumers causes continued modifications to the 4Ps. The above explains both the strength and the weakness of the theory of buyer behaviour, as it is based on the parameters of past and current consumer motivations and actions. This same strength also represents a weakness, as new products, better information availability (such as the Internet) and shifting product reputations mean that buying behaviour patterns and rationales are consistently changing (Calonius, 2006).

Cognitive Models – Theory of Reasoned Action

The theory of reasoned action is a method of predicting behaviour, attitude and intention (Cooke and French, 2006). It separates intention from behavioural aspects, providing the framework to explain the impact of attitude (Hale et al, 2002). The main tenets are attitude, behavioural intention and subjective norm (Cooke and French, 2006). Attitude represents the beliefs formed by a consumer concerning a behavioural approach, including an assessment of what the consequences might be. Behavioural intention looks at the strength of an individual's intention to perform a behaviour, with the subjective norm representing perceived expectations based on other people or groups, and how a person measures up to these norms (Cooke and French, 2006). The weakness of reasoned action is that the sum of the comparison group forms the basis for measurement: if the intentions, subjective norms or attitudes are improperly gathered, the outcome is negatively affected.
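The relationship among these tenets is often expressed as a weighted sum of attitude and subjective norm. The toy sketch below is hypothetical (the weights and belief scores are invented) and only illustrates how such a score is computed, and therefore how improperly gathered attitude or norm data would distort the resulting intention measure.

```python
# Toy illustration of weighting attitude and subjective norm into an intention score.
# All weights and scores are hypothetical.
def behavioural_intention(attitude, subjective_norm, w_attitude=0.6, w_norm=0.4):
    """Weighted combination of attitude and subjective norm."""
    return w_attitude * attitude + w_norm * subjective_norm

# Attitude from belief/evaluation pairs; subjective norm from normative belief/motivation pairs.
attitude = sum(b * e for b, e in [(0.8, 3), (0.6, 2)])          # = 3.6
subjective_norm = sum(n * m for n, m in [(0.9, 2), (0.5, 1)])   # = 2.3

print(f"behavioural intention = {behavioural_intention(attitude, subjective_norm):.2f}")
```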

Cognitive Models – Theory of Planned Behaviour

The theory of planned behaviour connects behaviour and beliefs (Ajzen, 2011). It is an improvement on the theory of reasoned action as it adds perceived behavioural control to attitude, behavioural intention and subjective norm (Ajzen, 2011). The addition of perceived behavioural control looks into the perceptions individuals carry regarding their capability to perform a type of behaviour (Kraft et al, 2005). This is the strength of the theory as it delves into the potential presence of factors that may impede or aid behaviour performance. Conversely, it also represents a weakness because if the perception does not accurately reflect actual behavioural control, it can cause incorrect conclusions or assumptions (Kraft et al, 2005).

Humanistic Models

Humanistic models of consumer behaviour are close to the behavioural perspective, with some key differences (Wong, 2006). The approach is defined by Davis and Palladino (2010) as focusing on areas that can be observed and as emphasising the learned nature of such behaviours. The humanistic model places emphasis on the individual and their psychodynamic determinants, which consist of behaviours that can be observed and reactions to external stimuli (Wong, 2006).

Humanistic models also take into account the psychology represented by Maslow's hierarchy of needs, which observes that people tend to be motivated by rewards or unconscious desires (Koltko-Rivera, 2006).

Figure 2 – Maslow’s Hierarchy of Needs

(Burton, 2012, p. 1)

Maslow refers to the four bottom tiers as representing deficiency needs (Anderson, 2014). He explains that individuals become anxious when these needs are not met or are under threat. The top tier represents growth, as it permits individuals to pursue self-actualisation, which includes independence, objectivity, awareness, creativity and honesty (Anderson, 2014). Bourdieu (Trigg, 2004), along with Rouse (2004) and others, has criticised Maslow's hierarchy of needs as being too schematic (meaning too planned or structured) and as lacking in scientific principles. Others, such as Oleson (2004) and Dye et al (2005), state that it provides a useful theory and an intuitive guide to understanding aspects of human motivation.

The humanistic approach and its models hold that people have the capacity to guide and shape their destiny and thinking, designing courses of action they can follow or amend as circumstances or situations change (Davis and Palladino, 2010).

Humanistic Models – Theory of Trying

The theory of trying represents a consumer seeking to act on a particular thought or series of thoughts connected to a potential purchase (Ahuja and Thatcher, 2005). Carsrud et al (2009, p. 155) explain it as “an attitude toward a reasoned action is replaced by an attitude toward trying and an intention is restricted to an intention to try”. The theory integrates hierarchical goals into a behavioural context that people use to work toward a decision (Ahuja and Thatcher, 2005). It delves into the processes individuals work through in attempting to solve a selection problem that includes looking at the varied options that are available.

Figure 3 – The Theory of Trying

(Bray, 2008, p. 27)

As shown above, the stages contributing to the theory channel into an intention to try before the attempt is considered or actualised.

The issue with the theory is that it represents a subjective process that seeks to identify a switch in consumer intention from attitude to trying (Ahuja and Thatcher, 2005). This entails opinions based on the recorded intentions or attitudes of individuals that might have been perceived incorrectly. The other aspect that represents a weakness is that it is skewed toward the evaluation of the potential consumption patterns of individuals as opposed to their buying behaviour (Ahuja and Thatcher, 2005).

Humanistic Models – Model of Goal Directed Behaviour

The model of goal-directed behaviour was built on the tenets of the theory of planned behaviour, advancing goals, rather than behaviours, as its main foundation (Hagger and Chatzisarantis, 2007). Desire is a critical component, as it represents a driving force:

Figure 4 – Model of Goal Directed Behaviour

(Bray, 2008, p. 28)

The model of goal-directed behaviour is complex due to attitude, positive and negative anticipated emotions and subjective norms contributing to desires. As shown in the above figure, desire is a critical component that also represents a potential source of misunderstanding as it is a subjective area that can easily be misconstrued concerning consumer behaviour.

Conclusion

This exploration of cognitive and humanistic models uncovered that the varied theories and approaches under each have their own unique attributes. It was also found that each tends to have specific attributes suited to varied situations or circumstances, based on what marketers are seeking to uncover and utilise in the development of campaigns and approaches to generating sales.

The cognitive approach seeks to uncover the experiences, feelings, values, expectations and thoughts consumers develop and use as part of their decisions, and their reasons for action. The three approaches under consumer decision making primarily represent rational (utility), expectation-based (satisficing) and value-based (prospect) approaches. The theory of buyer behaviour describes a complex process that is influenced by marketing (the 4Ps), along with purchase considerations and psychological aspects that include perception and learning. Exploring the above in greater depth leads the marketer to the theory of reasoned action, which seeks to separate intention from behavioural aspects. This can then be used to delve into the theory of planned behaviour, which adds perceived behavioural control to attitude and intention.

Whilst the humanistic models of consumer behaviour have a close affinity to behavioural perspectives, they place more emphasis on psychodynamic aspects and on individuals, using phenomena that are observable and learned from such behaviours. Under Maslow's hierarchy of needs, the self-actualisation phase, as the top tier, was shown to be applicable to the humanistic approach, as it represents the stage where base needs no longer control decisions. The theory of trying demonstrates the above, as it represents a consumer seeking to act on a particular thought as the means of deciding on a selection using hierarchical goals. Goal-directed behaviour is more personally motivated, as it utilises desires as a core component in the process.

It was found that each of the theories and models examined under the cognitive and humanistic approaches has its strengths and shortcomings. This is because each of these models is based on a differing approach, such as uncovering experiences, feelings, values, expectations and thoughts under the cognitive heading, compared with psychodynamics and observable individual phenomena under the humanistic heading. This study brought out that no one model or theory adequately explores or explains consumer decision making or behaviour, but that through combinations, marketers can arrive at more comprehensive understandings.

References

Ahuja, M., Thatcher, J. (2005) Moving beyond intentions and toward the theory of trying: effects of work environment and gender on post-adoption information technology use. MIS Quarterly. 29(3). pp. 433-437.

Ajzen, I. (2011) The theory of planned behaviour: Reactions and reflections. Psychology and Health. 26(9). pp. 1115-1116.

Anderson, A. (2014) Maslow’s Hierarchy of Needs. The Prairie Light Review. 36(2). pp. 4-6.

Bauer, H., Sauer, N., Becker, C. (2006) Investigating the relationship between product involvement and consumer decision-making styles. Journal of Consumer Behaviour. 5(4). pp. 348-351.

Bray, J. (2008) Consumer Behaviour Theory: Approaches and Models. (online) Available at http://eprints.bournemouth.ac.uk/10107/1/Consumer_Behaviour_Theory_-_Approaches_%26_Models.pdf

Burton, N. (2012) Our Hierarchy of Needs. (online) Available at https://www.psychologytoday.com/blog/hide-and-seek/201205/our-hierarchy-needs

Calonius, H. (2006) Contemporary Research in Marketing: A Market Behaviour Framework. (online) Available at http://www.udec.edu.mx/BibliotecaInvestigacion/Documentos/2009/Febrero/Mercadotecnia%20investigaci%C3%B3n%20contempor%C3%A1nea.pdf

Camerer, C., Loewenstein, G., Rabin, M. (2011) Advances in Behavioral Economics. Princeton: Princeton University Press.

Carsrud, A., Brannback, M., Elfving, J. & Brandt, K. (2009) Motivations: The Entrepreneurial Mind and Behaviour. In Carsrud, A. & Brannback, M. Understanding the Entrepreneurial Mind: Opening the Black Box. New York: Springer Publications.

Cooke, R., French, D. (2006) How well do the theory of reasoned action and theory of planned behaviour predict intentions and attendance at screening programmes? A meta-analysis. Psychology and Health. 23(7). pp. 751-754.

Davis, S., Palladino, J. (2010) Psychology. New York: Pearson Education.

Dye, K., Mills, A., Weatherbee, T. (2005) Maslow: man interrupted: reading management theory in context. Management Decision. 43(10). pp.1385 – 1393.

Friesner, T. (2014) Consumer Buyer Behaviour. (online) Available at http://www.marketingteacher.com/consumer-buyer-behaviour/

Gupta, A., Su, B., Walter, Z. (2004) An Empirical Study of Consumer Switching from Traditional to Electronic Channels: A Purchase-Decision Process Perspective. International Journal of Electronic Commerce. 8(3). pp. 134-137.

Hagger, M. & Chatzisarantis, N. (2007) Social Psychology of Exercise and Sport. London: McGraw-Hill International.

Hale, J., Householder, B., Greene, K. (2002) The theory of reasoned action. In Dillard, J., Pfau, M. The persuasion handbook: Developments in theory and practice. Thousand Oaks: Sage Publications.

Kahle, L., Close, A. (2006) Consumer Behaviour Knowledge for Effective Sports and Event Marketing. New York: Taylor and Francis.

Koltko-Rivera, M. (2006) Rediscovering the later version of Maslow’s hierarchy of needs: Self-transcendence and opportunities for theory, research, and unification. Review of General Psychology. 10(4). pp. 308-311.

Kraft, P., Rise, J., Sutton, S., Raysamb, E. (2005) Perceived difficulty in the theory of planned behaviour: Perceived behavioural control or affective attitude? British Journal of Social Psychology. 44(3). pp. 481-484.

Oleson, M. (2004) Exploring the relationship between money attitudes and Maslow’s hierarchy of needs. International Journal of Consumer Studies. 28(1). pp. 81-82.

Pickton, D., Broderick, A. (2005) Integrated Marketing Communications. London: Prentice Hall.

Richrme, M. (2005) Consumer Decision-Making Models, Strategies, and Theories, Oh My! (online) Available at http://www.bj.decisionanalyst.com/Downloads/ConsumerDecisionMaking.pdf

Rouse, K. (2004) Beyond Maslow’s hierarchy of needs what do people strive for? Performance Improvement. 43(10). pp. 27-31.

Sirakaya, E., Woodside, A. (2005) Building and testing theories of decision making by travellers. Tourism Management. 26(6). pp. 819-821.

Solomon, M., Zaichlowsky, J., Polegato, R. (2009) Consumer Behaviour: Buying, Having, and Being. New York: Prentice Hall.

Steel, P., Konig, C. (2006) Integrating Theories of Motivation. Academy of Management Review. 31(4). pp. 851-857.

Trigg, A. (2004) Deriving the Engel Curve: Pierre Bourdieu and the Social Critique of Maslow’s Hierarchy of Needs. Review of Social Economy. 62(3) pp. 395-397.

Tyagi, C., Kumar, A. (2004) Consumer Behaviour. New Delhi: Atlantic Publishers

Watson, L., Spence, M. (2007) Causes and consequences of emotions on consumer behaviour: A review and integrative cognitive appraisal theory. European Journal of Marketing. 41(6). pp.497 – 501.

Wong, P. (2006) Existential and Humanistic Theories. In Jay, T., Segal, D., Hersen, M. Comprehensive Handbook of Personality and Psychopathology: Personality and Everyday Functioning. Hoboken: John Wiley & Sons Inc.

Zimmerman, B. (2008) Investigating Self-Regulation and Motivation: Historical Background, Methodological Developments, and Future Prospects. American Educational Research Journal. 45(1) pp. 171-175.

Service Marketing & Quality Frameworks

This work was produced by one of our professional writers as a learning aid to help you with your studies

Introduction

A service is “any act or performance that one party can offer to another that is essentially intangible and does not result in the ownership of anything. Its production may or may not be tied to a physical product (Kotler, 2000, p. 200)”. Furthermore, service marketing can be defined as “the marketing of activities and processes rather than objects” (Solomon, et al., 1985, p. 106). As services are mainly intangible products, they “face a host of services marketing problems that are not always adequately solved by traditional goods-related marketing solutions” (Hoffman & Bateson, 2010, p. 5).

Service quality is “a measure of how well the service level delivered matches customer expectations. Delivering quality service means conforming to customer expectations on a consistent basis” (Parasuraman, et al., 1985, p. 42). Due to these problems, there are a variety of new conceptual frameworks to monitor service quality. Some of these methods are completely new creations, whereas other good-based frameworks were merely extended to be applicable towards service quality.

This report will explore several service marketing and quality frameworks, including the service marketing mix (7Ps), SERVQUAL, the services marketing triangle and service dominant logic. These different methods of measuring service marketing and quality will be critically evaluated using a variety of academic theory.

7Ps and Service Marketing Mix

The 7Ps service marketing mix is a widely used framework for analysing the performance of service marketing and the quality that a company has to offer. The service marketing mix originally consisted of the 4Ps (Gronroos, 1994). These were (Booms & Bitner, 1981):

Product: Quality, brand name, service line, warranty, capabilities, facilitating goods, tangible clues, price, personnel, physical environment and process of service delivery.
Price: Level, discounts and allowances, payment terms, customers own perceived value, quality/price interaction and differentiation.
Place: Location, accessibility, distribution channels and distribution coverage.
Promotion: Advertisements, personal selling, sales promotion, publicity, personnel, physical environment, facilitating goods, tangible clues and process of service delivery.

However, this was later expanded to form the 7Ps, because a higher degree of interdependence between buyers and sellers meant that the marketing mix had to take buyer-seller relationships into account (Webster, 1984). The three extra factors that complete the service marketing mix are (Booms & Bitner, 1981):

Participants: Personnel training, discretion, commitment, incentives, appearance, interpersonal behaviour, attitudes and customer behaviour/degree of involvement.
Process: Policies, procedures, mechanisation, employee discretion, customer involvement, customer direction and flow of activities.
Physical Evidence: Environment, furnishings, colour, layout, noise level, facilitating goods and tangible clues.

The addition of these three extra factors helped make the 7Ps a much more comprehensive framework for service marketing, offering a broader perspective with more refined results. However, all frameworks have weaknesses, and the service marketing mix can sometimes be too complicated for companies or marketers. Furthermore, some academics suggest that the extra elements are already covered by the 4Ps, making them redundant, and that the additional elements are hard to control and monitor (Rafiq & Ahmed, 1995).

The original four factors of the marketing mix are widely known and used throughout all facets of marketing, but the modern additions give a new flavour to service marketing. This pays particular attention to ‘participants’ as it includes all employees and consumers that have an effect on service quality. However, processes and physical environment still have a big influence, as they monitor the environment and the ways in which an employee or company deliver service.

Service Dominant Logic

Service dominant (S-D) logic "superordinates service (the process of providing benefit) to products (units of output that are sometimes used in the process)" (Lusch, et al., 2007, p. 6). Furthermore, service dominant logic is thought to be grounded in nine foundational premises.

Figure 1 – (Vargo & Lusch, 2006)

The presiding view of S-D logic is that customers should be viewed as an "operant resource", that is, a resource that can act on and with other resources, thus co-creating value (Lusch, et al., 2007, p. 6). Furthermore, collaboration between the organisation and its consumers allows a strong bond to form between S-D logic and the 7Ps.

S-D logic was formed to recognise the importance of service marketing and to lay a new foundation over the outdated goods-dominant logic. With value being created in new ways, and consumers valuing the service encounter, organisations must create value through their services. Furthermore, there is no goods-versus-services distinction in S-D logic, as it recognises goods as an 'appliance' used in the service encounter (Lusch & Vargo, 2006).

Although S-D logic provides many benefits for an organisation, a variety of academics have criticised the approach. Several scholars (Gronroos, 2006; Achrol & Kotler, 2006) point out that interaction and networks play a more imperative role in value creation, something that S-D logic allegedly does not take into account. However, Lusch & Vargo (2006) insist that S-D logic does take interaction and networks into account, as it holds that value creation is the process of integrating and transforming resources, which implies interaction between networks.

SERVQUAL

SERVQUAL is a service quality framework developed to measure the quality of the service a company provides. It was composed by Parasuraman, Zeithaml & Berry through a series of publications in the 1980s and early 1990s (Buttle, 1995). Its aim was to compare customers' perceptions with their expectations of a service. In its original formulation, SERVQUAL was composed of ten factors for analysing service quality: reliability, responsiveness, competence, access, courtesy, communication, credibility, security, understanding and tangibles (Parasuraman, et al., 1985). However, these components were later collapsed into five main factors, which constitute the modern understanding of SERVQUAL or RATER. These factors are (Iwaardan, et al., 2003):

Reliability: Doing what is promised and doing it at the right time.
Assurance: One of the most significant aspects of assurance is that the company has the knowledge required to answer questions.
Tangibles: Up to date equipment, physical facilities and materials are visually appealing.
Empathy: A company's communication with consumers, usually in the form of human interaction, giving individual and personal care to each customer.
Responsiveness: The most significant part of responsiveness is giving a prompt service.
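In practice, SERVQUAL is operationalised by scoring each of these five dimensions for both expectation and perception and taking the gap (perception minus expectation). The sketch below uses hypothetical survey scores on a 1-7 scale purely to show the arithmetic.

```python
# Hypothetical SERVQUAL gap scores (perception - expectation) on a 1-7 scale.
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 5.5, "responsiveness": 6.2}
perceptions  = {"reliability": 5.8, "assurance": 6.1, "tangibles": 5.2,
                "empathy": 4.9, "responsiveness": 5.0}

gaps = {dim: perceptions[dim] - expectations[dim] for dim in expectations}
for dim, gap in gaps.items():
    print(f"{dim:15s} gap = {gap:+.1f}")   # negative gap: service falls short of expectation

overall = sum(gaps.values()) / len(gaps)
print(f"overall service quality score = {overall:+.2f}")
```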

There have been many criticisms of the long-term stability of the results that SERVQUAL can provide (Lam & Woo, 1997; Crosby & LeMay, 1998), particularly regarding the applicability of all five of the factors mentioned above. Furthermore, Cronin & Taylor (1994) argue that service quality should not be strictly categorised into five different factors, but should be measured using whatever means are applicable to the situation. On top of this, Buttle (1995, p. 10) states that "SERVQUAL has been subjected to a number of theoretical and operational criticisms". These are:

Theoretical:

SERVQUAL is based on a disconfirmation paradigm, and not an attitudinal paradigm.
Little evidence that consumers assess service quality using the five factors.
It focuses heavily on the process of service delivery, rather than on its outcomes.

Operational:

Consumers generally use standards instead of expectations to measure service quality.
The five factors cannot cover the variability of service quality.
Consumer quality perceptions are very versatile, and can change quickly.

Although SERVQUAL does have several criticisms, it also has many practical applications. Wisniewski (2001) outlines some of the applications where SERVQUAL can be used. Understanding current service quality is the predominant use of SERVQUAL. This is because it allows managers to assess the current service, and monitor any gaps that exist. SERVQUAL can also highlight how different consumers perceive quality for the different services a company has to offer. Overall, it is a comprehensive framework that helps a company analyse the gap in service quality, and can help a manager decide on appropriate strategies to increase service quality.

Furthermore, SERVQUAL supplements the 7Ps service marketing mix well, because it allows the company to gather data from consumers which can be tailored specifically to one or more of the 7Ps. Measuring the 7Ps through a SERVQUAL framework allows a company to monitor where it is offering positive service quality, and where its service quality is lacking.

Services Marketing Triangle

Similarly to the services marketing mix, the services marketing triangle was created to handle the complexity that service marketers face when dealing with intangible products. The services marketing triangle highlights three key players (Gronroos, 1996):

Firm: The management of a company, including full-time marketers and sales personnel. This is enabled through continuous development and internal marketing with their employees.
Employees: This includes anyone that is working within close contact of the consumer. They play an integral role within the interactive marketing of service marketing.
Customers: Anyone that purchases the service of a company. They are also heavily exposed to the external marketing of a firm.

For marketing to be successful, a marketer should ensure that there is positive interaction between these three players. Furthermore, for this success to be accomplished, three types of marketing must be conducted. These are (Strydom, 2005):

External Marketing – Making Promises: Involves communication by a company towards their consumer. This form of communication allows the company to offer their services, and set the expectation of service quality that the client can expect. In service marketing this pays particular attention to physical evidence, such as the appearance of the place of business or appearance of staff.
Interactive Marketing – Keeping Promises: Interactive marketing revolves around the communication that occurs between the client and the service delivery personnel. This is one of the most important parts of successfully utilising the services marketing triangle, as it is the only time that the client will have face-to-face experience with the company, via the providers.
Internal Marketing – Enabling Promises: A more modern addition to the services marketing triangle, internal marketing centres on training employees to the highest standards so they can deliver exceptional service. Without internal marketing, there is a high chance that the client will receive sub-standard service.

For the services marketing triangle to be implemented successfully, all departments of a company must work together to deliver the highest quality of service possible. All members of an organisation must be conscious of their role in delivering service quality, and understand what their marketing function is (Alvesson, 1995).

Furthermore, advancements in technology are having a huge impact on service quality and marketing frameworks, because changes in technology are allowing companies to communicate with customers in a non-physical environment, such as through the internet. This is transforming the services marketing triangle into a services marketing pyramid, as all three factors can be brought together through the clever use of technology (Zeithaml & Bitner, 2000).

One of the most significant downfalls of the services marketing triangle is that firms often do not implement it as a triangle. Instead they will focus on one point of the triangle and neglect the others. This is particularly true of internal marketing, as many organisations believe that if employees are treated correctly, then good service will naturally pass through into the external environment (Li, 2010). However, the fact that all three points are woven together, and influenced by each other, does present opportunities for organisations to conduct their marketing efficiently and at low cost (Eric, 2014).

Another criticism of the services marketing triangle is that it takes into account too many marketing activities. If marketing is used merely as a tool to persuade a consumer to purchase a good or service (Kotler & Armstrong, 2010), an organisation should not have to focus on all three aspects of the triangle. As service quality is affected by each individual point of the triangle, an organisation could, theoretically, focus on only one point (Yadav & Dabhade, 2013; Lings & Greenley, 2009). However, as previously mentioned, this can have unintended impacts on the other facets of the triangle, meaning that an organisation should strive to monitor and implement all three points of the triangle instead of focusing on only one.

Conclusion

It quickly becomes apparent that service marketing is an imperative activity for a company to conduct proficiently. This is because service marketing has positive links with service quality and customer satisfaction, which in turn have strong ties to a company's overall financial performance. Poor service marketing can result in a consumer experiencing negative service quality, and thus taking their business elsewhere and potentially spreading bad press.

Furthermore, because of the significance of service marketing and service quality for a company, a variety of frameworks have been developed. The most proficient for service marketing is the 7Ps, whereas SERVQUAL is a strong framework for service quality. However, the services marketing triangle somewhat combines these two factors into a comprehensive framework that outlines both service marketing (internal, external and interactive) and the mediums through which service quality can be delivered (firm, employees, customers). The services marketing triangle is becoming the modern approach to service quality and marketing, especially as it now incorporates the advancements in technology. This further highlights the importance of service marketing and quality, as a variety of academics are consistently improving upon existing frameworks so that companies can deliver the greatest service quality through successful service marketing.

Bibliography

Achrol, R. S. & Kotler, P., 2006. The Service-Dominant Logic for Marketing: A Critique. In: The Service-Dominant Logic of Marketing: Dialog, Debate, and Directions. New York: ME Sharpe, pp. 320-333.

Alvesson, M., 1995. Management of Knowledge-Intensive Companies. New York: Walter de Gruyter.

Booms, B. H. & Bitner, M. J., 1981. Marketing strategies and organization structures for service firms. In: Marketing of Services. Chicago: American Marketing Association, pp. 47-51.

Buttle, F., 1995. SERVQUAL: review, critique, research agenda. European Journal of Marketing, 30(1), pp. 8-32.

Cronin, J. J. & Taylor, S. A., 1994. SERVPERF versus SERVQUAL: reconciling performance based and perceptions-minus expectations measurement of service quality. Journal of Marketing, 58(January), pp. 125-131.

Crosby, L. & LeMay, S. A., 1998. Empirical determination of shipper requirements for motor carrier services: SERVQUAL, direct questioning, and policy-capturing methods. Journal of Business Logistics, 19(1), pp. 139-153.

Eric, B., 2014. The effects of the three sides of the service triangle model on customer retention in the financial service sector of Ghana. International Journal of Business, 4(4), pp. 123-136.

Gronroos, C., 1994. From Marketing Mix to Relationship Marketing: Towards a Paradigm Shift in Marketing. Management Decision, 32(2), pp. 4-20.

Gronroos, C., 1996. Relationship Marketing Logic. Asia-Australia Marketing Journal, 4(1), pp. 7-18.

Gronroos, C., 2006. What Can a Service Logic Offer Marketing Theory? In: The Service-Dominant Logic of Marketing: Dialog, Debate and Directions. New York: ME Sharpe, pp. 354-364.

Hoffman, K. & Bateson, J., 2010. Services Marketing: Concepts, Strategies, & Cases. 4th ed. Mason: Cengage Learning.

Iwaardan, J. V., Wiele, T. V. D., Ball, L. & Millen, R., 2003. Applying SERVQUAL to Web sites: an exploratory study. International Journal of Quality & Reliability Management, 20(8), pp. 919-935.

Kotler, P., 2000. Marketing Management. Millenium Edition ed. New Jersey: Prentice Hall.

Kotler, P. & Armstrong, G., 2010. Principles of Marketing. 13th ed. s.l.:Pearson.

Lam, S. S. K. & Woo, K. S., 1997. Measuring service quality: a test-retest reliability investigation of SERVQUAL. Journal of Market Research Society, 39(2), pp. 381-396.

Li, L., 2010. Internal Quality Management in Service Organizations: a theoretical approach. s.l.: Karlstad Business School.

Lings, I. & Greenley, G., 2009. The impact of internal and external market orientations on firm performance. Journal of Strategic Marketing, 17(1), pp. 41-53.

Lusch, R. F. & Vargo, S. L., 2006. Service-dominant logic: reactions, reflections and refinements. Marketing Theory, 6(3), pp. 281-288.

Lusch, R. F., Vargo, S. L. & O’Brien, M., 2007. Competing through service: Insights from service-dominant logic. Journal of Retailing, 83(1), pp. 5-18.

Parasuraman, A., Zeithaml, V. A. & Berry, L. L., 1985. A Conceptual Model of Service Quality and Its Implications for Future Research. Journal of Marketing, 49(4), pp. 41-50.


Rafiq, M. & Ahmed, P. K., 1995. Using the 7Ps as a generic marketing mix: an exploratory survey of UK and European marketing academics. Marketing Intelligence & Planning, 13(9), pp. 4-15.

Solomon, M. R., Surprenant, C., Czepiel, J. A. & Gutman, E. G., 1985. A Role Theory Perspective on Dynamic Interactions: The Service Encounter. Journal of Marketing, 41(1), pp. 99-111.

Strydom, J., 2005. Introduction to Marketing. 3rd ed. Cape Town: Juta and Company Ltd.

Vargo, S. L. & Lusch, R. F., 2006. Service-Dominant Logic: What It Is, What It Is Not, What It Might Be. In: The Service-Dominant Logic of Marketing: Dialog, Debate and Directions. New York: M.E Sharpe, Inc, pp. 43-56.

Webster, F. E., 1984. Industrial Marketing Strategy. 2nd ed. New York: John Wiley & Sons.

Wisniewski, M., 2001. Using SERVQUAL to assess customer satisfaction with public sector services. Managing Service Quality: An International Journal, 11(6), pp. 380-388.

Yadav, R. K. & Dabhade, N., 2013. Service marketing triangle and GAP model in hospital industry. International Letters of Social and Humanistic Science, 8(1), pp. 77-85.

Zeithaml, V. A. & Bitner, M. J., 2000. Marketing: Integrating customer focus across the firm. 2nd ed. New York: McGraw Hill.