Quality of Service (QoS): Issues and Recommendations

The Effects Of Movement On QoS –

As a mobile device moves during a connection from a cell covered by one base station to an adjoining cell covered by a different base station, a handover takes place. The handover may cause a brief loss of communication that would probably go unnoticed in voice interaction but can result in loss of data for other applications. For mobile computing, the base station may also have to provide local processing, storage, or other services as well as communication.

Variations in link quality can also be caused by atmospheric conditions such as rain or lightning. These effects require more sophisticated dynamic QoS management than fixed systems do.

It is therefore the variation in QoS that is the crucial distinction between mobile systems and communications based on wired networks. This calls for adaptive QoS management that specifies a range of acceptable QoS levels, instead of attempting to guarantee specific values. QoS management is also responsible for cooperating with QoS-aware applications to support adaptation, rather than insulating applications from variation in underlying QoS. The effects of mobility on QoS also require that the algorithms used be capable of handling the frequent disappearance and reappearance of a mobile device within the network, and that overhead be reduced during periods of low connectivity. This is in contrast to traditional distributed applications, where reasonably stable presence and consistently high network quality are usually assumed.
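To make the idea concrete, the following minimal Python sketch (all class and parameter names are hypothetical, not taken from any particular system) shows how a QoS specification might be expressed as a range of acceptable levels, with QoS-aware applications registering to be notified of changes rather than being insulated from them.

```python
# Minimal sketch (hypothetical names): an adaptive QoS specification as a
# range of acceptable levels rather than a single guaranteed value.
from dataclasses import dataclass

@dataclass
class QoSRange:
    """Acceptable operating region for one parameter, e.g. bandwidth in kbit/s."""
    minimum: float      # below this the application cannot function
    preferred: float    # target level when resources allow

class AdaptiveQoSManager:
    def __init__(self, ranges):
        self.ranges = ranges          # parameter name -> QoSRange
        self.listeners = []           # QoS-aware applications to notify

    def register(self, callback):
        """Applications register to be told about changes instead of being insulated from them."""
        self.listeners.append(callback)

    def report_measurement(self, parameter, observed):
        """Called by the monitoring layer; decides whether adaptation is needed."""
        r = self.ranges[parameter]
        if observed < r.minimum:
            status = "unacceptable"   # e.g. device temporarily lost from the network
        elif observed < r.preferred:
            status = "degraded"       # still within the acceptable range
        else:
            status = "nominal"
        for notify in self.listeners:
            notify(parameter, observed, status)

# Example usage
mgr = AdaptiveQoSManager({"bandwidth_kbps": QoSRange(minimum=64, preferred=384)})
mgr.register(lambda p, v, s: print(f"{p}={v} -> {s}"))
mgr.report_measurement("bandwidth_kbps", 120)   # prints "bandwidth_kbps=120 -> degraded"
```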

The Restrictions Of Portable Devices On QoS –

Portability of the mobile computing device imposes a variety of problems that place limitations on QoS. The main limitation is the physical size of mobile computers. Systems are usually designed with the limitations of batteries in mind. Current battery technology still needs appreciable space and weight for modest power reserves, and is not expected to become significantly more compact in the near future. This places limits on the design because low power consumption must be a primary design goal: low-power processors, displays, and peripherals, and the practice of having systems powered down or "sleeping" when not in active use, are common measures to reduce power consumption in portable PCs (personal computers) and PDAs (personal digital assistants). Low-power components are usually a grade of processing power below their higher-consumption desktop counterparts, limiting the complexity of the tasks they can perform. The practice of intermittent activity may appear as frequent failures in some situations. Similarly, mobile radio technology needs significant power, notably for transmission, so the network connection may have to be intermittent.

The second issue is that of user interfaces: large screens, full-size keyboards, and sophisticated, easy-to-use pointer systems are commonplace in a desktop environment. These support information-rich, complex user interfaces with precise user control. In portable computers, screen size is reduced, keyboards are typically more cramped, and pointer devices less sophisticated. PDAs have small, low-resolution screens that are usually better suited to text than graphics and may only be monochrome. They have minimal miniature keyboards, and pen-based, voice, or simple cursor input and selection devices. These limitations in input and display technology require a significantly different approach to user interface design. In such environments, where users may use a variety of systems in different situations, the interface to applications may be heterogeneous.

QoS management in a mobile environment should allow for scaling of the delivered information, and also for simpler user interfaces, when connecting with a common combination of portable devices and higher-powered non-portable devices [1, 6]. The field of context-aware computing provides groundwork in this area: instead of treating only the geographical context (as for mobility), one can treat the choice of end system as providing a resource context.

The Effects On Other Non-Functional Parameters –

Any form of remote access increases security risks, but wireless communication is particularly prone to undetected eavesdropping, so mobility complicates traditional security mechanisms. Even nomadic systems may make use of telephone and Internet-based communications that are less secure than office systems using LANs. Some organizations may place restrictions on what data or services can be accessed remotely, or require more sophisticated security than is needed for office systems. In addition, there are legal and ethical issues raised by the monitoring of users' locations.

Cost is another parameter that may be affected by the use of mobile communications. However, while wireless connections are frequently more expensive, the basic principles of QoS management with respect to cost are the same as for fixed systems. The only major additional complexity comes from the possibility of a wider range of connection, and therefore pricing, options, and from the possibility of performing accounting in multiple currencies.

WORK ON MANAGEMENT OF QoS IN MOBILE ENVIRONMENTS

Management Adaptivity – As stated in the section "The Effects of Movement on QoS," one of the key ideas in managing QoS for mobile environments is adaptation to changes in QoS. In the following we discuss three categories of change that have to be catered for.

Large-grained changes are characterized as changes due to the variety of end systems or network connections in use. Generally these vary infrequently, often only between sessions, and are therefore managed mostly at the initialization of interaction with applications, possibly by means of context awareness.
Hideable changes are minor fluctuations, some of which may be peculiar to mobile systems, that are sufficiently small in degree and duration to be managed by traditional media-aware buffering and filtering techniques. Buffering is often used to remove jitter by smoothing a variable (bit or frame) rate stream into a constant-rate stream. Filtering of packets may differentiate between those containing base and enhancement levels of information in multimedia streams, e.g., moving from colour to black-and-white images, much as in fixed network systems [35]. However, as mobile systems move, connections with different base stations have to be set up and connections to remote servers re-routed via the new base stations. This requires moving or installing filters for these connections; a different connection may not provide the same QoS as the previous one, and so the required filtering technique may differ. Managing this requires an extension of the traditional interactions for migrating connections between base stations. The selection and handover of management should take account of the offered QoS, the required QoS, and the capability of the network to accommodate any required filters. Where the network cannot maintain the current level of service, base stations should initiate adaptation in conjunction with handover [14, 41].
Fine-grained changes are those that are often transient, but significant enough in range of variation and duration to be outside the range of effects that can be hidden by traditional QoS management methods. These include:
Environmental effects in wireless networks.
Other flows starting and stopping in part of the system, affecting the resources available.
Changes in available power causing power management functions to be initiated, or degradation in functions such as radio transmission.

These types of change should be communicated to the applications involved, as they require interaction between QoS management and the application for adaptation.
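As an illustration of this interaction, the sketch below (thresholds and names are invented for the example) hides minor fluctuations by averaging over a short window, in the spirit of the buffering described above, and escalates sustained fine-grained changes to an application callback so the application can adapt.

```python
# Illustrative sketch (hypothetical thresholds): minor fluctuations are absorbed
# by a smoothing window, while fine-grained changes outside that range are
# escalated to the application so it can adapt.
from collections import deque

class RateMonitor:
    def __init__(self, target_kbps, hide_margin=0.1, window=8, on_change=None):
        self.target = target_kbps
        self.hide_margin = hide_margin          # fraction of target hidden by buffering
        self.samples = deque(maxlen=window)     # recent throughput samples
        self.on_change = on_change              # application callback for adaptation

    def add_sample(self, kbps):
        self.samples.append(kbps)
        avg = sum(self.samples) / len(self.samples)
        deviation = abs(avg - self.target) / self.target
        if deviation <= self.hide_margin:
            return "hidden"                     # handled transparently by buffering
        if self.on_change:
            self.on_change(avg)                 # let the application pick a new level
        return "escalated"

def application_adapt(avg_kbps):
    # e.g. drop an enhancement layer or switch from colour to monochrome images
    print(f"adapting: sustained throughput now {avg_kbps:.0f} kbit/s")

mon = RateMonitor(target_kbps=384, on_change=application_adapt)
for s in (380, 372, 390, 200, 190, 185, 180, 175):
    mon.add_sample(s)
```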

In many conditions it is reasonable to assume that the wireless connection will determine the overall QoS. However, end-to-end QoS management is still needed, especially for multicast systems and those using the Internet for their connection. The impact of cost on patterns of desired adaptivity also becomes more pronounced in mobile systems, where connections usually carry a charge per unit time or per unit data.

Adaptation paths associated with QoS management should be able to describe how much users are willing to pay for a given level of presentation quality or timeliness. The heterogeneity inherent in systems that may offer network access through more than one medium is also an issue here, as certain kinds of connection cost more than others, and the cost of a connection may vary due to telecoms providers' tariff structures.
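A hedged sketch of such an adaptation path follows; the quality levels, bandwidth figures, and prices are invented purely for illustration. The idea is that QoS management consults a user-supplied table of acceptable quality/cost trade-offs and picks the best level that the current link and tariff allow.

```python
# Hedged sketch (invented values): an "adaptation path" describing which
# presentation levels a user is willing to pay for, so QoS management can pick
# the best level that fits the current link and tariff.
ADAPTATION_PATH = [
    # (quality level, required bandwidth kbit/s, max price per minute the user accepts)
    ("video_colour",     384, 0.20),
    ("video_monochrome", 128, 0.10),
    ("audio_only",        32, 0.05),
    ("text_summary",       8, 0.01),
]

def choose_level(available_kbps, price_per_minute):
    """Return the highest quality level that both the link and the user's budget allow."""
    for level, needed_kbps, max_price in ADAPTATION_PATH:
        if available_kbps >= needed_kbps and price_per_minute <= max_price:
            return level
    return None   # nothing acceptable: defer delivery or notify the user

print(choose_level(available_kbps=200, price_per_minute=0.08))   # -> "video_monochrome"
```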

Resource Management And Reservation –

Some researchers contend that resource reservation is not relevant in mobile systems, as the available bandwidth of connections is too highly variable for a reservation to be meaningful. However, some resource allocation and admission control would appear reasonable when resources are scarce, even if hard guarantees of resource provision are not practical. [44, 47] propose that admission-control guarantees be made on lower bounds of requirements, while providing best-effort service beyond this. This is achieved by making advance reservations of minimum levels of resources in the next predicted cell to ensure availability and smooth handoff, and by maintaining a portion of resources to handle unforeseen events. The issue of resource reservation is given some thought by those working on base stations and the wired elements of mobile infrastructures, as these high-bandwidth elements must be shared by many users, so the traditional resource management approach still applies.
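The following sketch illustrates that style of admission control under stated assumptions (the capacity figures and the 10% reserve are arbitrary): each flow is admitted only if its lower-bound requirement can be guaranteed, a reserve is kept for unforeseen events such as unexpected handoffs, and any spare capacity is shared on a best-effort basis.

```python
# Sketch under stated assumptions: a cell's admission controller guarantees only a
# lower bound of bandwidth per flow, keeps a reserve for unforeseen events (such as
# unexpected handoffs), and shares any remainder as best-effort capacity.
class CellAdmissionController:
    def __init__(self, capacity_kbps, reserve_fraction=0.1):
        self.capacity = capacity_kbps
        self.reserve = capacity_kbps * reserve_fraction   # held back for handoffs etc.
        self.guaranteed = {}                              # flow id -> guaranteed minimum

    def admit(self, flow_id, min_kbps):
        """Admit a flow only if its minimum can be guaranteed without touching the reserve."""
        committed = sum(self.guaranteed.values())
        if committed + min_kbps + self.reserve > self.capacity:
            return False                                  # reject: lower bound not assurable
        self.guaranteed[flow_id] = min_kbps
        return True

    def release(self, flow_id):
        self.guaranteed.pop(flow_id, None)

    def best_effort_share(self, flow_id):
        """Spare capacity beyond the guarantees, split evenly among admitted flows."""
        spare = self.capacity - self.reserve - sum(self.guaranteed.values())
        return self.guaranteed[flow_id] + spare / max(len(self.guaranteed), 1)

cell = CellAdmissionController(capacity_kbps=2000)
print(cell.admit("voice-1", 64), cell.admit("video-1", 1500))   # True True
print(cell.admit("video-2", 500))                               # False (reserve preserved)
```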

Context Awareness –

A further aspect of resource management is that of large-grained adaptivity and context awareness. [49] defines situation as "the entire set of circumstances surrounding an agent, including the agent's own internal state" and, from this, context as "the elements of the situation that ought to impact behavior." Context-aware adaptation may include migrating data between systems as a result of mobility; changing a user interface to reflect location-dependent information of interest; choosing a local printer; or power-conscious scheduling of actions in portable environments. The QoS experienced is also dependent on awareness of context and appropriate adaptation to that context [11]. A foundational paper on context awareness is [13], which emphasizes that context depends on more than location, e.g., proximity to other users and resources, or environmental conditions such as lighting, noise, or social situations. In considering QoS presentation, network connectivity, communications cost and bandwidth, and location are obvious factors affecting data for interactions, as are how end systems are used and users' preferences. For instance, network bandwidth may be available to deliver spoken messages on a PDA (personal digital assistant) with audio capability, but in many situations text display would still be the most appropriate delivery mechanism: speech might not be intelligible on a noisy factory floor, and confidentiality may be required in meetings with customers. "Quality" can therefore cover all non-functional characteristics of information affecting any aspect of perceived quality.
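A small sketch of such context-aware selection is given below; the context fields and thresholds are hypothetical, chosen only to mirror the factory-floor and confidential-meeting examples above.

```python
# Hedged sketch (hypothetical context fields): choosing a delivery mechanism for a
# message from the context of the end system and its surroundings, e.g. preferring
# text over speech on a noisy factory floor or in a confidential meeting.
from dataclasses import dataclass

@dataclass
class Context:
    has_audio: bool          # end-system capability
    bandwidth_kbps: float    # current network resource context
    ambient_noise_db: float  # environmental condition
    confidential: bool       # social situation, e.g. a meeting with a customer

def choose_delivery(ctx: Context) -> str:
    if ctx.confidential:
        return "text"                       # speech would breach confidentiality
    if ctx.has_audio and ctx.bandwidth_kbps >= 32 and ctx.ambient_noise_db < 70:
        return "speech"                     # audible and affordable
    return "text"                           # fall back to the most robust mechanism

factory_floor = Context(has_audio=True, bandwidth_kbps=64,
                        ambient_noise_db=85, confidential=False)
print(choose_delivery(factory_floor))       # -> "text"
```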

CONCLUSION

We have discussed the critical issues facing QoS in a mobile environment, when those challenges emerged, and the techniques that have been put forward in the literature to tackle them.

Psychosocial Effects of Technology

Olivia Di Giulio
Introduction

As individuals in a modern society, we are used to technology being present in almost every area of our everyday lives. Because technology is so present in our everyday lives, it is almost impossible to live a normal life without it. Technology such as laptop computers and cell phones has become a fixture of modern culture, affecting how we communicate, work, and spend our free time. Though the effects appear minimal on the surface, technology can alter an individual's psychological state. Technology affects how we view ourselves, our relationships with others, and the ways in which we communicate, thereby creating negative psychosocial effects on the lives of individuals.

Though technology is meant to promote the positive aspects of human connection, it creates an abundance of negative effects and backlashes. Technology has been created and manifested in numerous forms throughout the twenty-first century. Technology is a large umbrella term, owing to the thousands of creations that can fall under its category. Technology can range from a physical creation such as a laptop or a cellphone, to a virtual creation such as the Internet, its various websites, and the various social media applications that can be accessed from both cell phones and computers. The Internet, which can be accessed from numerous technological devices, allows individuals to participate fully in its virtual world by sharing pictures, using online chat forums and blog posts, and writing about their lives and daily activities through social media. Through these various avenues, the Internet allows users to create virtual relationships and communicative ties. Though all of these facets seem extremely positive, the negative impacts outweigh the benefits. Every positive feature, in turn, creates a negative psychological impact in some shape or form. Technology can affect our individual mental states of being, how we view ourselves, the ways in which we communicate, and our relationships with others, which are some of the most important features of our human existence. Through technology we have redefined acceptable behaviors and moral norms, the basis of communication, and who we are as a culture.

One might ask why it matters that technology has affected our psychosocial states of being. It matters because we are mentally no longer the same culture that we were before these technological advancements. As a society, our mental states have changed negatively. We have become lazy, dependent on technology, isolated, and unable to put down our technological devices. Though technology can be extremely helpful, these are not positive changes, and they have affected the human brain, human interaction, and communication culture as a whole. We must be observant as a culture about how often, and in what ways, we use our technology, in order to lessen its negative psychosocial effects; otherwise, we will not be able to live without it. In order to be proactive and lessen these effects, we must look at the devices that have forever changed the face of communication and the negative ways in which they affect our mental state and the social aspects of society.

There are numerous technological advancements that have entirely redefined communication as a whole and the ways in which our society communicates. These technological advancements consist of cell phones, which allow instant communication through texting, and computers, which allow for the download of various communication software, applications, and social media apps (which can be found on both devices). Frequent use of these devices and applications has allowed methods of communication to be entirely redefined, because most elements of communication can now take place virtually. Technology is extremely convenient and appealing, making it extremely difficult for users to resist it or to wish for face-to-face communication. A survey of undergraduate students showed that 85 percent use technology and social media to stay in touch with friends as opposed to other forms of communication (HumanKinetics.com). Due to its convenience and easy accessibility, technological communication has become a staple of our society and has entirely redefined not only the way in which we communicate, but also one's relationships, since communication plays a significant role in the creation of human ties.

Technology Negatively Affecting Personal Relationships

The quality and logistics of human relationships have suffered negative effects due to technology use. Communication is a huge aspect of relationship building, and when the basis of communication changes, the basis of relationship building changes as well. Communication plays a fundamental role in producing "the common understandings" that help create moral norms and "social value systems" (Bruce Drake, Kristi Yuthas, Jesse Dillard). Within technological avenues such as texting, communication is entirely virtual and many elements of conversation are lost, such as body language, tone, and facial expressions, allowing conversation to become extremely impersonal and to lack depth (Psychcentral.com). According to psychologist Sherry Turkle, technological communication, such as texting, ironically interrupts relationship building and does not foster the conditions that are necessary to build a true connection with another individual (Psychcentral.com). Because individuals are constantly connected through texting, they do not receive the proper alone time that is necessary for developing a connection with others (Psychcentral.com). A recent study found that the interruption of texting in a physical conversation "inhibits the development of closeness and trust" and reduces the empathy that one can feel for others (Wbur.org).

Technology does not substitute for the quality of physical conversation and does not reach the same heights and depths that physical conversation can. Through conversation, individuals search for and create moral norms, whereas technology prevents the possibility of having these in-depth conversations (Bruce Drake, Kristi Yuthas, Jesse Dillard). Physical conversation provides the tools necessary for people to develop "personal identity, build close relationships, solidarity and community"; elements that are all lost within technological communication (Bruce Drake, Kristi Yuthas, Jesse Dillard). Instead, communication and relationships fostered through technology lack substance, because it is difficult to kindle a true connection in a virtual world, have in-depth conversations, and rely on virtual fulfillment. Therefore, technological relations have numerous backlashes. Like real-world relationships, the relationships created through technology give individuals reassurance and validation. If the multitude of these associations is not fulfilled through virtual interaction, it can cause one to feel empty. It is extremely likely for one to feel empty when relying on this type of validation, because it is virtual and therefore less likely to be fulfilled instantly, as opposed to physical contact. Relationships and the process of relationship building have changed, due to our society's shift in dialogue brought about by technology.

What we say and how we say it have been entirely changed by technology, which has reinvented the technicalities of language. Cell phones and computers that operate off a wireless connection can provide users with extremely fast technological communication, allowing messages to be delivered with speed. Abbreviations and colloquial language allow users to type fast messages within texts and chat rooms. Though these aspects seem extremely positive, they can be extremely dangerous for communication culture. Wireless connections and new conversational mechanisms provide the perfect equation to entirely redefine the face of communication. Users have become extremely accustomed to this type of fast-paced communication, to the point where they can no longer live without it, due to its convenience and simplicity. Technology makes users treat speed as an essential need, which is extremely detrimental to quality communication. Technological communication, such as texting and online chat rooms, has virtually destroyed the English language, and uses of its correct forms within these devices have become few and far between. Individuals are no longer taking the time to place emphasis on certain expressions or to be grammatically correct, because it is simply easier and faster to speak colloquially, therefore preventing quality communication (Donovan A. McFarlane). Quality communication requires effort, and without it there are various misunderstandings (Donovan A. McFarlane). When communication is misunderstood, it is no longer efficient, nor does it achieve its purpose (Donovan A. McFarlane). In our society speed is often mistaken for efficiency (Donovan A. McFarlane). Individuals would rather summarize what they are saying, instead of properly explaining their ideas, due to our society's need for speed, which technology makes us desire (Donovan A. McFarlane). Though it is meant to simplify communication, technology has made communication more difficult, due to its impersonal nature and lack of quality, which promotes ineffectiveness, as opposed to cohesive dialogue (Donovan A. McFarlane).

Technology Affecting Behavior, Mental Health, and Mental Processes

As a culture, behavior has also been redefined through what is now seen as morally correct and acceptable. Technology has set these new standards of behavior and implemented entirely new social boundaries. It has been said that technology such as the Internet does not promote social integration (Kraut, Patterson, Lundmark). Over the last 35 years, "Citizens vote less, go to church less, discuss government with their neighbors less, are members of fewer voluntary organizations, have fewer dinner parties, and generally get together less for civic and social purposes" (Kraut, Patterson, Lundmark) due to technology, thereby enabling social disengagement and a less unified society. According to HumanKinetics.com, technology can cause one to feel "distracted, overly stressed, and isolated" due to frequent use. Technological avenues, such as texting, further manifest negative behavioral habits by hindering our ability to confront situations, allowing individuals to hide behind the screens of their phones (Psychcentral.com). Bernard Guerney Jr., founder of the National Institute of Relationship Enhancement, believes that texting creates a "lack of courage" to approach an intense or awkward situation, because it is simply easier to hide behind a screen, which can hinder one's social growth (Psychcentral.com). One can grow from certain life experiences, which have now become obsolete through the advent of texting (Psychcentral.com). Technology also manifests lazy behavior (Insidetechnology360.com). Technology's numerous functions enable most manual work to be done digitally, therefore making the lives of individuals much easier and ultimately making them lazier. As technology evolves, devices are able to do more and more for users (Insidetechnology360.com). For example, Apple's iPhone feature Siri allows users to press a button and talk into the phone to request an action such as surfing the web or making a phone call. As if making a phone call or surfing the web were not easy enough, Apple has made it all the easier by allowing users to perform these actions with the push of a button. Features like this, in addition to many other features of technology, breed a lazy society, because we no longer have to perform many actions ourselves, since technology can simply do them for us.

Additionally, technology enables the development of more severe personality disorders. With features that enable users to create a profile about their life on social media sites, such as Facebook, and features that allow users to post up-to-the-minute pictures of their daily activities on social media apps such as Instagram, users can become fixated on their appearance and reputation. Therefore, users will often post only their best traits via the Internet, enabling the manifestation of behavioral conditions such as narcissism (Humankinetics.com). The more one is engrossed, the more likely one is to experience psychological, emotional, and behavioral changes such as narcissism (Yi-Fen Chen). Certain activities and interactions a user partakes in will increase the likelihood that psychological traces of the virtual environment will be left behind within the individual after experiencing it (Yi-Fen Chen).

The negative effects of technology that are visible to the human eye appear minimal. These effects can be seen in the way communication has changed and the way in which we narcissistically portray ourselves via the Internet, and they do not seem extremely harmful. The effects we cannot see, such as those that affect the brain, are the most detrimental, because they target our mental health. Negative effects of technology have further manifested themselves in the forms of possible addictions and mental illnesses. Because technology is extremely present in our lives and convenient, it is hard for some to live without it, creating an inseparable and unhealthy relationship between the user and technology in the form of an addiction. Though it is not a disorder recognized by the American Psychiatric Association, there has been much speculation about including Internet Addiction in the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (U.S. National Library of Medicine), due to the manifestation of unhealthy relationships between users and technology. Internet Addiction is seen as an impulsive "spectrum disorder" which consists of "online and/or offline computer usage and consists of at least three subtypes: excessive gaming, sexual preoccupations, and e-mail/text messaging" (U.S. National Library of Medicine). A 2012 study by the Department of Psychology and Neuroscience at the University of Colorado in Boulder, Colorado, showed a strong correlation between problematic Internet use and psychotic-like experiences (U.S. National Library of Medicine).

As a society, we must be extremely conscious of and attentive to our technology use, due to its harmful psychosocial effects. Because of the way technology is positively promoted within our society, most individuals would never suspect its harmful backlashes. We must be proactive about how, and how often, we use technology in order to prevent serious changes to our behavior, mental health, relationships, and the ways we communicate. These effects are extremely detrimental to our society, and if we do not act on them by monitoring our technology use, communication, social interaction, and our own mental health will only grow worse, and we will therefore face a communication crisis.

Works Cited
Adler, Iris. “How Our Digital Devices Are Affecting Our Personal Relationships.” wbur.org. 2013. Web. 02 Nov. 2014. http://www.wbur.org/2013/01/17/digital-lives-i

Chen, Yi-Fen. “See you on Facebook: exploring influences on Facebook continuous usage”. Behaviour & Information Technology 39 (2014): 59–70. Web.

Drake, Bruce, Yuthas, Kristi, Dillard, Jesse. “It’s Only Words – Impacts of Information Technology on Moral Dialogue.” Journal of Business Ethics 23 (2000): 41-59. Web.

Human Kinetics. “Technology can have positive and negative impact on social interactions.” humankinetics.com. Web. 02 Nov. 2014. http://www.humankinetics.com/excerpts/excerpts/technology-can-have-positive-and- negative-impact-on-social-interactions

Kraut, Robert, Patterson, Michael, Lundmark, Vicki. “Internet Paradox: A Social Technology That Reduces Social Involvement and Psychological Well-Being?” American Psychologist 9 (1998): Web.

McFarlane, Donovan. “Social Communication in a Technology-Driven Society: A Philosophical Exploration of Factor-Impacts and Consequences.” American Communication Journal 12 (2010): 1-2.Web.

Mittal VA, Dean DJ, Pelletier, A. “Internet addiction, reality substitution and longitudinal changes in psychotic-like experiences in young adults.” Early Intervention Psychiatry 3 (2013): 1751-7893. Web.

Mohan, Bharath. “Is Technology Making Humans More Lazy – Yes.” Insidetechnology360.com. R.R. Donnelley, 20 Feb. 2011. Web. 24 Nov. 2014. http://www.insidetechnology360.com/index.php/is-technology-making-humans-more-lazy-yes-5968/

Pies, Ronald. “Should DSM-V Designate “Internet Addiction” a Mental Disorder”?” Psychiatry 2 (2009): 31-37. Web.

Suval, Lauren. “Does Texting Hinder Social Skills?”Psych Central.com. Psych Central, 2012. Web. 02 Nov. 2014. http://psychcentral.com/blog/archives/2012/05/02/does-texting- hinder-social-skills/


Post-War Technological Advances

In the summer of 1945, Hitler was dead and the war in the west was over. The Japanese had retreated from the Asian countries under their occupation and were determined to protect their homeland to the last man. The kamikaze attacks of the Japanese air force and the militarily expensive battle of Okinawa had driven home the message that a military invasion of Japan would be very costly in terms of human life and could take months to achieve. The official estimate of likely casualties was pegged at between 1.4 and 4 million Allied soldiers. The Japanese were obdurate in their decision not to surrender.

On August 6, and 9, 1945, the Americans revealed the potential of their weapons technology. Two atom bombs, the “Little Boy” and “Fat Man” were dropped on the Japanese cities of Hiroshima and Nagasaki. The allies did not need to negotiate any further. Emperor Hirohito surrendered within a month. The episode, however ghastly, drives home as nothing else, the tremendous ability of technological innovation to increase bargaining power.

The post war period has seen the emergence of stunning new technological innovations in diverse areas of science and technology. Many of these have arisen in weaponry and space science and effected major changes in power centres and national equations on a global scale. Technological innovations in other areas have given rise to a slew of products, created billions of pounds worth of assets, shaped huge corporations and generated massive economic empires. The names of Sony, Microsoft, Apple, Google and Nokia, to name but a few, flash through the mental landscape when the issue of innovation comes up.

Bargaining power, while often associated with unionism, is more specifically a tool to enhance control over or influence economic decisions such as "the setting of prices or wages, or to restrict the amount of production, sales, or employment, or the quality of a good or a service; and, in the case of monopoly, the ability to exclude competitors from the market." (Power, 2006)

Technical innovations have been principal drivers of change in human society since prehistory and have often created huge economic advantages for their creators or owners. The principal reason behind this is exclusivity, the owner of the innovation being the sole possessor of a particular technological item that can be used to achieve significant economic returns.

This exclusivity also gives the owners sharply increased bargaining power through access to a technology outside the reach of others and meant for the possessors' sole discretionary use. The owners of the innovation are able to use this bargaining power in various ways, which include speed to market, early-mover advantage, setting of prices, fixing of terms of credit, negotiating of contracts, asking of advances, obtaining supplier credit, accessing venture capital or institutional funds, and organising alliances with large corporates. The ability to innovate technologically has, on many occasions, given its owner enormous economic clout and led to the formation of giant mega-corporations. It has verily proven to be the biggest leveller in the marketplace: witness the effulgent rocket trail of the growth graphs of Microsoft and Google and the slow decline of numerous economic giants who have not been able to come up with anything new or worthwhile.

When discussing the bargaining power of technological innovation, it is appropriate to refer to Intel Corp and the manner in which it used its technological knowledge of chips to secure highly favourable contracts with IBM and other PC manufacturers, thereby transforming itself from a small start-up into a successful and respected corporation with an international footprint.

Jane Katz, in a 1996 article called To Market to Market for Regional Review, elaborates on the great Intel story. IBM, at one time far behind Apple in the PC race, entered into alliances with Intel and Microsoft for microprocessors and operating systems, and also took the decision to go in for an open architecture to allow other firms to develop compatible products and to avoid possible antitrust issues. Intel at that time was an untested company, and IBM, concerned about Intel being unable to meet its supply commitments, forced Intel to give up its right to license to others in order to supply Big Blue. PC sales did very well and Intel grew furiously and fast. This success led to Intel quickly developing the next generation of chips. With the number of new players having grown rapidly, thanks to the open-architecture policy of IBM, Intel's bargaining power grew significantly with all PC makers.

Thus, the balance of power shifted. When it came time to produce the 286 generation of chips, Intel was able to limit licensing to five companies and retain a 75 percent market share. For the 386 chip and beyond, Intel regained most of its monopoly, granting a single license to IBM, good only for internal use. The market for PCs grew, and Intel became fixed as the industry standard. Ultimately, IBM turned to Apple and Motorola in a belated and still struggling effort to create a competitor to Intel chips, the Power PC. (Katz, 1996)

Technological innovation, of course, gives rise to very significant powers in the hands of its owners. It needs to be remembered, however, that an innovation is no more than another valuable possession, comparable to significant capital, excellent technical skills or valuable confidential information. It takes great commercial acumen, business foresight and knowledge of human psychology to convert this asset into an extremely effective bargaining tool for obtaining a competitive edge or significant economic benefits. All too often, it is squandered away because of an inadequate knowledge of law or business, and it is left to others to pick up the pieces and enjoy the benefits.

In most cases, innovation is not restricted to one huge, big-bang or tremor-causing development. It is a series of small innovations in the technological development of a product that at some stage results in the emergence of a product sharply differentiated from the others available in the marketplace; a product impossible to emulate or bring into play within the immediate future. A truly innovative technological development is one that makes a giant leap in the benefits-to-cost ratio in some field of human enterprise. It is this quality that sets up the platform for the emergence of big bargaining power.

Another way of putting this is that an innovation lowers the costs and/or increases the benefits of a task. A wildly successful innovation increases the benefits-to-costs ratio to such an extent that it enables you to do something it seemed you couldn’t do at all before or didn’t even know you wanted to do. Think of the following examples in these terms: the printing press, the camera, the telephone, the car, the airplane, the television, the computer, the electrostatic copier, the Macintosh, Federal Express, email, fax and finally the web. (Yost, 1996)

This power that technological innovation gives is used by different people in diverse ways. It often comes the way of young and brilliant techies who decide to sell, using their bargaining power to get the best possible price for their product from available bidders. Sabeer Bhatia and Jack Smith launched Hotmail, a free web based email service accessible from anywhere in the world and designed specifically to give freedom from restricting ISPs. The service notched up subscribers rapidly and Bhatia got a summons from the office of Bill Gates soon after he got his venture capital backing.

When he was only 28, Sabeer Bhatia got the call every Silicon Valley entrepreneur dreams of: Bill Gates wants to buy your company. Bhatia was ushered in. Bill liked his firm. He hoped they could work together. He wished him well. Bhatia was ushered out. “Next thing is we’re taken into a conference room where there are 12 Microsoft negotiators,” Bhatia recalls. “Very intimidating.” Microsoft’s determined dozen put an offer on the table: $160 million. Take it or leave it. Bhatia played it cool. “I’ll get back to you,” he said. Eighteen months later Sabeer Bhatia has taken his place among San Francisco’s ultra-rich. He recently purchased a $2-million apartment in rarified Pacific Heights. Ten floors below, the city slopes away in all directions. The Golden Gate Bridge, and beyond it the Pacific, lie on the horizon. A month after Bhatia walked away from the table, Microsoft ponied up $400 million for his startup. Today Hotmail, the ubiquitous Web-based e-mail service, boasts 50 million subscribers – one quarter of all Internet users. Bhatia is worth $200 million. (Whitmore, 2001)

Sometimes technological innovation does give a person the power to refuse 100 million dollars, confident in the knowledge that he will be able to bargain for more!

While many individual developers or smaller companies favour Bhatia's route, preferring to cash the cheque first, others go for more: they develop the product and try to take it to its full economic potential. The biggest hurdle to the exclusivity of a product comes from clandestine copying, as Microsoft and the drug majors have found out in South East Asia and China. Rampant piracy and copyright breach lead to a situation where the latest software and drugs are available within weeks of being released in the market.

While this problem is being resolved at the national level, with both India and China beginning to take stringent action on IPR protection, the lesson to be learnt, in direct and oblique ways, is that the bargaining power of a technological development will vanish, vaporise into nothingness, if its exclusivity cannot be maintained. While retaining all of its excellence and potential to effect change and bring about improvement, a technological innovation loses all of its economic advantage and bargaining power the moment it loses its exclusivity. Humanity gets to be served, possibly even at a lower price, but the creator, individual or organisation, ends up unrewarded and short-changed for all the sacrifice, talent, expenditure and effort incurred in the development of the product or service.

It thus becomes critical to arrange for the exclusivity of the innovation if it needs to be used for economic advantage. This is generally done in various ways, an important route being to keep on working at further innovations to add value and to ensure that a significant differentiation always exists between it and other similar products in the marketplace. Microsoft and Google are excellent examples of this approach where continuous R & D efforts work towards creating a slew of features which become difficult to emulate and thereby continue to provide the bargaining edge.

In conclusion, the importance of hard-nosed business acumen to protect technological innovation needs to be stressed. Measures for this include the arrangement of adequate security to protect the product or service from espionage and cloning, sufficient care in licensing and similar arrangements, and the adoption of necessary business and commercial safeguards for appropriate trademark, copyright, patent or IPR protection.

References

Katz, J, (1996), To Market to Market, Regional Review, Retrieved September 28 2006 from www.bos.frb.org/economic/nerr/rr1996/fall/katz96_4.htm

Power, (2006), Wikipedia, Retrieved September 28 2006 from en.wikipedia.org/wiki/Power

Whitmore, S, (2001), Driving Ambition, Asiaweek.com, Retrieved September 28 2006 from www.asiaweek.com/asiaweek/technology/990625/bhatia.html

Yost, D.A, (1995), What is innovation, Dream host, Retrieved September 28 2006 from yost.com/misc/innovation.html

Operating systems in Nokia phones

Introduction:

An operating system basically acts as an interface between the user and the hardware. A mobile operating system, also known as a mobile OS or a handheld operating system, controls the mobile device. It works on the same principle as the Windows operating systems that control desktop computers. However, mobile operating systems are simpler than desktop Windows operating systems.

Various operating systems used in smart phones include:
Symbian OS,
iPhone OS,
RIM’S Blackberry,
Linux
Palm webOS,
Android
Windows mobile operating system.
Various operating systems along with their detail are:
1) Symbian OS: The Symbian operating system is designed for mobile devices, with associated libraries, user interface, and framework.

It is used in various phone models; around 100 models use it. It comprises the kernel and middleware components of the software stack. The upper layers are supplied by application platforms such as S60, UIQ and MOAP.

Figure: Nokia N92 with Symbian OS.

Reasons for designing Symbian OS:

To ensure the integrity and security of user data,
To make efficient use of the user's time,
To treat all resources as scarce.
Designing of Symbian OS:

It uses a microkernel, which has a request-and-callback approach to services. It maintains the separation between user interface and engine. Model-view-controller is the object-oriented design pattern used by the applications and the OS. The OS is optimised for low-power, battery-based devices and for ROM-based systems.

The Symbian kernel supports sufficiently-fast real time response to build a single-core phone around it—that is, a phone in which a single processor core executes both the user applications and the signaling stack.

Structure of Symbian model:
UI Framework Layer
Application services layer
Java ME
OS services layer
Generic OS services
Communication services
Multimedia and graphics services
Connectivity services
Base services layer
Kernel services and hardware interface layer.

It uses a microkernel architecture, i.e., it includes only the necessary parts in order to maximize robustness, responsiveness and availability. The kernel contains a scheduler, memory management and device drivers. Symbian is designed to emphasize compatibility with other devices, especially removable-media file systems.

There is a large networking and communication subsystem, which has three main servers: ETEL (EPOC telephony), ESOCK (EPOC sockets) and C32, which is responsible for serial communication. Each of these has a plug-in scheme. All native Symbian C++ applications are built up from three framework classes defined by the application architecture: an application class, a document class and an application user interface class. These classes create the fundamental application behaviour.

Symbian includes a reference user-interface called “TechView”. It provides a basis for starting customization and is the environment in which much Symbian test and example code runs.

Versions of Symbian OS:
Symbian OS v6.0 and 6.1
Symbian OS 7.0 and 7.0s
Symbian OS 8.0
Symbian OS 8.1
Symbian OS 9.0
Symbian OS 9.1
Symbian OS 9.2
Symbian OS 9.3
Symbian OS 9.4
Symbian OS 9.5
2) iPhone OS:

The iPhone is an Internet-enabled multimedia mobile phone designed by Apple Inc. It functions as a camera phone, a portable media player, and an Internet client.

iPhone OS is the operating system that runs on the iPhone. It is based on the same Darwin operating system used in Mac OS X. It is responsible for the interface's motion graphics. The operating system takes up less than half a gigabyte of the device's total storage (4 to 32 GB). It is capable of supporting bundled and future applications from Apple, as well as from third-party developers. Software applications cannot be copied directly from Mac OS X but must be written and compiled specifically for iPhone OS.

Like the iPod, the iPhone is managed with iTunes. The earliest versions of iPhone OS required iTunes version 7.3 or later, which is compatible with Mac OS X version 10.4.10 Tiger or later, and 32-bit or 64-bit Windows XP or Vista. The release of iTunes 7.6 expanded this support to include 64-bit versions of XP and Vista, and a workaround has been discovered for previous 64-bit Windows operating systems. Apple provides free updates to iPhone OS through iTunes, and major updates have historically accompanied new models. Such updates often require a newer version of iTunes (for example, the 3.0 update requires iTunes 8.2), but the iTunes system requirements have stayed the same. Updates include both security patches and new features. For example, iPhone 3G users initially experienced dropped calls until an update was issued.

3) Android OS:

Android is a mobile operating system running on the Linux kernel. It allows developers to write managed code in the Java language, controlling the device via Google-developed Java libraries.

The unveiling of the Android distribution on 5 November 2007 was announced with the founding of the Open Handset Alliance, a consortium of 47 hardware, software, and telecom companies devoted to advancing open standards for mobile devices.

4) Palm webOS:

It is a mobile operating system running on the Linux kernel with proprietary components developed by Palm.

The Palm Pre Smartphone is the first device to launch with webOS, and both were introduced to the public at the Consumer Electronics Show. The Palm Pre and webOS were released on June 6, 2009. The second device to use the operating system, the Palm Pixi, was released on November 15, 2009. The webOS features significant online social network and Web 2.0 integration.

Features:

WebOS’s graphical user interface is designed for use on devices with touch screens. It includes a suite of applications for personal information management and makes use of a number of web technologies such as HTML 5, JavaScript, and CSS. Palm claims that the design around these existing technologies was intended to spare developers from learning a new programming language. The Palm Pre, released on June 6, 2009, is the first device to run this platform.

5) Rim’s Blackberry OS:

RIM provides a proprietary multi-tasking operating system (OS) for the BlackBerry, which makes heavy use of the device's specialized input devices, particularly the scroll wheel or, more recently, the trackball and trackpad. The OS provides support for Java MIDP 1.0 and WAP 1.2. Previous versions allowed wireless synchronization with Microsoft Exchange Server's e-mail and calendar. The current OS 4 provides a subset of MIDP 2.0, and allows complete wireless activation and synchronization with Exchange's e-mail, calendar, tasks, notes and contacts.

Third-party developers can write software using these APIs, as well as proprietary BlackBerry APIs, but any application that makes use of certain restricted functionality must be digitally signed so that it can be associated with a developer account at RIM. This signing guarantees only the authorship of an application, not the quality or security of the code.

Figure: A BlackBerry 7250 displaying the icons provided by its proprietary multi-tasking operating system (OS).

6) Windows mobile operating systems:

Windows Mobile is a compact operating system developed by Microsoft, and designed for use in smartphones and mobile devices.

It is based on Windows CE, and features a suite of basic applications developed using the Microsoft Windows API. It is designed to be somewhat similar to desktop versions of Windows, feature-wise and aesthetically. Additionally, third-party software development is available for Windows Mobile, and software can be purchased via the Windows Marketplace for Mobile.

Originally appearing as the Pocket PC 2000 operating system, Windows Mobile has been updated multiple times, with the current version being Windows Mobile 6.5. Most Windows Mobile phones come with a stylus, which is used to enter commands by tapping it on the screen.

Windows Mobile's share of the smartphone market has fallen year on year, decreasing 20% in Q3 2009. It is the fourth most popular smartphone operating system, with a 7.9% share of the worldwide smartphone market.

Figure: The Windows Mobile operating system used in smartphones.

Operating Systems Course: Reflection Essay

There are a lot of new concepts about telecommunications and networking that I have learned in depth in this course. It is one of the most interesting courses that I have taken so far in IT. I feel it was worth doing this course online, as there was a chance to learn so many concepts through our assignments. The Wireshark labs were very interesting, and we gained practical knowledge of how networking works in real scenarios.

There are many topics that I found interesting throughout the course, but the topic 'Modes of Network Operation', which was the discussion topic in the sixth week, left an 'aha' moment.

In the infrastructure mode of network operation, communication occurs between a set of wireless-adapter-equipped computers, and with a wired network, by going through a Wireless Access Point (AP). Infrastructure refers to switches, routers, firewalls, and access points (APs). Access points are responsible for handling traffic between wireless networks and wired networks. There is no peer-to-peer communication in this mode. A wireless network in infrastructure mode that is also connected to a wired network is called a BSS (Basic Service Set). A set of two or more basic service sets is called an Extended Service Set (ESS).

The BSSID is a 48-bit number of the same format as a MAC address. This field uniquely identifies each BSS. The value of this field is the MAC address of the AP.
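As a small aside, the sketch below (the helper name is illustrative, not part of any standard library or of the 802.11 specification) simply validates and normalises a BSSID string against that 48-bit MAC-address format.

```python
# Small illustrative sketch: the BSSID has the same 48-bit format as a MAC address
# (the access point's own MAC in infrastructure mode); this helper just validates
# and normalises that format.
import re

_BSSID_RE = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

def normalise_bssid(bssid: str) -> str:
    """Return the BSSID in lower-case colon-separated form, or raise if malformed."""
    if not _BSSID_RE.match(bssid):
        raise ValueError(f"not a 48-bit MAC-style BSSID: {bssid!r}")
    return bssid.replace("-", ":").lower()

print(normalise_bssid("00-1A-2B-3C-4D-5E"))   # -> "00:1a:2b:3c:4d:5e"
```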

Advantages of Infrastructure mode:

Wide areas can be covered by utilizing the higher power of an access point, which is the main advantage of Infrastructure mode.
The learning curve for understanding wireless strengths and weaknesses is lower with Infrastructure mode.
A large number of clients can be supported in this mode of operation, as additional access points can be added to the WLAN to increase the reach of the infrastructure and support more wireless clients.
Infrastructure mode networks offer the advantages of scalability, centralized security management and improved reach.

Disadvantages:

The disadvantage associated with infrastructure wireless network is additional cost to purchase AP hardware.

ADHOC Mode: In this mode, each station is a peer to the other stations and communicates directly with other stations within the network. No Access points are required.

Advantages:

Because Ad Hoc Mode does not require an access point, it’s easier to set up, especially in a small or temporary network.

Disadvantages:

In Ad Hoc Mode, connections are limited, for example between two laptops, by the power available in the laptops.
Because the network layout (the network topology) in Ad Hoc Mode changes regularly, system resources are consumed just to maintain connectivity.
In an Ad Hoc network with many computers, the amount of interference for all computers will go up, since each is trying to use the same frequency channel.
In Ad Hoc Mode, chains of computers will connect to pass your data along if your computer is not directly in range. However, you do not have control over the path your data takes. The automatic configuration routines may send your data through several computers, causing significant network delays.

Conclusion

Both modes of operation described above offer advantages as well as disadvantages. Depending on the need, one may opt for Ad Hoc Mode, where setup is easy and no access points are required, whereas Infrastructure mode is well suited for larger wireless networks, as it supports many clients and offers advantages such as scalability, centralized security and improved reach.

There are a lot of concepts of operating systems that I learnt in depth from the course Operating Systems, it being one of the most important subjects to know before entering the software industry. However, I feel, and have always felt, that it is important to understand where we came from and how we ended up here, to be able to understand where we are going. The technology on which operating systems run, and the mechanics of the OS, have progressed more than could have been imagined over the last 30 years. By understanding how that progress was made, we can apply it and make equal progress in the future too.

There were many interesting topics in the discussions and journal entries throughout the course, but the first discussion, on "Microsoft Windows 8: One Size Fits All?", remained my favorite topic. It being the first discussion topic also made me feel how interesting the entire Operating Systems course would be. This topic grabbed my attention immediately, as I have been using operating systems, mostly Microsoft Windows, for so many years without even knowing what exactly is happening behind them. The pros and cons of Windows 8 were summed up in the "One size fits all" discussion.

This made me think of the practical application of an OS by comparing it with the features of other OSs. I felt it is not possible to develop a single OS that can be efficient on both tablets and PCs, and that was the first time I had to disagree with, or was not satisfied by, a Microsoft product. Microsoft has been ruling the OS platforms. Windows 8 made drastic changes to the platform and user interface of the operating system. Microsoft had a smartphone before the Apple iPhone revolution came along, and it was pushing tablet PCs before the Apple iPad made them cool. But as long as Microsoft's history with mobile devices is, so is its stubborn desire to make everything about its Windows OS.

Nowadays we cannot even imagine the world without computers, as they are a part of everyday life. But many of us do not care about what is actually happening when we use a system. Though I had little knowledge about operating systems earlier, now, even though I do not know everything, I am sure that I have learnt a lot about the functioning of an operating system, the many types of management techniques in various operating systems, scheduling algorithms, and protection and security mechanisms in the OS.

An operating system is a program that manages a computer's hardware, and knowledge of operating systems is necessary to start a career in the software industry. An operating system provides a basis for application programs, acts as an intermediary between users and computer hardware, and optimizes the utilization of hardware. It must exist in order for other programs to run.

I would definitely continue my career as a software developer after completing my Masters in IT, as I have been a software developer before. As a developer I would be developing software applications, and having in-depth knowledge of operating systems is always necessary. An operating system provides a software platform on which other application programs can run. So in some scenarios, such as asynchronous function calls in a program I have written, I would understand the execution of the program much better by knowing how the operating system works.

I am pretty sure that next time I buy a laptop or an electronic device, I will not be lost in the technical specifications. Indeed, I will be more interested in knowing the features and will discuss the specifications confidently.

Finally, I would say learning about the operating system will help every IT professional in their career.

Online Technologies: Opportunities for Charities

Information Technology and developments in non-profit organisations: How online technologies offer new opportunities for growth to charity organisations

Table of Contents

Chapter 1 – Introduction

1.1 Introduction

1.2 Aims and Objectives

1.3 Overview

Chapter 2 Literature Review

2.1 Introduction

2.2 Charities

Chapter 1 – Introduction
1.1 Introduction

As Sargeant and Jay (2004, p.2) have commented, the concept of charity and the mission of raising funds to help the poor and needy has been around for centuries. However, both the numbers and complexities of charity organisations have multiplied significantly over recent decades. Sargeant and Tofallis (2000) confirmed reports from the NCVO that in the UK, as of 1998, the number of NGOs exceeded half a million, of which 40% could be designated as charity-based organisations. This group was then reported to have a collective estimated turnover approaching £20 billion. Both of these statistics will have grown dramatically over the past decade.

The main mission of charities is to deliver practical and constructive assistance to those in need, providing information on issues such as health problems and disability, or promoting the message for fairer laws. These missions can relate to human activity, preservation of the natural environment and its wildlife, or seeking justice for those who are oppressed. However, charities currently face a number of obstacles to performing effectively the task for which they were set up, most of which arise in two particular areas. Firstly, with the growth of needy causes, there is a rise in the number of charitable organisations emerging to address them, increasing the competition for funds proportionately. Secondly, there is little doubt from the research that has been undertaken that the charity giver is becoming increasingly discerning about the impact of their donations. This concern centres on the desire to ensure that the gift has the maximum impact. It is therefore important to the donor that the minimum amount of that gift is used for the charity’s internal administrative purposes.

Despite the fact that the “mission” of a charity has in the past often been deemed more important than “economic intentions” (Hussey and Perrin 2003, p.200), the current climate within this sector requires charities to become more efficient if they wish to sustain the objectives of their cause. This means that they have to look for ways in which they can improve the effectiveness and efficiency of their operations. In this regard, although somewhat belatedly when compared with the move by commercial corporations, the charity sector is increasingly studying the benefits of using information technology processes as a means of achieving the efficiencies that are required.

However, as Hackler and Saxton (2007) observe, although some charities are incorporating information technology within their organisations, the extent, the areas of the business covered, and the effectiveness of these developments have not yet been perfected in a significant number of cases. In fact, in some charities it is considered to be reducing efficiency. Indeed, the research conducted by Sargeant and Tofallis (2000) concluded that “the performance of many charities would appear to fall well short of the efficient frontier with no immediately obvious explanation forthcoming for why this might be so.” They could also find no pattern to the causes of these failures.

It is the issue of information technology, in particular its effective and efficient use in charity organisations, that inspired this research project. Of specific interest is the intention to assess the impact that this technology has upon the dual targets of increasing financial efficiency and improving the delivery of the main services and missions of the charity.

1.2 Aims and Objectives

As stated previously, the aim of this research is to identify the ways in which information technologies can be used to improve the efficiencies of charity operations. In this regard it is intended to focus the research upon the usage of IT in the online environment. Thus the research question or hypothesis that has been set for this study is as follows: –

“Online information technology processes can offer charities opportunities for growth and expansion in terms of the revenue-generating and the message and mission areas of their operations.”

To assist with the achievement of this goal the research will use the following framework of objectives: –

Growth and maximisation of revenue

It is intended to determine the extent to which a charity can make use of the IT opportunities available using the Internet to grow its revenue base and the methods by which this can be achieved.

Cost reduction and efficiency

Using the same premise as that included within the previous objective it is also the intention of this paper to address the issue of the appropriate IT methods that can be employed for increasing the efficiency of the charity organisation in terms of cost control and reduction where appropriate.

Mission and programmes

Bearing in mind the unique purpose of the charity format, which is that it has a mission to serve a specific cause, the research will also ensure that, in addition to the financial objectives outlined above, the information processes examined are compatible with enhancing the message that charities need to communicate. This applies both to potential donors and to the recipients of their services.

The research itself will use a mixture of data to address the research question. This will include reference to the extensive range of financial statements which are available from individual charity websites or the Charities Commission (2008) online resources, although only a sample of these reports will be utilised. To address the issues and concerns of the individual charities more directly, individual interviews will be conducted with a number of representatives from this sector.

1.3 Overview

The management and presentation of the research paper follows a logical format. Chapter two presents a review of the existing literature that relates to the issues being addressed by the researcher. This includes publications and comments by academics, professional observers and other interested stakeholders. Following this critical review, chapter three concentrates upon the methodology that has been applied to this project; it provides an overview of the available methods and the reasons for the method that has been adopted in this instance. Chapter four provides the in-depth results of the research findings, gathered from both primary and secondary resources, and these are analysed and discussed in more detail in chapter five. Finally, the research project reaches a conclusion in chapter six and, where considered feasible and appropriate, the researcher’s recommendations are presented and explained.

Included at the end of this study, although separated from its main body, is additional information. This includes a bibliography of the various resources that have been referred to or used to assist with the development of the project. In addition, attached appendices contain information considered of further value in understanding the issues raised and the examinations undertaken, including the transcripts of interviews.

Chapter 2 – Literature Review
2.1 Introduction

To assess the issues of the charity use of online information technology, it is important to perform a critical review of the existing literature relating to the various elements involved. In this case that includes providing a brief understanding of the charity environment. In addition, it includes a review of information technology processes and their advantages, as well as the areas where charities have been found to be deficient, either in the usage of these technologies or in the extent to which they have availed themselves of the technology itself. The chapter has been sectioned in a manner that appropriately addresses these areas.

2.2 Charities

As many academics have observed, in comparison with commercial organisations the charity is a complex organisation, not least because of its structure and mode of operations (Wenham et al 2004, Hussey and Perrin 2003 and James 1983). Charities are even different from the other types of non-profit organisations referred to by Hackler and Saxton (2007), such as those that are often formed for regulating the decisions and objectives of various areas of national and international political policy. An example would be the various organisations that have been set up in the UK to deal with the reduction of carbon emissions, such as The Carbon Trust.

The differences attributable to the charity organisation can be observed in many areas of its operation. For a start, one of the main requirements for an organisation to qualify as a charity is that it has a non-profit-making objective (Hussey and Perrin 2003). Secondly, its mission, which in the corporate sense would be classed as a strategic objective, is directed to the service of the external stakeholder or user (Hussey and Perrin 2003). In other words, where the purpose of the commercial organisation is to achieve financial success that will enable it to return additional value to shareholders and potential investors, the charity’s financial aim is to utilise its funds specifically for the benefit of those whose demands and needs it is intending to address. Often, because of the break-even requirement, the charity will take on projects that are of no immediate benefit, but that will have the effect of helping it to subsidise other, more highly valued activities (James 1983, p.351).

Another difference in organisational processes is that the charity’s revenue-generating activities rely heavily upon the voluntary donor (Wenham et al 2004), making revenue difficult to predict. In addition, this places constraints upon administrative expenditure in areas such as computers and other modern equipment (Sargeant and Jay 2004). Furthermore, because of the purpose of the charity and the need to concentrate its expenditure upon projects that are determined within its mission statement, together with the fact that funds may be limited, many charities are heavily reliant upon the efforts of voluntary employees. Many of these employees may have limited knowledge of the operational processes that are required for an efficient organisation, which can be a disadvantage (Galaskiewicz et al 2006, p.338). This is especially true if there is a sizeable organisation to manage.

Irrespective of these differences, to remain true to its mission statement and stated aims, every charity still has to create a strategy that allows it to address three specific operational procedures. These are the maximisation of incoming funds; the minimisation of administrative costs, to ensure that the recipients of its objectives, in terms of projects and services, receive the maximum benefit; and effective marketing, which is designed to attract donors and service users (Wenham et al 2004). It is therefore important for the charity to be organised in terms of its mission, which means having the right strategies in place (Hussey and Perrin 2003, p.215 and 218) and assessing their appropriateness. As Hackler and Saxton (2007) acknowledge, it is in these areas that the use of information technology can be considered.

All charities have to be registered with the Charities Commission (2008) irrespective of their size. An integral part of this registration is the need to provide regular financial statements which

Online Self Testing for Real Time Systems

A Survey on Different Architectures Used in Online Self Testing for Real Time Systems

I. ABSTRACT

On-line self-testing is a solution for detecting permanent and intermittent faults in non-safety-critical, real-time embedded multiprocessors. This paper describes three scheduling and allocation policies for on-line self-testing.

Keywords: MPSoC, on-line self-testing, DSM technology

II. INTRODUCTION

Real-time systems are nowadays an important part of our lives. The time aspect of computation has been studied for the last few decades, but in recent years interest in it has increased sharply among researchers and research schools, and there has been an eye-catching growth in the number of real-time systems used in domestic and industrial settings. A real-time system is a system whose correctness depends not only on the logical results it produces but also on the time at which those results are produced. Examples of real-time systems include chemical and nuclear plant control, space missions, flight control systems, military systems, telecommunications and multimedia systems, all of which make use of real-time technologies.

Testing is a fundamental step in any development process. It consists in applying a set of experiments to a system (the system under test, or SUT), with multiple aims, from checking correct functionality to measuring performance. In this paper we are interested in so-called black-box conformance testing, where the aim is to check conformance of the SUT to a given specification. The SUT is a “black box” in the sense that we do not have a model of it and can therefore only rely on its observable input/output behavior.

Real time is measured quantitatively using a real clock [1]. Whenever we quantify time using the real clock, we use real time. A system is called a real-time system when we need a quantitative expression of time to describe its behavior. In our daily lives we rely on systems that have underlying temporal constraints, including avionic control systems, medical devices, network processors, digital video recording devices and many others. In each of these systems there is a potential penalty or consequence associated with the violation of a temporal constraint.

a. ONLINE SELF TESTING

Online self-testing is a cost-effective technique used to ensure the correct operation of microprocessor-based systems in the field; it also improves their dependability in the presence of failures caused by component aging.

DSM Technologies

Deep submicron (DSM) technology means the use of smaller transistors with faster switching rates [2]. As Moore’s law observes, the number of transistors on a chip roughly doubles every two years, so the technology has to fit this increase in transistors into a small area while delivering better performance and lower power [4].

III. Different Architectures used in Online Self Testing in Real Time Systems.

1. The Architecture of the DIVA Processing In Memory Chip

The DIVA system architecture was specially designed to support a smooth migration path for application software by integrating PIMs into conventional systems as seamlessly as possible. DIVA PIMs resemble, at their interfaces, commercial DRAMs, enabling PIM memory to be accessed by host software either as smart memory coprocessors or as conventional memory[2]. A separate memory to memory interconnect enables communication between memories without involving the host processor.

Fig. 1: DIVA architecture (PIM array and PIM-to-PIM interconnect) [2]

A parcel is closely related to an active message: it is a relatively lightweight communication mechanism containing a reference to a function to be invoked when the parcel is received. Parcels are transmitted through a separate PIM-to-PIM interconnect to enable communication without interfering with host memory traffic. This interconnect must support the dense packing requirement of memory devices and allow the addition or removal of devices from the system.

Each DIVA PIM chip is a VLSI memory device augmented with general-purpose computing and communication hardware [3]. A PIM may consist of multiple nodes, each of which primarily comprises a few megabytes of memory and a node processor.

2. Chip Multiprocessor Architecture (CMP Architecture)

Chip multiprocessors, also called multi-core microprocessors or CMPs for short, are now the only way to build high-performance microprocessors, for a number of reasons [6].

Fig. 2: CMP architecture [6]

3. SCMP Architecture: An Asymmetric Multiprocessor System-on-Chip

Future systems will have to support multiple, concurrent, dynamic compute-intensive applications while respecting real-time and energy consumption constraints. Within this framework, an architecture named SCMP has been presented [5]. This asymmetric multiprocessor can support dynamic migration and preemption of tasks, thanks to concurrent control of tasks, while offering a specific data-sharing solution. Its tasks are controlled by a dedicated HW-RTOS that allows online scheduling of independent real-time and non-real-time tasks. By incorporating a connected-component labelling algorithm into this platform, its benefits for real-time and dynamic image processing have been measured.

In response to an ever-increasing demand for computational efficiency, the performance of embedded system architectures has improved constantly over the years. This has been made possible through fewer gates per pipeline stage, deeper pipelines, better circuit designs, faster transistors with new manufacturing processes, and enhanced instruction-level or data-level parallelism (ILP or DLP) [7].

An increase in the level of parallelism requires the integration of larger cache memories and more sophisticated branch prediction systems. It therefore has a negative impact on the transistors’ efficiency, since the part of these that performs computations is being gradually reduced. Switching time and transistor size are also reaching their minimum limits.

The SCMP architecture has a CMP structure and uses migration and fast preemption mechanisms to eliminate idle execution slots. While this means bigger switching penalties, it ensures greater flexibility and reactivity for real-time systems.

Programming Model

The programming model for the SCMP architecture is specifically adapted to dynamic applications and global scheduling methods. The proposed programming model is based on the explicit separation of the control and the computation parts. Computation tasks and the control task are extracted from the application so that each task is a standalone program. The control task handles computation task scheduling and other control functionalities, such as synchronisation and shared-resource management. Each embedded application can be divided into a set of independent threads, from which explicit execution dependencies are extracted, and each thread can in turn be divided into a finite set of tasks. The more independent, parallel tasks that are extracted, the more the application can be accelerated at runtime. A minimal sketch of this control/computation separation is given below.
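As an illustration only (this is not the SCMP HW-RTOS interface, which is implemented in hardware; task names and the dependency graph below are hypothetical), the following Python sketch shows the idea of a control task that dispatches independent computation tasks as soon as their explicit dependencies have completed:

```python
# Illustrative sketch (not the SCMP HW-RTOS): a "control task" that dispatches
# independent "computation tasks" once their explicit dependencies complete.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def make_task(name):
    def task():
        # Stand-in for a standalone computation task.
        return f"{name} done"
    return task

# Hypothetical application graph: task -> set of tasks it depends on.
dependencies = {
    "load":   set(),
    "filter": {"load"},
    "label":  {"filter"},      # e.g. connected-component labelling
    "stats":  {"filter"},
    "merge":  {"label", "stats"},
}

def control_task(dependencies):
    """Global scheduler: launch every task whose dependencies are satisfied."""
    finished, running = set(), {}
    with ThreadPoolExecutor(max_workers=4) as pool:   # 4 PEs, chosen arbitrarily
        while len(finished) < len(dependencies):
            for name, deps in dependencies.items():
                if name not in finished and name not in running and deps <= finished:
                    running[name] = pool.submit(make_task(name))
            done, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in running.items() if f in done]:
                print(running.pop(name).result())
                finished.add(name)

control_task(dependencies)
```

The design choice mirrors the text: the control part holds the dependency knowledge and the scheduling loop, while each computation part is a self-contained callable that can run on any free processing element.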

Fig. 3: SCMP processing

As shown in Fig. 4, the SCMP architecture is made of multiple PEs and I/O controllers. It is designed to provide real-time guarantees while optimising resource utilisation and energy consumption. The execution of applications on the SCMP architecture proceeds as follows. When the OSoC receives an execution order for an application, the application’s Petri net representation is built in the Task Execution and Synchronisation Management Unit (TSMU) of the OSoC. The execution and configuration demands are then sent to the Selection Unit according to the application status. They contain all of the active tasks that can be executed and the coming active tasks that can be prefetched. The scheduling of all active tasks must then incorporate the tasks of the newly loaded application. If a non-configured task is ready and waiting for execution, or a free resource is available, the PE and Memory Allocation Unit sends a configuration primitive to the Configuration Unit.

Fig. 4: SCMP architecture [5]

Table of Comparison

Paper: The Architecture of the DIVA Processing In Memory Chip
Year of publication: 2002
Authors: Jeff Draper, Jacqueline Chame, Mary Hall, Craig Steele, Tim Barrett, Jeff LaCoss, John Granacki, Jaewook Shin, Chun Chen, Chang Woo Kang, Ihn Kim, Gokhan Daglikoca
Limits: The paper gives a detailed description of the DIVA PIM architecture. Open issues remain in exploiting memory bandwidth, particularly the memory interface and controller, instruction-set features for fine-grained parallel operation, and the mechanism for address translation.

Paper: Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency
Year of publication: 2007
Authors: Kunle Olukotun, Lance Hammond, James Laudon
Limits: The work provides a foundation for future exploration in the area of defect-tolerant design. The authors plan to investigate the use of spare components, based on wear-out profiles, to provide more sparing for the most vulnerable components. Further, a CMP switch is only a first step toward the overarching goal of designing a defect-tolerant CMP system.

Paper: SCMP Architecture: An Asymmetric Multiprocessor System-on-Chip for Dynamic Applications
Year of publication: 2010
Authors: Nicolas Ventroux, Raphael David
Limits: The architecture, called SCMP, consists of a hardware real-time operating system accelerator (HW-RTOS) and multiple computing, memory and input/output resources. The overhead due to control and execution management is limited by a highly efficient task and data-sharing management scheme, despite the use of centralised control. Future work will focus on the development of tools to ease the programming of the SCMP architecture.

Conclusion

We have surveyed how on-line self-testing can be controlled in a real-time embedded multiprocessor for dynamic but non-safety-critical applications using different architectures. We analyzed the impact of three on-line self-testing architectures in terms of performance penalty and fault detection probability. As long as the architecture load remains under a certain threshold, the performance penalty is low and an aggressive self-test policy, as proposed in [8], can be applied to such an architecture. Otherwise, on-line self-testing should consider the scheduling decision in order to mitigate the overhead, at the cost of fault detection probability. It was shown that a policy that periodically applies a test to each processor in a way that accounts for the idle states of the processors, the test history and the task priority offers a good trade-off between performance and fault detection probability. The principle and methodology can be generalized to other multiprocessor architectures.
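As an illustration of the kind of policy described above (a sketch only, with hypothetical scoring weights, and not the actual policy evaluated in [2]), the following Python fragment periodically picks a processor to test based on its idle state, the time since its last test, and the priority of its current task:

```python
# Illustrative sketch of a periodic self-test selection policy that accounts for
# processor idle state, test history and task priority (weights are hypothetical).
import time
from dataclasses import dataclass, field

@dataclass
class Processor:
    pid: int
    idle: bool = True
    task_priority: int = 0                      # 0 = no task / lowest priority
    last_test: float = field(default_factory=time.monotonic)

def pick_processor_to_test(processors, now=None):
    """Return the processor with the highest test 'urgency' score."""
    now = time.monotonic() if now is None else now
    def score(p):
        elapsed = now - p.last_test             # favour stale test history
        idle_bonus = 10.0 if p.idle else 0.0    # favour idle processors
        penalty = 2.0 * p.task_priority         # avoid preempting high-priority tasks
        return elapsed + idle_bonus - penalty
    return max(processors, key=score)

def periodic_self_test(processors, period_s=1.0, rounds=3):
    for _ in range(rounds):
        target = pick_processor_to_test(processors)
        print(f"testing PE {target.pid}")       # a software-based self-test would run here
        target.last_test = time.monotonic()
        time.sleep(period_s)

periodic_self_test([Processor(0), Processor(1, idle=False, task_priority=5)])
```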

References

[1] R. Mall, “Real-Time Systems: Theory and Practice”, Pearson Education, 3rd Edition, 2008.

[2] O. Heron, J. Guilhemsang, N. Ventroux et al., “Analysis of On-Line Self-Testing Policies for Real-Time Embedded Multiprocessors in DSM Technologies”, IEEE, 2010.

[3] J. Draper et al., “The Architecture of the DIVA Processing In Memory Chip”, ICS ’02, June 2002.

[4] C. Constantinescu, “Impact of deep submicron technology on dependability of VLSI circuits”, IEEE DSN, pp. 205-209, 2002.

[5] N. Ventroux and R. David, “SCMP architecture: An Asymmetric Multiprocessor System-on-Chip for Dynamic Applications”, ACM Second International Forum on Next-Generation Multicore/Manycore Technologies, Saint-Malo, France, 2010.

[6] K. Olukotun, L. Hammond and J. Laudon, “Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency”, 2007.

[7] A. Paschalis and D. Gizopoulos, “Effective Software-Based Self-Test Strategies for On-Line Periodic Testing of Embedded Processors”, DATE, pp. 578-583, 2004.

[8] D. Gizopoulos et al., “Systematic Software-Based Self-Test for Pipelined Processors”, IEEE Trans. on VLSI Systems, vol. 16, pp. 1441-1453, 2008.


Negative Effects of Technology on Society

Negative Effects of Technology

There is no doubt that technology plays a critical role in developing societies, as countries depend on it in all disciplines of life. Countries all over the globe are competing to invent and develop the most advanced technological devices that can maintain the highest efficiency and accuracy of work. Starting from the 1980s, people began to use technology every day, and its use kept rising dramatically until people relied on it for even the smallest tasks. That overuse resulted in many negatives. There are many negative effects of the overuse of technology on societies, but the three major effects are health problems, privacy problems and social problems.

One of the negative sides of the rapid use of technology is its effect on health. There is no doubt that technology is improving and spreading around the world, which has led societies to deal with it almost every day to get their work done, resulting in many issues. These issues can mainly be divided into two categories: mental health problems and physical health problems. First are the physical health problems. There are many physical health issues caused by dealing with technology, but the most critical is weakened eyesight. People who deal with computers for long periods of time, such as programmers, suffer from blurry vision and eye soreness. According to Tripp (2013), people who use computers for long periods experience serious issues such as soreness of the eyes and blurred vision. This is because the eyes have to concentrate all the time on a screen that emits radiation that strains them, leading the eyes to water, which blurs vision and results in soreness over the long term. In addition, technology can cause mental problems for some. People who spend hours with TVs and computers without interacting with others can become discouraged and withdrawn, which results in a fear of talking to people and can lead to a mental illness called anxiety. Unfortunately, parents nowadays expose their children to technology without taking into consideration the harmful effects of such an action, causing their children to suffer from mental health problems. Crawford (2011) mentioned that, due to spending huge amounts of time with technology, a huge number of children were diagnosed with bipolar disorder, anxiety and depression, resulting in the use of enormous amounts of psychotropic medication. This is because parents do not take into consideration that technology isolates the child from the outer world, and mental health problems follow. To sum up, technology has double-edged effects, both mental and physical.

The second negative effect of the overuse of technology on society is the loss of privacy and security. As the world experiences many technological advances, it also faces privacy and security problems that can strip people of their personal information. Firstly, privacy issues. Privacy problems are among the most critical issues; they concern the tracking of locations and the spying on of information. It is very easy for professionals to trace and spy on any electronic device that connects to a network, simply by tracking its IP location using programs and then establishing connections by exploiting weaknesses, resulting in access to the user’s data. For example, some advertising websites can track location, watch what users do and see what users like and dislike by surveying which products are preferred, while some countries spy on other countries to maintain their internal security and to obtain extremely important information. According to the Thai serves group (2012), the minister of communication in Iran claimed that the West is spying on the internet, resulting in the spread of corruption. This is plausible because the West has more advanced instruments that can track and spy on anyone in the world with high accuracy. In addition to privacy problems, security problems can negatively affect anyone in the world; they concern the stealing of information. Users’ information on any website is saved in the website’s cloud storage, and that storage is what professionals aim for. If it is not well secured, all users’ information, including IDs and passwords, is threatened. For example, if a professional hacker logs into a bank database, he can cause fatal damage to the users and the bank and get away leaving almost no trace. “Computer predators” (n.d.) mentioned that while computers are connected to the internet, hackers send malware to seek out financial information and transfer it to themselves. This can be seen in the many hackers who have used such malware to penetrate banking systems in the West. To wrap up, technology dramatically affects privacy and security.

The third negative effect of the overuse of technology on societies is the social issue. Over the years technology has become more advanced not only in business and work but in numerous other fields. One of the important fields that earns enormous profits is gaming, which has been modified many times to make games more realistic and more similar to real life. Games are now so realistic that killing and other disturbing scenes are included in them. The serious effects of games can be divided into two main categories: temper fluctuations and lack of social skills. Firstly, due to killing and blood scenes in games, aggression has spread among teenagers. According to Alamy (2012), teens who regularly play brutal games become extremely aggressive. This can be seen because watching such scenes makes teenagers or children more likely to attack people or cause harm to their friends at school or in the neighborhood. In addition to aggression, a lack of social skills is another result of the overuse of technology. Sitting all day working on electronics such as computers can isolate a person. It is clear that technology has boosted communication by making it possible to contact anyone at the press of a button, but it is also responsible for the rapid loss of relationships. Nowadays some people can be on a date, sitting next to each other, yet in reality have no real contact. According to Howarth (2014), children’s social skills are decreasing dramatically because of the long periods of time they spend interacting with technology. This is clear because the more a child is attached to technology, the weaker his or her social skills will be. To wrap up, technology affects the socializing of children.

Finally, technology was made to serve the world, but people have used it so intensively that it has caused serious problems: health problems, privacy problems and social problems. Health problems are critical issues that affect the mental and physical health of the user. Privacy is negatively affected by spying and the stealing of users’ information. Socializing is affected by rapid changes of temper and a lack of social skills. People should spend less time with technology, communicate with each other more, and use technology at rates that do not hurt them, to avoid problems in the future.


Modelling of β-turns using Hidden Markov Model

Nivedita Rao
Ms. Sunila Godara

Abstract— One of the major tasks in predicting the secondary structure of a protein is finding the β-turns. The functional and structural traits of a globular protein can be better understood through its turns, as they play an important role, and β-turns in particular play an important part in protein folding. β-turns constitute on average about 25% of the residues in all protein chains and are the most common form of non-repetitive structure. It is already known that helices and β-sheets are among the most important elements stabilising protein structures. In this paper we use a hidden Markov model (HMM) to predict the β-turns in proteins based on amino acid composition and compare it with other existing methods.

Keywords— β-turns, amino acid composition, hidden Markov model, residue.

I. Introduction

Bioinformatics has become a vital part of many areas of biology. In molecular biology, bioinformatics techniques such as signal processing or image processing allow the mining of useful results from large volumes of raw data. In the field of genetics and genomics, it helps in sequencing and annotating genomes and their observed mutations. It plays an important part in the analysis of protein expression, gene expression and their regulation. It also helps in the text mining of the biological literature and in the development of biological and gene ontologies for organising and querying biological data. Bioinformatics tools aid in the comparison of genetic and genomic data and, more generally, in the understanding of evolutionary aspects of molecular biology. At a more integrated level, bioinformatics helps in analysing and cataloguing the biological pathways and networks that are a significant part of systems biology. In structural biology, it helps in the understanding, simulation and modelling of RNA, DNA and protein structures as well as molecular binding.

Advances in genome sequencing have increased radically over recent years, resulting in an explosive growth of biological data and widening the gap between the number of protein sequences stored in databases and the experimental annotation of their functions.

There are several types of tight turns, classified according to the number of atoms forming the turn [1]. Among them is the β-turn, one of the important components of protein structure, as it plays an important part in molecular structure and protein folding. A β-turn involves four consecutive residues where the polypeptide chain folds back on itself by about 180 degrees [2].

Basically, these chain reversals are what give a protein its globularity rather than linearity. β-turns can be further classified into different types. According to Venkatachalam [3], β-turns can be of 10 types based on the phi and psi angles. Richardson [4] suggested only 6 distinct types (I, I′, II, II′, VIa and VIb) on the basis of phi, psi ranges, along with a new category, IV. At present, the classification by Richardson is the most widely used.

Turns can be considered an important part of globular proteins with respect to both their structural and their functional roles. Without turns, a polypeptide chain cannot fold itself into a compact structure. Also, turns normally occur on the exposed surface of proteins and therefore possibly represent antigenic sites or are involved in molecular recognition. For these reasons, the prediction of β-turns in proteins is an important element of secondary structure prediction.

II. RELATED WORK

A lot of work has been done on the prediction of β-turns. To determine the chain-reversal regions of a globular protein, Chou et al. [5] used conformational parameters. Chou et al. [6] gave a residue-coupled model for predicting the β-turns in proteins. Chou et al. [7] used tetrapeptide sequences. Chou [8] again predicted tight turns and their types in proteins using amino acid residues. Guruprasad et al. [9] predicted β-turns and γ-turns in proteins using a new set of amino acid preferences and hydrogen bonds. Hutchinson et al. [10] created a program called PROMOTIF to identify and analyse structural motifs in proteins. Shepherd et al. [11] used neural networks to predict the location and type of β-turns. Wilmot et al. [12] analysed and predicted the different types of β-turn in proteins using phi, psi angles and central residues. Wilmot et al. [13] proposed a new nomenclature, used in GORBTURN 1.0, for predicting β-turns and their distortions.

This study uses a hidden Markov model to predict the β-turns in a protein. HMMs have been widely used as tools in biology.

Figure 1.1: (a) Type-I β-turn and (b) Type-II β-turn. The hydrogen bond is denoted by dashed lines [14].

III. Materials and methods
A. Dataset

The dataset used in the experiment is a non-redundant dataset previously described by Guruprasad and Rajkumar [9]. It contains 426 non-homologous protein chains, no two of which have more than 25% sequence similarity; this ensures that there is very little correlation within the training set. Each protein chain in the dataset contains at least one β-turn and has an X-ray crystallographic structure with a resolution of 2.0 Å or better.

The dataset contains ten main structural classes; further classes are formed from combinations of these ten.

Table 1: Dataset description [14]

α proteins (class a): 68
β proteins (class b): 97
α/β proteins (class c): 102
α+β proteins (class d): 86
Multiple-domain proteins (class e): 9
Small proteins (class f): 2
Coiled proteins (class g): 22
Low-resolution proteins (class h): 0
Peptides (class i): 0
Designed proteins (class j): 1
Proteins in both classes a and b: 3
Proteins in both classes a and c: 7
Proteins in both classes a and d: 5
Proteins in both classes b and c: 6
Proteins in both classes b and d: 4
Proteins in both classes b and f: 1
Proteins in both classes c and d: 10
Proteins in both classes c and g: 1
Proteins in classes b, c and d: 2

B. Hidden Markov model

In our work, we have used the probabilistic framework of an HMM for β-turn prediction. A model is assumed in which the protein sequence is generated by a stochastic process that alternates between two hidden states: “turn” and “non-turn”. The HMM is trained using 20 protein sequences.

The state transition probability matrix is 2×2, for the two states turn and non-turn. The emission probability matrix is 2×20, as there are 2 states and 20 amino acids. We initialised the transition and emission matrices using our prior knowledge of the dataset, namely that non-turn residues are more probable than β-turn residues in a protein sequence, and using the residue probabilities taken from Chou [7] as parameters.

Although there are more than ten structural classes, the HMM parameters are estimated with 2 super-states, and the training was performed on this two-state model.

Let P be a protein sequence of length n, which can be expressed as

P = r1 r2 … rn

where ri is the amino acid residue at sequence position i. In the hidden Markov model, the sequence is considered to be generated from r1 to rn. The model is trained using the Baum-Welch algorithm [15].

The Baum-Welch algorithm is a standard method for finding the maximum-likelihood estimates of the HMM parameters, in which posterior probabilities are computed using the forward and backward algorithms. These algorithms were used to estimate the state transition probability and emission probability matrices.

The initial probabilities are calculated taking into account the correlation between residues at different positions. The most probable state path is then calculated using the Viterbi algorithm [16], which automatically segments the protein into its component regions. An illustrative sketch of such a two-state decoding is given below.
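The following Python sketch (illustrative only; the transition and emission values shown are hypothetical placeholders, not the trained parameters of this paper) performs two-state Viterbi decoding over an amino acid sequence, labelling each residue as turn or non-turn:

```python
import math

STATES = ["turn", "non-turn"]
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Hypothetical parameters: non-turn is a priori more likely than turn.
start_p = {"turn": 0.25, "non-turn": 0.75}
trans_p = {"turn": {"turn": 0.6, "non-turn": 0.4},
           "non-turn": {"turn": 0.1, "non-turn": 0.9}}
# Uniform emissions used here as a placeholder; a trained model would use the
# residue-specific probabilities estimated by Baum-Welch.
emit_p = {s: {aa: 1.0 / 20 for aa in AMINO_ACIDS} for s in STATES}

def viterbi(sequence):
    """Return the most probable turn/non-turn state path for a protein sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][sequence[0]]) for s in STATES}]
    back = [{}]
    for i, aa in enumerate(sequence[1:], start=1):
        V.append({}); back.append({})
        for s in STATES:
            prev, score = max(
                ((p, V[i - 1][p] + math.log(trans_p[p][s])) for p in STATES),
                key=lambda x: x[1])
            V[i][s] = score + math.log(emit_p[s][aa])
            back[i][s] = prev
    # Trace back the best path from the final state.
    state = max(STATES, key=lambda s: V[-1][s])
    path = [state]
    for i in range(len(sequence) - 1, 0, -1):
        state = back[i][state]
        path.append(state)
    return list(reversed(path))

print(viterbi("MKTAYIAKQR"))
```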

The emission probability of a residue in a given state, used to generate the emission matrix, is estimated as

p(r) = m / n

where m is the number of occurrences of that residue in the protein sequence and n is the total number of residues in the protein sequence.
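As a sketch of this counting-based estimate (assuming labelled training sequences; the sequences and labels below are made up for illustration), the 2×20 emission matrix can be built as follows:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Hypothetical labelled training data: residue sequence and per-residue labels
# ('T' = turn, 'N' = non-turn) of the same length.
training = [
    ("MKTAYIAKQR", "NNTTNNNNNN"),
    ("GAVLDKEEAF", "NNNNTTNNNN"),
]

def estimate_emissions(training):
    """Estimate P(residue | state) as count(residue in state) / count(state)."""
    counts = {"T": Counter(), "N": Counter()}
    for seq, labels in training:
        for aa, lab in zip(seq, labels):
            counts[lab][aa] += 1
    emissions = {}
    for state, ctr in counts.items():
        total = sum(ctr.values())
        # Add-one smoothing so unseen residues keep a small non-zero probability.
        emissions[state] = {aa: (ctr[aa] + 1) / (total + 20) for aa in AMINO_ACIDS}
    return emissions

emit = estimate_emissions(training)
print(round(emit["T"]["A"], 3), round(emit["N"]["A"], 3))
```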

C. Accuracy measures

Once the prediction of β-turns has been performed using the hidden Markov model, an appropriate measure of the quality of the prediction is needed. Four different scalar measures are used to assess the model’s performance [17]. These measures are derived from four different quantities:

TP (true positive), p, is the number of correctly classified β-turn residues.

TN (true negative), n, is the number of correctly classified non-β-turn residues.

FP (false positive), m, is the number of non-β-turn residues incorrectly classified as β-turn residues.

FN (false negative), o, is the number of β-turn residues incorrectly classified as non-β-turn residues.

The predictive performance of the HMM model can be expressed by the following parameters:

Qtotal gives the percentage of correctly classified residues.

MCC (Matthews Correlation Coefficient) [18] is a measure that accounts for both over- and under-predictions.

Qpredicted is the percentage of β-turn predictions that are correct.

Qobserved is the percentage of observed β-turns that are correctly predicted.
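In terms of the quantities p, n, m and o defined above, these measures take their standard forms (stated here for completeness; the paper itself does not reproduce the formulas):

```latex
\begin{align}
Q_{\mathrm{total}}     &= \frac{p + n}{p + n + m + o} \times 100\% \\
Q_{\mathrm{predicted}} &= \frac{p}{p + m} \times 100\% \\
Q_{\mathrm{observed}}  &= \frac{p}{p + o} \times 100\% \\
\mathrm{MCC}           &= \frac{pn - mo}{\sqrt{(p+m)(p+o)(n+m)(n+o)}}
\end{align}
```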

IV. Results and Discussion
A. Results

The model used to predict the β-turns is based on a hidden Markov model with two classes, turn and non-turn, and it predicts one protein sequence at a time. It has been observed that it performs better than some existing prediction methods.

B. Comparison with other methods

In order to evaluate this method, it has been compared with other existing methods, as shown in Table 2.

For now, the comparison is done on a single protein sequence, the protein with PDB code 1ah7.

Figure 2 shows the comparison of Qtotal using different algorithms, Figure 3 of Qpredicted, Figure 4 of Qobserved, and Figure 5 of MCC. The HMM-based method shows better results than some of the already existing prediction algorithms.

Figure 2. Comparison of Qtotal with different algorithms

Figure 3. Comparison of Qpredicted with different algorithms

Figure 4. Comparison of Qobserved with different algorithms

Figure 5. Comparison of MCC with different algorithms

Table 2: Comparison with other methods

Chou-Fasman algorithm: Qtotal 56.5%, Qpredicted 26.7%, Qobserved 76.1%, MCC 0.22
Thornton’s algorithm: Qtotal 62.2%, Qpredicted 30.9%, Qobserved 82.6%, MCC 0.31
1-4 & 2-3 correlation model: Qtotal 53.7%, Qpredicted 24.2%, Qobserved 69.6%, MCC 0.15
Sequence coupled model: Qtotal 51.8%, Qpredicted 21.4%, Qobserved 58.7%, MCC 0.07
GORBTURN: Qtotal 77.6%, Qpredicted 40.8%, Qobserved 43.5%, MCC 0.28
HMM based method: Qtotal 54.2%, Qpredicted 28.5%, Qobserved 67.3%, MCC 0.27

V. Conclusion

In this paper we have presented a way in which an HMM can be used to predict β-turns in a protein chain. Our method predicts the turns and non-turns of a single protein sequence at a time, and the results obtained are better than those of some other existing methods. The performance of β-turn prediction can be further improved by considering other techniques, such as using predicted secondary structures and dihedral angles from multiple predictors, using feature selection techniques [19], or combining many features together. Different machine learning techniques can also be combined to improve prediction performance.

References

[1] Chou, Kuo-Chen. “Prediction of tight turns and their types in proteins.” Analytical Biochemistry 286.1 (2000): 1-16.
[2] Chou, P.Y. and Fasman, G.D. “Conformational parameters for amino acids in helical, beta-sheet and random coil regions calculated from proteins.” Biochemistry 13 (1974): 211-222.
[3] Venkatachalam, C. M. “Stereochemical criteria for polypeptides and proteins. V. Conformation of a system of three linked peptide units.” Biopolymers 6.10 (1968): 1425-1436.
[4] Richardson, Jane S. “The anatomy and taxonomy of protein structure.” Advances in Protein Chemistry 34 (1981): 167-339.
[5] Chou, P. Y., and G. D. Fasman. “Prediction of beta-turns.” Biophysical Journal 26.3 (1979): 367-383.
[6] Chou, K.C. “Prediction of beta-turns.” Journal of Peptide Research (1997): 120-144.
[7] Chou, Kuo-Chen, and James R. Blinn. “Classification and prediction of β-turn types.” Journal of Protein Chemistry 16.6 (1997): 575-595.
[8] Chou, Kuo-Chen. “Prediction of tight turns and their types in proteins.” Analytical Biochemistry 286.1 (2000): 1-16.
[9] Guruprasad, Kunchur, and Sasidharan Rajkumar. “Beta- and gamma-turns in proteins revisited: a new set of amino acid turn-type dependent positional preferences and potentials.” Journal of Biosciences 25.2 (2000): 143.
[10] Hutchinson, E. Gail, and Janet M. Thornton. “PROMOTIF—a program to identify and analyze structural motifs in proteins.” Protein Science 5.2 (1996): 212-220.
[11] Shepherd, Adrian J., Denise Gorse, and Janet M. Thornton. “Prediction of the location and type of β-turns in proteins using neural networks.” Protein Science 8.5 (1999): 1045-1055.
[12] Wilmot, C. M., and J. M. Thornton. “Analysis and prediction of the different types of β-turn in proteins.” Journal of Molecular Biology 203.1 (1988): 221-232.
[13] Wilmot, C. M., and J. M. Thornton. “β-Turns and their distortions: a proposed new nomenclature.” Protein Engineering 3.6 (1990): 479-493.
[14] Available from: http://imtech.res.in/raghava/betatpred/intro.html
[15] Welch, Lloyd R. “Hidden Markov models and the Baum-Welch algorithm.” IEEE Information Theory Society Newsletter 53.4 (2003): 10-13.
[16] Lou, Hui-Ling. “Implementing the Viterbi algorithm.” Signal Processing Magazine, IEEE 12.5 (1995): 42-52.
[17] Fuchs, Patrick FJ, and Alain JP Alix. “High accuracy prediction of β-turns and their types using propensities and multiple alignments.” Proteins: Structure, Function, and Bioinformatics 59.4 (2005): 828-839.
[18] Matthews, Brian W. “Comparison of the predicted and observed secondary structure of T4 phage lysozyme.” Biochimica et Biophysica Acta (BBA) - Protein Structure 405.2 (1975): 442-451.
[19] Saeys, Yvan, Inaki Inza, and Pedro Larranaga. “A review of feature selection techniques in bioinformatics.” Bioinformatics 23.19 (2007): 2507-2517.

The mesh generation

Describe general methods (structured, unstructured, hybrid, adaptive, etc.) and discuss their key features and applications

A key step of the finite element method for numerical computation is mesh generation. One is given a domain (such as a polygon or polyhedron; more realistic versions of the problem allow curved domain boundaries) and must partition it into simple “elements” meeting in well-defined ways. There should be few elements, but some portions of the domain may need small elements so that the computation is more accurate there. All elements should be “well shaped” (which means different things in different situations, but generally involves bounds on the angles or aspect ratio of the elements). One distinguishes “structured” and “unstructured” meshes by the way the elements meet; a structured mesh is one in which the elements have the topology of a regular grid. Structured meshes are typically easier to compute with (saving a constant factor in runtime) but may require more elements or worse-shaped elements. Unstructured meshes are often computed using quadtrees, or by Delaunay triangulation of point sets; however, there are quite varied approaches for selecting the points to be triangulated.

The simplest algorithms directly compute nodal placement from some given function; these are referred to as algebraic algorithms. Many of the algorithms for the generation of structured meshes are descendants of “numerical grid generation” algorithms, in which a differential equation is solved to determine the nodal placement of the grid. In many cases the system solved is an elliptic system, so these methods are often referred to as elliptic methods.

It is difficult to make general statements about unstructured mesh generation algorithms because the most prominent methods are very different in nature. The most popular family of algorithms comprises those based upon Delaunay triangulation, but other methods, such as quadtree/octree approaches, are also used.

Delaunay Methods

Many of the commonly used unstructured mesh generation techniques are based upon the properties of the Delaunay triangulation and its dual, the Voronoi diagram. Given a set of points in a plane, a Delaunay triangulation of these points is the set of triangles such that no point is inside the circumcircle of any triangle. The triangulation is unique if no three points are on the same line and no four points are on the same circle. A similar definition holds for higher dimensions, with tetrahedra replacing triangles in 3D.
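As a small illustration (assuming SciPy is available; the point set is arbitrary), a planar Delaunay triangulation can be computed and inspected as follows:

```python
import numpy as np
from scipy.spatial import Delaunay

# Arbitrary planar point set (e.g. boundary and interior nodes of a domain).
rng = np.random.default_rng(0)
points = rng.random((20, 2))

tri = Delaunay(points)

# Each row of tri.simplices holds the indices of one triangle's three vertices.
print("number of triangles:", len(tri.simplices))
print("first triangle vertices:\n", points[tri.simplices[0]])
```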

Quadtree/Octree Methods

Mesh adaptation, often referred to as Adaptive Mesh Refinement (AMR), refers to the modification of an existing mesh so as to accurately capture flow features. Generally, the goal of these modifications is to improve the resolution of flow features without an excessive increase in computational effort. We shall discuss briefly some of the concepts important in mesh adaptation.

Mesh adaptation strategies can usually be classified as one of three general types: r-refinement, h-refinement, or p-refinement. Combinations of these are also possible, for example hp-refinement and hr-refinement. We summarise these types of refinement below.

r-refinement is the modification of mesh resolution without changing the number of nodes or cells present in a mesh or the connectivity of the mesh. The increase in resolution is achieved by moving the grid points into regions of activity, which results in a greater clustering of points in those regions. The movement of the nodes can be controlled in various ways. One common technique is to treat the mesh as if it were an elastic solid and solve a system of equations (subject to some forcing) that deforms the original mesh; a small sketch of this idea is given below. Care must be taken, however, that no problems due to excessive grid skewness arise.
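A minimal sketch of the spring analogy on a 1-D mesh follows (an illustration only, not a production r-refinement scheme; the "activity" monitor function is hypothetical). Edges in high-activity regions get stiffer springs, so nodes cluster there while node count and connectivity stay fixed:

```python
import numpy as np

def monitor(x):
    # Hypothetical activity indicator: a sharp feature near x = 0.5.
    return 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)

def r_refine_1d(x, iterations=200):
    """Move interior nodes as if connected by springs whose stiffness follows
    the monitor function; node count and connectivity stay unchanged."""
    x = x.copy()
    for _ in range(iterations):
        mid = 0.5 * (x[:-1] + x[1:])      # edge midpoints
        w = monitor(mid)                  # spring stiffness per edge
        # Equilibrium position of each interior node between its two springs.
        x[1:-1] = (w[:-1] * x[:-2] + w[1:] * x[2:]) / (w[:-1] + w[1:])
    return x

x0 = np.linspace(0.0, 1.0, 21)
x1 = r_refine_1d(x0)
print("smallest cell after refinement:", np.min(np.diff(x1)))
```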

h-refinement is the modification of mesh resolution by changing the mesh connectivity. Depending upon the technique used, this may not result in a change in the overall number of grid cells or grid points. The simplest strategy for this type of refinement subdivides cells, while more complex procedures may insert or remove nodes (or cells) to change the overall mesh topology.

In the subdivision case, every “parent cell” is divided into “child cells”. The choice of which cells are to be divided is addressed below. For every parent cell, a new point is added on each face; for 2-D quadrilaterals, a new point is also added at the cell centroid. Joining these points yields 4 new “child cells”, so every quad parent gives rise to four new offspring. The advantage of such a procedure is that the overall mesh topology remains the same, with the child cells taking the place of the parent cell in the connectivity arrangement. The subdivision process is similar for a triangular parent cell. It is easy to see that subdivision increases both the number of points and the number of cells; a sketch of quad subdivision is given below.
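The following sketch (an illustration only; cell and point bookkeeping is simplified) subdivides a 2-D quadrilateral cell into four children by inserting edge midpoints and the centroid:

```python
def subdivide_quad(quad):
    """Split a quad, given as four (x, y) corners in order, into 4 child quads."""
    p0, p1, p2, p3 = quad
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    m01, m12, m23, m30 = mid(p0, p1), mid(p1, p2), mid(p2, p3), mid(p3, p0)
    centroid = ((p0[0] + p1[0] + p2[0] + p3[0]) / 4.0,
                (p0[1] + p1[1] + p2[1] + p3[1]) / 4.0)
    # Each child keeps one original corner, two edge midpoints and the centroid.
    return [
        (p0, m01, centroid, m30),
        (m01, p1, m12, centroid),
        (centroid, m12, p2, m23),
        (m30, centroid, m23, p3),
    ]

parent = ((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0))
for child in subdivide_quad(parent):
    print(child)
```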

p-refinement is a tool used more in Finite Element Modelling (FEM) than in Finite Volume Modelling (FVM); it achieves increased resolution by increasing the order of accuracy of the polynomial in each element (or cell).

In AMR, the selection of “parent cells” to be divided is made on the basis of regions where there is appreciable flow activity. It is well known that in compressible flows the major features include shocks, boundary layers, shear layers, vortex flows, Mach stems, expansion fans and the like. Each feature has some “physical signature” that can be exploited numerically. For example, shocks always involve a density/pressure jump and can be detected from their gradients, whereas boundary layers are always associated with rotationality and hence can be detected using the curl of the velocity. In compressible flows the velocity divergence, which is a measure of compressibility, is also a good choice for shocks and expansions. These sensing parameters, which indicate regions of the flow where there is activity, are referred to as ERROR INDICATORS and are very popular in AMR for CFD; a small sketch of a gradient-based indicator is given below.
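As an illustration (a sketch only, with a made-up 1-D density field and an arbitrary threshold), a simple gradient-based error indicator flags cells for refinement where the density jump between neighbours is large:

```python
import numpy as np

# Hypothetical 1-D density field with a shock-like jump near x = 0.5.
x = np.linspace(0.0, 1.0, 101)
rho = np.where(x < 0.5, 1.0, 0.125) + 0.01 * np.sin(20 * np.pi * x)

def flag_cells_for_refinement(rho, dx, threshold=1.0):
    """Mark cells whose density gradient magnitude exceeds a threshold."""
    grad = np.abs(np.diff(rho)) / dx          # one value per cell face
    indicator = np.zeros(len(rho), dtype=bool)
    indicator[:-1] |= grad > threshold        # flag the cell left of a steep face
    indicator[1:] |= grad > threshold         # and the cell to its right
    return indicator

flags = flag_cells_for_refinement(rho, dx=x[1] - x[0])
print("cells flagged for refinement:", np.flatnonzero(flags))
```

Without the threshold, the shock cells would be flagged at every level of refinement, which is exactly the behaviour the next paragraph warns about.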

Just as refinement is driven by ERROR INDICATORS as described above, certain other issues also become relevant. Although error indicators do detect regions for refinement, they do not actually tell whether the resolution is good enough at any given time. In fact the issue is severe for shocks: the smaller the cell, the higher the gradient, and the indicator would keep picking the same region unless a threshold value is provided. Further, many users adopt conservative values while refining a domain and generally end up refining more than the essential portion of the grid, though not the complete domain. These extra refined regions are unnecessary and, in the strictest sense, contribute to unnecessary computational effort. It is at this juncture that reliable and reasonable measures of cell error become necessary for the process of “coarsening”, which would reduce this unnecessary refinement, with a view towards generating an “optimal mesh”. Such measures are given by sensors referred to as ERROR ESTIMATORS, on which the literature is abundant in FEM, though they are very rare in FVM.

Control of the refinement and/or coarsening via the error indicators is often undertaken using either the solution gradient or the solution curvature. Hence the refinement variable, the refinement method and its limits all need to be considered when applying mesh adaptation.

A hybrid model contains two or more subsurface layers of hexahedral elements. Tetrahedral elements fill the interior. The transition between subsurface hexahedral and interior tetrahedral elements is made using degenerate hexahedral (pyramid) elements.

High-quality stress results demand high-quality elements, i.e., aspect ratios and internal angles as close to 1:1 and 90°, respectively, as possible. High-quality elements are particularly important at the surface. To accommodate features within a component, the quality of elements at the surface of a hexahedral model generally suffers, e.g., they are skewed. Mating components, when node-to-node contact is desired, can also adversely affect the model’s element quality. It is even more difficult to produce a tetrahedral model that contains high-quality subsurface elements. In a hybrid model, the hexahedral elements are only affected by the surface mesh, so creating high-quality elements is easy.

Minimal effort is required to convert CAD data into surface grids using the automated processes of pro-surf. These surface grids are read by pro-am. The surface grid is used to extrude the subsurface hexahedral elements. The thickness of each extruded element is controlled so that high quality elements are generated. The interior is filled automatically with tetrahedral elements. The pyramid elements that make the transition are also generated automatically.

A hybrid model will generally contain many more elements than an all-hexahedral model thus increasing analysis run-time. However, the time saved in the model construction phase – the more labor intensive phase – more than makes up for the increased run-time. Overall project time is reduced considerably. Also, as computing power increases, this “disadvantage” will eventually disappear.

Hexahedral Meshing

ANSYS Meshing provides multiple methods to generate a pure hex or hex dominant mesh. Depending on the model complexity, desired mesh quality and type, and how much time a user is able to spend meshing, a user has a scalable solution to generate a quick automatic hex or hex dominant mesh, or a highly controlled hex mesh for optimal solution efficiency and accuracy.

Mesh Methods:

Automated Sweep meshing

Sweepable bodies are automatically detected and meshed with hex mesh when possible
Edge increment assignment and side matching/mapping is done automatically
Sweep paths found automatically for all regions/bodies in a multibody part
Defined inflation is swept through connected swept bodies
User can add sizing controls, mapped controls, and select source faces to modify and take control over the automated sweeping
Adding/Modifying geometry slices/decomposition to the model also greatly aids in the automation of getting a pure hex mesh.

Thin Solid Sweep meshing

This mesh method quickly generates a hex mesh for thin solid parts that have multiple faces as source and target.
Can be used in conjunction with other mesh methods
User can add sizing controls, mapped controls, and select source faces to modify and take control over the automated sweeping

MultiZone Sweep meshing

This advanced sweeping approach uses automated topology decomposition behind the scenes to attempt to automatically create a pure hex or mostly hex mesh on complicated geometries
Decomposed topology is meshed with a mapped mesh or a swept mesh if possible. A user has the option to allow for free mesh in sub-topologies that can’t be mapped or swept.
Supports multiple source/target selection
Defined inflation is swept through connected swept bodies
User can add sizing controls, mapped controls and select source faces to modify and take control over the automated meshing

Hex-dominant meshing

This mesh method uses an unstructured meshing approach to generate a quad dominant surface mesh and then fill it with a hex dominant mesh
This approach generally gives nice hex elements on the boundary of a chunky part with a hybrid hex, prism, pyramid, tet mesh internally

Tetrahedral Meshing

The combination of robust and automated surface, inflation and tet meshing using default physics controls to ensure a high-quality mesh suitable for the defined simulation allows for push-button meshing. Local control for sizing, matching, mapping, virtual topology, pinch and other controls provide additional flexibility, if needed.

Mesh Methods:

Patch conforming mesh method:

Bottom-up approach (creates surface mesh, then volume mesh)
Multiple triangular surface meshing algorithms are employed behind the scenes to ensure a high quality surface mesh is generated, the first time
From that inflation layers can be grown using several techniques
The remaining volume is meshed with a Delaunay-Advancing Front approach which combines the speed of a Delaunay approach with the smooth-transitioned mesh of an advancing front approach
Throughout this meshing process are advanced size functions that maintain control over the refinement, smoothness and quality of the mesh

Patch independent mesh method:

Top-down approach (creates volume mesh and extracts surface mesh from boundaries)
Many common problems with meshing stem from bad geometry; if bad geometry is used as the basis to create the surface mesh, the mesh will often be bad (bad quality, connectivity, etc.)
The patch independent method uses the geometry only to associate the boundary faces of the mesh to the regions of interest thereby ignoring gaps, overlaps and other issues that give other meshing tools countless problems.
Inflation is done as a post step into the volume mesh. Since the volume mesh already exists, collisions and other common problems for inflation are known ahead of time.

Note: For volume meshing, a tetrahedral mesh generally provides a more automatic solution with the ability to add mesh controls to improve the accuracy in critical regions. On the contrary, a hexahedral mesh generally provides a more accurate solution, but is more difficult to generate.

Shell and Beam Meshing

For 2-D planar (axisymmetric), shell and beam models, ANSYS Meshing provides efficient tools for quickly generating a high quality mesh to accurately simplify the physics.

Mesh Methods for shell models:

Default surface meshing

Multiple surface meshing engines are used behind the scenes to provide a robust, automated surface mesh consisting of all quad, quad dominant or all tri surface mesh.
User can add sizing controls, and mapped controls to modify and take control over the automated meshing

Uniform surface meshing

Orthogonal, uniform meshing algorithm that attempts to force an all quad or quad dominant surface mesh that ignores small features to provide optimum control over the edge length

Describe key features of ALL existing meshing options in Ansys Mesh module and discuss their applications

The meshing tools in ANSYS Workbench were designed to follow some guiding principles:

Parametric: Parameters drive system
Persistent: Model updates passed through system
Highly-automated: Baseline simulation w/limited input
Flexible: Able to add additional control w/out complicating the workflow
Physics aware: Key off physics to automate modelling and simulation throughout system
Adaptive architecture: Open system that can be tied to a customer’s process

CAD neutral, meshing neutral, solver neutral, etc.

By integrating best in class meshing technology into a simulation driven workflow, ANSYS Meshing provides a next generation meshing solution.