Saturday, May 23, 2020

Comparing and Contrasting Emilia and Desdemona's Love for...

â€Å"I have decided to stick with love. Hate is too great a burden to bear.† (Martin Luther King, Jr.) In the play Othello this quote shows relation to how love occurs among the main characters. This is highlighted though the relationship that Emilia and Desdemona have with their respective husbands. Throughout, the relationship of these characters with their spouses will be analyzed. The relationship between Emilia and her husband Iago will be expressed as well as the relationship between Desdemona and her husband Othello. Then the similarities and differences between the two relationships will be compared. Emilia and Iago had a very complex relationship. They do not have a strong and equal relationship displaying love. This is not†¦show more content†¦Emilia chose to do what is expected and marry a man of her own kind. Unlike Desdemona, who chooses to step outside of her comfort zone. As a result leaving the two with very different dynamics of a relationship. The similarities between Emilia relationship and Desdemona relationship is that they both are married to men who are involved in the army. They are both patriotic as they try to defend their nation, from anything that tries to. They both have a huge responsibility in protecting there country from war. Desdemona and Emilia both had downfalls in there relationship. The downfalls in Emilia s marriage is the way in which she allowed herself to be treated. Since the beginning of the play, she knew that she wasn t getting treated properly by her husband,nonetheless she stayed with him. Desdemona s downfall began when her husband stopped trusting her. The way he began asking her so many questions and developing that huge rage towards her,led to the fall in there relationship. For him to accuse her of infidelity was the breaking point in their relationship, while Emilia s fall started from the day she married Iago. 
Overall, love plays a huge role in the difference between these two dominant relationships in the play. It contributes greatly to the characters' downfalls. Emilia searches for love from the person who should be offering it to her the most. This, as a result, leads her to think that adultery is okay. In addition […]

Monday, May 18, 2020

Rhetorical Analysis Cutting Edge Instruction

To begin on a hopeful note, the education system in India is competitive and trains students rigorously to withstand fierce competition on a global scale. We should recognize that our students are on par with, if not better than, most students abroad. Indians are a force to reckon with because of their sheer talent and productivity, and this can be credited to their rigorous training at school and in higher education. Having said this, we must also concede that our (Indian) education system requires a makeover. It is overly information-oriented, which leaves little scope for creativity, innovation, and self-learning. There is a need to update the syllabus and curriculum periodically in order to make them more engaging for students. Modern education should also aim to impart moral values and discipline in young minds, which could enable them to become safer and more capable people. Modern education needs to enable the future generation to withstand the pressures and burdens of society and move forward amid trials and tribulations. The growing rate of crimes and suicides committed by adolescents and the young speaks volumes about the inadequacies of our education system. • The greatest drawback of our education system is the presence of so many different boards. Almost every state of India has its […]

Tuesday, May 12, 2020

A Look at the Conflicts of the Juvenile Court System in...

The book â€Å"No Matter How Loud I Shout† written by Edward Humes, looks at numerous major conflicts within the juvenile court system. There is a need for the juvenile system to rehabilitate the children away from their lives of crime, but it also needs to protect the public from the most violent and dangerous of its juveniles, causing one primary conflict. Further conflict arises with how the court is able to administer proper treatment or punishment and the rights of the child too due process. The final key issue is between those that call for a complete overhaul of the system, and the others who think it should just be taken apart. On both sides there is strong reasoning that supports each of their views, causing a lot of debate about the†¦show more content†¦What judgment should be used to best serve the needs of both the public and the juveniles, and who to place it upon. Judges like Roosevelt Dorn, who frequently showed his contempt of the current juvenile just ice methods, but was the best advocate for the treatment of the children. I think his method is one of the best ways to reform the system through early prevention and more rehabilitation. The district attorney’s office was often prosecuting on the word of often subpar investigations and evidence, due to a lack of funding. The adult court transfers are not the most effective way to reduce more juvenile crimes, although for extreme delinquents I think it is the best way to sentence them. The transfers do not do anything to prevent more crime; they just reduce some of the workload of the overloaded juvenile system. Ronald Duncan is one prime example of the failings of the juvenile court. It is the one case written about in the book, that the juvenile system has virtually no chance of saving the juvenile from further crime. It also happens to be the one case in which the juvenile court couldn’t transfer the child into adult court, and therefore could not sentence him to a harsh enough sentence. 
Duncan was charged with two counts of first-degree murder, not many days before his sixteenth birthday. One night after working at Baskin-Robbins, the owners, a husband and wife, were going to drive Duncan

Wednesday, May 6, 2020

The Fault in Our Stars by John Green Essay

Cancer affects Hazel Grace, Augustus Waters, and their families deeply; it often represents the loss, hope, and surprise of cancer. But this is not only true in books; it also affects people in real life. Parents start to view their kids differently, the children start to view themselves as nothing but disease, and the culture they once had starts to change. Augustus Waters and Hazel Grace each have their own struggles: Hazel suffers from thyroid cancer and is terminal; Augustus had been cured, but the cancer came back, leaving his body full of it, so that he too ended up terminal. Often organizations and people would give them a little bit more because they are kids with the inevitability of death to look to. They both having […] One day he was okay; the next, his body was practically consumed by the cancer inside of him. No hope, no treatment, no way to save him a little longer. When Augustus died, so many suffered from it, or claimed that it affected them; he showed that losing someone can be as surprising as the diagnosis was. Even though they both suffered, they did get a few perks, though not many. "I wasn't bad, but all the shoes and balls are Cancer Perks" (Green, 30). Cancer perks are mentioned throughout the whole book: the things they received from all those around them. Getting to board planes first, wine even though underage, free trips, and much more. People tend to want to give more to those who have less. This rule applies in real life as well. Talia Castellano, a 13-year-old girl, experienced cancer perks before her passing. She got to be a CoverGirl, and everyone supported her. Though not the social norm, society changed for this one little girl, because she was dying. Even with the perks, it doesn't rid them of the cancer. Not in books, not in real life, because eventually they had to face the inevitable. Families deal with cancer in all different types of ways.
Hazel Grace's parents would try to protect her, but also make sure she truly lived. Her mom started working toward a degree so that if Hazel did die, she could counsel those who lost their children to cancer. Yet at the same time they constantly worried, and fights broke out all the time. […]
â€Å"Some infinities are bigger than other infinities (Green, 260).†Read MoreThe Fault Of Our Stars By John Green Essay848 Words   |  4 PagesThe fault in our stars is written by John Green, a popular American writer and vlogger. The novel is narrated by Hazel Grace Lancaster, a sixteen year old cancer patient. Her parents force her to attend a Support group so she can make â€Å"friends†. Hazel gets more than a friend from the support group. She befriends a 17 year old called Augustus Waters, the guy she ends up falling in love with. Augustus Waters really inspired me throughout the novel. He was a very strong character who had a positiveRead MoreThe Fault Of Our Stars By John Green1768 Words   |  8 PagesJournal Entry 1: The Fault In Our Stars by John Green. Entry written by Matt Kruse. How realistic are the characters? Would you want to meet any of the characters in real life? How has the author used exposition to introduce you to the characters? Do you like them? Why or why not? Is there a character that you can relate to better than others? Primarily, all of the characters in The Fault In Our Stars are pretty realistic. Most of the characters act like normal people you could just find everyRead MoreThe Fault Of Our Stars By John Green1023 Words   |  5 PagesThe Fault in Our Stars is a book written by John Green. This book has many themes like love for the ways that Hazel and Augustus treat one another. There is courage for the way that these teenagers battle cancer and are brave while doing it. Also, another theme is family for the way that Hazel and Augustus’s parents love them, support them, and comfort them with every decision that they make. The main characters in this book are Hazel Grace Lancaster, the narrator of the book who has cancer and knowsRead MoreThe Fault Of Our Stars By John Green1079 Words   |  5 Pages Augustus Waters once said â€Å"I’m on a roller coaster that only goes up, my friend.† (Green, John). 
Isaac once stated â€Å"There’s nothing you can do about it.† (Green, John). Augustus Waters and Isaac are fictional character from the popular book, â€Å"The Fault in Our Stars†, written by John Green. These quotes show a little bit of these characters personalities. The exciting and emotional book came out January 2012 and since then a movie was released based on it. (Wikipedia). It includes teens sufferingRead MoreThe Fault Of Our Stars By John Green1490 Words   |  6 PagesIn the novel, The Fault in Our Stars, the author, John Green, provides the reader with a theme that people tend to differ other people who do not appear to be the same as every other average human being. People would contradict this universal truth, but it cannot be denied. From the onset, Hazel is receiving extra care and attention from her parents and guardians. â€Å"‘Mom† I shouted. Nothing. Again, louder, â€Å"MOM!† She ran in wearing a threadbare pink towel under her armpits, dripping, vaguely panickedRead MoreThe Fault in Our Stars: John Green1819 Words   |  7 Pagesâ€Å"That’s the thing about pain†¦ it demands to be felt† John Green eloquently states in the tear-jerking novel The Fault in Our Stars. Ask anyone who read the book about the supporting character, charming Augustus Grey, and quickly witness an almost physical wave of acrimony and nostalgia pass over them. Green’s unique ability to demand compassion from the reader through his cleverly compiled diction forces the reader to feel the extreme pain his characters are faced against. Pain brings people togetherRead MoreThe Fault in Our Stars by John Green2159 Words   |  9 Pagesis invited over to his house to watch a movie. Although , he pulls out a cigarette and Hazel freaks out to which he explains that it is a metaphor, â€Å"You put the killing thing right between your teeth, but don’t give it the power to do its killing† (Green 20). 
Once at his house Hazel begins to feel not to different from other girls, yet by the time they say goodbye, she cannot get the thought of him out of her head. Hazel shares a book with Augustus and he shares one with her. She quickly reads through

Decision Analysis

CREATE Research Archive
Published Articles & Papers
1-1-1980

Structuring Decision Problems for Decision Analysis

Detlof von Winterfeldt
University of Southern California, winterfe@usc.edu

Follow this and additional works at: http://research.create.usc.edu/published_papers

Recommended Citation
von Winterfeldt, Detlof, "Structuring Decision Problems for Decision Analysis" (1980). Published Articles & Papers. Paper 35. http://research.create.usc.edu/published_papers/35

This Article is brought to you for free and open access by CREATE Research Archive. It has been accepted for inclusion in Published Articles & Papers by an authorized administrator of CREATE Research Archive. For more information, please contact gribben@usc.edu.

Acta Psychologica 45 (1980) 71-93 © North-Holland Publishing Company

STRUCTURING DECISION PROBLEMS FOR DECISION ANALYSIS *

Detlof von WINTERFELDT **
University of Southern California, Los Angeles, CA 90007, USA

Structuring decision problems into a formally acceptable and manageable format is probably the most important step of decision analysis. Since presently no sound methodology for structuring exists, this step is still an art left to the intuition and craftsmanship of the individual analyst. After introducing a general concept of structuring, this paper reviews some recent advances in structuring research. These include taxonomies for problem identification and new tools such as influence diagrams and interpretative structural modeling. Two conclusions emerge from this review: structuring research is still limited to a few hierarchical concepts and it tends to ignore substantive problem aspects that delineate a problem in its real world context. Consequently structuring research has little to say about distinctions between typical problem classes such as regulation, siting, or budget allocation.
As an alternative the concept of "prototypical decision analytic structures" is introduced. Such structures are developed to meet the substantive characteristics of a specific problem (e.g., siting a specific Liquid Natural Gas plant) but they are at the same time general enough to apply to similar problems (industrial facility siting). As an illustration, the development of a prototypical analytic structure for environmental standard setting is described. Finally, some typical problem classes are examined and some requirements for prototypical structures are discussed.

An introduction to problem structuring

Decision analysis can be divided into four steps: structuring the problem; formulating inference and preference models; eliciting probabilities and utilities; and exploring the numerical model results. Practitioners of decision analysis generally agree that structuring is the most important and difficult step of the analysis. Yet, until recently, decision analytic research has all but ignored structuring, concentrating instead on questions of modeling and elicitation. As a result, structuring was, and to some extent still is, considered the 'art' part of decision analysis. This paper examines some attempts to turn this art into a science.

* This research was supported by a grant from the Department of Defense and was monitored by the Engineering Psychology Programs of the Office of Naval Research, under contract # N00014-79C-0529. While writing this paper, the author discussed the problem of structuring extensively with Helmut Jungermann. The present version owes much to his thought. Please don't take footnote 3 too seriously. It is part of a footnote war between Ralph Keeney and me.
** Presently with the Social Science Research Institute, University of Southern California, University Park, Los Angeles, CA 90007, (213) 741-6955.

Trees are the most common decision analytic structures.
Decision trees, for example, represent the sequential aspects of a decision problem (see Raiffa 1968; Brown et al. 1974). Other examples are goal trees for the representation of values (Keeney and Raiffa 1976) and event trees for the representation of inferential problem aspects (Kelly and Barclay 1973). In fact, trees so much dominate decision analytic structures that structuring is often considered synonymous with building a tree. This paper, however, will adopt a more general notion of decision analytic structuring. According to this notion, structuring is an imaginative and creative process of translating an initially ill-defined problem into a set of well-defined elements, relations, and operations. The basic structuring activities are identifying or generating problem elements (e.g., events, values, actors, decision alternatives) and relating these elements by influence relations, inclusion relations, hierarchical ordering relations, etc. The structuring process seeks to formally represent the environmental (objective) parts of the decision problem and the decision makers' or experts' (subjective) views, opinions, and values. Graphs, maps, functional equations, matrices, trees, physical analogues, flow charts, and Venn diagrams are all possible problem representations. In order to be useful structures for decision analysis, such representations must facilitate the subsequent steps of modeling, elicitation, and numerical analysis.

Three phases can be distinguished in such a generalized structuring process. In the first phase the problem is identified. The elements which are generated in this phase are the substantive features of the problem: the decision maker(s); the generic classes of alternatives, objectives, and events; individuals or groups affected by the decision; characteristics of the problem environment. This list is pruned by answering questions such as: what is the purpose of the analysis? For whom is the analysis to be performed?
Which alternatives can the decision maker truly control? At this stage only very rough relations between problem elements are constructed. Examples include organizational relations among decision makers, influence relations between classes of actions and events, and rough groupings of objectives. Products of this problem identification step are usually not very formal, and are seldom reported in the decision analytic literature. They may be in the form of diagrams, graphs, or ordered lists. Among the few documented examples are Hogarth et al. (1980) for the problem of city planning and Fischer and von Winterfeldt (1978) for the problem of setting environmental standards.

In the second structuring step, an overall analytic structure is developed. The elements generated in this step are possible analytic problem representations. Besides tree structures, these may include more complex structures previously developed for similar problems, such as screening structures for siting decisions or signal detection structures for medical decision making. Paradigmatic structures of alternative modeling approaches (e.g., systems dynamics or linear programming) which could fit the problem should also be examined at this step [1]. A creative activity in this structuring phase is to relate and combine part structures, e.g., simulation structures with evaluation structures, or decision trees of different actors. From the candidate structures and their combinations an overall structure is selected which is judged most representative of the problem and manageable for further modeling and elicitation. Only a handful of analytic structures have been developed which are more complex than decision trees. Gardiner and Ford (in press) combined simulation and evaluation structures. Keeney (in press) developed a decision analytic structure for the whole process of siting energy facilities.
Von Winterfeldt (1978) constructed a generic structure for regulatory decision making.

The third structuring phase coincides with the more traditional and limited notion of structuring. In this step the parts of the overall analytic structure are formalized in detail by refining the problem elements and relations identified in the first step. This includes a detailed construction of decision trees, event trees, and goal trees. Linkages between part structures are established, e.g. between simulation and evaluation structures. Decision makers and groups affected by possible decisions are specified together with events or actions linking them. Examples of this structuring step can be found in most decision analytic textbooks.

[1] Although such alternatives to decision analytic structures should be considered, I will ignore them in the remainder of this paper.

This three step structuring process of identifying the problem, developing an analytic structure, and formalizing its detailed content seldom evolves in strict sequence. Instead, the process is recursive, with repeated trials and errors. Often the analyst decides on a specific structure and later finds it either unmanageable for modeling or non-representative of the problem. The recognition that a structure needs refinement often follows the final step of decision analysis, if numerical computations and sensitivity analyses point to places that deserve more detailed analysis. Knowing about the recursive nature of the structuring process, it is good decision analysis practice to spend much effort on structuring and to keep an open mind about possible revisions.

The above characterization of the structuring process will be used as a format to review the structuring literature. First, the use of problem taxonomies for the step of problem identification is examined.
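The tree structures discussed above can be made concrete with a small sketch. The following is illustrative only: the miniature decision problem, its probabilities, and its utilities are invented here, not taken from the paper. It shows the kind of well-defined elements and relations (decision nodes, chance nodes, terminal utilities) that the formalization step produces, together with the standard expected-utility rollback used to evaluate such a tree:

```python
# Illustrative sketch of a decision tree as a formal structure.
# The example problem and all numbers are hypothetical.

from dataclasses import dataclass
from typing import List, Tuple, Union


@dataclass
class Terminal:
    """Leaf of the tree: an outcome with an assessed utility."""
    utility: float


@dataclass
class Chance:
    """Event node: (probability, subtree) pairs; probabilities sum to 1."""
    branches: List[Tuple[float, "Node"]]


@dataclass
class Decision:
    """Choice node: (label, subtree) pairs; the decision maker picks the best."""
    options: List[Tuple[str, "Node"]]


Node = Union[Terminal, Chance, Decision]


def rollback(node: Node) -> float:
    """Expected-utility rollback: average over chance nodes, maximize over decisions."""
    if isinstance(node, Terminal):
        return node.utility
    if isinstance(node, Chance):
        return sum(p * rollback(child) for p, child in node.branches)
    return max(rollback(child) for _, child in node.options)


# A toy two-option problem: act now under uncertainty, or take a sure payoff.
tree = Decision(options=[
    ("act now", Chance(branches=[(0.6, Terminal(100.0)),
                                 (0.4, Terminal(-50.0))])),
    ("wait", Terminal(20.0)),
])

print(rollback(tree))  # prints 40.0 (act now: 0.6*100 + 0.4*(-50) = 40 > 20)
```

Note that this captures only the third, formalization phase; the harder structuring work the paper emphasizes is deciding which elements and relations belong in the tree in the first place.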
Methods to select analytic approaches are then reviewed as possible aids for the second structuring step. Finally, some recent advances in formalizing part structures are discussed. Two conclusions emerged from this review and motivated the subsequent sections of this paper: (1) Although structuring research has much to say about analytic distinctions between decision problems and structures (e.g., whether a problem is multiattributed or not), it has little bearing on substantive problem distinctions (e.g., the difference between a typical regulation problem and a typical investment problem). (2) Structuring research is still limited to a few, usually hierarchical concepts and operations. Emphasis is put on simple, operational and computerized structuring. Little effort is spent on creating more complex combinations of structures that represent real problem classes.

As an alternative, the concept of prototypical decision analytic structures is introduced. Such structures have more substance and complexity than the usual decision trees or goal trees. They are developed to meet the substantive characteristics of a specific problem, but are at the same time general enough to apply to similar problems. As an illustration, IIASA's [2] development of a prototypical decision analytic structure for environmental standard setting will be described. Finally, several typical classes of decision problems will be examined and some requirements for prototypical structures will be discussed.

[2] International Institute for Applied Systems Analysis, Laxenburg, Austria.

Taxonomies for problem identification

The taxonomies described in the following typically classify decision problems by analytic categories (e.g., whether a problem is multiattributed or not) and they attempt to slice the universe of problems into mutually exclusive and exhaustive sets.
The purpose of such taxonomies is twofold: to facilitate the identification of an unknown element (e.g., a medical decision problem) with a class of problems (e.g., diagnostic problem); and to aid the process of matching classes in the problem taxonomy (e.g., diagnostic problems) with an analytic approach (e.g., signal detection structures). Thus, by their own aspiration, problem taxonomies should be useful for the early phases of structuring decision problems.

MacCrimmon and Taylor (1975) discuss on a rather general level the relationship between decision problems and solution strategies. Decision problems are classified according to whether they are ill-structured or well-structured, depending on the extent to which the decision maker feels familiar with the initial state of the problem, the terminal state, and the transformations required to reach a desired terminal state. Three main factors contribute to ill-structuredness: uncertainty, complexity, and conflict. For each category MacCrimmon and Taylor discuss a number of solution strategies. These strategies include, for example, reductions of the perceptions of uncertainty, modeling strategies, information acquisition and processing strategies, and methods for restructuring a problem. Taylor (1974) adds to this classification scheme four basic types of problems: resource specification, goal specification, creative problems, and well structured problems (see fig. 1). Problem types are identified by the decision maker's familiarity with the three subparts of the problem. Taylor discusses what types of decision strategies are appropriate for each of these problem categories, for example, brainstorming for creative problems and operations research type solutions for well structured problems. Howell and Burnett (1978) recently developed a taxonomy of tasks
Fig. 1. Types of problem structures (Taylor 1974). (Table: each of the four problem types, resource specification, goal specification, creative, and well-structured, is characterized as familiar, unfamiliar, or varying with respect to the initial state, the terminal state, and the transformation.)

and types of events with the intention of assessing cognitive options for processing probabilistic information for each taxonomy element. Uncertain events are classified according to three dichotomies: frequentistic vs. not frequentistic; known data generator vs. unknown data generator; process external vs. internal to the observer. Task characteristics are complexity, setting (e.g., real life vs. laboratory), span of events, and response mode characteristics. For each event/task combination Howell and Burnett discuss how different cognitive processes may be operating when making probability judgments. For example, in estimating frequentistic events with unknown data generators, availability heuristics may be operative.

Brown and Ulvila (1977) present the most comprehensive attempt yet to classify decision problems. Their taxonomy includes well over 100 possible characteristics. Decision problems are defined according to their substance and the decision process involved. Substantive taxonomic characteristics are mainly derived from the analytic properties of the situation, i.e., amount and type of uncertainty, amount and types of stakes, and types of alternatives. Only a few elements of this part of the taxonomy can be directly related to problem content, i.e., current vs. contingent decision, operating vs. information act. The taxonomic elements of the decision process refer mainly to the constraints of the decision maker, e.g., reaction time, available resources.
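Read this way, Taylor's scheme amounts to a lookup from three familiarity judgments to a problem type. The sketch below is a hedged reading of the surrounding discussion rather than a verbatim copy of fig. 1; in particular, the treatment of entries that "vary" in the original table is an assumption.

```python
def taylor_type(initial_familiar, terminal_familiar, transformation_familiar):
    """Classify a problem by the decision maker's familiarity with its three
    subparts. Hedged reading of Taylor (1974), not a verbatim copy of fig. 1."""
    if initial_familiar and terminal_familiar and transformation_familiar:
        return "Type IV: well-structured problem"
    if not initial_familiar:
        return "Type I: resource specification problem"
    if not terminal_familiar:
        return "Type II: goal specification problem"
    # Initial and terminal states are familiar, but the transformation is not.
    return "Type III: creative problem"

print(taylor_type(True, True, True))   # Type IV: well-structured problem
print(taylor_type(True, True, False))  # Type III: creative problem
```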
The taxonomy by Brown and Ulvila incorporates most previous problem taxonomies which tried to define decision problems by categories derived from decision analysis. These include taxonomies by von Winterfeldt and Fischer (1975), Miller et al. (1976), and Vlek and Wagenaar (1979).

To be useful for problem identification, the above taxonomies should lead an analyst to a class of problems which has characteristics similar to the decision problem under investigation. Unfortunately, the existing problem taxonomies are ill-suited for this purpose, because they use mainly analytic categories to distinguish problems. Such categories are derivatives of decision analytic models and concepts, rather than characteristics of real world problems. For example, the analytic categorization of problems into risky vs. riskless classes is based on the distinction between riskless and risky preference models. Analytic categories create more or less empty classes with little or no correspondence to real problems. For example, none of the above taxonomies allows distinguishing between a typical siting problem and a typical regulation problem in a meaningful way. It appears that substantive rather than analytic characteristics identify real problems. Substantive characteristics are generalized content features of the problems belonging to the respective class. For example, a substantive feature of regulation problems is the involvement of three generic decision makers: the regulator, the regulated, and the beneficiary of regulation. To become useful for problem identification, taxonomies need to include such substantive problem characteristics.

Methods for selecting an overall analytic structure

Most taxonomies include some ideas or principles for matching problems with analytic structures or models.
MacCrimmon and Taylor attempted to match their basic types of decision problems with cognitive solution strategies, Howell and Burnett speculated on which cognitive processes may be invoked by typical task/event classes in probability assessment, and von Winterfeldt and Fischer identified for each problem category appropriate multiattribute utility models. But in none of these papers are explicit matching principles or criteria for the goodness of a match given. Rather, matches are created on the basis of a priori reasoning about the appropriateness of a strategy, model, or a cognitive process for a particular class of decision problems.

Brown and Ulvila (1977) attempted to make this selection process more explicit by creating an analytic taxonomy in correspondence with the problem taxonomy. The analytic taxonomy classifies the main options an analyst may have in structuring and modeling a decision problem. The taxonomy includes factors such as user's options (amount to be expended on the analysis), input structure (type of uncertainty), and elicitation techniques (type of probability elicitation). These categories identify options both at a general level (optimization, simulation, and Bayesian inference models) and as special techniques (e.g., reference gambles, or the Delphi technique). To match problems with analytic approaches Brown and Ulvila created a third taxonomy, called the "performance measure taxonomy". This taxonomy evaluates analytic approaches on attributes like "time and cost measures", "quality of the option generation process", "quality of communication or implementation", etc. Different problem classes have different priority profiles on the performance measure categories. Similarly, different analytic approaches have different scoring profiles on the performance measures.
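The comparison of a problem class's priority profile with the scoring profiles of candidate approaches can be sketched as a priority-weighted sum. The measures, weights, scores, and approach names below are invented for illustration and only loosely in the spirit of Brown and Ulvila's scheme.

```python
# Hypothetical priority weights of one problem class over performance measures.
priorities = {"time_and_cost": 0.5, "option_generation": 0.2, "communication": 0.3}

# Hypothetical scoring profiles (0-1) of two analytic approaches on the same measures.
approaches = {
    "contingency_analysis": {"time_and_cost": 0.9, "option_generation": 0.3, "communication": 0.6},
    "full_decision_tree":   {"time_and_cost": 0.3, "option_generation": 0.8, "communication": 0.7},
}

def fit(scores, priorities):
    """Priority-weighted score: how well an approach serves the problem's needs."""
    return sum(priorities[m] * scores[m] for m in priorities)

best = max(approaches, key=lambda a: fit(approaches[a], priorities))
print(best)  # contingency_analysis
```

For a problem class that weights speed heavily, the fast contingency-type analysis wins, which mirrors the example discussed in the text.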
The analytic approach chosen should perform well on the priority needs of a particular problem. Brown and Ulvila discuss the 'goodness of fit' of several analytic approaches to a number of decision situations in terms of these performance measures. For example, they argue that a contingency type analysis (an element of the analytic taxonomy) is appropriate for decision problems that occur repeatedly and require a fast response (elements of the decision situation taxonomy) because contingency type analysis allows fast calculations (elements of the performance measure taxonomy).

Several authors have developed logical selection schemes, which can identify an appropriate analytic approach or model based on selected problem features. MacCrimmon (1973), for example, developed a sequential method for selecting an appropriate approach for multiattribute evaluation. The first question to be answered is whether the purpose of the analysis is normative or descriptive. Further questions include whether the type of problem has occurred frequently before, if there are multiple decision makers with conflicting preferences, and whether alternatives are available or have to be designed. All questions are of the yes-no type and together create a flow chart for selecting among 19 possible approaches. For example, if the purpose of the analysis is normative, if direct assessments of preferences (e.g., ratings) are valid and reliable, and if the type of problem has frequently occurred before, regression models or ANOVA type approaches would be appropriate. Johnson and Huber (1977) and Kneppreth et al. (1977) discuss a three step procedure for selecting a multiattribute utility assessment approach. In the first step, the characteristics of the multiattribute problem are listed, including discreteness vs. continuity of dimensions, uncertainty vs. no uncertainty, and independence considerations.
In the second step the evaluation situation is characterized on the basis of judgments about the task complexity, amount of training required for assessment, face validity required, assessment time, accuracy, and flexibility. In the third and final step the profile describing the evaluation problem is compared with a profile characterizing five different generic assessment models or methods. The technique that best matches the situation profile is selected. For example, lottery assessment methods and models would be appropriate if the evaluation problem involves uncertainties, does not require high face validity, and allows for a good amount of training of the assessor.

Both the taxonomy oriented and the sequential selection methods for matching problems and analysis suffer from certain drawbacks. As stated earlier, problem characteristics used in taxonomies typically neglect substantive aspects of the decision problem. Consequently, an analyst may choose an analytic approach based on a match with a spuriously defined problem class. For example, when facing a medical diagnosis problem, an analyst may find that some detailed substantive characteristics of the problem (e.g., the way doctors process information, the physical format of information, etc.) suggest a signal detection structure. Yet, as far as I can see, none of the above matching processes would directly lead to such a structure.

Advances in formalizing structures

Influence diagrams are a recent development in decision analytic structuring (see Miller et al. 1976). Influence diagrams draw a graphical picture of the way variables in a decision model influence each other, without superimposing any hierarchical structure. For example, a decision variable (price) may 'influence' a state variable (demand) and thus 'influence' a final state (successful introduction of a new product into the market).
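The price/demand example can be written down as a tiny directed graph of 'influences' edges. The sketch below, with hypothetical variable names, verifies that the relation is acyclic and derives an ordering of the variables, one simple computation such a graphical representation supports.

```python
# Toy influence diagram: each variable maps to the variables it influences.
influences = {
    "price": ["demand"],
    "demand": ["market_success"],
    "market_success": [],
}

def topological_order(g):
    """Return a topological order of the variables, raising if a cycle exists
    (an influence diagram must be acyclic to be meaningful)."""
    order, seen = [], set()
    def visit(v, stack=()):
        if v in stack:
            raise ValueError("influence diagram contains a cycle")
        if v not in seen:
            seen.add(v)
            for w in g[v]:
                visit(w, stack + (v,))
            order.append(v)
    for v in g:
        visit(v)
    return order[::-1]

print(topological_order(influences))  # ['price', 'demand', 'market_success']
```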
Influence diagrams have been conceived mainly as an initial pre-structuring tool to create a cognitive map of a decision maker's or expert's view of a decision problem. In the present stage influence diagrams are turned into hierarchical structures and analyzed with traditional tools. But research is now underway at SRI International on the use of influence diagrams directly in EV or EU computations.

Another generalization of the tree approach is Interpretative Structural Modeling (ISM), developed, for example, in Warfield (1974) and Sage (1977). In interpretative structural modeling, matrix and graph theory notions are used to formally represent a decision problem. First, all elements of the problem are listed and an element by element matrix is constructed. The structure of the relationships between elements is then constructed by filling in the matrix with numerical judgments reflecting the strength of the relationship, or by simply making 0-1 judgments about the existence/non-existence of a relation. Computer programs can then be used to convert the matrix into a graph or a tree that represents the problem. Influence diagrams, value trees, decision trees, and inference trees can all be thought of as special cases of ISM. For example, in value tree construction, the analyst may begin with a rather arbitrary collection of value relevant aspects, attributes, outcomes, targets and objectives. Using alternative semantic labels for the relationships between these elements (e.g., 'similar', 'part of'), an element by element matrix can be filled. Finally, the analyst can explore whether a particular relational structure leads to a useful goal tree structure.

Besides these generalizations of traditional hierarchical structuring tools, several refinements of special structuring techniques have been suggested, particularly for evaluation problems. Keeney and Raiffa (1976) devoted a whole chapter to the problem of structuring a value tree.
They suggest a strategy of constructing a value tree by beginning with general objectives and disaggregating by using a pure explication logic (i.e., what is meant by this general objective?). This approach has previously been advocated by Miller (1970) and others. Mannheim and Hall (1967) suggest in addition the possibility of disaggregating general objectives according to a means-ends logic (how can this general objective be achieved?). Other disaggregation logics (problem oriented, process oriented, etc.) could be analyzed in the ISM context. There are a number of papers that suggest more empirical or synthetic approaches to value tree construction. Of particular interest is a repertory grid technique described by Humphreys and Humphreys (1975) and Humphreys and Wisudha (1979). In this procedure similarity and dissimilarity judgments are used to span the value dimensions of alternatives.

Several computer aids have been developed recently to aid decision makers or experts in structuring decision problems. Some of these are discussed in Kelly (1978) and Humphreys (1980). These aids typically rely on empty structuring concepts (decision trees, value trees, inference trees, or influence diagrams) and they guide the decision maker/expert in the analytic formulation of his/her problem. Special aids are OPINT for moderately complex problems which can easily be formulated into a decision tree or matrix structure, the decision triangle aid for sequential decision problems with a focus on changing probabilities, and EVAL for multiattribute utility problems (Kelly 1978). In addition to these structuring and assessment aids, there are now computerized aids under development exploiting the idea of influence diagrams and fuzzy set theory. Influence diagrams, ISM, and computer aids are indicative of a trend in structuring research and perhaps in decision analysis as a whole.
This trend turns the fundamentally empty structures of decision trees, goal trees, and inference trees into more operational, computerized elicitation tools, without adding problem substance. There are clear advantages to such an approach: a wide range of applicability, flexibility, user involvement, speed, limited training, and feedback, to name only a few. It also reduces the demands on the decision analyst's time. There is, of course, the other extreme, the prestructured, precanned, problem specific version of decision analysis applicable to essentially identical situations. A military example is Decisions and Designs, Inc.'s SURVAV model (Kelly 1978) which applies to routing decisions for ships to avoid detection by satellites. Such a structure and model can routinely be implemented with almost no additional training. In turn it gives up generalizability. Neither extreme is totally satisfactory. Empty general structures must consider each problem from scratch. Substantive specific structures have limited generalizability. The middleground of problem driven but still generalizable structures and models needs to be filled. Problem taxonomies may help here by identifying generic classes of problems. But as was discussed earlier, existing taxonomies are ill equipped for this task since they neglect substantive problem features. The question of filling in the middleground between 'too general' structures and 'too specific' structures thus becomes a question of searching for generalizable content features of problems that identify generic classes of decisions. These generic classes can then be modelled and structured by "prototypical decision analytic structures" which are specific enough to match the generalizable problem features and general enough to transfer easily to other problems of the same class.
At the present stage of research this search process will necessarily be inductive, because too little is known about problem substance to develop a problem driven taxonomy and matching analytic structures. An inductive research strategy may attempt to crystallize the generalizable features of a specific application, or compare a number of similar applications (e.g., siting problems), or simply use a phenomenological approach to delineate problem classes in a specific application area (e.g., regulation). In the following two sections some possibilities for developing prototypical decision analytic structures will be discussed.

An example of developing a prototypical structure

The following example describes the structuring process in the development of a decision aiding system for environmental standard setting and regulation. The work was performed as part of IIASA's (see fn. 2) standard setting project (see von Winterfeldt et al. 1978), which had both descriptive and normative intentions (how do regulators presently set standards? how can analytic models help in the standard setting process?). Because of this wide approach of the standard setting project, the research group was not forced to produce workable models for specific decision problems quickly. Consequently, its members could afford and were encouraged to spend a substantial amount of time on structuring. Inputs into the structuring process were:
– retrospective case studies of specific standard setting processes of environmental protection agencies;

Fig. 2. Regulatory alternatives for Shinkansen noise pollution. (Tree, partly illegible: alternatives of the Japanese National Railway Corporation, ranging from at-source rules (equipment specification, speed control, restricted operation times), routing schemes, and land use schemes to implementation and measurement instruments, e.g., noise measures such as dB(A) measured in front of or inside houses.)
– previous models suggested for standard setting;
– field studies of two ongoing standard setting processes (oil pollution and noise standards).

In addition, the structuring process benefited much from continuing discussions with leading members of environmental agencies in the United Kingdom, Norway, Japan and the United States. Although the structuring effort was geared towards decision analysis, substantial inputs were given by an environmental economist (D. Fischer), an environmental modeller (S. Ikeda), a game theorist (E. Hopfinger), and two physicists (W. Häfele and R. Avenhaus), all members of IIASA's standard setting research team.

The overall question was: how can standard setting problems best be formulated into a decision analytic format and model such that the model is specific enough to capture the main features of a particular standard setting problem and, at the same time, general enough to apply to a variety of such problems? In other words, what is a prototypical decision analytic structure for standard setting? Since the regulator or regulatory agency was presumed to be the main client of such models, the initial structuring focussed on regulatory alternatives and objectives. In one attempt a wide but shallow alternative tree was conceived which included a variety of regulatory options ranging from emission standards, land use schemes, to direct interventions. An example for noise pollution standards is presented in fig. 2. Coupled with an appropriate tree of regulatory objectives, a decision analysis could conceivably be performed by evaluating each alternative with a simple MAU procedure. A possible value tree is presented in fig. 3 for the same noise pollution problem. This simple traditional structure was rejected since regulators seldom have to evaluate such a wide range of alternatives and because it does not capture the interaction between the regulators and the regulated.
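The 'simple MAU procedure' contemplated here, scoring each regulatory alternative on the bottom-level objectives of a value tree and aggregating with an additive weighted sum, can be sketched as follows. The alternatives, objectives, weights, and single-attribute scores are hypothetical, not taken from the study.

```python
# Hypothetical importance weights over top-level objectives of a value tree.
weights = {"disturbance": 0.35, "health": 0.30, "cost": 0.20, "service": 0.15}

# Hypothetical single-attribute scores (0-1) of three regulatory alternatives.
alternatives = {
    "emission_standard": {"disturbance": 0.7, "health": 0.8, "cost": 0.5, "service": 0.6},
    "land_use_scheme":   {"disturbance": 0.8, "health": 0.6, "cost": 0.3, "service": 0.7},
    "routing_scheme":    {"disturbance": 0.5, "health": 0.5, "cost": 0.8, "service": 0.8},
}

def mau(scores):
    """Additive multiattribute utility: weighted sum over the objectives."""
    return sum(weights[o] * scores[o] for o in weights)

ranked = sorted(alternatives, key=lambda a: mau(alternatives[a]), reverse=True)
print(ranked[0])  # emission_standard
```

The limitation noted in the text is visible in the sketch: each alternative is scored in isolation, so nothing represents how the regulated party would respond to the chosen alternative.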
Also, regulators are much concerned about monitoring and implementation of standards, an aspect which a simple MAU structure does not address.

Fig. 3. Regulatory objectives for noise pollution control. (Value tree: minimize disturbance of residential life and of hospitals, schools, and retirement homes; minimize health effects, e.g., hearing, psychological, and synergetic effects (aggravation of existing illness); minimize cost of investment in and operation of pollution equipment; railway corporation objectives such as maximizing service (speed, reliability, comfort); consistency of regulation with international regulation and with other national noise standards (car, air, other trains); and political objectives, e.g., agreement with governmental environmental, transportation, and economic growth policies.)

The second structure was a narrow but deep decision tree, exemplified in fig. 4 for an oil pollution problem. In addition to the regulator's alternatives, this tree includes responses of the industry to standards, possible detection of standards violations, and subsequent sanctions. This structure was geared at fine tuning the regulators' definitions of the standard level (maximum emission, etc.) and monitoring and sanction schemes, and at assessing environmental impacts. The structure is specific in terms of the regulatory alternatives. But by considering industry responses as random events, and by leaving out responses of environmental groups, it fails to address a major concern of regulatory decision making. The third structure was a three decision maker model, in which the regulator, the industry/developer and the environmentalists/impactees are represented by separate decision analytic models (see von Winterfeldt 1978). A signal detection type model links the regulator's decision through possible detections of violations and sanction schemes to the industry model.
An event tree of pollution generating events and effects links the developer's decisions to the impactee model (see fig. 5).

Fig. 4. Segment of a decision tree for setting oil pollution standards. (Tree: choice of a standard definition (EPA average, UK average, UK maximum, Norway average) and a standard level (parts per million of oil in water); pollution equipment decisions by the oil industry (no pollution equipment, gravity separator, corrugated plate interceptor, gas flotation, filters) and the equipment's performance; detection states (first or second violation of the standard, or no violation during all operations); penalties; and final effects on the environment (pollution levels), the industry (cost), and the regulator (political).) A standard is usually defined by the number of samples to be taken, how many samples form an average, and how many exemptions from a violation are allowed. For example, the EPA average definition is as follows: four samples are to be taken daily, and the average of the four samples may not exceed the standard level (e.g., 50 ppm) more than twice during any consecutive 30 day period.

Fig. 5. Schematic representation of the regulator-developer-impactee model. (Diagram: the regulatory decision model is linked through detection of regulation violations and sanctions to the developer decision model, and through pollution generating events and pollution effects to the impactee decision model. r: variable standard of the regulator; d(r): expected utility maximizing treatment decision of the developer; a[d(r)]: expected utility maximizing decision of the impactees.)

The model can be run as follows: the regulator's alternatives are left variable.
The developer's response is optimized in terms of minimizing expected investment, operation, and detection costs or maximizing equivalent expected utilities. Finally, the impactees are assumed to maximize their expected utility conditional on the regulator's and the developer's decision. At this point the model stops. The structure only provides for a Pareto optimality analysis of the three expected utilities accruing to the generic decision units. This model allows some detailed analyses of the probability and value aspects of the standard setting problem, and it proved feasible in a pilot application to chronic oil discharge standards (see von Winterfeldt et al. 1978). Regulators who were presented with this model considered it meaningful, and it offered several insights into the standard setting problematique. Yet, there was a feeling among analysts and regulators that the static character of the model and the lack of feedback loops required improvement.

The final structure considered was a game theoretic extension of the three decision maker model. The structure of the game theoretic model is presented in fig. 6.

Fig. 6. Game theoretic structure of the regulation problem.

In this model the standard setting process is explicitly assumed to be dynamic, and all feedbacks are considered. In addition, transitions from one stage to another are probabilistic. The model was applied in a seven stage version in a pilot study of noise standard setting for rapid trains (Hopfinger and von Winterfeldt 1978). The game theoretic model overcomes the criticisms of the static decision analytic model, but in turn it gives up the possibility of fine tuning and detailed modeling of trade-offs and probabilities. Considering such aspects in detail would have made the running of the model impossible.
Therefore, relatively arbitrary (linear) utility functions and simple structures of transition probabilities have to be assumed. Although the appropriateness of the different structures was not explicitly addressed in this study, two main criteria come to mind when judging structures: representativeness of the problem and manageability for further analysis. Each of these criteria can be further broken down. For example, representativeness includes judgments about the adequacy of the structural detail and coverage of important problem aspects. The overall conclusions of many discussions with regulators, analysts, and industry representatives, and the results of the pilot applications led us to accept the third structure as a prototypical decision analytic structure for relatively routine emission standard setting problems. The model is presently considered for further applications in emission standard setting, and an extension to safety standards will be explored.

Towards a kit of prototypical decision analytical structures

Not every decision analysis can afford to be as broad and time consuming as the previous study. Decision analysis usually has a much more specific orientation towards producing a decision rather than developing a generic structure. Still, I think that it would be helpful if analysts were to make an effort to address the question of generalizability when modeling a specific problem, and to extract those features of the problem and the model that are transferable. Such an inductive approach could be coupled with more research oriented efforts and with examinations of similarities among past applications. Such an approach may eventually fill the middleground between too specific and too general models and structures.
But rather than filling this middleground with analytically specific but substantively empty structures and models, it would be filled with prototypical structures and models such as the above regulation model, more refined signal detection models, siting models, etc. In the following, four typical classes of decision problems (siting, contingency planning, budget allocation, and regulation) are examined and requirements for prototypical structures for these problems are discussed.

Facility siting clearly is a typical decision problem. Keeney and other decision analysts have investigated this problem in much detail and in a variety of contexts (see the examples in Keeney and Raiffa 1976). A typical aspect of such siting problems is sequential screening from candidate areas to possible sites, to a preferred set, to final site specific evaluations. Another aspect is the multiobjective nature with emphasis on generic classes of objectives: investment and operating cost, economic benefits, environmental impacts, social impacts, and political considerations. Also, the process of organizing, collecting, and evaluating information is similar in many siting decisions. Thus, it should be possible to develop a prototypical structure for facility siting decisions simply by assembling the generalizable features of past applications [3].

Contingency planning is another recurring and typical problem. Decisions and Designs, Inc. addressed this problem in the military context, but it also applies to planning for actions in the case of disasters such as Liquid Natural Gas plant explosions or blowouts from oil platforms.
Substantive aspects that are characteristic of contingency planning are: strong central control of executive organs; numerous decisions have to be made simultaneously; major events can drastically change the focus of the problem; no cost or low cost information comes in rapidly; and organizational problems may impede information flows and actions. Although, at first glance, decision trees seem to be a natural model for contingency planning, a prototypical decision model would require modifying a strictly sequential approach to accommodate these aspects. For example, the model should be flexible enough to allow for the 'unforeseeable' (rapid capacity to change the model structure), it should have rapid information updating facilities without overstressing the value of information (since most information is free), and it should attend to fine tuning of simultaneous actions and information interlinkages.

Budget allocation to competing programs is another typical problem. In many such problems different programs attempt to pursue similar objectives, and program mix and balance have to be considered besides the direct benefits of single programs. Another characteristic of budgeting decisions is the continuous nature of the decision variable and the constraint of the total budget. MAU looks like a natural structure for budget allocation decisions since it can handle the program evaluation aspect (see Edwards et al. 1976). But neither the balance issue nor the constrained and continuous characteristics of the budget are appropriately addressed by MAU. A prototypical decision analytic structure would model an evaluation of the budget apportionment, or the mix of programs funded at particular levels. Such a structure would perhaps exploit dependencies or independencies among programs much like independence assumptions for preferences.
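One way to sketch a budget structure that respects both the continuous decision variable and the total budget constraint is greedy allocation by marginal value over concave value-of-funding curves. The programs, curves, and numbers below are hypothetical; this is an illustration of the constrained-apportionment idea, not a structure proposed in the text.

```python
import math

budget_total = 10  # hypothetical total budget, in arbitrary money units
step = 1           # allocation increment

# Hypothetical concave value-of-funding curves, one per program.
programs = {
    "program_a": lambda x: math.log1p(2.0 * x),
    "program_b": lambda x: math.log1p(1.0 * x),
    "program_c": lambda x: math.log1p(0.5 * x),
}

alloc = {p: 0 for p in programs}
for _ in range(budget_total // step):
    # Give the next increment to the program with the highest marginal value.
    gain = {p: f(alloc[p] + step) - f(alloc[p]) for p, f in programs.items()}
    alloc[max(gain, key=gain.get)] += step

print(alloc)  # the full budget is spent, skewed toward the steeper curves
```

With concave curves this greedy rule equalizes marginal values across programs as far as the increment size allows, which is one simple way of expressing the 'mix and balance' concern alongside the budget constraint.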
Regulation covers a class of decision problems with a number of recurrent themes: three generic groups involved (regulators, regulated, beneficiaries of regulation), importance of monitoring and sanction schemes, usually opposing objectives of the regulated and the beneficiaries of regulation, and typically highly political objectives of the regulator. In the previous section the more specific regulation problem of standard setting was discussed, and a prototypical decision analytic structure was suggested. A decision analytic structure for regulation in general can build on the main features of the standard setting model.

This list could be extended to include private investment decisions, product mix selection, resource development, diagnostic problems, etc. But the four examples hopefully are sufficient to demonstrate how prototypical decision analytic structuring can be approached in general. In my opinion, such an approach to structuring could be at least as useful for the implementation of decision analysis as the computerization of decision models. Besides the technical advantages of transferability, prototypical decision analytic structures would serve to show that decision analysts are truly concerned about problems. Today decision analysis books have chapters such as 'simple decisions under uncertainty' and 'multiattribute evaluation problems'. I am looking forward to chapters such as 'siting industrial facilities', 'pollution control management', and 'contingency planning'.

[3] I believe that Keeney's forthcoming book on siting energy facilities is a major step in that direction. Of course, it could also be a step in the opposite direction, or in no direction at all (see also the first asterisked footnote at the beginning of the article).

References

Brown, R. V. and J. W. Ulvila, 1977. Selecting analytic approaches for decision situations. (Revised edition.) Vol. I: Overview of the methodology.
Technical report no. TR77-7-25, Decisions and Designs, Inc., McLean, VA.
Brown, R. V., A. S. Kahr and C. Peterson, 1974. Decision analysis for the manager. New York: Holt, Rinehart, and Winston.
Edwards, W., M. Guttentag and K. Snapper, 1976. A decision-theoretic approach to evaluation research. In: E. L. Streuning and M. Guttentag (eds.), Handbook of evaluation research, I. London: Sage.
Fischer, D. W. and D. von Winterfeldt, 1978. Setting standards for chronic oil discharges in the North Sea. Journal of Environmental Management 7, 177-199.
Gardiner, P. C. and A. Ford, in press. A merger of simulation and evaluation for applied policy research in social systems. In: K. Snapper (ed.), Practical evaluation: case studies in simplifying complex decision problems. Washington, DC: Information Resource Press.
Hogarth, R. M., C. Michaud and J.-L. Mery, 1980. Decision behavior in urban development: a methodological approach and substantive considerations. Acta Psychologica 45, 95-117.
Hopfinger, E. and R. Avenhaus, 1978. A game theoretic framework for dynamic standard setting procedures. IIASA-RM-78. International Institute for Applied Systems Analysis, Laxenburg, Austria.
Hopfinger, E. and D. von Winterfeldt, 1979. A dynamic model for setting railway noise standards. In: O. Moeschlin and D. Pallaschke (eds.), Game theory and related topics. Amsterdam: North-Holland. pp. 59-69.
Howell, W. C. and S. A. Burnett, 1978. Uncertainty measurement: a cognitive taxonomy. Organizational Behavior and Human Performance 22, 45-68.
Humphreys, P. C., 1980. Decision aids: aiding decisions. In: L. Sjoberg, T. Tyszka and J. A. Wise (eds.), Decision analyses and decision processes, 1. Lund: Doxa (in press).
Humphreys, P. C. and A. R. Humphreys, 1975. An investigation of subjective preference orderings for multiattributed alternatives. In: D. Wendt and C. Vlek (eds.), Utility, probability, and human decision making.
Dordrecht, Holland: Reidel, pp. 119-133.
Humphreys, P. C. and A. Wisudha, 1979. MAUD – an interactive computer program for the structuring, decomposition and recomposition of preferences between multiattributed alternatives. Technical report 79-2, Decision Analysis Unit, Brunel University, Uxbridge, England.
Johnson, E. M. and G. P. Huber, 1977. The technology of utility assessment. IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-7, 5.
Keeney, R. L., in press. Siting of energy facilities. New York: Academic Press.
Keeney, R. L. and H. Raiffa, 1976. Decisions with multiple objectives: preferences and value tradeoffs. New York: Wiley.
Kelly, C. W., III, 1978. Decision aids: engineering science and clinical art. Technical Report, Decisions and Designs, Inc., McLean, VA.
Kelly, C. and S. Barclay, 1973. A general Bayesian model for hierarchical inference. Organizational Behavior and Human Performance 10, 388-403.
Kneppreth, N. P., D. H. Hoessel, D. H. Gustafson and E. M. Johnson, 1977. A strategy for selecting a worth assessment technique. Technical paper 280, U. S. Army Research Institute for Behavioral and Social Sciences, Arlington, VA.
MacCrimmon, K. R., 1973. An overview of multiple criteria decision making. In: J. L. Cochrane and M. Zeleney (eds.), Multiple criteria decision making. Columbia, SC: The University of South Carolina Press. pp. 18-44.
MacCrimmon, K. R. and R. N. Taylor, 1975. Problem solving and decision making. In: M. C. Dunnette (ed.), Handbook of industrial and organizational psychology. New York: Rand McNally.
Mannheim, M. L. and F. Hall, 1967. Abstract representation of goals: a method for making decisions in complex problems. In: Transportation, a service. Proceedings of the Sesquicentennial Forum, New York Academy of Sciences – American Society of Mechanical Engineers, New York.
Miller, J. R., 1970. Professional decision making: a procedure for evaluating complex alternatives. New York: Praeger.
Miller, A. C., M. W.
Merkhofer, R. A. Howard, J. E. Matheson and T. R. Rice, 1976. Development of automated aids for decision analysis. Technical report, Stanford Research Institute, Menlo Park, CA.
Raiffa, H., 1968. Decision analysis. Reading, MA: Addison-Wesley.
Sage, A., 1977. Methodology for large scale systems. New York: McGraw-Hill.
Taylor, R. C., 1974. Nature of problem ill-structuredness: implications for problem formulation and solution. Decision Sciences 5, 632-643.
Vlek, C. and W. A. Wagenaar, 1979. Judgment and decision under uncertainty. In: J. A. Michon, E. G. Eijkman and L. F. W. DeKlerk (eds.), Handbook of psychonomics, II. Amsterdam: North-Holland. pp. 253-345.
Warfield, J., 1974. Structuring complex systems. Batelle Memorial Institute Monograph, no. 4.
Winterfeldt, D. von, 1978. A decision aiding system for improving the environmental standard setting process. In: K. Chikocki and A. Straszak (eds.), Systems analysis applications to complex programs. Oxford: Pergamon Press. pp. 119-124.
Winterfeldt, D. von and D. W. Fischer, 1975. Multiattribute utility: models and scaling procedures. In: D. Wendt and C. Vlek (eds.), Utility, probability, and human decision making. Dordrecht, Holland: Reidel. pp. 47-86.
Winterfeldt, D. von, R. Avenhaus, W. Häfele and E. Hopfinger, 1978. Procedures for the establishment of standards. IIASA-AR-78-A, B, C. International Institute for Applied Systems Analysis, Laxenburg, Austria.

Journal of Innovative Research in Electrical

Question: Discuss the Journal of Innovative Research in Electrical.

Answer:

Introduction

An AC system is made up of many components, and based on their load-consuming nature they can be classified as capacitive, resistive, and inductive. In the case of a purely resistive load the current and voltage are in phase, but most of the loads connected in any facility are inductive in nature; with inductive loads the current lags the voltage and is therefore out of phase, whereas with a capacitive load the current leads the voltage (Stokes, 2008). The current which leads or lags the voltage is called wattless current because it supplies reactive power, and this power is essential for turning the motors attached to the AC system (Stokes, 2008). This type of load therefore cannot be avoided, and if it increases, the apparent power requirement increases, which in turn increases the kVA demand for the same kW load (Stokes, 2008). Most high-tension tariffs charge a separate kVA charge along with the kW charges, which increases the electricity bill of the consumer (Stokes, 2008). Thus consumers with a low power factor have to pay more for the useful power they use, making power factor improvement a necessity. Some of the methods applied to improve the power factor in a system are as follows.

Capacitive power factor improvement can be implemented easily, since capacitors are capable of supplying the reactive power required by the load when a capacitor bank is connected in parallel to the inductive load (Turchi et al., 2014). The capacitor bank acts like a source of reactive power, so the inductive load absorbs less reactive power from the AC system, reducing the phase difference between the current and voltage.

Power factor improvement using a synchronous condenser is another method, where a three-phase over-excited synchronous motor running under no load is connected on the load side of the inductive load (Turchi et al., 2014).
A synchronous condenser functions like a capacitor, either supplying reactive power or drawing lagging current from the AC supply.

A phase advancer is an AC exciter which can improve the power factor of an induction motor (Turchi et al., 2014). It is connected to the rotor circuit of the induction motor and is mounted on the shaft of the motor (Turchi et al., 2014). The flux at slip frequency requires exciting ampere-turns, and the phase advancer supplies these ampere-turns, thereby improving the power factor.

As seen above, most of these methods work similarly to a capacitor in improving the power factor, and capacitor banks are therefore the most commonly used means of power factor improvement. Another advantage of using capacitors is that they are readily available in different sizes and shapes and are cheaper than the other methods. Capacitor banks are thus widely used in power factor improvement applications. This paper briefly reviews the work carried out in improving the power factor using capacitor banks.

Improvement in power factor has the following advantages (Natarajan, 2005): as energy efficiency improves, power consumption reduces, which results in reduced fossil fuel usage and greenhouse gas emissions from the power stations; electricity bills are reduced; the existing supply can deliver more kVA; losses in distribution equipment and transformers are reduced; voltage drops in long cables are reduced; and the electrical burden on electrical components is also reduced.

An assembly of a number of capacitors is a capacitor bank; capacitor banks are used to generate kVAr in order to improve the power factor (Ramzan et al., 2016). Arrangements of series/parallel-connected units are called shunt capacitor banks. Some of the types of capacitor banks used in power factor improvement are discussed below.
Grounded wye-connected capacitor banks (Natarajan, 2005): These are made up of parallel and series arrangements of capacitors in each phase, providing a low-impedance path to ground. This low-impedance path to ground gives inherent self-protection against lightning surge currents and some protection from surge voltages, so the banks can operate without surge arresters. They also offer a low-impedance path for high-frequency currents and can therefore be used as filters in systems. However, the circulation of inrush currents and harmonics may cause failures and spurious operation of protective relays and fuses.

Ungrounded wye-connected capacitor banks (Natarajan, 2005): These do not allow large capacitor discharge currents, third-harmonic currents, or zero-sequence currents to flow during system ground faults. Another advantage is that overvoltages appearing at the current transformer secondary may not be as high as with grounded banks if the neutral is insulated for full line voltage. However, this arrangement is expensive for banks above 15 kV.

There are two classes of capacitor bank connection: shunt and series. Of these two categories, shunt capacitors are the more widely used in power systems at all voltage levels (Chandra and Agarwal, 2014). Using shunt capacitors has some specific advantages, such as: reduced line current of the system; improved voltage level at the load; reduced system losses; improved power factor of the source current; reduced loading of the alternator; and reduced capital investment per megawatt of load. All of these benefits stem from the fact that the capacitor reduces the reactive current flowing through the whole system.
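The sizing behind these benefits can be made concrete. The required compensating reactive power for raising a load of real power P from power factor cos(phi1) to cos(phi2) is Qc = P * (tan(phi1) - tan(phi2)). A minimal sketch with hypothetical load values (the 100 kW load and the 0.70 to 0.95 correction are illustrative, not from the sources cited above):

```java
// Sketch of shunt-capacitor kVAr sizing: Qc = P * (tan(phi1) - tan(phi2)).
// The load size and power factor targets used below are hypothetical.
public class KvarSizing {
    static double requiredKvar(double loadKw, double pfInitial, double pfTarget) {
        double tanPhi1 = Math.tan(Math.acos(pfInitial)); // initial phase angle
        double tanPhi2 = Math.tan(Math.acos(pfTarget));  // target phase angle
        return loadKw * (tanPhi1 - tanPhi2);
    }

    public static void main(String[] args) {
        double qc = requiredKvar(100.0, 0.70, 0.95);
        System.out.printf("Capacitor bank size: %.1f kVAr%n", qc);
        // For the same 100 kW load, apparent power falls from P/0.70 to P/0.95.
        System.out.printf("kVA demand: %.1f -> %.1f%n", 100.0 / 0.70, 100.0 / 0.95);
    }
}
```

This also illustrates the tariff point made in the introduction: the kW drawn is unchanged, but the billed kVA demand drops with the improved power factor.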
A shunt capacitor draws a practically fixed leading current which is superimposed on the load current; it thereby reduces the reactive component of the load and improves the power factor of the system (Miller, 1976). A series capacitor, on the other hand, has no control over the flow of current (Miller, 1976). As it is connected in series with the load, the load current always passes through the series capacitor bank. The capacitive reactance of the series capacitor cancels part of the inductive reactance of the line and hence reduces the effective reactance of the line.

Power factor correction is accomplished by supplementing a capacitive or inductive circuit with a reactance of opposite phase (Andrews et al., 1996). For a typical phase-lagging load such as a large induction motor, this consists of a capacitor bank of several parallel capacitors at the power input to the device. Low power factor is not much of a problem in private homes, but it becomes an issue in industry, where many large motors are used, so there is a requirement for correcting the power factor in industry. A low power factor is not accepted by the standards, since poor power factors affect the costs of both the consumers and the electrical power industry. Besides the increased cost of operation, reactive power requires transmission lines, transformers, circuit breakers, switches, and wiring of higher current capacity. By using a PLC-based capacitor bank system a lagging power factor can be improved automatically, protecting the system from the various burdens of a lagging power factor; with such a system, power factor control becomes faster and more accurate than with other techniques, and electricity charges are also reduced (Desai et al., 2015).
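A hedged sketch of the switching decision such a PLC or microcontroller might make follows. The step size, bank count, and target power factor are hypothetical assumptions for illustration; the cited systems switch one capacitor step at a time toward a target, which the calculation below approximates in one pass.

```java
// Hypothetical automatic PF-controller decision: given the measured power
// factor, estimate how many equal capacitor steps to switch in. Step size,
// number of banks, and target PF are illustrative assumptions.
public class PfController {
    static int stepsToEngage(double loadKw, double measuredPf,
                             double targetPf, double stepKvar, int maxSteps) {
        if (measuredPf >= targetPf) return 0;  // already compensated
        double neededKvar = loadKw * (Math.tan(Math.acos(measuredPf))
                                    - Math.tan(Math.acos(targetPf)));
        int steps = (int) Math.ceil(neededKvar / stepKvar);
        return Math.min(steps, maxSteps);      // never exceed the bank
    }

    public static void main(String[] args) {
        // 100 kW load at 0.72 lagging, eight 10 kVAr steps, target 0.95.
        int n = stepsToEngage(100.0, 0.72, 0.95, 10.0, 8);
        System.out.println("Engage " + n + " capacitor steps");
    }
}
```

Capping at the bank size and returning zero once the target is met also avoids the overcorrection the text warns against later.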
The input voltage and current waveforms, with their phase difference, are fed to zero-crossing detectors, which produce square waves in digital form. These digital waveforms are used by the microcontroller to calculate the power factor, and the microcontroller decides which capacitor bank to switch in to compensate. After detecting a poor power factor, the automatic power factor control system switches in one capacitor at a time out of a group of eight capacitors (Tiwari and Sharma, 2014). If the target power factor is achieved, the next cycle repeats; otherwise switching of capacitors continues until compensation is under control. Before the actual implementation of the automatic system in the physical world, the proof of concept can be verified using Proteus VSM (Sarkar and Hiwase, 2015).

Power factor correction can also be done using solid-state switched capacitors. The power factor of the load is measured using an LM358 zero-crossing circuit and a CD4070BC phase-shift detector; the power factor is then calculated by the program and displayed on an LCD. If the power factor is not in range, the controller unit switches capacitors on or off, activating or deactivating them to improve the power factor (Popa et al., 2013). This system is implemented on an Arduino UNO microcontroller, using the C language to program the microcontroller, an Arduino program to determine the time lag between current and voltage, and Proteus 7.7 to simulate the power factor according to the load. Solid-state switches are electronic switching devices that can turn on or off when a small external voltage is applied from the microcontroller (Than, 2016). In AC circuits, solid-state relays switch on at zero load current.
Because the circuit is not interrupted at a sine-wave peak, the large transient voltages that a sudden collapse of the magnetic field around an inductance would cause are avoided; this is known as zero-crossover switching. Solid-state switches offer many advantages in this system: a thinner profile, quieter operation, and switching faster than electromechanical relays; the switching time of a typical optically coupled SSR is in the range of microseconds to milliseconds, depending on the time required to power the LED on and off. They have no moving parts to wear and no contacts to pit or build up carbon. Output resistance remains constant regardless of the amount of use. They are far less sensitive to operating and environmental conditions such as humidity, vibration, mechanical shock, and external magnetic fields. A power factor control system with solid-state switched capacitors, when fully implemented, provides power factor improvement in low-voltage systems. The Arduino UNO controller is very popular at present, and the program can easily be written in a high-level language. Compared with mechanical relays, solid-state switches yield many reliable and efficient results. The system is effective for various loads, using different sizes of capacitors and triggering the switches under program control.

In an automatic power factor correction system, potential divider circuits can be used instead of conventional zero-crossing detectors, which gives a steadier power factor and also cuts cost, since no extra ICs are needed (Utpal et al., 2016). Using a microcontroller reduces the costs.
With a microcontroller, many parameters can be controlled, and the need for additional hardware such as a clock, RAM, ROM, and input/output ports is reduced. Overcorrection should be avoided, because the resulting higher current and voltage make the power system or machine unstable and shorten the life of the capacitor banks.

A PLC is a solid-state industrial computer that controls a system by continuously monitoring the state of input devices and making decisions, based on a predetermined program, to control the state of output devices (Vukojevic et al., 2015). The basic repetitive steps in the operation of all PLCs include scanning the inputs, during which the state of all the inputs connected to the PLC is examined (Jain et al., 2016); the program scan, in which the PLC checks and executes the program logic sequentially, generating the output states; the output scan, during which the generated output states are updated in the output status registers to energize or de-energize the output devices connected to the PLC output module; and housekeeping, which includes internal diagnostics, communication with programming terminals, and so on.

Evaluations

There are various ways in which a capacitor bank may be connected, the choice depending on the desired relay protection, the system grounding, the kVAr capacity of the bank, and the voltage level of the system. First the individual capacitor units are selected to meet the voltage requirements of the system; then the number of parallel units is selected to meet the kVAr requirement of the capacitor bank (Chen et al., 2010). The number of paralleled capacitor units in each phase depends on two criteria: first, if one capacitor unit in a phase fails, it should not create a voltage rise of more than 110 percent of the rated voltage on the remaining units.
Second, if a unit fails, adequate fault current should flow through the individual fuse circuit to clear the fault within 300 s or less. Attention should be paid to this 300 s figure, as it is a maximum; the clearing time should preferably be brought down to 30 s. The maximum number of parallel-connected capacitor units in each series group is governed by a different consideration. When a capacitor unit fails, the other capacitors in the same parallel group still hold some charge, which then drains off as a high-frequency transient current flowing through the failed unit and its fuse. This transient current must be withstood by the fuse holder and the failed capacitor unit. The overvoltage induced when a capacitor unit is disconnected increases with the number of capacitor units per series group. Failure of additional capacitors is most likely to occur in the same series group as the first failure, as those units carry the highest voltage stress. Each capacitor unit is normally protected by a fuse, externally mounted between the capacitor bank fuse bus and the capacitor unit (Locke, 2000). With internally fused capacitors, when a capacitor pack or element fails, the current through its respective fusible link will be considerably higher than the normal current and may blow the fusible link, thereby isolating the failed pack or element. If a fuse blows in a capacitor bank, the fundamental-frequency voltage across the remaining units in that series group increases. An unbalance detection scheme is used to monitor such conditions and to take action as required (Shwedhi and Sultan, 2000). This scheme usually includes three levels of action. An overvoltage of less than 110 percent should be indicated with an alarm; the delay is normally 4 s or greater.
An overvoltage greater than 110 percent requires tripping the capacitor bank switching device as the unbalance increases; the delay is typically 4 s or greater. A trip for severe bank unbalance should provide fast fuse-clearing, with a shorter delay of 0.3 to 0.5 s. Unbalance protection normally provides the primary protection for arcing faults within a capacitor bank and for other abnormalities that may damage capacitor units or fuses. Arcing faults may cause substantial damage in a small fraction of a second. The unbalance protection should have minimal intentional delay in order to limit the amount of damage to the bank in the event of external arcing.

As the switching of capacitors is done automatically, the result is more accurate; power factor correction makes the system stable, and its efficiency also increases as the power factor improves (Than, 2016). Power factor correction schemes can be applied in industry, in power systems, and for household purposes. The use of a microcontroller reduces the costs; with a microcontroller, various parameters can be controlled and the need for additional hardware such as a clock, RAM, ROM, and input/output ports is reduced. Before insertion into the automatic power factor correction circuit, the capacitors are connected in parallel to the load circuit, so they are designed to be connected in parallel with the load (Khan and Owais, 2016). The capacitances of capacitors add when connected in parallel, and they are connected in parallel with the relay switchboard. The figure below shows the connection of the capacitor bank.
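The three protection levels described above can be summarized in code. A minimal sketch: the thresholds and delays follow the text (alarm below 110 percent overvoltage with a delay of about 4 s, trip above 110 percent, fast trip in 0.3 to 0.5 s for severe unbalance), while the severe-unbalance input itself is a hypothetical placeholder for a measurement a real relay would derive from neutral current or voltage differentials.

```java
// Sketch of the three-level unbalance protection described in the text.
// The "severeUnbalance" flag is a hypothetical input standing in for a
// real relay's neutral-current or voltage-differential measurement.
public class UnbalanceProtection {
    enum Action { ALARM, TRIP, FAST_TRIP }

    static Action respond(double overvoltagePercent, boolean severeUnbalance) {
        if (severeUnbalance) return Action.FAST_TRIP;       // clear in 0.3-0.5 s
        if (overvoltagePercent > 110.0) return Action.TRIP; // delay ~4 s
        return Action.ALARM;                                // indicate only
    }

    public static void main(String[] args) {
        System.out.println(respond(105.0, false)); // below threshold: alarm
        System.out.println(respond(115.0, false)); // above 110%: trip
        System.out.println(respond(120.0, true));  // severe: fast trip
    }
}
```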
A PLC-based power factor improvement scheme is implemented to improve the power factor under different operating conditions. The power factor controller uses a lookup table built from the measured values of power factor; according to the lookup table, the PLC switches the capacitors in each phase using relay-operated switches (Jain et al., 2016). By introducing properly sized power capacitors into the circuit, the power factor is improved and its value becomes closer to 0.9 to 0.95; capacitor banks used for power factor correction therefore reduce losses, increase the efficiency of the power system, and also improve security (Chen et al., 2010). By using an automatic power factor control system, the efficiency of the system is greatly increased.

Conclusions

The use of a PLC as a power factor controller has proved to be a flexible, efficient, and cost-effective tool for industrial applications. It improves the power quality, cost, and protection in the system. With a power factor correction controller we can handle: higher capacitor values in the distorting regime; operation at small load or idling of the motor, where the distorting regime increases; a large difference between the power factor in the distorting regime and the power factor of the fundamentals, which indicates a higher distorting regime; and, if the capacitor banks are connected with series coils, reduced distortion, provided a filter with a lower frequency (2 kHz) is used.

Mechanically Switched Capacitors (MSCs) are the most economical reactive power compensation devices. They are a simple, low-cost, though low-speed, solution for voltage control and network stabilization under heavy load conditions (Oliveira et al., 2010). Their installation has no effect on the short-circuit power, but it supports the voltage at the point of connection.
The MSC installations have other beneficial effects on the system, such as improvement of the voltage profile, better voltage regulation, reduction of losses, and reduction or postponement of investments in transmission and generation capacity. Properly designed capacitor banks connected to induction motors can improve the power factor among other improvements in overall performance. Any malfunction of the capacitor bank can, however, cause undesirable degradation of system performance. Power supply distortion can commonly be measured by power factor and total harmonic distortion (Das et al., 2009). A low power factor is highly undesirable, as it causes an increase in current, resulting in additional losses of active power in all the elements of the power system, from the power station generator down to the utilization devices. In order to ensure the most favourable conditions for a supply system from an engineering and economic point of view, it is important to have the power factor as close to unity as possible. Improvement of the power factor lets the utility companies eliminate power losses, and consumers are freed from low power factor penalty charges.

References

Andrews, D., Bishop, M.T. and Writte, J.F. (1996). Economics, measurements, analysis, and power factor correction in a modern steel manufacturing facility. IEEE Transactions on Industry Applications, 32(3), pp. 617-624.
Chandra, A. and Agarwal, T. (2014). Capacitor Bank Designing for Power Factor Improvement. International Journal of Emerging Technology and Advanced Engineering, 4(8), pp. 235-239.
Chen, Y., Yao, D., Ma, W., Pan, J., Gertmar, L., Babaee, A. and Vessel, R. (2010). Study on coordinated reactive power control strategies for power plant auxiliary system energy efficiency and reliability improvement. In: International Conference on Power System Technology. [online] Hangzhou: IEEE, pp. 1-8.
Available at: https://ieeexplore.ieee.org/document/5666716/ [Accessed 17 Sept. 2017].
Das, S., Das, G., Purkait, P. and Chakravorti, S. (2009). Anomalies in harmonic distortion and Concordia pattern analyses in induction motors due to capacitor bank malfunctions. In: International Conference on Power Systems. [online] Kharagpur: IEEE. Available at: https://ieeexplore.ieee.org/document/5442656/ [Accessed 17 Sept. 2017].
Desai, S., Lalpurwala, N., Salokhe, V. and Katre, R. (2015). Power Factor Correction for 1 Phase Induction Motor Using PLC. International Journal of Electrical and Electronics Research, 3(2), pp. 1-4.
Jain, R., Sharma, S., Sreejeth, M. and Singh, M. (2016). PLC based power factor correction of 3-phase Induction Motor. In: International Conference on Power Electronics, Intelligent Control and Energy Systems. [online] Delhi: IEEE, pp. 1-5. Available at: https://ieeexplore.ieee.org/document/7853637/ [Accessed 17 Sept. 2017].
Khan, M.B. and Owais, M. (2016). Automatic power factor correction unit. In: International Conference on Computing, Electronic and Electrical Engineering. [online] Quetta: IEEE, pp. 1-6. Available at: https://ieeexplore.ieee.org/document/7495239/ [Accessed 17 Sept. 2017].
Locke, C. (2000). Optimal capacitor sizing for induction motors. In: Canadian Conference on Electrical and Computer Engineering. [online] Halifax: IEEE, pp. 1162-1166. Available at: https://ieeexplore.ieee.org/document/849646/ [Accessed 17 Sept. 2017].
Miller, F.D. (1976). Application guide for shunt capacitors on industrial distribution systems at medium voltage levels. IEEE Transactions on Industry Applications, 12(5), pp. 444-459.
Natarajan, R. (2005). Power System Capacitors. Boca Raton: Taylor & Francis.
Oliveira, A.L.P. and Pereira, A.L.M. (2010). Introduction of the Mechanically Switched Capacitors (MSCs) application on Power Transmission Systems. In: IEEE/PES Transmission and Distribution Conference and Exposition. [online] Sao Paulo: IEEE, pp. 452-457.
Available at: https://ieeexplore.ieee.org/document/5762921/ [Accessed 17 Sept. 2017].
Popa, G.N., Dinis, C.M. and Paliciuc (2013). On the use of low voltage power factor controller in deforming regime. In: International Symposium on Electrical and Electronics Engineering. [online] Galati: IEEE, pp. 1-6. Available at: https://ieeexplore.ieee.org/document/6674332/ [Accessed 17 Sept. 2017].
Ramzan, N., Akbar, A., Khan, Z.J., Naseer, P. and Zaffar, N. (2016). Reactive power compensation through synchronization of networked VFDs. In: Power Systems Conference. [online] Clemson: IEEE, pp. 1-7. Available at: https://ieeexplore.ieee.org/document/7462862/ [Accessed 17 Sept. 2017].
Sarkar, A. and Hiwase, U. (2015). Automatic Power Factor Correction by Continuous Monitoring. International Journal of Engineering and Innovative Technology, 4(10), pp. 170-176.
Shwedhi, M.H. and Sultan, M.R. (2000). Power factor correction capacitors; essentials and cautions. In: IEEE Power Engineering Society Summer Meeting. [online] Seattle: IEEE, pp. 1317-1322. Available at: https://ieeexplore.ieee.org/document/868713/ [Accessed 17 Sept. 2017].
Stokes, G. (2008). Handbook of Electrical Installation Practice. 4th ed. Hoboken: Blackwell Science Ltd.
Than, M.M. (2016). Implementation of Power Factor Correction Using Solid State Switched Capacitors. IOSR Journal of Electrical and Electronics Engineering, 11(4), pp. 70-79.
Tiwari, A.K. and Sharma, D. (2014). Automatic Power Factor Correction Using Capacitive Bank. International Journal of Engineering Research and Applications, 4(2), pp. 393-395.
Turchi, J., Dalal, D., Wang, P. and Jenck, L. (2014). Power Factor Correction (PFC) Handbook: Choosing the Right Power Factor Controller Solution. ON Semiconductor. [online] Available at: https://www.onsemi.com/pub/Collateral/HBD853-D.PDF [Accessed 16 Sept. 2017].
Utpal, Rishav and Tiwari, M. (2016). Automatic Power Factor Correction Using Capacitor Banks.
International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering, 4(4), pp. 9-16.
Vukojevic, A., Handley, J., Laval, S. and McFetridge, B. (2015). Improving power delivery through secondary voltage control. In: SoutheastCon. [online] Fort Lauderdale: IEEE, pp. 1-5. Available at: https://ieeexplore.ieee.org/document/7132938/ [Accessed 17 Sept. 2017].

Saturday, May 2, 2020

Java Basic Principles & Features

Question: Discuss the principles, characteristics and features of programming in Java.

Answer:

Introduction: This report is about designing a software solution for a university so that student information can be kept and displayed, with insertion, deletion, and updating of records as well. The report starts with the basic principles of Java and its features. The design solution is discussed to give an overview of how the solution is generated, and the implementation explores the detailed structure with a demonstration of the modules individually. The test and documentation section describes test cases and user documentation. The software is based on Java technology and designed in an IDE. The tool is also described, with screenshots attached for the step-by-step execution of every activity. The following points cover the Java principles, each described with an example.

Basic principles of Java: Java is a purely object-oriented programming language, and it follows four major principles: encapsulation, inheritance, polymorphism, and abstraction. The main principles are discussed below.

Encapsulation: Encapsulation is the object-oriented principle that binds data together with the methods that operate on it. It enables data hiding and provides security of data, preventing unauthorized users from accessing confidential information. It is implemented using access specifiers, for example private, protected, and public. Encapsulation is applied in Java using classes; a class is the whole structure of a program and binds methods with data. Each class is represented by objects, and the class members can be accessed only through these objects.

Inheritance: Inheritance is the most powerful design principle of object-oriented programming; it means inheriting the functionality of a parent class in its child class. Extending the functionality of the parent class in a child class increases the reusability of code.
Inheritance is one of the most heavily used features in Java coding. Without it, every object's behaviour would have to be described again and again; reusing a single definition in multiple places saves both time and space. Inheritance also works together with encapsulation, since subclass objects carry the encapsulated state of their superclass.

Polymorphism: Polymorphism is the ability to use one function in many forms; as the name itself suggests, poly means many and morph means forms. It helps in designing generic interfaces grouped from components. Polymorphism can be implemented in two ways: at run time and at compile time. Run-time polymorphism is called overriding and compile-time polymorphism is called overloading. Both approaches give a new definition to an already declared method; overriding is applied by defining the method again in a subclass.

The characteristics and features of the Java programming language:

Simple: Java programs are easy both to read and to write. Java is straightforward to understand and to learn because many of its concepts are taken from C++.

Secure: Java is designed so that code is not harmful to the rest of the system: web applications run inside the JVM rather than directly on the host, which gives a secure way of creating Internet applications.

Portable: Java supports all major platforms, such as Windows, Linux and Mac. Compiled Java programs can be transferred over the internet and executed anywhere the JVM, Java's run-time system, is available.

Object-oriented: Like C++, Java is an object-oriented programming language (OOP).

Robust: Java removes many error-prone constructs and checks types strictly, which helps the user write reliable programs; it is also a user-friendly language.

Multithreaded: Java has built-in support for multithreaded programming.
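The two forms of polymorphism named above can be shown side by side; all class and method names here are illustrative:

```java
// Compile-time polymorphism (overloading): the same method name with
// different parameter lists, resolved by the compiler.
class Printer {
    String format(int n)    { return "int:" + n; }
    String format(String s) { return "str:" + s; }
}

// Run-time polymorphism (overriding): a subclass redefines an inherited
// method, and the call is dispatched on the actual object type.
class Shape {
    String name() { return "shape"; }
}
class Circle extends Shape {
    @Override
    String name() { return "circle"; }
}
```

A variable declared as Shape but holding a Circle still calls Circle's version of name() at run time, which is what makes overriding the "run-time" form.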
Multithreading is the feature that lets multiple parts of a program run simultaneously: one program can start more than one thread.

Interpreted: The Java compiler translates source code into bytecode, an intermediate form between source language and machine code, and the JVM interprets that bytecode at run time.

High performance: Faster execution is achieved through the JVM, which enhances the bytecode at run time.

Distributed: Java programs can be distributed over the internet and are well suited to distributed environments.

Dynamic: Java provides run-time information about a program which can be verified and resolved easily; run-time polymorphism (overriding) is one way Java supports dynamic behaviour.

Java Runtime Environment (JRE): The Java Runtime Environment is the environment Java provides for running code, and its two main components here are the Java Virtual Machine and garbage collection. The compiler converts the high-level language into a class file, and once the code is compiled into a class file it can run on any machine; this makes Java platform independent: compile once, run anywhere. The JVM performs the execution, turning the class file into a running program.

Garbage collection: Garbage collection is the mechanism for allocating and reclaiming memory. In Java, garbage collection is automatic: the collector reclaims memory that is no longer reachable, so memory is recovered where it would otherwise leak, without any manual deallocation.

Java design solution: The given scenario is based on a student information record. Student information must be inserted, updated, deleted and displayed. The fields are First name, Last name, DOB, Student ID, Mobile, Address and Course enrolled.
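The multithreading feature can be demonstrated in a few lines; this sketch starts two threads over a shared counter, using AtomicInteger so the concurrent updates stay safe (the class and method names are illustrative):

```java
// Two threads incrementing one shared counter; AtomicInteger makes the
// increments thread-safe without explicit locking.
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    public static int runTwoThreads() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) counter.incrementAndGet();
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();   // wait for both threads to finish
        b.join();
        return counter.get();
    }
}
```

Both threads run the same task concurrently, and joining them before reading the counter guarantees the final total of 2000.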
The proposed program that solves this problem is written using the following components:

JFrame
Swing components
JTable

Each component has its own significance in the interface design: the JFrame forms the overall structure of the solution, the JTable displays the information, and the Swing components supply the labels, text boxes and buttons. The program is written in the NetBeans Integrated Development Environment (IDE), which ships with Java-based components and supports Java programming directly. The data structure used for display is the table, and three functionalities are given to the user: insertion, deletion and update. Because it is developed with Java Standard Edition, this is a desktop application following windowed programming. The design is prepared around a form structure, with information taken from normal input fields. The next point concerns implementing the design solution with the design tool, with the programming done in core Java.

Java SE technology
NetBeans as the design tool

Design implementation: The proposed design solution is implemented by writing code in the IDE mentioned above, which auto-corrects and provides suggestions during development. The design implementation is shown in the following screenshots. The solution is designed in modules, as follows:

INSERTION
UPDATE
DELETE
DISPLAY

The page design is built with Swing components; as the design code shows, labels and text fields are added, and the screenshots show how each page will look. The table structure is shown with the other module information, and the code is attached with each page activity.
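The JFrame-plus-JTable structure described above can be sketched minimally. This is not the report's actual NetBeans-generated code; only the column names come from the fields listed in the report, and the class and method names are assumptions:

```java
// A minimal sketch of the report's table design: a DefaultTableModel
// holds the student columns and backs a JTable.
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class StudentTableSketch {
    public static DefaultTableModel buildModel() {
        String[] columns = {
            "First name", "Last name", "DOB", "Student ID",
            "Mobile", "Address", "Course enrolled"
        };
        return new DefaultTableModel(columns, 0); // start with zero rows
    }

    public static JTable buildTable() {
        // The table renders whatever rows the model holds; in the real
        // application this table would be placed inside the JFrame.
        return new JTable(buildModel());
    }
}
```

Keeping the data in the model rather than in the table itself is what lets the insert, update and delete modules later manipulate rows without touching the display code.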
Insertion: The very first module of the proposed software inserts information into the table; the information is entered manually. As shown in the diagram, a list with four options is attached for course enrolment, and the address field uses a text area so that a long address can be managed easily. The screenshots show the page code and the insertion activity: the information is collected and then shown in the table below, and in this manner insertion is done.

Update: The second main feature of this application is updating the inserted information. First a row is selected by clicking; a mouse-click event is implemented in the code for this purpose, as shown in the screenshot. The selected row's course is then changed in the next picture, where the record can be seen to have been updated. The code for the update page is also attached.

Deletion: The last module of the design implementation deletes the selected row from the table. For this purpose the function is written on the action command of the delete button; it contains a simple command and executes when the button is pressed. The demonstration is shown in the next pictures, where the selected record is deleted.

In this manner the module implementation is done; the screenshots above are attached to aid understanding of the software demonstration, and all the functions mentioned work properly. The next points cover error handling and object utilization.

Error handling and control-flow structure: The proposed solution contains little explicit error handling because the classes used in the code do not throw any critical exceptions. Control-flow structures are used in the mouse-click event function and execute successfully.
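The three modules all operate on the table model; a headless sketch of the same insert/update/delete flow is below. The method names are hypothetical, not the report's actual button handlers:

```java
// Insert, update and delete performed directly on a DefaultTableModel,
// mirroring the report's three button actions.
import javax.swing.table.DefaultTableModel;

public class RecordOps {
    public static void insert(DefaultTableModel m, Object[] row) {
        m.addRow(row);                 // insertion module: append a row
    }

    public static void update(DefaultTableModel m, int row, int col, Object v) {
        m.setValueAt(v, row, col);     // update module: change one cell
    }

    public static void delete(DefaultTableModel m, int row) {
        m.removeRow(row);              // deletion module: drop the row
    }
}
```

In the real application the row index would come from the mouse-click event on the JTable (for example via getSelectedRow()), which is the selection step the screenshots demonstrate.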
Error handling is a main feature of Java because the Java runtime environment reports system errors as exceptions, which helps programs keep running and makes them more dynamic. The screenshots clarify how it is used in the proposed solution. Exceptions in Java can be handled with the following statements:

try-catch statement
throw keyword
throws keyword

All three methods of handling exceptions are applied where the specific requirement calls for them. In this program the throws clause is used so that the main function propagates exceptions automatically.

Test design documentation: The software is tested from the user's perspective, from its design through to its execution. White-box testing is applied to this kind of solution because it is an umbrella activity that runs alongside every other activity of the software. The design documentation is produced in two ways: a user review and a technical review. The user can operate the proposed solution as shown in the screenshots; all activities run independently, so the functions run one by one. The design of the software implements the different modules already discussed in detail at the beginning of the report. The technical document records the technology used, Java, together with the tool used for interface design and coding. The best part of utilizing the NetBeans IDE is that no extra effort is needed on the interface design; the tool lets components be dragged and dropped directly onto the frame, which saves both time and effort.

Conclusion: The software solution is developed according to the given requirements, and the solution to the problem is shown alongside each implementation step. The steps are clearly mentioned and described using the live running frame. The proposed solution is helpful for small-scale record keeping and display, but its future scope is broad because the essential information can be utilized in future as well.
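The three exception mechanisms listed above can be seen together in one small sketch; the AgeCheck class and its methods are illustrative, not part of the report's program:

```java
// `throw` raises the exception, `throws` declares it on the method
// signature, and the caller handles it with try-catch.
public class AgeCheck {
    // `throws` declares that this method may propagate the exception,
    // as the report's main function does.
    static void validate(int age) throws IllegalArgumentException {
        if (age < 0) {
            throw new IllegalArgumentException("age cannot be negative");
        }
    }

    // The caller chooses to handle the exception locally instead.
    static String tryValidate(int age) {
        try {
            validate(age);
            return "ok";
        } catch (IllegalArgumentException e) {
            return "error: " + e.getMessage();
        }
    }
}
```

Declaring the exception with throws pushes the decision to the caller, while the try-catch version keeps the program running after a bad input, which is the robustness benefit the text describes.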
The solution provided includes detailed information on each module together with its demonstration. Screenshots are attached to demonstrate the software, and comments are included to check the functionality shown by the working code. The code pages are also attached in the files, and each image is described with a comment. The goals have been achieved so far; if any requirement is not fulfilled, it will be covered in the full version of this project. Overall, the report covers the design and implementation of the principles of Java in the developed program, and all of them have been utilized in it.