Saturday, August 31, 2019

Prejudice and Discrimination in Philadelphia Essay

Philadelphia is a movie which demonstrates not only the cold-blooded and hypocritical members of corporate society, but also the indignities and prejudices that people living with AIDS have to go through. The movie is set in an era when homosexuality was not socially accepted and few people were educated about the disease AIDS. Andrew Beckett is a Philadelphia lawyer who has been keeping his homosexuality, and his AIDS, hidden from his conservative bosses. He is a good worker and is respected in the workplace until one day he is suddenly and inexplicably fired. Andrew suspects AIDS is the reason and is determined to fight in court, even as he is losing his other battle, against the disease. Beckett hires attorney Joe Miller to represent him. Joe Miller is a homophobe and first has to overcome his fear of gay people. Andrew Beckett's ex-boss, Charles Wheeler, is a sickening, prejudiced man who resembles the most disgusting kind of corporate boss there is: the boss who pretends to be friends with his coworkers or clients, only to stab them in the back later. He will do anything to benefit himself and get to the top of the business. At the beginning, Wheeler pretends to be Andy's friend; he even asks him for legal advice on a special antitrust case, "Highlight vs. Sander Systems". Andy Beckett is fired from the job once they find out he has AIDS, but they try to make it look as if he was fired for other reasons. The movie also clearly shows the prejudices and misconceptions people have about AIDS. Another scene that shows people's ignorance of and prejudice towards AIDS and homosexuals is the library scene, in which Andrew Beckett is conducting research for his AIDS case against the law firm that illegally fired him. The librarian first asks Andy Beckett if he would be more comfortable in a study room, but it then becomes evident that the ignorant librarian is telling, not asking, Andy Beckett to go to another room, because she is uneducated about the disease and fears she might catch it. Andrew, being a very proud man, refuses, showing his true dignity even while having AIDS. At the time, Andrew Beckett's lawyer Joe Miller was there and saw this happening, although he was hiding behind a pile of books. He realised Andrew needed him to help protect his rights. The lawyer took the book Andy was holding out of his hand to show the librarian he was not afraid of getting the disease by touching something Andrew had touched. During the court case, Joe Miller brings up the point of homosexuality against Andrew's old company in his argument; he brings up the point of how society does not accept AIDS and homosexuals. In the end, Andrew and Joe win the case and get money in damages; although Andrew is dying, he is happy to see that they won the case and raised awareness of AIDS.

Friday, August 30, 2019

Agency relationship Essay

1. INTRODUCTION Agency is a fiduciary relationship created by express or implied contract or by law, in which one party (the agent) may act on behalf of another party (the principal) and bind that other party by words and/or actions. The etymology of the word agent or agency says much. The words are derived from the Latin verb ago, agere (the respective noun agens, agentis). The word denotes one who acts, a doer, force or power that accomplishes things.1 Agency is the exception to the doctrine of privity under the law of contract. 2. LIABILITY OF A PRINCIPAL AGAINST THIRD PARTIES Lord Alverstone CJ in THE QUEEN V KANE2 defined an agent simply as 'any person who happens to act on behalf of another'. A principal is one who authorizes another to act on his or her behalf as an agent. The general rule is that where an agent makes a contract on behalf of his principal, the contract is between the principal and the third party and, prima facie at common law, the only person who can sue and be sued on the contract is the principal. The agent acquires no rights under the contract, nor does he incur any obligation. Having performed his task by bringing about a contract between his principal and a third party, the agent drops out of the picture, subject to any outstanding matters between him and the principal.3 The onus is on the person alleging that he entered into a contract with another person through an agent to prove that in fact the agent was acting as such. Agents of the state can never be personally liable for the state's failure to perform a contractual obligation, as stated in STICKROSE (PTY) LIMITED V THE PERMANENT SECRETARY MINISTRY OF FINANCE4. In law, agents are recognized as having the power to affect the legal rights, liabilities and relationships of the principal. In CAVMONT MERCHANT BANK v AMAKA AGRICULTURAL HOLDINGS5, the Supreme Court held that where an agent in making the contract discloses both the interest and the name of the principal on whose behalf he purports to make the contract, the agent as a general rule is not liable to the other contracting party. Apart from having the power to affect the legal rights, liabilities and relationships of the principal, the agent may also affect the legal position of his principal in other ways. For instance, he may dispose of the principal's property in order to transfer ownership to a third party, or he may acquire property on his principal's behalf. Sometimes the actions of the agent may make the principal criminally liable, as illustrated in the case of GARDENER v ACKEROYD6. The rights and liabilities of principal and agent against third parties may differ according to whether the agency is disclosed or undisclosed. The distinction between disclosed and undisclosed agency is important as it affects the principal's ability to ratify the agent's actions. Furthermore, the agent's liability to third parties may depend on whether the agency was disclosed or not. Agency is disclosed where the agent reveals that he is acting as an agent; if the agency is disclosed it is of no legal significance that the principal is not named. If an agent contracts with a third party without disclosing that he is acting as an agent, the agency is undisclosed.7 An undisclosed principal can intervene on the contracts of an agent within his actual authority.
Where an agent makes a contract disclosing the agency, the normal rule is that a direct contractual relationship is created between the principal and the third party, and either party can sue the other on the contract. It is important to note that only a disclosed principal can ratify an unauthorised contract. In KEIGHLEY MAXTED v DURANT8 a principal authorized an agent to buy wheat at a given price in the joint names of the principal and the agent. Having failed to purchase wheat at that price, the agent bought wheat in his own name at a higher price. The principal, being satisfied with this act, purportedly ratified the wheat purchase agreement at the higher price but failed to take delivery of the wheat. The seller then sued the principal, arguing that the sale contract had been ratified. It was held that the action could not succeed because the agent's act was unauthorized and, since the principal's identity had not been disclosed to the seller, the principal could not ratify and consequently was not liable on the contract. Where the principal is disclosed, he and not the agent is liable on the contract and may sue and be sued. In GADD v HOUGHTON & CO.9 Houghton & Co. sold to the buyer, Gadd, a quantity of oranges under a 'sold note' which stated, inter alia, that 'we have this day sold to you on account of James Morand & Co ...' and was signed 'Houghton & Co.' The seller having failed to deliver the oranges, the buyer sued Houghton & Co for damages for non-delivery. The action failed, since by the words of the sold note Houghton & Co had clearly indicated that they were not to be personally liable. They were merely brokers. Lord Mellish stated that "where you find a person in the body of the instrument treating himself as the seller or character, you can say that he intended to bind himself." In SUI YIN KWAN & ANOTHER v EASTERN INSURANCE CO. LTD10 it was held that the doctrine of the undisclosed principal applied. Where an agent acts within his actual authority, the undisclosed principal may intervene and acquire the rights/liabilities of the agent. In this case, the agents acted within their actual authority and therefore the relatives could recover from the insurance company. Lord Lloyd summarized the law as follows: (1) an undisclosed principal may sue and be sued on a contract made by an agent on his behalf, acting within the scope of his actual authority. (2) In entering into the contract, the agent must intend to act on the principal's behalf. (3) The agent of an undisclosed principal may also sue and be sued on the contract. (4) Any defence which the third party may have against the agent is available against his principal. (5) The terms of the contract may, expressly or by implication, exclude the principal's right to sue, and his liability to be sued. The contract itself, or the circumstances surrounding the contract, may show that the agent is the true and only principal. Sometimes the agent contracts with third parties after disclosing the fact that he is an agent but without disclosing the name of his principal. In such cases, the principal is bound by the contracts made on his behalf, and thus the principal is liable to third parties for his agent's acts done on behalf of the principal. However, such acts must be within the scope of the agent's authority, and the unnamed principal must be in existence at the time of contract. As a matter of fact, when the agent contracts after disclosing his representative character, the contract will be the contract of the principal.
For all such acts, the agent is not personally liable. However, the agent is personally liable if he declines to disclose the identity of the principal when asked by the third parties.11 When there is undisclosed agency, the contract is initially between the agent and the third party and each may enforce the contract against the other. However, if the third party later discovers the principal's existence, he may enforce the contract against either the agent or the principal. Provided that the agent acted within the scope of his actual authority, the principal can intervene and enforce the contract against the third party.12 3. CIRCUMSTANCES WHEN AN AGENT MAY BE HELD PERSONALLY LIABLE If an agent continues to act after his authority has been terminated, he may incur personal liability for breach of the implied warranty of authority. Sometimes an agent may run a potential risk when his authority is terminated automatically without his knowledge. In YONGE v TOYNBEE13, solicitors were acting in litigation for a client who, unknown to them, became mentally incapacitated, so that the agency was considered to be terminated. However, they continued to litigate for the client and were held liable for their breach of warranty of authority and were ordered to pay the costs of the other litigant. There are three exceptional cases where the undisclosed principal cannot sue or be sued by the third party. The first is where the contract between the agent and the third party expressly provides that the agent is the sole principal: U.K MUTUAL STEAMSHIP ASSURANCE ASSOCIATION v NEVILL14. The second is where the terms of the contract are inconsistent with agency. In HUMBLE v HUNTER15, an agent signed a charter-party in his own name and described himself as "owner" of the ship. It was held that his undisclosed principal could not sue. The third case where an undisclosed principal cannot sue is where the identity of the principal is material to the third party. One such case is where the contract made between the agent and the third party is too personal to permit an undisclosed principal to intervene, for example, contracts for personal service. In SAID v BUTT16, a theatre critic knew the management of a particular theatre would not sell him a ticket because of articles he had written. He obtained a ticket through an agent. It was held that the theatre could prevent the principal from entering the theatre. McCardie J said that "the critic could not assert a right as an undisclosed principal since, as he knew, the theatre was not willing to contract with him". Even where the undisclosed principal's existence is discovered, the agent remains liable on the contract and the third party may choose to enforce the contract against either the principal or the agent, but not both. This is known as the right of election. A third party has an elective right to sue either the agent or the principal where the agent does not disclose the principal. In BOYTER V THOMSON17 the seller instructed agents to sell on his behalf a cabin cruiser under a brokerage and agency agreement. The buyer purchased the boat thinking it was owned by the agents; he was not told that the agents were acting as such, nor the name of the owner, nor that the owner was not selling in the course of a business, although he was aware that the boat was being sold under a brokerage arrangement. The boat proved to be unseaworthy and was unfit for the purpose for which she was purchased.
The buyer sued the seller for damages, which were granted. The seller appealed to the House of Lords, which held that where goods are sold by an agent acting in the course of a business for an undisclosed principal, the buyer is entitled to sue not only the agent but also the principal. Once the third party elects to sue one party, his option to sue the other is extinguished. However, not every action by the third party suggesting action against one party in preference to another will be construed as the exercise of the right of election. In CURTIS v WILLIAMSON18, one Boulton, appearing to act on his own behalf, purchased some gunpowder from the plaintiff. Later, the plaintiff discovered that Boulton was acting on behalf of an undisclosed principal, the defendant mine owners. Boulton then filed a petition of liquidation and the plaintiff filed an affidavit in those proceedings in an attempt to recover the debt owed for the gunpowder. However, the plaintiff changed their mind and sued the defendant principal. It was held, first, that once an undisclosed principal is discovered the third party may elect to sue that principal; and secondly, that the filing of the affidavit against the agent did not prevent the action against the principal. The third party will not be bound by an election unless he has unequivocally indicated his intention to hold one party liable and release the other. As the doctrine of the undisclosed principal exists for purposes of commercial convenience, it is important to maintain protections for the third party. In the situation where the agent has failed to pass the payment to the third party, either the principal or the third party will lose, and it seems fairest to place the loss on the principal.19 4. HOW AGENCY MAY BE DETERMINED As the relationship between the agent and his principal is based on consent, actual authority is of paramount importance. An agent is only entitled to be paid if he acts within his actual authority. If he acts outside his authority he may be liable to his principal. The relationship between the principal and a third party depends on the agent's power to bind his principal. However, what is of concern to the third party is the agent's apparent authority, as this is what he relies on in the ordinary course of events. There are several types of authority. These are: a) Express Authority – the agreement between a principal and agent may be express or implied. Express agreement may be made orally, in writing or by deed. In general, if an agent is appointed to execute a deed his appointment is by a deed called a power of attorney. b) Implied Authority – this arises where, although a particular action is not sanctioned by express agreement between the principal and the agent, the principal is nevertheless taken to have impliedly consented to the action or transaction in question. In GARNAC GRAIN CO. v H.M.F. FAURE AND FAIRCLOUGH20 the House of Lords stated that "the relationship of principal and agent can only be established by the consent of the principal and agent. They will be taken to have consented if they have agreed to what amounts in law to such a relationship, even if they do not recognize it themselves and even if they have professed to disclaim it."
An agent who has express authority to carry out a particular task may also have additional authority to do certain acts incidental to his authorized task. For instance, an agent authorized to sell the principal's property has implied incidental authority to sign a contract of sale. c) Apparent Authority – a person may be bound by the acts of another done on his behalf, without his consent or even in breach of an express prohibition, if his words or conduct create the impression that he has authorized the other person to act on his behalf. This is described at law as "apparent agency or authority" or "ostensible agency or authority". The distinction between actual and apparent authority was explained by Diplock L.J. in FREEMAN & LOCKYER V. BUCKHURST PARK PROPERTIES21: "apparent" or "ostensible" authority is a legal relationship between the principal and the contractor created by a representation, made by the principal to the contractor, intended to be and in fact acted on by the contractor, that the agent has authority to enter on behalf of the principal into a contract of a kind within the scope of the "apparent" authority, so as to render the principal liable to perform any obligations imposed on him by such contract. To the relationship so created the agent is a stranger. He need not be (although he generally is) aware of the existence of the representation. The representation, when acted on by the contractor by entering into a contract with the agent, operates as an estoppel, preventing the principal from asserting that he is not bound by the contract. It is irrelevant whether the agent had actual authority to enter into the contract. d) Agents of Necessity – a person who acts in an emergency, for instance to preserve the property or interest of another, may be treated as an agent of necessity. His actions will be deemed to have been authorized even if no actual authority is given. Like apparent authority, an agency of necessity can arise even in the absence of consent from the principal. Agency of necessity only arises in extreme circumstances where there is actual and definite commercial necessity for the agent's actions. The following must be satisfied for an agency of necessity to exist: (i) There must be an emergency – something unforeseen. (ii) It must be practically impossible to get instructions from the principal. (iii) The agent must act bona fide in the interest of the principal rather than to advance his own interests. He must not take advantage of the principal. (iv) The agent must act reasonably in the circumstances. e) Agency arising out of Co-habitation – it is argued that a wife has authority to pledge the credit of her husband for necessities (or vice versa). However, others argue that social conditions now make it old-fashioned to suggest that actual or apparent authority should not arise between husband and wife. The law recognizes the following as agents even though they do not bear the title of agent22: (a) Company Directors and other company officials – being an artificial person, a company has to act through human agents. The authority to act as company agents is vested in the board of directors. This authority may be delegated to one or more executive directors by the articles of the company to allow them to manage the day-to-day operations of the company.
(b) Partnerships – as a partnership has no separate legal identity from its members, every partner in a firm is an agent of the firm as well as of all the other partners for the purpose of the business of the firm. Thus, a partner who performs an act for the purpose of carrying out the business of the firm binds the firm as well as the other partners. (c) Employees – these may be servants working under a contract of service or independent contractors working under a contract for services. An employee, e.g. a shop assistant, is the agent of the shop owner for the purposes of making a contract of sale for the owner. He has the authority to make statements about goods that are binding on the shop owner, his employer. (d) Professionals – acting on behalf of clients, professionals may be the agents of those clients. E.g. a lawyer conducting litigation is his client's agent and may have authority to settle the case, and that settlement will bind the client. Thus the lawyer, not the client, normally signs a consent judgment. Similarly, an accountant's agreement or statement to ZRA will bind his client in accordance with agency principles. The relationship between principal and agent depends on consent. If consent is withdrawn, the agency will automatically end, as will the agent's actual authority to bind the principal. An agency relationship may be terminated in the following ways: (a) By mutual consent between the agent and the principal. (b) By either party unilaterally withdrawing consent. (c) An agent may have been appointed for a fixed period of time or for a specific task or set of tasks. Once the time elapses or the task(s) is/are completed, the agency will terminate. (d) By operation of law, e.g. if the performance of the agency relationship becomes illegal (e.g. one party becomes an enemy alien) or impossible (where it will be ended by the agency contract being frustrated). Death of either party will also terminate the agency and any contract made between them. If an agent becomes insane, the relationship is automatically terminated. The bankruptcy of either the agent or the principal will also end the agency.23 The Effect of Termination vis-à-vis Third Parties The agent may continue to have apparent authority even if actual authority has been terminated, if the principal's conduct is such as to suggest to a third party that the agent continues to have authority. Until the principal brings the termination of the agent's authority to the notice of a third party, the agent may continue to have apparent authority on the strength of the principal's representation. In DREW v NUNN24 the principal became insane but his wife, who was his agent, continued to act in his name. When he recovered from his insanity he tried to disclaim liability for acts done by his wife during his insanity/incapacity. Held: the agent, i.e. his wife, had apparent authority and therefore he was bound. However, where an agent's actual authority is terminated by the principal's death or bankruptcy, the agent will automatically cease to have apparent authority.

Jason & Medea Essay

How do Jason's feelings at the end of the play differ from those revealed in other encounters? In their first encounter, Jason appears to be trying to make himself feel as if he is better than Medea, and as if he is the bigger person than she; "You no doubt hate me: but I could never bear ill-will to you" implies that he is a better person for helping her even though she hates him – and that even after all that's happened and all she has said he still "could never bear ill-will". He continues to try and defend his actions, claiming it was for social status, that he didn't marry for love, but for the fact that he wants to know they will have a good life and not be poor; also, as he marries the King's daughter, his sons with Medea will be half-brothers to any children Jason may have with Glauce, therefore improving their status, with the prospect of becoming a king of Corinth. Their second encounter is after Medea has decided her exact plan; she knows how she will kill the princess and the king, and has then also planned to kill her sons. She asks for Jason to attend, and he does, at which point she acts like a stereotypical wife of the time, admitting that she was wrong for all the feelings she had, and that everything that had happened was her fault, that she overreacted because she knew Jason was only doing it for the good of their family. It would seem to be a friendly conversation on Jason's part; he shows no kind of hostility towards Medea when she speaks to him, openly accepts her apology, and states when he first speaks to her that he is "ready to listen". However, later in this meeting he, again, demeans women: "Only naturally a woman is angry when her husband marries a second wife." Perhaps this is true in a sense; however, I think anyone would be angry if their significant other decided to marry someone else, not just a woman. After this it could be said that Medea plays up to this, as when he mentions his sons growing up and being strong, she weeps. This may be because she knows her sons will never grow up, or because she believes crying will make Jason pity her. In this encounter he also mentions sexual jealousy, implying that Medea is simply angry because of the fact that Jason is now sleeping with someone else, rather than her – this is because he doesn't understand her anger, and therefore infers that it is because of this, rather than the fact that he left her to marry another. Later in this passage, he also refers to Medea as a "foolish woman" when she tries to send the coronet and dress to Glauce, and this theme of sexism is carried on a few lines later: "If my wife values me at all she will yield to me more than to costly presents, I am sure of that"; again, the attitude of the ancient Greek time was that women were to do what they were told, rather than what they wanted. They were to be obedient, and not break any rules. In the third and final encounter, at the end of the play, it appears Jason has reached his peripeteia, his downfall. Medea, at this point, has killed their two sons – and it is clear he loses complete control of his emotions, and he begins wildly insulting Medea, calling her an "abomination". It is also earlier in this part that he calls her "the woman I will kill": at the beginning of the play, he was supposedly in love with her, whilst at the end, he wants nothing more than her to be dead.
It becomes obvious that Jason has realised what Medea is truly like, how manipulative and cunning she is, and how she tricked him, in certain parts at least, into believing she was just an obedient wife to him. He claims he wants the gods to "blast her life", and during a time in which most, if not all, people believed that these gods were real and had an impact on their lives, this would be one of the worst things to wish upon someone else. Again, Jason also mentions her "sexual jealousy", blaming this for the murder of their children: "... out of mere sexual jealousy, you murder them!" At the very end of the play, Jason is on the ground, whilst Medea is in a chariot (pulled by dragons) on the roof; this could be a representation of the fact that, in the beginning, Jason was of a higher standing than Medea, however at the end she had gotten (in a sense) what she wanted, and that she was now on top – her enemies not able to laugh at her. He asks Medea to let him bury the children, a request which she declines, so he then asks if he could hold them one last time. She responds with "now you have kisses for them", as previously Jason had appeared to be more than happy to let his sons be exiled – even if he did state in previous encounters that he had married the princess not just for his social standing, but also for his sons.

Thursday, August 29, 2019

Freedom of speech in the United States Essay Example | Topics and Well Written Essays - 750 words

Freedom of speech in the United States - Essay Example The subject of free speech is among the most contentious issues in liberal countries such as the United States (Sunstein, np). Freedom of expression becomes a volatile matter when it is highly valued, the reason being that only then do the boundaries placed upon it turn out to be controversial. The appropriate philosophical framework for deciding free speech cases can be as follows: The first issue to take into consideration in any sensible argument for freedom of expression is that it will have to be restricted. It is prudent for a justice to establish whether the case goes beyond the limitations of freedom of expression. Furthermore, the court must determine whether the case amounts to a violation of the National Defence Authorization Act provision. Important controversies that arise in free speech can be resolved by a clear definition of the limits of freedom of expression. One manner of solving this is to stipulate clearly the issues that are considered to be beyond the restrictions of free speech. The thing that ought to be protected is the interest of the people in exercising their freedom of expression (Calvert et al., 635). For instance, in this case, the concerns of the defendant should be protected by the law regardless of his opinion towards the government. What's more, in as much as freedom of speech is vital to the people, there are some things that should be circumscribed. For instance, people should be restricted from engaging in activities that are a threat to national security, as well as private security.

Wednesday, August 28, 2019

International Marketing Assignment Example | Topics and Well Written Essays - 3000 words

International Marketing - Assignment Example We can also view it as "the action of acclimating a firm's activities to international surroundings." 1 Strategy is the determination of the fundamental long-term goals of the business venture, and the adoption of courses of action and allocation of resources essential for carrying out these objectives. It consists of integrated decisions, actions or tactics that will help to realize goals. Brand strategy is used as an umbrella term to indicate the broad range of strategic options open to the firm, including both managerial and functional management strategies, product/market approaches, and diversification strategies. Main Body Step 1: The Coca Cola brand topped the 2010 list of Global Inter-brands and, as the senior marketing consultant working for the brand, I hereby present a report that seeks to answer a number of questions. Step 2: The Coca Cola brand has over time played a vital role in the mother company's international expansion. A coherent and viable global brand architecture is a vital constituent of the firm's general worldwide marketing strategies because it provides a structural basis for leveraging strong brands into foreign markets, ensuring assimilation of acquired brands in addition to rationalizing the company's adapted global strategies in branding. ... the global media, global retailing and outright movement of persons, goods and entities across international borders/territories have changed brand markets into constituents of emerging integrations that have not been in the picture before. Consequentially, a global firm like Coca Cola has concentrated on coordinating and integrating its existing strategies and methodologies in marketing across global markets. 3 A vital element in Coca Cola's international marketing strategy is the strategic branding policy that it has adopted over time. A strong brand like Coca Cola has helped the mother company to establish the firm's identity in the market and develop an unyielding consumer franchise, as well as providing a weapon to counter growing retailer clout. The brand has also provided the root for other brand extensions, which further strengthen the firm's market position and enhance its value. 4 In the international markets arena, an important brand strategy for the firm has been to use the same brand name in different countries, leveraging brand strength across these established boundaries and maintaining local brands that respond to variant customer preferences in the local setups. A related issue has been the branding level that needs maximum emphasis, that is, corporate/house or product-level brands or a mix of both. The central role of branding in defining the firm's identity and its position in international markets means that it is crucial to develop explicit and strong international brand architectures. This implies identifying the different levels of branding within the firm, the actual number of brands at each level, and their product-market and geographical scope. A crucial element in this branding structure is the

Tuesday, August 27, 2019

Industry and Macroeconomic Analysis Dissertation

Industry and Macroeconomic Analysis - Dissertation Example USA is by far the greatest contributor, with a market value of about $5 trillion (Hughes & Arissen, 2005). The main reason is the cosmopolitan nature of the cities of the USA, where the commercial value of property is extremely high. The second contributor is Japan, which is estimated to have a market share of about $2 trillion (Hughes & Arissen, 2005). With regard to GDP, Japan remains the second largest economy of the world and hence the value of property is quite high. These two major economies are followed by Germany ($1.1 trillion), UK ($1 trillion), France ($800 billion) and Italy ($600 billion). However, it is worth noting that 88% of the total real estate market is dominated by the top 15 countries (Hughes & Arissen, 2005). It is a well-known fact that the real estate market is cyclical in nature, and booms and busts have been noticeable. The booms in the 1980s were followed by busts in the early 1990s. However, the late 1990s and early 2000s once again experienced a property boom. USA has been the major player in this, and the housing market saw accelerating demand. By 2007 this property boom had decelerated, and the global economy became entangled in a recession marred by a credit crunch. Area of the Study The study focuses upon the property market in Thailand. The main concentration would be on the four leading property companies operating in Thailand, namely Quality House PLC, Land and House PLC, Sansiri PLC and Supalai PLC. The study would incorporate a thorough financial and macro analysis of these companies and the area in which they are operating. Thus, the dissertation would further try to elaborate upon the market value and conditions of the property market in Thailand with regard to these companies and provide a clear picture of the investment possibilities and scenarios. This would be followed up by recommendations. Objectives and Methodology The key objective of the study is to develop a framework through which an investor could gain knowledge about the investment prospects in the Thailand Real Estate Industry. The study aims to provide a forecast and a conclusion as to whether or not the Thai property sector is attractive from an international investor's perspective, and also on the companies which will be reviewed. The study would be conducted in a number of steps. 1) The global real estate market would be analyzed. 2) The macro-economic indicators that correspond to the smooth working of the real estate market would be analyzed. 3) Analysis of the housing market with respect to the four above-mentioned companies. 4) Calculation of their financial ratios. 5) Calculation of the intrinsic values for the four leading companies. 6) Investment decisions and recommendations. The World This focuses upon the changes that have occurred. Light is shed upon the world trend towards economic prosperity. PEST Analysis Political Analysis The political scenario of the world is quite varied. There are free economies prevailing, and at the same time social welfare economies are existent as well. Monarchy – one-man rule – and democracy have become rivals in today's political world. Countries like USA, France and India are the major democracies in the world. Contrarily, the Middle Eastern side is marred by despotic rule. The recent upsurge of opponents of dictatorship has raised their voices, and the results have culminated in uprisings against them in

Monday, August 26, 2019

Interdependence evaluation Essay Example | Topics and Well Written Essays - 3000 words

Interdependence evaluation - Essay Example Automobile companies spend their time improving the total quality of their products. Bankers try their best to build bigger banks with a global presence. Media companies are aggressively reaching out to new markets with new vigor. Telecom companies are buying stakes in faraway markets to gain more strength. In such a scenario, competitive strength is the crucial word. Entrepreneurs understand the increasing pressure on them in this global business scenario, so they are improving the quality of their products and services to face the competition ahead. Technology has played a major role in deciding competitive strength. Cutting across sectors, all business units are deliberately and seriously weighing options to improve their technology. Here comes the importance of interdependence. People everywhere want goods and services. Goods are tangible items such as books, cars, carrots, paper clips, and shirts. Services are activities that people want done for them, such as haircuts, car repairs, teaching, or housecleaning. Fortunately, every society is endowed with resources which can be used to provide many of these goods and services. These resources, which economists call productive resources, are usually classified into three groups: land, labour and capital. In this classification, land refers to natural resources, labour is human work and capital comprises physical resources. While productive resources are limited, individuals want unlimited goods and services from them. This gap between production and demand creates scarcity of commodities. Entrepreneurs are those who address this scarcity and provide goods and services. The entrepreneur purchases scarce productive resources, and then organizes the production of a particular good or service. (Harlan R Day, Economics and Entrepreneur, Indiana Department Of Education, Center for School Improvement and Performance, Office of School Assistance, 1991) The main goal of the entrepreneur is to make a profit from his products or services. To become successful, an entrepreneur needs to understand his customers' needs. This has necessitated a more cautious approach from the entrepreneur. The entrepreneur has to choose scarce productive resources carefully. Resources used to produce one particular good or service cannot be used to produce another. The true cost of using a resource is the best alternative use for that resource. Economists call this best alternative use the opportunity cost. (Harlan R Day, Economics and Entrepreneur, Indiana Department Of Education, Center for School Improvement and Performance, Office of School Assistance, 1991) Recently entrepreneurship has been modeled explicitly as a form of human capital accumulation, usually linked to the long-run size of the firm (Bates 1990, Iyigun and Owen 1998, Otani 1996). It has also been said that the availability of external financing is a crucial determinant of the amount of entrepreneurial activity in a community (Evans and Jovanovic 1989, Evans and Leighton 1989, Kihlstrom and Laffont 1979). But in today's context, there have been drastic changes in the role of business. Though profit continues to be the driving force for entrepreneurs and enterprises, the way of production and services has changed in both concept and meaning. It is

Sunday, August 25, 2019

Business Assignment Example | Topics and Well Written Essays - 750 words

Business - Assignment Example In the model, when the expected return does not meet the required return, the investment should not be undertaken. For this reason, the Capital Asset Pricing Model focuses on price and investment. Arbitrage Pricing Theory is a model based on the idea that the returns of an asset can be predicted using the relationship that exists between the asset and common risk factors. The theory also defines the price at which an asset that is not well priced is likely to settle. The model is often viewed as a substitute for the classical Capital Asset Pricing Model. For this reason, Arbitrage Pricing Theory is a model that has more flexible assumption requirements (Hodrick, Ng and Song Mueller, 76). The Multi-Factor Model of Risk and Return is a financial model based on multiple factors. The multiple factors enter its computation when explaining market phenomena and equilibrium asset prices. The factors can be used to explain either individual securities or a portfolio of securities. The model achieves this objective by comparing two or more factors when analyzing the relationship between the variables and the resulting performance of the securities. The Capital Asset Pricing Model is a model that describes the relationship that occurs between the expected returns and the risks that are involved. On the other hand, Arbitrage Pricing Theory is a model which is based on the idea that the returns of an asset can be predicted using the relationship that exists between the asset and common risk factors. The multi-factor model is based on multiple factors in its computation when explaining market phenomena and equilibrium asset prices. A Eurobond is an international bond that is issued in a foreign country, with its value stated in the respective currency. Eurobonds are issued by international organizations and categorized according to the currency in
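The assignment describes CAPM, APT and the multi-factor model only in words; the sketch below shows the standard expected-return formulas those descriptions correspond to. All numeric inputs (risk-free rate, betas, factor premiums) are invented purely for illustration and are not taken from the assignment.

```python
# Illustrative expected-return formulas for the models described above.
# All numbers below are made up for demonstration only.

def capm_expected_return(risk_free, beta, market_return):
    """CAPM: E(R) = Rf + beta * (E(Rm) - Rf)."""
    return risk_free + beta * (market_return - risk_free)

def multifactor_expected_return(risk_free, betas, factor_premiums):
    """Multi-factor / APT form: E(R) = Rf + sum_k beta_k * premium_k."""
    return risk_free + sum(b * p for b, p in zip(betas, factor_premiums))

if __name__ == "__main__":
    # CAPM example: 3% risk-free rate, beta of 1.2, 8% expected market return.
    print(capm_expected_return(0.03, 1.2, 0.08))            # 0.09 -> 9%

    # Three-factor example with hypothetical sensitivities and factor premiums.
    print(multifactor_expected_return(0.03, [1.1, 0.4, -0.2],
                                      [0.05, 0.02, 0.03]))  # 0.087 -> 8.7%
```

In this framing, the point that an investment should not be undertaken when the expected return falls short amounts to comparing a forecast return with the required return produced by one of the models above.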

Saturday, August 24, 2019

POP Education Assignment Example | Topics and Well Written Essays - 1750 words

POP Education - Assignment Example He gives me the way forward in each and every problem that I encounter. Problem Amongst the ten children, there is an eight-year-old boy who has a hearing problem. At first I had not known about his condition, but continued assessment of him made me realize it. The boy struggles to talk; he cannot pronounce most words in the right way. When spoken to, he does not respond and seems not to notice that someone is talking to him. He always asks me to repeat the things that I say to him, and when I call his name away from him, he searches around trying to figure out where the voice came from. He rarely participates in the class and is always dull and withdrawn into himself. I asked his parents about his condition, and they said they were aware of it, and they wanted me to assist them in helping the little boy. Data According to health researchers, hearing loss in children can be caused by a condition called otitis media. This is the inflammation of the child's middle ear, normally due to a build-up of fluid. This disease is diagnosed frequently in children with hearing impairments (Hockfield 68). It is not permanent, and the hearing losses caused by it are mild, though if it occurs repeatedly it can cause severe damage to the eardrum and the hearing nerves and hence lead to permanent hearing loss. Congenital causes are also a factor in hearing loss. Here, the child suffers from the problem from birth. It can be hereditary or be caused by a condition during childbirth. Genetic factors are said to contribute to more than 50% of hearing problems caused by congenital factors. A parent carrying the dominant gene for loss of hearing passes it to the child. The probability of the child getting the condition from the parents is higher if the dominant gene is present in both parents. Some genetic syndromes have hearing loss as one of their characteristics (Canalis and Lambert 108). These syndromes include Down, Usher, Treacher Collins, Crouzon and Alport syndrome. Some congenital causes which the child does not inherit from the parents include harmful chemicals taken by the mother during pregnancy, illnesses and prenatal infections. There are also acquired factors which lead to hearing loss in a child. These occur in a child's life after birth due to ailment, injury or other conditions. The conditions causing hearing loss through acquired causes include injury of the head, measles, meningitis, ear infections, chicken pox, influenza and mumps, among others. In August 2008, a research study carried out by the Better Hearing Institute found that children with hearing problems are not given the adequate attention and help they ought to have. This is due to parents being so busy doing other things while viewing the problem as a less serious one. The study blamed the government for dwelling so much on elections and politics, paying less attention to these children who need help. It argued that children need to hear both in and outside the classroom so that they may develop their language and their social and emotional well-being (Jack Snowman 93). According to the research, many educators and health observers usually underestimate the effects of hearing impairment. Parents, on the other hand, do not detect the problem in their children early enough, and when they do, they do not take immediate action so as to minimize it. Others are given the wrong information on how to deal with the problem of hearing

Friday, August 23, 2019

Law Essay Example | Topics and Well Written Essays - 1750 words - 2

Law - Essay Example The development of each of these areas of law will be discussed in turn, and any similarities as well as differences will be looked into so as to make an effective comparison between the two different applications that have been provided for, that is, one by way of statute and the other by the rule of Wheeldon v. Burrows and the cases that have developed and applied the rule. An easement is where a benefit is provided to the dominant tenement, that is, the land which benefits from the easement, which entitles the person who owns the dominant tenement of land to use the easement. The second element in respect of an easement is based on the fact that, since there is a benefit that accrues, there is a burden on what is known as the servient tenement, or in other words the land that has been burdened by the easement. A vital principle related to an easement is the fact that it is a proprietary interest, and the accruing benefit and burden, subject to the laws of registered and unregistered land, transfer if the land that is either the servient or dominant tenement is transferred to another person. (Cursley et al 2009) The creation of an easement is dependent upon the satisfaction of criteria that were laid down in Re Ellenborough Park1, which are generally referred to when determining the existence of an easement. The first and foremost requirement is that there must be a dominant and a servient tenement, thus establishing that an easement cannot exist in gross (Hawkins v. Rutler)2. The second requirement is that the dominant and servient tenements' occupation and ownership must be by different persons (Roe v. Siddons)3; however, according to Wright v. Macadam4, occupation by different persons would allow an easement to be created. The third element is that the easement must benefit the dominant tenement, and this is dependent upon the proximity of the servient tenement; it has also been stated that the advantage should not be purely personal (Hill v Tupper); and the right must not be that of a recreational user. The fourth criterion is that the easement that has been alleged must be capable of forming the subject matter of a grant. Case law has developed upon this criterion and has provided guidelines in this respect, the first being that there must be a capable grantor, which is clear in the facts at hand; the second that there must be a grantee, which is evident because the tenants were granted the rights; thirdly, that the subject matter of the grant is sufficiently certain, which is clear enough in respect of the facts, that is, the right to cross; and finally, that the right must be capable of being called an easement, that is, it is covered under the rights which have been recognized to be easements, which has been done in respect of the right to cross. The final factor that was not expressly listed in the case is that of public policy, which is considered when determining whether an easement is existent or not. (Grey et al 2006) The next aspect that is considered is that an easement can exist either at law or in equity, as laid down under section 1 of the Law of Property Act (LPA) 1925. (Cooke 2006) As far as legal easements are concerned, there are a number of formalities that need to be fulfilled. The first requirement is that for a legal easement there must either be a fee simple absolute in possession or an adjunct to a term of years (section 1 Law of Property Act 1925).
Secondly, easements can only be legal if created by way of statute, by prescription, by deed or by registered disposition. All other easements are equitable in nature. (Dixon 2004) As far as easement by prescription is Law Essay Example | Topics and Well Written Essays - 1500 words Law - Essay Example Unfortunately, even in 2012, until more research is conducted to collect data on the duration of street bail, Hucklesby's claims remain valid. Street bail was introduced in the British legal system in 2003. The amendment came into effect in 2004.1 Street bail was designed to speed up justice in the British legal system by enabling officers to spend more time collecting evidence, and less on bringing the suspect to the police station to bail him or her out a few minutes later.2 There were estimates in 2004 that the new bail system would be economical, as it would provide an additional 390,000 hours of police officers' time annually to focus on investigating crimes.3 Guidance on Street Bail was implemented in 2006. The guide aimed to direct implementation of Sections 30A to 30D of the Police and Criminal Evidence Act 1984 (PACE), as amended by Section 4 of the Criminal Justice Act 2003.4 While making a decision whether to bring the offender in or not, the police officer must consider the following facts: whether the offender has a history of violating bail, whether the offender could jeopardize evidence crucial to the judicial system if left free, whether the offender could continue offending if left free, and whether data are correct regarding the address of the offender and the nature of the offense.5 In Northern Ireland, an equivalent document was published as well.6 However, Hucklesby argues that the pre-charge bail system only discourages justice. The nature of the offense, or the ability to jeopardize evidence, is left to the interpretation of the police officer. As a result, Hucklesby argues, more arrests will take place, instead of fewer.7 Moreover, in cases where police officers are not willing to pursue the investigation, the offender will not be turned in.8 Cape too agrees with Hucklesby's arguments, due to the inexperience of the arresting officers and a low threshold for arrest and long bail periods, where suspects will not be able to present their own story.9 Some argue otherwise. There are arguments that even in the light of the new approach to bail, PACE "continues to use its 'fundamental balance' approach,"10 which was abused in the past. PACE's approach is to protect the rights of the suspect, while allowing the police officers to gather enough evidence to identify the offender.11 One of its aims is also to decrease detention time.12 A famous case portraying the misuse of power on behalf of law enforcement officers before street bail is the Birmingham pub bombings, where six suspects were wrongfully convicted.13 The suspects were treated outside their protection system and tortured.14 Moreover, they were interrogated partly outside of the police station, which violates the rules of PACE.15 The new approach to bail on the street attempts to avoid such problems by allowing suspects freedom while conducting the investigation. However, the power remains in the hands of the arresting police officers. Though PACE aims to decrease detention time, Skinns has found evidence that detention time has been increasing back to the pre-PACE level.16 In 1986, the mean detention time was over four hours, whereas in 1990–93 it increased to over six hours.
17 In 1979, before PACE, the mean detention time was over ten hours.18 Moreover, police investigation is still a problem. Skinns found that gathering evidence is still a problem in the British criminal system, and it rests with

Thursday, August 22, 2019

Supply Chain Management in Fast Fashion Companies (Zara & H&M) Literature review

Supply Chain Management in Fast Fashion Companies (Zara & H&M) - Literature review Example Barnes and Lea-Greenwood's (2006) article on fast fashion and supply chain management has revealed significant information in regard to the so-called fast fashion phenomenon. Their research on fast fashion and its relation to supply chain management has even caught the attention of well-known fashion companies, enthusiasts and the business press. Although the concept is new in the fashion industry, the authors were able to explore it widely and expound briefly on the strategy that led Zara and H&M to where they are now. The authors have defined fast fashion as a form of business strategy that aims to reduce the number of processes in a buying cycle and the lead times needed to deliver new fashion products to stores. When this happens, customer satisfaction is met, and this satisfaction is driven by the speed of delivering fashion products that are in line with current trends. Fast fashion is a concept that is considered a "mainstay in the UK's fashion industry" (Barnes & Lea-Greenwood, 2006). To modern fashion retailers such as Zara and H&M, fast fashion is a key strategy that has helped them attain success. The two well-known fashion companies have adopted this strategy and have continuously changed their clothing styles and product ranges to adapt to what is "in" at any moment. Rapid changes are made, attracting more buyers of apparel under the brands Zara and H&M. Furthermore, Barnes and Lea-Greenwood (2006) have inferred that fast fashion is associated with supply chain management. For instance, it has been proposed, in reference to the said perspective, that the framework of a fast fashion business is dependent on vertical integration. Vertical integration, according to Welters and Lillethun (2011), centralizes the supply chain, allowing buyers to obtain goods in a short span of time and at an affordable price. In a fashion business, there is pressure to beat the previous year's performance, and this cycle is a usual scenario. In modern times, success in retailing is being attributed to supply chains instead of companies (Hines, 2004 cited in Barnes & Lea-Greenwood, 2006). On the other hand, the authors (Barnes & Lea-Greenwood, 2006) have contended that, in spite of being connected to supply chain management, fast fashion is not totally affiliated with the strategy. Findings of the study conducted by Barnes and Lea-Greenwood (2006) have identified fast fashion as a consumer-driven process. Many things were taken into serious consideration prior to arriving at this judgment. First, they were able to observe that, at present, individuality has already become the trend in buyers' fashion demands. Most consumers want to set a trend, and this behavior increases the demand for fast fashion. Many designers consider quick access to the media as a means for young consumers to gain knowledge of new fashion trends. Respondents of the survey conducted by the two authors have also conceded to this judgment and have stated that progress in fast fashion is being driven by changing consumer demand, making it a crucial aspect of fashion and fashion retailing. Hence, fast fashion is the answer to the changing consumer demand of modern times. Furthermore, the supply chain has to adjust for it to respond to inconstant consumer demands.
The fast fashion business paradigm relies on the capacity of an individual to acquire and react positively to changes in consumer tastes. Responses to these changes in the fast fashion business model are quick, since connections to fashion markets and fashion makers are in proximity (Doeringer & Crean, 2006 cited in Welters &

Greedy Based Approach for Test Data Compression Using Geometric Shapes Essay Example for Free

Greedy Based Approach for Test Data Compression Using Geometric Shapes Essay As the complexity of systems-on-a-chip continues to increase, the difficulty and cost of testing such chips are increasing rapidly. One of the challenges in testing an SOC is dealing with the large size of test data that must be stored in the tester and transferred between the tester and the chip. The cost of automatic test equipment (ATE) increases significantly with the increase in its speed, channel capacity and memory. As testers have limited speed, channel bandwidth and memory, the need for test data reduction becomes imperative. This project deals with lossless compression of test vectors on the basis of geometric shapes. It consists of two phases: i) Encoding or Compression and ii) Decoding or Decompression. During the compression phase we exploit reordering of test vectors to minimize the number of shapes needed to encode the test data. The test set is partitioned into blocks and then each block is encoded separately. The encoder has the choice of encoding either the 0's or the 1's in a block. In addition, it encodes a block that contains only 0's (or 1's) and x's with only 3 bits. Furthermore, if the cost of encoding a block using geometric shapes is higher than the original cost of the block, the block is stored as is without encoding. We have created a new greedy based algorithm to find the shapes present in a block in minimal time. After analysis, this algorithm appears to be at least 50% more efficient than the algorithm proposed by the authors of the original paper, which has also been implemented in our program. During the decoding phase the data is read from the compressed file and decoded based on the format in which it was encoded. These phases have been implemented in software. The application gives a good compression ratio of nearly 50% under average conditions, is extremely fast, and the shape extraction algorithm used provides fast extraction of shapes. To test a certain chip, the entire set of test vectors, for all the cores and components inside the chip, has to be stored in the tester memory. Then, during testing, the test data must be transferred to the chip under test and test responses collected from the chip to the tester.

1.2 Systems on a Chip

A system on a chip or system on chip (SoC or SOC) is an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio-frequency functions, all on a single chip substrate. A typical application is in the area of embedded systems. A typical SoC consists of:

• A microcontroller, microprocessor or DSP core(s). Some SoCs, called multiprocessor systems on chip (MPSoC), include more than one processor core.
• Memory blocks including a selection of ROM, RAM, EEPROM and flash memory.
• Timing sources including oscillators and phase-locked loops.
• Peripherals including counter-timers, real-time timers and power-on reset generators.
• External interfaces including industry standards such as USB, FireWire, Ethernet, USART, SPI.
• Analog interfaces including ADCs and DACs.
• Voltage regulators and power management circuits.

These blocks are connected by either a proprietary or an industry-standard bus such as the AMBA bus from ARM Holdings. DMA controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC.

Figure 1

1.3 Data Compression

Data compression, source coding or bit-rate reduction is the process of encoding information using fewer bits than the original representation would use. Compression is useful because it helps reduce the consumption of expensive resources, such as disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed (the option of decompressing the video in full before watching it may be inconvenient, and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and decompress the data.

Several test data compression techniques have been proposed in the literature. These techniques can be classified into two categories: those that require structural information of the circuit and rely on automatic test pattern generation and/or fault simulation, and those that are more suitable for intellectual property (IP) cores as they operate solely on the test data. Techniques of the first approach include some of the linear-decompression-based schemes and broadcast-scan-based schemes. Techniques of the second approach include statistical coding, selective Huffman coding, run-length coding, mixed run-length and Huffman coding, Golomb coding, frequency-directed run-length (FDR) coding, alternating run-length coding using FDR (ALT-FDR), extended frequency-directed run-length (EFDR) coding, MTC coding, variable-input Huffman coding (VIHC), multilevel Huffman coding, 9-coded compression, Block Merging (BM) compression and dictionary-based coding. Test compression techniques in this class can be further classified as being test independent or test dependent. Test-independent compression techniques have the advantage that the decompression circuitry is independent of the test data: changing the test set does not require any change to the decompression circuitry. Examples of test-independent compression techniques include Golomb coding, frequency-directed run-length (FDR) coding, alternating run-length coding using FDR (ALT-FDR), extended frequency-directed run-length (EFDR) coding, MTC coding, 9-coded compression and Block Merging (BM) compression.
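As a toy illustration of the idea behind the code-based, test-independent schemes listed above (it does not show the geometric-shapes method itself), the following C sketch run-length encodes a single test cube, treating don't-care bits (X) as 0. The function name, the sample input and the (bit, run-length) output format are invented for the example.

```c
#include <stdio.h>

/* Toy illustration only: print (bit, run length) pairs for a test cube after
 * mapping don't-care 'X' positions to '0'.  Schemes such as Golomb or FDR
 * coding then assign variable-length codewords to these run lengths. */
void run_length_print(const char *cube)
{
    char prev = (*cube == '1') ? '1' : '0';
    int run = 0;

    for (const char *p = cube; *p; p++) {
        char bit = (*p == '1') ? '1' : '0';   /* treat X as 0 */
        if (bit == prev) {
            run++;
        } else {
            printf("(%c,%d) ", prev, run);
            prev = bit;
            run = 1;
        }
    }
    printf("(%c,%d)\n", prev, run);
}

int main(void)
{
    run_length_print("0XX000010000XX0001");  /* a made-up 18-bit test cube */
    return 0;
}
```

On this cube the program prints (0,7) (1,1) (0,9) (1,1): four runs instead of eighteen raw bits, which is the kind of regularity the run-length family of codes exploits.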
1.4 Automatic Test Equipment

Automatic or Automated Test Equipment (ATE) is any apparatus that performs tests on a device, known as the Device Under Test (DUT), using automation to quickly perform measurements and evaluate the test results. An ATE can be a simple computer-controlled digital multimeter, or a complicated system containing dozens of complex test instruments (real or simulated electronic test equipment) capable of automatically testing and diagnosing faults in sophisticated packaged electronic parts or in wafer testing, including systems-on-chip and integrated circuits. ATE is widely used in the electronics manufacturing industry to test electronic components and systems after they are fabricated. ATE is also used to test avionics and the electronic modules in automobiles, and it is used in military applications such as radar and wireless communication.

1.4.1 ATE in the Semiconductor Industry

Semiconductor ATE, named for testing semiconductor devices, can test a wide range of electronic devices and systems, from simple components (resistors, capacitors and inductors) to integrated circuits (ICs), printed circuit boards (PCBs), and complex, completely assembled electronic systems. ATE systems are designed to reduce the amount of test time needed to verify that a particular device works or to quickly find its faults before the part has a chance to be used in a final consumer product. To reduce manufacturing costs and improve yield, semiconductor devices should be tested after being fabricated to prevent even a small number of defective devices ending up with the consumer.

Figure 1.2

Chapter 2

2.1 Problem Definition

As the complexity of systems-on-a-chip continues to increase, the difficulty and cost of testing such chips is increasing rapidly. To test a certain chip, the entire set of test vectors, for all the cores and components inside the chip, has to be stored in the tester memory. Then, during testing, the test data must be transferred to the chip under test and the test responses collected from the chip by the tester. Our application must be able to compress the test vectors by a significant percentage and it must also be lossless. In addition to these two basic requirements, the program must extract the shapes from each block in an optimal manner (the technique used here is a greedy approach rather than a brute-force one). Moreover, the test data must be sorted and partitioned before shape extraction is done. The application must also be able to correctly decompress the encoded data. In order to obtain the shapes covering the bits in as little time as possible, we have created a greedy-based algorithm which works in an overall time of O(n^4). The original algorithm proposed by the authors of "Test Data Compression Based on Geometric Shapes" [1], on the other hand, requires one O(n^4) operation to identify all possible covers and another O(n^4) operation to find the optimal cover among them, which is a brute-force approach.

2.2 Motivation for the Project

One of the challenges in testing SOCs is dealing with the large size of the test data that must be stored in the tester and transferred between the tester and the chip. The amount of time required to test a chip depends on the size of the test data that has to be transferred from the tester to the chip and on the channel capacity.
The cost of automatic test equipment (ATE) increases significantly with the increase in its speed, channel capacity and memory. As testers have limited speed, channel bandwidth and memory, the need for test data reduction becomes imperative.

2.3 Problem Analysis

The problem can be divided into the following phases.

2.3.1 Test Set Sorting. Sorting is done on the basis of each vector's distance to its neighbours. To achieve maximum compaction, the first vector after sorting must contain the maximum number of 0's.

2.3.2 Test Set Partitioning. Partitioning the test vectors into blocks is straightforward. Partial blocks appear when the number of test vectors or the length of the test vectors is not an integral multiple of N (each block is of size N × N); such blocks are still treated as N × N, and a mark array indicates which bits are not to be processed.

2.3.3 Shape Extraction. The shapes must be extracted optimally and in little time, which is why a greedy algorithm is used. This algorithm was created by our group and works well.

2.3.4 Decoding. This is simply a matter of reading each code and, based on the code, filling up the test vectors.

Chapter 3

3.1 Encoding Phase

3.1.1 Test Set Sorting

3.1.1.1 Description

Sorting the vectors in a test set is crucial and has a significant impact on the compression ratio. In this step, we aim at generating clusters of either 0's or 1's in such a way that they may partially or totally be fitted into one or more of the geometric shapes shown in Table 3.2. The sorting is with respect to both 0's and 1's (0/1-sorting). The technique is based on finding the distance D between two vectors A and B that maximises the clusters of 0's and 1's. The vector with the highest distance to the existing vector is selected as the next vector during the sorting process. The distance D may be computed with respect to 0's (0-distance), to 1's (1-distance) or to 0's and 1's (0/1-distance) as D(A, B) = sum over i = 0 to k - 1 of [W(Ai, Bi-1) + W(Ai, Bi) + W(Ai, Bi+1)], where k is the test vector length and W(Ai, Bi) is the weight between bits Ai and Bi. Table 3.1 specifies the weights used in computing the 0/1-distance between two vectors. Note that for i = 0, W(Ai, Bi-1) = 0 and for i = k - 1, W(Ai, Bi+1) = 0.

Table 3.1

Table 3.2

3.1.1.2 Algorithm
1. Find the vector with the maximum number of 0's and interchange it with the first vector.
2. i ← 1
3. Compare the i-th vector with all other vectors from i+1 onwards and calculate the distance based on the equation above.
4. Exchange the vector with the maximum distance with the i-th vector.
5. If i < n then i ← i + 1 and repeat from step 3.

3.1.2 Test Set Partitioning

3.1.2.1 Description

A set of sorted test vectors, M, is represented in matrix form, R × C, where R is the number of test vectors and C is the length of each test vector. The test set is segmented into L × K blocks, each of which is N × N bits, where L is equal to R/N and K is equal to C/N. A segment consists of K blocks. In other words, the test set is segmented into L segments, each containing K blocks.
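To make the segmentation arithmetic concrete, here is a minimal C sketch (it is not taken from the project code; the names, the character representation of the test set and the N = 8 block size are assumptions). The partial blocks described next simply have a smaller effective height or width, which a full implementation would record in the mark array.

```c
#include <stdio.h>

#define N 8  /* assumed block dimension; the test set is cut into N x N blocks */

/* Walk an R x C test set (stored row-major as '0'/'1'/'X' characters) block by
 * block.  L and K follow the definitions above (R/N and C/N), rounded up so
 * that partial blocks at the right and bottom edges are also visited. */
void for_each_block(const char *data, int R, int C)
{
    int L = (R + N - 1) / N;   /* number of segments (rows of blocks) */
    int K = (C + N - 1) / N;   /* number of blocks in each segment    */

    for (int br = 0; br < L; br++) {
        for (int bc = 0; bc < K; bc++) {
            int rows = (br == L - 1 && R % N) ? R % N : N;  /* partial height */
            int cols = (bc == K - 1 && C % N) ? C % N : N;  /* partial width  */
            printf("block (%d,%d): %d x %d bits, first bit '%c'\n",
                   br, bc, rows, cols, data[br * N * C + bc * N]);
        }
    }
}

int main(void)
{
    /* a tiny 4 x 10 example test set */
    const char *test_set = "0X010000X1"
                           "00X0000001"
                           "1000X00000"
                           "000000X000";
    for_each_block(test_set, 4, 10);
    return 0;
}
```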
For test vectors whose columns and/or rows are not divisible by the predetermined block dimension N, a partial block will be produced at the right-end columns and/or the bottom rows of the test data. Since the size of such partial blocks can be deduced from the number of vectors, the vector length and the block dimension, the number of bits used to encode the coordinates of the geometric shapes can be less than log2 N.

3.1.2.2 Algorithm
1. Partition the test vectors into 8 × 8 blocks (partial or full).
2. If a block is partial then
   a. Mark the rest of the bit positions as already processed.

3.1.3 Shape Extraction

3.1.3.1 Description

This algorithm was created by our group to obtain the optimal covers of the shapes in as little time as possible. In our algorithm we begin by assuming that all points before (i,j) have been processed. This means that if any new shape exists in this block, it may only begin at a point greater than or equal to (i,j). Now, if we are starting from (i,j), we need to check only four points adjacent to it along with (i,j) itself. These positions are shown in Figure 3.1. This is a direct consequence of our initial assumption. Now let us assume that a shape begins from (i,j). Since no other shape has been detected so far, (i,j) is a point. The algorithm then checks the four adjacent points to see whether they make any other shape when taken in combination with (i,j). Since (i,j) is classified as a point, the next possible shape that can be formed is a line. There are four possibilities for this, as shown in Figure 3.2.

Figure 3.1

Now, if another of the adjacent points is a valid bit and the current shape is a line, then the next figure that can be formed from three points is a triangle. This also has four different possibilities, as shown in Figure 3.3.

Figure 3.2

If the current shape is a triangle (type 4) and another point adjacent to (i,j) is of the bit we are checking for, then the only remaining possibility is a rectangle. This is shown in Figure 3.4.

Figure 3.3

In order to avoid the possibility of rechecking bits that have already been processed, our algorithm uses a 'mark' matrix of the same size as the block of bits: every position that has not yet been included in a shape is marked with a zero, and those that have been identified as belonging to a shape are marked with a one. We also insert the points that have to be processed by the algorithm in the next stage into a queue for faster processing of the rest of the shape.

Figure 3.4

The anomalies that can occur during this approach are:
• There can be other shapes starting from the same point (i,j). Since we are performing a greedy search, the only possibility that comes under this category is additional lines emanating from (i,j). This can easily be solved by saving the current shape as well as the newly identified line into the list of shapes. The algorithm then performs all the above-mentioned steps, i.e. marking the bits as processed and inserting the points to be processed later into the queue.
• Another problem with this simple approach is that a type 1 triangle may be recognised as a rectangle and a few lines if its size is greater than one. This can be avoided by computing the length of the side of the square that may contain the triangle (if it exists) and the lengths of both diagonals. If the length of a side is the same as that of a diagonal, then the shape is indeed a triangle or a square. To distinguish between these, we check whether the lengths of the two diagonals are the same: if they are not, the shape is a triangle, otherwise it is a rectangle.

The reason these anomalies need to be handled carefully is that anomaly 2 can significantly increase the computational complexity of our overall algorithm if it is to be solved. Once a shape has been identified for what it is, only those positions that may be a continuation of the shape are processed. The processing of these bits is also done only in the direction of interest (for example, in the case of a type 1 line the only possible extension of the shape is in the downward direction, and hence this is the only direction processed). This means that not all four adjacent positions need to be checked during further processing, which in turn reduces the complexity. Once a shape beginning from (i,j) has been completely detected, we start processing the next bit at position (i,j+1) or (i+1,1). This is necessary to ensure that we do not miss any shapes during processing.

3.1.3.2 Algorithm

3.1.3.3 Complexity Analysis

As we have seen, the algorithm needs three loops. Two of these are used to traverse the entire block, which gives an outer-loop complexity of O(n^2). The third loop is always executed four times in order to check the neighbouring points. The actual detection of shapes is only a matter of adding indices to (i,j) and checking whether they satisfy any of the conditions of the algorithm; addition is done in constant time. Although the detection of the kernel of a shape can be done in constant time, we need to spend some additional time in the case of anomaly 2. As mentioned earlier, this can be solved by finding the length of the sides of the square that may contain the whole triangle and the lengths of both of its diagonals. In the worst case these lengths may be of size n, which gives a complexity of 4·O(n) for this step. The further processing of the shapes that have been detected is done using a queue. The maximum number of times the queue can be executed is O(n^2), because there are at most that many bits in a block. Therefore the overall complexity for shape detection is O(n^2) × 4 × (4·O(n) + O(n^2)) = O(4n^3 + n^4) = O(n^4). In average cases the queue will not need to contain the entire block, as the block can be assumed to comprise roughly equal parts of required and unrequired bits. This means that in the average case the shape extraction process predominates and the average-case complexity becomes O(n^3). This is much better than a brute-force approach to shape extraction.
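The extraction algorithm itself (section 3.1.3.2) is given in the original report as a figure that is not reproduced here. The C sketch below shows only the first classification step described above, scanning for an uncovered starting bit at (i,j) and testing four candidate neighbours; the particular neighbour offsets, the line-type numbering and the mark handling are assumptions, and growing lines into triangles and rectangles (and the queue-based continuation) is elided.

```c
#include <stdio.h>

#define N 8   /* block dimension assumed in this sketch */

/* Candidate neighbour offsets checked from a starting bit (i,j): right,
 * below-left, below and below-right.  Everything above and to the left of
 * (i,j) is assumed to have been processed already. */
static const int DR[4] = { 0, 1, 1, 1 };
static const int DC[4] = { 1, -1, 0, 1 };

/* Sketch of the first classification step: scan the block and, whenever an
 * uncovered bit equal to 'b' is found at (i,j), record it as a point and see
 * which of the four neighbours could extend it into a line. */
void extract_shapes(const char block[N][N], char mark[N][N], char b)
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            if (mark[i][j] || block[i][j] != b)
                continue;                      /* covered, don't-care or wrong bit */
            mark[i][j] = 1;                    /* (i,j) starts as a point          */
            printf("point at (%d,%d)\n", i, j);

            for (int d = 0; d < 4; d++) {      /* third, constant-length loop      */
                int r = i + DR[d], c = j + DC[d];
                if (r < 0 || r >= N || c < 0 || c >= N)
                    continue;
                if (!mark[r][c] && block[r][c] == b) {
                    mark[r][c] = 1;
                    /* the type numbering here is illustrative only */
                    printf("  line of type %d through (%d,%d)\n", d + 1, r, c);
                    /* a full version would enqueue (r,c) and keep extending
                       the shape in this direction only */
                }
            }
        }
    }
}
```

The two outer loops over the block and the constant four-way neighbour check are the same three loops counted in the complexity analysis above.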
Even in the worst case our algorithm performs better, as we do not need to perform a covering step to find the most optimal covers for the shapes detected. This would have taken another O(n^4), which we avoid by directly using a greedy approach.

3.1.4 Encoding

3.1.4.1 Description

The encoding process is applied to each block independently. The procedure Extract_Shapes(b) finds the best group of shapes that cover the bits that are equal to b, as shown in the algorithm. Encode_Shapes determines the number of bits, a, needed to encode this group of shapes. Two cases may occur:
a) The block contains either 0's and X's or 1's and X's. In this case, the block could be encoded as a rectangle. However, instead of encoding it as a rectangle, it is encoded by the code "01" (indicating that the block can be filled by either 0's or 1's) followed by the bit that fills the block. Hence the number of bits to encode the block is a = 3. We call such blocks filled blocks.
b) The block needs to be encoded by a number of shapes. We call such a block an encoded block. In this case, we need the following:
• 2 bits to indicate the existence of shapes and the type of bit encoded. If the encoded bit is 0, then the code is 10, otherwise it is 11.
• P = 2⌈log2 N⌉ - 3 bits to encode the number of shapes, S. If the number of shapes exceeds 2^P, then the number of bits needed to encode the shapes is certainly greater than the total number of bits in the block. In this case, the block is not encoded and the original test data is stored.

3.1.4.2 Algorithm
1. While there are shapes to be encoded:
   a. Find the shape and the type of the shape.
   b. Find the x, y coordinates of the shape.
   c. If the shape has a length parameter, calculate its value.
   d. Depending on the shape and type, encode the parameters as per Table 2.2.

3.2 Decoding Phase

3.2.1 Description

The pseudo-code of the decoding algorithm is given below. It first reads the arguments given by the encoder and computes the parameters needed for the decoding process. These parameters include the number of segments, the number of blocks in a segment and the dimensions of the partial blocks. For each segment, its blocks are decoded one at a time. The first two bits indicate the status of the block as follows:
• 00: the block is not encoded and the following N × N bits are the original test data.
• 01: fill the whole block with either 0's or 1's depending on the following bit.
• 10: there are shapes that are filled with 0's.
• 11: there are shapes that are filled with 1's.
For those blocks that have shapes, the procedure Decode_Shapes is responsible for decoding them. It reads the number of shapes in the block, and then for each shape it reads its type, reads its parameters accordingly and fills it in. Based on the arguments read first, the decoder can determine the number of bits needed for each variable (e.g. the coordinates and the distances). These are used for the partial blocks when only one block of each segment remains and when the last segment is being decoded.

3.2.2 Algorithm
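The decoder pseudo-code referred to above appears in the original report as a figure that is not reproduced here. The sketch below shows only the dispatch on the 2-bit block-status codes just listed; the bit-reader, the MSB-first stream layout, the block representation and all helper names are assumptions made for the example, and the shape-decoding routine is left as a stub.

```c
#include <stdio.h>
#include <string.h>

#define N 8                            /* block dimension assumed in this sketch */

/* Minimal stand-ins for the routines the report describes but does not list:
 * a bit reader over an in-memory stream, a block filler and a shape decoder. */
static const unsigned char *stream;    /* compressed data                        */
static unsigned long bitpos;           /* current bit position in the stream     */

static unsigned read_bits(int n)       /* read n bits, MSB first (assumed)       */
{
    unsigned v = 0;
    while (n-- > 0) {
        v = (v << 1) | ((stream[bitpos >> 3] >> (7 - (bitpos & 7))) & 1u);
        bitpos++;
    }
    return v;
}

static void fill_block(char block[N][N], char bit) { memset(block, bit, N * N); }

static void copy_raw_block(char block[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            block[i][j] = read_bits(1) ? '1' : '0';
}

static void decode_shapes(char block[N][N], char bit)
{
    (void)block; (void)bit;            /* shape decoding elided in this sketch   */
}

/* Dispatch on the 2-bit block status code described above. */
void decode_block(char block[N][N])
{
    switch (read_bits(2)) {
    case 0u: copy_raw_block(block);                        break; /* 00: stored as-is   */
    case 1u: fill_block(block, read_bits(1) ? '1' : '0');  break; /* 01: filled block   */
    case 2u: decode_shapes(block, '0');                    break; /* 10: shapes of 0's  */
    case 3u: decode_shapes(block, '1');                    break; /* 11: shapes of 1's  */
    }
}
```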
Chapter 4

4.1 Language Specification

The project has been implemented in C/C++. This is because C/C++ is very well suited to bit-level manipulation and provides other features that sit close to the hardware. Another consideration of paramount importance here is the degree to which C/C++ lends itself to system-level programming. The key considerations can be summed up as:
• Simple
• Very high speed
• Very close to assembly language
• Can be used to implement applications directly on the hardware
• Bit-level manipulations are possible
• Dynamic

4.2 Hardware Specification
CPU: Pentium II or above
RAM: 4 MB
Main Storage Medium: 1 GB HDD
Monitor: Standard VGA

4.3 Software Specification
Operating System: DOS
Design Tools: C/C++

Chapter 5

5.1 Application

One of the challenges in testing SOCs is dealing with the large size of the test data that must be stored in the tester and transferred between the tester and the chip. The amount of time required to test a chip depends on the size of the test data that has to be transferred from the tester to the chip and on the channel capacity. The cost of automatic test equipment (ATE) increases significantly with the increase in its speed, channel capacity and memory. As testers have limited speed, channel bandwidth and memory, the need for test data reduction becomes imperative. To achieve such reduction, several test compaction and lossless compression schemes have been proposed in the literature. The objective of test set compaction is to generate the minimum number of test vectors that achieve the desired fault coverage. The advantage of test compaction techniques is that they reduce the number of test vectors that need to be applied to the circuit under test while preserving the fault coverage. This results in a reduction in the required test application time.

CONCLUSION

In order to check the effective compression ratio produced by the application, several different test sets were taken and the algorithm was applied. The test vectors were sorted to maximise the compression. In this work, test vectors were sorted based on a greedy algorithm using the 0/1-distance; for 0/1-distance sorting, the test vector with the most 0's was selected as the first vector. The compression ratio is computed as (original bits - compressed bits) / original bits. In the case of large vectors with only sparsely populated positions, the application was found to produce a very high compression ratio. In the average case the compression ratio was nearly 50%.

Wednesday, August 21, 2019

The Concept Of Collaborative Working Social Work Essay

The Concept Of Collaborative Working Social Work Essay

Collaboration is an interprofessional process of communication and decision making that enables the shared knowledge and skills of health care providers to synergistically influence the way service user/patient care and the broader community health services are provided (Way et al, 2002). The development of collaborative working will necessarily entail close interprofessional working (Wilson et al, 2008). According to Wilson et al (2008) and Hughes, Hemmingway and Smith (2005), interprofessional and collaborative working means considering the service user in a holistic way, and describes the benefits to the service user that different professionals, such as Social Workers (SW), Occupational Therapists (OT), District Nurses (DN) and other health professionals, can achieve by working together. These definitions describe collaborative working as the act of people working together toward common goals. Integrated working involves putting the service user at the centre of decision making to meet their needs and improve their lives (Dept of Health, 2009). This paper will first consider why health care students learn about working together, then review government policy and how it can be applied in a social care context, then examine the factors influencing the outcomes of collaborative working referenced within the professional literature, and finally review evidence on collaborative practice in health and social care.

Learning to work collaboratively with other professionals and agencies is a clear expectation of social workers in the prescribed curriculum for the new Social Work Degree (DoH 2002). The reasons are plain:
• Service users want social workers who can collaborate effectively with others to obtain and provide services (Audit Commission 2002).
• Collaboration is central in implementing strategies for the effective care and protection of children and of vulnerable adults, as underlined, respectively, by the recent report of the Victoria Climbié Inquiry (Laming 2003) and the earlier No Secrets policies (DoH 2000).
• Effective collaboration between staff at the front line is also a crucial ingredient in delivering the Government's broader goals of partnership between services (Whittington 2003).

Experience is growing of what is involved in learning for collaborative practice. This experience promises valuable information for Social Work Degree providers and others developing learning opportunities, but has not been systematically researched in UK social work programmes for a decade (Whittington 1992; Whittington et al 1994). The providers of Diploma in Social Work programmes (DipSW) represented an untapped source of directly transferable experience in this area of learning and were therefore chosen as the focus of the study.

Making collaborative practice a reality in institutions requires an understanding of the essential elements, persistent and continuing efforts, and rigorous evaluation of outcomes. Satisfaction, quality and cost effectiveness are essential factors on two dimensions: outcomes for patient care providers, and outcomes for patients. Ultimately, collaborative practice can be recognised by demonstrated effective communication patterns, achievement of enhanced patient care outcomes, and efficient and effective support services in place. If these criteria are not met, collaborative practice is a myth and not a reality in the institution (Simms, Dalston and Roberts, 1984).
Health care students are taught about collaboration so that they can see the unique contribution that each professional can bring to the provision of care in a truly holistic way. Learning about working together can help prevent the development of negative stereotypes, which can inhibit interprofessional collaboration (Tunstall-Pedoe et al 2003). Health care students can link the theory they have learnt with practice and bring the added value of successful collaborative practice (www.facuity.londondeanery.ac.uk). Learning collaborative practice with other professionals is a core expectation in social work education, both qualifying and post-graduate. Effective collaboration and interaction can directly influence a service user's (SU's) treatment in a positive way, while ineffective collaboration can have severe ramifications, as has been cited in numerous public inquiries. Professionals should also share information about SUs to keep themselves and their colleagues safe from harm. Working Together to Safeguard Children states that training on safeguarding children and young people should be embedded within a wider framework of commitment to inter- and multi-agency working at strategic and operational levels, underpinned by shared goals, planning processes and values. The Children Act 1989 recognised that the identification and investigation of child abuse, together with the protection and support of victims and their families, requires multi-agency collaboration. Caring for People (DH, 1989) stated that successful collaboration required a clear, mutual understanding by every agency of each other's responsibilities and powers, in order to make plain how and with whom collaboration should be secured. It is evident from the above that Government has been actively promoting collaborative working, and this is reflected in the professional literature. Hence, the policy climate and legislative backdrop were established to facilitate inter-agency and intra-agency collaboration. The stated aim has been to create high quality, needs-led, co-ordinated services that maximised choice for the service user (Payne, 1995).

Political pressure in recent years has focused attention on interprofessional collaboration in social work (Pollard, Sellman and Senior, 2005), and when it is viewed as a good thing it is worthwhile to critically examine its benefits and drawbacks: just what is so good about it (Leathard, 2003)? Interprofessional collaboration benefits the service user through the use of complementary skills, shared knowledge and resources, and possibly better job satisfaction. The new Labour government elected in 1997 gave a powerful new impetus to the concept of collaboration and partnership between health professionals and services, and a plethora of official social policy initiatives on collaborative working was published. A clear indication of this can be found in the NHS Plan (DH, 2000) and Modernising the Social Services (DH, 1998a). Policies concentrated on agency structures and better joint working. This was nothing new; since the 1970s there has been a growing emphasis on multi-agency working. 1974 saw the first major press involvement in the death of a child (Maria Colwell), with the press questioning why professionals were not able to protect children whom they had identified as most at risk.
The pendulum of threat to children then swung too far the other way and the thresholds for intervention were significantly lowered, culminating in the Cleveland Inquiry of 1988, when children were removed from their families although there was little concrete evidence of harm (Butler-Sloss, 1988), with too much emphasis put on medical opinion. An equilibrium was needed for a collaborative work ethic to share knowledge and skills, and Munro (2010) states that other service agencies cannot and should not replace SWs, but that there is a requirement for agencies to engage professionally about the children, young people and families on their caseloads. The Children Act 2004 (Dept of Health, 2004) and associated government guidance, introduced following the Public Inquiry into the death of Victoria Climbié in 2000, including Every Child Matters (Dept of Health, 2003), were written to stress the importance of interprofessional and multi-agency working and to help improve it. The failure to collaborate effectively was highlighted as one of many missed opportunities by the inquiries into the tragic deaths of Victoria Climbié (Laming, 2003) and Baby Peter (Munro, 2009). There is an assumption that shared information is information understood; problems with information sharing and effective communication are cited again and again in public inquiry reports (Rose and Barnes, 2008; Brandon et al, 2008). These problems can simply be about very practical issues, such as delays in information sharing, lost messages, and names and addresses that are incorrectly recorded (Laming 2003, cited in Ten Pitfalls and How to Avoid Them, 2010).

An explicit aim was to motivate the contribution of multi-agency working. By 1997 Labour had been elected and rolled out a number of studies into collaboration. These studies revealed the many complexities of, and obstacles to, collaborative working (Weinstein, 2003). The main drivers of the government's health and social care policies were partnership, collaboration and multi-disciplinary working. One of the areas covered by Working Together to Safeguard Children 2010 (Dept of Health, 2010) stated that organisations and agencies should work together to recognise and manage any individual who presents a risk of harm to children. The Children Act 1989 (Dept of Health, 1989) requires multi-agency collaboration to help identify and investigate any cases of child abuse, and to support the protection of victims and their families. It should be remembered that everyone brings their piece of expertise and knowledge to help build the jigsaw (Working Together 2010) and to assess the service user in a holistic way.

Although the merits of collaboration have rarely been disputed, the risk of conflict between the professional groups remains. Some of the barriers to collaboration are different resource allocation systems, different accountability structures, professional tribalism, the pace of change and spending constraints. The disadvantages, if commissioning were led by health, would be an over-emphasis on health care needs and inequities between patients from different practices. There are challenges in terms of professional and personal resistance to change; it is difficult to change entrenched attitudes even through inter-professional education. Sometimes professionals disagree about the causes of, and the solutions to, problems; they may have different objectives because of different paradigms (Pierson, 2010).
There are also several concerns for SWs, which include not knowing which assessments to use, appearing to be different or to work differently from others in the team, not being taken seriously or listened to by colleagues, and not having sufficient time or resources because of budget constraints (Warren, 2007). Some of the reasons for this pessimistic mood are feelings of inequality and rivalry, the relative status and power of professionals, and professional identity and territory. Different patterns of accountability and discretion between professionals are all contributing factors to these feelings (Hudson, 2002). Thompson (2009) suggests that instead of the SW being viewed as the expert with all the answers to the problems, they should step back and look at what other professionals can contribute. Collaborative working offers a way forward, in which the SW works with everyone involved with the client: carers, voluntary workers and other professional staff, in order to maximise the resources, thus giving an opportunity for making progress and affording the service user the best possible care. Weinstein et al (2003) stated that although there are problems with collaborative working, the potential positive outcomes outweigh the negatives. There could be a more integrated, timely and coherent response to many complex human problems, fewer visits, better record keeping and transfer of information, and some reduction of risk; the whole is therefore greater than the sum of the parts. If SWs work in silos, working in a vacuum, they are unlikely to maximise their impact (Brodie, 2008).

It is important to use collaboration and an interprofessional, multi-agency working culture in social work in order that the most vulnerable service users receive the best possible assessments of their needs. The advantages are a better understanding of the constraints of each agency and of the system overall, shared information on local needs, a reduction in duplication of assessments, better planning, avoidance of the blame culture when problems occur, and the fact that accessing social care via health is less stigmatising. Greater knowledge of the SW's roles and responsibilities by other health care professionals will ensure that the SW's role is not substituted in the assessment of the service user's circumstances and needs (Munro, 2010). The Munro Report (2010) also states that if everyone holds a piece of the jigsaw, a full picture is impossible until every piece is put together. Working Together to Safeguard Children states that a multi-professional approach is required to ensure collaboration among all involved, which may include ambulance staff, A&E department staff, coroners' officers, police, GPs, health visitors, school nurses, community children's nurses, midwives, paediatricians, palliative or end of life care staff, mental health professionals, substance misuse workers, hospital bereavement staff, voluntary agencies, coroners, pathologists, forensic medical examiners, local authority children's social care, YOTs, probation, schools, prison staff where a child has died in custody, and any others who may find themselves with a contribution to make in individual cases (for example, fire fighters or faith leaders). In a study by Carpenter et al (2003) concerning the impact on staff of providing integrated care in multi-disciplinary mental health teams in the North of England, the most positive results were found in areas where services were fully integrated.
There is much evidence to suggest that collaboration represents an ethical method of practice in which differences are respected but used creatively to find solutions to complex problems. In essence, the service user should be cared for in a holistic way, and to achieve this, collaboration is the answer. (1516)

Professor Munro asks: "Some local areas have introduced social work-led, multi-agency locality teams to help inform best next steps in respect of a child or young person, including whether a formal child protection intervention is needed. Do you think this is useful? Do you have evidence of it working well? What are the practical implications of this approach?" (http://www.communitycare.co.uk/Articles/2011/01/04/116046/munro-asks-frontline-workers-what-needs-to-change.htm)

Tuesday, August 20, 2019

Basic Structure Of A Computer System Computer Science Essay

Basic Structure Of A Computer System Computer Science Essay

A computer is an electronic device capable of manipulating numbers and symbols: it takes input, processes it, stores data and gives output under the control of a set of instructions known as a program. A general-purpose computer requires the following hardware components: memory, a storage device (hard disk drive), input devices (keyboard, mouse etc.), output devices (screen, printer etc.) and a central processing unit (CPU). Many other components are involved in addition to those listed, and they all work together. Computers can be classified by size and power as follows:

Personal computer: Personal computers are small computers based on a microprocessor. A personal computer has a keyboard for inputting data, a monitor for output and a storage device for saving data.

Workstation: Workstations are usually more powerful than a personal computer. They have a more powerful microprocessor and a higher-quality monitor.

Minicomputer: Minicomputers are multi-user computers capable of supporting from 10 to hundreds of users simultaneously.

Mainframe computer: Mainframe computers are powerful multi-user computers capable of supporting many hundreds or thousands of users simultaneously.

Supercomputer: Supercomputers are extremely fast computers that can perform hundreds of millions of instructions per second.

MAIN REPORT

COMPUTER SYSTEM

A computer system can be represented by a block diagram in which the CPU (bus interface, timing and control, and ALU) is connected to RAM, ROM and I/O devices (keyboard, mouse etc.) over the Address Bus, Data Bus and Control Bus, and is driven by the clock.

The CPU can be expanded into three main parts: the ALU (Arithmetic and Logic Unit), the Bus Interface Unit, and the Control Unit. The clock is an electronic circuit that gives regular pulses to the CPU. Faster clock speeds mean more pulses to the CPU, so instructions are stepped through faster. The memory chip contains millions of separate memory stores, and each of these locations has a unique number, known as its memory address. The CPU stores data at any of these addresses and fetches the contents back when required.

RAM stands for Random Access Memory. These chips store the instructions for running the operating system and any computer application. This memory also stores all the data that is being worked on. RAM is a volatile memory, which means that it only stores data while the computer remains switched on; when switched off, it loses all the stored data. ROM (Read Only Memory), on the other hand, is a chip with program instructions permanently burned into it. Its content is not lost even if the machine is switched off. The CPU can either fetch data from or write data to a memory location when the appropriate address is accessed. Such data is transferred between the CPU and the memory location along the Data Bus. The Control Bus is a set of tracks on the computer's motherboard that run from the CPU to the devices and work under the direction of the CPU.

LOGIC GATES

Logic gates perform a logical operation on one or more logic inputs and produce a single logic output. They process signals which represent true or false. This is called Boolean logic and is most commonly used in digital circuits. Logic gates are identified by their function: NOT, AND, NAND, OR, NOR, EX-OR and EX-NOR, and they are usually represented by capital letters.
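To make these gate functions concrete, the short C program below (not part of the original essay) prints the truth table of a two-input AND gate using C's bitwise operators; swapping the expression for (a | b), !(a & b) or (a ^ b) gives the OR, NAND and EX-OR gates.

```c
#include <stdio.h>

/* Print the truth table of a two-input AND gate.  The nested loops walk
 * every combination of the inputs A and B, exactly as a truth table does. */
int main(void)
{
    printf("A B | Q (A AND B)\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d | %d\n", a, b, a & b);
    return 0;
}
```

Its output matches the AND truth table shown in the truth-table section below.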
Logic Gate Symbols

There are two series of symbols for logic gates: the traditional symbols, which have distinctive shapes making them easy to recognise and so are widely used, and the International Electrotechnical Commission (IEC) symbols, which are rectangles with a symbol inside to show the gate function.

Traditional symbols (source: http://www.kpsec.freeuk.com/gates.htm)

IEC symbols (source: http://www.kpsec.freeuk.com/gates.htm)

Inputs and Outputs

All gates except the NOT gate have two or more inputs. A NOT gate has only one input, and all gates have only one output. In the figure, A and B are inputs and Q is the output (source: http://www.kpsec.freeuk.com/gates.htm). Other types of gate used are the NOT gate, AND gate, NAND (NOT AND) gate, OR gate and NOR (NOT OR) gate.

Truth Tables

A truth table is a good way to show the function of a logic gate. It shows the output states for every possible combination of input states. The symbols 0 (false) and 1 (true) are usually used in truth tables. The example truth table below shows the inputs and output of an AND gate.

Input A  Input B  Output Q
0        0        0
0        1        0
1        0        0
1        1        1

Computer Numbering Systems

Humans speak to one another in particular languages using different words and letters. Although we type words and letters into the computer, the computer translates them into numbers; computers talk and understand in numbers. The number systems used are decimal, hexadecimal and binary.

The decimal number system is the system most frequently used in arithmetic and in everyday life. It is also known as the base 10 number system, as each position in a number represents an incremental power of 10. Each position only contains a digit between 0 and 9.

The hexadecimal number system is used to represent memory addresses or colours. It is also known as the base 16 number system, because each position in a number represents an incremental power of 16. Since the number system is based on 16s, there are ten digits (0 to 9) and six letters (A to F).

The binary number system is used by most machines and electrical devices to communicate. It is also known as the base 2 number system, because each position in a number represents an incremental power of 2. Since it is based on 2s, each position can only hold one of two values: 0 or 1.

CPU COMPONENTS

The CPU is the intelligence of the machine, but it needs a pre-written program to create, use and modify data. If the computer needs to compare two numbers, or add two numbers, this is carried out inside the CPU, and the numbers have to be fetched into the CPU from the computer's memory chip. The three main components of the CPU are the Arithmetic Logic Unit (ALU), the Bus Interface Unit and the Control Unit.

The Arithmetic Logic Unit (ALU) carries out all the calculation and decision-making tasks. The ALU uses devices called gates that receive one or more inputs and, based upon the function they are designed to perform, output a result. The basic operations of an ALU include adding and subtracting binary values as well as performing logical operations such as AND, NOT, OR and XOR. The Bus Interface Unit moves data, held inside internal registers (small memory stores), to and from the CPU along the external Data Bus in order to read and write memory and devices. The Data Bus carries information in both directions.
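As a quick tie-in between the number systems above and the addresses the CPU places on its buses, the following illustrative C snippet (not from the essay) prints one and the same made-up 16-bit address in decimal, hexadecimal and binary.

```c
#include <stdio.h>

/* Illustrative only: the same memory address written in the three number
 * systems described above - decimal, hexadecimal and binary. */
static void print_binary(unsigned value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar((value >> i) & 1u ? '1' : '0');
}

int main(void)
{
    unsigned address = 43981;                 /* an arbitrary 16-bit address */
    printf("decimal:     %u\n", address);
    printf("hexadecimal: 0x%04X\n", address); /* prints 0xABCD               */
    print_binary(address, 16);                /* prints 1010101111001101     */
    printf(" (binary)\n");
    return 0;
}
```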
The Bus Interface Unit also places the required location addresses on the Address Bus, so that the required devices can be accessed for reading or writing. The Control Unit decodes all program instructions and dictates all of the CPU's control and timing mechanisms; it sends out the read and write signals on the Control Bus, the physical connection that carries control information between the CPU and the other devices within the computer.

COMPUTER MEMORY

The computer has to temporarily store the program and data in an area where they can be used by the computer's processor. This area is known as the computer's memory. It consists of computer chips that are capable of storing information. This information could be: the operating system (e.g. DOS, Windows etc.), the instructions of the program to run (e.g. a database or a drawing program), or the data that is used or created (e.g. letters from word processing or records from a database). There are different types of memory used in a computer system: cache memory, Random Access Memory (RAM), Read Only Memory (ROM) and virtual memory.

Cache memory is extremely fast memory that is built into a computer's CPU (L1 cache) or, in some cases, located next to it on a separate chip (L2 cache). L1 cache is faster than L2 cache as it is built into the CPU. These days, newer computers come with L3 cache, which is faster than RAM but slower than L1 and L2 cache. Cache memory is used to store instructions that are repeatedly required to run programs and helps to improve overall system speed. The reason it is so fast is that the CPU does not have to use the motherboard's system bus for data transfer.

Random Access Memory (RAM) is a memory chip that consists of a large number of cells, each cell having a fixed capacity for storing data and a unique address. RAM is a volatile memory, which means all the programs and data in the memory are lost when the machine is switched off. There are different types of RAM module available, such as SODIMM, SDRAM, DDR, DDR2 and DDR3. SODIMM modules are used in laptops, whereas the rest are used in desktop computers.

Read Only Memory (ROM) is a memory chip into which the program instructions are permanently burned. It is non-volatile, which means its content is not lost even when the machine is switched off. It is used to store some of the system programs that keep the computer running smoothly; for example, the computer's BIOS (basic input output system) is stored in ROM. There are different types of ROM available, such as Programmable ROM (PROM), Erasable Programmable ROM (EPROM) and Electrically Erasable Programmable ROM (EEPROM).

Virtual memory is a feature of most operating systems. It is used when the amount of RAM is not enough to run all the programs. If the operating system, an email program, a web browser, a word processor and a Photoshop application are loaded into RAM simultaneously, RAM will not be able to hold them all, so the computer looks for areas of RAM that have not been used recently and copies them onto the hard drive. This frees up space in RAM to load new applications. But because the read/write speed of a hard drive is much slower than that of RAM, the performance is not satisfactory. Relying on virtual memory is therefore not recommended, as it is slow; the solution to this problem is to upgrade the memory.

SYSTEM SOFTWARE

A computer system is not complete without system software. For a computer to perform any task, both software and hardware are equally important.
System software gives life to the hardware. System software consists of the files and programs that make up a computer's operating system. It includes libraries of functions, system services, drivers for hardware, system preferences and other configuration files. System software comprises assemblers, debuggers, compilers, the operating system, file management tools etc. System software is installed on the computer when the operating system is installed, and it can also be updated by running programs such as Windows Update. System software is also called low-level software, as it runs at the most basic level of the computer. It generates the user interface and allows the operating system to interact with the hardware; however, system software is not meant to be run by the end user in the way application programs are. Application programs such as a web browser or Microsoft Word are often used by the end user, whereas the end user does not use an assembler unless he or she is a computer programmer. System software runs in the background, so the user does not have to worry about what it is doing.

CONCLUSION

In this report, the basic structure of a computer system was described with a diagram. The different components that form a computer system, such as the CPU, memory, buses and input/output devices, were identified and explained. General ideas about logic gates were given, and the different number systems used by computers to represent data were also described. As the CPU is the main part of a computer system, it was examined further, and the Arithmetic Logic Unit, Control Unit and Bus Interface Unit were discussed. Different types of memory and their uses were explained, and finally the importance of system software was discussed.