Personhood, Human Dignity and Moral Status

The case study presents four individuals: Jessica, her boyfriend Marco, her Aunt Maria, and Dr. Wilson, each of whom brings a different perspective. Dr. Wilson, speaking from the medical side, recommends to the couple that, because the pregnancy is unhealthy, they consider an abortion. In his opinion the infant would be born with an abnormal condition and have a shortened lifespan. The doctor’s recommendation was based on virtue ethics, in which Aristotle defined some twelve moral virtues and stressed the golden mean; on that view, abortion could be seen as a virtuous action if it is taken with courage and ultimately aims at human happiness. The second individual is Jessica’s boyfriend, Marco, who was very supportive of Jessica’s decision. His beliefs, in line with feminist theory, favored the rights of the mother more than the right of the child to survive. He was not sure that Jessica had received proper help during her early months of pregnancy.

The last person, Aunt Maria, with whom the couple lived at some stage, wanted Jessica to give birth to the child, for she believes in the principle of the right to life. In the Aunt’s viewpoint, abortion is homicide and God will judge Jessica for it. Jessica’s final choice, however, was either to get an abortion or to proceed with motherhood. She felt it would be impossible to get through life having given birth to an unwanted child. According to one feminist line of argument, she has two choices: to eliminate the womb entirely, or to refrain from intercourse. But this idea is not sound, as the mother is not granted an absolute right to terminate her child; after all, mothers have sometimes aborted babies merely on the grounds of sex. Trusting the recommendation of the psychiatrist is one way to help think through the life of the fetus. Where scientific theories and studies have already identified an abnormality in the growth and development of the fetus, it is not advisable to give birth to an abnormal infant: it is going to be troublesome not only for the parents but also for the child. As far as ethical standards are concerned, the fetus can be terminated within 24 weeks of conception. However, my religious background leads me to believe in the right-to-life theory: if God is the Divine creator who brings life to an individual, what right do I have to terminate it?

The concepts of personhood, human dignity, and moral status inform my philosophy of health care and wellness in a way that allows me to respect a patient’s decision. As a health professional it is important first to understand the patient, so that any advice given best suits that individual. A health professional’s advice is widely accepted, but all the theories that best aid the patient need to be weighed. As an example, the relationship-based moral agency principle applies to Dr. Wilson and his relationship with his clients. Dr. Wilson needed to respect the interests and actions of both Jessica and Marco, because both were interested in the best-case scenario for the fetus. According to Sebo (n.d.), the fetus may not have moral value since it lacks reason, that is, self-awareness and understanding, and cannot sense the danger around it. This capacity is absent in the embryo and develops gradually as the individual matures. My final thought is that the fetus still develops into a human being no matter how flawed it appears to be; it still evolves exactly as God wants it to evolve. However, Dr. Wilson did the appropriate thing by advising Jessica that she is the one to give the definitive answer, even if all parties hold opposing theories.

References

  1. Grand Canyon University (GCU). (2020). Case Study: Fetal Abnormality.
  2. Sebo, J. (n.d.). Moral Status.

The Gaps in David Miller’s Moral and Economic Justice Theory

In this essay we will explore the contention that the guiding principle for the distribution of income from work should be the principle of desert, where desert is measured by one’s contribution to the social product, drawing upon the work of David Miller. Miller argues for desert-based income distribution, in which monetary rewards are provided as a portion of the net output for which the individual is responsible. The critique will begin by pointing out considerable weaknesses in his arguments, followed by a comparative perspective against Rawls’ Difference Principle on wealth redistribution.

Miller’s principle provides a coherent account of fairness within a pluralistic society by viewing society as solidaristic in culture and beliefs, and instrumental between individuals. Individuals are able to communicate on a common ethos and contribute to each other based on their relationship: “you get what you give”. Working co-operatively, with rewards for social contribution, gives individuals the liberty to act for their own needs and those of the people close to them. Redistributing individual incomes causes dead-weight loss (Case and Fair, 1999), as the subjectivity and relativity of one’s own needs is ignored. The desert principle allows individuals to choose their level of work according to the income they believe is sufficient for them. For example, if a subset of society increases its utility by indulging in more expensive aspects of life, its members will put in more effort to achieve those goals. If another group can maintain the same utility with a less extravagant life, then this group is Pareto-efficient at a lower level of input to society. Pareto efficiency refers to allocations from which no one can be made better off without making someone else worse off (Jennings, 2012).
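To make the Pareto-efficiency criterion concrete, the following short Python sketch (my own illustration; the utility profiles and function names are invented, not drawn from Miller or the cited economics texts) checks which of a set of hypothetical utility profiles are Pareto-efficient:

    # Sketch: identify Pareto-efficient utility profiles.
    def pareto_dominates(a, b):
        # a dominates b if no one is worse off under a and someone is better off.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_efficient(profiles):
        # Keep only the profiles that no other profile dominates.
        return [p for p in profiles
                if not any(pareto_dominates(q, p) for q in profiles if q != p)]

    # Hypothetical utility profiles (one number per person) for three lifestyles.
    profiles = [(3, 3), (4, 3), (2, 5)]
    print(pareto_efficient(profiles))  # [(4, 3), (2, 5)]: (3, 3) is dominated by (4, 3)

On these toy numbers, both (4, 3) and (2, 5) are Pareto-efficient at different levels of total input, mirroring the essay’s two groups.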

That said, applying the Difference Principle, under which wealth should be distributed to the least advantaged (Rawls, 1971), would mean punishing the deserving rich for possessing more wealth rather than rewarding them for their social contribution, ultimately taking away the incentive for society to progress. Removing this incentive would also prevent individuals from self-improvement, “working harder, not smarter”, and in turn result in a regressive society. Desert-based principles allow individuals to re-evaluate themselves to become more productive, taking the greatest advantage of their own unique natural talents. It is argued that these natural talents are unjust because they are distributed by luck at birth; however, the same talents can be used to create purpose and identity for each person, and they fulfil the most important human needs, self-gratification and self-achievement: to deserve what one has worked hard for. Desert allows people to grow into productive and self-sufficient citizens of society. Without a consistent incentive to work, how many people will move the world forward? Innovative leaders will no longer increase productivity, the world’s economies will become stagnant and society will regress.

Too much power and wealth can do more harm than good; however, under the desert principle, society elects qualified leaders, many of whom have the natural talent of leadership, whereas others prefer to follow, so a healthy amount of power will find its free-market equilibrium. Though natural talents may attract an advantage in power and wealth, they also bring more responsibility. Allowing individuals to work on a desert basis strikes a balance between using one’s inherent abilities and earning the living one wants and deserves for oneself and those around one.

Miller’s theory of justice may at first sound reasonable and coherent in theory, but there are major flaws that render his arguments invalid in many situations (Celello, 2019). To begin, Miller gives no lexical ordering to his proposals, which can lead to difficult situations whenever a relationship involves more than one person; for instance, a family-run business where two or more family members run the business and thus work together (Celello, 2019). Admittedly, Miller does acknowledge a wide range of examples in which his arguments show that a ‘just’ outcome of the relationship can be achieved in order to reach a desirable level of social justice (Celello, 2019). Here, however, Miller commits the fallacy of using explanations as excuses. In the family-business situation, the desert principle is not a necessary condition for social justice: suppose the husband owns an accounting business and the wife is a home-maker; the husband would not keep the entirety of the proceeds to himself, but would rather use it as a means of income for both himself and his partner. Miller has not considered personal relationships in his theory, as both partners appreciate that the proceeds can be distributed equally without any need for desert.

A feminist critique of Miller would begin with his separation of the spheres of justice. For the sake of argument, if a person is not engaged in a given sphere, then they can never receive a “just” distribution in reference to their deserts or needs (Feldman and Skow, 2019). Take, for example, an individual who has no family or indeed any friends because they live in the wilderness (Feldman and Skow, 2019). That individual would by definition not belong to a “solidaristic community” able to distribute goods to meet their needs, beyond those goods they receive simply as a citizen. Moreover, consider an individual who has never worked and will never be able to work, for instance a severely disabled person: such people will never receive the benefits or “dues” for their labour or for any activity in which they engage (Feldman and Skow, 2019). Hence, according to Miller’s theory of desert, such people would get nothing, whereas within a well-functioning liberal democracy they will still be provided with the basic public goods that enable them to live their lives.

Moving away from the theoretical implications of his arguments, in practical terms Miller is defending desert as a principle of social justice. Miller has a deterministic view of the world, which ultimately leads him to the conclusion that individuals are not responsible for their skill levels, abilities and talents, and hence that it is unjust to be rewarded for such inherent traits (Feldman and Skow, 2019). If one rejects the deterministic argument, then the proposition that ‘merit’, including intellectual ‘merit’, is morally justifiable must also be accepted. Liberal democracies are founded upon meritocracy, and to reject this principle, as Miller would, is to reject the most fundamental value of modern society. Additionally, Rawls argues for his position on desert from the standpoint of merit (Feldman and Skow, 2019). Miller, by contrast, argues that desert can be achieved if and only if the allocative capacity of the market is strictly controlled, with the implication of limiting inequality (Feldman and Skow, 2019). However, it is naïve of Miller to argue that the talents society values can only be measured by the market, and thus must have economic value placed upon them; this is neither the only way, nor the “just” way, to determine whether someone is deserving.

Additionally, Miller’s blatant support for liberal nationalism relies too heavily on the normative assertion that we owe greater moral obligations to those of the same nationality than to those of different nationalities (Feldman and Skow, 2019). This argument fails because it relies on the assumption that all nations are equal and thus able to provide the necessary public goods for their citizens’ needs, as well as their rights. This claim does not coincide with reality: there are numerous dictatorial states, such as Venezuela and North Korea, that do not provide basic public goods for their citizens, and thus other nations, working in concert with the United Nations, have an obligation to assist in supplying such rights to the people of those nationalities. Moreover, one need only look at the issue of energy to see that Miller’s desert argument also fails to acknowledge the inequality of natural resources, which are grossly unevenly distributed between nations (Feldman and Skow, 2019). Such inequality of natural resources is itself morally arbitrary. Hence, in terms of the distribution of income, the position of those born into wealthy families is likewise morally arbitrary; yet in any sensible democracy they ought to have the right to inherit the proceeds of their parents’ labour. Nevertheless, people born into such circumstances have a moral obligation to provide aid for those born into poorer families.

In this essay we have explored the contention that the guiding principle for the distribution of income from work should be the principle of desert, drawing upon Miller’s argument that income distribution should be desert-based, with monetary rewards provided as a portion of the net output for which the individual is solely responsible. Miller’s theory of justice has good intentions for a liberal state, but it fails to answer the downfalls of his arguments on the desert principle. As aforementioned, his theory is consistent in principle but inconsistent in application; the critique offered here has explained the considerable weaknesses in his arguments when analysed from a comparative perspective against Rawls’ Difference Principle on wealth distribution, as well as from feminist and rational perspectives.

Bibliography

  1. Case, K. E. and Fair, R. C. (1999). Principles of Economics (5th ed.). Prentice-Hall. ISBN 978-0-13-961905-2.
  2. Celello, P. (2019). “Desert”. The Internet Encyclopedia of Philosophy. ISSN 2161-0002. https://www.iep.utm.edu/
  3. Feldman, F. and Skow, B. (2019). “Desert”. The Stanford Encyclopedia of Philosophy (Fall 2019 Edition), Edward N. Zalta (ed.).
  4. Frank, B., Jennings, S. and Bernanke, B. (2012). Principles of Microeconomics (3rd ed.). McGraw-Hill.
  5. Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.

Why Do Human Beings Have to Distract Themselves? Is It Moral?

Is it within human nature to distract ourselves from uncomfortable or challenging realities, events or thoughts? This idea has been explored by philosophers such as John Stuart Mill, Immanuel Kant and Jean-Paul Sartre. At this point in time, however, can distractions become overbearing, and are they possibly stunting the progress of humanity? Distraction has shifted from a child’s comforting bedtime story, told to calm their fear of the dark, to a constant state of noise, colours and canned laughter surrounding day-to-day activities. Unlike the child’s fear of the dark, there is no apparent reason for this level of distraction. I believe that the reason we feel a desire to distract ourselves is that our current existence is an uncomfortable reality. From the troubling political climate to the changing climate on Earth, we distract ourselves to forget or ease the discomfort we experience from the world today and our inability to control events out of our hands. These distractions are not all bad, however, because if we reached a high level of discomfort, our ability to use our time productively would decline. Because of this, it is necessary to have a balance between distraction and productivity.

In order to answer this question, we must first establish the existence of free will. Determinists, like Pierre-Simon, Marquis de Laplace, believe that every event in the universe is predetermined, including moral actions. If moral decisions or actions are determined, then we cannot have free will, because we have no ability to choose. Emotions guide us to act a certain way, but, according to C.A. Campbell, we can choose to go against those emotions when making moral decisions: for example, we could choose to be kind even when we are in a bad mood. Such cases provide evidence for the theory of compatibilism. If everything were predetermined, we would have no need for emotions, because we would not be making any decisions. This makes the theory of determinism improbable, and the theory of compatibilism, on which at least some events in the universe are not deterministic, more probable. Therefore, for the sake of this argument, I will assume compatibilism.

The basis of my argument will rely in part on John Stuart Mill’s theory of utilitarianism. In Jeremy Bentham’s 1789 book, An Introduction to the Principles of Morals and Legislation, the idea of utilitarianism is first established. There it is stated that happiness is the presence of “pleasure” and the absence of “pain”. In calculating happiness, Bentham uses duration, intensity, certainty versus uncertainty, and nearness versus farness. This is the original basis of utilitarianism. By this logic, it would be considered moral to sit around and watch television all the time, if that is what brings you pleasure: an immediate, low-intensity, high-certainty pleasure with a long duration. It could be argued that the time would be better spent writing a philosophy essay, but that could be an uncomfortably confronting or tiring experience, and any pleasure coming out of it would be uncertain and further in the future. Therefore, by Bentham’s theory, you would be making a moral choice in watching television all day. This theory is flawed in that it values the quantity of happiness over its quality.
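To see how Bentham’s calculus plays out arithmetically, here is a small Python sketch (a toy illustration of my own; the weights, numbers and function name are invented, not Bentham’s actual procedure) scoring the two activities along the dimensions just listed:

    # Sketch: a toy felicific score combining Bentham's dimensions.
    def hedonic_score(intensity, duration, certainty, nearness):
        # Scale intensity by duration, then discount by uncertainty and distance in time.
        return intensity * duration * certainty * nearness

    # Television: low intensity, long duration, near-certain and immediate.
    tv = hedonic_score(intensity=2, duration=5, certainty=0.9, nearness=1.0)

    # Essay-writing: higher intensity if it pays off, but uncertain and distant.
    essay = hedonic_score(intensity=7, duration=2, certainty=0.4, nearness=0.5)

    print(tv, essay)  # 9.0 2.8 -- on this crude calculus, television 'wins'

On such a quantity-only scoring, the lazier option comes out ahead, which is precisely the flaw the paragraph identifies.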

John Stuart Mill disagrees with Bentham, claiming that Bentham disregards qualitative differences; Bentham did not distinguish between an animal’s pain and a human’s pain. Mill’s theory alters the idea of utilitarianism and allows the sacrifice of an individual’s pleasure, and their experience of pain, for the greater good of a community. Mill says that because happiness is difficult to attain, people are morally justified in attempting to reduce their total amount of pain through their actions.

This theory is still intrinsically flawed, because it treats pain and pleasure as the only driving forces in decision making. Kant challenges it by pointing out that we can also use reason to influence our decisions, which is what differentiates us from animals, who act only out of instinct. That being said, pleasure and pain are the underlying reasons for our actions when reason is not applied, and the main time we do not use reason is when we are thinking subconsciously or not analysing our decisions. This makes utilitarianism a useful theory for approaching this problem, because it offers a simple way to explain human actions and motivations.

Having established that, it is time to decide why we choose to spend our time on meaningless distractions. For the sake of this argument, I will define distractions as activities like scrolling mindlessly through social media, watching YouTube videos, watching meaningless television, and so on. The obvious answer is that they bring us pleasure, but spending all day, every day, on distractions alone would make you unhappy. So distractions make you happy, but only if they are balanced with other meaningful activities.

Long ago, the distractions I have described as meaningless did not exist, and humans distracted themselves with other things, such as recreational swimming and playing games. These activities were distractions because they were recreational and did not serve any material purpose. Now we do not have to worry about survival in the same sense; instead, we have to contribute to society to survive in this capitalist climate. This new form of survival, however, does not take up as much time and in fact relies heavily on recreational activities to work. As time goes on, more and more people are passing the time through superficial distractions that prevent deep reflection or philosophical examination. Plato’s theory of the good life rests heavily on the idea of examining life: he maintains that an unexamined life is not worth living. This argument is in many ways correct. Imagine spending every waking hour indulging in mindless activities that give you mild happiness. That life would in no way compare to a life of variety, in which you examined the human condition, learnt, and then spent time indulging in life’s pleasures as a reward. This again supports the idea of a balanced life, in which learning and physical pleasure are held in unison to create a fulfilling life.

The increasing desire to escape reality through distractions can, in part, be explained by changes in the political climate. The end of World War II changed everything: science, once seen as a pure pursuit of discovery, had now been used to create weapons of mass destruction. Everything around you could be gone in an instant. This shocking reality sits at the front of everyone’s minds. Some embrace it and lean into new philosophical outlooks, as Heidegger suggests, but the majority of people try to carry on as usual. Heidegger makes excellent points about death: contemplating its reality means recognising that it can happen to anyone, anywhere, at any time. If you reflect on death every day, he says, you will have an authentic attitude towards it, whereas people who put off living, who act as if they will live forever, hold an inauthentic view of death. Heidegger believes that death is the only thing that can bring authentic meaning to life, and that in order to be morally good, or to live a good life, you must accept death and use it to motivate your actions. By Heidegger’s theory, anyone who uses distractions to avoid facing death is immoral. Distraction is the easy path when faced with these realities.

Those who had not chosen to fill their time with distractions after World War II were further tempted into it by advertising and the culture surrounding these distractions. More and more events continue to fuel the idea of distraction in individuals’ minds. Reports of the Holocaust made the idea of genocide widespread knowledge. The attacks of the 11th of September 2001 threatened the world with a new terrorist menace. In more recent times, the election of Donald Trump as President of the United States carried tremendous implications, not only for America but for the world. Of course, the rising issue of climate change also plays a big role, especially with its new media attention. These realities are all uncomfortable to think about, and people need a solution; the easiest solution is distraction. Young people learn everything from their surroundings, so they also learn their distraction habits from those around them. For example, when a mother gets overwhelmed at work and then, at the end of the day, indulges herself by playing video games, her children will come to think of video games as a way to de-stress. When the children are stressed, they will therefore play video games, furthering the cycle. Because of increasing levels of stress, people are de-stressing more, and therefore distracting themselves more.

Another factor that can drive distraction is the mental health epidemic. While no single exact cause for the increase in mental health issues is known, it is another explanation for the increase in distractions. It is relevant because, in every mental health condition, individuals adopt their own coping mechanisms to deal with stress. These coping mechanisms often follow patterns like increased television habits, activity on social media, or other activities that help them forget their stress. This would, in turn, maximise pleasure and minimise pain.

Another factor behind the individual’s want for distraction is boredom: not only a lack of activities to complete, but sensory boredom. Video games, TV shows and the like all provide sensory stimulation; bright flashing colours are accompanied by loud noises. Products aimed at children hook you in with this stimulation, and this early induction means that our bodies become used to that level of stimulation and therefore crave it when we are away from it. For example, a blind person who was not born blind often develops tics to stimulate their eyes even though they cannot be visually stimulated, very often some form of rubbing of the eyes or other physical stimulation; someone who is technically blind but can still perceive shadows will often use lights to stimulate their eyes. The same goes for someone used to hearing loud noises all day long: they will often be unsettled by quiet, as it is unfamiliar. If someone watches television in the morning and then works in a loud classroom, when they get home they will likely want to stimulate their hearing by putting on music or watching TV, and could be unsettled by the quiet of their house.

Therefore, due to the uncomfortable political climate, external stresses, and the accessibility of and encouragement towards distraction, we use our spare time on meaningless tasks to maximise pleasure and minimise pain.

The morality of choosing to distract oneself from the “real” issues in the world has been debated by many philosophers. The theory of utilitarianism does not provide a clear answer to this question, saying instead that whatever maximises happiness in the moment is the moral choice. This can be challenged by the example of sacrificing yourself to save your child: it would bring you pain, but it would allow your child to live, which is ultimately what many people would choose. This makes utilitarianism an unfitting theory for this question.

Jean-Paul Sartre’s theory of free will applies an individual morality to our decisions. Sartre claims that humans live in suffering because, once they are capable of making decisions, they are responsible for those decisions and their consequences. Sartre calls this suffering “existential angst”. This could offer another explanation as to why we feel the need to distract ourselves.

Furthermore, Sartre defined living in “bad faith” as when someone does not take responsibility for their choices or does not “pursue freedom”, or free will. A person living in bad faith accepts things as they are and does not pursue other options. Therefore, Sartre believes that you are not using your full free will unless you are living in “good faith”, which means that you are content with, and take responsibility for, all of your decisions. For example, you could decide to watch one episode of Grey’s Anatomy and be content with your decision. However, you find yourself caught on a cliffhanger, so you watch another episode; this one you can still justify to yourself and take responsibility for. Then you watch another because you are having a great evening. At the end of the third episode, you are tired and realise it is midnight. You use excuses like “time got away from me”; you neither take responsibility for your actions nor justify them. This would be living in bad faith, and if you are living in bad faith you will experience more existential angst and become more stressed, causing the urge to distract yourself to increase. Alternatively, after the third episode you could reflect that it ended up being a bad idea and that you will now be tired for work tomorrow. You accept this as your fault and resolve that next time you will stop at one or two episodes. That would be living in good faith. In other words, don’t watch television all day if you can’t justify that decision to yourself and take responsibility for it.

Sartre’s theory is an excellent way to approach this problem, but it is not without fault. Sartre describes living in good faith as taking responsibility for all of one’s actions; however, this does not account for certain exceptions. For example, suppose a starving man has no option for sustenance other than a human corpse, and he chooses to consume the corpse. The man does not take legal or emotional responsibility for that action, claiming he acted out of necessity. Would he be living in bad faith? Alternatively, a person with Tourette’s Syndrome who yelled “everyone down, I have a bomb” at an airport would likely not take responsibility for their action, as they had no free will over it. On the other hand, that person could justify the action to themselves and accept it even while not taking responsibility.

This fault, however, can be resolved when the theory is coupled with compatibilism (the idea that some, but not all, of our actions are determined). On this view, one could suggest that the starving man did not make that decision; rather, his biology made it for him. Similarly, the man with Tourette’s Syndrome would not have been physically able to stop himself from yelling.

An alternative theory is Kantian ethics. Kant says that we cannot control the consequences of our actions, only our motives; therefore we are morally responsible for the will behind our actions and not their outcomes. A categorical imperative is universal and applies to everyone, with no exceptions. This means that if a serial killer asks you where your friend is, you have to tell them the truth, just as you would tell the police where you saw the killer run off to. Kantian ethics directly opposes the theory of utilitarianism, claiming that we are not governed solely by pleasure and pain, but that reason can also shape our actions. Kant says that we can choose to endure pain and forgo pleasure, which is what sets us apart from animals.

Consider this: Norm knows that if he spent his time exercising and being social he would be happier and healthier. Despite this knowledge, Norm chooses to sit on the couch after work, eating pizza and watching TV. He knows it isn’t good for him, but he admits it is what he wants to do; he really likes TV. Kant argues that Norm is not acting with free will. Freedom, he says, is not just being free to get what you want, because we do not choose our wants. If Norm could choose his wants, he would want to eat healthily and exercise; but what he actually wants is to eat pizza and watch TV. There is a force outside Norm’s will that he is obeying. Kant says that true freedom is the ability to resist our wants: we are free when we act in accordance with laws we give ourselves. So if Norm were to throw out his TV and start exercising, he would be truly free. In the context of distractions, you could have a personal law that makes it immoral for you to watch TV all day, while watching a certain amount of TV remains moral. But if you find it immoral to watch TV at all, then you would be morally wrong ever to watch TV, because it would be a categorical imperative.

Throughout this essay, I have shown that humans crave distractions to avoid thinking about distressing things and to obtain pleasure. This idea comes from John Stuart Mill’s theory of utilitarianism and from humans’ desire to avoid pain (the distressing thoughts) and to acquire pleasure (the distractions). Because humans possess free will, they must take responsibility for their decisions, according to Sartre. This is the most compelling theory I have examined with regard to the morality of distraction, as it works on a situation-by-situation basis. It means that someone can morally justify using distractions, in balance with productive and creative activities, as long as they think the choice justifiable and take responsibility for it. It also provides an answer for people using distractions as coping mechanisms or stress relievers. Based on my research, I conclude that distractions are moral as long as the person distracting themselves is content with the time they spend on distractions and accepts responsibility for their actions.

To What Extent Can and Should Moral Discussions in Political Theory Inform Actual Politics?

Throughout this assignment I will aim to show how, and to what extent, moral discussion in the form of moral philosophy has informed actual politics, providing examples and an in-depth understanding of the philosophy and its implications. The piece will then aim to come to a conclusion on the extent to which moral discussion should inform politics. The only issue I find whilst writing this is that of one’s own political motivations: for some, the idea of utilitarianism is essential to how they perceive the perfect political landscape, whilst others may be more swayed by the works of Rawls and Kant. Throughout, I have tried to remain impartial, looking at examples and providing a layout that establishes what each ideology is, why the theory became popular, and how it has impacted politics, be it positively or negatively.

To begin with, I believe that the Kantian theory of morality is a good place to start, especially as Kant took morality very seriously throughout his life. Moral absolutism, the moral philosophy that Kant followed, rests on what Kant deemed ‘categorical imperatives’, which are key to how we each act; in Kantian theory these are commands we must follow based on pure reason alone, and our desires must be kept separate to ensure we all follow the same rules (Kant, Ellington and Kant, 1994). Looking at this specific area, we can see parallels to modern governments and laws, for instance theft. Theft is illegal, and Kant would argue that this is because it is essentially something we do not wish upon ourselves: if we steal, we validate the legitimacy of stealing and therefore accept it also happening to us (Kant, 1990). We have therefore established real laws to ensure everyone understands this. There are several examples of Kant’s theories being used, at least partially, in modern governments, because his philosophy of morality is well known. For instance, when giving testimony in a court of law, or to the police, we are taught that it is morally wrong to lie, and this can be traced to Kant. Kant’s theories hold that we should never forsake these categorical imperatives, which in turn means we should never lie, steal or harm others, because it is morally wrong to do so. This is also where Kant’s theories begin to lose their strength in politics: to legally enforce that lying is immoral and therefore illegal raises numerous issues, some as simple as how to enforce such a ruling; the simple answer is that you cannot. Kantian theories nonetheless provide good examples of how moral discussions can inform actual politics; as mentioned, some laws can be traced to the philosophy that Kant followed and shared with the world. If we follow Kant’s second formula, the ‘formula of humanity’, we see that it follows the idea of autonomy: ‘this imbues us with an absolute moral worth, which means that we shouldn’t be manipulated, or manipulate other autonomous agents for our own benefit.’ This idea of autonomy is essential to Kant’s thinking: we must all know the truth of something to agree to it fully. This is also shown in actual politics. If we look at terms of service, or any documents we must sign to show that we understand and agree to something, this is because we follow the idea that we must be fully informed to make legitimate decisions. A service provider cannot tell you one thing and provide another without legal consequence, because our decisions would then have been made on false information, and we would not be autonomous in our decision making. Kantian theory is widely accepted and used to inform actual politics in many nations, and is a good opening example to show that moral discussion can inform politics.

A natural follow-up to the Kantian theory of morality is its essential counter, the idea of utilitarianism. This theory was famously developed by Jeremy Bentham and John Stuart Mill; where Kantianism focuses on the intent behind our actions, utilitarianism focuses instead on their consequences, which in turn leaves the intentions behind our actions irrelevant so long as they produce a given result. Modern utilitarianism focuses on ends, on what we as mortal beings seek, which Mill and Bentham argue is happiness (Corry and Sprigge, 1970). Everything we do, from education to work to hobbies, has the same goal attached: we do each thing because we think it will, in the end, make us happy. Like Kantians, utilitarian thinkers believe that moral theory should apply to everyone equally; where they differ is on how to ground it so that everyone understands, and utilitarians believe there is nothing more fundamental than our primal desires. This makes utilitarianism a hedonistic philosophy: ‘the good is equal to the pleasant.’ The ‘principle of utility’ is where we begin to see its impact on politics and how it has informed, and still informs, politics globally. The principle holds that, as special as we are as individuals, we are in fact no more special than any other human, and therefore when making moral decisions we must do so from ‘the position of a benevolent, disinterested spectator.’ This is what we assume, and presumably hope, all leaders of nations do: think not about themselves but about how best to help the masses, so as to maximise the positive effects. Understandably, this is not the case in all nations; as we have seen in several African nations, as well as in countries such as North Korea, the powerful taking all the benefits is a common theme. However, when looking at nations such as our own, or the US and Canada, we tend to see agreement with the idea of a benevolent spectator. We often find that elected governments, politicians and officials work for the general betterment of the area they represent, and therefore act so as to benefit the many; whilst this is not always the case, it is generally accepted as the role they “should play”. This shows that utilitarianism can inform, and has informed, actual politics, and understandably so, as it follows an ideal for political rulers that most of us likely share. However, utilitarianism has its critiques and as a result split into two forms: ‘act utilitarianism’, on which in any situation you should choose the action that produces the greatest good for the greatest number; and ‘rule utilitarianism’, on which we ought to live our lives by rules that are likely to lead to the greatest good for the greatest number. Act utilitarianism came under critique for its principle of having to act for the greater good even if it means killing one to save a hundred; hence the idea of rule utilitarianism, where the rules we live by, rather than the individual actions we face, are the foundation of our utility.
This again is where we see how moral discussion informs actual politics. We could argue that we live, especially in the UK, under a rule-utilitarian system, as the laws and policies that pass are generally considered to be for the continued benefit of those within it; for instance austerity, whether perceived as good or bad, had the intention of providing a long-term continued benefit for the majority despite the negative consequences it would have for the “few”.

The most convincing example of how, and to what extent, moral discussion can inform actual politics comes from the 17th-century English philosopher Thomas Hobbes and the moral discussion of contractarianism (Hampton, 1988). In one of the most famous and well-agreed-upon moral discussions even by today’s standards, Hobbes held that morals are not natural or even primal; instead, wherever there are groups of free, self-interested and rational individuals living together, morality will emerge as a consequence, because those people come to the realisation that there are far greater benefits in cooperating. Social contract theory, or contractarianism, essentially comes down to the principle of trading in a portion of our freedom to reap the benefits of cooperative living. These are explicit contracts: something we agree to with a group of individuals, which we ourselves accept and understand. There is, however, another form of contract: implicit contracts, which we have never truly agreed to but find ourselves beholden to nonetheless. For instance, immigration is an explicit contract: people who come into the UK, or any nation, agree that they will follow the laws and rules in place by that country’s standards. However, nearly all the individuals born in that nation cannot say the same; being born into a system, you are contractually obligated to follow its rules despite never formally agreeing to do so. Whilst this at first seems unfair, contractarians explain that, despite not explicitly agreeing, you have reaped and still reap the benefits of the system, often without knowing it: roads have laws to ensure they are safe, water is purified, and public services are free (specifically in the UK and Canada) because you, or your family, pay taxes, which means you have taken from the common pot but also supplied it at the same time. We see the idea of contractarianism throughout politics, and it has been one of the most profound moral philosophies in its impact on politics itself. In the United Kingdom, the United States, Canada and most of Europe, we all live in a social contract. We are all born or move into a contract, whether explicit or implicit, and as a result we all surrender a level of freedom to ensure we have protection from the government, the police and so on. When we pay tax, we are provided with benefits such as improvements to the local area, infrastructure, education, health services and many other things; and whilst some argue that they do not receive as much benefit as others, the principle remains that by helping others within your contract, they in turn are given the opportunity to help others, as a consequence of your maintained support of the contract. This is a clear example of the true extent of moral philosophy’s impact on actual politics: without the idea of the social contract, who knows what state the modern world would be in today. And whilst it seems perfect, it does of course have its flaws. Defection is an issue: someone prioritising their own interests instead of cooperating with the contract itself. This is quite common in social contracts, for instance people breaking the speed limit because they refuse to wait, or people using the hard shoulder to overtake knowing it is illegal; this is defection in action.
Defection is most often seen in situations where we do not know the individuals with whom we are breaking the contract; it is much easier to keep contracts made with people we know. In fact, modern society is built upon a foundation of trust: trust that an individual will keep their word. This is why we do not often see someone who freely makes an agreement break it; doing so provokes moral outrage, since they willingly made a contract that they themselves broke, and it leaves them cast in a negative light in society, limiting their future potential. Contract theory has been key to maintaining public order without explicitly forcing anyone to do anything, and in that sense it is remarkable. It is also fundamental to the arguments against slavery, since only a free individual can truly make a contract: the contractors must be better off within the system than they would be without it; essentially, the system overall must enhance your life more than if you were alone. Unlike Kantianism and utilitarianism, contractarianism has no morality until it is contractually agreed upon, and therefore morality can change as we change; hence, in actual politics, we can change laws and policies. We change the contract because we all agree that it needs changing or updating: morality is not set in stone but a determination that we all decide upon. Therefore, we choose the responsibilities we take upon ourselves via the contracts we choose, and we are morally obligated to see them through, because we took it upon ourselves to accept the contract at hand.

We then move on to whether politics should be influenced by moral discussions, and this in turn becomes a complex, and ironically moral, decision the individual must make for themselves. Personally, I believe that it is a give-and-take system: we can choose which parts of moral discussion we agree upon and take them, leaving out the parts that we deem non-beneficial. Ultimately, we should be able to have moral discussion inform actual politics, but not in every instance, and I will show why.

In relation to Kantianism and whether it should be used in actual politics, the answer is not entirely clear; some aspects of course should be, as highlighted in the prior explanation of the concept. When people swear testimony or make statements to officials such as the police, individuals should not lie, as doing so corrupts the system, and if caught they face punishment. However, this is not always so clear-cut. Kant’s moral discussion can be brought down to the intent and the outcome of an action: a lie often has the intent of deception and is therefore immoral by nature. But this is not always the case, and it is something Kant overlooks: even though an action is immoral, performing it can still serve a moral cause, such as making someone feel positive about themselves. Kant would say call it as it is: if someone is fat, tell them, and through that honesty encourage them to work hard and better themselves. Others, however, may see this as a cause to lie: if the person is fragile of mind, they may be driven to depression and even worse consequences by the answer given, justifying the need to lie in some cases, or so it could be argued. When observing the work of Kant, it is also important to mention John Rawls, who shared a similar belief about morality. Both Rawls and Kant concluded that objective moral truths are not ‘fixed by a moral order that is prior to and antecedent of our conception of the person and the social role of morality’, but are ‘constructions of reason’ (Rawls, 1980). To both men, morality was not something given by divine intervention or gifted by the stars, but something we are all capable of rationalising and understanding without great intellect. Both Rawls and Kant were key to modern democracies, arguably none more so than Rawls in his impact on the United States, and therefore it is hard, nay impossible, to argue that such great thinkers should have no influence on modern politics in action. The theorising of these men provided the ability for millions to live in the modern nations we see before us. The extent of that influence is, of course, up to the beholder of the information: a government may use the knowledge provided by Kant and Rawls, or it may not. But as stated, we see the numerous impacts both writers have had on most Western democracies, including America.

Utilitarianism faces the same questions that faced Kantianism: whilst we know it does impact politics, to what extent should it? This could attract criticism based on outcomes, but, as with Kantianism, it would ultimately be down to those in power whether the information provided is used in the end. Judging the content alone, I believe that utilitarianism shows us something very important about ourselves as humans, which essentially comes down to: what are we willing to tolerate to have a perfect society? Reading ‘The Ones Who Walk Away from Omelas’ by Ursula K. Le Guin places you in a moral conundrum, and the path you follow tells you a lot about yourself. Utilitarianism, even if you do not agree with it, provides a great deal of insight into the morality of man and therefore deserves, in my opinion, at least to be considered when influencing politics. The idea of the many over the few has appeared numerous times in the past and even in the present; most noticeably, the ideas enforced by Lenin and later Stalin often carried the image that the deaths of the few were for the betterment of the many. And whilst the image of utilitarianism is often overshadowed by the idea that sacrificing innocents for the betterment of the many is legitimate, this is not the case for all thinkers of the ideology; Mill argued in his two most famous works, ‘On Liberty’ and ‘Utilitarianism’, that the first principle of utility is the protection of the rights of all (Mill, 1974) (Mill and Plamenatz, 1949). Especially when looking into the ideas behind rule utilitarianism, we begin to see an ideology focused on the happiness of the many without the need to harm innocents or abuse power over rights.

Finally, we come to the creator of the ‘Leviathan’ himself, Thomas Hobbes, and whether the ideas of contractarianism should influence modern politics. Like Kantianism and utilitarianism before it, the question is a matter of opinion, and it would therefore only be deemed correct or wrong by those in power. However, looking at the arguments provided by Hobbes, and equally by Locke, despite their differences on what humanity looked like before society, both agree that a social contract of some description provided for the betterment of humanity and the foundation of a functioning and successful society (Hampton, 1988) (Locke et al., 1948). Rawls would critique them both, stating that the idea is ‘political and not metaphysical’; however, since this is political science, it applies just as well. It is well known, and was shown above, how impactful contractarianism has been on the political landscape: we ourselves live in a contract as we live and breathe, paying into a system that provides benefits as a reward for the continued relinquishment of a certain amount of freedom. Unlike with the other two theories, however, I would personally emphasise the importance contractarianism has had on politics and morality, and therefore conclude that it should influence actual politics to the greatest of extents, as it establishes the fundamental trust we as citizens place in one another to follow and obey the laws, and not to become defectors, which would in turn have consequences for the social standing on which we have focused for so long in history.

To conclude, the extent to which moral discussion has an impact on actual politics is rather large, whether it be the ever-evolving morality of Hobbes or the unwavering moral rigidity shown by Kant. Each thinker has in some way shaped politics, for good or for bad. Regarding whether it should be allowed to: why not? It is a very basic answer, but they are only men, providing explanations for something they deemed needed. Bill Clinton deemed Rawls the greatest political thinker of the 20th century and took on numerous ideas from the man himself. Politics is an evolution of the people and the structures that bind it; education is no threat to that in the end.

Can Armed Conflict be Considered a Factor Leading to Economic, Moral and Social Progress?

The history of mankind has revolved around conflict, especially since the first societies sprang up. The innumerable conflicts that have occurred throughout history are analyzed and transmitted through generations, which has led to the constant advancement of society. This advancement is due to humans being able to recall and recognize our ancestors’ failures, which leads us to learn from them and inevitably change as a society. Despite this learning process, human beings have continued to commit faults, as it is inevitable for new sources of conflict to arise. Even though armed conflict is inevitable, as stated before, and has paved the way to the advancement of society, does that make it positive, or even necessary? Do war and revolution benefit society in any way?

A broad variety of ideas revolve around war, from supporting violent revolutions to supporting no violence at all. Armed conflict is an event that, with little to no doubt, has brought change and, almost always, some kind of development. That change, however, is not always positive, and it is definitely not enjoyed by all members of society. Revolution has been considered a milestone in the making of modern society, not to mention that it has brought huge changes in people’s views on different aspects of life. Oppressed people benefit greatly in the case of revolution, as a fight is carried out for their voices to be heard and for their living conditions to be taken into account by their countries. However, armed conflict, not being the only kind of event to trigger change and development, is completely unnecessary and absolutely devastating, especially in the case of war, where, unlike in revolution, people are practically forced to go to battle.

People give up their lives for a conflict that they did not cause and are mostly not familiar with. Thousands, and sometimes millions, of people are killed, and kill other people who had as little choice as they did about joining the war. To give some context: Russia had always been an underdeveloped country compared to those of western and central Europe. The major problem was that Russia was one of the least industrially developed countries on the continent. For this reason, Russian society was going through a hard time economically, since the land was not suitable for cultivation. While the working class and the peasants were on the verge of dying of hunger, the Russian aristocracy, and especially the royal family, lived surrounded by riches and luxury. This situation caused a revolution to break out in 1905, which brought the biggest changes in the state: it caused the tsar to create a duma, or parliament. Nonetheless, that duma had very limited power, and it pleased neither the tsar nor the revolutionaries. Not only did the situation not change at all, but Russia then got involved in World War I against Germany. The tsar, Nicholas II, decided to lead the army himself, which left his family alone in charge; his wife Alexandra was highly influenced by Rasputin. The Russian army had not been trained properly and was not as technically developed as the western ones, which led to Russia suffering an excessive number of casualties. The situation in the country was badly affected under these conditions.

On February 23rd (by the Julian calendar), women went out into the streets to celebrate International Women’s Day. The next day, soldiers and workers joined the women in the protests. Troops were sent to put down the disorder; however, they joined the protesters instead, and the government lost control of the situation. On the 3rd of March, Nicholas II abdicated the throne. This raised Russian revolutionaries’ hopes for a new Russia. The members of the duma formed a provisional government, which was supposed to hold power until a constitution was drawn up. However, it had to share power with the Petrograd Soviet, a council elected by workers and soldiers, which was dominated by the Socialist-Revolutionary and Marxist parties. The Soviet was far more radical than the provisional government but supported the idea of continuing the war. The Bolshevik party was led by Lenin, and its slogan was simple: Bread, Peace and Land! The situation in Russia was worsening, which made the Bolshevik party increasingly appealing to the people. In July 1917, troops moved against protesters in the streets and arrested some Bolshevik leaders. Kornilov, the army commander-in-chief, sent his men to ‘restore the peace’ in Petrograd.

The Bolsheviks succeeded in protecting the city, and the troops were forced either to switch sides or to retire. The Bolshevik party gained the majority in the Petrograd Soviet, and in October 1917 they stormed the government in the Winter Palace. From November on, a civil war raged in Russia, fought between two big groups: the Bolsheviks and the Whites, who wanted the tsar back and had international support. One of the most iconic events of the revolution happened in this war. The Bolsheviks were holding the tsar and his family in Yekaterinburg; in July of 1918, with the Whites approaching, the Bolsheviks executed the tsar and his family. In 1921 the war was declared finished, with the Bolsheviks the victorious side, and in 1922 the Soviet Union was created under Lenin’s leadership. There is a tendency to classify war and revolution as two subjects to be studied together, as two highly similar or, in some cases, identical concepts. There is, nonetheless, a crucial difference between the two. The vast majority of wars have as their objective either the conquest of foreign territory or the demonstration of greater military power than a rival state’s. States go to war when the head of state finds it appropriate; that is, a person, or a very limited group of people who usually represent a single ideology, decides for the civilians.

In a revolution, change is necessary, and the civilians within a state make the choice to drive that change. Even when it is the head or the government of a state that decides for the country to go to war, they are not the ones who actively participate in it; they are not risking their own lives for their nation to be victorious. In a revolution, however, the revolutionaries not only know what they are fighting for but identify with it. The Russian Revolution offers a perfect example: Russia joined the First World War against Germany, but the Russian people were not happy with that decision, and they showed it by standing up for themselves. Armed conflict is a very ambiguous type of event to classify as good or bad. There are many arguments in favor of armed conflict, as well as counterarguments. In both cases, there is no questioning that armed conflict creates change, regardless of whether it is positive or negative. On the one hand, armed conflict can be seen as a positive matter. As already mentioned, conflict brings change, and with this change, development is often reached, whether moral, social or economic. Besides, relations between allied states usually improve, because in the case of war they have fought one or several mutual enemies. Enemy states' relationships are more ambiguous, but there are times when, in the case of war, the problems between them are solved. In the case of revolution, if it is successful, the core problem of the state in which it happens is solved, which leads to an improvement in the social situation.

Nevertheless, there are as many or more bad consequences of armed conflict as there are positive ones. Firstly, armed conflict does not always bring improvement, and definitely not to everybody. In addition, the possible improvement armed conflict brings can be challenged with a frequently asked question: does the end justify the means? In this case, I believe it does not. Improvement can be triggered by a large number of events, and by no means does such an event have to be violent. Development is not always worth the lives lost in armed conflicts, and sometimes the conflict is not what causes the development. Today's Russia was not directly shaped by the revolutionaries' ideas, as the system they created was not stable and durable enough to last a century. As repeatedly stated, there is no doubt that armed conflict brings change, and although, as mentioned before, it is not the only event capable of bringing positive change, armed conflict can be considered to cause development in economic, moral and social contexts. In economic terms, revolution can lead to a broad variety of situations, such as a radical change of the economic system, as happened in the Russian Revolution, in which communism was imposed. Nevertheless, revolution has not been proved to bring lasting change to the economic system: the only complete change of economic system ever achieved was triggered by the Russian Revolution, and after a while the country went back to capitalism. Revolutions such as the Russian one could be an important factor for economic development in a country, since the citizens' needs are taken into further consideration and thus their well-being improves.

In the economy, especially in capitalist systems, the well-being of the worker is crucial for production to be high, as the happier someone is, the more productive they become. This makes workers a very valuable element in the economy. Leaving the positive results of revolution aside, armed conflict is extremely costly. Beyond the economic perspective, armed conflict also causes high levels of damage to a state, or even a continent when the conflict encompasses larger territories. The situation changes when talking about war, as the costs are usually higher. In revolutions, the revolutionaries usually cause destruction in every way possible, which imposes costs on the government, on top of the lives of those who die in the streets. States, in most cases, send police forces to deal with the revolutionaries, which, one way or another, is another cost to be considered. Russia had it harder, as the troops sent to stop the Bolsheviks joined the revolutionaries in several cases, making the policing cost even bigger and the economic situation even more unbearable.

In Russia, the economy was not just a long-term positive change that resulted from the revolution; it was the cause of that revolution. In this case, the high cost of war is more than obvious. At the beginning of the 20th century, Russia was a highly underdeveloped state that had not undergone the previous century's industrial revolution and was far behind the West European states, with most of the population still extremely rural. The consequence was misery for the lower-class workers, while the higher classes enjoyed the luxury and richness produced by those agonizing farm workers and the workers of the newly created factories. This economic situation worsened when Russia entered World War I, which meant that a great deal of resources and money was destined for the war. Military costs, armament, the vehicles used to move from one place to another… all were extremely expensive, causing the lower classes' situation to worsen.

Therefore, even if in the short term it can be seen as a negative incident, armed conflict has been not only positive but sometimes necessary for a state's economic system in the long term. As war materials become increasingly sophisticated, and thus not only more expensive but more destructive, direct conflict between developed states, especially those with nuclear weapons, is not likely to happen any time soon. The conflicts of today and of the future are likely to take the form of smaller proxy conflicts between the developed states fought in underdeveloped countries, trade wars such as the one happening between China and the USA, and threats, just as in the Cold War. As for the moral aspect, Karl Marx was a heavy inspiration for the Russian revolutionaries. As explained in his works, he believed society was controlled by the economy and divided into two groups: the oppressed and the oppressors. The two groups have existed throughout history, in all economic systems (e.g., in feudal systems the oppressors were the landowners and the oppressed were the serfs). He believed that the group that held the means of production constituted the oppressing group. For him, that was history: the class struggle. Before explaining history according to Marx, there is a need to explain how he saw society. He divided it into two levels: the structure, made up of economic relations (between the owners of the means of production and their workers, between the workers and the means of production, etc.); and the superstructure, which consisted of a society's laws, its moral values and its culture. The oppressors, as they controlled both the structure and the superstructure, could control the mind of society, since they controlled its moral values, its laws, its educational system…

In this way, the oppressors stayed in power, as they raised their children with a mindset destined to defend nothing but their own interests. Marx lived in an industrialized society, 1800s England, which made his thesis revolve mostly around the new capitalist society, where the capitalist or businessman was the oppressor and the proletariat was the oppressed group. History's process, Marx explained, was always the same. Some members of the oppressed group would come to the realization that they were being controlled by the oppressing class and would make a revolution that would turn them into the oppressing group. Some might think it would go on forever, with the roles of the groups changing back and forth. However, Marx believed, this long process had an ending. As mentioned before, Marx lived in a capitalist society, which is why he based most of his theory on his own situation. He believed the world's proletariat would come to an agreement and stand up for themselves all at once, creating the biggest revolution ever to happen. This would not switch the groups' roles, but would completely erase both groups and create a society in which everyone was equal, regardless of their social status. That would be the point at which communism was achieved. Understandably, Marxist ideas had a huge impact on the Russian revolutionaries. However, the communist system built up after the revolution was not effective enough to endure, and it eventually collapsed. Marx would argue that the revolution did not happen the way it should have.

The revolution that would put an end both to capitalist society and to history as Marx described it needed to be made by all the workers worldwide; for it to happen in a single state would cause nothing but the roles to switch. This is exactly what happened in the Soviet Union, in which Lenin established an authoritarian state and later Stalin became the much praised and at the same time despised leader of the communist dictatorship. Nevertheless, even if the revolution turned out far too bloody to be justified, such incidents, and especially the debates they create, can be a huge help for people trying to work out what is morally acceptable and what is not in situations such as the Russian one. Moreover, the consequences that such a corrupt and unjust political system is likely to bring are also very visible, which is useful for political philosophers seeking to identify the flaws of such a system and to eliminate its negative aspects. Revolutions have been considered milestones in modern world history, as they are a call from the people for their states to listen to their voice. They are the events in history that are not exclusively about states' actions but about citizens'. For a nation to come together to fight a common enemy is something very unique in modern history: a common goal is agreed upon, and even if this goal is not achieved, it is very impressive to see such a big group of people fighting for it. The Russian proletariat started gathering and organizing with the idea of achieving a better quality of life.

More and more people started relating to those workers' wishes and joining the movement, until a revolution as big as the Russian Revolution happened, which ended with the creation of a completely different political and social system. Armed conflict, and especially revolution, can be seen in many cases as a trigger for positive change in society, regardless of the devastating economic, demographic and social consequences it may bring. In the case of war, the development it might bring to society is by no means comparable to the destruction and misery it brings in all cases. Revolution, on the other hand, apart from almost always causing a lower level of devastation than war, consists of the people fighting for their wellbeing and for their nation's situation to improve. War only happens whenever two states have an issue that, most of the time, concerns no one but the heads or governments of those states.

However, it is the people who have to fight for the interests of their governors. Neither war nor revolution is a good thing, because on balance the consequences are more negative than positive. Revolution, being a fight of the people, is nonetheless more justifiable: since the state does not take part with all its armament, the damage is far smaller. War always causes a lot more damage because war materials are far more sophisticated, making them more expensive and more capable of bringing destruction. To sum up, armed conflict is indeed a factor leading to improvement, but it is overly devastating, especially considering that conflicts do not necessarily have to be violent in order to have an impact on society.

World War II as a War of Social, Economic and Moral Progress

The Second World War was a global armed conflict that started on the 1st of September 1939 with the invasion of Poland by German troops. On September 3rd, Great Britain and France declared war on Germany. After this declaration, two military alliances emerged. The Allied powers were formed by France, Great Britain, the United States and the Soviet Union; on the other hand, the Axis powers were composed of Germany, Italy and Japan. These countries are called belligerents because they were directly involved in the conflict, but almost every country in the world was involved in the war in some way. Around 100 million people from many different countries took part in the Second World War. The conflict is classified as the deadliest ever, with around 70 million deaths, including the Holocaust and the civilians who died. The war in Europe ended on May 8th, 1945, with the invasion of Germany by the Allied powers and the surrender of the Nazis, but the war continued until September 2nd, 1945, following the bombings of Japan by the United States military. These bombings hit the cities of Hiroshima and Nagasaki and consisted of two atomic bombs, the first ever used, showing the cruelty that humans are capable of and starting the era of nuclear weapons. After the war, a series of trials and assemblies took place to discuss the future of the world and to judge the Axis powers. The Nuremberg trials were conducted under international law. At these trials, many people, most of them Nazis or accomplices who had helped them gain power, were tried for their war crimes. The sentences were severe, including the death penalty for twelve of them. These trials and assemblies tried to rebalance and improve the world into what we know today. Throughout the decades, wars and revolutions have been globally known as conflicts full of negative consequences. However, through the Second World War, Western countries experienced a wide range of developments, especially in the social, moral and economic aspects, which can clearly be seen nowadays. So wars and revolutions of course have negative effects on everyone; however, they are a big lesson for all of us.

Wars have been around since the beginning of humanity. They are unavoidable because of humanity's natural desire for power and dominance. Throughout the years, wars have become more destructive because of the technological advances available nowadays. So, why don't we try to understand some concepts? First, in order to take a firm position, we should know what is good and what is bad. Society says that the difference between good and bad depends on the individual who is being asked: every person has a different opinion or perception of an action depending on whether they like or dislike it. Secondly, it is important to know what a war is and the leading factors that make humans start one. If we look in the dictionary for the word “war”, we find the following definition: “a state of armed conflict between different countries or different groups within a country”. These armed conflicts have different leading factors, such as economics, territorial gain, religion, revolutions, and so on. As I stated before, the Second World War started with the invasion of Poland by German troops. But was that the only factor leading to war? Germany had had conflicts with Poland for decades over territorial and political issues. Since the signing of the Treaty of Versailles, Germany had been resentful toward the other countries, believing they were overreacting and finding the sanctions far too harsh.

Germany found itself in an economic crisis after all the expenses of the war and could not pay the sanctions it had been punished with. German citizens lived through hard years of deep economic crisis while being the “laughingstock” of Europe because of their loss in the war. In 1932, Germany held elections, and Adolf Hitler won more than one third of the total votes. Germany saw in him a savior of the nation. In one of his first public speeches as chancellor, talking about the difficult times all German citizens were going through, he used the phrase: “Obstacles do not exist to be surrendered to, but only to be broken”. With these words the new leader addressed the whole German nation. In his speeches, Hitler set out his ideas and began his Nazi propaganda. With it, his popularity increased and he started to gain the trust of the Germans.

After World War I, in order to protect the world from another disaster like it, the “League of Nations” was created. This international organization was there to provide a forum for resolving international disputes and to maintain peace. But its failure, together with the pain of the German nation, became one of the reasons why Hitler was ready to start a new conflict. After World War II, many assemblies took place. The victors did not want to repeat the mistakes committed after the First World War in 1919, so they tried to solve these problems through different meetings. The Potsdam Conference took place from July 17th, 1945 until August 2nd of the same year. It was an assembly of the “Big Three”: Churchill, Truman and Stalin. The goals of the conference included the establishment of the postwar order, peace treaty issues, and countering the effects of the war. Earlier, on the 4th of February 1945, the Yalta Conference, also known as the Crimea Conference, had taken place. It lasted a week, and the leaders discussed the future of Germany and Europe and the organization of the postwar “New Germany”.

New international organizations were born when the war ended. After the failure of the League of Nations, the “United Nations” was created in 1945. This organization encourages countries to cooperate with one another, helps to prevent conflicts and works to resolve them. Not only international organizations were born after the war, but also a number of declarations promoting human rights. After all the massacres committed during the war, especially by the Nazis, the Universal Declaration of Human Rights was created on December 10th, 1948. This declaration is one of the main documents making up the International Bill of Human Rights. Eleanor Roosevelt was one of the main activists who promoted these documents. “Disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind,” she said in her speech before the Assembly of the United Nations. In Europe, a similar declaration was created, known as the European Convention on Human Rights. In March 1951, the United Kingdom became the first country to ratify it.

This war brought a lot of destruction and hate across Europe. Cities were devastated, countries were buried in huge debts, and millions of people died. However, there were also advancements: during this hard time, countries improved their technology and invested large amounts of money and resources to enhance their power. Wernher von Braun, a German engineer, developed the V-2 missile during wartime for use against Germany's enemies. After the war ended, he moved to the United States, where he further developed this technology and ended up creating the Saturn rockets, one of which was used to launch humans to the Moon for the first time. Many actions were taken to make Europe “great again”.

During wartime, unemployment decreases. This is due to the need for people in order to win the armed conflict: countries need many soldiers, engineers, pilots and nurses, as well as essential resources and tools. As I said before, war leads to the development of many aspects of society, science, economy, medicine…

During the Nazi bombing of London, the campaign known as the “Blitz”, a great number of people were badly wounded or killed. The Royal Air Force hired a New Zealand doctor called Archibald McIndoe, who greatly advanced plastic surgery during those years, leading to what we know today; he also refined the treatment of burn wounds. During World War II, nuclear technology also developed rapidly. At the beginning, this science was developed not for helpful uses but to create nuclear weapons. Atomic bombs have only been used twice in history, in the bombings of Japan by the United States. Yet nuclear power has important uses for us to this day: it is used to produce electricity, in hospitals, in agriculture, in our homes, and in numerous other ways. Even though this power was created for other purposes, it has helped humanity a lot.

World War II led to a great deal of humanitarian help among the population of the same country. Not only that, it showed the power of cooperation between different nations. In Locke's theory we can see how he says that “every human is capable of being good-natured with a push in the right direction”. This was demonstrated during the war. For example, even though people were going through rough times, they gave everything they had to supply their troops. People sacrificed so the military had access to what it needed, and many enrolled voluntarily in the army to fight and protect their country. Wartime is an opportunity to give back everything that other people have given for us. This is a clear example of cooperation and of how kind people can be to each other when we all move in the same direction. Due to all these things, patriotism grew in society. The main country to experience this phenomenon was the United States, where it is said to have started after the attack on Pearl Harbor. In order to promote this ideology, the figure of “Uncle Sam” was created as a personification of the country. His image was used to encourage men to enlist in the military and to promote unity, so that people would support the troops from home during wartime.

Wars are costly, and many countries are unable to refill their coffers after all the money expended. That happened to France during the 1780s, leading to a revolution in 1789. In order to avoid this situation, countries have different methods of financing their wars; the fastest and worst is printing money. It is the least effective because, as a result, the country will most likely experience inflation in the near future. Governments of countries involved in war also increase taxes to get extra money to pay their armies. The most common practice during the war, however, was to act as a debtor country. In the late 1940s, European countries were submerged in huge debt. George Marshall, the United States Secretary of State, saw this as an opportunity and took action by lending money to these countries so they could recover from the devastation. This aid is known as the Marshall Plan, or the European Recovery Program. The main purpose of this plan was to help Europe and to reduce communist influence. It was very beneficial to all the countries involved, including the United States. The Europeans reconstructed cities and industries, the trade barriers between them were removed… and the United States improved its relationship with Europe. These relations have kept improving since the start of the plan, and they now include cooperation in trade, military defense and shared values.

The Soviet Union developed a similar plan, the Molotov Plan, but only the Eastern European countries that were politically and economically allied with the Soviets could take part in it. This plan was not as successful as the first one: not a single country that was part of it prospered under communism, whereas the countries helped by the Marshall Plan developed in all aspects and are important and powerful nations today.

There are many bad consequences of armed conflict, ranging from the loss of human lives to the destruction of cities and the spread of disease. These consequences are both short and long term; the effects of World War II are still present nowadays, for example in the way the economy works today. Many innocent people across Europe and around the globe suffered during the six years the war lasted. War survivors are left with physical and psychological damage that they will carry with them for the rest of their lives; due to the bad conditions, soldiers and civilians were exposed to many diseases and inhumane environments. Tuberculosis spread all over Europe, causing an epidemic affecting 11 million people; in all, some 70 million people died, making this the deadliest war ever. Society is affected in many ways during wartime. During World War II, Jews, Roma, homosexuals, communists and anyone considered a threat were persecuted and killed. Social services collapsed, schools closed, hospitals were packed with people injured in the war, and families were separated. The leaders of the nations during that time made many consequential decisions that caused more people to die and exposed their nations to danger. “War leads to war” is a fact that has been demonstrated on many occasions in history: World War I led to World War II. After 1945, countries' foreign policies changed due to the postwar order. In the United States, policy changed greatly due to the fear of communism, eventually leading to the Cold War, which lasted from 1947 to 1991, and to the Korean War, fought to stop the spread of communism. War leads to a constant state of fear and paranoia among all people, not just those involved, but everyone else.

Even though some countries, like the United States, take advantage of wars to develop their economies, the harmful economic effects weigh more heavily on the scale. Countries involved in the war saw their GDP fall, meaning that the goods and services produced in a year by a country lost value. The United States is the exception: instead of losing GDP, it nearly doubled it. The national debts of the countries involved also increased. Germany made its last payment of World War I reparations in October 2010; Great Britain made its last war-related payment in December 2006. Germany was forced to pay for all the reparations of the war to Great Britain and the rest of the Allied powers, as the Potsdam Conference decreed. In order to meet these payments, all these governments had to raise their taxes. In 1940 and 1942, two revenue acts were passed in the United States, raising citizens' taxes by almost 20% compared with the prewar level. Countries also suffered from inflation, forcing them to create plans against it.

Human rights were completely violated, not only in this war but in every revolution and war since the beginning of history. The rights that were not respected included the most essential one: “No one can take away your human rights”. This was seen in the Nazis' concentration camps. Other rights that were not respected were “A fair and free world” and “The right of public assembly”. The latter was violated mostly through the persecution of clandestine associations of groups opposed to the ideology of the Nazi regime. Three acts during World War II particularly shocked the world because of the cruelty shown in them and the disregard for human rights: the Holocaust, the Japanese internment and the Bataan Death March. As said before, the Holocaust caused 6 million deaths and ended with the loss of 62% of the Jewish population of Europe.

Another atrocity committed during that time was the Japanese internment in the United States. The government took this decision after the Pearl Harbor attack. People of Japanese ancestry were forced to sell their houses and businesses and were sent to internment camps located in California, Arizona, Wyoming, Colorado, Utah and Arkansas. Around 120,000 people were sent to these camps: persecuted, forced to leave their lives behind and caged. After these injustices, the camps were closed in 1947, and years later the government compensated each internee with 20,000 dollars. On April 9, 1942, Japanese troops transferred around 80,000 soldiers from the Philippines and the United States from Bataan to Camp O'Donnell. Around 18,000 of them died due to the harsh conditions they were exposed to and the fact that they were forced to walk almost 120 km in inhumane conditions. They also suffered physical abuse, including beatings and being left without water and food. After World War II ended, the Allied powers tried the Japanese for war crimes. To sum up, we can see how wars and revolutions throughout history have led to the violation of our human rights.

In conclusion, World War II brought a great deal of hate, destruction and crisis. Regardless, it also drove moral, social and economic development in the countries involved. Human rights were not respected during the war, but after it ended the participating countries created the “International Bill of Human Rights” and founded organizations such as the United Nations to protect those rights and hold accountable those who violate them. Even though countries were devastated and full of hate and anger during the war, they have learned from those scenarios. Many of the advancements made during that time still affect us today, in fields such as healthcare, education and science. Economically, we have learned to cooperate between nations in order to prevent crises, for example through the creation of close trade relations between states that were at odds during the war, such as the United States and Japan, and through the many methods created to restructure our economic systems. Throughout the decades, wars have been known only as conflicts full of negative consequences, yet no one ever highlights the progress we take out of them. The Second World War was a turning point for the world as we know it today. We have experienced a wide range of developments, especially in the social, moral and economic aspects. War is the ultimate lesson in morality: it not only hurts us but allows us to grow stronger out of the hurt.

Moral Panic in a 21st Century Context

First coined by criminologist Jock Young, moral panic can be defined simply as the creation of widespread concern within a society through the use of media and by people who hold a high status of power, such as politicians. According to Critcher (2008), there are three dimensions of moral panic: an identifiable process of definition, the marking of a moral boundary and the creation of discourse at a number of levels. However, there are obvious concerns regarding how moral panics affect modern-day society, especially with the rise of new media sources such as social media applications. One topic which has been central to societal concern is drug abuse, a focus since the 1970s. This is a topic which has been analysed by the likes of Cohen and Jewkes.

Deviancy amplification is the sociological phenomenon whereby, when deviance is present within society, two structures work together to create a news story and to spread moral panic. The amplifying cycle begins with the controlling culture, via a police report or a news story. This story is then exaggerated to the public to create awareness and to warn people to stay away from the issue causing moral panic. The exaggerated account is then passed on to the significant culture, such as social media and the tabloid press (Hall et al., 1978). The global issue of excessive drug abuse is key to understanding how online news sources create deviancy amplification and moral panic in order to terrify parents and scare them out of letting their children be independent, in case they make the wrong decisions and take something that could be fatal.

An example of this is a 2012 case in which a fifteen-year-old girl at an unsupervised house party consumed a number of pills and an unknown white powder, which led to strange behaviour and her death. Davies (2012) identified the dangers of taking drugs and going to unsupervised parties; the implied message is that if you allow your child to go somewhere alone, you will lose them to drugs. The main point to come from this type of moral panic is that if you take drugs, you will die, especially if you are a young person being forced to take them by an older peer who is supposed to be responsible. The media frames such situations, in which a person is alone and vulnerable, as the fault of parents: you must look after your children at all times or you will lose them to evil ‘drug pushers’, as shown by Cohen. The main focus of deviance amplification is choosing whether a story is relevant, newsworthy or interesting enough for the public to consume.

In Folk Devils and Moral Panics (Cohen, 1972), it was highlighted that the media play a role in portraying certain public behaviours as falling outside society's acceptable norms; the creation of the moral panic label means that a problem's significance has been exaggerated in comparison with more serious problems. This was inspired by labelling theory. Cohen's process model showed that emphasis falls on the moment problems occur and are given a name by the news media, on the way the media stereotype those ‘in the wrong’, and on experts being asked to give their opinions on the topic. The topic is then analysed and laws are implemented in an attempt to calm society and stop the panic. For example, the Drugs (Prevention of Misuse) Bill was presented as a solution to the trouble involving Mods, Rockers and drugs in Clacton in the 1960s. Cohen's model can be criticised as needing an update: due to new media sources, it has become “impossible to rely on the old models with their stages and cycles” (Cohen, 1995).

Cohen, in 2002, then identified seven clusters of social identity to which moral panics tend to belong. An example of this is ‘wrong drugs’: drugs used by the wrong people in the wrong places, where drug use is perceived as an interaction between evil drug pushers and defenceless users, the former forcing the latter from ‘soft’ to ‘hard’ drugs. This shows how moral panics tend to take ordinary issues and turn them into warnings about real dangers in society. The media's role within society is to maintain stability; however, the media are also responsible for change.

Jewkes created and then disproved five propositions regarding moral panic. She highlighted six issues: two familiar, two which widen the debate regarding moral panics, and a final two looking at the idea that moral panic has been stretched as far as it can be and at the way young people can become a main focus. These are flaws in moral panic analysis; however, the concept should not be rejected as invalid for analysing moral panic. Rather, it needs careful reconstruction to provide a ‘conceptual basis’ (Jewkes, 2004).

The lesson for present-day society is simple: moral panics should be conceptualised as forms of discourse. This analysis shows how ways of speaking about problems are created and constructed to displace other problematic behaviour within society (Mills, 2004). Discursive formations control who has the right to speak on certain issues and who has the right to control what happens while a moral panic occurs. What is common between modern issues is that they all pose risks to individuals in society (Lupton, 1999). In a 21st-century context, drug issues can be seen as a central part of the moral panics created by the media. Cohen's idea of moral panics as a condition that affects society as it reacts to new forms of behaviour will always be an important way to view how moral panic affects society; however, with the rise of social media, it can be said that Jewkes's ideas about moral panics are more relatable by today's standards. Drug abuse among young people remains a massive concern for society.

Bibliography

  1. Cohen, S. (1972) Folk Devils And Moral Panics. 1st ed. London: MacGibbon and Kee.
  2. Cohen, S. (1995) State Crimes of Previous Regimes: Knowledge, Accountability, and the Policing of the Past. Law & Social Inquiry. Vol.20(01), pp.7-50. Available: https://doi.org/10.1111/j.1747-4469.1995.tb00681.x [Accessed: 3 March 2019].
  3. Cohen, S. (2002) Folk Devils and Moral Panics. 3rd ed. London: Routledge.
  4. Critcher, C. (2008) Moral Panics And The Media. 1st ed. Maidenhead: Open University Press.
  5. Davies, C. (2012) Girl, 15, Who Died After Ecstasy Overdose Told Her Friends Not To Call Ambulance. Available: https://www.theguardian.com/uk/2012/aug/01/isobel-jones-reilly-inquest-ecstasy-party [Accessed: 2 March 2019].
  6. Hall, S., Critcher, C., Jefferson, T., Clarke, J. and Roberts, B. (1978) Policing The Crisis: Mugging, The State And Law And Order. 1st ed. Houndmills: Palgrave Macmillan.
  7. Jewkes, Y. (2004) Media and Crime. 1st ed. Thousand Oaks, CA: Sage.
  8. Lupton, D. (1999) Risk And Sociocultural Theory. 1st ed. Cambridge: Cambridge University Press.
  9. Mills, S. (2004) Discourse. 1st ed. London: Routledge.

Moral Panics And the Media

Media is an inseparable part of our lives, and the connection between media and crime has been the theme of much sociological research. Media can create moral panics and folk devils. As Cohen (2015, p.1) defined it, a moral panic occurs when “A condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests; its nature is presented in a stylized and stereotypical fashion by the mass media”. This report critically evaluates Cohen's (2015) Folk Devils and Moral Panics. First, it examines youth culture; then it examines deviance; finally, it examines moral enterprise.

According to Cohen (2015), youth culture is the main type of moral panic in Britain. He says that youth culture is mostly middle-class or student-based and involves deviant behaviour such as vandalism, hooliganism, drug taking, violence and organizing demonstrations. (The definition of deviance is covered in the next paragraph.) As an example of youth culture, Cohen (2015) presents the Rockers and Mods, popular subcultures of the 1960s. The author states that those groups became violent and deviant because they felt marginalized from mass culture by the media. On the contrary, Jewkes (2015) claims that the Rockers and Mods helped create the fashion of the 1960s and were not marginalized from mass culture but rather given an optimistic welcome. Jewkes (2015) states that, overall, the ‘spectacular’ subcultures which promoted moral panics in the past are not as evident nowadays. The author also argues that old-fashioned theory holds that young people need to follow specific requirements to be part of a subculture, while postmodernist critics maintain that nowadays self-creation occurs and spreads rapidly through the internet. Cohen (2015, p.2) states that the Mods and Rockers were not only a subculture but also a distinct, separate case, something totally new, although Jewkes (2015) says that in the current world, where young people have access to a wide range of subcultures and multimedia, they can grow smoothly from childhood to adulthood drawing on many different, coexisting subcultures. Jewkes (2015) also states that nowadays subcultures are not determined by age and are less visible than in the 1960s, and argues that current generations are less distinguishable from their parents' generation and show more morality and ethical codes in their culture, for example fighting for animal rights more than the youth cultures of their parents did. Jewkes (2015) further states that nowadays youth cultures are presented as an attitude and promoted by the media rather than labelled as deviant. Nonetheless, the arguments above show that the media might create moral panics.

Another important term in Cohen's book is deviance. Cohen's (2015, p.5) definition of the deviant is: “The deviant is one to whom the label has successfully been applied; deviant behaviour is behaviour that people so label”. In this case, deviance means problematic behaviour by a specific group of people. Cohen (2015) also states that the main promoters of deviance are the media, which create specific rules for specific groups. If any group does not follow the rules, then social groups label its members as deviant and treat them as something separate, as outsiders. Jewkes (2015) argues that the media may create speculation and overstatement of violence or further danger around an isolated incident, creating moral panic. Jewkes (2015) also states that deviance labelling is mostly overplayed by the media as the behaviour or lifestyle of those groups. Cohen (2015) argues that civil reactions in this case are rather ‘effects’, meaning they are merely a reaction to deviance, not the source of deviance. Also important are the types of deviation, which determine its nature. According to Cohen (2015), drawing on Lemert's notes, there are two types of deviation: primary and secondary. Primary deviation occurs when the behaviour may be inconvenient in a single case and has no impact at the level of ‘self-conception’, while secondary deviation appears when the deviant individual applies their deviance to a wider group, or creates further deviance and uses it as an attack on those who reacted to the original deviance, or as a tool to adapt and respond to the problems that deviance created (Cohen, 2015). To support his idea that the media create moral panics, Cohen (2015) notes that the media allocate a great deal of space to deviance, unethical behaviour, violence, and other shocking and scary news, which can be a tool to create panic. Nevertheless, the arguments above show that the media can create moral panics.

The last paragraph looks at moral enterprise, which Cohen (2015, p.3) defined as ‘the creation of a new fragment of the moral constitution of society’. Cohen (2015) states that moral enterprise is carried out by specific groups in society and by right-thinking persons such as newsmen, legislators and so on, through the media. To support this opinion, Cohen (2015, p.9) states that the media act as moral agents, guided by their vision of what is right and what should not take place, creating a kind of campaign against everything that goes beyond what they have constructed. In some cases they are aware of it, while in others it is not obvious. This shows that the media use their power to provoke specific reactions and generate panic. However, Critcher (2006) sees a problem with Cohen's (2015) arguments and states that Cohen has a ‘linear model of moral panics’. To support this point, Critcher (2006) argues that in Cohen's (2015) work the differences between the media, the moral entrepreneur and cultural control are hard to recognize. Cohen (2015) proclaims that moral barricades have been erected by media tools and that the media shape social problems. Critcher (2006, p.17) therefore poses the following questions, to make the reader think critically about Cohen's (2015) work: “Who are the significant moral entrepreneurs, whether groups or individuals?” and “Do they lead, follow or operate alongside the media?”. These two questions can lead one to think differently about the moral enterprise presented by Cohen and to look for evidence which has not been shown by the author. After all, the arguments listed above show that the media may have an impact on creating moral panics.

In conclusion, all three arguments state that the media have an incredible impact in creating our reality and can create moral panics. While today's media are different, offer a wide range of news and subcultures and may be less biased, they can still create a range of folk devils and unnecessary panics over single incidents through the way they present them. To sum up, the arguments listed above show that Cohen's work is old-fashioned and contains errors, but it may still be applicable to the current reality and image of the media.

Bibliography

  1. Carrabine, E., Cox, P., Fussey, P., Hobbs, D., South, N., Thiel, D. and Turton, D. (2014) Criminology: A Sociological Introduction. 3rd ed. New York: Routledge.
  2. Cohen, S. (2015) Folk Devils and Moral Panics. 3rd ed. New York: Routledge.
  3. Critcher, C. (2006) Moral Panics and the Media. Glasgow: Bell & Bain Ltd.
  4. Jewkes, Y. (2015) Media & Crime. 3rd ed. Croydon: CPI Group.

Human Moral and Values

The late 16th-century drama Hamlet by William Shakespeare stands in history as one of the author's greatest and most popular works. A driving factor in the drama's worldwide success is the play's use of human emotion, values and morals. Specifically, the play uses its protagonist, Hamlet, to convey and express what it means to live as a human. In connection with today's world, readers can feel a sense of connection to Hamlet. Throughout the drama, Hamlet faces a myriad of complications that he must overcome with his decisions; within these decisions, readers decipher whether Hamlet's courses of action are justifiable or not. Through Hamlet's thoughts and actions, readers of the drama ponder the morals which lie within human beings.

Although the dramas’ interpretation of a human being may come confusing and conflicting at times, readers are left to their imagination and fight the moral dilemma of right and wrong alongside Hamlet. Like readers, Hamlet has fears and doubts about the things happening within his life and faces an internal struggle within his character to do justice or leave things as they lie currently. Hamlet’s internal dilemma of avenging his father’s death bestows in the mind of readers the struggle between the authority of one’s inner thoughts and these thought’s influence on the decisions in life.

As the play goes on, Hamlet faces many different encounters, which bestow moral dilemmas upon him. These dilemmas force Hamlet to think about certain situations and how to act upon them, but Hamlet's thoughts often conflict with his actions and cause him to become scared to commit to his mission of avenging his father. One example of a moral dilemma Hamlet faces lies in his chance to kill his father's murderer, Claudius. In this scene, Hamlet sees Claudius praying and becomes conflicted with his morals and values, stating, “Now might I do it, now he is a-praying, And so he goes to heaven” (3.3.77-79). As seen, Hamlet receives the perfect opportunity to avenge his father once and for all. Yet he questions his actions and allows his thoughts to overcome his would-be actions.

Although readers may argue over Hamlet's hesitation to kill Claudius, it is arguable that his hesitation derives from his moral values. Take into consideration that Claudius is Hamlet's uncle, a person whom Hamlet has known all his life and has grown up with. It is debatable that Hamlet's hesitation comes from love for his uncle, which causes Hamlet to question himself and allows his thoughts to get the best of his actions. Shakespeare's ability to convey Hamlet's internal conflict shows that he is indeed just a human, a person who holds doubts, fears and internal conflicts.

Furthermore, throughout the drama, Hamlet questions his existence multiple times. He interrogates himself, debating whether it would be better to suffer his thoughts and burdens or to end his life, and with it all misery. In one of the drama's most famous soliloquies, Hamlet questions his existence, stating: “To be or not to be—that is the question: / Whether ’tis nobler in the mind to suffer / The slings and arrows of outrageous fortune, / Or to take arms against a sea of troubles / And, by opposing, end them. To die, to sleep— / No more—and by a sleep to say we end / The heartache and the thousand natural shocks” (3.1.64-70).

As seen, due to the extenuating circumstances of his life, Hamlet asks himself if suicide is really the answer. In his recent life, Hamlet has undergone many questionable and terrible circumstances, such as the death of his father, his mother remarrying his uncle, and his recent breakup. These circumstances place a heavy doubt within Hamlet's consciousness, causing him to question his existence as a human. Hamlet asks himself whether killing himself would be easier than living a life of burdens and doubts. In correlation to today's world, many people, like Hamlet, question the idea of their existence, sometimes opting for suicide rather than living with their burdens. Hamlet embodies what it means to live as a human: a person who rightfully holds fears and doubts.

The Elizabethan drama Hamlet by William Shakespeare goes down as one of the author's most popular and relatable works. Within the drama, Shakespeare addresses the notion of what it means to be a human, and he does so through the protagonist, Hamlet. Throughout the play, Hamlet is faced with many situations and experiences that question his moral values as well as his existence as a human. Take, for example, Hamlet's opportunity to kill Claudius. Although Hamlet was given the perfect opportunity, he, like all humans, knew that murdering his uncle was not the morally correct thing to do. This hesitation may be interpreted as Hamlet's inability to choose between his thoughts and his wanted actions. Another aspect that makes Hamlet human lies in his doubts about life. Like most humans, Hamlet has contemplated his life, thinking that death might be easier than living a life of burden. Hamlet's morals, as well as his questioning of life, make him human, and this helps readers relate to the play more, gaining a deeper connection and relationship with the fictional character.

Moral Ambiguity in Video Games

All video games are designed to give the player a choice. To a certain extent, this is an essential characteristic for facilitating a greater sense of immersion: the game may build a rich mental model of its environment by highlighting unique set pieces that invoke the imagery of its desired tone, or hint at subtle nuances that give the player a small glimpse of where a greater narrative might lead, rewarding their personal investment hours down the line. Creating an immersive video game requires a conscious, joint effort from the game developers to engage the player and keep them dedicated and motivated to return. In the role-playing genre, the enticing offer of crafting one's own unique experience within otherwise linear gameplay gives the player the opportunity of choice when creating their own character. The sense of personal involvement in choosing physical characteristics or bestowing preferred attributes leads the player to decide how to react to any given situation.

Whether it’s a simple fight or flight scenario between which other non-playable characters to align and who the player deems worthy of their malicious wrath or as complex in which role the player plays in determining the fate of an entire species depends on how the player chooses to interact or prioritize in order to progress towards the next objective. Thus, ignoring any notion of a fixed playthrough to only process the narrative and rendering any conflicting moral stimulant useless unless conveyed in a consequential way. Whilst the exposure of video games presents a chance to make seemly significant moral choices, the manner of its ambiguous presentation that promotes the freedom of a player’s choice does not impact the predetermined behavior to challenge the foundational moral beliefs of the player if not spurred by a meaningful interaction in the virtual environment to lead the player to reconsider their actions.

Role playing, prevalent in the majority of RPGs, is generally understood as an individual pretending or imagining temporarily that they are another person, in order to gain insight into another's thoughts, attitudes or intentions and to see something they have not experienced themselves. For instance, a player may design a custom character whose behavior and outward appearance appeal to their liking or mask their insecurities, a process that subtly pushes the player to craft what they consider a perfect representation of themselves. Identification occurs when, through role-play, players take on a media character's identity and adopt its ethics as they see fit, to enhance their immersion in the virtual environment in which the game is set. If the world is a harsh environment, such as a wasteland filled with irradiated monsters, the player will be forced into the moral conflict of whether to adapt or to abide by their moral convictions. The ethics of how a player actually plays depend on the type of game: in an open-world crime game, the player will likely resort to robbing and stealing rather than acting as a disciplinary force such as the police. This also reinforces the idea of video games as immersion, an escape from reality and thus from any consequences. Relevantly, Krcmar and Cingel state that “players experience more guilt when the violence in games is unjustified and when they are themselves more empathic players. Fourth, players also experience greater guilt when they do not perceive play as ‘just a game’” (Krcmar and Cingel, 2016). Thus, players are less likely to perceive or worry about moral or ethical constraints, treating the game like mere fantasy.

Defining morality is challenging because the lines between good and evil are so blurred that most role-playing games find it hard to incorporate them; it can come across as cynical to force players to follow a fixed narrative structure rather than letting them discover the story as they progress, through side-quests or their own exploration, which informally informs them of events yet to transpire. During gameplay, players are unknowingly making two parallel decisions. On one hand is the moral decision, fueled by the common clichés of western gaming: shoot, steal, or scoot out of there until you gain enough experience points or have the necessary gear to take on harder bosses. Unlike most RPGs that rely on enhanced graphics and minimalistic plots, Fallout: New Vegas (Obsidian Entertainment, 2010) has the player take control of a former mail courier who was shot and left for dead in the toxic wasteland of the Mojave desert of post-nuclear America, venturing out to find the kingpin and stop him before he unleashes another nuclear genocide. Along the way, the courier must face an array of mutilated ghouls and renegade factions threatening to wage war due to water shortages, as well as countless night raids by mutants. You must complete open-ended quests with varying optional objectives, which feed an implemented karma system that depends on which factions you side with and what methods you choose to execute orders: whether you kill only characters perceived as wicked, or everyone you stumble across, even the innocent. These in-game moral decisions ground the player in realism, showing that the player's actions have significant consequences that could lead to them being turned against and marked as the villain, at which point the dynamic shifts to harder odds of survival later in the game. Relevantly, Krcmar and Cingel note that earlier studies “did not examine moral reasoning … make an ethical decision when faced with a moral dilemma” and that “it is unclear how and why video game players make moral decisions” (Krcmar and Cingel, 2016). This relates to the subject of the in-game consequences of moral ambiguity, because of the need to side with at least a slight majority of factions to proceed further in the game. Inescapably and indispensably, the game builds you up as the hero, and when the player objects to this role, the question of moral ambiguity is asked: what lengths would you go to in order to prove that you are not as corrupt as the monsters that dwell in the wasteland?

Making choices within an existing narrative is considered a positive highlight when taking the role-playing genre's formulaic structure into account. It is reminiscent of other forms of storytelling, such as literature, film and theatre; however, the player's immersion as a living witness rather than a mere observer helps them find that their choices matter to the outcome they desire. Gameplay and narrative are often tied to the game's set rules, which the player of most role-playing games must abide by. Cheats or bugs may override the set rules, but the majority of the time, the player's freedom of choice is limited to dialogue trees or occasional actions rather than realistically driven choices that fundamentally alter the game in a genuine way. The mechanics play their part in determining choice, for the player is given freedom to adopt any playstyle they choose. Control-wise, role-playing games pride themselves on functionality and simplicity: just three buttons, a joystick, a d-pad and three arrow keys. It may seem unchallenging, yet the controls complicate themselves as you progress. The fact that you die quite a lot is sadistically and functionally tracked by the game, which learns from failures as fast as you do. Every success is your own, and every mistake is yours too. Your sense of pride may be forever changed, but your sense of accomplishment will humble you and fuel your ego, only for it to crash when you die again. This ties into morality because role-playing games want to test how far you will go before you feel tested. The solitary single-player aspect and the multiplayer aspect also affect the social conventions under which you might violate morality, for in multiplayer you are kept in check by another player rather than being immersed alone and left to realize on your own that you are subject to morality. As one better explanation puts it, “instead, players would make choices based on reasons related specifically to the game environment, to game play, and to progressing successfully through the game. For example, a player may shoot a character or even save a character in order to score points or regain health. It is not necessarily that the decisions themselves would be moral or immoral, but rather, the reasons for those decisions would be for the purposes of advancing through the game.” (Krcmar and Cingel, 2016). Again, this shows that most players are unlikely to make moral decisions against their real-world beliefs in video games unless a game convinces them otherwise and alters their convictions.

In conclusion, although video games expose players to seemingly significant moral choices, the ambiguous presentation of predetermined behavior and intention does not impact the player's fundamental moral beliefs unless it is conveyed in a meaningful way, spurred by an interaction in the virtual environment that leads the player to reconsider their actions. Ethics, the identification and role-playing aspects of immersion, and morality systems all play a part in the existing narrative, but they do not fully convince the player to challenge their predetermined morality unless provoked in a thoughtful way. Other factors, such as the desired outcome of a game, could arise to change players, but the margin for this is small. After all, the characters are representations of the player's own identity; thus, the truth was rigged from the start.