The Lewis and Clark Expedition

The Lewis and Clark Expedition of 1803-1806 was the first American overland expedition across the continent to the Pacific coast and back. Although the original purpose of the expedition was rather modest, namely to investigate the territory the US had acquired through the Louisiana Purchase from France, its results laid the foundation for future US expansion into the Wild West. Captain Meriwether Lewis and Lieutenant William Clark were chosen to lead the grand exploration. After thinking their task over carefully, they came up with a bold plan: ascend the Missouri River as far as possible, cross the Rocky Mountains, and then follow the Columbia River to the Pacific Ocean. In my opinion, it is important to note that the expedition team included Native Americans, because without them, Lewis and Clark would not have been able to pass safely through Indian settlements. Not all Indian tribes were friendly towards the Americans. Moreover, since the expedition had to note in detail the geographic, geological, and ethnic features of the traversed territories, the help of the Indians clearly provided significant support to the researchers.

What I also find fascinating is that, on the way, Sacagawea of the Shoshone tribe joined the expedition. An excellent guide and mediator in communication with the Indians, she served as a guarantor of the travelers' good intentions. Having joined the expedition with a small child, she stoically endured all the trials along with the rest of the group. I think this is a great example of Native women's bravery and strength that is worth mentioning. Another interesting point is made by Moulton (2018) in his book, where he states that Lewis and Clark were the first Americans to explore the Great Plains, which were very different from what they were accustomed to. Some of the places the travelers passed were so scarce in food that the expedition had to eat first their horses and then dogs, earning the nickname of "dog eaters" from the local Indians. This point is crucial, as Lewis and Clark's discoveries greatly influenced the geographical knowledge of the early nineteenth century. The expedition reached its goal, collecting truly unique material about the Indians and exploring the area in detail.

Reference

Moulton, G. E. (2018). The Lewis and Clark Expedition day by day. University of Nebraska Press.

Reaction Rates and Effects of Temperature, Concentration, and Surface Area

Introduction

When an acid reacts with a base, a neutralization reaction occurs in which a salt solution and, in the case of carbonates, a gas are formed. This experiment tests the effect of temperature, concentration, and surface area on the rate of reaction. Temperature is defined as the degree of hotness or coldness of a substance, measured in degrees Celsius (Flowers et al., 2019). Concentration is a measure of the amount of solute in a solution, expressed here in kilograms per liter (kg/L). In reference to chemical reactants, surface area describes the exposed area of a substance that can take part in a chemical reaction (Flowers et al., 2019). The temperature of the reactants, the concentration of a substance, and the surface area are directly proportional to the rate of reaction.

Collision theory is a mathematical model for predicting the speeds of chemical reactions, especially in gases. It assumes that in order for a reaction to occur, the interacting species must collide with each other (Flowers et al., 2019). With respect to temperature, an increase in temperature raises the kinetic energy of the particles, thereby increasing the rate of collisions. The higher the concentration, the greater the number of particles and, therefore, the higher the rate of collisions. According to collision theory, increasing the surface area increases the surfaces available for collisions, raising the rate of reaction (Flowers et al., 2019). In essence, the theory proposes that an increase in reacting particles, exposed surfaces, and kinetic energy translates to more collisions and consequently faster reactions.
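To make the temperature effect concrete, the short sketch below uses the Arrhenius equation, a standard quantitative companion to collision theory, to compare rate constants at two temperatures. The activation energy value is an arbitrary illustrative assumption rather than a value measured in this lab.

```python
import math

R = 8.314          # universal gas constant, J/(mol*K)
Ea = 50_000.0      # assumed activation energy in J/mol (illustrative only)

def rate_constant_ratio(t1_celsius, t2_celsius):
    """Ratio k2/k1 predicted by the Arrhenius equation k = A * exp(-Ea / (R*T))."""
    T1 = t1_celsius + 273.15
    T2 = t2_celsius + 273.15
    return math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))

# Heating the mixture from 25 C to 70 C (the temperature used later in this lab)
print(f"k(70 C) / k(25 C) = {rate_constant_ratio(25, 70):.1f}")
```

With this assumed activation energy, the predicted rate constant grows by roughly an order of magnitude, which is the qualitative behavior the hypothesis below relies on.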

Hypothesis

The reaction between calcium carbonate and dilute hydrochloric acid produces calcium chloride solution, carbon dioxide gas, and water, as shown in the equation below. At normal temperature, an increase in concentration and surface area will increase the rate of reaction (Flowers et al., 2019). Concentration will be determined by how much of the 350 g of CaCO3 is dissolved in water. Heating the solution is expected to increase the rate of the reaction given by the equation below.

CaCO3 + 2 HCl → CaCl2 + CO2 + H2O
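As a rough illustration of what the balanced equation implies, the sketch below estimates the maximum CO2 volume obtainable from the 50 g CaCO3 sample used in step 1 of the procedure, assuming complete reaction with excess acid and a typical molar gas volume at room conditions; it is a back-of-the-envelope check, not part of the original lab.

```python
# Theoretical CO2 yield from CaCO3 + 2 HCl -> CaCl2 + CO2 + H2O,
# assuming the 50 g CaCO3 sample from step 1 reacts completely with excess HCl.
M_CACO3 = 100.09          # molar mass of CaCO3, g/mol
MOLAR_GAS_VOLUME = 24.0   # L/mol at roughly 25 C and 1 atm

mass_caco3 = 50.0                      # g
moles_caco3 = mass_caco3 / M_CACO3     # mol of CaCO3
moles_co2 = moles_caco3                # 1:1 mole ratio of CaCO3 to CO2
volume_co2_ml = moles_co2 * MOLAR_GAS_VOLUME * 1000

print(f"Maximum CO2 volume: about {volume_co2_ml:.0f} ml")  # roughly 12000 ml
```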

Procedure

  1. Measure 50 g of CaCO3 into a jar containing 100 ml of water and stir thoroughly.
  2. Add HCl in a titration tube.
  3. Slowly titrate the solution while measuring the volume of gas produced every 5 minutes in a gas jar.
  4. Repeat step 1 and heat the solution.
  5. Titrate and measure the new volumes of gas.
  6. Now mix 100 g of CaCO3 as in step 1 above.
  7. Titrate and measure the volume of gas.
  8. Take 50 g of CaCO3 and crush it.
  9. Repeat steps 2 and 3.

Data Tables

Condition | Time (min) | Volume of gas (ml)
50 g of CaCO3 in 100 ml of water at normal temperature | 5 |
 | 10 |
 | 15 |
Solution heated to 70 degrees Celsius | 5 |
 | 10 |
 | 15 |
100 g of CaCO3 in 100 ml of water at normal temperature | 5 |
 | 10 |
 | 15 |
Crushed CaCO3 (50 g) | 5 |
 | 10 |
 | 15 |
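Once the blank cells in the table above have been filled in, a short script along the following lines could tabulate and plot gas volume against time for the four conditions; the zero volumes are placeholders to be replaced with the measured values.

```python
import matplotlib.pyplot as plt

# Placeholder readings (ml of gas); replace with the measured volumes from the table above.
time_points = [5, 10, 15]
runs = {
    "50 g CaCO3, normal temperature": [0, 0, 0],
    "Solution heated to 70 C": [0, 0, 0],
    "100 g CaCO3, normal temperature": [0, 0, 0],
    "Crushed CaCO3 (50 g)": [0, 0, 0],
}

# One line per condition shows how quickly gas accumulates in each run.
for label, volumes in runs.items():
    plt.plot(time_points, volumes, marker="o", label=label)

plt.xlabel("Time (min)")
plt.ylabel("Volume of gas (ml)")
plt.title("Rate of CO2 production under four conditions")
plt.legend()
plt.show()
```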

Relating Result to Scenarios

Crushed ice has a larger surface area than ice cubes, which increases the area exposed to collisions and reactions. Therefore, crushed ice will cool a soda faster than ice cubes, according to collision theory. In hot tea, sugar dissolves faster than in iced tea because collisions occur faster at higher temperatures. Lastly, a strongly concentrated powdered-milk solution is easier to taste because the increased number of particles raises the probability of collisions with taste receptors.

Conclusion

This lab was designed to test the effects of temperature, concentration, and surface area on the rate of reaction. The reactants used were CaCO3 and dilute hydrochloric acid. The lab was set up to generate four data sets: normal temperature and low concentration, higher temperature, higher concentration, and increased surface area. The results imply that an increase in temperature, surface area, and concentration increases the rate of reaction. This is consistent with collision theory, which proposes that an increase in the rate of collisions leads to faster reactions.

Reference

Flowers, P., Theopold, K., Langley, R., Neth, E. J., & Robinson, W. R. (2019). Chemistry (2nd ed.). Openstax.

Digestion of Foodstuffs as Process

Digestion of foodstuffs is a vital process for the organism's proper functioning and an irreplaceable part of metabolism. Metabolism, in turn, provides the bodies of living organisms with energy and valuable substances, which ensures activity and health. Digestion is a process designed to help the body absorb valuable substances, vitamins, minerals, proteins, fats, and carbohydrates by reforming their structure and making them simpler (Process of Digestion, n.d.). Therefore, the process of digestion is complex and involves many chemicals that the body produces, as well as a specially designed system of organs.

Digestion starts before the complex processes in the inner structures of the body, beginning in the oral cavity. The ingested food is chewed, the teeth break it into small pieces, and saliva hydrates it to continue the process of digestion. Saliva brings amylase, an enzyme that hydrolyzes starch, and lysozyme, which is antibacterial and prevents infections (Process of Digestion, n.d.). The moistened and chewed food, called a bolus, is swallowed with the help of the pharynx to continue digestion (Process of Digestion, n.d.). Then, the process continues in the stomach and lasts up to 5 hours. The main active substances here are mucus secreted by mucous neck cells, the proenzyme pepsinogen secreted by the peptic cells, and hydrochloric acid secreted by oxyntic cells (Digestive system processes: Chemical and physical, n.d.). The process involves mixing the food with gastric juices, and the stomach muscles churn this mixture; the food mass is afterward called chyme (Digestive system processes, n.d.). The stomach and its enzymes mainly provide the digestion of proteins.

In more detail, the gastric juice and enzymes provide the following processes. Hydrochloric acid maintains the acidic pH; pepsin turns protein into peptones and proteoses, and casein turns into peptides (Process of Digestion, n.d.). Later, in the small intestine, chyme is mixed again by the movement of the small intestine and meets more enzymes: amylase, enterokinase, trypsin, chymotrypsin, carboxypeptidase, elastase, and nucleases (Rakshitha, n.d.). These enzymes and substances contribute to the further digestion of the food and its simplification so that the cells can absorb it.

Additionally, the small intestine ensures the processes connected with carbohydrate digestion. Maltase turns maltose into glucose; sucrase breaks sucrose down into glucose and fructose; lactase turns lactose into glucose and galactose; and peptides are turned into amino acids by aminopeptidases (Digestive system processes, n.d.). After that, bile provides the emulsification of fat globules, and pancreatic lipase helps break down triglycerides into glycerol and fatty acids (Rakshitha, n.d.). The food becomes suitable for cell absorption at this stage, and only some particles move on to the large intestine.

The large intestine is not as active in providing substances for digestion as the small intestine. Its primary function is ensuring the absorption of minerals and water (Digestion of foods, n.d.). The waste food particles, called fecal matter, are then removed from the body (Digestion of foods, n.d.). Generally, the whole system is controlled by hormones, which stimulate the alimentary canal to work correctly and transfer signals from the brain to the organs (Rakshitha, n.d.). At the same time, nerve signals are sent back to the brain to indicate hunger (Digestion of foods, n.d.). In other words, the digestive process is controlled by the nervous system and hormones.

To conclude, the digestive system of the human body is well designed for maintaining the proper functioning of the organism. All of the system's elements are interconnected and interdependent, meaning that the process requires coherent activity of the involved elements. The enzymes that take part in digestion provide the body's cells with properly dissolved useful substances required for energy balance and metabolism in general.

References

Digestion of foods. (n.d.). Raw Food Explained. Web.

Digestive system processes: Chemical and physical. (n.d.). Digesting of Foodstuffs, 599-612. Web.

Process of Digestion. (n.d.). Toppr.com. Web.

Rakshitha, S. (n.d.). Digestion & absorption of foodstuff in human beings. Biology Discussion. Web.

Pluto: The Status in the Galaxy

Pluto has been a topic of many arguments among scholars for decades and has caused numerous debates. Running through the paper is the idea that science is in constant motion, and this motion results in changes in astronomers' perceptions. Accordingly, the article reviews the events and discoveries that have influenced the fate of this space object and the many statuses that have been given to it.

There are different issues connected with Pluto and its status in the galaxy. One of them is that science keeps moving towards new discoveries and progress. Therefore, theories that were once relevant can, after some time, become inadequate in light of new information and research. For example, many astronomers began to doubt the position of Pluto, discovered in 1930, as a planet after new classification rules were created, because the planet's characteristics did not fit the new standards (Associated Press, 2006). Although Pluto and other small space objects are now considered dwarf planets, they are arguably still planets; thereby, Pluto should not have been disqualified. Another problem is that, from the very beginning of astronomy, scholars did not have specific guidelines for measuring planets. This led to inconsistencies in determining whether Pluto deserves to be considered a planet or not. These issues are crucial because, without clear criteria, it is impossible to evaluate the properties and characteristics of space objects.

Overall, the issues mainly concern building specific criteria for the observation and classification of space objects. Pluto is an example of the importance of a precise measurement system for the subsequent identification of an object. The article relates to my chosen field through the constant change and development of science, which keeps revising old statements and facts.

Reference

Associated Press. (2006). Astronomers Vote to Strip Pluto of Planetary Status. Fox News. Web.

Urban Sprawl in Portland: Advantages, Disadvantages, Net Effect

Introduction

Urban sprawl refers to the loss of a land's rural characteristics due to the geographic expansion of cities and towns, or their spatial footprint. Urban sprawl is caused by the need to accommodate an increasing urban population and to fulfill residents' desire for more living space and residential amenities. The European Environment Agency (EEA) defines urban sprawl as "the physical pattern of low-density expansion of large urban areas, under market conditions, mainly into the surrounding agricultural areas" (Fertner et al., 2016, p. 2).

It is characterized by decentralized planning or minimal planning control of land division, with patchy development strung out and scattered across the region. The newly developed urban areas tend to be fragmented and discontinuous, have low-density residential settlements, and rely on private transportation.

Portland's landscape has changed drastically in the last fifty years. Cornfields have been replaced with ranch houses, wetlands have been drained to allow highway construction, and forests that previously provided shade during the summer have given way to parking lots that radiate heat. Fertner et al. (2016, p. 9) indicate that about 75% of new housing arose in designated urban villages in the past two decades. The existing structure of the metropolitan area has changed due to new infrastructure and urban developments. Fertner et al. (2016, p. 7) analyzed the characteristics of urban areas established in Portland between 2006 and 2011 and concluded that the land was characterized by fragmentation. Portland is the fastest-growing city in Oregon, with a population growth rate of 1.2 percent between 2006 and 2011 (Fertner et al., 2016, p. 2).

It had over 580 new urban patches with an average individual size of 2.1 hectares and a total size of 1235 hectares (Fertner et al., 2016, p. 2). These new patches were established adjacent to the existing urban areas and were created by extending a current metropolitan area or transforming formerly low-density regions. Most of the urbanized land was agricultural and natural (forests, wetlands, and semi-natural areas).

The Specific Advantages of Urban Sprawl in These Areas

Economic Growth

The urban sprawl in Portland has contributed to the metropolitan area's economic growth. According to Fertner et al. (2016, p. 6), the Gross Domestic Product (GDP) of Portland grew by 5 percent between 2006 and 2012, the period in which Portland experienced a spike in the growth of urban areas. Anecdotal evidence indicates that the GDP increase was driven by the population growth experienced during the same timeframe. The construction of commercial, industrial, and residential settlements creates employment opportunities for locals, and the local government also benefits from the sales and property taxes generated by the new development.

The Specific Disadvantages of Urban Sprawl in These Areas

The Socioeconomic Implications

Literature on urban sprawl has enabled a deeper understanding of the social implications of the dispersion of cities. Sociologists argue that urban sprawl deepens differences between social classes and causes acculturation issues and ethnic conflicts. Social groups mostly coincide with ethnic, racial, or religious groups, creating social disputes. Due to poverty, some ethnic communities may cluster in urban ghettos that are typically characterized by crime.

For example, Portland's socioeconomic disparities and social inequities are both racialized and spatial. A recent study conducted by Goodling et al. (2015) indicated that Portland's uneven development is the reason why the urban core of the metropolitan area is predominantly White and affluent, while the outer areas (the eastside) are impoverished and mainly occupied by people of color. The authors attributed these outcomes to the negative externalities of the region's uneven historical development.

During the height of urban sprawl in the 1960s, the federal government supported Whites by providing them with loans to buy newly constructed houses. While White Portlanders migrated outward from the city, African Americans were segregated in the inner areas of Portland. These areas were devalued as the government shifted capital from industrial enterprises to real estate (Goodling et al., 2015). For example, between 2000 and 2010, the income level of eastside neighborhoods dropped by 15% (Goodling et al., 2015, p. 506). The Whites, on the other hand, benefited from infrastructure improvements and federal investments. This analysis shows that the urban sprawl in Portland caused socioeconomic inequities.

Transportation Costs

Urban sprawl also negatively affects mobility and energy efficiency, increasing transportation costs. Regions characterized by urban sprawl rely more heavily on private transportation, which weakens public transportation systems (Rubiera-Morollón & Garrido-Yserte, 2020). About 57.7% of Portland's residents use private means to commute, 8.9% carpool with others, and 12.3% use mass transit (Commuting in Portland, Oregon, n.d., para. 3).

Rubiera-Morollón and Garrido-Yserte's (2020) study implies that urban-sprawl areas have a greater dependence on private vehicles. Therefore, it is logical to conclude that, in part, Portland's urban sprawl has contributed to the region's increased reliance on personal transportation. This reliance on private transportation creates congestion and traffic jams and increases atmospheric carbon emissions.

Environmental Costs

Portland's urban sprawl has had a significant impact on the region's environment and environmental quality. Fertner et al. (2016, p. 6) indicate that approximately 150 ha of natural and agricultural land in Portland was converted to urban uses during the 2006-2011 urban sprawl. Natural land such as forest was cleared to make way for infrastructure and residential and commercial buildings, destroying wildlife habitat. The destruction of natural areas also degrades free ecosystem services such as flood control and water purification. Although Portland still has several green spaces, these areas are too small to support wildlife and native species.

The Overall Net Effect of the Urban Sprawl

Energy Consumption

Urban sprawl is associated with an increase in energy consumption. Evidence from various studies has consistently shown that urban sprawl leads to high energy consumption (Rubiera-Morollón & Garrido-Yserte, 2020). Typically, residential homes in sprawling regions are scattered, which yields energy inefficiencies. Energy providers must cover long distances to supply the dispersed homes, increasing transportation costs. Infrastructural costs result from distribution systems, pipelines, and power lines that must run for many more kilometers to reach every home. Together, these factors raise energy costs through higher infrastructural and transportation expenses.

Economic Costs

Population increase without a significant rise in density negatively impacts a country's fiscal situation. Outcomes from various econometric analyses have shown that an increase in urban dispersion raises local public debt and creates a more significant financial burden in the long term (Rubiera-Morollón & Garrido-Yserte, 2020). Simply put, regions with high population and population density have better fiscal outcomes than dispersed cities.

Increased Transportation Costs

Researchers indicate that urban dispersion (patches of urban areas) characterized by low population density increases commuting distance, impeding efficient mass transportation (Rubiera-Morollón & Garrido-Yserte, 2020). Because people are spatially dispersed across the region, the number of stops increases, which raises transportation costs. A vicious circle exists between private transportation and urban dispersion: the use of personal vehicles encourages or accelerates dispersion, while at the same time people need them to commute between urban patches.

Environmental Effects

Urban sprawl causes air pollution, primarily from the carbon emissions generated by the increased use of private vehicles. Carbon emissions, in turn, contribute significantly to current global warming. Urban sprawl also causes significant consumption of natural resources for urban activities. This consumption destroys wildlife and habitat and strains the ecosystem's ability to sustain itself. Whenever urban sprawl extends into a region with extensive forests, it creates forest fragmentation and disrupts wildlife migration corridors. It also causes water and soil pollution. Automobile transportation systems are non-point sources of water pollution, while soil properties are permanently affected. For example, human activities cause soil compaction and a loss of water permeability, soil biodiversity, and the soil's capacity to act as a net sink for carbon and greenhouse gases.

Conclusion

Urban sprawl extends beyond the expansion and dispersion of buildings and encompasses planning, economics, sociology, policy science, and environmental analysis. Urban sprawl in Portland is characterized by low-density urban areas strung out across the region and by disparities between localities. The city gained over 580 new urban patches between 2006 and 2011 and is still one of the fastest-growing regions in Oregon. Despite the boost in economic growth during the urban sprawl, Portland has also experienced adverse effects. The urban sprawl in Portland has created social conflicts marked by social inequities and socioeconomic disparities, inefficient energy consumption, and increased transportation costs. The city is currently using urban growth management to control the sprawl.

References

Commuting in Portland, Oregon. (n.d.). Sperling's Best Places. Web.

Fertner, C., Jørgensen, G., Nielsen, T. A. S., & Nilsson, K. S. B. (2016). Urban sprawl and growth management: Drivers, impacts, and responses in selected European and US cities. Future Cities and Environment, 2, 1-13. Web.

Goodling, E., Green, J., & McClintock, N. (2015). Uneven development of the sustainable city: Shifting capital in Portland, Oregon. Urban Geography, 36(4), 504-527. Web.

Rubiera-Morollón, F., & Garrido-Yserte, R. (2020). Recent literature about urban sprawl: A renewed relevance of the phenomenon from the perspective of environmental sustainability. Sustainability, 12(16), 1-14. Web.

Qualitative Methods and Analysis

Qualitative analysis can be defined as the way a researcher develops a deep understanding of a phenomenon through in-depth research. It is aligned with a particular methodology, and there has never been a single way to analyze qualitative data. In other words, it is a nonlinear and iterative process. The process is often presented in phases for deeper understanding. This paper analyzes and reflects on qualitative methods and analysis.

For a better understanding, the analysis is presented in phases. The first phase is organizing the data to be analyzed. Data collection typically produces bulky material. During the process of generating data through interviews, information is recorded in lengthy documents (Lester et al., 2020). Multiple techniques are used, such as audio or video recording, and the files are stored in an orderly manner to avoid confusion. This is the primary stage, and it sets the foundation for analysis. Transcribing the data is another phase for researchers doing the analysis. Analysts allocate time for the transcription of the available information. Although it is a cumbersome process, transcribers should work carefully to avoid loss of meaning.

Becoming familiar with the data is another phase of qualitative analysis. Researchers must be in a position to explain what they have done. If they do not understand their own work, they may be unable to explain it later to other scholars. Familiarity may also inspire other groups to research the same data further, thus ensuring that more discoveries about a particular dataset are made. Another step is documenting the data in memos that describe what it entails (Lester et al., 2020); these capture the dates, places, and importance of the information. After that, coding is done: a short descriptive phrase is assigned to segments of interest to the researcher. The purpose of coding is to reduce the size and complexity of the data. Lastly, making the analytic process transparent is critical to ensure that the data analysis is verifiable. Researchers are supposed to develop a detailed audit trail that outlines the connection between all stages of data analysis.
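As a purely illustrative sketch of the coding step described above, short descriptive codes can be attached to transcript segments and then tallied; the excerpts and code labels here are invented examples, not data from any real study.

```python
from collections import Counter

# Hypothetical transcript excerpts paired with short descriptive codes,
# illustrating the coding phase of qualitative analysis.
coded_segments = [
    ("I never felt listened to at work", "lack of voice"),
    ("My supervisor checks in every week", "supervisor support"),
    ("I stopped raising issues after a while", "lack of voice"),
    ("Training helped me feel more confident", "professional growth"),
]

# Tallying codes reduces the size and complexity of the material,
# which is the stated purpose of coding.
code_frequencies = Counter(code for _, code in coded_segments)
for code, count in code_frequencies.most_common():
    print(f"{code}: {count} segment(s)")
```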

Core ethics in this work include transparency and presenting honest work. One way of achieving this is for researchers to use their primary data to avoid misinterpretation. They should also have a deeper understanding of the concepts they are passing on so that other scholars can get the necessary assistance from them when required (Lester et al., 2020). Core social work is evident in the first stage of the research. Collecting data involves many individuals. In some instances, when questionnaires are used to conduct a survey, many people must present their views before the researcher's final compilation of the data. Social work is therefore essential in this scenario and requires maximum cooperation from individuals.

In conclusion, qualitative analysis can best be learned through a process known as thematic analysis. Dividing it into phases gives the researcher a clear way of presenting ideas. There are many ways of interpreting data, but thematic analysis is arguably the best at limiting the inconsistencies that affect the final results. For an effective approach, a 15-point checklist should be used when conducting the research to ensure the reliability of the final result.

Reference

Lester, J. N., Cho, Y., & Lochmiller, C. R. (2020). Learning to do qualitative data analysis: A starting point. Human Resource Development Review, 19(1), 94-106. Web.

Receptive Vocabulary Size and Proficiency in English as a Foreign Language

Introduction

The use of a scatterplot is one of the strategies that researchers employ to visualize and present information to readers. Data visualization has numerous advantages, including enhancing the understanding of a phenomenon, highlighting trends and patterns, and summarizing complex information (Li, 2018). A scatterplot aids in data visualization by depicting the nature of the relationship between two variables measured on a continuous or numerical scale. Visualizing two variables in a scatterplot can reveal a positive, negative, or spurious relationship, depending on the trends and patterns they exhibit (Li, 2018). To illustrate the use of a scatterplot, this assignment selected a study by Miralpeix and Muñoz (2018), which depicts the relationship between receptive vocabulary size and proficiency in English as a Foreign Language.

Variables in the Study

Receptive vocabulary size (RVS) and proficiency in English as a Foreign Language (PEFL) are the two variables examined in the selected study. RVS is an independent variable that measures the number of words a participant understands. It is a computerized test in which participants indicate whether they understand a series of words presented to them. PEFL is the dependent variable in the study and measures the level of language skills among learners. It evaluates reading, writing, listening, speaking, vocabulary, and grammar skills measured on a Likert scale from 1 to 5 to indicate the degree of proficiency. Since these two variables exist on a numeric scale, it is possible to examine their relationship and depict it using a scatterplot.

Association between Variables

The study used a scatterplot to visualize the relationship between RVS and PEFL. Examination of the scatterplot indicates that RVS and PEFL have a positive correlation. Although the scatterplot shows some outliers among the data points, it reveals a pattern and trend of positive association between RVS and PEFL. The dispersion of values shows that RVS scores ranged from 2500 to 7200, while PEFL scores ranged from 3.31 to 8.72. The moderate spread of points around the trend suggests that these variables have a moderate relationship. To the extent that these variables are causally related, the scatterplot reveals that learners with high scores in RVS tend to have high levels of PEFL, and vice versa.
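A scatterplot of this kind can be reproduced with a few lines of plotting code; the RVS and PEFL values below are invented for illustration and are not taken from Miralpeix and Muñoz's dataset.

```python
import matplotlib.pyplot as plt

# Invented example scores, loosely within the ranges reported in the study.
rvs = [2600, 3100, 3800, 4500, 5200, 5900, 6400, 7100]   # receptive vocabulary size
pefl = [3.4, 4.1, 4.8, 5.5, 6.2, 6.9, 7.6, 8.5]          # proficiency score

# Each point is one learner; an upward drift of points indicates a positive association.
plt.scatter(rvs, pefl)
plt.xlabel("Receptive vocabulary size (RVS)")
plt.ylabel("Proficiency in English as a Foreign Language (PEFL)")
plt.title("Positive association between RVS and PEFL (illustrative data)")
plt.show()
```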

Pearson Correlation

The Pearson correlation coefficient ought to be added as an analysis alongside the scatterplot to quantify the degree of relationship between RVS and PEFL. According to Field (2017), the Pearson correlation coefficient shows both the direction and the magnitude of the relationship between two variables. In this case, the scatterplot shows the direction of the relationship but does not depict the strength of the association. Hence, the Pearson correlation coefficient is necessary to quantify the degree of relationship between RVS and PEFL among learners. The coefficient ranges from -1 to +1 to indicate both the direction and magnitude of a relationship (Field, 2017). As the scatterplot indicates a positive relationship, the coefficient would show whether the relationship is strong (r ≥ 0.7), moderate (r = 0.4-0.6), or weak (r ≤ 0.4) (Field, 2017). Therefore, adding the Pearson correlation to the analysis would provide supplementary information to the scatterplot.
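The coefficient itself can be computed alongside the plot; the sketch below reuses the invented illustrative scores from the previous example rather than the study's data.

```python
from scipy.stats import pearsonr

# Same invented illustrative scores as in the scatterplot sketch above.
rvs = [2600, 3100, 3800, 4500, 5200, 5900, 6400, 7100]
pefl = [3.4, 4.1, 4.8, 5.5, 6.2, 6.9, 7.6, 8.5]

# pearsonr returns the correlation coefficient and its p-value.
r, p_value = pearsonr(rvs, pefl)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# An r close to +1 would indicate a strong positive relationship between RVS and PEFL.
```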

Conclusion

Data visualization using a scatterplot enhances the presentation, highlights trends, and summarizes relationships between variables. The study sought to establish the nature of the relationship between RVS and PEFL as variables of interest. By using a scatterplot, the selected study demonstrated that RVS and PEFL have a positive relationship. Analysis of the strength of the relationship using the Pearson correlation coefficient would provide additional information. The nature of the relationship shows that learners with high RVS have a high level of PEFL skills. In essence, the scatterplot and correlation indicate that PEFL skills among learners increase as the level of RVS rises.

References

Field, A. P. (2017). Discovering statistics using IBM SPSS statistics. Sage Publications.

Li, Q. (2018). Using R for data analysis in social sciences: A research project-oriented approach. Oxford University Press.

Miralpeix, I., & Muñoz, C. (2018). Receptive vocabulary size and its relationship to EFL language skills. International Review of Applied Linguistics in Language Teaching, 56(1), 1-24. Web.

Olfactory Sense as the Most Rapid Warning System

The human evasion reaction to unsavory scents that signal risk has for some time been viewed as a conscious cognitive process. However, research from the Karolinska Institutet shows that it is an unconscious and fast reaction. The brain processes involved in translating an unpleasant odor into avoidance behavior in humans have long remained a mystery. The lack of non-invasive methods to measure signals from the olfactory bulb is one of the reasons for this gap (Karolinska Institutet, 2021).

The olfactory bulb is the initial section of the rhinencephalon. It has direct links to crucial elements of the central nervous system that help in the detection and recall of potentially harmful circumstances and chemicals (Karolinska Institutet, 2021). After an odor is inhaled through the nose, its impulses reach the brain within 100 to 150 milliseconds (Karolinska Institutet, 2021). The olfactory sense appears to be especially crucial in humans for identifying and responding to potentially hazardous stimuli.

Researchers at the Karolinska Institutet have devised a method for measuring signals from the human olfactory bulb, which analyzes odors and relays signals to areas of the brain that regulate movement and avoidance behavior (Karolinska Institutet, 2021). Their findings are based on three tests in which individuals were asked to rate their reactions to six distinct odors, some pleasant and some unpleasant, while the electrical activity of the olfactory bulb was monitored (Karolinska Institutet, 2021).

The individual automatically leans back and away from the source of the odor as a result of the signal (Karolinska Institutet, 2021). These results imply that the human sense of smell is vital in detecting threats in the environment, and that much of this capacity is more unconscious than the reactions to danger mediated by sight and hearing.

Questions: What equipment did the researchers use to monitor the electrical activity of participants? What other senses function as threat detectors?

Reference

Karolinska Institutet. (2021). Sense of smell is our most rapid warning system. ScienceDaily. Web.

People's Behavior: Building Communities of Care

For many years, scientists have been interested in various aspects of human behavior. For these purposes, a new concept called behaviorism was introduced into psychology. This branch of science deals with the study of people's responses to various situations that occur to them. Some of the most important studies on this topic are the works of Harlow, Rosenhan, and Skinner. A detailed examination of these scientific papers and an assessment of their contribution to research on human behavior is of particular value. They are essential both for the development of new works and for a complete understanding of behavior and the development of strategies for managing it.

Monkeys' Attachment Theory

The first experimental study that should be mentioned as making a notable contribution to the science of human behavior is that of Harry Harlow. This scientific work was completed in 1958, and it consisted of studying the relationship of newborn monkeys with their mothers. Harlow argued that the attachment between them is formed as a result of tactile contact. Vicedo (2020, p. 2) emphasized that "attachment theory has become one of psychology's most influential theories about early child development and its impact on an individual's subsequent emotional life and adult relationships". Furthermore, the scientist noted that newborns directly need touch, since this contact provides them with emotional comfort.

In a series of experiments, Harlow tried to prove his theory of attachment. He separated newborn monkeys from their mothers and placed them with so-called surrogate mothers. It is worth noting that these were not real animals but figures made of wire and terry cloth. The wire monkey also imitated feeding by means of a bottle attached to it. During the study, it was noted that the infants spent more time around the soft artificial monkey. It was also their hiding place when the researcher simulated a dangerous situation.

The scientist was able to come to several conclusions after the experiment. Harlow emphasized that for proper development, the infant needs contact with a comforting object, in this case, the monkey made of cloth. This factor plays a unique role during the critical period of development, that is, during the first few months. Another conclusion is that premature separation from the mother leads to emotional damage. Furthermore, the scientist concluded that the value of maternal affection lies not primarily in feeding but in the social aspect of this kind of interaction.

This study is a valuable source of knowledge, as it contributes to the understanding of the premature separation of children from their mothers. Moreover, the data obtained during the experiment are applicable to the study of such topics as neglect and child abuse. Both of these factors affect the feeling of comfort and security. Therefore, the knowledge gained will help develop strategies and policies to prevent these problems and reduce the percentage of children who will be unable to adapt to life in society in the future. However, it is worth noting that the scientist's experiment was quite cruel and, in some ways, unethical. This is due to the fact that the baby monkeys inevitably experienced fear and stress because of separation from their mothers.

On Being Sane in Insane Places

The experiment conducted by David Rosenhan is one of the most thought-provoking in the history of psychology. The study was conducted in 1973 and was called "On Being Sane in Insane Places". Its main result was to raise doubts about the situation in psychiatric institutions (Bryant, 2020). It is worth noting that it is sometimes difficult to understand whether a person is insane. This is because it is often the mental hospital itself that creates conditions in which the suppression of the individual and the aggravation of their psychological state occur.

The essence of the study was to select thoroughly psychologically healthy volunteers whose task was to get into psychiatric hospitals in different US states. Under the pretext of experiencing hallucinations, participants were admitted to institutions for the mentally ill, but once placed there, they behaved as completely healthy people, showing good nature, decency, and respect. Despite presenting the same complaints, the pseudo-patients were often given different diagnoses.

The second stage of the experiment with pseudo-patients was also carried out. At this stage, the researcher studied whether psychiatric specialists would be able to identify and distinguish a mentally ill person from a completely healthy one. An important aspect was that the scientist warned the hospital about the possibility of encountering people who had been selected in advance for the study, although in fact none were sent. During the experiment, doctors nevertheless labeled both sick and healthy patients as suspected pseudo-patients, which led Rosenhan to conclude that the methods of diagnosing mental illness are incorrect and unreliable.

However, it is worth noting that not all medical specialists fit Rosenhan's critical conclusions. It should be remembered that the treatment developed is based largely on the patient's own testimony. In addition, one cannot speak of the complete incompetence of the psychiatric institutions that the pseudo-patients entered, since the symptoms they reported were deliberately false. Nevertheless, such aspects as the biased attitude of the medical staff of mental hospitals, negligence, and neglect were demonstrated by the researcher during the experiment.

Operant Conditioning

The last important discovery for the psychology of human behavior was the establishment of the concept of operant conditioning. Its proponent was B. F. Skinner, who formulated its main principles. According to the scientist, behavior followed by favorable consequences is most likely to be repeated, while behavior followed by negative consequences is least likely to appear again. It can be concluded that the results of an action determine its repeatability; that is, repetition is based on past experience.

Skinner also emphasized that it is necessary to study the manifestations of human behavior, not inner feelings. Cherry (2019, p. 1) states that "as a behaviorist, Skinner believed that it was not really necessary to look at internal thoughts and motivations in order to explain behavior". Additionally, the researcher believed that all psychological knowledge existing at that time was too simplified to fully explain people's behavior. Skinner saw the primary tool for studying this aspect of personality in the analysis of the causes and consequences of specific actions. In 1948, the scientist conducted an experiment with animals, which he placed in special equipment known as the Skinner box. It was a device equipped for recording the behavior of an animal in a compressed time frame. For certain actions, the animal was either rewarded or punished.

During the experiment, the researcher identified three main types of responses that can follow a particular behavior. First, neutral operants are environmental responses that in no way affect the repetition of the behavior. The second is reinforcement, a response to an action that increases the likelihood of repetition. The third is punishment, which decreases the recurrence of a certain behavior. Moreover, Skinner noted the importance of the external environment as a factor in the formation of skills and behavior.
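The core idea that consequences change the probability of a behavior being repeated can be illustrated with a toy simulation; the update rule and the numbers involved are an invented simplification for illustration, not Skinner's actual procedure.

```python
import random

def simulate_operant_conditioning(trials=50, reward_prob_lever=0.8, seed=0):
    """Toy simulation: pressing a lever is rewarded most of the time,
    so the probability of choosing that behavior grows over repeated trials."""
    random.seed(seed)
    p_press = 0.5  # initial probability of choosing to press the lever
    for _ in range(trials):
        pressed = random.random() < p_press
        rewarded = pressed and (random.random() < reward_prob_lever)
        if rewarded:                  # reinforcement strengthens the behavior
            p_press = min(1.0, p_press + 0.05)
        elif pressed:                 # an unrewarded press weakens it slightly
            p_press = max(0.0, p_press - 0.02)
    return p_press

print(f"Probability of pressing after conditioning: {simulate_operant_conditioning():.2f}")
```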

The discovery of the concept of operant conditioning contributed to an even deeper study of the processes by which human behavior is formed. It is noted that conditioning consists in creating the necessary conditions for managing and modifying people's behavior. Skinner thus derived specific patterns that also influenced the development of behaviorism. They can be used in solving social problems and in defining and implementing behavioral technology. This can contribute to the development of policies and strategies for managing negative behavior and translating it into positive behavior.

Therefore, it can be concluded that behaviorism is a vitally important science. It helps, first of all, to explain the patterns of human behavior and to form strategies for changing them. Scientists such as Harlow, Rosenhan, and Skinner have made outstanding contributions to this science. They considered such topics as attachment, the stigmatization of the mentally ill and the incompetence of medical institutions, and the repetition of actions depending on their consequences. These scientific works are of critical importance, as they contribute to a better understanding of human nature and expand existing knowledge of psychology.

Reference List

Bryant, K. (2020) 'On being sane in insane places: Building communities of care', Griffith Review, (67), pp. 186-193.

Cherry, K. (2019) 'What is operant conditioning and how does it work? How reinforcement and punishment modify behavior', Verywell Mind, pp. 1-5.

Vicedo, M. (2020) 'Attachment theory from ethology to the strange situation', Oxford Research Encyclopedia of Psychology.

Methods in Remembering, Imagining, False Memories, and Personal Meanings Study

Study Summary

According to the study, memories are closely linked with the imagining system. When a person tries to imagine something, memories are used as a source of data. When a person tries to recall an event, imagination may have a significant influence on the final perception of the past. From a certain perspective, human memories are always imprecise and subjective. People tend to transform their memories according to their personal beliefs and life values. Therefore, the primary role of memory is to create personal meanings. The study has shown that false memories do not have exclusively harmful consequences.

In some cases, they may lead to a distorted interpretation of reality and worsen psychological illnesses. However, false memories may also modify traumatic experiences and lessen the related problems. For example, people with posttraumatic stress disorder may use false memories to avoid negative emotions and images. Even though such memories provide incorrect information about the past, they may be beneficial.

Research Methods

"Remembering, Imagining, False Memories, and Personal Meanings" is a comprehensive study that utilizes a wide variety of approaches and methods. Nevertheless, its methodology excludes several effective techniques. For example, the research is primarily descriptive, as no experimental research is involved. It is mainly based on previous studies and summarizes existing knowledge in order to apply it to a specific subject. A case study approach is also utilized to provide additional information regarding the interaction of false memories with mental disorders. Correlation and causation issues play a significant role in the study, as the remembering and imagining systems are closely linked. Even though the article primarily focuses on correlation, a causal hypothesis could also be established by further research.