Turtle Mound: Archaeological Research

Turtle Mound is an ancient archaeological site located about 14 kilometers south of New Smyrna Beach in Florida, United States. The mound is the largest shell midden on the United States mainland, with an estimated height of fifteen meters. It extends for approximately 180 meters along the Indian River shoreline and contains over 35,000 cubic yards of shells (Green et al., 2019). The site is made up of refuse and oyster shells left by the ancient Timucua community. The location was named Turtle Mound because, as the heap lost some of its oysters and shells, it took on the shape of a turtle. The mound consists of oyster shell and soil deposited by the ancient Timucua people, who lived in the territory of modern Florida for about five to six centuries before Europeans came and occupied the place. Its soil also supports several types of tropical plants.

The location also marks the northernmost distribution of numerous plant species, including Schoepfia chrysophylloides, Plumbago, and Amyris elemifera. The heat retained by the shells and the mound's closeness to the Atlantic Ocean allow it to stay warmer than the surrounding areas (Gillreath-Brown & Peres, 2017). Spanish surveyors who visited the locality testified that the inhabitants launched their dugout canoes at the mound's base.

The mound is said to have been approximately twenty-three meters high before it was lowered by shell mining in the 19th and 20th centuries. In the 1970s, the mound was added to the National Register of Historic Places (Harvey et al., 2019). Currently, the location is administered by the National Park Service as part of the Canaveral National Seashore. Furthermore, it was determined that no extensive excavation had ever been carried out since the mound first appeared in historical records.

Turtle Mound is the largest coastal shell midden on the east coast. Many such mounds were created by the Timucua people along the shorelines of Mosquito Lagoon on the east coast of Central Florida. Stabilization work helped protect Turtle Mound and slowed erosion: planting oysters, marsh grass, and mangrove trees around its base largely stopped further erosion (Wilder & McCollom, 2018). The reduced erosion enabled settlers on Florida's east coast to practice the farming through which they acquired their food. Turtle Mound has helped archaeologists learn more about the Timucua people's history and the importance of preserving their culture (Donnelly et al., 2017). Since the Timucua were the first occupants of east coast Florida, they witnessed how the mound grew and built their traditions around it. They saw that it was an important part of their culture and made it count as part of their history. The mound also brought Timucua families together to work; even children were involved in carrying heavy loads.

Archaeological Research

Archaeological studies help individuals carry out research that is informed and accurate; they ensure that correct information about specific subjects of study is available. Archaeology is often regarded as a branch of socio-cultural anthropology, since scientists drawn from environmental, biological, and geological disciplines mostly conduct their studies at archaeological sites. Furthermore, the research conducted by scientists sometimes yields new findings, while other work continues earlier studies of the same area. First, archaeological research indicates that there is still no exact date for when Turtle Mound was formed (Mangin et al., 2019). Researchers estimate that the site was used between 800 and 1,400 years ago (Mangin et al., 2019). However, recent radiocarbon analysis suggests that occupation of the region dates back to around 1000 BCE, which is further supported by the discovery of ancient pottery believed to be 1,200 years old.

The archaeologists add that no complete excavation has ever been conducted on the site, making it impossible to assemble a full history of the mound. Archaeologists found that the mound was used by people of the Timucua culture, an indigenous society that occupied most of Central Florida (Green et al., 2019). They also determined that the Timucua were forcibly removed from Florida and relocated to Cuba after being devastated by war, slavery, and disease. The research also documented the multiple names the mound has carried since Europeans began settling in Florida: Surruque, named after a cacique of an Indian tribe that lived in the area; Mount Belvedere; the Rock; Mount Tucker; and Turtle Mount. Further, the Timucua used bows, arrows, spears, and snares to catch various small mammals and reptiles (Kimball & Wolf, 2017). Archaeologists concluded that Turtle Mound is the last remaining monument of the Timucua people, since most of the other mounds have been leveled to provide material for filling roads. Moreover, researchers recently found 1,200-year-old pottery, fish bones, and other samples on site that will be analyzed with radiocarbon dating to determine how old the mound is.

Significance

Turtle Mound, like any other archaeological site, played a significant role in improving the lives of people around the area. The mound is visible from seven miles out at sea; hence it was used as a navigational aid. Individuals and explorers who wanted to visit the site could travel freely without fear of getting lost because it gave them a sense of direction. The large mound was also used by Spanish colonists and mariners as a landmark to guide their navigation. Today the mound serves as a hiking trail that leads to the peak of the ancient shell heap and provides a magnificent view of the Atlantic Ocean (Witherington et al., 2017), so visitors can enjoy much of that ocean view without ever leaving the site.

The Timucua people used the mound as high ground for refuge during hurricanes: when hurricanes intensified, the Timucua sought shelter on the mound. The mound stands as a tribute to the Timucua people and the land they loved and appreciated. It has helped uncover the past and preserve the heritage of Florida's first people (Wilder & McCollom, 2018). It represents the past of the Timucua, a tradition that is now largely gone, and it is here that we learn more about that heritage and why it must be preserved. Further, the site serves as a destination for adventure, attracting explorers from around the world who come to see this historical marker of the past as new inhabitants prepare for the future.

Images of the Turtle Mound

Image 1: Turtle Mound at the water's edge.
Image 2: Geographical map of Turtle Mound's location.
Image 3: Turtle Mound boat tours.

References

Wilder, G. J., & McCollom, J. M. (2018). A floristic inventory of Corkscrew Swamp Sanctuary (Collier County and Lee County), Florida, USA. Journal of the Botanical Research Institute of Texas, 265-315.

Donnelly, M., Shaffer, M., Connor, S., Sacks, P., & Walters, L. (2017). Using mangroves to stabilize coastal historic sites: Deployment success versus natural recruitment. Hydrobiologia, 803(1), 389-401.

Gillreath-Brown, A., & Peres, T. M. (2017). Identifying turtle shell rattles in the archaeological record of the southeastern United States. Ethnobiology Letters, 8(1), 109-114.

Green, W., Caves, M. C., & Williams, L. L. (2019). The Myrick Park Mounds (47Lc10), an Effigy Mound site in western Wisconsin. Midcontinental Journal of Archaeology, 44(2), 207-229.

Harvey, V. L., LeFebvre, M. J., Defrance, S. D., Toftgaard, C., Drosou, K., Kitchener, A. C., & Buckley, M. (2019). Preserved collagen reveals species identity in archaeological marine turtle bones from Caribbean and Florida sites. Royal Society Open Science, 6(10), 191137.

Kimball, L. R., & Wolf, J. (2017). The ritualized landscape at Biltmore Mound. North Carolina Archaeology, 66.

Mangin, M. J., Schneider, S. G., & Jol, H. M. (2019). Subsurface imaging of a late Woodland Effigy Mound site: Lake Koshkonong Effigy Mounds, Wisconsin.

Turtle Mound Boat Tours: Daytona Beach Florida River Cruises. (n.d.). www.turtlemoundtours.com. Web.

Witherington, B., Peruyero, P., Smith, J. R., MacPhee, M., Lindborg, R., Neidhardt, E., & Savage, A. (2017). Detection dogs for sea turtle nesting beach monitoring, management, and conservation outreach. Marine Turtle Newsletter, 152, 14.

Mars Rover Curiosity: Review

The new Mars rover Curiosity landed on Mars on August 6. According to its creators, the purpose of this sophisticated device is to participate in the long-term robotic exploration of Mars (Webster par. 2). In particular, the rover's mission is to attempt to answer a question that has occupied people's imagination for decades: whether life exists on Mars. The rover will examine the living conditions on the planet in order to draw conclusions about its habitability for small life forms such as microbes (Webster par. 8). The rover is equipped with an onboard laboratory designed to study the local geologic setting, soils, and rocks, which will offer a basis for detecting chemical building blocks of life, such as carbon and hydrogen compounds, and for assessing the Martian atmosphere and living conditions in earlier periods.

On August 6, 2012, the Mars rover Curiosity successfully landed in Gale Crater ("Curiosity Safely on Mars!" par. 3). The purpose of the rover's work is to examine chemical compounds in Martian mineral samples and in the Martian atmosphere in order to determine whether the planet can be habitable for humans or other living organisms at present, and whether it was habitable for any kind of life in the past (Howell par. 5). Ashwin Vasavada, the lead scientist responsible for the project, explains: "when we were designing Curiosity, we were going to use it for our habitability investigations as well, but it really is paid for and intended to understand the environment humans will experience on Mars" (Howell par. 4). To reach such important conclusions, Curiosity's specially constructed onboard laboratory will carefully examine samples of Martian soil and rock along with samples of gases from the atmosphere. Such analysis will provide a basis for assessing living conditions on the planet now and in earlier periods. Because the condition of rocks and soil depends on environmental conditions and is effectively a record of the planet's history, it will be possible to identify whether Mars was ever inhabited before (Howell par. 7). In addition, the rover is equipped with the Radiation Assessment Detector, which will measure the level of radiation from galactic cosmic rays and the Sun. Such radiation assessment is also very important in identifying the planet's level of habitability.

The chemists working on this project have put a great deal of work into it. First of all, getting to Mars required a number of important research efforts, which resulted in the construction of the Hazard-Avoidance cameras. According to "Curiosity Safely on Mars!":

All Sol 0 spacecraft activities appear to have been completely nominal. These include firing all of Curiosity's pyrotechnic devices for releasing post-landing deployments. Spring-loaded deployments, such as removal of dust covers from the Hazard-Avoidance cameras (Hazcams) occur immediately when pyros are fired (par. 6).

However, the major part of the chemists' work will occur during the analysis of Martian rocks, soil samples, and atmospheric gases. From these investigations, the scientists will draw their conclusions about the possibility of life forms based on carbon compounds.

In conclusion, the new Mars rover Curiosity, having recently landed on the surface of the red planet, embodies scientists' hopes of eventually answering the question of the existence of life on other planets, a question that has occupied people's minds for decades. Curiosity's onboard laboratory is intended to conduct a number of chemical investigations to identify whether life was possible on Mars before, and whether humans would be able to live on Mars under the conditions that currently exist there.

Works Cited

"Curiosity Safely on Mars! Health Checks Begin." 2012. Web.

Howell, Elizabeth. "Mars Rover Curiosity to Double as Martian Weather Station." 2012. Web.

Webster, Guy. "Five Things About NASA's Mars Curiosity Rover." N.d. Web.

Applied and Traditional Academic Research Methods

Introduction

The applied research and the traditional academic research methods derive their processes and procedures from the generalized definition of research. Research is the scientific process through which solutions to problems are derived from statistical inferences (Salkind, 2006). The notion of finding a solution to the current problem is the foundation of any scientific inquiry. In applied research, the broad problems are narrowed to specific questions.

In essence, applied research is used to find solutions to realistic questions (Salkind, 2006). For instance, a firm may want to determine how well its products perform against certain specifications in the market. In that scenario, applied research is used to provide answers to that specific question, and it follows a systematic procedure to do so. Whether applied or pure research methods are used, problem definition and solution take center stage.

Steps in the Applied Research method

All research methods follow sequential steps to find solutions or answers to the study problems. The fact that the research process follows steps does not mean that every step is followed rigidly; the process is dynamic and subject to change as the study advances (Salkind, 2006). In applied research, the first step is defining the problem or identifying the research question. Once the research question has been defined, the research proposal is written, followed by development of the study design. Data collection comes next, followed in turn by the selection of analytical procedures and the data analysis itself.

The final step in applied research is writing up the research report. As indicated, the steps outlined are not rigidly standardized; for instance, the steps followed in business research may differ from the sequence used in social science inquiry (Babbie, 2004).

The Discussion of the Steps in the Research Process

The Definition of the Research Problem or Question

This step is critical in identifying the problem. In applied research, the specificity and clarity of the research question or problem are significant at this stage. The reason is that the research method tends to find a solution to the specific research question or problems. Besides, identifying the research problem is vital in making decisions about various alternatives that may be available (Salkind, 2006).

Identifying the specific question to be answered stems from examining the broader problems that need to be solved, and narrowing the broader problem to specific questions is critical in finding solutions. Hypotheses are drawn from the specific research questions and are then tested quantitatively. The cost analysis of the research project is also conducted at this stage: the value assessment weighs the research costs against the expected value of the findings. If the project's cost exceeds the value of the research, the project should be discontinued (Davis, 2006).

Writing the Research Proposal

The research proposal explains the study question and the course of action to be taken by the researcher (Babbie, 2004). It also communicates the research expectations, particularly to the financiers of the project. For instance, in a research project studying the effectiveness of new programs on employees' performance, a proposal on the topic and how it will be addressed is written for the concerned authorities (Davis, 2006).

In particular, the proposal describes the specific information the research process is expected to produce. Using the example question above, the proposal would indicate that the research will show the degree of employees' satisfaction with the introduced programs and the level of the firm's performance. Most importantly, the proposal acts as a guide to the research, particularly in designing the data collection and analysis procedures that address the specific question (Davis, 2006).

Research Design

Designing the research involves deciding what type of methodology to apply and establishing systematic procedures for the study process. The non-experimental method is used in most investigations, including applied research. It is characterized by static variability: the study is conducted on existing situations and their causes and effects, and the results and findings are reported as relationships between the variables (Babbie, 2004). The research design stage is critical in determining the direction of the study and the control measures to be applied during it; these control measures involve determining the sample size, the data collection methods, and the analytical methods.

Data Collection

The data collection step is the actual gathering of information on the topic of interest. At this step, the researchers must decide which method to use to gather the information that will address the specific research question (Babbie, 2004). Normally, researchers can use either primary or secondary data: primary data are collected through surveys and interviews, while secondary data are gathered through literature reviews.

Data Analysis

The data analysis step involves transforming the gathered data into useful information. In most cases, the analysis determines the correlations between the variables. In particular, it is from the data analysis that the research question is answered or the solution to the problem is found, depending on the conclusions drawn (Salkind, 2006). The process involves quantifying the collected information and testing the hypotheses through appropriate statistical techniques.
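To make the data analysis step concrete, the short Python sketch below runs a two-sample t-test, the kind of quantitative hypothesis test the paragraph above alludes to. The scores, group sizes, and significance threshold are hypothetical illustrations loosely based on the employee-program example used earlier, not data from the cited sources.

```python
# Hedged illustration: a two-sample t-test on hypothetical performance scores,
# comparing employees before and after a new program was introduced.
from scipy import stats

before = [62, 58, 71, 65, 60, 68, 63, 59, 66, 64]  # hypothetical scores
after = [70, 66, 75, 69, 72, 74, 68, 71, 73, 67]   # hypothetical scores

t_stat, p_value = stats.ttest_ind(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value (for example, below 0.05) would support the hypothesis that
# the program is associated with a change in performance; a large one would not.
```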

Research Report

A research report is the last step in the applied research procedures. The main aim of the research report is to present the conclusions drawn from the study results. The research report contains the findings of the research, the problems encountered during the study process, the general research procedures, and the recommendations considered significant and consequential by the researchers. In most cases, organizations utilize the research findings and recommendations to design and implement policies (Davis, 2006).

The Importance of the Steps in the Research Project

Each of these steps is critical to any research project. For instance, defining the research problem narrows broad predicaments into a definite question that can be tackled through the study procedures, and the costs of the research are determined at this stage. The proposal then acts as a framework that guides the study. The research design stage determines the direction of the study and the control measures to be applied. Data collection and analysis establish the correlations between the study variables, while the report provides recommendations and conclusions based on the findings of the investigation.

Conclusion

Applied research involves finding solutions to specific questions. In essence, applied research is used to find solutions to practical questions. Like all other research methods, the applied research process undergoes a series of steps ranging from designing the research problems to writing the final report of the study. Even though the steps may overlap in terms of underlying actions, they remain decisive to the process of investigation.

References

Babbie, E. R. (2004). The practice of social research. Belmont, CA: Thomson/Wadsworth.

Davis, D. (2006). Business research for decision-making. Boston, Massachusetts: South-Western College Publishing.

Salkind, N. J. (2006). Exploring research. Upper Saddle River, NJ: Prentice Hall.

The Measures of Central Tendency and the Descriptive Statistics

Introduction of the study

The measures of central tendency discussed in this essay, such as the mean, median, and mode, together with descriptive statistics, are very useful for summarizing collected data and reaching detailed and correct conclusions. The aim of this essay is to analyze the measures of central tendency and descriptive statistics used and their importance. The essay also examines the statistical methods and data of a recent study undertaken by Cambridge University and determines how well the statistics fit it. The study concerns a newly proposed treatment for patients in the later stages of prostate cancer. The discrepancies in the statistics collected under this study show that such results may cast doubt on the research findings. According to the statistics, the sample used in the survey consisted of only 12 patients drawn from the age group of 47 to 73. Because of the narrow range used in the research, more accurate results were expected. Another feature of the study was that only men were selected as patients, which is seen as having contributed to better results in the study (Plichta & Kelvin, 2001).

Drawbacks of the study

Despite the good descriptive analysis produced by the study, the study did not give promising results with respect to the measures of central tendency. In relation to these measures, the research team reported results that were inaccurate relative to the statistics given, a judgment based on the existing statistics and on the data used in drawing the inferences. With a small sample of 12 patients, the results were expected to be more favorable. According to the current study, a mean of 10.75 was obtained, which corresponds to the mean of the past study. However, because a small sample size was used and there was significant variation in the figures, the mean is a misleading summary: the lowest value is 3 weeks and the highest is 45 weeks, which makes the use of the mean inappropriate (Elizabeth & Stancey, 2012).

Importance of standard deviation

The calculation of dispersion in any study is more accurate when the standard deviation is used, since the standard deviation takes account of the extremes of the data. For example, since the lowest value is 3 and the highest is 45, the range of the data is 42, which is very large; such a large disparity between the lowest and highest values within a very small sample distorts simple summaries. This makes the standard deviation the most appropriate measure for examining the statistics collected. In light of the past statistics, the results reported by the research team are questionable because a small sample size was used together with a narrow age group (Alexander, Franklin & Duane, 2001; Plichta & Kelvin, 2001).
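As a rough illustration of the argument above, the Python sketch below computes the mean, median, standard deviation, and range for a hypothetical set of twelve survival times. Only the sample size (12), the minimum (3 weeks), the maximum (45 weeks), and the reported mean of 10.75 come from the text; the individual values themselves are invented for illustration.

```python
import statistics

# Hypothetical 12-patient sample consistent with the figures quoted in the text
# (minimum 3 weeks, maximum 45 weeks, mean 10.75); the other values are assumed.
survival_weeks = [3, 4, 5, 6, 6, 7, 8, 9, 10, 12, 14, 45]

mean = statistics.mean(survival_weeks)
median = statistics.median(survival_weeks)
stdev = statistics.stdev(survival_weeks)              # sample standard deviation
data_range = max(survival_weeks) - min(survival_weeks)

print(f"mean={mean:.2f}, median={median}, stdev={stdev:.2f}, range={data_range}")
# The single extreme value (45) pulls the mean well above the median, which is
# why the mean alone is a poor summary for such a small, skewed sample.
```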

Recommendations and conclusion

In the case of such a study, the statistics obtained should be treated as invalid and another study conducted with a larger sample size. With a larger sample of patients, the analysis will yield more precise and convincing results. The age range should also be widened to accommodate the larger sample, leading to a more accurate conclusion. In addition, other measures of central tendency, such as the mode and the median, should be incorporated into the analysis, since they give more precise information about the statistics under consideration (Alexander, Franklin & Duane, 2001; Plichta & Kelvin, 2001).

References

Alexander, M.M., Franklin, A.G. & Duane, C.B. (2001). Theory of statistics. Newhill: Tata McGraw-Hill.

Elizabeth, A.K. & Stancey, P. (2012). Research methods for health care. Web.

Plichta, S.B. & Kelvin, E.A. (2001). Munro's statistical methods for health research. Brandon Hill: Lippincott Williams and Wilkins.

Thermodynamics and the Arrow of Time

Introduction

Heat transfer is a common process in most machines, especially where two surfaces are in contact. The process is conceptualized as energy in transit. The transfer of heat is used to perform work, for example when the parts of the machine are in motion. Heat can also be generated when energy is transformed from one form to another. For example, a car engine burns fuel and heat is transferred when the fuel turns into a gas. However, most of the heat produced in the process does not perform work on the gas. On the contrary, some energy is released into the environment in the process. What this means is that the engine is not 100% efficient (OpenStax College [OpenStax] 5). The academic field dealing with the study of this phenomenon (heat transfer) is referred to as thermodynamics. Apart from addressing the issue of heat transfer, the academic field also analyzes the relationship between the phenomenon and work.

The current paper is written against this backdrop. In the paper, the author gives a brief overview of the second law of thermodynamics, which can be expressed in various ways; the different expressions are, however, equivalent to each other. In addition to the overview, the author defines the concept of entropy and how it relates to heat transfer. The author then explains the relationship between entropy and the arrow of time. Finally, the author outlines ways of overcoming the various problems associated with the arrow of time. The paper revolves around one main idea: a description of how the arrow of time, also referred to as aging, can be anticipated in design.

Thermodynamics and the Arrow of Time

The Second Law of Thermodynamics

It is possible to express the second law of thermodynamics in many specific ways. According to Thomas Kuhn, the second law was first conceptualized by two scientists in this field, Rudolph Clausius and William Thomson. As already indicated in this paper, most expressions of the law have been proved to be more or less equivalent to one another. Basically, the second law of thermodynamics addresses the issue of systems and the physical processes taking place within them. It states that in a closed system, one cannot finish any real physical process with as much useful energy as one had to start with (Harvey and Uffink 528). What this means is that heat engines based on the principles of thermodynamics cannot be 100% efficient (Harvey and Uffink 528); some energy will be lost to the environment as it is converted from one form to the other.

Furthermore, the second law of thermodynamics deals with the direction taken by spontaneous processes. In most cases, physical processes occur spontaneously and in one direction. The implication here is that the processes are irreversible under a given set of conditions. There are certain processes that never occur, suggesting that there is a law forbidding them to take place. The law forbidding such processes to take place is the second law of thermodynamics (OpenStax 5).

It is important to note that complete irreversibility is a statistical statement that is hard to realize exactly. An irreversible process is therefore conceptualized as one that depends on path: if a process can go in one direction only, then the reverse path differs fundamentally and the process cannot be reversed (Hawking 345). As a designer, I have to be alive to this reality. For example, I must be aware that heat transfer carries energy from high to low temperatures; a cold object in contact with a hot object will not get colder, and heat transfer from the hot object will instead raise its temperature. Another important point as far as the second law of thermodynamics is concerned is the relationship between mechanical energy and thermal energy. According to Maccone (3), mechanical energy can be converted to thermal energy. For example, when two surfaces moving in opposite directions come into contact (mechanical energy), the friction between them generates heat (thermal energy). As a designer, I am also aware of this relationship.

Entropy, the Second Law of Thermodynamics, and the Arrow of Time in Design

Entropy is sometimes referred to as the arrow of time. The concept is defined as the quantitative measure of disorder in a given system, whether closed or open. Entropy is conceptualized with reference to energy, which is the ability to do work. All forms of energy can be converted from one form to another, and all forms of energy can be used to do work. However, it is not always possible to convert the entire quantity of available energy into work. This unavailable energy is of interest to thermodynamics and to designers like me; its significance is accentuated by the fact that thermodynamics arose from efforts to convert heat to work. Entropy is a thermodynamic property used to measure the system's thermal energy per unit temperature that is not available to perform useful work. As a concept, entropy calls for a particular direction for time (Harvey and Uffink 530).

In an isothermal process, the change in entropy (ΔS) is computed in terms of heat and temperature: it is the heat transferred (Q) divided by the absolute temperature (T), that is, ΔS = Q/T. In any reversible thermodynamic process, the change in entropy is represented in calculus as the integral of dQ/T from the initial state to the final state (OpenStax 4).

Entropy increases when heat is transferred from hot to cold regions, since the entropy change at low temperature is larger than the change at high temperature. Therefore, the decrease in entropy of the hot object is smaller than the increase in entropy of the cold object. For a reversible process, however, total entropy remains constant. To this end, the second law of thermodynamics can be stated in terms of entropy: it is stated that "the total entropy of a system either increases or remains constant in any process" (Hawking 365).
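A minimal numerical sketch of this entropy bookkeeping is given below. The heat quantity and reservoir temperatures are assumed example values, not figures from the cited sources.

```python
# Entropy change when heat Q flows from a hot reservoir to a cold one.
Q = 1000.0      # heat transferred, joules (assumed)
T_hot = 500.0   # hot reservoir temperature, kelvin (assumed)
T_cold = 300.0  # cold reservoir temperature, kelvin (assumed)

delta_S_hot = -Q / T_hot    # entropy lost by the hot reservoir
delta_S_cold = Q / T_cold   # entropy gained by the cold reservoir
delta_S_total = delta_S_hot + delta_S_cold

print(f"dS_hot   = {delta_S_hot:+.2f} J/K")
print(f"dS_cold  = {delta_S_cold:+.2f} J/K")
print(f"dS_total = {delta_S_total:+.2f} J/K  (positive, as the second law requires)")
```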

The level of entropy increases with time. What this means is that things become more disorderly as time goes by. Thus, if one happens to find a stack of papers on their desk in a mess, they should not be surprised even if they had left them neatly stacked. They should realize that it is entropy at work. The scenario described above illustrates the nature of the relationship between entropy and the arrow of time (OpenStax 5).

Anticipating the Arrow of Time as Part of Design

The various laws of physics, including the thermodynamic laws, are described by Lieb and Yngvason (6) as time invariant. What this means is that the laws still hold even if time is reversed. But, according to OpenStax (4), time reversal contradicts nature and logic. The reason for this is that time progresses forward as opposed to backward. The contrast between time reversal and reality has created a reversibility paradox, which scientists are trying to understand (OpenStax 6).

In anticipating the arrow of time, scientists have proposed numerous solutions to the reversibility paradox. One suggested solution involves embedding irreversibility in physical laws. Another possible solution is establishing low-entropy initial states. One of the solutions in this area was suggested by Maccone, who assumes that quantum mechanics holds regardless of the scale considered (Hawking 350). The scholar shows that entropy can either increase or decrease: where an occurrence leaves in its wake a trail of information, entropy must increase, while entropy can decrease only for phenomena that leave no information behind to show that they have happened. The solution allows time reversibility to exist but not to be observed, a condition that is in line with the laws of physics and the second law of thermodynamics (Maccone 5).

Hawking (350) is of the view that the direction of time can be reduced to an entropy gradient. The scholar bases this on the assertion that our brains act like computers, which supposedly incur an entropic cost when using memory. It follows that the states of the world we remember are those with lower entropy than present and future states, because all subsystems of the universe partake in the same entropic flow. Hawking (355) concludes that the psychological arrow of time coincides with the thermodynamic arrow of increasing entropy.

Earman (45) formulated a condition that needs to be met if a theory is to be time reversal invariant. However, subjecting thermodynamics to conditions and criteria is not completely straightforward. The only clear instance where reference to time is explicitly made is in the distinction between the initial and the final. The distinction is evidenced in the adiabatic accessibility relation (Earman 45).

Lieb and Yngvason (6) approach the problem of arrow of time by taking into consideration the recent axiomatization of thermodynamics. The two scholars make efforts to establish the existence of a simple entropy function. The function so established is proved to increase under adiabatic processes. The approach adopted by the two scholars does not presuppose the differentiability of T, which means that it is capable of handling phase transitions. Furthermore, the approach guarantees that entropy is defined globally on T terms (Lieb and Yngvason 5).

Conclusion

In this paper, the author briefly explained the second law of thermodynamics and the ways in which the law is expressed. For the purposes of this paper, the author expressed the second law of thermodynamics in terms of entropy, explained the relationship between entropy and the arrow of time, and outlined possible ways of overcoming the problem of the arrow of time. Throughout the paper, the author described how the arrow of time, also defined as aging, can be anticipated in design; this description was the underlying theme of the paper. The definition of the second law of thermodynamics, the expression of the law in terms of entropy, and the description of solutions to address entropy were all constructed around this theme.

References

Harvey, Brown and Jos Uffink. "The Origins of Time-Asymmetry in Thermodynamics: The Minus First Law." Philosophy of Science 32.4 (2001): 525-538. Print.

Earman, Joseph. "An Attempt to Add a Little Direction to The Problem of the Direction of Time." Philosophy of Science 41.1 (2004): 15-47. Print.

Hawking, Stephen. The No Boundary Condition and the Arrow of Time, New York: Free Press, 2004. Print.

Lieb, Eric and Jose Yngvason. "A Fresh Look at Entropy and the Second Law of Thermodynamics." Physics Today 3.2 (2000): 3-7.

Maccone, Lorenzo. "Quantum Solution to the Arrow-of-Time Dilemma." Physical Review Letters 103.8 (2009): 2-5.

OpenStax College. 2012. College Physics. Web.

Atomic Force Microscopy and the Hall Effect

Introduction

Both Atomic Force Microscopy (AFM) and the Hall Effect represent unique methods of magnetic field measurement. While the AFM method employs a probe placed on its tip to take these measurements, the Hall Effect exploits the Hall voltage to achieve the same. These measurements come in handy in the characterization of semiconductor devices. Alongside them, other quantities such as the resistivity, the Hall coefficient, and the Hall mobility can be obtained and are also vital in that characterization. These quantities can also be determined with the van der Pauw (vdP) geometry, and they help in selecting an excellent material for a specific semiconductor.

With regard to AFM, there are more than a dozen scanning-proximity-based microscopes. Nonetheless, they operate under the same principle of inspecting a local property, such as height or magnetism. The micro-scale separation between the probe and a sample enables the equipment to inspect a specific area and portray an image of the sample. The image obtained by this method resembles that produced by a television set in that both consist of many rows or lines of information placed one above the other (Koo-Hyun, 2007). Of note, the performance of the equipment depends on the probe's size rather than on a system of lenses, which is absent from its design. As for the Hall Effect, it is a consequence of the Lorentz force that arises when a current-carrying conductor is placed in a transverse magnetic field (Koo-Hyun, 2007). This force results in the production of a voltage that cuts the magnetic flux at right angles, and this voltage depends on both the sign and the density of the charge carriers in a semiconductor.

In this report, we explore the science behind AFM and the principles involved in both the Hall Effect and the vdP techniques.

Atomic Force Microscopy

AFM is an important technique employed in nanotechnology research. To achieve its function, the AFM explores the surface topography; its tip can also act as a probe in electronics to characterize materials at the nanoscale. Material properties such as the resistivity, the capacitance, and the surface potential (Pd) can be determined concurrently with topographic information. To this end, novel and quite complex equipment has been developed that combines topographic and electrical information to quantify AFM-based electrical measurements at the nanoscale (Koo-Hyun, 2007). In spite of these complexities, the technique offers unique information vital for the electrical characterization of ever-shrinking electronic devices such as semiconductors. The equipment incorporates features such as photosensitive diodes, a flexible cantilever, sharp tips, high resolution, and a force feedback mechanism to achieve atomic-scale resolution.

In electrical measurements, AFM employs the force probe on its tip to determine the magnitude of the force (repulsive or attractive) between the tip and the sample. In the repulsive (contact) mode, the AFM tip on the cantilever briefly touches the material under investigation, and a detection system records the sample's local height via the vertical deflection of the cantilever (Koo-Hyun, 2007). Topographic images, on the other hand, can also be obtained with the tip in a non-contact mode. Unlike electron microscopes, AFMs have the ability to image material samples in air as well as samples submerged in liquids (Koo-Hyun, 2007).

To obtain accurate data on the cantilever deflection, a laser beam is applied. The cantilever acts as a reflecting surface for the laser beam (fig. 1), and an angular deflection of the cantilever produces twice as large an angular deflection of the reflected beam. The beam is detected by two photosensitive diodes, which give information about the laser spot location, and hence the cantilever's angular deflection, through the difference between the two photodiode signals (Willemsen, 2000). Notably, since the size of the cantilever is minute compared with the cantilever-detector distance, the motions of the probe are significantly amplified by this optical lever.

Figure 1: A cantilever atop a sample (A) and an optical lever (B) (Koo-Hyun, 2007).

What makes AFM cantilevers efficient is that they carry a highly flexible stylus (fig. 2) that exhibits a high resonant frequency, making them highly responsive. This stylus, with a small spring constant (approximately 0.1 N/m), does not damage the sample while scanning. The resonance frequency can be determined by the equation:

Resonance frequency = (1/2π) √(spring constant/mass).

As such, cantilevers come in small sizes and hence exhibit high resonance frequencies (see the equation above). Their resolution is enhanced by sharp, ultra-fine tips.

Figure 2: A flexible stylus (Koo-Hyun, 2007).
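As a rough numerical illustration of the resonance-frequency relation quoted above, the sketch below evaluates f = (1/2π)√(k/m) for an assumed soft cantilever; the spring constant and effective mass are order-of-magnitude assumptions, not values taken from the cited sources.

```python
import math

k = 0.1     # spring constant, N/m (assumed soft contact-mode cantilever)
m = 1e-11   # effective mass, kg (assumed)

f = (1.0 / (2.0 * math.pi)) * math.sqrt(k / m)
print(f"Resonance frequency: {f / 1e3:.1f} kHz")  # roughly tens of kHz
```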

In an effort to achieve high resolution, AFM incorporates piezoceramics in its system to position the tip and sample precisely. Piezoelectric ceramics are a unique kind of material that expands or contracts in the presence of a voltage gradient or, conversely, creates a voltage gradient when forced to expand or contract (Miller, 1988). One advantage of using piezoceramics is that three-dimensional positioning of a sample can be achieved with enhanced precision. Contemporary AFMs adopt tube-shaped piezoceramics (fig. 3) to enhance this function.

Figure 3: Tube-shaped piezoceramic (Binnig & Smith, 1986).

To control the force on the specimen, AFM uses the feedback mechanism shown in the figure below (fig. 4). The system incorporates a tube scanner, an optical lever, a cantilever, and a feedback circuit (Binnig & Smith, 1986). While the roles of the first three components have been explained above, the role of the feedback circuit is to steady the cantilever deflection by adjusting the potential difference across the scanner. Of note, the speed at which an image can be acquired depends on how quickly the feedback loop can steady the cantilever deflection; the sensitivity of the feedback loop therefore determines the performance of an AFM. Efficient feedback loops operate at a frequency of approximately 10 kHz, acquiring an image within about a minute.

Figure 4: Feedback loop (Koo-Hyun, 2007).
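To make the role of the feedback circuit more concrete, the following is a purely conceptual sketch of a proportional-integral (PI) loop that holds the cantilever deflection at a setpoint by moving the z piezo. The gains, the toy surface profile, and the one-line deflection model are assumptions for illustration only and do not describe any particular instrument.

```python
def afm_feedback_scan(surface, setpoint=0.0, kp=0.4, ki=0.1):
    """Toy PI loop: keep the cantilever deflection at the setpoint by moving the z piezo."""
    z_piezo = 0.0      # current z extension of the tube scanner
    integral = 0.0     # accumulated deflection error
    height_image = []  # recorded topography signal
    for surface_height in surface:
        deflection = surface_height - z_piezo   # toy deflection model (assumed)
        error = setpoint - deflection
        integral += error
        z_piezo -= kp * error + ki * integral   # move the piezo to cancel the error
        height_image.append(z_piezo)
    return height_image

# Toy scan line with a 2-unit-high step "feature" in the middle of the scan.
line = [0.0] * 20 + [2.0] * 20 + [0.0] * 20
print(afm_feedback_scan(line)[18:28])  # short damped transient, then the piezo tracks the step
```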

Hall Effect

A current-carrying conductor placed inside a magnetic flux that cuts it at right angles develops a potential difference between the edges of the conductor. This effect is what is referred to as the Hall Effect. The magnitudes of both the current (i) and the magnetic flux (m) affect the electric field intensity (e); the electric field intensity is directly proportional to both m and i. Mathematically, this is represented as below:

e= kim, where k is the Hall Constant.

Of note, a conductor exhibiting the Hall Effect shows different polarities depending on the conductor's atomic structure (Purcell, 2001).

Just as in conductors, the Hall Effect occurs in semiconductors. The degree of this effect in a semiconductor is what is dubbed the Hall mobility; mathematically, it is the product of k and the conductivity. Hall mobility comes in handy when choosing a suitable semiconductor device for a Hall generator, equipment used to determine magnetic field intensity (Whittle, 1973).

Technically, when a semiconductor is exhibiting Hall Effect, the holes shift to the P-type section (fig 5).

Figure 5: Vector notation of the Hall Effect (Jenkins, 1957).

When considering the vector notation with respect to the motion of the hole we get:

F = q(E + V × B) … (1).

Along the y axis, the equation below is obtained:

Fy = q(Ey − VxBz) … (2).

Theoretically, equation (2) means that the hole will move along the length of the bar only when a force due to the field (qEy) is established along the width of the bar; otherwise, the hole would be pushed in the y-direction with a net force of qVxBz. Once the holes drift steadily along the bar, Ey = VxBz, so there is no net sideways force. This Ey is what is termed the Hall field, and the resulting voltage, VAB = Eyw, is what is termed the Hall voltage. To this end, Ey is represented as below:

Ey = RHJxBz

(Tolansky, 1970).

In that respect, RH, Jx, and Bz represent the Hall constant, the current density, and the magnetic field, respectively. The Hall constant is represented as below:

RH = 1/(qpo), where po and q represent the density of the holes and the positive elementary charge, respectively.

Equally, when dealing with an N-type semiconductor, the same approach is applied.
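A minimal worked example of the relations above (RH = 1/(qpo), Ey = RH·Jx·Bz, VAB = Ey·w) is sketched below. All numerical values are assumed for illustration; they are not measurements from the cited references.

```python
q = 1.602e-19   # elementary charge, C
p0 = 1e22       # assumed hole density, m^-3
Jx = 1e4        # assumed current density along the bar, A/m^2
Bz = 0.5        # assumed magnetic flux density, T
w = 1e-3        # assumed bar width, m

RH = 1.0 / (q * p0)   # Hall constant
Ey = RH * Jx * Bz     # Hall field
V_hall = Ey * w       # Hall voltage, V_AB = Ey * w
print(f"RH = {RH:.3e} m^3/C, Ey = {Ey:.3f} V/m, V_Hall = {V_hall * 1e3:.2f} mV")
```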

The figure below shows an integrated circuit vital for Hall Effect measurements.

Figure 6: Hall Effect circuit (Halliday, 1962).

The Hall Effect can be used to study the correlation between majority charge carriers/mobility and temperature, a vital component in semiconductor characterization. Moreover, this effect comes in handy in the study of the magnetic fields in integrated circuit devices.

Van der Pauw (vdP) geometry

Van der Pauw (vdP) geometry is a technique synonymous with the study of Hall coefficients and resistivities in semiconductor devices. Importantly, as applied to Hall measurements, the simple form of the technique holds only in the absence of a magnetic field: in addition to the Hall contribution, measurements in a finite magnetic field generally include a term associated with field-induced changes in the longitudinal resistivity (Van der Pauw, 1958). However, this term can be eliminated by taking the difference between measurements made at opposite field polarities. With respect to resistivity measurements, the vdP technique offers solutions to problems experienced in conventional methods, such as imprecise placement of a specimen due to lack of knowledge about the sample's geometry. It also requires that the charge density be evenly distributed across the cross-section at any given point. In this report, we dwell mostly on the principle adopted by the vdP technique for resistivity measurements.

The vdP technique was initially designed to take resistivity measurements on thin, flat semiconductor samples (Van der Pauw, 1958). Later, van der Pauw showed that the technique can also be applied to irregularly shaped samples, without parameters such as the direction of the current having to be known. For this to hold, the following conditions must be obeyed. First, the contacts ought to be placed on the edges of the specimen; if the specimen is not thin, the contacts are treated as vertical lines running through the entire thickness, and the equipotential surfaces are assumed to make the specimen effectively cylindrical. Second, the contacts are to be reasonably small. Third, the sample's thickness ought to be uniform. Finally, the sample ought to be continuous (without holes or disjoint regions).

Some of these conditions are demanding and cannot be realized exactly in an experiment. Accordingly, the effect of the finite size of the contacts and of their imperfect placement was calculated for samples of different shapes (fig. 7) (Van der Pauw, 1958). To this end, the following equation was derived:

exp(−πRAB,CD d/ρ) + exp(−πRBC,AD d/ρ) = 1,

Where RAB, CD = (VD-VC)/iAB, and iAB is the current in the direction AB.

Figure 7: Resistivity measurements for an arbitrary shape.

An explicit equation for the resistivity can be obtained on the conditions that the sample contains a line of symmetry on which contacts A and C are placed, and that B and D are placed symmetrically relative to that axis (Van der Pauw, 1958). Under these conditions, the following expression for the resistivity was derived:

ρ = (πd/ln 2) RAB,CD.
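The sketch below illustrates both van der Pauw relations: the general implicit equation is solved numerically for ρ, and the result is checked against the symmetric closed form above. The measured resistances and the sample thickness are assumed example values.

```python
import math

def vdp_resistivity(R_ab_cd, R_bc_ad, d, tol=1e-12):
    """Solve exp(-pi*R1*d/rho) + exp(-pi*R2*d/rho) = 1 for rho by bisection."""
    def f(rho):
        return (math.exp(-math.pi * R_ab_cd * d / rho)
                + math.exp(-math.pi * R_bc_ad * d / rho) - 1.0)
    lo, hi = 1e-12, 1e6   # bracketing interval for rho in ohm*m (assumed wide enough)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:  # f increases with rho, so the root lies below mid
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d = 500e-6   # sample thickness, m (assumed)
R = 10.0     # measured resistance R_AB,CD = R_BC,AD, ohm (assumed symmetric case)
print(vdp_resistivity(R, R, d))         # numerical solution of the implicit equation
print(math.pi * d / math.log(2) * R)    # closed form for the symmetric case; should agree
```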

Conclusion

In conclusion, as shown in this report, all three techniques are designed to measure closely related parameters. While the AFM method employs a probe on its tip to take magnetic field measurements, the Hall Effect exploits the Hall voltage to achieve the same. These measurements are vital in the characterization of semiconductors; from them, quantities such as the resistivity, the Hall mobility, and the Hall coefficient, which differentiate different semiconductor materials, can be obtained. In addition, the van der Pauw geometry presents a particularly efficient technique for the measurement of resistivity.

References

Binnig, G., & Smith, D. (1986). Single-tube three-dimensional scanner for scanning tunneling microscopy. Review of Scientific Instruments 57 (8), 1687-1688.

Halliday, D. (1962). Physics. New York, NY: Wiley and Sons Press.

Jenkins, F. (1957). Fundamentals of Optics. New York: McGraw-Hill Press.

Koo-Hyun, C. (2007). Wear characteristics of diamond-coated atomic force microscope probe. Ultramicroscopy, 108 (1), 1-10.

Miller, G. (1988). Scanning tunneling and atomic force microscopy combined. Applied Physics Letters, 52 (26), 2233-2235.

Purcell, M. (2001). Electricity and Magnetism. New York, NY: New York University Press.

Tolansky, S. (1970). Multiple Beam Interferometry of Surfaces and Films. New York, NY: New York University Press

Van der Pauw, J. (1958). A Method of Measuring Specific Resistivity and Hall Effect of Discs of Arbitrary Shape. Philips Research Reports, 12 (1), 1-9.

Whittle, Y. (1973). Experimental Physics for Students. London: Chapman & Hall Press.

Willemsen, O. (2000). Biomolecular interactions measured by atomic force microscopy. Biophysical Journal, 79(6), 3267-3281.

Mechanisms of Change and the Fossil Record: Mass Extinction

Introduction

All living beings form ecosystems with many internal and external connections, and ecosystems are assembled into a single biosphere. This global system of life is in constant dynamic equilibrium, and the colossal complexity of the biosphere compensates for most negative impacts. Sometimes, however, ecosystems collapse; such a catastrophe can last from several millennia to millions of years. Each mass extinction is a unique event with its own set of causes, and in the near future humanity may become such a cause.

Geologic History of the Earth

In the history of our planet, there have been five mass extinctions: the Ordovician-Silurian (450-443 million years ago), the Devonian (372 million years ago), the Permian (253-251 million years ago), the Triassic (208-200 million years ago), and the Cretaceous-Paleogene (65.5 million years ago). In the course of the Ordovician-Silurian extinction, almost 85% of all species inhabiting the planet died (Mass extinctions, 2017). The Ordovician-Silurian extinction had two stages separated by about one million years. It is believed that the reason for the former was the movement of the ancient supercontinent Gondwana toward the South Pole, and for the latter, global warming. The trilobite family Trinucleidae, the brachiopod genus Thaerodonta, the brachiopod genus Plaesiomys, and others disappeared in this period.

The Devonian extinction occurred over a long period during which there were three disasters, each separated by about 10 million years. Since the species that died out were mainly from tropical groups, the cause may have been climate change due to a decrease in the amount of carbon dioxide in the atmosphere (Mass extinctions, 2017). Extinct groups included the odontopleurid, dalmanitid, and phacopid trilobites and the atrypid and pentamerid brachiopods.

The Permian extinction was the worst event that ever happened on Earth. Within about 60 thousand years, 96% of all marine species and about 70% of terrestrial species became extinct, which is why the restoration of the biosphere took a much longer time and all ecological ties were destroyed (Mass extinctions, 2017). It is believed that the most probable cause was increased volcanic activity in Siberia, which resulted in global warming. The tabulate corals, rugose corals, goniatitic cephalopods, productid brachiopods, and cladid crinoids disappeared in this period.

The Triassic extinction was driven by volcanic eruptions that accompanied the disintegration of the supercontinent Pangea. As a result, a considerable amount of carbon dioxide entered the atmosphere of the ancient Earth, which could have radically changed the planet's climate and killed living beings. Shelled cephalopods, brachiopods, corals, and sponges in the ocean, and phytosaurs, crocodile-like animals, on land were hit hard.

The Cretaceous-Paleogene event is the most recent mass extinction; it destroyed 75% of all species, including the dinosaurs, and it hastened the evolution of mammals and the emergence of man. According to the most common version, the catastrophe was caused by a falling meteorite about 10 kilometers in size, whose impact in the first stage caused massive fires, earthquakes, and giant tsunamis on Earth (Mass extinctions, 2017). The non-avian dinosaurs, many other vertebrates, the plesiosaurs, and the ichthyosaurs disappeared.

When animals began to share the planet with a new biological species, Homo sapiens, their habitat became many times more hostile. The successful adaptation of humans to the environment often resulted in the extinction of other species. In particular, the Steller's sea cow belonged to the group of sirenians. These animals were not afraid of man because they had never encountered him (Extinctions in the recent past and the present day, 2017). As a consequence of the mass capture of Steller's sea cows, less than 30 years passed before they were finally exterminated.

Bird watchers estimate that the population of passenger pigeons numbered 3-5 billion birds; the species was the most abundant, accounting for a third of all terrestrial birds in the United States. Tasty meat and ease of capture made them a primary source of poultry meat for production and consumption. As a result, billions of passenger pigeons first dwindled to millions, and then they were utterly exterminated by man.

Global Climate Change

The Earth's atmosphere allows sunlight to pass through while retaining thermal radiation from the surface. The accumulation of gases and other emissions in the atmosphere intensifies this process, strengthening the greenhouse effect (A blanket around the Earth, n.d.). This global problem has existed for a long time, but it becomes more pressing as technologies develop. Greenhouse gases are the collective name for a set of gases that can trap the planet's thermal radiation: they remain transparent in the visible range while absorbing in the infrared spectrum. The leading greenhouse gases are water vapor, carbon dioxide, methane, nitrous oxide, and chlorofluorocarbons.

One reason for the intensified greenhouse effect is the rapid growth of industry that uses oil, gas, and other fossil hydrocarbons as energy sources; they account for about half of all greenhouse gas emissions. In addition, trees assimilate carbon dioxide and produce oxygen in the process of photosynthesis; forests are the lungs of the planet, and their destruction threatens a sharp increase in the amount of carbon dioxide in the atmosphere (A blanket around the Earth, n.d.). The decay of farm animals' waste products also produces a large amount of methane, one of the most aggressive greenhouse gases.

Scientists have concluded that a sixth extinction is possible. In their report, 145 scientists from 50 countries named, in particular, changes in land use, hunting, climate change, and environmental pollution as the critical factors of destruction. According to the researchers, due to human activities, 75 percent of the land, 40 percent of the world's oceans, and 50 percent of river waters are already actively degrading (Díaz et al., 2019). Since the 16th century, at least 680 species of vertebrates have become extinct. According to average estimates across all groups of plants and animals, extinction now threatens 25 percent of species: more than 40 percent of amphibians, 33 percent of reef corals, and more than a third of marine mammals are at risk. Extinction threatens 10 percent of insect species, and if the trend continues, by the end of the century they may not remain on Earth at all. In summary, all this would radically change the entire biosphere of the planet.

Conclusion

To summarize, every mass extinction has left its mark on the history of life on the planet. According to many sources, the Earth is today experiencing another catastrophe associated with human activities; people have become a factor that changes the evolutionary fate of other earthly creatures. Nonetheless, understanding past processes, a responsible attitude toward nature, and attention to the current ecological situation can help preserve the biosphere.

References

A blanket around the Earth. (n.d.). National Aeronautics and Space Administration. Web.

Díaz, S., Settele, J., Brondízio, E. S., Ngo, H. T., Guèze, M., Agard, J., Arneth, A., Balvanera, P., Brauman, K. A., Butchart, S. H. M., Chan, K. M. A., Garibaldi, L. A., Ichii, K., Liu, J., Subramanian, S. M., Midgley, G. F., Miloslavich, P., Molnár, Z., Obura, D., Pfaff, A., & Zayas, C. N. (2019). Summary for policymakers of the global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. Web.

Extinctions in the recent past and the present day. (2017). Sam Noble Museums. Web.

Mass extinctions. (2017). Sam Noble Museums. Web.

Parasympathetic and Sympathetic Nervous Systems

Action Potential

An action potential is simply a communication mechanism in the nervous system. It travels across ion-charged plasma membranes and is characterised by very fast reversals of voltage. This is made possible by voltage-gated ion channels found along the axon, which enable conduction of the action potential. It can be explained in five phases: resting potential, threshold, rising phase, falling phase, and recovery phase. The resting potential phase is characterised by small movements of K+ ions in and out of the cell in order to keep the membrane potential constant and safeguard the cell. In this phase, the neuron is said to be at rest. The threshold is reached as Na+ ions enter the neuron, depolarising the membrane and reducing its negativity.

If the stimulus causes depolarisation to reach the threshold potential, the sodium channels allow many more Na+ ions to pass quickly, which drives the membrane voltage toward positive values. At the peak of the action potential, the sodium channels start to close and the potassium channels open, letting positive charges leave the cell. This causes the membrane potential to fall back toward its resting state. While the K+ channels remain fully open, so many K+ ions leave the cell that the membrane hyperpolarises beyond the resting potential; the potassium channels then close in order to restore the membrane's resting potential. The steady state is re-established by the normal gating of the potassium channels, hence the return to the resting membrane potential.
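The phase sequence above can be illustrated with a toy simulation. The Python sketch below uses a simplified leaky integrate-and-fire model rather than the full ionic-channel description given here, so the individual Na+ and K+ currents are abstracted into a single leak term; all parameter values and names are illustrative, not measured ones.

    # Simplified leaky integrate-and-fire neuron: illustrates the resting,
    # threshold, spike (peak), and recovery phases in arbitrary mV/ms units.
    def simulate_neuron(i_input=24.0, dt=0.1, t_max=100.0):
        v_rest, v_threshold = -70.0, -55.0   # resting and threshold potentials
        v_peak, v_reset = 30.0, -75.0        # spike peak and after-hyperpolarisation
        tau_m = 10.0                         # membrane time constant (ms)
        v, trace = v_rest, []
        for _ in range(int(t_max / dt)):
            # Rising phase: the input depolarises the membrane while the
            # leak term pulls it back toward the resting potential.
            v += ((v_rest - v) + i_input) / tau_m * dt
            if v >= v_threshold:
                trace.append(v_peak)         # peak of the action potential
                v = v_reset                  # falling/recovery phase: reset below rest
            else:
                trace.append(v)
        return trace

    spikes = sum(1 for v in simulate_neuron() if v == 30.0)
    print(f"Spikes fired in 100 ms: {spikes}")

With a constant supra-threshold input, the sketch produces a train of spikes, each followed by the brief hyperpolarisation described above.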

Functions of SNS

The Sympathetic Nervous System (SNS) originates from the spinal cord, with its outflow arising from the middle sections of the cord: it is traceable from the thoracic segments down to roughly the third lumbar segment. It belongs to the autonomic nervous system, whose primary function is the monitoring and regulation of involuntary processes.

Together, the SNS and the parasympathetic nervous system (PNS) form the autonomic nervous system, and the SNS's broad regulatory role follows from this background, although its best-known work is preparing the body for the fight-or-flight response. Signals running up and down the sympathetic chains aid in the regulation of homeostatic mechanisms in living organisms. The SNS prepares the body for stressful situations, which require a well-regulated pH and temperature. In many of its functions it counteracts the parasympathetic nervous system. The SNS innervates a large number of organs in order to perform its regulatory functions effectively.

It is assigned the role of controlling internal body organs such as the heart, lungs, eyes, blood vessels, sweat glands, digestive system, and kidneys, among others. In this process, the release of acetylcholine and noradrenaline is essential for the SNS to be effective. Messaging in the SNS is bidirectional, with both efferent and afferent signals that allow the rapid preparation of body organs, as in the fight-or-flight response. In essence, the SNS is associated with an accelerated heart rate, dilation of the bronchial and tracheal passages, dilation of the pupils, stimulation of glycogenolysis, vasoconstriction of the blood vessels, increased blood pressure, and other involuntary functions of the body. The list is long, and these regulatory mechanisms are triggered by the release of noradrenaline. The resulting sensations of cold, heat, or pain help the body protect itself against adversaries or the vagaries of the weather. The SNS also works together with other parts of the nervous system, and its coordination with structures such as the sinoatrial and atrioventricular nodes of the heart contributes to the overall picture of its functions.

The Functions of PNS

This nervous system works together with the sympathetic nervous system but in the opposite direction. Where the SNS raises blood pressure, the PNS lowers it; where the SNS accelerates the heart rate, the PNS decelerates it. The most important function of the PNS is to return the body's organs to their normal levels after the fight-or-flight response triggered by the SNS. The PNS regulates automatic reflexes as well as autonomic activities by innervating muscles such as the cardiac muscle. Its outflow includes four cranial nerves that originate from the brainstem, where its activities are initiated, and extends down to the sacral region. The most important of these is the vagus nerve, which supports rest-and-digest activity after SNS activation.

CV System

Right atrium, tricuspid (right atrioventricular) valve, right ventricle, pulmonary semilunar valve, pulmonary artery, lungs, pulmonary veins, left atrium, bicuspid (mitral) valve, left ventricle, aortic semilunar valve, aorta, arteries, body capillaries, veins, venae cavae, and back to the right atrium.

Definition of terms

  1. Filtration: It takes place in the glomerulus and involves blood pressure forcing plasma, dissolved substances, and small proteins out of the glomerular capillaries into Bowman's capsule, where the fluid is now referred to as renal filtrate. The high pressure of the blood drives this process.
  2. Reabsorption: It is the process that follows filtration. By active transport using ATP energy, useful materials, including glucose, amino acids, vitamins, and positive ions, are reabsorbed through the renal tubules. Negative ions such as chloride are absorbed through passive transport to balance the positive ions. Water is reabsorbed through osmosis, mainly in the proximal convoluted tubule. This process allows the regulation of glucose levels and inorganic ions in the blood, so the kidney's role in homeostasis is achieved.
  3. Secretion: It is the process by which waste products such as ammonia, creatinine, and other metabolic products are actively secreted and eliminated into the collecting duct as urine. In the process, hydrogen ions are secreted by the tubule cells into the tubular fluid in order to regulate the blood's pH.
Simple diagram of the negative feedback mechanism for up-regulation of CO and TPR.
Simple diagram of the feedback mechanism for ADH.
Simple diagram of the feedback mechanism for aldosterone.

Physiological effects of Angiotensin II

Angiotensin II is the effector hormone of the renin-angiotensin system. Its most visible and rapid physiological effects are vasoconstriction and blood pressure regulation. It is also associated with endothelial dysfunction, atherosclerosis, inflammation, and heart failure. Its effects depend on whether its action is acute or chronic.

List of Structures that Air passes through to alveoli

Nose, nasal cavity (or mouth cavity), pharynx, epiglottis, larynx, trachea, bronchi, bronchioles, and alveoli.

Body Ritual Among the Nacirema by Horace Miner

The life of the Nacirema is rich in rituals that shape the community and underline the role of magic. Three dominant concerns, namely the human body, appearance, and health, are usually mentioned in ceremonial activities. For example, the mouth fascination ritual proves the importance of the mouth's condition in social relationships and explains why its hygiene cannot be ignored. Magical elements are used to improve an individual's condition, and the Nacirema value them because it is impossible to explain the compounds, which are usually transcribed in an ancient, secret language. The article "Body Ritual Among the Nacirema" by Horace Miner explains how rituals are connected to personality structure, and the author notes the differences between male, female, and infant hygiene. I find these descriptions fascinating because they show how routine activities and obligations can be treated in a specific way and understood in terms of their magical or spiritual worth.

The article reveals a number of strange characteristics in familiar things. Instead of simply washing their teeth, people are made to realize that each step has a purpose and a further impact. Such an outside perspective on American life is not weird but unique, giving new and stronger meaning to things that modern people do not find necessary to notice. In today's world, there are many nations and cultures where similar routines are treated differently. For instance, in movies, many English characters are portrayed with faded smiles and bad teeth compared to Americans with bleached ones. Cultural relativism teaches people not to judge each other in terms of their personal beliefs, values, and traditions. This article helps me view the Nacirema as a normal community whose principles and ideas may gain meaning in modern America.

Interval Estimation for Correlation Coefficients

A correlation study focuses on the assumption that there is some interrelation between two variables that cannot be controlled by the researcher. In other words, correlation is not causation. For example, a correlation study might suggest that there is an interrelation between public health and self-esteem, but this cannot be proven as causal because factors such as social relationships, individual features, and others might play a role in the formation of self-esteem.

The criterion for the quantitative assessment of the relation between variables is the correlation coefficient. Interval estimation for correlation coefficients helps to evaluate the strength or weakness of this relationship as well as its form and direction. If the variables belong to an ordinal scale, Spearman's coefficient is used, while Pearson's correlation coefficient is appropriate for variables measured on an interval scale (Rosner, 2010). The Pearson correlation (typically just called the correlation) between variables might be positive, negative, or absent. Two variables are positively correlated if there is a direct relation between them: in such a unidirectional relation, small values of one variable correspond to small values of the other. Two variables are negatively correlated if there is an inverse, or multidirectional, relationship between them: small values of one variable correspond to large values of the other, and vice versa. The values of correlation coefficients always lie in the range from -1 to +1. Thus, estimating correlation coefficients allows establishing the direction and strength of the link between variables. The formula for the correlation coefficient is constructed in such a way that, if the relationship between the variables is linear, Pearson's coefficient accurately establishes the closeness of this connection (Rosner, 2010). Therefore, it is also known as the Pearson linear correlation coefficient. The calculation of Pearson's correlation coefficient assumes that the x and y variables are normally distributed.
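As an illustration of the two coefficients mentioned above, the short Python sketch below computes Pearson's and Spearman's correlations for a small set of hypothetical paired observations (the data and variable names are invented for this example, not taken from any study).

    import numpy as np
    from scipy import stats

    # Hypothetical paired observations, e.g. a health score and a self-esteem score.
    x = np.array([2.1, 3.4, 3.9, 4.8, 5.5, 6.7, 7.2, 8.0])
    y = np.array([1.8, 2.9, 4.2, 4.5, 5.9, 6.1, 7.8, 7.5])

    pearson_r, pearson_p = stats.pearsonr(x, y)     # interval-scale variables
    spearman_r, spearman_p = stats.spearmanr(x, y)  # ordinal (rank-based) alternative

    print(f"Pearson r  = {pearson_r:.3f} (p = {pearson_p:.4f})")
    print(f"Spearman r = {spearman_r:.3f} (p = {spearman_p:.4f})")

Both coefficients fall between -1 and +1; the Pearson value measures the linear relation, while the Spearman value is based only on the ranks of the observations.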

At the same time, intervals might be equal, when the difference between the maximum and minimum values in each interval is the same; unequal, when, for example, the interval width gradually increases and the upper range is not completely closed; open, when there is only either an upper or a lower boundary; and closed, when there are both lower and upper bounds.
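A minimal sketch of these interval types, assuming they refer to class intervals used to group observations (the sample values below are invented), might look as follows.

    import numpy as np

    values = np.array([3, 7, 12, 18, 25, 31, 47, 52, 68, 90])

    # Equal, closed intervals: every bin has the same width (0-25, 25-50, 50-75, 75-100).
    equal_counts, _ = np.histogram(values, bins=[0, 25, 50, 75, 100])

    # Unequal intervals with an open upper bound: widths grow, and the last
    # interval ("60 and above") has no upper limit.
    unequal_counts, _ = np.histogram(values, bins=[0, 10, 30, 60, np.inf])

    print("Equal-width counts:  ", equal_counts)    # [4 3 2 1]
    print("Unequal/open counts: ", unequal_counts)  # [2 3 3 2]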

The reliability of interval estimation for correlation coefficients is determined by the probability that an interval built from the sample results contains the unknown parameter of the population. An interval estimate of a parameter associated with a given probability is called a confidence interval. Scholars usually choose this probability close to one, as it can then be expected that a series of observations would be properly assessed; in other words, the confidence interval would cover the true value of the parameter. If the confidence level is close to one, the risk of error is insignificant. The risk of error is the level of significance, and its complement is the confidence level corresponding to the given interval. Public health studies usually set the confidence level close to 0.95, or 95%.
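One standard way to construct such a confidence interval for a Pearson correlation is the Fisher z-transformation (covered in Rosner, 2010). The sketch below uses hypothetical values of r and n; it is an illustration of the method, not a result from any particular study.

    import math
    from scipy import stats

    r, n = 0.62, 50                          # observed correlation and sample size (illustrative)
    conf_level = 0.95

    z = math.atanh(r)                        # Fisher z-transformation of r
    se = 1.0 / math.sqrt(n - 3)              # standard error of z
    z_crit = stats.norm.ppf(1 - (1 - conf_level) / 2)

    lower = math.tanh(z - z_crit * se)       # transform the bounds back to the r scale
    upper = math.tanh(z + z_crit * se)

    print(f"{conf_level:.0%} CI for r = {r}: ({lower:.3f}, {upper:.3f})")

With these illustrative numbers, the 95% interval runs from roughly 0.41 to 0.77, so such data would be consistent with a moderate to strong positive correlation in the population.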

Interval estimation for the correlation coefficient not only resolves the issue of dependence between variables but also measures the degree of their relation under a two-dimensional (bivariate) normal distribution of the variables. Therefore, in the normal case, one can test hypotheses and construct confidence intervals.

Reference

Rosner, B. (2010). Fundamentals of Biostatistics (7th ed.). New York, NY: Cengage Learning.