Analysis of Human Senses and Their Importance

Background

Humans function through sensory nerves and organs that coordinate with the brain to produce emotions and perform physical activities. The primary senses include taste, vision, hearing, touch, and smell. Smell describes how individuals perceive scents; vision describes eyesight; hearing is the ability to perceive sound; touch involves physical contact; and taste describes the ability to identify different flavors through the tongue. This essay explains how some of these senses react in particular conditions through practical experiments.

Visual Senses

Rods and Cones

In experiment 1A, I could identify the object shapes but could not tell the colors. While rods support low-light vision, cones are light receptors that assist in identifying colors, but only in brighter light because of their small numbers in the eye compared to rods. Therefore, objects in darkness are recognizable only by their shapes and physical form. I was also able to identify and define the formation of objects at a peripheral angle, since rods play a predominant role in peripheral vision. The same effect applies to stargazing, where stars are more defined when viewed at a peripheral angle than when viewed directly.

Rods are spread more widely across the retina than cones, which are concentrated at the center; this distribution produces averted vision and enables better peripheral views (Allen et al., 2019). In brighter light, objects are recognizable in both shape and color compared to low light in experiment 2A. Rods work better in darkness because they are more light-sensitive, while cones require brighter light and send color signals to the brain through the different wavelengths that define a particular color.

Experiment 1B tested the ability to identify a blind spot and the afterimage effect. The bars of the cage remain in the blind spot after the mouse disappears because they are not a part of the mouse but objects used by the brain to fill in the missing data of the mouse, creating an illusion of a complete mouse. When identifying the blind spot, a challenge occurs when both eyes are open because the visual field of one eye overlaps that of the other.

One can see an object in the blind spot because the brain generates the idea of the image by filling in or ignoring the missing parts. With the mouse in the cage, an individual sees the bars instead of the mouse because the mouse is the central focus of the blind spot; when the brain tries to fill in the missing details of the whole mouse, it draws on the surrounding bars to compensate for the missing parts, making the bars more visible.

In the afterimage experiment, I could see a black dot at the center of the circle, which appeared with the peace symbol visible in the first circle only. Also, the color of the peace symbol changed from black to white in the ring. This experience describes a negative afterimage effect, where the retina maintains the illusion of an object in a different color after prolonged observation (Li & Sun, 2021). While observing the dot in the second circle, the afterimage persisted for approximately one minute.

Compensation and Optical Illusion

Blind people have the same hearing and smelling capacity as individuals with sight. One explanation for why blind people perceive smell and touch better than people with sight is that they practice and have more experience using these senses. The human eye consists of various receptors that carry visual information to the brain for perception and identification, which gives the shape, color, and definition of an object (Laeng et al., 2018). This exchange of information forms a communication path in which the eye sends data to and receives data from the brain. Therefore, in some instances, a miscommunication occurs, sending wrong or opposite information and creating an optical illusion.

Color Blindness and Standard Vision Test

I have never met a color-blind female, since the majority of the population with such a disorder are men. This is because men possess just one X chromosome. Moreover, I could distinguish the orange circle from the green and identify the red star. At 20 feet, I was also able to see the letter E more clearly with both eyes open than with a single eye. However, the E becomes less visible with distance and disappears at 100 feet.

Taste and Smell

The Effect of Smell on Taste

In experimenting with how smell affects taste, a mint-flavored candy was used. On tasting the candy while pinching my nose, I could detect some sweetness and a cooling sensation, which became less cooling with repeated rinsing and tasting. I could smell the minty flavor on releasing the nose, which enhanced and spread the cooling sensation to the nose. The minty taste fades with continuous rinsing because the taste receptors adapt to the cooling effect, inhibiting further communication to the brain. In identifying perfume concentration at different exposure times, the subject described the perfume as having a strong scent on the first exposure.

After sniffing for ten minutes, the perfume's scent faded to a mild concentration. After staying out of the room for fifteen minutes, the subject described the fragrance as strong again, but the effect was less intense than on the first exposure. Kakutani et al. (2017) explain that olfactory receptors adapt to the molecules of the fragrance, which blocks further signaling and makes an individual temporarily immune to the scent.

Touch and Sensitivity

Using a paper clip, I experimented with differences in the simultaneous spatial threshold at distances of 1.5 mm, 5 mm, 10 mm, and 20 mm between the paper clip tips. At a 1.5 mm distance, the subject reported feeling two tips on the back of the arm but only one tip on the palm in the same test. The remaining distances produced two-tip sensations, and the back of the hand showed more spatial sensitivity than the palm. Fingertips had more spatial sensitivity than the other parts of the hand.

Temperature Perception

In the temperature perception experiment, the hand immersed in ice adapted to the freezing effect and became numb, while the hand in hot water had a burning sensation that became more comfortable within a few minutes. In the lukewarm water, the hand that had been in ice felt warm, while the hand that had been in hot water did not register much of a temperature change. When a hand is put in ice, the cold receptors depolarize quickly to regain an optimal state, while hot-water receptors hyperpolarize, creating only a slight sense of temperature change (Yogev & Ciuha, 2021). Since cold receptors depolarize quickly, moving a cold hand to lukewarm water provokes minimal reaction because of the initial depolarization, hence the lingering sensation of coldness.

Hearing and Balance

The subject was better able to locate sound while staying still than when turning the head during the experiment. Staying still gives an individual more focus in locating the noise source because motion distracts the individual's attention. Perceiving sound with an earbud blocking one ear is difficult, since the hearing system receives different information from opposite sides that the brain must compare to identify the sound source (Kniep et al., 2017). Animals that can turn their ears have an advantage in hearing, because perking the ears helps focus on a particular sound among the surrounding noises. Vision helps in balancing the head's position and directing body movement according to the physical surroundings. Alcohol affects a person's ability to balance because of its influence on the brain, hence the balance test for sobriety.

References

Allen, A. E., Martial, F. P., & Lucas, R. J. (2019). Form vision from melanopsin in humans. Nature Communications, 10(1). Web.

Kakutani, Y., Narumi, T., Kobayakawa, T., Kawai, T., Kusakabe, Y., Kunieda, S., & Wada, Y. (2017). Taste of breath: The temporal order of taste and smell synchronized with breathing as a determinant for taste and olfactory integration. Scientific Reports (Nature Publisher Group), 7, 1-9. Web.

Kniep, R., Zahn, D., Wulfes, J., & Walther, L. E. (2017). The sense of balance in humans: Structural features of otoconia and their response to linear acceleration. PLoS One, 12(4). Web.

Laeng, B., Kenneth, G. K., Hagen, T., Bochynska, A., Lubell, J., Suzuki, H., & Okubo, M. (2018). The face race lightness illusion: An effect of the eyes and pupils? PLoS One, 13(8). Web.

Li, H., & Sun, P. (2021). Visual characteristics of afterimage under dark surround conditions. Energies, 14(5), 1404. Web.

Yogev, D., & Ciuha, U. (2021). Perception of thermal comfort during skin cooling and heating. Life, 11(7), 681. Web.

Mathematical Induction: Origin, Key People and Usage

Mathematical induction is traditionally defined as a mathematical method, or a type of mathematical proof, used when it is necessary to prove, for instance, that given the product rule (fg)' = f'g + fg', for every integer n >= 1 the derivative of f(x) = x^n is f'(x) = nx^(n-1) (The technique of proof by induction, n. d., para. 1).
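To make the cited claim concrete, here is a minimal sketch (my own, not taken from the cited source) of how such a proof by induction can be written out in LaTeX, assuming only the product rule quoted above and the base case n = 1:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

\textbf{Claim.} For every integer $n \ge 1$, the derivative of $f(x) = x^{n}$ is
$f'(x) = n x^{\,n-1}$.

\textbf{Base case.} For $n = 1$: $\frac{d}{dx}\,x = 1 = 1 \cdot x^{0}$.

\textbf{Inductive step.} Assume $\frac{d}{dx}\,x^{k} = k x^{k-1}$ for some $k \ge 1$.
Writing $x^{k+1} = x \cdot x^{k}$ and applying the product rule $(fg)' = f'g + fg'$:
\begin{align*}
  \frac{d}{dx}\,x^{k+1}
    = \frac{d}{dx}\bigl(x \cdot x^{k}\bigr)
    = 1 \cdot x^{k} + x \cdot k x^{k-1}
    = (k+1)\,x^{k},
\end{align*}
which is exactly the claim for $n = k+1$. By induction, the formula holds for all $n \ge 1$.

\end{document}
```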

It is believed that the principle of mathematical induction was suggested by Plato in ca. 370 BC (Mathematical induction, n. d., para. 4). To be more exact, Plato mentions the concept of mathematical induction in his Parmenides. However, speaking of the person who first introduced the term, one must give credit to Euclid and his proof of the fact that the number of primes is infinite (Mathematical induction, n. d., para. 4).

Bhaskara, as the creator of the so-called cyclic method (Mathematical induction, n. d., para. 4), can also be mentioned among the range of people who worked on the problem of mathematical induction and its use as a means of solving mathematical problems.

Speaking of the actual application of the aforementioned method to the process of problem-solving in mathematics, mathematical induction is used widely in two realms, which are mathematical logic and computer science. In the latter, mathematical induction manifests itself as recursion, a method that is used to split a complex problem into a range of smaller and simpler ones. In the former, induction helps solve propositional logic problems (Mathematical induction, n. d.a, p. 3).
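To illustrate the parallel between induction and recursion mentioned above, the following short Python sketch (my own example, not taken from the cited sources) mirrors the structure of an inductive proof: the base case handles n = 1, and the recursive call plays the role of the inductive hypothesis.

```python
def sum_first_n(n: int) -> int:
    """Return 1 + 2 + ... + n, computed recursively.

    The structure mirrors a proof by induction:
    - base case: n == 1 returns 1;
    - recursive step: assume sum_first_n(n - 1) is correct
      (the inductive hypothesis) and add n to it.
    """
    if n == 1:                       # base case
        return 1
    return n + sum_first_n(n - 1)    # inductive step


if __name__ == "__main__":
    # The recursion can be checked against the closed form n * (n + 1) / 2.
    for n in range(1, 11):
        assert sum_first_n(n) == n * (n + 1) // 2
    print("recursion matches the closed form for n = 1..10")
```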

In fact, mathematical induction is used as a form of rigorous deductive reasoning (Mathematical induction, n. d., para. 3). It should also be kept in mind that mathematical induction is not a form of inductive reasoning, which, in its turn, is defined as a form of non-rigorous reasoning in mathematics.

Reference List

Mathematical induction. Web.

Mathematical induction. Web.

The technique of proof by induction. Web.

Anthropological Problems: Origin of Human Beings

The article Our True Dawn by Catherine Brahic describes the challenges that modern scientists face while trying to trace our ancestors. The author also describes the difference in the methods used by scientists. For geneticists, determining the period when humans split from apes means specifying exactly the time when their DNAs became different. Paleontologists, in their turn, simply search for fossil remains to determine their age and nature.

The author also describes the method geneticists can use to determine when the split between humans and apes occurred, but there is one catch in it: to get the answer, they would need to know the tempo of mutation, which is impossible without knowing the date of the split. To get around this, geneticists decided to take orangutans as an example, as the period of their split from our lineage is known. Having made the analysis, they managed to come to the conclusion that the split occurred between 4 and 6 million years ago.

However, paleontologists did not appreciate this result, as the number was hard to believe: the earliest hominin from Africa was dated to only 3.85 million years. Even a 5-to-6-million-year split was met with skepticism, as there are fossils of that period which display peculiarities of human beings. Moreover, history seems to support the ideas of fossil hunters, as researchers can observe changes in genomes in real time.

The discovery of three sets of remains called Ardi, Orrorin, and Toumai was supposed to prove the theory; however, they turned out to have human characteristics, though, according to the molecular clock, it was too early for that. The article ends with thoughts about the split between humans and Neanderthals, noting the difference in the dates determined with the help of the molecular clock.

The second article, First of Our Kind by Kate Wong, also reflects on the nature and origin of human beings. At the beginning of the work she states that climate had a great influence on the development of our ancestors. Warm weather favored the appearance of new grasslands, and in order to move across big plain surfaces, hominids had to have long, strong legs and skilled arms in order to create tools to cultivate these grasslands.

The great importance of the Malapa fossils is also underlined in the article. Their uniqueness and importance lie in the fact that they can offer quite a different point of view on the order in which new Homo traits appeared. However, the change affects not only the perception of general features but also some deeper levels, such as the evolutionary fractal. The author also describes the notion of mosaicism, which should be taken as a lesson by paleoanthropologists.

What is meant is the idea of interpreting bones not found together as belonging to entirely different creatures. Kate Wong introduces Berger's interpretation of the fossils, which differs from the one accepted by paleoanthropologists: Berger claims that A. sediba should be taken as the root of Homo and that all research should be directed this way. However, the traditional scientist W. Kimbel doubts this, referring to the great number of incongruities and problems with dates and geography.

At the end of the article, the author underlines the fact that the work and research connected with the Malapa fossils have just begun, as Berger, the main scientist working with them, is evidently planning to spend the rest of his life working with these fossils and trying to reveal new facts. With more than three dozen sites still to explore, he has a lot of work ahead of him.

Works Cited

Brahic, C. Our True Dawn. New Scientist. 24 Nov. 2012: 34-37. Print.

Wong, K. First of Our Kind. Scientific American. 2012: 30-39. Print.

Digestibility, Textural and Sensory Characteristics of Cookies

The article by Li et al. presents a study that reveals many aspects of the in-depth research processes behind innovative technologies. The focus of the study is a residue of enzyme-assisted aqueous extraction (REAE) called okara, a pulp made of soybeans after they are processed (Li et al., 2020). In this paper, the article Digestibility, textural and sensory characteristics of cookies made from residues of enzyme-assisted aqueous extraction of soybeans by Li et al. will be summarized.

Objectives

The paper discusses two distinct objectives that can be achieved through the presented method of research. The first set of results examined is the reduction of calories from the addition of okara (Li et al., 2020). The authors note that the dietary fiber resulting from the proposed process is expected to possess properties that would be beneficial for a cookie while being less expensive to produce than regular ingredients (Li et al., 2020). Therefore, the second objective that the paper discusses is the potential use of okara to provide the maximum output for processing plants without decreasing the quality of the resulting dough below acceptable levels (Li et al., 2020). It is worth noting that the slightly lowered characteristics included taste and texture.

Methods

First of all, all the materials are clearly stated in the paper to allow replication of the experiment. Each product is listed alongside its origin, concentration, dosage, and consistency. The process of residue preparation is described, starting with the specific method of okara acquisition. The recipe for the cookies is listed, including the ingredients and the methods of their preparation. Six samples with different percentages of REAE in the flour are brought in for comparison (Li et al., 2020). The disulfide content of the doughs is measured, in vitro starch digestibility tests are conducted in a controlled environment in incubators, and texture analysis is performed with a texture analyzer and a vernier caliper (Li et al., 2020).

The in vitro digestibility procedure takes into account the total starch content by using a different mass for each sample, eliminating differences in the rate of starch hydrolysis (Li et al., 2020). Untrained volunteers from different social backgrounds are selected to assist with the sensory evaluation (Li et al., 2020). The statistical analysis is performed on three consecutive tests whose results were averaged (Li et al., 2020). Many other necessary precautions against potential inconsistencies are taken and noted throughout the paper.
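As a rough illustration of the kind of analysis described here, the sketch below averages three replicate measurements per sample and computes a p-value with a one-way ANOVA; the sample names and numbers are hypothetical placeholders, not data from Li et al. (2020), and the actual statistical procedure used in the paper may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical hardness readings (arbitrary units) for three cookie samples,
# three replicate measurements each -- illustrative values only.
replicates = {
    "REAE_0":  [32.1, 31.8, 32.5],
    "REAE_15": [38.4, 37.9, 38.8],
    "REAE_30": [45.2, 44.7, 45.9],
}

# Average the three consecutive tests for each sample, as the paper describes.
means = {name: np.mean(values) for name, values in replicates.items()}
print("mean hardness per sample:", means)

# A one-way ANOVA across the samples yields a p-value indicating whether the
# differences among samples exceed the replicate-to-replicate noise.
f_stat, p_value = stats.f_oneway(*replicates.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```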

Results

The outcome of the study reveals meaningful differences in all parameters among the samples. The starch hydrolysis percentage decreases with the increase of REAE, alongside the glycemic and hydrolysis indices (Li et al., 2020). There is a notable linear reduction of gluten protein per cookie (Li et al., 2020). Hardness, weight, springiness, and chewiness increase, while volunteers note lowered sweet and fat tastes (Li et al., 2020). The cookies become more crunchy and hard, further decreasing the enjoyability of the product (Li et al., 2020). Each step of the outcome analysis is presented with a p-value to indicate the relationship between dependent and independent variables.

Conclusions and Future Applications

In conclusion, the presented research method has revealed the potential for REAE <30% to be used in mass production. The average scores for the analyzed statistics allow researchers to determine that a product containing such an amount of REAE is acceptable. The clarity of the authors' writing presents a convincing point that leaves no space for doubts and arguments. I found the sensory evaluation methods highly valuable for their thorough calculations, which highlight relationships between variables in an accurate and easy-to-read format. Moreover, the discussion part of the paper is filled with links to similar studies that support the authors' goals and the outcomes of the experiment, which gives the paper better credibility. These two research strategies will be crucial for my future research.

Reference

Li, Y., Sun, Y., Zhong, M., Xie, F., Wang, H., Li, L., Qi, B., & Zhang, S. (2020). Digestibility, textural and sensory characteristics of cookies made from residues of enzyme-assisted aqueous extraction of soybeans. Scientific Reports, 10(1). Web.

America and Germany Comparison

This paper is aimed at comparing such countries as the United States and Germany. In particular, it is necessary to focus on such aspects as employment, education, and medical insurance. The discussion of these aspects is important for understanding the policies of the governments and the experiences of many people living in these countries.

On the whole, it is possible to argue that in the United States, the government is less likely to intervene in various areas of human activity. These are the main questions that should be discussed more closely.

It should be mentioned that employers in the United States require more commitment from their workers. For instance, one can point out that American employees are more likely to work overtime. In contrast, in German companies, this behavior is less frequent. Apart from that, German workers tend to take vacations more often. This is one of the issues that should not be overlooked. Additionally, one should keep in mind that the role of trade unions is more significant in Germany than in the United States.

In America, the employees of private businesses are predominantly non-unionized. To a great extent, this trend shows that the bargaining power of employers is weaker in Germany than in America. As a rule, businesses in Germany find it more difficult to terminate workers. This is one of the aspects that can be distinguished because it throws light on the experiences of many workers and on labor relations in these countries.

Furthermore, one should pay close attention to education in the United States and Germany. It is important to mention that in Germany more attention is paid to the vocational training that can be provided to high-school and college students. Furthermore, in this case, policy-makers focus on the partnership between educational organizations and businesses. This approach is important for ensuring that a learner is better able to meet the requirements set by employers.

It is also vital to mention that in German schools, students are taught foreign languages at early stages of education. This approach can be critical for increasing students' employment opportunities. One should bear in mind that homeschooling is legal in the United States, and many parents take this option. In turn, homeschooling is prohibited in Germany and some other European countries. Thus, in Germany, the government exercises closer control over education, and this difference should not be disregarded.

Finally, much attention should be paid to medical insurance because this issue is relevant to many people, especially those who cannot afford medical services. It should be noted that Germany has long relied on governmental insurance that covers the majority of the population.

In contrast, in the United States, private insurance companies play a more important role. Furthermore, modern American policy-makers lay stress on the need to promote universal coverage. However, this policy is often criticized by various stakeholders such as medical workers.

On the whole, this discussion indicates that countries representing Western culture can differ significantly from one another. In particular, in the United States, the government is less likely to intervene in areas of human activity such as employment or education. In turn, German society is marked by an increased role of the state. These are the main details that can be singled out.

Slow Pace of Solar Installation in India

The Indian solar installation program has slowed down year after year, according to Mercom Capital Group. The consultancy and communications firm had predicted that installations would reach 1,000 MW by the end of this year. Between 2012 and 2013, solar installations in India increased by only 12 MW; the growth was so elusive that the firm's comprehensive survey had to identify the numerous factors that had caused it to slow (Sengupta n.pag).

For instance, the delay in bringing the Jawaharlal Nehru National Solar Mission (JNNSM) PV projects online contributed to the slow pace of solar installation in India; these projects are expected to remain offline until June 2015. In addition, the failure of Concentrated Solar Power (CSP) projects in India, as well as the current trade dispute pitting the US against India at the WTO, has played a part in the delayed solar installations.

Concentrating Solar Power is one of the best sustainable options for acquiring energy from the sun's rays, using clean mirrors placed at specific angles in hot regions of the world. Given the absence of greenhouse gas emissions and the possibility of producing cheap electricity, CSP is not only environmentally friendly, protecting the global climate, but also economically sustainable.

Fossil fuel energy remains the major emitter of CO2 into the environment; it also uses non-recyclable products, as opposed to CSP, which uses inexhaustible sunlight and recyclable parabolic mirrors.

The US claim that the Domestic Content Requirement (DCR) regulations discriminate against its solar cells by pushing the market toward thin-film technology (Prabhu par. 3) has also been a factor in the slow pace of solar installation. With the constantly escalating price of diesel, the Indian population is in dire need of cheap and environmentally friendly power.

Apart from the reduction in project margins that results from reverse auctions, government agencies have also delayed crucial state policies that address the installation of solar panels. For instance, the enforcement of a real Renewable Purchase Obligation (RPO) has taken different twists, making it difficult to implement the solar projects steadily.

Even though India's economy is growing at a slow pace, Mercom asserted that the country's solar market is still unexploited. In a move to counter the slow process of solar installation and constant power shortages, the government, in January 2013, deregulated diesel prices by increasing the price by Rs 0.50 per month for retail customers (Sengupta n.pag). As a consequence, the power situation has slowed industrial growth. Moreover, generating power through a backup system has become a costly alternative.

Tentatively, the government of India is attempting to import subsidized diesel to serve the needy market. Notably, the past 13 months have seen the price of diesel increase by 15%; solar, therefore, has become a very attractive alternative. Agriculture, businesses, and industries are hugely affected by power shortages; the government should put in place and implement relevant policy objectives in order to increase the growth of solar installations all over the country.

For the second time, India has delayed the solar bid cut-off date; this unexpected occurrence was a major concern among investors in the country, as well as among solar designers. Based on the delays witnessed in the first phase of JNNSM, the 2022 target of 20 GW of installed solar capacity is highly likely to be missed (Peschel par. 6).

The government's move to assure citizens of its commitment to fully roll out the solar projects by allowing the lowest bidders to win the tenders has received mass criticism. Critics argue that such a move would result in bad competition and bad projects, since most bidders may only target the 30% cost coverage for a project from viability gap funding (VGF) (Sengupta n.pag).

The situation is likely to pressure solar developers into bidding for the projects, as there is no alternative source of funding. Ritesh Pothan, a solar consultant in India, adds that the bidding and auctioning processes would create cheap solar projects.

The future of solar projects is not as bleak as some pundits may believe; the signing of the purchase agreements for NSM phase II, batch one, is likely to occur in April 2014, given that its allocation process was cleared by early March. Considering the market disappointments of 2013 that followed the slow pace of installation, the government of India has to come up with clear, simple, and friendly project policies so that the evolving solar market experiences a higher growth rate than countries like Japan and China.

Recent commitments by government agencies and environmental organizations could see CSP provide 7% of usable power by 2022 (Peschel par. 9). To encourage CSP, the government of India should offer investment incentives and tax holidays to firms that intend to set up such environmental projects.

This move, coupled with improved operations, research and development, and competition among firms, would lower the generation and supply cost of solar power. Even though a CSP plant requires a high initial cost, a long-term cost-benefit analysis of the project reveals cheap operational costs along with constant generation of electricity.

The reverse auctions that the government had proposed in the bidding process hinder the economic viability of the projects. Therefore, India has to strive to improve every day by implementing relevant policies that work towards solving the rampant power shortage affecting millions of Indians. The Solar Energy Corporation of India (SECI) should avoid continuous delays in the solar projects to enable India to be at par with the rest of the world in the implementation of clean sources of energy.

[Figures: India cumulative solar installations (JNNSM vs. states); India solar installations (MW); all-India solar PV installations by policy type (MW) (Prabhu par. 7).]

Works Cited

Peschel, T. India's National Solar Mission to miss capacity targets. 2014. Web.

Prabhu, R. Indian Solar Market Update - 2nd Quarter 2013. Web.

Sengupta, D. Indian solar installations are forecast to be approximately 1,000 MW. The Economic Times [Mumbai] 2014: n. pag. Web.

The Book How to Lie with Statistics by Darrell Huff

The book under analysis is called How to Lie with Statistics. It is written by Darrell Huff. This book is not his first work; he had previously written Career Story of a Young Commercial Photographer and The Dog That Came True. However, only the book under analysis became a real bestseller. Devoted to a topic that seems absolutely dull and uninteresting, this book nevertheless turned out to be very engaging for the general reader.

It is one of the most popular books of the second half of the 20th century devoted to issues connected with science and mathematics. The reason for its popularity is very simple: the author managed to use several very effective devices to make his book successful. The first thing that strikes a reader's eye is the book's title. The author says directly that statistics lie, even though scientists have been trying to assure people that they can believe the data obtained with their help.

The provocative title at once made this book very recognizable and widely discussed. The second source of its success is the author's style of writing. The book is written in a very simple manner so that nonspecialists in statistics can grasp its main idea. The author's style is not very complicated; he tries to present his ideas in a humorous key to make the reader more interested. Moreover, there is a great number of different examples in this book, which is why it is very easy and fascinating to read.

The third factor in the book's success is its illustrations. It is rather strange to see cartoons in a book devoted to such a serious question, yet these pictures make the work less dull and help the reader understand the content better, as almost every statement by the author is supported by clear evidence and a vivid illustration.

The large number of caricatures proves this point, adding an element of fun to the book. The last thing that guaranteed the book's overwhelming success is its content. Every chapter has its own peculiarities and can be appreciated by different readers. No information is repeated, as every new section of the book gives the reader something new and acquaints them with most of the facts connected with statistics.

From the beginning of the book, the author says that it is devoted to describing the ways statistics are used to deceive people (Huff, 1993). The book has a clear structure and consists of ten chapters. Each chapter is a complete story that can be read separately and understood on its own.

The first chapter centers on the problem of sampling. The author describes its deceptive character and shows how interviewers influence the results of a sample, unconsciously changing the answers of respondents. The author states that there is only one way to get fully reliable information about some data. He gives the example of describing a barrel full of beans: if a person needs to know their quantity, he or she should simply count them (Huff, 1993).

The chapter The Well-Chosen Average is devoted to the author's reflections on the arithmetic mean, the median, and the mode. Using different examples, the author tries to show the reader how the choice of a particular average influences its value for the same data set. The author wants to show the reader the possibility of manipulation by choosing whichever measure of the average best fits one's purpose.
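Huff's point about the well-chosen average can be reproduced in a few lines of Python; the salary figures below are my own illustrative numbers, not examples from the book, but they show how the mean, median, and mode of the same skewed data set tell three different stories.

```python
from statistics import mean, median, mode

# Hypothetical annual salaries in a small firm (dollars): one large value
# skews the distribution, as in Huff's income examples.
salaries = [25_000, 25_000, 28_000, 30_000, 32_000, 40_000, 250_000]

print("mean:  ", mean(salaries))    # about 61,429 -- pulled up by the outlier
print("median:", median(salaries))  # 30,000 -- the middle worker
print("mode:  ", mode(salaries))    # 25,000 -- the most common salary
```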

It should be admitted that this chapter serves as a kind of turning point. The book grips the reader, and it is impossible to stop reading without learning about the other ways of manipulation. Moreover, the clarity of the examples should be underlined. The author gives a lot of evidence to support his words; however, the book is not overloaded with it, and the work remains easy to read. To a great degree, this is because the examples are very appropriate and help the reader understand the author's ideas better.

The chapter The Little Figures That Are Not There is the next one that helps the author share his ideas. He underlines important aspects of statistical research that are often not mentioned when the results of that research are announced. Moreover, the author gives some examples of manipulation of the sample size.

One of the examples shows how a toothpaste was tested. The group consisted of only six people, a number far too small for reliable research; the results, however, were profitable for the manufacturing company. The author teaches the reader how important it is to pay attention to the numbers and not to trust statistics blindly.
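Huff's warning about tiny samples can be illustrated with a short simulation; the setup below is my own hypothetical sketch, not the toothpaste study from the book. Even when the product has no real effect, a six-person group fairly often shows a favorable result purely by chance, so a promoter only needs to repeat the trial until such a group appears.

```python
import random

random.seed(42)

def favorable_trial(group_size: int = 6, p_improve: float = 0.5) -> bool:
    """Simulate one small trial in which each participant 'improves' purely
    by chance with probability 0.5; call the trial favorable when at least
    four of the six participants improve."""
    improved = sum(random.random() < p_improve for _ in range(group_size))
    return improved >= 4

trials = 10_000
favorable = sum(favorable_trial() for _ in range(trials))
print(f"share of 'favorable' six-person trials: {favorable / trials:.1%}")
# Roughly a third of the trials look favorable even though the product
# does nothing at all.
```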

In the rest of the chapters, the author continues to reflect on the use of statistics for self-serving purposes. There is a great number of different thoughts that will seem very interesting to readers. The author also tells us about the method of manipulation connected with graphic data.

Different ways of using infographics are described. The author shows how it is possible to deceive a viewer with the help of a slight change in scale or simply by making some visual aids disproportionately bigger. A great number of diagrams and graphs help the reader understand the author's idea better.
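The scale trick Huff describes can be demonstrated with a small matplotlib sketch (my own example, not a figure from the book): the same gentle trend looks dramatic or modest depending only on where the y-axis starts.

```python
import matplotlib.pyplot as plt

# The same small year-over-year change, plotted twice.
years = [2018, 2019, 2020, 2021]
sales = [100, 101, 102, 103]

fig, (ax_truncated, ax_honest) = plt.subplots(1, 2, figsize=(8, 3))

# Truncated y-axis: a roughly 3% change looks like explosive growth.
ax_truncated.plot(years, sales, marker="o")
ax_truncated.set_ylim(99.5, 103.5)
ax_truncated.set_title("Truncated axis (misleading)")

# Axis starting at zero: the same change looks modest.
ax_honest.plot(years, sales, marker="o")
ax_honest.set_ylim(0, 110)
ax_honest.set_title("Axis from zero")

for ax in (ax_truncated, ax_honest):
    ax.set_xticks(years)   # show each year as a tick

plt.tight_layout()
plt.show()
```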

The book ends with a very interesting chapter in which the author tells the reader how to live with the knowledge obtained and summarizes all the material given. He points out that it is very easy to lie with statistics and that people should be aware of this fact.

Having read the whole book, one can draw certain conclusions. First of all, it should be said that the book is not liked by statisticians (Steele, 2005); however, it can still be recommended to everyone. It is very easy to read, as it is written in a clear and humorous key, and it can be easily understood by nonspecialists. The great number of different examples and pieces of evidence given is an incontestable advantage of this book.

This helps the reader understand the main idea better. Keeping in mind the peculiarities of the book, it is possible to understand why it became so popular. Moreover, it is possible to outline four main aspects which make this book so interesting.

They are the title, the author's style of writing, the illustrations, and, of course, the book's content. By successfully combining these four elements, the author managed to create a real masterpiece which is still relevant even nowadays. Ideas obtained from this book can help a person later in life. With this in mind, it is possible to say that the book How to Lie with Statistics is worth reading.

Reference List

Huff, D. (1993). How to Lie with Statistics. New York: W. W. Norton & Company.

Steele, J. (2005). Darrell Huff and Fifty Years of How to Lie with Statistics. Statist. Sci. 20(3), 205-209. doi:10.1214/088342305000000205.

Researching of Kuhns Scientific Change

Kuhn considers science to be a social institution in which social groups and organizations operate. The main unifying principle of the community of scientists is a unified style of thinking and the recognition by this community of specific fundamental theories and methods (Sismondo 12). Kuhn calls these provisions uniting the community of scientists a paradigm. According to Kuhn, the development of science is a leap-like, revolutionary process, the essence of which is expressed in a paradigm shift. The development of science is like the development of the biological world: a unidirectional and irreversible process.

A scientific paradigm is a set of knowledge, methods, patterns of problem-solving, and values shared by the scientific community. The next level of scientific knowledge after the paradigm is a scientific theory. In the development of science, Kuhn identifies four stages:

  1. Pre-paradigm, for example, physics before Newton, when the appearance of anomalies, that is, unexplained facts, was observed.
  2. The formation of a paradigm results in the appearance of textbooks that reveal the paradigm theory in detail.
  3. The stage of normal science. This period is characterized by the presence of a straightforward program of activities (Sismondo 15). Predicting new types of phenomena that do not fit into the prevailing paradigm is not the goal of normal science.
  4. Extraordinary science: a crisis of the old paradigm, a revolution in science, and the search for and design of a new paradigm. Kuhn describes this crisis both from the content side of science's development and from the emotional-volitional side.

Kuhn believes that the choice of a theory for the role of a new paradigm is carried out through the consent of the relevant community. Kuhn rejects the principle of fundamentalism, since a scientist sees the world through the prism of the paradigm accepted by the scientific community, and the new paradigm does not include the old one. Kuhn breaks with the tradition of objective knowledge independent of the subject (Sismondo 21). For him, knowledge is not something that exists in an imperishable logical world but something in the minds of people of a particular historical epoch, burdened with their prejudices. Kuhn's most outstanding merit is that he introduces the human factor into the problem of the development of science, paying attention to social and psychological motives. Kuhn proceeds from the idea of science as a social institution in which certain social groups and organizations operate.

Work Cited

Sismondo, Sergio. An Introduction to Science and Technology Studies. Wiley-Blackwell, 2010.

The X and Y Sex Determining Chromosomes

The genome of human beings is organized into twenty-three chromosome pairs, of which only one pair is responsible for sex determination, with each parent contributing one chromosome of the two. The X and Y are the two sex chromosomes that determine the sex of an embryo (Szalay, 2017). It is notable that mothers only pass on X chromosomes to their children. As for fathers, females inherit an X chromosome from them to form the XX genotype, and males inherit a Y chromosome from the father to form the XY genotype. Therefore, the Y chromosome is essential in terms of its presence or absence because it carries the genes that prevent the default biological development and cause the formation of the male reproductive system. This means that the biological default entails the development of the female reproductive system.

In genealogy, the lineage of males within a family is traced through the Y chromosome because it can only be passed down by the father. The Y chromosome is the defining one in determining the male sex and is about one-third the size of the X chromosome. It contains around fifty-five genes, while the X chromosome contains around nine hundred. The Y chromosome carries the SRY gene, which causes testes to form in the embryo, later resulting in the development of external and internal male genitalia. If a mutation has occurred in the SRY gene, it is possible that the embryo will develop female genitalia regardless of the XY chromosome pair.

It is also notable that a variation in the number of sex chromosomes in a cell is a relatively common occurrence. For example, some men may have more than two sex chromosomes in all of their cells, which is referred to as Klinefelter syndrome, characterized by the XXY variation (Klinefelter syndrome, 2020). Besides, it is possible for men to lose the Y chromosome from some cells with age, and smoking can increase the rate of this loss. Some genes that were considered to be lost from the Y chromosome can actually relocate to other chromosomes. Most of the Y chromosome is made up of repeating DNA segments, which means that specialized technology is needed to determine the arrangement of these highly similar segments. Moreover, many health conditions are considered to be associated with changes in genes expressed on the Y chromosome, and more research is currently being done in this area.

The X chromosome is much larger than its Y counterpart. It is carried by the egg, which means that females can only pass on an X chromosome to their offspring. It is also notable that there is quite a large number of genes to be found on the X chromosome. It has been estimated that it contains around 155 million base pairs, which translates to between 900 and 1,400 genes embedded in the X chromosome. This means that the chromosome is responsible for carrying around 5% of a human's total DNA in the cell (National Human Genome Research Institute, 2021). Considering the genes that the X chromosome carries, it is more often the case that sex-linked disorders are encoded on the X chromosome. Since males have no second X chromosome to safeguard against such mutations, these disorders are more predominant in males than in females.
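As a quick back-of-the-envelope check of the roughly 5% figure quoted above, assuming the commonly cited total of about 3.1 billion base pairs in the human genome (an assumption of mine, not a number taken from the cited sources):

```python
# Rough arithmetic check: share of the genome carried by the X chromosome.
x_chromosome_bp = 155_000_000       # ~155 million base pairs, as cited above
total_genome_bp = 3_100_000_000     # commonly cited human genome size (assumed)

share = x_chromosome_bp / total_genome_bp
print(f"X chromosome share of the genome: {share:.1%}")   # prints 5.0%
```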

References

Klinefelter syndrome. (2020). Web.

National Human Genome Research Institute. (2021). X chromosome. Web.

Szalay, J. (2017). Chromosomes: Definition & structure. Web.

Formation and Weathering of Rocks

Sedimentary rocks are traditionally formed in the process of lithification, according to Lutgens and Tarbuck (58). As a rule, lithification presupposes that sediments remain under pressure; the sediments expel the so-called connate fluids in the process and finally turn into sedimentary rock.

The difference between a glassy and a porphyritic texture in igneous rocks is rather basic. A glassy texture presupposes that the rock is very smooth and has a homogeneous surface. A porphyritic texture, in its turn, means that larger crystals are embedded in a finer-grained groundmass.

Known for its volcanic origin, pumice is formed when rock is thrown out of the crater of a volcano under very high pressure and temperature (Lutgens and Tarbuck 62). As a result, the specific texture of pumice is created.

A coarse-grained igneous rock of basaltic composition is intrusive. A fine-grained igneous rock of felsic composition is extrusive.

As Lutgens and Tarbuck explain, mafic rock, in contrast to granitic rock, is extremely dense. As a rule, the former tends to sink underneath the latter due to the aforementioned difference in density (Lutgens and Tarbuck 71).

Though both mechanical and chemical weathering contribute both to the process of erosion and to the process of rock formation (Lutgens and Tarbuck 65), there is a tangible difference between the two.

While chemical weathering is traditionally portrayed as the key force in the rock formation process, mechanical weathering is usually viewed as a supplementary one. However, without mechanical weathering, which helps disintegrate rock into smaller particles, the process of chemical weathering, which requires that minor elements be removed from the rock, could not proceed effectively.

Life forms, in their turn, also affect the process of weathering to an impressive extent. The organic processes that occur in the soil lead to the creation of humus. The latter, in its turn, contributes to the faster decomposition of the elements of soil and rock into smaller particles, which will later be split into even tinier pieces by chemical weathering.

The first and most important difference between detrital and chemical sedimentary rocks concerns their origin. Unlike chemical sedimentary rocks, the material for which is traditionally transported in solution, detrital sedimentary rocks are formed from material that has been transported as solid particles.

A coarse crystalline chemical sedimentary rock that does not contain calcite can be identified as claystone or rock salt.

There is a major difference between breccia and siltstone. Unlike siltstone, which is composed of silt-sized sediment grains, breccia is made of components of a larger size. As a rule, these components include boulders, pebbles, gravel, and cobbles.

When magma comes into contact with cool country rock, the temperature around the rock rises. If there is an intrusive igneous body in the vicinity, contact metamorphism occurs, and metamorphic rocks are formed.

Two examples of coarse grained nonfoliated metamorphic rocks are marble and quartzite (Lutgens and Tarbuck 61).

Though schist and marble share a range of similarities, they still cannot be considered entirely the same. Indeed, taking a closer look at the two, one will notice that schist is foliated; marble, in its turn, is not. In addition, unlike schist, which is traditionally metamorphosed under directed pressure, marble is metamorphosed largely by heat.

Works Cited

Lutgens, Frederick K., and Edward J. Tarbuck. Minerals: Building Blocks of Rocks. Foundations of Earth Science. 7th ed. Prentice Hall, 2014. Print.