Grand Canyon Geology in Two Articles

The first article covers the geology of the Grand Canyon, where a series of events led to the emergence of the canyon. It began with the formation of the inner gorge's metamorphic and igneous rocks two billion years ago, which were uplifted between 70 and 30 million years ago due to tectonic shifts. Subsequently, the Colorado River started to erode and carve the plateau, gradually widening the pathway and creating the canyon, a process that began 5-6 million years ago (National Park Service par. 3). There is one main reason the canyon is as large as it is today: the Colorado River has been eroding and carving the plateau for almost six million years, which is a substantial amount of time in which to create the given landscape.

The entire process of the canyon's formation is called downcutting, which refers to a river's continuous erosion of the rock mass. The sheer scale of the downcutting is directly dependent on a number of characteristics of the river, such as flow, volume, and slope (National Park Service par. 20). The river itself participates in the deposition of rocks, which means that there are two categories of rocks. The first group is comprised of rock deposits older than the river, and the second group consists of deposits younger than the river, which the river itself laid down. An interesting fact in this regard can be found in the Spillover Theory, which claims that the ancestral Colorado River was temporarily dammed behind the Kaibab Plateau and other high points (National Park Service par. 44). In other words, the damming of the river's flow may explain the active downcutting, since a higher-volume river was subsequently released.

The second article focuses on essential and interesting concepts centered around the Grand Canyon. This UNESCO World Heritage Site reflects the geological history of the landscape. Stratigraphy allows experts to observe and analyze the layering patterns of the rock in order to describe the planet's state, including climate, during a specified period of time (Geology and Ecology of National Parks par. 3). There are three primary layers of rock: the Paleozoic strata, the Precambrian Grand Canyon Supergroup, and the metamorphic basement rocks. In some cases, there is a missing layer within the rock formations; such gaps are called unconformities, one example being the Great Unconformity of the Grand Canyon (Geology and Ecology of National Parks par. 4). These gaps also provide invaluable insight into the history of layering, since they indicate that the missing layers eroded away before subsequent deposition.

It should be noted that there are three major types of rock: metamorphic, sedimentary, and igneous. Igneous rock is the oldest, formed by the cooling of magma or lava, while metamorphic rock is the result of the other two types being exposed to pressure and heat. Sedimentary rocks form through the sedimentation of sand and mud. Geological insight is further enhanced by fossils, such as brachiopods, burrows, tracks, and trilobites, each of which represents living creatures of the past (Geology and Ecology of National Parks par. 5). The most interesting fact is that the canyon is much younger than the rock deposited in the terrain, because the Colorado River became the driver of erosion relatively recently.

Works Cited

Geology and Ecology of National Parks. "Grand Canyon Geology." USGS.

National Park Service. "Geology." NPS.gov.

Approaches and Methods of Solving Mathematical Problems

Background

Five high school mathematics students were invited to complete this assignment. The students' identities remain anonymous, but it should be said that each of them was over the age of 14 and had never taken an MA 105 course. The students voiced no discontent about math while also being uninspired by the discipline; in other words, they were ordinary high school students. All respondents were asked to solve three uncomplicated problems and explain their methodology.

Reflecting on the Subtraction Task

The first problem in this project tested subtraction skills: students were asked to solve the example 107-68. The choice of these numbers was not accidental, since performing this subtraction competently requires deeper knowledge and the use of short-term memory. Interestingly, each of the students produced the correct answer, but only three (1, 2, and 3) completed the entire assignment, while the others ignored the first part and focused on the second. The first student divides the numbers into components: 107 turns into 100+7, and 68 turns into 70-2. The first student thus uses an addition format to represent one number and a subtraction format for the other, but the choice does not seem systematic. For 107, the number 110 (107+3) is much closer, but for this student, the number 100 is probably easier to perceive than 110. Using this composite method, the student arrives at the correct solution and then successfully checks himself by column subtraction of the target numbers, even using notations. The second student also uses the column method and notes that he solved the problem several times, getting the same answer. Among these solutions is a complex decomposition of the problem into four different steps, which complicates the procedure and invites errors. The third and fourth students also use columns, but the fourth student uses a more detailed sequential notation, while the third student does parallel calculations and crosses out the numbers when they are no longer needed. The fifth student chooses an estimating comparison strategy in which the closest convenient number is chosen for each operand to simplify the subtraction procedure. Thus, the fifth student uses the analogy 110-70=40, calling it rounding. In addition, the student shows his calculations using column subtraction but does not provide any additional strokes or marks: it can be concluded that the basic calculations are carried out mentally.
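
For clarity, the two decomposition strategies described above can be restated as worked equations (a compact reconstruction of the students' reasoning, not their exact notation):

107 - 68 = (100 + 7) - (70 - 2) = (100 - 70) + 7 + 2 = 30 + 9 = 39 (first student)

107 - 68 = (110 - 3) - (70 - 2) = (110 - 70) - 3 + 2 = 40 - 1 = 39 (fifth student's rounding)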

Reflecting on the Multiplication Problem

Students were asked to solve the example 14×15 for the multiplication task and explain the solution. It is noteworthy that the first and second students solved the problem identically, using column multiplication by individual digits, and their solutions are indistinguishable. However, the fifth student seemed to think more comprehensively and, using the column method, multiplied not digit by digit but the whole number by a digit. This method may not always lead to accurate results because multiplying a number by a digit is not always straightforward; hence, the fifth student takes a bit of a risk by ignoring the procedure of multiplying by digits. The fourth student also uses a column like students 1 and 2 but divides it into three parts, explaining each step in passing. This is the most detailed solution and takes the longest. It is interesting to highlight the third student's answer: it is also correct, but it gives the impression that a calculator was used because no solution process is shown. It is unlikely that this student could have solved the example in his head, so it seems that either third-party drafts or a calculator were used. Again, only students 1, 2, and 3 answered the first part, using different degrees of comparison with round numbers chosen by personal preference.
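
The two column strategies amount to different groupings of the same partial products, which can be written out as follows (again a reconstruction for illustration, not the students' exact notation):

14 × 15 = (10 + 4) × (10 + 5) = 100 + 50 + 40 + 20 = 210 (digit-by-digit partial products)

14 × 15 = 14 × 10 + 14 × 5 = 140 + 70 = 210 (number-by-digit, as the fifth student did)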

Reflecting on the Percentage Task

In this part, students were asked to find 75% of the number 12 using any method. Students 1 and 2 showed a similar process using column multiplication but slightly different procedures. Student 1 multiplied the full 12 by each part of 75, as 12×5+12×70, and then separated off the decimal places. This is not always straightforward, since multiplying a number by parts can lead to arithmetic error more often than multiplying digit by digit. Student 2 multiplied 12 by each of the digits of 75 but did the calculations according to the principle 12×5+12×7. Student 5, on the other hand, multiplied 75 by the number 12, so his calculations were different, and he again used the method of multiplying a number by a digit. This time, the third and fourth students seem to have both used additional tools, as no calculations were shown. Again, only students 1, 2, and 3 answered the first part.
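
Both correct procedures reduce to the same computation, shown here as a worked equation (a reconstruction for illustration):

75% of 12 = 12 × 75 / 100 = (12 × 70 + 12 × 5) / 100 = (840 + 60) / 100 = 900 / 100 = 9

Student 2's variant computes 12 × 5 = 60 and 12 × 7 = 84, then shifts the second partial product one place (840) before adding and separating the decimal places.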

General Comparison

The students showed considerable variety in their methods, but some patterns were detectable. The third student was the most likely to shy away from the more complicated problems, probably using a calculator. The first two students and the fifth used similar techniques, but the fifth always applied them in his own way, sometimes reversing the problem. The fourth student seemed tired toward the end or unable to solve the percentage problem, so only at the end did he (probably) turn to a calculator. In general, column methods were used by all students, but students 3, 4, and 5 required written steps or problem statements more often than the others. To emphasize the overall bottom line, not only did the solution strategies differ between students, but they were also applied differently by each student. They all ended up with correct answers, but the completed tasks showed how differently the same problem can be approached.

Understanding the Metabolic Function

Describe Metabolism, Catabolism, and Anabolism and Explain Their Role in the Body

The broad definition of metabolism suggests that the subject matter includes the entirety of the processes within a body required to sustain life in an organism. Being extremely complex, metabolism involves multiple stages, one of which is represented by catabolism, namely, the phenomenon of various compounds being broken down to release the required energy (Ang, 2016). Anabolism is another part of the metabolic function, represented by the synthesis of the compounds mentioned above (Ang, 2016). Thus, each process plays a distinctive role, anabolism representing the building of essential compounds from nutrients, catabolism implying their further processing for energy, and metabolism encompassing the whole range of chemical reactions and physical processes needed to sustain life in a human body or any other living being.

Discuss the Mechanisms Involved in the Metabolism of Carbohydrates

The processing of carbohydrates is a rather intricate phenomenon that involves several major stages. First, the production of glucose through gluconeogenesis should be mentioned. Additionally, the process involves the breakdown of polysaccharides into monosaccharides as carbohydrates decompose into soluble sugars within the body. After the specified substances are formed, they are transported to the respective tissues in which they are required to sustain the necessary levels of energy. The described change launches the process of cellular respiration, implying that cells receive the needed amount of energy and are capable of further functioning (Wildman & Medeiros, 2018). On a larger scale, glucose delivered to the tissues requiring it for their proper functioning yields the product known as pyruvate through the glycolysis stage, during which adenosine triphosphate (ATP) is formed (Wildman & Medeiros, 2018). As a result, multiple cellular processes are fueled by the required supply of energy, allowing the body to maintain its functioning.
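
As a point of reference, the net reaction of the glycolysis stage described above is conventionally written as:

C6H12O6 + 2 NAD+ + 2 ADP + 2 Pi → 2 pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O

that is, one glucose molecule yields two pyruvate molecules and a net gain of two ATP per glucose.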

References

Ang, M. (2016). Metabolic response of slowly absorbed carbohydrates in type 2 diabetes mellitus. Springer.

Wildman, R. E., & Medeiros, D. M. (2018). Advanced human nutrition (4th ed.). Boca Raton, FL: CRC Press.

Hertzsprung-Russell Diagram of Star Lifecycles

The Hertzsprung-Russell diagram is a scatter chart of stars correlating the stars' luminosities with their spectral types and effective temperatures. On this diagram, temperatures are measured in kelvins, ranging from 3,000 to 30,000. Similarly, the magnitudes of the stars range from +15 to -10.

The stars' luminosities and effective temperatures are plotted along the vertical and horizontal axes, respectively. The horizontal axis also carries a third scale, the spectral scale, plotted on it. Stars are grouped into spectral classes depending on their characteristics.
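
To make these axis conventions concrete, a minimal Python plotting sketch is given below. The star values are placeholder numbers invented purely for illustration, not measurements; only the axis ranges and orientations follow the description above.

# Minimal sketch of the Hertzsprung-Russell axis conventions (placeholder data).
import matplotlib.pyplot as plt

# Hypothetical (temperature in K, absolute magnitude) pairs for illustration.
stars = [(30000, -5), (10000, 1), (5800, 5), (3500, 10),   # main-sequence-like band
         (4000, 0), (3500, -5),                             # cooler, more luminous stars
         (15000, 12)]                                       # a hot, dim white dwarf

temps = [t for t, m in stars]
mags = [m for t, m in stars]

plt.scatter(temps, mags)
plt.xlabel("Effective temperature (K)")
plt.ylabel("Absolute magnitude")
plt.xlim(30000, 3000)   # temperature decreases to the right, per convention
plt.ylim(15, -10)       # brighter (more negative) magnitudes plotted higher
plt.title("Hertzsprung-Russell diagram (schematic)")
plt.show()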

Characteristics of the four main groupings of stars on the diagram

When one plots the nearest stars to Earth on the diagram, the stars do not appear randomly on the chart but fall into four distinct groups, suggesting that there is a relationship between the stars' temperatures and luminosities. The four groups are identified as groups A, B, C, and D. Group A comprises stars ranging from the cool and dim stars on the lower right side of the graph to the very bright stars in the top left corner.

Group B comprises stars that are cooler and more luminous than group A stars. Their size is immensely larger compared to group A stars. Similarly, group C comprises stars that are much larger and more luminous than group B stars. Finally, the chart contains the representation of group D stars. These stars, as seen from the diagram, are very hot and dim. This suggests that the stars must be very tiny compared to the other groups of stars; they are referred to as white dwarfs.

Formation of stars

Astronomers have established that stars, like people, have a life cycle. They use the relationship between young stars and the surrounding clouds to analyze and explain star formation. The space between the stars consists of gas and dust known as the interstellar medium. Of this medium, hydrogen gas constitutes 75 percent of the mass while helium constitutes 25 percent (Seeds 154). Traces of carbon, oxygen, and nitrogen are also present in this medium.

Certain conditions are crucial in ensuring that the interstellar cloud gas remains in equilibrium. These include the balance of kinetic and potential energy. Failure in this regard causes the clouds to undergo gravitational collapse. The virial theorem, which asserts that for equilibrium to persist the internal kinetic energy must equal half the magnitude of the potential energy, explains this collapse (Moche and Lovi 121).
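
In symbols, the equilibrium condition of the virial theorem reads

2K + U = 0, i.e. K = |U| / 2,

where K is the internal kinetic (thermal) energy and U the gravitational potential energy; when 2K falls below |U|, pressure support is insufficient and the cloud collapses.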

The nebular clouds and dust remain cold and inactive until excited by an external disturbance from a comet or a shockwave originating from a distant supernova. The external force shearing through the cloud particles causes particle collisions, leading to the formation of clumps. With time, a clump accumulates more mass and progressively attains a stronger gravitational pull. With increased gravitational pull, the clump attracts more particles from the surrounding clouds as it increases in size.

Because of the clump's increase in size and density, its center begins to grow hotter and denser. Over the span of more than a million years, a clump can transform into a small body referred to as a proto-star. Proto-stars, like the clumps, continue to attract more particles and dust from the surrounding clouds and grow hotter.

Eventually, when the proto-star attains a temperature of 7 million kelvins, hydrogen fusion occurs, resulting in the production of helium and massive energy (Seeds 321). During the initial stages, the strong inward gravitational pull compromises the outflow of the fusion energy. Subsequently, as more materials accumulate in the proto-stars, their mass and heat increase.
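
In net terms, the hydrogen fusion mentioned above converts four hydrogen nuclei into one helium nucleus:

4 ¹H → ⁴He + 2 e⁺ + 2 ν + energy (about 26.7 MeV per helium nucleus formed)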

Over millions of years, proto-stars attain enough mass and heat for the surrounding solar-mass material to collapse onto the proto-star (Seeds 321). As this collapse occurs, bipolar flow takes place as enormous gas jets erupt from the proto-star, blowing away the remaining particles on the surface. During this stage, the young star stabilizes, with the outward pressure of hydrogen fusion counteracting the inward gravitational force.

Death of stars

Billions of years after their formation, stars die, ending their life cycles. How a star dies depends significantly on the type of star involved. A star's lifetime will depend on the availability of hydrogen in its core and other factors such as the rate of nuclear burning. Once a star drains its hydrogen supply, it increases in size and luminosity (Seeds 184).

Death of low stellar mass stars

The exhaustion of the core hydrogen triggers the death of a medium star. This exhaustion removes the star's source of heat, causing distortion in the stellar equilibrium (Abell 221). Eventually, the star's core collapses under the gravitational pull, resulting in the burning of helium in place of hydrogen.

A star will then use helium as its main source of energy until it, too, is exhausted. At this stage, the star's outer surface expands and extends outwards, resulting in an increase in the size of the star involved. This phase lasts for thousands of years and leads to massive mass loss through stellar winds. Ultimately, the medium star loses its entire mass envelope and exposes its hot core. Radiation from this exposed core then ionizes the surrounding nebula.

Death of medium and massive stellar mass stars

Over time, massive stars exhaust the hydrogen supply in their cores and resort to the burning of helium. With the exhaustion of helium, the nuclear burning cycle continues, but with different elements. First, carbon burns to oxygen, and the sequence continues until silicon burns to iron. Eventually, since iron exists in a stable form, it cannot burn any further, thus halting energy production. With no energy to balance gravity, the star's iron core collapses.

Astronomers have noted that the iron core does not completely collapse, as nuclear densities resist any further collapse, leading to a core rebound that releases a supernova explosion (Fradin 67). Supernova explosions are responsible for the injection of carbon, silicon, and oxygen into space (Gaustad and Zeilik 78).

The mass of the parent star determines the destiny of each hot neutron core. For medium stars, the neutron core cools progressively into a neutron star. For the massive stars, the gravitational pull is so immense that the nuclear forces are overpowered, leading to the collapse of the core and the formation of a black hole.

How type I and type II supernovae occur

When massive stars die out, their nuclear reactions turn them into significantly bright and hot bodies that collapse inwardly and then explode in a process called a supernova.

This process is classified as type I or type II depending on the shape and nature of the spectral light curves emitted in the process (Ridpath 56). A type I supernova occurs when the emitted light curve reaches a sharp maximum followed by a gradual decline. A type II supernova is identified by a less sharp maximum and a sharp decline.

Works Cited

Abell, George O. Exploration of the Universe. New York: Holt, Rinehart and Winston, 1964. Print.

Fradin, Dennis B. Astronomy. Chicago: Children's Press, 1983. Print.

Gaustad, John E., and Michael Zeilik. Study Guide to Accompany Astronomy, the Evolving Universe. 3rd ed. New York: Harper & Row, 1982. Print.

Moche, Dinah L., and George Lovi. Astronomy. New York: Wiley, 1978. Print.

Ridpath, Ian. Astronomy. London: Dorling Kindersley, 2006. Print.

Seeds, Michael A. Horizons: Exploring the Universe. 5th ed. Belmont, CA: Wadsworth Pub. Co., 1998. Print.

Making Sense of Qualitative Data

Introduction

According to Coffey and Atkinson (1996), data analysis comprises the systematic procedures applied by a researcher in order to identify essential features and relationships in the data being considered (p. 9). Data analysis procedures depend on whether the data is qualitative or quantitative.

Quantitative data analysis tends to employ deductive techniques, whereas qualitative data analysis tends to employ inductive techniques in developing theories to describe phenomena. The quantitative analysis procedure includes two processes: first, preparing the research data for analysis, and second, describing the data using descriptive statistics. Qualitative data analysis includes research approaches such as ethnography, phenomenology, and grounded theory.

Phenomenological, Grounded theory and Ethnographic approaches

Ethnographic data analysis methods tend to employ a holistic approach to data analysis. They are based on the cultural alignments of various people who share beliefs, traditions, values, and religion. They focus on the relatively complex social dynamics, systems, and sub-systems that bring about common behaviors among various people in society. The procedure for arriving at such a holistic description of people sharing a common culture entails the use of both emic and etic terms.

Grounded theory as a qualitative research design employs a data analysis procedure where the various phases in research procedure overlap. Thus, the theory is developed throughout the research process. It employs coding techniques in order to classify the collected data into categories.

According to www.essortment.com, these are open coding, axial coding, and selective coding. Open coding refers to the identification of interrelationships among data that are identifiable on the surface. Axial coding involves the reorganization of the identified relationships to identify more abstract and unique ones. Selective coding finally involves focusing the research attention on the major relationships identified beforehand.

The procedure followed in the grounded theory approach entails the collection of data; the identification of the relationships present in the collected data; the identification of the core category, on which the other subcategories hinge; and the development of a theory that is grounded in the collected data and based on the identified categories. The phenomenological research approach involves a descriptive study of how various people experience a particular phenomenon.

It studies perceptions of and feelings towards the phenomenon. Phenomenological research starts with the formulation of the research question; then a description of where the participants in the research are located is made, followed by a statement of the data collection and storage methods. Finally, the researcher gives an explication of the data (p. 6).

Conclusion

A number of discrepancies are bound to arise in any research exercise and as such, the researcher should endeavor to reduce such discrepancies to the bare minimum and acceptable levels. This will help to increase data integrity and reliability of the conclusions and interpretations of the research outcomes, which the researcher will arrive at after his or her analysis of the data.

The researcher should ensure that all the participants are given a chance to listen to the recordings of their audio interviews in order to validate the information being recorded. The research environment should be located as far away as possible from any disruptions that are likely to interfere with or influence, in any undesired manner, the research outcomes. It should be comfortable, properly ventilated, and away from noisy places, preferably in a secluded location.

References

Coffey, A., & Atkinson, P. (1996). Making sense of qualitative data: Complementary research strategies. Thousand Oaks, CA: Sage.

Grounded Theory. (2012).

Predictive Models for Microbiome Data

Background

Human health and disease control are some of the oldest yet most complicated fields in the history of humankind. Over the years, scholars have published papers and literature in microbial studies concerning the relationship between different microbial communities and their influence on diseases and infections. These microbial communities exist inside and outside the human body and significantly influence overall human health and the prevalence of diseases and infections. The microbial communities outside the human body are found on the skin, nails, and hair and are associated with communicable and non-communicable diseases.

The study of microbial communities is resource-intensive and has been challenging in the recent past. However, with the aid and adoption of computers and artificial intelligence models such as machine learning, the processes have been simplified, producing more reliable results. Traditionally, microbial studies involved blood sampling, urine and stool screening, and other specimens, and entailed separating the samples to establish the presence of any microbes. The processes were labor-intensive and highly susceptible to error due to human fatigue. This paper presents machine learning algorithms for the study of microbial communities. The experiments represent a stepping stone in the study, interpretation, and understanding of the microbial communities found in and on the human body and their relation to diseases and infections.

Data Classification and Analysis

Data classification and analysis are among the most in-demand techniques of the modern computing era. The proliferation of data collected, stored, and processed each day has dwarfed traditional data processing tools and techniques. Today, data is stored in various dimensions and formats, calling for sophisticated methods and tools of analysis. Data analysis and processing play a crucial role in interpreting naturally occurring phenomena that would otherwise be considered meaningless. Over the years, data collection has become so diverse that many existing analysis techniques and tools have become obsolete. Scholars and engineers have therefore developed techniques to extract data from different sources and formats to form single manageable data sets. This process is known as data mining and is commonly used by large corporate organizations to summarize data from different operations, departments, and processes.

Usually, different data collection points use varying techniques, storage methods, data formats, and complicated tools that are not necessarily compatible with one another. As a result, it becomes challenging for data analysts to process data from such varying sources. Data mining is an essential practice in large corporate institutions, as it helps them gain meaningful insight into customers, suppliers, and other stakeholders. The results are used to make company decisions that impact the company's performance, future, and success (Ge et al. 20590). Data mining has heavily relied on existing tools; for instance, it is heavily dependent on data warehousing and the computing power of information systems. The process is also affected by the effectiveness of the data collection methods and tools, as they dictate the type and amount of data collected and stored in the data warehouses.

The development and advancement of data processing and analysis tools have been on the rise since the development of the internet. Today, the amounts of data processed have surpassed human capabilities, thwarting manual efficiency in data collection, storage, processing, and presentation. The development of data analysis and processing tools powered by artificial intelligence has therefore been on the rise. It has helped improve accuracy and efficiency and has reduced operational costs in the corporate sector. Advances in the different fields of artificial intelligence have been adopted in all areas of human life. Machine learning is one of the youngest yet most widely adopted branches of artificial intelligence. This branch deals with the development of models and computer algorithms that can learn by themselves from existing data sets. Data warehouses contain the data sets needed to train machine learning models that suit the businesses of an operating organization.

Machine learning models study patterns and relationships between different stakeholders' data stored in data warehouses. Based on the results, the models can then make human-like intelligent decisions such as predicting sales, customer needs, the product development process, the success of a particular marketing strategy, or the mutation of a particular disease-causing organism. Machine learning is widely adopted in the military, education, finance, banking, medicine and microbiology, agriculture, and meteorology to help experts analyze large data sets with minimal effort (Zou et al. 1182). The models and algorithms have steadily improved in performance and accuracy. As of today, artificial intelligence has surpassed human performance in various fields, including x-ray scanning, gaming, and image processing.

Data mining is always driven by company needs, which also dictate the kind of software used. However, the data mining process remains the same across organizations, and the main goal remains to establish links between different data sets. Machine learning and data warehousing are of great importance to the scientific communities, as they help extract meaningful insights from unimaginably large data sets (Zou et al. 1182). Also, the data collection tools used in modern research experiments have dwarfed the labor-intensive tools and techniques used before the widespread adoption of artificial intelligence. Since machine learning models rely on large data sets for training and testing, the data collection and sampling techniques have been updated to match the processing power of the computing resources and artificial intelligence models.

Importance of Machine Learning in the Analysis of Microbial Data

Although human intelligence beats artificial intelligence in general applications, artificial intelligence has surpassed human intelligence in various sectors such as gaming and the analysis of x-ray data. This is an excellent sign of the unchallenged benefits of artificial intelligence in real-life scenarios. Health care is one of the most skill-demanding disciplines challenging human accuracy. The adoption of machine learning models has proven beyond any reasonable doubt that the models can outperform human experts as long as they are trained adequately. Training machine learning models helps the algorithms adapt and improve their accuracy (Qu et al. 827). Microbes are tiny and present remarkable similarity from one community to another. Human experts might not be able to establish the differences even with the help of a powerful microscope. However, properly trained machine learning models equipped with high-precision sensors and other data collection tools can establish the differences, similarities, and potential impact of these microbial communities with high levels of accuracy.

Methods

The choice of data processing technique plays an essential role in determining the results obtained. Data pre-processing entails importing the required libraries, loading the data set into the model, handling missing data (for example, with measures of central tendency), and encoding categorical features. In our case, the study used the random forest algorithm to mine data from multiple sources. It is one of the most influential and widely used machine learning models for data mining, as its performance improves with more intensive training. Random forest is a popular algorithm used in artificial intelligence to establish the relationship between grouped items in a data set. In a nutshell, the algorithm employs classification and regression analysis to compute the relationships between data features.

This algorithm is significant because, at each split of a decision tree, it looks for the best feature among a random subset of features rather than the single most important feature overall. Before splitting a tree, the algorithm picks a particular set of features geared towards producing the best results. Random forest is similar to a decision tree, except that it adds randomness to the trees, making the model more complex and less biased. Besides, random forest ranks high compared to other machine learning algorithms in measuring the relative importance of every feature in the data set. With the help of feature importance, an analyst can choose which features to drop or keep for the analysis process, focusing on what is most useful. The correctly classified instances were used to measure the model's accuracy, and the usefulness of the algorithm was tested using the confusion matrix. The results of the model are presented in the section below.
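
The study itself ran these models in Weka, as the run information in the Results section shows. For readers more familiar with Python, the sketch below is a rough scikit-learn analogue of the Random Forest run, not the study's actual code; the file name, the body_site column label, and the loading step are hypothetical placeholders for however the OTU table is actually stored.

# Rough scikit-learn analogue of the Weka Random Forest run (illustrative only).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_predict

data = pd.read_csv("HSS_otus.csv")      # hypothetical export of the OTU table
y = data["body_site"]                    # class attribute: the body part sampled
X = data.drop(columns=["body_site"])    # remaining columns are abundance features

# 100 trees, mirroring Weka's -I 100 option; fixed seed for reproducibility.
model = RandomForestClassifier(n_estimators=100, random_state=1)

# 10-fold cross-validation, as in the Weka runs shown in the Results section.
predictions = cross_val_predict(model, X, y, cv=10)
print("Accuracy:", accuracy_score(y, predictions))
print(confusion_matrix(y, predictions))

# Feature importances, used to decide which features to drop or keep.
model.fit(X, y)
print(sorted(zip(model.feature_importances_, X.columns), reverse=True)[:10])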

Results

This section presents the results obtained after training and testing the models. It presents graphical and tabular views of the results as obtained after running the models. Machine learning models are data-dependent, and their accuracy is directly proportional to the size and quality of the data set used in the model training process. The larger the data set, the better the model adapts and teaches itself how to identify and classify input in the future. The features used for machine learning include Axilla, Volar Forearm, Plantar Foot, Forehead, Palmar Index Finger, Popliteal Fossa, Labia Minora, Umbilicus, External Nose, Lateral Pinna, Palm, and Glans Penis.

The aforementioned features are the different body parts on or in which the microorganisms were found.

Table 1. Baseline model training run (AdaBoostM1 with a DecisionStump base learner).

=== Run information ===
Scheme: weka.classifiers.meta.AdaBoostM1 -P 100 -S 1 -I 10 -W weka.classifiers.trees.DecisionStump
Relation: HSS_otus-weka.filters.unsupervised.attribute.Remove-R1-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8
Instances: 401
Attributes: 2151
[list of attributes omitted]
Test mode: 10-fold cross-validation
=== Classifier model (full training set) ===
AdaBoostM1: No boosting possible, one classifier used!
Decision Stump
Classifications
X2245 <= 0.0375411033978809 : forehead
X2245 > 0.0375411033978809 : plantar foot
X2245 is missing : plantar foot
Class distributions
X2245 <= 0.0375411033978809
axilla volar forearm plantar foot forehead palmar index finger popliteal fossa labia minora umbilicus external nose lateral pinna palm glans penis
0.07309941520467836 0.17543859649122806 0.049707602339181284 0.1871345029239766 0.08187134502923976 0.06432748538011696 0.017543859649122806 0.03216374269005848 0.04093567251461988 0.07894736842105263 0.18421052631578946 0.014619883040935672
X2245 > 0.0375411033978809
axilla volar forearm plantar foot forehead palmar index finger popliteal fossa labia minora umbilicus external nose lateral pinna palm glans penis
0.03389830508474576 0.0 0.7966101694915254 0.0 0.0 0.0847457627118644 0.0 0.01694915254237288 0.0 0.0 0.01694915254237288 0.05084745762711865
X2245 is missing
axilla volar forearm plantar foot forehead palmar index finger popliteal fossa labia minora umbilicus external nose lateral pinna palm glans penis
0.06733167082294264 0.14962593516209477 0.1596009975062344 0.1596009975062344 0.06982543640897755 0.06733167082294264 0.014962593516209476 0.029925187032418952 0.034912718204488775 0.06733167082294264 0.1596009975062344 0.0199501246882793
Time taken to build model: 0.04 seconds
=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances 108 26.9327 %
Incorrectly Classified Instances 293 73.0673 %
Kappa statistic 0.1306
Mean absolute error 0.1332
Root mean squared error 0.2586
Relative absolute error 90.699 %
Root relative squared error 95.4622 %
Total Number of Instances 401

=== Detailed Accuracy By Class ===

TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.000 0.000 ? 0.000 ? ? 0.484 0.063 axilla
0.000 0.000 ? 0.000 ? ? 0.564 0.167 volar forearm
0.719 0.039 0.780 0.719 0.748 0.703 0.805 0.575 plantar foot
0.891 0.742 0.186 0.891 0.307 0.129 0.562 0.177 forehead
0.000 0.000 ? 0.000 ? ? 0.551 0.076 palmar index finger
0.000 0.000 ? 0.000 ? ? 0.459 0.065 popliteal fossa
0.000 0.000 ? 0.000 ? ? 0.401 0.014 labia minora
0.000 0.000 ? 0.000 ? ? 0.489 0.029 umbilicus
0.000 0.000 ? 0.000 ? ? 0.495 0.034 external nose
0.000 0.000 ? 0.000 ? ? 0.545 0.072 lateral pinna
0.078 0.089 0.143 0.078 0.101 -0.014 0.558 0.174 palm
0.000 0.000 ? 0.000 ? ? 0.462 0.025 glans penis
Weighted Avg. 0.269 0.139 ? 0.269 ? ? 0.577 0.194

=== Confusion Matrix ===

a b c d e f g h i j k l <-- classified as
0 0 2 22 0 0 0 0 0 0 3 0 | a = axilla
0 0 1 53 0 0 0 0 0 0 6 0 | b = volar forearm
0 0 46 16 0 0 0 0 0 0 2 0 | c = plantar foot
0 0 0 57 0 0 0 0 0 0 7 0 | d = forehead
0 0 0 25 0 0 0 0 0 0 3 0 | e = palmar index finger
0 0 5 19 0 0 0 0 0 0 3 0 | f = popliteal fossa
0 0 0 6 0 0 0 0 0 0 0 0 | g = labia minora
0 0 1 10 0 0 0 0 0 0 1 0 | h = umbilicus
0 0 0 12 0 0 0 0 0 0 2 0 | i = external nose
0 0 0 25 0 0 0 0 0 0 2 0 | j = lateral pinna
0 0 1 58 0 0 0 0 0 0 5 0 | k = palm
0 0 3 4 0 0 0 0 0 0 1 0 | l = glans penis

Table 2. Random Forest model results (10-fold cross-validation).

=== Run information ===
Scheme: weka.classifiers.trees.RandomForest -P 100 -I 100 -num-slots 1 -K 0 -M 1.0 -V 0.001 -S 1
Relation: HSS_otus-weka.filters.unsupervised.attribute.Remove-R1-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8-weka.filters.unsupervised.attribute.Remove-R2,3,4,5,6,7,8
Instances: 401
Attributes: 2214
[list of attributes omitted]
Test mode: 10-fold cross-validation
=== Classifier model (full training set) ===
RandomForest
Bagging with 100 iterations and base learner
weka.classifiers.trees.RandomTree -K 0 -M 1.0 -V 0.001 -S 1 -do-not-check-capabilities
Time taken to build model: 1.31 seconds
=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances 271 67.581 %
Incorrectly Classified Instances 130 32.419 %
Kappa statistic 0.6312
Mean absolute error 0.1039
Root mean squared error 0.2119
Relative absolute error 70.703 %
Root relative squared error 78.2312 %
Total Number of Instances 401

=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.556 0.032 0.556 0.556 0.556 0.523 0.914 0.703 axilla
0.683 0.053 0.695 0.683 0.689 0.635 0.928 0.746 volar forearm
0.922 0.021 0.894 0.922 0.908 0.890 0.985 0.942 plantar foot
0.766 0.065 0.690 0.766 0.726 0.672 0.917 0.853 forehead
0.536 0.083 0.326 0.536 0.405 0.362 0.920 0.367 palmar index finger
0.741 0.011 0.833 0.741 0.784 0.771 0.980 0.762 popliteal fossa
1.000 0.000 1.000 1.000 1.000 1.000 0.998 0.842 labia minora
0.333 0.000 1.000 0.333 0.500 0.572 0.895 0.485 umbilicus
0.214 0.008 0.500 0.214 0.300 0.312 0.946 0.496 external nose
0.667 0.051 0.486 0.667 0.563 0.533 0.937 0.576 lateral pinna
0.578 0.042 0.725 0.578 0.643 0.590 0.888 0.721 palm
0.500 0.000 1.000 0.500 0.667 0.704 0.986 0.675 glans penis
Weighted Avg. 0.676 0.041 0.704 0.676 0.677 0.644 0.933 0.734
=== Confusion Matrix ===
a b c d e f g h i j k l <-- classified as
15 1 2 1 2 0 0 0 0 5 1 0 | a = axilla
1 41 1 3 7 2 0 0 0 2 3 0 | b = volar forearm
3 0 59 0 1 1 0 0 0 0 0 0 | c = plantar foot
1 3 0 49 2 0 0 0 3 4 2 0 | d = forehead
1 2 0 3 15 0 0 0 0 1 6 0 | e = palmar index finger
1 4 1 0 0 20 0 0 0 0 1 0 | f = popliteal fossa
0 0 0 0 0 0 6 0 0 0 0 0 | g = labia minora
2 1 0 2 1 1 0 4 0 1 0 0 | h = umbilicus
0 0 0 7 1 0 0 0 3 3 0 0 | i = external nose
1 0 0 4 3 0 0 0 0 18 1 0 | j = lateral pinna
0 7 3 2 14 0 0 0 0 1 37 0 | k = palm
2 0 0 0 0 0 0 0 0 2 0 4 | l = glans penis

Discussion

This section discusses the results presented above and relates them to other machine learning model training experiments carried out in the past. The data set contained a total of 401 instances (rows of data) with 2,214 attributes, all of which were used in the analysis. The features, in this case, refer to the body parts from which the microbes were collected: axilla, volar forearm, plantar foot, forehead, palmar index finger, popliteal fossa, labia minora, umbilicus, external nose, lateral pinna, palm, and glans penis. In the baseline training run (Table 1), 108 instances were classified correctly (26.9327%), while 293 instances were classified incorrectly (73.0673%). In the Random Forest run (Table 2), 271 instances were classified correctly (67.581%), while 130 were classified incorrectly (32.419%).
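
As a quick check, the reported percentages follow directly from the instance counts:

108 / 401 ≈ 0.269327, i.e. 26.9327% (Table 1)
271 / 401 ≈ 0.675810, i.e. 67.581% (Table 2)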

The trend in the model's performance is in concurrence with that of most supervised machine learning models. Conventionally, machine learning models improve as they receive more labeled data, enabling them to adapt and learn. The ability of the model to correctly assign microbes to their respective communities is a major step forward in the study and classification of disease-causing organisms and of diseases and infections.

Works Cited

Ge, Zhiqiang, et al. "Data Mining and Analytics in the Process Industry: The Role of Machine Learning." IEEE Access 5 (2017): 20590-20616. Web.

Qu, Kaiyang, et al. "Application of Machine Learning in Microbiology." Frontiers in Microbiology 10 (2019): 827. Web.

Zou, Quan, and Qi Liu. "Advanced Machine Learning Techniques for Bioinformatics." IEEE/ACM Transactions on Computational Biology and Bioinformatics 16.4 (2019): 1182-1183. Web.

Landscape and the Changes That It Goes Through

The angle of repose is related directly to the phenomenon of mass wasting. Seeing that the former is defined as the steepest angle at which a slope remains stable, the relation between the two phenomena can be defined in the following way: the closer a slope's angle comes to the angle of repose, the more probable mass wasting becomes.

There are three ways in which a stream can transport its load. These are floatation (items with lower density remain on the surface), solution (materials dissolve in the water and are thus transported), and suspension (small particles are transported with the help of water turbulence).

In a jar of stream water, the suspended load is most likely to settle at the bottom of the jar, the dissolved load will be spread through the jar evenly, and the floating load will remain at the surface of the water.

A delta is the part of the land where a river flows into an ocean or a sea (Deltas para. 1).

Traditionally, flash floods occur in areas that are flat and lie rather low. A low and flat surface is the exact description of a city area; therefore, it is natural that flash floods occur mostly in cities. In the suburbs, the surface is far from even, which hinders flash floods from occurring.

The significance of groundwater is not to be underrated. First and foremost, groundwater supplies plants and trees with the mineral resources that they require. Next, groundwater provides about 90% of rural residents with water, since these residents are not capable of retrieving water from city suppliers.

The unsaturated, or vadose, zone (Lutgens and Tarbuck 87) is the zone in which groundwater occurs under atmospheric pressure. The water table is the level at which the vadose zone ends. Under the water table lies the saturated zone; the origin of the term saturated zone can be explained by the fact that the pores in the soil under the water table are completely saturated with water.

An aquifer is traditionally defined as a body of permeable rock or other unconsolidated material from which groundwater can be extracted, according to the definition provided by Lutgens and Tarbuck (84). The role of aquifers is quite impressive: they help create the saturated zone, since the pressure within an aquifer is considerable.

Despite the fact that pores allow for better permeability of water, some rocks have high porosity and at the same time display very low permeability rates. It should be noted that it is not the porosity but the mechanical structure of a rock that defines the permeability rate. Among the rocks that have high porosity and low permeability, basalt and shale should be mentioned.

Since hot springs are generated by geothermal heat, it is logical that they emerge in places that are more tectonically active; therefore, the West, where tectonic stretching occurs at a more noticeable pace, is the primary location of hot springs.

Though it hardly seems possible, there is a way for a sewage-contaminated aquifer to clean itself naturally. For the process to start, the source of the contamination must be removed. Owing to the movement of the groundwater, the process of cleaning will then begin. It should be mentioned, though, that the process is very long.

Subsidence is the sinking of the land after the groundwater is withdrawn from underneath it.

Works Cited

"Deltas." Web.

Lutgens, Frederick K., and Edward J. Tarbuck. "Landscapes Fashioned by Water." Foundations of Earth Science. 7th ed. Prentice Hall, 2014. Print.

Why Are Some Animals So Smart? by Carel Schaik

How does Carel Van Schaik define culture?

In reference to Sumatran orangutans, Carel Van Schaik concludes that cultural animals are also intelligent. The scientist explores various opinions concerning the forces that serve to stimulate the development and evolution of intelligence among animals.

Van Schaik's idea is that intelligence in the world of animals is pushed forward not by the need to survive and work hard for food, but by social learning (32). According to Van Schaik, social, or cultural, inputs are the necessary influences that accompany the growth of an animal's intelligence. This means that Van Schaik defines culture as the social force promoting intelligence in animals.

Compare that to how Haviland defines culture in Chapter 1 of the textbook. What are the differences?

Haviland defines culture as a set of ideas and perceptions shared by a society and transmitted within it (7). In this understanding, culture is applied when various experiences need to be interpreted; as a result, various conclusions are made. According to Haviland, culture generates behavior and is reflected in it.

The cultural standards used in a society are learned and transmitted from generation to generation, not acquired through biological inheritance. Basically, Haviland and Van Schaik view the work of social and cultural forces in opposite directions, and this is the main difference between their opinions. Van Schaik understands culture as the moving force of intelligence, while Haviland sees intelligence as the source of culture.

How does the difference between the two different definitions of culture relate to the context of the definition? They define culture for two different kinds of primates. Discuss.

The two definitions take different features of their objects of study as their basis. Haviland, studying humans, concludes that intelligence generates culture, which means that intelligence is viewed as a trait initially possessed by humans. Van Schaik, who studies a different kind of primate, tries to determine the sources of intelligence.

As a result, he concludes that culture is the basic feature that develops through social learning and serves as a stimulus for the development of the animals' intelligence. This way, humans are seen as initially intelligent primates, and Sumatran orangutans as initially cultural ones.

What sorts of theoretical tests did Van Schaik conduct in terms of thinking through the problem of how to determine whether orangutan intelligence in Sumatra was due to biology or culture?

In order to determine whether the orangutan intelligence in Sumatra is cultural, Van Schaik first of all decides to test his hypothesis geographically. This means that if the behavior is cultural, it will be uncommon in certain regions, whereas in the places where it was invented, the animals will be familiar with the skill.

Secondly, the scientist eliminates the possibility that the behavior is ecological or genetic. Finally, the researcher tests the geographic distribution of the behavior. Its spread within a certain territory means that the animals pass the knowledge on culturally.

Put into your own words the reasons why Van Schaik concluded that the Sumatran orangutans who knew how to extract fatty seeds from the Neesia fruits using modified twigs did so because of their access to cultural knowledge not available to other orangutans.

Van Schaik concludes that wild orangutans that obtain the ability to invent various complicated behaviors are cultural animals that learn through the observation of various experiences and practices. The researcher states that an animal that applied a new behavior had to accumulate the basis of knowledge for this behavior, which evidently came from the animal's social interactions with others of its kind.

Works Cited

Haviland, William A. Cultural Anthropology.

Van Schaik, Carel. Why Are Some Animals So Smart? Cambridge, Massachusetts: Harvard University Press, 2006. Print.

Filtering Mechanisms in the Visual Perception System

Every day, the individual is confronted with a tremendous amount of visual noise, which has no informative value whatsoever but overwhelms the channels of visual perception. Consequently, even without focusing on specific details, individuals can become overworked simply because of the excess noise and interference around them. To avoid such distraction and overload, the visual perception system contains filtering mechanisms. This essay evaluates five such mechanisms and provides a comparative analysis of them.

Techniques for filtering visual attention are different but serve the same purpose, namely to prevent irrelevant information from entering the brain and, consequently, to extract useful information from the field of visual noise. Several information filtering systems are distinguished, depending on what underlies these procedures. Thus, this paper cites five fundamental patterns routinely used by our minds to maintain focus and overcome information noise. The first of these techniques is visual scanning, which allows us to assess with our eyes what is happening around us. For example, when looking for a house number, a person can quickly study the picture around them to find the sign they are looking for. Also associated with this technique is fixation, which draws the individual's attention strictly to one detail. Fixation on the plate with the number allows one to screen out all surrounding noise and direct one's vision, attention, and consciousness to the frame in question. When visual scanning and fixation are combined, it becomes possible to generalize them into a third technique called saccadic eye movement (Hessels et al., 2018). When individuals are surrounded by several houses and need to choose a particular one, the gaze moves quickly between fixation points. Two other attention techniques are covert and overt attention, which are opposite in meaning. With covert attention, an individual might notice through peripheral vision that there are no houses on the left and right, and so not look in those directions. In contrast, overt attention is realized when the individual looks directly at the house to determine its number.

As can be seen, each of the five techniques mentioned has differences and similarities. First of all, they all involve the visual organs, and they all have a common purpose, namely the control of the eyes to maintain concentration and attention. Covert attention is passive and less energy-consuming, while overt attention, on the other hand, requires more energy and time. Saccadic eye movements, in general, are the most energy-consuming, although this technique allows the observer to cover the maximum number of surrounding objects at the same time. Finally, visual scanning is characterized by an active form of perception in which individuals express their will, whereas fixation is not always realized according to the desire of the beholder.

Reference

Hessels, R. S., Niehorster, D. C., Nyström, M., Andersson, R., & Hooge, I. T. (2018). Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. Royal Society Open Science, 5(8), 1-10.

A Political Analysis of Botswana and Djibouti as Developing Countries

Introduction

The aim of this essay is to compare two countries in terms of their political structure and the organization of their social life. Two African countries, Botswana and Djibouti, were selected for consideration. Despite the difference in economic development and political structure, many aspects of life in these countries are quite similar. In particular, it makes sense to focus on the negative aspects of the political and social structure of these countries. This is necessary in order to characterize the problem areas inherent in the arrangement of life in many developing countries and to establish the possible reasons for their emergence and persistence. Botswana and Djibouti are African countries with similar characteristics in terms of social and political violence, the elimination of which is an unresolved issue on the future agenda.

Overall Characteristics

On the whole, in undertaking to compare the political life and social customs of the two African countries, it makes sense to characterize the geopolitical situation that has developed on the continent over the past hundred years. Social conflict in Africa is a rather complex topic for discussion, one often obscured by the preconceptions of Western civilization, which tends to use prejudice in its judgments. Western colonization largely shaped the image of modern Africa, in particular by dividing the continent into specific countries and regions.

It should be noted that after the decolonization process, there was much less bloodshed on African lands. Each country has its own specific percentage of ethnic minorities whose rights are generally not suppressed by violence. This reduces the risk of civil war, and clashes between armed government officials and insurgents have, in general, become fewer over the 21st century (Driscoll, 2021). At the moment, African countries for the most part live by the principle of mutual non-invasion (Aucoin, 2017). The Organization of African Unity has provided relative peace on the borders of African states, and at the moment there are virtually no conflicts between African countries.

However, this does not mean that there are no social conflicts and political tensions within individual African countries. Moreover, the escalation of this tension can be associated with the relative development of each African country and, accordingly, the growing demand for human freedoms, including democratic freedom of choice (Raleigh and Kishi, 2020). By considering countries such as Botswana and Djibouti, one can draw fairly well-founded conclusions about the political structure of the countries of modern Africa and the internal conflicts that accompany their gradual development.

Political Situation in Botswana

While political instability is almost synonymous with state building in African countries, Botswana compares favorably with most other states. The country is said to represent a full-fledged intercultural inclusive community that calls for international and intercontinental unity. Botswana calls itself a democracy in which opposition parties are given the opportunity to form coalitions. However, despite the fact that the situation appears in a sufficiently democratic light, the opposition parties still cannot adequately compete with the leading one (Holm 135). This is primarily due to the lack of historical precedent. Paradoxically, while not banned and having the right to free activity, these parties do not receive sufficient incentive to form a significant opposition capable of changing the alignment of political forces.

A separate case worth considering in this context is the situation with the 2019 elections. The Botswana Democratic Party once again won the election despite conflicting middle-class responses to the current president. An obstacle to the victory of the opposition party was its unification with the forces of the former president of the country, who was trying to regain his position. This coalition was perceived by opposition supporters as weakness and concession, and they did not want to vote for the previous president (Seabo and Masilo 65). Post-election protest levels were quite low compared to, for example, South Africa (AfricaNews, 2019). Thus, political violence in the country is very low, which is due to historical and economic characteristics, since Botswana is a diamond-mining country, which has a positive effect on its economy.

The real problem within the country is gender-based violence. The persecution, humiliation, and abuse of women in the country are extremely widespread. In 2020, an initiative group was created calling on the government to pay attention to the problem in the context of a lockdown, when women risk being locked away alone with their abusers (Thobega, 2020). The problem of gender-based violence in Botswana affects the vast majority of women in the country and takes the most sophisticated forms.

Political Situation in Djibouti

Another African country, Djibouti, shows an average level of danger. Crime in the country consists mainly of cases of theft and petty larceny. However, the country's relatively low level of violent crime does not mean that Djibouti does not suffer from the problems often found in African political systems. In particular, the incidence of political violence in Djibouti is quite high. There is one dominant party in the country, while the president has remained the same since 1999. This situation is supported by crackdowns on political protests and opposition rallies. Opponents of the dominant party accuse the president of dishonest, fraudulent elections and a lack of freedom of speech. International election observers in Djibouti, however, said they had not observed any violations, despite the fact that the vote shares collected by the country's leader are absolutely unprecedented. All of this describes the political situation in the country as complex and contradictory.

In general, Djibouti's credibility as a country with free speech is low. Djibouti was rated a "not free" country by one of the authoritative American ratings (Freedom House, 2020). The discriminated segments of the population also include women, who are at daily risk of abuse and of physical, psychological, and sexual violence (Bureau of Democracy, 2021). In addition, in Djibouti there have been cases of the extermination of opposition forces, state pressure on ethnic groups, and disappearances of people that have not been investigated by the state (Douala, 2021). This characterizes Djibouti as a country with a high level of political unfreedom and repeated cases of violent oppression.

Conclusion

It should be noted that the available information field is still not dense enough to assess the real situation in these African countries. Information warfare takes place in both cases discussed in this essay, as the leading political party seeks to silence opponents and play down violent political conflicts. At the same time, the level of suppression of free speech and of crime in both countries is quite high. Discrimination against minorities and the persecution of women, which are a stigma on African society, deserve special mention. Thus, despite the significant difference in development and economic well-being, both countries show comparable socio-political problems.

Works Cited

Aucoin, Ciara. "Less Armed Conflict but More Political Violence in Africa." Institute for Security Studies, 2017. Web.

"Botswana Pulls Off All-Round Incident Free General Election." AfricaNews, 2019. Web.

Bureau of Democracy, Human Rights, and Labor. "2020 Country Reports on Human Rights Practices: Djibouti." US Department of State, 2021. Web.

"Djibouti." Freedom House, 2020. Web.

Douala, Cameroon. "Several Dead, Houses Razed amid Ethnic Fighting in Djibouti." Anadolu Agency. Web.

Driscoll, Jesse. "Social Conflict and Political Violence in Africa." Stanford SPICE. Web.

Holm, John D. "Elections in Botswana: Institutionalization of a New System of Legitimacy." Elections in Independent Africa, edited by Fred M. Hayward, Routledge, 2019, pp. 130-158.

Raleigh, Clionadh, and Roudabeh Kishi. "Africa: The Only Continent Where Political Violence Increased in 2020." Mail & Guardian. Web.

Seabo, Batlang, and Bontle Masilo. "Social Cleavages and Party Alignment in Botswana: Dominant Party System Debate Revisited." Botswana Notes and Records, vol. 50, 2018, pp. 59-71.

Thobega, Keletso. "Botswana Sets Up Gender Violence Courts to Tackle Pandemic Backlog." Reuters. Web.