How the Biosphere Is Supported by the Other Three Spheres

Introduction

Earth's surface is the meeting point of four spheres, where they overlap and interact. The atmosphere is the outer gaseous shell of the Earth, whose lower boundary lies along the lithosphere and hydrosphere. The hydrosphere is the water shell of the Earth, which includes all water on its surface and in the air, such as oceans, seas, lakes, and clouds. The lithosphere, in turn, is the outer sphere of the solid Earth. The biosphere, also widely known as the ecosphere, is connected to the three layers mentioned above and embodies all living organisms on the planet. This paper aims to describe how the atmosphere, hydrosphere, and lithosphere support the biosphere and to illustrate this connection with examples.

Main body

The continuous interaction between the atmosphere and the biosphere affects all living creatures on the Earth. The atmosphere serves as a guarding layer, protecting the planet's surface from ultraviolet radiation coming from the sun and absorbing and emitting heat. Besides, due to the presence of the atmosphere, most meteorites do not reach the Earth. For instance, when meteoroids enter the atmosphere at high speed, they burn up, and it is possible to see what are called shooting stars, or meteors (NASA Science). One can state that the atmosphere acts as a protector, or a shield, against harmful influences on life on Earth.

Due to natural soil erosion, the atmosphere and hydrosphere hide and even out the craters left by large meteorites. The lithosphere also serves as a protection for the ecosystems on the Earth. The most active interaction between the lithosphere and the biosphere occurs in the crust covering the planet's surface. Plants penetrate cracks with their roots, thus breaking down solid rocks and turning them into sedimentary ones. The remains of plants and animals accumulate and settle to the bottoms of reservoirs. Consequently, this layer facilitates the existence of living organisms in the ecosphere. In conclusion, it is worth mentioning that the sustainable presence and functionality of all the spheres influence each other and maintain life on the Earth.

Works Cited

NASA Science. "Meteors and Meteorites." NASA Science Solar System Exploration, 2019. Web.

Understanding and Exploring Assumptions

Significance of meeting assumptions in statistical tests

In statistical analysis, the assumptions made are very significant in designing the research method applicable to a given case scenario. Ensuring that the data meet a given assumption helps reduce errors during computation, particularly Type I and Type II errors. The assumptions help boost reliability, avoid non-normality, and reduce cases of curvilinearity, consequently giving a desirable output. Meeting the assumptions also eases the performance of parametric tests, which can then compute the output effectively and efficiently.

Histograms with normal curves

Figure 1: Histogram of hygiene of day 1.
Figure 2: Histogram of hygiene of day 2.
Figure 3: Histogram of hygiene of day 3.

Probability plots (p-p plots)

Figure 4: Probability plot for hygiene day 1.
Figure 5: Probability plot for hygiene day 2.
Figure 6: Probability plot for hygiene day 3.

Examination of normality

The probability plots shown above indicate that the hygiene variable for day 1 displays a roughly normal distribution, with the largest probability recorded at the centre and decreasing towards the ends. The hygiene variables for day 2 and day 3 show a slight shift of the most concentrated values towards the left side, and this is where the normality assumption comes into question.
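The same screening can be reproduced outside SPSS. The short Python sketch below is a hypothetical illustration: the hygiene scores are simulated stand-ins and the name hygiene_day1 is invented. It draws a normal probability plot and adds a Shapiro-Wilk test as a formal complement to the visual check.

```python
# Minimal sketch with simulated stand-in data, not the real hygiene scores.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(42)
hygiene_day1 = rng.normal(loc=1.8, scale=0.7, size=810)   # hypothetical day-1 scores

fig, ax = plt.subplots()
stats.probplot(hygiene_day1, dist="norm", plot=ax)        # points near the line suggest normality
ax.set_title("Probability plot: hygiene day 1")

w_stat, p_value = stats.shapiro(hygiene_day1)             # formal normality test
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
plt.show()
```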

Descriptive analysis

Descriptive statistics

The table above shows the measures of central tendency. From the values given, hygiene for day 1 does not appear normally distributed, since there is a great disparity between the minimum-to-mean and mean-to-maximum differences. The hygiene variables for day 2 and day 3 appear normally distributed by virtue of the mean lying almost in the middle of the maximum and the minimum values (Field, 2009).

The skewness of the data, which measures how the data are distributed to the right or the left of the mean, shows that for day 1 the data are skewed strongly to the right, contrary to the impression from the probability plots. The hygiene scores for days 2 and 3 are also skewed to the right, but by a very small margin.

Kurtosis, which describes the peakedness or flatness of a distribution, shows that hygiene for day 1 is more peaked at the centre compared with the other two variables, which are flatter.
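For comparison, the same skewness and kurtosis figures can be computed programmatically. The sketch below uses simulated stand-in data (the group sizes and distributions are assumptions, not the chapter's dataset); note that scipy reports excess kurtosis, for which a normal distribution scores 0.

```python
# Hedged sketch of the descriptive checks discussed above, with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
days = {
    "day1": rng.exponential(scale=1.0, size=810),   # strongly right-skewed stand-in
    "day2": rng.normal(1.8, 0.7, size=264),
    "day3": rng.normal(2.0, 0.7, size=123),
}

for name, values in days.items():
    print(
        f"{name}: mean={values.mean():.2f}, "
        f"skew={stats.skew(values):.2f}, "
        f"excess kurtosis={stats.kurtosis(values):.2f}"
    )
```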

Descriptive analysis for SPSS Exam

Descriptive statistics

The table above shows the descriptive analysis with the measures of central tendency. The skewness values show that the variables are skewed slightly to the right, by less than one in each case. This indicates that the values decrease gradually away from the mean. The kurtosis analysis shows that the distributions are fairly flat, with no pronounced peak at the centre, or mean.

Figure 7: Histogram of percentage on SPSS exam.
Figure 8: Histogram of computer literacy.
Figure 9: Histogram of percentage of lectures attended.
Figure 10: Histogram of numeracy.

Figures 7, 8, 9 and 10 show the histograms for the variables in the data set. Figures 8 and 9, which represent computer literacy and the percentage of lectures attended, show values that fairly closely follow the normal curve, being highest in the middle and decreasing to the right and the left. In this case, we can conclude from the graphs that these variables are approximately normally distributed. The percentage on the SPSS exam, by contrast, does not follow the normal distribution at all: the values are spread widely, and it is not conclusive whether the data are skewed to the left or to the right. Figure 10 shows that the numeracy values are concentrated towards the left and decrease towards the right.

Test for homogeneity of variance

Test of Homogeneity of Variances.

Variable                          Levene Statistic   df1   df2   Sig.
Percentage on SPSS exam                      2.584     1    98   .111
Computer literacy                             .064     1    98   .801
Percentage of lectures attended              1.731     1    98   .191
Numeracy                                     7.368     1    98   .008

ANOVA

Variable                          Source            Sum of Squares   df   Mean Square         F   Sig.
Percentage on SPSS exam           Between Groups         32112.640    1     32112.640   244.556   .000
                                  Within Groups          12868.360   98       131.310
                                  Total                  44981.000   99
Computer literacy                 Between Groups            20.250    1        20.250      .295   .588
                                  Within Groups           6734.340   98        68.718
                                  Total                   6754.590   99
Percentage of lectures attended   Between Groups          1228.503    1      1228.503     2.656   .106
                                  Within Groups          45324.225   98       462.492
                                  Total                  46552.727   99
Numeracy                          Between Groups            53.290    1        53.290     7.778   .006
                                  Within Groups            671.460   98         6.852
                                  Total                    724.750   99

Levene's test, reported above alongside the one-way ANOVA, gives a statistic of 2.584 (p = .111) for the percentage on the SPSS exam, so the assumption of equal variances is not rejected for that variable. A common rule of thumb is that the largest group variance should be at most four times the smallest. This condition is satisfied for most of the variables, so the data largely show homogeneity of variance, although the significant result for numeracy (p = .008) suggests a violation there (Field, 2009).
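A minimal sketch of the same pair of tests, assuming two hypothetical groups (the labels and simulated scores below are invented, not the data behind the tables): Levene's test checks equality of variances, and a one-way ANOVA compares the group means.

```python
# Hedged illustration of Levene's test plus one-way ANOVA on simulated data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["group_a", "group_b"], 50),    # hypothetical group labels
    "spss_exam": np.concatenate([rng.normal(40, 11, 50), rng.normal(76, 12, 50)]),
})

g1 = df.loc[df["group"] == "group_a", "spss_exam"]
g2 = df.loc[df["group"] == "group_b", "spss_exam"]

lev_stat, lev_p = stats.levene(g1, g2, center="mean")  # mean-centred version of the test
f_stat, f_p = stats.f_oneway(g1, g2)                   # one-way ANOVA on the same groups
print(f"Levene W = {lev_stat:.3f} (p = {lev_p:.3f}); ANOVA F = {f_stat:.3f} (p = {f_p:.3f})")
```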

Assumptions of normality and homogeneity of variance

The assumption of normality means that the data are assumed to be distributed about the mean, with most of the observations concentrated at the centre and tapering off towards the extremes. The assumption of homogeneity of variance, on the other hand, means that the variances of the variables should not differ by a large margin; as a rule of thumb, the largest variance should be no more than four times the smallest for the assumption to hold. Where this assumption is violated, it can lead to false conclusions from a given set of data. There are situations where the effects are not felt, for instance, when a large number of responses are equal or nearly equal to the mean.

Violations can be addressed by transforming the data, using a more conservative ANOVA test, using distribution-free (non-parametric) tests, or trimming the data to better fit the normal distribution.
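Two of these remedies can be illustrated briefly. In the hedged sketch below (simulated, right-skewed stand-in data), a log transformation reduces skew, and the Kruskal-Wallis test serves as a distribution-free alternative to ANOVA.

```python
# Remedies sketch: log transformation and a non-parametric group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
skewed = rng.lognormal(mean=0.5, sigma=0.8, size=200)

print("skew before:", round(stats.skew(skewed), 2))
print("skew after log:", round(stats.skew(np.log(skewed)), 2))

# Distribution-free comparison of three groups (no normality assumption needed).
a, b, c = rng.lognormal(0.5, 0.8, 60), rng.lognormal(0.6, 0.8, 60), rng.lognormal(0.9, 0.8, 60)
h_stat, p_value = stats.kruskal(a, b, c)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```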

Reference

Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Los Angeles: Sage.

Studying the Flow of Gas in the Core of a Reactor Using Discrete Element Method Simulation

Introduction

The Discrete Element Method (DEM) is a technique used to compute the motions of, and effects associated with, a large number of small particles. It is also commonly referred to as the Distinct Element Method (Rycroft et al. 21306). The approach makes it possible to numerically simulate a wide array of particles within a single processor. Today, the concept is commonly used to solve a number of problems encountered in the engineering field, including those related to discontinuous and granular materials. It is also used in the simulation of granular flow, as well as in rock and powder mechanics (Rycroft et al. 21306). In their article, Rycroft et al. also note that the technique can be used to study the motion of liquid and gas flow (21306). To achieve this, a continuum approach that treats the two materials as fluids is used, and such simulations rely on computational fluid dynamics. In this paper, the author describes how they will use the information provided by Rycroft et al. in studying reactor design and testing (21307).

Studying Gas Flow in the Core Using DEM

A DEM simulation can be used to study the flow of gas accurately. The process gives information on local ordering, porosity, and residence-time distribution. For the simulations to be successful and accurate, a number of factors need to be taken into consideration. One of the important factors is the set of forces that act on the gas particles. Friction is one of the major forces experienced when gas is in motion (Lane and Metzger 19). It occurs when two particles come into contact with each other. Contact plasticity, commonly referred to as recoil, also acts on any molecules that come into contact; it is the effect felt when two particles collide (Jebahi et al. 100). Gravity acts on gas particles as well, and its effect has to be taken into consideration when studying the flow of these substances, because the gravitational pull may slow down or increase the velocity of particles depending on the direction of flow. As such, it must be factored in to improve the accuracy of simulations. The attractive potentials of the gas particles in the core, such as electrostatic, adhesive, and cohesive forces, are also important and should be taken into consideration (Jebahi et al. 114). These forces make it difficult to determine the nearest-neighbour pair, so specialised algorithms are needed to resolve their effects (Rycroft et al. 21310).

Molecular forces associated with the gas in question also need to be taken into consideration when using DEM simulations, in order to account for the interactions expected to take place between the gas molecules. Of key importance are the Coulomb forces (Lane and Metzger 19), also commonly referred to as electrostatic forces. According to Azmy and Sartori, these forces act on gaseous molecules that carry electric charges (45), and as such they significantly affect the accuracy of DEM simulations. Pauli repulsion forces are also common when simulating gas flow; they manifest themselves when two atoms get too close to each other and repel one another. Van der Waals forces will also affect the motion of gas particles in the core. This force is the totality of both attractive and repulsive forces between molecules (Lane and Metzger 19), although some attractions and repulsions, such as those resulting from electrostatic attractions and covalent bonds, are not regarded as Van der Waals forces (Jebahi et al. 112). These forces affect the movement of gases within the core, so an algorithm that takes the effect of each of them into consideration is required when simulating the flow of gas. Failure to factor in any of them will have a negative effect on the accuracy of the simulation.

According to Yang, DEM simulations involve taking the sum of all forces acting on each of the gas particles (99). A complex and integrated computational algorithm is used to accurately track the changes that occur in the velocity of the particles within the core and to determine the position of individual particles. According to Azmy and Sartori, DEM simulations also enable one to use the current position of a gas molecule to compute the force that the particle will be subjected to at its next position (48). The loop is continuous and traces the movement of the gas particles throughout the core (Rycroft et al. 21306). A simplified sketch of such a loop is given below.
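The sketch below is a deliberately simplified, hypothetical version of such a loop, written in Python. It includes only gravity and a linear spring contact force, and uses a brute-force contact search, so it illustrates the force-sum-then-update cycle rather than the algorithm actually used by Rycroft et al.

```python
# Toy DEM-style loop: sum the forces on every particle, then update velocity and position.
# Only gravity and a linear spring contact force are modelled; real DEM codes add
# friction, damping and neighbour lists to avoid the O(N^2) search used here.
import numpy as np

N, DT, STEPS = 50, 1e-4, 200
RADIUS, STIFFNESS, MASS = 0.01, 1e4, 1e-3
GRAVITY = np.array([0.0, 0.0, -9.81])

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 0.5, size=(N, 3))
vel = np.zeros((N, 3))

for step in range(STEPS):
    forces = np.tile(MASS * GRAVITY, (N, 1))            # gravity on every particle
    for i in range(N):                                   # brute-force contact search
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        for j in np.where((dist < 2 * RADIUS) & (dist > 0))[0]:
            overlap = 2 * RADIUS - dist[j]
            normal = diff[j] / dist[j]
            forces[i] -= STIFFNESS * overlap * normal    # push particle i away from j
    vel += forces / MASS * DT                            # update velocities from the net force
    pos += vel * DT                                      # update positions from the velocities
```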

Simulating Gas Flow in the Core of a Reactor: Methodology

According to Jebahi et al., the simulation process consists of three distinct stages (98): initialisation, explicit time-stepping, and post-processing. The first phase of the simulation commences with the generation of the core model, in which the particles are spatially oriented (Jebahi et al. 99) and then assigned initial velocities. The explicit time-stepping phase calls for nearest-neighbour sorting (Jebahi et al. 100), the purpose of which is to reduce the number of candidate contact pairs during the simulation and, in the process, the computational requirements. For the purposes of the proposed study, the researcher will conduct a monodispersed simulation. To this end, the following factors will be taken into consideration:

  • The effect of gravity on the motion of gas particles will be taken to be g = 9.81 m/s².
  • The friction coefficient between the wall and the particles is expected to be µw = 0.7. The assumption made here is that the walls are not frictionless.

Integration Methods used to Describe the Flow of Gas

In their study, Rycroft et al. found that various integration methods can be used in DEM simulations (21310). They include the Verlet algorithm, symplectic integrators, and the leapfrog method (Azmy and Sartori 48). The first approach is a numerical technique that integrates Newton's laws of motion. It is mostly used in computer graphics and helps engineers to compute the trajectories taken by gas particles in DEM simulations. The following formulae are used to determine the trajectory of the gas particles in the proposed study, and a small numerical sketch follows the definitions below:

  • Newton's equation of motion for a conservative system is

$M \ddot{x}(t) = -\nabla V(x(t)) = F(x(t)),$

or, for each particle individually,

$m_i \ddot{x}_i(t) = -\nabla_{x_i} V(x_1(t), \ldots, x_N(t)) = F_i(x(t)),$

Where:

  • $t$ represents time.
  • $x(t) = (x_1(t), \ldots, x_N(t))$ represents the ensemble of position vectors of the $N$ objects.
  • $V$, on the other hand, is the scalar potential function.
  • $F$ is the negative gradient of the potential, which gives the ensemble of forces on the particles.
  • $M$ represents the mass matrix, typically diagonal with a mass block for every particle (Yang 23).
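A compact numerical sketch of this scheme is the velocity Verlet step below. It is hedged: the harmonic potential V(x) = ½kx² and all parameter values are stand-ins chosen only to make the example self-contained, not part of the proposed reactor study.

```python
# Velocity Verlet step for a conservative system: given x_n, v_n and F = -dV/dx,
# advance one time step dt. The harmonic force below is a stand-in potential.
import numpy as np

def force(x, k=1.0):
    """F = -dV/dx for the stand-in potential V(x) = 0.5 * k * x**2."""
    return -k * x

def velocity_verlet_step(x, v, m, dt):
    a = force(x) / m                       # acceleration at the current position
    x_new = x + v * dt + 0.5 * a * dt**2   # position update
    a_new = force(x_new) / m               # acceleration at the new position
    v_new = v + 0.5 * (a + a_new) * dt     # velocity update uses both accelerations
    return x_new, v_new

x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = velocity_verlet_step(x, v, m=1.0, dt=0.01)
print(x, v)   # trajectory stays bounded, reflecting the method's good energy behaviour
```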

The symplectic integrator is another numerical technique. According to Yang, the computation is based on symplectic geometry (23) and on classical mechanics (Yang 23). It is used in DEM simulations for a number of reasons, the first being to track the speed and position of particles. It borrows from Hamilton's equations, which state the following:

$\dot{p} = -\dfrac{\partial H}{\partial q}, \qquad \dot{q} = \dfrac{\partial H}{\partial p}$

In this case,

  • $H$ is the Hamiltonian.
  • $q$ denotes the position coordinates.
  • $p$ denotes the momentum coordinates (Yang 23).

The leapfrog method, on the other hand, is similar to the Verlet algorithm. It updates the positions of gas particles within the core and also provides information on the velocity of these particles (Yang 23). The update formulae are given below, followed by a matching code sketch:

$x_{i+1} = x_i + v_{i+1/2}\,\Delta t, \qquad v_{i+1/2} = v_{i-1/2} + a_i\,\Delta t$

In the equations above:

  • $x_i$ is the position at step $i$.
  • $v_{i+1/2}$ is the velocity at step $i + 1/2$.
  • $a_i$ is the acceleration at step $i$, and $\Delta t$ is the size of each time step for a particle (Yang 23).
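The same stand-in potential can be advanced with the leapfrog scheme just described. The sketch below keeps velocities at half steps and positions at whole steps; it is again only an illustrative assumption rather than the study's actual integrator.

```python
# Leapfrog integration: velocities live at half steps (i + 1/2), positions at whole steps.
import numpy as np

def force(x, k=1.0):
    return -k * x                                   # stand-in harmonic force

def leapfrog(x0, v0, m, dt, steps):
    x = x0.copy()
    v_half = v0 + 0.5 * dt * force(x0) / m          # kick velocity to t + dt/2
    for _ in range(steps):
        x = x + dt * v_half                         # drift position by a full step
        v_half = v_half + dt * force(x) / m         # kick velocity by a full step
    return x, v_half

x, v = leapfrog(np.array([1.0]), np.array([0.0]), m=1.0, dt=0.01, steps=1000)
print(x, v)
```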

Conclusion

It is evident that DEM simulations are important when it comes to the process of designing and testing reactors. The technique enables engineers to use computerised algorithms to obtain data touching on local ordering, porosity, and residence-time distribution. As a result, engineers can accurately anticipate the nature of the flow of particles inside a reactor. It is possible to analyse millions of granular particles using this methodology. The technique is also commonly used to provide accurate data on the flow of gas.

Works Cited

Azmy, Yousry, and Enrico Sartori. Nuclear Computational Science: A Century in Review, Dordrecht: Springer, 2010. Print.

Jebahi, Mohamed, Damien Andre, Inigo Terreros, and Ivan Iordanoff. "Discrete Element Modelling of Thermal Behavior of Continuous Materials." Discrete Element Method to Model 3D Continuous Materials 2.27 (2015): 93-114. Print.

Lane, John, and Philip Metzger. A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil, Cleveland, Ohio: National Aeronautics and Space Administration, Glenn Research Centre, 2010. Print.

Rycroft, Chris, Gary Grest, James Landry, and Martin Bazant. "Analysis of Granular Flow in a Pebble-Bed Nuclear Reactor." Physical Review 74.1 (2006): 21306-21321. Print.

Yang, Qiang. Constitutive Modelling of Geomaterials: Advances and New Applications, Berlin: Springer, 2013. Print.

Sampling Strategy and Sample Size

Abstract

The present paper critiques the sampling strategy and sample size of the selected article. Overall, it is evident that the strategies used (e.g., inclusion criteria, stratified sampling, randomization, and power analysis) were effective in maintaining internal validity and ensuring that findings could be generalized to the wider population.

Introduction

Sampling is of immense importance in research, as it allows scholars to study a proportion of the population and generalize the conclusions across the entire population if the sample is representative (Creswell, 2009). This paper critiques the sampling strategy and the sample size of a quantitative study titled "A Parent-Adolescent Intervention to Increase Sexual Risk Communication."

Critique of Sampling Strategy and Sample Size

The study by Villarruel, Cherry, Cabriales, Ronis, and Zhou (2008) recruited 791 participants by inviting adolescents and their parents to participate in a health promotion program. Parents formed the population for the study, with the inclusion criteria for selection being participation in the program and having an adolescent in the family. In retrospect, purposive sampling should have been used here as it enables a focus on specific characteristics of the population that are of interest to the researchers (Frankfort-Nachmias, Nachmias, & DeWaard, 2014).

Stratified sampling was used to group participants according to the number of adolescents in their family and gender, before random sampling was applied to assign participants to the experimental and control groups (Villarruel et al., 2008). These are the major tenets of stratified sampling, which enable researchers to group the sample according to key characteristics and also to randomize participants from different strata (Frankfort-Nachmias et al., 2014). The power analysis demonstrated that the sample size was sufficient, as it had 91% statistical power to identify a small-to-medium effect (d = .25) of the intervention on the outcome (Villarruel et al., 2008). Other analyses (e.g., mediation analysis) proved that the sample could be relied upon to establish the required effect, though the researchers failed to offer a description of these techniques.
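The reported figure can be sanity-checked with a standard power routine. The sketch below is an approximation only: it assumes an even split of the 791 participants into two arms and a two-sided independent-samples comparison, which may not match the exact model the authors used, so the result is expected to land near, rather than exactly on, the reported 91%.

```python
# Approximate power check for d = 0.25 with ~395 participants per arm (assumed split).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.25, nobs1=395, alpha=0.05,
                       ratio=1.0, alternative="two-sided")
print(f"Approximate achieved power: {power:.2f}")
```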

Justification of Sample Size

The analyses described above justified the sample size as sufficient. Failure to justify the sample could lead to adverse outcomes such as inability to get the data required to make a correct decision on a particular research and lack of credible study findings (Creswell, 2009).

Strengths and Limitations of Study due to Sampling Strategy

The sampling strategy used ensured that the study results not only demonstrated the effect of the intervention based on the characteristics of each subpopulation, but could also be relied upon due to the power and effectiveness of the sample. Additionally, the analyses done on sample size ensured the correctness of the sample, minimized sampling error and enhanced heterogeneity characterization, which in turn reinforced the generalizability of findings (Wagner & Esbensen, 2015). Lastly, the stratified sampling technique used in the study saved money and time resources as it uses a smaller sample compared to random sampling due to its greater precision (Uprichard, 2013). However, limitations existed in terms of difficulties in classifying members of each subpopulation according to specific characteristics as well as sorting each member of the selected sample into a single stratum (Creswell, 2009).

Analysis of Sampling Strategy

An effective sampling strategy can strengthen a quantitative study by ensuring that the findings can be generalized to a wider population and also by dealing with threats to internal and external validity. According to Witter (2002), randomization is easy to use and controls for confounding factors that may compromise the internal validity of the study. However, a poor sampling strategy can weaken a quantitative study in terms of undercoverage (leaving out some groups of the population during sampling), non-responsiveness (individuals refusing to participate in a study after selection), as well as producing divergent and often erroneous inferences (Bhattacherjee, 2012).

Conclusion

This paper critiqued the sampling strategy and sample size of the selected article. Overall, it is evident that the sampling strategy used is effective in ensuring that the findings maintained their internal validity and could be generalized to a wider population.

References

Bhattacherjee, A. (2012). Social science research: Principles, methods, and practices. New York City: Springer.

Creswell, J.W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publications Inc.

Frankfort-Nachmias, C., Nachmias, D., & DeWaard, J. (2014). Research methods in the social sciences (8th ed.). New York: Worth Publishers.

Uprichard, E. (2013). Sampling: Bridging probability and non-probability designs. International Journal of Social Research Methodology, 16(1), 1-11.

Villarruel, A.M., Cherry, C.L., Cabriales, E.G., Ronis, D.L., & Zhou, Y. (2008). A parent-adolescent intervention to increase sexual risk communication: Results of a randomized controlled trial. AIDS Education and Prevention, 20(5), 371-383.

Wagner, C., & Esbensen, K.H. (2015). Theory of sampling: Four critical success factors before analysis. Journal of AOAC International, 98(2), 275-281.

Witter, J. (2002). Sample size calculations for randomized controlled trials. Epidemiologic Reviews, 24(1), 39-53.

Expression and Purification of Tagged Protein in E.coli

E. coli has been found to be a model vector in which genes from different sources can be expressed. There have been many developments in the systems through which protein expression and purification can be achieved using E. coli as the cloning agent. Clontech HAT (Histidine Affinity Tag) is a protein expression and purification system in which the vectors encode a novel polyhistidine epitope. This epitope allows proteins expressed in bacteria to be purified at physiological pH under denaturing conditions. This system presents two major advantages: the proteins involved are soluble, and hence there are no apparent aggregates within inclusion bodies, and the proteins can be washed out at neutral pH (Scientifix, 2011).

Several purification strategies have also been devised. The Clontech xTractor Buffer gently disrupts the bacterial cells to enhance protein purification. This system has been optimized to enhance the extraction of polyhistidine-tagged proteins. Extraction of protein using this method is simple: the cells are re-suspended in the buffer and mixed gently for about 10 minutes, and the resulting mixture of salts produces a lysate that has no visible precipitates. Proteins obtained through the xTractor Buffer have a higher biological activity than those obtained through sonication (Scientifix, 2011).

Other methods of protein expression and purification in E. coli exist, and the choice among them depends on the characteristics of the product one wants to achieve in the end. The choice also depends on the available time as well as the financial resources at one's disposal. Nevertheless, the methods are reliable and result-oriented.

References

Scientifix. (2011). Protein expression and purification in E. coli. Web.

Tests and Scaling Tools in Social Studies

Social science researchers have a responsibility towards effective assessment and measurement of work conducted by others through the appropriate use of test and scaling tools (Sahn & Stifel, 2000, p.96). A social science researcher therefore needs to become acquainted with these tools to be able to carry out measurement of data in several disciplines as may be required (Krzanowski & Marriott, 1994a). Much of the time, a critique of the tools used by others becomes necessary for self-development (Chatfield & Collins, 1980). A scaling method, by definition, has to do with organizing data in terms of quantitative attributes (Everitt & Dunn, 2001, p.18; Everitt et al., 2001, p.68). Lately, easier-to-use scaling tools such as SPSS have been developed that change how scientific data are dealt with. With such a tool, analysts are better equipped to transform variables by simply clicking a button (Oppenheim, 1992). Even though much has been done to enhance data analysis, statistical tasks are still often considered arduous by researchers: there is always some emergent obstacle that researchers find distressing rather than an exciting chance to master important skills.

A number of other scale and test tools have been put in place for assessing variables, including the Wechsler Memory Scale, used to estimate an individual's memory function; the MMPI, used to assess personality and mental state; the truthfulness scale, used to determine how reliable one's account is; and the Bayley Scales, used to determine whether a child's growth rate is on track (Manly, 1994, p.24).

One other scaling method that is found very useful in social science is the General Linear Model (GLM), a statistical tool that effectively accommodates normally distributed dependent variables as well as continuous independent variables (GenStat, 2002). The GLM procedures in SPSS afford one the opportunity to specify generalized linear models through syntax or dialog boxes, and equally make it possible to obtain the output in pivot-table format, which is of considerable significance because it makes editing the output easy (De Vaus, 1990, p.36; Krzanowski & Marriott, 1994b, p.6). The several features present in the GLM also make it possible and easier to handle designs that have empty cells, to plot estimated means, and to customize linear models in conformity with the research question at hand (Sahn & Stifel, 2000, p.92). Researchers who have become conversant with fitting linear models, be they univariate, multivariate or repeated measures, will readily note the usefulness of GLM procedures (Chatfield & Collins, 1980, p.107). Basic GLM features include sums of squares, estimated marginal means, profile plots, and custom hypothesis tests; the sums of squares (SS) are available in four approachable types, which are quite easy to access. The first type of SS computes the reduction in error SS as effects are added to the model sequentially.
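The idea of sequential (Type I) sums of squares can also be reproduced outside SPSS. The sketch below is a hypothetical example with invented variable names: each term is added to the linear model in order, and the reduction in error SS is attributed to it.

```python
# Sequential (Type I) sums of squares for a general linear model on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "attitude": rng.normal(50, 10, 120),
    "income": rng.normal(40, 8, 120),
    "group": np.repeat(["a", "b", "c"], 40),
})
df["score"] = 0.4 * df["attitude"] + 0.2 * df["income"] + rng.normal(0, 5, 120)

model = ols("score ~ C(group) + attitude + income", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))   # each effect added in order, error SS reduced stepwise
```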

The use of multivariate methods in survey measurement has proved vital in analyzing index construction as well as in exploring initial-stage data with specified survey subdivisions (Babbie, 1998, p.12; Manly, 1994, p.29). With a good understanding of multivariate sampling methods, one will certainly appreciate their significance for determining index constructions from a very practical angle. For survey researchers, however, the tool may not be familiar, as they are likely to have inadequate knowledge of its usage.

Reference List

Babbie, E. (1998). Survey Research Methods (2nd ed.). Belmont: Wadsworth.

Chatfield, C., & Collins, A.J. (1980). Introduction to Multivariate Analysis. London: Chapman and Hall.

De Vaus, D. A. (1990). Survey in Social Research (2nd ed.). London: Unwin Hyman.

Everitt, B.S., & Dunn, G. (2001). Applied Multivariate Data Analysis. London: Arnold.

Everitt, B.S., Landau, S., & Leese, M. (2001). Cluster Analysis. London: Arnold.

GenStat. (2002). GenStat for Windows, 6th Edition. Oxford: VSN International Ltd.

Krzanowski, W.J. & Marriott, H.C. (1994a). Multivariate Analysis, Part 1. London: Arnold.

Krzanowski, W.J. & Marriott, H.C. (1994b). Multivariate Analysis, Part 2. London: Arnold.

Manly, B.F. (1994). Multivariate Statistical Methods: A primer, 2nd ed. London: Chapman and Hall.

Oppenheim, A. N. (1992). Questionnaire Design, Interviewing and Attitude Measurement. London & NY: Continuum.

Sahn, D.E. & Stifel, D. (2000). Assets as a measure of household welfare in developing countries. Washington: Washington University.

Arguments Against Science in Inherit the Wind by Jerome Lawrence

It is reasonable to believe that science should be appropriately appreciated by most people in the current time of unprecedented progress in all spheres of life and ready availability of information. It forms the foundation for all the technological advancements making everyone's life more comfortable and attractive. However, the increased popularity of counter-scientific theories and various conspiracy beliefs distributed via social networks paints a different picture. It is worth noting a poll showing that over forty percent of Americans think that the government conceals information regarding aliens, and almost half believe stories about haunted houses (Kaufman and Kaufman 12). In this context, Inherit the Wind provides a valuable discussion of the conflicting claims of research and belief. Based on the historic trial that took place in 1925, it delves into the ideas of progress supporters and their opponents. A review of some characters' arguments is essential for understanding the reasons driving many people against science and finding ways to counter them.

The most obvious but rarely recognized reason for people not to believe in research is their failure or unwillingness to think. Rachel directly admits this position in several instances throughout the play. When Cates notes that such a simple thing as twilight is different at the top of the world, she mentions living in Hillsboro as a reason not to think about it (Lawrence and Lee 12). This is especially evident in today's world, when many scientific arguments become excessively complicated for a person to understand. In the final part of the play, Rachel gives a statement that perfectly summarizes this position: she notes that she was always afraid of her possible thoughts and, therefore, it seemed safer not to think at all (Lawrence and Lee 77). However, the fallacy of this argument is that it deprives people of their right to think and makes them easily convincible. Rachel realizes this, emphasizing the significance of ideas, no matter whether they are good or bad (Lawrence and Lee 59). Although it might be hard, thinking on one's own is necessary for adequately navigating today's world.

In a significant number of cases, people's opposition to science is related to their incomplete knowledge or tendency to follow popular beliefs. It is noted that the scientific method is a hard discipline, and even researchers are vulnerable to confirmation bias (Achenbach). In Inherit the Wind, Brady shows a perfect example of this attitude. Participating in the trial dedicated to Darwin's book, he admits to not having read it (Lawrence and Lee 58). He also stands against inviting a zoology or geology professor to the hearings, citing such evidence as "irrelevant, immaterial, inadmissible" (Lawrence and Lee 58). Moreover, even talking about the Bible, in which he calls himself an expert, Brady quickly becomes confused when complicated matters are concerned. The inherent weakness of this attitude is that a person fails to see all the available arguments and make a decision based on them. Just as Rachel was unwittingly led to testify against Cates, everyone can become a victim of insufficient knowledge. Following conspiracy theories, which are easy to understand and believe in, is a direct consequence of this approach. Therefore, a person should strive for comprehensive research data despite its complicated character.

Finally, many people refuse to believe in scientific ideas that differ from their traditional views. Research shows that such concepts as the idea that humans originate from primitive species are hard to grasp and intuitively contradicted by many, even when they are accepted rationally (Achenbach). Moreover, people's tendency to rely on personal experience and create unjustified causal connections supports their counter-science argumentation. All this is reflected in Reverend Brown's ideas stated during the trial. His speech indicates that the world is simple for him: it was created by God as written in the Bible, and anyone opposing this view is a sinner who deserves punishment (Lawrence and Lee 38). In the first scene, he even mentions the need to place a banner indicating the proper views of the community. Brady supports a similar argument, stating that material things are inferior to "the great spiritual realities of the Revealed Word" (Lawrence and Lee 44). Such polarization of opinions can often be detected concerning many scientific issues, and increased literacy does not reduce it (Achenbach). Still, it often leads to pseudoscientific conclusions, which are heavily promoted on the Internet.

As can be seen, the various arguments against science presented in Inherit the Wind remain relevant in today's world. People's desire not to think independently or conduct proper research leads to the increased popularity of easy-to-grasp theories and explanations. The availability of communication means and the influence of views expressed by celebrities further aggravate this issue. A famous actress and anti-vaccine activist, Jenny McCarthy, once noted that she got her degree from "the University of Google" (Achenbach). That quickly recalls the polarized opinions expressed by Reverend Brown and his simplified understanding of the surrounding world. Thus, science should currently be viewed not just as a set of data, but as a method of deciding what to believe. Understanding the arguments against it and their inherent deficiencies helps to break the bubble of misleading information surrounding everybody. It teaches us to ask proper questions and find the right answers, which is critical as technology becomes more and more complicated.

Works Cited

Achenbach, Joel. "Why Do Many Reasonable People Doubt Science?" National Geographic, 2020. Web.

Kaufman, Allison B., and James C. Kaufman, editors. Pseudoscience: The Conspiracy Against Science. MIT Press, 2018.

Lawrence, Jerome, and Robert Edwin Lee. Inherit the Wind. Dramatists Play Service Inc, 2000.

Hydrogen-Bonding Complexes Types Analysis

The following is a review of an article on the hydrogen-bonding complexes of 5-azauracil and uracil derivatives. The article discusses the strong complexes formed when derivatives of uracil bond with complementary compounds such as 2,4-dioxotriazine and adenine. The authors explore why strong bonds are formed between these compounds, while weak bonds are formed when 5-azauracil combines in an aqueous medium (Diez-Martinez, Kim, & Krishnamurthy, 2015). The researchers explain that the moieties formed when the different compounds combine have special bonds that interfere with those of water in aqueous form, leading to the differences in strength.

The researchers discuss the process of solvation that is thought to affect the strength of the hydrogen bond formed between the different moieties (Diez-Martinez, Kim, & Krishnamurthy, 2015). This process is the interference between the bonds of water and the new hydrogen bonds formed when 5-azauracil combines in an aqueous medium. In an organic medium, solvation is absent. Consequently, there is no interference, and the hydrogen bonds are stable leading to the greater strength of these compounds in the organic medium. The researchers base their work on the role that solvation could have played in the origin of life. They state that the discriminating role of this solvent could have informed the selection of molecules utilized in oligomer formation during the process of life development.

Figure 1: Hydrogen bonds formed by 5-azauracil in aqueous media (Source: Diez-Martinez, Kim, & Krishnamurthy, 2015).

The researchers begin with an introduction to the different moieties and oligomers present in DNA and the bonds between them. In addition, they explain the differences in strength between some of the bonds formed between the different moieties. Solvation is an integral part of the natural selection process, as water is thought to interfere with the selection of the different moieties and compounds that form DNA. In fact, the researchers observed minimal interference in the ADA association with DAD bond partners when the complexes were placed in chloroform (Diez-Martinez, Kim, & Krishnamurthy, 2015). However, the interference was greatly increased when the hydrogen bonds were put in an aqueous medium.

In the pairing studies, the researchers used spectroscopy to investigate the strengths of association. The second method used was self-association, in which the researchers investigated the NH proton shift at varying concentrations at a temperature of 298 K; the proton shift in the hydrogen bond reflects the strength of this bond. The third method used was cross-association, in which the researchers experimented with adenine and 2,4-diaminotriazine (Diez-Martinez, Kim, & Krishnamurthy, 2015). In an organic solvent, the pairing of uracil and adenine is mediated through the Hoogsteen mode, which is different from the Watson-Crick mode; the Watson-Crick mode is shown in the combination of 2,4-diaminotriazine and 5-azauracil.

Figure 2: Hydrogen bonds formed by uracil in aqueous media (Source: Diez-Martinez, Kim, & Krishnamurthy, 2015).

In the three methods described, the researchers reported a weaker association of azauracil in water (Diez-Martinez, Kim, & Krishnamurthy, 2015). The explanation given for the weaker association is the pKa value of this compound, which is close to neutral pH. A nitrogen and argon atmosphere was used in the experiments. In addition, the researchers used thin-layer chromatography (TLC) alongside UV lamps and PMA. An ion trap mass spectrometer measured the different characteristics of the compounds and their mass spectra.

The researchers also conducted a study to determine the binding constant. In this procedure, they performed a binding study of each of the host molecules and monitored the effects using NMR spectroscopy (Diez-Martinez, Kim, & Krishnamurthy, 2015). They also performed a Job plot procedure in CDCl3, in which the molarity of the solution was varied while spectrometer readings were taken. They recorded the NH shift of the molecules and used these data to construct a Job plot.
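The Job (continuous-variation) plot itself is straightforward to construct once the shift data are in hand. The sketch below uses an idealised 1:1 binding model with invented numbers, purely to show the shape of the analysis; none of the values come from the paper.

```python
# Job plot from hypothetical NMR data: total concentration fixed, host mole fraction
# x varied, and x * observed NH shift change plotted; a maximum near x = 0.5 is
# consistent with 1:1 binding.
import numpy as np
import matplotlib.pyplot as plt

mole_fraction = np.linspace(0.1, 0.9, 9)            # host mole fraction
delta_shift = 0.25 * (1 - mole_fraction)            # idealised delta(NH) in ppm, tight 1:1 binding

plt.plot(mole_fraction, mole_fraction * delta_shift, "o-")
plt.xlabel("Mole fraction of host")
plt.ylabel("x * Δδ(NH) (ppm)")
plt.title("Job plot (simulated data)")
plt.show()
```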

When synthesizing the derivatives, the researchers describe a complex process involving different pathways and reactants. They began with the creation of a suspension of 6-chloromethyl-uracil and diisopropylethylamine. Into this mixture, they added bis-protected pyrrolidine and stirred the resulting mixture at 60 °C. The mixture was stirred over a period of 24 hours to allow for a thorough mix and elimination of the extra solvent. This process was followed by further evaporation of the remaining solvent at reduced pressure (Diez-Martinez, Kim, & Krishnamurthy, 2015). The resulting residue was re-dissolved in MeOH, after which the researchers filtered it through a celite and silica bed. The same solvent mixture was used to wash the bed several times. Lastly, the researchers combined the filtrate and the washings and dried them to a solid mass that needed no further purification.

The researchers concluded that the bonding of the azauracil moiety with its complementary counterpart in chloroform has the same strength as that of uracil and adenine in chloroform (Diez-Martinez, Kim, & Krishnamurthy, 2015). The experiment showed that the base-pairing propensity in water was mediated by the hydrogen bonds in these compounds. Consequently, the properties of these compounds are determined by the moieties and the strength of the hydrogen bonds in aqueous media.

References

Diez-Martinez, A., Kim, E., & Krishnamurthy, R. (2015). Hydrogen-bonding complexes of 5-azauracil and uracil derivatives in organic medium. The Journal of Organic Chemistry, 80(1), 7066-7075.

Introducing the Geography and Economics of England

Introduction

England is one of the most visited countries on the globe, attracting tourists with its historical heritage and traditions, picturesque lands, and the latest achievements in all fields of human activity. However, no matter how much time travelers may spend in England, it will still not be enough to comprehend this wonderful country. This paper aims at describing England's geography, great sites worth visiting, and famous sports that originated in this country, including football, tennis, and cricket. The paper will also portray the economic and industrial condition of England and provide facts about its GDP and financial sector.

Geography and Great Sites

England has a varied geography, mostly comprising hilly and flat lowland areas, with mountains in the north and west. Low hills stretch across much of the country, interspersed with low-lying lands and plains. The Highland Zone includes the Pennine mountain range, the mountains of the Lake District, and the Cumbrian Mountains (Meyer). Although some mountains peak at over 900 meters above sea level, most of them are relatively easily accessible, with numerous roads along low watersheds and wide passes (Meyer). Besides, due to recent uplifts occurring in several stages, the mountains were fragmented into many massifs and acquired a specific mosaic structure with aligned surfaces at different heights.

England also possesses a rich variety of inland waterways, that is, rivers. The River Severn is the longest river in the United Kingdom; it originates in Wales, passes through England, and flows into the Bristol Channel. Nevertheless, England's longest river is the Thames, whose length equals 346 kilometers (Meyer). Additionally, England is rife with lakes, mainly located in the Lake District and distinguished by their remarkable picturesqueness.

Among the most prominent sites, tourists should see Stonehenge, the Tower of London, and Leeds Castle. Stonehenge is one of the best-known places, which carries historical and cultural significance for the entire civilization. This prehistoric stone monument arranged in a circle is located on Salisbury Plain, in Wiltshire. Despite its unknown origin, it is assumed that Stonehenge was constructed for religious and civil purposes. The Tower of London, also known as the Tower, is an ancient royal fortress initially built by William the Conqueror; its grounds and buildings historically served as a political prison, a royal palace, an arsenal, and a mint. Leeds Castle was later converted into a palace for the royal family of King Henry VIII, due to which it preserved its historical aesthetic and appeal (Meyer). Besides, the fortress was constructed on three islands that rise from a beautiful artificial lake.

Sports

Undeniably, English people are a highly sporting nation who established many popular team and individual sports, including football, tennis, and cricket. For instance, although ball games can be traced back to ancient times, the contemporary football rules were formed in English public schools in the 19th century, when the first professional clubs were established (The BBC). The English Premier League is currently famous for outstanding clubs such as Liverpool, Manchester United, Chelsea, Arsenal, and Everton, and it draws an enormous global audience. Many UK teams compete in the Union of European Football Associations (UEFA) Champions League, the world's most prestigious football competition.

Cricket, a slightly complicated game of balls, bats, and wickets, which sometimes may last up to five days, also originated in England. The first clear reference to the sport appeared in a court testimony from Guildford in 1597, and international matches were first held in the late 1800s (The BBC). It is worth noting that the Men's Cricket World Cup was launched in 1975, two years after the Women's (The BBC). The best-known competition is the Ashes, a series of matches played between Australia and England.

Tennis, initially known as lawn tennis, is England's most significant individual sport in terms of viewing audiences and registered players. The origins of the game date back to the twelfth century, but its modern form evolved in England in the late 19th century, when the first tennis club was founded in Leamington Spa in 1872 (British Council). The oldest and most famous tennis tournament in Britain is the Wimbledon Championships, which takes place every year at the All England Lawn Tennis and Croquet Club in London.

Economy

England is a highly industrialized country, producing textiles, clothing, chemical products, automobiles, locomotives, ships, and aircraft. Main industries also include computers, microelectronics, pharmaceuticals, and paper and glass products. Approximately one-fifth of workers in England are employed in manufacturing (Thomas and Kellner). The financial sector has played an increasingly considerable part in English economic development, and London is one of the world's largest financial centers. According to the Office for National Statistics, as of 2018, gross domestic product (GDP) amounted to about $2.5 trillion, with almost 1.5 percent growth (Office for National Statistics). The British pound sterling, England's official currency, is one of the leading and most stable currencies in the world. Insurance companies, banks, and futures and commodity exchanges are heavily concentrated in the City of London, and the foremost financial institution is the UK's central bank, the Bank of England.

Conclusion

In summary, the paper has described England's geography, great sites, and famous sports that originated in this country, including football, tennis, and cricket. England has a varied geography, mostly comprising hilly and flat lowland areas and mountains in the north and west, such as the Pennine mountain range and the Cumbrian Mountains. In addition, many popular team and individual sports, including football, tennis, and cricket, were born in this country. Finally, England's economy is intensely and comprehensively developed, manufacturing various goods and highly technological equipment, devices, and machines.

Works Cited

Meyer, Amelia. Geography. England Forever, 2013.

Meyer, Amelia. Famous Castles. England Forever, 2013, Web.

Regional Economic Activity by Gross Domestic Product, UK: 1998 to 2018. Office for National Statistics.

7 amazing sports invented in The UK. The BBC, 2020. 

Sport in the UK. British Council, 2020. Web.

Thomas, William Harford, and Kellner, Peter. England. Encyclopædia Britannica, 2019.

Genomics, Vaccines, and Weaponization

Three former bioweaponers, Sergio Popov, Ken Alibeck, and Bill Patrick, had different motivations for engaging in the development of biological weapons. Bill Patrick had a strong desire to develop biological weapons for warfare because he believed that biological weapons are a humane way of dealing with the enemy. He also believed that biological weapons were better to use in warfare than bombs or chemical weapons. Ken Alibeck and Sergio Popov engaged in the development of biological weapons because it was the only work available to them at that time in the Soviet Union. Alibeck and Popov had little enthusiasm for the development of biological weapons, as they believed that biological weapons posed serious threats to the world (Nova, 2011).

I changed my views on the three bioweaponers after watching their interviews with Kirk Wolfinger. Although I consider the development of biological weapons to be an evil undertaking, some of the scientists were forced into bioweaponry by their circumstances. Ken Alibeck and Sergio Popov engaged in the development of biological weapons not out of their own desire, but because they were forced by circumstances to earn a living. Bill Patrick, however, seems enthusiastic about the development of biological weapons out of his own passion.

Bill Patrick is the only bioweaponer in the interviews who still holds the same sentiments and views about biological weapons as in the heyday of their development. Bill spent over thirty years at the Fort Detrick base for biological weapons in the United States and later went to work on microbe defenses (Nova, 2011). Bill believes that biological weapons are still a viable form of weaponry to use against the enemy because they are more humane than other forms: they merely incapacitate the enemy and do not damage infrastructure (Nova, 2011).

Sergio Popov, a former Soviet scientist who worked on biological weapons, and Ken Alibeck, a former Soviet bioweaponer, both of whom fled to the United States after the collapse of the Soviet Union, hold different views nowadays than in the past. They both believe that the development of biological weapons was an evil undertaking, and they do not wish to engage in the process ever again (Nova, 2011).

After the collapse of the Soviet Union, in 1992 the Russian president, Boris Yeltsin, signed a decree that banned the development of biological weapons. Most of the stockpiles of biological weapons were destroyed, and there was a considerable downsizing of the biological weapons stockpiles held by the former Soviet Union. However, doubts remain about whether Russia completely eliminated all stockpiles of biological weapons developed by the Soviet Union (Jeane, 2005).

Many of the scientists involved in the development of biological weapons in the Soviet Union have emigrated to other countries to offer their skills. These scientists can be lured by rogue states to offer their skills and knowledge for the development of biological weapons, and this is something that should worry the world significantly. The United States has granted asylum to many of these scientists to prevent them from being lured by rogue nations into the development of biological weapons (Christian, 2003).

Reverse vaccinology, a method of searching for candidate vaccine antigens in pathogens, has several steps. The first step is the sequencing of the genome of the pathogen of interest. After sequencing, several algorithms are applied to identify the cell-surface and secreted proteins that can elicit an antibody response in a human host. The next step is the production of recombinant proteins in bacteria such as E. coli.

The recombinant proteins are then purified and used as immunogens in mice. The immune sera obtained from the immunized mice are collected, assayed, and tested for their ability to bind to the antigen on the bacterial surface and for their bactericidal activity. Furthermore, the vaccine candidates are taken through a process of final evaluation before being tested in clinical trials.
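As a purely illustrative, toy-level sketch of the selection logic (none of the names or thresholds below come from an actual pipeline), candidate antigens can be filtered from hypothetical per-ORF predictions of localization and signal peptides before recombinant expression:

```python
# Toy reverse-vaccinology filter: keep ORFs predicted to be surface-exposed or
# secreted, since those are the candidate immunogens. Real pipelines rely on
# dedicated predictors (signal peptides, transmembrane helices, epitopes), which
# are not reproduced here; all records below are invented.
from dataclasses import dataclass

@dataclass
class OrfPrediction:
    name: str
    localization: str          # e.g. "surface", "secreted", "cytoplasmic"
    has_signal_peptide: bool

predictions = [
    OrfPrediction("orf_001", "cytoplasmic", False),
    OrfPrediction("orf_002", "surface", True),
    OrfPrediction("orf_003", "secreted", True),
    OrfPrediction("orf_004", "membrane", False),
]

candidates = [
    p.name for p in predictions
    if p.localization in {"surface", "secreted"} or p.has_signal_peptide
]
print("Candidate antigens to express recombinantly:", candidates)
```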

Some letters containing highly infectious spores of dry powdered anthrax were sent to various locations in the United States via mail in September 2001. After the anthrax attacks, hundreds of samples were taken from numerous facilities suspected of having become contaminated with the anthrax spores in the letters to determine the extent of contamination (National Academy of Sciences, 2011). The Centers for Disease Control gathered over 125,000 samples after the anthrax attack. The strains of anthrax isolated from the contaminated letters were identified through genome sequencing and carbon-14 dating and found to be related to the Ames strain of the bacterium (Lake, 2011).

DNA sequencing of anthrax isolated from some of the victims of the attack was done at The Institute for Genomic Research in 2001. Carbon-14 dating of samples of the anthrax strains, done at the Lawrence Livermore National Laboratory in June 2002, established that the anthrax had been cultured two years before it was sent in the mail (Lake, 2011).

The Institute for Genomic Research and other biodefense experts also identified many mutations in the anthrax strains obtained from the letters. These mutations were identified through genome analysis and the FBI's screening of over 1,073 assays of the anthrax strains obtained from the contaminated letters. After carrying out the genomic analysis, the Federal Bureau of Investigation concluded that the strains of anthrax were related to an Ames strain of the bacterium cultured at the United States Army Medical Research Institute of Infectious Diseases (Lake, 2011).

Weaponization means the alteration of the genetic structure of an organism to improve its virulence (disease-causing ability) and drug resistance for use in warfare. The weaponization of a biological agent refers to the act of enhancing a biological agent so that it can be used as a weapon (Christian, 2003). A biological agent might be weaponized through manipulation or treatment in a way that improves its usefulness as a weapon, such as making it more virulent, easier to disseminate as an aerosol, or more stable during dissemination (Jeane, 2005).

Dr. Fraser's (2004) article, "A Genomics-Based Approach to Biodefense Preparedness," highlights the history of bioterrorism and covers in depth how genomics influences the development of biological weapons and resistance to antibiotics and vaccines. The smallest genome that has been sequenced is that of Mycoplasma genitalium G37, at 0.58 Mb; this bacterium causes urethritis and arthritis. The largest genome that has been sequenced is that of Pseudomonas aeruginosa PAO1, which causes opportunistic infections, with a genome of 6.26 Mb. According to Jeane (2005), the size of the genome in bacteria matters, as it influences the bacteria's virulence and their resistance to drugs.

As a professor in a new biotechnology department, I would identify whether new postdoctoral students pose potential security risks by using creativity and security strategies. I would use methods such as surveillance and background checks of the students' identities to ensure that everything done in the lab is accounted for and recorded. I would respect students' privacy rights in interviews, but I would profile the students to establish their true identities. Laboratory security measures, such as the installation of surveillance systems, would be implemented to ensure that activities done in the lab are recorded.

I would minimize theft in the laboratory by putting in place structures and procedures that allow for accountability, where every scientist records what is used, and by carrying out checks at exit points to reduce the risk of theft. Having only non-foreign students in a laboratory does not eliminate theft and spying in laboratories; there is a need for good security structures in all laboratories at all times, regardless of the composition of the staff.

Racial profiling has increased as a result of the many security threats that America faces. It is common for the police to stop and search members of minority races in America rather than members of the white majority. Although times of increased threat to national security call for exceptional security measures, it is wrong to profile people according to their race for security reasons, as such measures infringe on the right to privacy and the notion of equality of races (Michele, 2004).

The proposed Hiking bill in New York seeks to give police officers the right to consider a person's race or ethnicity when deciding whether to stop or search a suspect. It is a racist bill that will lead to an increase in cases of racial profiling against minority populations in the United States.

Reference List

Christian C. (2003). Biological weapons: An overview of threats and responses. California: Strategic and defense studies centre

Fraser, C. (2004). A genomics-based approach to bio-defense preparedness. Nature Reviews Genetics, 5, 23-33.

Jeane G. (2005). Biological weapons: From the invention of state sponsored programs To contemporary bioterrorism. Columbia: Columbia University Press.

Lake E. (2011). Analyzing the anthrax attacks. Web.

Michele, M. (2004). Racial profiling a matter of survival. Web.

National Academy of Sciences. (2011). Anthrax: a medical detective story. Washington: National academy of sciences.

Nova. (2011). Interviews with bio warriors. Web.