Osmosis in Living Organisms: Germination Experiment

Osmosis is known to be a diffusion process “related to the concentration gradient and to vapor pressure gradient across the membrane” (Howlett 53). This phenomenon may be discussed in many contexts, and germination is one of them. Germination experiments aptly illustrate the key characteristics of osmosis and demonstrate that it can occur at different rates, with the nature of the surrounding substance being the primary factor.

The Processes Associated with Seed Germination

In order to interpret the results of the experiment and explain the predicted differences between the studied groups, one should pay attention to the essence of germination. When the conditions are favorable, i.e., the temperature is acceptable and the seed is supplied with water, it emerges from the dormant state. It is possible to single out several stages of seed germination.

First and foremost, the dry seed starts imbibing (absorbing) water and expanding; once the enzymes and food reserves become hydrated, the enzymes become active: the seed increases its metabolic activities, and the energy for the process of growing is produced (Bradbeer 18). Water plays an important role: turgor pressure inside the cells starts rising, and the seed can gradually grow in size. During the second stage, respiration is the most significant process. As the seed is provided with oxygen, its respiration changes from the anaerobic to the aerobic type. Water supply is also important. During this stage, the radicle and the plumule appear. In the final stage, the cotyledons are expanded (Bradbeer 34).

Treatment and the Expected Effects

The experiment will be carried out on 20 seeds placed in different conditions. The treatment will consist of wetting the towels in which the seeds are placed with different liquids. It is expected that such treatment will create the favorable conditions necessary, as stated above, for seed germination. In this context, water becomes the most important component, since it is water that triggers the beginning of germination. The effects of distilled water are expected to be the most visible. This claim can be supported by the role of osmosis: in the case of distilled water, nothing prevents the dormant seeds from interacting with water at the molecular level. In other words, water that does not contain any additional substances directly affects the seeds. It may be expected that the seeds placed in the towel soaked with distilled water will germinate faster than those placed in the other conditions; this process will probably take a day. Other liquids, which contain a wide range of dissolved molecules, will be less advantageous for the seeds. This can be explained, again, by the phenomenon of osmosis. To germinate, seeds must interact with water, but the dissolved molecules not only produce no benefit but also hamper the process by slowing the uptake of the water the seeds actually need. As a result, germination will still take place, but it will happen later than with distilled water.

The Purpose of the Experiment

The present experiment is to be conducted to explore the factors that should be taken into account during seed germination and explain them from the scientific point of view using the background knowledge of osmosis and seed germination stages. In this respect, water is the most important factor the influence of which is to be studied in the experiment.

Works Cited

Bradbeer, J. W. Seed Dormancy and Germination. New York: Springer Science & Business Media, 2013. Print.

Howlett, Larry. Osmosis: The Molecular Theory. San Francisco: EBookIt, 2014. Print.

An Experiment on Antibiotic-Resistant Bacteria

Abstract

This experiment was aimed at investigating the effects of MSa bacteriophages on the antibiotic-resistant bacterium Staphylococcus aureus. At the very beginning, it was hypothesized that MSa bacteriophages would either destroy the bacterium or suppress its growth. The findings only partially support the initial hypothesis because this research has several limitations, in particular insufficient sampling and a shortage of time.

Introduction

The experiment that has been carried out aims to test the use of phage therapy on an antibiotic-resistant bacterium, in particular Staphylococcus aureus, or golden cluster seed, as it is also known. This microorganism is known to survive exposure to antibiotics (Lindsay, 240).

This pathogen can be the cause of septic arthritis and endocarditis (Fischetti, 224). Phage therapy is considered to be an alternative to antibiotics. It relies on the use of bacteriophages, viruses that destroy or slow down the growth of a pathogen yet remain harmless to the beneficial bacteria and the host body (Grath & Sinderen, 3).

One should note that this question has long been of great interest to many biologists. To prove this point, I can refer to the studies conducted by Rosendal and Bulow in the early sixties and to the most recent research carried out by a group of authors under the direction of Petra Kramberger (2010).

The main objective of these studies was to develop a bacteriophage that would destroy this bacterium. During this experiment, I attempted to investigate the effects of MSa phages against Staphylococcus aureus. The initial hypothesis was that these MSa phages would destroy the cells of the bacterium. This hypothesis was based on recent research findings indicating that these phages have a negative effect on Staphylococcus aureus (American Society for Microbiology, unpaged).

Materials and Methods

In order to perform this experiment, I took five samples of water contaminated with Staphylococcus aureus. Each of these samples was preserved in a 50 ml tube. I decided to work with five samples mostly because I wanted to determine whether the concentration of phages within a sample affects the bacteria in any way.

The study was conducted over ten days. While observing the interplay of MSa phages and Staphylococcus aureus, I paid attention to possible growth-suppression effects or signs of lysis, in other words, the dissolution of a cell. I introduced a different number of phage colonies into each sample, as shown in the table below:

Sample Phage Concentration
Sample One One Phage Colony
Sample Two Two Phage Colonies
Sample Three Three Phage Colonies
Sample Four Four Phage Colonies
Sample Five Five Phage Colonies

Throughout the experiment, the temperature remained at 100 °F. As I have noted before, I was primarily interested in the effects produced by bacteriophages on Staphylococcus aureus.

Results

On the whole, the experiment yielded varying results. In the first two samples, I observed no signs of growth suppression or lysis. In the third and fourth samples, I noticed declines in growth of 10 and 15 per cent, respectively. Only in the fifth sample were there signs of lysis, which means that bacteriophages attached themselves to the bacterial cells. These findings are presented in the following table.

Sample Outcome
Sample One No signs of growth suppression
Sample Two No signs of growth suppression
Sample Three 10 per cent growth decline
Sample Four 15 per cent growth decline
Sample Five Signs of lysis

Therefore, these findings indicate that the concentration of phages within a sample may affect the viability of the bacterium.

Discussion

Overall, it is possible to argue that the findings confirm the initial hypothesis and that MSa phages can indeed produce a negative effect on Staphylococcus aureus. Furthermore, these results suggest that the number of phage colonies is another factor that influences the interplay of the bacterium and the bacteriophage.

Still, I have to admit that this research has several limitations: 1) first of all, the number of samples was too small and insufficient for biological research; 2) secondly, such studies are normally carried out over a period longer than ten days.

It is quite probable that I will continue this research in the future, because it seems to me that biologists as well as medical workers will pay even more attention to phage therapy. The thing is that an increasing number of bacteria have developed resistance mechanisms to antibiotics. Despite the efforts of pharmacologists who produce new antibiotics, such bacteria as Staphylococcus aureus still continue to survive.

Another reason why this area of research appears very promising to me is that antibiotics usually entail a great number of health complications while phage therapy is much safer for the patient. So, there is a great likelihood that such research will benefit the community. I am not sure that I will focus particularly on the study of Staphylococcus aureus, yet antibiotic-resistant bacteria are of great interest to me.

Works Cited

American Society for Microbiology. “Phage Therapy May Control Staph Infections In Humans Including MRSA.” ScienceDaily, 22 August 2007. Web.

Fischetti, V. Gram-Positive Pathogens. New Jersey: Wiley-Blackwell, 2006. Print.

Grath, Stephen, and Douvie Sinderen. Bacteriophage: Genetics and Molecular Biology. NY: Horizon Scientific Press. Print.

Kramberger, Petra, Richard Honour, Richard Herman, et al. “Purification of the Staphylococcus aureus Bacteriophages VDX-10 on Methacrylate Monoliths.” Journal of Virological Methods (2010): 1-5.

Lindsay, Jodi. Staphylococcus: Molecular Genetics. NY: Horizon Scientific Press, 2008. Print.

Rosendal, K., and P. Bulow. “Temperate Phages Influencing Lipase Production by Staphylococcus Aureus.” Journal of General Microbiology 41 (1965): 349-356.

Simon, E.J., J.B. Reece, and J.L. Dickey. Campbell Essential Biology. 4th edition. Boston: Benjamin Cummings, 2010.

Versalovic, James, and M. Wilson. Therapeutic Microbiology: Probiotics and Related Strategies. ASM Press, 2008. Print.

Collisions in One Dimension: A Physical Experiment

Abstract

When two bodies collide, they either stick together or separate. It is believed that there is a relationship between the initial and final momentum as well as between the initial and final energy. Theoretically, in an elastic collision both energy and momentum are conserved, unlike a perfectly inelastic collision, where only momentum is conserved (Drexler 24). Hence, the purpose of this experiment was to test this theory.

In this experiment, two carts on a runway were set to collide while their velocities before and after the collision were recorded. Their velocities and masses were altered between runs, and the readings were recorded. For the case of a perfectly inelastic collision, the carts were set to collide on the ends bearing Velcro. The experimental results revealed less than 20% and more than 40% loss in KE for the elastic and perfectly inelastic collisions respectively, with some disparities in the measurements. The momentum loss was kept below 10% for both types of collision. These losses in KE could be attributed to heat, sound, deformation, or light. The perfectly inelastic collision has relatively greater losses than the elastic collision because it liberates more energy in many forms. Thus, the experiment would have agreed closely with the theoretical prediction had it not been for some unavoidable uncertainties; hence, 100% momentum and/or energy conservation is unattainable in practice.

Experimental Objectives

The objective of this experiment is to ascertain that when bodies are involved in an elastic collision, both the energy and the momentum are conserved unlike in a perfectly inelastic collision where only the momentum is conserved.

Procedure

In this experiment, one cart (cart 1) was pushed to collide with a stationary cart (cart 2) placed between computerized velocity-sensitive gates, which recorded the velocities before and after the collision. The masses of the carts were altered by loading different masses onto them; hence, there were four runs (two apiece for the elastic and perfectly inelastic collisions), with the masses held constant within each run. The initial velocity of cart 1 was varied at least four times in every run. In the case of the perfectly inelastic collisions, the carts were made to collide on the ends bearing Velcro.

Results

Table A of experiment part 1(elastic collision of two equal masses)

(V2f-V1f)/(V2i-V1i) r=∆P/Pi ΔKE/KEi
-0.954 0.046 0.09
-0.941 0.114 0.114
-0.955 0.089 0.089
-0.952 0.048 0.093
-1.017 0.017 0.033

Table B of experiment part 2 (elastic collision of varied masses)

(V2f-V1f)/(V2i-V1i) r=∆P/Pi ΔKE/KEi
-1.049 0.179 0.195
-0.897 0.065 0.113
-0.916 0.017 0.097
-0.920 0.029 0.760

Table C of experiment part 2 (perfectly inelastic collision of equal masses)

V1f/V2f r=∆P/Pi ΔKE/KEi
1.000 0.113 0.606
1.000 0.031 0.531
1.000 0.063 0.561
1.000 0.100 0.595
1.000 0.053 0.442

Table D of experiment part 2 (perfectly inelastic collision of varied masses)

V1f/V2f r=∆P/Pi ΔKE/KEi
1.000 0.104 0.735
1.000 0.132 0.752
1.000 0.084 0.723
1.000 0.093 0.729

Data analysis

For an elastic collision, both energy and momentum are conserved; hence,

  • Momentum: m1v1i + m2v2i = m1v1f + m2v2f
  • Rearranging the momentum equation we get: m1(v1i - v1f) = m2(v2f - v2i)
  • Energy: (1/2)m1v1i^2 + (1/2)m2v2i^2 = (1/2)m1v1f^2 + (1/2)m2v2f^2

Rearranging and simplifying the energy equation:

  • m1(v1i^2 - v1f^2) = m2(v2f^2 - v2i^2)
  • m1(v1i - v1f)(v1i + v1f) = m2(v2f - v2i)(v2f + v2i)

Dividing the energy equation by the momentum equation gives v1i + v1f = v2f + v2i, hence:

  • (v2f - v1f) = -(v2i - v1i)
  • Dividing both sides by (v2i - v1i) we get: (v2f - v1f)/(v2i - v1i) = -1

Table 1 of experiment part 1(elastic collision)

V1i (m/s) V2i (m/s) V1f (m/s) V2f (m/s) Mass 1 Mass 2 (V2f-V1f)/(V2i-V1i) r=∆P/Pi ΔKE/KEi
0.392 0 0 0.374 0.524 kg 0.524 kg -0.954 0.046 0.09
0.444 0 0 0.418 -0.941 0.114 0.114
0.330 0 0 0.315 -0.955 0.089 0.089
0.377 0 0 0.359 -0.952 0.048 0.093
0.363 0 0 0.369 -1.017 0.017 0.033

Relative velocities were obtained from the quotient of (V2f-V1f)/ (V2i-V1i).

Therefore, (0.374-0)/ (0-0.392) =-0.954

r=∆P/Pi= [(P1f+P2f)-(P1i+P2i)]/ (P1i+P2i)

= [(m1V1f + m2V2f) - (m1V1i + m2V2i)] / (m1V1i + m2V2i)

= [(0.524*0+0.524*0.374)-(0.524*0.392+0.524*0)]/ (0.524*0.392+0.524*0)

= 0.046

ΔKE/KEi = (KEf –KEi)/KEi

= {[(0.5*m1*V1f^2) + (0.5*m2*V2f^2)] - [(0.5*m1*V1i^2) + (0.5*m2*V2i^2)]} / [(0.5*m1*V1i^2) + (0.5*m2*V2i^2)]

= [(0.5*0.524*0^2+0.5*0.524*0.374^2)-(0.5*0.524*0.392^2+0.5*0.524*0^2)]/ (0.5*0.524*0.392^2+0.5*0.524*0^2)

= 0.090
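
As a cross-check on this arithmetic, the same quantities can be computed programmatically. The following Python sketch reuses the values from the first row of Table 1; the variable names are illustrative, and the script is not part of the original lab procedure.

```python
# Cross-check of the Table 1 sample calculation (first elastic run).
# Masses in kg, velocities in m/s, values taken from Table 1.
m1, m2 = 0.524, 0.524
v1i, v2i = 0.392, 0.0
v1f, v2f = 0.0, 0.374

# Relative velocity ratio: expected to equal -1 for a perfectly elastic collision.
rel_ratio = (v2f - v1f) / (v2i - v1i)

# Fractional momentum change, r = (p_f - p_i) / p_i.
p_i = m1 * v1i + m2 * v2i
p_f = m1 * v1f + m2 * v2f
r = (p_f - p_i) / p_i

# Fractional kinetic-energy change, dKE / KE_i.
ke_i = 0.5 * m1 * v1i**2 + 0.5 * m2 * v2i**2
ke_f = 0.5 * m1 * v1f**2 + 0.5 * m2 * v2f**2
dke = (ke_f - ke_i) / ke_i

print(f"relative velocity ratio = {rel_ratio:.3f}")  # about -0.954
print(f"dP/Pi  = {r:.3f}")    # about -0.046 (tabulated as the magnitude 0.046)
print(f"dKE/KE = {dke:.3f}")  # about -0.090 (tabulated as 0.09)
```

The negative signs indicate that momentum and kinetic energy decreased; the tables report the magnitudes of these fractional changes.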

Table 2 of experiment part 2 (elastic collision)

V1i (m/s) V2i (m/s) V1f (m/s) V2f (m/s) Mass 1 Mass 2 (V2f-V1f)/(V2i-V1i) r=∆P/Pi ΔKE/KEi
0.347 0 -0.110 0.254 0.500 kg 1.022 kg -1.049 0.179 0.195
0.318 0 -0.084 0.201 -0.897 0.065 0.113
0.326 0 -0.092 0.207 -0.916 0.017 0.097
0.342 0 -0.096 0.219 -0.920 0.029 0.760

Relative velocity ratio= (0.254-(-0.11))/(0-0.347)=-1.049

r=∆P/Pi = [(0.500*(-0.110)+1.022*0.254)-(0.500*0.347+1.022*0)]/ (0.500*0.347+1.022*0)

= 0.179

ΔKE/KEi = [(0.5*0.500*(-0.11)^2+0.5*1.022*0.254^2)-(0.5*0.500*0.347^2+0.5*1.022*0^2)]/ (0.5*0.500*0.347^2+0.5*1.022*0^2)

=0.195

Table 3 of experiment part 3 (perfectly inelastic collision)

V1i (m/s) V2i (m/s) V1f (m/s) V2f (m/s) Mass 1 Mass 2 V1f/V2f r=∆P/Pi ΔKE/KEi
0.444 0.000 0.197 Same 0.524 kg 0.524 kg 1.000 0.113 0.606
0.483 0.000 0.234 Same 1.000 0.031 0.531
0.442 0.000 0.207 Same 1.000 0.063 0.561
0.271 0.000 0.122 Same 1.000 0.100 0.595
0.836 0.000 0.195 Same 1.000 0.053 0.442

V1f/V2f= 0.197/0.197=1

r=∆P/Pi = [(m1+m2)Vf - (m1V1i + m2V2i)] / (m1V1i + m2V2i)

= [(0.524+0.524)*0.197-(0.524*0.444+0.524*0)]/ (0.524*0.444+0.524*0)

= 0.113

ΔKE/KEi = {[0.5*(m1 + m2)*V^2] - [(0.5*m1*V1i^2) + (0.5*m2*V2i^2)]} / [(0.5*m1*V1i^2) + (0.5*m2*V2i^2)]

= [(0.5*(0.524 + 0.524)*0.197^2)-(0.5*0.524*0.444^2+0.5*0.524*0^2)]/ (0.5*0.524*0.444^2+0.5*0.524*0^2)

= 0.606
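
The same kind of cross-check applies to the perfectly inelastic case, where both carts move off with the common final velocity Vf. The short Python sketch below uses the first row of Table 3; again, the variable names are illustrative rather than part of the original analysis.

```python
# Cross-check of the Table 3 sample calculation (first perfectly inelastic run).
m1, m2 = 0.524, 0.524   # kg
v1i, v2i = 0.444, 0.0   # m/s
vf = 0.197              # common final velocity of the stuck-together carts, m/s

# Fractional momentum change.
p_i = m1 * v1i + m2 * v2i
p_f = (m1 + m2) * vf
r = (p_f - p_i) / p_i

# Fractional kinetic-energy change.
ke_i = 0.5 * m1 * v1i**2 + 0.5 * m2 * v2i**2
ke_f = 0.5 * (m1 + m2) * vf**2
dke = (ke_f - ke_i) / ke_i

print(f"dP/Pi  = {r:.3f}")    # about -0.113 (tabulated as 0.113)
print(f"dKE/KE = {dke:.3f}")  # about -0.606 (tabulated as 0.606)
```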

Table 4 of experiment part 4 (perfectly inelastic collision)

V1i (m/s) V2i (m/s) V1f (m/s) V2f (m/s) Mass 1 Mass 2 V1f/V2f r=∆P/Pi ΔKE/KEi
0.325 0.000 0.096 Same 0.500kg 1.022 kg 1.000 0.104 0.735
0.296 0.000 0.084 Same 1.000 0.132 0.752
0.297 0.000 0.089 Same 1.000 0.084 0.723
0.320 0.000 0.095 Same 1.000 0.093 0.729

V1f/V2f= 0.096/0.096=1

r=∆P/Pi = [(m1+m2)Vf - (m1V1i + m2V2i)] / (m1V1i + m2V2i)

= [(0.500+1.022)*0.096-(0.500*0.325+ 1.022*0)]/ (0.500*0.325+ 1.022*0)

= 0.104

ΔKE/KEi = {[0.5*(m1 + m2)*V^2] - [(0.5*m1*V1i^2) + (0.5*m2*V2i^2)]} / [(0.5*m1*V1i^2) + (0.5*m2*V2i^2)]

= [(0.5*(0.500 + 1.022)*0.096^2)-(0.5*0.500*0.325^2+0.5*1.022*0^2)]/ (0.5*0.500*0.325^2+0.5*1.022*0^2)

= 0.735

For the perfectly inelastic collision in part 4, rexpected = (m - M)/(2m)

= (0.500 - 1.022)/(2*0.500)

= -0.522

Discussion

The objective of this experiment was to affirm that indeed, when bodies are involved in an elastic collision, both energy and momentum are conserved, whereas in a perfectly inelastic collision only momentum is conserved. From the experiment, with respect to the elastic collisions shown in Tables 1 and 2, the relative velocity ratios were approximately equal to the expected value of -1, with the extremes being -0.897 and -1.049. For the perfectly inelastic collisions, the ratio V1f/V2f was constant at 1.000, since the bodies move with a common velocity.

For the elastic collisions (parts 1 and 2), ∆P/Pi remained fairly below 10%, with only two values exceeding 10%. This was expected; however, it disagrees with the theory, which states that momentum ought to be conserved completely, so that ∆P/Pi should equal zero (Drexler 23). The fluctuation in momentum can be attributed to the loss of energy due to sound or friction on impact. Similarly, for the perfectly inelastic collisions the momentum losses were fairly below 10%, with an extreme value of 53%.

As regards energy conservation, the elastic collisions recorded energy losses below 20%, with the only extreme value hitting a high of 75%. An energy loss of up to 20% is expected, since on impact some energy is lost as sound and through the friction of the moving parts and the runway surface. Comparatively, the perfectly inelastic collisions recorded greater losses than the elastic ones: the energy losses were greater than 40%. This is so because more channels of energy loss are involved, e.g., deformation, heat, light, or sound.

The uncertainties that occurred during the experiment have a bearing on the equipment used. Some of the errors could be due to friction in the moving parts and on the surface. However, this can be minimized in the future by using frictionless pulleys and surfaces. For easy adjustments to minimize the influence of gravity, the runway ought to be placed on sliding wedges. Moreover, the deviations in the data are partly due to the design of the experiment, which did not specify enough precautions, such as the conditions under which the experiment ought to be carried out. It is imperative to carry out the experiment in an area where the air flow is calm, since wind can hinder the smooth motion of the carts and thereby alter the results.

Conclusion

As attested by the data and the analysis presented herein, the experiment gave satisfactory results that reflect its objective. We ascertained that the energy and momentum behavior for both types of collision would have agreed with the theoretical values had it not been for experimental uncertainties.

Works Cited

Drexler, Jerome. How Dark Matter Created Dark Energy and the Sun: An Astrophysics Detective Story. Makawao, Maui, HI: Inner Ocean Publishing, 2003. Print.

Customer Research, Experiments, and Surveys

Research in an Organization

The Use of Research in Dell Inc.

Research is vital for every business that intends to remain relevant in the market. Dell Inc. uses research as a tool to identify its customers, and it may recognize the tastes and preferences of its target customers through surveys. Dell usually employs market research in its investigations. To interact with consumers, the company uses Dell.com. This interactive channel provides Dell with vital data on consumers’ feelings towards products and on what they think could be done to enhance these products.

Subsequent to such feedback from the public, the corporation can enhance its services and products. Dell also employs online questionnaires to gather reactions to certain ideas, new suggestions, and thoughts from its clients. IdeaStorm enables Dell community members to recommend intriguing ideas regarding services and products they would wish Dell to provide. Such recommendations can be shared with other members of the community.

Moreover, such research allows Dell to analyze its competitors in the industry and adopt some strategies that may similarly help its own operations. It also provides the company with an opportunity to keep up with the newest market tendencies; this information proves helpful in the preparation of valuable concepts and strategies for success in the market. Thus, Dell Inc. can make knowledgeable and educated decisions through research.

Dell employs marketing research as an instrument to resolve any marketing troubles it may be experiencing or expects may occur in the future. Dell can keep track of marketing practices, new trends, and the developments of its market through research. Dell usually collects valuable marketing information through research, which keeps the company conversant with the interests of its clients. This assists the business in constructing marketing strategies and other plans it may wish to execute in order to cater to the needs of the consumer.

The introduction of online tools enables market research at Dell. At present, Dell can reach its markets and acquire credible data as input to its planning and business procedures, such as establishing whether there is feasible demand for a product idea; identifying the traits of a service or product that may interest customers; making an enhanced service or product depending on the interests of the customers; creating better-targeted promotion and other advertising plans; shaping price points for services or commodities; restoring dialogue with long-lost or dormant clients; and making individuals aware of subsidiary commodities.

Current Areas of Research at Dell Inc.

At present, Dell is conducting research on the latest models in its personal computer line. Dell wants to have a demographic outlook of the number of people or corporations who will buy the latest model in its personal computer line. Since managers at Dell do not have the time or proficiency to acquire such data, and the marketing department may not be in a position to offer the detailed data from previous knowledge, Dell is using market research to obtain this information.

At present, Dell is also using business research to examine upcoming rivals in the market. Dell usually starts with secondary research data, or data that are already accessible. Dell wants to know the proportion of consumers in the market who buy its products against those who purchase the products of its competitors. Hence, researchers at Dell are examining purchasing tendencies in the business, with the intention of boosting Dell’s share of the market. Increasing market share in a business leads to enlarged sales and profits.

Dell is also using business research to ascertain sufficient distribution of its commodities. Dell is conducting a distribution follow-up study to make sure that its clients receive ordered products on time. The company is also using research to advertise all its products to consumers. This helps to determine whether Dell needs to enlarge its distribution, especially of the new computer lines.

Besides, Dell is pursuing business research in order to assess the potential success of new commodities. Businesses should know the nature of the services and products that consumers need or desire ahead of marketing. Currently, Dell is interviewing individuals to assess the usability of its new products in the market.

Areas that Need Further Research in the Company

Dell hardly conducts research to evaluate its advertising efficiency. I feel that Dell should focus on this area of research, as it is of significant value. Dell could use business research to ascertain the proficiency of its advertising. For instance, Dell could scrutinize the percentage of people who viewed its latest television commercial. The company may discover that the number of people who come to know of its advertising depends on the duration that the ad runs. Dell may need to run its television ads at diverse times if few individuals have seen the ads. Dell can also utilize business research to investigate whether customers remember the slogan or message of its commercials.

I, also, feel that Dell needs to conduct further research on the experiences of customers during the buying process. For instance, the company could research how the customer got to know about their products and his/her experiences during the buying and acquisition process.

To accomplish this, a survey could be formulated containing questions such as how the customer got to know about the products and why the customer decided to purchase that product rather than substitutes. Questions regarding after-sales service and the usability of the commodity to the customer can also be included in the survey. Through this process, the company will be able to identify its areas of weakness and strength, and the areas that keep it at the top compared to its rivals.

Dell is also reluctant to survey the prices of its commodities. It is vital for any company to compare its prices with those of its rivals, regardless of quality. This, in turn, helps the company remain relevant and not risk losing clients.

Thus, corporations conduct research for various reasons, including collecting fundamental data regarding business and consumer clients. Nevertheless, firms should ensure that they employ the proper methods for gathering customer data. Several company market researchers utilize online surveys so as to obtain reliable data quickly. Nevertheless, businesses should ensure that they collect a sufficient number of completed surveys so that the outcomes can best characterize the views of the whole demographic that they serve.

Secondary Search and Qualitative and Quantitative Research

The Distinctions between Primary, Secondary, and Tertiary Sources in a Secondary Search

Primary sources are initial resources on which other research is founded. They give information in its original shape, not interpreted or appraised by other writers (Leech & Onwuegbuzie, 2007). Primary sources provide original views, describe findings, or distribute new data. Some examples of primary sources include letters, personal narratives, speeches, and government documents.

Secondary sources are accounts written after the fact with the benefit of hindsight. They describe, interpret, examine, and assess primary sources; they discuss the data offered by primary sources. Secondary sources are works that are not directly linked to the event or data to which they refer. Some examples of secondary sources include textbooks, journal articles, dissertations, and magazines.

Tertiary sources collect, incorporate, and evaluate secondary, or, even, primary sources. They have a tendency of presenting factual information. Some examples of tertiary sources comprise abstracts, directories, and bibliographies.

Problems of Secondary Data Quality Researchers Face

The main problem of secondary data quality that a researcher faces is determining and substantiating the value of the secondary resources that the researcher will employ (Cooper & Schindler, 2006). When using secondary sources, researchers should attempt to verify the precision of the data. For instance, a researcher who quotes a newspaper item regarding a court hearing ought to dig further to confirm the data. Researchers may be required to obtain records of the court hearing in order to accomplish this task. Researchers should establish the merit of secondary sources from time to time (McDonnel, 2010).

To utilize reliable secondary sources and obtain precise and honest data, a researcher should authenticate the sources by evaluating them carefully. A researcher should consider the purpose, authority, scope, and audience of the data in order to assess secondary sources (Cooper & Schindler, 2006). Establishing these aspects will assist a researcher in selecting secondary sources that have significant value.

The Differences between Qualitative Research and Quantitative Research

Qualitative research is subjective in approach, as it attempts to understand human actions and the reasons that motivate such actions. In this research technique, researchers tend to be subjectively immersed in the subject matter. In quantitative research, conversely, researchers tend to stay detached and impartial toward the subject matter. Thus, the approach of quantitative research is objective, since it seeks exact measurements and analysis of target concepts to answer inquiries (Fink, 1995). Qualitative research is used in the early stages of research projects, whereas quantitative research is recommended for the later parts of research projects.

Quantitative research gives the researcher a clearer image of what to anticipate in the research, as opposed to qualitative research (Carmines & Zeller, 1991). In qualitative research, the researcher acts as the main data-gathering instrument.

Here, the researcher utilizes a range of data-collection strategies depending on the approach or purpose of the research. Examples of data-collection approaches employed in qualitative research are individual in-depth interviews, focus groups, structured and unstructured interviews, narratives, archival research, documentary or content analysis, and participant observation. Conversely, quantitative research utilizes tools such as surveys, questionnaires, and other instruments to gather measurable or numerical data.

The Difference Between Data from Qualitative Research and Data in Quantitative Research

First, qualitative research presents data in the form of words, which come from interviews, images, or artifacts. Conversely, quantitative research presents data in the form of figures, graphs, and tables (Neil, 2007). Second, qualitative research relies on human analysis rather than computer coding, which allows the researcher to observe the contextual structure of the event being assessed, whereas quantitative research employs computerized analysis, such as mathematical and other statistical methods. The quantitative method, unlike qualitative analysis, maintains a clear division between opinions and facts, and this distinction may be maintained throughout the course of the project.

Third, qualitative research offers a deep level of understanding, and the researcher's involvement in data gathering enables insights to form and be explored during the process. Conversely, quantitative research is restricted by the opportunity to probe respondents and by the quality of the original data-collection instrument; insights follow data gathering and data entry, with a limited capacity to re-interview participants.

Recommended Qualitative Research for a Manufacturer of Small Kitchen Electrics

A Manufacturer of Small Kitchen Electrics wants to establish if some Innovative Designs with Unusual Shapes and Colors Developed for the European Market could be Successfully Marketed in the U.S. Market

The best qualitative research technique would be to carry out several focus groups in which the new designs can be revealed to customers in order to observe their responses. This can be carried out through video-conference or online. A focus group is a qualitative research method whereby a group of people is asked about their views, insights, attitudes, and beliefs regarding a commodity, concept, service, idea, or promotion (Henderson, 2009).

Questions are asked in an interactive group setting where members are at liberty to converse with other members of the group. Focus groups enable businesses wishing to name, develop, test-market, or package a fresh product to observe, discuss, and assess the fresh commodity before it becomes accessible to the community. This offers valuable information regarding the potential market for the commodity.

The benefit of such a focus group is that the sample can be obtained from wide demography and geography, where the designs are observable.

Research with Experimentation

“Adolescents report both positive and negative consequences of experimentation with cigarette use” by Brady et al.

The Independent and Dependent Variables Used in the Study

A dependent variable is a variable that may be considered to be the outcome of an independent variable (Sarstadet & Mooi, 2011). In the study by Brady et al. (2008), the dependent variable is experimentation with cigarette use. Conversely, an independent variable is a variable that may be considered to be the origin of, or explanation for, dependent variables. The independent variable in this study is adolescents.

The Sampling used to gather Subjects, as well as on the Reliability and Validity of the Study

Reliability is concerned with ensuring that the process of data gathering produces consistent outcomes. This can be determined by having different researchers follow similar processes in order to see whether the results will be identical. The method can be said to be reliable when the outcomes are similar. Conversely, validity refers to the study being able to measure what it was designed to measure.

The study was limited to only 155 adolescents who reported smoking whole cigarettes or puffing at any of the four time periods throughout the study (Brady et al., 2008). 155 adolescents are too few to represent the entire adolescent population. Again, this population was obtained from California high schools only. Although these students came from different backgrounds, they might have been influenced by the environment at the California high schools.

Thus, there is no way this study could give accurate results, since it was not conducted in the actual environments of the adolescents. We cannot say that the behavior of adolescents from diverse backgrounds at the California schools represents how the same adolescents would behave in their home environments. This can be supported by the fact that different environments influence human behavior in diverse ways.

Also, the sample of the study was limited to adolescents attending two northern California public high schools. The study did not consider those adolescents who were out of high school, and it also left out adolescents from private schools. Thus, the results of this study cannot be considered reliable, as the study was limited to a narrow population.

The data were collected every 6 months throughout the ninth and tenth grades (Brady et al., 2008). This demonstrates that the data were not collected at a single point in time, yet the results were similar. Hence, we can say that the study was valid from this perspective.

In the study by Brady et al., there is some validity deriving from the research method. The focus of the experiment necessitated the collection of qualitative data for evaluation. Brady et al. (2008) gathered qualitative data from high school learners in California and obtained in-depth information, which was afterward transformed into quantitative data. Such information is hard to collect with other research techniques, such as laboratory experiments, where participants are denied an extended duration to convey their true views and emotions because of time restrictions, occasionally having to select a response quantitatively.

The study appears reliable, as figures are used to demonstrate the results, which makes the work trustworthy. Brady et al. (2008) reveal that “45% of adolescents reported both positive and negative consequences of experimentation, in comparison to a third who reported no consequences and smaller groups of roughly 10% who reported only positive or only negative consequences” (p. 6). They also reveal that “of the entire sample of adolescents who reported smoking at some time point during the study, 47% initially reported only puffing on cigarettes while 53% initially reported having progressed to smoking whole cigarettes” (Brady et al., 2008, p. 6). Using figures makes the data appear concrete. The authors also use tables in the analysis, which makes the work easy to analyze.

Brady et al. (2008) only used surveys and did not employ other methods of data collection. Thus, the trustworthiness of the data can be questioned. In principle, no study can be trusted entirely unless one can demonstrate to the reader that the data are accurate. The most appropriate way to achieve this is to employ diverse methods of data collection, including questionnaires, interviews, and observation, and strive to attain a similar result in order to validate the hypothesis.

Researchers are concerned with both internal and external validity. External validity denotes the degree to which the outcomes of a study can be generalized. This implies that the results can be applied to a wide population, or that the outcomes can be applied in different contexts. Although the participants in the study reported diverse ethnic backgrounds, we cannot ascertain that their behavior at the California public schools represents their behavior at their places of origin. This is because the learners from diverse backgrounds could have been influenced by the behavior of other adolescents in the same school, making them adopt new behavior. Thus, the results of this study are not fit to be generalized to the entire world.

The study has internal validity. This denotes the rigor with which the study was carried out, including the design, the performance of measurements, as well as decisions regarding what was and was not to be measured. The participants and the design of the study were well organized, and the measurements were prearranged.

For instance, Brady et al. (2008) explain that “adolescents were asked their level of smoking experience for each type of smoking (1 time, 2–5 times, 6–10 times, more than 10 times)” (p. 5). Brady et al. (2008) explain that the study did not measure all of the outcomes related to adolescents’ decision making regarding cigarette smoking. They also explain that the testing lacked the sample size or data collection waves to check whether early experiences could be linked with ensuing intention to smoke and with smoking activities.

The study has criterion validity. This is because the criteria of the study, from the participants to the design and conclusion, were well organized. However, the study lacks construct validity. Construct validity seeks agreement between a theoretical concept and a measuring tool or procedure. Here, theoretical support is lacking for some procedures that take place in the study. For instance, former research has not studied the proportion of adolescents reporting certain consequences after experimenting with cigarette use (Brady et al., 2008). Hence, there is no agreement between theory and procedure.

However, from the conclusion that “adolescents experience both positive and negative consequences of experimentation with cigarette use” (Brady et al., 2008, p. 10), it is apparent that the study was valid, as it was able to serve the intended role, or, in other words, it was able to measure what it was expected to measure.

Research with Survey

Objective. To investigate the reasons that lead most university students to purchase Linux laptops. To conduct the study, I prepared a questionnaire that had six independent questions. The questionnaire was administered to 10 college students who were in possession of Linux laptops. All of these university students were course mates at the Australian university.

Results

What need(s) did you want to satisfy?

10% of the participants said that they purchased the Linux laptops out of essential needs. 60% of the participants reported that they acquired the Linux laptop out of the need for belonging and self-esteem. Finally, 30% of the participants reported that they obtained the Linux laptop for esteem purposes.

How did you realize your need(s)?

10% of the participants reported that they realized the need for a Linux laptop from the desire to complete assignments on time. 60% of the participants reported that they realized the need for a Linux laptop from the desire to watch movies like their classmates who had such laptops and to associate themselves with their colleagues. 30% of the participants reported that they realized the need for a Linux laptop, as it was prestigious to have such an item.

What information sources did you check to obtain data about the laptop?

20% of the participants reported that they obtained data about the Linux laptop from the Internet. 50% of the participants reported that they obtained data about the Linux laptop from friends. 10% of the participants reported that they obtained data about the Linux laptop from magazines. Lastly, 20% of the participants reported that they obtained data about the Linux laptop from TV advertisements.

What evaluative criteria did you use?

40% of the respondents said that they did not evaluate the Linux laptop against others, as they were already convinced that it was the best brand. 30% of the participants explained that they evaluated the features of all the laptops, such as the durability of the battery, the size of the hard disk, and the memory capacity, as well as the speed. The remaining 30% explained that they considered the aesthetic appeal and portability of all the laptops.

Why did you pick a Linux laptop?

30% of the participants explained that they preferred the Linux laptop to the Toshiba, HP Compaq, and Dell laptops, as it had a longer-lasting battery, a larger hard disk, and a larger memory capacity, as well as a higher speed than the rest. Another 30% explained that they preferred the Linux laptop over the others because of its aesthetic appeal and portability, as it was not as heavy as the others. The remaining 40% reported that they picked the Linux laptop because they had seen their friends and relatives use the brand with no complaints.

Were you satisfied with the purchase and why?

90% of the participants agreed that they were fully satisfied with their purchase, as it serves the intended function. All of this group expressed their satisfaction with the after-sales service, especially how the laptop was packed in a nice bag. 10% of the participants were not contented with the purchase. This is because the laptop is occasionally slow in playing movies. When the respondent complained to the support representative, he was informed that he had to pay a small fee for a hard disk replacement before his problem could be solved.

The Validity and Reliability of the Questionnaire

  1. The questionnaire had a clear objective, which was to investigate the reasons that led most university students to purchase Linux laptops.
  2. The questionnaire made use of both structured and unstructured questions, which gave respondents an opportunity to express themselves fully. Hence, the results could be valid. However, the questionnaire used a small sample size of 10 college students, who are too few to represent the entire population that uses Linux laptops.
  3. The questionnaire also used only a small number of questions, thus making it precise and easy to interpret.
  4. The questionnaire has internal validity. This denotes the rigor with which the study was carried out, including the design, the performance of measurements, as well as decisions regarding what was and was not to be measured. The participants and the design of the study were well organized.
  5. The questionnaire also has some external validity. External validity denotes the degree to which the outcomes of a study can be generalized, that is, applied to a wide population or under different contexts.
  6. The questionnaire focuses on the collection of qualitative data for evaluation, which is later transformed into quantitative data. Hence, the questionnaire seems to be reliable, as it allows the employment of both quantitative and qualitative methods of measurement (Thomas, 2003).
  7. The questionnaire can measure what it should measure. Hence, it is valid. However, the reliability of the questionnaire cannot be ascertained unless we use other methods of data collection to confirm these results.
  8. The questionnaire appears reliable, as it allows the use of figures in demonstrating the results. Using figures in a study makes the work trustworthy.

However, the questionnaire was limited to only 10 college students who owned Linux laptops. This number of college students is too small to represent the entire university population that uses Linux laptops. Again, this population was obtained mainly from the Australian university. Although these students came from different backgrounds, they might have been influenced by the environment at the Australian university.

Thus, there is no way this study could give accurate results, since it was not conducted in the diverse environments of the students. We cannot say that the laptop buying behavior of college students from diverse backgrounds at the Australian university represents how the same learners would behave in their home environments. This can be supported by the fact that different environments influence buying behavior in diverse ways. The questionnaire did not consider the laptop buying behavior of learners from other colleges apart from the Australian university. From this perspective, the questionnaire cannot be considered reliable, as it was limited to a narrow population.

References

Brady, S.S., Song, A.V., & Halpern-Felsher, B.L. (2008). Adolescents report both positive and negative consequences of experimentation with cigarette use. San Francisco: University of California.

Carmines, E. G. & Zeller, R.A. (1991). Reliability and validity assessment. Newbury Park: Sage Publications.

Cooper, D. R., & Schindler, P. S. (2006). Business research methods. 4th ed. New York, NY: McGraw-Hill.

Fink, L.A. (1995). How to measure survey reliability and validity. Thousand Oaks, CA: Sage.

Henderson, N. R. (2009). Managing moderator stress: take a deep breath. You can do this!. Marketing Research, 21, 28-29.

Leech, N. L., & Onwuegbuzie, A. J. (2007). An array of qualitative data analysis tools. School Psychology Quarterly, 22, 557-584.

McDonnel, D. (2010). Issues regarding reliable and valid research studies. Web.

Neil, J. (2007). Qualitative versus quantitative research: key points in a classic debate. Web.

Sarstadet, M. & Mooi, E. (2011). A concise guide to market research: the process, data, and methods. London: Sage.

Thomas, R.M. (2003). Blending Qualitative and quantitative methods in theses and dissertations. New York: Oxford University Press.

Physics Laboratory Experiment on Acceleration

Introduction

The purpose of the experiment was to study the relationship between the uniform circular motion of an object and the force required to cause its centripetal acceleration. The conical pendulum was used to examine the mathematics and physics associated with uniform circular motion (Minkin & Sikes, 2021). The conical pendulum moves at a constant speed in a horizontal circle; the string attached to the bob sweeps out a cone, and so the device is used to illustrate uniform circular motion.

Objective

This experiment enables one to acquire information on uniform circular motion, a topic that many find difficult, and thereby helps in sorting out some of these problems.

Materials

  • Pendulum
  • Motor
  • Telescope
  • Bob
  • String
  • Support

Preliminary set up

The pendulum used in this case moves in a horizontal circle: a bob of mass m is connected to a string of length L whose other end is attached to a firm support. The bob rotates with an angular velocity ω, completing a horizontal orbit of radius r. The vertical distance between the plane of the orbit and the support is denoted h, with an angle θ between the string and the descending vertical. The tension in the string supports a stationary mass M while the mass m keeps moving in a loop.

Therefore, FT = Mg.
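
For completeness, the working formulas used later in the data table (Tension = mω²L and g = ω²h) follow from the force balance on the bob. The derivation below is a standard textbook argument added here for clarity; it is not part of the original report.

```latex
% Force balance on the bob of a conical pendulum (string length L, half-angle theta):
T\cos\theta = mg, \qquad T\sin\theta = m\omega^{2}r, \qquad r = L\sin\theta .
% The horizontal equation gives the string tension, and substituting it into the
% vertical equation with h = L\cos\theta yields the expression used for g:
T = m\omega^{2}L, \qquad g = \frac{T\cos\theta}{m} = \omega^{2}L\cos\theta = \omega^{2}h .
```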

Procedure

(Minkin & Sikes, 2021)
  1. A pendulum of length L was connected to a shaft containing a motor whose rotational speed could be changed by altering the voltage.
  2. The speed of the motor and the length L were adjusted so that the period of several revolutions could be measured by observation, together with the corresponding angles of rotation.
  3. The height h of the pendulum was measured by focusing the telescope on the rotating pendulum and noting the values on the scale relative to the point of suspension. Whenever the revolution speed increased, it became more difficult to keep track of the pendulum path in the telescope.

Data and Calculation

Observation Height h = L cos θ (cm) Time for 10 rev (s) Period T′ (s) Angular frequency ω = 2π/T′ (rad/s) ω² g = ω²h (cm/s²) Tension = mω²L Centripetal force
1 16.6 7.18 0.718 8.754 76.63 1272 78100 26600
2 12.8 5.56 0.556 11.3 127.7 1634.6 93400 56500
3 7.4 5.81 0.581 10.81 116.85 864.7 13400 12700
4 5.1 5.81 0.581 10.81 116.85 595.9 26500 25900
5 4 5.25 0.525 11.97 143.3 573.2 35200 34600
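
As a worked example of how the tabulated values were obtained, the Python sketch below computes the period, angular frequency, and the resulting estimate of g for observation 1. The heights are in centimeters, so g comes out in cm/s²; the variable names are illustrative and not part of the original procedure. Averaging the g column over all five observations gives the value of roughly 988 cm/s² quoted in the analysis below.

```python
import math

# Observation 1 from the table: height h = L*cos(theta) in cm
# and the time for 10 revolutions in seconds.
h_cm = 16.6
time_10_rev = 7.18

T = time_10_rev / 10.0      # period of one revolution, s
omega = 2.0 * math.pi / T   # angular frequency, rad/s
g_est = omega**2 * h_cm     # g = omega^2 * h, in cm/s^2

print(f"T     = {T:.3f} s")            # 0.718 s
print(f"omega = {omega:.3f} rad/s")    # about 8.75 rad/s
print(f"g     = {g_est:.0f} cm/s^2")   # about 1271 cm/s^2 (tabulated as 1272)
```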

Observation and Analysis

From the results, the value of gravity was fairly constant with few experimental errors, and it was close to the standard value of gravitational acceleration, 980 cm/s², since the average value from the table is 988 cm/s² (Giacometti, 2020). After computing the centripetal force and the tension, which are physical quantities related to circular motion, the tension value was larger than that of the centripetal force when the bob was rotating at lower frequencies. When the speed of rotation was increased, both values increased similarly. The rotating horizontal plane of the bob was lifted toward the point of suspension, and the conical path of the string became nearly circular. This shows that tension and centripetal force are closely related in circular motion: the tension is what provides the centripetal force, and the velocity increased every time the radius of the circle was increased and dropped when the mass stopped moving.

Conclusion

The conical pendulum illustrates circular motion well, and with simple arrangements the gravitational acceleration can be determined. Different ways have to be found to improve the measurements of the height h and the period, even though the present measurements are accurate to some extent. The motion can be made easier to handle when the string is replaced with a solid rod, since a rod can sustain negative tension and its rigidity helps in supporting the object in its position. Changing different factors in such an experiment affects the results. From this, the relationship between different quantities can be verified: I could see the velocity change at each trial, and so I was able to understand the relationship between the variables.

References

Giacometti, J. A. (2020). The motion of a conical pendulum in a rotating frame: The study of the paths, determination of oscillation periods, and the Bravais pendulum. American Journal of Physics, 88(4), 292-297.

Minkin, L., & Sikes, D. (2021). Demonstrating Conical Pendulum Stable and Unstable States. The Physics Teacher, 59(6), 474-476.

The Elasticity Experiment in Physics

Introduction

This laboratory work investigated the dependence of the vertical displacement of a holding device (band or spring) on the applied force. An elastic band, a short spring, and a long spring were used in the experiments to observe their stretching when weights of different masses were suspended from them. Through the masses, the corresponding values of gravity were obtained.

Analysis

To obtain the data, nine tests were performed, three for each type of restraint, as shown in Table 1. The second column corresponds to the spring's initial position before the weight was attached, so this value is constant within each of the rounds. The third column is the new value of the spring or band position when the weight shown in the last column of the table is attached to it. The difference between the initial and final positions of the lower end of the spring corresponds to the vertical displacement, that is, it shows how much the given weight was able to stretch the holding device.

Band
# Y1, mm. Y2, mm. ∆Y, mm. Weight, g.
1 163.5 159.5 4.0 100
2 163.5 155.5 8.0 150
3 163.5 153.5 10.0 200
Tall Spring
1 147.5 146.0 1.5 50
2 147.5 142.5 5.0 100
3 147.5 129.5 18.0 200
Short Spring
1 159.5 89.5 70.0 50
2 159.5 47.5 112.0 100
3 159.5 13.0 146.5 200

Table 1: Empirical data from nine trials.

Three restraints were used: an elastic band, a long spring, and a short spring. As can be seen, each stretch produced a unique value, which was entirely due to both the type of restraining device and the mass of the suspended weight. Since gravity (F = mg) acts on the suspended weight, it was possible to determine the force values for each test (Smart Apple Education Academy, 2020). The acceleration of free fall was taken to be 9.81 m/s², and the weights were measured in grams. Table 2 shows the results of the force calculations for each of the tests.

Band
# Weight, g. Force, N.
1 100 0.981
2 150 1.472
3 200 1.962
Tall Spring
1 50 0.491
2 100 0.981
3 200 1.962
Short Spring
1 50 0.491
2 100 0.981
3 200 1.962

Table 2: Results of calculations of the values of gravity forces affecting the weights.
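
The force values in Table 2 follow directly from F = mg after converting the weights from grams to kilograms. A minimal Python sketch of this conversion is shown below; the function name is an illustrative choice and not part of the original procedure.

```python
G = 9.81  # free-fall acceleration, m/s^2

def weight_force(mass_grams: float) -> float:
    """Return the gravity force (in newtons) on a weight given in grams."""
    return (mass_grams / 1000.0) * G

# Masses used in the trials, in grams (see Tables 1 and 2).
for m in (50, 100, 150, 200):
    print(f"{m:3d} g -> {weight_force(m):.3f} N")
# 50 g -> 0.491 N, 100 g -> 0.981 N, 150 g -> 1.472 N, 200 g -> 1.962 N
```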

Since the gravity forces were determined based only on the mass of the weight and a constant value of the free-fall acceleration, the results were identical for equal masses. At the same time, the applied gravity forces stretched the holding devices differently due to the mechanical features of the springs and the band. In order to study this pattern, the dependencies of the displacement on the applied gravity force were plotted. However, it should be kept in mind that each of the restraints had a unique tensile coefficient, so it was not acceptable to depict a single relationship for all of them. For this reason, three linear dependencies were visualized: one for each device. Figure 1 shows these dependencies with the corresponding coefficients of determination.

Figure 1: Plots of the displacement dependence (in mm) on the applied force with an indication of the regression equation.

From the R² values, it is clearly seen that the linear approximations fit these data sets well, so no additional linearization was required. This means that as the force applied to the weight increases (that is, as the weight's mass increases), the vertical displacement tends to increase linearly, which seems quite logical. It is noteworthy that the slope values were quite different: the strongest slope was true for the elastic band, while the slowest growth was observed for the short spring. The y-intercept is the displacement value at zero force and makes no physical sense, because when there was no force, no displacement should have been observed. The inverse of the slope (in N/mm) also has a physical meaning and determines the stiffness of the restraining device, be it a spring or a band. Thus, the stiffness of the elastic band was determined to be the minimum, and for the short spring, the stiffness was the maximum.
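
To illustrate how a slope and its inverse can be extracted from such data, the sketch below fits a straight line to the band measurements from Tables 1 and 2 using a least-squares polynomial fit. The use of numpy.polyfit and the variable names are illustrative choices, not part of the original analysis, which relied on the plotted regressions in Figure 1.

```python
import numpy as np

# Band data: applied force (N) from Table 2 and displacement (mm) from Table 1.
force = np.array([0.981, 1.472, 1.962])
displacement = np.array([4.0, 8.0, 10.0])

# Least-squares linear fit: displacement = slope * force + intercept.
slope, intercept = np.polyfit(force, displacement, 1)

# The inverse of the slope characterizes the stiffness of the restraint
# (force per unit extension); converting mm to m expresses it in N/m.
stiffness = 1.0 / (slope / 1000.0)

print(f"slope     = {slope:.2f} mm/N")    # about 6.12 mm/N
print(f"intercept = {intercept:.2f} mm")  # about -1.67 mm
print(f"stiffness = {stiffness:.0f} N/m") # about 163 N/m
```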

Conclusion

It has been shown that the vertical displacement of the restraint is linearly related to the applied force of gravity, which means that the displacement increases as the force of gravity increases. Visually, this is observed as an increase in the extension of the restraint as the mass of the suspended weights increases. The physical meaning of the inverse of the slope as the stiffness coefficient of the corresponding device was also shown: for the short spring, the stiffness was determined to be maximum.

Reference

Smart Apple Education Academy. (2020). YouTube.

Human Genome Sequencing and Experiments

Introduction

Human genome sequencing presents several challenges related to experimental procedures and bioinformatics. As to the first challenge, the extraction and amplification of DNA for sequencing are cumbersome procedures, which take a lot of time and require due diligence. These procedures, however, are indispensable since next-generation sequencing approaches require voluminous templates for effective sequencing of the genome. The second challenge is that the human DNA is complex because it is diploid with repetitive regions and structural variants (Mostovoy et al. 587; Bickhart et al. 643; English et al. 1).

The existence of these features, therefore, complicates sequencing by making the process of phasing more difficult. The production of short reads poses a third challenge: it hinders the coverage of whole-genome sequencing and reduces the accuracy of the genome assembly. Huddleston et al. note that the assembly of short reads gives rise to low-quality contigs, especially in complex regions of the human genome (688). The fourth challenge occurs because the contamination of libraries or the existence of chimeras prevents the accurate determination of the human genome sequence (Bickhart et al. 651). The last challenge to be mentioned is that the vast volume of data generated in sequencing requires huge storage space in databases and powerful computer programs for assembly, analysis, and interpretation.

DNA Sequencing

Automated Sanger sequencing is one of the novel approaches to human genome sequencing. It is regarded as the gold standard of sequencing because it is highly accurate, generates relatively long reads, targets small regions, and is ideal for sequencing small samples. However, this approach has its setbacks: it is a tedious method that needs preparation of template DNA, and it is very slow for whole-genome sequencing. As another novel approach, sequencing by synthesis (Illumina) is an effective and scalable technology due to its accuracy and high throughput. However, its weaknesses are that it generates comparatively short reads and entails a burdensome preparation of libraries and adapters, as well as the purchase of expensive equipment.

Single-Molecule Real-Time sequencing (SMRT) is a novel approach to human genome sequencing that allows real-time detection of DNA synthesis, generates long reads, is highly accurate, requires a small amount of template DNA, and does not need PCR in sample preparation. Nevertheless, SMRT has lower throughput and parallelism than that of Illumina. Nanopore MinION is another approach that proved to be advantageous because it can generate long reads of up to 200kbp as well as allows real-time analysis of sequences, is fast, affordable, and does not necessitate extensive preparation of template DNA (Jain et al. 3). Nonetheless, it has a very high error rate, and its throughput is lower in comparison to Illumina.

These approaches to human genome sequencing may be combined to augment the generation of accurate and high-quality reads. Mostovoy et al. created a hybrid approach for sequencing the human genome by combining Illumina, BioNano Genomics, and 10X Genomics-based sequencing (587). Illumina was used to assemble short reads, while 10X Genomics-based sequencing was used to generate libraries of short reads. Subsequently, BioNano Genomics was employed to generate physical maps and identify chimeric assemblies (Mostovoy et al. 588). Ultimately, the study utilized 10X Genomics in phasing and validating sequences. The hybrid approach produced not only phased but also high-quality sequences. English et al. pooled multiple sequencing technologies in evaluating structural variation in the human genome (1). In particular, the approaches that were integrated included Illumina Nextera, BioNano Irys, short-read next-generation sequencing, and Pacific Biosciences RS II. The combined use of these approaches, organized in the Parliament pipeline, enhances the detection of structural variations in the human genome. Huddleston et al. applied single-molecule real-time (Pacific Biosciences) sequencing in reconstructing complex regions of human genomes, which improved the quality of reads and genomic sequences (688).

As an example of the utilization of these novel approaches, Nanopore MinION has been applied to sequencing the whole genome of the influenza virus. According to Wang et al., the sequencing of the influenza virus using Nanopore MinION generated sequences with an accuracy of 99% when compared to the Sanger method and Illumina MiSeq (1). Moreover, a study applied single-molecule real-time sequencing to the whole genome of the domestic goat (Bickhart et al. 643). The study combined strategies such as the assembly of long and short reads, the scaffolding of sequences, and the mapping of chromatin interactions. Single-molecule sequencing improved assembly continuity by about 400 times and gave one of the best de novo assemblies of a mammalian genome. Another study used Illumina in creating a hybrid approach, which employed de novo sequencing and assembly of the human genome (Mostovoy et al. 589). The approach gave rise to sequences that are not only phased but also of high quality.

Advancements in bioinformatics have led to the creation of algorithms that enhance the storage, access, analysis, and use of biological information, mainly genomic sequences. Since sequencing generates raw reads, base calling is essential in determining the quality of sequences and in assessing the accuracy of quality scores before assembly. Phred is software that carries out base calling and assigns a quality score to each base. Phrap is another program, which assembles and aligns reads based on a scoring matrix of sequence similarity; because gaps present challenges to alignment, the assignment of a higher gap penalty enhances the alignment of sequences. The Basic Local Alignment Search Tool (BLAST) is another algorithm that aids in the identification of unknown sequences based on homology. A BLAST search returns hits, and the E-value of each hit is a statistical measure of how likely a match of that quality is to arise by chance, which indicates the validity of the match. Open Reading Frame (ORF) Finder is an important tool that searches for start and stop codons in the six reading frames of a sequence in order to identify candidate genes. The limitation of ORF Finder is that it identifies start and stop codons rather than genes themselves; to overcome this, open reading frames longer than 1 kbp are treated as likely genes.
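
To make the idea behind ORF searching concrete, the following Python sketch scans a toy sequence in all six reading frames for start and stop codons. It is not the NCBI ORF Finder itself; the example sequence and the minimum-length threshold are arbitrary choices for illustration.

# Minimal illustration of a six-frame ORF search (not the NCBI ORF Finder).
STOPS = {"TAA", "TAG", "TGA"}

def revcomp(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def orfs(seq, min_len=30):
    """Yield (strand, frame, start, end, orf_sequence) for ORFs >= min_len bases.
    Coordinates on the '-' strand refer to the reverse complement."""
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for frame in range(3):
            i = frame
            while i < len(s) - 2:
                if s[i:i + 3] == "ATG":                      # start codon
                    for j in range(i + 3, len(s) - 2, 3):
                        if s[j:j + 3] in STOPS:              # in-frame stop codon
                            if j + 3 - i >= min_len:
                                yield (strand, frame, i, j + 3, s[i:j + 3])
                            i = j  # resume scanning after this ORF
                            break
                i += 3

example = "ATGAAACCCGGGTTTTAGATGCCCTAA"   # toy sequence, placeholder only
for hit in orfs(example, min_len=15):
    print(hit)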

Conclusion

The Sanger sequencing method has made significant contributions to the sequencing of genomes. This method has been employed in sequencing the whole genomes of a human (3 billion bp), a bacteriophage (5,386 bp), a yeast chromosome (315,000 bp), a fruit fly (180 million bp), and a mouse (2.5 billion bp), among other organisms. Nanopore MinION improved the understanding of the structure and function of the influenza virus (Wang et al.). SMRT has made a substantial impact on the study of bacterial methylomes, which elucidate the functions of methyltransferases (Murray et al. 11451). These findings have advanced the understanding of bacteria and their molecular functions. Long-read single-molecule sequencing, complemented by short-read data, improved the sequencing of the genome of the domestic goat and offered some of the best contigs obtained from a mammal (Bickhart et al. 643). These combined approaches have helped overcome challenges associated with mammalian genomes such as repeats, diploidy, and complex regions. Therefore, these approaches to sequencing have contributed immensely to the advancements in genomic studies.

Works Cited

Bickhart, Derek, et al. “Single-Molecule Sequencing and Chromatin Conformation Capture Enable De Novo Reference Assembly of the Domestic Goat Genome.” Nature Genetics, vol. 49, no. 4, 2017, pp. 643-654.

English, Adam, et al. “Assessing Structural Variation in a Personal Genome: Towards a Human Reference Diploid Genome.” BMC Genomics, vol. 16, no. 286, 2015, pp. 1-15.

Huddleston, John, et al. “Reconstructing Complex Regions of Genomes Using Long-Read Sequencing Technology.” Genome Research, vol. 24, no. 1, 2014, pp. 688-696.

Jain, Miten, et al. “The Oxford Nanopore MinION: Delivery of Nanopore Sequencing to the Genomics Community.” Genome Biology, vol. 17, no. 239, 2016, pp. 1-12.

Mostovoy, Yulia, et al. “A Hybrid Approach for De Novo Human Genome Sequence Assembly and Phasing.” Nature Methods, vol. 13, no. 7, 2016, pp. 587-591.

Murray, Iain, et al. “The Methylomes of Six Bacteria.” Nucleic Acids Research, vol. 40, no. 22, 2012, pp. 11450-11462.

Wang, Jing, et al. “MinION Nanopore Sequencing of an Influenza Genome.” Frontiers in Microbiology, vol. 6, no. 766, 2015, pp. 1-5.

The Science About the Experiments: Colloidal Systems

Critical Coagulation Concentration and the Schulze-Hardy Rule

Colloidal systems are those in which small particles (size range 10⁻⁹ m to 10⁻⁶ m) are uniformly distributed in a continuous matrix. Such systems are very important in our lives, as we use them for a wide variety of purposes. It is therefore relevant to understand what keeps a colloidal system stable, in other words, why the colloidal particles do not coagulate and become larger. Scientific knowledge helps us in answering this question.

It is the balance of forces that keeps the colloidal particles separated from each other and stabilizes the system. There are many theoretical models that help in understanding this phenomenon, such as the Helmholtz model, the Gouy-Chapman model, and DLVO theory. The stability of a colloidal system depends on the charge on the particles, i.e., the surface charge density, which determines the potential field around them, and on the Debye length. The Debye length depends on the ionic strength of the medium and the charge of the dissolved ions: increasing the electrolyte concentration decreases the Debye length around a charged particle.

The net force on a charged particle in a colloidal system is the sum of the attractive van der Waals force and the repulsive force due to the overlap of the electrical double layers on neighboring particles. With increasing electrolyte concentration, the Debye length shortens and the van der Waals attraction dominates, leading to irreversible coagulation of the colloidal particles. If the electrolyte concentration is small, the Debye length is large, and the repulsive force between the colloidal particles dominates, resulting in a stable colloidal system.
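
The dependence of the Debye length on electrolyte concentration can be illustrated numerically. The Python sketch below evaluates the standard expression for a symmetric 1:1 electrolyte in water at room temperature; the concentrations are arbitrary illustrative values rather than data from any of the experiments discussed here.

import math
# Debye length of a symmetric 1:1 electrolyte in water at about 298 K.
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 78.5          # relative permittivity of water near room temperature
kB = 1.381e-23        # Boltzmann constant, J/K
T = 298.15            # temperature, K
e = 1.602e-19         # elementary charge, C
NA = 6.022e23         # Avogadro's number, 1/mol

def debye_length_nm(c_molar):
    """Debye length (nm) for a 1:1 electrolyte of molar concentration c_molar."""
    n = c_molar * 1000 * NA                        # ion number density, 1/m^3
    kappa_sq = 2 * n * e**2 / (eps0 * eps_r * kB * T)
    return 1e9 / math.sqrt(kappa_sq)

for c in (0.001, 0.01, 0.1):
    print(f"{c:>6} M  ->  {debye_length_nm(c):.2f} nm")
# The Debye length shrinks from roughly 10 nm to roughly 1 nm as the
# electrolyte concentration rises, consistent with the text above.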

The stability of a colloidal system also depends on the valence of the counter-ions. This is what is stated by the Schulze-Hardy rule: the critical coagulation concentration falls sharply as the counter-ion valence increases (roughly in proportion to 1/z⁶). Thus, it can be seen that scientific understanding helps in appreciating and explaining a phenomenon from the fundamentals of the subject.

Light Scattering for Particle Size

Colloids play a very important role in our lives. Because a colloid consists of a dispersed phase distributed uniformly in a continuous phase, the properties of a colloidal system depend strongly on the particle size and size distribution of the dispersed phase. There are many techniques that can be used to measure the particle size and size distribution in a colloidal system. One very important technique is the laser-based particle-size analyzer, and it is useful to understand the underlying principle of this technique.

This technique works on the principle of the scattering of light by the particles in the colloidal system. The principle is simple: if light is passed through a colloidal system, it is scattered by the dispersed particles, and the intensity of the transmitted light is lower than that of the incident light because of this scattering. However, there are complexities associated with this simple picture. The scattering of light depends on the wavelength of the light, so a monochromatic source should be used. Scattering also depends on the particle size, so the particles in the colloidal system should be of reasonably uniform size. The loss of intensity depends on the concentration of the colloidal particles, so that must also be taken into consideration, and one should not forget to subtract the background absorption, i.e., absorption by the continuous phase.

Once these factors are taken care of, one can calculate the concentration of the colloidal system by using the Beer-Lambert law, which states that the intensity of the transmitted light decays exponentially with increasing turbidity and path length. The turbidity depends on the concentration of the colloidal solution, which can therefore be calculated from the measured attenuation. From the concentration of the colloidal system, one can back-calculate the size of the colloidal particles. Thus, it can be seen that a fundamental understanding of the scattering of light by colloidal particles can be used to measure the size of those particles.
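
A minimal numerical illustration of this back-calculation, assuming the Beer-Lambert relation I = I0 * exp(-tau * L) and invented intensity values, is given below; the proportionality constant linking turbidity to concentration would in practice come from calibration.

import math
# Back-calculate turbidity from a transmission measurement (placeholder numbers).
I0 = 1.00   # incident intensity, arbitrary units
I  = 0.62   # transmitted intensity after background subtraction
L  = 0.01   # optical path length, m (1 cm cuvette)
tau = -math.log(I / I0) / L        # turbidity, 1/m
print(f"turbidity = {tau:.1f} 1/m")
# If turbidity is proportional to particle concentration for a given particle
# size (tau = k * c), the concentration follows as c = tau / k, where k would
# have to be obtained from a calibration measurement.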

Titration of Colloids

In colloidal systems with colloidal particles dispersed in a medium of high dielectric constant, there exists an electrical double layer. In this double layer, the fixed charge resides on the dispersed particle, while the opposite charge is diffused into the medium in the vicinity of the particle. This electrical double layer holds the key to the stability of a colloidal system. The potential due to the double layer depends on the surface charge density of the colloidal particle, so it is important to determine this density experimentally. It can be determined by titrating a colloidal system with a salt or an electrolyte. The addition of salt reduces the effective surface charge on a colloidal particle, and there is therefore a concentration of the added salt at which the colloidal system is brought to its point of zero charge (PZC). From the concentration of salt required to bring the sol to its point of zero charge, one can calculate the surface charge density on a colloidal particle.

Thus it can be seen that scientific understanding helps in designing simple experiments to demonstrate understanding of apparently complex processes.

Surface Tension and Contact Angle

The surface tension of a material is the macroscopic manifestation of a microscopic property of that material: its interatomic or intermolecular forces. In the case of solids, this property is more often termed surface energy, i.e., the energy required to create a new surface. In the case of liquids, both terms, surface energy and surface tension, are used; this discussion is limited to liquids. In any liquid, there are attractive forces between the atoms or molecules that hold the liquid together. Because of these attractive forces, the liquid tries to assume a shape that maximizes the number of bonds between its atoms or molecules. This is the reason a drop of water, or of any liquid, assumes a spherical shape if allowed to, as this shape minimizes the surface area and thereby maximizes the number of bonds between the molecules. However, the molecules on the surface, which is essentially a two-phase interface between water and air, have unsatisfied intermolecular bonds, and this is what causes surface tension. Thus, the stronger the interatomic or intermolecular forces, the higher the surface tension. This is why water has a higher surface tension than an organic liquid like hexane: the intermolecular force in water is hydrogen bonding, which is much stronger than the van der Waals forces attracting hexane molecules to one another.

Once a water droplet is placed on another surface, two competing forces are at work: the attractive force between water molecules (cohesion) and the attractive force between the water molecules and the new material (adhesion). If the former is stronger, water prefers to assume a shape in which its contact with the new surface is minimal, and the contact angle is large; if the latter is stronger, water prefers to bond with more of the new surface than with itself, and the contact angle is small.

Thus, the contact angle is a result of the interplay between the cohesive force within a liquid and the adhesive force between the liquid and the solid at the interface. Capillary rise reflects the same interplay: a liquid rises higher in a narrow tube when it wets the wall strongly (small contact angle), and, for a wetting liquid, the height of rise is proportional to the surface tension and inversely proportional to the tube radius. Thus, it can be seen how a scientific understanding of intermolecular forces helps in explaining phenomena such as contact angles and the rise of a liquid in a capillary tube.
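
As a rough numerical illustration of the capillary-rise relation just described (Jurin's law, h = 2*gamma*cos(theta)/(rho*g*r)), the sketch below uses commonly quoted values for water in a clean glass tube; the tube radius is an arbitrary choice.

import math
# Capillary rise of water in a narrow glass tube.
gamma = 0.072        # surface tension of water, N/m (approximate, room temperature)
theta = 0.0          # contact angle on clean glass, radians (assumed fully wetting)
rho   = 1000.0       # density of water, kg/m^3
g     = 9.81         # free-fall acceleration, m/s^2
r     = 0.5e-3       # tube radius, m (arbitrary illustrative value)
h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"rise height = {h * 1000:.1f} mm")   # about 29 mm for these values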

Critical Micelle Concentration (CMC) of Surfactants

There are some organic molecules with amphiphilic character. These molecules have two parts, a head and a tail, of opposite polarity. Such molecules are soluble in water to an extent that allows them to form a monolayer on the free surface of the water, which leads to a considerable drop in the surface tension of the water; for this reason, these molecules are termed surfactants. The decrease in the surface tension of water is a logarithmic function of the surfactant concentration. Once the concentration of surfactant reaches a critical value, i.e., when all the available free surface (the water-air interface) is covered by a monolayer of surfactant, there is no further decrease in the surface tension of the water upon addition of more surfactant. The additional surfactant added to the water instead forms aggregates of colloidal dimensions, which are termed micelles.

How can the critical micelle concentration be determined? One technique is to measure the surface tension of water at varying concentrations of the surfactant; the critical micelle concentration is given by the point at which the decreasing surface tension levels off in the surface tension versus surfactant concentration plot.

Another way is to measure the conductivity of water at varying concentrations of the surfactant. Since the monomer ions have a higher mobility than the micelles, the conductivity of the solution increases steeply with increasing surfactant concentration until micelles start to form, after which the conductivity rises more slowly; the break in the conductivity versus concentration plot marks the critical micelle concentration.
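
One simple way to locate that break point numerically is to fit two straight lines to the conductivity data and pick the split that minimizes the total squared residual. The sketch below demonstrates this; the concentration and conductivity values are invented placeholders, not measurements.

import numpy as np
# Locate a break point (candidate CMC) in conductivity vs. concentration data
# by fitting two straight lines and minimizing the total squared residual.
conc = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)   # mM, placeholders
cond = np.array([60, 120, 180, 240, 300, 330, 355, 380, 405, 430], dtype=float)  # arbitrary units

def sse(x, y):
    coeffs = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

best_split, best_err = None, np.inf
for i in range(2, len(conc) - 2):                # at least two points per segment
    err = sse(conc[:i], cond[:i]) + sse(conc[i:], cond[i:])
    if err < best_err:
        best_split, best_err = i, err

# conc[best_split] is the first concentration of the second (post-micelle) regime.
print(f"estimated CMC near {conc[best_split]:.1f} mM")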

Thus it can be seen that basic scientific understanding helps in determining critical micelle concentration of surfactants.

Natural Science: Mouse Experiment

The data in the table allow for the conclusion that the drug typically increases the number of offspring a mother mouse has.

The conclusion above can be reached by comparing the mean litter sizes of the drug and control groups. For this, it is necessary to sum the number of pups across the litters in a group and divide the result by the number of litters. On average, an ordinary mouse has five pups per litter, while the average increases to 6.9 pups when the drug is used.
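
The averaging step can be written out in a couple of lines. The litter counts below are invented placeholders (the actual table is not reproduced here), chosen only so that the two means come out near the values quoted above.

# Mean litter size for each group; the counts are illustrative placeholders.
control = [5, 4, 6, 5, 5, 5, 5, 5, 5, 5]
drug    = [7, 6, 8, 7, 6, 7, 7, 8, 6, 7]
mean = lambda xs: sum(xs) / len(xs)
print(f"control mean = {mean(control):.1f}, drug mean = {mean(drug):.1f}")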

Cow Growth Rates

The graph on Excel reveals that the experimental feed slowed Bessie’s weight gain.

The conclusion above is justified because Bessie gained 40 lbs. less than Bertha during the month on the experimental feed. It is important that the experiment used twin cows, because this condition minimized the chance that external factors contributed to the difference in weight gain.

According to the graph, Forks grew the fastest because it had the most dramatic increase in population.

Cloverfield is the only town in the graph that declined in population.

Mystic had the smallest change in population because it increased by fewer than 200 individuals.

The graph reveals that the population of Forks is 3,800 people in 2010.

According to the pie chart, the Insects group has the largest number of species, accounting for 50%.

All invertebrates account for 75% because insects are also included in this group. As for vertebrates, the pie chart makes it challenging to state their exact percentage, but it is possible to assume that they constitute approximately 5%.

Diffusion and Osmosis Experiments

Abstract

Diffusion and osmosis are passive modes of transport that facilitate the movement of water and other molecules in living cells. Molecular kinetic energy was assessed by examining carmine in a drop of water under a microscope. The diffusion of molecules across a semi-permeable membrane was evaluated by noting the colors of solutions separated by dialysis tubing. The behavior of living cells in various environments was observed by looking at Elodea cells in hypertonic and hypotonic solutions. The osmolarity of various solutions was also evaluated by noting the changes in weight of potato cylinders in the solutions. It was observed that water molecules were in constant, random motion and that dialysis tubing allowed the movement of water and I2KI across it. It was also noted that Elodea cells became swollen when placed in a hypotonic solution and that the osmolarity of solutions increased with an increase in solute concentration. The osmolarity of potato tubers was estimated at 0.3M. It was concluded that osmosis and diffusion were vital mechanisms in physiological processes.

Introduction

Physiological processes in the bodies of living organisms require raw materials, which, when consumed, lead to the production of waste substances. Consequently, it is necessary to ensure that important materials are available for metabolic processes and that toxic waste products do not accumulate and damage cells (Hunter 72). Maintaining a steady state in living cells requires the controlled movement of substances within and across cells, which also enables communication between the cell and its surroundings. The most common forms of transport in living organisms are diffusion and osmosis. Diffusion can be described as the overall movement of molecules from an area of high concentration to an area of low concentration (Nix 165). Osmosis, conversely, is the transfer of water molecules from an area of high water concentration to an area of low water concentration across a selectively permeable membrane (Zeuthen and Stein 205). Diffusion and osmosis are passive types of transport because they do not involve the use of additional energy in the form of adenosine triphosphate to facilitate the movement of molecules (Nix 166).

Fluids fall into three main categories based on their osmotic pressure. A hypertonic solution has a higher solute concentration than a living cell and has a tendency to draw water from cells by osmosis. A hypotonic solution has a lower solute concentration than a cell and tends to release water to the cell. An isotonic solution, on the other hand, is a solution with the same solute concentration as a cell (Stoker 230). When a cell is in isotonic surroundings, water does not move in or out of the cell.

This practical aimed at investigating the attributes of molecules that enhance the progress of diffusion and the movement of solutes through a selectively permeable membrane. It was hypothesized that dialysis tubing was permeable to water molecules and impermeable to glucose and starch molecules. It was predicted that Benedict’s test would only be positive for the solution in the bag and that the same solution would test positive for the I2KI test. It was also hypothesized that the cytoplasm of a cell with a cell wall would reduce in size when placed in a hypertonic environment and increase in size when placed in a hypotonic environment. Therefore, it was predicted that if Elodea cells were placed in hypotonic environments, their cytoplasm would swell causing the cells to become turgid. It was also hypothesized that potato tubers would lose weight if placed in 0.6M sucrose.

Methods

A drop of water was placed on a glass slide. A dissecting needle was used to transfer carmine to the water droplet: the tip of the needle was touched to the droplet, the wet end was dipped into the carmine powder, and the powder was returned to the droplet. The carmine and water mixture was stirred using the needle, after which a cover slip was placed on the slide. The setup was examined under a compound microscope at low magnification followed by high magnification. The observations were recorded for later use in the discussion.

A dialysis bag was prepared by folding over 3cm at the end of a 30cm dialysis tubing pre-soaked in water. The side of the tubing was tied tightly with a thread ensuring that no liquid could leak. Equal portions of 30% glucose and starch solution were then added to the tubing through the open side. The contents of the tubing were mixed thoroughly after which the color of the solution was recorded in table 1. 300ml of water was added to a 500ml beaker followed by a few drops of I2KI until the color of the water changed to amber-yellow. The bag was then put into the water and I2KI solution with the unfastened side outside the beaker. The setup was left to stand for 30 minutes after which the ultimate colors of the solutions were recorded. Thereafter, the solutions were tested for the presence of sugars (Benedict’s test) by adding one dropper full of Benedict’s reagent to three test tubes filled with two pipettes of each of the solutions. The tubes were heated in a boiling water bath for approximately 3 minutes.

The two demonstration microscopes with Elodea in solutions A and B were examined under a microscope. The features that were observed were recorded in table 2.

100ml of deionized water and various sucrose solutions were placed in labeled 250ml beakers. Seven potato cylinders that were at least 5cm long were made by boring holes in potatoes using a cork borer. The cylinders were then peeled and cut to uniform lengths. Thereafter, the cylinders were wiped using paper towels and weighed to the nearest 0.01 grams. The potato cylinders were sliced into two uniform halves and placed in the beakers containing the various solutions. The setups were incubated for 1.5 to 2 hours, which included swirling of the beakers at intervals of 15 minutes. At the end of the incubation period, the weights of the potato cylinders were recorded in table 3.

Results

The movement of carmine particles in the water was random. It was observed that the movement was continuous and did not come to a stop. Another notable observation was that tiny carmine particles appeared to shift faster than the large ones.

Table 1: Investigating the permeability of dialysis tubing to glucose, I2KI and starch

Solution source   Original contents     Original color   Final color   Color after Benedict's test
Bag               Glucose and starch    White            Deep blue     Brick red
Beaker            Water and iodine      Amber            Amber         Clear with a greenish tinge
Control           Clear                 Clear            Clear         Clear

Table 2: Appearance of Elodea cells in unknown solutions A and B

Solution   Appearance / condition of cells
A          The cell wall was small and appeared perforated
B          The cell was turgid

Table 3: Estimating osmolarity by change in weight

Approximate time in solutions: 1 hour 30 minutes
Molarity of solution (M)   0.0     0.1    0.2    0.3    0.4     0.5      0.6
Final weight (g)           3.2     3.0    3.0    2.9    2.6     2.4      2.4
Initial weight (g)         2.8     2.8    2.9    2.9    2.8     2.8      2.9
Weight change (g)          0.4     0.2    0.1    0.0    -0.2    -0.4     -0.5
% change in weight         14.29   7.14   3.45   0.00   -7.14   -14.29   -17.24

Figure 1: A graph of percentage change in weight versus the molarity of sucrose

Discussion

The first exercise entailed the observation of molecular movement because the molecules of gases and liquids were in continuous random motion (Mörters and Peres 7). Carmine, being insoluble in water, led to the formation of a colloidal suspension. During their motion, the water molecules collided with the solid particles of carmine in what was referred to as Brownian motion. Brownian motion could bring about diffusion because it caused molecules to move from regions of high concentration to zones of low concentration (Mörters and Peres 7). Diffusion was important in cell metabolism because it allowed cells to obtain the chemicals and molecules required for metabolic processes. For example, oxygen that was necessary for respiration reached the cell by diffusing across the cell membrane.

The final colors of the solutions showed that I2KI had moved from the beaker into the bag, where it changed the color of the glucose and starch solution from white to deep blue, which was a positive test for starch (Harisha 44). The results also corroborated the premise that water would move via osmosis from the beaker into the tubing. In addition, the solution in the beaker was negative for sugars after the Benedict's test. These observations implied that there was no detectable movement of glucose through the bag into the beaker. The findings of the experiment suggested that potassium iodide had the smallest molecules, followed by glucose and finally starch molecules. Supposing that the experiment had begun with glucose and potassium iodide in the tubing and starch in the beaker, the liquid in the beaker would have changed to deep blue, because the I2KI would have diffused out of the bag and reacted with the starch, which is too large to cross the membrane.

Based on my predictions and observations, solution A was hypertonic while solution B was hypotonic. Solution A had the greater osmolarity: it drew water out of the Elodea cells, so their contents appeared shrunken. Solution B had the lower osmolarity, and water from it was absorbed by the Elodea cells, causing them to swell and become turgid. Water from a pond would be expected to be hypotonic to Elodea cells because such water contained a lower concentration of dissolved substances than that found in Elodea cells.

The curve intersected the zero line on the plot at a sucrose molarity of 0.3M. The data could be utilized in the establishment of the osmolarity of the potato tuber by checking the molarity where there was no net change in the weight of the tuber. Therefore, the osmolarity of the tissue was projected at 0.3M. The findings of the study confirmed the supposition that potato tubers would lose weight if placed in 0.6M sucrose.
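
The same zero crossing can be estimated numerically by linear interpolation between the data points of Table 3, as the short sketch below shows; it uses the reported percentage changes and returns the sucrose molarity at which the change in weight is zero.

import numpy as np
# Estimate the osmolarity of the potato tissue as the sucrose molarity at which
# the percentage change in weight is zero, using the values from Table 3.
molarity   = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
pct_change = np.array([14.29, 7.14, 3.45, 0.00, -7.14, -14.29, -17.24])
# np.interp needs increasing x values, so interpolate molarity as a function of
# the (decreasing) percentage change by reversing both arrays.
zero_crossing = np.interp(0.0, pct_change[::-1], molarity[::-1])
print(f"estimated osmolarity = {zero_crossing:.2f} M")   # 0.30 M for these data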

The limitation of the experiment was that it was difficult to obtain potato cylinders with identical weights. In addition, most of the experiments involved the study of osmosis in plant cells. Future studies could look at the effects of osmosis and diffusion in animal cells.

Conclusion

Kinetic energy was necessary to facilitate the process of diffusion since no external source of energy was involved in the process. Osmosis, conversely, could only occur when a difference in osmotic pressure existed across a selectively permeable membrane. It was concluded that osmosis and diffusion were vital processes in maintaining the homeostasis of living cells. Therefore, to avoid any alteration in the water content of cells, it was necessary to keep them in environments whose osmolarity matched that of the cells.

Works Cited

Harisha, S. An Introduction to Practical Biotechnology. New Delhi: Firewall Media, 2005. Print.

Hunter, G. Scott. Let’s Review: Biology, the Living Environment. New York: Barron’s Educational Series, 2009. Print.

Mörters, Peter, and Yuval Peres. Brownian Motion. New York: Cambridge University Press, 2010. Print.

Nix, Staci. Williams’ Basic Nutrition & Diet Therapy. 14th ed. St. Louis, Missouri: Elsevier Health Sciences, 2012. Print.

Stoker, H. Stephen. General, Organic, and Biological Chemistry. New York: Cengage Learning, 2012. Print.

Zeuthen, Thomas, and Wilfred D. Stein. Molecular Mechanisms of Water Transport Across Biological Membranes. California: Gulf Professional Publishing, 2002. Print.