Developing Sensitivity to Your Perspective on Research

Foremost, research is intended to have a developmental impact through quality improvement across the board, that is, within the economic, social, political, and technological dimensions of any particular profession or industry. Energy plays the pivotal role of powering other sectors. However, the global concern over clean, environmentally friendly energy practice suggests that every energy consumer has a responsibility to prioritize it in his or her operations. Arguably, operations in the information technology sector should move towards clean energy, and it behooves research and development scholars in the sector to fill this lacuna. The world has already shifted from being service-industry driven to a knowledge society, as illustrated by the change in management approach from Theory X and Theory Y to one in which practice is knowledge driven rather than merely management practice. This means the role of IT in delivering knowledge has become more pronounced; in other words, the consumption of computing services that drive information is increasing. This makes it ethical to research IT technologies that cut resource demands while delivering the same or better services.

Experience of providing solutions that minimize wastage of resources and give value for money and forgone cost is a strong motivation for pursuing the current research aims. Workplace decisions on investment become justified when the returns indicate cost savings and the adopted technology provides superior services over the prior one. This is the advantage of having prior workplace experience (Shank, 2006, p. 77).

With regard to IT development, the world is gearing towards virtualization. It is my assumption that information sharing will be embraced in order to give virtualization more meaning. Moreover, virtualization solutions will be further refined to suit LAN as well as WAN users.

Economic benefits are the bottom line of every investment. In addition, there is a global concern and call for voluntary action towards energy efficiency in order to save the environment and limited resources.

My desire is to provide research solutions that are economically viable and environmentally friendly, rather than to suggest technological changes merely because of the advancement of knowledge in the information technology sector.

References

Schram, T. H. (2006). Conceptualizing and proposing qualitative research. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.

Shank, G. D. (2006). Qualitative research: A personal skills approach. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.

How Microorganisms Can Be Used for Bioremediation

The effects of environmental pollution on the global marine ecosystem, as well as local ones, have been tremendously adverse, with multiple species having become extinct or approaching the stage of extinction. Therefore, strategies for restoring the ecosystems that said organisms need to survive and thrive must be introduced into the context of the present-day environmentalism-oriented efforts. Bioremediation has been used recently as a means of restoring damaged marine environments, offering a chance at alleviating some of the harm produced (Bovio et al. 312). By using the properties of some microorganisms, such as compound microbial agents, which allow contaminants to be removed from target environments, the global community will be able to contribute to the management of the drastic effects of pollution.

The idea of introducing pollutant-absorbing microorganisms into contaminated marine settings has been quite popular recently as a possible response to water pollution in particular. A case study by Gao et al. suggests injecting microorganisms such as the HP-RPe-3 compound microbial agent into highly polluted aquatic areas as the measure needed for restoring the original ecosystem (645). According to the case study results, the proposed measure substantially reduces the extent of water contamination (Gao et al. 657). Specifically, the study explains that microorganisms such as the HP-RPe-3 compound microbial agent significantly enhance the rate of pollutant degradation. Moreover, the study confirms that photosynthetic bacteria are beneficial from both economic and environmental perspectives, since the specified agents are comparatively cheap and easy to manipulate, which is crucial for addressing pollution problems observed on a large scale.

The study also outlines the multiple benefits of using the HP-RPe-3 compound microbial agent as the microorganism that helps to accelerate the process of degradation in ammonia nitrogen. Since ammonia nitrogen is one of the main water pollutants, the application of the microbial agent in question should be considered critical for managing water pollution and preventing the further contamination of groundwater. As the picture below shows, the specified agent allows for the creation of a chemical bond that increases the speed of degradation for the adverse elements, namely, ammonia nitrogen (NH3-N) contained within the bond, thus rendering it harmless and easy to remove from the water.

Figure 1. HP-RPe-3 Compound Microbial Agent: Effects (Gao et al. 652)

The study under analysis indicates that the acceleration of pollutant decomposition, to which microbial agents contribute significantly, provides a strong alternative solution for fighting marine pollution. Furthermore, there are strong indications that introducing the said microbial agent into larger marine contexts will help minimize the damage produced by major instances of pollution. Specifically, opportunities for addressing the impact of chemicals dumped into the ocean may emerge after incorporating the specified agent into the set of tools for handling the current pollution crisis. Due to the strong effect that the HP-RPe-3 compound has on aquatic environments, it could be assumed that it may also work in marine settings.

At this point, it is necessary to mention that introducing the said agent into the marine context may involve a wider range of obstacles than one might presently expect. Specifically, due to the difference between marine and freshwater settings, the effects that the HP-RPe-3 compound produces may be somewhat reduced. Indeed, the case study in question appears to have tested the proposed strategy in a freshwater environment, whereas the marine context implies the presence of other constituents, primarily chloride, sodium, and sulfates (Gao et al. 648). Therefore, further tests will have to be administered to promote the specified microorganism as the main agent in marine water purification.

Nonetheless, the existing evidence points to the efficacy of the suggested tool. Namely, the inclusion of the HP-RPe-3 compound agent in polluted environments is expected to reduce the decomposition time, as well as the stability and longevity, of key pollutants observed in marine settings. Therefore, the introduction of microbial agents in general, and the specified resource in particular, into marine settings as a method of removing the observed contamination should be explored as a viable tool with massive potential. Specifically, the introduction of similar microorganisms into the marine setting may help in addressing not only the problem of NH3-N removal but also the management of oil spills.

Since numerous types of microorganisms possess the ability to purify the environment in which they are placed, their use as a vehicle for bioremediation must be considered a crucial opportunity for handling the effects of pollution. As the case study under analysis has indicated, introducing microorganisms such as the HP-RPe-3 compound microbial agent into a heavily contaminated water source purifies it to a significant extent. Therefore, the suggested technique of injecting microorganisms known for their cleansing properties into contaminated aquatic areas is likely to produce noticeably positive effects on overall water quality. As a result, multiple marine areas may be restored, giving their endemic species the opportunity to thrive in the newly purified ecosystems.

Works Cited

Bovio, Elena, et al. "The Culturable Mycobiota of a Mediterranean Marine Site after an Oil Spill: Isolation, Identification and Potential Application in Bioremediation." Science of the Total Environment, vol. 576, 2017, pp. 310-318.

Gao, Hong, et al. "Application of Microbial Technology Used in Bioremediation of Urban Polluted River: A Case Study of Chengnan River, China." Water, vol. 10, no. 5, 2018, pp. 643-663.

Heidarrezaei, Mahshid, et al. "Isolation and Characterization of a Novel Bacterium from the Marine Environment for Trichloroacetic Acid Bioremediation." Applied Sciences, vol. 10, no. 13, 2020, pp. 1-18.

The Mystery of the Galapagos Finches

Introduction

Among much exotic fauna, thirteen species of Darwin's finches, including Geospiza fortis, G. scandens, G. magnirostris, and G. fuliginosa, inhabit the Galápagos Islands, notably Santa Cruz and Daphne Major (Grant & Grant, 2008). According to Grant and Grant (1995), these species have displayed remarkable evolutionary adaptations to survive harsh conditions, especially the severe droughts of 1976-77 and 1984-1986. The morphological traits used for evaluating natural selection in finches include body weight, beak dimensions, wing and leg size, and seed preferences (Price et al., 1984; Grant et al., 1976; Grant, 1991), some of which, such as beak size, are heritable (Grant & Grant, 2008). Interestingly, the selected traits differed between the two drought periods: for example, in 1976 large beaks were favored, whereas small beaks were favored in the 1984 drought.

The present investigation aims to ascertain the precise factor(s) that caused finch deaths during 1973-1978 and allowed some finches to survive. The analysis is based on two questions, each with four hypotheses:

  • Why did the finches die? The hypotheses were:
  1. The finches died because of the lack of seeds.
  2. The finches died because of the lack of rainfall.
  3. The finches did not die (null hypothesis).
  4. The finches died because of predators.
  • Why did some finches survive? The hypotheses were:
  1. The finches survived by feeding on alternate seeds.
  2. The finches survived the lack of rainfall.
  3. The finches did not survive (null hypothesis).
  4. The finches escaped the predators to survive.

Methods

The data on different traits were obtained for the wet and dry months of 1973-1978 from a program titled The Galápagos Finches (Jackson & Hughes, 2009), with source data derived from the original papers. The data on different groups and populations of finches (survived, died; adults, fledglings; male, female) and different traits (weight, beak length, wing length, and leg length) were pooled and compared across the dry and wet seasons of those years. The surviving and dead populations were systematically compiled with attributes such as lower or higher weights and shorter or longer beaks, wings, and legs. Using double-Y plot graphs, the dead and survived populations were each scaled against the right Y-axis, and the remaining information was plotted on the left Y and X axes. Therefore, for each trait, two graphs were plotted, one each for the dead and survived populations, and the eight graphs together covered the four traits.
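
To make the plotting procedure concrete, the short Python sketch below shows how a double-Y plot of this kind could be produced with matplotlib, placing the total population on the right Y-axis via twinx(); the season labels and counts are hypothetical placeholders, not values from The Galápagos Finches program.

import matplotlib.pyplot as plt

seasons = ["Wet 1973", "Dry 1973", "Dry 1976", "Wet 1977", "Wet 1978"]  # hypothetical labels
short_beak_dead = [12, 18, 22, 9, 4]   # hypothetical counts, beak < 10 mm
long_beak_dead = [7, 9, 15, 5, 2]      # hypothetical counts, beak > 10 mm
total_dead = [19, 27, 37, 14, 6]       # hypothetical totals for the right Y-axis

x = range(len(seasons))
fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()                                            # second Y-axis on the right
ax_right.bar(x, total_dead, color="lightgray", label="Total dead")    # total population, right axis
ax_left.plot(x, short_beak_dead, marker="o", label="Beak < 10 mm")    # trait classes, left axis
ax_left.plot(x, long_beak_dead, marker="s", label="Beak > 10 mm")

ax_left.set_xticks(list(x))
ax_left.set_xticklabels(seasons, rotation=45, ha="right")
ax_left.set_xlabel("Season")
ax_left.set_ylabel("Dead birds by beak class")
ax_right.set_ylabel("Total dead population")
ax_left.legend(loc="upper left")
ax_left.set_title("Beak Length and Death (sketch)")
fig.tight_layout()
plt.show()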

Results & Discussion 1: Why Did the Finches Die?

The finches died because of the lack of seeds

Beak Length and Death

The data correlating beak length and mortality are shown in the figure above. Among the dead populations, finches with beak length <10 mm died more often and in greater numbers than those with beak length >10 mm. The only exception was dry 1976, when the trend was the opposite. These data indicate that a smaller beak (<10 mm) was an unselected trait, and finches with smaller beaks exhibited higher mortality than those with larger beaks. Therefore, the given hypothesis is accepted.

The finches died because of the lack of rainfall

Weight and Death

The figure above shows that the overall dead population was higher during dry seasons than during wet seasons, which initially suggests that lack of rainfall results in higher mortality. However, the difference in 1976 was not so pronounced (22 against 15), which suggests that mortality cannot necessarily be correlated with drought. There was an increase in the total population of finches in wet seasons due to fledgling younger birds which grew up in the dry seasons.

It was further noted that most of the smaller birds (<15 g), which included the fledglings, exhibited significantly higher mortality than the larger birds. This indicates that the lighter birds were too fragile and weak to cope with the harsh drought seasons, as opposed to the wet seasons.

The finches did not die (null hypothesis)

This hypothesis is clearly incorrect and is rejected on account of the high overall mortality of the total population until 1977, which can be clearly seen in the figures above (total dead populations).

The finches died because of predators

Wing Length and Death
Leg Length and Death

Both wing length and leg length are attributes directly related to the flying and running ability needed to escape or hide from predators. These features also reflect the body size and energetic status of the birds. In the figures above, the trends for varying wing and leg sizes correlated with each other. Initially (in 1973), higher mortality was seen in birds with smaller wings and legs, but from 1976 onwards, mortality among birds with larger wings and legs increased. As wing and leg lengths did not determine selective deaths in the populations, we reject the hypothesis that predation was the main cause of death.

Results & Discussion 2: Why Did Some Finches Survive?

The finches survived by feeding on alternate seeds

Beak Length and Survival

The figure above shows that while in 1973 (dry and wet) birds with both smaller beaks (<10 mm) and larger beaks (>10 mm) had an advantage, from dry 1976 onwards only the birds with larger beaks survived. The data from the fledglings that survived wet 1978 further show that young birds with large, adult-sized beaks (>10 mm) survived in greater numbers than those with the usual small beaks. As the finches with smaller beaks displayed higher mortality during this period, it is unambiguous that a large beak is a selected trait for survival. Such birds could break open large, hard seeds when the smaller seeds perished due to drought, and therefore did not starve. The given hypothesis is therefore accepted.

The finches survived the lack of rainfall

Weight and Survival

The figure above shows that, regardless of wet or dry months, there was not much difference in the size of the total surviving population, with a trend of decreasing numbers in later years. These data on seasonal survivors, together with the inconsistent mortality data for the total dead populations, indicate that it may be premature to correlate lack of rainfall with survival or mortality of the birds. Hence, this hypothesis is presently rejected.

There is an interesting feature in the survival of the lighter birds (<15 g). From wet 1973 to wet 1976, birds of both weight classes (below and above 15 g) survived equally, but from dry 1976 onwards only the lighter birds preferentially survived. As the light birds also had higher mortality than the heavier ones, it was not possible to correlate the weight of the birds with survival.

The finches did not survive (null hypothesis)

This is clearly not the case because, from the wet 1977 season onwards, the total surviving population increased and stabilized. At this point in the study the mortality was low, indicating that the birds had adapted to survive and multiply, and thus we reject this null hypothesis.

The finches escaped the predators to survive

Wing Length
Leg Length

The trends in the surviving population and the relative survival of birds with small and large wings (below/above 60 mm) or legs (below/above 15 mm) were quite similar. Although the surviving birds from dry 1976 onwards did exhibit greater leg and wing lengths, with the exception of wet 1978, the mortality of these larger-winged and larger-legged birds was also quite high in the corresponding periods. These attributes rule out any selection for better flying and evading ability to escape or hide from predators. Another observation, that lighter finches survived from wet 1977 onwards, also rules out any role of predation, as these finches would have been easier prey than heavier birds. Hence the hypothesis that anti-predation was a surviving attribute is rejected.

Overall Conclusion

The data clearly reveal that the only convincing trait that can be correlated with survival of the finches is beak length. Birds with larger beaks (>10 mm) had an evolutionary advantage. In drought seasons the woody fruits of Tribulus were the common feed, and only birds with strong, long bills were able to crack the fruits. Previous work also identified this feature as a selectable trait, and it was also found to be heritable (Grant & Grant, 2008). Moreover, it is likely that the wet versus dry season variation in mortality was due to feeding shortfalls, as fewer seeds were available in drought. Apparently, there is no direct effect of the wet or dry season on the birds' survival. The survival of lighter adult and fledgling birds was an unusual observation, which contradicts the earlier theory that bigger, heavier birds have an evolutionary advantage owing to better mating and foraging abilities (Price et al., 1984; Grant et al., 1976). Wing and leg size, associated with avoidance of predation, seems to have no correlation with finch survival.

References

Grant, B.R. & Grant, P.R. 2008. Fission and fusion of Darwin's finches populations. Philosophical Transactions of the Royal Society B, 363, 2821-2829.

Grant, P.R. 1991. Natural selection and Darwin's finches. Scientific American, 82-87.

Grant, P.R. & Grant, B.R. 1995. Predicting microevolutionary responses to directional selection on heritable variation. Evolution, 49(2), 241-251.

Grant, P.R., Grant, B.R., Smith, J.N.M., Abbott, I.J. & Abbott, L.K. 1976. Darwin's finches: Population variation and natural selection. Proceedings of the National Academy of Sciences USA, 73(1), 257-261.

Jackson, J. & Hughes, M. (2009). The Galápagos Finches. Web.

Price, T.D., Grant, P.R., Gibbs, H.L. & Boag, P.T. 1984. Recurrent patterns of natural selection in a population of Darwin's finches. Nature, 309, 787-789.

A Detailed Look at Brucellosis

Introduction

Etiologic agent

Brucellosis is a common infection that affects man, swine, goats, and cattle, although it rarely infects wild mammals such as deer in the United States. The disease is responsible for abortion in cattle (Ingebrigtsen, Ludwig, & McClurkin, 1986).

Brucellosis is caused by bacteria of the genus Brucella. The disease infects both wild and domestic mammals. The Brucella species that infect domestic mammals include B. melitensis and B. ovis, which infect sheep and goats; B. suis, which infects pigs; B. abortus, cattle; and B. canis, dogs (Centers for Disease Control and Prevention [CDC], 2005).

Country/state of origin

Brucellosis is speculated to have first manifested around 1600 BC in the fifth plague of Egypt. Laboratory examination of ancient Egyptian bones dating back to about 750 BC has revealed osteoarticular lesions, including sacroiliitis, that are common features of the infection (Pappas & Papadimitriou, 2007).

In 1887, David Bruce isolated B. melitensis (then known as Micrococcus melitensis) from the spleen of a British soldier who had been killed by Malta fever, a febrile disease widespread among the military personnel based on Malta. For nearly two decades after the isolation of B. melitensis, the nature of Malta fever eluded scientists, who presumed it to be a vector-borne disease. However, in 1905, Themistocles Zammit established the zoonotic character of the infection by chance, detecting B. melitensis in goats' milk (Seleem, Boyle, & Sriranganathan, 2010).

In 1897, Bang identified Bang's bacillus (B. abortus) as the etiologic agent of Bang's disease. His proposition was further supported by Alice Evans' work on infectious bacteria in dairy commodities, which verified the connection of Bang's disease with Malta fever (Seleem, Boyle, & Sriranganathan, 2010).

In 1976, brucellosis was investigated on farmland in Minnesota, following a survey motivated by a decline of white-tailed deer (Odocoileus virginianus) that was suspected to be due to drought, disease, and associated stress. A serological study was performed to establish the incidence of antibodies in these mammals to the etiologic agents of brucellosis, parainfluenza 3, leptospirosis, and infectious bovine rhinotracheitis. The results of the survey were positive (Ingebrigtsen, Ludwig, & McClurkin, 1986).

Method of transmission

The zoonosis is often transmitted via abrasions on the skin surface when handling sick animals. In the US, it often results from the consumption of unpasteurized milk and other dairy products. Also, the organism is highly infectious in the laboratory through aerosolization, and preparing cultures therefore requires the implementation of biosafety level-3 measures.

Symptoms

During the acute stage, that is, between infection and eight weeks after infection, the disease manifests with nonspecific, flu-like symptoms such as back pain, fever, myalgia, sweats, headache, malaise, and anorexia (CDC, 2005).

During the undulant stage, that is, within one year after the onset of infection, the disease manifests as arthritis, undulant fevers, and epididymo-orchitis in men. Neurological signs can also present acutely in up to 5 percent of clinical cases (CDC, 2005).

During the chronic stage, which is more than a year after infection, the disease may manifest as chronic fatigue syndrome, arthritis, and depression.

Treatment

According to Seleem et al. (2010), Brucella is capable of intracellular localization and of adjusting to its surroundings, which gives it a replicative advantage. This confers on it the ability to resist drugs, leading to increased treatment failure and relapse rates, and it requires drug combinations as well as patient compliance to achieve effectiveness. Therefore, the optimal therapy for brucellosis is a combination prescription of two antibiotics to avoid the relapse associated with monotherapies (Seleem et al., 2009).

A drug combination of doxycycline and streptomycin (DS) is currently the best therapeutic alternative, with minimal adverse effects and less relapse, particularly in cases of acute and localized forms of brucellosis (Seleem et al., 2009). Notably, neither doxycycline nor streptomycin can block the intracellular growth of brucellae on its own. The DS combination is regarded as the gold-standard therapy.

However, the DS prescription is less convenient since the streptomycin has to be administered parenterally for 3 weeks. Thus, doxycycline therapy for 6 weeks in conjunction with parenteral administration of gentamicin (5 mg per kg) for 7 days is regarded as the preferred alternative regimen (Glynn and Lynn, 2008).

The DS combination regimen had been regarded by the WHO as the gold-standard treatment for brucellosis for many years. Nevertheless, in 1986 the Joint FAO/WHO Expert Committee on Brucellosis amended its recommendation for therapy of adult acute brucellosis to rifampicin (600-900 mg per day orally) plus doxycycline (200 mg per day orally) (DR) for 6 weeks as the treatment of choice. However, the superiority of the DS regimen over the DR regimen has since been demonstrated by research studies.

Epidemiology and economic impact

Brucellosis is a nationally notifiable, highly infectious disease that is also reportable to local health authorities. Over the past 15 years, the incidence of the disease in the US has been estimated at about 100 cases per year.

The incidence of brucellosis in the United States is below 0.5 cases per 100,000 population, mostly due to B. melitensis. The disease is most prevalent in California, Virginia, Florida, and Texas. The groups of people at high risk of infection include animal laboratorians, abattoir workers, veterinarians, meat inspectors, and animal inspectors.

The epidemiology of brucellosis is continually shifting, with new foci emerging or re-emerging. The epidemiology of human brucellosis has changed rapidly over the past few years due to several sanitary, political, and socioeconomic factors, as well as increased international movement. New foci of human brucellosis have developed, especially in Central Asia, while the situation in certain regions of the Middle East is increasingly deteriorating (Pappas et al., 2006b).

The disease occurs globally, except in certain developed countries in which bovine brucellosis (B. abortus) has been eliminated. Eliminated in this sense means that no case has been reported for a minimum of 5 years. Such countries include the UK, Sweden, Australia, Norway, Canada, New Zealand, Cyprus, the Netherlands, Denmark, and Finland.

Regions in which brucellosis remains prevalent within various populations include South America, Mexico, Central America, northern and eastern Africa, Central Asia, India, the Near East, and the Mediterranean countries of Europe.

Conclusion

Reducing the incidence of brucellosis in third-world countries necessitates substantial efforts to develop a framework that educates people about the risk factors of brucellosis, delivers appropriate laboratory facilities, trains personnel to collect and analyze samples, runs rigorous surveillance programs, and maintains records. In addition, when the occurrence of brucellosis is reduced or eliminated within the animal reservoir, this translates into a corresponding considerable decrease in occurrence within the human population.

Reference list

Centers for Disease Control and Prevention (CDC). (2005). Brucellosis: Brucella melitensis, abortus, suis, and canis. Atlanta, GA: US Department of Health and Human Services, CDC.

Glynn, M.K., & Lynn, T.V. (2008). Brucellosis. Journal of the American Veterinary Medical Association, 233, 900-908.

Ingebrigtsen, D. K., Ludwig, J. R., & McClurkin, A. W. (1986). Occurrence of antibodies to the etiologic agents of infectious bovine rhinotracheitis, parainfluenza 3, leptospirosis, and brucellosis in white-tailed deer in Minnesota. Journal of Wildlife Diseases, 22(1), 63-86.

Pappas, G., & Papadimitriou, P. (2007). Challenges in Brucella bacteremia. International Journal of Antimicrobial Agents, 30(Suppl. 1), S29-S31.

Pappas, G., Papadimitriou, P., Akritidis, N., Christou, L., & Tsianos, E.V. (2006b). The new global map of human brucellosis. Lancet Infectious Diseases, 6, 91-99.

Seleem, M.N., Jain, N., Pothayee, N., Ranjan, A., Riffle, J.S., & Sriranganathan, N. (2009). Targeting Brucella melitensis with polymeric nanoparticles containing streptomycin and doxycycline. FEMS Microbiology Letters, 294, 24-31.

Seleem, M. N., Boyle, S. M., & Sriranganathan, N. (2010). Brucellosis: A re-emerging zoonosis. Veterinary Microbiology, 140, 392-398.

Prokaryotes and Eukaryotes: The Reason for the Smaller Size of Prokaryotes

Prokaryotes are organisms that lack a cell nucleus or any other membrane-bound organelle, while Eukaryotes possess both. The nucleus is one of the most important structures required in living organisms. The nucleus can be thought of as the brain of the cell.

Most prokaryotes are single-celled, or unicellular, except for the myxobacteria, which are multicellular at some stage of their life cycle. Eukaryotes are often multicellular organisms that carry out the different functions of life. They contain both membrane-bound organelles (small organs) and a membrane-bound nucleus.

The nucleus controls several important functions. For example, the duplication of cells occurs under the command of the nucleus. A eukaryotic cell is capable of cell division. The nucleus contains the important molecule deoxyribonucleic acid (DNA), which is necessary for the function and development of organisms.

Prokaryotes can be likened to different members of a group. They carry out separate tasks and work as a group. Similar cells group together and work in harmony to carry out one particular function, and hence they do not carry out the individual processes of life on their own. The reason prokaryotes lack membrane-bound organelles like the mitochondria and the nucleus is that they are not required to perform complicated functions. This explains their single-celled structure and hence their smaller size.

Eukaryotes are dissimilar to prokaryotes in that they are required to carry out important functions of life. A single organism can be capable of performing multiple tasks; it is a being in itself. It contains important structures like mitochondria, ribosomes, the endoplasmic reticulum, etc., and its nucleus contains DNA. Hence, complicated tasks like the production of proteins or cell division can be carried out by these cells. Eukaryotic cells must accommodate important sub-structures in their body. This explains why eukaryotes are larger in size when compared to prokaryotes.

A tissue is a structure that is made up of similar types of cells, and an organ is a structure that is made up of tissues. Several organs may group together to form an organ system. The digestive system is made up of several organs like the liver, stomach, intestines, etc. The liver is made up of individual cells called hepatocytes. The stomach has similar muscles that help it contract in harmony. The esophagus has cartilaginous structures that help during peristaltic movements. Therefore, each organ is made up of tissues (Hurlbert, 1999), and each tissue is made up of several similar eukaryotic cells. Organs include the heart, brain, liver, etc.

The reason for the dissimilarity in size between prokaryotes and eukaryotes can be explained by a simple example. If 5 people are delegated the task of building a structure and they are provided all the materials necessary, the plan, and the blueprints of the structure, their work will be made easy. These 5 men have been prepared in a way that makes building the structure practically effortless for them. Consider these 5 men to be individual prokaryotic cells and the task that they are performing as a group to be a function. The structure that they are building is a tissue, and several similar structures could potentially form an organ. Now suppose another 5 men were to build the same structure but were not provided any materials, plans, or a blueprint of the structure.

These men would have to gather the necessary materials, make a plan, and then start their work. They would require expertise in the matter, as everything required to build the structure has not been provided to them. They can be considered eukaryotic cells that have to carry out a complicated task. Such men would need greater skill, unlike the previous 5 men, who would not have to put much thought into carrying out their work.

The functional demands of eukaryotic cells require them to have specific organelles within their bodies. These organelles take up space, and this space is the reason eukaryotic cells are larger than prokaryotic cells. Prokaryotic cells are simple structures that do not need such organelles.

References

Hurlbert, R.E. (1999). Microbiology 101/102 Internet text, Chapter II: Eukaryotic vs. prokaryotic cells.

Kaiser, G. (2007) Introduction. Cellular Organization: Prokaryotic And Eukaryotic Cells.

Cell membrane structure & function.

Opium: Legal and Illegal Use

The opium poppy (Papaver somniferum), also known as the breadseed poppy, is a flowering plant. The plant can reach 3-16 feet tall, with lobed silver-green foliage and blue-purple flowers that are approximately 5 inches wide. Seeds are held in a spherical capsule topped by a disk formed from the stigmas of the flower, and the seeds escape from pores in the capsule when it is shaken. The opium poppy is best known for its unripe seed capsules, which are processed to obtain a milky latex, the raw form of opium; the latex contains the analgesic alkaloid morphine, which is used in opioids for medical purposes and can be chemically modified to produce the heroin of the illegal drug trade (Petruzzello).

The opium poppy is referenced in history as early as 3,400 B.C., when it was cultivated in lower Mesopotamia. The Sumerians called it the joy plant, and knowledge of it was passed to the Assyrians and eventually the Egyptians. The plant and the properties of opium spread as more countries began to grow it, making it widely available and less costly. It was cultivated along the Silk Road and is known to have played a major role in history in China's Opium Wars of the mid-1800s.

Opium was known in ancient Greek and Roman cultures as a powerful pain reliever and a means of inducing sleep. In 1803, the principal active ingredient of the opium resin was isolated and used to create morphine, which remains one of the most effective drugs for relieving severe pain. In the modern day, another component of opium, thebaine, is used to create synthetic painkillers such as oxycodone (Opium Poppy).

Opium is also known for its role in the illegal drug trade as the key component in heroin. It is grown in countries of the Middle East, Asia, and Latin/South America by small farmers and exported, primarily for illegal drug use. In some regions, it is one of the leading economic industries supporting the local populations. However, it is also grown on a large scale as an agricultural crop, for the pharmaceutical industry as well as for its regular ripe poppy seeds.

The legal growing of the opium poppy occurs in India, Turkey, and Australia, while illegal farms are prevalent in Afghanistan, Burma, and Colombia. The ripe poppy seeds, which are kidney-shaped and grayish-blue in color, are used for food, seasoning, oil, and birdseed in some cultures. Furthermore, the opium poppy is sometimes grown in gardens as an ornamental plant, valued for its natural beauty and variety of colors.

As is evident, opium poppy cultivation is an ancient practice that has become a tremendous industry in the modern day for both legal and illegal uses. The role of the flower in heroin processing and the wider illegal drug trade is devastating, with many consequences. International organizations and the NATO military coalition in Afghanistan have dedicated significant resources to identifying makeshift opium poppy farms and laboratories, destroying them, and offering local populations other economic opportunities, since the lack of such opportunities is the primary reason why these practices continue. Meanwhile, the critical role of opium in the production of pain management medication such as morphine or oxycodone is indisputable.

While that industry also has negative impacts, as seen in the recent opioid epidemic in the United States, it also helps millions of people, and these medications are instrumental in the healthcare industry. Until better alternatives are found, the legal growing of the opium poppy is a necessity.

Works Cited

"Opium Poppy." DEA Museum. Web.

Petruzzello, Melissa. "Opium Poppy." Encyclopaedia Britannica. Web.

Scientific Theory and Other Perspectives

Since time immemorial, science has been an integral part of our lives, giving us a better understanding of the reality around us. Science has brought about many incredible changes, some of them beneficial and others destructive. Nonetheless, it cannot be denied that science has been very instrumental in our everyday lives. There are other perspectives that also attempt to explain why certain phenomena exist. Perspectives such as religion and law have greatly shaped the understanding of a cross-section of society, but there is a major difference between these perspectives and science. The focus of this discussion is the major difference between science and other perspectives, as well as the purpose of the scientific theory.

A scientific theory is a model or an explanation based on experiments, observation, and valid reasoning that must be evaluated, tested, and confirmed as a principle that aids in the explanation and subsequent prediction of the occurrence of events or phenomena in the future (Whewell, 1968, p. 234). For a scientific theory to be considered valid, there must be a thorough examination of facts that is carried out carefully and rationally. It is also very important to separate facts from theories: facts are aspects that can be measured or observed, while theories are explanations that help interpret particular facts.

Research and inquiry that is systematic and disciplined in its management and organization is characterized by the need to come up with a theory that gives explanations based on observed evidence, so that the resulting information is credible, tangible, and concrete (Larson, 2006, p. 12). Any inquiry that is scientific in nature ought to revolve around logic.

A scientific theory is often likened to other representations such as models or maps. However, a theory differs from these in that it is always based on valid explanations. Scientific inquiry can also be differentiated from naive inquiry. As already mentioned, a scientific inquiry is one that revolves around logic in its assumptions and conclusions. A naive inquiry, on the other hand, is one that does not pay much attention to logic and makes assumptions and conclusions based on other factors.

Usually, a naive inquiry may later be proven false when a physical experiment is conducted (Douglas, 2010, p. 34). The major difference between a scientific inquiry and a naive inquiry lies in the manner in which data is collected and analyzed. Verification is a very important stage in any kind of inquiry. In a naive inquiry, sufficient measures are not usually taken to verify the information gathered (Morrison, 2000, p. 21). This means there are usually high possibilities of errors that could make the inferences invalid.

Before any inferences can be made in any inquiry, there must be supporting evidence, and the manner in which the evidence is collected is crucial in determining the outcome (Suppe, 1977, p. 34). Scientific inquiry always begins by ensuring that its sequence is logical, especially in the definition and measurement of all observable phenomena. This is the only way to evaluate the understanding that we have regarding certain aspects. Abstract ideas, propositions, concepts, and assumptions are normally refined through research, which either confirms or disproves the assumptions or hypotheses. Scientific inquiries also look at relationships in terms of their measurement, both quantitatively and qualitatively.

References

Douglas, L. (2010). Differentiated science inquiry. London: Corwin Press.

Larson, E. (2006). Evolution: The Remarkable History of a Scientific Theory. California: Random House Publishing Group.

Morrison, M. (2000). Unifying scientific theories: Physical concepts and mathematical structures. London: Cambridge University Press.

Whewell, W. (1968). Theory of scientific method. London: Hackett Publishing.

Suppe, F. (1977). The structure of scientific theories. Illinois: University of Illinois Press.

Polymerase Chain Reaction Analysis

Introduction

PCR is a biotechnological invention that is used to analyze genetic material and synthesize copies of it. It can amplify even very tiny or damaged fragments of genetic material to a level at which they can be easily studied. Since its invention in the 1980s, PCR has been one of the most important scientific technologies, constituting the fastest method of obtaining DNA duplicates and thereby facilitating a wide range of genetic research. In the years since its invention, PCR has changed life in many ways. Invented by the scientist Kary Mullis, it has been used in many applications in medicine and biological research, including the cloning of DNA, the sequencing of genetic material, and the analysis of active genes, especially in the diagnosis of diseases that are passed through generations. The basics of the technique depend on thermal cycling, which involves several cycles of heating and cooling of the genetic material being analyzed. The repetition of these two processes, together with polymerase enzymes, results in the melting (denaturation) of the genetic material and its consequent replication. PCR has been used widely, from the diagnosis of medical conditions to law courts to the study of animal behavior. Its wide application has been attributed to its simplicity, its speed of operation, and the fact that it is less costly than other biotechnological techniques (Powledge, 2010).

Significance of PCR in molecular biology

The significant use of PCR on genetic material has been facilitated by the fact that all living things have sequences of genetic material, DNA and RNA, that are unique and specific to each species. For instance, in humans, every single individual has his or her own unique DNA sequence. It is this uniqueness that makes it possible for scientists to trace the precise species to which an organism belongs and to relate organisms based on their genetic constitution. However, this process requires a remarkable amount of DNA for study. In this respect, PCR has been very helpful, since it harnesses the natural functioning of polymerase enzymes, which are present in all living things and copy genetic material, proofread it, and even correct miscopied material during replication. So far, only PCR has the capacity to characterize and synthesize any specified piece of genetic material. It also has the capacity to pick a specific sequence out of a mixture and duplicate it. PCR does not require genetic material from a specific source; it can use material from blood samples, hair, and microorganisms, as well as from plants and animals. In addition, PCR can analyze material that is millions of years old (Rabinow 1998).

Basically, the entire process is very simple to all molecular biologists. It only requires a template, the material to be copied, and two primers, which are short sequences of genetic material. The primers are made up of the four bases from which genetic material is built. The genetic material consists of nucleotides which exist as chains: DNA exists as a double strand of nucleotides, while RNA consists of only one strand, just as the primers do. The sequences of the primers are known before the process, since they are derived from either side of the material to be copied (Janes 2002). The process is made more reliable by the easy availability of the primers, which can be purchased from suppliers or made in the lab. PCR basically consists of three major steps. The process is initiated by separating the two strands of the helix, creating individual template strands. This is followed by joining the primers to the templates of the original material. The final step involves the synthesis of new DNA, in which the polymerase enzyme moves along the template, reads it, and matches it with complementary nucleotides, resulting in the formation of two helices, each consisting of one original template strand and one newly synthesized strand. Automated machines for regulating the temperature changes during this procedure have been developed to make the work much easier; such machines are already available, making the procedure reliable and fast. In order to make as many copies as desired, all that needs to be done is to repeat the process using every newly synthesized strand. Since one cycle takes only around two minutes, a million copies of the desired material can be made in less than an hour, an operation which took a week or so before the invention of PCR (Mullis 1990).
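
As a rough illustration of the arithmetic behind this exponential amplification, the short Python sketch below computes the number of copies after a given number of cycles under ideal doubling; the starting copy number and the per-cycle efficiency are hypothetical illustrative values, and the two-minute cycle time is the approximate figure quoted above.

def pcr_copies(start_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies after `cycles` rounds; efficiency=1.0 means perfect doubling."""
    return start_copies * (1 + efficiency) ** cycles

minutes_per_cycle = 2  # approximate cycle time quoted in the text

for n in (10, 20, 30):
    copies = pcr_copies(start_copies=1, cycles=n)
    print(f"{n} cycles (~{n * minutes_per_cycle} min): "
          f"about {copies:,.0f} copies from a single template")

# Around 20 cycles (~40 minutes) already yields on the order of a million
# copies, matching the claim that millions of copies can be made in under
# an hour.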

The significance of PCR has been demonstrated by its many uses in different aspects of life. One of the fields benefiting greatly from PCR is medicine, where PCR is being used to detect and identify organisms that cause infectious diseases. Genetic diseases, both inherited and those resulting from mutations, are also being studied. PCR enables physicians to study very minute amounts of DNA from an infected cell by amplifying it, so that the cause of infection can be identified. Medical analysis using PCR is proving to be more reliable than previous methods because of the speed associated with the procedure, especially in emergency health cases. PCR has been used in medicine above all to study disease-causing organisms that cannot be cultured (Powledge, 2010). For instance, PCR can detect HIV soon after infection, unlike other identification tests such as ELISA. Instead of looking for the antibodies that the body of the infected person makes against the HIV virus, as in the case of ELISA, PCR identifies DNA specific to the virus itself. PCR is the only molecular test that has been able to detect and identify the bacterial DNA that causes otitis media, a painful middle-ear infection in children. Another infection, Lyme disease, which is characterized by painful inflammation of the joints, can be diagnosed early using PCR by detecting the causative organism's DNA in the joint fluid. This way, early treatment can be given to prevent later complications of the disease, unlike the usual diagnosis based on the appearance of symptoms, which may allow the disease to progress before it is even diagnosed. PCR can also detect several organisms causing sexually transmitted infections with only a single swab. Moreover, unlike other molecular techniques, PCR has been used to differentiate strains belonging to the same genus. Basically, PCR is the most sensitive test for the identification of infectious agents, especially those that could not be identified by other methods due to their evasive behavior. PCR has also been used to detect variations in DNA resulting from mutations, including some that cause personality disorders (Powledge, 2010).

PCR, used alongside scientific research, can yield predictive results concerning genetic constitution. For instance, sensitive study of mutations through PCR can reveal who is predisposed to common disorders related to mutations. Some of these diseases may even be fatal, so this knowledge may help affected people initiate preventive measures. For instance, mutations in tumor suppressor genes have been detected in the gastrointestinal tract; through this test, people at high risk of developing colon cancer can be identified early enough. Similar tests have also been used to warn prospective parents of the risks they could face concerning genetically inherited diseases or disorders. PCR has also been widely used in parenting, first by reassuring parents that they can safely have children through confirmation of the absence of genetic diseases in both parents. Similarly, the lives of infants can be saved by using PCR to determine whether the blood group of a mother is compatible with that of the fetus; if not, treatment can be started in the womb to prevent the disabilities and deaths caused by the incompatibility, thanks to the invention of PCR.

Conclusion

PCR is principally the most important molecular technique in the field of biotechnology. Its major significance is the ability to produce as many copies of the genetic material of any organism as needed (Don et al., 1991). It can therefore be used to study the cells that contain the genetic material in question. Even very complex cells, such as those of human beings, have been analyzed, especially in relation to gene-related diseases. Early diagnosis of various complex diseases that cannot be detected by any other diagnostic method has also been made possible by PCR. In this way, many lives have been and will continue to be saved by ensuring early treatment of such diseases. PCR has been commonly used in genetic analysis in medicine and in law, where criminals can be traced through gene comparison in much the same way that PCR is applied to the diagnosis of diseases and other genetic concerns. What makes PCR even more reliable is the fact that the whole process is inexpensive, consumes little time, and is very easy to carry out. With the ongoing advancement of technology, PCR is expected to become even more reliable, as researchers have already reported the possibility of analyzing and copying larger pieces of genetic material, such as an entire genome of an organism. In addition, there are different types of PCR which allow specific applications of the technique, further increasing its efficiency. However, just like many other technical applications, PCR faces some technical problems. One of the major problems encountered in running the process is contamination of the genetic material sample, which in most cases results in the production of much more than the required amount of genetic material. The major cause of this irregularity is carry-over of already amplified genetic material, perhaps from a previous experiment, which introduces contaminant molecules into the sample. This problem may be difficult to avoid, especially in medical and legal applications where human life is involved. Despite this problem, there is still no doubt that PCR is one of the most significant inventions in molecular biology to date.

Reference List

Don, R., et al., 1991. 'Touchdown' PCR to circumvent spurious priming during gene amplification. Nucl. Acids Res. 19, 4008.

Janes, H. and Chen, B. 2002. PCR Cloning Protocols. 2nd ed. Humana Press, New Jersey.

Mullis, K. 1990. The unusual origin of the polymerase chain reaction. Scientific American.

Powledge, T. 2010. The polymerase chain reaction. Web.

Rabinow, P. 1998. What is PCR? Web.

Galapagos Finch Speciation

When Charles Darwin visited the Galapagos Islands in the 1830s, thirteen species of finches inhabited the place. The finches show a variety of beak shapes and sizes, all of which are suited to their different types of food and lifestyles. Darwin explained this phenomenon as follows: they are all descendants of an original pair of finches, and natural selection is responsible for their differences. The Grants studied the famous finches, the same ones that Darwin studied nearly two centuries ago. Darwin believed that it would take thousands of years to witness evolution, but the Grants have shown that it can take only a few years. In their investigation, the Grants focused mainly on the medium ground finch, Geospiza fortis, which has a stubby beak and mostly eats seeds.

What affected the food supply, and consequently triggered natural selection among the finches, was an event that the Grants witnessed in 1977. The directional form of natural selection begins to act in a changing environment; in the Grants' study, this was a drought. During that event, there was no rain for almost 18 months, and the plants withered, producing practically no seeds. Because of the reduced food supply, medium ground finches with larger beaks could turn to alternative foods, because they could crack open bigger seeds.

However, the medium ground finches with smaller beaks had difficulties with finding alternative food sources; therefore, they died of starvation. When the Grants returned to the island to document the changes in the finch population, they realized that the beak size had become larger in comparison to the pre-drought generation. Thus, a directional form of natural selection occurred.

Evolution by genetic drift took place when immigrant members of large ground finches started breeding on the island. The new population experienced a genetic bottleneck (microsatellite allelic diversity fell), and inbreeding depression occurred (Grant R. & Grant P., 2003, p.968). The results were shown by how poorly the 1991 cohort survived. In this example, changes in the beak structure of the species were caused by genetic drift.

Daphne Major was a perfect site for studying pre-zygotic and post-zygotic reproductive isolation in the medium ground finches and the cactus finches because of their moderate degree of isolation. The Grants established this by capturing and measuring the species to determine phenotypic variation, comparing offspring with their parents to determine inheritance, and following their fates across years to detect selection (Grant R. & Grant P., 2003, p.966). There was a definite transformation, since there was a significant heritable change in beak size and body size among the species.

Hybridization is the process of the formation or production of hybrids, which is based on the union of the genetic material of different cells in one cell. The causes of hybridization include extra-pair mating, the interspecific takeover of nests with eggs, and the dominant singing of a close neighbor (Grant R. & Grant P., 2003, p.970). In the 1990s, the flow of genes from the medium ground finch to the cactus finch population contributed to a decrease in mean body size and a blunter beak morphology of cactus finches (Grant R. & Grant P., 2003, p.970). In this case, the barrier in song differences between the two species was broken, and the environmental change happened to have been the most crucial factor in this process.

As previously stated, song divergence plays a significant role in keeping the species distinct from each other. Song is a fascinating attribute, as it is culturally rather than genetically inherited. This conclusion was supported by the Grants' field observations of the songs of offspring, parents, and even grandparents. The song is acquired through learning early in life, in a process that resembles imprinting; it is generally received from fathers during the period of parental dependence, in association with parental morphology (Grant R. & Grant P., 2003, p.970). Therefore, cultural inheritance occurred within the species studied by the Grants, independent of their genealogy.

References

Grant, B. R., & Grant, P. R. (2003). What Darwin's finches can teach us about the evolutionary origin and regulation of biodiversity. BioScience, 53(10), 965. Web.

Non-Parametric Equivalents for Parametric Tests and Chi-Square

About Non-Parametric Procedures

The most common reasons for selecting a non-parametric test over the parametric alternative

Non-parametric tests are preferable to parametric tests when the data violate the assumption of normality (the data do not have a normal distribution). Non-parametric tests are also preferable when the data do not meet the assumption of homogeneity of variances (when the variances are assumed to be unequal). Finally, non-parametric tests are the choice when the observations are not guaranteed to be independent (randomly selected), unlike parametric tests, which assume that the observations are made independently (Field, 2009).
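
As an illustration of how these assumptions might be checked in practice before choosing between the two families of tests, the Python sketch below uses scipy.stats on two hypothetical samples; the data and decision thresholds are assumptions for demonstration only, not part of the assignment data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=40, scale=8, size=40)   # hypothetical, roughly normal scores
group_b = rng.exponential(scale=10, size=40)     # hypothetical, clearly skewed scores

# Shapiro-Wilk: a small p-value suggests the normality assumption is violated.
print("Shapiro group A p =", stats.shapiro(group_a).pvalue)
print("Shapiro group B p =", stats.shapiro(group_b).pvalue)

# Levene: a small p-value suggests the homogeneity-of-variance assumption fails.
print("Levene p =", stats.levene(group_a, group_b).pvalue)

# If either assumption fails, fall back on a non-parametric alternative,
# e.g., the Mann-Whitney U test instead of the independent t-test.
print("Mann-Whitney U:", stats.mannwhitneyu(group_a, group_b))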

The issue of statistical power in non-parametric tests (as compared to their parametric counterparts), and the type that tends to be more powerful

In general, parametric tests have relatively higher statistical power than their non-parametric counterparts. For instance, the Friedman test has lower statistical power than repeated measures ANOVA because the Friedman test makes no assumption of a normal distribution and, working only with ranks, uses less of the information in the data when that assumption actually holds. The same argument explains the lower statistical power of all the other non-parametric counterparts of parametric tests.

It is notable that the statistical significance generated by parametric and non-parametric tests is often almost the same. However, non-parametric tests have lower statistical power for detecting differences, although this loss of power shrinks as the relationship between the variables becomes stronger. In addition, power analysis for non-parametric tests is less straightforward, since their power is usually calculated via simulation methods, unlike parametric tests, for which formulas, tables, and graphical displays aid in calculating power (Field, 2009).
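
A minimal simulation of the kind alluded to above can make the power comparison concrete: draw many pairs of samples under a known effect and count how often each test rejects the null hypothesis. The sample size, effect size, and number of repetitions in the Python sketch below are hypothetical choices, not values from the assignment data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, effect, reps = 0.05, 30, 0.6, 2000   # hypothetical simulation settings
t_hits = u_hits = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)        # group 1: standard normal
    b = rng.normal(effect, 1.0, n)     # group 2: shifted by the true effect
    t_hits += stats.ttest_ind(a, b).pvalue < alpha
    u_hits += stats.mannwhitneyu(a, b).pvalue < alpha

print(f"Estimated power, independent t-test: {t_hits / reps:.2f}")
print(f"Estimated power, Mann-Whitney U:     {u_hits / reps:.2f}")
# With normally distributed data, the t-test's estimated power is typically
# slightly higher, which is the pattern described in the paragraph above.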

For each of the following parametric tests, identify the appropriate non-parametric counterpart:

Parametric Test: Equivalent Non-Parametric Test
Dependent t-test: Wilcoxon Signed-Rank Test
Independent samples t-test: Mann-Whitney U Test
Repeated Measures ANOVA (one variable): Friedman Test
One-Way ANOVA (independent): Kruskal-Wallis Test
Pearson Correlation: Spearman Rank Correlation
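If the analyses were run in Python rather than SPSS, the pairs listed above would map onto SciPy routines roughly as follows; this is an illustrative sketch, not an exhaustive list.

from scipy import stats

# Each parametric test (key) paired with the SciPy function implementing
# its non-parametric counterpart (value).
NONPARAMETRIC_EQUIVALENT = {
    "dependent t-test":            stats.wilcoxon,           # Wilcoxon signed-rank test
    "independent samples t-test":  stats.mannwhitneyu,       # Mann-Whitney U test
    "repeated measures ANOVA":     stats.friedmanchisquare,  # Friedman test
    "one-way ANOVA (independent)": stats.kruskal,            # Kruskal-Wallis test
    "Pearson correlation":         stats.spearmanr,          # Spearman rank correlation
}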

Non-Parametric Equivalents of Parametric Tests

A non-parametric version of the dependent t-test (Wilcoxon Test)

Table 1 shows that among the 40 students who took the creativity pre-test, the mean score was 40.15 with a standard deviation of 8.30, while the mean creativity post-test score was 43.35 with a standard deviation of 9.60. Table 2 shows that there were 9 cases where creativity pre-test scores were higher than post-test scores and 28 cases where post-test scores were higher than pre-test scores. There were three ties, that is, three cases where the pre-test and post-test creativity scores were the same. The mean negative rank is 15.67, whereas the mean positive rank is 20.07.

It is therefore evident that post-test scores were generally higher than pre-test scores. Table 3 shows that the Z value is -3.179 with a 2-tailed significance value of .001, indicating that the difference between the pre-test and post-test scores was statistically significant. The Wilcoxon test is based on negative ranks in this case because the negative ranks have the smaller sum of ranks (141.0).

In summary, the Wilcoxon test (based on negative ranks), conducted to assess whether creativity post-test scores were significantly higher than pre-test scores, indicates a significant difference between the two sets of scores (Wilcoxon Z = -3.179, p < .05). Since the test is based on negative ranks, its significance implies that post-test scores were higher than pre-test scores after the 12-week course.
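A minimal sketch of how such a paired comparison could be run outside SPSS is given below; the pre- and post-test scores are hypothetical, since the raw data are not reproduced here, and scipy.stats.wilcoxon is assumed as the implementation of the signed-rank test.

from scipy import stats

pre  = [38, 42, 35, 41, 39, 44, 30, 47, 36, 40]   # hypothetical pre-test scores
post = [41, 45, 34, 46, 42, 47, 33, 50, 36, 44]   # hypothetical post-test scores

# Paired (signed-rank) comparison of the two sets of scores.
stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")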

A non-parametric version of the independent t-test (Mann-Whitney U)

The mean rank for creativity pre-test scores (N = 40) was 36.22, while the mean rank for creativity post-test scores (N = 40) was 44.78 (Table 4). This suggests that creativity test scores tended to increase between the start of the course (Week 0) and its end (Week 12). On applying the Mann-Whitney U test to the creativity test scores, no significant difference was found between pre-test and post-test scores (U = 629.00, p = .10) (Table 5). It is important to note that this finding contradicts the independent t-test, which detected a significant difference between creativity pre-test and post-test scores. This is an indication that the Mann-Whitney U test (non-parametric) has lower statistical power than its parametric counterpart (the independent t-test).
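The corresponding between-groups comparison could be sketched as follows; the two score lists are again hypothetical stand-ins for the pre-test and post-test groups.

from scipy import stats

pretest  = [26, 31, 35, 38, 40, 42, 45, 48, 52, 56]   # hypothetical pre-test group
posttest = [30, 34, 39, 41, 44, 47, 50, 53, 57, 59]   # hypothetical post-test group

# Two independent groups compared on their ranks.
u_stat, p = stats.mannwhitneyu(pretest, posttest, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p:.3f}")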

Non-parametric version of the single factor ANOVA (Kruskal-Wallis Test)

The mean systolic blood pressure is 124.77 with a standard deviation of 9.03. The mean diastolic blood pressure is 82.90 with a standard deviation of 2.83 (Table 6). The mean rank for systolic blood pressure taken at home is 14.25, whereas the mean ranks for systolic blood pressure taken at the doctor's office and in the classroom are 22.80 and 9.45 respectively (Table 7). The mean rank for diastolic blood pressure taken in the home setting is 15.75.

The mean ranks for diastolic blood pressure taken at the doctor's office and in the classroom setting are 16.30 and 14.45 respectively. This indicates that the mean ranks for both types of blood pressure taken at the doctor's office are higher than when blood pressure is taken in the classroom or at home.

A Kruskal-Wallis test was conducted to evaluate differences in systolic and diastolic blood pressure across the three settings (home, the doctor's office, and a classroom). There were significant differences in systolic blood pressure across the three settings, Chi-Square (2, N = 30) = 11.85, p < .05. However, there were no significant differences in diastolic blood pressure across the three settings, Chi-Square (2, N = 30) = .237, p > .05 (Table 8).

Sometimes a follow-up Mann-Whitney U test is conducted to identify exactly where the differences between the groups (in this case, the three settings) occur (Field, 2009). In this case, however, it is clear that systolic blood pressure differs most markedly at the doctor's office. This implies that when blood pressure is measured at the doctor's office, patients tend to record higher systolic readings than when it is taken in settings such as the home or a classroom, which people perceive as normal. This confirms that the participants of this study experienced white coat syndrome arising from the perception of being in an abnormal setting.
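A sketch of the same kind of three-group comparison, with hypothetical systolic readings for the home, doctor's office, and classroom settings, might look like this.

from scipy import stats

home      = [118, 120, 122, 119, 121, 117, 123, 120, 118, 122]   # hypothetical readings
doctors   = [130, 135, 128, 140, 132, 138, 129, 136, 141, 133]   # hypothetical readings
classroom = [115, 118, 117, 116, 119, 114, 120, 118, 116, 117]   # hypothetical readings

# Kruskal-Wallis H test across the three settings.
h_stat, p = stats.kruskal(home, doctors, classroom)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p:.3f}")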

A non-parametric version of the factorial ANOVA (Friedman Test)

A Friedman test was conducted to evaluate differences in mean math scores among grade 5 children in classes of different sizes (10 or fewer children, 11-19 children, and 20 or more children). The test was significant, Chi-Square (N = 60) = 60.00, p < .05 (Table 11). The mean rank for classroom size is 1, whereas the mean rank for math score is 2. The significant Friedman test implies that the alternative hypothesis, that there is a difference in math scores when children are in classrooms of different sizes, should be accepted. The large Chi-Square value also indicates that classroom size and math score are related variables: an increase or decrease in classroom size is associated with a significant change in the math scores attained by children of either gender. One would therefore expect different math scores in classes of 10 or fewer children compared with classes of 11-19 children or classes of 20 or more children.
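For reference, a generic sketch of the Friedman test call is shown below; SciPy's friedmanchisquare expects three or more related samples (for example, the same pupils measured under three conditions), so the lists are hypothetical repeated measurements rather than the classroom-size data analysed above.

from scipy import stats

condition_1 = [85, 90, 78, 92, 88, 75, 81, 95]   # hypothetical scores, condition 1
condition_2 = [88, 91, 80, 95, 90, 79, 84, 96]   # hypothetical scores, condition 2
condition_3 = [84, 89, 77, 90, 87, 74, 80, 93]   # hypothetical scores, condition 3

# Friedman test for three related (repeated) measurements on the same subjects.
chi2, p = stats.friedmanchisquare(condition_1, condition_2, condition_3)
print(f"Friedman Chi-Square = {chi2:.2f}, p = {p:.3f}")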

Contingency Tables

Table 12 shows that 927 (65.3%) of a total of 1419 cases were used to generate a contingency table between education (degree) and perception of life (life). Table 13 shows the observed and expected counts and percentages for the different life perceptions (exciting, routine, and dull) in relation to level of education (less than high school, high school, and junior college). The percentages in parentheses in this description of Table 13 are the shares within each life-perception column. The observed count of persons with less than a high school education who view life as exciting is 52 (12.0%), while the expected count is 77.0.

For individuals with less than a high school education who perceive life as routine, there is an observed count of 95 (21.3%) and an expected count of 79.1. Still within the group with less than a high school education, there is an observed count of 17 and an expected count of 8.0 for persons who view life as dull.

For individuals with a high school education, there is an observed count of 221 (50.8%) and an expected count of 226.7 for those who perceive life as exciting. There is an observed count of 242 (54.1%) and an expected count of 232.9 for persons with a high school education who perceive life as merely a routine. Of the 483 individuals with a high school education, there is an observed count of 20 (44.4%) and an expected count of 23.4 for those who view life as dull.

Among the 280 participants with a junior college education, there is an observed count of 162 (37.2%) and an expected count of 131.4 for persons who view life as exciting. An observed count of 110 (24.6%) and an expected count of 135.0 of individuals with a junior college education perceive life to be a routine. For those who view life as dull and have a junior college education, there is an observed count of 8 (17.8%) and an expected count of 13.6.

Table 14 shows that no cell has an expected count of less than 5. The Pearson Chi-Square value is 36.63 with 4 degrees of freedom and a 2-tailed probability below .001 (Pearson Chi-Square = 36.63, df = 4, p < .001). This means that there are significant differences in life perception among persons with different levels of education (less than high school, high school, and junior college).

The differences in life perception according to level of education are also displayed graphically in Figure 1, where the percentages are shares of the total 927 valid cases. The findings displayed in Figure 1 affirm the results of the 3-by-3 contingency table, adding weight to the hypothesis that education and perception of life are dependent. Figure 1 shows that 52 (5.61%) of the participants with less than a high school education felt that life is exciting, compared with 221 (23.84%) of the participants with a high school education and 162 (17.48%) of those who had reached at least the junior college level of education. Similarly, 95 (10.25%) of the individuals with less than a high school education described life as routine, compared with 242 (26.11%) of those with a high school education and 110 (11.87%) of individuals with at least a junior college level of education.

Figure 1 also shows that 17 (1.83%) of individuals who had less than high school education felt that life is dull compared to 20 (2.16%) of individuals with a high school education and 8 (0.86%) of those who had at least junior college level of education.

In summary, high school graduates form the largest group of those who perceive life as exciting, followed by individuals with at least a junior college education, while individuals with less than a high school education form the smallest such group. The same pattern holds for the perception of life as routine: the largest number have a high school education, followed by junior college individuals, with those holding less than a high school education forming the smallest group. The smallest number of individuals who perceive life as dull have at least a junior college education, while the largest number of those who view life as dull have a high school education.

Since the Chi-Square test is statistically significant (Chi-Square = 36.63, 2-tailed p < .001), it is evident that people's views of life differ according to their level of education. The contingency table (Table 13) shows differences between the observed and expected counts of life perception at every education level, a clear indicator that education level and life perception are related (dependent). Furthermore, the Chi-Square value is very large, and the minimum expected count for the test is 7.96. On this basis, the null hypothesis (that education and perception of life are independent) is rejected in favour of the alternative hypothesis (that education and perception of life are dependent).

On paying closer attention to the contingency table (Table 13), it is evident that the smallest differences between observed and expected counts occur among the high school individuals in all three life-perception categories (5.7, 9.1, and 3.4 for the exciting, routine, and dull perceptions respectively). The differences between observed and expected counts increase for those with at least a junior college education (30.6, 25.0, and 5.6 for the exciting, routine, and dull perceptions respectively). Relatively large differences between expected and observed counts also appear among those with less than a high school education (25.0, 15.9, and 9.0 for the exciting, routine, and dull perceptions respectively).

It is these large differences between expected and observed counts across the education categories that produce the large Chi-Square value, indicating that education and life perception are related. The pattern is weakest for the high school group, where the differences are relatively small. Nevertheless, the overall conclusion is that perception of life and education are related, and the null hypothesis is therefore rejected.
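Because Table 13 reports the full set of observed counts, the crosstabulation analysis can be sketched directly from them; scipy.stats.chi2_contingency returns the Chi-Square statistic, p-value, degrees of freedom, and expected counts, which should approximately reproduce the values in Tables 13 and 14.

from scipy import stats

# Observed counts from Table 13 (rows: education level; columns: exciting, routine, dull).
observed = [
    [52, 95, 17],     # less than high school
    [221, 242, 20],   # high school
    [162, 110, 8],    # junior college or more
]

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"Chi-Square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# The expected array should match the expected counts in Table 13 (e.g., 77.0 in the first cell).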

Reference

Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Los Angeles: Sage. Web.

Appendix

Table 1: Descriptive Statistics for Creativity Pretest and Posttest Scores.

Descriptive Statistics
N Mean Std. Deviation Minimum Maximum
Creativity pre-test 40 40.15 8.304 26 56
Creativity post-test 40 43.35 9.598 20 59

Table 2: Wilcoxon Signed Ranks for Creativity Pretest and Posttest Scores.

Ranks
N Mean Rank Sum of Ranks
Creativity post-test - Creativity pre-test Negative Ranks 9a 15.67 141.00
Positive Ranks 28b 20.07 562.00
Ties 3c
Total 40
a. Creativity post-test < Creativity pre-test
b. Creativity post-test > Creativity pre-test
c. Creativity post-test = Creativity pre-test

Table 3: Wilcoxon Signed Ranks Test for Creativity Posttest and Pretest Scores.

Test Statisticsb
Creativity post-test - Creativity pre-test
Z -3.179a
Asymp. Sig. (2-tailed) .001
a. Based on negative ranks.
b. Wilcoxon Signed Ranks Test

Table 4: Mann-Whitney Ranks for Creativity Posttest and Pretest Scores.

Ranks
pre-test and post-test group N Mean Rank Sum of Ranks
creativity test score Pre-test scores 40 36.22 1449.00
Post-test scores 40 44.78 1791.00
Total 80

Table 5: Mann-Whitney U Test for Creativity Pretest and Posttest Scores.

Test Statisticsa
creativity test score
Mann-Whitney U 629.000
Wilcoxon W 1449.000
Z -1.647
Asymp. Sig. (2-tailed) .100
a. Grouping Variable: pre-test and post-test group

Table 6: Descriptive Statistics for Diastolic and Systolic Blood Pressure.

Descriptive Statistics
N Mean Std. Deviation Minimum Maximum
Systolic Blood Pressure 30 124.77 9.031 110 145
Diastolic Blood Pressure 30 82.90 2.833 78 90
Setting 30 2.00 .830 1 3

Table 7: Kruskal-Wallis Ranks for Systolic and Diastolic Blood Pressures in Different Settings (Home, Doctor's Office, and Classroom).

Ranks
Setting N Mean Rank
Systolic Blood Pressure Home (control) 10 14.25
Doctor's office 10 22.80
Classroom 10 9.45
Total 30
Diastolic Blood Pressure Home (control) 10 15.75
Doctor's office 10 16.30
Classroom 10 14.45
Total 30

Table 8: Kruskal-Wallis Test for Systolic and Diastolic Blood Pressures in Three Settings (Home, Doctor's Office, and Classroom).

Test Statisticsa,b
Systolic Blood Pressure Diastolic Blood Pressure
Chi-Square 11.851 .237
df 2 2
Asymp. Sig. .003 .888
a. Kruskal Wallis Test
b. Grouping Variable: Setting

Table 9: Descriptive Statistics for Math Score in Different Classroom Sizes (10 or less children, 11-19 children and 20 or more children).

Descriptive Statistics
N Mean Std. Deviation Minimum Maximum
Classroom size 60 2.00 .823 1 3
Math_Score 60 89.1833 5.92750 72.00 99.00

Table 10: Friedman Ranks for Math Score in Different Classroom Sizes (10 or less children, 11-19 children and 20 or more children).

Ranks
Mean Rank
Classroom size 1.00
Math_Score 2.00

Table 11: Friedman Test for Math Score in Different Classroom Sizes (10 or less children, 11-19 children and 20 or more children).

Test Statisticsa
N 60
Chi-Square 60.000
df 1
Asymp. Sig. .000
a. Friedman Test

Table 12: Number of Cases Processed for Degree*Is life Exciting or Dull.

Case Processing Summary
Cases
Valid Missing Total
N Percent N Percent N Percent
Degree * IS LIFE EXCITING OR DULL 927 65.3% 492 34.7% 1419 100.0%

Table 13: A Contingency Table for Degree*Life Experience.

Degree * IS LIFE EXCITING OR DULL Crosstabulation
IS LIFE EXCITING OR DULL Total
EXCITING ROUTINE DULL
Degree Less than high school Count 52 95 17 164
Expected Count 77.0 79.1 8.0 164.0
% within Degree 31.7% 57.9% 10.4% 100.0%
% within IS LIFE EXCITING OR DULL 12.0% 21.3% 37.8% 17.7%
% of Total 5.6% 10.2% 1.8% 17.7%
High school Count 221 242 20 483
Expected Count 226.7 232.9 23.4 483.0
% within Degree 45.8% 50.1% 4.1% 100.0%
% within IS LIFE EXCITING OR DULL 50.8% 54.1% 44.4% 52.1%
% of Total 23.8% 26.1% 2.2% 52.1%
Junior college or more Count 162 110 8 280
Expected Count 131.4 135.0 13.6 280.0
% within Degree 57.9% 39.3% 2.9% 100.0%
% within IS LIFE EXCITING OR DULL 37.2% 24.6% 17.8% 30.2%
% of Total 17.5% 11.9% .9% 30.2%
Total Count 435 447 45 927
Expected Count 435.0 447.0 45.0 927.0
% within Degree 46.9% 48.2% 4.9% 100.0%
% within IS LIFE EXCITING OR DULL 100.0% 100.0% 100.0% 100.0%
% of Total 46.9% 48.2% 4.9% 100.0%

Table 14: Chi-Square Tests for Education*Life Experience.

Chi-Square Tests
Value df Asymp. Sig. (2-sided)
Pearson Chi-Square 36.630a 4 .000
Likelihood Ratio 35.186 4 .000
Linear-by-Linear Association 33.630 1 .000
N of Valid Cases 927
a. 0 cells (.0%) have expected count less than 5. The minimum expected count is 7.96.
Figure 1: A clustered bar graph showing the relationship between education (degree) and life perception (exciting, routine, or dull).