Alameda Island: Community Assessment

An assessment is a structured way of identifying the problems, strengths, and needs of a community; community developers use it to make decisions and set objectives. A community assessment also aligns priorities and makes it easier to choose a course of action. As an exercise, it gathers information on the current concerns, strengths, and conditions of a community and of the families and children within it (County Health Status Report, 2004). Because it rests on a shared vision and collaborative partnership, a community assessment reviews local resources, assets, gaps, and barriers within the community (Alameda County Social Services Agency, 2005). It also considers the activities of residents and looks attentively at any emerging needs. This paper carries out an abbreviated community assessment of the Alameda Island community by describing core descriptors such as demographics, culture, history, values, and the physical environment of the community, among others.

Demographics and racial distribution

Alameda is a county in the state of California. It has 13 cities and a population of about 1.5 million. It covers an area of 739.02 square miles, and as of 2010 the US Census Bureau documented a density of 2,043.6 persons per square mile. I come from the Alameda Island community, which has a population of 72,259 (Schenker & Gentleman, 2002). According to the 2010 US Census Report, the community contains about 30,226 households and 17,863 families (Reinert, Carver, & Range, 2005). The population density of Alameda Island is around 2,583.3 persons per square kilometer, and there are 31,644 housing units. Generally, the report shows that the community is home to people of diverse racial backgrounds: whites, African Americans, Native Americans, and Asians make up about 56.95%, 6.21%, 0.67%, and 26.15% of the population respectively, while other races account for the remaining 6.13% (Hooker, Ciril, & Wicks, 2007).

Age and gender of residents

Of the 30,226 households in this community, 27.7% have children under the age of majority (18 years) living in them with other family members. The community also includes married couples living together, unmarried female householders, and people without family relations (Reinert, Carver, & Range, 2005). A closer look at the community database shows that the population of Alameda Island is spread across several age groups (Alameda County Social Services Agency, 2005). People under the age of 18 account for 21.5%, and those aged 18 to 24 for 7.0%. People aged 25 to 44 form the majority at 33.6%, while those aged 45 to 64 make up 24.6% (Schenker & Gentleman, 2002). The community also has a considerable number of older residents, with 13.3% aged 65 years or more. The median age is 38 years (County Health Status Report, 2004). The assessment shows that the number of women is slightly higher than that of men.

Physical setting of Alameda Island

The Island of Alameda lies in the northwestern part of Alameda County. It consists mainly of the original central section, with the former Naval Air Station at its western end (Reinert, Carver, & Range, 2005). Along the south shore lies Bay Farm Island, which adjoins the mainland. Residents now know the former Naval Air Station as Alameda Point, renamed after the demographic changes that followed the naval decommissioning. A lagoon separates the Alameda Island community from the south shore area (Hooker, Ciril, & Wicks, 2007). Physical and statistical surveys indicate that both the south shore and Alameda Point stand on artificial fill (Schenker & Gentleman, 2002).

Culture and values

Given the racial makeup of this community, it is understandable that its cultural beliefs are many. Culture is a set of learnt beliefs, practiced traditions, and guiding principles for collective or individual behavior shared among members of a particular group (Reinert, Carver, & Range, 2005). People in this community celebrate various festivals every year (Hooker, Ciril, & Wicks, 2007). For instance, according to the Alameda Arts Council, Art in the Park is an annual cultural event featuring more than 100 local artists. It is subdivided into a children's activity area and a music section and takes place at Encinal and Park Avenues, otherwise known as Jackson Park. Admission to Art in the Park is free, and the event is held in late summer with poetry readings, food, and art demonstrations (Alameda County Social Services Agency, 2005). Residents value honor and gathering, as events such as Shining Stars in the Arts bring people together and award those with outstanding contributions to the development of the community (Schenker & Gentleman, 2002).

Economics and healthcare availability

The community of Alameda Island depends heavily on Alameda Point, its theaters, and its wine and spirit producers. The recent approval of a bond measure that funded a new library, replacing the Carnegie library damaged by the Loma Prieta earthquake, indicates the community's economic development. Today, Alameda Island collects revenue from its community-owned library (Hooker, Ciril, & Wicks, 2007). Furthermore, the Naval Air Station became a civilian development project after decommissioning. In September 2010, the US Veterans Administration proposed constructing a $209 million modern facility at Alameda Point. This facility would offer services such as rehabilitation for drug abuse and other addictions, mental health services, and primary and specialty care (County Health Status Report, 2004). There are also theaters and a number of wine and spirit cellars, such as Rock Wall Winery and Rosenblum Cellars Winery. The community relies heavily on the Alameda County Medical Center, which is based in Oakland and typically stays open for long hours (Reinert, Carver, & Range, 2005).

Community health problems

This community experiences a number of health-related problems. For example, healthcare facilities are limited: the Alameda County Medical Center serves a large population drawn from the 13 cities of Alameda County (Reinert, Carver, & Range, 2005). Demographic observations reveal that over 700 children start their lives in poverty every year. Moreover, according to the scientific consensus of 2010, many adult diseases are rooted in adverse conditions, including experiences during pregnancy (Healthy People, 2003). Stressors over the course of life are another factor contributing to the cumulative impact of early-life health complications. An extensive body of evidence links intrauterine and early-life experiences with a variety of health impairments, including chronic pulmonary disease, obesity, mental health problems, drug abuse, cancer, depression, alcoholism, and cardiovascular risk factors. Risk behaviors and conditions such as alcoholism, drug and other substance abuse, depression, and cardiovascular risk factors form the largest proportion of hazards among residents of this community (Hooker, Ciril, & Wicks, 2007).

Socioeconomic characteristics and problems

A closer look at the US County QuickFacts shows that unfair disadvantage is evident on Alameda Island. Documented evidence holds that reducing disadvantage in early life can help minimize disparities and other chronic conditions among people (County Health Status Report, 2004). In this community, structural conditions have concentrated resources, opportunities for health, and job vacancies in certain areas. Health indicators such as infant mortality, morbidity, and newborn weight show that socioeconomic characteristics differ among the races that make up Alameda Island (Schenker & Gentleman, 2002). Both morbidity and mortality statistics indicate that babies born to white families weigh more than babies born to African American families (Hooker, Ciril, & Wicks, 2007). This disparity reflects differing health conditions between whites and blacks residing on Alameda Island. Facts show that black children face adverse health circumstances that mount over the course of their lives (Alameda County Social Services Agency, 2005). Apart from health inequality, the community of Alameda Island faces other chronic problems such as drug abuse, alcoholism, and depression among both the old and the young.

Analysis of the problems

Statistical study shows that the lack of adequate healthcare facilities contributes to a high mortality rate among residents. As indicated earlier, the Alameda County Medical Center serves a large number of people from 13 different cities. This workload is itself a factor affecting the provision of health services to residents of Alameda Island (Healthy People, 2003). A Life Course Perspective, an initiative organized by the Building Blocks Collaboration, found that the availability of abusive drugs, together with inadequate income-generating activities among the young generation, are the leading factors behind drug and substance abuse. The symposium indicated that low income and other related factors contribute to problems of depression (County Health Status Report, 2004).

Community strengths

Tangible observations reveal that the community of Alameda Island is indeed safe, as it records minimal cases of insecurity per annum. Furthermore, the community has a well-connected transportation network: vehicles access the island through three bridges, namely the High Street, Park Street, and Fruitvale Avenue bridges (Hooker, Ciril, & Wicks, 2007). It also has two one-way tubes connecting Alameda Island with Oakland's Chinatown. Putting in place initiatives that reduce health inequality among residents, and imposing heavy fines and long-term imprisonment on those found guilty of dealing, trafficking, or using drugs, can help solve the community's problems (Healthy People, 2003). Organizations such as UNICEF and NACADA can intervene, the former helping to reduce health disparities and the latter to eradicate drug abuse (Alameda County Social Services Agency, 2005). The community appears motivated to resolve its problems, having worked tirelessly to vote in favor of constructing a community library. Additionally, the residents' role in changing the NAS into Alameda Point is a clear indication of the community's zeal to solve unemployment and poverty-related problems.

References

Alameda County Social Services Agency. (2005). Quality of Life Benchmarks Report 2005. Web.

County Health Status Report. (2004). Alameda County Public Health Department Community Assessment, Planning and Education Unit. Web.

Healthy People. (2003). National Health Promotion and Disease Prevention Objectives. US Department of Health and Human Services, Public Health Service. DHHS Publication No. (PHS) 91-50212. Government Printing Press: Washington, DC.

Hooker, S. P., Ciril, L. A., & Wicks, L. (2007). Walkable Neighborhoods for Seniors: The Alameda County Experience. Journal of Applied Gerontology, 26(4), 157-159.

Reinert, B., Carver, V., & Range, L. M. (2005). Evaluating community tobacco use prevention coalitions. Evaluation and Program Planning, 28(2), 201-208.

Schenker, N., & Gentleman, J. F. (2002). On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals. The American Statistician, 55(3), 182-186.

The Statistical Term Sample: Technical Definition

Biostatisticians use a multiplicity of statistical terms and concepts that help them organize numerical information in various formats, understand statistical techniques, and make informed decisions. These terms help professionals not only design, analyze, and interpret public health and medical studies, but also draw conclusions about the epidemiology of disease and health risks by applying mathematical models to the factors that affect health (Rosner, 2010). This paper offers a technical and a lay definition of the statistical term "sample" in order to develop an adequate understanding of the concept, its principles, and its applications.

The term sample is technically defined as "a subset of all the units of analysis which make up the population" (Watt & Berg, 2002, p. 121). This basically means that a sample is a smaller representation of the whole population, as the units that comprise it are taken from the larger population. For example, a researcher interested in evaluating the incidence of dental caries in a community of 2,000 residents may decide to study only 200 of them because of time constraints, financial limitations, and the inability to contact every resident. These 200 residents serve as the sample of the study because they represent the larger community. In lay terms, therefore, a sample can be described as the units or individuals a researcher selects for inclusion in a study because they represent the characteristics of a particular population.
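
As a minimal sketch of this idea (the resident IDs below are invented purely for illustration), drawing 200 of 2,000 residents by simple random sampling might look like this in Python:

```python
import random

# Hypothetical population: ID numbers for the 2,000 community residents
population = list(range(1, 2001))

# Draw a simple random sample of 200 residents without replacement
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, 200)

print(len(sample))                            # 200 residents in the sample
print(len(set(sample)))                       # all 200 are distinct units
print(all(r in population for r in sample))   # every sampled unit comes from the population
```

Sampling without replacement ensures each unit of analysis appears at most once, which matches the idea of a sample as a subset of the population.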

The selected sample must represent the population, meaning that it should contain the characteristics of the population so that the researcher can draw valid conclusions or inferences about the population of interest (Watt & Berg, 2002). If the sample in the dental-caries example described above is not representative, the researcher may end up making wrong conclusions about the factors that cause the community to experience a high incidence of dental caries. It is therefore important for biostatisticians to evaluate the sample distribution, defined in the literature as "a statement of the frequency with which the units of analysis or cases that together make up a sample are actually observed in the various classes or categories that make up a variable" (Watt & Berg, 2002, p. 121). Assessing the sample distribution enables the researcher to make an informed decision on whether the selected sample can be used as a valid representation of the population.
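
To illustrate what a sample distribution looks like in practice, the following sketch tabulates the frequency of each category among sampled cases; the caries figures (a 70/30 split in the population) are made up for illustration:

```python
import random
from collections import Counter

# Hypothetical population of 2,000 residents: 1,400 without caries, 600 with
population = ["no caries"] * 1400 + ["caries"] * 600

random.seed(1)
sample = random.sample(population, 200)

# The sample distribution: how often each category is observed in the sample
distribution = Counter(sample)
for category in ("no caries", "caries"):
    proportion = distribution[category] / len(sample)
    print(category, proportion)  # compare with the population's 0.70 and 0.30
```

If the sample proportions sit close to the population's 0.70/0.30 split, the sample distribution supports treating the sample as representative for this variable.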

Although many statistical techniques can be used to select a sample from the population, the researcher must always ensure that the technique used can provide a valid and representative sample (Rosner, 2010). In other words, a sample can be limited by the techniques and strategies used to select cases or units for participation in a study. Researchers who use simple random sampling to select participants for the dental caries study, for example, are likely to obtain a more representative sample than those who use convenience sampling, because the former technique gives all community members an equal chance or probability of selection. Lastly, it is important to address sampling error and the sample confidence level to ensure that the inferences or conclusions drawn from the sample portray the true picture on the ground.
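
The contrast between the two techniques can be sketched with invented data in which caries cases cluster at the top of the sampling frame: a convenience sample taken from that end of the list overstates the prevalence, while a simple random sample does not.

```python
import random

# Hypothetical community of 2,000 residents; suppose caries cases are
# concentrated in one neighbourhood listed first in the sampling frame
population = [1] * 400 + [0] * 1600  # 1 = has caries (20% overall)

# Convenience sample: just the first 200 residents on the list
convenience = population[:200]

# Simple random sample: every resident has an equal chance of selection
random.seed(7)
srs = random.sample(population, 200)

print(sum(convenience) / 200)  # 1.0 -- wildly overstates the true 20% prevalence
print(sum(srs) / 200)          # close to the true prevalence of 0.20
```

The convenience sample's estimate is driven entirely by where the list happens to start, which is exactly the kind of selection bias simple random sampling is designed to avoid.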

References

Rosner, B. (2010). Fundamentals of biostatistics (7th ed.). Boston, MA: Cengage Learning.

Watt, J.H., & Berg, S.V.D. (2002). Research methods for communication science (2nd ed.). Boston, MA: Allyn & Bacon.

Estimation for the Poisson Distribution

Biostatisticians often experience research-based situations where they are expected to observe the counts of events that occur within a set unit of time, such as the number of reported cases of cholera in different cities or the number of children born per hour in a given day. In such contexts, biostatisticians may use the Poisson distribution to estimate whether these events occur randomly in time or space (Pagano & Gauvreau, 2000). This paper provides a technical definition and a general description of the Poisson distribution and how estimations for the Poisson distribution are made.

The Poisson distribution is technically defined as "a discrete probability distribution for the counts of events that occur randomly in a given interval of time (or space)" (Rosner, 2015, p. 86). Researchers have observed that most Poisson distributions are unimodal and display a positive skew that decreases as the mean number of events per interval increases. Additionally, Poisson distributions are not only centered roughly on the mean number of events per interval, but their variance or spread also expands as the mean number of events per interval increases (Pagano & Gauvreau, 2000). Poisson distributions assume that (1) the likelihood that an event occurs in the given time or space interval is proportional to the length of the interval, (2) an infinite number of occurrences or events can occur in the specified interval, and (3) events occur independently at a mean number of events per interval (Rosner, 2015). Based on the above information, the probability of observing exactly x events in an interval is estimated as P(X = x) = (λ^x · e^(−λ)) / x!, where X is the number of events in a given interval, λ is the mean number of events per interval, e is the mathematical constant ≈ 2.718282, and x is the observed count of events (Rosner, 2015, pp. 86-87).
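
The formula can be evaluated directly in a few lines; in the sketch below, the mean of 3 events per interval is an arbitrary choice for illustration, and the probabilities are checked to sum to one:

```python
import math

def poisson_pmf(x, lam):
    """Probability of observing exactly x events when the mean number
    of events per interval is lam: P(X = x) = lam**x * e**(-lam) / x!"""
    return (lam ** x) * math.exp(-lam) / math.factorial(x)

lam = 3  # assumed mean number of events per interval
probs = [poisson_pmf(x, lam) for x in range(20)]

print(round(poisson_pmf(0, lam), 4))  # P(X = 0) = e**(-3), about 0.0498
print(round(sum(probs), 4))           # probabilities over all counts sum to about 1
```

Because the distribution is discrete, summing the probability mass over the possible counts recovers (essentially) one, which is a useful sanity check on any hand-rolled implementation.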

In general terms, the Poisson distribution can be described as a probability distribution for a discrete count: it gives the likelihood of a specified number of events happening in a predetermined interval of time and/or space when the events occur at a known average rate and independently of the time since the last event. For example, a public health official may use records to show that the department receives an average of 20 complaints per day from citizens unhappy with the public water system. If the complaints come from a broad range of sources and arrive independently of one another, it can reasonably be assumed that the number of complaints received per day follows a Poisson distribution, since any number of complaints is possible in a given day. From this description and example, it is clear that public health officials can use estimations from the Poisson distribution to evaluate the likelihood of a set of independent events or experiences taking place in a predetermined time frame and/or space.
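
Continuing the complaints illustration, the same formula gives the chance of an unusually busy day; the threshold of 30 complaints used below is an assumed figure, not part of the example:

```python
import math

def poisson_pmf(x, lam):
    # P(X = x) = lam**x * e**(-lam) / x! for a Poisson-distributed count
    return (lam ** x) * math.exp(-lam) / math.factorial(x)

lam = 20  # mean number of complaints per day from the example

# Probability of receiving exactly the average number of complaints
print(round(poisson_pmf(20, lam), 3))  # about 0.089

# Probability of an unusually busy day with 30 or more complaints:
# the complement of receiving 0 through 29 complaints
p_busy = 1 - sum(poisson_pmf(x, lam) for x in range(30))
print(round(p_busy, 3))  # a small tail probability, on the order of 0.02
```

Note that even the single most likely count (the mean itself) carries under 9% of the probability, which is why officials reason about ranges of counts rather than exact values.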

Overall, it can be concluded that the Poisson distribution is a useful application for statisticians when it comes to estimating the likelihood of a number of autonomous events occurring in daily research settings and experimental contexts. Although the length of the interval or time-frame is of immense importance in determining the probability of an event occurring, it is important to remember that an infinite number of events or incidences can take place in any specified time-frame.

References

Pagano, M., & Gauvreau, K. (2000). Principles of biostatistics (2nd ed.). California: Duxbury Press.

Rosner, B. (2015). Fundamentals of biostatistics (8th ed.). Boston, MA: Cengage Learning.

Critical Appraisal Tool of Research Work

Introduction

The application of critical appraisal tools to evaluate the validity of research conclusions has gained acceptance in modern research settings. Appraisal tools are mainly used in medicine and allied professions, where evidence-based practice is of utmost importance to patients. Critical appraisal guidelines often take the form of a checklist used to assess a particular study, and they assess whether results can be applied to a similar but larger setting (Crombie 1996). They offer a straightforward and comprehensive checklist for evaluating the rigor and reliability of a research paper, focusing on the authors, the objectives of the study, the research question, the methodology, the data analysis, the discussion, and the recommendations. This paper will critically examine an appraisal tool in terms of its completeness and applicability. The paper will also discuss the limitations of the appraisal tool and the limitations of the paper that the tool fails to identify. It will conclude by analyzing the applicability of the tool to the research approach in the paper.

Critical examination of the appraisal tool is fundamental because it enables assessment of the relevance and quality of the study and helps identify information redundancy and overload. Critical appraisal of any research paper is necessary for a complete assessment of its results; appraisal tools also evaluate the strengths and limitations of the research methodology, so that any bias can be detected and the necessary corrective measures taken.

A rating scale for the appraisal of research paper

The appraisal tool did not provide a comprehensive approach to the analysis of the paper. For example, its rating scale has only three responses: yes, no, and not clear (Dzewaltowski et al. 2009), which means it relies on absolute responses only. However, the tool included a section on the validity of the topic in question and proved dependable in that regard. The questions are not specific to the research paper being appraised, and there is substantial inconsistency in the items the tool covers. Most questions read like a summary of the expected outcome, and it is difficult to tell what each question is testing; the objective of the tool appears to change from one question to the next.

Limitations of the appraisal tool

Several limitations of the paper are not identified by the appraisal tool. There is no clear statement in the article describing the objectives of the study, yet the research objectives have a bearing on its outcome. The study also fails to describe the research question sufficiently; without a research question, the course of research may be diverted. The article does not provide a thorough review of the literature, which is an important component of scientific studies: it enables the consumer of research to compare the findings of the current research with existing data. The articles that were reviewed were not thoroughly analyzed; a thorough analysis would have shown the methods used by previous researchers in the field and enabled the researchers to judge whether their study brought new evidence to the field or strengthened existing research. The appraisal tool failed to point out this problem; it should have contained a section focusing on this information.

Though the paper reports the sample size, it does not state whether a pre-study sample size calculation was performed. Sixteen schools with a population of 1,582 students were sampled (Dzewaltowski et al. 2009), but how the researchers arrived at this sample size is not reported, and the appraisal tool has no section to test this information. This information is useful to the consumer because it indicates whether the sample was representative of the whole population, which is particularly important when making generalizations.

Applicability of the appraisal tool

This appraisal tool has no section on potential benefits and harm to subjects. Potential harm and benefits should be reported in research: doing so enables subjects to make informed choices before participating in the study, and the section is also required when research is intended to be used clinically. Clinicians should review this section before implementing any research findings. Because this information is missing, the clinical implications of the study are unclear.

The appraisal tool is generally inadequate. It lacks crucial sections that would support its clinical reliability; for example, it fails to identify the limitations of the study. The study itself does not explicitly describe the problem being addressed, nor does it have a formulated question backed by an appropriate case study. The study is more qualitative than quantitative, and the appraisal tool is not entirely suitable for a qualitative study. As a result, the findings of the study are not depicted as fully credible (Chalmers et al. 1981).

The findings of the research do not seem relevant even to the selected population, and the conclusions drawn may not be transferable to other study samples. The sample population comprises members of different races, so it is difficult to gauge whether results from this study can be relied on when dealing with subjects drawn from a single race. The study was conducted in a developed country, so its conclusions may not apply to subjects living in a developing country. The research is not recent, which might pose challenges to its current relevance, and the paper was not published in a major journal, which limits the reliability of its findings. The conclusions are haphazard and do not interpret the results accurately (Graham, Calder, Hebert, Carter & Tetroe 2000). Additional analysis to determine whether other variables could have affected outcomes is not reported in the study, and the appraisal tool has no section to test this.

Application of research to practice

The application of research findings to clinical practice is governed by several factors. Before any research findings are used to make decisions in clinical settings, they should be subjected to rigorous testing (Campbell, Hotchkiss, Bradshaw & Porteous 1998). New evidence should be appraised using an appropriate tool that focuses on the relevance of the topic, the objective of the study, the research question, the results, and the recommendations. The research should answer a current clinical problem, and the sample population should share features with the patients expected to benefit from the study; generalizations can only be made when the patients are likely to benefit from an intervention. The study being appraised appears not to have met most of these requirements. For example, it does not report on potential harm to the subjects, which makes the findings suspect, although it should be noted that the study concerns behavior modification and did not employ any invasive techniques. Because of this, the findings cannot readily be applied to practice. The research does not appear to solve any clinical problem concisely, which makes it difficult for the consumer of research to judge whether the results are credible. The article was also not published in a leading journal, which compounds the problem; had it appeared in a reputable journal, the consumer of research would have less difficulty adopting the findings into practice.

Another impediment to its application is the absence of sound recommendations and of a statement of the limitations faced during the study. The study does not end, as is the practice in scientific research, by recommending a follow-up study to validate its findings. Because the authors did not state the limitations they faced, the reader cannot know the circumstances surrounding the study; depending on the type of research, those circumstances may act as confounders.

The study gives conclusions and recommendations at the end, which is common research practice. However, in research with clinical implications, a clinical pathway should be offered (Nyberg & Marschke 1998). The paper should end with clear and concise recommendations that can be adopted, listed from the most important to the least important. The level of evidence should also be listed alongside each recommendation, so that the reader can gauge the extent to which a recommendation was validated.

Another challenge to the application of research to practice is time. Research is conducted within a certain time frame, and once that time has elapsed the findings remain relevant for only a limited period, because the characteristics of the target population change over time. Past research may therefore not apply to current situations. The research under review was conducted several years ago, so its findings may no longer apply. As years pass, old research loses value as new evidence emerges; findings are usually regarded as current only if no newer research has been conducted, and findings older than ten years are generally considered unreliable.

Conclusion

In conclusion, the application of critical appraisal tools to evaluate the validity of research conclusions has gained acceptance in modern research settings. Appraisal tools are used in clinical settings to test the validity, reliability, and applicability of research findings. The appraisal tool used to appraise the article had several limitations and lacked certain important sections. For example, the tool did not critically examine the methodology, nor did it sufficiently interrogate the research question to establish its relevance to current problems. It also had no section to critically appraise the results and the data analysis methods. The limitations of the paper not identified by the tool include the lack of a concise objective and research question; the paper also did not report on potential benefits and harm ascribed to the research. Generally, the research is difficult to implement because of several factors, including unclear recommendations; recommendations should be presented alongside their level of evidence.

References

Campbell, H, Hotchkiss, R, Bradshaw, N & Porteous, M 1998, 'Integrated care pathways', Br Med J, vol. 316, pp. 133-137.

Chalmers, T et al. 1981, 'A method for assessing the quality of a randomized control trial', Control Clin Trials, vol. 2, no. 1, pp. 31-49. Web.

Crombie, I 1996, The pocket guide to critical appraisal, BMJ. Web.

Dzewaltowski, D et al. 2009, 'Healthy Youth Places: A Randomized Controlled Trial to Determine the Effectiveness of Facilitating Adult and Youth Leaders to Promote Physical Activity and Fruit and Vegetable Consumption in Middle Schools', Health Education and Behavior, vol. 36, no. 3, pp. 583-600.

Graham, I, Calder, L, Hebert, P, Carter, A & Tetroe, J 2000, 'A comparison of clinical practice guideline appraisal instruments', Int J Technol Assess Health Care, vol. 16, pp. 1024-1038.

Nyberg, D & Marschke, P 1998, 'Critical pathways: tools for continuous quality improvement', Nurs Adm Q, vol. 17, pp. 62-69.

Modeling Weather Data of Australian Meteorology Bureau

This paper reports the results of modeling weather and climate data obtained from the Australian Bureau of Meteorology. The goal is to predict whether it will rain tomorrow using a decision tree model and fourteen months of meteorological variables. The data were organized in Excel and then modeled with the Rattle and R software packages. Predicting the weather, and specifically whether it will rain tomorrow, is significant because it helps with planning in advance. The results of the model suggest a high probability that it will rain tomorrow.

Introduction

Data mining basically involves building data models, which can help us predict the future behavior of certain things (Williams, 2011). Data modeling turns data into a structured form that mirrors the data in a valuable way. This paper reports the results of modeling some weather and climate data obtained from the Australian Government's Bureau of Meteorology website.

Goal and objectives of experiment

The goal of this experiment is to predict whether it will rain tomorrow using some meteorological variables and the respective data recorded for the last fourteen months.

The specific objectives of the experiment are as follows:

  1. To determine whether the rainfall for the following day exceeds 1 millimetre (mm).
  2. To determine whether the rainfall for the following day is at or below 1 millimetre (mm).

Hypothesis of experiment

The null hypothesis of this experiment is that it will not rain tomorrow. The alternative hypothesis is that it will rain tomorrow.

Methodology

This experiment first involved obtaining weather and climate data from the Bureau of Meteorology website. Data from the Canberra meteorological station (station number 070351), located at 35.30°S, 149.13°E, were selected.

Data for the last fourteen months was obtained and used in developing the model and prediction. The data was categorized into the following variables:

  • Date,
  • Minimum temperature (°C),
  • Maximum temperature (°C),
  • Rainfall (mm),
  • Evaporation (mm),
  • Sunshine (hours),
  • Direction of maximum wind gust,
  • Speed of maximum wind gust (km/h),
  • Time of maximum wind gust,
  • 9am Temperature (°C),
  • 9am relative humidity (%),
  • 9am cloud amount (oktas),
  • 9am wind direction,
  • 9am wind speed (km/h),
  • 9am MSL pressure (hPa),
  • 3pm Temperature (°C),
  • 3pm relative humidity (%),
  • 3pm cloud amount (oktas),
  • 3pm wind direction,
  • 3pm wind speed (km/h),
  • 3pm MSL pressure (hPa).

Since this weather and climate data was spread across several spreadsheets, Microsoft Excel was used to organize it into a single spreadsheet. In addition, three derived variables were added in Excel to the existing variables to facilitate the data modeling in Rattle. These were the Rain Tomorrow variable, whose values were treated as the prediction target, together with the Rain Today and Risk MM variables.
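The logic behind these derived variables can be sketched in plain Python (a hypothetical reconstruction of the Excel logic; the sample rainfall values are invented for illustration, and the 1 mm threshold follows the objectives stated above):

```python
# Daily rainfall in mm (hypothetical values; the real data span fourteen months).
rainfall = [0.0, 3.2, 0.8, 1.4, 0.0]

# Rain Today: did more than 1 mm of rain fall on the day itself?
rain_today = ["yes" if r > 1.0 else "no" for r in rainfall]

# Risk MM: the amount of rain recorded for the following day (None on the last day).
risk_mm = rainfall[1:] + [None]

# Rain Tomorrow: the prediction target - does tomorrow's rainfall exceed 1 mm?
rain_tomorrow = ["yes" if r is not None and r > 1.0 else "no" for r in risk_mm]

print(list(zip(rainfall, rain_today, risk_mm, rain_tomorrow)))
```

Each day's Rain Tomorrow value is simply the following day's rainfall thresholded at 1 mm, which is why the last day of the series cannot be labeled.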

In the Rattle data panel, the Date variable was set as an identifier, the Rain Tomorrow variable as the target, and the Risk MM variable as the risk variable. The other variables were set as input variables. These were a mixture of categoric and numeric data types, except the Date variable, which was of identifier type.

The Rattle software package, which runs on the R platform, was used to manipulate and model the weather and climate data. The data was loaded and the necessary variables selected, after which the data was explored, tested, transformed, and then modeled. The resulting output diagram (decision tree) was copied for presentation in this report.
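For readers unfamiliar with Rattle, an analogous workflow can be approximated in Python with scikit-learn. This is a hypothetical sketch, not the original rpart model: the toy feature values below are invented, and only two of the many input variables are used for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy feature matrix: [3pm relative humidity (%), 9am wind speed (km/h)]
X = [
    [30, 11], [35, 13], [40, 22],   # drier afternoons -> no rain next day
    [55, 7],  [60, 9],  [70, 2],    # humid afternoons -> rain next day
]
y = ["no", "no", "no", "yes", "yes", "yes"]

# Fit a small classification tree, analogous to rpart's "class" method
# with the information (entropy) splitting criterion.
model = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
model.fit(X, y)

# Predict Rain Tomorrow for a humid afternoon with light morning wind.
print(model.predict([[65, 5]])[0])
```

The `criterion="entropy"` choice mirrors the `split = "information"` parameter passed to rpart in the model summary reported below.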

Results

A summary of the decision tree model for classification was output by the Rattle software (built using rpart) as shown below.

n= 294

node), split, n, loss, yval, (yprob)

* denotes terminal node

1) root 294 66 no (0.7755102 0.2244898)
  2) Time.of.maximum.wind.gust=0:08,0:13,0:17,0:38,0:39,0:53,1:04,1:07,1:29,1:42,1:50,10:06,10:13,10:14,10:33,10:35,10:55,10:56,10:57,11:05,11:06,11:11,11:14,11:18,11:20,11:23,11:25,11:27,11:35,11:43,11:53,11:56,11:57,12:00,12:02,12:08,12:11,12:12,12:21,12:23,12:28,12:30,12:32,12:33,12:34,12:41,12:43,12:44,12:45,12:46,12:47,12:49,12:50,12:51,12:55,12:56,13:01,13:03,13:04,13:07,13:08,13:09,13:12,13:13,13:20,13:22,13:23,13:24,13:26,13:27,13:29,13:32,13:33,13:34,13:35,13:39,13:40,13:42,13:43,13:44,13:47,13:51,13:52,13:54,13:55,13:57,13:58,14:00,14:02,14:10,14:13,14:22,14:23,14:28,14:29,14:42,14:48,14:52,14:53,14:58,15:01,15:02,15:05,15:06,15:07,15:08,15:09,15:11,15:15,15:16,15:18,15:23,15:26,15:28,15:29,15:30,15:31,15:32,15:34,15:37,15:38,15:45,15:53,15:54,16:08,16:13,16:16,16:24,16:36,16:43,16:44,17:04,17:15,17:20,17:21,17:28,17:30,17:32,17:41,18:12,18:18,18:20,18:28,18:33,18:46,18:51,19:29,19:32,19:43,19:52,2:04,2:07,2:13,2:37,20:10,20:15,20:19,20:51,22:13,22:47,23:11,23:13,3:11,3:20,3:48,4:19,5:29,5:35,6:20,7:20,9:44,9:52,9:53,9:57 201 0 no (1.0000000 0.0000000) *
  3) Time.of.maximum.wind.gust=0:42,0:43,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:48,12:54,12:59,13:06,13:11,13:18,13:21,13:31,14:12,14:20,14:27,14:33,14:35,14:36,14:39,14:43,14:55,14:56,14:57,15:00,15:20,15:24,15:33,15:39,15:49,15:50,16:06,16:11,16:19,16:26,16:48,17:17,17:34,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,21:15,22:03,22:36,5:03,6:57,7:28 83 20 yes (0.2409639 0.7590361)
    6) Time.of.maximum.wind.gust=0:43,12:48,13:21,14:27,14:33,14:39,14:55,14:56,15:20,15:24,15:50,16:06,16:11,17:17,17:34,21:15 36 16 no (0.5555556 0.4444444)
      12) X9am.wind.speed..km.h.=11,13,22,33,4 9 0 no (1.0000000 0.0000000) *
      13) X9am.wind.speed..km.h.=15,19,2,31,6,7,9,Calm 27 11 yes (0.4074074 0.5925926)
        26) X3pm.relative.humidity..< 48.5 16 5 no (0.6875000 0.3125000) *
        27) X3pm.relative.humidity..>=48.5 10 0 yes (0.0000000 1.0000000) *
    7) Time.of.maximum.wind.gust=0:42,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:54,12:59,13:06,13:11,13:18,13:31,14:12,14:20,14:35,14:36,14:43,14:57,15:00,15:33,15:39,15:49,16:19,16:26,16:48,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,22:03,22:36,5:03,6:57,7:28 47 0 yes (0.0000000 1.0000000) *

Classification tree:

rpart(formula = RainTomorrow ~ ., data = crs$dataset[crs$train,
    c(crs$input, crs$target)], method = "class", parms = list(split = "information"),
    control = rpart.control(usesurrogate = 0, maxsurrogate = 0))

Variables actually used in tree construction:

[1] Time.of.maximum.wind.gust X3pm.relative.humidity..
[3] X9am.wind.speed..km.h.
Root node error: 66/294 = 0.22449

n= 294

CP nsplit rel error xerror xstd

1 0.696970 0 1.000000 1.0000 0.10840
2 0.075758 1 0.303030 1.9848 0.12913
3 0.010000 4 0.075758 1.9394 0.12881
Time taken: 0.60 secs
Rattle timestamp: 2011-07-26 10:15:10

======================================================================

The visual presentation of this output is shown below.

Figure 1: A visual representation of the decision tree model of the weather/climate data

Rules

The rules for this modeling were output as follows:

Tree as rules

Rule number: 7 [RainTomorrow=yes cover=47 (16%) prob=1.00]

Time.of.maximum.wind.gust=0:42,0:43,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:48,12:54,12:59,13:06,13:11,13:18,13:21,13:31,14:12,14:20,14:27,14:33,14:35,14:36,14:39,14:43,14:55,14:56,14:57,15:00,15:20,15:24,15:33,15:39,15:49,15:50,16:06,16:11,16:19,16:26,16:48,17:17,17:34,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,21:15,22:03,22:36,5:03,6:57,7:28
Time.of.maximum.wind.gust=0:42,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:54,12:59,13:06,13:11,13:18,13:31,14:12,14:20,14:35,14:36,14:43,14:57,15:00,15:33,15:39,15:49,16:19,16:26,16:48,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,22:03,22:36,5:03,6:57,7:28

Rule number: 27 [RainTomorrow=yes cover=10 (3%) prob=1.00]

Time.of.maximum.wind.gust=0:42,0:43,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:48,12:54,12:59,13:06,13:11,13:18,13:21,13:31,14:12,14:20,14:27,14:33,14:35,14:36,14:39,14:43,14:55,14:56,14:57,15:00,15:20,15:24,15:33,15:39,15:49,15:50,16:06,16:11,16:19,16:26,16:48,17:17,17:34,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,21:15,22:03,22:36,5:03,6:57,7:28
Time.of.maximum.wind.gust=0:43,12:48,13:21,14:27,14:33,14:39,14:55,14:56,15:20,15:24,15:50,16:06,16:11,17:17,17:34,21:15
X9am.wind.speed..km.h.=15,19,2,31,6,7,9,Calm
X3pm.relative.humidity..>=48.5

Rule number: 26 [RainTomorrow=no cover=16 (5%) prob=0.31]

Time.of.maximum.wind.gust=0:42,0:43,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:48,12:54,12:59,13:06,13:11,13:18,13:21,13:31,14:12,14:20,14:27,14:33,14:35,14:36,14:39,14:43,14:55,14:56,14:57,15:00,15:20,15:24,15:33,15:39,15:49,15:50,16:06,16:11,16:19,16:26,16:48,17:17,17:34,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,21:15,22:03,22:36,5:03,6:57,7:28
Time.of.maximum.wind.gust=0:43,12:48,13:21,14:27,14:33,14:39,14:55,14:56,15:20,15:24,15:50,16:06,16:11,17:17,17:34,21:15
X9am.wind.speed..km.h.=15,19,2,31,6,7,9,Calm
X3pm.relative.humidity..< 48.5

Rule number: 12 [RainTomorrow=no cover=9 (3%) prob=0.00]

Time.of.maximum.wind.gust=0:42,0:43,1:05,10:11,10:31,10:32,10:38,10:49,11:00,11:28,11:33,12:17,12:25,12:37,12:48,12:54,12:59,13:06,13:11,13:18,13:21,13:31,14:12,14:20,14:27,14:33,14:35,14:36,14:39,14:43,14:55,14:56,14:57,15:00,15:20,15:24,15:33,15:39,15:49,15:50,16:06,16:11,16:19,16:26,16:48,17:17,17:34,17:52,18:07,18:27,18:52,19:26,20:01,20:42,20:49,21:15,22:03,22:36,5:03,6:57,7:28
Time.of.maximum.wind.gust=0:43,12:48,13:21,14:27,14:33,14:39,14:55,14:56,15:20,15:24,15:50,16:06,16:11,17:17,17:34,21:15
X9am.wind.speed..km.h.=11,13,22,33,4

Rule number: 2 [RainTomorrow=no cover=201 (68%) prob=0.00]

Time.of.maximum.wind.gust=0:08,0:13,0:17,0:38,0:39,0:53,1:04,1:07,1:29,1:42,1:50,10:06,10:13,10:14,10:33,10:35,10:55,10:56,10:57,11:05,11:06,11:11,11:14,11:18,11:20,11:23,11:25,11:27,11:35,11:43,11:53,11:56,11:57,12:00,12:02,12:08,12:11,12:12,12:21,12:23,12:28,12:30,12:32,12:33,12:34,12:41,12:43,12:44,12:45,12:46,12:47,12:49,12:50,12:51,12:55,12:56,13:01,13:03,13:04,13:07,13:08,13:09,13:12,13:13,13:20,13:22,13:23,13:24,13:26,13:27,13:29,13:32,13:33,13:34,13:35,13:39,13:40,13:42,13:43,13:44,13:47,13:51,13:52,13:54,13:55,13:57,13:58,14:00,14:02,14:10,14:13,14:22,14:23,14:28,14:29,14:42,14:48,14:52,14:53,14:58,15:01,15:02,15:05,15:06,15:07,15:08,15:09,15:11,15:15,15:16,15:18,15:23,15:26,15:28,15:29,15:30,15:31,15:32,15:34,15:37,15:38,15:45,15:53,15:54,16:08,16:13,16:16,16:24,16:36,16:43,16:44,17:04,17:15,17:20,17:21,17:28,17:30,17:32,17:41,18:12,18:18,18:20,18:28,18:33,18:46,18:51,19:29,19:32,19:43,19:52,2:04,2:07,2:13,2:37,20:10,20:15,20:19,20:51,22:13,22:47,23:11,23:13,3:11,3:20,3:48,4:19,5:29,5:35,6:20,7:20,9:44,9:52,9:53,9:57

Data analysis and conclusion

Rule number 27, which corresponds to node 27 in Figure 1 above, is the strongest rule for predicting rain because it has the highest probability. The rule reads that if the wind speed at 9am is 15, 19, 2, 31, 6, 7, or 9 kilometers per hour, or the wind is calm, and the relative humidity at 3pm is greater than or equal to 48.5%, then it is very likely that it will rain tomorrow.

On the contrary, rule number 26 reads that if the wind speed at 9am is 15, 19, 2, 31, 6, 7, or 9 kilometers per hour, or the wind is calm, and the relative humidity at 3pm is less than 48.5%, then there is a 68.75% chance that it will not rain tomorrow.
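These probabilities follow directly from the class counts reported at each node of the tree. A quick check against node 26 of the model output above:

```python
# Node 26: 16 observations, of which 5 are misclassified ("yes") against
# the node's majority class "no" (the "loss" column in the rpart output).
n_obs, n_loss = 16, 5

prob_no = (n_obs - n_loss) / n_obs   # probability of the majority class
prob_yes = n_loss / n_obs            # probability of the minority class

print(prob_no, prob_yes)  # matches (0.6875000 0.3125000) in the rpart output
```

The same arithmetic reproduces the (yprob) pair at every node, including the pure terminal nodes with probabilities of 0 and 1.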

Rules number 2, 7, and 12 are not used to predict whether it will rain tomorrow because they are based on fewer variables.

The conclusion based on rule 27 is that it will rain tomorrow given the preceding weather parameters; the estimated probability that it will rain tomorrow is 100%.

Reference list

Williams, G., 2011. Data mining with Rattle and R: the art of excavating data for knowledge discovery. New Mexico, United States: Springer.

Effects of Arsenic on Humans

Introduction

Arsenic is a naturally occurring element. It exists in both living things and minerals. Due to its toxicity and people's inability to notice its effects, it has been used for homicidal purposes for many years (ATSDR, 2014). This paper discusses the toxicological and epidemiological implications of arsenic by analyzing two studies and other sources on arsenic.

Exposure Pathways for Arsenic

The main exposure pathways for arsenic are food, water, and inhalation of contaminated air (Moeller, 2011). Children and pregnant women can also ingest arsenic by eating contaminated soil. Research shows that food, especially seafood, carries the largest amount of arsenic (Roza, 2009).

Industrial exposure occurs when employees in industries inhale arsine gas (Buttner & Muller, 2011). People encounter arsine gas when it leaks from pipes during transportation. Sometimes, exposure occurs during the treatment of ores that contain arsenic with acids (Gordis, 2009).

Description of the Two Studies

The research in Human and Experimental Toxicology looks at the toxic properties of arsenic and its effects on the body. It also addresses various exposure pathways of arsenic and the best methods of diagnosis and treatment. It explains the process by which pentavalent arsenic is converted into trivalent arsenic, the product of which is responsible for harming the body. In addition, it examines the process through which arsenic causes neurotoxicity.

The second study comes from The Journal of Exposure Science and Environmental Epidemiology. This study used the guidelines provided by the US Environmental Protection Agency (EPA) to determine the number of counties in the US whose wells contained excess arsenic. The researchers mostly used the EPA database to access information about the occurrence of arsenic in each county. Their analysis showed that the arsenic content of drinking water in 33 counties exceeded the limit set by the EPA (Frost, Muller, Petersen, Thomson & Tollestrup, 2003). Arsenic in drinking water is very dangerous because it can harm a very large population through food and other drinks prepared with that water (Wasserman et al., 2004).

Summary of the Two Studies

Aim

The aim of the study on arsenic neurotoxicity is to analyze the toxic properties of arsenic and the way its toxins damage body organs. The second study, on the other hand, aims at finding out the number of counties whose water contains excessive amounts of arsenic. The researchers wanted this information for epidemiological purposes.

Methods

The researchers who carried out the study on arsenic neurotoxicity reviewed studies done by other scholars, covering 85 sources that contained information on arsenic. The study on the concentration of arsenic in drinking water mainly used information from the EPA database.

Results

The results of the study on neurotoxicity indicated that arsenic is metabolized when pentavalent arsenic is converted into trivalent arsenic (Cooper, 2007). The trivalent form then disrupts the synthesis and repair of deoxyribonucleic acid (DNA) and the process of oxidative phosphorylation (Vahidnia, 2007).

Similarities Between the Toxicology and Epidemiology Methods

The similarity between the toxicology and epidemiology methods in the two studies is that both relied on information published by others. The toxicology study used books and journals published by other scholars, while the epidemiology study used information published by the EPA.

Differences Between Epidemiology and Toxicology Methods

The two studies differed in that the toxicology study reviewed different sources of information about arsenic, while the epidemiology study interviewed people in addition to using the information obtained from the EPA database.

Conclusion

Arsenic is a toxic element that occurs in many forms. It can cause cancer and other fatal illnesses in case of direct contact with it. People should ensure that their food and drinking water are free from arsenic.

TEST: 2

  • Question 1: A study finds that if rats are exposed to PCB and methylmercury together, the toxic effects of methylmercury are much greater than for rats exposed only to methylmercury. This means that in this study, PCB and methylmercury were found to be synergistic: TRUE
  • Question 2: The hallmark of a toxic response is that adverse effects are immediate: FALSE
  • Question 3: A control group in an environmental epidemiology study of whether dioxin exposure is associated with diabetes could be either those who were not exposed to dioxin or those who do not have diabetes: FALSE
  • Question 4: Metabolism of a chemical always reduces its toxicity: FALSE
  • Question 5: A study finds that maternal exposure to alcohol induces effects in offspring, such as facial abnormality and developmental delay. This study demonstrated that alcohol is a teratogen: TRUE
  • Question 6: The data we may get from an acute study will generally include all but which of the following? Carcinogenesis
  • Question 7: The study of toxicology is limited to understanding the effects of toxic agents on animals: FALSE
  • Question 8: Humans are exposed to trace quantities of toxic chemicals in food: TRUE
  • Question 9: One of the challenges of retrospective environmental epidemiology studies is determining just how much of the agent people were exposed to: TRUE
  • Question 10: The Ames test is a short-term in-vivo test that uses rats to determine whether a chemical is carcinogenic: FALSE

References

ATSDR. (2014). ATSDR Toxicological Profile: Arsenic. Web.

Buttner, P., & Muller, R. (2011). Epidemiology (1st ed.). South Melbourne, Vic.: Oxford University Press.

Cooper, C. (2007). Arsenic (1st ed.). New York: Marshall Cavendish Benchmark.

Frost, F., Muller, T., Petersen, H., Thomson, B., & Tollestrup, K. (2003). Identifying US populations for the study of health effects related to drinking water arsenic. Journal of Exposure Science and Environmental Epidemiology, 13(3), 231–239.

Gordis, L. (2009). Epidemiology (1st ed.). Philadelphia: Elsevier/Saunders.

Moeller, D. (2011). Environmental health (4th ed.). Cambridge, Mass.: Harvard University Press.

Roza, G. (2009). Arsenic (1st ed.). New York, NY: Rosen Pub. Group.

Vahidnia, A., Van der Voet, G., & De Wolff, F. (2007). Arsenic neurotoxicity: A review. Human & Experimental Toxicology, 26(10), 823–832.

Wasserman, G., Liu, X., Parvez, F., Ahsan, H., Factor-Litvak, P., & van Geen, A. et al. (2004). Water arsenic exposure and children's intellectual function in Araihazar, Bangladesh. Environmental Health Perspectives, 1329–1333.

Nonspherocytic Hemolytic Anemia due to Hexokinase Deficiency

Abstract

Nonspherocytic hemolytic anemia due to hexokinase deficiency is a hereditary disorder marked by the premature destruction of red blood cells. The disease occurs as a consequence of a deficiency in the hexokinase that is specific to erythrocytes. An inadequate amount of hexokinase in the red blood cells results from an alteration in the HK1 gene. Consequently, the glycolytic pathway is affected, leading to the inability of erythrocytes to withstand oxidative stress. The condition is transmitted in an autosomal recessive manner, implying that faulty genes from both parents are prerequisites for the development of the disease. Severe anemia is the hallmark of the condition, among other indicators such as pallor, fatigue, and jaundice. Diagnosis is made by performing a peripheral blood count and screening for the enzyme glucose-6-phosphate dehydrogenase. During management, folic acid supplementation is given to promote the regeneration of erythrocytes. In extreme cases of anemia, transfusion of red blood cells may be performed. Gene therapy is currently being investigated as a potential permanent cure for the condition.

Introduction

Nonspherocytic hemolytic anemia due to hexokinase deficiency is an uncommon disorder typified by serious, longstanding lysis of red blood cells (hemolysis). According to the National Institutes of Health (2013), about twenty cases of this disorder have been documented so far. The condition normally begins during infancy and is often referred to as congenital nonspherocytic hemolytic anemia. It is a consequence of a deficiency of the enzyme hexokinase (ATP: D-hexose 6-phosphotransferase) (Bianchi & Magnani, 1995). Tissues such as the kidney, brain, and erythrocytes depend heavily on glucose to perform their normal physiological functions (Karen, Wouter, Annet, Kersting, & Richard, 2009) and therefore require adequate amounts of hexokinase to function properly. For this reason, a deficiency of this vital enzyme affects the integrity and function of erythrocytes.

The indicators of nonspherocytic hemolytic anemia due to hexokinase deficiency (NSHA) closely resemble the signs of another condition referred to as pyruvate kinase deficiency; both diseases are related to defects in carbohydrate metabolism. The indicators of NSHA due to HK1 deficiency include anemia, numerous deformities, latent diabetes, and panmyelopathy (Mallouh, 2012). Infants often have elevated levels of bilirubin, a state known as hyperbilirubinemia (Becker, 2003). In addition, the skin becomes pale and may take on an unrelenting yellow coloration (jaundice). The spleen and liver may also enlarge abnormally (splenomegaly and hepatomegaly) (National Organization for Rare Diseases, 2014). It has also been established that the activity of hexokinase in the red blood cells declines to approximately 25% of normal values. There is also the possibility of gallbladder stones forming during the early stages of development.

Causes of Nonspherocytic Hemolytic Anemia due to Hexokinase Deficiency

A mutation in the HK1 (hexokinase 1) gene, which codes for the enzyme hexokinase-R, is responsible for the development of NSHA. Hexokinase-R is the hexokinase isoenzyme found exclusively in erythrocytes. The HK1 gene is situated on chromosome 10 at position q22 (HK1, 2014); therefore, its cytogenetic position is described as 10q22. The mutation occurs in the form of a deletion or substitution of a single nucleotide (Hexokinase 1, 2014), and the transformation can be homozygous or heterozygous. Consequently, the amino acid at position 529 changes from leucine to serine. This change is transmitted genetically in an autosomal recessive manner, so an individual can only suffer from the condition if he or she possesses a pair of defective genes; for this reason, men and women have equal chances of developing the disorder. The probability of two carrier parents producing offspring with the condition is about 0.25 with each pregnancy (National Organization for Rare Diseases, 2014). Inheriting one copy of the defective gene makes one a carrier of the disease without the manifestation of symptoms. It is thought that a large number of people carry defective genes; however, only closely related people are likely to possess the same defective genes, which raises the odds of producing children with the recessive genetic condition.
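The 0.25 figure follows from basic Mendelian probability, as a small sketch can show (here "A" denotes the normal allele and "a" the hypothetical defective allele):

```python
from itertools import product

# Each carrier parent has one normal allele ("A") and one defective allele ("a").
parent1 = ["A", "a"]
parent2 = ["A", "a"]

# Enumerate the four equally likely allele combinations a child can inherit.
offspring = list(product(parent1, parent2))

# The disease is autosomal recessive: only "aa" children are affected.
affected = [pair for pair in offspring if pair == ("a", "a")]
carriers = [pair for pair in offspring if sorted(pair) == ["A", "a"]]

print(len(affected) / len(offspring))  # 0.25 - affected with each pregnancy
print(len(carriers) / len(offspring))  # 0.50 - unaffected carriers
```

The enumeration also shows why half the children of two carriers are expected to be carriers themselves.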

In the absence (or with insufficient quantities) of hexokinase, the structure of the red blood cells becomes distorted from the normal biconcave discs to asymmetrical, non-spherical shapes that are easily damaged. Anemia comes about when the rate of destruction of red blood cells surpasses the rate of renewal of new cells.

The Metabolic Pathway Affected by Nonspherocytic Hemolytic Anemia due to Hexokinase Deficiency

Glycolysis is a vital process in the breakdown of glucose to yield energy. During this process, glucose is broken down via a series of ten chemical reactions to yield two molecules of pyruvic acid. The pyruvic acid generated from this reaction then enters the tricarboxylic acid cycle to produce reducing power that is converted into energy in the form of ATP by means of the electron transport chain. However, red blood cells lack mitochondria, so the tricarboxylic acid cycle and electron transport chain cannot take place in them. Therefore, two different pathways are used for energy production in erythrocytes: the Embden-Meyerhof (glycolysis) pathway and the hexose monophosphate shunt (Nayak, Rai, & Gupta, 2012). The majority of the energy used in erythrocytes is generated via the Embden-Meyerhof pathway, which supports the preservation of lipids as well as sodium and potassium balance. The hexose monophosphate shunt, on the other hand, produces only a tenth of the erythrocyte's energy requirements; however, this pathway is vital in preventing oxidative damage to erythrocytes. In the first step of glycolysis, glucose is phosphorylated by ATP to form glucose-6-phosphate in a reaction catalyzed by the enzyme hexokinase (Bianchi & Magnani, 1995). This reaction is a rate-limiting step in glucose metabolism, as the subsequent steps cannot proceed without it.
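The hexokinase-catalyzed step described above is conventionally summarized by the standard biochemical equation:

```latex
\text{Glucose} + \text{ATP} \xrightarrow{\ \text{hexokinase}\ } \text{Glucose-6-phosphate} + \text{ADP}
```

Because this is the entry point of glucose into both erythrocyte pathways, a deficient enzyme blocks everything downstream.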

Glucose-6-phosphate, the product of this first step of glycolysis, is also the first reactant in the hexose monophosphate shunt. Glucose-6-phosphate dehydrogenase, a key enzyme in this pathway, also serves as a scavenger of free radicals (Nayak, Rai, & Gupta, 2012). Deficient hexokinase therefore means that energy is not produced via glycolysis and that the hexose monophosphate shunt cannot take place. As a result, nicotinamide adenine dinucleotide phosphate (NADPH) is not produced, leading to reduced amounts of glutathione, an important antioxidant. Consequently, the red blood cells are unable to respond to oxidative stress, leading to the destruction of vital proteins of the red blood cells. The damaged proteins cannot maintain the integrity of the erythrocytes, leading to hemolysis.

Detection and Treatment of the Disease

Laboratory tests are used in the identification of NSHA. Such tests include a total blood count, peripheral blood smear, reticulocyte count, serum total and direct bilirubin, and lactate dehydrogenase (Becker, 2003, p. 369). The hematocrit and hemoglobin values are often found to be lower than normal. In addition, inclusions of denatured hemoglobin referred to as Heinz bodies are often visualized (National Organization for Rare Diseases, 2014).

The conventional method of treating nonspherocytic hemolytic anemia due to hexokinase deficiency is supplementation of folic acid to encourage the regeneration of new red blood cells. An infusion of fluids is often necessary to safeguard against shock as well as to sustain urinary output. In instances of severe anemia, red cell transfusion may be performed. It is also imperative for patients to steer clear of substances that promote the damage of erythrocytes. Iron chelation treatment through the administration of drugs such as deferoxamine or deferasirox is helpful when frequent transfusions are required. This procedure prevents iron overload from the frequent transfusions of red blood cells.

Approaches being Investigated

There is no permanent cure for nonspherocytic hemolytic anemia due to hexokinase deficiency since the condition is a consequence of a gene mutation. However, it is hypothesized that replacing the mutant gene with a normal one may reverse the situation. Gene therapy using the human isoenzyme has yielded success in the treatment of pyruvate kinase deficiency in animal models (Meza et al., 2009). As a result, studies are underway to determine the possibility of gene transfer in treating NSHA (HK1, 2014).

Conclusion

The mechanism of genetic transmission of nonspherocytic anemia due to hexokinase deficiency makes it an extremely rare occurrence. However, the few cases that have been reported are characterized by extreme anemia. Currently, there is no lasting cure for this condition. Nevertheless, maintaining the levels of red blood cells has proved useful in managing the situation. Therefore, it is essential that patients with this condition receive regular medical attention to safeguard their health.

References

Becker, P. S. (2003). Congenital nonspherocytic hemolytic anemia. In National Organization for Rare Disorders (Ed.), NORD Guide to Rare Disorders (pp. 369-372). Philadelphia, PA: Lippincott Williams & Wilkins.

Bianchi, M. & Magnani, M. (1995). Hexokinase mutations that produce nonspherocytic hemolytic anemia. Blood Cells, Molecules & Diseases, 21(1), 2-8.

Hexokinase 1. (2014). Web.

HK1. (2014). Web.

Karen, M. K. V., Wouter, W. S., Annet, C. W., Kersting, S., & Richard, W. (2009). The first mutation in the red blood cell-specific promoter of hexokinase combined with a novel missense mutation causes hexokinase deficiency and mild chronic hemolysis. Haematologica, 94(9), 1203–1210.

Mallouh, A. A. (2012). Other red cell enzymopathies. In Elzouki, A. Y., Harfi, H. A., Nazer, H. M., Stapleton, F. B., Oh, W., & Whitley, R. J. (Eds), Textbook of clinical pediatrics (pp. 2981-2984). Berlin: Springer.

Meza, N. W., Alonso-Ferrero, M. E., Navarro, S., Quintana-Bustamante, O., Valeri, A., Garcia-Gomez, M., Bueren, J. A., Bautista, J. M., & Segovia, J. C. (2009). Rescue of pyruvate kinase deficiency in mice by gene therapy using the human isoenzyme. Molecular Therapy, 17(12), 2000-2009.

National Institutes of Health. (2013). Nonspherocytic hemolytic anemia due to hexokinase deficiency. Web.

National Organization for Rare Diseases. (2014). Anemia, hereditary nonspherocytic hemolytic. Web.

Nayak, R., Rai, S., & Gupta, A. (2012). Essentials in hematology and clinical pathology. New Delhi: JP Medical.

Analyzing and Evaluating Research

Based on the contents of the sampled study, the main research questions presented by Mendenhall and Doherty (2007) investigate how health care service providers and patients overcome traditional barriers to diabetes management by designing a new and inclusive disease management approach. One research question focused on understanding the formulation and redesign processes of the inclusive framework, Partners in Diabetes (PID). Another focused on understanding the barriers encountered and mistakes made in formulating and implementing the PID program (Mendenhall & Doherty, 2007). These research questions sought to explore the main phenomenon of study through descriptive assessments of its different contexts.

The research questions presented by Mendenhall and Doherty (2007) were focused and specific because they reflected the main research phenomenon, which is to evaluate how PID could improve diabetes management practices. This attribute aligns with the views of Salkind (2012), who says research questions need to be direct, focused, and specific. The questions also go a step further to find out how the PID framework could extend to other facets of health care management. Thus, they present a holistic understanding of the research phenomenon by investigating important aspects of its implementation, including its link with traditional diabetes management philosophies, current problems with the formulation and design of health management philosophies and, lastly, the potential for future applications (Newman, Ridenour, Newman, & DeMarco, 2003). This way, the research questions highlight three main facets of the research phenomenon: the past (how health care service providers dominated health management processes), the contemporary potential application of PID, and the future of chronic disease management (its potential to improve diabetes management and other chronic disease management programs). Based on their nature, the questions center the study by exploring different tenets of the research phenomenon. Possibly limited by the research approach the authors adopted, the main weakness of the research questions is their failure to accommodate a synthesis of multiple sources of research data.

The purpose of the sampled study was to find ways of improving the management of diabetes through a collaborative program, PID. By examining the experiences of participants and other parties involved in its implementation, the program sought to find better ways of improving diabetes management and of extending the program's application to other facets of health care management (Mendenhall & Doherty, 2007). The proposed health care model thrives on promoting increased partnerships between health care service providers and their patients. The research questions related to the purpose statement by probing the challenges and mistakes that could impede the realization of a holistic framework for merging the interests of patients and health care service providers in diabetes care management. The research questions also seek to define the transition from traditional health care models (where providers are experts who offer services to passive patients) to a more comprehensive health care service delivery model that recognizes patients as participants. Lastly, the research questions provide useful insights into how policy experts could redesign the proposed health care model, PID, and improve it for maximum usefulness.

In line with the views highlighted in the first paragraph of this report, the main shortcoming of the research questions is their failure to accommodate multiple sources of data. However, based on their nature, the questions posed by Mendenhall and Doherty (2007) appear focused and analytical. The last question, which seeks to find out future applications of PID, is open to further analysis because it does not specify which areas of chronic disease management PID would apply to.

References

Mendenhall, T., & Doherty, W. (2007). Partners in diabetes: Action research in a primary care setting. Action Research, 5(4), 378-406.

Newman, I., Ridenour, C. S., Newman, C., & DeMarco, G. M. P. (2003). A typology of research purposes and its relationship to mixed methods. In Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage.

Salkind, N. (2012). 100 Questions (and Answers) About Research Methods. London, UK: SAGE.

Mathematics and Logic. A Troublesome Inheritance

Introduction

As we know, humanity has been interested in issues of nature and society from the very moment the first cultures appeared. To answer these essential questions, the best minds of civilizations created science and many categories within it, among them mathematics and logic. Despite their technical nature, these disciplines can also be considered within the framework of the humanitarian paradigm, which can lead to the emergence of new meanings, discoveries, and reasoning. According to Elrod, quantitative reasoning is "the habit of mind to consider the power and limitations of quantitative evidence in the evaluation, construction, and communication of arguments in public, professional, and personal life" (para. 4). The purpose of this paper is to show how A Troublesome Inheritance: Genes, Race and Human History inspired and changed my quantitative thinking about mathematics and logic.

Logic as a Political Justification

The book that influenced my understanding of the discipline of logic is A Troublesome Inheritance: Genes, Race and Human History by Nicholas Wade. The author attempts to investigate various causes of the differences between human races and societies from a biological point of view that is currently being condemned (Wade 4). The bottom line is that the author suggests, through logical theses, taking the paradigm of biological substantiation as the main and objective one. This statement reminds me that logic is at once a broad, accessible, and egalitarian discipline and a deeply structured science of the mind, which is quite paradoxical and fascinating. It seems to me that this argumentative nature partly brings the science of logic together with jurisprudence. The book also shows how various numeric and quantitative measuring tools and scales may be applied to qualitative phenomena.

The History of Humankind through the Prism of Math and Logic

Nicholas Wade offers a look at the biological and historical evolution of humankind through the prism of mathematics and logic. It is fascinating to observe throughout the book how morphological features, human thinking, the context of the time, and the environment mutually influenced one another (Wade 8). Studying the history of societies also provides insights into how new and unique concepts, for example, algebra, appear in the human mindset. These concepts emerge from the collision of the biological and social spheres; then they are popularized and become a cultural norm, paving the way for new algebraic discoveries. Since the book is written from a stigmatized, unpopular perspective, it offers a unique vision.

Explanation of Key Factors of Civilizational Development through Math and Logic

Despite its biological focus, A Troublesome Inheritance: Genes, Race, and Human History will be exciting and useful to both amateurs and specialists in other scientific disciplines, including mathematics and logic. While focusing more on continental territories, the author does not neglect the nuances of communities in insular parts of the world (Wade 12). The author not only logically describes various external and internal sources of influence on diverse human societies but also compares social processes and their consequences between human races. Such a detailed anthropological analysis of social causation and correlations, for example through quantitative IQ testing, opened new facets of the logical approach to historical and civilizational issues for me. Now I understand how logic within the framework of history can explain the emergence of various cultural phenomena, the influence of precedents, and various social prohibitions and restrictions.

It is also worth noting that my historical interest has always centered on the ancient world and antiquity. It was fascinating to read about how and why different communities independently developed in different ways. Reading also makes clear the influence of biological and social features and patterns on the formation of such institutions as economies, hierarchies, countries, governments, and political systems (Wade 12). As a result, logical argumentation and quantitative data analysis show a whole and consistent historical chain of qualitative phenomena. The mathematical and logical interpretation of social events of a humanitarian and biological nature may open up new perspectives in the fields of biology, sociology, and psychology.

Logic as a Possible Theory of Everything

It is clear that, in his work A Troublesome Inheritance: Genes, Race and Human History, Nicholas Wade explores our world more globally, from the perspectives of multiple disciplines. The author begins his research at an unrecorded ancient time point and ends with assumptions about the future (Wade 6). It can be stated with confidence that this work is of enormous scientific scope. Most of all, I was influenced by the structure, systematization, presentation, and interpretation of facts; in other words, the logical argumentation of the researcher. This book serves as an excellent example of the application of the discipline of logic to critical technical and scientific categories. Perhaps in the future it is logic that will become the forerunner of that ever-sought-after theory of everything that scientists are eager to discover.

The Importance of the Impartiality of Scientific Disciplines

In addition to new knowledge in the fields of biology, genetics, anthropology, and sociology, reading this book gave me one more valuable understanding of both mathematics and logic. This new knowledge is not internal but meta in nature. Technical and humanitarian disciplines in particular, and the scientific approach as a whole, should be perceived outside the framework of political paradigms and worldviews. It is evident that the modern Western and even the global community is highly polarized in matters of politics, society, and economics. It is important to note that, according to De-Wit et al., "Since 1994, the number of Americans who see the opposing political party as a threat to the nation's well-being has doubled" (para. 1). It is such an independent and neutral path that will allow us to fully continue the development of the overall global progress of the human community.

Conclusion

This work is a personal reflection on the material read and its impact on my technical mathematical and logical knowledge, as well as on my quantitative thinking. I have read A Troublesome Inheritance: Genes, Race and Human History by Nicholas Wade. His book opens new horizons in the technical disciplines of mathematics and logic, providing not only new knowledge but also a meta-level understanding of the issue. The work both manages narrowly focused aspects and addresses global topics. The author examines controversial and acutely social questions in detail and gives competent arguments; the book therefore carries significant scientific weight for both specialists and amateurs. It is this book that should inspire new generations of specialists in the fields of mathematical and logical research.

Works Cited

De-Wit, Lee, et al. "Are Social Media Driving Political Polarization?" Greater Good Magazine, 2019. Web.

Elrod, Susan. "Quantitative Reasoning: The Next 'Across the Curriculum' Movement." Peer Review, vol. 16, no. 3, 2014. Web.

Wade, Nicholas. A Troublesome Inheritance: Genes, Race and Human History. Penguin, 2015.

Suppressive Interactions and Their Effects on Body

Hearing is one of the five ordinary senses inherent in humans and some other vertebrates; it is the ability to perceive different sounds by means of such an important organ as the ear. Hearing is generally performed by the auditory system, the process in which the ear detects certain vibrations and transduces them into nerve impulses that can be analyzed by the brain. "The auditory system can be quite sensitive to the spectral shape, however, as in the differences among whispered vowels" (Manley et al., 2004). Nowadays, many studies deal with different hearing processes that cause problems with hearing and the perception of information. Among these processes are hearing loss and auditory suppression. "Hearing loss caused by damage to the cochlea (inner ear) is probably the most common form of hearing loss in the developed countries" (Moore, 1995). In this paper, we are going to study and analyze several types of auditory suppression, their interactions, and their effects on other parts of the body. Cochlear nonlinearity and two-tone suppression are considered two of the most frequent processes that happen in the inner ear and provoke certain troubles. To find the necessary treatment and help people restore their hearing abilities, it is best (1) to study the articles by such writers as Abbas, Hall, or Javel, Geisler, and Ravidran and examine the research conducted by Duifhuis, (2) to analyze the research they describe concerning two-tone suppression in auditory nerves and suppression in the basilar membrane, and, finally, (3) to compare the symptoms of two-tone suppression and cochlear nonlinearity.

The basilar membrane is considered one of the most important elements of the inner ear; it separates the liquid-filled tubes and provides a vertebrate with the ability to hear. If something serious happens in this part of the ear, it is quite possible not only to develop problems with information perception but also to become deaf. The cochlea is another section of the inner ear that moves according to the incoming vibrations and helps to divide the vibration sounds and make them comprehensible to the brain. One of the most important functions of the basilar membrane, which is developed in the cochlea of some mammalian species, is the ability to disperse all incoming sound waves.

Damage to the cochlea is probably the most frequent form of hearing impairment. One of the common symptoms of cochlear nonlinearity is an inability to detect weak sounds. General complaints lie in the difficulty of understanding speech accompanied by background noise (Moore, 1995). Investigations of cochlear nonlinearity help not only to grasp the major idea of hearing impairment but also to clarify what exactly reduces such abilities as detecting and comprehending sounds. Much research shows that loss of hearing or misinterpretation of information may be caused by numerous biological, psychological, and environmental factors. Cochlear nonlinearity is evident when the input to the BM (the basilar membrane) is doubled and the output is less than doubled. Brian Moore speculates on the psychological and perceptual aspects that are crucially important for cochlear nonlinearity and two-tone suppression. He mentions that hearing loss may be caused by aging. However, it is not the only reason. Damage to the cochlea may be caused by many other factors and is reflected in altered growth slopes.
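The compressive growth just described, where doubling the input less than doubles the output, can be sketched with a toy power-law input-output function. The exponent of 0.2 below is a hypothetical round value chosen only for illustration, not a measured basilar-membrane parameter.

```python
# Toy sketch of compressive basilar-membrane (BM) growth.
# Assumes a simple power law v = p**c with compression exponent c < 1;
# c = 0.2 is an illustrative value, not a measured one.

def bm_response(pressure, c=0.2):
    """Hypothetical BM output for a given input pressure (arbitrary units)."""
    return pressure ** c

ratio = bm_response(2.0) / bm_response(1.0)
# Doubling the input multiplies the output by only 2**0.2 (about 1.15),
# far less than doubling it.
print(ratio < 2)  # True
```

Any exponent below 1 reproduces the qualitative effect; a linear system (exponent 1) would double its output exactly.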

Another form of basilar membrane damage concerns two-tone suppression. Two-tone suppression is one of the general characteristics of the normal ear and is the result of a process in which "the tone-driven activity of a single fiber in response to one tone can be suppressed by the presence of a second tone" (Moore, 1995). It is necessary to remember that this characteristic is inherent in normal ears only; a damaged ear is deprived of two-tone suppression. In simple words, two-tone suppression occurs when a second tone is added at a different frequency and the response to the first tone is decreased. With the help of the work by Duifhuis, we can see that suppression may be rather dependent on the subject; however, the general form of the suppression remains constant (Duifhuis, 1980).
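The effect described above can be reproduced in a minimal numerical sketch: a compressive, memoryless nonlinearity (a crude stand-in for cochlear compression, not any model from the cited papers) reduces the output component at the first tone's frequency once a stronger second tone is added. The frequencies, levels, and exponent below are illustrative choices.

```python
import numpy as np

# Toy demonstration of two-tone suppression: the 100-Hz component of the
# output of a compressive nonlinearity shrinks when a stronger 137-Hz
# "suppressor" tone is added. Exponent 0.3 and the tone levels are
# illustrative, not physiological values.

fs = 8000                      # sample rate (Hz)
t = np.arange(fs) / fs         # one second of time samples
f1, f2 = 100, 137              # suppressee and suppressor frequencies (Hz)

def compress(x, p=0.3):
    """Odd compressive nonlinearity, a crude stand-in for BM compression."""
    return np.sign(x) * np.abs(x) ** p

def f1_component(y):
    """Magnitude of the sine-phase f1 component of y."""
    return abs(2 / len(y) * np.sum(y * np.sin(2 * np.pi * f1 * t)))

alone = f1_component(compress(np.sin(2 * np.pi * f1 * t)))
with_suppressor = f1_component(
    compress(np.sin(2 * np.pi * f1 * t) + 5 * np.sin(2 * np.pi * f2 * t))
)
print(with_suppressor < alone)  # True: the second tone suppresses the first
```

The suppression here needs no inhibitory mechanism at all; it falls directly out of the compression, which is consistent with the point made later that two-tone suppression does not involve neural inhibition.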

Duifhuis admits that "Suppression is a non-linear phenomenon so that one should not expect the effects of 2 suppressors to add up" (Duifhuis, 1980). In his research, he chooses an interesting approach: using suppression frequency as the primary independent variable. He measures suppression monaurally in order to limit the number of alternative techniques. First, the suppressee frequency was set at 1 kHz; further experiments used other frequencies (0.5, 2, and 4 kHz). These experiments showed that suppression is much more prominent at higher levels and less prominent at lower levels. It was proved that suppression depends on several variables: suppressor and suppressee frequencies and levels. With his investigation, Duifhuis calls the already existing facts into question and proves that frequency is not the only factor that influences two-tone suppression. His experiments are considered useful here because the research described in this paper has much in common with his.

Two-tone suppression can easily arise from cochlear nonlinearity. In this case, the major evidence of two-tone suppression comes from the basilar membrane's mechanical measurements. This is why, to clarify the true nature of two-tone suppression, it is better to concentrate on both physiological and psychophysical investigations. Duifhuis makes a wonderful attempt to describe all the measurements of two-tone suppression on the psychological level. He shows that the effect of two-tone suppression may be decreased only in a noisy environment. Duifhuis distinguishes two levels of the stimulus components, which become the major variables of the investigation: L1, the suppressee, and L2, the suppressor. The quantity of suppression in the given stimulus depends on both the suppressee and the suppressor. It is also necessary to point out that, in their article about the effects of suppressors, Yasin and Plack suppose that the increase of suppression as a function of suppressor level is much greater for a suppressor below the signal frequency (Yasin and Plack, 2007). The growth of suppression turns out to be the major theme for discussion in their article; they also use temporal masking curves in order to measure the chosen levels.
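The dependence on the suppressor level L2 can be sketched numerically as well: sweeping the suppressor amplitude through a toy compressive nonlinearity (the same kind of crude stand-in as above, not a model from the cited studies) shows the suppressee component falling as L2 rises. All frequencies, amplitudes, and the exponent are illustrative assumptions.

```python
import numpy as np

# Sketch of level dependence: with a fixed suppressee level (L1), the
# suppressee's output component through a toy compressive nonlinearity
# falls further as the suppressor level (L2) rises.

fs = 8000
t = np.arange(fs) / fs
f1, f2 = 100, 137              # suppressee and suppressor frequencies (Hz)

def f1_level(l2_amp, p=0.3):
    """Suppressee component magnitude for a given suppressor amplitude."""
    x = np.sin(2 * np.pi * f1 * t) + l2_amp * np.sin(2 * np.pi * f2 * t)
    y = np.sign(x) * np.abs(x) ** p   # compressive nonlinearity
    return abs(2 / len(y) * np.sum(y * np.sin(2 * np.pi * f1 * t)))

levels = [0, 1, 2, 4, 8]              # suppressor amplitudes (L2 sweep)
components = [f1_level(a) for a in levels]
# The suppressee component shrinks as the suppressor level grows.
print(components[0] > components[-1])  # True
```

This mirrors, in miniature, Duifhuis's observation that suppression is more prominent at higher suppressor levels.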

The work by James Kates shows that two-tone rate suppression is "a nonlinear property of the cochlea in which the neural firing rate in the region most sensitive to a probe tone is reduced by the addition of a second (suppressor) tone at a different frequency" (Kates, 1995). The physiological investigations of two-tone suppression can easily prove that suppression may be present not only in the neural response of the cochlear partition but also in its mechanical motion. To analyze this point, it is necessary to remember the theory offered by Hall in 1977. He explains that the neural response is "related directly to the first or second spatial derivative of the travelling wave along the basilar membrane" (Hall, 1977).

Two-tone suppression may produce another effect if the frequency of the excitatory tone is slightly higher than the frequency of the suppressor tone. The relations between the suppressor and the suppressee turn out to be rather important; however, the role of the suppressee, taken alone, is not that significant in the whole process of two-tone suppression (Javel et al., 1978). Even though the experimental subjects are cats, it is possible to achieve certain results and, with the help of a thorough examination of cats' auditory nerves, explain how exactly the suppressee interacts with the suppressor. The discharge rate of the auditory nerve fibers of cats is measured under two stimuli: the suppressor tone and the excitatory tone. One of them causes an increase in the rate, and the other suppresses it. Comparing these two reactions, it is quite possible to analyze whether the suppressee can behave and influence the other functions of the ear alone.

The last but not least important issue we are going to consider in this paper is the use of the pulsation threshold method to induce two-tone suppression. Duifhuis is one of the scientists eager to analyze the impact of this method. The pulsation threshold method is used to evaluate the basilar membrane's reaction to two-tone suppression. During the investigation, Duifhuis uses a stimulus time pattern in which a repetition frequency of 4 Hz characterizes the masker stimulus and the probe. This method was chosen for one simple reason: "day-to-day variability is essentially equal for forward masking and pulsation threshold… the effects of suppression measured in terms of threshold differences are greater" (Duifhuis, 1980).

The peculiar feature of this method is the simultaneous use of the suppressee and the suppressor. To prove the effectiveness of this method, it is necessary to compare it with other methods. In this work, we will use forward masking as the major counterpart of the pulsation threshold. The results demonstrated by Duifhuis will help to prove that the effects of suppression are greater in the pulsation threshold. In the pulsation threshold method, the frequency of the probe should be the same as that of the suppressee. If these assumptions are correct, the effectiveness of the pulsation threshold method will be proved: "Suppression is not merely an effect of suppressor-suppressee amplitude ratio but that it also increases as the overall level increases" (Duifhuis, 1980). Another important method that has to be mentioned is direct masking. Hartmann admits that "auditory bandwidths determined by direct masking are wider than those determined by forward masking or pulsation threshold" (1997). To get a clearer understanding of the effectiveness of the chosen methods, it is also necessary to acknowledge the limits of each method's results.

Hearing is an important sense for humans and other vertebrates, yet many problems may appear with hearing and the perception of necessary information. The auditory system is crucially important; it helps to detect different types of vibrations and make them available for evaluation. Any damage, whether cochlear nonlinearity or two-tone suppression, should be analyzed thoroughly on different levels. Suppressive interactions may be studied only by means of numerous investigations and theoretical backgrounds. The works by Abbas, Hall, Javel, and Duifhuis help to get a clear understanding of such notions as pulsation threshold, auditory suppression, and two-tone suppression within a cochlear model. Cochlear nonlinearity makes it harder for humans and other living beings to perceive information at the necessary level. Two-tone suppression is another process that may arise from cochlear nonlinearity and causes further problems with hearing. The peculiar feature of two-tone suppression is that its effect does not involve any neural inhibition. This phenomenon is analyzed by presenting one tone at the characteristic frequency of a neuron and then presenting another tone at its own frequency and intensity. The comparison of such methods as forward masking and pulsation threshold helps to analyze the pros and cons of both and to establish which one is more effective for studying two-tone suppression. To study this phenomenon, it is also necessary to clarify the difference between the suppressee and the suppressor, learn more about their relations, and present information showing which of them plays a significant role in the process and which of them is not crucial by itself, though its presence is still obligatory.

Works Cited

  1. Abbas, Paul J., and Sachs, Murray B. "Two-Tone Suppression in Auditory-Nerve Fibers: Extension of a Stimulus-Response Relationship." Journal of the Acoustical Society of America 59.1 (1976): 112-122.
  2. Duifhuis, Hendrikus. "Level Effects in Psychophysical Two-Tone Suppression." Journal of the Acoustical Society of America 67.3 (1980): 914-927.
  3. Hall, J. L. "Two-Tone Suppression in a Non-Linear Model of the Basilar Membrane." Journal of the Acoustical Society of America 61 (1977): 930-939.
  4. Hartmann, William M. Signals, Sounds, and Sensation. Springer, 1997.
  5. Javel, Eric, Geisler, C., and Ravidran, A. "Two-Tone Suppression in Auditory Nerve of the Cat." Journal of the Acoustical Society of America 64 (1978): S135.
  6. Kates, James M. "Two-Tone Suppression in a Cochlear Model." IEEE Transactions on Speech and Audio Processing 3.5 (1995): 396-406.
  7. Manley, Geoffrey A., Popper, Arthur N., and Fay, Richard R. Evolution of the Vertebrate Auditory System. Springer, 2004.
  8. Moore, Brian C. Perceptual Consequences of Cochlear Damage. Oxford University Press, 1995.
  9. Yasin, Ifat, and Plack, Christopher J. "The Effects of Low- and High-Frequency Suppressors on Psychophysical Estimates of Basilar Membrane Compression and Gain." Journal of the Acoustical Society of America 121.5 (2007): 2832-2841.