Research Interview Types and Practical Usage

Interviews are an important element of research and deserve serious consideration when choosing a data collection method. According to Hagan (2017), interviews refer to conversations in which one party, the interviewer, solicits responses from a second party, the interviewee, to gather information. Based on a critical review of the course reading, it is evident that the choice of interview approach depends on what the researcher seeks to achieve, the availability of resources, and the interviewees' capabilities.

Researchers may use different terminologies in reference to interviews conducted in particular areas of study and professions. Hagan (2017) asserts that the phrase "investigative interview" is used in journalism, while "preliminary interviews" refers to surveys conducted before the commencement of a major study. However, all types of interviews fall into three broad categories: structured, unstructured, and depth interviews. Researchers must acknowledge the differences between these three categories to make an informed choice before commencing a study.

A clear distinction can be made from the general descriptions of structured, unstructured, and depth interviews. Structured interviews consist of check-off responses to straightforward questions designed to elicit a limited response pattern. Unstructured interviews are unique in that they feature open-ended questions: respondents are not restricted to particular answers, as is the case with structured interviews. The third category, depth interviews, entails intensive questions designed to explore a particular subject, topic, or area of study (Hagan, 2017). These types of interviews serve unique purposes, as can be deduced from their definitions.

The success of an interview depends on the researchers' mastery of the interviewing process. The people conducting the interviews might require thorough training to acquaint themselves with the objectives of the study (Hagan, 2017). They should be competent enough to adjust to changing scenarios, as they may not necessarily encounter cooperative interviewees. Importantly, interviewers should be skillful enough to establish and build rapport with respondents to ensure a fruitful interview (Hagan, 2017). In essence, the interviewer is responsible for the entire interview process from start to finish and must work deliberately to make it succeed.

Recording the interview is shown to be an important step in research. Hagan (2017) recommends the use of a pencil to record the interviewee's responses and personal observations, which should be placed in parentheses for clarity. The record should then be edited into a self-explanatory form once the interview is over. This recommendation is subject to criticism because it could be a source of errors leading to the distortion of information. It could therefore be useful to include a cross-checking session in the interview exit process: the interviewer could read the responses and observations aloud and ask the respondent to reaffirm the recorded answers.

Interviews have been used in studies meant to explore criminal behaviors and patterns, popularly known as offender interviews. Interviews involving active and incarcerated criminals are important because they lead to significant revelations about criminal tendencies. However, researchers must obtain permission from the relevant authorities before conducting interviews involving incarcerated persons. For instance, the Corrective Services Act 2006, an Australian statute, states that it is illegal for anyone to interview prisoners without the chief executive's permission. As such, researchers must acquaint themselves with applicable law before interviewing offenders to avoid problems with the authorities.

In summary, interviews need not involve face-to-face interactions, because researchers can use various communication technologies, such as the telephone and teleconferencing equipment, to conduct them. Importantly, researchers can choose from three basic forms of interviews: structured, unstructured, and depth interviews. The choice of interview approach depends on various factors, such as the nature of the study and the respondents' characteristics. In all cases, interviewers must take charge of the process to ensure a successful interview.

References

Corrective Services Act, Publ. L. No. 238, Stat. 132 (2006). Web.

Hagan, F. E. (2017). Research methods in criminal justice and criminology (10th ed.). Pearson.

Nucleic Acid Hybridization Analysis

Introduction

This experiment aimed to hybridize nucleic acids and then characterize the hybridized nucleic acid, an important process in gene detection. The analysis provides insight into the structure, arrangement, and expression of genes. Nucleic acid hybridization is a molecular technique widely used in the analysis of complex, long-chain nucleic acids. The technique facilitates easy and accurate recognition of sequences related to a specific probe sequence. A particular nucleic acid sequence can be recognized within a heterogeneous mixture if the DNA is first cut into small fragments and size-fractionated by agarose gel electrophoresis. The fractionated DNA is then transferred to a DNA-binding membrane in single-stranded form (Kinjo & Rigler, 1995).

The probe sequence is then labeled, for instance with a radioactive tag, and denatured into a single strand. The resulting tagged probe is transferred into a solution and allowed to wash over the membrane carrying the target and other sequences. Under suitable conditions, the probe anneals only to those sequences on the membrane that are complementary to it (Eigen & Rigler, 1994).

In this experiment, a plasmid sub-clone approach is used to identify related sequences in a restriction digest of a large recombinant plasmid. The DNA used as a probe is a restriction fragment from the plasmid pMAQ105, recovered by gel purification (Rigler, 1995). The target DNA to be probed is a fusion of plasmids pMAQ28 and R388, measuring 37 kb in total (Zeiss, 2010). The relevant restriction map is shown below:

Fig. 1: Probe sequence restriction map

Key: Relevant restriction sites: B = BamHI, E = EcoRI, H = HindIII

Apparatus and Procedure

Reagents

TE = 10 mM Tris pH 8.0, 1 mM EDTA

Depurination solution: 0.25 M HCl

20X SSC = 0.3 M sodium citrate, 3 M NaCl, pH 7.0

Pre/Hybridization buffer: 2X SSC, 0.5% (w/v) blocking agent, 5% (w/v) Dextran sulphate, 0.1% (w/v) SDS

Buffer 1: 0.10 M Tris-HCl, pH 7.5, 0.15 M NaCl

Buffer 2: 0.10 M Tris-HCl, pH 7.5, 0.15 M NaCl, 0.5% (w/v) Blocking Reagent

Buffer 3: 0.10 M Tris-HCl pH 9.5, 0.10 M NaCl

Antibody Conjugate Solution: 1/1000 (v/v) Antifluorescein-AP conjugate in Buffer 2
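The working solutions used below (for example, the 2X SSC wash) are prepared from these stocks by simple dilution, C1V1 = C2V2. As a minimal illustration, not part of the original protocol, the volumes can be computed as follows:

```python
# Illustrative helper (not part of the original protocol): compute the volume
# of stock needed to prepare a working dilution via C1 * V1 = C2 * V2.
def dilution(stock_x: float, final_x: float, final_volume_ml: float):
    """Return (stock_ml, diluent_ml) needed for final_volume_ml of final_x buffer."""
    stock_ml = final_x * final_volume_ml / stock_x
    return stock_ml, final_volume_ml - stock_ml

# Example: 500 ml of 2X SSC from the 20X SSC stock listed above.
stock_ml, water_ml = dilution(stock_x=20, final_x=2, final_volume_ml=500)
print(f"{stock_ml:.0f} ml of 20X SSC + {water_ml:.0f} ml of water")  # 50 ml + 450 ml
```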

Step 1

Digests (B = BamHI, E = EcoRI, H = HindIII) were set up for restriction digestion of the fused pMAQ28/R388 DNA. The test tubes were labeled and incubated at 37 °C for two hours before being removed and placed in order in a rack. On the sheet provided, the position of each of the three samples on the rack was entered. The technician then loaded the specimens, ran them on an agarose gel, and returned them to the rack in the order in which they were run.

Step 2

The gel was photographed, and the tracks onto which our specimens had been loaded were identified. After identification, a small wedge of gel was trimmed off at the top right corner to mark the orientation. The gel was then placed in a tray and covered with 0.25 M hydrochloric acid (depurination solution). It was left to stand for 20 minutes under moderate stirring. After 20 minutes, the solution was tipped off, the gel was washed with distilled water, and it was then placed in 0.4 M sodium hydroxide (NaOH). While the gel was soaking in the alkaline solution, Zeta-Probe membrane and Whatman paper were cut into three pieces of the same size as the gel. Paper towels were cut to the same size as the gel and piled to a height of 6 cm. The filter was then wetted with 0.4 M sodium hydroxide, and the capillary-transfer apparatus was assembled as shown below for transfer in 0.4 M sodium hydroxide.

Figure 2: Capillary transfer setup

After completion of the overnight transfer at room temperature, the membrane was wrapped in cling film and stored in a freezer.

Probe Preparation

The probe sequence used in this experiment was derived from plasmid pMAQ105. A concentrated (undiluted) pMAQ105 specimen prepared earlier was used in this part. An EcoRI digest of pMAQ105 was set up and incubated at 37 °C for two hours. The digested specimen was then put on the rack and its precise position recorded on the sheet, in readiness for loading on a gel in the next session.

Figure 3: pMAQ105 insert

Relevant restriction sites: B = BamHI, E = EcoRI, H = HindIII

Membrane Preparation

A filter was recovered and soaked for 30 seconds in 2X SSC. The membrane was then placed in a hybridization bag; 15 ml of pre-hybridization buffer and 750 µg of carrier DNA were added, and the bag was carefully sealed while ensuring that all the air in the bag was purged. The bag was then incubated for at least 2 hours at 60 °C. The laboratory technician prepared the probe used in the subsequent experiments and placed the gel on the UV light box. The track on which our specimen had been run was located, and the band representing the EcoRI fragment to be used as the probe was identified. Using the razor blade provided, the relevant portion of the gel was cut into small pieces and put in an Eppendorf tube. The weight of the gel in the tube was determined by weighing an empty tube, weighing the tube with the gel, and computing the difference.

10 µl of membrane binding solution was added for every 10 mg of gel excised above; the mixture was vortexed and then incubated at 60 °C for roughly 10 minutes to ensure complete dissolution of the gel slices. A mini-column was inserted into a collection tube, and the dissolved gel mixture was transferred into the assembled mini-column. The assembly was kept at room temperature for about 1 minute and then centrifuged at the highest speed of the centrifuge for one minute. The flow-through was disposed of and the mini-column reinserted into the collection tube. 700 µl of membrane wash solution was added, the assembly centrifuged at top speed for 5 minutes, and the flow-through discarded. Again, 500 µl of membrane wash solution was added and the assembly centrifuged for 5 minutes at top speed. The contents of the collection tube were discarded, and the column assembly was centrifuged for another minute with the microcentrifuge lid loosened to allow evaporation of any ethanol remaining in the column. The mini-column was then transferred into a clean 1.5 ml Eppendorf tube, 30 µl of nuclease-free water was added, and after about two minutes the assembly was centrifuged for 1 minute at top speed. The column was then discarded and the tube containing the eluted DNA clearly labeled.
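The binding-solution scaling above (10 µl per 10 mg of gel) amounts to a one-line calculation; a small sketch with hypothetical tube weights:

```python
# Sketch of the gel-purification volume calculation described above.
# The tube weights are hypothetical examples; the rule from the protocol is
# 10 µl of membrane binding solution per 10 mg of excised gel.
empty_tube_mg = 998.0         # hypothetical weight of the empty Eppendorf tube
tube_with_gel_mg = 1150.0     # hypothetical weight of the tube plus gel slices

gel_mg = tube_with_gel_mg - empty_tube_mg
binding_solution_ul = gel_mg * 10 / 10    # 10 µl per 10 mg of gel
print(f"Gel mass: {gel_mg:.0f} mg -> add {binding_solution_ul:.0f} µl of binding solution")
```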

The DNA specimen was boiled for 10 minutes and then immediately put on ice for 5 minutes. The solution was spun briefly; a labeling mixture containing random primers and reaction buffer mix, fluorescein nucleotide mix, and Klenow fragment was added to the DNA solution, and the solution was incubated for one hour at 37 °C. The reaction was halted by heating the mixture for 10 minutes at 60 °C.

To combine the membrane and the probe, 300 µl of hybridization buffer and 250 µg of carrier DNA were carefully added to the probe. The probe was denatured by heating the solution for 5 minutes and then chilling it on ice. The denatured, tagged probe was added to 15 ml of hybridization buffer equilibrated at 60 °C. The pre-hybridization buffer was emptied out of the bag, and the probe-DNA mix was placed in the bag. Hybridization was then carried out at 60 °C.

The membrane was removed from the bag and placed in a big weigh-boat tray. The membrane was covered with 2X SSC, 1% SDS and incubated at 60 °C for 15 minutes. The liquid was tipped off and the same procedure repeated using 0.2X SSC, 0.1% SDS. After this, the liquid was again tipped off and the membrane covered in buffer solution at 15 °C for 5 minutes under constant stirring. Membrane pores were blocked by covering the membrane in buffer 2 for one hour. The membrane was then put in antibody conjugate solution and incubated for one hour under moderate agitation. Finally, the membrane was washed three times for 5 minutes each in buffer 1, followed by rinsing twice for 5 minutes each in buffer 3.

Chemiluminescence Step

The membrane was transferred to a clean weigh-boat tray, covered with CDP-Star, and incubated for 5 minutes at 37 °C. Any excess solution was removed by wiping with tissue paper. The membrane was wrapped in plastic wrap, with all folds of the wrap placed on the back of the membrane. The membrane was exposed to film, and the film was then developed.

The autoradiograph was examined. The photo of the Southern gel was examined to determine which region of pMAQ28/R388 was homologous to pMAQ105.

References

Eigen, M. & Rigler, R. (1994). Sorting single molecules: Applications to diagnostics and evolutionary biotechnology. Proc. Natl. Acad. Sci., 91, 5740-5747.

Kinjo, M. & Rigler, R. (1995). Ultrasensitive hybridization analysis using fluorescence correlation spectroscopy. Nucleic Acids Research, 10, 1-14.

Marmur, J. & Doty, P. (1962). Determination of the base composition of deoxyribonucleic acid from its thermal denaturation temperature. Journal of Molecular Biology, 5, 109-118.

Rigler, R. (1995). Fluorescence correlations, single molecule detection, and large number screening: Applications in biotechnology. Journal of Biotechnology, 41, 177-186.

Sigma-Aldrich. (2011). Oligos melting temperature.

Zeiss, C. (2010). Analysis of nucleic acid hybridization and determination of kinetic parameters. Web.

Reddy Attipalli and Rong Zhou on Photosynthesis

The topic of photosynthesis has been widely researched by scientists, as it is the process by which green plants and other organisms form chemical energy from light energy. The conversion of light energy turns carbon dioxide (CO2), water, and minerals into energy-rich organic compounds and oxygen. This suggests that carbon dioxide is an essential element in the photosynthesis process. Plants remove carbon from the atmosphere and oceans and fix it into organic chemicals. Various studies have investigated how elevated CO2 concentration impacts the photosynthetic process and, ultimately, plant productivity.

Rong Zhou et al. (2) are among the researchers who examined how elevated CO2 concentration, combined with heat and drought stress, impacts photosynthesis. Though the researchers' focus was on tomatoes, they established extensive effects across various aspects. Rong Zhou et al. (2) highlighted increases in the net photosynthetic rate and intercellular CO2 concentration as possible effects of elevated CO2. They also identified a decrease in stomatal conductance, contributing to an increase in water use efficiency. These findings were supported by Reddy Attipalli et al. (46), who indicated that elevated CO2 concentrations cause a global increase in average temperatures that drastically shifts precipitation. Ultimately, these effects are observed in plant growth and development, since photosynthetic carbon assimilation patterns change. However, Reddy Attipalli et al. (46) argue that plants respond to the potential impact of many factors, such as water, temperature, and soil nutrition, which influence the photosynthetic process. Therefore, these researchers argue that photosynthetic responses should not be tied to the concentration of CO2 alone but also to differences in experimental technologies, treatment duration, plant age, and plant species.

Rong Zhou et al. (4), on the other hand, attempt to contradict the views of Reddy Attipalli et al. (47) by arguing that CO2 concentrations substantially impact the conversion of chemical energy in plants. They indicate that elevated CO2 concentrations in plants such as tomatoes tend to induce the closure of stomata or decrease stomatal density, a process involving reactive oxygen species, ABA, and ABA receptors, which are required in plants. Similarly, according to Rong Zhou et al. (4), a significant number of plant genes that play essential roles in photosynthesis and leaf development are upregulated or downregulated by CO2. As a result of this effect, transcriptional alteration and the delay of leaf senescence in birch are reduced by the concentration of CO2.

Reddy Attipalli et al. (48), in their research, further downplay the significance of CO2 concentrations in photosynthesis by indicating that the bulk of vegetation belongs to the photosynthetic group called C3. The name C3 refers to the first carboxylation product being a 3-carbon acid, and C3 is the most widely used photosynthetic pathway. The authors argue that the C3 pathway is more decisive in photosynthesis than CO2 concentration and often operates at a less than optimal level relative to its potential for dramatic growth and production. Thus, the two articles by Reddy Attipalli et al. and Rong Zhou et al. examine the topic of photosynthesis from different perspectives in relation to the impact of CO2 concentrations. While Rong Zhou et al. (3) support the view that CO2 concentration impacts the photosynthetic process in plants, Reddy Attipalli et al. (49) show that even with elevated CO2, other factors contribute to the photosynthetic process.

References

Reddy Attipalli R., et al. "The Impact of Global Elevated CO2 Concentration on Photosynthesis and Plant Productivity." Current Science, vol. 99, no. 1, 2010, pp. 46-57. EBSCOhost. Web.

Rong Zhou, et al. "Interactive Effects of Elevated CO2 Concentration and Combined Heat and Drought Stress on Tomato Photosynthesis." BMC Plant Biology, vol. 20, no. 1, 2020, pp. 1-12. EBSCOhost. Web.

Effects of Increasing Nitrate Concentrations

Introduction

Background

Invasive plant species can be described as introduced species that have the ability to thrive in areas beyond their natural range and dispersal (Cellot et al. 1998). This category of plants adapts easily and aggressively to new environments, with high reproduction rates. They lack natural enemies; combined with their quick multiplication, this results in outbreak populations in most areas where they are found (Luneva 2009).

Research evidence strongly shows a relationship between the persistence of invasive plant species and the loss of native species under disturbance and fluctuations in soil fertility (DiTomaso 2007). The addition of nitrogen (N) to soil in disturbed grazing land has been shown to increase the abundance of invasive species such as cheatgrass and the European forget-me-not (Myosotis scorpioides), while reduction in N availability has been shown to relatively increase the abundance of native perennial species. A study carried out by Young in 1998 showed that seedling establishment of medusahead increased with NO3- fertilization, was not affected by NH4+ fertilization, and decreased with immobilization of mineral N (Thomas et al. 2003).

The absorption of nitrate is thought to vary with the type of soil in use due to differences in water-holding capacity. Documented evidence indicates that the waterlogging properties of clay soil promote denitrification and the loss of nitrogen (Ling 2010). However, other types of soil, such as sandy loam, have a higher percentage of nutrient losses through leaching (Cellot et al. 1998).

In the current study, different types of soil (sandy loam, clay loam, and silt loam) will be used to study the relative uptake of nitrogen, as measured by the germination and growth of Myosotis scorpioides.

The soil types to be used in this study are as follows: sandy loam, described as soil material that contains 7 to 20% clay and more than 52% sand, with the percentage of silt plus twice the percentage of clay being 30% or more; silt loam, described as soil material that has 50% or higher silt content and between 12% and 27% clay, or 50 to 80% silt and less than 12% clay; and finally clay loam, described as soil material that contains 27 to 40% clay and 20 to 45% sand (Thomas et al. 2003).

Myosotis scorpioides is a herbaceous perennial plant that belongs to the genus Myosotis; it is commonly known as the European forget-me-not (Ling 2010). The plant thrives in wet places and is most commonly found near streams and rivers (Luneva 2009). Myosotis scorpioides reproduces sexually through seeds and vegetatively via stolons that root at the nodes (Thomas et al. 2003). No data are available on the number of seeds that each Myosotis scorpioides plant produces; however, its close relative Myosotis alpestris has been documented to produce between 20 and 120 seeds per plant (DiTomaso 2007).

The plant mostly grows in areas with disturbance, suggesting that the germination of its seeds is promoted by soil disturbance. A study carried out in Germany identified seedlings of Myosotis scorpioides scattered in moderately grazed areas, suggesting that grazing disturbance promotes germination (Luneva 2009). The plant is native to the temperate areas of Europe and Asia but grows exotically in the United States, Canada, and other parts of the world. Myosotis scorpioides forms attractive plant cover and is therefore used as a garden plant (Thomas et al. 2003). Studies show that the plant tends to escape from gardens into wet areas, where it forms dense monocultures (Ling 2010).

In the United States, the plant is found in 41 states, is considered invasive in several of them, and has been banned in Massachusetts (Luneva 2009).

The goal of this study is to evaluate the influence of nitrate concentration on the germination and growth of Myosotis scorpioides seeds planted in either sandy loam, silt loam, or clay loam. As described earlier, the relative germination of Myosotis scorpioides seeds increases with increased soil disturbance, mainly in grazing areas.

The study hypothesizes that an increase in nitrate availability will increase the percentage of germination and the rate of growth of Myosotis scorpioides. Additionally, the study hypothesizes that a change in the type of soil (sandy loam, silt loam, or clay loam) does not affect germination but may cause variation in the rates of growth (DiTomaso 2007).

Problem Statement

Non-perennial invasive plant species have tended to increase in number following soil disturbance by activities such as grazing. Such increases have led to noxious levels of these plants, endangering perennial or native plant populations. The increased rate of germination has been linked to soil disturbance that increases the level of nitrogen (N) availability. Studies have linked, but not confirmed, the increase in N availability to increased rates of germination of invasive plants. Studies also show that the amount of available N and its rate of release for plant growth depend on soil type. Thus, this study will seek to identify the correlation between nitrogen, the germination and growth of Myosotis scorpioides, and soil type.

Objectives

The study seeks to investigate the following specific objectives:

  • The effect of increasing nitrate concentration on the germination and growth of Myosotis scorpioides
  • The effect of soil variation on the uptake of nitrogen for the germination and growth of Myosotis scorpioides

Methods

  • The materials (seeds of M. scorpioides, soil samples, and planting trays) to be used in this study will be commercially obtained.
  • The different commercially obtained soil samples (sandy loam, clay loam, and silt loam) will be divided into three parts each and treated with varying concentrations of nitrate (0 mg/L, 5 mg/L, and 10 mg/L), and each part will then be placed in a separate propagation tray without drainage holes.
  • The seeds of M. scorpioides will then be planted in the different trays containing the different soil samples (sandy loam, clay loam, and silt loam) and nitrate at the different concentrations.
  • Three seeds will be planted in each propagation tray to maximize the likelihood of germination.
  • After planting, the propagation trays will be placed in a safe greenhouse where they will be watered and observed daily for seed germination.
  • The number of germinated seeds per tray will be determined and then related to the concentration of nitrate and the soil sample used.
  • After germination, the height of the plants will be measured twice a week (Mondays and Fridays) for two weeks and the data will be recorded according to the soil sample and concentration of nitrate.
  • This data will then be used to determine the effects of varying concentrations of nitrate on the germination and growth of M. scorpioides in different soil substrates (a sketch of the resulting treatment layout is shown below).
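A minimal sketch of the full factorial layout implied by the steps above: three soil types crossed with three nitrate concentrations gives nine trays, each sown with three seeds. The tray numbering and record fields are illustrative assumptions, not part of the protocol.

```python
# Illustrative layout of the 3 soils x 3 nitrate levels factorial design.
from itertools import product

soils = ["sandy loam", "clay loam", "silt loam"]
nitrate_mg_per_L = [0, 5, 10]
SEEDS_PER_TRAY = 3

trays = []
for tray_id, (soil, nitrate) in enumerate(product(soils, nitrate_mg_per_L), start=1):
    trays.append({
        "tray": tray_id,
        "soil": soil,
        "nitrate_mg_per_L": nitrate,
        "seeds_sown": SEEDS_PER_TRAY,
        "germinated": None,   # to be filled in from daily greenhouse observation
        "heights_cm": [],     # measured Mondays and Fridays for two weeks
    })

for tray in trays:
    print(tray["tray"], tray["soil"], tray["nitrate_mg_per_L"])  # 9 treatment combinations
```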

References

Cellot, B., F. Mouillot, and C. Henry. 1998. Flood drift and propagule bank of aquatic macrophytes in a riverine wetland. Journal of Vegetation Science 9(5): 631-640.

DiTomaso, J., and E. Healy. 2007. Weeds of California and other Western States. University of California Agriculture and Natural Resources Communication Services, Oakland, CA. 834 p.

Ling, C. 2010. Myosotis scorpioides. USGS Nonindigenous Aquatic Species Database, Gainesville, FL. Web.

Luneva, N. 2009. Weeds: Myosotis arvensis L. - Common (field) forget-me-not. AgroAtlas. Interactive agricultural ecological atlas of Russia and neighboring countries: Economic plants and their diseases, pests, and weeds. 2012. Web.

Thomas, A., T. Charles, and A. Douglas. 2003. Nitrogen effects on seed germination and seedling growth. Journal of Range Management 56: 646-654.

Underdevelopment of the Mentoring Theory

There are several significant problems in the development of mentoring theory. Bozeman and Feeney (2007), from the Department of Public Administration and Policy at the University of Georgia, focus their article on identifying these persistent issues and critiquing the theory, as well as a number of existing findings that are not useful. A conceptual framework is used for this study; namely, the researchers analyze published sources, find relationships between them, and suggest their own ideas that may contribute to the theory.

The authors decided to use two research methods for their paper, and secondary data analysis is the first of them. To identify the problems in the theory and critique findings that lack necessary information, it is necessary to analyze the existing literature. Second, Bozeman and Feeney (2007) highlight the issues and voids with the help of a thought experiment that makes it possible to show that mentoring theory and research cannot solve particular persistent problems.

In conclusion, the authors claim that mentoring theory remains underdeveloped. In their judgment, the reason for that is "a failure to confront some of the lingering conceptual gaps in research and theory" (Bozeman & Feeney, 2007, p. 735). Therefore, further study, as well as "demarcating mentoring from the sometimes confounding concepts of training or socialization," is needed (Bozeman & Feeney, 2007, p. 719). Unfortunately, the existing literature is not enough to answer the questions and solve the problems in the theory.

This article is a high-quality overview that builds on criticism and discussion of many studies. The paper is supported by sources, leaves no unresolved issues, and also brings new ideas and proposals to mentoring theory. Since my research topic is the lack of mentorship of African American women, this article is of great significance. It suggests that an increase in the mentorship of African American women may be achieved by solving a more global problem: the underdevelopment of mentoring theory. As researchers continue enhancing the theory, it will become possible to address the issue of my research topic.

Reference

Bozeman, B., & Feeney, M. K. (2007). Toward a useful theory of mentoring: A conceptual analysis and critique. Administration & Society, 39(6), 719-739.

Statistics. How Changes to Variables Affect Conclusion

Definitions

In the process of research, the necessity to provide the correct meaning of the terminology under discussion arises on a regular basis to ensure readers' accurate understanding. Therefore, the methods and the process of defining are issues of great importance in every study. There are two common approaches to providing the accurate meaning of both complicated and straightforward terms: the conceptual definition and the operational definition.

The first is the outlining of the fundamental principles that underlie a term; in other words, it provides the meaning of one construct. It is possible to use conceptions, which are mental images, for summarizing observations and experiences that have something in common with the term, to support the definition. Moreover, the other options used to support defining concepts are also vital for conceptualization. They are constructs, which represent the agreed-on meaning researchers assign to a term. They do not apply to objects that exist in the physical world and cannot be measured directly.

Complicated terms require the determination of their dimensions, which are specifiable aspects of a concept and serve the purpose of outlining the vital parts into which a definition can be divided. Indicators are groups of observations within the dimensions they belong to, chosen to be considered a reflection of a variable. The result of a conceptualization process is a set of indicators that assist in specifying the meaning of a term. In general, conceptual definition is the process of identifying what a particular concept implies, based on constructs, mental images, and the detection of dimensions and their indicators.

The second approach to providing the meaning of a term is the operational definition. It "outlines metric techniques for quantifying the object of interest" (p. 9). This definition articulates how to detect, identify, and measure the values of the characteristics that describe the phenomenon whose term is being explained. For instance, with this defining approach, it is possible to establish the existence of an item's attributes, such as temperature, pressure, and volume, and the methods of collecting these values. Operationalization considers such issues as the range of variation, which is the limit of attributes and characteristics between extremes, and the dimensions of a concept.

Moreover, from the perspective of operational defining, there are two essential qualities of all variables' attributes: they must be exhaustive and mutually exclusive. The first implies the ability to classify every observation taken into some attribute. The second means that the scientist is capable of categorizing each observation under only one attribute. In addition, there exist four levels of measurement, which are used for variables with different kinds of values.

Nominal measures offer a name or a label for a variable without providing a ranking, as the objects being defined are not numerically related (Bhandari, 2020). Ordinal measures categorize the data and rank it in order, making it possible to distinguish the dependency between certain points of the data. Usually, this method is applicable to related terms that have different values of the same indicators.

Interval measures include the aforementioned capabilities and add the option of evenly spaced data. This approach allows detecting equal distances and differences in values between attributes (Bhandari, 2020). A standard instance of data appropriate for interval measures is test scores. The last level, which subsumes all the other levels' possibilities, is the ratio level. It allows detection of a true zero point, which is a matter of great importance for data evaluation. Instances of ratio scales include, but are not limited to, age, weight, and temperature in Kelvin (Bhandari, 2020). In general, operationalization is the variety of specific research procedures, resulting in empirical observations, that represent a concept through the evaluation of the values of its attributes.
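As a minimal illustration of these four levels, with made-up values, the operations each level supports can be shown in code:

```python
# Made-up values illustrating what each measurement level supports.
race = ["White", "Black", "Asian", "White"]   # nominal: labels only, no order
pain = ["severe", "mild", "moderate"]         # ordinal: ordered, spacing unknown
scores = [70, 80, 90]                         # interval: equal spacing, no true zero
kelvin = [273.15, 300.0]                      # ratio: true zero, ratios meaningful

print(race.count("White"))                    # nominal: counting / equality only
rank = ["mild", "moderate", "severe"]
print(sorted(pain, key=rank.index))           # ordinal: ranking is meaningful
print(scores[1] - scores[0] == scores[2] - scores[1])  # interval: equal differences
print(kelvin[1] / kelvin[0])                  # ratio: ratios are meaningful
```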

Conceptualizes race

Race is a term that can be defined by conceptualizing and operationalizing to ensure its meaning is provided accurately. The first step of measuring is to determine a variety of dimensions that characterize the discussed definition from every side. They serve to provide the correct explanation of the concept and outline the points that assist in determining the most accurate match to a particular race. This variable is a simple one, and most of its dimensions do not require the implementation of additional indicators, which might be necessary for a composite measure. However, it is possible to distinguish one or two of them for each of the dimensions. In this work, all indicators are provided in the form of self-directed sentences.

The first dimension is racial identity, which implies initial subjective self-identification, deprived of pre-set options and preconceptions. In other words, it asks to what race an individual feels they belong (Roth, 2016). The indicator is "I believe my race is …," and it is intended to show an individual's composite and comprehensive opinion of their belonging. The next dimension is racial self-classification, which also means that a person determines their race themselves, but on the basis of an official form or survey, with constrained options. The indicator is "I discovered that my race is … according to a survey."

The next two dimensions outline an individual's racial features, which others can directly observe. Observed, appearance-based race is concentrated on readily observable characteristics. This dimension depends on personal opinions, which may be prejudiced, as many people make mistakes in the process of race determination. The indicator is "Others believe that my race is … on the basis of my appearance." Observed, interaction-based race is focused on characteristics revealed through interaction between individuals (Roth, 2016). Features that can indicate belonging to one or another race include, but are not limited to, language, accent, surname, and behavior. The indicator is "Others believe that my race is … on the basis of my personal features, revealed through interactions with them."

The next dimension is reflected race, which implies an individual's belief about what race they belong to from other people's points of view. This form of defining is also based on unqualified opinions, but the reliability of the finding might be increased depending on the number of responses. The indicator is "I believe that most people think that my race is …." Phenotype is the dimension that reveals belonging through consideration of racial appearance.

It is different from the observed dimensions, as it depends only on scientifically based provisions. This dimension includes, but is not limited to, characteristics such as skin color, hair texture or color, nose shape, lip shape, and eye color (Roth, 2016). The first indicator is "I believe that my race is … on the basis of my skin color." The second is "I believe that my race is … on the basis of my other features."

The next dimension is racial ancestry, which implies consideration of the combined racial groups of an individual's ancestors (Roth, 2016). It also has two indicators, focused on both known and hidden features inherited from an individual's forefathers, which may determine their belonging. The first indicator is "I believe that my racial ancestry is … on the basis of my family history." The second is "I believe that my racial ancestry is … on the basis of my genetic testing."

The last dimension is origin, which implies consideration of the territory where an individual and their forefathers have lived. It focuses on investigating a person's race by determining their ancestral land and connecting it to the characteristics prevailing among the people who also belong to that territory. The indicator is "I believe that my race is … on the basis of the land my ancestors came from." All the dimensions and indicators mentioned above together form the set of points that assist in determining belonging and defining the meaning of the variable race.

Operationalizes race

Operationalization is focused on the development of specific research procedures and assists in representing the concept. Race is an exhaustive variable, which means that it is possible to classify every observation into at least one attribute, and the attributes do not exclude each other (Chapter 5, conceptualization, operationalization, and measurement, n.d.). A person can notice that their race may depend on a variety of indicators, which complement each other.

The data on the basis of which race can be determined have no values that would help rank them in a particular order, and a zero point is absent. Therefore, the measures appropriate to the discussed concept are exclusively nominal ones. In this situation, the measurement that can be conducted around the variable race is limited to two options. These possibilities are surveys, one of which is composed in the form of cards with the answers "Yes," "No," or "Inapplicable." The second represents a set of options and a measure that requires choosing one of them.

Typical measurements for the first three of the mentioned dimensions, respectively, are open-ended self-identification questions, closed-ended survey questions, and interviewer classification (Roth, 2016). The appropriate options for data collection for the remaining dimensions are the same, with the exception of racial ancestry. It is necessary to analyze ancestry-informative markers to measure belonging according to the genetic indicator.

By gathering all the provided information, it is possible to give one comprehensive and precise definition for the term "race," divided into three sentences. Race is an individual's belonging to a particular group of people who have common attributes, united into several major categories. These attributes are skin type, behavioral and appearance features, genetic distinctions, and origin. Measurements can be conducted on the basis of interviewer classification, open- and closed-ended questions composing survey cards, and analysis of ancestry-informative markers.

Census Bureau changes

The Census Bureau's report on race provides readers with an updated vision of the meaning of the term race and the attributes that serve its correct determination. Several distinct changes were implemented in it with the purpose of ensuring the accurate definition of race categories. The first is the restating of the race question, which was stripped of its emphasis on personal opinion in order to obtain results supported by complementary observations.

The second major change is the addition of the sixth category, "Some Other Race," to cover the group of people who cannot place themselves in the other five segments (Humes et al., 2011). In addition, adjustments were made to expand the multiple-race combinations and improve the origin questions for different groups (Humes et al., 2011). These changes are vital for the accurate race identification of an individual and a better understanding of the term's meaning.

How Changes to Variables Affect Conclusion

The operational definition is supposed to provide an accurate meaning of a concept. Therefore, differences between distinct operationalization processes and their outcomes can produce different conclusions about race. Changes to variables may outline altered categories with other particular attributes. The consequence of such situations is incorrect definitions of the same term, which may affect the conclusion. It is necessary to carefully consider potential changes in order to avoid misunderstandings and mistakes in determining an individual's race.

Reflection

It is possible to notice gaps in one or another explanation of a term's meaning and in the methods of measuring the variable by comparing distinct operational and conceptual definitions. The contemporary world is evolving along with society's vision of and attitude toward race determination, which raises the necessity of restating the previous descriptions regularly. Comparing the Bureau's conceptualization and operationalization of race with mine, it is possible to notice similar and different points. The definition of the race categories used in the 2010 Census is focused on the location where the people of a particular group and their ancestors have lived, personal self-determination, and reported entries (Humes et al., 2011).

The provided identification is brief, strict, and precise. I assume that it was necessary to reject complementary attributes, as the initial definition may otherwise become unclear. However, this leads to the conclusion that the term was not fully explained, which is partially justified by the presence of missing points in the questionnaire. My definition is more complicated and comprehensive, but it does not include instances that might be confirmed by using a form for race determination. The meaning of the term race is explained similarly in my operational and conceptual definitions and the Bureau's. The main difference between them is in the number and focus of the provided details.

References

Bhandari, P. (2020). Levels of measurement: Nominal, ordinal, interval, ratio. Web.

Humes, K. R., Jones, N. A., & Ramirez, P. R. (2011). Overview of race and Hispanic origin: 2010. Census Bureau.

Chapter 5, conceptualization, operationalization, and measurement. (n.d.). Web.

Roth, W. D. (2016). The multiple dimensions of race. Ethnic and Racial Studies, 39(8), 1310-1338.

The Plague Year From the New Yorker Source Analysis

It is important to note that sources and their credibility play an essential role in ensuring that writing is evidential and persuasive, because a lack of high-quality, reliable sources indicates the questionability of the presented statements. The New Yorker article is not as credible as it could be due to its excessive use of secondary and even tertiary sources, which are primarily linked to the publisher itself. Therefore, the author's use of sources can be considered inadequate.

The first main point is centered on the fact that the selected article uses sources the majority of which are other The New Yorker articles. The author relies excessively on information provided by the publisher, which undermines the writing's credibility, since it is preferable to utilize primary sources or highly credible secondary ones when they are available and accessible for use within the text. Among many examples is the statement that "in the audience was Herman Cain, the former C.E.O. of Godfather's Pizza and an erstwhile Presidential candidate, who had become one of Trump's most prominent Black supporters" (Wright, 2020). The source is directed towards another The New Yorker article, which argues that the given individual was a supporter of Trump, and the author could have used a more direct alternative, such as an interview or a social media posting.

The second main point revolves around the fact that the few sources which are not from the publisher are not presented in an accurate manner. For example, the author writes that a Chinese study reported on an infected traveler who took two long bus rides, which he uses to support the usefulness of masks (Wright, 2020). However, the study itself does not address the usefulness or uselessness of masks but rather focuses on the conclusion that there could be airborne spread. One should be aware that the given point is not about whether or not masks are effective but rather about the fact that the author of the article presents the conclusions of a study in an inaccurate manner.

The third main point focuses on how the author occasionally does not include sources where needed. For example, the author writes that on April 3rd, the C.D.C. finally proclaimed that masks were "vital weapons" (Wright, 2020). It is important to note that such specific information needs to be sourced or referenced because it is one of the key points of the segment. In other words, the article does not make sound choices in selecting and including sources in the story, which leaves some of the critical statements unsubstantiated or unsupported. Therefore, it is of paramount importance for the author to be able to use evidence in order to provide valid and persuasive points.

In conclusion, one should be aware that the selected article makes poor use of sources, because the author relies heavily and excessively on other The New Yorker articles, which are secondary and tertiary sources. In addition, the writing does not properly present the conclusions or findings of the referenced sources, where only small pieces of information are used to support the author's points, which renders the arguments highly inaccurate. The author also fails to incorporate key sources in places where they are needed most, such as specific claims involving dates and organizations.

Reference

Wright, L. (2020). The plague year. The New Yorker. Web.

Logistic Regression: Research Methods

Logistic regression is a method used in the modeling of dichotomous outcome variables. It comes in two forms: simple logistic regression and multiple logistic regression. Simple logistic regression is used whenever there is one nominal variable with two values and one measurable variable. Under such circumstances, the dependent variable is the nominal variable, and the independent variable is the measurable one. The existence of two or more independent variables, with the dependent variable being nominal, prompts the use of multiple logistic regression. There are a number of similarities between linear and simple logistic regression; however, in simple logistic regression the dependent variable is nominal. The major hurdle is determining the value of the nominal variable when the measurable variable has been given (Ahuja, 2010).

The prediction of the nominal value is possible through the use of logistic regression. This model is very relevant to the nursing field, as it is used to find the probability of a certain medical condition occurring. One may, for instance, set out to determine the effect of cholesterol level on heart attack. In such a case, the statistician might take a number of women who are 50 years old and record their cholesterol levels. Fifteen years later, the statistician follows up to determine how many of them suffered a heart attack and compares this with the cholesterol data. Based on the findings, one is able to deduce whether or not cholesterol levels have an effect on the probability of having a heart attack. Logistic regression is also used whenever the measurable variable has been set by the statistician while the nominal variable is free to vary. Logistic regression can also be done when there are two nominal variables (Brockopp, 2003).

How it Works

In simple logistic regression, an equation is used to find the Y value for every X value. In linear regression, the value of Y can be determined directly; in logistic regression, the Y value is the probability of getting a given value of the nominal variable. Taking the case of cholesterol level and its impact on the probability of having a heart attack, the nominal value would be represented by the probability of having a heart attack. The value varies from 0 to 1. The probability is not used directly but takes the form of the expression Y/(1-Y), also known as the odds. In the event that the probability of a patient suffering a heart attack is 0.25, the odds would be 0.25/(1-0.25) = 1/3. The equation for the natural log of the odds is therefore ln[Y/(1-Y)] = a + bX. The slope and the intercept are used to determine the line of best fit. The maximum likelihood method is used to determine the parameter values under which the observed results are most probable. The method is computer-intensive, unlike the least-squares method employed in linear regression. The P value can be determined by a number of methods, although the most recommended is the likelihood ratio method (Brockopp, 2003).
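A minimal sketch of the cholesterol example above, using synthetic data (the numbers and the statsmodels library are assumptions for illustration, not the study's actual figures):

```python
# Synthetic illustration of simple logistic regression: does cholesterol level
# predict the probability of a heart attack within 15 years?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
cholesterol = rng.normal(220, 30, size=200)          # measurable (independent) variable
true_logit = -10 + 0.04 * cholesterol                # assumed data-generating model
heart_attack = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))  # nominal outcome

X = sm.add_constant(cholesterol)                     # adds the intercept a
fit = sm.Logit(heart_attack, X).fit(disp=False)      # maximum-likelihood estimation

a, b = fit.params
print(f"ln[Y/(1-Y)] = {a:.2f} + {b:.3f} * cholesterol")
print("Exp(B), the odds ratio per unit of cholesterol:", np.exp(b))

# Predicted probability (Y) and odds for a cholesterol level of 250:
y = fit.predict(np.array([[1.0, 250.0]]))[0]
print(f"P(heart attack | 250) = {y:.2f}, odds = {y / (1 - y):.2f}")
```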

Logistic regression has been cited as the best model for data analysis, as it is easier to interpret. Multiple logistic regression is used more often, as it allows researchers to make comparisons between the dependent and the independent (predictor) variables. Logistic regression results usually include the odds ratio, making the interpretation of data easier. The odds ratio has been widely used because it estimates the likelihood of an occurrence, and from it one can easily estimate the relative risk. If there is only one observable value for the nominal variable, then one does not have to use a scatter graph, as the Y value will be either at 0 or 1 (Ahuja, 2010).

The Use of Spreadsheets/SPSS

The use of spreadsheets for data analysis is vital in analyzing the data and checking for errors. SPSS uses nominal, ordinal, and scale data, and in SPSS the variables need to have short labels (Baker, 1988).

In SPSS, the Omnibus Tests of Model Coefficients table is critical in complex models; its purpose is to test the covariates in the model jointly. In addition, SPSS has a Model Summary table, which shows the fitness of the model in handling the respective data and allows for the comparison of various models. The output also includes the Classification Table, which allows for diagnostic testing; it has sections for what was observed, what was predicted, and the percentage of correct classifications. There is usually a need to set the cut-off field to a value different from the default, and in some cases the percentage correct is associated with sensitivity or specificity. Another well-known table in SPSS is the Variables in the Equation table, which has a column for the estimated log odds ratio, the Sig. column showing the p-value, and the Exp(B) column showing the odds ratio. The Risk Estimate table gives the odds ratio as well as various risk ratio information. There is also a Case Processing Summary table, which has information on missing cases as well as those cases that have not been selected; this prevents the unexpected loss of data. The Dependent Variable Encoding table shows which categories have been labeled as 0 and 1. If the results diverge from those expected, the statistician has to check here (Grove, 2005).

In the mentioned case, the cases coded 1 would represent the women who suffered a heart attack after the period of 15 years. The marginal percentage represents the proportion of the respective observations relative to the number of cases in that particular group; in the given case, this can be found by computing the proportion of those who succumbed to heart attack within the group. The parameter referred to as "Valid" stands for those observations in the data for which both the outcome and predictor variables are present. The "Missing" parameter stands for the missing data within the observations of a particular dataset or predictor. "Total" represents both the valid and missing observations in a given set of data (Grove, 2005).

Subpopulation represents a combination of certain predictor variables. B, the regression coefficient, stands for the change that occurs in the dependent variable for a unit increase in the predictor variable. Another tool is the t-test, which is a statistical comparison between two variables. This helps in determining whether the two variables are similar or different; their rates of change might, for instance, be identical. In the given example, if the population examined included both sexes (male and female), then the t-test would determine whether cholesterol level had the same effect on the probability of having a heart attack in both sexes or whether there was a difference (Grove, 2005). The F-statistic is vital in the analysis of variance as well as in linear regression; it is usually the square of the t-value. The null hypothesis, on the other hand, stands for the default position that can be supported or rejected depending on the findings from the research (Munro, 2000). In the mentioned case, the null hypothesis could be: cholesterol level has no effect on the probability of having a heart attack.
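A short sketch of the t-test comparison just described, with made-up samples (scipy is an assumed library choice):

```python
# Made-up samples illustrating a t-test comparing two groups' cholesterol levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
chol_women = rng.normal(220, 30, size=100)   # hypothetical female sample
chol_men = rng.normal(225, 30, size=100)     # hypothetical male sample

t_stat, p_value = stats.ttest_ind(chol_women, chol_men)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value gives no grounds to reject the null hypothesis that the
# two groups have the same mean cholesterol level.
```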

Regression analysis determines how strongly the dependent variable is related to the independent variables. From the given case, it is quite evident that logistic regression is very important in determining the probability of some condition occurring or not. Based on such deductions, one is able to give advice concerning a number of conditions, for instance in the medical field. In the given case, if it turns out that those with high cholesterol levels have a greater chance of developing a heart attack, then the doctor might advise his or her patients to reduce their cholesterol intake so as to avoid the risk of developing a heart attack (Munro, 2000).

References

Ahuja, R. (2010). Research Methods. New Delhi: Rawat Publications.

Baker, T. (1988). Doing Social Research. New York: McGraw Hill Book Co.

Brockopp, D. (2003). Fundamentals of Nursing Research. Boston: Jones and Bartlett.

Grove, N. B. (2005). The Practice of Nursing Research Appraisal, Synthesis, and Generation of Evidence. Texas: Saunders.

Munro, B. H. (2000). Statistical methods for health care research. Michigan: Lippincott Williams & Wilkins.

Re-Engineering Photosynthesis to Stimulate Crop Growth and Productivity

Introduction

With the growing population, meeting necessary food demands requires improvement of crop productivity, possibly by enhancing photosynthetic efficiency. Photorespiration happens in most C3 crops, leading to toxic byproducts such as glycolate. Processing these toxins is costly, as it utilizes a lot of the plant's energy, reducing the crop's photosynthetic efficiency by 20-50% (South et al., 2019). Although there are several ways of reducing the cost of photorespiration, altering photorespiratory pathways in the plant's chloroplasts has shown promising outcomes such as increased plant size and photosynthetic rates. This study was done to determine whether alternative photorespiratory routes can effectively improve the productivity of C3 field crops.

Body

The research was done by testing the performance of alternative photorespiratory pathways in field-grown tobacco. The first pathway used five Escherichia coli glycolate oxidation pathway genes. The second used malate synthase and glycolate oxidase from plants and catalase from E. coli, and the third used malate synthase from plants and green algal glycolate dehydrogenase (South et al., 2019). RNA interference (RNAi) was used to downregulate the native glycolate transporter of the chloroplast in the photorespiratory pathway, thus limiting metabolite flux through the native path.

The results indicated that the first pathway increased biomass by almost thirteen percent, while the second showed no change compared to field-grown tobacco. The third pathway improved biomass by twenty-four percent with RNAi and eighteen percent without RNAi. Field testing across seasons indicated a significant biomass increase for the third pathway, which also showed a seventeen percent increase in light-use efficiency during photosynthesis in the field (South et al., 2019). These results demonstrate that installing synthetic glycolate metabolic pathways into the chloroplasts of tobacco plants led to significant elevations in biomass accumulation in both agricultural field conditions and greenhouse environments.

Conclusion

To conclude, inhibiting the native photorespiratory pathway while engineering more effective pathways into tobacco increases both vegetative biomass and photosynthetic efficiency. The researchers are optimistic that similar outcomes may be realized in other C3 grain crops. For this novel application to impact society, people must be guaranteed proper knowledge and technology. With good application of this advance, the world could sustain a bio-based economy through growing crops in environmental niches, adequate control of inputs, and efficient utilization of limited resources. A better comprehension of fundamental processes such as photosynthesis is likely to lead to new scientific and technological initiatives in this era.

Reference

South, P., Cavanagh, A., Liu, H., & Ort, D. (2019). Synthetic glycolate metabolism pathways stimulate crop growth and productivity in the field. Science, 363(6422), eaat9077.

Carrots and Silverbeet: Review

Carrot is a root vegetable with a horn-like shape. It comes in different colors, with red, white, and yellow varieties; however, most varieties are orange. A fresh carrot has a crisp texture. Its taproot is commonly eaten, as it contains a high concentration of vitamins, although the green portions can also be eaten (Rose, 1).

Silverbeet has a range of colors and leaf textures. Its stem colors and plant sizes also differ among the varieties, which include Success, Fordhook Master, Fordhook Giant, and Compacta Slo Bolt, all of which are dark green. These varieties have different growing seasons, heights, and leaf colors (Wade, 2).

Botanical names of Carrot and Silverbeet

Carrot's generic name is Daucus and its specific name is carota; its scientific name is therefore Daucus carota subsp. sativus (Rose, 2). Silverbeet, at times called Swiss chard, is known scientifically as Beta vulgaris L. and is categorized under the Cicla group (Wade, 1).

Background of Carrot and Silverbeet

Initially, carrots were not grown for consumption of their roots but for their aromatic leaves and seeds, a tradition dating back to the 1st century. The carrot that is currently grown was introduced into Europe between the 8th and the 10th centuries, and the orange-colored variety appeared in the Netherlands in the 17th century. John Aubrey intimated in his memoranda that carrots were first sown in Beckington. The wild ancestors of the carrot are thought to have come from Iran and Afghanistan, which are considered the center of diversity of the wild carrot. Selective breeding of the wild subspecies has managed to reduce the bitterness, increase the sweetness, and reduce the percentage of the woody core. In spring and summer, the carrot grows a rosette of leaves as it builds up the stout taproot, in which large amounts of sugars are stored (Rose, 3).

Silverbeet is mainly grown as a leaf vegetable. Whereas its leaves can be eaten like those of spinach, its stems can also be cooked. It bears a strong resemblance to spinach; however, it has larger, coarser, milder-tasting leaves. Unlike spinach, it can tolerate cold, heat, drought, and disease. Silverbeet and spinach both belong to the Chenopodiaceae family, to which the root vegetable beetroot also belongs. Originally grown in Portugal, Spain, and the Mediterranean islands, silverbeet has today spread to Britain, Australia, and New Zealand, among other countries (Wade, 2).

Why carrot and Silverbeet are grown

Carrot is grown because of its nutritional value to humans. It has beta-carotene, which is normally metabolized into vitamin A, which helps in maintaining good vision. Apart from that, it also provides dietary fiber, antioxidants, and minerals (Rose, 4).

Silverbeet is good for human consumption because it is low in saturated fat and cholesterol. It has high dietary fiber and contains vitamins A, C, E, and K. It is also rich in calcium, iron, magnesium, and phosphorus. These nutrients make it ideal for weight loss and maintenance of optimum health (Wade, 3).

Growing season

The growing season for silverbeet varies among the varieties. For instance, Compacta Slo Bolt is grown between the 10th and 14th weeks of the year, while Fordhook Giant, Fordhook Master, and Success are grown between the 9th and 10th weeks (Wade, 3). Carrot should be grown in seasons with full sun, when it grows best (Rose, 3).

Crop distribution

Silverbeet can grow in a wide range of climatic conditions, including sub-tropical, temperate, and cold-temperature climates (Wade, 1); however, it grows best in coastal regions. Carrot grows best in regions with full sun (Rose, 2).

Pest and disease

Carrot is affected by a series of bacterial, viral, fungal, nematode, and parasitic infections. These include bacterial leaf blight, Alternaria leaf blight, cyst nematodes, and alfalfa mosaic (Rose, 2). Pests and diseases that affect silverbeet include Cercospora leaf spot and the beet webworm moth (Wade, 3).

Management practices

Silverbeet should be harvested using gloves to avoid cuts and abrasions that may increase the chances of a disease being transferred mechanically from one plant to another. Because carrots are affected by a number of disease-causing agents, care has to be taken to ensure that both preventive and curative measures are in place to avoid losses attributable to infection.

Works Cited

Rose, Francis. The Wild Flower Key. London: Fredrick Warne, 2006. Print.

Wade, Stephen. Silverbeet growing: Prime Facts. 2011. Web.