Qualitative Research on Suicidal Behavior

Research plays a critical role in the scientific world, discovering and describing various phenomena, events, and causal relationships, thereby promoting the resolution of numerous issues. It can be applied in many industries, especially healthcare, education, psychology, social science, marketing, and business overall. Depending on the research question, scholars use different research designs, including interviews, surveys, cross-sectional studies, or quasi-experimental studies, to collect and examine data related to the target topic. This paper concerns the following question: What actions prevent individuals from engaging in suicidal behavior? The research type chosen as suitable for the given question is qualitative research.

The Main Steps in Qualitative Research

A research method is an integral part of the study, containing procedures and techniques for collecting, analyzing, and interpreting data to gain in-depth insight into a particular topic. When examining data, qualitative research aims at gathering and investigating information about meanings, lived experiences, behavior, and social phenomena and interactions (Research methods, 2020). In particular, this type of research is beneficial in understanding how and why phenomena or events occur and explaining necessary actions.

The first step in qualitative research is to define a pertinent and clear question or, in other words, what to study. The second step can be an overview of existing literature to obtain a better understanding of the research topic and its critical aspects (Sauro, 2013). In the third stage, researchers should select the design of the study, including the participants, the sample size, and the research method. In qualitative research, the method can comprise interviews, surveys, contextual inquiries, observations, focus groups, phenomenology, and others. The fourth step is directed at collecting and analyzing data using one of the research approaches. Analyzing data usually involves coding related themes, ideas, and concepts and conducting content analysis. The fifth stage includes synthesizing, generating, and validating findings from the research. The final step is providing a report on the outcomes, uncovering and underscoring the core points of the study.
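To make the coding step concrete, a minimal sketch in Python, assuming interview excerpts have already been tagged with hypothetical theme labels (the excerpts and labels below are invented purely for illustration), might simply tally how often each theme occurs:

from collections import Counter

# Hypothetical coded excerpts: each interview fragment has been assigned a theme label.
coded_excerpts = [
    ("I stopped answering calls from friends", "social withdrawal"),
    ("The hotline counselor stayed on the line with me", "crisis support"),
    ("My therapist taught me to notice the warning signs", "safety planning"),
    ("I did not want to see anyone for weeks", "social withdrawal"),
]

# Content analysis, in its simplest form, counts how often each coded theme appears.
theme_counts = Counter(theme for _, theme in coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")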

Research Approach

The best approach to gathering the data needed for the research question is a literature review. A literature review is an examination of scholarly sources, such as books, peer-reviewed journal articles, reports, reliable websites, and other relevant publications, on a particular theme. It focuses on current knowledge, which facilitates detecting specific concepts, methods, interventions, or gray areas in the target field. It is worth noting that a literature review does not imply only summarizing the reviewed sources; rather, it analyzes, critically evaluates, and synthesizes all of the retrieved data to gain a comprehensive understanding (Literature reviews, n.d.). The indispensable elements of the literature review include a structured outline and inclusion criteria. In this regard, the sample can comprise articles connected with suicide prevention strategies, best practices, interventions, and elements, as well as problems thwarting therapy's effectiveness.

In terms of the present research question, a literature review helps find essential information about recent achievements in suicide prevention policy. In particular, it can be directed at revealing the most advanced practices and strategies applied by the global scientific community to avert suicidal intentions. In addition, a review can provide useful information about widespread obstacles, prevalent issues, and the primary points needing urgent consideration when performing therapy or counseling. It also contains other relevant statistics and descriptions of terms and tendencies that can support developing appropriate preventive activities. In general, literature reviews equip scholars and clinicians with a handy guide to the studied issue, offering useful information and specific actions that have emerged in recent years.

The Explanation of Potential Data

The literature review can target systematic reviews, case studies, cross-sectional research, action research, surveys, and observations. The studies included should have been conducted within the last five or, at most, ten years. As mentioned above, studies should comprise terms related to the theme, that is, actions that prevent suicides among adolescents. Sources should also indicate widespread reasons, factors, and adverse environments contributing to suicidal intent.

The Usefulness of the Literature Review

Suicidal intent is a complex individual behavior formed under many interrelated inner and outer determinants, including the social environment, a person's characteristics, mental disorders, and others. Most practitioners and therapists encounter difficulties in comprehending the substantial evidence base connected with suicide prevention interventions, especially in evaluating their effectiveness and relevance. In this regard, the literature review can provide them with information about the necessary actions and activities, helping to reduce suicidal intent and improve patients' well-being. For example, Menon et al. (2018) recommend that consultants focus on recognizing depression and suicide risk in individuals to provide timely therapy. The researchers also describe the most commonly used practices for suicide prevention, such as awareness programs, gatekeeper training, screening, media strategies, hotlines, and pharmacological and psychotherapeutic approaches. Moreover, the literature review can contain descriptions of these practices and particular interventions, while considering the multiple challenges, factors, settings, and other aspects many clinicians face in their workflow. Overall, it can serve as a handy guide for consultants who work with patients, parents, risk groups, policymakers, and the public.

Conclusion

In summary, the paper has examined the research design for the following question: What actions prevent individuals from engaging in suicidal behavior? The paper has provided specific steps to conduct qualitative research, including defining a clear question, reviewing existing literature, selecting the study design, collecting and analyzing data, and reporting findings. A literature review has been selected as the research approach to detect and describe the most advanced interventions for averting suicidal intentions.

References

Literature Reviews. (n.d.). University of North Carolina at Chapel Hill. Web.

Menon, V., Subramanian, K., Selvakumar, N., & Kattimani, S. (2018). Suicide prevention strategies: An overview of current evidence and best practice elements. International Journal of Advanced Medical and Health Research, 5(2), 43-51. Web.

Research methods: What are research methods? (2020). The University of Newcastle, Australia. Web.

Sauro, J. (2013). 7 Steps to conducting better qualitative research. MeasuringU. Web.

Description of the Radiopharmaceutical

Introduction

A radiopharmaceutical, used in nuclear medicine, is made up of one or more radioactive isotopes called radionuclides. Radiopharmaceuticals are pharmaceuticals prepared for use in the treatment and/or diagnosis of disease in humans. The radioactive elements 133Xe and 131I-NaI, and labelled compounds such as 131I-iodinated proteins and 99mTc-labeled compounds, are a few examples of radiopharmaceuticals. A radiopharmaceutical has two main components: a radionuclide that gives the preferred radiation qualities, and a specified chemical compound that determines the in vivo distribution and physiological behavior of the radiopharmaceutical. A radionuclide, which is an unstable nucleus, may decay by emitting various kinds of ionizing radiation: gamma (γ), positron (β+), beta (β−) and alpha (α) radiation (Sharp, Gemmel & Smith, 2005).

Alpha emitters can be described as mono-energetic and display a very short range in matter owing to their mass. This has the effect of depositing almost all of their energy in a very small area (extending over a few cell diameters). Alpha emitters are applied only for therapeutic purposes. Their applications in clinical medicine are very rare, and they are predominantly used for research and development. Radionuclides enriched with neutrons decay by emitting beta (β−) radiation. Beta emitters have diverse energies and varying ranges in matter (from 40 to 100 μm) determined by their energy. Like alpha emitters, beta-emitting radionuclides are applied primarily in therapeutic radiopharmaceuticals. Positron (β+) decay takes place in proton-rich nuclei (Sharp, Gemmel & Smith, 2005).

A precise determination of the time-dependent activity in body tissues is necessary for calculating the absorbed dose to the affected regions of the body. The applied schema, which gives the key methods for calculating the absorbed dose from internally accumulated radionuclides, is the Medical Internal Radiation Dose (MIRD) schema. Whole-body retention is resolved by quantitative imaging, by complete quantitative recovery of body excreta, by non-imaging observation using an externally placed probe, and by direct blood sampling, which yields results for blood activity (Loevinger & Berman, 1976).

In order for the applied schema to work, three stages are necessary. The first stage is the accumulation of data; the second is the analysis of data; the last is the processing of data. This paper describes only the second phase, that is, the analysis of data, covering the techniques used for quantitative measurement. These techniques include planar imaging with a scintillation camera, PET, and SPECT. Non-imaging techniques are omitted altogether. Rigorous mathematical derivations are omitted, but a few formulae are included (Loevinger & Berman, 1976).

Techniques of Planar Imaging using Scintillation Camera

There are several factors that influence the precision of measuring the quantity of radioactivity with a scintillation camera. These include limitations in energy resolution; degradation of spatial resolution caused by high-energy photons penetrating the thin collimator septa and by the effects of radiation scattered in the organism; statistical noise related to low counts and other interferences; the intrinsic resolution of the NaI(Tl) scintillation crystal, Compton scatter, and the collimator, which together determine the spatial resolution of planar images; and geometric sensitivity (the fraction of emitted photons per unit time that reaches the crystal from a given solid angle), which depends upon the collimator used. Examples of collimator types are converging, pinhole, diverging, and parallel-hole collimators. The collimators approved for measuring the quantity of radioactivity are parallel-hole collimators, which have less geometric distortion than the other types.

Scintillation camera spatial resolution weakens as the distance between the source and the detector increases. Therefore, the closer the subject is to the detector, the better the spatial resolution. Data on the activity of radiopharmaceutical doses absorbed in the whole body and in identifiable regions are obtained from planar scintillation camera views. For radiopharmaceuticals distributed in a single region, or in isolated, non-superimposed regions in the planar projection, this method gives optimum accuracy. The majority of scintillation camera systems used with radiopharmaceuticals are computer-based, and software is available for automated or semi-automated acquisition of complex data and for statistical analysis (Thomas, Maxon & Kereiakes, 1988).

Technique of Conjugate View Counting

The conjugate view counting method is favoured by many and is often applied in imaging measurements of radioactivity. It combines a system calibration factor, transmission data measured through the subject, and a pair of 180° opposed planar images. This technique presents an enhancement over the single-view approach, in which a specified phantom is compared under predetermined geometry, because the thorough mathematical theory behind conjugate-view measurements provides adjustment for attenuation, inhomogeneity, and source thickness. The results of the calculations are, in theory, independent of the tissue depth of the source. For situations where the activity of the source region is not time-dependent, a single conjugate-view measurement is generally acceptable. Present-day dual-head scintillation cameras offer suitable methods and efficient protocols for simultaneous acquisition of the two images and generally permit whole-body A/P scans. However, single-head camera arrangements can be used, with repeated imaging and repositioning as necessary to obtain the conjugate view. The system calibration factor is necessary to translate the source-region count rate into resolved activity. It is important to measure the calibration factor at each acquisition time point, to document that the system response stays constant or to explain any variation in performance that might influence the examined count rate. The conjugate-view image pair is, in relation to the source region, generally an anterior and posterior (A/P) image; however, any true 180° opposed arrangement can be applied.

Two examples of mathematical calculation formulae are given below.

For an isolated single source region the calculation is as follows:

A_j = \sqrt{\frac{I_A \, I_P}{e}} \cdot \frac{f_j}{C}, \qquad f_j = \frac{\mu_j t_j / 2}{\sinh(\mu_j t_j / 2)}

  • where A_j is the source activity;
  • I_A and I_P are the anterior and posterior count rates (counts/time);
  • e is the transmission factor measured across the patient thickness;
  • f_j is the correction for source self-attenuation, determined by the source attenuation coefficient μ_j and the source thickness t_j; and
  • C is the count rate per unit activity (the system calibration factor).

For multiple overlapping source regions, extended forms of this expression are used.
The examples given are for source regions whose surrounding background activity can be ignored. Other methods associated with rigorous conjugate-view calculations include simple background subtraction; analytical formalism; the pseudo-extrapolation number method; build-up factor methods; multi-energy window methods; and digital filtering techniques, among others. However, there are situations where features or volumes are shown by the scintillation camera in one view only; in such cases a method applying a reference point source and a single straightforward view is handy (Leichner et al., 1993).
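As a hedged numerical illustration of the single-region conjugate-view formula given above, the sketch below uses invented count rates, an assumed transmission factor, and a hypothetical calibration factor; none of these numbers come from the cited sources.

import math

# Hypothetical inputs for a single isolated source region (illustrative values only).
I_A = 1.2e4   # anterior count rate (counts per minute)
I_P = 0.9e4   # posterior count rate (counts per minute)
e = 0.25      # transmission factor measured through the patient thickness
mu_j = 0.11   # assumed source-region attenuation coefficient (1/cm)
t_j = 4.0     # assumed source-region thickness (cm)
C = 2.0e3     # system calibration factor (counts per minute per MBq)

# Source self-attenuation correction f_j = (mu*t/2) / sinh(mu*t/2).
x = mu_j * t_j / 2.0
f_j = x / math.sinh(x)

# Conjugate-view estimate of the source activity A_j.
A_j = math.sqrt(I_A * I_P / e) * f_j / C
print(f"Estimated source activity: {A_j:.2f} MBq")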

The techniques applied in this field of nuclear medicine are numerous, and two more approaches, SPECT and PET, are highlighted below. Some regions have backgrounds that are irregular in structure and intrinsically lack homogeneous composition. Complications are inevitable, calculations of radiopharmaceutical activity become very involved and sophisticated, and interferences make the situation even more complicated; actual measurements of activity concentration become a formidable task. The conjugate-view counting techniques described above cannot be applied in this scenario, and this is where SPECT comes into operation. In this category there are varieties of SPECT techniques, including multiple-detector SPECT systems; multiple cameras on a ring system; fan-beam and cone-beam geometries; single rotating scintillation cameras; and the use of filters to reduce the effects of statistical uncertainty, among others. Many factors connected with the imaged regions affect the accuracy of the results: statistical noise, intrinsic sensitivity, spatial resolution, energy resolution, transmission attenuation, tissue densities, geometric structural formations, limitations of quantitative measurement, and many more.

PET provides the most accurate techniques for calculating radiopharmaceutical activities in situations where the methods described above are not adequate and/or where cost is not the top priority. Automated, computer-driven systems are highly desirable in the PET domain. Hybrid types of sophisticated digital techniques are common for PET, and accurate multi-dimensional calculations are carried out automatically. Needless to say, PET is very expensive and rarely found in ordinary institutions; top scientific research laboratories have equipment employing PET technology. PET is occasionally applied in measuring the activity of a positron emitter as a surrogate for another radionuclide of the same element, where activity measurements and absorbed-dose estimates are desired. The assumption is made that the two isotopes have the same biokinetic behavior. In contrast to most positron emitters, 124I has an amply prolonged half-life that allows imaging over many days throughout the biologic uptake and washout of the agent. Consequently, when a comparable positron-emitting isotope with a physical half-life that is adequately long relative to the pharmacokinetics is available, PET imaging can potentially enhance the accuracy of activity measurements (Thomas, Maxon & Kereiakes, 1988).
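To illustrate only the physical-decay side of that argument, the short sketch below computes the activity remaining after several days of imaging; the half-life of roughly 4.2 days for 124I and the administered activity are assumptions used for illustration, not values from the cited sources.

import math

half_life_days = 4.2          # assumed physical half-life (about 4.2 days for 124I)
initial_activity_mbq = 100.0  # hypothetical administered activity in MBq

for day in (0, 1, 2, 4, 7):
    # Simple exponential decay: A(t) = A0 * exp(-ln(2) * t / T_half), ignoring biologic washout.
    remaining = initial_activity_mbq * math.exp(-math.log(2) * day / half_life_days)
    print(f"day {day}: {remaining:.1f} MBq remaining (physical decay only)")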

Conclusion

The computational methods highlighted in this paper apply the MIRD schema for absorbed-dose evaluation. Many analytical methodologies for present and future applications, and the likely design of experimental trials, have been discussed through exhaustive consideration of the information already available and published on the radiopharmaceutical, in order to assess the suitable number of experimental sampling points to be acquired, supported by uptake and retention characteristics. A correct determination of the time-dependent activity in situ is very important because it is necessary for calculating the absorbed dose to the regions concerned. Answering the questions of the overall activity A and residence time T entails establishing a feasible plan for data compilation, analysis, and dissemination. After categorizing all source regions with sufficient sequential sampling, the total activity in each of these regions as a function of time should be established. Quantitative measurement methods, including conjugate-view planar imaging as well as SPECT and PET imaging, have been presented and explained, together with their areas of application, suitability, and limitations. Nevertheless, not all imaging techniques have been covered; the scope is very wide, and only the most important and conventionally used methods have been selected for this paper.

References

Loevinger, R. & Berman, M., 1976. Revised schema for calculating the absorbed dose from biologically distributed radionuclides, MIRD Pamphlet No. 1 Revised. New York: The Society of Nuclear Medicine.

Leichner, P. et al., 1993. An overview of imaging techniques and physical aspects of treatment planning in radioimmunotherapy. Medical Physics, 20, pp. 569-577.

Sharp, P., Gemmel, H. & Smith, F., 2005. Practical Nuclear Medicine. Oxford, UK: University Press.

Thomas, S., Maxon, R. & Kereiakes, J., 1988. Techniques for quantitation of in vivo radioactivity. In: Gelfand, M. & Thomas, S. (eds.) Effective use of computers in nuclear medicine. New York: McGraw-Hill.

Identification of an Unknown Sample

Introduction

This laboratory test aimed to identify unknown species from a mixed culture. The researcher conducted three tests in the laboratory to ascertain the unidentified species: the Columbia CNA Agar Test, the Eosin Methylene Blue Agar Test, and the Phenol Red Test (Vega & Dowzicky, 2017). The analysis indicated that the unknown species were the bacteria Proteus mirabilis and Streptococcus pyogenes. Bubbles were not observed in the Columbia CNA Agar Test, indicating that catalase was not produced. Colonies with dark halos appeared; the Columbia Agar Test did not break down the red blood cells, indicating that the specimen was Gram-positive.

Bubbles were observed in the Eosin Methylene Blue Agar Test, indicating that catalase was produced. Bacteria are categorized as Gram-positive or Gram-negative depending on their response to staining with a chemical dye during a standard biochemical technique (Vega & Dowzicky, 2017). The possible results during the Phenol Red Test include colored colonies, which indicate that the species is Gram-positive because it ferments lactose. The sample in the test tube turned yellow, showing that the pH of the medium dropped because of acid production from fermentation of the sugars available to the species. Bubbles were not observed during the Phenol Red Test, indicating that catalase was not produced. The researcher used the unknown specimens described in this report to identify the bacteria.

The tester doing this lab test has sufficient information relating to the procedures used in researching and cultivating microbes. The scientist will identify the unknown specimens by analyzing their sugar fermentation, growth properties, and enzyme production (Vega & Dowzicky, 2017). Identifying the unknown species will help the researcher distinguish various bacteria via distinct biochemical characteristics and tests. This is crucial in medicine because identifying unknown bacteria helps treat patients by revealing the bacteria or species contributing to the origin of the illness. Additionally, being knowledgeable about various bacteria is vital because it enables people to develop more antibiotics that patients will use in the future.

Methodology

To inoculate the liquid media, the scholar touched a pure, isolated colony with a sterile loop, immersed the loop in distilled water, and thoroughly mixed the solution. To inoculate an agar plate medium, the fieldworker placed a small specimen onto the agar plate using a sterile swab; the scientist then lifted the lid of the petri dish to inoculate an agar slant medium (Vega & Dowzicky, 2017). The researcher added 5% of the unknown specimens to distilled water to conduct the Columbia CNA Agar Test. The mixture was thoroughly mixed and heated until it boiled. The experimenter then autoclaved the solution for 15 minutes at 121 degrees Celsius (Vega & Dowzicky, 2017). The solution was then allowed to cool to 50 degrees Celsius.

For the Eosin Methylene Blue Agar Test, the researcher suspended 35.96 grams of the medium in 1,000 ml of distilled water and mixed the solution until the suspension was uniform. The researcher then heated the mixture until it completely dissolved. The expert sterilized the mixture by autoclaving it at 121 degrees Celsius for 15 minutes. The third test that the researcher conducted was the Phenol Red Test. The scholar used an inoculating needle and aseptically placed the 30 grams of the sample in a test tube (Vega & Dowzicky, 2017). After that, the scholar incubated the test tube holding the species at a temperature between 35 and 37 degrees Celsius for approximately 18 to 24 hours and checked for color changes.

Results

Bubbles were not observed in the Columbia CNA Agar Test, indicating that catalase was not produced. Colonies with faint halos appeared; the Columbia Agar Test did not break down the red blood cells, indicating that the specimen was Gram-positive. Bubbles were observed in the Eosin Methylene Blue Agar Test, indicating that catalase was produced. Colorless colonies also appeared, indicating that the specimen is Gram-negative because it did not ferment lactose. Colored colonies observed in the Phenol Red Test show that the sample is Gram-positive because it ferments lactose. The sample in the test tube turned yellow, indicating that the pH dropped because of acid production from the sugar fermentation of the species. Bubbles were not observed during the Phenol Red Test, indicating that catalase was not produced. The species in the test tube did not produce any gas from fermentation, indicating that the organism was anaerogenic.
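A minimal sketch of the interpretation logic behind these readings, written as simple rules in Python, only restates the observations reported above; the variable names and the mapping are illustrative, not a general identification key.

# Hypothetical summary of the readings reported above for the unknown sample.
observations = {
    "columbia_cna_bubbles": False,     # no bubbles: catalase not produced
    "emb_bubbles": True,               # bubbles: catalase produced
    "emb_colorless_colonies": True,    # colorless colonies: lactose not fermented
    "phenol_red_turned_yellow": True,  # yellow medium: acid from sugar fermentation
    "phenol_red_gas": False,           # no trapped gas: anaerogenic organism
}

# Restating the report's own interpretations as rules.
if not observations["columbia_cna_bubbles"]:
    print("Columbia CNA: catalase-negative reaction")
if observations["emb_colorless_colonies"]:
    print("EMB: colorless colonies, consistent with a lactose non-fermenter")
if observations["phenol_red_turned_yellow"]:
    print("Phenol red: acid produced from sugar fermentation (indicator turned yellow)")
if not observations["phenol_red_gas"]:
    print("Phenol red: no gas produced (anaerogenic)")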

Discussion

The Eosin Methylene Blue Agar Test is essential in distinguishing Gram-negative from Gram-positive bacteria. Bacteria that use oxygen generate the catalase enzyme to respire and to protect the microbes from the toxic by-products of oxygen metabolism (Vega & Dowzicky, 2017). Eosin Methylene Blue Agar differentiates and isolates Gram-negative enteric bacteria from nonclinical and clinical specimens. The researcher used the Antimicrobial Drug Susceptibility Disc Diffusion Assay to determine which antimicrobials inhibit the growth of the microorganisms causing infections (Vega & Dowzicky, 2017). Columbia Agar is a very nutritious medium applied to isolate and grow different microorganisms, specifically fastidious bacteria such as pneumococci and streptococci, in animal samples. Scholars use Columbia Agar for general purposes; researchers can improve the technique by using pure blood. The Phenol Red Test is used for general purposes in the lab to distinguish Gram-negative enteric bacteria; it is also used to differentiate microorganisms depending on their fermentation reactions.

The Phenol Red Test has a pH indicator (phenol red), peptone, one carbohydrate (either sucrose, lactose, or glucose), and a Durham tube. The lab tests performed by the learner on the unknown specimens detected that the species were Proteus mirabilis and Streptococcus pyogenes. During the Eosin Methylene Blue Agar Test, the species produced colorless colonies, appeared Gram-negative, and did not ferment lactose (Vega & Dowzicky, 2017). Colorless colonies also appeared in the test tube, indicating that the specimen is Gram-negative because it did not ferment lactose. The sample in the test tube turned yellow, showing that the pH dropped because of acid production from fermentation of the sugars available to the species. Colored colonies observed in the Phenol Red Test show that the sample is Gram-positive because it ferments lactose. The presence of phenol red is not critical for maintaining cell cultures. Researchers often use the Phenol Red Test as a quick way of detecting unknown species. Health practitioners could use the test results to select effective medicines when treating patients infected with unknown species.

Reference

Vega, S., & Dowzicky, M. (2017). Antimicrobial susceptibility among Gram-positive and Gram-negative organisms collected from the Latin American region between 2004 and 2015 as part of the Tigecycline Evaluation and Surveillance Trial. Annals of Clinical Microbiology and Antimicrobials, 16(1).

Methodology and Rationale for the Research

Methodology

The proposed research aims to investigate whether people with Down syndrome are less likely to exhibit aggressive behaviors if they are provided with one-on-one care. Typically, people with Down syndrome who do not receive such assistance tend to demonstrate aggressive conduct towards other people (Valentini et al., 2021). Therefore, it is crucial to research whether one-on-one care can make any difference. The sample consists of 20 individuals with Down syndrome aged between 16 and 21.

Ten of them receive one-on-one care, and the rest receive regular treatment and assistance from medical staff. All participants in the study have moderate to significant mental retardation. Since the purpose of the study is to investigate whether one-on-one therapy affects the conduct of people with Down syndrome, the type of care provided can be treated as the independent variable. In turn, the behavior change is the dependent variable.

Before conducting the experiment, it is necessary to review existing evidence related to the topic. After collecting theoretical data regarding the intervention of one-on-one care for patients with Down syndrome, the experiment should take place. The group of 10 people will be provided with one-on-one care, while the rest of the participants will be guided by both their relatives and medical staff. The experiment is expected to last for 30 days, during which patients' mental activity will be evaluated in terms of the behavior they demonstrate. Hence, it will be possible to measure the dependent variable as a result of such interaction. The results of the experiment will be presented by measuring patients' mental activity within the specified period.
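A minimal analysis sketch, assuming the behavior measure is recorded as a single numeric score per participant at the end of the 30 days (the scores below are invented placeholders, not study data), could compare the two groups with an independent-samples t-test:

from scipy import stats

# Hypothetical end-of-study behavior scores (higher = more aggressive conduct).
one_on_one_care = [12, 9, 14, 10, 11, 8, 13, 10, 9, 12]    # 10 participants with one-on-one care
standard_care = [18, 15, 20, 17, 16, 19, 14, 21, 18, 17]   # 10 participants with regular assistance

# Independent-samples t-test comparing mean behavior scores between the groups.
t_stat, p_value = stats.ttest_ind(one_on_one_care, standard_care)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")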

Rationale

The proposed research will provide an understanding of how one-on-one care affects people with Down syndrome. By investigating the effects of Down syndrome, scientists can help improve the lives of those who live with it on a daily basis. This will support earlier treatment, which can lead to changes in children's development, and help find ways for people with Down syndrome to live healthier, happier, more productive, and independent lives. This research can also help identify the reasons why some people with Down syndrome are at higher or lower risk of certain conditions; because of this, it can also help the tens of millions of people living with such conditions.

What is more, much is still unknown about the effect of specific treatments on people with Down syndrome, since their behavior is unpredictable in most cases. This investigation will help to establish whether one-on-one care is suitable for patients with Down syndrome. It will also identify whether this method can help alleviate patients' conditions and change behavior. Aggressive behavior in patients with Down syndrome is reported to be likely violent and uncontrollable (Valentini et al., 2021). Since one-on-one care is considered to reduce aggressive conduct, this hypothesis should be tested.

It is also indispensable to investigate whether the therapeutic method is better than medical treatment, since the latter may worsen one's cognitive abilities while the former can bring about change. Through the experiment, we may recognize the necessity of implementing one-on-one care in hospitals to assist people with Down syndrome. This will raise medical staff awareness and let them treat such patients differently. It is very probable that such an intervention will allow researchers to investigate the effects of this method more thoroughly or in combination with other methods.

Reference

Valentini, D., Di Camillo, C., Mirante, N., Vallogini, G., Olivini, N., Baban, A., Buzzonetti, L., Galeotti, A., Raponi, M., & Villani, A. (2021). Medical conditions of children and young people with Down syndrome. Journal of Intellectual Disability Research, 65, 199-209. Web.

Each Human Being as the Owner of a Library of Ancient Information

Human beings have been trying to decipher their origin and roots for centuries. They long ago observed that living things inherit traits from their parents and used this finding to cultivate certain features in animals and plants. But the means of such heredity remained a mystery for a long time, until in 1953 the DNA molecule was described by James Watson and Francis Crick and proclaimed the bearer of all the genetic information a human possesses. A new era in genetic science began, facilitating revolutions in spheres ranging from medicine to law.

The deoxyribonucleic acid (DNA) molecule comprises two long, spiral-shaped chains made up of nucleotides, each containing three molecules (a base, a phosphate, and the sugar deoxyribose). The bases in DNA nucleotides are adenine (A), thymine (T), guanine (G), and cytosine (C), paired in a corresponding manner and forming the DNA code of each individual. Amazingly enough, research has shown that using only those four base pairings (A to T, T to A, G to C, and C to G), the human genome is capable of encoding a whole life, whilst the Microsoft Windows XP operating system requires a code 200 times larger in order to operate one personal computer (Moore). The DNA molecules constitute genes, which in turn are responsible for the formation of the proteins that carry out the chemical reactions of the body.
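A small sketch of this pairing rule, using an arbitrary made-up sequence rather than any real gene, shows how one strand determines its complement:

# Watson-Crick base pairing: A-T, T-A, G-C, C-G.
pairing = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    # Return the complementary DNA strand under the base-pairing rule.
    return "".join(pairing[base] for base in strand)

sequence = "ATGCGTAC"  # arbitrary example sequence
print(sequence, "->", complement(sequence))  # prints: ATGCGTAC -> TACGCATG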

The uniqueness of DNA consists in the fact that, firstly, it possesses the ability to replicate itself precisely in each cell of the body; and secondly, the extremely varied layout of DNA allows it to encode information for the huge number of proteins comprising living bodies. DNA enables storing and transferring from generation to generation an almost unimaginable variety of inherited personal features, from the parameters of skin and hair structure and color to the intricacies of brain development. Located within the gene, DNA functions as the holder of a unique pattern necessary for the individual's growth, development, differentiation, and reproduction. This makes DNA especially valuable for research purposes in the most varied spheres of human activity.

In medicine, the analysis of mutations in different genes helps to predict possible hereditary illnesses and fosters the development of treatment for the diseases caused by those mutations; focused research on the two specific genes responsible for processing widespread drugs allows doctors to prescribe more accurate treatment to individual patients. In law, forensic medicine widely exploits DNA analysis, the results of which constitute strong evidence, in order to identify the actual criminals. Catastrophe victims can also be efficiently identified by their DNA.

Other exciting fields applying DNA research are history and anthropology: the recently emerged discipline of genetic anthropology makes use of the evolutionarily stable mitochondrial DNA and Y chromosome to trace the patterns of human migration all over the world. In an attempt to solve the captivating mystery of races, forensic research highlights the common ancestry of people and strives to answer, inter alia, questions about the impact culture has had on human genetic variation, the ways cultural practices have affected human patterns of genetic diversity, and the reason why people look different from each other if they share a recent common ancestry; besides, forensic studies' databases assist medical research focusing on the ethnic distribution of genetic diseases (Human Genome Project).

As it appears, human beings are indeed bearers of unique historical information encrypted in their DNA code, which is individual to every person and constitutes a mystery yet to be solved.

Works Cited

Human Genome Project. Genetic Anthropology, Ancestry, and Ancient Human Migration. 2008. Web.

Moore, Todd. Man vs. Windows XP. 2009. Web.

Sampling Methods in Evaluating Research

It is true that for researchers, the use of sampling techniques in statistical testing is critical. For example, the concept of probability or nonprobability sampling can be used to form a group of participants. Smallidge et al. (2018) used a quantitative cross-sectional approach to assess licensed dental hygienists' awareness, gathered through nonprobability sampling. Thus, the target population was professional dental hygienists, while the general population was Maine residents. Specifically, an invitation to participate in the study was sent to each of the professionals on a pre-prepared list of the target population (dental hygienists licensed in the local community). Within three weeks, 268 (21%) of the 1,284 questionnaires were returned, and the results were processed with both descriptive statistics and thematic analysis. I think this is an acceptable research method, but the percentage of those who participated seems relatively small compared to the target population.

In contrast, a recent study by Comassetto et al. (2021) assessed the need for dental care among homeless people; a probability sampling method was used. More specifically, out of 242 homeless people, 214 (88.4%) consented to participate in a survey and an oral dental examination. Participants were randomly selected, but the selection was based on two inclusion criteria. The homeless community was the target population, but only local residents of the city of Porto Alegre were available for the study; the predominant majority of the sample was male. The results were processed using chi-square, t-, and Mann-Whitney tests. This is an excellent study, using the right methodology and showing a high response rate.
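The response rates cited in both studies are simple ratios of the figures reported above, which a quick calculation confirms:

# Response rates reported in the two studies discussed above.
smallidge_returned, smallidge_sent = 268, 1284
comassetto_consented, comassetto_approached = 214, 242

print(f"Smallidge et al. (2018): {smallidge_returned / smallidge_sent:.1%}")            # about 20.9%, reported as 21%
print(f"Comassetto et al. (2021): {comassetto_consented / comassetto_approached:.1%}")  # about 88.4%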

The IRB represents a particular area of research ethics that provides equality and social protection for trial participants. In other words, because of the Board's existence, subjects are not expected to experience discrimination or disadvantage. Of the two studies, only Smallidge et al. (2018) received IRB approval (IRB102815S).

References

Comassetto, M. O., Hugo, F. N., Neves, M., & Hilgert, J. B. (2021). Dental pain in homeless adults in Porto Alegre, Brazil. International Dental Journal, 1-8.

Smallidge, D., Boyd, L. D., Rainchuso, L., Giblin-Scanlon, L. J., & LoPresti, L. (2018). Interest in dental hygiene therapy: A study of dental hygienists in Maine. American Dental Hygienists Association, 92(3), 6-13.

Aseptic Technique and Use of Media

Purpose

The purpose of this lab experiment is to equip learners with the essential laboratory techniques and skills employed to avoid contamination of microbial cultures and to maintain their purity. This experiment will allow us to utilize aseptic technique to inoculate a pure culture of lyophilized Escherichia coli into broth, onto slants, and onto a plate with precision, keeping the samples pure and free of contamination.

Discussion

This experimental exercise demonstrates basic laboratory practices necessary for handling and studying various microorganisms. To prevent contamination of microbial cultures, aseptic technique was employed. The lyophilized bacterial culture was reactivated and successfully sub-cultured into broth and onto agar through the application of aseptic technique. Aseptic techniques are critical in laboratory experiments and tests involving microorganisms, preventing contamination and the resulting interference with test results.

To illustrate subculturing and aseptic technique, this experiment utilized Escherichia coli. The experiment aimed at keeping the culture (Escherichia coli) pure, i.e., free from any contaminants. Basic aseptic techniques include disinfection of the working area, instrument transfer and disposal, culture tube flaming, and culture tube inoculation (Aseptic 2). The first activity was the reactivation of E. coli, followed by aseptic transfer of the activated culture to a broth culture. This was followed by aseptic transfer of the reactivated culture from broth to a slant and finally from the broth culture to a plate. The three separate cultures were incubated and exhibited signs of microbial growth in the broths during the period. After a while, cloudiness formed on the surface of the medium and later dispersed into the broth. This is referred to as turbidity: the uniform cloudiness formed in a previously sterile medium after inoculation with a pure culture.

Reflection

Aseptic practices are critical to the effectiveness of any microbiological practice or experiment. This is attributed to the fact that the presence of contaminants in test subjects may compromise the credibility of tests and experiments. The aseptic technique helps maintain the purity of cultures, protecting them from other strains or species. Contaminants, including unwanted bacterial and fungal microorganisms, are introduced by various factors including people, the environment, and work surfaces, among other sources. It is therefore important to apply aseptic practices at specific stages of microbiological experiments.

Works Cited

Aseptic Technique. Media, incubation, and aseptic technique. n.d. Web.

Carolina Biological Supply. Aseptic technique and use of media: Investigation manual. Carolina Biological Supply, 2018.

Political Sampling: Pros and Cons of Probability and Non-Probability Sampling

Introduction

In many parts of the world, politics is an area of interest for many people, given the immense weight that political systems carry over the wellbeing and governance of societies. Political systems are categorized as part of social systems and are among the major sectors in the world's governance systems, after economic, cultural, and legal systems (Jennings & Wlezien, 2018). It is therefore imperative to note that political offices remain positions of influence and power on which many lives depend. Political enthusiasts and professionals have been reported to use various analysis methods to acquire trends and information from populations about the possible outcomes of polls. Several studies have cited two main methods as critical in the political polling process: probability sampling and non-probability sampling (Lauderdale et al., 2020). In this regard, this proposal aims at establishing the advantages and disadvantages of using probability and non-probability sampling as political sampling methods.

Probability sampling

Probability sampling is based on chance. According to Etikan and Bala (2017), subjects in a probability sampling method have an equal chance of being selected to provide information on their opinions about a poll. This sampling method can be implemented in various ways, including mobile or phone call surveys that follow random digit dialing (RDD) to reach a set of poll subjects (Jennings & Wlezien, 2018). Probability sampling has therefore been the widely used method of political sampling over the decades, owing to its random selection of subjects.

Pros of probability sampling

The probability sampling method is the most easily accessible form of political poll sampling used in many nations worldwide due to its outstanding merits. According to Etikan and Bala (2017), probability sampling is a less costly opinion poll sampling method, making room for more people to be sampled at one time. Etikan and Bala (2017) further note that probability sampling also consumes less time by randomly reaching out to many people at once within a short period.

Several study findings have also linked probability sampling with better outcomes based on its simplicity and the ease of obtaining samples from larger population groups quickly and efficiently (Elfil & Negida, 2017). Most political polling systems use technical means of obtaining outcomes from opinion polls; in contrast, probability sampling methods use less specialized equipment, requiring no technical knowledge to operate (Sharma, 2017). Ultimately, probability sampling is considered better because of the possibility of calculating the sampling error margin from a sample group as a representation of the entire population (Yadav, Singh, & Gupta, 2019). Therefore, probability sampling comes out as an efficient polling system whose ease of use and access elevates its outcomes compared to other methods.
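As a brief illustration of that last point, the sketch below computes the margin of error for a simple random sample using the standard approximation z * sqrt(p * (1 - p) / n); the sample size and proportion are invented for the example.

import math

# Hypothetical poll: 1,000 randomly sampled respondents, 52% favoring a candidate.
n = 1000
p = 0.52
z = 1.96  # z-score for a 95% confidence level

# Margin of error for a simple random sample.
margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin:.1%}")  # roughly +/- 3.1 percentage points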

Disadvantages of probability sampling

Whereas probability sampling and its associated merits make it preferable to use, some faults in probability sampling also exist, and the different types of probability sampling have their own unique challenges. According to Lauderdale et al. (2020), systematic sampling is limited in how randomly it selects the sample group. In contrast, the stratified random sampling method is more rigorous and tedious and thus consumes more time, especially if larger samples of the population have to be considered. Further, cluster sampling is limited by the homogeneity of the sample population, whereas simple random sampling also consumes relatively more time (Sharma, 2017). Nevertheless, once the aforementioned challenges are addressed, probability sampling stands as the best sampling method for political polling endeavors.

Non-probability Sampling

In non-probability sampling, the opposite of probability sampling is evident: the probability of selecting the sample population from the entire population cannot be adequately quantified (Quatember, 2019). In this regard, non-probability sampling relies on the decisions made by the researcher, which may sometimes be biased. According to Yadav, Singh, and Gupta (2019), non-probability sampling allows room for sample subjects to select themselves or join a survey, as in the dial-in polls used mainly by media houses and the internet community. Further, one study reported that the time for getting a response from the sample population was shorter with non-probability sampling due to the participants' enthusiasm and excitement to participate in the process (Sharma, 2017). A similar study also revealed that non-probability sampling allows for faster access to information besides being more cost-effective than probability sampling (Lauderdale et al., 2020).

Pros of Non-Probability Sampling

According to Sharma (2017), non-probability sampling comes out as the most cost-effective and least time-consuming poll sampling method when compared with probability sampling. In cases where the poll population is tiny, non-probability sampling is considered the best method to use, as it allows for the chance to sample even smaller populations (Jennings & Wlezien, 2018). In the recent past, primary survey information collection practices have resorted to non-probability sampling methods as they are less expensive (Sharma, 2017).

Cons of Non-Probability Sampling

According to Etikan and Bala (2017), non-probability sampling does not allow the researcher to know how effectively the chosen sample represents the views and feelings of the entire population. Etikan and Bala (2017) further note that with non-probability sampling, it is close to impossible to accurately determine the confidence intervals and margins of error in the sampling process. According to Sharma (2017), non-probability sampling presents difficulties in providing estimates of bias in information relay, limiting the validity and quality of the poll data. Non-probability sampling also limits the generalization of research findings owing to the tiny population sample that the method uses (Elfil & Negida, 2017). Finally, non-probability sampling also presents sample population challenges, as a huge part of the population's views remains uncaptured, thus limiting the outcomes of the process (Sharma, 2017).

Conclusion

In conclusion, political opinion polls are an important part of the political process that act as a prediction tool for the possible outcomes of a political contest. Probability and non-probability sampling methods remain the widely used methods of sampling. Cost-effectiveness and time efficiency are some of the factors of utmost consideration in polling methods. However, the best sampling method ought to be the one that allows for the collection of more accurate data within a short period at a lower cost. In this regard, probability sampling is the best possible sampling method that can be considered for more precise and cost-effective sampling. Accuracy and consistency of data findings are a massive concern in political opinion polls, as they bear the capability of defining and redefining the outcomes of the political process. Probability sampling therefore remains the preferable and widely used opinion poll method, based on its simplicity and its support for calculating margins of error.

References

Elfil, M., & Negida, A. (2017). Sampling methods in clinical research; an educational review. Emergency, 5(1).

Etikan, I., & Bala, K. (2017). Sampling and sampling methods. Biometrics & Biostatistics International Journal, 5(6), 00149. Web.

Jennings, W., & Wlezien, C. (2018). Election polling errors across time and space. Nature Human Behaviour, 2(4), 276-283. Web.

Lauderdale, B. E., Bailey, D., Blumenau, J., & Rivers, D. (2020). Model-based pre-election polling for national and sub-national outcomes in the US and UK. International Journal of Forecasting, 36(2), 399-413. Web.

Quatember, A. (2019). The representativeness of samples. In Handbücher zur Sprach- und Kommunikationswissenschaft/Handbooks of Linguistics and Communication Science (HSK) (pp. 514-523). De Gruyter Mouton.

Sharma, G. (2017). Pros and cons of different sampling techniques. International Journal of Applied Research, 3(7), 749-752.

Yadav, S. K., Singh, S., & Gupta, R. (2019). Sampling methods. In Biomedical Statistics (pp. 71-83). Springer, Singapore.

Mercury Consumption Effects on Human Beings

Objectives

The report's objective is to determine the extent and the danger of mercury consumption by human beings.

Introduction

Mercury is a toxic heavy metal found in some parts of the environment. There are several artificial and natural mercury sources; however, the most significant ones are those to which people are exposed every day. Generally, extensive mercury consumption affects body systems, such as the digestive, immune, circulatory, nervous, and reproductive systems. It also causes organ failure, which may have severe consequences for the individual (How People Are Exposed to Mercury). Mercury exists in various forms which differ in toxicity and in their effects on the body; therefore, individuals should control their food consumption, because cooking does not eliminate mercury.

Results

How People Are Exposed to Mercury

The most common way for people in the US to be exposed to mercury is by consuming fish and shellfish containing methylmercury. However, individuals can be exposed to mercury in various forms under different circumstances. Apart from eating fish, a worker can take in mercury through inhalation of elemental mercury in industrial processes and through handling products bearing the element. Typical exposure to mercury is mainly through metallic products. Broken containers release mercury, which evaporates into an odorless, toxic vapor when not cleaned up immediately (How People Are Exposed to Mercury). Exposure occurs when people inhale the toxic vapor into their lungs. To enjoy the benefits of eating fish while minimizing exposure to mercury, the Environmental Protection Agency advises individuals to consume the types of fish which contain low mercury content. Fish is an essential dish in the human diet; however, some fish can contain mercury and other harmful chemicals. Therefore, specific advisories on fish consumption are crucial.

The results address the following quantities:

  • the amount of mercury consumed by shrimp;
  • the concentration of mercury per shrimp (in mg/g-shrimp);
  • the amount of mercury consumed by a fish, given that fish in the lake eat about 1,700 g of shrimp;
  • the concentration of mercury per fish, given that one fish weighs 150 g;
  • the mercury consumed by a person in one day, given that a person from this community eats one fish daily; and
  • the mercury consumed by that person over 30 years.
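A sketch of this calculation chain is given below. Only the 1,700 g of shrimp per fish, the 150 g fish mass, and the one-fish-per-day diet come from the prompts above; the mercury concentration in shrimp is an invented placeholder, not a measurement.

# Assumed input: mercury concentration in shrimp (placeholder value).
hg_per_g_shrimp_mg = 0.0001    # mg of mercury per gram of shrimp (hypothetical)

shrimp_eaten_by_fish_g = 1700  # grams of shrimp eaten by one fish (from the prompt)
fish_mass_g = 150              # mass of one fish (from the prompt)
fish_eaten_per_day = 1         # fish eaten by a person each day (from the prompt)
years = 30

# Mercury accumulated by one fish from its shrimp diet.
hg_per_fish_mg = hg_per_g_shrimp_mg * shrimp_eaten_by_fish_g
# Resulting concentration in the fish tissue.
hg_per_g_fish_mg = hg_per_fish_mg / fish_mass_g
# Daily and 30-year intake for a person eating one fish per day.
daily_intake_mg = hg_per_fish_mg * fish_eaten_per_day
lifetime_intake_mg = daily_intake_mg * 365 * years

print(f"Mercury per fish: {hg_per_fish_mg:.3f} mg")
print(f"Concentration in fish: {hg_per_g_fish_mg:.5f} mg/g")
print(f"Daily intake: {daily_intake_mg:.3f} mg; over {years} years: {lifetime_intake_mg:.1f} mg")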

Conclusion

Steps to Reduce Mercury in The Environment

Reducing mercury pollution could involve responsible use and consumption of natural resources and the practice of safe fish caging. This could be effective when individuals understand the dangers of mercury species such as methylmercury, metallic mercury, and other related mercury compounds. Fish is a primary food in the human diet; however, limiting fish consumption and offering alternative white meat such as poultry would reduce exposure to methylmercury (How People Are Exposed to Mercury). Some mercury compounds, such as ethylmercury and phenylmercury, are used as preservatives in medicines, which can eventually put human life in danger. In summary, mercury is a harmful metal whose effects can be controlled by practicing self-control in all human actions.

Reference

How people are exposed to mercury. US EPA. 2021. Web.

Exploring the Moderating Effects of Operational Intellectual Capital by Onofrei

Introduction and Background

Companies worldwide have adopted lean practices, and there has been a significant focus on the connection between lean manufacturing practice and organizational success. The goal of lean manufacturing is to maximize consumer value while reducing waste. The ultimate aim of introducing lean production in a business is to boost efficiency, improve quality, minimize lead times, and lower costs. Previous research has concentrated on the technical side of lean practice implementation and its effect on results rather than on the people issues (Onofrei et al., 2019). Researchers have recently turned their attention to why lean succeeds (or does not), with a specific focus on human resource management (HRM) techniques.

Companies must concentrate on developing committed and specialized expertise and on forming a larger institutional aggregation of intellectual capital (IC) to achieve long-term lean implementation. Awareness and IC management have become critical factors in a company's growth and survival, making it adaptable and receptive. Personnel, organizational routines, production procedures, and partnerships along the supply chain are examples of knowledge-based assets. According to the article, to maximize investments in lean practices, organizations must understand how to utilize the various operational intellectual capital (OIC) dimensions (Onofrei et al., 2019). Since they are uncommon, valuable, and difficult to replace or copy, such knowledge-based assets provide a competitive advantage.

Research Question

The article investigates the interactions between OIC dimensions and investment in lean planning (ILP). The research question can be stated as: Is it possible to improve the efficacy of ILP by using OIC? This question addresses the research gap between ILP and OIC (Onofrei et al., 2019). The authors hypothesize that OIC is a critical knowledge-based asset that is essential, difficult to duplicate or replace, and produces a powerful operational and competitive advantage when properly leveraged.

Data Used

The data used in this study are derived from the Global Manufacturing Research Group (GMRG) fifth-round survey project. The data contain demographic information and cover sustainability, the competitive environment, innovation, supply chain management, and organizational culture. The data relate to the 528 of the 987 respondents who answered all the questions in the modules (Onofrei et al., 2019). The GMRG data are therefore suitable for interpreting the synergy between ILP and OIC parameters due to their broad coverage across ten countries on all major continents.

Qualitative Vs. Quantitative Study

Qualitative and quantitative studies differ in the nature and type of data analyzed. Qualitative data are expressed in words and used to generate an in-depth understanding of concepts; much of such data is obtained from secondary sources such as peer-reviewed journals. Quantitative data, by contrast, are expressed in numerals and visual representations such as graphs, and quantitative study focuses on confirming theories using primary sources of data such as experiments and observation. The article uses a quantitative study, since its data source is a survey in which individual data were collected from 987 persons and analyzed through t-tests. Although the authors referred to several researchers' theories, the data were independently analyzed, and control variables were used appropriately (Onofrei et al., 2019). Using industry type and plant size as control variables and applying confirmatory factor analysis, the authors demonstrated the connection between the ILP and OIC constructs.

Sampling Method

In the article, the researchers used a stratified sampling method to gather and evaluate their data. Researchers use stratified sampling when they are trying to draw conclusions about various sub-groups, or strata. The strata or sub-groups should be distinct, and there should be no overlap between them. Within each stratum, the researcher can apply basic probability sampling. Age, ethnicity, gender, work profile, nationality, educational level, and other factors can be used to divide the population into subgroups.

In this article, the researchers identified more than ten countries across the major continents and designed questionnaires for each country. In this case, nationality was the factor used to group the population sample. From each country's data, the respondents who completed all the modules included in the questionnaire were picked for inclusion in the central database (Onofrei et al., 2019). This implies that only fully completed questionnaires from each country were used in the final data analysis.

The main reason behind the researchers' use of the stratified sampling method is the need for extensive coverage. Selecting countries from each major continent provided a wide range of data, implying that the samples were likely to be a more accurate representation of the entire population. The researchers also needed to establish the different factors influencing OIC parameters in different countries and their influence on ILP.
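A minimal sketch of a stratified draw of this kind, with an invented respondent table and country used as the stratifying variable (the names and group sizes are placeholders, not the GMRG data), could look like this:

import random

# Hypothetical respondent pool; in the survey described above the stratum was the country.
respondents = [
    {"id": 1, "country": "Country A"}, {"id": 2, "country": "Country A"},
    {"id": 3, "country": "Country B"}, {"id": 4, "country": "Country B"},
    {"id": 5, "country": "Country C"}, {"id": 6, "country": "Country C"},
    {"id": 7, "country": "Country C"}, {"id": 8, "country": "Country A"},
]

def stratified_sample(pool, stratum_key, per_stratum, seed=42):
    # Draw a simple random sample of fixed size from each stratum.
    random.seed(seed)
    strata = {}
    for person in pool:
        strata.setdefault(person[stratum_key], []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

print(stratified_sample(respondents, "country", per_stratum=2))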

Conclusions from the Study

The moderating function of OIC was considered in the study of the effects of ILP on operational efficiency. The findings support the effect of social capital and structural capital (STC) on ILP and on operational performance improvement. The findings help to clarify the deep connection between ILP and operational success. In terms of application, the study provides managers with empirical evidence on the impact of knowledge management on organizational efficiency. The study focuses on how OIC's unique features can be used to supplement the operational output provided by ILP.

Validity and Reliability

The research is founded on the thesis that better operational efficiency is achieved by investing in operational intellectual capital and lean practices. The study presents empirical evidence obtained from quantitative analysis of research surveys to show how intellectual capital can be used to leverage lean practices. The researchers applied stratified sampling to obtain and analyze data from over ten countries across the major continents. The results show that human capital (HUC), structural capital (STC), and social capital (SOC) enhance the performance of lean investments.

Confirmatory factor analysis was applied to validate the measures relating to all variables in the study. The coefficients of the elements and their standard errors were compared to check convergent validity; the results show that each coefficient was more than double its standard error. Furthermore, the chi-square to degrees of freedom ratio of 2.209 (χ²/df) is acceptable (Onofrei et al., 2019). In each category, composite reliability figures revealed high construct reliability; all values are well above 0.7. At both the construct and item levels, the findings showed that the measures had sufficient discriminant validity. These findings are considered valid and reliable, particularly given the data set's multi-nationality, diversity, and spread across various industries.
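For context on those thresholds, composite reliability can be computed from standardized factor loadings as the squared sum of loadings divided by that same quantity plus the summed error variances; the loadings below are invented for illustration and are not the article's values.

# Hypothetical standardized factor loadings for one construct (illustrative only).
loadings = [0.78, 0.82, 0.74, 0.80]

sum_loadings_squared = sum(loadings) ** 2
error_variance = sum(1 - l ** 2 for l in loadings)

# Composite reliability; values above roughly 0.7 are conventionally read as adequate.
composite_reliability = sum_loadings_squared / (sum_loadings_squared + error_variance)
print(f"Composite reliability: {composite_reliability:.3f}")  # about 0.87 for these loadings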

The study has several limitations that may limit the validity and reliability of the findings. First, the measurement items for OIC must be refined in order to capture the dimensions of OIC holistically. Second, the study looked at the effect of ILP on operational efficiency without taking into account other performance indicators such as economic and market-related performance (Onofrei et al., 2019). Third, the review offers few theoretical insights into using OIC differently depending on how far an organization has gone on its lean path. Finally, the research sample frame is unlikely to be random; despite the advantages of a large-scale multi-country data set, the data collection is restricted, which must be taken into account when analyzing the findings.

Reference

Onofrei, G., Prester, J., Fynes, B., Humphreys, P., & Wiengarten, F. (2019). The relationship between investments in lean practices and operational performance: Exploring the moderating effects of operational intellectual capital. International Journal of Operations & Production Management, 39(3), 406-428. Web.