The Main Flaw in the Quantitative Study

While quantitative studies provide valuable information for further research and clinical practice, they may contain serious flaws. Of all the questions related to the study's credibility, the article by Sand-Jecklin and Sherman (2014) answered only six. On the one hand, four positive answers concerned the source, the evidence supporting the research question, the proper delivery of the intervention, and the consistency of the findings with the results of other studies. On the other hand, there were two negative responses concerning the reliability of the measuring instruments and the control of extraneous variables and bias.

Moreover, the appraisal of the quantitative study helped to detect its main flaw: the weakness of the sampling method selected by the researchers. According to Brown (2018), a convenience sample is drawn from an available population. It is the easiest way to collect data, but it might undermine the research's credibility because it is prone to bias. Instead, researchers should extract the sample from a particular population based on demographics, health conditions, or symptoms (Brown, 2018). However, the study in question drew its samples from the available patient and nurse populations regardless of their characteristics and did not include a comprehensive profile of the participants. Therefore, the credibility of the study was undermined by the improper use of a sampling method.

The findings of the study demonstrate the positive impact of a blended bedside shift report, but the data obtained by the researchers may not be considered credible. Some findings are consistent with the results of previous studies, but they should not be used in an assessment of a patient handoff without additional efforts to improve their credibility. There are several ways to improve the credibility of the study and make it appropriate for further implementation. First, the samples obtained via a survey method should represent the target population (Brown, 2018). Second, the survey should be clear and accessible to avoid inconsistencies in the responses. Finally, the convenience sampling method might be replaced with random sampling to improve credibility and properly represent the actual population.
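The suggested replacement can be sketched in a few lines of Python. This is a minimal illustration, not part of the appraised study: the sampling frame and sample size below are purely hypothetical.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical sampling frame: every nurse in the defined target population.
target_population = [f"nurse_{i:03d}" for i in range(1, 201)]

# Simple random sampling: each member has an equal chance of selection,
# which avoids the selection bias inherent in convenience sampling.
sample = random.sample(target_population, k=30)

print(len(sample))
```

The key point is that the sample is drawn from an explicitly defined population rather than from whoever happens to be available.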

References

Brown, S. J. (2018). Evidence-based nursing: The research-practice connection (4th ed.). Jones & Bartlett Learning.

Sand-Jecklin, K., & Sherman, J. (2014). A quantitative assessment of patient and nurse outcomes of bedside nursing report implementation. Journal of Clinical Nursing, 23, 2854-2863. Web.


Mixed Method Research Design

Pros and Cons of Using a Mixed Method Research Design

The selection of a research design is challenging for scholars who strive to cover as many issues as possible. This reasoning results in a preference for a mixed method that processes quantitative and qualitative data for a more precise outcome. The approach is a beneficial instrument incorporating such values as elaboration, generalization, triangulation, and interpretation (Gibson, 2017). However, it also has specific cons alongside the apparent pros.

The principal complications of a mixed method research design are related to the human factor. The Bible says, "Do your best to present yourself to God as one approved, a worker who does not need to be ashamed, rightly handling the word of truth" (2 Timothy 2:15, n.d.). Nevertheless, most researchers do not possess the knowledge and experience required to ensure practicality and ethics at the same time (Hafsa, 2019). Therefore, they might fail to provide comprehensive results in their studies.

There are several advantages of the specified approach as well. They are related to the fact that a mixed method better reflects scholars' needs because of the emerging complexity of, and interrelation between, various issues (Hafsa, 2019). In this way, it allows researchers to analyze more aspects of the problem under consideration (Hafsa, 2019). Thus, the mentioned benefits lead to an outcome with more practical implications than other types of research design.

Indeed, the issue of bias takes a central place when the mixed method is used by researchers. It is also connected to the need for integrity and the necessary knowledge regarding the subject of study (Hafsa, 2019). Researchers' ability in terms of "rightly handling the word of truth" can be questioned as well (2 Timothy 2:15, n.d.). As a result, the values provided by this type of research can be offset by its drawbacks (Gibson, 2017). Therefore, the successful implementation of a mixed method is conditional upon the attention of the participants in the first place.

References

2 Timothy 2:15. (n.d.). Literal World ESV.

Gibson, C. B. (2017). Elaboration, generalization, triangulation, and interpretation: On enhancing the value of mixed method research. Organizational Research Methods, 20(2), 193-223.

Hafsa, N. E. (2019). Mixed methods research: An overview for beginner researchers. Journal of Literature, Languages and Linguistics, 58.

Qualitative vs. Quantitative Business Research Comparison

Introduction

There are two main types of research design: quantitative and qualitative. In the best-case scenario, quantitative and qualitative methodologies should work in tandem. The qualitative technique occurs at the beginning of a study to explore the values that are to be measured in the subsequent quantitative phase (Schmitz, 2012). In this way, the former helps to improve the usefulness and efficacy of the latter. Researchers select either of the two designs based on the study aim, objectives, the nature of the topic and research questions, and data collection and processing (Susan et al., 2001). The table below compares the two techniques and is based on Antwi and Hamza (2015):

Quantitative Research | Qualitative Research
Relies on the collection of numerical data, such as counts and measurements. | Relies on the collection of non-numeric data, such as descriptive narration in words.
Adheres to the confirmatory scientific method, focusing on testing hypotheses and theories: the hypothesis is stated and then evaluated to determine whether it is supported. | Described as an exploratory scientific method: it illustrates what is observed locally and sometimes generates new theories or hypotheses.
Embraces a deductive approach to testing a hypothesis, often using a positivist model. | Embraces an inductive approach to generating a hypothesis, often using an interpretive model.
Can be described as a narrow-angle lens, since it focuses on a few causal factors at a time. | Can be regarded as a wide-angle lens, since it explores human choice and behavior in all of their spheres and is therefore holistic.
Is based on larger samples; hence, the findings are often statistically generalizable. | Relies on smaller samples; thus, the results are not necessarily statistically generalizable.
Uses close-ended questions. | Uses open-ended questions.
Assumes an objective reality that the researcher measures from a distance. | Assumes that reality is socially constructed; researchers get close to their objects of study through participant observation and try to understand participants from a native perspective.

Examples of Quantitative Data Sets Used in Business Research

In business, quantitative research is concerned with the measurement of a market or population. It is most applicable in cases where stable and representative measurements of the market are needed. Stability is essential both when a study is conducted on a repetitive basis and when the objective is to detect changes over time. Representativeness, in turn, is crucial in instances in which market research is applied to aid decision-making. There are several examples of quantitative data sets used in business research, and they comprise market segmentation, customer satisfaction, and advertising effectiveness data. Market segmentation data describes the type and number of consumer groups present, in terms of groups sharing similar product preferences and characteristics (B2B International, n.d.). The results obtained from a market segmentation analysis are central to the development of a customized marketing mix.

Customer satisfaction data, in turn, is defined as demographic, behavioral, and personal information about clients that is collected by companies. It can also be adjusted to represent customer satisfaction levels and loyalty over time or across various aspects of the company's products and services (B2B International, n.d.). Finally, advertising effectiveness data is concerned with the effect of a marketing campaign on advertising awareness or brand association (B2B International, n.d.). It involves collecting both pre- and post-advertisement awareness data to measure effectiveness.
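The pre/post comparison described above reduces to a simple calculation. A minimal Python sketch, using invented survey counts, shows advertising effectiveness expressed as the lift in brand awareness:

```python
# Hypothetical survey counts: respondents aware of the brand,
# out of the total surveyed, before and after the campaign.
pre_aware, pre_total = 120, 500
post_aware, post_total = 210, 500

pre_rate = pre_aware / pre_total
post_rate = post_aware / post_total
lift = post_rate - pre_rate  # percentage-point change in awareness

print(f"awareness: {pre_rate:.0%} -> {post_rate:.0%} (lift {lift:.0%})")
```

In practice, the two waves would be drawn from comparable samples so that the lift reflects the campaign rather than a change in who was surveyed.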

Examples of Quantitative Data Collection Tools and Data Analysis Tools Used in Business Research

Quantitative data collection depends on sampling and on structured data collection tools that capture information in predetermined categories. Relevant data is collected in several ways, namely through interviews, surveys, and experiments. It is essential to note that interviews and surveys embrace the use of close-ended questions. These methods can be used independently or in combination to enhance the advantages and mitigate the disadvantages of the individual techniques. Interviews are the most popular quantitative data collection tool. There are two types of interviews, namely, personal and telephone interviews (Brannen, 2017). In the first type, the interviewer and participant are in one physical location. Telephone interviews, by contrast, are dialogues between an interviewer and a respondent conducted over the phone through a structured questionnaire. They can be used to measure the level of customer satisfaction with a purchased product or the effectiveness of promotional campaigns. Mail-out surveys, finally, are characterized by a low response rate relative to the number of mail-outs.

Depending on the numerical data collected during research, descriptive or inferential methods can be used for analysis. Descriptive methods summarize data in the form of charts and tables; however, they do not attempt to draw conclusions about the population from which the sample was taken (Bhattacherjee, 2012). Examples include line graphs, bar graphs, and measures of central tendency. In contrast, inferential methods, such as hypothesis testing and estimation, make it possible to draw such conclusions (Bhattacherjee, 2012). They involve concepts such as ANOVA, the chi-squared test, the t-test, and regression, among others.
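The descriptive/inferential distinction can be sketched with the Python standard library. The satisfaction scores below are invented for illustration: descriptive statistics summarize each sample, while a simple inferential statistic (Welch's two-sample t) supports a conclusion about whether the underlying means differ.

```python
import math
import statistics

# Hypothetical satisfaction scores for two independent samples.
group_a = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]
group_b = [4.6, 4.4, 4.8, 4.5, 4.7, 4.3]

# Descriptive statistics: summarize each sample on its own terms.
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
sd_a, sd_b = statistics.stdev(group_a), statistics.stdev(group_b)

# Inferential statistic: Welch's t compares the two means while allowing
# unequal variances; a large |t| suggests a population-level difference
# rather than sampling noise.
se = math.sqrt(sd_a ** 2 / len(group_a) + sd_b ** 2 / len(group_b))
t_stat = (mean_b - mean_a) / se

print(f"means: {mean_a:.2f} vs {mean_b:.2f}, t = {t_stat:.2f}")
```

A full analysis would also compute degrees of freedom and a p-value, but the sketch shows where description ends and inference begins.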

Examples of Qualitative Data Collection Tools and Data Analysis Tools Used in Business Research

Different methods are used to collect data in qualitative research; the most popular include interviews, focus groups, and document analysis. Combining two or more methods enhances the credibility of a study. Interviews are further divided into structured, semi-structured, and unstructured (Brannen, 2017). Structured interviews limit participant responses; hence, they are less time-consuming and easy to administer. Unstructured interviews, by contrast, do not mirror any predetermined concepts and are conducted with minimal organization; hence, they are time-consuming. Semi-structured interviews, in addition to questions targeting already defined areas, allow the interviewer and participant to diverge and pursue an idea in detail. Interviews aim to explore opinions, beliefs, or experiences on particular matters. Focus groups, in turn, share some standard features with unstructured interviews (Brannen, 2017). A focus group is a group dialogue on a specific topic identified for study purposes. It is usually steered, observed, and recorded by the researcher. The objective of this collection tool is to obtain information on collective views (Brannen, 2017). Lastly, document analysis is centered on existing resources, such as scholarly articles, government reports, and books, among others.

There are different techniques for analyzing qualitative data; the selection of a particular technique depends on the research question, the research's theoretical foundation, and the appropriateness of the technique. The three main analytical techniques are content analysis, narrative analysis, and grounded theory (Brannen, 2017). Content analysis is utilized in the evaluation of documented information, such as texts, media, or documents. Narrative analysis, on the other hand, is employed in the examination of information from several sources, be it interviews, surveys, or direct observations. It focuses on the experiences and stories shared by participants, which are used to answer the study items. Finally, grounded theory refers to the process of using qualitative data to explain the cause of a phenomenon. It does so by examining an array of similar cases in different settings and using the information to formulate causal explanations. It is essential to note that investigators can modify or develop new explanations as they continue to analyze more cases, until an explanation is identified that fits them all.
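A first, purely mechanical step of content analysis, counting how often predefined codes occur in a body of text, can be sketched in Python. The transcripts and coding frame below are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical interview transcripts and a predefined coding frame.
transcripts = [
    "The handoff felt rushed, and communication with the next shift was poor.",
    "Better communication at the bedside improved patient safety.",
    "Patients said the bedside report made them feel involved and safe.",
]
codes = {"communication", "safety", "bedside", "handoff"}

# Tokenize each transcript and count only the words matching a code.
counts = Counter(
    word
    for text in transcripts
    for word in re.findall(r"[a-z]+", text.lower())
    if word in codes
)

print(counts.most_common())
```

Real content analysis goes well beyond counting, but frequency tables like this are a common starting point for identifying which themes dominate the material.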

Pros and Cons of Mixed Methods

In business research, one is highly likely to find studies employing an amalgamation of both quantitative and qualitative strategies. This is what is referred to as mixed research. The mixed method primarily assumes that it can address some research questions more comprehensively than using the quantitative or qualitative options independently. As a result, it is able to harness the individual strengths and offset the weaknesses of each approach (Brannen, 2017). In addition, data triangulation facilitates in-depth research and, hence, a more meaningful interpretation of the phenomenon under analysis.

Despite its considerable strengths, the mixed method design also has its drawbacks. First, combining the methodologies is regarded by some researchers as problematic, based on the perspective that they belong to separate paradigms. Furthermore, mixing two methods in a single study is time-consuming and requires researchers who are experienced and skilled in using both. Third, attaining a perfect integration of dissimilar data types can be challenging. It is hence vital to reflect on the findings of a study and ascertain whether they have been enhanced by the amalgamation of the different data types. Lastly, the presentation of the findings of mixed methods is considered a disadvantage, as it hinders their application. Consequently, some researchers opt to present quantitative and qualitative data separately, based on the target audience (Brannen, 2017). Moreover, they may choose not to focus on some interpretations and conclusions.

Characteristics of Qualitative Research

Qualitative data is more challenging to define; the stress is placed on understanding instead of simple measurement. For instance, quantitative research may identify which of two separate adverts is better recalled; nevertheless, it is also essential to consider how one element works as an advert and the reason behind its higher efficiency. This is where qualitative research comes in. Hence, it can generally be said to deal with questions such as "Why," "Would," and "How." Qualitative research is often conducted among small samples; thus, the results are not necessarily statistically valid. Regardless, such data can highlight potential issues that can then be analyzed in quantitative research (Tetnowski, 2015). Second, it requires intense researcher involvement; hence, researchers must clearly explain the purpose of the investigation throughout the entire study in an attempt to eliminate prejudice (reflexivity and flexibility).

Third, it embraces an emergent design; therefore, the initial plan for the research should be flexible. This can lead to different methods being used for research, and in some cases the research problem can be altered, resulting in an entirely new study (Tetnowski, 2015). The fourth characteristic is that it is holistic, as it constitutes a broader picture of the issue under study: the researcher concentrates on varying perspectives and identifies the varied factors involved. Finally, it is characterized by ongoing data analysis, since the examination of qualitative data does not occur only at the end of the research.

References

Antwi, S., & Hamza, K. (2015). Qualitative and quantitative research paradigms in business research: A philosophical reflection. European Journal of Business Management, 7(3), 217-225. Web.

Bhattacherjee, A. (2012). Social science research: Principles, methods and practices. Textbook Collection. Web.

Brannen, J. (2017). Mixed methods: Qualitative and quantitative research. Taylor and Francis.

B2B International. (n.d.). What is the difference between qualitative and quantitative research? Web.

Schmitz, A. (2012). Principles of sociological inquiry: Qualitative and quantitative methods. Saylor Academy. Web.

Susan, A. M., Gibson, C. B., & Mohrman, M., Jr. (2001). Doing research that is useful to practice: A model and empirical exploration. Academy of Management Journal, 44(2), 357-375. Web.

Tetnowski, J. (2015). Qualitative case study research design. Perspectives on Fluency and Disorders, 25(1), 39-45. Web.

Developing an Evaluation Plan and Disseminating Evidence

In this project, the nursing shortage problem is to be addressed through educational reforms and better pay for nurses. It is believed that higher salaries would make the profession more attractive, while the extensive financing of education will attract experienced professionals who will enhance the quality of training. On a related note, the proposed solution will raise the quality of the personnel coming out of nursing schools. The quality of education should improve, as should the average salary for nursing personnel. Such changes would positively influence the perception of the health care industry. The timeline for the implementation of the proposed changes is three years, which gives the newly educated and re-educated personnel time to gain the experience needed to replace the personnel expected to leave the industry within the next five years.

Methods

To evaluate the effectiveness of this solution, a combination of qualitative and quantitative methods would be used to attain the most informative assessment results. The qualitative method will be used to gain insight into and understanding of the underlying opinions of nursing staff, educators, and prospective nurses, whereas the quantitative method is intended to supply numerical data that will be converted into usable statistics (Windsor, 2015). The qualitative method, in the form of semi-structured surveys and interviews, will provide answers as to whether the preoperative curriculum developed by experienced nurse educators better prepares students for OR roles. The proposed change would be considered effective if the surveys and interviews provide proof that students have the skills needed in OR contexts. The quantitative method, in the form of questionnaires and systematic observations, is intended to enable assessing the effectiveness of the solution regarding nurse turnover and job satisfaction both before and after the intervention. It will also reflect changes in the rate of medical errors, and so on (Developing an effective evaluation plan, 2011). The change would be considered effective if the incidence of medical mistakes caused by overload and stress is reduced to a considerable extent. Lower turnover should also be a consequence of the effective implementation of the program. The combination of the two methods would enable assessing the degree to which the project objectives are achieved and provide in-depth insights into the performance indicators (Stufflebeam & Coryn, 2014). The chosen methods will enable evaluating the process and the impact of the changes, as well as the outcome of the implementation.
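The before/after comparison at the heart of the quantitative evaluation is a simple relative-reduction calculation. A minimal Python sketch, using hypothetical counts rather than real program data:

```python
def relative_reduction(before: float, after: float) -> float:
    """Fractional reduction from the baseline value."""
    return (before - after) / before

# Hypothetical pre- and post-intervention figures.
errors_before, errors_after = 40, 28          # medical errors per year
turnover_before, turnover_after = 0.22, 0.15  # annual nurse turnover rate

print(f"errors reduced by {relative_reduction(errors_before, errors_after):.0%}")
print(f"turnover reduced by {relative_reduction(turnover_before, turnover_after):.0%}")
```

A real evaluation would pair such point estimates with significance tests and confidence intervals before declaring the change effective.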

Variables

Several variables should be taken into consideration when evaluating the program outcomes, such as staff experiences and attitudes, the rate of nurse turnover, skills and satisfaction of students who obtained the training, and cost-effectiveness of the changes.

It is crucial to assess how the improved quality of education and the increased compensation payable to nursing personnel will have translated into a positive perception of the profession and the number of trainees. Apart from that, it is important to find out whether the improved stability of the system will have resulted in lower nurse turnover due to an increase in graduates entering practice. According to the program, turnover would decrease thanks to the new, better-trained nursing staff. However, it is crucial to take into consideration the length of the training program and to assess the effectiveness of the transition of nurses from nursing schools into the health care system. The quality of nursing care should be assessed as turnover decreases, along with the number and severity of medical errors in a medical facility.

The positive experiences and general satisfaction of the current nursing staff will imply an adequate salary range and the possibility of self-actualization in the profession, as well as opportunities for career growth. In terms of the staff perception variable, increased remuneration is believed to attract professional nurses to venture into academia and provide qualified nurse education. Additionally, the cost-effectiveness of the project is one of the main variables in the justification of efficiency (Silverman & Patterson, 2014). The problem of the efficiency of capital investments is mainly defined by the extent to which future changes would justify the current costs. Efficiency is generally determined by comparing the likely beneficial effect to the total cost of obtaining this result.
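The effect-versus-cost comparison can be made concrete as a cost-effectiveness ratio. The figures in this Python sketch are purely illustrative, not program estimates:

```python
# Hypothetical program cost and effect over the same period.
program_cost = 1_200_000.0  # three-year investment in pay and education
errors_prevented = 60       # medical errors avoided over those three years

# Cost-effectiveness ratio: total cost per unit of beneficial effect.
cost_per_error_prevented = program_cost / errors_prevented

print(f"${cost_per_error_prevented:,.0f} per error prevented")
```

Comparing this ratio against the cost of an untreated error (litigation, extended stays, harm to patients) is one common way to judge whether the investment is justified.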

When assessing any intervention, it is crucial to determine whether the project works as planned and to improve the program if necessary. Additionally, a positive assessment provides the basis for the continuing endorsement of the program. The methods of evaluation give insight into the effectiveness of the change and determine the program's capability to address its target group. Moreover, they highlight any potential or existing issues in the implementation. The aim of the appraisal is to define whether, and to what extent, the objectives and aims of the program have been reached.

Disseminating Evidence

Effective dissemination is important to ensure that the results of the project are well adapted to the target audience. Regarding the current project, the results would be properly presented and sent to all the parties involved before being made available to the public or distributed. The results of the project would be cited in other research projects where suitable. The participants who took part in surveys, questionnaires, and other activities related to the project would be duly notified about the results and the changes that came from the project outcome.

Outlets for the dissemination and use of the project outcomes would be implemented through a specific strategy. They would involve three main means of propagation: the Internet, scholarly publications, and networks. They will address three target groups: the academic community, policy makers, and the greater nursing community (Tabak, Khoong, Chambers, & Brownson, 2012). The dissemination tools would include reports that deliver the research findings, peer-reviewed articles in scholarly journals, and policy briefs. The dissemination strategy implies communication between the stakeholders and researchers, as well as a dialogue with the audience aimed at receiving feedback (Tabak et al., 2012). The main goal of this open dialogue is to gain insight into how to improve or revise the proposed project. Various booklets and printouts would be provided with the goal of reaching every category of the audience.

After the evaluation of the project has been made, it is crucial to provide feedback to the stakeholders engaged in the intervention. Dissemination of the evidence will help garner follow-up support for the project if it is successful. Publicity from the disseminated information may also increase the influence and significance of the changes. On the other hand, if the program is not effective, it would be necessary to make this information public so that similar projects can avoid the same issues.

References

Developing an effective evaluation plan. (2011). Web.

Silverman, R. M., & Patterson, K. L. (2014). Qualitative research methods for community development. London, UK: Routledge.

Stufflebeam, D. L., & Coryn, C. L. S. (2014). Evaluation theory, models, and applications. Hoboken, NJ: John Wiley & Sons.

Tabak, R. G., Khoong, E. C., Chambers, D. A., & Brownson, R. C. (2012). Bridging research and practice. American Journal of Preventive Medicine, 43(3), 337-350.

Windsor, R. (2015). Evaluation of health promotion and disease prevention programs: Improving population health through evidence-based practice. Oxford, UK: Oxford University Press.

Schwann Cells Origin and Functionality

Introduction

Schwann cells are cells within the peripheral nervous system that create a myelin sheath around neuronal axons. They are named after Theodor Schwann, who discovered them in the 1800s (Encyclopedia Britannica). The cells are the peripheral counterpart of a type of neuroglia known as oligodendrocytes, which occur within the central nervous system (Encyclopedia Britannica).

Simple aspects

Schwann cells differentiate from cells of the neural crest during embryonic development and are stimulated to proliferate by contact with the surface of axons (Encyclopedia Britannica). When motor neurons are severed, causing the degeneration of nerve terminals, Schwann cells occupy the original space of the neuron. Degeneration in this process is then followed by regeneration, as fibers regenerate and return to their original target sites (Encyclopedia Britannica). The Schwann cells remaining after nerve degeneration are thought to determine the route the regenerating fibers take (Encyclopedia Britannica).

Demyelinating neuropathies are those that affect Schwann cells and strip myelin away from nerves (Encyclopedia Britannica). Schwann cell-axon interactions are disrupted in this manner: the process removes the insulating myelin from axons, blocking the conduction of nerve impulses along the axon. Schwann cells are vulnerable to toxic and immune attacks, as is evident in diphtheria and Guillain-Barre syndrome. Electrical conduction in general can be blocked in this way, and injury to axons also damages Schwann cells, creating what is known as secondary demyelination (Encyclopedia Britannica).

Complex aspects

Schwann cells cover the majority of the surface of all axons in peripheral nerves (Corfas et al.). Axons and glial cells are in close physical contact and in complex, constant communication, influencing and controlling one another's maintenance, functionality, and development (Corfas et al.). Recently, progress has been made in understanding the molecular mechanisms of Schwann cell-axon interactions, particularly the neuregulin-1 (NRG1)-ErbB signaling pathway, the role perisynaptic Schwann cells play within neuromuscular junctions, the mechanisms underlying the formation and function of the node of Ranvier, and the mechanisms generating tumors in Schwann cells (Corfas et al.).

Along the whole length of the peripheral nerves of mammals, the axons of motor, autonomic, and sensory neurons are closely associated with Schwann cells (Corfas et al.). The contact between peripheral axons and Schwann cells is commonly regarded as intimate, and this relationship indicates that the cells interact in a number of important ways (Corfas et al.). In the fully developed, mature nervous system, Schwann cells can be split into four categories: nonmyelinating cells (NMSCs), myelinating cells (MSCs), satellite cells of peripheral ganglia, and perisynaptic Schwann cells (PSCs) (Corfas et al.). These categories are based on the cells' morphology, their biochemical makeup, and the types of neurons and regions of axons with which they are associated (Jessen et al.). MSCs wrap around all axons of large diameter, including those of motor neurons and some sensory neurons. Each MSC associates with a single axon, creating the myelin sheath required for saltatory conduction in nerves (Corfas et al.).

NMSCs associate with the small-diameter axons of C fibers radiating from all postganglionic sympathetic neurons, as well as some sensory neurons (Corfas et al.). Each NMSC wraps around multiple small axons to form a Remak bundle, in which the axons are separated by small-diameter extensions of the Schwann cell body (Corfas et al.). Located more peripherally, PSCs reside within neuromuscular junctions (NMJs), where they cover the presynaptic terminals of motor axons without completely wrapping around them (Corfas et al.). Satellite cells, finally, associate with neuronal cell bodies within the peripheral ganglia (Corfas et al.).

The diverse kinds of Schwann cells found in the adult stem mainly from a sole precursor cell type, the neural crest cell, although some Schwann cells may stem from placodes and the ventral neural tube (Corfas et al.). The multipotent, actively migrating neural crest cells move to the peripheral nerves during embryonic development, as mentioned, where they develop in stages and give rise to all Schwann cells (Corfas et al.). By the twelfth day of the embryonic process, Schwann cell precursors start to show three separate markers. From the fifteenth day to the time of birth, these precursors give rise to immature Schwann cells. After birth, these immature cells differentiate into the myelinating, nonmyelinating, and perisynaptic phenotypes, processes that take a number of weeks to complete (Jessen and Mirsky). The axons give signals that control the choice among the various Schwann cell phenotypes; however, the molecular identity of such signals is not currently understood. Certainly, however, the signals from the various types of axons and regions are critically important (Corfas et al.).

The three kinds of Schwann cells differ not only in the kinds of axons they interact with but also at the biochemical level. MSCs express myelin proteins, which are essential for the creation and functioning of myelin sheaths. NMSCs and PSCs are less specifically characterized, although they contain high concentrations of glial fibrillary acidic protein (Corfas et al.).

Conclusion

As we can see, Schwann cells interact so closely with axons that the relationship is commonly considered intimate across the scientific field. While some aspects have yet to be discovered, the critical roles of these cells in the nervous system are undoubtedly essential.

Marie Maynard's Role in Advancing Chemistry

Introduction

Marie M. Daly was a well-known American biochemist who became the first Black American woman to be honored with a Ph.D. in Chemistry. Her family was strongly education-oriented and, as a result, she quickly completed her studies at Queens College and New York University, receiving her Bachelor's and Master's degrees in Chemistry. She then proceeded to Columbia University, where she completed her Ph.D. After her studies, Daly began researching how components of the human body work and how they affect digestion (Brown, 2012). The impact of her findings on science, the challenges she faced, and the many other factors that led to her success are important aspects of her contribution to Chemistry.

Marie Maynard's Role in Advancing the Field of Chemistry

After her studies, Daly went to Howard University in Washington to teach chemistry. Two years later, she moved to the Rockefeller Institute in New York and began her research on the cell nucleus and its composition. She worked on this project for seven years and achieved great success. Daly spent most of her time in the laboratory experimenting on different specimens and developed various scientific theories that greatly advanced the study of the heart. She also conducted a study of heart attacks and identified how high cholesterol levels and clogged arteries were related, thus establishing an understanding of the impact of food and diet on the heart. She further examined the enzyme amylase, its relation to the digestive system, and how it affects nutrition (Borne, 2020). As a result of her work, people became more conscious of their diet and the types of food they consumed; Daly's research therefore proved to be of lasting significance.

Throughout her lifetime, Daly remained focused on her studies as she sought to help curb the many diseases that claimed individuals' lives. In 1961, Daly married Vincent Clark, though little of note is recorded about him. Daly's breakthrough research on the heart has advanced the understanding of cardiovascular health. The human body is programmed to produce its own cholesterol, but cholesterol can also be obtained from animal products such as meat and milk. Cholesterol is then transported to the liver by lipoproteins for metabolism to take place. The body contains two types of lipoproteins for this task: High-Density Lipoproteins and Low-Density Lipoproteins (HDLs and LDLs). LDLs are particularly harmful since they facilitate cholesterol accumulation in blood vessels, thus increasing the chances of heart attack and stroke (Helix, 2018). Through Daly's research, people came to know the major causes of heart attacks.

In 1952, she was also involved in research on nuclei and their contents, before DNA's structure and its role in hereditary diseases were discovered. Daly's work was cited by other researchers, as she had identified the contents and properties of histones, purines, and pyrimidines, which are essential to the coding of RNA and DNA. These findings were crucial in decoding the structure of DNA and determining the cell's role (Debakcsy, 2018). Similarly, she studied hypertension and identified contributing factors that were both chemical and nutritional, as well as other factors such as sugar consumption and smoking, and how they worsened the condition. In the years that followed, Daly researched other issues involving the circulatory system, aging and hypertension, and the recycling of muscle energy, with a focus on creatine.

Marie Maynard's Challenges

Daly had to overcome many challenges to succeed in life. She faced gender discrimination, racism, and financial hardship but still managed to come through them and achieve the success she did. It was only by luck that she obtained a job teaching chemistry while World War II was in progress: with men in short supply, women had to step in and take charge. She even had to withdraw from Cornell University because the tuition fees were high and her father could not afford them. Regardless, her passion ensured that she achieved significant success in her life.

Marie Maynard's Contribution to Society

During her time, Daly was committed to helping less fortunate students. She facilitated enrollment in medical school and science programs through scholarships. By doing this, she helped raise the living standards of many children across the USA who wanted to learn (A&E Television Networks, 2020). She mentored many students and encouraged them to follow their dreams career-wise.

Conclusion

Daly's success story is an inspiration to all people that nothing is unachievable for those with the will to succeed. Her study of the heart has facilitated a better understanding of its functions and enhanced treatment for patients in the cardiac care unit. Daly is a role model for many people due to her achievements despite the many setbacks she faced in her career pursuit. Her research contributed to the advancement of the field of Chemistry and remains relevant today.

References

A&E Television Networks. (2020). Marie M. Daly. Biography Editors.

Borne, J. (2020). Hidden figures beyond: The first Black Ph.D. in chemistry, Marie Maynard Daly. Charged Magazine.

Brown, J. (2012). African American women chemists. Oxford University Press.

Debakcsy, D. (2018). Marie Maynard Daly (1921-2003), America's first Black woman chemist. Women You Should Know.

Helix. (2018). Dr. Marie Maynard Daly: A love for the heart. Celebrating remarkable scientists for Black history month. Helix Newsletter.

Differential Analysis by Campus and Enrollment Type

Introduction

In term 1/2010, aggregate enrollment shrank palpably, by 14 percent: from 410 in the corresponding period last year to 354 this year. Since eCampus enrollment accounted for at least 95% of the total in both periods, changes in its headcount loomed large. The loss of aggregate enrollment occurred across both active campuses, B and D (see Fig. 2 overleaf). True, campus E showed no erosion in eCampus headcount from one term to the next, but the numbers involved, just 2 students each time, are too marginal to count in the overall scheme of things.
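The 14 percent figure above follows directly from the two headcounts. A minimal sketch of the calculation, assuming only the 410 and 354 totals stated in the text (the helper function name is illustrative, not from any source dataset):

```python
# Percent change in aggregate enrollment, T1/2009 -> T1/2010.
def percent_change(old: float, new: float) -> float:
    """Return the relative change from old to new, in percent."""
    return (new - old) / old * 100

drop = percent_change(410, 354)
print(round(drop, 1))  # about -13.7, reported here as a 14% decline
```

The same helper applies to every term-over-term comparison in this report.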

In-Class Enrollment
Figure 1

Since in-class enrollment is generally very low, small changes are magnified. Even the minuscule campus B enrollment of two students last year disappeared (Figure 1), presumably in favor of attending to a sizeable online student body (Figure 2).

Site D continued to attract a small number of in-class students, and site E did the best in the 2010 term, coming out of nowhere in the 2009 term to boast all of nine students this time around.

eCampus Enrollment
Figure 2

New Starts, T1/09 and T1/10

Campus D registered more new starts in 2010 compared to the prior year, largely explaining the growth in total enrollment (Fig. 3 overleaf). Otherwise, the data seems to reveal a fluke for campus C: 1 new start this term versus zero total enrollment.

New Starts
Figure 3
New Starts
Figure 4

Online new starts shrank in term 1, by two-thirds for campus B and from a minuscule 2 online students last year to nothing at all in the case of campus E (Figure 4 above).

Total Enrollments, Term 2, 2009 and 2010

In term 2, only campus D had any in-class enrollment to show for it. The puny student population of seven in the same period last year nearly doubled in T2/2010.

In-Class Enrollment
Figure 5

The online situation firmed up in term 2 (Fig. 6 overleaf), very much as it did last year, when eCampus enrollment at campus B reached the system-wide high of 350 students, the best attained in the two years under review. Though gross enrollment numbers still shrank, campus B online students were off just 6 percent from last year's record. Campus D remained a distant second in eCampus enrollment this year. Online enrollment at this site fell proportionately faster than at site B, but the reality is that the reduction of ten students in campus D enrollment impacts system-wide performance less than the fall-off of 21 students at campus B.
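The distinction drawn above between proportional and absolute declines can be made concrete. In this sketch, campus B's prior-year figure (350) and its loss of 21 students come from the text; campus D's prior-year figure of 54 is an assumed illustration consistent with the "fell proportionately faster" claim, not a number from the source:

```python
# Absolute vs. relative enrollment declines, T2/2009 -> T2/2010.
# Values are (prior year, this year); campus D's 54 is assumed.
campuses = {"B": (350, 350 - 21), "D": (54, 54 - 10)}

for name, (prev, curr) in campuses.items():
    abs_loss = prev - curr
    rel_loss = abs_loss / prev * 100
    print(f"Campus {name}: -{abs_loss} students ({rel_loss:.0f}% down)")
```

Under these numbers campus D drops roughly three times faster in percentage terms, yet campus B's 21 lost students weigh twice as heavily on the system-wide total.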

eCampus Enrollment
Figure 6

New Starts, Term 2, 2009 and 2010

In term 2, only campus D had any in-class new starts (Fig. 7 overleaf). In fact, this site performed satisfactorily in this respect all the way to term 4. As well, its new starts of 3 compare favorably with absolutely nothing in term 2/2009.

Term 2 was also a comparatively parched season for online new starts (Fig. 8 overleaf). Only campus B recorded any tangible inflow, albeit new eCampus registrations this year were off around 22 percent from prior-year levels. For the rest, site C garnered just one online new start.

New Starts
Figure 7
Figure 8

Total Enrollments, Term 3, 2009 and 2010

In term 3, site D continued to outpace all others where the in-class student body was concerned (Fig. 9 below). Though the numbers are low, gross enrollments were more than double what they had been in the same period last year. In T3/2009, campus B had had one solitary student but accepted no one at all this year.

Figure 9

Meanwhile, site B continued to set the pace for online enrollment (Fig. 10 overleaf). Its eCampus student population was less than 10 percent off the near-record achieved in the same period last year. Site D achieved the rare feat of being 6 percent up on last year's online enrollment (the chart scale in Fig. 10 obscures the fact that the numbers rose from 44 to 47). Still, the eCampus student body at site D was only a fraction (15%) of the volume campus B boasted.

Figure 10

New Starts, Term 3, 2009 and 2010

Only campus D had any new in-class starts at all in term 3 (Fig. 11 overleaf). Still, even this modest figure of three was better than in the 2009 term, when none of the campuses recorded any new enrollments.

Online inflows this term were similarly sparse, much the same as in the equivalent term of 2009. Campus C did gain three new online enrollments, but this did not fully offset the fall in eCampus new starts at site B from 21 to 17 (Fig. 12 overleaf).

In-Class New Starts
Figure 11
Figure 12

Total Enrollments, Term 4, 2009 and 2010

By term 4, aggregate in-class enrollment this year stood at just ten. That site E had half of this versus nothing last year (Fig. 13 below) barely made up for the erosion experienced by campus D.

In-Class Enrollment
Figure 13

As to eCampus enrollment, sites C, D and E (in that order) did tangibly better compared to term 4 the prior year (Fig. 14 overleaf). However, these gains did not make up for the near-halving of online enrollment that campus B suffered.

Figure 14

New Starts, Term 4, 2009 and 2010

Once again, only campus D recorded any fresh enrollment activity for in-class students (Fig. 15 overleaf). The picture was made even bleaker by the fact that new starts this term shrank from four the prior year to a solitary new enrollment in 2010.

Sites C, D and E accepted token numbers of newly-enrolling students for online courses (Fig. 16 overleaf). But these failed to make up for the two-thirds loss campus B endured compared to the same term in 2009.

Figure 15
Figure 16

Total Enrollments, Term 5, 2009 and 2010

By the fifth term of 2010, campus D held steady with an in-class student body of ten compared to nothing at all for all other sites and a similarly dismal showing in 2009 (Fig. 17 below).

In-Class Enrollment
Figure 17

As the year closed, eCampus enrollment continued to droop. Gains at sites C and E (Fig. 18 overleaf) could not make up for the marginal loss at site D and the near-halving of online enrollment at campus B.

Research on Mirror Neurons

It is worth noting that mirror neurons have a typical neuronal shape, consisting of a cell body, dendrites, and an axon. They are located in various structures of the brain (Mazurek & Schieber, 2019). Their distribution makes sense given the several functions they perform: understanding the intentions and thinking of another subject, empathy, imitation, and learning. Each of these functions involves structures containing mirror neurons, along with a varying set of auxiliary structures that lack mirror properties, as in the case of the superior temporal gyrus during the perception of visual stimuli.

Other species also have mirror neurons: they are activated both when a certain action is performed and when another living being performs the same action (Mazurek & Schieber, 2019). Primates and more primitive creatures, including birds, have mirror neurons. It can be assumed that the presence of mirror neurons makes these animals more social, since it allows them to empathize with the emotional state of another living being without losing the understanding that the experience originates externally.

The main trend in applied research on mirror neurons is the study of hypotheses regarding their functions (Mazurek & Schieber, 2019). Since mirror neurons are activated only when familiar actions are observed, scientists suggest that they are responsible not for understanding intentions but for finding a finished motor program and retrieving it from long-term memory. This is the basis for linking certain types of disorders to mirror neurons. In particular, studying the functions of mirror neurons may make it possible to understand the causes of cognitive impairments and outline ways to treat them; such impairments include difficulties with learning a foreign language, autism, and diseases related to impaired memory.

Reference

Mazurek, K. A., & Schieber, M. H. (2019). Mirror neurons precede non-mirror neurons during action execution. Journal of Neurophysiology, 122(6), 2630-2635.

Statistics Misuse: Collecting and Organizing

Collecting and organizing information is an essential part of modern life. Statistics presented in the form of graphs and tables are compelling. People are so used to trusting these data that organizations and states base their activities on them. More complete and well-structured information should help in making correct and effective decisions. However, statistics can often be misinterpreted or misused, which can ultimately have a significant impact on policy and decision-making.

In many cases, statistics in politics are used to promote a particular point of view, a practice often seen on television. An excellent example of such statistical manipulation is a graph shown in 2012 by Fox News (Jilla, 2017). The image suggests that the real unemployment rate increased by almost 7 percentage points from 2009 to 2012. The main reason for such numbers is inconsistent measurement: Fox News deliberately compared two different unemployment rates. For 2009, the standard unemployment rate was used, while the 2012 figure included part-time workers, discouraged workers, and other categories. When the same measure is compared at both dates, it becomes clear that in February 2009, the first month of Obama's presidency, the unemployment rate was 8.2%, and by August 2012 it had fallen to 8.1% (Duffin, 2020). The image also separately includes the percentage of unemployed government workers, whose supposedly privileged position Fox emphasizes.
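The mechanics of the manipulation described above are easy to demonstrate. In this sketch, the 8.2% and 8.1% standard-measure figures come from the text; the 14.7 value for the broader 2012 measure is an assumed illustration (chosen only so that the mixed comparison yields the "almost 7 points" jump the graph implied), not a figure from the cited sources:

```python
# Like-for-like vs. mixed-measure comparison of unemployment rates.
standard = {"2009": 8.2, "2012": 8.1}  # same measure at both dates
broad_2012 = 14.7                      # assumed broader 2012 measure

honest_change = round(standard["2012"] - standard["2009"], 1)
misleading_change = round(broad_2012 - standard["2009"], 1)

print(honest_change)      # -0.1: essentially flat
print(misleading_change)  # 6.5: looks like a dramatic rise
```

The data are the same in both cases; only the choice to switch measures mid-comparison produces the apparent surge.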

Apart from statistics, people are also used to trusting television, which often promotes certain political views and trends. Misinterpretation and misuse of information take many forms, including, as the example shows, presenting false conclusions by comparing incompatible numbers. Unfortunately, in the modern world, information is used not only for organizing activities but also for lobbying the interests of certain groups, which affects the process of making important decisions in society.

References

Duffin, E. (2020). Unemployment rate in the United States from 1990 to 2019. Statista. Web.

Jilla, R. (2017). What are good examples of misleading statistics? Quora. Web.

Prokaryotes and Eukaryotes: Overview

All living creatures can be divided into two groups, prokaryotes and eukaryotes, depending on the structure of their cells. Prokaryotes are unicellular organisms capable of autonomous existence; they do not develop into multicellular forms. They include bacteria, among them cyanobacteria (blue-green algae), and archaea. These organisms are the oldest and most primitive on the planet. Prokaryotes can be found everywhere: in the air, in water, in soil, and inside living organisms. In the atmosphere, bacterial spores are present at altitudes of up to almost twenty km, and they penetrate the ground to a depth of approximately four km. Bacterial cells are very diverse in shape: rod-shaped, rounded, hexagonal, star-shaped, and stalked. Some unite in pairs, others form chains, and still others build clusters like bunches of grapes.

Unlike prokaryotes, eukaryotes are nuclear organisms: their cells contain a nucleus. They can be either unicellular or multicellular, but their cells share the same basic structure. The group of eukaryotes includes plants, animals (humans among them), and fungi. The main feature that distinguishes prokaryotes from eukaryotes is the absence of a cell nucleus, meaning that prokaryotic DNA is not organized into chromosomes and is not surrounded by a nuclear envelope. Eukaryotic cells are much more complex: their DNA is packed into chromosomes located in the nucleus.