Determining correlations between specific data sets or identifying a trend within one of them is a crucial part of quality assurance in any organization (Groebner, Shannon, & Fry, 2014b).
Therefore, it is imperative to ensure that the proper statistical test is applied when testing a hypothesis or identifying a trend in the corresponding data set. Since a test based on measuring the standard deviation alone is not an option, the chi-square test is typically used as the tool for deciding whether the null hypothesis should be rejected or retained. However, the definitions typically provided for the subject matter and its elements raise several questions among those who are only beginning to explore the theory.
Can the chi-square distribution be completely symmetrical, and if it can, under what circumstances?
Groebner, Shannon, and Fry (2014a) make it quite clear that the chi-square distribution is typically represented graphically as a curve. However, the authors also note that, as the degrees of freedom increase, the curve becomes closer to being symmetrical (Groebner et al., 2014a). Therefore, one may wonder whether the degrees of freedom would have to stretch to infinity, or whether there is a point at which the chi-square distribution becomes completely symmetrical (Inferential statistics, 2016).
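As a brief illustration, assuming SciPy is available (the choice of software is an assumption made here, not something the sources specify), the skewness of the chi-square distribution can be computed for increasing degrees of freedom; it shrinks toward zero but never reaches it for any finite value, suggesting that perfect symmetry is only approached in the limit.

```python
# A minimal sketch, assuming SciPy is available: the skewness of the chi-square
# distribution equals sqrt(8 / df), so it only approaches zero as the degrees
# of freedom grow without bound.
from scipy.stats import chi2

for df in (1, 5, 10, 30, 100, 1000):
    skewness = float(chi2.stats(df, moments="s"))
    print(f"df = {df:>4}: skewness = {skewness:.4f}")
```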
Why is the sample size of at least 30 items typically viewed as sufficient?
The concept of sample size is admittedly vague. Despite the evident significance of the subject matter for carrying out statistical tests, the identification of the number of items usually viewed as sufficient for conducting a test needs further commentary. For instance, the origin of the number 30, which is considered enough for carrying out a statistical test, could be explained in a more detailed fashion (Chi-square goodness of fit test, 2016). At present, the statement concerning the sufficiency of 30 elements is treated as an axiom, which is barely acceptable in the realm of statistics. In other words, a further review of the issue in question is required (HyperStat online: Ch. 16, chi-square, 2016).
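One way to make the rule of thumb less axiomatic is to simulate it. The sketch below rests on several assumptions made purely for illustration (NumPy, an exponential population, and a crude skewness measure): the distribution of sample means from a strongly skewed population is itself noticeably skewed for small samples but becomes nearly symmetric once the sample size reaches roughly 30.

```python
# A rough simulation, assuming NumPy: the skewness of the distribution of
# sample means from a right-skewed population shrinks as n grows, and is
# already small around n = 30, which is one common justification for the rule.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=1.0, size=100_000)  # strongly right-skewed

def skewness_of_sample_means(n, draws=5_000):
    means = np.array([rng.choice(population, size=n).mean() for _ in range(draws)])
    centered = means - means.mean()
    return (centered ** 3).mean() / centered.std() ** 3

for n in (2, 5, 10, 30, 100):
    print(f"n = {n:>3}: skewness of sample means = {skewness_of_sample_means(n):.3f}")
```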
Can the Goodness of Fit be viewed as the expected outcome of the chi-square test?
Last but not least, the issue regarding the Goodness-of-Fit test needs to be brought up. An admittedly peculiar concept, the Goodness of Fit is rendered as the degree to which the observed outcomes meet the expected results. This raises the question of whether the Goodness of Fit can serve as proof of the research hypothesis. In other words, it could be assumed that the value of the Goodness of Fit allows one to determine whether the null hypothesis should be rejected or retained. If there is a correlation between the two concepts, the level of the Goodness of Fit should be in inverse proportion to the veracity of the null hypothesis (Statistics and probability dictionary, 2016).
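To ground the question, a small goodness-of-fit calculation is sketched below. The observed and expected counts are hypothetical, and the use of SciPy is an assumption; the point is simply that a larger gap between observed and expected counts yields a larger statistic and a smaller p-value, which is what drives the decision about the null hypothesis.

```python
# A hedged sketch with hypothetical counts, assuming SciPy: a chi-square
# goodness-of-fit test compares observed counts with the counts expected under
# the null hypothesis and rejects the null when the discrepancy is too large.
from scipy.stats import chisquare

observed = [18, 22, 27, 33]   # hypothetical observed counts
expected = [25, 25, 25, 25]   # counts expected if the null hypothesis holds

statistic, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {statistic:.2f}, p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 5% level.")
else:
    print("Fail to reject the null hypothesis at the 5% level.")
```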
Carrying out a statistical analysis is a challenging task. However, the application of the chi-square test will help one make essential business decisions even in an environment that involves an array of variables. Once one has learned the essential details about the subject matter, the tests can be applied successfully to measure the potential of each available decision.
Reference List
Chi-square goodness of fit test. (2016). Web.
Groebner, D. F., Shannon, P. W., & Fry, P. C. (2014a). Chapter 11. Hypothesis tests and estimation for population variance. In Business statistics (9th ed.) (pp. 448-474). Upper Saddle River, NJ: Pearson.
Groebner, D. F., Shannon, P. W., & Fry, P. C. (2014b). Chapter 13. Goodness-of-fit tests and contingency analysis. In Business statistics (9th ed.) (pp. 547-578). Upper Saddle River, NJ: Pearson.
Population Mean
Definition
The population mean is the average of all values in the population under study (Rubin, 2012). To obtain the population mean, one must add all the values and divide the total by their number. Being rather basic, this type of measurement is used in a variety of cases; for example, when there is a need to identify the average age of all group participants, the population mean can be used (Groebner, Shannon, Fry, & Smith, 2014).
Example
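A short sketch with hypothetical participant ages can illustrate the calculation (both the ages and the use of Python are assumptions made for illustration):

```python
# The population mean is the sum of every value in the population divided by
# the number of values; the ages below are hypothetical.
ages = [23, 27, 31, 35, 40, 44, 52, 58]   # ages of every participant in the group
population_mean = sum(ages) / len(ages)
print(f"Population mean age: {population_mean:.2f}")   # 38.75
```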
Sample Mean
Definition
The sample mean, in its turn, is the sum of the values included in the sample divided by the sample size. Although the specified term is very close to the population mean, there is a major difference between the two. Particularly, the population mean is calculated over the entire range of values included in the study, whereas the sample mean is calculated for a specific sample taken from the array of values available.
Example
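Continuing the hypothetical ages from the previous example, the sample mean can be sketched by drawing only a subset of the group:

```python
# The sample mean uses only the values drawn into the sample, not the whole
# population; the population and the sample size are hypothetical.
import random

random.seed(1)
ages = [23, 27, 31, 35, 40, 44, 52, 58]   # the whole (hypothetical) population
sample = random.sample(ages, 4)           # a random sample of four participants
sample_mean = sum(sample) / len(sample)
print(f"Sample: {sample}, sample mean: {sample_mean:.2f}")
```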
Median
Definition
A median is typically referred to as the value that divides the ordered data set into two equal halves (Jackson, 2013).
Example
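Using the same hypothetical ages, the median can be obtained with Python's standard library:

```python
# The median is the middle value of the sorted data; with an even number of
# values it is the average of the two middle values.
import statistics

ages = [23, 27, 31, 35, 40, 44, 52, 58]
print(f"Median age: {statistics.median(ages)}")   # average of 35 and 40 -> 37.5
```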
Skewed Distribution
Definition
In the skewed distribution, the data is unevenly distributed around the center.
Example
A skewed distribution can be encountered in instances that involve random value collection; in such a scenario, retrieving skewed data is rather probable (Vito & Higgins, 2014).
Symmetric Distribution
Definition
In the symmetric distribution, the data is arranged symmetrically around the center. The symmetric distribution is, therefore, opposed to the skewed one.
Example
A typical example of symmetric data is a graph in which the left and the right sides are mirror images of each other.
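The contrast between the two shapes can also be sketched numerically. Assuming NumPy and SciPy (an illustrative choice rather than one made in the sources), data drawn from a normal distribution have a skewness close to zero, while exponentially distributed data are clearly skewed:

```python
# A hedged sketch, assuming NumPy and SciPy: symmetric (normal) data have a
# skewness near zero, while skewed (exponential) data have a clearly positive
# skewness.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
symmetric_data = rng.normal(loc=0.0, scale=1.0, size=10_000)
skewed_data = rng.exponential(scale=1.0, size=10_000)

print(f"Skewness of symmetric data: {skew(symmetric_data):+.3f}")  # close to 0
print(f"Skewness of skewed data:    {skew(skewed_data):+.3f}")     # close to 2
```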
Mode
Definition
The concept of a mode is traditionally rendered as a tool for measuring the central location of a particular set of values. Although often confused with the mean, it, in fact, has very little to do with the concept of an average number. As a rule, the mode is the value that occurs most frequently in a specific data set.
Example
Suppose there are ten teams working on a conveyor belt in a factory, and the percentages of defects occurring during the performance of the teams are 2%, 2.1%, 2.2%, 2.8%, 3.3%, 3.5%, 4.2%, 4.2%, 4.4%, and 4.5%, respectively. In the data provided above, the value 4.2% occurs most frequently. Therefore, 4.2% is the mode of the identified data set.
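The same result can be confirmed with a short sketch in Python (the choice of language and of the standard statistics module is an assumption made for illustration):

```python
# Confirming the mode of the defect percentages listed above: 4.2 occurs twice,
# more often than any other value.
import statistics

defect_rates = [2.0, 2.1, 2.2, 2.8, 3.3, 3.5, 4.2, 4.2, 4.4, 4.5]
print(f"Mode: {statistics.mode(defect_rates)}%")   # Mode: 4.2%
```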
Reference List
Groebner, D. F., Shannon, P. W., Fry, P. C., & Smith, K. D. (2014). Describing data using numerical measures. In Business statistics (9th ed.) (pp. 85-145). Upper Saddle River, NJ: Prentice Hall.
Jackson, S. L. (2013). Statistics plain and simple. Boston, MA: Cengage Learning.
Rubin, A. (2012). Statistics for evidence-based practice and evaluation. Boston, MA: Cengage Learning.
Vito, G. F., & Higgins, G. E. (2014). Practical program evaluation for criminal justice. New York, NY: Routledge.
The principal characteristic of a scientific research method is that researchers allow the reality to speak for itself. According to Zikmund and Babin, "a scientific research method is the way researchers go about using knowledge and evidence to reach objective conclusions about the real world" (Zikmund & Babin, 2010, p. 7). This paper will evaluate the scientific method used in the research article "Consensus in team decision making involving resource allocation" by Philip S. Chong and Omer S. Benli. It will evaluate the methodology employed in the article as well as provide interpretations of the results and conclusion.
Methodology
The purpose of the research was to develop a practical method that can be applied in team decision making, especially in the distribution of financial resources in business enterprises. The use of a hypothesis indicates that the methodology employed in the research paper was scientific. The research proposed the hypothesis that "the selected team consensus strategy from among all available strategies must have a minimum sum of squares of monetary regrets" (Chong & Benli, 2005, p. 1147).
A statistical representation of the hypothesis was established. When making decisions in an organization, it should be ensured that every member of the organization is satisfied. The paper, therefore, hypothesized that this can only be achieved through team decision making.
During the study, a framework was developed as a scientific decision-making tool. The hypothesis was represented algebraically, and three college department heads with three business strategies were used to demonstrate it. To come up with the algebraic representation of the hypothesis, it was presumed that there are m distinct ways of allocating shared resources amongst n parties. To achieve its objectives, the study focused on obtaining compromise by selecting a group of pure plans that reduced the variance. This was followed by developing an approach to arrive at the best process that all team members could agree on.
The procedure was then interpreted through mixed strategies, and a theoretical interpretation of the problem was finally presented. The study applied the stated hypothesis to explicate the decision-making procedure. Calculations were done for three cases: strategies A, B, and C were applied in the first case, while strategies A, B, and C, together with strategy AB, were applied in cases II and III (Chong & Benli, 2005, p. 1156).
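Although the article's actual payoff figures are not reproduced here, the selection rule stated in the hypothesis can be sketched with entirely hypothetical numbers: each member's regret for a strategy is the gap between that strategy's payoff and the member's best attainable payoff, and the consensus choice is the strategy with the smallest sum of squared regrets.

```python
# A hedged sketch with hypothetical payoffs (in thousands of dollars), not the
# article's data: the consensus strategy is the one minimizing the sum of
# squared monetary regrets across the three department heads.
payoffs = {
    "A": {"head_1": 120, "head_2": 70,  "head_3": 90},
    "B": {"head_1": 80,  "head_2": 110, "head_3": 85},
    "C": {"head_1": 95,  "head_2": 90,  "head_3": 105},
}

# Each head's best attainable payoff across all strategies.
best = {head: max(p[head] for p in payoffs.values()) for head in payoffs["A"]}

def sum_of_squared_regrets(strategy):
    return sum((best[head] - payoffs[strategy][head]) ** 2 for head in best)

for strategy in payoffs:
    print(f"Strategy {strategy}: sum of squared regrets = {sum_of_squared_regrets(strategy)}")
print("Consensus strategy:", min(payoffs, key=sum_of_squared_regrets))
```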
Interpretation of the study results
The hypothesis of the study was interpreted as a Nash equilibrium, and this involved mixed strategies. The study showed that since each team member must be flexible and willing to give up something to reach an agreement, the best approach was a consensus agreement (Chong & Benli, 2005, p. 1148). It further showed that quantifying compromise was an effective way of team decision making in an organization.
It stated that when a team made a decision, it was agreeing to a compromise process in lieu of the practice that provided it with the highest payoff. According to the paper, the disparities in payoff were referred to as the team's opportunity loss. By observing the behavior of individuals and decision makers in a team while selecting a financial distribution formula, the research proved that the hypothesis was true for the specific procedure undertaken by the three departmental heads.
Conclusion
Decision making involving the distribution of resources in an organization requires that compromises be made amongst team members to arrive at a common agreement. Strategies that are based on reason should be developed and resource distribution calculated. However, it is postulated that the strategy that is finally chosen should have the least variance of monetary regrets. This research paper acts as an effective guide for organizational team decision making.
References
Chong, P. S., & Benli, O. S. (2005). Consensus in team decision making involving resource allocation. Journal of Business Decision, 43(9), 1147-1160.
Zikmund, G., & Babin, J. (2010). Business research methods. Mason, OH: South-Western Cengage Learning.
The concepts of osmosis, diffusion, and active transport concern the movement of molecules and are some of the foundational terminologies of the biology curriculum. Nevertheless, the terms are frequently confused and misunderstood. According to the research by Reinke, Kynn, and Parkinson (2019), most first-year biology students have a large number of misconceptions concerning the aforementioned terms. It implies that the topic of molecular movement appears to be highly complex and challenging; therefore, it is essential to elaborate on the terminology. The current paper attempts to analyze osmosis, diffusion, and active transport and discuss the primary differences between these core concepts.
Definitions
While the three notions are defined by the movement of molecules, there are some drastic differences between them. However, before contrasting the types of movement, it is essential to provide a brief definition for each of them. Diffusion is a type of passive transport of molecules across the cell membrane from areas with a high density to regions with a low density of molecules (Rae-Dupree & Dupree, n.d.). This type of movement might be simple, which refers to standard passive movement through the membrane, or it might be facilitated, which requires assistance from a carrier molecule (Rae-Dupree & Dupree, n.d.). Osmosis has a similar definition: "the movement of water molecules across a selectively permeable membrane from a region of higher water concentration to a region of lower water concentration" (BBC, n.d.). Lastly, active transport concerns the transfer of molecules against their concentration gradient and, contrary to the previous concepts, from a region of low concentration to an area of high concentration. Overall, some of the core differences between the concepts are noticeable from the very definitions.
Differences
Having established the definitions for osmosis, diffusion, and active transport, it is possible to examine the differences between the notions. The concepts are primarily contrasted by the direction of movement, the type of transported substances, and whether energy is required for the process (BBC, n.d.). The first of these is evident from the definitions above. The type of substances is another core difference between the processes: diffusion transports various substances, including carbon dioxide, water, food substances, and oxygen, while osmosis allows only for the transportation of water (BBC, n.d.). On the other hand, active transport primarily concerns the movement of mineral ions in plants and glucose in animals (BBC, n.d.). Furthermore, unlike diffusion and osmosis, active transport requires energy to function effectively (Rae-Dupree & Dupree, n.d.). Additionally, as mentioned before, active transport is not passive (like osmosis and diffusion) and, therefore, has a few consequent distinguishing marks. Active movement is generally a rapid, unidirectional, and selective process that is also affected by temperature (Rae-Dupree & Dupree, n.d.). In contrast, passive movements, including osmosis and diffusion, are slow, bidirectional, and far less dependent on temperature.
Conclusion
Summing up, the current essay has provided the definitions of osmosis, diffusion (simple and facilitated), and active transport of molecules and discussed the core differences between the concepts. As mentioned in the introduction, these notions prove to be complex for a large number of students; therefore, it is essential to analyze the three processes to get a better understanding of the subject. The primary differences between the concepts include the type of movement, the need for energy, and the forms of substances transported. Overall, having examined the contrast between osmosis, diffusion, and active transport, it becomes considerably easier to understand the more complex topics regarding cell processes.
Rae-Dupree, J., & Dupree, P. (n.d.). The cell membrane: Diffusion, osmosis, and active transport. Web.
Reinke, N. B., Kynn, M., & Parkinson, A. L. (2019). Conceptual understanding of osmosis and diffusion by Australian first-year biology students. International Journal of Innovation in Science and Mathematics Education, 27(9), 13-33.
The musculoskeletal system is an organ system made up of specialized tissues of the skeletal muscles and bones. Calcium is a mineral found primarily in foods such as milk, kale, and fish; it helps maintain strong teeth and bones. This mineral plays an instrumental role in nerve stimulation, muscle contraction, and blood pressure regulation, and it helps maintain the mass essential for skeletal support.
Introduction
The musculoskeletal system is an organ system made up of specialized tissues of the skeletal muscles and bones. The skeleton contains vital hematopoietic components and stores ninety-nine percent of the calcium in the body (Geiger et al. 3). Bones connect to other bones and muscle fibers through connective tissue such as ligaments and tendons. Muscles hold bones in place and aid in their movement. Joints allow movement between bones, while cartilage prevents friction between bone ends during motion. Calcium is added to the musculoskeletal system by osteoblasts and removed by osteoclasts (Nguyen 20). Calcium is stored within the skeleton, and it helps in maintaining the mass necessary for skeletal support. Besides, it is essential for muscle contraction, blood clotting, and heart functioning. This essay provides a detailed description of how calcium strengthens the musculoskeletal system, which includes bones, ligaments, muscles, cartilage, joints, and tendons.
Sources of Calcium
Humans get calcium through the foods they eat. The body does not manufacture calcium. Therefore, it gets the compound from the food one eats or other calcium supplements. Foods rich in calcium include dairy products like milk and cheese, green leafy vegetables, such as spinach, okra, and curly kale, soya drinks, fortified flour products like bread, and fish (Murphy et al. 5). Lack of sufficient calcium can lead to rickets in children and osteoporosis in adults.
Recommended Daily Calcium Quantities
Insufficient intake of calcium leads to poor uptake by the body, which, in turn, makes the bones weak. It is estimated that only about 32 percent of Americans obtain adequate amounts of the mineral from food alone (Kahwati et al. 1603). Even with supplements, most adults do not take enough calcium. The recommended daily amount of calcium for adults aged between 19 and 64 is approximately 700 mg/day (Kahwati et al. 1603). The required quantities are obtained from daily balanced diets and, for some people, supplements. Excessive calcium supplementation (above 1,500 mg/day) is harmful because it can result in diarrhea and stomach pain.
Definition of Calcium Deficiency
Inadequate intake of calcium increases the risk of developing hypocalcemia (calcium deficiency disease). Naafs argues that the probability of having calcium deficiency increases with age (268). Besides, one does not become calcium deficient after skipping one daily dose. However, people need to ensure that they get the recommended daily dose because the body utilizes the mineral quickly. Since vegans do not eat calcium-rich dairy products, they are more likely to get hypocalcemia.
Causes of Calcium Deficiency
Causes of calcium deficiency include poor intake over a long time, particularly in childhood, dietary intolerance to calcium-rich foods, hormonal changes in women, medications that reduce calcium absorption, and certain genetic factors (Tankeu et al. 643). Other causative agents are malabsorption and malnutrition. Malabsorption occurs when the body cannot absorb the required minerals from food eaten while malnutrition is when a person is not taking enough nutrients. Calcium deficiency does not have short-term symptoms as the body tries to maintain calcium levels by extracting it from bones. Nevertheless, long-term low calcium levels exhibit severe symptoms, including memory loss, muscle spasms, muscle cramps, and tingling and numbness in the face, feet, and hands.
Symptoms of Hypocalcemia
Hypocalcemia affects all body parts, leading to symptoms such as fragile and thin skin, weak nails, dental problems, and slow hair growth. Although calcium deficiency, in its early stage, does not result in any symptoms, the indicators develop as the disorder progresses (Almaghamsi et al. 453). Signs of severe hypocalcemia include memory loss, tingling and numbness in the face, feet, and hands, hallucinations, muscle spasms, brittle and weak nails, the fracturing of bones, depression, seizures, and muscle cramps. In case one experiences the aforementioned symptoms, they should seek medical care, which includes the diagnosis and treatment of the disease.
Diagnosis and Treatment of Hypocalcemia
Calcium deficiency is diagnosed and treated by taking a blood sample to check calcium levels and recommending the appropriate treatment options. The doctor will measure the albumin level, the total calcium level, and the ionized calcium amount. The normal adult calcium level should range from 8.8 to 10.4 mg/dL (Harvey et al. 449). Consistently low calcium levels in the above tests confirm the presence of hypocalcemia. The treatment for calcium deficiency involves adding more calcium to the body by taking supplements. The generally recommended calcium supplements comprise calcium carbonate, calcium phosphate, and calcium citrate (Nicholson et al. 140). The products are available in chewable, tablet, and liquid forms.
Critics and Opposing Views
Although calcium supplements are suggested in case of low levels of the mineral in the body, a high concentration of the compound can cause damaging health problems. Even though eating calcium-rich foods and taking supplements helps increase calcium levels in the blood, it is essential to maintain the recommended dosage and consumption ratios (Cormick and Belizán 4). There is growing evidence suggesting calcium supplementation's potential adverse impacts. For instance, according to a review by Li et al., calcium consumption below 751 mg/day increases an individual's susceptibility to osteoporosis and its related fractures, while a high intake above 1,137 mg/day elevates the likelihood of hip fractures in females (2447). However, other researchers, including Wimalawansa et al., highlight calcium's importance in enhancing bone development and fracture prevention (10). The right amounts of calcium have several benefits for the musculoskeletal system, including the support and strengthening of bones. Therefore, people should adhere to the correct dosage of supplements and consume calcium-rich foods to keep their bones strong.
Nicholson, Kristina, et al. "A Comparative Cost-Utility Analysis of Postoperative Calcium Supplementation Strategies Used in the Current Management of Hypocalcemia." Surgery, vol. 167, no. 1, 2020, pp. 137-143. Web.
Wimalawansa, Sunil, et al. "Calcium and Vitamin D in Human Health: Hype or Real?" The Journal of Steroid Biochemistry and Molecular Biology, vol. 180, no. 1, 2018, pp. 4-14. Web.
Correlation refers to the association within bivariate data, that is, data sets containing two observations for each subject. Scatterplots, also known as scatter graphs or correlational charts, provide excellent descriptive representations of the interrelationship between two quantitative variables. Each point in a scatter plot denotes a paired measurement of two variables for a particular subject, and every subject is represented by one point on the graph (Brase & Brase, 2015). Therefore, by noting the overall pattern of the points throughout the chart, an individual can determine the direction and strength of the relationship. In regards to direction, when the points produce a lower-left to upper-right pattern, it can be concluded that there is a positive correlation between the two variables. Conversely, an upper-left to lower-right pattern suggests the existence of a negative correlation. Furthermore, when the points lie on a straight line, a perfect correlation can be inferred. Lastly, when the points are scattered randomly and do not show a linear trend, a zero correlation can be suggested.
When analyzing scatterplots, it is crucial to consider not only the direction of the relationship, which is negative, positive, or zero, but also the magnitude of the correlation, which can be represented by the distance between individual points (Brase & Brase, 2015). The concept of drawing an imaginary oval is often used to help interpret the magnitude of the collinearity. A strong correlation between variables is represented by points that are close to one another and a small imaginary oval. On the other hand, a weak correlation is signified by large distances between points and a wide imaginary oval. In sum, scatter plots provide a good visual representation of the direction and magnitude of the linear correlation between two quantitative random variables.
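A brief sketch ties these ideas together. Assuming NumPy and Matplotlib (neither is named in the source), the code below generates two positively related variables, computes Pearson's r, and draws the scatterplot; a lower-left to upper-right cloud corresponds to a positive r, and tighter clustering corresponds to a larger magnitude.

```python
# A hedged sketch, assuming NumPy and Matplotlib: simulated data with a
# positive linear relationship produce a lower-left to upper-right scatter
# pattern and a positive Pearson correlation coefficient.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.5, size=200)   # positive, moderately strong link

r = np.corrcoef(x, y)[0, 1]
plt.scatter(x, y, s=10)
plt.title(f"Pearson r = {r:.2f}")
plt.xlabel("Variable X")
plt.ylabel("Variable Y")
plt.show()
```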
Both qualitative and quantitative research represent sets of strategies, techniques, and processes that are used to collect data or evidence for further analysis with the aim of uncovering new information or facilitating a better understanding of a topic at hand. Understanding the differences between quantitative and qualitative study methodologies is the key step to choosing the method that would help a researcher answer a study question or prove or disprove a hypothesis (Collins & Stockton, 2018). This paper aims to explore the fundamental similarities and differences of both qualitative and quantitative research by providing examples illustrating each approach.
Qualitative research refers to an empirical methodology in which data are not quantifiable and which approaches a research query from an idealistic or humanistic standpoint (Pathak, Jena, & Kalra, 2013). While the quantitative approach is seen as a more reliable method that depends on numeric and objective techniques that can be easily replicated, the qualitative method is used for understanding people's beliefs, attitudes, experiences, behaviors, and interactions. Despite the fact that the method was once viewed as philosophically incongruent with experimental research, qualitative research is now recognized for adding new dimensions to studies that cannot be gained through the mere measurement of variables.
Qualitative research suggests that events can be understood adequately only if they are viewed in context, which implies that the natural surroundings within which a study is being carried out have great significance. An example of qualitative research is descriptive phenomenology, which has been extensively employed in psychological studies as a method used to study a person as a whole rather than fragmented psychological processes (Christensen, Welch, & Barr, 2017). Phenomenology is a research philosophy that originated in 1900 with Edmund Husserl's publication of Logical Investigations. The design has been widely applied in social science research as a method aimed at exploring and describing individuals' lived experiences. Being both a philosophy and a method of scientific inquiry, phenomenology can take different forms, as it evolved from its initial European approach to include the American one. Husserl described the experience embedded in phenomenology as occurring within the circumstances of the environment with which a person is engaged (Neubauer, Witkop, & Varpio, 2019). Therefore, research conducted from the perspective of descriptive phenomenology relies on perceived individual consciousness to inform the process of inquiry.
Another example of qualitative methodology is the case study design, which is intended to generate hypotheses and validate the tools used to explore a specific phenomenon. This research design has been generally applied within such disciplines as psychology, ecology, anthropology, and science. The case study design allows researchers to test theoretical models by using them in real-world situations (Ridder, 2017). By doing so, researchers can draw conclusions as to whether the developed models and theories are actually applicable in practice. For scholars specializing in social sciences, psychology, or anthropology, the case study is a qualitative design regarded as a valid research method that allows building close connections between researchers and participants, which can reveal essential insights about the issues to be studied. In addition, case studies offer a great degree of flexibility, as they can be adjusted to different purposes and environments.
Quantitative research is a methodology that implies gathering numerical data that can be differentiated into categories, ranked, or measured in units of measurement. Compared to qualitative research, the quantitative method aims to establish general laws of behavior and phenomena across a wide range of settings and contexts. This type of research sets the objective of testing a particular theory put forth by a scholar to ultimately support or reject it. When conducting their studies, researchers aim to reach objectivity and separate themselves from the data.
An example of quantitative research is the correlational study design, in which a researcher aims to understand what kind of relationship occurs between naturally occurring variables. Therefore, the goal of correlational design is figuring out how two or more variables are related and in what way. A change in one variable is expected to be accompanied by a change in another, and with the help of a correlational study, researchers can determine the nature of such changes (Mertler, 2015). This type of research is of a descriptive nature and depends on the established scientific methodology and a hypothesis developed before the study is carried out. For instance, correlational research can show the statistical relationship between low income and health outcomes; that is, the less people earn, the less likely they are to reach positive health outcomes.
Another example of quantitative methodology is quasi-experimental research, which is unique in that it lacks one or more features of a true experiment. This type of design is similar to experimental research in that it manipulates an independent variable; however, it is different because there is either no control group, no random selection, no random assignment, or no active manipulation. Quasi-experimental research is predominantly carried out in cases when there is no possibility to perform random selection or create a control group (Maciejewski, 2018). However, the lack of randomization in such research poses some threats to internal validity that can be reduced through careful design, measurement, or statistical analysis.
Some similarities between qualitative and quantitative studies should also be considered. One of the similarities is that raw data acquired as a result of data collection are ultimately treated qualitatively: even though numbers present no bias, researchers still have to make decisions as to which numbers should remain and which should be disregarded. Therefore, the process of choosing and justifying numbers is qualitative, which suggests that all research is qualitative to some extent (Aspers & Corte, 2019). Another similarity is the researcher's role, as both qualitative and quantitative studies involve the researcher, although there is variation in the degree of involvement. For instance, in qualitative anthropological research, there is an option for the researcher to integrate himself or herself into the study group in order to record impressions and experiences. In a medical study that takes the quantitative approach, the researcher is separated from the data being collected.
To conclude, qualitative and quantitative research approach scholarly inquiry from the perspective of different types of data. Qualitative inquiry is conceptual and is concerned with understanding behaviors and perspectives from the standpoint of individuals involved in research. Thus, data will be collected by observing participants or questioning them, which yields non-quantifiable data. Quantitative inquiry assumes a measurable reality and is concerned with discovering facts about certain phenomena. Data collected during quantitative research is acquired through measurement, which yields numerical and quantifiable data. Thus, depending on the phenomena to be studied and the goals of research, scholars will choose between qualitative or quantitative research.
References
Aspers, P., & Corte, U. (2019). What is qualitative in qualitative research. Qualitative Sociology, 42, 139-160.
Christensen, M., Welch, A., & Barr, J. (2017). Husserlian descriptive phenomenology: A review of intentionality, reduction and the natural attitude. Journal of Nursing Education and Practice, 7(8), 113-118.
Collins, C., & Stockton, C. (2018). The central role of theory in qualitative research. International Journal of Qualitative Methods, 17(1), 1-10.
Maciejewski, M. (2018). Quasi-experimental design. Biostatistics & Epidemiology, 4(1), 38-47.
Mertler, C. (2015). Introduction to educational research. SAGE Publications.
Neubauer, B., Witkop, C., & Varpio, L. (2019). How phenomenology can help us learn from the experiences of others. Perspectives on Medical Education, 8, 90-97.
Pathak, V., Jena, B., & Kalra, S. (2013). Qualitative research. Perspectives in Clinical Research, 4(3), 192.
Ridder, H-G. (2017). The theory contribution of case study research designs. Business Research, 10, 281-305.
Looting and smuggling of illegally obtained artifacts have become a major problem for various fields. In terms of archaeology, this tendency prevents experts from examining the items, hiding important findings from professionals. In addition, looting receives the attention of governmental services around the world, as it is a criminal activity tied to the global black market. The purpose of this essay is to examine artifact looting and smuggling as a key ethical question in the field of contemporary archaeology.
Archaeological procedures are established to maintain the integrity of the process, but looters disrupt them, adding an unaccounted variable to the equation. Kelly and Thomas (2017) write that artifact looting in the United States has reached the level of an epidemic. In the vast majority of cases, it is done with the sole purpose of personal profits. Looters bypass the necessary protocols, simply excavating items and selling them illegally. Consequently, excavations are done without due diligence, as looters are unlikely to follow the screening, dating, and retrieval procedures. As a result, artifacts are damaged and even lost, leaving experts without a potentially important piece of knowledge.
Another detrimental aspect of looting is related to the fact that it is virtually the illegal exploitation of cultural heritage for profit. The tendencies have been particularly alarming in unstable regions with a rich history. For example, the contemporary Syrian conflict has had an immense impact on the archaeological potential of the area. Not only does the warfare damage and destroy valuable artifacts, but what remains often becomes the target for looters, some of whom are affiliated with terrorist groups. Cox (2015) reports numerous instances of illegal artifact reselling in the markets of the Middle East. Merchants create a smokescreen, which makes most of their items appear fake, but the right people know that some of them are genuine artifacts that were stolen from the expert community. The border between Syria and Lebanon has become particularly notorious in this regard, as looters from one side actively cooperate with smugglers from the other.
On the other hand, despite the emerging concerns, the concept of archaeological looting is not a recent one. Kelly and Thomas (2017) recount the story of one of the most famous artifacts, the Rosetta Stone, which was virtually looted by Napoleon's soldiers during the Egyptian Campaign. This fact represents an ethical dilemma for archaeologists, as the item was retrieved illegally, but its examination by a French expert made an immense contribution to the scientific community. Since the early 20th century, the United States government has been enacting legislation preventing the destruction of cultural heritage. The Rosetta Stone is a representative example, but it is possible to theorize that a similarly positive scientific outcome can be achieved through international treaties and procedures aimed at the optimal examination of findings.
Ultimately, modern looting is nothing like the Rosetta Stone, and its scientific potential is dismal. As suggested by the Middle Eastern artifact market accounts, looters only pursue profits when smuggling artifacts. As a result, the archaeological community lacks an immense amount of items, which could have provided additional cultural insight into the history of humanity. This problem is to be addressed on various levels, as its impact extends beyond the field of archaeology. Therefore, combined efforts of archaeologists, governments, and international organizations are required to confront the destructive wave of looting.
Reference
Kelly, R. L., & Thomas, D. H. (2017). Archaeology (7th ed.). Cengage.
Evaluation is a complicated concept, and for it to generate valid conclusions, it needs a carefully thought-out approach. Such complications emanate from the fact that there are numerous factors that influence the manner in which conclusions are interpreted as well as the outcomes themselves. These factors are usually in a complex interrelationship, which is further catalyzed by the context within which the evaluation is being conducted.
Nevertheless, there are numerous evaluation designs and methods through which the validity of an evaluation's outcomes is determined, depending on the prevailing needs and conditions. These designs are complex and detailed, and their similarities and differences are inexhaustible. Nevertheless, for the purpose of this essay, comparisons and contrasts are made with reference to several observations concerning outcomes.
One-group evaluation designs are among the simplest methods of arriving at conclusions. These designs are intended to demonstrate whether an evaluator's informational needs have been met at a particular point in time. As such, findings from one-group evaluation designs can only be used with reference to that particular time and are not indicative of conclusions at a different time. This does not mean that one-group evaluation designs are not useful.
Within the one-group evaluation designs, there are the pretest/posttest as well as posttest-only designs. An effective law compliance program is best evaluated through the posttest-only design, since it is illogical to pretest compliance with a law before an intervening program is instituted. One-group evaluation designs are limited in effectiveness, but such limitations are effectively addressed by the more complex evaluation methods generally referred to as quasi-experimental designs, which include time-series designs, the selective control design, and nonequivalent control group designs, among others. Time-series designs increase the interpretability of an evaluation by extending the periods of observation over time and indicate that findings are also limited within an extended period of time (Posavac, 2011).
Similarly, nonequivalent control group designs extend the interpretability of findings by incorporating more than one study group. Like the one-group evaluation designs, quasi-experimental designs are also effective in evaluating law enforcement programs. For instance, the time-series design has been effectively used to test compliance with the pre-marriage HIV testing law in Illinois. An evaluation of compliance with the program was conducted over 116 months after its introduction (Posavac, 2011).
Like the one-group evaluation designs, quasi-experimental designs could only posttest compliance with the pre-marriage HIV testing law, since pre-program conditions did not necessitate its evaluation. Additionally, the findings are limited to the period extending to 116 months, apply only to those partners intending to marry, and only factor in the effectiveness of the program in Illinois. Therefore, the evaluation outcomes of all the evaluation methods are tentative indicators rather than absolutely conclusive findings.
Evaluation designs are intended to generate valid conclusions. However, as occurs from time to time, there are some external influences that affect an evaluator's degree of certainty with regard to the validity of outcomes. These are commonly referred to as threats to internal validity; they are detailed, and the similar ways in which they are manifested across all evaluation designs cannot be exhaustively discussed within this essay. Nevertheless, they can be enumerated as maturation, historical occurrences, participant selection criteria, attrition, testing criteria, and measurement methods.
While threats to internal validity are found to significantly mediate the certainty of validity in all evaluation designs, they do so in varied fashion. For instance, regression toward the mean is found to impinge on the validity of outcomes to various extents depending on the evaluation design.
Using the time-series design, it was found that the number of marriage certificates issued in Illinois after the introduction of the pre-marriage HIV testing law dropped by 14%, but stayed constant in other states with a similar law. The drop in Illinois is likely to create a false impression of the effectiveness of the law, since in reality couples from Illinois obtained marriage certificates from adjacent states that had no such laws.
Similarly, with reference to the one-group evaluation design, regression was found to influence the validity of perceived outcomes. For instance, after the law was changed to divert federal funds from foster families to biological families experiencing financial difficulties, there was a significant drop in foster parenthood. While such a drop was achieved over a long period of time, measuring the changes at particular times reveals fluctuation levels of +/-25% (Posavac, 2011).
Thus, if measurement was done at a time when regression toward the mean was at its highest, then the program is likely to be termed effective. To negate the influence of self-selection on the effectiveness of the program, it is vital to include a selectively controlled group, such as the issuance of certificates in adjacent states that had no such law, or those couples from Illinois intending to marry without taking the test. Thus, the selective control design seems relevant in this case, as it allows for the inclusion of couples not affected by the program (Shadish, Cook and Campbell, 2002). Nevertheless, outcomes from all designs are generally affected by threats to internal validity.
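Because regression toward the mean recurs throughout this discussion, a small simulation may make the threat concrete. The numbers below are entirely made up (they are not the Illinois or foster-care figures), and the use of NumPy is an assumption; the point is that cases selected for an extreme pretest score drift back toward the average at posttest even when no program is applied.

```python
# A hedged simulation with made-up numbers, assuming NumPy: cases selected
# because their pretest scores were extreme score closer to the average at
# posttest, even though nothing was done to them in between.
import numpy as np

rng = np.random.default_rng(3)
true_level = rng.normal(loc=50, scale=10, size=10_000)    # stable underlying level
pretest = true_level + rng.normal(scale=8, size=10_000)   # noisy first measurement
posttest = true_level + rng.normal(scale=8, size=10_000)  # noisy second measurement

selected = pretest > 70                                   # pick only extreme pretest cases
print(f"Mean pretest of selected cases:  {pretest[selected].mean():.1f}")
print(f"Mean posttest of selected cases: {posttest[selected].mean():.1f}")  # closer to 50
```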
The pre-marriage HIV testing law described above focuses only on couples planning to get married. Similarly, the law changing foster care funding focuses only on children from abusive families. Evaluation of the effectiveness of such a program, as previously explained, can be done using both one-group and time-based designs. While each of the evaluation designs is likely to generate different outcomes, it is evident that each design is only effective in generating valid results if the participants share similar needs. For instance, it would be illogical for evaluators to incorporate partners without marriage plans as part of the non-program control group, as the needs of this particular group of participants fail to match the evaluation criteria.
The analysis above indicates that all the evaluation methods are focused on outcomes at the end of the program. This implies that the evaluation methods indicated herein are summative in nature. But to what extent is this similarity evident? Trochim (2006) asserts that summative evaluation has various considerations. As already indicated, law enforcement agencies can only evaluate the effectiveness of a law using the posttest-only design, such that the effectiveness of the pre-marriage HIV testing law can only be determined at the end of the program.
Similarly, the actual effectiveness of the same law using any of the quasi-experimental designs can only be effectively determined if the behavior of participants is observed for an extended period of time after the end of the program. This can enable evaluators to determine whether any of the threats to validity influenced outcomes during the program. Similarly, while the experimental evaluation design is claimed to have more valid outcomes than other designs, it only effectively evaluates the influence of threats to validity at the end of the program. Thus, with reference to evaluating the outcomes and impacts of a program at its end, all the aforementioned evaluation designs show similarities.
As indicated by Trochim (2006), summative evaluation has various considerations, which include meta-analysis of outcomes; meta-analysis involves integrating estimates from multiple studies to come up with an aggregated summary judgment. The effectiveness of the pre-marriage HIV testing law was determined by evaluating the outcomes after the introduction of the law in Illinois, as well as the outcomes in other states where the law was operational. Additionally, comparisons were made about the trends in the issuance of marriage certificates in adjacent states without such a law. The outcomes were evaluated over a long period of time and also involved different sets of participants.
However, if the outcome of the pre-marriage HIV testing law were to be determined at a particular time in Illinois, then such comparisons would not be possible. This indicates that the one-group design is dissimilar to quasi-experimental designs as far as meta-analysis of outcomes is concerned. There are also dissimilarities within the quasi-experimental designs in this regard. While time-series designs analyze the outcomes of a program at different times, nonequivalent control group designs aggregate outcomes involving different groups (Shadish, Cook and Campbell, 2002). This is demonstrated through the manner in which the outcomes of the pre-marriage HIV testing law were validated.
While the evaluation outcomes of all the evaluation methods are tentative indicators rather than absolutely conclusive findings, the level of uncertainty varies depending on the evaluation design in question. The level of uncertainty with regard to the validity of outcomes would be significantly high if a one-group design were used to evaluate the effectiveness of a complex program. For instance, the effectiveness of the pre-marriage HIV testing program can only be derived through quasi-experimental designs, that is, by evaluating its effectiveness over an extended period of time, in this case 116 months, and aggregating the findings.
The time-series design is likely to generate a lower level of uncertainty compared to any of the one-group designs, which would, in this case, only evaluate the effectiveness of the pre-marriage HIV testing law at a particular point in time (Posavac, 2011). But are there contrasts within the quasi-experimental designs as far as certainty of outcomes is concerned? Yes, depending on the type of outcome desired.
Referring to the law diverting federal funds from foster families to biological families, the scenario is likely to demonstrate such subtleties. With the time-series design, the level of certainty is likely to be lower with regard to evaluating the validity of outcomes over an extended period of time than with regard to outcomes involving more than one set of participants. On the other hand, the level of certainty is likely to be lower if the nonequivalent control group design is used to evaluate outcomes involving more than one set of participants than when evaluating the validity of outcomes over an extended period of time (Shadish, Cook and Campbell, 2002).
As indicated earlier, threats to internal validity influence the interpretation of outcomes in an almost similar fashion, regardless of the evaluation design in question. Threats to validity are enumerated as maturation, historical occurrences, participant selection criteria, attrition, testing criteria, and measurement methods (Posavac, 2011). Regression, nevertheless, is complicated and influences outcomes differently across the evaluation designs. For instance, in considering the foster care funding program, regression is a valid influence only when evaluating the effectiveness of the program on children from those abusive families that have not responded to counseling and any other correctional therapy.
Thus, in this case, regression seems to be influential in any design that only factors in participants in dire need of help. Thus, time-series designs, experimental designs, and pretest/posttest designs are likely to be influenced by regression (Shadish, Cook and Campbell, 2002). However, if the program is to factor in another set of participants, such as children from families that are likely to be positively affected by counseling, then regression cannot be used as a credible interpretation of outcomes, since the number of children under foster care will definitely decrease. Such a reduction will be a result of improved conditions rather than the effects of the program.
As indeed evidenced in this essay, evaluation is a complex concept. Drawing comparisons and contrasts between these designs is in itself as complicated as the designs are. Drawing conclusions thus ought to be undertaken from a particular approach. In this essay, comparisons and contrasts are made with reference to law enforcement case studies enumerated by Posavac (2011). Regardless of the complexities herein, clear distinctions have been made on the extent of similarities and differences between the evaluation designs and methodologies.
Reference List
Posavac, E. (2011). Program evaluation: methods and case studies. London: Prentice Hall.
Shadish, W., Cook, T., and Campbell, D. (2002). Experimental and quasi-experimental design for generalized causal inference. Boston, MA: Houghton Mifflin.
The article under consideration is titled "Dynamic Bayesian networks based abnormal event classifier for nuclear power plants in case of cybersecurity threats." It is drawn from the journal Progress in Nuclear Energy and is authored by Pavan Kumar Vaddi together with seven other scholars. The article explains that nuclear power plants are increasingly susceptible to cyber-attacks since their instrumentation and controls are nowadays based on digital systems (Vaddi et al., 2020). Accordingly, cyber-attacks on nuclear power plants have the potential to cause serious problems, more so when they masquerade as safety events (Vaddi et al., 2020). The article, therefore, notes that research is required on this subject to differentiate between cyber-attacks and safety events so as to allow for the right responses in a timely manner.
While the standard industry practice for troubleshooting safety events has been to observe physical sensor measurements, Vaddi et al. (2020) suggest the use of the dynamic Bayesian networks (DBNs) methodology. The approach is justified since it is an appropriate framework for inferring the hidden states of a system from observed variables through probabilistic reasoning (Vaddi et al., 2020). Vaddi et al. (2020) propose a DBN-based abnormal event classifier, together with an architecture for implementing it as part of the plant monitoring system. To test it, they set up an experimental environment with a two-tank system together with a nuclear power plant simulator and a programmable logic controller. They then used a set of 27 cyber-attacks and 14 safety events for the experiment, of which six cyber-attacks and two safety events were used to manually fine-tune the conditional probability tables (CPTs) of a two-timescale DBN. The results showed a successful identification of the nature of an abnormal event in all 33 remaining cases, while the specific cyber-attack or fault was identified in 32 cases. It follows that the DBN methodology is applicable and thus requires more research for improvement.
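Although the article's two-timescale DBN and its conditional probability tables are not reproduced here, the general idea of classifying an abnormal event from observed evidence can be sketched with a single application of Bayes' rule. Everything in the snippet below (the event classes, the observation encoding, and the probability values) is a hypothetical illustration rather than the authors' model.

```python
# A heavily simplified, hypothetical sketch of Bayes-rule classification of an
# abnormal event as a safety event or a cyber-attack; the probabilities below
# are made up and are not the CPTs from the article.
priors = {"safety_event": 0.5, "cyber_attack": 0.5}

# P(observation | event class) for two illustrative observations: a physical
# sensor deviation combined with whether the controller-reported state agrees
# with the sensor reading.
likelihoods = {
    "safety_event": {("sensor_deviation", "state_matches"): 0.70,
                     ("sensor_deviation", "state_mismatch"): 0.05},
    "cyber_attack": {("sensor_deviation", "state_matches"): 0.10,
                     ("sensor_deviation", "state_mismatch"): 0.60},
}

def classify(observation):
    unnormalized = {c: priors[c] * likelihoods[c][observation] for c in priors}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

# A mismatch between the sensor and the reported state points toward an attack.
print(classify(("sensor_deviation", "state_mismatch")))
```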
Reference
Vaddi, P. K., Pietrykowski, M. C., Kar, D., Diao, X., Zhao, Y., Mabry, T., Ray, I., & Smidts, C. (2020). Dynamic Bayesian networks based abnormal event classifier for nuclear power plants in case of cybersecurity threats. Progress in Nuclear Energy, 128, 103479.