Historical Maps at the Beaton Institute

The map I have chosen for analysis was created by Johannes de Laet, one of the most renowned geographers of the sixteenth and seventeenth centuries. The map, one of his best-known works, was created in 1633 and is called Nova Francia et Regiones Adiacentes.

The map was likely created for a specific social group. First, it was presumably designed for literate, educated people who could read maps. Moreover, its creator was a merchant, and I suppose that one of the purposes of the map was to familiarize other traders with the depicted land, which held plenty of natural resources that could be sold elsewhere. The area could also interest the wealthier sections of society, since its fertile soil allowed a wide range of crops to be grown, and its waters held a huge fish stock that would have attracted richer people as well.

The map is believed to be the first to depict Prince Edward Island. It is also unique in being one of the first maps to show Lake Champlain, which lies near the border between Canada and the United States. The map presents a fairly accurate depiction of the most prominent features and the configuration of the terrain. It focuses mainly on geographical properties, including many hills and forested areas. The land is surrounded by water, and rocky beaches are shown as well.

I believe that specific elements of the map reveal its primary purpose. If the map was initially created for people who wanted to reclaim the land, these details could clearly help them decide where to go to reach their goals. Furthermore, it would have been very helpful for sailors, as all the place names are included and certain elements of the underwater relief are also shown. This would help travelers reach their destination.

I believe there are no elements in the map that suggest inequalities between the different social groups living in the depicted territory. Instead, the map focuses on the properties of nature. The only aspect of the map that can be regarded as a manifestation of inequality between social groups concerns literacy. Most common people could not read and write, and because the map contains a great many geographical names, it could not be read by the majority of people.

Despite the great number of details shown in the map, many others are absent. To be more precise, no villages or towns are depicted. In that period, the land was undeveloped, and there were no large features connected to human activity. As for the worldviews of the creator and his audience, the work is likely the result of ambitious people's willingness to reclaim new territories. Furthermore, maps can serve as a means of exercising power, as they are closely interconnected with the political situation in any territory.

The Repeated-Measures ANOVA in a General Context

First and foremost, it is essential to emphasize the word generally in the statement that the repeated-measures ANOVA is generally more powerful than the one-way ANOVA. This means that the analysis aims to demonstrate the advantage of the repeated-measures ANOVA in a general context, without considering potential exceptions where the one-way ANOVA is more efficient. It is also essential to exclude multivariate designs from this analysis, as the repeated-measures ANOVA cannot be applied to measurements that involve qualitatively different variables (D'Amico, Neilands, & Zambarano, 2001).

To begin with, it is necessary to define the factors that determine a measure's power. Let us assume that the power of a test is its ability to reject the null hypothesis when it is false. It is then essential to consider the factors that affect this power: the sample size and the p-value. A larger sample size, as well as a lower p-value, increases the chances of rejecting the null hypothesis (Razali & Wah, 2011).

It is suggested that, in order to compare the repeated-measures ANOVA and the one-way ANOVA, it is sufficient to compare the power of paired-samples t-tests and independent-samples t-tests. Therefore, it is necessary to examine data sets retrieved from the two types of tests to assess their power. Let us refer to the paper that provides an explicit description of the tests' outputs. Upon consideration of these data sets, two critical observations need to be made. On the face of it, the observed independent-samples t-tests might appear more powerful because they have larger sample sizes. Closer consideration, however, reveals that the paired-samples t-tests tend to show lower p-values than the independent-samples t-tests.
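This tendency can be illustrated with a small simulation. The sketch below is a hypothetical example (not data from the cited paper); it assumes correlated repeated measurements and uses SciPy's ttest_rel and ttest_ind to show how the paired test typically reports the smaller p-value on such data.

```python
# Hypothetical illustration: comparing paired vs. independent t-tests on
# simulated repeated-measures data (not data from the cited paper).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 30
baseline = rng.normal(loc=50, scale=10, size=n)              # pre-treatment scores
follow_up = baseline + rng.normal(loc=3, scale=4, size=n)    # correlated post-treatment scores

# Independent-samples t-test ignores the pairing between measurements.
t_ind, p_ind = stats.ttest_ind(baseline, follow_up)

# Paired-samples t-test exploits the within-subject correlation.
t_rel, p_rel = stats.ttest_rel(baseline, follow_up)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"paired:      t = {t_rel:.2f}, p = {p_rel:.4f}")
# With correlated measurements, the paired test typically reports the smaller
# p-value, mirroring the power argument made above.
```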

A lower p-value increases the chances of rejecting the null hypothesis; as a result, it is rational to assume that paired-samples t-tests have an advantage over independent-samples t-tests in terms of power. The next stage involves finding evidence for the parallels between the repeated-measures ANOVA and paired-samples t-tests, and between the one-way ANOVA and independent-samples t-tests. The analyzed paper presents the sets of data retrieved from the four types of measurements. The tables show that the results obtained through the paired-samples t-test are equal to those obtained through the repeated-measures ANOVA, while the independent-samples t-test shows the same results as the one-way ANOVA. Therefore, it might be concluded that the repeated-measures ANOVA is generally more powerful than the one-way ANOVA (Dr. RSM700 lecture notes, April 15, 2016).

The proposed explanation relies on the assumption that the power of a measure is reflected in a low p-value, which increases the chances of rejecting the null hypothesis. Meanwhile, the power of the repeated-measures ANOVA can also be demonstrated from a different perspective. Let us instead assume that the power of a measure is determined by low error variance. Another advantage of the repeated-measures ANOVA in terms of power then lies in the fact that it distinguishes between within-subject and between-subject variability, using additional degrees of freedom to model the subjects. Removing this variation among sample members from the error term reduces the error variance (Dimitrov & Rumrill, 2003).
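The same point can be sketched with a toy repeated-measures data set. The example below is invented for illustration only: it contrasts a one-way ANOVA (scipy.stats.f_oneway) with a repeated-measures ANOVA fitted via statsmodels' AnovaRM, with column names and effect sizes chosen as assumptions simply to make the error-variance argument visible.

```python
# Hypothetical sketch: one-way ANOVA vs. repeated-measures ANOVA on the same
# simulated data (all names and numbers are assumptions for illustration).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_subjects, conditions = 20, ["A", "B", "C"]

subject_effect = rng.normal(0, 8, size=n_subjects)   # stable between-subject differences
rows = []
for s in range(n_subjects):
    for j, cond in enumerate(conditions):
        score = 50 + 2 * j + subject_effect[s] + rng.normal(0, 3)
        rows.append({"subject": s, "condition": cond, "score": score})
df = pd.DataFrame(rows)

# One-way ANOVA: subject differences remain in the error term.
groups = [df.loc[df.condition == c, "score"] for c in conditions]
f_oneway, p_oneway = stats.f_oneway(*groups)

# Repeated-measures ANOVA: subject variability is removed from the error term.
rm_result = AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit()

print(f"one-way ANOVA: F = {f_oneway:.2f}, p = {p_oneway:.4f}")
print(rm_result)
# The repeated-measures model typically yields a larger F and a smaller p here,
# because the between-subject variance no longer inflates the error variance.
```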

As a result, it might be concluded that there are at least two factors showing that the repeated-measures ANOVA is generally more powerful than the one-way ANOVA. First, it yields a lower p-value, as evidenced by the comparison of paired-samples and independent-samples t-tests. Second, it offers reduced error variance in comparison with the one-way ANOVA.

Reference List

D'Amico, E. J., Neilands, T. B., & Zambarano, R. (2001). Power analysis for multivariate and repeated measures designs: A flexible approach using the SPSS MANOVA procedure. Behavior Research Methods, Instruments, & Computers, 33(4), 479-484.

Dimitrov, D. M., & Rumrill, P. D. (2003). Pretest-posttest designs and measurement of change. Work, 20(2), 159-165.

Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21-33.

Research Methods, Design, and Analysis

When dealing with qualitative research, one takes many considerations into account. The clarity of the purpose of the study is most important. The study must be significant and well organized, and the researcher must ensure its findings contribute theoretically (Pope and Mays, 2000, p. 12). The study's main goal should be to answer the research question using the laid-down procedures. The research should establish findings that are applicable in other fields as well, and the researcher should be able to establish findings that had not been determined earlier. Other considerations include understanding the complexities of a population, such as the values and behaviors of a particular culture.

In the article Standing on the promises: The experiences of black women, the purpose of the study is clear in that it sets out to examine the problems of southern black women administrators coming from a tradition of generational protest (Guaetae, 2005). Its significance lies in investigating the problems faced by black women in their quest for leadership as administrators. Its theoretical contribution is in advocating for social justice in modern society without racial and gender prejudice. The introduction is well organized, with clear ideas. The author took into account the culture of the black woman as well as intangible factors, that is, her gender roles.

The researcher, as an instrument, can introduce bias into research. The researcher's expectations can lead him to concentrate on information that supports the expected outcome; this is referred to as observer bias. The researcher can also influence the study group to behave in a way that fulfils his expectations, for instance by giving the subjects too much information about what the study seeks to find out; this is referred to as the Pygmalion effect. Finally, when rating subjects on some variable, the researcher may base the ratings on his overall impression of the subjects; this is referred to as the halo effect.

Observer bias can be minimized by applying a double-blind technique, in which neither the researcher nor the subjects can influence the results. For example, in the administration of drugs, the researcher should not know which drug is the placebo and which is real. To avoid the Pygmalion effect, the researcher should avoid giving too much information about the research expectations. The researcher can avoid the halo effect by concentrating on what is observed rather than on his impression of the subjects.

Several approaches are used in qualitative analysis. Ethnography, a term that originates in anthropology, seeks to study a community's culture, although it is now also used in other areas such as business culture, for example in the research on constructivist frameworks using ethnographic techniques (Christensen et al., 2010). Field study, which is closely related to participant observation, requires the researcher to go into the field, as in the research on the use of systems development methodologies in practice. Grounded theory, developed by Glaser and Strauss, seeks to develop a theory grounded in the data, that is, one based on certain truisms, as illustrated in the work on rigour and grounded theory research (Trochim, 2006).

The application of the field study approach was appropriate because the researcher was in a position to investigate the use of systems development methodologies based on actual observations rather than assumptions. In the example of the grounded theory approach, there could be an element of assumption and exaggeration in the inquiry into nursing, where the subjects might choose to give vague information, leading to wrong theory development. In the research on constructivist frameworks, the ethnographic technique is relevant because the subject itself is science based.

References

Christensen, L. B., Johnson, R. B., & Turner, L. A. (2010). Research methods, design, and analysis (11th ed.). Boston, MA: Allyn & Bacon.

Pope, C., & Mays, N. (2000). Qualitative research in healthcare. London: BMJ.

Trochim, W. (2000). Research methods knowledge base. Cincinnati, OH: Atomic Dog Publishing.

Foundations of Conducting Research

Abstract

This essay focused on research theories. It showed the differences between deductive, inductive, grounded, and axiomatic theories in scientific research. These theories have both strengths and weaknesses, and no single theory is superior to the others or more valid. Theories and hypotheses differ in several aspects, but they also share a few characteristics: both are testable, for instance, yet theories are based on tested evidence while hypotheses are mere suggestions. Finally, study variables remain vital aspects of any scientific research because they are used to measure differences and associations between the factors under investigation.

Key Differences

When conducting research, the fundamental distinction to remember between the inductive and deductive methods is that while the deductive method focuses on evaluating theories, the inductive method concentrates on creating new theories using available data.

The deductive approach, as a rule, starts with a hypothesis, while the inductive method will typically rely on research questions to limit the scope of the study (Rubin & Rubin, 2012).

For deductive methodologies, the focus is commonly on causality, while for inductive methodologies, the approach is largely centered on investigating new occurrences or analyzing already examined phenomena from an alternate point of view. Additionally, the deductive approach is narrower in scope and is focused on testing or confirming hypotheses (Gay & Weaver, 2011).

Inductive methodologies are mostly applied in subjective or qualitative studies, whereas deductive methodologies are more generally used in quantitative studies (Hashemnezhad, 2015). Nevertheless, there are no fixed rules, and some qualitative studies may adopt a deductive orientation.

One particular inductive method that is habitually alluded to in research writing is the grounded theory, spearheaded by Glaser and Strauss (Gay & Weaver, 2011).

Grounded theory refers to a theory that is formulated inductively from a body of information or data collected and analyzed on a specific research issue. If formulated effectively, the ensuing theory at minimum fits one data set properly. This contrasts with a theory derived deductively from grand theory without reference to data, which might thus end up fitting no data at all.

Grounded theory adopts a case standpoint as opposed to a variable standpoint, even though the distinction is often difficult to draw. This implies, to a limited extent, that the researcher treats individual cases as wholes in which the variables interact as a unit to produce certain outcomes. A case-based viewpoint usually assumes that variables relate in complex ways. Cases are analyzed on the basis of both variations and similarities in order to determine causal differences and common factors, and thereby to demonstrate potential causes and outcomes.

Grounded theory is not an option to be applied lightly. It requires broad and repeated scrutiny of the data, and the investigation and re-examination of different circumstances, with the specific goal of recognizing a new theory. It is a method most appropriate to research studies in which the issue to be examined has not been investigated before.

Axiomatic theory reflects the use of a set of axioms to generate a theory. Assumptions (axioms) are useful here because they are taken to be relevant and effective without any further testing. A logical theory therefore emanates from a set of axioms, and the theory may then be assessed or revised based on available empirical evidence, research, or new studies. Earlier economic theories, for instance, were developed from multiple assumptions. Thus, axioms help researchers develop new theories, and it is important to note that a new theory can later be empirically tested, revised, or overruled.

Axiomatic theories are, however, hard to dislodge even when empirical evidence shows that they fail to predict or explain phenomena. One way of reinforcing their claims is to offer plausible axioms for which empirical research demonstrates strong descriptive and predictive capabilities. That being said, researchers committed to the underlying assumptions of such theories abandon them only gradually, if at all.

Validity of Theories

From a general perspective, no one theory is superior to any other. All these theories have their strengths and limitations. For instance, researchers have highlighted the challenges and strengths of both inductive (qualitative) and deductive (quantitative) studies. Inductive methods focus on determining and understanding the experiences, perspectives, and thoughts of research subjects; that is, inductive methodologies explore the meaning, purpose, or reality of a study issue. Deductive approaches, on the other hand, strive to enhance research impartiality, consistency when repeated, and the applicability of findings to general populations, and they are generally interested in prediction. These explanations show that the two approaches stand in contrast to each other. Grounded theory can be difficult to apply, particularly when large volumes of data are involved and no specific rules are used to identify suitable categories; additionally, researchers interested in grounded theory must be highly skilled to use it. Axiomatic theory may even ignore empirical data in favor of general assumptions held over long periods. At the same time, axiomatic theory cannot tolerate irrelevant aspects of a problem: it provides strict terms and conditions for any problem under investigation, so assumptions neither included in the axioms nor derived from them cannot be part of the theory.

Researchers have now introduced new methods to account for the limitations and strengths of these theories. A mixed-methods methodology, for instance, was developed to compensate for the weaknesses of both quantitative and qualitative studies. The mixed-methods approach is seen as an alternative for enhancing research design and supporting the thorough investigation of phenomena that are considered worth studying.

Theory vs. Hypothesis

A hypothesis is either a proposed account for an observed phenomenon or a prediction of a potential causal association among numerous phenomena. A scientific theory, on the other hand, is a tested, well-substantiated, unifying explanation for a wide range of confirmed and demonstrated facts. There is always evidence to support a theory, whereas a hypothesis is just a proposed potential result that is testable and falsifiable.

From the above definitions, one can observe that a hypothesis explains a notable phenomenon while a theory offers explanations that are well supported with verifiable facts. A hypothesis is founded on suggestions, assumptions, projections, possibilities, or predictions with no clear certainty on outcomes. A theory presents hard evidence, is repeatedly tested, verified, and often has a wide scientific consensus.

It is imperative to recognize that both a theory and a hypothesis are testable and falsifiable, but the latter is not well substantiated.

A hypothesis is normally founded on limited sets of data. A theory, on the other hand, is based on extremely large sets of data tested under different conditions; that is, multiple studies have been conducted to confirm or disprove a given theory.

Further, a hypothesis is used in specific instances. That is, a hypothesis cannot cover cases not specified within a study; it is restricted to that specific case. Conversely, a theory tends to be general. It reflects the establishment of a common rule gained through many tests and experiments, and the resulting principle is usually applicable to several specific cases.

The overall purpose of a hypothesis is to present a tentative possibility, which can be assessed further using observations and experiments, but the purpose of a theory is to account for consistently observed phenomena.

Many criminology theories, such as social control theory and rational choice theory, have been tested over time, and the results are consistently similar. Nevertheless, the testing of scientific theories does not stop at any given moment, because new evidence may emerge to support or refute earlier findings. A hypothesis is often seen as an informed guess, and scientific methods can be applied to test and verify it; the result may support it and recommend further studies, or disprove it as false. A hypothesis that has been consistently demonstrated to be true (a working hypothesis) may well become another theory (Shields & Rangarajan, 2013; Halvorson, 2012).

There are multiple common misconceptions about theories and hypotheses. One may talk of a theory while in reality meaning a hypothesis. In such cases, what is called a theory is really a reasoned proposition based on observation. Even if the observation is accurate, the observed outcome could have been brought about by other factors. Because such a proposition is only a reasoned possibility, it can be tested and falsified, which makes it speculation (a hypothesis) rather than a theory.

Variables

Variables can be defined as any elements of a theory that can change or differ as a factor of interaction within that theory (Al-Riyami, 2008). Alternatively, a variable is any factor that influences the outcome of a study. Each study must have variables because they are required to comprehend variations in relationships. Age, color, and country, for example, are variables because they can change and take on different values across the range of ages, colors, and countries involved in a study. Further, factors such as height, weight, and time are quantifiable values in scientific experiments. Answers based on rating scales, such as 1 = agree, 2 = strongly agree, and 3 = strongly disagree, are also variables, which allow researchers to analyze and assess thoughts and opinions statistically. An investigator should identify the specific variables to manipulate in order to produce quantifiable findings (Al-Riyami, 2008).

Variables are therefore important for generating results that can be interpreted. Variables can also be classified as independent or dependent. Part of planning any research is identifying which factors could influence the result. While there are multiple sorts of variables in research methods, attention here is directed to independent and dependent variables. Researchers usually identify the independent variable in their study designs for manipulation and treat the dependent variable as the measurable result of manipulating the independent variable. In most experiments, it is relatively simple to identify, isolate, and manipulate the independent variable and to measure the dependent variable.
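As a minimal illustration of this distinction, the following sketch uses invented data: a manipulated dose serves as the independent variable, a measured response as the dependent variable, and a simple line fit estimates how the latter changes with the former.

```python
# Minimal illustrative sketch (hypothetical data): "dose" is the manipulated
# independent variable; "response" is the measured dependent variable.
import numpy as np

rng = np.random.default_rng(1)
dose = np.repeat([0, 5, 10, 15], 25)                           # independent variable (manipulated)
response = 2.0 + 0.4 * dose + rng.normal(0, 1.5, dose.size)    # dependent variable (measured)

# Estimate how the dependent variable changes with the independent variable.
slope, intercept = np.polyfit(dose, response, deg=1)
print(f"estimated change in response per unit of dose: {slope:.2f}")
```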

In other studies, it can be more difficult to identify independent and dependent variables. Researchers therefore develop robust designs that operationalize variables in order to evaluate unclear concepts with no obvious measures. Additionally, confounding (extraneous) variables, which are not the independent variable of interest, may also cause changes in the dependent variable. Further, variables that are difficult to control (intervening variables) likewise have effects on dependent variables.

References

Al-Riyami, A. (2008). How to prepare a research proposal. Oman Medical Journal, 23(2), 66-69.

Gay, B., & Weaver, S. (2011). Theory building and paradigms: A primer on the nuances of theory construction. American International Journal of Contemporary Research, 1(2), 24-32.

Halvorson, H. (2012). What scientific theories could not be. Philosophy of Science, 79(2), 183206. doi: 10.1086/664745.

Hashemnezhad, H. (2015). Qualitative content analysis research: A review article. Journal of ELT and Applied Linguistics, 3(1), 54-62.

Rubin, H. J., & Rubin, I. S. (2012). Qualitative interview: The art of hearing data (3rd ed.). Los Angeles, CA: Sage Publications.

Shields, P. M., & Rangarajan, N. (2013). A playbook for research methods: Integrating conceptual frameworks and project management. Stillwater, OK: New Forums Press.

Scientific and Philosophical Underpinnings of Research

Introduction

When considering the philosophical and scientific foundations of research, the major focus here is on analyzing the social impact of information technology on society and the economy. In this regard, theoretical and conceptual frameworks provide valid social and philosophical guidelines for an interpretive assessment of the social impact of ICT (O'Donnell and Henriksen, 2002, p. 92). Because information technologies intersect with the social and business sciences, human beings are also involved in this sphere (Nwokah, Kiabel, and Briggs, 2009, p. 430). In this respect, the context of IT research is based on close interaction between information users and the information systems they are engaged with.

Approach to Inquiry: Induction versus Deduction

Ambiguity and confusion arise between induction and deduction when an argument rests on stressing probability, even if it is quite high, or on probability that is equated to certainty (Srinagesh, 2006, p. 185). Hence, deduction is based on the analysis of consequences derived from a particular assumption, whereas induction is based on a summary of explicit or probable facts that are admitted but not proved. In other words, deduction proceeds from existing certainties, whereas induction postulates what may happen or what is possible.

Type of Data: Numeric versus Narrative Data

Taking into consideration the qualitative and quantitative underpinnings of research, the primary emphasis should be put on the character and origins of the data itself, not on the methods used to process the information (Marczyk, DeMatteo, & Festinger, 2010). In this respect, qualitative data is a construct that can be derived from the description of different variations and changes and the evaluation of particular objects. Quantitative, or numerical, data analysis lies in measuring and rating attitudes based on summary statements; in other words, observations and facts are quantified rather than described.

Testing Hypotheses and Theories versus Generating Hypotheses and Building Theory

Testing hypotheses implies analyzing a null hypothesis and correlating it with an alternative hypothesis in order to reject the null one (Marczyk, DeMatteo, & Festinger, 2010). In contrast, the process of generating hypotheses often involves identifying qualitative research methods and target groups as the basis for data analysis. Building appropriate research questions contributes to generating predictions and analyzing results (Marczyk, DeMatteo, & Festinger, 2010). It leads to the formulation of good and provoking hypotheses that are sufficient for creating concepts and building theories. Hence, hypothesis generation is the initial stage of building a research project, which logically ends with testing and evaluating plans and theories.

Two Rationales for Using a Mixed Methods Approach

The use of a mixed-methods approach to research implies that qualitative and quantitative aspects will be engaged to study a particular question from both perspectives. Hence, the first rationale for using mixed-methods approaches lies in the necessity of addressing several dimensions of a particular question to better understand the process and identify relevant studies (Clark and Creswell, 2010, p. 9). The second rationale for using this approach is premised on the complexity and diversity of the investigated problem and on the presence of a great number of variables of different characters.

Reference List

Clark, V. L., and Creswell, J. W. (2010). Designing and Conducting Mixed Methods Research. US: SAGE.

Marczyk, G. R., DeMatteo, D., & Festinger, D. (2010). Essentials of Research Design and Methodology. US: John Wiley and Sons.

Nwokah, N. G., Kiabel, B. D., and Briggs, A. E. (2009). Philosophical Foundations and Research Relevance: Issues for Marketing Information Research. European Journal of Scientific Research. 33(3), pp. 429-437.

O'Donnell, D., and Henriksen, L. B. (2002). Philosophical foundations for a critical evaluation of the social impact of ICT. Journal of Information Technology. 17, pp. 89-99.

Srinagesh, K. (2006). The Principles of Experimental Research. US: Butterworth-Heinemann.

The Semi-aquatic Mammals Pinnipedia

Introduction

Pinnipedia is a broadly distributed and varied group of semi-aquatic marine mammals. It consists of three families: the Otariidae (eared seals), the Odobenidae (walruses), and the Phocidae (earless seals) (Harrison & King 102).

Earth history

Pinnipeds first appear in the fossil record in the middle Miocene, by which time they were already highly specialized for aquatic life. It has been proposed that the Pinnipedia may have had either a twofold (diphyletic) origin or a monophyletic one.

Reproduction

Pinnipeds are polygamous, with males being larger than females. Towards the breeding season, the males choose the breeding sites and establish harems when the females arrive. Depending on the species, males either assertively defend groups of particular females or protect a breeding territory, and males compete with one another for females.

Breeding occurs mainly during the late spring and summer. A single pup is normally produced each year, although twins occasionally occur. After giving birth, the females nurse their young for a variable period of time. The females undergo a postpartum estrus that permits them to breed soon after giving birth (Harrison & King 102).

Species number

Pinnipeds constitute slightly more than 28% of marine mammal species diversity, with 33-37 living species spread throughout the world. Of these, 18 belong to the family Phocidae, and the remaining 15-19 species belong to the Otariidae and the Odobenidae.

Shapes/ Sizes and Color

Pinnipeds are smooth-bodied and barrel-shaped, which makes them well adapted to their marine habitats. Their large size in comparison with most terrestrial carnivores helps them conserve body heat. Their sizes differ, with the smallest pinniped being 1.3 m long when fully grown and the largest being 4 m long.

Distinct color patterns in pinnipeds occur almost entirely within the family Phocidae. Some show dark, disruptive color patterns (Nowak 1458), while others have a homogeneous coloration that permits them to blend in well with their icy surroundings.

Typical behavior

Pinnipeds typically return to land to reproduce. They are polygynous, with successful males mating with quite a number of females throughout the breeding period. The males compete for females, and the females reach sexual maturity before the males.

Eating habits

Pinnipeds are carnivorous. They feed on sea creatures including fish, crustaceans, and sea birds. Most are generalist feeders, while a few concentrate on particular foods. Pinniped eyes are well adapted to darkness; hence, they do most of their feeding at night (Henry 110).

Role of Pinnipeds in the food chain

Pinnipeds play a major role in the food chain. They feed on crustaceans, echinoderms, fish, and young whales, and are themselves eaten by orcas, bears, and human beings. Walruses, for instance, have been hunted by native peoples for their flesh, hide, and tusks.

Defense

Some pinnipeds, most notably walruses, have tusks that grow up to one meter in length, with males having larger tusks than females. They use these tusks for fighting and sparring with rivals. Pinnipeds also have bristles around their mouths that serve in defense.

Movement

Pinnipeds are adapted for movement both on land and in water. They have wing-like flippers at the front and the back, and both pairs assist them while moving on land. When swimming, the hind limbs are turned backwards and kept parallel with the vertebral column, and the feet act as efficient propellers. Some pinnipeds, however, are belly walkers, moving with rising and falling motions of the abdomen. Pinnipeds' movement in water is thus graceful, and they frequently engage in water play.

General habits

They spend almost all their lives in the water, on beaches, or on ice floes. Pinnipeds are good divers, able to withstand the aches and fatigue associated with lactic acid build-up during diving. They produce sounds both in the water and on land; these sounds are associated with breeding and other social interactions (Henry 110).

Works Cited

Harrison, Richard, and Judith King. Marine Mammals. London: Hutchinson University Library, 2006. Print.

Henry, William. Antarctic Pinnipedia. Washington, D.C.: American Geophysical Union, 1971. Print.

Nowak, Ronald. Walker's Mammals of the World. Baltimore: Johns Hopkins University Press, 1999. Print.

Quantitative Methods and Design Analysis Patterns in Research

Introduction

Quantitative data analysis is generally needed for the proper assessment of the numeric data associated with any type of research. Within-subject and between-subject designs are regarded as the most reliable research designs for obtaining a reliable and useful set of quantitative data. Therefore, if people are offered three variants for participating in a survey, the between-subject design will be the most effective solution, while the within-subject design will be characterized by a lower error rate.

Research Design

Both variants of the study involve sending a test e-mail to the target audience of the research. As for the research design, the variants available for the analysis are given below:

  • Three separate e-mails for every participant
  • One e-mail with three links

These are the possible approaches for the within-subject study design. However, there is a high likelihood that the e-mails will be marked as spam, especially if they are sent separately.

The other variant is diversifying the audience and sending one e-mail with a single link to each person. This may involve an increased error rate; however, more data will be collected for the study, as people will be less irritated by unwelcome messages.

Discussion

Assuming that the experiment involves three variants, its overall design will involve assessing the click rates for each variant. People will therefore be offered three variants of the test (preferably without the participants realizing that an experiment is taking place): three types of e-mails may be sent to the target audience of the research. One will simply contain a hyperlink with no explanation provided. Another will emphasize that $10 will be donated to charity if a person participates in the survey, and the last will state that participants will be entered into a lottery with a $1,000 prize.

Because the research data will be needed for the proper assessment of human behavior, the evaluation of the quantitative data will be performed in accordance with the principles of both within-subject and between-subject designs. In fact, both approaches involve the same methods of data evaluation; therefore, the key steps of the data analysis will be:

  • The generation of models and concepts
  • Development of measurement instruments and grades
  • Experimental control
  • Data collection
  • Modeling

  • Assessment of the results

A key aspect of the research design follows from the specific hypothesis of the study. As Newman and Benz (2005) state, people are reluctant to participate in numerous studies; however, they may gladly answer several questions of the same questionnaire. The proposed hypothesis presupposes that people will perceive the generated e-mails as three different requests, and that if all three are sent to the entire audience, up to 85% of the e-mails will be deleted as spam (Duffy and Chenail, 2008). On the other hand, it may be explained to the audience that they are participating in a study and that the three hyperlinks are the answer options for a single question of the survey. In that case, the design of the study will be of a within-subject type.

A between-subject design will help to preserve the anonymous nature of the research and obtain a wider range of frank answers (people click the links guided by their own interest and without knowing that they are participating in a study). The between-subject design therefore offers advantages closely associated with differentiating the audience and, consequently, differentiating the treatments, which is impossible in a within-subject design. As Grinnell and Unrau (2005, p. 144) state:

This type of design is often called an independent measures design because every participant is only subjected to a single treatment. This lowers the chances of participants suffering boredom after a long series of tests or, alternatively, becoming more accomplished through practice and experience, skewing the results.

The between-subject design is thus closely linked with the differentiation of the audience and the purposes of the study. Since the proposed study involves intruding on the personal informational space of the audience, sending three e-mails in a row would be inappropriate.

As for the assessment of the results, both variants of the study design will involve analyzing the results from the perspective of the audience's treatment of the research subjects and the motivation of the audience to choose one of the three offered variants (Denmark, Milner, and Buck, 2008). Data measurement will therefore be performed by counting the clicks for each link and calculating the rejection rates (the number of clicks will be lower than the number of messages sent, and this difference will be regarded as the rejection rate).
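A minimal sketch of this measurement step is given below. The counts are invented and the variant names are assumptions made only for illustration; the calculation simply divides clicks by messages sent and treats the unclicked remainder as the rejection rate, as described above.

```python
# Hypothetical sketch of the planned measurement step: counting clicks per
# variant and deriving click and rejection rates (all numbers are invented).
emails_sent = {"plain_link": 500, "charity_donation": 500, "lottery_prize": 500}
clicks = {"plain_link": 42, "charity_donation": 97, "lottery_prize": 133}

for variant, sent in emails_sent.items():
    clicked = clicks[variant]
    click_rate = clicked / sent
    rejection_rate = (sent - clicked) / sent   # messages sent but never clicked
    print(f"{variant}: click rate {click_rate:.1%}, rejection rate {rejection_rate:.1%}")
```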

Conclusion

The design analysis patterns are generally regarded as significant for achieving the results of the research. Hence, while the within-subject design is too intrusive, the between-subject design will be helpful for obtaining broader results and performing a more reliable analysis of the research data.

References

Denmark, D. L., Milner, L. C., & Buck, K. J. (2008). Interval-Specific Congenic Animals for High-Resolution Quantitative Trait Loci Mapping. Alcohol Research & Health, 31(3), 266.

Duffy, M., & Chenail, R. J. (2008). Values in Qualitative and Quantitative Research. Counseling and Values, 53(1), 22.

Grinnell, R. M. & Unrau, Y. A. (Eds.). (2005). Social Work Research and Evaluation: Quantitative and Qualitative Approaches (7th ed.). New York: Oxford University Press.

Newman, I., & Benz, C. R. (2005). Qualitative-Quantitative Research Methodology: Exploring the Interactive Continuum. Carbondale, IL: Southern Illinois University Press.

Political Theatres of the Classic Maya

The paper looks at the social, political, and cultural factors associated with performances in the theatres, and at how much attention is given to the physical setting of the theatres and the audiences as compared to that given to the performers. The antithesis in this text is the suggestion that there has to be some control over the power structure vested in the performers; otherwise the running of the society will forever be questionable. The thesis, on the other hand, is formed by the argument and elaboration of the Celtic theory that focuses on the modes and forms of theatres. The author argues against fighting all authorities and instead proposes the identification of social groups as a way of making life in the society meaningful. From the general outlook and the arguments presented, the text targets the general group that has an interest in the running of the society.

The author works from purely post-structural theoretical framework assumptions. The assumptions made in this case presuppose that the cultural, social, and political causes of the society are interrelated, and the description of the relations, expectations, and performers itself stands as an assumption. The issue of private and public power is also debated by the author. The template could be considered a confirmation of the fact that nature binds the society, and thus rumors concerning rituals are never theorized.

The development of the text is unclear, since it begins with a definition of performance drawn from other authors and an argument surrounded by many assumptions. Every part of the text reads like a conclusion of its own. Many gaps are evident in the text, including missing theories. Chief among these is how much the author overlooks the social system of the society and, most importantly, gender as a major dimension of the discussion of the audience. He also fails to explain whether the entire Mayan society consisted of potential performers and spectators, or whether some training had to take place.

From how he handles the issue of political consciousness in Mayan society, it is clear that the author relies exclusively on a complex approach to interpretation. He does not bother to give the reasons and forces behind the numerous assumptions he makes and uses in his text. He tends, however, to assume that the readers of the text are knowledgeable about the lives and rituals of Mayan society, which may not be the case.

As the text advances, the author seems to take some interest in the rituals and spiritual standing of Mayan society. However, this stance is entirely dropped when he recognizes that it could contradict his position on their political life. The whole text, as I understand it, seems to be politically based. He further contradicts himself when he speaks about political vitality and competition on the Sabbath regarding performances. The paper also relies on a closed argument and uses a great deal of transference; this is evident when he argues that leaders are the most important players in planning but fails to offer alternatives.

Displacement is evident in the text: happenings in modern society, which are not depicted here, are displaced by what the society used to be. What is more, there is a great deal of exoticizing as opposed to assimilation. The status quo is just another argument to which this approach could be applied, and all archaeological research, to a great extent, affects the current and coming generations of the concerned society, since such research can be used to evaluate societies when they are no longer in existence, just like the Mayan society.

Article Summary

In his article "Professional Vision," Charles Goodwin seeks to examine the discursive practices often used by members of a given profession to shape the domain of their professional assessment: the phenomenal environment where their attention dwells, as well as the objects of knowledge (including bodies of expertise, theories, and artifacts) that symbolize their profession. These objects of knowledge are a source of competence for the professions in question, in that they set them apart from other professions.

Goodwin (607) examines three practices necessary for achieving a profession's vision: highlighting, coding schemes, and the generation and interpretation of graphical representations. Goodwin (608) examines these practices with respect to two professions under study: law and archeology.

Instruction was a central element of the activities undertaken by the expert witnesses in the courtroom and by the archeologists, and of the individual learning processes consisting of modes of access to, and participation frameworks for, the relevant phenomena. Although each of these settings was organized differently, both nonetheless comprised common discursive practices.

The configuration of the aforementioned practices investigated in the current paper is pervasive, consequential, and generic with respect to human activity, and with good reason. To start with, the classification process is vital for human cognition. The construction and application of coding schemes facilitates the social organization of relevant classification systems as bureaucratic and professional knowledge structures.

Ongoing historical practices have helped to shape human cognition. In addition, graphical representations are a prototype version of how human beings are able to construct external cognitive artifacts necessary for the persuasive display and organization of relevant knowledge. Strong political and rhetorical consequences are associated with the graphical representation of a coding scheme.

This is because the practice of highlighting echoes the perceptions of other individuals in that it reshapes a domain of scrutiny, in effect making some phenomena more salient than others while other phenomena fade into the background. Goodwin (609) investigates seeing as a historically and socially constituted body of practices that allows for the shaping and construction of the objects of knowledge that animate a given profession's discourse.

The study allowed for an interaction between co-workers, their tools of measurement, the lines that they have drawn, and their ability to view pertinent events in such a manner as to accomplish a single coherent activity. At the same time, the practices involved in generating, distributing, and interpreting these representations provide the cognitive infrastructure and materials needed for the achievement of archeological theory.

The author has further argued that professional vision is unevenly allocated and is housed by specific social entities. In addition, the author has managed to communicate across the three practices relevant to a given profession in an orderly manner relative to human interaction. Examining the interactions between these practices within specific parameters allows us to explore diverse phenomena using a single analytical framework.

Through sequences of interaction, members of a given profession are held responsible for the correct perception and constitution of the objects that find use in outlining their professional competence. Upon reading this article, one cannot help but wonder: is professional vision largely established by the three aforementioned practices (coding schemes, highlighting, and the generation and expression of material representations), or not?

Work Cited

Goodwin, Charles. "Professional Vision." American Anthropologist 96.3 (1994): 606-633. Print.

Theoretical Aspects of Quantum Teleportation

The most basic constituents of nature have special properties different from those exhibited by objects with significant mass. Small bits of information known as qubits can undergo quantum teleportation (Braunstein 609). The physics behind the behavior of qubits, the fundamental units that constitute quantum information, is still poorly understood.

However, teleportation has been physically demonstrated in several experiments. The concept relies on the theory which states that at the quantum level, a change of state of energy at one-point results in a universal reaction which is a change in all fundamental locations in the universe (Braunstein 611).

Every small movement or change of state of energy at any point in the universe has an equal universal reaction. This is known as quantum non-locality (Whitaker 19). It is a proven fact that many events in which changes of state of energy occur in the universe cannot be observed by human beings.

However, a few special events can be monitored with scientific instruments. Quantum non-locality is observed at two or more different locations resulting in teleportation (Braunstein 613). The energy state of one point is transferred to another point without any apparent transfer of energy.

Researchers have succeeded in observing the universal reaction to a change of state of energy at a point one hundred and forty-three miles from the location where the change occurred. Since the event and the observation were at different ends of a single optical fiber, the process was classified as quantum teleportation (Barrett & Chiaverini 2). Thus, teleportation is the observation of the reaction to an event at a particular point when the time and place of occurrence of the real event are known.

One special characteristic of the theory is that no time elapses between the moment of change of state of a fundamental packet of energy and the observation of the reaction at any point. Another special characteristic is that no energy is transmitted whatsoever. Moreover, no mass moves from the location of the event itself (Barrett & Chiaverini 3). The mechanisms behind the phenomenon are not yet well understood by modern scientists.

There are several prerequisites for successful quantum teleportation. The packet of energy to be teleported must be related to the packet that is expected to change at the other end of the teleportation channel. This relation between the two packets of energy is known as quantum entanglement.

One of the two entangled quantum particles must be transmitted by classical means to the observation end of the teleportation channel. This is required so that the change in the state of the particle at the location of the event is identical to the observed change at the other end.

Teleportation cannot occur if the packets of energy at the two different locations are not entangled. The only step in which time is consumed is the transmission of one of the entangled states to the observation point (Barrett & Chiaverini 1). Subsequent changes in either of the pair of particles result in an equivalent change at the other end of the quantum teleportation channel.
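For reference, the standard textbook formulation of this protocol can be written out as follows; the notation (a qubit state, a shared Bell pair, and two classical bits) is the conventional one and is not drawn from the cited sources. The state to be teleported and the shared entangled pair are

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),$$

and rewriting the joint state in the Bell basis of the sender's two qubits gives

$$|\psi\rangle \otimes |\Phi^{+}\rangle = \tfrac{1}{2}\Big[|\Phi^{+}\rangle\,(\alpha|0\rangle + \beta|1\rangle) + |\Phi^{-}\rangle\,(\alpha|0\rangle - \beta|1\rangle) + |\Psi^{+}\rangle\,(\alpha|1\rangle + \beta|0\rangle) + |\Psi^{-}\rangle\,(\alpha|1\rangle - \beta|0\rangle)\Big].$$

After the sender's Bell measurement, two classical bits identify which of the four outcomes occurred, and the receiver applies the corresponding correction (identity, Z, X, or XZ) to recover the original state on the distant particle; only the two classical bits ever travel between the two ends.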

It is important to distinguish quantum teleportation from classical mechanics and from the fictional teleportation of matter through communication channels. The particles teleported in quantum teleportation are quantum particles; normal classical mechanics does not apply to them, since they do not have the properties of matter at the macroscopic level. Quantum teleportation begins with the creation of quantum entanglement between two particles.

This is followed by the transmission of one of the particles to the observation end of the teleportation exercise. This transmission may seem to nullify the need for quantum teleportation. However, once the two quantum-entangled particles are placed at each end of the teleportation channel, multiple states of the particles can be replicated at either end of the channel without any conventional communication or transmission of energy (Mochon 4).

Physical transportation of the particle is not a viable option since the quantum-entangled particles would be invariably distorted leading to failure of the teleportation process. With each quantum teleportation cycle, the resulting state at the observation end is almost identical to the real event.

However, there is an infinitesimal distortion of the states of the particles due to random vibration at the quantum level. Thus, with multiple teleportation cycles, the entanglement of the particles declines (Mochon 2), and it must be replenished for quantum teleportation to continue. This phenomenon differs from normal communication in that the information cannot be broadcast.

Quantum physics does not allow broadcast of information in the process of quantum teleportation. Another limitation in quantum teleportation is that a photon cannot be measured so that the information can be replicated at the other end of the channel. The random vibrations of particles at quantum level make it impossible to measure the state of the energy. Only the magnitude of the quantum energy is measurable (Mochon 10).

In addition, teleportation is only applicable at the quantum level. Three-dimensional particles cannot undergo quantum teleportation. Fundamental quantum particles are regarded as dimensionless in classical physics and as one-dimensional in modern theories of relativity; thus, they do not satisfy the requirements of a conventional particle (Davis 19).

In a practical experiment, classical bits of information are used to execute the teleportation cycle. Since the entangled particles have a quantum relation, they use the classical bits to change their states to match each other. While the transmission of the classical bits takes time, teleportation itself does not consume any time (Davis 22).

If the state to be teleported is at point A, then the quantum particle whose state is to be matched is at point B, at the end of the teleportation channel. Particle B will be in the original state of A, while A will be in another, undefined state when teleportation is complete. However, the quantity at B has never physically interacted with the quantity at A. The undefined state of particle A is a result of its distortion during the measurement that produces the required classical bits.

In addition, the theory stresses that no measurement of particle A has taken place. Accurate measurement of quantum particles is not possible, since an attempt to measure the energy of a particle results in a great disturbance of its state (Davis 50). Therefore, accurate results cannot be obtained.

Since it is not possible to measure quantum particles fully, scientists usually scan the particle only partially to obtain the classical bits. At the beginning of the research on quantum teleportation, scientists hoped that the process could be used for communication without consuming any time in the transmission of information.

This proved impossible in practice. Teleportation of the states does not itself involve time consumption. However, the classical bits used to initiate the teleportation process travel at a velocity no greater than the speed of light (Davis 50), so time elapses during the process. Thus, teleportation cannot be used to send information faster than the ultimate velocity, the speed of light.

The main element of teleportation is the spin. It defines the state of the quantum particle. All subatomic particles have a characteristic spin that defines the magnitude of their energy and the state or direction of the energy. A single spin can be teleported in each teleportation cycle.

Large objects consist of an almost infinite number of spins, since all matter is made up of energy. If these spins could be teleported at the same instant without any disturbance of their states, the teleportation of large objects such as human beings would be possible (Zhang et al. 9). However, teleportation seems to result in a slight modification of the spin of a quantum particle; thus, trying to teleport a large object would destroy its structure (Zhang et al. 8).

Conclusion

Quantum teleportation is a proven concept. However, it is only applicable at quantum level at the moment. It is argued that if scientists figure out a way to teleport human beings from one place to another in future, it will be impossible to keep them alive.

This is because of the problem of teleporting consciousness, which is a characteristic of all human beings. Consciousness is usually considered separate from the physical operation of the human body, and its fabric has not yet been described in scientific terms. Although consciousness seems to be related to quantum non-locality, it is said to exist in a plane of existence other than the one known to scientists.

Thus, it is impossible to teleport consciousness, which is a major component of life. At the moment, it is not possible to apply quantum teleportation on matter. Quantum teleportation is not a viable means of transport in the near future because the field of quantum mechanics and quantum non-locality has not yet been understood to a satisfactory level by scientists. However, quantum teleportation presents possibilities of faster computing in future.

Works Cited

Barrett, M. D., and J. Chiaverini. Deterministic Quantum Teleportation. Letters to Nature 429.6 (2004): 1-3. Print.

Braunstein, Samuel. Quantum Teleportation. Fortschr. Phys. 15.2 (2002): 608-613. Print.

Davis, Eric. Teleportation Physics Study. Air Force Research Laboratory 34.2 (2003): 10-76. Print.

Mochon, Carlos. Introduction to Quantum Teleportation. Perimeter Institute for Theoretical Physics 2.1 (2006): 1-11. Print.

Zhang, Lei, Jacob Barhen, and Hua-Kuang Liu. Experimental and Theoretical Aspects of Quantum Teleportation. Center for Engineering Science Advanced Research 1.1 (2007): 1-9. Print.