After Vinegar
The egg softens, the eggshell dissolves, and gas is produced.
After Corn Syrup
The egg shrinks.
After H2O
The egg swells.
Largest and Smallest Eggs Before Dissection
The eggs retained their shapes.
Largest and Smallest Eggs After Dissection
The eggs retained their shapes.
Summary
The data obtained from the experiment support the hypothesis that if the cell is soaked in corn syrup, a hypertonic solution, water will move out of the cell by osmosis and the egg will shrink. They also support the hypothesis that if the cell is soaked in H2O, a hypotonic solution, water will move into the cell by osmosis and the egg will swell.
Based on the above results, vinegar is an acid that reacts with and dissolves the eggshell’s calcium carbonate, softening the egg. In addition, corn syrup is a hypertonic solution that draws water out of the cells, causing the eggs to shrink, while H2O is a hypotonic solution that allows water from outside the cell to enter, causing the eggs to swell. In both solutions the water moves by osmosis.
Osmosis occurs in the presence of a semi-permeable membrane provided by the plasmalemma. When a cell is placed in a hypertonic solution, such as glucose, water molecules diffuse out of it to the external environment to establish an isotonic state, causing the cell to crenate (Marbach & Bocquet, 2019). This movement occurs because the concentration of water molecules is lower in the surroundings than in the cell. Conversely, when the cell is placed in H2O, a hypotonic solution, water molecules diffuse from the surroundings through the semi-permeable membrane into the cell, lysing it (Marbach & Bocquet, 2019). Osmosis thus establishes an equilibrium between the water molecules in the cell and those in the surroundings.
The possible sources of error in this experiment stem from two points. Firstly, the concentrations of the eggs were not equal, as some eggs could have been more hypertonic than others. Secondly, some solutions, such as the corn syrup, might have contained impurities, which would ultimately raise the concentration beyond the standards of the experiment (Marbach & Bocquet, 2019). These sources might therefore have contributed to false readings, although their effect was likely negligible.
The purpose of this laboratory work was to evaluate the ideal gas law for a gas in a syringe when the pressure was increased. Lowering the piston compressed the gas in the container, and the data showed that both pressure and temperature increased. Ideal gas ratios were used to check the collected data, and any patterns and errors are explained. The work showed a low error rate, indicating that the experiment was successful.
Data
For this lab work, the thermodynamic parameters (pressure, temperature, volume) were measured for two states, depending on whether the piston was lowered or not. Table 1 below shows the results of these direct measurements. It can be seen that when the piston was lowered, the volume of free space inside the syringe decreased by 42.5%; it is natural that the pressure increased when the volume was reduced. In particular, the pressure rose sharply, by about 69% (45.5 kPa), to 111.0 kPa at the end point with the lowered piston. The temperature also increased as a result of compressing the piston, by 14.3 K, or about 4.7%.
Parameter          State 1 (piston up)    State 2 (piston lowered)
Pressure, kPa      P1 = 65.5              P2 = 111.0
Volume, mL/cm3     V1 = 40                V2 = 23
Temperature, K     T1 = 302.4             T2 = 316.7
Table 1. Measurement results.
Results
The change in pressure with a decrease in volume is not surprising: the ideal gas law shows an inversely proportional relationship between P and V, so a decrease in volume led to an increase in pressure. From the point of view of the thermodynamic configuration, the closed container of gas exchanged no matter with the environment, so it is assumed that no gas escaped the syringe. It is also worth noting that when the space in such a closed container is compressed, the gas molecules cannot spread out, so their only possible configuration is a denser one. As a result, the frequency with which the chaotically moving particles collide increases, which in turn raises the temperature.
The ideal gas equation shows that the ratio V1/V2 = P2/P1 should hold at constant temperature. This is easily checked against the data obtained:
40 mL / 23 mL = 111.0 kPa / 65.5 kPa
1.74 ≠ 1.69
Obviously, this condition was not fulfilled for the real gas, because the container of gas was not under ideal conditions; that is, the practical application of the law differs from the theoretical concept. The difference between practice and theory can be resolved by introducing an additional parameter, namely the volume of residual air inside the syringe, V0. The equation then takes the form:
(V1 + V0) / (V2 + V0) = P2 / P1
40 + V0 = 1.69 (23 + V0)
40 + V0 = 38.87 + 1.69 V0
V0 = (40 – 38.87) / 0.69 = 1.64
Therefore, there was another 1.64 mL of air in the syringe. In addition, the PV/T ratios for the two boundary states can be calculated from the available data. In particular:
P1 V1 / T1 = P2 V2 / T2
(65.5 kPa)(40 mL + 1.64 mL) / 302.4 K = (111.0 kPa)(23 mL+ 1.64 mL) / 316.7 K
9.02 kPa mL K-1 ≠ 8.64 kPa mL K-1
The values were not identical, which means that there was a discrepancy caused by measurement errors and/or uncertainties. The percentage difference (PD) in this case was:
PD = (9.02 – 8.64) / 9.02 = 4.21%
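For readers who want to retrace the arithmetic above, the short Python sketch below (not part of the original lab procedure; variable names are illustrative) reproduces V0 and the percentage difference, using the same intermediate rounding as the report.

# Sketch reproducing the report's arithmetic with the measured values from Table 1.
P1, P2 = 65.5, 111.0   # kPa
V1, V2 = 40.0, 23.0    # mL
T1, T2 = 302.4, 316.7  # K

ratio = round(P2 / P1, 2)                 # 1.69, as rounded in the report
V0 = (V1 - ratio * V2) / (ratio - 1)      # extra air volume, about 1.64 mL

r1 = round(P1 * (V1 + V0) / T1, 2)        # 9.02 kPa mL / K
r2 = round(P2 * (V2 + V0) / T2, 2)        # 8.64 kPa mL / K
PD = (r1 - r2) / r1 * 100                 # about 4.2 %
print(V0, r1, r2, PD)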
The pressure increase with volume reduction in this experiment is thus not exactly what the ideal gas law would predict (LibreTexts, 2022). The difference is due to the additional air in the container, which means that the real gas was far from the ideal state.
The subsequent decrease in temperature is due to heat transfer through the walls of the container: the closed system tends back toward thermal equilibrium. In contrast to temperature, the pressure does not return to its initial value, because the sealed container prevents any gas from escaping; that is, pressure is neither vented nor removed from the system. Since pressure is inversely proportional to volume, and an additional volume of air was present in the syringe, a return of the pressure to its original value is impossible.
When the piston is lowered, that is, when pressure is applied, the temperature naturally increases, here by 14.3 K. This follows from the ideal gas law, which postulates a direct proportionality between pressure and temperature: an increase in one leads to an increase in the other.
Conclusion
The present laboratory work evaluated the ideal gas law in a practical experiment. It was shown that the real gas in the syringe deviates slightly from ideal behaviour, partly because of the 1.64 mL of additional air present in the container. The percentage difference between the ideal gas ratios at the starting and end points was 4.21%, indicating the overall success of the experiment.
Experiments are carried out in the laboratory to help students gain practical experience of the theory studied in other classes. They are a good practical introduction that helps students deal with technical matters in the wider world of technology.
This experiment introduced mechanics through Newton’s second law, which states that force is the product of the mass and the acceleration of a body. Because acceleration has both magnitude and direction, force qualifies as a vector quantity.
A large part of this experiment was devoted to methods for locating the center of gravity (C.O.G.) of regular bodies. By taking the measurements of bodies with regular shapes, it was easy to locate the C.O.G. through bisection.
Experimental Objective
The main objective of this experiment was to determine the equilibrium of objects when forces and torques are applied to them.
To locate the center of gravity of different bodies of regular shapes such as circular, triangular and rectangular shaped objects.
Another objective was to find out how energy is conserved in bodies in motion.
Theoretical Background
Force is central to Newton’s laws and is defined by the second law. It is calculated as:
Force = mass × acceleration
A push or a pull on a body is regarded as a force. A force is associated with many effects, such as changing the speed of an object, altering the direction of its motion, and deforming the shapes of bodies. Generally, because it is described by both magnitude and direction, force is a vector quantity.
For a body to be in equilibrium, the forces acting on it must balance in all directions. If the forces acting on a body are not uniform, mechanical stress results, and the body cannot balance if the pivot is not placed at the center of gravity. This concept of the C.O.G. has been used in many places where the balancing of objects is required, for example for decoration purposes.
Experimental Procedures
The following are the steps followed in this experiment:
There was a demonstration of force in which it was clear that force is a vector quantity
The concept of force as a vector was then used to determine the equilibrium of physical bodies
Uniform motion of a pendulum was demonstrated using a spring/mass system
Lastly, bodies in motion on slopes of different angles were used to determine how energy is conserved in moving bodies, as well as the forces acting on them.
Experimental Data
Different shapes were used in this experiment to determine their centre of gravity. It was observed that the centre of gravity of these shapes lay at a point midway within the objects. Calculations were carried out to determine the C.O.G. points.
For instance, to determine the C.O.G. of a circle, two diameters were drawn from different points on the circle. The point where these two diameters intersected was regarded as the point of equilibrium.
[Figure: circle with its diameters and C.O.G. marked]
To determine the point of equilibrium of a rectangle measuring 12 cm by 8 cm, two diagonals were drawn between the vertices of the rectangle. The point of intersection, halfway along the two 14.5 cm diagonals, formed the point of equilibrium of this rectangle.
[Figure: rectangle with its diagonals and C.O.G. marked; labelled dimensions 12 cm and 4 cm]
The other shapes similarly employed this method of bisecting and finding the point of intersection of the bisectors, as shown in the diagrams above.
Data Analysis
It was identified that determining the C.O.G. of a single shape is much easier than determining that of a combination of two shapes. For instance, to determine the C.O.G. of a circle, one only needs to draw two diameters from two different points on the circle; the intersection of these two diameters forms the C.O.G. of the circle.
A different scenario involved a T-shaped figure made up of two rectangles. This required more mathematical calculation since, apart from finding the C.O.G. of each rectangle, an extra point is determined between the two C.O.G.s, as shown in the sketch below.
[Figure: T-shaped figure built from two rectangles with its C.O.G. marked; labelled dimensions 12 cm and 4 cm]
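One common way to carry out the composite calculation described above is to take the area-weighted average of the centroids of the individual rectangles; when the two areas are equal, this reduces to the midpoint between the two C.O.G.s. The Python sketch below is a minimal illustration with hypothetical dimensions, not the measurements from the experiment.

# Composite centre of gravity of a T-shape built from two rectangles.
# Each entry: (width, height, x of its own centroid, y of its own centroid); dimensions are illustrative.
rects = [
    (12.0, 4.0, 6.0, 2.0),   # horizontal bar of the T
    (4.0, 8.0, 6.0, 8.0),    # vertical stem of the T, centred on the bar
]

area_total = sum(w * h for w, h, _, _ in rects)
x_cog = sum(w * h * x for w, h, x, _ in rects) / area_total
y_cog = sum(w * h * y for w, h, _, y in rects) / area_total
print(x_cog, y_cog)   # area-weighted C.O.G. of the combined shape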
Discussion
From the above observations, it is clear that the centre of gravity of a given body can readily be determined. It is easier to find the C.O.G. of regularly shaped bodies than of complicated or multi-shaped bodies.
Although this experiment concentrated on regular shapes, it is suggested that the centre of gravity of irregular bodies can be determined in the same way, only with more steps involved.
In irregular shapes, multiple lines are drawn in order to come up with the most probable C. O. G.
When the C.O.G. of a physical body is known, it is easier to determine its point of equilibrium and to calculate the mass requirements of equilibrium systems.
Conclusions
Force, as described above, plays a major part in our daily activities, and hence this experiment was a crucial undertaking. A good understanding of how force works will help students appreciate how nature and its components interact in a stable manner.
The determination of the centre of gravity is another concept worth noting: it is hard to imagine how our bodies and other mechanical objects could operate without a well-defined centre of gravity. It would become easy to fall over and even to cause damage or injuries to people.
Density is the amount of mass that matter, whether solid, liquid, or gas, packs into a given space, that is, the mass per unit volume. Understanding how various materials interact when combined is an everyday use of density. Archimedes first applied it when he was tasked with figuring out whether King Hiero’s goldsmith was stealing gold while creating a golden wreath devoted to the gods and substituting it with another, less expensive alloy.
To determine density experimentally, the object’s mass and volume must be measured and the mass divided by the volume (Ross, 2017). Because wood has a lower density than metal, it floats in water, while an anchor sinks because metal has a higher density. In this experiment, density plays a crucial role in distinguishing the three unknown metals based on their behaviour in distilled water. As silver, rhodium, and platinum have densities of 10.5, 12.4, and 21.45 grams per cubic centimeter, respectively, it is expected that the densities found in this experiment will match those known densities. A virtual lab will be run for that purpose.
Methods and materials
The first unknown metal was placed in a weigh boat and its mass measured on a scale. The starting volume of distilled water, between 2 and 7 milliliters, was then measured in a graduated cylinder with a 10-milliliter capacity. The metal powder was added to the graduated cylinder, and the resulting reading was recorded as the final volume. The average density of the first unidentified metal was determined after three trials. The other two metals underwent the same procedure.
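In essence, the procedure reduces to one small calculation: the density is the measured mass divided by the volume of water displaced, averaged over the three trials. The Python sketch below illustrates this; the function name and the trial readings are hypothetical, not the recorded data.

# Density by water displacement: mass / (final volume - initial volume), averaged over trials.
def density_from_displacement(mass_g, vi_ml, vf_ml):
    return mass_g / (vf_ml - vi_ml)   # 1 ml = 1 cm^3, so the result is in g/cm^3

# Hypothetical trial readings (mass in g, volumes in ml) for one unknown metal.
trials = [(5.5, 4.3, 4.8), (5.8, 4.2, 4.8), (5.5, 4.3, 4.8)]
densities = [density_from_displacement(m, vi, vf) for m, vi, vf in trials]
average_density = sum(densities) / len(densities)
print(densities, average_density)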
Results
Three trials were conducted. In each trial, the mass of the metal (m1) poured into the weigh boat was measured, and the volume of water (Vi) run into the 10 mL graduated cylinder at the beginning of the experiment was recorded. The volume after the metal was poured into the water was recorded as the final volume (Vf). The density of the metal was determined from these measurements using the density formula m1/(Vf − Vi). Once the three trials were completed, the densities were summed and divided by the number of trials to determine the average density. Here are the numbers:
Table 1: Results of the first unknown metal (Unknown Metal 1 Data).
Measurement (variable)   Trial 1        Trial 2        Trial 3
Mass of metal (m1)       5.50 g         5.76 g         5.54 g
Initial Volume (Vi)      4.29 ml        4.18 ml        4.28 ml
Final Volume (Vf)        4.82 ml        4.78 ml        4.81 ml
Density (D)              11.0 g/cm^3    9.6 g/cm^3     11.1 g/cm^3
Average Density          10.6 g/cm^3
Table 2: Results of the second unknown metal (Unknown Metal 2 Data).
Measurement (variable)   Trial 1        Trial 2        Trial 3
Mass of metal (m2)       6.80 g         12.89 g        6.70 g
Initial Volume (Vi)      4.33 ml        4.07 ml        4.15 ml
Final Volume (Vf)        4.85 ml        5.09 ml        4.71 ml
Density (D)              11.3 g/cm^3    12.9 g/cm^3    13.4 g/cm^3
Average Density          12.5 g/cm^3
Table 3: Results of the third unknown metal (Unknown Metal 3 Data).
Measurement (variable)   Trial 1        Trial 2        Trial 3
Mass of metal (m3)       11.25 g        11.00 g        11.40 g
Initial Volume (Vi)      4.15 ml        4.09 ml        4.18 ml
Final Volume (Vf)        4.70 ml        4.59 ml        4.78 ml
Density (D)              22.5 g/cm^3    22.0 g/cm^3    19 g/cm^3
Average Density          21.2 g/cm^3
The results of the experiments indicate that the first unknown metal is silver, the second is rhodium, and the third is platinum. The average densities of the unknown metals were 10.6, 12.5, and 21.2 grams per cubic centimeter, respectively (Unknown Metal 1 Data, Unknown Metal 2 Data, and Unknown Metal 3 Data). These values are close to the known densities of silver (10.5), rhodium (12.4), and platinum (21.45). The densities of unknown metals one through three therefore closely match those of silver, rhodium, and platinum, respectively.
Discussion
This research has led to the conclusion that the density of unknown metals, calculated from their mass and volume, can be used to compare them to known metals and identify them. The experiment demonstrates that a material’s density can be used to determine its identity. Of the three, silver has the lowest density (10.5 g/cm3), followed by rhodium (12.4 g/cm3) and platinum (21.45 g/cm3). In light of this, the average densities help to identify the unknown metals: unknown metal number one is silver, unknown metal number two is rhodium, and unknown metal number three is platinum. The primary goal of this experiment, identifying the unknown metals, was achieved. Future tests would yield more accurate results if a fixed volume of distilled water in the 10 mL cylinder were used, but overall the experiment met its goal.
This study aims to propose an experiment in which the performance of the Delta Max paper airplane is compared to that of other paper airplane models in terms of flight range and duration. The report describes the methodology for collecting, analyzing, and presenting the results and discusses the main strengths and weaknesses of the experiment.
Methodology
Primary Data Collection
This study is based on an experimental design in which primary data are collected directly and then subjected to statistical processing. For this purpose, five identical copies of each paper airplane model are created and flown by the same person under the same physical conditions (no wind, the same ambient temperature and air composition), which will minimize the effects of confounding variables (Thomas, 2022). In this case, it is assumed that differences in range and flight time, the critical criteria for RED BULL PAPER WINGS, will be determined only by the shape of the paper airplane (RedBull, 2022). The five-fold run for each model is also based on the assumption that this will reduce errors and produce a representative average result (Phil, 2020). Thus, reliable primary data will be collected during the experimental launches, eliminating the effect of any undesirable factors.
Flight range will be measured directly with a laser meter. A similar mechanism is used at RED BULL PAPER WINGS and offers improved accuracy over classical rulers and scales with their systematic errors (RedBull, 2022). For reliable statistical analysis, it is critical not to change the meter’s position between models so as not to distort the results. Measuring the range to one decimal place is also recommended to improve accuracy.
Analysis and Processing
Each sample’s range (in meters) and duration (in seconds) will be entered in an Excel spreadsheet. In addition to the mean value for each model, measures of central tendency and variation will be determined to observe critical trends in flight characteristics (Table 1). The inferential test will be a parametric one-way ANOVA to determine whether there are differences between the models, treated as independent samples; the comparison will be made for both the range and the duration of flight. The parametric test will determine the statistical significance of the differences between the models and, if differences are found, a post-hoc test will locate them. In other words, combined with descriptive analysis, the inferential statistics will answer the key research question and determine which model performed best.
#        Delta Max           Basic Dart          Lift Off            The Raven           Lock-Bottom
         Dist., m  Time, s   Dist., m  Time, s   Dist., m  Time, s   Dist., m  Time, s   Dist., m  Time, s
Mean     63.8      20.2      62.8      14.2      58.4      13.0      55.5      11.0      63.9      17.8
Median   62.8      20.0      66.5      14.0      57.1      13.0      55.9      11.0      63.3      17.0
SD       3.2       1.9       9.9       2.2       3.1       1.6       2.4       1.2       1.7       1.3
Range    8.2       5.0       23.3      6.0       7.4       4.0       6.6       3.0       4.5       3.0
IQR      3.3       2.0       2.2       1.0       2.4       2.0       2.0       1.0       1.5       1.0
Table 1: Descriptive statistics for the five models.
Graphical Representation
Most of the primary data will be presented in tables (e.g., Table 2), but it is also acceptable to use graphical representations, such as plots of flight range against flight time, or boxplots (Figure 1 and Figure 2). Notably, if linear regressions are plotted for each model, it becomes possible to estimate the average speed of the paper airplane in flight (Bohacek, 2021). Thus, the primary visual representations will include tables, boxplots, and coordinate plots.
Launch   Delta Max           Basic Dart          Lift Off            The Raven           Lock-Bottom
         Dist., m  Time, s   Dist., m  Time, s   Dist., m  Time, s   Dist., m  Time, s   Dist., m  Time, s
1        62.2      20        45.1      11        58.7      12        54.4      11        62.0      17
2        68.3      23        68.0      15        57.1      13        56.4      11        63.3      17
3        65.5      21        68.4      17        56.3      11        52.2      9         63.2      17
4        60.1      18        66.5      14        63.6      15        58.8      12        66.5      20
5        62.8      19        65.8      14        56.2      14        55.9      12        64.7      18
Table 2: Primary data collected for the five paper airplane models over five launches each.
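To illustrate the proposed inferential step, the sketch below (not part of the report; it assumes Python with SciPy is available) runs a one-way ANOVA on the distance data from Table 2.

# One-way ANOVA on flight distances (m) from Table 2, one value per launch.
from scipy.stats import f_oneway

delta_max   = [62.2, 68.3, 65.5, 60.1, 62.8]
basic_dart  = [45.1, 68.0, 68.4, 66.5, 65.8]
lift_off    = [58.7, 57.1, 56.3, 63.6, 56.2]
the_raven   = [54.4, 56.4, 52.2, 58.8, 55.9]
lock_bottom = [62.0, 63.3, 63.2, 66.5, 64.7]

f_stat, p_value = f_oneway(delta_max, basic_dart, lift_off, the_raven, lock_bottom)
# Reject the null hypothesis of equal mean distances if p_value < .05,
# then use a post-hoc test to locate the differences between models.
print(f_stat, p_value)

The same call can be repeated on the flight-time columns to test the duration criterion.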
Critical Review
The proposed design has several strengths and weaknesses, potentially affecting the validity of the results. First, the design is about eliminating confounding variables and controlling them, so distortion is expected to be minimal. Second, robust statistical inferential tests are used for the analysis, which means the validity of the results will be confirmed. On the other hand, the limitations of the proposed experimental design include using the same individual. Although this strategy initially aims to equalize strength in each launch, the effect of arm fatigue, which occurs by the last of 25 paper airplane throws, cannot be excluded (Nagwa, 2020). At the same time, one cannot be sure that the laser meter will prove functional throughout the experiment and will not show distorted results after several trials.
Discussion
This experiment investigates the flight characteristics (range and duration) of a Delta Max paper airplane. The model airplane was constructed according to the suggested instructions from a sheet of A4 paper with a grammage of up to 100 g/m2, without additional resources such as glue, staples, or rubber bands (PPO, 2021; RedBull, 2022). Additional paper airplane models assembled from similar materials are used for comparison; to keep the comparison fair, the experiment must be conducted under identical conditions to reduce distortion of the results. The primary data include range and flight time for each of the five launches of the five models. However, simply averaging these values is not sufficient to obtain reliable results (Luellen, 2022). Instead, a parametric one-way ANOVA test, which satisfies the experimental conditions (EZ SPSS, 2019), is suggested. The results of the inferential statistics at a given significance level (.05) allow the null hypothesis to be rejected or retained and conclusions to be drawn about the differences in flight performance of the five paper airplane models. However, the proposed design has several limitations, discussed above, related primarily to the effects of fatigue and to confidence in the correctness of the distance and time measurements.
Conclusion
To summarize, the work has presented a robust experimental design that increases the validity of the data and provides a clear answer to the research question. Descriptive and inferential statistics handle the primary data, and controlling for confounding variables helps minimize bias.
The pinacol rearrangement constitutes the dehydration of pinacol and the stabilization of the resulting carbocation by a methyl shift. The mechanism of the pinacol rearrangement follows the SN1 mechanism, with pinacol as the limiting reagent. It commences with the protonation of one of the two –OH groups. Anslyn and Dougherty explain that protons derived from the concentrated sulfuric acid are attracted to the lone pairs of electrons on one of the –OH groups of pinacol (675). The interaction of the proton with the –OH group leads to the formation of an oxonium ion (–OH2+). The oxonium ion leaves the pinacol as water, creating a tertiary carbocation, which induces a methyl shift in a manner that stabilizes it (Anslyn and Dougherty 675). The methyl shift culminates in the formation of pinacolone (a ketone). Figure 1 below demonstrates the mechanism of the pinacol rearrangement.
Evaluation of the experimental procedure shows that it was performed successfully, as the pinacol rearrangement occurred as expected. Comparative analysis of the theoretical and actual yields of the experiment shows a disparity. Starting with 3 g of pinacol, the limiting reagent, the theoretical yield of pinacolone would be 2.604 g. However, the actual yield obtained from 3 g of pinacol was 1.5 g, which is 57.6% of the theoretical yield. Therefore, the experiment managed to synthesize pinacolone, although only at 57.6% of the theoretical yield. The low yield stems from diverse sources of error inherent in the experimental procedure. A feasible source of error could be incomplete conversion of pinacol. Other possible sources of error are inaccurate pipetting of the aqueous layer, resulting in the loss of pinacolone, and evaporation during the distillation process.
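For clarity, the percentage yield quoted above follows directly from the ratio of the actual to the theoretical mass:
Percent yield = (actual yield / theoretical yield) × 100 = (1.5 g / 2.604 g) × 100 ≈ 57.6%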
Sulfuric acid was used in the experiment to provide hydrogen ions (H+) for the pinacol rearrangement to occur. As the –OH group is a poor leaving group, its protonation results in the formation of an oxonium ion (–OH2+), which is a good leaving group (Anslyn and Dougherty 675). Fundamentally, part of the pinacol rearrangement is a dehydration. Sulfuric acid dehydrates pinacol and creates a tertiary carbocation, which triggers the shift of a methyl group onto the carbocation center.
Simple distillation was used to separate water and pinacolone. Simple distillation applies to the separation of two or more liquids with a considerable difference in boiling points (Diwekar 3). Here, it aimed to separate water and pinacolone from the reaction contents containing traces of pinacol and sulfuric acid. Water and pinacolone have boiling points of 100°C and 106°C, respectively. As the boiling point of sulfuric acid is 290°C and that of pinacol is 172°C, considerably higher than those of water and pinacolone, a simple distillation separates them effectively.
The distillation was stopped at 100°C because water and pinacolone have close boiling points. Raoult’s law explains why: two liquids with close boiling points co-distill at a temperature lower than the boiling point of either of them (Reger, Goode, and Ball 496). The principle therefore implies that all the water and pinacolone would have distilled by the time the temperature reached 100°C.
Saturated sodium chloride is a liquid drying agent that can remove a lot of water from an organic phase. As the distillate has aqueous and organic phases, the addition of saturated sodium chloride draws water from the organic phase into the aqueous phase. The following chemical equation demonstrates the drying effect of saturated sodium chloride.
NaCl(s) + H2O(l) → Na+(aq) + Cl−(aq) + H2O(l)
sodium chloride + water → sodium ion + chloride ion + water
(Pavia 716)
Since the distillate forms two layers, the aqueous layer contains water while the organic layer contains the product. Water is in the aqueous layer because it is an inorganic substance, and the product is in the organic layer because pinacolone is a ketone, an organic substance. Comparison of the densities reveals that water is denser than the product: the density of water is 1 g/ml while the density of pinacolone is 0.801 g/ml. Therefore, the aqueous phase forms the lower layer and the organic phase forms the upper layer in the flask.
The purpose of the anhydrous sodium sulfate was to dry the product. Anhydrous sodium sulfate is a solid drying agent that can absorb a lot of water from another substance. Pavia states that anhydrous sodium sulfate is a hygroscopic substance that can absorb moisture from the air and become a solution (715). The following chemical equation indicates that anhydrous sodium sulfate absorbs water, which dissolves it into sodium and sulfate ions.
Na2SO4(s) + H2O(l) → 2 Na+(aq) + SO42−(aq) + H2O(l)
The use of drying agents in the experiment is necessary because the organic product is mixed with water. Given that the product is an organic compound mixed with aqueous solution, saturated sodium chloride was used to remove water from the organic phase. Moreover, as the distillate comprises water and pinacolone, anhydrous sodium sulfate was used to remove water and leave pinacolone as a pure compound.
Analysis of the IR spectra of the reactant (pinacol) and the product (pinacolone) reveals marked differences. As the reactant is an alcohol, it exhibits the IR spectrum of alcohols: a broad O–H stretch at 3440.25 cm-1 and C–H stretching frequencies at 2984.99 cm-1 and 2942.41 cm-1. In contrast, the product has C–H stretching frequencies at 2968.88 cm-1 and 2873.38 cm-1 and a distinctive C=O stretch at 1705.45 cm-1. Comparison of the IR spectra of the reactant and product shows that the pinacol rearrangement took place: the O–H stretch is evident in the reactant because it is an alcohol, while the product lacks it because it is a ketone with a C=O stretch.
Moreover, NMR analysis of the reactant and the product indicates an apparent chemical shift. The structure of the reactant (pinacol) comprises two tertiary hydroxyl groups and four primary methyl groups. The two protons in the hydroxyl groups and the 12 protons in the methyl groups give rise to three peaks in the NMR spectrum. In contrast, the NMR spectrum of the product illustrates chemical shifts. The molecular structure of pinacolone shows that it has three tertiary methyl groups, a secondary carbonyl group, and a secondary methyl group (Aggarwa, Kimpe, Collier, Dadoub, and Eberbach 969). The protons in these groups give rise to three types of peaks. However, the peaks are magnified because of the tertiary position of the protons and the existence of the carbonyl group in the molecular structure. Therefore, analysis of the NMR spectra confirms that the pinacol rearrangement resulting in the formation of pinacolone took place.
The 2,4-DNP test is a qualitative test that detects the presence of carbonyl groups in ketones and aldehydes. The reagent, 2,4-DNP, reacts with carbonyl groups in solution and forms a yellow, red, or orange precipitate (Ahluwalia and Dhingra 20). The reaction of 2,4-DNP with pinacol would not give a precipitate because pinacol is an alcohol without a carbonyl group. However, the reaction of 2,4-DNP with the product formed a yellow precipitate, confirming that the product has a carbonyl group and is therefore pinacolone.
Works Cited
Aggarwa, Varinder, Norbet Kimpe, Steven Collier, Miguel Dadoub, and Wolfgang Eberbach. Science of Synthesis: Houben-Weyl Methods of Molecular Transformations: Heteroatom Analogues of Aldehydes and Ketones. New York: Georg Thieme Verlag, 2014. Print.
Ahluwalia, Vander, and Sunita Dhingra. Comprehensive Practical Organic Chemistry: Qualitative Analysis. Hyderabad: India Universities Press, 2004. Print.
Anslyn, Eric, and Dennis Dougherty. Modern Physical Organic Chemistry. Sausalito, Calif: University Science Books, 2006. Print.
Diwekar, Urmila. Batch Distillation: Simulation, Optimal Design, and Control. New York: CRC Press, 2011. Print.
Pavia, Donald. Introduction to Organic Laboratory Techniques: A Small-Scale Approach. Belmont, Calif: Thomson Brooks/Cole, 2005. Print.
Reger, Daniel, Scott Goode, and David Ball. Chemistry: Principles and Practice. Belmont: Brooks/Cole, Cengage Learning, 2010. Print.
Elastic materials stretch proportionately with increasing load provided that the elastic limit is not exceeded. This forms the basis for Hooke’s law, which states that, for elastic bodies, the stretching force (F) is directly proportional to the extension (d) provided that the elastic limit is not exceeded. Hence, the essence of performing this experiment was to verify this principle for both a rubber band and a spring (Lewin 14).
In this experiment, the rubber band and the spring were separately subjected to corresponding stretching forces and the data were tabulated for analysis. A graph of d against F was plotted for each to obtain a linear fit and the static spring constant (Ks). With the spring set to oscillate along the vertical axis, the period squared (T2) was plotted against the mass (m) and analyzed. From this graph the dynamic constant (Kd) was obtained and compared with the Ks of the spring (they ought to be equal).
The results revealed that Ks (41.62 ± 0.34 N/m) was approximately equal to Kd (40.0 ± 0.0234 sec2/g), a discrepancy of 4.05% that may be due to experimental errors. The Ks of the rubber band was found to be 23.26 ± 3.26 N/m, which shows that the spring is stiffer than the rubber band. From the analysis, the effective mass of the spring was 0.031 gm. The uncertainties may have been due to the experimental design, which limited the accuracy with which T could be read (by eye). In future experiments, this can be minimized by computerizing the whole experiment.
Objective of the experiment
The main objective of this experiment was to validate the relationship between the force and the extension of a spring with respect to Hooke’s law and to obtain the static spring constant. Alongside this objective, the dynamic spring constant was derived from the simple harmonic motion of the spring.
Procedure
In this experiment, a spring was loaded with masses ranging between 100 and 700 g in ascending order. The values of the displacement, x, and the force, F, were recorded at each step. Upon reaching the maximum mass, the spring was unloaded step by step as the respective values of x were recorded. The displacement was then plotted against the force to obtain the static spring constant. This procedure was repeated with the rubber band in place of the spring.
Using a 500 g mass, the periodic time, T, was tabulated for different amplitudes, x. Using a nominal amplitude of between 1 and 2 cm, the periodic times and their corresponding masses (between 200 and 600 g) were tabulated for analysis.
Results
The results for this experiment were:
From the relationship between the displacement (d) and the force (F) for the rubber band, the static spring constant (Ks) was 23.26 ± 3.26 N/m. Ks for the spring was 41.62 ± 0.34 N/m. The dynamic constant (Kd) for the spring was 40.0 ± 0.0234 sec2/g. The discrepancy between Kd and Ks for the spring was 4.05%. The effective mass (mo) of the spring was 0.0314 gm.
Data analysis
Table 1 below shows the values obtained when varying the mass
Force (N)    Rubber band displacement (m)        Spring displacement (m)
             loading         off-loading         loading         off-loading
1.4715       0.015           0.011               0.012           0.002
2.4525       0.039           0.037               0.038           0.013
3.4335       0.074           0.068               0.060           0.037
4.4145       0.117           0.109               0.084           0.061
5.3955       0.164           0.150               0.108           0.088
6.3765       0.217           0.197               0.133           0.109
7.3575       0.263           –                   0.160           –
Table 2 below represents the data obtained from the experiment.
Table 2
Mass     Time for twenty oscillations, t (s)    T = t/20 (s)    T^2 (s^2)
0.55     15.20                                  0.760           0.578
0.65     16.34                                  0.817           0.667
0.75     17.61                                  0.880           0.775
0.85     18.67                                  0.934           0.871
On plotting the data of table 1, the below trends are observed.
From graph 1, the relationship between the displacement d and the force F is linear and is described by the formula ΔF = Ks Δd,
where Ks is the static spring (rubber band) constant, d is the displacement and F is the stretching force.
Rearranging the formula, we obtain: Δd=ΔF/Ks.
Therefore, the slope of the graph gives the reciprocal of the static spring constant. Hence, Ks = 1/0.043
= 23.26 ± 3.26 N/m
Uncertainty in Ks = 1/0.043 − 1/0.05 = 3.26 N/m
We therefore conclude that the static rubber band constant is equal to 23.26 ± 3.26 N/m. The rubber band obeys Hooke’s law, since the force is directly proportional to the displacement.
For the spring, Ks = 1/slope = 1/0.024
= 41.62 ± 0.34 N/m
Uncertainty in the value of Ks of the spring = 1/0.024 − 1/0.0242 ≈ 0.34 N/m
We therefore conclude that the Ks of the spring is 41.62 ± 0.34 N/m. From the graph we can tell that the spring obeys Hooke’s law, since the force increases proportionately with the displacement. Comparing the two materials, it is evident that the spring is much stiffer than the rubber band, since the Ks of the spring (41.62 ± 0.34 N/m) is greater than the Ks of the rubber band (23.26 ± 3.26 N/m).
From the equations ω^2 = Kd/(m + mo) and ω = 2π/T, a relationship between T^2 and m can be derived. Combining the two, you obtain:
4π^2/T^2 = Kd/(m + mo)
where:
Kd is the dynamic spring constant,
m is the hanging mass,
T is the periodic time and,
ω is the angular velocity.
Rearranging the equation you get:
T=2π√ ((m+mo)/Kd)
Squaring on either side you obtain:
T^2=4π^2m/Kd+4π^2mo/Kd
Therefore, the graph of T^2 against m should be a linear graph whose slope is 4π^2/Kd and whose intercept is 4π^2 mo/Kd.
The value of Kd (40.00) is almost equal to the value of Ks (41.62) of the spring. The discrepancy between these values is given by:
(41.62-40.00)/40.00* 100 = 4.05%
From the graph, mo = Kd × Y-intercept / (4π^2)
= 39.999 × 0.031 / (4 × 3.1416^2)
= 0.0314 gm.
From the equation T^2 = 4π^2 m/Kd + 4π^2 mo/Kd, when T^2 = 0, then m = −mo. The intercept on the m-axis therefore gives the effective mass of the spring, which is approximately equal to 0.0314 gm.
We therefore conclude that the dynamic spring constant (Kd) is approximately equal to 40.0 ± 0.0234 sec2/gm. This value is approximately equal to the static spring constant (Ks) of 41.62 ± 0.34 N/m. The effective mass of the spring (mo) is equal to 0.0314 gm.
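As a cross-check of the graphical analysis, the following Python sketch (not part of the original report) fits a straight line to the T^2 versus m data of Table 2 and recovers Kd and mo from the slope and intercept; small differences from the quoted values are expected because of rounding.

# Fit T^2 = (4*pi^2/Kd)*m + (4*pi^2/Kd)*mo to the Table 2 data.
import numpy as np

m  = np.array([0.55, 0.65, 0.75, 0.85])       # hanging mass, in the units used in Table 2
T2 = np.array([0.578, 0.667, 0.775, 0.871])   # period squared, s^2

slope, intercept = np.polyfit(m, T2, 1)
Kd = 4 * np.pi**2 / slope      # dynamic spring constant, approximately 40
mo = intercept / slope         # effective mass of the spring, approximately 0.03 in the units of m
print(Kd, mo)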
Discussion
The objective of this experiment was to ascertain whether the rubber band and the spring obey Hooke’s law, and to establish the relationship between the dynamic and static constants of a spring. According to Hooke, for elastic materials the force is directly proportional to the extension provided the elastic limit is not exceeded. In this experiment, both the spring and the rubber band obeyed Hooke’s law (ΔF = Ks Δd). The constant of proportionality (Ks), obtained from the gradient of the graph, determines the stiffness of the material: the stiffness increases with the value of Ks. It can be deduced that the spring (Ks = 41.62 N/m) is stiffer than the rubber band (Ks = 23.26 N/m).
Basically, the value of Ks for a material should be the same as Kd for that material, where Kd is obtained from an equation combining parameters when the spring is set to oscillate. For the spring, Kd (40.0 ± 0.0234 sec2/gm) and Ks (41.62 ± 0.34 N/m) were approximately equal, the remaining difference being attributable to experimental errors; the discrepancy was 4.05%. The effective mass of the spring, mo (0.0314 gm), should be about one third of the actual mass of the spring. However, from graph 2, when the loaded mass m is zero, the graph exhibits the mass of the spring (displacement intercept) as negative, which could be due to experimental error.
The uncertainties that may have dispersed the data could have been contributed by the experimental design. One unavoidable error with respect to oscillating bodies that might have dispersed the data is the inaccuracy in judging the periodic time (T) by eye. In future experiments, this error can be minimized by computerizing the experiment. Otherwise, the experiment achieved its objective in line with Hooke’s law.
Conclusion
From the experimental analysis, it can be deduced that the objective of the experiment was achieved, with both the rubber band and the spring behaving in line with Hooke’s law. This is so because the graphs drawn gave straight lines with positive gradients, indicating a directly proportional relationship.
Every trial should have replications or repetitions because the experimental error is estimated from replicated trials. However, demonstration trials on commercial dairy farms are often impossible to replicate because of the practical operation of the dairy farm, the manageability of day-to-day operations, and the costs involved. In such situations, it is possible to use statistical experimental designs and tools that can analyze such data. This paper will focus on on-farm trials and observations and the nature and underlying principles of non-replicated tests, and will briefly discuss analysis methods.
Importance of Replication
Replication is one of the milestone concepts of experimental design and statistical analysis. It is widely used in animal science research to estimate the experimental error variance against which treatment effects should be compared. It also helps ensure the operational consistency of experimental results (Kuehl, 2000). Replicating treatments in a trial enables the researcher to separate the actual treatment effects from the background noise by absorbing experimental error (Johnson, 2006). Therefore, whenever possible, experimental treatments should be replicated.
However, there are situations where replicating a treatment in different units is not possible, forcing investigators to conduct non-replicated studies. Such experiments are frequently used to save costs. According to Machado and Petrie (2006), replication in agricultural science is impractical, expensive, or impossible in certain situations, for example in long-term experiments initiated before the current understanding of statistics, ecological and watershed studies, large field-scale research trials, demonstration plots, geological research, and even unforeseen design mistakes.
Why Non-Replication is Not Used in Dairy Publication
Machado and Petrie (2006) point out that many agricultural researchers consider non-replicated experiments unscientific and unacceptable for publication. Most of the trials published in animal science journals come from universities, where trials are conducted by scientists on university farms; those studies are therefore designed to use small numbers of animals with sufficient replication of the experimental units. A non-replicated study requires special statistical knowledge and a significant number of animals or subsamples to have the power to detect differences between treatments. According to Bisgaard (1992), experimental design techniques were initially developed with replication for agricultural and biological research. Although there are early industrial applications of the design of experiments, engineers took a long time to recognize that methods used for agricultural and biological studies were relevant to their work. Today, some procedures widely used in industry can be applied in agricultural, physical, and environmental settings.
Publication of Non-Replicated Data
Due to the lack of replication, a direct estimate of error variance from non-replicated experiments is impossible. This causes problems in assessing the significance of the estimated effects. However, there are effective methods to overcome this difficulty, such as the one proposed by Milliken and Johnson (1989). Daniel (1959) suggests using normal probability plots for unreplicated two-level factorial experiments, and Box and Meyer (1986) and Lenth (1989) provide alternative procedures to normal probability plots.
Unreplicated designs are very useful in industrial experimentation, and applications of these designs have appeared in the literature since the 1960s. For example, Michaels (1964) describes unreplicated experimental designs, and Prvan and Street (2002) present an annotated bibliography of application papers on fractional factorial designs with illustrations of unreplicated cases. Ilzarbe et al. (2008) analyzed 77 practical applications of the design of experiments in engineering published in important scientific journals between 2001 and 2005. Therefore, there are plenty of examples of non-replicated experimental designs that were carefully described and published. The dairy industry needs to follow the same path to take advantage of the sound data produced by field trials that were never published, basically because of a lack of knowledge or an unwillingness to work with those procedures. Research using experimental designs with no replication has a long history. The most famous example is Tukey’s test of additivity, where a degree of freedom (df) is assigned to a specific non-additivity structure (Tukey, 1949). Since then, many related tests have been developed, as shown below.
Tests That Can Be Applied to Dairy Field Trials
Another contribution of Fisher, relevant to the analysis of unreplicated data, is the randomization test (Fisher, 1935). The randomization test treats the observed test statistic as one sample from the set of all possible results that might have been obtained from a particular set of experimental units (experimental farms, if this is a field experiment). The distribution of the statistic over these rearrangements then determines the probability value for the observed test statistic. The method is useful when the data do not satisfy the distributional assumptions (for example, normality) required for the standard analysis.
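As a simple illustration of the idea (not taken from the cited sources), the Python sketch below performs a randomization test on hypothetical yields from farms under two treatments: the treatment labels are repeatedly shuffled, and the observed difference in means is compared against the resulting distribution.

# Randomization (permutation) test for a difference in treatment means, using hypothetical farm data.
import random

treatment_a = [32.1, 30.5, 33.0, 31.2]   # e.g. milk yield per cow under treatment A (illustrative values)
treatment_b = [29.8, 28.9, 30.1, 29.5]   # yield under treatment B
observed = abs(sum(treatment_a) / len(treatment_a) - sum(treatment_b) / len(treatment_b))

pooled = treatment_a + treatment_b
n_a, count, n_perm = len(treatment_a), 0, 10000
for _ in range(n_perm):
    random.shuffle(pooled)                  # one possible reassignment of treatments to farms
    a, b = pooled[:n_a], pooled[n_a:]
    diff = abs(sum(a) / len(a) - sum(b) / len(b))
    if diff >= observed:
        count += 1
p_value = count / n_perm   # share of rearrangements at least as extreme as the observed difference
print(p_value)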
Daniel (1959) used the idea of detecting outliers in a data set with a probability plot: the values that fall off the line correspond to active effects. The method uses the implicit assumption that there are few active effects in order to draw a line through the bulk of the small contrasts, and it ingeniously avoids the need to estimate σ. He presented an objective graphical method, a standardized probability plot with guardrails, which plots the unsigned contrasts divided by the ordered unsigned contrast corresponding to the order statistic closest to the 0.683 percentile. Active effects are then identified as the standardized contrasts which exceed their corresponding guardrails. It is the most powerful test when there is only one active effect.
Box and Meyer (1986) presented a Bayesian approach based on effect sparsity. They used a scale-contaminated model, which assumes that contrasts corresponding to inert effects follow a normal distribution, while contrasts corresponding to active effects follow a normal distribution with an inflated variance. For each effect, the marginal posterior probability of being active is computed, and the effect is declared active if that probability exceeds 0.5. They noted that they could estimate the model parameters from ten published analyses of data sets, which provided empirical support for the principle of effect sparsity and motivated their recommendation.
Dong (1993) proposed a method based on the trimmed mean of the squared contrasts rather than the trimmed median of the unsigned contrasts. The method has a small mean squared error, which is a good motivation for standardizing the contrasts. He also proposed iteratively recalculating the trimmed median of the unsigned contrasts until it stops changing when there are many active effects.
Alin and Kurt (2006) have reviewed non-additivity interactions in two-way ANOVA tables with no replication. They describe some methods for testing non-additivity when there is only one observation per cell. Some of these tests depend on a known interaction structure, whereas others do not. These methods are straightforward to apply using Microsoft Excel and statistical packages such as R.
Payne (2006) has also described new and traditional methods for analyzing unreplicated experiments. One of the methods he recommended was the spatial method. In the spatial methods, the experiment is first analyzed conventionally, treating it as a randomized block design. This design aims to group the units (i.e., dairy farms) into blocks so that the farms in the same block are more similar than those in different blocks. Each treatment occurs an equal number of times in each block (usually once), and the allocation of treatments is randomized, where possible, independently within each block. The analysis estimates and removes between-block differences to estimate treatment effects more precisely. However, the design may constrain which treatments appear on some of the farms to allow reasonable estimates of the parameters in the spatial model. The experimenter has to account for spatial variation by fitting models that describe how the correlation between each farm and its neighbours changes according to their relative locations, using residual (or restricted) maximum likelihood (Patterson & Thompson, 1971; Gilmour et al., 1995).
Wang (2013) has published excellent analysis methods for two-factor unreplicated experiments where one factor is random. His research is motivated by the comparison of measurement methods. Its foci include parameter estimation, tests of additivity, and prediction of one method given measurements of the other methods. Although the proposed test is similar to Mandel’s test, his result is a more robust test.
Recently, Vivacqua et al. (2015) have published an application of split-plot experiments in time. It extends the experimental design structure by considering an unreplicated factorial plan augmented with one central point and a repeated control treatment over two time periods. The analysis procedure was described in a level of detail previously unavailable in the literature. This paper thus provides an approach to evaluating the significance of meaningful contrasts when the additional treatments evaluated over the two time periods are also unreplicated.
Outliers
One critical point in working with non-replicated designs is knowing how to handle outliers. Whether or not an appropriate transformation is used can be more important than the test selected. Slight departures from normality do not affect the distributions of the test statistics very much, but outliers do affect the sizes of the estimated effects, so effective detection and elimination are critical to successful process optimization. The treatment of active effects as outliers has an extensive literature (Barnett & Lewis 1994). Benski (1989) proposed using an outlier test to identify the active effects. The test is based on a robust estimate of spread, which uses the interquartile range of the contrasts. In particular, it is possible to use interactive graphical methods developed for this problem. However, this topic is outside the scope of this paper and can be discussed in another one.
Conclusion
In conclusion, this paper has reviewed many experimental designs used in other fields that can also be applied in animal science, especially in dairy farm field trials. Given restrictions on time and cost, unreplicated designs will continue to be widely used in the dairy industry, and they could be published if the right statistical tools are used. There are many competing test methods, which are difficult to distinguish among on the basis of performance, and it is easy to reach wrong conclusions if a careful comparison is not made. However, using comparison techniques, the researcher can make the correct decision. The choice of a test is further complicated because no single test performs well over a wide range of active effects. In this regard, graphical analysis environments in statistical software can provide an appropriate choice of tests and an appropriate way to work with outliers. Therefore, it is recommended that authors take the comprehensive approach of this paper as a guideline for fully describing the statistical methods used in their publications, making the work replicable and acceptable for publication.
Following the experiment with the simple pendulum, one can see that the pendulum’s period of motion changes with the length of the string but not with the weight of the washer. For example, when two different weights (25 and 50 g) were used with strings of the same length, they moved at about the same speed, finishing a period at the same time. However, lengthening or shortening the string changes the speed at which the weight moves: a longer string produces a slower period of movement, and a shorter string moves the weight more quickly, resulting in a smaller period. This means that the length of the string is more important than the weight in determining the speed at which a pendulum moves. This information is necessary when considering pendulums as part of a technology design.
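This observation matches the small-angle pendulum formula T = 2π√(L/g), in which the period depends on the string length L but not on the hanging mass. The short Python sketch below (not part of the original experiment; the lengths are illustrative) makes the point numerically.

# Small-angle pendulum period: it depends on the string length, not on the hanging mass.
import math

g = 9.81                                 # m/s^2
for length in (0.25, 0.50, 1.00):        # string lengths in metres (illustrative)
    period = 2 * math.pi * math.sqrt(length / g)
    print(length, round(period, 2))      # a 25 g and a 50 g washer would share the same period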
The experiment is prone to error as it is completed in an everyday uncontrolled environment. The main possible factor that affects the pendulum experiment is the way in which one releases the pendulum. It may be difficult for a person to release two weights in the exact same way without using any force. For example, one can throw a pendulum or slow down its movement by holding onto the weight. Therefore, one pendulum may move quicker than another because of human error. Similarly, pendulum construction can be faulty and make one weight move quicker or slower than another if it is fastened in a different way. Finally, outside influences, such as the weather, breathing, movement of the construction, and more, can disturb the pendulum and change the period at which the weights move. If one blows on the pendulum or shakes the table on which the experiment takes place, the result cannot be considered reliable.
The outcome of a given genetic analysis relies on a cascade of events that commences with the extraction of genomic DNA, followed by the polymerase chain reaction (PCR), cloning of the PCR products, and plasmid isolation. Sample DNA obtained from any source in very meager quantities can thereby be made to meet the requirements of the experiment. This is accomplished by PCR, a technique invented by Kary B. Mullis, whose main objective was to amplify and detect a target DNA molecule present only once in a sample of 10^5 cells (Saiki et al. 1988). It was proposed that, in order to obtain a good number of target copies, the denatured DNA should be amplified by a thermostable DNA polymerase isolated from Thermus aquaticus (Saiki et al. 1988). This enables the amplification reaction to be performed at higher temperatures and significantly improves the specificity, yield, sensitivity, and length of the products (Saiki et al. 1988).
For example, earlier workers have amplified up to 22 kb of the beta-globin gene cluster, 91 inserts of 9-23 kb from human genomic DNA, and up to 42 kb from phage lambda DNA (Cheng et al. 1994). It was further described that the ability to amplify DNA sequences of 10-40 kb would bring the speed and simplicity of PCR to genomic mapping and sequencing and would facilitate studies in molecular genetics (Cheng et al. 1994). These research findings serve as background information and may enable a good number of investigations.
So, the study hypothesis in the present case was that the 18S rRNA and actin genes of the fruit fly Drosophila melanogaster could be amplified, cloned, and obtained as purified products whose sizes would correspond to the expected bands on the DNA ladder. This is because it is not fully known whether this can be achieved in a simple academic laboratory setting. The process followed here was based on a flow chart commonly used in molecular genetics experiments (Lab manual).
Therefore, the purpose of the present experiment was to determine the feasibility of molecular genetic analysis of the 18S rRNA and actin genes of the fruit fly Drosophila melanogaster.
This organism has been extensively studied as a model due to its similarity to human beings in terms of development and behavior, and on the grounds that it could provide valuable insights in the genomic era (Beckingham et al. 2005).
Methods
Initially, the genomic DNA of D. melanogaster was isolated using the following steps. Twelve adult flies were added to clean 1.5 ml tubes, and 200 µl of lysis buffer was then added. The flies were ground with a blue-pestle mini-grinder for 5 minutes. The tubes were centrifuged for 1 minute at optimum speed in the microfuge. Care was taken to balance the weights while placing the tubes in the rotor (Lab manual).
The supernatant obtained on centrifugation was pipetted into a clean, labeled 1.5 ml tube, and the original tubes containing the pellet were disposed of in the biohazard bin. In the safety hood, 200 µl of phenol/CHCl3 was added to the tubes containing the fly supernatant, which were vortexed gently for 30 seconds. For easy identification of the clear phenol phase, a yellow-colored antioxidant preservative was added. The tubes were again centrifuged for 2 minutes at maximum speed in the microfuge (Lab manual).
The aqueous phase obtained was transferred to a clean tube using a pipette fitted with a yellow tip, and the tube with the DNA sample was labeled “DNA”. From this tube, 5 µl of DNA was pipetted into a new tube labeled “A”. Then 1 µl of RNase (mg/ml stock) was added to the tube labeled “DNA”, which was vortexed and incubated for 30 minutes at room temperature. A 5 µl aliquot of this RNase-treated DNA was added to a new tube labeled “B”. Finally, 2 µl of DNA was pipetted from the tube labeled “DNA” into a new tube labeled “Diluted”, to which 38 µl of TE buffer was added and mixed gently (Lab manual).
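As a quick check of the dilution step described above (2 µl of DNA brought to 40 µl with 38 µl of TE buffer), the following sketch computes the resulting dilution factor. The volumes are taken directly from the protocol; the script itself is only an illustrative calculation, not part of the lab manual.

```python
# Dilution step from the protocol: 2 µl DNA + 38 µl TE buffer.
dna_ul = 2.0
te_ul = 38.0

final_volume_ul = dna_ul + te_ul            # 40 µl total
dilution_factor = final_volume_ul / dna_ul  # 20-fold (1:20) dilution

print(f"Final volume: {final_volume_ul} µl, dilution factor: 1:{dilution_factor:.0f}")
```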
The diluted DNA thus obtained was used for PCR. There were four tubes in total, labeled “DNA”, “A”, “B”, and “Diluted”. Tubes “A” and “B” were used for agarose gel electrophoresis: 1 µl of gel loading buffer was added to each, and the samples were loaded next to each other on a 1% agarose gel. One lane of the gel was loaded with a sample of DNA ladder (Lab manual).
The subsequent PCR reactions were carried out in a 0.2 ml microcentrifuge tube labeled “DNA”. A 2.5 µl volume of the previously diluted DNA was added to this PCR tube, followed by 47.5 µl of Master Mix containing 10x Taq polymerase buffer, a mix of the four deoxynucleotides (dATP, dCTP, dGTP, dTTP), the forward and reverse primers, and Taq polymerase, giving a total reaction volume of 50 µl. The samples were kept on ice before loading into the thermal cycler. A negative control was also prepared in a tube labeled “water” by adding 2.5 µl of sterile water in place of the diluted DNA. The PCR was programmed for an initial denaturation of the template DNA at 94 °C for 3 minutes, followed by three main steps: denaturation at 94 °C for 3 minutes, annealing at 52 °C for 1 minute, and extension at 72 °C for 2 minutes (Lab manual).
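The reaction setup and cycling parameters quoted above amount to simple bookkeeping. The sketch below tabulates the stated volumes (2.5 µl of diluted DNA plus 47.5 µl of master mix for a 50 µl reaction) and the quoted temperatures and times; it is only an illustrative record of the protocol, not an executable thermal cycler program.

```python
# Reaction volumes quoted in the protocol (all values in µl).
reaction = {"diluted DNA": 2.5, "master mix": 47.5}
assert sum(reaction.values()) == 50.0  # stated total reaction volume

# Thermal-cycling steps as quoted: (step, temperature in °C, time in minutes).
program = [
    ("initial denaturation", 94, 3),
    ("denaturation", 94, 3),
    ("annealing", 52, 1),
    ("extension", 72, 2),
]

for step, temp_c, minutes in program:
    print(f"{step:>21}: {temp_c} °C for {minutes} min")
```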
The PCR was run for 4 hours, after which the samples were removed from the thermal cycler and stored in a -20 °C freezer. The next day, all PCR samples were thawed to room temperature. Again, two new tubes labeled “DNA” and “water” were prepared, to which 20 µl of the corresponding PCR samples were added. A 2.5 µl volume of gel loading buffer was added to each tube, and the samples were loaded onto a 1% agarose gel for electrophoresis. A suitable DNA ladder was also loaded in one of the lanes of the gel. Electrophoresis was stopped when the blue dye had run halfway down the gel (Lab manual).
The gel was finally photographed to observe the PCR products. Next, the amplified products, i.e. the 18S rRNA and actin gene fragments, were inserted into the plasmid vector pCR 2.1-TOPO using NaCl: 4 µl of PCR product was ligated into 1 µl of the vector using 1 µl of NaCl solution (Lab manual).
For this purpose, the ligation reaction was set up in a labeled 0.5 ml tube, which was tapped gently to mix. It was incubated for 10 minutes, during which a vial of TOP10 competent cells was thawed on ice and labeled. In the subsequent transformation reaction, 2 µl of the ligation reaction was added to the vial containing the cells, mixed gently, incubated for 30 minutes on ice, and then placed in a heat block maintained at 42 °C for 30 seconds (Lab manual).
The cell vial was then returned to ice for 1 minute. Later, 250 µl of warmed SOC medium was added to the vial, which was shaken horizontally for 1 hour at 37 °C. The lab bench was cleaned with a counter cleaner to avoid any contamination. A 50 µl volume from the vial was spread on pre-warmed plates containing Luria broth (LB) and LB + ampicillin, in order to distinguish cells that grow on LB alone (LB/amp-) from those that also grow in the presence of ampicillin (LB/amp+) (Lab manual).
The next step was to observe the bacterial plates for transformation. With the help of a ruler, a 1 cm square was drawn on the agar side of each plate. The number of colonies appearing within the 1 cm square was counted and recorded in a table for comparison. The plate area was calculated from area = πr², giving (3.14)(5 cm)² = 78.5 cm². Liquid LB + Amp broth was prepared by adding 10 µl of 25 mg/ml ampicillin to 5 ml of LB broth (Lab manual).
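Because only a 1 cm square of each plate was counted, the total number of colonies on a plate can be estimated by scaling the count up by the plate area (78.5 cm², from area = πr² with r = 5 cm). The sketch below illustrates this extrapolation; the per-square count used here is a hypothetical example, not a measured value from these plates.

```python
import math

plate_radius_cm = 5.0
plate_area_cm2 = math.pi * plate_radius_cm ** 2   # ≈ 78.5 cm², as in the protocol

colonies_per_cm2 = 3          # hypothetical count inside the 1 cm x 1 cm square
estimated_total = colonies_per_cm2 * plate_area_cm2

print(f"Plate area: {plate_area_cm2:.1f} cm², estimated total colonies: {estimated_total:.0f}")
```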
From that mixture, 2 ml was transferred into two different tubes labeled experimental E1 and E2. Two colonies were picked from the “experimental” LB + Amp plate with a sterile inoculating loop, taking care to pick only colonies that appeared clearly isolated from others. The colonies were placed separately into the tubes containing 2 ml of LB + Amp liquid broth, grown overnight at 37 °C on a shaker, and stored at 4 °C until use (Lab manual).
To recover plasmid DNA (free from contamination by genomic DNA or protein) from the liquid cultures and to confirm the presence of the inserted DNA by restriction mapping, plasmid mini-preps were used. Initially, 1.5 ml from each of the two overnight cultures was transferred into two 1.5 ml microfuge tubes, again labeled E1 and E2. They were centrifuged at 8000 rpm for 1 minute, the supernatant was collected into a beaker, and the tubes were drained onto a paper towel (Lab manual).
The pellet was resuspended in 250 µl of buffer P1 with RNase by vortexing. Then 250 µl of buffer P2 was added, and the tubes were inverted 4-6 times to ensure mixing; vortexing was avoided to prevent shearing of the genomic DNA. Next, 250 µl of buffer N3 was added, and the tubes were immediately inverted 4-6 times (Lab manual).
To achieve a colorless solution, the contents were mixed thoroughly. The tubes were then centrifuged for 10 minutes at full speed until a clear white pellet was visible. The supernatant was pipetted off and transferred to labeled QIAprep spin columns seated in collection tubes. These were centrifuged at full speed for 1 minute, and the flow-through liquid in the collection tubes was discarded. The spin columns were washed with 750 µl of buffer PE and centrifuged for 1 minute at full speed (Lab manual).
The flow-through liquid was again discarded, and the column was centrifuged for a further 1 minute at full speed to remove any residual liquid. Each QIAprep column was then placed in a clean, labeled 1.5 ml microcentrifuge tube. DNA was eluted by adding 50 µl of sterile water to the center of each QIAprep column, allowing the columns to stand for 1 minute, and centrifuging for 1 minute at full speed. The QIAprep columns were then discarded (Lab manual).
The plasmid DNA thus obtained was subjected to electrophoresis. In a 1.5 ml tube, 2 µl of plasmid DNA, 2 µl of 10X loading dye, and 16 µl of distilled water were combined. The sample mixtures were loaded into adjacent lanes of a 1% agarose gel, and the DNA ladder was loaded into one of the lanes as a size marker. Electrophoresis was stopped when the blue dye front had run halfway down the gel (Lab manual).
Results
The results obtained are shown in Pictures 1 and 2.
From left to right: ladder, 1E-e1, 1E-e2, 1W-e1, 1W-e2, 2E-e1, 2E-e2, 2W-e1, 2W-e2
From left to right: ladder, 3E-e1, 3E-e2, 3W-e1, 3W-e2, 4E-e1, 4E-e2, 4W-e1, 4W-e2
In this experiment, genomic DNA was isolated from D. melanogaster and inserted into a plasmid vector that was subsequently transformed into E. coli (Lab manual). The gel photograph from lab 6 (Picture 1) depicts the migration of samples “A” and “B” prior to amplification; the lanes of the gel ladder reflect the corresponding sizes of DNA in base pairs. The migration was moderate initially and improved later. When electrophoresis was stopped, DNA sample “A” was more prominent than sample “B”. The PCR experiment (Picture 2) revealed that the amplified 18S ribosomal fragment corresponded to the DNA marker at 5000 bp (Lab manual).
There was no amplification in the other lane because that sample was the negative control, containing water instead of genomic DNA in the PCR mix; hence, only the clear dye front was visible in that lane. The amplified product of the actin gene yielded similar results. In the transformation experiment, control and transformation plates were used (Lab manual).
Control plates with LB alone allow the growth of any E. coli, whereas LB + amp allows the growth only of E. coli that are ampicillin resistant, that is, that carry the ampicillin resistance gene harbored on the plasmid vector used for transformation. The LB plate therefore served as a positive control, since it supported the growth of E. coli cells in general, regardless of whether they carried ampicillin resistance genes. The LB + amp plate, in contrast, supported the growth only of ampicillin-resistant cells and deselected those that are ampicillin sensitive. As such, a considerable difference in cell growth was expected between the LB and LB + ampicillin plates.
Accordingly, dense growth of colonies was observed on the LB plate compared with the LB + ampicillin plate, which had only 28 isolated colonies. From the miniprep experiment, 50 µl of plasmid DNA was isolated, and when it was electrophoresed, bands similar to those obtained previously were observed.
However, these bands were more conspicuous than the earlier PCR products, indicating the purity of the miniprep product. The bands in Pictures 3 and 4 were slightly above 5000 bp relative to the DNA marker, and the bands in Pictures 5 and 6 were also above 5000 bp relative to the DNA marker.
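Band sizes such as “slightly above 5000 bp” are normally estimated by comparing migration distances against the ladder, since fragment size and migration distance are approximately log-linear on an agarose gel. The sketch below illustrates such an estimate by interpolating log10(size) against distance; the ladder sizes and all migration distances are hypothetical values chosen only to demonstrate the calculation, not measurements taken from these gels.

```python
import numpy as np

# Hypothetical ladder: fragment sizes (bp) and their migration distances (cm).
ladder_bp = np.array([10000, 7500, 5000, 3000, 1500])
ladder_cm = np.array([1.8, 2.2, 2.7, 3.4, 4.3])

def estimate_size(distance_cm: float) -> float:
    """Estimate fragment size from migration distance via log-linear interpolation."""
    # Interpolate log10(size) over the (increasing) migration distances.
    log_bp = np.interp(distance_cm, ladder_cm, np.log10(ladder_bp))
    return 10 ** log_bp

# A band migrating slightly less far than the 5000 bp marker (hypothetical distance).
print(f"Estimated size: {estimate_size(2.6):.0f} bp")  # a little above 5000 bp
```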
Discussion
In this experiment, an attempt was made to isolate, clone, and purify the 18S rRNA and actin genes of the common fruit fly (Lab manual). Firstly, the migration of the DNA samples and their positions in the gel relative to the DNA ladder marker may indicate their approximate molecular weights, as anticipated prior to PCR amplification. This could also reflect the reliability of the protocol, such that it could be applied to other genomic experiments with minor modifications. Similarly, the amplified products obtained after PCR are in agreement with the DNA ladder sizes.
The use of LB medium and ampicillin played a vital role in identifying the appropriate colonies during transformation (Lab manual). This determines whether colonies carrying the ampicillin resistance gene can be used for further cloning and transformation experiments, and it also minimizes the time required for the random selection of colonies. The miniprep experiment enabled the plasmid DNA product to be obtained in pure form.
The gel pictures (3-5) showed that the approximate band sizes of the cloned genes (18S rRNA and actin) were greater than 5000 bp, or close to 7500 bp. Hence, the results obtained address the hypothesis: the 18S rRNA and actin genes of Drosophila were cloned and purified with the simple protocol of the laboratory manual, and they were observed as amplified products, free from contamination, whose bands were similar in molecular weight to the DNA marker. Therefore, the results may indicate that the stated hypothesis is correct, and they also suggest the efficacy of the reagents used and the suitability of the laboratory environment.
The hypothesis addressed by this lab work appears important because of the growing interest in the study of Drosophila melanogaster as an animal model (Beckingham et al. 2005).
Its powerful genetic system has enabled the study of mitochondrial biogenesis, which is considered to play an essential role in cellular homeostasis (Fernandez-Moreno et al. 2007). This progress may be attributed to the molecular analysis of a wide range of gene/DNA inserts, which has become easier with the feasibility of gene amplification by PCR (Saiki et al. 1988). Therefore, the laboratory study of D. melanogaster should be strongly encouraged, with more sophisticated molecular biology techniques, so as to generate further study hypotheses.
References
Beckingham, KM, Armstrong, JD, Texada, MJ, Munjaal, R, Baker, DA. “Drosophila melanogaster: the model organism of choice for the complex biology of multi-cellular organisms.” Gravit Space Biol Bull 18.2 (2005): 17-29.
Cheng, S, Fockler, C, Barnes, WM, Higuchi, R. “Effective amplification of long targets from cloned inserts and human genomic DNA.” Proc Natl Acad Sci U S A 91.12 (1994): 5695-9.
Fernandez-Moreno, MA, Farr, CL, Kaguni, LS, Garesse, R. “Drosophila melanogaster as a model system to study mitochondrial biology.” Methods Mol Biol 372 (2007): 33-49.
Saiki, RK, Gelfand, DH, Stoffel, S, Scharf, SJ, Higuchi, R, Horn, GT, Mullis, KB, Erlich, HA. “Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase.” Science 239.4839 (1988): 487-91.