Designing an effective method for data collection is essential, because without data no research can be carried out. Still, very few statistics textbooks pay attention to data collection methods and issues (Bauer, 2009). Healthcare decision-makers should therefore be well acquainted with the art and science of the data collection methods available in statistics. Because medical research is performed on a person or a group of people, information about each participant is essential to the research and must be recorded. These data may be obtained by direct measurement (e.g. weight on scales), by asking questions, by observation, from the results of a diagnostic test (e.g. a diagnosis of coronary heart disease), or by other methods. Sometimes the unit of observation is not a person; in that case similar information has to be gathered through direct data collection, i.e. by observation or direct measurement. Other research may deal with financial performance, for instance a comparison of two hospitals in terms of efficiency and cost. Here the unit of observation is the hospital, and the information will be gathered from each hospital's financial database. An effective data collection method, and certainty about the nature of the data required, are therefore essential to any research process, and healthcare is no exception.
What is data? Suppose a study of medical students is being conducted in which demographic data such as age, sex, city of birth, and socio-economic background have to be gathered. Each of these demographic characteristics, e.g. age or sex, is a variable, while the information obtained on these variables from each student is the data.
Broadly, data can be divided into two categories: qualitative and quantitative. Qualitative data refer to qualities and quantitative data to quantities. Qualitative data are non-numeric; for example, name, sex, or city of birth is qualitative. Further, some variables may appear to be quantitative when essentially they are not, for instance variables related to the socio-economic background of the participant, such as age, income, occupation, and education. Even when numeric codes are used to depict the socio-economic variable, they serve only as tags or labels and are not used for quantitative analysis, so socio-economic group cannot be considered quantitative data (Daly & Bourke, 2000, p. 2).
Qualitative variables, also known as categorical or nominal variables, record data of a qualitative nature that can still be used in statistical analysis. When qualitative data are gathered in two categories, such as alive/dead or hypertensive/normotensive, the variable is called binary, dichotomous, or an attribute variable (Daly & Bourke, 2000, p. 2). By contrast, any variable that has an intrinsic quantitative meaning is quantitative: such data have an intrinsic numerical meaning and are called metric or numeric variables. They arise from actual measurement (e.g. age) or from counting (e.g. number of siblings).
Quantitative data can be continuous or discrete. A discrete (quantitative) variable is one whose values vary in finite steps, for instance the number of siblings or the number of children in a family, which takes integer values (Daly & Bourke, 2000). Continuous variables may take any value: the data need not be whole numbers and can fall anywhere between two values. Examples of continuous variables are weight, age, time, and body temperature. In practice, continuous variables are measured in discrete units, and data are usually recorded to the nearest unit; for instance, weight may be measured to the nearest kilogram and height to the nearest centimeter.
Now the question that needs to be answered is how these different kinds of data are summarized. In the case of qualitative data, simple counts are used, which avoids the cumbersome process of dealing with a long list of qualitative responses. The basic rule is to count the number of occurrences in each category and present them as frequencies, which allows the data to be shown in a compact form. One practical issue is how many decimal places to use when presenting percentages; traditionally one decimal place is considered sufficient (Daly & Bourke, 2000). A second issue arises when the rounded-off percentages do not sum to exactly 100%.
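As a minimal illustration of this counting rule (the city names and counts below are invented, since no raw data are given in the text), the following Python sketch tallies a qualitative variable, reports frequencies, and rounds the percentages to one decimal place; the rounded values need not sum to exactly 100%:

```python
from collections import Counter

# Invented qualitative data: city of birth recorded for 23 students.
cities = ["Delhi"] * 9 + ["Mumbai"] * 7 + ["Chennai"] * 4 + ["Kolkata"] * 3

counts = Counter(cities)
total = sum(counts.values())

print(f"{'City':<10}{'Frequency':>10}{'Percent':>10}")
for city, n in counts.most_common():
    # One decimal place is conventionally enough for the percentage.
    print(f"{city:<10}{n:>10}{100 * n / total:>10.1f}")

# The rounded percentages may not sum to exactly 100%.
rounded_sum = sum(round(100 * n / total, 1) for n in counts.values())
print("Sum of rounded percentages:", round(rounded_sum, 1))
```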
Qualitative data may be presented using statistical diagrams such as bar charts or pie charts. A bar chart is usually used when one axis represents the categories of a qualitative variable and the other its frequency, for instance the major reasons for endoscopy plotted against their frequency of occurrence. Bar charts may show frequencies, relative frequencies, and so on. Pie charts are also used to display qualitative data: the whole pie represents the total, and each slice represents the frequency observed in a category. Although pie charts are useful, bar charts are generally preferred for representing qualitative data.
Turning to quantitative data, we must understand how the collected data are displayed and summarized. The measurement scales commonly listed are nominal, ordinal, interval, and ratio scales. The first common method of presentation is a frequency table. The approach is the same as for qualitative data, but here categories must be created by grouping the values of the variable; these groups are defined by class limits and class intervals. Data may also be represented using histograms or frequency polygons. A histogram is the analogue of a bar chart for quantitative data; it gives a good picture of the frequency distribution and its shape. A frequency polygon is another way of presenting a frequency distribution, drawn by joining the mid-points of the tops of the histogram bars with straight lines. Drawing these diagrams can become tedious for large data sets; in such cases a stem-and-leaf diagram, a dot-plot, or a cumulative frequency polygon may be used.
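To make class limits and class intervals concrete, the short Python sketch below (with invented weight measurements) groups continuous data into classes and prints the grouped frequency table from which a histogram would be drawn:

```python
# Invented continuous data: body weight (kg) of 15 subjects.
weights = [52, 55, 58, 60, 61, 63, 64, 66, 68, 70, 72, 75, 78, 81, 84]

lower, width, n_classes = 50, 10, 4   # class limits chosen for illustration
frequencies = [0] * n_classes

for w in weights:
    # Place each value in its class interval (the last class absorbs any overflow).
    index = min((w - lower) // width, n_classes - 1)
    frequencies[index] += 1

for i, freq in enumerate(frequencies):
    lo = lower + i * width
    print(f"{lo}-{lo + width - 1} kg: frequency = {freq}")
```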
Given these methods of data collection and representation available to healthcare researchers, it is important to understand why a researcher should take the trouble to conduct such a complicated process. The reasons are presented by Cook, Netuveli, and Sheikh:
Data collection and processing are important steps that need to be considered in detail before embarking on a research project.
The careful selection of an appropriate statistical package will pay a dividend in long run. (2004, p. 78)
Further, there are ethical issues to consider when representing data. A researcher must keep his or her personal views separate from the data and present them without bias.
References
Bauer, J. C. (2009). Statistical Analysis for Decision Makers in Healthcare (2nd ed.). New York: CRC Press.
Cook, A., Netuveli, G., & Sheikh, A. (2004). Basic skills in statistics: a guide for healthcare professionals. London: Class Publishing Ltd.
Daly, L. E., & Bourke, G. J. (2000). Interpretation and uses of medical statistics. Malden, MA: Wiley-Blackwell.
The Canadian Shield (also referred to as the Laurentian Plateau, or Bouclier Canadien in French) makes up almost half of Canada's total area, extending from Labrador through northern Quebec and Ontario, eastern and northern Manitoba, northern Saskatchewan, and the very northeast corner of Alberta, where it plunges under the plains and mountains (Schwartzenberger, 6). This gigantic geological shield is highest at the periphery and lowest at the center around Hudson Bay. A thin layer of soil covers the eight-million-square-kilometer area (Willis, para. 2). The thin soil lies on top of igneous rocks that date back to the Precambrian period (between 4.5 billion and 540 million years ago), and the deep, widely connected bedrock of the region reflects its volcanic history. While most of the population is concentrated in the south-central part, the population in the northern area of the shield is sparse and scattered. Even though the area has the capacity to produce hydroelectric power, only a few industries can be found there.
The Canadian Shield is a physiographic division consisting of five smaller provinces: the Laurentian Upland, the Kazan Region, Davis, Hudson, and James (Physiographic Regions Map, 1). The shield extends into the U.S. as the Adirondack Mountains, connected by the Frontenac Axis and the Superior Upland. The region is characterized by rounded, exposed areas of rock that are millions of years old, many angular lakes, marshy surfaces, and a disorderly drainage system. In outline the shield is roughly U-shaped, almost semi-circular, so it appears as a warrior's shield or a huge doughnut. The thin soils were created by glacial erosion, while the many angular lakes and marshy surfaces exist because the watersheds of the area have not matured and are still sorting themselves out, an effect compounded by post-glacial rebound.
Isolated areas of the shield are covered by jack pine forests (The Canadian Shield Region, para. 3). These have a distinct undergrowth comprising several species of pale reindeer lichen, dusty green sage, and bearberry, and enormous sand-dune landscapes grade into the pine forest. The sand dunes allow unique plant species not found elsewhere in the province to exist in this region, and granite rocks and brilliant sand beaches extend from Fidler Point to other places. Because the shield has a poor climate, much of the land slopes steeply, and drainage is inadequate, there is no extensive agriculture in the region. The scarce vegetation that is present is rooted in the rocks.
In spite of the agricultural limitations, the northern part of the shield has vast natural resources such as copper, gold, silver, nickel, and diamonds. The region is a collection of Archean plates, accreted juvenile arc terranes, and sedimentary basins of Proterozoic age, which are believed to have been progressively amalgamated during the interval from 2.45 to 1.24 Ga. The Canadian Shield is the largest region on the planet of exposed Archaean rock, with highly extensive geological features. The area has varying climates and receives an estimated forty-five centimeters of rainfall every year. The northern region has long, very cold winters, more snow, and the shortest summers.
Works Cited
Physiographic Regions Map. The Atlas of Canada. 2007. Web.
Schwartzenberger, Tina. The Canadian Shield. Calgary: Weigl Educational Publishers, 2006.
The Canadian Shield Region. Alberta Online Encyclopedia. Heritage Community Foundation. N.d. Web.
Willis, Bill. The Canadian Shield. Social studies. Worsley School. 1997. Web.
According to BCS (1996), catalyzed reactions can be interpreted using transition state theory. The conversion of reactants into products involves the formation of intermediate products that dissociate to give the final products. In a non-directed reaction, a side reaction can occur, leading to the formation of side products (products that form but are not the intended products).
Brito-Arias (2007) indicates that in some reactions the formation of an intermediate compound is not favoured by the physical conditions of the reaction, whereas in other cases the intermediate complex formed is stable and establishes an equilibrium with the reactants. When the intermediate is stable, no products are formed unless the activation energy needed to reach the transition state is overcome, allowing products to form by decomposition or rearrangement of the intermediate. Some organic intermediates form in such a way that the intermediate becomes electronically neutral and no migration of electrons can occur.
This halts the reaction. Another scenario that can bring an organic reaction to a standstill is when the nucleophile, which is the electron source, has an electron-withdrawing group at the β-carbon atom (IUPAC, 1997a). Such a scenario is observed in α,β-keto esters, which require oxo-insertion in order to form a nucleophile that can deprotonate a hydronium ion, leading to a protonated α,β-keto ester and hence an electron sink at the α-carbon atom adjacent to the carbonyl functional group.
The electron-withdrawing group reduces the electron density at the electron source, and the nucleophile becomes electronically neutral. Another scenario is when the electrophile has an electron-donating group on the β-carbon atom; the electron-donating group donates electrons to the electron-deficient sink, thereby stabilizing the molecule. A reaction therefore cannot proceed if the reactant is electronically neutral, because no electron transfer can occur. The reaction can only be achieved through the introduction of better leaving groups (IUPAC, 1997b).
Aims of the experiment
To carry out an acid-catalyzed hydrolysis of the glycoside salicin and determine absorbance as a function of time.
To carry out enzyme catalysis of the glycoside salicin and use absorbance data at 290 nm at various time intervals to determine the first-order rate constant for the reaction.
To isolate emulsin from almonds and carry out comparative studies of its efficiency compared with standard catalysis.
Literature review
Laidler (1997) indicates that the activation energy, denoted Ea, is the energy required to convert reactants into the transition state of the reaction. The transition state is the highest point on the reaction's energy profile, i.e. the energy barrier for the reaction (Laidler, 1993). Increasing the temperature helps reactants cross this barrier because it increases the vibrational energy of the molecules and their rate of collision. In biological systems, raising the temperature or using finely divided catalysts is not possible because physiological reactions occur within a very narrow temperature range (Ingle & Crouch, 1988).
Bugg (2004) states that enzymes are biological catalysts that lower the activation energy of a chemical reaction pathway; that is, enzymes create a reaction pathway with a lower activation energy along which the reaction can occur.
Chemical kinetics provides that for an acid-catalyzed reaction, the rate of hydrolysis is directly proportional to the concentration of the substrate and of the acid used for the hydrolysis (Laidler, 1993).
Hence
Rate = -dx/dt ∝ x[H+]     (equation 1)
where x is the concentration of the substrate.
The negative sign indicates that the concentration of the substrate decreases as time increases. The acid, however, is used only for protonation, so that a better leaving group is formed at the electron sink. Its concentration therefore remains constant throughout the reaction because it is not consumed in forming any product (Page & Williams, 1987). This implies
Rate = -dx/dt = kx     (equation 2)
A plot of rate against x therefore passes through the origin with gradient k.
The reaction therefore follows pseudo-first-order kinetics, because the rate depends on only one reactant. In following the hydrolysis, the concentration of the phenyl derivative product is measured; if y denotes its concentration at time t, then
y = a - x     (equation 3)
where a is the initial concentration of the glycoside. This means that at the end of the reaction, if no equilibrium is established, the concentration of the phenyl derivative equals the initial concentration of the glycoside (Warshel, 1991). Integrating equation 2 then gives
ln((a - y)/a) = -kt
According to the Beer-Lambert law, A = εcl
Where:
A = absorbance
ε = extinction coefficient
c = concentration in moles per litre
l = path length of the cell in cm
By deduction, it follows that the concentration of the phenyl derivative (that is, y) will be proportional to the absorbance at 290 nm, because ε and l are constants.
Hence
ln(A∞ - At) = -kt + ln A∞
where At is the absorbance at time t and A∞ is the final absorbance at completion of the reaction.
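A plot of ln(A∞ - At) against t should therefore be a straight line of slope -k. The sketch below (standard-library Python, with the 65 °C absorbance values transcribed from Table 3 of the results and A∞ taken as 4.0, consistent with the A∞ - At values listed in Table 6) estimates the pseudo-first-order rate constant by an ordinary least-squares fit:

```python
import math

def least_squares(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line through the points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Acid-catalyzed hydrolysis at 65 °C: times (s) and absorbance at 290 nm from Table 3.
times = [259, 508, 757, 1010, 1260, 1509, 1765]
absorbance = [0.3039, 0.4404, 0.5721, 0.7022, 0.7430, 0.9298, 1.1821]
A_inf = 4.0   # assumed final (test tube 15) absorbance, consistent with Table 6

ln_diff = [math.log(A_inf - At) for At in absorbance]
slope, intercept = least_squares(times, ln_diff)
print(f"pseudo-first-order rate constant at 65 °C: k ≈ {-slope:.2e} s^-1")
```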
Since the enzyme at any point in the reaction exists in the form of an enzyme-substrate complex whose concentration is effectively constant, the rate is represented by
Rate = -dx/dt = k. Because the rate does not depend on the substrate concentration, the reaction follows zero-order kinetics, and the rate constant has units of M s-1. The Arrhenius equation relates the rate constant k to Ea:
ln k = -Ea/RT + ln A
Where R is the gas constant (8.314 J mol-1 K-1)
T is the absolute temperature
A is the frequency factor. If the rate constant is determined at two different temperatures, T1 and T2, then
Ea = R T1 T2 ln(k2/k1) / (T2 - T1)     (equation 4)
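Equation 4 can be evaluated directly once the two rate constants are known. A minimal Python sketch follows; the numerical rate constants used here are placeholders for illustration and should be replaced by the values fitted from the experimental data:

```python
import math

R = 8.314                  # gas constant, J mol^-1 K^-1
T1, T2 = 338.15, 348.15    # 65 °C and 75 °C expressed in kelvin
k1, k2 = 1.8e-4, 3.0e-4    # placeholder rate constants; substitute the fitted values

# Equation 4: Ea = R * T1 * T2 * ln(k2/k1) / (T2 - T1)
Ea = R * T1 * T2 * math.log(k2 / k1) / (T2 - T1)
print(f"activation energy Ea ≈ {Ea / 1000:.1f} kJ/mol")
```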
Methodology of the experiment
Procedure: experiment A
Reagents used for the reaction
Phosphate buffer at pH 6, concentration = 0.1 M
Solution E (0.67 g of salicin dissolved in phosphate buffer in a 250 ml conical flask)
2 M NaOH
Solution F (0.005 g of emulsin in 10 ml of phosphate buffer)
Apparatus used during the reaction
Thermostated water bath at 30 °C
Thermostated water bath at 40 °C
Fourteen 100 ml test tubes
Thermometer
Stirrer
Four-significant-figure digital balance
Four 100 ml measuring cylinders
Four 10 ml measuring cylinders
Fourteen pipettes
Procedure A
Procedure 1 of the reaction
Mix 7.5 ml of salicin solution E and 2.5 ml of emulsin solution F in a 50 ml conical flask at 30 °C.
Record the time of mixing
At time zero (the time of mixing), pipette 0.3 ml of the mixture and add it to a test tube containing 2 M NaOH
Repeat the procedure of pipetting 0.3 ml of the mixture six more times at intervals of 3 minutes
Label every test tube and keep it in a test tube rack or test tube holder
Procedure 2 of the reaction
Repeat procedure 1 in the 40 °C water bath for the remaining seven test tubes
Procedure 3 of the experiment
Determine the absorbance at 290 nm of all 14 test tubes against a blank of 7.5 mL of 2 M NaOH and 2.5 mL of distilled water.
Deliverables
A plot of absorbance (vertical axis) against time (horizontal axis) and calculation of the zero-order rate constants (a minimal fitting sketch is given after this list)
Determination of Energy of activation, Ea, for the enzyme catalyzed reaction
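For the zero-order analysis required above, the slope of the absorbance-time line is itself the rate constant (in absorbance units per second). A minimal least-squares sketch in Python, using the 30 °C readings from Table 1 as illustrative input, is:

```python
# Enzyme-catalyzed reaction at 30 °C: times (s) and absorbance values from Table 1.
times = [0, 180, 360, 540, 720, 900, 1080]
absorbance = [0.2451, 0.3843, 0.8238, 0.6512, 0.8865, 0.9398, 1.0117]

n = len(times)
mean_t = sum(times) / n
mean_a = sum(absorbance) / n

# For a zero-order reaction the absorbance rises linearly with time, so the
# least-squares slope is the zero-order rate constant (absorbance units per second).
slope = (sum((t - mean_t) * (a - mean_a) for t, a in zip(times, absorbance))
         / sum((t - mean_t) ** 2 for t in times))
print(f"zero-order rate constant at 30 °C ≈ {slope:.2e} absorbance units s^-1")
```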
Procedure B
Reactants used
Concentrated hydrochloric acid
2 M NaOH
Solution G (a solution of salicin prepared from 0.67 g in aqueous 1-propanol (80% water, 20% 1-propanol))
Apparatus used
A thermostated water bath at 65 °C
A thermostated water bath at 75 °C
Fourteen test tubes containing 10 mL of 2 M NaOH
100 ml test tubes
10 ml test tubes
Procedure 1 of the experiment
Mix 7.5 mL of salicin solution G with 2.5 mL of concentrated HCl in a 50 mL flask in a 65 °C water bath
Cover the 50 ml flask with a parafilm
Pipette 0.3 ml aliquot at time zero
Quench the 0.3 ml aliquot with an excess of 2 M NaOH
Mix with vortex mixer
Repeat the above steps for 6 other aliquots at an interval of five minutes
Procedure 2 of the experiment
Repeat the steps in procedure 1 of procedure B using the 75 °C water bath.
The time interval between successive aliquots should be three minutes
Procedure 3 of the experiment
Determine the final (A∞) reading by pipetting 0.3 ml of the reaction mixture into a test tube labelled 15
Add 10 mL of 2 M NaOH solution
Deliverables for procedure B
Tabulation of test tube number
Tabulation of time
Tabulation of absorbance (At)
A∞ - At differences (obtained by subtracting each absorbance reading from the value obtained in the test tube labelled 15)
ln(A∞ - At) values
Plot of absorbance (vertical axis) against time (horizontal axis)
Plot of ln(A∞ - At) (vertical axis) against time (horizontal axis)
Determination of the first order rate constant
Determination of Energy of activation for the acid-catalyzed reaction
Method 3 for the isolation of Emulsin from Almonds
Procedure for the method 3
Place two almonds in boiling water for 15 seconds
Pour out the water
Peel off the green skins
Grind the blanched almonds using mortar and pestle
Add 10mL of water and grind into a paste
Add 10mL of 10% acetic acid to coagulate present proteins
Let it settle for 10 minutes
Stir at 2-minute intervals for four minutes
Filter half of the solution using a fluted filter paper
Add 1 drop of 10% acetic acid to the filtrate to clear the filtrate
(Add dropwise 10% acetic acid until the filtrate clears if it does not clear with the first drop)
The filtrate is now assumed to contain the emulsin protein
Procedure 2 of the experiment
Prepare two test tubes, each containing 10 ml of 2 M NaOH (label one "standard" and the other "emulsin")
For the standard test tube, mix 7.5 mL of salicin solution E, 2.5 mL of distilled water, and one drop of 10% acetic acid
For the emulsin test tube, mix 7.5 mL of salicin solution E, 2.5 mL of the filtrate, and one drop of 10% acetic acid
Incubate both test tubes at 40 °C for ten minutes
Pipette 0.3 ml aliquots
Quench the reaction by adding an excess of 2 M NaOH
Determine the absorbance at 290 nm for the two tubes against a blank of 7.5 mL of 2 M NaOH and 2.5 mL of distilled water.
Deliverables
Comment on the results.
Explanation of whether almonds contain much emulsin, with reasons.
Results presentation
Wavelength = 290 nm
Enzyme Catalyzed Reaction
Table 1: enzyme-catalyzed reaction in a water bath thermostated at 30 °C (three-minute interval between aliquots)

Time (s)    Absorbance
0           0.2451
180         0.3843
360         0.8238?
540         0.6512
720         0.8865
900         0.9398
1080        1.0117
Table 2: enzyme-catalyzed reaction in a thermostated water bath at 40 °C (three-minute aliquot intervals)

Time (s)    Absorbance
14          0.2557
186         0.4715
366         0.6343
548         0.9568
729         1.0923
908         1.4057
1098        1.6093
Acid Catalyzed Reaction
Table 3: acid-catalyzed reaction for hydrolysis of the glycoside salicin at 65 °C (five-minute aliquot intervals)

Time (s)    Absorbance
259         0.3039
508         0.4404
757         0.5721
1010        0.7022
1260        0.7430
1509        0.9298
1765        1.1821
Table 4: acid-catalyzed reaction for hydrolysis of the glycoside salicin at 75 °C

Time (s)    Absorbance
198         0.3347
375         0.5653
548         0.7614
728         0.9312
911         1.1507
1089        1.2599
1269        1.3987
            Standard    Emulsin
Absorbance  0.1855      1.4175
The almond extract does not contain as much emulsin as the standard preparation. This is because the almond enzyme is not pure: it exists in a mixture with other compounds, and these other chemical compounds affect its reactivity compared with the standard.
Table 6: tabulation of time, absorbance At, A∞ - At, and ln(A∞ - At) at 65 °C and 75 °C.

65 °C:
Time (s)    At        A∞ - At    ln(A∞ - At)
250         0.3039    3.6961     1.3073
500         0.4404    3.5596     1.2696
750         0.5721    3.4279     1.2319
1000        0.7022    3.2978     1.1933
1250        0.7430    3.2570     1.1808
1500        0.9298    3.0702     1.1217
1750        1.1821    2.8179     1.0360

75 °C:
Time (s)    At        A∞ - At    ln(A∞ - At)
180         0.3347    3.6653     1.2989
360         0.5653    3.4347     1.2339
540         0.7614    3.2386     1.1751
720         0.9312    3.0688     1.1213
900         1.1507    2.8493     1.0471
1080        1.2599    2.7401     1.0080
1260        1.3987    2.6013     0.9560
Conclusion
The enzyme-catalyzed hydrolysis of the glycoside salicin occurs at a faster rate than the acid-catalyzed reaction. This is because the enzyme-catalyzed reaction has a lower activation energy: the reaction pathway established by the enzyme has a lower-energy transition state, hence a lower activation energy and a faster rate of reaction. The almond extract does not contain as high a concentration of enzyme as the standard because it exists as a mixture. Another factor that could have contributed to the lower activity of the almond preparation is possible denaturation of the enzyme on heating; enzymes, being proteins, are denatured at elevated temperatures.
Future studies should therefore investigate the reactivity trends of enzymes whose side groups are either electron-donating or electron-withdrawing, so that control enzymes can be developed with each kind of side group. Further work should also examine the effect of protecting the reactive side groups with neutral, electron-withdrawing, or electron-donating groups in order to determine the resulting rates of reaction of the enzyme.
Questions
What do you conclude from your results for the activation energies for the enzyme and acid catalyzed reactions?
The activation energy of the enzyme-catalyzed reaction is lower, and the reaction therefore occurs at a higher rate.
Give a mechanism for the acid-catalyzed hydrolysis of salicin (1).
The mechanism of the reaction
In question 2, would you expect the initially formed product glucose to be the α-anomer or the β-anomer? Explain your answer.
The initially formed product is the β-anomer, because it is the product of a β-glucose derivative: condensation of β-glucose gives rise to a β-polysaccharide, so hydrolysis should initially yield the starting monomers. Owing to electron density effects, the β-anomer of glucose is not stable and undergoes rearrangement to form the α-anomer, which is relatively stable and is not subject to the electron density that could stimulate rearrangement.
The whole three-dimensional electron cloud is represented by:
Assuming the emulsin-salicin complex has the following general structure (HA = acidic group), suggest a mechanism for the enzyme-catalyzed reaction (hint: water is present).
Suggest some reasons why this process should be so much more efficient than the acid-catalyzed reaction.
The enzyme-catalyzed reaction is a more efficient process because it proceeds as a forward reaction, whereas the acid-catalyzed reaction is inefficient in terms of products or yield because, at every step, the reaction establishes a state of equilibrium. This means the enzyme-catalyzed reaction gives a larger yield than the acid-catalyzed reaction.
The enzyme-catalyzed reaction of the glycoside also occurs at a lower temperature and with greater efficiency than the acid-catalyzed reaction.
The enzyme-catalyzed reaction has a lower activation energy, so products are formed faster than in the acid-catalyzed hydrolysis.
In question 4, would you expect the product glucose to be the α-anomer or the β-anomer? Explain your answer.
The initially formed product is the β-anomer, because it is the product of a β-glucose derivative. Owing to electron density effects, the β-anomer of glucose is not stable and undergoes rearrangement to form the α-anomer, which is relatively stable and is not subject to the electron density that could stimulate rearrangement.
The whole three-dimensional electron cloud is represented by:
Based on the transition state for the enzyme catalyzed reaction, design a potential enzyme inhibitor. Explain how it works.
The potential enzyme inhibitor
Mechanisms through which the enzyme inhibitor works
At equilibrium
When the inhibitor is introduced, a lone pair of electrons on the amino group attacks the carbonyl carbon of the carboxylic end (the electrophile), which results in elimination of the hydroxyl group (OH). This locks the active site of the enzyme. Since the enzyme is the electrophile towards the glycoside salicin, which acts as the nucleophile, no reaction can then take place.
List of references
Biological and Chemical Sciences (BCS) -University of London, 1996, 2-Carb-33: Glycosides and glycosyl compounds, Biological and Chemical Sciences, University of London.
Brito-Arias, M. 2007, Synthesis and Characterization of Glycosides, New York: Springer.
Bugg, T., 2004, Introduction to Enzyme and Coenzyme Chemistry, 2nd edition, Blackwell Publishing Limited.
Ingle, J.D. and Crouch, S.R. 1988, Spectrochemical Analysis, New Jersey: Prentice Hall.
International Union of Pure and Applied Chemistry (IUPAC), 1997a, Glycosides, IUPAC Compendium of Chemical Terminology, 2nd edition, editors A. D. McNaught and A. Wilkinson. Oxford: Blackwell Scientific Publications.
International Union of Pure and Applied Chemistry (IUPAC), 1997b, Glycosyl group, Compendium of Chemical Terminology, 2nd edition, editors A. D. McNaught and A. Wilkinson, Oxford: Blackwell Scientific Publications.
Laidler, K., 1997, Chemical kinetics, 3rd Edition, Benjamin Cummings.
Laidler, K.J. 1993, The world of physical chemistry, Oxford University Press.
Page, M.I. and Williams, A. (eds) 1987, Enzyme Mechanisms, Royal Society of Chemistry.
Warshel, A., 1991, Computer Modeling of Chemical Reactions in Enzymes and Solutions, John Wiley and Sons.
The patient would likely benefit from practices in the CAM domain of manipulative and body-based practices, which are commonly aimed at those with chronic pain. This domain includes CAM treatments such as chiropractic manipulation, massage, acupuncture, acupressure, and reflexology. Chiropractic manipulation would benefit the patient because its spinal and joint adjustments positively influence the body's nervous system. Massage therapies have a wide range of proven and claimed benefits, primarily stimulating blood flow to affected areas and inducing relaxation (Woodbury, Soong, Fishman, & García, 2015). The patient would also likely benefit from the mind-body domain of CAM, which includes therapies ranging from cognitive psychotherapy to meditation and hypnosis, and even distraction techniques such as music or art. All of these may be helpful in distressing situations, but the case study presents a traumatic event and chronic pain. Therapies such as meditation and hypnosis can allow the patient to manage the increasing pain (due to the lack of opiate analgesics) on a mental health level (Lambing, Witkop, & Humphries, 2019).
Measures of Pain
Physical assessments consist of physically examining the patient and seeking to determine the anatomical cause of pain. The advantage of this approach is that it allows the collection of data such as vital signs and the use of various tests that may help determine the cause of pain. The biggest disadvantage is a possible over-reliance on physical examination for pain identification and treatment: while tests and data are important, they cannot always identify the source of the pain. Behavioral assessments examine the frequency or duration of behavior around pain in order to determine factors that may explain the behavior or be the cause of the pain. The benefit is that observing behavior can reveal the origins of pain when physical examination and the patients themselves cannot. The disadvantage is that in some instances pain may not manifest in behavior, or the individual may mask the true level of pain or where they feel it. Self-assessments use various tools and scales for a patient to identify and relay the origin or level of their pain. The benefit is that this is a primary source of information; because pain is a subjective experience, feedback from the patient feeling the pain is critical. The downside is that self-reporting may be biased by perception and pain thresholds, making it difficult to generalize for everyone (Powell, Downing, Ddungu, & Mwangi-Powell, n.d.).
Powell, R. A., Downing, J., Ddungu, H., & Mwangi-Powell, F. N. (n.d.). Pain history and pain assessment. Web.
Woodbury, A., Soong, S. N., Fishman, D., & García, P. S. (2015). Complementary and alternative medicine therapies for the anesthesiologist and pain practitioner: A narrative review. Canadian Journal of Anesthesia, 63(1), 69-85.
Nickel (Ni) is a chemical element that belongs to the tenth group and the fourth period of the periodic table. The atomic number of nickel is 28, and its atomic mass is approximately 58.71 grams per mole. Nickel is a tough, gray metal; it is malleable and ductile, and can be beaten into thin sheets or drawn into thin wires. The arrangement of electrons in its outermost shells makes nickel a good conductor of heat and electricity. The melting and boiling points of the metal are 1453 °C and 2913 °C, respectively.
Nickel reacts with other elements to form complexes that are mainly blue or green. Dilute acids dissolve nickel, liberating hydrogen gas. Tiny particles of nickel can adsorb hydrogen gas, which makes nickel an important catalyst. Other applications of nickel include the manufacture of alloys and superalloys, rechargeable batteries, coins, catalysts, and metal castings (INSG Insight, 2013). Nickel resists corrosion at extreme temperatures and salinity, making it a useful material in the production of gas turbines and propeller bars in boats (INSG Insight, 2013).
Recent Research on Nickel
Recent research focuses on the use of nickel in nanotechnology. Nanoparticles have drawn immense attention due to their unique magnetic, optical, and electronic traits; metal nanoparticles are useful in the making of paints, colors, and sensors. Tientong, Garcia, Thurber, and Golden (2014) make use of simplified chemical reactions to manufacture nanopowders of nickel and nickel hydroxide. Nickel is reacted with hydrazine hydrate at an alkaline pH, followed by sonication at temperatures between 54 and 65 °C, which triggers a reduction reaction that forms nickel hydroxide nanoparticles with diameters between 12 and 14 nanometers (Tientong et al., 2014). Polyvinylpyrrolidone lowers the diameter of the nanoparticles by half. X-ray diffraction and infrared spectroscopy are used to elucidate the structure of the resultant nanoparticles.
Roselina, Azizan, Hyie, Jumahat, and Bakar (2013) investigate the influence of pH on the development of nickel nanostructures in the chemical reduction technique. Hydrazine is used as a reducing agent and ethylene glycol as a surfactant at a temperature of 60 °C (Roselina et al., 2013), while varying quantities of sodium hydroxide are used to control the pH of the reacting mixture. The structure of the nanoparticles formed under various pH conditions is studied using electron microscopy. It is revealed that altering the proportion of sodium hydroxide leads to variations in the size of the nanoparticles from 20 nanometers to 800 nanometers. In addition, raising the pH from 6 to 12 causes the development of nanostructures whose texture resembles wool. Pure nickel nanoparticles are formed when the ratio of hydroxide to nickel ions is greater than four (Roselina et al., 2013).
A separate study by Jovalekić, Nikolić, Gruden-Pavlović and Pavlović (2012) looks into the magnetic properties of two nickel alloys, namely nickel ferrite and zinc-nickel ferrite. The classic sintering technique and the planetary mill synthesis method are used to manufacture these ferrites (Jovalekić et al., 2012, p. 499). Electron microscopy (scanning and transmission) is used to monitor the progress of the reaction. The electromagnetic radiation coefficients are then computed from measured values of permittivity and permeability. The study deduces that the preparation method and the ultimate particle size influence the properties of the resultant ferrites (Jovalekić et al., 2012).
Conclusion
It is evident that nickel is a versatile element that can be used in the development of novel compounds with unique properties. Therefore, a delicate balance between chemical and environmental conditions is necessary to ensure that nickel compounds with the right attributes are formed.
References
INSG Insight. (2013). Nickel-based super alloys. Web.
Jovalekić, Č., Nikolić, A. S., Gruden-Pavlović, M., & Pavlović, M. B. (2012). Mechanochemical synthesis of stoichiometric nickel and nickel-zinc ferrite powders with Nicolson-Ross analysis of the absorption coefficients. Journal of the Serbian Chemical Society, 77(4), 497-505.
Roselina, N. R. N., Azizan, A., Hyie, K. M., Jumahat, A., & Bakar, M. A. (2013). Effect of pH on formation of nickel nanostructures through chemical reduction method. Procedia Engineering, 68, 43-48.
Tientong, J., Garcia, S., Thurber, C. R., & Golden, T. D. (2014). Synthesis of nickel and nickel hydroxide nanopowders by simplified chemical reduction. Journal of Nanotechnology, 2014(2014), 1-6.
Research comprehension is essential in reading a scientific article. Such writing is not accessible to most people because of the seeming complexity of its structure and language. Nevertheless, works of science can be read and understood, and finding effective strategies for exploring complex texts is a critical skill to learn. The major idea is that pre-reading activities are as important to successful comprehension as the reading itself. This paper summarizes the methods, results, and discussion used in constructing step-by-step instructions for reading research reports.
Introduction
All research articles start with an introduction or background information. The introduction is the section that describes the subject of the article in general terms. It serves two purposes. The first is that it familiarizes readers with the subject matter and explains the point of the research report in the first place. Secondly, it lays out the researchers' hypotheses and expected outcomes. It includes a thesis, a statement that the authors will try to support over the course of the paper.
There are several recommendations to keep in mind when reading an introduction. In particular, a reader should pay attention to peer review, which signals a quality article. Peer review is carried out by external scientists who have expertise in the new article's subject matter. Herber et al. (2020) refer to peer review as "a touchstone of modern evaluation of scientific quality" (p. 2). The most valuable part of peer review is the reviewer comments, because researchers provide feedback via comments, thus increasing the overall scientific veracity of an article.
Another point to consider is that evaluating a research project requires methodological integrity. Levitt et al. (2017) argue that two processes constitute its essence: fidelity to the subject matter and utility in achieving research goals (p. 2). Fidelity is exercised through continuous study of the phenomenon under observation, while utility entails producing inquiries that answer the research questions; the inquiries themselves are based on the research questions of a particular study. Overall, the first step is selecting appropriate literature based on peer review and adherence to methodological integrity.
Methods
Everyone conducting a study must explain how they arrived at their conclusions. The methods section is designed to make readers aware of the steps undertaken by the researchers: the authors describe how they conducted their study, disclose the use of volunteers and the manner of their participation, and describe any tools used to collect data. The people who are being studied are called the study sample.
An essential part of this section is the discussion of the sample's limitations. Sample selection should adequately reflect the population (Gentles et al., 2016), so a reader should evaluate whether the sample size adequately reflects the population. Sometimes researchers choose respondents by convenience sampling, and the real population is not represented in the research. Therefore, the second step in reading the methods section is assessing how well the sample represents the population.
Results
Having explained the methods used, researchers display their results. The important feature of this section is that it is supposed to be objective: only the final data, which stem from the analysis, are presented. Readers are free to interpret the results of the study as they wish, while the researchers do not provide subjective insight into what the results mean. The third step is ascertaining how the study arrived at the resulting findings.
Discussion
The discussion section is the logical continuation of the results. Researchers analyze the implications of their findings and present their view of what the findings mean for the studied area. The authors also note the limitations of their study and give recommendations for further research. Unlike the previous sections, here the authors provide their opinion and view of the situation. It is important to be able to distinguish between the authors' input and the objective findings, which is the fourth step of reading a research report (Harrison, 2019).
Conclusion
In order to obtain the skills necessary for reading a research report, it is important to consider the recommendations suggested by scientists. There are four steps that readers can undertake to better comprehend a scientific article. The first step is choosing the relevant literature. The second is comparing the sample size and the population. The third is assessing how the study arrives at the results. The final step is evaluating how adequately the authors interpret the results.
References
Gentles, S. J., Charles, C., Nicholas, D. B., Ploeg, J., & McKibbon, K. A. (2016). Reviewing the research methods literature: Principles and strategies illustrated by a systematic overview of sampling in qualitative research. Systematic Reviews, 5(1), article 172, 1-11.
Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., & Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: Promoting methodological integrity. Qualitative Psychology, 4(1), 2-22. Web.
The production and purification of proteins is a difficult, expensive, and time-consuming task. Naturally, proteins are produced by plants and animals from their building blocks, known as amino acids. However, this is a slow process, and the products are usually limited to small amounts that cannot meet industrial needs, for example the production of enzymes for industrial use.
This has challenged scientists to come up with several methods for the mass production of proteins for industrial and large-scale use in daily life, including the production of enzymes, medicines, and food supplements. One of the major ways protein production is carried out is by expressing the proteins in plants. β-glucuronidase is one such protein that is manufactured on a large scale for industrial use as an enzyme.
This protein, β-glucuronidase, is prepared industrially by expression in transgenic plants such as tobacco (Menkhaus et al., 2004) as a recombinant protein, which is later extracted and purified by several methods before it is ready to use. The amount of protein recovered from this process is sufficient to cater for the industrial needs of a given activity, for example enzyme production.
Plant expression systems have turned out to be a very good option for the mass production of proteins used for medicinal, pharmaceutical, and commercial purposes.
These plant expression systems, using transgenic plants, have several advantages, including low production cost, a high and fast rate of production of a given protein (Larrick et al., 2001), safety in that the plants do not produce toxins harmful to humans, and the fact that plants have been shown to produce a large variety of recombinant proteins in their systems which may later be purified (Fischer and Emans, 2000).
The most commonly used plant for the industrial production of ²-glucuronidase is the Tobacco plant. This transgenic plant is considered most suitable since there cannot be a transfer of any harmful protein produced by the plant to animals or humans because it is neither a feed crop nor a food crop (Fischer et al., 2004). Furthermore, the tobacco plant has in place strong mechanisms and regulatory control measures that can be used for the expression of transgene proteins.
Another advantage of the tobacco plant is that it produces a large biomass which makes it a very suitable plant for the production of the recombinant ²-glucuronidase in large quantities. One disadvantage, however, of the tobacco plant as a transgenic option is that it produces large quantities of natural plant proteins and phenolic and alkaloid compounds which hamper the purification of the expressed recombinant proteins.
Uses of β-glucuronidase
One of the uses of β-glucuronidase in industry is as a component in diagnostics. β-glucuronidase is an enzyme that can split compounds containing glucuronic groups. These glucuronic groups are mainly present in the spleen, liver, and the reproductive and endocrine tissues of some higher animals and mammals.
Thus, in diagnostics, β-glucuronidase is used to determine the amount of steroids and/or proteins containing glucuronic compounds in blood. In addition, β-glucuronidase is used as an isoenzyme in molecular biology assays, mainly as a reagent.
Purification of β-glucuronidase
The purification of recombinant β-glucuronidase from the transgenic plant proteins is essential for proper adsorption and functioning without impurities. The impurities that have to be eliminated from the transgenic plant extract, for example from the tobacco plant, include acidic components such as phenolic acids and phytic acid, native plant proteins, nucleic acids, and nicotine, which has toxic alkaloid properties.
These form complexes with the recombinant proteins, thereby interfering with processes such as column chromatography. To counter this, including reagents such as 2-mercaptoethanol or dithiothreitol in the extraction helps to increase the amount of recombinant protein recovered from the transgenic plants. A phenolic-binding agent such as polyvinylpolypyrrolidone may also be included to decrease the interference caused by the phenolic compounds in the extract (Holler et al., 2007).
Extraction
The first step is the extraction of the recombinant protein from the transgenic plant material. This is done using an extraction buffer made of 50 mM sodium phosphate, 10 mM 2-mercaptoethanol, and 1 mM EDTA at pH 7.0. This forms part of the aqueous two-phase extraction (ATPE) method, a powerful and versatile technique that has been used to facilitate biocatalytic reactions (Spiess et al., 2008).
Homogenization
Homogenization is then carried out until the sample contains no large particles. Pre-hydrated polyvinylpolypyrrolidone (2% w/v) is then immediately added to the mixture, which is centrifuged at 17,000 × g and left to stand for about fifteen minutes at room temperature and pressure. The supernatant is then removed and filtered through a syringe.
Polyethyleneimine (PEI) precipitation
Polyelectrolyte precipitation then follows, whereby polyethyleneimine is diluted to 10 mg/ml in deionized water and adjusted to pH 7.0. This is added to the extract in a ratio of 800 mg of PEI per total protein extracted. Upon addition of the polyelectrolyte, the samples are mixed vigorously for about 15 seconds and allowed to stand at room temperature and pressure for about half an hour. After this, they are centrifuged for about 20 minutes at 17,000 × g and the supernatant removed for analysis.
The pellets are then resuspended in a buffer consisting of 50 mM NaPi at pH 7, 10 mM BME, 1 mM EDTA, and 0.5 M NaCl. Sonication is then carried out for about 5 seconds, followed by centrifugation at 17,000 × g for 10 minutes. The supernatant is removed and the samples are centrifuged again at 16,000 × g for 10 minutes.
HIC Chromatography
Chromatography of the supernatants then follows as the next step in the purification process. Phenyl Sepharose Fast Flow (low substitution) is used. The column is packed to approximately 5.1 cm bed height with an equilibrating buffer comprising 50 mM NaPi at pH 7.0 and 1.5 mM ammonium sulphate. The proteins are eluted in a linear gradient and aliquots of 2 mL are collected.
Ceramic Hydroxyapatite Chromatography
Ceramic hydroxyapatite is then packed into a column as a slurry to a bed height of about 10 cm, with 40 mM NaPi at pH 6.8 as the equilibrating buffer. The proteins are eluted in a linear gradient of the eluting buffer; 1 mL fractions are collected per minute and pooled for concentration.
Size exclusion chromatography can also be employed to separate the recovered proteins, since they have different molecular weights and sizes. The main native protein from the tobacco plant, Rubisco, consists of two subunits, a smaller one of about 13 kDa and a larger one of about 55 kDa, whereas the recombinant β-glucuronidase protein is much larger, at about 68 kDa.
Rubisco is basically an acidic protein with an isoelectric point of about 6.0. This property creates a challenge in the purification and separation procedure, because the recombinant β-glucuronidase protein is also acidic in nature, and separation processes such as ion-exchange chromatography are therefore ineffective for the purification.
The Purification Process
The purification process after extraction mainly incorporates three stages, namely polyethyleneimine precipitation, HIC chromatography, and ceramic hydroxyapatite chromatography. In the first stage, the polyethyleneimine binds the nucleic acids and the alkaloids that may have been extracted together with the recombinant β-glucuronidase protein. This stage is carried out at pH 7 in order to prevent the destruction of the extracted proteins.
The HIC stage mainly functions to remove the PEI and any nucleic acids that may have passed through the first stage; this is made possible by the phenyl Sepharose binding the PEI and the nucleic acids present at pH 7.0. The last stage is essentially a polishing stage, where the eluted β-glucuronidase is isolated in a much purer form with higher activity.
Conclusion
From the above process, the acidic recombinant β-glucuronidase protein can be recovered and efficiently purified from the transgenic tobacco plant extracts. The PVPP and BME serve to eliminate impurities in the extract so that adsorption in the subsequent chromatographic stages is increased; this also increases the yield of the recombinant β-glucuronidase protein from the extract. The polyethyleneimine precipitation phase serves to eliminate the initial impurities such as nucleic acids and alkaloids.
The HIC step is effective in removing the PEI and nucleic acids remaining from the first step in the purification process, and the HAC step acts as the polishing stage. In this process, the initial β-glucuronidase protein can be recovered at approximately 41% purity. Ceramic hydroxyapatite (HA) resin is chosen for its unique ability to bind acidic proteins.
Hence, it is employed in the removal of the acidic proteins by binding them and eliminating them from the mixture; these can then be removed from the extract/supernatant through the fractionating column. Furthermore, it has the potential to scale up the purification process. The HA resin, equilibrated mainly in 10 mM NaPi, requires a low-salt sample to bind the proteins.
The purity of the recovered protein can be assayed by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Here, the products are run on an SDS-PAGE gel, stained with Coomassie Brilliant Blue, and later assayed by silver staining.
References
Fischer, R. and Emans, N. (2000). Molecular Farming of Pharmaceutical Proteins. Transgenic Research, 9, pp. 279-299.
Fischer, R., Stoger, E., Schillberg, S., Christou, P. and Twyman, R.M. (2004). Plant-based Production of Biopharmaceuticals. Current Opinion in Plant Biology, 7, pp. 17.
Holler, C., Vaughan, D. and Zhang, C. (2007). Polyethyleneimine Precipitation versus Anion Exchange Chromatography in Fractionating Recombinant β-glucuronidase from Transgenic Tobacco Extract. Journal of Chromatography A, 1142(1), pp. 98-105.
Larrick, J.W., Yua, L., Naftzgera, C., Jaiswala, S. and Wycoffa, K. (2001). Production of Secretory IgA Antibodies in Plants. Biomolecular Engineering, 18(3), pp. 87-94.
Menkhaus, T. J., Bai, Y., Zhang, C., Nikolov, Z. L. and Glatz, C. E. (2004). Considerations for the Recovery of Recombinant Proteins from Plants. Biotechnology Progress, 20(4), pp. 1001-1014.
Spiess, A.C., Eberhard, W., Peters, M., Eckstein, M.F., Greiner, L. and Büchs, J. (2008). Prediction of Partition Coefficients using COSMO-RS: Solvent Screening for Maximum Conversion in Biocatalytic Two-phase Reaction Systems. Chemical Engineering and Processing, 47, pp. 1034-1041.
Chadha et al. (1982) state that human interferon alpha is a protein that is biosynthesized and secreted by lymphocytes under pathogenic stimulus. Allen and Diaz (1996, pp. 182-3) argue that the main stimuli that trigger the secretion of human interferon alpha are viruses, bacteria, parasites, and tumor cells. Spiegel (1989, pp. 76-77) states that human interferon alpha provides a basis for communication between leucocytes and hence facilitates immunological functions. Allen and Diaz (1996, pp. 181-184) classify interferon alpha as belonging to a group of specialized glycoproteins termed cytokines. Ozato et al. (2007) note that glycoproteins consist of two types of macromolecule, carbohydrate and protein. Allen and Diaz (1996, p. 181) show that the nomenclature of interferon alpha is based on its function of interfering with the replication or biological processes of pathogens. Other biological roles of interferon alpha include activation of the immune system (Ozato et al., 2007) and stabilization of healthy cells so that they can resist pathogenic infection.
Interferon cellular production
Berg et al. (1982, pp. 23-6) state that as a pathogenically infected cell undergoes lysis or death under the influence of cytolytic pathogens, it releases pathogenic particles that catalyze the progression and propagation of the infection. Chadha and Sulkowski's (1985) argument, however, is based on the assumption that the affected cell signals other healthy cells by secreting or releasing interferon alpha, as documented by Klaus et al. (1997). The healthy neighboring cells, through a negative feedback process as claimed by Isaacs and Lindenmann (1957, 1987), secrete a counter-enzyme termed Protein Kinase R (PKR), whose primary biological role is to phosphorylate a protein termed eIF-2 (Chadha et al., 1982).
According to Chadha and Sulkowski (1985, pp. 45-51), eIF-2 is responsible for eukaryotic translation initiation; its phosphorylation results in the formation of an inactive complex with eIF-2B, which decreases protein biosynthesis and hence interferes with the biochemical processes that would support the bioactivity of the pathogen. Allen and Diaz (1996, p. 182) state that another enzyme, termed RNase L, initiates destruction of the infected cell, which can occur through lysis or by forcing the cell to commit suicide through apoptosis, terminating protein biosynthesis in both the pathogen and the pathogen's host cell. Interferon alpha has been documented in the literature (Chadha et al., 1982; Berg et al., 1982; Isaacs and Lindenmann, 1987) to initiate the production of specific proteins known as interferon-stimulated genes (ISGs), which have pathogen receptors and destroy the pathogen. Interferon alpha, according to Spiegel (1989, pp. 76-7), also increases p53 activity, which reduces the spread of the pathogen and induces cell apoptosis.
Aims of the essay
This essay reports on the purification and isolation of the human interferon alpha protein. It seeks to describe the first method of purification and isolation of interferon alpha that was used, to explain the biochemistry of the separation steps involved, and to present a flow chart of the initial purification and isolation procedure. The essay also reports on successful and unsuccessful methods that were adopted, together with the rationale for adopting them, and on the processes used to precipitate the interferon alpha (salting out) and the methods of chromatography that were applied.
The discovery of the human interferon alpha protein
Allen and Diaz (1996) and Berg et al. (1982) note that two accounts have been put forward of how human interferon alpha was discovered.
First proposed method towards discovery of interferon alpha protein
In the literature (Isaacs & Lindenmann, 1987, pp. 429-438), interferon alpha is reported to have been discovered in 1957 in chickens by Alick Isaacs and Jean Lindenmann, who were attached to the National Institute for Medical Research in London. The two virologists observed an interference effect of heat-inactivated influenza virus (Allen & Diaz, 1996) on live influenza virus cultured on the chorioallantoic membrane of a chicken egg (Berg et al., 1982). The results of the discovery were published in 1957, and because of this interference effect Isaacs and Lindenmann named the protein interferon (Isaacs & Lindenmann, 1987). Allen and Diaz (1996) identify interferon alpha as a Type I interferon. Follow-up studies at the National Institute for Medical Research (1978) determined that human beings produce interferon alpha through the same biochemical mechanism. The first isolation and purification of human interferon alpha was carried out in 1981. Increased research on the biochemistry of Type I interferon (Allen & Diaz, 1996) between 1978 and 1981 resulted in the determination of a protocol for the purification and isolation of Type I interferon alpha and Type I interferon beta (Klaus et al., 1997).
Second proposed account of the discovery of the interferon alpha protein
The second proposed discovery of interferon alpha has been reported to have occurred before Isaacs and Lindenmann discovered interferon alpha in 1957 (Isaacs & Lindenmann, 1957). The discovery was pioneered by Yasu-Ichi Nagano and Yasuhiko Kojima, Japanese virologists, in 1954 (Nagano & Kojima, 1954). The two were based at the Institute for Infectious Diseases at the University of Tokyo. They were working on a vaccine for smallpox when they observed inhibition of viral growth in an area of rabbit skin that had been inoculated with ultraviolet-inactivated virus (Isaacs & Lindenmann, 1957).
Nagano and Kojima (1954) concluded that there was a virus-inhibiting factor in the rabbit tissue, and Nagano and Kojima (1958) conducted a series of studies towards isolation, purification and characterization of this factor. Their findings were published in a French journal, the Journal de la Société de Biologie, in 1954 (Nagano & Kojima, 1954). The Japanese virologists carried out further studies and determined that the activity of the virus-inhibiting factor lasted 1-4 days and was independent of antibody production. These findings, establishing that the virus-inhibiting factor had no relationship with antibody production (Nagano & Kojima, 1954; 1958), were published in 1958 (Nagano & Kojima, 1958), a year after Isaacs and Lindenmann (1957; 1987) published their own findings and named the virus-inhibiting factor interferon.
How interferon alpha was initially purified
Interferon alpha was isolated and purified through centrifugation and reverse-phase high-performance liquid chromatography (RP-HPLC) (Isaacs & Lindenmann, 1957). The procedure involved loading the crude interferon alpha onto a glass adsorbent chromatography column, followed by eluting the interferon alpha from the glass adsorbent using a standardized hydrophobic electrolyte with a pH range of 7-8.5. The interferon alpha eluate was then passed through a molecular sieving chromatography column, which gave a resolution range of 10,000 to 100,000 IU (Nagano & Kojima, 1954; 1958). This was repeated with a different hydrophobic electrolyte until viruses were eliminated. Finally, the eluate was applied to a Zinc(II) chelate resin, which resulted in the collection of a non-adsorbed eluate containing purified interferon alpha.
Discovery route for interferon alpha
Detailed protocol for purification as a flow chart.
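Since the flow chart itself is not reproduced here, the following is a minimal sketch in Python (not the original chart) that encodes, as an ordered pipeline, the purification steps described in the sections that follow; the step names are summaries rather than the original authors' wording.

```python
# A minimal sketch (not the original flow chart): the purification steps
# described in the sections below, encoded as an ordered pipeline.
PURIFICATION_STEPS = [
    "Cool the crude interferon alpha preparation to 4 degC",
    "Batch adsorption onto silicic acid (20 mM phosphate buffer, pH 7.4)",
    "Wash the packed column until the absorbance at 280 nm returns to background",
    "Elute with Tris-HCl / NaCl / TMAC / 10% propylene glycol, pH 8.0",
    "Inactivate viruses with 0.1% non-ionic detergent (e.g. TRITON X-100)",
    "Ultrafiltration: retain the 10,000-100,000 molecular-weight window",
    "Molecular sieving on SEPHACRYL S-200",
    "Zinc(II) chelate agarose chromatography",
    "Buffer exchange/concentration and stabilization with human serum albumin",
    "Sterile filtration through a 0.22 um filter",
]

def print_flow_chart(steps):
    """Print the steps as a simple text flow chart, one arrow per transition."""
    for i, step in enumerate(steps, start=1):
        print(f"{i}. {step}")
        if i < len(steps):
            print("   |\n   v")

if __name__ == "__main__":
    print_flow_chart(PURIFICATION_STEPS)
```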
Unsuccessful methods for purification
The procedure for purification of interferon alpha that proved unsuccessful
A batch of crude interferon alpha determined to contain 10,000 IU in terms of MDBK cells was used for batch adsorption with 15 ml of silicic acid that had been equilibrated with 20.00 mM phosphate buffer adjusted to pH 7.4 (Isaacs & Lindenmann, 1957). The adsorption was allowed to progress for 90 minutes at a temperature of 4°C while the mixture was stirred. The resulting gel was packed into a glass column measuring 0.9 cm by 25 cm and immediately washed with 100 ml of 20.00 mM phosphate buffer adjusted to pH 7.4 (Nagano & Kojima, 1954; 1958). Interferon activity was recovered from the silicic acid using 100.00 mM Tris-HCl adjusted to pH 8.0, which yielded 20% of the interferon alpha activity, followed by 100.00 mM Tris-HCl containing 0.5 M TMAC, 0.5 M sodium chloride and 10% propylene glycol, adjusted to pH 8.0, which yielded 70% of the interferon activity (Isaacs & Lindenmann, 1957). The remaining 10% of activity, however, was detected in the non-adsorbed fraction of the silicic acid, which was attributed to the presence of Sendai virus residuals (Hauschild et al, 2008).
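As a quick check of the activity balance quoted above, the following minimal Python sketch applies the stated recovery fractions (20%, 70% and 10%) to the 10,000 IU batch; the IU figures per fraction are derived for illustration only.

```python
# Hypothetical activity balance for the batch described above (10,000 IU),
# using the recovery fractions quoted in the text.
total_iu = 10_000

recoveries = {
    "Tris-HCl (pH 8.0) eluate": 0.20,                        # 20% of activity
    "Tris-HCl + TMAC/NaCl/propylene glycol eluate": 0.70,    # 70% of activity
    "Non-adsorbed fraction (Sendai virus residuals)": 0.10,  # 10% remaining
}

for fraction, share in recoveries.items():
    print(f"{fraction}: {share * total_iu:,.0f} IU ({share:.0%})")

# Sanity check: the quoted fractions should account for all of the loaded activity.
assert abs(sum(recoveries.values()) - 1.0) < 1e-9
```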
Successful methods for purification
The procedure for purification of interferon alpha that proved successful
Step one: silicic acid adsorption
A sample determined to contain interferon alpha is taken in its crude state and cooled to 4°C before purification procedures are started (Isaacs & Lindenmann, 1957; Nagano & Kojima, 1954). The step is a batch operation, as claimed by Isaacs and Lindenmann (1957). After the crude interferon alpha is cooled, it is mixed with silica gel (Liu, 2005); the preferred form is silicic acid. Alternatively, the crude cooled interferon alpha is introduced into controlled pore glass (CPG), as claimed by Isaacs and Lindenmann (1957). The CPG should first be activated with 20.00 mM sodium phosphate buffer at pH 7.4 (Nagano & Kojima, 1954). The ratio of silicic acid to the interferon alpha culture solution should be within the range of 1:10 to 1:50; in the literature, a ratio of 1:30 has been proposed as adequate, as documented by Isaacs and Lindenmann (1957) and Nagano and Kojima (1954; 1958). The mixture should be stirred at low speed for about 60 to 90 minutes, with the adsorption temperature held at 4°C (Isaacs & Lindenmann, 1957). After 60-90 minutes, the gel should be transferred into a glass column, as claimed by Isaacs and Lindenmann (1957). In the glass column, the gel should be washed with 20.00 mM phosphate buffer adjusted to pH 7.4 (Sen, 2001), and the washing should be continued until the absorbance at 280 nm reaches background level.
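To make the adsorbent-to-culture ratio concrete, the following is a minimal sketch assuming a hypothetical culture volume of 450 ml; only the 1:10 to 1:50 range and the recommended 1:30 ratio come from the text.

```python
def silicic_acid_volume(culture_volume_ml: float, ratio: float = 30.0) -> float:
    """Volume of silicic acid for a given crude culture volume, using a
    1:ratio (adsorbent:culture) proportion; 1:30 is the ratio quoted above."""
    if not 10.0 <= ratio <= 50.0:
        raise ValueError("ratio outside the 1:10 to 1:50 range quoted in the text")
    return culture_volume_ml / ratio

# Hypothetical example: 450 ml of crude culture at the recommended 1:30 ratio.
print(f"{silicic_acid_volume(450.0):.1f} ml of silicic acid")  # 15.0 ml
```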
Elution of the interferon alpha
The elution of interferon alpha should be conducted using a buffer that contains 100.00 mM Tris-HCl, 0.5 M sodium chloride, 0.5 M tetramethylammonium chloride (TMAC) and 10% propylene glycol, with the pH standardized at 8.0 (Isaacs & Lindenmann, 1957). Optimization of the elution buffer, as claimed by Isaacs and Lindenmann (1957), provides dissociating conditions that reduce the formation of complexes and hence give a higher yield of interferon alpha across species (Nagano & Kojima, 1958). The elution of interferon alpha results in a five-fold concentration and removal of up to 93% of proteins, as documented by Isaacs and Lindenmann (1957).
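For illustration, the stated buffer composition can be converted into the approximate masses needed per litre; this is a sketch only, and the molar masses are rounded textbook values that are not taken from the essay.

```python
# Approximate masses needed to prepare 1 L of the elution buffer described above.
# Molar masses are rounded textbook values, not figures from the essay.
components = {
    # name: (concentration in mol/L, molar mass in g/mol)
    "Tris-HCl": (0.100, 157.6),
    "Sodium chloride": (0.5, 58.44),
    "Tetramethylammonium chloride (TMAC)": (0.5, 109.6),
}

volume_l = 1.0
for name, (molarity, molar_mass) in components.items():
    grams = molarity * molar_mass * volume_l
    print(f"{name}: {grams:.2f} g per {volume_l:.0f} L")

# Propylene glycol is then added to 10% (v/v), i.e. 100 ml per litre,
# and the pH is adjusted to 8.0.
```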
Inactivation of the virus
This step was meant to achieve virus removal, virus inactivation and concentration (Sen, 2001).
Virus inactivation was carried out through ultrafiltration, and the concentration was determined (Nagano & Kojima, 1954). It was performed by mixing the eluate containing interferon alpha activity with the non-ionic detergent TRITON X-100 to achieve a concentration of 0.1% TRITON X-100. In the absence of TRITON X-100, other non-ionic detergents can be used, for instance TWEEN-20 and NONIDET P-40, as claimed by Isaacs and Lindenmann (1957). Gentle stirring should be carried out continuously for thirty minutes at a temperature of 4°C, which results in the destruction of any viruses present (Nagano & Kojima, 1958). After incubation, the product should be transferred onto a Millipore cassette that precludes the flow of proteins or compounds with a molecular weight in excess of 100,000 (Isaacs & Lindenmann, 1957).
The resulting ultrafiltrate contains proteins with a molecular weight of less than 100,000; it then undergoes repeated ultrafiltration, with stirring, through a system whose membrane pore sizes preclude the flow of materials with a molecular weight of less than 10,000 (Isaacs & Lindenmann, 1957). This step is repeated until the right concentration is achieved. The aim of the repeated ultrafiltration is to ensure that all viruses are removed in case some were resistant to the non-ionic detergents (Nagano & Kojima, 1954; 1958; Isaacs & Lindenmann, 1957). The final concentrate consists of proteins with molecular weights between 10,000 and 100,000, which proceed to the next phase, molecular sieving.
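The molecular-weight windowing logic of the two ultrafiltration steps can be sketched as follows; the species and their molecular weights are illustrative assumptions rather than data from the essay (interferon alpha itself is roughly 19 kDa).

```python
# Sketch of the molecular-weight window applied by the two ultrafiltration steps:
# species above 100,000 are held back by the first membrane and discarded, species
# below 10,000 pass the second membrane and are lost, and the rest stay in the concentrate.
LOWER_CUTOFF = 10_000
UPPER_CUTOFF = 100_000

def kept_in_concentrate(molecular_weight: float) -> bool:
    return LOWER_CUTOFF <= molecular_weight <= UPPER_CUTOFF

# Hypothetical species, for illustration only.
species = {"interferon alpha": 19_000, "small peptide": 4_000, "IgG": 150_000}
for name, mw in species.items():
    status = "kept" if kept_in_concentrate(mw) else "removed"
    print(f"{name} ({mw:,} Da): {status}")
```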
Molecular Sieving
The concentrated material containing proteins with molecular weights between 10,000 and 100,000 was loaded onto a 12 cm by 180 cm column impregnated with SEPHACRYL S-200 (Isaacs & Lindenmann, 1957). In the second trial, sieving beads with a resolution range of 10,000 to 100,000 molecular weight were used. The molecular sieving was conducted at a temperature of 4°C in the presence of 10 mM phosphate buffer, 0.5 M sodium chloride and 10% glycol at a standard pH of 7.4 (Hauschild et al, 2008; Isaacs & Lindenmann, 1957). The proteins were eluted according to their molecular weight, and the different fractions obtained were transferred to Zinc(II) chelate agarose chromatography (Isaacs & Lindenmann, 1957).
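As a rough illustration of the scale of this step, the bed volume of the stated column can be estimated as follows, assuming that 12 cm refers to the internal diameter and 180 cm to the bed height.

```python
import math

# Approximate bed volume of the SEPHACRYL S-200 column described above,
# assuming 12 cm is the internal diameter and 180 cm the bed height.
diameter_cm = 12.0
height_cm = 180.0

bed_volume_ml = math.pi * (diameter_cm / 2) ** 2 * height_cm  # 1 cm^3 = 1 ml
print(f"Bed volume: {bed_volume_ml / 1000:.1f} L")  # roughly 20 L
```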
Zinc Chelate Chromatography
The product of the molecular sieving column was applied to a column impregnated with a chelating matrix (Isaacs & Lindenmann, 1957). The zinc chelate chromatography used a Zinc(II) chelate agarose. After loading of the combined product from the molecular sieving stage, Isaacs and Lindenmann (1957) claim, the column containing the chelating matrix was washed with 10 mM phosphate buffer containing 0.5 M sodium chloride, adjusted to pH 7.4. The procedure was carried out at a temperature of 4°C (Navratil et al, 2010).
Buffer Exchange/Concentration
Isaacs and Lindenmann (1957) claim that the non-adsorbed material is concentrated and the buffer is exchanged to 20 mM phosphate buffer at pH 7.4. This is carried out by ultrafiltration using a membrane that is impermeable to proteins with a molecular weight above 10,000 (Fensterl & Sen, 2009). The ultrafiltration is continued until complete buffer exchange is achieved, as determined by an interferon concentration of about 100,000 IU (Isaacs & Lindenmann, 1957). This is followed by the addition of human serum albumin, which is used as a stabilizing agent for the interferon alpha; the human serum albumin concentration should be within the range of 1 milligram to 10 milligrams per milliliter (Nagano & Kojima, 1954; 1958). The stabilized product should be filtered through a 0.22 µm filter (Liu, 2005). It should be noted that the filters should be primed with 0.5% human serum albumin before sterilization, which ensures that the interferon alpha does not lose its activity (Isaacs & Lindenmann, 1957; Liu, 2005).
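For illustration, the amount of human serum albumin required to stabilize a batch at a chosen concentration within the stated 1-10 mg/ml range can be sketched as follows; the batch volume and target concentration are hypothetical examples.

```python
def hsa_required_mg(batch_volume_ml: float, target_mg_per_ml: float) -> float:
    """Milligrams of human serum albumin needed to stabilize a batch at the
    chosen concentration (the text quotes a range of 1-10 mg/ml)."""
    if not 1.0 <= target_mg_per_ml <= 10.0:
        raise ValueError("target concentration outside the quoted 1-10 mg/ml range")
    return batch_volume_ml * target_mg_per_ml

# Hypothetical example: a 200 ml batch stabilized at 5 mg/ml.
print(f"{hsa_required_mg(200.0, 5.0):.0f} mg of human serum albumin")  # 1000 mg
```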
Diseases that are managed by interferon alpha
Some of the diseases that are treated using interferon alpha include hairy cell leukemia, genital warts, AIDS-related Kaposi's sarcoma, non-A non-B hepatitis, and hepatitis B (Fensterl & Sen, 2009).
Diseases caused by deficiencies in enzymes
Navratil et al (2010) claim that deficiencies or disorders in enzymes result in diseases characterized by lysosomal storage defects, for instance Sly syndrome, which is MPS VII and is characterized by a lysosomal storage defect. Lysosomal storage diseases have been documented in the literature to be associated with the functionality of glycosaminoglycans (Fensterl & Sen, 2009). The disorders are characterized by accumulation of glucuronic acid-containing glycosaminoglycans (dermatan sulfate, heparan sulfate and chondroitin 4- and 6-sulfates). Other types of MPS include MPS I H or Hurler syndrome, MPS I S or Scheie syndrome, MPS I H-S or Hurler-Scheie syndrome, MPS II or Hunter syndrome, MPS III or Sanfilippo syndrome, MPS IV or Morquio syndrome, MPS VI or Maroteaux-Lamy syndrome, and MPS VII or Sly syndrome. MPS III or Sanfilippo syndrome has several subtypes, for instance Sanfilippo B, caused by a deficiency of the enzyme alpha-N-acetylglucosaminidase; Sanfilippo C, caused by an altered acetyl-CoA:alpha-glucosaminide acetyltransferase; and Sanfilippo D, caused by a deficiency of the enzyme N-acetylglucosamine 6-sulfatase.
References
Allen G, Diaz MO. (1996) Nomenclature of the human interferon proteins. J Interferon Cytokine Res; 16: 181–184.
Berg et al. (1982) Purification and Characterization of the HuIFN-a Species. Texas Reports on Biology and Medicine, vol. 41.
Chadha and Sulkowski (1985) Production and Purification of Natural Human Leukocyte Interferons. The Interferon System: A Current Review (New York), Kruzel et al., Oct.
Chadha et al. (1982) Adsorption of Human Alpha (Leukocyte) Interferon on Glass: Contributions of Electrostatic and Hydrophobic Forces. Journal of Interferon Research, vol. 2, no. 2.
Fensterl V, Sen GC (2009). Interferons and viral infections. Biofactors 35 (1): 14–20.
Hauschild A, Gogas H, Tarhini A, Middleton M, Testori A, Dréno B, Kirkwood J (Mar 2008). Practical guidelines for the management of interferon-alpha-2b side effects in patients receiving adjuvant treatment for melanoma: expert opinion. Cancer 112 (5): 982–994.
Isaacs A, Lindenmann J (September 1957). Virus interference. I. The interferon. Proc. R. Soc. Lond., B, Biol. Sci. 147 (927): 258–67.
Isaacs A, Lindenmann J (1987). Virus interference. I. The interferon. J Interferon Res; 7: 429–438.
Klaus W, Gsell B, Labhardt AM, Wipf B, Senn H (1997). The three-dimensional high resolution structure of human interferon alpha-2a determined by heteronuclear NMR spectroscopy in solution. J Mol Biol; 274: 661–675.
Liu YJ (2005). IPC: professional type 1 interferon-producing cells and plasmacytoid dendritic cell precursors. Annu Rev Immunol 23: 275–306.
Nagano Y, Kojima Y (1958). Inhibition de l'infection vaccinale par un facteur liquide dans le tissu infecté par le virus homologue (in French). C. R. Seances Soc. Biol. Fil. 152 (11): 1627–9.
Nagano Y, Kojima Y (October 1954). Pouvoir immunisant du virus vaccinal inactivé par des rayons ultraviolets (in French). C. R. Seances Soc. Biol. Fil. 148 (19-20): 1700–2.
Navratil V, de Chassey B, et al. (2010). Systems-level comparison of protein-protein interactions between viruses and the human type I interferon system network. Journal of Proteome Research 9 (7): 3527–36.
Ozato K, Uno K, Iwakura Y (May 2007). Another road to interferon: Yasuichi Nagano's journey. J. Interferon Cytokine Res. 27 (5): 349–52.
Sen GC (2001). Viruses and interferons. Annu. Rev. Microbiol. 55: 255–81.
This paper provides guidelines on how to develop a model that suits different geological formations for disposing of nuclear waste products. It includes an analysis of different situations that provide a geological understanding of the potential sites, using different software tools to conduct the study, and it uses overall work packages for automated analysis of the tools and processes used to develop the model. The paper is divided into six sections, each of which makes a significant contribution to the modeling process by providing a detailed description of the model and the underpinning procedures for creating radioactive waste disposal zones. The introduction section is characterized by a systematic and logical sequence of procedures necessary for data collection, processing, interpretation, and integration into the model. The module also provides a communication framework and identifiable best practices for sharing data with the stakeholders responsible for taking the necessary actions, including the government, to protect the environment from the effects of radioactive wastes by formulating policies for the disposal of waste products. In addition, the government formulates the necessary legal framework used to identify, model, and prepare the potential sites based on knowledge and data on the geological formation of the site.
Objectives
Miller, Chapman, McKinley, Alexander, and Smellie (2011) reviewed the research objectives that were used as the framework for the report by comparing the report with recent findings in the area of study. The results of the review showed that the report is up-to-date and that the objectives were well articulated. However, Miller et al. (2011) observed that the research paper should have been organized more logically to avoid repeating the objectives. Miller et al. (2011) affirmed that the paper meets the minimum standard requirements for developing the model by categorizing the modeling into three key components, namely geology, hydrogeology, and hydrochemistry, to define the characteristics of the rocks that were used to generate the modeling data.
Approach
A critical review of the paper leads to the conclusion that studies by different researchers on modeling geological sites for the disposal of radioactive wastes agree with the methodology used in this paper. Most of the methodologies used to model a rock repository evaluate different categories of rocks before the disposal site is prepared, especially those considered by the RWMD in the UK. Here, the three core characteristics of the rocks identified as potential candidates for waste disposal sites are the higher strength rocks found in the geological formations of the UK, the behavior of the rocks with respect to the movement of fluids and other liquid wastes stored in them, and the safe storage and disposal of waste materials. The study shows that the article investigated the potential of lower strength sedimentary rocks for storing radioactive wastes and their consistency with the geological formation of rocks in the UK.
Some of the elements used in the model, known as evaporites, were established to originate from salts and other hydrates, which contaminate stationary or flowing underground water, increasing the vulnerability of water users to harmful effects on health and the environment. An overall rating of the paper against previous research in the same area leads to the conclusion that it meets the threshold required of a sufficiently researched paper on the most appropriate tools and procedures for the UK to apply when modeling waste disposal sites. The key areas of investigation, based on the research by different authors, have been exhaustively discussed in the paper.
Originality
The author tried to make the paper original. However, there is evidence of heavy borrowing from different authors, institutions and other sources in writing the paper (Chapman & McKinley, 1987). It is clear that Chapman and McKinley (1987) considered different types of geological structures as storage areas for different waste products from different countries. However, such information was used as best practice for the application, interpretation and modeling of geological information using data generated from studies conducted on the topographical, geomorphological and geological characteristics of the areas of study.
Article Review
Analytical summary
Analytically, the investigation was based on the UK's geological background and on the policies and laws that govern the identification of areas for the disposal of nuclear wastes, to ensure that they are consistent with best practices for storing nuclear wastes. The study is in agreement with most of the low-level radioactive waste (LLW) methods and packages that were part of the waste disposal program. Previous research factored in the use of deep geological disposal methods, and another study considered the use of land-based disposal methods. However, it is fair to note that the present modeling includes both HLW (high-level waste) and LLW, because these disposal approaches have been widely recommended in many countries. It is clear that the location of geological faults and repositories modeled into the construction of deep and shallow waste disposal sites should be investigated to understand the geological characteristics of disposal sites that could be developed in such areas. By conducting a feasibility study before developing the waste disposal sites, the methodology is in agreement with the approaches that have been investigated and practiced in many countries, including the USA.
The geological surveys of these countries provide sufficient data about the caverns and the chemical composition of the soils and rocks to map and model shallow and deep waste disposal sites. For instance, France provides a model for waste disposal under clay soils. Sweden's model is based on vertical or horizontal boreholes drilled underground to create galleries 450 meters deep; it relies on a detailed site descriptive model, based on an environmental impact assessment, that provides an analytical report on the thermal properties, rock mechanics, hydrogeology, hydro-geochemistry and bedrock transport properties, together with a description of the surface system. Finland, on the other hand, uses vertical disposal boreholes at a depth of 600 meters, while other models use deeper underground tubes or channels drilled below the ground. The general concept is to use a deep underground storage site that provides safer storage of the waste materials.
Appropriateness of the article
The article provides a synthesis of different models, with empirical evidence of the tools that have been used in different situations, such as tools for determining the geological nature of the rock formations being worked on. A criticism of the methods is that most of the tools and assessment methods relied on tools that were available in 1981 and 1997. However, the long-term use of the methods and the evidence of their suitability make the approach proposed by Chapman and McKinley (1987) suitable for modeling the different components used to investigate the hydrogeology, hydrochemistry and engineering of suitable methods for modeling the disposal of waste materials in the UK.
The general mapping of the region, followed by the district, the site, and the Potential Repository Zone (PRZ), facilitates the process of creating a logical model for mapping details of the limited surface exposure of the land using the visible surface. It is evident in the model that the boreholes and mine plans were investigated using limited geological data based on an interpretation of the geophysical data. The main problem here was that the results were represented in a 2D model that fails to meet the criteria for combining different data sets into a complete model. However, further investigation shows that the model was later revised to create a more robust model that defines every aspect of the mapping and data integration process, including the interpretation of the data sets used to draw the conclusions of the study. An investigation by Hadermann and Heer (1996) shows that the topographical investigation was based on the OS 1:50,000 topographic data, which is sufficient to explain the standard application of the measures necessary to ensure correct data was used. In addition, the model provides the true vertical depths that need to be factored into the entire modeling process, because different elements represent different attributes, and one of the most critical attributes is the vertical depth from the surface, which is needed to ensure safe storage of nuclear waste materials.
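As a simple reminder of what the 1:50,000 scale implies for the level of detail available, the following arithmetic sketch converts a map distance to a ground distance; the 1 cm map distance is an arbitrary example.

```python
# Quick arithmetic for the OS 1:50,000 topographic data mentioned above:
# 1 cm on the map corresponds to 50,000 cm = 500 m on the ground.
scale = 50_000
map_distance_cm = 1.0
ground_distance_m = map_distance_cm * scale / 100  # convert cm to m
print(f"{map_distance_cm:.0f} cm on the map = {ground_distance_m:.0f} m on the ground")
```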
However, it is worth noting that the mapping was done using traditional methods; such an approach lacked modern, more sensitive tools and equipment, and the information generated with those tools was deemed not to be up-to-date. However, the mapping was made more accurate by the use of BGS procedures that rely on the quality of the Intergraph software, which uses digital map drafting. It is also evident that some work was done through the Quaternary fieldwork program (Ojovan & Lee, 2013). Overall, it is worth suggesting that mapping needs to be done dynamically, evaluating and updating the modeling process using up-to-date software versions and modern mapping technologies.
Basement rocks
The exploration of the basement rocks is another area that was adequately covered, because the structure and nature of the rocks are critical when evaluating the suitability of the area for disposing of nuclear wastes. A rock formation can be subject to either strong or weak tectonic forces, and the potential for damage to the storage facilities depends on the strength of the rocks to resist the action of those forces. The model shows that the Borrowdale Volcanic Group (BVG) can be exposed to the sea, which makes it a suitable place for the disposal of nuclear wastes because the tectonic forces act towards the sea, where pressure and energy are released, leaving little or no damage to the storage facilities. The article claims that the geological mapping of the western part of the Lake District is in agreement with the argument by Saltelli and Tarantola (2002); it provides rich information about the volcanic faults of the target region and information from the geological survey used to determine the vulnerability of the area to natural forces such as earthquakes.
Here, the key data sets used to determine the suitability of the site include the Gibb Deep Geology Group on-site data and the BGS off-site data, which defined the core logs, the Borehole Televiewer (BHTV) data, and the core photographs. It is evident that the model factored in a deep survey of the rocks and soils to determine the most suitable depths at which to store the waste materials. However, it is also evident that no samples of the rocks or soils were collected for laboratory analysis at this point, and this limitation affects the confidence that can be placed in the data for geological mapping and the drilling of storage boreholes.
On the other hand, the article provides a detailed discussion of the integration of boreholes using data from the GeoQuest workstation. The critical elements considered in this case were the datasets, the lithological logs of seismic velocities, and the fault positions of the rocks. The adequacy of the interpretation and use of the data was evident where the results showed that the fault patterns can be caused by seismic activities from different directions. However, offshore and onshore data integration using data generated from the contour map of average velocity, the various velocity maps, and the seismic maps does not provide sufficient data to allow adequate confidence in using the results to create the boreholes, or to be adequately sure that the mapping is effective for use. The findings agree with the study conducted by Valsala, Roy, Shah, Gabriel, Raj and Venugopal (2009), which holds that data gaps appearing in the case of seismic activities need to be addressed effectively.
The methodology used to evaluate the sedimentary rocks and other rock formations made an adequate representation of the facts, because the process relied on information gathered from research that had been conducted in the same area and covered the three main objectives stated in the study. However, a weakness arises because the software used could not allow for 3D modeling, which is an important component of the study. The strategy of using domain maps provides sufficient data to model the process, but it is recommended that future studies integrate software with 3D capabilities to ensure adequate modeling of the rocks for safe disposal of nuclear wastes.
The geological structure of the Sellafield site was modeled using several software packages that do not, in themselves, provide evidence of their suitability for modeling the geology of different areas. However, evidence of new data based on the modeling software shows the reliability of the tools and gives confidence in the results. In addition, the sources of data are adequate for the model, although there is no description of how the data was obtained for use, or of the degree of confidence and reliability in the modeling process (List, Mirchandani, Turnquist & Zografos, 1991). Despite this weakness, it is evident that the data from different sources were combined to create a 3D model using the VULCAN and earthVision software, because the software uses a triangulated mesh system to address gaps in the data using stratigraphic surfaces. By combining the capabilities of earthVision and VULCAN, it was possible to obtain the right 3D model for the Potential Repository Zone (Rutqvist, Wu, Tsang, & Bodvarsson, 2002). However, the modeling solutions do not provide any detailed study of alternative solutions, such as the use of other software products, and no comparison of data sets modeled on different software products for the potential repository zone.
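To illustrate the idea of building stratigraphic surfaces from scattered borehole picks, the following is a minimal sketch using SciPy's gridded interpolation; it is not the VULCAN or earthVision workflow, and all coordinates and depths are invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

# Minimal sketch (not the VULCAN/earthVision workflow): interpolate scattered
# borehole picks of a stratigraphic horizon onto a regular grid, approximating
# the triangulated-surface approach described above. All values are invented.
borehole_xy = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.0], [1.2, 1.1], [0.6, 0.5]])  # km
horizon_depth_m = np.array([430.0, 455.0, 442.0, 470.0, 450.0])

# Regular grid covering the borehole area.
xi = np.linspace(0.0, 1.2, 50)
yi = np.linspace(0.0, 1.1, 50)
grid_x, grid_y = np.meshgrid(xi, yi)

# Linear interpolation builds a piecewise-planar (triangulated) surface;
# grid points outside the convex hull of the boreholes remain undefined (NaN).
surface = griddata(borehole_xy, horizon_depth_m, (grid_x, grid_y), method="linear")

print(f"Interpolated depth range: {np.nanmin(surface):.0f}-{np.nanmax(surface):.0f} m")
```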
The modeling is rich in content on the mineral deposits that lie within the Potential Repository Zone. The study provides the fracture orientation of the mining sites, and the spatial heterogeneity, spatial variability, and rock mass properties have been discussed in detail to ensure a comprehensive summary of information about the appropriateness of the area of study for waste disposal. However, the study falls short of providing the chemical composition of the substances, their reactive nature when exposed to the material used to make the storage tanks, and how that could affect the safety of the waste materials. Nevertheless, the mention of the use of core samples to conduct in situ tests confirms the dependability of the results that were used in the modeling.
Other areas of study that provided sufficient data for the modeling include an investigation into tectonic effects, drawing on expertise such as the Seismic Hazard Working Party (SHWP) to address hazard management issues because of its expertise on the various effects that earthquakes have had on the rocks. The in-situ stress has been comprehensively covered, and the geotechnical modeling was done using expert knowledge domains (Rutqvist, Wu, Tsang & Bodvarsson, 2002). Other issues covered include the safety of groundwater, the effects of natural and induced changes on water quality, and the techniques that are safe for designing and developing the repository.
Various examples have been used to verify the model by comparing practical findings with empirical data based on practical evidence. The site descriptive models include attributes such as the underground layout, site-specific inputs, and the thermal, geological and environmental impact assessment reports. The integrated geological model that has been applied when creating the site has been factored into the study. In comparison, the site descriptive model (SDM) used for the various sites that were investigated shows that the results were comprehensive and appropriate for use in modeling a new site.
Additional models, such as the local model area (and its corresponding volume), provide a clear approach to modeling a specified local area designated for creating the repository.
Conclusion
In conclusion, the investigation shows that geological modeling to determine the potential characteristics of a nuclear waste disposal zone was conducted in a comprehensive manner that was deemed satisfactory. Several issues, such as the geomorphology of the site identified for the disposal of the wastes, the characteristics of the soils and rocks in which the material could be disposed of, the distribution and nature of the soils and sedimentary rocks, and the software tools used to map the target sites, were covered comprehensively. However, it is evident that some approaches, such as the use of software to examine the topography, geomorphology and geology of the target sites, were sometimes done using a 2D model and data that was not up-to-date, and so failed to update the model effectively. The modeling strategies were consistent with established modeling techniques, and the sources of data were of good quality. On the other hand, the study used several case studies to demonstrate the characteristics and nature of the soils and sedimentary rocks and the nature and scope of the disposal strategies that have been used elsewhere to create the model. The boreholes used to create the storage sites can be very deep; for instance, the 29 deep boreholes in the Sellafield area demonstrate the reliability of the disposal modeling strategies. France provides another excellent model for the deep storage of nuclear wastes in a thick clay (mudstone) bed in the Paris basin that is 500 meters deep; the rock structure is defined by a thick succession of Jurassic strata that is sufficient to ensure compliance with the French policy on nuclear waste disposal. In Sweden, a site descriptive model (SDM) was developed after a large volume of data was collected and rigorously analyzed for the purpose of modeling the disposal site. However, the main weakness identified here is that the article fails to present empirical evidence on how accurate some software tools are in analyzing the data collected from different sites and how applicable the findings are in real-life situations. Some of the sources of data need to be updated to make the study current.
References
Chapman, N. A., & McKinley, I. G. (1987). The geological disposal of nuclear waste. McGraw-Hill, New York.
Hadermann, J., & Heer, W. (1996). The Grimsel (Switzerland) migration experiment: integrating field experiments, laboratory investigations and modelling. Journal of Contaminant Hydrology, 21(1), 87-100.
List, G. F., Mirchandani, P. B., Turnquist, M. A., & Zografos, K. G. (1991). Modeling and analysis for hazardous materials transportation: Risk analysis, routing/scheduling and facility location. Transportation Science, 25(2), 100-114.
Miller, W. M., Chapman, N., McKinley, I., Alexander, R., & Smellie, J. A. T. (2011). Natural analogue studies in the geological disposal of radioactive wastes. Elsevier.
Ojovan, M. I., & Lee, W. E. (2013). An introduction to nuclear waste immobilisation. Newnes.
Rutqvist, J., Wu, Y. S., Tsang, C. F., & Bodvarsson, G. (2002). A modeling approach for analysis of coupled multiphase fluid flow, heat transfer, and deformation in fractured porous rock. International Journal of Rock Mechanics and Mining Sciences, 39(4), 429-442.
Saltelli, A., & Tarantola, S. (2002). On the relative importance of input factors in mathematical models: safety assessment for nuclear waste disposal. Journal of the American Statistical Association, 97(459), 702-709.
Valsala, T. P., Roy, S. C., Shah, J. G., Gabriel, J., Raj, K., & Venugopal, V. (2009). Removal of radioactive caesium from low level radioactive waste (LLW) streams using cobalt ferrocyanide impregnated organic anion exchanger. Journal of Hazardous Materials, 166(2), 1148-1153.
The Novum Organum is one of the best-known philosophical works by Francis Bacon. The book was first published in Latin in 1620. Over time, this work has been translated into several languages in order to spread the theory it offers all over the world and to give other scientists the opportunity to develop Bacon's ideas. Translated from Latin into English, the title of the manuscript means "new instrument." In the 17th century, society was waiting for something new and extraordinary in many spheres of life. Francis Bacon, an English philosopher and scientist, wanted to create something significant for both science and philosophy. The Novum Organum presented a new system of logic that could replace the already known methods of Aristotle's Organon; later that system became known as the Baconian method. It is necessary to admit that The Novum Organum also presented one of the earliest predecessors of the modern scientific method. Bacon's instances of the fingerpost could show the true way by which any question may be decided; nowadays, scientific methods play the same role and take into consideration modern innovations and techniques.
The Idea of The Novum Organum
Francis Bacon was one of the first scientists after Aristotle to make considerable contributions to the development of the scientific method. He chose symbolism to interpret the main idea of investigation and its analysis. In The Novum Organum, he used the symbol of a sailing ship to present the way of his discovery: "The sailing of ships, the movements of animals, the transmission of missiles, are all performed likewise in times which admit (in the aggregate) of measurement" (The Novum Organum). He was eager to create a method for finding out about nature. He did not rely on broad, sweeping theories; the only right way he saw was to unite theory and practice in order to check the reliability of the answer and provide unassailable facts.
"Among Prerogative Instances I will put in the fourteenth place Instances of the Fingerpost, borrowing the term from the fingerposts which are set up where roads part, to indicate the several directions" (The Novum Organum).
With these words, Bacon wanted to underline that any problem or answer can be interpreted in several ways. Depending on how a person comprehends the problem or the answer, he or she may present different answers and argue for their reliability.
The Essence of the Baconian Method and a Scientific Method
Nowadays, the idea of the instances of the fingerpost is almost forgotten. However, it is necessary to underline that the term "scientific method," which many people know and use today, is based on Bacon's idea of the instances of the fingerpost. Bacon specified that nothing was more crucial for the explanation of a phenomenon than a proof or a disproof.
The scientific method is usually based on the idea that any action is fundamental and repeatable. Such an action should be generalized through logical development in order to make the idea incontestable.
The idea of a scientific method lies in the fact that the method chosen for a discovery should be objective rather than subjective. Objectivity helps to avoid biased interpretations of the matter and to draw clear conclusions. The very essence of the scientific method is the use of previously obtained knowledge, or a deep analysis of new information, in order to investigate an event from different perspectives.
The idea of the instances of the fingerpost is almost the same. In order to solve the existing problem, a causal hypothesis should be used. Such a hypothesis needs to be checked through analysis of its evident consequences. In fact, it is not easy to carry out such a process without a certain, even simple, guide. According to Francis Bacon, the instances of the fingerpost stand at the head of this process: "Instances of the fingerpost show the union of one of the natures with the nature in question to be sure and indissoluble, of the other to be varied and separable; and thus the question is decided" (The Novum Organum). According to such a system, the former nature will serve as the necessary cause, and the latter will simply be rejected.
Connection of the Scientific Method to the Instances of the Fingerpost
The scientific method that many scientists use nowadays has much in common with Francis Bacon's idea of the instances of the fingerpost.
A scientific method consists of two major steps: developing a hypothesis and analyzing several experimental studies in order to check the reliability of the hypothesis. Such checking is essentially a retrospective analysis of the ideas connected to the hypothesis. In Bacon's instances of the fingerpost, the idea of checking the reliability of the facts is also one of the major ones: these instances help to find just the right way to check and not to make a single mistake. According to Bacon, it is crucially important to make several exclusions to comprehend the truth of the matter; this is why the scientific method should consist of three different methods: agreement, difference, and concomitant variation. This is what modern scientific methods consist of. Of course, the major idea is expressed in other words today; however, its essence remains the same.
Conclusion
The ideas offered by Francis Bacon in 1620 turn out to be highly relevant to modern science and philosophy. Bacon's works have been translated into several languages; these translations helped many other philosophers and scientists to work on his theories and develop them, taking into account new technologies and circumstances. The Novum Organum, also known as The New Organon or True Directions Concerning the Interpretation of Nature, is one of the best-known works by Francis Bacon. He offered the idea of the instances of the fingerpost, which became the basis of scientific methods. The scientific method consists of several techniques used to investigate a problem with the help of background knowledge or some new, carefully analyzed ideas. One of the bases of any scientific method lies in Bacon's idea of the instances of the fingerpost. His idea implies the use of experiments in order to prove a chosen hypothesis. Several hypotheses may be chosen; in such a case, the methods of agreement and difference should be taken into consideration. Bacon was one of the first philosophers who analyzed these methods and presented their essence in The Novum Organum.
References
Bacon, Francis. The New Organon or True Directions Concerning the Interpretation of Nature. The Constitution Society, 2005.