The Future of Satellites Lies in the Constellations by Nataniel Scharping

The recent article "The future of satellites lies in the constellations" by Nataniel Scharping, published in Astronomy magazine, discusses the opportunities and problems that may arise from launching and operating numerous satellites. The author argues that by 2030, the number of active satellites revolving around the Earth may reach approximately 100 thousand, whereas at present the planet's orbit accounts for about 3 thousand artificial celestial bodies. Such rapid projected growth is mainly explained by planned launches of a large number of satellite constellations. The latter are defined as "groups of dozens or even hundreds of small satellites united in a common task" (Scharping, 2021). For instance, the SpaceX project intends to launch more than 10 thousand satellites to ensure internet access all over the globe. However, Scharping (2021) maintains that despite having a positive impact on humanity, the thousands of satellites would lead to overcrowding of the Earth's lower orbits and, thus, cause damage to its inhabitants and the planet.

The Reasons Behind the Article Choice

There are two main reasons that determined my decision. Firstly, approximately three months ago, I was scrolling through my YouTube feed and, by chance, saw a video explaining how satellite television works. After watching it, I became interested in how satellites actually function. Hence, I found the NASA (2017) article named "What Is a Satellite?" that helped me satisfy my initial curiosity. As a result, when I saw the title of the article under review and the topic it discusses, I was immediately attracted to it. Secondly, I think the problem posed by the author is engaging and thrilling in itself. In my opinion, critical views on progress and development are increasingly engaging for readers. Therefore, in short, my choice was based on individual motivation and the general attractiveness of the topic.

The Article Summary

Scharping (2021) states that two factors serve as a precondition for rapid growth in satellite numbers. One aspect is the active penetration of private companies into the spacecraft industry. The other is the development of CubeSats: miniature satellites, each measuring 10 x 10 x 10 centimeters, with a mass varying from 1 to 10 kg (ESA, 2021). For those reasons, launching new artificial celestial bodies has become both cheaper and faster. On the one hand, CubeSats necessitate less time, money, and material to manufacture. On the other hand, the existence of numerous companies in the market signifies the building of an increased number of rockets that can send significantly more satellites than before.

The main advantage that satellite constellations have over sole satellites is that the former can provide constant, uninterrupted communications and internet access. Normally, when operating at close range to the Earth, traditional satellites have a velocity higher than the rotation speed of the planet. For that reason, they are not able to operate continuously in the active area of transmitters and receivers. In contrast, CubeSat constellations can ensure stable interaction with the ground because one or two satellites from the group are always available in the accessible range.

However, on the flip side, there are a number of problems that such congestion of the planet's lower orbits creates. First of all, an increased number of satellites would interfere with astronomers' observations and research of space. Satellites can block the view by their physical presence or by transmitting a great number of wireless signals into the atmosphere, which negatively affect the work of radio telescopes. Secondly, more satellites mean a higher chance of collision between different bodies. The worst-case scenario predicts the appearance of the so-called Kessler syndrome, in which one accident causes a domino effect and leads to the destruction of all or most of the artificial celestial bodies. Finally, more satellite constellations are associated with increased aluminum in the Earth's upper atmosphere, which damages the planet's ozone layer and, thus, aggravates environmental problems.

The Article Analysis

In my opinion, Scharping (2021) presented quite a balanced paper on the future of satellites. The author first introduced readers to the positive aspects of having more artificial celestial bodies in the planet's lower orbits and then discussed the possible negative impacts of such development. Although all the arguments were clear and persuasive, I think the article lacks a description of possible solutions to the existing problems. For instance, Scharping (2021) does not discuss that CubeSats could bring new opportunities for space observation, nor does the author mention that SpaceX is planning to collect space junk, which may reduce the likelihood of collisions. Of course, the primary aim of the paper was to raise public awareness of the problem, but possible solutions would make the discussion more thorough.

Remaining Questions

As a result of my reading, I have some remaining questions related to the growth in the number of satellites. Firstly, what solutions exist to address the problems that may potentially appear? In particular, it would be interesting to know whether there are alternatives to satellite constellations that can bring the same amount of utility but reduce the overall negative outcomes. Secondly, should countries unite their efforts to coordinate and regulate the number of satellites that each state can have? Space exploration is a new sphere of political competition and, thus, can lead to new problems and tensions. Therefore, I think governments would face conflicts regarding the number of satellites each state can possess. Finally, what impact could worldwide satellite internet have on people in closed countries such as North Korea? In particular, I am interested in whether citizens would be able to access an internet that is not regulated by the government.

References

The European Space Agency. (2021). CubeSats.

NASA. (2017). What is a satellite? 

Scharping, N. (2021). The future of satellites lies in the constellations. Astronomy.

Fossil Fuels Formation and Processing

Fossil fuels are produced from plant and animal deposits. These sources are found in the earth's deep layers and contain carbon and hydrogen, which can be burned for energy (Strand, 2007). Coal is a solid raw material that is formed over an extended period by the decay of land vegetation. When the layers are compacted and heated over time, the deposits are transformed into coal (Sriramoju et al., 2020). Coal is generally extracted in mines. It appears as a combustible black or brownish-black sedimentary rock. Before processing and subsequent refining, it is screened and crushed into small, granule-like pieces (Kumar, 2018). Coal refining includes pre-combustion treatments and processes that change the coal's qualities before it is burned (Kumar, 2018).

Oil is the most widely used fossil fuel. It is a liquid fossil fuel formed from the remains of marine microorganisms deposited on the sea floor. Over many years, the deposits end up in rock and sediment, where oil is trapped in tiny spaces. It can be extracted by means of heavy drilling. This viscous fluid ranges in color from colorless to brownish-black. Crude oil comprises various organic compounds that are converted into products through a refining process (Hsu & Robinson, 2019). Oil refineries rely on three fundamental steps: separation, conversion, and treatment, which turn crude oil into usable products (Hsu & Robinson, 2019).

Natural gas is a relatively new type of fuel source. It is a gaseous fossil fuel that is versatile, abundant, and relatively clean compared with coal and oil. Like oil, it is formed from the remains of marine microorganisms. Natural gas consists primarily of methane (CH4) (Kidnay et al., 2020). It is highly compressed in small volumes deep in the earth's crust and, like oil, is brought to the surface by drilling. Raw natural gas is generally collected from a group of nearby wells. It is first processed in a separator vessel at the collection point to remove free liquid water and natural gas condensate (Kidnay et al., 2020). The condensate is typically then moved to an oil refinery, and the water is treated and discarded as wastewater (Kidnay et al., 2020). The raw gas is then piped to a gas processing plant, where the initial purification step is normally the removal of acid gases. As a result, the processed gas is clearer in color than it was before processing.

References

Hsu, C. S., & Robinson, P. R. (2019). Petroleum processing and refineries. Petroleum Science and Technology, 129-157.

Kidnay, A. J., Parrish, W. R., & McCartney, D. C. (2020). Fundamentals of natural gas processing (3rd ed.). Taylor & Francis Group.

Kumar, O. (2018). Coal processing and utilization. Scitus Academics.

Sriramoju, S. K., Babu, V., Dash, P. S., Majumdar, S., & Shee, D. (2020). Effective utilization of coal processing waste: Separation of low ash clean coal from washery rejects by hydrothermal treatment. Mineral Processing and Extractive Metallurgy Review, 1-17.

Strand, J. (2007). Technology treaties and fossil-fuels extraction. The Energy Journal, 28(4).

Biotechnology: The Protein Separation

Materials and Methods

GST Pull Down

The protein-protein interaction experiment produced the 6×His-USP7 elution fraction, which was carefully transferred into a microfuge tube labeled USP7D. The labeled tube was then stored on ice. GST and GST-EBNA1 peptide columns, which were initially kept in the freezer, were obtained and allowed to thaw at RT for approximately 2 minutes, after which they were placed on ice. The lids and plugs of the peptide-loaded columns were removed and placed on #3 glass test tubes. The columns were washed twice with 1 mL of 1× PBS pH 7.4. The aim of the pull-down experiment was to use chromatography to find out whether there was any interaction or biological activity between the two proteins (Hayes, Flanagan & Jowsey, 2005).

To test whether there was a positive interaction between the proteins 6×His-USP7 and GST-EBNA1, two consecutive experiments were conducted. First, the samples were incubated for ~90 minutes in a fridge. Half of the purified 6×His-USP7 (200 µL) was dialyzed and mixed with 800 µL of 1× PBS pH 7.4. During this step, the SDS-PAGE gel was cast, which took ~2 hours (Reed, Holmes, Weyers & Jones, 2007).

The liquid fractions were collected and put in microfuge tubes labeled GSTE PD-UP and GST PD-UP, which were then stored on ice. The liquid contained the unbound protein. The samples were centrifuged at 1,000 rpm for 1 minute at RT. Samples 7 and 8 were made by separating the liquid into two 20-µL aliquots, which were labeled accordingly. The samples were washed six times. The sixth wash was collected and labeled GST-PDW and GSTE-PDW, that is, samples 9 and 10, respectively.

200 µL of GST elution buffer was added to the samples, after which they were incubated for 10 minutes. The eluates were then transferred into microfuge tubes labeled GSTPD Elt and GSTEPD Elt and placed on ice. These tubes held the eluted fractions from the pull-down extraction and represented samples 11 and 12 for SDS-PAGE analysis. The SDS-PAGE gel was cast and incubated. These samples were labeled GST PD-UP and GSTE PD-UP, respectively.

Table 1

Gel 1       Lane 1    Lane 2    Lane 3    Lane 4    Lane 5          Lane 6
Sample      GSTE-NI   GSTE Lys  GSTE Lys  USP7 Lys  Protein marker  USP7D
Vol. (µL)   24        12        12        20                        24

(Middelberg, 2006)

Table 2

Gel 2       Lane 1     Lane 2      Lane 3    Lane 4          Lane 5     Lane 6      Lane 7
Sample      GST PD-UP  GSTE PD-UP  GST PD-W  Protein marker  GSTE PD-W  GST PD-Elt  GSTE PD-Elt
Vol. (µL)   24         24          20        10              24         24          24

(Middelberg, 2006)

Results

The protein-protein interaction experiment produced the 6×His-USP7 elution fraction, which was first put in a microfuge tube (labeled USP7D) for incubation. GST and GST-EBNA1 columns were obtained and allowed to thaw for about 2 minutes. The columns were then washed twice with 1 mL of 1× PBS pH 7.4 (Reed, Holmes, Weyers, & Jones, 2007).

In order to ensure positive results, two respective experiments were conducted. Both setups were incubated for 90 minutes in a fridge. Part of the purified protein was dialyzed and mixed with 800 µL of 1× PBS pH 7.4. At this stage, the SDS-PAGE gel was cast. This step took approximately 2 hours (Hong, Yu, & Kang, 2002; Atkinson & Babbitt, 2009).

On the 15% SDS-PAGE gel, the GST-EBNA1 band appears larger in the 6th lane of the legend figure than the GST PD band.

Liquid portions were collected, labeled, and placed on ice. They were then washed six times and relabeled as samples 9 and 10. 200 µL of GST elution buffer was added to the samples after a 10-minute incubation period. They were then labeled as samples 11 and 12 for SDS-PAGE analysis. A total of 12 samples were obtained from the various experiments for the SDS-PAGE run. The samples were placed in different wells on the gels, as shown in the tables above.

From the legend figure, the lane constituents included:

  1. GST pull down
  2. GST PD-UP
  3. GST-PDW
  4. Protein marker
  5. GSTE PDW
  6. GST PD-Elt
  7. GSTE PD-Elt
Figure 1. A 15% SDS-PAGE gel run of the loaded samples and a protein marker (Reed, Holmes, Weyers, & Jones, 2007).

(In the pull-down assay, the interaction between 6×His-USP7 and GST-EBNA1 was studied. The procedure aimed to examine whether there was any biological protein-protein interaction between the two proteins. A blue-eye prestained protein marker was used as the control.)

Figure 2. A 15% SDS-PAGE gel with loaded samples and a protein marker (Reed, Holmes, Weyers, & Jones, 2007).
Figure 4. Molecular weight (kDa) versus the distance traveled (Middelberg, 2006).
Figure 5. The log of the distance travelled versus the molecular weight (kDa) (Middelberg, 2006).

Discussions

In Figure 2 above, the lane with GST-EBNA produced a larger band compared to lane 6, which contained GST PD Elt. Virtually every cell in the human body contains an enzyme called glutathione S-transferase (GST). The enzyme is important, especially for the scavenging of free radicals, through the reaction below (Hong, Yu & Kang, 2002; Osuna & Casamayor, 2011; MacDonald & Lucy, 2006; Oakley, 2011).

2 GSH (reduced glutathione) → GSSG (oxidized glutathione)

GSH is one of GST's substrates; hence, GST can bind the unusual tripeptide GSH with very high affinity. The aim of the experiment was to exploit this high-affinity interaction between the two compounds. The experiment involved the initial purification of the two proteins and later a setup to study their particular interaction through a pull-down protein assay technique. GSH can be coupled to sepharose or agarose beads, after which the beads can be employed in affinity chromatography. Affinity chromatography can utilize either gravity-flow purification or batch purification. This fusion, however, requires glutathione to be in its reduced GSH form. The GST tag is recovered by the application of the protease thrombin, an enzyme that cleaves at the thrombin cleavage site. The protein of interest is then separated from GST (Atkinson & Babbitt, 2009; Wu & Koiwa, 2012).

The ubiquitin protease USP7 is a deubiquitylating enzyme that removes ubiquitin from its substrate. It performs an important role in tumor suppression, as it supports the function of the protein p53. EBV, on the other hand, is a virus that is strongly associated with some forms of cancer. EBV competes with p53 for USP7. The EBNA1 protein used in the experiment has previously been associated with effects similar to those of EBV. The study tested whether the proteins EBNA1 and USP7 interact (Rath, Glibowicka, Nadeau, Chen & Deber, 2009; Board, 2011).

The blue bands on the clear background show that the procedure worked well in Figure 1. Figure 2, however, has many errors, which might have been caused by contamination or by protein degradation. There is a direct relationship between a protein's mobility and the log of its molecular weight (Figure 5). Another drawback of such a purification is that if a protein requires post-translational modifications, the whole process fails. The apparent molecular weights are most likely close to the true protein weights. As expected, a likely interaction between USP7 and EBNA1 was reported. The standard curve requires the determination of the position of each band (Middelberg, 2006).
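As a brief numerical aside, the standard-curve step mentioned above can be sketched in a few lines. The marker weights and migration distances below are hypothetical placeholders rather than values read from the gels; the fit simply follows the stated linear relationship between mobility and the log of molecular weight:

```python
import numpy as np

# Hypothetical marker bands: molecular weights (kDa) and migration
# distances (cm); real values would be read off the gel in Figure 1.
mw = np.array([180, 130, 100, 70, 55, 40, 25, 15])
dist = np.array([0.8, 1.3, 1.8, 2.5, 3.1, 3.8, 4.9, 6.0])

# Mobility relates linearly to log10(MW), so fit log10(MW) against distance.
slope, intercept = np.polyfit(dist, np.log10(mw), 1)

# Estimate the weight of an unknown band from its migration distance.
unknown_dist = 3.4  # cm, hypothetical
est_mw = 10 ** (slope * unknown_dist + intercept)
print(f"estimated molecular weight: {est_mw:.0f} kDa")
```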

In conclusion, given the many different proteins involved, it is very important that any form of error be avoided during protein separation. Accuracy eliminates some of the errors that may lead to wrong results. Secondly, contamination has to be avoided by ensuring the use of sterile equipment. It is also advisable to take note of the false positives and false negatives that may result.

References

Atkinson, H., & Babbitt, P. (2009). Glutathione transferases are structural and functional outliers in the thioredoxin fold. Biochemistry, 48(46), 11108-11116.

Board, P. (2011). Glutathione transferases. Drug Metabolism Reviews, 43(2), 91.

Flanagan, J., & Smythe, M. (2011). Sigma-class glutathione transferases. Drug Metabolism Reviews, 43(2), 194-214.

Hayes, J., Flanagan, J., & Jowsey, I. (2005). Glutathione transferases. Annual Review of Pharmacology and Toxicology, 45(1), 51-88.

Hong, S., Yu, J., & Kang, S. (2002). Ultrastructural localization of 28 kDa glutathione S-transferase in adult Clonorchis sinensis. The Korean Journal of Parasitology, 40(4), 173-176.

MacDonald, A., & Lucy, C. (2006). Highly efficient protein separations in capillary electrophoresis using a supported bilayer/diblock copolymer coating. Journal of Chromatography A, 1130(2), 265-271.

Middelberg, A. (2006). Biomolecular engineering. Chemical Engineering Science, 61(3), 875.

Oakley, A. (2011). Glutathione transferases: A structural perspective. Drug Metabolism Reviews, 43(2), 138-151.

Osuna, B., & Casamayor, E. (2011). Sodium dodecyl sulfate-polyacrylamide gel protein electrophoresis of freshwater photosynthetic sulfur bacteria. Current Microbiology, 62(1), 111-116.

Rath, A., Glibowicka, M., Nadeau, V., Chen, G., & Deber, C. (2009). Detergent binding explains anomalous SDS-PAGE migration of membrane proteins. Proceedings of the National Academy of Sciences, 106(6), 1760-1765.

Reed, R., Holmes, D., Weyers, J., & Jones, A. (2007). Practical skills in biomolecular science (3rd ed.). Toronto: Pearson Education Canada.

Wu, X., & Koiwa, H. (2012). One-step casting of Laemmli discontinued sodium dodecyl sulfate-polyacrylamide gel electrophoresis gel. Analytical Biochemistry, 421(1), 347.

Aspects of the Bootstrap-T Algorithm

The bootstrap-t algorithm is inspired by Student's t-test, as its name indicates. It introduces a similar parameter T and calculates its percentiles, which can then be used to establish confidence intervals for the initial parameter of interest. That said, since the T-percentiles are unknown, it is necessary to approximate them, at which point the bootstrapping begins. Using an estimation of the overall distribution and multiple bootstrap data sets drawn from it, it is possible to produce T*, a replication of the original parameter (DiCiccio & Efron, 1996). With enough different data sets produced, analyzed, and their results ordered, it is possible to assign each percentile a value from the resulting set: the αth percentile is assigned the (B × α)th value in a set of B ordered results. With these results, one can establish a confidence interval for the data that has been found to be second-order accurate (DiCiccio & Efron, 1996). However, the algorithm has several weaknesses, notably its high computational intensity and its numerical instability, which can produce extremely large confidence intervals.
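A minimal sketch of this procedure, assuming the sample mean as the parameter of interest and the usual plug-in standard error; the names and the toy data here are illustrative, not DiCiccio and Efron's:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t_interval(x, alpha=0.05, B=2000):
    """Bootstrap-t confidence interval for the mean of x (a sketch)."""
    n = len(x)
    theta_hat = x.mean()
    se_hat = x.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)    # one bootstrap data set
        se_b = xb.std(ddof=1) / np.sqrt(n)
        t_star[b] = (xb.mean() - theta_hat) / se_b  # replication T*
    t_star.sort()
    # The a-th percentile is the (B x a)-th value of the B ordered results.
    t_hi = t_star[int(np.ceil(B * (1 - alpha / 2))) - 1]
    t_lo = t_star[int(np.floor(B * (alpha / 2)))]
    return theta_hat - t_hi * se_hat, theta_hat - t_lo * se_hat

sample = rng.normal(10, 3, size=40)
print(bootstrap_t_interval(sample))  # approximate 95% interval for the mean
```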

The BCa method also relies on bootstrap data sets that are sampled from the data. Generally, the number of such structures necessary for accurate estimation varies from several hundred (for standard errors) to multiple thousands or more (for confidence intervals) (DiCiccio & Efron, 1996). Replications of the parameters of interest are procured from each of these and used to estimate the confidence intervals. The intervals are defined by a complicated formula that features the two parameters that give the method its name: the bias-correction and the acceleration. The former is estimated by comparing the entirety of the sample to the parameter's estimated value, aiming to correct upward or downward biases in the sample. The latter aims to measure the speed of changes in the standard error on a normalized scale and can be convoluted to define, though, per DiCiccio and Efron (1996), it can be estimated using Fisher's score function. BCa is also second-order accurate and correct under general circumstances, being transformation invariant and exactly correct under the normal transformation model. However, it is highly complex and can be overly conservative, producing intervals close to the non-bootstrap confidence intervals.
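SciPy ships a BCa implementation, so the method can be tried without reimplementing the bias-correction and acceleration formulas. In this sketch, the skewed sample is arbitrary, and `method='BCa'` selects the interval type described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=1.0, sigma=0.6, size=200)  # an arbitrary skewed sample

# Several thousand resamples, in line with the guidance above for
# confidence intervals (hundreds suffice for standard errors).
res = stats.bootstrap((data,), np.mean, n_resamples=5000,
                      confidence_level=0.95, method='BCa',
                      random_state=rng)
print(res.confidence_interval)
```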

The final method discussed, ABC, stands for approximate bootstrap confidence [intervals]. According to DiCiccio and Efron (1996), it is a middle ground between the two approaches discussed above, abandoning BCa's bootstrap cumulative distribution function and introducing a nonlinearity parameter. ABC involves mapping from the natural parameter vector η to the expectation parameter µ using a predetermined function. µ can then be used to compute the standard deviation estimate, followed by the calculation of a number of numerical second derivatives to obtain the method's three constants. The method diverges into several variations at this point, with the simplest, the quadratic ABC, calculating the endpoint as a direct function of the inputs (DiCiccio & Efron, 1996). The other versions of the algorithm require slightly more effort, addressing issues such as nonlocality-related boundary violations. A significant advantage of ABC over its two counterparts is that it requires one-hundredth of the computation, but it is also less automatic and demands smoothness properties of the parameter of interest (Efron & Hastie, 2016). Still, in simpler cases, using the method can confer dramatic advantages, and it is frequently used where the standard interval may have sufficed.

References

DiCiccio, T. J., & Efron, B. (1996). Bootstrap confidence intervals. Statistical Science, 11(3), 189-228.

Efron, B., & Hastie, T. (2016). Computer age statistical inference: Algorithms, evidence, and data science. Cambridge University Press.

Researching of Amino Acids in the Human Body

It is important to note that there are approximately 500 amino acids, but proteins in the human body are mostly comprised of 20 amino acids, which can be further categorized into three main groups: essential amino acids, nonessential amino acids, and conditional amino acids. The first group includes valine, tryptophan, threonine, phenylalanine, methionine, lysine, leucine, isoleucine, and histidine (Amino acids reference charts). The nonessential group involves tyrosine, serine, proline, glycine, glutamine, glutamic acid, cysteine, aspartic acid, asparagine, arginine, and alanine (Amino acids reference charts). The essential amino acids are the ones that cannot be produced within the body, which is why they need to be consumed through food, whereas nonessential ones can be produced from other amino acids (Amino acids reference charts). In addition, depending on a condition, such as stress or illness, some nonessential amino acids can become essential, which makes them conditional amino acids; they include serine, proline, ornithine, glycine, tyrosine, glutamine, cysteine, and arginine.

Alanine is an amino acid abbreviated as either Ala or A, and its radical group is CH3. It can be categorized as an aliphatic amino acid, which means that its side chain is hydrophobic (Amino acids reference charts). Phenylalanine is an amino acid abbreviated as Phe or F, whose side chain is also hydrophobic, but it is not a methyl group; rather, it is an aromatic ring (Amino acids reference charts). Cysteine is abbreviated as Cys or C, and it contains a radical group with a sulfur atom (Amino acids reference charts). It is important to note that cysteine's side chain is polar neutral, which means that it is neither acidic nor basic.

Reference

Amino acids reference charts. Merck, 2021. Web.

Practical Proforma Enzyme Kinetics

Introduction

Crude acid phosphatase, an enzyme, is vital in speeding up biological reactions and, as such, comes in handy in the manufacture of proteins as well as the conversion of sugar compounds into usable sugar substrates. Acid phosphatase is a ubiquitous enzyme found in both plants and animals. The germ of the wheat seed is rich in this enzyme and as such provides an all-important plant source. On the other hand, in animals, and in particular humans, the prostate gland forms the all-important source of this enzyme. From wheat germ, it is possible to extract a crude product of this enzyme via a pulping process in the aqueous state. The aqueous extract can be converted to a powdered extract using a spray drier at low temperatures (Chang 2005). In this form, it can conveniently be commercialized for laboratory use.

The enzyme phosphatase is distinguished by being non-specific in nature. To this end, it can act on a variety of phosphate esters. The enzyme exists in three isomers designated I, II, and III, and it functions optimally at a pH of 5. In plants, it is vital in enhancing the availability of inorganic phosphates that aid the growth of seedlings. In detail, sugars in wheat are normally stored as compounds containing phosphates. To make the sugars useful to the plant, there is a need to break the sugar compound into its constituents: sugars and phosphates. The enzyme phosphatase enables this by catalyzing the dissociation. Once dissociated, the sugars are hydrolyzed to provide the all-important energy that keeps biological processes running. The phosphates, in turn, form the building blocks vital for protein synthesis. Of note, in humans, clinicians use the serum concentration of this enzyme as an indicator in the diagnosis of prostate cancer.

The objective of this report is to study enzyme kinetics using the aforementioned enzyme. The compound disodium p-nitrophenyl phosphate will act as a substrate whose hydrolysis is catalyzed by the enzyme to liberate p-nitrophenol, as illustrated in the scheme below.

Phosphatase reaction scheme (Chang 2005).

The product, a yellowish solution of p-nitrophenol, can be quantified using a spectrophotometer at a wavelength of 405 nm. It is important to note that the concentration of this compound is proportional to the optical density (absorbance) of the solution. As such, according to Michaelis, a plot of the rate of the enzymatic reaction against the concentration of substrate assumes trend 1 below:

Figure: rate of enzymatic reaction versus substrate concentration, trends 1-3 (Chang 2005).

However, in the presence of an inhibitor, the trend changes, as in 2 and 3 above, for competitive and non-competitive inhibitors respectively. According to Lineweaver, the above relationship becomes linear when the reciprocals of the variables are plotted, as below:

Figure: Lineweaver-Burk double-reciprocal plot (Chang 2005).

These variables are connected according to the equation below:

1/v0 = (Km/Vmax)(1/[S]) + 1/Vmax (Atkins & Paula 2006).

In this report, sodium molybdate will act as the enzyme inhibitor, and we will determine both the Michaelis-Menten constant (Km) and the maximum velocity (Vmax).
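To preview how Km and Vmax fall out of this linear form, the sketch below fits the no-inhibitor reciprocal values tabulated in the Results section (tubes 2 to 11). A least-squares fit will not exactly reproduce values read off a hand-drawn graph, so the printed estimates are indicative only:

```python
import numpy as np

# Reciprocals from the no-inhibitor rows of the results table:
# 1/[S] in 1/µM and 1/v0 in min/µmol.
inv_S = np.array([0.170, 0.083, 0.042, 0.028, 0.021,
                  0.015, 0.011, 0.008, 0.003, 0.001])
inv_v0 = np.array([333.33, 200.00, 90.91, 50.00, 47.62,
                   45.45, 27.78, 24.39, 6.25, 4.56])

# 1/v0 = (Km/Vmax)(1/[S]) + 1/Vmax is linear in 1/[S].
slope, intercept = np.polyfit(inv_S, inv_v0, 1)
Vmax = 1 / intercept       # µmol/min
Km = slope * Vmax          # µM
print(f"Vmax = {Vmax:.3f} µmol/min, Km = {Km:.0f} µM")
```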

Null Hypothesis

H0A: The rate of the phosphatase reaction will initially be proportional to the p-nitrophenyl phosphate concentration, before reaching a plateau as the substrate concentration increases.

H0B: A competitive inhibitor will increase the Km, while a non-competitive inhibitor will lower the Vmax.

Alternative Hypothesis

H1A: The rate of the phosphatase reaction will not obey the Michaelis-Menten principle and hence will not follow its trend.

H1B: An inhibitor will have no effect on the enzyme.

Aims

The objective of this report was to verify the Michaelis-Menten principle and to validate it using the Lineweaver-Burk principle. This was done separately in the absence and presence of an inhibitor.

Method

The following reagents were utilized in this experiment: 2 mM disodium p-nitrophenyl phosphate (the substrate), enzyme extract, 1 M NaOH, citrate buffer at pH 5, and 0.9 mM sodium molybdate (an inhibitor).

Before the commencement of the experiment, the enzyme phosphatase was obtained from wheat germ and diluted appropriately before being used later in the experiment.

The experiment commenced when a 20 mL volume of 0.3 mM disodium p-nitrophenyl phosphate was made from the 2 mM stock. Different volumes of the substrate were added to 11 pairs of well-labeled test tubes such that the first nine pairs held volumes of the 0.3 mM p-nitrophenyl phosphate while the last two pairs held the 2 mM stock. The substrate volumes in the tubes followed the patterns shown in the tables below:

Table 1: Without an inhibitor.

Tube no.        1     2     3     4     5     6     7     8    9    10    11
Substrate (mL)  0     0.12  0.24  0.48  0.72  0.96  1.32  1.8  2.4  1.2*  2.4*
Water (mL)      2.8   2.68  2.56  2.32  2.08  1.84  1.48  1    0.4  1.6   0.4
Buffer (mL)     2     2     2     2     2     2     2     2    2    2     2

Table 2: In the presence of an inhibitor.

Tube no.               12    13    14    15    16    17    18    19   20   21    22
Substrate (mL)         0     0.12  0.24  0.48  0.72  0.96  1.32  1.8  2.4  1.2*  2.4*
Water (mL)             2.6   2.48  2.36  2.12  1.88  1.64  1.28  0.8  0.2  1.4   0.2
Buffer (mL)            2     2     2     2     2     2     2     2    2    2     2
Sodium molybdate (mL)  0.2   0.2   0.2   0.2   0.2   0.2   0.2   0.2  0.2  0.2   0.2

To each tube, with the exception of the first pair (1 and 12), a 1.2 mL volume of the enzyme was added, one tube at a time, every minute from the first to the last tube. Another set of 22 tubes, each containing 1 mL of NaOH, was lined up to stop the reaction after ten minutes of incubation once the enzyme had been added. This was done by siphoning a 2 mL volume of the enzyme-containing solution into the NaOH-containing tubes to make a 3 mL solution per tube. With a spectrophotometer set at 405 nm, the absorbance of each tube was measured and recorded for analysis.

Results

Tube   [S] (µM)   Absorbance at 405 nm   [P] after reaction stopped (mM)   Quantity of P (µmol)   Rate of reaction (µmol min⁻¹)   1/[S] (µM⁻¹)   1/v0 (min µmol⁻¹)

No inhibitor present:
1      0          0.002                  0.007                             0.02                   0.002                           0              500.00
2      6          0.003                  0.010                             0.03                   0.003                           0.170          333.33
3      12         0.005                  0.017                             0.05                   0.005                           0.083          200.00
4      24         0.011                  0.037                             0.11                   0.011                           0.042          90.91
5      36         0.020                  0.067                             0.20                   0.020                           0.028          50.00
6      48         0.021                  0.070                             0.21                   0.021                           0.021          47.62
7      66         0.022                  0.073                             0.22                   0.022                           0.015          45.45
8      90         0.036                  0.120                             0.36                   0.036                           0.011          27.78
9      120        0.041                  0.137                             0.41                   0.041                           0.008          24.39
10     400        0.160                  0.533                             1.60                   0.160                           0.003          6.25
11     800        0.220                  0.733                             2.20                   0.220                           0.001          4.56

Inhibitor present:
12     0          0.023                  0.077                             0.23                   0.023                           0              43.48
13     6          0.006                  0.020                             0.06                   0.006                           0.170          166.67
14     12         0.014                  0.047                             0.14                   0.014                           0.083          71.43
15     24         0.003                  0.010                             0.03                   0.003                           0.042          333.33
16     36         0.016                  0.053                             0.16                   0.016                           0.028          62.50
17     48         0.009                  0.030                             0.09                   0.009                           0.021          111.11
18     66         0.013                  0.043                             0.13                   0.013                           0.015          76.92
19     90         0.011                  0.037                             0.11                   0.011                           0.011          90.91
20     120        0.019                  0.063                             0.19                   0.019                           0.008          52.63
21     400        0.055                  0.183                             0.55                   0.055                           0.003          18.18
22     800        0.089                  0.297                             0.89                   0.089                           0.001          11.24

The calculation below leads to the concentration of the substrate, [S].

We begin by calculating the amount (m) of the substrate present in a tube. For instance, taking tube 2:

m = molarity × volume of substrate = 0.3 mM × 0.12 mL = 0.036 µmol.

The concentration of the substrate = amount / total volume.

But the total volume = substrate volume + water volume + buffer volume + enzyme volume = 0.12 + 2.68 + 2 + 1.2 = 6 mL.

Therefore, [S] = 0.036 µmol / 6 mL = 0.006 mM = 0.006 × 1000 µM = 6 µM.

On the other hand, in the presence of an inhibitor, for instance, tube 18: m = 0.3 mM × 1.32 mL = 0.396 µmol.

[S] = 0.396 / (1.32 + 1.28 + 2 + 0.2 + 1.2) = 0.066 mM = 66 µM.

To get [P], the formula below is applied:

v0 = dP/dt

Therefore, the amount of P = v0 × t, where v0 is the reaction rate (taken from the absorbance) and t is the incubation time (10 min). For example, for tube 3: P = 0.005 × 10 = 0.05.

Therefore, the concentration [P] = amount / volume, where the volume is constant at 3 mL.

Hence, [P] = 0.05 / 3 = 0.017 mM.

To obtain the quantity of P, which is equivalent to the amount of substance in the solution, the formula below is applied:

For example, for tube 3: m = molarity × volume = 0.017/1000 × volume.

But the volume = volume of the siphoned liquid + volume of NaOH = 2 + 1 = 3 mL.

Therefore, m = 0.017/1000 × 3/1000 = 5.1 × 10⁻⁸ moles = 0.051 µmol.

The rate of reaction in µmol min⁻¹ is given as below:

For example, for tube 3: v0 = 0.051/10 = 0.0051 µmol per min.

The reciprocal of [S] is given as below:

For instance, tube 3: 1/[S] = 1/12 = 0.083 µM⁻¹.

The reciprocal of v0 is given as below:

For instance, tube 3: 1/v0 = 1/0.005 = 200 min/µmol.
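The tube 3 arithmetic above can be collected into one short script; it restates the same rounded inputs and units rather than adding anything new:

```python
# Worked example for tube 3 (volumes in mL, following the steps above).
substrate, water, buffer, enzyme = 0.24, 2.56, 2.0, 1.2
total_volume = substrate + water + buffer + enzyme   # 6 mL

m = 0.3 * substrate                  # µmol of substrate from the 0.3 mM stock
S = m / total_volume * 1000          # [S] = 12 µM

P_conc = 0.005 * 10 / 3              # [P] after 10 min in the 3 mL stopped tube, mM
P_amount = P_conc * 3                # quantity of P, about 0.05 µmol
v0 = P_amount / 10                   # rate, about 0.005 µmol/min

print(S, round(P_conc, 3), round(P_amount, 3), round(v0, 4))
print(round(1 / S, 3), round(1 / v0, 2))   # reciprocals for the double-reciprocal plot
```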

Graph 1: Lineweaver-Burk plot in the absence of an inhibitor (Vmax = 0.33 µmol/min; Km = 325 µM; the 1/2 Vmax point is marked).

Graph 2: Michaelis-Menten plot in the absence of an inhibitor.

Example of a calculation for obtaining Km and Vmax using the Michaelis-Menten graph (graph 2):

The graph takes the form y = mx + c; therefore, 1/v0 = (Km/Vmax)(1/[S]) + 1/Vmax.

As such, the gradient of the line, 2027, equals Km/Vmax, while the y-intercept = 1/Vmax = 6.252 min/µmol.

Therefore, Vmax = 1/6.252 = 0.16 µmol/min.

Km = 2027 × 0.16 = 324.2 µM.

For Lineweaver-Burk, these values are read directly from the graph.

Hence, for Lineweaver-Burk: Km = 325 µM, Vmax = 0.33 µmol/min.

For Michaelis-Menten: Km = 324.2 µM, Vmax = 0.16 µmol/min.

Graph 3: Lineweaver-Burk plot in the presence of an inhibitor (Km = 600 µM; Vmax = 0.11 µmol/min; the 1/2 Vmax point is marked).

Graph 4: Michaelis-Menten plot in the presence of an inhibitor.

In the presence of an inhibitor, for Lineweaver-Burk: Km = 600 µM, Vmax = 0.11 µmol/min.

For Michaelis-Menten: Km = 9.62 µM, Vmax = 0.014 µmol/min.

When plotted on the same graph, the below trend is observed:

Graph 5: Lineweaver-Burk plot for the enzyme reaction in the presence and absence of a competitor.

Graph 6: Michaelis-Menten plot for the enzyme reaction in the absence and presence of a competitor.

Discussion

The objective of this experiment was to verify the Michaelis-Menten principle and to validate it using the Lineweaver-Burk principle. This was done separately in the absence and presence of an inhibitor. Of the two experiments performed, the only one that displayed the expected trend was the one performed in the absence of an inhibitor (Graphs 1 and 2). This can be attested by the R2 scores of 0.769 and 0.989 for Lineweaver-Burk and Michaelis-Menten respectively, which are reasonably close to 1. As such, we will rely more on the Michaelis-Menten results for comparison with the actual values. The opposite is true for the second experiment, which was performed in the presence of an inhibitor, with R2 scores of 0.683 and 0.148 for Lineweaver-Burk and Michaelis-Menten respectively (Graphs 3 and 4). This shows that the second experiment was flawed: the data collected had many outliers that were responsible for the inaccurate trends. This could be attributed to temperature fluctuations.

For the first experiment, the values for Km (325 µM) tallied across both methods. However, there was a difference in Vmax (0.33 against 0.16 µmol/min for Lineweaver-Burk and Michaelis-Menten respectively). The second experiment displayed varied values owing to experimental errors. When we compare the Km value (325 µM) obtained in this experiment with the actual Km (1500 µM) for phosphatase, we can conclude that the enzyme was not functioning at its optimum. This was anticipated, since phosphatase functions optimally in an alkaline environment of pH 8-10. Similarly, the Vmax (0.12 µmol/min) is expected to be lower than the actual Vmax (20 µmol/min).

Conclusion

As explained above, this experiment was flawed, making it difficult to identify the type of inhibition present. However, setting aside the outliers, the inhibitor exhibited competitive inhibition (Graph 6).

References

Atkins, P & Paula, J 2006, Physical Chemistry for the Life Sciences, W. H. Freeman and Company, New York.

Chang, R 2005, Physical Chemistry for the Biosciences, Thomson Learning, South Melbourne.

The Empirical Rule in Statistics

The empirical rule is one of the basic statistical terms associated with the normal distribution. Also called the three-sigma rule, this law states that for a normal distribution, virtually all observable data will fall within three standard deviations of the mean (Hayes, 2021). There is a 68-95-99.7 ratio, according to which 68 percent of observations fall within the first standard deviation, 95 percent within the second, and 99.7 percent within the third. Therefore, using this rule, one can make predictions about final results based on the likelihood of a particular observation. In addition, this method is relatively fast and allows a rough estimate in cases where detailed data acquisition is impossible or costly. Finally, this law can be used to test the normality of a distribution (Hayes, 2021). However, since the empirical rule is closely tied to the normal distribution, it can only be applied in such cases. Accordingly, all other types of distribution, for example, skewed ones, are incompatible with it.
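A quick simulation makes the 68-95-99.7 ratio concrete. The sample below is synthetic, and the computed shares will vary slightly from run to run:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=50, scale=10, size=100_000)  # a synthetic normal sample

mu, sigma = x.mean(), x.std()
for k in (1, 2, 3):
    share = np.mean(np.abs(x - mu) <= k * sigma)
    print(f"within {k} standard deviation(s): {share:.3f}")
# prints roughly 0.683, 0.954, 0.997
```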

In reply to one of my classmates, Jamal, I have to point out that his wording is not entirely accurate. Although my friend also notes the relationship between the rule and the standard deviation, I should note that there is nothing in the definition of the empirical rule about symmetry about the mean. In addition, Jamal somehow separates the three standard deviations from the normal distribution in the last paragraph of his post, as if assigning them to different forms of distribution, although these parameters are closely related. Finally, my classmate does not quite correctly interpret the application of the empirical rule in the context of skew to the left and skew to the right. These concepts are separate distribution shapes and cannot be applied in the context of this law. Thus, Jamal's post is not devoid of right thoughts, but some of the wording needs clarification.

Reference

Hayes, A. (2021). Empirical rule. Investopedia. Web.

The Height Values Obtained Through Statistical Research

Figure 1 below shows a screenshot of the values obtained through Excel. The mean height is 68.9 inches, and the standard deviation is 4.41 inches. Compared to the mean height of the results, I am taller, since my height is 72 inches.

Figure 1: Descriptive Statistics

Step 2

  1. The participants for the study were selected through sampling. Sampling is a statistical approach to choosing elements for study, or a subgroup of the population, from which statistical inferences can be made (Freedman, 2017). The sample is also used to estimate the features of the entire population under study.
    There are various methods that can be applied for sampling. Researchers prefer different sampling methods because they do not need to study the whole population to gather actionable insights. Sampling is widely used as it saves time and is cost-effective. It also forms the basis of any research design. In selecting the heights, I used the systematic sampling approach. Systematic sampling involves selecting the sample from a target population by choosing an arbitrary starting point and then choosing sample members at a fixed interval (Freedman, 2017). In this case, there were 100 students, so I picked every 10th student to form my sample, as sketched in the code after this list.
  2. Country of study
  3. The age of the population ranges from 20 to 25 years.
  4. Males constituted 60 percent of the sample, while females made up 40 percent.
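A minimal sketch of the systematic selection described in point 1, assuming a simple numbered roster in place of the real class list:

```python
import random

# Systematic sampling: every 10th student from a roster of 100,
# starting from a randomly chosen point within the first interval.
population = list(range(1, 101))   # student numbers 1..100 (hypothetical roster)
k = 10                             # sampling interval
start = random.randrange(k)        # arbitrary starting point, 0..9
sample = population[start::k]      # every 10th student thereafter
print(sample)                      # the 10 sampled students
```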

Step 3

  1. The empirical rule is based on the bell-shaped normal probability curve. According to Freedman (2017), the empirical rule shows that nearly all observed data are expected to lie within three standard deviations (σ) of the mean (µ), as discussed below:

    • Approximately 68 percent of the data will fall within 1 standard deviation of the mean; that is, between the mean minus 1 × the standard deviation and the mean plus 1 × the standard deviation. Statistically, it is represented as µ ± 1σ.
    • About 95 percent of the data will fall within 2 standard deviations of the mean; that is, between the mean minus 2 × the standard deviation and the mean plus 2 × the standard deviation. Statistically, it is represented as µ ± 2σ.
    • About 99.7 percent (or nearly all) of the data will fall within 3 standard deviations of the mean; that is, between the mean minus 3 × the standard deviation and the mean plus 3 × the standard deviation. Statistically, it is represented as µ ± 3σ.

From the results in Figure 2 below:

  • The one-standard-deviation range covers heights 64.49 to 73.31; it should contain 68 percent of the data and here holds 15 data points, with 5 outside the range;
  • The two-standard-deviation range covers heights 60.08 to 77.72; it should contain 95 percent of the data and here holds 19 data points, with 1 outside the range; and
  • The three-standard-deviation range covers heights 55.67 to 82.13; it should contain 99.7 percent of the data and here holds all the data.
Figure 2: Empirical Rule
  • My height is 72 inches; under the fitted distribution, 75.9 percent of the relevant population is shorter than me, and the other 24.1 percent is taller (see the sketch below Figure 3).
Figure 3: Normal Probability
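The ranges in Figure 2 and the percentages in Figure 3 can be reproduced from the summary statistics alone, assuming the heights are approximately normally distributed:

```python
from scipy import stats

mean, sd, my_height = 68.9, 4.41, 72  # inches, from Figure 1

# Empirical-rule ranges, matching Figure 2:
for k in (1, 2, 3):
    print(f"{k} sigma: {mean - k * sd:.2f} to {mean + k * sd:.2f}")

# Share of the population shorter than 72 inches under normality:
p_shorter = stats.norm.cdf(my_height, loc=mean, scale=sd)
print(f"shorter: {p_shorter:.1%}, taller: {1 - p_shorter:.1%}")  # ~75.9% / ~24.1%
```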

Reference

Freedman, D. H. (2017). Statistics (4th ed.). New York: W.W. Norton & Company.

Identifying a Material in a Chemical Lab

Lab Report

This laboratory exercise used thermal expansion to identify a material. In most cases, solids expand when heated due to the faster vibration of atoms about their fixed points. However, some solids, particularly polymers, contract when exposed to heat. The main cause of negative thermal expansion is the transverse vibrational motion of the atoms in the material. The coefficient of thermal expansion can be used to identify the specific material exposed to heat.

Materials and Methods

A one-meter bar of unidentified material was heated from 90 to 300 K using a gas flame. The length of the material was measured at intervals of 5 K using a meter gauge. The results were recorded in MS Excel for further analysis.

Results

The results showed a clear negative relationship between the temperature and length of the material. Figure 1 shows the trend of linear change for the material.

Figure 1. Thermal change in length

The resulting model showed that a one-kelvin increase in temperature resulted in a 0.00002 m decline in length. Therefore, the material had a negative thermal expansion coefficient.
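A short sketch of how such a slope is estimated. The readings below are synthetic stand-ins generated from the reported trend, since the raw Excel data are not reproduced here:

```python
import numpy as np

# Synthetic stand-in data: a 1 m bar measured every 5 K from 90 K to 300 K,
# shrinking by the reported 0.00002 m per kelvin.
T = np.arange(90, 305, 5)               # temperature, K
L = 1.0 - 0.00002 * (T - T[0])          # length, m (assumed trend)

slope, intercept = np.polyfit(T, L, 1)  # dL/dT from a least-squares fit
alpha = slope / intercept               # approximate expansion coefficient, 1/K
print(f"dL/dT = {slope:.6f} m/K, alpha = {alpha:.2e} 1/K")
```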

Discussion

The material had a negative thermal expansion coefficient of -0.00002 K⁻¹, or -0.002%, which indicates that the transverse vibrational motion of the atoms that make up the material shortens it. These transverse forces overcome the longitudinal vibrations that lengthen the bonds (Attfield, 2018). As the temperature rises, the transverse forces increase, which contracts the material. This interplay of transverse and longitudinal motions results in a guitar-string effect. The effect arises when the transverse phonon amplitudes arising from torsional motion exceed the expansion force of the longitudinal modes, thereby reducing the overall length of the material. Such behavior is most evident in polymers, which exhibit the guitar-string effect when heated.

Reference

Attfield, J.P. (2018). Mechanisms and materials for NTE. Frontiers in Chemistry, 6(371), 1-6.

Randomized Clinical Trials Examples

When it is necessary to make research data credible, healthcare managers and leaders can use numerous types of studies. These include case-control studies, cohort studies, and randomized clinical trials (RCTs), and each of them has specific peculiarities. Thus, this paper will offer examples of such studies and comment on what data can be collected with these tools, their biases, cost-effectiveness, and level of reliability.

Case-control studies are useful to assess the relationship between causes and effects. Munnangi and Boktor (2020) explain that this type focuses on a group with a condition and a group without it. Since the data are assessed retrospectively, a recall bias is present, while cost-effectiveness is considered a significant advantage (Munnangi & Boktor, 2020). The study by Nobel et al. (2020) is an example of a case-control study, while Johansen et al. (2017) admit that this design is lower than a cohort methodology in the hierarchy of studies. That is why it is also reasonable to comment on this type.

Cohort studies are helpful for determining the incidence of a condition by focusing on what exposures result in the given state of affairs. According to Munnangi and Boktor (2020), these studies can be retrospective, when scientists analyze already collected data, or prospective, when researchers select a sample and observe whether its members experience a condition. Recall and selection biases affect this design, and the necessity to cover extended periods makes it expensive (Munnangi & Boktor, 2020). The article by Fuchs et al. (2018) is an example of a cohort study, and it offers reliable findings. This implies that it is necessary to spend more time and resources to increase reliability.

Finally, RCTs are considered a highly efficient and useful study design. They randomly assign participants to the control and experimental groups to find an intervention effect (Munnangi & Boktor, 2020). A selection bias has some impact, but randomization is used to minimize it. This study type is expensive, while Verster et al. (2019) admit the reliability of its findings by stating that it is a standardized experimental intervention. Song and Baicker's (2019) study is an RCT example in medical research. The information above demonstrates that all these types have their own pros and cons, and the choice of a specific design should depend on the researcher's aims.

References

Fuchs, F., Monet, B., Ducruet, T., Chaillet, N., & Audibert, F. (2018). Effect of maternal age on the risk of preterm birth: A large cohort study. PLoS ONE, 13(1).

Johansen, C., Schüz, J., Andreasen, A.-M. S., & Dalton, S. O. (2017). Study designs may influence results: The problems with questionnaire-based case-control studies on the epidemiology of glioma. British Journal of Cancer, 116, 841-848.

Munnangi, S., & Boktor, S. W. (2020). Epidemiology of study design. In B. Abai et al. (Eds.), StatPearls [Internet]. StatPearls Publishing.

Nobel, Y. R., Phipps, M., Zucker, J., Lebwohl, B., Wang, T. C., Sobieszczyk, M. E., & Freedberg, D. E. (2020). Gastrointestinal symptoms and coronavirus disease 2019: A case-control study from the United States. Gastroenterology, 159(1), 373-375.

Song, Z., & Baicker, K. (2019). Effect of a workplace wellness program on employee health and economic outcomes: A randomized clinical trial. JAMA, 321(15), 1491-1501.

Verster, J. C., van de Loo, A. J. A. E., Adams, S., Stock, A.-K., Benson, S., Scholey, A., Alford, C., & Bruce, G. (2019). Advantages and limitations of naturalistic study designs and their implementation in alcohol hangover research. Journal of Clinical Medicine, 8(12).