Mercury is a key part of the solar system and the planet closest to the Sun. As a result, observing it from Earth is difficult. The objective of putting a lander on Mercury is to discover various properties of the planet. These include its diameter in kilometers, its density, and its minimum and maximum surface temperatures.
Another objective is to determine the period the planet takes to complete one revolution around the Sun and the distance it covers during this revolution. Measuring the planet's temperature and comparing the intensity of sunlight on Mercury with that on Earth is crucial. The availability of oxygen is also a subject of discovery. Finally, determining whether the planet could host life, that is, organisms capable of surviving there, is paramount (David).
When a lander travels into space, there are challenges it has to overcome. The first pertains to the daytime temperatures on Mercury, which reach about 427 degrees Celsius because of the planet's proximity to the Sun. The materials used in the lander's construction should therefore be heat-resistant to prevent damage to the equipment. An additional challenge the lander needs to overcome is technical breakdowns.
Thus, there should be a backup program to take care of such incidents. If radiation levels are very high, they will destroy the telemetry systems, and no data transmission from the lander to the ground station will take place. A further challenge is protecting the astronauts' health, which is essential for the success of the project, since the astronauts bear the greatest responsibility for ensuring the lander reaches its destination safely.
Thus, it is imperative to provide enough oxygen for the mission, because there is no proof that oxygen exists on Mercury. An additional challenge the lander must overcome is accuracy in its operations, including the readings it records during the mission. The information collected must be reliable for the mission to succeed (Cain).
There are instruments the lander possesses to ensure it delivers the right information. These include oxygen cylinders, telemetry systems, spectrometers, propellers, and video and audio transmitters. Moreover, the lander must have a parachute to ensure it lands safely. One of the most important pieces of equipment that I will fit on my lander is the "Lunar Reconnaissance Orbiter" (LRO), a satellite camera that takes high-quality photographs.
Telemetry systems will collect data in areas of Mercury that the lander cannot reach. This strategy safeguards the lives of the astronauts operating the equipment: the system collects information using a remotely controlled device, so astronauts need not enter areas that are unsafe.
The spectrometer will measure the properties of light, in order to ensure the light does not damage the lander's sensors. Propellers are essential for moving the lander from the Earth's surface and through space. I will fit the lander with hydrogen propellers to launch it from the Earth's surface.
While in space, I will use plate propellers to help it maneuver. Video and audio transmitters will convey information about the mission directly from the lander and will help the officials on the ground document the collected data quickly (Broyles).
Works Cited
Broyles, Robyn. The Lunar Lander Challenge. 2011. Web.
Cain, Fraser. Characteristics of Mercury. 2007. Web.
The deep seas will never cease to yield discoveries, even after the most skilled researchers make mysterious finds. Notably, when robots go into the deep seas, a discovery is almost always made, or an earlier discovery is seen from a new perspective. Unlike other fields, there have been few research projects on the deep seas, since the remote technology that works on land might not work under the sea, and even if it does, the deep seas are hard to reach. Therefore, this paper seeks to explain things in the deep sea that can inspire one's professional creativity.
Deep-Sea Discoveries
Future Of Our Seas (FOOS) is a group of scientists mainly concerned with water bodies. Limited technological access to the deep seas should inspire one to focus on the technology necessary to build the most efficient deep-sea robots, leading to more diverse deep-sea discoveries. FOOS should also have agents on each continent to ensure that its research is covered fully from all perspectives. Having FOOS agents on every continent will merge different research efforts, thus producing broader discoveries about the deep sea.
The different living species discovered in the deep sea should encourage further research, since they are evidence that many more mysteries of the deep sea are yet to be unfolded. There are also medicines made from plants found in the deep sea; therefore, one should run experiments on newly discovered deep-sea plants (Kennedy, 2019). Researchers can also run tests on some fish species found in the deep sea, bring them to shallow waters, and observe any changes in behavior or adaptations, gaining more knowledge of the species.
Conclusion
To sum up, deep-sea discoveries will always remain an open book for research, since new discoveries occur every day. Notably, researchers should be keen and thorough in their work and cautious about the possible existence of sea monsters, a mystery that is yet to be unfolded. Such mysterious findings have been evident over the centuries, and as technology advances, more sightings are being made; this should inspire more research on the deep seas.
Reference
Kennedy, B. R., Cantwell, K., Malik, M., Kelley, C., Potter, J., Elliott, K., & Rotjan, R. D. (2019). The unknown and the unexplored: Insights into the Pacific deep-sea following NOAA CAPSTONE expeditions. Frontiers in Marine Science, 6, 480.
Although Rosalind Franklin made an unprecedented contribution to the discovery of the DNA structure, her part in this historical event is underappreciated. Maurice Wilkins, who worked alongside Rosalind on the structure of DNA, could not obtain a high-quality image, but Rosalind Franklin managed to do so. Moreover, she had a strained relationship with Wilkins, and they worked separately. Wilkins took the DNA image Rosalind had produced, added data, and, together with his colleagues, assembled and presented it to the world as his own. Therefore, Rosalind Franklin's picture became the inspiration and basis of the discovery, but her name was not even mentioned.
Discussion
The discovery of the spatial structure of DNA undoubtedly made a decisive contribution to the development of modern biological science and related fields. Firstly, it became the basis for further, more specialized discoveries. For example, it provided the impetus for the fundamental discovery of a unique class of nucleic acid-metabolizing enzymes (Brosh Jr. & Matson, 2020). Furthermore, the discovery of the DNA structure provided a field for future nucleic acid biologists to work in. In addition, this discovery will inspire experimenters to continue exploring new directions of DNA research in the future.
Conclusion
Speaking of my own experience, I have never been deceived or underestimated the way Rosalind Franklin was, although there were times at school when I gave myself entirely to team projects that, in the end, were not appropriately credited: the task was presented as exclusively group work when I had done all of it. In any case, my minor school injustices are nothing compared to the underappreciation of Rosalind Franklin's work. She was an independent woman scientist in a man's world, which became one of the reasons for the unfairness she faced.
Atoms are the building blocks of the world we live in. However, the understanding of atoms that we have today was not simply handed to us. It is the result of a fascinating journey of humanity in the quest for knowledge, applying its inquisitive mind and experimentation across a range of disciplines. Atoms are wonderful as we know them, both in themselves and in aggregate forms like molecules, large crystals, and the large structures around us. But this wonder crosses all imaginative boundaries when we try to look within an atom. This paper presents the fascinating journey of humanity that has unraveled this wonderful entity known as the atom and its still more wonderful internal architecture.
Introduction
Today the understanding of atoms is so common that one may overlook the marvelous philosophical and scientific development that led to the creation of this knowledge. It is thrilling to know how, from this apparently continuous world of matter, the philosophers of ancient times could discern the quantum nature of matter and predict the existence of a tiniest particle, the atom, without any experimentation to support their philosophy. It was a philosophy that was scientifically recognized and proved over twenty-four hundred years later.
It is no less interesting that from macroscopic observations of chemical reactions in gases, and of the relationships between macroscopic variables like pressure, temperature, and volume, scientists could deduce the existence of such tiny particles as atoms and molecules and provide a scientific basis for the atom, a particle that cannot be dissected. Though atoms were later proved to have substructure, and therefore could be dissected into constituent particles from a physics point of view, from the point of view of chemistry and chemical reactions atoms continue to remain what their name means: something that cannot be cut.
The architecture of the atom can itself be termed the most marvelous of structures, hiding within it the mystery of the Creation. The philosophical, experimental, and theoretical journey to unravel the internal architecture of the atom has been probably the most interesting and exciting activity in modern science. It has had many constructive and destructive implications for humanity.
This paper attempts to traverse the journey of conception, theorization, discovery, and deciphering that has created the present-day knowledge of the atom.
Atoms in Philosophy
Humans are a highly evolved species with a wonderfully inquisitive brain. So it is not surprising that they were mesmerized by the marvels of the Creation and tried their best to unravel as much of it as they could. Matter, being ubiquitous, attracted significant attention from philosophers and scientists. What matter is made of has naturally been a subject of deliberation among philosophers and scientists since time immemorial.
The debate of continuum versus quantum has always existed. One school of thought held that matter was a continuum, like space, and this was the dominant school in ancient times because it is supported by what our eyes can see. Looking at the objects around us, it is hard to discard the continuum picture of matter; as it is said, "seeing is believing". Aristotle and the Stoic philosophers were in the camp that believed in and propagated the continuum theory of matter (Hendee et al. 2).
But despite the lure of "seeing is believing", there were discerning philosophers even in ancient days who could see through the apparent continuum to the basic building blocks of matter: atoms. That matter is composed of a tiniest particle called "Parmanu", which combines to form different structures, was pronounced by the Indian sage Maharshi Kanad, founder of the Vaisesika philosophy, as early as 600 BC (Deshmukh et al. 5). Greek philosophers like Democritus and Leucippus were also in the camp that believed in the quantum nature of matter, proposing a smallest particle of matter which remains unchanged and in continuous motion (Serway et al. 98). They named this particle the "atom", something that cannot be cut, and the term has become one of the most important in the scientific community.
Though I have tried to keep the discussion of philosophy restricted to just one page, this in no way undermines the importance of this section. Though the philosophers provided no experimental support, this should be seen in the context of the technologies available at the time. It is easier to conduct an experiment when the supporting technology exists; it is easier still to propose a theory when there is experimental data and evidence; but it is most difficult to propose something radically different, merely on the basis of philosophical understanding, that is proved correct over two thousand four hundred years later.
Besides, the major significance of atomic theory lies in the fact that it laid the foundation of quantum theory. Is matter a continuum, as it appears to our eyes, or is it quantized, as one discerns from the application of logic? It is in answering this question, with atomic theory favoring quantization of matter over the continuum, that the theory holds great significance among all the philosophies.
Modern Atomic Theory
So far all that we have discussed was in the realm of philosophy. Wise men put their brains to work and proposed models advocating either a continuous or a quantized structure of matter. But merely putting forward a theory is not sufficient for the scientific temperament. A theory must be supported by direct or indirect experimental observations. The experiments may be performed by the theorist himself, by his peers, or by his predecessors, but the theory should be validated by experimental observations and should be capable of explaining the experimentally observed results. Scientific support for the atomic theory came from a whole series of experiments performed by scientists, some of which are described in the following sections.
Antoine Lavoisier's Conservation of Matter
Lavoisier was a great chemist, known as the father of modern chemistry, and a superb experimentalist. He carried out many experiments on the chemistry of reactions, and through careful measurements he established the conservation of mass in chemical reactions. This was a great support for atomic theory: based on his finding of mass conservation, he could substantiate that all elements are made of indestructible particles, or atoms, which are neither created nor destroyed in chemical reactions but only change the way they combine with atoms of other elements to form new compounds. This was certainly major experimental support for atomic theory.
Gas Laws
Experimental studies of the behavior of gases also contributed to atomic theory to a great extent. It is interesting how macroscopic measurements of pressure, temperature, volume, and amount of gas, together with the observation that equal amounts of all gases occupy equal volumes under the same conditions, were used by scientists to reason that gases are made of small particles. The kinetic theory of gases was developed on the assumption that gases are made of tiny spherical particles with a great deal of space between them, moving randomly with a distribution of kinetic energies (the Boltzmann distribution). By applying Newtonian mechanics to individual particles together with the Boltzmann distribution, macroscopic properties like pressure and temperature were derived. These experiments and the kinetic theory of gases provided a sound scientific background for the development of atomic theory by Dalton.
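To make the reasoning concrete, the standard textbook result of kinetic theory can be sketched as follows (a simplified derivation assuming elastic collisions of identical point particles):

```latex
% Pressure of an ideal gas of N point particles of mass m in volume V,
% from the momentum transferred by elastic collisions with the walls:
\[
  PV = \tfrac{1}{3}\, N m \langle v^{2} \rangle
\]
% Comparing with the empirical gas law PV = N k_B T identifies the
% temperature with the mean kinetic energy per particle:
\[
  \tfrac{1}{2}\, m \langle v^{2} \rangle = \tfrac{3}{2}\, k_{B} T
\]
```

That a purely particle-based calculation reproduces the measured gas laws is exactly the kind of indirect evidence for atoms discussed above.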
Dalton's Atomic Theory
Dalton is known as the father of atomic theory. He was primarily a chemist who made great contributions to other spheres of science as well; besides atomic theory, his great contribution was in the field of color blindness, which is also known as Daltonism. Dalton studied chemical reactions and focused on the ratios in which different pure substances (elements) combine to form a new compound. He found that different elements always combine in definite integral proportions to form compounds. From this, he deduced that all elements must be made of identical indestructible particles which combine in integral proportions to form compounds. He named these simple particles atoms, something that cannot be cut.
Other Illustrative Theories
Though atomic theory was fully established by the time Albert Einstein came into the picture, any description of it would remain incomplete without mentioning Einstein's work on Brownian motion. Einstein made a great theoretical contribution to explaining the zigzag motion of suspended particles in a dispersion (known as Brownian motion), which is due to molecular impacts on the small suspended particles. This work provided additional support to atomic-molecular theory and led to improved values of Avogadro's number.
Internal Architecture of the Atom
Though atomic theory was sufficient to explain many observations, such as the gas laws, conservation of mass in chemical reactions, and the laws of chemical combination, many interesting observations were still waiting for an answer that atomic theory could not provide. Questions such as the mechanism by which atoms combine to make larger structures like molecules and crystals, why negative particles are liberated when a metal is heated, and what the origin of the hydrogen spectrum is could not be answered as long as we remained firm that atoms cannot be cut, that is, that they have no substructure. Indeed, subsequent experimental findings forced people to concede that atoms have internal structure, and much interesting work came out of this quest of humanity to decipher the internal architecture of the atom. Some important experiments relating to the deciphering of the atom are described in the following sections.
Faraday’s Laws of Electrolysis
The brilliant experimental physicist Michael Faraday gave his theory of electrolysis in 1833, based on his electrolysis of many salts. The essence of the theory is that the mass of an element deposited on the cathode is
Directly proportional to the charge transferred.
Directly proportional to the atomic weight of the element and
Inversely proportional to the valence of the deposited element.
Though it was not realized at the time, this theory provided strong proof that molecules are made of atoms, that charge is quantized, and that atoms consist of negative and positive parts. However, the nature of the positive and negative parts of the atom was still unknown.
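Faraday's three proportionalities combine into a single relation, stated here in modern notation rather than Faraday's own:

```latex
% Mass m of an element deposited at the cathode by a total charge Q,
% for an element of molar mass M and valence z; F (about 96485 C/mol)
% is the Faraday constant.
\[
  m = \frac{Q}{F}\cdot\frac{M}{z}
\]
% The discrete ratio M/z hints that charge is carried in integer
% multiples attached to individual atoms, i.e. that charge is quantized.
```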
Discovery of Cathode Rays as Electrons & Plum Pudding Model
While performing experimental studies of electrical conduction through gases in a low-pressure discharge tube, J. J. Thomson discovered in 1897 that the rays seen in the low-pressure discharge were due to negative particles (Thomson 269). It is worth mentioning that cathode rays were known even before J. J. Thomson, and William Crookes (1832-1919) had already shown with his "Maltese Cross" experiment that cathode rays move in a straight line (Spear 331). But Thomson correctly identified their nature as a negative constituent of all atoms and measured their charge-to-mass (e/m) ratio. His experiments are worth discussing briefly.
Methods and Materials
The primary equipment used in these experiments was a cathode ray discharge tube (shown in Fig. 1, below) and its variants. This is essentially a sealed glass tube filled with gas at low pressure, containing two electrodes, an anode (+ve) and a cathode (-ve), with provision to supply high voltage across them.
Different variants of the cathode-ray tube, including tubes filled with different gases, parallel plate capacitors, and phosphor screens, were the principal materials used in these experiments.
When high voltage was applied, there was a glow in the discharge tube, as if something was moving from the cathode towards the anode. This was termed a cathode ray. When an obstruction was placed in the path of the cathode ray, the rays exerted force and set the obstruction in motion, implying that the rays have inertia; and the obstruction cast a shadow toward the anode, implying that the rays moved in a straight line. But when an electric field was applied transverse to their motion using a parallel plate capacitor, the cathode ray deviated towards the positive plate, i.e. opposite to the direction of the electric field. This confirmed that these particles were negative corpuscles.
The deviation of the cathode ray opposite to the applied field was used to measure the charge-to-mass ratio (e/m) of the cathode ray. The question then was the origin of these negative particles: did they have something to do with the gas filling the discharge tube? To check this, the experiment was repeated with the tube filled with different gases, and the nature of the cathode ray, including the e/m ratio, remained the same for all of them. The measured e/m was far larger than that of any charged particle then known, implying an extremely light carrier of negative charge. This left no option but to propose that the cathode ray was nothing but a fundamental constituent of all atoms.
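A common textbook reconstruction of the deflection measurement is the following sketch; Thomson's actual arrangement combined electric and magnetic deflections, so the details here are illustrative:

```latex
% An electron of speed v crosses deflecting plates of length L carrying
% a transverse field E; the acceleration eE/m acts for a time t = L/v,
% producing a measurable deflection y:
\[
  y = \frac{1}{2}\,\frac{eE}{m}\left(\frac{L}{v}\right)^{2}
  \quad\Longrightarrow\quad
  \frac{e}{m} = \frac{2\,y\,v^{2}}{E\,L^{2}}
\]
% The speed v can be found separately by adding a crossed magnetic
% field B and tuning it until the deflections cancel, giving v = E/B.
```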
Now that the atom was no longer indivisible, its negative constituent having been extracted in the discharge tube, the bigger question was to conceive and propose a model of the atom in light of this finding of negative corpuscles. No positive particle could be detected in the experiment, and the negative particle was much lighter than the lightest atom, hydrogen. Therefore, it appeared as if the atom was a large positively charged sphere of uniformly distributed mass into which light negative electrons were embedded like "raisins in pudding" (Fig. 2, below), such that the atom as a whole remains neutral and only negative particles are ejected when it is supplied with energy by electrical discharge, heating, and so on. No experiment had yet been devised to measure the charge and mass of the electron separately.
Measurements of Electronic Charge
As the electron was established as a fundamental constituent of the atom and the quantum of negative charge, it became necessary to know more about it. Thomson's experiments could determine its e/m, but the values of e and m could not be known without measuring one of the two alone. Millikan devised an ingenious experiment to measure the value of e, known as "Millikan's Oil Drop Experiment" and briefly described below.
Methods and Materials
A schematic drawing of the experimental setup is shown in Fig. 3, below.
The materials used in the experiment are shown in Fig. 3. The basic principle underlying this experiment is the following.
Oil droplets created in an oil atomizer were made to fall between the plates of a parallel plate capacitor. The drops were charged with electrons emitted from zinc illuminated with UV light or X-rays. The motion of these charged oil drops between the plates was monitored with a telescope, both with and without an applied electric field.
Without the electric field, the forces on a drop are the upward drag force and the downward gravitational force. When the net force is zero, a downward terminal velocity is reached, which can be measured with the telescope. With the electric field on, the electrostatic force enters the picture; for an upward terminal velocity, the electrostatic force is balanced by the downward drag and gravitational forces.
Comparing the two balance equations for the same oil drop, one with the electric field on and one with it off, the charge on the drop can be calculated. Similarly, the charge on another drop of the same size but different charge was calculated. The ratio of the charges on two oil drops of the same size showed that electrical charge is quantized. The experiments confirmed the atomicity (quantization) of charge to within approximately 1% accuracy (Millikan 349). Millikan's experiment was pioneering and truly ingenious in light of the technological status of the time.
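Written out explicitly, the two force balances described above take the following form (a simplified sketch using Stokes drag and neglecting air buoyancy):

```latex
% Field off: a drop of mass m and radius r falls at terminal speed v_1,
% gravity balancing Stokes drag (air viscosity \eta):
\[
  mg = 6\pi\eta r v_{1}
\]
% Field on: the drop rises at terminal speed v_2, the electric force on
% its charge q balancing gravity plus drag:
\[
  qE = mg + 6\pi\eta r v_{2}
\]
% Eliminating mg gives the charge, with r fixed by the first balance
% through the oil density \rho:
\[
  q = \frac{6\pi\eta r\,(v_{1}+v_{2})}{E},
  \qquad
  r = \sqrt{\frac{9\eta v_{1}}{2\rho g}}
\]
```

Repeating this for many drops, the extracted charges all turn out to be integer multiples of a smallest value, which is the quantization Millikan reported.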
Lenard’s Model of Atom
Lenard made significant contributions to the understanding of cathode rays and made several useful observations on the photoelectric effect. When Lenard bombarded a thin sheet of metal with cathode rays, most of the electrons passed through. Based on this observation he rightly concluded that most of the atom is empty space and that the mass and positive charge of an atom are concentrated in smaller regions. He suggested a model in which an atom is made of light negative particles and heavy positive particles arranged so that most of the space in the atom remains vacant. However, this left many questions, such as what the configuration of the positive and negative particles would be, and why positive particles are not liberated from atoms. Many of these questions were answered by the planetary model proposed by Rutherford.
Planetary Model of Atom by Rutherford
Rutherford was working on the scattering of α-particles by metals. He was concerned with the broadening of an α-particle beam as it passed through a very thin sheet of metal; he was using a very thin sheet of gold. The experimental setup is shown in Fig. 4, below.
What was observed was that though the incident α-particle beam broadened by scattering, as expected, almost all the particles passed through without much deviation. This meant that most of the atom was empty; so where were the mass and positive charge of the atom located? This question was answered by an accidental observation, and I must put the description of that accidental discovery in the words of Rutherford himself (Rutherford, 1936).
"I would like to use this example to show how you often stumble upon facts by accident. In the early days, I had observed the scattering of α-particles, and Dr. Geiger in my laboratory had examined it in detail. He found in thin pieces of heavy metal that the scattering was usually small, of the order of one degree. One day Geiger came to me and said, 'Don't you think that young Marsden, whom I have been training in radioactive methods, ought to begin a small research?' Now, I had thought that, too, so I said, 'Why not let him see if any α-particles can be scattered through a large angle?' I may tell you in confidence that I did not believe that they would be, since we knew that the α-particle was a very fast, massive particle, with a great deal of energy, and you could show that if the scattering was due to the accumulated effect of several small scatterings the chance of an α-particle's being scattered backward was very small.
Then I remember two or three days later Geiger coming to me in great excitement and saying, 'We have been able to get some of the α-particles coming backward…' It was quite the most incredible event that has ever happened to me in my life. It was as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus.
It was then that I had the idea of an atom with a minute massive center carrying a charge. I worked out mathematically what laws the scattering should obey, and I found that the number of particles scattered through a given angle should be proportional to the thickness of the scattering foil and the square of the nuclear charge, and inversely proportional to the fourth power of the velocity. These deductions were later verified by Geiger and Marsden in a series of beautiful experiments."
Having quoted Rutherford himself, I do not think I need to write anything more about the nuclear model of the atom, with all the positive charge and mass concentrated in a tiny nucleus and enough electrons to neutralize the atom revolving around it like planets around the Sun, leaving almost the entire atom empty. However, I must mention some important merits and limitations of this model.
Almost all of the experimental work supporting Rutherford's model was carried out by two of his co-workers, H. Geiger and E. Marsden (605). They were able to determine the size of the aluminum nucleus by this experiment (Keller 215).
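The scattering law Rutherford describes in the quotation above, and which Geiger and Marsden verified, can be stated compactly in modern notation:

```latex
% Number of alpha particles of speed v scattered through angle \theta
% by a foil of thickness t containing n nuclei of charge Ze per unit
% volume:
\[
  N(\theta) \;\propto\; \frac{n\,t\,Z^{2}}{v^{4}\,\sin^{4}(\theta/2)}
\]
% The 1/sin^4(\theta/2) dependence follows from pure Coulomb repulsion
% by a point-like nucleus; the experimental confirmation of each
% dependence cemented the nuclear model.
```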
The planetary model of the atom, with electrons revolving around a heavy, positively charged nucleus, was a stable structure as far as classical mechanics goes. But the theory of electromagnetism says that an accelerating charged particle emits electromagnetic radiation; therefore, the electrons in Rutherford's atom must radiate continuously, losing all their energy in no time and spiraling into the nucleus, destroying the atom itself. This does not happen. So what is the reason?
What holds so many positive particles (protons) in the nucleus against the huge Coulomb repulsion between them? This was answered at a later stage by scientists like Yukawa, who proposed a strong attractive force between nucleons acting when they are at distances smaller than about 2 fm. James Chadwick had already realized the existence of a force within the nucleus much stronger than the electrostatic force (Chadwick and Biele 923).
For electrical neutrality, the number of positive particles (protons) has to equal the number of electrons, but the number of protons required for electrical neutrality could account for only about half of the atomic weight. So where does the remaining half of the atomic weight come from? It must be mentioned that neutrons were not known at the time. In 1932, J. Chadwick experimentally proved the existence of neutral particles with mass close to that of the proton by carrying out an artificial nuclear reaction.
Rutherford's model also could not explain the hydrogen spectrum. So something more was needed, and it was proposed by Bohr, as discussed in the next section.
Bohr’s Model
Niels Bohr was a great theoretical physicist. He proposed a quantum model of the atom in his three-part paper (Bohr 231), postulating the quantization of energy levels for electrons within an atom. This model succeeded in explaining why electrons do not lose energy while revolving around the positively charged nucleus, as well as the spectrum of the hydrogen atom. The main postulates of Bohr's quantum model are the following.
Electrons revolve around the nucleus like planets in the solar system.
The energy of an electron in an atom is quantized, i.e. electrons can have only certain fixed values of energy in an orbit, and their energy remains the same as long as they remain in that orbit.
When an electron jumps from an orbit of higher energy to one of lower energy, the energy difference is liberated as electromagnetic radiation; conversely, when an electron is supplied with energy equal to the difference between a lower and a higher level, it jumps to the orbit of higher energy.
The angular momentum of an electron in an atom is quantized, taking only integral multiples of h/2π.
This model was radical, as it contradicted the well-established theory of electromagnetism by Maxwell. It simply declared that for bound electrons this theory is not applicable; such electrons are instead governed by the principles of quantum mechanics.
The theory explained the hydrogen spectrum very well and could calculate the energy levels of the electron, the wavelengths of the lines in the hydrogen spectrum, and the value of the Rydberg constant to great accuracy for the hydrogen atom and hydrogen-like ions, i.e. systems having one electron.
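For reference, the quantitative content of Bohr's postulates for hydrogen can be summarized by the standard textbook formulas:

```latex
% Quantized angular momentum (n = 1, 2, 3, ...):
\[
  L = n\hbar
\]
% Resulting energy levels of the hydrogen atom:
\[
  E_{n} = -\frac{13.6\ \text{eV}}{n^{2}}
\]
% Wavelength of the line emitted in a jump from level n_i to n_f:
\[
  \frac{1}{\lambda} = R_{H}\left(\frac{1}{n_{f}^{2}} - \frac{1}{n_{i}^{2}}\right),
  \qquad R_{H} \approx 1.097\times10^{7}\ \text{m}^{-1}
\]
```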
On the flip side, one can say that the postulates of this model were arbitrary: all of a sudden someone comes along and says the rules of electromagnetism are not applicable to atomic electrons. On what basis? None, except that the hypothesis helped explain the observations available at the time. So were these postulates not merely convenient assumptions? While this argument cannot be denied on its face, the quantum nature of the energy levels of bound electrons was verified experimentally, and also theoretically by the subsequent wave mechanics model of the electron.
Through his correspondence principle, Bohr was also able to connect the quantized energy levels with the continuum at high quantum numbers, showing that the continuum is nothing but a limiting case of the quantum nature of energy levels. Thus it can be said that though it appears arbitrary, the quantum model of the atom proposed by Bohr was a radical and revolutionary theory. Until then only charge and matter were quantized; in the aftermath of Bohr's model, even energy levels became quantized.
Still, Bohr's model could not explain the spectra of atoms and ions with more than one electron, or the presence of hyperfine lines in the hydrogen spectrum. A further refined model of the atom was obtained by applying wave mechanics to electrons, as briefly discussed below.
Wave Mechanics Model of Atoms
Wave-particle duality was proposed by de Broglie, who said that a particle with momentum p can be seen as a wave with wavelength λ = h/p, where h is Planck's constant. Many scientists, such as Born, Heisenberg, and Schrödinger, made significant contributions towards the wave mechanics model of the atom. This is a purely theoretical model: it describes the electron by a wave function ψ(x,y,z,t) and solves for this wave function for an electron bound to an atom. The solution for a bound electron requires three integers, which are nothing but the quantum numbers. Once ψ(x,y,z,t) is known, the probability density |ψ|² is calculated by multiplying ψ by its complex conjugate ψ*; |ψ|² is the probability density for finding the electron.
This density can be plotted taking the nucleus as the origin, and a region can be obtained in which the probability of finding the electron equals a specified value, say 99%. This region is the shape of the orbital occupied by that electron in the atom. This is a simple way to visualize the highly mathematical wave mechanics model of the atom. This model remains the most accepted model of the atom and is capable of explaining most observations concerning atoms and molecules.
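As a concrete example of this picture, the textbook ground state of hydrogen shows how |ψ|² becomes an orbital shape (a standard result, quoted without derivation):

```latex
% Ground-state (1s) wave function of hydrogen, with a_0 the Bohr radius:
\[
  \psi_{1s}(r) = \frac{1}{\sqrt{\pi a_{0}^{3}}}\, e^{-r/a_{0}}
\]
% Radial probability of finding the electron at distance r, which
% peaks exactly at r = a_0:
\[
  P(r) = 4\pi r^{2}\,|\psi_{1s}(r)|^{2}
\]
% The spherical "1s orbital" is the region enclosing a chosen fraction
% (say 99%) of this probability.
```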
Other Important Contributions
This discussion of the deciphering of the atom would remain incomplete without mentioning Moseley's law, which established the atomic number, the number of protons in an atom, as the primary determinant of the chemical nature of an atom and the basis for arranging atoms in the periodic table. Hund's rule, Pauli's exclusion principle, and related rules proved very helpful in deciding the electronic configuration of an atom, which determines its valence and chemical characteristics. The discussion could go on endlessly, as the journey to decipher the atom was marvelous, but we have to stop somewhere, and this, in my opinion, is the right place.
Conclusions
Based on the discussions in the preceding sections, it can be concluded that the underlying philosophy of atomic theory and the quantization of matter remain great proof of the intellectual capability of the human brain. The experiments designed to give scientific legitimacy to atomic theory were ingenious. The journey of deciphering the internal architecture of the atom was marvelous and gave birth to many new and radical theories, such as the quantization of charge and energy and the wave mechanics model, and to an altogether new way of seeing the world around us and the Creation. Many useful technologies, such as the CRT for television, radio for telecommunication, nuclear energy, and the laser, owe their birth to this marvelous journey of humanity to decipher the atom.
References
Deshmukh P. C. and Venkataraman S. "A Fragmentary Tale of the Atom". Physics News, Vol. 39, No. 2, pp. 5-14, 2009. Published by the Indian Physics Association.
Hendee W. R., Ibbott G. S. and Hendee E. G. "Radiation Therapy Physics", Third Edition, John Wiley & Sons, Inc. Web.
Serway R. A., Moses C. J. and Moyer C. A. "Modern Physics", 2nd Ed., Saunders College Publishing, Harcourt Brace College Publishers, New York, 1997. Web.
Thomson J. J. “Phil. Mag.” 44:269, 1897. Web.
Spear B. “J.J. Thomson, the electron and the birth of electronics”. World Patent Information 28 (2006) 330–335. Web.
Millikan R. A. Physical Review, 1911, p. 349. Web.
Rutherford L. An essay on “The Development of the Theory of Atomic Structure”. 1936. Published in Background to Modern Science, New York, Macmillan Company, 1940.
Geiger H. and Marsden E., “Deflection of α-Particles through Large Angles”. Phil. Mag. (6)25:605.
Keller A. “Infancy of Atomic Physics: Hercules in His Cradle”. Oxford, Clarendon Press, 1983, p 215.
Chadwick J. and Biele E. S. Phil. Mag. 42:923, 1921.
NOVA's documentary, Ice Mummies: The Siberian Ice Maiden, allows the non-scientific public to share in an important and controversial find. The characteristics and location of the long-dead young woman may suggest just how complex the diffusion of culture and the movement of peoples in ancient times must have been.
Because of the political restrictions placed on these archeological sites, this documentary, albeit targeting a popular audience, significantly contributes to public knowledge of the long-gone Pazyryk culture and the debate surrounding it.
The documentary records the finding of a burial site containing a remarkably well-preserved female body in elaborate costume. The Russian archeologist describes how she arrived at the burial mound with its extraordinary contents.
The mound, or kurgan, was evidently well-known to the local border guards who patrol the no-man's-land between China and Russia, and Natalia Polosmak used their help to locate an unlooted tomb. This anecdote highlights how the successful practice of archeology varies from setting to setting, both in terms of the geography and the local population.
In a different region, for example in Virginia, in the USA, in the absence of mounds or other above-ground structures marking areas of pre-contact use and habitation, her procedure might have involved weeks or months of visual surveys, sweeping surfaces for artifacts, test pits, and possibly geo-sensing or satellite imagery analysis.
Although not elaborated on in the documentary, it is clear that practicing archeology in the Altai may involve more cooperation with local people than technology. This point might have been a useful addition to the film.
NOVA focuses on the archeologists as individuals and on these modern-day investigators' responses to the site. For example, the film describes their emotional reaction to the atmosphere of the Ukok region of the Altai. The American graduate student reported having experienced nightmares and feeling resistance from unseen forces. Including this sort of drama makes the archeologists sound a bit unprofessional, or at least suggestible.
It recalls the newspaper hype about supposed curses surrounding the discovery of early Egyptian tombs. However, the presence of border guards in this contested area between two warlike nations might have made things a bit tense. On the other hand, this region and its mounds are still to this day respected as sacred by the local folk (when they are not being looted).
The film helpfully situates the burial and the Pazyryk people in context. The narration and images evoke for the Western viewer an unfamiliar lost world of large burial structures and highly decorative textile and leather work, as well as the possible religious beliefs and social relationships in which these items were used.
NOVA also details the Pazyryk embalming processes, which, together with the deep-freeze conditions of the burial, kept the Ice Maiden intact. The embalming process shares with Egyptian procedures the removal of internal organs to prevent decay, a waterproof coating, and the stuffing of cavities with preservative plants and other items. The materials used seem to reflect locally available resources.
The confirmation of Herodotus' assertions opens the possibility that his other hitherto discredited claims reflect accurate observation. This connection between classical writings and archeological findings is constructive and supports the continued study of the classics in the modern curriculum.
The film reveals visually compelling details of ancient clothing and utensils which may be precursors of material culture and practices seen in historic times. For example, modern nomads also use vessels that hang from a peg or a saddle. Thus, for nomads as in other modes of life, the forms of material culture follow the needs and demands of function.
The age estimation through the study of skull bone fusion demonstrates important principles of growth and development. Determining her season and cause of death showcases several forensic techniques, including dendrochronology and insect life-cycles as well as testing the way skull bones fracture. All this attention evokes the forensic furor surrounding Pharaonic death.
Her forensic facial reconstruction, showing Caucasian features rather than Mongolian ones, provoked official Chinese anger, apparently reflecting more politics than science. The art involved is trumped by DNA analysis of her tissue, which reveals European genetic material as well as Mongolian.
The film, in focusing more on the hints to the Ice Maiden’s daily life revealed by the burial, refers to other known nomadic peoples in less detail. The inference by archeologists of a nomadic lifestyle is supported by the importance of the horse in the burial, and the design of her implements for horseback use. Most striking is the apparent egalitarian nature of her society, inferable from the elaborateness of her costume and the solitary grandeur of her interment (suggesting that she was not merely a wife or consort).
Equal female status was attested to by Herodotus as well. This supports the notion that nomadic societies may sometimes permit greater gender parity in some areas of life than a more agrarian society. In an agrarian society, land inheritance pressures may lead to customs that constrain women’s behavior.
Ice Mummies: The Siberian Ice Maiden is a very useful recording of a find whose details may not be shared very widely in the future due to the government of China's punitive and obstructive reaction to the find. Their response, focused on politics rather than science, restricts further investigation. Whether the Ice Maiden was a Mongolian or a Caucasian should be less important than increased understanding of daily life in the ancient world. The film does reveal important archeological techniques.
Invention can be defined as the final result of imagination, originating either from mere conception or from experimental research. Discovery, on the other hand, is the initial or primary acquisition of a given idea or piece of information by an individual.
While invention credits the person who performed the act as the actual source of the conception, discovery credits anyone who comes across an idea for the first time, at least relative to himself or herself. Invention is therefore a special, primary kind of discovery. This paper seeks to discuss some of the significant discoveries that were made as a result of the inventions of the telescope and the microscope.
The paper will look into the history of the discoveries and their effects on the development of human well-being, as well as on the enhancement of human understanding of the surrounding nature in terms of changing traditions and society.
Discoveries due to Invention of the Microscope
The invention of the microscope occurred in the sixteenth century. Believed to have been invented in the Netherlands, the technology of the microscope was developed over time through improvements in its lenses and other features.
Some of the significant discoveries made with microscopes include the discovery of the yeast fungus by Louis Pasteur and the discovery of cells that led to the cell theory (Microscopy, n.d.).
The Discovery of Yeast Fungus
The discovery of the yeast fungus is attributed to Louis Pasteur, a French national. Born in 1822, Pasteur was schooled to advanced levels even though most of his teachers did not think him suited to higher education. He was the first to lay down the foundations of the science of fermentation.
He illustrated in his discovery the process by which yeast enables alcohol to be obtained from sugar. In doing so, Pasteur disproved the initial perception that brewing alcohol from sugar was purely a chemical process rather than a biological one.
In the discovery, he demonstrated that yeast consists of living organisms that can undertake anaerobic respiration, which yields fermentation (Science, 2011).
Pasteur's discovery that yeast is the driving engine behind the brewing of alcohol changed the traditional perception, previously assumed and believed, that the brewing process was primarily a chemical reaction.
The assumption of a chemical process in the conversion of sugar into alcohol had concealed many of the risks that alcohol exposed people to as a result of its bacterial components. In the discovery, it was realized that the fermentation process was infested by a number of disease-causing microorganisms, including bacteria, fungi, and several yeast species.
As a result of the presence of disease-causing organisms in the fermentation process, steps were taken by other scientists to eliminate these organisms from yeast. The success of this elimination strategy allowed for a brewing process free from microorganisms other than yeast.
Further studies and successes in refining the fermentation process were fueled by Pasteur's discovery of yeast as the basis of fermentation. This development saw the growth of the brewing industry and the elimination of disease-causing organisms as components of alcohol.
By so doing, Pasteur's discovery improved people's welfare by setting a stage upon which their health could be safeguarded. The elimination of bacteria and other organisms from yeast, and further developments of the brewing process, had the positive effect of eliminating the diseases and medical complications that these microorganisms caused.
The discovery of yeast further led to the development of knowledge in the biology of microorganisms and the subject of anaerobic respiration (Khachatourians and Arora, 2002).
The Discovery of Cells and the Cell Theory
The discovery of cells was made by an Englishman called Robert Hooke. After designing and using a microscope, Hooke observed in 1665 substances whose composition he described as numerous little boxes. He named these little boxes cells, derived from the Latin word for "little room".
Hooke's discovery broke the traditional belief that the human body is one whole and uniform substance. He brought people to the realization that a human body is made up of tiny units called cells. It has since been established that cells are the primary building elements of organisms (Crown, 2003).
The discovery of cells, after a number of studies, led to the establishment of the cell theory. The cell theory holds that "organisms are composed of similar units of organization called cells" (Meisler, 2006, p. 1). Dating from as early as 1838, the theory described the cell as a distinct element with its own features and as a component of a bigger structure, the organism.
The cell theory established that living organisms are composed of these elementary cells; that the cells are both structural elements and functional components of the organisms; that cells carry hereditary features that are transmitted during cell division; and that cells have a similar composition.
The establishment of the cell theory, however, originates from Hooke's discovery of cells, which was facilitated by the invention of the microscope. The discovery refuted the earlier perception that body organs were a uniform mass of substance, illustrating instead that the structural organs of organisms are composed of small cells that together form the organs or body parts.
The discovery subsequently led to the advancement of knowledge through further discoveries and studies of cells and the organs they form. Extensions of this work include Brown's discovery and study of the nucleus and the further exploration of the components of the cell, of cell types such as reproductive cells, and of DNA, all within the subject of biology.
The study of human anatomy, which also originated from the discovery of cells, has led to discoveries and to the improvement of human health through medicines that help preserve it. The discovery of the cell, though a small and ancient innovation, has developed into the core of human health science (Meisler, 2006).
Discoveries due to the Invention of the Telescope
The invention of the telescope is, according to Fowler (n.d.), officially attributed to Galileo. An earlier version was made by a man called Roger Bacon, who failed to obtain a patent for his invention on the grounds that it was too simple and could easily be reproduced. Galileo later discovered, through his experiments, an improvement on the pre-existing knowledge.
In his work, Galileo realized that the magnifying power depended largely on the ratio of the strengths of the two lenses used in the system, the concave and the convex lens. After his discovery and modifications, Galileo was granted tenure on the strength of these developments (Fowler, n.d.).
The Dark Energy
The invention of the telescope opened the universe to study by astronomers. With a clearer and better view of the universe by aid of the telescope, many discoveries have since been made about the features of the universe and the changes taking place within it. One of the most stunning was the observation of dark energy, a feature of space.
According to NASA (n.d.), dark energy exerts an effective force that is greatly accelerating the expansion of the universe. The discovery of dark energy and the expansion of the universe posed a challenge to the previous theory of gravitational force.
Under the theory of gravity alone, there would be no expansion of the universe, as the force exerts an attraction towards the center. This discovery of dark energy has further stimulated the study of the universe by casting doubt on the centrifugal theory (NASA, n.d.).
Walker (2010) expressed the fear that the extent and totality of dark energy in the universe, which weakens the effect of the gravitational force, gives reason for worry. He recounted that scientists consider dark energy a threat to the universe, though they estimate that the universe still has billions of years of existence.
The discovery is greatly developing our understanding of the state of the universe as more effort is made to understand the effects of dark energy. Dark energy is still largely a mystery but could turn out to be advantageous or dangerous to people's welfare. More of its nature and effects are yet to be discovered (Walker, 2010).
Planetary Nebulae
The planetary nebulae have a history of discovery dating from the eighteenth century. The name nebula was accorded to these objects owing to the similarity of their color to that of Uranus and Neptune. The nebulae are gaseous objects with a fuzzy appearance and a recognizable level of symmetry (Kwok, 2007). Their discovery added to the richness of the study of the universe.
The ability to identify and view planetary nebulae was due fundamentally to the telescope, which has opened the universe to exploration. Their discovery led to advanced study that revealed how they are formed, their properties, and their distribution.
The discovery can therefore similarly be credited with enhancing human knowledge of the universe in general, and of the planetary nebulae in particular. The knowledge of the formation of planetary nebulae, for example, shows that the evolution of a star leads to the emission of a great wind. An instability created in the process leads to the breakup of the outer layer of the star.
This results in hot material that can then be seen as a glowing disc. An important point about the planetary nebulae is that they are reabsorbed into the interstellar medium. This means that the emissions from the formation of the nebulae do not spread to the Earth.
This is of significant importance to the inhabitants of the Earth, considering that some foreign emissions into the Earth's atmosphere are dangerous, with adverse side effects. An illustrative example is harmful ultraviolet radiation entering the atmosphere.
The knowledge that these emissions are reabsorbed is a relief that builds people's confidence concerning their safety and welfare on Earth. The discovery of the planetary nebulae has therefore promoted the development of knowledge by furthering studies and invention, as well as calming fears of external threats to the Earth's atmosphere (Darling, n.d.).
Conclusion
The world of discoveries and inventions has existed for centuries. The inventions have been diverse, covering both theories and instruments. Discoveries and inventions are, on their merits, self-propagating processes, with one step leading to a chain of further discoveries and inventions.
An illustration is seen in the inventions of the telescope and the microscope, which led to discoveries such as the fermentation process, the cell theory, dark energy, and the planetary nebulae of space. These discoveries have in one way or another developed human knowledge by furthering studies, and have enhanced people's well-being socially or in terms of health.
References
Crown. (2003). The discovery of cells. Strengthening Teaching and Learning of Cells. Web.
My team of scientists and I have recently confirmed the existence of 1284 new exoplanets, or planets that orbit a star other than our Sun. The confirmation was performed by applying statistical analysis to the Kepler space telescope's planet candidate catalog, using data obtained by the Kepler Observatory, a spacecraft in a solar orbit trailing the Earth, intended for the discovery of planets similar to Earth and part of NASA's planet-search mission.
The team, led by Timothy Morton, has estimated the probability of each of the 1284 listed objects being a planet at higher than 99 percent, making this the single biggest discovery of its kind and increasing the number of known exoplanets by more than a third (Astrobiology Magazine par. 2). Furthermore, the analysis has allowed us to recognize 550 of the exoplanets as rocky and nine as lying within the habitable zone.
Research Method
The discovery of new exoplanets comprises two phases. First, the Kepler space observatory monitors the visible light of stars and detects changes in their brightness. Brightness cycles consistent with a planet passing in front of its star are considered indirect proof that the observed phenomenon is an exoplanet (Sengupta 89). Once such objects are discovered, they are listed in the catalog of "planet candidates." The candidates are then subjected to a rigorous confirmation process, which aims at checking their status as precisely as possible. However, the process is lengthy, which slows down the mission.
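The size of the brightness dip Kepler looks for follows from simple geometry. The sketch below illustrates that geometry only, not our detection pipeline; the function name and example values are ours:

```python
# Simplified illustration of the transit geometry (not the actual Kepler
# pipeline): a transiting planet blocks a fraction of the star's light
# equal to the ratio of the projected disk areas.

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fractional drop in stellar brightness during a transit."""
    return (planet_radius_km / star_radius_km) ** 2

# Example: an Earth-size planet (~6371 km) crossing a Sun-size star
# (~696000 km) dims it by roughly 0.0084%, the tiny periodic dip that
# the transit method searches for.
print(f"{transit_depth(6371, 696000):.6%}")
```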
The statistical approach devised by our team has made it possible to analyze the large amounts of available data and assign each candidate a status based on the likelihood of it being an exoplanet. The data has allowed the separation of the candidates into several subgroups, where the ones whose probability exceeds 99% are considered planets. Such division effectively eliminates the need to apply the time-consuming individual approach to objects that do not qualify, which will potentially speed up the confirmation process. Additionally, the analysis allows for a more precise estimate of the number of Earth-like planets.
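To make the two-phase logic concrete, here is a minimal, purely illustrative Python sketch; the light curve, candidate names, and probabilities are invented placeholders and do not come from the Kepler catalog or our actual pipeline.

```python
# Phase 1: flag brightness dips in a normalized light curve.
def find_transit_dips(flux, dip_threshold=0.99):
    """Return the indices where flux drops below the dip threshold."""
    return [i for i, f in enumerate(flux) if f < dip_threshold]

# A toy light curve: flat at 1.0, with two simulated transit dips.
light_curve = [1.0] * 20
light_curve[5] = light_curve[15] = 0.985
print(find_transit_dips(light_curve))  # -> [5, 15]

# Phase 2: keep only candidates whose planet probability exceeds 99%.
# The candidate names and probabilities below are made-up placeholders.
candidates = {"K-0001": 0.9995, "K-0002": 0.97, "K-0003": 0.42}
validated = {name: p for name, p in candidates.items() if p > 0.99}
print(validated)  # -> {'K-0001': 0.9995}
```

The point of the sketch is the thresholding step: candidates below the 99% bar are simply set aside for later individual review rather than processed one by one.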
Research Results
By applying the calculation model suggested by Morton, our team was able to make several important findings. First, we were able to positively identify more than a third of the potential candidates (1284 of the 4302) as planets. Another 1327 candidates were likely to be planets, but their probability did not exceed the required 99 percent rate, so they did not qualify. Finally, 707 candidates were confirmed to be phenomena other than exoplanets (Astrobiology Magazine par. 2).
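For completeness, the three groups above account for 3318 of the 4302 candidates; the arithmetic suggests that the remaining 984 correspond to the exoplanets already verified by other techniques, which are discussed below:

$$1284 + 1327 + 707 + 984 = 4302$$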
We have also applied our analysis to the 984 exoplanets found by other techniques to further validate the findings. More importantly, our analysis has allowed us to conclude that 550 of the 1284 exoplanets are most probably rocky, like the Earth, as opposed to gas giants, which are far less likely to be habitable and are thus outside the scope of the Kepler mission. The most important finding is that nine of the newly analyzed exoplanets lie within the habitable zone, the distance range between a planet and the star it orbits that allows water to exist in liquid form (Astrobiology Magazine par. 7).
The planets which pass the habitable zone criteria are of primary concern for NASA's mission, as they not only may be home to extraterrestrial life but also open possibilities for colonization by humankind. Before our team's findings, 12 planets qualified as Earth-like and thus theoretically suitable for human settlement. Our recent analysis raises this number to 21, increasing the chances of finding a "second home" by 75%, a significant gain given that the habitable zone and a solid surface are only two of a multitude of factors, such as temperature, size, the presence of water, and the atmosphere, that further determine a planet's value for humans.
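The 75% figure follows directly from the two counts above:

$$\frac{21 - 12}{12} = 0.75 = 75\%$$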
Funding Justification
Our research is important for two reasons. First, the validation of the exoplanet status of 984 objects confirms the soundness of our analysis model. Second, applying the analysis to the raw catalog data makes it possible to narrow the direction of the inquiry. However, the review process that conclusively validates exoplanets as habitable is still long and tedious. We therefore seek additional funding from your organization to develop the process in two directions.
First, we need to improve the calculation model to exclude potential false validations. Second, we need to devise an equally reliable mathematical model for the one-by-one review of the validated candidates. The resulting two-stage method will allow for much faster processing of the data obtained from the Kepler observatory without sacrificing consistency. As of today, the search for Earth-like planets is among the top priorities in astronomy, as it increases the chances of finding a planet suitable for colonization by humanity and, more importantly, of discovering extraterrestrial lifeforms. Its effects may range from a dramatic boost to almost all natural sciences to the possibility of exchanging experience with highly intelligent beings.
Ever since Galileo confirmed that Earth is just one of many planets revolving around the Sun, humanity has wondered whether Earth is the only inhabited planet in the Universe. In the 21st century, this question is no longer idle curiosity: humanity has a vast interest in finding and studying extraterrestrial lifeforms. Seager's equations suggest that there are significant chances of finding alien lifeforms in relatively close proximity to Earth (Maccone 278).
At least 20% of known exoplanets are located within the "comfort zone" of their stars and have a rock-based structure, meaning they could potentially serve as homeworlds for extraterrestrial species (Han 828). However, planets and moons located in less hospitable environments, be it extreme heat or permafrost, are still capable of carrying life. Some Earth-based organisms, called extremophiles, are incredibly resilient, capable of surviving harsh environments and even radiation. The purpose of this paper is to analyze Titan's potential as a life carrier and to make suggestions and estimates regarding the upcoming mission to the moon's surface.
The Most Likely Place Life Will Be Found Is Titan
Titan is the largest moon of Saturn. It was discovered by the Dutch astronomer Christiaan Huygens in 1655 (McKay 1). The distance between Earth and Titan is about 1.4 billion kilometers, which is why it took approximately 3-4 years for Voyager 1 and Voyager 2 to reach the moon. Due to a series of planetary features unique to Titan, it is considered one of the most likely locations to contain life.
Titan’s diameter is approximately 1.48 times larger than that of Earth’s moon and more than two times larger in terms of surface area. It possesses a dense nitrogen-rich atmosphere, thus being the only stellar body aside from Earth to possess a layer of gasses above the surface of the moon. According to recent observations, the chemical composition of Titan’s atmosphere consists of nitrogen (97-98%), hydrogen (2%+- 0.1%), and hydrogen (0.1%-0.2%) (McKay 3).
Trace amounts of various other gaseous elements have also been detected. The upper layers of Titan's atmosphere are filled with the products of methane breaking down under ultraviolet light; the resulting haze helps shield the moon from radiation and the solar wind. The moon likely has a large source of methane either on or under its surface that replenishes the gas in the atmosphere.
Due to its thick atmosphere, its great distance from the Sun, and its close proximity to Saturn, the surface of the moon is very cold. Average temperatures vary between -170 and -180 degrees Celsius, because 90% of the Sun's energy is reflected back into space by the thick smog in the upper atmosphere (McKay 4). At the same time, the atmosphere produces a greenhouse effect, which helps retain some of the warmth that does reach the surface of the moon; without it, Titan would be colder still. These extreme temperatures rule out the existence of Earth-based life on Titan.
The moon’s surface has clear processes of natural erosion and stable bodies of liquid methane and ethane, which is something no other moon or planet in the Solar System has. It is covered with crystallized water taking the shape of dust, small stones, and pebbles. This water was likely thrust towards the surface as a result of cryovolcano eruptions caused by Saturn’s gravity. There is evidence of large underground oceans located underneath the moon, where water can exist in liquid form.
There are several reasons why Titan should be considered the most promising world to explore for new lifeforms. The first is the abundance of organic chemicals present in the atmosphere. Ethane, methane, carbon monoxide, nitrogen, hydrogen, and many others make up the building blocks out of which new lifeforms could come into existence (Dohm and Maruyama 99). The presence of an atmosphere in general is a very important feature: without it, the moon's surface would be irradiated and incapable of retaining any meaningful sources of gases or surface liquids. Although low temperatures would threaten any lifeforms that use water as a solvent to facilitate metabolic processes, they would not be a detriment to organisms reliant on liquid methane or ethane.
Another argument for Titan, in comparison to other bodies in the Solar System, lies in the presence of water. There is enough evidence to support the existence of large sources of water underneath the moon's surface, and water is released in liquid form from various cryovolcanoes (Dohm and Maruyama 96). This indicates that the heat of the moon's core, together with the insulating permafrost above, keeps water liquid, which makes the existence of water-based life similar to that on Earth possible. One of the largest obstacles to that theory is the saline nature of the underground oceans, caused by contact with various minerals in the moon's interior, as well as the lack of sunlight to drive certain kinds of metabolism.
Another reason why Titan is the best place to look for proof of life is that life may still exist and even thrive there, either on the surface or underneath it. Candidates like Venus and Mars offer conditions far more hostile than Titan's: Mars is irradiated, and Venus's close proximity to the Sun makes its surface extremely hot, with average temperatures between 450 and 500 degrees Celsius (McKay 9).
Even if life existed on these planets at some point, the chances of finding any evidence of it are slim, as they are exposed to hostile elements, radiation, and meteor showers from outer space. Titan, on the other hand, is relatively safe from these influences due to its close proximity to Saturn, which acts as a shield from the Sun, whose light would otherwise cause methane to transform into complex organic compounds and degrade the moon's atmosphere. Saturn's gravity field also attracts asteroids that might otherwise hit Titan.
The last argument for choosing Titan as a location for space exploration is economic. Even if the mission finds no signs of life, Titan still represents a vault of valuable natural resources, such as oil and gas. Current estimates state that the amount of hydrocarbons on Titan exceeds Earth's total supply by at least ten times (Badescu and Zacny 34). Although humanity may well have transitioned to renewable and more efficient sources of energy by the time commercial extraction from Titan becomes possible, oil and gas can still be used to produce other materials, such as plastics, which are widely used in manufacturing.
Therefore, it would be easier to find supporters and investors for NASA's expedition, as the exploration of Titan would bring not only an enrichment of existing knowledge of extraterrestrial life but also long-term profit and sustenance for humanity. As it stands, automated missions can reach Titan in only 3-4 years (Badescu and Zacny 71). Human technology is likely to improve by the year 2050 and enable much faster and more reliable flights, trends that would make missions to Saturn's largest moon more viable. Discovering new lifeforms would significantly boost our understanding of the mechanisms behind the evolution of life, likely leading to advances in astronomy, biology, and medicine.
The Most Likely Life Forms We Would Find on Titan Are Extremophiles and Methane-Based Organisms
Although Titan offers the best bet of finding life in our Solar System, it is far from a perfect place. The extremely low temperatures at the surface and the high salinity of its underground oceans suggest that any life existing there would have to be extremely resilient. Observations of Earth-based extremophiles and theoretical knowledge of how life forms suggest only two likely variants for life on Titan: extremophiles living in complete darkness in the hypersaline oceans underneath the surface, or surface-based lifeforms.
One potential way for life to exist on the surface of Titan involves methane-based life, which would use methane either as a solvent or as a material for the biogenic membranes of cells and microorganisms (Lai et al. 7025). Such creatures would have to be capable of living under permafrost conditions without suffering from the extremely low temperatures. Another obstacle to life on Titan is the limited resources available for growth and sustenance. Energy sources on Titan are very scarce, owing to the lack of warmth, sunlight, and organic supply, meaning that whatever organisms inhabit it would have to be extremely efficient in spending energy and able to extract it through alternative biochemical processes.
Extremophiles present on Earth are viewed by many researchers in the fields of astronomy and biology as potential blueprints for lifeforms on other planets. Despite Earth being relatively benign in terms of climate when compared to Titan or Venus, there are still places where temperatures are significantly higher or lower than the median norm.
For example, the highest temperature on Earth (58 degrees Celsius) was observed in the Libyan Desert, whereas the lowest (-88 degrees Celsius) was recorded at the Vostok Station in Antarctica (Fendrihan 147). Some lifeforms are capable of existing at these extremes. One such creature is the tardigrade, which can survive temperatures as low as -272 degrees Celsius and as high as 300 degrees Celsius, along with high pressures, lack of sustenance, and ionizing radiation (Fendrihan 147).
Another form of life likely to survive in the hypersaline underground oceans of Titan is halophilic bacteria. These organisms are capable of astounding feats of resilience, surviving in waters approaching full salt saturation. They also fit several other criteria required for survival on Titan. According to King (4465), extremely halophilic organisms possess some of the following abilities that ensure their survival:
CO as an energy source. There is evidence of CO gas deposits underneath the surface of Titan, and some halophilic organisms can use CO to power cellular metabolism, thus requiring no oxygen to do so.
High-temperature resistance. Most halophilic organisms live in hot areas with high levels of water evaporation. Such locations include various saline lakes in Australia as well as the Dead Sea in the Middle East.
Resistance to radiation. Various laboratory tests have shown that halophilic organisms can survive exposure to radiation.
Most of these conditions match projections of the environment underneath Titan's surface. Close to the moon's core, temperatures are likely to rise to 60 degrees Celsius. High salinity would make the water uninhabitable for most other microorganisms but a perfect environment for halophilic bacteria. Lastly, sources of CO and methane rise from underneath the moon's surface; these would provide lifeforms with the energy needed to sustain themselves. Moreover, some halophilic microorganisms are also lithotrophic, meaning they could use sulfur and other minerals as energy sources, which further increases the chances of survival for certain kinds of bacteria.
There is also a theoretical possibility of encountering methane-based lifeforms on the moon's surface. The article by Lai et al. (7026) indicates that naturally occurring methane reactions can form biofilm membranes. The biological membranes of Earth organisms are formed by phospholipids, which play an important part in cell adhesion, ion conductivity, and cell signaling. In addition, they protect the organelles of cells from damage and control the passage of substances in and out of the cell. These are core functions found even at the simplest levels of cellular organization.
It is theorized that methane-based membranes could perform similar functions, making the existence of life possible even in permafrost. Methane's boiling point is -161.5 degrees Celsius, so at Titan's ambient -180 degrees Celsius methane-based life would be in no danger of vaporizing. Such organisms might exist even in bodies of liquid methane, assembling around sources of nitrogen escaping from underneath the moon's surface, much as methane-consuming microorganisms are found on ocean floors on Earth. However, high atmospheric pressure could potentially turn methane into ice.
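As a point of reference (using the standard handbook melting point of methane, roughly -182.5 °C, a value not given in the sources above), Titan's surface temperature sits just inside methane's liquid range:

$$-182.5\,^{\circ}\text{C} \;<\; -180\,^{\circ}\text{C} \;<\; -161.5\,^{\circ}\text{C}$$

which is consistent with the stable methane lakes described earlier.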
As is known from chemistry, methane is more sensitive to pressure than to temperature when it comes to solidifying into ice (Lai et al. 7031). However, considering that methane exists in a liquid state on the surface of the moon, it is unlikely that pressure causes such adverse reactions there. The existence of stable bodies of liquid is one of Titan's peculiarities that makes it similar to Earth.
Thus, we have two potential models of life, one on the surface and one underneath it. Halophilic bacteria could exist in the saline oceans of Titan, whereas the theorized methane-based lifeforms could be found on the surface. The abundance of nitrogen and nitrogen-based compounds could serve as a source of energy for these creatures, in addition to the methane and ethane also found on the surface. Neither lifeform would survive in the environment suited to the other: the underground would be too hot for methane-based organisms, and the surface too cold for halophilic bacteria. Nevertheless, exploring the surface should yield results about both types of creatures, as it would be possible to study the flash-frozen remains of underground bacteria ejected to the surface through cryovolcanoes.
What We Would Need to Explore This World Is Various Unmanned Drones
As it stands, our telescope technology has told us everything that can be learned visually about the surface of Titan from a distance, but much of what lies underneath the foggy atmosphere remains concealed. To explore Titan and its depths thoroughly, several kinds of robotic equipment may be utilized, since Titan offers a variety of environments unsuitable for the all-terrain-type vehicle used to explore Mars.
Therefore, three types of drones are necessary for properly exploring the outside and inside of the moon. The first type is an aerial drone. Since the atmosphere on Titan is thicker than Earth's and gravity is lower, an aerial drone would perform well, traveling over the surface unhindered (Badescu and Zacny 59). It would enable advanced topographic mapping of the moon as well as observation of naturally occurring events, such as cryovolcano eruptions, methane river movements, and cyclones. The first mission to Titan must include this drone, as accurate maps and recordings of the moon's surface from above would let us plan future missions.
The second type of drone to participate in the expedition would be an all-terrain drone. Being capable of crossing various obstacles is paramount for the long-term maintenance of the expedition (Badescu and Zacny 55). The purpose of this drone would be to collect and analyze samples, take pictures, and search for signs of microscopic life in ground probes. It could also examine the blocks of ice launched from cryovolcanoes, as they could contain remains of life hidden underneath the surface. Human technology is unlikely to permit drilling operations on Titan before 2050, so researchers would have to rely on these ejecta as primary sources of knowledge.
The last type of drone to be used in an expedition to Titan would be an aquatic drone, able to explore the bodies of liquid methane found on the surface of the moon (Badescu and Zacny 53). Methane-based organisms could potentially live in these natural reservoirs of organic gases. These machines would take samples of methane and of the ground at the bottom, analyze them, and transmit reports of their findings back to Earth. However, this drone would be highly specialized and thus less essential than the aerial or all-terrain machines.
There are several requirements for the research drones. The first and most crucial is autonomy: radio commands take too long to travel, and laser guidance would be unavailable through Titan's thick atmosphere, so the machines must make decisions on their own and perform their activities without human input. The second requirement is resilience and reliability. The machines would need to operate in extremely low temperatures for long periods; the first lander on Titan, Huygens, stopped transmitting after only about 90 minutes on the surface, whereas extracting any substantial data would require months of active operational time.
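The autonomy requirement is easy to motivate with a back-of-the-envelope signal-delay calculation based on the 1.4-billion-kilometer Earth-Titan distance quoted earlier:

$$t = \frac{d}{c} = \frac{1.4 \times 10^{9}\ \text{km}}{3.0 \times 10^{5}\ \text{km/s}} \approx 4{,}700\ \text{s} \approx 78\ \text{min}$$

A one-way radio signal therefore takes well over an hour, and a full command-response round trip takes roughly two and a half hours, far too long for real-time control.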
Conclusion
Titan is the most likely place in the Solar System to contain life. Its combination of location, natural resources, and geology makes it possible for not one but two forms of life to exist on and underneath its surface. In addition, exploring Titan would bring long-term economic profit and further humanity's advance into space. Telescopic observations alone would not be enough to explore Titan properly.
Aerial, all-terrain, and aquatic drones would be used to provide knowledge of Titan's surface and depths. This endeavor has the potential to discover extraterrestrial life, and the chances of finding it on Titan are higher than on Mars, Venus, Uranus, or other planets and moons in the Solar System.
Works Cited
Badescu, Viorel, and Kris Zacny. Inner Solar System: Prospective Energy and Material Resources. Springer, 2015.
Dohm, James, and Shigenori Maruyama. “Habitable Trinity.” Geoscience Frontiers, vol. 6, no. 1, 2015, pp. 95-101.
Fendrihan, Sergiu. “The Extremely Halophilic Microorganisms, a Possible Model for Life on Other Planets.” Current Trends in Natural Sciences, vol. 6, no. 12, 2017, pp. 147-151.
Han, Eunkyu, et al. “Exoplanet Orbit Database. II. Updates to Exoplanets.org.” Publications of the Astronomical Society of the Pacific, vol. 126, no. 943, 2014, pp. 827-837.
King, Gary M. “Carbon Monoxide as a Metabolic Energy Source for Extremely Halophilic Microbes: Implications for Microbial Activity in Mars Regolith.” PNAS, vol. 112, no. 14, 2015, pp. 4465-4470.
Lai, Chun-Yu, et al. “Bromate and Nitrate Bioreduction Coupled with Poly-β-hydroxybutyrate Production in a Methane-Based Membrane Biofilm Reactor.” Environmental Science & Technology, vol. 52, no. 12, 2018, pp. 7024-7031.
Maccone, Claudio. “Statistical Drake-Seager Equation for Exoplanet and SETI Searches.” Acta Astronautica, vol. 115, 2015, pp. 277-285.
McKay, Christopher. “Titan as the Abode of Life.” Life, vol. 6, no. 1, 2016, pp. 1-15.
The basic premises of cell theory concern the cell as the basic unit of a living organism. The cell is the smallest unit of a living thing and is therefore referred to as its structural unit. Cells share key characteristics, among them the flow of energy within cells and the passage of hereditary information. Matthias Schleiden and Theodor Schwann defined the cell theory. Technological advancement has contributed immensely toward broadening the meaning of cell theory, leading to new cell discoveries and a better understanding of cell functionality. All of these technological discoveries, however, relate closely to the premises of cell theory; it is thus evident that those premises play a crucial role in advancing our understanding of the cell. This paper will outline the basic principles of the cell and its functions in a living organism, and it will outline the diverse discoveries made about the cell through advances in technology.
The first premise of cell theory is that the basic unit of structure in all living organisms is the cell. The theory holds that cells develop from existing cells and are the vital units of structural organization and function in all living things: all organisms are composed of cells, and the cell is the smallest living unit in an organism, hence its designation as the structural unit. A further premise is that the functionality of an organism is the sum of the activities and interactions of its component cells. Cells share certain characteristics, including the flow of energy within cells, the passage of hereditary information through cells, and a common basic chemical composition (Barlow et al., n.d.).
Cell theory was proposed by Schleiden and Schwann, but the cell itself was discovered by Robert Hooke in 1665 through his observation of thin slices of cork; he named the tiny structures "cells" but did not know their function or structure. After the theory was proposed, Rudolf Virchow added the vital observation, in 1855, that new cells develop from the division of existing cells. Haeckel made another improvement to cell theory in 1866, concerning the ability of cells to transmit hereditary traits (Cell Theory, para 3). Through these developments and discoveries, the proposition was accepted and consolidated into cell theory, forming the basic premises that have underpinned the many discoveries and advances made since its formulation.
Recent discoveries on the structure and functionality of cells have increased the available knowledge and understanding of cell theory, leading to a far more advanced picture. All of these discoveries, however, build on the basic premises of cell theory and are improvements to its fundamentals (Barlow et al., n.d.). It has been established that the nucleus acts as the control center of the cell, holding in its chromatin, as DNA, all the information that guides cell growth, development, and reproduction. The organization of chromatin in the cell aids the understanding of disease and aging: the proteins lamin A and lamin B are involved in chromatin organization in the nucleus, and when these proteins are lacking, heterochromatin collapses toward the nuclear center, disrupting gene expression and affecting skeletal muscle (Liull, 2011). This understanding of the cell's operations helps explain heart failure and the rise of disease.
Another discovery concerns stem cells, recognized by the 2012 Nobel Prize winners, who showed that a mature cell can be returned to a blank, stem-like state from which it can change into other cell types, helping replenish the body as a person ages. This aids the treatment of various diseases and disorders, including those of the muscles and the nervous system. In 2012, Yale University researchers discovered certain cells that turn into white adipocytes, recognized as fat, leading to a better understanding of obesity. Cell theory itself has also been refined: apart from cells coming into existence through division, some are formed by fusion, as in the case of sperm and egg cells. Other additions to our knowledge are that all cells have a membrane covering, that all cells are microscopic in size, and that all cells share certain biochemical properties (Kelly, 2011).
In the television historical drama Life Story: The Race for the Double Helix (1989), James Watson and Francis Crick, both molecular biologists at Cambridge University, make a historic discovery that helped establish and revolutionize molecular biology. The two combined their intellectual and research talents and gathered the necessary materials to build a model.1 Using an X-ray image, they deduced that DNA has a double-helix shape and constructed an accurate model representation, which won the Nobel Prize in 1962.
The institutional setting of Life Story: The Race for the Double Helix is research. Watson joins Cambridge as a researcher, while Crick is working on a hemoglobin project, although his real interest is DNA. The two bond easily when Crick argues that the genetic code is carried in DNA, not in protein, as some research alleged. Watson agrees with him, and they both abandon their assignments to focus on researching and building a model of the DNA structure.2 They finally achieve a breakthrough by discovering the structure of DNA.
Watson and Crick are independent; they come up with the idea of building a DNA structure on their own. They are both confident, believing in their capability to achieve their target.3 Watson and Crick are also determined to achieve their goals.4 They work tirelessly until they arrive at a DNA model.
Chadarevian argues that the image of Crick, Watson, and the double-helical DNA model has great significance for the discovery. It recreates exciting memories of the entire research process; the images of the two researchers and the DNA model give a glimpse of the beginning of the new science of life. The image signifies the achievement and origin of the modern study of genetics.5 It also documents the research, reinforces its historical impact on science, and stands as an iconic symbol of an iconic discovery.