The Challenger disaster of 1986 led to the deaths of seven crew members (Jenab & Moslehpour, 2016). After the disaster, an inquiry was established in an attempt to identify the potential causes of the failure. The inquiry was also aimed at presenting appropriate guidelines that could be used whenever constructing new space shuttles. A fault tree analysis (FTA) can be used to identify and examine the major factors that led to the failure of the O-rings in the Solid Rocket Boosters (SRBs).
Fault Tree Analysis of O-ring Construction and the Explosion
Analysis of the FTA
The Fault Tree Analysis (FTA) model can be described using Boolean algebra. An OR relationship is usually represented using the symbol (+) and is equivalent to the OR-gate. The Boolean equation is Q = A + B, which means that event A or B (or both) must occur for disaster Q to take place. That being the case, the Challenger disaster took place because of several factors. The FTA model shows clearly that the explosion was caused by three major factors.
Such issues contributed (wholly or partly) to the failure of the O-rings in the Solid Rocket Boosters (SRBs). For instance, the tree analysis shows clearly that “the temperatures at the launch site and the communication barriers between Thiokol Engineers and NASA led to failure to adhere to test data limits” (Jenab & Moslehpour, 2016, p. 403). The other problem was caused by the failure of the O-rings. This failure was caused by the defective nature of the O-rings and existing temperature variations (Jenab & Moslehpour, 2016). The O-rings could not perform effectively at lower temperatures. The shuttle lacked effective alarm systems to report failures (Jenab & Moslehpour, 2016). These three contributing factors led to the explosion of the Challenger.
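To make the Boolean model concrete, the following is a minimal sketch that evaluates a top event Q = A + B + C for the three contributing factors named above. The event names and the probabilities are illustrative placeholders, not figures from Jenab and Moslehpour (2016).

```python
# Illustrative fault tree for the Challenger O-ring failure.
# The three basic events mirror the factors discussed above;
# the probabilities are placeholders, not published figures.

def or_gate(*events):
    """Top event occurs if any input event occurs: Q = A + B + C."""
    return any(events)

def or_gate_probability(*probs):
    """P(Q) = 1 - product of (1 - P_i), assuming independent events."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

# A: temperature/communication failure led to ignoring test data limits
# B: defective O-rings unable to seal at low temperature
# C: no alarm system to report a seal failure
A, B, C = True, True, True
print(or_gate(A, B, C))                    # True -> top event (explosion) occurs
print(or_gate_probability(0.3, 0.2, 0.1))  # 0.496, with illustrative inputs
```

Because the factors sit under an OR-gate, any single event is sufficient for the top event, which is why the combined probability uses 1 − (1 − P_A)(1 − P_B)(1 − P_C).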
Summary of Risks Observed
The above Fault Tree Analysis (FTA) of the O-ring construction shows critical issues that should have been considered before launching the Challenger. The failure of the O-rings led to the explosion of the space shuttle after 73 seconds. The first issue was associated with the construction of the O-rings. The O-rings in the Solid Rocket Boosters (SRBs) were constructed in such a way that “they could close tightly due to the forces produced during ignition” (Jenab & Moslehpour, 2016, p. 404). A phenomenon known as joint rotation occurred during the launch process. The occurrence of this phenomenon forced combustion gases to escape through the safety O-rings. The joints failed due to erosion by the escaping combustion gases.
The FTA can also be treated as a powerful risk assessment matrix. The matrix indicates that the shuttle did not have effective instrumentation to send signals after the O-rings failed. This fact explains why the crew could not cancel the launch after combustion gases escaped through the failed O-rings. The members of the crew were not warned about the leakage (Donovan & Green, 2003).
The third issue notable from this FTA is that NASA failed to adhere to existing test data limits. The shuttle was launched at ambient temperatures below those recorded during previous test missions. This issue emerged because of poor communication between NASA and the engineers responsible for constructing the SRBs (Jenab & Moslehpour, 2016). The involved parties failed to monitor the critical weather conditions recorded at the launch site. NASA decided to launch the space shuttle despite the existing concerns.
Ways to Mitigate the Risks
Several measures might have been undertaken to mitigate the risks identified above. To begin with, the failure of the O-rings shows conclusively that they were unable to seal the gases completely after ignition. The two O-rings used to separate the chambers failed, allowing gases to escape. The use of three O-rings could have improved the effectiveness of the SRBs. Three O-rings would have contained the gases and prevented the disaster (Hall, 2003). This fact explains why every subsequent space shuttle was equipped with SRBs containing three O-rings in every combustion chamber.
The issue of communication should have been taken seriously throughout the pre-launch period. The teams should have carefully analyzed the test data and the flow of information between the parties involved. The space shuttle was launched at temperatures lower than those specified by Thiokol engineers (Jenab & Moslehpour, 2016). The disaster could have been avoided by focusing on this critical information regarding the performance of the O-rings.
The involved parties should not have launched the shuttle. Technical concerns should have been identified and communicated to every member of the team. This approach would have played a positive role in averting the disaster (Donovan & Green, 2003).
Future shuttles should be equipped with effective warning systems and corrective devices. Such systems can inform the crew members about the performance of every critical element in the space shuttle (Hall, 2003). The engineers and technicians on the ground should also be informed about the performance of such elements. These lessons should therefore be taken seriously whenever designing and launching space shuttles in the future.
Reference List
Donovan, A., & Green, R. (2003). Setup for Failure: The Columbia Disaster. Teaching Ethics, 1(1), 69-76.
Hall, J. (2003). Columbia and Challenger: Organizational Failure at NASA. Space Policy, 19(1), 239-247.
Jenab, K., & Moslehpour, S. (2016). Failure Analysis: Case Study Challenger SRB Field Joint. International Journal of Engineering and Technology, 8(6), 401-405.
The Challenger was initially referred to as STA-099. The shuttle was built to serve as a test vehicle for the Space Shuttle program and was named after HMS Challenger, a British naval research vessel that sailed the Atlantic and Pacific Oceans during the 1870s.
When the Challenger was built, it underwent intensive vibration and thermal testing for a year. NASA awarded Rockwell, a Space Shuttle orbiter manufacturer, a contract in 1979 to build the Challenger by converting the STA-099. The Challenger arrived at the Kennedy Space Center in July 1982, becoming the second orbiter to be operational at the center.
The Challenger had been designed to be a historic craft, and many were optimistic it would outlive the rest. The shuttle took its maiden flight in April 1983 on the STS-6 mission, which saw the first ever spacewalk of the Space Shuttle program. The EVA (Extra Vehicular Activity) was performed by astronauts Donald Peterson and Story Musgrave.
The EVA lasted about four hours, and it was also during this mission that the first satellite of the Tracking and Data Relay Satellite System constellation was deployed. After completing nine successful missions, the Challenger was launched on STS-51L on January 28, 1986, and after a mere 73 seconds it exploded, killing all seven crew members (NASA, 2011).
This paper will look at the Shuttle 51-L mission, the organization involved in the Challenger project, the mechanical failure of the Space Shuttle Challenger, and the organizational behavior and management shortcomings that contributed to the disaster, and will finally recommend organizational behavior and management changes that could prevent a recurrence of a similar disaster.
Discussion
NASA Program
As the Space Shuttle program progressed, there was an increase in the demands being placed on NASA, and this resulted in an increased risk of disaster (Jarman & Kouzmin, 1990). The NASA team had a false sense of security, having carried out 24 successful missions (Kramer & James, 1987).
Prior to the launch, there were many wrangles within NASA, and managers were working in an environment of heavy overload and turbulence (Kramer & James, 1987). The management at NASA was characterized as diseased, full of decay and destruction (Kramer & James, 1987, p. 14).
NASA had no formal decision support system (DSS) program in place for shuttle operations before the launch. There were strong indications that decisions were being made through satisficing and shortcuts.
There were many compromises, and operations were greatly affected. NASA was accused of semi-uncontrolled decision making as it tried to satisfy the needs of the military, the scientific community, and industry, and this led to the space shuttle being declared operational even before its development stage had been completed (Kramer & James, 1987).
Decision making at NASA was done by default in the absence of a DSS. The organizational structure of the program was political, and manipulations were made to meet the requirements of the political power.
When the Reagan Administration declared the Space Shuttle “operational”, many employees at NASA lacked motivation and were left with the impression that decisions on the project should be made by the political administration (Jarman & Kouzmin, 1990).
Employees became complacent, and the safety of the shuttle was highly compromised as they tried to keep it on schedule and satisfy clients. This was the situation at NASA prior to the decision to launch the space shuttle (Dunbar & Ryba, 2008).
Shuttle 51-L Mission (Challenger Flight)
The 51-L mission was the 25th mission that NASA was going to undertake in its STS program. Shortly after launching on January 28, 1986, the Challenger exploded in mid-air, destroying the vehicle and killing all seven crew members on the mission. The mission was aimed at deploying a second Tracking and Data Relay Satellite as well as the Spartan Halley's Comet Observer.
The mission was also going to be the first with observers or passengers participating in the NASA Teacher in Space Program (Dunbar & Ryba, 2008). S. Christa McAuliffe was one of the crew on board, and she was going to conduct live lessons that would be broadcast to schools throughout the world (Dunbar & Ryba, 2008).
The destruction of the Challenger and the loss of life had a profound impact on society and the way it viewed the space program and particularly NASA. As this paper will discuss, the tragic decision to allow the launch of STS 51-L resulted from long-term contributing factors that were compounded by weak organizational behavior and management strategies. The outcome of this tragedy cost lives and resources and made people mistrust the space program.
Although the Challenger accident was blamed on the hardware failure of the O-ring in the SRB (Solid Rocket Booster), the decision made by the management was also flawed. The decision was based on faulty organizational behavior and management, further aggravated by the mismanagement of early information suggesting the launch be postponed (NASA, 2008).
Other factors besides organizational behavior and management also played a major role in the accident. They included the demands NASA was getting from the political ruling class to deliver and launch on the scheduled day (NASA, 2008).
The process of proving to the American people and the political system that there was a need for a reusable space shuttle had begun in the 1960s. The Challenger was one of the ways this could be proven, and thus a lot of pressure and expectation was put on the program. Unlike previous programs such as Apollo, the Space Shuttle was going to be used in space operations without having a defined goal (Jarman & Kouzmin, 1990, p. 3).
This presents the first contributing factor in the Challenger's accident. Without a defined role, the Challenger was going to be used as a utility vehicle for space operations, and the project therefore lacked strong financial and political support. In order to gain favor and political support for the project, the Challenger was sold and presented to the political elites as a “quick payoff” (Jarman & Kouzmin, 1990, p. 8).
The project also gained support by predicting that it could be used by the military as a means of enhancing national security. To industry, it was sold as a commercial opportunity, where companies could offer clients an opportunity to visit space. Many scientists in the program told the American public that the Challenger shuttle was going to be an American voyage with great scientific gain (Jarman & Kouzmin, 1990, p. 10).
To the world, the Challenger project was sold as a partnership that would include the ESA (European Space Agency), as well as a means of improving relations between nations and bringing together people of different nationalities, sexes, and races as crew members during missions (McConnell, 1988).
The process used to gain economic, social, and political support for the space shuttle can be cited as the second contributing factor in the accident (McConnell, 1988).
There was use of heterogeneous engineering, meaning that the engineering and management decisions in the project were structured in ways that would appeal to political, economic, and organizational factors, rather than being structured into a single mission aimed at achieving specific goals (Jarman & Kouzmin, 1990, p. 9).
When the Space Shuttle became operational, it was faced with many operational demands from many people. It had to live up to the promises that had been given by NASA. This placed a lot of pressure on the management team as they tried to coordinate the needs of the military, political elites and the scientific community.
The political pressure was to provide a space vehicle that was going to be reliable and reusable. Achieving this was difficult, as the competing demands hindered the creation of an effective system for integration and development. It was also infeasible to create a management support system that could cater for such diverse requirements.
There was also low morale among NASA employees, created when the Reagan Administration gave the shuttle the green light for operation even though the development stage had not been completed (Jarman & Kouzmin, 1990).
The American Congress expected the shuttle program to be financially self-supporting after billions of dollars had been spent going to the moon (Jarman & Kouzmin, 1990, p. 15). With this lack of support from Congress, NASA adopted and operated as a commercial business instead of a government program. It can therefore be concluded that the environment of the program prior to the launch had been one mired in conflict, shortcuts, and managerial stress (Jarman & Kouzmin, 1990, p. 15).
Mechanical failure of the Challenger
Before the launch date, concerns had been raised about the integrity of carrying on with the launch when temperatures were lower than those required for optimal performance. On a previous mission, 51-C, it had been noted that the booster joints were covered with soot and grease after a launch in cold weather.
Tests were carried out in the laboratory on the effect of low temperatures on O-ring resilience. It was recommended that the joints be rebuilt with steel billets, which would have meant a redesign of the field joint. By the time of the accident, the steel billets were not ready.
Engineers under Allan McDonald at Morton Thiokol made a presentation detailing the effects the cold weather would have on booster performance. This was necessary because temperatures on the launch date were expected to be lower than 35°F. After the concerns were raised, a meeting was convened, attended by various heads and engineers.
The people in attendance included engineers and the top management of the Marshall Space Flight Center, the Kennedy Space Center, and Morton Thiokol. The meeting was called to discuss the effect the cold weather would have on the mission, especially the boosters' performance.
The engineers gave a clear presentation arguing that the cold weather would have a major effect on joint rotation and O-ring seating. The tests carried out had only gone down to 53°F, and this presented a problem of the unknown (Rogers Commission, 1989).
Thiokol provided NASA with information concerning the launch and believed that the low temperatures were going to affect the O-rings to the point of being ineffective. The mission had previously been cancelled due to the cold weather, and NASA was not ready for another cancellation (Kramer & James, 1987, p. 23).
Although information provided by a group decision support system (GDSS) from another company showed that the O-rings would work under the predicted weather, engineers from Thiokol were skeptical about the data that had been input into it. This meant that NASA was relying on a GDSS that held flawed information (Kramer & James, 1987).
At this juncture, NASA asked Thiokol for a definitive confirmation or rejection of the planned launch. The representatives from Thiokol responded by recommending the launch be delayed until the temperatures were favorable. NASA continued to pressure Thiokol to change their minds, and a NASA Level III manager is reported to have retorted to the representatives, “My God, Thiokol, when do you want me to launch, next April?” (Kramer & James, 1987, p. 7).
It was after this that Thiokol representatives asked to be given time to rethink their recommendations. An engineer with Thiokol was asked to stop reasoning as an engineer and start thinking as a manager, which suggests that the group was placing organizational needs ahead of the safety of the shuttle.
Thiokol representatives returned to the GDSS and recommended that the launch proceed as planned. When NASA asked if there was any objection to this, no one from the GDSS objected. During the launch, the O-rings were severely affected by the cold weather, and this mechanical failure caused the accident and the eventual loss of the crew members (Kramer & James, 1987).
Critical analysis of the organizational behavior and management shortcomings that contributed to the disaster
The environment, organizational behavior, and management in which NASA and its contractors operated left a large margin for human error. However, Thiokol and NASA had a chance to avert the accident during the GDSS meeting before the launch. The accident can be attributed to the following organizational behavior and management failings.
First, the team, especially Thiokol, had known for months before the launch that the O-rings would be affected by the cold weather. However, the primary goal of the project was to meet the launch date. NASA was warned about the problem but downplayed it. This presents the first element of the mismanagement of information and bad organizational behavior that resulted in the accident.
Second, any suggestions and proposals that the launch take place were met with positive support from the management, while all suggestions of delay were shot down without consideration of the risks involved in carrying out the launch (Turban, 1988).
Third, there was a strong urge among the people involved in the project management to live up to the promises made. Despite the fact that Thiokol engineers were skeptical about the planned launch, their management went ahead and agreed with the other members of the GDSS to continue with the launch (Turban, 1988).
Fourth, there was bad organizational behavior and management on the part of Thiokol, because its management agreed with the other teams even though its engineers were telling them to stop the launch (Priwer & Philips, 2009).
Fifth, everyone involved in the top management of the project was afraid of how the political elites and the public would react to another cancellation. In the previous year, the launch had been postponed six times. Many in this group were starting to rationalize that if they had succeeded in the past, they would succeed this time as well (United States Congress, 1986).
Finally, the group, as stated before, was working with flawed data, and even when Thiokol engineers began to question the integrity of this information, nobody took action. People in the GDSS meeting who proposed that the launch be delayed were unwelcome, and the management had therefore made up its mind about the launch date.
During the meeting, NASA representatives were at times assertive and intimidated the other players to the point where warnings were disregarded. The meeting format is also faulted as bad organizational behavior and management, because it made it easy to downplay the personal opinions held by each member.
Instead of a speakerphone conference, the meeting should have been held at a place where all members were physically present, and perhaps the outcome would have been different. The GDSS failed at the point where Thiokol asked to be given five minutes to conduct a private meeting. Before this point, Thiokol had maintained that the launch should be cancelled, but after the private meeting it changed its mind.
Conclusion
The failure of the space shuttle Challenger can be blamed on organizational behavior. NASA has a variety of risk avoidance systems whose aim is to ensure that missions are safe. NASA is one of the smallest federal agencies and operates under a strict budget of US$15 billion (NASA, 2010).
This removes any flexibility during risky situations. The agency has been known to depend on its history for decision making. Since its establishment in 1958, its main aim was to beat the Soviet Union in spaceflight. Though its budget keeps being cut, it still sticks to its mission.
Budget cuts made NASA realize that it could involve the private business sector. This added to the pressure for success, which was also coming from the government. NASA had to research and develop its operations within limited time.
The normalization of deviance is another shortcoming of NASA's management. The term describes how some technical flaws go unscrutinized by the various safety bodies over time, because scrutiny is both expensive and time consuming. Due to the pressure to produce, it is seen as absurd to spend resources on problems that are not perceived as risks (Launius, 1992).
The failure to postpone the launch can be attributed to many possible reasons. Perhaps the significance of the O-rings was not appreciated, so the problem with them seemed a minor one. Other possible reasons are that the president was going to use the flight as a reference in his speech, or the sheer pressure being exerted by both the private sector and the government.
Recommendations
Failures can happen no matter what safety systems are applied. Though the cause of the failure was technical, the organizational failure carries a huge part in it. There are numerous things that NASA can do to keep these types of organizational failures from ever happening again (Lewis, 1988).
One of them concerns hierarchical power. Some management personnel in high posts have no interest in challenging the hierarchy; some would rather not make decisions that would jeopardize their positions. Congress, the body that offers regulatory oversight of NASA, has no desire to jeopardize NASA's interests through its decisions. These are huge obstacles to the changes that should be made in organizational behavior and management.
NASA should create a way for engineers to bypass the hierarchy and bureaucracy to stop unsafe missions from launching. If the engineers had had their way during the Challenger disaster, the O-rings would have been replaced or the launch postponed. Though these actions would have been very costly to NASA, they would not have been as expensive as losing the crew and the vehicle (United States Congress, 1986).
Bureaucratic procedures for obtaining supporting data should sometimes be waived, because the hunches or intuitions of engineers may take a long time to research and analyze (Hall, 2003).
Reference List
Hall, J. L. (2003). Columbia and Challenger: Organizational Failure at NASA. Space Policy. Berkeley: University of California at Berkeley.
Jarman, A., & Kouzmin, A. (1990). Decision Pathways from Crisis: A Contingency-Theory Simulation Heuristic for the Challenger Shuttle Disaster. Contemporary Crises.
Kramer, C., & James, A. (1987). The Space Shuttle Disaster: Ethical Issues in Organizational Decision Making. Michigan: Western Michigan University Press.
Launius, D. (1992). Toward an Understanding of the Space Shuttle: A Historiographical Essay. Air Power History, Winter.
Lewis, R. S. (1988). Challenger: The Final Voyage. New York: Columbia University Press.
McConnell, M. (1988). Challenger: A Major Malfunction. London: Routledge.
NASA. (2011). The Mission and the History of Space Shuttle Challenger. Web.
Priwer, S., & Philips, C. (2009). Space Exploration for Dummies. Hoboken: John Wiley & Sons.
Rogers Commission. (1989). Report of the Presidential Commission on the Space Shuttle Challenger Accident. Washington, DC: G.P.O.
Turban, E. (1988). Decision Support and Expert Systems. New York: Macmillan Publishing Company.
United States Congress. (1986). Investigation of the Challenger Accident: Report of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, Second Session. Washington, DC: U.S. G.P.O.
Since the International Space Station became suitable for human habitation, research has been initiated to establish the effects of space and microgravity on various phenomena of human life. Undoubtedly, space exploration breakthroughs have immensely contributed to the betterment of human life. Conspicuous evidence of the benefits of space travel to human life includes advances in human health, education, earth observation, telemedicine, and disaster management, among others.
The Role of Space Technology in Human Health
The International Space Station has provided a unique platform for carrying out research on the impact of space on human health, on earth and beyond. Research has been conducted on the station to provide a better understanding of phenomena affecting human health such as the environment, aging, disease, and trauma.
Physiological and biological tests have produced vital results, thereby improving our comprehension of the series of physiological events that are usually masked by gravity and enabling the invention of new and advanced medical technologies and procedures, including telemedicine, cell behaviour studies, disease models, and nutrition.
The Canadian Space Agency (2012) gives an inspiring narration of how a robotic arm has successfully performed a brain surgery. In 2008, Paige Nickason became the first brain tumor patient to receive surgery from a robot. Since then, numerous patients have received surgeries from the neuroArm. The development of the neuroArm owes a lot of credit to space exploration.
For a long time, robots have constituted a major component of space technologies, and currently the technology is being tailored to provide medical solutions, as evidenced in the neuroArm. MacDonald, Dettwiler and Associates Limited has made enormous advances in designing a two-armed neuroArm and developing a tele-operated surgery unit for children. Furthermore, the company is developing an image-guided independent robot system for the early diagnosis and treatment of breast cancer.
One of the major health challenges associated with space exploration is kidney stones and bone loss for astronauts during long stays in space. Astronauts have had to participate in regular physical exercise to counter the problem. In a bid to provide more efficient solutions to bone loss and renal dysfunction, astronauts take bisphosphonates, vitamin D, and calcium. The precautions for promoting astronauts' health have provided insights for treating osteoporosis in Canada and other parts of the world (Canadian Space Agency, 2012).
Space exploration and its associated technology have also improved human health through the invention of asthma management devices. The European Space Agency has developed a device for establishing the level of nitric oxide, an indicator of lung inflammation, in exhaled air. The device has been found to be beneficial to asthma patients, since it assists in monitoring and managing asthma and the suitability of medication (Canadian Space Agency, 2012).
Safe drinking water is essential to human life. Regrettably, many people all over the globe lack access to clean and safe water. Space technology has led to the development of improved water filtration and cleaning systems. The advances in the water treatment and recovery process provide a lasting solution for people experiencing water shortages in Canada and across the globe (Canadian Space Agency, 2012). These are among the many contributions of space technology to the improvement of human health.
The Role of Space Technology in Earth Observation
Advances in space exploration, particularly the creation of the International Space Station, have enhanced the observation of the globe to provide better comprehension of and solutions to environmental matters on earth (Neil, 2011). The Space Station provides a suitable location for viewing the globe's ecosystems.
The observations provide vital insights on the earth’s climate, environmental changes, and natural disasters. According to the Canadian Space Agency (2012), space technology has been vital for advances in remote sensing. In particular, the inception of the International Space Station has provided thousands of images of the globe’s surface, oceans, atmosphere, and the moon.
Space technology has also been vital in the provision of real time data. This has been instrumental particularly in providing information on natural disasters including tsunamis, volcanic eruptions, and earthquakes. The Canadian Space Agency (2012) acknowledges that the observation of the globe from space complements human operated systems and provides insightful information on the global environment.
The Canadian and other space agencies across the globe use the International Space Station to back research aimed at providing understanding of and insight into climate change. The Space Station has provided a suitable platform for viewing atmospheric changes and movements, the earth's surface, and oceans. For the past century and a half, human endeavors have caused substantial changes in the earth's environment.
These include the greenhouse effect, alteration of the nitrogen cycle, and destruction of land cover. Space exploration is instrumental in providing understanding of the relationship between human activities and changes in the globe’s climate. This information forms the bedrock for engineering sustainable developments for Canadians and the rest of humanity (Canadian Space Agency, 2012).
Even with the enormous milestones made in space travel, it still poses serious threats to the health of astronauts. Cosmic radiation and radiation from the sun pose a serious health hazard to astronauts; such radiation can cause fatal cancers, damage to the nervous system, and heart dysfunction. Other health problems associated with space travel include bone loss, fainting spells on return to the earth's gravity, cognitive problems, impaired cardiovascular functioning, muscle atrophy, and cabin fever (Canadian Space Agency, 2012).
Conclusion
Although space visits have posed serious health hazards to astronauts, space travel has continued to impact human life since its inception. Humanity owes a lot to the International Space Station as regards the educational, scientific, and technological milestones that have been achieved.
It has inspired the development of medical equipment and procedures to solve some of the most disturbing health issues with more precision. A better understanding of our habitat and the earth could not have been achieved without the aid of space travel. In addition, the study of sciences, mathematics, engineering, and technology would not be as motivating and interesting in the absence of space travel.
References
Canadian Space Agency. (2012). International Space Station Benefits for Humanity. Web.
Neil, M. (2011). What Does Space Exploration Do for Us? London: Capstone Global Library.
Space hazards refer to events that take place beyond the surface of the earth and that may affect the health of an individual. The study of space is best understood under the subject of astronomy, which explains what space is composed of as well as what can and cannot be done in space. Astronauts are best positioned to operate in space, since they have studied the universe and understand the risks and benefits of operations in space.
There are various hazards associated with space, mainly resulting from the events carried out there. Spacecraft are the most common manmade objects in space, along with the other objects astronauts use in space operations. Space hazards are mainly a result of the effects of weather and radiation.
Astronomy
Astronomy refers to the “science that investigates the distribution, composition, physical state, movement and evolution of material in the universe beyond the earth’s surface” (Becker 1). The distribution of matter or energy includes its position, arrangement and frequency over a certain area in the universe or in the whole space. The composition of matter includes its chemical composition qualitatively and quantitatively.
For instance, the qualitative composition of the sun includes hydrogen and helium while its quantitative composition includes the numerical or actual percentages of the substances that make it up i.e. the volume of the sun is made up of 95% hydrogen and 5% helium while its mass is made up of 78% hydrogen, 20% helium and 2% heavy metals (Becker 1).
The physical state of the matter in the universe can be in the form of “solid, liquid, gas or plasma” (Becker 1). On the other hand, the movement of material in space involves its positional change, which mainly involves rotation and revolution. Finally, the evolution of matter in space refers to the theories that have been put across by scholars to explain the origin and the future of matter in the universe. Some of these theories include the steady state, the big bang and the oscillating universe (Becker 1).
Space hazards
Many space hazards occur in the universe every day, affecting not only the objects operating directly in space but also technological processes on the surface of the earth such as communication and power grids (NASA 1).
The effects of space hazards are extreme because space is one of the most extreme environments imaginable, with extreme temperatures, both hot and cold, as well as threatening levels of radiation. These are the conditions that spacecraft are subjected to when operating above the insulating layer of the earth (Cain 1).
Launching is one of the extreme situations. The rocket keeps the spacecraft in space against the pull of gravity by giving it enough speed to avoid the pull of the much heavier objects near it (Jessa 3). While launching, this very “rocket shakes the air craft violently and batters it with extremely loud sound waves” (Cain 3). These phenomena are capable of shattering the delicate parts of the spacecraft, thus causing space hazards (Cain 3).
The extreme temperatures in space are also prone to causing hazards to objects operating there, such as spacecraft. Weather conditions vary extremely: “temperatures in space go hundreds of degrees below the freezing point and hundreds of degrees up especially when the spacecrafts are near the sun” (Cain 6).
Though there is no air in space, energy from the sun is transferred through radiation which, when absorbed by spacecraft, causes a lot of heating in objects that carry machines such as computers, which ought to operate under moderate temperatures. Though engineers have put effort into designing spacecraft with features that can withstand these conditions, this is not always enough, as explained below (Cain 6).
As we have seen earlier, energy in space is transferred through radiation of either the “trapped and the transient types” (Cain 8). Trapped radiation is composed of subatomic particles, mainly protons and electrons, which are usually trapped by the earth's magnetic field, thus creating the Van Allen radiation belts around the planet (Angelo 124).
It has been noted that “the transient radiation is mainly composed of protons and cosmic rays which constantly streams in space especially during magnetic storms on the sun” (Cain 8). Transient radiation can be very harmful in that, when the particles collide with an electronic circuit, they are capable of interfering with the contents of memory, causing spurious flows of current around the object, or even burning out computer chips (Cain 8).
Space hazards can also be caused by meteor showers, the little dust particles that make shooting stars visible to us while traveling through space. They are capable of sandblasting the large arrays of solar panels, thus degrading their usefulness (Cain 15).
A NASA-funded study carried out in January 2009 describes the consequences of solar eruptions on communication, power grids and other technology on the surface of the earth, the risks of extreme conditions in space as a result of magnetic activity on the sun and effects of extreme space weather (NASA 1).
According to the study, the sun is the star most vital to the earth, but its contribution to space hazards is severe. Apart from “emitting a consistent stream of plasma, it often releases quite a large volume of tons of matter known as the coronal mass ejections” (NASA 5).
The study further confirmed that space weather produces solar storm electromagnetic fields which induce extreme currents on wires, resulting in various adverse effects. Some of the most common effects of this induction include power line disruptions, which result in widespread blackouts and disrupt the communication cables that support the internet. Extreme space weather also produces solar energetic particles and causes dislocation of the earth's radiation belts (NASA 5).
These phenomena are capable of damaging satellites used for “commercial communications, global positioning and weather forecasting” (NASA 5). The modern advancement in technology did not mark the beginning of its problems with space weather; such problems have been recorded since the nineteenth century, when the telegraph was invented (NASA 5).
The professor and director of the Laboratory for Atmospheric and Space Physics at the University of Colorado appreciated NASA's efforts in beginning the extremely challenging task of investigating space hazards in a quantitative way. This is because their impacts are as great as those of natural hazards on the earth's surface and should not be ignored.
Space travel health and safety issues
For a long time, space travel has been associated with a lot of accidents and hazards, thus leaving the activity to just a small number of highly trained and highly motivated individuals.
The American space program has recorded three major space travel disasters: the Apollo 1 mission fire of January 27, 1967, the Challenger accident of January 28, 1986, and the Columbia accident of February 1, 2003 (Angelo 123). These disasters have caused space travel to be approached with great caution, such that it will remain a highly hazardous undertaking now and in the near future unless something is done about it (Angelo 123).
There are various health and safety issues concerning space travel, some of which include:
Launch abort, spaceflight and space based assembly and construction accidents, life support system failure, collisions of space vehicles and habitats with space debris and meteoroids, ionized radiation hazards, psychological stress due to strenuous living conditions and many others. (Angelo 124)
The three major challenges that people living in space face are “the substantial acceleration and deceleration forces when leaving and returning to earth, living and working in weightless conditions for long periods of time and chronic exposure to space radiations” (Angelo 124).
Astronauts and cosmonauts usually experience accelerations of up to six times that of gravity at the earth's surface. These conditions result in physical difficulties such as space adaptation syndrome, feelings of isolation and stress, as well as post-flight recovery problems.
Microgravity or weightlessness results in bone calcium loss, and though most of its other effects recover a few weeks after return to earth, recovery may take longer, especially after a long-duration space mission. Space travelers in orbital and interplanetary space are bombarded by galactic cosmic rays, which are very energetic atomic particles that contain protons, helium nuclei and heavy nuclei. These rays expose individuals to ionizing radiation doses in excess of the standards set for various space missions and occupations (Angelo 124).
Mars expedition personnel and lunar surface base workers are prone to various psychological disorders, including solipsism syndrome and shimanagashi syndrome. Solipsism syndrome is a state of mind in which a person feels that every occurrence is not real but a dream; it usually results from a small space base or confined expedition vehicle.
Shimanagashi syndrome is a feeling of isolation in which an individual feels left out even when life is comfortable, a situation which requires proper communication with the earth and careful design of living quarters (Angelo 125).
Conclusion
Living, traveling, and working in space, and other space missions, have been quite challenging and will remain so, since the ever-present dangers and hazards have not yet found a solution. However, there is a group of highly motivated and trained individuals for whom the extraterrestrial lifestyle outweighs any personal risks.
Though space hazards have been a challenge to most astronauts, a few measures can be put together to curb the challenge. First, most of the psychological and health issues related to space hazards can be attended to medically.
Works Cited
Angelo, Joseph. Encyclopedia of Space and Astronomy. New York: Infobase Publishing, 2006. Print.
As humanity has begun to explore its home system using nuclear fission propulsion technology as a method of effective space travel, the privatization and proliferation of various space faring vehicles, driven by the technology reaching its build-out completion, have created a sudden boom in the number of space faring craft.
This has created a worrying trend: owing to the inherent nature of inter-solar objects such as asteroids, meteors, comets and micrometeorites, various private vehicles have been subject to sudden jarring impacts, radically altering their courses or, in the worst cases, causing a malfunction in the nuclear energy source, leading to leakage and the subsequent death of the ship's passengers.
While nuclear fission propulsion technology utilizes Uranium-235 as an effective means of creating minor nuclear bursts of energy without having to utilize massive multistage rockets, its use is isolated to outer space, because even minor nuclear explosions on the surface of Earth are considered a danger to the continued health and well-being of people within the immediate area (Reisz and Rodgers, 50).
The process starts with a tiny nuclear fuel cube being exposed to an activation matrix composed of neutrinos and electrons in order to “excite” the atoms within the already unstable piece of nuclear material (Reisz and Rodgers, 50).
Once a sufficient level of neutrinos and electron streams has engulfed the cube, the process uses a multistage explosion drawing on various forms of fissile material in order to trigger a small nuclear explosion at the back of the ship (Fittje and Buehrle, 502 – 504).
This explosion is usually several million degrees in temperature (3.2 million to be exact) and through the use of an electron stream is encouraged to eject its energy outward following the path of the electrons (Fittje et al., 503 – 508). This in effect creates a chain reaction which incites forward motion and propels the ship towards a given destination at speeds previously unheard of for space faring vehicles (Grandin et al., 26 – 30).
Composite materials made out of nano-weave (created through the use of nanomachines and high tensile nanofiber) and titanium X22 (titanium support beams combined with artificial diamonds) have enabled radiation to be effectively sealed off from a ship’s passengers (Zweben, 37).
The fact remains, though, that such materials were never meant to take direct impacts from asteroids a few kilometers across. In fact, at any given time, numerous planetoids composed of rock, dirt and various minerals populate the space between planets (Zweben, 37).
While some are quite obvious, the fact remains that several objects move at varying speeds through space, in effect creating obstacles between one point and another. One of the detrimental effects of nuclear fission propulsion technology is that, from a rudimentary perspective, it is effective in getting an object from one point to another only so long as the route is in effect a straight line.
Course corrections can be made; however, these take time, and given the sheer speed a ship moves at within a given hour (30,000 miles per hour), the ability to make an effective course correction is inhibited by the fact that an approaching interplanetary object may be moving at a significant velocity as well and, due to its sheer size, may impact a ship faster than course corrections can be made (Lenard 404 – 408).
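As a rough illustration of how little time a crew may have, the following sketch estimates the time to impact using the 30,000 miles-per-hour ship speed quoted above; the object's speed and the detection range are assumed values for illustration only.

```python
# Back-of-the-envelope time-to-impact estimate for a head-on encounter.
# Only the ship speed comes from the text; the other two figures
# are assumptions made purely for illustration.

ship_speed_mph = 30_000          # figure quoted in the text
object_speed_mph = 20_000        # assumed head-on object velocity
detection_range_miles = 50_000   # assumed buoy warning distance

closing_speed_mph = ship_speed_mph + object_speed_mph
hours_to_impact = detection_range_miles / closing_speed_mph

print(f"Time to impact: {hours_to_impact:.1f} hours")  # 1.0 hours
```

Under these assumptions, a crew would have roughly an hour to plan and execute a correction, which illustrates why late detection defeats even a capable propulsion system.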
Within the past 20 years, ever since the privatization of space travel was put into effect, there has been a notable rise in the number of accidents wherein ineffective course corrections have been attempted, resulting in the deaths of passengers and crew.
While this paper does not disparage the recent boom in the space travel industry, it does criticize the reckless abandon of several space travel agencies that launch at nearly any given time in order to get their passengers from one location to another. Established routes have not been created, resulting in an increasing number of accidents as space travel has grown in popularity.
While emergency rescue missions can be mounted, the fact remains that upon arrival most ships have either drifted so close to a planet that they are engulfed by the planet's gravity and burn up in its atmosphere, or the impact has jarred the ship in such a way that the occupants are exposed to background radiation in space as well as radiation from the propulsion technology itself, resulting in their subsequent deaths.
It is due to this that this paper proposes the creation of various signal buoys that can be placed in various areas to create interplanetary highways, together with an advanced early warning system to alert ships of incoming objects before they get too close to avoid.
Such a system will utilize traditional solar panel technology in areas close to the sun but will have to utilize nuclear fusion reactors in areas closer to the outer planets due to the lack of solar energy (Theodorakos, 72).
It is expected that, through the strategic placement of possibly millions of these buoys throughout the solar system, an effective trajectory system can be created wherein routes can be planned out before execution, thus preventing future deaths as a result of reckless course trajectories (Janssens and van der Ha, 778 – 780).
Feasibility and Application
The use of signal buoys is actually a technology that has been utilized on Earth for hundreds of years to help ships orient themselves near landmasses. In fact, the basis of this particular proposal is the design utilized in a lighthouse, wherein the constantly rotating energy beam helps ships know when they are close to a particular landmass.
While it is infeasible to place signal devices on every single type of moving object in space, what is feasible is the creation of a moving method of detection to observe when a particular object is close by. While the use of traditional radar systems is ineffective in space, what can be used is a gamma wave projected from a rotating gamma-ray emitter on a buoy.
What this does is project two separate gamma radiation beams in two directions in order to seek out large masses in space such as asteroids, comets, etc. (Razzaque et al., 611 – 615).
The reason behind the use of gamma radiation is simple: because the propensity of a particular object to absorb gamma rays is directly affected by the thickness of the object's various layers, the mass of a particular object can be determined through the level of penetration of the gamma ray beam (Lisitskiy, 103927).
A gamma ray buoy can use sensors to determine the degree of penetration when a ray encounters a particular object and transmit that information to satellites, to be relayed to a combined early warning system and course plotter that determines a path avoiding certain spatial masses within a given area (Razzaque et al., 611 – 615).
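The attenuation argument here is essentially the standard Beer–Lambert relation, I = I0·e^(−μx), inverted to recover a thickness estimate from the measured surviving intensity. The sketch below assumes a placeholder attenuation coefficient for rocky material; it is not a value taken from the cited sources.

```python
import math

# Beer-Lambert attenuation: I = I0 * exp(-mu * x).
# A buoy that measures the surviving fraction I/I0 can invert the
# relation to estimate the thickness of material the beam crossed.

MU_ROCK_PER_M = 0.05  # assumed linear attenuation coefficient (1/m)

def transmitted_fraction(thickness_m, mu=MU_ROCK_PER_M):
    """Fraction of gamma intensity surviving a slab of given thickness."""
    return math.exp(-mu * thickness_m)

def estimated_thickness(i_over_i0, mu=MU_ROCK_PER_M):
    """Invert Beer-Lambert: x = -ln(I/I0) / mu."""
    return -math.log(i_over_i0) / mu

frac = transmitted_fraction(30.0)                                 # 30 m slab
print(f"Surviving fraction: {frac:.3f}")                          # 0.223
print(f"Recovered thickness: {estimated_thickness(frac):.1f} m")  # 30.0 m
```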
While there are concerns regarding such buoys inadvertently exposing the passengers of ships to deadly gamma radiation, the fact remains that the hulls of all ships are composed of lead, titanium, and nanofiber shielding effective enough to block all forms of radiation from entering the ship.
Solar and Space Weather Phenomenon Affecting Detection Grid
The inherent problem with inter-solar (within a solar system) communication is that solar weather and sudden changes in the Sun's activity can interrupt or delay transmission, resulting in possible problems cropping up in the detection system.
Solar wind, a stream of charged particles consisting of protons and electrons originating from the upper atmosphere of the Sun, has been known to cause significant problems for inter-solar communication systems (Lemaire, 20 – 23).
The reason is that the charged protons and electrons carried by the solar wind create an ionic discharge in electrical components when they interact with differently charged electrical equipment, usually resulting in up to 1,000 volts of static electricity building up from the interaction between the electrical components and the charged particles (Bhardwaj, 526 – 527).
As a result of this interaction, a significant electrical charge builds up, resulting in a subsequent overload of the system as the degree of exposure increases. While such a phenomenon is rare in communication systems within planetary atmospheres, systems located near the sun have a higher degree of exposure and thus a greater likelihood of sudden electrical surges frying the system (Bhardwaj, 526 – 527).
Even in cases where the stream of charged particles reaches the outer planets, there is still a significant risk of the charged particles negatively affecting equipment, which in some cases has been shown to shut down as a result of an electrical surge, due to sudden outbursts of high heat from the interaction reaching 1,000 Kelvin or more (Bhardwaj, 526 – 527).
While proper shielding can be utilized, it is often expensive and limits the number of space buoys that can be released due to the added cost. On the other hand, cases of solar wind directly affecting equipment in space are not as common as one may think, and as such this could be considered an acceptable margin for equipment error when taking into consideration the number of buoys that can be released.
Further examination of other space weather phenomena shows that geomagnetic storms, which result from either solar wind or a coronal mass ejection, are capable of creating disturbances in a planet's magnetosphere, resulting in possible fluctuations in the ability of signals to transmit properly (Pandey et al., 366).
While this doesn’t affect space buoys located up to 20 to 30 million miles from interplanetary bodies, buoys located near Venus, Saturn, Jupiter and other celestial bodies may be affected if their orbits are close enough to the planet.
The true problem with geomagnetic storms lies in their ability to increase solar ultraviolet emission heating in the upper atmospheres of various planets by up to 1,000 Kelvin or more, which in effect causes those atmospheres to expand.
For buoys located near massive planets such as Jupiter and Saturn, which already have a significant gravitational pull, the expansion of the upper atmosphere due to the increased heat may cause the buoys to crash into the planets themselves, as the orbits of satellites around the planetary body deteriorate (Lago et al., 69).
It must be noted that course adjustments can be utilized to maintain a proper geosynchronous orbit; however, this requires constant vigilance, and due to the sheer number of satellites involved it will require a secondary system to ensure that buoys remain within their proper orbits, such as the check sketched below.
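The secondary system mentioned above could be as simple as a periodic station-keeping check that flags any buoy drifting outside a tolerance band. The sketch below is a minimal version of such a check; the nominal altitude, the tolerance, and the buoy names are all assumed for illustration.

```python
# Minimal station-keeping check for a fleet of buoys.
# The nominal altitude and tolerance are illustrative values only.

NOMINAL_ALTITUDE_KM = 100_000.0  # assumed nominal orbital altitude
TOLERANCE_KM = 500.0             # assumed allowed drift before correction

def needs_correction(measured_altitude_km):
    """Flag a buoy whose orbit has drifted outside the tolerance band."""
    return abs(measured_altitude_km - NOMINAL_ALTITUDE_KM) > TOLERANCE_KM

fleet = {"buoy-01": 100_120.0, "buoy-02": 99_350.0, "buoy-03": 100_480.0}
for name, altitude in fleet.items():
    if needs_correction(altitude):
        print(f"{name}: schedule orbit correction ({altitude:.0f} km)")
```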
Finally, in regard to buoys located near the inner planets, a certain degree of concern is warranted for the sudden occurrence of solar flares from the sun. Solar flares can be described as events wherein the plasma located in the sun is heated to tens of millions of Kelvins, resulting in a sudden brightening and the release of energy from the sun's surface (McGregor, 195).
Another factor that must be taken into consideration is the fact that most solar flares can reach lengths of several million miles as well as widths of up to 3 million miles or more. The inherent problem with such an event is that it releases electrons, protons, various ions as well as gamma rays into the surrounding environment (Malandraki, 309).
For a system that utilizes gamma rays as a method of detection, wherein a single beam of high intensity gamma radiation can cover 5 to 6 million miles in total detection area, a solar flare can in effect blind most systems or cause a sudden malfunction over a long period of time.
This sudden blinding could enable large objects in space such as comets and asteroids to escape detection systems, given their ability to travel several kilometers within a few seconds. It is due to this that buoys located near the sun need to take into account solar flares as a cause of sudden malfunctions and adjust accordingly.
Planetary Characteristics and their Infeasibility as Possible Detection Platforms
So far it has been established that outer space has various detrimental effects that hamper the ability to create a detection network capable of finding large free floating objects in space. It must be noted that even though there are various difficulties in establishing such networks in space, the fact remains that attempting to create a network utilizing a planetary base such as Earth could prove to be a far more arduous affair.
Current methods of detecting objects in space from a planet are actually not far removed from technologies established in the early 21st century, since it was only in the mid 21st century that nuclear fission propulsion was invented. As such, methods of detecting large free floating bodies in space involve the use of radio signals, planetary and atmosphere based telescopes, as well as various forms of laser detection systems.
The problem with utilizing such systems is the delay before the data can be properly collected and mapped. Moreover, such methods of detection cannot account for the majority of free floating objects in space and as such are an inefficient means of detection.
While it has been proposed in the past that establishing observation sites on various planets and combining the data gathered could be an effective means of “mapping” the various objects in the solar system, certain problems with the conditions of various worlds make this proposal highly unfeasible.
For example, establishing an observation platform on the surface of Mercury entails dealing with the 700 Kelvin temperatures the surface is regularly subjected to, which raises the possibility of damage to the equipment; moreover, its close proximity to the sun means it is in the direct line of sight of a vast majority of solar cosmic rays, solar waves, and solar proton events, which can cause electrical malfunctions in even the most well protected equipment (Wang and Ip, 34).
Venus is also a poor choice for an observation platform because it has a dense atmosphere composed of carbon dioxide and clouds containing sulfuric acid (Gasparri, 72). Moreover, its surface is well known for significant levels of volcanic activity, which makes establishing a platform on the planet nearly impossible without the danger of subsequent eruptions destroying the installed equipment (Gasparri, 72).
While the surface of Mars may seem an ideal site for an observation platform due to its relatively thin atmosphere and absence of volcanic activity, accumulated data has shown that the regularly occurring Martian dust storms are highly corrosive due to the nature of the Martian soil (Millour, 504).
Gathered data shows that the average Martian dust storm can corrode even free-standing steel structures over time, so Mars would not be an ideal location for an observation platform either (Millour, 504).
Jupiter and Saturn are also out of the question, since the crushing pressures and gravitational forces encountered when descending into their respective atmospheres would destroy any equipment that could be set up there (Barrow and Matcheva, 609).
Conclusion
Based on the data presented, the best and most feasible method of implementing set space lanes and an early warning system for spacefaring vessels is to establish a buoy system along the routes travelled by ships.
While such a system would be vulnerable to solar weather and other local solar environmental effects, the fact remains that its implementation would help save lives, and it can therefore be considered an effective means of promoting safe space travel.
Works Cited
Barrow, Daniel, and Katia I. Matcheva. “Impact of Atmospheric Gravity Waves on the Jovian Ionosphere.” ICARUS 211.1 (2011): 609-622. Academic Search Premier. EBSCO. Web.
Bhardwaj, Anil. “X-Ray Emission from the Solar System Bodies: Connection with Solar X-Rays and Solar Wind.” AIP Conference Proceedings 1216.1 (2010): 526-531. Academic Search Premier. EBSCO. Web.
Chen, Shu-cheng S., Joseph P. Veres, and James E. Fittje. “Turbopump Design and Analysis Approach for Nuclear Thermal Rockets.” AIP Conference Proceedings 813.1 (2006): 522-530. Academic Search Premier. EBSCO. Web.
Fittje, James E., and Robert J. Buehrle. “Conceptual Engine System Design for NERVA derived 66.7KN and 111.2KN Thrust Nuclear Thermal Rockets.” AIP Conference Proceedings 813.1 (2006): 502-513. Academic Search Premier. EBSCO. Web.
Gasparri, Daniele. “Beneath the Shroud of Venus.” Sky & Telescope 120.4 (2010): 72. MasterFILE Premier. EBSCO. Web.
Grandin, Karl, Peter Jagers, and Sven Kullander. “Nuclear Energy.” AMBIO – A Journal of the Human Environment 39 (2010): 26-30. GreenFILE. EBSCO. Web.
Janssens, Frank L., and Jozef C. van der Ha. “On the Stability of Spinning Satellites.” Acta Astronautica 68.7/8 (2011): 778-789. Academic Search Premier. EBSCO. Web.
Lago, Alisson, et al. “Interplanetary Origin of Intense, Superintense and Extreme Geomagnetic Storms.” Space Science Reviews 158.1 (2011): 69-89. Academic Search Premier. EBSCO. Web.
Lemaire, Joseph. “Convective Instability of the Solar Corona: Why the Solar Wind Blows.” AIP Conference Proceedings 1216.1 (2010): 20-23. Academic Search Premier. EBSCO. Web.
Lenard, Roger X. “The Advisability of Prototypic Testing for Space Nuclear Systems.” Acta Astronautica 57.2-8 (2005): 404-414. Academic Search Premier. EBSCO. Web.
Lisitskiy, M. P. “Gamma-ray Superconducting Detector Based on Abrikosov Vortices: Principle of Operation.” Journal of Applied Physics 106.10 (2009): 103927-103939. Academic Search Premier. EBSCO. Web.
Malandraki, O., et al. “Particle Acceleration and Propagation in Strong Flares without Major Solar Energetic Particle Events.” Solar Physics 269.2 (2011): 309-333. Academic Search Premier. EBSCO. Web.
McGregor, S., et al. “Solar Flares and Coronal Mass Ejections: A Statistically Determined Flare Flux – CME Mass Correlation.” Solar Physics 268.1 (2011): 195-212. Academic Search Premier. EBSCO. Web.
Millour, Ehouarn, et al. “The Impact of Martian Mesoscale Winds on Surface Temperature and on the Determination of Thermal Inertia.” ICARUS 212.2 (2011): 504-519. Academic Search Premier. EBSCO. Web.
Pandey, Kavita, et al. “Relationship between Interplanetary Field/Plasma Parameters with Geomagnetic Indices and Their Behavior during Intense Geomagnetic Storms.” New Astronomy 16.6 (2011): 366-385. Academic Search Premier. EBSCO. Web.
Razzaque, S., et al. “The Gamma Ray Burst Section of the White Paper on the Status and Future of Very High Energy Gamma Ray Astronomy: A Brief Preliminary Report.” AIP Conference Proceedings 1000.1 (2008): 611-615. Academic Search Premier. EBSCO. Web.
Reisz, Aloysius I., and Stephen L. Rodgers. “Engines for the Cosmos.” Mechanical Engineering 125.1 (2003): 50. Business Source Premier. EBSCO. Web.
Theodorakos, George, et al. “The Distress Alerting Satellite System.” GPS World 22.1 (2011): 72. MasterFILE Premier. EBSCO. Web.
Wang, Y.-C., and W.-H. Ip. “A Surface Thermal Model and Exospheric Ballistic Transport Code of Planet Mercury.” Advances in Space Research 42.1 (2008): 34-39. Academic Search Premier. EBSCO. Web.
The solar system comprises eight planets (nine in older classifications that included Pluto), and each planet rotates on its own axis. Mercury is the innermost planet because of its close proximity to the sun.
This planet can hardly support any life because the temperatures at its surface are very high [1, 2]. However, it is difficult to observe Mercury because the sun’s rays create a high contrast, which in turn blurs the image. Perhaps this is why the planet could not be explored in its entirety at one go [2].
The first spaceship to ever visit Mercury was Mariner 10, in 1974. In March 2011, another spaceship named Messenger (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) entered orbit around Mercury to complete the unfinished business that Mariner 10 had commenced [2, 3]. This paper will therefore focus on the location of Mercury within the solar system. Besides that, the paper will analyze the Messenger expedition to Mercury’s surface.
Figure 1 – Mercury [9]
The distances between the planets of the solar system are so vast that human carriers such as planes cannot cover them. It is in this regard that scientists invented rockets. Since rockets were developed, astronauts have been able to explore the other planets that orbit the sun [3, 4]. Mercury has been of much interest because it is very close to the sun. One might think it unnecessary to learn about the other planets because each planet exists independently. However, the planets depend on one common source of natural light, and understanding what happens on the other planets would enable us to prepare for future changes such as global warming.
Mercury Basics
Mercury Composition
Mercury is composed of metallic compounds, and it generates a magnetic field weaker than that of the earth [9]. The density of this planet is almost the same as that of the earth, which explains why winds carried away the eroded soils; if its density were low, the soils would have remained on its surface. The landscape of Mercury has many escarpments, mountains, and craters.
Figure 2 – Mercury’s Landscape
There are also low-lying grounds that suggest the planet has been dormant for a long time [6]. Falling comets and asteroids must have caused the craters. These objects must have landed on Mercury’s surface during periods of volcanic activity; otherwise, they would not have left permanent impressions. The craters cover hundreds of kilometers, and some are more than a thousand kilometers wide [7, 30].
Just like the earth, Mercury’s interior is composed of molten material kept hot by high temperatures. Falling comets pierced through the crust, causing molten magma to leak into the pools that the falling objects created. Mercury is very small compared to the other planets.
One would thus expect it to go round the sun much faster than the other planets, but its orbit actually takes about 88 days to complete, while the planet rotates on its axis only once every 59 days [8, 20]. The earth orbits the sun in 365 days, which adds up to one calendar year. This revolution, combined with the tilt of a planet’s axis, is what brings the changes in seasons. For instance, during certain months days are longer than nights and vice versa, which is why darkness takes longer to set in.
The metallic components cause Mercury to reflect most of the light it receives from the sun, which is why it can be confused with the moon. There is hardly any water, or even water vapor for that matter, in Mercury’s exosphere [5], because the high temperatures would evaporate it. Amazingly, though, there is frozen water inside some craters; the crater floors shield the ice from the high temperatures, for otherwise it would have melted and eventually evaporated into thin air.
Mercury does not have a true atmosphere; instead, this near-vacuum is occupied by the exosphere, which contains elements such as potassium, helium, hydrogen, and sodium [10]. Besides that, the planet’s orbit long hindered its exploration. Mercury has a high orbital speed, and a visiting spaceship must therefore travel very fast, because any hesitation could cause the entire ship to be lost.
The above statement may not sound logical, yet Mercury is in close proximity to the sun, and the sun’s rays are still very strong when they reach its surface [11]. In fact, temperatures on this planet can exceed 400 degrees Celsius during the day and subside to lows of negative 170 degrees at night. It is important to note that not all craters on Mercury contain ice; the ice is limited to the north and south poles, which lie at the extreme ends of the planet and thus rarely come into contact with sunlight [11, 12].
It is possible to see Mercury through a telescope. One can spot the planet in the western sky just after nightfall and early in the morning in the eastern sky [12]. However, one cannot see the whole image because light does not shine on the entire planet; the planet appears in phases similar to those of the moon, such as half and full.
Table 1 – Properties of Mercury [39]
Mariner 10
Mariner 10 was the first spaceship that NASA (National Aeronautics and Space Administration) deployed to Mercury, in 1974. The mission aimed at eliciting an understanding of Mercury’s environment, features, and atmosphere [13]. The tools fitted on Mariner 10 included cameras with a digital tape recorder, an ultraviolet spectrometer, an infrared radiometer, and instruments for studying solar plasma, charged particles, and magnetic fields, along with radio occultation and celestial mechanics experiments.
Figure 3 – Mariner 10 [26]
Without these enhancements, Mariner 10 would not have captured any images, because the images had to be taken while the spaceship was in motion. For instance, the digital camera ensured that the ship could capture as many images as possible [14]. Scientists and explorers aligned the spaceship with Mercury’s orbit, and that is how it managed to analyze the movements of this planet.
The radio transmitters were used to send the findings captured on tape to the experts in NASA’s offices. The expedition did not last long because the ship ran out of gas, which led to the shutdown of its transmitter [15]. Scientists believe that the ship still circles the sun, although this cannot be confirmed because NASA received the last signal in 1975.
NASA officials argue that the devices responsible for conveying information from the ship failed due to their exposure to radiation from the sun. In the end, Mariner 10 was able to cover only about 40% of Mercury, because sunlight struck only one side of the planet; when the spaceship flew over it, the images covered one area while the rest of the planet lay in shadow [16].
The Need for Messenger
Since Mariner 10 did not complete its mission, NASA decided to deploy another spaceship to provide more information on Mercury. The ship was called Messenger (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) [16, 17]. Just as its name suggests, the ship was destined for Mercury with the aim of analyzing the planet’s environment and the chemical matter on its surface. Messenger was launched to address the following specific questions about Mercury [38]:
What was the planetary formational process of Mercury? Was it through volcanism or coverage by particles from craters?
How can we describe the history of Mercury in regards to geologic concepts?
What are the state and source of the planet’s magnetic field?
What is the nature and size of Mercury’s iron core?
What are the essential volatile species on Mercury?
What are the radar-reflective components at the planet’s surface?
Messenger used the same approach as its predecessor, Mariner 10, but was bound to capture more information regarding Mercury thanks to its more advanced instruments. Messenger continues to address the questions above, as the ship still sends information back to earth. More truth waits to be unveiled through space expeditions because, for now, Mercury remains out of bounds due to its unfavorable conditions.
In this regard, space explorations have advanced our knowledge of the solar system. The solar system is far from completely explored, and newer technologies are still emerging to ensure we are able to get more information regarding Mercury. The flybys have been very useful because they allowed scientists to understand the compositions of other planets along the way.
The position of Mercury favors space expeditions because it is very close to the sun, and thus the data obtained from it reflects conditions at the core of the solar system. Space exploration helps us understand why the earth is the only planet that can support life.
Messenger
Messenger’s mission was commissioned by NASA in 2004 [18]. The first Mercury flyby took place in January 2008, which means the probe team needed almost four years of preparation and travel to ensure that what happened to Mariner 10 would not recur [19]. During this first flyby the spaceship navigated over Mercury and managed to capture numerous images of the planet, including the other half that Mariner 10 did not cover.
Test Drive
The second flyby happened in October 2008. The ship had earlier been tested in 2005, when NASA had it fly over the earth’s surface [20]. During this test drive NASA confirmed that all the equipment would function as intended. The ship was able to analyze the compounds in the earth’s atmosphere, and the tools fitted on it were able to gauge the magnitude of the earth’s magnetism [21]. Engineers repeated these tasks once the ship was in Mercury’s orbit.
In October 2006, Messenger navigated the space adjacent to Venus with the aim of acquiring geological knowledge of the planet. However, NASA did not accomplish the mission because the positions of the earth and the sun at the time made it difficult to capture images without natural light [22].
The explorers had to repeat the flyby in June 2007 because the first one did not bear any fruit. During this second episode, the tools mounted on the ship captured images of the surface of Venus. This success was due to the sun’s radiation, which provided ultraviolet light that enhanced the imaging process [23]. The camera also captured images in x-ray format.
Encounter with Mercury
After the first and second flybys in 2008 were successful, the Messenger team scheduled the last flyby for 2009. It is important to note that the aim of each flyby was to capture more information than the previous one, and therefore the ship had to fly closer to get a clear view.
During this last session, a fault forced the ship to proceed in safe mode [24]. This challenge did not stop it from navigating the space, but it eventually led to the loss of some captured data. Scientists managed to regain control of the situation seven hours later. The ship had to go deeper into space, and it was difficult to achieve this objective without altering its velocity. Engineers therefore executed DSM-5, a deep space maneuver designed to attain the change in velocity necessary for orbit insertion.
Figure 4 – Messenger Orbiting Mercury [13]
If DSM-5 had not been executed, the ship would not have been able to adjust its speed properly once it reached the depths of the sun’s gravity well. In addition, the ship would have consumed a great deal of fuel travelling at a higher speed [25]. In March 2011, NASA managed to place Messenger at a strategic position in Mercury’s orbit that shields it from the sun’s radiation.
Spaceship Design
The ship’s structure measures 1.82 meters in height and 1.27 meters in width [26]. The engineers built the structure from graphite fiber composite, which keeps the propellant tanks in place. The graphite panels form a compartment that houses the LVA (large velocity adjust) thruster, attitude regulators, rectification thrusters, antennas, the instrument pallet, and an extensive ceramic textile sunshade.
The ship carries up to 607.8 kilograms of propellant [27]. Four 22 N monopropellant thrusters steer the ship during major engine burns. Attitude regulation is normally performed by a reaction-wheel system, with the monopropellant thrusters as a backup, and it relies on information supplied by star trackers in conjunction with an inertial system and six sun sensors [28].
Two small deep space transponders convey information through the Deep Space Network (DSN). The link is served by three antennas that transmit signals at high, medium, and low gain [28]. All three antennas transmit at 8.4 GHz; on the other hand, they receive signals at 7.2 GHz.
These antennas are mounted on the sun-facing front of the ship, and another set of three antennas is on the rear [29]. This positioning ensures that the ship can send and receive signals both before and after approaching the planet.
The spaceship obtains power from a two-panel solar array that generates 450 watts. Each panel is movable, meaning it can rotate as the position of the sun changes, and the panels are fitted with reflectors to divert excess solar energy. The solar energy is stored in a 23 ampere-hour nickel-hydrogen battery.
A computerized system operates the Messenger. The system’s processors, manufactured by IBM, are highly resistant to radiation. One processor runs at 25 MHz, a capacity that is ideal because this processor handles most of the tasks in the system. The other processor runs at 10 MHz and is responsible for rectifying any errors that may arise during the ship’s expedition [30].
The ship has a recorder specialized for storing still images, with a capacity of one gigabyte. The recorder has a processor that keeps the information obtained from space in compressed formats, which the machine then conveys to NASA offices for interpretation.
This means that if the spaceship transmitted information to the wrong target, the data would be useless. The spaceship employs the SciBox application program to coordinate the tools responsible for capturing images and radiation measurements [31]. The program ensures that each tool functions independently to avoid a clash of system requests, much like the interrupt requests allocated to computer devices so that the system knows which device seeks its attention.
Tools on the Spaceship
The Messenger has many instruments that enhance its performance, including several CCD cameras. Two of the cameras are unique in their specifications because they capture images at wide and narrow angles respectively [32]. These cameras have extremely high resolution, able to resolve surface features down to roughly 250 meters per pixel.
Figure 5 – Imaging System [30]
The wide-angle camera captures colored images when the need arises. The cameras reside on a raised mount, most often on the edges of pointed structures. The wide-angle camera is useful because it helps to distinguish between the objects in a given image. Without the cameras, it would be very difficult to analyze the features of Mercury, because the spaceship does not stop during the expedition.
The gamma-ray spectrometer (GRS) gauges the gamma rays emitted from Mercury’s surface. This device is capable of detecting the presence of certain elements from those gamma rays [33]. The elements most likely to be detected in gamma rays emitted from a planet include oxygen, silicon, sulphur, iron, hydrogen, potassium, thorium, and uranium.
Figure 6 – Components of Messenger
The neutron spectrometer (NS) detects hydrogen-bearing compounds in mineral matter within a range of forty centimeters of the surface. The analysis begins when cosmic rays strike the surface of the mineral; the cosmic rays penetrate about forty centimeters into it. Without this ability, it would be difficult to understand the composition of the minerals, because the materials are difficult to extract [33].
Figure 7 – X-ray Spectrometer [16]
The x-ray spectrometer (XRS) enables one to probe beneath the surface of Mercury by identifying the x-ray spectral lines originating from magnesium, aluminum, sulphur, calcium, iron, and titanium within a range of 1-10 keV. This device uses the same technology as the x-ray equipment used in diagnosing humans. Through x-ray imaging, we are able to understand the core structure of Mercury.
The magnetometer (MAG) gauges the magnetic field emanating from the core of a given planet and determines whether that field is strong or weak [34]. It is the MAG that established that the earth has a stronger magnetic field than Mercury.
The Mercury laser altimeter (MLA) evaluates the heights of tall features on the surface of Mercury, such as mountains and ridges. It fires infrared laser pulses at the surface while the ship is in motion and times their reflections; from the round-trip travel time of the light, scientists derive the exact details of the land formations below.
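To illustrate the time-of-flight principle behind laser altimetry, here is a minimal sketch in Python; the pulse timing and altitude figures are purely illustrative and are not actual MLA data.

```python
# Minimal sketch of laser time-of-flight ranging, the principle behind
# altimeters like the MLA. All values here are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def range_from_pulse(round_trip_seconds: float) -> float:
    """Distance to the surface from a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~2.67 ms corresponds to a range of ~400 km.
altitude_m = range_from_pulse(2.67e-3)
print(f"range to surface: {altitude_m / 1000:.1f} km")  # ~400.2 km

# Terrain height follows as the difference between the spacecraft's
# known orbital radius and the measured range to the surface.
```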
Additionally, the Mercury atmospheric and surface composition spectrometer (MASCS) analyzes the attributes of Mercury’s atmosphere. This knowledge emerges because the instrument captures the ultraviolet rays that fall on the surface of Mercury as they are reflected [35].
The reflectance helps in detecting the presence of titanium and iron on the surface of Mercury. Besides that, the energetic particle and plasma spectrometer (EPPS) gauges the charged particles within Mercury’s magnetosphere, using an energetic particle spectrometer alongside a fast imaging plasma spectrometer to observe the charged particles that come from the planet’s surface.
Messenger Data to Date
The expeditions of Messenger to Mercury have inspired other agencies to embark on their own missions. Japan is planning to combine its efforts with the European Space Agency on BepiColombo, a joint mission to Mercury [36]. The two partners have two common agendas: mapping the planet and evaluating its magnetosphere.
BepiColombo is expected to enter the orbit of Mercury by the year 2019, because the mission aims at gaining geological knowledge of the planets adjacent to Mercury in its first years. This suggests that it will employ the same tactic of flying past the planet several times [37].
The approach is sensible because it worked for Mariner 10 and Messenger. The Russians, too, expect to deploy a spaceship into the orbit of Mercury.
Conclusions
It seems that space agencies are focusing on understanding the structure of Mercury. The space expeditions are significant because they help to eliminate assumptions; for instance, before the deployment of Mariner 10, some people thought Mercury was a star while others thought it was a moon of its own kind.
Other agencies should learn from the experience of Mariner 10 and keep their distance from the sun to avoid its radiation. Telescopic observations should also continue, because they can capture events as they unfold. The materials used to construct a spaceship should be resistant to radiation, like the ones used on Messenger.
Before deploying a spaceship into space, engineers should test-drive it to prove that all the tools are functioning as expected. The structure of the spaceship should be flexible enough to take full advantage of solar energy, as was seen with Messenger. Moreover, engineers must not neglect the scientific tools on a spaceship, which means a backup plan is required in case of any fault.
References
“BepiColombo – Background Science”. European Space Agency. Web.
“Countdown to MESSENGER’s Closest Approach with Mercury”. Johns Hopkins University Applied Physics Laboratory. 2008. Web.
“ESA gives go-ahead to build BepiColombo”. European Space Agency. 2007. Web.
Bakich, M. E. (2000). The Cambridge Planetary Handbook. New York: Cambridge University Press.
Baumgardner, J., Mendillo, M., & Wilson, J. K. (2000). A Digital High-Definition Imaging System for Spectral Studies of Extended Planetary Atmospheres. I. Initial Results in White Light Showing Features on the Hemisphere of Mercury Unimaged by Mariner 10. The Astronomical Journal, 119 (5): 2458–2464.
Biswas, S. (2000). Cosmic Perspectives in Space Physics. Astrophysics and Space Science Library. New York: Springer.
Correia, A.C. & Laskar, J. (2004). Mercury’s capture into the 3/2 spin–orbit resonance as a result of its chaotic dynamics. Nature, 429 (6994): 848–850.
Dantowitz, R. F., Teare, S. W., & Kozubal, M. J. (2000). Ground-based High-Resolution Imaging of Mercury. Astronomical Journal, 119 (4): 2455–2457.
Lakdawalla, E. (2008). MESSENGER Scientists ‘Astonished’ to Find Water in Mercury’s Thin Atmosphere. The Planetary Society. Web.
Espenak, F. (2005). Transits of Mercury. NASA/Goddard Space Flight Center. Web.
Fleming, N. (2008). Star Trek-style ion engine to fuel Mercury craft. The Telegraph. Web.
Kelley, D. H., Milone, E. F., & Aveni, A.F. (2004). Exploring Ancient Skies: An Encyclopedic Survey of Archaeoastronomy. Birkhäuser.
Ksanfomality, L. V. (2006). Earth-based optical imaging of Mercury. Advances in Space Research, 38 (4): 594.
Laskar, J., & Gastineau, M. (2009). Existence of collisional trajectories of Mercury, Mars and Venus with the Earth. Nature, 459 (7248): 817–819.
McClintock, W. & Lankton, M. (2007). The Mercury Atmospheric and Surface Composition Spectrometer for the MESSENGER Mission. Space Science Reviews, 131(1): 481–521.
Moore, P. (2000). The Data Book of Astronomy. New York: CRC Press.
Roylance, F. (2011). Messenger successfully goes into orbit around Mercury. Baltimore Sun. Web.
Schaefer, B. E. (2007). The Latitude and Epoch for the Origin of the Astronomical Lore in Mul.Apin. American Astronomical Society Meeting 210, #42.05 (American Astronomical Society) 38, 157.
Srinivasan, D.K., Perry, M.E., Karl, B. F, Smith, D.E., & Maria, T.Z. (2007). The Radio Frequency Subsystem and Radio Science on the MESSENGER Mission. Space Science Reviews, 131(1), 557–571.
Staff. (2008). MESSENGER: Mercury and Ancient Cultures. NASA JPL. Web.
Tariq, M. (2004). MESSENGER to test theory of shrinking Mercury. USA Today. Web.
Space travel began in the 1940s when Germany launched a rocket into space (Morris 46). Afterwards, the Soviet Union and the U.S. started space exploration. Yuri Gagarin and Alan Shepard were the first and second people to travel to space respectively (Gifford 35). Pros of going to space include the advancement of scientific knowledge, economic benefit, and technological advancement. Cons of going to space include high costs, high risk, and the possibility of conflicts between nations (Gifford 35).
History and reasons of going into space
Space exploration began in Germany in the year 1942, when, during World War II, German scientists launched the V-2 rocket into space (Gifford 36). The first major scientific space exploration was conducted by the United States in 1946 with the help of German scientists (Morris 48). At the time, exploration involved not humans but cosmic radiation.
In 1957, the Soviet Union successfully launched the first satellite into space, marking the beginning of space exploration. Yuri Gagarin was the first person to go into space, on board a human spaceflight known as Vostok 1. After the success of the Soviet satellite, the U.S. invested more in space exploration. The first American human flight to space occurred in 1961. The first American to travel into space was Alan Shepard, who did not orbit but only entered space. Shepard travelled to space through the Project Mercury flight program, a feat achieved aboard the Mercury-Redstone 3 spaceflight (Morris 48). The first flight excluded orbiting.
Therefore, another astronaut went into space in 1962 (Gifford 38). John Glenn executed the first American orbital flight around the earth that year. The main reason for going to space was to explore the possibility of life existing in destinations other than the earth. The objective of Glenn’s flight was to orbit the earth and collect data that would be used for further exploration (Scott 60). The main reasons for space exploration include scientific discovery and the enhancement of national security. In contemporary America, another reason for space exploration is economic benefit.
Pros of going into space
Three pros of space travel are the advancement of scientific knowledge, economic benefit, and technological advancement (Morris 51). Going to space is an opportunity for scientists to discover new resources that could improve the quality of life on earth. In addition, it aids in the understanding of the universe and the prediction of weather patterns (Morris 53).
Three main areas of scientific study include evolution of life, astronomy, and advancement of technology for space exploration. Studies regarding the evolution of the universe can furnish information about the origin of the earth (Scott 66). Satellites are used to study weather patterns and predict natural catastrophes that have caused massive destruction in the past (Morris 52).
Space exploration has many economic benefits, including the creation of employment and income generation. In the U.S., space exploration programs employ more than 500,000 people (Scott 28). Research has shown that space exploration generates income that is vital to the growth of the economy. For instance, space tourism, satellite radio, and navigation systems are sources of income, and the country will benefit further when space tourism commences. Finally, going to space is important because it promotes technological advancement and innovation.
Travel to space promotes technological advancement and innovation because of the need to develop spaceships and advanced technologies (Scott 28). Technological advancements that have resulted from space exploration studies are used in several areas; for instance, they are used to manufacture water filtration systems and wireless electric switches. In order to travel to space, there is a need to develop advanced technologies. Likewise, the need for advanced means of communication has led to the development of communication technologies that have increased global communication.
Cons of going into space
Three cons of space travel are high costs, high risk, and the potential for conflicts among nations. The cost of exploration, technological advancement, innovation, and scientific research is very high (Scott 68). The money used to fund these undertakings could be used to develop other sectors of the economy. Opponents of space exploration argue that the money spent on such programs could instead improve the lives of Americans, for example by being channeled toward health care and education.
In addition, space exploration programs are funded using taxpayers’ money and therefore exert financial pressure on taxpayers. Travelling to space is also very risky because of uncertainties and mechanical errors; for instance, the Space Shuttle Challenger killed seven crewmembers after some of its components failed to work (Scott 76). Finally, travelling to space is a potential source of conflicts between nations. Possible causes of such conflicts include the ownership of space resources and the illegal use of advanced technologies to spy on other countries.
Conclusion
Space exploration began with the 1942 launch of a rocket into space. In 1957, the Soviet Union successfully launched the first satellite. Yuri Gagarin was the first person to go into space, on board a human spaceflight known as Vostok 1. After the success of the Soviet satellite, the U.S. invested more in space exploration. America sent astronauts to space in order to explore the universe and advance scientific knowledge.
The pros of space travel include advancement of scientific knowledge, economic benefit, and technological advancement. On the other hand, the cons of space travel include high costs, high risk, and conflicts between nations. Money used in research and technological advancement could be used to improve certain sectors of the economy such as health care and education.
Works Cited
Gifford, Clive. Space Exploration: Technology all Around Us. Philadelphia: Black Rabbit Book, 2005. Print.
Morris, Neil. What Does Space Do for Us? New York: Raintree Publishers, 2011. Print.
Scott, Carole. Space Exploration. London: Dorling Kindersley, 2010. Print.
Space exploration has emerged as an important mission for scientists in the 21st century. Two of the major space agencies, NASA and the European Space Agency, have recognized the significant potential of exploring and looking for transits from space. NASA has made considerable investments in space exploration programs. One of its most ambitious projects is the Discovery program series, which comprises many relatively low-cost, quickly implemented precision missions for exploring planets. The Kepler is the tenth principal investigator-led mission selected in the NASA Discovery program (Koch, 2004). This space mission was designed to look for transits from space, and it was successfully launched in March 2009. The Kepler Space Observatory was named for the German astronomer Johannes Kepler, and it successfully accomplished its core objectives until May 2013.
Renee (2010) states that the Kepler Space Observatory might never have become a reality were it not for the work of Johannes Kepler about four centuries ago. This German astronomer was deeply fascinated with the universe, and this led him to formulate the laws of planetary motion. Kepler’s first published work on planetary orbits was the “Mysterium Cosmographicum” booklet written in 1596. This work attracted the attention of Tycho Brahe, a Danish nobleman who had a keen interest in astronomy and had gathered a vast amount of hard data on planetary motions. Brahe asked Kepler to collaborate with him in 1600 in order to create a mathematical model for planetary motions.
By 1605, Kepler had come up with two of his laws of planetary motion, which stated, “Orbits are ellipses with the Sun at one focus, and a planet’s orbital speed varies depending on its distance from the Sun” (Renee, 2010, p.24). Kepler’s third law was discovered in 1618, and it stated, “The orbital periods of the planets were related to their average distance from the Sun” (Renee, 2010, p.24). These three laws are fundamental to astronomy, and all significant astronomical innovations take them into account. Kepler’s third law of planetary motion is used by scientists in the Kepler Mission to determine the semi-major axis of each exoplanet after observing its repeated passages in front of its star.
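As a rough illustration of how the third law turns an observed orbital period into a semi-major axis, consider the following sketch; it uses standard physical constants and is illustrative only, not mission code.

```python
import math

# Kepler's third law for a planet orbiting a star of mass M:
#   a^3 = G * M * T^2 / (4 * pi^2)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def semi_major_axis_au(period_days: float, star_mass_kg: float = M_SUN) -> float:
    """Semi-major axis (in AU) from an observed orbital period."""
    T = period_days * 86400.0  # days -> seconds
    a_m = (G * star_mass_kg * T**2 / (4 * math.pi**2)) ** (1 / 3)
    return a_m / 1.496e11      # metres -> astronomical units

print(f"{semi_major_axis_au(365.25):.2f} AU")  # ~1.00 AU, an Earth analogue
```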
Scientific Objectives of the Kepler
The Kepler Mission is NASA’s photometric space-based mission launched to detect Earth-like planets. The Mission set out to survey around 150,000 Sun-like stars with the aim of identifying Earth-like planets, a primary goal that was supposed to be achieved within a span of 3.5 years. When the Kepler Mission was proposed, its main objective was framed as the exploration of deep space with the aim of identifying planetary systems, and a number of specific scientific objectives were set for the mission. The first was to identify the terrestrial planets that existed in the habitable zones of the huge number of stars that the mission was going to analyze.
Kasting (2010) notes that the Kepler Space Observatory was specifically designed to find habitable planets, defined as planets that are about one-half to ten times the Earth’s mass and exist in the habitable zone. The habitable zone is the range of distances from a star at which conditions necessary for life, such as liquid water, can exist on a planet’s surface. By analyzing a portion of the Milky Way galaxy, the Kepler Mission seeks to determine how many stars have planets that might be habitable. In addition to finding Earth-like planets, the Kepler was tasked with determining the pattern of their occurrence throughout the Milky Way.
Once a terrestrial planet has been identified, further observation by the Kepler is needed to determine the planet’s orbit. Planets that have Earth-like orbit shapes and durations are likely to sustain life. Another objective of the mission is to determine the masses, densities, and surface temperatures of these exoplanets. In addition to identifying exoplanets, the Kepler Mission seeks to determine the properties of the stars around which these Earth-like planets are found. Kasting (2010) notes that determining the properties of these stars is necessary in order to compare them with the Earth’s Sun.
To achieve its core objectives, the Kepler was specially designed to be a deep space observatory. Unlike most satellites, which orbit the Earth, the Kepler established its own orbit around the sun (The Kepler Mission, 2014). The Kepler is equipped with a one-meter Schmidt telescope that has a field of view (FOV) in excess of 100 square degrees. The telescope has an array of 42 Charge Coupled Devices (CCDs) with 95 million pixels.
Cowen (2013) declares that the Kepler Space Observatory is a marvel of engineering due to its ability to remain stable while in orbit. This stability is made possible by reaction wheels that spin at 1,000 to 4,000 revolutions per minute and ensure that Kepler’s telescope is always pointing at the same location in deep space (Borucki, 2006). The Kepler Mission was supposed to last for at least three and a half years. This 3.5-year timeline was chosen because it would take at least 3 years for the Space Observatory to confirm that a candidate planet is indeed Earth-sized and in the habitable zone.
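The arithmetic behind that timeline is straightforward. A minimal sketch follows, assuming (as is commonly stated for transit surveys) that three observed transits are required before a candidate can be confirmed:

```python
# Rough sketch of why a ~3.5-year mission suits Earth-analogue planets:
# confirmation requires several transits, and an Earth-like orbit
# produces only one transit per year.
REQUIRED_TRANSITS = 3  # assumed confirmation threshold

def min_mission_years(orbital_period_years: float) -> float:
    """Shortest mission able to observe the required number of transits."""
    return REQUIRED_TRANSITS * orbital_period_years

print(min_mission_years(1.0))   # 3.0 years for an Earth analogue
print(min_mission_years(0.25))  # 0.75 years for a 3-month orbit
```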
How this Optical Observatory Works
The Kepler makes use of the photometric technique known as the transit method to discover planets in deep space. Renee (2010) reveals that the idea of a photometric method to detect Earth-sized planets in the galaxy was first conceived by the space scientist Bill Borucki in 1984. His idea was based on the basic observation that when a planet passes in front of a star, the star’s light intensity diminishes. The key function of the Kepler Space Observatory was to detect the “1 part in 10,000” dip in light intensity that happens when a planet orbits in front of its star. The method utilized by the Kepler Mission is referred to as the “transit method” since it relies on the changes in a star’s brightness as a planet crosses in front of it.
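The “1 part in 10,000” figure follows from simple geometry: the fractional dip in brightness equals the ratio of the planet’s disc area to the star’s. A minimal sketch using standard Earth and Sun radii:

```python
# Transit depth = (planet radius / star radius)^2, i.e. the fraction
# of the star's disc blocked by the transiting planet.
R_EARTH_KM = 6_371.0
R_SUN_KM = 695_700.0

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    return (planet_radius_km / star_radius_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"Earth-Sun depth: {depth:.1e}")  # ~8.4e-05, about 1 part in 12,000
```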
The Kepler was designed to be capable of continuous observation of the same FOV throughout its entire working life, with interruptions of only a day or less every three months. A number of considerations were made when choosing the field of view for the Kepler Mission. To begin with, the field had to be viewable for the entire duration of the mission. To ensure that the Kepler’s view was never blocked by the Sun at any point in its orbit, the FOV was placed more than 55° above the ecliptic plane (The Kepler Mission, 2014).
Another requirement was that the area chosen should have a high concentration of sun-like stars, since the Kepler needed to observe as many stars simultaneously as possible from its fixed position. Borucki (2006) notes that unlike most other astrophysics missions, which change their FOV during the course of the mission, the Kepler Mission points at a single FOV for the entire mission. The region in space on which Kepler focuses spans two constellations, Cygnus and Lyra. This region has a vast number of stars and lies more than 55° above the ecliptic, so it is not obstructed by the Sun, Earth, or Moon at any point in time, making it visible through the entire year.
The Kepler was then to maintain the longest possible continuous observation of the region so that variations in light intensity could be noted and investigated further. Continuous observation of the same region in space is necessary since repeat transits have to be observed before a positive declaration of a candidate planet discovery can be made. NASA (2013) states that a single instance of a dip in star light is not enough to declare that a planet has been discovered. Instead, a number of transit events have to be observed in order to confirm that a planet has been discovered. The Kepler Space Observatory is in an Earth-trailing heliocentric orbit, which enables it to have a continuous view of the selected FOV all through the orbital year.
The probability of observing a transiting planet is reduced significantly by the fact that the planetary system has to be almost perfectly aligned with the line of sight of the telescope in order for the transit to be perceived. Koch (2004) confirms that “the probability for alignment of the orbital plane along the line-of-sight from the observer to the star is relatively small, equal to the ratio of the diameter of the star to the diameter of the orbit” (p.1). For this reason, the Kepler misses up to 99% of the exoplanets that might exist within the area on which it focuses. This is why the Kepler was designed with a wide field of view (in excess of 100 square degrees), which enables it to observe about 150,000 stars, increasing the number of discoverable planets. Even so, scientists estimate that hundreds of candidate planets go undetected for every planet that Kepler detects.
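Koch’s ratio can be evaluated directly for an Earth-Sun analogue; the following sketch (illustrative values only) shows why roughly 99% of such systems go unseen:

```python
# Geometric transit probability ~ star radius / orbital radius
# (the ratio of diameters quoted by Koch reduces to the same value).
R_SUN_KM = 695_700.0
AU_KM = 1.496e8  # one astronomical unit in km

def transit_probability(star_radius_km: float, orbit_radius_km: float) -> float:
    return star_radius_km / orbit_radius_km

p = transit_probability(R_SUN_KM, AU_KM)
print(f"Earth-Sun analogue: {p:.3%}")  # ~0.465%, so ~99.5% of systems are missed
```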
Comparisons have been made between the efficiency of the Kepler mission and ground-based surveys. Stefano (2010) notes that the magnitudes of the Kepler’s lensing signatures are within reach of ground-based surveys. This means that it is possible to detect antitransits using ground-based facilities such as Pan-STARRS and LSST. However, it would be impossible to detect transits as efficiently as the Kepler can, due to a number of limitations suffered by ground-based surveys.
To begin with, ground stations are exposed to changing weather conditions and cloud cover on Earth. In addition, the movement of the earth around the Sun means that some parts of the sky are invisible from the Earth at certain times of the year. These conditions make it impossible for ground-based observatories to maintain the precision pointing that space-based observatories like the Kepler can maintain. The powerful telescope on the Kepler, combined with its location in space, makes it best suited to collect the data needed to identify Earth-like planets in distant star systems.
Importance of the findings from Kepler
During the active phase of its operation, the Kepler made a number of important scientific findings. The success of the Kepler mission is evident from the vast number of candidate planets it discovered compared to those discovered through ground-based observations. Prior to the implementation of the Kepler Mission, NASA had succeeded in discovering only three candidate planets. Less than a month after its launch, Kepler began to observe thousands of stars in order to discover planets. Using the data obtained from the first 10 days of the star monitoring process, astronomers were able to discover five new planets.
As of June 2010, the Kepler team had identified 306 exoplanet candidates after analyzing data obtained from the first 43 days. By the end of Kepler’s mission in 2013, astronomers had discovered a phenomenal 3,500 candidate planets. These planets, which orbit other suns, are in the habitable zone, and their size makes them eligible to be candidate planets.
The most important discoveries made by the Kepler mission have been the positive identification of Earth-like planets. Following the detailed analysis of the data obtained from the space observatory, 135 exoplanets have already been confirmed. As the Kepler team continues to go through the vast amount of data collected by the Kepler during its mission, it is expected that hundreds or even thousands of exoplanets will be discovered.
Using the data obtained from the Kepler, scientists are able to construct elaborate profiles of the various candidate planets discovered by the mission. Once the data from the Kepler is transmitted to the Earth, scientists in the Kepler team are able to determine the size of the planet and calculate its distance from its star. It is also possible to determine the mass and surface temperature of some planet candidates by augmenting the data obtained from the Kepler with Earth-based observations.
The power and precision of the Kepler have enabled it to detect Earth-sized planets that are hundreds of light years away from the Earth, a feat previously unaccomplished. The Hubble Space Telescope was able to photograph an exoplanet in visible light in 2008. However, that planet was large (estimated at about three times the mass of Jupiter) and relatively close to the Earth, at a distance of 25 light years (Kasting, 2010). The Kepler is powerful enough to detect planets in orbit over three thousand light years away from our Sun; the sensitivity of its telescope enables it to detect transits at this great distance.
The Kepler has been able to provide scientists with a rich photometric database that is populated with an enormous number of stars. This information has increased knowledge on star systems. Astronomers are now able to formulate better models of distant star systems using the data obtained from the Kepler mission. The mission has also helped in the identification of white dwarfs within the Kepler FOV. Stefano (2010) states that while the Kepler was designed to discover transits by Earth-like planets, the observatory has discovered multiple hot objects in close orbits around main-sequence stars. These objects are presumed to be remnants of stars (white dwarfs).
The information obtained from Kepler has changed the manner in which astronomers view the universe. Before the mission, astronomers could only theorize about the existence of other star systems, and their number was not known. Data from the Kepler has confirmed that other star systems exist in the Milky Way, and its findings have made scientists appreciate that many other stars host planets. The Kepler has succeeded in providing scientists with statistics on how many Earth-like planets may exist within the FOV of the telescope. Future missions will make use of the information obtained from the Kepler to seek out alien life on the identified Earth-like planets.
Recent Discoveries of Exoplanets by the Kepler
The data obtained from Kepler in 2013 contained information on planets whose orbits are close in length to that of the earth (Cowen, 2013). These are important findings, since such planets are likely to be in the habitable zone, which increases the probability that they might sustain life. A number of significant discoveries have been made by the observatory in the recent past. In early 2013, NASA announced that a candidate planet named Kepler-69c had been discovered. This planet is considered a likely habitat for alien life forms since it is Earth-like, exists in the habitable zone, and orbits a star that is similar to the Earth’s Sun.
Another important discovery announced in 2013 was a pair of Earth-like exoplanets that also exist in the habitable zone and orbit stars similar to our Sun. These exoplanets, named Kepler-62e and Kepler-62f, are presumed to possess liquid water, which means they might sustain life, since scientists suppose that their conditions are viable for sustaining life. In spite of the fact that the Kepler stopped searching for exoplanets in 2013, there is still a huge amount of raw data obtained from the space observatory, and it will take a number of years for all of it to be analyzed and the results made public.
Impact of the Failure of the on-board Gyroscopes
In May 2013, the Kepler experienced a catastrophic failure of a second on-board gyroscope-like reaction wheel, rendering the spacecraft unable to accomplish its primary objectives. Kepler mission scientists were aware of the vulnerability of the observatory’s reaction wheels. The Space Observatory had four metal reaction wheels that were needed to keep it stable. The Mission lost its first wheel in July 2012 but was able to continue functioning, since only three wheels were needed and one had been added for redundancy. However, Kepler lost its second wheel in May 2013, making it impossible for the spacecraft to operate, since at least three wheels are needed to maintain its precise orientation (Cowen, 2013).
The failure of the reaction wheels meant that Kepler could no longer be relied upon for the precision pointing necessary to collect exoplanet data. The Kepler accomplished its core objectives by continuously observing the same region in space for an extended period, and the wheels were needed to ensure the telescope could maintain this precise pointing. Without three functioning wheels, the Kepler cannot observe the same FOV for the long period needed to discover transits (Cowen, 2013). It is therefore impossible for the Space Observatory to continue its primary mission in its current damaged state.
The failure of the reaction wheels put the future of the Kepler at risk. NASA had to come up with other objectives to justify keeping the Kepler Space Observatory functioning in the coming years; if no new scientific purpose were found, the spacecraft would have to be decommissioned. NASA made a public appeal for scientists to present proposals for valuable missions that the Kepler could perform even in its damaged state (NASA, 2013). Scientists responded to this call and issued a number of proposals for alternative missions that the space observatory could undertake.
Future Proposals for Repurposing the Kepler
Following the failure of the second gyroscope-like reaction wheel on the Kepler, NASA engineers contemplated ways to fix the wheels in order to restore the spacecraft to full operation. Sending astronauts to physically carry out repair work was out of the question, since the Kepler orbits millions of miles away. In August 2013, after detailed analysis, engineers reached a consensus that fixing the Kepler’s wheels was impossible (NASA, 2013). Once it was decided that no further attempts to restore the spacecraft would be made, scientists began to consider other science programs in which it could engage.
One proposed future use of the Kepler is to search for comets and asteroids in deep space. It has also been proposed that the Kepler could be used to provide evidence of supernovas; studying supernova explosions can be done at the lower photometric precision of 300 parts per million (Cowen, 2013), which the Kepler can achieve even in its diminished form. The proposed mission for the Kepler to explore deep space for supernova events and other objects such as asteroids and comets has been named K2. This proposal is under review by NASA, and a decision is expected by May.
The lack of tracking and stabilization capabilities means that the Kepler cannot discover Earth-like exoplanets. However, it can still discover huge exoplanets, since these bodies can be found through gravitational microlensing as opposed to the transit method. Cowen (2013) notes that the Kepler can still discover planets about 3.5 times larger than Jupiter even without its precision pointing capabilities. Such planets, however, cannot be considered Earth-like due to their massive size.
Conclusion
This paper set out to provide a detailed discussion of the Kepler Space Observatory. It began by articulating the significance to modern astronomy of the German astronomer Johannes Kepler, for whom the observatory is named. From the information provided in this paper, it is clear that the Kepler has built a rich legacy for itself. The spacecraft has helped in the discovery of thousands of candidate planets, and hundreds of Earth-like planets have been verified from the data obtained from Kepler. The success of the Kepler mission has inspired proposals for future missions with capabilities improved over those of the Kepler. The paper notes that in spite of the loss of its precise pointing capability, the Kepler can still be put to alternative uses. The space observatory can therefore be expected to continue providing scientists with important data for space research for the next few years.
References
Borucki, W. (2006). The Kepler Mission: Astrophysics and Eclipsing Binaries. Astrophysics & Space Science, 304(1), 391-395. Web.
Cowen, R. (2013). The Wheels Come Off Kepler. Nature, 497(1), 417–418. Web.
Kasting, J. (2010). How to Find a Habitable Planet. New Jersey: Princeton University Press. Web.
Over the last few centuries, advances in astronomy have transformed the way humans perceive the universe (Thronson and Stiavelli 47). New insights into how our universe came into existence and how it continually transforms have been made possible by the invention of space-based telescopes. At present, the field of astronomy is flourishing as more explorations and discoveries are made. In the last decade, several astonishing discoveries have been made, including measurement of the universe’s expansion rate and the location of giant stars, black holes, and dead stars. These discoveries could not have been realized without multi-wavelength examination of the universe, a technology that makes use of radio, infrared, optical, ultraviolet, and X-ray telescopes.
Based on the above illustrations, it is apparent that humans have unraveled a number of mysteries about our solar system. However, countless other discoveries are yet to be made. To achieve them, our range of observations should be expanded by opening up new regions of the electromagnetic spectrum. New regions will be opened up if more sophisticated telescopes are designed in the future. Such telescopes will enable astronomers to observe distant and fainter objects with clarity; equally, the instruments will enable humans to observe and analyze larger celestial bodies at better resolution.
If these instruments are designed, astronomers will be able to explore exoplanets within the next 50 years. For over a century, the existence of life on exoplanets has been heatedly debated (Telescopes of the Future 4). The current telescopes are not able to prove or disprove the theories associated with the existence of life on exoplanets; telescopes that are more sophisticated will be able to shed more light on these debates.
Although current telescopes have offered astronomers a great deal of information about distant stars and their exoplanets, they have not been able to provide clear images because of the blinding glare from the stars (Telescopes of the Future 5). The future telescope must therefore enable astronomers to block out this glare. To achieve this, a next-generation space telescope should be designed in the near future, and this grand telescope must be more sophisticated than the Hubble Space Telescope (HST).
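To put the glare problem in numbers, a standard order-of-magnitude estimate (mine, not drawn from the cited source) shows that an Earth-like planet seen in reflected visible light is roughly ten billion times fainter than its host star:

# Rough star/planet contrast for an Earth analog in reflected visible light.
albedo = 0.3            # fraction of incident starlight the planet reflects
r_planet_m = 6.371e6    # Earth radius, m
a_orbit_m = 1.496e11    # Earth-Sun distance (1 AU), m

# Reflected flux ratio ~ albedo * (R_planet / (2 * a))^2
contrast = albedo * (r_planet_m / (2 * a_orbit_m)) ** 2
print(f"planet/star flux ratio ~ {contrast:.1e}")  # ~1.4e-10

Suppressing roughly ten orders of magnitude of starlight is precisely why ordinary imaging cannot reveal such planets.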
Because future telescopes must target the red-shifted early universe and exoplanets, the next-generation space telescope should be able to target specific regions of the electromagnetic spectrum. To focus on the red-shifted early universe, a telescope covering the 2-5 µm band should be designed (Telescopes of the Future 5). To focus on exoplanets, telescopes covering the 10-20 µm band should be designed. These regions are currently unexplored because suitable telescopes do not exist.
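As a quick check on how the 2-5 µm band relates to the early universe, the standard redshift relation λ_observed = (1 + z) × λ_emitted can be evaluated for a few sample redshifts (the redshift values are illustrative, not taken from the cited source):

# Observed wavelength of redshifted light: lambda_obs = (1 + z) * lambda_rest
def observed_um(rest_um, z):
    """Wavelength (micrometers) at which light emitted at rest_um arrives."""
    return (1 + z) * rest_um

# Rest-frame visible light (0.5 um) from increasingly distant galaxies:
for z in (3, 6, 9):
    print(f"z = {z}: 0.5 um light arrives at {observed_um(0.5, z):.1f} um")
# z = 3 -> 2.0 um, z = 6 -> 3.5 um, z = 9 -> 5.0 um: inside the 2-5 um band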
The proposed telescope should have a prime mirror measuring 8 meters in diameter and should be able to operate at cryogenic temperatures of up to 50 K. Despite its enlarged mirror surface area, the telescope should weigh much less than the HST because it will have to be transported millions of kilometers from the Earth’s surface. The instrument will have a focal length of 116.6 meters.
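Two basic optical quantities follow directly from the stated specifications; the 2 µm evaluation wavelength is an assumption on my part (the short end of the band proposed above), not a figure from the source:

D = 8.0     # primary mirror diameter, m (stated above)
f = 116.6   # focal length, m (stated above)

print(f"focal ratio: f/{f / D:.1f}")  # f/14.6

# Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D
lam = 2e-6                              # wavelength, m (assumed)
theta_arcsec = 1.22 * lam / D * 206265  # radians -> arcseconds
print(f"diffraction limit at 2 um: {theta_arcsec:.3f} arcsec")  # ~0.063

Such sub-0.1-arcsecond resolution is what would let the instrument observe distant, faint objects with the clarity described earlier.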
Such a project will be very expensive to undertake, requiring approximately $8.8 billion (Boss 55). An international collaboration with organizations such as NASA will therefore be needed.
Works Cited
Boss, Alan. The Crowded Universe: The Race to Find Life Beyond Earth. New York: Basic Books, 2011. Print.
Thronson, Harley, and Massimo Stiavelli. Astrophysics in the Next Decade: The James Webb Space Telescope and Concurrent Facilities. Dordrecht: Springer, 2009. Print.
On October 4, 2016, the famous astronaut Marsha Sue Ivins gave a speech on her experience of space travel as part of a lecture for World Space Week. She shared her knowledge of space exploration and presented the audience with ideas about further plans for the exploration and colonization of other planets. Students, senior staff, and AUS alumni took part in the lecture. Amer Al Sayegh also gave a speech and introduced the guest speaker to the audience. It should be stated that Mr. Al Sayegh gave a rather long speech and read from his notes, which created the impression that he was not familiar with the information he had to present, whereas Ms. Marsha Ivins was very persuasive and used a variety of visual aids to support the audience’s comprehension.
It should be noted that the guest speaker, Ms. Ivins, stressed that the government should allocate resources to conduct further space exploration and travel. According to Ms. Ivins, no equipment currently exists that would allow people to travel and settle farther than the Moon. In addition, the speaker explained that the much-discussed colonization of Mars is impossible at this stage of the space sector’s development, given the limited resources allocated to the industry. The objective of the speech was to acquaint the listeners with practices accepted during space travel and with common experiences that may not be known to the general public.
It is worth mentioning that Ms. Ivins is an experienced lecturer and astronaut; consequently, it was evident that she would be able to share her experience and expertise with the listeners freely (“The History of Human Space Flight” par. 2). In contrast, Mr. Amer Al Sayegh relied on his paper materials the entire time so as not to forget what he needed to say next. It was also difficult to grasp the meaning and core ideas of his speech, as he made long pauses between sentences and, in general, dragged out his delivery (Eelen 101). Ms. Ivins, however, used her time as efficiently as possible and took into account the specifics of the audience (Donovan and Avery 9). She shared her personal experiences and impressions in detail. Moreover, it was easy to understand the purpose of her speech because visual materials such as text extracts and pictures supported it (Rosenberg 16). To strengthen her speech, Ms. Ivins provided factual information and data and defined the terms that were unfamiliar to the audience. Overall, the speech was enjoyable and informative because the speaker shifted the mood from serious to humorous to release possible tension and keep the listeners from becoming distracted (Schmid 111). It should be noted that it is rather difficult to disagree with the points she made, as few people have as rich an experience of space travel as Ms. Ivins (Cox 14). Moreover, the facts she provided attest to the reliability and validity of her statements.
In conclusion, the lecture was indeed informative, and the guest speaker was able to acquaint the audience with the main issues of space travel practices. The contrast between the two main participants makes it possible to state that Ms. Ivins delivered an effective presentation. She achieved the aims of her statements and kept the audience interested in the content of her speech.
Works Cited
Cox, Ronald. Schutz’s Theory of Relevance. New York: Springer Science & Business Media, 2012. Print.
Donovan, Jeremey, and Ryan Avery. Speaker, Leader, Champion. Pennsylvania: McGraw Hill Professional, 2014. Print.
Eelen, Gino. A Critique of Politeness Theory. New York: Routledge, 2014. Print.
Rosenberg, Jay. Linguistic Representation. New York: Springer Science & Business Media, 2012. Print.
Schmid, Ulla. Moore’s Paradox. New York: Walter de Gruyter GmbH & Co KG, 2014. Print.