Essay on Importance of Vaccines Invention

Vaccines are considered one of the most distinguished inventions in history. From a health standpoint, two of the most revolutionary achievements of vaccines are the eradication of smallpox and the near-eradication of polio. Smallpox killed millions of people in the twentieth century, yet the global vaccination program led by the WHO wiped it out completely, with eradication certified in 1980. Another striking accomplishment of vaccination is the fight against polio: after the introduction of a worldwide polio-eradication campaign, the number of polio cases fell from 350,000 in 1988 to 407 in 2013, according to the CDC.

Vaccines: Convergence of Science and Strategy

Since the inception of the first vaccine in 1796, human life has been revolutionized tremendously. In the last two decades, innovation has led to unprecedented growth in vaccines, and many lives have been saved from life-threatening diseases through vaccines against rotavirus, pneumococcus, varicella, and human papillomavirus (HPV). Around 122 million children have been saved since 1990, as per the Bill & Melinda Gates Foundation, and vaccines are a prominent reason for the decline in child deaths. In the past few years, the number of substance registrations related to vaccines has increased steadily, along with the volume of vaccine-related patents and journal publications: in 2017, around 150,000 journal articles and 148,000 patent applications were published, according to the American Chemical Society.

Increasing Investment in R&D and Manufacturing of Vaccines

Emerging trends are contributing progressively to innovation and to rising investment requirements for vaccine R&D and manufacturing. These shifts in the broader infrastructure are accelerating innovation in the vaccine industry. For instance, in January 2019, the UK government announced funding of £10 million for the Coalition for Epidemic Preparedness Innovations (CEPI), which was formed in response to the Ebola epidemic in West Africa; this funding will help in the development of vaccines against emerging infectious diseases. Additionally, the UK has invested around £120 million in the UK Vaccine Network (UKVN), which funds 78 research projects developing vaccines against diseases with epidemic potential and is engaged in vaccine development for 12 pathogens, according to GOV.UK. Manufacturers are also restructuring their vaccine businesses for higher market share. In May 2019, Dynavax Technologies Corporation restructured its business to focus on vaccines, with its first commercial product HEPLISAV-B, a recombinant hepatitis B vaccine.

Emerging Threats are Driving Growth of Vaccines

Vaccines can also serve as treatment methods for existing diseases and can help in targeting emerging epidemiological threats, for instance, the Ebola virus. Currently, many drugs are in the development phase for the treatment of Ebola virus disease; for example, the National Institute of Allergy and Infectious Diseases is supporting various vaccines, including Merck's rVSV-ZEBOV vaccine. Moreover, vaccines can target high-burden diseases (such as HIV and respiratory infections), which carry high commercial potential. Pregnant women, for example, make up a potential population for immunization.

Increase in the Number of Small Biotech Players

Historically, the "Big Four" (GlaxoSmithKline, Merck, Pfizer, and Sanofi) have driven the vaccine market in terms of innovation. However, in the last few years, the pipeline has increasingly been driven by emerging-market players. In addition, ongoing R&D on new antigens and novel synthetic platforms (messenger-RNA-based products) is showing potential for breakthroughs. In February 2019, the Coalition for Epidemic Preparedness Innovations (CEPI) awarded USD 34 million to CureVac for the development of an RNA Printer prototype. This platform is intended to close the gap in the supply of lipid-nanoparticle (LNP)-formulated mRNA vaccine candidates for Lassa fever, yellow fever, and rabies.

Rising Focus on Immunization Programs

Vaccines are one of the greatest health achievements in the history of mankind. Immunizing children against diphtheria, tetanus, pertussis, and measles saves 2-3 million lives every year worldwide, according to the WHO. In the U.S. alone, these vaccines have prevented over 21 million hospitalizations and 732,000 deaths among children in the last 20 years, according to the CDC. In the last 10 years, immunization programs have added new and underused vaccines. Immunization averts the deaths of millions of children, especially in developing countries. Many children in low-income countries do not receive proper immunization; however, expanding immunization programs are closing this gap with every passing year. For instance, the Reaching Every District (RED) strategy, launched in 2002, aimed to strengthen immunization delivery at the district level. The pilot project for the first malaria vaccine, RTS,S, began administering the vaccine in Ghana, Kenya, and Malawi in 2019. These three countries are hosting pilots to evaluate the feasibility of delivering the required four doses of RTS,S and its potential role in reducing childhood deaths. A number of organizations have come together for this project, including Gavi, the Vaccine Alliance; the Global Fund to Fight AIDS, Tuberculosis and Malaria; and Unitaid. They are providing funding of USD 49.2 million for the first phase of the pilot program. The Ministries of Health in Ghana, Kenya, and Malawi will implement the pilots in coordination with the WHO. The vaccine will be made available through routine immunization programs to young children living in the selected areas. The first phase of the pilots is expected to be completed by 2020, and a second phase by 2022.

Conjugate Vaccine to Account for a Large Share of the Pie

Conjugate vaccines, inactivated and subunit vaccines, live attenuated vaccines, toxoid vaccines, and recombinant vaccines are the types of vaccines studied in the report. Among these, conjugate vaccines capture the largest share of the market. Conjugate vaccines offer various advantages for effective immunization; for instance, their immunological memory provides longer-lasting immunity. The pneumococcal conjugate vaccine (PCV), for example, protects against 13 types of the pneumococcal bacteria that cause pneumococcal disease.

Monovalent Vaccines to Grow at a Significant Rate Throughout the Forecast Period

Vaccines can be categorized into monovalent and multivalent types. A monovalent vaccine, as the name suggests, is designed to combat a single disease, such as malaria, Ebola, or dengue, whereas multivalent vaccines are used for immunization against multiple diseases or strains, as with DTP, polio, and hepatitis combinations. The growing prevalence of malaria, Ebola, and other such diseases is driving the demand for monovalent vaccines. The Ebola outbreak and the vaccination now available have galvanized healthcare organizations worldwide, leading in turn to a rise in research and development activity on monovalent vaccines.

North America Will Remain the Most Prominent Market, as Launches of New Therapies Give It Dominance Over Other Regions

North America dominates the global vaccines market. The presence of adequate infrastructure, a well-defined regulatory framework, and favorable reimbursement policies contribute to the region's eminent market position. The emphasis on healthcare across the region remains intense and will only increase in the years to come; thus, the growth prospects for North America's vaccines market are pegged to remain positive during the forecast period. In the U.S., vaccines have prevented more than 21 million hospitalizations and 732,000 deaths among children born in the last 20 years, according to the Centers for Disease Control and Prevention. Technological advancements are boosting the growth of the vaccine market in the region. In October 2018, the U.S. FDA agreed to review an application for Sanofi Pasteur's Dengvaxia, the first licensed vaccine to protect against dengue. The burden of dengue in the United States is mainly observed in Puerto Rico, the U.S. Virgin Islands, Guam, and some other offshore territories.

Europe to Rank Second, Due to Increasing Infection Rate

North America is projected to be followed by Europe in terms of value share in the global vaccines market. Across Europe, healthcare services vary with geographical coverage, population density, and government policies. A growing anti-vaccine movement, fuelled by social media in Europe, is putting lives at risk and has been blamed for measles outbreaks. After a large outbreak of measles that affected over 4,000 Italians, Italy made another six vaccines mandatory in 2017, according to the European Centre for Disease Prevention and Control. Vaccination coverage in France and Italy has increased due to the expansion of mandatory vaccination laws. France, which has the highest rate of vaccine mistrust in Europe, has made eight additional vaccines mandatory for babies born from 2018 onwards, including vaccines against hepatitis B, pneumonia, and meningococcal C disease.

Increasing manufacturing activity and rising public-private investment are fuelling the growth of the market. In 2016, Takeda invested around EUR 100 million in a dengue vaccine manufacturing plant in Germany; this investment was made to address unmet needs in dengue prevention. Moreover, the EU is supporting Bavarian Nordic, a biotech company, with a EUR 30 million loan, which Bavarian Nordic will use to accelerate advanced biotechnological manufacturing processes and technologies for the production of vaccines.

Government Initiatives to Drive Rapid Growth in Asia-Pacific

Immunization, to date, is the most cost-effective public health intervention, averting an estimated 2 to 3 million deaths every year, according to the WHO. Thanks to immunization, the world is closer to eradicating polio, with only three polio-endemic countries remaining: Afghanistan, Nigeria, and Pakistan. Deaths from measles, a major child killer, declined by 80% worldwide between 2000 and 2017, preventing around 21.1 million deaths, according to UNICEF. Vaccines help in targeting high-burden diseases (tuberculosis and malaria) in low-income markets. However, vaccines are often expensive for the world's poorest countries, and supply shortages and a lack of trained health workers are challenges as well. Unreliable transportation systems and storage facilities also make it difficult to preserve high-quality vaccines that require refrigeration.

However, many organizations are working towards bridging this gap between supply and demand. For example, the Bill & Melinda Gates Foundation supported a partnership between PATH, the WHO, the Serum Institute of India, and African governments for the development of an affordable vaccine to prevent meningitis A. MenAfriVac is the first vaccine designed specifically for Africa, and within a year of its introduction, a dramatic drop in meningitis A infections was witnessed. Apart from this, promising vaccines to prevent malaria and dengue are currently in late-stage development and will have a major impact on reducing the burden of these diseases.

Essay on Who Discovered Solar Energy

While conducting electrochemical experiments in 1838, Alexandre Edmond Becquerel, a French physicist, discovered the photoelectric phenomenon. He monitored the current flowing between two coated platinum electrodes in an electrolyte-filled container and discovered that the current's strength altered when the electrodes were exposed to light. The external photoeffect, in which electrons move out of a stationary substance when it is exposed to light, was involved in this case.

In 1873, Willoughby Smith and his assistant Joseph May observed that the resistance of the semiconductor selenium altered when it was exposed to light. They witnessed for the very first time the internal photoeffect that is significant to photovoltaics, in which light breaks electrons out of their bonds in the semiconductor, making them available as free charge carriers in the solid body.

Three years later, William Adams and Richard Day discovered that a selenium rod with platinum electrodes could generate electrical energy when exposed to light. This was the first time a solid body had shown the ability to convert light energy directly into electrical energy. In 1883, Charles Fritts, a New York inventor, created a small 'module' with a surface area of 30 cm² consisting of selenium cells that had nearly 1% efficiency. This was achieved by coating the selenium cells with a thin gold electrode layer. Fritts then sent a module to Werner von Siemens, a German inventor, to be evaluated. Siemens acknowledged the significance of the discovery and informed the Royal Academy of Prussia that the conversion of light into electricity had been demonstrated.

In the years that followed, the physical background of the effect was better described, thanks to Albert Einstein and his light quantum theory of 1905, for which he later received the Nobel Prize. Technological advancements followed in 1916, when the AEG chemist Jan Czochralski developed the crystal growth process that made it possible to create high-quality semiconductor crystals as single crystals.

William B. Shockley, co-inventor of the transistor and an American Nobel laureate, explained the mechanism of operation of the p-n junction in 1950, laying the theoretical basis for today's solar cells. Using that theoretical foundation, Bell Labs' Daryl Chapin, Gerald Pearson, and Calvin Fuller built the first silicon solar cell, with an efficiency of up to 6%, which was displayed to the public in 1954.

Photovoltaic research expanded rapidly in the late 1980s, specifically in Germany, Japan, and the USA. Studies were also conducted on the possibility of installing grid-coupled photovoltaic plants on single-family dwellings. From 1990 to 1995, Germany executed the '1000 Roof Program', which provided vital expertise on module reliability (Mertens, 2018).

Solar Cell Operating Principle

The structure of a solar cell typically consists of a semiconductor p-n junction, with the positively doped p-type layer located at the bottom and the negatively doped n-type layer at the top. The n-type layer is doped with donor atoms that give away electrons, while the p-type layer is doped with acceptor atoms that capture electrons, gaining holes as a result. When light enters the cell, it creates electron-hole pairs. The internal electric field at the junction forces these pairs to separate, so the electrons are transferred to the negative electrode and the holes to the positive electrode, creating an electric current. A conducting strip known as a busbar, typically made of copper or aluminum, carries the current generated by the cell. This process is known as the photovoltaic effect. (CITE)

Modeling of PV Cell

To produce the required energy, solar cells are assembled in a series-parallel arrangement to size a PV array. Operating conditions and field factors such as irradiation level, ambient temperature, and the sun's geometric position all affect the amount of electric power generated by the PV array (Soto, 2006). An example of a current-source model for a solar cell is illustrated below in Figure 3, where Iph, known as the photocurrent, is the current produced by sunlight irradiation; Id is the diode reverse saturation current; Rsh is the intrinsic shunt resistance of the cell, usually a very large value; and Rs is the series resistance of the cell, which tends to have a much smaller value. Hence, the intrinsic shunt and the series resistance may be neglected to simplify the analysis.

The I-V characteristic curve depicts the voltage and current characteristics of a PV cell, module, or array. It gives a detailed description of the solar energy conversion capacity and efficiency; knowing these I-V characteristics is crucial because they determine the solar efficiency and output performance of the PV module. The following equation gives the typical I-V characteristic of a PV array (Singh, 2013):

\[
I = N_p I_{ph} - N_p I_d \left[ \exp\left( \frac{qV}{kTAN_s} \right) - 1 \right] \tag{1}
\]

where Id is the reverse saturation current, Iph the photocurrent, V the PV output voltage, q the electron charge, k the Boltzmann constant, I the PV output current, T the cell temperature, Ns the total number of cells in series, Np the number of modules connected in parallel, and A the p-n junction ideality factor. The deviation of the cell from the typical p-n junction characteristic is governed by the factor A, which ranges from 1 to 5, where 1 is the ideal value.

The reverse saturation current Id fluctuates with temperature according to the following equation (Singh, 2013):

\[
I_d = I_c \left[ \frac{T}{T_c} \right]^3 \exp\left[ \frac{qE_g}{kA} \left( \frac{1}{T_c} - \frac{1}{T} \right) \right] \tag{2}
\]

where Tc is the reference cell temperature, Ic the reverse saturation current at Tc, and Eg the band-gap energy of the semiconductor. Similarly, the photocurrent Iph is governed by the cell temperature and the radiation from the sun, as expressed in the following equation (Singh, 2013):

\[
I_{ph} = \left[ I_{scr} + K_i \left( T - T_c \right) \right] \frac{S}{100} \tag{3}
\]

where Iscr is the cell short-circuit current at the reference temperature and radiation, Ki is the short-circuit current temperature coefficient, and S is the solar radiation in milliwatts per square centimeter (mW/cm²).
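To make the model concrete, the following is a minimal Python sketch of equations (1) to (3). The parameter values (ideality factor, saturation current, short-circuit current, and so on) are illustrative assumptions for a hypothetical 36-cell module, not figures taken from Singh (2013):

```python
import numpy as np

# Physical constants
q = 1.602e-19    # electron charge [C]
k = 1.381e-23    # Boltzmann constant [J/K]

# Illustrative module parameters (assumed values, not from Singh, 2013)
A = 1.5          # p-n junction ideality factor (1 to 5)
Ns = 36          # cells in series
Np = 1           # modules in parallel
Tc = 298.0       # reference cell temperature [K]
Ic = 1e-7        # reverse saturation current at Tc [A]
Eg = 1.12        # band-gap energy of silicon [eV]
Iscr = 3.75      # short-circuit current at reference conditions [A]
Ki = 0.0017      # short-circuit current temperature coefficient [A/K]

def Id(T):
    """Reverse saturation current at cell temperature T -- eq. (2)."""
    return Ic * (T / Tc) ** 3 * np.exp(q * Eg / (k * A) * (1.0 / Tc - 1.0 / T))

def Iph(T, S):
    """Photocurrent at temperature T and irradiation S [mW/cm^2] -- eq. (3)."""
    return (Iscr + Ki * (T - Tc)) * S / 100.0

def pv_current(V, T, S):
    """PV array output current from the I-V characteristic -- eq. (1)."""
    return Np * Iph(T, S) - Np * Id(T) * (np.exp(q * V / (k * T * A * Ns)) - 1.0)

# Example: output current near a typical operating voltage at 25 °C, full sun
print(pv_current(V=20.0, T=298.0, S=100.0))
```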

Moreover, the power of a PV array can be estimated as follows (Singh, 2013):

\[
P = IV = N_p I_{ph} V - N_p I_d V \left[ \exp\left( \frac{qV}{kTAN_s} \right) - 1 \right] \tag{4}
\]

Setting \( dP/dV = 0 \) yields the maximum power point voltage \( V_{max} \) at the maximum power operating point (MPOP), as shown in equation (5):

\[
\exp\left( \frac{qV_{max}}{kTAN_s} \right) \left( 1 + \frac{qV_{max}}{kTAN_s} \right) = \frac{I_{ph} + I_d}{I_d} \tag{5}
\]

The photocurrent acts as a function of the PV cell output voltage and is influenced by the load current under the prevailing levels of solar irradiation; the output voltage can be expressed by the following equation:

\[
V = \frac{AkT}{q} \ln\left( \frac{I_{ph} + I_d - I}{I_d} \right) - R_s I \tag{6}
\]

The PV array's I-V characteristics can be simulated by varying the solar radiation S and the cell temperature T in equations (1) to (5). Ideally, a PV panel operates at the voltage that maximizes its power output, which is made possible by a maximum power point tracker (MPPT). Kuo et al. (2001) elaborate on the development of a novel MPPT controller for PV energy conversion systems. Additionally, Hua et al. (1998) presented a simple way of tracking these maximum power points by forcing the system to operate near them; the large- and small-signal models as well as the transfer function are obtained through the energy conversion concept, and the authors validated the simulation results. In the absence of an MPPT, the PV panel runs at the point on the cell I-V curve that intersects the load's I-V characteristic. Five independent pieces of data are required to evaluate the variables in the previous equations. These variables are known to be functions of the solar energy impinging on the cell and of its temperature, and reference values of these variables are determined for a given set of operating and field conditions. Three current-voltage pairs are typically available from the manufacturer at standard rating conditions (SRC): the short-circuit current, the open-circuit voltage, and the voltage and current at the maximum power point.
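As an illustration of what an MPPT converges to, the sketch below (reusing the functions and assumed parameters from the previous listing) locates the maximum power point by a brute-force voltage sweep at several irradiation levels. A real MPPT controller, such as those of Kuo et al. (2001) or Hua et al. (1998), tracks this point dynamically rather than by exhaustive search:

```python
def mpp(T, S, v_max=30.0, steps=3000):
    """Brute-force search for the maximum power operating point (MPOP)."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps + 1):
        v = v_max * i / steps
        p = v * pv_current(v, T, S)   # P = IV, as in eq. (4)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

# The MPP shifts as irradiation changes at a fixed cell temperature
for S in (40.0, 70.0, 100.0):
    v_mp, p_mp = mpp(T=298.0, S=S)
    print(f"S = {S:5.1f} mW/cm^2 -> Vmp = {v_mp:5.2f} V, Pmp = {p_mp:6.2f} W")
```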

The derivative of the power at the maximum power point can be set to zero to yield a fourth piece of data (Soto, 2006):

\[
\frac{d(IV)}{dV} = I_{mp} + V_{mp} \left. \frac{dI}{dV} \right|_{mp} = 0 \tag{7}
\]

where \( dI/dV \) is:

\[
\frac{dI}{dV} = \frac{ -\dfrac{I_d}{A} e^{(V_{mp} + I_{mp} R_s)/A} - \dfrac{1}{R_{sh}} }{ 1 + \dfrac{I_d R_s}{A} e^{(V_{mp} + I_{mp} R_s)/A} + \dfrac{R_s}{R_{sh}} } \tag{8}
\]

Additionally, the open-circuit voltage temperature coefficient is as follows:

\[
\mu_{V_{oc}} = \frac{dV_{oc}}{dT} = \frac{V_{oc,ref} - V_{oc,T}}{T_c - T} \tag{9}
\]

To evaluate μVoc numerically, it is necessary to know Voc,T, the open-circuit voltage at some cell temperature near the reference temperature. The exact cell temperature chosen is not significant, because T values between 1 and 10 K above or below Tc produce essentially the same outcome (Soto, 2006).
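This insensitivity to the exact temperature offset is easy to check numerically. The sketch below, again building on the earlier assumed parameters, derives the open-circuit voltage by setting I = 0 in equation (1) (with Rs and Rsh neglected, as stated earlier, so the Ns factor appears) and then estimates μVoc from equation (9):

```python
def Voc(T, S=100.0):
    """Open-circuit voltage: eq. (1) inverted with I = 0 (Rs, Rsh neglected)."""
    return (A * k * T * Ns / q) * np.log(Iph(T, S) / Id(T) + 1.0)

# Finite-difference estimate of the temperature coefficient, eq. (9).
# Offsets of 1 to 10 K around Tc give essentially the same value.
T = Tc + 5.0
mu_Voc = (Voc(Tc) - Voc(T)) / (Tc - T)
print(f"mu_Voc = {mu_Voc * 1000:.1f} mV/K")   # negative: Voc falls as T rises
```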

Nguyen and Lehman (2006) focused on studying the influence of non-uniform, changing shadows caused by passing clouds. They suggested a modeling and computing approach (an algorithm) to simulate this scenario and observe its effects on the power output of a PV array. They found that their model can predict the power losses in individual solar cells and the overall power output, and can detect hotspots in shaded PV modules. The model is also capable of simulating solar PV arrays in a variety of topologies, with or without the bypass diodes that function as hotspot eliminators. Using the circuit equations of PV cells and the effects of temperature variations and solar irradiation as a basis, Altas and Sharaf (2007) designed a PV array simulation model for the MATLAB/Simulink GUI environment. Gonzalez (2005) examined the behavior of PV cells at different temperatures and irradiance levels; he developed a circuit-based simulation and compared his results to the manufacturer's published curves. Chowdhury et al. (2008) published a MATLAB/Simulink model of a polycrystalline PV array with a DC voltage source. They discussed the model's performance under different loads and weather circumstances, as well as how they used it to build a load-shedding strategy for a standalone PV system. The authors also stated that laboratory-based cell characterization work can be used to construct simpler, low-burden mathematical models for numerous types of PV arrays, which will be extremely useful when simulating and studying distributed power systems and microgrids in the future. Chang et al. (2010) introduced a model-based PV performance monitoring system in LabVIEW with an online diagnosis capability; the data obtained were compared to estimated values derived from a single-diode practical PV system. Jiang et al. (2010) developed an improved MATLAB/Simulink simulation model for PV cells, compared its results to those of existing models, and demonstrated the model's capacity to precisely simulate the I-V characteristics of an actual PV module. This newly suggested model can be used to build and simulate solar PV systems with various MPPT control approaches and power circuit topologies.

Several notable conclusions and trends emerge from the studies on PV system modeling and analysis mentioned above:

    • The accuracy of the PV cell mathematical model and analysis can be enhanced by integrating diode saturation current, temperature dependency of photocurrent, and series and shunt resistance
    • The model and analysis can be made more accurate by adding two parallel diodes with different saturation currents or by making the diode quality factor a configurable parameter
    • The relationship between the photocurrent and temperature is linear
    • In cases of high daily irradiation variability, energy output vs irradiation might help compare different modules
    • The maximum power falls as the diode quality factor increases
    • With an increase in altitude, the absolute value of the direct normal irradiance rises
    • As the temperature of the cell rises, the open circuit voltage decreases linearly, resulting in a loss in cell efficiency
    • The value of series resistance should be kept as low as possible to extract the maximum power from the solar cell
    • With increasing environmental irradiation, the open circuit voltage rises logarithmically (see the sketch after this list)
    • The power output of solar cells is determined by the irradiance distribution and temperature
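The temperature and irradiation trends in this list (the roughly linear fall of the open-circuit voltage with temperature and its roughly logarithmic rise with irradiation) can be reproduced directly with the Voc helper from the sketch above, under the same assumed parameters:

```python
# Voc falls roughly linearly as the cell temperature rises (fixed irradiation)
for T in (288.0, 298.0, 308.0, 318.0):
    print(f"T = {T:.0f} K -> Voc = {Voc(T, S=100.0):.2f} V")

# Voc rises roughly logarithmically with irradiation (fixed temperature)
for S in (25.0, 50.0, 100.0):
    print(f"S = {S:5.1f} mW/cm^2 -> Voc = {Voc(298.0, S):.2f} V")
```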

How Science Has Contributed To The Film And Television Industry And Its Impact On Society

INTRODUCTION

For decades, people have been influenced by the entertainment industry. Movies and television have shaped millions of lives across the world. The entertainment industry is worth nearly $2 trillion, a figure that demonstrates its influence. Computer science applies to almost all forms of the entertainment industry. You may not realize that the movies and television shows you watch are substantially reliant on science.

In my project I have researched the scientific aspects of film production, computer science in the movie and television industry, and its impact on society.

Main research

The application of computer science in film serves many purposes: creating fight scenes, explosions, even people. Computer-generated special effects bring alive films such as Pirates of the Caribbean and Star Wars. Ron Fedkiw, an associate professor of computer science, uses computation to make solids and fluids more realistic in films. Physically based simulation has become popular in the special effects industry, and these technological tools can be used, for example, to sink a ship on screen.

Fedkiw has created special effects incomparable in their realism. He designs new algorithms that can rotate objects, create textures, create reflections, or mimic collisions. Here, I used an article from the official Stanford news journal written by Dawn Levy, a science writer for Stanford University. Levy graduated from Columbia University, has a degree in biology from UCLA, and worked in the biotech industry. The article is also courtesy of Ron Fedkiw, a professor in the Stanford University department of computer science and a researcher in computer graphics. (see reference A)

In summary, computer-generated imagery (CGI) uses computers to place creatures, backgrounds, and more into movies. CGI has even been used on multiple occasions to insert actors into films posthumously; examples include Audrey Hepburn, Marilyn Monroe, and Paul Walker, who were artificially added to advertisements and films through CGI.

More uses of science in film

The soundtrack of a movie is significant to its story line and reception, creating suspense, energy, and tension at key points. The music in films is often played in a studio by musicians; it is then recorded by computers so it can be modified using software until it fits the scene properly.

Every film requires a screenplay; it is the basis of the entire production. Screenwriters use many different computer programs that can format text into a screenplay.

Before computers, editors would spend hours looking through frames of film and cutting the film to edit it. Now editors use a computer to go through a movie frame by frame and can cut very precisely. Computer science has enhanced the quality of movies drastically.

I used reference B to find out about these additional uses of science in the film industry. The author, Brock Cooper, attended Illinois Wesleyan University and was a reporter for seven years before working in marketing and media relations.

Positives and negatives of science in film

Many believe technology's greatest impact is new cameras that allow cinematographers to record in higher definition, letting viewers take in more of the amazing work in set design. Technology is enabling movies that were not possible before.

Digital film means avoiding the huge cost of older film stock, so production companies can complete their movies while keeping the entire project closer to budget. Using science in film is positive because the finished film looks cleaner, the audience usually cannot tell when CGI has been used, and it is a powerful tool filmmakers use to set atmosphere.

Science has also contributed to the preservation of film. Physical film crumbles and degrades over time; digital archives make it easy to back up and restore films. Without technology, it would be almost impossible for the industry to make the number of films it does. The practicalities of digital film have largely improved the film industry, leaving it heavily reliant on science and technology.

On the negative side, it has been found that people who frequently watch television are more likely to lack confidence in the scientific community, to believe science is dangerous, and to consider a career in science undesirable.

Many people also find that the excessive use of CGI in film takes away from overall quality, believing CGI defies the laws of physics; some go as far as saying it has gone from helping filmmakers to invading cinema with implausible imagery.

I used reference C for the positives; this article is written by film director Charles Matthau. However, the article is partner content, so I was cautious about using much information from this source. I used reference D for the negatives; this article has many people in agreement in the responses. It is personally biased, but since I noted that these are people's opinions, it remains a usable source.

Conclusion

Science and film have been intertwined from the beginning of cinema. The concept of cinema emerged when scientists were looking for technological methods of studying animal movement. Many of the biggest and most financially successful films have science at their core, including Spider-Man and Avatar. A significant number of popular television shows are also immersed in science and technology, including CSI, House, and The Big Bang Theory.

Entertainment contributes largely to the public's attitude toward and understanding of science, as well as employing thousands of scientists. According to Alan Irwin, the public develops its understanding of science from everyday lives and experiences. Popular films have influenced people's lives by shaping and reinforcing their ideas of science, and multiple high-profile scientific organizations have embraced movies and television as 'legitimate vehicles' for science communication. Science in cinema encompasses the methods of science, the social interactions among scientists, laboratory equipment, science education, as well as science policy and science communication. Science in film has had a massive societal effect, changing the public's perception of science so we understand its value, enhancing storytelling, and educating us about humanity and the challenges we face in 21st-century society.

Essay on Nikola Tesla Car

The United States of America and Tesla: I will be explaining the position the country is in at this time, as well as Tesla's origin and how the two relate to a more evolved America in one of the greatest countries in the world. Companies that reside within the United States are exceptional works that people dreamed of before making them reality, companies that are changing the world and forging a new way forward from where we are living today. Among all these companies, Tesla has shown what it takes for us to move into the next generation of energy-efficient vehicles.

The United States of America is one of the greatest countries to live in; it claims freedom and liberty for all. Founded on July 4th, 1776 by the thirteen colonies, the land known from then on as the United States of America has been through much: depression, war, and many tragedies. It is in our nature to struggle and to need accomplishments; we are a very intelligent species trying to find the way, while some of us already have our way. Like Tesla, we have to think on the macro side and judge why we are creating a company and why we are choosing where that company is created.

According to a May 2019 country report on the United States, advisors are trying to track the policies of President Donald Trump's agenda, and Trump prioritizes policies that do not need congressional authorization. President Trump is also under investigation and runs the risk of impeachment. The Economist Intelligence Unit shows GDP growth in 2018 was 2.9%; in 2019, growth appeared to be stabilizing at 2.2%, and it is expected to slow to 1.7% by 2022. The GDP shows how the economy is performing and expanding: real GDP growth in 2018 was 2.9%, and it is estimated that by 2023 GDP growth will be down to 1.7%. The unemployment rate in 2018 was 3.9%, and it is estimated that by 2023 it will be up to 4.8%. Economic growth is slowing down, and tax reductions and public splurging are going to bring weaker revenue growth for many businesses in the future. The exchange rate in 2018 was 1.18, and by 2023 it is projected to be 1.24. There are many incentives in federal and state government programs to help businesses succeed in the United States.

Household consumption figures from 2017 show that the US is in the lead at $13,000 every month, compared to China at $4,200 a month, Japan at $2,200, Germany at $2,000, and the UK at $1,900; this is the outgoing money for bills, debt, and other expenses. The 2017 population by age was: 0-14, 60 million; 15-29, 68 million; 30-49, 81 million; 50-69, 79 million; 70-85, 28 million; 86+, 7 million. These are the target markets for companies to go after to find more consumers. Labor market wage growth is expected to drop to 1.9% in 2020 and will not return to its 2019 peak of 3.0%; by 2023 it will arrive at 2.2%.

High-risk situations that are likely to arise in 2019-2020 are recessions, government shutdowns, leadership handicaps on policymaking, and civil war breakouts. That is why companies like Tesla have to make a better way for the American people to spend their money without draining the bank: they offer extremely affordable cars for what those cars are capable of doing. This new generation of car designers is paving the way for a new kind of company, and they know the age taking shape here in America.

Tesla Motor Corporation was founded in 2003 by Martin Eberhard and Marc Tarpenning in San Carlos, California, and was named after Nikola Tesla, the Serbian-American scientist. One hundred years ago, he pointed the way to robots, radio, remote control, and the wireless transmission of messages and pictures, and he visualized harnessing the wind and the sun to bring free energy to everybody. Nikola Tesla had obsessions and phobias about his ideas, for which he was widely known and admired.

This new-age electric revolutionary company is the leading electric car manufacturer in the United States. Tesla has many facilities and factories all over the country to help build a better way for the automotive industry: now that we are coming out of the oil and gas age and seeing what it does to our planet and everything around us, electric is the way of the future for a better, more successful, prosperous world. The great inventors of our time are dedicated to making our world better, and many smart and interesting people wish to extend the benefits of electricity throughout the world; there are continents where many people still live without electricity. Nikola Tesla, whose ideas are the main inspiration behind Tesla Motor Corporation, embodies the values and charisma at the leading edge of why Tesla acts as it does, which shows the electric benefits of the modern age.

I would love to give my own opinion about electricity. Nikola Tesla was a great inventor of his time who showed many people from many lands what could be accomplished through his work, but there was one thing standing in his way: the greed of the oil and gas industry. They knew that people were not ready for this, so they discredited Nikola Tesla's inventions and told the population that he was a crook. Out of greed, the value of selling gasoline to the public would make them very rich men, and Tesla's inventions were threatening because they were all free; there was no way for anyone to make a lot of money off them. I thought I would give my thoughts to explain why I find this account credible.

Tesla, Inc. (formerly Tesla Motors, Inc.) has a hierarchical structure that supports continuous business development. An organizational or corporate structure is the framework that characterizes the patterns of interaction among a company's components. Tesla has been battling to bring mounting losses under control, with CEO Elon Musk pressing hard to accelerate production of its new Model 3 battery car, but at what cost? Workers have charged the electric vehicle producer with serious quality issues at its Fremont, CA assembly plant. As Musk predicted a year ago, the Model 3 has been in 'manufacturing hell' since it was officially placed into production in July 2017. While Musk initially vowed to build 20,000 of the smaller battery cars a month by last December, production was running at only a fraction of that; the objective now is to reach an output of 2,500 vehicles every week by the end of this quarter and 5,000 by the end of June. One current Tesla engineer estimated that around 40% of the parts made or received at the Fremont plant require rework, CNBC reported. In some cases, parts must be sent back out of the plant for reworking; in other cases, vehicles are pulled off the line for fixes before being shipped to buyers. Tesla offers a range of cars spanning the Model S, 3, X, and Y, which happens to spell 'sexy', plus its new Cybertruck and a Roadster coming to the market soon. Tesla has gone even further and created energy cells for the roof of your house. Tesla showed up back in 2003, just a little before the electric boom, when people were finding out that electricity was a reliable source of energy; Tesla was smart enough to witness and act upon this and take advantage of a better, more reliable way to travel in everyday life, one that relied not on the government but on the people to innovate and drive the company and its software into the new age of the electric world.

Airplane: Description of the Invention

The credit for the invention of the first powered airplane goes to the Wright brothers, Orville and Wilbur. The world's first powered, sustained, and controlled flight in a heavier-than-air airplane was achieved by the Wright brothers in 1903, using the airplane named the Flyer. The Flyer was a canard biplane, as it had its tail at the front, and it could carry only the pilot. It was powered by a 9-kW (12-hp) Wright engine that drove two pusher propellers turning in opposite directions; the drive chain was crossed for this purpose. Takeoff was made possible by a two-wheel trolley that carried the Flyer along an 18.3 m (60 ft) wooden rail. The wings were curved and covered with muslin. The upper and lower wings were joined by nine pairs of strong struts and kept firm with the aid of wires. The airframe was built entirely of wood, mostly spruce. On December 17, 1903, the Flyer made four flights, the longest of which was the last: it covered a distance of 260 m (852 ft) in 59 seconds.

Information regarding the inventor, the society, and circumstances surrounding the invention

Wilbur Wright was born in 1867 and Orville Wright in 1871, to Milton Wright and Susan Catherine Koerner. Both attended high school but did not receive diplomas. The craze for bicycles in the U.S.A. drew them to open a bicycle repair and sales shop. Their interest in flight developed with the advent of gliders and the national interest in them. Using their experience with bicycles, they progressed to developing gliders and finally added control and power to achieve the first powered flight. Their success with the Flyer was initially met with skepticism in the U.S.A., as people did not believe that powered flight was possible, much less by these bicycle mechanics. However, their later developments gave legitimacy to their claims and led to airplanes becoming accepted by society and the armed forces.

How this invention changed the society of origin

How (and when) this invention was introduced to other societies and the impact it had upon introduction

European society was just as skeptical of the Wright brothers' claim to the first flight. In 1908, Wilbur Wright arrived in France with a Wright model airplane and, over a period of one year, made more than 200 demonstration flights in Europe. The critics in Europe turned into admirers, and the Wright brothers became heroes. The French and Germans contracted to build the Wright aircraft under license, and flight became a reality in Europe.

Why this invention is considered one that changed the world

The invention of the airplane by the Wright brothers has redefined the concepts of time and distance. Since the advent of the airplane, people no longer think in terms of days and months for their transportation needs; instead, they think in terms of hours and days. The airplane has made it possible for large groups of people to move from one place to another, which has vastly increased the migration of people from one region of the world to another. Goods are also transported by airplane and take very short periods of time to reach their destinations. In essence, it can be said that the airplane has veritably shrunk the world.

Figure 1. Kitty Hawk, NC (North Carolina), December 17, 1903. Orville Wright's famous first airplane flight.
Figure 2. The 1903 Wright Flyer 1, after its fourth and final flight.


The Invention of Television

According to Osborne (2010), Paul Nipkow, a German, was the first person to transmit pictures over wires, using the scanning principle of the rotating-disc technology he developed in 1884. Nipkow's scanning-disc idea was developed further by John Logie Baird in the 1920s, which led to developments in electronics; Baird patented the idea of using arrays of transparent rods for the transmission of television images.

In 1923, Charles Francis Jenkins helped in the transmission of moving silhouette images using a system of mechanical television. Vladimir Zworykin, a Russian, improved his own cathode ray tube in 1929.

In 1948, multiple contributions were made by Louis Parker, who developed the idea of the modern television receiver. Later, major inventions followed: color television and cable television (1953), remote controls (1956), plasma televisions (1964), and web television (1996).

According to Street (2006), Paul Nipkow developed the Nipkow disc in 1884, which was capable of transmitting pictures by use of a cable. More innovations were made in the 1920s; by then, John Baird was using a collection of clear bars for delivering images, an idea he protected by patent.

Philo Farnsworth had started his studies in 1924, and by 1927 he had designed the first complete electronic television. In 1938, Werner Flechsig patented a modern analog color television signal.

In 1939, the DuMont Company became the first company to start manufacturing television sets, and it launched the first television network in 1946. In addition, Goldmark delivered a color television in the same year; this television could deliver images in blue, red, and green with the help of a cathode ray tube.

The development of the television set was quite a complex series of events, and saying that one particular man invented television is an overstatement, since many people contributed to and improved on the works and inventions of earlier times. As Ingram (2006) shows, the media have come through a lot of changes.

The media have evolved from the times of the 15th century, when Johann Gutenberg rediscovered printing in Europe, to today, when the internet is the newest and biggest source of information.

The media have transformed in a big way since then: a couple can now receive messages in the comfort of their living room, where once messages were sent only out of emergency; today messages are easy to send. In earlier times, it was hard to send a message because of the expense of time involved.

A letter sent across the Atlantic Ocean took ages to arrive simply because there was no other means of transportation. Recent transformations have allowed far easier dissemination of information.

References

Briggs, A & Burke, P 2010, A Social History of the Media: From Gutenberg to the Internet, 3rd edn, Polity Press, Cambridge.

Bull, S 2009, ‘Photography’ in D. Albertazzi and P. Cobley (eds) The Media: An Introduction, 3rd edn, Pearson, Harlow.

Cobley, P 2001, Narrative, Routledge, London.

Duck, S & McMahan, DT 2008, The Basics of Communication: A Relational Perspective, Sage, New York.

Ingram, A 2006, An Advertiser’s Guide to Better Radio Advertising: Tune In to the Power of the Brand Conversation Medium, John Wiley & Sons, Chichester.

Kraeuter, DW 1993, British radio and television pioneers: a patent bibliography, Scarecrow Press, Metuchen.

Lewis, PM & Booth, J 1990, The invisible medium: public, commercial, and community radio, Howard University Press, Washington, D.C.

Locket, M & Patterson, A 2007, Radio man: Marconi Sahib, Universe, New York.

Osborne, J 2010, Radio head: up and down the dial of British radio, Pocket, London.

Pocock, RF 1988, The early British radio industry, Manchester University Press, Manchester.

Street, S 2006, Historical dictionary of British radio, Scarecrow Press, Lanham.

Wells, A 1997, Mass media & society, Ablex Publ, Greenwich.

Wimmer, RD & Dominick, JR 2011, Mass media research: an introduction, Cengage-Wadsworth, Boston.

Invention Analysis and Claiming: A Patent Lawyer’s Guide

Introduction

Pasteurization is a fundamental technology. Evidently, it has been applied within different fields in the present society. Pasteurization enables the development and preservation of vital products and consumables. Analytically, the historical development of pasteurization dates back to 1856 (Slusky, 2007).

Louis Pasteur emerges as the founder of this important technological process. Various thematic concepts are evident from the discovery of pasteurization. Principally, the concept of empirical experience and evidence stands out. Several books discuss diverse issues and experiences derived from the process of pasteurization and its invention.

Therefore, a critical examination of these writings reveals different thematic concerns. Particularly, the concerns relate to the technological and scientific realms. On page 5 of the book, the significance of persistent empirical research in technological advancement is demonstrated.

Why the Book Defends a Specific Thesis

Significant technological invention is attainable through a persistent process of experiential and empirical research. This central thesis is supported on page 18 of the book, which highlights the significance of continuous research and experiment. The crucial thematic concern is promoted through the description of the underlying story of Louis Pasteur.

According to Pasteur’s story, consistent hands-on experience outweighs personal talent. Once an individual develops a great interest in scientific experiments, the results supersede personal talent (Slusky, 2007). Categorically, the book supports this concept by indicating that Louis was more talented in arts and painting. However, his interest in science and technological experimentation led to the development of a fundamental process.

The story also indicates that technological and scientific inventions are endless. This is demonstrated within all realms of scientific practice. A single technological discovery is applicable within very diverse fields of scientific and industrial practice.

As indicated in the book, continuous critique and remodelling of these processes is vital. Indicatively, this is because technological applications are shaped by different factors. Some of these include the impacts of globalization, environmental influences, and other diverse intrinsic considerations. Ideally, the example drawn from Louis Pasteur indicates that the fermentation process remains applicable across many fields (Slusky, 2007).

Despite this variability in the fields of technological usage, there is a single critical underlying concept: the capacity of the pasteurization process to kill germs when products are heated to certain temperatures.

The book indicates that the process was initially applied in the preservation of milk. Nonetheless, continuous critique has led to the emergence of other areas of utility, including the preservation of beverages and of food and non-food substances. In providing these examples, the book specifies that all scientific or technological discoveries are open to intensive criticism.

The criticisms emerge from both supporters and denouncers of the particular technology. Thus, in most cases, the criticisms lead to important developments, applications and technological improvements. This is independent of the specific field of technological discovery (Slusky, 2007).

The book reinforces the fact that the sciences are interrelated. Therefore, one discovery leads to the development or invention of an equally crucial technological application. This thematic concern is evidenced by the several interrelated and procedural discoveries made by Louis Pasteur.

In conclusion, the book is a strong indicator of basic technological and scientific themes. Pasteurization is used to exemplify these different technological themes and applications.

Reference

Slusky, R. D. (2007). Invention analysis and claiming: A patent lawyer’s guide. Chicago: American Bar Association, General Practice, Solo & Small Firm Section.

New Product Invention: Australian Tourism

Executive Summary

The Australian tourism sector has experienced consistent growth over the years. It is one of the main contributors to the country’s gross domestic product on a yearly basis. Finding an appropriate investment opportunity in this industry can therefore be very viable. Given the stable economic growth in many countries around the world, a new investment plan in the field of tourism is a promising idea, and the hospitality industry is one of the most attractive investments an investor can consider in this market. The number of visitors to the country, especially around the Kimberley region, is on the rise. Establishing the Kimberly Beach Resort is an investment that will surely reward the investor. The Kimberley region is one of the places that receive visitors throughout the year. This project will be beneficial because the hotels in the region have been overwhelmed by the current flow of visitors. There is a gap in this market that this firm can easily fill.

Introduction

Tourism is one of the fastest-developing industries in Australia. In the Kimberley, in north-western Australia, tourism has been one of the leading sources of income. This is because of the location of the region, which borders the Indian Ocean to the west, the Timor Sea to the north, and the Tanami and Great Sandy Deserts to the southeast. This makes it one of the best-located regions for tourism in the world, and tourists come to it from all parts of the globe. According to Kahn (2013, p. 86), the Kimberley is one of the regions of Australia most visited by both local and international tourists. Its strategic location means that those who visit can see several sites while staying in its hotels, which is why many tourists spend a long time in the region.

Finding an appropriate business opportunity in such regions can result in great success for investors. As Kahn (2013, p. 38) observes, most tourists come to a region to spend money as they relax away from their offices or workplaces. The ultimate aim of a tourist is to spend time enjoying the luxury afforded by their hard-earned money. Coming up with a viable project in this region targeting tourists can be a massive success if it is done appropriately. A tourist hotel is the business opportunity most likely to succeed here, and Kuri Bay may be the best location for the project. The bay is remote, and reaching it requires a helicopter or a seaplane. Despite the difficulty of access, tourists are continually attracted to the beach by its beauty and remote location.

A tourist hotel would prosper because the place is some distance from the mainland, and once one gets into the bay, it is cumbersome to get back. People find the seaplane trip difficult, especially first-time users, while using a helicopter is very expensive, especially for those who need to make several visits, such as those on educational tours. A hotel in this bay would eliminate the need to travel back and forth once one lands. Tourists would have the opportunity to stay in the hotel for as long as they are interested in the bay, offering them a cheaper way of staying in the region than booking hotels in major Australian cities.

This business idea may give returns after a very short time because of the recent rise in the number of tourists who come to this region. According to Kumar and Phrommathed (2005, p. 72), the tourism sector has been growing in this country, especially over the past ten years, while the number of hotels available to local and international tourists remains limited. The country has been ranked as one of the safest nations in the world, which has attracted holidaymakers and investors who feel that their investments will be secure. This means that the existing hotels are under intense pressure to host these visitors. The venture would attract customers depending on the marketing strategy adopted by those in charge.

Industry Definition

The hotel industry is one of the oldest in the world. In this country, the industry is well established, with some players having been in existence for over fifty years. The hotel industry can best be defined as an industry that offers accommodation and meals to travellers who are far from home and need somewhere to stay and eat. In Australia, this industry has been growing in proportion to the growth in the number of visitors who come to the country. The scope of the industry can be analysed at different levels, each defining the competitors that this firm will face. The following tree diagram demonstrates this.

Industry Analysis using the Tree Diagram

The above tree diagram shows the industry players that this firm will face. At the top of the tree, the firm faces all the players in the hospitality industry, irrespective of their level and size. Narrowing the industry down, the firm will compete with all hotels rated four stars and above; narrowing it further, the firm will face all hotels offering spa and casino services. Some visitors will not stay in hotels carrying the casino tag because of the impression they have of casinos, while others book only hotels with spa services and casinos. It is for this market segment that the firm will be competing. Firms operating in this region have very strong brands, which help them attract and retain customers. To operate successfully in this industry, a firm needs to develop a strong brand that can compete favourably with the existing ones.

Suitable Brand Name

According to Annacchino (2003, p. 118), a brand is one of the most important assets a firm can ever have. Kahn (2013, p. 83), on the other hand, defines a brand as the substance that remains of a firm after all the physical assets have been taken away; it is the way the market views a firm and its products. The best brand for this business venture is The Kimberly Beach Resort. The firm should adopt this name for a number of reasons, the first being the popularity of Kimberley as a region. Although the resort is in Kuri Bay, adopting the name Kuri would make advertising difficult because Kuri is not as well known as Kimberley.

Kimberley is well known, and advertising a product under this name will ease the awareness-creation process. The brand name is also appropriate because of the area that the Kimberley covers: tourists who want to visit any part of the region will consider booking the hotel, believing it will make their movement through the region easy. The name will also open doors for expansion. Restricting the name to a small locality like Kuri Bay would make it difficult to expand the hotel beyond Kuri, and any expansion to other parts of the Kimberley region would force the firm to develop a different brand name.

The brand logo should be a beautiful coastal beach. This would help convey the location of the hotel and emphasize that visiting the region and staying in the hotel will give the visitor access to the beauty of the beaches.

This industry has experienced a number of changes in the past few years. According to Kahn (2013, p. 96), the tourism industry is experiencing massive competition in this century because of the number of players that have entered the field. This scholar holds that many investors have been attracted to the industry, which means they have to fight for the available customers. Annacchino (2003, p. 67) notes that in the past the hotel industry was considered distinct from the tourism industry; however, this is no longer the case. The hotel industry now forms the backbone of the tourism industry, because most hotels have developed strategies for taking care of tourists’ interests from the time they check in to the time they leave. Annacchino (2003, p. 112) says that hotels currently use their own cars to take tourists to every destination they may wish to visit once they have checked in. They carry out most of the activities that were previously handled by agencies in this industry, which makes them the main players in it.

Overview of the Competitors

This industry has very strong players that have been in operation for over five decades, including numerous five- and four-star hotels that have gained popularity with both local and international customers. The Sheraton hotels are among the strongest brands in the industry. Their main strength lies in their financial power and the strong brands they have developed over the years, which makes them formidable in the market. However, they also have weaknesses, such as slowness in responding to emerging technologies. This makes it possible for a new investment to launch an entry and manage the competition in the market.

Macro environment Analysis

The macro environment for this investment can be analysed from the political, economic, social, technological, and legal perspectives. The political and economic environments in this country have been very stable, and the social environment is accommodating of such new investments. Like any other industry, this industry is open to technological advancement, and the legal structures put in place by the government support investment in the country.

Marketing Strategies when Launching the Product

It is therefore important to develop a strategy that helps a firm be seen as unique in the industry, and this can be developed while launching the product onto the market. According to Kahn (2013, p. 129), the process of launching a product is very important in shaping the image of the firm and of the product in the market. According to Brown and Leavitt (2004, p. 63), the first impression a product makes in the market will always determine its image among customers. Product proposition is important in determining the type of consumers that will be attracted to the product. When launching the product, care should be taken to avoid giving the market the wrong image.

Annacchino (2003, p. 93) advises that when launching a product into the market, a firm should assess the market forces, mainly the competitors it faces. Launching a new product should be done in a way that attracts consumers and convinces them that the new arrival can meet their needs better than the existing products. This should not, however, be done in a way that provokes competitors to act in a manner that may result in a damaging contest in the market.

Kahn (2013, p. 37) says that launching a product into the market requires a firm to determine the current needs of the market. The firm should determine what the market needs most despite the existence of other firms offering the same product. The firm should then position itself in a way that convinces the market that the product fills a gap that existed before, offering the solution that existing competitors were unable to provide. This will help pull away customers who were loyal to established competitors.

Annacchino (2003, p. 57) says that the introduction of a new product by a new firm differs from the introduction of a product by a firm already established in the industry. In this case, this firm will be entering the market as a new firm. The scholar advises that the best way of doing this is to ensure that the product is introduced into the market as genuinely new rather than as a product that is already on offer. The biggest task will be finding a way for this firm to present a product such as a hospitality service as new. Brown and Leavitt (2004, p. 13) say that the best way to do this is to offer the product in a different manner, or alongside other products.

Consumer Behaviour and Market Positioning

Consumer behaviour in the market will always determine the positioning a firm takes in the industry. Given that this shall be a five-star hotel, it must master the behaviour of customers in this class. Kimberly Beach Resort should be positioned as a hotel that offers tourists more than just accommodation. The rooms should have internet connectivity and a computer that enables travellers to conduct various activities online. A traveller with pending work can come to Kimberly Beach Resort, complete the task, and send it back to the office online.

This means that those whose work can be done with access to internet-enabled computers need not cut short their tour to return to their offices. The hotel will offer them the facilities they need at the office; it shall, in effect, be an office away from the office. The resort should go beyond this and offer its visitors most of the popular indoor pastimes: games such as pool, table tennis, and poker, along with swimming competitions and gym facilities, should be available. The management should go a step further and find a way of linking lovers of golf to the local golf clubs close enough to the facility.

The aim of the firm will be to convince consumers that it offers the best facilities any tourist may need. The pricing strategy should be based on the facilities offered and the prevailing prices in the industry. According to Annacchino (2003, p. 71), pricing should be done in a way that does not provoke competitors into lowering their own prices, and the price should reflect what the firm is offering. The firm should not offer a great deal but charge too little, because this may make consumers feel that it offers substandard products. Similarly, very high prices may drive away customers, who may view the firm as exploitative. Forming a niche in the market may help the firm set prices that reflect that niche; as Kahn (2013, p. 121) says, the best way for a firm to arrive at the right price is to develop its own niche in the market and a pricing mechanism unique to it.

Promotional Strategies

Promotional strategies will always determine the success or failure of a firm in the contemporary world market (Brown & Leavitt 2004, p. 67), because in the current market a firm competes for attention. The ability of a firm to attract and retain customers depends not only on the quality of the service offered but also on the advertising strategy it employs. The current market has so many competitors that a firm must find mechanisms to make its products stand out. As Annacchino (2003, p. 119) notes, the customer is otherwise left to guess which firm has the best capacity to offer the desired benefits. In deciding which product to purchase, a consumer will look for the product that has been positioned as the one that meets their needs.

For Kimberly Beach Resort, advertising will be based on the desired market segment, which means that segmentation will be important for this firm. Segmentation of the market helps in determining the best way in which a firm can meet the needs of its consumers. Consumers’ needs vary with their social class, religious groupings, age, culture, and other demographic factors. Kimberly Beach Resort should understand all the market segments that exist within this market and then determine which segment is the most appropriate for its products. This is important because, as Kahn (2013, p. 82) says, promotion is always a reflection of what a firm offers.

This scholar warns that it is dangerous for a business unit to position its product in a given way only for customers to discover that what the firm offers falls far below what is advertised. When what is offered is below expectation, the product generates dissatisfaction, meaning the customer will not make a repeat purchase of the firm’s products. Customer retention is what determines a firm’s market share: when a firm is able to attract and retain customers, it will experience growth, and this will help it capture markets beyond its current scope.

After identifying the right market segment, Kimberly Beach Resort should design a promotional campaign that will attract it. In the campaign, customers should be informed of the location of the resort, all the products on offer, the reasons that make the resort superior to existing ones, and any other information the consumer may need. The firm will target middle-class local and international travellers who come to the Kimberley region for leisure. This segment forms the majority of travellers, and its members are easier to satisfy than the upper class, which makes the segment attractive.

The channel through which the promotional campaign will reach the target group should be clearly defined. Social media marketing has become popular among marketers in the 21st century, and Kimberly Beach Resort should consider using such media as Facebook, Twitter, and YouTube to reach out to the market. This will help the firm reach international markets. The marketing management team should develop television commercials to be uploaded to YouTube, and each commercial should be given a name closely connected to the keywords that anyone searching for information about the Kimberley region might use.

This will make it easy for searchers to find the commercial. The mass media should not be ignored: the firm should use local television channels to promote the product among Australians, and radio stations can also help reach customers. International television stations such as Al Jazeera, CNN, and BBC can help reach consumers in the European, Asian, African, and American markets. The commercials should include snapshots of both the interior and exterior of the hotel and show its proximity to the beach, and all the important facilities found in the hotel should be captured.

According to Brown and Leavitt (2004, p. 67), it is important for a firm to conform to the regulations put in place by the relevant authorities, to avoid cases where the authorities interfere with its operations over non-compliance. In this industry, consumers are very sensitive about their well-being. It would be embarrassing for an officer of a regulatory authority to force customers out of the premises and order immediate closure because of non-compliance. Customers would not only shy away from the hotel for fear of substandard products but would also resent the firm for the inconvenience caused. When a firm turns away customers because the authority has suspended its operations, customers develop the impression that the firm lacks the capacity to meet the needs of the market. The management should ensure that such bodies as the department of health, environmental agencies, building and construction agencies, and tax authorities, among others, are consulted and that their full consent is obtained before operations begin.

Finally, Kimberly Beach Resort should maintain safety within the facility. In this location, travellers will be concerned about their safety, especially given that the beach can be reached only by helicopter or seaplane. Australia is one of the safest countries in the world; however, consumers still want to see a sense of security when they visit. This is especially important for international travellers who may be coming from insecure countries like Afghanistan. Their perception of security is the presence of armed security officers at the entrance and around the compound, ready to address any concern that may arise within the facility; this will convince them that, once here, their security is not left to chance. The firm may hire officers with experience in security matters to ensure that all visitors are well protected while within the facility.

Recommendations

The above discussion is an elaborate description of the strategy that this firm should adopt in developing a new product concept. The following recommendations should be taken into consideration when implementing this project.

  • The marketing management team should ensure that they have an understanding of this industry within this country before launching the product.
  • The management should develop a marketing strategy that would convince the market that the products of this firm are unique.
  • It would be important for the management of the firm to employ people who have some experience in this industry to serve in most of the managerial positions.
  • The marketing management team should develop a mechanism through which the level of customer satisfaction can be determined. This will help the firm know when adjustments are needed.

List of References

Annacchino, M 2003, New product development: From initial idea to product management, Butterworth-Heinemann, Amsterdam.

Brown, M & Leavitt, P 2004, New product development: A guide for your journey to best-practice processes, American Productivity & Quality Center, Houston.

Kahn, K 2013, The PDMA handbook of new product development, John Wiley and Sons, Hoboken.

Kumar, S & Phrommathed, P 2005, New product development: An empirical study of the effects of innovation strategy, organization learning, and market conditions, Springer, New York.

Optical Tools: History of Invention and Consequential Development

Introduction

It would prove impossible to imagine the realities of living today if people in just about all social strata were not able to take practical advantage of such optical instruments (tools) as telescopes and microscopes. Such a state of affairs is fully explainable – it is not only that these tools represent much value, as useful and utilitarian assets, but they are also often referred to as symbols of humanity’s endeavor to remain on the path of expanding its intellectual horizons.

Moreover, the invention and consequential development of optical instruments have also been mentioned in connection with what accounts for the workings of the so-called “Faustian” (Western) mentality, which makes the history of these instruments a legitimate subject for anthropological inquiry. In my paper, I will explore the validity of the latter suggestion at length, while specifying the highlights in the history of these tools’ development and elaborating on the discursive significance of how they came into being.

History

It is a well-established fact that high-quality magnifying lenses are not a relatively recent phenomenon, but began to be made as far back as ancient times. More than a century ago, during the excavation of the ruins of ancient Troy, Schliemann was able to find a large number of skillfully polished crystal lenses, easily used for the purpose of magnifying things visually. Moreover, there are also a number of eyewitness accounts (articulated by the historian Diocles) of how the ancient Greeks would deploy large magnifying glasses for military purposes, such as projecting and intensifying reflected sunlight onto enemy ships in the sea with the consequence of burning them.

Aristotle, Euclid, and Ptolemy, on numerous occasions in their works, mentioned optical devices and optical laws. Unfortunately, we do not know what these ancient optical tools looked like. However, there is little doubt that such instruments did exist, back through the time of Greco-Roman antiquity, and even in earlier times.

For about six hundred years after the fall of the Roman Empire in the fifth century A.D., Europe remained in a state of barbarism, which is why there are no historical records of any optical instruments having been in use throughout the period in question. Nevertheless, as socio-cultural progress in this part of the world gained momentum, the idea of using lenses to create optical devices began to appeal to more and more intellectually advanced (when considered against the backdrop of their time) individuals. The most notable of them was Roger Bacon (1214-1294), who, for the first time in history (at least, since ancient times), theorized that it should be possible to create a “spyglass” by means of inserting a few lenses into a hollow metal tube.

Nevertheless, it was not until the year 1608 that a Dutch optician, Hans Lippershey, followed shortly by an Italian scientist, Galileo Galilei, put Bacon’s idea into practice. The former is credited with conceptualizing the principle of combining convex and biconcave lenses within a single optical instrument, which, in point of fact, was intended to enable military commanders to see things at a great distance – something that later came to be known as a “spyglass.” Lippershey is also commonly credited with inventing yet another optical device of great practical value – binoculars.

Even though the functioning of Galilei’s telescope was based on the same principle, his invention is usually regarded as much more of a revolutionary breakthrough in the development of optical instruments. For the first time in history, Galilei showed that an optical instrument (in his case, the telescope) could be used not only to track the movements of celestial bodies but also to gain analytical insight into what the stars and planets really might be.

The fact that this was indeed the case can be illustrated by Galilei’s discovery of four moons orbiting Jupiter – a development that contributed substantially toward exposing the fallaciousness of the geocentric model of the universe. As it turned out, this was made possible by the fact that, unlike Lippershey, Galilei found the actual key to making the telescope a practical tool for exploring space: his unique approach to lens-polishing, which allowed the scientist to craft the most advanced lenses in Europe (McCray, 2008).

Nevertheless, Galilei’s telescopes had a major drawback: They tended to distort colors, due to the effect of chromatic aberration, which arises because red light is refracted less strongly than the rest of the color spectrum. What this means is that even a slight imperfection in the shape of a lens will necessarily alter the magnified image of a star or planet to some degree, sometimes considerably.

In 1641, the Polish astronomer Johannes Hevelius came up with a solution to this problem. It stemmed from his realization that the effect of chromatic aberration can be reduced to a minimum by using lenses with a very long focal length. Hevelius began experimenting with lenses with focal lengths of as much as 20 meters, and the longest of his telescopes had a focal length of about 50 meters. The lens was connected to the eyepiece with four wooden planks, making the structure much more rigid.
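The essay leaves the underlying optics implicit, so here is a minimal sketch of why Hevelius’s strategy works, assuming a simple thin lens and the standard Abbe-number notation (neither of which appears in the original text):

\[
\frac{1}{f(\lambda)} = \bigl(n(\lambda) - 1\bigr)\left(\frac{1}{R_1} - \frac{1}{R_2}\right),
\qquad
\beta \approx \frac{D}{2Vf},
\]

where \(n(\lambda)\) is the wavelength-dependent refractive index of the glass, \(R_1\) and \(R_2\) are the surface radii, \(D\) is the aperture, \(V\) is the Abbe number, and \(\beta\) is roughly the angular size of the chromatic blur spot. Because the focal length varies with color through \(n(\lambda)\), some blur is unavoidable in a single lens; but since \(\beta\) falls as \(1/f\) for a fixed aperture, stretching the focal length to tens of meters, as Hevelius did, makes the color fringes correspondingly small.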

Another step on the path of the instrument’s development was the invention of the reflector-telescope by Isaac Newton, brought about by that scientist’s quest to eliminate chromatic aberration altogether. Initially, he intended to use two lenses of opposite sign (one positive, one negative) in a tube to reduce or cancel the aberration. However, after experimenting with the idea, Newton concluded that there was no way to get rid of the problem as long as the telescope’s construction remained conventional.

As it turned out, this caused the scientist to decide to do away with the problem in a radical manner. Like many other astronomers of the time, he knew that an achromatic image of distant objects is formed around the axis of a concave mirror shaped as a paraboloid of revolution. However, attempts to construct reflecting telescopes before Newton’s time had not been crowned with success, because the geometrical characteristics of the mirrors must be strictly coordinated, and this was exactly what the opticians of the time could not achieve. In 1668, Newton constructed his first reflector-telescope, using a concave primary mirror to gather the incoming light and a small secondary mirror to redirect the formed image into the eyepiece on the side of the telescope’s tube.

In 1672, the French astronomer Cassegrain proposed his own two-mirror configuration, in which the first mirror was parabolic and the second took the form of a convex hyperboloid located coaxially in front of the focus. Even at the present time, this configuration principle continues to be used for building reflectors, especially those meant for amateur astronomers. Despite the simplicity of the principle, it took Cassegrain fifteen years of continual effort at improving reflector-telescopes before he was able to come up with a practical, working design that he found satisfactory.

It took nearly another century for astronomers to realize the error of Newton’s assertion that it is impossible to create an achromatic lens. In 1751, the British optician John Dollond, with his son Peter, undertook a series of experiments with prisms made out of Venetian crown glass and English flint glass, which at the time were used for making jewelry. As it turned out, these two varieties of glass could be combined into lenses that effectively suppressed the color halo around the magnified images of stars and planets. As a result, refractor-telescopes were brought to a whole new level of capability.
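As another hedged sketch (the condition below is standard thin-lens doublet theory rather than anything stated in the essay), the Dollonds’ crown-and-flint combination can be read as an achromatic doublet: two thin lenses in contact, with powers \(\varphi = 1/f\) and Abbe numbers \(V\), chosen so that the dispersions cancel:

\[
\varphi = \varphi_{\text{crown}} + \varphi_{\text{flint}},
\qquad
\frac{\varphi_{\text{crown}}}{V_{\text{crown}}} + \frac{\varphi_{\text{flint}}}{V_{\text{flint}}} = 0 .
\]

Because flint glass is the more dispersive of the two (smaller \(V\)), a weak negative flint element can cancel the color error of a stronger positive crown element while leaving the pair with net positive power – which is consistent with the halo suppression described above.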

Moreover, Dollond’s invention made telescopes more affordable, which in turn contributed to the popularization of astronomy in eighteenth-century Europe, hence adding momentum to the pace of scientific progress in the West.

Throughout the nineteenth and twentieth centuries, telescopes continued to be perfected in a variety of ways. One of the most notable improvements took place in 1948, when a Soviet engineer named Ponomariov drew attention to the fact that the telescope’s design could be improved by allowing the instrument to rotate about its vertical and horizontal axes – the so-called “azimuthal mount.” This mounting principle is used in all modern portable telescopes.

As of today, the word “telescope” refers to much more than merely an optical tool. The invention of radio telescopes and the launch of the space telescope Hubble prove the validity of this suggestion better than anything else. There is even more to it: the history of the telescope’s development falls within the notion promulgated by the term “Western industriousness.” After all, just about every important breakthrough in telescope-making technology has been achieved in Europe. Therefore, it is thoroughly appropriate to suggest that the invention and development of telescopes cannot be referred to as a “thing in itself”; rather, it has been dialectically predetermined. In the following sub-chapter, this idea will be discussed at length.

Just as in the case of the telescope, the invention of the microscope was brought about by continual progress in the field of optics throughout the sixteenth century. Nowadays, it is commonly assumed that the Dutch optician Zachariah Jansen created the first microscope in 1595 by mounting two convex lenses in a metal tube – an arrangement that allowed Jansen to magnify the small objects under observation up to ten times. Focusing on the object was done by sliding the tube up and down its vertical axis.

Nevertheless, it was not until 1681 that the microscope achieved full recognition as a tool for exploring the microscopic world, unseen by the unaided eye. At that time, the Dutch scientist and tradesman Antonie van Leeuwenhoek presented the British Royal Society with a microscope that had a magnification of 270. With this instrument, Leeuwenhoek discovered the flow of blood in the capillary vessels of a tadpole and the existence of microscopic single-celled algae and bacteria. He also revealed the actual mechanics of photosynthesis in plants.
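As a quick consistency check on the figure of 270 – using the conventional simple-magnifier formula with a 25 cm near-point distance, an assumption of mine rather than a datum from the essay – a single-lens instrument of that power requires a remarkably short focal length:

\[
M \approx \frac{25\ \text{cm}}{f} + 1
\quad\Longrightarrow\quad
f \approx \frac{25\ \text{cm}}{269} \approx 0.9\ \text{mm},
\]

that is, a tiny bead-like lens with a focal length of under a millimeter, which fits the single-lens design Leeuwenhoek is known to have used.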

In 1702, Havel introduced a micrometer screw and proposed that a mirror should be placed under the microscope to serve as its table. This development resulted in microscopes beginning to acquire their classic appearance, now familiar to just about every student who has taken a biology class.

The year 1824 marked another important development in the history of the microscope: the French optical firm Chevalier began to produce lens assemblies that combined two or three achromatic lenses, which in turn allowed the magnification of microscopes to be increased by as much as a thousand times. Therefore, there is nothing accidental about the fact that the historical period in question is associated with a number of truly revolutionary breakthroughs in the field of microbiology.

In 1830, Joseph Lister found a way to reduce spherical aberration (the bending of light due to the curvature of a lens) in microscopes: “He (Lister) discovered that by putting lenses at precise distances from each other, the aberration from all but the first lens could be eliminated” (History of the Microscope, 2015, para. 23). This development made it possible to enhance the resolving power of a microscope rather substantially.

Throughout the second half of the twentieth century, progress in microscopy attained nothing short of exponential momentum, and microscopes became ever more precise and sophisticated. Among the most significant developments during this time was the improved limit of resolution, from half a micron to one-tenth of a micron.

This, in turn, allowed scientists to explore the construction of physical matter at deeper levels, all the way down to the molecular. During this period, it also became clear that an insurmountable obstacle stands in the way of increasing the magnification capacity of an optical microscope: the wavelength of visible light itself. According to diffraction theory, it is impossible to resolve objects smaller than half the wavelength of the illuminating light, which means that the smallest objects perceivable by the human eye through an optical microscope cannot be smaller than about one-fourth of a micron.
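The essay’s limit of one-fourth of a micron follows directly from the half-wavelength rule it cites. As a worked example (the choice of a mid-spectrum wavelength of about 0.5 µm is mine, not the essay’s):

\[
d_{\min} \approx \frac{\lambda}{2} = \frac{0.5\ \mu\text{m}}{2} = 0.25\ \mu\text{m},
\]

which matches the stated bound for an optical microscope working with visible light.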

Nowadays, the most advanced microscopes fall in the category of SEMs (scanning electron microscopes). They use a beam of electrons, projected at the scanned object, to obtain information about what accounts for this object’s atomic subtleties. The magnification rate provided by this type of microscope reaches 500,000 times. It must be understood, of course, that the magnified images of microscopic objects seen through SEMs are digital.

In its turn, this implies that, as of today, microscopy can no longer be discussed in terms of a purely empirical pursuit – quite the contrary to what used to be the case from the time of the early seventeenth century up until comparatively recent times. In this respect, a certain parallel can be established between modern microscopy and modern astronomy – both of these scientific disciplines are now concerned with obtaining highly abstract and often counterintuitive insights into the nature of the surrounding micro and macro reality.

Discussion

Even though these optical instruments, the telescope and the microscope, are now used all over the world, there is a certain rationale in discussing them as physically embodied extrapolations of the so-called “Faustian” (Western) psyche, and consequently in suggesting that the invention of both tools was bound to take place in Europe and not anywhere else in the world. The logic behind this suggestion is as follows:

It represents a well-established fact that the representatives of different races differ in the manner they perceive the surrounding socio-cultural/natural niche and their place in it – something that can be discussed in terms of one’s endowment with a specific “national mentality.” For example, East-Asians have traditionally been known for their tendency to think and act “holistically” – that is, without trying to exercise any control, powered by the will, over the objectively existing environment around them, which in turn predetermined these people’s strongly defined sense of collectivism and perceptual utilitarianism.

As De Mooij and Hofstede (2011) noted, “In the collectivistic (Asian) model the self cannot be separated from others and the surrounding social context, so the self is an interdependent entity who is part of an encompassing social relationship” (p. 183). This partially explains why the Chinese have always been relatively slow in the pace of invention and application, one example being the time involved in realizing the military implications of their invention of gunpowder. Apparently, the people living in this culture never experienced an unconscious desire to impose mastery over things while aspiring (unconsciously) to attain the status of demigods – contrary to their Western counterparts.

Being highly egocentric and individually minded, Westerners have never ceased being preoccupied with trying to discover the most fundamental laws of nature – an activity that they innately felt was keeping them on the path of existential empowerment. Unlike Asians, Europeans do not strive to blend with nature, but rather to be in the position of exercising full control over nature and natural forces. Therefore, they are naturally predisposed toward trying to desacralize the “ways of God” rationally, as something that allows them to experience the sensation of becoming ever more powerful.

Hence, the significance of the invention of the telescope and the microscope: These inventions are insightful, in the sense of revealing what accounts for the forming of one’s self-identity as a Westerner. Probably better than anything else, the historic developments under consideration advance the legitimacy of the suggestion that being a Westerner (European) means acting in accordance with the idea that the “individual’s willpower must never cease combating obstacles… and that the conflict is the essence of existence” (Greenwood, 2009, p. 53).

The reason for this is apparent – both mentioned inventions imply that the so-called “Faustian soul,” associated with the ways of the West, seeks to attain self-actualization through cognitive reductionism and particularization. As Forsberg (2015) noted: “To use a microscope is to isolate an object, to take it out of circulation and to place it on a slide. One tears a thing from out of its context, dissects that thing, and de- and re-categorizes it. As the microscope magnifies an object, it simultaneously demagnifies everything around it” (p. 639).

Such a cognitive approach is fully consistent with the strongly analytical (object-focused) workings of the “Faustian” psyche, extrapolated by the affiliated people’s tendency to be concerned with trying to discover the essence of things: “In a variety of reasoning tasks… (Westerners) adopt an ‘analytic’ perspective. They look for the traits of objects while largely ignoring their context” (Bower, 2000, p. 57). Apparently, it is indeed appropriate to suggest that the invention and consequent development of the telescope and the microscope were bound to take place in Europe – an area populated by people who are both analytically minded, capable of operating with highly abstract categories, and driven to subjugate the surrounding environment.

Therefore, it is not much of an exaggeration to propose that the invention of the optical instruments under consideration was the most important contribution to the rise of scientific positivism in Europe from the seventeenth through the nineteenth centuries (Rasmussen, 1996). After all, being able to discover new stars and planets, on the one hand, and to explore the bacterial and molecular realms, on the other, will inevitably lead one to assume that there is nothing truly phenomenological about the works of nature.

Thus, it makes much sense to refer to the telescope and microscope as unmistakably “Western” tools, in the sense that they were brought into existence by the very spirit of Western civilization, concerned with conquest and with uncovering the actual mechanics behind the observable workings of the universe. It is understood, of course, that many of the claims articulated earlier appear rather speculative. This, however, does very little to undermine the validity of the idea that the history and societal significance of a particular tool cannot be discussed outside the objective social and psychological preconditions for its invention in the first place – especially if it happens to be a technologically advanced concept.

Conclusion

I believe that the line of argumentation employed in defense of the idea that there are strongly defined discursive overtones to the history and development of both optical instruments is fully consistent with the paper’s initial thesis. Apparently, in mentioning telescopes and microscopes, one refers not only to instruments used to achieve visual magnification of macro- and micro-objects but also to some of the most notable artifacts of the Western intellectual legacy as we know it.

Consequently, this implies that it is fully appropriate to expect that, by inquiring into the history and development of a particular tool, we should be able to enlighten ourselves as to what accounted for the qualitative aspects of its creator’s mentality. The anthropological and ethnographic relevance of this suggestion is obvious: It should be possible to anticipate developments in continuing scientific progress by assessing the ethnocultural characteristics of those who push the acquisition of knowledge forward. It is understood, of course, that the process by which Western societies are becoming increasingly multicultural undermines this prospect’s plausibility to some extent.

This, however, has very little effect on the methodological soundness of the claim that tools not only serve some purely utilitarian purposes but that they also reflect the manner in which the affiliated individuals aspire to achieve self-actualization – just as was suggested initially. Thus, it will be in order to conclude this paper by restating, once again, that there is so much more to the optical instruments in question than one might think.

References

Bower, B. (2000). Cultures of reason. Science News, 157(4), 56-58.

Forsberg, L. (2015). Nature’s invisibilia: The Victorian microscope and the miniature fairy. Victorian Studies, 57(4), 638-666.

Greenwood, S. (2009). Anthropology of magic. Oxford: Berg Publishers.

History of the Microscope. (2015). Web.

McCray, W. (2008). The telescope: Its history, technology, and future. Technology and Culture, 49(3), 789-791.

Rasmussen, N. (1996). Sociology of culture – the invisible world: Early modern philosophy and the invention of the microscope. Contemporary Sociology, 25(1), 123-129.