The Ministry of Foreign Affairs of Qatar Implementing VPN Technology

Project overview

The Ministry of Foreign Affairs of Qatar handles the international affairs and foreign policy of the Qatari government and its relations with other governments. It has around 75 diplomatic missions around the world.

The current methods used to send confidential documents are diplomatic mail, e-mail and fax. The diplomatic mail service takes two to three days to deliver documents, while faxes and private lines do not provide any security, as they are unencrypted.

The emergence of the virtual private network (VPN) has created a secure and cheaper medium for transferring sensitive information and documents between two or more organisations over a public network such as the internet, using a site-to-site VPN.

Throughout my studies and work experience in networking technology, I have considered implementing new technology at my workplace. My idea is to implement VPN technology to support the mailing and video systems in the ministry. This technology will solve many problems by providing a lower-cost communication solution, securing the lines, and enabling confidential documents to be shared safely.

The VPN connection is to be created between the Embassy of the State of Qatar in London and the Ministry of Foreign Affairs IT data centre.

Finding the user Requirements

According to Sommerville (2004), requirements capture involves three phases:

  • Eliciting requirements
  • Validating requirements
  • Recording requirements

All of these phases were carried out before starting the project.

Eliciting requirements

Both primary and secondary research was performed to obtain the requirements from two different user groups, who were interviewed and questioned. The two user categories were Ministry of Foreign Affairs IT staff and users from the Embassy of the State of Qatar in London.

Primary Research

Case Study

Communication has become fundamental to any business. At the start of the computer age, much of the focus was placed on data processing, i.e. traditional stand-alone processing. However, as it became evident that these stand-alone systems could be networked and could collaborate even in processing, the aspect of communication began to gain attention. The information resulting from these data processes required transmission or dispersion, so a communication dimension needed to be added to this setup to ensure that the information would be effectively transmitted to where it was most needed, enabling timely decision-making. The state of the information received would then determine the decision taken by the recipient, and this called for increased technological study.

One such step was the introduction of the virtual private network (VPN), a communication approach aimed at providing reliable corporate data transmission within the public domain. A VPN dedicates routers and servers to a corporation, providing a virtually private network within a public network such as the internet. These servers and routers are configured to a corporate standard, presenting a private network within the internet, and the routers exchange data and information over the internet as though it were a private network. The advantages of such an implementation continue to be proven; Middle East Airlines (MEA) is one case where a VPN has been utilised and continues to deliver benefits.

The Lebanese national airline, Middle East Airlines (MEA), entered a $3 million contract with the aviation IT service provider SITA. The contract covered the deployment of an IP virtual private network (VPN) to link all of MEA's branches worldwide. The interconnection covers Rafic Hariri International Airport in Beirut, the airline's headquarters, as well as nineteen branch offices. This is achieved using SITA's own virtual private network and is expected to improve performance and security. The airline's connection speed will be upgraded from 8 kbps or 16 kbps to at least 64 kbps. The improved effectiveness of the business applications resulting from this venture is expected to cut operating costs: "SITA has demonstrated that it has the advanced technologies, industry expertise, and round-the-clock support required to meet our evolving communication needs" (Suton 2008, p. 1).

Middle East Airlines currently runs applications such as the Gaetan reservation/inventory/departure-control application, cargo scheduling, and Oracle financials over the VPN. The infrastructure is well able to cater for future scalability requirements (Suton 2008, p. 1).

The IP migration has been considered a success, implying the viability of VPN implementation. In summary, it is envisaged that VPN solutions can add security and flexibility while providing the much-needed network support services for MEA. This can also be regarded as one of the largest virtual private networks in operation today.

Primary research methods are used to generate data that does not already exist (Erica & Priest 2009). Three primary research techniques were used: questionnaires, interviews and observation.

Observation

I spent a few days at the Embassy of the State of Qatar in London, and I previously worked in the IT department of the Ministry of Foreign Affairs. From this, I observed the following problems with the current communication and file-sharing systems:

  • The only way of making a phone call is via the normal PBX, i.e. an international call from Doha to London and vice versa, which has the following disadvantages:
    • The cost of the voice conversations is very high
    • Only voice conversations can be made; video conferencing cannot be established
    • Lack of security; anyone can tap into the voice conversation and listen to it
  • Sending files and important documents is done by fax, which again is costly and not secure at all
  • To address the security problem of faxes, diplomatic mail is used, but it is very slow, taking three to four days to reach the destination
  • Sending files by e-mail is also used, but e-mail might be hacked, allowing anyone to read the important documents sent

Interviews and questionnaires

Several interviews were carried out with the IT staff at the ministry headquarters and with some users at the Embassy of the State of Qatar in London, and from these interviews the following points were concluded:

From the questionnaires, the following can be deduced:

IT Staff

  • The IT staff are constrained by the budget set by the financial department, so they need a cost-effective solution for voice conversations
  • Security is a very important feature that the IT staff look for
  • They want to eliminate the diplomatic mail service
  • They need a faster and more secure way of sharing files

Embassy users

  • They need to call the Ministry of Foreign Affairs almost every day for business reasons, which costs them a lot using the existing technology.
  • They send a lot of confidential documents to headquarters using diplomatic mail. They need a faster method that is still very secure.
  • They need an organised electronic filing system that is linked to the HQ.
  • Sometimes they need to make secure phone calls for security reasons.

Validating requirements

After gathering the data from the questionnaires and observation, I analysed it in depth to ensure that it was clear and did not conflict with the IT HQ's requirements.

Recording requirements

Both the user and IT staff requirements were recorded in a clear, readable format.

Primary research results

From the primary research results, I arrived at a virtual private network (VPN) solution that can fulfil the user requirements.

The following points are the benefits from the new proposed VPN solution:

  • A cost-effective connectivity method and a significant reduction in monthly costs, as calls can be made over the internet (Voice over IP, or VoIP).
  • The calls can be secured, as a site-to-site VPN with encryption capabilities can be used: the site-to-site VPN creates an encrypted tunnel between the main office and its branches.
  • Shared applications and services, using an electronic archiving system, can be accessed remotely from the embassy on the servers at headquarters.
  • File and document mailing can be done quickly and securely, as all data and e-documents are encrypted and can only be read by the end user who holds the decryption key.
  • The system has minimal downtime when performing any required upgrades, i.e. high availability (Andersson et al. 2006).
  • The system is flexible enough to accommodate modifications or the addition of branches in the future.
  • The system can operate on various platforms easily, as the VPN runs over the internet, which is available almost everywhere.
  • The system is easy to learn and use, and user documentation will be provided.
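The shared-key encryption benefit above can be illustrated with a minimal sketch. A real site-to-site VPN uses IPsec with standard ciphers such as AES; the hash-derived XOR keystream below is only a toy stand-in (the key and document text are made up) showing the core idea that only a holder of the shared key can recover the document:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(document: bytes, key: bytes) -> bytes:
    """XOR the document with the keystream; applying it again decrypts."""
    ks = keystream(key, len(document))
    return bytes(d ^ k for d, k in zip(document, ks))

shared_key = b"pre-shared key known to HQ and embassy"  # hypothetical key
ciphertext = encrypt(b"CONFIDENTIAL: draft communique", shared_key)
plaintext = encrypt(ciphertext, shared_key)  # XOR is its own inverse
```

Anyone intercepting `ciphertext` on the public internet learns nothing useful without `shared_key`, which is the property the VPN tunnel provides for the ministry's documents.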

VPN Background theory

A virtual private network (VPN) is a computer network that uses a public network, such as the internet, to provide secure connectivity between remote offices or users and their headquarters or main office. The main benefit of a VPN is that it provides an inexpensive means of communication, since owning private telecommunication lines is very expensive; a VPN enhances productivity and cuts costs. Data is transferred between the headquarters and the remote sites securely, as encryption is applied to it.

The following picture is an example of an internet VPN:

Figure 1, Internet VPN

Some companies, such as Cisco and Juniper, provide VPN solutions with exceptional security features, using encryption and authentication technologies that protect data in transit from unauthorised access and attacks. Intensive research was performed to study these two solutions and learn their practical requirements and uses (Lindsay 1997).

Cisco, for example, provides two VPN technologies (VPN 2010): site-to-site and remote access.

Site-to-site

Figure 2, Site-to-Site VPN

A site-to-site VPN extends network resources to branch offices by using the internet to create a WAN (wide area network) infrastructure. All traffic between sites is encrypted using the IPsec protocol. Cisco VPNs also offer:

  • Reliable transport of complex traffic, such as voice (which is what we need in this project)
  • Simplified provisioning
  • Integrated advanced network intelligence
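The site-to-site behaviour described above can be sketched as a simple model of IPsec tunnel mode: the whole original packet, exchanged between private addresses, is encrypted and wrapped in a new outer header carrying the routers' public IPs, so the internet only ever routes on the outer addresses. This is an illustrative model, not Cisco's implementation; the IP addresses and the `encrypt` placeholder are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def encrypt(data: bytes) -> bytes:
    """Placeholder for the IPsec cipher (e.g. AES); here just a visible marker."""
    return b"ENC[" + data + b"]"

def tunnel_encapsulate(inner: Packet, gw_src: str, gw_dst: str) -> Packet:
    """Wrap the whole inner packet: encrypt it and address it gateway-to-gateway."""
    serialized = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(src=gw_src, dst=gw_dst, payload=encrypt(serialized))

# Hypothetical addressing: private LANs behind public gateway IPs.
inner = Packet("10.0.1.5", "10.0.2.9", b"voice frame")            # HQ host -> branch host
outer = tunnel_encapsulate(inner, "203.0.113.1", "198.51.100.7")  # public router IPs
```

The branch gateway performs the reverse: it decrypts the payload, recovers the inner packet, and forwards it onto the private branch LAN.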

Remote Access VPNs

Figure 3, Remote access VPN

Remote access VPNs extend almost any data, voice, or video application to the remote desktop, emulating the main office desktop so that anyone, at any time and from anywhere, can access it.

Secondary research

IP telephony

Since the advent of the internet and data networks, organisations have been realising the cost-cutting benefits of employing VoIP for voice transmission (Vbulletin 2010). Rather than maintaining a dedicated network for voice, the internet infrastructure, comprising data networks, continues to prove vital to IP telephony, which supports consistent voice communication. Cisco has recognised the major benefits of IP telephony in today's corporate world and has invested in providing Cisco Unified Communications IP telephony solutions, whose benefits include:

  • Providing a highly reliable communication channel that is also scalable. This takes advantage of the available LAN and WAN.
  • IP telephony results in improved employee productivity by use of supporting solutions such as the Cisco Unified Communication.

The Cisco Unified Communications solution offers a number of services, such as voice delivery, video, mobility, and support for IP phones. This range of products makes IP telephony a technology that can literally transform the communication requirements of any organisation. Most firms are exploring the wide range of options available in IP telephony and are making huge cost-benefit advancements towards this goal (Stellman & Greene 2005).

The role played by IP telephony can therefore not be underestimated, and as more and more firms connect to the internet, the data network infrastructure is emerging as an important factor in the promotion of IP telephony (IP Telephony - Cisco Systems, 2010).

System Requirements

Figure 4, Real-Life Site-to-Site VPN Scenario

For the real-life scenario, the following equipment is required for implementation:

  • WAN Cisco routers with static public IPs
  • Cisco PIX firewall on each site
  • Cisco Call Manager in the HQ
  • Cisco Switches
  • Cisco IP phones

Figure 5, PIX Firewalls Establish the VPN Tunnel

Each Cisco router provides internet connectivity for its network. Both networks must have public IPs assigned by their internet providers. The PIX firewalls are used to negotiate and establish the VPN tunnel between the two ends. The Cisco CallManager handles all of the VoIP calls and acts as a PBX. An extra feature can be added at the branch end by running Cisco CallManager Express on top of the router (i.e. a Cisco Integrated Services Router): in case of a VPN tunnel failure, the Cisco CallManager Express can still handle calls inside the branch network, so the employees can call each other (Davies 2007).

For the demonstration scenario, I will use GNS3 network simulator to simulate the VPN tunnel between the two ends.

For the VoIP demonstration, I will use the following equipment:

  • ADSL Cisco router (857)
  • Broadband Cisco router (861) , with static public IP
  • Linksys ATA (SPA 3102 and SPA 2102)

The two Cisco routers will negotiate and establish the VPN tunnel. The Linksys SPA 3102 will act as a PBX and can also be connected to the PSTN telephone network through its FXO port. The Linksys SPA 2102 acts as an ATA (analogue telephone adapter).

Resources required for implementation

  • A PC work station.
  • Broadband Internet connection, with static Public IP.
  • GNS3 Network Simulator.

Risk assessment

Risk management is important to ensure the successful completion of this phase of the project and of the project as a whole (Nielsen 1993).

  • Misunderstanding the requirements - The requirements are recorded from the users but may be misunderstood. Risk level: low. Management plan: double-check the requirements with as many users as are available.
  • Unavailable resources - The project resources might not be available at implementation time. Risk level: medium. Management plan: ensure all resources are reserved before starting the implementation.
  • Missed project deadline - The deadline might be missed if tasks take more time than expected. Risk level: medium. Management plan: produce a Gantt chart and ensure it is followed.
  • System delay and latency - There might be some delay and latency in call conversations due to the nature of the internet. Risk level: low. Management plan: use high-quality encryption devices and VPN routers to reduce the delay to a minimum.
  • Phone system down (internet-dependent) - Because the phone system depends on the internet, it will be down if the internet connection fails. Risk level: very low. Management plan: ensure an emergency phone or a cellular phone is available for this case.

Lack of management goodwill

This medium-probability risk may arise from inadequate user involvement during system requirements elicitation: when the analysis process is not performed well and does not involve the management or address their concerns, this risk is likely to occur. A contingency measure would be to conduct an all-inclusive systems requirements analysis and to highlight the long-term benefits of the VPN implementation, especially to the management. Establishing tangible economic benefits of the implementation is one way of bringing the management on board.

Natural disaster

This is a low-to-medium-probability risk arising from natural events such as floods, earthquakes, hurricanes and tornadoes. A contingency measure would be to establish and define adequate backup procedures, preferably offsite. However, not much can be done about these natural occurrences; only mitigation measures such as infrastructure insurance can be put in place to address this risk (Launer 2005).

Inadequate testing

This is a medium-to-high-probability risk that may result from project slip, an ill-prepared user, or an inadequate requirements-capture process. Contingency measures include a properly constructed project schedule that addresses the project tasks, including testing, in terms of duration and the deliverables at the end of each phase.

User mistrust

This is a low-probability risk resulting from poor analysis. Limited user involvement during the analysis, leading to inadequate requirements elicitation, is the main cause of such mistrust. The contingency measure would be a thorough analysis involving the users at that point and throughout the project's progression.

Limited expertise of the project team

Inadequate funding

This is a low-probability risk mainly resulting from a lack of management goodwill. While the project benefits are being addressed, the management should be heavily involved and in agreement. The contingency measure here would be to conduct an all-inclusive analysis.

Gantt Chart

References

Andersson, E., Greenspun, P. & Grumet, A., 2006. Software engineering for Internet applications. Cambridge : MIT.

Davies, B., 2007. Doing a Successful Research Project: Using Qualitative or Quantitative Methods. Basingstoke: Palgrave Macmillan.

Erica, H. & Priest, J., 2009. Business and management research: paradigms & practices. Basingstoke: Palgrave Macmillan.

IP Telephony - Cisco Systems, 2010. Web.

Launer, L., 2005. Middle east airlines makes connection, Seattle, WA: The Boeing Company.

Lindsay, J., 1997. Software Engineering. Seattle, WA: The Boeing Company.

Nielsen, J., 1993. Usability Engineering. Cambridge Massachusetts: AP Professional.

Sommerville, I., 2004. Software Engineering. Harlow: Pearson Education.

Stellman, A. & Greene, J., 2005. Applied Software Project Management. Sebastopol, CA: O'Reilly Media.

Suton, M., 2008. Middle East Airlines makes connection with SITA - Technology. Arabianbusiness.com. Web.

Vbulletin, 2010. vbulletin. Web.

VPN, Virtual Private Networks, 2010. Web.

Science and Aviation Technology Effects on Society

Usually, users become lawbreakers without even knowing it, exposing themselves to serious law violations. There are significant risks of crashes and data loss, theft of personal information, including the credentials that give access to banking tools, and virus attacks. Moreover, when one downloads music, video, or software from an illegal website, it might turn out to be malicious or unwanted.

However, the very act of innovation might be ethically right or wrong to some extent. For example, recent advances in biomedical research (pharmacokinetics, stem cell research, regenerative medicine) open up broad prospects, but they raise questions concerning the responsible use of new technologies and the resulting knowledge. Another example is the Spector Pro software, which collects all the information about a phone user, including calls, e-mails, and internet activity. "It is not an evil if you catch a stalker targeting your child" (Mollman, 2008, para. 17); however, it is still ethically questionable. The technologies that have produced the most benefit are cell phones and the internet, while tobacco and alcohol production are considered to have produced the most harm (Unger, 2014).

There are also harmful sides to beneficial technologies and, conversely, benefits from harmful ones. For instance, the e-mail service allows people to react promptly to events and to communicate with those they have no opportunity to meet in real life (Belcher, n.d.). No doubt it is always a pleasure to talk with an old friend who went abroad years ago, without spending a penny. Moreover, the internet, along with TV, should be mentioned: these services allow people not only to have fun but also to stay aware of the latest news. Nevertheless, the examples noted might cause harm in cases of piracy or inappropriate use (Unger, 2014).

The English physicist Stephen Hawking predicted the demise of humankind on Earth within the next thousand years if people do not find a refuge in the cosmos: "If humanity is to survive in the long term, it needs to find a way to leave the Earth" (Stephen Hawking: Why We Should Go Into Space, n.d.). If people stay stuck on Earth, they will face the risk of two different kinds of disaster. The first kind people would create themselves, for example by causing drastic climate change or developing nuclear or biological weapons. The second kind that could wipe humanity off the face of the Earth comprises a number of cosmic phenomena.

An asteroid colliding with the Earth would kill most of the population and leave the rest of the planet uninhabitable (Ferguson, 2011). A flash of gamma rays from a supernova in the Milky Way could also be damaging to life on Earth. Life on Earth might also be threatened by alien civilizations: dangerous strangers could seize the planet and its resources for their own use (Mialet, 2012). For the survival of our species, it would be safer to have a backup plan in the form of moving to other planets.

The most obvious places for a new human colony are the Moon and Mars, as they hold all the necessary resources (Moreira, 2013). The relocation of people to the Moon would change the future of the human race in ways people do not even suspect, and it would resolve immediate problems on Earth.

I partially agree with Hawking's theory, because it seems that our planet is becoming overpopulated and its resources are gradually running out; moving to the Moon or other planets is therefore an obvious solution. However, people have no equipment for the relocation and sustenance of the whole of humanity.

The NextGen fund is an infrastructure involved in promoting the creation of progressive aeronautics technologies. Specifically, the essential goals of NextGen concern the expansion of flight-system expertise as well as the design of new multi-functional methodologies in terms of R&D, control systems, fabrication management, etc.

The foundation started functioning in 2003 and gathered a team of talented engineers, technicians, and scientists. The NextGen fund does not focus on a specific project but rather targets multiple implementation techniques. For instance, since the beginning of NextGen's existence, its experts have designed successful projects for the Air Force and DARPA.

The NextGen foundation has some global goals, which are outlined for the aerodynamics and transportation system in the USA. Specifically, the specialists from the fund claim that it is possible to transform radio-based traffic control into a satellite-based system.

Indeed, the planning of the progressive aerodynamic system offers some beneficial prospects for the community. Specifically, one can name such advantages as the reduction of constant transportation delays. Moreover, the NextGen design opens up economic possibilities, since satellite-guided transportation requires less fuel and time. Finally, the developed infrastructure provides opportunities for the external monitoring and management of airspace. In conclusion, I believe that the NextGen aerospace modernisation plan can be successfully implemented. However, it is critical to involve extensive government support in the modification process so as to supervise and guide global changes.

References

Belcher, L. (n.d.). . Web.

Ferguson, K. (2011). Stephen Hawking: His Life and Work. London: Bantam.

Mialet, H. (2012). Hawking Incorporated: Stephen Hawking and the Anthropology of the Knowing Subject. Chicago: The University of Chicago Press.

Mollman, S. (2008). . Web.

Moreira, W. (2013). The Big Nest Originated the Big Bang of Stephen Hawkings Black Holes. Bloomington, IN: IUniverse Com.

Stephen Hawking: Why We Should Go Into Space. (n.d.). National Space Society. Web.

Unger, S. (2014). Web.

Noise Cancelling Headphones Technology

Introduction

The history of noise cancellation dates back to the 1950s, when technological innovations enabled engineers to invent mechanisms to cancel the noise originating from helicopter and airplane cockpits. Dr. Bose is a very notable personality in the history of noise-cancelling headphones. He began his work after being provided with headphones on an international flight: he was dissatisfied with their poor quality, which could not withstand the loud engine noise. He set out to find an effective means of improving the headphones' quality, which took him almost a decade of research (Federico, Eric & Nauta 95).

Initially, noise-cancelling technology was meant to protect pilots from noise interference, especially those participating in the first non-stop flights around the world. The noise-cancelling headphones available on the market use analogue technology; however, there are other standard methods in which digital processing is applied to active noise and vibration control. In particular, noise cancelling is very effective in countering airplane engine noise. In this application the headphones are the same size as normal headphones, but the actual electronic circuitry is located in the plane's armrest, where it takes the sound signal from the microphone behind the headphones, inverts it, and sums it with the audio signal (Berger 23).

Mode of working for the noise-canceling headphones

Noise cancellation in headphones is achieved by mechanisms that aim to reduce unwanted ambient sounds, also referred to as acoustic noise; the reduction applies active noise control. Effective noise cancellation ensures that the sound signal is delivered even at very low volumes, allowing comfortable communication even in noisy situations such as aboard an airliner.

Noise-cancelling headphones in most cases use the active noise cancelling technique to cancel the low-frequency portions of the noise; however, this technique relies on more traditional methods, such as soundproofing, to prevent higher-frequency noise from reaching the ears (Seabridge & Morgan 202). This combination is preferred because of its numerous advantages: it is cheap, and it simplifies the circuit, since active noise cancellation is not effective at higher frequencies, where a complicated circuit would otherwise be required. It is worth noting that cancelling high-frequency external noise sources is attained by using a sensor combined with emitters of wavelengths that cancel the noise (Hansen 58).

Active Noise Cancellation (ANC)

This is a technique that involves the use of one or more microphones placed near the ear. A sophisticated electronic circuit uses the signal from the microphone to generate an anti-noise signal that counters the noise signal. With the anti-noise signal produced by the headphone's speaker drivers, the ambient noise is cancelled by destructive interference, and a clear sound signal is delivered to the ear (Seabridge & Morgan 81).

Principle of working of ANC

Sound is a pressure wave made up of two phases: compression and rarefaction. The noise-cancellation speakers produce a sound wave with the same amplitude as the noise signal but in anti-phase with it.
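The anti-phase principle can be checked numerically. In the sketch below (plain Python; the tone frequency and sample rate are chosen arbitrarily for illustration), a sampled sine tone stands in for the noise, and summing it with its inverted copy cancels it completely:

```python
import math

RATE = 8000   # samples per second (arbitrary choice)
FREQ = 200.0  # "noise" tone in Hz (arbitrary choice)

# Sample one second of the noise tone.
noise = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(RATE)]

# The anti-noise signal: equal amplitude, opposite phase. For a sine wave,
# shifting by half a period is the same as negating every sample.
anti_noise = [-s for s in noise]

# Superposition at the ear: destructive interference leaves silence.
residual = [n_s + a_s for n_s, a_s in zip(noise, anti_noise)]
peak = max(abs(r) for r in residual)  # zero for a perfect inversion
```

In a real headphone the inversion is never perfect, since the anti-noise must be generated and emitted with negligible delay, which is why cancellation works best at low frequencies.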

According to Benesty et al. (213), noise cancellation can also be achieved through the principle of attenuation. In this case, the noise-cancellation speakers are located at the same position as the sound source so that the two signals attenuate each other; under these conditions, the noise-cancellation speakers must be maintained at the same audio power level as the noise source. Another technique makes use of a transducer that emits the cancellation signal; the transducer may be strategically located at the position where sound attenuation is needed (Hansen 34).

This technique is advantageous since it consumes less power. However, it is disadvantageous since it is effective only at a single attenuation point; noise cancellation at multiple points is not possible because of the difficulty of matching the unwanted sound and the cancellation signal everywhere at once, given the three-dimensional nature of the sound wave, which creates alternating zones of constructive and destructive interference (Benesty et al. 197).

Modern technological advancements have enabled the use of digital computers for advanced active noise control: the computer receives and analyses the noise signal, then generates the inverted signal, which cancels the noise through destructive interference. Hansen (87) explained that active noise control techniques differ from passive noise control techniques in that the former are powered while the latter are unpowered. Soundproofing is a form of passive noise control achieved through techniques such as insulating walls or sound-absorbing ceilings. Active noise control is advantageous because it is less bulky, more effective at low frequencies, and blocks noise selectively.
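The digital approach described above is commonly realised with an adaptive filter. The sketch below is a generic least-mean-squares (LMS) noise canceller, not a description of any specific product: a reference microphone picks up the raw noise, a short filter learns the acoustic path to the ear, and the residual heard at the ear shrinks as the filter converges. The signal shapes, filter length, and step size `mu` are illustrative assumptions:

```python
import math
import random

random.seed(0)
N = 4000
# Reference microphone signal (white noise as a stand-in for ambient noise).
ref = [random.uniform(-1.0, 1.0) for _ in range(N)]
# Noise as heard at the ear: the reference filtered by an assumed 2-tap path.
heard = [0.8 * ref[n] + 0.2 * (ref[n - 1] if n else 0.0) for n in range(N)]

taps = 4
w = [0.0] * taps  # adaptive filter weights, initially zero
mu = 0.05         # LMS step size (assumed)
errors = []
for n in range(taps, N):
    x = ref[n - taps + 1 : n + 1][::-1]              # recent reference samples
    y = sum(wi * xi for wi, xi in zip(w, x))         # anti-noise estimate
    e = heard[n] - y                                 # residual at the ear
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS weight update
    errors.append(e)

early = sum(e * e for e in errors[:200]) / 200   # residual power at the start
late = sum(e * e for e in errors[-200:]) / 200   # residual power after adapting
```

After a few thousand samples the learned weights approach the true path (0.8, 0.2), so `late` is far smaller than `early`: the filter has learned to cancel the noise.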

Conclusion

It is worth noting that the technology aimed at curbing noise in headsets has taken advantage of various technological innovations. These mechanisms cancel out noise from headsets, and there are now various applications used to curb noise that could otherwise be harmful to the human ear. For instance, the technology has played a major role in helping individuals who work in very busy, noisy cities.

Low-tech solutions to this problem already exist, including ear-plugs and sound dampeners, but these are not efficient. The noise-cancellation mechanism attains its objective by blocking sounds at their origin rather than blocking them on their way to the human hearing system.

The innovation of cancelling noise in headsets operates in a one-dimensional zone. Apart from noise-cancelling headphones, there have been other successful commercial applications, including active mufflers and the control of noise in air-conditioning ducts (Seabridge & Morgan 72). It is also important to note that noise-cancelling technology has played a major role in cutting down the amount of noise that interferes with workers at their workplaces. Finally, the cyclic nature of engines ensures that the examination of signals, as well as noise cancelling, is possible and easy.

Works Cited

Benesty, Jacob, Chen, Jingdong, Huang, Yiteng & Cohen, Israel. Noise Reduction in Speech Processing. New York: Springer, 2009. Print.

Berger, Elliott, ed. The Noise Manual. AIHA, 2003. Print.

Federico, Klumperink, Eric & Nauta, Bram. Wideband Low Noise Amplifiers Exploiting Thermal Noise Cancellation. New York: Springer, 2005. Print.

Hansen, Colin. Understanding Active Noise Cancellation. New York: Routledge, 2001. Print.

Seabridge, Allan & Morgan, Shirley. Air Travel and Health: A Systems Perspective. New York: John Wiley and Sons, 2010. Print.

Assistive Technology for Paraplegic Patients

The modern healthcare sector is focused on significantly improving people's quality of life (Assistive Technology: Devices Products & Information par. 5). The rapid evolution of technology and science has created numerous opportunities that health care providers can use to help patients who suffer from various diseases. Moreover, the development of the health care sector and health technology has resulted in new approaches to working with disabled people (Reason Digital par. 6).

At the moment, such patients are being provided with specific devices that should make their lives easier and help them in everyday activities (Biggest Innovations in Health Care Technology in 2015 & 2016 par. 4). Besides, engineers continue to create new devices to help this category of people not to feel disadvantaged. The given report revolves around a mechanical device created for a paraplegic patient to assist him/her in transitioning from a wheelchair into a Jacuzzi without anyone's help.

Paraplegic patients are characterized by limited mobility and the inability to perform some routine activities (Disability & Health Technology par. 3). For this reason, they are advised to use specific devices that help them overcome the difficulties that might appear in their everyday life. Modern wheelchairs have numerous options and guarantee a certain level of comfort for their owners (People with Disabilities par. 5). Yet, their capabilities are still limited, and there is a need for additional devices (Ten Assistive Tech for People With Disabilities par. 5).

That is why we aim at creating a mechanical device to transport a paraplegic patient from a wheelchair into a Jacuzzi. In designing the mechanism, we assume that the patient has good upper-body and arm strength that can be used to operate the device. We believe that the target audience for this report comprises paraplegic patients and the medical professionals who will install and operate the device and train patients to use it.

In general, the device can be described as a system of blocks and levers that guarantees the transition of a patient from his/her wheelchair into a Jacuzzi. It consists of bolted and flexible beams joined by an actuator and a slider. The choice of this design is conditioned by several factors. First, the lever helps reduce the effort needed to use the device. At the same time, the slider and actuator guarantee that the construction returns to its initial state, so no extra actions are needed to prepare it for further use. The wheelchair is lifted with the help of the flexible beam and transported into the Jacuzzi. The device could be easily used by any paraplegic patient who has the same needs.
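The role of the lever in reducing effort follows from the law of the lever: effort × effort arm = load × load arm. The figures below are hypothetical (they are not taken from the actual design) and only illustrate why a longer effort arm lets a patient with good arm strength lift the load:

```python
# Law of the lever: effort * effort_arm = load * load_arm.
# Hypothetical figures: a 90 kg patient-plus-chair load, lifted via a lever
# whose effort arm is three times the length of the load arm.
load_newtons = 90 * 9.81      # weight of patient and wheelchair
load_arm_m = 0.5              # distance from the pivot to the load
effort_arm_m = 1.5            # distance from the pivot to the applied force

effort_newtons = load_newtons * load_arm_m / effort_arm_m
mechanical_advantage = effort_arm_m / load_arm_m

print(f"required effort: {effort_newtons:.1f} N "
      f"(mechanical advantage {mechanical_advantage:.0f}x)")
```

With these assumed dimensions, the effort drops to roughly a third of the load's weight, which is what makes unassisted operation plausible.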

Besides, the construction is characterized by a high level of safety and reliability. There are no parts that could harm a person, and the absence of sharp edges reduces the risk of serious trauma and injury. At the same time, it is a mechanical device, which means there is no need for electricity. This contributes to increased safety, as using electrical appliances in an environment involving water is extremely dangerous. For this reason, one should conclude that the given mechanism is advantageous and could be provided to disabled people.

That is why the main aim of this report is to provide a detailed overview of the given mechanical device, which is aimed at providing assistance for disabled people. It could be considered efficient enough to be recommended for further development and use by a wide range of patients with similar needs.

Works Cited

Assistive Technology: Devices Products & Information. n.d. Web.

Biggest Innovations in Health Care Technology in 2015 & 2016. n.d. Web.

Disability & Health Technology. n.d. Web.

People with Disabilities. n.d. Web.

Reason Digital. Four ways technology can help disabled people. 2013. Web.

Ten Assistive Tech for People With Disabilities. n.d. Web.

Web 2.0 Technology: Development and Issues

The term Web 2.0 came into the limelight in 2004 in reference to the then newly developed second generation of the World Wide Web. Following the earlier development of Web 1.0, the inventors settled on Web 2.0, as naming new software releases with an ascending version digit is routine.

The new Web 2.0 has more features and functionalities compared to the old version, Web 1.0. Nevertheless, it is imperative to note that Web 2.0 represents a progression of technological step-ups and not just a precise version of the Web. For example, some of the most common features of Web 2.0 include blogs, wikis, social networking and Web applications. Each of these features performs a specific function (Tim 1).

The development of Web 2.0 has enhanced the manner in which people communicate and do business using the Web. For instance, blogs are important to private citizens in that people can post their feelings and personal updates on the Web. Thus, anybody interested in the affairs of a certain person can easily access that information through blogs. Wikis are also vital features of Web 2.0, as they allow users to add new information to the Web or revise online information.

Additionally, social networking sites such as Twitter, Facebook, 2go, and MySpace enable users to create and tailor personal information. Last but not least, Web applications have been instrumental in answering the business needs of many users who mainly operate their programs directly in a Web browser (Anderson 1).

Undeniably, Web 2.0 enables a level of user interaction not provided by Web 1.0. Since the development of Web 2.0, several nonprofit organizations now operate proficiently and raise more funding, which in turn affects the lives of many people around the globe. For example, Library 2.0 is useful in libraries by supporting cataloguing efforts and enabling the sharing of information with partner libraries. This type of technology answers the needs of businesses and private citizens in very many ways.

For instance, in the field of library science, Web 2.0 attends to the needs of both citizens and businesses. On the other hand, many marketing executives have found Web 2.0 an important technology that bypasses conventionally unresponsive information technology departments. Web 2.0 also enables marketing managers to stay in touch with their customers by updating them on new product developments, ongoing promotions and service improvements.

Some companies use wikis to answer frequently asked questions regarding promotions, products, services or other interests. In addition, some media powerhouses such as Business Week and the New York Times use Web 2.0 to outsource their services, thereby positively affecting the mass approval of their services (Parise 1).

Both private citizens and financial institutions have found Web 2.0 of great help. For instance, banks have found social networking sites to be imperative tools of communication that not only enhance customer loyalty, but also send important information to customers. Some financial institutions use social networking sites such as Twitter and YouTube to inform their customers of the latest developments, say, the Chief Executive Officer speaking on market news.

It is also important to note that many small business enterprises have come out strongly to compete with big business empires due to Web 2.0. Undoubtedly, Web 2.0 is a paramount technology that facilitates the sharing of information from one user to another. It has also helped many businesses, both new and old, adopt new strategies for involving customers. In other words, Web 2.0 acts as a link between businesses and customers (Parise 1).

So far, Web 2.0 has been successful amid the criticism it has faced over confronting and updating its technologies. In fact, some people have even suggested that Web 2.0 does not represent a new World Wide Web, labeling it a series of Web 1.0. For instance, many of the issues facing Web 2.0 revolve around its techniques. One of the most common techniques for developing Web 2.0 is AJAX. However, confronting this technique has proved futile, since it does not substitute fundamental protocols such as HTTP.

Instead, any such confrontation or update will lead to the formation of an extra layer of abstraction. Some experts argue that some of the techniques involved in developing Web 2.0 existed before the adoption of Web 2.0. Thus, vague definition and excessive hype are common problems facing this technology. It is also important to note that anybody can update information on Web 2.0, which has resulted in digital amateurism and vanity (Tim 1).

Take wikis, for example. Many people post false information on wikis and mislead others who are genuinely searching for information. Thus, the issue of security is a great concern to many users of Web 2.0. The vulnerability of Web 2.0 to attack lies in the fact that any individual can upload to the stored contents. Thus, hackers find it easier to carry out their malicious intentions. Of course, this affects both businesses and private citizens who rely on such websites for information regarding certain affairs or even market news.

Software developers agree that Web 2.0 is prone to attacks due to technical loopholes. Otherwise, the only sure way of protecting Web 2.0 from hacking and message retrieval is using multifaceted JavaScript code on user machines. However, even with multifaceted JavaScript, URL filtering and cataloguing products neither flag nor block some popular sites such as Wikipedia and MySpace, considering them trusted even when they pose danger.

The biggest question, however, is how companies should address the insecurity posed by Web 2.0. In dealing with the problem, software researchers have embarked on the development of assistive technologies that will provide security for this technology. For instance, companies have developed new security solutions with the potential to examine and analyze every web request or reply before taking action. A very good example of such a security solution is real-time code analysis.

Since the technology is prone to very many attacks, the real-time code analyzer inspects the exchanges between the browser and the web servers. Here, every bit of information must undergo scrutiny to verify its authenticity, irrespective of its source.

By doing this, the technology ensures that malevolent information whether from trusted sites or otherwise, does not penetrate into the network. Thus, all networking sites and web pages like Facebook, MySpace, Gamma, Twitter and Yahoo pass through the analyzer just like any other normal web page (Yuval 1).
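A real-time code analyzer of this kind can be thought of as a filter sitting between the browser and the server, inspecting every response regardless of whether the source site is "trusted". The toy sketch below uses made-up signature patterns (real analyzers rely on far more sophisticated, behavior-based inspection) purely to illustrate the idea:

```python
import re

# Hypothetical signatures of malicious content, for illustration only.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<script[^>]*>.*?(eval|document\.cookie)",
               re.IGNORECASE | re.DOTALL),
    re.compile(r"javascript:\s*eval\(", re.IGNORECASE),
]

def inspect(response_body: str) -> bool:
    """Return True if the response should be blocked."""
    return any(p.search(response_body) for p in SUSPICIOUS_PATTERNS)

# Every response is scrutinized, irrespective of its source.
clean = "<html><body><p>Hello</p></body></html>"
malicious = "<html><script>eval(document.cookie)</script></html>"

print(inspect(clean))      # False
print(inspect(malicious))  # True
```

The key design point mirrored here is that the decision depends on the content itself, not on whether the page came from a popular, nominally trusted site.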

Like any other web technology, Web 2.0 is prone to ever-emerging, complicated web-borne threats. The exploitation of AJAX has reached a compromising state, hence the need for robust security solutions to protect users from malicious intentions. Companies should therefore espouse profound approaches involving synchronized (real-time) inspection and signature-based security technologies.

In addition, some companies are busy deploying manifold security solutions aimed at protecting the company's confidential information and internet-based resources. It is also important for companies to mount a security appliance right at the internet gateway to carry out real-time code inspection of information that enters and leaves the commercial network. Such actions will address the issue of insecurity that is so common in Web 2.0 technologies (Yuval 1).

Works Cited

Anderson, Paul. What is Web 2.0? Ideas, technologies and implications for education. JISC Technology and Standards Watch. 2007. Web.

Parise, Salvatore. . The Wall Street Journal. 2008. Web.

Tim, O'Reilly. What Is Web 2.0. O'Reilly Network. 2005. Web.

Tim, O'Reilly. Amazon Web Services API. O'Reilly Network. 2002. Web.

Yuval, Ben-Itzhak. Tackling the security issues of Web 2.0. 2007. Web.

Technology Improving Educational Benefits in Learners

A qualitative problem statement

What factors are hindering students from achieving outcomes in learning institutions?

In the modern day, educators face the challenges of effectively engaging students, helping them achieve better, fostering accountability, and preventing school drop-outs, in addition to other issues faced in the industry. At the center of the issues facing educators is a paradigm shift among people, with the focus growing toward embracing creativity. There is a change from purely analytical reasoning toward conceptual perception in managing education issues. With technological advancements and the development of the internet, the information age allows students access to information that was not previously possible. Adopting technology and using it effectively in the learning process represents a major way of involving students and widening their thinking. However, implementing the use of technology for learning is a major problem for stakeholders, as there is no shared understanding of the best approach to use, the best way to maximize the process, the infrastructure needed, or the funding of the adoption. With these problems in mind, stakeholders need to collaborate in designing effective strategies for adopting technology in the learning process, which would in turn benefit everyone in the system and achieve the full potential of information technology in the education sector (AT&T, 2011).

The manual system of learning in schools in the conventional setup does not enable the use of information by several people at the same time, as opposed to digital systems. The traditional system is time-consuming and relatively inefficient, which highlights the need to capitalize on digital systems to achieve various learning objectives for students and teachers. However, stakeholders face problems in developing the right infrastructure and system to implement digital learning because, besides being expensive, the initial phase is time-consuming. Even though digitization is a feasible way to enhance the learning process, psychological mentality may be a major hindrance to successful implementation and uptake by stakeholders. There is a need for effective planning and evaluation mechanisms because the process of digitization is resource-intensive and requires significant levels of technical know-how. Overall success in the digitization process requires the full involvement of stakeholders (ISTE, 2012).

A quantitative problem statement

How can the use of technology be applied to improve the level of educational benefits in learners?

In the age of information, communication, science, and technology, stakeholders in the education sector face significant challenges in integrating science and technology at all levels of learning institutions. Although information science and technology has been around for some time, resistance to its adoption has meant slow integration into the learning process, especially at lower levels of learning, to address the problems of teacher shortage and effective engagement of students. With this in mind, stakeholders need to work in cooperation in designing effective programs and policies acceptable to everyone to ensure the seamless adoption of science and technology in the education system, especially in the areas of learning, research, and development. Science and technology stand to significantly improve the learning process and achieve more learner involvement, in addition to broadening the range of information accessed by students for educational reasons (Finholt, 2002).

In the current education system, the shortage of teachers, the applicability of courses to contemporary and professional needs, and the low number of learners in advanced courses are major concerns for stakeholders. There have been efforts to address these challenges through online courses and video learning using various approaches. Indeed, there are concerns over the effectiveness and quality of the teaching and learning processes, with the pros and cons of the system being debated from different quarters. The challenges highlight the importance of using distance learning, which requires adequate research to identify the best practices and ways to take advantage of e-moderating. Accessibility to learning materials and literature on e-moderating is vital for teachers to deliver distance learning to students in an interactive way. Furthermore, the focus of e-moderating should be not only on higher learning institutions, but also on the entire education system. Policy guidelines and quality instructions are necessary to guide stakeholders in managing the change to e-moderating (Salmon, 2000).

Considering the challenges faced by the education sector in adopting technology and digitization in the learning process, there is a need to implement various approaches to solving the apparent problems and to design strategies for the effective integration of technology. This is possible by enhancing interaction between learners and teachers through a participatory-oriented system. In the age of technological advancements, making the system compatible with modern devices would help ensure buy-in from stakeholders, especially students. With different hosting solutions available, the focus should be on cost minimization and taking advantage of currently available systems and technologies to achieve more for the education system. Curriculum expansion is vital because it would help integrate the traditional and modern approaches so that they complement each other. The objective is to obtain innovative technologies aimed at making students learn better and improving the delivery of teachers. Theoretical perspectives are vital in understanding the education requirements of technology and digitization in order to develop instructional mechanisms that support approaches to student learning.

References

AT&T (2011). Smart Use of Technology Transforms K-12 Education. Web.

Finholt, A.T. (2002). Annual Review of Information Science and Technology. Hoboken, NJ: Wiley & Sons.

ISTE (2012). ISTE Standards, Web.

Salmon, G. (2000). E-Moderating: The Key to Teaching and Learning Online. London, United Kingdom: Kogan Page.

Web 2.0 Technology: Design Aspects, Applications and Principles

Introduction

Web 2.0 is currently the biggest talking point in the internet transformation process. It is an internet technology through which websites have been moved to a platform that provides the end consumer with an interactive interface. With Web 2.0, stand-alone software has almost been rendered unnecessary.

Computer end users are now able to send and share information, such as pictures and documents, by attaching them online within the web browser. Solomon and Schrum indicate that Web 2.0 proponents make use of the technology of social bookmarking, podcasts, weblogs, RSS feeds and wikis (56).

Principles of Web 2.0

The main force behind the perpetuation of this technology is the data, which is continually added by the users. Every time a user logs in, they add more data, leading to the growth of the technology. The World Wide Web provides the platform that supports the Web 2.0 technology and enables users to access it through browsers. Internet platforms such as Skype and Wikipedia have played a major role in the growth of the Web 2.0 technology (Hosie-Bounar and Waxer 36).

The technology has attracted millions of participants as end users who at the same time have a choice of what to access. The presence of different participants pools together different skills and ideas that result in more innovative websites with even more powerful applications. Websites are continually being updated and redeveloped to become more user-friendly and interactive.

The Design Aspects of Web 2.0

It is important to note that Web 2.0 is an improvement developed on Web 1.0. Web 1.0 technology is where a small number of developers would develop web pages and distribute them to many viewers (Hosie-Bounar and Waxer 29). Web 2.0 provides people with an opportunity not only to view information but also to take part in scripting it. This has been made possible through blogging, for example, where the viewers of the information become contributors of more information through research.

Through blogging, many people add information to the web, leading to a tremendous growth of information on the web. This information can further be split into numerous small topics and circulated in different information realms. Web 2.0 has tools that make it possible to select relevant information from the several topics created.

The RSS aggregator and the internet search engine play a major role in selecting the relevant information that one may be looking for from the pool of information (Hosie-Bounar and Waxer 29). The creation of Google Maps has further revolutionized the design and use of new applications on the internet. It is also possible for end users to customize information on a website to fit their own needs.
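The selection step an RSS aggregator performs can be illustrated with a few lines of standard-library code: parse the feed XML, then keep only the items whose titles mention a topic of interest. The feed content below is made up for illustration; real aggregators also handle publication dates, links and multiple feeds.

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 feed.
FEED = """<rss version="2.0"><channel>
  <item><title>Web 2.0 design patterns</title></item>
  <item><title>Gardening tips</title></item>
  <item><title>Wikis and Web 2.0 in business</title></item>
</channel></rss>"""

def select_items(feed_xml: str, keyword: str) -> list:
    """Return the titles of feed items mentioning the keyword."""
    root = ET.fromstring(feed_xml)
    titles = [item.findtext("title") for item in root.iter("item")]
    return [t for t in titles if keyword.lower() in t.lower()]

print(select_items(FEED, "Web 2.0"))
# ['Web 2.0 design patterns', 'Wikis and Web 2.0 in business']
```

This filtering by keyword is the same principle, in miniature, by which an aggregator pulls the relevant items out of the growing pool of user-contributed information.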

Web 2.0 Services

With the continuous growth of data, the major concern is now turning out to be data management. The web provides enterprises with a platform to share information on their internal activities in a secure way (Sankar and Bouchard 36). The information can be made accessible to the whole world using a web browser.

The web communicates through XML, which is flexible in terms of information formatting. The XML information is then formatted by the Web Service Definition Language (WSDL), which publishes the information for all to see.

Application of Web 2.0 in Business

Companies have been rather slow in accepting and implementing the new technology of Web 2.0. This has been occasioned by the perceived costs of procuring new hardware and acquiring the new skills needed to operationalize the technology. As a result, only companies that lead in business have been quick to embrace the new technology. Time is, however, running out for business enterprises that have chosen to remain redundant on this issue.

A company website is increasingly becoming one of the most common contemporary company assets, influenced by the growing digital market. Lytras et al. point out that only a meager 10% of the Fortune 500 companies use blogs and wikis to market their products and services (43). This is a clear indication that the Web 2.0 technology has not spread out effectively.

It is, however, likely that the technology may become a prominent form of information sharing within this decade. This therefore makes it the first-choice form of product and service promotion for companies that aspire to take a lead in business. Many online companies have already swiftly switched to Web 2.0 to take advantage of the new technological platform.

Wikis are especially becoming a popular form of information sharing over the internet by various authors. Creating blogs on a company website, for example, can help increase traffic to the website, creating more awareness of the company among potential customers. Wikis can also help companies maintain a good level of communication with customers and provide a form of storing information (Lytras et al. 46).

Application of Web 2.0 in social media

Social media is the new hype of networking among individuals of like minds. Distance as an impediment to social networking has been removed by social media. Syndication now makes it possible for social information to reach millions of people in a matter of seconds (Shelly and Frydenberg 57).

It will henceforth be difficult for institutions to monopolize information or power because of this technology. Because of the ability to interconnect millions of people from all over the world, no single individual or institution can regulate information flow on the web.

Today there are millions of blogs, with billions of posts on various topics being published every day. YouTube, which is a rather recent innovation, has also made it a reality to share videos and podcasts as well. With high-speed internet connections, it is now possible to stream full videos and download them in a matter of minutes.

A greater percentage of movies are now viewed online, as revealed by the Google Trends data (Sankar and Bouchard 16). Several thousand videos are uploaded every day on YouTube for free sharing. YouTube has increasingly gained the characteristics of a social medium as a platform through which video and podcast information sharing has become a reality.

Advantages and Disadvantages of Web 2.0

As with any other phenomenon, Web 2.0 is surrounded by both advantages and disadvantages. Proponents point out many advantages, which are on the other hand quelled by the disadvantages that opponents point out. As pointed out earlier, Web 2.0 brings together many users. Information is therefore pooled together and can as well be customized to suit each individual's needs.

It is also possible to add more applications to already existing applications made with the Web 2.0 interface (Solomon and Schrum 85). This allows information sharing on a large scale, as opposed to a case where the information comes from a single source.

Getting information from different sources enriches the shared opinion, leading to an informed society. Where information comes from a single source, the source is likely to be biased, thus misleading the recipients. Web 2.0 has also provided a seamless medium of fast and reliable communication. The internet provides freedom of communication between masses, as there is no mechanism to check and censor what is shared over the internet, as there would be for print, television or radio.

One can also search for exactly what they want to access using a search engine and key words for the topic of interest. This is as opposed to the print media or television, where one is fed information or news on issues of interest or concern to the media publisher.

The main problem with the internet, however, is the dependency that results from the repeated use of internet tools. It can be sickening for an internet-dependent person to experience a slow or absent internet connection. There is also the problem of the security of information shared or kept on the internet (Solomon and Schrum 84). Certain information owned by institutions or governments is strictly confidential but often ends up in the hands of hackers through the internet.

Conclusion

The prominence of Web 2.0 has been on the increase as the technology is adopted both in business and in social networking. The research notes a relatively slow uptake of the Web 2.0 technology among business enterprises. This is a result of business leaders lacking the relevant skills to operate the tools of the new technology and of the cost of procuring the tools.

This paper further notes that Web 2.0 is geared to be the future of all business operation processes as well as of social networking. There is much hype about the technology, which is expected to influence the minds of all participants in the business world and also influence social interaction.

Besides the numerous advantages the technology presents to current generations, there are issues considered to be weaknesses. First, the technology causes dependency. Web 2.0 also exposes end users to the risk of information insecurity against hackers.

Works Cited

Hosie-Bounar, Jane and Barbara M. Waxer. Web 2.0: Making the Web Work for You, Illustrated. Massachusetts: Course Technology, 2010.

Lytras, Miltiadis, et al. Web 2.0: The Business Model. New York: Springer Science and Business Media LLC, 2007.

Sankar, Krishna and Susan A. Bouchard. Enterprise Web 2.0 Fundamentals. Indianapolis: Cisco Systems, Inc, 2009.

Shelly, Gary and Frydenberg, Mark. Web 2.0: Concepts and Applications. Massachusetts: Cengage Learning, 2009.

Solomon, Gwen and Schrum, Lynne. Web 2.0: New Tools, New Schools. Washington: International Society for Technology in Education, 2007.

Garmin Connect Technologys Impact on Sports

Introduction

I will use this paper to focus on the merits and demerits of Garmin Connect, a technology applied in sports.

The purpose of this paper is to succinctly analyze Garmin Connect with regard to its benefits and shortcomings while comparing it to some other technologies applied in sports. The paper will also argue for the use of the technology in the contemporary world, which is characterized by a high level of adoption of IT tools.

My belief is that Garmin Connect and other sports technologies have positively impacted sports across the world by ensuring that people participate in sporting activities more conveniently than in the past. In the modern world, people want to closely monitor their physical activities so that they can be sure that they are benefiting.

One of the merits of the technology is that it can be applied in many sports, such as swimming, biking and hiking, among others (Kessler, 2011). Second, it is used to store accurate data with regard to training in the field. The data help in making judgments about progress, which could be essential in making adjustments aimed at achieving better results.

Third, the application has the potential to give the right statistics of an individual's sporting activities based on data analysis. Fourth, Garmin Connect has the merit of sharing all fitness activities of an individual on the internet (Garmin, 2014). Some of the drawbacks of the technology are that it cannot work in the absence of stable internet connections and that it requires some specialized devices. In addition, it may require some expertise with regard to IT applications (Kessler, 2011).
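The kind of statistics such an application derives from stored training data can be sketched very simply: given logged distance and duration samples, compute totals and average pace. The sample log below is invented for illustration; Garmin Connect's actual analysis is far richer and device-driven.

```python
# Hypothetical training log: (distance in km, duration in minutes) per run.
runs = [(5.0, 26.0), (8.0, 44.0), (5.0, 25.0)]

total_km = sum(d for d, _ in runs)
total_min = sum(t for _, t in runs)
avg_pace = total_min / total_km  # minutes per kilometre

print(f"total distance: {total_km:.1f} km")
print(f"average pace: {avg_pace:.2f} min/km")
```

Because the raw samples are stored rather than discarded, the same log can later be re-analyzed (per-week totals, progress trends), which is exactly what distinguishes this class of tool from the real-time-only trackers discussed below.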

Brief description of the history and reception of selected technology

Garmin International, Inc. is credited with introducing the technology. The firm was started in 1989 in Kansas. However, the first name given to the company was ProNavs, before it was renamed Garmin (Garmin, 2014). The history of the business establishment shows that it grew from a very humble beginning, which was characterized by just one product in the market.

The GPS product, which was intended to increase connectivity among users, was sold at $2,500. It was in the early 1990s that Garmin got the first of the main consumers of its innovative product: the US Army. It was estimated that Garmin had sold about 3 million GPS devices by the start of the millennium, at which time it was producing over fifty diverse models of its product. When it went public in 2000, Garmin was selling its products in over 100 countries across the world, with about 1,205 workers, most of whom were deployed in the US market (Garmin, 2014).

The technology is the latest product adopted by the company. It was rolled out on the platforms of existing products, such as GPS devices. The firm took advantage of its existing markets to introduce the new technology. For example, it was estimated that the company had established a stable network of 2,500 distributors in the 100 countries in which it operated in the 2000s.

Advertising on the internet has been a perfect platform on which the product has been popularized. It is expected that most customers of the technology are knowledgeable about the use of the internet; thus, it was selected as the best option for launching Garmin Connect. Currently, Garmin Connect can be found on various sites such as Facebook and Twitter (Garmin, 2014). In fact, the use of online advertising was the best approach to ensuring that the product was adopted by users across the world.

Users were quite enthusiastic about a product that could allow them to monitor their sporting activities both indoors and outdoors. Customers were happy that, finally, the pioneer of GPS devices had introduced a device that could remotely record their statistics and help them make adjustments when needed. Currently, it is estimated that the technology has over 10 million users across the world.

This number is expected to grow in the future due to the rapid adoption of the internet, which forms the basis of the product's functioning. For example, many regions in the developing world are in the process of being connected to the internet, which will encourage people to subscribe to Garmin Connect.

The world of sports is characterized by rapid advancements of equipment due to improvements in technological applications (Butryn & Masucci, 2009). Other technologies perform almost similar functions, but they do not involve internet connections or storage of data. In other words, similar products allow devices to capture real-time data only.

For example, a swimming tracker is a device that can be worn on the wrist or legs by swimmers to record the time taken to complete certain distances. However, the data are not stored for future analysis. Another example is the use of technologies in athletics, which involve determining the time taken by athletes on specific lanes to complete races. Just like the swimming tracker technology, there is no analysis or storage of data.

In-depth discussion of research on how people use this technology or similar products

The introduction of Garmin Connect in the management of sporting activities has attracted a lot of attention from researchers. The aim of conducting research into the topic is to determine the impact of the technology in improving the results of sports across the world (Kang, Shilton, Estrin & Burke, 2011). Research is being conducted to ascertain the age brackets of the users of the product.

This would be essential in determining the right markets for the technology across the world (Kang et al., 2011). The following is the major research question: How can the use of Garmin Connect impact the way people engage in sports? Currently, there are over ten million users of the product across the world, most of whom are in the US. The following graph shows the data with regard to the adoption of the product across age brackets.

Figure 1. A bar graph comparing the adoption rates of Garmin Technology on the premises of age brackets of users.

Important findings have been obtained with regard to the use and/or impact of the technology. Research has shown that Garmin Connect has positively impacted people's lives in many ways. Research results show that the application has made a significant number of people take up sporting activities because they have the ability to monitor their statistics with the use of Garmin Connect.

Many people are able to make changes in their fitness exercises as well as make adjustments based on the analysis of data done by the application (Hudson, Fudge & Rae, 2011; Kessler, 2011). In addition, research has demonstrated that Garmin Connect helps people outside the sports fraternity to improve their general health status. For example, they are able to monitor their cardiovascular performance during and after physical exercises.

Two main lessons can be learnt from the research on this technology. First, technology is inevitable in the sports fraternity. Second, the use of the internet has many applications, which could greatly improve the life of human beings.

A personal reflection on the impact this research has on my own life

I came to learn about the technology on the internet while I was doing research on technologies that could help to document sports data online. This was motivated by my desire to learn more with regard to sports because that is my envisaged career. At first, I could not believe that such a technology existed, but I was able to create an account online and started using the service.

I selected the topic because I am enthusiastic about sports and related technologies. I wanted to learn more about advancements in my career field. In addition, the topic was among the latest innovations in the sporting world that made an excellent use of the internet.

Garmin Connect has revolutionized sports by enabling the collection, analysis and presentation of data.

Conclusion

Research in the future should focus on determining the impacts of the technology on a broad spectrum of sporting activities. The following research questions would be answered:

  1. What is the overall impact of Garmin Connect on all sporting activities?
  2. How can the use of the technology be increased?

Low internet connectivity in some regions could negatively impact the usage of the application.

Generally, the technology can have impacts on sports and health states of individuals across the world.

Garmin Connect has the potential to impact the sports and health outcomes of millions of users.

References

Butryn, T. M., & Masucci, M. A. (2009). Traversing the matrix: Cyborg athletes, technology, and the environment. Journal of Sport & Social Issues, 33(3), 285-307.

Garmin. (2014). Sports features.

Hudson, C., Fudge, C., & Rae, J. (2011). From products to services: Understanding the new rules of engagement. Design Management Review, 22(4), 46-53.

Kang, J., Shilton, K., Estrin, D., & Burke, J. (2011). Self-surveillance privacy. Iowa L. Rev., 97(23), 809.

Kessler, F. (2011). Volunteered geographic information: A bicycling enthusiast perspective. Cartography and Geographic Information Science, 38(3), 258-268.

WiMAX Technology: Specific Aspects

Introduction

Arguably one of the most epic accomplishments of the 21st century was the invention of the computer and the subsequent creation of computer networks. These two entities have virtually transformed the world as far as information processing and communication are concerned. The interconnection capability of computer systems can arguably be described as the feature, which makes them most versatile and invaluable to their users. This being the case, the network functionality of computing systems has been exploited by organizations and individuals alike as efficient local and global communications became the defining attribute of success. As such, the creation of networks is key to any interconnected computing system. A network may be created that uses cables (fixed connection) or that uses radio waves (wireless network).

While fixed Internet networks continue to form the backbone of the communication system, wireless data transmission has become more favored for various reasons. Various forms of wireless technologies have come up to fulfill this role. Nuaymi asserts that WiMAX technology is at present one of the most promising global telecommunication systems (2). WiMAX emerged as a Broadband Wireless Access system that has many applications, ranging from the mobile cellular network to backhauling. Considering the prominence of WiMAX in networking, this paper will set out to give a detailed discussion on some of the specific aspects of WiMAX.

WiMAX Overview

WiMAX specifications have gained significant success in the provision of internet access and broadband services via wireless communication systems. WiMAX is defined as a technology that provides for mobile and stationary broadband wireless access to IP-based services through a common radio technology, providing support for quality of service, roaming of mobile users and strong security (Schmidt and Lian 253). Khosroshahy and Nguyen document that WiMAX is considered a Last Mile solution, which provides a fast local connection to the network (3). An obvious merit of WiMAX over high-capacity cable/fiber is that it is less expensive to deploy and can be deployed in areas that lack a good telecommunication infrastructure.

WiMAX is a wireless transmission technology that can be used over two wireless network categories: the Wide Area Network (WAN) and the Metropolitan Area Network (MAN). A MAN is a data network that may extend for several kilometers and is usually used for large campuses or a city. A WAN is a data network that spans a broad area and links various MANs. Scarfone, Tibbs, and Sexton reveal that WiMAX is the most commonly used form of WMAN, and its promotion of interoperability between products based on the IEEE 802.16 standard makes it the model technology for these networks (2).

Interoperability is ensured by the WiMAX Forum, which is an industry-led non-profit organization boasting of a membership of over 500 as of the year 2008 (Roh and Yanover 3). Dowd asserts that WiMAX, in essence, provides a feasible and cheaper alternative to wired WAN technologies such as cable or leased lines (3).

The architectural components of WiMAX include: a Base Station (BS), a Subscriber Station (SS), a Mobile Subscriber (MS), and a Relay Station (RS). The BS connects and governs access by devices from the wireless network subscriber to the operator network. The BS is made up of physical devices such as antennas and transceivers, which are necessary for wireless data network communication. An SS is a fixed wireless node that communicates with the BS or forms a link between networks. An MS is a wireless node that receives or transmits data through the Base Station. An RS is a Subscriber Station whose purpose is to retransmit traffic to other relay stations or subscriber stations.

Key MAC Features

Mobile WiMAX has certain key MAC (Medium Access Control) features that provide for the high efficiency and flexibility for which the technology is renowned. To begin with, WiMAX provides for connection-oriented services, with certain classification rules specified so as to define the traffic that is associated with a particular connection. In each connection, quality of service parameters are defined, such as the minimum reserved rate and the maximum sustained rate (Roh and Yanover 8).
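
The classification idea can be sketched as follows. This is a hedged JavaScript illustration; the rules, ports, and rate figures are invented for the example and are not taken from the 802.16 standard.

```javascript
// Illustrative sketch: classification rules map traffic to connections,
// each carrying its own quality of service parameters (rates in kbps).
// Connection IDs, ports, and rate values here are hypothetical.
const connections = [
  { id: 1, rule: p => p.port === 5060, qos: { minReservedRate: 64, maxSustainedRate: 128 } },  // e.g. VoIP signalling
  { id: 2, rule: p => p.port === 80,   qos: { minReservedRate: 0,  maxSustainedRate: 2048 } }, // e.g. web traffic
];

function classify(packet) {
  const conn = connections.find(c => c.rule(packet));
  return conn ? conn.id : null; // unmatched traffic belongs to no connection
}
```

The point is simply that QoS is attached to the connection, not to individual packets: once a packet is classified, its rate guarantees follow from the connection it joins.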

The WiMAX technology also has mechanisms set in place to reduce the MAC overheads during transmission. In particular, the technology has support for Payload Header Suppression (PHS) as well as Robust Header Compression (ROHC) for IP headers. Roh and Yanover reveal that these mechanisms are effective since data packets being transmitted at the network level contain many repeated parts of the header, and by replacing these with short context identifiers, PHS greatly reduces the overhead that results from headers (8).
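
A toy sketch of the suppression principle described above: header fields repeated by every packet on a connection are replaced by a short context identifier, and the receiver restores them from a shared directory. All names here are hypothetical; real PHS operates on negotiated byte masks, not JSON strings.

```javascript
// Sender side: repeated headers collapse to a small integer context id.
const contexts = new Map(); // header string -> context id
let nextId = 1;

function suppress(packet) {
  const key = JSON.stringify(packet.header);
  if (!contexts.has(key)) contexts.set(key, nextId++);
  return { ctx: contexts.get(key), payload: packet.payload };
}

// Receiver side: the directory maps the context id back to the full header.
function restore(compressed, directory) {
  return { header: directory.get(compressed.ctx), payload: compressed.payload };
}
```

After the first packet establishes a context, every later packet on the connection carries only the small `ctx` integer instead of the full header, which is where the overhead saving comes from.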

Another feature at the MAC layer that enhances the quality of service in WiMAX is the scheduling system specified by the IEEE 802.16 MAC. Under this scheduling, a subscriber station that wishes to attach itself to a network has to compete with others only when it initially joins the network. A time allocation is then made by the Base Station, though this time slice can be expanded or reduced based on the needs of the SS. This slice remains assigned to the SS, thereby ensuring stability under overload and oversubscription. The scheduling also offers greater bandwidth efficiency, which improves the quality of service since resources are well balanced among the needs of the various SSs.
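
The grant-and-resize behaviour described above can be modelled roughly as follows. This is a simplified sketch; the slot counts and method names are invented for illustration and do not mirror the 802.16 signalling.

```javascript
// Toy base-station allocator: each subscriber station (SS) receives a time
// slice on entry; the slice can later grow or shrink on demand, but it is
// never revoked outright, which models stability under oversubscription.
class BaseStation {
  constructor(totalSlots) { this.free = totalSlots; this.slices = new Map(); }

  join(ss, requested) {
    const granted = Math.min(requested, this.free); // never over-commit
    this.free -= granted;
    this.slices.set(ss, granted);
    return granted;
  }

  resize(ss, requested) {
    const current = this.slices.get(ss);
    const granted = Math.min(requested, current + this.free);
    this.free += current - granted; // return or consume the difference
    this.slices.set(ss, granted);
    return granted;
  }
}
```

Note how a station shrinking its slice frees capacity that another station's later `resize` can absorb, which is the balancing behaviour the paragraph above attributes to the scheduler.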

Data Transmission

WiMAX offers a number of coverage options for its broadband wireless transmissions. SR Telecom notes that while most other technologies are limited to providing only line-of-sight (LOS) coverage, WiMAX technology provides non-line-of-sight (NLOS) coverage as well (2). When WiMAX propagates signals between nodes at frequencies of 10-66 GHz, the signals are highly sensitive to radio-frequency (RF) obstacles, and as such, an unobstructed view between the nodes is required. This is known as a line-of-sight (LOS) link, where a signal has to travel in an unobstructed path from the transmitter to the receiver (SR Telecom 1). If there is an obstruction in the line of sight between the transmitter and receiver, there will be a significant loss of signal strength, resulting in poor performance. LOS employs relatively simpler RF modulation techniques than NLOS. The power needed for transmission is also lower since the signal is propagated in a straight path.

Figure 1: LOS Signal Transmission.

Non-line-of-sight (NLOS) coverage employs advanced RF modulation techniques to compensate for RF signal changes caused by obstacles that would prevent LOS communication (Scarfone, Tibbs, and Sexton 3). The operating frequency is 2-11 GHz, depending on whether the link is being used for mobile or fixed WiMAX operations. NLOS signals, on being transmitted, reach the receiver through a combination of reflection, scattering, and diffraction.

NLOS signals are employed more often since the feasibility of LOS is hindered in many areas by deployment costs, environmental factors, and licensing factors. WiMAX that employs NLOS technology offers certain obvious advantages over a LOS implementation. An NLOS system does not require antennas to be placed at the great heights that LOS systems call for. In addition, NLOS technology results in reduced costs since extensive pre-installation site surveys are not necessary before the system is installed. NLOS also enables WiMAX technology to deliver services to a wider range of customers.

Figure 2. NLOS Propagation.

There are two main techniques used to deliver broadband data at the physical layer, namely Orthogonal Frequency Division Multiplexing (OFDM) and Orthogonal Frequency Division Multiple Access (OFDMA). Khosroshahy and Nguyen document that these techniques, which have only been developed in the past few years, deliver broadband services that are comparable to wired services in terms of data rates (6).

In the OFDM technique, a single transmitter sends out a signal at different orthogonal frequencies using advanced modulation techniques to ensure that the signal has a high resistance to interference. This technique is favored by most operators since it has a superior NLOS performance due to its high spectral efficiency (Khosroshahy and Nguyen 7). OFDMA has the same operating principle as OFDM with the added advantage that it allows for multiple users to transmit data using the same spectrum simultaneously. This is achieved through the sharing of sub-channels among multiple users.
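
The orthogonality property that underpins both techniques can be checked numerically: over one symbol period, sampled subcarriers at distinct integer frequencies have zero correlation, so they can be packed tightly without interfering with each other. A small sketch:

```javascript
// Correlate two sampled cosine subcarriers (integer frequencies k and m)
// over one symbol period of N samples. For k !== m the sum is zero, which
// is the orthogonality that lets OFDM subcarriers overlap in spectrum
// without mutual interference.
function correlate(k, m, N = 64) {
  let sum = 0;
  for (let n = 0; n < N; n++) {
    sum += Math.cos(2 * Math.PI * k * n / N) * Math.cos(2 * Math.PI * m * n / N);
  }
  return sum; // ~0 for k !== m, N/2 for k === m (with 0 < k < N/2)
}
```

OFDMA then assigns disjoint subsets of these mutually orthogonal subcarriers (sub-channels) to different users, which is why multiple users can transmit in the same spectrum simultaneously.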

Mobility

WiMAX gives full mobility support for devices that may be moving below certain threshold speeds. Nuaymi goes on to illustrate that WiMAX also allows for portability, since a user can move at a reasonable speed over a large area covered by multiple BSs without interruption of the current session or communication. The speed at which a mobile WiMAX device can move between cells in a seamless session is valued at 120 km/h. Roh and Yanover state that WiMAX systems can detect the mobile speed and automatically switch between different types of resource blocks to optimally support the mobile user (7). In addition, WiMAX technology employs the Hybrid Automatic Repeat Request (HARQ), which assists in mitigating the effect of fast channel and interference fluctuation.
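
The speed-adaptive behaviour described by Roh and Yanover might be pictured as below. Only the 120 km/h figure comes from the text; the intermediate threshold and the resource block names are illustrative assumptions, not values from the specification.

```javascript
// Hypothetical sketch: pick a resource block type based on detected speed.
// The 120 km/h seamless-session limit is from the discussion above; the
// 60 km/h cutoff and the block labels are invented for illustration.
const SEAMLESS_HANDOVER_LIMIT_KMH = 120;

function selectResourceBlock(speedKmh) {
  if (speedKmh > SEAMLESS_HANDOVER_LIMIT_KMH) return "unsupported"; // session may break
  return speedKmh > 60 ? "distributed" : "localized";
}
```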

Security

Securing a network is, at best, a very challenging task due to the fact that new software and hardware keep being developed by the wireless industry while threats and vulnerabilities keep changing. As such, the security implementations of the previous year might prove to be grossly inadequate for the current year. With these considerations, the WiMAX technology has an intricate security architecture that is meant to ensure that the network is secure for both fixed and mobile wireless access. Schmidt and Lian assert that the overall goal of the security architecture employed by WiMAX is to create an interoperable security solution that is stable but also accepts the common security protocols (263).

At the very basic level, all WiMAX links are encrypted, and for one to read the information, they need to employ some decryption mechanism. The Extensible Authentication Protocol (EAP), which is based on mutual authentication between the mobile and the network, is used at the security level of WiMAX to ensure security. For fixed wireless access, WiMAX uses a single network access authentication and authorization key establishment protocol, Privacy Key Management (PKM) (Schmidt and Lian 255).

Mobile WiMAX networks require secure access, and authentication is mandatory before communication can commence (Schmidt and Lian 258). WiMAX employs authentication and authorization processes in communication between nodes. Authorization is the process of determining the level of access that a node is given after it has been identified and authenticated. WiMAX uses the public-key infrastructure for device authentication purposes. To provide for secure communication, the WiMAX system performs three steps: authentication, key establishment, and data encryption.

Figure 3. WiMAX Security Framework.

It is worth noting that WiMAX network specifications are constantly evolving, and as such, the security architecture is expanding as well. Roh and Yanover state that the basic security mechanisms are strengthened by adding digital-certificate-based Subscriber Station device authentication to the key management protocol (9).

Conclusion

This paper set out to analyze WiMAX, which is increasingly becoming the preferred wireless technology for Broadband Wireless Access systems. The paper has discussed various aspects of the technology, including its mode of transmission, mobility, and security. From the discussions undertaken, it can be seen that WiMAX technology is secure as a result of the authentication and encryption capabilities employed. A major strength of WiMAX has been seen to be its promotion of interoperability of broadband wireless products, therefore allowing products from various manufacturers to operate seamlessly on a network, as well as its employment of NLOS technology, which allows RF signals to be transmitted regardless of obstacles.

From this paper, it can be suggested that WiMAX technology is the future of wireless communication over WMANs since it provides quality broadband services. As it currently stands, WiMAX specifications have gained significant success all over the world. It can, therefore, be projected that WiMAX technology will continue to be used as the technology of choice in wireless networks.

Works Cited

Dowd, Kevin. Wireless WAN/LAN solutions for schools using WiMax, WiFi and Secured Access and Content. Halestar, Inc, 2008.

Khosroshahy, Massod and Nguyen, Vivien. A study of WiMAX QoS mechanisms. Telecom Paris, 2006.

Roh, Wonil and Yanover, Vladimir. Introduction to WiMAX Technology. John Wiley & Sons, Ltd, 2009.

Scarfone, Karen, Tibbs, Cyrus and Sexton, Matthew. Guide to Security for WiMAX Technologies. National Institute of Standards and Technology. Special Publication 800-127, 2009.

SR Telecom. WiMAX Technology LOS and NLOS Environments. SR Telecom Inc, 2004.

Active Server Pages Analysis: Technology Overview

Background

Active Server Pages (ASP) is a scripting technology developed by Microsoft Corporation that runs on the server side, in contrast to other scripting approaches that are integrated into web-based applications on the client side. Scripting is one of the integral technologies used in the development of web services. An example is JavaScript, which is client-based (Francis, 1998). ASP facilitates the creation of web-based applications and services that are dynamic and incorporate interactive elements. A basic ASP page comprises a Hyper Text Markup Language (HTML) page containing server-side scripts, which run on a web server before the page is sent to a web browser application.

ASP is combined with other platforms such as the Component Object Model (COM), HTML, and the Extensible Markup Language (XML) to develop web services that are more user-interactive than other scripting technologies allow (Francis, 1998). Scripts that are based on the server are usually executed when a web browser application requests an .asp file from the web server. The web server responds to the request by executing the commands in the script. The web server then transforms the result into the standard format for a web page, after which it sends it to the user (Shaw, 2003).
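
The request-execute-return cycle described above can be miniaturized as follows. This sketch supports only a single `<%= ... %>` output form, far less than classic ASP, purely to show that the script runs on the server and the client receives only plain HTML.

```javascript
// Minimal server-side rendering sketch: code embedded between <%= and %>
// delimiters is evaluated before the response leaves the server, so the
// browser never sees the script itself. Names here are hypothetical.
function renderAsp(template, context) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, (_, name) => String(context[name]));
}

const page = "<html><body>Hello, <%= user %>!</body></html>";
const html = renderAsp(page, { user: "Alice" });
// `html` contains only finished HTML, with no trace of the <% %> script.
```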

ASP scripts used in the development of web services can be extended using COM and XML components. COM components provide a platform through which the scripts can be reused and made compact. In addition, they provide a secure means through which information can be accessed. Automation is a significant concept during the implementation of a scripting technology (Mitchell, 2000). On the other hand, ASP scripts that have been extended using XML deploy the use of a mark-up language that formats the data in a structured manner, through the use of tags. ASP was originally developed as an add-on that functioned under the Windows NT 4.0 platform through the Internet Information Services.

Later developments saw its integration into the server operating systems of Microsoft Corporation as a free component (Francis, 1998). ASP used the .asp file extension. Active Server Pages under the ASP.NET platform use the .aspx file extension. ASP.NET is supported only on the .NET framework of Microsoft (Strahl, 2002). The .NET framework is faster and can run more robust scripting commands than the classic ASP. The development of ASP.NET was based on the frameworks of the classic ASP (Francis, 1998).

Introduction

This research paper attempts to provide an insight into the ASP scripting technology and its relationship with the development of web services. It outlines the history of the ASP technology, the basics of the ASP technology, the functions of the ASP technology, ASP versions and their applications, and the strengths and limitations of the ASP technology, before providing a summary concerning the use of ASP as a scripting technology.

History of the ASP Technology

The ASP was developed during the mid-90s and was aimed at the creation of web services that change according to the user's demands and interactions (Francis, 1998). This is helpful for the storage of information that is specific to a particular user. The underlying concept during the development of the ASP was to facilitate what is called dynamic web content (Mitchell, 2000), implying that a web service and its respective application is intelligent enough to learn data such as the frequent visitors of a web page and their matching credentials, after which it can store and retrieve them when needed.

For instance, in a business context, such an approach is effective in the management of passwords and login information such as user names. It is also important in applications that require updating after very short periods such as news events (Mitchell, 2000).

The original ASP was developed using the concepts of the dbWeb and iBasic tools (Mitchell, 2000), which were developed by Aspect Software Engineering (Strahl, 2002). ASP can therefore be said to be one of the first web-based application environments that executed directly on the server, rather than on the client side. This was during 1996, nine months after NeXT (later acquired by Apple) released its web development application called WebObjects (Morneau & Batistick, 2000).

The main objective behind the development of WebObjects was to develop a high performance scripting technology approach compared to the CGI scripts that were being used during the time. The calling of external applications was also not effective and did not exhibit high performance. As a result, this led to the development of classic ASP (Shaw, 2003).

The Internet Information Server 3.0 and other applications used for hosting web servers on a Windows framework created the need for web pages that could change as per user commands and incorporate the concept of dynamic web content. The first approach towards the solution of this problem was the development of the ASP (Strahl, 2002). The original ASP used the Visual Basic (VB) programming language, in the form of VBScript, to execute scripting commands. This was ASP version 1.0, created in December 1996 by Microsoft Corporation. Prior to the development of ASP version 1.0, web developers relied on integrating programming languages and scripts that had to be loaded externally in order to implement web services that were dynamic and interactive.

This was slow on a significant number of Windows-based servers and caused performance issues on those servers. As a result, Microsoft Corporation saw the need to develop a scripting technology that facilitated the creation of dynamic web services with little performance constraint on its servers. In order to implement this, Microsoft had to develop a scripting technology that could be executed directly on the server, without the need to load external programs that are client-based (Morneau & Batistick, 2000). The ASP provided an effective solution to these server performance constraints. The execution of components in ASP was done directly in the website, via a web browser, through the ActiveX technology, which was used to build single components (Francis, 1998).

ASP and the development of web services

A web service is one of the elements of a web server that an end-user on the client side can call using HTTP requests. A web service can be defined as an application that can be accessed using standard web communication protocols such as HTTP, the Simple Object Access Protocol (SOAP), and SMTP. The various protocols are mainly used for transport, while the backend code is used in the development of ASP web services.

ASP provides a framework through which the web developer can develop custom web services that the user can request using various client applications. ASP can support the development of XML web services, WCF services, and Web references developed in Visual Studio. An XML web service is used to offer a specific functionality element to web content developed using internet standards based on HTTP and XML. A significant characteristic of ASP that makes it suitable for the development of web services is its ability to offer frameworks for authentication and state management. It is also important to note that web services developed using ASP are interoperable across all messaging platforms on the client side.

Web services developed using ASP have features that can be used to implement the various types of authentication, user roles, and properties of their profiles. ASP web services form an integral fragment of the Service-Oriented Architecture, whereby the server provides access to different web services. An essential characteristic of the web services developed using ASP is that they can support different client applications and services. The various client applications that can support ASP web services are AJAX clients, .NET Framework clients, and SOAP clients.

There are several approaches to consuming web services implemented using the ASP framework. It is important to note that any platform that can communicate with SOAP clients has the capability of communicating with web services implemented in ASP. Most web services implemented in the same server use the same XML document with different backend code for the web service. Some of the ways in which web services are consumed include the use of Visual Studio .NET, command-line tools, and specific web browser features that facilitate remote scripting. The next section outlines the basics of web services development using the ASP framework.

Basics of the ASP technology

The ASP technology is basically a scripting strategy that aims at the development of more dynamic and interactive web services. There are two major components needed in the creation of an ASP: an HTML component and the code-behind page (Mitchell, 2000). The HTML component is primarily used in the development of the visual representation of the web service (Francis, 1998).

There are diverse scripting languages that can be used in the development of ASP. The most commonly used are VBScript and JScript, both supported by its scripting engine. The scripting languages form an essential component of the ASP technologies. The scripting code can be implemented using either JScript or VBScript (Morneau & Batistick, 2000).

Microsoft Corporation uses the ASP as a basic framework for the development of web backend logic. The scripting metaphor is an important concept in the development of the ASP that allows web developers and users to integrate HTML and scripted code in a single document in order to create web content that is dynamic (Mitchell, 2000). The idea of scripting is not new to Microsoft and has been in existence for a long period of time. Development tools such as Cold Fusion had been implemented long before the scripting technology came to be implemented by Microsoft.

VBScript is one of the Active Scripting languages created by Microsoft and modelled on Visual Basic (Strahl, 2002). Its advantage for use in scripting is that it is lightweight and therefore offers a fast interpreter in Microsoft environments. Every desktop release of Microsoft Windows comes with an in-built VBScript engine. An important aspect of VBScript is that it uses COM objects to access data in the environment in which it is being run. VBScript is used to code executable functions embedded in HTML web services (Francis, 1998). In addition, the scripting language is used for web page processing on the server side, in which case it is implemented using the <% and %> context switch statements (Strahl, 2002).

JScript, on the other hand, is implemented on the server side through the use of the Windows Script engine. With the development of the .NET framework, a new version of JScript was developed, called JScript .NET. JScript is different from JavaScript in the sense that it has additional features such as conditional compilation. Just like VBScript, JScript is implemented on a COM platform, and it has the ability to host web-based applications on the server side.

With the dawn of ASP, Microsoft took the concept of scripting to another level through the creation of a scripting environment that could be integrated into the Windows operating system architecture and other extensions such as COM. It is important to note that ASP is a development of ISAPI, but in a more specific sense, eliminating the complexities associated with the early scripting platforms. The ASP serves to hide the complexity associated with the implementation of system interfaces and the protocols used in client-server communication (Mitchell, 2000). The ASP engine is implemented as a script-mapped server extension, with the driving engine packaged as a dynamic-link library (DLL); the result is asp.dll, which is an extension of ISAPI (Shaw, 2003).

The ISAPI extension is an important component of the ASP and is called every time a user tries to gain access to a web page having an .asp file extension. Script maps are stored in the server database, and they are used for redirecting the ASP scripts to the DLL. Script maps primarily serve as an avenue through which the ASP can execute automatically. The underlying principle is that asp.dll is invoked for every .asp file in the server database.
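
The script-map lookup can be pictured as a simple table from file extension to handler module. The table below is illustrative; real IIS script maps carry more metadata (verbs, flags) than a bare extension-to-DLL pair.

```javascript
// Hypothetical script map: the server routes each request to a handler
// based on the requested file's extension, so every .asp request reaches
// asp.dll automatically.
const scriptMap = new Map([
  [".asp", "asp.dll"],
  [".aspx", "aspnet_isapi.dll"], // the later ASP.NET mapping, for contrast
]);

function handlerFor(path) {
  const ext = path.slice(path.lastIndexOf("."));
  return scriptMap.get(ext) || "static-file-handler"; // plain files served directly
}
```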

asp.dll therefore hosts the two most important components of the ASP: the HTML parser and the scripting language interpreter, which can be either the VBScript or the JScript interpreter (Morneau & Batistick, 2000). The HTML parser is responsible for parsing the HTML pages, evaluating the code, and transforming it into an HTTP-compliant form, which can be returned to a web browser in response to a user request (Mitchell, 2000). The diagram below indicates the architecture of the ASP and its significant components.

Active server architecture

The script code used should be restricted to the features available in the chosen scripting language. One significant aspect of the ASP engine is its capability to handle COM objects. COM objects are among the built-in components of ASP that are usually available in web services scripted with ASP. Other important components available in ASP include the Request object and the Response object, which are used for handling the input and the output, respectively, of a given web page scripted with ASP (Morneau & Batistick, 2000).
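As a brief illustration of these two built-in objects, the sketch below reads a value submitted by the client through the Request object and writes output back through the Response object; the query-string field name is assumed for illustration only:

```asp
<%
' Read a query-string value supplied by the client (field name is hypothetical)
Dim userName
userName = Request.QueryString("name")

' Write a greeting back into the HTTP response,
' encoding the value before echoing it to the browser
Response.Write "Hello, " & Server.HTMLEncode(userName)
%>
```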

The Session object is another important component of ASP; it is used to manage the data that a user manipulates over the web browser during a given period of time. The Application object, by contrast, is used to manage data shared across all sessions of the application (Morneau & Batistick, 2000).
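The distinction between per-user and application-wide state can be sketched as follows; the variable names are assumptions made for the sake of the example:

```asp
<%
' Session state: private to one visitor, persists across that visitor's requests
Session("lastPage") = "products.asp"

' Application state: shared by every user of the application,
' so updates are wrapped in Lock/Unlock to avoid concurrent writes
Application.Lock
Application("visitCount") = Application("visitCount") + 1
Application.Unlock
%>
```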

ActiveX Data Objects (ADO) do not form an intrinsic component of ASP; they must be created and implemented on the web server through a call to Server.CreateObject(). Component development in ASP is a complex process because components that run in IIS are prone to crashing (Mitchell, 2000). In addition, they can cause the web server to hang or, in some cases, to fail completely. The complexity of component development is also felt at run time, when components are almost impossible to debug.
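A minimal sketch of instantiating ADO from an ASP page is shown below; the data source name, table and column are placeholders invented for illustration, not a real database:

```asp
<%
' ADO is not intrinsic to ASP: the objects must be created explicitly
' "ExampleDSN" and the Customers table are hypothetical
Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "DSN=ExampleDSN"

Set rs = conn.Execute("SELECT Name FROM Customers")
Do While Not rs.EOF
    Response.Write rs("Name") & "<br>"
    rs.MoveNext
Loop

rs.Close
conn.Close
%>
```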

The functioning of the ASP technology

The functioning of ASP is primarily based on the server-side scripting approach. This means that the web server loads, executes and returns the output with limited involvement of the client, except in cases where the script execution requires input from the user (Morneau & Batistick, 2000).

The execution of a server-side script begins when a web browser issues a request for an .asp file stored on the web server. It is the responsibility of the web server to call the ASP engine, which is duly responsible for processing the requested file from top to bottom. This process is called HTML parsing. The scripts themselves are not accessible to users under any circumstances (Morneau & Batistick, 2000).

The creation of ASP web applications varies according to the scripting language used. ASP provides a framework for the creation of server-side scripts in any scripting or programming language that is COM compliant. Basically, an ASP file in any scripting language should have the extension .asp. The three basic components of an ASP file are text, HTML tags and server-side scripts (Francis, 1998).

A basic way of creating an .asp file is to rename an .html file with an .asp file extension. The file should contain ASP functionality in order for the web server to process it and send the output to the client via a web browser. A virtual directory is required in order to save the .asp files on the web site; it is also important that the virtual directory be script enabled and have permission to execute scripts. HTML parsing is used to transform the .asp files into normal HTTP responses in the client's browser. Any text editor can be used to create .asp files, provided they are saved with the required file extension.
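The three components named above can be seen together in a minimal, hypothetical .asp file: plain text and HTML tags pass through to the browser untouched, while the delimited script is executed on the server first:

```asp
<html>
<body>
    <!-- Plain text and HTML tags are sent to the browser unchanged -->
    <h1>Welcome</h1>

    <!-- The server-side script below runs before the page is sent -->
    <% Response.Write "This line was generated on the server." %>
</body>
</html>
```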

The addition of server-side script commands is another significant step in the creation of ASP applications. In this context, the scripts are distinguished from ordinary HTML tags and text by the use of delimiters. When embedding scripts in HTML, the delimiter symbols < and > are used to enfold tags written in HTML, while ASP deploys <% and %> to nest the script commands (Shaw, 2003). A scripting command can be placed anywhere within the enclosure of the HTML tags. An example of a script command that can be used to develop a web service enclosed in HTML is shown below.

This web page was last viewed on <%= Now() %>

The Now() function is a VBScript command that returns the current date and time to the web-based application or browser. Script commands nested in the delimiters are referred to as primary script commands. ASP delimiters can accommodate any statement or expression, provided it is valid in the primary scripting language that the developer is using (Mitchell, 2000). Procedures and operators can also be embedded within the script delimiters in order to enhance the interaction within the ASP web application being developed.
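To illustrate embedding a procedure and a loop within the script delimiters, the hypothetical fragment below defines a small VBScript subroutine and calls it while interleaving HTML tags:

```asp
<%
' A procedure defined in the primary scripting language (VBScript here)
Sub PrintRow(label)
    Response.Write "<li>" & label & "</li>"
End Sub
%>
<ul>
<%
' A loop mixing script commands with the surrounding HTML list markup
Dim i
For i = 1 To 3
    PrintRow "Item " & i
Next
%>
</ul>
```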

Security is an important aspect of any web-based application, and the implementation of security in an ASP web application is no exception. There are various methods that the ASP technology uses to enhance security during the development of web services. One of the first approaches ASP uses to ensure security is membership management accounts. The ASP.NET framework provides an avenue through which security information such as login names and their matching credentials can be stored and retrieved only once a user provides the credentials needed for access (Shaw, 2003). This is achieved by identifying users who log into a web site and storing the necessary information concerning their credentials.

The ASP technology is somewhat intelligent in the sense that it can associate particular users with specified roles. This is a security strategy that assigns specific user accounts permission to perform specific tasks in the application. The ASP technology can also be used to limit the number of users who access a given web page and the embedded web-based application. Access limitation in ASP can be realised in two basic forms: file authorization and Uniform Resource Locator (URL) authorization (Mitchell, 2000).

ASP versions and their applications

There have been three major releases of ASP since it was first developed: ASP version 1.0, ASP version 2.0 and ASP version 3.0. ASP version 1.0 was distributed with Internet Information Services 3.0 in December 1996. This first version of ASP was compatible with the Windows NT Option Pack. ASP version 1.0 had serious performance issues that caused server slowness; web applications implemented under it could cause the web server to hang, or sometimes to fail completely, in cases of non-responsive scripts (Francis, 1998).

ASP version 2.0 was implemented with Internet Information Services 4.0 in 1997. The second release of ASP served to provide a solution to the performance issues associated with the first release. It had six significant built-in objects: Application, Request, Response, Server, Session and ObjectContext. This release incorporated the concept of active scripting using COM, which provided functionality allowing web services developed in ASP to access compiled libraries such as DLLs (Mitchell, 2000).

ASP version 3.0 was released in November 2000. The third version was distributed with Internet Information Services 5.0 and served as an improvement on the previous version. In addition to the objects available in ASP version 2.0, ASP version 3.0 introduced the ASPError object (Strahl, 2002). This means that the third release had more effective error-handling features than the earlier releases of ASP. The error-handling object was intrinsic to ASP version 3.0, meaning that there was no need to load external programs to handle errors on the server side (Morneau & Batistick, 2000).

In addition, ASP version 3.0 showed reduced cases of non-responsive scripts and of performance-related issues on the server. The third release provided an avenue for the development of effective web applications. There were particular enhancements in ASP version 3.0 that revolutionized the development of web-based applications. One such enhancement was Server.Transfer, which is used to transfer control from one ASP web page to another. This enhancement served to eliminate the slowness associated with the Response.Redirect of the earlier releases of ASP (Strahl, 2002).

Another significant enhancement in ASP version 3.0 was Server.Execute, which is also used to pass control from one ASP web page to another. The difference is that with Server.Execute, control returns to the calling page once execution of the called page is complete.
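The contrast between the two methods can be sketched as follows; the page names are hypothetical:

```asp
<%
' Server.Transfer hands control to another page permanently:
' execution of this page would stop at the call
' Server.Transfer "summary.asp"

' Server.Execute runs another page, then control returns here
Server.Execute "header.asp"
Response.Write "Back in the calling page after header.asp has finished."
%>
```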

The latest release of ASP has significant features that make it easy to create interactive web services. Apart from the flow-control capabilities and error handling, there are various other features implemented in ASP version 3.0 that facilitate the development of web applications. Examples are script-less ASP and performance-enhancing objects, achieved through the use of installable components. XML integration is another significant feature incorporated into ASP version 3.0 (Shaw, 2003).

Strengths and limitations of the ASP technology

Any computing technology has a number of strengths and weaknesses, and the ASP technology is no exception. The strengths and weaknesses of ASP are determined by the range of functions that it supports.

The first basic strength of ASP is the ability to add script commands on the server side, consisting of sequences of instructions used to issue scripting commands to the web server. This is achieved through the use of primary script commands embedded in the scripting language (Morneau & Batistick, 2000).

The second significant strength of ASP is its ability to integrate HTML tags and script commands, which helps determine how dynamic and interactive an application is. It also provides an avenue through which multiple procedures can be defined in a scripting language under ASP. Script commands can be used to output HTML text to the browser; to reverse the process, ASP has a built-in Response object (Mitchell, 2000).

Another significant strength of ASP is that web developers can use it to create scripts that are executable on the server irrespective of the scripting language. It is also worth noting that a single .asp file can contain a number of different scripting languages. This eliminates the need for the client web browser to support scripting, since all the scripts are processed by the web server. ASP can also be used to generate scripts that run on the client side. A further strength is that ASP can be integrated with HTML forms to make database access more efficient and secure through the use of database connections (Mitchell, 2000).

ASP provides a platform for the development of transactional web-based applications, which is one of its significant strengths. In addition, ASP provides a platform for the development of scalable web-based applications, where the resources required can be adjusted in accordance with the client's needs (Shaw, 2003).

There are also drawbacks associated with the use of ASP. One significant weakness is that the client has no control over, or ability to manipulate, the server-side scripting. This means that in cases where the scripts become non-responsive or stop functioning, the client has no recourse and suffers the ineffectiveness associated with the web server breakdown. Another significant weakness of ASP is that, on the server side, it only functions effectively within the Windows environment (Francis, 1998).

Conclusion

ASP was one of the inventions that played a significant role in revolutionizing the development of web-based applications. Almost every website on the internet today requires an element of automated dynamism and interaction, and it is in such scenarios that the use of ASP is of great significance. ASP therefore remains one of the most effective platforms for the development of web services.

References

Francis, B. (1998). Beginning Active Server Pages 2.0. New York: Wrox Press.

Mitchell, S. (2000). Designing Active Server Pages. New York: O'Reilly Media, Inc.

Morneau, K., & Batistick, K. (2000). Active Server Pages. New York: Course Technology.

Shaw, J. (2003). Adding Member services in ASP. Technology, 5-40.

Strahl, R. (2002). Using VFP COM Objects with Active Server Pages. West Wind Technologies, 90-122.