The Internet is a Democratic Technology

Introduction

The Internet has often been branded a democratic technology on the grounds that it allows freedom in several respects. Deborah G. Johnson, in her book Computer Ethics, presents her view that the Internet is a democratic technology by dividing her argument into three parts. By simple definition, a democratic technology is one in which the people of a country are accorded liberty and freedom in matters relating to the use of technology, as opposed to the government or other organs dictating what they may access. For the Internet to be fully accepted as democratic, the regulations governing its use would need to be reviewed to allow freedom of access and to respect the ethics of democracy.

Unmediated, many-to-many interaction

The first argument by the author seeks to bring out the prevailing difference between the Internet and other means of communication: it is not filtered and it allows many-to-many communication. This refers to a situation where a person with access to the Internet can relay information to very many people who also have access to the Internet. Unlike traditional media channels, where the information released to the public is filtered and shaped by what the government wants the people to know, the Internet is free of any such restriction on the information shared among the many. According to Johnson (2001), democracy here is evident in the fact that the Internet gives the user the freedom to share any information they wish with other users, as well as access to a great deal of information, without having to depend on specific sources such as the media.

Information is power

In the second argument, the author states that information is power. This argument builds on the first: allowing a many-to-many flow of information presumably gives the user a form of power, namely the ability to share with a large audience. There is, however, a need to distinguish useful, relevant, and accurate information, which does confer power, from misinformation, which is in no way a source of power. Information is power only if it is accurate and relevant (Johnson 2001). The user of the Internet is also said to possess power in the form of the ability to shape the attitudes of those reading the information, who can in turn relay what they have acquired to other people, so that a large group of people is won over to what the original sender stood for. This argument qualifies as democratic in the sense that information, which is power, can be passed on and accessed by many people, which implies that power is given to the many people using the Internet. This is democracy in the sense that the Internet can accord power to large numbers of people.

More power to less powerful

The third and last argument is titled more power to the less powerful. It simply holds that the Internet accords additional power to those who are less powerful. Access to information through the Internet arguably reduces the power of the most powerful. Through the Internet, people separated by geographic distance are able to trace each other and exchange information in common-interest forums. The Internet is therefore democratic because people across the globe advocating for a similar cause can do so with ease and can reach many readers.

Author's reluctance

However, Deborah G. Johnson, who put forward these three arguments, expresses her reluctance to accept them. While appreciating that the Internet has truly contributed positively by allowing large numbers of people access to vast amounts of information, Johnson states that the arguments are problematic. She holds that, in addition to the positive contributions stated in the three arguments, the Internet has also encouraged behavior that either adds no value to democracy or actively undermines it. Examples given are cases where the Internet instead further empowers the already powerful, and where it comes to dominate and bind people's lives rather than changing them positively.

Opinion

Despite the author's reservations about the Internet being democratic, the arguments supporting the benefits of the Internet in giving power to people across the globe outweigh the negatives. The Internet has done its part in allowing people access to information that is independent of any filtering, while allowing them to share that information among a large number of Internet users. This can be backed by the number of social sites that have emerged in the recent past, for instance Facebook with a membership of 350 million users, which has seen people across the globe converge in common-interest forums and share information freely. This is a clear indication of democracy supported by the use of the Internet.

Conclusion

In summary, the benefits of using Internet technology are now well understood. The negative behavior patterns supported by the use of the Internet, which demean democracy, are there for everyone to see as well. The freedom lies with individuals to decide what information they wish to take up.

Reference List

Johnson, Deborah G. (2001). Computer Ethics (3rd Edition). Upper Saddle River, NJ: Prentice Hall.

Membrane Filtration Technology

Abstract

This report analyzes a purification technology that is applied in many industrial processes. Since most fluids and soluble solids are prone to contamination even after production, it is essential to have a purification system that suits the production process. In this light, this paper looks at membrane filtration technology as a widely used filtration technique. The paper is organized into subsections covering how the technology works, alternative technologies, the effectiveness of the technology, management of the technology, and its applications.

Introduction

Many liquid products are prone to contamination due to the factors that surround their environment. In the search for an effective way to protect a liquid such as water from contamination, a filtration system appears to be the most effective means. Filtration systems are a simple and effective way to remove various contaminants. These systems are developed in different forms, including mechanical filters, neutralizing filters, oxidizing filters, and activated carbon filters. As a mechanical filtration system, membrane filtration is a technique that applies permeable barriers to separate materials. The filters used contain fibers of different shapes generated from stoneware or metals (Energy Center of Wisconsin 1). This paper therefore analyses membrane filtration technology with regard to how it works, the existing technologies it replaces or supplements, its effectiveness, the types of membrane systems, and the management of membrane systems. Furthermore, the cost and applications of membrane technology are discussed.

Description of Membrane Filtration

Membrane technology has been the most widely applied separation technology in recent years. The main reason behind its popularity is that it does not incorporate chemicals in the separation process and, furthermore, it uses little energy and a well-defined technical process. Membrane technology is a broad term for various separation processes. The procedures are similar in operation since they all apply a membrane. Membranes are suitable for producing process water from groundwater, wastewater, or surface water.

According to Smith, the main aim of the membrane filtration technique is to purify a liquid. Several applications of this technology can be realized, ranging from purifying wastewater to filtering the milk required for cheese production. Moreover, different companies that specialize in filtration processes offer varying types of systems, together with spare membranes and other materials required in the process (1). In membrane filtration, a liquid such as water is passed through a semi-permeable membrane. The effectiveness of the membrane is established by the size of its pores: it acts as a barrier to materials larger than the pores, while smaller particles pass through. Thus, the filtered fluid and the contaminated fluid are separated.

How Membrane Filtration works

Membrane filtration can be categorized into two groups: micro and ultra filtration on one side, and nano filtration and reverse osmosis (RO) on the other. In essence, when the elimination of bigger elements is required, micro and ultra filtration are normally used. These membranes achieve high productivity while the pressure differences across them are low. On the other hand, if salt is to be separated from water, reverse osmosis and nano filtration are implemented. Nano filtration and RO do not operate on the principle of pores; diffusion aids the process of separation. Compared with micro and ultra filtration, nano filtration and RO require higher pressure but generate lower productive output.

The main aim of micro and ultra filtration is physical separation. The degree of filtration is determined by the size of the pores in the membranes. In addition, these techniques are pressure-driven, and they remove contaminants from water to a lesser degree than nano filtration and reverse osmosis. Micro filtration is a cross-flow membrane process that separates particles in the range of 0.1 to 10 microns. The membranes in this process remove all bacteria. Contaminants such as viruses may still penetrate the membrane because they are smaller than the pore size. Micro filtration is applied in many operations, including cold sterilization of beverages, effluent treatment, separation of oil, and clarification of fruit juice. In order to remove viruses completely, ultrafiltration is needed. The pores of ultrafiltration membranes can remove elements of 0.001 to 0.1 microns from liquids. Thus, ultrafiltration is applied in several industries such as the dairy (milk, cheese), food (proteins), metal, and textile industries.

The main aim of nanofiltration and RO is to remove univalent and bivalent ions. Nanofiltration is effective when RO and ultrafiltration fail to meet their objectives. It uses pressure and bases its separation on molecule size. Nanofiltration is used in demineralization, desalination, and color removal. RO, on the other hand, relies on the principle of osmotic balance. Two liquids with different concentrations of dissolved solids tend toward the same concentration: when they are separated by a membrane, the fluid with the lower concentration moves through the membrane toward the fluid with the higher concentration of dissolved solids. By exerting pressure on the concentrated fluid column, this effect is reversed, so that water returns to the other side while the dissolved solids stay in the column. Using this method, salt can be separated from water. Reverse osmosis is therefore applied in water softening, process water production, and ultrapure water production (Lenntech 1).
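As a rough quantitative sketch (the van 't Hoff relation below is a standard textbook approximation, not a formula given in the cited sources), the osmotic pressure that the applied pressure must overcome can be estimated as:

    \pi = i \, c \, R \, T,
    \qquad
    \Delta P_{\text{applied}} > \pi_{\text{feed}} - \pi_{\text{permeate}}

Here i is the number of ions each dissolved formula unit produces, c the molar concentration of the solute, R the gas constant, and T the absolute temperature; reverse osmosis only proceeds while the applied pressure difference exceeds the osmotic pressure difference across the membrane.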

Membrane filtration can thus be implemented as an alternative to technologies such as evaporative heating, flocculation, adsorption, extraction, and distillation. Evaporative heating is normally used to concentrate elements and generate raw materials; it is energy intensive since a lot of heat is required for evaporation. Flocculation is a method that aggregates destabilized particles into flocs that can then be removed. Likewise, in adsorption, a solid such as activated carbon is used to remove a soluble component from water. Other technologies like extraction and distillation are prone to allowing impurities into the generated fluids. Thus, membrane filtration can be used as an alternative or a supplement to these technologies.

Effectiveness of Membrane Filtration

Every system's functionality is judged by its performance in relation to productivity, and efficiency is one measure of a system's quality. In membrane technology, two factors determine the effectiveness of the processes: productivity and selectivity. Productivity is estimated as a flux parameter, while selectivity is expressed through a retention or separation factor. Both determinants are membrane-dependent. To achieve the objective of filtration, membrane technology provides the following benefits: (a) it can operate at low temperatures, which is essential since it ensures the purification of heat-sensitive elements and is therefore useful in food production; (b) it requires little energy and therefore low energy cost, the overall energy required being less than for the other techniques discussed in the previous section; and (c) it can be expanded easily (Lenntech).
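As an illustration (these are standard definitions from the membrane literature rather than formulas quoted in the sources above), the two performance measures are commonly written as:

    J = \frac{V}{A \, \Delta t},
    \qquad
    R = 1 - \frac{C_p}{C_f}

where the flux J is the permeate volume V collected per unit membrane area A per unit time \Delta t, and the retention R compares the solute concentration in the permeate, C_p, with that in the feed, C_f; a retention close to 1 means the membrane holds back almost all of the solute.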

Membrane Systems and Cost

The selection of a membrane system depends on factors such as cost, the risks of setting up membranes, and cleaning opportunities. Applying a large surface as one flat plate of membrane can result in high costs, so membrane systems are packed densely to maximize surface area per unit volume. Compared with other techniques such as evaporation, the capital costs for membrane filtration are high, but it can lower operating costs by 65% or more and energy use by 95% (Energy Center of Wisconsin 1). Membrane systems can be grouped as tubular or plate-and-frame. Tubular systems are subdivided into tubular, hollow fiber, and capillary modules, while plate-and-frame membranes are subdivided into spiral-wound and pillow-shaped modules (Lenntech). However, membrane fouling is difficult to control during the filtration process. The rate of fouling is determined by factors such as source water quality, membrane material and type, and the design of the system. Smith suggests that particulates, scaling, and biofouling are the major inconveniences in the system. These problems affect the functionality of the membranes, and designers must make an effort to ensure that any fouling is dealt with. In this regard, the techniques widely used to keep membranes clean include forward and backward flushing and chemical cleaning.

Membrane Systems Management

Two approaches used in running membrane systems are dead-end and cross-flow filtration. The aim in either case is to achieve the highest feasible production over a long period with tolerable fouling levels. When the dead-end process is applied, all the source water entering the membrane housing is forced through the membrane. Particles larger than the pores are left behind as the water permeates. As retained material accumulates, the resistance to flow increases, so if the pressure is kept constant the flux declines, creating the need to clean the membrane. The dead-end approach is attractive because energy loss is lower than in cross-flow operation, since all the energy is spent on forcing water through the membrane. However, dead-end filtration is a discontinuous process: when cleaning is necessary, a module is temporarily taken out of service, which lowers productivity.

In cross-flow filtration, the feed water is recirculated. During recirculation the water streams parallel to the membrane. Only part of the feed water passes through the membrane; the rest leaves the system as concentrate. This process therefore has a higher energy cost: the whole flow of water must be kept under pressure, and the cross-flow velocity is kept high to limit the build-up of a fouling layer on the membrane. In essence, the cross-flow approach can lead to stable fluxes. However, routine cleaning is still required, and the method is applied to nano filtration, reverse osmosis, micro filtration, and ultra filtration, depending on the pore size of the membranes.

Applications of Membrane Technology

Membrane filtration systems are applied in many fields. The most common applications of this technology are in food and beverage processing and in industrial processes. Membrane filtration systems are being used in processing a variety of products. In processing food and beverages, the technology uses systems such as micro filtration, ultra filtration, and reverse osmosis with either stainless steel or spiral-wound membrane modules. Beverages and foods processed using this technology include vegetable juices (e.g. carrot, celery, garlic), fruit juices (e.g. strawberry, orange), and potato starch recovery. The starch and sweetener industry and the sugar industry have also incorporated membrane technology to increase productivity. In industrial applications, membrane filtration is used in the production of ethanol, de-salting, clean-up of wash water streams, pigment production, and metal production, among others (GEA Filtration).

Conclusion

Any purification technique aims at achieving a more productive output. The systems used to purify fluids depend on the type of materials involved and the process followed. Membrane filtration aims at separating contaminants from soluble solids and liquids so as to generate a more reliable product. This report has discussed the operation of membrane technology, including the types of membrane systems, the effectiveness of the technology, managing the system, and its applications. In essence, membrane technology is a cost-efficient technique in terms of energy compared with alternatives such as evaporation and distillation. Reverse osmosis, micro, nano, and ultra filtration are the various methods used in this purification process, and membrane purification processes may be run in either a cross-flow or a dead-end mode. Membrane technology is therefore commonly used in food and beverage production and in industrial applications.

References

Energy Center of Wisconsin. Membrane Filtration. 2000. Web.

GEA Filtration. Cross-flow membrane filtration systems and replacement membranes. 2009. Web.

Lenntech. Membrane Technology. 2009. Web.

Smith, S.E. What is Membrane Filtration? 2010. Web.

Nanotechnology: The Technology for the Twenty-First Century

Introduction

Nanotechnology is defined as the science of devising processes for building materials at the atomic scale. In other articles, it has been defined as the science of organizing matter on the atomic and molecular scale. When talking of nanotech, which is the other term that describes nanotechnology, we are describing structures of the size of 100 nanometers or smaller. Nanotechnology involves creating materials of these sizes. The use of nanotechnology has grown over time and is now involved in everyday fields such as medicine, molecular physics, and basic industry, among others. New materials within the nano size range are constantly being invented. Nanotechnology has been touted as the new frontier of particle physics, and many scientists are currently involved in this field (Tegart 71).

Thesis Statement

Nanotechnology is now used to come up with new devices and materials for different applications in real-world scenarios. This does not mean that nanotechnology comes without implications, and it is these implications that are used by others to discredit its use. Concerns raised involve the environmental hazards that could arise when nanotechnology is used on a wider scale, and also the level of toxicity that could be absorbed by the person operating the devices (Tegart 70).

Major Point

A nanometer is a billionth of a meter. This size is so small that a special set of tools and devices is needed in order to build other devices that operate at this level. There are two approaches used in nanotechnology. The first is the bottom-up approach, in which molecular components are used to build structures on the molecular scale; the components self-assemble by molecular recognition. The other is the top-down approach, in which larger entities are used to produce objects at the nanoscale, but without atomic-level control (Gerd 45).
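To put this scale in perspective (a standard unit conversion, not taken from the cited articles):

    1\ \text{nm} = 10^{-9}\ \text{m},
    \qquad
    100\ \text{nm} = 10^{-7}\ \text{m} = 0.1\ \mu\text{m}

so the 100-nanometer upper bound of the nanoscale is roughly a thousandth of the width of a human hair, which is on the order of 0.1 mm.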

Several fields of nanotechnology, such as nanophysics and nanomechanics, have emerged that follow one or the other of these two principles. The assembly of machines on the nanoscale has proved difficult in practice, owing to the difficulty of arranging atoms that are all of similar size and stickiness. It is also difficult to control individual molecules.

Personal Reaction

Applications in the field of nanotechnology are based on the physical properties of the materials used at the nanoscale. In the field of medicine, contrast agents and therapeutics for cancer treatment are just some of the applications of nanotechnology (Charles 55).

Nanotechnology is also used in tissue engineering, where the technology is applied in the repair of body cells that have been damaged. In the field of chemistry, nanotechnology is used in the catalysis of chemicals and in chemical filtration. In the field of energy, the technology is used for the storage and conversion of energy; while this is still in the research stage, studies have shown that nanotechnology will revolutionize our way of thinking when it comes to efficiency. A good example of this is the recycling of batteries. The application of nanotechnology is so wide that it would require a separate topic to cover, as it literally encompasses materials and processes that we use for our daily survival (Charles 53).

References

Charles, Michael. Strategic implications of nanotechnology. Newsweek. 2009: 52-55. Print.

Gerd, Bachmann. Angels on a pinhead: new research networks for nanotechnology. Times. 2009: 42-46. Print.

Tegart, Greg. Nanotechnology: the technology for the twenty-first century. Time. 2004: 70-71. Print.

The History of RFID Technology: Benefits That Users of the Technology Are Bound to Accrue on Adoption

Introduction

RFID is fast taking shape, as manifested by the recent media blitz and newspaper articles about the technology. Not many people can distinguish the different categories and benefits that the technology brings. It is the purpose of this paper to highlight the history, uses, and benefits that users of the technology are bound to accrue on adoption. Although not as widely used as its counterpart the bar code, RFID is poised to work best when incorporated with barcode technology. With an integrated database in use, users of RFID can ease business operations and save time and cost for both the manufacturer and the end consumer of the tagged products. As much as there has been a media frenzy about the technology, comprehensive research and public sensitization should be carried out for it to be truly appreciated (Burgess 265).

Background Information

The history of RFID technology goes back to the days of the Second World War, when research was conducted by Leon Theremin on using RFID-like technology to spy on other countries for the Soviet Union. Information was to be sent back to intelligence headquarters in the form of audio, so that conversations between individuals could be intercepted. This device was primarily used for listening rather than for what the technology is used for today. Earlier on, the Royal Air Force used techniques similar to those used in RFID for the identification of aircraft. Using IFF (Identification, Friend or Foe) transponders, it could be determined whether an aircraft belonged to the enemy or to one of the allies (Finkenzeller 63). A ground-breaking article by Harry Stockman in 1948 forecast that proper research on reflected power communication was needed for humankind to truly benefit from its numerous uses and applications. The first RFID technology that utilized memory was introduced in 1973 by Mario Cardullo, who is described as the father of modern RFID. His first device was activated by an external signal and was applied in the collection of government revenue in toll systems. The biggest user of RFID technology has been the Department of Defense, which is currently using over 1.5 million tags in tracking containers that are shipped outside the US.

At present, research is focusing on how to reduce the cost of the equipment used for this technology in both its software and hardware aspects. Cheaper materials are being sought for this purpose, and this is leading to better and more efficient solutions. Research is also focusing on how to transfer information from the reader to other networks within an organization at a faster rate, with greater emphasis on increasing the volume of data. As RFID becomes more popular, more and more products are being tagged, and this is creating demand for ways to deliver these bulky amounts of information from the reader to the database and on to the users who need it. Manufacturers of the technology are also developing new standards that ensure interoperability between their equipment; this will lead to a greater variety for the consumer. Standards for this technology are dictated by the EPC (Electronic Product Code) body, which is made up of firms specializing in the manufacture and marketing of RFID products (Rochel 56).

Technical Aspect

Radio frequency identification employs radio waves to transmit a signal from the tagged product to a reader located some distance away. Radio waves have the advantage of passing through obstacles. RFID technology encompasses a range of both software and hardware solutions for the purpose of identifying goods. These goods can be tracked regardless of location, the only limiting factors being the characteristics of the tag and the corresponding reader (Shepard 58). Some tags have an accompanying antenna, which helps the reader locate the tag and also enhances the read rate. In essence, the reader transmits a signal to the tag, and this signal is echoed back with information about the tag and the product attached to it. The relayed information is then analyzed using software, which also enables users to see the product's state and history in real time. Each tag has an exclusive number that distinguishes it from the rest; this number also carries information on the manufacturer of the tag. Further information that may be useful to the relevant organizations can also be stored on the tag. The reader can obtain this information wirelessly from a distance, and the information is read at a high rate.
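As an illustrative sketch only (the field layout and names below are hypothetical and do not follow any particular RFID standard), the information a reader echoes back from a tag can be pictured as a small record that the reader software decodes before handing it to the database:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class TagRead:
        """One read event reported by an RFID reader (illustrative fields only)."""
        tag_id: str          # exclusive number identifying the tag
        manufacturer: str    # portion of the identifier naming the tag maker
        reader_id: str       # which reader or antenna picked up the tag
        timestamp: datetime  # when the read occurred

    def parse_read(raw_id: str, reader_id: str) -> TagRead:
        # Hypothetical layout: the first four characters name the manufacturer,
        # the remainder is the item's serial portion.
        return TagRead(
            tag_id=raw_id,
            manufacturer=raw_id[:4],
            reader_id=reader_id,
            timestamp=datetime.now(timezone.utc),
        )

    if __name__ == "__main__":
        event = parse_read("A1B2000000000CAFE", reader_id="dock-door-3")
        print(event)

A real deployment would map such an identifier through the organization's database to recover the product's state and history, as described above.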

Components of RFID

Tag: This is a device consisting of a microchip attached to a miniaturized aerial, which beams back the signal received from the RFID reader with information about the product and its location within the organization. These tags come in all manner of shapes and sizes so that they can be enclosed in all environments (Finkenzeller 89). The preferred covering is plastic, which ensures that the microchip and antenna are not susceptible to degrading environmental elements such as heat and moisture. Active tags also have a battery as an extra component.

RFID Reader: The reader is responsible for transmitting radio wave signals to the tag and collecting the received information. This information is relayed to a database where it is stored for further analysis. Some readers have multiple antennas in order to pick out tags and enhance the speed at which information from the tags is collected.

Computer/Database: Information received from the tags is processed by software run on a computer. The computer also enables the data to be stored in databases that can be connected to other networks, enabling the product to be located on a real-time basis.

Advantages of RFID

Since radio waves are employed to locate objects, the product does not have to be within line of sight of the reader. This is a big advantage considering that some products, such as military hardware or radioactive materials, could be dangerous in close proximity. RFID tags are known to be durable, as they can withstand environmental stresses such as excessive heat, and this is one of the reasons the military has used them in harsh terrain where climatic conditions are not favorable. RFID readers can detect tags located far away; active tags can be detected by readers approximately 100 meters away, which gives the user an advantage because they do not have to be near the tag. Another advantage of the technology is that readers can read multiple tags at the same time, ensuring that auditing and tracking of products are done faster and more efficiently. Last but not least, the information stored on the tags can be altered when the need arises, so tags can be reused in another carton, saving the organization money that would otherwise have been spent purchasing new ones.

Forms of RFID

Tags are also categorized by the frequency they employ for operation. The frequencies normally used are low frequency (135 kHz), high frequency (13.56 MHz), UHF (860 MHz), and microwave frequencies. These frequency groups dictate where the tags will be applied. For example, tags using low frequencies are used for tracking humans and animals, since live animals, including humans, are susceptible to the effects of radiation; these tags are also used for asset tracking. High-frequency tags are used in conditions where interference from other environmental elements is unacceptable: radio frequencies are prone to interference from water or metals around the tag, which makes it hard for the tag to be detected by a reader. Tags employing ultra-high frequencies have the advantage of a longer read range, around three meters, and the speed at which this information is read by the reader is also fast compared with tags using other frequencies (Espejo, 35).

There are three categories of RFID tags, each displaying different characteristics. The first class is active tags, which operate with a battery that powers the tag in sending out waves to the reader. These tags are large and have a read range of approximately a hundred meters (Espejo, 42). Large amounts of data can be stored in them, as the embedded microprocessor has a large capacity; it is for these reasons that active tags are more costly than their counterparts. Another class is passive tags, which only have a microchip embedded in the tag. Unlike the active tag, the passive tag relies on the reader's signal to power it and reflects a response back to the reader. These tags are much smaller and store less data; they are much cheaper, and the read range is much shorter, i.e. less than three meters. The last class is battery-assisted passive tags, which use a battery to help send out a signal. The difference with these tags is that the transmitted signal can travel further and hence be picked up by a reader more easily. Such tags are used in rough terrain or in harsh environments where they may be hard to reach.

Tags can also be categorized by how information is stored in them. Some tags are read-only, meaning that the data stored in the tag cannot be modified; the information in such tags is mostly the unique code identifier used to identify the tag in the first place, and it is normally written when the tag is manufactured (Espejo, 40). Some tags are categorized as write-once, meaning that data can be written to them only a single time, and lastly there are read/write tags, which can be read and have their data modified as many times as the user would like. Such tags are more costly than their counterparts.

Challenges Faced by RFID technology

As much as RFID technology has its benefits, it also has its challenges, which are being addressed as research into solving them takes shape. The first set of challenges is on the technological front, where large amounts of data have to be made available to all people within an organization at precisely the same time. This places an enormous burden on the network resources of a company, and hence a comprehensive solution in terms of readers and the backbone database has to be in place for all this data to be available to users. As the number of tagged products increases, so does the need to expand the network serving this technology. Another set of challenges concerns security. In highly complex environments, where a number of parties are interested in the same product, authorization schemes have to be in place to make clear who owns which parts of the system. These issues normally arise in supply chain environments where many companies and individuals are involved. There is also an issue with how the devices are configured, since configurations differ from one manufacturer to another (Thornton, 108).

Solutions to Challenges

As stated earlier, the issues surrounding RFID are being dealt with by numerous research and development initiatives. These initiatives are focused on solving the issues surrounding this technology to make it more affordable to users and more adaptable for other applications. One solution to the problem of huge data overloads is for companies to adopt a network solution that scales easily with the demands of the firm (Finkenzeller 23). This ensures that the technology grows at the same pace as demand, rather than network resources being increased blindly. A solution to the challenge of configuration complexity is for manufacturers to come up with a clear set of standards for how the technology is best implemented. New standards are continually being incorporated within the EPC framework in order to involve as many stakeholders as possible and to make the technology affordable and reachable for all.

Uses of RFID technology

RFID technology is used in numerous applications to increase revenue and reduce the time in which business operations are carried out. One example is supply chain management, including logistics. With RFID technology, these processes have been automated in a manner that ensures less human involvement and more accuracy, which means less time is taken for a product to go from the manufacturer to the retailer (Thornton, 108). A good case is how Wal-Mart increased revenue by demanding that all suppliers start using RFID technology. In the manufacturing sector, where automation of manufacturing processes is required, such as the labeling of specific products and control of products on a manufacturing line, RFID tags and readers can be attached to the production line to identify which products are being produced. Another field in which RFID plays a role is the logistics and distribution of items: goods can be easily tracked from the factory to the end consumer, providing better security for all parties involved in the supply chain. Speed within the supply chain has also been enhanced, making the whole process of transporting goods much easier.

Conclusion

These are just some of the applications we have discussed, but RFID technology is growing fast, and in a few years' time the technology will be part and parcel of our lives as its adoption spreads from the military to the mainstream population.

Works Cited

Burgess, Stephen. Effective Web Presence Solutions for Small Businesses: Strategies for Successful Implementation. Sydney: Idea Group Inc (IGI), 2008.

Espejo, Roman. RFID Technology. New York, N.Y: Greenhaven Press, 2009.

Finkenzeller, Klaus. RFID handbook: fundamentals and applications in contactless smart cards and identification. London: John Wiley and Sons, 2003.

Rochel, Roman. RFID Technology and Impacts on Supply Chain Management Systems. Sydney: VDM Verlag, 2008.

Shepard, Steven. RFID: radio frequency identification. New York, N.Y: McGraw-Hill Professional, 2005.

Thornton, Frank. RFID security. California: Syngress, 2006.

The Technology Today and in Future

In thinking about this subject I considered much of the technology we already have and some that is still on the horizon. However, everything I came up with that seemed beneficial and started out great eventually became a dystopia. It seems that too much of anything is a disaster. We worry that we cannot communicate among ourselves and dream of things such as brainwave communicators. Yet while this would allow us to communicate mind to mind, it would become quite stressful, as nobody would ever have any privacy.

Finally, I decided upon a few simple things. Everything would be wireless or would have hidden wires underground. The cell phone would be a true communications hub, connecting us to all we want or need through a central hub. The phones would be very small, and they would look like jewelry, with a separate screen to use for applications. We could use video or not as we choose. Phones would connect to all the public information in the world and filter it automatically from our plain-English (or any language) search requests.

Our phones will carry biometric identification, critical medical information, and all the other things we like to carry, even money. They will connect us anywhere and keep all our information safe. Without our voiceprint they will open only to emergency information. Nobody can steal your data or your phone, and the phones do not break.

Convergence is the key. Everything electronic will connect easily to everything else electronic. A small tube unrolls from the phone to create a reading page for books, newspapers, and magazines, or to play our favorite TV shows or movies. The screen is hand-held or stands up by itself. The same screen becomes a computer display, an art tablet, or a game table when needed. The screen can wrap around the arm or wrist for easy carrying, and it looks great. Even the style and color of these phones change with your outfit. The only real problem with these phones is that they never break down and there are no dead zones.

All of these phones are connected to local central servers which hold all our data for use, encrypted and protected by the biometric identification in the phone. Theft has become a minor nuisance thanks to surveillance systems and the cell phone economy. Nobody carries money, as they pay for everything with their phone. Money has become totally electronic. You can even set your phone to control your spending.

The population problem has been solved, so living on earth is very comfortable. Transporters move goods around the world almost instantaneously using very little energy. Terminals are located conveniently for public use, and there is no traffic in the city except for small transport bands running through the buildings. Everyone leaves the car at the edge of town; when you reach your terminal you tell your phone to get the car, and it drives itself to pick you up. People only work at jobs they like; robots do the rest, and the workweek is whatever length you like for the money you need. Not all problems have been solved, but people get to them fast and cooperate using their phones.

People with no imagination or hobbies are unhappy, so the phone calms them with subsonic music. Everyone else seems to have less stress and more joy. Man has spread out into the galaxy, but not so far that the communications lag is bad. People travel freely among the inhabited planets and satellites, and families stay in touch. People spend more time in leisure activities and live longer, healthier lives. When they do get bored they can opt for a stint on a developing planet or satellite. There the work is harder, though there are some robots; because it is hard work, few people stay full time. However, it is not a problem if anyone wants or needs a solid work framework.

Television and radio are also delivered through the local cell phone networks. Small producers can broadcast for a minimal investment in studio and field recording and video equipment. It seems that many people have discovered they have something to share, as they have gained more leisure time to learn or perfect some skill they favor for a hobby or a second career.

Big business still produces the big manufactured items, but entrepreneurs have sprung up everywhere as small producers make and sell niche products, much as was done when civilization began. However, with all the wonderfully modern tools we have now, production is high enough for people to make a viable business, and the quality is as good as, or better than, that of the large manufacturers. So anything that can be made in a garage workshop is being created and sold individually, and civilization is well into an art and artisan renaissance.

Another important innovation is in health. Fast food restaurants have changed their menus to include a lot of delicious healthy food. People became aware of the dangers of being overweight, and tiny, fee-based exercise stations are everywhere. A corner of a larger building might be a ten-person gym. You enter using your cell phone and exchange your belongings for a locker key card and workout clothes. When you are finished you open your locker, retrieve your belongings, and put your gym clothes into a bin for washing, or you can vacuum-wrap them for use all week. Your cell phone is charged before you leave. These small gym stations are full all day long, and there is a tiny waiting room where you can sit and have a beverage or healthy snack while you wait. As a result of these and other innovations in food production and storage, the average weight in a midsized city is within five pounds of ideal. These little stations make it easy to stop for a ten-minute stint on the treadmill, weights, elliptical, or in a personal swim tank.

There are also pet stations at local grocers, which are now more numerous and smaller, where you can leave your pet for a time while you shop. If you spend a certain amount in the store, the pet stop is free. Young college students care for the pets and clean the facility while they study using local screens; they earn some money toward their schooling and still manage to study. Shopping has moved either to large indoor malls or to clusters of small shops with these kinds of services, plus small snack shops and relaxation stations. They all work on the high-volume, low-cost principle and do a thriving business.

So centralized communication and transfer of goods has evolved into a network of small shops, food suppliers, and entertainment around each major transfer point, generally clustered around some green space with places for people to stop and rest. Even very large cities are networks of reasonably close clusters of small shops where people can purchase goods and services just as well, and in as large a variety, as in large shopping malls or supermarkets. A culture of local people getting together in these little village squares has added to the relaxed daily routines of local inhabitants. Shops operate on a small-footprint, high-volume basis, and all leftovers from markets and restaurants are sold to local kitchens where anyone can buy a whole meal for $5. It has become a world of interconnected small villages, and people are happier than ever before.

A Plan for Change: Ensuring Our Technology Investment is Protected and Bears Fruit

Introduction

As the computing world evolves, organizations need to align themselves with the changes. The more an organization adopts new and appropriate technology, the more competitive advantage it will have over the other players in the same sector. Adopting new technology takes a deliberate managerial decision to put it in place effectively; it happens through planned change. Human nature is to resist change, and thus to adopt new technology the management needs a well-constituted change platform (Smith, 2006).

Why Change to I.T. is Important

The business world is a dynamic environment that is affected by changes in its surroundings, including technological developments in the sector as well as competitors' levels of technological adoption. There are various advantages that come with adopting the right technology for the right job; they include efficiency in terms of cost and the time required to do business. Efficiency brings its own advantages to the business, ranging from customer satisfaction to the customer loyalty it produces. Computers are reliable management tools that managers can use to control the businesses they are managing, and there are programs, such as stock control packages, that can assist top management in deciding when to make which decision. The time to adopt a new technology is as soon as possible after the invention or improvement of the current one: an aggressive company is one that adopts a better way of doing things (technology) early enough to gain from it before its competitors adopt it (Goman, 2000).

Cycle to the Processes of Organization Change

Before a company adopts a new system of doing things, there is a need to appreciate that the new way can only succeed if the employees are positive about the change. This calls for a gradual process of implementing the change. The organizational culture is one of the factors that can affect the change, negatively or positively, and the change agents should understand this well before implementing the change program. Generally, a change follows the following procedure:

Problem identification and analysis

This is where the agents of change realize that there is some change that has to be made. After this comes the search for a probable solution to the problem, which must be aligned with the mission and vision of the organization. In our case, it is the adoption of new I.T. that is expected to bring positive change to the organization. The staff who will be affected, as well as the entire team, should be given a detailed analysis of what the organization wants to do. At this stage the management brainstorms the effects of the program with the employees and lets the employees learn how they will be affected. If any training is needed, it is done at this stage.

Pilot study

There is never a guarantee that the new system is going to be more effective than the old system, so the new system should be run as a pilot study alongside the old one. This also gives the employees time to gain hands-on experience with the new system; they learn more about it and may even improve on it. If the program is seen to be better, the final stage follows.

Full adoption

At this stage, everything is aligned to follow the new system and the old one is switched off. All the employees are supposed to adopt the system. Improvement of the system is the major activity that follows this (Anon, 2004).

Technology and Human Psychology

Human beings always look for better ways of doing things, and the development of technology has so far offered them solutions; however, they resist change, and change should therefore be administered in a way that does not seem to remove people from their comfort zones. Administering it calls for a deliberate action plan. To do this effectively, there is a need to understand the attitude that each individual has towards change: is he or she against change, or an agent of change? In organizations, the organizational culture plays a major role in determining whether the change will be embraced or resisted by the employees, and the change agent must understand the effect that attitudes and organizational culture will have on the change program. There are also opinion leaders in the organization who can assist in persuading the employees to adopt the change. Past experiences that the employees have had are another factor affecting the success of change; the effect that earlier changes have had on the lives of the employees should be understood. For example, if a past adoption led to the termination of some employees, implementing change in such an organization can be far more difficult than in one where change made work easier for the employees (Penrod, 2007).

The I.T. Manager and Change

One of the most significant changes happening in organizations today is change in information and telecommunication technology, driven by continuous improvement and invention. The I.T. manager is the agent of change as far as I.T. is concerned and thus should be the leader and pioneer of the change. Managing change is a process that starts with problem identification and continues all the way to monitoring the change. There are also slow learners who will need to be managed, and this is where the I.T. manager comes in. Other than ensuring that everything goes as planned, the I.T. manager gives technological and emotional support to the entire team. They should have prior knowledge of the new system, or fully understand it, so that they are not seen as strangers to the very program they are leading. As the program unfolds, there are areas that need to be improved to make the system more effective for the specific business, and it is the I.T. manager who has the mandate to ensure that these improvements are made at the right time with the expertise required. The I.T. manager starts the change process at problem identification and sees the program through adoption as well as its improvement (Gauthier & Sifonis, 1990).

Conclusion

Change is inevitable; however, it is one of the things that needs to be planned by the concerned change agents, since human beings resist change, but when well developed and implemented it is adopted freely. In I.T. development and change, the I.T. manager should be the change agent.

Reference List

Anon. (2004). The Roles of Leadership: The Role of Leading Change in the Organization. Web.

Gauthier, M., & Sifonis, G. (1990). Managing Information Technology Investments in the 1990s. American Society for Information Science. Bulletin of the American Society for Information Science, 16(5), 16.

Goman, C. K., (2000). The Biggest Mistakes in Managing Change. Web.

Penrod, (2007). Coping With Poorly Planned Change. Web.

Smith, L. (2006). Technological Change. Vital Speeches of the Day, 72(11), 326-328.

Information Technology Infrastructure

Open System Interconnect model (OSI)

When examining a complex system, it is always helpful to break it into small parts that can be understood or easily managed. This is what the International Organization for Standardization did: it broke the communication process into small manageable units or layers, where each layer groups similar functions that provide services to the layer above and request services from the layer below (ITU-T, 2003). OSI is an acronym for the Open System Interconnect model, developed in the 1980s by the International Organization for Standardization (ISO). The OSI model consists of seven layers, namely application, presentation, session, transport, network, data link, and finally the physical layer. Anything to be communicated, such as a message, starts at the application layer, the top layer, and moves down the OSI layers to the bottom layer, which is the physical layer. As the message moves through these layers, layer-specific information called headers is added. When the message reaches the receiving end, the headers are stripped or removed as the message travels from the bottom to the top layer. So the sending end encapsulates the message, and the receiving end performs de-encapsulation. This is the function of the OSI model (Lorentz, 2005).

Now we are going to discuss each of these layers and state what they do during the processes of encapsulation and de-encapsulation (ITU-T, 2003). The topmost layer, the application layer, provides network services to end-user applications; these services include file access, printing, mail transfer, and word processing. The next layer is the presentation layer, which determines how data is presented to the user and provides services such as encryption, decryption, compression, and decompression of the data. The next layer is the session layer (ITU-T, 2003). It is involved in establishing and ending the connection between the communicating parties; for instance, two communicating mobile phones use the session layer to establish a connection. The fourth layer is the transport layer and, as its name signifies, it is responsible for the reliable delivery of data from the sending device to the receiving device; it is also involved in error detection and correction. The network layer is number three in the OSI reference model and is involved in determining the most reliable route for a packet to follow until it reaches its destination. The second layer is the data link layer, which is divided into the media access control and logical link control sublayers. The main purpose of this layer is to provide a means by which the message being sent can access the media; it also helps identify the MAC address of the sending device. The first, bottom-most layer is the physical layer. Its function is mainly to receive and send raw bits, that is, 0s and 1s (Lorentz, 2005), because computers only understand binary, though they convert it into a form that human beings can understand.
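A minimal Python sketch of the encapsulation and de-encapsulation idea described above (the layer names follow the OSI model, but the header strings are invented purely for illustration):

    # Each layer adds its own header on the way down the stack,
    # and the receiving side strips the headers in the opposite order.
    LAYERS = ["application", "presentation", "session",
              "transport", "network", "data link", "physical"]

    def encapsulate(message: str) -> str:
        frame = message
        for layer in LAYERS:                    # top layer first
            frame = f"[{layer}-header]{frame}"  # prepend this layer's header
        return frame

    def de_encapsulate(frame: str) -> str:
        for layer in reversed(LAYERS):          # bottom layer first
            header = f"[{layer}-header]"
            assert frame.startswith(header), f"missing {layer} header"
            frame = frame[len(header):]         # strip this layer's header
        return frame

    if __name__ == "__main__":
        on_the_wire = encapsulate("Hello")      # sender adds headers top-down
        print(on_the_wire)
        print(de_encapsulate(on_the_wire))      # receiver removes them bottom-up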

Communication Protocols

Communication protocols can be defined as the rules that communicating devices such as computers or mobile phones follow in order to communicate successfully and understand the partner on the other side (Holzmann, 2001). Since computers have no means of learning these rules by themselves, network programmers face the challenge of developing these protocols. For two or more systems to accomplish a given mission, they must exchange a controlled sequence of messages, and the rules governing these exchanges are the protocols. Computers also use control structures in every system to coordinate the exchange of data between them. Since timing is important in protocol execution, the systems maintain timers, because messages are required to arrive within certain time intervals (Holzmann, 2001). The main functions of protocols include:

  • Data addressing
  • Deciding how data is sent
  • Compression technique application
  • Error identification
  • Deciding how sent and received data is acknowledged

The following are the major communication protocols:

Transmission Control Protocol/Internet Protocol (TCP/IP) represents a set of public standards that specify how packets of information are exchanged between devices over one or more networks (Holzmann, 2001). TCP/IP consists of four layers. The application layer is the starting point of any communication session. At this layer we have protocols such as the Hypertext Transfer Protocol (HTTP), which governs how files such as sound, text, graphics, and video are exchanged over the Internet or the World Wide Web; Telnet, which allows access to a remote host; and the File Transfer Protocol (FTP), which allows the transfer of files over the Internet. The next layer of TCP/IP is the transport layer. At this layer we have the Transmission Control Protocol (TCP), the primary Internet protocol for reliable transmission or delivery of data (Holzmann, 2001); it includes services for end-to-end connection, error detection, and recovery, and many applications like email depend on this protocol. Other protocols include the Routing Information Protocol (RIP), interior gateway protocols (IGPs), and the Internet Protocol (IP), which provides services for uniquely identifying a computer on the Internet using IP addresses.
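As a small, hedged example of an application relying on TCP for reliable delivery (the host name example.com and the request string are placeholders, not taken from the sources), Python's standard socket library can open a TCP connection and carry an HTTP request over it:

    import socket

    HOST = "example.com"   # placeholder host
    PORT = 80              # HTTP conventionally runs on TCP port 80

    # SOCK_STREAM (TCP) gives the application a reliable, ordered byte stream;
    # IP addressing and routing are handled by the layers below.
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        conn.sendall(request.encode("ascii"))
        reply = b""
        while chunk := conn.recv(4096):   # read until the server closes the stream
            reply += chunk

    print(reply.decode("latin-1", errors="replace")[:200])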

Industrial Ethernet

By definition, industrial Ethernet is the use of Ethernet as the medium at the data link layer of the OSI model. The data link layer is the second layer of the OSI reference model and defines how a message being sent accesses the media (Holzmann, 2001). In classic implementations, a bus topology defines both the logical and physical appearance of the network. Industrial Ethernet is considered the fastest-growing industrial network. Ethernet cables come in both category 5, known to most people as Cat5, and category 6, known as Cat6. Industrial Ethernet has evolved: it used to transmit at 10 Mbps, then, due to improvements in technology, it started transmitting at 100 Mbps, and finally we have gigabit Ethernet transmitting at 1000 Mbps (Lorentz, 2005).

According to Steve Jones, there are several ways in which Ethernet affects the overall operation of the network, including increased speed in data transmission. Speed has increased from below 10 kbps with RS-232 to 1,000 Mbps, or one gigabit per second, and it is expected to go beyond this capacity in the future, since there is a need for media that can carry a lot of bandwidth and transmit data at high speed; availability is a key factor in any network. The other positive effect on the overall operation of the network is low-cost redundancy (Lorentz, 2005). Jones attributes this to Ethernet's active infrastructure, unlike typical device- or control-level networks, which generally have a passive infrastructure that limits the number of devices that can be connected and the way they can be connected. Redundancy is made possible by the Ethernet switch, which builds redundancy into the industrial Ethernet network (ITU-T, 2003); this could not be achieved with standard fieldbus networks. Cost-effective networks can also be designed to scale effectively in the future, because industrial Ethernet can accommodate a large number of point-to-point workstations or nodes. The following are the advantages of using industrial Ethernet.

  • It is possible to use standard devices such as routers, hubs, switches, bridges, access points, and cables.
  • One can use industrial Ethernet even when systems are running on different operating systems or different hardware.
  • It allows peer-to-peer communication to coexist when using the TCP protocol.
  • You can create several nodes on a link.
  • Increased transmission distance.

References

Holzmann, J. (2001) Design and Validation of Computer Protocols. New York: Prentice Hall.

ITU-T Recommendation Q.1400 (2003) Architecture Framework for the Development of Signalling and OA&M Protocols Using OSI Concepts. New York, pp. 4, 7.

Lorentz, L. (2005) IAONA Handbook: Industrial Ethernet. Magdeburg: Industrial Automation Open Networking Alliance.

The Definition of Ethernet Technology

Ethernet technology defines the low-level specification of data communication protocols as well as more technical details, and any person who wishes to make products such as network cards should be conversant with these specifications (Mogul & Kent, 40). The technology is implemented at the physical and data link layers of the OSI model. The very first Ethernet supported 10 megabits per second. As the technology improved, another Ethernet came along that supports 100 Mbps, also referred to as Fast Ethernet, and the latest supports 1,000 Mbps and is called Gigabit Ethernet (Mogul & Kent, 45). Ethernet is implemented using a bus topology, in which all the computers or devices in a network share the same media or cable. A device that wants to send a signal incorporates the MAC address of the destination device into the frame, and the other devices compare their own MAC addresses with the one in the frame to determine whether the frame was meant for them. The access technique employed here is called Carrier Sense Multiple Access with Collision Detection (CSMA/CD): a computer that has a message to broadcast first listens to find out whether the media is free or another transmission is going on, and it broadcasts only if the media is free. When two computers listen at almost the same time, both sense that the media is free and broadcast simultaneously; the result is a collision, which causes both transmissions to fail.
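
The toy model below illustrates this listen-before-send behaviour: stations transmit only on an idle medium, and if more than one ready station senses an idle medium at the same instant, their frames collide and each backs off for a random number of slot times. The station names and the backoff window are invented for illustration.

    import random

    def attempt_transmission(medium_busy, ready_stations):
        # Carrier sense: stations transmit only on an idle medium; simultaneous senders collide.
        if medium_busy:
            print("Medium busy - all stations defer")
        elif len(ready_stations) > 1:
            print("Collision detected between", ", ".join(ready_stations))
            for station in ready_stations:
                backoff = random.randint(0, 7)      # simplified backoff window of 0-7 slot times
                print(f"  {station} backs off for {backoff} slot times")
        elif ready_stations:
            print(ready_stations[0], "transmits successfully")

    attempt_transmission(medium_busy=False, ready_stations=["Host A", "Host B"])
    attempt_transmission(medium_busy=False, ready_stations=["Host C"])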

The other technology is called token ring, a reliable network architecture based on the token-passing access control method. Its standards are defined in IEEE 802.5, and it is an example of a network whose physical topology differs from its logical topology (Mogul & Kent, 46). The topology used is referred to as a star topology because of its physical appearance, but inside the component where the devices are connected, the wiring forms a circular data path, creating a logical ring. The access technique used here is token passing: a token circulates around the ring, and only the device possessing the token has the right to transmit a signal or send a message. With this method, collisions cannot occur, because two devices cannot send at the same time.
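
A minimal token-passing sketch is shown below: the token moves from station to station around the logical ring, and only the station currently holding it may transmit, which is why two stations can never send at once. The station names and queued frames are made up for illustration.

    from itertools import cycle

    # Frames waiting at each station on the ring (illustrative data only).
    queues = {"A": ["frame for C"], "B": [], "C": ["frame for A"], "D": []}

    token_holder = cycle(queues)              # the token circulates A -> B -> C -> D -> A ...
    for _ in range(8):                        # two full trips around the ring
        station = next(token_holder)
        if queues[station]:
            print(f"Station {station} holds the token and sends: {queues[station].pop(0)}")
        else:
            print(f"Station {station} has nothing to send; the token passes on")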

FDDI, which stands for Fiber Distributed Data Interface, runs on fiber optics at 100 megabits per second, as its name implies. It therefore combines high performance with the capabilities of token ring, but it uses a dual ring as opposed to a single ring (Mogul & Kent, 55). These rings are the primary and the secondary: traffic normally flows on the primary ring and moves to the secondary ring only if the primary fails, which makes the technology robust. Advantages of this technology include high-speed transmission, redundancy, and fault tolerance (Mogul & Kent, 45). Fiber optic cable is not affected by EMI and noise and can carry data for greater distances between repeaters than Ethernet and traditional token ring.
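
The fragment below sketches the dual-ring idea with an invented Ring abstraction: frames travel on the primary ring while it is healthy and switch to the secondary ring only when the primary fails.

    class Ring:
        def __init__(self, name, healthy=True):
            self.name = name
            self.healthy = healthy

    def forward(frame, primary, secondary):
        ring = primary if primary.healthy else secondary   # fail over only if the primary is down
        print(f"Forwarding {frame!r} on the {ring.name}")

    primary, secondary = Ring("primary ring"), Ring("secondary ring")
    forward("data frame", primary, secondary)
    primary.healthy = False                                # simulate a break in the primary fibre
    forward("data frame", primary, secondary)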

As an IT manager, I would advise an organization to upgrade from Ethernet because the other two technologies are not vulnerable in the same way: token ring does not suffer collisions, while FDDI is fault-tolerant and unaffected by EMI and noise.

Works Cited

Mogul, Jeffrey, and Christopher Kent. Measured Capacity of an Ethernet: Myths and Reality. ACM SIGCOMM Computer Communication Review: Oxford, 2008.

Information Technology Project Management Standards

A standard can be defined as a requirement or developed norm that guides the undertaking of a given task. Standards exist as formal documents that define given criteria, processes, methods, or practices in technology. The various standards that exist in project management can be referred to as performance-based competency standards. These standards guide people in their job expectations; in other words, they help workers to identify, know, and understand the parts of their occupation that are required of them, so that they can strengthen their basic roles in institutions, companies, or organizations (CISC, 1997). It is also through these standards that projects are planned, implemented, and evaluated. They address the various requirements and ensure that tasks are carried out effectively to achieve the expected outcomes. Project management involves managing the scope to be covered, the time to be taken in completing the project, and the total cost of completing the entire project. In this sense, project management can be defined as the process through which resources are planned for, organized, and managed to allow the achievement of set goals and objectives during the undertaking of a project.

The most important thing to consider about the standards of project management based on information technology is the fact that they have been specifically designed for assessment only. In such a case, assessment has a developmental effect, because only well-versed, qualified, and registered assessors do the job. The principles or standards set by the Australian Institute of Project Management (AIPM) are based on the Project Management Body of Knowledge (PMBOK), which covers integration, cost, time, scope, communications, human resource, quality, procurement, and risk management (AIPM, 1996). This paper looks into and compares two such standards: the Australian National Competency Standards for Project Management and the UK National Vocational Qualifications (NVQs).

Competency units

Quality performance standards for the two chosen frameworks have been developed according to units described as units of competence. For both standards, assessment is based on evidence of competence in a particular field against the set performance criteria, at levels selected by the assessor carrying out the assessment. The Australian version has eight basic units of competence, compared with five in the UK version. The Australian National Competency Standards for Project Management are staged at the fourth, fifth, and sixth levels (OSCEng, 1996). The various similarities between the Australian style and the British style are summarized in the paragraphs below.

The competence units describe the expected outcomes of the personnel involved in project management.

This expectation is dependent on a particular aspect of the job; in any area of employment, such a unit can serve as a complete function in its own right (OSCEng, 1999).

Australian standards have been making tremendous contributions to the management of time during project management. They play a great role in the development of effective and feasible project schedules and the application of the necessary skills required to enhance good and result-oriented management of these schedules.

Competency elements

The competence elements describe how each person demonstrates competence and what each of them is expected to showcase during a certain project and at a particular phase of that project. They describe the various tasks to be undertaken by each person as well as the achievements to be made. Generally, they define the specific expectations, roles, and responsibilities of a worker involved in carrying out a given project.

Performance criteria

The performance criteria describe all the elements involved in a unit. The criteria are used as a measure of the achieved outcome, so that the performance in every category is compared against the desired level of competency. Assessment at this stage is usually evidence-based and is specific to whichever part of the competence is being tested (OSCEng, 1997).

Range indicators

Range indicators are the instances and also the specific areas or situations in which all the competence elements are supposed to be placed or applied. They are measures used to assess the progress of project undertakings in terms of whether the expected outcomes, goals, or objectives have been achieved according to the set standards.

Underpinning knowledge and understanding

The set standards mostly cover performance and its various aspects. In addition, the underpinning or strengthening knowledge, as well as the required depth of understanding, is covered in such a way that the performance can be properly demonstrated (AIPM, 1996).

Performance evidence guides indicate the degree and type of evidence that the particular enterprise or industry accepts as demonstrating competence in whichever unit is being assessed. The evidence required may include a demonstrated grasp of the underpinning knowledge (MCI, 1997).

Self-assessment scores: each element is rated out of 10 for knowledge and out of 10 for experience.

1 Technical competence         2 Behavioral competence        3 Contextual competence
Elem.   Knowl.  Exper.         Elem.   Knowl.  Exper.         Elem.   Knowl.  Exper.
1.01      6       5            2.01      4       4            3.01      5       5
1.02      5       5            2.02      5       5            3.02      3       3
1.03      5       4            2.03      5       4            3.03      4       3
1.04      5       4            2.04      5       4            3.04      5       3
1.05      4       4            2.05      6       4            3.05      5       4
1.06      6       4            2.06      5       5            3.06      5       4
1.07      6       6            2.07      4       4            3.07      4       3
1.08      5       4            2.08      5       4            3.08      4       3
1.09      5       4            2.09      5       4            3.09      4       4
1.10      6       4            2.10      5       4            3.10      4       3
1.11      6       6            2.11      5       4            3.11      3       3
1.12      5       4            2.12      5       4
1.13      5       4            2.13      5       4
1.14      4       4            2.14      4       4
1.15      5       4            2.15      5       4
1.16      6       4
1.17      5       4
1.18      6       5
1.19      6       5
1.20      6       5
Avg.     5.4     4.5           Avg.     4.9     4.1           Avg.     4.2     3.5

(Appendix 3: Self-assessment sheet, ICB - IPMA Competence Baseline, Version 3.0).


References

AIPM (Sponsor) (1996) National Competency Standards for Project Management. Australia: Australian Institute of Project Management.

OSCEng (1996) OSCEng Level 4. NVQ/SVQ in Project Controls. London: Occupational Standards Council for Engineering.

OSCEng (1999) OSCEng Level 3. NVQ/SVQ in Project Controls. London: Occupational Standards Council for Engineering.

OSCEng (1997) OSCEng Levels 4 and 5: NVQ/SVQ in (generic) Project Management. London: Occupational Standards Council for Engineering.

CISC (1997) Raising standards: Construction Project Management: NVQ/SVQ Level 5. London: CISC (The Construction Industry Standing Conference).

MCI (1997) Manage Projects: Management Standards - Key Role. London: Management Charter Initiative.


Technology Changes: the New Version of Microsoft Windows 7

Introduction

Given the nature of how technology changes, it is wise to invest in upgrading a system to a newer system in the market to increase the efficiency of office work. This proposal aims at encouraging users of the old Microsoft system (Windows XP) to upgrade to the new version of Microsoft Windows 7. The new system has better features and protection that are not offered by the old version of Microsoft operating systems XP SP1, SP2, and SP3 or even Vista (Ricciuti 67).

Windows 7 has been developed by Microsoft to further enhance and add value to the existing operating system, Windows XP, and it is generally used on desktops, laptops, and media center PCs. The operating system was released to manufacturing on July 22, 2009 (Ricciuti 24). Its size is slightly above 2.0 GB, and it is available on the Microsoft website for download and licensing.

In comparison with its predecessor, Windows XP, Windows 7 has several helpful additional features. They include an updated graphical user interface and a visual style known as Windows Aero. It also includes specifically designed and developed multimedia tools, including the built-in Internet Explorer 8 (IE8) and Windows DVD Maker, along with reworked networking, audio, print, and display sub-systems. In general, Windows 7 aims at improving communication over both local and wide area networks (LAN and WAN) using peer-to-peer technology. It is also important to mention that the operating system has features that improve its security, unlike Windows XP, which has been widely criticized because of its vulnerability to malware, viruses, and buffer overflows (Durham 28).

Although these improved features are impressive, they are what make Windows 7 less compatible with some existing computer hardware: the hardware requirements are high and meeting them can be expensive. This aspect was the major downfall of the earlier call to upgrade, as many people did not upgrade their systems but preferred Windows XP and versions of Linux over upgrading to Vista. However, despite all the criticism, usage has increased, especially because of the system's ability to access the Internet at high speed and its well-developed end-user graphical interface (Ricciuti 45).

Background

Upgrading operating systems from XP to Windows 7 will not only increase the efficiency of office work but also improve security, because of the enhanced security systems found in this Microsoft version. There are other features of this operating system that take computing to the next level. Despite the success of both Windows NT and Windows XP, there were serious design flaws in the level of security offered and in the engineering architecture of both systems. Security flaws manifested themselves through persistent and destructive malware. Microsoft's tribulations with operating system design extended well past security: Microsoft had by this time recognized that although it was excellent at designing a full NT-based operating system with a general GUI, it was incapable of creating an operating system without one. Evidence of this dependence on a graphical user interface can be seen throughout older versions such as Windows 2000 and Windows XP (Durham 23).

Proposal

This proposal on the need to upgrade the current Windows XP to Windows 7 is brought about by the need to increase the efficiency and speed of computing. The features that the new version of the Windows operating system brings to the table are the reason to take advantage of this new version. There is a better Action Center for notifications that reduces interruptions, a problem recorder for capturing past PC problems, and a new taskbar that provides easy shortcuts to the most used programs on the PC. Windows 7 also has a better kernel design, which has been improved to take advantage of modern computer hardware.

Benefits of the project

Many benefits come with acting on this proposal. The most important are increased computing speed and efficiency. The speed of computing is heightened mainly by the decrease in the system's restrictions; the system provides an action center and a notification system that allow all the relevant messages to be delivered at the right time and place without interfering with the computing in progress, as previous versions did. One of the biggest advantages of Windows 7 is its ability to recognize devices and install their drivers automatically, without the need to search for the plugged-in devices manually (Durham 90).

Procedure

The most important factor to consider when upgrading to the new version of Windows is the need to create a backup of the files held on the older version. Using Windows Easy Transfer to move files to a separate disk is appropriate. Once the files have been transferred, it is recommended that the new system be installed by booting the computer from the disk containing the operating system. The instructional manual provides all the steps necessary in the installation process.

Expected Results

After the installation is completed, the system will add the features automatically. The plug-in devices that were connected to the computer will be detected automatically and their drivers installed. The search, taskbar, notification area, and Action Center will be available and easily accessible.

Feasibility

The feasibility of the project depends entirely on the existing specifications of the computer systems in place. If the features of the current systems meet the required minimum specifications, the new version of Windows will install without any problems.
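
As a rough illustration of this check, the sketch below compares a machine's specifications against Microsoft's published minimums for 32-bit Windows 7 (a 1 GHz processor, 1 GB of RAM, and 16 GB of free disk space). The office_pc values are placeholder inventory figures that IT staff would fill in for each machine.

    # Published minimum requirements for 32-bit Windows 7.
    WINDOWS_7_MINIMUM = {"cpu_ghz": 1.0, "ram_gb": 1.0, "free_disk_gb": 16.0}

    def meets_minimum(machine):
        # True only if every measured value meets or exceeds the Windows 7 minimum.
        return all(machine[key] >= value for key, value in WINDOWS_7_MINIMUM.items())

    office_pc = {"cpu_ghz": 2.4, "ram_gb": 2.0, "free_disk_gb": 40.0}   # placeholder inventory entry
    print("Upgrade feasible" if meets_minimum(office_pc) else "Hardware upgrade needed first")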

Schedule

The project will take at least two to three hours for each system, provided the easy-transfer process is in place and the system meets the required minimum specifications. The overall completion time also depends on the number of copies of Windows 7 available.

Conclusion

The expected benefits of the project will lead to better working conditions, as the working environment will be much freer from restrictive aspects such as computer viruses and security vulnerabilities. This is especially critical for a business organization. A major advantage of Windows 7 is that it is backward compatible and does not require extensive retraining. I am sure that users will enjoy the experience of the new Windows program, which will enhance the user experience. This will go a long way toward ensuring that business processes are carried out in the best way possible and in a manner that keeps efficiency at the core of business operations. Microsoft has continually tried to satisfy its customers by introducing fresh concepts, and this has gone a long way in addressing pertinent issues that affect them.

Works Cited

Durham, Joel. Gaming Performance: Windows Vista SP1 vs. XP SP3. New York: CRC Publishers, 2009.

Ricciuti, Mike. Microsoft: Longhorn beta unlikely this year. London: Oxford Publishers, 2010.