The Effectiveness of the Computer

The modern computer is the product of close to a century of sequential inventions. It is also the result of a collaboration that has spanned national and continental borders. And all through, this evolution has been driven by one need: to automate tasks. Mankind realized that rearranging inanimate matter could result in certain mechanical advantages. This is the essence of machines. Knowledge accumulated, and we came up with machines that could totally replace humans in some tasks. This discovery sped up the evolution, since human limitations, at least on the physical level, were reduced. And now, we are using these machines to catalogue all the knowledge we have accumulated over the years. We are in the information age, and the computer has become a central determinant of our performance.

The effectiveness of the computer pivots on one major characteristic: speed. Every task that a computer does is first broken down into simple, logical steps. These steps are then converted into numerical values that the computer can understand. The manipulation of these numbers can be seen as a kind of arithmetic. A modern computer performs millions, even billions, of calculations every second in order to maintain a satisfactory level of output for the user, because it has to break natural language down into machine language. Text and numbers are easy for a computer to handle; media like music and images require more sophisticated manipulation. To illustrate this fact, consider that in order for a computer to play an audio file lasting four minutes, it has to go through the same rigors needed to read a Bible, from cover to cover, twice!
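A rough back-of-the-envelope version of that comparison, assuming uncompressed CD-quality audio and an approximate character count for a Bible (both figures are assumptions chosen for illustration):

    # Rough data-volume comparison (all figures are approximations)
    sample_rate = 44_100               # CD-quality samples per second
    channels = 2                       # stereo
    bytes_per_sample = 2               # 16-bit samples
    seconds = 4 * 60                   # a four-minute audio file

    audio_bytes = sample_rate * channels * bytes_per_sample * seconds
    bible_bytes = 4_000_000            # assumed: ~4 million characters at 1 byte each

    print(f"Audio file: {audio_bytes / 1e6:.1f} MB")                  # ~42.3 MB
    print(f"Bibles' worth of text: {audio_bytes / bible_bytes:.1f}")  # ~10.6

By this estimate, an uncompressed four-minute recording actually carries far more data than two Bibles' worth of text, so the comparison above is, if anything, conservative.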

So fast are modern computers that they can now multitask. Multitasking is simply performing several tasks at the same time. An average computer can be accessing the internet, displaying a video, and processing calculations all at the same time. This capability has expanded computers' horizons as far as their applicability is concerned. Computers, if well used, can drastically cut down the amount of time required to accomplish menial tasks. Mainframes and minicomputers have taken this a step further, with one central computer being available for use by several people at the same time.

While operating at incredible speeds, computers have the added advantage of being very accurate. Indeed, if fed correct data, a computer can always be counted on to produce correct responses, unless there is a problem with its programming. This advantage can be directly correlated with the fact that people can now concentrate more on the big picture instead of wasting time rechecking their information and statistics for accuracy. Progress in all sorts of human endeavors is thus much faster.

Computers have forever changed the field of communication. A mere couple of decades ago, trans-continental communication used to take days, sometimes even weeks, depending on the mode of communication. Nowadays, communication with any part of the world can be instantaneous and requires only the click of a button. This is the significance of the internet. Through the internet, computers communicate with each other at dizzying speeds, exchanging amazing loads of data and saving billions of dollars in communication charges. This has even changed the way business is done today. With a minimum of resources, any business can nowadays operate on an international scale. Some businesses are already doing this by outsourcing some of their interests across national borders. Billion-dollar decisions are now made in split seconds and effected with the click of a button. Little wonder, then, that some businesses shoot up from obscurity into empires overnight.

The primary reason for inventing computers in the first place has been taken to a whole new level. Computers have automated so many tasks previously done by humans that some jobs are now the preserve of computers. Any repetitive task can be relegated to a computer, because modern computers can be instructed to perform new tasks and will execute their instructions precisely, as long as they are equipped with the necessary accessories. With the evolution of computers, robots have been made to handle tasks that are either repetitive or dangerous for humans. In this way, computers have increased the overall safety of humans.

The first computers were hard to use. They needed specialized knowledge, since information had to be fed to the computer in special formats. With time, however, more convenient methods of communicating with computers have evolved, and these methods come closer to natural language every day. For example, computers can now recognize speech and execute commands that have been fed to them verbally. Scientists are working towards optical recognition in computers, so that they can recognize faces, expressions, and gestures. With such breakthroughs in technology, computers will soon be as easy to interact with as the next person. A future in which computers respond to human emotions is foreseeable. At that stage, computers will be at their highest effectiveness, tackling problems with amplified efficiency while drawing little attention to themselves.

Of course, computers are not without drawbacks. One of the most significant is that they have divided societies into two major groups: the computer literate and the computer illiterate. One group has a distinct advantage in this information age; the other has to make do with traditional methods of tackling problems, or learn to use computers. Anybody who cannot utilize the advantages of the computer is soon left behind by the rest of society.

Another disadvantage is that computers have replaced many people in the workplace. It can be argued that these replacements are part of human advancement, but the rate at which computers are replacing people has left many reeling from the unexpected. It now requires a higher level of innovativeness to stay relevant in the changing landscape of technology. Very soon, most job descriptions will be the preserve of computers, and humans will simply have to invent other job positions.

The ease with which information can be transmitted through computers has some drawbacks too. For example, some unscrupulous individuals have used internet channels to defraud others. The internet offers anonymity, and thus a degree of immunity from legal recourse. Billions of dollars are lost every year to these online fraudsters, and some of them simply disappear. Efforts are being made to tighten online security, but somehow fraudsters seem capable of keeping up with technological advances. Information flow through the internet can also compromise the security of institutions. For example, through satellite mapping, it is now possible to gather sensitive information on such premises as army barracks. Some terrorists have been known to use this while planning their attacks.

And finally, while computers are good at storing and processing information, the rapid increase and transfer of information is producing an information-overload scenario, in which people are confronted with so much information that they end up uncertain of what to make of it. Information overload is a real threat in the modern age and can result in all sorts of eventualities, including mental breakdowns.

All in all, computers have both advantages and disadvantages, but the advantages far outweigh the disadvantages. With each advance in technology, the disadvantages associated with computers diminish and their usefulness increases. A future in which mankind's activities are inextricably interwoven with computers is not only foreseeable, it is inevitable.

History of the Personal Computer: From 1804 to the Present Day

The search for newer, faster and smaller computers has been on for many years.

In 1804, Joseph Jacquard, a Frenchman, invented an attachment to the mechanical loom for weaving cloth. He realized that the design found in a woven cloth followed a fixed repetitive pattern, which was effectively a program. By punching holes in specific patterns and at specific intervals in cards attached to the loom, he was able to control the threads, producing desired patterns and hence storing information on punched cards (Chronology of personal computers, 2010).

In 1833, Charles Babbage designed a steam-powered device called the Difference Engine, a special-purpose machine that could perform specific calculations. He went on to conceive the Analytical Engine, a far more sophisticated general-purpose computing device which included five key components that perform the basic functions of modern computers. These are:

  • Input devices, which read punched cards containing instructions or data.
  • A processor/calculator/mill, where all the calculations were performed.
  • A memory unit, also called the store, where data and intermediate results could be held.
  • The control unit, which controlled the sequence in which operations were carried out.
  • The output devices, from which Babbage got his results.

Babbage contributed by developing the problem-solving instructions that the engine would follow while doing calculations (The history of computers, n.d.).

Hollerith's Census Machine

Hollerith developed a machine that automated the tabulating process. The machine combined electricity with Jacquard's method of storing information on punched cards. Information representing whole census papers was punched onto stiff paper cards.

Burroughs Adding and Listing Machine

Burroughs invented the first adding and listing machine. It had a full numeric keyboard and was operated by a hand crank.

ENIAC (Electronic Numerical Integrator and Calculator)

It was large and complex: a room-sized machine that used about 18,000 vacuum tubes as internal components.

It had independent circuits for storing program instructions and numbers, and several mathematical functions could be performed at once. By modern standards, however, it had limited storage capacity and limited memory, and it did not store instructions the way modern computers do.

Each new program required rewiring its program circuits. It could multiply two numbers in 0.003 seconds. Its main disadvantage was that it used a great deal of electricity and power (Allan, 2001).

Von Neumann's Logical Computer

He was a mathematician who dealt with ideas rather than the limitations of the technology of his day, and as a result he was able to develop a logical framework around which computers have since been built. He developed the concept of storing programs in the computer's memory, which came to be called the stored-program concept. Before this, computers stored only the numbers with which they worked. He converted each program instruction into numeric codes of binary digits (0 and 1) that could be stored directly in the computer's memory as if they were data.
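To make the stored-program concept concrete, here is a minimal sketch of a machine whose instructions are nothing but numbers sitting in the same memory as the data; the three-instruction code set is invented purely for illustration:

    # Instructions and data share one memory; opcodes are just numbers.
    # Invented code set: 1 = load addr, 2 = add addr, 3 = store addr, 0 = halt
    memory = [1, 8, 2, 9, 3, 10, 0, 0,   # the program, stored as numbers
              5, 7, 0]                   # the data: 5, 7, and a result cell

    pc, acc = 0, 0                       # program counter and accumulator
    while memory[pc] != 0:
        op, addr = memory[pc], memory[pc + 1]
        if op == 1:
            acc = memory[addr]           # load a value into the accumulator
        elif op == 2:
            acc += memory[addr]          # add a value to the accumulator
        elif op == 3:
            memory[addr] = acc           # store the accumulator back to memory
        pc += 2

    print(memory[10])                    # prints 12: the machine computed 5 + 7

Because the program is stored as ordinary numbers, it can be replaced or even modified like any other data, which is precisely the flexibility the stored-program design introduced.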

Von Neumann organized the hardware of the computer by breaking it into components, whereby each component performed a specific task and could be called upon repeatedly to perform its function.

The components in his theoretical computer bear a remarkable resemblance to those found in Charles Babbage's Analytical Engine. These components were: an arithmetic unit for performing basic computations, a logical unit where decisions and comparisons could be performed, an input device for accepting coded instructions and numerical data, a memory unit for storing instructions and data, a control unit for accepting the coded instructions and controlling the flow of data, and an output unit for communicating results (PC-History, n.d.).

The Electronic Delay Storage Automatic Computer (EDSAC) was the first computer to incorporate the stored-program idea; in 1949 it accepted letters as input and converted them into binary digits. EDSAC was a stored-program machine that used a unique binary code (Computer Genealogy).

Computer components have decreased in size since the 1950s.

1st Generation (1951-1958)

They had the following characteristics:

  • They used vacuum tube technology; input and output of data and instructions were done using punched cards.
  • The machines were programmable. The stored-program machines used numeric codes called machine language. First-generation machines were enormous, filling whole rooms, and carried huge price tags.

However, these machines had the following drawbacks:

  • The vacuum tubes generated tremendous heat, resulting in tubes blowing out.
  • They used massive amounts of electricity to power the thousands of vacuum tubes.

2nd Generation (1959-1964)

These machines were developed by John Bardeen, Walter Brattain, and William Shockley, who invented the transistor while working at Bell Labs in the late 1940s. Transistors began to be produced at lower cost and in larger quantities in 1959. A transistor is a tiny electronic switch that relays electronic signals, yet it is built as a solid unit with no moving parts and generates far less heat. Vacuum tubes were replaced by transistors, and the new machines were at once smaller, faster, and more reliable than first-generation computers.

These computers used solid-state technology, which required no warm-up time.

3rd Generation (1965-1970)

These machines packed more and more circuits into chips. Technology at that time moved from large-scale integration to very large-scale integration. Ted Hoff, an Intel engineer, introduced the microprocessor in 1971. Computers became smaller through this condensing of circuitry. In addition to the development of a highly compact central processing unit, peripheral devices were designed to make the PC easy to use. Peripheral devices are any devices attached to the CPU, including compact storage devices, colour screens (the monitor), and a wide variety of pointing devices such as the mouse, alongside the development of small desktop computers. In 1969, Intel was commissioned to produce an IC, a computer chip, for a Japanese calculator company's line of calculators (Early History of the personal computer, n.d.). Hoff created the microprocessor, which did away with hard-wiring the logic of the calculator into the chip.

Later, the 8008 was created by Intel, and the company retained its marketing rights while still being able to sell the chip to CTC. There was a need to create support for the programmable 8008 chip, and Adam Osborne, an employee of Intel, was assigned the task of writing the manuals for the 8008's programming language. He later gained fame in the development of the PC for creating the first portable computer (History of the PC, n.d.).

The first operating system for microprocessors, called CP/M, was developed by Gary Kildall. Operating systems are vital because without them, using a PC would be impossible.

In Albuquerque, New Mexico, in the early 1970s, a man named Ed Roberts created a kit for assembling a home computer and based it upon a new chip (the 8080) that had been developed by Intel. He then struck a deal which allowed him to purchase the 8080 chips in large volumes at a discounted price.

Meanwhile, through very large-scale integration, supercomputers gained vast storage and processing capacity. Such machines are used to perform complex mathematical tasks. Compact chip technology has also brought the development of parallel computers, which use multiple processors working simultaneously to solve problems. Hardware advances were followed closely by a software explosion, and prewritten software is now available for all sizes of machines (CompInfo - the computer information Center, 2005).

Beyond the 4th Generation (5th Generation)

Judging from current research in the United States of America and Japan, the next generation of personal computers is likely to have the common features given below:

  • They will be more compact: the hardware will be based on a super chip composed of thousands of already compact smaller chips linked together.
  • Personal computers will be faster, operating and calculating hundreds or thousands of times faster than current machines.
  • They will be closer to natural language, as software will make greater use of natural or spoken language.
  • They will be smarter, considerably more intelligent than modern computers.
  • They will be friendlier: software will make these new computers even more user friendly, meaning that people will find them easier to use and operate because less technical expertise will be required.

References

  1. Allan, R. (2001). A history of the personal computer: the people and the technology. United States: Allan Publishing.
  2. Chronology of personal computers (2010). Web.
  3. CompInfo - the computer information Center (2005). Software and computers. Web.
  4. Early History of the personal computer (n.d.). Web.
  5. History of the PC (n.d.). Web.
  6. The history of computers (n.d.). Web.
  7. PC-History (n.d.). Web.

Computers and Transformation From 1980 to 2020

Introduction

Main idea

Humanity dreams about innovative technologies and quantum machines that can make the most complicated mathematical calculations in billionths of a second, but forgets how quickly computers have progressed over the last forty years. The data volumes, sizes, and productivity of modern devices have advanced so much that their ancestral forms can be called the dinosaurs of informatics.

Thesis Statement

Modern computers are vastly improved in comparison with their ancestral forms from 1980.

Pioneer among Computers

The development of electronics contributed to the appearance of the first non-primitive computing devices. By the second half of the twentieth century, specialists already had at their disposal resistors, relays, and chip elements that were advanced by the standards of the time (Grudin, 2017). Thus, the first personal computer created for commercial sale was presented to the public in the mid-1970s: a small self-contained device from Apple with a processor clock frequency of 1 MHz and up to 48 kilobytes of RAM (Safian, 2018). Despite these modest hardware features, this computer was a pioneer, expanding the field for creativity for all other companies.

Body Paragraph 1 - Discussion Point 1: Processor

Topic Sentence

Over forty years, processors have undergone enormous changes, acquiring considerable computing power.

Subject A - Supporting Detail 1

By the end of the 1970s, the market was filled with a wide variety of eight-bit processors. The sales leaders were companies that still command respect among modern consumers: Intel and Motorola (Zimmermann, 2017).

Subject A - Supporting Detail 2

Since 1980, there has been a revolution in the world of computer technologies, as 16-bit and even 32-bit models replaced the old processors (Grudin, 2017). Processors of that time were produced with a 3-micrometer process technology, although in 1982 Intel brought this figure down to 1.5 micrometers (Martindale, 2020). At the same time, the clock frequency of the processor did not exceed 10 MHz, and registers were 16 bits wide. The amount of memory the device could address was 1 MB.

Subject B - Supporting Detail 1

In 2020, these parameters no longer seem grandiose. For comparison, the flagship line of 10th-generation Intel Core i7 processors, which best represents the achievements of modern circuits, has 14-nanometer lithography, a base clock frequency of up to 2.60 GHz, and support for 128 gigabytes of memory (Martindale, 2020). Such results allow billions of calculations per second.
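A quick calculation of the scale of these changes, using only the figures cited in the two paragraphs above:

    # 1980 vs. 2020 processor figures cited above
    clock_1980_hz   = 10e6            # 10 MHz
    clock_2020_hz   = 2.6e9           # 2.60 GHz base frequency
    process_1980_nm = 3000            # 3-micrometer process
    process_2020_nm = 14              # 14-nanometer lithography
    mem_1980_bytes  = 1 * 2**20       # 1 MB
    mem_2020_bytes  = 128 * 2**30     # 128 GB

    print(f"Clock frequency: ~{clock_2020_hz / clock_1980_hz:.0f}x higher")      # ~260x
    print(f"Feature size:    ~{process_1980_nm / process_2020_nm:.0f}x smaller") # ~214x
    print(f"Memory:          ~{mem_2020_bytes / mem_1980_bytes:.0f}x larger")    # ~131072x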

Subject B - Supporting Detail 2

However, it should be remembered that processors for computers, as a rule, are based on a microscopic silicon crystal, and silicon technology is rapidly approaching the limit of its physical capabilities. For this reason, the pace of processor development is gradually decreasing.

Body Paragraph 2 - Discussion Point 2: Memory

Topic Sentence

In general, RAM and ROM storage have tended to shrink in physical size while increasing the amount of data they hold.

Subject A - Supporting Detail 1

The first hard drive with a capacity of 1 GB was produced in 1980, but at that time it weighed over half a ton (Zimmermann, 2017). RAM in that era was measured in kilobytes, which was enough for the job.

Subject A - Supporting Detail 2

Modern ROM and RAM should be as large as possible to provide the user with a smooth and uninterrupted workflow. Accordingly, the current size of RAM is counted in tens of gigabytes, and ROM has long passed the terabyte mark.

Body Paragraph 3 - Discussion Point 3: Screen

Topic Sentence

Monitors, as one of the main components of computers, have undergone many modifications in the history of their existence.

Subject A - Supporting Detail 1

Even in the period up to 1980, electron-beam (CRT) monitors had begun to be replaced by liquid crystal analogs in some devices.

Subject A - Supporting Detail 2

However, unlike modern displays with diagonals of up to 55 inches and refresh rates of 60-150 Hz, the versions from 1980 were much more primitive. For example, such monitors were monochrome and required a separate backlight.

Subject B - Supporting Detail 1

Over just the next five years, IBM and Apple made significant improvements to displays, adding color models and improving transmission quality.

Body Paragraph 4 - Discussion Point 4: Mouse and Keyboard

Topic Sentence

Computer peripherals have undergone the most changes over the past forty years, not only in terms of the principle of operation but also in terms of appearance.

Subject A - Supporting Detail 1

Computer mice of the 1980s tracked movement with a rubber ball; optical sensors gradually replaced the ball only later, while the concept of buttons remained unchanged.

Subject A - Supporting Detail 2

In the world of modern devices, there are two paths: either the computer mouse is replaced by the touchpad, or the mouse itself improves. New mouse models are usually based on laser systems, and cords are becoming a thing of the past, replaced by Bluetooth connections or USB radio receivers.

Subject B - Supporting Detail 1

The first input devices of the 1980s were not separate from the central computer: the motherboard sat in one case with the keyboard. The number of keys was limited, and most of the function keys used today were not available.

Subject B - Supporting Detail 2

The most impressive features of 2020 include touchscreen keyboards, wireless keyboards with fingerprint sensors, and greatly expanded functionality.

Conclusion

Restate Thesis

In conclusion, computers have undergone a significant number of transformations over the past forty years.

Opinion

Some components have only changed their appearance and form factor, while others have entirely redefined the machine concept. From a comparative analysis, we can see that since 1980 the devices have followed several trends in their evolution. Firstly, the chip-based portion has been updated as much as possible: the computing power of computers has increased millions of times, making it possible to perform more complex mathematical calculations in the tiniest fraction of a second. Secondly, due to the growing demand for computational power, computers have gained more memory; today, even the size of the clipboard is larger than the RAM of devices in 1980. Finally, the output and input devices have changed. Displays have become higher quality, acquired color, and increased pixel density. The mouse and keyboard did not change their concept drastically, but they acquired a more convenient and ergonomic design in addition to increased functionality.

References

Grudin, J. (2017). Synthesis Lectures on Human-Centered Interaction, 10(1), 1-183. Web.

Martindale, J. (2020). Digital trends. Web.

Safian, R. (2018). Fast Company. Web.

Zimmermann, K. A. (2017). Live Science. Web.

Overview of Computer Languages - Python

The history of programming languages dates back over seventy years, to the development of the first computers. Computers are not only playing a growing role in traditional scientific computing but are also widely used in other fields. The world of computing technology is fast-changing as the world's leading tech firms compete to introduce the most innovative ideas. As a result, comprehension of programming languages is becoming more necessary. Even students at the elementary level are beginning to learn programming languages, making computer languages all the more relevant. A computer language helps people speak to the computer in a language the computer understands. As a consequence, programming is at the heart of the technological innovation in use today. This paper discusses the Python computer language.

A Brief History of Python Computer Language

The Python language first appeared in 1991 as a successor to the ABC language. The Python project was created by Guido van Rossum, who served as its lead developer. Python version 2.0 was launched in 2000 and included features such as garbage collection by means of reference counting, as well as list comprehensions (Shukla & Parmar, 2016). In 2008, Python version 3.0 was launched; the most critical change was that the new version was not backward-compatible, so any code written for Python 2 required modification before it could run on Python 3. Updates and releases of Python 2 were discontinued in early 2020, and no more security patches are expected for it.
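Two of the best-known incompatibilities are that print became a function and that the division operator became true division. A minimal sketch (runnable under Python 3):

    # Under Python 2, print was a statement:      print "7 / 2 =", 7 / 2
    # and 7 / 2 evaluated to 3 (integer division).

    # Under Python 3, print is a function and / is true division:
    print("7 / 2 =", 7 / 2)    # 7 / 2 = 3.5
    print("7 // 2 =", 7 // 2)  # floor division must now be written explicitly: 3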

Purpose of the Language

Python is a high-level, interpreted, general-purpose language. It emphasizes code readability, with programmers and developers leveraging significant whitespace. In addition, the language provides object-oriented constructs with the goal of helping programmers compose clear, logical code for both small-scale and large-scale projects. Python is also garbage-collected and dynamically typed, and it supports various paradigms, including object-oriented, functional, and structured programming. Python was designed to be highly extensible instead of having all of its functionality built into its core (Alyuruk, 2019). This compact modularity has made Python a preferred way of adding programmable interfaces to existing applications.
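A short sketch of these traits in action, showing dynamic typing, significant whitespace, and a list comprehension:

    # Dynamically typed: no type declarations are needed
    values = [1, 2, 3, 4, 5]

    # A list comprehension builds a new list in one readable line
    odd_squares = [v * v for v in values if v % 2 == 1]

    # Indentation, not braces, defines the block structure
    for v, s in zip(values[::2], odd_squares):
        print(v, "squared is", s)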

Advantages and Disadvantages of Python Language

Python is among the most popular programming languages of 2019, thanks mainly to its role in data science and teaching. Its advantages are also outstanding; the benefits associated with Python programming include a comprehensive standard library for reference. Python's syntax is very clear, in part because it is not a free-form language (Dierbach, 2014). A second strength is that Python's extensibility is reflected in its modules, which provide some of the richest and most powerful class libraries among scripting languages. However, Python also has some of the weaknesses of interpreted languages. The first disadvantage is that Python programs run slower than programs developed in languages like Java, C, or C++. In addition, the open-source nature of Python means that Python code cannot be encrypted.

The Application of Python Programming

The most crucial application of Python is as a universally embedded scripting language, a firm foundation for numerous web frameworks and automation tasks, including scripting inside 3D software applications. In addition, Python can be used for desktop tools and data-calculation programs. Python's flexibility makes it possible to develop apps that are compatible with various operating systems, including Android OS.

Numerous Python interpreters are available for various operating systems, making Python a reliable, robust, and efficient language for use on different platforms. In addition, a worldwide community of programmers develops and maintains a free, open-source reference implementation called CPython. The Python Software Foundation, a not-for-profit consortium, directs resources for the development of both Python and CPython (Bogdanchikov & Zhaparov, 2013). Python programming has been used to create numerous software products that are doing well in the technology niche, such as YouTube, Google, Reddit, Instagram, Spotify, Dropbox, and Quora.

Conclusions

To sum up, computers are no longer just the desktops or servers of most people's earlier impressions; they have evolved into the objects around us everywhere. Phones, tablets, and laptops are all computing devices, and other devices such as TV sets, microwave ovens, cars, and even the small robots that children play with have computing capabilities that many people do not realize. Programming languages have developed alongside computer hardware, and the programming language is an indispensable tool for shaping the computer. The history of computer languages shows that the more advanced the language, the closer it is to human thinking and the more convenient it is to use. Therefore, the development of computer languages in the future is bound to make them more accessible to human beings and closer to human life.

References

Alyuruk, H. (2019). In H. Alyuruk (Ed.), R and Python for Oceanographers (pp. 1-21). Elsevier. Web.

Bogdanchikov, A., & Zhaparov, M. (2013). Journal of Physics: Conference Series, 423, 1-5. Web.

Dierbach, C. (2014). Python as a first programming language. Journal of Computing Sciences in Colleges, 29(6), 153-154.

Shukla, X. U., & Parmar, D. J. (2016). Journal of Statistics and Management Systems, 19(2), 277-284. Web.

Computer Problems: Review

Over the years, there has been an increase in the number of user forum-based websites. This increase has reached levels where this genre is now referred to as a cottage industry. This paper presents the relevance and nature of these websites from the perspective of a personal experience.

I was continuously experiencing difficulty with hard disk detection by several software setups, including the Windows XP setup. Upon placing the installation CD into the CD tray, the setup would self-execute, and when it performed the automated system evaluation to determine whether the system specifications met the software requirements, it would conclude by displaying a message stating that no hard disk could be detected.

At first I attributed the problem to corruption in the setup software, but I soon noticed that it was recurring more frequently. The problem became evident when, upon trying to install software from a tried and tested CD, I was greeted by the same message once more. It was at this point that I realized that the problem was not with the setup programs but possibly lay in my system.

To obtain help, I decided to use some of the many internet-based websites that offer free advice along with tips and tricks on how to achieve optimal performance. These websites more often than not utilize user forums to generate their database of free advice, which remains open to all members of the website who wish to take advantage of the information, or to add to it by joining the forum.

Needless to say, these websites were not my first option. My first inclination was to use the Windows website to figure out whether Microsoft offered a troubleshooter that could help me find the source of the problem I had at hand.

Troubleshooters are special programs that operate by asking the user a number of questions in sequence (Synergenics, LLC, 2008). These questions are designed to root out the problem that the user might be facing, and the program usually instructs the user on how to fix the problem once it has been identified through the sequential questions. In certain cases, some troubleshooters are designed to carry out an analysis of the system in order to ascertain the nature of the problem and to instruct the user on exactly how to solve it, without taking the user through a lengthy series of questions.
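A rough sketch of how such a question-driven troubleshooter works; the questions and advice below are invented for illustration and are not Microsoft's actual diagnostic logic:

    # A minimal question-tree troubleshooter (illustrative content only)
    tree = {
        "question": "Is the hard disk visible in the BIOS setup screen?",
        "yes": {"advice": "The disk is detected by the hardware; check the "
                          "setup program or driver configuration instead."},
        "no": {
            "question": "Are the drive's power and data cables firmly seated?",
            "yes": {"advice": "Cabling looks fine; the drive or its "
                              "controller may be faulty."},
            "no": {"advice": "Reseat the cables and retry the installation."},
        },
    }

    def run(node):
        # Walk the tree until a leaf holding advice is reached
        while "question" in node:
            answer = input(node["question"] + " (yes/no) ").strip().lower()
            node = node["yes"] if answer.startswith("y") else node["no"]
        print("Suggested fix:", node["advice"])

    run(tree)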

I decided to use no more than three websites in my search for a solution to my problem. I was referred to these three websites by my peers, who told me they had experienced problems with their computers in the past and were of the opinion that these websites could provide the solution to my problem. The three websites that were used and are covered in this paper are:

  1. PC Pitstop
  2. Computing.Net
  3. Tom's Hardware.

All three websites appear to be designed on the same user-group forum philosophy mentioned above. Hence, it can be assumed that the advice these websites gave was the information a reader would eventually derive after going through the discussions on the query the user was faced with. In this particular case, I was greeted by a barrage of information on all three websites. All three held user forums in which users had posted advice and comments regarding various problems, and it took a bit of searching to find the forum for the problem I was faced with.

In the case of PC Pitstop, the advice I got was that I should consider completely wiping the previous operating system from my hard disk before initiating the installation; if the problem still persisted, the user forum led me to the conclusion that I should attempt to make my BIOS recognize the Universal Serial Bus as a bootable device in order to get on with the installation (Invision Power Board, 2009). So the advice given by PC Pitstop was based on slight tinkering with the BIOS and a complete reformatting of the current operating system from the hard drives.

When going through Computing.Net, I was amazed to see that the problem I was facing had not only been experienced by other users in the very same way I had seen in the PC Pitstop forum, but was also shared by many other users who had encountered it at different points during the Windows XP setup. Computing.Net also offered hard drive recovery tools placed within the forum of my interest (Computing.Net LLC, 2005). Computing.Net was of the opinion that I needed a few drivers installed before I began the actual Windows installation. This, of course, required additional purchases of driver CDs.

My third stop was Tom's Hardware, where I discovered that the issue of hard disk detection was even more widely encountered than I had presumed after witnessing the user forum at Computing.Net (Bestofmedia Network, 2005). The advice I found on Tom's Hardware, however, was advice I chose to go through only after I had successfully installed Windows XP, since I found next to no guidance there on the problem of hard drives not being detected before installation.

I consider it necessary to mention at this point that I found no significant help on the Windows XP website at all. The Windows XP site continuously subjected me to a marketing campaign, as I found myself drawn to the immense number of options that Windows XP had to offer and the equivalent number of tutorials with which the website was overflowing.

In times like these, I discovered that the significance of this cottage industry is that it allows users to gain the knowledge they need quickly and in a cost-efficient manner. The websites also allowed users to take advantage of various shareware and freeware software by downloading it directly. By doing so, these cottage-industry websites allowed users to obtain cost-effective technical support on Windows XP and other software while clearing the path for all the parties involved.

Works Cited

Bestofmedia Network. (2005). hard disk not detectable. Web.

Computing.Net LLC. (2005). Hard Disk Not detected in xp instal. Web.

Invision Power Board. (2009). Web.

Synergenics, LLC. (2008). EchoLink TroubleShooter. Web.

Protecting Computers From Security Threats

Nowadays, everybody uses personal computers for storing data which, when lost, creates a lot of trouble. In order to prevent spyware and viruses from entering our computer systems, we need to properly examine sites before entering them. We should never give our personal information, like credit card numbers, to any online site, even if we are asked for it, as we might become identity theft victims. We should check the URL of a site before entering it.

If it starts with https, then the connection is encrypted and reasonably secure; if it starts with http, then the security of our information may be compromised. The only information we should provide is our name or e-mail address. We should also be careful about certain e-mails that ask for our personal information. They could be part of a scam called phishing, in which providing our details leads to spyware getting attached to our computer (Adams, 2007).
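A minimal sketch of that URL check using Python's standard urllib.parse module (the example URLs are invented):

    from urllib.parse import urlparse

    def looks_encrypted(url):
        # https only means the connection is encrypted,
        # not that the site itself is trustworthy.
        return urlparse(url).scheme == "https"

    for url in ["https://example.com/login", "http://example.com/login"]:
        label = "encrypted (https)" if looks_encrypted(url) else "unencrypted (http)"
        print(url, "->", label)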

Worms and viruses spread very rapidly through our computers without our knowledge. Thus, we should install antivirus software and firewalls to stop unauthorized entries into our computers before we install any other programs. Before purchasing antivirus software or a firewall, we should learn about its abilities and limitations through customer reviews. Virus and spyware programs are extremely sophisticated and can even get past some antivirus software and firewalls. Some cyber criminals sell software with the promise that it will protect our computers, when in reality the software is actually harming our systems.

Thus, we should always use reliable software from reputable companies, not software from just any online site or downloaded from just anywhere. To ensure the safety and protection of our computers, we should always install antivirus software and firewalls that automatically update themselves for at least a year after installation. This guarantees that our software is of the latest version, so cyber criminals will have difficulty penetrating it to enter our computers.

A number of antivirus programs and firewalls are available in the market, some free and some with a price tag. One antivirus program that can be used to protect our computers is Avast Antivirus. It protects computers from Trojan horses, viruses, and worms. The official website of Avast is www.avast.com. Avast 4.8 Home Edition is free to use, updates itself automatically, gives warnings about license renewal, has a user-friendly interface, uses very little system resources, is easy to install, includes a Virus Recovery Database and a Real-Time Internet Monitor, and detects almost every form of virus and Trojan horse (Avast, 2009).

The Cisco PIX Firewall has been rated as the best firewall in recent years. The official site of Cisco is www.cisco.com. It comes in many versions depending on the size of the company: the Cisco PIX 501, 506E, 515E, 525, and 535. Its price varies from $500 to $50,000. It gives high performance and strong security; it is affordable, efficient, easy to install and run, and almost impenetrable. It works on a real-time, embedded, non-Unix security algorithm and performs stateful packet inspection. The higher models also support Gigabit Ethernet interfaces. The adaptive security algorithm (ASA) used by the Cisco PIX Firewall also makes it one of the quickest firewalls available (Cisco, 2006).

References

Adams, G. (2007). Protecting Your Computer From Viruses, Spyware, And Other Security Threats. Web.

Avast (2009). FREE antivirus software with spyware protection: avast! Home Edition. Avast. Web.

Cisco (2006). Cisco PIX 500 Series Security Appliances. Cisco. Web.

Building a PC, Computer Structure

Introduction

Computers are a necessity of the current world. No matter one's age or familiarity with technology, computer literacy is something everyone needs in order to compete in this fast-moving world. Computers have changed in appearance over time; they are now nearly palm-sized and can do almost all the operations performed by workstations 20 years ago. In our case, a desktop is a must for day-to-day operations.

Casing

The first item to be purchased is the case for the machine. A case should be large enough to give a good exhaust system and accommodate a 12-by-12-inch motherboard for expansion purposes. From the given list, we have the Cooler Master Elite 330 (420W) and Thermaltake's M9. Dave should go for the Cooler Master Elite 330 (420W) casing with power supply, as it has enough room to contain a full-sized motherboard and is spacious enough to keep the system cool. It has expansion capability in terms of optical drives, as it can hold at least three optical drives at a time. The chassis gap is also big enough to hold a 9-series GFX card.

Motherboard

The next component is the motherboard. This is the heart of the system, and the entire PC's performance, as well as the entire hardware setup, depends on it. It should allow fast data transfer and must have enough PCI slots for further expansion. The motherboard should have 3 RAM slots so that Dave can make a large amount of physical memory available by installing more RAM chips. It should have a PCI Express 2.0 interface for enhanced video performance. As Dave plans to plug in the plasma TV using the HDMI option, he should go for a motherboard that supports a new-generation graphics card; both the MSI P45 Neo3-FR and the Asus M2N68-VM fulfill these needs. However, I would suggest selecting the MSI P45 Neo3-FR, as it offers high connectivity along with all the options Dave requires for expansion, and the board has enough PCI slots that he can easily plug in a 1394 (FireWire) card for the digital camera.

Memory

Next up is the RAM for the system. To run a video editing tool, we need a large physical memory bank. The options are DDR2-800 and DDR3 modules. I would suggest the DDR2-800 module with 2048 MB capacity, as it is much cheaper and would satisfy the requirements. Dave should install 2 chips, giving a total of 4 GB, which is adequate to operate video editing tools; such software needs plenty of physical memory, and Windows Vista alone requires around 2 GB of memory to run smoothly.

Processor

The processor is regarded as the brain of the computer, and it should be fast enough to handle multitasking at every level. Among the choices presented, the Intel Core 2 Quad Q8200 and AMD Phenom II X4 9950 seem the most suitable. The Intel Core 2 Quad is by far the most appropriate processor for decoding activities as far as the price range is concerned. For encoding activities, we need a processor that can handle many calculations at a single point in time, and the Q8200 has this ability, as it has 4 cores to perform massive operations; furthermore, it requires less power compared with previous versions.

Hard Drive

The hard drive is the primary storage device for the system, and today sizes vary from 80 GB to terabytes. In our case, Dave does not require a huge hard drive, as most of the data is to be burnt onto DVDs. The choices available are the Western Digital 320 GB and the Seagate 320 GB, and my advice would be to go for the Western Digital, as it has been a good performer all along. Why 320 GB? Because an HD movie today is around 4-7 GB, so 320 GB would be adequate to hold a good number of these videos along with Windows Vista.

Graphics Card

The final item is the GFX card, and as Dave wants to plug in his plasma TV to turn the PC into a home theatre, the options are the ATI 4670 and the 9600GT. Here I suggest selecting the ATI 4670, as ATI's performance at this level is excellent; among the Nvidia series, only the later 9800 GTX can really compete with the ATI 4670. It has DirectX 10.1 support, HDMI, and HD readiness, at a comparatively cheaper price than the 9600GT.

Optical Drive

As far as optical drives are concerned, I would go for the SATA DVD recorder from the IDE and SATA options given in the list; the motherboard has more SATA connectors than IDE connectors. Furthermore, there is no need to go for a Blu-ray writer, as HD videos can easily be written to dual-layer media if they exceed the usual capacity of a DVD (4.2 GB).

The table below compares the cost of building the complete system from TIMR with the best bargain prices from various sources.

Component                        Cost at TIMR   Best Bargain Price   URL of the best bargain price
MSI P45 Neo3-FR                  $147           $139                 Web.
Intel Core 2 Quad Q8200          $298           $254
Coolermaster Elite 330 (420W)    $131           $95                  Web.
SATA 320GB                       $72            $72                  Web.
2048MB DDR2-800                  $78            $59.90               Web.
512M ATI 4670                    $129           $114                 Web.
(PCI) Firewire/1394 Card         $33            $8.80                Web.
SATA DVD-Recorder                $32            $27                  Web.
Total                            $920           $770
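A quick sanity check of the table's totals and the resulting saving, with prices copied from the rows above:

    # (TIMR price, best bargain price) for each component in the table
    prices = {
        "MSI P45 Neo3-FR":               (147.00, 139.00),
        "Intel Core 2 Quad Q8200":       (298.00, 254.00),
        "Coolermaster Elite 330 (420W)": (131.00,  95.00),
        "SATA 320GB":                    ( 72.00,  72.00),
        "2048MB DDR2-800":               ( 78.00,  59.90),
        "512M ATI 4670":                 (129.00, 114.00),
        "(PCI) Firewire/1394 Card":      ( 33.00,   8.80),
        "SATA DVD-Recorder":             ( 32.00,  27.00),
    }

    timr_total = sum(t for t, _ in prices.values())
    bargain_total = sum(b for _, b in prices.values())
    print(f"TIMR total:    ${timr_total:.2f}")                  # $920.00
    print(f"Bargain total: ${bargain_total:.2f}")               # $769.70, roughly $770
    print(f"Saving:        ${timr_total - bargain_total:.2f}")  # $150.30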


Life, Achievements, and Legacy of Alan Turing in Computer Systems

Alan Turing, the computer scientist, logician, cryptanalyst, and English mathematician, was born on June 23, 1912. Turing was quite influential in the development of computer science, setting the framework for the formalization of the algorithm concept through the Turing machine he devised for computation. Between 1945 and 1947, Turing was involved in the Automatic Computing Engine project. In February 1946, Turing presented a paper that has been considered the first detailed design of a stored-program computer (Copeland & Proudfoot 2004 par. 3-5). The University of Manchester appointed Turing deputy director of its computing laboratory in 1949. It was also during this time that Turing progressed with another project, writing software for the Manchester Mark 1, regarded as among the earliest known stored-program computers (Agar 2002 p. 36).

Turing also endeavored to address the challenge of artificial intelligence, and this is what prompted him to propose the Turing test, an attempt to establish a standard by which machines could be judged intelligent. The idea behind the Turing test holds that a computer can be assumed to be thinking if it is capable of fooling an interrogator into believing that the conversation that occurred was not with a computer but with a fellow human being. In his proposal, Turing opined that rather than developing a program to simulate the mind of an adult, it would be far easier to design a more modest one that simulated the mind of a child and then expose it to a course of education. Turing was also instrumental in the development of a chess program (Levin 2006 p. 43).

Turing published a significant paper titled "On Computable Numbers, with an Application to the Entscheidungsproblem" (Turing 1936 p. 241), in which he reformulated the results Kurt Gödel had obtained before him regarding the limits of computation and proof. Turing replaced Gödel's universal, arithmetic-based formal language with the simple, formal devices that came to be known as Turing machines. He was able to prove that these machines could execute any imaginable mathematical computation, so long as it could be represented as an algorithm. The theory of computation relies heavily on Turing machines, and Turing's formulation is significantly more intuitive and accessible.
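To make the idea concrete, here is a minimal sketch of a Turing machine simulator; the particular machine below, which adds one to a binary number, is an invented example rather than any machine from Turing's paper:

    # Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
    # This machine increments a binary number; the head starts on the
    # rightmost digit, and " " marks a blank cell off the edge of the input.
    rules = {
        ("carry", "1"): ("carry", "0", -1),  # 1 plus carry -> 0, carry moves left
        ("carry", "0"): ("done",  "1",  0),  # 0 plus carry -> 1, finished
        ("carry", " "): ("done",  "1",  0),  # ran off the left edge: write 1
    }

    def run(tape, head, state="carry"):
        cells = dict(enumerate(tape))              # the tape, stored sparsely
        while state != "done":
            symbol = cells.get(head, " ")
            state, write, move = rules[(state, symbol)]
            cells[head] = write
            head += move
        span = range(min(cells), max(cells) + 1)
        return "".join(cells.get(i, " ") for i in span)

    print(run("1011", head=3))  # prints 1100, i.e. binary 11 + 1 = binary 12

The striking point, and the heart of Turing's result, is that a table of simple local rules like this is enough, in principle, to express any computation an algorithm can describe.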

The Association for Computing Machinery has presented the Turing Award annually since 1966 to individuals who exhibit profound technical contributions to the world of computing. This is an honor regarded in the same rank as the Nobel Prize (Geringer 2007 par. 5). Time Magazine named Turing one of the 100 Most Important People of the 20th Century (The Time 100 1999 p. 1) for the role he played in the development of the modern computer. As Time Magazine noted, "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine" (The Time 100 1999 p. 1).

Reference List

Agar, J., 2002, The Government Machine. Cambridge, Massachusetts: The MIT Press.

Copeland, J. and Proudfoot, D., 2004. Web.

Copeland, B., 2004, The Essential Turing, Oxford: Oxford University Press

Geringer, S 2007, ACM's Turing Award prize raised to $250,000. ACM press release. Web.

Levin, J., 2006, A Madman Dreams of Turing Machines. New York: Knopf.

The Time 100 1999, Alan Turing. The Time 100. Web.

Turing, A.M. (1936), On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, Vol. 2, No. 42, pp. 230-65.

The Drawbacks of Computers in Human Lives

Since their invention, computers have continued to be a blessing in many ways, most notably in changing the lives of many people.

However, as much as computers have changed the way people live and do things, it must also be emphasized that they are associated with negative effects. This paper therefore discusses the negative effects, or drawbacks, of computers in the lives of many people. In exploring this issue, the paper will also mention what proponents of computers say before refuting it.

Although many people may argue that the use of computers helps enhance education, especially through research, when it comes to young people the reality is different. The use of computers reduces quality study time (Lin and Jin 411). That is to say, chatting, gaming, and other social software are highly tempting in the eyes of young people.

As a result, this significantly eats into their study time, and in the end the effort they devote to education is virtually insignificant. In short, the way for a nation or its people to reap a sizeable return from something is to invest heavily in it. Therefore, spending less time on education-related matters can affect one negatively in the future.

Computers tend to make people over-dependent (Bowers 115). That is, instead of thinking for themselves, as was the norm before computers were invented, people today can simply find answers without difficulty on the Internet.

Additionally, spending more time on the computer can easily cause health effects like eye strain, mental disorders, and shortsightedness. What is more, according to psychological studies, being on computers for a long time can also lead to depression and anti-social behavior, and this is mostly associated with young persons (Kassin, Fein and Markus 601).

Using computers affects people's social lives. This is based on the fact that human beings are social animals who live in a highly interactive society (Lin and Jin 411). Sharing opinions, beliefs, and ideas with other people plays a very important role in a person's life. However, computers can cut a person off from this kind of life, leaving him or her isolated from the events of the real world.

In fact, he or she slowly changes into something like a lifeless machine and, just like a computer, eventually starts perceiving things in terms of numerals and numbers. In short, this argument does not underestimate the importance of computers.

Indeed, computers can easily create unimaginable things and support people in reaching the intended level of success. However, if not managed well, they can also be the start of something whose end results may be dangerous.

The power a computer has over human beings is mesmerizing. Using computers on a daily basis can easily lead people to commit atrocious crimes that have negative effects on their environment. For instance, people get used to doing the same things, like socializing with the same people. Besides, people always want to do the things they are restricted from doing.

The best example is the tendency to access restricted sites or carry out activities such as spreading malware, hacking, and even spamming, which are characterized as offenses under the law.

Moreover, the risks involved in talking with unknown people or strangers online have negatively affected the lives of many people in the past. With this, the world must come to the realization that what people need is not a potential killer but something that contributes to general development.

Whilst computers have many benefits, such as making clerical and computational work easy, it is undeniable that spending many hours on a computer may easily lead to idleness. As a general fact, laziness reduces a person's self-respect (Cash and Smolak 446).

Laziness prevents someone from realizing and exploiting his or her innate skills. In fact, a slothful approach is known to weaken the body, making it unable to function properly. Therefore, as much as computers have changed the way people do things, this change also comes at a huge cost.

Most users of computers argue that they enhance communication and the accessibility of information worldwide, and that computers provide convenience when it comes to using various programs like accounting packages, Microsoft Office, and PowerPoint. The truth of the matter, however, is that dependence on computers narrows one's outlook.

Indeed, anyone who has used other machines like typewriters understands the advantages that computers bring to heavy workloads. However, the argument here is that prolonged use, or staying in front of a computer for a long time, dulls one's sense of intrinsic knowledge, reducing it to what he or she can obtain from the Internet.

This makes him or her less responsive to real-life activities, perceiving nearly all things as simulated. With this, someone can easily be labeled an introvert or even contemptuous. His or her life remains tightly held on an imaginary line, hanging somewhere between appearance and reality.

Computers negatively affect one's creativity. This is because when people want to write assignments these days, they can easily copy and paste someone else's work, although this is professionally illegal. The argument is that technology has greatly affected people's thinking to the point where they cannot bring themselves to carry out additional studies on their own.

Every human being is born with skills that are inherent and ingenious, and channeling creativity in a proper way can take people far. In any case, intelligence is not just about relying on works that belong to other people. Intelligence entails using one's abilities to provide new information to the world, as well as helping to organize a solid base for accomplishment.

The argument here is that, in so doing, one's insights and thoughts could be of greater help to a developing nation than the resources offered by computers. In short, the present-day computer era is highly limiting the flow of ideas and thoughts.

In conclusion, as much as computers are highly acclaimed for what they have done and continue to do, they are associated with many drawbacks that negatively impact human life. As has been seen from the discussion, computers reduce people to mere machines, making them unable to do any work without them.

As if that were not enough, computers are associated with dangers such as psychological disease, crime, laziness, lack of creativity, impacts on social life, and a narrowed outlook. The list is not exhaustive, as computers are still in use and continue to affect people in different ways.

Works Cited

Bowers, C. Let them eat data: how computers affect education, cultural diversity, and the prospects of ecological sustainability. Athens: University of Georgia Press, 2000.

Cash, Thomas and Linda Smolak. Body image: a handbook of science, practice, and prevention. New York: Guilford Press, 2012.

Kassin, Saul, Steven Fein and Hazel Markus. Social psychology. Belmont, CA: Wadsworth, 2014.

Lin, Sally and David Jin. Advances in computer science, intelligent system and environment. Berlin: Springer, 2011.

Honeypots and Honeynets in Network Security

Introduction

Arguably one of the most epic accomplishments of the 20th century was the invention of the computer and the subsequent creation of the internet. These two entities have virtually transformed the world as far as information processing and communication are concerned. Organizations have extensively employed computer systems as efficient global communications became the defining attribute of successful organizations. However, these advancements have also increased the frequency and sophistication of computer crimes. It is therefore imperative that countermeasures be developed to detect and prevent these attacks. The key to fulfilling these countermeasures is the gathering of information on vulnerabilities and gaining insight into the strategies employed by attackers. Presenting prospective attackers with honeypots, seemingly easy targets that are in fact traps, is one of the tools being utilized to enable covert monitoring of intruders. This paper argues that honeypots and honeynets are an effective method of identifying attackers, system vulnerabilities, and attack strategies, thereby providing a basis for improved security as well as for catching attackers. The paper provides a detailed description of the benefits of this method and its implementation. The legal issues that surround the use of honeypots and honeynets are also addressed, so as to determine how one can make use of these tools within the legal framework of our country.

Honeypots and Honeynets: A Brief Introduction

A honeypot is defined by Lance Spitzner as a security resource whose value lies in being probed, attacked, or compromised (Pouget, Dacier & Debar, 2003; Spitzner, 2002). As such, a honeypot is a device exposed on a network with the aim of attracting unauthorized traffic. A honeynet, on the other hand, is simply a network of honeypots with a firewall attached to it. Once the system is compromised by an intruder, data on the unauthorized access is collected so that it can be studied, both to learn about the latest trends and tools used by intruders and to help trace the traffic back to the intruder's computer. Since the value of a honeypot lies in its being compromised by an attacker, it makes sense to make it look not only enticing but also authentic to a hacker. A honeynet will therefore consist of the standard production systems that may be found within a real organization, and generally of several computers, as with a real intranet. Virtualization software such as VMware can be utilized to simulate several computer systems on one physical machine (Krasser, Grizzard & Owen, 2005).

Types of Honeypots

Honeypots can be categorized into two broad groups: production honeypots and research honeypots. The difference between the two springs from the role that the honeypot plays in a system. Production honeypots are used to avert risk to organizational resources by presenting a kind of red herring for intruders to compromise. Research honeypots, on the other hand, are meant to gather as much information from attackers as possible. Production honeypots assist in mitigating the risk that organizations face and provide evidence of malicious attempts which may be used in a court of law. Research honeypots are an excellent basis for validating an organization's security setup, since potential threats and risks are assessed so that administrators can make the best security decisions.

How Honeypots Work

An important point to note is that honeypots are not designed to prevent a particular intrusion; rather, their objective is to collect information on attacks, thereby enabling administrators to detect attack patterns and make the necessary changes to protect their network infrastructure. A honeypot device is placed in the open with the aim of attracting unauthorized activity. The defining characteristic of honeypots is the level of involvement that they afford the attacker. A low-involvement honeypot (also referred to as a low-interaction honeypot) only emulates running systems and services (Carter, 2004). This kind of honeypot does not provide a real operating system for the attacker to operate on, which greatly reduces the amount and significance of the data captured from the intruder. Low-interaction honeypots can offer information such as the date and time of an attack and the attacker's IP address; however, their effectiveness is limited to already-discovered attack patterns. The other kind is the high-involvement honeypot, which makes the entire operating system, along with its installed services, accessible to the intruder (Carter, 2004). This unlimited access allows for more data to be captured and subsequently analyzed.
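
To make the distinction concrete, the following is a minimal sketch, in Python, of a low-interaction honeypot of the kind described above. It is an illustration rather than a production tool: the decoy port, the fake service banner, and the log file name are all assumptions made for the example. The program merely emulates a service and records the date, time, and IP address of each connection attempt, which is precisely the kind of data a low-involvement honeypot can capture.

    # A minimal low-interaction honeypot sketch (illustrative only).
    # Assumptions: a free decoy port (2222), a fake SSH banner, and a
    # local log file; all are hypothetical values chosen for the example.
    import socket
    from datetime import datetime, timezone

    LISTEN_PORT = 2222          # hypothetical decoy port
    LOGFILE = "honeypot.log"    # hypothetical log location

    def run_honeypot():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", LISTEN_PORT))
            srv.listen()
            while True:
                conn, (ip, port) = srv.accept()
                with conn:
                    # Present a fake service banner so the probe looks real.
                    conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")
                    # Record what a low-interaction honeypot can capture:
                    # the date, time, and source of the attempt.
                    stamp = datetime.now(timezone.utc).isoformat()
                    with open(LOGFILE, "a") as log:
                        log.write(stamp + " probe from " + ip + ":" + str(port) + "\n")

    if __name__ == "__main__":
        run_honeypot()

Running the script and connecting to the decoy port (for example with a telnet client) would append one timestamped line per probe. Note that, as discussed above, such a sketch can only confirm attack patterns that are already understood; it gives the intruder no real system to operate on.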

Technical Implementation

The type of honeypot that is implemented depends heavily on the objective of the organization as well as on the amount of resources at its disposal. Law enforcement agencies require a great deal of data in order to reconstruct an attacker's motives and identity, and as such a high-interaction honeypot may be utilized; these agencies also have the resources necessary to finance and maintain such a system. Corporations may not need to capture as much data, and a low-interaction honeypot that is both easy to set up and poses limited danger may therefore be preferred.

In most implementations, a single physical machine running multiple virtual operating systems is utilized. Carter (2004) suggests that the honeypot servers should be left unsecured to allow the intruder free rein over the system. To track the activity of the intruder, detection tools such as Snort can be utilized to analyze the types of traffic received. One of the setbacks of honeypots is that outgoing traffic cannot otherwise be limited: an attacker can use the system to carry out DoS attacks, with legal consequences for the honeypot owner. As such, placing a firewall in front of the honeypot is vital to ensure that outbound traffic is controlled, thus lowering the risk posed by a hostile takeover of the honeypot. VMware is the software favored for setting up multiple virtual systems so as to mimic a real network setting.
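
The outbound-traffic control described above can be illustrated with a short, hedged sketch. Assuming a Linux gateway placed in front of the honeypot, root privileges, and the standard iptables tool, the following Python script rate-limits new outbound connections from the honeypot; the honeypot address and the rate limit are hypothetical values chosen for illustration.

    # A hedged sketch of outbound-traffic control for a honeypot.
    # Assumptions: a Linux gateway in front of the honeypot, root
    # privileges, the standard iptables tool, and hypothetical values
    # for the honeypot address and the rate limit.
    import subprocess

    HONEYPOT_IP = "10.0.0.5"    # hypothetical honeypot address

    def restrict_outbound(limit="10/minute"):
        # Permit only a trickle of new outbound connections from the
        # honeypot, so that a hijacked honeypot cannot be used to mount,
        # e.g., a denial-of-service attack against a third party.
        rules = [
            ["iptables", "-A", "FORWARD", "-s", HONEYPOT_IP,
             "-m", "state", "--state", "NEW",
             "-m", "limit", "--limit", limit, "-j", "ACCEPT"],
            # Anything beyond the limit is silently dropped.
            ["iptables", "-A", "FORWARD", "-s", HONEYPOT_IP,
             "-m", "state", "--state", "NEW", "-j", "DROP"],
        ]
        for rule in rules:
            subprocess.run(rule, check=True)

    if __name__ == "__main__":
        restrict_outbound()

The design choice here mirrors the reasoning in the paragraph above: inbound probes are left unhindered so that the honeypot remains enticing, while outbound connections are throttled so that a compromised honeypot poses limited danger to third parties.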

Benefits of Honeypots

Honeynets present a myriad of benefits for an organization or institution that employs them. Through honeynets, an administrator is able to detect other compromised systems on the network (Krasser, Grizzard & Owen, 2005). This is possible because attackers use the honeypot as a starting point to hijack other systems. By analyzing the honeynet log files, one can trace the path the attacker took and arrive at any other system that may have been compromised by the intruder.
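
This kind of log analysis can be sketched as follows. The connection-log format (one timestamp, source address, and destination address per line), the honeypot address, and the internal address prefix are all assumptions made for the example; the idea is simply to flag internal hosts that the honeypot itself connected out to, since such traffic suggests the attacker used the honeypot as a stepping stone.

    # A sketch of honeynet log analysis under an assumed log format:
    # one "timestamp source destination" triple per line. The honeypot
    # address and the internal prefix are hypothetical values.
    from collections import Counter

    HONEYPOT_IP = "10.0.0.5"    # hypothetical honeypot address
    INTERNAL_PREFIX = "10.0."   # hypothetical internal address space

    def possibly_compromised(logfile="honeynet.log"):
        # Count internal hosts that the honeypot itself connected to;
        # such outbound traffic suggests the attacker used the honeypot
        # as a stepping stone toward other systems.
        targets = Counter()
        with open(logfile) as log:
            for line in log:
                parts = line.split()
                if len(parts) != 3:
                    continue    # skip malformed lines
                _stamp, src, dst = parts
                if (src == HONEYPOT_IP and dst != HONEYPOT_IP
                        and dst.startswith(INTERNAL_PREFIX)):
                    targets[dst] += 1
        return targets

    if __name__ == "__main__":
        for host, hits in possibly_compromised().items():
            print(host + ": " + str(hits) + " connection(s) from the honeypot")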

Honeypots also enable an organization to carry out research into the threats it may face. Questions such as who the attackers are and what kinds of tools they use in their attacks can thus be answered. This enables the IT security branch of the organization to better understand its potential threats, and so to increase its preparedness and strengthen its defense mechanisms.

A production honeypot acts as an easy target, distracting the intruder from attacking the organization's real system. This gives the organization some measure of protection, since the potential attacker compromises the more enticing honeypot and leaves the organization's system unscathed. In addition, the organization can use the production honeypot to positively identify the attacker; if this information has been lawfully obtained, it can be used to prosecute the attacker in a court of law.

Challenges

Despite the numerous merits that may be reaped from the use of honeypots by an organization or individual, running these tools comes with inherent problems. Loss of control over the honeypot can render it worthless, since its main purpose is to capture unauthorized activity. If an attacker succeeds in infiltrating the system without being noticed, then there is a flaw in the device and it is of no benefit to its owners.

Since honeypots typically run as virtual machines on a host operating system, there is always the danger of an attacker breaking out of the virtual environment and into the host operating system (Baumann & Plattner, 2002). This would give the attacker access to data and resources that are vital to the organization, and he or she could then compromise the entire system, leading to losses.

Baumann and Plattner (2002) affirm that the effectiveness of honeypots can be greatly impeded when the attacker employs encrypted connections. While logging and listening in on unauthorized traffic is still possible even when the connection is encrypted, deciphering the contents of the attacker's packets is at times impossible. In some cases, an attacker can take over the entire system, rendering the administrator helpless. The attacker can then utilize the system resources available to him to launch attacks on other systems. Such attacks, e.g. denial-of-service attacks on other networks, can result in damage to a third party's network (Krasser, Grizzard & Owen, 2005). The consequences of such errors can be costly, as the honeypot owner can be held legally liable for the attack and forced to compensate the third party.

The use of honeypots presents a number of legal issues to any person or organization that implements them. One of the core legal issues arising from honeypot usage is entrapment. Spitzner (2006) defines entrapment as the act by a government agent of inducing a person to commit a crime, through fraudulent means or unwarranted inducement, so as to criminally prosecute that person. As such, an attacker who is taken to court for compromising a honeypot can argue that he was induced to commit the crime, thus nullifying the evidence contained in the honeypot logs.

The issue of privacy, which is prevalent throughout the Information Technology sphere, also applies to the use of honeypots. Honeypots can be configured to capture the content of a transmission. Such data carries privacy implications, and its collection and use may violate the transmitter's privacy. Spitzner (2006) suggests that placing banners that oblige individuals to consent to monitoring, thereby waiving their right to privacy, is one of the ways in which monitoring of a system can be legitimized.

Honeypots can also lead to attacks on third parties, with the honeypot used as the platform of attack. This presents a legal problem, since the owner of the honeypot can be held responsible for the attack even though it was an intruder who used the honeypot to attack another person's system. Baumann and Plattner (2002) assert that it is the honeypot owner's responsibility to ensure that no harm is caused to third parties as a result of their honeypot.

Conclusion

The IT arena is ever evolving, and as its effectiveness increases, so do the risks. Preventive and detective measures should therefore be employed to improve security. This paper set out to illustrate that honeypots can be used to identify and catch security threats as well as to identify vulnerabilities in an organization's network. It has been demonstrated that honeypots can be used to identify attackers and take legal action against them. However, while honeypots present a versatile tool for revealing the identity of attackers and prosecuting them in a court of law, law enforcers should be careful to ensure that the information they obtain does not infringe on the rights of the individual, which would make it inadmissible in court.

While honeypots are an important weapon in the IT security personnel's arsenal against attackers, it is clear from this paper that they do not, by themselves, protect an organization's computer infrastructure from attack. It is therefore prudent for organizations to also invest in security measures such as firewalls and antivirus software, and to adhere to best security practices so as to safeguard their systems. Having done this, organizations and individuals alike can thrive on the numerous benefits that computer networks present.

References

Baumann, R. & Plattner, C. (2002). White Paper: Honeypots. Web.

Carter, W. L. (2004). Setting up a Honeypot Using a Bait and Switch Router. Web.

Krasser, S., Grizzard, B. J. & Owen, H.L. (2005). The Use of Honeynets to Increase Computer Network Security and User Awareness. Haworth Press. Web.

Pouget, F., Dacier, M. & Debar, H. (2003). White Paper: Honeypot, Honeynet, Honeytoken: Terminological Issues. Web.
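
Spitzner, L. (2002). Honeypots: Tracking Hackers. Boston, MA: Addison-Wesley.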

Spitzner, L. (2006). Honeypots: are they Illegal? Web.