Computer Numerical Control (CNC) machines have become very popular in recent years because they offer acceptable repeatability of the machined parameters, allow many operations to be combined, permit machining in more than three axes, carry out the process with very little human intervention, and let a single operator run several machines. The paper provides an in-depth analysis of CNC machines and automated machining.
Characteristics of CNC Machines
A CNC machine, depending on its type, may have two or more axes that can be controlled by a computer program. A CNC machine is defined as a system in which the actions are controlled by the direct insertion of data, and the system must automatically interpret and carry out the instructions. A dedicated computer is built into the control system of the machine and is connected to the servo controllers that provide motion to the machine axes. Based on the program written in G-code, each axis moves to the new position at the required feed while the machine spindle rotates at the specified cutting speed. Depending on the type of machine, the turret may carry a number of tools fixed in special tool holders; the distance from the tool tip to the spindle seating face is measured and entered into the computer, and this is called presetting.
This distance is called the offset, and the spindle axis moves back or forward to ensure that only the tool tip and tool sides are in contact with the faces to be machined. A CNC machine has multiple programmable axes such as X, Y, and Z, and in addition the machine table can be programmed to turn so that a fresh face is presented when required. With proper tooling and design, and depending on the number of setups required, it is possible to load more than one component on the different faces of the tooling. The machines can have two fixtures, one inside the machine with components undergoing machining and the other outside, so that un-machined components can be loaded and kept ready for the next cycle and idle time is kept to a minimum (Pabla, 2007).
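The offset logic described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical calculation, not any specific controller's interface; the function name and numbers are illustrative assumptions showing how a preset tool length might be added to a programmed coordinate so that only the tool tip reaches the work.

```python
# Minimal sketch of tool-length offset compensation as described above.
# Names and numbers are illustrative assumptions, not a real controller's API.

def z_axis_target(programmed_z: float, tool_length_offset: float) -> float:
    """Return the spindle Z position that puts the tool tip at the programmed depth.

    programmed_z       -- Z coordinate of the feature, measured from the work zero (mm)
    tool_length_offset -- preset distance from tool tip to spindle seating face (mm)
    """
    # The spindle must stay back from the work by the preset tool length,
    # so that only the tool tip reaches the programmed coordinate.
    return programmed_z + tool_length_offset

if __name__ == "__main__":
    # Example: a 120.00 mm long tool cutting a face programmed at Z = -5.00 mm.
    print(z_axis_target(programmed_z=-5.0, tool_length_offset=120.0))  # 115.0
```

On a real controller this compensation is applied automatically once the preset value is stored in the tool's offset table.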
CNC machines can also be integrated with CAD and CAM processes and it is possible to convert a CAD design into a programmable set of instructions so that the required dimensions can be machined.
Primary processes and their characteristics
CNC machines can be used to perform a number of operations such as milling, drilling, boring, tapping, reaming, spot facing, turning, EDM, grinding, and so on. Workpieces that can be machined include castings, forgings, and bar stock material, and typical applications include roughing operations for dies and molds, stamping and drawing tools, and plastic and wood components for prototyping. CNC machines fall into different classes, designated according to the machining operations they can carry out. The primary processes are grinding, drilling, boring, milling, turning, EDM, drilling/boring and tapping, metal spinning, deep drawing, and others. The different primary processes and the machine types are given in this section (Smid, 2005).
CNC Turning
CNC turning machines are special types of lathes that are used to turn bar stock, castings of odd shapes, forgings, and other components. These machines have a rotating spindle that can be made to rotate at different surface speeds. A turret is placed at the back, and it can carry multiple tool holders that are used for turning the external and internal diameters, internal and external threading, fine boring, internal grooving, circlip machining, taper turning, and other operations. The machines are limited by the maximum dimension and weight of the component. Maximum size refers to the maximum swing over the bed that can be accommodated; the component has to swing clear of the bed while rotating so that it does not strike the bed. Other factors to be considered are the maximum stock to be removed, the number of tools required in one setup, the tooling required, and so on. Cutting tools that are used include carbide inserts, cermets, HSS tools, and diamond insert tools, the last of which can only be used for nonferrous machining (Stephen, 2008). The following figure shows the layout of a typical CNC lathe.
While machining longer components such as rods and bars, a dead center, as shown on the left side of the figure, should be used, as this gives proper support to the job and reduces problems of overhang. Threads of different forms such as metric, inch, NPTF, and others can be cut, and the required pitch can be maintained. These machines come equipped with coolant so that the machined component can be cooled and the burr removed. By using rigid work-holding devices, complex-shaped parts such as castings and small housings can be machined for the bore, internal and external circlip grooves, and other geometries. Typically, components such as shafts, propeller shafts, driveshafts, and pulleys are machined. There are essentially two setups, since the side that is clamped in the spindle has to be reversed and machined in the second setup (Stephen, 2008).
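Because the turret tools cut at a programmed surface (cutting) speed, the spindle speed has to change with the diameter being turned. The sketch below is a small worked example using the standard relation between cutting speed and spindle rpm; the speed and diameter values are illustrative assumptions.

```python
import math

def spindle_rpm(cutting_speed_m_per_min: float, diameter_mm: float) -> float:
    """Spindle speed (rev/min) needed to hold a given surface speed at a given diameter."""
    # Surface speed Vc (m/min) = pi * D (mm) * N (rpm) / 1000, solved for N.
    return (1000.0 * cutting_speed_m_per_min) / (math.pi * diameter_mm)

if __name__ == "__main__":
    # Example: a carbide insert turning steel at 200 m/min on a 50 mm diameter bar.
    print(round(spindle_rpm(200.0, 50.0)))  # ~1273 rpm
```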
CNC Milling and Machining Centres
CNC milling machines and machining centers are the workhorses of the manufacturing industry and are used to perform a number of operations. The main types of CNC machining centers are horizontal and vertical. While a CNC milling machine performs only milling operations, a CNC machining center performs a number of operations such as milling, drilling, boring, internal threading, slot milling, and many others. The machines combine many operations and drastically reduce setup time. They are most useful when the parts require complicated tooling, when different faces have to be machined in the same setup, when different bores need to be aligned with a common reference point, and when medium production output is required. Since these machines have a central table on which the component can be clamped, it is possible to machine all the faces by simply reloading the component so that different faces are exposed.
It is essential that clamping be done on a flat surface of the component and that the required master dowel holes are machined first so that subsequent operations take their location from these master dowels. The machines have an automatic tool changer in which a number of tool holders and qualified tools can be placed. The tool holders are designated with numbers such as T1, T2, and so on, and the required tool can be called in the CNC program. The tool changer removes the tool from the spindle and loads a new tool as required. By employing appropriate G codes in the program, it is possible to carry out canned cycles, peck drilling, contour milling, and even cam-lobe milling.
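As a rough illustration of how tool calls and canned cycles fit together, the sketch below assembles a short drilling routine in a generic G-code style. The exact block syntax differs between controllers, so the codes and the helper function here are an illustrative assumption rather than a program for any particular machine.

```python
# Illustrative sketch (not a specific controller's dialect): build a short program
# that calls a tool from the magazine and runs a canned drilling cycle (G81).

def drill_pattern(tool_no, spindle_rpm, feed_mm_min, depth_mm, retract_mm, holes):
    """Return G-code-style blocks for drilling a list of (x, y) hole positions."""
    prog = [
        f"T{tool_no:02d} M06",   # tool change: load the requested tool
        f"S{spindle_rpm} M03",   # start spindle clockwise at the given speed
        "G90 G54",               # absolute coordinates, first work offset
    ]
    x0, y0 = holes[0]
    prog.append(
        f"G81 X{x0:.3f} Y{y0:.3f} Z{-depth_mm:.3f} R{retract_mm:.3f} F{feed_mm_min}"
    )                            # canned drilling cycle, executed at the first hole
    for x, y in holes[1:]:
        prog.append(f"X{x:.3f} Y{y:.3f}")  # the cycle repeats at each new position
    prog.append("G80 M05")       # cancel the canned cycle and stop the spindle
    return prog

if __name__ == "__main__":
    for block in drill_pattern(1, 1200, 100, 12.0, 2.0, [(10, 10), (30, 10), (50, 10)]):
        print(block)
```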
While three-axis machines are common, machines with five axes are also available. Some manufacturers also offer very high-precision machines for jig boring operations that are used in tool rooms for the manufacture of jigs and fixtures, dies and molds, and other precision parts. Internal threads are cut using taps of the required pitch and thread diameter, and special tapping attachments need to be used. In tapping, the tap is fed at the required feed rate while rotating clockwise, stops at the end of the stroke, and then rotates anticlockwise while retracting to produce a fully finished thread. The machines can be used for machining castings and housings such as crankcases, gearboxes, fuel pumps, governor housings, water pump casings, flywheel housings, gear case covers, cylinder blocks, and cylinder heads, among many others (Bannister, 2006).
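The tapping cycle only works if the axis feed is synchronized with the spindle so that the tap advances exactly one thread pitch per revolution. The following is a minimal worked calculation of that feed rate; the tap size and speed are illustrative assumptions.

```python
def tapping_feed(spindle_rpm: float, pitch_mm: float) -> float:
    """Feed rate (mm/min) that advances the tap exactly one thread pitch per revolution."""
    return spindle_rpm * pitch_mm

if __name__ == "__main__":
    # Example: an M10 x 1.5 tap run at 400 rpm.
    print(tapping_feed(400, 1.5))  # 600.0 mm/min
```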
Rotary Tables
Rotary tables are a key component of the CNC machining center; jigs and fixtures, as well as components, can be clamped on them so that machining can be done. The devices are classified according to table size, the weight they can accommodate, the type of servomotor used, the number of index positions possible, and the mounting diameter. The tables are provided with a flat face that is hand-scraped for flatness and very low face runout. Tables with a tilting axis are also available, and these can be used for machining inclined and angular holes and faces. Depending on the type of machine and control, the table can either be programmed to index at a rapid rate to the required rotation angle, or it can act as a fully programmable axis so that the table can be given a feed while it is rotating; this facility is used to mill cam lobes (HAAS, 2008).
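Indexing positions for a multi-sided fixture follow directly from the number of faces. Below is a minimal sketch assuming a simple, evenly spaced indexing pattern; how the rotary axis is addressed in the program (often as a fourth axis named A or B) is a controller-dependent detail and is not modeled here.

```python
def index_angles(num_faces: int):
    """Table rotation angles (degrees) that present each face of the work in turn."""
    step = 360.0 / num_faces
    return [round(i * step, 4) for i in range(num_faces)]

if __name__ == "__main__":
    # Example: indexing a four-sided fixture so each face comes in front of the spindle.
    print(index_angles(4))  # [0.0, 90.0, 180.0, 270.0]
```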
Work Holding Devices
CNC fixtures are usually of modular design and allow additional elements to be added or replaced as per the component requirements. The following table shows the types of fixtures used for CNC machining centers.
The fixtures need to have a master locating dowel so that the component can be seated accurately and the dowel pins also serve as the reference point for the machining program. The fixtures should be rigid enough to withstand the severe vibrations from the machining process and the clamping should not cause the component to be distorted.
A modular fixture with multiple tools is shown below.
Tooling
Tool Holders play a very critical role in the accurate machining of components. The tool holders have an ISO or a Morse taper on the outside and these are pushed into the machine spindle by the tool changer. Locking lugs at the sides prevent the tool holder from rotating or getting dislodged in the spindle. A locking stud at the back helps the tool holder to be locked into position during machining. The tool shank can be pushed into the tool holder and a tang ensures that the tool remains firmly seated in the tool holder. Different types of tool holders are available as per the tool diameter, spindle taper size, and others. For a specific machine, since the spindle size is fixed, all the tool holders would need to have the same external taper. The following diagram gives an illustration of different tool holders.
Spindle Power
Spindle power defines the capability of the machine and determines the maximum amount of material that can be removed, the maximum size of the tool holder, and the machining parameters that can be used. The spindle is specified by its maximum speed in revolutions per minute and its power in kilowatts, and the available power is a direct function of the servo motor that drives the spindle. The following graph shows the performance of the machine with reference to spindle speed, torque, and power output.
In the graph, it can be seen that as the speed increases, the available power increases but the torque reduces. For this reason, lower speeds are used with heavy milling cutters or when the depth of cut is larger.
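The inverse relationship between torque and speed follows from the basic relation P = T·ω, which for power in kW, speed in rpm, and torque in N·m gives T ≈ 9549·P/n. The sketch below works this out, assuming the spindle is operating in its constant-power range; the 15 kW rating is an illustrative assumption.

```python
def spindle_torque_nm(power_kw: float, speed_rpm: float) -> float:
    """Available torque (N*m) at a given spindle speed for a given power rating."""
    # P = T * omega, with omega = 2*pi*n/60 rad/s, so T = 9549.3 * P_kW / n_rpm.
    return 9549.3 * power_kw / speed_rpm

if __name__ == "__main__":
    # Example: a hypothetical 15 kW spindle.
    for rpm in (500, 1500, 6000):
        print(rpm, round(spindle_torque_nm(15.0, rpm), 1))
    # Torque falls from ~286 N*m at 500 rpm to ~24 N*m at 6000 rpm,
    # which is why heavy cuts are taken at lower speeds.
```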
Machine specifications of a typical CNC machine
While many parameters are used to specify a CNC machine, the important ones include the table size, the maximum size of the component, the number of axes, spindle power and speeds, the number of tools available in the tool magazine, the maximum size of the tool, and so on. The following table gives a typical specification for a CNC machine.
Market Analysis
The CNC market is projected to be worth more than 5,000 million dollars, and global demand varies with the economies of different regions. Countries such as India and China, which have seen increased outsourcing of components from the US and Europe, show an increase in demand. Please refer to the following graph that shows the trend.
The report by the ARC Advisory Group suggests that many leading manufacturers such as Dixie, HAAS, Traub, Vomard, Makino, BFW, HMT, and others provide customized offerings that address needs ranging from low-end, low-cost machines to high-cost, high-precision machines.
Conclusion
The paper has analyzed different types of CNC machines and conducted an in-depth study of tooling, work holding devices, spindle power, and other parameters for the CNC machining centers.
References
ARC. 2008. CNC Worldwide outlook. Web.
Bannister, Ken. 2006. Programming of CNC Machines: Student Workbook, 2nd edition. Industrial Press, Inc. ISBN-13: 978-0831131623.
Since the advent of the networking of computer systems through intranets and the internet, the security of data and information has always been threatened by unauthorized access, use, and modification of data. Weak computer security can not only affect government and state security but could also cause the collapse of the economy. As new threats are continually devised by skilled computer hackers and individuals who want to profit or simply disrupt a specific computer network, the government should redouble its efforts in passing new laws that would deter attacks on computer networks.
It happens time and again that government prosecutors find it difficult to prosecute apprehended computer offenders due to the lack of appropriate laws covering specific acts. It is only after the fact of commission that the government can react and pass laws that address such acts. The government is merely reactive to circumstances when passing computer security laws, but it could be proactive and pass laws that cover possible computer violations and intrusions.
This paper will present the laws and acts enacted by Congress that would penalize cyber crimes and strengthen the computer networking system. Although the focus would be current laws involving computer security, previous laws shall also be cited to provide a historical perspective on the development of the laws and acts. Moreover, later laws are passed to cover issues that were not addressed by the previous laws. Cases of computer breach will also be cited to show how they affect legislation.
Introduction
The computer system is under constant threat from various sources: individuals, groups, and even other governments. The computer system can be attacked internally, by unauthorized users who may be employees of the company, and externally, by hackers who want to steal information or simply disrupt the operating system or programs. With the rise of the internet, voluminous valuable data and information pass through international boundaries in commercial, banking, and financial transactions. Intelligence and defense information accessed or damaged by unauthorized persons can disrupt the stability of a country.
The internet has become part of everyday life, including email messaging and online purchases (Smith, Moteff, Kruger, Seifert, Figliola & Tehan, 2005). Of retail purchases made online in November 2004, 69 percent were made over broadband connections and 31 percent through dial-up (Nielsen//NetRatings, 2005, as cited in Smith, Moteff, Kruger, et al., 2005). Out of total retail sales of $938.5 billion for the fourth quarter of 2004, $18.4 billion came from e-commerce retail sales (U.S. Census Bureau, 2004, as cited in Smith, Moteff, Kruger, et al., 2005).
Computer security is associated with the vulnerability of a computer while it is connected to a network of computers (Kinkus, 2002). Computer security has three areas of concern (referred to as CIA) that should be addressed: a) confidentiality (access only by authorized users), b) integrity (protection of information from unauthorized changes that are not detected by authorized users; this also relates to privacy), and c) authentication (verification of users) or availability (access to information by authorized users) (Kinkus, 2002). Privacy of user information is the most important of the technical areas (Kinkus, 2002), since personal data must not be shared unless the user consents thereto.
Pieces of information about the user can be taken from various sources and combined to give a holistic picture of the user's search habits (Kinkus, 2002). The user must have complete control over the information provided, the purposes for which it is used, and who can use it (Kinkus, 2002). Breaching these technical concerns is considered a crime in several jurisdictions and is referred to as cyber crime.
Context of the Problem
Cyber crime refers to activities committed using a computer intended to harm a computer and network (McConnell International 2000). Computer crimes have gained international attention but laws against such acts are unenforceable in other countries (McConnell International 2000). The absence of legal protection can only mean that establishments have to implement technical protection to hinder unauthorized access or prevent destruction of information (McConnell International 2000).
The commission of cyber crimes continues to increase, but victims of illegal access prefer not to report them since doing so would expose their technical weaknesses, invite copycat crimes, and cause users to lose confidence in the system (McConnell International 2000, p. 1). It is incumbent upon the government to provide sufficient protection to public and private computer networks and systems to avert huge financial losses and damage through appropriate regulation and passage of laws.
Problem Statement
The rise of the internet has paved the way for new modes of communication and commercial transaction. Valuable information stored electronically is also transmitted through this technology. Along with this development, individuals with criminal intent find this technology a lucrative target. They continually seek ways to commit offenses, either to profit or simply to damage a system or information, and they look for weaknesses in computer systems so that they can break into them. The government continually addresses the problem of computer security through the passage of laws that penalize certain internet activities and regulate the system through guidelines and standards. But cyber crimes are not deterred by the laws and still occur. The government should pass laws that would totally eliminate the commission of internet crimes.
Hypothesis
The laws and Acts passed by the government have sufficiently provided security to computer systems and maintained the privacy of information against unauthorized intrusion, access, and damage.
Research Questions
What are the cyber crimes that affect computer security?
What are the laws and Acts passed by the government to bolster computer security and protect information against illegal access and damage?
How much damage and loss do cyber crimes inflict upon computer networks and resources?
Did the laws and Acts deter the commission of cyber crimes?
Terms and Definitions
Act: a statute passed either by the Federal Congress or a State Congress. All statutes generally fall under the term law.
Artefact: same as artifact; the term used by social constructionists when referring to a technological device.
CALEA: Communications Assistance for Law Enforcement Act of 1994.
Closure: in SCOT, the stage wherein the meanings attributed to an artefact stabilize and further innovation to the device ceases.
Computer: a machine consisting of hardware, software, peripherals, and accessories. It needs software consisting of programs and applications in order to function as intended.
Computer Security: the implementation of standards and guidelines, and the technical and software applications, that protect the computer system as well as the information stored and transferred electronically from one computer to another.
Computer System: the hardware, software, and interconnections that enable the transfer of information and communication through an electronic gateway.
COPPA: Children's Online Privacy Protection Act.
Cyber crime: a crime committed upon a computer system or database with the use of a computer.
ECPA: Electronic Communications Privacy Act of 1986.
FACT: Fair and Accurate Credit Transactions Act.
FCC: Federal Communications Commission.
FCRA: Fair Credit Reporting Act.
Federal law: a statute or Act promulgated by the federal Congress.
FISMA: Federal Information Security Management Act of 2002.
GLBA: Gramm-Leach-Bliley Act.
Internet: the interconnectivity of computer systems around the globe.
Intranet: a network of computers within a closed system or a single organization; a firewall protects the system from outside access.
Law: a generic term that includes statutes, Acts, presidential issuances, etc. passed by federal or state governments and other government institutions authorized to pass such issuances.
NIIPA: National Information Infrastructure Protection Act of 1996.
Relevant social group: a group of users in society that exerts some influence upon the development of technology and ascribes meaning to the artefact.
SCOT: Social Construction of Technology.
Limitations
The materials included in this paper are sourced from internet websites that provide commentaries on computer security laws, copies of the laws themselves, and news items. All the laws are public documents and are readily available for public use on the net. Since the laws and articles that would be available in libraries can also be located on the internet, this student availed of the latter mode to search for data. No statistical correlation is included in the paper beyond the presentation of figures that correspond to the damage or loss caused by cyber crimes.
Delimitations
The laws and Acts passed by State Congresses vary, as there are many states in the US. Such laws and Acts have different contents and requirements, different definitions of specific acts, and different conditions under which the laws apply; therefore, they are intentionally excluded from the discussion in this paper.
Assumptions
From the numerous laws and issuances passed by the federal government and institutions, this student assumes that they have adequately addressed the need to protect the computer systems. The government is doing its utmost in order to maintain the integrity of the computer infrastructure and protect valuable information from passing to unscrupulous individuals preying upon any weakness in the computer system.
Theoretical Support
Privacy is a socially constructed value that should be upheld as the foundation of other rights of an individual such as freedom, the right to property, the right to associate, etc. (Levine, 2003). Privacy extends to computer systems that store personal information in digital form. Banks, hospitals, and other commercial firms possess personal information about those who avail of medical, financial, or banking services. Technology develops within society to meet specific needs of individuals in the community.
The Social Construction of Technology or SCOT is a theoretical framework that views a social group as an active participant in the construction of technology (Bijker, 1995, as cited in Engel, 2006). SCOT is the first constructivist outlook that views development in technology as a social process that shapes society and is shaped by society (Engel, 2006, p. 2). Technology develops in response to a perceived need of society. The users in that society react to the technological development or innovation. SCOT is also utilized in exploring the issues concerning anonymity of users, online payment, security and privacy (Phillips, 1998).
The users, as relevant social groups, are not passive end-users but participate actively in further innovating the technology (Engel, 2006). Different social groups give different meanings to an artefact (i.e., a technological device), which allows for different forms of the device (Bijker, 1995, as cited in Engel, 2006). It is when a dominant meaning prevails that the flexibility of forms slows down until a closure occurs (Bijker, 1995, as cited in Engel, 2006). The users are capable of influencing the development of the technology through the different meanings attributed to it, which give rise to different forms, and they thereby contribute to the construction of the technology (Engel, 2006).
The computer and internet technology can have different meanings for various users (Engel, 2006). They can use the technology according to the meaning they ascribe to it. Thus, one group may use it for social networking, another for remote teleconferencing, or for banking services. However, a group of users can also attribute a meaning to the device whose purpose is to inflict damage or gain profit.
There is a constant shaping between society and technology (Bijker, 1995). The computer system developed as a stand-alone machine. Later, the computer was able to connect with other computers through a network of cables within a closed system. The internet allowed interconnection with other computer systems across boundaries to other organizations. In all the stages of these developments, the users exert some influence (Engel, 2006). The meanings of a technological artefact in a developed country may not differ from the meanings of the relevant social groups in developing countries since the former can transfer the meanings together with the artefact (Engel, 2006).
There has been continuous innovation introduced into the artefact under the two-way influence of technology and society. At present, there is no stabilization of meaning or closure, since user groups continually introduce changes into the device. The user group that causes damage or loss to the computer system continually challenges the computer's security setup and finds new modes by which to break into it.
The government, as another user group, has to pass laws and Acts that criminalize activities that infiltrate the computer system, since such activities violate privacy and confidentiality or profit from illegal acts. The laws also impose fines for the damage caused to the institutions infiltrated. As new acts are perpetrated against computer security, the government must keep up with new laws that properly define such acts so that the offenders can be prosecuted. The offender who causes the damage and loss should not only be sanctioned with fines but also be penalized under the criminal justice system, since the extent of the damage is widespread, with pecuniary losses reaching billions of dollars.
The government also prescribes standards and guidelines for organizations that store information and offer online services to the public in order to strengthen their computer security, and these should be regulated by government agencies to ensure compliance. With the interplay of the various user groups in society, the consumers, the organizations offering financial or banking services, the organizations that hold personal information (e.g., hospitals), the software programmers, the hackers and offenders, and the government, the artefact changes in order to make computer security invulnerable to cyber attacks. While the programmers make new programs to hinder existing threats, whether on their own initiative because the software can be sold to users or at the prodding of an existing client that uses the software company's application to run its information management, the government must seek ways, through standards, regulation, and laws, to strengthen computer security.
Since computer systems within the US can be accessed via the internet by offenders in other jurisdictions, social construction occurs on a global level. That is why great powers as well as established international organizations encourage all countries to codify their laws to cover cyber crimes so that prosecution would be facilitated on all fronts, locally and internationally. On the global scale, the user groups would include the states and countries, international bodies, and international corporations.
Significance of the Study
This study contributes to existing studies that explore the effectiveness of laws passed to address the problem of computer security. No study has been identified that addresses the effectiveness of the laws in deterring cyber crime. Thus, this paper can provide the groundwork for future studies in this area of research.
Research Design and Methodology
This paper uses a quantitative research design and methodology in exploring the impact of the laws and Acts passed on deterring cyber crimes. The research design is non-experimental: no variables are manipulated; rather, the relationship between the variables is established (Belli, 2008). The variables, the laws and cyber crimes, are analyzed as they exist because they cannot be manipulated (Belli, 2008). Literature, laws and Acts, and available statistics are included in the research to determine whether the laws are able to maintain the integrity of computer systems and information, and to identify newer offenses not yet addressed by legislation.
Organization of Study
Data will be gathered from available literature on the internet on the kinds of cyber crimes already addressed by law. Also to be explored are the activities that affect computer security but cannot be prosecuted criminally because they are not defined as crimes by existing law. The extent of damage, frequency of commission of cyber crimes and cost of loss will be correlated with the laws already passed in order to determine if specific crimes are deterred.
Types of Cyber Crimes: Damage, Loss and Prosecution
The Federal Bureau of Investigation has a four-fold mission to counter cyber crime, namely: a) to stop those behind the most serious computer intrusions and the spread of malicious code, b) to identify and thwart online sexual predators who use the Internet to meet and exploit children and to produce, possess, or share child pornography, c) to counteract operations that target U.S. intellectual property, endangering … national security and competitiveness, and d) to dismantle national and transnational organized criminal enterprises engaging in Internet fraud (U.S. Department of Justice, n.d., para. 1). These FBI objectives reflect the common illegal acts committed on the internet.
There are a number of computer or cyber crimes that can impact privacy and invade the computer system illegally. Hacking is infiltrating a system without authorization to access confidential information, or entering into a transaction under false representation (Go, 2009). In phishing, spurious emails are sent to a user with links that lead the user to a fake website (presented as the authentic or real website of a company) that extracts usernames, passwords, or credit card data (Go, 2009). Pharming is an online fraud that redirects users to a fake website that looks authentic in order to steal relevant information (Online Fraud: Pharming, 2010). The user who wants to access a website is redirected to the fake website without knowing it, even if the correct web address is entered into the browser (Online Fraud: Pharming, 2010).
The creation and deployment of viruses (programs that replicate themselves) that can harm the computer system without the knowledge of the user (Go, 2009) is a common cyber crime. A virus is a software program attached to a file (e.g., a document or spreadsheet) in order to spread through the computer system (Kutner, 2001). The virus runs once the file is opened and then attaches itself to other programs and replicates itself (Kutner, 2001). An email virus is attached to an email and reproduces itself by sending emails automatically to everyone stored in the email address book (Kutner, 2001). There is also the worm, which uses the internet to find vulnerable servers on which it can reproduce (Kutner, 2001). The Trojan horse presents itself as a game or other program but can delete hard drive contents or block the screen with graphics (Kutner, 2001).
In identity theft, the criminal takes money, receives benefits, or purchases goods using the identity or credit card of another person (Go, 2009). Identity theft is carried out by cyber criminals through phishing and pharming (Brody, Mulig, & Kimball, 2007). Cyberstalking (which usually preys on women and children) is a crime wherein the criminal stalks a person by sending emails and threats as well as by disseminating false information (Go, 2009).
As reported in the U.S. Uniform Crime Reporting Statistics, there are more than 300 million internet users worldwide (starting in the year 2000), with 1 million of them engaged in cyber crimes (Computer Crime Definitions, 2010). As of 2004, $30 billion had been spent on the maintenance of computer security (Computer Crime Definitions, 2010). In the survey conducted by the Computer Security Institute (CSI) and the FBI of 538 private and government institutions, it was reported that as of the year 2000, 85 percent had experienced breaches in security (Computer Crime Definitions, 2010).
The breaches caused financial loss to 65 percent of the respondents, while 35 percent (186 firms) quantified their losses at a total of $378 million (Computer Crime Definitions, 2010). Three hundred seventy-seven (377) respondents said that the breaches occurred through internet connectivity (Computer Crime Definitions, 2010). Internal attacks are committed by disgruntled and terminated employees (Computer Crime Definitions, 2010). Organized crime groups even recruit telecommunication experts to commit fraud, piracy, and money laundering (Computer Crime Definitions, 2010, para. 3).
One of the first to be prosecuted under the Computer Fraud and Abuse Act was Robert T. Morris, a Cornell University student, who deployed a worm to show the vulnerability of computer security but miscalculated the speed at which the worm replicated itself; by the time he publicly released the instructions on how to kill the worm, it had infected around 6,000 computers, causing them to crash (Computer Crime, 2010). The damage suffered ranged from $200 to a maximum of $53,000 for each organization (Computer Crime, 2010).
A computer science student created a virus that momentarily disrupted the operations of military networks and contractors, as well as universities, in 1988, although no files or data were destroyed (Gerth, 1988). The case was unprecedented, with no previous case of this kind having been prosecuted (Gerth, 1988). The Secret Service admitted difficulty in the investigation because numerous computers were affected (Gerth, 1988). Smith, Moteff, Kruger, et al. (2005) stated that the full extent of the computer security problem cannot be known.
A gang of hackers (called Masters of Deception) was also prosecuted and indicted in 1992 under the Computer Fraud and Abuse Act for unlawfully obtaining computer passwords, illegal possession of long-distance call card numbers and wire fraud (Computer Crime, 2010).
Phishing activities and related fraud reached $1.2 billion annually, with around 57 million US citizens targeted in 2004 (Phishing, n.d.). The bill (Anti-Phishing Act of 2005) proposed by US Sen. Patrick Leahy, which aimed to penalize phishing and pharming with a maximum fine of $250,000 and maximum imprisonment of five years (Phishing, n.d.), was never passed into law (S. 472, 109th Congress, 2005).
Federal Laws on Cyber Crime
The federal government generally does not regulate the security of private computer systems but merely requires protection of specific information under the control of private systems against illegal access and dissemination (Moteff, 2004). Even the control of domain name (Domain Name System or DNS) has been transferred from the federal to the private sector (Smith, Moteff, Kruger, et al., 2005).
The enactment of the Counterfeit Access Device and Computer Fraud and Abuse Act in 1984, the first computer crime law, criminalized the infliction of damage on computer systems, networks, hardware, and software, and made wrongful the act of obtaining financial and credit data protected by statute (Computer Crime, 2010).
There are laws enacted to protect privacy and personal information held by the government and private institutions, such as the Gramm-Leach-Bliley Act (specific provisions under Title V) (Moteff, 2004; Smith, Moteff, Kruger, et al., 2005), the Health Insurance Portability and Accountability Act of 1996 (specific provisions under Title II), and the Sarbanes-Oxley Act of 2002 (which mandates accounting firms to certify the integrity of their control systems as part of the annual financial reporting requirements) (Smith, Moteff, Kruger, et al., 2005). The privacy concern is confined to financial information (under the Gramm-Leach-Bliley Act, Title V) and medical information (under the Health Insurance Portability and Accountability Act of 1996) (Moteff, 2004). The Secretary of Health is authorized to prescribe the standards to be used in the protection of medical information (Moteff, 2004).
Under the Health Insurance Portability and Accountability Act of 1996, healthcare institutions must comply with the standards set by the Secretary to ensure the confidentiality of medical information and records transferred electronically (Fogie, 2004). The development of standards on financial control under the Sarbanes-Oxley Act and their enforcement are handled by the Securities and Exchange Commission, which has the authority to prescribe standards and enforce these regulations (Moteff, 2004).
Further laws that prohibit the disclosure of personal information of consumers include the Federal Trade Commission Act (Section 5) and the Fair Credit Reporting Act (FCRA) (Smith, Moteff, Kruger, et al., 2005). Congress has also passed laws to protect identity, such as the 1998 Identity Theft and Assumption Deterrence Act, the 2003 Fair and Accurate Credit Transactions (FACT) Act, and the 2004 Identity Theft Penalty Enhancement Act, with corresponding remedies for victims of identity theft (Smith, Moteff, Kruger, et al., 2005). The Children's Online Privacy Protection Act (COPPA) was passed by Congress in 1998 (Smith, Moteff, Kruger, et al., 2005) to regulate the collection of personal information by websites created specifically for children (Children's Online, 1998).
For acts committed against computer security before any cyber crime laws had been passed, government institutions used commerce and federal telecommunications laws to prosecute computer hackers (Fogie, 2004). The US Congress passed in 1984 the Computer Fraud and Abuse Act, the first computer crime statute (Fogie, 2004), which made it a crime to intentionally access government computer systems without approval and thereby disrupt their normal operation (Gerth, 1988). It was later amended in 1986 and 1994 (Fogie, 2004). It further penalizes the use of a password without authority to access a computer system or to accomplish fraudulent acts (Fogie, 2004). The penalty for violation of the Act consists of a fine of $5,000 or twice the damage done or benefit gained and one year of imprisonment for first-time offenders, and a maximum fine of $10,000 plus two times the damage done or gain and imprisonment of ten years for second-time offenders (Gerth, 1988).
The 21st Century Department of Justice Authorization Act mandated the Department of Justice to report to Congress on its use of the DCS 1000 software and similar programs at the end of fiscal years 2002 and 2003 (Smith, Moteff, Kruger, et al., 2005). Earlier, the FBI had installed DCS 1000 (previously called Carnivore) in the systems of ISPs (Internet Service Providers) to intercept email messages and surfing activities (Smith, Moteff, Kruger, et al., 2005). The FBI said that it ceased using DCS 1000 and substituted identical commercial software instead (Smith, Moteff, Kruger, et al., 2005).
The Electronic Communications Privacy Act of 1986 (ECPA) updated the Federal Wiretap Act of 1968 (The Federal Wiretap, 2010) to cover the interception of electronic communications and deliberate illegal access to electronically stored data (Fogie, 2004, para. 10). ECPA applies to both private and government institutions to protect against unauthorized access and disclosure of electronic communications (The Federal Wiretap, 2010). Although the Act did not specifically mention email messages as covered by the protection, decisions of U.S. courts have held that they should be included (The Federal Wiretap, 2010). ECPA caused modifications in company policies and procedures in that, at present, companies have to inform telephone callers that the conversation is recorded for quality control (The Federal Wiretap, 2010).
The U.S. Communications Assistance for Law Enforcement Act of 1994 (CALEA) introduced changes in wiretapping activities by enjoining telecommunication companies to allow wiretapping by law enforcers provided a court order is duly issued (Fogie, 2004). Through CALEA, law enforcement agencies can still perform surveillance while the privacy of individuals is assured (Ask CALEA, 2009).
The National Information Infrastructure Protection Act of 1996 (NIIPA) defined more computer crimes to enhance protection of computer systems (Fogie, 2004). NIIPA also extended the protection to computer systems used in local and international commercial transactions and communications (Fogie, 2004). The law substantially amends the precursor Computer Fraud and Abuse Act of 1984 (which was amended in 1986 and 1994) (National Information Infrastructure, 2010).
The Gramm-Leach-Bliley Act (GLBA), otherwise known as the Financial Services Modernization Act of 1999 (The Gramm-Leach-Bliley, n.d.), limited the circumstances in which a financial firm can divulge consumer personal information to non-affiliated third parties (Fogie, 2004). Financial agencies are also mandated to reveal their privacy policies and procedures on such information sharing with affiliates and non-affiliated third parties (Fogie, 2004). Private financial records (e.g., balances, account numbers) are regularly sold and purchased by banks, credit card companies, and financial firms (The Gramm-Leach-Bliley, n.d.). The law also provides protection for persons against pretexting (i.e., gaining personal information through fraudulent pretenses) (The Gramm-Leach-Bliley, n.d., para. 1).
The USA PATRIOT Act (enacted after the September 11, 2001 attack) expanded the government's intervention in privacy rights over the internet (Smith, Moteff, Kruger, et al., 2005). Under this law, an ISP is authorized to disclose records and information (excluding the content of messages) of a subscriber to specific government agencies if it believes that death or injury might occur (Smith, Moteff, Kruger, et al., 2005). Section 225 of the Homeland Security Act of 2002 amended the provision on disclosure so that the ISP is now authorized to disclose the content of the communication to a local or federal agency on the same grounds (Smith, Moteff, Kruger, et al., 2005).
Laws that Strengthen Computer Security
Laws are also enacted to strengthen computer security besides penalizing the wrongdoer. For instance, the Computer Security Act of 1987 was enacted to strengthen the security of government computers and thus make it difficult for external computers to infect the system with virus (Gerth, 1988). Strengthening the network must be accomplished along with the passage of more laws that would penalize cyber crimes, Democrat Sen. Patrick J. Leahy (Vermont) said (Gerth, 1988).
The Homeland Security Act of 2002 authorizes the Department of Homeland Security to work with the private sector in protecting the information infrastructure (Moteff, 2004). The passage of the Federal Information Security Management Act in 2002 granted the head of the Office of Management and Budget supervisory authority over the drafting of standards and security guidelines and conformance thereto (Moteff, 2004). Excluded from that authority are computer systems used for national security (governed by National Security Directive 42) (Moteff, 2004). Homeland Security Presidential Directive No. 7 and the National Strategy for Securing Cyberspace further bolster the department's role in security reinforcement (Moteff, 2004).
The Telecommunications Act of 1996 granted authority to the Federal Communications Commission (FCC) to act if it determined that broadband was not being deployed in a reasonable and timely manner (Smith, Moteff, Kruger, et al., 2005, p. CRS-4). Pres. Bush even endorsed, on March 26, 2004, the deployment of universal broadband access without taxes (Smith, Moteff, Kruger, et al., 2005). The Critical Infrastructure Board (created by E.O. 13231, issued by Pres. George W. Bush, and later dissolved by E.O. 13286) issued the National Strategy to Secure Cyberspace, which enumerated the responsibilities of the Department of Homeland Security in protecting the information infrastructure (Smith, Moteff, Kruger, et al., 2005). The National Cyber Security Division (NCSD, under the Information Analysis and Infrastructure Protection Directorate) managed the department's cybersecurity activities (Smith, Moteff, Kruger, et al., 2005).
The federal Computer Fraud and Abuse statute was passed as part of the Comprehensive Crime Control Act of 1984 (making it a federal crime to gain unauthorized access to, or damage, government and private computers that deal with banking and foreign commerce) (Smith, Moteff, Kruger, et al., 2005). The Federal Information Security Management Act of 2002 (FISMA) lays down the primary statutory requirements for securing federal computers and networks (Moteff, 2004). FISMA was founded upon the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, and the Information Technology Management Reform Act of 1996 (Moteff, 2004). This Act mandates all agencies to keep an inventory of all computer systems, to identify the security protection needed and provide measures to address that need, and to develop, document, and implement an agency-wide information security program (Moteff, 2004, para. 10).
Conclusion
Numerous laws have been passed that cover computer security and the protection of information for specific institutions, the government, and the private sector. Even FTC Chairwoman Majoras, during the Senate Banking Committee hearing on March 10, 2005, called the existence of numerous laws on data protection in the various government and private institutions a complicated maze (Smith, Moteff, Kruger, et al., 2005).
Many possible threats have already been identified and addressed by the laws enacted to date. The voluminous laws should be re-codified so as to streamline them, thus making enforcement, regulation, and prosecution easier. Newer computer acts that cause damage or financial loss occur on the net and cannot be penalized or sanctioned because the law does not define them. And if one jurisdiction has defined the act as a crime, the law cannot be enforced in another jurisdiction or country when the latter has no equivalent law or does not recognize the criminal law of the country whose citizens suffered the loss or damage.
It is therefore necessary to create a comprehensive cyber crime law that would define all computer crimes, present and future, even those unknown at present. This would facilitate the prosecution of offenders and deter others from committing cyber crimes and devising new means to infiltrate computer security. As has previously occurred, government prosecutors find it difficult to handle a case due to the lack of a supporting law. The government has the primary authority in regulating computer security matters, and it should assume full responsibility for the task.
On a wider scale, not all countries have strictly enforced computer security measures, and a number of states still do not criminalize certain malicious acts on the internet. A study supported by the World Information Technology and Services Alliances (WITSA, an international organization composed of 41 IT industry organizations) revealed that only nine countries out of the 52 studied criminalized certain acts involving cyberspace (Fogie, 2004). Without cooperation from all countries, there will be gaps in the international legal system in which cyber criminals can still commit crimes and find refuge in the holes in the law. Only when there is a global move to prosecute and penalize cyber crimes, together with the strengthening of computer systems, can breaches of computer security cease.
References
Ask CALEA. (2009). Web.
Belli, G. (2008). Nonexperimental Quantitative Research, pp. 59-77. Web.
Brody, R.G., Mulig, E., & Kimball, V. (2007). Phishing, pharming and identity theft. Academy of Accounting and Financial Studies Journal. AllBusiness. Web.
Children's Online Privacy Protection Act of 1998. Web.
Computer Crime. (2010). TheFreeDictionary. Web.
Computer Crime definitions, Types of computer crimes, Anti-cyber-crime legislation, Enforcement agencies, International computer crime. (2010). Free Encyclopedia of Ecommerce. Web.
Engel, N. (2006). Technology users in developing countries: Do they matter? Web.
Fogie, S. (2004). Computer Crime Legislation. InformIT. Web.
Gerth, J. (1988). Intruders into Computer Systems Still Hard to Prosecute. The New York Times. Web.
Go, P. (2009). Types of Computer Crimes. EzineArticles.com. Web.
Kinkus, J.F. (2002). Computer Security. Science and Technology Resources on the Internet. Web.
Kutner, T. (2001). What's the difference between a Virus and a Worm? Web.
Levine, P. (2003, May-June). Information technology and the social construction of information privacy: Comment. Journal of Accounting and Public Policy, (22)3, pp. 281-285.
McConnell International. (2000). Cyber Crime… and Punishment? Archaic Laws Threaten Global Information. Web.
Moteff, J. (2004). Computer security: a summary of selected federal laws, executive orders, and presidential directives. Congressional Research Service (CRS) Reports and Issue Briefs. Web.
National Information Infrastructure Protection Act (NIIPA) of 1996. (2010). Free Encyclopedia of Ecommerce. Web.
Phillips, D.J. (1998). The social construction of a secure, anonymous electronic payment system: frame alignment and mobilization around Ecash. Journal of Information Technology, (13), pp. 273-284. Web.
Phishing. (n.d.). Phishing and Pharming Information Site. 2010, Web.
Ross, S.T. (1999). Computer Security: A Practical Definition. Unix System Security Tools. The McGraw-Hill Companies. Web.
S. 472, 109th Congress: Anti-phishing Act of 2005. (2005). In GovTrack.us (database of federal legislation). Web.
Smith, M.S., Moteff, J.D., Kruger, L.G., Seifert, J.W., Figliola, P.M. & Tehan, R. (2005). Internet: An overview of key technology policy issues affecting its use and growth. Web.
The Federal Wiretap Act of 1968 and The Electronic Communications Privacy Act of 1986. (2010). YourDictionary.com. Web.
The Gramm-Leach-Bliley Act. (n.d.). epic.org. Electronic Privacy Information Center. 2010. Web.
U.S. Department of Justice. (n.d.). Cyber Investigation. Federal Bureau of Investigation. 2010. Web.
The computer is perhaps the most iconic invention of current times, influencing multiple aspects of our lives, if not all of them. It has changed the way we work, our social life, and even our way of thinking. The world has now become one single platform where people can interact and do real business thanks to this great machine, the computer (Hall 156). Originally, the term computer meant a person with the ability to perform calculations of a numerical nature with the assistance of a mechanical computing device. The real computer revolution began in the 1930s, with binary computing becoming central to all aspects of computing ever since. The mechanical adding machine of 1642 is often regarded as the root of computer invention, alongside other early computing tools such as the abacus, John Napier's logarithms, and William Oughtred's slide rule.
The abacus is the earliest known ancestor of the modern computer, dating back more than two thousand years. It was simply a wooden rack holding parallel wires with beads attached. All forms of arithmetic operations could then be performed just by moving the beads along the wires according to set rules. In 1642, Blaise Pascal, to help his father, a tax collector, came up with the next phase of computer invention when he invented a digital calculating machine, which could only go as far as adding numbers entered through the turning of dials (Soma 32). Charles Babbage, a professor of mathematics, designed a steam-powered calculating machine capable of storing up to one thousand 50-digit numbers, and it included built-in operations that are vital to a general modern computer. Cards with holes, commonly referred to as punched cards, were used to program the machine, and these were also used to store data. However, most of the professor's inventions failed due to the lack of proper precision machining techniques and poor demand for devices of this nature (Soma 46).
Application in commercial industries
Interest in computers waned after the period of Babbage's inventions until it was reborn between 1850 and 1900 thanks to great advances in mathematics and physics. Some of this progress involved intricate arithmetic and formulas that had hitherto required a great deal of time and were very arduous for individuals to carry out. The renewed interest was well sustained, and in 1890 computers found major use in the conduct of the U.S. census. This was made possible through a punched-card system able to read the information on the cards automatically without depending on human assistance.
The computer then proved to be a crucial tool in tabulating the census totals, given that the U.S. population was growing extremely quickly. Commercial industries became aware of these advantages of computers, and soon new versions of punch-card machines specially made for business were developed by IBM and other corporations like Burroughs. The punched-card machines were heavily used in most businesses worldwide for computing, especially after businesses in other industries discovered that the machines could handle most of their work in a short time, saving much of the time spent on normal activities. They also found considerable application in scientific research, especially in analyzing acquired data, a function that is very significant in all fields of science. Machines of punch-card architecture remained in use for over fifty years after their introduction, and this marked the formal spread of computers to other critical fields like healthcare (Chposky, 1988).
Computers in Healthcare: Health Information Systems
From ancient times, healthcare has generally been about collecting information and processing it to identify the specific problem a patient suffers from in order to offer appropriate treatment. Hippocrates and Galen are known to be among the early physicians who documented the healing of their patients in order to improve care through the use of the documented information. However, it was not until the 19th century that technology started to be used in healthcare for diagnosis and eventual treatment. Hutchinson's device was one of the initial systems used, and it served the function of measuring the lungs' vital capacity. The application of technology in healthcare then underwent a revolution. Some of the popular technologies of this period include the thermometer, ophthalmoscope, x-ray, stethoscope, and microscope. The growth of medical technology and the specialization it required also increased the quantity of data necessary to make a diagnosis and administer treatment (Brighthub, 2010). Subsequently, medical records became significant as documents for keeping patient information, hence the need to organize the data and records in fast and efficient ways; from this point, the era of healthcare information systems was born.
A healthcare information system is, essentially, a computerized data system that performs the core functions of routine collection, analysis, reporting, and storage of information about all aspects of healthcare, including service delivery, demographic details, cost, and quality. These systems bear a significant relationship to most of the information systems used for business operations in companies, industries, institutions, and governments, which implies that the basic operating principles are largely similar. Therefore, the development of healthcare information systems can also be traced to the early evolution of computers, since the very same computers are used to perform the functions of healthcare information systems through specialized programming. For a long duration, in fact until the late 1960s, information systems in healthcare, as in most other industries, were paper-based. Relevant aspects of systems and technology had to evolve to cater to healthcare organizations. The early application of information systems in healthcare took place in the late 1960s and early 1970s, mostly focusing on financial operations. The clinical area also found considerable use for information systems, especially for capturing clinical information and supporting crucial medical decisions. It was during this period that several projects related to healthcare information systems were undertaken; a good example is the Warner project undertaken at the Latter-Day Saints Hospital in Salt Lake City, Utah (Merida, 2002).
In the recent past, the healthcare industry has experienced considerable growth, and information systems play a critical role in the provision of healthcare services. Large-scale applications have been implemented in electronic medical records; telemedicine, which enables remote diagnosis; the upgrading of hospital information structures; the use of public networks such as the Internet for distributing relevant information to patients and the public; and the setting up of intranets and extranets for sharing crucial information with stakeholders (Beaver, 2002). Currently, it is proposed that healthcare should rely heavily on information systems to cut costs to reasonable levels. Healthcare spending has been increasing at an alarming rate, coupled with the growing impact of chronic diseases among the aged, whose dependency on the healthcare system has also increased in recent years. In research, several projects aim to accelerate the adoption of more established healthcare information systems. One of them is I-Living, essentially an assisted-living support system under development by researchers at the University of Illinois, Urbana-Champaign. There is also a smart in-home monitoring system in progress at the University of Virginia, which emphasizes data collection using a low-cost suite of non-intrusive sensors (Durresi & Barolli, 2008).
Conclusion
In conclusion, healthcare information systems have evolved gradually to become significant in medical practice and all related fields. From diagnosis and treatment to the secure and convenient storage of patient information for various valid purposes, these systems have proven their worth in both cost-cutting and ease of patient handling. Governments and relevant institutions worldwide now need to work closely with the IT industry to clearly define the role of information systems in healthcare, which will be vital for the security of these systems. Overall, healthcare information systems are vital and should be encouraged in all organizations to improve the quality of healthcare, a fundamental need for all human beings.
Works Cited
Beaver, Kevin. Healthcare Information Systems, Second Edition (Best Practices). New York: Auerbach Publications, 2002.
Brighthub. Evolution of Medical Technology. 2010. Web.
Chposky, James. Blue Magic. New York: Facts on File Publishing, 1988.
Durresi, Arjan, and Leonard Barolli. Secure Ubiquitous Health Monitoring System. New York: Springer, 2008.
Gulliver, David. Silicon Valley and Beyond. Berkeley, CA: Berkeley Area Government Press, 1981.
Hall, Peter. Silicon Landscapes. Boston: Allen & Irwin, 1985.
Merida, Johns. Information Management for Health Care Professions (The Health Information Management Series). Kentucky: Delmar Cengage Learning, 2002.
Soma, John T. The History of the Computer. Toronto: Lexington Books, 1976.
Technological innovation is always at the forefront of computer component technology, with advances in production and development resulting in faster, better, and lighter products (Bursky, 26). These changes can be seen in the improved memory capacities of today's hard disk systems, the increased processing power of processors, new innovations in disk drive technology that enable greater media storage capacities, and new technologies enabling better component cooling (Bursky, 26). It must be noted, though, that not all technological innovations are actually adopted by the general population. For example, during the late 1990s and early 2000s one of the latest innovations in external storage was the ZIP drive. At the time it was thought of as a revolutionary concept in external storage, yet it was never truly adopted by the general population because subsequent advances in drive technology enabled people to burn information onto CDs. Nicholas Carr, in his article "IT Doesn't Matter," which examines the use of technologies and their implications for society, explains that technologies only become cheaper, and their use widespread, once they reach their build-out completion. The term build-out completion refers to a point in technological development at which a type of technology has reached commercial viability and can be effectively replicated and mass produced. Carr explains that so long as certain forms of technology have not reached build-out completion they will most likely never be adopted, due to their prohibitive costs and the uncertainty attached to the technology itself. This lesson can be seen in the case of ZIP drive technology, where the uncertainty behind its use led to it never being adequately adopted by the general population. It can therefore be assumed that not all technological innovations will be adopted by the general population, and this includes several of the new innovations currently being released in the market today. For example, the advent of 3D computer screens is heralded by many as a possible new standard for computer viewing, yet industry data shows that not only is its usage unwieldy for the average user, it serves no purpose for normal computer tasks such as word processing or using the Internet.
When trying to determine what the future holds for computer component technology, the manufacturing process itself should also be taken into consideration. Lately, various consumer report groups have stated that certain PC components have increasingly been found to be designed to eventually break down due to the inferior materials used. This is indeed the case for certain components whose operational lives are limited to only a few years and which are not meant to last more than four years of continuous usage at most. The reason lies with changes in production methods: components are no longer built to last but are instead built around the current pace of innovation and consumer demand for cheap parts. While current trends in various computer innovations may seem like the future of computer component technology, it should not be assumed that they will attain general utilization, since factors such as consumer adaptability and their build-out completion need to be taken into account.
Future Computer Components and their Durability
When comparing computer components made recently with those constructed 12 years ago, it can be seen that older parts are bulkier, slower, and of course less advanced than recent creations, yet for some reason older computer parts seem to have a longer operational timeline than some of the newer parts. Operational timeline refers to the length of time a particular component is expected to work under normal operating conditions; on average this ranges from three to five years, depending on the rate of usage, before the part begins to fail. Yet studies examining the durability of various computer parts constructed in 1998 show that components used back then apparently still work all the way to 2012. While it can be argued that those particular components do not endure the same kind of punishment components undergo today, the fact remains that the operational timelines of components constructed within the past year are apparently getting shorter and shorter, with some parts lasting only two to three years before problems begin to occur. This rather strange phenomenon brings up an intriguing question: if advances in technology are supposed to make components better, why are they breaking down sooner than parts constructed in previous generations?
An examination of older computer parts shows that a majority of the components are far heavier than recently constructed parts. While it may be true that components get lighter as technologies improve, further examination reveals that older components seem to have a far sturdier construction than recently created parts, ranging from heavier plastics and stronger silicon to more metal in the parts themselves. This translates into greater durability over the long term compared with parts that use cheaper metals and lighter materials. Computer components today are made of far lighter materials, which does translate into better heat and electrical conductivity, but it also makes them far more prone to break down than parts constructed from heavier materials. The fact is that computer components created ten years ago were built to last significantly longer than parts made today. Various studies examining advances in computer component technology reveal that several parts manufacturers actually build their components to eventually break down. The reasoning is that, at the current rate of technological innovation, building parts to last no longer makes as much sense as it used to, since components are replaced with newer models and types on an almost yearly basis. More durable components translate into greater production costs, which result in higher component prices. With competition in the component industry determined by who can produce the latest product at the lowest price, it is not competitively feasible to sell components at a higher cost on the basis that they are more durable. In effect, durability no longer holds as much sway as it used to, because the fast pace of technological innovation almost ensures that companies and private individuals replace their computer components before they break down.
Another factor to take into consideration is that a majority of consumer buying behavior is geared more towards acquiring the latest parts rather than the most durable ones. Moreover, various studies examining consumer buying behavior show that more consumers buy components on the basis of low prices than on whether a particular component is durable. This in effect encourages companies to adopt a production strategy that focuses on producing cheap components while sacrificing durability. It can therefore be expected that computer components in the future will be manufactured in such a way that manufacturers intend for them to fail after a certain degree of usage. The inherent problem with this possible future is that while it benefits companies through a continuous stream of income, it does not benefit those segments of the population who cannot afford continuous component replacements necessitated by limited component life spans.
Switching from Air Cooled to Water Cooled Systems to Mineral Oil Cooled Systems
Nearly 85% of all computer systems in the world utilize traditional air cooling technologies to control the high temperatures created by the processor, the north and south bridges on the motherboard, the video card, and the various other components within a computer that generate heat through constant usage (EDN, 16). This process usually involves a metal plate attached directly to a particular component, with a set of metal attachments connected to fans from which the heat is dissipated as cool air is circulated over the component by the fan system (EDN, 16). Auxiliary fans are also usually mounted on the PC casing so that cool air from outside the case is drawn inside while warm air is removed by another fan. This setup has been used for the past 17 years in a majority of computer systems and remains one of the dominant methods of cooling PC components (Goldsborough, 30). Unfortunately, due to the increased temperatures produced by various parts, this system has begun to reach the limits of its effective usability. While fan systems are effective in bringing cold air into a case, they are ineffective in keeping temperatures low inside it over a prolonged period of time. In fact, over time fan systems fail to keep temperatures within the nominal levels required for proper operation, and this often leads to parts breaking down earlier than they should because of the high case temperatures the fans are unable to control. The result is often burned-out components, sudden computer shutdowns, and the variety of other failures normally associated with inadequate cooling. One method companies have used to resolve this issue is to keep the rooms where large numbers of systems are located below a particular temperature with air conditioning units. While this method of controlling growing PC temperatures is effective in the short term, it is costly in the long term, so an alternative means of solving the problem is needed. In the past five years, one of the growing alternatives has been to use liquid cooling systems as a replacement for traditional fan cooling (Goldsborough, 30). Liquid cooling systems use a series of tubes containing a mixture of distilled water and various coolants, which circulate towards a cooling plate attached directly to the component producing the heat (Upadhya and Rebarber, 22). Heat exchange occurs when cool water from the system hits the heat sink and absorbs the heat, transferring it away through the series of tubes towards a radiator that expels the heat and cools the water down (Upadhya and Rebarber, 22). Unlike conventional fan systems, liquid cooling can reduce heat up to 60 percent more effectively and produces far less noise, resulting in better component longevity and better long-term performance from a PC (Upadhya and Rebarber, 22).
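To make the heat-exchange principle concrete, the following is a minimal sketch in Python of the governing relation Q = m·c·ΔT, using assumed values for the heat load and the pump's flow rate; the figures are illustrative and are not drawn from the cited sources.

# Rough estimate of coolant temperature rise in a liquid-cooling loop.
# All input values are illustrative assumptions, not measurements.
component_power_w = 150.0          # heat load from the CPU/GPU block, in watts (assumed)
flow_rate_l_per_min = 1.5          # pump flow rate in litres per minute (assumed)
specific_heat_j_per_kg_k = 4186.0  # specific heat of water, J/(kg*K)
density_kg_per_l = 1.0             # approximate density of the water/coolant mix

# Convert the flow rate into a mass flow in kg/s.
mass_flow_kg_per_s = flow_rate_l_per_min * density_kg_per_l / 60.0

# Steady-state temperature rise across the cold plate: Q = m_dot * c * dT.
delta_t = component_power_w / (mass_flow_kg_per_s * specific_heat_j_per_kg_k)
print(f"Coolant warms by roughly {delta_t:.1f} degrees C per pass")

Under these assumptions the coolant warms by only about 1.4 degrees Celsius per pass, which is why a modest radiator is enough to return it to near-ambient temperature before it reaches the cold plate again.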
It must be noted, though, that liquid cooling systems use more electricity than traditional fan cooling systems because of the pump and radiator required to dissipate the heat, and such systems are also more maintenance-oriented because coolant levels must be checked to ensure there is enough liquid for the system to work properly. Unfortunately, the use of liquid cooling systems across the consumer market is still isolated to a few select groups such as gamers, graphics artists, and other users who run systems with high core temperatures. One reason could be the price of liquid cooling systems, which is considerably higher than that of normal fan cooling systems. Nicholas Carr, in his article "IT Doesn't Matter," which examines the use of technologies and their implications for society, explains that technologies only become cheaper, and their use widespread, once they reach their build-out completion: a point in technological development at which a technology has reached commercial viability and can be effectively replicated and mass produced. Carr explains that so long as certain forms of technology have not reached build-out completion they will most likely never be adopted, due to their prohibitive costs and the uncertainty attached to the technology itself. Based on this, it can be assumed that another reason liquid cooling technologies have not achieved widespread use is that they have not reached their build-out completion, which creates uncertainty about the technology itself. An examination of current methods of liquid cooling shows that while the technology has taken great strides, effective systems are still rather unwieldy for the average computer user. This has created a certain level of consumer uncertainty about the product despite it being a far better alternative to fan cooling. It can therefore be assumed that general adoption of liquid cooling systems will come only after the technology has reached sufficient build-out completion that it can be used easily by the average computer user at a sufficiently lower cost. The general utilization of liquid cooling is, however, only one possibility for component cooling in the future; recently, mineral oil cooling systems have been gaining a substantial following, making them a possible contender as a primary cooling technology. Mineral oil cooling uses a different heat-exchange principle from either fan cooling or liquid cooling: a substantial amount of mineral oil is poured into a watertight casing so that all the computer components are immersed, after which the oil is pumped through a series of tubes into an external radiator to dissipate the accumulated heat. The technical aspect of this system is actually rather simple: heat from the various computer components is transferred directly to the mineral oil and is then cooled down by the external radiator.
This ensures that the parts rarely accumulate significant heat. It must also be noted that, because of the special qualities of mineral oil, it does not cause electrical shorts in the equipment, unlike when components are submerged in liquids such as water. As with liquid cooling systems, mineral oil technologies are far from their build-out completion stage, and so it cannot really be said whether mineral oil systems or liquid cooling systems will become the dominant form of component cooling in the future. It will depend on which system is the most feasible to commercialize in the immediate future and whether it can be adapted for general usage.
CPUs made out of Diamond
In relation to the heat dissipation technologies mentioned earlier, one of the latest breakthroughs in semiconductor research is the use of diamond-based semiconductors as a replacement for current silicon-based chipsets. What must be understood is that as consumers demand more performance and processing power from CPUs, companies in turn develop smaller transistors within processors in order to provide that processing power (Merritt, 12). Unfortunately, as CPUs get increasingly smaller, more sophisticated, and more powerful, the end result is greater difficulty with thermal dissipation. On average a single processor consumes nearly a hundred watts of electricity to maintain proper operation, and as the number of processes increases, so too does the power used (Merritt, 12). Dissipating heat from an area smaller than one square centimeter presents a significant problem for chip manufacturers, since the amount of heat produced reaches significant levels after a certain period of time (Davis, 37-38). With the release of Intel's Core i7 processors as well as the latest Intel Sandy Bridge processor, the result is chips that require increasingly sophisticated cooling methods which conventional fan systems are hard pressed to provide. While alternatives do exist, such as liquid cooling or mineral oil systems, these are still far from their build-out completion stage and so are not generally used by most consumers (Oskin, 70-72). This presents a significant problem for processor manufacturers such as Intel, since recent consumer trends show that, on average, consumers demand higher processing power nearly every two years to keep up with ever more sophisticated software. Unfortunately, silicon-based processors show signs of thermal stress once they reach temperatures of 100 degrees Celsius or more (Davis, 37-38). If consumer demands are to be met, companies would need to increase the capacity of their current processors, which would in turn increase the heat produced and very likely cause the processors to literally melt under the increased thermal stress. As technological innovation continues, companies increasingly find that the traditional materials and components they use have reached the limits of their usability and can no longer provide the structural basis their products need; this is the situation current processor manufacturers find themselves in (Oskin, 70-72). One possible alternative to silicon that processor manufacturers say shows potential is the diamond-based semiconductor. Diamond is the hardest known substance, and its inherent qualities give it useful properties for producing robust processors: diamond conducts heat better than silicon, has a high breakdown voltage, and possesses distinctly high carrier mobility. Moreover, whereas silicon-based processors show severe thermal stress when temperatures reach 100 degrees Celsius, diamond can endure several times that temperature with little ill effect.
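The scale of this thermal problem can be made concrete with a rough calculation. The short Python sketch below uses the approximate figures quoted in this section, about a hundred watts dissipated over roughly one square centimeter, as assumed illustrative values rather than manufacturer specifications.

# Heat flux for a processor dissipating a given power over a small die area.
# Values are illustrative assumptions based on the rough figures in this section.
power_w = 100.0        # approximate package power, in watts
die_area_cm2 = 1.0     # die area of roughly one square centimetre

heat_flux = power_w / die_area_cm2
print(f"Heat flux: {heat_flux:.0f} W per square centimetre")

# Halving the die area at the same power doubles the flux the cooler must handle.
print(f"At half the area: {power_w / (die_area_cm2 / 2):.0f} W per square centimetre")

At these values the heat flux is already on the order of 100 watts per square centimeter, and shrinking the die only drives it higher, which is precisely the pressure pushing manufacturers towards materials that conduct heat better than silicon.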
It must be noted, though, that various critics are skeptical about diamond-based CPUs because diamond is made of carbon and acts as an insulator rather than a semiconductor. One way around this that researchers have discovered is to dope the diamond with boron, turning it into a p-type semiconductor; by reversing the charge of the dopant, an n-type semiconductor can also be created. Both p-type and n-type semiconductors are needed to create a transistor, so the ability to create both from boron-doped diamond indicates a definite possibility of creating diamond-based processors in the future. Unfortunately, one of the inherent problems in using diamond for processors is its cost. Given the relative rarity of diamonds and the fact that they are coveted by the jewelry industry, their use as a replacement for current semiconductor technology seems infeasible, since it would not be cost effective for either consumers or manufacturers. Researchers found a way around this by creating a process that produces artificial diamond sheets that are purer than, and on par in hardness with, natural diamonds. The difference is that this diamond-like material can be molded into different shapes depending on the use required, and it is actually cheaper to produce than the semiconductor wafers used in today's processor industry. Based on this innovation, various experts agree that it might be possible for processors to reach speeds equivalent to 81 GHz without melting from the sheer amount of heat produced. It must be noted, though, that this type of technology is still in the testing phase and will not come into effect for at least another ten years.
Blu-ray Disks and the Future of Disk Drives
Disk drives have been a ubiquitous part of most computer setups for the past 18 years or so, making them among the best known and most used components in any computer. Of late, advances in drive technology have moved from initially allowing consumers to burn information onto a CD-ROM disk to eventually allowing them to watch and burn DVDs. The latest incarnation of these advances is the Blu-ray disk format, enabling consumers to store 30 to 40 gigabytes of data on a single disk (Digital Content, 39). The creation of Blu-ray technology follows a distinct trend in drive technology in which, every few years, drives and disks are created with higher storage capacities, faster speeds, better video resolution, and an assortment of added functions surpassing the previous generation. Initially, CD-ROM disks had a capacity of 700 MB; this changed when DVD-based technologies and drives came into the picture, with storage capacities reaching 4 to 6 gigabytes depending on the type of DVD bought. Today, Blu-ray technology enables storage capacities of up to 30 gigabytes, far surpassing the capabilities of DVD-based drives and disks. Based on this trend, it can be assumed that Blu-ray will not be the final version of disk drive technology but merely another evolutionary step (Perenson, 94). In fact, disk production companies such as Philips, Sony, Maxell, TDK, Optware, and Panasonic have already announced a potentially new form of media in the HVD disk, slated for release in the next few years. The HVD, or Holographic Versatile Disk, uses a new form of holographic embedding technology that stores data holographically on the surface of a disk, enabling greater capacity on a smaller surface area (Digital Content, 39). Estimates show that a single HVD has the capacity to hold up to 6 terabytes of data, greatly exceeding the 30 gigabytes most Blu-ray disks can hold. One of the more unfortunate aspects of disk drive technology, however, is that when new drives and disk types come out, it becomes necessary to transfer data from the older version of the technology to the newer one, which is an arduous process at best (Medford, 8). In recent years, the pace of innovation has advanced to such an extent that new versions of drives and disks come out nearly every two or three years, resulting in a painful cycle for consumers as they migrate their data from one storage medium to the next (Digital Content, 39). Disk drive technology therefore has an inherent weakness connected to the migration of consumer data from one version of the technology to the next; it can even be said that no matter how far the technology progresses in storage capacity and sharper video playback, it will still pose the same data migration problems consumers now face. At times it must even be questioned whether disk drive technology is necessary at all. For example, at present, solid-state devices such as USB drives are one of the dominant forms of external storage because of the ease of data transfer they provide (Digital Content, 39).
While such devices are nowhere near the capacities of the future HVD format, they provide a far easier method of data transfer than disks (Medford, 8). Another factor to consider is whether regular consumers really need disk formats that can store 6 terabytes of data. While it may be true that in the current information age the amount of media consumed by the average person reaches several hundred gigabytes, it rarely goes above 500 gigabytes; consumers who need a terabyte or more of storage are a relative minority compared with most computer users today. It must also be considered that with the current popularity of cloud computing, especially after the release of Apple's iCloud service, data storage has become more of a problem for companies than for regular consumers. It can therefore be argued that the development of increased storage for regular consumers should follow the slower rate of actual consumption in order to lessen the frustration of continuous changes to new media storage formats. Unfortunately, based on current trends in technology releases, companies seem more inclined to release new media storage formats without taking into account the actual consumer necessity behind the release. What is currently occurring is a case where consumers seem irrationally inclined to follow new media storage formats such as the Blu-ray disk without taking into account the fact that their current method of storage is perfectly fine. Companies take advantage of this by continuously releasing new storage formats, knowing consumers will follow them and port their data over to the new devices. This situation benefits companies more than it does consumers, and as such, based on current trends and consumer behavior, the future of disk drive technology seems destined for the continuous release of ever larger storage methods that consumers are unlikely to need but will buy nonetheless.
Changes in Display Technology: Is 3D the Future of Digital Display Technology?
For many computer users, classic CRT display technology had been around for 25 years and was once one of the most used types of monitor in the computer component industry. Yet with the development of cheap LCD technologies within the past seven years or so, CRT screens have begun to be phased out in favor of cheaper and more cost-effective LCD screens. Unlike other forms of component technology, monitors tend to be slower in technological innovation. While there are several brands and types of LCD screens on the market today, ranging from small 22-inch screens to massive 41-inch monstrosities, most of them adhere to the same basic design principles, with certain additions added in by manufacturers to differentiate them from the rest (Kubota and Yazawa, 942-949). Within the past five years the technology has only improved slightly, with the creation of high-definition screens and LED display systems, but the basic design and components are still roughly the same. Unlike the developments seen in processor technology, disk drives, and PC cooling, enhancements in display technology only benefit the visual aspect of a user's experience; they do little to improve PC performance or longevity, and in fact a large percentage of current consumers tend to stick to the classic LCD models developed five years ago rather than the newer high-definition LED screens. The reason is rather simple: most people are unwilling to pay higher prices for a technology that can be obtained at a lower cost with little discernible change in PC performance (Kubota and Yazawa, 942-949). A majority of PCs are used mainly for work, and unless a person is in the media industry, high-definition screens are not really a necessity. While there have lately been significant developments in display technology, as can be seen with the creation of 3D screens for computer users, it must still be questioned whether such technology will be adopted for general use in the immediate future. 3D vision technology has been heavily advertised by most companies as the latest wave of innovation in display technology, and companies such as ATI and NVIDIA have made significant profits selling 3D-capable video cards, yet when examining actual usage, most laptops and PCs today still use the classic LCD technology that was available five years ago. By comparison, processors, disk drives, and memory sticks have changed drastically from the way they were back then. The reason is simple: LCD technology reached its build-out completion years ago, the technology has proven stable, most consumers prefer it, and it is cheaper than some of the latest screens available today. Various projections show that the consumer market for the latest developments in screen technology will be limited to gamers, media enthusiasts, and media corporations; for the vast majority of other computer users, LCD technology will be used for quite some time because of its stability and lower price. When examining the current trend in 3D vision technology, it seems more of a creative gimmick than something serving an actual use.
While it may be true that 3D makes games seem more realistic, it is not necessarily an integral and necessary part of a user's computer experience. In fact, the 3D capabilities of a screen can be taken away and programs on a PC will run with no difference whatsoever in performance or display. A majority of programs today do not require 3D vision, and its use is isolated to only a certain segment of the computer-user population. It must also be noted that in order to actually use a 3D-capable monitor, a user needs to wear a special set of 3D glasses to see the effect. One problem with this method of usage is that various studies show that not all users are comfortable with 3D vision technology: cases of eye strain, blurred vision, and distinct feelings of discomfort have been noted in certain computer users, which calls into question the technology's ability to appeal to a large segment of the population. Moreover, 3D vision screens are on average several times the price of normal LCD screens and require special 3D-capable video cards to work, which increases their overall cost and discourages consumers from buying them in the first place. Display technologies should provide a discernible utility to consumers; while 3D vision may seem nice, it does not serve any real positive purpose beyond making games look better. Another potential technology applicable to future displays is the use of holograms as a replacement for solid-screen devices, but extensive research on the applicability of the technology for general use in the near future shows that even marginal commercial use is still at least ten years away.
Heat Sinks, RAM memory and its Future Capacity
When people think of PC components they usually bring up topics such as screen resolution, hard disk space, disk drives, and the capacity of their video cards, yet they always seem to forget about RAM. The reason is actually quite simple: the general population mostly interacts directly with output devices such as monitors and input devices such as keyboards, mice, and disk drives. As a result, people take notice of the factors that directly affect their interaction with a PC, such as the amount of space on the hard drive for storing files, the quality of the screen resolution on the monitor, the type of disk drive, and the capacity of the video card for playing crisp video files. RAM is often relegated to being a secondary aspect of the average home computer setup, yet it is an integral component of any PC. Lately RAM has been generating a greater degree of interest as the number of tech-savvy enthusiasts grows, resulting in more interest in individual computer parts. It was through this growing interest that PC enthusiasts discovered an inherent limitation of RAM as its capacity increases. As in the case of processors, as the amount of memory per individual stick grew, the end result was a greater degree of heat, which in turn affected the performance of the RAM over the long term (Deen, 48). Prolonged operation actually resulted in slower computer responses as the memory struggled with increasing temperatures that affected its ability to work. While this is not the case for all computers, it has been noted in enough cases that the RAM production industry has released a stop-gap measure to address it (Deen, 48). Depending on the manufacturer, certain memory sticks now come with heat sinks included to help dissipate heat away from the sticks and into the air within the casing. It must be questioned, though, whether this particular addition to RAM will become an industry standard in the coming years. A heat sink works by drawing heat away from a device through copper or aluminum and, in theory, dissipating it into the air within the casing. While various experts and industry personnel may say that it is effective in dissipating heat, logic suggests it is a rather inadequate method. The technology works through temperature differences between zones (Deen, 48): the high temperature of the memory stick is drawn towards the lower temperature of the surrounding air through the copper or aluminum heat sink, lessening the temperature burden on the memory stick. What must be understood, though, is that this works only if there is a distinct temperature difference between the two zones. As operating times increase, so too does the ambient temperature within a PC case; while some casings have sufficient temperature-control mechanisms in the form of interior fans to regulate the temperature, not all casings do, since this entails a significant additional cost to computer users. As a result, ambient temperatures within particular case models can rise to such a degree that the efficacy of the heat sinks is reduced, resulting in a gradual deterioration of performance.
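This dependence on a temperature difference can be illustrated with the basic conduction relation Q = k·A·ΔT/L. The Python sketch below uses assumed dimensions for an aluminum conduction path and assumed temperatures; it is a simplified illustration of the principle rather than a model of any particular memory module.

# Conductive heat transfer through a simple heat-sink path: Q = k * A * dT / L.
# All dimensions and temperatures are assumed for illustration only.
k_aluminium = 205.0   # thermal conductivity of aluminium, W/(m*K)
area_m2 = 0.0004      # cross-sectional area of the conduction path, in m^2
length_m = 0.02       # length of the path from the module to the fins, in metres

def heat_moved(module_temp_c, case_air_temp_c):
    # Watts conducted away for a given temperature difference.
    delta_t = module_temp_c - case_air_temp_c
    return k_aluminium * area_m2 * delta_t / length_m

# As the air inside the case warms from 30 C to 55 C, the same sink moves far less heat.
for air_temp_c in (30, 40, 55):
    print(air_temp_c, "C ambient ->", round(heat_moved(70, air_temp_c), 1), "W")

As the ambient air inside the case approaches the temperature of the memory stick itself, the heat the sink can move falls towards zero, which is exactly the failure mode described above.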
It must also be noted that, as mentioned earlier, liquid cooling systems and mineral oil cooling systems have been gaining significant consumer interest and could eventually be used as the primary cooling methods for PCs in the years to come (Deen, 48). It has already been shown that both cooling mechanisms are far more effective at cooling memory sticks than heat sinks, so it cannot really be said that heat sinks incorporated into memory sticks will become an industry standard. While various manufacturers advocate that they should be (they get to charge higher prices for memory sticks with heat sinks), given future cooling methods such as mineral oil and liquid cooling, the current trend of fitting heat sinks to memory sticks may not continue, especially once liquid cooling or mineral oil technology reaches its build-out completion and becomes commercially viable.
The Push towards Miniaturization and Holograms
One of the most recent trends in the development of PC components has been a distinct push towards miniaturization, with various components decreasing in size and weight as consumers demand more portability in the devices they use. This has given rise to products such as Intel's Atom processor, more efficient miniature laptop batteries, and a host of other innovations all aimed at making PC components smaller and thus more easily carried by the average consumer (Murphy, 113). It can even be said that this push towards miniaturization is a trend that will continue far into the future, with holographic technology taking precedence in future portable devices such as netbooks and laptops. The reason holographic technology is mentioned is that one of its latest developments has been a credit-card-sized keyboard that uses infrared technology to identify the positioning of a user's fingers, in effect simulating interaction between a user and a holographic image (Issei et al., 32-34). While this type of technology is still a decade or two away from commercialization, it could become the future of all input devices (Gomes, 40). Holograms can be described as 3D images created through various projection sources to create the illusion of volume; one current application of the technology has been Vocaloid concerts in Japan, in which a projected holographic image is created on stage to simulate an actual person singing. Unfortunately, while input technologies involving infrared light can already be used, holographic display technology is still in its infancy. Creating an effective 3D hologram requires a significant amount of energy as well as a self-contained projection apparatus that can project the necessary image onto a black screen or a particular spatial template. Given the current push towards miniaturization, it cannot feasibly be stated that holographic technology can be used as a portable medium within the next few years. Various studies examining the rate of development of holographic technology state that it will require at least another 20 years before the technology becomes applicable for commercial purposes. One reason is that the development of holographic display technology is mostly being conducted by research labs at various universities and not by any significant commercial company that produces display technology. Companies such as AOC, LG, and Asus are focusing their efforts on traditional display technologies such as LCD screens rather than on conceptual technologies that have yet to attain proper conceptualization (Gonsalves, 6). What must be understood is that companies are more or less profit oriented, and as such they will not expend resources on developing a product that is still in the theoretical stage (Gonsalves, 6). While holograms may be the wave of the future for computer displays, until the technology proves itself commercially feasible it is unlikely that companies will allocate significant resources towards its development.
Works Cited
Digital Content. A Word on Storage. Digital Content Producer 33.3 (2008): 39.
Goldsborough, Reid. PC a Little Sluggish? It Might Be Time for a New One Or Not. Community College Week (2008): 30.
Gomes, Lee. Keys to the Keyboard. Forbes 184.4 (2009): 40.
Gonsalves, Antone. Nvidia Shaves Costs of Graphics Processing. Electronic Engineering Times 1509 (2008): 6.
Issei Masaie, et al. Design and Development of a Card-Sized Virtual Keyboard Using Permanent Magnets and Hall Sensors. Electronics & Communications in Japan 92.3 (2009): 32-37.
Kubota, S., A. Taguchi, and K. Yazawa. Thermal Challenges Deriving from the Advances of Display Technologies. Microelectronics Journal 39.7 (2008): 942-949.
Medford, Cassimir. Music Labels, SanDisk in CD Rewind. Red Herring (2008): 8.
Merritt, Rick. Server Makers Get Googled. Electronic Engineering Times 1553 (2008): 22.
Murphy, David. Upgrade to Gigabit Networking for Faster Transfers. PC World 27.12 (2009): 113-114.
Oskin, Mark. The Revolution Inside the Box. Communications of the ACM 51.7 (2008): 70-78.
Perenson, Melissa J. Blu-ray on the PC: A Slow Start. PC World 27.4 (2009): 94.
EDN. Point Cooling Advances for Hot ICs. EDN 54.5 (2009): 16.
Upadhya, Girish, and Fred Rebarber. Liquid Cooling Helps High-End Gamer PCs Chill Out. Canadian Electronics 23.3 (2008): 22.
Arguably one of the most epic accomplishments of the 20th century was the invention of the computer and the subsequent creation of computer networks. These two entities have virtually transformed the world as far as information processing and communication are concerned. Although computers are hardly a century old, they have revolutionized the way in which we carry out our day-to-day activities, and hardly any arena of our lives has escaped the influence of these systems. As such, a breakdown of a computer system can be catastrophic to an individual or an organization. Despite this, Goldsborough (2004) notes that a constant reality when working with computers is that the data stored on a PC can disappear in an instant. For this reason, it is of utmost importance to ensure that contingencies are in place in the event that a computer system should fail. Crash recovery measures provide the best way to salvage a failed system. This paper will engage in a detailed analysis of how to carry out a crash recovery.
Computer Crashes
All computing systems are vulnerable to a wide variety of potential threats including viruses, hacking, bugs, and physical damage, to name but a few. All these threats may result in a system crash, with varying consequences for the users. Bobrowski (2006) defines a crash as the unanticipated failure of the system in question. Crashes leave the computer system inoperable until a solution to the crash is found. There are two major forms of crashes, hard disk crashes and OS crashes, and the recovery process depends on the specific nature of the crash.
Hard Drive Crash
A hard disk crash is brought about by a hardware failure on the disk and implies that all or some of the hard disk sectors are damaged and therefore unreadable. Hard disk failure may be caused by contaminants on the disk, which can cause a head crash that damages some of the data on the disk. Head crashes may also arise from the hard disk being jarred while it is in use. Signs of a hard disk failure include clicking or whirring sounds from the drive when the computer is turned on. To recover from a hard disk crash, one will be forced to invest in a new hard drive. Even so, one may wish to salvage the data that exists on the damaged drive.
Operating System Crashes
OS crashes are logical failures that make part or all of the OS unusable. In the event of a logical failure, the hard drive is still fully functional and the error lies only in the OS. An improper shutdown of the computer due to power failure, or poorly written software, may damage critical system files, resulting in an OS crash (Gookin, 2009). OS crashes may also occur as a result of memory overflow, where data from one area of memory spills into memory allocated to another program; this may corrupt data and eventually crash the OS. OS crashes can also be caused by viruses, which are malicious programs written to interfere with the normal operation of the computer.
Crash Recovery
In the event of a crash, the first step is to identify the type of crash and then determine the best way to recover from it. For an OS crash, the primary goal of recovery is to get the OS up and running. This can be done by reinstalling the OS from the original disk; most OS providers supply users with recovery disks, which are very useful in the recovery process since they enable one to return the computer to its factory settings (Gookin, 2009). For hard disk crashes, the aim of recovery is to retrieve data from the damaged drive. One can use data recovery software to try to recover some of the data from the damaged drive. Recovery software is designed to access sectors of the hard drive that are damaged or to retrieve information that may have been deleted in the event of the crash. While there is no guarantee that all the data will be recovered from the damaged hard disk, data recovery software presents the best means to retrieve at least some of the data that would otherwise be irretrievable from the faulty disk.
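As a rough illustration of what such recovery software does at its core, the Python sketch below reads a damaged drive or disk image block by block, copies every readable block, and zero-fills the sectors it cannot read so that the rest of the data can still be salvaged. The device path in the usage comment is hypothetical, and real recovery tools are considerably more sophisticated.

# Minimal sketch of block-level data salvage: copy readable blocks, zero-fill bad ones.
BLOCK = 4096  # read in 4 KiB chunks

def salvage(source_path, dest_path):
    bad_blocks = 0
    offset = 0
    with open(source_path, "rb", buffering=0) as src, open(dest_path, "wb") as dst:
        while True:
            src.seek(offset)
            try:
                chunk = src.read(BLOCK)
            except OSError:
                # Unreadable sector: record it, write zeros in its place, and move on.
                bad_blocks += 1
                dst.write(b"\x00" * BLOCK)
                offset += BLOCK
                continue
            if not chunk:  # end of the drive or image file
                break
            dst.write(chunk)
            offset += len(chunk)
    return bad_blocks

# Hypothetical usage: salvage("/dev/sdb", "rescued.img")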
Backing Up
Arguably the most important safeguard against computer crashes is the backing up of data. Failure to back up important data may prove catastrophic in instances where vital data is lost following a crash. Backing up is based on the premise that preparations must be made beforehand for the physical loss of an important file due to operator error, file corruption, or a disk failure. In this kind of disaster, it is impossible to recover the data from the system by using recovery programs; backup copies of the lost data files are the only means through which recovery can be done. Backup software can be used to restore data from backups to a computer that has crashed, because backup software has modules for restoring files. However, this is only possible if the backup software is already installed on the computer; if the hard drive crashes, it is necessary to reinstall the backup software before commencing the recovery process. When performing backups, it is important to ensure that the integrity of the files is not compromised. Data integrity can be compromised by viruses, which can damage files to the point that a computer cannot access the data contained therein. It is therefore important to ensure that files that are backed up are virus-free, which can be done by performing an up-to-date virus check as the first step in every backup routine (Parsons & Oja, 2010).
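A minimal sketch of such a backup routine, written in Python with hypothetical folder paths, is shown below. It copies each file to the backup location and verifies the copy with a checksum so that corrupted copies are caught; the up-to-date virus scan recommended above would be run before this step and is only noted as a comment.

# Minimal backup-with-verification sketch. Paths are hypothetical examples.
# Step 0 (not shown): run an up-to-date virus scan over the source folder first,
# as recommended above, so that infected files are not carried into the backup.
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def backup(source_dir, backup_dir):
    source_dir, backup_dir = Path(source_dir), Path(backup_dir)
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = backup_dir / src.relative_to(source_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        # Verify integrity: the copy must hash to the same value as the original.
        if sha256(src) != sha256(dst):
            raise RuntimeError(f"Backup verification failed for {src}")

# Hypothetical usage: backup("C:/Users/alice/Documents", "E:/backups/documents")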
Crash recovery can be assisted by the use of extra disks in the computer system. This can be implemented through Redundant Arrays of Independent Disks (RAID) technology, which is hailed as the fastest means of recovering from a system crash (Goldsborough, 2004). RAID can enable one to replace a failed hard drive almost seamlessly, since RAID mirrors all the data stored on the main hard drive onto a secondary hard drive. Despite the huge reliability that RAID technology offers through redundancy, it is not commonplace among individual PC owners owing to the additional costs it demands.
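The mirroring idea behind RAID 1 can be illustrated very simply: every write is duplicated to two drives, so either copy can serve the data if the other fails. The toy Python sketch below uses two ordinary files as stand-ins for physical disks; it only illustrates the principle and is not how a real RAID controller or software RAID layer is implemented.

# Toy illustration of RAID 1 mirroring: every write is duplicated to two "drives".
class Mirror:
    def __init__(self, drive_a_path, drive_b_path):
        self.paths = (drive_a_path, drive_b_path)

    def write(self, data):
        for path in self.paths:  # duplicate the write to both drives
            with open(path, "ab") as drive:
                drive.write(data)

    def read(self):
        for path in self.paths:  # fall back to the mirror if one copy fails
            try:
                with open(path, "rb") as drive:
                    return drive.read()
            except OSError:
                continue
        raise OSError("both mirror copies are unreadable")

# Hypothetical usage:
# store = Mirror("drive_a.img", "drive_b.img")
# store.write(b"important records")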
Identifying Crash Cause
An important step in crash recovery is to try to identify the cause of the crash, since it is desirable to avoid future crashes. There are commercially available software tools that can be used to diagnose the system for physical or logical errors (Miller, 2007). Once the cause has been identified, appropriate measures can be taken to ensure that future crashes are avoided. Scan disk utilities can help identify faulty hard drives by providing information on bad physical sectors, so that one can take appropriate action and avoid future crashes.
Conclusion
As our society becomes increasingly dependent on information technology for a myriad of operations, the responsibility to maintain and protect computing systems increases proportionately (Kilbridge, 2003). This is because, with the increased use of computers, the cost of system failure becomes significantly higher. There are instances where the cost of a system crash may be too high, and in such a scenario, avoiding the crash completely is desirable. Running software tools that warn the user of impending problems is an effective way of averting a disaster that may arise from disk failure. Third-party software utilities such as Symantec's SystemWorks have tools for monitoring the internal diagnostic capabilities of new hard drives and giving an alarm in case of looming problems (Goldsborough, 2004).
This paper set out to discuss crash recovery measures that can be used to salvage a failed system. It began by noting that all computer systems are susceptible to crashes, which may result in dire consequences for the person(s) who depend on the computer's data. It has also been noted that crash recovery procedures vary depending on whether the crash is an OS crash or a hard disk crash. In either case, the best way to recover from a crash is to take proactive measures such as backing up all data on a secondary storage device. This paper has also noted that the redundancy capabilities provided by RAID technology present the best way to recover from hard disk failure. Successful crash recovery will restore the system to its pre-crash status, enabling the individual(s) to continue reaping the advantages of the computer system.
References
Bobrowski, S. (2006). Hands-on Oracle Database 10g Express Edition for Windows. McGraw-Hill Professional.
Goldsborough, R. (2004). Signs of an impending hard disk crash. Teacher Librarian, Vol. 31, Issue 3.
Gookin, D. (2009). Troubleshooting and Maintaining Your PC All-in-One Desk Reference for Dummies. NY: For Dummies.
Kilbridge, P. (2003). Computer Crash: Lessons from a System Failure. New England Journal of Medicine, Vol. 348, Issue 10.
Miller, M. (2007). Beginner's Guide to Computer Basics. Que Publishing.
Parsons, J. J., & Oja, D. (2010). New Perspectives on Computer Concepts 2011. Cengage Learning.
A computer control system is an approach used to control and monitor specific parameters of substances within laboratory settings remotely. The elements of a control system include two important parts: data acquisition instruments that enable monitoring of the desired parameters, and microcontrollers, intelligent instruments that are essential for controlling those parameters (Hebert, 2007). In this paper we briefly discuss the functioning of one control system used to regulate the temperature of liquid substances, among other variables, in a laboratory setting: the iControl system.
Advantages
The advantages of using iControl systems within laboratory settings are numerous; most important is the fact that such a system enables the control and monitoring of key variables in a laboratory environment that might pose a risk where hazardous substances are involved (Hebert, 2007). In addition, the iControl system enables the monitoring and control of parameters over long durations, thereby saving the costs associated with the personnel that would otherwise be required to achieve the same objective (Hebert, 2007). Another advantage is the consistency of data collection and the overall quality of the data, which is detailed and well organized since it is computer generated (Hebert, 2007). Finally, automatic remote control and monitoring of parameters saves the time and effort that personnel would need to physically implement the desired changes (Hebert, 2007).
Design
iControl is an advanced microcontroller system that is installed on a PC and is capable of controlling and monitoring laboratory parameters wirelessly over a 400 MHz signal. In this case the iControl system is designed to control and monitor the temperature of a hazardous liquid within a laboratory environment as well as other parameters such as smoke and light. The iControl software runs on the LabVIEW program, which enables the system to function at the desired level, monitor parameters, and control processes (Sparkfun.com, 2010). The essential components of an iControl design are two microcontrollers, sensors, a Peltier heater/cooler, an H-bridge, and analog-to-digital converters (ADCs).
The microcontrollers used for this project are ATmega328 models manufactured by Atmel, each with its own function: one is for data acquisition and the other is for adjusting the desired parameters so that they remain at the appropriate levels (Hudson, 2006). There are three sensors installed in the iControl system to measure the key parameters by detecting changes in the specific variables of interest, which in this case are temperature, smoke, and light intensity. The temperature variable is measured by an LM334 sensor, essentially a Zener-diode-type device configured to operate over a temperature range of -40°C to 100°C (Hudson, 2006).
The fire detection component is another type of sensor; it detects the presence of smoke and triggers a fire alarm through the ADC and relay components to a buzzer that alerts personnel through sound. Finally, the light parameter is monitored by a third sensor, an LDR component that functions in the same way as the smoke sensor (Projects.net, 2010). The ADC component converts the sensors' output data from analog form into a digital format that can be analyzed by the LabVIEW program. The Peltier heating and cooling element initiates cooling or heating processes, driven through an H-bridge, based on the prevailing temperature conditions and the desired temperature level (PeltierInfo.com, 2010).
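A simplified sketch of this monitoring and control logic is given below in Python. The ADC resolution, reference voltage, sensor scaling, hysteresis band, and the helper names read_adc and set_peltier are illustrative assumptions rather than details of the actual iControl firmware; the 35°C set point is the value used in the trial run described in the conclusion.

# Simplified sketch of the temperature-monitoring and Peltier-control loop.
# Scaling constants and the I/O helpers (read_adc, set_peltier) are assumptions.
ADC_BITS = 10          # the ATmega328 has a 10-bit ADC
V_REF = 5.0            # assumed ADC reference voltage, in volts
MV_PER_KELVIN = 10.0   # assumed sensor output of 10 mV per kelvin
SET_POINT_C = 35.0     # target liquid temperature used in the trial run
HYSTERESIS_C = 0.5     # small dead band so the Peltier does not chatter

def adc_to_celsius(raw):
    # Convert a raw ADC count into degrees Celsius.
    volts = raw * V_REF / (2 ** ADC_BITS - 1)
    kelvin = volts * 1000.0 / MV_PER_KELVIN
    return kelvin - 273.15

def control_step(read_adc, set_peltier):
    # One pass of the loop: read the sensor, then pick heating or cooling mode.
    temp_c = adc_to_celsius(read_adc())
    if temp_c > SET_POINT_C + HYSTERESIS_C:
        set_peltier("cool")    # H-bridge drives the Peltier as a cooler
    elif temp_c < SET_POINT_C - HYSTERESIS_C:
        set_peltier("heat")    # reversed polarity: the Peltier heats the liquid
    else:
        set_peltier("off")
    return temp_c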
Other components of the iControl system include a control module and control loop that are used to transmit control commands, an RF communication system, an LCD and a buzzer. The RF wireless communication system is an Amplitude Shift Keying device operating at 433 MHz that is able to transmit and receive data (JayCar.com, 2010). All the processes of the iControl system are facilitated by the LabVIEW software program, which enables the actual monitoring and control of laboratory parameters remotely.
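The source does not describe the over-the-air format used on this link, so the following Python sketch simply illustrates one plausible way the three sensor readings could be framed for an ASK transceiver: a fixed header byte, the readings, and a simple checksum. All field names, sizes and the header value are assumptions made for the illustration.

# Hypothetical framing of the three sensor readings for the 433 MHz ASK link.
# The header byte, field layout and checksum are assumptions made for this
# sketch; the source does not specify the actual packet format.
import struct

HEADER = 0xA5  # arbitrary sync/header byte chosen for this example

def build_packet(temperature_c: float, smoke: bool, light_level: int) -> bytes:
    """Pack one set of readings (light level as 0-255) into a small frame."""
    body = struct.pack(">Bf?B", HEADER, temperature_c, smoke, light_level)
    checksum = sum(body) & 0xFF          # simple additive checksum
    return body + bytes([checksum])

def parse_packet(frame: bytes):
    """Validate the checksum and header, then unpack the readings."""
    body, checksum = frame[:-1], frame[-1]
    if (sum(body) & 0xFF) != checksum or body[0] != HEADER:
        return None
    _, temperature_c, smoke, light_level = struct.unpack(">Bf?B", body)
    return temperature_c, smoke, light_level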
Conclusion
The iControl system project was successfully completed and a trial operation was set up to determine how well it would function under laboratory settings. The LabVIEW program was able to accurately capture and record the temperature, light and smoke variables as desired. To determine the effectiveness of iControl at specific temperatures, the monitoring and control system was calibrated to maintain the temperature of the liquid at 35 °C; beyond this temperature the Peltier component switched from heating to cooling mode in order to hold the temperature at 35 °C. The LED output signals for smoke and light lit up when tested by introducing smoke and light into the laboratory environment. Finally, data capture and transmission of information was determined to occur instantly, routed through the two RF transmitters as desired. As such, the iControl system project was certified as successful, having been tested and shown to function as designed, controlling the liquid temperature and monitoring the effects of smoke and light intensity within laboratory settings.
It can be argued that the preceding century marked the beginning of the Information Age. It can be described as such because of two technological breakthroughs: the invention of the personal computer and the Internet. Combined, these two technologies brought another milestone in humanity's desire to improve the way it leverages technology, specifically the ability to communicate over long distances.
The telegraph, radio, and television were considered major breakthroughs in this field, but no one was prepared for the coming of the personal computer and the Internet. These two technologies combined produced another groundbreaking innovation, the World-Wide-Web, along with the creation of the web page and the website. The following is a discussion of the evolution of the personal computer and the Internet from their humble beginnings and of how the website became a potent application of Information Technology.
Personal Computers
It all began with computers, and as the name suggests a computer is a machine that is expected to make computations beyond the capability of humans, or at least to crunch a great deal of numerical information with a high level of consistency and accuracy.
Using this basic definition, the humble scientific calculator could also be described as a basic example of what a computer is all about. But for the purpose of this study a computer is a piece of equipment of considerable size that was used primarily by the government to perform complex tasks. In the early days a typical computer could be as large as the average person's bedroom. Truly, there is a huge difference between it and the desktops and laptops of the 21st century.
From the very beginning only governments could afford these sophisticated machines, up until inventors, enthusiasts and entrepreneurs worked, either as teams or in direct competition with each other, to produce what would become known as the personal computer: a computer that did not have to occupy the whole living room but was powerful enough to perform computations faster than even the most brilliant mathematician or the most conscientious student.
However, without the creation of the World-Wide-Web and the web page, the computer would have continued to be an expensive toy, affordable only for those with money to spare.
And it might have remained so, considering that there were still alternative ways to produce a report or assignment; there was always the trusted old typewriter that could be pulled out of storage, and with rudimentary typing skills a student or professional could still fulfill those requirements. But when people began to access the Internet and use websites, Information Technology took a decisive turn and, as historians would love to put it, the world was never the same again.
Tim Berners-Lee
Before going any further it is important to understand the difference between the Internet, the World-Wide-Web, a webpage and a website. All of these can be explained succinctly using the following definition:
The World-Wide-Web is an infrastructure of information distributed among thousands of computers across the world and the software by which that information is accessed. The Web relies on underlying networks, especially the Internet, as the vehicle to exchange the information among users. (Dale & Lewis, 2010)
A web page, on the other hand, contains information and even links to other resources such as other web pages, images or video, and when there is a collection of web pages managed by a single person or company, that is called a website (Dale & Lewis, 2010). It can be argued that without the web page the world might never have known the full potential of the Internet, nor, for that matter, would there have been such enthusiasm for improving personal computers so that they became accessible and affordable for household use.
Due to the complexity of the subject matter it is necessary to digress once more and briefly discuss the history of the World-Wide-Web so that there is a clearer understanding of what it is all about. Although there are different opinions regarding the exact history and origin of the Web, there is general agreement that the person who built the basic infrastructure and laid the most significant foundation for the eventual evolution of the World-Wide-Web as it is known today is none other than Tim Berners-Lee (Valee, 2003).
He was working at a Geneva-based laboratory for experimental and theoretical physics and he had a persistent problem: he was frustrated with the high level of difficulty involved in finding the files that he needed, even though these files were already stored on the Internet (Valee, 2003).
The Internet is a network of computer networks with a general infrastructure that allows computers to link together using standardized protocols (TCP/IP); in that sense there was already a limited way for computer networks all over the world to communicate with each other (Cerf, 2010).
In the case of Berners-Lee and the other scientists working for CERN, they had to format their data so that it could be accessed by the common system used at CERN, and many found this tedious and unacceptable (Cerf, 2010). Tim Berners-Lee shared the frustration, but instead of merely complaining about it he figured out a solution to the problem.
It has to be pointed out that there was already an Internet, a network of computer networks, but Berners-Lee created a way for all networks to share a common language so that information could be shared all over the planet. And so Tim Berners-Lee created a new language called hypertext markup language, or HTML (Valee, 2003). In 1990 he wrote the Hypertext Transfer Protocol, the protocol that computers and computer networks would use to exchange hypertext documents (Cerf, 2010).
He also created a way to locate these documents or files and called it the Universal Resource Identifier, which would later be renamed the Uniform Resource Locator (Cerf, 2010). In that same year Berners-Lee developed a client program, also known as a browser, to retrieve and view hypertext documents, and he called this client program WorldWideWeb (Cerf, 2010). This was the auspicious beginning of the Web.
Websites
It is hard to imagine the World-Wide-Web without websites. Surfing the web would be as dull as reading an unexciting book without any illustrations. But a website, and specifically a web page, that is well constructed not only provides information but also offers the ability to access information in a fun and interactive way.
For instance, open a popular website and chances are there are clickable icons that will allow the user to open a new window or lead to another web page or another website, giving the user the feeling that the page is interacting with them.
Without websites, surfing the web would be as dull as reading page after page of technical data that does not make sense to the average person. But with websites a layman can access a more practical and functional page where one can get information, send information and interact in so many different ways that the user feels engaged and ready to explore more.
However, this is only possible if the web developer or the author of the webpage or website is knowledgeable about recent design trends or at least sensible enough to understand that a well-conceived webpage should be easily accessible to any type of user. But this is easier said than done.
The following pages will discuss the usual problems encountered by users and developers alike. As web technology improves and its capabilities are ramped up, more and more problems are revealed. This means that usability and functionality must be top priorities, and each one should be tailor-made to suit the needs of every client and, in turn, the users of the said website.
Usability Engineering
In this section one can find some of the common complaints from web users. The ideas and solutions used in this part of the discussion were taken in large part from the work of Jakob Nielsen, considered one of the foremost experts in usability engineering.
Nielsen emphasizes the need for fast and efficient methods for improving the quality of user interfaces (Stout, 2004). These are some of the major issues that Nielsen wanted developers to be mindful of: a) beating around the bush; b) advertising; and c) usability protocols (Stout, 2004). A more detailed description of these issues, and of how to incorporate the principles gleaned from studying them, can be seen below.
Beating around the Bush
When asked what could be the most problematic issue when it comes to website design, Jakob Nielsen said this: "The biggest mistake is really not getting to the point, not telling people what they can do on the Web sites, what it is about, and smothering the information in hypertalk... Get to the point, that's the number one guideline for Web design" (Stout, 2004).
Those who are not used to visiting other websites and confine their selection to popular sites such as Yahoo and Facebook may find this recommendation odd. But for someone familiar with usability and website design this is indeed a problem.
Part of the reason why Nielsen listed this issue as the number one concern is perhaps that the Internet and the World-Wide-Web are almost synonymous with freedom of expression. Therefore, everyone can build their own website, host it and then publish it for the whole world to see. Without an eye towards design and a propensity towards usability it would be difficult to notice these errors; however, if the website is intended for commercial purposes then the owner must take heed.
Advertising
This one is easy to figure out. Advertising means money, which explains the eagerness of many website owners to place ads on their sites or to allow others to install an application that generates various forms of advertisement on the site.

Nevertheless, there is a trade-off, because in order to make money the website must attract visitors in the first place, and they are not willing to visit a site that is riddled with advertising material. The reasons for visiting a website can be as varied as enjoying the content or completing a transaction; it is rare to find a person who loves to visit a website because he or she intends to view the ads.

Advertising banners and advertising content are not the strongest asset of a site and should be removed whenever possible. They are more than a distraction; they can affect the impact of the website and can cause frustration, such as when the site fails to load because of the graphics required by the advertising content.
A designer's primary goal is to create a website that delivers content and user-friendly applications for entertainment or business purposes. The designer cannot afford to create distractions. One usability expert also pointed out the need to guide users, using the overall design of the website like a well-conceived map that does not confuse but acts like a skilled tour guide leading the way (Collis, 2007). Thus, a website must not appear like a labyrinth that adds a heavy burden to the user.
Nielsen said that there is no need to clutter the website with useless ads, because website visitors make their own adjustments when faced with a website full of banner ads and animation that desperately tries to attract their attention.

Nielsen calls this banner blindness, the self-taught ability of website users to block out non-essential stimuli, especially advertising (Stout, 2004). Users have also mastered the skill of purging pop-ups when a pop-up window suddenly appears out of nowhere to sell something to the visitor.
Usability Issues
One of the most common problems is the disregard shown by sites that use a fixed font size. Those with eye problems or reading disabilities will find it a wearisome exercise to visit a website whose fonts and font sizes cannot be adjusted. Furthermore, Nielsen added that it should be a standard feature of a site to allow users to dictate their preferences (Stout, 2004). This will certainly please more than a few first-time visitors.
It is also important to be mindful of the navigation aspect of the website. Since a website is a collection of web pages and may contain links that lead to other sites, it is imperative for the design to consider how easily visitors can figure out where they want to go and how they will get there. It is also critical for them to have a general idea of where they are while using the site (Collis, 2007). If they feel lost, then the owner of the site has already lost a visitor or a repeat visit.
There is also a need to be mindful of seemingly trivial aspects of the design that, in the greater scheme of things, will greatly determine whether the website will be a success or doomed to obscurity. Thus the website designer must first make sure that each page has its own URL so that bookmarking the site will not be a difficult experience for users.
It is also imperative to consider who the users of the website will be (Collis, 2007). It is not only about their needs but also about how they will be able to access the site. This is very much applicable to those who are disabled and to the elderly. The web designer must incorporate these needs into the overall design so that the handicapped and the elderly can still enjoy the benefits of the Web.
Conclusion
The invention and evolution of the Web is simply amazing. It has changed the way people see the world. It has revolutionized the way people communicate and how they do business. The whole world is indebted to Tim Berners-Lee and like-minded individuals who were willing to work on something that they believed was needed and yet was not understood by many.
Yet even the creator of the Web could not have anticipated the social and cultural phenomenon that it created in the few years after it was first used in Europe and then the United States. Moreover, the website was not an immediate success, and its growth was driven not merely by technology but by the willingness of many people to improve their Web experience.
It is therefore imperative to be mindful of the various web design issues that can help a web developer create web pages and websites that are effective in delivering goods and services. Without a good web design, people will shy away from a particular website; but a well-conceived design will generate a great number of visitors every day, and this means success for the web developer and for those who hired them in the first place.
References
Dale, N. & J. Lewis. (2010). Computer Science Illuminated. MA: Jones and Bartlett.
Valee, J. (2003). The Heart of the Internet. VA: Hamptons.
Society today has completely changed due to technology. Technology is changing at a very rapid rate, and with the changes comes the need to adapt to them. The computer has changed the way human beings carry out their activities (Beaureau 2008, p.36). Unlike before, when most activities were done manually, the computer has enabled the automation of most activities, especially in large companies.
It is now possible for a manager to monitor activities taking place in a different branch of the company from wherever he or she is by using computerized gadgets. Management and other duties have been redefined by the introduction of modern-day gadgets that are computer controlled.
The Future of Human Computer Interface and Interactions
Human Computer Interface refers to the interactivity between computers and people (Sutherland, Robertson and John 2009, p.49). Unlike other gadgets that do not communicate with the user, the computer is the only tool that has direct communication with its user. This interactivity is made possible by both the software and the hardware.
The user passes communication to the machine using hardware like the mouse and keyboard, and receives communication through characters displayed on the monitor or through sound. However, this method of communication is only reliable for people who are not handicapped. People without hands and those who are mentally handicapped may not be in a position to operate traditional computers properly (Rodgers and Streluk 2002, p.98).
However, this may be changing very soon. Dr. Eric Leuthardt and a group of other scientists have developed a new computer interface that would accommodate the physically handicapped.
This interface allows one to control the computer using the brain. The computer is programmed to read the mind and respond to its demands. By using the power of their thoughts, physically handicapped individuals are able to control the cursor and issue commands to the computer. This interface will also benefit individuals with spinal cord injuries or paralysis.
This invention is so sophisticated that it makes it possible for anyone to use the gadget regardless of the physical challenges that one may have. Moreover, it comes with speed, as commands are issued as soon as they come to mind.
When the iPad was launched, everyone was asking what the next invention would be. A group of Australian scientists has come up with a new invention that is very similar to the iPad. It has the ability to read anything placed on it. This makes it very appropriate for places that require a high level of security, such as airports or five-star hotels. It can also be of good use in places like supermarkets, in the billing section.
The future of human computer interface and interactivity is already here. Life is becoming easier with every technological invention. This has a positive impact in both the short and the long run, especially in the fields of entertainment and the digital divide. Human beings will also be able to delegate many of their duties to computers.
A keener look into this phenomenon reveals that, as much as these inventions are necessary and have a positive impact on mankind, it is also true that they come at a cost. The effect of these sophisticated machines on the environment is adverse, especially when they are poorly disposed of (Abbot 2001, p.79). These inventions will also impact negatively on culture, as life becomes whatever one wants, regardless of age. It is therefore necessary to take care as we embrace this technology.
List of References
Abbot, C 2001, ICT: Changing Education, New York, Routledge.
Beaureau, B 2008, Information and Communication Technology: The Industrial Revolution That Wasn't, New York, Lulu.
Rodgers, A, and Streluk, A 2002, ICT Key Stage 1, London, Nelson Thornes Ltd.
Sutherland, R, Robertson, S and John, P 2009, Improving Classroom Learning with ICT, New York, Routledge.
The advancement of technology has heralded the onset of a whole new era in global networking, so much so that the safekeeping and continued existence of discrete data have become an issue of global interest. Business institutions and well-off individuals alike are investing heavily in the security of their systems. They are buying sophisticated systems whose services come in handy wherever systems must be operated, be it in aviation, nuclear operations and radiotherapy services, or in securing delicate information in sensitive databases. This creates a need for the service to be delivered reliably; when the delivered service can justifiably be trusted, the system is said to be dependable.
Dependability entails the ability of computer systems to incorporate features that encompass consistency, accessibility, data protection, system lifetime and the cost of running and maintaining the system. Failure of these systems leads to the loss of delicate data and may expose the data to unwarranted parties. The main aim of this project is to highlight and expound the primary ideas behind dependable operation.
People have tried to define dependability after going through observations made over time regarding their reliance on computer systems. From this, an in-depth look at dependability commences. Dependability is assessed according to the threats a system faces, its attributes and the means by which it assures users of its dependability. The means of dependability are also looked into in terms of how fast the system responds to queries and the probability of generating results. Dependability merges all these concerns within a single framework. The main purpose of this study is therefore to present an on-point and brief outline of the models, methodology and equipment that have evolved over time in the field of dependable computing.
The ideas behind dependability are founded on three basic parts: the threats to the system, the attributes of the system, and the techniques through which the system achieves dependability.
Reliability
Reliability can be described in a number of ways:
It can be defined as the idea that something is fit for a purpose with respect to time; the capacity of a device or system to perform as designed; the resistance to failure of a device or system; the ability of a device or system to perform a required function under stated conditions for a specified period of time; the probability that a functional unit will perform its required function for a specified interval under stated conditions; or the ability of something to fail well (fail without catastrophic consequences) (Clyde & Moss, 2008).
In the dependability of computer systems, reliability engineers rely a great deal on statistics, probability and reliability theory. Many techniques are employed in this type of engineering, encompassing reliability prediction, Weibull analysis, thermal management, reliability testing and accelerated life testing (Clyde & Moss, 2008). Because of the large number of reliability methods, their cost, and the varying degrees of rigour required for diverse situations, most projects develop a reliability program plan to identify the tasks that will be carried out for that particular system.
The purpose of reliability within computer dependability is to come up with a reliability requirement for the product, to establish a satisfactory reliability program, and to perform suitable analyses and tasks to make certain that the end result meets the necessary requirements. These particular tasks are controlled by a reliability manager, who should hold an accredited reliability engineering degree and have additional reliability-specific education and training. This kind of engineering is intimately linked with maintainability and logistics. Many problems emanating from other fields, for example security, can also be handled using these engineering methods.
Availability
In the dependability of computers, availability can be described as follows: the extent to which a system is in a specified operable and committable state when starting a mission (often described as a mission capable rate). Mathematically, this is expressed as 1 minus unavailability, or as the ratio of (a) the total time a functional unit is capable of being used during a given interval to (b) the length of the interval (Blanchard, 2000).
For example, a component capable of being used for 100 hours out of a week (168 hours) would have an availability of 100/168, or about 0.595 (Blanchard, 2000). Typically, though, availability values are quoted as decimals such as 0.9998. In high-availability applications, a metric equal to the number of nines following the decimal point is used; in this scheme, five nines corresponds to 0.99999 availability (Blanchard, 2000).
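The arithmetic behind these figures is simple enough to sketch. The short Python example below computes availability as uptime over interval length and, as an added illustration, the standard inherent-availability form A = MTBF / (MTBF + MTTR); the MTBF and MTTR inputs in the example call are made up for the illustration and do not come from the source.

# Minimal sketch of the availability arithmetic discussed above.
# The MTBF/MTTR figures in the example calls are illustrative only.

def availability(uptime_hours: float, interval_hours: float) -> float:
    """Availability as the fraction of an interval the unit was usable."""
    return uptime_hours / interval_hours

def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state (inherent) availability, A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    print(round(availability(100, 168), 3))      # 0.595, as in the text
    print(inherent_availability(99_999, 1))      # 0.99999, i.e. five nines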
Availability is well recognized in the literature on stochastic modeling and optimal maintenance. Barlow and Proschan (2001), for example, describe the availability of a fixable (repairable) system as the probability that the system is operating at a specified time. Blanchard (2000), on the other hand, defines it as a measure of the degree to which a system is in an operable and committable state at the start of a mission, when the mission is called for at an unknown (random) point in time.
Availability measures are categorized by the time interval of interest or by the mechanisms of system downtime. When the time interval of interest is the main concern, we consider instantaneous, limiting, average, and limiting average availability. The second primary classification depends on the various mechanisms for downtime, namely inherent availability, achieved availability, and operational availability (Blanchard, 2000).
Security
Computer dependability has a division known as computer security, a branch of computer system technology dealing with information security as it relates to computers and networks. The purpose of computer security includes safeguarding information from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users (Michael, 2005).
Computer-system security is the combination of procedures and mechanisms by which sensitive and important information and services are safeguarded from publication, interference or destruction by illegal activities or unreliable individuals and unintended events, respectively. The tactics and methodologies employed for security often differ from those of other computer technologies because of security's rather elusive objective of preventing unwanted computer behavior, instead of merely enabling required computer behavior.
Expertise in computer dependability security is founded upon logic. Because security is not necessarily the most important goal of the majority of computer applications, planning a system with security in mind frequently imposes restrictions on that program's performance.
The following are four approaches to security in computers; sometimes a mixture of approaches comes in handy:
a) Trust all the software to abide by a security policy although the software is not trustworthy (this is computer insecurity). b) Trust all the software to abide by a security policy and validate the software as trustworthy (by tedious branch and path analysis, for example). c) Trust no software but enforce a security policy with mechanisms that are not trustworthy (again, this is computer insecurity). d) Trust no software but enforce a security policy with trustworthy hardware mechanisms (Michael, 2005).
Computers are made up of software executing on top of hardware. A computer system is a merger of these two components, providing precise functionality under either a clearly stated or an implicitly accepted security policy.
The Department of Defense Trusted Computer System Evaluation Criteria (TCSEC), archaic though it may be, calls for the inclusion of specially designed hardware features, including such approaches as tagged architectures and the constraint of executable text to specific memory sections and/or register groups in computers and computer systems, which particularly addresses the stack-smashing attacks of recent notoriety (Michael, 2005).
Safety
Safety is the condition of being safeguarded against physical, social, spiritual, financial, political, emotional, occupational, psychological, educational or other types or consequences of failure, damage, error, accident, harm or any other event which could be considered undesirable (Wilsons, 2000). It can also be described as the control of identified risks to attain an acceptable level of risk. It can take the form of being guarded against the occurrence of, or exposure to, something that brings about health or economic losses, and it can comprise the safeguarding of people or property.
Computer system safety, or reliability, is an engineering discipline. Continuous change in computer technology, environmental regulation and public safety concerns makes the scrutiny of complex safety-critical systems ever more demanding.
A common myth among computer system engineers concerning control systems, for example, is that the subject of safety can be readily deduced. In fact, safety concerns have been exposed one after another by many practitioners, and they cannot all be anticipated by one person over a short period of time. Knowledge of the literature, the standards and routine practice in any given field is an important part of safety engineering in computer systems.
A mixture of theory and track record of performance is involved, and the track record points out a number of theory areas that are pertinent. In the US, for example, persons holding a state license in professional systems engineering are expected to be competent in this regard, the foregoing notwithstanding, although most engineers have no need of the license for their work (Bowen, 2003).
Safety is frequently viewed as one part of a collection of associated disciplines: quality, reliability, availability, maintainability and safety. (Availability is sometimes not mentioned, on the principle that it is a simple function of reliability and maintainability) (Wilsons, 2000). These disciplines tend to determine the worth of any work, and insufficiency in any of these areas is considered to bring about a cost beyond the cost of addressing the area in the first place (Wilsons, 2000); good management is then expected to reduce total costs.
Performability
Performability, at first impression, looks like a simple gauge of performance, and one may define it simply as the ability to execute a task or perform. In reality, performance constitutes only about half of any performability evaluation. Performability is actually a composite measure of a system's performance and dependability, and it is one of the fundamental assessment methods for degradable systems (highly dependable structures that can undergo graceful degradation in performance when malfunctions are detected while still allowing continued normal operation). A good example of a degradable system is a spaceship's control system with three central processing units (CPUs).
A malfunction in such a system could be disastrous, perhaps even causing loss of life. Thus, the system is designed to degrade upon failure of CPU 1: CPUs 2 and 3 drop their lower-priority work in order to complete the high-priority work that the failed CPU would have done (Carter, 2009). The majority of work in performability currently deals with degradable computer systems like the one mentioned; the idea, however, is applicable to degradable structures in areas as diverse as economics and biology.
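To make the degradation step concrete, the sketch below shows one hypothetical way such a three-CPU scheduler might redistribute work: when a CPU fails, the survivors discard their low-priority tasks and take over the failed unit's high-priority tasks. The class and method names are invented for this illustration and are not taken from the source.

# Hypothetical sketch of graceful degradation in a three-CPU control system:
# on a CPU failure, surviving CPUs shed low-priority work and absorb the
# failed CPU's high-priority tasks. Names and structure are illustrative only.

class DegradableSystem:
    def __init__(self, cpu_ids=(1, 2, 3)):
        # Each CPU holds two task queues keyed by priority.
        self.cpus = {cid: {"high": [], "low": []} for cid in cpu_ids}

    def assign(self, cpu_id, task, priority="low"):
        self.cpus[cpu_id][priority].append(task)

    def fail(self, cpu_id):
        """Remove a failed CPU and redistribute its high-priority tasks."""
        failed = self.cpus.pop(cpu_id)
        survivors = list(self.cpus)
        if not survivors:
            raise RuntimeError("total system failure: no CPUs remain")
        for cid in survivors:
            self.cpus[cid]["low"].clear()    # degraded mode: drop low-priority work
        for i, task in enumerate(failed["high"]):
            target = survivors[i % len(survivors)]   # round-robin hand-off
            self.cpus[target]["high"].append(task)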
Performance can be considered as the quality of service (QoS) delivered, so long as the system is behaving correctly. Performance modeling, on the other hand, entails representing the probabilistic nature of user demands and predicting the system's performance, under the assumption that the system structure remains constant. Dependability is an all-encompassing term for reliability, availability, safety and security.
It is thus what allows reliance to be justifiably placed on the service the system delivers. Dependability modeling deals with the representation of changes in the structure of the system being modeled, usually due to faults, and with how these changes influence the availability of the system (Ireson, 2006). Performability modeling, then, considers the result of system changes and their effect on the general performance of the whole structure.
At one time, the majority of modeling work treated performance and dependability separately. First the dependability of a structure was made satisfactory, and then its execution was optimized. This produced systems with good performance when fully functional, but a severe decline in task execution when, unavoidably, malfunctions occurred. Fundamentally, the system was either on and running perfectly, or off when it crashed. Improvements on this led to the design of degradable systems (Lardner, 2002). Since degradable structures are designed to carry on with operations even after some components have failed (albeit at decreased levels of performance), their ability to execute tasks cannot be precisely assessed without considering the impact of structural changes, breakdown and repair.
Initially, scrutiny of such structures from a pure performance point of view was most likely optimistic, because it disregarded the failure-and-repair behavior of the systems. On the other hand, dependability scrutiny tended to be conservative, because performance considerations were not taken into account. It was therefore essential that processes for the joint assessment of performance and dependability be developed.
Maintainability
This is a measure of the ease and rapidity with which an equipment or system can be restored to its initial operational status after a failure (Randell, 2001). As Randell further shows, it is a characteristic of the equipment design and installation, the availability of skilled personnel, the environment in which physical maintenance is performed, and the adequacy of the maintenance procedures. Maintainability is expressed as the likelihood that an item is retained in, or restored to, a specified condition within a given period of time, when maintenance is carried out according to the prescribed methods and resources.
The maintainability figure of merit is normally the MTTR (Mean Time To Repair) together with a limit on the maximum repair time. It is often expressed as M(t) = 1 - exp(-t/MTTR) = 1 - exp(-mt), where m = 1/MTTR is the constant maintenance (repair) rate and MTTR is the Mean Time To Repair (Laprie, 2005).
MTTR is a mathematical average indicating how quickly a system is repaired, and it is easier to visualize than a probability value. The concern of maintainability is to achieve shorter repair times in order to maintain high availability and to minimize the cost of downtime of productive equipment when availability is critical. A good maintainability goal might be a 90% probability that a maintenance repair is completed within 8 hours, with 24 hours as the maximum repair time.
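As a rough check of such a goal, the sketch below evaluates M(t) = 1 - exp(-t/MTTR) for an assumed MTTR and tests whether the 8-hour, 90% target is met. The MTTR value of 3 hours is an assumption chosen only to illustrate the calculation; it is not a figure from the source.

# Sketch of the maintainability function M(t) = 1 - exp(-t / MTTR), used to
# check a repair-time goal. The MTTR of 3 hours below is an assumed value.
import math

def maintainability(t_hours: float, mttr_hours: float) -> float:
    """Probability that repair is completed within t_hours."""
    return 1.0 - math.exp(-t_hours / mttr_hours)

if __name__ == "__main__":
    mttr = 3.0                                   # assumed mean time to repair
    m_8h = maintainability(8.0, mttr)            # about 0.93 for MTTR = 3 h
    print(round(m_8h, 3), m_8h >= 0.90)          # goal of 90% within 8 h is met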
Evaluation of dependability
Computing systems are characterized by five basic properties: functionality, usability, performance, cost (both purchase cost and operational cost) and dependability. The ability of a computer system to deliver the appropriate service and so gain the trust of the end-user is referred to as dependability. Service delivery refers to the behavior of the system as perceived by the user, while the function of the system refers to its intended use, as prescribed by the system's specification. When the system carries out the intended service without a hitch and delivers on time, this is referred to as correct service.
When the system fails to deliver the service it was made for, or deviates from rendering correct service, this is referred to as a system failure, also called an outage. When this happens and the system is later restored to performing its original service, that event is called service restoration. Reflecting this understanding of system failure, dependability can be defined as the ability of a system to avoid failures that are more frequent or more severe, and outage durations that are longer, than is acceptable to the user(s) (Jones, 2000).
A system failure may be associated with non-compliance with a given specification, or with a program function that is not fully described. The part of the system state that may in future cause a system breakdown is called an error, and the adjudged or hypothesized cause of an error is referred to as a fault. When a fault produces an error the fault is active; otherwise it is dormant. There are various ways in which a system can fail, and failure modes can be ranked according to the severity of their effect on the system. They are categorized along three dimensions: the failure domain, the perception of the failure by the end-user(s), and the consequences of the failure for the environment (Laprie, 2005).
A system comprises a set of interacting components; consequently a system state is the set of its components' states. A fault initially causes an error inside the state of a component, but the failure of the whole structure will not come about as long as the error does not reach the service interface of the whole structure. A suitable categorization of errors is therefore described in terms of the component failures that they bring about.
Errors may be distinguished, for example, as value versus timing errors; as consistent versus inconsistent (Byzantine) errors when the output goes to two or more components; and as errors of different severities: minor, ordinary or catastrophic (Schneider, 2000). An error is detected if its presence in the system is indicated by an error message or signal; errors that are present but not detected are latent errors. Faults and their causes are very varied, and they can be categorized according to six chief criteria into elementary fault classes.
It could be argued that introducing phenomenological causes into the classification criteria of faults may lead recursively to questions such as "why do programmers make mistakes?" or "why do integrated circuits fail?". Fault is a concept that serves to stop this recursion, hence the definition given: the adjudged or hypothesized cause of an error. This cause may vary depending upon the viewpoint that is chosen: fault tolerance mechanisms, maintenance engineer, repair shop, developer, semiconductor physicist, etc. (Schneider, 2000).
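The fault-error-failure chain described above can be pictured with a toy model: a dormant fault becomes active, corrupts a component's internal state (an error), and only produces a failure if the error reaches the service interface. The Python sketch below is purely illustrative; the class and method names are invented for the example and do not come from the source.

# Toy sketch of the fault -> error -> failure chain described above.
# All names here are illustrative, not from the source.

class Component:
    def __init__(self, name, has_fault=False):
        self.name = name
        self.has_fault = has_fault   # dormant fault present?
        self.error = False           # corrupted internal state?

    def activate_fault(self):
        """An active fault produces an error in the component state."""
        if self.has_fault:
            self.error = True

    def deliver_service(self) -> bool:
        """A failure occurs only if the error reaches the service interface."""
        return not self.error        # True = correct service, False = failure

faulty = Component("sensor-driver", has_fault=True)
faulty.activate_fault()
print("correct service delivered:", faulty.deliver_service())  # False -> failure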
Conclusion
The main strength of the dependability concept is its integrative nature, which permits the classical notions of reliability, availability, safety, security and maintainability to be brought on board and treated as attributes of dependability.
The fault-error-failure model is central to the understanding and mastering of the various threats that may affect a system (Newman, 2003). It facilitates a unified presentation of the threats while preserving their specificities through the various fault classes which can be defined. Another important aspect is the employment of a fully generalized notion of failure, as opposed to one which is limited to particular types, causes or outcomes of failure. The model presented as a way of achieving dependability is exceptionally valuable because the means map onto the attributes of dependability; the design of any system has to carry out trade-offs, owing to the fact that the attributes tend to conflict with each other.
Computers and digital devices are increasingly being used in significant applications where failure can bring about huge economic impacts. There has been a great deal of research describing techniques for attaining the required high dependability. To achieve it, computer systems are required to be safe, secure, readily available, well maintained and reliable, to have enhanced performance, and to be robust against a variety of faults, including malicious faults from information attacks. In the current culture of high-tech, tightly coupled systems, which encompass much of a nation's critical communications, computer malfunction can be catastrophic to a nation's, an organization's or an individual's economic and safety interests (Mellon, 2005).
Understanding how and why computers fail, as well as what can be done to prevent or tolerate failure, is the main thrust of achieving the required dependability. These failures include malfunctions due to human operation as well as design errors (Mellon, 2005). The aftermath is usually massive, embodying loss of records, loss of reliability or privacy, loss of life, or income losses exceeding thousands of dollars per minute (Mellon, 2005). With more dependable system structures, such effects can be avoided.
References
Algirdas, A. (2000). Fundamental concepts of computer systems dependability. Los Angeles: Magnus University Press.
Barlow, A., & Proschan, N. (2001). Availability of fixable systems. Computer Systems, 43(7): 172-184.
Blanchard, D. (2000). Availability of computer systems. New York, NY: McGraw-Hill.
Bowen, S. (2003). Computer safety. IBM Systems Journal, 36(2): 284-300.
Carter, W. (2009). Dependability on computer systems. Computer Systems, 12(1): 76-85.
Clyde, F., & Moss, Y. (2008). Reliability engineering and management. London: McGraw-Hill.
Ireson, W. (2006). Reliability engineering and management. London: McGraw-Hill.
Jones, A. (2000). The challenge of building survivable information-intensive systems. IEEE Computer, 33(8): 39-43.
Laprie, J. (2005). System failures. International Journal of Computer Technology, 92(3): 343-346.
Lardner, D. (2002). Degradable computer systems. P. Morrison and Sons Ltd.
Mellon, C. (2005). Dependable systems. London: McGraw-Hill.
Michael, R. (2005). Computer software and hardware security. New York, NY: CRC Press.
Newman, B. (2003). Fault error failure model. Sweden: Chalmers University Press.
Randell, B. (2001). Computing science. UK: Newcastle University Press.
Schneider, F. (2000). Faults in computer systems. National Academy Press.
Wilsons, T. (2000). Safety. London: Collins & Associates.
Today, computers have become an integral component of human life, and one wonders how life would be without them. Randell holds, "In reality, computers, as they are known and used today, are still relatively new" (45). Although computing devices have existed since the abacus, it is the contemporary computer that has had a significant impact on human life. Current computers have progressed through numerous generations to what we have today, and the ongoing technological advancement is bound to result in the development of supercomputers in the future (Randell 47). Computer engineers look forward to the development of miniature, powerful computers that will have a significant impact on society. This paper will discuss the evolution of computers, as well as the future of computers and its potential repercussions on society.
The Evolution of Computers
The modern-day computer has evolved through four generations. The first generation of computers occurred between 1940 and 1956. The computers manufactured during this period were big and used magnetic drums as memory (Randell 49). Additionally, they used vacuum tubes as amplifiers and switches, which led to the computers emitting a lot of heat. These computers did not use an advanced programming language; instead, they relied on a simple programming language known as machine language.
The second generation of computers dated between 1956 and 1963. These computers used transistors instead of vacuum tubes and, as a result, did not consume a lot of power. Furthermore, the use of transistors helped to minimize the amount of heat that the computers released (Randell 50). These computers were more efficient than their forerunners, and the elimination of vacuum tubes reduced the size of the computer. Second-generation computers comprised magnetic storage and a core memory.
The third generation of computers dated between 1964 and 1971. The computers developed during this period were superior in speed. They used integrated circuits. The integrated circuits comprised many tiny transistors embedded on silicon chips. The integrated circuits enhanced the efficiency of the computer. Besides, they contributed to the development of small, cheap computers (Zabrodin and Levin 747). The previous generations of computers used printouts and punch cards. However, the third generation computers used monitors and keyboards.
The fourth generation of computers was developed between 1971 and 2010. These computers were designed at a time when humanity had realized tremendous technological growth. Thus, it was easy for computer manufacturers to put millions of transistors on one circuit chip. Besides, the manufacturers developed the first microprocessor, known as the Intel 4004 chip (Zabrodin and Levin 748). The development of the microprocessor marked the beginning of the production of personal computers. By the early 1980s numerous brands of personal computers were already in the market, including the International Business Machines (IBM) PC, the Apple II, and the Commodore PET. Computer engineers also came up with the graphical user interface (GUI), which enhanced computer usage (Zabrodin and Levin 749). They also improved the storage capability, primary memory and speed of the computer.
The Future of Computers
The current computers use semiconductors, electric power, and metals. There are speculations that future computers will use light, DNA or atoms. Moore's Law hints that future computers will shift from quartz to quantum. Computer scientists continue to increase the number of transistors that a microprocessor holds; with time, a microprocessor will comprise multiple atomic circuits. That will usher in the era of quantum computers, which will utilize the power of molecules and atoms to execute commands (Ladd et al. 47). Quantum computers will use qubits to run operations, and a quantum computer will ease the computation of complicated problems. Unfortunately, such computers will be unstable: people will need to ensure that they do not interfere with the quantum state of the computer, since interfering with the quantum state will affect its computing power.
Lajoie and Derry claim, "Perhaps the future of computers lies inside us" (23). Computer scientists are in the process of developing machines that use DNA to execute commands. The collaboration between biologists and computer scientists could see the creation of the next generation of computers. Scientists argue, "DNA has the potential to perform calculations many times faster than the world's most powerful human-built computers" (Lajoie and Derry 31). Therefore, in future, scientists may look for ways to develop computers that exploit the computing powers of DNA. Scientists have already come up with the means to apply DNA molecules to execute complicated mathematical problems (Lajoie and Derry 34). Indeed, it is only a matter of time before computer scientists use DNA to develop biochips that enhance the power of computers. DNA computers will have a storage capacity that can hold a lot of data.
Effects of Future of Computers
The development of sophisticated computers will have a myriad of effects on human life. Future computers will have an intelligence that is akin or superior to that of humans. Presently, some computers can read multiple books in a second, and some computers have the capacity to respond to questions asked in natural language. Google is working on a project to develop an artificial intelligence that can read and comprehend different documents (Russell and Norvig 112). Such an artificial intelligence will serve as a source of information. People will no longer need to read books or go to school. Besides, it will render insignificant the need for human interactions. People will use computers to get answers to all their problems.
The development of sophisticated computers will also result in many people losing their jobs. Once computer scientists develop a computer with intelligence akin to that of humans, there will be a rise of intelligent robots that perform most human jobs. Currently, some robots facilitate the production of goods (Doi 201). In the future, there will be robots that can construct roads, work in supermarkets, and prepare meals in restaurants, and there will no longer be a need for human labor. The development of supercomputers will have positive impacts on the provision of quality healthcare. There will be computers that can perform blood tests, measure cholesterol levels, and diagnose allergies (Doi 203). Besides, computers will examine people's DNA to determine potential genetic risks and forecast possible illnesses. Such computers will help to boost the quality of healthcare and minimize deaths that result from erroneous diagnoses.
Conclusion
Computer development has evolved over time, resulting in personal computers that are not only small in size but also efficient. Computer scientists continue to develop sophisticated computers. In the future, computers will use DNA, light, and atoms to process data. Scientists are in the course of developing quantum computers. Additionally, collaboration between computer scientists and biologists will facilitate the creation of biochips using human DNA. The development of supercomputers will not only enhance the provision of quality healthcare but also eliminate the need for schools and human interactions.
Works Cited
Doi, Kunio. "Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential." Computerized Medical Imaging and Graphics 31.5 (2007): 198-211. Print.
Ladd, Thaddeus, Fedor Jelezko, Raymond Laflamme, Yasunobu Nakamura, Christopher Monroe and Jeremy O'Brien. "Quantum Computers." Nature 464.1 (2010): 45-53. Print.
Lajoie, Susanne, and S. Derry. Computers as Cognitive Tools, New York: Routledge, 2009. Print.
Randell, Brian. The Origins of Digital Computers, New York: Routledge, 2013. Print.
Russell, Stuart and P. Norvig. Artificial Intelligence: A Modern Approach, London: Prentice Hall, 2003. Print.
Zabrodin, Aleksey and Vladimir Levin. "Supercomputers: Current State and Development." Automation and Remote Control 68.5 (2009): 746-749. Print.