Computer Viruses, Their Types and Prevention

Introduction

Computer viruses are somewhat similar to their organic counterparts in that they infect the systems they are introduced to and focus on replication. However, computer viruses are not a natural aspect of software; rather, they are purposefully created to carry out various functions, many of them malicious: compromising the integrity of a computer's security system, introducing flaws into the programming architecture to cause errors, or even causing hardware to malfunction to the point of destruction. These are only a few of the possible actions a computer virus could be responsible for and, as such, they show why it is necessary to know about the different types of viruses out there, how they can infect systems, and what measures a user can take either to prevent infection or to get rid of one.

Types of Virus

Macro Virus

The infection vector of a macro virus is through programs that use macros, such as .doc, .xls, or .ppt files. While the extensions may not be familiar, they belong to Microsoft Word, Excel, and PowerPoint. A macro virus infects these files and spreads when they are shared via email or USB drives.

Memory Resident Virus

A memory-resident virus is one of the most resilient types of virus out there since it resides in the computer's RAM and comes out of stasis every time the computer's OS is loaded. As a result, it infects other open files, leading to the further spread of the virus.

Worms

A worm is a self-replicating virus that focuses on creating adverse effects on your computer. This can include deleting critical system files, overwriting program code, and consuming valuable CPU and memory resources. Worm infections are identifiable by sudden process errors as well as a noticeable decline in your computer's performance.

Trojan

Trojan viruses are aptly named since they stay hidden in a computer's system, subtly gathering information. Unlike worms, the impact of Trojans is rarely felt, since their primary purpose is to collect information and transmit it to a predetermined location. Banking information, passwords, and personal details are what Trojans are usually after, since this information enables malicious hackers to commit identity theft as well as to illegally access online accounts and transfer funds.

Direct Action Viruses

This type of virus takes action once certain conditions have been met, such as when it is executed by the user (i.e., opened or clicked). Direct action viruses are typically found in the system directory and infect the various files therein; however, some varieties change location depending on how they were initially programmed.

While these are only a few examples, they do represent the various types of computer viruses out there and show why it is necessary to devise different methods of combating them.

Why is it Hard to Prevent the Creation of Computer Viruses?

The problem with computer viruses is that they are often created by people who are deliberately looking for exploits in computer systems. Since they are intentionally searching for holes in security, it is not surprising that it is hard to create a truly impregnable system that can withstand all manner of computer viruses. The problem lies in the fact that computer viruses are not static entities; they continue to evolve alongside new programming architectures. It is not the viruses themselves that evolve; rather, it is the programmers who create new viruses based on additional principles they learn as technology, and in turn software development, continues to improve. Because of this, attempts at creating more efficient anti-virus solutions are met with new types of viruses that try to circumvent them. The only way this cycle would stop altogether is if all virus creation were to cease, an event that is highly unlikely to occur.

Standard Practices to Prevent Infection

Install an Anti-Virus Software Program

One of the best ways of stopping infection is to install an anti-virus program (e.g., McAfee, Symantec, Avast). Such programs specialize in scanning files, identifying a virus based on information from a signature database, isolating the infected file, and deleting it if possible. Do note that anti-virus programs are not infallible, since new viruses are created almost every day; as such, the battle between anti-virus companies and virus creators is never-ending.
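As a rough illustration of the scan-identify-isolate workflow described above, the sketch below hashes files and compares them against a small signature database. The hash value, downloads folder, and quarantine folder are hypothetical placeholders; real products rely on far richer signatures and heuristics.

```python
import hashlib
from pathlib import Path

# Hypothetical database of known-bad SHA-256 hashes (placeholder value only).
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

QUARANTINE_DIR = Path("quarantine")

def scan_and_quarantine(path: Path) -> bool:
    """Hash the file, compare it against the signature database, and isolate it on a match."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        QUARANTINE_DIR.mkdir(exist_ok=True)
        path.rename(QUARANTINE_DIR / path.name)  # isolate rather than delete outright
        return True
    return False

if __name__ == "__main__":
    for candidate in Path("downloads").glob("*"):
        if candidate.is_file() and scan_and_quarantine(candidate):
            print(f"Quarantined: {candidate.name}")
```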

Do Not Visit Suspicious Websites

Suspicious websites fall under the category of sites that have questionable content or lack the necessary SSL certificates or verifications. These sites often try to draw visitors with advertisements claiming that free games can be downloaded from the site, or that it hosts other types of content that a person would usually need to pay for. Torrent websites are often the most visited of these sites since they offer a wide variety of free content that has been illegally obtained. However, while it may be tempting to download movies and games, some of these torrent files come bundled with viruses that can compromise your system's security. This can lead to instances of identity theft, which can cost you several thousand dollars more than the original price of the movie or game you illegally downloaded.

Be Wary of Foreign USB Drives

USB drives are a ubiquitous method for sharing information around campus; however, since people tend to share these drives among their friends, there are instances where an infection in one computer can rapidly spread to others from that single USB. It is due to circumstances such as these that computer owners need to be cautious with any USB drive that they accept. If you know that the drive has been continuously shared, you need to perform an anti-virus scan on it.

Complete System Reformatting

In cases where a computer system has become hopelessly infected and junk data has slowed processing to a crawl, it is often necessary to perform a complete system reformat. A system reformat consists of wiping the hard drive and reinstalling the operating system. This gets rid of any viruses that remain and enables the computer to work properly again, though at the cost of all the files on the computer unless they have been backed up.

Conclusion

All in all, computer viruses can cause considerable damage if the proper precautions are not taken. Utilizing anti-virus programs and following the various instructions in this paper should result in a relatively low chance of your computer becoming infected.

Firewalls in Computer Security

Introduction

Computer security is a branch of technology; as applied to the field of computing, it is known as information security. Its aims are broad but mostly encompass shielding information from theft and distortion and making the preservation of information possible. Computer security imposes conditions on computers that differ from what most other systems require of them. These conditions make computer security a challenging issue, since they require computer programs to carry out only what is required of them, and in a specific manner. This limits the scope and the speed with which a program can operate. Computer security work aims at lessening these inhibitions by transforming these negative constraints into positive, enforceable principles (Layton, 55). Computer security can therefore be said to be more technical and mathematical than other computer-science-related fields. However, it must be noted that the main concern of information security and/or computer security is the protection of the information that is stored, processed, or worked on by the computer. This is true whether it is the protection of the computer hardware or the system software that is involved.

Main text

Much development and evolution has taken place in the field of computer security, and it is now generally held that there are four common approaches to attaining it. The first approach involves physically barring access to computers from those who would compromise their security. The second and third approaches involve, respectively, the use of hardware mechanisms that impose rules on programs so as to avoid depending on vulnerable programs, and the use of operating system mechanisms that enforce such rules on programs (Peltier, 256). Much of the operating system security technology is based on 1980s science, which has been used to produce some of the most impenetrable operating systems. At present, however, these systems have seen limited use, since they are both laborious and very technical (and therefore too little understood for efficient and maximum exploitation). An example of this is the Bell-LaPadula model. The fourth approach involves the use of programming strategies to make computer programs highly reliable and able to withstand subversion.

A firewall is a configured device designed to allow, reject, encrypt, or proxy computer traffic between different network domains, based on a set of specific rules. Alternatively, it may be defined as a dedicated appliance, or software running on a computer, which inspects the network traffic passing through it and rejects or allows passage according to a set of rules. The basic function of the firewall is to control the flow of traffic between computer networks of different trust levels. There are three such trust levels: the Internet, which is a zone of no trust (since the Internet is open to all material that can be sent over it); the trusted internal network, which is the highest trust zone; and the demilitarized zone (DMZ), an intermediate trust level located between the Internet and the trusted internal network. The firewall operates on a rule of default deny, allowing in only designated network connections and locking out the rest. Without proper configuration, a firewall can be almost useless.
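As a toy illustration of the default-deny behaviour just described, the sketch below permits only traffic that matches an explicit allow rule and drops everything else. The rule set, networks, and ports are illustrative assumptions, not drawn from any real product.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    network: str  # source network that is allowed
    port: int     # destination port that is allowed

# Hypothetical allow list: internal clients to HTTPS, an admin subnet to SSH.
ALLOW_RULES = [
    Rule("10.0.0.0/8", 443),
    Rule("192.168.1.0/24", 22),
]

def permit(src_ip: str, dst_port: int) -> bool:
    """Allow only traffic matching an explicit rule; deny everything else."""
    for rule in ALLOW_RULES:
        if ip_address(src_ip) in ip_network(rule.network) and dst_port == rule.port:
            return True
    return False  # default deny

print(permit("10.1.2.3", 443))    # True: matches the first rule
print(permit("203.0.113.9", 23))  # False: no rule matches, so it is dropped
```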

Historically, the term firewall referred to measures taken to keep fire from reaching buildings. With later developments, the term came to refer to the structure or metal sheet that separates the engine compartment from the passenger cabin in a vehicle or an aircraft. Firewall computer technology emerged in the 1980s, when the Internet was still a fledgling in terms of global use and connectivity. The antecedent factors that pushed for the firewall's introduction were Clifford Stoll's discovery of German spies tampering with systems over the Internet; Bill Cheswick's 1992 electronic "jail" set up to observe an attacker (which demonstrated clearly that Internet users were not safe but were susceptible to spying and unwarranted interference, whether by online criminals with vast computer and Internet acumen or by computer bugs); the 1988 viral attack on NASA's Ames Research Center in California; and the first large-scale Internet attack, the Morris worm.

There are diverse ways of classifying firewalls, based on qualities and characteristics such as speed, flexibility, and simplicity, or greater authentication and higher logging capacity. Types of firewalls that fall under the rubric of speed, flexibility, and simplicity include the packet filter firewall and the stateful inspection firewall (whose modes of operation are discussed in the succeeding paragraphs). Those classified under greater authentication and higher logging capacity include the application proxy gateway firewall and dedicated proxy servers; these too are delved into in the succeeding paragraphs.

In 1988, the first paper on firewall technology appeared after the Digital Equipment Corporation conducted a series of studies and came up with a filter system known as the packet filter firewall. This was the first generation of what evolved into a highly technical Internet security system. Later improvements to packet filtering came courtesy of Bill Cheswick and Steve Bellovin of AT&T Bell Labs.

Packet filtering works by inspecting packets, which are the basic units of data transfer between interconnected computers. Following a set of rules, the packet filter drops a packet and sends an error response back to the source if the packet matches the filter (Zhang and Zheng, 300). This type of packet filtering judges every packet solely on the information in the packet itself and pays no attention to whether or not the packet is part of an already existing stream of traffic.

Packet filters work at network layers 1-3 and operate very efficiently since they inspect only the header of the packet. Initially, there were only stateless firewalls, which lacked the capacity to detect whether a packet was part of an existing connection; this is a problem for protocols such as the File Transfer Protocol, which opens up arbitrary ports by design. The stateful firewall addresses this by maintaining a table of open connections, associating new connection requests with already existing connections that are held to be legitimate.

Stateful firewalls also work efficiently because they check the IP addresses and ports involved in each connection as well as the sequence of packets surrounding it. The stateful firewall is able to achieve this because it retains a record of every connection from beginning to end.

When a client starts a new connection, it sends a packet with the SYN bit set in its header, and the firewall treats all packets with the SYN bit set as new connections. If the service the client asked for is available, it replies to the SYN packet; once the client responds with a packet carrying the ACK bit, the connection enters an established state. Having passed all outgoing packets, the firewall accepts incoming packets only if they belong to an already established connection, which keeps hackers from being able to start unwanted connections to the protected machine. If no traffic has passed for some time, stale connections are deleted from the state table to keep it from overflowing. To prevent connections from being dropped, periodic keep-alive messages are sent. This is how packet filters work.
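The sketch below is a highly simplified model of the state table described above, under the assumption of a three-field flow key: an outgoing SYN creates a pending entry, the client's ACK marks the connection as established, and inbound packets that match no tracked flow are refused. The field names and addresses are illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src: str    # client address
    dst: str    # server address
    dport: int  # destination port

state_table: dict = {}  # flow -> "SYN_SENT" or "ESTABLISHED"

def handle_outbound(key: FlowKey, syn: bool, ack: bool) -> None:
    if syn and not ack:
        state_table[key] = "SYN_SENT"     # client opens a new connection
    elif ack and state_table.get(key) == "SYN_SENT":
        state_table[key] = "ESTABLISHED"  # handshake completed

def allow_inbound(key: FlowKey) -> bool:
    # Only traffic belonging to a tracked connection is let back in;
    # unsolicited inbound packets are dropped.
    return key in state_table

flow = FlowKey("192.168.1.10", "198.51.100.7", 443)
handle_outbound(flow, syn=True, ack=False)  # outgoing SYN
handle_outbound(flow, syn=False, ack=True)  # outgoing ACK completes the handshake
print(allow_inbound(flow))                  # True: replies on this flow are permitted
```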

One of the side effects of pure packet filters is that they are highly susceptible to spoofing attacks. This is because packet filters do not possess the concept of a state as understood in computer science and computer security. On the same premise, pure application firewalls are also vulnerable to exploits, especially at the hands of online criminals.

A layer is a collection of interrelated functions that offers services to the layer above it and receives services from the layer below it. The history of layering stretches back to 1977, when work on a layered model was carried out under an American National Standards Institute working group; this work went on to become the Open Systems Interconnection reference model for distributed systems.

Although the ISO model has influenced the Internet protocols, none of its influence has been as heavy as that of the concrete operational model actually deployed.

The application layer explicitly interfaces with and provides application services to the application process, and also forwards requests to the presentation layer. The application layer exists to offer services to user-defined application processes, not to the end user. For instance, the application layer defines a file transfer protocol, but the end user must still run an application process to actually transfer files.

The primary functions performed by the application layer include facilitating the applications and the end-user processes and identifying the communication partners. The application layer provides services for file transfer, e-mail, and other network software services (Abrams, Jajodia, Podell, 199). In addition, the application layer supports privacy, establishes and authenticates the application user, and identifies the quality of the services offered. Examples of application layer protocols include Telnet and FTP. The application layer also places constraints on data syntax.

Within the application layer there is a common services sublayer which offers functional services such as association control, remote operations on service elements, and the facilitation of transaction processing. Above this common application-services sublayer sit important application protocols such as file transfer (FTAM), directory services (X.500), messaging (X.400), and batch job manipulation.

In computer networking, proxy servers are computer applications or programs that serve clients' requests by forwarding them to other servers. When a client connects to the proxy server to request some service (for example, a web page or a file), the proxy server responds by connecting to the relevant server and requesting the service on the client's behalf. Sometimes the proxy server may change the client's request or the server's response without notifying either party. A proxy server that passes all requests and responses unmodified is called a gateway or tunneling proxy. Proxies can run on the user's local computer or at key points between the Internet and the destination server. There are many types and functions of proxies, as discussed forthwith.

A caching proxy server caters for requests without contacting the origin server by returning a previously saved response to the same request. This process is known as caching; caching proxies keep local copies of the most frequently requested resources, allowing large organizations to cut their upstream bandwidth costs while increasing performance at the same time.
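A minimal sketch of that caching behaviour, under the assumption of a simple in-memory dictionary keyed by URL, is shown below; a real caching proxy would also honour cache-control headers and expiry. The URL is illustrative.

```python
from urllib.request import urlopen

cache: dict = {}  # URL -> previously fetched body

def fetch(url: str) -> bytes:
    """Return the cached body if present; otherwise fetch it from the origin and remember it."""
    if url in cache:
        return cache[url]  # cache hit: no upstream bandwidth used
    body = urlopen(url, timeout=10).read()  # cache miss: ask the origin server
    cache[url] = body
    return body

page = fetch("https://example.com/")        # first call goes upstream
page_again = fetch("https://example.com/")  # second call is served from the local copy
```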

There is also the content-filtering web proxy, which applies administrative control over the content relayed through the proxy. This type of proxy is used in both commercial and non-commercial organizations, for example in schools, to enforce acceptable-use policies. Common approaches include URL regex filtering, DNS blacklists, URL blacklists, and content filtering; the technique is widely employed because it supports authentication for web-page control. Separate from this, there is also the web proxy, which focuses on WWW traffic and commonly acts as a web cache. This method is widely employed in corporate domains, as evidenced by the increasing adoption of Linux in small and large enterprises and at home. Examples include Squid and NetCache, which allow filtering of content and thus provide ways to refuse access to URLs specified in a blacklist.

An anonymizing proxy server, otherwise known as a web anonymizer, anonymizes web browsing. However, this facility can be blocked by site administrators and thus rendered useless. Nevertheless, this form of proxy can help control access since it implements log-on requirements. This allows organizations to limit web access to authorized users only and also helps them keep track of how the web is being used by employees.

Intercepting or transparent proxies combine a gateway with a proxy server. This type of proxy has been widely used in businesses to enforce acceptable-use policies and to ease the administrative load, since no configuration of the client's browser is required. Intercepting proxies are detectable through comparison of the HTTP headers and the IP address. A hostile proxy, as the name suggests, is normally set up by cybercriminals to access the flow of data between the client and the web; the remedy, on detecting an unauthorized proxy, is to change the passwords used to access online services.

While a transparent proxy leaves requests and responses unmodified beyond what proxy authentication requires, a non-transparent proxy modifies responses in order to bring additional services to a group of users. Because open proxies are frequently abused, administrators often configure servers to deny access to clients arriving from open proxies; another way of countering the problem is to test the client's system to detect an open proxy.

A forced proxy handles all the traffic on the only accessible pathway to the Internet; alternatively, the term describes a setup in which clients must configure a proxy in order to reach the Internet at all. This operation is expedient for intercepting TCP connections and HTTP. HTTP interception, for instance, affects the usefulness of a proxy cache and can impact authentication mechanisms.

A reverse proxy server is one installed in front of a single web server or multiple web servers. All traffic coming from the Internet to one of those servers' web pages passes through the reverse proxy. The reverse proxy server has multiple functions, such as accelerating or terminating SSL encryption for secure websites, compressing web content to reduce loading time, and serving static content from its cache so as to offload the web servers.

The reverse proxy is also able to reduce resource usage caused by slow clients. This is achieved by caching the content the web server has sent and issuing it to the client in dribs and drabs, an undertaking known as spoon feeding. The proxy server also adds an extra layer of security and can therefore shield against attacks that are known to be web-server specific.

There are also special kinds of proxies, such as the extranet publishing proxy: a reverse proxy used to communicate with an internal server that has been firewalled, providing extranet services while the server remains behind the firewall.

In computer networking, network address translation (NAT), also known as network masquerading, IP masquerading, or native address translation, is a technique that rewrites network traffic as it passes through a router by re-encoding the source or destination IP address and the TCP or UDP port numbers of IP packets. The checksums are also recalculated to account for the changes. Most NAT systems do this to enable a multitude of hosts on a private network to reach the Internet through a single public IP address.
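The sketch below is a toy model of that rewriting, under the assumption of port-based translation (often called NAT overload or PAT): each outbound flow is given a fresh public-side port, and the recorded mapping is used to demultiplex replies. The addresses come from documentation ranges and are purely illustrative.

```python
import itertools

PUBLIC_IP = "203.0.113.1"
_ports = itertools.count(40000)  # next free public-side port
nat_table: dict = {}             # public port -> (private ip, private port)

def translate_outbound(private_ip: str, private_port: int):
    """Rewrite the private source address/port to the router's public address and a fresh port."""
    public_port = next(_ports)
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def translate_inbound(public_port: int):
    """Use the recorded port mapping to route a reply back to the right internal host."""
    return nat_table.get(public_port)

print(translate_outbound("192.168.0.5", 51515))  # ('203.0.113.1', 40000)
print(translate_inbound(40000))                  # ('192.168.0.5', 51515)
```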

NAT first came in as a method of countering the IPv4 address shortage and lessening the difficulty of reserving IP addresses. Recently, there has been widespread adoption of the technique in countries with smaller address allocations per capita (Bragg, Rhodes-Ousley and Strassberg).

Conclusion

NAT adds security by disguising the structure of the internal network. It does this by letting traffic pass through to the Internet from the local network while, as the source address of every packet is translated, the router keeps track of each active connection and its basic data. The TCP or UDP port numbers are then used, in the case of NAT overloading, to demultiplex the returning packets. To the outside, the router itself appears to be the source of the traffic.

The merits of using NAT are far-reaching. One of them is that it offers convenience at minimal cost. The fact that NAT does not provide full bidirectional connectivity means that it keeps malicious activity carried out by external hosts from reaching local machines. This keeps worms at bay and hinders scanning, thus enhancing privacy.

Perhaps the greatest of all these benefits is the fact that NAT alleviates the problems that result from the exhaustion of the IPv4 address space.

The drawbacks of Network Address Translation are also clear. For instance, there are no end-to-end connections behind NAT-supporting routers, which makes it impossible for the system to accommodate Internet protocols that depend on such connectivity, useful as they are. By the same token, services that must receive TCP connection initiations from external networks cannot do so. One way to work around this problem is to use a stateless protocol, the difficulty being that stateless protocols such as UDP are not impregnable to interference or disruption.

Bibliography

Abrams, D. Marshall, Jajodia, Sushil and Podell Harold. Some integrated essays on information security. US: IEEE Computer Society Press, 1994.

Bragg, Roberta, Rhodes-Ousley, Mark and Strassberg, Keith. A complete reference of network security. US: McGraw-Hill Professional, 2003.

Layton, P. Timothy. Information security: measurements and compliance. US: CRC Press, 2006.

Peltier, R. Thomas. Guidelines to information security policies. US: CRC Press, 2001.

Zhang, Kan and Zheng, Yuliang. The seventh information security conference. US: Springer Press, 2004.

How Computers Negatively Affect Student Growth

Technology is becoming an ever-present entity in the lives of students today. Since today's students are the potential workforce of tomorrow, they will need problem-solving skills, many of which originate from computer technology. Since the computer is a potent tool for information processing, it has become an enormous part of our daily life. Lowell (2004) observed that although computers and related technology have been emphasized in the learning process, the amount of technology currently used in the classroom is the main difficulty. Some examples of computer technology adopted in schools are computer-assisted learning, open learning, connected learning communities, and anywhere, any time learning programs, among others. In this paper, an argument for the negative impact of the computer on student growth is presented.

Accessibility and suitability: most schools and students do not have computers, which implies that they cannot use computer programs for learning; the lack of internet facilities also leaves students without the information and content required for academic purposes. Those lucky enough to have access may not understand the content due to language barriers or cultural differences (Veasey, 1999).

Interfering with natural development: students, primarily in the lower grades, do not exercise their propensity for physically based activity when subjected to computer-based learning, since they spend so much time with the computers. According to researchers, when a student spends most of the time on the computer, his or her development is impaired. This may interfere with cognitive development, since psychologists claim that students and children should socialize with peers or adults in order to acquire new concepts.

Lacks depth: computer content is usually shallow and not dynamic. According to researchers, a trained and dedicated teacher can provide deeper and more flexible material, full of examples, than a computer can offer the student. This, in turn, means that a student taught by a teacher can gain broader knowledge and problem-solving skills, which positively improves his or her grades compared with learning from a computer.

Quality of content: digitized content is overly simplistic in its structure; for instance, a sum can only be wrong or right. The content does not explain why the sum was wrong, whereas a real teacher will mark a piece of work and offer the essential logical reasoning behind the decision. This helps the student gain a fundamental understanding of what makes an answer correct or incorrect.

Health hazards: computers are hazardous to the health of children in that they can lead to repetitive stress injuries, eyestrain, obesity, social isolation, and long-term physical, emotional, and intellectual developmental damage, all of which affect academic progress.

Safety: the internet poses many dangers to the student that affect his or her academic performance. These dangers include stalkers, hate and violence, pornographic material, and so on.

Technology is not absolutely essential for meaningful learning; as Lowell Monke argues, its use has led to sacrifices in intellectual growth and creativity. The use of computers in education encourages students to be lazy and less innovative. The computer does not offer a conducive environment for discussion, illustration, debate, and the like, which a real teacher provides well.

Because computers and associated services such as the internet are costly, and given the shortcomings of the technology discussed above, the money and funds directed to technology should be used in other fields (James, 2004).

In conclusion, for effective learning to take place, the real teacher should be encouraged, especially in lower grades of education. Computer-based learning should be advocated for students in higher levels of education such as colleges and universities.

References

James, W. (2004). Taking Sides: Clashing Views on Educational Issues. New York: McGraw-Hill. pp. 36-78.

Lowell, M. (2004). The Ecological Impact of Technology. The Journal of Opinion and Research, Vol. 4, pp. 23-30.

Lowell, M. (2004). The human touch: in a rush to place a computer on every desk, schools are neglecting intellectual creativity and personal growth. Vol. 2, pp. 57-65.

Veasey D'Souza, P. (1999). The Use of Electronic Mail as an Instructional Aid: An Exploratory Study. Journal of Computer-Based Instruction, 18(1), 106-110.

Student Growth: The Development of Enhanced Practices for Computer Technology. Web.

Threats to Computer Users

Introduction

It is commonplace that modern computer users encounter a myriad of challenges in the course of their endeavors. This calls for increased vigilance and awareness by these users to protect the confidentiality of their data and personal information. It is noteworthy that organizations are also susceptible to such attacks, highlighting the necessity of intervention and protective measures. In view of this, several organizations have introduced security training as a mandatory segment of their orientation procedures (Newman, 2009).

Phishing Tricks

Phishing refers to a form of social engineering that deceives credulous computer users into offering private information to third parties feigning legitimacy. The information varies greatly and can include basic details, such as a person's complete name and address; some schemes request social insurance details. A majority of phishing frauds involve financial resources and thus ask for bank account and credit card details. Initially, these swindles were limited to select groups of computer users; presently, they are widespread and have copious delivery techniques. Most of the rip-offs propagate through e-mail and assume the identity of legitimate brands or depository institutions. Other attack vectors gaining popularity include instant messaging services.

A fraudulent message delivered through e-mail ensures that unwary users receive a specially crafted correspondence from what appears to be a bank or another credible online service. These statements often refer to procedural concerns with the recipient's account, thereby requiring the recipient to provide the necessary updates, which is to be done by following an attached link for prompt admittance (Stewart, Tittel & Cha, 2005). In most cases, the links lead to duplicates of authentic sites and require the unsuspecting clients to fill in certain forms, disclosing their personal information in the process.
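The sketch below illustrates the kind of simple heuristics a mail filter or browser toolbar might apply to such a link: a mismatch between the displayed brand and the actual host, a missing HTTPS scheme, or a raw IP address in place of a domain. The brand name and example URLs are illustrative assumptions rather than any real product's rules.

```python
from urllib.parse import urlparse

TRUSTED_DOMAIN = "examplebank.com"  # hypothetical legitimate brand

def looks_like_phishing(href: str, display_text: str) -> bool:
    parsed = urlparse(href)
    host = parsed.hostname or ""
    # 1. The link claims to be the bank but points somewhere else entirely.
    mismatched = "examplebank" in display_text.lower() and not host.endswith(TRUSTED_DOMAIN)
    # 2. Credential-harvesting pages are often served without TLS.
    insecure = parsed.scheme != "https"
    # 3. A raw IP address in place of a domain name is a classic warning sign.
    raw_ip = host.replace(".", "").isdigit()
    return mismatched or insecure or raw_ip

print(looks_like_phishing("http://203.0.113.7/login", "examplebank.com secure login"))  # True
print(looks_like_phishing("https://examplebank.com/login", "examplebank.com"))          # False
```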

Mitigating vulnerability:

Ensuring they respond to personalized correspondence only.

Clients should avoid providing personal information and responding to forms included in e-mails.

They should also make certain they are on secure networks whenever they reveal their financial details, including credit card information.

Subscribers should check into their virtual accounts regularly to ensure their integrity.

Experts also recommend that clients install protective toolbars in their browsers that can offer protection from phishing sites. Most importantly, clients should ensure their browser applications have up-to-date defense patches, which are renewed at regular intervals.

Network Scans and Attacks

Scans happen with the aim of determining open ports or exposed services. The vulnerability of a system is directly proportional to the number of open ports running services. Vulnerable systems are often exploited for different reasons: crashing the running service and rendering it inoperable; opening a backdoor with system administrative rights that connects back to the attacker; carrying out functions embedded in a payload by launching scripts or programs; or incorporating the attacked system into a distributed denial-of-service network aimed at a website of the attacker's choosing, which renders the system functionless. Lastly, attackers carry out espionage missions by recording and relaying confidential or significant information back to the sender (Stewart, Tittel & Cha, 2005).
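As a minimal sketch of how an administrator might check their own machine for the open ports such scans look for, the snippet below probes a handful of well-known TCP ports on localhost. The port list and timeout are illustrative choices.

```python
import socket

COMMON_PORTS = [21, 22, 23, 25, 80, 135, 139, 443, 445, 3389]

def open_ports(host: str = "127.0.0.1") -> list:
    """Return the subset of COMMON_PORTS that accept a TCP connection on the given host."""
    found = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)                    # keep each probe quick
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

print("Ports with listening services:", open_ports())
```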

Mitigating vulnerability:

Computer users should obtain and use firewall products with their computer systems, whether they are software or hardware based.

It is advisable to have up to date operating systems, which function properly and have all their security patches in place.

Lastly, users should disable all unnecessary services within their systems.

Eavesdropping

This threat entails spying on other persons while they relay personal information over the internet. It is notable that this vice often targets persons revealing financial or other personal information. Prime targets are users of systems located in public places, since these persons cannot monitor the individuals standing behind them, nor can they prevent strangers from looking at the keyboard or monitor. The availability of minute monitoring devices has propagated this offense, because such devices can be mounted on a target's body (Stewart, Tittel & Cha, 2005). Powerful zoom technology allows offenders to monitor activities on the keyboard and monitor of the target's computer from a distance.

Mitigating vulnerability:

Persons should avoid personal computing in public places and areas that are easily accessible.

Computer users should embrace the use of password enabled screen savers.

Persons connected to a network should log out whenever they break from their engagements on the computer.

Using privacy screens for monitors also helps; persons viewing from an angle wider than 30 degrees will be obstructed.

Most importantly, users should avoid revealing private information in public places. They should write down private details instead whenever they need to communicate classified information.

Computer Theft

It is notable that present-day personal computers are smaller than they were several years back and store more information than the earlier models. Laptops, netbooks, and tablets are in the mainstream, while PDAs and cell phones constitute newer technological waves. This implies that confidential information can be stored in different locations within the computer and carried from one place to another with the user. On many occasions, the data includes commonly saved data files and private information; the latter often exists within the cache files of internet browser applications and frequently includes mail inbox details and other customized settings governing third-party applications. This implies that well-informed thieves may access crucial information stored in a device they steal (Stewart, Tittel & Cha, 2005).

Mitigating vulnerability:

People should be conversant about the location of their devices.

People should install tracking devices in the machines for activation in case of a loss.

The use of security cables for small computers slows down the activities of thieves.

Keeping the appliances in hidden spots reduces the risk of theft.

Ensuring that all computers have boot-level passwords.

Using software that encrypts data present on the hard disk.

Ensuring crucial information is removed from the computer at regular intervals.

Viruses, Worms and Trojans

An increase in the use of internet applications, including peer-to-peer file sharing, shortens the replication and spreading time of malware to minutes or hours. Advanced programming techniques and the advent of scripted utilities further increase the danger posed by these programs. Some malicious activities include obliterating the operating system and personal records, recording personal information, and monitoring system traffic flow (Stewart, Tittel & Cha, 2005).

Mitigating vulnerability:

Ensuring the antivirus commences operation automatically upon system boot.

Scanning through all incoming mail attachments before accessing them

Conducting downloads from reputable sites only

Updating the operating system regularly by installing vendor availed patches

Spyware and Adware

Adware programs spread through browsers as part of their scripting code, appearing as download links or vending sites. Most spyware applications embed themselves in these programs and other peer applications. Free programs, including screen savers and other utilities, also propagate spyware applications (Newman, 2009). Most of them are installed without prior consent, their presence hidden in end-user agreements. Measures for mitigating them are enumerated below.

Mitigating vulnerability:

Abstain from free downloads, especially unknown plug-ins and other system utilities.

Use licensed antivirus software and regularly scan for spyware.

Obtaining and using popup killers with browser packages.

Restricting the use of unnecessary applications and cookies

Avoiding peer-to-peer networks

Scanning mail attachments before opening them

Ensuring the operating system is patched and up to date

Refraining from accessing SPAM and mail from pornographic and other untrusted sites

Downloading updates of the operating system and patching whenever necessary.

Social Engineering

It refers to manipulating trust to obtain confidential information from others as part of an espionage mission. This makes the threat more pronounced internally, since interested parties in the organizations can trick subordinates to reveal confidential information. Perpetrators gather information about their target from various sources, including dumpster diving and corporate websites. While the vice is not rampant, an occurrence often has grave repercussions (Newman, 2009).

Mitigating vulnerability:

Ask for return contacts to verify the identity of callers

Deny all requests in case of intimidation

Shred papers with confidential information

Erase magnetic media after use or use physical destruction when erasing fails

References

Stewart, J., Tittel, E., & Cha, M. (2005). Certified Information Systems Security Professional Study Guide. California, CA: John Wiley and Sons.

Newman, R. (2009). Computer security: Protecting Digital Resources. Massachusetts, MA: Jones & Bartlett Learning.

PayPal and Human-Computer Interaction

Introduction

PayPal is one of the largest, most profitable, and most prominent electronic wallet/online payment systems. It operates largely online, allowing people all over the world to connect their finances for leisure, business, and family matters. Big, medium, and small organizations alike have embraced PayPal as a trusted payment method, expanding their client base and becoming more flexible. One of the strong points of the PayPal brand is its capacity to use visual design in the process of attracting and signing up new users. For the purposes of this work, some of the principles behind PayPal's visual design will be discussed. The discussion covers the website's design, focus, approach to working with clients, and accessibility.

Standards of HCI

Visual Quality

Visually, the paypal.com website is considerably simplistic. It follows the first of the golden rules of HCI: a drive for consistency. All of the text on the page uses uniform fonts and colors, and the color scheme of the brand is complemented and supported across the various tabs. The design of the page changes depending on whether one has logged in to their account, shaping the user experience according to their past experience with the service. This ability of the PayPal website to transform answers the need to enable universal usability. Those who are not clients of PayPal are greeted by big, bold banners of intent, conveying the main goals of the service and its benefits to the consumer (Digital Wallets, Money Management, and More, n.d.). If users scroll down, they see a representation of the usual PayPal interface and are walked through the process of engaging with it step by step. In addition, the upper part of the page contains a ribbon: shortcuts to other parts of the website are placed there, allowing various types of users to find what they need.

For existing users, the PayPal interface is much more robust. An individual's account is represented by a series of menus and utility ribbons, each of which opens up a specific section of the website (User interface bug: Wrong amount displayed as on hold, 2019). Transitioning between them gives an individual the ability to change settings, process payments, check their balance, and do many other things. All of the buttons and menus present on the screen at any one time use a large font and do not overload the viewer with information, which helps minimize the viewer's short-term memory load. Speaking from a purely visual standpoint, the website is simplistic: it uses a two-color design dominated by white and dark blue.

Functionality and Ease of Use

For an average user, PayPal is simple to navigate and straightforward. The main functionality of the website, transferring money and making payments, is accessible from the upper menu, and the user does not need more than a few clicks to reach any potential destination. All of the existing convenience options, coupled with the additional support menu, exist to give users quick access to basic important functionality. At any point a user can go back to a previous page or reverse their actions, encouraging exploration and increasing trust in the website. Similarly, only options that are actually available, such as send/receive money, are open to the end user; any pathways to making errors are closed off, and the user is notified when they cannot perform a certain operation. The naming conventions make it easier to understand where one needs to look to find what they need. For example, if users wanted to check their transaction history, they could examine the upper corner of the website and find both the summary and activity tabs fit for the need. In this case, both options would be correct, as each displays some amount of information about payments.

Social, Cognitive, and Environmental Factors

PayPal is an international service; it therefore works in multiple languages and in different parts of the world, making it possible for people of different cultural backgrounds to use it. Much of the functionality the site presents sits at the intersection of personal finance and digital platforms, which can come with a steep learning curve: an individual may not understand how to add a payment method or issue a refund.

Accessibility

While the company is committed to promoting accessibility, the core parts of its website may be difficult for some people to work with. A commitment to accessibility can be seen on the organization's webpage, but not within the service itself (Accessibility statement, n.d.). The white color scheme and small text are a problem for people with impaired vision, and there is no option to change the site's visual composition.

Recommendations

Implementing Changes

PayPal needs to make changes that would allow its user base to integrate more smoothly with the application's ecosystem. This includes making the design more flexible and allowing a certain level of customization or accessibility adjustment. As of now, the color scheme and placement conventions PayPal follows may create difficulty for some people. In addition, the design is not fully consistent throughout the website: it mixes different vertical and horizontal layouts and closes off sections that are open in other parts of the site, among other things. It is necessary to create a more uniform interface. It may also be necessary to follow the 4th rule of interface design and offer users more feedback. Actions on a page must have consequences, and those who take them must be aware of what they are doing, especially where finance is concerned. Because of this, the organization must commit to focusing on the user experience and building a stronger connection with its user base.

Conclusion

In conclusion, PayPal is a successful and ambitious organization working to make digital wallets a simple and accessible tool for the majority. The organization's work has incorporated digital payments into the everyday landscape and secured the brand with many customers. Both individuals and entire businesses choose PayPal as their partner of choice. Visually and practically, the company's website fulfills its purpose, delivering value to the consumer and directly providing access to money-transferring services. However, it lacks the accessibility and customization capacities a better website would have.

References

Accessibility statement. (n.d.). Web.

Digital Wallets, Money Management, and More. (n.d.). Web.

User interface bug: Wrong amount displayed as on hold. (2019). PayPal Community. Web.

Pipeline Hazards in Computer Architecture

It is important to note that pipeline hazards in computer architecture refer to situations where a pipelined machine experiences some form of impediment to the execution of a subsequent instruction. These hazards are categorized as structural, data, and control hazards. In computer architecture, it is critical to be able to distinguish between the different hazard types in order to address them effectively.

Firstly, the structural hazard is a result of an overlapped pipelined execution coming from more than one instruction. It is stated that it arises when hardware cannot support certain combinations of instructions (two instructions in the pipeline require the same resource) (Shanti, 2022, para. 5). In other words, it is a demand for the limited resource by several instructions, which can be addressed by adding more hardware or replicating the resource. Secondly, the data hazards take place when an instruction depends on the result of prior instruction still in the pipeline (Shanti, 2022, para. 5). Thus, the results of an instruction become a necessary input for another subsequent instruction down the line, creating the dependency. Thirdly, the control hazards are caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps) (Shanti, 2022, para. 5). Therefore, branch instructions are the primary reasons for these types of pipeline hazards to emerge.
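To make the data-hazard case concrete, the sketch below scans a toy instruction sequence for read-after-write (RAW) dependencies between adjacent instructions, the situation that forces a real pipeline to stall or forward. The instruction format and register names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Instr:
    op: str
    dest: str
    srcs: tuple

program = [
    Instr("ADD", "r1", ("r2", "r3")),  # r1 <- r2 + r3
    Instr("SUB", "r4", ("r1", "r5")),  # reads r1 before the ADD has written it back
]

def raw_hazards(instrs):
    """Return (producer, consumer) index pairs where an instruction reads a
    register written by the instruction immediately before it."""
    hazards = []
    for i in range(len(instrs) - 1):
        producer, consumer = instrs[i], instrs[i + 1]
        if producer.dest in consumer.srcs:
            hazards.append((i, i + 1))
    return hazards

print(raw_hazards(program))  # [(0, 1)]: stall or forward between these two instructions
```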

In conclusion, it is important to be able to distinguish between different pipeline types and their hazards in order to address them effectively, which include control hazards, data hazards, and structural hazards. The latter arises when several instructions demand one resource simultaneously, whereas data hazards are the result of the dependency of one instruction on the execution of another. It should be noted that control hazards are mainly attributable to branch instructions.

Reference

Shanti, A. P. (2022). Pipeline hazards. In R. Parthasarathi (Ed.), (pp. 11-12). Creative Commons Attribution Non-Commercial. Web.

Advancements in Computer Science and Their Effects on Wireless Networks

The most significant technological advancement witnessed in the 20th century was the expansion of the World Wide Web in the 1990s. This resulted in the interconnection of millions of computers and pages of information worldwide (Masrek et al. 199), and it became very cheap to share information across the globe. The introduction of laptops and other portable devices that replaced the desktop computer brought an influx of mobile wireless connections across the globe (Masrek et al. 199). With time, mobile phones and palmtops joined the list of portable, easily accessible networked devices. Wireless developments in society have greatly advanced since the advent of these technologies.

The 21st century has witnessed a wide range of uses of wireless devices, especially owing to their portability. At the same time, the design of wireless devices does not emphasize heavy computation or strongly secured communication; security is often treated as an add-on (Peng & Sushil 45). Additional limitations, such as the shared medium, have made wireless networks attractive targets for attackers. Attacks such as jamming are difficult to detect and yet very easy to initiate (Peng & Sushil 45).

Smith and Caputi (265) note that wireless networks are very cheap and are thus currently used in many modalities, ranging from wireless local area networks to mesh and other sensor networks. Because their range of application is so wide, providing security and trustworthiness is a critical issue. Generally, wireless networks are open in nature and are built on shared mediums, so securing them is very difficult (Peng & Sushil 45). An outsider can, for instance, prevent a communication from taking place by constantly sending bogus messages timed to collide with legitimate traffic in the network. These bogus messages lead to increased back-off at the node level and to multiple retransmissions of data. A jammer, for instance, is a wireless device that produces radio interference attacks on a wireless network (Peng & Sushil 48).

The main idea behind a jammer is to block wireless connections and keep the medium solely for itself, or to deform a valid communication that is going on (Peng & Sushil 48). This goal can be accomplished by preventing the traffic source from sending out data packets or by thwarting the reception of legitimate data packets. This process, much like hacking, can be used to divert information from authorized users to unauthorized ones. There are many jamming approaches and strategies that an attacker can use to disrupt communication. The most common is the time-based strategy, where the jamming signal is active only at specified times (Smith & Caputi 268). Other, more advanced jamming schemes use knowledge of the physical-layer specifications of the target system, so jamming is carried out by suppressing particular radio frequency signals in the target system (Smith & Caputi 268). Jamming can, however, be effectively tackled by PHY-layer communication techniques based on spreading, for instance Frequency Hopping Spread Spectrum (FHSS). The installation of such systems ensures that the resilience of the system is maintained (Smith & Caputi 270).
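The sketch below is a toy illustration of the frequency-hopping idea: sender and receiver derive the same pseudo-random channel sequence from a shared seed, so a jammer parked on any one frequency disrupts only a fraction of the transmissions. The channel count and seed are illustrative assumptions.

```python
import random

CHANNELS = list(range(1, 80))  # e.g. 79 hop channels

def hop_sequence(shared_seed: int, hops: int) -> list:
    """Derive a pseudo-random hop sequence; both ends seed the generator identically."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

sender = hop_sequence(shared_seed=1234, hops=8)
receiver = hop_sequence(shared_seed=1234, hops=8)
print(sender)
print(sender == receiver)  # True: both ends hop across channels in lockstep
```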

Advancements in computer science have also made it possible for companies such as banks to lose billions through electronic fund transfers that culprits push through undetected in bank systems. Additionally, through wireless communications, terrorists have been able to carry out their attacks; for instance, in the 9/11 attacks, the supposed culprits purchased their air tickets through the airport's online system without being detected. Hacking, the illicit act of accessing unauthorized data, has been made possible through advancements in technology, and it has led to many problems for the international community, most notably those emanating from the United States government messages leaked and released by Julian Assange. Thus, many computer engineers have proposed the Mobility Oriented Trust System (MOTS), which makes use of a trust table by incorporating a trust node in each cable (Murgolo-Poore et al. 175).

Wireless communications in which wireless management systems are installed can greatly improve the effectiveness and ease of applying multiple procedures in an organization. Murgolo-Poore and colleagues add that when these various procedures are in good form, their output increases productivity (179). Many organizations have thus transitioned to wireless connections, and the technology keeps growing, assisted by advancements in computer science. The concept of trust is vital in communication and for network protocol designers, especially where the main intent is to establish a trust relationship among the participating nodes (Murgolo-Poore et al. 175); such a relationship allows for collaborative use of the system's metrics. Trust, however, as noted, is built only if interactions between users have been faithful. In any wireless network, trust is fundamental; it can be defined as the degree of belief about the behavior of other entities, and in most cases it is context-based. For instance, one can be trusted as an expert in fixing cars but not in installing networks.

Works Cited

Masrek, Mohamad Noorman, Jamaludin Adnan and Mukhtar Sobariah Awang. Evaluating academic library portal effectiveness. Journal of Library Review, 2010, 59.3, 198-212. Print.

Murgolo-Poore Marie E., Pitt Leyland F., Berthon Pierre R. and Prendegast Gerard. Corporate intelligence dissemination as a consequence of intranet effectiveness: an empirical study. Public Relations Review, 2003, 29.2, 171-84. Print.

Peng, Ning and Sushil Jajodia. Intrusion detection techniques. The Internet Encyclopedia. John Wiley & Sons, 2003. Print.

Smith, Brooke and Caputi, Peter. Cognitive interference model of computer anxiety: Implications for computer-based assessment. Behavior & Information Technology, 2001, 20.4, 265-273. Print.

The Reduction in Computer Performance

The issue at hand pertains to a slowdown in the performance of a user's machine, accompanied by a warning message about a defect on the hard disk. There are a number of likely causes for the reduction in performance, including a badly fragmented hard disk or low storage space. In addition, the warning message indicates that the hard disk may contain file system errors or even bad sectors that are responsible for the degraded performance. If not dealt with, these defects may result in a complete crash of the hard disk.

Since it is evident that the hard disk has some defects, it is necessary to run a maintenance procedure to deal with the issue. The Check Disk (Chkdsk) utility available in Windows XP enables one to monitor the health of the hard disk. Chkdsk helps verify the integrity of the file system by examining the hard drive for file system errors and physical defects. Since a thorough check is to be run, the options "Automatically fix file system errors" and "Scan for and attempt recovery of bad sectors" will be selected. The scan will then be thorough and lengthy, which will ensure that errors are fixed.

Fragmentation of data may also cause the computer to be slow and hence exhibit reduced performance; severe fragmentation may even result in a crash. It is therefore necessary to defragment the hard disk. Badly fragmented hard disks can affect the performance of the computer, and the Disk Defragmenter utility is used to reorganize the files on disk. When this utility runs, the hard drive is analyzed and a map of the data is given, from which an alert is issued as to whether the disk needs to be defragmented. Once defragmentation takes place, a performance boost will be noticeable.

The degradation in system performance may also be the result of an accumulation of unnecessary files. To deal with this, one can run the Disk Cleanup utility, which handles temporary files that Windows has collected over time. The utility analyzes the disk and displays actions that can be taken to recover disk space: programs that are not used can be uninstalled and old system restore points deleted. This frees up space and hence improves system performance.
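
A scripted counterpart to the first stage of Disk Cleanup, reporting free space and the size of the temporary folder, might look like the sketch below; it only reports, the units and paths are illustrative, and the actual cleanup is left to the Windows utility.

    import os
    import shutil
    import tempfile

    # Report free space on the system drive and the size of the temp folder,
    # the same kind of information Disk Cleanup uses to suggest what can be reclaimed.
    total, used, free = shutil.disk_usage(os.path.abspath(os.sep))
    print(f"Free space: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")

    temp_dir = tempfile.gettempdir()
    temp_bytes = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(temp_dir)
        for name in files
        if os.path.isfile(os.path.join(root, name))
    )
    print(f"Temporary files in {temp_dir}: {temp_bytes / 1024**2:.1f} MiB")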

Assessing and Mitigating the Risks to a Hypothetical Computer System

The security of information is very important for the success of any organization and should therefore be given first priority in the organization's strategic plans. The computer system of a Hypothetical Government Agency (HGA) faces a number of information security threats that need to be brought under control for the agency to operate efficiently (Newman, 2009). Some of the major threats to HGA's computer system include accidental loss of information, virus contamination, theft, unauthorized access, and natural disasters. This paper highlights some of the biometric solutions the agency has put in place to address computer security issues and then assesses the role of biometric systems in an information technology environment.

Payroll fraud and error constitute one of the major security issues that arise from an inadequate information security system (Vacca, 2009). This threat can be addressed by initiating an automated payroll process that is in line with both government and HGA information security policies. The time and attendance data should be concealed as much as possible to reduce the chances of manipulation. Another security threat is unauthorized execution, which can be controlled if the system administrator is the only person who can grant privileges to server programs (Vacca, 2009). The local area network should not remain operational past normal working hours, which can be enforced by installing a limited configuration on the system. Apart from configuring clerk and supervisory functions separately, there should be constant managerial reviews and auditing to check for any unauthorized execution (Newman, 2009). Errors experienced during data entry can be minimized by entering time and attendance sheets in duplicate, as illustrated below.
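
The duplicate-entry control can be pictured with the short, purely hypothetical sketch below (the employee identifiers and hour values are invented): two independently keyed copies of the time and attendance data are compared, and any disagreement is flagged for supervisory review before payroll runs.

    # Hypothetical illustration of double keying: two clerks enter the same
    # time and attendance sheet, and mismatches are flagged before payroll runs.
    first_entry  = {"emp001": 40.0, "emp002": 38.5, "emp003": 42.0}
    second_entry = {"emp001": 40.0, "emp002": 35.8, "emp003": 42.0}

    discrepancies = {
        emp: (hours, second_entry.get(emp))
        for emp, hours in first_entry.items()
        if second_entry.get(emp) != hours
    }

    if discrepancies:
        print("Entries needing supervisory review:", discrepancies)
    else:
        print("Both copies agree; data can be released to payroll.")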

Accidental corruption of information systems can be controlled by having a special program that limits access to the system server (Kizza, 2008). The loss of time and attendance data is prevented by keeping backups of the server disks in case of a disaster or any accidental loss. The time and attendance data can also be kept online for three years before being moved to the archives. HGA has policies in place that are supposed to ensure continuity in all the major operations of the agency. One of the major security threats to HGA's computer system is the frequent interruption of its operations for various reasons (Kizza, 2008). The threat of interrupted operations is controlled by developing contingency plans designed to ensure that adverse circumstances do not interfere with the continuity of operations (Newman, 2009).

In conclusion, it is important to note that biometric systems are very important in any information technology environment. Information systems face network-related and virus threats that require biometric measures in order to protect the system and ensure the security of information (Kizza, 2008). The risk assessment team reports all the threats found in the system so that all fraud vulnerabilities are mitigated as soon as possible. Authentication mechanisms are key to solving the problem of payroll fraud through time and attendance data. All paperwork and information handling procedures must comply with HGA's policies to safeguard the system against all forms of corruption (Vacca, 2009). Regular auditing and protection of the system from external networks are among the solutions to information security issues. Information security is essential for the successful operation of any organization.

References

Kizza, J. M. (2008). A guide to computer network security. New York, NY: Springer.

Newman, R. C. (2009). Computer security: Protecting digital resources. New York, NY: Jones & Bartlett Learning.

Vacca, J. R. (2009). Computer and information security handbook. New York, NY: Morgan Kaufmann.

Advanced Data & Computer Architecture

Abstract

Data and information storage, access, manipulation, backup, and controlled access form the backbone of any information system. Solid knowledge and understanding of the information architecture, of access and storage mechanisms and technologies, of internet mechanisms, and of systems administration contribute to a complete knowledge of the whole system architecture. This knowledge is vital for procuring hardware and software for a large organization and for the effective administration of these systems. The discussion provides a detailed view of data storage, access, and internet applications, and takes us through systems administration for large organizations. It ends with a discussion of how these concepts apply to large organizations.

File and Secondary Storage Management

Introduction

The aggregation of software applications and of the functions and mechanisms for controlling access to, and manipulating, files and secondary storage makes up the file management system, commonly referred to as the FMS. The database and the operating system occasionally share file management system functionality (Burd, 2008).

Components of a File System

Graphically, the functions of the operating system and the file management system can be demonstrated by the layered structure that defines both systems, as shown below.

Figure: the layered structure of the operating system and the file management system. From: Systems Architecture (Burd, 2008).

The file structure represents the physical data storage mechanisms and data structures, defined in terms of bits, bytes, and contiguous memory blocks. The operating system interfaces with device controllers through device driver software. Data storage, access, and control are achieved through the kernel, which manages transfers between memory and other storage locations (Thisted, 1997). The kernel software is modularized into buffer and cache managers, device drivers that manage the input and output devices ported to the system, device controllers, and modules that handle interrupts across the whole system architecture (Burd, 2008). Data access and control are defined on files through logical access mechanisms whose architecture is independent of the physical structure of a file. File contents are defined by various data structures and data types, which can be manipulated through the file-handling mechanisms integrated into the file management system at the design stage.

Directory Content and Structure

A directory is a tabular store of files and other directories, held in complex data structures whose information can be accessed directly through the command line, as exemplified in the UNIX file system. In other systems, however, the command line is made transparent to users, since the FMS manages access to the directory (Burd, 2008).

The hierarchical structure of files and directories has the distinctive attribute that each directory is assigned a unique value, and the data structures point to each specific directory through the directory hierarchy (Thisted, 1997). A file access path is therefore specified differently in UNIX and in Windows, yet each access mechanism takes the user to a specific file, as shown below (Burd, 2008).
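
For example, the sketch below (using an invented file name) shows the same document addressed through a UNIX-style path and a Windows-style path, with both conventions resolving to the same file name.

    from pathlib import PurePosixPath, PureWindowsPath

    # The same hypothetical file addressed through each convention.
    unix_path = PurePosixPath("/home/student/reports/budget.xls")
    windows_path = PureWindowsPath(r"C:\Users\student\reports\budget.xls")

    print(unix_path.parts)     # ('/', 'home', 'student', 'reports', 'budget.xls')
    print(windows_path.parts)  # ('C:\\', 'Users', 'student', 'reports', 'budget.xls')
    print(unix_path.name == windows_path.name)  # True: both resolve to 'budget.xls'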

Storage Allocation

Controlled storage of data and information on secondary devices is achieved through input and output mechanisms to files and directories. The system identifies data and information storage blocks defined by efficient data structures, which favor smaller units of storage for more efficient space utilization. Allocation units thus vary with the size of the allocation (Burd, 2008).

Storage allocation tables are data structures that hold information about allocations, entries, and the varying allocation units and sizes, together with access mechanisms that may include sequential and random access, among others such as indexing, which makes access to a data item quite efficient.

Blocking and buffering provide the mechanism for accessing and extracting data from physical storage. The blocking factor determines how much buffering can be done for any given buffer size. Buffer copying operations intervene at times, where physical blocks of data cannot be copied in their entirety into the buffer locations.
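
A minimal sketch of block-oriented reading is given below; the 4 KiB block size is an assumed figure for illustration, not a value taken from the text.

    import os

    # Read a file in fixed-size blocks, the way a file system fills its buffers.
    BLOCK_SIZE = 4096  # assumed allocation/block size in bytes

    def read_in_blocks(path, block_size=BLOCK_SIZE):
        with open(path, "rb") as f:
            while True:
                block = f.read(block_size)  # one block-sized transfer into the buffer
                if not block:
                    break
                yield block

    # Example: count blocks and bytes for an arbitrary existing file.
    sample = __file__
    blocks = list(read_in_blocks(sample))
    print(len(blocks), "blocks,", os.path.getsize(sample), "bytes")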

File Manipulation

An open service call prepares a file for reading and writing operations by locating the file, searching internal table data structures, verifying that privileged access is granted, identifying buffer areas, and then performing file updates. The FMS ensures controlled access and protects the file data from manipulation by other programs. Once the process is complete, the FMS issues a close-file call that flushes the resident buffer contents to secondary storage, de-allocates the buffer memory, and updates the table data structure and the file's time stamp (Burd, 2008).
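
The open-update-close cycle corresponds roughly to the following application-level sketch (the file name is invented, and the comments map each step to the FMS actions described above).

    import os

    # Open: the FMS locates the file, checks access rights, and sets up a buffer.
    with open("employee_records.txt", "a", encoding="utf-8") as f:
        # Update: writes go to the in-memory buffer first.
        f.write("record 1042: updated\n")
        # Flush: the buffered data is pushed out to secondary storage.
        f.flush()
        os.fsync(f.fileno())
    # Close (end of the with-block): the buffer is released and the
    # file's modification time stamp is updated by the file system.
    print(os.path.getmtime("employee_records.txt"))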

Updates and deletions of records are also achieved through the file management system, and mechanisms are integrated into the FMS to enforce data integrity and access privileges. Microsoft originally used the FAT file system but later developed NTFS, which provided faster operations, was more secure and highly fault-tolerant, and incorporated the ability to handle large file systems (Burd, 2008).

File Migration, Backup, and Recovery

Utilities, integrated features, and other protection and recovery mechanisms support file migration, for example through undo operations and configurations for automatic backups. File backups, on the other hand, are done periodically and may be full, incremental, or differential. Recovery is likewise achieved through utilities incorporated into the FMS, through consistency checking and other mechanisms.
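
For instance, an incremental backup copies only the files changed since the previous run; the sketch below illustrates the idea with file modification times, using placeholder directory names and an assumed cut-off time.

    import os
    import shutil
    import time

    SOURCE = "data"            # placeholder source directory
    BACKUP = "backup_incr"     # placeholder backup directory
    last_backup_time = time.time() - 24 * 3600  # assume the last backup ran a day ago

    os.makedirs(BACKUP, exist_ok=True)
    for root, _, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_time:   # changed since last backup
                dst = os.path.join(BACKUP, os.path.relpath(src, SOURCE))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)                     # copy data and time stamps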

Fault Tolerance

Hardware failure has the potential to cause the loss of sensitive data, particularly for large organizations, with undesirable consequences. However, acceptable and reliable data and information recovery mechanisms exist. Optical, magnetic, and other devices such as the hard disk are managed through disk mirroring and RAID technologies (Burd, 2008).

Striping spreads data across a number of disks so that smaller read operations can proceed in parallel, while mirroring and RAID add redundancy so that reads can be satisfied from more than one copy. When a disk fails, its contents are regenerated onto a new disk, for instance in a round-robin fashion across the surviving drives.
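
The redundancy behind RAID can be illustrated with a toy byte-level parity example (not an actual RAID implementation): the parity block is the XOR of the data blocks, so the contents of a failed disk can be regenerated from the survivors.

    # Toy RAID-style parity: parity = XOR of the data blocks.
    disk1 = bytes([0x12, 0x34, 0x56, 0x78])
    disk2 = bytes([0x9A, 0xBC, 0xDE, 0xF0])
    parity = bytes(a ^ b for a, b in zip(disk1, disk2))

    # Simulate losing disk2 and regenerating it onto a replacement disk.
    rebuilt_disk2 = bytes(a ^ p for a, p in zip(disk1, parity))
    assert rebuilt_disk2 == disk2
    print("disk2 regenerated:", rebuilt_disk2.hex())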

Storage Consolidations

Large organizations find direct-attached storage (DAS) inefficient and expensive in a shared environment. They therefore opt for network-attached storage, whose architecture is defined by network access and connectivity. The network server concept is illustrated below.

Figure: the network server concept (Burd, 2008).

Access Controls

The read, write, and other file manipulation operations are controlled through the FMS. The FMS and operating system structures endeavor to implement and enforce security at different data access levels. Data integrity is enforced by requiring controlled access, authentication, and authorization before service requests are granted by the FMS. Direct and sequential access operations are access mechanisms that depend on the physical organization of the data, which includes data structures such as graph directory structures. The directory structure may be hierarchical, logical, or a physical directory presentation, alongside other file control mechanisms (Burd, 2008).

Internet and Distributed Application Services

Introduction

Data and information transfer across the internet architecture follows specific network protocols, operating systems, and network stacks. These enable resource access and interaction through hardware and software application interfaces. The chapter provides a detailed view of network architecture, approaches to network resource access, the internet, emerging trends in distribution models, directory services, and the software architecture of distributed systems.

Architectures

The client-server architecture is layered, with variations such as the three-layered architecture commonly referred to as the three-tier architecture. This divides the application software into the database layer, the business-logic layer that defines the policies and procedures for business processing, and the presentation layer that formats and displays the data (Burd, 2008). The peer-to-peer architecture improves scalability, with the roles of client, server, and other ported hardware well defined. A minimal sketch of the three tiers is given below.
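
The sketch uses invented class and account names and is intended only to show how the responsibilities separate across the three tiers.

    # Data layer: hides how and where records are stored.
    class AccountStore:
        def __init__(self):
            self._balances = {"alice": 120.0, "bob": 75.0}  # stand-in for a database
        def get_balance(self, user):
            return self._balances[user]

    # Business-logic layer: applies the organization's rules and policies.
    class AccountService:
        OVERDRAFT_LIMIT = 0.0
        def __init__(self, store):
            self.store = store
        def can_withdraw(self, user, amount):
            return self.store.get_balance(user) - amount >= self.OVERDRAFT_LIMIT

    # Presentation layer: formats the result for the user interface.
    def render(user, amount, service):
        ok = service.can_withdraw(user, amount)
        return f"Withdrawal of {amount:.2f} for {user}: {'approved' if ok else 'declined'}"

    print(render("alice", 50.0, AccountService(AccountStore())))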

Network Resource Access

Access to network resources is provided through operating system functionality that manages user requests through the service layer. The resident operating system therefore explicitly distinguishes between local requests and those directed at remote operating systems, and communication with the latter is managed through established protocols.

The Protocol Stack

The operating system configures and manages communication through a set of complex software modules and layers, establishing and managing static connections to remote resources (Burd, 2008). The management of these resources is defined on the premise that resources are dynamic, that resource sharing across the network is possible, and that the complexity of applications and of the local operating system should be minimized. Resource sharing is achieved through resource registries, particularly in the P2P architecture.

Dynamic resource connections are achieved through various mechanisms, one of them being the domain name system (DNS). The technique relies on the IP address carried in the packet header, which identifies the destination but may change as network platforms change. The domain name service for a network connected to the internet consists of two servers: one holding the registry of IP addresses mapped into the DNS, and the local DNS registry. In LDAP, standard container objects are defined for different destinations such as countries; these objects are uniquely identified by their distinguished name (DN) attributes and can be used for administrative purposes across large organizations.
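
A name lookup from application code shows the dynamic mapping in action; the sketch below asks the local resolver, and through it the DNS, for the addresses currently associated with an example host name.

    import socket

    # Ask the local resolver, and through it the DNS, for the IP addresses
    # currently mapped to a host name. The mapping can change over time.
    host = "example.com"  # illustrative host name
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])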

Interprocess Communication

Processes communicate both when they are local to one application and when they are split across different computer platforms. These processes coordinate their activities according to standards and protocols. Low-level peer-to-peer communication across a network is defined on the application, transport, and internet layers. Sockets on these devices are uniquely identified through port numbers and IP addresses (Burd, 2008). Data can flow in either direction, and the packaging and unpackaging of data packets is carried out by these layers.
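
A minimal sketch of two such endpoints, each identified by an IP address and port number (both placed in one script here purely for brevity), could look like this:

    import socket

    # A UDP "receiver" socket bound to an address/port pair...
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    addr = receiver.getsockname()            # the (IP, port) pair that identifies it

    # ...and a "sender" socket that packages data and addresses it to that endpoint.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"status: ready", addr)

    data, source = receiver.recvfrom(1024)   # the datagram is unpackaged on arrival
    print(data, "from", source)

    sender.close()
    receiver.close()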

Remote Procedure Call

With the remote procedure call technique, a machine can invoke a procedure on another machine by passing parameter values to the called procedure, waiting until the called procedure completes its task, accepting the values returned by that procedure, and then resuming execution after the call. At the industry level, tickets and other mechanisms are used to authenticate access and pass data from one procedure to another.
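
Python's standard library includes a simple remote procedure call mechanism (XML-RPC) that follows exactly this call-and-wait pattern; the sketch below, with an invented procedure name and an assumed local port, runs the server and the caller in one script for illustration only.

    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    # The "remote" machine exposes a procedure...
    server = SimpleXMLRPCServer(("127.0.0.1", 8001), logRequests=False)
    server.register_function(lambda a, b: a + b, "add_hours")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # ...and the caller passes parameters, blocks until the call completes,
    # then resumes with the returned value.
    proxy = ServerProxy("http://127.0.0.1:8001")
    print(proxy.add_hours(8, 7.5))   # 15.5
    server.shutdown()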

The Internet

The internet is a technology framework that provides a mechanism for delivering content and other applications to designated destinations. It consists of interconnected computers that communicate over established protocols. The model is defined by the HTTP protocol, the Telnet protocol, and mail transfer protocols such as SMTP, among others. It is an infrastructure that provides teleconferencing services, among a myriad of other services (Burd, 2008).

Components Based Software

The component-based approach to software design and development provides benefits similar to those provided by complex software applications such as grammar checkers. The coupling and decoupling mechanisms used when building or maintaining the components are unique attributes of this approach. Interconnection of components running on different computers is achieved through protocols such as those defined by CORBA and Java EE, among others (Burd, 2008).

Advanced Computer Architecture (n.d.) and Burd (2008) both note that software can be offered as a service, focusing on web-based applications through which users interact and in which services are provided in large chunks. Infrastructure can also be offered as a service, with the advantage of reduced costs, among other benefits. Different software vendors and architectures present potential risks that need to be addressed; these include the cloud computing architectures and software applications that define cloud computing frameworks.

Security concerns span authentication, authorization, verification, and other controls, as well as the penalties and costs associated with security breaches.

Emerging Distribution Models

High-level automation, ubiquitous computing, and decades of unsatisfied needs in various industries have led business organizations to adopt new approaches. These technologies include Java Enterprise Edition, COM (the Component Object Model), and SOAP, among other trends.

Components and Distributed Objects

These components are autonomous software modules characterized by a clearly defined interface, uniquely identifiable, and executable on a hardware platform. The interfaces are defined by a set of uniquely named services, and the components can be compiled and executed readily (Burd, 2008).

Named Pipes

Named pipes carry data between executing processes that run on the same machine; they provide communication services for requesting and issuing service requests, as the sketch below illustrates.
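
On POSIX systems a named pipe can be created with mkfifo; in the illustrative sketch below (with an invented pipe path), two threads stand in for the two communicating processes.

    import os
    import threading

    # POSIX-only sketch: a named pipe (FIFO) appears in the file system and
    # carries data between processes; here two threads stand in for two processes.
    fifo_path = "/tmp/demo_fifo"   # illustrative path
    if not os.path.exists(fifo_path):
        os.mkfifo(fifo_path)

    def writer():
        with open(fifo_path, "w") as pipe:
            pipe.write("service request: print job 17\n")

    threading.Thread(target=writer).start()
    with open(fifo_path, "r") as pipe:
        print("received:", pipe.read().strip())

    os.remove(fifo_path)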

Directory Services

This is middleware whose services include updating directories, storing resource identifiers, responding to directory queries, and synchronizing resources. Middleware in general provides an interface between client and server applications (Burd, 2008).

Distributed Software Architecture

This architecture provides a link to distributed software components across multiple computer platforms geographically spread across large areas (Burd, 2008).

System Administration

Introduction

This chapter takes one through the process of determining the requirements and evaluating performance, the physical environment, security, and system administration.

System administration

System administration covers the strategic planning process for acquiring hardware and software applications, identifying the user audience, and determining the requirements for system and hardware acquisition. Integration, availability, training, and physical parameters such as cooling are identified in the process and documented in a proposal. System performance is determined by the hardware platform, resource utilization, physical security, the access control mechanisms discussed earlier, virus protection mechanisms, firewalls, and disaster prevention and recovery mechanisms (Burd, 2008).

Software updates are integral to building a large organization's systems infrastructure. Security services, security audits such as log-management audits, password control mechanisms, overall security measures, and benchmarking are core activities in building the infrastructure. Protocols are evaluated prior to the acquisition process, which involves identifying new and old software, vendors, and standards.

System administration may also include the acquisition, maintenance, and development of software and security policies (Burd, 2008).

Applying the Concepts for Large Organizations

Large organizations demand software applications and hardware platforms that support their activities on a large scale; that is the case with large organizations such as Atlanta, GA. The software and hardware infrastructure is defined by file access mechanisms characterized by high-speed access to information, large backup facilities, fault-tolerant systems, backup methods such as mirroring, and RAID technologies. Storage is consolidated, and access is provided through a networked infrastructure over a wide or local area network. Internet access for these systems and institutions provides resource access through controlled access protocols, including web protocols, among others. Software platforms are defined by components that interact and interface with other applications. The infrastructure in these large organizations is interconnected through cabling, and the security mechanisms span firewalls, system audits, and security levels, including privileged controls, among others. Hardware and software acquisitions for large organizations are made through proposals and vendor identification strategically tailored to meet the organization's goals.

References

(n.d.). The Architecture of Parallel Computers. Web.

Burd, S. D. (2008). Systems Architecture. New York, NY: Vikas Publishing House.

Thisted, R. A. (1997). Web.