Firewalls in Computer Security

Introduction

Computer security is a branch of technology; as applied in the field of computing, it is also known as information security. Its aims are broad but mostly encompass shielding information from theft and distortion and making the preservation of information possible. Computer security imposes conditions on a computer that differ from those found in most other systems, because it concerns what computers are supposed to do rather than what they can do. These conditions make computer security a challenging field, since they require computer programs to carry out only what is asked of them, and in a specific manner; this limits the scope and the speed with which a program can operate. Computer security work aims at lessening these inhibitions by transforming such negative constraints into positive, enforceable principles (Layton, 55). Computer security can therefore be said to be more technical and mathematical than most other computer science related fields. It must be noted, however, that the main concern of information security and/or computer security is the protection of information that is stored, processed, or otherwise handled by a computer; this is true whether it is the computer hardware or the systems software that is being protected.

Main text

Much development and evolution has taken place in the field of computer security, and it is now generally held that there are four common approaches to attaining it. The first approach is to physically bar would-be compromisers from accessing computers. The second is to use hardware mechanisms that impose rules on programs, so that security does not depend on the trustworthiness of those programs; the third is to use operating system mechanisms that enforce similar rules on programs (Peltier, 256). Much of operating system security technology is based on 1980s science, which has been used to produce some of the most impenetrable operating systems; at present, however, such systems see only limited use, since they are both laborious and very technical, and therefore too little understood for efficient and maximum exploitation. An example is the Bell-LaPadula model. The fourth approach is to use programming strategies to make computer programs themselves highly reliable and able to withstand subversion.

A firewall is a configured device designed to allow, reject, encrypt, or proxy computer traffic between different network domains according to a set of specific rules. Alternatively, it may be defined as a dedicated appliance, or equivalent software running on another computer, which checks the network traffic passing through it and rejects or allows passage following a set of rules. The basic function of a firewall is to control the flow of traffic between computer networks of different trust levels. Three trust levels are commonly distinguished: the Internet, which is a zone of no trust (since the Internet is highly porous to any material that can be sent over it); the internal, or perimeter, network, which is a zone of high trust; and the demilitarized zone (DMZ), an intermediate trust level located between the Internet and the trusted internal network. A firewall typically operates on a default-deny rule, admitting only the explicitly designated network connections and locking out the rest. Without proper configuration, a firewall can be almost useless.
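
As a small illustrative sketch of the default-deny methodology between trust zones described above (the zone names, ports, and rule set are hypothetical, not a real firewall configuration):

```python
# Minimal sketch of default-deny filtering between trust zones.
# Zones and rules are hypothetical illustrations of the idea described above.
ALLOWED_FLOWS = {
    ("internal", "internet"): {80, 443},   # internal users may browse the web
    ("internet", "dmz"):      {443},       # outsiders may reach the public HTTPS server
}

def decide(src_zone: str, dst_zone: str, dst_port: int) -> str:
    allowed_ports = ALLOWED_FLOWS.get((src_zone, dst_zone), set())
    return "ALLOW" if dst_port in allowed_ports else "DENY"   # default deny

print(decide("internet", "dmz", 443))       # ALLOW
print(decide("internet", "internal", 22))   # DENY: not explicitly designated
```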

Historically, the term firewall referred to measures taken to keep fire from spreading between buildings. With later developments, the term came to denote the structure or metal sheet that separates the engine compartment from the passenger cabin in a vehicle or an aircraft. Firewall computer technology emerged in the 1980s, when the Internet was still a fledgling in terms of global use and connectivity. The antecedent events that pushed for the introduction of firewalls were Clifford Stoll's discovery of German spies tampering with his systems; Bill Cheswick's 1992 experiment of setting up an electronic "jail" to observe an attacker (which demonstrated clearly that Internet users were not safe but were susceptible to spying and unwarranted interference, whether by online criminals with vast computer and Internet acumen or by computer bugs); the 1988 viral invasion of NASA's Ames Research Center in California; and the first large-scale Internet attack, the Morris worm.

Firewalls can be classified in diverse ways, based on qualities and characteristics such as speed, flexibility, and simplicity, or on greater authentication and higher logging capacity. Types of firewall that fall under the rubric of speed, flexibility, and simplicity include the packet filter firewall and the stateful inspection firewall, whose modes of operation are discussed in the succeeding paragraphs. Those classified under greater authentication and higher logging capacity include the application proxy gateway firewall and dedicated proxy servers; these too are treated in the succeeding paragraphs.

The first paper on firewall technology appeared in 1988, after the Digital Equipment Corporation conducted a series of studies and subsequently came up with a filter system known as the packet filter firewall. This was the first generation of what has since evolved into a highly technical Internet security system. Later improvements to packet filtering came courtesy of Bill Cheswick and Steve Bellovin of AT&T Bell Labs.

Packet filtering works by inspecting packets, which are the basic units of data transfer between interconnected computers. Following a set of rules, the packet filter drops a packet and sends an error response back to the source if the packet matches a filtering rule (Zhang and Zheng, 300). This type of packet filtering examines each packet in isolation, based only on the information in the packet itself, and pays no attention to whether or not the packet belongs to an already existing stream of traffic.

Packet filters work at layers 1-3 of the network stack and operate very efficiently, since they inspect only the header of each packet. The earliest filters were stateless firewalls, which lacked the capacity to detect whether a packet was part of an already existing connection; this is a problem for protocols such as the File Transfer Protocol, which by design opens up connections to arbitrary ports. The stateful firewall addresses this by maintaining a table of open connections, so that new connection requests can be associated with already existing connections that are held to be legitimate.
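
As a rough illustration of header-only, per-packet inspection, the sketch below applies rules to the fields of each packet in isolation; the packet representation, addresses, and rules are hypothetical, and nothing is remembered between packets.

```python
# Minimal sketch of a stateless packet filter: each packet is judged purely on
# its own header fields; no memory of earlier packets or connections is kept.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str

def stateless_filter(pkt: Packet) -> str:
    # Hypothetical rule: allow web traffic to the internal server, drop everything else.
    if pkt.protocol == "tcp" and pkt.dst_ip == "192.0.2.10" and pkt.dst_port in (80, 443):
        return "ACCEPT"
    return "DROP"   # cannot tell whether the packet belongs to an existing connection

print(stateless_filter(Packet("198.51.100.4", "192.0.2.10", 443, "tcp")))  # ACCEPT
print(stateless_filter(Packet("198.51.100.4", "192.0.2.10", 2021, "tcp")))  # DROP (e.g. an FTP data port)
```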

Stateful packet filters also work efficiently because they check the IP addresses and ports involved in a connection, as well as the sequence of the packets making up that connection. The stateful firewall is able to achieve this because it retains a significant amount of memory about every connection, from its beginning to its end.

When a client starts a new connection, it sends a packet with the SYN bit set in its header, and the firewall in turn treats all packets with the SYN bit set as new connections. If the service the client is asking for is available, it replies to the SYN packet; should the client then respond with a packet carrying the ACK bit, the connection enters an established state. Having passed all outgoing packets, the firewall accepts incoming packets only if they belong to an already established connection, so hackers are kept from being able to start unwanted connections to the protected machine. Connections over which no traffic has passed are deleted from the state table as stale, to keep the table from overflowing; to prevent connections from being dropped, the firewall sends periodic update messages to the user. This is how stateful packet filters work.
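
The sketch below illustrates this state-table logic, assuming a simplified packet representation with hypothetical addresses; timeouts and the keep-alive messages mentioned above are omitted.

```python
# Minimal sketch of stateful connection tracking: only SYN packets from the
# protected side may create new entries, and incoming traffic is accepted only
# if it matches a tracked connection. Addresses and ports are hypothetical.
state_table = {}   # (src, sport, dst, dport) -> "SYN_SEEN" | "ESTABLISHED"

def handle_outgoing(src, sport, dst, dport, syn=False, ack=False):
    key = (src, sport, dst, dport)
    if syn and not ack:
        state_table[key] = "SYN_SEEN"          # client opened a new connection
    elif ack and key in state_table:
        state_table[key] = "ESTABLISHED"       # handshake completed

def handle_incoming(src, sport, dst, dport):
    # Reverse the direction of the stored key: accept only replies to
    # connections the protected client opened; drop everything else.
    key = (dst, dport, src, sport)
    return "ACCEPT" if key in state_table else "DROP"

handle_outgoing("10.0.0.5", 51000, "93.184.216.34", 80, syn=True)
handle_outgoing("10.0.0.5", 51000, "93.184.216.34", 80, ack=True)
print(handle_incoming("93.184.216.34", 80, "10.0.0.5", 51000))  # ACCEPT
print(handle_incoming("198.51.100.7", 80, "10.0.0.5", 51000))   # DROP
```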

One of the side effects of pure packet filters is that they are highly susceptible to spoofing attacks, because they do not possess the concept of state as understood in computer science and computer security. On a similar premise, pure application firewalls are also vulnerable to exploits, especially at the hands of online criminals.

A layer is a collection of interrelated functions that offers services to the layer above it and receives services from the layer below it. The history of layers stretches back to 1977, when work carried out under an American National Standards Institute working group on a layered model went on to become the Open Systems Interconnection reference model for distributed systems.

Although several ISO standards have influenced the Internet protocols, none has done so as heavily as this concrete layered reference model.

The application layer explicitly interfaces with, and provides application services to, the application process, and it forwards requests to the presentation layer below it. The application layer exists to offer services to user-defined application processes, not to the end user directly. For instance, the application layer defines the file transfer protocol, but the end user must still invoke an application process in order to actually transfer files.

The primary functions performed by the application layer include facilitating applications and end-user processes and identifying the communication partners. The application layer provides services for the transfer of files, e-mailing, and the running of other network software services (Abrams, Jajodia and Podell, 199). In addition, the application layer bolsters privacy, establishes and authenticates the application user, and identifies the quality of the services that are offered. Examples of application layer protocols include Telnet and FTP. The application layer also places constraints on data syntax.
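
To illustrate the point that the application layer serves application processes rather than end users directly, the sketch below uses Python's standard ftplib to issue application-layer FTP commands; the host name is a placeholder, and the lower layers (TCP/IP) are handled entirely by the operating system.

```python
# Minimal sketch: an application process using an application-layer protocol (FTP).
from ftplib import FTP

with FTP("ftp.example.com") as ftp:   # placeholder host name
    ftp.login()                        # anonymous login
    ftp.retrlines("LIST")              # application-layer command: list the directory
```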

Within the application layer there is a common sub-layer that offers functional services such as association control, remote operations on service elements, and the facilitation of transaction processing. Above this sub-layer of common application services sit important application services such as file transfer (FTAM), directory (X.500), messaging (X.400), and batch job manipulation.

In computer networking, proxy servers are computer systems or application programs that serve clients' interests by forwarding their requests to other servers. A client connects to the proxy server to request some service (for example, a web page or a file), and the proxy server responds by connecting to the relevant server and requesting the service on the client's behalf. Sometimes the proxy server may alter the client's request or the server's response without notifying the party concerned; a proxy that passes all requests and responses through unmodified is usually called a gateway or tunneling proxy. Proxies can be installed on the user's local computer or at particular points between the Internet and the destination server. There are many types and functions of proxies, as discussed forthwith.
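
A minimal sketch of the forwarding idea, assuming a toy protocol in which the client sends a single bare URL per line to the proxy; real proxies speak full HTTP, so the protocol and port number here are purely illustrative.

```python
# Minimal sketch of a forwarding proxy: the client sends a URL, the proxy
# fetches it on the client's behalf and relays the response back.
import socketserver
from urllib.request import urlopen

class ProxyHandler(socketserver.StreamRequestHandler):
    def handle(self):
        url = self.rfile.readline().decode().strip()   # client's request (one URL per line)
        with urlopen(url) as upstream:                 # proxy contacts the real server
            self.wfile.write(upstream.read())          # relay the response to the client

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 8888), ProxyHandler) as srv:
        srv.serve_forever()
```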

A caching proxy server serves requests without contacting the remote server, by retrieving content saved from a previous client request; this process is known as caching. Caching proxies keep local copies of the most frequently requested resources, allowing large organizations to reduce their upstream bandwidth usage and cost while increasing performance at the same time.
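
A sketch of the cache-before-fetch logic, assuming an in-memory dictionary keyed by URL; production caches also honour expiry and validation headers, which are omitted here.

```python
# Minimal sketch of a caching proxy lookup: serve from the local copy when
# possible, otherwise fetch from the origin server and remember the result.
from urllib.request import urlopen

cache = {}   # url -> response body (no expiry handling in this sketch)

def fetch(url: str) -> bytes:
    if url in cache:
        return cache[url]          # cache hit: no upstream bandwidth used
    with urlopen(url) as resp:     # cache miss: ask the origin server
        body = resp.read()
    cache[url] = body
    return body
```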

There is also the content-filtering web proxy, which provides administrative control over the content that may be relayed through the proxy. This type of proxy is used in both commercial and non-commercial organizations, for example in schools, to enforce acceptable use. Common approaches include URL regex filtering, DNS blacklists, URL blacklists, and content filtering; the technique is widely employed because it supports authentication, which allows control over web access. Separate from this, there is also the web proxy, which concentrates on WWW traffic and commonly acts as a web cache. This method is widely employed in corporate domains, as evidenced by the increasing adoption of Linux in both small and large enterprises and at home. Examples include Squid and NetCache, which allow content filtering and thereby provide a way to refuse access to URLs specified in a blacklist.
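
The blacklist check itself can be as simple as the sketch below, which tests a requested URL against hypothetical regular-expression rules before the proxy agrees to forward it.

```python
# Minimal sketch of URL regex filtering in a content-filtering proxy.
# The patterns below are hypothetical examples of a blacklist.
import re

BLACKLIST = [re.compile(r"gambling"), re.compile(r"\.badsite\.example")]

def allowed(url: str) -> bool:
    return not any(pattern.search(url) for pattern in BLACKLIST)

print(allowed("http://news.example.org/today"))       # True: forwarded
print(allowed("http://poker.badsite.example/play"))    # False: blocked by the proxy
```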

An anonymizing proxy server, otherwise known as an anonymizer, makes web browsing anonymous. This facility is, however, susceptible to being overridden by site administrators, and can thus be rendered useless. Nevertheless, this form of proxy can facilitate access control, since it implements log-on requirements; this helps organizations limit web access to authorized users only, and also helps them keep track of how the web is being used by employees.

An intercepting or transparent proxy combines a gateway with a proxy server. This type of proxy has been widely used in business to enforce usage policies and to ease the administrative load, since no configuration of the client's browser is required. Intercepting proxies are detectable, for example by comparing the client's external IP address with the address seen in the HTTP headers. A hostile proxy, as the name suggests, is normally set up by cyber criminals to eavesdrop on the flow of data between the client and the web; the remedy, on detecting an unauthorized proxy, is to change the passwords used to access online services.

While a transparent proxy does not modify requests or responses beyond what is required for proxy authentication, a non-transparent proxy modifies responses in order to provide additional services to a group of users. An open proxy is one accessible to any Internet user; to counter the abuse this invites, administrators may deny service to clients that use open proxies, and another way of countering the problem is to test the client's system in order to detect an open proxy.

A forced proxy intercepts the traffic on the available pathway to the Internet; in addition, clients may be configured to use the proxy in order to gain access to the Internet at all. This arrangement is expedient for the interception of TCP connections and of HTTP. HTTP interception, for instance, reduces the usefulness of a proxy cache and can therefore impact on authentication mechanisms.

A reverse proxy server is one installed in the vicinity of one or more web servers, where all traffic coming from the Internet to any of those web servers passes through the reverse proxy. The reverse proxy has multiple functions, such as SSL acceleration and encryption when building secure websites, compression of web content to reduce loading time, and serving static content from a cache so as to offload the web servers.
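
A sketch of the routing role described above, assuming two hypothetical backend servers and a path-based rule; SSL termination, compression, and caching are left out for brevity.

```python
# Minimal sketch of reverse-proxy routing: every request from the Internet
# arrives here first and is relayed to one of the internal web servers.
# Backend addresses and the path rules are hypothetical.
from urllib.request import urlopen

BACKENDS = {
    "/static/": "http://10.0.0.11:8080",   # server holding static content
    "/":        "http://10.0.0.12:8080",   # default application server
}

def route(path: str) -> bytes:
    # Longest matching path prefix wins.
    for prefix, backend in sorted(BACKENDS.items(), key=lambda kv: -len(kv[0])):
        if path.startswith(prefix):
            with urlopen(backend + path) as resp:   # fetch from the chosen backend
                return resp.read()
    raise ValueError("no backend for path")
```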

The reverse proxy is also able to reduce the resource usage incurred by slowness on the client's side. It achieves this by caching the content the web server has sent and feeding it to the client little by little, an undertaking known as spoon feeding. The reverse proxy also adds an extra layer of security and can therefore shield against attacks that are specific to web servers.

There are also special kinds of proxies, such as the extranet publishing proxy, a type of reverse proxy used for communicating with a firewalled internal server and providing extranet services while that server remains behind the firewall.

In computer networking, network address translation (NAT), also known as network masquerading, IP masquerading, or native address translation, is a technique for sending and receiving network traffic through a router that involves rewriting the source or destination IP addresses and the TCP or UDP port numbers of IP packets as they pass through; the checksums are also rewritten to take the changes into account. Most NAT systems do this to enable many hosts on a private network to reach the Internet through a single public IP address.
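
A sketch of the rewriting and bookkeeping step, assuming hypothetical private and public addresses; checksum recalculation is indicated only by a comment.

```python
# Minimal sketch of outbound NAT: rewrite the private source address/port to the
# router's public address and a fresh port, and remember the mapping so that
# replies can be translated back. Addresses and ports are hypothetical.
PUBLIC_IP = "203.0.113.1"
next_port = 40000
nat_table = {}   # public_port -> (private_ip, private_port)

def translate_outbound(src_ip, src_port, dst_ip, dst_port):
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (src_ip, src_port)
    # In a real router the IP and TCP/UDP checksums would be recomputed here.
    return (PUBLIC_IP, public_port, dst_ip, dst_port)

def translate_inbound(dst_port):
    # Replies arriving at the public port are demultiplexed back to the host
    # that opened the connection; unknown ports are dropped.
    return nat_table.get(dst_port)

print(translate_outbound("192.168.1.10", 51515, "93.184.216.34", 80))
print(translate_inbound(40000))   # ('192.168.1.10', 51515)
```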

NAT first appeared as a method of countering the IPv4 address shortage and of lessening the difficulty of reserving IP addresses. More recently, the technique has been widely adopted in countries with fewer allocated addresses per capita (Bragg, Rhodes-Ousley and Strassberg).

Conclusion

NAT adds security by disguising the structure of the internal network, letting only traffic from the local network pass through to the Internet. As the source address of every packet is translated, the router keeps track of each active connection and its basic data. In the case of overloaded NAT, the TCP or UDP port numbers are then used to demultiplex the returning packets, and the router presents itself as the source of the traffic.

The merits of using NAT are far-reaching. One of them is convenience at minimal cost. Moreover, the fact that NAT does not provide full bidirectional connectivity means that malicious activity initiated by external hosts cannot permeate local connections; this keeps worms at bay, hinders scanning, and thus enhances privacy.

Perhaps the greatest of all these benefits is that NAT helps solve the problems that result from the exhaustion of the IPv4 address space.

The drawbacks of network address translation are also clear. For instance, hosts behind NAT-enabled routers lack end-to-end connectivity, which makes it impossible to support some very useful Internet protocols, and in particular any service that requires a TCP connection to be initiated from the external network. To curtail this problem, stateless protocols may be used instead, the drawback being that some stateless protocols, such as UDP, are not impervious to interference or disruption.

Bibliography

Abrams, D. Marshall, Jajodia, Sushil, and Podell, Harold. Some integrated essays on information security. US: IEEE Computer Society Press, 1994.

Bragg, Roberta, Rhodes-Ousley, Mark, and Strassberg, Keith. A complete reference of network security. US: McGraw-Hill Professional, 2003.

Layton, P. Timothy. Information security: measurements and compliance. US: CRC Press, 2006.

Peltier, R. Thomas. Guidelines to information security policies. US: CRC Press, 2001.

Zhang, Kan and Zheng, Yuliang. The seventh information security conference. US: Springer Press, 2004.
