The invention of the computer in 1948 is often regarded as the beginning of the digital revolution. It is hard to disagree that computers have penetrated people's lives and changed them once and for all. Computer technologies have affected every single sphere of human activity, from entertainment to work and education. They facilitate the work of any enterprise, they are of great assistance to scientists in laboratories, they make it possible to diagnose diseases much faster, they control the work of ATMs, and they help banks function properly. The first computers occupied almost a whole room and were very slow in processing data and in overall performance. The modern world witnesses the development of computer technologies daily, with computers turning into tiny machines and working unbelievably smoothly. A computer is now trusted as a best friend and advisor. It is treated as a reliable machine able to process and store a large amount of data and help out in any situation. The storage, retrieval, and use of information are more important than ever since "(w)e are in the midst of a profound change, going from hardcopy storage to online storage of the collected knowledge of the human race" (Moursund, 2007), which is why computers are of great assistance to us. However, to become a successful person, it is not enough to simply have a computer at home. It is often the case that people use computers merely to play games without knowing about the wide range of activities they may engage a person in. One has to know more about computers and use all their capabilities for one's own benefit. Knowing the capabilities of one's computer can help in work and education, and it can save time and money. In this essay, you will find out why it is important to know your computer and how much time and money you can save by using all of its capabilities.
What should be mentioned above all is that knowing one's computer perfectly gives an opportunity to use it for the most various purposes. It depends on what exactly a person needs a computer for; in other words, whether it is needed for studying, for work, or for entertainment. Using a computer for work or education involves much more than is required for playing computer games. These days most students are permitted to submit only typed essays, research papers, and other works, which makes mastering the computer vital. Information technologies have played a vital role in higher education for decades (McArthur & Lewis, n.d.); they contributed and still continue to contribute to students gaining knowledge from outside sources by means of the World Wide Web, where information is easily accessible and available to everyone. To have access to this information, one has to know how to use a computer and to develop certain skills. These skills should include, first of all, using a Web browser. In 1995, Microsoft introduced a competing Web browser called Microsoft Internet Explorer (Walter, n.d.), but there exist other browsers, the choice of which depends on the user. Moreover, knowing different search engines (for instance, Google, Yahoo, etc.) is required; the user should also be able to process, analyze, and group similar sources by extracting the most relevant information. At the same time, the user is supposed to know that not all Internet sources should be trusted, especially when the information is gathered for a research paper. Trusting the information presented in ad banners is unwise, for their main purpose is attracting the user's attention. They may contain false or obsolete data misleading the user. When utilizing information obtained from the Internet for scholarly works, one should remember about plagiarism, or responsibility for copying somebody else's work. Students who use such information should cite it properly and refer to the works of other scholars rather than simply stealing their ideas. Plagiarism is punishable and may result in dropping out of school or college. This testifies to the fact that using a computer for studies demands the acquisition of certain computer programs and practice in working with them, which would give a clear idea of how to search for and process the information needed for completing different assignments.
What's more, knowing a computer for work is no less important. Mastering certain computer programs depends on the type of work. Any prestigious job demands a definite level of computer skills, from basic to advanced. The work of a company sometimes involves more than using standard computer programs; software is often designed specifically for the company depending on the business's application. This means that acquisition of a special program may be needed, and a new worker will have to complete computer courses and gain knowledge of a particular program. Nevertheless, knowledge of basic computer programs is crucial for getting the job one desires. Since the work of most companies is computerized, one will need to deal with a computer anyway, and the skills obtained while playing computer games will not suffice. A person seeking a job should be a confident user of basic computer programs, such as Microsoft Office Word, Microsoft Office Excel, Internet Explorer (or other browsers), etc. A confident user is also supposed to know what to do with the computer when some malfunctions arise. Of course, each company has system administrators who deal with computer defects, but minor problems are usually handled by the users themselves. Apart from knowing the computer, a person should be aware of the policy for using it in the office. For instance, some companies prohibit using office computers for personal purposes, especially when it comes to downloading software and installing it on the computer without notifying the system administrator. This may be because incorrectly installed software can harm the computer's system in general or, if the software has been downloaded from the Internet, it may contain spyware that makes the information on the computer accessible to other users. This can hardly be beneficial for a company dealing with economic, political, governmental, or any other kind of issues. Therefore, knowing a computer is necessary for getting a prestigious job and ensuring the proper and safe performance of the company one is working for.
And finally, using all the capabilities of a computer can save time and money. Firstly, a personal computer has a number of tools which facilitate people's lives. Special software, for instance Microsoft Money, makes it possible to plan the budget, to discover faults in the plan, and to correct it easily without having to rewrite it from the beginning; the program itself can manage financial information provided by the user and balance checkbooks in addition. Such computer tools as word processors enable users to make corrections at any stage of the work; moreover, by means of them one may change the size of letters and the overall design of the work to give it a better look. Mapping programs can also be useful; by means of a computer one may install such a program (GPS) into the car, and the program will then take care of planning the route, avoiding traffic jams, and choosing the shortest ways. Secondly, electronic mail allows keeping in touch with people not only in your country but also abroad. It is cheaper and much faster than writing letters or communicating over the telephone, when the connection is often of low quality and the conversation is constantly interrupted. Most telephone companies aim at getting profits from people's communication with their friends and relatives, whereas electronic mail is almost free; all that one needs to do is pay a monthly fee to the Internet Service Provider. Eventually, computer users have an opportunity to do shopping without leaving the apartment; the choice of products one may want to buy is practically unlimited, and the user can always find recommendations from people who have already purchased the product. A personal computer can also help to save money due to its being multifunctional. Knowing much about the capabilities of the computer, one may start using it as a TV set, watching favorite programs online, and as a PlayStation, playing the same games on the personal computer. Not only can a user watch favorite TV shows by means of his/her computer, but he/she can also download them from various torrent sites for free. Using a PC to send faxes through online fax services saves money, for one does not have to buy a fax machine and use an additional telephone line; it also saves the paper and ink which one would otherwise have to buy.
Taking into consideration everything mentioned above, it can be stated that knowing a computer is important, for it can make people's lives much easier. Firstly, computers are helpful in getting an education, since by means of them students can find any possible information necessary for writing research papers and other kinds of written assignments. To do this, a student needs to know how to search the Internet and process the information he/she can find there. Secondly, knowing a computer raises one's chances of getting a good job, because most companies look for employees with a sufficient level of computer skills. When working for a company, one should also remember its policy regarding the use of computers for personal purposes and be able to cope with minor problems arising in the course of work with the computer. Finally, a computer allows saving time and money. It saves the user's time through such tools as word processors and budget-planning and mapping programs, which facilitate the user's life. The computer can also save money by serving as a TV, fax, and PlayStation, giving access to TV shows and online fax services and allowing users to play video games without buying special devices for this.
References
McArthur, D., & Lewis, W. M. (n.d.). Web.
Moursund, D. (2007). A College Student's Guide to Computers in Education. Web.
Walter, R. (n.d.). The Secret Guide to Computers. Web.
The Trusted Computer System Evaluation Criteria (TCSEC) is a standard established by the US Department of Defense that outlines the fundamental requirements for evaluating the effectiveness of the computer security controls that have been integrated into a computer system. The fundamental role of the TCSEC was to assess, catalog, and facilitate the selection of computer systems that are to be used for effective processing, data storage, and retrieval of sensitive information (Daly, 2004). The Common Criteria for Information Technology Security Evaluation is a framework through which users of a computer system can specify functional and assurance security requirements, after which vendors can implement computer security based on the users' claims. The Common Criteria offer assurance that facilitates the specification of user requirements in terms of functionality and assurance, the vendors' implementation of those requirements, and standard evaluation in order to ensure that a security product meets international computer security standards. This paper discusses the impacts associated with the transition from the Trusted Computer System Evaluation Criteria to the international Common Criteria for information security evaluation. The paper provides an overview of the concepts of security assurance and trusted systems, an evaluation of the ways of providing security assurance throughout the life cycle, an overview of validation and verification, and the evaluation methodology and certification techniques deployed in both criteria for security evaluation.
Security assurance is one of the core objectives and requirements of the Trusted Computer System Evaluation Criteria, which stipulates that a secure computer system should have hardware and software mechanisms that can be evaluated independently in order to provide adequate assurance that the system meets minimum security requirements. In addition, the concept of security assurance should provide a guarantee that each independent portion of the computer system works as required. Security assurance guarantees the protection of data and any other resources that the system hosts and controls. The basic argument is that the hardware or software entity is itself a resource and should have an appropriate security mechanism (Herrmann, 2003). In order to facilitate the realization of these objectives, two principal kinds of security assurance are required: assurance mechanisms and continuous protection assurance. Assurance mechanisms involve operational and life-cycle assurance, while continuous protection assurance involves the trusted mechanisms that are used in the implementation of the basic security requirements and ensures that these requirements are not subjected to unauthorized alterations. A trusted system, on the other hand, refers to a system that can be depended upon to carry out its specified functionality and enforce the outlined security policies (Lehtinen, 2006). The underlying argument is that the failure of a trusted system is bound to result in the breach of a particular security policy. Basically, a trusted system can be perceived as a reference monitor that plays an integral role in mediating all access control decisions. The relationship is that security assurance results in a trusted system, with the outcome being an integration of computer hardware, software, and any other middleware that can be used to enforce particular security policies. In order to avoid failure of the trusted system, higher levels of system assurance are required to guarantee the effectiveness of the trusted system. An empirical analysis of the above implies that the TCSEC utilized six evaluation methodologies, while the Common Criteria utilize seven evaluation methodologies (Merkow, 2004).
Under the Trusted Computer System Evaluation Criteria, life-cycle assurance normally entails carrying out security testing, the specification of the design and its verification, configuration management, and finally trusted system distribution. One of the TCSEC requirements is that security implementation should take place throughout the life cycle of system development. Security testing is used to determine whether a system has the capability of protecting its data and resources without impairing its overall functionality. Therefore, security testing aims at assessing the ways in which a system ensures confidentiality, data integrity, user authentication, system availability, user authorization, and non-repudiation. Specification of the design is done in accordance with the functional and user security requirements. Trusted systems have to integrate the functional and user requirements with the security policies in order to guarantee the core objectives of information security. Design specification is an important process for outlining design requirements during the implementation of a security system (Lehtinen, 2006). Verification simply entails confirmation that the security system is functioning in accordance with the expected requirements, in the sense that it should meet the minimum security requirements in order for the system to be deemed effective at enforcing security. Configuration management aims at ensuring that there is consistency with respect to security performance. This normally includes keeping track of any needed changes and constant adjustment of the security baselines in accordance with the nature of the security threats present. Two core processes are undertaken during configuration management: revision control and baseline establishment. Trusted system distribution, on the other hand, aims at guaranteeing the security of a trusted system prior to its installation. According to the TCSEC requirements, it is important that the security properties of a trusted system be intact prior to its installation for the user. In essence, the installed system should be an exact copy of the system that was evaluated against the requirements of the TCSEC. Basically, the life cycle of a security system implementation entails the definition of security requirements, design, and implementation. Assurance justification and the design implementation requirement are vital in ensuring that the implemented system meets the security evaluation criteria under both the TCSEC and the Common Criteria, with the Common Criteria having more evaluation frameworks than the TCSEC (Daly, 2004).
Validation and verification are vital in ascertaining the effectiveness of a security system. They are used in checking that a security system meets its design specifications and that its functionality is not impaired. Validation and verification are significant elements of a quality management system. Specifically, validation can be perceived as a quality control strategy used in evaluating whether a security system has complied with international security standards, regulations, and specifications. Verification is usually an internal process and takes place during all phases of security system development and implementation. Validation, on the other hand, can be viewed as a quality assurance process whose principal objective is analyzing the performance of a security system; it also aims at guaranteeing high levels of security assurance, in the sense that the implemented security system meets all the requirements needed for it to be deemed effective. Basically, validation evaluates fitness for purpose and facilitates the acceptance of a security product by the end users. Validation entails building the right security system, one that matches the needs of the users, while verification involves building the security system in the right manner, confirming that the specifications are implemented as required.
There are various evaluation methodologies and certification techniques that can be used to determine the level of security assurance of an information system. The main objective of evaluation methodologies is to determine the vulnerability of a system, which normally includes an assessment of breaches of the security policies and controls, which may in turn result in a violation of the security policies (Daly, 2004). Formal verification is one methodology that can be deployed in order to ensure that a security system meets certain constraints. It normally entails the establishment of preconditions and postconditions for the implemented system. In order for the system to be deemed effective, the postconditions must meet all the constraints. Penetration testing is another technique that can be used to determine whether a security system meets some minimum constraints. It normally entails stating hypotheses about the characteristics of the system and the states that are likely to expose the system to vulnerability. The result is normally a state that has been compromised. Tests are carried out in order to see whether the system can be compromised in this way, which would make the system vulnerable (Herrmann, 2003).
In conclusion, the transition from the Trusted Computer System Evaluation Criteria to the international Common Criteria resulted in more secure systems, owing to the fact that the Common Criteria have more evaluation frameworks than the TCSEC.
References
Daly, C. (2004). A Trust Framework for the DoD Network-Centric Enterprise Services (NCES) Environment. New York: IBM Corp.
Herrmann, D. (2003). Using the common criteria for IT security evaluation. New York: Auerbach.
Lehtinen, R. (2006). Computer security basics. New York: O'Reilly Media, Inc.
Merkow, M. (2004). Computer security assurance using the common criteria. New York: Cengage Learning.
Tom's Hardware Guide is a comprehensive online publication that reviews newly developed computer hardware and computing technologies, which makes it an important starting point among information technology professional resources. In the article titled "Wi-Fi security: cracking WPA with CPUs, GPUs and the cloud," Andrew Ku argues that despite the convenience associated with wireless Wi-Fi networks, security is a major problem, ranging from the cracking of passwords at the desktop level to the cloud (Ku 2011). Wi-Fi is a potential source of security breaches. The increasing perception that information is secure when online has increased the use of online applications. However, instances of computer security vulnerabilities are imposing significant constraints on user confidence regarding the use of online applications and wireless networks. The article aims at exploring the vulnerabilities of wireless networks and of applications that are executed over the cloud. The article also discusses the first-line-of-defense security measures that can be implemented when adopting wireless security strategies (Ku 2011). The article further suggests that WEP encryption is obsolete, as hackers can bypass the encryption methods that are mostly deployed by wireless networks. Strategies that can be used in securing a WPA network are also discussed in the article. This essay offers a comprehensive discussion of what was learned from the Tom's Hardware website with respect to the core aspects of the module. The main focus of the post is the security issues of Web 2.0 technologies.
The articles found on the Tom's Hardware website offer a comprehensive review of the security of Web 2.0 technologies, which is vital in the current data communications field. A significant characteristic of the Web 2.0 platform is that mobile users are the ones who undertake actions such as generating and uploading content to web sites (Ku 2011). This is increasingly evident as large enterprises embark on the adoption of Web 2.0 tools, which include blogs and RSS. With such features, Web 2.0 is vulnerable to exploitation by malicious users, implying that organizations have to implement appropriate mobile security strategies (Ku 2011). One of the most significant mobile threats associated with Web 2.0 technologies is cross-site scripting, which allows malicious users and hackers to inject client-side script into web content that has already been accessed by other users. Basically, cross-site scripting provides a framework through which hackers can evade access controls. Cross-site scripting accounts for approximately 80 per cent of Web 2.0 threats; as a result, large enterprises should deploy appropriate strategies to combat this threat. In addition, the detection of attacks initiated by cross-site scripting is normally difficult, which malicious users exploit to maximize the effects of their attacks. XSS can use the Browser Exploitation Framework to establish an attack on the user environment and the web content (Ku 2011).
The second mobile threat that Web 2.0 technologies are susceptible to is SQL injection attacks, which primarily entail the use of a code injection technique in order to take advantage of a security vulnerability associated with Web 2.0 technologies. Web 2.0 is susceptible to injection attacks due to the fact that users can generate and upload web content to a web site. This in itself is a vulnerability through which malicious users can initiate an SQL injection attack. Other injection attacks can be initiated in the form of JavaScript injection and XML injection. Because Web 2.0 technologies significantly depend on client-side code, hackers make use of client-side input validation in order to evade access controls.
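As a generic illustration of the standard mitigation (not drawn from the Tom's Hardware article), the sketch below contrasts string concatenation with a parameterized query using Python's built-in sqlite3 module; the posts table and the search scenario are hypothetical.

import sqlite3

# Hypothetical example: a search box on a Web 2.0 site. The user-supplied
# string must never be concatenated into the SQL text.
def find_posts(conn: sqlite3.Connection, search_term: str):
    # UNSAFE (for illustration only): "... WHERE title LIKE '%" + search_term + "%'"
    # A value such as "'; DROP TABLE posts; --" could then change the query's meaning.
    # SAFE: pass the value separately so the driver treats it purely as data.
    cursor = conn.execute(
        "SELECT id, title FROM posts WHERE title LIKE ?",
        (f"%{search_term}%",),
    )
    return cursor.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO posts (title) VALUES ('Web 2.0 security notes')")
    print(find_posts(conn, "security"))                  # normal use
    print(find_posts(conn, "'; DROP TABLE posts; --"))   # treated as plain text, no injection

The same idea applies to any database driver that accepts bound parameters; server-side validation remains necessary because client-side checks can be bypassed, as noted above.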
The third issue associated with mobile security in Web 2.0 technology is information leakage that is initiated by user-generated content. Hackers exploit this feature of Web 2.0 technologies to upload and run their malicious code on a web site. This could result in a large enterprise hosting inappropriate content, which could not only lead to data breaches but also affect the brand. Information leakage has significant effects on the operations of a company and normally serves as a threat to data integrity and confidentiality (Ku 2011).
Insufficient anti-automation also makes the initiation of attacks on Web 2.0 applications easy. This is facilitated by the programmatic interfaces of most Web 2.0 applications. Inadequate anti-automation can foster the automated retrieval of information and the automated opening of accounts in order to facilitate access to the web content. Such threats can be curbed by the use of CAPTCHAs. Information leakage is another mobile security issue associated with Web 2.0 technologies: the mobility of Web 2.0 technologies facilitates content sharing, which can introduce a vulnerability that malicious users can exploit in order to gain access to the system. It is arguably evident that the internet has revolutionized the way businesses are conducted and how people undertake their work. Web 2.0 is an important aspect of the internet that has played a significant role in enhancing business functionality. A significant limitation is that increased usage implies increased risk; as such, Web 2.0 applications offer opportunities through which malicious users can inject and run malicious code in web content (Ku 2011).
There are also various links within the website that are helpful to follow up. Some of them include For IT Pros, Charts, Brands, and Articles, which offer comprehensive information concerning hardware and reviews of computing technologies. Other salient information concerning the website is that it can be used as a starting point for conducting hardware reviews based on performance, benchmarks, and cost for people intending to acquire computer hardware. Altogether, the site is an important tool for an IT professional owing to its diverse pool of articles, which covers almost all domains of information technology, ranging from computer hardware and computing technology to emerging computing trends.
A personal computer (PC) is defined by the American Heritage Dictionary as a small computer designed and intended for use by an individual at home or at work for light computing tasks (Personal Computer). The PC gained widespread adoption from the early 1980s with the introduction of versions of Apple computers, notably the Apple II, and later the IBM PC by IBM, which enabled many people to own computers in their homes. Technological advancement and competition in the computer hardware industry have resulted in low-cost microprocessors and compact computer parts. Today there are a host of computer parts manufacturers that produce compatible components that work together when correctly assembled. In this process essay it is shown that it is easy for anyone with little computer knowledge to assemble and own a computer of his/her own specifications.
The first step in assembling a PC involves gathering the components. Some of the basic ones are: motherboard, computer case, power supply unit, processor and its cooler, memory, hard disk drive, optical drives (DVD/CD/Blu-ray drive), video graphics card, keyboard, mouse, monitor, and operating system. In addition, tools such as a set of non-magnetic screwdrivers, thermal paste, manuals, and a pair of pliers are also required. The choice of many of these components is up to the owner, and their prices vary depending on the manufacturer. However, emphasis should be on their compatibility, purpose, and availability (How to assemble a computer). The motherboard forms one of the core components, as everything is plugged into it, and its choice will depend on whether one is building an AMD or an Intel system. It also determines the type of memory used, maximum speed, and future upgradability (Hutcheson 3). The processor will determine the speed of the system, but the choice between the two major types, Intel and AMD, remains a matter of taste. The case should be suitable as far as air circulation for CPU cooling is concerned. For cases that come with a power supply unit, it is important to ensure that it is compatible with the motherboard. The hard disk drive is for storage, but it is important to go for the most modern ones with high rotational speed (RPM). A video card is required in case the motherboard lacks an integrated one or higher resolution is required. A keyboard, mouse, and monitor are required for configuring the system after the assembly. The thermal paste serves to help dissipate heat generated by the CPU and should be changed after a period of 6 months (Gupta n.pag). Manuals are critical for this process, especially for beginners, for reference purposes.
After gathering all the components, the assembler should be aware of a few safety precautions before the actual assembly begins. As static electricity can damage any electronic device, it is important for one to discharge static electricity from the body prior to handling any device. This can be done by wearing an anti-static wrist band or simply by touching the casing with both hands (Hutcheson 4). In addition, excessive force should not be applied when installing the components, to avoid any damage (Hutcheson 4). It is also important to work on a spacious table top in a well-lit room. Before commencing the assembly process, the components should be unpacked from their wrappings and laid neatly on the working area.
With all the parts ready and the necessary safety precautions in mind, the final activity is the actual assembly. The first step involves opening the case and studying its layout, particularly where the motherboard is laid. The casing cover can be removed by a sliding mechanism or by unscrewing the screws on its back panel. With the case open, the motherboard is then mounted on its inner side while ensuring that its integrated ports are well within the rectangular cut-out of the back panel. This may involve securing it on the stand-offs around its perimeter or using screws. The processor is then carefully inserted into the motherboard processor socket. It is important to consult the motherboard manual for the correct alignment and locking mechanism. With the processor well secured, the thermal paste is applied to cover the whole of its top. This is followed by placing the heat sink on top of the processor and locking it into place. The locking may involve a lever-like pin, but it is important to consult the manual. If the heat sink lacks an attached fan, the fan can be fixed with the help of its manual. After inserting the processor and its coolers, the power supply unit (PSU) can be inserted (in case the case does not come bundled with one) in its position such that the power ports are visible from the back panel, and held using screws. The PSU is then connected to its corresponding 24- or 20-pin connector on the motherboard. The square-like power cable for the processor from the PSU should also be connected to the appropriate port on the motherboard. After installing the PSU, the memory modules can then be inserted into their slots on the motherboard by unlocking the end clips and gently pushing the modules down until the clips snap up. Afterwards, the hard disk drive is inserted into its bay area and held using screws. It is then connected to the power supply and its motherboard data cable. The type of connection to the motherboard will depend on whether it is IDE or SATA-based, in addition to the number installed. The optical drives, such as DVD/CD/Blu-ray drives, are fixed by first removing the front panel plates and are secured with screws inside their bay area. They should also be connected to the power supply and to the corresponding data cable from the motherboard. In case of more than one non-SATA optical drive, the jumper settings should be set as required with the help of a manual. The last step involves connecting the wires of the power switch, the reset switch, the hard drive LED, the internal speakers, and other inputs such as the front panel USB and audio panel to their appropriate motherboard ports. It is very important to consult the motherboard manual on this, as some of these have look-alike connectors and it may be easy to make a wrong connection. However, this should not pose a serious problem, as most motherboards bear appropriate abbreviations near the correct ports. The graphics card, if required or due to lack of an integrated one on the motherboard, can be installed by fixing it firmly into a PCI slot after removing the cover plates at the back panel. With everything in place, it is important to recheck every connection before closing the case cover and powering the computer after connecting the system unit to the monitor, keyboard, mouse, and a power outlet. A successful assembly will result in the system booting up, although not to completion, because of the lack of an operating system.
After installing an operating system, a Windows system will require the installation of device drivers using the motherboard driver CD. On the other hand, a system that does not boot up or that results in smoke or a burning smell will require a recheck of the connections and/or verification of the individual components.
Assembling a computer is a pretty simple affair nowadays. It is possible to build a computer system that lives up to one's preference and taste. The process involves sourcing the required components and connecting them together on a motherboard contained in a case. With just the accompanying manuals, any person with little computer knowledge can build a customized system that best serves his/her interests.
Works Cited
Gupta, Ankur. How to assemble and build a PC. DigitGeek.com. DigitGeek, 2008. Web.
How to assemble a computer. Liutilities.com. Uniblue, 2007. Web.
Hutcheson, Mike. How to build a computer. Squidoo.com. Squidoo, LLC, n.d. Web. 2011.
Personal Computer. TheFreeDictionary.com. Farlex, Inc., 2011. Web.
Routers and firewalls are network connectivity devices that handle data packets on the network. The router directs these data packets to their addressed destination on an internal as well as an external scale, that is, within or outside the local area network (LAN). The firewall, on the other hand, is hardware or software that secures a network against external threats. On their own, firewalls have no inbuilt intelligence as far as identifying or recognizing intrusions is concerned. They must therefore be used in conjunction with an intrusion detection system (IDS). It is therefore desirable that a competent network administrator include all of these in the network management policy.
Subnetting
A subnet defines a physical segment within the transmission control protocol and internet protocol (TCP/IP) environment. This process makes use of an IP address that represents a distinct network ID. Normally, one network ID is issued to an organization by the InterNIC (Holliday, 2003). When the network is divided into subnets, each segment should use a distinct subnet ID. Each segment therefore has a distinctive subnet formed by dividing the host ID bits into two parts. One part identifies the segment as a unique network, while the other identifies the hosts. This process is called subnetting.
Benefits of Subnetting
Organizations apply subnetting to create multiple physical segments across one network. This enables one to:
Blend various technologies including Ethernet and token ring
Overcome the current technology limitations like those placed on the maximum number of hosts per segment
Reduce network overcrowding by re-directing traffic and minimizing broadcasting
The IP addressing system used for subnets is known as subnetting. Before implementing subnetting, one needs to determine the current requirements: one should define the required number of host addresses corresponding to the physical segments available.
Each TCP/IP host must have a minimum of one IP address. Based on these requirements, a single subnet mask is sufficient for the whole network. A unique subnet ID must be defined for every physical segment, and thus a range of host IDs must be defined for each segment.
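As an illustration of the host-bit borrowing described above, the sketch below uses Python's standard ipaddress module to divide one network ID into four subnets, each with its own subnet ID, mask, and host range; the 192.168.0.0/24 network is an assumed example, not taken from the text.

import ipaddress

# Hypothetical network ID issued to an organization; borrowing two host bits
# creates four subnets, each with its own subnet ID and host ID range.
network = ipaddress.ip_network("192.168.0.0/24")

for subnet in network.subnets(prefixlen_diff=2):   # /24 split into four /26 subnets
    hosts = list(subnet.hosts())
    print(f"subnet {subnet}  mask {subnet.netmask}  "
          f"hosts {hosts[0]} - {hosts[-1]} ({subnet.num_addresses - 2} usable)")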
IP Addressing
All networks must have a way of uniquely identifying individual components, and the identifier may be in a name or number format. In the case of the TCP/IP protocol, a unique number called the IP address is used to recognize each host. The IP address defines a host's position on the network (Day, 2008). The IP address is unique and has a standardized format. Every IP address identifies the network ID (network number) and the host ID (host number). The network ID represents the resources positioned on the same physical section of the network. The host ID represents a TCP/IP host, such as a workstation, server, or router, located on that segment. The IP address is 32 bits long and is subdivided into octets, groups of 8 bits separated by periods. The internet community categorized IP addresses into five classes to cater for different sizes of networks. Classes A, B, and C are the most widely used in this classification. These classes of addresses identify the bits that represent the network ID and those which represent the host ID. The classification also gives the total number of networks and the number of hosts on each network. Class A network addresses cover networks having many hosts; this class allows 126 networks with roughly 17 million hosts on each network. Class B network addresses cover medium-sized networks and can support 16,384 networks with about 65,000 hosts on each network. Class C addresses are suitable for relatively small local area networks (LANs), with approximately 2 million networks and 254 hosts per network.
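A minimal sketch, assuming the classful scheme just described: the value of the first octet determines the class, which in turn fixes how many bits form the network ID and how many are left for host IDs. The sample addresses are hypothetical, and Python's standard ipaddress module is used only for parsing.

import ipaddress

def classful_info(addr: str) -> str:
    """Describe the classful network ID / host ID split of an IPv4 address."""
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24   # leading 8 bits
    if first_octet < 128:        # Class A: 8 network bits, 24 host bits
        cls, prefix = "A", 8
    elif first_octet < 192:      # Class B: 16 network bits, 16 host bits
        cls, prefix = "B", 16
    elif first_octet < 224:      # Class C: 24 network bits, 8 host bits
        cls, prefix = "C", 24
    else:
        return f"{addr}: class D/E (multicast or experimental)"
    net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return (f"{addr}: class {cls}, network ID {net.network_address}, "
            f"{32 - prefix} host bits ({net.num_addresses - 2} usable hosts)")

for sample in ("10.1.2.3", "172.16.5.9", "192.168.1.20"):   # hypothetical addresses
    print(classful_info(sample))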
Address Resolution Protocol (ARP)
When two machines communicate, an IP address identifies the destination machine. However, transmission of data must take place at the physical and data link layers, and for this purpose the physical (hardware) address of the destination machine must be used. The address resolution process involves mapping IP addresses to hardware addresses. The address resolution protocol (ARP) obtains the hardware addresses of hosts on broadcast-based networks. ARP acquires the hardware address of the target host or gateway by broadcasting the destination's IP address on the local network (Athenaeum & Wetherall, 2005).
As soon as the hardware address is obtained, it is stored as an entry within the ARP cache together with the corresponding IP address. This ARP cache is checked for an IP-to-hardware address mapping prior to starting an ARP request transmission. Before communication can take place between two hosts, the IP address of each host must be determined and mapped to the host's hardware address. An ARP request and reply constitute the address resolution process, with the following steps:
An ARP request is started any time a host attempts to communicate with another host. If IP determines that the destination IP address is local, it checks its cache for the destination's hardware address. If no match is found, ARP constructs a request that essentially asks: who has this IP address, and what is your hardware address?
The source host's IP and hardware addresses form part of the request. The ARP request is transmitted over the network so that all the hosts on the same network can receive and process it.
Every host on the network receives the broadcast and compares the requested IP address to its own; where there is no match, the request is ignored.
The destination host recognizes that the IP address in the request matches its own address. It then sends an ARP reply containing its hardware address directly to the source host.
The destination host's ARP cache is then updated with the IP and hardware addresses of the source host. After the reply, the source host establishes the communication.
An ARP broadcast can generate considerable traffic on a network, hence reducing network performance. To optimize this process, the results of an ARP broadcast are held in a cache for two minutes; if the entry is reused within those two minutes, it is held for a further ten minutes within the cache before it is deleted. Entries in the ARP cache are also timed out automatically in case hardware addresses change or a network card is replaced.
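The sketch below models the cache behaviour just described in plain Python: a new entry lives for two minutes, and an entry that is reused within that window is kept for a further ten minutes. It is a simplified illustration, not a real ARP implementation, and the addresses are made up.

import time

ARP_INITIAL_TTL = 120      # seconds a fresh entry is kept
ARP_EXTENDED_TTL = 600     # seconds granted once an entry is reused

class ArpCache:
    def __init__(self):
        self._entries = {}  # ip -> (hardware_address, expiry_time)

    def add(self, ip, mac):
        # Called when an ARP reply is received for this IP address.
        self._entries[ip] = (mac, time.time() + ARP_INITIAL_TTL)

    def lookup(self, ip):
        entry = self._entries.get(ip)
        if entry is None:
            return None                      # miss: an ARP request broadcast would follow
        mac, expiry = entry
        if time.time() > expiry:
            del self._entries[ip]            # stale entry: must be re-resolved
            return None
        # Entry reused before it expired: extend its lifetime.
        self._entries[ip] = (mac, time.time() + ARP_EXTENDED_TTL)
        return mac

cache = ArpCache()
cache.add("192.168.1.10", "00:1a:2b:3c:4d:5e")    # learned from an ARP reply
print(cache.lookup("192.168.1.10"))                # hit: no broadcast needed
print(cache.lookup("192.168.1.99"))                # miss: ARP request would be sent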
IP Routing
Routing involves choosing a path to use in sending packets over a network. Routing takes place at an IP router or host when it transmits IP packets. A routing table stored within the memory of the host or router is consulted during the routing process. The table consists of entries containing the IP addresses of routers leading to other networks, which are used during communication. A configured router is only able to forward packets to certain networks. During inter-host communication, IP initially determines whether the destination host is local or remote. If it is remote, IP then checks the routing table for a path to the remote network or host and may use the IP address of a router to transfer the packet over the network. The routing table is referenced continuously for the addresses of the remote network or host. If no path is available, an error message is sent back to the source.
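A minimal sketch of the routing-table lookup described above, assuming a hypothetical table: each entry maps a destination network to a next-hop router, the most specific (longest-prefix) match wins, and a default route catches everything else.

import ipaddress

# Hypothetical routing table: destination network -> next-hop router address.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),   "192.168.1.1"),
    (ipaddress.ip_network("10.20.0.0/16"), "192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"),    "192.168.1.254"),  # default route
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    if not matches:
        return "unreachable: error returned to source"
    _, hop = max(matches, key=lambda m: m[0].prefixlen)  # longest prefix wins
    return hop

print(next_hop("10.20.5.1"))   # 192.168.1.2 (more specific /16 route)
print(next_hop("8.8.8.8"))     # 192.168.1.254 (default route)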
Static and Dynamic IP Routing
Static routers make use of routing tables which are constructed and updated manually. When a route is changed, static routers have no ability to share this information amongst themselves nor do they exchange routing information with dynamic routers.
Dynamic routing, on the other hand, is a function accomplished by the routing information protocol (RIP). Similarly, open shortest path first (OSPF) may also be used. Dynamic routers periodically share routing information during the dynamic routing process.
TCP/IP Services
Email
What we call e-mail, or electronic mail, is the transmission of messages over a communication network. Most e-mail systems consist of a basic text editor used to compose, edit, and send messages. The messages are addressed using a specific email address, which must be unique to the recipient. Electronic mail boxes store the sent messages until the intended recipient retrieves them. All online services and internet service providers (ISPs) offer gateway functions as well as e-mail, and most of them support the exchange of mail with users on other systems.
Simple Mail Transfer Protocol (SMTP)
Simple Mail Transfer Protocol (SMTP) specifies how mail is delivered from one system to another. It is a relatively straightforward protocol that makes a connection from the sender's server to that of the recipient and then transfers the message.
SMTP is used for:
Delivering messages from the email client to the SMTP server
Transferring messages between various SMTP servers
SMTP is not used for transferring the message from the recipient's SMTP server to the recipient's email client, because it requires both the source and the destination to be connected to the internet at the same time.
Post Office Protocol 3 (POP3)
Post Office Protocol 3 (POP3) is a typical messaging protocol that is used in receiving e-mail. This protocol, which operates in a client-server setup, ensures that e-mail is delivered to and stored by an internet server. One can then proceed to check one's mail box and download the mail. The internet message access protocol (IMAP) is an alternative protocol to POP3, which is used to view e-mail at the server as though it were on the client computer. Both the POP3 and IMAP protocols are involved in receiving e-mails. However, they differ from SMTP, which is used to transfer email across the internet.
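A brief sketch of the two roles just described, using Python's standard smtplib and poplib modules: SMTP hands the message to the outgoing mail server, and POP3 later retrieves it from the mailbox server. The server names, accounts, and credentials are hypothetical placeholders.

import smtplib
import poplib
from email.message import EmailMessage

# Hypothetical message, servers, and credentials for illustration only.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello over SMTP.")

# SMTP: the client hands the message to its outgoing mail server (port 25).
with smtplib.SMTP("smtp.example.com", 25) as smtp:
    smtp.send_message(msg)

# POP3: the recipient later retrieves the stored message from the mailbox server (port 110).
pop = poplib.POP3("pop.example.com", 110)
pop.user("bob@example.com")
pop.pass_("secret")
count, _ = pop.stat()                    # number of messages waiting in the mailbox
if count:
    _, lines, _ = pop.retr(1)            # download the first message
    print(b"\n".join(lines).decode("utf-8", errors="replace"))
pop.quit()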
Hypertext Transfer Protocol (HTTP)
This protocol defines the basis of the web, where most information is stored in hierarchical or two-dimensional sequences. However, hypertext-formatted documents allow information to be accessed in any order and from any direction by using links from one document to another. These links are embedded into the documents and contain the uniform resource locator (URL) address of another location. The URL is used as the addressing system on the internet, and it is unique because it contains all the information required to locate any internet resource. Typically, the URL address consists of five parts: the leftmost part defines the name of the protocol, followed by the fully qualified domain name (FQDN) of the server containing the resource in question. The third portion of the URL corresponds to the port address, which is followed by the directory path to the resource and, finally, the file name with the suitable extension.
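To make the URL structure concrete, the short sketch below uses Python's standard urllib.parse module to split a hypothetical URL into the protocol, fully qualified domain name, port, and path components described above.

from urllib.parse import urlparse

# Hypothetical URL illustrating the parts described above.
url = "http://www.example.com:8080/docs/networking/intro.html"
parts = urlparse(url)

print("protocol :", parts.scheme)    # http
print("FQDN     :", parts.hostname)  # www.example.com
print("port     :", parts.port)      # 8080
print("path     :", parts.path)      # /docs/networking/intro.html (directory path + file name)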
File Transfer Protocol (FTP)
This is a connection-oriented protocol which is considered useful especially when transferring files between different operating systems. This protocol may be used to transfer files in both directions between an FTP server and a client. An FTP site can be password-restricted, meaning that one would require a username and a password to access it. The FTP operation involves logging on to the distant computer, browsing directories, and then transferring the required files. Browser software such as Internet Explorer makes this procedure much simpler by automatically logging one on to the FTP server, provided anonymous connections are allowed. FTP also supports copying files from the client computer to a server; thus it is more suitable for the transfer of files than HTTP.
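A minimal sketch of the operations just listed (log on, browse, and transfer in both directions) using Python's standard ftplib module; the server address and file names are hypothetical, and anonymous login is assumed to be permitted.

from ftplib import FTP

# Hypothetical FTP site and file names for illustration only.
with FTP("ftp.example.com") as ftp:
    ftp.login()                       # anonymous connection, as browsers do automatically
    print(ftp.nlst())                 # browse the current directory listing
    with open("readme.txt", "wb") as fh:
        ftp.retrbinary("RETR readme.txt", fh.write)   # server -> client download
    with open("report.txt", "rb") as fh:
        ftp.storbinary("STOR report.txt", fh)         # client -> server upload (if permitted)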
Telnet
Another way of accessing information from servers is to log on to the remote computer using telnet. This service provides terminal emulation software that supports a remote connection to another computer. However, telnet has a disadvantage in that one must know how to issue commands to the computer one is logged in to. Moreover, the remote computer must be in a position to grant access and should be running the telnet daemon service.
Simple Network Management Protocol (SNMP)
This is a constituent of TCP/IP originally developed to monitor and troubleshoot routers and bridges. SNMP provides the ability to monitor and communicate status information between terminal servers, wiring hubs, routers and gateways, and computers running Windows NT and LAN Manager servers. In its structure, SNMP has two components, agents and management systems, arranged in a distributed architecture.
Ports
Application processes using TCP/IP for transport have unique identification numbers known as ports. These ports specify the communication path between a client and the server part of the application, and all server-side applications have pre-assigned port numbers. This assignment is carried out by the internet assigned numbers authority (IANA) for well-known ports ranging between 0 and 1023.
Table 1 below lists some of the well-known port numbers with their corresponding services. These port assignments are documented in RFC 1700.
Table 1: Port numbers and corresponding process names
Port number    Process name
20             FTP Data
21             FTP
23             Telnet
25             SMTP
53             Domain (DNS)
67             BOOTP server (UDP)
80             HTTP
110            POP3
User datagram protocol (UDP) and TCP indicate source and destination port numbers within packet headers. The network operating system software broadcasts data from various applications to the network and recaptures incoming packets from the network; it then matches them to the corresponding applications using the packets' port numbers and IP addresses. Firewalls are devices configured on the network to distinguish between packets based on their source and destination port numbers; services, in this case, associate with the transport protocol ports using sockets (Day, 2008, p.67). A socket is a software construct that identifies a transport endpoint on a particular network. In summary, the 256 values categorized in the documentation for the port numbers and their associated services are outlined below (a minimal socket sketch follows the list):
0 to 63 reserved for network wide standard functions
64 to 127 which covers the host specific functions
128 to 239 reserved for future use, while 240 to 255 is for experimental functions
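As a small illustration of ports and sockets in practice, the TCP client below connects to the well-known HTTP port (80) from Table 1 and issues a minimal request; the host name is an example value, and Python's standard socket module is used.

import socket

# The (host, port) pair identifies the transport endpoint; port 80 is the
# well-known HTTP port, and the host name is a placeholder.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect(("www.example.com", 80))
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(1024).decode("ascii", errors="replace"))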
Ethernet
Ethernet is a term used to refer to a variety of local area network (LAN) technologies. Ethernet contains various wiring and signaling standards that focus on the physical layer. The standard was developed by Xerox PARC in 1975 (Blanc, 1974). Currently, Ethernet is a standard describing a connection scheme for computers and data systems through a shared cable. The Ethernet specification covers functions similar to those of the open systems interconnection (OSI) physical and data link layers of data communication.
Some of the typical features of Ethernet include:
The use of linear bus or star topology
The signaling mode is baseband
The access method is carrier sense multiple access with collision detection (CSMA/CD)
Transfer speeds that range from 10 Mbps to 100 Mbps
The cable used is thicknet, thinnet, or unshielded twisted pair (UTP)
The maximum frame size is 1518 bytes
The media is passive, drawing power from the computer, and therefore will not stop working unless the media is improperly terminated or physically cut.
Ethernet arranges data in frames, each of which defines a data unit transmitted individually. The frame itself has a length of between 64 and 1518 bytes, which includes the typical Ethernet header and trailer of 18 bytes. Generally, the data portion of the frame is 46 to 1500 bytes long.
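A quick arithmetic check of the figures above, showing that the 18-byte header and trailer overhead plus the 46 to 1500 byte data portion reproduces the 64 to 1518 byte frame range:

# Frame-size check for the values quoted above.
OVERHEAD = 18                        # Ethernet header and trailer, in bytes
MIN_PAYLOAD, MAX_PAYLOAD = 46, 1500  # data portion of the frame

print("minimum frame:", OVERHEAD + MIN_PAYLOAD, "bytes")   # 64
print("maximum frame:", OVERHEAD + MAX_PAYLOAD, "bytes")   # 1518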
Ethernet was originally based on the concept of computers exchanging data using coaxial cable as the medium. This coaxial cable would traverse a building to interconnect the computers. In order to manage collisions on the medium, a scheme called carrier sense multiple access with collision detection (CSMA/CD) was introduced. This scheme was preferred because it was simpler than the competing token-based technologies. The computers used a special kind of transceiver, called an attachment unit interface (AUI), to connect to the cable (Day, 2008, p.5). The Ethernet cable length was considered a limitation at that time: it was not possible to build very large networks using Ethernet, and thus Ethernet repeaters were developed. The invention of Ethernet repeaters greatly improved signal strength over long-distance transmissions.
Ethernet Standards
10 Base T
This Ethernet standard implies a transmission speed of 10 Mbps on baseband when utilizing twisted pair cable. It is an Ethernet standard that uses unshielded twisted pair (UTP) for connection purposes. 10Base T is physically wired as a star, but the logical topology is a bus. Usually, the hub of a 10Base T network acts as a multi-port repeater, with each computer situated at the end of a cable linked to the hub. Every computer is connected using two pairs of wire, with one pair used to receive data and the other to transmit data.
Other variants of the Ethernet standards are 10Base 5 and 10Base F, which are implemented over thick coaxial and fiber optic cable respectively. The 10Base 5 standard is a standard Ethernet implementation that uses thick coaxial cable. Regardless of the physical star topology, Ethernet repeaters enforce half-duplex operation and the CSMA/CD scheme. The repeater, which is normally limited in capacity, also forwards the collision signal so that packet collisions are handled across all segments. The total throughput through the repeater depends on a single link and hence on a uniform operating speed (Day, 2008, p.5). Connectivity devices like repeaters define Ethernet segments, but they forward all data to the other connected devices. This becomes problematic because the entire network becomes one collision domain. Therefore, in order to address this issue, switching and bridging can be implemented to enable communication at the data link layer.
The bridge, at this level, learns the port addresses and forwards network traffic addressed to these ports (Halsal, 1989, p.21). The use of bridges further enables the mixing of speeds and results in the interconnection of more segments. As a result of this, fast Ethernet was introduced. It was introduced due to increased demands for greater bandwidth arising from faster server processors, innovative applications, and more challenging environments that required higher network data transfer rates than those provided by the existing LANs. As networks grow, more users are catered for through server application approaches, and as a result there is increased network traffic. One inevitable effect is that the average file server is strained by the throughput typical of today's LANs. Current data-demanding applications, which include voice and video along with network server backups, demand reduced latency together with enhanced data transmission speeds and reliability.
The popularity of 10 Mbps LANs and their wide deployment make them a suitable springboard for faster networking technologies. Some of the key features of fast Ethernet include:
It is based on the CSMA/CD protocol that defines the traditional Ethernet access technology. However, this standard decreases the time for which each bit is transmitted by a factor of ten. This raises the packet speed from 10 Mbps to 100 Mbps while requiring minimal changes to the network system. The challenging factor for implementing this technology is the collision detection function: while the bandwidth increases tenfold, the collision window shrinks to one tenth.
Data is transmitted between Ethernet and fast Ethernet without protocol conversion, because fast Ethernet retains the old error control functions, the frame format, and the frame length.
These standards can use twisted pair and fiber optic as media.
Fast Ethernet can be categorized into 100Base TX, 100Base T4, and 100Base FX.
Other technologies that have competed with Ethernet can also be mentioned here. The most common of these is the token ring.
Token Ring
Token ring was developed by IBM in 1984 to cover its complete range of computers. The objective of developing token ring was to allow the use of twisted pair cable to connect a computer to a LAN through a wall socket. The wiring structure for this scheme is centralized. The features of the token ring scheme include:
Star ring wired network topology.
The ring is logically implemented on a central hub
Token passing is utilized as the access method
Can use shielded and unshielded twisted pair as well as fiber optic cabling
Have transfer rates that range from 4 to 16 Mbps. The 16Mbps token ring reduces delay by placing the token back on the ring immediately after the data frame has been transmitted. Token ring switches, which support full duplex, may support speeds of up to 32Mbps by simultaneously transmitting and receiving.
Baseband transmission, with maximum cable segment lengths placing computers within a range of between 45 and 200 meters.
A frame based technology with a maximum frame size of approximately 5,000 bytes.
Fiber distributed data interface (FDDI)
The fiber distributed data interface (FDDI) is a 100 Mbps network that uses token passing and fiber optic media. It was released in 1986 and has been used as a metropolitan area network (MAN), campus area network (CAN), and local area network (LAN) technology. It provides a high-speed backbone and is either a physical star or a ring, where the logical layout represents a ring. Its media access control scheme is token passing. A copper distributed data interface (CDDI) is often used as a migration path to FDDI; such a system can use existing twisted pair cabling and functions in the same manner as FDDI. The FDDI specification is similar to 802.5 (token ring), but it supports higher bandwidth and a maximum segment distance of 100 kilometers. FDDI-2 provides sound and video handling and can use dual counter-rotating rings to protect against media failure. In case a failure occurs in such a setup, the nodes known as dual attached stations (DASs) on either side of the break re-establish the ring by using the backup ring.
Frame relay
This is a high-speed transmission scheme that uses a wide area network (WAN) protocol. Frame relay is implemented at the physical and data link layers with reference to the open systems interconnection (OSI) model. It can suitably work on integrated services digital network (ISDN) interfaces. Moreover, it is a packet switching technique that is still in use over a variety of other network interfaces (Stallings, 2006). In line with this setup, some users are permitted to use the available bandwidth during idle periods. As a packet switching technology, frame relay makes use of two techniques: the variable-length packet technique, which supports a more efficient and elastic data transfer process, and the statistical multiplexing technique, which controls network access. This allows for more flexible and efficient use of the available bandwidth. Today most local area networks (LANs) support the packet switching technique.
Asynchronous transfer mode (ATM)
Asynchronous transfer mode (ATM), unlike frame relay, is a cell relay technique. The ATM scheme uses tiny packets of fixed size, called cells, to transmit data, video, or voice applications. The networks rely on an already established link between a transmitter and a receiver and achieve very high data transmission speeds, typically between 155 Mbps and 622 Mbps. An ATM network combines cell switching and multiplexing technologies in order to capture the benefits of circuit switching, namely guaranteed capacity and constant transmission delay, together with the flexibility and efficiency that packet switching offers for irregular traffic. ATM networks can also implement scalable bandwidth, and this makes them more efficient than time division multiplexing, which is an example of a synchronous technology. ATM can therefore be used with varied media such as coaxial, twisted pair, and fiber optic cable intended for other communication systems.
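Since ATM carries all traffic in fixed 53-byte cells (a 5-byte header plus a 48-byte payload), the segmentation step can be sketched as below. This is a minimal illustration only: the header is reduced to a made-up VPI/VCI layout rather than the real GFC/VPI/VCI/PTI/CLP/HEC structure, and padding of the final cell is simplified.

```python
# Minimal sketch of ATM-style segmentation: split a byte stream into fixed
# 48-byte payloads and prepend a simplified 5-byte header so that every cell
# is exactly 53 bytes long.

CELL_PAYLOAD = 48
HEADER_LEN = 5

def segment_into_cells(data: bytes, vpi: int, vci: int) -> list[bytes]:
    cells = []
    for offset in range(0, len(data), CELL_PAYLOAD):
        # Pad the last payload so all cells keep the fixed size.
        payload = data[offset:offset + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        # Simplified header: 1-byte VPI, 2-byte VCI, 2 bytes of padding.
        header = vpi.to_bytes(1, "big") + vci.to_bytes(2, "big") + b"\x00\x00"
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"example application data" * 5, vpi=1, vci=42)
print(len(cells), "cells,", len(cells[0]), "bytes each")  # 3 cells, 53 bytes each
```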
Integrated services digital network (ISDN)
This is a service offered by telephone carriers to transmit digital data. In ISDN, digital signals are transmitted over the existing telephone network. The signals include text, voice, music, data, graphics and video, which are digitized and then transmitted over existing telephone wires (Stallings, 2006). This is currently used to implement telecommuting, high-speed file transfer, and video conferencing among other applications.
Synchronous optical network (SONET)
Synchronous optical network (SONET) is a wide area network oriented technology implemented using fiber optics as the key medium. The network transmits voice, data, and video at high speeds. The synchronous transport signal (STS) and the synchronous digital hierarchy (SDH) are the American and European equivalent versions of this technology, respectively.
Cable modem
This technology is implemented using the cable modem, which acts like a bridge and is able to support bi-directional data communication. It uses radio frequency signals carried over hybrid fiber-coaxial (HFC) and RFoG infrastructures. The cable modem is suitable for implementing broadband internet access because it supports high bandwidth infrastructures. The cable modem also comes in handy with the advent of protocols such as voice over internet protocol (VoIP), because it can be implemented as a telephone within such a setup.
100VG-AnyLAN
This technology combines Ethernet and token ring. It is also known as 100BaseVG and has the following specifications:
100Mbps bandwidth
Can be used to implement a cascaded star topology using Category 3, 4, and 5 twisted pair and fiber optic cable
The demand priority access method supports two priority levels: low and high
Can be used to implement an option to filter individually addressed frames at the hub thereby enhancing privacy
Wireless Local Area Networks (IEEE 802.11)
A WLAN uses wireless transmission media, with 802.11 serving as the standard for implementing wireless local area networks (LANs). WLANs are applied in LAN extensions, building interconnections, nomadic access, and on demand networks. They are also suitable for connecting devices in large open areas. In most cases, a WLAN does not work in isolation; it is linked to a wired LAN at some point, hence becoming a LAN extension. The typical WLAN setup includes a control module, which serves as the interface to the wireless LAN and which can be either a bridge or a router linking the wireless LAN to a backbone (Mittag, 2007). The control module uses polling or token passing to regulate access to the network. User modules, on the other hand, could be hubs interconnecting a wired LAN, workstations, or a server. When several wireless devices are grouped within the range of a single control module, the arrangement is called a single-cell wireless LAN. For the cross-building interconnect, a point to point wireless link can be implemented between buildings; typically, bridges or routers are used to connect devices in this case. The nomadic access approach involves a LAN hub and a number of mobile data terminal equipment such as laptops; both devices must be within operating range for transmission to succeed. An ad hoc network is a WLAN that is peer-to-peer in nature and is established to meet an immediate need. WLAN requirements include the following:
A medium access scheme that optimizes the use of the available medium.
A substantial number of nodes
A connection to a wired LAN backbone
A restricted coverage area with a diameter ranging between 100 and 300 meters.
Available battery power source to power the mobile nodes.
Well defined transmission robustness and security to deter eavesdropping
Dynamic configuration to manage MAC addressing and network management
Roaming capabilities to enable mobile user modules to move from one cell to another.
WLAN is generally categorized based on the transmission technologies used. The three main categories are:
Infrared LANs, whose expanse is limited to a single room
Spread spectrum LANs, which generally require no licensing to operate
Narrowband microwave LANs, which operate at frequencies that may require licensing.
Table 2 below summarizes the wireless technologies available today.
Table 2: Wireless technologies.
Data rate (Mbps): diffused infrared, 1 to 4; directed beam infrared, 1 to 10; frequency hopping, 1 to 3; direct sequence, 2 to 50; narrowband microwave, 10 to 20.
Mobility: diffused infrared, stationary/mobile; directed beam infrared, stationary with line of sight (LOS); frequency hopping, mobile; direct sequence, stationary/mobile.
Range (meters): diffused infrared, 15 to 60; directed beam infrared, 25; frequency hopping, 30 to 100; direct sequence, 30 to 250; narrowband microwave, 10 to 40.
Detectability: infrared, negligible; spread spectrum, little; narrowband microwave, some.
Wavelength/frequency: infrared, 800 to 900 nm; spread spectrum, 902 to 928 MHz, 2.4 to 2.4835 GHz, and 5.725 to 5.85 GHz; narrowband microwave, 902 to 928 MHz, 5.2 to 5.775 GHz, and 18.825 to 19.205 GHz.
Modulation technique: infrared, ASK; frequency hopping, FSK; direct sequence, QPSK; narrowband microwave, FSK/QPSK.
Radiated power: spread spectrum, less than 1 W; narrowband microwave, 25 mW.
Access method: diffused infrared, CSMA; directed beam infrared, CSMA or token ring; spread spectrum, CSMA; narrowband microwave, ALOHA or CSMA.
License required: infrared, no; spread spectrum, no; narrowband microwave, yes.
Spread spectrum is becoming the most commonly used and dependable form of encoding for wireless communication; its use improves reception while reducing jamming and interception incidents. The basic idea of spread spectrum is to modulate a signal in order to widen the bandwidth over which it is transmitted. Three approaches are worth mentioning.
Frequency hopping spread spectrum: The signal is broadcast over various radio frequencies, hopping between these frequencies at defined intervals of time.
Direct sequence spread spectrum: Every signal bit is translated into numerous bits for transmission by use of a spreading code (a toy sketch of this step follows below).
Code division multiple access: This is where several users make use of the same bandwidth with limited interference.
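To make the direct sequence idea concrete, the toy sketch below spreads each data bit over an eight-chip code by XORing and then de-spreads with a majority vote; the chip sequence is an arbitrary illustration rather than a standardized Barker or Walsh code.

```python
# Toy direct sequence spread spectrum: each data bit is replaced by a run of
# chips, formed by XORing the bit with a fixed spreading code. The receiver
# de-spreads by XORing with the same code and taking a majority vote.

SPREADING_CODE = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-chip code

def spread(bits):
    return [bit ^ chip for bit in bits for chip in SPREADING_CODE]

def despread(chips):
    code_len = len(SPREADING_CODE)
    bits = []
    for i in range(0, len(chips), code_len):
        block = chips[i:i + code_len]
        recovered = [c ^ chip for c, chip in zip(block, SPREADING_CODE)]
        bits.append(1 if sum(recovered) > code_len // 2 else 0)  # majority vote
    return bits

data = [1, 0, 1, 1]
assert despread(spread(data)) == data  # each bit now occupies 8 chip times
```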
802.11 Architecture
A basic service set (BSS) constitutes the basic unit within a WLAN. The BSS involves various stations which run the same MAC protocol and share the same transmission medium. A BSS can be standalone or interconnected to a distribution system (DS) via an access point (AP). The access point acts as a bridge, and the BSS is what is commonly known as a cell.
802.11 Architecture terminologies:
Access point (AP): This is a station or node on a WLAN that offers access to the distribution system.
Basic service set (BSS): These are a number of stations under a single operational command.
Distribution system (DS): This is the interconnection that exists between BSS and integrated LANs.
Extended service set (ESS): Consists of integrated LANs and BSSs that appear singularly at the logical link control (LLC) layer.
MAC protocol data unit (MPDU): This is a data unit transmitted between two devices.
MAC service data unit (MSDU): The unit of information shared between two users.
802.11 Services
A total of nine services are provided by the WLAN. These services are implemented at an access point or another special purpose device. These services are summarized in the table below.
Table 3: Wireless LAN services.
Privacy: supports access and security; provided by the station.
Integration: supports MSDU delivery; provided by the distribution system.
Authentication: supports access and security; provided by the station.
Distribution: supports MSDU delivery; provided by the distribution system.
MSDU delivery: supports MSDU delivery; provided by the station.
Disassociation: supports MSDU delivery; provided by the distribution system.
De-authentication: supports access and security; provided by the station.
Re-association: supports MSDU delivery; provided by the distribution system.
Association: supports MSDU delivery; provided by the distribution system.
Association: This service defines the connection between a station and the access point device.
Re-association: This is the transfer of an association between two access points.
Disassociation: A notification from a station or an access point that an existing association is terminated.
Authentication: Establishes the identity of stations to one another.
De-authentication: The termination of an authentication service.
Privacy: Established to stop message contents from being read by an unauthorized receiver.
WLAN Switches
These devices are used to implement a link to the access points through a wired connection. The switches act like gateways to the wired network (Halsal, 1989, p.21). Initially, in WLAN deployments, all the access points were autonomous; however, a centralized architecture has gained popularity, providing the administrator with a structured method of network management. In a WLAN, a controller carries out management, configuration, and control of the network. A more recent innovation in WLANs is the fit access point (FitAP). This setup supports encryption while establishing the desired exchange, backed by newer on-market chipsets that support WPA2. FitAPs also provide dynamic host configuration protocol (DHCP) relay, and other functions such as VLAN tagging, which relies on service set identifiers (SSIDs), are implemented in the FitAPs as well.
Network Address Translation (NAT)
This process involves the dynamic modification of IP packet headers, carried out within a routing device. In one-to-one network address translation, the IP address, the header checksum, and any other checksums that include the IP address of the packet need to be changed (Holliday, 2003, p.1). Alternatively, one-to-many NAT alters TCP/IP port information in outgoing packets while maintaining a translation table; returned packets can then be correctly translated back using the information in the table. One-to-many NAT is also referred to as network address and port translation (NAPT). The NAT process usually occurs in the router. A number of ways are available for port translation; the full cone NAT is the most common of these and supports a one to one transmission mode.
In a full cone setup, the internal address and port are mapped to an external address and port, and any external host can send packets to the internal host through that external address and port. For the restricted cone translation, once the internal and external addresses and ports have been mapped, an external host can send packets to the internal address only if the internal address had initially communicated with the external address. For the port restricted NAT, once the internal and external addresses have been mapped, an external host can transmit packets to the internal address by sending them to the external port address only if the internal port address had beforehand sent a packet via that external host port. (Stallings, 2006, p.89)
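The port translation bookkeeping described above can be sketched with a simple table keyed on the internal address and port; the public address, the port allocation policy, and the class name below are illustrative assumptions rather than any particular router's implementation.

```python
# Minimal sketch of one-to-many NAT (NAPT): outgoing packets have their private
# source address and port rewritten to the router's public address and a freshly
# allocated port; replies are mapped back using the translation table.

class Napt:
    def __init__(self, public_ip: str, first_port: int = 40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out_table = {}   # (private_ip, private_port) -> public_port
        self.in_table = {}    # public_port -> (private_ip, private_port)

    def translate_outgoing(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.out_table:
            self.out_table[key] = self.next_port
            self.in_table[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out_table[key]

    def translate_incoming(self, public_port: int):
        # Returns the private endpoint, or None if no mapping exists.
        return self.in_table.get(public_port)

nat = Napt("203.0.113.7")
print(nat.translate_outgoing("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(nat.translate_incoming(40000))                  # ('192.168.1.10', 51515)
```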
Public and Private Addressing
Private networks that do not have a link to the internet can use any host addresses. Public addressing does not permit two network nodes to have the same IP address. Private addresses exist to tackle the dwindling pool of public IP addresses, and they can be used for nodes that are directly interconnected to each other within a private network.
Domain Name Service (DNS)
The TCP/IP protocol uses the binary version of the IP address for locating hosts on a network. The dotted decimal notation is used for configuration purposes but it is not particularly intuitive for humans to remember. This led to a unique friendly name being assigned to each host on a TCP/IP network. This consists of two types of names:
Host name: The administrator assigns an alias to a computer. Originally, a local file was held on each host to provide a lookup table to match host names with corresponding IP addresses.
Fully qualified domain names (FQDNs) are used to provide a unique identity for the host to avoid duplicate host names.
Fully qualified domain names must adhere to the following rules:
The host name must be unique within the domain.
The total length of the fully qualified domain name must be 255 characters or less with each node (part of the name defined by a period) having not more than 64 characters (Holliday, 2003, p.1).
FQDNs support alphanumeric and hyphen characters only.
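A minimal check of the syntactic rules listed above (total length, per-node length, and the allowed character set; uniqueness within the domain cannot be verified locally) might look like this in Python:

```python
# Validate an FQDN against the rules stated above: total length of 255
# characters or less, each dot-separated node no longer than 64 characters,
# and only alphanumeric characters and hyphens allowed.

import re

NODE_PATTERN = re.compile(r"^[A-Za-z0-9-]+$")

def is_valid_fqdn(name: str) -> bool:
    if len(name) > 255:
        return False
    nodes = name.rstrip(".").split(".")
    return all(len(node) <= 64 and NODE_PATTERN.match(node) for node in nodes)

print(is_valid_fqdn("www.example.com"))       # True
print(is_valid_fqdn("bad_host.example.com"))  # False (underscore not allowed)
```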
FQDNs and their corresponding IP addresses are held on a domain name service (DNS) server, although a local host file can be used. Each domain must provide an authoritative DNS server to hold information relating to that domain. Host names and FQDNs are used with PING and other TCP/IP utilities instead of IP addresses. To make use of these friendly names, there has to be a system for resolving a host name to its IP address and for ensuring that names are unique. Prior to domain name services, a host file called HOSTS was used. Resolving host names to IP addresses involved the InterNIC, the central authority maintaining a text file of host name to IP address mappings. Whenever a site needed to add a new internet-based host, the site administrator would dispatch an email to InterNIC giving the host name to IP address mapping, a process that was carried out manually. Downloading the latest HOSTS file and installing it on each host was the task of the network administrator. Each host then performed name resolution by looking up a host name within its copy of the HOSTS file and locating the matching address. Maintaining the completeness and accuracy of the file became too difficult as the number of hosts increased, leading to the development of DNS.
The DNS has a hierarchical and distributed structure for resolving names to IP addresses. DNS uses a distributed database system which contains information related to domains and the hosts found in those domains. Information is distributed amongst name servers that each hold a portion of the database. Maintenance of the system is delegated, and the loss of one DNS server does not prevent name resolution from being performed because of the distributed nature of the system. The DNS system has its own network of servers that are consulted in turn until the correct resolution is returned for every request. The hierarchical structure of the domain name system (DNS) is such that at the top there are thirteen root server identities (A to M).
Immediately underneath the root lie the top-level domains, labeled by the type of organization with extensions such as com or gov. In some countries, the top-level domains are organized using the ISO country codes, which include uk for the United Kingdom, nl for the Netherlands, and de for Germany. Beneath this level are the second-level domains registered by companies, governments, and other organizations. By tracing records from the root and traversing the hierarchy, one can find information about a particular domain.
The root servers have complete information about the top-level domain servers. In turn, these servers have information relating to the servers for second-level domains. Records within the DNS indicate where the missing information can be found. FQDNs reflect the hierarchy from the most specific (the host) to the least specific (a top-level domain). When the user types in a uniform resource locator, the browser asks the DNS client software to determine the corresponding IP address. The local DNS client software then requests the DNS server to resolve the submitted address to an IP address. If necessary, the request is passed up towards the root domain until a server provides the IP address requested. This address is then returned, stored in a local cache, and retransmitted back to the DNS client software, which forwards it to the browser; the browser establishes the connection and opens the corresponding web page. The whole process takes only a few seconds.
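In practice, application code hands this entire lookup chain to the operating system's resolver; a minimal Python illustration is shown below, with www.example.com used purely as a sample host name.

```python
# Resolve a host name to its IP addresses using the system resolver, which
# consults the DNS client software, any local cache, and the DNS hierarchy
# on the application's behalf.

import socket

def resolve(hostname: str) -> list[str]:
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

print(resolve("www.example.com"))  # prints the resolved IPv4/IPv6 addresses
```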
Network security management
Computer system security remains very important in order to guard the integrity of the information stored. The file system provides the mechanism needed for storing and accessing data and programs within the computer system. Resident file information is vital and needs to be monitored in order to detect unauthorized and unexpected changes, thereby providing protection against intrusion. One of the most effective ways to detect host-based intrusions is by noting changes to the file system, although in any network platform the process of monitoring such changes becomes quite a challenge for the administrator. Online threats remain a reality for today's businesses, especially those relying on the internet. Active and passive attack incidents are escalating every day, and network administrators face the daunting task of detecting, controlling, or minimizing the effects of such attacks. Common methods of securing a facility include access control and auditing procedures, and perimeter systems that are sensitive to intrusions can be set up to boost physical security. An intrusion detection system (IDS) is one of the tools in the organization's network security armory, which also includes a firewall and an antivirus; the IDS complements a firewall to ensure desirable network security for any organization.
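A simple way to notice unauthorized and unexpected changes to resident files, as described above, is to keep a baseline of cryptographic hashes and compare the file system against it later; the paths in the usage comment are hypothetical examples.

```python
# Minimal file-integrity check: record SHA-256 hashes of monitored files,
# then report any file whose current hash differs from the baseline or that
# has been removed.

import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths):
    return {str(p): hash_file(Path(p)) for p in paths}

def detect_changes(baseline):
    changed = []
    for path, old_hash in baseline.items():
        p = Path(path)
        if not p.exists() or hash_file(p) != old_hash:
            changed.append(path)
    return changed

# Example usage with hypothetical paths:
# baseline = build_baseline(["/etc/passwd", "/etc/hosts"])
# ... later ...
# print(detect_changes(baseline))
```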
In today's enterprise networks and the internet, there is a gap between the theoretical understanding of information security best practices and the reality of implementation. Much of this is due to the inability of network management staff to communicate this approach in a way that allows non-technical influencers and decision makers to grant executive sponsorship. The layered, defense-in-depth approach of policies, procedures, and tools is well accepted as best practice for information security. The pressure for return on security investment (ROSI) has further exacerbated the difficulty of implementing the technology. Since 9/11, security has been elevated to an area of critical liability for many business continuity providers. Based on this, service and business continuity providers such as Cisco have questioned how to protect the integrity of mission critical operations without limiting the flow of business, a question corporate information security officers have been asking for a long time.
Intrusion detection systems (IDS)
An intrusion detection system (IDS) constitutes application software and associated hardware capable of monitoring network activities for the sake of detecting malicious activity or any other violations relating to policy and procedures. The system actively monitors the network and reports back to the administrator. Closely related is intrusion prevention, which is the process of attempting to stop likely incidents once an intrusion has been detected. Besides establishing a record of intrusion activities and generating appropriate notifications to the administrator, these systems can also rebuff threats, causing the intended attacks to fail. IDS broadly covers two main categories.
The network intrusion detection system (NIDS), which consists of an infrastructure of hardware as well as software, can identify intrusions through the examination of host activities as well as network traffic. This is made possible through established connections to a network hub or switch, or through a configuration that enables network tapping or the establishment of port mirrors (Stallings, 2006, p.93). Often, the administrator will establish network borders using sensors or set up choke points, which are employed to capture traffic on the network. Snort is an example of an NIDS used to capture and analyze individual packet content in order to identify malicious traffic.
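Snort is configured through its own rule language, but the underlying idea of matching packet payloads against known malicious byte patterns can be sketched in a few lines of Python; the signatures and the sample payload below are made-up placeholders, not real Snort rules.

```python
# Toy signature-based inspection: flag any payload containing a byte pattern
# from a list of known-bad signatures. Real NIDS engines add protocol decoding,
# stateful reassembly, and far more efficient matching.

SIGNATURES = {
    b"/etc/passwd": "possible path traversal attempt",
    b"' OR 1=1": "possible SQL injection probe",
}

def inspect_payload(payload: bytes) -> list[str]:
    return [alert for pattern, alert in SIGNATURES.items() if pattern in payload]

print(inspect_payload(b"GET /index.php?id=1' OR 1=1 -- HTTP/1.1"))
```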
The host based intrusion detection system (HIDS) defines the other intrusion detection system type involving an agent. Here the agent will analyze system calls as well as application logs.
References
Tanenbaum, A.S., & Wetherall, D.J. (2011). Computer networks (5th ed.). Upper Saddle River, New Jersey: Prentice Hall.
Blanc, R.P. (1974). Review of computer networking technology. National Bureau of Standards, 1-136.
Day, J. (2008). Patterns in network architecture. Upper Saddle River, New Jersey: Prentice Hall.
Halsal, F. (1989). Data communications, computer networks and OSI (2nd ed.). Boston, MA: Addison Wesley.
Holliday, M.A. (2003). Animation of computer networking concepts. Journal on Educational Resources in Computing, 3(2), 1.
Mittag, L. (2007). Fundamentals of 802.11 protocols. Web.
Stallings, W. (2006). Data and computer communications (8th ed.). Upper Saddle River, New Jersey: Prentice Hall.
The Internet and modern technologies have changed the nature of learning. Thousands of teachers throughout the world consider computer-aided learning as a kind of dream that will bring only positive outcomes. The implementation of the computer-assisted language learning (CALL) in EFL classrooms can be rather a challenge.
Many researchers have conducted studies to investigate the impact, problems, and efficiency of CALL in EFL classes. According to Afrin (2014), the computer is becoming an integral part of every classroom nowadays. The Internet and CALL are not the only modern approaches that alter traditional educational procedures.
CALL modifies the role of both the teacher and student in the process of language learning drastically. In fact, CALL presupposes the language learning and teaching with the help of additional resources such as the computer, the Internet, and other computer-based sources. The author dwells on several significant advantages of CALL.
These benefits include motivation, adaptation to learning, authenticity, and the development of critical thinking. However, it can be rather a challenge to integrate CALL into classrooms. As Alkahtani (2011) states, there are three levels of CALL integration. Integration at the institutional level includes the purchasing of equipment, the formation of technical support, and providing access to it.
The second stage is integration at the department level: departments should formulate their policies to maximize efficiency. The third level involves integration by teachers, who should receive the necessary knowledge and skills to implement CALL in EFL classrooms.
The implementation of CALL in EFL settings should enhance learners' outcomes and facilitate meeting the needs of both pupils and teachers. Nevertheless, the problem of teachers' attitudes exists. Naeini (2012) writes that teaching staff have always been encouraged to integrate computer and the Internet into their classrooms, but very few have practically responded positively (p. 9).
In addition to teachers' attitudes, several other barriers impede the successful implementation of CALL in EFL classrooms. These barriers include the lack of financial support, the limited availability of hardware and software, and insufficient theoretical and technical knowledge (Lee, n.d.). Jager et al. (2014) emphasize the importance of combining computer-based language learning with face-to-face learning.
They introduce the idea of blended EFL e-learning. In most cases, teachers assume that they understand pupils' needs, while this is not always so. The implementation of CALL should fill the gap between teachers' assumptions and students' needs. Finally, it is important to evaluate some practical results of the implementation of CALL in EFL classrooms.
Acquiring a sufficient vocabulary is always a problem for students who learn English as a foreign language. The results of several studies and experiments indicate that the use of CALL enhances vocabulary acquisition, as students process information efficiently via computer-aided resources.
This results in longer-lasting retention of words (Ghabanchi & Anbarestani 2008). Listening skills are also of extreme significance for foreign language learning, and most EFL students have difficulty recognizing the foreign language in everyday conversations between native speakers. Phuong (n.d.) provides information concerning the implementation of CALL to promote listening skills among EFL learners in Vietnamese universities.
The author draws attention to the fact that teachers who have studied the necessary information concerning CALL and have changed their attitudes are more likely to teach pupils efficiently. Students improved their listening skills with the help of CALL and their teachers' assistance.
Reference List
Afrin, N 2014, Integrating Computer Assisted Instruction in the EFL Classroom in Bangladesh, Journal of Humanities and Social Sciences, vol. 19, no. 11, pp. 69-75.
Alkahtani, S 2011, EFL female faculty members' beliefs about CALL use and integration in EFL instruction: The case of Saudi higher education, Journal of King Saud University, vol. 23, no. 2, pp. 87-98.
Ghabanchi, Z & Anbarestani, M 2008, The Effects of Call Program On Expanding Lexical Knowledge Of EFL Iranian Intermediate Learners, The Reading Matrix, vol. 8, no. 2, pp. 86-95.
Jager, S, Bradley, L, Meima, E & Thouesny, S 2014, CALL Design: Principles and Practice, Research Publishing, Dublin.
Naeini, M 2012, Meeting EFL Instructors' Needs through Developing Computer Assisted Language Learning (CALL), International Journal of Language Teaching and Research, vol. 1, no. 1, pp. 9-12.
Phuong, L n.d., Adopting CALL to Promote Listening Skills for EFL Learners in Vietnamese Universities. Web.
A computer system is made of different electronic components and parts that make it look complex. These fundamental components constitute a computer system and make the computer run effectively. The first component is the motherboard, which forms a platform on which the internal electronic components are attached (Miller 67).
The motherboard consolidates all the basic components of a computer system. All the programs and applications in a computer system are run by a component known as the central processing unit.
It is impossible for a computer to execute any tasks without the central processing unit (Miller 67). A computer can only boot to the operating system courtesy of the central processing unit.
A computer system cannot be complete without storage media. The Random Access Memory, commonly referred to as RAM, is another fundamental component in a computer system; it is responsible for storing files and information temporarily while the computer is running.
The size of the Random Access Memory determines how fast the computer responds to commands. A computer with a low Random Access Memory will run slowly even if it has a powerful central processing unit.
The other storage component in a computer system is the hard disk drive. Compared to the Random Access Memory, the hard disk drive stores data for a long time. The hard disk is able to retain the stored data and information with or without power.
The data in the Random Access Memory disappears the moment the computer goes off. Because of its reliability, the hard disk is used to store very important files like system files and other program files.
The other components include the power supply unit, which is responsible for powering the computer system, and the video card, which is essential for displaying images. The graphics card makes the images and pictures appearing on the computer monitor look clear and detailed (Pearsons 53).
There are various ways in which the computer interacts with users on a daily basis. The user interacts with the computer system through the input and output devices. A combination of software and hardware form what is normally referred to as the user interface.
The software enables the output to be displayed on the computer monitor (Pearsons 53). The computer system receives input from users through hardware devices like the mouse, keyboard, joystick, game controller, touch screen, and other peripherals. These input devices are referred to as peripherals because they are externally attached to the computer system.
After processing the input in relation to the specified command, the user receives output from the computer system via output devices. Some of the output hardware includes the monitor and the printer.
Human-computer interaction requires the user to be well equipped with the relevant computing skills and knowledge to be able to navigate the computer system effectively.
The user is required to have some basic knowledge in computer operating systems, computer graphics, programming, database management and computer hardware technology. The computer is very useful in the sense that it helps human beings to process and store their data in a much simpler way.
Computer applications such as the internet and computer games have enhanced human-computer interaction by facilitating information retrieval, electronic commerce, and entertainment.
Works Cited
Miller, Michael. Absolute Beginner's Guide to Computer Basics. New York: Que Publishing, 2007. Print.
Pearsons, June & Oja Dan. New Perspectives on Computer Concepts 2012: Brief. New York: Cengage Learning, 2011. Print.
Computer Science, abbreviated as CS, is the study of the fundamentals of information and computation procedures and of the practical methods for their implementation and application in computer systems. Computer scientists formulate algorithmic techniques that generate, define, and transform information and devise appropriate concepts to represent complex systems. Computer science has several branches and sub-branches. Some, such as computational complexity theory, focus on computational techniques, while others, such as computer graphics, focus on particular aspects of computational methods. Yet others, such as programming languages and human-computer interaction, focus on the application of computational methods (University of Cambridge, para. 3).
Many people tend to confuse computer science with other careers or courses that deal with computers, such as IT, or assume that it simply extends their everyday interaction with computers, which usually involves activities such as gaming, internet access, and document processing. CS offers a deeper comprehension of the inner workings of the programs that enable us to use various computer applications, and it applies that knowledge to designing new applications or improving current ones.
Preparations Required of a Person Intending to Major in Computer Science
Before majoring in Computer Science, a student needs to prepare adequately in order to have a background in the topics and concepts covered. The first preparation expected of a student is to undertake general reading in order to have a broad background and loose understanding of the topics and issues covered in this major; this will also increase interest in gaining a deeper understanding of these issues. Since the study of Computer Science depends considerably on mathematical techniques, a person should develop a mathematical background, both technical and recreational, through activities such as games and puzzles. Students are also advised to have basic knowledge of coding and cryptography, fields that have a strong link to this major. For general reading, students can refer to various journals such as New Scientist, which has articles relating to advances in computer science (Williams College, para. 1).
Apart from reading widely, students are expected to have good study skills, as they will have to manage their own studies, schedule their time, arrange classes, and still find time to attend to non-academic activities such as recreation. Computing equipment is an integral element in understanding computational techniques; therefore, students are advised to have, or plan to purchase, a computer, as this will assist in completing class projects and in undertaking personal studies and projects (My Majors, para. 4). Students will also be expected to have a fair typing speed, as this will improve their working speed; it can be achieved by using the various training programs available at low prices.
Courses Required of a Computer Science Major
The compulsory and elective courses required for a Computer Science major differ significantly among universities; however, certain courses are common. The courses include computer-based and non-computer courses, and students must be careful not to fail non-CS courses such as business or language development courses just because they find them boring. These courses count towards the final grade and, besides, they may become useful in the future.
The most common courses expected of a CS major are listed below (Williams College, para. 5):
Algorithms
Artificial Intelligence
Calculus
Compiler Design
Computer Architecture
Computer Graphics
Computer Organization
Computer Science Theory
Computer Theory
Data Logic
Data Management
Data Structures and Advanced Programming
Design Physics
Device Utilization
Discrete Mathematics
Distributed Systems
Electronic Design
Files and Databases
Information Management
Introduction to Calculus
Introduction to Computer Science
Logic Design
Machine Language
Network Fundamentals
Operating Systems
Programming Languages
Statistics
Theory of Computation
Some of these courses are elective while others are compulsory; again, this varies from institution to institution. In addition to the computer courses, students will be expected to take non-CS courses (Spolsky, para. 6). These courses offer the student a well-rounded perspective and can be of importance in both work and non-work environments; they include Business Development Skills, Entrepreneurship Skills, and Communication Skills. Other courses assist the students with the various computational techniques and include the various levels of Calculus, Discrete Mathematics, Algorithms, Theory of Computation, and Computer Graphics.
List of Potential Jobs in Computer Science
A Computer Science major can land a vast range of jobs depending on the courses taken during the study. Some majors are more marketable than others; therefore, students must choose their courses wisely, preferably with the advice of human resource experts. However, students should not only look into being employed by various firms; rather, they can opt for other prospects such as self-employment or establishing consultancy firms. According to Spolsky (2005), a major in Computer Science can land the following jobs:
Application Developer
Business Analyst, IT
Computer / Network Support Technician
Computer and Software Sales
Computer Game Developer
Computer Graphics Design
Computer Hardware Technician
Computer Networking/IT Systems Engineer
Computer System Developer
Database Administrator
Information Technology Consultant
Information Technology Director
Information Technology Project Manager
Information Technology Specialist
Network Administrator, LAN / WAN
Network Engineer, IT
Network Manager
Programmer
Programmer Analyst
Senior Software Engineer
Software Architect
Software Developer
Software Development Manager
System Manager
Systems Administrator
Systems Analyst
Web Developer
Works Cited
My Majors. Computer Science Major. No Date. Web.
Spolsky, J. Advice for Computer Science College Students. 2005. Web.
University of Cambridge. Preparing to study Computer Science. 2010. Web.
Williams College. Computer Science: Major Requirements. Web.
Most colleges do not necessarily require their students to acquire computer systems, but in the digital age, all scholars are expected to have unhindered access to fully operational and up-to-date computer architecture. Consequently, you need to purchase your student a computer system that can at least complement the software and hardware standards found in most colleges. It is also important to consider that the student might have to carry the computer to college on a regular basis. These are my recommendations for your child's computer needs after considering various factors.
Hardware
There are various factors to consider while planning to purchase computer hardware for the student. First, computer hardware is evolving at a fast pace, and the machines that you purchase during the student's freshman year may not suffice for all four years of college (Mathews 16). However, the hardware that you decide to purchase should be compatible with both school and home network connections. Computer prices have been dropping rapidly over the last few years, so the constantly changing hardware standards should not be a big issue. Furthermore, some hardware components can be purchased or leased at the same time.
The first piece of hardware that your student requires is a computer. In your student's case, I would recommend a laptop computer that can easily support the student's computing needs. The laptop should preferably be a light model and should be accompanied by a laptop stand for home use. The laptop will cater to the student's mobility, whereby it can be used even for class trips, and it can easily connect to college wireless networks. The laptop should come with an operating system of Windows 7 or higher and a multi-core processor.
These specifications are suitable because most Microsoft Windows systems are supported in almost all colleges and they can also be used in most home environments. The processor choice is viable for both the educational and leisure needs of the student, such as video streaming and gaming. Additionally, the laptop's memory should be at least 4 GB and the hard drive should be at least 250 GB. If possible, the memory and hard drive of the computer should be scalable to allow for hardware upgrades if necessary.
Other important specifications of the laptop include highly functional video and sound cards that can support video lectures, high-definition presentations, and other extracurricular activities. The laptop should also come with a DVD drive that has write capabilities. If need be, the student can also acquire a USB mouse and keyboard to make his work easier. The laptop's wireless and LAN network capabilities should be up to the industry's standards. Backup capability is also a major consideration when purchasing a laptop, although a machine with the above specifications should be able to perform adequate backups.
The purchase of a laptop should be accompanied by that of a printer. The printer should at least be an easy-to-operate Laser printer. This hardware equipment will most likely be stationed at home where the student can print course materials and assignments among others. If the printer is part of an open network, it should be switched off to avoid unnecessary usage by outsiders.
For networking purposes, you should consider acquiring a short-range wireless router from a reputable service provider. Wireless routers are easy to configure and the whole family can use this infrastructure to access the network on other gadgets such as phones, tablets, I-pads, and gaming consoles. Wireless routers are also relatively cheaper compared to other network options.
Software Options
First, the student will require functional anti-virus software so as to protect his computer against attacks and malicious software. Most new laptops come with free anti-virus software, and some institutions provide it to students at no cost. Given that the family is familiar with Microsoft Windows, I recommend a Microsoft Office suite that includes Word, Excel, and PowerPoint. This software should serve the needs of an English and History major and, at the same time, any domestic activities such as inventory or emailing. For browsing needs, I would recommend the Firefox, Chrome, or Safari browsers. All these browsers support most email platforms, including institutional ones. After registering in college, the student will most likely get a discount on essential student-centric software programs.
Other Recommendations
There are other software and hardware items that the student should consider buying. First, the student should consider acquiring an Ethernet network cable for use in college environments; the cable gives the student access to the LAN, which is at times faster than wireless networks. It is also prudent to consider purchasing a surge protector that can protect most of the hardware against damage from power surges. The student might also require a backpack for ferrying the laptop to and from school. In addition, you should invest in extra security for locking up the computer equipment, thereby offering protection against theft and burglary. A USB stick or an external hard drive is also a viable purchase in this case, as is an extended warranty, especially for the laptop, because it will be subjected to various risks during the student's commute (Mathews 18).
Works Cited
Mathews, Brian. Flip the model: Strategies for creating and delivering value. The Journal of Academic Librarianship 40.1 (2014): 16-24. Print.
Appendix
Essentials (approximate total cost: $800)
Others
NB: All products can be found online at Betbuy.com.
In any given research, it is important to have a well-planned search strategy to aid in collecting all the required information on the topic being researched. In addition to a well-planned search strategy, it is vital that the researcher fully understands the topic of research. In this case, the research concerns formulation of a computer-based search strategy to collect information using evidence-based research methodology only.
Research Steps
The first step in this search will be to formulate a question that will guide the research. In evidence-based methodology, for the question to be comprehensive, it needs to cover the PICO aspects (Huang, 2003): the population, the intervention, the comparison, and the outcome. In this case, the question guiding my research is: Can additional choices of food and places to eat improve appetite and maintain weight in residents with dementia? The population in this context will be the dementia patients, while the intervention will be the provision of additional choices of foods and places to eat. This is in comparison with the current choices that the patients have. The outcome will be the effect that these changes have on their appetite and consequently their weight.
The next step will be to conduct a search on the internet for all the written materials related to this topic. There are various materials written about the feeding habits of dementia patients, including the types of food recommended for them (Fisher, 2011). In addition, there is research on the response of these patients to certain types of food (HelpGuide, 2011), and there are articles on the best food for dementia patients, which will help to guide this research (LiveStrong, 2011). All the collected material will be read through to see what has been written about the topic, which will assist in giving background information and setting a foundation for this research.
The other step will be to use a computer-based search strategy to collect data on this research topic. This will involve searching for all the homes that care for patients with dementia around my area. A thorough and careful search will be conducted, and all the details concerning these homes will be recorded, including the location of each home, the number of patients within it, and the types of diets given to the residents. In addition, the details of the caregivers in each home will be recorded so that they can easily be contacted if any information needs to be clarified. Weight records of the residents will also be retrieved; if this is not possible due to confidentiality policies, the researcher will make a personal request to the homes to get all the information that will assist in conducting this research.
When all the details concerning these homes have been collected, a comparison of the types of foods being given and the residents' records of weight and eating habits will be made. All the foods most liked by the residents will be identified and recorded. These records will be used as evidence of the effect of additional food choices on the residents' eating habits. For the research to be truly evidence-based, the conclusion must be drawn from the best evidence available (DynaMed, 2010).
In addition to the computer research mentioned above, an evidence-based research methodology that is experimental and scientific will be used. It will involve the use of control and experimental groups (Brown, 2007). Residents from a number of randomly selected care centers or homes will be involved. The researcher will suggest the addition of a number of foods, and the residents' responses will be studied and recorded. In addition, during the research, the residents will get chances to choose where they want to have their meals, and the researcher will collect data on how they respond.
All the results collected from this research will be recorded in the computer and a comparison of the results from various residents in various centers will be done to see how each resident responds to the provision of additional food variety and the choice of the area to have their meals from. Any weight improvement or deterioration will be recorded. Any significant similarities in response to certain foods and areas will be noted so as to be included in the findings and recommendations where they are needed.
During this research, the caregivers will be involved to assist in monitoring and providing information to the researcher concerning the observations made. This will be necessary since the researcher cannot collect sufficient data from all these centers all alone.
To communicate with all the caregivers involved and to collect sufficient data within the set period of time, communication via telephone and email will be used so as to liaise with all the caregivers and collect a lot of information while saving time (Brown, 2007). The caregivers will be sent tables with information on the various identified food types, and they will be required to write their observations against each food type and feeding area and to include a record of changes in the residents' weight.
Conclusion
When all the data has been collected, an analysis will be done to see how the residents respond to the changes of adding food and offering them a chance to choose where they want to eat (Fischer, 2009). In addition, each resident's favorite food and the types of food generally most liked by the residents will be identified. All the information collected during the evidence-based research study will be used to help in making vital medical decisions (The American Dental Hygienists Association, 2010). Therefore, after this analysis, a report will be prepared to record the findings of this research and make the necessary recommendations where needed.
References
Brown N., Fitzallen, N. (2007). Evidence-based Research in Practice. Web.
Fischer W., Etchegaray J. (2009). Understanding Evidence-Based Research Methods: Descriptive Statistics, The Health Environments Research and Design Journal. Web.
Fisher M. (2011). Reducing calorie and carbohydrate intake may affect Alzheimer's disease risk. Web.
HelpGuide (2011). Alzheimer's Behavior Management: Managing Common Symptoms and Problems. Web.
Huang, W. (2003) Formulating Clinical Questions During Community Preceptorships: A First Step in Utilizing Evidence-based Medicine, Baylor College of Medicine, Houston, Texas.