Most colleges do not necessarily require their students to own computers, but in the digital age all students are expected to have unhindered access to fully operational, up-to-date computing equipment. Consequently, you need to purchase your student a computer system that at least complements the software and hardware standards found in most colleges. It is also important to consider that the student might have to carry the computer to college on a regular basis. These are my recommendations for your child's computer needs after considering various factors.
Hardware
There are several factors to consider when planning to purchase computer hardware for the student. First, computer hardware is evolving at a fast pace, and the machines you purchase during the student's freshman year may not suffice for all four years of college (Mathews 16). Second, the hardware you decide to purchase should be compatible with both school and home network connections. Computer prices have been dropping rapidly over the last few years, so constantly changing hardware standards should not be a big issue. Furthermore, some hardware components can be purchased or leased at the same time.
The first piece of hardware that your student requires is a computer. In your student's case, I would recommend a laptop that can easily support the student's computing needs. The laptop should preferably be a light model and should be accompanied by a laptop stand for home use. The laptop caters to the student's mobility: it can be used even on class trips and can easily connect to college wireless networks. It should come with Windows 7 or a later operating system and a multi-core processor.
These specifications are suitable because Microsoft Windows systems are supported in almost all colleges and can also be used in most home environments. The processor choice is viable for both the educational and leisure needs of the student, such as video streaming and gaming. In addition, the laptop's memory should be at least 4 GB and the hard drive at least 250 GB. If possible, the memory and hard drive should be upgradeable to allow for hardware upgrades if necessary.
Other important specifications include capable video and sound cards that can support video lectures, high-definition presentations, and other extracurricular activities. The laptop should also come with a DVD drive with write capability. If need be, the student can also acquire a USB mouse and keyboard to make his work easier. The laptop's wireless and LAN networking should meet current industry standards. Backup capability is also a major consideration when purchasing a laptop, although a machine with the above specifications should be able to perform adequate backups.
The purchase of a laptop should be accompanied by that of a printer. The printer should at least be an easy-to-operate laser printer. This equipment will most likely be stationed at home, where the student can print course materials, assignments, and other documents. If the printer is part of an open network, it should be switched off when not in use to avoid unnecessary use by outsiders.
For networking purposes, you should consider acquiring a short-range wireless router from a reputable vendor. Wireless routers are easy to configure, and the whole family can use this infrastructure to access the network on other gadgets such as phones, tablets, iPads, and gaming consoles. Wireless routers are also relatively inexpensive compared with other networking options.
Software Options
First, the student will require functional anti-virus software to protect his computer against attacks and malicious software. Most new laptops come with free anti-virus software, and some institutions provide it to students at no cost. Given that the family is familiar with Microsoft Windows, I recommend the Microsoft Office suite, which includes Word, Excel, and PowerPoint. This software should serve the needs of an English and History major as well as domestic activities such as keeping an inventory or emailing. For browsing, I would recommend the Firefox, Chrome, or Safari browsers. All of these browsers support most email platforms, including institutional ones. After registering in college, the student will most likely get a discount on essential student-centric software programs.
Other Recommendations
There are other software and hardware items that the student should consider buying. First, the student should consider acquiring an Ethernet network cable for use in college environments. The cable gives the student access to the LAN, which is at times faster than wireless networks. It is also prudent to consider purchasing a surge protector that can protect most of the hardware against damage from power surges. The student might also require a backpack for ferrying the laptop to and from school. In addition, you should invest in extra security for locking up the computer equipment, thereby offering protection against theft and burglary. A USB stick or an external hard drive is also a viable purchase in this case. Finally, an extended warranty is advisable, especially for the laptop, because it will be subjected to various risks during the student's commute (Mathews 18).
Works Cited
Mathews, Brian. "Flip the Model: Strategies for Creating and Delivering Value." The Journal of Academic Librarianship 40.1 (2014): 16-24. Print.
Appendix
Essentials
Approximate Total Cost: $800
Others
NB: All products can be found online at Betbuy.com, where product reviews are also available.
Perceiving the constantly accelerating pace of technology adoption, one is inclined to believe that wireless technology will soon replace the wired world. However, despite apparent advances in wireless technology, many are skeptical about its future with respect to the complete replacement of wired communication. Many IT specialists believe that the major barrier to the spread of wireless networks will be the limited innovation and investment in the industry of wireless systems support software (Orr, 2001). Others believe that while organizations may be slow in responding to the growing need for software for wireless systems, they are not oblivious to it. Software development programs are being initiated by a number of companies to support wireless systems. Cutter Consortium conducted a survey in which 37% of IT professionals said that their companies were planning to develop wireless applications (Orr, 2001). Therefore, we can hope that in due time many of the issues regarding software support for wireless systems will be resolved. Developers are now preoccupied with how far this technology will go and how big the market for its software will be in the future.
The first example of wireless technology that comes to one's mind is the cell phone, and cell phone networks are perhaps among the most widely used network technologies in history. They make use of GSM and CDMA technologies in most countries. Mobile internet devices also include Personal Digital Assistants (PDAs). The Wireless Local Area Network (WLAN) is a relatively recent technology. In this paper, we will concentrate on the applications and operating software used by mobile networking devices and WLANs.
Wireless Systems and Software used
WLAN is a technology that allows people in a limited geographical area to be linked to each other, just in the way a normal Local Area Network, or LAN, works; the only difference is that a WLAN is wireless. WLANs can be used to interconnect an entire organization. We see examples of WLANs in shopping malls and hotels, where hotspots or Wi-Fi systems allow people in the area with appropriate devices and software to access internet services. As Goth (2006) notes, "In mid-August, Google launched Google Wi-Fi, a free wireless network for users in the city of Mountain View. In early September, a consortium including IBM, Cisco, Azulstar, and SeaKay won a contract for a 1,500-square-mile network intended to serve 42 entities and 2.4 million people in Silicon Valley, including every city in both San Mateo and Santa Clara counties."
Software for mobile devices is more difficult to design because it is developed and tested in a totally different environment, e.g., a Solaris or MS Windows machine, from the one it will eventually run on. Applications can be developed in several environments, for example, the J2ME Wireless Toolkit. Software for downloading applications onto cell phones includes the Motorola iDEN Update Software Application/Java Application Loader (Mahmoud and Lorain, 2002).
WLAN technology requires both operating software and applications. These applications must provide the system with security as well as efficiency. Thick and thin access point wireless solutions are provided by Cisco, which has recently acquired the thin access point technology of Airespace, "putting Cisco in the driver's seat for enterprise-grade wireless." Other networks are also in the works: "While not as big, but leveraging the benefits of thin access points are Aruba Wireless Networks and Trapeze Networks" (Gilliot).
Major issues in developing software
Developers of software for mobile internet devices face constraints from the limited memory and processing power available to the application. The display screens of cell phones are also very small, so very little information can be shown on the screen at one time. Developers who are used to working on large computer systems may find it hard to work within such limited device power. Beyond the device the software is being designed for, there are limitations of the wireless environment as well. Wireless networks are unreliable and expensive, and bandwidth is low. They tend to experience more network errors than wired networks.
"The very mobility of wireless devices increases the risk that a connection will be lost or degraded" (Mahmoud and Lorain, 2002).
There are further challenges posed by a wireless environment that developers have to face. Communication over a wireless system is prone to interference, creating transmission errors. "Wireless network protocols may be able to detect and correct some errors, but you need to come up with error-handling strategies that address all the kinds of transmission errors that are likely to occur" (Mahmoud and Lorain, 2002). Such interference and interception can lead not only to inaccurate message delivery but also to insecure connections. If data is very sensitive, security is a high priority, and applications must ensure a secure environment for wireless communication. The time required to deliver a message depends on the processing speeds of both devices involved, so a good application must minimize processing delays.
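As one illustrative error-handling strategy (a sketch only; the transmit callback is hypothetical, and real wireless stacks handle much of this at the protocol level), the snippet below retransmits a frame with exponential backoff whenever a checksum mismatch or timeout signals a transmission error:

    import time
    import zlib

    def send_with_retry(frame: bytes, transmit, max_attempts: int = 5) -> bool:
        # transmit(frame, checksum) is an assumed callback: it returns True on a
        # valid acknowledgement and False (or raises TimeoutError) when the frame
        # is lost or corrupted on the wireless link.
        checksum = zlib.crc32(frame)
        delay = 0.1
        for _ in range(max_attempts):
            try:
                if transmit(frame, checksum):
                    return True
            except TimeoutError:
                pass                  # treat a timeout like any other transmission error
            time.sleep(delay)         # back off before retrying on a lossy link
            delay *= 2                # exponential backoff suits bursty wireless errors
        return False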
Conclusion
Software for all wireless technology must have a good architecture, must ensure accuracy, security, and speed of delivery of the message, and must be user-friendly. For the development of wireless software engineering, it is crucial to enhance the interoperability of different software platforms and develop middleware.
Modern software engineering is composed of several separate activities such as requirements analysis, testing, and implementation. These activities are often performed in isolated stages without a dynamic flow of data between them. This segmented approach holds back progress in the development of sound wireless applications. That is why the manual engineering process must be changed into an effective automated system that will increase efficiency and create friendly interfaces for software applications.
The use of the most up-to-date approaches to software engineering will have a strongly positive effect on the development of wireless technologies. The development of modern platforms for software engineering and the adaptation of wireless hardware leave few barriers to wireless software production.
Bibliography
Gilliot, I. (n.d.). The Business Case for Wireless Software Applications in the Enterprise.
One of the most frustrating things about using computer technology is being infected by computer viruses! Imagine yourself conveniently working on a very important task, and then all of a sudden, the file you have been slaving over for nights on end disappears! Most likely, it has been eaten up by an irritating, nerve-wracking computer virus.
What is a computer virus?
A computer virus is a software program that attaches itself to, overwrites, or otherwise replaces another program in order to reproduce itself without the knowledge of the computer user, as defined by Collette Dilly (2001). It is similar to a biological virus in that both share the same characteristic of infecting their hosts and can be passed on from one host to another.
Just like humans, a computer that is infected with a virus also becomes sickly and thus prone to malfunction when operating or running computer programs and software.
Trading programs with other people's computers may contaminate one's own programs. Connecting to the internet through a modem can likewise expose a computer to viruses. Some unsuspecting users may get them from seemingly innocent emails.
"When this program enters your computer through your input device, it hides in your computer's memory and starts to duplicate itself like a disease. When you save your data, you also save the virus. Slowly but surely, the virus crowds out your data and causes major system problems" (Trickum Middle School, 1997).
What does a virus do to a computer?
Computer viruses are actually created and developed by people, using bits of code designed to adapt to a computer's system, files, and data. Depending on the particular type of computer virus, the effects of infection may range from the simple display of some sort of message to a devastating crash of your computer system and programs.
The most common types of computer viruses are Trojan horses, e-mail viruses, and worms. A Trojan horse is simply a computer program that claims to do one thing (it may claim to be a game) but instead does damage when you run it (it may erase your hard disk). Trojan horses have no way to replicate automatically. An e-mail or network virus moves around in e-mail messages and usually replicates itself by automatically mailing itself to dozens of people in the victim's e-mail address book. A worm is a small piece of software that uses computer networks and security holes to replicate itself. A copy of the worm scans the network for another machine that has a specific security hole, copies itself to the new machine using that hole, and then starts replicating from there as well.
How can we protect ourselves from computer viruses?
Users should also consider the variety of anti-virus products currently available to protect their computers. There are three classes of anti-virus products: detection tools, identification tools, and removal tools. Scanners are an example of both detection and identification tools. Vulnerability monitors and modification detection programs are both examples of detection tools. Disinfectors are examples of removal tools.
Such anti-virus programs must be used to scan the computer for existing viruses. Never insert floppy disks or CDs from unreliable or unknown sources. Scan them with the antivirus before running them on the computer.
Avoid opening emails from unknown sources; they may contain malicious information as well as destructive viruses. Download mail with care and scan attachments with anti-virus software. Big email providers like Yahoo! usually have their own reliable default anti-virus software that automatically scans attachments.
Back up files for security reasons. If the computer gets infected and needs to be reformatted, the important files will already have been saved in a separate folder or on a CD.
Now that awareness of how to manage a computer virus is widespread, it is a comfort to know that infection is not a hopeless case! The important thing is to maintain a clean and virus-free computer to save one's files and one's sanity!
Visual tracking has been a major challenge for researchers for decades. Visual tracking is the projection of the movement of an object over an extended time frame. Based on the history of the movement, the moving object is tracked and its future positions are predicted. This work has brought a number of theories and working models to the fore. Research has existed in particular on recognizing hand movement (D. Hogg, 1983).
Detection of hand movement and recording of the history of such movements depend on a number of factors. These include the background of the movement (V. Athitsos and S. Sclaroff), skin color, and even wrist delimitation (R. Rosales, V. Athitsos, L. Sigal, and S. Sclaroff), among other factors. The speed of the hand movement also plays an important role in identifying or tracking it (J. M. Rehg and T. Kanade). After the movement has occurred, the pose of the hand can be reconstructed using kinematic model reconstruction, as shown by Y. Wu and T. S. Huang. In this case too, edge conditions, contours, and color have to be taken into account when simulating the movement of the hand (V. Athitsos and S. Sclaroff). There have been studies of both two-dimensional (MacCormick and Isard) and three-dimensional hand movement. The two-dimensional case is easier because of the limited degrees of freedom: for rigid objects such as a hand, there are six degrees of freedom in three dimensions and only four in two dimensions.
In order to bring the entire track of the hand into focus, it has been the practice to simulate or build a model of the hand movement. Models are reconstructed based on planar patches (J. Yang, W. Lu, and A. Waibel), polygon meshes, or generalized cylinders. Any of these methods can be employed to build a model while simulating the entire track of the hand. In order to identify the geometric location of the hand mathematically, a Bayesian filter is used. Many researchers have employed a recursive Bayesian filter to realize the model. In many cases, this has been influenced by the Kalman filter, and a combination of the two can be employed. In the following paragraphs, the Bayesian filter is adopted for this purpose and presented below.
Theory of Bayesian Model
Model-based studies of hand movement follow a standard process.
The model collects the input image and extracts features from it. Based on these features, it generates a new position and projects the image. A routine is then applied that identifies the changes or differences between the actual image and the realized one; this, in turn, brings out the error between the two. The error thus identified is used to produce a better estimate through the feedback loop created in the model. This gives closed-loop control of the entire process and thereby enables the model to improve upon itself over time.
The Bayesian model is based on a similar structure with a feedback loop. Rehg and Kanade first employed this model for building the track of a moving hand. They used 27 degrees of freedom and minimized the error with a squared-differences measure, the minimization being carried out by the Gauss-Newton algorithm. The mathematical model below is built on these concepts.
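To make the loop concrete, here is a minimal sketch (an illustration only, not the 27-degree-of-freedom Gauss-Newton formulation of Rehg and Kanade) of one predict-project-correct iteration, assuming a simple constant-velocity motion model and a hypothetical position measurement extracted from the current frame:

    import numpy as np

    def track_step(prev_pos, prev_vel, measured_pos, gain=0.5):
        # prev_pos, prev_vel : position and velocity estimates from the previous frame
        # measured_pos       : hand position extracted from the current input image
        # gain               : how strongly the error feeds back into the estimate
        predicted_pos = prev_pos + prev_vel        # generate a new position (prediction)
        error = measured_pos - predicted_pos       # difference between actual and realized positions
        new_pos = predicted_pos + gain * error     # feedback correction of the position
        new_vel = prev_vel + gain * error          # the velocity estimate is corrected as well
        return new_pos, new_vel

    # Example: the estimate from frame t-1 refined with the measurement from frame t.
    pos, vel = np.array([10.0, 5.0]), np.array([1.0, 0.5])
    pos, vel = track_step(pos, vel, np.array([11.4, 5.3]))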
The Model
Let Pt be the position of the hand at time t in a plane defined by coordinates x and y. In this case, let us consider only two-dimensional tracking with four degrees of freedom.
Therefore, Pt = f(Px(t), Py(t))
The position of the hand at time t-1 will be:
Pt-1 = f(Px(t-1), Py(t-1))
The velocity vector at time t is:
Vt = f(Vx(t), Vy(t))
And at time t-1 it is:
Vt-1 = f(Vx(t-1), Vy(t-1))
Based on the probabilistic approach of the recursive Bayesian method, the probability of a specific movement occurring after the current point in time is estimated as a probability Pr over the allowed degrees of freedom. In this two-dimensional study, the number of admissible moves is taken to be four or eight depending on the need; here we will consider eight. The following probabilities are therefore assumed: Pr(left), Pr(right), Pr(up), Pr(down), Pr(left-up), Pr(right-up), Pr(left-down), and Pr(right-down).
As per the first-order Markov assumption, the velocity and position at time t depend only on their values at time t-1,
i.e., Vt = f(Vt-1),
and the second assumption states that the velocity at t is conditionally independent of all earlier velocities. In order to track the movement of a hand, the actual position at time t is denoted Xt and the projected position and velocity (the observation) at time t is denoted Zt, where Xt and Zt are functions of Vt; the error correction, or recursive update, is a function of both of these variables.
The distribution follows from Bayes' rule applied to the above variables.
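In its standard form, this recursive Bayesian update can be written as:

    Pr(Xt | Z1, ..., Zt) ∝ Pr(Zt | Xt) × Σ over Xt-1 of [ Pr(Xt | Xt-1) × Pr(Xt-1 | Z1, ..., Zt-1) ]

where the summation term is the prediction (projection) step obtained by marginalizing over the previous state, and Pr(Zt | Xt) is the measurement likelihood.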
This projection is also called the Chapman-Kolmogorov equation (A. H. Jazwinski).
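For illustration, the recursion above can be written out for the eight-direction model; the sketch below is a generic discrete Bayes update (the transition and likelihood tables are hypothetical placeholders that a real tracker would estimate from data):

    # The eight admissible moves listed above, each a unit step in the plane.
    DIRECTIONS = ["left", "right", "up", "down",
                  "left-up", "right-up", "left-down", "right-down"]

    def bayes_update(belief, transition, likelihood):
        # belief      : Pr(direction) at time t-1, a dict keyed by direction name
        # transition  : transition[i][j] = Pr(direction j at t | direction i at t-1),
        #               the first-order Markov assumption
        # likelihood  : Pr(Zt | direction), from comparing projected and measured positions
        # Prediction (Chapman-Kolmogorov): marginalize over the direction at t-1.
        predicted = {j: sum(belief[i] * transition[i][j] for i in DIRECTIONS)
                     for j in DIRECTIONS}
        # Correction (Bayes rule): weight by the observation likelihood and normalize.
        unnormalized = {j: predicted[j] * likelihood[j] for j in DIRECTIONS}
        total = sum(unnormalized.values())
        return {j: p / total for j, p in unnormalized.items()}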
Conclusion
The movement of the hand has always been an intriguing prediction problem for mathematicians and researchers. This probability-based prediction corrects itself and learns through the process. This ensures that the model presented can become better as it progresses and can deliver much better results over a period of time. The fitting and estimation of the probabilities themselves have varied, from the Gauss-Newton algorithm to other normalization distributions.
References
A. H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York, 1970.
D. Hogg. Model-based vision: a program to see a walking person. Image and Vision Computing, 1(1):5-20, 1983.
V. Athitsos and S. Sclaroff. An appearance-based framework for 3D hand shape classification and camera viewpoint estimation. In IEEE Conference on Face and Gesture Recognition, 45-50, Washington DC, 2002.
R. Rosales, V. Athitsos, L. Sigal, and S. Sclaroff. 3D hand pose reconstruction using specialized mappings. In Proc. 8th Int. Conf. on Computer Vision, volume I, 378-385, Vancouver, Canada, 2001.
J. M. Rehg and T. Kanade. Visual tracking of high DOF articulated structures: an application to human hand tracking. In Proc. 3rd European Conf. on Computer Vision, volume II, 35-46, 1994.
J. MacCormick and M. Isard. Partitioned sampling, articulated objects, and interface-quality hand tracking. In Proc. 6th European Conf. on Computer Vision, volume 2, 3-19, Dublin, Ireland, 2000.
Y. Wu and T. S. Huang. Capturing articulated human hand motion: A divide-and-conquer approach. In Proc. 7th Int. Conf. on Computer Vision, volume I, 606-611, Corfu, Greece, 1999.
J. Yang, W. Lu, and A. Waibel. Skin-color modeling and adaptation. In Proc. 3rd Asian Conf. on Computer Vision, 687-694, Hong Kong, China, 1998.
A crucial part of business ethics is computer ethics, or information ethics. Most corporations today are wrestling with whether computer improprieties are a violation of professional ethics rather than a legal ethics issue. The purpose of this paper is to examine some of the ethical issues of the Internet as they relate to the theft of private or personal information from material sent over the Internet. Professional ethics can best be defined as learning what is right or wrong as it relates to the workplace and then doing the right thing. A code of professional ethics lays down the standards of integrity, professionalism, and confidentiality that all members of that particular profession are bound to respect in their work. Legal ethics, by contrast, is best defined as the principles of conduct that members of the profession are expected to observe within the constraints of the governing laws.
The right to privacy in Internet activity, especially regarding the creation of databases out of personal information, is a serious issue facing society, and as such it raises serious ethical questions. An additional example is that of people on the Internet who use anonymous servers as a way to avoid responsibility for controversial and inappropriate behavior. Cases of harassment and abuse have become increasingly frequent, aided by a cloak of anonymity. There are also problems with fraud and scam artists who elude law enforcement authorities through anonymous mailings and postings. These types of examples describe the ethical issues created by technology and by the people or corporations that control it (Tavani, pp. 179-85).
ISPs Role in Cyber Ethics
While ISPs have rightfully asserted that prescreening would be especially burdensome, they have not made the same argument that post screening would impose similar economic or administrative burdens. ISPs act like publishers when they sponsor or operate newsletters or other online publications over which they exercise editorial control. At other times, when they are simply functioning as a conduit for other information content providers, their role is equivalent to that of a distributor. Clearly there must be a higher standard of liability when they assume a publisher-like role. If an ISP functions as a publisher, it must be held to a higher standard of liability; that is, it must be held accountable for defamatory remarks in the same way that the New York Times or other media would be held accountable.
In most situations, however, ISPs will not be acting as publishers but as distributors, passive conduits for the exchange of information by their legions of subscribers. In this context, ISPs should assume responsibility for post screening even if the law allows them to do otherwise. They should not take refuge in misguided policies and questionable legal precedents. But the policy should also be changed so that no one is victimized by an intransigent ISP that fails to live up to its moral obligations. Unless we abandon blanket immunity for ISPs and reach the type of compromise sketched out here, it is likely that ISPs will become the unwitting accomplices of many Internet defamers, who are often hiding behind the cloak of anonymity. Libelous speech is different from pornography and hate speech; it cannot be regulated from the bottom up through code. It requires some regulation from the top down through carefully crafted statutes. Unfortunately, the current statute is inimical to the interests of the Internet community.
Lessig (1999) argues that both fair use and the entry of works into the public domain will be jeopardized by these systems, since nothing requires that the balance now provided by copyright law be preserved. The problem is that those writing the rights-management code can embed their own intellectual property regulations into that code: they can program the system to charge a fee for any use and ignore any fair use or first-sale considerations by anchoring the content to a specific user. It is possible, of course, that rights-management systems will be constructed that will at least try to strike the right balance (Lessig, pp. 133-36). Lessig seems to deny this, but some developers of DRM may realize that there are moral and social issues at stake here and will work to preserve fair use or its equivalent in their systems.
It would be irresponsible and imprudent to design these rights-management systems without allowing for fair use and without respecting other safety valves such as first-sale. There is a lively debate about the technological feasibility of developing a system that would include a realistic provision for fair use, but that discussion is beyond the scope of this analysis. Suffice it to say that doing so will be a challenge, since system developers will need to anticipate a myriad array of fair use requests without being duped by those trying to manipulate the system. According to industry analyst Ashish Singh, "Fair use algorithms could be written into the code and that would become the hook, the attractiveness of the product" (Howe, pp. 10-11).
Anonymity And Cyber Ethics
One of the biggest security problems for the Net is the fact that individuals and organizations can still misrepresent themselves with impunity. At present, there is no uniform system or mechanism for identifying users who frequent cyberspace. The Internet does support architectures that facilitate identification, such as password protections, biometric systems, and digital certificates. It is still quite possible for users to interact in cyberspace anonymously, and it can be difficult to trace the real identity of users who are deliberately trying to conceal their identity. While anonymity supports privacy and free-speech rights, it also interferes with security and the curtailment of cyber crime. Hence, the lack of an identifying infrastructure has sometimes been detrimental for electronic commerce and for effective law enforcement in the realm of cyberspace.
The interconnected issues of digital identity and anonymity are highly charged ones that stir deep emotions. This was evidenced by the heated response to Intel Corporation's announcement in February 1999 about its plan to embed identification numbers in its next generation of computer chips, the Pentium III. The primary purpose of the embedded serial numbers was to authenticate a user's identity in e-mail communications, enhance security for electronic commerce by reducing the risk of fraud, and allow organizations to better track their computer equipment. Privacy advocates, on the other hand, argued that this unique identifier would enable direct marketers and others to surreptitiously track a user's meandering through various Web sites. While Intel capitulated to this intense pressure and agreed to ship its products with the serial numbers turned off, the incident seemed to elevate awareness about the tenuous future of electronic anonymity. Moreover, the incorporation of identity features into chip technology has not been abandoned by Intel.
Despite Intel's quick response, there is growing anxiety that these serial numbers are harbingers of a trend toward ever more invasive surveillance networks (Markoff, C1). What if governments throughout the world attempt to mandate the deployment of such invasive identifying numbers in order to keep better track of their respective citizens? According to security expert Vernor Vinge, "The ultimate danger is that the government will mandate that each chip will have a special logic added to track identities in cyberspace" (Markoff, C1).
The Intel incident also graphically illustrates the difficult dilemma faced by policy makers: an unavoidable trade-off between security and anonymity or privacy. Security and anonymity seem to be mutually exclusive goods. If we really want to make the Internet a more secure environment, it is necessary to suppress anonymous transactions and hold users accountable for what they say and do. For example, thanks to a tracking mechanism installed in Microsoft's Office software, the rogue programmer who released the destructive Melissa virus was swiftly apprehended. But the cost of security and better accountability is a loss of privacy and perhaps the termination of untraceable Internet communications.
Although there is a cost to preserving anonymity, its central importance in human affairs is certainly beyond dispute. From a moral perspective, it is a positive good, and it is valued as highly instrumental in helping to realize two other goods that are vital for human fulfillment: freedom and privacy. Anonymous communication in cyberspace is enabled largely through the use of anonymous remailers in conjunction with cryptography. A brief word about these is in order. According to Lohr (1999), an anonymous remailer functions like a technological buffer: it strips off the identifying information on an e-mail message and substitutes an anonymous code or a random number. By encrypting a message and then routing it through a series of anonymous remailers, a user can rest assured that his or her message will remain anonymous and confidential. This process is known as chained remailing (Lohr, 5-6). The process is quite effective because none of the remailers has the key to read the encrypted message, neither the recipient nor any remailer (except the first) in the chain can identify the sender, and the recipient cannot connect the sender to the message unless every single remailer in the chain cooperates. That would require each remailer to keep a log of its incoming and outgoing mail, which is highly unlikely.
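As an illustration of the chained-remailing idea (a sketch only, not any particular remailer's implementation; the three hop keys are hypothetical), the snippet below wraps a message in one encryption layer per remailer, so each hop can strip off only its own layer and learn nothing about the rest:

    from cryptography.fernet import Fernet

    # One symmetric key per remailer in a hypothetical three-hop chain.
    hop_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(message, keys):
        # Sender: encrypt for the last hop first, then wrap each earlier hop around it.
        for key in reversed(keys):
            message = Fernet(key).encrypt(message)
        return message

    def strip_layer(blob, key):
        # Each remailer removes exactly one layer and forwards the remainder onward.
        return Fernet(key).decrypt(blob)

    sealed = wrap(b"anonymous message", hop_keys)
    for key in hop_keys:          # the message traverses the chain hop by hop
        sealed = strip_layer(sealed, key)
    assert sealed == b"anonymous message"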
According to Froomkin, this technique of chained remailing is about as close as we can come on the Internet to untraceable anonymity, that is, "a communication for which the author is simply not identifiable at all" (Froomkin, p. 245). If someone clandestinely leaves a bunch of political pamphlets in the town square with no identifying marks or signatures, that communication is also characterized by untraceable anonymity. In cyberspace things are a bit more complicated, and even the method of chained remailing is not foolproof: if the anonymous remailers join together in some sort of conspiracy to reveal someone's identity, there is not much anyone can do to preserve anonymity. Anonymity can also be exploited for revealing trade secrets or violating other intellectual property protections. In general, secrecy and anonymity are not beneficial for society if they are used improperly.
Thus, anonymity can be exploited for many forms of mischief. There are concerns about anonymity abuses from both the private and public sectors. In the private sector there are worries about libel, fraud, and theft, while the state is worried about the use of anonymity to launder money, to evade taxes, to manipulate securities markets, and so forth. Hence, there is the temptation for governments or even digital infrastructure providers, such as ISPs or companies like Intel and Microsoft, to develop and utilize architectures that will make Internet users more accountable and less able to hide behind the shield of anonymity (Nissenbaum, pp. 141-44).
In response to these concerns, there are several options available for more comprehensive digital identity systems. The most thorough system would ensure that there is always an indissoluble link between one's cyberspace identity and one's real identity. This is accomplished by somehow mandating the traceability of all Internet transactions. The use of technologies such as chained remailers helps protect anonymous communications so that they are untraceable. But what mechanisms might be adopted to mandate traceability, that is, to make the untraceable traceable? One way to achieve mandatory traceability is to demand a user's identification as a precondition of Internet access. The government might also implement such a system by law, by requiring that all ISPs demand verifiable identification as a prerequisite for access to the Internet. There are many variations on these two broad approaches.
The basic idea behind any system of mandatory traceability is that speakers entering cyberspace would be required to deposit (e.g., with the ISP), or attach to their communications, a means of tracing their identities. One can conceptualize mandatory traceability by positing a regime in which an encrypted fingerprint would automatically be attached to every transaction in cyberspace. In such a regime, the fingerprint could be encrypted with the government's public key such that properly authorized law enforcement officials could access the private key necessary for decryption, while participants in the cyber-transaction would not be able to strip away the speaker's anonymity. Digital certificate technology could also play a role in the development of such an identity infrastructure (Fitzgerald, pp. 77-80). It would provide a feasible method of authenticating individuals and verifying the integrity of their transmissions. Recall that digital certificates allow individuals or organizations that use the Internet to verify each other's identity.
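A minimal sketch of this "encrypted fingerprint" idea, assuming a hypothetical escrow key pair held by the authorized authority: the sender's identity token is encrypted under the escrow public key and attached to the message, so ordinary participants cannot read it while the private-key holder can.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Hypothetical escrow key pair; only the authority holds the private half.
    escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_public = escrow_private.public_key()

    def attach_fingerprint(message, identity):
        # Encrypt the sender's identity under the escrow public key and attach it.
        return message, escrow_public.encrypt(identity, oaep)

    def reveal_identity(fingerprint):
        # Only the holder of the escrow private key can recover the identity.
        return escrow_private.decrypt(fingerprint, oaep)

    msg, fp = attach_fingerprint(b"a cyberspace transaction", b"user-1234")
    assert reveal_identity(fp) == b"user-1234"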
Conclusion
The Net's inherent vulnerabilities have been exploited by hackers and other miscreants who frequent cyberspace. Among the architectures that counter these threats are firewalls, powerful encryption software, and digital identity systems. Paradoxically, some of the architectures, like encryption code, that secure the Net and safeguard privacy also obstruct the efforts of law-enforcement authorities to deal with cyber crime and computer-related crime. The protection of public safety can sometimes conflict with respect for basic civil liberties. There is a tension between a perfectly secure Internet, where all transactions are traceable and users are held accountable, and an Internet that respects the values of privacy and anonymous free expression. Anonymous expression is an important social value worth preserving in cyberspace. Therefore, architectures and technical standards used to implement a digital identity system or other security mechanisms must not preclude the possibility of anonymous expression or create serious new privacy hazards.
We can preserve the integrity of the Internet as a tool of autonomy and a forum for creativity if we comport ourselves in cyberspace in an ethical manner. Each member of the cyber community must engage in activities that support the collective values of that community and refrain from activities that merely treat it as a commodity. The Internet, like all public goods, is vulnerable to abuses and excesses that can endanger its fragile ecology, and some of those abuses can be the result of poorly crafted code. The global nature of the Internet makes addressing the legal issues associated with Internet or information privacy daunting and complex. It is a legal arena without walls or physical boundaries, where the laws vary from country to country. Even within the United States there is dissent and disagreement about the definitions of Internet privacy, who owns that information, and what constitutes appropriate or inappropriate use of that information. Perhaps the issues that have not been successfully resolved through the law can be resolved through the creation of moral and ethical guidelines that will frame the issues, at which point legal protections can be put in place.
Works Cited
Fitzgerald, A. (2000). Going Digital 2000: Legal Issues for E-Commerce, Software and the Internet. St. Leonards, Australia: Prospect Media. 77-80
Froomkin, M. (1996). Flood Control on the Information Ocean: Living with Anonymity, Digital Cash, and Distributed Data Bases. University of Pittsburgh Journal of Law and Commerce 395:245.
Howe, Jacob and Andy King. Three Optimisations for Sharing. Technical Report, Computing Laboratory, University of Kent at Canterbury, 2001. 10-11
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books. 133-36
Lohr, S. (1999). Privacy on the Internet Poses Legal Puzzle. New York Times. 5-6
Markoff, J. (1999). Growing Compatibility Issues: Computers and User Privacy. New York Times, C1.
Nissenbaum, H. (1999). The Meaning of Anonymity in an Information Age. The Information Society 15: 141-44.
Tavani, H. (2001). Defining the Boundaries of Computer Crime: Piracy, Break-Ins, and Sabotage in Cyberspace. In Readings in Cyberethics, edited by R. Spinello and H. Tavani. Sudbury, Mass.: Jones and Bartlett. 179-85
A Cisco router and any other computer do not have very different boot processes; in fact, they are quite similar. Understanding the processes involved in setting up a router's configuration and its various elements makes it possible to relate the router's boot process to that of any other computer. Both a router and a simple computer go through a sequence of events that completes the boot process.
Beginning with a computer, the BIOS (Basic Input/Output System) performs three main functions that make way for the booting up of the system. The BIOS provides a set of machine-code subroutines, which are called by the operating system to access the various hardware components of the computer. The BIOS also initiates the boot sequence and provides the third function of changing the low-level setup options. The BIOS code is burned onto a Flash EPROM memory chip installed on the motherboard (Mossywell). In a Cisco router (for example, a Cisco 2501), the Flash memory contains a valid IOS image, which plays a role similar to the BIOS of a simple computer. Even before the router is configured, a comparable sequence of events takes place during the completion of the boot process (DiNicolo, 2006).
The boot process in the router begins with POST (power-on self-test). The router carries out a POST after being powered up; the purpose of this POST is to check that the CPU and the router interfaces are able to function properly. Next in line is the execution of the bootstrap to load the IOS (Internetwork Operating System). After a successful POST, the router executes the bootstrap program already burned into ROM. The bootstrap searches the Flash memory for a valid Cisco IOS image, which is loaded if the search succeeds. If no IOS image is available, the router boots with the limited RxBoot IOS version found in ROM. Once loaded, the IOS image searches NVRAM for a valid startup configuration. If there is no valid startup configuration file, the router starts the System Configuration Dialog, also called setup mode, which enables the user to perform the initial configuration of the router (DiNicolo, 2006).
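The fallbacks in this sequence can be summarized as a small decision procedure; the sketch below is only an illustration of the order of events described above, not Cisco's actual bootstrap code:

    def router_boot(post_ok, flash_has_ios, nvram_has_startup_config):
        # Simplified boot sequence for a Cisco 2501-class router.
        if not post_ok:
            return "halt: POST failed, CPU or interfaces not functional"
        # The bootstrap program in ROM searches Flash for a valid IOS image.
        if flash_has_ios:
            ios = "IOS image loaded from Flash"
        else:
            ios = "RxBoot limited IOS loaded from ROM"   # fallback when no image is found
        # The loaded IOS then looks for a startup configuration in NVRAM.
        if nvram_has_startup_config:
            return ios + "; startup-config applied from NVRAM"
        return ios + "; entering setup mode (System Configuration Dialog)"

    print(router_boot(post_ok=True, flash_has_ios=True, nvram_has_startup_config=False))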
Simpler computers begin their boot processes after the Reset signal is issued. Various components of the motherboard undergo a checking process, and the CPU is allowed to execute code only after the Reset signal is turned off. Testing and initialization of the hardware follow the power-on self-test (POST) procedure, and upon its successful completion the BIOS initiates the boot sequence from the hard disk or any other device specified in the BIOS setup. The process begins with the ROM BIOS initiating the POST. The BIOS then discovers the boot device, loads the contents of its very first physical sector into memory, and instructs the CPU to execute that code. The machine-code routines are the most basic part of the BIOS, with the size of each routine differing between BIOS implementations; they are accessed by issuing software interrupts to the CPU. The BIOS loads the interrupt vector table and thereby enables the mapping between each interrupt number and the corresponding routine's memory location (Mossywell).
Reference
DiNicolo, D. (2006). Cisco Router Boot Process. CCNA Study Guide, Chapter 07. Web.
The purpose of this paper is to highlight why a consumer should select the Mac operating system over a Windows PC. Both the Mac and the Windows PC are popular platforms, readily available in the mainstream market and catering to the various needs of consumers. While the basic functionality of the operating system on a computer is the same, the two operating systems differ in their characteristics and the specific benefits they provide to the end user. Through this paper a justified argument, supported by evidence from books and scholarly journals, is presented on why consumers should look towards adopting a Mac computer instead of a Windows-based PC.
Introduction to Mac and PC
The Mac operating system is computer software developed by Apple Inc. as the operating system for Apple computers. Initially the software was only available for use on Apple computers and Apple devices; however, other computers can now also make use of Mac OS. The original line of computers that was launched in 1984 and supported the Mac OS was named Macintosh and was sold exclusively by Apple Inc.
This was a revolutionary computer operating system because previously the only operating system available to consumers was MS-DOS, which was command-line and prompt driven and was tedious for the majority of consumers. The Macintosh computers with the Mac operating system, on the other hand, were unique and revolutionary: they provided a graphical user interface for consumers and enabled them to use programs and software developed with object-oriented programming. This form of computer interface was more user-friendly and easier for consumers to comprehend.
The Mac OS made use of the hierarchical directory tree approach for navigation and file addressing and enabled users to create their own files and folders with relative ease. The system itself supported cooperative multitasking, as the user could run more than one program at the same time, which was previously not possible with other non-graphical user interfaces. Aside from this, the Mac operating system was also developed on and supported by UNIX, which meant that it was open source.
As a result, Mac users were able to customize their own systems according to their requirements and share knowledge of newly developed upgrades and programs with others in the UNIX-based OS community. This enabled Mac users to have highly customized and efficient operating systems, built on upgrades and programs developed by other Mac users purely for their convenience and facilitation. Other application programs provided with the Mac operating system included image processing software, audio and video processing software, and software for word processing, email, and the development of databases and spreadsheets.
While the Mac OS was launched by Apple Inc., the Windows OS was launched by Microsoft in 1985. PCs are usually associated with Microsoft Windows, as they are computers that run on the Windows operating system. The Windows operating system was also unique in the market at the time of its launch, as it allowed consumers to use a graphical user interface for operating their computers and navigating, instead of the old command-line based operating system.
The Windows operating system was launched by Microsoft as an additional product/service alongside the then-available MS-DOS; however, the new operating system became more popular than MS-DOS among users because it was more user-friendly and allowed just about anyone to use the computer with minimal knowledge of computers, programming languages, or command-line instructions. The Windows operating system also provided a range of application software for video and audio playback and recording, as well as the famous office suite known as MS Office.
The MS Office suite provided application software for word processing (MS Word), spreadsheets (MS Excel), database and file management (MS Access), email management (MS Outlook), and multimedia-based presentations (MS PowerPoint). The other characteristic of the Windows OS is that it is not an open-source system. This means that the code of the Windows OS is restricted and accessible only to Microsoft. As a result, it is not possible for users to make changes to the underlying code of the Windows OS in order to make customizations or correct any flaws or errors that might be present so as to make the operating system more efficient.
Advantages and Disadvantages of Mac OS
The advantages associated with the Mac OS pertain to the fact that the Mac operating system is more reliable and has less chance of getting a virus. Specifically, after the launch of Mac OS X in the year 2000, the number of viruses, adware, or malicious spyware attacks has been greatly reduced for computers running the Mac OS. Having a Mac does not guarantee that no viruses can attack the computer, but having the Mac OS greatly reduces the chances of acquiring a virus through normal use or internet browsing.
Aside from this, the Mac operating system is very friendly for the user. Users are easily able to comprehend navigation and control in the operating system. Moreover, the Mac operating system often comes pre-installed on machines or can be installed at the request of the customer. This is specifically true for Apple machines, or Macintoshes.
Another great feature and advantage of the Mac operating system is that it allows the user to run both the Mac OS and the MS Windows OS on the same machine, therefore providing the user with the choice of dual-OS usage. In fact, it is also relatively easy to convert files created on the Mac OS so that they can be transferred to and used on the MS Windows OS through migration and conversion. The Mac OS and Mac computers provide the user with exceptional video, audio, and photo processing technology. Using the Mac OS, it is very easy for users to run professional applications for video, audio, and photo editing.
Mac OS based computers are highly capable when it comes to dealing with multimedia, as their multimedia performance is excellent. "Today, many computer users in business and industry are adopting Macintosh computers as a primary multimedia tool because of its superior video, images, and sound" (Jun Na Rajaravivarma, 2003). Aside from this, the Mac OS also provides unique application software such as iChat for chatting using audio and video, iLife for managing multimedia files, and the Time Machine application, which allows the user to schedule data backups.
The Mac OS is based on the open-source UNIX platform. This has enabled users to shape and highly customize the Mac OS according to their specific requirements and needs. Alben highlighted, in her Muriel prize-winning article about Apple's approach to the Mac OS, that Apple "conceived the Mac OS to be a computing environment that allows people to choose what works for them. Instead of having to conform to the confines of the computer, they will work and play and learn in a way that better fits their needs and wants. These customizable appearances take the Mac OS beyond the utilitarian operating systems currently available" (Alben, 1997).
The UNIX base of the Mac OS has made it more reliable as well as less faulty, as it has been extensively tested, used, and adjusted by peers to eliminate any possibly faulty code or programming. The open-source nature of Linux and UNIX has allowed users to fine-tune the software and additional application software for the Mac OS (Lerner & Tirole, 2005).
The open-source nature of the Mac OS has enabled users of Mac OS based machines and computers to create new application programs for the Mac OS system. These applications have been assessed by Apple and licensed by the company, which has integrated them into the Mac OS package for future users. One such development built using the Mac OS open-source environment is the technical or virtual bulletin boards that were originally used by Mac OS open-source programmers to share code and advice on developing short code for the Mac OS (Luca & McLoughlin, 2003). The Mac OS was specifically designed for Apple products by Apple Inc.
Therefore it is more reliable when run on Apple products, as it provides a complete solution of hardware and software that works together in a mutually cohesive manner to benefit the user and make using the computer easier.
The disadvantages associated with a Mac or the Mac OS pertain to upgrading the OS. As upgrades of the Mac OS are not routinely released by Apple, it is not easy for users to upgrade their Mac operating systems. However, users can always download software provided over the internet by other Mac users and customize and adjust their Mac OS features and application programs. Aside from this, a recent trend analysis of Mac systems on the internet has revealed that Macs are more expensive to purchase than PCs, as they are more multimedia oriented and come with additional features.
The old Mac OS is somewhat dated, and some of the applications that traditionally have worked on the Windows OS might not work on the Mac OS unless the user installs the Mac OS X software. Similarly, it is often difficult to repair a Mac and resolve issues, as the code for the Mac is often different and not standardized. Moreover, one drawback for serious gamers is that most of the games readily available on MS Windows based PCs are not compatible with the Mac OS platform and therefore cannot be run on Macs. "Although many cross-platform file types are currently available, connection type depends mainly on network purpose, and media types are sufficiently diverse that uninformed users can encounter serious problems" (Jun Na Rajaravivarma, 2003).
Advantages and Disadvantages of Windows OS/ PC
The main advantage of an MS Windows based PC is that Microsoft provides extensive and unlimited upgrades for the Windows OS. As a result, users only need access to the internet in order to automatically download upgrades for the Windows OS and the related application software that runs on it.
Another advantage of the PC based on the MS Windows OS is that the majority of the programs available in the market, and the games available to users, are created with MS Windows in mind. As a result, they run much better on MS Windows, and some are only able to run if a Windows-based platform is provided on the PC. The PC is also a very common computer that is extensively available in the market, relatively cheap, and used by a high percentage of the population.
The disadvantages associated with PCs and the MS Windows OS are that they are highly prone to viruses and bugs. This makes them very unstable and unreliable. PCs as a result have to be secured with complex antivirus programs that can often be very expensive for consumers to purchase. The system of a PC can also crash and become slow or unresponsive over time with heavy usage. This is another major problem that makes PCs unsuitable for extensive heavy usage. Moreover, as PCs have a standardized code base provided by MS Windows, hackers specifically target PCs with their viruses and malicious code.
As a result, MS Windows OS and PC users have to take extensive care of their computers, look after them, and keep their systems and antivirus software upgraded in order to have smooth operation of the computer. This is especially true for the newly released Windows Vista, which makes PCs more unreliable and unstable in terms of performance. Another disadvantage of PCs and the MS Windows OS is that PCs bought with the MS Windows OS are often provided with only the most basic application software in the package. The Mac systems with the Mac OS, however, have customized media applications and software that can only be run on a Mac. As a result, this reduces the appeal of PCs for the younger generation.
Why Mac is better than PC
In today's day and age a Mac is better than a PC. This is mainly because the software and media used on the computer by an average user are highly complex in nature, with a high level of multitasking taking place. In such a situation the Mac fares much better than a PC, as it is especially designed for multimedia applications and the use of heavy-duty multitasking programs. In addition to this, the Mac is also exceptional for gaming and multimedia processing, thanks to the enhanced applications dedicated to these tasks and the high level of graphics provided. Moreover, the design of the Mac hardware and its GUI make the Mac an innovative and suave choice for style-conscious users. PCs, on the other hand, are considered boring, with a lack of stylish appeal.
PCs are readily available as the most affordable systems in the market. However, Macs are more affordable for users in the long term. This is because Macs have integrated, customizable application software for office work as well as connectivity, multimedia, and online chat that is not available on PCs. PC users, as a result, have to purchase the MS Office suite in addition to the Windows OS and the PC, while additional security software and graphics cards also have to be bought in order to bring the PC up to the level of a readily available Mac, thereby increasing the cost of the PC.
Macs are more reliable for users, as they tend to break down less often and suffer far fewer crashes than PCs. PCs are prone to malicious code, viruses, and faults that have to be addressed by Microsoft, which makes them unreliable. Macs, however, are less prone to viruses and attacks from hackers, as it is much more difficult to hack a Mac than a PC. Aside from this, Macs are also well known for their high level of performance and processing speed, since they are designed specifically for heavy-duty use with multitasking of multimedia applications and software.
As a result, Macs deliver much more efficient performance in terms of speed and reliability. PCs, on the other hand, tend to slow down over time and can crash when multiple multimedia applications are used at once. Similarly, if high-quality gaming is to be done on a PC, an additional graphics card is required, which is not necessary for a Mac.
Macs are also easy for end users to operate. They provide an interactive graphical user interface designed with the user's requirements in mind, which makes them much easier to navigate and use than PCs. Regular updates to Mac OS X are available from the company as well as from other users. Aside from this, Macs also feature instant connections to external devices, internet-based communication devices, and other Apple products. The provision of the iChat software along with a web camera gives users access to video-based chat at the click of a button.
The Mac also has the unique capability of housing two operating systems at the same time, enabling the user to run the Mac OS as well as an MS Windows-based OS on the same machine. This dual-operating-system setup is particularly useful for families in which different people prefer different platforms. Apple Inc. provides the Mac computers on which the Mac OS is supplied.
The company designs hardware that corresponds with the Mac OS and adjusts the software to changes in the hardware, making the two mutually cohesive. This integration of hardware and software allows the Mac to offer unique capabilities, with services like chat and internet connectivity instantly available to the user.
Mac users are spoilt for choice when it comes to which applications they want on their systems and the level of customization they prefer. The Mac is available in unique designs and finishes that PCs cannot rival in terms of style. Moreover, the Mac offers the user a range of customized application software that can be loaded onto the machine at the time of purchase, whereas only a limited number of such applications are provided with an MS Windows-based PC, often at an additional charge.
The main reason the Mac is so diverse and able to provide users with such a range of benefits and customization is its UNIX foundation, the core of which is open source. Because this foundation is open, developers can build new application software according to their requirements that runs on Mac systems, and Apple Inc. can fold such applications into the packages it offers to consumers, making the overall Mac offering more customized for the end user. The Mac's performance, reliability, and diverse capabilities also rest in part on this open foundation.
Conclusion
The Macs of today are highly evolved, offering better reliability, speed, performance, security, and customization than PCs. The consistency and predictability of the Mac, the low level of security and virus threats it faces, and the broad range of solutions Apple provides for Mac users make the Mac a better choice than a PC.
The article explains that modern organizations and government bodies should pay special attention to threats and vulnerabilities related to sensitive data. In this respect, it is possible to distinguish two types of threats: internal and external.
Internal threats include damage to laptops and the disclosure of personal information by employees; external threats are hackers and data thieves. Because the biometric data an organization holds must be accurate, it is worth thinking from the outset about how managers are going to keep it that way. The article describes the history of biometrics and its pros and cons. The author pays special attention to biometric technology, fingerprinting, hand geometry, iris and retina scanning, and face recognition. The article is objective, is based on a substantial literature review, and supports its ideas and suppositions with detailed facts and arguments related to the topic.
Bielski, L. Striving to Create a Safe Haven Online: ID Theft, Worms, Bugs, and Virtual Eavesdropping; Banks Cope with Escalating Threat. ABA Banking Journal, 95 (2003), 54.
The article discusses the problems of safety and the technological risks associated with data protection and hacker attacks. Protection starts with deciding what information to collect and how to get it. Good design of data capture forms can help, as can choosing reliable and up-to-date sources if an organization is not acquiring the data directly from the Data Subject. This means that government agencies must hold enough data but, importantly, not too much.
The biggest risk to security is almost always the company's own staff. The damage they do can be deliberate: stealing information about people, such as business contacts they want to use for their own purposes, or trashing the database out of frustration at being demoted. The arguments in the article are well supported by facts and research studies conducted on this topic. Special attention is given to the banking sector and the tools that can be used to protect privacy.
Casella, R. The False Allure of Security Technologies. Social Justice, 30 (2003), 82.
The article states that biometrics and other related fields of research require huge investments and financial support in order to protect data and electronic information. More often the damage is unthinking or inadvertent: giving information over the telephone to someone who should not have it, leaving confidential files at home for a neighbor to see when working at home, or chatting in the canteen about a user's borrowing habits where other people can overhear.
The role of the government is to control data protection and develop innovative technologies against attacks and intrusion by third parties. The use of security technology in public places in the form of biometrics, detectors, surveillance equipment, and advanced forms of access control are relatively recent developments (92). The article is based on a current literature review and state documents related to the problem of biometrics.
Lineberry, S. The Human Element: The Weakest Link in Information Security. Journal of Accountancy, 204 (2007), 44.
The article pays special attention to the human element, which can be a risk factor in security. Security must be seen in the context of wider organizational policies, and many aspects of it will be taken care of by, for example, the IT department or its equivalent.
However, high-level security provision on its own is not enough; the systems have to work in practice. Facial recognition is an important area of concern for many state agencies. State agencies should also maintain a perimeter security system consisting of firewalls, intrusion detection systems, and antivirus measures installed on each laptop. Specific issues may arise where a Data Controller feels the need to monitor the behavior of staff or members of the public, and the organization must be careful to provide information only to the right person. The article offers readers a distinctive approach to data and information security, one connected with and dependent upon human motivation and fairness.
Orr, B. Time to Start Planning for Biometric. ABA Banking Journal, 92 (2000), 54.
This article is devoted to the importance of biometrics as a science and the opportunities offered by the further development of face recognition technologies. It argues that state institutions should ask for information to verify a person's identity. State institutions may also ask for information to help GCI locate their records; they might, for example, want to ask what part of the organization the person originally dealt with, or the approximate date they were last in contact.
A data access request is not valid until the organization has received whatever identifying information it needs, but it can only ask for reasonable information. The first line of defense is therefore to ensure that staff are aware of the possibilities and operate within a culture where information, and especially personal data, is handled carefully and responsibly. The article is objective and is based on a detailed analysis and sound data collection methods.
Papacharissi, Z., Fernback, J., Online Privacy and Consumer Protection: An Analysis of Portal Privacy Statements. Journal of Broadcasting & Electronic Media, 49 (2005), 259.
The article proposes an analysis of online privacy issues related to consumer marketing and biometrics. The principle is that precautions must be taken against unauthorized processing: staff must not use data in any way they are not permitted to, and they must not disclose it to anyone who is not permitted to have it. But for this to make sense, someone has to do the authorizing; unless there are clear guidelines on what is permitted, staff cannot be expected to comply.
The second Protection Principle says that all processing must be compatible with the purposes for which the data was obtained. Therefore, in deciding who is authorized to see any particular type of data, it is important to think about what type of access is compatible with that purpose. The article is based on a well-thought-out analysis and up-to-date information related to the field of face recognition and biometrics. As a minimum, it is usually best to get in writing from the requesting agency the legal basis on which it is asking for the information.
Building a custom personal computer (PC) for gaming is often seen as a challenging task that can be performed only by people with in-depth knowledge of technology. However, it has many advantages and can be a rewarding experience for the user. It is a process with many parts, requiring careful planning and preparation to achieve the best results. The present guide explores the main steps of the process to help one assemble a powerful and reliable gaming computer.
Discussion
The first step to creating a custom PC build is making a list of all the necessary components. The main parts that every PC needs are a motherboard, a central processing unit (CPU), a graphics processing unit (GPU), memory (RAM), storage, a power supply unit (PSU), a cooling system, and an operating system (OS) (Intel, 2022). Furthermore, one needs to choose a case that will fit all the elements mentioned above. Other peripheral parts, such as a monitor, keyboard, mouse, and more, are also vital to the quality of the final product. The choice of each element depends on the individual needs of the user, and it is necessary to research their quality and compatibility, as illustrated in the sketch below.
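Because component compatibility is easy to get wrong at the planning stage, it can help to write the planned parts list down and check it systematically. The short Python sketch below illustrates the idea with a toy parts list; the part attributes, socket names, and wattage figures are illustrative placeholders rather than recommendations for any specific build.

```python
# A toy planning checklist, not a real compatibility database: the sockets,
# memory types, and wattages below are illustrative placeholders.
build = {
    "motherboard": {"socket": "AM5", "ram_type": "DDR5"},
    "cpu":         {"socket": "AM5", "tdp_watts": 105},
    "gpu":         {"power_watts": 220},
    "ram":         {"type": "DDR5", "sticks": 2},
    "psu":         {"capacity_watts": 650},
}

def check_build(parts):
    """Return a list of obvious mismatches between the planned components."""
    problems = []
    if parts["cpu"]["socket"] != parts["motherboard"]["socket"]:
        problems.append("CPU socket does not match the motherboard socket.")
    if parts["ram"]["type"] != parts["motherboard"]["ram_type"]:
        problems.append("RAM type is not supported by the motherboard.")
    # Rough rule of thumb: leave generous headroom above the CPU + GPU draw.
    if parts["psu"]["capacity_watts"] < parts["cpu"]["tdp_watts"] + parts["gpu"]["power_watts"] + 150:
        problems.append("Power supply may be undersized for the CPU and GPU.")
    return problems

if __name__ == "__main__":
    issues = check_build(build)
    print("No obvious conflicts found." if not issues else "\n".join(issues))
```

A real build would rely on the manufacturers' specification sheets or an online part-picking tool rather than a hand-written check, but the sketch shows the kinds of questions worth answering before purchasing anything.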
After all parts are planned out, the next step is purchasing them and preparing a space for assembly. One may also need some tools to install the components and make sure that they stay in the designated place (Intel, 2022). A clean, well-lit working space creates a better environment for working on the computer as well. It is vital to note that all components may have special instructions that should be read before starting the process and consulted as necessary.
It is essential to follow the manual to ensure everything operates correctly. Building a PC starts with installing the CPU on the motherboard (Parrill, 2022). Next, the CPU cooler is attached to the motherboard over the CPU. Memory (RAM) is installed after that: the motherboard has RAM slots, and it is easy to insert the needed number of sticks. An optional step is adding M.2 SSDs (solid-state drives) for additional storage. Finally, the motherboard is ready to be mounted in the case.
After the motherboard is secured, one can start connecting the PC to the power supply. It is necessary to carefully plug in all cables and install all front-panel connectors, following the manual for the motherboard (Parrill, 2022). After everything is installed, cable management is optional but advised: it helps keep the space inside and outside the case neat, accessible, and visually pleasing.
The previous steps should leave the user with a computer that is ready to boot. Here, one of the last components, the OS, is installed. One should download a copy of the Windows OS from Microsoft's official website or choose another operating system if desired. The copy is then loaded onto a USB drive with at least 8 GB of storage; this USB drive is used to store only the OS. The USB drive is plugged into the new PC and chosen as the boot device on the PC's BIOS screen to finish the building process. After restarting the computer with the new settings, the process should be complete.
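One small, optional addition to this step, not covered in the guide above, is confirming that the downloaded OS image is intact before writing it to the USB drive. The Python sketch below compares the SHA-256 hash of a downloaded ISO against the checksum published by the vendor; the file name and expected hash are placeholders only, not values for any particular Windows release.

```python
import hashlib

# Placeholder values; substitute the actual ISO path and the vendor's published checksum.
ISO_PATH = "Win11.iso"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MB chunks so large ISOs do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of_file(ISO_PATH)
    print("Checksum matches." if actual == EXPECTED_SHA256 else "Checksum mismatch: re-download the image.")
```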
Conclusion
This explanation of how to build a custom gaming computer demonstrates that the process is not complicated. It is essential to research the components and learn about their installation from the manuals before starting the assembly. Otherwise, the steps are straightforward and can be replicated by a person without in-depth technological knowledge. The result of building a PC is a unique machine created to fulfill the needs of its user.
Today, our company's email system has become one of the most indispensable and widely used business communication tools. Due to its increased popularity, however, our email has also become an attractive target for crackers and hackers who intend to harm the company. Although email is a very convenient and efficient tool, it has certain vulnerabilities that hackers exploit. Internet communication systems using UDP or TCP are the most vulnerable to such attacks. The attackers try to discover the services that are present on the target network, i.e., ours, and then use techniques such as ping sweeps and TCP and UDP port scans to gather data from that remote network (Fletcher, 2009).
Body: Ping sweeps and port scans
Ping sweeps and port scans are the most common types of reconnaissance network probes. The port scan technique can be used by attackers to discover the services running on our machines; once an attacker knows which services are live, he can plan an attack on them. Attackers can scan all possible UDP and TCP ports, or limit the ports scanned to avoid detection. Port scans are extremely simple to carry out, since the intruder simply has to connect to the ports of our machine and determine which of them are active. UDP scans are a little more difficult than TCP scans because UDP is a connectionless protocol: the attacker simply sends a garbage UDP packet to an intended port to check which machines are active. Since TCP scans are easy, the attacker can use stealth scans, FIN scans, and full TCP connections to determine whether a machine is active (Dollard, 2006).
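To make the mechanism concrete, the following minimal Python sketch performs a basic TCP connect scan against a handful of common ports. The target address and port list are placeholders for illustration only (the address comes from the reserved documentation range), and such a scan should only ever be run against machines one is authorized to probe.

```python
import socket

# Placeholder target from the reserved documentation range and a small set of
# commonly used ports; substitute hosts you are authorized to scan.
TARGET_HOST = "192.0.2.10"
COMMON_PORTS = [22, 25, 80, 110, 143, 443, 3389]

def tcp_connect_scan(host, ports, timeout=1.0):
    """Return the ports on `host` that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP three-way handshake succeeds,
            # which is exactly what a simple connect scan looks for.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print("Open ports:", tcp_connect_scan(TARGET_HOST, COMMON_PORTS))
```

The simplicity of this loop is precisely why connect scans are so common: nothing more than an ordinary socket connection attempt is required.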
In a ping sweep, a series of ICMP ECHO packets is sent to a network whose machines occupy a range of IP addresses. In this way the attacker determines which machines are active and responsive, so that he can focus his attack on a particular live machine. Using this mechanism, an intruder can choose a list of our IP addresses and then send ping packets to us. But unlike a normal ping operation, a ping sweep sends one packet to a single IP address and the next one to another IP address, continuing in a round-robin fashion (Fletcher, 2009).
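The round-robin sweep described above can be sketched in a similarly small amount of Python. The example below simply calls the operating system's ping utility once for every address in a range and records which hosts reply; the subnet is a placeholder from the documentation address block, and the command-line flags assume a Linux-style ping.

```python
import ipaddress
import subprocess

# Placeholder subnet from the documentation address block; substitute your own network.
SUBNET = "192.0.2.0/28"

def ping_sweep(cidr):
    """Send one ICMP ECHO request to every host address in the range and return responders."""
    live_hosts = []
    for ip in ipaddress.ip_network(cidr).hosts():
        # "-c 1" sends a single echo request and "-W 1" waits one second for a
        # reply (flags as used by Linux ping; other systems may differ).
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(ip)],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            live_hosts.append(str(ip))
    return live_hosts

if __name__ == "__main__":
    print("Responsive hosts:", ping_sweep(SUBNET))
```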
Conclusion
Although ping sweeps and port scans can be used by attackers to break into our systems, they are not very harmful if proper precautions are taken. Indeed, network administrators sometimes use ping sweeps and port scans on their own networks to determine which machines are active and which are not, as part of routine diagnosis. Our company needs to be aware of the different types of network probes that can be extremely harmful to it. Network probes like ping sweeps and port scans cannot be stopped outright, which is precisely why they need to be taken seriously (Dollard, 2006). Since we cannot stop them, we need to be ready in case a ping sweep or port scan takes place, so that we can immediately protect our vulnerable systems and data.
References
Dollard, J. (2006). Secured Aggression. New Haven and London: Yale University Press.
Fletcher, R. (2009). Software Security: Beliefs and Knowledge. Auckland: Howard & Price.