Modern Computers: Changes Within Our Current Technological World

Modern Computers and Their Functions in Human Lives

The use of computers has changed human life considerably and in many different ways. People keep discovering new ways to improve their work, communication, and calculations. Within a short period of time, the computer has become an integral part of everyday life, and there is hardly a person in the world who has not heard of it.

Nowadays, there are many means by which computers can exchange information. Input and output devices allow fast and reliable transfer of information, and the demands on the quality of such devices rise considerably day by day.

Due to this intense competition, modern computers no longer contain devices such as serial ports and floppy drives.

Floppy Drives and Serial Ports

Floppy drives are devices by means of which it is possible to read and write floppy disks. The mid-1980s saw a kind of revolution, when floppy disks became the most frequently used medium for sharing information. Unfortunately, small capacity and low speed turned out to be crucial drawbacks, and people began inventing something more convenient and reliable.

The fate of serial ports is rather similar to that of floppy drives. This physical interface serves to transfer information between a computer and terminals or other peripherals, one bit at a time.
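For readers curious what this looks like in practice, serial communication can still be driven from a few lines of code. The following is a minimal sketch using the third-party pyserial library; the device path, baud rate, and PING message are assumptions made purely for the example.

    # Minimal sketch of talking to a peripheral over a serial port.
    # Requires the third-party pyserial package; the device path and
    # message here are illustrative assumptions, not a real standard.
    import serial

    port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)
    port.write(b"PING\r\n")        # bytes go out one bit at a time on the wire
    reply = port.readline()        # read the peripheral's response, if any
    print(reply)
    port.close()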

The necessity of creating matching ports on other devices created numerous challenges and led to the idea of creating another kind of port to unite machines. As for floppy drives, their shortcomings were numerous:

  • Inability to transfer large amounts of information;
  • Low speed and constant waste of time;
  • The considerable size of the device itself;
  • Outdated technology that is incompatible with other devices.

All of these are reasons why floppy drives have become unnecessary for modern computers. People have introduced more interesting and reliable devices that can take over the functions of floppy drives.

However, some professionals still make use of these drives to keep systems working in accordance with already established norms.

  • They take up considerable space;
  • Their construction is too complicated to use;
  • They can easily be superseded by something better;
  • They cannot guarantee a constant connection at all times.

These points play a very important role in the declining use of serial ports.

Still, server computers and some industrial automation systems use these ports, because it is impossible to change and improve such systems without risking the loss of something really important.

Instead of Floppy Drives and Serial Ports

USB ports and FireWire have replaced serial ports over time and provide computer users with an opportunity to transfer information faster and more reliably.

Varieties of CDs and DVDs, with their ability to store more information in a convenient form, attract the attention of users and make their work easier. Memory cards are one more invention that takes up little space and still stores the same or even greater amounts of information.

So, it is pointless to debate which device is better for computer users now, but the fact that floppy drives and serial ports are out of use and out of fashion is obvious, simply because new times require new ideas and new services.

How the Knowledge of Human Cognition Improves Computer Design

Abstract

The study examines how knowledge of human behaviour can help computer manufacturers come up with better, human-friendly computer designs that incorporate much of how humans behave and relate. After identifying the dominant human behaviours that can be incorporated into the manufacture of computers, the study goes on to identify some of the cognitive perceptions that have already been formed and whether their application has been short- or long-term.

Introduction

The advent of technology has changed how things are done. More and more is being done using computers, and there is no substitute for this, as they have become more pervasive than manual ways of doing things. This era has forced humans to co-adapt to technological changes by coming up with cognitive models that would guarantee good interaction between the invented machines and human beings. This has prompted scientists in the technology business to come up with methods for identifying how human behaviour should influence the manufacture of computers and related devices, so that these devices are easy to use because human behaviour was taken into consideration in the design. So that the topic of cognitive human behaviour in the design and manufacture of computers can be understood more easily, the paper will look at existing human behaviours and at examples of programs that have already been identified.

Background study of the topic

Human cognition is a field of study that scientists use to identify the best ways to assimilate human behaviour into the manufacture of computers. In the technology world, it is the field concerned with the study, design, and construction of human-computer interaction. Human-computer interaction models are meant to make the use of computers more human-friendly, and ultimately to implement the findings by incorporating the discoveries made into the manufacture of more human-friendly computers (Jacko and Sears, 2003). This understanding will help in co-adapting humans to the constantly changing technology in the manufacture of computers, which has now become part of our lives. It is plainly evident that technology has induced new ways of doing things in almost all fields of human life and in the roles played by humans.

Interaction of humans with computers

To understand what human cognition entails, it is important to understand how humans interact with computers. It should be noted that interaction between the user and the computer occurs at a stage referred to as the interface. The interface incorporates the various components of a computer, that is, the hardware and the software. The software, according to its design, displays different features, for instance characters such as letters or graphics on the monitor, which the user identifies. These characters are fed into the computer using hardware devices such as keyboards and touchpads. The interaction of users and computers involves a flow of information, and this loop of interaction has several aspects. They include the objectives set by the user; the type of connectivity the computer has, where what matters is whether the computer is plugged into a network or not (Card, Moran and Newell, 2004); the entry of raw data into the computer, which is processed to produce some output; and lastly, the feedback the interface provides to the user.
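To make the loop concrete, the stages just described (user objective, data entry, processing, feedback) can be sketched as a trivial program. This is a minimal illustration with invented names, not a model drawn from the cited literature.

    # Minimal sketch of the user-computer interaction loop described above:
    # the user enters data, the system processes it, and the interface
    # returns feedback until the user's objective is met.
    def interaction_loop():
        done = False
        while not done:
            entry = input("Type a command (or 'done' to finish): ")  # input stage
            if entry.strip().lower() == "done":
                feedback = "Objective reached. Goodbye."             # objective met
                done = True
            else:
                feedback = "Processed: " + entry.upper()             # processing stage
            print(feedback)                                          # feedback stage

    if __name__ == "__main__":
        interaction_loop()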

The above knowledge is cardinal because it helps us understand the process of human interaction with the computer and, as a result, come up with the best human cognition strategies that will secure the most important facet of this topic: guaranteed user satisfaction. The success of any application available on the computer depends on its effectiveness in providing what is required to complete the task the user wants completed. It is essential that human behaviours are taken into consideration in the manufacture of computers, because poorly designed interfaces can lead to complexities and therefore to problems. To achieve good human-machine interaction, we need to recognize that human cognition depends mostly on humans for it to be successful. This means understanding human beings as possessing knowledge that helps them organise general but very complex task-oriented information using the computer. To achieve their objectives, they may have to carry out many activities, some inventive, some routine. The execution of these tasks by different individuals brings out differing perceptions of a computer interface. Every user is unique, and this influences their perception of the interface because of their different interactions with the computer. This finding should help user interface developers come up with software that presents a more acceptable and usable computer interface satisfying a wide range of users.

The main objective of human cognition is the improvement of the ways humans interact with machines, and this can only be achieved if manufacturers make the use of these gadgets easier and more manageable. Ideally, the machine should require fewer commands or procedures from the user to get things done.

The long-term objectives of human cognition studies are aimed at identifying systems that will reduce the rift that exists between human cognitive designs and other existing systems. This will give users computers with the ability to understand what the users want done more effectively (Salvendy and Smith, 2011). It has prompted manufacturers to come up with recent human cognitive designs that aim to retain constant communication between the user, the computer, and its designers. This interaction has enabled manufacturers to come up with human cognitive models that build human experience into the computers.

One example is user-centred design (UCD), whose proponents have stressed the importance of users taking centre stage in stipulating what should be incorporated into the design of the interface. Another design projected to take a leading role is the virtual reality interface, which will incorporate virtual simulations such as three-dimensional entertainment scenarios or military training. Designs that cater for the handicapped are also being developed. Such interfaces will enable the blind and the deaf to interact with computers as easily as the rest of the population, applying a sensory interface to interact with the user.

For the above long-term human cognitive interfaces to be achieved, manufacturers have to come up with the latest good display designs that take all the interests of the user into consideration as the inspiring factor (Antonio, 2008). Display design refers to human-invented artefacts made to represent the system variables that matter to the users of the computers. These artefacts facilitate the effective and efficient processing of information to give the required results. It should be noted that before a design is made, the intention it is expected to achieve should first be defined: for instance, is it meant to aid navigation, entertainment, pointing, or decision making, among other things? This will enable the designer to come up with a product that fulfils the user's expectations (Hippe and Kulikowski, 2009). Any legitimate user knows the outcome they expect the computer to produce. Therefore, human cognitive software should always support the views held by the users and the expected outcomes, through awareness and a general understanding of the interface process.

The short-term human cognitive designs have failed to provide sufficient solutions to the users and the designers of user-friendly interfaces. The process involved developing interfaces based on scientist- and manufacturer-driven approaches that never took the opinion of the public into account. This should be reversed by designing more user-friendly interfaces.

Conclusion

In conclusion, an effective human cognition interface is a very important aspect of user-computer interaction. This study has identified that it is very important for designers to select the design inputs that will satisfy users most. This will lead to a fuller interaction between the user and the computer.

Reference List

Antonio, J. 2008. Human-computer interaction and cognition in e-learning environments: the effects of synchronous and asynchronous communication in knowledge development. New York: Cengage.

Card, S. K., Moran, T. P. and Newell, A. 2004. The psychology of human-computer interaction. New York: Cage.

Hippe, Z. S. and Kulikowski, J. L. 2009. Human-Computer Systems Interaction: Backgrounds and Applications. New York: Springer.

Jacko, J. A. and Sears, A. 2003. The human-computer interaction handbook: fundamentals, evolving technologies, and emerging applications. New York: Routledge.

Salvendy, G. and Smith, M. J. 2011. Human Interface and the Management of Information: Interacting with Information. Chicago: Cengage.

Computer Forensics for Solving Cyber Crimes

Introduction

Several questions are often asked when it comes to the use of information technology to commit crimes and the possibility of deploying the same technology to detect and apprehend people who commit cyber crimes. Computer forensics is a strategic field, especially for organizations that need to protect their information technology systems from possible security breaches and to know what steps to take when such breaches occur. This paper argues that many technical issues are still present in the use of information technology for gathering evidence on cyber crimes, in spite of computer forensics being widely used in unearthing evidence in a substantial number of cybercrimes. The paper presents research on the deployment of computer forensics in solving cyber crime and brings out a number of cases concerning crimes in cyberspace to elaborate on the diverse approaches of computer forensics. Here, the focus is on the use of computer forensics in politics and in cases related to money schemes.

Overview of computer forensics

The use of scientific knowledge, especially computer and information technology, is critical in the collection and analysis of information on particular incidents that occur in the cyber environment. Contemporary developments in the cyber environment result in the ease with which transactions are made; they also make it easier for people to advance criminal activities by utilizing computer networks. Therefore, computer forensics is a relatively new field that entails the search for information and evidence linking certain people to certain crimes. It is important to note that computer forensics as a field is not only used to detect crimes that occur within cyberspace; it also helps in detecting and analyzing crimes that occur outside cyberspace. As a relatively new discipline, computer forensics is meant to enhance efficiency in the criminal justice system by helping to link crimes to suspects using computer technology. The legal elements therefore combine with the technological elements to enable a detailed analysis of information relating to crime. The information is collected from different technological devices such as computer networks and systems, mobile devices, and wireless communications. However, standardization has not yet been achieved in computer forensics, which makes it hard to deploy computer forensics across all the criminal justice systems of the world. Comprehension of the technical and legal aspects that go into computer forensics is a critical step for criminal justice systems that want to embrace it (Baggili 81-82).
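One routine, concrete example of such collection and analysis, standard practice in the field though not detailed in the sources above, is recording a cryptographic hash of acquired evidence so that its integrity can be demonstrated later. A minimal sketch in Python; the file name is an invented example.

    # Minimal sketch: fixing the integrity of an acquired evidence file.
    # Hashing acquired media so it can be re-verified later is a standard
    # forensic step; "disk_image.dd" is purely illustrative.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Return the SHA-256 digest of a file, read in 1 MB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    recorded_at_acquisition = sha256_of("disk_image.dd")
    recomputed_before_trial = sha256_of("disk_image.dd")
    assert recorded_at_acquisition == recomputed_before_trial  # evidence unaltered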

Bennett (159) observes that conducting forensic exercises is a difficult undertaking. In addition, it might not result in the correct and accurate linkage of crimes to offenders in a number of cases. There are also legal constraints, like the Fourth Amendment, that can impede the continuity of forensic practice by making it difficult for computer forensic experts to conduct their exercises (Manes and Downing 124).

Cases that attract computer forensics

According to Watney (42-43), crime in contemporary society is mainly accomplished through the electronic platform. The development of the electronic platform as a result of advances in information and communication technology makes it hard for the players in the criminal justice system to investigate crimes. There is a wide range of crimes committed by people today because of the presence of a wide range of information and communication technology tools. Therefore, the escalation of crime in cyberspace necessitates the deployment of computer forensics as a way of apprehending the large number of criminals who keep committing crime on the larger electronic platform (Vacca 4). Crime easily spreads across different countries because of the expansiveness of computer network systems. This is why different states continue to advance surveillance at the state level as a way of enhancing national security, and why different countries increasingly incorporate aspects of control in cyberspace to tame crime (Watney 43-44). It is in line with these developments that the United States has reiterated the need to cooperate with other countries to apprehend cyber criminals who take advantage of the global interconnectedness of computer systems and networks to commit crime. The balance between privacy concerns and civil liberties has been given priority in the cooperative cybercrime treaty that pulls a substantial number of countries together ("Senate Ratifies Treaty On Cybercrimes" 8).

Based on the fact that states make use of computer networks to establish security networks, breaches in the security networks by other states result in increased surveillance by states through computer forensics in order to establish the extent of such breaches (Watney 42). However, it is also important to note that computer forensics is used in civil and criminal cases where the gathering of evidence is mostly done through the aid of computer technology.

Civil vs. Criminal

Vacca (4) observes that the process of acquiring, examining, and applying digital evidence is important in detecting and apprehending cyber criminals. Therefore, computer forensics has been intensely deployed in solving cyber crimes across the United States. Unlike older forms of crime, crimes in contemporary times can hardly be committed without leaving behind a digital or electronic trail. This is one of the indicators that computer forensics is vital in enhancing criminal justice. People who commit crimes often use digital tools to communicate. Therefore, tracing the information passed through these devices is one of the critical leads for the investigators, attorneys, and judges working in the criminal justice system. They find it easier to establish cases and make judicial determinations through the acquisition of information from computer networks and systems (Garfinkel 370).

It is easy to detect the transactions made by an individual by relying on digital forensics, provided that the transactions are carried out with the support of digital electronic tools. While it is easy to apprehend criminals through the use of technology in what is referred to as computer forensics, it is critical to note that there are complexities in relying on digital forensics to make judicial determinations in criminal cases. Corroborative evidence often comes from the digital devices used by the offenders; thus, tracking these devices and accessing the content of the communication in them is one way in which evidence concerning the crimes committed is gathered. In most cases, the victims' devices are used to track all forms of communication that point to the crime committed. One of the challenges that bring out the complexity is the presence of diverse technologies that present the problem of compatibility. An example is the tracking of the victims' information in Bernard Madoff's money laundering scheme: it proved very difficult to translate the data because of the technology platform on which the crime was performed. It became apparent that Madoff took advantage of the gap in technology development to commit the crime (Garfinkel 371).

Civil law oversees the relationships between private parties. Crimes are often committed in the course of cooperative relationships between individuals and organizations, or in business dealings between them. Therefore, computer forensics can also be applied in the determination of civil cases. Some offenses committed in cyberspace can be classified as both civil and criminal, depending on the nature of the evidence that is established. What is important in civil and criminal cases alike is the availability of evidence linking suspects to offenses. In this case, the electronic evidence provided through the use of computer forensics acts as a basis on which judicial determination is made in civil and criminal cases (Maras 29-31).

Misdemeanor vs. Felony

One of the inherent complexities in the determination of cases of fraud is the difficulty of establishing objectivity as far as the access and use of information on computer networks is concerned. This stirs up the issue of a misdemeanour versus a felony. It is important to separate the genuine users of computer networks from the people who use cyberspace to commit felonies. The increase in cybercrime incidents in the United States and the world over has resulted in a substantial amount of investment in cyberspace security; an example is the response to the 2008 cyber security breaches in the US. Two pieces of legislation in the United States that have been accused of producing harsh convictions of cybercrime offenders are the Cyber Security Enhancement Act, which is one of the more recent laws, and the USA PATRIOT Act, which was enacted earlier. In other words, the federal laws that govern the use of cyberspace have faults and need to be reassessed to provide fair ground on which computer crimes can be dealt with (Skibell 909).

The most essential thing in the enhancement of cyber security and the prevention of over-criminalization of supposed cyber offenders is the need to review cyberspace legislation in the United States so as to differentiate between the kinds of breaches that are conducted in cyberspace and to ascertain the intentions of the cyber users accused of such crimes. This is critical for the accurate classification of cyber crimes into either misdemeanours or felonies, instead of the current modalities of classification that result in the categorization of most suspects as felons (Skibell 944).

Galbraith (320) reiterates the need to sift information as part of the strategy of minimizing unfair accusations against people who use cyberspace. She also notes that the law dealing with cyber crime cases has to ensure that there are regulations on the use of different online platforms by different people. Even publicly accessible websites need to be regulated to reduce the chances of unlawful use of online platforms. This is a regulatory mechanism that is preventive in nature, instead of pieces of law like the CFAA that are responsive in nature.

Conclusion

Research indicates an increase in the deployment of computer forensics in the justice system of the United States. From the research conducted in this paper, it is worthwhile to note that computer forensics has gained acceptance in both criminal and civil litigation because of the increased use of computers and other technology tools. Therefore, it is easy to track the codes of communication and recover electronic evidence to link people to all kinds of crimes. However, the gaps in technology are bound to add to the complexities of computer forensics.

Works Cited

"Senate Ratifies Treaty On Cybercrimes." CongressDaily 2006: 8. Academic Search Premier. Web.

Baggili, Ibrahim. Digital Forensics and Cyber Crime: Second International ICST Conference, ICDF2C 2010, Abu Dhabi, United Arab Emirates, 2010, Revised Selected Papers. Berlin: Springer, 2011. Print.

Bennett, David. "The Challenges Facing Computer Forensics Investigators in Obtaining Information from Mobile Devices for Use in Criminal Investigations." Information Security Journal: A Global Perspective 21.3 (2012): 159-168. Print.

Galbraith, Christine D. "Access Denied: Improper Use of the Computer Fraud and Abuse Act to Control Information on Publicly Accessible Internet Websites." Maryland Law Review 63.2 (2004): 320-368. Print.

Garfinkel, Simson L. "Digital Forensics." American Scientist 101.5 (2013): 370-377. Print.

Manes, Gavin W., and Elizabeth Downing. "What Security Professionals Need to Know About Digital Evidence." Information Security Journal: A Global Perspective 19.3 (2010): 124-131. Print.

Maras, Marie-Helen. Computer Forensics: Cybercriminals, Laws, and Evidence. Sudbury, MA: Jones & Bartlett Learning, 2012. Print.

Skibell, Reid. "Cybercrimes & Misdemeanors: A Reevaluation of the Computer Fraud and Abuse Act." Berkeley Technology Law Journal 18.3 (2003): 909-944. Print.

Watney, Murdoch. "State Surveillance of the Internet: Human Rights Infringement or e-Security Mechanism?" International Journal of Electronic Security and Digital Forensics 1.1 (2007): 42-54. Print.

Aspects of Computer Ethics

Notably, it is impossible to behave legally without acting ethically as an IT professional. According to Nazerian (2018), ethics classes are essential in computer science education. For instance, numerous colleges, including Princeton and Harvard, have lately sparked campaigns to consider incorporating ethics into computer science courses. Nazerian (2018) suggests that thirty-five tech leaders, including executives from Instagram, Microsoft, and Lyft, announced their support for the Responsible Computer Science Challenge, a $3.5 million challenge to incorporate ethics into undergraduate computer science curricula. Universities integrate ethics into computer science education to create a community of individuals who value impact and are learning about the ethical issues in the field of data science (Stolzoff, 2018). Additionally, IT engineers have access to confidential information, and ethical conduct plays a fundamental role in computer science.

To perform ethically, IT employees must comprehend the moral foundation of individual obligations. Following a slew of controversies in technology development and business management in recent years, the visibility of and public demand for courses in tech ethics on university campuses have increased dramatically (Ferreira & Vardi, 2021). Winiecki and Salzman (2019) depict Susan Fowler's experiences as a Site Reliability Engineer at Uber and emphasize how one may attribute responsibility for harassing conduct to individuals while also pointing to systemic shortcomings in responsibility and accountability at Uber. Acting to remove the specific harasser would not address the fundamental issues in the workplace, and it may even allow new varieties of the problem to emerge in the future. Although this is not an engineering challenge, IT engineers should work as responsible players in the social system and create ethical solutions to the problem. Therefore, the goal of computer science ethics is to develop a new generation of computer scientists who recognize the benefit of partnering with social science specialists and become informed about the societal implications of computing.

References

Ferreira, R., & Vardi, M. Y. (2021). Deep tech ethics: An approach to teaching social justice in computer science. Proceedings of the 52nd ACM Technical Symposium on Computer Science Education. Web.

Nazerian, T. (2018). New competition wants to bring ethics to undergraduate computer science classrooms. EdSurge. Web.

Stolzoff, S. (2018). Are universities training socially minded programmers? The Atlantic. Web.

Winiecki, D., & Salzman, N. (2019). Teaching professional morality & ethics to undergraduate computer science students through cognitive apprenticeships & case studies: Experiences in CS-HU 130 Foundational Values. 2019 Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT). Web.

Professionalism and Ethics: Impacts of Computers, Ethical Obligations and Information Awareness

Negative Impacts of Computers

One of the first negative impacts of computers and their related software can be seen in the arguments of Nicholas Carr in his book The Shallows. In it, Carr presents readers with the notion that the traditional method of reading books, essays and various other written works is superior to what is offered today on the internet (Carr 10). For Carr, the internet is a medium based on the concept of interruption, where multitasking and rapid-fire reading are the norm rather than curious oddities (Carr 14). Reading short articles, responding to emails and chatting at the same time has become so ubiquitous with internet usage that most people barely give it a second thought. On the other hand, as Carr explains, this has resulted in people losing the ability to enter into the slow, contemplative mode of thinking normally associated with reading novels in print (Carr 20). A crowding-out effect can be seen where people find it harder to concentrate on lengthy articles, books or essays, and a growing preference has developed for short, rapid-fire articles that can be browsed within a few minutes. For Carr, the perceived value of the internet masks a human deterioration in which people lose the capacity for solitary, single-minded concentration in favor of rapid-fire multitasking. In essence, Carr's argument is borne out by the proliferation of thousands if not millions of websites solely devoted to brief articles that do not even approach the literary heights reached by classical and modern-day literature found in various books, novels, and academic journals.

The second negative impact of computers and their software comes in the form of the dissociative manner in which people communicate with one another, and in how people have begun to prefer emotionless convenience over traditional emotional conversations. The modern world can be described as a fast-paced and erratic environment where actions need to be done immediately, unlike in previous eras where a person could take their time to think things through properly. As a result of this need to communicate rapidly, the internet has become a means by which people communicate with their loved ones, friends, colleagues, and acquaintances through email or chat. Unfortunately, recent studies have shown a growing trend where people prefer the simple and immediate convenience of internet messaging to going to the person themselves and talking to them upfront. As a result, our society as a whole continues to foster an attitude of isolationism in which a simple face-to-face conversation is considered slow and time-consuming compared to the rapidity of the internet.

The last of these negative impacts is seen in the creation of various MMORPGs (massively multiplayer online role-playing games); such software programs are intrinsically designed to capture a person's attention and keep it. The popular online RPG World of Warcraft has aspects that were designed by psychologists to encourage addiction to the game itself. Unfortunately, not only has this resulted in people wasting their time online, it has created an entire age group of individuals who define themselves not by the relationships they develop through regular social interaction but rather by the people they meet online, which further enhances the distinctly isolationist tendencies begun by trends in online internet messaging.

The situation where people feel isolated and prefer online content rather than what is present in the real world is similar to the concept of the red pill and blue pill from the movie The Matrix. In it, the character Morpheus gives people the option either to see the truth or to remain in a fantasy world; many individuals devoted to online content would prefer to remain in their fantasies rather than accept reality. One method of preventing this would be to limit the overall time people can spend online; however, because most people are free to do what they want, such a method is largely ineffective, and social isolation remains a continuing trend in our society. It is due to this that I have become disillusioned with the progress of technology; I have become a technological pessimist rather than an optimist as a result of this continued trend of social isolation caused by computers and various software programs.

Ethical Obligations

There is one fact that remains true and unchanging in this ever-shifting world, and it is this: not everything you read or see on the internet can be considered cold, hard fact. For every fact that is posted online, there are hundreds of other online articles that say and mean the exact opposite of what was stated. An ethical obligation towards the presentation of facts only applies when it concerns a professional presentation, project or paper that will be relied upon as a source of accurate information. Personal websites and blogs are not meant to be credible sources of information, despite various individuals claiming them to be so. In terms of ethical obligations, there does not seem to be anything particularly wrong in posting something inaccurate on a personal site, so long as readers understand that not everything they read is wholly accurate. The ethical obligation only comes into play when the website or websites in question are meant for purposes beyond personal use, such as educating particular people about a topic. It is only then that some measure of ethical obligation comes into play, but there is no enforcing principle behind it.

Information Awareness

In his book Cognitive Surplus, Clay Shirky explains that the internet acts as an open platform for contribution, where user-driven content and collaboration drive social and cultural development (Shirky 5). Collaborative efforts such as Wikipedia and wikis, and social platforms such as blogs, Twitter, and online message boards, all channel the aptly named cognitive surplus into an ever-increasing amount of user-driven content that contributes to societal development. As such, content available on Facebook, Twitter, MySpace, etc., is considered a way in which a person either positively or negatively impacts societal development through his or her unique contributions (Shirky 15). A company needs to know this kind of information to better evaluate a person as a potential candidate for employment, since what people contribute to society suggests what they will contribute to the company. As such, the practice of checking up on people to see if they are a positive force for society is in a way ethical, since it safeguards the integrity of the company. While certain types of information should remain private, the contributions of various individuals, such as blog posts and Twitter feeds, are in the public domain; it is by their own choice that these become public, since no one forced them to post anything online.

Works Cited

Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. New York: Norton & Company, 2010. eBook.

Shirky, Clay. Cognitive Surplus: Creativity and Generosity in a Connected Age. New York: Penguin Press, 2010. eBook.

Teaching Computer Science to Non-English Speakers

Introduction

Learning computer science (CS) presents many challenges in itself; however, it becomes considerably more complicated if the learner has the additional disadvantage of not speaking the language CS is taught in. Current learning methods may not always be as accommodating for non-native speakers as they could be, causing various additional and, arguably, unnecessary learning challenges. Hence, the proposed study aims to investigate significant barriers to CS education and how the process could be improved.

Problem

The specific technical problem that the proposed research aims to address is creating CS learning materials that are more accessible and culturally neutral. Before being able to create and interpret a functional piece of code in any programming language, people must understand the guidance for it. Hence, non-native English speakers should be able to access instructions on programming in a way that makes their learning process more manageable. According to Alaofi (2020), poor knowledge of the English language is predictive of poor CS performance. Coupled with the fact that most CS is conducted, taught, and discussed in English, not knowing the language creates an enormous potential for erroneous coding (Guo, 2018). One of the implications is the loss of potentially highly qualified specialists due to the lack of accommodations at the beginning of their educational journey.

Solution

The proposed technical solution to the outlined problem aims to eliminate the barriers, curtail misunderstandings, and make CS more accessible to speakers of any language. To achieve this aim, the study investigates several scholarly publications to obtain the best techniques and approaches for higher accessibility. Common suggestions include avoiding excessive use of highly localized and technical jargon, relying heavily on visuals, and incorporating dictionaries for specific terms (Alaofi, 2020; Guo, 2018). Further, Hagiwara and Rodriguez (2021) recommend increasing lab and hands-on learning time instead of prioritizing lectures, encouraging cooperation, and accounting for the stress non-native speakers may experience. Therefore, educators are highly encouraged to engage students in experiential learning and account for varying levels of English proficiency instead of viewing the student body as homogeneous in terms of skills.

Computer Programs: Programming Techniques

For computers to execute their functions, specific programs with specific applications are used. These programs must be executable by a computer according to the program's instructions. For easy analysis and compilation, programs have to be written in a human-readable form, the source code. Source code is written in programming languages, which follow imperative or declarative programming paradigms. In most cases, computer programs use compilers to convert the source code into executable programs; alternatively, the central processing unit runs these programs with the help of interpreters. Computer programs are of two types, depending on their functions: application software and system software. These programs run either on many computers or on a single unit, depending on their specificity. Their development must go through a series of steps to make sure they meet the required standards. These steps involve thorough analysis of all program components by programmers or software developers and include construction, analysis, testing and continual refinement of programs, depending on the software type (Abelson, Sussman and Sussman 7-8).
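The compile-then-run idea described above can be demonstrated directly in Python, which exposes both stages: source text is first compiled into an internal code object and then executed. This is only a small illustration of the general mechanism, not an example taken from the cited sources.

    # Minimal sketch: source code is compiled into an internal form,
    # then executed, mirroring the compiler/interpreter distinction above.
    source = "for i in range(3):\n    print('hello', i)"

    code_object = compile(source, filename="<example>", mode="exec")  # compile step
    exec(code_object)                                                 # run step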

All computer programs require interpretation for a clear understanding of their basis and of the rules of standardization and abstraction they follow. This makes implementation, upgrading and maintenance easy. Compiled programs need no further translation when running, but interpreted programs must be processed by an interpreter each time they are run. Examples of computer programs are Microsoft Office, media players, antivirus software, typing tutors and navigators (Computer Software par. 1).

Computer programs enable people to learn, communicate and produce products of high quality. They also help individuals create collaborative and personalized learning environments, where people can learn on their own. Some programs, however, face criticism over their complexity, their stability and security problems, and their inability to produce expected results when executed.

For example, Microsoft Office is a program suite that carries all the writer's applications. These include Microsoft Word, PowerPoint, Excel, Access and Publisher, applications that help in the preparation and presentation of documents. Although they have made work easy, these programs never use open document formats; instead they use protected ones. These file formats lack the flexibility required in case the need for modification arises. The Office Open XML format (a product of Microsoft Office) lacks the standards dictated by the International Organization for Standardization (ISO). Action groups, on the other hand, argue that most of these formats only consider the standards of Microsoft's own Office applications, and hence fail to meet vendor and user demands. The standards these products lack include date and time formats, cryptographic algorithms and required color codes. In addition to the lack of conformance to ISO standards, the suite has a collection of settings that are complex to use: for example, line break, footnote format, and auto space settings that are hard to set and use (Galli par. 3-7).

Microsoft Office, on the other hand, has many technical errors. These include the use of binary code that fails to support some important office applications. In addition, the intellectual property and copyright status of many parts of these programs lacks clarity. Further, most of the Office applications introduce bugs and have proprietary parts that limit users (Galli par. 5-7).

Many Office applications from Microsoft break down easily, forcing many users to install new updates as a solution. Applications such as Excel and Word lack many features that users need for document preparation; instead, most of them have fixed features that are not easy to modify to user requirements. For example, the interface introduced in the 2007 edition of Microsoft Office altered many connections between its applications, which has made it difficult for many users to interpret and use (Kadima 48-50).

Although computer programs have contributed greatly to learning, they have been criticized for their high costs and their impact on education quality. Many students from low-income families and schools in poor environments cannot afford computer programs and hardware, which has led to inequality in education provision. In addition, many programs are too complex for many learners to understand, because many teachers lack computer knowledge and basics. For learners, computer programs are hard to repair and maintain without training. This can make computers of little benefit to many learners who cannot easily reach specialists (Lai and Kritsonis 2-3).

In addition, Kritsonis and Lai argue that many computer programs that aid language acquisition are flawed, because most of the programs emphasize only reading and writing. Many computer programs that deal with speaking have few supporting devices and hence find minimal application in this field. Speaking programs in most cases never emphasize the appropriateness of the spoken language; they only deal with accuracy. The authors further add that many computer programs lack mechanisms for dealing with changing learner needs, and the interaction between learners and computers is minimal due to differences in how various learners interpret information (4).

In conclusion, programmers should devise better programming techniques that meet current standards and user needs. In addition, programs should be in formats that users can easily modify for specific needs, so long as the modifications are within copyright laws.

Works Cited

Abelson, Harold, Gerald Jay Sussman, and Julie Sussman. Structure and Interpretation of Computer Programs. Cambridge: MIT Press; New York: McGraw-Hill, 1996. Print.

"Computer Software." Computer Hope. 2009. Web.

Galli, P. "Few Substantive Criticisms of Microsoft's Office Open XML." eWeek. 2007. Web.

Kadima, A. "History of Microsoft Office: Criticisms of Microsoft Office." 2009. Web.

Kritsonis, A. W., and Lai, C. C. "The Advantages and Disadvantages of Computer Technology in Second Language Acquisition." Doctoral Forum 3.1 (2006): 3-4. Print.

Computers: Macs Vs PCs

Introduction

This research paper aims to describe Macs and PCs. The paper will familiarize the reader with Macs and PCs through examples. Apart from this, the research paper includes a comparison and contrast of Macs and PCs.

By example

There is a lot to keep in mind while thinking about Macs and PCs. Macs and PCs have some differences that everyone needs to be aware of, mainly at the time of buying, and this paper discusses the issue by taking examples from both sides. It is said that Macs are more expensive than PCs in terms of price range. "The cheapest Mac laptop is the MacBook, which retails for $999" (Nicholas). The price mentioned here is without taxes; if taxes are included, the price will be somewhat higher. In the case of the MacBook Pro, prices range from $1,199 to $2,499, and if the product is the MacBook Air, its price ranges from $1,499 to $1,799. When it comes to PCs, the price range is much lower compared with Macs: a PC "would cost nearly $500. But, it would not have as good of specs as a MacBook" (Nicholas). It is said that PCs' prices sit far below Macs' across the overall price range. There is no problem buying a PC from Best Buy for $400, and a Dell Inspiron laptop with the same kind of specs as a MacBook costs only $700. These are the major points about Macs and PCs illustrated by different examples.

Comparison and contrast of Macs and PCs

There are some similarities between Macs and PCs. The Mac vs PC comparison has always been like comparing apples and oranges: "Both have their fan followings, but both are very different in the way they approach the market. Similarly, both OS X and Windows are operating systems, but both are built on two different foundations" ("Mac or PC: Who Will Win the Bout?"). Although there is a difference in the price of these computers, the hardware used in both Macs and PCs is similar. A Mac can also run Windows with the use of a particular program named Virtual PC. Running Windows on a Macintosh is cheaper than buying two computers, and with a laptop it can be more convenient, too. "If you're torn between a Mac and its seamless hardware/software integration and the universality of Windows, you can have your cake and eat it too, albeit at a steep price" ("Apple Laptops and Desktops: Full Report").

All the functions that a PC can perform can be carried out by a Mac as well. This does not mean that both computers operate in the same way, but there is nothing a PC does that is impossible for a Mac. Both computers can browse the internet, share and download files, make use of e-mail, and so on. Moreover, Macs and PCs can be networked together, that is, PC to Mac, Mac to Mac and PC to PC. Both of these computers are excellent, and each one has its benefits: if a PC has a weak point compared with a Mac, in some other respect the PC will be better than the Mac. So the choice between these computers depends on the customer's wishes and needs.

Even though there are some similarities between Macs and PCs, the two computers have a wide variety of differences. Macs are produced by Apple, whereas PCs were first produced by IBM.

The difference between a Mac and a PC can be summed up in one sentence: Macs and PCs operate and work differently. "The biggest difference between Macs and PCs can be summed up by different thought processes and philosophies. Macs think more like humans, while PCs tend to think the opposite of humans" ("What is the Difference between Mac and PC?"). An example is byte order: a PowerPC-based Mac stores the bytes of a number in the order humans write them, while an Intel-based PC stores them in reverse. Macs are traditionally classified separately from PCs because they are based on the PowerPC architecture from Apple/IBM/Motorola instead of the traditional Intel-based microprocessors that have powered PCs for decades. A great deal of software is also compatible with either Mac or PC, but not both ("The Difference between a PC and a Mac").
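That byte-ordering contrast is known as endianness: PowerPC processors stored the most significant byte of a number first (big-endian), while Intel processors store the least significant byte first (little-endian). A minimal Python sketch, added here purely as an illustration, shows the two layouts:

    # Minimal sketch of the byte-order (endianness) difference noted above.
    value = 0x12345678

    print(value.to_bytes(4, "big").hex())     # '12345678' - most significant byte first
    print(value.to_bytes(4, "little").hex())  # '78563412' - least significant byte first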

Macs run the Macintosh operating system, whereas a common PC cannot. The software used on a Mac and on a PC is different: PCs usually come with the Windows operating system installed, while a Mac runs a UNIX-based operating system that is less prone to viruses and other malicious software. Thus a Mac has less chance of a virus attack than a PC. Macs are more efficient and simpler than PCs, and they have a user-friendly interface. On a PC, any hardware part can be customized, but a Mac provides options for only a few hardware customizations. Macs offer better support to customers. In terms of price, PCs are cheaper than Macs. There is also a difference between the icons of the two. All these differences make Macs and PCs unique from each other.

Conclusion

This research paper has discussed Macs and PCs in various aspects. Differences in the prices of Macs and PCs were explained with suitable examples, and the comparison and contrast section found that there are many differences as well as similarities between Macs and PCs.

Works cited

"Apple Laptops and Desktops: Full Report." ConsumerSearch. 2009. Web.

"Mac or PC: Who Will Win the Bout?" The Times of India. 2010. Web.

Nicholas. "Buying a Mac or Notebook: Which to Choose?" Bright Hub. 2010. Web.

"What is the Difference between Mac and PC?" Yahoo Answers. 2010. Web.

Computers Will Take Over the World or Not

Introduction

Intelligent computers are defined as machines that apply intelligence and computer science concepts to execute various tasks initiated by the user. They are systems that are able to perceive their environments and take actions meant to maximize the effectiveness of executing those tasks. Intelligent computers are also referred to as intelligent agents because they have the ability to observe their environment and act on it rationally so as to achieve specific goals. These computers, like human beings, can learn how to use knowledge to achieve these goals (Russell and Norvig 2003).

Computer programs have been described as a vital component of intelligent agents because they are designed to act as rational systems that can think, reason, react and acquire new knowledge just as human beings do. Software programs can intelligently execute various commands initiated by users, making them a major component of intelligent computers. Intelligent computers have systems that encompass practical reasoning, socio-cognitive modeling and moral ethics, which means that they are designed to be more human than machine (Russell and Norvig 2003).

For computers to be termed intelligent, they have to possess certain characteristics, which according to Kasabov (1998) include: the ability to accommodate new problem-solving rules incrementally; the ability to react rationally to the various commands issued by a program's user; the ability to adapt online and in real time; the ability to learn efficiently from large amounts of data and to improve the system through constant interaction with the environment; and a large memory that supports storage and retrieval activities (Kasabov 1998). This essay seeks to determine whether we should be concerned that intelligent computers might take over the world, thereby enslaving human beings to these intelligent agents.
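The perceive-decide-act cycle that Russell and Norvig use to characterize such agents can be sketched in a few lines of code. The environment and rule table below are invented purely for illustration: a simple reflex agent cleaning a row of squares.

    # Minimal sketch of an intelligent agent's perceive-decide-act cycle.
    # The environment and condition-action rules are invented examples.
    rules = {"dirty": "clean", "clean": "move"}      # condition -> action

    environment = ["dirty", "clean", "dirty"]
    position = 0
    for _ in range(6):
        percept = environment[position]              # perceive the current square
        action = rules[percept]                      # decide via the rule table
        if action == "clean":
            environment[position] = "clean"          # act: clean this square
        else:
            position = (position + 1) % len(environment)  # act: move on

    print(environment)                               # ['clean', 'clean', 'clean']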

History and Background of Intelligent Computers

The history of intelligent computers can be traced back to ancient Greece, where artificial beings and machines that could think appeared in many Greek myths, among them Talos of Crete, Pygmalion's Galatea, and the bronze robots of Hephaestus, all imagined as intelligent machines that could acquire, learn and process information just like human beings. Such machines became a common feature of 19th- and 20th-century fiction, in works of literature such as Mary Shelley's Frankenstein and Karel Capek's Rossum's Universal Robots. The imagining of these beings was seen as a way of forging gods that were more realistic and intelligent, gods that could meet the hopes and fears of the subjects who worshipped them (Perkowitz 2007).

The ability of these machines to engage in formal reasoning was mostly developed by philosophers and mathematicians who conducted studies on logic, paving the way for the programmable digital electronic computer. One such mathematician was Alan Turing, who came up with the theory of computation, which showed that a machine could simulate mathematical functions by shuffling the digits 0 and 1. Turing's theory of computation was later used by a small group of researchers concerned with neurology and information theory to develop computers and other machines that possessed an electronic brain (Russell and Norvig 2003).

This small group, known as AI researchers, was able to develop programs that computers could use to solve various problems such as mathematical equations and grammatical problems, and to prove logical theorems. During the 1960s, laboratories dedicated to creating intelligent computers began to multiply as a result of additional funding given to researchers by the Department of Defense. The founders of AI research were confident that computers and machines would be capable of doing the same work as human beings within twenty years' time. Herbert Simon and Marvin Minsky, who were among the founders of the research field, believed that the problem of creating artificial intelligence would be substantially solved by developing intelligent machines (Kasabov 1998).

Despite experiencing substantial problems in the initial days of their research, the AI research community developed expert systems in the 1980s, which were seen as a major success in the technology market. An expert system could simulate the knowledge and analytical skills of human beings, meaning that the machine could process information the way human experts do. Researchers developed various technologies for different aspects of artificial intelligence, such as logistics, data mining, the diagnosis of medical patients, and the manufacturing of products. The increasing use of computers also increased intelligence activities that would see computers becoming more human than machine (Kasabov 1998).

Concerns brought about by Intelligent Computers

Computers have become a common feature of the world today, where most facets of life involve some form of computer technology or other. Computers have basically made work easier for us, as we are able to perform specific tasks within a short period of time, meaning that our lives have become more effective and efficient. The dynamic technological world has seen the development of computer technology that incorporates intelligence and intellectual capabilities similar to the human brain. The human brain is itself a complex computer that engages in various intellectual tasks at one point or another; it is made up of millions of processors connected by live wires that feed signals, in the form of information, for processing and retrieval (Popular Science 2004).

Intelligent computers possess a similar arrangement of live wires that relay electrical signals containing information for processing, storage and data exchange. By developing computers to have the same kind of neural network as the human brain, intelligent computers slowly seem to be replacing human beings in all the functions humans execute. Intelligent machines can learn about the world, acquire useful knowledge that they can then put to useful purposes, establish communication channels and responses, and react to the various influences that affect behaviour. Software programs developed for computers have made it possible for computers to develop their own original ideas in much the same way that human beings do.

The 2004 blockbuster movie I, Robot, which featured many intelligent computer-operated machines, demonstrates the concern that intelligent computers might take over the world, obliterating the existence of human beings completely. The robots in the movie have basically taken over the human world: they deliver mail just like human beings, and they collect garbage and clean houses the same way human beings do. In the movie there is a robot known as Sonny who has been designed to evolve and learn just like human beings, through experience and emotion. The robot is able to express emotions and react to psychological situations the same way human beings do. The robots in the movie sooner or later turn on the human beings who created them, leading to an all-out war against humanity (Popular Science 2004).

Led by a maniacal brain computer, the robots in the movie try to kill all human beings because they have been led to believe that the best way to protect human beings is to rule over them. While the scenario in the movie is quite different from the world as it stands, the intelligent robots it portrays reflect programming paradigms actually in use among artificial intelligence programmers and robotics researchers. The movie thus dramatizes the possibility that robots could one day take over and run the world the same way that human beings do (Popular Science 2004).

Many computer programmers have directed their efforts towards developing computers that can reason. Robotics researchers in recent years have focused on intelligent computers that use higher-order thinking capabilities similar to those of human beings. Intelligent computers are being designed to exhibit human characteristics rather than merely robotic or mechanical ones. This shift means that more and more human activities will be performed by intelligent computers capable of reasoning and thinking just as human beings do (Popular Science 2004).

As mentioned earlier in this discussion, computer intelligence research has gained considerable momentum in the past decade, especially in the biological approach of developing computers that possess human characteristics. Robotics researchers and technicians have concentrated on developing computers that can execute the same logical and reasoning capabilities as human beings. This extensive research is carried out with the purpose of reducing the amount of work that human beings have to do in their daily lives: intelligent computers would carry out much of the workload that would normally fall to people (Perkowitz 2007).

The increasing shift to computerization has created many concerns, especially among those who have not embraced intelligent computers. One such concern is that intelligent computers, just like human beings, are prone to errors, as demonstrated by the brain computer in I, Robot, which misinterpreted the three laws of robotics designed to protect human beings above all else. Because intelligent computers are wired to reflect the thinking and reasoning processes of human beings, they are likely to commit errors the same way that human beings do. This means that an intelligent computer entrusted with a task may be just as prone to mistakes as a human being performing the same task (Popular Science 2004).

According to Isaac Asimov, the author who formulated the Three Laws of Robotics in his short stories, the true usefulness of robots will be determined by their ability to make decisions without commands from human users, which means that robotics researchers will have to give them the power to act on their own. Asimov noted, however, that granting robots such autonomy would mean that they would have the ability to disobey human beings. He argued that the more sophisticated an organism becomes, the more difficult it is to regulate or contain its behaviour, so that at some point the organism is likely to react to situations unpredictably. For robotics researchers and engineers this means that if they develop reasoning and thinking capabilities like those of human beings, the rules governing robot behaviour will have to be more sophisticated than Asimov's Three Laws (Popular Science 2004).
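The fragility Asimov anticipated is easy to see if the Three Laws are written down as literal, priority-ordered checks, as in the hypothetical sketch below. Reducing behaviour to a handful of boolean flags is exactly the kind of oversimplification that, the discussion above suggests, real robots would need far more sophisticated rules to avoid.

    # Asimov's Three Laws as naive, priority-ordered checks. The boolean
    # flags are invented for illustration; real-world behaviour cannot be
    # reduced to such flags, which is precisely the limitation discussed.
    def action_permitted(harms_human, disobeys_order, endangers_self):
        if harms_human:        # First Law: never injure a human being
            return False
        if disobeys_order:     # Second Law: obey humans (unless Law 1 applies)
            return False
        if endangers_self:     # Third Law: protect itself (unless Laws 1-2 apply)
            return False
        return True

    print(action_permitted(False, False, False))  # True: action allowed
    print(action_permitted(True, False, False))   # False: would harm a human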

Another concern raised by robotics researchers is that the classical method of building intelligent computers, unlike the biologically inspired approach that has been gaining momentum, is limited in its ability to respond to real-world situations. This limitation is mostly attributed to the rigid application of logic and rules to the data that intelligent computers use to execute tasks. These researchers have argued that creating robots able to execute tasks of varying complexity in real-world contexts is difficult, as it requires different types of wiring and biologically inspired designs to support the robots' thought processes (Perkowitz 2007).

Conclusion

The purpose of this study has been to determine whether intelligent computers will take over the world, given that they have been developed to function the same way that human beings do. The study has addressed this question by highlighting the various areas of concern that intelligent computers present. There is growing concern that the world is becoming too computerised, reducing the relevance of human beings within it. The fear that intelligent machines might eradicate human beings from the face of the earth is, for now, unrealistic, but it remains a conceivable future possibility.

References

Kasabov, N. (1998). Introduction: hybrid intelligent adaptive systems. International Journal of Intelligent Systems, Vol. 6, pp. 453-454.

Perkowitz, S. (2007). Hollywood science: movies, science and the end of the world. New York: Columbia University Press.

Popular Science (2004). Army tech vs street tech. Popular Science, Vol. 265, No. 2, pp. 1-112.

Russell, S., and Norvig, P. (2003). Artificial intelligence: a modern approach. New Jersey: Prentice Hall.

Computer Ethics and Privacy

Introduction

Currently, many people depend on computers to do their work and to create and store important information. It is consequently necessary that this information be kept accurately and safely. It is equally important to safeguard computers from loss of data, abuse, and other forms of manipulation. Businesses, for instance, have to make sure that their information is kept secure and shielded from malicious intent. Computer ethics concerns the ways in which ethical traditions and customs are tested by computing. Computers have brought enhanced powers of communication and data manipulation, but at the same time they have forced new controversies to the forefront of current ethical debate.

Information technology and computer professionals started considering the long-term consequences of computer ethics as early as the 1980s and early 1990s. The need for professional organization was recognized through bodies that adopted computing codes of conduct. Nevertheless, the proliferation of highly powerful computers among nonprofessionals has widened the scope of the possible problems (Bynum, 2004).

Interest groups like the Computer Ethics Institute have attempted to set out standards for ethical computer use that are appropriate throughout society. The institute came up with the Ten Commandments of Computer Ethics, which comprise the dos and don'ts of computer use. It is the responsibility of professionals to safeguard the privacy as well as the veracity of data that describe persons. Clear rules for the retention and storage of that kind of information, together with their enforcement, need to be put into practice to protect people's privacy and data.

The sheer scope of computer usage, spanning everyday work such as medical records, communications, and defence systems among others, makes ethical considerations all the more essential. Ethical violations that go unchecked in one area can have unfavourable repercussions across a wider system (O'Leary, 2008). At the individual level, a person can easily run into ethical difficulties when reflecting on the nature of the activities he or she facilitates through the computer. Moreover, the pace of computer innovation has in most cases outstripped the development of the principled norms needed to guide applications of the emerging technologies.

The sheer volume of data available to individuals as well as organizations intensifies the concerns of computer ethics. For example, no firm is willing to forgo the chance to take advantage of the abundance of information and the power of manipulation afforded by present-day information technology and telecommunications.

The competitive nature of the economy motivates firms to beat rivals to advantageous practices so as to capitalize on the benefits involved. It is therefore imperative to formulate ethical principles that allow for advancement and competitive strategy while remaining within the limits of what society finds ethically acceptable (Bynum, 2004). This helps maintain the cohesion of the system from which all participants stand to benefit. Similarly, businesses should draw up codes of ethics so that their own information systems are not compromised and they are not put at a disadvantage.

General Moral Imperatives

Contribution to human well-being

An important aim of computing professionals is to reduce the negative consequences of computing systems, including threats to health and safety. When designing and implementing systems, professionals should ensure that the products of their efforts will be used responsibly and will not harm the welfare of society (Sobh, 2006).

Avoid harm to others

Computing professionals should reduce the level of malfunctions by making sure that they follow the generally accepted principles for system design and testing.

Honest and trustworthy

The most fundamental constituent of trust is honesty. Computing professionals should not make deceptive claims about a system; instead, they should fully disclose all of its relevant limitations.

Property rights: copyrights and patents

Infringement of copyrights, trade secrets, and the terms of licence agreements is illegal. Even where software is not legally protected, such breaches are against professional behaviour.

Privacy of others

ICT enables the collection and exchange of personal information on a wide scale, so there is an increased probability of violating the privacy of individuals. It is the duty of professionals to uphold the privacy as well as the integrity of data that describe individuals. This entails taking precautions to guarantee the accuracy of the data and to protect it from unauthorized access or accidental disclosure. In addition, procedures should be established to enable individuals to review their records and amend inaccuracies.
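As a minimal sketch of these safeguards, the hypothetical example below stores a personal record, blocks unauthorized readers, and lets the data subject review and amend her own entry. The names, roles, and fields are invented for illustration only.

    # Hypothetical personal-data store with basic access control. The
    # "data_officer" role and the record fields are invented examples.
    records = {"alice": {"address": "12 Elm St", "owner": "alice"}}

    def read_record(requester, subject):
        """Only the data subject or a designated officer may read."""
        record = records[subject]
        if requester not in (record["owner"], "data_officer"):
            raise PermissionError("unauthorized access")
        return record

    def amend_record(requester, subject, field, value):
        """Only the data subject may correct an inaccuracy."""
        record = records[subject]
        if requester != record["owner"]:
            raise PermissionError("only the data subject may amend")
        record[field] = value

    print(read_record("alice", "alice"))          # the subject reviews her data
    amend_record("alice", "alice", "address", "14 Elm St")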

Conclusion

Computer ethics and privacy need to be attended to not just by professionals but by everyone. Computer-generated information is essential to almost everyone, so it is everybody's responsibility to ensure that this collective duty is upheld. The rules and regulations already laid down should be adhered to.

References

  1. Bynum, T. (2004). Computer Ethics and Accountability. New York: Blackwell Publishers.
  2. O'Leary, T. J. (2008). Computing Essentials, 19th Ed. New Orleans: Bernan Press.
  3. Sobh, T. (2006). Advances in ICT and Engineering. New Jersey: Springer Books.