Impact of Computers on Business

The advent of the 21st century saw mass adoption of commercial electronics across industries, which led to the emergence of the digital age. While operational business computers were introduced as early as the 1970s, the devices everyone is familiar with today and can easily use for daily functions arrived in the late 1990s. Over the decades, computers have grown more complex yet more powerful, usable for a variety of purposes, and adaptable. As businesses massively adopted computers, the very core nature of doing business and managing operations changed for all stakeholders (Petersen, 2019). This paper seeks to explore the impact of the computer and technology on the business world.

Data Storage and Manipulation

Prior to computers, all information was stored on paper. Information had to be written or typed and organized through complex bookkeeping in each business. This was ineffective, time-consuming, and took up significant resources. Paper files could be easily misplaced, human error was prevalent, and the process was costly. Computers allowed for digital file storage, which completely revamped how businesses collected, stored, and used data and information, both about their own operations and outside contexts. Once the Internet arrived, data could be stored even more safely in virtual clouds, while information was accessible at a moment's notice (IBM, n.d.). Digital file storage presented many benefits, including organizing and storing data with greater ease, reducing human error, creating back-up copies, and allowing data to be searched for the necessary information. Any given file can be retrieved, shared, and manipulated in any way the business requires in a matter of seconds.

As is well known, the very concept of digitally collected data has fundamentally changed how businesses operate. While business decisions relied on empirical data in the past, that meant rudimentary statistics based on data points collected and reported on paper. Digitization now means that data points from both historic and present operations can be used. First, the shift in various processes discussed later allowed for much more sophisticated data gathering in both internal functions and commercial endeavors. Second, computers provided opportunities to perform highly complex manipulations, statistics, and calculations based on a range of algorithms (Leonard, 2018). Depending on the business, this can yield highly accurate and predictive data strategies for enterprise decision-making and commercial success.
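As a minimal sketch of the kind of calculation described above, the short Python program below computes summary statistics and a naive trend over a series of sales figures. The data and variable names are illustrative assumptions, not figures from any real business.

```python
import statistics

# Hypothetical monthly sales figures (thousands of dollars) -- illustrative data only
monthly_sales = [112, 118, 121, 119, 127, 133, 131, 138, 142, 140, 149, 155]

mean_sales = statistics.mean(monthly_sales)
stdev_sales = statistics.stdev(monthly_sales)

# A naive linear trend: average month-over-month change across the period
trend = (monthly_sales[-1] - monthly_sales[0]) / (len(monthly_sales) - 1)

print(f"mean: {mean_sales:.1f}, stdev: {stdev_sales:.1f}, trend: {trend:.2f}/month")
```

A spreadsheet or statistics package performs the same computation at scale; the point is that what once required manual tabulation is now a few lines of code over digitally stored data points.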

Job Functions and Processes

As mentioned, the digital transformation brought companies into a completely new space. Multiple business functions changed critically, redundant ones were removed, and new ones were added. The core function of most businesses is now to operate within the digital realm, whether through marketing, logistics chains, e-commerce, or other channels. Work could be done much faster on computers, and new programs were developed for functions ranging from finance to HR to team management. Many corporations had to transition, with the majority of the global workforce now working from behind a computer. The computer age was one of the catalysts of the white-collar revolution, essentially shifting many jobs from blue-collar industrial work or low-skilled labor into the office (Browne, 2017). This was especially relevant for highly developed countries, which then outsourced much of the manual labor to the developing world. Therefore, the computer not only brought changes to companies and their processes but fundamentally shifted the economy and the geographical distribution of labor, contributing to the globalization of business processes.

Communications

The foundation of all businesses, from pre-digital times to modernity, has been communication, from internal communication among staff to external communication with stakeholders, clients, and consumers. The computer and digital tools have revolutionized the art of communication in business. When computers first emerged, e-mail became the popular tool of communication, and it remains highly used and effective. It allowed businesses to transmit written messages in a matter of seconds, distributing information to whomever and to as many people as needed (LaMarco, 2019). As technological solutions developed, other tools emerged, such as corporate instant messaging services and, most recently under remote working, video conferencing, which replaces face-to-face meetings while still allowing information to be discussed with greater detail and human interaction than written messages permit (Harris-Briggs, 2018). These digitally driven communication methods have pushed business structures and operations toward more effective information sharing and reduced inefficient face-to-face contact and meetings (Trint, 2021). Furthermore, because digital solutions are typically free or highly cost-efficient, the more effective and affordable communication becomes, the more businesses can save.

Mobility and Flexibility

Computers brought a certain level of freedom to many businesses. As discussed, the ability of one device to perform virtually all backend tasks of business operations has provided much greater flexibility. As computer technologies developed into powerful laptops, tablet PCs, and smartphones, they gave rise to what is known as IT mobility. An individual does not have to be physically present at the workplace to do their job or even to monitor key aspects of manufacturing or other physical operations. This has provided much social flexibility, as demonstrated by the COVID-19 pandemic when people were forced to work from home (ColoradoSupport, n.d.). It has also allowed international business to develop, as an employee can be halfway across the world while fulfilling their duties.

Modern developments such as cloud computing allow businesses to store information and even generate computing power on remote servers, essentially streaming the data to any eligible device. This means that employees can engage in virtually any task with a good internet connection, regardless of the computing power of their respective devices, because the computation is performed on the servers. In contemporary contexts, this IT mobility offers significant benefits to businesses in terms of flexibility, as any employee they need can be found remotely to fulfill tasks as required.

The Internet of Things

The Internet of Things (IoT) is a web of physical electronic devices, ranging from computers to sensors to other technologies, which are interconnected within the same network, allowing them to exchange data with other devices and systems over the Internet and communication networks. It is estimated that by 2025, 55.7 billion connected devices will be in the marketplace, with 75% being part of an IoT platform (Consolidated Technologies Inc., 2021). IoT is a critical technological development in businesses across various industries. Applications can range from smart light bulbs to networks of industrial machines that oversee the manufacturing process and report data back. Using its series of devices and sensors, IoT allows businesses to collect and then analyze massive amounts of data on behaviors, processes, environments, and other key points. The machines can then enact various forms of change, either under human direction or via artificial intelligence, to self-correct and make processes more efficient (Zhang & Wen, 2016). This takes the components of data storage and analysis discussed earlier to a new level.
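The collect-analyze-correct loop described above can be sketched in a few lines. This is a toy illustration, not a real IoT protocol: the sensor names, readings, and threshold are invented, and a production system would receive readings over a network rather than from an in-memory dictionary.

```python
# Minimal sketch of an IoT-style control loop: sensors report readings,
# and the system "self-corrects" when a threshold is exceeded.
# Device names, readings, and the threshold are illustrative assumptions.

SENSORS = {
    "line-1-temp": [68.2, 71.5, 83.9],   # simulated temperature readings (Celsius)
    "line-2-temp": [66.0, 67.4, 69.1],
}
THRESHOLD = 80.0

def collect(readings):
    """Aggregate raw sensor readings into summary data points."""
    return {"last": readings[-1], "avg": sum(readings) / len(readings)}

def control_actions(sensors, threshold):
    """Decide corrective actions from the aggregated data."""
    actions = []
    for name, readings in sensors.items():
        summary = collect(readings)
        if summary["last"] > threshold:
            actions.append(f"throttle {name}: {summary['last']:.1f}C")
    return actions

print(control_actions(SENSORS, THRESHOLD))
```

In an actual deployment, the decision step might feed a dashboard for human direction or an automated controller, which is the "self-correction" the paragraph refers to.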

IoT impacts businesses primarily by offering access to tremendous amounts of data about their processes. As mentioned, it can track both the physical performance of manufacturing and digital characteristics such as customer behaviors and product sales. IoT allows for continuous optimization of business processes and can serve as a tool for strategic decision-making, improving key performance indicators, and even benefiting human resource management in areas such as team engagement. The more complex and comprehensive the technology becomes, the more data points it can collect and the greater the range of uses to which it can be applied across industries. This represents the evolution of the computer into a new generation of computing devices.

E-Commerce

E-commerce is the latest digital development in business, and it has grown into a multi-billion-dollar market. E-commerce refers to the buying and selling of goods over an electronic network, particularly the Internet. As e-commerce grew in popularity, driven in part by massive online retailers such as Amazon and global events such as the COVID-19 pandemic, businesses began to shift their sales online in virtually any way possible. A business typically conducted three major elements online: payment processing, website development, and advertising. E-commerce built on these, also simplifying the sale process significantly for both the seller and the consumer. In both business-to-consumer and business-to-business interactions, buyers can browse catalogs of available goods either on the seller's website or through third-party retailers. Upon selecting the necessary goods, they can place the order with all required details and pay online with immediate processing, and all that is left is for the seller to ship the goods or provide the service, with logistics costs often built into the price (Chai, n.d.).

E-commerce in its rudimentary forms existed as early as the 1990s, but it has had a tremendous impact on businesses. In many cases, doing business online has shifted entire business models or led to the creation of new ones. Examples include Netflix, which transitioned from a physical media business into a fully online streaming service, and PayPal, a payment processing company that saw a gap in the market for digital payment systems at a time when not every retailer could easily set up credit card processing. In 2022, e-commerce is one of the primary forces of the current market economy, and the majority of business is done online via computers and smartphones. It has also shifted the way companies do business in terms of advertising, ease of access to products, and other means of making the consumer experience smooth and flawless.

Conclusion

Computers are one of humanity's most innovative inventions, significantly revolutionizing the world. The majority of systems, platforms, and processes that everyone is familiar with today came as a result of the computer and the subsequent digital and technological revolution. Businesses, which are commonly complex organizations, rely heavily on consistency, functionality, communications, and the sale of goods and services. The computer was able to effectively combine, simplify, and multiply the efficiency of all core business processes. In turn, this has allowed for the development of ultra-large corporations with global logistics and networks that can provide goods and services around the world.

References

Browne, C. (2017).

Chai, W. (n.d.).

ColoradoSupport. (n.d.).

Consolidated Technologies Inc. (2021).

Harris-Briggs, N. (2018).

IBM. (n.d.).

LaMarco, N. (2019). Chron.

Leonard, K. (2018). Chron.

Petersen, L. (2019). Chron.

Trint. (2021).

Zhang, Y., & Wen, J. (2016). Peer-to-Peer Networking and Applications, 10(4), 983-994.

Computer Attacks and Critical Privacy Threats

Introduction

Computer security is one of the major challenges of the twenty-first century. It is commonly associated with dire consequences, especially in the case of virus attacks. The issue is even more significant when virus attacks are initiated at larger scales, for instance, when they are regional. However, the most pressing problem is the fact that they occur on a regular basis. Therefore, the paper at hand will provide an overview of two computer attacks, the worst in history and one of the most recent ones, as well as speculate on the most critical privacy threats as of 2017.

The Worst Virus Attack

The attack under consideration is Slammer, a worm exploiting an SQL Server vulnerability that Microsoft reported in 2002. Widespread panic began in January 2003, when the outbreak occurred. The specificity of this worm is that it was not distributed via e-mail like the viruses spread during many other attacks. Instead, even though it could not write itself to any computer disk, it could duplicate itself online, so around 15,000 servers around the globe were affected. It spread via the Internet because the malware used the UDP protocol: every machine already infected with the worm spammed the network it was connected to, so the volume of the attack increased swiftly.

It is assumed that Slammer was the worst virus attack because its costs summed up to almost $1 billion ("The 8 Most Famous"). However, what is even more critical, it was one of the fastest virus attacks, spreading across the Internet within 15 minutes. During this time, more than 75,000 people became its victims. Still, there are even more troubling details to recall. Slammer was malware that affected 911 services, ATM servers, and almost half of the servers operating online. At the same time, there were issues with e-tickets and check-in, so flights were delayed (Bell). All in all, it became the cause of severe panic. In addition, even 13 years later, there have been records of the worm attacking, so its effects are lasting (Pauli).

Recent DoS Attack

A distributed denial-of-service (DDoS) attack is a common example of a DoS attack. One of the recent attacks is the one that hit the BBC in 2015. It was initiated on New Year's Eve, and the majority of BBC servers were down for around three hours. The attack affected the BBC domain, including services such as radio and television ("5 Biggest DDoS Attacks"). It is essential to point out that television and radio continued to experience issues for the rest of the day. This DoS attack was launched by an organization that referred to itself as New World Hacking. It cracked the previous record (334 Gbps) by reaching 602 Gbps (Korolov).

For this reason, it became one of the most critical DoS attacks of its time. The representatives of the group said that their attack was an experiment to test their capability to compromise other powerful servers, such as Trump's campaign website and ISIS websites. In fact, they wanted to demonstrate their power, pointing out that it is the hacking organization that has the resources and strength to change the news and rule the world, not the news agency.

Major Privacy Threats

Even though there has been impressive progress in the sphere of computer security, there are still some critical privacy threats to consider. This is because, regardless of new and progressive security technologies, cybercriminals are constantly outrunning software developers, so threats become more intricate and challenging. As of 2017, there are several risks to keep in mind.

To begin with, the most critical problem is that of the growing incidence of cybercrime. It is associated with breaches of both databases and personal information. In this case, personal and corporate data can be stolen by cybercriminals who either seek benefits by asking victims to pay for its return or want to profit from the stolen information (for instance, stolen credit card details or other financial information).

Here, it is essential to point out that this threat may be easily coped with, because the increased risk of security breaches is related to the insecure activities of individual users or companies, such as poor passwords, ignoring the criticality of updating software and installing anti-malware applications, browsing or registering on sketchy and disreputable websites, and paying no attention to the origin of downloaded software (Nguyen).

Even though this privacy threat is usually initiated by individuals and their insecure behaviors on the Internet, it is considered critical because more than 200 million emails containing malware were sent in 2013, while almost 13 million people became victims of personal information theft (Nguyen). What is even more challenging, these figures are constantly increasing.

This privacy threat is commonly referred to as ransomware ("The Biggest Security Threats"). Using malicious applications, hackers steal personal or corporate data and demand money for returning control over the data to its initial owner (Olavsrud). The main challenge is that hackers select influential companies or individuals, as well as health care facilities, as their targets. In other words, they choose the people or enterprises that are likely to suffer the most from the loss of personal information (Komando).

Regardless of the powerful influence of individual behavior on increased online privacy threats, there are other issues to pay attention to as well, namely those arising from companies' and states' decisions. One appropriate example is the threat deriving from the growing popularity of big data. In fact, it is connected to the online trace left by an individual. In this case, it is essential to note that the risk applies to people who have mobile devices with Internet and geolocation capabilities.

Even though the potential benefits of big data are fantastic, it is also related to several significant threats. First and foremost, people are connected to the global network, which means that their location is no longer a secret. In some cases, all of their actions are recorded as well, especially considering the information collected from city surveillance systems and geolocation details (Olavsrud). From this perspective, personal privacy turns into an illusion if one has a mobile device with the Internet switched on.

Another common challenge is that the human element is almost entirely eliminated from the process of analyzing big data, because it is simply too complicated for one person to cope with. In this case, the challenge is associated with being potentially manipulated by the outcomes of data analysis and with overvaluing the process (Llic).

Works Cited

5 Biggest DDoS Attacks. Abusix. 2016. Web.

The Biggest Security Threats Coming in 2017. Wired. 2017. Web.

Bell, Steve. Which Is the Worst Computer Virus in History? Here's Our Top 10. Bullguard. 2014. Web.

Komando, Kim. The 3 Biggest Security Threats of 2016. USA Today. 2016. Web.

Korolov, Maria. DDoS Attack on BBC May Have Been Biggest in History. CSO Online. 2016. Web.

Llic, Danny. 3 Biggest Cyber Security Threats of 2017. ITProPortal. 2017. Web.

The 8 Most Famous Computer Viruses of All Time. Norton. 2016. Web.

Nguyen, Peter. The 6 Biggest Online Privacy Threats You Should Be Concerned With. HotSpot Shield. 2013. Web.

Olavsrud, Thor. 4 Information Security Threats That Will Dominate 2017. CIO. 2016. Web.

Pauli, Darren. Slammer Worm Slithers Back Online to Attack Ancient SQL Servers. The Register. 2017. Web.

Researching of Computer Systems

Currently, a person uses many different devices and computer systems. One of the everyday computer systems that people use is the personal computer. Data on personal computers is collected by the Windows operating system on which the machine runs. By default, Windows collects full diagnostic data and sends it to Microsoft, which is a big plus for storing information. However, the downsides are that the system is prone to data leakage and collects a considerable amount of user data, which can overload storage. An indisputable plus for the user is the ability to recover all their data if it is lost. However, a significant disadvantage is the lack of privacy for the user.

In addition, there is a type of computer system called the mainframe. It is a universal high-performance server with large RAM and external memory. The mainframe collects user data on a direct access storage device or optical media. Thus, data can be retrieved directly or sequentially, whichever is convenient for the task. Application programs do not need to collect initial information from multiple sources, since it is stored on a single server. However, a significant disadvantage for users is the user interface: the mainframe offers weak communication between the user and the computer system. Nevertheless, it is now possible to provide a web interface at minimal cost.

Moreover, a computing cluster (supercomputer) consists of computing nodes united by a high-speed communication network. Each computing node has its own RAM and solid-state drive and is connected to shared parallel network storage (Jouppi et al., 2020). The advantage is that, following the rules, the user independently maintains a backup of their data. However, a significant disadvantage is that storing data no longer involved in calculations on the supercomputer is not allowed. The maximum amount of data for a user is determined by the disk quota set during registration, although the quota can be increased later if necessary.

Reference

Jouppi, N. P., Yoon, D. H., Kurian, G., Li, S., Patil, N., Laudon, J., & Patterson, D. (2020). A domain-specific supercomputer for training deep neural networks. Communications of the ACM, 63(7), 67-78.

Security Plan for Computer and Data System

Introduction

The security of data and information in an organization is paramount. This is because all the activities and decisions made depend on the integrity of data systems. A breach of the security of data and information systems would be disastrous to the organization. It is important to put appropriate systems in place to help secure the organization's data and information against possible invasion from malicious quarters.

Malicious Software

This refers to viruses, worms, and Trojan horses. A virus is malicious computer software that replicates itself in a computer system (National Institute of Standards and Technology, 2006). A worm is a self-contained malicious program or a set of programs that spreads full copies or smaller portions of itself from one computer system to another through network connections, email attachments, or instant messages (National Institute of Standards and Technology, 2006). A Trojan horse is usually disguised as a popular program, which secretly installs its code on the host computer and opens a network port for the attacker to take control of the infected computer (National Institute of Standards and Technology, 2006).

Types of Attacks

There are several types of attacks perpetrated against servers, clients, and mobile devices. These include the brute-force attack, which attempts to break passwords by cycling through a number of possibilities (Meyers, 2009). Another is the dictionary attack, through which attackers capture encrypted password files and compare them with the dictionary words that most people use as passwords (Meyers, 2009). There is also shoulder surfing, social engineering, and phishing. Lastly, physical access to servers, clients, and mobile devices is another threat.
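To make the dictionary attack concrete, the sketch below hashes each word in a small wordlist and compares it to a captured password hash. Everything here is illustrative: the "captured" hash, the wordlist, and the use of unsalted SHA-256 (real systems should salt and stretch hashes precisely to defeat this attack).

```python
import hashlib

# Sketch of why dictionary attacks work: the attacker hashes common words
# and compares each to a captured password hash. All values are illustrative.

def sha256(word: str) -> str:
    return hashlib.sha256(word.encode()).hexdigest()

captured_hash = sha256("sunshine")  # pretend this hash was captured in a breach
wordlist = ["password", "letmein", "sunshine", "qwerty"]  # tiny "dictionary"

def dictionary_attack(target_hash, words):
    for word in words:
        if sha256(word) == target_hash:
            return word  # password recovered
    return None  # password not in the dictionary

print(dictionary_attack(captured_hash, wordlist))
```

A long, random password simply does not appear in any wordlist, which is why the password policies discussed later insist on non-dictionary combinations.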

OS Hardening

This is important in fostering security for web servers, email servers, and file, print, and database servers. OS hardening can be implemented by installing the latest web server and browser software and applying the most recent security patches (National Institute of Standards and Technology, 2006). This should be implemented by the administrators. Administrators should ensure that file servers are secured with passwords to prevent unauthorized access. The general staff should employ best practices such as safeguarding passwords and enforcing confidentiality when handling sensitive files.

Network infrastructure attacks

These can be carried out through back door attacks. Such a malicious program opens a port for the hacker, who can then control the infected system (Meyers, 2009). The port opened by the malicious program is usually one not used by network services. Through the use of bugs, malicious users can gain access to the system by bypassing device security checks. A hacker may also overload a specific network, bringing the flow of information to a halt. Administrators should install the latest firmware and software and scan their network devices to identify unused ports.

Security Zones

They include the DMZ, where most publicly accessed network systems are located; traffic into it is controlled by a firewall configured by the administrator. There is also NAT, which allows private IP addresses to be translated into routable addresses for use on the internet. NAT and subnetting ensure that internal addresses are not accessed by external parties, thus addressing the problem of spoofing (National Institute of Standards and Technology, 2006). VLANs enable the segmentation of large physical networks. They provide security because users in one LAN cannot access other LANs on the same network.
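The distinction NAT depends on, between private (non-routable) and public addresses, can be checked directly with Python's standard `ipaddress` module. The two sample addresses below are arbitrary examples.

```python
import ipaddress

# NAT relies on private address ranges (RFC 1918) that are not routable on
# the public Internet; the stdlib ipaddress module can test membership.
internal = ipaddress.ip_address("192.168.10.5")  # typical internal host
public = ipaddress.ip_address("8.8.8.8")         # publicly routable address

print(internal.is_private)  # this address must be translated by NAT to reach the Internet
print(public.is_global)     # this address is routable as-is
```

A packet arriving from the outside that claims a private source address is therefore immediately suspect, which is the spoofing problem the paragraph mentions.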

Network Device Hardening

It refers to the examination of the network infrastructure for security vulnerabilities. This is done by installing the latest network software and constantly checking for newer updates. The latest security and bug-fix patches should also be installed on network systems. Configuration settings can be optimized to exclude optional services that can be exploited for malicious intent (National Institute of Standards and Technology, 2006). In addition to this, all network devices like routers and switches should be secured to prevent unauthorized access into the network. For wireless networks, encryption of data is the best security measure.

HIDS and NIDS

These are used to deter intrusions into networks. A NIDS analyzes network activity and data packets for suspicious activity. It determines whether data packets have been changed in transit, contain suspicious code, or are malformed or corrupted. Notice is then sent to the administrator via an alarm system. A HIDS examines a specific device or host for suspicious activity. It detects attacks from a user physically working at the console and then alerts the administrator. HIDS should be installed on specific devices in a network, while NIDS should be established at specific points of the network.
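A toy version of the host-side idea can be sketched as a scan of log lines for suspicious patterns, raising alerts for the administrator. The patterns and log entries are invented for illustration; real HIDS products use far richer signatures and behavioral analysis.

```python
# Toy HIDS-style check: scan a host's log lines for suspicious patterns
# and collect alerts for the administrator. Patterns and entries are invented.

SUSPICIOUS = ["failed password", "invalid user", "port scan"]

def scan_log(lines):
    """Return (line number, entry) pairs that match a suspicious pattern."""
    alerts = []
    for i, line in enumerate(lines, start=1):
        lowered = line.lower()
        if any(pattern in lowered for pattern in SUSPICIOUS):
            alerts.append((i, line))
    return alerts

log = [
    "Accepted password for alice from 10.0.0.5",
    "Failed password for root from 198.51.100.23",
    "Invalid user admin from 198.51.100.23",
]

for lineno, entry in scan_log(log):
    print(f"ALERT line {lineno}: {entry}")
```

A NIDS applies the same match-and-alert logic, but to packets observed at a network tap rather than to a single host's logs.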

Wireless Infrastructure

Possible threats to wireless networks include data emanation, wardriving, and rogue access points and devices. Security measures to address them include service set identifiers that call for network passwords and names, MAC address filtering, WEP security, WPA and WPA2 security, and personal firewall (National Institute of Standards and Technology, 2006).

Access Control Methods

Administrators should adopt proper access control and authentication policies. This can be done through the creation of login IDs and passwords. The passwords should be reinforced through the use of long, non-dictionary alphanumeric combinations of characters, regular password rotation, and aging. Network file servers should be configured with file access permissions on a per-user or per-group basis (Meyers, 2009).

User Accounts and Password Management Policies

For security purposes, user accounts should be restricted through appropriate naming conventions, limits on login attempts, disabling of unused accounts, time and machine restrictions, and the use of tokens (Meyers, 2009). Important password policies include setting a minimum password length, password rotation, and aging.
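The password rules above can be enforced programmatically. The sketch below checks a minimum length, rejects dictionary words, and requires an alphanumeric mix; the specific limits and the tiny word set are illustrative assumptions, not a recommended policy.

```python
import re

# Sketch of enforcing the password policies described above: minimum length
# and a long, non-dictionary alphanumeric mix. Rules here are illustrative.

MIN_LENGTH = 12
COMMON_WORDS = {"password", "welcome", "letmein"}  # stand-in for a real wordlist

def acceptable(password: str) -> bool:
    if len(password) < MIN_LENGTH:
        return False
    if password.lower() in COMMON_WORDS:
        return False
    # require at least one letter and one digit
    has_letter = bool(re.search(r"[A-Za-z]", password))
    has_digit = bool(re.search(r"\d", password))
    return has_letter and has_digit

print(acceptable("password"))        # too short and a dictionary word
print(acceptable("Blue4Harbor92x"))  # long alphanumeric mix
```

Rotation and aging would be tracked separately, for example by storing the last-change timestamp with each account and forcing a reset after the policy interval.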

Authentication

It can be accomplished through the use of serial cable networks and dial-up modems. Modern methods involve the use of complex VPNs. They can be secured by encrypting information over the network so that it is not captured by unauthorized users.

VPN Protocols

The various VPN protocols are PPTP, L2TP, and IPSec (Meyers, 2009). PPTP encrypts PPP packets in order to create VPN connections (Meyers, 2009). L2TP is a hybrid of PPTP and L2F, created by Cisco Systems (Cisco Systems Inc., 2007). IPSec provides privacy, integrity, and authenticity for information transferred across IP networks. I would recommend L2TP since it offers authenticity, privacy, and integrity of the data being transmitted; hence, it is more secure (Meyers, 2009).

Authentication Protocols

These include PAP, CHAP, RADIUS, LDAP, TACACS, Kerberos, and biometrics, among others (Hassell, 2002). I would recommend CHAP, which is more secure than PAP as it prevents replay attacks by hackers who capture data and resend it (Cisco Systems Inc., 2007). I would also recommend RADIUS, especially in the case of dial-up modems (Hassell, 2002). TACACS is also recommendable, as it works with dial-up modems.
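The replay resistance of CHAP comes from its challenge-response design (RFC 1994): the server sends a fresh random challenge, and the client answers with an MD5 digest of the packet identifier, the shared secret, and the challenge, so the secret never crosses the wire and an old captured response is useless against a new challenge. The sketch below illustrates the idea; it is a simplification, not a full protocol implementation.

```python
import hashlib
import os

# Sketch of the CHAP idea (RFC 1994): response = MD5(identifier + secret + challenge).
# A captured response cannot be replayed because each session uses a new challenge.

def chap_response(identifier: bytes, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(identifier + secret + challenge).digest()

secret = b"shared-secret"       # known to both client and server, never transmitted
challenge = os.urandom(16)      # fresh random challenge for this session
identifier = b"\x01"

client_answer = chap_response(identifier, secret, challenge)
server_expected = chap_response(identifier, secret, challenge)

print(client_answer == server_expected)  # authentication succeeds when they match
```

PAP, by contrast, sends the password itself, which is exactly what makes captured PAP traffic replayable.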

Physical Security Measures

Several physical security measures can be taken to secure the buildings housing networks. These include laying down physical barriers, e.g., fences and gates, to restrict unauthorized persons from accessing the network infrastructure in the buildings (Meyers, 2009). Individual rooms can be further secured with burglar-proof locks and alarm systems on their doors. Employees working in the buildings should also wear identification passes to distinguish them from impostors. Finally, all areas of the building should be well illuminated to deter intruders and provide security for employees.

References

Cisco Systems Inc. (2007). Understanding and configuring PPP CHAP authentication. Cisco tech notes. Web.

Hassell, J. (2002). RADIUS: Securing public access to private resources. Cambridge, MA: O'Reilly and Associates.

Meyers, M. (2009). Mike Meyers' CompTIA security+ certification passport (2nd ed.). McGraw Hill.

National Institute of Standards and Technology (NIST). (2006). Guide to general server security (Special Publication 800-123). Washington, D.C.: U.S. Department of Commerce. Web.

Upgrading Computers in Business Organizations

Introduction

The functioning of any company today critically depends on the technology it uses. The use of appropriate devices can help to attain numerous benefits. For instance, enhanced effectiveness, reduced costs, and higher client satisfaction can be achieved by integrating innovations. At the same time, it is vital to ensure the available technology is not outdated and can perform the major tasks at hand; otherwise, it will serve as a barrier to future improvement. Under these conditions, the timely updating of the computers, software, and networks vital to a company's work is a critical demand for modern businesses. It should be given specific attention, as it demands costs and plans for replacing outdated devices with new ones. The proposed scenario implies the same update process as the core demand for the continued functioning of the department store.

Background

The company wants to enhance its IT infrastructure to improve its order processing activities and introduce new features. For this reason, the upgrading procedure was initiated to consider available options. At the moment, the store uses eight computers with the following features:

  • Quad-core processors;
  • One-terabyte hard drives;
  • 5 GB RAM;
  • Windows XP, Linux, or Mac OSX.

Until now, the given specifications have been sufficient; however, the focus on adding new features introduces the need for upgrading. The following future tasks should be considered while choosing the hardware and software:

  • Running a processing application;
  • Communicating with customers;
  • Storing customer data on the local PC.

Under these conditions, the planned upgrading process should offer solutions able to meet these requirements and contribute to the boost in the department stores effectiveness. At the same time, it is vital to consider the price for all components to ensure the company can find the necessary finances.

Operating System

Multiprocessing is one of the core demands outlined by the department store. It means that the chosen operating system should be able to support and utilize more than one processor or core, helping to perform several tasks simultaneously. Several modern operating systems support multiprocessing, including Linux/Unix and Windows. However, given the existing tasks, the Windows family, specifically Windows 10 Pro, seems the best option. It costs $199, while upgrading from XP to 10 costs $99 (Microsoft, n.d.). It supports multiprocessing, which is one of the core demands; moreover, it is user-friendly and, being so widely used, helps to minimize potential compatibility issues (Microsoft, n.d.). Developers also create numerous applications for this platform, which will simplify the choice of the needed software and increase its effectiveness. For these reasons, Windows 10 Pro is the best update option.

Processor

The choice of a processor is also an important step in upgrading the four computers. The critical demands influencing the choice are multiprocessing, running applications, and resolving several tasks at the same time. At the moment, quad-core processors are used in all of the company's PCs. Considering the scenario and current tasks, the Intel Core i7 is viewed as an appropriate option. It can be bought from the official vendor via Amazon and costs $417 (Intel Core i7-12700KF, n.d.). Several reasons justify this choice. First, it is a powerful and up-to-date solution that allows the company to benefit from its computers' stable and practical work (Intel Core i7-12700KF, n.d.). Second, the Intel Core i7 series remains relevant at the moment and will stay current for the next several years, meaning the company will not have to initiate a new upgrade process soon.

RAM

RAM requirements are linked to the OS, the planned workload, and the applications and tasks to be performed. The company plans to perform several tasks at the same time while also processing requests from clients and providing them with the needed information. This means that RAM should be sufficient to meet all these requirements. At least 8 GB of RAM is recommended, as it will ensure a high-speed response and the absence of delays. One possible option is a Kingston 2×8 GB kit (16 GB in total) costing $105 (Kingston, n.d.). The official vendor can deliver it to the company and guarantee the stable work of the device. This choice will support the stable work of new applications and create the basis for further improvements in the future.

Hard Drive

Several factors also influence the choice of hard drive. First, the OS requirements influence the choice of the device. Second, the planned applications and operations should be considered. Finally, the number of clients who will use the computer is critical for selecting the hard drive: an increased number of clients means larger portions of information that should be stored to improve strategic planning. Thus, the Kingston KC3000 PCIe 4.0 NVMe M.2 SSD, with a capacity of 512 GB and costing $133, can be a suitable option (Kingston KC3000, n.d.). First, it allows for storing significant amounts of data, which is critical when working with clients. Second, it is a safe and reliable device that can be used for purposes similar to those introduced by the department store. For these reasons, this hard drive can be used to update the computers.

Software

Finally, the company should select software that will help it align its interaction with clients and ensure they are satisfied with the increased quality of service. A backup database should be created to avoid losing data (Qureshi & Sharif, 2021). It can be built using cloud services, such as Google's. Second, the company intends to use the Internet as the major tool for communicating with clients and acquiring information from them. For this reason, a free Internet browser, such as Google Chrome, can also be used. It will help to reach clients in different locations and cooperate with them (Qureshi & Sharif, 2021). Furthermore, messaging with clients demands a stable Internet connection, meaning that the company should ensure that the provider can meet these requirements and avoid data loss.

Public platforms for exchanging messages and emails can also be utilized. For instance, Yahoo is one possible choice: it ensures a high level of protection, can be used by most clients, and is understandable to them. Under these conditions, it becomes one of the possible choices for the company. At the same time, the recommended upgrades will provide the department store's management with a chance to use other applications they find useful for achieving their goals. The integration of the recommended changes will eliminate limits linked to the hardware and promote diversity of choice for the firm, which is vital to support its competitiveness and future development.

Costs

In summary, the preceding recommendations concern the major components of the computers that should be upgraded so that they remain capable of performing the existing tasks. These are the OS, RAM, hard drive, and processor, which are fundamental for attaining a high speed of response and reliable work. The final cost of updating one computer is around $854 (OS $199 + processor $417 + RAM $105 + hard drive $133). Upgrading four PCs will cost $3,416, which can be viewed as a long-term investment in the development of the firm and its future success. If the cost is too high for the firm, it can be reduced by selecting a less powerful processor and less RAM. Furthermore, a hard drive with a lower capacity can also be chosen as an alternative to reduce spending on the update.
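The cost arithmetic above can be verified with a few lines (a trivial sketch using only the prices quoted in this section):

```python
# Per-computer upgrade cost, using the component prices quoted above.
components = {
    "Windows 10 Pro": 199,
    "Intel Core i7 processor": 417,
    "16 GB RAM kit": 105,
    "512 GB SSD": 133,
}
per_pc = sum(components.values())   # cost of one upgraded computer
total_four_pcs = per_pc * 4         # four PCs are to be upgraded
print(per_pc, total_four_pcs)       # 854 3416
```

Swapping any entry for a cheaper alternative immediately shows the effect on the total, which supports the cost-reduction options mentioned above.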

Conclusion

Altogether, the scenario shows that when selecting hardware and software for a computer, it is vital to consider the goals and purposes the company introduces. The increased number of users, along with the stored data and the requirements for multitasking, means that the PC should possess enough resources to process all information and ensure it is used effectively. Under these conditions, Windows 10 Pro, with an Intel Core i7, 16 GB of RAM, and a 512 GB hard drive, is chosen as a possible option. Using these components will cost around $854 per computer; however, it will create the basis for future excellence and improved client service, which is also vital for higher satisfaction levels. All components can be bought directly from vendors and delivered to the needed destination point. The given upgrade process is viewed as an essential part of the company's functioning and a positive change vital for goal achievement.

References

Intel Core i7-12700KF Desktop Processor 12 (8P+4E). (n.d.). Amazon.

Kingston. (n.d.). Kingston FURY Beast DDR4 memory.

Kingston KC3000 PCIe 4.0 NVMe M.2 SSD. (n.d.). Amazon.

Microsoft. (n.d.). Windows 10 Pro.

Qureshi, H., & Sharif, H. (2021). Snowflake cookbook: Techniques for building modern cloud data warehousing solutions. Packt Publishing.

Computer Fraud in the United Kingdom

Computer Fraud

The term computer fraud refers to cybercrime in which a person uses a computer to illegally access an electronic database without the owner's consent. In the United Kingdom (UK), computer fraud of more than £1.3 billion was reported in 2021 from individuals and organizations (Scroxton, 2021, par. 2). Cybercrime has facilitated computer fraud through the skill to manipulate systems in order to access data in an effective but unlawful way. According to the National Fraud Intelligence Bureau, major spikes occurred during the UK's second pandemic lockdown, when more than 137,000 crimes were reported to have cost enterprise managers more than £635 million in the first quarter of 2021 (Scroxton, 2021, par. 4). The three main types of computer fraud include hacking, data mining via malware, and the distribution of hoax emails. A person is guilty of fraud if they breach Sections 2, 3, or 4 of the Fraud Act 2006 (Gupta et al., 2020). Under these sections, a person commits fraud by false representation, failure to disclose information, or abuse of position.

Hacking

Hacking is a common form of computer fraud in which someone identifies the weaknesses of a computer system and attempts to exploit the network to gain personal data or business information. For example, people who use password-cracking algorithms to access a given database on a computer are classified as hackers. People are often driven by business or personal issues that make them start hacking. 82% of firms in the UK have been victims of ransomware attacks, whereby most have to pay the malicious actors who hacked their systems to regain their data (Allen, 2017, p. 3). The tendency in the UK to pay cybercriminals indicates that there are notable cases of hacking in the country. Companies that have been hacked must have held something of interest, such as monetary or other value, that the hackers wanted to benefit from after gaining access.

How Hacking is Conducted

For hackers to gain sensitive data, they must have a central point of observation of a given system and therefore target its exploitable variables. Phishing attacks are the key method many hackers use to gain access to given networks. Phishing occurs when a party in a given firm is lured into clicking a certain link, sent through an email, that carries malware (Gupta et al., 2020). Through this mechanism, criminals access the data, assisted by their own system that monitors the protocols set in the computers for authentication when a given process is launched digitally. With the help of a customized configuration of computer elements, a hacker infiltrates a system by installing a virus, a damaging component that can wipe anything stored on a computer device. Most of the time, hackers entice users to click on links in emails that open messages, websites, and downloads.

Additionally, hackers log the keystrokes entered during computer usage. Once the malware is installed on the victim's computer, it can record the keystrokes, giving the hacker details such as passwords, PINs, and other security credentials required to open files, folders, and applications. Advanced innovation has led to improved processes that can develop algorithms generating combinations of symbols, numbers, and letters used as passwords (Ashman Solicitors, 2020). By using a brute-force attack, the hacker can work through these combinations, making it easy to access the computer. Furthermore, hacking can succeed via a dictionary attack, which inserts random common words into the fields that require a password. In this case, the attacker's system suggests possible values until one enables access to the database.
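A back-of-the-envelope sketch (defensive and illustrative only; the 94-symbol alphabet of printable letters, digits, and punctuation is an assumption) shows why brute-force attacks crack short passwords quickly while long ones resist:

```python
import string

# Size of the guess space a brute-force attack must cover: one slot
# per character, each drawn from the full printable alphabet.
ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def search_space(length: int) -> int:
    return len(ALPHABET) ** length

print(search_space(4))   # ~78 million guesses: trivial for modern hardware
print(search_space(12))  # exponentially larger, impractical to enumerate
```

The exponential growth with length is also why dictionary attacks exist: common words collapse this huge space to a short list of likely guesses.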

The use of zombie computers has been common in enabling hacking processes in the UK. With them, a hacker commits distributed denial-of-service (DDoS) attacks: a user can execute code without awareness of the possible repercussions (Gupta et al., 2020). Zombie technology with DDoS enables the attacker to establish a connection between their system and that of the victim. At this point, it is easy to control both systems without the other user's awareness. Thus, computer fraud is successfully carried out by these means. It is important to mention that hacking is a major cybercrime issue that has been a persistent problem for the UK government. However, legal statutes deal with this kind of computer fraud.

How Hacking Might be Prosecuted in the UK

Hacking is dealt with under cybercrime provisions in UK law. The Computer Misuse Act 1990 (CMA) is the major UK law addressing attacks on computer systems, into which hacking fits. Jurisdiction to prosecute CMA offenses exists if a direct link to domestic jurisdiction is present (Gupta et al., 2020). Section 3(1) of the Investigatory Powers Act 2016, in force from 2018, defines unlawful interception as deliberately intercepting a communication transmitted by telecommunication without the consent of the party at whom the interception is directed. The CMA criminalizes any access to data without the owner's consent. A hacking crime in the UK is punishable by up to two years' imprisonment (Gupta et al., 2020, p. 67). However, that is for level 1, unauthorized access to computer material. Level 2 covers access to a computer system with intent to commit or facilitate a further offense.

Additionally, level 3, which comprises the elements of the previous levels but with intent to distort or impair a computer system, is punishable by up to ten years in prison (Gupta et al., 2020). However, for these levels to be applied, legal processes are involved that ascertain whether or not the crime was committed. It is important to combat hacking to ensure there are fewer cases of computer fraud, not only in the UK but in other parts of the world.

Distribution of Hoax Emails

An email hoax is a scam sent via electronic mailing systems and designed to deceive users for monetary gain. Most of the time, an email hoax targets people who are likely to accept the idea (Ashman Solicitors, 2020). Common tactics lure users into giving their money through appeals such as charities for missing children, lotteries, fake security warnings, and chain letters. For instance, an email hoax may press the user to respond to an alleged threat such as a virus or another security-related issue. 45% of people interviewed in the UK about such issues admitted to having fallen victim (Allen, 2017, p. 3). Therefore, it is fair to say that cybercrime is a sensitive issue that requires regulation.

How Email Hoax is Done

Email hoaxes and scams have become a major cybercrime issue. Nowadays, companies rely on emails for communication; as a result, channels meant to ease official communication have been turned to malice, since people want to trick others for monetary gain. For example, one may hear that a certain software development company offers $100 to people who forward emails with given information to several others (Allen, 2017). An email hoax often comes with a warning about a harmful component, such as a virus that can wipe hard drives. In this case, scammers prompt the victim to forward such a message to their address book. Given people's level of cautiousness, they accept and forward the contents of the email.

The emails sent usually include payment methods in case one wants to pay for protection from the alleged threats to their system. Once a person pays the money, they never get any assistance. Rather, the scammer turns on the victim and accuses them of attempted fraud through false allegations. One of the familiar messages of this kind is You can be a Millionaire, which has led many people to fall prey to the scams (Gupta et al., 2020). The purpose of the scammer is to create a certain mania that makes people pay money out of fear of the subject in question. Therefore, email hoaxes have been a challenge to UK businesses and individuals alike. There is a need to impose legal actions that can lead to the long-term combatting of these problems.

How Email Hoax Might be Prosecuted in the UK

When scammers pretend to help online users with a given issue on their computer, it is a form of false representation, which breaches the statutory law covering fraud. When one is caught in email hoax fraud, they are subject to a £5,000 fine, 90 days in prison, or both (Gupta et al., 2020, p. 23). Even though this issue lies under the regulation of the CMA, the punishment appears to be lighter than for hacking. However, it depends on the degree of scamming. When false representation is aimed at weakening a given company's market share, the people behind it may also be required to pay for any resulting financial damage.

If the scam was aimed at sabotaging a government agency's operations serving the people of the UK, a person might be sent to prison for many more years. This has happened to convicts in the High Court, where most of them may receive an unlimited fine on top of imprisonment. Alternatively, one may be invited to pay a civil penalty in lieu of prosecution, with proceedings dropped depending on the gravity of the matter (Ashman Solicitors, 2020). After one is presented to the court and there is enough evidence of scamming through an email hoax, prosecution begins. The terms may be flexible depending on the level of authority that was crossed.

Identity Theft

In this form of computer fraud, an attacker applies deceptive techniques to obtain data or information from a victim and misuses it while acting under the owner's name. It is important to note that identity theft is different from hacking: in this type, criminals obtain personal information by eavesdropping on passwords or retrieving them secretly without tampering with computer systems (Ashman Solicitors, 2020). When the thieves access the victim's computer, they fraudulently manipulate it by acting as its real owner. This is where the idea of identity theft derives from.

How Identity Theft is Done

When malicious people gain the personal details of the real owners of the computer, they start various activities such as applying for loans, buying and selling online, and modifying the victim's medical and financial information for their own gain. Identity theft works like phishing, linked to social engineering tactics that are used to pry confidential data from computer users (Ashman Solicitors, 2020). Public profiles available on social networks and other popular platforms can be exploited to get data that helps criminals succeed in their aims. An October 2019 survey on identity theft in the UK indicates that 6% of the respondents had experienced identity theft in the previous four years (Gupta et al., 2020, p. 45). Another 45% of the people involved said they had been targets of attempted identity theft in 2019 (Gupta et al., 2020, p. 45). Thus, this issue is one of many that have expanded the cybercrime problem in the UK.

How Identity Theft Might be Prosecuted in the UK

The punishment for identity theft in the UK is seven years in prison. When a person is charged under the Fraud Act, the sentence might go up to 10 years in prison; this applies to any effort to supply articles that may have enabled the theft (Allen, 2017, p. 2). UK courts decide how sophisticated the identity theft was, depending on the plans and opportunities that were executed and the motivation behind the crime. Additionally, a person might be sent to prison for more than ten years if they abused a position of trust, as in the case of imposters (Allen, 2017, p. 2). However, possessing articles for fraud offenses may carry from one to one and a half years, depending on the complexity of the matter. There is a need to reduce these computer fraud issues to provide a secure way of using computers online and offline.

Reference List

Allen, M. (2017) The state of crime: fraud, hacking and malware double the number of crimes committed in UK, Computer Fraud & Security, 2017(2), pp.1-3.

Ashman Solicitors (2020) What is Fraud by False Representation? [online] Ashman Solicitors. Web.

Gupta, B., Perez, G., Agrawal, D. and Gupta, D. (2020) Handbook of Computer Networks and Cyber Security. Cham: Springer International Publishing.

Scroxton, A. (2021) UK loses £1.3bn to fraud and cybercrime this year. [online] ComputerWeekly.com. Web.

Computer Application System: Management and Purposes

Question 1

In computer application systems, we have studied quite a number of application systems, and a number of questions come up. These include:

What is AIS?

AIS stands for Application Interface Specification and is basically an assortment of open specifications.

What are three factors that influence the design of AIS?

The three factors that influence the design of AIS are Platform Management Services (PLM), the Cluster Membership Service (CLM), and the Availability Management Framework (AMF).

What three important business functions does an AIS fulfil?

AIS has around twelve (12) services, which are classified into three major categories: AIS platform services, AIS utility services, and AIS management services.

There are four types of data processing which help keep the data stored in files or databases current. What are they? Define each.

There are four services which aid in keeping the data stored in files or databases updated. These are as follows: the Information Model Management Service (IMM), also known as the configuration management database; the Log Service (LOG); the Notification Service (NTF); and the Security Service (SEC). The latter is used to authenticate and authorize particular activities, which helps to retain, or in other words uphold, the integrity of AIS.

NTF is essentially a center that identifies incidents or status changes and then sends a notification alert. LOG is a database that deals with instances of logging events; the system is responsible for all logging services, which perform as per the settings current at the time of logging. Lastly, IMM is wholly responsible for the administrative applications specified for the object classes in the AIS.

What is an ERP system (Enterprise Resource Planning)?

An ERP (Enterprise Resource Planning) system is a system that coordinates both internal and external management information throughout an organization. ERP operates through an automated software application that supports finance/accounting, sales and services, and manufacturing, among other functions such as customer relationship management. In general, it links the flow of an organization's or business's information to outside stakeholders.

Question 2

The internal environment, as a component of ERM, consists of seven items. What are they? Give a brief example of each.

There are seven items which make up the internal environment as a component of ERM (Enterprise Risk Management). They include:

  1. Establishing context: it serves the purpose of understanding the internal, risk management, and external contexts;
  2. Identifying risks: it includes thorough documentation of threats material to an organization;
  3. Analyzing/quantifying risks: identification of any possible risks from the material gathered;
  4. Integrating risks: it deals with a total evaluation, reflecting corrections and hence arriving at results on the effect on the organization;
  5. Assessing/prioritizing risks: it involves considering each possible risk and determining how big it is in terms of priority to the organization;
  6. Treating/exploiting risks: this involves the controlling measures put in place;
  7. Monitoring and reviewing: the gradual assessment and monitoring of various risks to the company.

What is COSO (Committee of Sponsoring Organization)? Explain its purpose/value.

COSO (the Committee of Sponsoring Organizations) is an initiative of five private-sector organizations that came together to offer leadership and guidance on risk management frameworks. It is mainly in place to identify possible risks, manage those risks, and provide reasonable and acceptable assurance regarding the entity's objectives.

What is PCAOB? What created it? What is its main purpose?

PCAOB stands for the Public Company Accounting Oversight Board, a purely private-sector non-profit corporation. It was created by the Sarbanes-Oxley Act of 2002, a United States federal law. Its main purpose is to regulate audit reports for organizations so that they contain sufficient and appropriate information for investors, in order to safeguard their interests.

What are the most common methods of collecting audit evidence? List at least seven and give a brief explanation of each.

There are several commonly used methods of collecting audit evidence. The major seven are:

  1. Inspection: its general role is to confirm that an asset exists or that a certain transaction really occurred;
  2. Observation: this involves an auditor being present when a procedure is taking place;
  3. External confirmation: it involves seeking information from other sources, such as a bank;
  4. Recalculation: checking arithmetic accuracy thoroughly;
  5. Reperformance: redoing several reconciliations as at certain reporting dates;
  6. Analytical procedures: comparing different years' reports and establishing predictable relationships;
  7. Inquiry: it involves obtaining information from clients or the management body.
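Recalculation, for instance, can be as simple as independently recomputing a reported total and comparing it with the figure in the client's books (an illustrative sketch with made-up figures):

```python
# Illustrative recalculation: independently recompute an invoice total
# and compare it with the figure recorded in the client's books.
line_items = [120.00, 75.50, 310.25]   # hypothetical invoice lines
recorded_total = 505.75                # figure reported by the client
recomputed = round(sum(line_items), 2)
print(recomputed == recorded_total)    # totals agree: arithmetic checks out
```

A mismatch between the recomputed and recorded figures would be flagged as an exception for further audit work.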

911 Evolution: Computer-Aided Design for Personal Safety

Introduction

The US emergency service relies heavily on the speed of reaction and resource allocation when interacting with people in need. However, the lack of funding and logistical management issues often lead to inefficient service provision and consequent casualties. Every year, over 10,000 Americans die due to the outdated 911 system (Reynolds, 2017). The problem of comparative inefficiency persists despite the implementation of NG911.

911 Dispatcher: Job Description

911 dispatchers receive a wide range of complaints, from automobile accidents to active criminal actions, and coordinate the delivery of appropriate emergency personnel.

The methods of communication utilized in this line of work change along with the latest advancements to satisfy the need for urgent connection appropriately.

Introduction: CAD/CAM

CAD/CAM software is used to design a product as well as to program manufacturing processes such as CNC machining. CAM software generates toolpaths that control machine tools to transform designs into actual components, using models and assemblies produced in CAD software like Fusion 360 (Simon, Taylor & Tom, 2019). Prototypes, final parts, and production runs are all designed and manufactured using CAD/CAM software.

What Is CAD/CAM?

The phrase CAD/CAM refers to the software that is used for designing, machining, and manufacturing with a CNC machine. Computer-Aided Design is abbreviated as CAD, and Computer-Aided Manufacturing is abbreviated as CAM.

CAD software is used to design and sketch models, which are then assembled using geometric forms. Not every produced item, however, needs to be created as a solid 3D model.

CAD

Modern CAD software enables the design of components for CNC machining on 2, 3, 4, and 5 axes. Since designed parts are transmitted to CAM for programming the machine side of the production process, CAD software is a vital aspect of the manufacturing process.

CAM

Before a CAD model can be converted to machine language (typically G-code), the CAM software must be set up to calculate the cutting routes that the tools will take to remove superfluous material and create a component.

CNC Milling, CNC Lathe, and CNC Router work are the most common applications. However, component programming for CNC Water Jet, Plasma, Laser, and CNC Burning machines is also found.

Workplace Background

With the advancement of contemporary endosseous implant design and better surface technology, new restorative procedures have been developed that reduce the overall treatment duration for patients. We are able to produce personalized dental restorations with great accuracy and flawless precision of fit by utilizing the latest scanning, CAD/CAM, and manufacturing technologies.

About the Job: Surgical Technologist

Surgical technologists work under the supervision of a surgeon to ensure that invasive surgical procedures are carried out safely and effectively: that the operating room environment is safe, that equipment is in working order, and that the operative procedure is performed in a manner that maximizes patient safety.

Surgical Technologist: CAD/CAM Use

The traditional procedure necessitates a time-consuming and costly logistic chain. The workflow of directly milled surgical guides was assessed in a pilot study. These surgical guides were created using a combination of optical impressions and radiological information.

Current Technological State

At the moment, 911 is experiencing yet another period of involuntary modernization.

Some police stations are pushing back against further technological implementation, but it is required in order to optimize the performance of each individual unit.

Contribution to 911 Services

CAD/CAM-based security solutions provide a full package of services included in a single software application.

The computer-assisted 911 vehicle dispatch program includes vehicle dispatch, call dispatch, resources deployment, instruction and protocols, and status modification (Lum et al., 2020).

Optimization of vehicle dispatch and the coordination between vehicles would allow the security services to be there for many more people around Dublin.

CAD/CAM Disadvantages

Hand sketching can be used to divide data into different transparent overlays. For example, a construction plan may include different overlays for structural, electrical, and plumbing components (Reynolds, 2017). In CAD, layers are similar to transparent overlays.

Layers, like overlays, can be shown, modified, and printed alone or in combination. One may name layers to make it simpler to remember what's on them, and lock layers to prevent them from being modified.

CAD applications frequently demand substantial computer processing power, which necessitates the purchase of expensive, high-quality computer hardware. CAM necessitates the use of costly, sophisticated production equipment. The high cost of hardware is a key drawback of CAD/CAM and a major impediment to its widespread adoption.

CAD software is becoming more versatile and adaptive as time goes on. This, however, comes at the cost of increasing the software's complexity. This intricacy makes learning the software more challenging for new users. The intricacy, combined with the cost of educating employees in CAD/CAM technology, is another drawback of CAD/CAM.

Maintenance of the computers and equipment required for CAD/CAM is necessary, which can be a strain on available resources. When computers or devices fail, the result is costly downtime, which is inconvenient for everyone concerned. Maintaining a preventative maintenance program can be beneficial, but breakdowns are unavoidable, which is a negative.

CAD/CAM Advantages

The capacity to produce extremely precise designs, drawings that may be generated in 2D or 3D and rotated, and other computer programs that can be integrated into the design software are just a few of the benefits of CAD (Nath, Ray, Basak & Bhunia, 2018).

Before one begins sketching using manual drafting, one must first identify the scale of a view. This scale relates the real size of an object to its size on paper.

One of the benefits of CAD/CAM is that design modifications may be made quickly with CAD software. Before CAD, a design modification required a draftsperson to entirely redraw the design to the new standard. One of the advantages of CAD in textiles is that it allows designers to play with designs and make tiny adjustments on the fly. CAD may also be used to simulate a design's behavior: for example, it can simulate the airflow around an engine. This gives the design process more freedom.

Rapid prototyping is another benefit of CAD/CAM. Designers may use rapid prototyping to build tangible prototypes during the design process. Various parts of the design may be tested using these actual prototypes. If the objective is to create a steel item, for example, a prototype constructed of clear acrylic can be used. Designers can see the pattern of stresses and strains within the product because of the transparency of the acrylic sample. This gives the physical design and development process more freedom.

Conclusion

Although CAD/CAM technology is best known for its role in the manufacturing and retail of physical goods, the 911 security system could also take advantage of its capabilities.

Its automation-focused options are a good fit for addressing the persistent time inefficiencies of the current system.

Bibliography

Lum, C., Koper, C. S., Wu, X., Johnson, W., & Stoltz, M. (2020). Examining the empirical realities of proactive policing through systematic observations and computer-aided dispatch data. Police Quarterly, 23(3), 283-310. Web.

Nath, A. P. D., Ray, S., Basak, A., & Bhunia, S. (2018). System-on-chip security architecture and CAD framework for hardware patch. In 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC) (pp. 733-738). IEEE. Web.

Reynolds, M. S., MacGregor, D. M., Barry, M. D., Lottering, N., Schmutz, B., Wilson, L. J., Meredith, M., & Gregory, L. S. (2017). Standardized anthropological measurement of postcranial bones using three-dimensional models in CAD software. Forensic science international, 278, 381-387. Web.

Simon, M. A., Taylor, S., & Tom, L. S. (2019). Leveraging digital platforms to scale health care workforce development: The career 911 massive open online course. Progress in Community Health Partnerships: Research, Education, and Action, 13(5), 123. Web.

Plan to Support Students Learning English and Programming

Summary

Learning English and coding at the same time presented challenges for non-native English speakers in reading educational content, communicating technically, and reading and writing software. They needed additional images, multimedia, culturally neutral code patterns, simplified English free of culturally specific language, and training materials with built-in dictionaries (Aeiad & Meziane, 2019). Some were inspired to learn English more effectively through programming, which also clarified their logical reasoning about natural languages.

The preponderance of the literature and the most widely used programming languages are written in English. According to a recent study on how basic language comprehension affects learning new material, a non-native speaker must spend half of their mental effort understanding an English phrase and the other half acquiring the new terminology of computer languages (Alaofi, 2020). It can therefore be challenging for non-native English speakers to describe their programming skills in manuals.

The statements about the students' native-language condition were limited by the study's reliance on self-reported language competence. Confidence levels varied significantly by gender, with male students expressing much greater confidence than female and non-binary students. Confidence was also strongly influenced by prior experience, with more knowledgeable students reporting higher levels than less experienced ones. This supports earlier research asserting that academic self-efficacy is linked to prior educational excellence.

Plan

Future research in this area may focus on identifying the phrases that hinder non-native English speakers the most. Programming languages are closely related to English, despite the idealized notion of a computer language as pure mathematical rationality apart from messy human languages (Guzman et al., 2021). The implications of different software implementation strategies for supportability, durability, and usefulness between native and non-native English speakers might be measured experimentally in future research. To identify the aspects that non-native English speakers struggle with the most when studying a coding standard, one could assess the variations in mental demand experienced by native and non-native English speakers.

A crucial component of the plan will be the development of a system of learning material that is compatible with the goal of occupational training and that adjusts both to the characteristics of first-year learners and to the instructional features of a course taught in English. The future project will benefit from comprehensively utilizing a variety of cutting-edge teaching techniques and from developing an assessment system with different assessment modes that concentrate on students' conceptual understanding and practical abilities. Students' understanding and effective application of theoretical information, the development of practical abilities, and heightened enthusiasm for practice are all goals of teaching experimental content. It is necessary to choose the experimental material carefully due to the constrained course hours for experiments.

References

Aeiad, E., & Meziane, F. (2019). An adaptable and personalized E-learning system applied to computer science programmes design. Education and Information Technologies, 24(2), 1485-1509.

Alaofi, S. (2020). The impact of English language on non-native English speaking students' performance in programming class. Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, 585-586.

Guo, P. J. (2018). Non-native English speakers learning computer programming: Barriers, desires, and design opportunities. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-14.

Guzman, C. N., Xu, A., & Gerald Soosai Raj, A. (2021). Experiences of non-native English speakers learning computer science in a US university. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 633-639.

Hagiwara, S., & Rodriguez, N. J. (2021). English learners (EL) and computer science (CS) learning: Equity issues. In Handbook of Research on Equity in Computer Science in P-16 Education (pp. 70-87). IGI Global.

Reasons Why Computers Will Never Achieve Self Awareness

Introduction

Computers dominate the human world today, with almost every task requiring a computer to be accomplished. Computing devices have developed through different generations, each building on the one before it. Command, speed, and CPU memory have grown and improved in proportion because of miniaturization. Each computer generation is characterized by vital technical progress that has altered the way computers fundamentally function. This has brought about cheaper, smaller, more efficient, more powerful, and more reliable devices. In the first generation, computers used vacuum tubes and magnetic drums for memory.

The generation that followed was marked by the replacement of tubes with transistors and then integrated circuits (ICs). The fourth generation, the present one, was ushered in by the development of the microchip, which placed thousands of integrated circuits onto a single chip. There is also a fifth generation, which covers the present and the future: the generation of artificial intelligence. It is still in development, although some aspects, such as voice recognition, are already in operation. Artificial intelligence aims to make computers behave like human beings and develop self awareness, which has led to the argument over whether computers will or will never develop self awareness. The computer scientists' argument that computers will develop self awareness is not true.

Self awareness

Although computers are improving at a soaring rate, with larger memories and ever faster operation, they are just machines performing tasks that they have been designed to do. Computers do not, in the actual sense, create or develop new ideas of their own or even think on their own (Dreyfus 190). Self awareness requires intelligence, and with intelligence come creativity, understanding, and the distinction between simulation and emulation. The fact that computers lack creativity, empathy, and understanding, and cannot work, act, and behave like humans, means that computers cannot and will never achieve self awareness like human beings (Fetzer 13). Computers blindly follow all the commands given by human beings, regardless of how poor the instructions or commands are. The day computers learn to work on their own without following any instructions or commands is unlikely to come; therefore, computers will never achieve self awareness. This lack of intelligence prevents computers from developing self awareness.

It has been argued by computer scientists that the field of computer science known as artificial intelligence will surely take computers to the level of self awareness (Charniak 7). The fifth generation of computers aims at developing computers to a level at which they will outwit, outsmart, and even outlast their human inventors. Game playing is already seen as a breakthrough toward computers developing self awareness. In 1997, an IBM computer gave artificial intelligence a breakthrough when it beat the world chess champion at his own game. In 2011, IBM introduced Watson to Jeopardy! viewers in the United States. It was used to test how artificial intelligence can use logic to find answers to questions and interpret human language, and it managed to beat all human opponents (Dreyfus 237). This breakthrough in the computer world has led to the notion that computers will eventually develop self awareness.

Computers can perform some tasks, such as playing chess, better than human beings, not because they are intelligent or aware of what is required, but because they are programmed. Computers are programmed to try and exhaust all the possible moves in order to reach a solution. The ability of computers to comprehend natural language consists only of translating sounds into particular letters that form specific words (Fetzer 5-9). These words are mapped to certain computer functions, hence simulating perception. This is not because computers are aware of what they are expected to do; they are simply following mapped-out instructions and commands. It is true that computers are improving and becoming more complex, with the ability to handle an array of tasks, but they have been designed to perform those tasks. Therefore, as long as computers operate by obeying commands, they will never develop self awareness, despite the fact that they can outperform humans.

The above points of view bring about the argument over whether computers will or will not develop self awareness. Human beings are always aware and know that they exist. They are aware of their environment and their surroundings, which computers are not. Computers evidently do not discern anything, such as knowing whether they exist; hence, they cannot develop self awareness. Human beings would face a hard task developing a machine that knows it exists, which would require a deep understanding of how existence is known at all. Schank argues that computers only follow programmed instructions, and this fact is undeniable. Any self understanding that computers build up will not be a product of electronic progress but a consequence of the advancement of human ideas concerning the character of self awareness and intellect (7).

Reasons why computers will never achieve self awareness

It is true that computers have advanced to the artificial intelligence stage and perform quite astounding tasks. Nevertheless, they still follow any instructions given blindly, no matter how wrong the command is. For instance, a person can spend the whole day compiling a project and then press the quit button before saving the work. A computer will obediently delete all the work that has been done, without considering the effort being put to waste. If a computer were designed to have self awareness, it would know that too much effort and time had been invested in that work and would save it. With the present breakthroughs in computer science, it is extremely tempting to believe that, at the rate technology is advancing, computers will eventually develop self awareness. Yet this can only be achieved if humans come up with computers that have intelligence and the ability to solve problems using reason rather than commands (Schank 44-46). Natural intelligence is unlikely to be imparted to computers; hence, they are not likely to develop self awareness.

Developing self awareness means understanding one's own existence and using reason to solve problems. Computers appear to understand and use reason, but it is only a pretense behind clever code that machines follow exactly. The intelligence is merely artificial, and it cannot develop to the level of self awareness (Dreyfus 21). For a computer to be perfectly intelligent, equivalent to humans, it would have to be lucid and wholly awake, yet completely beyond the margins of dualistic awareness. Computers are free of any conceptuality: they hold no opinions, no sense of identity, no cognitive structures, no awareness, and no self reflection. Even sophisticated, hypothetical, artificially intelligent computers will not be able to have authentic experiences, such as religious ones, because they can neither simulate nor emulate them. Computers are crippled without information, making them information processors; they have no information content of their own. This is a clear justification that computers are gadgets that process information and are thus not aware of themselves.

Being self aware requires thinking. Thinking involves making decisions, making choices, looking at consequences, differentiating truth from falsehood, taking action, and solving problems. Computers may appear to solve problems; however, they do not make decisions or plan ahead. IBM's Deep Blue, which beat the world chess champion Garry Kasparov in 1997, had not planned ahead but was simply following a set of instructions devised by expert chess players. A human player makes leaps of judgment instead of slavishly going through all calculations, whereas a computer goes through all the possible moves until it finds the best option for winning the game (Fetzer 3). Common sense helps human beings think about things, while computers do not possess this attribute. Therefore, it is quite hard for computers to develop self awareness if they have neither thinking nor common sense.
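The blind, exhaustive enumeration of moves described above can be illustrated with a small sketch. The game, function names, and rules below are purely illustrative and not drawn from the sources: a toy take-away game in which players alternately remove one or two stones, and whoever takes the last stone wins. The program "wins" only by mechanically checking every line of play to the end, with no understanding of what it is doing.

```python
def mover_can_win(stones):
    """Return True if the player to move can force a win.

    The function blindly tries every legal move and recursively follows
    every continuation to the end of the game -- the same mechanical
    enumeration, on a tiny scale, that the essay attributes to chess
    computers.
    """
    if stones == 0:
        # No stones left: the previous player took the last one and won.
        return False
    # A move wins if it leaves the opponent in a losing position.
    return any(not mover_can_win(stones - take)
               for take in (1, 2) if take <= stones)


def choose_move(stones):
    """Pick a winning move if one exists; otherwise just take one stone."""
    for take in (1, 2):
        if take <= stones and not mover_can_win(stones - take):
            return take
    return 1
```

The search "discovers," without any insight, that multiples of three are losing positions for the player to move, in the same way that Deep Blue's vastly larger search produced strong chess moves without any comprehension of chess.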

Robotics is expected to be the most prominent area of artificial intelligence. The field aims at creating robots that will be aware of their surroundings and lead semi-autonomous lives. The robots are expected to experience external stimuli and react to them just as humans do. This is unlikely to be achieved because, for robots to respond to external stimuli in that way, they would need emotions. Machines can perform some of the tasks done by human beings, but they cannot do them the same way humans do (Fetzer 18). Even with advanced technology, it is hard to give machines the innate emotions that human beings have and that self awareness requires. Therefore, computers will never achieve self awareness.

Conclusion

The brain and the body of a human being are organic systems that do not themselves create consciousness. True consciousness is not found in any formal information process because it is fundamental to the cosmos. Awareness is not created by humans; it is always present and can only be transformed, shifted, and channeled from position to position. Human beings are truly self aware without using or creating any information. Computers, on the other hand, cannot do, sense, or know anything without the use of information. It is apparent that computers will never be able to develop self awareness because they need information in order to operate. Computers will on no account be able to duplicate or create self awareness through any information route. Full intelligence will never be achieved in computers due to the lack of self awareness, and self awareness is impossible as long as computers can only operate through information and programming. Again, computers will never match the intelligence of humans, which is inborn, or their self awareness, which is innate.

Works Cited

Charniak, Eugene, et al. Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley, 1985. Print.

Dreyfus, Hubert. What Computers Still Can't Do. Cambridge, MA: The MIT Press, 1992. Print.

Fetzer, James. Artificial Intelligence: Its Scope and Limits. Dordrecht, the Netherlands: Kluwer Academic Publishers, 1990. Print.

Schank, Roger. The Cognitive Computer. Reading, MA: Addison-Wesley, 1984. Print.