Researching Computer Systems

Today, a person uses many different devices and computer systems, and the personal computer is among the most common. On personal computers, data is collected by the Windows operating system on which the machine runs. By default, Windows collects full diagnostic data and sends it to Microsoft, which is convenient for preserving information. The downsides are that the system is prone to data leakage and gathers a considerable amount of user data, which can overload storage. An indisputable advantage is that users can recover all of their data if it is lost; a significant disadvantage, however, is the resulting lack of privacy for the user.

Another type of computer system is the mainframe: a universal high-performance server with large RAM and external memory. A mainframe stores user data on direct-access storage devices or optical media, so data can be retrieved both directly and sequentially, whichever suits the task. Because everything is stored on a single server, application programs do not need to gather initial information from multiple sources. A significant disadvantage for users, however, is the user interface: communication between the user and a mainframe is weak, although a web interface can now be provided at minimal cost.

Finally, a computing cluster (supercomputer) consists of computing nodes united by a high-speed communication network. Each node has its own RAM and solid-state drive and is connected to shared parallel network storage (Jouppi et al., 2020). The advantage is that, under the usage rules, users back up their own data independently. A significant disadvantage, however, is that storing data no longer involved in calculations on the supercomputer is not allowed. The maximum amount of data a user may keep is set by the disk quota assigned during registration, although the quota can be increased later if necessary.

Reference

Jouppi, N. P., Yoon, D. H., Kurian, G., Li, S., Patil, N., Laudon, J., & Patterson, D. (2020). A domain-specific supercomputer for training deep neural networks. Communications of the ACM, 63(7), 67-78.

Security Plan for Computer and Data System

Introduction

The security of data and information in an organization is paramount because all the activities and decisions made depend on the integrity of data systems. A breach of the data and information security system would be disastrous for the organization. It is therefore important to put appropriate systems in place to help secure the organization's data and information against invasion from malicious quarters.

Malicious Software

This refers to viruses, worms, and Trojan horses. A virus is malicious computer software that replicates itself in a computer system (National Institute of Standards and Technology, 2006). A worm is a self-contained malicious program or a set of programs that spreads full copies or smaller portions of itself from one computer system to another through network connections, email attachments, or instant messages (National Institute of Standards and Technology, 2006). A Trojan horse is usually disguised as a popular program, which secretly installs its code on the host computer and opens a network port for the attacker to take control of the infected computer (National Institute of Standards and Technology, 2006).

Types of Attacks

There are several types of attacks perpetrated against servers, clients, and mobile devices. These include the brute-force attack, which attempts to break passwords by cycling through every possibility (Meyers, 2009). Another is the dictionary attack, in which attackers capture encrypted password files and compare them against the dictionary words that most people use as passwords (Meyers, 2009). There are also shoulder surfing, social engineering, and phishing. Lastly, physical access to servers, clients, and mobile devices is another threat.
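To make the mechanics of a dictionary attack concrete, the minimal Python sketch below hashes each word from a tiny wordlist and compares it with a captured password hash; real attacks run the same loop over enormous wordlists. The hash value and word list here are invented for illustration.

    import hashlib

    # Hypothetical captured hash (SHA-256 of "sunshine") and a tiny wordlist;
    # real attacks use far larger lists of common passwords.
    captured_hash = hashlib.sha256(b"sunshine").hexdigest()
    wordlist = ["password", "letmein", "qwerty", "sunshine"]

    def dictionary_attack(target_hash, words):
        """Hash each candidate word and compare it with the captured hash."""
        for word in words:
            if hashlib.sha256(word.encode()).hexdigest() == target_hash:
                return word  # the password was a dictionary word
        return None

    print(dictionary_attack(captured_hash, wordlist))  # -> sunshine

The sketch also shows why long, non-dictionary passwords defeat this attack: a password absent from the attacker's list is simply never found.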

OS Hardening

OS hardening is important in fostering security for web, email, file, print, and database servers. It can be implemented by installing the latest web server and browser software and applying the most recent security patches (National Institute of Standards and Technology, 2006). Administrators should carry this out and should also ensure that file servers are secured with passwords to prevent unauthorized access. The general staff should follow best practices such as safeguarding passwords and enforcing confidentiality when handling sensitive files.

Network Infrastructure Attacks

These can be carried out through back door attacks, in which a malicious program opens a port for the hacker so that they can control the infected system (Meyers, 2009). The port opened by the malicious program is usually one not used by network services. Through the use of bugs, malicious users can gain access to the system by bypassing device security checks. A hacker may also overload a specific network, bringing the flow of information to a halt. Administrators should install the latest firmware and software and scan their network devices to identify unused open ports.
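As a sketch of how administrators might check for unexpectedly open ports, the following Python example attempts a TCP connection to each port on a host using only the standard socket module; the host and port range are placeholders, and such scans should only be run against systems one administers.

    import socket

    def scan_ports(host, ports, timeout=0.5):
        """Attempt a TCP connection to each port; connect_ex returns 0 if open."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    print(scan_ports("127.0.0.1", range(20, 1025)))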

Security Zones

They include the DMZ, where a high number of publicly accessed network systems are located and where traffic is controlled by an administered firewall. NAT allows private IP addresses to be translated into routable addresses for use on the internet. NAT and subnetting ensure that internal addresses are not accessed by external parties, thus addressing the problem of spoofing (National Institute of Standards and Technology, 2006). A VLAN enables the segmentation of large physical networks; it provides security because users in one LAN cannot access other LANs on the same network.
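The private/public address distinction that NAT relies on can be demonstrated with Python's standard ipaddress module; the addresses below are arbitrary examples.

    import ipaddress

    # RFC 1918 private addresses are not routable on the public internet,
    # which is why NAT must translate them before traffic leaves the LAN.
    for text in ["10.0.0.5", "172.16.4.20", "192.168.1.10", "8.8.8.8"]:
        addr = ipaddress.ip_address(text)
        print(text, "->", "private" if addr.is_private else "public")

    # Sub-netting: test whether a host belongs to an internal subnet.
    print(ipaddress.ip_address("192.168.1.10")
          in ipaddress.ip_network("192.168.1.0/24"))  # True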

Network Device Hardening

It refers to the examination of the network infrastructure for security vulnerabilities. This is done by installing the latest network software and constantly checking for newer updates. The latest security and bug-fix patches should also be installed on network systems. Configuration settings can be optimized to exclude optional services that could be exploited for malicious purposes (National Institute of Standards and Technology, 2006). In addition, all network devices, such as routers and switches, should be secured to prevent unauthorized access to the network. For wireless networks, encryption of data is the best security measure.

HIDS and NIDS

These are used to deter intrusions into networks. A NIDS analyzes network activity and data packets for suspicious activity: it determines whether packets have been changed in transit, contain suspicious code, or are malformed or corrupted, and then notifies the administrator via an alarm system. A HIDS examines a specific device or host for suspicious activity; it can detect attacks from a user physically working at the console and then alerts the administrator. HIDS should be installed on specific devices in a network, while NIDS should be placed at specific points of the network.
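One common NIDS technique is matching traffic against signatures of known-bad content. The toy sketch below illustrates that idea only; the signatures and the sample payload are invented, and no real IDS engine is this simple.

    # Toy signature matcher in the spirit of a NIDS rule engine.
    SIGNATURES = {
        b"' OR 1=1": "possible SQL injection",
        b"../etc/passwd": "possible path traversal",
    }

    def inspect_payload(payload):
        """Return an alert for every known-bad byte pattern in the payload."""
        return [alert for sig, alert in SIGNATURES.items() if sig in payload]

    print(inspect_payload(b"GET /item.php?id=1' OR 1=1 HTTP/1.1"))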

Wireless Infrastructure

Possible threats to wireless networks include data emanation, wardriving, and rogue access points and devices. Security measures to address them include service set identifiers (network names coupled with passwords), MAC address filtering, WEP security, WPA and WPA2 security, and personal firewalls (National Institute of Standards and Technology, 2006).

Access Control Methods

Administrators should adopt proper access control and authentication policies. This can be done through the creation of login IDs and passwords. Passwords should be reinforced through long, non-dictionary alphanumeric combinations of characters and through regular password rotation and aging. Network file servers should be protected with file access permissions on a per-user or per-group basis (Meyers, 2009).
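A minimal sketch of how such a password policy might be enforced in code; the 12-character threshold and the tiny stand-in dictionary are assumptions for illustration, not values from the text.

    import re

    COMMON_WORDS = {"password", "letmein", "qwerty"}  # stand-in dictionary

    def password_ok(pw, min_len=12):
        """Check length, mixed character classes, and a non-dictionary rule."""
        return bool(
            len(pw) >= min_len
            and re.search(r"[a-z]", pw)
            and re.search(r"[A-Z]", pw)
            and re.search(r"\d", pw)
            and pw.lower() not in COMMON_WORDS
        )

    print(password_ok("Blue-Kettle-42x"))  # True
    print(password_ok("password"))        # False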

User Accounts and Password Management Policies

For security purposes, user accounts should be restricted through appropriate naming conventions, limits on login attempts, disabling of unused accounts, time and machine restrictions, and the use of tokens (Meyers, 2009). Important password policies include setting a minimum password length and enforcing password rotation and aging.
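One way the limit on login attempts might be implemented is an in-memory failure counter with a lockout window; a simplified sketch with arbitrary policy values follows.

    import time

    MAX_ATTEMPTS = 3        # arbitrary policy values for illustration
    LOCKOUT_SECONDS = 300
    failures = {}           # username -> (failure count, time of last failure)

    def record_failure(user):
        count, _ = failures.get(user, (0, 0.0))
        failures[user] = (count + 1, time.time())

    def is_locked(user):
        count, last = failures.get(user, (0, 0.0))
        return count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS

    for _ in range(3):
        record_failure("alice")
    print(is_locked("alice"))  # True until the lockout window expires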

Authentication

Authentication can be accomplished over serial cable networks and dial-up modems; modern methods involve the use of complex VPNs. These can be secured by encrypting information sent over the network so that it cannot be captured by unauthorized users.
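To illustrate the idea of encrypting information so it cannot be read in transit, here is a small sketch using the Fernet recipe from the third-party cryptography package (pip install cryptography). Real VPNs apply comparable symmetric ciphers at the packet level rather than through this API.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must be shared by both endpoints beforehand
    cipher = Fernet(key)

    token = cipher.encrypt(b"confidential payroll data")  # opaque in transit
    print(cipher.decrypt(token))  # b'confidential payroll data' at the far end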

VPN Protocols

The various VPN protocols are PPTP, L2TP, and IPSec (Meyers, 2009). PPTP encapsulates PPP packets in order to create VPN connections (Meyers, 2009). L2TP is a hybrid of PPTP and L2F, the latter created by Cisco Systems (Cisco Systems Inc., 2007). IPSec provides privacy, integrity, and authenticity for information transferred across IP networks. I would recommend L2TP, since it offers authenticity, privacy, and integrity for the data being transmitted; hence, it is more secure (Meyers, 2009).

Authentication Protocols

These include PAP, CHAP, RADIUS, LDAP, TACACS, Kerberos, and biometrics, among others (Hassell, 2002). I would recommend CHAP, which is more secure than PAP because it prevents replay attacks by hackers who capture data and resend it (Cisco Systems Inc., 2007). I would also recommend RADIUS, especially in the case of dial-up modems (Hassell, 2002). TACACS is also recommendable, as it works with dial-up modems.
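CHAP's resistance to replay comes from hashing a shared secret together with a fresh random challenge, so a captured response is useless for later sessions. A sketch of the response computation defined in RFC 1994, with a placeholder secret:

    import hashlib
    import os

    def chap_response(identifier, secret, challenge):
        """RFC 1994: MD5 over the identifier byte, shared secret, and challenge."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret = b"shared-secret"   # placeholder; known to both peers, never sent
    challenge = os.urandom(16)  # fresh per session, so replays are useless

    expected = chap_response(1, secret, challenge)  # computed by the server
    answer = chap_response(1, secret, challenge)    # computed by the client
    print(answer == expected)   # True -> authenticated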

Physical Security Measures

Several physical security measures can be taken to secure the buildings housing networks. These include laying down physical barriers, e.g., fences and gates, to restrict unauthorized persons from accessing the network infrastructure in the buildings (Meyers, 2009). Individual rooms can be further secured with burglar-proof locks and alarm systems on their doors. Employees working in the buildings should also wear identification passes to distinguish them from impostors. Finally, all areas of the building should be well illuminated to deter intruders and provide security for employees.

References

Cisco Systems Inc. (2007). Understanding and configuring PPP CHAP authentication. Cisco tech notes. Web.

Hassell, J. (2002). RADIUS: Securing public access to private resources. Cambridge, MA: O'Reilly and Associates.

Meyers, M. (2009). Mike Meyers' CompTIA Security+ certification passport (2nd ed.). McGraw Hill.

National Institute of Standards and Technology (NIST). (2006). Guide to general server security (Special Publication 800-123). Washington, D.C.: U.S. Department of Commerce. Web.

Upgrading Computers in Business Organizations

Introduction

The functioning of any company today depends critically on the technology it uses. Appropriate devices can bring numerous benefits: enhanced effectiveness, reduced costs, and higher client satisfaction can all be achieved by integrating innovations. At the same time, it is vital to ensure the available technology is not outdated and can perform the major tasks at hand; otherwise, it will become a barrier to future improvement. Under these conditions, timely updating of the computers, software, and networks vital for a company's work is a critical demand for modern businesses. It deserves specific attention, as it requires budgeting and planning for replacing outdated devices with new ones. The proposed scenario implies just such an update process as the core demand for the continued functioning of the department store.

Background

The company wants to enhance its IT infrastructure to improve its order processing activities and introduce new features. For this reason, the upgrading procedure was initiated to consider available options. At the moment, the store uses eight computers with the following features:

  • Quad-core processors;
  • One-terabyte hard drives;
  • 5 GB RAM;
  • Windows XP, Linux, or Mac OS X.

Until now, the given specifications were sufficient; however, the focus on adding new features introduces the need for an upgrade. The following future tasks should be considered while choosing the hardware and software:

  • Running a processing application;
  • Communicating with customers;
  • Storing customer data on the local PC.

Under these conditions, the planned upgrade should offer solutions able to meet these requirements and contribute to a boost in the department store's effectiveness. At the same time, it is vital to consider the price of all components to ensure the company can find the necessary finances.

Operating System

Multiprocessing is one of the core demands outlined by the department store, meaning the chosen operating system should be able to support and utilize more than one computer process, helping to perform several tasks simultaneously. Several modern operating systems support multiprocessing, including Linux/Unix and Windows. However, given the existing tasks, the Windows family, specifically Windows 10 Pro, seems the best option. It costs $199, while upgrading from XP to 10 costs $99 (Microsoft, n.d.). It supports multiprocessing, which is one of the core demands; moreover, it is more user-friendly and, because it is prevalent, helps to minimize potential compatibility issues (Microsoft, n.d.). Developers also create numerous applications for this platform, which will simplify the choice of the needed software and increase its effectiveness. For these reasons, Windows 10 Pro is the best update option.

Processor

The choice of a processor is also an important step, vital for upgrading the four computers. The critical demands influencing the choice are multiprocessing, running applications, and resolving several tasks at the same time. At the moment, quad-core processors are used in all the company's PCs. Considering the scenario and current tasks, the Intel Core i7 is viewed as an appropriate option. It can be bought from the official vendor via Amazon and costs $417 (Intel Core i7-12700KF, n.d.). Several reasons justify this choice and the necessity of buying the processor. First, it is a powerful and up-to-date solution allowing the company to benefit from its computers' stable and practical work (Intel Core i7-12700KF, n.d.). Second, the Intel Core i7 series remains relevant at the moment and will preserve its topicality over the next several years, meaning the company will not soon have to initiate a new upgrade.

RAM

RAM requirements are linked to the OS, the planned workload, and the applications and tasks that will be performed. The company plans to perform several tasks at the same time while also processing requests from clients and providing them with needed information, meaning that RAM should be sufficient to meet all these requirements. At least 8 GB of RAM is recommended, as it will ensure high-speed response and the absence of delays. One possible option is a Kingston 2x8 GB kit, 16 GB in total, costing $105 (Kingston, n.d.). The official vendor can deliver it to the company and guarantee the stable work of the device. This choice will support the stable work of new applications and create the basis for further improvements in the future.

Hard Drive

Several factors also influence the choice of hard drive. First, the OS requirements influence the choice of the device. Second, the planned applications and operations should be considered. Finally, the number of clients who will use the computer is critical for selecting the hard drive: more clients means larger portions of information that should be stored to improve strategic planning. Thus, the Kingston KC3000 PCIe 4.0 NVMe M.2 SSD, with a capacity of 512 GB and costing $133, can be a suitable option (Kingston KC3000, n.d.). First, it allows for storing significant amounts of data, which is critical when working with clients. Second, it is a safe and reliable device that can be used for purposes similar to those introduced by the department store. For these reasons, this hard drive can be used to update the computers.

Software

Finally, the company should select software that will help it align interaction with clients and ensure they are satisfied with the increased quality of service. A backup database should be created to avoid losing data (Qureshi & Sharif, 2021); it can be built using cloud services, such as Google's. Second, the company intends to use the Internet as the major tool for communicating with clients and acquiring information from them. For this reason, a free Internet browser, such as Google Chrome, can be used. It will help to reach clients in different locations and cooperate with them (Qureshi & Sharif, 2021). Furthermore, messaging with clients demands a stable Internet connection, meaning that the company should ensure that its provider can meet these requirements and avoid data loss.

Public platforms for exchanging messages and emails can also be utilized; Yahoo is one possible choice. It ensures a high level of protection and can be used by, and is understandable to, most clients. Under these conditions, it becomes one of the possible choices for the company. At the same time, the recommended upgrades will give the department store's management a chance to use other applications they find useful for achieving their goals. The integration of the recommended changes will eliminate hardware-related limits and promote diversity of choice for the firm, which is vital to support its competitiveness and future development.

Costs

In such a way, the preceding recommendations cover the major components of the computers that should be upgraded so they remain capable of performing the existing tasks: the OS, RAM, hard drive, and processor, which are fundamental for attaining a high speed of response and reliable work. The final cost of updating one computer is around $854 (OS $199 + processor $417 + RAM $105 + hard drive $133). Upgrading four PCs will cost $3,416, which can be viewed as a long-term investment in the development of the firm and its future success. If this cost is too high for the firm, it can be reduced by selecting a less powerful processor and RAM; a hard drive with lower capacity can also be chosen to reduce the spending on the update.
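The cost arithmetic can be verified in a few lines of Python, using the figures quoted above.

    parts = {"OS": 199, "processor": 417, "RAM": 105, "hard drive": 133}
    per_pc = sum(parts.values())
    print(per_pc)      # 854 per computer
    print(per_pc * 4)  # 3416 for four PCs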

Conclusion

Altogether, the scenario shows that when selecting hardware and software for a computer, it is vital to consider the goals and purposes the company pursues. The increased number of users, along with the stored data and the requirements for multitasking, means that the PC should possess enough resources to process all information and ensure it is used effectively. Under these conditions, Windows 10 Pro with an Intel Core i7, 16 GB of RAM, and a 512 GB hard drive is chosen as a possible option. Using these components will cost around $854 per computer; however, it will create the basis for future excellence and improved client service, which is also vital for higher satisfaction levels. All components can be bought directly from vendors and delivered to the needed destination. The given upgrade process is viewed as an essential part of the company's functioning and a positive change vital for goal achievement.

References

Intel Core i7-12700KF Desktop Processor 12 (8P+4E). (n.d.). Amazon.

Kingston. (n.d.). Kingston FURY Beast DDR4 memory.

Kingston KC3000 PCIe 4.0 NVMe M.2 SSD. (n.d.). Amazon.

Microsoft. (n.d.). Windows 10 Pro.

Qureshi, H., & Sharif, H. (2021). Snowflake cookbook: Techniques for building modern cloud data warehousing solutions. Packt Publishing.

Computer Fraud in the United Kingdom

Computer Fraud

The term computer fraud refers to cybercrime in which a person uses a computer to access an electronic database illegally, without the owner's consent. In the United Kingdom (UK), more than £1.3 billion in computer fraud against individuals and organizations was reported in 2021 (Scroxton, 2021, par. 2). Cybercrime has given computer fraud the means to manipulate systems so that data can be accessed in an effective but unlawful way. According to the National Fraud Intelligence Bureau, major spikes occurred during the UK's second pandemic lockdown, when more than 137,000 crimes were reported to have cost enterprises more than £635 million in the first quarter of 2021 (Scroxton, 2021, par. 4). The three main types of computer fraud include hacking, data mining via malware, and the distribution of hoax emails. A person is guilty of fraud if they breach Sections 2, 3, or 4 of the Fraud Act 2006 (Gupta et al., 2020). Under these sections, a person commits fraud by false representation, failure to disclose information, or abuse of position.

Hacking

Hacking is a common form of computer fraud in which someone identifies the weaknesses of a computer system and attempts to exploit the network to gain personal data or business information. For example, people who use password-cracking algorithms to access a given database are classified as hackers. People are often prompted by business or personal issues to start hacking. 82% of firms in the UK have been victims of ransomware attacks, and most have had to pay the malicious people who hacked their systems to regain their data (Allen, 2017, p. 3). The UK's tendency to pay cybercriminals means there are notable cases of hacking witnessed in the country. Companies that have been affected by hacking must hold something of interest, such as monetary or other value, that the hackers want to benefit from after gaining access.

How Hacking is Conducted

For hackers to gain sensitive data, they must have a central point of observation of a given system and can therefore target its exploitable variables. Phishing attacks are the key methodology that many hackers use to gain access to given networks. Phishing occurs when a party in a given firm is lured into clicking a certain link, sent through an email, that carries malware (Gupta et al., 2020). Through this mechanism, criminals access data with the help of a system that monitors the authentication protocols set in the computers when a given process is launched digitally. With the help of a customized configuration of computer elements, a hacker infiltrates a system by installing a virus, a damaging component that can wipe anything stored on a computer device. Most of the time, hackers get users to click on links in emails that open messages, websites, and downloads.

Additionally, hackers log and review the keystrokes entered during computer usage. Once the malware is installed on the victim's computer, it can record the keystrokes that give the hacker details such as passwords, PINs, and other security features required to open files, folders, and applications. Advanced innovation has led to improved processes that can develop algorithms to generate combinations of symbols, numbers, and letters used as passwords (Ashman Solicitors, 2020). By using a brute-force attack, the hacker can work through the combinations, making it easy to access the computer. Furthermore, hacking can succeed via the dictionary attack, a technique that inserts random common words into the fields that require a password. In this case, the system suggests possible values to be entered to gain access to the database.
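The brute-force approach described above amounts to enumerating the entire keyspace, which is why password length is the main defense: there are 26**n lowercase candidates of length n. A toy sketch follows, in which the check function stands in for a real verification step.

    import itertools
    import string

    def brute_force(check, max_len=4, alphabet=string.ascii_lowercase):
        """Try every combination up to max_len until the check succeeds."""
        for n in range(1, max_len + 1):
            for combo in itertools.product(alphabet, repeat=n):
                guess = "".join(combo)
                if check(guess):
                    return guess
        return None

    print(brute_force(lambda guess: guess == "dog"))  # -> dog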

The use of zombie computers has been common in enabling hacking in the UK. A hacker commits distributed denial-of-service (DDoS) attacks, and a user can execute code without awareness of the possible repercussions (Gupta et al., 2020). The zombie technique combined with DDoS enables an attacker to connect their system to the victim's; at this point, it is easy to control both systems without the other user's awareness. Thus, computer fraud is successfully carried out by these means. It is important to mention that hacking is a major cybercrime issue that has long plagued the UK government; however, legal statutes deal with this kind of computer fraud.

How Hacking Might be Prosecuted in the UK

Hacking is dealt with as a cybercrime issue under UK law. The Computer Misuse Act 1990 (CMA) is the major UK law relating to attacks on computer systems, into which hacking fits. Jurisdiction to prosecute CMA offenses is established if a direct link exists with domestic jurisdiction (Gupta et al., 2020). Section 3(1) of the Investigatory Powers Act 2016, in force since 2018, defines the relevant offense as deliberately intercepting a communication transmitted by telecommunication without the consent of the party at which the interception is directed. The CMA criminalizes any access to data without consent from the owner. Committing a hacking crime in the UK is punishable by up to two years' imprisonment (Gupta et al., 2020, p. 67); however, that is for level 1, unauthorized access to computer material. Level 2 covers access to a computer system with intent to commit a further offense or facilitate one.

Additionally, level 3, which comprises the elements of the previous levels but with intent to distort or impair a computer system, is punishable by up to ten years in prison (Gupta et al., 2020). However, for these levels to be applied, legal processes must ascertain whether or not the crime was committed. It is important to combat hacking to ensure there are few cases of computer fraud, not only in the UK but in other parts of the world.

Distribution of Hoax Emails

An email hoax is a scam sent via electronic mailing systems and designed to deceive users for monetary gain. Most of the time, email hoaxes are targeted at people who are likely to accept the idea (Ashman Solicitors, 2020). Common tactics use emails that lure users into giving money on premises such as charity appeals for missing children, lotteries, fake security warnings, and chain letters. For instance, an email hoax may press the user to respond to an alleged threat such as a virus or another security-related issue. 45% of people interviewed in the UK about such issues admitted to having fallen victim (Allen, 2017, p. 3). Therefore, it is fair to say that cybercrime is a sensitive issue that requires regulation.

How Email Hoax is Done

Email hoaxes and scams have been a major cybercrime issue. Nowadays, companies rely on emails for communication; official channels meant to ease communication have therefore opened the door to malice, since people want to trick others for monetary gain. For example, one may hear that a certain software development company offers $100 to people who forward emails containing given information to several people (Allen, 2017). An email hoax often comes with a warning about a harmful component, such as a virus that can wipe hard drives. In this case, scammers prompt the victim to forward the message to the contacts in their address book. Given people's level of cautiousness, they will accept and forward the contents of the email.

Some of the emails sent usually include payment methods in case one wants to pay for protection from the alleged threats to their system. Once a person pays the money, they never receive any assistance; rather, the scammer turns on the victim and accuses them of attempted fraud through false allegations. One of the familiar messages in this category is You can be a Millionaire, which has led many people to fall prey to the scams (Gupta et al., 2020). The purpose of the scammer is to create a mania that makes people pay money out of fear of the subject in question. Email hoaxes have thus been a challenge for UK businesses and individuals alike, and there is a need for legal action that can combat these problems in the long term.

How Email Hoax Might be Prosecuted in the UK

When scammers pretend to help online users with a given issue on their computer, it is a form of false representation, which breaches the statutory provisions that cover fraud. When one is caught in email hoax fraud, they are subject to a £5,000 fine, 90 days in prison, or both (Gupta et al., 2020, p. 23). Even though this issue lies under the regulation of the CMA, the punishment appears lighter than for hackers; however, it depends on the degree of scamming. When false representation is aimed at weakening a given company's market share, the people behind it may be made to pay financially for any resulting damage.

If the scam was aimed at sabotaging a government agency's operations serving the people of the UK, a person might be sent to prison for many more years. That has happened to convicts in the high court, where most may receive an unlimited fine in addition to being put behind bars. One may instead be invited to pay a civil penalty as an alternative to prosecution, with proceedings dropped depending on the gravity of the matter (Ashman Solicitors, 2020). Once a suspect is presented to the court and there is enough evidence of scamming through an email hoax, prosecution begins. The terms may be flexible depending on the level of authority that was crossed.

Identity Theft

In this form of computer fraud, an attacker applies deceptive techniques to obtain data or information from a victim and misuses it while acting under the owner's name. It is important to note that identity theft is different from hacking: here, criminals obtain personal information by eavesdropping on passwords or retrieving them secretly, without tampering with computer systems (Ashman Solicitors, 2020). When the thieves access the victims' computers, they fraudulently manipulate them while posing as the real owners. This is where the idea of identity theft comes from.

How Identity Theft is Done

When malicious people gain the personal details of a computer's real owner, they start carrying out various activities, such as applying for loans, buying and selling online, and modifying the victim's medical and financial information for their own gain. Identity theft works like phishing, linked to the social engineering tactics utilized to pry confidential data from computer users (Ashman Solicitors, 2020). Public profiles available on social networks and other popular platforms can be mined for data that helps criminals succeed in their targets. An October 2019 survey on identity theft in the UK indicates that 6% of the respondents had experienced identity theft in the previous four years (Gupta et al., 2020, p. 45). Another 45% of the people involved said they had been targeted by attempted identity theft in 2019 (Gupta et al., 2020, p. 45). Thus, this issue is one of many that have expanded the cybercrime problem in the UK.

How Identity Theft Might be Prosecuted in the UK

The punishment for identity theft in the UK is seven years in prison; when a person is charged under the Fraud Act, the sentence might go up to 10 years. This applies to any effort to supply articles that may have enabled the theft (Allen, 2017, p. 2). UK courts decide how sophisticated identity theft is depending on the planning and opportunities involved and the motivation behind the crime. Additionally, a person might be sent to prison for more than ten years where a position of trust was abused, as with impostors (Allen, 2017, p. 2). However, possessing articles for fraud offenses may draw from one to one-and-a-half years, depending on the complexity of the matter. There is a need to reduce these computer fraud issues in order to have a secure way of using computers online and offline.

Reference List

Allen, M. (2017) The state of crime: fraud, hacking and malware double the number of crimes committed in UK, Computer Fraud & Security, 2017(2), pp. 1-3.

Ashman Solicitors (2020) What is Fraud by False Representation? [online] Ashman Solicitors. Web.

Gupta, B., Perez, G., Agrawal, D. and Gupta, D. (2020) Handbook of Computer Networks and Cyber Security. Cham: Springer International Publishing.

Scroxton, A. (2021) UK loses £1.3bn to fraud and cybercrime this year. [online] ComputerWeekly.com. Web.

Computer Application System: Management and Purposes

Question 1

In computer application systems, we have studied quite a number of application systems, and a number of questions come up. These include:

What is AIS?

AIS stands for Application Interface Specification and is basically an assortment of open specifications.

What are three factors that influence the design of AIS?

The three factors that influence the design of AIS are the Platform Management Service (PLM), the Cluster Membership Service (CLM), and the Availability Management Framework (AMF).

What three important business functions does an AIS fulfil?

AIS has around twelve (12) services, which are classified into three major categories: AIS Platform Services, AIS Utility (general) Services, and AIS Management Services.

There are four types of data processing which help keep the data stored in files or databases current. What are they? Define each.

There are four types of data processing services that aid in keeping the data stored in files or databases updated. These are the Information Model Management Service (IMM), also known as the configuration management database; the Log Service (LOG); the Notification Service (NTF); and the Security Service (SEC). The latter is used to authenticate and authorize particular activities, which helps to retain, or in other words uphold, the integrity of AIS.

NTF is essentially a center that identifies concepts and sends a notification alert in case of incidents or status changes. LOG is another database service that deals with instances of logging events; the system is responsible for all logging services, which perform as per the settings current at the time of logging. Lastly, IMM is wholly responsible for the administrative applications specified for the object classes in AIS.

What is an ERP system (Enterprise Resource Planning)?

An ERP (Enterprise Resource Planning) system is a system that coordinates both internal and external management information throughout an organization. ERP operates through automation of a software application that supports finance/accounting, sales and services, and manufacturing, among other areas such as customer relationship management. In general, it links an organization's or business's flow of information to outside stakeholders.

Question 2

An internal environment, as a component of ERM, consists of seven items. Which are they? Give a brief example of each.

There are seven items which make up the internal environment as a component of ERM (Enterprise Risk Management):

  1. Establishing context: it serves the purpose of understanding the internal, risk management, and external contexts;
  2. Identifying risks: it includes thorough documentation of threats material to an organization;
  3. Analyzing/quantifying risks: identification of any possible risks from the material gathered;
  4. Integrating risks: it deals with a total evaluation, reflecting corrections, and hence arriving at results on the effect on the organization;
  5. Assessing/prioritizing risks: it involves considering every possible risk and determining how big the risk is in terms of priority to the organization;
  6. Treating/exploiting risks: this involves the controlling measures put in place;
  7. Monitoring and reviewing: it is the gradual assessment and monitoring of the various risks to the company.

What is COSO (Committee of Sponsoring Organizations)? Explain its purpose/value.

COSO (Committee of Sponsoring Organizations) is an initiative of five private-sector organizations that came together to offer leadership and guidance on the risk management framework. It is mainly in place to identify possible risks, manage those risks, and provide reasonable and acceptable assurance regarding the entity's objectives.

What is PCAOB? What created it? What is its main purpose?

PCAOB stands for Public Company Accounting Oversight Board, a non-profit, purely private-sector corporation. It was created by the Sarbanes-Oxley Act of 2002, a United States federal law. Its main purpose is to regulate the audit reports of organizations so that they contain sufficient and appropriate information for investors, in order to safeguard their interests.

What are the most common methods of collecting audit evidence? List at least seven and give a brief explanation of each.

Several methods are commonly used to collect audit evidence. The major seven are:

  1. Inspection: its general role is to confirm that an asset exists or that a certain transaction really occurred;
  2. Observation: this involves the auditor being present when a procedure is taking place;
  3. External confirmation: it involves seeking information from other sources, such as a bank;
  4. Recalculation: checking arithmetic accuracy thoroughly;
  5. Reperformance: redoing several reconciliations as at certain reporting dates;
  6. Analytical procedures: comparing reports from different years and establishing predictable relationships;
  7. Inquiry: it involves obtaining information from clients or the management body.

911 Evolution: Computer-Aided Design for Personal Safety

Introduction

The USA's emergency service relies heavily on the speed of reaction and allocation when interacting with people in need. However, a lack of funding and logistical management issues often lead to inefficient service provision and consequent victims. Every year, over 10,000 Americans die due to the outdated 911 system (Reynolds, 2017). The problem of comparative inefficiency persists despite the implementation of NG911.

911 Dispatcher: Job Description

911 dispatchers receive a wide range of complaints, from automobile accidents to active criminal actions, and coordinate the dispatch of appropriate emergency personnel.

The methods of communication utilized in this line of work change along with the latest advancements, satisfying the need for urgent connection.

Introduction: CAD/CAM

CAD/CAM software is used to design a product as well as to program manufacturing processes such as CNC machining. CAM software generates toolpaths that drive machine tools to transform designs into physical components, using models and assemblies produced in CAD software like Fusion 360 (Simon, Taylor & Tom, 2019). Prototypes, final parts, and production runs are all designed and manufactured using CAD/CAM software.

What Is CAD/CAM?

The phrase CAD-CAM refers to the software that is used for designing, machining, and manufacturing with a CNC machine. Computer-Aided Design is abbreviated as CAD, and Computer-Aided Manufacturing is abbreviated as CAM.

CAD software is used to design and sketch models, which are then assembled using geometric forms. Not every produced item, however, needs to be created as a solid 3D model.

CAD

Modern CAD software enables the design of components for CNC machining on 2, 3, 4, and 5 axes. As designed parts are transmitted to CAM for programming the machine side of the production process, CAD software is a vital aspect of the manufacturing process.

CAM

Before a CAD model can be converted to machine language, the CAM software must be set up to calculate the cutting routes that the tools will take to remove superfluous material and create a component.

CNC milling, CNC lathe, and CNC router work are the most common applications; however, component programming for CNC water jet, plasma, laser, and burning machines is also found.
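To make the CAD-to-CAM handoff concrete, the toy sketch below emits G-code, the common machine language for CNC controllers, tracing a simple rectangular contour. The dimensions, feed rate, and retract height are invented, and a real CAM system computes far more (tool offsets, step-downs, lead-ins).

    def rectangle_toolpath(width, height, depth, feed=300):
        """Emit toy G-code that traces a rectangular contour at one depth."""
        lines = [
            "G21 ; millimetre units",
            "G0 Z5 ; retract above the stock",
            "G0 X0 Y0 ; rapid move to the start corner",
            f"G1 Z-{depth} F{feed} ; plunge to cutting depth",
        ]
        for x, y in [(width, 0), (width, height), (0, height), (0, 0)]:
            lines.append(f"G1 X{x} Y{y} F{feed} ; cutting move")
        lines.append("G0 Z5 ; retract when done")
        return "\n".join(lines)

    print(rectangle_toolpath(40, 20, 1.5))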

Workplace Background

With the advancement of contemporary endosseous implant design and improved surface technology, new restorative procedures have been developed that reduce the overall treatment duration for patients. By utilizing the latest scanning, CAD/CAM, and manufacturing technologies, we are able to produce personalized dental restorations with great accuracy and a flawless precision of fit.

About the Job: Surgical Technologist

Surgical technologists work under the supervision of a surgeon to ensure that invasive surgical procedures are carried out safely and effectively: they ensure that the operating room environment is safe, that equipment is in working order, and that the operative procedure is conducted in a manner that maximizes patient safety.

Surgical Technologist: CAD/CAM Use

The traditional procedure necessitates a time-consuming and costly logistics chain. The workflow of directly milled surgical guides, created from a combination of optical impressions and radiological information, was assessed in a pilot study.

Current Technological State

At the moment, 911 is experiencing yet another period of involuntary modernization.

Some police stations are pushing back against further technological implementation, but it is required to optimize the performance of each individual unit.

Contribution to 911 Services

CAD/CAM-based security solutions provide a full package of services in a single software application.

The computer-assisted 911 vehicle dispatch program includes vehicle dispatch, call dispatch, resource deployment, instructions and protocols, and status modification (Lum et al., 2020).

Optimizing vehicle dispatch and the coordination between vehicles would allow the security services to be there for many more people around Dublin.
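As a rough illustration of the core of a computer-assisted dispatch program, the sketch below keeps a priority queue of incoming calls and assigns the nearest idle unit. All identifiers, coordinates, and priorities are invented, and real CAD systems layer protocols, statuses, and mapping on top.

    import heapq

    calls = []  # min-heap of (priority, call id, location); 1 = most urgent
    heapq.heappush(calls, (1, "C-102", (3, 4)))
    heapq.heappush(calls, (3, "C-103", (9, 2)))

    units = {"unit-7": (0, 0), "unit-12": (8, 1)}  # idle units and positions

    def dispatch():
        """Pop the most urgent call and assign the nearest idle unit."""
        _, call_id, (cx, cy) = heapq.heappop(calls)
        unit = min(units, key=lambda u: (units[u][0] - cx) ** 2
                                        + (units[u][1] - cy) ** 2)
        units.pop(unit)  # the unit is now busy
        return call_id, unit

    print(dispatch())  # -> ('C-102', 'unit-7')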

CAD/CAM Disadvantages

Hand sketching can be used to divide data into different transparent overlays. For example, a construction plan may include different overlays for structural, electrical, and plumbing components (Reynolds et al., 2017). In CAD, layers are similar to transparent overlays.

Like overlays, layers can be shown, modified, and printed alone or in combination. One may name layers to make it simpler to remember what is on them, and one can lock layers to prevent them from being modified.

CAD applications frequently demand substantial computer processing power, which necessitates the purchase of expensive, high-quality computer hardware. CAM necessitates costly, sophisticated production equipment. The high cost of hardware is a key drawback of CAD/CAM and a major impediment to its widespread adoption.

CAD software is becoming more versatile and adaptive as time goes on. This, however, comes at the cost of increasing the software's complexity, which makes learning the software more challenging for new users. This intricacy, combined with the cost of training employees in CAD/CAM technology, is another drawback of CAD/CAM.

The computers and equipment required for CAD/CAM must be maintained, which can strain available resources. When computers or devices fail, the result is costly downtime, which is inconvenient for everyone concerned. Maintaining a preventative maintenance program can be beneficial, but breakdowns are unavoidable, which is a negative.

CAD/CAM Advantages

The capacity to produce extremely precise designs, drawings that may be generated in 2D or 3D and rotated, and integration with other computer programs are just a few of the benefits of CAD (Nath, Ray, Basak & Bhunia, 2018).

Before one begins sketching in manual drafting, one must first identify the scale of a view; this scale relates the real size of an object to the size of its paper model.

One of the benefits of CAD/CAM is that design modifications may be made quickly with CAD software. Before CAD, a design modification would have required a draftsperson to entirely redraw the design to the new standard. One of the advantages of CAD in textiles is that it allows designers to play with designs and make small adjustments on the fly. CAD may also be used to simulate a design's behavior in software; for example, it may simulate the airflow around an engine. This gives the design process more freedom.

Rapid prototyping is another benefit of CAD/CAM. It lets designers build tangible prototypes during the design process, and various aspects of the design may be tested on these physical prototypes. If the objective is to create a steel item, for example, a prototype constructed of clear acrylic can be used: because the acrylic sample is transparent, designers can see the pattern of stresses and strains within the product. This gives the physical design and development process more freedom.

Conclusion

Although CAD/CAM technology is best known for its role in physical goods manufacturing and retail, the 911 security system could clearly take advantage of it as well.

Its automation-focused options are a good fit for addressing the system's current, persistent inefficiency.

Bibliography

Lum, C., Koper, C. S., Wu, X., Johnson, W., & Stoltz, M. (2020). Examining the empirical realities of proactive policing through systematic observations and computer-aided dispatch data. Police Quarterly, 23(3), 283-310. Web.

Nath, A. P. D., Ray, S., Basak, A., & Bhunia, S. (2018). System-on-chip security architecture and CAD framework for hardware patch. In 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC) (pp. 733-738). IEEE. Web.

Reynolds, M. S., MacGregor, D. M., Barry, M. D., Lottering, N., Schmutz, B., Wilson, L. J., Meredith, M., & Gregory, L. S. (2017). Standardized anthropological measurement of postcranial bones using three-dimensional models in CAD software. Forensic science international, 278, 381-387. Web.

Simon, M. A., Taylor, S., & Tom, L. S. (2019). Leveraging digital platforms to scale health care workforce development: The career 911 massive open online course. Progress in Community Health Partnerships: Research, Education, and Action, 13(5), 123. Web.

Plan to Support Students Learning English and Programming

Summary

It is clear that learning English and coding at the same time presented challenges for non-native English speakers when it came to reading educational content, communicating technically, reading and writing software, and other related tasks. Such learners demanded additional images, multimedia, culturally neutral code patterns, simplified English without culturally specific language, and training materials with built-in dictionaries (Aeiad & Meziane, 2019). Some people were inspired to learn English more effectively through programming, and programming also clarified their logical reasoning about natural languages.

The preponderance of the literature and of widely used programming languages is written in English. According to a current study on how communicating with only basic comprehension impacts learning new information, a person must expend half of their brainpower to understand one English phrase and the other half to acquire the new terminology of computer languages (Alaofi, 2020). It can therefore be challenging for non-native English speakers to describe their programming skills in manuals.

The statements about the students' native-language condition were made subject to the study's limitation of self-reported language competence. Confidence levels varied significantly by gender, with male students expressing much greater confidence than female and non-binary students. Confidence levels were also strongly influenced by past experience, with more knowledgeable students reporting higher levels of satisfaction than less experienced students. This supports earlier research asserting that academic self-efficacy is linked to prior educational excellence.

Plan

Future research in this area may focus on identifying the phrases that hinder non-native English speakers the most. Programming languages are closely related to English, despite the idealized notion of a computer language as pure mathematical rationality set apart from messy human languages (Guzman et al., 2021). The implications of different software implementation strategies for supportability, durability, and usefulness between native and non-native English speakers might be experimentally measured in future research. To identify the aspects that non-native English speakers struggle with the most when studying a coding standard, one may assess the variations in mental demand experienced by native and non-native English speakers.

A crucial component of the plan will be the development of a system of learning material that is compatible with the goal of occupational training and that adjusts both to the features of first-year learners and to the instructional features of a course taught in English. The future project will benefit from comprehensively utilizing a variety of cutting-edge teaching techniques and from developing an assessment system with different assessment modes that concentrate on students' conceptual understanding and practical abilities. Students' understanding and effective application of theoretical information, the development of practical abilities, and piqued enthusiasm for practice are all goals of teaching experiential content. The experimental material must be chosen carefully due to the constrained course hours for experiments.

References

Alaofi, S. (2020). The impact of English language on non-native English speaking students' performance in programming class. Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education. 585-586.

Aeiad, E., & Meziane, F. (2019). An adaptable and personalized E-learning system applied to computer science Programmes design. Education and Information Technologies, 24(2), 1485-1509.

Hagiwara, S., & Rodriguez, N. J. (2021). English learners (EL) and computer science (CS) learning: Equity issues. In Handbook of Research on Equity in Computer Science in P-16 Education. IGI Global. 70-87.

Guo, P. J. (2018). Non-native English speakers learning computer programming: Barriers, desires, and design opportunities. In Proceedings of the 2018 CHI conference on human factors in computing systems. 1-14.

Guzman, C. N., Xu, A., & Gerald Soosai Raj, A. (2021). Experiences of non-native English speakers learning computer science in a US university. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education. 633-639.

Reasons Why Computers Will Never Achieve Self Awareness

Introduction

Computers have come to dominate the human world, with almost every task requiring a computer in order to be accomplished. Computing devices have developed through different generations, each advancing the work of the generation before it. Command handling, speed, and CPU memory have grown and improved proportionally because of miniaturization. Each computer generation is characterized by vital technical progress that has altered the way computers fundamentally function, bringing about cheaper, smaller, more efficient, more powerful, and more reliable devices. In the first generation, computers used vacuum tubes for processing and magnetic drums for memory.

The generation that followed was marked by the replacement of tubes with transistors, and then by integrated circuits (ICs). The fourth generation was ushered in by the development of the microprocessor, which packed thousands of integrated circuits onto a single chip; this is the present generation. There is a fifth generation, though, which covers the present and the future: the generation of artificial intelligence. It is still in development, although some aspects, like voice recognition, are in operation. This artificial intelligence is intended to make computers behave like human beings and develop self awareness, which has led to the argument over whether computers will or will never develop self awareness. The computer scientists' argument that computers will develop self awareness is not true.

Self awareness

Although computers are improving at a soaring rate, with larger memories and ever faster speeds, they are just machines performing tasks that they have been designed to do. Computers do not, in the actual sense, create or develop new ideas of their own or even think on their own (Dreyfus 190). Self awareness requires intelligence, and with intelligence come creativity, understanding, and simulation versus emulation. The fact that computers cannot have creativity, empathy, or understanding, and cannot work, act, and behave like humans, means that computers cannot and will never achieve and develop self awareness like human beings (Fetzer 13). Computers blindly follow all the commands given by human beings, regardless of how foolish the instructions or commands are. The day computers learn to work on their own without following any instructions or commands is unlikely to come; therefore, computers will never achieve self awareness. This lack of intelligence keeps computers from developing self awareness.

Computer scientists have argued that the field of computer science known as artificial intelligence will surely take computers to the level of self awareness (Charniak 7). The fifth generation of computers aims to develop computers to such a level that they will outwit, outsmart, and even outlast their human inventors. Game playing is already being seen as a breakthrough toward computers developing self awareness. In 1997, an IBM computer gave artificial intelligence a breakthrough when it beat the world chess champion at his game. In 2011, IBM introduced Watson to Jeopardy! viewers in the United States; it was used to test how artificial intelligence can use logic to find answers to questions and interpret human language, and it managed to beat all of its human opponents (Dreyfus 237). These breakthroughs in the computer world have led to the notion that computers will eventually develop self awareness.

Computers can perform some tasks, like playing chess, better than human beings, not because they are intelligent or aware of what is required, but because they are programmed. Computers are programmed to use and exhaust all possible means in order to reach a solution. The aptitude of computers to comprehend natural language consists only of translating sounds into the particular characters that form specific words (Fetzer 5-9). These words are mapped to certain computer functions, hence simulating perception. This is not because computers are aware of what they are expected to do; they are just following mapped-out instructions and commands. It is true that computers are improving and becoming more complex, with the ability to handle an array of tasks, but they have been designed to perform those tasks. Therefore, as long as computers operate by obeying commands, they will never develop self awareness, despite the fact that they can perform better than humans.

The above points of view bring about the argument over whether computers will or will not develop self awareness. Human beings are always aware and know that they exist; they are aware of their environment and their surroundings, which computers are not. Computers evidently do not discern anything, such as knowing whether they exist; hence, they cannot grow self awareness. Human beings had a hard task developing a machine that would be able to know that it exists, mainly through a deep understanding of how others exist. Schank argues that computers only follow programmed instructions, and this fact is indelible: any self understanding that computers build up will come not from electronic progress but from the advancement of human ideas concerning the character of self awareness and intellect (7).

Reasons why computers will never achieve self awareness

It is true that computers have advanced to the artificial intelligence stage and perform quite astounding tasks. Nevertheless, they still follow any instructions given blindly, no matter how wrong or foolish the command is. For instance, a person can spend the whole day compiling a project and then press the quit button before saving the work; a computer will obediently delete all the work that has been done without considering the effort being put to waste. If a computer were designed to have self awareness, it would know that too much effort and time had been invested in that work and would save it. With the present breakthroughs in computer science, it is extremely tempting to believe that, at the rate technology is advancing, computers will eventually develop self awareness. Yet this can only be achieved if humans come up with computers that have intelligence and the ability to solve problems using reason rather than commands (Schank 44-46). Natural intelligence is unlikely to be imparted to computers; hence, they are not likely to develop self awareness.

Developing self-awareness requires understanding one's own existence and using reason to solve problems. Computers appear to understand and to reason, but it is only a pretense produced by clever code that the machines follow exactly. Their intelligence is merely artificial and cannot develop to the level of self-awareness (Dreyfus 21). For a machine to be genuinely intelligent, equivalent to humans, it would have to be lucid and wholly awake, yet completely beyond the margins of dualistic awareness. Computers are free of any conceptuality: they hold no opinions, no sense of uniqueness, no cognitive formations, no awareness, and no self-reflection. Even the most sophisticated hypothetical artificial intelligence will be unable to have authentic experiences, such as religious ones, because it will be unable to simulate or even emulate them. Computers are crippled without information, which makes them mere information processors; they have no content of their own. This is a clear indication that computers are gadgets that process information and are thus not aware of themselves.

Being self-aware requires thinking. Thinking involves making decisions and choices, weighing consequences, distinguishing truth from falsehood, taking action, and solving problems. Computers may appear to solve problems; however, they do not make decisions or plan ahead. IBM's Deep Blue, which beat the world chess champion Garry Kasparov in 1997, did not plan ahead but simply followed a set of instructions devised by expert chess players. A human player makes leaps of judgment instead of slavishly going through every calculation; a computer, on the other hand, goes through all the possible moves until it finds the best option for winning the game (Fetzer 3). Common sense helps human beings think things through, while computers do not possess this attribute. Therefore, it is quite hard for computers to develop self-awareness when they have neither thinking nor common sense.
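
The exhaustive search described above can be made concrete with a small sketch. The following is a hypothetical illustration in the spirit of the essay's argument, not Deep Blue's actual program: it uses the toy game of Nim (players alternately remove one, two, or three stones, and whoever takes the last stone wins), and it "decides" nothing; it merely tries every legal continuation and keeps the best number.

```python
# A minimal, hypothetical sketch of exhaustive game search (minimax),
# using Nim as the toy game. The program does not think: it mechanically
# scores every possible line of play and picks the largest number.

def minimax(stones, maximizing):
    """Score a Nim position by exhausting every possible line of play.
    +1 means the maximizing player can force a win, -1 a forced loss."""
    if stones == 0:
        # No stones left: the player who just moved took the last stone.
        return -1 if maximizing else 1
    scores = [
        minimax(stones - take, not maximizing)
        for take in (1, 2, 3) if take <= stones   # every legal move
    ]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move with the best exhaustive score -- pure calculation,
    with no awareness of what the moves mean."""
    return max(
        (take for take in (1, 2, 3) if take <= stones),
        key=lambda take: minimax(stones - take, maximizing=False),
    )

print(best_move(10))  # prints 2: leaving 8 stones forces a win
```

Nothing in this procedure resembles judgment or awareness; the chosen move is simply the arithmetic maximum over exhaustively computed scores, exactly the kind of slavish calculation the essay contrasts with human thinking.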

Robotics is expected to be the most prominent area of artificial intelligence. The field aims at creating robots that will be aware of their surroundings and lead semi-autonomous lives. Such robots are expected to sense external stimuli and react to them just as humans do. This is unlikely to be achieved, because responding to external stimuli in a human way presupposes emotions. Machines can perform some of the tasks done by human beings, but they cannot do them the way human beings do (Fetzer 18). Even with advanced technology, it is hard to give machines the innate emotions that human beings have and that self-awareness would require. Therefore, computers will never achieve self-awareness.

Conclusion

The brain and the body of a human being are organic systems that do not, by themselves, create consciousness. True consciousness is not found in any formal information process because it is fundamental to the cosmos. Awareness is not created by humans; it is always present and can only be transformed, shifted, and channelled from place to place. Human beings are truly self-aware without using or creating any information. Computers, on the other hand, cannot do, sense, or know anything without information. It is apparent that computers will never develop self-awareness precisely because they need information in order to operate at all. Computers will on no account be able to duplicate or create self-awareness through any information process. Full intelligence will never be achieved in computers for lack of self-awareness, and self-awareness remains impossible as long as computers can operate only through information and programming. Computers will never match the inborn intelligence of humans or their innate self-awareness.

Works Cited

Charniak, Eugene, and Drew McDermott. Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley, 1985. Print.

Dreyfus, Hubert. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1992. Print.

Fetzer, James. Artificial Intelligence: Its Scope and Limits. Dordrecht, the Netherlands: Kluwer Academic Publishers, 1990. Print.

Schank, Roger. The Cognitive Computer. Reading, MA: Addison-Wesley, 1984. Print.

Being Human: Human-Computer Interaction

Computers are everywhere. They have already pervaded the homes and offices of many industrialised countries. Computers used to be considered mere machines, tools to help people become more efficient: complex devices of great help in processing considerable amounts of data, programmable to help manage a factory, an office, or a laboratory.

But those days are gone. Today computers are human-like, with the capability to work tirelessly and unceasingly without direct human supervision. This has prompted many to ask whether living with technology can make us feel happier, or whether it will make us more tired, frustrated, angst-ridden, and security-driven, and what it will mean to be human when everything we do is supported or augmented by technology.1

Digital technology should be a force for good, but ignorance of its functions and capabilities can cause security problems and make life extremely difficult for victims of identity theft. It is therefore important not only to appreciate the impact of information technology but also to be aware of its different consequences, such as digital footprints.

Being Human

Man has gone from primitive beginnings to technical sophistication in a very short time. In the latter part of the 20th century, his understanding of machines, electricity, microprocessors, and computer programming improved so dramatically that it has become a cause for concern.

There seems to be no limit to his ability to create new technologies that purportedly aim to make life better. However, this claim has become debatable in recent years as more and more people have grown alarmed at the negative impact of some highly advanced technologies. The power and pervasiveness of computers is a double-edged sword: beneficial and harmful at the same time.

Our Changing World

There are those who believe that technological advancement is beneficial and the predictable consequence of human development. The benefits come in multiple packages, such as healthier lifestyles, the expansion of creative skills through digital tools, and instant access to information never available before.2

Every area and every sector of society has been affected by technology. Technological innovations in business, especially in supply chain management, have created significant changes in this field of endeavour. It is now possible for businessmen, corporate leaders, and employees to work faster and more efficiently than ever before. It is also the reason competition is at an all-time high, but overall it helps create a more productive business environment. The root causes of all these innovations fall into two categories: computer systems and the Internet.

Being human compels scientists to leverage technology to change lives. This can mean different things. First of all, technology has transformed the way people study and learn. Researchers from Microsoft described this paradigm shift succinctly, stating that "the way teachers and professors engage with their students in class (e.g. use of online assessment tools to provide feedback and reports) is very different from the chalk and talk model of the past."3

Aside from learning, changing lives also means the capability to save lives in search and rescue situations. There is not enough space here to recount the fatalities and mishaps that have occurred in search and rescue operations because operators could not see through smoke, fog, or heavy rain. Technology is changing all of that in favour of humans.

Being human also means finding ways to improve the quality of life. This can be seen in the creation of social networks. Aside from increasing interaction and making new friends through the World Wide Web, new technologies are being used to monitor the activities of loved ones. When it comes to the elderly, technology makes it possible for them to remain active even in their twilight years.

Transformation in Interactions

By 2020 the human race will witness not only rapid development in computer technology, in terms of storage capacity and processing power, but also a radical evolution of human-computer interaction. Ten years from now, people should expect the end of interface stability and the growth of techno-dependency.4

There is also a need to be ready for the consequences of growing hyper-connectivity and the growth of creative engagement.5 Today computer use is characterized by a monitor in front of the user, but ten years from now the graphical user interface may easily become a secondary method of connecting with machines and computers.

Ten years from now, the phenomenon called human-computer interaction will no longer be limited to users touching buttons on mobile phones or interactive monitors. This will be made possible by wearable technology, such as the electronic sensing jewelry developed by Philips Design, which has prompted experts to remark that "the boundary between us and machines and the extent to which it is visible to us is now no longer as clear as when we interacted at the desktop or the terminal."6 Ten years from now, technology will not only come in smaller packages but may also be embedded within the human body.

HCI: Looking Forward

Although rapid technological breakthroughs seem to occur at breakneck speed, it can be argued that the development of technology follows a certain pattern. This simply means that a particular technology is the by-product of discoveries made in the past, and that scientists and inventors were merely in the right place at the right time, with the insight to develop the solutions, software, and electronic devices that enabled them to create something new.

This pattern of change is the same reason one can fairly predict what will happen in the future. For example, there is a clear trend toward smaller and yet more powerful mobile phones. Tracing this pattern suggests that phones thinner and more sophisticated than an iPhone will be available to the public within the next few years, or even the next six months.

If one applies the same technique to predicting technological advances in the near future, the result is something that can excite and terrify at the same time. It has resulted in the emergence of at least three groups of people. The first group is composed of those who believe that technological breakthroughs must keep coming because man's survival depends on them, even though there is a high probability that computers can bring harm to families and individuals as well.

The second group consists of those who are thankful for technology but concerned about some of its more obvious negative consequences, such as the loss of privacy, the loss of jobs to the mechanization of labour, and the weakening of human-to-human interaction as face-to-face communication is replaced with various forms of wireless human-computer interaction.

The third group comprises those who are totally unconvinced that rapid technological improvements in business, transportation, healthcare, agriculture, entertainment, and research can significantly improve the quality of life. These three groups will continue to tackle divisive issues now that technology has evolved from the electro-mechanical to the digital.

It is therefore important to look into the link between technology and human values. Technology can be misused or abused, and in some cases people can offend others simply by ignoring cultural differences. It has to be made clear that computers are not neutral, and they can therefore be used in ways overlooked by designers and programmers. Thus, there is a clamour to extend the research and design cycle by adding another stage that entails conceptual analysis.7 This would ensure that designers are able to anticipate the full implications of releasing a new type of technology into the world.

Digital Footprints

The UK's Parliamentary Office of Science and Technology explains that "a carbon footprint is the total amount of CO2 and other greenhouse gases emitted over the full life cycle of a process or product. It is expressed as grams of CO2 equivalent per kilowatt hour of generation (gCO2eq/kWh), which accounts for the different global warming effects of greenhouse gases."8 The concept captured the imagination of many people for two reasons.

First, there is now a way to understand the extent of human culpability when it comes to climate change. Second, there is a way to measure the carbon emissions produced by burning fossil fuels. The ability to track and monitor a city's carbon footprint also provides a framework for holding people accountable for actions that are detrimental to the environment.
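
To make the unit concrete, a purely illustrative calculation follows; the figures are invented for this example and are not taken from the cited report. A power station that emits 8 x 10^12 grams of CO2 equivalent over its full life cycle while generating 10^10 kilowatt hours of electricity would have a footprint of

\[
\text{footprint} = \frac{8 \times 10^{12}\ \text{g CO}_2\text{eq}}{1 \times 10^{10}\ \text{kWh}} = 800\ \text{gCO}_2\text{eq/kWh}.
\]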

The revolutionary concept of the carbon footprint has inspired those trying to comprehend and deal with the proliferation of personal information in cyberspace, whose real owners are often unaware that a total stranger can study their habits and tendencies without ever seeing them face to face.

Decreasing Cost and Increasing Capacity of Digital Storage

Today, the principle behind tracking and monitoring something as omnipresent as carbon dioxide gas, and yet undetectable by human senses, is applied to a phenomenon in cyberspace. It is not physical but digital, and is therefore aptly labelled the digital footprint.

According to research analysts at Microsoft, "huge amounts of information are being recorded and stored daily about people's behaviour, as they walk through the streets, drive their cars and use the Web… while much of this may be erased after a period of time, some is stored more permanently, about which people may be naively unaware."9 Digital footprints are generated each time a person uses a social networking site and posts comments and pictures on it. A closed-circuit camera that records human activity and stores it in a security firm's database generates digital footprints in yet another way.

Before digital footprints became a major issue, the main problem was a lack of storage space, and therefore limits on how much important information could be recorded.

Nowadays, the problem is not the capacity to record voluminous data but how to manage it, and how to destroy it when it is no longer practical to keep it in computer hardware or in a database somewhere in the world. In the 21st century, it is easy to leave digital footprints but difficult to remove them from a system as vast and interconnected as the World Wide Web.

According to one commentary on digital footprints, "the big difference between paper and digital trails is that the tracks left in cyberspace are extremely difficult to destroy… written, photographic, audio, or video content of any kind about a person that finds its way into cyberspace forms that person's digital footprint and unlike footprints in the sand, a digital footprint cannot simply be washed away."10 This is especially problematic given identity theft and the prevalence of bullying on the Internet.

The existence of a digital footprint has two major implications. First, computer users may give no thought to where their digital information is stored and who can access it. Thus, there are users who will post pictures, comments, and other personal information without considering that this digital footprint may be stored forever in a database somewhere.

Something embarrassing that a teenager posts on a social networking site may never be erased, remaining accessible 24 hours a day, seven days a week. Second, the digital footprint left in cyberspace may contain something more valuable than an embarrassing photo: personal information that unscrupulous people can use to victimize the naïve.

A recent survey asking employers about their hiring strategies revealed something that should make everyone extra careful when using websites.

Researchers from a popular job site intimated that 26 percent of hiring managers said they had checked a job seeker's digital footprint, and of those who admitted making these checks, 63 percent said they found something that made them decide not to offer a job to an applicant.11 Taken together, that is roughly one in six of all the managers surveyed (63 percent of 26 percent, or about 16 percent) rejecting an applicant over a digital footprint. There should have been a follow-up question, but unfortunately the researchers did not elaborate on what the hiring managers found that made them decide not to hire these people.

Before the advent of the Internet and powerful computer systems, there was a simple way of managing data: it was filed and kept in secure storage, accessible only to those who had the correct set of keys or the combination to a safe. Information that might prove scandalous and risk tarnishing the image of a celebrity, businessman, or politician could be sent to the shredder. For those truly paranoid about security and confidentiality, documents and photographs could be burned, leaving no evidence or trail behind. In the Age of Information, this can never be the case.

In a digital world, statements posted on a website and photographs submitted to an online service can be downloaded and copied without permission. The same data can be reproduced and passed on, posted to other websites, and so on, until it has been multiplied many times over across cyberspace. Most problematic of all, an embarrassing photo may remain inside a database for decades to come, ready to be stumbled upon many years from now.

A digital footprint can also be understood as a kind of floating curriculum vitae that anyone can grab, study, manipulate, and use to defame or blackmail someone. The problem is that a digital footprint is comparable to a fingerprint: it carries plenty of information that links it to a real person.

It does not matter whether someone hacked into a particular system and assumed another person's identity; what matters is that the digital footprint left behind can be incriminating evidence capable of destroying an individual's reputation. In the past, there was a way to escape negative criticism and a proven strategy for overhauling a bad reputation; most of the time it simply required a suitcase and a one-way ticket out of town. Nowadays, the only safe place to hide is an island without computers or the Internet.

Digital footprints are also created through government-mandated security procedures. The idea that the government can spy on people is nothing new, but in recent years technological advancements in the monitoring of human activities have elevated the discussion to a whole new level. Consider, for example, what IT experts are saying on this subject:

Identity cards and passports have increasing amounts of digital information embedded in them that can be read at passport controls. Opinions about what information governments need and ought to have, and what citizens ought reasonably to provide, are changing. In many ways, technology is making the relationship between government and the individual more complex, not least because it is often difficult to know how much information is being gathered, how it is being used, and who has control of it.12

Concern about the digital footprints people leave behind comes in the wake of increasing human-computer interaction. Part of the reason is that more and more people live in a networked society; according to experts, there is now a higher degree of interconnection between the networks and systems of individuals and organizations.13

In the words of one IT practitioner, the 21st century is characterized by the migration from accepted systems and procedures (commercial, administrative, technical) to new ones (electronic commerce, digital cash, tele-working, electronic mail), and it is increasingly difficult to live outside the grid of digital networks.14

Global interaction, as well as the use of new technology, is fuelling the desire to transmit information through the World Wide Web. It is time to consider the consequences of creating digital footprints. Policy makers must look into this issue, develop strategies, and ratify laws to mitigate the impact of unauthorized access to a person's digital footprint.

Conclusion

The rapid development of technology is both a blessing and a curse, depending on the perspective from which it is viewed. The quality of life is greatly enhanced, but on the other hand threats to security abound, especially when it comes to the phenomenon of digital footprints. There is no way to reverse the evolution of human-computer interaction, and the best way to deal with it is to upgrade or extend the development cycle.

One more step is needed: room for conceptual analysis, so that developers, programmers, IT experts, government officials, and interest groups have a clear idea of the future implications of a new technology before it is released to the general public.

Bibliography

Baldwin, S, Carbon footprint of electricity generation, UK Parliamentary Office of Science and Technology, 2006. Web.

Dowland, P et al., Security management, integrity, and internal control in information systems, Springer Science, New York, 2005.

Grayson, R, Managing your digital footprint, Rosen Publishing Group, Inc., New York, 2011.

Harper, R et al., Being human: human-computer interaction in the year 2020, Microsoft Research Ltd., Cambridge, 2008.

Pathak, J, Information technology auditing: an evolving agenda, Springer, New York, 2005.

Footnotes

  1. R Harper et al, Being human: human-computer interaction in the year 2020. Microsoft Research Ltd., Cambridge, 2008, p.10.
  2. Harper, p.11.
  3. Harper, p.25.
  4. Harper, p.34.
  5. Harper, p.35.
  6. Harper, p.36.
  7. Harper, p.59.
  8. S Baldwin, Carbon footprint of electricity generation, UK Parliamentary Office of Science and Technology, 2006. Web.
  9. Harper, p.21.
  10. R Grayson, Managing your digital footprint, Rosen Publishing Group, Inc., New York, 2011.
  11. Grayson, p.9.
  12. Harper, p.29.
  13. P Dowland et al., Security management, integrity, and internal control in information systems. New York: Springer Science, 2005, p.262.
  14. J Pathak, Information technology auditing: an evolving agenda. Springer, New York, 2005, p.107.

Momenta Pentop Computer's Design and Technology

Introduction

Momenta Corporation was known as the first company to develop the design of a tablet computer. Known as a pentop (a touch-sensitive tablet computer operated by means of a pen), it was a revolutionary and innovative technology at the beginning of the 1990s. Nevertheless, despite the new design and a promising future, the company became a failure, even though the proposed technology laid a foundation for modern tablets, including the iPad. Given the company's significant influence on the development of the tablet industry, it is essential to study the specifics of the product design (technology assessment) as well as to understand the major human factors and system limitations that led the corporation to failure.

Technology Assessment

System Diagram

The Momenta pentop computer consists of several parts: a screen, a pen, and a keyboard, connected by wires. The screen is a transflective monochrome device with a flip-up design and a 10-inch diagonal. Another element is a detachable keyboard for typing text and developing spreadsheets. A tethered pen is the final element of the pentop computer. Its distinctive feature is a relatively wide variety of functions and options, including a pen-based word processor, pen extensions and applications, and various communication and spreadsheet options. At the same time, the pen can compensate for the limitations of multi-touch screens, as it can be used both for information-related operations and for controlling the computer. In this way, both pen and keyboard can be deployed to improve the user experience and boost the tablet's performance (Momenta Corporation 1/40 Pentop Computer). See Figure 1 below for the technological details of the pentop computer.

Figure 1. Momenta pentop computer (Momenta Corporation 1/40 Pentop Computer)

Human Factors Analysis

Completing a human factors analysis requires paying special attention to everyone with a stake in the design and operation of the new product. In the case of the Momenta pentop computer, the assessment covers four groups of people associated with the product: designers, manufacturers, management, and users.

Designers

Even though the design described above was a novelty at the beginning of the 1990s, no industry-wide breakthrough occurred, owing to several gaps ignored by the designers. The tethered pen evidently invited technical and comfort issues; making it detachable (like the keyboard) might have minimized the risk of problems caused by wire breakage. At the same time, the very combination of keyboard and pen in one device is questionable, as it casts doubt on the pen's functionality: if the pen had worked accurately and reliably, there would have been no need to offer the keyboard as a standard component rather than an optional purchase for more efficient or comfortable operation. Finally, the wiring layout is too complicated for comfortable use: locating the ports for the different components (pen, keyboard, and charger) on three different sides of the device creates its own comfort issues.

Nevertheless, the product design also has some strengths. For instance, the keyboard and tablet are nearly the same size, which makes carrying the device easier. Furthermore, the ability to detach the keyboard is beneficial, especially when it is not needed or when transporting and carrying the pentop computer.

Manufacturers

In addition to the identified design gaps, the manufacturers were also responsible for the failure of the pentop computer. Even though they used the resources available on the market, the hardware was poorly made. The major drawback was short battery life, which suggests that the manufacturers ignored the importance of seeking alternatives to extend it (for instance, using more energy-efficient components). Moreover, there were issues with the touch-sensitive screen (poor handwriting recognition), which likewise points to the fact that the manufacturers either failed to find the best materials for their product or could not develop it properly.

Management

The company's senior management (including the chief executive officer) paid special attention to PR and marketing instead of improving the design and functions of the product. As a result, the product was overpriced, and its price-quality ratio was lower than expected. Moreover, senior management ignored the requirements of the market: even though the product was revolutionary, it was too innovative. The market was not ready to accept it and to absorb the company's supply, i.e. to purchase all the products manufactured and offered by the corporation (Einstein).

Users

Even though the design of the new product was innovative and promising, it was not intuitive, which means it was not accessible to the ordinary customer. A user had to possess particular skills and knowledge in order to operate the machine effectively and maximize its benefits. In addition, most consumers were forced to use the keyboard to type text, because the handwriting features were so poorly developed that entering information with the pen produced numerous errors. At the same time, users had to recharge the batteries frequently, owing to the short battery life, to keep the computer running during important tasks. All in all, despite the forward-thinking product design, much work was left to the user, including the need to learn how to operate the touch-sensitive device before purchasing it.

System Limitations

Despite the revolutionary approach to designing the pentop computer, the system as a whole was a failure, and it was a combination of factors that played against the corporation. The primary system limitations were implicit in the human factors analysis above, but they should also be stated directly. To begin with, the tablet was underpowered yet overpriced; here, attention should be paid to battery life and technological gaps combined with inefficient management strategies. Because the underlying technology was still immature (an industry-wide problem), the handwriting recognition software performed poorly, which in turn pointed to significant screen-related issues, i.e. inadequate touch sensitivity ("What Is the Chasm and How Do You Cross It?"). Above all, the major limitation was ignoring the market's readiness to accept the product: the system limitations might have been avoidable had the company operated in a field of technology already proven and used by ordinary customers, i.e. one that did not require them to acquire new knowledge in order to enjoy the new product (Moore).

Works Cited

Einstein, David A. "The Magic of Failure." SFGate, 1998. Web.

"Momenta Corporation 1/40 Pentop Computer." RICM, 1991. Web.

Moore, Geoffrey A. Crossing the Chasm: Marketing and Selling Disruptive Products to Mainstream Customers. 3rd ed., HarperBusiness, 2014. Google Books.

"What Is the Chasm and How Do You Cross It?" Product Strategy, 2008. Web.