Design Foundations and Analysis

While providing its users with a range of efficient tools and opening new opportunities for communication, digital technology has become part and parcel not only of casual communication but also of business interactions. However, because digital data remains vulnerable, the need to upgrade security analysis arises constantly.

At present, an array of tools is used to enhance the security of online data. One such tool, the Web Security Threat Classification, contributes substantially to improving the security of a given database. The Web Application Security Statistics project, created by the Web Application Security Consortium (WASC), uses statistical data gathered from the Internet to build a framework for detecting potential threats to database security: it identifies the existing vulnerability classes and defines testing methodologies for locating those vulnerabilities.

The tool in question allows for both manual and automated processing of data, followed by the compilation of representative statistics. The latter, in turn, makes it possible to differentiate the vulnerabilities found by their risk level.

Picture 1. Vulnerabilities classification (WASC, 2010, para. 6).

The chart above shows that the adopted methods successfully detect the key types of security issues by splitting them into five risk levels: urgent, critical, high, medium, and low.
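As a minimal illustration of this kind of risk bucketing, the mapping from a numeric severity score to a level can be sketched as follows. The thresholds here are invented for demonstration and do not reproduce WASC's actual classification criteria:

```python
# Illustrative risk-level bucketing. The score range (0-10, CVSS-style)
# and the cutoff values are assumptions made for this sketch only.
def classify_risk(score: float) -> str:
    """Map a numeric severity score to one of five risk levels."""
    if score >= 9.5:
        return "urgent"
    if score >= 8.0:
        return "critical"
    if score >= 6.0:
        return "high"
    if score >= 3.0:
        return "medium"
    return "low"

print(classify_risk(9.8))  # urgent
print(classify_risk(4.2))  # medium
```

In a real scanner, the score would typically come from a scoring standard such as CVSS rather than being hand-assigned.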

Number theory plays a major part in enhancing online data security. In particular, it supplies the methods both for designing the tools in question and for analyzing existing threats. More precisely, number theory underlies the cryptographic computations that support secure communication and eliminate existing vulnerabilities. For instance, by attaching a digital signature to a message, one can verify the integrity of that message and thereby protect one's personal data from exposure. The creation of a digital signature, in its turn, relies on cryptographic processes (encryption and decryption).

The confidentiality issue, often identified as the key problem of online communication, can in turn be addressed with the RSA cryptosystem, an essential application of number theory. The security of RSA rests on modular exponentiation and the practical difficulty of factoring large integers, and this principle remains central to modern strategies for detecting and preventing threats in the online environment. To be more exact, the approach allows digital signatures to be created with the following formula:

S = M^d mod n, where M is the message (or its hash), d is the signer's private exponent, and n is the RSA modulus.

The verification of the signature, in its turn, is carried out with the help of the following formula:

M = S^e mod n, where e is the signer's public exponent (Goodrich & Tamassia, 2010).
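The signing and verification steps described above can be sketched with textbook RSA in a few lines of Python. This is a toy illustration with tiny primes; real systems rely on vetted cryptographic libraries, padding schemes, and much larger keys:

```python
# Toy RSA signing/verification sketch (illustrative only, not secure).
p, q = 61, 53                 # small primes for demonstration
n = p * q                     # modulus, n = 3233
phi = (p - 1) * (q - 1)       # Euler's totient, 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi)

def sign(message: int) -> int:
    """Signature S = M^d mod n."""
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    """Check that S^e mod n recovers M."""
    return pow(signature, e, n) == message

m = 65
s = sign(m)
print(verify(m, s))        # True
print(verify(m + 1, s))    # False: any tampering breaks the check
```

The three-argument `pow` performs modular exponentiation efficiently, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse that yields the private exponent.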

Though the effects of WASC's activities may not be as visible as they should be, the work that has been done is impressive. As Table 1 below shows, a total of 178,764 vulnerabilities have been detected with the help of various mathematical analysis tools designed for finding flaws in security systems.

Table 1. Vulnerabilities Statistics (2010) (WASC, 2010, para. 11).

Category                   No. of Vulns   No. of Sites   % Vulns   % Sites
ALL Stat (Server-Side)            50856          10125    52.13%    83.09%
ALL Stat (Client-Side)            46698           7580    47.87%    62.20%
Scans (Server-Side)               19746           8922    55.60%    85.40%
Scans (Client-Side)               15767           6607    44.40%    63.24%
BlackBox (Server-Side)             4260            804    23.77%    76.86%
BlackBox (Client-Side)            13665            747    76.23%    71.41%
WhiteBox (Server-Side)            17700            145    63.73%    96.67%
WhiteBox (Client-Side)            10072            117    36.27%    78.00%
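The percentage columns of Table 1 can be reproduced directly from the raw counts; for example, for the two "ALL Stat" rows:

```python
# Reproducing the "% Vulns" column of Table 1 from the raw counts.
server_side = 50856   # ALL Stat (Server-Side) vulnerabilities
client_side = 46698   # ALL Stat (Client-Side) vulnerabilities
total = server_side + client_side

pct_server = 100 * server_side / total
pct_client = 100 * client_side / total
print(f"{pct_server:.2f}%")  # 52.13%
print(f"{pct_client:.2f}%")  # 47.87%
```

The same calculation applied to each scan-type pair (Scans, BlackBox, WhiteBox) yields the remaining "% Vulns" entries.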

Reference List

Goodrich, M. T., & Tamassia, R. (2010). Number theory and cryptography. In Algorithm design: Foundations, analysis, and Internet examples (pp. 453-508). New York, NY: John Wiley & Sons, Inc. Web.

WASC. (2010). Web application security statistics. Web Application Security Consortium. Web.

Project Information Systems Complexity

Abstract

Information systems projects are complex enterprises subject to high rates of failure. Indicators of success or failure include adherence to stakeholder specifications, budget and time constraints, and stakeholder satisfaction with the functioning system. Many of the issues that arise are due to poor communication with relevant stakeholders. Effective communication has been empirically linked to increased project success. A variety of strategies, such as building social capital, introducing managerial frameworks to improve accountability, and improving social skills, have proven beneficial in enhancing communication among team members.

Introduction

Technological and information systems are used in practical settings by organizations worldwide and are becoming increasingly complex. However, there are significant risks of failure, with studies indicating that a large percentage of information systems projects do not succeed because of the numerous components involved. This report analyzes the role that communication plays in the management and subsequent success of information systems projects.

Problem

The discussion of why some information systems projects succeed and others fail is inherently complex and multi-factorial. Despite significant research and literature over the years, the failure rate remains high. One factor regarded as vital to success is effective communication, particularly in large-scale enterprises. A lack of competent communication through cross-functional and inter-departmental channels leads to only moderate success and a high chance of failure. Through communication, stakeholders can comprehend the objectives and specifications and become further involved in the work.

A proficient communication system between project boards, managers, and their teams is more likely to produce a successful and functional project. Meanwhile, failed projects are characterized by poor communication, sparse feedback, a lack of face-to-face meetings, and reliance on e-mail exchanges. Communication issues can lead to several unpleasant outcomes such as delays, errors, misunderstandings of stakeholder expectations, confusion, and ultimately failure (Imtiaz, Al-Mudhary, Mirhashemi, & Ibrahim, 2013). Because of these factors and the supporting research, communication was selected as the theme of this report, as it appears to be a critical success factor for information systems projects.

It can be argued that communication is central to the success of information systems projects because several aspects are inherently based on the interaction of stakeholders. Based on a comprehensive study, some of the most valued factors for success include clear vision, realistic expectations, frequent communication with stakeholders, and maintaining a clear understanding of project requirements.

Meanwhile, frequent causes of failure involve customer requirements that are changed, inaccurate, or incomplete, together with poorly defined specifications and inaccurate time or cost estimates (Montequin, Cousillas, Ortega, & Villanueva, 2014). This demonstrates that the success or failure of information systems projects largely depends on clear and effective communication flowing in both directions among stakeholders.

Success of Project

There are various indicators of a successful information systems project, but the most common are meeting specifications, delivering on time, and remaining within budget constraints. Success is a measure of the effectiveness of the organizational processes responsible for implementing such projects up to the deployment of the system to the end-user. It also encompasses proper functioning and excellent quality, the satisfaction of the client, and the value added to the organization, which is vital to project stakeholders when the project is delivered (DeLone & McLean, 2016).

The data on the success rate of information systems projects differ. A 2013 survey indicated that at least 50 percent of businesses had experienced a project failure within the previous year. Another survey of IT professionals in 2015 reported a 55 percent failure rate, higher than the 32 percent rate estimated in 2014 (Florentine, 2017). The level of organizational support is also vital, as the failure rate increases when support diminishes.

Global organizations waste, on average, between $97 million and $122 million for every $1 billion invested in information systems. However, enterprises are becoming more mature and aware of information systems project management and, aided by the global pace of digital convergence, have shown some improvement in success rates over the past couple of years.

Data Collected

Research on the topic of communication for the success of information systems projects was conducted mostly through the use of online academic databases. Scholarly journal articles and professional papers on the topic serve as the primary source of information. Most of the data is qualitative, regarding general methods or observations in systems project management. However, quantitative data such as statistics are used to provide some support throughout the paper as well.

Discussion

Most sources on the topic agree that information systems projects are complex and multidimensional, requiring efficient communication. Both the project during its development and the finished system should follow a theoretical concept in project management, such as the theory of communication suggested by Shannon and Weaver. They defined various levels of communication: a technical level, which addresses accuracy and efficiency; a semantic level, which conveys meaning; and an effectiveness level, which concerns how well the information is received (DeLone & McLean, 2016).

Furthermore, sources agree on the importance of communications management and engaging stakeholders in the process. This effectiveness of communication with stakeholders allows for competent decision-making and can prevent many factors that lead to project failure. Through a communications requirements analysis, the stakeholders are highly aware of the ongoing events and the project can be modified to fit specifications or remain within budget and time constraints (Imtiaz et al., 2013).

A significant point of disagreement is the approach used to evaluate the success and communication of information systems projects. Most organizations use adherence to planning as the primary indicator of success, measured by specifications, timeline, and budget constraints. However, stakeholder satisfaction is often neglected, particularly that of clients who are dissatisfied with the final product. This falls under expectation-confirmation theory, which builds on cognitive dissonance and suggests that held expectations can be contradicted by experience.

In other words, unmet expectations lead to discomfort and dissatisfaction, which can be mitigated by organizational relationships (Diegmann, Basten, & Pankratz, 2017). This inherently provides vital importance to client-vendor communication during the project as an important tool to enhance client satisfaction and avoid misunderstandings.

Strategies

Team Social Capital and Knowledge

The development of information systems is a knowledge-intensive collaboration process. It requires high levels of communication and social capital in the team amongst business and technology professionals. Most tasks on the project include a heterogeneous exchange of complex knowledge amongst team members, which allows for collaborative problem-solving. Improving social capital is key to enhancing communication within three sub-categories: establishing social connections, trust, and a shared vision.

Effective communication has a positive impact on forming reliable relationships and leads to raised team productivity. Therefore, to build social capital with the attendant trust and shared vision, it is vital to create a structure where team members can communicate and collaborate based on their unique expertise in either business or technology. Project managers should seek to establish social capital early in the process for maximum effectiveness, through an optimal combination of team members whose skills complement each other (Lee, Park, & Lee, 2014).

Risks

Furthermore, general stakeholder involvement, such as holding frequent short meetings to keep everyone informed, communicating directly and truthfully (neither overly optimistic nor pessimistic), and being a good active listener, all contribute to establishing social capital and information sharing. There is inherently little risk in using this strategy. The only downside is that it may take up a manager's time and resources to find the right combinations of employees, and the combinations cannot be changed later without negatively impacting the project. To mitigate this risk, it is possible to prepare alternative pairings of team members who collaborate well together and to enable rotations that ensure overall communication and cohesiveness among the project members and stakeholders.

Flexible Management and Improving Accountability

Traditional information systems project management has been criticized for lacking the flexibility to respond to changes in later stages of the project and for requiring significant up-front planning. Introducing a methodology such as the Scrum project management framework, which emphasizes communication over documentation, can lead to significant improvements. Reduced administrative oversight results in increased accountability and trust among project stakeholders.

The framework is built on incremental goals to which a team dedicates focused time in short periods, helping to pace a large project. Between these periods, meetings and training sessions are held to discuss accomplishments and troubleshoot issues. The meetings are set up so that each team member establishes individual goals to achieve maximum progress in the next working increment (Dulock & Long, 2015). In turn, this significantly improves accountability, since in this framework stakeholders are responsible to themselves just as much as to management.

Risks

Accountability is a vital aspect of improving communication, as it ensures a level of participation in the activities meant to enhance collaboration and keep stakeholders up to date. People can be held accountable by publishing meeting minutes and an attendance list and by assigning responsibilities to team members and stakeholders. However, accountability carries the risk of creating pressure on staff and possibly lowering morale, which can have the opposite effect and lead to a decrease in communication. The risk can be mitigated by taking a gradual and responsible approach to introducing accountability into project management. The measures should focus on stimulating accountability, such as the system described above, and avoid punishing employees.

Improving Communication Skills

Individuals inherently have different levels of communication skill. In the context of information systems projects, many employees have superior technical skills but weaker communication skills. However, communication skills can be developed and can be significant in advancing professional careers and the success of the project. These abilities range from conducting a presentation to mannerisms and cultural awareness of other stakeholders.

The importance of communication and meetings in achieving the success of the project implies a certain level of participation and teamwork, for which people require social skills. Leadership and management can help improve communication by example and setting high expectations. The quality of presentations, written reports, and day-to-day personal communication should remain professional and competent, with successful organizations providing time and feedback to prepare communication mediums (Schwalbe, 2015).

Risks

Focusing on communication methods risks drawing attention away from the technical aspect of the project, as significant training and resources have to be dedicated to communication as well. Furthermore, if communication strategies are artificially forced, the information shared will not always be substantial and useful, consuming still more time. To mitigate this, a balance should be found: management can specify the instances when communication is necessary while encouraging the integration of the technical work into effective communication rather than displacing it.

Recommendations

To implement the discussed strategies, it is vital to develop a communications planning process. This includes identifying the stakeholders receiving the information, the setting and method of communication, and the types of data that will be shared. It is a significant aspect of a manager's job. Communication methods can take various forms and should be appropriate to the type of information and the situational context. Methods include emails, formal status reports, online schedules, meetings, conversations, and conference calls, among others. Communication must be inclusive of all levels of stakeholders, including those operating from different locations or time zones (Watt, 2014).

Stakeholders and employees should be aware of communication parameters and objectives as primary aspects of strategic development. Communication should not be a burden: it should deliver the benefits of increased coordination and accountability without overwhelming people with unnecessary information. Therefore, the communications plan should be streamlined. The best way to implement a communications strategy is to notify stakeholders and employees of protocols (such as daily briefings, weekly meetings, and quarterly status reports) that ensure consistency and accountability in the long term.

Conclusion

It is evident that information systems projects are highly complex and multi-dimensional, which leads to a high rate of failure. One aspect that strongly impacts success is communication, which can be lacking in technological projects. Effective communication keeps stakeholders up to date and satisfied. Communication can be improved through social capital, accountability, and collaboration, which in turn lead to meeting stakeholder expectations and satisfaction while remaining within budget and time constraints.

References

DeLone, W. H., & McLean, E. R. (2016). Information systems success measurement. Foundations and Trends® in Information Systems, 2(1), 1-116. Web.

Diegmann, P., Basten, D., & Pankratz, O. (2017). Influence of communication on client satisfaction in information system projects: A quantitative field study. Project Management Journal, 48(1), 81-99. Web.

Dulock, M., & Long, H. (2015). Digital collections are a sprint, not a marathon: Adapting Scrum project management techniques to library digital initiatives. Information Technology and Libraries, 34(4), 5-17. Web.

Imtiaz, M. A., Al-Mudhary, A. S., Mirhashemi, M. T., & Ibrahim, R. (2013). Critical success factors of information technology projects. International Journal of Social, Human Science and Engineering, 7(12), 1547-1551. Web.

Lee, J., Park, J.-G., & Lee, S. (2014). Raising team social capital with knowledge and communication in information systems development projects. International Journal of Project Management, 33(4), 797-807. Web.

Montequin, V. R., Cousillas, S., Ortega, F., & Villanueva, J. (2014). Analysis of the success factors and failure causes in information & communication technology (ICT) projects in Spain. Procedia Technology, 16, 992-999. Web.

Schwalbe, K. (2015). Information technology project management (8th ed.). Boston, MA: Cengage Learning.

Watt, A. (2014). Project management. Victoria, B.C.: BCcampus.

Cloud Computing: Main Concepts and Features

Advances in technology, including the creation of computers and the Internet, have led to unimaginable changes in the way human beings carry out their day-to-day activities. The desire of firms to back up information about their customers and operations while reducing costs led to the development of cloud computing. Cearley and Phifer (2012) defined cloud computing as a style of computing in which scalable and elastic IT-related capabilities are provided as a service to customers using Internet technologies.

It is worth mentioning that organizations such as Amazon, Google, IBM, and HP, among others, now offer cloud computing services to businesses. Among the most recent developments, Apple Inc. created iCloud, which has made cloud computing accessible to businesses as well as consumers.

Since Ericsson adopted cloud computing through Amazon Web Services (AWS), it has dramatically reduced the cost of running its business; it is recorded that costs fell by 30-40%. The reduction came from better use of hardware and a smaller support team. Additionally, the firm only needed to spend money on the services it actually required, since cloud computing lets users pay only for the services they consume.

Ericsson also benefited from automated software updates. While using AWS, the company found that cloud computing not only made software updates cheaper to run but also resolved numerous cases previously linked to failed or delayed updates. This is because automation drives process efficiency, eliminating the need to go through a server provisioning team. The result has been more satisfied customers who are loyal to the Ericsson brand.

Additionally, Ericsson has taken advantage of cloud computing's remote accessibility (Ramgovind, Eloff, & Smith, 2010). Ericsson employees no longer need to work in offices: a flexible work arrangement ensures that they can work from wherever they are, which has increased productivity as well as customer and employee satisfaction. Lastly, Ericsson has gained on-demand access to relevant information. Since AWS offers a variety of options for accessing data, Ericsson and its customers can get the information at any time they need it.

Although cloud computing has grown tremendously in helping firms hit by recession to continue their IT infrastructural development, there are security concerns that require serious analysis. Many organizations, especially those that have not yet adopted cloud computing, are concerned about privileged user access: they wonder who will be in charge of accessing their data, which in most cases contain business secrets.

To curb this problem, the involved parties need to be transparent, so that businesses know who has access to the data and how the related administrators are hired and managed. Moreover, organizations wonder what would happen if they went out of business or a disaster occurred. To address this, the service provider ought to supply organizations with clear, factual information about how data would be returned to a business that is closing down.

Another security concern stems from the fact that while data can be secure in one jurisdiction, there is no guarantee that the same holds in another country. Countries have different laws concerning data access, and this, coupled with the fact that many businesses do not know where their data is stored, raises further security concerns. A typical example is the EU's support for very strict protection policies, while in America the government and specific agencies are bestowed with broad powers to access information, including that of companies and businesses.

To address this issue, the countries involved need to work toward harmonizing their policies and laws concerning the protection of privacy. Generally, for businesses to address security concerns related to cloud computing, they should ask whether exception monitoring systems will be established and whether the service provider accommodates the business's security requirements, establish which third parties the service provider deals with, and seek an independent security audit of the service provider (Turban & Volonino, 2011).

Although cloud computing has been deemed very reliable, concerns remain about adopting it. When resources are used outside one's own building, the computing horsepower actually available is not well known. Additionally, failures do happen, so it is necessary to understand fully how they should be resolved. Concerning costs, the cost of bandwidth must be established accurately in order to determine the true cost of using cloud computing.

To curb this, a firm needs to determine the cost of bandwidth critically and accurately, as well as its willingness to deal with the unreliability that comes with cloud computing, for instance by asking how quickly failures will be fixed. Cloud computing is characterized by scalability, but as its use increases, there will be a need to ensure that it continuously meets the growing needs of its users. This can be attained by beefing up with vertical scaling, multiplying with horizontal scaling, growing with diagonal scaling, designing to scale, and building to scale (Marston et al., 2011).

References

Cearley, D. & Phifer, G. (2012). Case Studies in Cloud Computing. Web.

Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., & Ghalsasi, A. (2011). Cloud computing: The business perspective. Decision Support Systems, 51(1), 176-189.

Ramgovind, S., Eloff, M., & Smith, E. (2010). The management of security in cloud computing. Information Security for South Africa, 2(1), 1-7.

Turban, E. & Volonino, L. (2011). Information technology for management: Improving strategic and operational performance. New York: Wiley.

Mobile Phone Utilization During a Conversation

Communication is a very important aspect of various spheres of life, including business, work, romantic relationships, and friendship. People use written, verbal, and nonverbal methods to convey a message or exchange information. However, the quality of communication may be impaired by outside factors. Such disruptions of interpersonal communication include inattentiveness and distraction, which often result from the use of mobile phones. This article examines how cell-phone use is perceived by friends during face-to-face interaction from the perspective of politeness theory.

Summary

The turn of the twenty-first century introduced a number of technological innovations that have facilitated but also devalued interpersonal communication. Previously, people talked to each other during gatherings, which made conversation interesting and lively. This is less common at present, especially among millennials, because people are absorbed by technological gadgets.

Previous studies have revealed that addictive cell-phone behavior, whether a quick call or scrolling a feed, is perceived as very annoying (Miller-Ott and Kelly 192). As those investigations were conducted among couples in romantic relationships, partners often described cell-phone use during face-to-face interaction as very rude and disrespectful.

Therefore, this study applied politeness theory to analyze attitudes toward mobile phone use during conversations between friends. The framework holds that each individual has a face, a public self-image, which can be either positive or negative. The former implies that a person wants to be validated by others, while the latter indicates autonomy and freedom from control by others. There is also the notion of face-threatening acts, which undermine the stability of the self-image.

In such cases, individuals may use a bald on-record strategy, redressive action, or an off-record strategy. The last of these avoids engaging in a face-threatening act directly by being ambiguous and indirect, such as by dropping a hint (Miller-Ott and Kelly 194). The first is used when the efficiency of the conversation is of vital importance, while the second relies on a hybrid technique with no explicit manifestation of either positive or negative politeness.

This study employed thirty-three participants (twenty-six women and seven men), ranging in age from eighteen to twenty-four, who were allocated into eight focus groups of two to nine participants. Researchers conducted a semi-structured interview in each group, in which each participant was asked to discuss personal experiences with friends using a cell phone during gatherings. Afterward, the transcripts were assessed according to the six-step process of thematic analysis, so that common themes were distinguished and outlined. The credibility of politeness theory as the sensitizing framework was confirmed, and themes such as positive and negative face, face-threatening acts, and responses to them were generated.

The main findings revealed that cell-phone use has an inherently face-threatening nature because one needs to be available to the other party during a one-on-one conversation. Such a situation tends to affect both the positive and the negative face of the speaker. The most common responses to face-threatening behaviors were hybrid redressive actions and politeness strategies, particularly the use of disclaimers. Nevertheless, participants reported that they were less concerned with friends' use of cell phones when the matter was of critical importance or occurred within a large group of peers.

Critical Analysis

Even though this research study was well conducted and applied politeness theory as its sensitizing framework, it should be critically analyzed in terms of bias and the importance of its findings. Understanding research bias is crucial because it significantly influences the outcomes. Furthermore, bias can exist at each stage of research, affecting the validity and reliability of the findings and leading to misinterpretation of data (Smith and Noble 100).

In this particular study, there is an explicit bias in the process of participant selection. The first aspect concerns the limited age group, while the second is the unequal gender representation: the number of female participants was almost four times that of male ones. Such an imbalance may greatly influence the quality of the results because of differences in mindset and outlook between men and women. To eliminate bias, a thorough critical evaluation of research findings is necessary.

Despite the comprehensive nature of this research study, the importance of its findings should also be evaluated. Indeed, cell-phone use is inevitable in everyday life, which is why many people consider it normal.

The findings of this study are important because they reveal that mobile phone use is a face-threatening act that affects both the positive and the negative face of the speaker. People feel disrespected when the interlocutor gets distracted or shows "present absence." Live conversation is crucially important, while technological advances may impose risks on its quality. Therefore, this study could be further improved by investigating the concrete cell-phone activities present during face-to-face interaction and determining which of them constitutes the harshest face-threatening act.

Conclusion

Interpersonal communication allows people to build strong relationships with partners and friends. However, frequent use of cell phones puts the quality of those relationships at risk. The study reviewed here applies politeness theory to analyze such behaviors during face-to-face interactions between friends. The main findings reveal that cell-phone use has an inherently face-threatening nature, affecting both the positive and the negative face of the speaker, with hybrid redressive actions being the most common responses. Overall, the study provides significant findings, but it is biased because of the larger female group among its participants.

Works Cited

Miller-Ott, Aimee E., and Lynne Kelly. "A Politeness Theory Analysis of Cell-Phone Usage in the Presence of Friends." Communication Studies, vol. 68, no. 2, 2017, pp. 190-207.

Smith, Joanna, and Helen Noble. "Bias in Research." Evidence-Based Nursing, vol. 17, no. 4, 2014, pp. 100-101.

Significance of The Information by James Gleick

The article we are going to analyze is called "The Information," written by James Gleick. It is obvious from the title that the article centers on information, its importance, and the history of its triumphant march. It is quite clear why the author decided to devote his article to this issue: information has become an integral part of contemporary society. We obtain a great number of facts every day. Our age can easily be called the epoch of digital technologies, and it is impossible to imagine everyday life without the streams of data we receive every moment. Yet the development of these technologies did not take long; humanity managed to develop ways of conveying information in a very short time. Even fifty years ago, it was impossible to imagine the level of digital technology we have today. That is why the author's choice of topic seems rather obvious.

Describing the importance of information today, he shares his astonishment with readers, and historical facts about the development of the informational sphere of science and society make him sound more concrete and convincing. The author clearly belongs to the modern age, in which information is a special kind of driving force behind all processes. Its significance grew along with the increase in information's availability. Nowadays, information is a power that can destroy states or people, create powerful new organizations, or unite people around a pressing problem. It is therefore possible to suggest that the author is inspired by the significance of this notion in the modern world and wants to trace its roots and share his thoughts, impressions, and conclusions with readers in order to make them think about the blistering development of the informational sphere. With this in mind, it is possible to analyze the whole article and try to understand the author's main idea.

The main idea of this article becomes obvious from the first lines of the work. The author wants to underline the significance of inventions in the sphere of information and show readers how quickly they have become an integral part of our lives. The author's phrase "The bit now joined the inch, the pound, the quart, and the minute as a determinate quantity, a fundamental unit of measure" (Gleick para. 4) can be recognized as the thesis statement of the whole work. Placing the bit in the same row as other units of measurement that people use every day, the author underlines the idea that the bit has become as important a part of our lives as the older units; the only difference is that it did so faster than the rest of them. The article is full of historical facts and descriptions of the development of this sphere of our lives.

The author's point that information is one of the most important parts of our lives is reinforced throughout the article. He outlines the all-pervading character of information, stating that it constantly surrounds us and that our attempts to investigate it, understand its nature, and convey data will lead to a deeper understanding of its complicated character and to new ways of conveying and recording it. The article is very informative and conveys the author's mood very clearly, helping the reader understand his point of view from the first lines and then follow his reasoning in order to see his thoughts and conclusions and decide whether they coincide with the reader's own impressions. Moreover, the reader becomes involved in the author's chain of thought. The thesis of the article is visible throughout the whole work, and it is very easy to remember the main point of the text.

The article is easy to follow and believe, as it is supported by clear and understandable evidence that serves to prove the author's thesis. All the historical retrospectives the author offers in support of his point are very important. He describes in detail the way the bit was introduced and created, and after reading such a detailed description, it becomes obvious that this is one of the main points in the history of information technology. The manner in which this information is presented also lends it a reliable image. The author underlines the importance of creating such a measuring unit as the bit, repeating its great significance several times. Moreover, he gives good evidence of its importance, stating that the whole world surrounding us is a stream of data and that it is vital for us to begin to understand and collect it. He underlines the great significance of the invention of the bit, saying, "It is hard to picture the world before Shannon as it seemed to those who lived in it. It is difficult to recover innocence, ignorance, and lack of understanding" (Gleick para. 4). With this in mind, it is possible to say that he provides good evidence for his idea of the bit becoming an integral part of our lives, as nowadays it is impossible to imagine people without a device that guarantees them immediate access to some informational field. Having read this article, it becomes obvious that the availability of information became possible due to the blistering development of this sphere of science.

Another interesting piece of evidence the author offers to demonstrate the great importance of information technology is its significance for the economy. Money has always been a special kind of information, especially now that people have created credit cards and many other digital means of payment. Atoms and the human body are also given as good evidence of the importance of information in the modern world.

Having read this article, it is possible to grasp the author's main idea and his thoughts about this problem. The author has managed to create a well-organized, interesting, and very informative article that underlines the great importance of the creation of such a unit of measurement as the bit, and of information as a whole. There are a great number of historical facts about the stages of development of information technologies and the ways we convey them. They serve as good evidence for the author's thesis and are genuinely interesting for readers, as they make it possible to trace the whole path these devices have traveled to obtain their modern functions and to realize how fast this development was and how fast it continues, since we live in a constantly changing world. The only drawback of the article is that the author does not contribute anything new to the study of communication, merely repeating well-known facts about the great significance of information in the age of the digital revolution. Even so, the article is important for the study of communication, as it offers many retrospectives that help people not involved in this issue to understand it better. Moreover, it is very convincing and easy to read and understand. The reader believes the author from the first lines; his thesis seems very clear and logical, which is why it is easy to follow while reading the article.

Also important is the fact that the author gives evidence from different spheres of our lives, which makes his statement even more reliable and significant, convincing readers of its soundness. The overall impression is positive: having read the article, one can recommend it for further reading to people who are interested in this issue or who simply want to learn more about the topic.

Having analyzed this article, it is possible to draw some conclusions. First of all, it should be mentioned that this work is full of information. The author gives us various historical retrospectives and facts connected with the history of this field of knowledge. That is why the article can be called very useful, especially for readers who are interested in history or simply want to enlarge their general knowledge. However, the historical facts given by the author are not very specialized and can easily be understood by a common reader. This characterizes the work from two perspectives. On the one hand, it is rather interesting and valuable for nonprofessionals who are just making their first steps in this sphere. On the other hand, it does not give new information to professionals who study this issue and devote a lot of time to its investigation. The work is written in a clear and interesting way, which makes it easy to read and to follow the author's thoughts.

One more thing that should be mentioned is the author's way of supporting his ideas with clear evidence. He often uses well-known facts and clichés, but in some cases he presents new and fresh ideas, underlining the great importance of information today. He shows the integral and all-pervading character of information, reflecting on the different ways it is used. The economy, atoms, our planet, and the human body serve as evidence to support his thesis and convey his main idea. With this in mind, it is possible to conclude that the author has managed to create a well-planned, reliable article with a clear structure, logical evidence, and a well-ordered sequence of facts. Resting on these facts, it is possible to recommend this article to other readers. It will enlarge their knowledge of this field, broaden their outlook, and increase their interest in this sphere.

Works Cited

Gleick, James. "The Information." The New York Times, 2011. Web.

System Security Types and Cyberattacks Prevention

Introduction

System analysis and design are the arts of evaluating information systems and building systems that carry communication within a firm. System analysis is a formal inquiry carried out to help people identify better actions to take and make better technology decisions. Systems design is the method of defining the architecture, system components, network interfaces, and data to meet specified requirements.

The system design process develops systems to satisfy users' specifications. Communication systems are full of security breaches, which corrupt the network (Beekman 34). These disruptions pose a significant challenge to the operations of businesses in the economy. This paper discusses different types of system security, ways of preventing cyber-attacks, and the consequences of cyber-attacks.

System Development Process

The system development process defines the methods used to form a system of networks. The decision on which system to utilize depends on several factors: the level of trust in the operating environment, the security levels of the systems it will connect to, the people using the system, and the sensitivity of the data under consideration. Other factors include how critical the business functions are and the cost of installing the system.

E-commerce is one area that threatens a firm's security because it uses the internet to do business (Richards 103). The threats under consideration range from hacking and personal data breaches to unauthorized access to information. Technology developers implement measures to curb these attacks.

Most commercial applications have security controls built in to limit access by other users. The increased worldwide threat of cyber-attacks is the reason for the heightened level of cybercrime awareness globally. Although the world is doing much to curb cyber-attacks, criminals are also becoming more sophisticated, prompting researchers to discover new methods of stopping attackers.

Existing Technological Problems

Technology is now a fixture in organizations. Its advantages come from the ease with which information moves and the speed of calculation and record-keeping. Most people keep their valuable information on computers. That information may be sensitive or may cause great harm if exposed to the world. Due to the growth of technology and globalization, unlawful access to and sharing of information has increased across network systems (Perker 81).

Illegal access involves the hacking of computer systems and personal data breaches. System hacking is the act of using software to access networks and websites without the administrator's permission. A hacker conducts the hacking process from a remote location to access a computer system elsewhere. The harmful effect of hacking is the exploitation of networks to obtain information from people and then use it for selfish purposes.

The last problem is the corruption of files by viruses and worms. Viruses are software created to corrupt files in a system. Viruses and worms spread through removable devices shared between computers, and infections may also reach computers through the internet. Malicious programmers develop viruses and worms to disrupt network systems among technology users.

Today's Business Needs in Light of the Identified Problems

Firms have embraced technology to ease communication and the flow of information around the business. The process became possible due to the affordability of technology devices and the internet. Today, organizations can conduct business with offices across the globe without travelling between them. However, just as information travels across the system, the network develops weaknesses that hackers can exploit. Businesses risk losing valuable data that exploiters can use to harm them (Perker 81). Leaked information can cause losses to the firm, so it is the managers' mandate to ensure that the technological systems are secure.

Organizations need to protect their businesses from cyber-attacks to keep vital information from leaking. The process starts with employing qualified workers in the IT department. The IT workers need to form systems analysis and design groups to deal with technological problems, and the personnel need to inspect the physical components of devices to ensure they are in good shape.

The system designers need to analyze the architectural design of the firm's network to identify weaknesses. The designers and analysts need to put security measures in place, such as file encryption and passwords, to limit access to files by unauthorized users. If the panel finds a problem, they look for ways to resolve it and prevent its recurrence. Anti-virus and other software are vital to protect the network from virus infection. Technology continues to grow, so systems analysts and designers need to keep updating their software and systems to keep up with the latest developments (Perker 81).
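To illustrate the kind of password measure described above, the sketch below stores passwords as salted hashes rather than plain text, using only Python's standard library. The function names are hypothetical and not drawn from the sources cited; this is a minimal sketch of one common technique, not a complete access-control system.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a storable hash from a password with PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

# The system never stores the plain-text password, only (salt, digest).
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Even if an attacker steals the stored (salt, digest) pairs, recovering the original passwords requires a costly brute-force search, which is the point of the slow key-derivation function.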

Stakeholders in Security Systems Development

There are no definite stakeholders in security systems development. Different companies specialize in selling protective applications and antivirus software for computer networks, and some are involved in developing specific types of security systems. Protecting an entire system from unlawful access is a large undertaking, and a firm may decide to use products from different companies.

It is up to the firm to determine which company it feels is capable of protecting its network. For the physical security of technology devices, it is up to the firm to implement its own measures to secure them. IT personnel protect the network from unauthorized intruders by putting up firewalls.

Challenges in Designing and Implementing Security Systems

Programmers develop security systems to secure a network, but those same programs have their own weaknesses. Some security firewalls are vulnerable to attack. Security systems are supposed to protect the network from cyber-attacks, but hackers always find a weakness and exploit it to get at a firm's important information. A breach mostly occurs when the attacking software operates at a more sophisticated level than the firewall's protocols. It may also occur when the hacker is the very programmer who built the security system.

Network users need sufficient knowledge of technology security. Systems administrators may lack full knowledge of how to secure information with the available security procedures (Perker 81). Such knowledge gaps put the organization at risk of leaking vital information, causing huge losses. The firm should have a body of employees trained in the maintenance and design of the technology system, and constant training is important to keep the systems analysts aware of the latest technologies.

Security firewalls cannot hide the network topology. Weaknesses in the topology expose the private network to the outside world, and this inability to hide provides a vulnerable point that hackers can use to manipulate the firm's systems. Another security challenge is that firewalls have limited auditing capabilities. Auditing is the process of reviewing the books of accounting to determine whether the company keeps them in a true and fair way.

If a firewall has limited auditing capabilities, auditors may be unable to identify cash embezzled through the system (Csonka 9). They may also be unable to access all the financial information of the firm, limiting the scope of the audit.

The last challenge is that firewalls can be expensive to install. The life spans of firewalls are short, and firms need to keep updating their systems. To update the systems, one needs to purchase new ones and sometimes hire professionals to install the software. Every time the firm suffers a security breach, it has to revise the security system. Small businesses might be unable to maintain such a security system due to a lack of finances.

In-house or Outsourced

In-house security systems are the firewalls and security measures created by the firm itself, while outsourced security systems are protections purchased or acquired from outside the business. A firm may decide whether to outsource its security system or use an in-house-built one. In-house systems involve the organization's own measures against technological problems, including physical measures such as hiding files from public view with passwords and encryption. A firm may also use its IT personnel to build its own firewall software to protect against a security breach.

Outsourced security systems protect the network from attacks using external measures, which include purchasing firewalls from security companies. Outsourcing firewalls includes purchasing antivirus software together with software that controls hacking and other cyber threats. Some organizations employ outside professionals to design their networks and analyze their technology systems. Many companies prefer to outsource firewalls from companies that specialize in security software production, and a firm may decide to use an in-house system alongside an outsourced one.

Advantages of Outsourcing

Outsourcing enables a firm to gain what it cannot acquire from its own staff, such as firewalls from professional companies. Some companies concentrate on providing security to other users at a price; they have professionals who can provide clients with whatever they need, built to their specifications. The outsourcing companies may also provide installation services that the in-house personnel might not know how to perform (Csonka 9).

Outsourced systems are also easy to maintain: to maintain outsourced firewalls, one simply needs to purchase the product that best fits the organization's applications, and firms may use different security products for different offices. An outsourced company may not know the weaknesses of the system, as it only does what it is asked (Perker 34), whereas in-house IT personnel could use their knowledge of the firewalls to manipulate the system to their own advantage. Firms may also combine both approaches: a firm may build what security it can through its own personnel and outsource whatever it cannot build itself.

Disadvantages of Outsourcing

The first disadvantage is cost. Firewalls are not cheap, and maintaining such a system can be expensive for a company. It is expensive to pay an outsider to install or design security measures for a firm, and it is expensive to keep updating systems to keep up with new applications in the firm (Csonka 9).

The last disadvantage is that an outsourced company may not understand the needs of a firm the way internal personnel do. Internal personnel may be better placed to provide security because they are readily available and understand the weak points of the business.

The Prevailing Trends in Security Systems

Several trends are evident in security systems. Most businesses prefer in-house-built systems, others use outsourced systems, and in many cases organizations use the two simultaneously. In-house-built systems are common in large firms that have the money to hire system personnel to handle their networks. Small businesses use outsourced providers, probably because of the expense of hiring and the small size of their networks (Beekman 34). Big organizations turn to outsourced systems when a threat is too complex to handle on their own. The world is changing technologically, systems keep changing, and the criminals change too; up-to-date security measures are therefore necessary for every firm.

Conclusion

The system analysis and development process is the method of analyzing a network physically and identifying and designing a network that best fits the firm's specifications. Given the level of technology in the world, most organizations have adopted communication systems that transfer information from one place to another using the internet. The internet contains loopholes that sometimes threaten the work of organizations, and hackers and cyber-attackers are a major weakness of the system: they might access information whose exposure could cause huge damage. Firms need to install a strong security system to curb these threats.

Works Cited

Beekman, Gann. Computer Security and Risks: Navigating Tomorrow's Technology. Redwood City: Benjamin Cummings Publications, 2002. Print.

Csonka, Peter. "The Draft Council of Europe Convention on Cyber-Crime: A Response to the Challenge of Crime in the Age of the Internet." Computer Law & Security Report 16.5 (2000): 3-14. Print.

Perker, Donn. "Crime." Encyclopaedia of Science and Technology 15.6 (2003): 34-456. Print.

Richards, Collie. "Anatomy of a Bug: Understanding a Computer Virus." Journal on Computer Education 74.5 (2000): 12-15. Print.

Techniques in Fire in the Workplace by Ron Nunez

The article "Fire in the Workplace," written by Ron Nunez, outlines possible techniques for preventing fire in the workplace. Ron Nunez works at Fluor Corp., where he develops and implements various programs related to environmental safety, corporate safety, and employee health. In the first part of the article, the author gives the examples of the 1991 Imperial chicken plant fire and the 1911 Triangle Shirtwaist Factory fire, both of which had devastating consequences. The 1991 fire left twenty-five workers dead and forty-nine injured, while the 1911 fire caused the deaths of one hundred and forty-nine people (Nunez, 2007, p. 46).

The second part of the paper offers practical information on how the organization should act upon the possible danger of fire and apply the learned lessons from the provided examples of fire in the workplace.

Fire exits, fire suppression systems, portable fire extinguishers, emergency evacuation, and fire prevention plans are the primary methods of fire prevention mentioned by the author. In addition, apart from providing effective fire prevention methods in the workplace, a company should also encourage its workers to implement fire safety measures at home. Thus, the author emphasizes that fire safety depends on learning from previous instances of workplace fire and taking every precaution to prevent it.

The article supports the idea that safety in the workplace is not only the most important condition for employees but is compulsory, since an organization is responsible for the safety of its employees while they are working. The proposed fire prevention plans involve designing a comprehensive process for controlling ignition sources in the workplace, one that is accessible to all employees (Nunez, 2007, p. 48).

The primary emphasis in the case of a workplace fire should be on evacuation. Evacuation is important because even if an employer equips the workplace with fire extinguishers and expects employees to use them in an emergency, the use of fire extinguishers can itself lead to potential harm (Spellman & Whiting, 2005, p. 287).

In my opinion, although emergency evacuation plans are the most effective measure in the case of a workplace fire, they will not be fully effective unless combined with other fire prevention methods. Thus, it is crucial to strike a balance among the possible fire prevention methods and use the techniques most suitable for the organization in question. Furthermore, conducting workplace safety training for all employees is an important step in establishing safety and encouraging employees to cooperate in an emergency.

In addition to cooperation, the organization should go to great lengths to have the latest fire prevention technologies in its facilities. The Fire-Lite Alarms and audible emergency evacuation technologies mentioned in the article are useful but already dated. For instance, an organization could add a newer fire extinguishing system such as Victaulic Vortex, which combines nitrogen and water in one suspension in order to cool the fire and counteract the oxygen necessary for it to burn.

Thus, an effective workplace fire prevention system combines the most successful techniques, involving employee cooperation as well as innovative technologies targeted at fire prevention and, in an emergency, at extinguishing the fire.

References

Nunez, R. (2007). Fire in the Workplace: Fundamental Elements of Prevention & Protection. Best Practices, 5(11), 46-48.

Spellman, F., & Whiting, N. (2005). Safety Engineering: Principles and Practices (2nd ed.). Lanham, MD: Government Institutes.

Modern Technologies Assisting Freelancers

The paradoxical task of freelancers is to be ubiquitous while staying at home and doing their jobs. They need to keep current with professional developments and anything else that might be useful in their work. Therefore, no matter what their occupation, whether it is copywriting, proofreading, translating, graphic design, or programming, freelancers must stay cognizant of all the recent innovations in the sphere of technology. Nowadays, we have access to so many electronic devices with such powerful mobile capabilities that a freelancer can practically push buttons and find themselves earning money.

Unfortunately, not all out-of-office workers realize how indispensable these gadgets are to their profession. Sadly, freelancers who eschew this beneficial technology, and the science behind it, can be left behind as outsiders in the competition. Moreover, they can find themselves missing this fast train!

If you wish to be a twenty-first-century freelancer, your work will not be optimally efficient if you do not consider the following high-tech innovations:

  1. First, you will use e-book readers to obtain information vital to your work. You can read books any time you want, unchained from your computer monitor, free to be outdoors, or in any comfortable spot. Fear not; reading will be just as it was in the Middle Ages, but with one considerable benefit.
  2. Second, virtual reality will be a main element of an advanced freelancer's success. This will become the natural environment in which they will hunt and bring down any work prey on which they set their sights.
  3. Finally, some workers may view the Internet as a bottomless black hole, a whirlpool drawing them down into a fake world where truth is indistinguishable from delusion. Indeed, when surfing and bobbing on the unending waves of information, it is easy to choke on too much data. However, the apparent unlimited virtual reality of the web is never greater than the breadth of the human mind.

Thus, if you are an average freelancer still unfamiliar with all those tricky e-tools that allow faster work, you really will have to master them soon. To give yourself a sense of the advantages of these technologies, imagine yourself back in the eighteenth century, with no laptop, no email, and no computers at all. In this scenario, a carrier pigeon roughly approximates the function of your email.

Your computer is the equivalent of a quill and ink composed of lamp blacking, grease, and heaven knows what else. Your graphics software would be replaced with a palette of oil or tempera paints, filled with toxic heavy metals, and infinitely messy brushes! On the next wave of technology are handheld devices; these seem like biros or ballpoint pens in comparison to the quill or the stylus. E-book readers function like libraries with crackerjack librarians who help you find resources and locate the specific sections you need.

Quite a technological gulf exists between olden times and where most of us are now, does it not? Keep in mind that the gap between the technologies you feel comfortable with right now and what is just on the horizon is just as dramatic. There are inventions in the pipeline for widespread personal use that you may not be familiar with unless you are a serious computer geek. What about electronic ink displays, next-generation personal digital assistants, or complete voice control? How cool (and laborsaving) would it be to control your computer and compose using your voice alone? Creative and exciting? Absolutely!

While deliberating the benefits and shortcomings of such high-tech conveniences, freelancers may well ask: what about creativity? Can it be suppressed by using all those automatic devices? If all the information you need is in the palm of your hand, does it not take but the slightest effort to achieve an impressive result? Those who get hung up on this question are misguided, because nothing replaces the power of the mind.

What high-tech innovations endow freelancers with is the most valuable resource for all of us: time. Finding ways to use that time productively is the next challenge. Which shall it be: professional development, self-actualization, a greater volume of work, or diversification of jobs?

Of course, you can continue your traditional work-at-home methods. However, once you have tried one of the new devices or conveniences, you will be loath to go back to twentieth-century technology.

Just look over the following data gathered by ResearchWritingCenter.com (from internal reports and Google Analytics):

Three percent of users reported bugs or malfunctions with the site. Of that 3%, the proportion using outdated browsers (e.g., Internet Explorer 6) was 95%.
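The two percentages above can be combined to show what fraction of all users fell into both groups at once. The short Python sketch below does the arithmetic; the figures are the ones quoted above, and the variable names are purely illustrative:

```python
# Combine the two reported proportions:
# 3% of all users reported bugs; 95% of those ran an outdated browser.
bug_rate = 0.03            # P(reported a bug)
outdated_given_bug = 0.95  # P(outdated browser | reported a bug)

# Joint proportion: users who both reported a bug AND ran an outdated browser.
both = bug_rate * outdated_given_bug
print(f"{both:.2%} of all users")  # prints "2.85% of all users"
```

In other words, under 3% of the audience accounts for nearly all the trouble, which is exactly why the statistic makes such a strong case for staying current.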

This single statistic demonstrates how critical it is to keep abreast of updates even to your current technologies. Regularly monitoring your tools and resources for patches, updates, fixes, and newer versions (especially when offered for free) will help you keep up with the unending and rapid changes in the computing world. Remember the admonition of the Red Queen in Lewis Carroll's Through the Looking-Glass: "It takes all the running you can do, to keep in the same place!" Some thinkers feel this applies to real life as well (blog.creativethink.com/2007/03/the_red_queen_e.html).

So, don't be afraid of facing the future squarely, because the technological future is merely an advanced version of your technological past, and you have been productive so far, haven't you? With the help of the most modern tools, you can preserve your current mode of life and income, or even improve on it. What is important is that you can economize on time and earn more money for your skills and abilities.

Application of Instructional Design Experience

Introduction

Any educator, regardless of their background, has a specific way of approaching the design of their lessons. This approach is influenced both by the personal characteristics of the educator and by the theories and models that they use as a guide. In the contemporary environment, where classrooms are diverse and students engage with complex topics such as justice, race, and critical thinking, educators face the challenge of developing an ID model that allows them to address such topics and make the explanations accessible to the students. This paper will consider instructional design (ID) from multiple perspectives, present a personal ID model devised from the contents of ED 6025, and apply it to an education scenario.

Instructional Design (ID) Model

Currently, various ID models allow an educator to create the best learning environments by considering the different steps that lead to the desired learning outcome. One definition of ID is the set of design principles and guidelines that help an educator organize adequate education scenarios for learners (Gunawardena et al., 2019). Generally, an ID model has to be suitable for the educator, the purpose of the lesson, and the students who are receiving it.

An important insight that I gained over the course of EDTC 6025 is the need to devise a personal ID model. By examining the different ways in which ID can be developed, I have learned to look at lesson design from various perspectives. Moreover, as a result of this course, I have come to understand that ID is not merely a process of creating instructions; it is a framework for thinking.

My ID model is devised from an assessment of the modern-day classroom. Educators today have to work in an environment where students have different ethnicities and come from different cultures; for some, English may not be the language they speak at home. This observation is the basis of my ID framework: the lesson has to be tailored toward the variety of viewpoints and perspectives that students may have due to their backgrounds. It also prompts me to think about the limitations of my knowledge of diversity; although I am learning and trying to know more about different cultures, I also want to be aware of the gaps in my knowledge. One way of addressing these gaps is through communication and engagement: students have to feel free to share their opinions or ask questions over the course of the lessons. Through this process, I will update my knowledge as an educator, and the students will learn about the topic in question.

Since there are multiple models of instructional design, an educator may struggle to choose one approach. For instance, ADDIE is an abbreviation for the analysis, design, development, implementation, and evaluation stages of ID (Gunawardena et al., 2019). At each step, an educator revises their plan, and ADDIE can be used in a non-linear manner. Another approach is to draw on sociological and psychological theories, which help an educator understand the capabilities of different age groups. For instance, Piaget's stages of cognitive development explain the different skills and capabilities that students develop over the years (Gunawardena et al., 2019). This theory also helps one understand that some concepts may be too complex for certain age groups and need to be simplified. In my ID model, I apply elements of both ADDIE and Piaget's theory; for example, I analyze the classroom first and determine what type of information the students can comprehend, considering their age group.

Additionally, my ID model incorporates the use of technology, which aims to help students navigate the modern information-driven world. According to Gunawardena et al. (2019), technology has eliminated the barriers of time and space that existed in interpersonal communication. Moreover, it has affected every aspect of interaction, as well as learning. This is why integrating technology is both helpful and necessary: by showing videos or pictures, for example, the educator can enhance the learning process, since the students see varied representations of a topic. Hence, the justification for integrating the technology element into my ID model is the environment in which the students will live and work in the future; there is a need to teach them how to leverage technology for the best learning outcomes. At the design stage, I collect information about the topic and think about the ways in which I can use technology to explain it better. The development stage of the ID implies creating a detailed plan of the activities, questions, information, and homework that I will use in the classroom. After the implementation stage, I find it essential to evaluate the lesson, particularly the interpersonal communication with the students and the use of technology during the lesson, to improve the plan for the future.

Scenario

The scenario for this assignment is the education of elementary school students about justice. Following my ID model, the first step is reflecting on the specifics of the audience that will receive the education materials. In this case, these are elementary school students between 5 and 10 years old. The topic of justice is a complex one and can be approached from several perspectives, for example, ethics, law, or social relations. However, elementary school students have yet to learn about these different perspectives, and the explanation of justice for them has to be based on examples within their scope of understanding. Hence, using my ID model, the first step would be analyzing the characteristics of the classroom and applying Piaget's theory to determine the type of information that can be comprehended by this age group.

Next, an educator using my ID approach would proceed to the design stage, where one has to think about technology use in the classroom. For the topic of justice, one may find a clip from a cartoon that demonstrates a situation of injustice. Then, to integrate communication and encourage students to share their opinions, considering the classroom's diversity, an educator may create a set of questions, such as "How would you explain the notion of justice after seeing this clip?" The teacher may also encourage students to share their own examples of just or unjust treatment they have experienced.

The educator would benefit from applying these two steps of my ID model because they would better understand the students' learning capabilities. This information is collected at the first stage of the process, when the educator reflects on the topic and uses Piaget's stages of cognitive development to determine what concepts can be understood by the age group in question. Moreover, following the implementation, the next stages of the ID require the educator to reflect on the lesson and on the benefits and downsides of the selected methods. In this way, one can improve the plan and make the lessons more engaging for the students. Hence, this model helps the educator understand the class dynamics better and enables continuous development.

The use of technology, which is an integral part of my ID process, also helps the educator. By using an example from a cartoon, one can demonstrate a concept through things the students enjoy. The majority of elementary school students like to watch cartoons in their free time, and this integration will also prompt them to reflect on the topics raised in these films, as opposed to merely watching them for entertainment. The integration of diversity is also integral to my ID, and an educator may benefit from asking the students about their views of justice because it allows them to simultaneously teach the students about the topic and consider the perspectives of other cultures and ethnicities. Hence, the students will also learn to pay attention to different views and respect the varied cultures that surround them.

Conclusion

In summary, this paper discusses my personal ID model and applies it to a real-life scenario. The first section is dedicated to the model itself, which is a combination of ADDIE and Piaget's stages of cognitive development. The second part explains how to apply the model in a scenario where an educator has to teach elementary students about justice, along with the benefits of using this approach.

Reference

Gunawardena, G., Frechette, C., & Layne, L. (2019). Culturally inclusive instructional design. Routledge.

Artificial Vision: History and Modern Trends

Introduction

Thousands of people worldwide are in need of technology that can restore the ability to see. However, despite a significant increase in innovation in the medical field in recent years, there have long been no affordable and effective ways to implement vision prosthetics. In this regard, there is a need to engage advanced technologies designed to help visually impaired or blind people regain the ability to see. The situation is, nonetheless, complicated because, in addition to the eyeballs themselves, the visual cortex of the brain and the nerve pathways that connect the eyes to the brain are involved in the process (Zhang et al., 2019). Due to the emergence of robotic systems as the latest neuroengineering developments, the future of artificial vision has favorable prospects, and special prostheses can become effective tools for restoring people's ability to see.

History of Artificial Vision

Although advanced technological solutions based on robotic self-learning systems have appeared only recently, the study of the possibilities of correcting vision started long ago. In 1823, the Czech scholar J. E. Purkinje became interested in the issues of vision and hallucinations, as well as the possibility of artificially stimulating visual images (da Mota Gomes, 2019). He was the first to describe visual flashes, or phosphenes, which he produced during an experiment with a battery by passing an electric current through his head and recording his visual experience (da Mota Gomes, 2019). More than a century later, in 1956, J. I. Tassiker patented the first retinal implant; it did not give any useful vision but showed that visual signals could be artificially induced (Allen, 2021). However, the development of ocular prostheses was slowed for a long time by technological limitations.

It took a long time before any real developments appeared that could provide vision a person could actually use. According to Farnum and Pelled (2020), in 2019 there were approximately 50 active projects worldwide focused on vision prosthetics. Bionic implants have proved the most clinically effective; in their structure and functioning, they resemble robotic information-processing systems. Therefore, these devices may be called the future of ophthalmic neurosurgery.

Modern Trends in Bionic Implants

The fundamental technologies by which bionic vision implants are produced are algorithms that help stimulate individual areas of the visual system. The modern variety of these devices stems from distinctive approaches to production as well as from their purpose, since different vision problems are addressed. The modern market of bionic implants makes it possible to select optimal systems that correspond to individual characteristics and perform specific functions.

Retinal Nanotubes

One of the simplest but least efficient technologies is equipping the visual system with retinal nanotubes. In 2018, a group of scholars from China conducted an experiment on mice in which they proposed using nanotubes in place of non-functioning retinal photoreceptors (Wu et al., 2021). The advantage of this approach is the small size of the devices. However, each nanotube can stimulate only a few retinal cells, which limits its practicality.

Biopixels

Biopixels are microparticles that perform a function similar to that of real cells. They have a sheath made of a lipid layer in which photosensitive proteins are embedded (Sarkar & Bagh, 2022). When light quanta strike them, their electric potential changes, as in real cells, thus producing an electric signal (Sarkar & Bagh, 2022). This technology has not yet become widespread, but its development continues.

Perovskite Artificial Retina

The main developments associated with this technology are aimed at stimulating all layers of living cells. With the help of perovskite artificial retina technology, scholars are trying to provide the ability not only to receive light sensations but also to distinguish color by modeling the signal (Yang et al., 2020). This can be done in such a way that the signal is perceived by the brain as having a certain color, as a result of which a person is able to view a particular object naturally.

Photovoltaic Membrane

This material is a small film coated with a layer of a chemical substance that absorbs light and converts it into an electrical signal (Taherimakhsousi et al., 2021). The membrane is placed on a spherical base so that it can be conveniently positioned on the fundus (Taherimakhsousi et al., 2021). Such a prosthetic mechanism is complex, but it could be applied at scale if it becomes more accessible.

Semiconductor Polymer

The technology of introducing a semiconductor polymer solution under the retina is a form of chemical prosthetics. With the help of this material, light is captured and transformed into electrical signals, sending impulses to the brain and giving a person the ability to see (Maya-Vetencourt et al., 2020). All of the implantation systems considered so far can be effective, but some devices that work on the principle of artificial vision are implanted directly into the brain and are thus more complex structures.

Cortical Implantation System

Cortical prostheses are a special subgroup of visual neuroprostheses that are installed directly in the brain. They are able to induce visual perceptions in blind people through direct electrical stimulation of the occipital cortex, which is responsible for image recognition (Foroushani et al., 2018). This approach may be the only available treatment for blindness caused by glaucoma, end-stage retinitis pigmentosa, optic nerve atrophy, retinal and optic nerve injury, and other conditions (Foroushani et al., 2018). In recent years, neuroengineers have made significant progress in creating intracortical visual neuroprostheses that can restore limited but useful vision.

Cortical prostheses may vary depending on specific criteria and purposes. For instance, as Foroushani et al. (2018) note, a prerequisite for installing one of these implants is the patient's visual experience. This means they can only be used for people with a developed visual cortex who were born sighted and later lost their sight. These devices can also be intracortical, consisting of groups of miniature wireless implantable stimulator grids that transmit image information directly to the human brain (Foroushani et al., 2018). Ongoing technological progress allows these developments to be refined, and bionic robotic devices can become integral tools in the lives of thousands of people.

Conclusion

People in dire need of vision restoration can count on modern robotic devices developed thanks to the latest neural engineering advancements. Work in this direction has been going on for decades, and to date, a wide range of solutions has been presented that can perform different functions and meet individual needs. In addition to bionic prostheses, cortical implantation systems are applied, which are complex and efficient devices. The industry is progressing, and machine intelligence, augmented by robotic tools, may help solve the problem of low or no vision in the future.

References

Allen, P. J. (2021). Retinal prostheses: Where to from here? Clinical & Experimental Ophthalmology, 49(5), 418-429.

da Mota Gomes, M. (2019). Jan Evangelista Purkinje, a brilliant, multifaceted Czech biologist: Nerve tissue. Revista Brasileira de Neurologia, 55(4), 13-17.

Farnum, A., & Pelled, G. (2020). New vision for visual prostheses. Frontiers in Neuroscience, 14(36), 1-11.

Foroushani, A. N., Pack, C. C., & Sawan, M. (2018). Cortical visual prostheses: From microstimulation to functional percept. Journal of Neural Engineering, 15(2), 021005.

Maya-Vetencourt, J. F., Manfredi, G., Mete, M., Colombo, E., Bramini, M., Di Marco, S., Shmal, D., Mantero, G., Dipalo, M., Rocchi, A., DiFrancesco, M. L., Papaleo, E. D., Russo, A., Barsotti, J., Eleftheriou, C., Di Maria, F., Cossu, V., Piazza, F., Emionite, L., & Benfenati, F. (2020). Subretinally injected semiconducting polymer nanoparticles rescue vision in a rat model of retinal dystrophy. Nature Nanotechnology, 15(8), 698-708.

Sarkar, K., & Bagh, S. (2022). Synthetic gene circuits for higher-order information processing. New Frontiers and Applications of Synthetic Biology, 373-395.

Taherimakhsousi, N., Fievez, M., MacLeod, B. P., Booker, E. P., Fayard, E., Matheron, M., Manceau, M., Cros, S., Berson, S., & Berlinguette, C. P. (2021). A machine vision tool for facilitating the optimization of large-area perovskite photovoltaics. NPJ Computational Materials, 7(1), 1-10.

Wu, N., Wan, S., Su, S., Huang, H., Dou, G., & Sun, L. (2021). Electrode materials for brain-machine interface: A review. InfoMat, 3(11), 1174-1194.

Yang, X., Xiong, Z., Chen, Y., Ren, Y., Zhou, L., Li, H., Zhou, Y., Pan, F., & Han, S. T. (2020). A self-powered artificial retina perception system for image preprocessing based on photovoltaic devices and memristive arrays. Nano Energy, 78, 105246.

Zhang, H. J., Mi, X. S., & So, K. F. (2019). Normal tension glaucoma: From the brain to the eye or the inverse? Neural Regeneration Research, 14(11), 1845-1850.