Work Systems Design in Business

Work Systems Design is the practical analysis of work processes with the objective of determining the most effective and efficient use of resources and establishing quality requirements for the process. The concept is shaped by technological development and should identify all potential hindrances. Work Systems Design combines several approaches and can meaningfully benefit the performance of the entire organization.

The analysis involves critical factors that heavily influence its implementation. The three core factors are organizational, behavioral, and environmental (Chapter 6). Given the role of technology in the concept, it generates a need for employees with more advanced skills. A significant number of young workers have inadequate expertise and, in some cases, little or no training beyond basic reading (Chapter 6). In addition, businesses have eliminated lower-level roles occupied by individuals whose duties can now be automated thanks to the deployment of technology (Chapter 6). Despite the impact of technological development, the importance of the human factor and the environment should not be overlooked.

Employees with unlimited vacation policies, often known as unrestricted full pay or open vacation policies, can use as many sick, personal, or vacation days as they prefer as long as they complete their tasks. The advantage of such a policy from an employee's perspective is increased scheduling flexibility, whereas the disadvantage is that it creates preconditions for procrastination and exhaustion. The benefit of the unlimited vacation system for employers is that it reduces the effort and paperwork needed to track employee vacations and diminishes the need to pay out unused days off (Chapter 6). Whether to prioritize this idea is determined individually in each company, since the level of trust between management and subordinates matters. The organizational barriers are resistance within the internal environment, workers' personal profit, risk and uncertainty, and poor communication (Chapter 6). The following measures should be applied to overcome these drawbacks: provision of individual benefits, participation, and two-way communication.

To sum up, Work Systems Design is an approach that aims to ensure the growth of an organization's results. The automation of replaceable positions illustrates the impact of technological advancement. The unlimited vacation policy has positive and negative aspects for both employees and employers, so its adoption should be weighed case by case. Barriers within companies are avoided by paying attention to the internal environment.

Reference

Chapter 6: Design and Redesign of Work Systems.

Technological Developments in Aircraft

Introduction

Nowadays, the most prominent theme in commercial and military technological development is autonomy, which encompasses self-driving automobiles, airplanes without a human pilot, and land, sea, and undersea vehicles without human operators. Remotely piloted aircraft have been a significant feature of the world's largest militaries for years, led by the United States, and have proven deadly in their impact. Corporate and public expectations are accelerating improvements in autonomous aerial systems. This work was written with the aim of studying new technologies in the field of unmanned aerial vehicles.

Artificial Intelligence

Remotely operated aerial vehicles and deep learning have begun to capture the interest of industrial and academic researchers. Pilotless aerial vehicles have increased the ability to control and regularly monitor isolated areas. The introduction of computer vision has decreased the number of hurdles facing Unmanned Aerial Vehicles while also improving capabilities and opening doors to new sectors (Khan and Al-Mulla, 2019). The combination of remotely piloted aircraft with computer vision has produced quick and dependable results. Unmanned Aerial Vehicles equipped with computer vision have aided real-time surveillance, data gathering and analysis, and forecasting in computer systems, intelligent buildings, defense, farming, and mining.
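
As an illustration of the surveillance use case, the sketch below runs OpenCV's built-in pedestrian detector over aerial footage. It is a minimal example under stated assumptions: the video file name drone_feed.mp4 is hypothetical, and the code is not drawn from the cited study.

```python
# Minimal sketch: person detection on (assumed) drone footage with OpenCV's
# built-in HOG pedestrian detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("drone_feed.mp4")  # hypothetical aerial video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the current frame; returns bounding boxes and scores.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("surveillance", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```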

Machine learning techniques, sensors, and information technology advancements have paved the way for UAV applications in a variety of industries. The key areas include wireless communications, intelligent buildings, the military, farming, and industry. The use of unmanned aerial vehicles (UAVs) in smart cities and defense to achieve various goals is expanding fast. For example, a graffiti-cleaning system has been created using a UAV platform and machine learning algorithms.

Hardware

The majority of UAVs are made up of the same hardware elements. A drone's essential components include a body, a power source, hardware devices, internal and external sensors, actuators, and autonomy algorithms. A drone's cameras take external measurements and identify exterior shapes to avoid collisions. A UAV's power source can range from lithium-ion batteries to regular aircraft engines. UAVs also include technology in the shape of a flight stack, which comprises hardware, firmware, and software and is responsible for flight control, guidance, and decision-making (Huang et al., 2021). Patent owners' proposals for prospective drone technology may shape future UAV use. Hydrogen-powered drones, enhanced machine learning, concern for the environment, and self-charging are examples of such technology. UAVs might be used for a variety of purposes in the future, including driverless cars and public transit, drone waiting staff, and hovering administrative assistants.

The drone concept was first developed in the early 1900s, with the primary goal of supplying training targets for military personnel. The advancement of sophisticated technology and superior electric-power systems has resulted in an increase in the use of consumer and commercial drones. Quadcopter UAVs hold great appeal among hobbyists in radio control, aviation, and gadgets; yet the application of crewless aerial vehicles (UAVs) in corporate and general aviation is hampered by a lack of regulatory authority.

GPS

The Global Positioning System (GPS), formerly known as Navstar GPS, is a satellite-based radio navigation network owned by the US government and maintained by the US Space Force. It is one of the global navigation satellite systems that provide location and time information to a GPS receiver anywhere on or near the Earth, provided the receiver has an unobstructed line of sight to four or more GPS satellites. Terrain and structures, for example, can obstruct the comparatively weak GPS signals.

GPS in UAVs is vital regardless of whether the aircraft is steered autonomously or by ground-based pilots. GPS navigation algorithms can provide continuous precision as long as adequate satellites are available during the UAV flight (Liang et al., 2019). GPS is frequently combined with Inertial Navigation Systems (INS) to provide more complete UAV navigation options. The most prevalent application of GPS in UAVs is navigation. The GPS receiver, an essential component of most UAV navigation units, is used to identify the vehicle's location and to calculate its relative position and speed. The receiver's position can be used to monitor the UAV or, in conjunction with an autonomous guidance system, to steer it.
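
To make the navigation role concrete, the sketch below shows the basic arithmetic an autopilot performs with two GPS fixes: the great-circle distance and initial bearing from the current position to a waypoint. The coordinates are illustrative, not taken from the cited study.

```python
# Minimal sketch: distance and bearing from a GPS fix to a waypoint.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from the current fix to the waypoint."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

# Example: current fix to a waypoint roughly 1.2 km to the west.
print(haversine_m(51.5007, -0.1246, 51.5014, -0.1419))
print(bearing_deg(51.5007, -0.1246, 51.5014, -0.1419))
```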

Gallium Arsenide

Gallium arsenide is a substance that is frequently used in integrated circuit chips due to its appealing features, and it has a wide range of applications. It has become very popular in high electron mobility transistor (HEMT) constructions, in comparison to silicon, because, as a direct-bandgap material, it does not necessitate any change in momentum in the transition between the conduction-band minimum and the valence-band maximum, so the transition does not require an assisting phonon interaction. For many years, GaAs-based photovoltaic cells have been developed as an alternative to commonly available photovoltaic cells. Even though cells based on indium gallium compounds have the highest performance, they are not widely used. GaAs cells have distinct features that make them appealing, particularly in certain applications.

Gallium arsenide (GaAs) photovoltaic cells, which are highly efficient, thin-film solar panels built from the semiconductor GaAs, are an excellent option for powering UAVs. They are incredibly light and flexible in comparison to conventional solar cells, making them suitable for UAVs since they are simple to attach and add minimal extra weight (Papež et al., 2021). Furthermore, their excellent conversion efficiency ensures that UAVs receive peak power. Upcoming UAVs may be capable of flying for long periods of time, perhaps indefinitely, by switching from regular batteries to solar panels.
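
A rough power-budget calculation makes the efficiency argument tangible. Every figure below is an illustrative assumption rather than a value from the cited paper.

```python
# Back-of-the-envelope sketch: how much of a UAV's cruise power GaAs cells
# could cover compared with silicon. All numbers are assumed for illustration.
PANEL_AREA_M2 = 0.5        # wing area covered by cells (assumed)
IRRADIANCE_W_M2 = 1000.0   # approximate peak solar irradiance at sea level
GAAS_EFFICIENCY = 0.29     # high-end single-junction GaAs cell (approximate)
SILICON_EFFICIENCY = 0.20  # typical silicon cell, for comparison
CRUISE_POWER_W = 120.0     # assumed power draw of a small fixed-wing UAV

for name, eff in [("GaAs", GAAS_EFFICIENCY), ("Si", SILICON_EFFICIENCY)]:
    harvested = PANEL_AREA_M2 * IRRADIANCE_W_M2 * eff
    share = 100 * harvested / CRUISE_POWER_W
    print(f"{name}: {harvested:.0f} W harvested, {share:.0f}% of cruise power")
```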

Fiber-optic Detectors

Fiber-optic detectors are becoming increasingly important in the field of sensing devices. They provide several benefits over traditional technologies (Luo et al., 2017). These devices are small, light, simple to implement, cheap, and immune to electromagnetic interference, all of which are essential characteristics for sensor applications. As a result, fiber-optic sensors are highly adaptable for monitoring temperature, strain, external refractive index, moisture, and electric-field fluctuations in high-voltage situations. To control the UAV electronically, the operator must transmit commands to the aircraft, which regulates the rotational speeds of the UAV's four propellers. Essentially, the PWM signals are received by the aircraft's flight control unit (FCU) and transmitted to an electronic speed controller (ESC). The UAV's battery supplies the ESC unit, which regulates motor spin for the required flight conditions.
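
The PWM convention mentioned above can be sketched in a few lines. The 1000-2000 microsecond pulse range is the common hobby-ESC convention; treat the exact endpoints and the motor mix as assumptions.

```python
# Minimal sketch: mapping a normalized throttle command to the PWM pulse
# width an ESC expects (assumed 1000 us = stop, 2000 us = full throttle).
def throttle_to_pulse_us(throttle: float) -> int:
    throttle = max(0.0, min(1.0, throttle))  # clamp command to [0, 1]
    return int(1000 + throttle * 1000)

# One command per rotor of a quadcopter: a slight pitch-forward mix.
for motor, cmd in enumerate([0.55, 0.55, 0.45, 0.45], start=1):
    print(f"motor {motor}: {throttle_to_pulse_us(cmd)} us")
```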

Conclusion

To summarize, unmanned aircraft operated remotely by humans have been a prominent element of the world's largest militaries for years, led by the United States, and have proven lethal. Machine learning algorithms, cameras, and advances in technology have opened the road for UAV applications in a wide range of sectors. GPS is essential in UAVs, whether the aircraft is directed autonomously or by ground-based pilots. Gallium arsenide (GaAs) photovoltaic panels, which are lightweight, highly efficient solar cells made from the semiconductor GaAs, are a viable option for powering UAVs. In the realm of sensing, fiber-optic detectors are becoming progressively crucial.

References

Huang, J., Tian, G., Zhang, J., & Chen, Y. (2021). On Unmanned Aerial Vehicles Light Show Systems: Algorithms, Software, and Hardware. Applied Sciences, 11(16), 7687. Web.

Khan, A. I., & Al-Mulla, Y. (2019). Unmanned aerial vehicle in the machine learning environment. Procedia Computer Science, 160, 46-53. Web.

Liang, C., Miao, M., Ma, J., Yan, H., Zhang, Q., Li, X., & Li, T. (2019). Detection of GPS spoofing attack on an unmanned aerial vehicle system. In International Conference on Machine Learning for Cyber Security (pp. 123-139). Springer, Cham. Web.

Luo, Y., Shen, J., Shao, F., Guo, C., Yang, N., & Zhang, J. (2017). Health monitoring of unmanned aerial vehicles based on the optical fiber sensor array. In AOPC 2017: Fiber Optic Sensing and Optical Communications (Vol. 10464, p. 104640K). International Society for Optics and Photonics. Web.

Papež, N., Dallaev, R., Ţălu, Ş., & Kaštyl, J. (2021). Overview of the Current State of Gallium Arsenide-Based Solar Cells. Materials, 14(11), 3075. Web.

Social Engineering Attacks and Network Security

While looking for other articles on the topic, I tried to find those addressing ways to deal with social engineering attacks, as well as those that would go into detail about the techniques typically implemented by hackers. When I investigated the issue and the case study provided in the chapter Social Engineering Attacks of the book Security+ Guide to Network Security Fundamentals, I was surprised that only individual techniques are emphasized in the description of the case. This psychological explanation creates a very vague notion of social engineering, which made me look for a better-structured and more logical explanation of it.

The article that I finally selected is called Dissecting Social Engineering. I got interested in the abstract, and now I can state that this research fully lived up to its promise. In the book, there is a classification of social engineering attacks; however, such categorization does not provide a thorough analysis of the topic, since underlying principles are simply described without any further attempt to draw parallels or identify guiding principles (Ciampa 68-72). On the contrary, the article demonstrates a better-structured and more profound approach to the investigation of the problem. First, the authors reviewed 40 texts related to the issue and concluded that most scholars overemphasized the significance of individual techniques in social engineering attacks; the majority of cases cannot be explained by this factor (Tetri and Vuorinen 1014). That is why the researchers concentrated not on the techniques (well described in several different textbooks) but rather on their functions, which makes this study more practically oriented.

What appealed to me in this article is that the three dimensions of social engineering deduced by the authors (persuasion, fabrication, and data collection) allow understanding all aspects of such attacks instead of attributing the problem to the psychological traits of hackers and their victims (Tetri and Vuorinen 1021). This will make it possible for other researchers to grasp the diversity of the problem and further develop the categorization.

I also particularly liked the structure of the study since it features a visual representation of each step. The table provided sums up the problems that were addressed, theoretical frameworks applied for this purpose, and potential practical implementation of the results (Tetri and Vuorinen 1022). This gives a good insight into the approach and explains to the reader why multi-dimensionality is crucial for the research.

Another article on the same topic, which is called Advanced Social Engineering Attacks, can be contrasted to the one described above based on the approach to the problem selected by the researchers. While in the previous case the major focus was made on the internal principles of social engineering, this study is more concerned with the external factors that make it possible for new vectors of the problem to develop (Krombholz et al. 114). The taxonomy of attacks that the authors provide is particularly valuable in this research since it contains the most recent and advanced types of social engineering. The necessity to give such a detailed classification is well explained by the fact that in the modern technological environment, these attacks can lead to recurring threats.

All three sources devoted to social engineering seem useful to me since they highlight different aspects of the issue. Still, I believe that the first article (unlike the chapter in the book and the second research) is more informative and comprehensive. The researchers managed to find flaws in the existing approaches and account for them. Moreover, they arrived at a new framework that can be applied for case analysis, which is a significant achievement for any scholar.

Works Cited

Ciampa, M. Security+ Guide to Network Security Fundamentals. Cengage Learning, 2012.

Krombholz, Katharina, et al. Advanced Social Engineering Attacks. Journal of Information Security and Applications, vol. 22, 2015, pp. 113-122.

Tetri, Pekka, and Jukka Vuorinen. Dissecting Social Engineering. Behaviour & Information Technology, vol. 32, no. 10, 2013, pp. 1014-1023.

Station Night Club Fire Tragedy and Prevention

Introduction

The Station Nightclub fire is one of the most tragic episodes in the history of the United States; it struck Rhode Island in 2003. The catastrophe stemmed from a pyrotechnics display staged in the club during a concert, which led to an instant blaze. The incident had deadly consequences for one hundred visitors. Moreover, experts state that the outcome of the tragedy was as favorable as it could have been for the visitors, since the emergency team managed to suppress the source of the fire within the first three minutes after ignition. Nevertheless, the quick reaction did not save the building, and it collapsed thirty minutes after the episode began (Harrington, Biffl, & Cioffi, 2005). The case evoked multiple discussions at both the local and state levels, which aimed at the elaboration of consistent fire prevention plans.

The Incident Prevention Plan: Exploring the Standards of Fire Safety

The analysis of the Station Nightclub fire produced several observations that contributed to the compilation of fire prevention plans. Some scientists linked the outcomes of the episode to the individual behaviors of the building's occupants. Subsequently, it was decided to incorporate instruction on human behavior into academic programs on fire safety (Kobes, Helsloot, Vries, & Post, 2010).

The broader prevention concerns that follow from the Station Nightclub case rest on several aspects. First, fire safety regulation requires the use of fire-resistant barriers between building interiors, particularly those finished with wood and thermal coverings. Second, automatic sprinklers are required in any area that is vulnerable to fire. Third, detailed guidance on evacuating the building must be posted in visible locations. Indeed, one of the principal reasons for the tragic consequences of the Station Nightclub incident was the huge crowd crush that occurred in front of the main entrance door (Bryner & Madrzykowski, 2005).

Therefore, the investigation of the episode's consequences allows the creation of an elaborate set of recommendations providing the necessary fire prevention instructions. The draft combines several critical points. The first recommendation is to establish a fire inspection program that would focus on complete inspections of public establishments' fire-resistance systems. The second stipulates initiating a research study analyzing human behavior under fire emergency conditions. Supporting flexible communication between emergency organizations and civil society serves as the third aspect.

The fourth recommendation calls for the development of universal public protection systems addressing such problems as blocked exit doors, broken sprinklers, and exceeded occupancy limits. Fifth, the fire prevention plan recommends enlarging the entrance doors of public establishments such as nightclubs, restaurants, and concert halls. Moreover, the regulation calls for improving general evacuation techniques and suggests instructing the public on emergency conduct rules before the start of any mass performance. Finally, the plan provides an overview of the basic fire-resistant materials that can be used in the construction of public buildings.

Conclusion: Fire Safety Implications

The Station Nightclub fire tragedy yields a range of specific suggestions for improving prevention. The plan covers such concerns as evacuation regulation, the reconstruction of public buildings, and the choice of fire-resistant materials. This account can serve as a foundational draft for fire safety programs.

References

Bryner, N., & Madrzykowski, D. (2005). Draft report of the technical investigation of the Station Nightclub fire. National Institute of Standards and Technology, 11(1), 1-22.

Harrington, D., Biffl, W., & Cioffi, W. (2005). The Station Nightclub fire. Journal of Burn Care & Research, 26(2), 141-143.

Kobes, M., Helsloot, I., Vries, B., & Post, J. (2010). Building safety and human behavior in fire: A literature review. Fire Safety Journal, 45(1), 1-11.

3M Company's Cloud Solution Implementation

It should be noted that a Cloud solution is a convenient and multifunctional tool that both companies and individuals can use in their daily activities. The concept allows all applications and data to be kept on a remote server on the Internet. Thus, this software is a service that makes it possible to use a convenient interface for remote access to selected resources via the Internet or over a local network. With the help of a Cloud solution, anybody can access computing resources, programs, data, and so on from any location (Qumer Gill, 2015). In this case, computers and other devices serve as terminals, and the load between them is distributed automatically. The purpose of this paper is to review the problem faced by 3M and the way a Cloud solution assisted the company in resolving it.

Problem

According to the case study, 3M is a large enterprise with multiple divisions, and a Cloud solution has helped it meet its business needs (3M speeds mobile-app development, 2014). To be more precise, the company was engaged in offering innovative technology solutions and wanted to track its assets in one of its segments but did not possess the tools to do so. 3M had concerns over security and management in this division and required its workforce to develop an app to meet this evolving requirement (3M speeds mobile-app development, 2014). The team had to develop the corresponding application in two days.

Solution

To complete the task, it was decided to use Microsoft Azure Mobile Services with Microsoft Visual Studio and the Xamarin development platform to rapidly create a tracking app that syncs with the cloud and runs on multiple mobile devices (3M speeds mobile-app development, 2014, para. 1). This enabled the company's employees to access files, programs, and data from a variety of different locations using not only regular computers but also any type of mobile device. The company had a team engaged in gathering information from a special app about the location of clients and the precise dates when they had installed the company's programs. It was believed that the Microsoft Azure platform would be the best choice to meet 3M's needs.
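
In outline, the sync pattern the case study describes comes down to a device capturing a GPS fix and pushing it to a cloud table over HTTPS. The sketch below is generic: the endpoint URL and payload fields are hypothetical, and the real 3M app used the Azure Mobile Services client SDK rather than raw REST calls.

```python
# Generic sketch of a device-to-cloud sync: POST one asset-tracking record
# to a (hypothetical) cloud table endpoint.
import requests
from datetime import datetime, timezone

ENDPOINT = "https://example-tracking.azurewebsites.net/tables/assets"  # hypothetical

record = {
    "assetId": "A-1042",  # hypothetical asset identifier
    "lat": 44.9778,
    "lon": -93.2650,
    "seenAt": datetime.now(timezone.utc).isoformat(),
}

resp = requests.post(ENDPOINT, json=record, timeout=10)
resp.raise_for_status()  # fail loudly if the cloud rejects the record
print("synced:", resp.status_code)
```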

The Microsoft tool turned out to be a scalable and secure solution that enabled the enterprise to gain the required control over the division (3M speeds mobile-app development, 2014). With its help, the company was able to create the necessary app, which could be operated on different kinds of devices. At present, 3M is able to satisfy customer needs better by offering growing businesses rapid and secure technological solutions. They can gather real-time information from any mobile device and store it conveniently without fearing that the data will leak. The tool uses GPS to provide customers with location-specific data (3M speeds mobile-app development, 2014). In addition, the solution helps companies stay connected with all their departments and divisions and ensures their employees are equipped to complete their corporate objectives.

Conclusion

Thus, it can be concluded that the Cloud solution Microsoft Azure allowed 3M to solve its problem within a very tight deadline. This open and flexible platform made it possible to create and implement an application that could be managed in Microsoft's global data center network (Qumer Gill, 2015). The solution allowed the company to expand the capabilities of its infrastructure thanks to the unlimited resources of the Cloud.

References

3M speeds mobile-app development and gains real-time insight with Cloud solution. (2014). Web.

Qumer Gill, A. (2015). Adaptive Cloud enterprise architecture. Singapore, Malaysia: World Scientific.

Snowboard Design Project: Engineering Materials

In snowboard manufacturing, it is essential to find materials that help the snowboard withstand external forces. The board will be built as a sandwich structure (Purdy et al., 2013). In order to calculate the key snowboard properties of bending and torsional stiffness, a new model has to be developed. Any complex layer structure can then be assessed for use within a snowboard, and the material architecture should also be well thought out.

A geometric unit-cell approach should be utilized to predict the global fiber volume fraction, typical fiber crimp, and areal mass (Clifton, Subic & Mouritz, 2010). The current situation in the snowboard market suggests possible design opportunities for high-performance, multipurpose snowboards. Overall, changing the bending and torsional stiffness distributions and the camber seems to be the vital method of improving the riding experience for the most popular riding styles (Clifton, 2011).
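
The core of such a stiffness model is a classical sandwich-beam calculation: sum each layer's E*I contribution about the laminate's modulus-weighted neutral axis. The sketch below illustrates the idea; the layer moduli, thicknesses, and board width are assumptions for illustration, not the cited authors' data.

```python
# Minimal sketch: bending stiffness of a layered sandwich via the
# parallel-axis theorem. Layers are listed bottom-up as (name, E, t).
layers = [                     # E in GPa, thickness t in mm (all assumed)
    ("LDPE base",    0.3, 1.2),
    ("fiberglass",  22.0, 0.6),
    ("poplar core",  9.0, 6.0),
    ("fiberglass",  22.0, 0.6),
    ("epoxy top",    3.5, 0.5),
]
width_mm = 250.0               # assumed board width at mid-section

# Locate the modulus-weighted neutral axis.
z = ea_sum = ea_z_sum = 0.0
for _, E, t in layers:
    ea_sum += E * t
    ea_z_sum += E * t * (z + t / 2)
    z += t
z_na = ea_z_sum / ea_sum       # neutral-axis height in mm

# Sum E*I of each layer about the neutral axis.
z = EI = 0.0                   # EI accumulates in GPa*mm^4 per mm of width
for _, E, t in layers:
    zc = z + t / 2             # layer centroid height
    EI += E * (t**3 / 12 + t * (zc - z_na) ** 2)
    z += t
EI *= width_mm                 # GPa*mm^4; dividing by 1e3 gives N*m^2

print(f"neutral axis at {z_na:.2f} mm, bending stiffness ~ {EI / 1e3:.0f} N*m^2")
```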

The top sheet of the snowboard should be made of epoxy resin. It is rather robust and vibration-damping, which is a central asset for a board. For layers two and four, fiberglass should be used, because it is more durable and cheaper than any comparable material. The core of the snowboard should be made of poplar, a dense wood that is also supple. Elasticity is significant because the core has to be bent and shaped in presses during the manufacturing process, and the board has to have some spring when weight is applied during riding. Poplar is low-priced and comparatively easy to obtain. It is frequently selected over other woods because of its ideal mixture of density and springiness.

The binding baseplate should be made of high-density polyethylene (HDPE), a long-chain polymer made of carbon and hydrogen atoms. HDPE is durable, hard, and moisture-resistant, and it is used along the edge to guard the internal layers of the board from the liquid and abrasion that the exterior encounters. The binding plate on a board with channel bindings must be water- and friction-resistant. The force the rider applies to the snowboard is concentrated where the bindings are fastened, so the board needs strengthening in this area.

The base should be made of low-density polyethylene (LDPE), which provides a water-resistant layer on the bottommost part of the board that prevents the inner wood and fiberglass sheets from absorbing moisture. LDPE has extra polymer chain branching, which prevents tight molecular packing. It is more elastic than HDPE, permitting the board to curve and move with the rider. The perfect edge material for a board should be hard enough to cut through snow and ice when sharpened and should resist corrosion. Stainless steel is ideal but costs more than other steels.

In conclusion, a snowboard should be tough and flexible at the same time. This is why the author recommends a strict sandwich structure that combines elastic materials with durable ones. A rigorous approach is needed when developing the design of a snowboard, as we have to find the right material for each of the snowboard's layers and determine how they actually fit together. It is important to carry out all the necessary performance tests and pay attention to the studies conducted by previous researchers.

References

Clifton, P. (2011). Investigation and customisation of snowboard performance characteristics for different riding styles (Doctoral dissertation, RMIT University).

Clifton, P., Subic, A., & Mouritz, A. (2010). Snowboard Stiffness Prediction Model for Any Composite Sandwich Construction. Procedia Engineering, 2(2), 3163-3169.

Purdy, D. J., Simner, D., Diskett, D., Duncan, A., Brooks, L. E. E., & Sheppard, P. (2013). A theoretical investigation into the handling characteristics of snowboards at low lateral acceleration. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 227(8), 1697-1714.

Neural, Symbolic and Connectionism Learning Models

Neural Network Models

In my opinion, neural network models have the potential to become an artificial intelligence that would think like a human. Certainly, this will not happen in the near future, as there are obstacles that need to be overcome in order to make such a computer. First, the amount of data needed for implicit learning, which is a characteristic of the human brain, is enormous. Teaching a machine to teach itself to read using the back-propagation rule and relying on fuzzy logic is a very simple task compared to more complex tasks that require taking into account a great number of variants based on associations, implications, and experience, which human beings handle easily.
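
For readers unfamiliar with the back-propagation rule mentioned above, the sketch below trains a tiny two-layer network on XOR with plain numpy. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration.

```python
# Minimal sketch of back-propagation: a 2-4-1 sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```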

For example, when people face a problem and do not know how to solve it, they think of the problem's different implications, use various associations, and rely on similar experience in order to find a solution, while computers rely only on their computing capabilities (He, Chen, & Yin, 2016). Second, and here I agree with the author, by the time such artificial intelligence is created, humans themselves will already be partially machines with the same capabilities, thereby remaining always ahead of machines. Thus, the creation of an AI, if it is possible at all, would be pointless. Simple robots, on the other hand, can be very useful for people (Lefrancois, 2011).

The most effective way to explain the concepts mentioned in this chapter is to apply the methods the author has used, namely providing vivid examples and situations that practically demonstrate the identified concepts.

Chan, Jaitly, Le, and Vinyals (2016) present a neural speech recognizer called Listen, Attend, and Spell (LAS), which is capable of transcribing speech utterances to letters without the separate components, such as HMMs or pronunciation models, that common speech recognizers have. The system consists of a listener and a speller. According to the tests, LAS reaches a WER of almost 15%, compared to roughly 8% achieved by state-of-the-art conventional recognizers.

This makes LAS remarkable as a single end-to-end model. Although its error rate still lags behind conventional systems, the machine learning industry is developing very fast now, and, in my opinion, this gap will soon close.

Symbolic and Connectionism Models

The symbolic model is based on the assumption that any concept can be represented symbolically and conforms to certain rules. Initially, this model was used in attempts to create computers that would surpass the human brain (He et al., 2016).

Connectionist models are based on fuzzy logic that resembles the workings of the human brain. They are less predictable but rather good at performing recognition tasks, such as reciting a poem with expression or recognizing objects in pictures (Lefrancois, 2011).

Certainly, I find connectionist models more interesting: even though their abilities are rather limited, it is fascinating to observe the human-like thinking process in machines based on these models. It is difficult to imagine that a machine can look at a picture and offer different interpretations of what it sees, explaining its point of view; yet, theoretically, using connectionist models, this is possible to achieve. In my opinion, such machines can soon be created to assist people in robot-like professions.

Townsend, Keedwell, and Galton (2014) analyze neural-symbolic networks whose function is to represent logic programs. The motivation for developing these networks lies in work on a biologically plausible network that represents knowledge in the same way as the human brain. Based on successful experiments in evolving genomes, the authors expect that the development of connections between neurons in neural-symbolic networks will also be successful. In my opinion, the fusion of biological and synthetic models will result in a much more efficient hybrid model.

References

Chan, W., Jaitly, N., Le, Q., & Vinyals, O. (2016). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4960-4964).

He, W., Chen, Y., & Yin, Z. (2016). Adaptive neural network control of an uncertain robot with full-state constraints. IEEE Transactions on Cybernetics, 46(3), 620-629.

Lefrancois, G. R. (2011). Theories of human learning: What the professor said. (6th ed.). Belmont, CA: Wadsworth Publishing.

Townsend, J., Keedwell, E., & Galton, A. (2014). Artificial development of biologically plausible neural-symbolic networks. Cognitive Computation, 6(1), 18-34.

Data and Information Difference Explanation

A web search for the phrase difference between data and information returned a variety of links from which to choose. Upon exploring some of them, the website www.differencebtw.com was chosen, as it specifically focuses on explaining how certain phenomena differ from one another. The Difference between data and information page contained images, text, a chart, examples, and a video explanation for a better understanding of the topic at hand. Such a presentation of material looked the most practical for getting to know the differences between data and information, two terms that people often confuse. The chart contained a basis of distinction section, which was divided into four categories: definition, example, significance, and etymology.

The web page also included different examples of data and information so that readers could see how the two terms are applied in real-life situations. The example given for data was survey data: different companies collect data by survey to know the opinion of people (Difference between data and information, 2015, para. 4), while the example for information was survey results: survey data is summarized into reports to present to the management of the company (Difference between data and information, 2015, para. 6). Another useful component of the web page was a short video explanation that discussed the two notions in detail. The most interesting thing I learned about the topic was the fact that most people see data and information as synonymous terms; however, the web page explained that data referred to the raw and unprocessed material while information was associated with material that had already been organized, analyzed, or managed in some other way.
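
The page's core distinction is easy to demonstrate in a few lines of code: the raw survey answers below are data, while the aggregated summary handed to management is information. The responses are made up for illustration.

```python
# Minimal sketch: raw survey responses (data) versus their summary (information).
from collections import Counter

raw_data = ["yes", "no", "yes", "yes", "no", "yes", "yes"]  # unprocessed answers

counts = Counter(raw_data)                     # the processing step
information = {
    "respondents": len(raw_data),
    "approval_rate": counts["yes"] / len(raw_data),
}
print(information)  # e.g. {'respondents': 7, 'approval_rate': 0.714...}
```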

Reference

Difference between data and information. (2015). Web.

Virtualization and Cloud Computing World

Introduction

Recently, many IT/IS companies have started to use SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). The three services mentioned above provide a virtualized space that replaces all the required data storage, desktop computing, networking, and many other elements necessary for engineers. The following paper will discuss some issues and considerations that have to be taken into account by IT/IS firms that use SaaS, PaaS, and IaaS on a regular basis.

Items to Consider When Using SaaS, PaaS, and IaaS

The first factor that must be considered by every firm using IaaS in its work is that it gives employees the ability to control almost all the virtual elements and mechanisms that function as clouds, which can be adjusted to almost any need in computer engineering. The second item is that PaaS providers can let their clients personally build and launch a wide range of applications with the use of particular tools and virtual programs (Kavis, 2014). For instance, Microsoft Azure, Google App Engine, and many other platforms meet people's needs in this industry very well.

Finally, the last idea to consider is that SaaS gives its users the ability to access various programs that work only in connection with the World Wide Web (specific programming websites in particular). It is necessary to mention that SaaS providers profit from the difference between the operational cost of infrastructures and the revenues generated from their customers (Kavis, 2014).

How SaaS, PaaS, IaaS Increase ROI and Reduce TCO

This paragraph will explain how services such as SaaS, PaaS, and IaaS are used to increase IT companies' ROI (return on investment) and reduce their TCO (total cost of ownership). To begin with, companies using SaaS, PaaS, and IaaS are not required to pay as much for their software as they would for the finished products of traditional software developers. Instead, they cover only annual or monthly payments, which are significantly cheaper than outright purchase. This is one way IT firms can save their financial means (Kavis, 2014). Also, the more expensive purchased products must be installed properly and maintained according to particular rules, which requires additional costs; such actions are not necessary for virtual applications that run with the help of SaaS, PaaS, and IaaS.
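
A toy calculation along the lines of this paragraph makes the TCO comparison concrete. Every figure below is an assumption for illustration, not data from the cited source.

```python
# Minimal sketch: three-year TCO of an on-premises licence versus a SaaS
# subscription, using invented figures.
YEARS = 3

on_prem = {
    "licence": 50_000,               # one-off purchase
    "installation": 8_000,           # one-off setup
    "maintenance_per_year": 10_000,  # staff, patching, hardware
}
saas_monthly = 1_500                 # subscription fee

tco_on_prem = (on_prem["licence"] + on_prem["installation"]
               + on_prem["maintenance_per_year"] * YEARS)
tco_saas = saas_monthly * 12 * YEARS

print(f"on-premises TCO over {YEARS} years: ${tco_on_prem:,}")
print(f"SaaS TCO over {YEARS} years: ${tco_saas:,}")
```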

IT/IS firms' ROI is increased by the services mentioned above, as they provide extra performance and additional capabilities to their users. Also, they offer many ways for firms to manage their growth and the increasing demands of customers. In turn, all the necessary business calculations and strategies become less complicated when built on SaaS, PaaS, and IaaS (Kavis, 2014). Therefore, the time required for similar operations with regular software decreases significantly, which brings additional profits to IT/IS firms.

Impact on the IT Support Personnel

This paragraph will discuss the impact on IT support personnel that can emerge when their employers use SaaS, PaaS, and IaaS. To begin with, the number of employees in the support department might be reduced, as the services' simple management options and mechanisms are much easier to operate than those of regular programs (Lal & Bharadwaj, 2016). Moreover, most management can easily be done by users, since almost all the necessary instructions are instantly accessible to them. Therefore, the work of IT support personnel might not be as critical for organizations as it was before. Instead, these professionals can focus on their firms' various business requirements.

Considerations for IT Companies Management

The first consideration that managers of IT/IS organizations must take into account is that various cloud technologies and programs have different options, which might make the working process faster if computer engineers have enough experience. Another item to consider is the increased requirement for security services, which are important for preventing all possible vulnerabilities (Kavis, 2014). Finally, managers of IT organizations must recognize that people who work with the software mentioned above need specific knowledge and must be prepared for possible difficulties in their working processes. Therefore, all employees should be trained and given relevant experience in their professional activities.

Security Issues

The first issue that computer engineers must consider regarding the security of their information in clouds is data breaches, which can occur even in the most protected environments. It appears that the majority of developers face this problem on a daily basis and report it to their superiors. The second issue is the possibility of account hijacking (Shahid & Sharif, 2015). As all the necessary documents and files are kept online, some advanced users can access them simply by manipulating passwords or webpage scripts. The last issue is that some employees of IT/IS firms can use their positions to access the personal data of clients or other people for their own ends.

Conclusion

The use of SaaS, PaaS, and IaaS has more advantages than regular software, as it does not require much support or financial investment. Cloud computing is a new approach to engineering that makes tasks easier for inexperienced workers. Nevertheless, the system presents some security issues that might have an adverse impact on the working processes of IT/IS organizations due to leaks of important information.

References

Kavis, M. (2014). Architecting the cloud: Design decisions for cloud computing service models (SaaS, PaaS, and IaaS). Hoboken, NJ: John Wiley & Sons.

Lal, P., & Bharadwaj, S. S. (2016). Understanding the impact of cloud-based services adoption on organizational flexibility. Journal of Enterprise Information Management, 29(4), 566-588. Web.

Shahid, M. A., & Sharif, M. (2015). Cloud computing security models, architectures, issues and challenges: A survey. The Smart Computing Review, 35(1), 602-616. Web.

Building Thinking Intelligent Machines

Ever since World Chess Champion Garry Kasparov lost to the Deep Blue computer in 1997, the possibility of creating a self-learning Artificial Intelligence (AI) has effectively ceased to be associated solely with the sci-fi genre in literature and cinematography; instead, it has become a subject of scientific futurology. What this means is that it is only a matter of time before genuinely thinking intelligent machines are built on an industrial scale.

Nevertheless, as practice shows, there are still many people whose intellectual inflexibility prevents them from recognizing the full validity of this suggestion, and this inflexibility manifests itself in their strongly negative attitude toward the very idea that such machines could be built at all. For example, according to Dreyfus (1992), the reason a machine endowed with AI cannot engage in genuine thinking is that, due to its essentially mechanical nature, such a machine would not be able to interact actively with surrounding realities, which in turn would prevent it from gaining an experiential, common-sense understanding of the dialectical relationship between causes and effects.

Dreyfus's suggestion is based upon his belief that human cognition is non-computational in nature. According to him, it is by being exposed to the situational context of objects and events that we become aware of their qualitative essence: "Our global familiarity... enables us to respond to what is relevant and ignore what is irrelevant without planning based on purpose-free representations of context-free facts" (p. xxix). Nevertheless, recent discoveries in the fields of psychology, information technology, neuro-medicine, and genetics, along with the application of what the author refers to as commonsense logic, render his line of argumentation conceptually fallacious.

The reason for this is simple: the analysis of human reasoning's metaphysical and structural subtleties reveals the undeniable fact that the manner in which people address existential challenges is essentially similar to the manner in which neuro-computers (perceptrons) address a variety of cognitive tasks. In order to substantiate the validity of this statement, we will have to explore the issue at length.

Let us say we have the function Y = (8X + 10)/9. What would Y be when X equals 5? To come up with the answer, we first multiply 8 by 5, then add 10, and then divide the result by 9. The sequence of steps by which we solved this function is called an algorithm. And the utilization of mathematical algorithms is the fundamental principle upon which the functioning of a Turing machine is based: "Of course, a Turing machine cannot boil an egg, or unlock a door. But the algorithm... is a description of how to boil an egg. And these descriptions can be coded into a Turing machine, given the right notation" (Crane, 2003, p. 100). To be compatible with the principle of a Turing machine's functioning, every task must be algorithmically formalized.
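
The essay's own example can be written out as an explicit algorithm: each step is a discrete, formalized operation of exactly the kind a Turing machine can execute.

```python
# The function Y = (8X + 10)/9 expressed as a step-by-step algorithm.
def f(x):
    step1 = 8 * x        # multiply by 8
    step2 = step1 + 10   # add 10
    return step2 / 9     # divide by 9

print(f(5))  # 50/9, approximately 5.56
```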

For example, in order for this machine to be able to draw the silhouette of a person's head, it would have to be provided with formulas for executing the process's consecutive phases (the drawing of a straight nose would be described by a linear function, the drawing of a rounded forehead by a hyperbola, etc.). And if an error occurs while the machine executes even one formalized task, the eventual outcome will be wrong: in a Turing machine, even the slightest error is fatal.

Nevertheless, there is another way to solve the function mentioned earlier: by constructing a graph within the two-dimensional system of coordinates formed by the X and Y axes. One might wonder what constitutes the fundamental difference between these two methods of solving the same function; after all, both methods involve the application of abstract math. But let us imagine a situation in which we have the silhouette's graph but no formula to describe it. It would be a substantial challenge to work out a mathematical function for the graph's every situational variable, which is what would allow a Turing machine to algorithmically process the drawing of the silhouette.

Moreover, upon encountering the absence of even utterly insignificant algorithmic data about the process of drawing the silhouette, a Turing machine would come to a stall. And yet just about anybody would be able to reconstruct the missing part of the silhouette with ease if, for example, some maliciously minded individual erased it. The reason for this is simple: unlike a Turing machine, people are endowed with associative memory, which, according to Dreyfus, allows them to gain propositional knowledge.
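
The graph-reconstruction intuition described here has a direct computational analogue: fit a smooth curve to the surviving points and read off a plausible value for the erased region. The silhouette data below is invented for illustration.

```python
# Minimal sketch: reconstructing an erased segment of a curve by fitting a
# polynomial to the surviving points.
import numpy as np

x_known = np.array([0, 1, 2, 3, 7, 8, 9, 10], dtype=float)  # points 4-6 erased
y_known = np.sin(x_known / 2.0)                             # surviving outline

coeffs = np.polyfit(x_known, y_known, deg=4)  # fit a degree-4 polynomial
x_missing = np.array([4.0, 5.0, 6.0])
y_guess = np.polyval(coeffs, x_missing)       # plausible reconstruction

print(np.round(y_guess, 2))                   # compare with the true values:
print(np.round(np.sin(x_missing / 2.0), 2))
```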

In turn, this brings us to the question: is people's associative memory (propositional knowledge) computational? Yes, just as the procedure of constructing a graph on the X and Y axes is. For example, the process of a child's upbringing is nothing but the process of the child's parents prompting him or her to memorize a number of different behavioral modes (graphs), meant to be deployed in accordance with the qualitative essence of the existential challenges such a child is expected to face later in life. After having memorized these behavioral stereotypes, the child is able to choose a proper behavioral strategy when dealing with formally unfamiliar but qualitatively similar situations to which the stereotypes apply. Most importantly, when addressing life's challenges while continuing to observe the memorized behavioral stereotypes, the child also gains propositional knowledge.

The following is how Changeux (1994, p. 193) outlines the functional subtleties of people's artistic cognition in his article: "Experimental psychology teaches us... that the memorized configuration is integrated into a highly organized, hierarchical ensemble, a taxonomic chart, a system of classification already in existence." What this means is that, in order for an individual to deal effectively with a particular situation, he or she does not have to possess actual experience of having dealt with the same situation in the past. The realization of this fact represents the conceptual foundation upon which theorizing about the principle of AI's functioning is based.

As one of the most prominent theoreticians of AI, Marshall Yovits (1960, p. viii), prophesied: "It appears that certain types of problems, mostly those involving inherently non-numerical types of information, can be solved efficiently only with the use of machines exhibiting a high degree of learning or self-organizing capability. Examples of problems of this type include automatic print reading, speech recognition, pattern recognition, automatic language translation, information retrieval, and control of large and complex systems." Apparently, Yovits realized a simple fact: in order for computational systems to attain the full extent of operational efficiency, they should not be programmed (as is the case with a Turing machine) but allowed to engage in self-learning.

The validity of Yovits's suggestion was illustrated during the course of the sixties, when Frank Rosenblatt built the first perceptron, which was able to recognize letters in typed text. It therefore comes as no particular surprise that Rosenblatt's invention is now commonly referred to as AI's starting point:

"Rosenblatt's schemes quickly took root, and soon there were perhaps as many as a hundred groups, large and small, experimenting with the model either as a learning machine or in the guise of adaptive or self-organizing networks or automatic control systems" (Minsky & Papert, 1986, p. 19). As of today, properly functioning neuro-computers are no longer mentioned as an element of futuristic living but as an integral part of today's highly technological, post-industrial realities. And it is precisely the fact that the operating subtleties of neuro-computers are attuned to the workings of the biological brain that explains the phenomenon of their exponentially increasing popularity.
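
The learning rule behind Rosenblatt's machine fits in a few lines. The sketch below trains a perceptron to separate two invented 3x3 "letter" bitmaps with the classic error-correction rule; it is an illustration in the spirit of the letter recognizer, not a reconstruction of the original hardware.

```python
# Minimal Rosenblatt-style perceptron: learn to tell a vertical bar ("I")
# from a horizontal bar ("-"), each given as a flattened 3x3 bitmap.
import numpy as np

I_letter = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0])  # invented bitmap
dash =     np.array([0, 0, 0, 1, 1, 1, 0, 0, 0])  # invented bitmap
X = np.array([I_letter, dash])
y = np.array([1, -1])                  # +1 = "I", -1 = "-"

w = np.zeros(9); b = 0.0
for epoch in range(10):                # the perceptron learning rule
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:     # sample misclassified (or on boundary)
            w += yi * xi               # nudge weights toward the correct side
            b += yi

print(np.sign(X @ w + b))              # [ 1. -1.] once training has converged
```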

Nowadays, these computers are able not only to detect consistency patterns in processed data but also to form their own associative memory. Just as is the case with people, neuro-computers organize semiotic signifiers within semantically structured memory clusters, which in turn allows such a computer to generate associations in the course of performing a particular computational task. It is needless to mention, of course, that this represents another important step toward creating a genuinely thinking machine endowed with AI.

The context of what has been said earlier helps us to define the argumentative fallaciousness on the part of another ardent opponent of the idea that AI can engage in genuine (human) thinking: John Searle. According to this author's line of reasoning, sublimated in his famous Chinese room argument, computer programs can never possess a genuine understanding of the implications of the data they process, which means that the building of a genuinely thinking intelligent machine is impossible. Searle (1991, p. 47) defines the quintessence of his argument with perfect clarity: "I believe that it is a profound mistake to try to describe and explain mental phenomena without reference to consciousness."

"The attribution of any intentional phenomena to a system, whether computational or otherwise, is dependent on a prior acceptance of our ordinary notion of the mind, the conscious phenomenological mind." Apparently, it was Searle's clearly defined emotional discomfort with the idea that one can be fully capable of engaging in rationalistic reasoning while remaining unconscious of a number of that reasoning's implications that prompted him to come up with the statement quoted earlier.

And yet even a brief analysis of the qualitative essence of the mind's neurological workings refutes the soundness of Searle's assumption. After all, history features many examples of the mind proving its ability to address cognitive tasks effectively without actual consciousness playing any role in the process. The most notable is Dmitri Mendeleev's discovery of the Periodic Table of the Chemical Elements, which took place while the famous Russian chemist was sleeping. According to Atkins (1995, p. 86): "It is said that during a brief nap in the course of writing a textbook of chemistry, for which he (Mendeleev) was struggling with the problem of the order in which to introduce the elements, he had a dream."

"When he awoke, he set out his chart, in virtually its final form. The date was February 17, 1869." Apparently, it is not simply an accident that, during sleep, the brain consumes 10% more energy than when an individual is awake. The reason for this is simple: during the nighttime, the brain processes the information accumulated in the course of its daytime functioning.

Therefore, it is methodologically inappropriate to refer to AI's lack of consciousness, in the traditional sense of the word, as proof of its unintelligence. Quite the contrary: given that the workings of people's consciousness are biologically rather than cognitively defined, the lack of human consciousness on the part of computers should be thought of as an indication of their cognitive impartiality, which has always been considered a psychological trait of the world's most prominent intellectuals. And, as history indicates, these intellectuals have always been considered the best part of humanity.

This brings us to another objection, raised by moralistically minded individuals, to the idea that genuinely thinking intelligent machines can be built: namely, the belief that computers will never be able to experience the whole set of human emotions. As Dewhurst and Gwinnett (1990, p. 695) put it: "Given that human intelligence is so emotionally complex that it cannot be fully replicated, all that AI research can actually achieve is to model particular aspects of human intelligence in relation to specific domains." But what are emotions, both positive and negative: love, hate, fear, anger, joy, sadness, and so on? Emotions are nothing but agents for ensuring our biological well-being as representatives of the species Homo sapiens. To put it allegorically, they are the sticks and carrots that induce environmentally appropriate behavior not only in people but in animals as well.

An individual, as an energetically open system, enjoys a certain freedom in decision-making, and emotions are there to make sure that these decisions do not undermine the individual's biological survivability. When, for example, we make love, this activity has the objective of ensuring the spread of our genes, and for this we are rewarded with a whole range of positive, pleasure-inducing emotions. Alternatively, when we are injured, we experience pain, simply because pain is nothing but a warning sign that something is wrong with our body's physical state.

Even though a long time has passed since our distant ancestors climbed down from the trees in search of additional sources of food, thereby creating the objective preconditions for the eventual emergence of the species Homo sapiens, the biochemical workings of our bodies have never ceased being essentially the same as those of apes. Just as with all primates, people strive to love and to be loved, to attain social prominence, to enjoy good-tasting food, and to spend as much time as possible relaxing and as little time as possible working; all of these emotion-induced activities are of a clearly animalistic nature. This is exactly why it is possible to define the essence of an emotion experienced by an individual at a particular point in time by assessing that emotion's physiological manifestations.

As DeLancey (2002, p. 10) rightly noted: "Affects, especially some emotions, have noticeable and measurable physiological correlates... For emotions, many more measurable physiological changes occur. Depending upon the intensity of the emotion, these can include changes in autonomic functions, such as heart rate, blood pressure, respiration, sweating, trembling, and other features; hormonal changes; changes in body temperature; and of course changes in neural function." Therefore, under no circumstances should human emotions be referred to as the mark of people's higher humanity. Instead, they should be seen for what they really are: an indication of the fact that, while dealing with life's challenges, people never cease to remain utterly constrained in the biological sense of the word.

What has been said earlier relates directly to the subject matter discussed in this brief. The genuinely thinking intelligent machine that we propose to build will not be able to experience human feelings, of course. This, however, should not be thought of as proof of its cognitive inferiority, simply because, unlike humans, our machine will utilize not a biochemical but an electronic mechanism for interacting with surrounding realities. And yet it is precisely this mechanism that appears to be perfectly attuned to the actual workings of the human brain.

After all, the mind does not operate with digits and formulas while assessing the emanations of the surrounding environment. In a similar manner, a computer does not operate with digits and formulas per se, but with electronic signals. The only difference between the human brain and computer-based AI is that, whereas the human brain generates electricity from within, a computerized brain requires an outside source of electricity, pure and simple.

Yet the human brain's energetic portability comes at the expense of severely undermined computational power. After all, people spend a good half of their lives taking care of their bodies' biological needs. Our genuinely thinking intelligent machine, however, will not need to engage in such clearly physiological pursuits just to ensure its continued existence, which in turn will not only increase its computational power but also dramatically increase the validity of its computational insights.

Even today, the cognitive outdatedness of the human brain appears to be a well-established fact. The brain contains some 10 billion neurons, which construct one's memory and function as the logical elements of perception. Due to the chemical nature of these elements' functioning, the brain's computational performance cannot be called utterly effective. For example, within the brain, electric impulses are transmitted at speeds on the order of 100 meters per second. This, of course, cannot even be compared with the speed at which electronic signals travel within a microchip: up to 300,000 km per second.

Therefore, it appears to be a matter of foremost importance to adopt a proper perspective on our machine's metaphysical significance. We do not perceive such a machine as simply a robot endowed with AI that can be utilized as a life-enhancing asset for people, but as something that might very well bring about the next evolutionary jump: from the biological intelligence represented by the species Homo sapiens to a pure trans-human intelligence that will not be biologically constrained.

Even today, there are many indications that the trans-human revolution is just around the corner. For example, within another decade or two, it may become practically possible to install microchips in people's brains that would allow them to instantly learn new languages, to upgrade their memory, and even to go as far as saving their consciousness (individuality) onto computer hard drives. Therefore, our willingness to apply extra effort to creating a genuinely thinking intelligent machine should be thought of not simply as proof of our intellectual open-mindedness, but as an indication that our existential status is nothing less than that of demi-gods, because by establishing the objective preconditions for such a machine's creation, we intentionally facilitate the course of evolution.

As Kurzweil (2005, p. 476) pointed out: "Evolution moves toward greater complexity, greater elegance, greater knowledge, greater intelligence... Evolution does not achieve an infinite level, but as it explodes exponentially, it certainly moves in that direction; therefore, the freeing of our thinking from the severe limitations of its biological form may be regarded as an essentially spiritual undertaking." Our intention to create a genuinely thinking intelligent machine should therefore indeed be regarded as an essentially spiritual enterprise, even though it has nothing to do with notions of conventional spirituality, however ironic that might sound.

Before we conclude this brief, let us restate its foremost theses:

  1. There are no good reasons to believe that, due to the non-biochemical principle of its functioning, AI's perception of surrounding realities will be cognitively deficient. On the contrary: because it is freed from a number of biological constraints, AI will be able to attain a qualitatively new level of understanding of these realities.
  2. The suggestion that there is a fundamental difference between the cognitive functioning of the human mind and that of artificial neural networks is conceptually fallacious, simply because in both cases it is specifically the flow of electrons that serves as the informational medium. Just as with people, neuro-computers have proven their ability to engage in associative reasoning, and such reasoning has always been considered an attribute of higher intelligence.
  3. In order for the proposed machine to engage in genuine thinking, it does not have to be conscious of the process in the conventional sense of the word. After all, most people are rarely conscious of what prompts them to cross the street or to wait until there are no oncoming cars nearby before doing so; their intuition simply allows them to gauge their chances of not being hit by a car while crossing. And people's intuition is nothing but their ability to unconsciously reconstruct the missing parts of a graph without having to apply mathematical functions, just as neuro-computers are able to do. It is precisely this ability that accounts for people's intelligence per se, and not their tendency to assess the essence of surrounding reality through the lens of their emotions, as Dreyfus and Searle would have us believe.
  4. The building of a genuinely thinking intelligent machine may very well trigger the initial phase of the trans-human revolution. Given the aspects of today's living that derive from the growing inconsistency between people's ability to push scientific progress forward, on the one hand, and their biological imperfection, on the other, the beneficial effects of such a revolution can hardly be overestimated.

References

Atkins, PW 1995, The periodic kingdom: A journey into the land of the chemical elements, Basic Books, New York.

Changeux, JP 1994, Art and neuroscience, Leonardo, vol. 27, no. 3, pp. 189- 201.

Crane, T 2003 [1995], The mechanical mind. A philosophical introduction to minds, machines and mental representation. 2nd ed. Routledge, New York.

DeLancey, C 2002, Passionate engines: What emotions reveal about mind and artificial intelligence. New York, Oxford University Press.

Dewhurst, FW & Gwinnett, EA 1990, Artificial intelligence and decision analysis, The Journal of the Operational Research Society, vol. 41, no. 8: pp. 693-701.

Dreyfus, HL 1992 [1972], What computers still can't do: A critique of artificial reason, The MIT Press, Cambridge.

Kurzweil, R 2005, The singularity is near: When humans transcend biology. Viking, New York.

Minsky, ML & Papert, SA 1986, Perceptrons: An introduction to computational geometry, The MIT Press, Cambridge.

Searle, JR 1991, Consciousness, unconsciousness and intentionality, Philosophical Issues, vol. 1, pp. 45-66.

Yovits, M & Cameron, S (eds) 1960, Self-organizing systems: Proceedings of an interdisciplinary conference, 5 and 6 May, 1959, Pergamon Press, London.