Engineering: Sewing Machine Timeline

1700

In 1755, Charles Weisenthal, a German immigrant living in England, patented a needle to be used for mechanical sewing (The Origins of the Sewing Machine 1). There is no evidence that anyone tried to use the invention, and no one knows whether it could actually be used for sewing.

In 1790, Thomas Saint, an Englishman, patented a sewing machine that had an awl to make a hole in leather and allow a needle to pass through (The Origins of the Sewing Machine 1). Those who later tried to build the machine from the patent failed, and no one knows whether Thomas Saint ever constructed or used it.

1800

In 1818, in the USA, John Knowles and John Doge invented a machine that made good stitches, but it could only work on short pieces of material (The Origins of the Sewing Machine 2). The machine was complicated and needed a lot of time for resetting. Because the inventors tried to copy hand movements, the design remained overly complex.

In 1851, in the USA, Isaac Merritt Singer patented a lockstitch sewing machine. It was very different from the machines of its time, as it had a straight eye-pointed needle and a transverse shuttle (History par. 3). The machine also had a table on which the cloth was placed and a wheel that moved the needle. It was the first machine of its kind and became the basis for all machines created later.

1900

In 1921, Singer developed a new Portable Electric sewing machine (History n.p.). The machine had an electric motor, was very convenient to use at home, and was bought by many women. It was also powerful enough to help produce different types of clothes.

In 1953, the company Toyota introduced its first home-use zigzag sewing machine, the TZ-3 (History of Toyota Sewing Machines n.p.). It was easy to use and attractive because the founder of the company, Mr. Kiichiro Toyoda, believed that machines for home use must be functional yet beautiful (History of Toyota Sewing Machines n.p.). People could make more types of clothes at home and did not have to buy as many.

2000

In 2000, Toyota introduced a new home-use overlock machine. It was convenient and easy to use, helped create many types of clothes, and, being electric, offered a wide range of options.

In 2001, Singer introduced its most advanced home sewing and embroidery machine (History n.p.). It had an automated re-threading system, offered professional-level results, and was convenient to use.

Conclusion

Sewing machines have been developing for more than three centuries, and they are still changing. At first, inventors tried to copy hand movements when they created their sewing machines, but in the 19th century there was a revolution, and a sewing machine similar to the modern one was created.

At first, these machines were somewhat complicated and not very convenient to use, but they gained more and more options. The 20th century was the time when further options were introduced (machines became electric), and the machines became very effective. Now, people have electric sewing machines that can help create different types of clothes: millions of items can be produced at factories, or an individual can buy a machine and sew at home.

Works Cited

History. Web. 

History of Toyota Sewing Machines. 2011. Web. 

The Origins of the Sewing Machine. Web. 

Mobile Imprisonment: Threat or Opportunity?

Technology saves people a lot of time and enables them to focus on self-development, relationships, and social contribution rather than on chores. Technology has also made the planet feel small, as people can communicate with each other irrespective of distance or time. At the same time, researchers warn that people are becoming addicted to technology (Salehan and Negahban 2013, p. 2636). The mobile phone is one example of this impact of technology. The artwork in question reveals the essence of people's concerns about technology and its impact on society.

The artwork under discussion can be entitled Mobile Imprisonment. Today, many people panic if they leave their mobile at home; people are literally attached to their devices. Salehan and Negahban (2013, p. 2636) reveal this dependency and note that it is likely to become worse in the future. Of course, mobile phones (as well as other devices) help people communicate. There are various useful applications that can be employed in numerous settings. For instance, there are mobile phones with stethoscopes, and there is a prototype phone oximeter (Sheraton, Wilkes & Hall 2012, p. 945). Apart from these almost life-saving applications, there are many more that make people's lives easier. Cameras, calculators, notepads and, of course, social networks help people (especially the young) do various things within seconds.

Social networks (which have become available on mobiles) have had a great impact on society. Technology has shaped the way people communicate. However, it is necessary to add that technology and culture are interconnected: technology affects culture, and culture, in its turn, has an impact on technology. Minsky (1997, p. 1121) stresses that science-technology and culture are two things that help people make sense of the world. At the same time, Meikle and Young (2012, p. 175) note that technology is not shaping cultures but is a reflection of issues existing in society.

However, both approaches are correct though somewhat limited. Culture and technology are closely connected and interrelated. People develop their cultures throughout decades and centuries, and beliefs and societal norms are formed. Technology is aimed at solving issues existing in society. Thus, people's desire to work together and communicate led to the creation of the telegraph, the telephone, the mobile phone, and the Internet. At the same time, technological advances also shape societal norms. People were once unable to communicate across distances, and now they have such an opportunity. However, extensive communication is feeding people's need to communicate and remain in touch; otherwise, they feel lonely and lost.

Art can reveal the interrelatedness of technology and culture. The artwork in question depicts a mobile phone that looks like a prison inside: there are convicts and their cells, barbed wire, and a sophisticated security system. The image conveys the idea of a highly technological and sophisticated prison that people have built for themselves. Importantly, people understand that they are inside a prison and that their freedom is highly restricted; however, like convicts, they are unable to get out.

People have chosen comfort over freedom. They keep choosing various applications and become addicted to them. For instance, young people find it easier to communicate via social networks than face-to-face. It is so easy to purchase an application, and each new app seems to make a person's life easier and more comfortable. Individuals wait for new apps instead of trying to solve daily issues in old ways. People are becoming dependent on technology. A variety of experiments have shown that a contemporary person needs devices and can hardly live without them; it is easy to check this argument by spending a day without a mobile phone.

Some may say that this dependency is disastrous. Nonetheless, the 21st century's dependency on mobile phones (as a symbol of technology) can become a warning and make people wake up from the sweet dream. Art can contribute greatly to this process. Such artworks as Mobile Imprisonment can help people understand that technology is a necessary and indispensable part of humanity's development. However, technological development has to be controlled by people, as it may create a society of convicts or slaves of technology. Humans have to draw the line between evolution and degradation, between technological advances and the replacement of human activity by the operations of various devices.

In conclusion, it is possible to note that culture and technology are two facets of human society's development. They are interconnected and interrelated. Technological advances help people evolve, but they can also be hazardous. Art can help people notice the line they are about to cross. Reflecting on the trends existing in society, people can stop and think about what they need and what is superfluous. Mobile imprisonment can be the first sign of people becoming too dependent on technology. At the same time, it is also a reminder that people are responsible for their development and have to make the right choices when it comes to technology.

Reference List

Meikle, G & Young, S 2012, Media convergence: networked digital media in everyday life, Palgrave Macmillan, Basingstoke.

Minsky, M 1997, Technology and culture, Social Research, vol. 64, no. 3, pp. 1119-1126.

Salehan, M & Negahban, A 2013, Social networking on smartphones: when mobile phones become addictive, Computers in Human Behavior, vol. 29, no. 1, pp. 2632-2639.

Sheraton, TE, Wilkes, AR & Hall, JE 2012, Mobile phones and the developing world, Anaesthesia, vol. 67, pp. 945-950.

Defining the Concept of Net Neutrality

Net neutrality underscores the need to introduce equality in the way different Internet stakeholders treat data, with governments and Internet service providers being the key players. The past few decades have seen an unprecedented technological revolution. Net neutrality has enabled massive connectivity, coupled with the introduction of novel ways of doing things. The Internet's reach and pervasive proliferation have fostered social engagement through networking, turning it into a globally accepted working tool. Net neutrality creates a free and fair platform for Internet users, which include individuals, companies, and organizations. A debate has been raging on whether something should be done to control those who access the Internet.

In this debate, some individuals propose the implementation of net neutrality regulations, while others oppose such propositions. If the net neutrality regulations are implemented, they will infringe on Internet competition, innovation, freedoms, and free speech. The net neutrality regulations should not be implemented; on the contrary, net neutrality should be embraced in its entirety.

Some individuals have opposed net neutrality and are ready to ensure that the proposed regulations are put into effect. The proposed policies will impose dramatic new restrictions on broadband Internet access service providers (Albenesius par. 1). The proponents of the net neutrality regulations believe that Internet service providers will tamper with information privacy. They are also suspicious of the ISPs' transparency, as a provider may view subscribers' content and use the information for financial purposes. However, the Internet is all about openness and accessibility; "if it weren't open, it wouldn't be the internet" (Riley 20; Cleland).

When the Internet is open, users can access information directly and indiscriminately. Without net neutrality, businesses or services from various companies may gain an unfair competitive advantage over others. ISPs may block smaller companies and increase Internet access for big companies that have paid large amounts for the service. Riley posits, "Without net neutrality, AT&T, Comcast, and Verizon would be free to favor Hulu, but block Netflix" (20). Freedom on the Internet can only be realized if people have full access to the required information.

On the other hand, the opponents of net neutrality do not want such equality, so that they can deliver services in a hierarchy. They want to provide services in tiers, as they believe that users should pay depending on the services offered. The opponents are set against offering a common platform to content users. Opting to provide services based on how much a content user can pay abridges the genuine competition envisioned by net neutrality principles, where all companies have a level field on which to display their content on the Internet (Riley 20). For example, assume that a small company has been in business for five months while another, big company has been in operation for the last fifty years. If these companies are offered services according to how much they can pay, then the small company will unquestionably stand no chance. With genuine competition gone, on the consumer end, "paying supra-competitive prices for the Internet is, on its face, more of antitrust harm than a net neutrality violation" (Reicher 738). Therefore, Internet users are likely to get Internet services at needlessly high costs.

Also, the antagonists of net neutrality claim that ISPs should censor what is uploaded or downloaded, as well as users' access to content. ISPs could block content criticizing the companies and politicians they favor. Furthermore, ISPs could block those companies that seem to pose a competitive threat to them. Without net neutrality, ISPs can monitor the use of the Internet and change upload or download speeds and rates depending on what the user is accessing. If ISPs are given the mandate of censoring the contents of the Internet, they may collude with some autocratic countries to shut down the voices of democracy. In such a case, Internet users will be "left to their own devices; the broadband gatekeeper will chisel away at our right to engage in open internet communication" (Riley 20). In some countries across the world, people are not allowed to gather and air their views on economic, social, and cultural matters; however, they can share their views and grievances through the Internet.

Websites like Facebook connect numerous people from different countries, classes, and races. Such social media sites are tools of communication that enable people to exchange information and ideas. With the net neutrality regulations in place, such social media sites could block some users on directives from governments in the name of countering hate speech in society. Activists have used social media sites to champion human rights all over the world. They have been able to mobilize people through social sites to stage demonstrations, which catch the authorities unawares.

People have also been able to share content over social media sites, which has significantly changed the contemporary world. For instance, a homicide incident caught on camera can be shared for people across the world to watch, voice their condemnation of the killing, and push the government to take appropriate action. Sharing videos and music on sites such as YouTube enables cultural exchange among the different races of the world, thus creating a bond and a sense of peace globally. If the net neutrality regulation is passed, content providers such as YouTube may be limited in terms of the content that the public can upload or download. Net neutrality should remain in place to protect the right of free speech.

While the opponents of net neutrality are rallying behind the legislation of the proposed regulations, they have forgotten that the open Internet has led to monumental innovations around the world. The Internet provides a vast research environment where users can identify problems and use the same open Internet to find solutions, for example, by developing applications that make the Internet more efficient. Free access to, and posting of, information over the Internet creates a sense of self-esteem, which allows introverted geeks to come up with novel ideas and life-changing innovations. Also, through the open Internet, people discover inherent weaknesses in already developed solutions and thus modify and improve them.

In conclusion, the government should promote net neutrality, as opposed to supporting the net neutrality regulations, which would incontrovertibly violate human rights (Albenesius par. 1). Internet democracy in the 21st century should not be interrupted. On the contrary, it should be guarded at all costs, thus allowing people to exchange ideas and make the world a better place for their coexistence.

Works Cited

Albenesius, Chloe. "Verizon: FCC Net Neutrality Rules Violate First Amendment." PCMag 2012: 20, 23. Print.

Cleland, Scott. "No need for net neutrality regulation." Network World, 2011. Web.

Reicher, Alexander. "Redefining net neutrality after Comcast v. FCC." Berkeley Technology Law Journal 26.733 (2011): 733-763. Print.

Riley, Chris. "Innovation begins with an open Internet." PCMag 2012: 20, 23. Print.

Concurrent Engineering and Its Advantages

The modern business world is highly competitive, and the universal truth that time is money remains unchanged. Concurrent engineering can make projects more cost-effective and companies more competitive. The sequential approach has its advantages: it is easier to control the development of new products, since each stage is over before the next phase starts, and the degree of uncertainty is limited as well. However, it takes far more time to develop a new product.

Concurrent engineering involves close cooperation among many departments through multidisciplinary teams. Project stages usually start before the previous ones are over (Gray & Larson, 2014), which reduces the overall project time. However, this approach requires a considerable degree of control over operations and over the quality of communication. Apple is one of the companies that have employed the approach to the fullest and benefited from it (De Wit & Meyer, 2010). Toyota is another well-known company that has successfully utilized the concurrent method: it managed to develop a product development system where employees practice the approach "by reasoning, developing, and communicating about sets of solutions in parallel and relatively independently" (Al-Ashaab, Howell, Usowicz, Anta & Gorka, 2009, p. 465).

The success of many companies, including leaders in their spheres, that have employed the concurrent method shows that the approach is effective. The time invested in the development of new products is reduced and, hence, the company develops a competitive advantage. However, it is essential to make sure that the company has well-established communication channels and that the teams work efficiently. Otherwise, various errors and faults may appear at different stages of product development or use.

CyClon Project

Figure 1. Initial network.

The project will take up to 60 days. The critical path involves steps 2, 5, 6, 7, 8, 11, 12, and 13 (see fig. 1).
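The critical path is simply the longest-duration path through the activity network. As an illustration only, the sketch below computes the critical path for a small hypothetical activity-on-node network; the activity names, durations, and dependencies are invented and do not reproduce the actual CyClon network, which appears only in the figures.

```python
# Critical-path sketch for a small activity-on-node network.
# Activity names, durations (days), and dependencies are hypothetical;
# they do not reproduce the actual CyClon network shown in the figures.
activities = {
    "A": (5, []),          # activity: (duration, predecessors)
    "B": (10, ["A"]),
    "C": (7, ["A"]),
    "D": (12, ["B", "C"]),
    "E": (6, ["D"]),
}

def critical_path(acts):
    """Forward pass over activities listed in topological order."""
    finish, parent = {}, {}
    for name, (duration, preds) in acts.items():
        start = max((finish[p] for p in preds), default=0)
        finish[name] = start + duration
        parent[name] = max(preds, key=lambda p: finish[p]) if preds else None
    last = max(finish, key=finish.get)          # activity that ends the project
    path, node = [], last
    while node is not None:
        path.append(node)
        node = parent[node]
    return list(reversed(path)), finish[last]

path, duration = critical_path(activities)
print("Critical path:", " -> ".join(path))     # A -> B -> D -> E
print("Project duration:", duration, "days")   # 33
```

A finish-to-start lag of the kind discussed below can be modeled in the same sketch by adding the lag to the predecessor's finish time before taking the maximum.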

Figure 2. The adjusted network.

The three finish-to-start lags have made the schedule more detailed. It is clear that the delivery time creates a certain time-out for the teams. However, it does not lead to significant changes in the product development period, although it can help save up to 6 days if some stages start concurrently.

Clearly, after the final adjustment, the project time can decrease: the project may take up to 45 days. Management would favor the new plan, as it is more effective due to the significantly shorter period for the development of the new product. The critical path will also be reduced considerably. Several processes will take place almost simultaneously, which will lead to a reduction of time and costs. The sensitivity of the network decreases, as the activities on the critical path are unlikely to change. Apparently, the concurrent approach enables managers to make projects more cost-effective.

Figure 3. The final network.

Reference List

Al-Ashaab, A., Howell, S., Usowicz, K., Anta, P. H., & Gorka, A. (2009). Set-based concurrent engineering model for automotive electronic/software systems development. In Proceedings of the 19th CIRP Design Conference - Competitive design, held 30-31 March at Cranfield University (pp. 464-471). Web.

De Wit, B., & Meyer, R. (2010). Strategy: Process, content, context: An international perspective. Andover, UK: Cengage Learning EMEA. Web.

Gray, C.F., & Larson, E.W. (2014). Project management: The managerial process. New York, NY: McGraw-Hill. Web.

Use of PowerPoint Presentations in a Learning Process

More than forty years ago, presentations were delivered by organizing a speech and drawing schemes and pictures on blackboards or on large sheets of paper. The next step in the development of these approaches to delivering information was the emergence of overhead projection, based on mechanically typeset slides, and of flipcharts, which were effective as well (Craig and Amernic, 2006). Slide presentation software has since become an integral part of the learning setting and is used specifically in large courses directed at information exchange rather than at skills advancement.

Nevertheless, presentation software is regarded as an effective tool for explaining and outlining the main themes and frameworks by means of illustrations, graphs, charts, and thesis statements (Craig and Amernic, 2006). This device has allowed both teachers and students to process information and present it to an audience in an effective manner. From a sociological perspective, technology should be controlled by humans and serve as an amplifier of human abilities and skills. In this particular case, instrumentalism, critical theory, and the social construction of technology shape the basis of technological progress. Specifically, PowerPoint is considered a socially constructed entity whose role is confined to using technological tools to enhance educational activities.

Social History of PowerPoint Presentation

The emergence of slide presentation software was predetermined by shifts in views on public speech, as well as on the tools that can enhance the effectiveness of a presentation. It is not surprising that the development of PowerPoint received responses in many spheres of social activity, including education. Craig and Amernic (2006) cite multiple research studies that criticize the use of the software because it does not contribute to public speaking and has detrimental outcomes for promoting dialogue and interaction.

However, the scholars reject the idea that these side effects relate directly to the use of PowerPoint; rather, the challenges stem solely from inappropriate use of the technology. According to Craig and Amernic (2006), PowerPoint is considered "another dominating, socially forceful technological mediator of teaching" (p. 148). More importantly, the software emerged from the need to recognize new modes of communication that call for alternative patterns of thinking and socializing. Therefore, slide projection is a unique mixture of social media technology and personal communication skills that reflects the changes in educational approaches.

The invention of the new software and its application for pedagogical purposes have become the focus of current debates. Specifically, some scholars insist that PowerPoint does not contribute to the learning process because it simplifies information exchange and does not allow students to develop logic and critical thinking. These social underpinnings are predetermined by the long existence of old-fashioned clichés about the structure and organization of a learning process that date from before the emergence of the software. At this point, Yen-Shou et al. (2011) have conducted their own studies to find a positive correlation between the introduction of PowerPoint and pedagogy.

In particular, the researchers explain that the main purpose of multimedia learning devices lies in presenting visual information because it successfully enhances students' perception of the course material. According to Yen-Shou et al. (2011), PowerPoint in a lecture has been shown to improve the note-taking ability of students as they study the teaching materials (p. 43). It is also effective for motivating students toward self-fulfillment and professional growth. Further advancement of multimedia devices provides new directions for developing software and creating new applications.

The effectiveness of PowerPoint does not depend solely on its functionality and the availability of options that the speaker can use. Rather, it relies on the speaker's creativity and the approaches that he or she employs to establish communication and information exchange with the audience (Craig and Amernic, 2006). Interpretative flexibility is also typical of this software, because it relates to the way the speaker applies the technology in various socio-cultural settings. In fact, the sociological perspective has deep historical and cultural underpinnings for integrating multimedia learning and developing new modes of communication. Ruokamo and Pohjolainen (2000) underscore the fact that the main requirements of a learning process have been projected onto the new tools and devices included in an academic setting. Previously designed for conducting meetings and conferences, slide presentations have become an integral part of communicating various messages in classroom learning. In addition, a computer-based learning environment contributes greatly to developing a person-centered approach to studies. In other words, a PowerPoint presentation created by an individual also stimulates students to develop their own schemes and mechanisms for presenting and structuring information.

Theoretical Foundations Explaining the Emergence of PowerPoint Presentation

A moral aspect of technology studies has recently been linked to the social construction of technology (SCOT) theory, whose supporters agree that technological systems are socially predetermined. At this point, SCOT theory implies that the invention of new technological devices becomes the precursor of technological change. The various social contexts in which technology is applied promote the transformation of information exchange and of social interaction between instructors and their students. Thus, the emergence of computers, television, and radio has a highly transformative nature, leading to the development of new solutions and approaches in a learning process.

According to Jones and Bissell (2011), through interpretative flexibility, "what makes a piece of technology educational is not necessarily inherent in the design of the technology; it might instead be a question of usage" (p. 287). Although presentations were not initially meant for educational purposes, their principal application has been reconsidered over time to advance the educational sphere and support a new framework of learning experience. In other words, the SCOT theory postulates that software tools are designed to promote students' understanding by building new means of comprehending the activities in which they are engaged.

Knowledge is the product of a range of activities in which students are involved. At this point, applying PowerPoint presentations in classrooms creates new experiences and skills that can later be employed for addressing new material. The development of new skills through technology integration fosters a new dimension of social influence, because new educational opportunities open up for students. At this point, the social environment is a driving force for advancing technological tools, which correlates with technological determinism. Oliver (2011) focuses on four theoretical underpinnings (activity theory, actor-network theory, the SCOT perspective, and community of practice theory) to demonstrate that the social environment places technology in a new dimension.

More importantly, building alternative visions of technology can explain its significance for learning, which shapes the framework of technological determinism. PowerPoint was created to empower knowledge presentation and information exchange between the speaker and the audience. If technology has a social influence, much consideration should be given to morality and power. These issues are discussed in the context of technology and education and form an integral component of critical theory. Under these circumstances, technology is assumed "to have the power to determine choices" (Oliver, 2011, p. 375). Focusing on the positive aspects of technological change creates new solutions in education. Therefore, the invention of slide presentation software is a step toward the visual representation of information.

In the context of sociology, a historical perspective is vital for understanding technological change and its influence on society. A framework of techno-historical interplay, therefore, defines the economic, political, and social preconditions for the development and application of new technological tools in education, which is embedded in the historical development of humankind in general (Hallström & Gyberg, 2011). Hence, external and internal influences should be taken into consideration to define the prerequisites of new technological advances in the learning sphere.

For instance, the geographic factor has made people think about an alternative system that would allow students all over the world to study at international universities and obtain degrees without crossing the borders of their countries. As a result, the emergence of the Internet has given rise to new initiatives that solve this problem through distance education. Indeed, the construction of knowledge through social networking has triggered computer-mediated environments and interactive models of socialization (Saritas, 2008). Therefore, from a social perspective, technology cannot be considered solely from an engineering or innovation standpoint, because it directly relates to social change.

With regard to the social theories of technology presented above, it should be stressed that the technologically determined perspective and the social focus are closely interrelated, because both provide new insight into the improvement of educational techniques. Both the social construction of technology and activity theory are applicable to the case, because they explain the actual role of technological discoveries.

References

Craig, R., & Amernic, J. (2006). PowerPoint Presentation Technology and the Dynamics of Teaching. Innovative Higher Education, 31(3), 147-160.

Hallström, J., & Gyberg, P. (2011). Technology in the rear-view mirror: how to better incorporate the history of technology into technology education. International Journal Of Technology & Design Education, 21(1), 3-17.

Jones, A., & Bissell, C. (2011). The social construction of educational technology through the use of authentic software tools. Research In Learning Technology, 19(3), 285-297.

Oliver, M. M. (2011). Technological determinism in educational technology research: some alternative ways of thinking about the relationship between learning and technology. Journal Of Computer Assisted Learning, 27(5), 373-384.

Ruokamo, H., & Pohjolainen, S. (2000). Distance learning in a multimedia networks project: main results. British Journal Of Educational Technology, 31(2), 117.

Saritas, T. (2008). The Construction of Knowledge Through Social Interaction Via Computer-Mediated Communication. Quarterly Review Of Distance Education, 9(1), 35-49.

Yen-Shou, L., Hung-Hsu, T., & Pao-Ta, Y. (2011). Integrating Annotations into a Dual-slide PowerPoint Presentation for Classroom Learning. Journal Of Educational Technology & Society, 14(2), 43-57.

The Future CSUF College Town Project

Executive Summary

California State University, Fullerton, the oldest university in Fullerton, has, in collaboration with the City of Fullerton, come up with a project to develop an area known as the college town. The town is to be located behind Hope University and the current Nutwood Street. The college town is designed to provide various activities that may be of great benefit not only to the members of CSUF but also to other members of the public.

Given the magnitude of the project and the number of people who may be affected by the town, there must be a consensus between CSUF and the City of Fullerton regarding its development. In addition, the university plays a very big role in supporting the economy of Fullerton; thus, setting up the college town will also help improve its image.

The zoning of the college town will include a residential area divided into a middle-class section and an upper-middle- and upper-class section. The town will also have a market district, where the main businesses of the town will be located, and a plaza district, where prime businesses will be located. Another area will be set aside for use as a residential area or for any other development that may be suitable for the town.

The cost of putting up the town is estimated at around $63.9 billion, compared to benefits estimated at around $30.5 billion. Setting up the town would nevertheless be a worthwhile investment, as the recurring nature of the benefits would eventually offset the costs. The Return on Investment (ROI) is found to be 47%, a fairly large value considering that this is a public development. Thus, the ROI shows that setting up the college town is a viable investment for the university and the people of Fullerton.

To help promote the image of the town, the paper proposes advertising through several Hollywood magazines. In addition, promotion would be carried out through CSUF and the City of Fullerton, which will undertake many of their activities in the town and thereby advertise it indirectly. The town will also try to attract several nationally recognized chain stores to set up their businesses in the town.

Introduction

Universities are very important learning institutions; they are vital to the development of an area. In addition, universities help improve the prestige and public image of an area, attract young and bright students, and bring intellectuals, i.e., lecturers and other trainers, to a town.

The presence of a successful university in a given town helps attract investments into the area, since companies are assured of a pool of young, bright, and hardworking students who can provide the manpower required for the effective functioning and growth of the companies.

However, for investments to be attracted to the area, proper infrastructure must be in place to facilitate the smooth running of the companies. In addition, the presence of good infrastructure will help attract students to the university, since most young people are known to be drawn to urban centers.

In order to acquire space for the expansion of its activities, CSUF came up with the college town project. In addition, the college town will provide more revenue to CSUF and the City of Fullerton. The university has for a very long time been the major establishment affecting the lives of the residents of the City of Fullerton, and the establishment of the college town will help it consolidate its position as the defining feature of the city.

Background

California State University, Fullerton, known commonly as CSUF, is an integral educational institution in the state of California, as it is the university that enrolls the highest number of students. In the City of Fullerton, it is the largest learning institution in terms of the number of students enrolled (University website, 2011).

Fullerton is generally an area that is almost fully developed, with very little land left for development. The city is mainly composed of low-density industrial developments. In addition, there is a project to develop a transportation network known as Go Local, which aims at connecting the FTC and the CSUF campus.

The Go Local plan covers the very place where the college town is to be located, that is, behind Hope University and the current Nutwood Street (Focus on Fullerton, 2009). The main goal of Go Local is to help ease congestion on the city's freeway. By easing congestion, the city will be opening up other areas for development and investment.

One of these areas is the one behind Hope University and the current Nutwood Street.

Go Local will not only help decongest the city but also facilitate the development of the college town. The college town project has therefore come at an opportune time, as its design will be integrated with that of Go Local.

Economic Climate

Despite the fact that the City of Fullerton is not a heavy industrial town, various economic activities are undertaken to support its economy and provide thousands of jobs to the residents of Fullerton. The economic activities undertaken include tourism, agriculture, financial services, building and construction, and manufacturing.

All the above economic activities generate revenue for the City of Fullerton. In addition, CSUF generates considerable revenue for the city; indeed, CSUF alone is the largest single employer in Fullerton. The university contributes $4.28 million in output to the economy of Fullerton, and student expenditure supports about 4,550 jobs for the residents of the town (Bhattacharya & Cockerill, 2002, p. 4).

Because California State University, Fullerton is of such economic importance to the City of Fullerton, setting up the college town in the vicinity of CSUF would enable the city to reap maximum economic benefits from the university by creating thousands of jobs for the residents of Fullerton.

The College Town Project

The college town will offer a variety of services to the people of Fullerton. It is designed to have a residential area where several apartment buildings will be set up and made available to everyone. The apartments are designed so that they can offer accommodation even to students who do not wish to be accommodated by the university.

All the apartments will have ample parking and security for the residents. In addition, the college town will offer space for the university to build residential areas to accommodate students, as well as space to accommodate the expansion of other CSUF activities.

In addition, the college town will offer an up-market residential area specifically suited for intellectuals and business people. This area will have various recreation facilities that will enhance the comfort of the residents.

The residential units in this area will not be fenced, in order to enhance the communal setting of the college town. Though the units will not be fenced, there will be ample security to help protect the residents' property. Just like all the areas of the city, the residential districts will have all the necessary social services and amenities.

The college town will also have a market place district mainly suited for setting up business enterprises. This area will be specifically suited for banks and other financial institutions, in addition to other, relatively small businesses. The market place district will be the focal point of the town and will therefore have many aesthetic features that will help improve the image of the town. Many hotels, clubs, and other entertainment spots will be located in this area.

Just past Nutwood Road, there will be a plaza district. This area will be suited for entertainment spots, a place where both young people and families can hang out during their free time. The district will have a five-star hotel offering world-class facilities, ideal for tourists and other people at leisure. This part of the town will be specifically suited for high-income business ventures, and a mall will also be established in the area.

The town will also have other parts that can be used for both residential and business purposes. In this area, there will be a large library that will cater to the needs of both the university students and the public.

This area will have many other recreation facilities, such as a sports center, a swimming pool, a sports ground, and a recreational park. The recreational park will offer an ideal place where people can relax without the noise and interference of the other areas. This part of the town is also ideal for students to spend their leisure time while socializing with other students or members of the public.

Cost Benefit Analysis

In a business case, cost-benefit analysis is used to compare the benefits of a certain investment with the costs of undertaking it, and thus to help determine whether the investment should be made.

However, cost-benefit analysis is not limited to business investments; it can also be used to determine whether certain actions undertaken by a party will turn out well. Hence, the cost-benefit analysis of the college town will look at various effects that may be caused by setting it up. The factors considered are mainly social, environmental, and financial or economic.

Costs of setting up the college town

Preliminary costs:

Consultation and planning costs  $ 100 million

Preparation of the area  $ 5 billion

Survey  $ 50 million

Environmental impact estimate  $ 3 billion

Construction costs:

Cost of materials and equipment  $ 50 billion

Labor  $ 4 Billion

Provision of necessary support services:

Access roads  $ 500 million

Electricity  $ 75 million

Water and sewerage  $ 150 million

Landscaping  $ 50 million

Other expenses:

Logistics  $ 1 billion

Total  $ 63.925 billion

Benefits of setting up the college town:

Employment offered in construction work  $ 1 billion

Employment offered upon completion  $ 6 billion

Entrepreneurial knowledge to the students estimate  $ 10 billion

Social cohesion of the town estimate  $ 2 billion

New students attracted to the university  $ 500 million

New investments attracted to the town  $ 10 billion

Revenue from tourism attraction  $ 1 billion

Evaluation of costs and benefits

The total cost of putting up the college town is $63.925 billion. This figure includes all the expenditure that may be involved in setting up the college town. The benefits of setting up the college town, in contrast, are estimated at roughly $30.5 billion.

However, the recurrence of some of the benefits would make the benefits outweigh the costs of putting up the town, as most of the costs are not recurrent. Therefore, the college town is a potentially worthwhile investment and should be made for the benefit of the residents of the town, the students, and other relevant stakeholders.
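To make the recurrence argument concrete, the sketch below compares the one-time cost with a benefit stream of which part is assumed to repeat every year. The split of the benefits into a one-time portion and an annually recurring portion is a purely illustrative assumption and is not taken from the estimates above.

```python
# Illustrative payback comparison: one-time cost vs. partly recurring benefits.
# The split of the roughly $30.5 billion of benefits into a one-time portion
# and an annually recurring portion is assumed purely for illustration.
total_cost = 63.925          # $ billion, treated as a one-time outlay
one_time_benefits = 12.5     # $ billion, assumed to occur only once
recurring_benefits = 18.0    # $ billion per year, assumed to recur annually

cumulative, years = one_time_benefits, 0
while cumulative < total_cost:
    years += 1
    cumulative += recurring_benefits

print(f"Cumulative benefits exceed the one-time cost after {years} years "
      f"(about ${cumulative:.1f} billion).")   # 3 years, about $66.5 billion
```

Under an assumption of this kind, cumulative benefits overtake the one-time cost within a few years, which is the sense in which the recurrence of the benefits makes the investment worthwhile.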

Promotion

Promotion is a vital aspect that will determine the success of the college town. Promotion will be done through famous magazines based in California, with particular emphasis placed on Hollywood magazines. This will help attract tourists to the college town by portraying it as a place where celebrities meet.

The college town will be promoted as a place where intellectuals, both young and old, meet. It will also be portrayed as a place where the true culture of the people of Fullerton is exhibited and where young people can have fun responsibly.

In addition, CSUF and the City of Fullerton should create a website for the college town to help attract people to it. The website will inform people of the various services and activities offered in the college town, thus attracting them to it.

To help promote the town, several CSUF and City of Fullerton activities should be carried out in the college town so as to make people aware of its existence. In addition, CSUF and the City of Fullerton should carry out several campaigns to help attract investments into the college town.

This can be done by posting advertisements in several investment magazines. In addition, CSUF and the City of Fullerton should take part in several workshops and trade fairs to help promote the image of the town and attract investments into it. Several advertising campaigns should also be undertaken on television.

The shopping mall to be located at the heart of the town will help attract supermarkets and other large chain stores into the area. These establishments will act as a form of promotion for the college town since, by establishing their businesses there, they will bring their good reputation to it, hence attracting people and other investments into the area.

Return on Investment (ROI)

Return on investment, or simply ROI, is a method used to determine the return that a certain investment is likely to yield over a given period of time. It is used by most analysts to determine the potential income that the investment may generate before committing to it. In calculating ROI, the most tangible financial gains of the project are compared against the cost of implementing the project. Return on investment is not usually as comprehensive as the cost-benefit analysis explained above (NSGIC, 2006, p. 1).

ROI is usually presented as a ratio of the total financial gains to the costs of undertaking a certain project.

Costs of undertaking the project

Preliminary costs:

Consultation and planning costs  $ 100 million

Preparation of the area  $ 5 billion

Survey  $ 50 million

Construction costs:

Cost of materials and equipment  $ 50 billion

Labor  $ 4 Billion

Provision of necessary support services:

Access roads  $ 500 million

Electricity  $ 75 million

Water and sewerage  $ 150 million

Landscaping  $ 50 million

Other expenses:

Logistical expenses  $ 1 billion

Total  $ 60.925 billion

Financial benefits of the project:

Employment offered in construction work  $ 1 billion

Employment offered upon completion  $ 6 billion

New students attracted to the university  $ 500 million

New investments attracted to the town  $ 20 billion

Revenue from tourism attraction  $ 1 Billion

Total  $ 28.5 billion

Return on investment of the project

ROI = 28.5/60.925 = 47 %.
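For transparency, the ratio can be reproduced by summing the itemized costs and financial benefits listed above (all figures in billions of dollars), as in the short sketch below.

```python
# ROI as defined in this paper: total financial gains divided by total costs.
# All figures are in billions of dollars and are taken from the lists above.
costs = {
    "consultation and planning": 0.100,
    "preparation of the area": 5.000,
    "survey": 0.050,
    "materials and equipment": 50.000,
    "labor": 4.000,
    "access roads": 0.500,
    "electricity": 0.075,
    "water and sewerage": 0.150,
    "landscaping": 0.050,
    "logistics": 1.000,
}
benefits = {
    "employment during construction": 1.0,
    "employment upon completion": 6.0,
    "new students attracted": 0.5,
    "new investments attracted": 20.0,
    "tourism revenue": 1.0,
}
total_costs = sum(costs.values())        # 60.925
total_benefits = sum(benefits.values())  # 28.5
print(f"ROI = {total_benefits:g} / {total_costs:g} = {total_benefits / total_costs:.0%}")
# ROI = 28.5 / 60.925 = 47%
```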

From the above calculations, the ROI of the project has been determined to be 47%. For a viable business venture, the ROI should generally be greater than zero; hence, this shows that the investment is viable. The value is far greater than the ROI of most government projects, which is in most cases negative (NSGIC, 2006, p. 1).

Conclusion

College towns and other establishments built near colleges have been shown to be of great benefit to various societies. Similar establishments have been successfully implemented at other universities such as the University of Connecticut, Ohio State University, and Arizona State University (Anon, 2011).

Creation of the college town would be of great benefit to the City of Fullerton, and the benefits are not restricted to the ones explained above. In addition, the several methods of analysis applied to the project have shown that it is a viable undertaking with the potential of yielding very good returns to the City of Fullerton. The Cost-Benefit Analysis (CBA), the most comprehensive method of project analysis used here, clearly shows that the project is bound to have great benefits.

Even though the project may take several years to implement and complete, careful planning of the college town should be undertaken in order to benefit future generations and help uplift the standing of the City of Fullerton and of California State University, Fullerton.

References

Anon. (2011). City Council Meets College Town! A University without Walls, Where Campus and City Life Converge. CSUF news. Web.

Bhattacharya, R. & Cockerill, L. (2002). Economic impact of California State University, Fullerton. Web.

Focus on Fullerton. (2009). City of Fullerton website. Web.

NSGIC. (2006). Economic justification measuring Return on Investment (ROI) and Cost Benefit Analysis (CBA). Advanced Statewide Spatial Data Infrastructures in Support of the National Spatial Data Infrastructure. Web.

University website. (2011). California State University, Fullerton. Web.

Reactive Chemical Explosion in the T2 Laboratories

T2 Laboratories Inc., located in Jacksonville, Florida, suffered a heavy loss of property and human life following an on-site chemical explosion that occurred on 19 December 2007. As a result of the incident, four employees of the company were killed; the co-owner of the chemical plant was among the casualties. Besides, twenty-eight other people working in nearby business establishments sustained serious injuries. Some of the businesses were later relocated from the vicinity of the factory site, while others closed down completely due to total damage.

T2 had been successfully producing methylcyclopentadienyl manganese tricarbonyl (MCMT) in batches for some time, despite minor technical hitches with heating control and cooling of the chemical reactor, in which the reaction proved to be highly exothermic (U.S. Chemical Safety and Hazard Investigation Board, 2009). To begin with, it is imperative to note that MCMT is a highly toxic and volatile liquid that contains manganese bound to organic compounds (Urban, 2000). The liquid compound is often added to normal gasoline to improve its octane rating. The permissible level of exposure to MCMT has been clearly defined by the National Institute for Occupational Safety and Health (NIOSH) and the Environmental Protection Agency (EPA).

The process of manufacturing MCMT entails a three-step procedure. At the T2 plant, the process was carried out in a single reactor. The reactor was constructed by the Annealing Box Company, and it required heating and subsequent cooling to maintain the appropriate temperature. For the manufacturing process to run smoothly, the raw materials have to be added by the process operator, who at the same time controls the heating and cooling of the mixture and regulates the pressure required by the system.

The operator makes use of a computerized control system. The first step entails metalation, whereby the dimer of methylcyclopentadiene (MCPD) and the solvent, diethylene glycol dimethyl ether, are blended inside the common reactor by the operator. Meanwhile, an outside operator feeds lumps of sodium metal through a 6-inch valve opening; when this action is complete, the outside operator closes the valve. The mixture is then continually heated by the process operator. The heating agent used at this stage is hot oil streaming through a piping system. The pressure required at this stage is 3.45 bar, while the temperature is controlled at about 182 degrees Centigrade. During heating, the metalation process is initiated, and the sodium blocks melt down.

Each dimer is split into two identical MCPD molecules. After this breakdown, the MCPD reacts with the sodium, leading to the formation of sodium methylcyclopentadiene. The reaction mixture also evolves a significant amount of hydrogen gas alongside large amounts of heat energy. The hydrogen gas liberated is vented from the reaction system and released into the atmosphere through a narrow valve measuring one inch in diameter (Urban, 2000).

The agitator is started by the process operator when the temperature of the reacting mixture reaches 98.9 degrees Centigrade. The main purpose of raising the temperature of the mixture is to increase the rate of the chemical reaction in the single reactor; metalation proceeds faster and more effectively at higher temperatures. At 300 degrees Fahrenheit (about 149 degrees Centigrade), the hot oil system, which acts as the main source of heat, is turned off by the operator.

After the heat is turned off, the reaction temperature is expected to go down considerably as part of the cooling process. However, the T2 laboratory disaster of 2007 was mainly occasioned by the inability of the system to cool down: the temperature of the metalation reaction mixture continued to rise and eventually led to an explosion. In case of an emergency due to the failure of the cooling system, a water supply valve is put in place to provide alternative cooling, and a backup supply of water to be used during emergencies is also installed as part of the safety procedure (LaDou, 2006).

The U.S. Chemical Safety and Hazard Investigation Board (CSB) described the explosion at T2 Laboratories as highly disastrous due to the extreme quantity of heat energy that was released in the system as a result of the exothermic reaction in the reactor (U.S. Chemical Safety and Hazard Investigation Board, 2009). In an exothermic reaction, heat is released from the system to its surroundings. The chemical mixture contained in the common reactor undergoes bond formation to produce the final product, which in this case is MCMT (Urban, 2000). In other words, when the process of bond formation releases more heat energy than is needed for bond breaking, an exothermic reaction takes place. If this reaction takes place on a large scale, as in the case of T2 Laboratories, enormous amounts of heat energy are released, which necessitates an effective cooling system.

The amount of heat energy evolved as a result of the exothermic reaction during the T2 incident was estimated to be equivalent to one thousand four hundred pounds of TNT. As already noted, the CSB pointed out that the cause of the explosion was a runaway exothermic reaction that produced extremely high temperatures in the single reactor. This type of chemical reaction, in which a lot of heat is released, is often very risky, especially in the event of an explosion like the one witnessed at T2. The inability of the process operator to reduce the rate of the exothermic reaction may have been caused by quite a few factors, such as cross-contamination of the reaction vessel or the use of impure raw materials.
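The runaway mechanism can be illustrated with a simple heat balance: heat generation grows roughly exponentially with temperature (Arrhenius-type behavior), while cooling removes heat only in proportion to the difference between the reactor and coolant temperatures. The toy model below uses invented parameters and is in no way a simulation of the T2 reactor or of MCMT chemistry; it only shows why a loss of cooling capacity lets the temperature escalate instead of leveling off.

```python
import math

# Toy heat-balance model of a runaway reaction. Every parameter below is
# invented for illustration; this is NOT a model of the T2 reactor or of
# MCMT chemistry.
def simulate(cooling_coeff, t_start=180.0, t_coolant=25.0, steps=600):
    """Return the temperature history (deg C) of a lumped reactor model."""
    temp, history = t_start, [t_start]
    for _ in range(steps):
        q_gen = 0.5 * math.exp(0.03 * (temp - t_start))   # Arrhenius-like growth
        q_cool = cooling_coeff * (temp - t_coolant)        # Newtonian cooling
        temp += q_gen - q_cool                             # one time step
        history.append(temp)
        if temp > 600:                                     # treat as runaway
            break
    return history

stable = simulate(cooling_coeff=0.02)     # adequate cooling
runaway = simulate(cooling_coeff=0.002)   # degraded cooling

print(f"Adequate cooling: temperature stays bounded (last value "
      f"{stable[-1]:.0f} deg C after {len(stable) - 1} steps).")
print(f"Degraded cooling: temperature exceeds 600 deg C after "
      f"{len(runaway) - 1} steps (a runaway).")
```

Once heat generation exceeds heat removal at every temperature, no steady state exists and the temperature keeps climbing, which matches the qualitative picture of a runaway reaction described above.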

Inadequate cooling of the system and excessive heating are also possible causes of uncontrolled exothermic reactions. Apart from conforming to standard regulatory measures on industrial safety and hazard management, incidents of the T2 magnitude can be prevented by adopting and implementing extra precautionary procedures. Firstly, before any chemical plant is erected on site, the firm must be accredited by the Engineering and Technology Board (LaDou, 2006). Accreditation should be done continually; this implies that the company works with quality assurance boards throughout its lifetime so that technical issues are addressed at the opportune time.

Secondly, the use of refurbished equipment when setting up high-intensity chemical plants like T2 Laboratories should be avoided. Numerous technical hitches were reported in the use of the company's reactor, largely because it was an old vessel; the company settled on the refurbished reactor due to insufficient funds. Last but not least, company owners and chemical plant managers should have adequate experience and technical know-how in reactive chemistry so that they can handle emergencies (U.S. Chemical Safety and Hazard Investigation Board, 2009). The co-owner, as well as the plant operators of T2 Laboratories, lacked prior skills and competencies in handling the exothermic reaction emanating from the chemical reactor.

References

LaDou, J. (2006). Current occupational & environmental medicine. CA: McGraw-Hill Companies.

Urban, P. G. (2000). Bretherick's Handbook of Reactive Chemical Hazards (6th ed.). New York: Elsevier Science.

U.S. Chemical Safety and Hazard Investigation Board (2009). Investigation report: T2 Laboratories, Inc. runaway reaction. Web.

Information System Development: From Scratch to Guide

Introduction

Whenever an information system is developed from the ground up, it is vital to create an IRD strategy to guide the development process. The system model not only expresses requirements but also provides an informal description of data points and allows for the definition of a logical architecture, which will encompass functional requirements and non-functional needs. The requirement modeling strategy best suited to this scenario is structured analysis, which uses a data-oriented approach to conceptual modeling.

Project Management Case Study

Before beginning a project, it is necessary to identify the requirements analysis strategy. Since Bev's Barricades has no existing system in place, an outcome analysis seems best. An outcome analysis focuses on the fundamental outcomes that provide value to the company and its customers (Dennis, Wixom, & Tegarden, 2015). For Bev's Barricades, the outcome is to provide clients with equipment that efficiently fits their needs. However, meeting this goal requires a series of processes, ranging from payment processing to inventory management and logistics, that the owner would like the information system to support.
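
As an illustration of the data-oriented conceptual modeling mentioned above, the core outcome could be captured as a handful of entities before any detailed design work begins. The sketch below is a minimal example; the entity and attribute names are hypothetical and are not taken from the case.

    # Minimal conceptual data model sketch for the ordering outcome.
    # Entity and attribute names are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EquipmentItem:
        sku: str
        description: str
        quantity_on_hand: int

    @dataclass
    class Customer:
        customer_id: str
        name: str
        billing_address: str

    @dataclass
    class Order:
        order_id: str
        customer: Customer
        items: List[EquipmentItem] = field(default_factory=list)
        payment_received: bool = False    # ties into the payment process
        delivery_scheduled: bool = False  # ties into the logistics process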

It is viable to create a strategic matrix that provides relevant context and enables the facilitation of vertical processes in the organization while offering a level of consistency. A matrix includes a self-contained strategy that outlines principles of integration and governance. Each process area of the company should be considered in the context of independent management and information resources, with each potentially having a sub-strategy. A strategic matrix allows the creation of parallel strategies for each department, with tools and activities that feed back into the core process strategy and management framework.

A fitting team-based technique would be joint application development (JAD), which allows for interaction between the project team, management, and users to identify key requirements for the information system. It is a comprehensive method for collecting information that enables the project development team to accurately identify system specifications from end users and thus avoid potential complaints or dissatisfaction in the future.

Participants in JAD sessions should include managers from each relevant department outlined by Bev's Barricades, such as inventory, accounting, design, marketing, IT support, and logistics. A facilitator should also be present to keep the sessions streamlined (Dennis et al., 2015). Although more expensive and time-consuming, this approach is necessary considering that there is no existing infrastructure in place and the system must fit the custom requirements of the business.

Conclusion

Information collection is a critical process in an IRD strategy. The best techniques for eliciting user requirements in the case of Bev's Barricades will be interviews and observation. The selected interviewees should come from various employee levels within the company and be able to provide key specifications and viewpoints on the system. For example, in accounting, it may be necessary to interview the department head to identify the strategic vision, a manager for common problems in day-to-day operations, and a data entry clerk as a regular end user of the information system. A similar approach should be used with each department identified by Bev's Barricades that will be served by the future information system.

Interview questions should be both closed-ended, to gather precise information for data analytics, and open-ended, to provide subjective information on user preferences (Dennis et al., 2015). At the same time, observations should be performed on any processes that are brought up during the interviews. Observation offers deeper insights into the practices and patterns of an organization and serves as a vital supplementary elicitation technique to interviews.

Reference

Dennis, A., Wixom, B. H., & Tegarden, D. (2015). Systems analysis and design: An object-oriented approach with UML (5th ed.). Hoboken, NJ: Wiley.

Linux Deployment Proposal: Ubuntu 12.10

Abstract

This paper explores the advantages of a Linux-based operating system and provides relevant evidence supporting the choice. The author examines the features of Ubuntu and its advantages over Windows 7 and XP. The deployment plan is thoroughly described, and the rationale for new hardware is also presented. The proposal pays close attention to the utilization of specific Ubuntu services (DHCP, Samba, Encfs) and describes in detail the process of migration from the Windows operating system to Ubuntu. The author discusses the options regarding the encryption of important data and the provision of network access to those files. Explanations justifying the choices can be found in every section of the proposal.

Ubuntu 12.10

To begin with, there is a necessity to justify the choice of Ubuntu 12.10 over Windows XP. There are a number of important advantages that should be enumerated. First, one should pay attention to the design of the operating system. Ubuntu features the Unity desktop, which has been warmly welcomed by the majority of users (Sobell, 2011). Many users have also mentioned that its user interface (UI) looks considerably better than that of, for instance, Windows 8. Second, there is the customizability of Ubuntu compared to Windows: the options for customization are almost limitless in the case of Ubuntu. Third, Ubuntu features a number of versatile applications that come out of the box (Sobell, 2011). In addition, the majority of Ubuntu applications are open source and free to use (unlike Windows, where many applications are only available for a fixed trial period).

Fourth, one should pay attention to the minimum system requirements. The system requirements of Ubuntu are much more modest, and this OS is a good choice when an individual or organization is limited in resources and hardware (Sobell, 2011). Fifth, I would emphasize the importance of the security options that are present in Ubuntu. Its Linux Security Modules and Linux Containers make this OS highly resistant to viruses and other external threats. Sixth, Ubuntu can work with Active Directory and features the Landscape application, an Ubuntu-exclusive alternative able to perform the majority of Active Directory administration tasks (Sobell, 2011). The seventh option, VPN support, is available to both Ubuntu and Windows users. The last advantage of Ubuntu over Windows is its price: this Linux-based OS is available for free, while a Windows license must be paid for (Sobell, 2011).

Moreover, I would also like to justify the choice of Ubuntu 12.10 over other available Linux options. To begin with, its graphical user interface (GUI) is easy to understand and suits the majority of users, including both Linux experts and those who are new to this kind of operating system. Moreover, Ubuntu features Apt, a download-and-install helper that makes things really easy for end users (Sobell, 2011). It should be noted that Ubuntu works out of the box (as is), and no additional steps have to be performed when it comes to the installation of the OS itself. Another advantage over other distributions is that Ubuntu offers much more software and does not depend on any other distribution (Sobell, 2011).
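
As a brief illustration of how little effort installing additional software with Apt takes, the two commands below refresh the package index and then install an application; the package name is only an example.

    sudo apt-get update                  # refresh the package index
    sudo apt-get install libreoffice     # install an application (example package)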

Hardware Review

After a thorough review of the current hardware configuration for Windows 7, I consider it adequate, and it can be successfully replaced by Ubuntu in the near future. The only change I might recommend is the replacement of the Intel Core i3 processors with i5s. This would cost the company some money, but the outcome would guarantee steady performance. Four gigabytes of RAM would be enough for a Windows 7-based machine. We might add up to 6 GB of RAM only if the most important stations in the organization have to be updated (therefore, I do not recommend upgrading all of the Windows 7 systems to 6 GB of RAM).

The current hardware configuration for Windows XP should be considered a low-end configuration and replaced as soon as possible. Although Windows XP is not a particularly resource-demanding operating system, I would update the processors and the amount of RAM on every machine in the organization. I advise installing 3 GB of RAM (or 4 GB for x64 systems) and Intel Core i3 processors. This hardware can be purchased for a reasonable price and provides stable, high-level performance. I would go with the Intel Core i3-3130M and Kingston or Crucial RAM with a frequency of 1,866 MHz.

Migration Plan

The first and foremost task is to review the current setup. Network hardware and other crucial parts should be evaluated from the migration point of view. I would analyze the readiness of the system to migrate to Linux and identify the software that is going to be replaced (Parziale et al., 2014). There is also a need to repeatedly assess the hardware requirements and the new configurations that will serve as the replacement machines running Ubuntu. Another crucial task is to divide the present software into three categories (critical, beneficial, and insignificant) and plan further steps in line with the importance level of the applications (Parziale et al., 2014).

The next step is to generate a hard disk image. The key goal of this step of the migration is to create generic versions of the operating system so that end users do not have to install the necessary applications themselves (meaning that the required applications are already preinstalled) (Parziale et al., 2014). Any other applications would be installed later using Apt. Consequently, all of the applications installed separately should be verified and tested. The applications that are essential for the organization should be deployed first. Moreover, I would also pay attention to the issue of compatibility (Parziale et al., 2014). The applications that can cause problems should be checked first. Group testing of applications should also be carried out because some applications only work correctly when installed on a clean OS. In this case, I might use virtualization in order to solve any problems that arise (Parziale et al., 2014).
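
One common way to create such an image and later write it to the target machines is the dd utility that ships with Ubuntu; this is only a sketch of one possible approach, and the device and file names below are placeholders that must match the actual hardware.

    # create an image of the reference machine's disk (run from a live session)
    sudo dd if=/dev/sda of=/srv/images/ubuntu-base.img bs=4M
    # write the image back to a target machine's disk
    sudo dd if=/srv/images/ubuntu-base.img of=/dev/sda bs=4M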

The third task is to transfer users' files, settings, and preferences as smoothly as possible.

I would take into consideration all of the customizations performed by users. In order to migrate correctly, it is necessary to identify the key settings that should be transferred (including network drives, printers, and so forth) (Parziale et al., 2014). The users should be made aware of the fact that some files might be lost. The best way to transfer users' files and settings successfully is to automate the transfer. The process should be set up so that each step starts only when the previous step has completed successfully (Parziale et al., 2014). These steps include saving the settings, installing the OS image, installing the essential applications, and restoring the saved settings.

The last step is to check whether the deployment process was successful. I recommend starting with a single machine and then testing machines across the organization. It is necessary to make sure that all of the settings have been transferred and all of the applications are functioning as expected (Parziale et al., 2014). All the data concerning the migration should be logged (including the cost of migration, the number of workstations involved, and so on).

Hardware to Be Used and Installation Options

New Desktop/Laptop Configurations for Ubuntu 12.10

Processor: Intel Core i3-3130M / Intel Core i5

Memory: 3 GB RAM / 4 GB RAM

Hard Drive: 250 GB / 500 GB

Network Card: 10/100/1000 Mbps

USB Ports: 4 USB 2.0

Monitor: 19/21-inch LCD

Rationale

The majority of currently available Linux distributions (and Ubuntu especially) are supported by hardware developers. Ubuntu 12.10 will automatically detect the hardware and install the necessary drivers. If specific proprietary drivers are required, hardware compatibility issues may arise (Martinez, Marin-Lopez, & Garcia, 2014). In that case, there are two alternatives: to buy new hardware or to put the project on hold. For laptops, it is essential to download the official drivers from the manufacturer.

Log-in Process

First, the user has to enter a username. If the user is not root and the /etc/nologin file is present in the file system, a warning is displayed and the login process is stopped (Sobell, 2011). Second, the system checks the /etc/usertty file for specific restrictions set for the user who is logging in. There may be certain restrictions for regular users, and even for root, on specific terminals (Martinez et al., 2014). The system records every use of the sudo command and every user login. A number of security programs can look through the /var/log/messages file to find anomalies and indicate any probable system security violations.
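
As a small illustration, the Python sketch below scans the log file mentioned above for sudo activity and failed logins. It assumes the entries are written to /var/log/messages as described (on many Ubuntu systems they appear in /var/log/auth.log instead) and normally needs to be run with root privileges.

    # Print log lines that record sudo usage or failed logins.
    # The log path follows the description above; adjust it if the
    # target system writes these entries to /var/log/auth.log.
    LOG_FILE = "/var/log/messages"

    with open(LOG_FILE, errors="replace") as log:
        for line in log:
            if "sudo" in line or "FAILED LOGIN" in line:
                print(line.rstrip())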

IP Addresses

The Ubuntu-based systems will receive IP addresses by means of the Dynamic Host Configuration Protocol (DHCP). Each host sends a DHCP request over the network to ask for an IP address or to find any available DHCP server and subsequently request a new network configuration (Martinez et al., 2014). The DHCP client then connects to the DHCP server and renews its IP address information before the lease time of the address expires. If a DHCP client is unable to renew its IP address due to an interruption or client shutdown, its lease expires (Martinez et al., 2014). After that, another DHCP client has the option of leasing this IP address from the DHCP server. All leased IP addresses are stored by the DHCP service in a file called dhcpd.leases, located in /var/lib/dhcp. By means of this file, the DHCP server is able to track all the IP leases even after a reboot or a crash (Sobell, 2011). I would also note that there are several advantages to setting up a DHCP server. First, no IP address conflicts will appear. Second, the service guarantees that no IP address will be duplicated. Third, the DHCP server stores all IP address assignments against the hosts' MAC addresses (Martinez et al., 2014); based on this, DHCP allows creating a specific configuration for a specific host. Fourth, DHCP requires minimal setup but is very efficient.
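
For illustration, a minimal subnet declaration in the DHCP server's configuration file (/etc/dhcp/dhcpd.conf on Ubuntu) could look like the following; all addresses and lease times are placeholders chosen for this sketch.

    subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.100 192.168.10.200;
        option routers 192.168.10.1;
        option domain-name-servers 192.168.10.10;
        default-lease-time 600;    # seconds before a client must renew
        max-lease-time 7200;
    }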

DNS Access

In order to let LSDG access the DNS, the organization will have to set up the /etc/network/interfaces file properly. This also makes it possible to apply changes to the DNS configuration from the command line. Moreover, it is worth noting that the company will need at least two servers (Martinez et al., 2014). One of them will serve as the master DNS server, where all the necessary zone files will be created. The other one will be the slave server; it will receive data from the master server and provide that data if the master encounters a critical error (Sobell, 2011). By doing this, the organization will be able to secure its DNS servers and minimize the impact of failures. This kind of setup will provide the organization and its clients with a highly performant system. It is important for LSDG because it avoids the problem of serving outdated responses to customer requests. The organization would only need to concern itself with the setup of the DNS servers (Martinez et al., 2014).
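
A static entry in /etc/network/interfaces that points a workstation at the organization's two DNS servers could look roughly like this; every address below is a placeholder.

    auto eth0
    iface eth0 inet static
        address 192.168.10.21
        netmask 255.255.255.0
        gateway 192.168.10.1
        dns-nameservers 192.168.10.10 192.168.10.11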

Network Access to Files

Files on the network can be accessed by LSDG by means of an SSH connection. This method is available if a secure shell server is set up on the server. Numerous web hosts provide SSH services such as protected file upload and so forth (Sobell, 2011). The key feature of SSH servers is that they require credentials at all times. All the data sent via SSH is encrypted (including the user's password), so no one else on the network is able to see that information. Another option worth mentioning is WebDAV (Sobell, 2011). This service is based on the HTTP protocol and is frequently used to share files locally or store data online. One of the most important features of WebDAV is that it can use strong SSL encryption. This means that no one can see the personal data of the user accessing or uploading the files, and it is practically impossible to steal that information (Sobell, 2011).
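
For scripted file retrieval over SSH, a short Python sketch using the third-party paramiko library might look like the following; the host name, credentials, and file paths are placeholders and should be replaced with real values.

    # Download a single file over SFTP; all connection details are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # illustration only; verify host keys in production
    client.connect("files.example.org", username="lsdg_user", password="secret")

    sftp = client.open_sftp()
    sftp.get("/srv/shared/report.odt", "report.odt")  # remote path -> local path
    sftp.close()
    client.close()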

Secured File Sharing

One of the best options for Linux desktops is Samba. It is an open-source implementation of the Server Message Block (SMB) file-sharing protocol (Sobell, 2011). The main advantage of Samba is that it can be installed easily on Ubuntu or any other Linux distribution. Completely free of charge, Samba can substitute for, for instance, a domain controller otherwise only available in Windows NT. The issue that may be encountered by LSDG is connected to restrictions inherent in Unix-like operating systems: the permissions available to users are not really user-friendly, and that is hard to change (Sobell, 2011). Nonetheless, there are several alternatives that help address this problem. I recommend looking at OpenAFS (an open-source client-server system for uploading and downloading files) and Novell Storage Services from NetWare, which will soon be available out of the box on Linux systems (Sobell, 2011).
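
A basic share definition in Samba's configuration file (/etc/samba/smb.conf) might look like the following; the share name, path, and group are placeholders used only for illustration.

    [lsdg-share]
        path = /srv/samba/lsdg
        browseable = yes
        read only = no
        valid users = @lsdg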

Printer Access

In order to access the printer, it should be connected to the computer (usually through a USB port or Wi-Fi) and turned on. Then we go to the printing options and add a new printer. The printer is detected automatically, and we only have to set it up (Sobell, 2011). After we select the printer, we choose the driver (normally, the default drivers are the appropriate ones). After this, we fill in the descriptive information that serves to identify the printer on the network. The changes are applied, and the driver is installed (Sobell, 2011). If necessary, particular drivers can be downloaded manually (in the form of a *.tar archive or a *.deb package) from the manufacturer's official website. If we are setting up a local printer that works via Wi-Fi, the steps are the same as mentioned above; the only difference is that we have to indicate the IP address of the printer (Sobell, 2011).

Data Encryption

There is information that should be encrypted (Sobell, 2011). This includes certain business documentation stored electronically and the personal information of employees. The information should also be available with different levels of access. In order to guarantee the safety of this data, I recommend using Encfs. This is an application that permits administrators to create encrypted files and directories (Sobell, 2011). Moreover, any unencrypted file that is moved to an encrypted directory becomes encrypted as well. Access to the encrypted files and directories is granted by means of complex passwords. The key drawback of this application is that it can only be set up from the command line (Sobell, 2011).
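
Setting up an encrypted directory with Encfs takes only a few commands, as sketched below; the directory names are placeholders.

    sudo apt-get install encfs           # install the package
    encfs ~/.encrypted ~/confidential    # create and mount the encrypted directory
    fusermount -u ~/confidential         # unmount when finished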

References

Martinez, A. R., Marin-Lopez, R., & Garcia, F. P. (2014). Architectures and protocols for secure information technology infrastructures. Hershey, PA: IGI Global.

Parziale, L., Franco, E., Gardner, C., Ogando, T., Sahin, S., & Gunreben, B. (2014). Practical migration from x86 to Linux on IBM System z. Springville, UT: Vervante.

Sobell, M. G. (2011). A practical guide to Ubuntu Linux. Upper Saddle River, NJ: Prentice Hall.

Cyber Attacks on Financial Institutions

Abstract

The financial services industry has been reeling from the effects of frequent cyber attacks. This problem has slowed down advancements in the financial services industry. This essay catalogs some of the causal factors behind cyber attacks on financial institutions. These factors include an enabling environment and the lack of a common approach to the problem.

Introduction

Over the last decade, the financial services industry has come to rely on digital technology. Financial institutions use digital technology to conduct business, form liaisons, share information, and trade with external institutions. Dependency on the internet is also on the rise, and this dynamic attracts both positive and negative elements within various industries. Cyber attacks are one of the consequences of the digital revolution that is ongoing within the financial industry. According to statistics from the last few years, the frequency of cyber attacks on financial institutions is on the rise.

To mitigate the effects of cyber attacks on financial institutions, it is important to understand their anatomy. Cyber attacks are still evolving in nature, and this makes them a complex challenge for financial institutions. Cyber attacks are caused by various factors, including developments in the global finance industry. Some of the known causes include the scalable nature of the financial industry. In addition, there are currently no formal institutions actively coordinating the fight against cybercrime. This essay analyzes the elements that contribute to cyber attacks on financial institutions. The paper will also put the causes of cyber attacks in their respective contexts.

Background of the problem

The frequency and relentlessness of cyber attacks on financial institutions have prompted major banking stakeholders to enhance their information security systems. Current statistics indicate that around 93 percent of major financial institutions have had their cybersecurity compromised in the last year (Lennon, 2014). Cyber attacks are responsible for the theft of various assets of digital operations, including usernames, passwords, credit card information, and money.

A cyber attack can take the form of phishing (social engineering and technical subterfuge), malvertising (injection of malware into legitimate online advertising sites), watering holes (injection of malware into commonly visited websites), and web-based attacks (targeting systems and services that contain customer credentials) (Mukhopadhyay, Saha, Mahanti, & Podder, 2005). Another worrying trend involves instances when stolen information is sold online to any willing buyer. To counter the effects of cyber attacks, institutions have been forced to invest colossal amounts of money in mitigation (Lennon, 2014). At the same time, cyber attacks have led to the loss of both intellectual property and financial assets. Customers are likely to lose confidence in institutions that are subject to cyber attacks. The development of robust infrastructure is vital in the fight against cyber attacks.

Combating cyber attacks prompts financial institutions to subdivide their resources among various departments in line with the severity of threats. The financial services industry is interconnected into a web of bigger and smaller organizations. Consequently, a cyber attack on one institution can have far-reaching effects on other players, including suppliers, vendors, partners, and customers. This scenario was evident when the popular retail chain Target fell victim to hackers (Richardson, 2008). The attack affected a wide range of stakeholders, including customers, suppliers, and financial institutions. Currently, the risk of cyber attacks is noted to be one of the major impediments to growth in the financial services industry.

Causes of cyber attacks

One of the major causes of cyber attacks is the vulnerability of financial institutions. Naturally, financial institutions do not operate in isolation; they have to sustain a myriad of connections to survive. Through a financial institution's normal activities, a malicious actor can easily gain entry into guarded systems. For instance, a hacker has the ability not only to steal data but also to delete or modify it. Consequently, the vulnerability of any financial institution stems from its core operations. Ordinary software, hardware, or human vulnerabilities can be exploited by hackers to gain administrative control of networks, which, if abused, could have catastrophic consequences (Pfleeger & Rue, 2008). Financial institutions are subject to competitive market dynamics, which means that they have to adopt a welcoming attitude toward new connections. In addition, financial institutions rely on achieving sizeable market shares to consolidate their financial stability.

In most financial-industry environments, there is a lack of coordinated effort to address the issue of cyber attacks in a collective manner. Consequently, malicious actors have continued to take advantage of this shortcoming. For example, cyber attacks have not yet been addressed at the international level (Pfleeger & Rue, 2008). At this level of action, financial institutions should be able to exchange information concerning cyber attacks, ranging from intelligence on potential attackers to recognized best practices and past experiences. The lack of coordinated effort leaves financial institutions to their own devices, thereby giving potential attackers an advantage. The capacity of individual institutions is not sufficient to counter attackers who are becoming increasingly sophisticated. Until global initiatives to combat cyber attacks have been instituted, attackers will continue to prosper at the expense of small industry players.

Incidents of cyber attacks are also being fuelled by a general lack of political will on this problem. Until now, cybersecurity issues have been treated as purely financial matters. The lack of a political approach to the problem of cybersecurity is a major cause of cyber attacks. The financial services industry has yet to adopt a proactive stance on the matter by involving all major stakeholders, including government departments.

Hackers and other cybersecurity offenders have enjoyed relatively free rein compared with other criminals of a similar nature (Bignell, 2006). For example, some countries do not have adequate systems for prosecuting cybercriminals. Consequently, offenders find safe havens in the absence of coordinated political will to combat cyber attacks. International political awareness could shield the financial services industry from increased cyber attacks. Leading industry representatives such as the European and International Banking Federation have failed to take up the initiative to drum up political awareness of cybersecurity (Nasheri, 2005). This lack of initiative has left the industry vulnerable to the problem of cyber attacks.

Another cause of cyber attacks on banks and other financial institutions is that they are structured in a manner that leaves them vulnerable. The supply chains associated with financial institutions leave banks open to attacks. The persistent risk posed by third-party players prompts some institutions to seek consultancy services on how to deal with this issue (Richardson, 2008). For instance, "some institutions have invoked supply chain working groups to manage risks associated with third parties and others have built comprehensive lists of who supplies what so that during incident information and intelligence can be shared with these companies" (Bignell, 2006, p. 23).

In recent times, the involvement of state actors in cyber attacks has become evident. Government-associated institutions initiate cyber attacks on targeted organizations for reasons that go beyond the norm. For example, one common motivation behind state-affiliated cyber attacks is espionage (Nasheri, 2005). State actors often target financial institutions that are affiliated with governments. In normal circumstances, attacks that come from government quarters often use network intrusion tactics to mount persistent threats against their targets. The risks associated with state-affiliated actors in cyber attacks rarely materialize (Bignell, 2006). Nevertheless, sour relations between states and governments are a major contributor to cyber attacks on financial institutions.

Another major cause of cyber attacks on financial institutions is the environment in which they occur. Cyber attacks occur within a virtual environment that mimics an international arena. Geographical and political boundaries are not a factor where cyber threats are concerned. Consequently, perpetrators of cyber threats can undertake their operations in any type of environment. The only real hindrance to perpetrators of cyber attacks is the pace of changing technology; if attackers can adapt to the changing technology, their attacks can go on unhindered for a long period. One observer analyzes the factors that contribute to increasing cyber threats by noting that actors "have shown themselves to be capable of adapting quickly to the rapid pace of technological change, taking full advantage of the convergence of internet-enabled technologies to develop new and bespoke attack vectors" (Bailey & Richter, 2014, p. 18). The ambiguity of the environment in which cyber attacks take place is a major contributor to this problem. In addition, the operations of financial institutions have to keep evolving to keep up with the adaptive nature of cyber attackers.

Cyber attacks on financial institutions are also being fuelled by the fact that there are no centers of information that can give a comprehensive outlook on the issue. Cyber threats are an emerging phenomenon, but they are also progressing at an impressive speed. Consequently, vital data concerning the impacts of cyber attacks on financial institutions are still at the collection stage, and it will take more time for the full impact of cyber attacks to be quantified. The collaboration of various institutions can diminish instances of cyber attacks by creating a resourceful information pool. Until credible data on the development of cyber attacks has been compiled, financial institutions will continue to suffer from avoidable attacks. On the other hand, sharing relevant information in an effective manner will give institutions access to information on cyber attacks (Rigby & Bilodeau, 2015).

Conclusion

Cyber attacks are the culmination of various oversights, omissions, and challenges within the realm of financial institutions. The attacks are causing a slowdown in the growth and expansion agendas of various institutions. On the other hand, advances in digital technology have provided an enabling environment for potential offenders. One prominent cause of cyber attacks is the fact that financial institutions operate in an interconnected environment where an attack against one institution can end up affecting many organizations. The lack of political commitment to the cybersecurity issue also means that cyber attacks can go on unnoticed. The entry of global stakeholders into cybersecurity matters means that pertinent data will soon be available to institutions and other players in the financial services industry. On several occasions, cyber offenders have found it easy to operate in a virtual environment where they are not limited by political or geographical boundaries.

References

Bailey, T., & Richter, W. (2014). The rising strategic risks of cyberattacks. McKinsey Quarterly, 2(14), 17-22.

Bignell, B. (2006). Authentication in an internet banking environment: Towards developing a strategy for fraud detection. London: Bain & Company.

Lennon, M. (2014). Hackers hit 100 banks in unprecedented $1 billion cyber heist. Web.

Mukhopadhyay, A., Saha, D., Mahanti, A., & Podder, A. (2005). Insurance for cyber-risk: A utility model. Decision, 32(1), 153-169.

Nasheri, H. (2005). Economic espionage and industrial spying. Cambridge: Cambridge University Press.

Pfleeger, S., & Rue, R. (2008). Cybersecurity economic issues: Clearing the path to good practice. Software IEEE, 25(1), 35-42.

Richardson, R. (2008). CSI computer crime and security survey. Computer Security Institute, 1(1), 1-30.

Rigby, D., & Bilodeau, B. (2015). Management tools & trends 2015. London: Bain & Company.