Network Infrastructure Upgrade: Selection Process

A sound network infrastructure is essential to the smooth running of a company or any other organization. Network management should therefore be treated as an essential management tool. Network management covers all the activities involved in keeping an organization's network running effectively (Carey, Tanewski, and Simnett, 2000). In this paper the term organization is sometimes used in place of company, because network infrastructure applies to both in the same way. In network management, maintenance means repairing damaged components or upgrading the network to offer better services; provisioning means configuring network resources to deliver the desired services (Carey, Tanewski, and Simnett, 2000); and network operation means ensuring that the network performs its tasks smoothly as intended.

Midlands Environmental Business Company (MEBC) wants to upgrade its existing network infrastructure. As a member of the top management team, I was involved in advertising for a suitable company to write and present a proposal report so that we could evaluate the candidates and choose the best one for the job. We received seven reports from seven different companies, and after evaluating them we selected ManageEngine. We chose this company because its proposal addressed more of our needs than any other, and its capability to carry out the work we expect was the highest among all the companies that submitted proposals. This report describes some of the factors that led us to choose ManageEngine as the company best placed to help Midlands Environmental Business Company improve its network infrastructure.

Even though the selected company may not have met every requirement that Midlands Environmental Business Company specified, it demonstrated that it could tackle most of the issues. According to the management team involved in the selection, no company wrote a better proposal report than ManageEngine. When presented with several alternatives, the sensible course is to choose the strongest one. The services that ManageEngine provides to its customers match our requirements, and we therefore believe it will produce quality work for us.

We live in a dynamic world, and companies therefore need management systems that enable them to compete favorably in it (Dunn, 2004). ManageEngine can provide this through network visualization, which involves first identifying all the networks in an organization and then grouping them into appropriate views as the organization requires. This is exactly what MEBC needs. Since all the regional and local offices in London, Leeds, and Aberdeen are already known, ManageEngine will not need to identify them; it will only need to group them appropriately, using special software, for easy management. For example, ManageEngine offers software known as 'custom maps' that can provide a magnified view of a network, which can be used to view and link the three regional offices and all other local offices. This means that all the offices can be configured so that they can be viewed and managed from one central point; in other words, every office will be remotely configurable and visible from a single computer. Remote configuration is one of the requirements MEBC wants a new management system to fulfill, so I believe ManageEngine is the right company for this job. The company has explained that the new management system will operate through automated responses, which implies that remote configuration is possible. Before discussing ManageEngine further, it is worth briefly covering the other six companies that presented proposals to us.

Overview of the Strength and Weaknesses of the Proposed Systems

The following overview of the strengths and weaknesses of the proposed systems is based on how well the main features of each system conform to MEBC's requirements for the NMS, and on the absence of features that are critical to the system in the short and long term.

GFI

Strengths

  • Advanced monitoring capabilities, covering databases, web servers, processes, services, and others.
  • Monitoring functions of vital computer indicators.
  • Easy to use and learn, with the ability to take corrective actions automatically.
  • Remote status access.
  • Advanced Internet capabilities.
  • Affordable price.

Weaknesses

  • Incompatible with Linux for future implementation
  • Redundant functions, not critical to the company at the present point.
  • The absence of recovery functions.
  • The absence of remote assistance

Verdict

GFI can be seen as a good standalone network monitoring tool, rather than a network management system. The main strengths of the system are in features and functions which are not necessarily within the list of the essential functions MEBC seeks. At the same time, the absence of necessary functions can be seen as the main criterion for rejecting the GFI proposal.

HP Open View

Strengths

  • Easy and convenient graphical representation of the network map, with problem tracking and indication.
  • Proactive management through providing a dynamic view of the network.
  • The possibility for expanding the capabilities of the system through plug-ins.
  • Providing reports and statistics.

Weaknesses

  • High hardware requirements.
  • Incompatibility with Linux for future installations
  • Costs
  • Cannot be automatically configured.

Verdict

HP Open View is an excellent solution from a well-known brand. The high cost of the solution, characteristic of first-tier vendors, is combined with the need for trained IT staff to install and configure the system, whereas one of MEBC's main requirements is automated setup and configuration. In addition, the system requires hardware from the same vendor.

EM7

Strengths

  • Compatibility with different hardware and operating systems.
  • Dynamic environment and customization capability.
  • Cost-efficient solution.
  • Remote configuration and control.

Weaknesses

  • Little or no information on configuration capabilities.
  • The integrated tools are concerned with applications that are not within the current scope of MEBC.
  • The emphasis on monitoring, with little control capabilities.

Verdict

The solution provided by Science Logic is a good option for companies that want to integrate a single all-in-one solution covering various aspects of IT information. In that regard, features that can be considered central to the system, such as VoIP Cisco management and dynamic applications for satellite network systems, are not within MEBC's current or future needs and thus cannot be counted as advantages. For the core needs of MEBC, EM7 fails to provide a solution.

SNMPC 7

Strengths

  • Remote access and management.
  • Pro-active management.
  • Different upgradable editions of the system, facilitating flexible customization.

Weaknesses

  • The absence of compatibility options for a future OS switch.
  • A small number of configuration options.
  • Complex interface.
  • Essential options, such as remote console user access, are omitted from the enterprise edition and must be purchased separately.

Verdict

The solution proposed by Castle Rock Computing lacks the majority of the capabilities MEBC identified as needs, and its main features are present in other products. Including only a single remote access license in the enterprise edition is a serious drawback. The system can be recommended for small and small-to-medium enterprises whose planned future infrastructure upgrades will not require serious modifications.

IPSwitch

Strengths

  • Automatic discovery and configuration out of the box.
  • Unlimited remote networks.
  • Dynamic.
  • Pro-active.
  • Free trial.
  • Economic bandwidth utilization for reporting.
  • Visual mapping.

Weaknesses

  • Vague recovery options.
  • High system requirements.
  • No indication of support and/or training options for personnel.
  • Attachment to Microsoft products.

Verdict

The main attribute of the proposed system is the availability of a free 30-day trial of the system and its compatibility. However, such a trial is of limited value, since the suitability of the system cannot be fully proven in a test environment. The solution will also be unable to support a future switch to Linux. In general, the main aspects of the system fit the main requirements of MEBC, and thus, despite the lack of a thorough description, it can be considered one of the two leading alternatives for the company.

MAPIt

Strengths

  • Centralized, decentralized, and remote management options.
  • Graphical representation of the networks along with a robust search engine.
  • A hardware solution is more reliable than a software one.

Weaknesses

  • The requirement to purchase hardware.
  • A hardware solution, rather than a management option.

Verdict

The option proposed by Siemon focuses on providing the physical layout of the network. The costs of such a system are substantially higher, and it is correspondingly more difficult to switch to other options or other providers because the infrastructure itself would have to change. The reliability advantage might be significant, although the requirement to purchase the software and the hardware from a single vendor can be too costly for that advantage to count.

Manage Engine

Strengths

  • Automation options.
  • Network visualization
  • Dynamic network discovery.
  • Reporting options.
  • Low Cost

Weaknesses

  • Compatibility question in the future.
  • Scalability options.

Verdict

The solution proposed by Networks Unlimited can be seen as the most appropriate in the context of MEBC's needs. Its few weaknesses are compensated by the fact that the strengths of the system address the majority of the problems in the company, while the weaknesses do not concern any pressing issue. Combined with its low cost, Manage Engine is the most appropriate solution offered to MEBC.

Analysis

GFI is one of the companies that sent a strong proposal to MEBC; however, GFI did not mention cost anywhere in its proposal report. The services GFI proposed are good, but they appear to be very expensive. The main aim of doing business is to make maximum profit at minimum cost (Dunn, 2004). For this reason we chose to work with ManageEngine, which can offer the same services as GFI at lower prices; indeed, the heading of the ManageEngine proposal stated that it offers cost-effective network management. That said, one should not compromise on quality simply to save money. If the services offered by two companies are equal in quality, the next aspect to consider is cost, and the cheaper alternative should be chosen. This is the criterion we used in selecting ManageEngine over GFI, which were the two most competitive companies among those that submitted proposals.

Midlands Environmental Business Company (MEBC) also received a proposal from HP. Remote configuration of all our regional and local offices must be taken into account by any company interested in providing services to us. In short, the HP OpenView report did not address most of MEBC's needs, and we therefore decided not to choose it for the job we required.

The company also received a report from EM7, but in our view the report was not detailed enough. It was full of graphics that required experts to interpret, whereas we expected the company to explain in detail the services it can offer and how they would be delivered.

MapIT G2 is known to many people as the Siemon Company. It also presented a proposal report to us, but the report suffered from the same problem as the one submitted by EM7: it relied on many graphics to explain what it can offer rather than on plain descriptive language that even laypeople could understand. Graphics can be misinterpreted, so our selection team chose ManageEngine, which described most things in language all of us could understand. Clarity is important in getting a message across, and this is another reason we chose ManageEngine for the job.

Ipswitch presented a very shallow proposal report. In other words, the report was little more than a summary, so it was difficult for us to establish whether the company had the capacity required to produce a high-quality network management system. The report submitted by ManageEngine was more elaborate and detailed than the one from Ipswitch. Even though Ipswitch appeared able to meet some of MEBC's requirements, it did not seem able to meet as many of them as ManageEngine could.

The last proposal report that Midlands Environmental Business Company received was from SNMPc7. I have to admit that if we had been asked to shortlist six of the seven companies that presented proposals, we would have dropped SNMPc7, because its proposal fell well below our expectations. I say 'we' because, as stated earlier, I am one of the MEBC management team members and was involved in the selection process. The reason we could not choose SNMPc7 is that its report was too shallow to provide any tangible evidence that the company could do meaningful work for us. This again gave ManageEngine an advantage over the other companies, and it became the best choice for the network upgrade work at MEBC.

I have now given the reasons why the reports from the other companies were rejected and only the report from ManageEngine was accepted. The decision was made after considering FCAPS, a standard way of characterizing network monitoring solutions: 'F' stands for faults, 'C' for configuration, 'A' for accounting, 'P' for performance, and 'S' for security. ManageEngine indicated in its report that it is in a position to fulfill all of these requirements, and we therefore awarded it the contract to upgrade our network infrastructure. I expect ManageEngine to produce quality work for MEBC now that it has been chosen.

Considering faults first, ManageEngine proposes proactive monitoring of the network infrastructure through continuous surveillance to detect faults in the system. In fact, its report indicates that it offers surveillance 24 hours a day, seven days a week, which means that any error in the system can be detected as early as possible, before it causes further damage. Detecting faults becomes easier when the system is automated, and automation is another feature that ManageEngine proposes to include in our new network infrastructure. This is one of the requirements MEBC wants fulfilled in its network monitoring solution.

Turning to configuration, ManageEngine is the company that will fulfill this requirement. Remote configuration is a requirement MEBC specified for its network monitoring solution, and ManageEngine stated that it can provide it by configuring the monitoring system to automatically raise tickets to the relevant resources. Once the system operates automatically, it can be configured remotely and operated without physical contact. This reduces wasted time, because several systems can be operated from one point at once. This is the direction every company wants to take, and MEBC is no exception. I hope ManageEngine will be able to fulfill this requirement for MEBC.

Accounting is also a very important aspect of network monitoring and is represented by the 'A' in FCAPS. There can be no proper management without proper record keeping (Dunn, 2004). The conclusion of the ManageEngine report indicates that a software tool known as ManageEngine OpManager can be used to save on the IT budget. OpManager can also generate many reports covering different periods, which will benefit Midlands Environmental Business Company by saving time and energy. Without software such as OpManager, retrieving particular files used three or four months ago may consume a lot of time, especially if the office records are disorganized and the files exist only in hard copy. The company proposed in its report that OpManager can track and save the utilization pattern for the last six months at the MEBC office in London. This is just one of the examples ManageEngine gave in its proposal. The mention of specific MEBC offices, such as the one in London, indicated to us that ManageEngine understands Midlands Environmental Business Company and therefore has a clear understanding of the scope of the work to be done.

The performance of any system should be given close attention, because if a system cannot perform its intended tasks effectively, the objectives of the organization cannot be achieved (Dunn, 2004). The performance of the network infrastructure ManageEngine is proposing will be effective because, as stated earlier, continuous monitoring to detect and correct faults will be ensured. ManageEngine proposes proactive health monitoring based on thresholds: once the software is installed to monitor the health of the most critical devices, alerts will be sent to the operator whenever the performance of a device deteriorates, so that corrective measures can be taken in response. Performance is also expected to be high because ManageEngine proposes an automated response to failures, achieved by sending notifications by email or SMS to the right people whenever errors occur. Continuous monitoring of the network infrastructure, as proposed by ManageEngine, will ensure the effective performance of the whole network.
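As a rough illustration of the kind of threshold-based alerting described here, the generic Python sketch below compares polled device readings against fixed thresholds and raises a notification when a limit is exceeded; the device names, metrics, thresholds, and notify() hook are hypothetical and do not represent ManageEngine's actual software.

```python
# Generic sketch of threshold-based health monitoring with automated
# notification; names and limits are illustrative assumptions only.
THRESHOLDS = {"cpu_percent": 90, "memory_percent": 85, "disk_percent": 95}

def notify(device: str, metric: str, value: float, limit: float) -> None:
    # In a real deployment this hook would send an email or SMS to the operator.
    print(f"ALERT {device}: {metric}={value} exceeded threshold {limit}")

def check_device(device: str, readings: dict) -> None:
    """Compare the latest readings for one device against its thresholds."""
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is not None and value > limit:
            notify(device, metric, value, limit)

# Example poll of one monitored device.
check_device("london-core-switch", {"cpu_percent": 93.5, "memory_percent": 71.0})
```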

Under FCAPS, 'S' stands for security. The security of the network infrastructure should always be ensured so that the operation of the organization is not interfered with. In this case security is not a concern, because the relevant security issues have already been resolved by MEBC's management. Security means preventing both internal and external damage.

The network components that ManageEngine proposes to install are routers, switches, servers, DNS, and wireless components. These components can be used both now and in the future because their installation is intended to be long-term. The operating systems ManageEngine proposes to support in the network infrastructure include Windows and Linux, which are also required now and in the future; in other words, the work ManageEngine proposes will benefit MEBC well into the future. In addition, database servers such as SQL, MySQL, and Oracle are to be included in the network infrastructure. These will improve monitoring of the infrastructure, and the security of the system will thus be better assured.

Even though we chose ManageEngine to do the work for us, I think its report had weaknesses. The company should have used more graphic representations to show what it can do; the only graphic in the report was the custom map representing Cisco networking. This is a weakness ManageEngine should correct in the future. Another weakness is that there was no estimate of the total cost of completing the work. No company indicated the estimated costs of the different equipment in its report, which made comparison on price somewhat challenging, but we eventually selected ManageEngine because it addressed cost-effectiveness in its report.

In conclusion, the network monitoring solution will ensure effective communication within MEBC because all the offices will be visible from computers stationed in each office, making coordination between offices easier. Cisco networking allows easy sharing of data among different users, so access to information in the company will improve once the work is completed. This will also speed up the rate at which business is done at Midlands Environmental Business Company, because wasted time will be a thing of the past, as already explained.

Reference List

Carey, P., Tanewski, G., and Simnett, R. (2000). Demand for network literature and directions for future research. Journal of Information Technology 19 (supplement): 37-51.

Dunn, P. (2004). The Impact of Proper Networking Infrastructure in Organizations. Journal of Management 30 (3): 397-412.

“An Empirical Analysis of the Business Value of Open Source Infrastructure Technologies”

Introduction

Information and communication technology (popularly known as ICT) is perhaps one of the most dynamic fields in society today. New developments and innovations take place daily in this field, and what is high technology today may be obsolete tomorrow.

Free/Libre Open Source Software (herein referred to as FLOSS) is considered a fairly recent major development in this area, according to Chengalur-Smith, Nevo and Demertzoglou (2010). According to these scholars, FLOSS-based innovations have published their source code, which is made freely available to end users. Users can also modify the source code, in addition to accessing it freely.

Since their emergence, FLOSS-based technologies have been regarded as a revolutionary development in this industry. Today, the technology has become a force to reckon with as far as open source developments in various realms are concerned.

Governments, academic institutions and business organizations all over the world have embraced FLOSS-based technologies, which currently compete with commercial software on an equal basis.

This paper critically reviews an article addressing this topic. The article is titled An empirical analysis of the business value of open source infrastructure technologies, and it is authored by Chengalur-Smith et al. (2010). The article was published in the Journal of the Association for Information Systems, volume 11, special issue, in November 2010.

It is a 22-page article, found on pages 708 to 729. In this critical review, several aspects of the article will be addressed: the purpose and hypotheses of the study reported in the article, the type and design of the study, the conclusions made by the researchers, the effectiveness of the presentation of the data, and the value of the study or its practical application.

The limitations of the study and the opinion of this author regarding this study will also be included in this critical review.

Purpose and Hypotheses of the Study

The purpose of the study reported in this article was to analyze the business value of open source infrastructure technologies, given that business organizations, governments and other institutions have continued to explore FLOSS-based technologies as alternatives to commercial versions of the technology.

However, despite this fact, there is a lack of empirical data on the business value of such innovations, and this study set out to address this gap.

The researchers set out to test three hypotheses formulated at the onset of the study. The first hypothesis stated that the business value of FLOSS-based technologies increased with the absorptive capacity of the IT staff for an open source infrastructure technology (Chengalur-Smith et al., 2010, p. 712).

The second hypothesis was divided into two parts. The first part stated that the extent of utilization of the focal FLOSS-based technology increased with the degree of source openness of an entity's information technology infrastructure.

The second part of this hypothesis stated that an increase in the extent of utilization of FLOSS technologies led to an increase in the business value that accrues from the technology. The third hypothesis stated that business value accrued from this technology increased with the strengthening of the ties with the open source community of practice.

Study Design

This study was non-experimental, as there were no experimental or control groups. It was a mixture of quantitative and qualitative research, with the open source database MySQL taken as the case study.

Questionnaires were sent to 3,000 members who were using this database, giving the study its quantitative dimension. The qualitative aspect lies in the descriptions used and the review of the literature in this field.

Conclusions of the Study

As already indicated, the researchers were interested in testing the three hypotheses identified above. The analysis of data supported hypothesis one, where a link was found between the IT staff absorptive capacity for the innovation and the business value accrued from the same by the organization.

Hypothesis two was also supported by data collected, where a link was established between the source openness of the information technology infrastructure, extent of utilization and the value accrued from the technology.

Likewise, hypothesis three was also supported by the data, where a link was found between the information technology staff’s ties to the technology’s end user and developer community and the business value that the organization accrued from the technology (Chengalur-Smith et al, 2010).

Effectiveness of Data Presentation and the Value of the Study

The researchers used a combination of tools such as tables and diagrams to present the results of this study. The use of the structural model to present the connection between the FLOSS-based technology and the three hypotheses was particularly helpful, given that it provides the reader with a detailed and vivid picture of the connections between the various variables of the study.

Such a visual presentation makes it easier for the reader to make sense out of the statistics presented.

The explanations given by the researchers are clear and understandable, especially thanks to their deliberate effort to tie all three hypotheses together and, in turn, to the FLOSS-based technology.

The significance of the findings of this study cannot be downplayed. There is little literature in this field, despite it being an emerging area widely used by businesses and governments around the world. As such, this study is significant, as it expands the knowledge base in the field and has practical application.

Integrity and Credibility of the Study

This study is highly credible, in the opinion of this author, and its integrity is beyond reproach. This is especially so given that the three authors are established figures in the academic field of information technology and are all affiliated with reputable institutions of higher learning. The researchers are also honest, as they take care to make the limitations of the study known.

Conclusion: Limitations of the Study

As indicated above, the study has several limitations, and the researchers have acknowledged them towards the end of the article. For example, the study was limited by the fact that it used only key informants who directly implement and manage the FLOSS-based technologies in the selected organizations; it failed to incorporate other parties such as developers and end users.

The researchers also acknowledge that the study may be of little value to producers of FLOSS-based technologies, given that it focused only on organizations that had already implemented MySQL. All in all, however, this is a valuable study despite its limitations.

References

Chengalur-Smith, I., Nevo, S., & Demertzoglou, P. (2010). An empirical analysis of the business value of open source infrastructure technologies. Journal of the Association for Information Systems, 11(11/12): 708-729.

Public Key Infrastructure and Certification Authority

The Fundamentals of PKI

A basic public key infrastructure comprises several elements, including policies, software, and hardware, and is intended to manage the creation and distribution of digital certificates and keys. Digital certificates can be considered the core of a PKI because they create the linkage between a public key and the subject of a given certificate. The other key elements are outlined below:

  • The first element is termed a “certificate authority” (CA). It is a service provider that is used to maintain authorization procedures (regarding the end-users, computers, or any other entities).
  • The second element is termed a “registration authority.” For the most part, it is also known as a subordinate certificate authority because it verifies certificate requests on behalf of the root CA for specific users (Pfleeger, Pfleeger, & Margulies, 2015).
  • The third element is a database that is used to store certificates and information regarding revocations, requests, and other activities.
  • The fourth element is a certificate store that is used to keep the information regarding private keys or any of the certificates that were issued earlier.

After the identities of the end-users are verified by the CA, digital certificates are issued. The CA's own self-signed root certificate discloses its public key to relying parties, while the corresponding private key is kept secret and used to sign the certificates the CA issues. Another notion worth mentioning is the “chain of trust”: root CAs are embedded, for instance, in Web browsers and trusted by default, so any certificate that chains back to them can be verified. Information about the algorithms used is also contained in these digital certificates.
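To make the relationship between the root CA, its private key, and an issued certificate concrete, the following minimal sketch creates a self-signed test root and one end-entity certificate using Python's third-party cryptography package; the names, key sizes, and validity periods are illustrative assumptions rather than part of any particular PKI product.

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

now = datetime.datetime.utcnow()

# 1. Root CA: a key pair plus a self-signed certificate (the trust anchor).
root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Test Root CA")])
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name)
    .issuer_name(root_name)                       # self-signed: subject == issuer
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

# 2. End-entity certificate: binds a subject's public key to its name and is
#    signed with the root CA's private key, forming the chain of trust.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"developer01")]))
    .issuer_name(root_cert.subject)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(root_key, hashes.SHA256())
)

# 3. A relying party that trusts root_cert can check leaf_cert's signature with
#    the root's public key, just as a browser does with its embedded roots.
root_cert.public_key().verify(
    leaf_cert.signature,
    leaf_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),                           # default padding for RSA certificate signatures
    leaf_cert.signature_hash_algorithm,
)
print("leaf certificate verifies against the test root CA")
```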

PKI and the Company’s Software

One way a PKI could help with signing the company's software is a solution that accounts for the complexity of the testing and development environments. The authenticity of the software can be tested by deploying a test certificate server that issues a test root certificate (Sinn, 2015). Microsoft's Active Directory Certificate Services can be used to perform this task, with Group Policy enabled so that certificates can be easily revoked and managed. Other tools (such as OpenCA or EJBCA) can also be used, and there is a variety of ways to deploy the CA testing procedure:

  • First, the certificates should be issued to all CA testers and developers that are involved (Wu & Irwin, 2016).
  • Second, certain requests to the server regarding the enrollment of certificates should be made by the client. The administrator should perform those requests either manually or using an ACL (Sinn, 2015).
  • Third, the certificate requests may be made by certain power users (such as team leaders) to make it easier for the end-user.

One of the most important steps in this case is the automation of the signing process, and the developers should build it into the development environment. By doing this, the team will be able to avoid issues and ensure that the end product is of high quality. This would especially benefit users of complex environments, who are usually forced to maintain several sets of signing requirements (signature packaging configurations and other conventional applications), as sketched below.
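As an illustration of what such automation might look like, the hypothetical build-step helpers below sign an artifact with a developer key and verify the detached signature, again using Python's cryptography package; the function names, file patterns, and padding choice are assumptions made for this sketch, not a prescribed company workflow.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# PSS padding is one common choice for new RSA signatures; an assumption here.
PSS_PADDING = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                          salt_length=padding.PSS.MAX_LENGTH)

def sign_artifact(artifact: Path, key_pem: bytes, password=None) -> bytes:
    """Produce a detached signature for a build artifact with the signer's private key."""
    private_key = serialization.load_pem_private_key(key_pem, password=password)
    return private_key.sign(artifact.read_bytes(), PSS_PADDING, hashes.SHA256())

def verify_artifact(artifact: Path, signature: bytes, public_key: rsa.RSAPublicKey) -> bool:
    """Return True if the artifact matches the detached signature, False otherwise."""
    try:
        public_key.verify(signature, artifact.read_bytes(), PSS_PADDING, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

def sign_build_output(build_dir: Path, key_pem: bytes) -> None:
    """Illustrative build step: sign every artifact and ship the signature alongside it."""
    for artifact in build_dir.glob("*.whl"):
        signature = sign_artifact(artifact, key_pem)
        (artifact.parent / (artifact.name + ".sig")).write_bytes(signature)
```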

The Comparison of Public/In-House CAs and Recommendations

Several advantages are characteristic of an in-house CA. First of all, this type of CA allows the organization to manage its keys and certificates in a simple and easily understandable way (Conklin, White, Williams, Davis, & Cothren, 2016). The key reason is the removal of external entities and the absence of any dependencies that would interfere with the certificates mentioned above. Second, the CA can be used together with Microsoft's Active Directory.

This fact also positively influences the process of managing the CA. The disadvantages of the in-house CAs include, in the first place, the complexity of the implementation of this type of CA (Conklin et al., 2016). It is also safe to say that the organization, in this case, is responsible for the development and implementation of the PKI. The third disadvantage revolves around the fact that external parties will not trust an in-house CA.

The first advantage of public CAs is that the provider takes on much of the responsibility for the organizational PKI. Moreover, the majority of external parties accept certificates that are signed by trusted public CAs (such as SecureNet, VeriSign, or Comodo) (Wu & Irwin, 2016). One of the core disadvantages, at the same time, is that the linkage between the organizational infrastructure and the public CA becomes rather restrictive. Additionally, the use of public CAs generates recurring pay-per-certificate expenditures. In the longer term, this means potential issues with certificate management and inflexibility in CA configuration (Pfleeger et al., 2015).

It is recommended to use an in-house CA because there is no need to spend money on pay-per-certificate services. In-house CAs are also easy to configure and tend to be much cheaper than their public counterparts. They also benefit the PKI by simplifying the process of issuing new certificates.

References

Conklin, A., White, G. B., Williams, D., Davis, R., & Cothren, C. (2016). Principles of computer security (4th ed.). New York, NY: McGraw Hill.

Pfleeger, C. P., Pfleeger, S., & Margulies, J. (2015). Security in computing. Upper Saddle River, NJ: Prentice Hall.

Sinn, R. (2015). Software security technologies. Boston, MA: Thomson.

Wu, C. J., & Irwin, J. D. (2016). Introduction to computer networks and cybersecurity. Boca Raton, FL: CRC.

Critical Infrastructure Vulnerability and Protection

In this session-long project, the topic of interest is the critical infrastructure protection (CIP) of information and communication in the United States. Specifically, the report will assess and analyze the overall development of the US's critical information infrastructure protection (CIIP). Over their existence, Internet technologies have created both a host of new opportunities for economic development and many dangers for the world community. Challenges and threats emanating from the information space were included among the priority areas of work of the leading states of the world in the early 2000s. The United States was one of the first to work out a legislative framework for cyber policy, aimed primarily at ensuring the country's security after the terrorist attacks of 2001. Over time, more than a dozen legislative acts were adopted, and many committees and agencies responsible for the country's information security were created.

The report will examine the evolution of US doctrinal approaches to information security during the tenure of three presidents: George W. Bush, B. Obama, and D. Trump. It will trace how the priorities of American policy in this area have changed over time, as well as the development of US relations with other leading players in the cybersphere. Particular attention should be paid to Washington's policy aimed at ensuring the security of critical information infrastructure (CII) (Viira, 2018). Despite the adoption of several regulatory acts, the level of safety of CII facilities remains quite low. In general, an analysis of doctrinal documents will highlight several key features that characterize the development of US cybersecurity policy in recent years. In particular, the tendency toward unilateral actions, such as exerting sanction pressure on certain countries and their companies, is growing. At the same time, the issue of cybersecurity is often treated not as an independent area but only as a tool for achieving other, broader foreign and domestic political goals. In general, US policy in the field of information security is more reactive in nature, which cannot but affect its effectiveness.

It is important to analyze the approaches stated in key American doctrinal documents to ensuring cybersecurity and protecting CII objects. The aim is to find out whether it is possible to speak of a coherent line in US policy regarding the information space, and whether this policy is proactive or reactive. At this stage, the American establishment shows a clear desire to pursue a strict policy regarding the information space, attempting not only to protect the state from any kind of interference through CIP but also to explain its potential steps in advance.

In conclusion, the actions of the US leadership in practice are reduced to the development of new policy documents and the creation of specialized authorities. At the same time, despite the stated ambitious goals, it is difficult to talk about the effectiveness of the measures taken since the threats emanating from the information space are growing every year, and the level of security of CII objects remains low. Therefore, it is important to understand the course of action of the US in this regard and the nation’s approach to protect critical information infrastructure elements.

Reference

Viira, T. (2018). Lessons learned: Critical information infrastructure protection: How to protect critical information infrastructure. IT Governance Publishing.

Infrastructure for IT Security Policy: Remote Workers

Introduction

A lack of security policies is one of the most common sources of information leaks, increased exposure to cyberattacks, and misconduct related to remote workers. Curran (2020) states that “many organizations are now more vulnerable to security threats than ever before” due to the complexity of infrastructure and equipment (p. 11). Remote workers add a significant share of workload to cybersecurity tasks, partly because of their use of third-party software, personal devices that are not business-oriented, and other organizational issues (Curran, 2020). This paper summarizes the research article “Infrastructure for an IT security policy: Remote workers” by extracting its essential topics and analyzing their importance.

Purpose of the Paper and Data Gathering Method

The purpose of the research paper is to analyze the security policy for remote workers in Dubai, define its requirements and methods of protection, and give recommendations regarding existing security holes. It concerns both physical and digital information containers that pose threats to the company's private data. The paper draws on current data points from different sources related to security policy in general and to sources explicitly dedicated to issues with remote workers.

Discussion

The discussion part of the paper reviews the current security policies within Dubai. The paper's primary topic is digital security; however, it also discusses the threats of handling private physical documents outside the bounds of the company's office. Security policies aim not to enforce compliance but to show an optimal way to prevent and combat security threats by applying a comprehensive set of rules when dealing with sensitive information (Walker, 2019). The first part of the discussion covers the software requirements imposed by the government of Dubai, which mandate the use of licensed software only. By using inappropriately licensed or unlicensed software, remote workers put their company at risk of being targeted by malware and hackers (Sarginson, 2020). The paper discusses how to raise compliance with this regulation within the company by updating the security policy and hiring a specialist who can assist with this issue.

The next parts of the central section discuss the safety of using a Wi-Fi network and personal devices as a medium between a remote worker and the company's private servers. The paper urges the addition of further network protection to monitor and manage data transfer. According to Sarginson (2020), “many employees are working on less secure devices while at home, and on less secure networks than usual” (p. 10). Moreover, remote workers need to install and run additional protection software, such as firewalls and antiviruses, to ensure the safety of data.

The article also discusses a potential threat from direct physical access to the devices used for business purposes or the transfer of private data via paper or screen. Remote workers within Dubai require additional protection, and the companies need to increase their attention to remote operations. In the end, the paper lists what repercussions can be applied to the company and its workers for violating the security policies of Dubai.

Summary of Findings

Findings suggest that while the existing security policies do prevent some threats, numerous changes are necessary for the security policy of Dubai companies to become safer and more comfortable to implement and use. A detailed examination of each system has revealed weak spots that can be improved. It is essential to bring government policy guidelines to a higher level of comprehensibility, coverage, and application. Remote workers are a vital part of the structure of modern businesses, and the increased amount of cyberattacks shows the need for additional research and development on this topic.

The research paper emphasizes a sudden increase of remote workers in most companies due to the COVID-19 pandemic. Malecki (2020) states that “cybercriminals are exploiting the Covid-19 pandemic by launching ransomware attacks on unprepared, unprotected businesses” (p. 11). This recently emerged issue shows that malware attacks are a common and highly dangerous aspect of remote access. The paper concludes that the companies of Dubai need to update these regulations in order to decrease the number of security breaches.

Recommendations

The paper contains a set of recommendations regarding the necessary improvements to the system. It emphasizes the need for enforced software licensing, as it is a highly efficient and optimal means of regulation (Walker, 2019). The second crucial point is that the company's employees must be educated about security threats and know the security policies related to their field of work. Malecki (2020) states that “when properly educated and well prepared, employees can prove a crucial weapon in the fight against ransomware” (p. 11). Companies also need to regulate the ability of remote workers to connect to business-related computers and servers, and necessary restrictions must apply to the availability of private data through remote access. Former or current employees who hold this type of permit can commit violations or serve as an involuntary access point for a cyberattack (Dokuchaev et al., 2020). It is also crucial for the company to have an IT security specialist to manage and upgrade the system and advise on policymaking.

The Role of the Highlighted Text

The selected part of the paper describes several crucial elements in the security policy and the way these points must be addressed within the setting. It shows the importance of synchronization of remote workplaces with the company’s internal network, as well as the need for additional protection of channels through which remote workers connect to said network. Moreover, it describes the concerns regarding potential security threats that arise from handling the company’s private documents outside of the company’s offices.

The highlighted text expands the findings of the research paper and covers multiple security threats related to remote workers. The first part depicts how the synchronization of date and time affects the company. The second and third parts describe and expand readers’ knowledge of remote access to the business network, what threats can arise from inappropriate handling of this system, and how to avoid them. The final portion of the highlighted text refers to physical security holes that might occur in a remote workplace.

This part bears the utmost importance since non-compliance with the security policy regarding the listed points can cause a significant loss. Remote access threats are one of the most common holes in information systems (Dokuchaev et al., 2020). For example, private network traffic can be illegally accessed via malware or spoofing, it can be used to gain access and passwords to the system, and false data can be injected into the network packets (Dokuchaev et al., 2020). Therefore, the highlighted section is a crucial part of the paper, as its purpose is directly related to the topic and aims to inform businesses about these security holes and to prevent these issues from occurring.

References

Curran, K. (2020). Cyber security and the remote workforce. Computer Fraud & Security, 2020(6), 11-12. Web.

Dokuchaev, V., Maklachkova, V., & Statev, V. (2020). T-Comm, 14(1), 56-60. Web.

Malecki, F. (2020). Computer Fraud & Security, 2020(7), 10-12. Web.

Sarginson, N. (2020). Securing your remote workforce against new phishing attacks. Computer Fraud & Security, 2020(9), 9-12. Web.

Walker, C. K. (2019). The Palgrave Handbook of the Public Servant, 1-17. Web.

Cloud Storage Infrastructures

Brief History of Data Storage

The history of data storage devices can be briefly outlined as follows (Foote, 2017):

  1. The late 1950s – the early 1960s – hard disk drives. These work via magnetization of the film made out of ferromagnetic materials. HDDs are still used nowadays in most computers. Although their initial capacity was small, contemporary HDDs can contain terabytes of information.
  2. 1966 – semiconductor memory chips; these stored data in small circuits, which were called memory cells. Their capacity was 2,000 bits.
  3. 1969 – floppy disks – disks that were 8 inches in size; consisted of magnetic film stored in a flexible case made of plastic. Initially, their capacity was nearly 80 kB; later, more voluminous disks were created.
  4. 1976 – a new, 5.25-inch model of floppy disks; a smaller version of the large floppy disk. The new disk’s capacity was 110 kB.
  5. The 1980s – yet another model of a floppy disk, 3.5 inches in size. Their capacity started at 360 kB.
  6. 1980 – first optical discs. (The spelling with “c” instead of “k” refers to optical rather than magnetic storage devices.) Using the principles of optical data storage, CDs and DVDs were later developed. Their initial capacity was several hundred megabytes; contemporary optical discs can store tens of GB.
  7. 1985 – magneto-optical disks (5.25 inches and 3.5 inches). These employed optical and magnetic technology simultaneously so as to store information. Their capacities ranged from 128 MB to several GB.
  8. 2000 – flash drives. Consist of chips and transistors. First flash drives had the minimal capacity of several hundred MBs; contemporary flash drives can store hundreds of GBs.
  9. Cloud data storage – this new technology utilises remote servers for storing data that can be accessed via the Internet. The capacities of clouds are extremely large, for there can be numerous servers and storage arrays on which to keep the data.

Storage Bandwidth, IOPS and Latency

Storage Bandwidth, Storage IOPS, and Storage Latency

On the whole, storage bandwidth (also called storage throughput) is the maximum amount of data that can be transferred to or from a storage device per unit of time (Crump, 2013). For instance, it can be measured in MB/sec, so a bandwidth of 100 MB/sec means that the storage device can transfer 100 megabytes of information each second.

However, storage bandwidth is an aggregate notion that does not take several factors into account; in fact, it only shows the maximum throughput per second. In this respect, it is important to explain the notion of IOPS (Somasundaram & Shrivastava, 2009, pp. 45-47). IOPS stands for “input/output operations per second” and denotes the maximum number of storage transactions that can be completed each second on a given data storage device (Crump, 2013). The greater the IOPS, the more transactions can be done per second; however, the actual transfer rate also depends upon the size of each I/O request. So, in general, bandwidth = average size of input or output × IOPS (Burk, 2017). It should also be noted that IOPS is limited by the physical characteristics of the data storage device.

In addition, every input or output request takes a certain amount of time to finish; the mean amount of time it takes is called the average latency (Burk, 2017). Latency is usually measured in milliseconds (10⁻³ seconds), and the lower it is, the better (Burk, 2017). In storage devices, latency depends upon the amount of time it takes the reading/writing head to find the place on the drive where the required data is stored or is to be stored (Crump, 2013). Rotational latency is equal to half the time needed for a full rotation and, therefore, depends on the rotation speed of the drive (Somasundaram & Shrivastava, 2009, p. 46).

Thus, on the whole, IOPS shows the number of data transactions (and IOPS may depend on latency, which is a property of the hardware), but it does not take into account the amount of data transferred per transaction (Crump, 2013). Therefore, IOPS on its own is not enough to assess the rate at which a storage device can work; the latency, the bandwidth, and the average input/output size must also be taken into account (Burk, 2017), as in the sketch below.
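A small worked example may make these relationships concrete; the figures below are illustrative assumptions, not measurements of any particular device.

```python
# Worked example of the bandwidth/IOPS/latency relationships described above.

avg_io_size_kb = 8          # assumed average size of a single I/O request (KB)
iops = 12_800               # assumed I/O operations completed per second
rpm = 7_200                 # assumed spindle speed of a hypothetical HDD

# bandwidth = average I/O size x IOPS
bandwidth_mb_per_s = avg_io_size_kb * iops / 1024
print(f"Throughput: {bandwidth_mb_per_s:.0f} MB/s")                   # -> 100 MB/s

# rotational latency = half the time of one full rotation
full_rotation_ms = 60_000 / rpm                                        # 8.33 ms at 7,200 RPM
rotational_latency_ms = full_rotation_ms / 2
print(f"Average rotational latency: {rotational_latency_ms:.2f} ms")   # -> 4.17 ms
```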

Costs, Limitations, and Administrative Controls

When it comes to meeting the workload demand on a given storage device, it is paramount to take its IOPS into account. For instance, Somasundaram and Shrivastava (2009) provide an example in which the capacity requirement for a system of storage devices is 1.46 TB, but the maximum workload needed is estimated at nearly 9,000 IOPS (p. 48). At the same time, a 146 GB drive may provide only 180 IOPS. In this case, 9,000/180 = 50 disks would be needed just to meet the workload demand, although the capacity demand would have been met with merely 10 disks (Somasundaram & Shrivastava, 2009, p. 48). Therefore, in order to work around the physical limitations that prevent a single disk from delivering a large number of IOPS, several drives can be used, but this clearly increases the cost of the storage system considerably.
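The arithmetic behind this sizing example can be reproduced in a few lines; the figures come from the Somasundaram and Shrivastava example cited above, and 1 TB is taken as 1,000 GB, as in that example.

```python
# Re-deriving the sizing example: the number of drives is dictated by
# whichever requirement (capacity or IOPS) demands more spindles.
import math

required_capacity_gb = 1460   # 1.46 TB, using 1 TB = 1,000 GB as in the source example
required_iops = 9_000
drive_capacity_gb = 146
drive_iops = 180

disks_for_capacity = math.ceil(required_capacity_gb / drive_capacity_gb)   # 10
disks_for_workload = math.ceil(required_iops / drive_iops)                  # 50

disks_needed = max(disks_for_capacity, disks_for_workload)                  # 50
print(disks_for_capacity, disks_for_workload, disks_needed)
```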

As for the bandwidth, it should be noted that, in general, it is possible to reach quite high bandwidth values in a storage device (Somasundaram & Shrivastava, 2009). Also, when multiple storage devices need to be connected together, a choice may have to be made between hubs and switches. For instance, fabric switches allow full bandwidth between several port pairs, which increases speed; however, this may be rather costly. On the other hand, hubs are significantly cheaper but offer only shared bandwidth, so they may be considered primarily a low-cost way of expanding connectivity (Somasundaram & Shrivastava, 2009, p. 124).

Finally, when it comes to latency, high latency is considered one of the main performance killers in storage devices (Poulton, 2014, p. 437). Achieving high levels of IOPS is useless if the latency is not adequate to that figure (Poulton, 2014, p. 438). Therefore, when considering the overall performance of a storage device or system, it is pivotal to ensure that the latency (and, consequently, the rotation speed) is adequate to the IOPS of that device or system.

Comparison of Protocols

SCSI

When it comes to the Small Computer Systems Interface (SCSI) protocol, it should be pointed out that this protocol is generally used by operating systems to conduct input and output operations to data storage drives and peripheral devices. SCSI is typically used to connect tape drives and HDDs, but it can also be employed to connect an array of other devices such as CD drives or scanners. On the whole, SCSI is capable of connecting up to 16 devices in a single data exchange network (Search Storage, n.d.).

One of the important limitations of the SCSI protocol is the above-mentioned limit on the maximal number of devices which can be connected so as to form a single network; for instance, SAS can connect up to 65,535 devices in a single network (by using expanders), in contrast to SCSI’s mere 16 devices (Search Storage, n.d.). The performance of SCSI is also inadequate for numerous purposes, in which case it is possible to utilise iSCSI; the latter preserves the command set of SCSI by employing the method of embedding the SCSI-3 protocol over the Internet protocol suite while also offering certain advantages over SCSI (Search Storage, n.d.).

FCP

The Fibre Channel Protocol (FCP) is a SCSI interface protocol that employs a fibre channel connection, that is, a connection using optical fibre cables capable of transferring data at high speed; initially, the offered throughput was 100 MB/s, but modern variants can provide speeds of several gigabytes per second (Somasundaram & Shrivastava, 2009, p. 118). Therefore, one of the major differences between traditional SCSI and FCP is the cabling they use (parallel copper cabling versus optical fibre).

FCP is most commonly utilised for establishing a storage network, and thanks to the high speed of transfer, it is capable of providing quick access to the data located on various devices in the network (Ozar, 2012; Somasundaram & Shrivastava, 2009). It also allows for significantly increasing performance, for it excludes the possibility of interference between storage and data traffic (Ozar, 2012). Nevertheless, the speed of the connection is still limited, for it is lower than what one can get using a single computer (Ozar, 2012). In addition, the process of establishing, configuring and troubleshooting an FCP network may be rather tedious (Ozar, 2012).

iSCSI

On the whole, the Internet Small Computer Systems Interface (iSCSI) is an IP-based networking standard that encapsulates the SCSI protocol and supplies block-level access to a variety of storage devices by transferring SCSI commands over a TCP/IP network. In other words, using iSCSI entails mapping the storage via the Internet protocol suite (Ozar, 2012). In an iSCSI network, every storage device and each of the connected servers possesses its own IP address, and a connection to the device holding the required data is established by specifying the IP address associated with that storage device or drive. It is also noteworthy that Windows displays each of the drives connected to an iSCSI network as a separate hard drive (Ozar, 2012).

It should be observed that iSCSI (for instance, 1-gigabit iSCSI) is comparatively cheap, and it is rather easy to configure because a 1-gigabit Ethernet switch infrastructure is usually already present. Nevertheless, there are limitations to iSCSI: it is quite slow, and because of this, it is generally not appropriate for an SQL server, given how long operations conducted via iSCSI can take (Ozar, 2012). Nevertheless, iSCSI can be effectively used for virtualisation, for the latter does not require a considerable amount of storage throughput.

NAS

NAS (network-attached storage) is a computer data storage server and file-sharing device that can be attached to a local network (Somasundaram & Shrivastava, 2009, p. 149). This device supplies multiple benefits, such as server consolidation (one NAS is used instead of multiple servers) and the provision of file-level data sharing and access (Somasundaram & Shrivastava, 2009). NAS is attached to computers via a network; the network protocols used in this case typically include TCP/IP for organising data transfer, as well as NFS and CIFS for managing remote file service (Somasundaram & Shrivastava, 2009).

Generally speaking, NAS devices supply a shared file service in a standard Internet protocol network; they can also consolidate a number of multi-purpose file servers (Poulton, 2014). NAS devices offer multiple benefits when compared to other storage systems. For instance, in comparison to general-purpose servers, they focus on file serving and provide comprehensive and highly available access to information, increased flexibility and efficiency, as well as centralised storage and a simplified management procedure (Somasundaram & Shrivastava, 2009). However, NAS devices also have some limitations; for instance, they consume a considerable proportion of the bandwidth in the TCP/IP network. Because Ethernet connections are lossy, network congestion will occur sooner or later, which makes the overall performance of the underlying network critical.

FCoE

The Fibre Channel over Ethernet (FCoE) protocol is quite similar to FCP, except that it utilises Ethernet cables to carry the protocol; more specifically, 10-gigabit Ethernet links are employed (Ozar, 2012). Generally speaking, FCoE carries out input and output operations over the network by using a block access protocol (Hogan, 2012). In addition, unlike iSCSI, FCoE does not employ IP encapsulation, relying on Ethernet instead while retaining its independence from the forwarding scheme used by Ethernet; however, FCoE is similar to iSCSI in other respects (Hogan, 2012; Somasundaram & Shrivastava, 2009, p. 186).

When it comes to the limitations of FCoE, it should be noted that FCoE is rather difficult to configure, requiring zoning at the FCoE switch level followed by the LUN masking process (Hogan, 2012). In addition, FCoE does not supply virtualised MSCS support.

As for the utilisation of FCoE, it is mainly implemented in storage area networks (SANs) in data centres due to its usefulness in reducing the total amount of cabling needed in such centres (Poulton, 2014). FCoE also comes in handy when there is a need for a server virtualisation application, because such applications often need a large number of physical input/output connections for each of the connected servers (Somasundaram & Shrivastava, 2009).

Benefits and Functions of a Unity 450F Flash Storage Array

Principles of Functioning

On an all-flash storage array, the data is persisted in flash cells (single-level, multi-level, or, very rarely, triple-level cells), which are grouped into pages, which are in turn grouped into blocks. Initially, all the cells have a value of 1; this can be changed by a “program” operation (the application of a low voltage); however, erasure is only possible at the block level (a high voltage applied to a whole block; Poulton, 2014). Flash cells eventually fail due to physical wear; therefore, information is commonly backed up on redundant memory cells that are hidden from users (Poulton, 2014).

Storage arrays consist of front-end ports, processors, the cache, and the backend (Poulton, 2014). Front-end ports typically utilise FCP or Ethernet protocols (FCoE, iSCSI, SMB, or NFS); it should be noted that in order for a host to use resources from the storage array, it is necessary for that host to employ the same access protocol (Poulton, 2014). After the ports, the processors are located; these run the storage array’s firmware, deal with input/output and move it between ports and the cache. The cache, in turn, is located after the processors, and its main purpose is accelerating the performance of the array, which is paramount for mechanical disk-based arrays, and also quite important in flash storage arrays (Poulton, 2014). Finally, after the cache, the backend is to be found; in this part, additional processors may be located (or everything might be controlled by the processor from the front end), as well as the ports connecting the cache to the storage drives (Poulton, 2014).
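The following minimal Python sketch, offered purely as an illustration of the read path just described, models a request that is served from the cache when possible and only otherwise reaches the backend drives; the class and data are hypothetical.

    class ToyStorageArray:
        """Toy model of the read path: front-end port -> processor -> cache -> backend."""

        def __init__(self, backend_blocks):
            self.cache = {}                  # the cache sits between the processors and the backend
            self.backend = backend_blocks    # block address -> data held on the drives

        def read(self, block):
            if block in self.cache:          # cache hit: served without touching the backend
                return self.cache[block]
            data = self.backend[block]       # cache miss: fetch from the backend drives
            self.cache[block] = data         # populate the cache for later reads
            return data

    array = ToyStorageArray({0: b"boot", 1: b"data"})
    print(array.read(1), array.read(1))      # the second read is a cache hit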

The disks are divided into logical volumes. This is done via partitioning, that is, separating the physical volume into several regions and recording the capacity of these areas, their locations, and the addresses of the information clusters inside them (Somasundaram & Shrivastava, 2009). The information about partitioning is stored in a special area of the physical storage drive known as the partition table, which is accessed prior to any other part of the physical volume.
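As an illustration of the idea of a partition table, the hypothetical Python sketch below records, for each partition, its starting block address and its size; the names and figures are assumptions made for the example only.

    from dataclasses import dataclass

    @dataclass
    class PartitionEntry:
        """One entry of a simplified partition table."""
        name: str
        start_lba: int    # first logical block address of the partition
        size_blocks: int  # capacity of the partition, in blocks

    # Illustrative table splitting one physical volume into two regions.
    partition_table = [
        PartitionEntry("system", start_lba=2048, size_blocks=1_048_576),
        PartitionEntry("data", start_lba=1_050_624, size_blocks=4_194_304),
    ]
    for p in partition_table:
        print(p.name, "starts at LBA", p.start_lba, "and spans", p.size_blocks, "blocks")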

In a network, the storage array is accessed via a protocol. As has been noted, Ethernet protocols (FCoE, iSCSI, SMB, or NFS) or FCP are typically employed for this purpose (Poulton, 2014). Although it is important to take into account which protocol is being used, the logical partitions within the flash storage array may often be presented to the host as separate volumes for information storage (in iSCSI, for example). On the other hand, if file sharing is carried out in the network, the author of a file usually specifies what type of access other users will have to that file and controls the changes those users make to it (Somasundaram & Shrivastava, 2009).

Benefits

When it comes to a Unity 450F flash storage array, it should be noted that such an array may include from 6 to 250 drives, each with a capacity of 128 GB (Dell EMC, n.d.; Dell EMC, 2017). As a flash drive array, it permits access to the files stored on the drives at a greater speed, featuring enhanced productivity when compared to hard disk arrays (Poulton, 2014). Finally, the presence of redundant (hidden) memory helps ensure that the data stored on the drives will not be lost due to the wear of flash blocks or the failure of a controller (Poulton, 2014).

Disaster Recovery

Principles of Disaster Recovery

The term “disaster recovery” in the context of data storage refers to the actions, processes and procedures utilised to allow an organisation to recover or continue using key technological systems and infrastructure after a disaster (natural or man-made) has occurred. Generally speaking, disaster recovery planning starts with an analysis of potential business impacts; at this point, it is paramount to define a recovery time objective (RTO, the maximum acceptable period of time during which technologies can be offline) and a recovery point objective (RPO, the maximum acceptable amount of data, measured as a period of time, that may be lost from a piece of technology) (Google Cloud Platform, 2017). Next, it is recommended to create and follow a disaster recovery plan (Google Cloud Platform, 2017). Best practices include: identifying recovery goals; designing for full, rather than partial, recovery; making tasks and goals specific; introducing control measures; integrating standard security mechanisms; ensuring that the software remains properly licensed; maintaining several ways of recovering data; and regularly testing the recovery plan (Google Cloud Platform, 2017). It is also paramount to ensure that there are no SPOFs (single points of failure), i.e., no single components whose failure would cause the whole system to fail.
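The following small Python sketch illustrates how the two objectives can be checked for a given recovery scenario; the function and figures are illustrative assumptions rather than part of the cited guidance.

    def meets_objectives(downtime_h: float, data_loss_window_h: float,
                         rto_h: float, rpo_h: float) -> bool:
        """Return True if a recovery scenario satisfies both objectives:
        downtime within the RTO and the data-loss window within the RPO."""
        return downtime_h <= rto_h and data_loss_window_h <= rpo_h

    # Illustrative figures: a 4-hour RTO and a 1-hour RPO; nightly backups
    # give a 24-hour data-loss window, so the RPO is not met.
    print(meets_objectives(downtime_h=3, data_loss_window_h=24, rto_h=4, rpo_h=1))  # False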

Protecting from SPOF: Synchronous Remote Replication

In data storage disaster recovery, it is pivotal to make sure that a business does not lose all its data due to a SPOF, for instance, if all its servers are located in the same place. For this purpose, it is possible to employ the method of synchronous replication of data. The crux of this method is that all the data stored on a storage array is also immediately transferred to a different array situated in a different place, so that an exact replica of the data is created in a remote location (Poulton, 2014, p. 295). Thus, in the process of synchronous replication, the data is saved on a storage array and then also copied to the remote array; the remote array sends confirmation that the data has been received, and only upon receiving this confirmation is the write considered finished (Poulton, 2014). One of the major advantages of this method is that it involves zero data loss; however, the need to wait for confirmation from an external array means a considerable drop in the performance of the storage array (Poulton, 2014).
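A minimal Python sketch of this acknowledgement logic is given below; it is a toy model of the behaviour described above, not an implementation of any particular array's replication feature.

    class SynchronousReplicator:
        """Toy model of synchronous remote replication: a write is only
        acknowledged once the remote copy confirms it has received the data."""

        def __init__(self):
            self.local = {}
            self.remote = {}

        def _remote_write(self, key, value):
            self.remote[key] = value   # stands in for the network round trip
            return True                # confirmation from the remote array

        def write(self, key, value):
            self.local[key] = value
            acknowledged = self._remote_write(key, value)
            return acknowledged        # the write finishes only after confirmation

    replica = SynchronousReplicator()
    print(replica.write("invoice-42", b"payload"))  # True once both copies exist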

Protecting from SPOF: Local Replication

The method of local replication refers to the creation of a backup copy of the data in the same array or data centre so that if the data is destroyed in the primary storage location, the exact replica of the data would be saved in the target LUN, that is, in a reserve location (Somasundaram & Shrivastava, 2009, pp. 283-284). Local replicas can be utilised as an alternate source of backup, for data migration, as a testing platform, and so on. These replicas can prevent the loss of data in case of failure of the main server or array (Somasundaram & Shrivastava, 2009).

There are numerous methods of local replication. For instance, host-based local replication can be employed, in which file systems or logical volume managers (LVMs) carry out the process of local replication (Somasundaram & Shrivastava, 2009). These LVMs create and control logical volumes at the host level; logical volumes are mapped to two different physical partitions of the physical storage drive, and the data in each volume can be accessed independently of the other (Somasundaram & Shrivastava, 2009). This allows the information in a logical volume to be preserved if one of the physical volumes in which that information is stored suffers damage or data loss for any reason.

References

Burk, C. (2017). Storage performance: IOPS, latency and throughput. Web.

Crump, G. (2013). What is Latency? And how is it different from IOPS? Web.

Dell EMC. (n.d.). Dell EMC Unity 450F all-flash storage. Web.

Dell EMC. (2017). Dell EMC Unity all-flash storage. Web.

Foote, K. D. (2017). Web.

Google Cloud Platform. (2017). How to design a disaster recovery plan. Web.

Hogan, C. (2012). Web.

Ozar, B. (2012). Web.

Poulton, N. (2014). Data storage networking. Indianapolis, IN: John Wiley & Sons.

Search Storage. (n.d.). Web.

Somasundaram, G., & Shrivastava, A. (Eds.). (2009). Information storage and management: Storing, managing, and protecting digital information. Indianapolis, IN: Wiley Publishing.

Network Infrastructure: Ethernet Ports and Serial Around the Router

Introduction

The lab referred to as the second network infrastructure lab focuses on the concept of a router and, in particular, on its basic configuration. This lab aims at comprehending the concepts of Ethernet ports and serial interfaces around the router. It covers areas such as file sharing, which is done via the File Transfer Protocol and the Trivial File Transfer Protocol, as well as Telnet and the Cisco Discovery Protocol.

The Registered Jack

The Registered Jack – 45 (RJ-45) is the connector used on the UTP Ethernet network cable that is mainly recommended for connecting a personal computer through the console. It resembles a telephone connector but holds up to eight wires, making it wider than other Ethernet connectors. This has made it the most suitable choice for connecting a PC to a local area network (LAN) because of its high connection speed. This connector has been recommended and favored over others because it provides a firm fit in the appropriate sockets, which guarantees a reliable connection and successful communication.

Its maintenance requirements are also lower compared to other cables, as it is smaller while carrying sufficient power. Its connectivity is also not limited, as it has both male and female connectors. The connection can be extended when other cables are connected, resulting in a relatively low voltage and current. Recent research has indicated that the RJ-45 registered jack is commonly used in telecommunications firms, hence it is often termed the ever-present Ethernet connector. RJ-45 is, therefore, a vital part of the Ethernet, and failure to integrate it in the connection can cause adverse consequences. It can result in failed PC connections due to the lack of the hard line required to make the connection complete.

HyperTerminal

Terminal emulation and communication rely on a program known as HyperTerminal. The program was included with Windows 98 and is offered as part of Microsoft operating systems, enabling the use of the resources of another PC by creating a link between the operating systems. HyperTerminal has been defined as an application used to connect a computer to Telnet sites, online services, servers and bulletin board systems (remote systems).

The program has been configured to establish a console connection with a router. After connecting the console cable to the modem or router on the far end of the terminal, one then connects the RJ-45 console serial cable to the terminal. Once the connection has been made, the Run option is selected from the Windows Start menu; in the Run dialog box, hypertrm.exe is typed and Enter is pressed.

HyperTerminal then loads and its splash screen appears. The program works with several router modes, which include the Privileged Mode, the Specific Configuration Mode, the Global Configuration Mode and the User Exec Mode. Some modes require a password while others do not. The User Exec or Privileged Mode password is set either during the router’s initial configuration or at a later stage. The Global Configuration Mode, which configures access lists, routing protocols and interfaces, does not require a further password to access; it is entered from the Privileged Mode.

Interface serial

The show interfaces serial command displays details such as the line protocol, encapsulation, the Serial 0 interface and the Internet address.

  • EXEC mode – this basic EXEC mode is applied upon the establishment of a connection by the router. It is set as the default once the router connects to the computer.
  • Privileged mode – it is entered with the ‘enable’ command and is capable of executing all the show commands.
  • Global configuration mode – it is entered from the Privileged Mode with the ‘configure terminal’ (or ‘config t’) command and provides the highest level of access to the router during configuration.

The output of the show interfaces serial command also:

  • Ensures that the interface hardware is presently active.
  • Points out the hardware type in use.
  • Determines the Internet address and subnet mask.

Ethernet 0

The show interface fastethernet 0 command displays detailed information such as the line protocol, Internet address, ARP type, encapsulation and queueing strategy.

  • It shows the interface’s Maximum Transmission Unit (MTU).
  • It gives the interface’s bandwidth in kilobits per second.
  • It gives the delay of the interface in microseconds.

Routing

Routing helps to acquire information about neighboring network devices. Routing is defined as the means of choosing the network paths used to send network traffic. Packet-switching technology enables the routing of electronic data. Packets carrying logical addresses are passed from the source to the final destination through intermediate nodes with the aid of certain devices. These devices include bridges, firewalls, gateways, switches and routers. General-purpose computers can also be used at the intermediate stage, although their performance is limited because they are not specialized.

Routing tables are used to store route records and the forwarding information used in routing for a range of network destinations. Information about neighboring devices can therefore be obtained through routing. The router’s memory, where the routing tables are stored, is a very important determinant of efficient routing. Single-path routing algorithms are used when one network path is chosen at a time, whereas multipath algorithms are preferred when several paths are in use. The basic step is therefore to identify the most appropriate path to be used, and this is not an easy task.

The use of metrics as a means to determine and assess the path to be used is highly recommended and commonly applied. A metric is a standard measure used to determine optimal paths during routing. The data held for a route depends on the type of routing algorithm that has been chosen. The routing algorithms therefore play a key role, as they initialize the routing tables. Routers are in constant communication with each other and use various messages to maintain the routing tables. This communication between routers includes analyzing updates from other routers, building up a detailed picture of the network topology, and sending notifications to each other regarding the condition of the sender’s links.
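As a simple illustration of metric-based path selection, the hypothetical Python sketch below picks the next hop with the lowest metric from a small routing table; the addresses and metrics are assumptions made for the example.

    # Illustrative routing table: destination prefix -> list of (next hop, metric).
    routing_table = {
        "10.1.0.0/16": [("192.168.0.1", 20), ("192.168.0.2", 10)],
        "10.2.0.0/16": [("192.168.0.3", 5)],
    }

    def best_next_hop(prefix: str) -> str:
        """Pick the next hop with the lowest metric, as a single-path algorithm would."""
        hop, _metric = min(routing_table[prefix], key=lambda entry: entry[1])
        return hop

    print(best_next_hop("10.1.0.0/16"))  # 192.168.0.2, the lower-metric path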

A Telnet network

A Telnet network plays a key role as it serves as a connector to remote computers, also known as hosts, over TCP/IP networks. To connect to a Telnet server such as a remote host, one needs to use software called a Telnet client. The client machine effectively becomes a terminal once it connects to the remote host, and it is then able to interact with the remote computer.

Operating systems such as Windows 95, 2000 and XP in most cases have built-in Telnet client commands, as Telnet clients are present in most operating systems. To use the Telnet client from the operating system’s command line, one enters the command telnet host, replacing host with the name of the remote computer one wants to connect to.

To access a distant computer through the terminal of a remote computer, one needs to follow the specific network protocol provided. The protocol enables the distant computer to function online through an interface which is treated as part of the user’s local system. Telnet has been preferred for various reasons, the key one being that, when logged in as a regular user of the computer, one can view all the programs and data that have been installed and stored. The protocol is also important because it can be applied in technical support.

Telnet works in a simple manner. It uses software, known as the Telnet client, which has already been installed on the computer and creates a link with a distant computer. The Telnet client uses a command to send a request to the remote host, which in turn replies by asking one to type a username and a password. If the request is accepted by the remote host, the client connects with the host, making the computer in use a virtual terminal. This allows one to have full access to the host computer. Because of the username and password requirement, one needs to have an account set up on the distant host before sending a request. In some instances, computers on which Telnet has been installed may allow restricted access to guests.
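The exchange described above can be sketched with Python's standard telnetlib module (available up to Python 3.12), as shown below; the host name and prompt strings are placeholders and would differ between systems.

    import telnetlib  # part of the standard library up to Python 3.12

    def telnet_login(host: str, username: str, password: str) -> str:
        """Connect to a Telnet host, answer the username and password prompts,
        and return whatever the host prints next."""
        tn = telnetlib.Telnet(host, 23, timeout=10)
        tn.read_until(b"login: ", timeout=5)
        tn.write(username.encode("ascii") + b"\n")
        tn.read_until(b"Password: ", timeout=5)
        tn.write(password.encode("ascii") + b"\n")
        banner = tn.read_some()
        tn.close()
        return banner.decode("ascii", errors="replace")

    # Placeholder host name used for illustration only.
    print(telnet_login("remote.example.com", "guest", "guest"))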

The Trivial File Transfer Protocol

The Trivial File Transfer Protocol (TFTP) is used for transferring files with functions that are a basic form of those of FTP (TFTP SERVER, 2008). The Trivial File Transfer Protocol requires very little memory to implement and is vital for booting routers and other computers that lack storage devices. The protocol is mostly preferred when transferring small amounts of data or information, such as operating system images or IP phone firmware, between network hosts.

The Trivial File Transfer Protocol is also used for loading a basic kernel during the initial stages of various network-based installation systems. The communication process in this protocol relies on port 69, and the protocol is unable to list directory contents; it also lacks encryption and authentication mechanisms.

The mail, netascii and octet transfer modes are supported by this protocol. The mail transfer mode is considered obsolete and is hardly used. The netascii and octet modes correspond to FTP’s ASCII and image modes respectively. This protocol lacks privacy and its security is wanting; it is not advisable to use it over the open internet but rather on private local networks. The size of the file is not limited when the server and the client both support block number wraparound. The protocol runs over UDP, supplying its own session support and transport. The Trivial File Transfer Protocol can access a remote server in order to read files from and write files to that server.
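As an illustration of how simple the protocol is, the Python sketch below builds a TFTP read request (RRQ) packet as defined in RFC 1350 and sends it to UDP port 69; the server address and file name are placeholders used for the example only.

    import socket

    def build_tftp_rrq(filename: str, mode: str = "octet") -> bytes:
        """Build a TFTP read request: opcode 1, the file name, a zero byte,
        the transfer mode, and a final zero byte (RFC 1350)."""
        return b"\x00\x01" + filename.encode("ascii") + b"\x00" + mode.encode("ascii") + b"\x00"

    def request_file(server_ip: str, filename: str) -> None:
        """Send the RRQ to UDP port 69; the server replies from an ephemeral port."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(build_tftp_rrq(filename), (server_ip, 69))
        sock.close()

    # Placeholder server address and file name.
    request_file("192.0.2.20", "startup-config")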

The Trivial File Transfer Protocol, however, has its disadvantages. One of them is that it does not offer direct visibility and validation, which acts as a hindrance when accessing the available files and directories.

Router configuration files are backed up and restored by users with the help of the Trivial File Transfer Protocol. These files are very important in case the router or the switch fails completely. Backing up configuration data is very important when one needs to refer to it in the future or for purposes of documentation. This process requires one to copy the router configuration to a TFTP server by using the copy running-config tftp or copy startup-config tftp command, so that the configurations can later be restored. The process therefore backs up both the router configuration running in DRAM and the configuration kept in NVRAM. The show running-config command verifies the existing configuration in DRAM. After the existing configuration in DRAM has been verified, it is copied to NVRAM and, finally, to a TFTP server.

Conclusion

In conclusion, the lab is used to understand how to deal with Ethernet ports and to familiarize oneself with the Cisco Discovery Protocol. In addition, router configuration has been made easy with the use of the network lab.

Infrastructure-as-a-Service Concept

IaaS, or Infrastructure as a Service, is a model of cloud computing that provides users with an opportunity to outsource computer hardware, equipment, networking, and related services, among which are content delivery networks and load balancing. In Infrastructure as a Service, the provider owns the storage and equipment, which it lends to user companies for a payment that is assigned based on the resources used.

The provider is responsible for the maintenance and balanced operation of its equipment. IaaS carries a number of benefits for organizations, but it also has some disadvantages. Among the most well-known IaaS providers are Windows Azure, IBM SmartCloud Enterprise, Amazon Web Services, and Google Compute Engine. Before choosing to begin using IaaS, companies should evaluate all the pros and cons of this service properly.

The cloud infrastructure included in IaaS is not managed by the consumer. Instead, the customer controls the data with the help of applications and operating systems. In order to control the environment, clients employ an IT operations management console with a GUI (graphical user interface) (Reed, 2014). For large companies, one of the most important benefits provided by IaaS is the opportunity to outsource large portions of work to the cloud during the busiest seasons.

This process is also referred to as “cloud bursting.” Without the employment of IaaS, such work normally requires additional servers on on-premises systems. This way, the organizations using IaaS are able to save costs that otherwise would be spent on additional servers, which would only be used during certain periods of the fiscal year. Using IaaS, the user only pays for the times when the service was actually running, which allows the exact cost of the service to be calculated carefully.
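The pay-per-use idea can be illustrated with a small Python sketch comparing the cost of burst capacity rented by the hour with the cost of a permanently owned server; all figures are illustrative assumptions rather than real prices.

    def iaas_cost(hours_used: float, hourly_rate: float) -> float:
        """Pay only for the hours the rented instances actually ran."""
        return hours_used * hourly_rate

    def on_premise_cost(server_purchase: float, yearly_upkeep: float) -> float:
        """A dedicated server is paid for even when it sits idle."""
        return server_purchase + yearly_upkeep

    # Illustrative figures only: two busy months of burst capacity per year.
    burst_hours = 2 * 30 * 24
    print(iaas_cost(burst_hours, hourly_rate=0.40))                   # 576.0
    print(on_premise_cost(server_purchase=4000, yearly_upkeep=800))   # 4800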

This way, IaaS solutions may be customized to provide the best assistance, unique to each customer. Another significant benefit is timing. Since in the contemporary high-speed world time is everything, it can be confidently stated that a quick response from the service provider allows the client to save time, which translates into cost savings.

With IaaS, the system can be implemented within seconds, and the user does not need to wait for the installation of servers and networks. Access to IaaS is flexible and is available from any device and any location.

One of the main and most frequently discussed disadvantages of IaaS is security. There is a perception that, by using such services, clients lose control over their data; as a result, information may be accessed by someone else and its security may be violated. To address this worry, some IaaS providers offer a Private Cloud option, which represents an environment where one’s own servers are hosted. One more factor surrounded by a lot of argument is compliance.

In cases of disaster recovery, the issue is mainly resolved based on the national regulations of the providers’ domestic governments. IaaS providers usually have data stored abroad, which creates frustrations in this regard. Finally, IaaS outages are often spoken about, since such situations have already occurred, and in many cases there is no guarantee that the provider would be able to fix the system quickly and preserve all the data.

Reference List

Reed, R. (2014). Infrastructure-as-a-Service: Concepts, Advantages & Disadvantages. Web.

Om Limited Company’s New IT Infrastructure

Introduction

Business activities have kept on evolving over the recent past. Markets have evolved into something more diverse, and only the fittest and quickest-adapting businesses have survived in this era (Bent, Mette & Søren 2002). In the process of adaptation, corporates have met several challenges that have seen some of them fall to the bottom of the value chain.

Several factors have been highlighted as the main causes of these challenges. Over the years, issues such as professional conduct, business ethics and civility have been major contributors to the general success of corporates (Brenner 1997). Similarly, information technology has become a key aspect that largely affects the success of a business. The current global world is characterized by improved technology and advanced IT infrastructures.

Information technology is a vital element that largely contributes to the general success of a business. With the current state of the market, mailing and other slower means of communication have become a thing of the past (Gordijn 2002). With the current globalization, the world has become like a village market, and therefore any business activity should aim at reaching out to the entire world at large. This has become possible through improved technology, which has seen several corporates and companies cut across the global market.

This research report explores some of these technological advancements, especially in information technology, that are of concern to a business corporation today. The research features Om Limited, a company that deals with designer furniture and appliances. The company needs to upgrade its IT assets so as to cope with the growing competition in the market today.

As such, the research report highlights some of the key aspects that should be put into consideration in the process. It gives guidelines on the computer hardware and peripheral devices that should be employed. It also gives suggestions on the operating systems, productivity software and other specialist applications that need to be upgraded. The choice of telephony devices, IT use procedures and backup facilities to help improve Om Ltd’s output is also researched. All the suggestions in the report are justified with an explanation of how each meets the requirements of the company.

Background

Om Limited is a company that deals with the selling of designer furniture and appliances. It is a big company with its headquarters in Salford Quays. Its branch outlets, spreading through the Lowry to the Trafford Centre and Liverpool, show how the company has rapidly grown. As such, it is evident that the company has an increased customer base, an issue that poses production pressure on its output.

With the growing demand for its products, the company seeks to embark on a plan that will enable it to reach all its customers in the present competitive global market. This calls for a review of its IT infrastructure and other related technology appliances. Om Ltd has links with several independent designers. These designers have to go to the office in Salford Quays to submit their designs or send them by courier. Accepted designs are then sent by courier to a manufacturing plant near Cardiff. This whole process is slow and inefficient in many ways.

As such, Om Ltd wants to upgrade its IT assets so as to put in place faster and more efficient processes. One of the suggestions is to expand the company by setting up an in-house design studio. The studio will form part of the head office in Salford. Some of the IT assets that need to be improved include computer hardware and peripheral devices, productivity software, the operating systems and telephony devices, among others. These improvements would enhance connectivity between its three stores, the head office and the manufacturing plant.

In addition, they will provide new software for carrying out important business functions such as payroll and accounting. The research report hereby provides various suggestions and an evaluation of the impact each suggestion would have on the company’s business.

Computer hardware and peripheral devices

The computer is a good example of the tremendous improvement in technology that has changed people’s lives. Since its invention, it has revolutionized world business, with all sectors embracing its use in their day-to-day activities (Gordijn 2002). This has been attributed to its efficiency in processing and transferring data and storing information.

In addition, further developments have proved that the computer can do more than what we already know. This has led to the manufacturing of more advanced computers that further transform both the corporate and the business world. With such rapidly succeeding generations of more advanced computers, it is important for every corporate to be alert and ready to upgrade to the latest technologies (Hamel 2000). They should watch out for the latest computer hardware and other peripheral components.

Computer hardware is those parts of the computer that we can see. Neal (2008) describes the term as:

Hardware is the physical aspect of a computer. While computer software exists in the form of ideas and concepts, computer hardware exists in substance. By definition, the different parts of a computer that can be touched constitute computer hardware. Computer hardware includes Central Processing Unit (CPU), the Random Access Memory (RAM), motherboard, keyboards, monitors, case, drivers (hard, CD, DVD, floppy, optical, tape, etc.), microchips as well as computer peripherals like input- output and storage devices that are added to a host computer to enhance its abilities. Together they are often referred to as a personal computer. (p. 72).

On the other hand, peripheral components are those devices that can be attached to the computer so that, together with the computer, they achieve certain special tasks such as printing, storing more information or accessing the internet. They are supportive devices; the computer can still function without them. “The peripheral devices are the devices that are connected to the computer in order to get most of the advantage out of it. If you in to detail description of peripheral devices it can be explained as the devices that are optional, and which are not required in principle” (Neal 2008, p. 73).

Therefore, peripheral devices include computer scanners, mouse, printers, modems, digital cameras and cards. Information about the computer is very important in deciding which computer one should buy and for what purpose it is expected to serve.

Om Limited wants to set up an in-house design studio. The studio is expected to link with other branches of the company so as to speed up data transmission within and between the studio and the branches. Being a design company, its work entails much drawing.

As such, computer peripheral devices that help enhance the computer’s drawing ability should be preferred in the process of improving the IT assets, for example, advanced printers, scanners, digital cameras, digital flash disks, media devices, Blu-ray disks and compact disks, among others. All these peripheral devices play a big role in enhancing data processing and storage of information. Scanners help in inputting images and photographs into the computer (Neal 2008). A series of bits and bitmaps is used to transfer the scanned photographs into the computer.

In the computer, the photographs can be edited to improve their contrast and quality. Printers, on the other hand, help produce output of digital information on paper. Blu-ray disks and other media devices improve the storage of information as it is burned onto CDs and DVDs. Blu-ray drives have the ability to burn large amounts of data onto optical storage discs. Other storage devices such as digital flash disks can also be used to store more information.

Digital cameras can help in taking high resolution photographs as exhibits of the various designs. The photographs can be sent to the studio house in the company’s headquarters through various electronic means such as emails and faxes that are faster and more efficient than the use of couriers (Jose 2009).

Computer networking IT assets are of significance when it comes to internetworking computers. The job processes Om Ltd engages in require much interconnection between the data processing computers within its studio house. This interconnection helps in enabling the easy movement of data and information within and between the studio and the branches in the Trafford Centre and Liverpool. As a result, the IT assets improvement should consider acquiring more advanced IT assets that are efficient in data transfer (Neal 2008).

Computer networking in the modern competitive world requires more advanced networking hardware components. These components comprise modems, network cards and routers, among others. Modems are computer peripheral devices that help in dial-up connections. They connect the computer to the internet by converting the computer’s digital signals into analog signals for transmission over telephone and satellite links, and back again. As such, modems are of great significance to Om Ltd, as they enable the transfer of data and information over the internet through means such as email. They also increase the company’s research base, as studies of new designs can be done on the vast internet.

Network cards also enhance the communication of computers over the network: “They provide each computer with a MAC address that acts as a networking medium” (Stern & El-Ansary 1992, p. 67). With this state-of-the-art technology, emails and attached files can be sent across vast distances at the click of a button. Both wired and wireless computers in the studio are joined by routers. As a result, designers will not have to transport their pieces of work manually to the headquarters; instead, they can send them from one computer to another, courtesy of the routers.

Choice of computer hardware and peripheral devices

Speed and efficiency are the key determiners of the kind of computer one has to buy. Similarly, the nature of the job the computer is expected to do also plays a big role in the general decision about the right IT assets to be acquired (Stern & El-Ansary 1992). In order to obtain maximum value from the assets, the equipment agreed on should be relevant and pocket friendly. That is to say, instead of acquiring wholly new and expensive assets, the company can decide to improve the ones it is using by acquiring the appropriate hardware.

Various aspects have been suggested to be taken into consideration when assessing the IT assets required. For example, the company requires advanced peripheral devices such as printers and scanners. When choosing a printer, one should consider the color quality of the output. Getting such a state-of-the-art printer can be a complex task. However, a good printer should be able to support multiple applications.

Clock speed is a key element when it comes to choosing the right CPU. Jobs involving designer graphics require faster processors. In the past, we had processors with speeds below 30 megahertz (MHz); today, however, we have more advanced processors with speeds of up to 3000+ MHz (3 gigahertz). The speed is determined by the circuit boards, other relevant chips and the motherboard on which they are housed. Most of these chips can be upgraded without having to replace the whole motherboard.

Software developed now is designed to work with the newest and fastest processors. A number of the software packages recommended for the design job at Om Ltd require these high-speed processors and the latest software (Sullivan & Steven 2003). The choice of good computer hardware should take most of these aspects into consideration.

For the specifications of the recommended computer hardware please look below.

150 HP desktop with the following features

Type Intel Pentium Dual core E5300 processor
Cache 2 MB L2 cache
Motherboard 800 MHZ
Memory 3 GB
Floppy drive 3.5” 1.44MB floppy disk

50 Laptops

Type Intel core 2Duo P8700
Cache 3 MB L2 cache
Motherboard 1066MHZ
Graphic card Discrete nVidia FX 370M
Memory 2 GB
Hard disk 320 GB
Floppy Drive 3.5” 1.44 MB floppy disk

Scanners

The recommended specifications of scanners is listed below

Scanner Specifications
Scanner Type Universal Workgroup Scanner
Scanner Element CCD (2)
Light Source CCFL
Features Border Removal, Custom Color Dropout / Enhance Color, Deskew, Double Feed Detection, Punch Hole Removal, Skip Blank Page, Text Orientation Recognition
Max. Resolutions Optical: 1200dpi
Scanning Speed 20 pages per minute, 40 images per minute
Feeder Capacity 50 sheets
Scanning Mode Simplex, Duplex, Skip Blank Page, Color, Grayscale, Black and White, Error Diffusion, Advanced Text Enhancement (Two Types)
Max. Document Size 8.5″ x 14″
Interface Hi-Speed USB 2.0
Dimensions (W x D x H) 17.3″ x 15.7″ x 7.1″
Weight 15.2 lb.
OS Compatibility Has Drivers for Windows 7 (32/64), Vista (32/64)
Software Canon CaptureOnTouch, Canon CapturePerfect, Adobe Acrobat Standard, NewSoft Presto! BizCard, Nuance OmniPage SE, Nuance PaperPort Standard
Max. Power Consumption 33W or less (Energy Saving Mode: 3.7W or less)
Warranty Six-Year Advanced Exchange
Scanner Device Driver ISIS, TWAIN

Printers

Speed / monthly volume
Print speed, black (normal quality mode) Up to 31 ppm
Print speed, black (best quality mode) Up to 31 ppm
Print speed, color (normal quality mode) Up to 31 ppm
Print speed, color (best quality mode) Up to 31 ppm
First page out, black Less than 10 sec
First page out, color Less than 10 sec
Processor speed 533 MHz
Recommended monthly volume, maximum Up to 100000
Print quality / technology
Print technology Laser
Print quality, black Up to 600 x 600 dpi
Print quality, color Up to 600 x 600 dpi
Paper handling / media
Paper trays, max. 6
Input capacity, max. Up to 2600
Standard envelope capacity Up to 20
Envelope feeder No
Media sizes, std. Letter, legal, statement, executive, envelopes (No. 10, Monarch)
Media sizes, custom Multipurpose tray: 3 x 5 to 8.5 x 14 in; 500-sheet input trays: 5.8 x 8.3 to 8.5 x 14 in
Media weight, recommended Multipurpose tray: 16 to 58 lb bond; 500-sheet input trays: 16 to 32 lb bond
Media types Multipurpose tray: paper (plain, glossy, colored, preprinted, letterhead, recycled, HP tough and high-gloss laser), envelopes, transparencies, labels, cardstock; 500-sheet input trays: paper (plain, glossy, colored, preprinted, letterhead, recycled, HP tough and high-gloss laser), transparencies, labels
Memory / print languages
Memory, max. 544 MB (512 MB DDR SDRAM, 32 MB Flash memory on the formatter)
Memory slots Two 200-pin DDR DIMM slots, two Flash memory card slots
Print languages, std. HP PCL 6, HP PCL 5c, HP PostScript Level 3 emulation, HP-GL/2
Typefaces 93 internal TrueType fonts scalable in HP PCL and HP Postscript Level 3 emulation; additional font solutions available via Flash memory
Connectivity
Connectivity, std. IEEE 1284C-compliant bidirectional parallel port, USB 2.0 Hi-Speed port (compatible with USB 2.0 specifications), 2 open EIO slots, foreign interface port, accessory port for third-party solutions
Connectivity, opt. HP Jetdirect internal and external print servers, HP wireless print servers, Bluetooth wireless printer adapter
Macintosh compatible Yes
Dimensions / weight / warranty
Dimensions (w x d x h) 20.5 x 37.4 x 22.9 in (with paper tray extended)
Warranty, std. One-year, next-day, onsite warranty

Operating systems, productivity software, and specialist applications

There are various operating systems available today, owing to the advancement of technology. Today, the business world has been improved, as almost every task can now be done at the click of a button. Various developments in computer operating systems have seen the business platform reform from the old, slow business processes to new, modern processes characterized by high levels of accuracy and speed. Jose (2009) states:

Operating systems and productivity software have seen business personnel acquire

  1. Have a mastery of the Microsoft Office suite of business productivity programs (word processing, spreadsheets, databases, and presentation software).
  2. Master the skills necessary to produce professional looking documents utilizing software application (word processing, spreadsheets, databases, and presentation software).
  3. Are prepared with the specialist skills required for the Microsoft Office Specialist (MOS) certification in Microsoft Word, Microsoft Excel, Microsoft Access, and Microsoft PowerPoint.
  4. Demonstrate keyboarding speed and accuracy with a minimum touch keyboarding rate of 40 words per minute on a three-minute timed writing with 95% accuracy.
  5. Communicate clearly and effectively, both orally and in writing.
  6. Conduct research from a variety of sources.
  7. Demonstrate computer literacy.

With all these, information technology has reformed the face of business all over the world. (p. 89)

Om Ltd, being a design company, requires good operating systems. Its work entails a great deal of printing and data processing. As such, document processing systems and software will be of great significance to the general success of the company. The choice of the operating system should consider the following.

Safe:

  • It has in built security features which enhance the safety of information on the computer.
  • Safety against suspicious programs like viruses.

Easy:

  • Easy to navigate through open files and programs
  • To search for files and programs by help of search boxes
  • Have an explorer interface that provides consistent, streamlined menus. This makes sorting and filtering of information easier

Ready:

  • The system should serve the small business in the course of operational growth. It provides a solid foundation for small business to move into larger network in the event of growth in activities

With the above specification, the most appropriate operating system for Om Ltd is Windows Vista Business Operating system. The system is best designed for small companies as it is user friendly and cost effective. With the system, more complex business documents such as receipts and other payment documents such as payroll scripts can be easily processed.

System requirements

Its system requirements are compatible with the laptops and desktops recommended for Om Ltd and includes:

  • 2 GHz 32-bit or 64-bit processor
  • 1 GB of system memory
  • 80 GB hard disk with at least 15 GB available space
  • Support for DirectX 9 graphics with:
  • WDDM Driver
  • 256 MB of graphic memory
  • Pixel Shader 2.0 hardware
  • 64 bits per pixel
  • DVD-ROM drive
  • Audio output
  • Internet access

Other features for which Windows Vista Business is chosen for the operations of Om Ltd are:

  • It can scan for and remove unwanted applications such as spyware
  • It restricts malicious software (malware) from spreading to other operating system resources
  • It includes the Internet Explorer 7 browser, which offers dynamic security protection and helps prevent users from unintentionally providing personal or sensitive data to fraudulent websites.
  • Microsoft Office 2007 Small Business software, which includes Word, Excel, Outlook, PowerPoint and Publisher.

The following shows price suggestions for various software and operating systems offered by KRW Digital Company. KRW Digital ships an open source software bundle with all new computers. This bundle includes the latest versions of the Mandriva Linux operating system, the OpenOffice.org productivity software suite, the Mozilla Firefox web browser and the Mozilla Thunderbird email client. KRW Digital can also supply the following titles:

Vista Business Operating system $190.00
Microsoft Windows XP Home $185.00
Microsoft Windows XP Pro $289.00
Microsoft Office 2003 Basic $299.00
Microsoft Office 2003 Pro $499.00
F-Secure Internet Security $120.00
Laptop (Intel core 2Duo P8700 with 2 GB RAM) $110.00
HP Desktops with 800MHZ $80.00
AutoCAD 2008 $50.00

At your request, we can also source other titles to fulfill your needs. Don’t forget that if you need custom software to support your business, KRW Digital offers specialist software development services. If you’d prefer that we refrained from loading any software at all, please mention this when ordering your computer.

For productivity software, the AutoCAD 2008 software version is recommended to be installed on designers’ desktops and laptops. The software is among the best for computer-aided design. Its system requirements are listed below and are compatible with both the hardware devices chosen and the Windows Vista Business operating system.

  • Intel Pentium 4 or AMD Athlon dual-core processor, 3 GHz or higher with SSE2 technology
  • RAM of 3 GB
  • 1 GB free disk space for installation
  • 1,024 x 768 VGA display with true color
  • Internet Explorer 7.0 or later
  • Install from download, DVD, or CD

Sage Construct Advanced software is recommended for accountants’ desktops. Its features include:

  • Business Process Integration
  • Construction Job Costing
  • Purchase Order Processing – POP
  • Construction Industry Scheme Processing
  • Tracking Variations or Extras
  • Retentions
  • Valuations and Applications for Payment
  • Aged Application Reports
  • Insurance Tracking

Being an art and design company, it needs constant connection between its employees. The employees in the company’s branches need to report to the studio at the company’s headquarters. This connection can be achieved through cloud computing, whereby computers are connected over the internet. Cloud computing uses virtual servers that can host OS application services, thus enabling over-the-internet computing, which is efficient for business activities such as sending invoices and other business documents. The concept is the most effective for the company, as the software used with it is cheap and readily available; for example, Google Apps is readily available software. It also has high security and is compatible with most small enterprises.

Telephony devices

These are devices that work hand in hand with voice modems in converting analog data and information into digital data. It is important for Om Ltd to acquire advanced software and telephony devices in its efforts to upgrade its IT infrastructure (Shafritz 1990). IP phone models can work effectively for the company because these models are cheap and user friendly.

Similarly, the phone devices can multitask; for example, business documents can be sent at the same time as invoices are received. The new model of IP Phone 7941G handsets is user friendly, has interactive soft keys and includes an inbuilt manual that helps users learn how to use it.

IT use procedures and back up facilities

With all the IT assets acquired, the company requires appropriate IT procedures and backup facilities. Backup facilities are those facilities that help in the recovery of information in case there is an interruption in the ordinary functioning of a system. These interruptions may be due to electricity cuts or attacks on the software by viruses. As such, every company needs to put in place effective measures to minimize the losses that result from these interruptions.

In the case of Om Ltd, it needs to put in place mechanisms such as uninterruptible power supplies (UPS) and standby electric generators. This will help solve the challenges posed by abrupt interruptions of power. These backup measures are important as they ensure the continuous running of the business. Even though they seem expensive to acquire, their tremendous advantages cannot be ignored. A backup electric generator goes for around $18,500; it is expensive but durable and fairly easy to maintain.

Conclusion

Globalization has seen the business world transform tremendously. It has made the world become like a global village. The current market is characterized by a diverse nature. This factor has made business corporations seek other adaptations in the effort to secure a place in the current competitive world.

Information technology is one of these adaptations that has been embraced by several companies so as to cope with the ever-growing demand for quality and speed. IT infrastructure is a key element that determines the general success of the business. It is a factor that can see Om Limited rise up the value chain if it embraces the changes in IT assets and dedicates itself to the adoption of the latest assets.

References

Bent, F., Mette, K. & Søren, L., 2002. Underestimating Costs in Public Works Projects: Error or Lie? Journal of the American Planning Association, 71 (2), pp. 279-295.

Brenner, S., 1997. Business interaction networks. Journal of Business, 21 (3), pp. 391- 399.

Gordijn, J., 2002. Value-based Requirements Engineering – Exploring Innovative e- Commerce Ideas. Amsterdam: Vrije Universiteit.

Hamel, G., 2000. Leading the revolution. Boston: Harvard Business School Press.

Jose, P., 2009. The era of the computer: Market Research. New York: Times Business.

Neal, B., 2008. Understanding computer technology. Washington DC: The National Academies Press.

Shafritz, M., 1990. Essentials of Business. New York: Penguin Books.

Stern, L. & El-Ansary, A., 1992. Marketing Channels. Englewood Cliffs: Prentice-Hall.

Sullivan, A. & Steven, M., 2003. Economics: Principles in action. New York: Pearson Prentice Hall.

Public Key Infrastructure System

The public key infrastructure (PKI) allows for securing the transfer of information. This approach can be implemented in internet banking, private email, and other spheres where security should receive priority. Simple passwords are not difficult to break, and when some additional proof of identity is needed, the system of public keys is commonly used. The primary purpose of the paper is to evaluate the significance of implementing the public key infrastructure in the working process and to discuss the strengths and weaknesses of the system.

To become better involved in the issue, the question regarding the fundamentals of public key infrastructure should be taken into account. As a matter of fact, the PKI makes the distribution of encryption keys possible and allows users to exchange data over the Internet privately. The PKI usually consists of the following essential elements, namely hardware and software, policies, and digital certificates. A Certificate Authority (CA) provides the entities with trust. The PKI is a chain system, and consequently, if one element is weak, the whole system suffers and can be affected. The main problem is that there is no single standard that is dominant across all the policies. A CA is believed to be a “trusted third party”; however, it should be stressed that problems with security put the whole PKI system in the risk zone, as a number of fake certificates have been discovered recently (Davies, 2011).

A digital certificate is considered to be a form of document that identifies an entity. The certification authority (CA) provides the needed information regarding the entity, and this information remains valid for a certain time. Digital certificates are significant because they underpin the PKI (Karamanian, Tenneti, & Dessart, 2011). As a matter of fact, a digital certificate can be compared to a passport, as it gives a person an opportunity to send and receive information privately and securely. Digital certificates provide trusted and secure exchange of files. The certificate should have a variety of characteristics for the recipient to be sure that it is real; among such characteristics are the name, serial number, digital signature, and expiration date. The implementation of the PKI will improve the overall work of the organization, making it safe and secure, and clients will be confident that their private information will not be misused.
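As an illustration of these characteristics, the Python sketch below uses the third-party cryptography package, assuming it is installed, to issue a minimal self-signed certificate that carries a name, a serial number, a validity period, and a digital signature; it is a sketch only, not a production-ready issuance process.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a key pair and issue a self-signed certificate (subject == issuer).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.org")])
    certificate = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())        # serial number
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=365))   # expiration date
        .sign(key, hashes.SHA256())                        # digital signature
    )
    print(certificate.subject, certificate.serial_number, certificate.not_valid_after)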

The definition of the PKI is multi-faceted, as it encompasses numerous tasks at once. The PKI is used for the fulfilment of two tasks: the first is to secure the information, and the second is to support the authentication process and verify the content. The system provides a number of benefits. First and foremost, the information that is sent and received will be verified. One of the most valued advantages is that the files will be secured and sent in time. Moreover, they can be used as evidence in court.

Originally, it was created to provide the ability to exchange information privately in an unsecured environment, such as the Internet. To increase effectiveness and be able to compete in the market, companies should use technologies creatively; this results in the improvement of management. Signature software products are very important, as they prove that a document is original and can be used in court as evidence. The development of signature software will provide clients with safety and ensure that the files are original.

Having an internal CA provides the company with a number of benefits; among them are independence from an external CA and easier, more flexible organizational management. This type can be easily integrated into the Active Directory, and it is not very costly. The disadvantage is that it is more difficult to implement than an external CA. One of the benefits of using an external CA is that external CAs “are responsible for the PKI and external CA trust other external CAs digital certificates” (Ristic, 2014). Moreover, the system does not require lots of overhead in contrast to an internal CA. However, it is costly to pay for every certificate with the external type. The external system is also less flexible and does not have as developed an infrastructure as the internal one. The organization is likely to implement internal CAs to decrease spending on certificates in case a lot of them are needed.

In conclusion, it should be pointed out that the implementation of the PKI and internal certificates seems to be an essential part of the company’s success. The modern world offers society the extensive usage of information technologies; however, the issue of safety and security remains the most urgent and discussed. Having an opportunity to exchange information safely in the unsafe environment of the Internet will provide the users with a higher degree of trust, respect, and assurance. Despite certain disadvantages, the PKI system is worth implementing, especially in a software company, where confidentiality and security are fundamental aspects of success.

Works Cited

Davies, J. (2011). Implementing SSL/TLS using cryptography and PKI. Hoboken, NJ: Wiley.

Karamanian, A., Tenneti, S., & Dessart, F. (2011). PKI uncovered: Certificate-based security solutions for next-generation networks. Indianapolis, IN: Cisco Press.

Ristic, I. (2014). Bulletproof SSL and TLS: Understanding and Deploying SSL/TLS and PKI to Secure Servers and Web Applications. London, U. K.: Feisty Duck.