The rapid advancement of information and communications technologies is increasing reliance on cyberspace. However, as many organizations explore the opportunities brought about by digitalization, they have to assess and confront the resulting security risks (Mukhopadhyay et al., 2019). As stated by Dinger and Wade (2019), there have been reports of major breaches that expose the privacy of millions of individuals. Contrary to a common assumption, hacking targets not only computers but also critical infrastructure, including train networks, water treatment plants, and electronic resources (Bialas, 2016). Security measures and principles need to be put in place to prevent hostile attacks by network predators. This research proposal aims to investigate the strategies that firms use to mitigate identity theft by hackers and enhance security in their infrastructure.
Research Topic
Cyber theft comprises all activities carried out to obtain people’s information, such as credit card numbers, and then use that data for other misdeeds, such as purchasing illicit products or stealing identities. The modern age is becoming dangerous due to organized crime groups that have developed mechanisms for the wide-scale sale and distribution of stolen data (Lavorgna, 2019). Carding forum websites now specialize in providing information about their victims. Thus, hackers have a ready market that motivates their unethical behavior. The topic of this research proposal is investigating the strategies used by organizations to combat identity theft by hackers and enhance cybersecurity in critical infrastructure.
Research Description
This research proposal is concerned with the strategic prevention of cyber theft and the protection of critical infrastructure. The context for this study will be organizations that have digitalized their documentation. Although many researchers have investigated the topic of cybercrime, there is still a gap in understanding ways to deal with such threats. Thus, this proposal will synthesize strategies from different organizations to develop a comprehensive set of best practices that companies can use to enhance privacy in their networks. A qualitative research methodology will be applied to gather and analyze data collected by interviewing security system administrators in selected organizations.
Preliminary Literature Review
In writing a research proposal, it is crucial to survey previous studies and their findings. The selection criteria for the articles in this review include having a related topic and being published within the last ten years. Therefore, this section will focus on discussing, comparing, and contrasting past projects on cyber theft. First, the theoretical framework for the current work will be discussed in brief. The subsequent subsections will then expound on different issues in online networks.
Theoretical Underpinning: Actor-Network Theory
This theory has its roots in science and technology studies, with a primary focus on the digital transfer process. According to Oliveira et al. (2019), ANT’s primary proponents are Bruno Latour, John Law, and Michel Callon. This model holds that universal transmission of data is necessary for modern society as it results in economic growth. The other assumption is that no fixed definition can be universally applied to all industrial contexts. Regarding the role of humans in cyberspace, ANT uses the principles of performativity, relationality, and urgency (Grommé, 2018). These propositions are relevant to understanding the breaches that make hackers successful.
Issues in Cyber Theft
Privacy violation is a primary challenge posed by all criminal activities in cyberspace. Online identity thieves can use credit cards illegally or take over another person’s identity entirely while successfully evading apprehension by police. In 2017, online crimes resulted in losses of more than 600 billion dollars globally (McAfee, 2018). Such crimes inflict privacy harms on individuals and cause systemic damage that can destroy the critical infrastructure of a nation (de Souza et al., 2020). Many stakeholders in the online network risk being affected by a single breach.
Leakages Facilitating Cybercrime
To understand the factors which make network violations possible, it is essential first to understand the various forms of trespass in digital infrastructures. Hacking involves all trespass activities, such as unlawful appropriation, embezzlement, espionage, and plagiarism (Carpenter et al., 2020). Sham websites, often combined with fraudulent phone communications, are another common offense. Also, spoofers commit violations on the internet by forging email addresses to trick people into releasing sensitive data (Levitin et al., 2018). Other offenders include information brokers, spyware distributors, and chatroom boards.
One of the factors which make cybercrimes possible is the ease of accessing private information online, such as on social media accounts or in systems that unauthorized employees can easily reach. Such recklessness can result in identity theft, which may be difficult to notice until there is a significant systemic issue (Ratten, 2019). Similarly, physical attacks on computers and other infrastructure are likely to occur when there are no boundaries and restrictions that allow only a few people to access crucial technologies (Elhabashy et al., 2019). Also, most breaches occur between people who are close and share some information (Bossong & Wagner, 2017). For example, a close relative can easily guess a password and access information without raising suspicion.
Risk Management Strategies
Many organizations have policies and strategies which ensure the protection of critical infrastructure. Projects involving public-private partnerships are at greater risk, given their complexity and many stakeholders, hence the need for protective policies (He et al., 2017). Firms need to establish information security objectives which are specific and measurable. The first strategy is to provide supportive resources to all users of the Internet of Things to ensure their digital systems are secure (Halima et al., 2018). When individual users are secure, breaches are less likely to spread to other devices.
There should also be operational planning, regular evaluations, and continuous improvement of the network. Computer scientists are already establishing data analytics techniques which can autonomously monitor the network and detect irregularities (Patterson et al., 2017). As stated by Hu et al. (2017), when systems are regularly updated, hackers will find it hard to find a breach, and even if they succeed in committing a trespass, it will be noticed early. Also, installing antivirus and antimalware software on critical infrastructure helps detect, shut down, and report any attempts to breach the data (Vučković et al., 2018). Firms should also use safeguards such as complex passwords, limited access, and Compstat principles on their sensitive infrastructure. Such measures will make it hard for hackers to interfere with critical infrastructure.
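As an illustration of the autonomous monitoring described above, the following sketch flags hours whose event counts deviate sharply from the norm. It is a minimal, hypothetical example: the z-score threshold and the failed-login counts are illustrative, not drawn from any cited study.

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=3.0):
    """Flag indices whose count deviates from the mean by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical hourly counts of failed logins; hour 5 spikes sharply.
failed_logins = [4, 6, 5, 7, 5, 480, 6, 4, 5, 6, 5, 7]
print(find_anomalies(failed_logins))  # → [5]
```

Production systems use far richer features and models, but the principle of flagging statistical outliers in network telemetry is the same.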
Research Thesis/Claim
Digitalization is important for organizations as it promotes faster communication, cloud storage of mass data, and enhanced competitiveness. For example, after COVID-19, many industries resorted to digital means of communication as employees worked from home. However, reports indicate that cybercrime is becoming more rampant and costly for companies. This study claims that the efforts of an individual firm cannot stop cybercrime. The involvement of all stakeholders, such as workers, customers, and business partners, is needed to deter identity thieves and make the digital sphere more secure than it was before COVID-19.
Research Questions
How can individuals within an organization handle issues of identity theft to minimize risk to other stakeholders?
Objective/Aims of Research
To establish how individuals within an organization can handle issues of identity theft to minimize risk to other stakeholders
Significance and Benefit of Research
The current study will have practical application to all organizations which use digitalized information communication systems. Findings from the research will be used to tighten security measures for critical infrastructure. Specifically, the proposal provides unique and more comprehensive ways of preventing cybercrime. The implication is that individuals and enterprises can enjoy the benefits of the Internet of Things without worrying about data security.
The other benefit of this study is in the academic sphere since it complements the existing intelligence about cyber theft. Students will find the information on this topic valuable for learning about modern-day information technology. Similarly, researchers can use the information from this paper to identify gaps in the literature and further explore them to make storage and transfer of digital data effective. Besides, suggestions for areas which need investigations will be provided to guide other scholars.
Deliverables: An Outline of Planned Arguments/Evidence
Internet connectivity for multiple devices, as in the case of the Internet of Things, can be prone to systemic breaches that take long to detect. For example, Cisco estimates that approximately 50 billion objects are connected (Louchez & Rosner, 2016).
Hackers are becoming more strategic and sophisticated when planning and executing cyber theft; hence, more dangerous.
The current security measures are not sufficient to protect the physical infrastructure and private data from cyber thieves.
Companies should take cybersecurity threats seriously, and policymakers should draft laws that ensure hacking is reported to and investigated by legal authorities such as the police.
Methodology
Gathering data to answer the research question is a crucial part of any study. The information can be collected from primary sources using interviews, observation, and questionnaires, or from secondary sources (Mallette & Duke, 2020). A qualitative methodological approach will be used to ensure that in-depth information on the phenomenon is collected from the respondents (Strokes, 2017). All the ethical considerations suggested by Ballin (2020), such as confidentiality, anonymity, and putting the interests of the participants first, will be observed. The objective is to enhance the efficacy of the study.
Conclusion
The use of digital communication, storage, and transfer of data via online means is now common, thanks to the Internet of Things. Such advancements have also brought the challenge of cyber theft, which may involve stealing private data or physically tampering with critical infrastructure. Many organizations have fallen victim to hackers due to insufficient strategies to combat online criminals. The study will have practical implications for firms and will offer recommendations to future researchers. In gathering and analyzing data, the qualitative methodology will be used while adhering to all ethical recommendations.
References
Ballin, E. H. (2020). Advanced introduction to legal research methods. Edward Elgar Publishing.
Abstract
Cloud computing is a relatively new approach to offering computing services on a shared hardware and software platform, adopted because of the benefits associated with it. That is the rationale for organizations such as Google and Amazon, among others, shifting their service provision toward the cloud.
Drawing from its definition, the cloud computing system provides the infrastructure for organizations to offer clients services that respond dynamically to their needs on a pay-as-use basis. The associated economic benefits include a positive net present value; a positive benefit-to-cost ratio (BCR), calculated as the ratio between cloud benefits and discounted investment costs; and a short discounted payback period, the time it takes for a firm that has shifted its services to the cloud computing platform to recover its investment.
However, for organizations to shift to the new computing platform, it is crucial to draw on the principles of systems engineering, which in turn draw on the Capability Maturity Model Integration (CMMI) to integrate the service. A comparative study of Google versus Amazon and the emerging technologies adopted by Google and Amazon EC2 are presented, in addition to the impact of cloud computing on technologies, with a focus on virtualization.
Introduction
Cloud computing is an approach that organizations have recently adopted to offer computing services to clients without the clients needing to own dedicated software and hardware resources; a client needs only a computer to access the services on the cloud.
It is a computing approach defined as “internet based computing where virtual shared servers provide software, infrastructure, platform devices, and other resources and hosting to customers on a pay-as-use basis”. These computing services are offered on shared hardware and software platforms characterized by broad network access, virtualization, a multi-tenant model, and resource pooling, deployed as private, public, and hybrid clouds.
Many organizations, such as the giants Google and Amazon, have migrated to offering their computing services on the cloud with significant success. The success of such migrations draws on the economic benefits of offering cloud computing services on the shared platform reflected in the above definition.
The migration approach is, however, implemented based on the central role that systems engineering and the capability maturity model play in the design, development, and integration processes. Despite these migrations, a number of security issues related to the cloud computing environment have unfolded, along with well-designed countermeasures. The differences between the approaches of Google and Amazon, and the impact cloud computing has had on technology, such as facilitating the development of virtualization and other new technologies, including Amazon EC2 and Google Drive, are discussed in the following sections.
Definition
Cloud computing is defined as “internet based computing where virtual shared servers provide software, infrastructure, platform devices, and other resources and hosting to customers on a pay-as-use basis” (Krutz & Vines, 2010). On the other hand, cloud computing is also defined as a “model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services)” that “can be rapidly provisioned and released with minimal management effort or service provider interaction” (Mell & Grance, 2011).
These definitions point to computing services offered on shared infrastructure and accessible on demand over the internet. In conclusion, the definitions show that the computing infrastructure referred to as cloud computing offers services that are tailored to meet user needs and expectations despite the shared platform. Typically, therefore, each user is isolated from, and invisible to, the other users on the shared platform.
Working Context of the Definition
To meet the objective of offering cloud computing services on a shared platform that enables “ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources”, the cloud computing platform is characterized by on-demand services which can be accessed without human intervention, providing any customer in need of the services with server access and other services, such as network storage, without direct interaction with any individual service provider.
Typically, this “on-demand self-service” makes services available and accessible at any time and from anywhere in the world, making cloud computing a powerful tool for offering computing services. It is crucial to note that the cloud computing platform hosts different services with varied characteristics.
Characteristics
To address the varied needs and services offered on the platform, cloud computing provides broad network access as one of its crucial characteristics. Thus, network access to the cloud is based on standard mechanisms in a heterogeneous environment and can be achieved through either thin or thick clients.
These characteristics of cloud computing are complemented by others, which include rapid elasticity, resource pooling, and measured services. According to Krutz and Vines (2010), cloud computing is characterized by resource pooling, where service providers pool their hardware and software resources to provide computing services to different clients based on a multi-tenant model.
The multi-tenant model constitutes service provision to different clients on the same platform, who are isolated from one another based on access policies, data access protection, and application deployment in the cloud computing environment.
Under the multi-tenancy model, virtualized application servers, shared virtual servers, fully isolated business logic, and shared application servers play the central role in isolating different customers in the same cloud computing environment.
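The tenant isolation described above can be sketched as a simple access-policy check. The tenant names and policy structure here are hypothetical, and real platforms enforce isolation at many additional layers (hypervisor, network, storage); this only illustrates the policy-based principle.

```python
# Hypothetical per-tenant access policies on a shared platform.
policies = {
    "tenant_a": {"allowed_paths": {"/data/a"}},
    "tenant_b": {"allowed_paths": {"/data/b"}},
}

def can_access(tenant, path):
    """A tenant may only touch resources its own policy allows,
    keeping co-located tenants isolated from one another."""
    policy = policies.get(tenant)
    return policy is not None and path in policy["allowed_paths"]

print(can_access("tenant_a", "/data/a"))  # → True
print(can_access("tenant_a", "/data/b"))  # → False: isolated from tenant_b
```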
On the other hand, to support service provision in the same cloud computing environment, it is indispensable that the platform support rapid elasticity. Rapid elasticity operates on the underlying principle of elastic allocation and de-allocation of computing capabilities, which is sometimes executed automatically.
In addition, the provisioned computing capabilities scale either inward or outward in relation to demand for the services. These provisions are made effective through measured services, another characteristic of cloud computing. Measured services lead to resource optimization based on the capability to monitor resource utilization, the ability to control provisioning and resource usage, and transparency of the service to both the client and the service provider.
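A toy model of rapid elasticity and measured service follows; the per-replica capacity, target utilization, and unit rate are purely illustrative assumptions.

```python
import math

def replicas_needed(total_load, capacity_per_replica=100, target_util=0.6):
    """Rapid elasticity: scale out or in so average utilization stays
    near target_util of each replica's capacity."""
    return max(1, math.ceil(total_load / (capacity_per_replica * target_util)))

def metered_bill(usage_units, rate_per_unit=0.02):
    """Measured service: the client pays only for resources consumed."""
    return round(usage_units * rate_per_unit, 2)

print(replicas_needed(330))  # → 6 (scale out under load)
print(replicas_needed(40))   # → 1 (scale in when demand falls)
print(metered_bill(1500))    # → 30.0
```

The same load both drives the scaling decision and is metered for billing, which is what makes resource usage transparent to client and provider alike.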
To attain the main objective of provisioning services on a shared hardware and software platform, studies show that cloud computing operates on different service models, which include platform as a service, software as a service, and infrastructure as a service. These models are further deployed on private, public, community, and hybrid clouds. Having briefly discussed the definition of cloud computing and its working context, it is crucial to discuss the cloud computing infrastructure.
Cloud Computing Infrastructure
Different authors approach the subject of cloud computing infrastructure with varying views in relation to the services provided on the cloud and associated characteristics, while others merge the approach into cloud computing models that include cloud computing “Infrastructure as a Service (IaaS)”, “Software as a Service (SaaS)”, and “Platform as a Service (PaaS)” models (Krutz & Vines, 2010).
Typically, these models provide the basis for discussing cloud computing infrastructure in the context of the current study. The key underlying principles of operation of cloud computing are virtualization, elastic capacity, management of resources as a single entity, and application functionalities that execute cloud computing tasks. The figure below (figure 1) shows interconnections between different applications and services, illustrating cloud computing capabilities.
Infrastructure as a Service (IaaS)
One of the distinguishing characteristics of cloud computing infrastructure is the functionality defined by Infrastructure as a Service (IaaS). Following the fundamental definition of cloud computing, IaaS provides hardware and software resources on a shared platform to deliver computing services without dedicating or committing any of the resources to any specific individual in the long term, while enabling available resources to be dynamically provisioned in response to rising demand.
The underlying principle of operation is that the IaaS provider maintains and keeps its data center operational, while software maintenance remains the responsibility of the client (Krutz & Vines, 2010). IaaS is therefore a form of hosting that allows for network access, provides storage capabilities, and performs routing services. These functionalities are enabled in the cloud computing environment by a pool of switches, servers, routers, and storage systems. IaaS’s core function is to provide administrative services for the running and storage of applications when executing tasks in the cloud computing environment.
To attain the above objective, IaaS works on the principle that the owner is the sole provider of the resources required to provide computing services, while the client can lease access to the services based on a flexible pricing mechanism (Krutz & Vines, 2010). This is made possible by the payments the client makes for the key components of IaaS. These components enable the provision of services based on the IaaS model, defined by policy-based services, dynamic scaling, internet connectivity, automation of administrative services, and the utility computing and billing model.
Thus, the central role of IaaS is to provide an enabling environment for clients to execute tasks in a virtualized environment, optimizing virtualization using virtual machines built into the IaaS environment (Krutz & Vines, 2010).
Virtualization
While IaaS provides an enabling environment for clients to run their applications, virtualization plays a significant role in optimizing the physical hardware and software platforms by extending these resources through virtual machines, thus creating a virtualized environment (Wilde & Huber, n.d.).
Virtualization “broadly describes the separation of a resource or request for a service from the underlying physical delivery of that service” (Wilde & Huber, n.d.); elsewhere, virtualization is defined as “the ability to run multiple operating systems on a single physical system and share the underlying hardware resources” (Wilde & Huber, n.d.).
While the definitions agree on the concept of providing some form of resource abstraction, key to both is resource scaling based on a virtual environment. Thus, the virtual environment provides greater computing capabilities and resources than are physically available.
The rapid and seamless deployment of virtualization technologies into the IaaS model enables the model to fulfill its role in service provision. That enables clients to access the services from any part of the world with an internet connection.
In addition, services provided on the IaaS model lend themselves to incremental growth due to modularization. It is also crucial to note the high degree of availability and resilience of the system, with little or no system failure. When system failure is experienced, downtime is minimal (Krutz & Vines, 2010).
In conclusion, from a technical perspective, the IaaS model is seen as the underlying platform that provides virtual infrastructure; enables easy deployment and dynamic provisioning of web-based applications as client demand changes; ensures load balancing in the provision of resources for executing tasks on the cloud; provides service-level agreements with clients; and pools resources for shared use.
However, it is important to distinguish IaaS from other services, including Software as a Service (SaaS), in an attempt to crystallize cloud computing infrastructure (Krutz & Vines, 2010).
Software as a Service (SaaS)
Software as a Service (SaaS) is one of the cloud computing models, with the underlying idea of centrally hosting software and associated data that the customer accesses in a web browser through a thin client. SaaS does not require the user or client to create programs; instead, the client can configure the software to address their own needs and pay for use based on the multi-tenant architecture (Mell & Grance, 2011).
Platform as a Service
On the other hand, Platform as a Service (PaaS) is a model that provides the required amount of hardware platform and application software based on client demand. A summary of the models is tabulated in table 1 below.
Model | Characteristics | Examples
SaaS | Highly scalable internet-based applications are hosted on the cloud and offered as services to end users. | Google.com, acrobat.com, salesforce.com
PaaS | The platforms used to design, develop, build, and test applications are provided by the cloud infrastructure. | Azure Services Platform, force.com, Google App Engine
IaaS | In this pay-per-use model, services such as storage, database management, and compute capabilities are offered on demand. | Amazon Web Services, GoGrid, 3Tera

Source: Krutz & Vines (2010)
However, there is a need to evaluate the public, hybrid, and private cloud models to determine the most appropriate one for any organization to use in deploying its services and integrating its applications into the cloud.
Having discussed the literature on cloud computing and its variant models, it is crucial to conduct a comparative study between Google and Amazon on industrial applications of the cloud computing concept and the rationale for firms migrating from the old models to the new cloud computing concept, based on the role of systems engineering and the Capability Maturity Model Integration (CMMI).
Role of System Engineering and the Capability Maturity Model
Despite the security issues and vulnerabilities associated with cloud computing, many organizations have followed giants such as Google.com and Amazon.com in shifting their computing services from the old models to the new cloud computing model. The underlying migration and integration into the cloud computing platform are based on systems engineering and the Capability Maturity Model Integration (CMMI) (Constantinescu, n.d).
That is partly due to the confidence many organizations have in the security measures the computing platform operates on, besides the commercial benefits associated with cloud computing. These benefits include utility pricing, mobility of servers and data centers, flexible deployment of servers on demand, and fault-tolerance support for alternative sourcing, among others.
These benefits translate economically into a positive net present value and a positive benefit-to-cost ratio (BCR), calculated as the ratio between cloud benefits and discounted investment costs. In addition, the discounted payback period measures the time it takes for a firm that has shifted its services to the cloud computing platform to recover its investment (Constantinescu, n.d). However, it is recommended that further research be conducted on the economics of cloud computing to establish the economic rationale for investing in and shifting toward the cloud.
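The three indicators named above can be computed directly. In the following sketch, the 10% discount rate, the $100,000 migration investment, and the annual benefit figures are purely illustrative assumptions, not values from Constantinescu.

```python
def npv(rate, investment, cash_flows):
    """Net present value: discounted annual cloud benefits
    minus the up-front migration investment."""
    return -investment + sum(
        cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def bcr(rate, investment, cash_flows):
    """Benefit-to-cost ratio: discounted benefits over investment cost."""
    benefits = sum(
        cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return benefits / investment

def discounted_payback(rate, investment, cash_flows):
    """Years until cumulative discounted benefits recover the investment."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows, start=1):
        cumulative += cf / (1 + rate) ** t
        if cumulative >= investment:
            return t
    return None  # not recovered within the horizon

flows = [40_000, 50_000, 60_000]  # hypothetical annual benefits
print(round(npv(0.10, 100_000, flows), 2))       # positive NPV favors migration
print(round(bcr(0.10, 100_000, flows), 2))       # BCR above 1 favors migration
print(discounted_payback(0.10, 100_000, flows))  # → 3 (years)
```

A migration is economically attractive on all three measures when NPV is positive, BCR exceeds 1, and the payback period falls within the planning horizon.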
It is, however, important to note that the migration process, with systems engineering and CMMI underpinning the migration and integration strategies, draws on the operational principles of CMMI and systems engineering, as discussed below.
To migrate from the old computing models to the cloud computing models and integrate their services, organizations follow the systems engineering model, consisting of systems engineering management and the technical domain, with the CMMI underlying and complementing the entire process.
However, this paper focuses on the role of systems engineering management in the migration and integration process, with the CMMI playing a crucial role in integration. In the migration process, systems engineering management covers the systems engineering processes, the development phase, and the lifecycle integration phases in a comprehensive top-down approach (Constantinescu, n.d).
Under the systems engineering approach, the migration process begins with the development phase, in which organizations apply the systems engineering concept to initiate migration into the cloud by describing the system (Systems engineering fundamentals, 2001). That entails borrowing from the CMMI model, which includes identifying people, methods, procedures, equipment, and tools. This is illustrated in figure 4 below.
In addition, subsystem components are considered, with the systems engineering process crucial at every phase of the development and migration process (Constantinescu, n.d). The process phase is illustrated in figure 5 below.
The entire process phase is characterized by concept studies, system definitions with underlying functional capabilities, preliminary design, and detailed design, as shown in the above figure. Typically, the entire process borrows from the CMMI process model, which is multidisciplinary in nature. Each of the elements in the systems engineering process can be evaluated against the CMMI model, characterized by the elements in table 1 below as a continuous representation of the process elements (Constantinescu, n.d).
Table 1
Each of the elements fits into the systems engineering process, whose core activities are illustrated in figure 6 below.
The entire process begins with the requirements analysis phase conducted by organizational experts, followed by functional analysis of the requirements, system analysis and control, and design and synthesis. Each of the phases is iterative and allows a high degree of looping, thus permitting modifications where necessary.
According to Bhardwaj, Jain, and Jain (2010), the entire process culminates in the lifecycle integration process, which allows for integrated development. At this stage, the design solutions are evaluated against the initially defined requirements for clarity and for fit with the cloud computing model of the migrating firm.
System functionalities, operations, training, deployment, development, and verification of the entire system are evaluated for consistency with the original requirements during lifecycle integration, before tests are conducted and the platform is ready for commercial use. However, there is a need for further research in this area, as different models fit different system approaches and organizational needs (Bhardwaj, Jain & Jain, 2010). A typical example of the use of capability maturity models is illustrated in figure 7 below. The process draws on expertise from different disciplines before an organization adopts each capability from each discipline.
Typical examples of firms that have shifted to the cloud include Google.com, which has innovatively developed Google Drive as a new technology.
Google Drive as New Technology
Google Drive is the current trend Google.com has adopted in offering cloud computing services, based on a cloud storage service. Google Drive relies heavily on Google search in the provision of storage services on the cloud. It “is a cloud service that enables you to store documents, music, photos, and videos in one place” (Siddiqui, 2012).
Typically, “Uploading and accessing all your files is made simple as Google Drive syncs to all your mobile computer devices, essentially providing you with access to your stored files on any device” (Siddiqui, 2012). “With the incorporation of Google Docs in Google Drive, the process of transferring your Google Docs files is pretty much automatic” (Siddiqui, 2012).
The storage capacity provided varies on demand, with 5 GB of storage given free of charge, while additional storage is offered at a very low cost. It is worth mentioning the storage cost, where “Google is offering plans of 25 GB for $2.49 a month and 100 GB for $4.99 a month” (Siddiqui, 2012).
Typically, it is noted that “there are other amounts available as well with the highest storage plan at 16 TB for $800 a month” (Siddiqui, 2012). Moreover, “for the types of files you can store, Google Drive supports up to 30 types of files and has support for third-party programs, allowing you to open up your files in any program of your choosing” (Siddiqui, 2012).
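The quoted plans imply a declining cost per gigabyte at larger tiers. A quick back-of-the-envelope check illustrates this, assuming 1 TB = 1,024 GB (the exact accounting Google used is not stated in the source):

```python
# Cost per GB per month for the Google Drive plans quoted above (Siddiqui, 2012).
# Assumption: 1 TB = 1,024 GB.
plans = [
    (25, 2.49),          # 25 GB for $2.49/month
    (100, 4.99),         # 100 GB for $4.99/month
    (16 * 1024, 800.0),  # 16 TB for $800/month
]

for gb, dollars in plans:
    print(f"{gb:>6} GB: ${dollars / gb:.4f} per GB per month")
```

The per-gigabyte price falls from roughly $0.10 on the smallest plan to under $0.05 on the largest, the usual volume-discount pattern in storage pricing.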
Google Drive is compatible with several technologies, including the Windows and Android operating systems, with Google projected to develop a Google Drive iOS client in the future. However, there are many competitors in the market offering cloud computing services, including Amazon, as comparatively examined below.
Google vs Amazon
Google and Amazon have been identified as computing giants that have migrated their services onto the cloud computing platform, with each organization providing the services through a different approach, as illustrated in figure 8 below.
Research studies show Amazon to be Google’s main competitor, each based on a different technology platform. Amazon’s competition thrives on its elastic compute cloud, which Google has strategized to combat using the recently unveiled Google App Engine. The Google App Engine provides services to application developers, allowing them to develop their own applications on the firm’s scalable systems, which have fully integrated development environments.
However, the field has been dominated by Amazon with its elastic compute cloud, a web-based service flexible enough to allow developers access to resources for developing applications. One of the benefits associated with Amazon’s elastic compute cloud (Amazon EC2) is the short time required to boot its servers, allowing scaling within a small amount of time.
In addition, Amazon EC2 allows for quick scalability in response to dynamically changing client demands for computing space and power, and provides “developers the tools to build failure resilient applications and isolate themselves from common failure scenarios” (Amazon Web Services, 2012). Based on that argument, it is worth discussing the Amazon EC2 business environment below.
Amazon EC2
Amazon EC2 is a cloud computing web service that provides computing capabilities that can be scaled in response to dynamic changes in client computing needs within minimal time. In addition, users have absolute control of their instances with interactive, direct access. That gives the user direct control and the ability to stop an instance and then restart or reboot it using the appropriate APIs (Amazon Web Services, 2012).
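The stop/start/reboot control described above can be pictured as a small state machine over an instance’s lifecycle. The sketch below is a toy illustration of that idea, not the real AWS API; in practice these transitions are driven through the EC2 web-service calls:

```python
# Illustrative sketch of an EC2-style instance lifecycle.
# This is a toy model, not the real AWS API.

class Instance:
    def __init__(self, instance_id: str):
        self.instance_id = instance_id
        self.state = "stopped"

    def start(self):
        if self.state != "stopped":
            raise RuntimeError(f"cannot start from state {self.state!r}")
        self.state = "running"

    def stop(self):
        if self.state != "running":
            raise RuntimeError(f"cannot stop from state {self.state!r}")
        self.state = "stopped"

    def reboot(self):
        # Rebooting is only meaningful for a running instance;
        # the instance comes back up in the running state.
        if self.state != "running":
            raise RuntimeError(f"cannot reboot from state {self.state!r}")
        self.state = "running"

vm = Instance("i-0123456789abcdef0")
vm.start()
vm.reboot()
vm.stop()
print(vm.state)  # stopped
```

The guards on each transition mirror the point in the text: the user, not the provider, decides when an instance stops, restarts, or reboots.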
It is crucial, however, to note that Amazon EC2 provides a wide variety of capabilities and benefits, including reliability and flexibility: multiple instances can be run with multiple operating systems, without conflicts, in a secure environment.
One economic characteristic of Amazon EC2 is the low cost associated with on-demand services, spot instances, and reserved instances. These capabilities rest on the powerful features integrated into Amazon EC2, which include the Amazon Elastic Block Store, Elastic IP Addresses, Auto Scaling, VM Import, High Performance Computing (HPC) Clusters, Elastic Load Balancing, and Multiple Locations. The instance types provided on Amazon EC2 include Cluster GPU Instances, High-Memory Instances, Cluster Compute Instances, and High-CPU Instances (Amazon Web Services, 2012).
Cloud Computing’s Impact on Technology
Cloud computing has had a significant impact on technology. One of its compelling effects is the innovation and development of technologies that enhance the services offered on the cloud. One such development is the integration of virtualization, a technology that enhances the capabilities of hardware and software in the cloud. Thus, virtualization, besides the radical departure from old computing models to the cloud computing model, is one of the new developments attributable to the impact of cloud computing.
Virtualization in this case is “the ability to run multiple operating systems on a single physical system and share the underlying hardware resources” (Wilde & Huber, n.d.). Virtualization provides a virtual environment that integrates virtual machines for executing applications on the cloud computing platform. That has also led to the development of the virtual PC, a tool that enables users to exploit the services provided on a virtualized platform to access services from the cloud.
This brings additional benefits to the cloud computing environment, where virtualization addresses security issues that span users and service providers. Typically, all forms of external services, customers, and service providers need to work in a secure environment.
Security Issues with Cloud Computing
Studies show the vulnerability of cloud computing in the context of its services on a shared platform. The threats include high-tech crime, data loss, organized crime, and internal threats (NIST FIPS Publication 200, 2006). Organizations endeavor to provide solutions through separation of duties, which requires that several conditions be satisfied before a sensitive task, such as appending a signature to a sensitive object or data, can be executed.
Furthermore, another security principle embraced is defense in depth, which uses multiple layers to enforce security. Typically, multiple layers provide additional security in the event that a previous layer fails (NIST FIPS Publication 200, 2006).
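As a toy illustration of defense in depth, a request can be forced through several independent checks, so that bypassing any single layer does not by itself grant access. The layer names and rules below are hypothetical, invented only for the sketch:

```python
# Hypothetical layered checks: every layer must pass independently,
# so a single failed or bypassed layer does not compromise the system.

def network_filter(request):   # layer 1: e.g. a firewall-style rule
    return request.get("source_ip", "").startswith("10.")

def authentication(request):   # layer 2: is the caller who they claim to be?
    return request.get("token") == "valid-token"

def authorization(request):    # layer 3: may this caller perform the action?
    return request.get("role") == "admin"

LAYERS = [network_filter, authentication, authorization]

def allow(request) -> bool:
    # Access is granted only when every layer passes.
    return all(layer(request) for layer in LAYERS)

ok = {"source_ip": "10.0.0.5", "token": "valid-token", "role": "admin"}
bad = {"source_ip": "10.0.0.5", "token": "valid-token", "role": "guest"}
print(allow(ok), allow(bad))  # True False
```

The design choice is that each layer knows nothing about the others; that independence is what makes the failure of one layer survivable.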
Thus, multiple locations are used to enforce security robustness, and intrusion detection mechanisms are used to ward off attempts at unauthorized access to data and information. As organizations grapple with security, there is a need to comprehend other related concepts, including fail-safe design.
Systems on the cloud are designed to fail safe should a catastrophic event occur. Fail-safe is a concept integrated into the cloud computing and virtualization environment to inspire confidence in those using the technological platform, by ensuring that data and information remain safe, without modification, should the cloud or the virtualized platform fail (NIST FIPS Publication 200, 2006).
Thus, during recovery, the system returns to a secure and safe state, and only the administrator has access to system information. This works in conjunction with economy of mechanism, under which system development and deployment remain simple, ensuring that no insecure or unauthorized paths exist (NIST FIPS Publication 200, 2006).
Conclusion
In conclusion, cloud computing is relatively new in the computing world, with giant organizations having taken the leap toward adoption based on the benefits the technology offers in the provision of computing services. These include utility pricing, mobility of servers and data centers, flexible deployment of servers on demand, and fault-tolerance support for alternative sourcing.
Typically, these benefits translate economically into a positive net present value and a benefit-to-cost ratio (BCR) greater than one, obtained as the ratio between discounted cloud benefits and discounted investment costs.
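As a numerical illustration of the NPV and BCR criteria above, consider an entirely hypothetical migration: a $100,000 up-front cost followed by five years of $30,000 annual benefit, discounted at 8% (all figures invented for the example):

```python
# Hypothetical figures, chosen only to illustrate the NPV/BCR criteria.
rate = 0.08
investment = 100_000        # up-front cost at year 0
benefits = [30_000] * 5     # benefits in years 1..5

# Present value of benefits: sum of each year's benefit discounted to year 0.
pv_benefits = sum(b / (1 + rate) ** t for t, b in enumerate(benefits, start=1))
npv = pv_benefits - investment          # positive NPV: investment pays off
bcr = pv_benefits / investment          # BCR above 1: benefits exceed costs

print(f"PV of benefits: {pv_benefits:,.0f}")
print(f"NPV: {npv:,.0f}")
print(f"BCR: {bcr:.2f}")
```

With these assumed numbers the NPV is positive (roughly $20,000) and the BCR is about 1.2, so under both criteria the hypothetical migration would be worthwhile.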
In addition, other computing platforms provide benefits including reliability and flexibility, where multiple instances can be run with multiple operating systems, without conflicts, in a secure environment; Amazon EC2 adds low costs through on-demand services, spot instances, and reserved instances. However, it is recommended that further research be conducted on the economic benefits, specifically for smaller organizations.
Bhardwaj, S., Jain, L., & Jain, S. (2010). Cloud computing: A study of Infrastructure as a Service (IaaS). International Journal of Engineering and Information Technology, 2(1), 60-63.
Constantinescu, R. (n.d.). Capability Maturity Model Integration. Reliability and Quality Control – Practice and Experience. Journal of Applied Quantitative Methods. Academy of Economic Studies, Bucharest, Romania.
Krutz, R. L., & Vines, R. D. (2010). Cloud Security: A Comprehensive Guide to Secure Cloud Computing. New York: Wiley Publishing, Inc.
Mell, P., & Grance, T. (2011). Recommendations of the National Institute of Standards and Technology: Computer security. Special Publication 800-145. National Institute of Standards and Technology, U.S. Department of Commerce.
NIST FIPS Publication 200. (2006). Minimum Security Requirements for Federal Information and Information Systems.
Systems engineering fundamentals. (2001). Supplementary text. Fort Belvoir, VA: Defense Acquisition University Press.
Siddiqui, Z. (2012). Google Drive: What is It and Can Other Cloud Providers Compete? Web.
Wilde, N., & Huber, T. (n.d). Virtualization and Cloud Computing. New York: Wiley Publishing.
Footnotes
1 Figure 1 illustrates the entire concept of cloud computing and its relationship with the client and the applications that run on the cloud.
2 A summary of the infrastructure service models characterizing cloud computing infrastructure.
3 System engineering phases.
4 Basic elements of the capability maturity model integrated into the system engineering concept in initiating migration.
5 Table showing the elements to consider in the CMMI model integral to the system engineering process phases.
6 System engineering process.
7 The figure details the approach used to adopt the capability maturity model and its role in the cloud computing platform for organizations shifting from other computing models into the cloud in the system engineering process.
8 Diagram comparing Amazon and Google as cloud computing service providers. There is a need for further research in this field.
Different countries have different needs for critical infrastructure development, but they also have to consider the emergence of globalization. Globalization has led to great interconnectedness and interdependence among different states. The phenomenon has informed various countries to form collaborations aimed at promoting and enhancing cross-border interaction (Baggett, 2018). The United States is a member of the Critical 5, a collaboration that involves five countries, including Canada, Australia, New Zealand, the United Kingdom and the United States.
Critical 5 reflects the need to collaborate with other states, mainly through information sharing. The collaboration involves connecting and speaking the same message regarding the sense, worth, and meaning of critical infrastructure. Each of the countries has the capacity to develop its own unique critical infrastructure. However, recent events encountered across the globe, such as terrorist attacks in the United States, have called for united action and broader thinking. This also forms the basis for the internationalization of critical infrastructure (Baggett, 2018). The collaboration has enabled the United States to develop and implement an all-hazards method for dealing with current and expected difficulties that touch on critical infrastructure.
One important area in which the country has been able to collaborate involves the emerging issues of climate change and demographic change. These developments form an integral component of infrastructure systems and properties. The occurrence of unforeseen incidents can cause great disruption in service delivery, which necessitates governments working together to develop safe and resilient critical infrastructure. In particular, collaboration during the initial stages of development is fundamental in addressing these trends together with other potential disruptors (Simpkins, 2018). It is also crucial given the long-term nature of critical infrastructure. Through internationalization, the Critical 5 members recognize the importance of preparing for imminent changes that could interrupt the services provided through the infrastructure.
Another important area for collaboration involves the emergent issue of cyber-security. Today, information technology (IT) forms the central point of managing critical infrastructure systems. It is used to run key systems in hospitals, airports, power plants, railway transport, and traffic control. A large portion of the systems is managed through computer systems and software that are vulnerable to cyber-attacks. Cyber-attacks mainly target to diminish or defeat the operation of computer systems as opposed to a physical attack on the computer devices themselves (Taquechel & Lewis, 2017). The attackers can seize control of the systems through which they can disturb the system’s operations.
The development of critical infrastructure is the backbone of national prosperity, through which the country facilitates economic growth and expansion. Infrastructural developments are a catalyst for greater economic output since they support different sectors of the economy. Specifically, they facilitate the effective supply and delivery of commodities as well as improved business efficiency. Through working with different states and partners, the US can create dependable and robust systems that shore up business confidence. The systems drive greater business growth and investment, leading to the discovery of innovative economic openings (Simpkins, 2018). They also help the government realize key objectives, such as improved quality of life for citizens, the creation of job opportunities, improved economic productivity, and reduced commodity prices for American citizens. As part of internationalization, the systems enable the movement of goods and services across different markets, thereby promoting international trade and cooperation.
References
Baggett, R. K. (2018). Infrastructure partnerships and information sharing. In R. K. Baggett & B. K. Simpkins (Eds.), Homeland security and critical infrastructure protection, 2nd ed. (pp. 171-189). Santa Barbara, CA: Praeger Security International.
Simpkins, B. K. (2018). Introduction to critical infrastructure and resilience. In R. K. Baggett & B. K. Simpkins (Eds.), Homeland security and critical infrastructure protection, 2nd ed. (pp. 1-31). Santa Barbara, CA: Praeger Security International.
Taquechel, E. F., & Lewis, T. G. (2017). A right-brained approach to critical infrastructure protection theory in support of strategy and education: Deterrence, networks, resilience, and “antifragility.” Homeland Security Affairs, 13(8). Web.
Critical infrastructure comprises the networks, facilities, systems, and related assets on which society depends to preserve economic viability, public safety and health, and national security. The chosen public location for this essay contains various critical infrastructure assets, including food stores, fuel supply, hospitals, public transport, and financial institutions. Several threats related to these assets can be fatal (Cybersecurity & Infrastructure Security Agency, 2020). Generally, all five assets face accidental threats, a primary threat category that can materialize at any time. Categorically, food stores face both natural and man-made threats: adverse weather driven by human activity on land affects food production, lowering the supply to the stores, while natural hazards such as earthquakes and floods also disrupt crop growth and the food supply.
On the other hand, fuel suppliers face hiked prices and fuel depletion, man-made and natural threats respectively. Hospitals face both man-made and natural threats; for example, the novel coronavirus is a natural threat that imminently impacts hospitals’ operations, while shortages of healthcare professionals, equipment, tools, and medications are man-made threats affecting the hospital sector. Public transport experiences both natural and man-made threats: extreme weather conditions such as hurricanes can interfere with transport modes, and fuel shortage, an artificial threat, can reduce the sector’s efficiency. Lastly, banking institutions face security breaches due to cyber attacks.
Ranking Assets:
1. Food Stores
2. Fuel Supply
3. Hospitals
4. Public Transport
5. Financial Institutions
The top five assets listed above face various threatening activities. In the contemporary world, banking institutions face security breaches due to cyber attacks; for example, cyber criminals breached the security system of Flagstar Bank in Troy, Michigan, stealing customers’ vital information and data. As for hospital assets, if there is a lack of professionals, tools, and equipment, the public cannot acquire comprehensive healthcare services, exposing lives to untreated medical conditions. Natural calamities affecting food stores, fuel supply, and public transport derail the services these assets provide to citizens: when a natural calamity destroys roads or railroads, service delivery is cut. For example, Hurricane Ida collapsed a Mississippi highway, injuring ten people and killing two. Additionally, the destruction of these assets requires extra capital for reconstruction. Besides, if the oil supply is low due to hiked prices, ordinary citizens face economic suffering, as observed in the United States and internationally.
Current protection measures vary by asset. Banking institutions maintain robust computer, network, and system security. Agricultural departments have introduced new farming methods to help food stores overcome adverse weather conditions. Additionally, transport assets are repaired frequently and built to more weather-resistant designs. The hospital sector has ensured that healthcare professionals are recruited to meet demand and that equipment and tools are available. Lastly, the oil supply is being boosted through new oil extraction methods, storage, and purchases when needed.
The Uptown community space originated in the early 90s, with various leaders’ initiatives bringing a breakthrough for the community. It started with a small health center that was expanded into a hospital in 2004. This accelerated the community’s growth through the creation of banks, food stores, and transportation systems, which were essential in satisfying the needs of the individuals and families who lived there by giving them access to fundamental assets and opportunities. These aspects frame the identity of Uptown, which has grown into a well-known town.
The assets that would require the most protection are banking institutions. Banks have been prone to attacks for an extended period; for example, there are physical thefts, computer fraud, and cyber fraud, where servers are hacked to acquire customers’ personally identifiable information (PII) (Homeland Security, 2022). Banks require protection because individuals and companies depend on bank transactions to carry out enterprise activities and to save their money. Therefore, it is upon the bank to create a safe and secure environment for customers’ assets. This can be done by security personnel physically guarding bank buildings to ensure no interruptions or theft activity.
Although these measures have prevented unnecessary attacks, it is not always known when and how thieves could strike; some theft activities result in death or in people being held captive, which may cause both mental and physical trauma. A hostage-taking might force the bank to use various techniques in bargaining with the criminals, leading to losses and wasted time (U.S. Department of the Treasury, 2020). In some cases, the bank may be forced to pay a large sum of money to secure the hostages’ release; this, plus the money stolen, results in a considerable loss, often making it impossible for some banks to reorganize and grow again. Such banks often fail, resulting in the loss of customers’ assets, which can take years to be refunded.
Secondly, many banks have moved to online services due to technological advances. These services have also been prone to breaches, with people attempting to steal confidential information or funds, so banks use cybersecurity in banking-system transactions to safeguard customers’ assets. As many individuals go cashless, procedures are carried out online through checkout websites, while physical credit-card skimming devices remain a threat.
PII can be stolen in both channels, redirected toward other sites, and used for malicious acts. This activity affects consumers and substantially sabotages the banks as they try to recover data that has been lost or stolen. In turn, customers lose trust in a bank that has been stolen from. When a bank’s online page has been breached or hacked, the consequences may be harsh on both parties, with customers canceling their credit card registrations and establishing new accounts at other banks (U.S. Department of the Treasury, 2020). Therefore, banks need to be protected to provide customer satisfaction and security for their assets.
Weak protection measures are prone to hacking or theft. They include the use of personal identification documents alone when entering banks, which can be improved by adding multifactor checks to identify individuals quickly; the use of untrained security guards, which can be remedied by training security personnel to deal with theft; changing bank locks too often, which can be corrected by using safe and secure locks that do not need constant alteration; and the use of easy security passwords for logging into the bank’s homepage, which can be solved by creating passwords that only the user or bank staff can remember. Automatic log-ins are also prone to hackers, who can access customers’ credentials without entering log-in details; this can be solved through automatic logouts.
Strong, recommended bank protection measures are responsible for safeguarding banks’ data and customer assets. First, banks should educate their customers on the consequences of vulnerabilities, such as data hacking, to help them develop the habit of staying aware of such acts so that they can act accordingly to keep their data safe. Banks should also hire staff who are well trained and equipped with the skills to deal with emergencies. Thirdly, a well-thought-out security audit is fundamental before implementing a new computerized security program (U.S. Securities and Exchange Commission, 2022); it helps banks foresee the advantages and disadvantages of an existing setup and develop alternative approaches that save the bank money.
A bank’s cybersecurity structure includes measures that need the appropriate hardware to restrict attacks. A well-configured firewall helps banks stop malicious activity before it reaches other network sectors. Antivirus software is used alongside firewalls, with updates to catch potentially disastrous attacks that a firewall alone might miss. Multifactor authentication (MFA) is also critical in protecting consumers who use phone and internet apps to carry out banking activities. MFA restricts hackers from accessing the network through its requests for additional proof of identity; for example, a six-digit code and a notification are sent to a user’s phone. The use of biometrics is considered much safer than a texted code, since it uses retinal scans, fingerprints, and automatic face recognition to determine users’ identities; this also helps identify everyone entering and leaving the bank.
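The six-digit codes mentioned above are typically generated with an HMAC-based one-time password algorithm; a minimal sketch of HOTP (RFC 4226), on which time-based codes are built, looks like this:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Minimal HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: the standard 20-byte ASCII secret, counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Time-based codes (RFC 6238) simply replace the counter with the current 30-second interval, which is why a code displayed on a phone expires after about half a minute.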
Several web pages and apps permit users to stay logged in if they wish, enabling them to access their information at any time without entering login details. Automatic logout mitigates this risk by closing a user’s connection after a few minutes or seconds of inactivity. These measures are among the best approaches to protecting banks, since they have been well implemented and regulated using various techniques, skills, and knowledge; they have been applied for some time now, and most have produced excellent results.
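The automatic-logout behavior described above amounts to expiring a session after a fixed period of inactivity. A simple sketch follows; the timeout value is arbitrary, and the clock is injectable so the behavior can be exercised without waiting in real time:

```python
import time

class Session:
    """Expires after `timeout` seconds of inactivity; `clock` is injectable
    for testing and defaults to the real monotonic clock."""

    def __init__(self, timeout: float = 120.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_activity = clock()

    def touch(self):
        # Call on every user action to reset the inactivity timer.
        if not self.is_active():
            raise RuntimeError("session expired; please log in again")
        self.last_activity = self.clock()

    def is_active(self) -> bool:
        return (self.clock() - self.last_activity) < self.timeout

# Simulated clock: the session lapses once 120s pass without a touch().
now = [0.0]
s = Session(timeout=120.0, clock=lambda: now[0])
now[0] = 60.0
s.touch()           # activity at t=60 keeps the session alive
now[0] = 200.0      # 140s of inactivity since t=60
print(s.is_active())  # False
```

The key design point is that any action resets the timer, so only genuinely idle connections are closed.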
Hydrocarbons have been the most common type of fuel used in machines throughout the past century, as most machines have been designed to use the most efficient and readily available fuels. Hydrocarbons are also useful for domestic purposes, including cooking.
Hydrocarbons have been the preferred source of fuel because of their availability, their minimal effects on the environment, and the safety associated with using them. Nowadays, liquefied natural gas (LNG) is preferred to other hydrocarbon fuels because of its chemical and physical properties.
Liquefied Natural Gas (LNG)
Chemical and Physical Properties
Liquefied natural gas is generally a colorless and odorless gas that is changed into liquid form for easy transportation and use. However, in order to detect gas leaks, an odorant is added at some point during liquefaction, ensuring that any leakage can be detected easily by smell (Tusian and Gordon 87). In addition, liquefied natural gas is non-corrosive and non-toxic, and in its liquid state it does not burn.
It is important to note that this type of fuel is a fossil fuel composed of hydrogen and carbon compounds; hence it is categorized as a hydrocarbon fuel. Natural gas is a mixture of various compounds, mainly methane, together with ethane, propane, and butane. Additionally, this type of fuel contains some impurities and heavier hydrocarbons, including carbon dioxide, hydrogen, and sulphur compounds (Hazlehurst 451).
The boiling point of liquefied natural gas is usually −162°C, though this depends on the compounds present in the mixture. When burned in sufficient air, liquefied natural gas produces carbon dioxide and water vapor, one quality that makes its combustion products non-toxic.
On the contrary, if the air supply is limited, combustion can produce carbon monoxide, which is toxic. The density of LNG also varies with its components but is usually between 430 kg/m³ and 470 kg/m³. The vapor has a specific gravity of approximately 0.6 relative to air and is thus lighter than air. Liquefied natural gas has a very high ignition temperature, around 540°C, which makes it much harder to ignite than many other fuels (Tusian and Gordon 88).
Storage and Mode of Transport
In its liquid form, LNG occupies a very small volume compared with its gaseous state, which makes it economical and cost-effective to store in the liquid state. Additionally, liquefied natural gas cannot burn without air, making combustion inside a sealed cylinder impossible (Hazlehurst 37).
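The volume saving from liquefaction can be roughly quantified. Taking a liquid density of about 450 kg/m³ (mid-range of the values cited above) and a gas-phase density of roughly 0.75 kg/m³ for methane at ambient conditions (an assumed figure for illustration), liquefaction shrinks the volume by a factor of about 600:

```python
# Rough volume-reduction estimate for liquefying natural gas.
rho_liquid = 450.0  # kg/m^3, mid-range of the 430-470 kg/m^3 cited above
rho_gas = 0.75      # kg/m^3, assumed for methane at ambient conditions

# Same mass, so the volume ratio is the inverse of the density ratio.
reduction = rho_liquid / rho_gas
print(f"Volume shrinks by a factor of about {reduction:.0f}")
```

This factor of several hundred is precisely why shipping and storing natural gas as a cryogenic liquid is economical.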
Therefore, LNG is commonly transported in intermodal tanks. It is important to note that LNG is safe to use because, besides the odorant that ensures any leakage is detected, it is not poisonous and does not produce poisonous products during complete combustion.
Hazard scenario
If liquefied natural gas carried in a ship spills, it can lead to a state known as localized overpressure. This is because its boiling point is far lower than ambient temperatures, so the liquid turns spontaneously into a gaseous state and can cause a physical explosion. This can be catastrophic at times because the pressure rating of the cargo containment is low.
Uses
Liquefied natural gas has been put into use, both domestically and commercially. Domestically, LNG is used for cooking as well as lighting while in industries, it is used as a source of heat for various processes (Hazlehurst 469).
Works Cited
Hazlehurst, John. Tolley’s Basic Science and Practice of Gas Service. London: Routledge, 2012. Print.
Tusian, Michael, and Gordon Shearer. LNG: A Nontechnical Guide. Tulsa: PennWell Books, 2007. Print.
In ancient Europe, infrastructure was archaic and medieval. Roads and paths were simple, but even with their simplicity, the merchants, soldiers, and pilgrims of ancient Europe made use of them.
A comparison of today’s infrastructure with the landscape of Europe shows that Europe has developed from such archaic infrastructure into modernized infrastructure. Thus, the elements of today’s Europe need to be studied closely to understand infrastructure in relation to the landscape. It should be understood that linear characteristics or features can be defined in terms of the elements that structure the landscape (Lamy, 2005).
A study of the history of structures in Europe reveals that structural planning of the European landscape began close to a century ago. This means that, as time has moved on, the people of the European continent have been thinking of ways to restructure their infrastructure to match the requirements of the time. It further means that they place great value on linear structures such as roads and linkages. Today’s infrastructure in Europe is characterized by modern roads and railways. Besides, distinct waterways for marine operations and transportation have developed, with many harbors and docking points emerging (Wolfgang et al., 2011).
Today’s railway lines are quite modern, and the rails form an important element of the infrastructure of the European landscape. Given that the landscape is partly mountainous, underground railway lines are characteristic of it. This means that the technology on the continent is now fully fledged, and its results are most evident in infrastructure that is not only high-tech but also breathtaking. A glimpse at the modern roads shows technological skill in a class of its own. These roads, as elements of today’s European infrastructure, are indeed modern, suited to the mountainous landscape, and they create a perfect linkage between the cities and towns of the European landmass.
Conclusion
It has been noted that European landscape infrastructure has developed from archaic features, such as paths, into modern infrastructure that showcases modern technology. It was further observed that the landscape elements and linear features link towns and cities, making human activities easier through the ease of transportation they provide.
Thus, a study of the modern European landscape infrastructure shows that roads, highways and super-highways, modern railway lines with underground passages, and waterways are the elements of today’s landscape infrastructure in Europe. These infrastructures have eased many human activities through the ease of transportation and the linkages created between the towns and cities of Europe.
References
DeBlij, J., & Muller, P. (2010). Geography – Realms, Regions, and Concepts, 14th edition. New York: John Wiley & Sons, Inc.
Lamy, L. (2005). Cognizant – A European Infrastructure Newcomer. Dana: IDC Research.
Wolfgang, B., Bill, M., Chris, A., Connaughton, M., & Grannan, M. (2011). Market Overview: European IT Infrastructure Outsourcing. Cambridge: Forrester Research Inc.
America’s infrastructure is literally falling apart. From increased natural calamities through extensive power blackouts and collapsing bridges to mediocre roads, everything indicates that US infrastructure is not up to standard. Spending on infrastructure does not reflect the value or objectives of these constructions. As such, America has not kept up to speed with the rest of the world.
This collapse of infrastructure could kill the economy and jeopardize efforts to revive it. The American Society of Civil Engineers released its latest infrastructure report card in 2009, giving the US a mean grade of D. This paper focuses on roads, which received a grade of D; waterways, graded D-; and bridges, at a slightly better grade of C. In order to attain grade ‘A’, America needs a five-year plan that will see the government invest over $2.2 trillion.
America's bridges are in a devastating condition, collapsing and filled with holes (Cooper Para. 1). Federal Highway Administration estimates reported in the national transport statistics show that about 25 percent of the bridges in the United States are structurally deficient or functionally obsolete (Gerdes 21).
As a result, heavy vehicles and school buses must take longer routes in order to use sound bridges. These lengthy detours waste fuel and time, robbing the government of resources that could be channeled elsewhere for better production (The Economist Para 16).
The roads are no better. Almost one out of every four miles of the urban interstate system is categorized as in exceptionally poor or mediocre condition (American Society of Civil Engineers Para. 2), and about one third of American roads are in a poor state. The American Society of Civil Engineers estimates that about one third of highway fatalities are caused by substandard road conditions, roadside hazards, and/or old and outdated road designs (American Society of Civil Engineers Para. 3).
The American waterways and levees are remarkably undependable. Last year saw significant damage, loss of property, and hundreds of lives lost after heavy rains overwhelmed the Mississippi River levees (American Society of Civil Engineers Para. 3).
The American Society of Civil Engineers estimates that over 170 levees face an extremely high risk of failing because of poor maintenance practices (American Society of Civil Engineers Para. 3). Over one quarter of America's dams have already exceeded the lifespan for which they were designed. They are hence in dire need of repairs and extensive maintenance to ensure they are safe for use (Cooper Para. 1).
In certain respects, America's substandard infrastructure is to be expected. Most of the physical infrastructure that Americans use today was constructed during World War II and the Great Depression era (Gerdes 22).
For instance, most of the roads were constructed following the signing of the Federal-Aid Highway Act of 1956. President Eisenhower signed the act on June 29, 1956, and the interstate highway system was then constructed, about 55 years ago. With no major repairs since, this is sad news for drivers (Gerdes 23).
In 1982, the average American driver spent 16 hours a year in traffic jams. This figure rose to 47 hours by 2003 because poor roads and maintenance create bottlenecks. All these hours of traffic jams waste up to 2.3 billion gallons of fuel, estimated to cost over 64.1 million US dollars.
Clearly, there have been no new bridges, roads, or waterways, and no substantial repairs of the existing infrastructure. All the leading facilities seen today were constructed by previous generations: the Holland Tunnel, the interstate highway system, the Golden Gate Bridge, and the Hoover Dam (Gerdes 25). It was during those times that the US had a transport system envied by the whole world.
Today, the highways are particularly congested, the ports are second-rate, and the traffic control systems are primitive. The image of a superpower is deteriorating, and America should be deeply concerned, even embarrassed (The Economist Para. 16). The current administration is like a rich child who failed to maintain the commodious mansion he inherited.
For over three decades now, America has been surviving on patched roads and bridges, believing it can take for granted the crucial facilities that powered its rise to prominence in the modern world.
The government has to be candid with Americans. It must stop reinvesting in failing infrastructure through short-term, miracle-cure repairs intended to prop up the slow-moving economy, and instead conduct a considerable restructuring of this infrastructure (The Economist Para. 19). Critics should stop labeling all public investment as robbing citizens of their hard-earned cash. Revamping the infrastructure requires collective responsibility and support.
This would yield better projects than the 'shovel-ready' projects set up by President Obama as an economic stimulus strategy. There needs to be a significant overhaul design supported by organizations such as the American Society of Civil Engineers, which would review the needs of the current infrastructure and draw up a national to-do list ranked from the most risky and critical downward (American Society of Civil Engineers Para. 3).
America's infrastructure needs dramatic improvement. The government has failed to prioritize and fully finance national infrastructure projects. Consequently, the current infrastructure is not only in defective shape but also not improving. The habit of relying on patchwork repairs and maintenance has prevented the country from progressing toward excellent infrastructure.
Works Cited
American Society of Civil Engineers. The 2009 Report Card for America’s Infrastructure, 2011. Web.
Cooper, Michael. "U.S. Infrastructure Is in Dire Straits, Report Says." The New York Times, 2009. Web.
Gerdes, Louise. How Safe Is America’s Infrastructure? Farmington Hills, MI: Greenhaven Press, 2009. Print.
The Economist. "America's Transport Infrastructure: Life in the Slow Lane." The Economist, 2011. Web.
Infrastructure security has always been a top priority for the US government. Still, September 11, 2001 was the breaking point that is considered the start of heightened interest in critical infrastructure protection, in both the public and private sectors. To understand the main purpose of this research, it is crucial to define the main notions to be considered: critical infrastructure and the private sector.
Critical infrastructure refers to physical and computer-based systems such as telecommunications, banking, transportation, and water and energy resources. The private sector of a country's economy comprises organizations that are not controlled by the state, such as private firms, companies, banks, and other non-government organizations (Radvanovsky and McDougall 5).
Thus, the main purpose of this research is to consider the main security strategies the private sector uses in relation to the protection of critical infrastructure. The USA has a Department of Homeland Security, which helps the private sector cope with the problems it may face.
Critical Infrastructure Protection Challenges for Private Sector
There are a number of challenges the private sector must cope with in order to organize critical infrastructure protection properly, and a number of normative laws are aimed at analyzing those challenges and offering solutions. Considering the challenges in addressing cybersecurity, the following key ones may be identified: organizational stability must be achieved, the roles and capacities of cybersecurity functions must be defined and awareness increased, efficient partnership with stakeholders must be established, and information exchange must be kept at a high level (Powner 12).
The private sector also faces other challenges, such as securing control systems. On the one hand, technological innovations allow specialists to control processes by means of various facilities.
On the other hand, specialized security technologies for control systems have not yet been developed, for a number of reasons. Moreover, some argue that securing control systems is not economically justified, which creates further problems. Finally, security control systems may become a source of conflict over priorities (Dacey 18).
The private sector also faces a number of challenges in the informational sphere. The National Infrastructure Protection Center is the organization that helps the private sector cope with those challenges, as establishing correct information-sharing relations with the state is the first step in dealing with the problem.
These challenges must be faced both by the private sector and by the Department of Homeland Security; even though the latter is a state institution, security in the private sector will reach the highest level only when the government supports it.
Introduction to Threat and Risk Analysis Models
To conduct critical infrastructure protection properly and at the highest level, risk assessment must be provided. Risk management and critical infrastructure protection in the private sector should be conducted on the basis of the assessment, integration, and management of threats, vulnerabilities, and consequences.
To conduct risk assessment in the private sector, the following steps should be taken in this succession:
Identification of the most critical infrastructures,
Identification, evaluation, and assessment of the threats,
Consideration of the vulnerability of those critical assets,
Specification of the expected risks along with the expected consequences.
The next stage is to prioritize risk reduction activities: specialists should identify and evaluate the ways of reducing the risks that have already been highlighted, and prioritize risk reduction by means of a risk reduction strategy.
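The prioritization described above can be sketched as a simple scoring exercise. The following is a minimal illustration, not a method taken from the source: it assumes the common formulation in which risk is scored as the product of threat, vulnerability, and consequence ratings, and the asset names and numeric ratings are invented for the example.

```python
# Hypothetical risk prioritization sketch: rank critical assets by
# risk = threat * vulnerability * consequence, each rated 1-5.
# Asset names and ratings below are invented for illustration only.

assets = {
    "control system":   {"threat": 4, "vulnerability": 5, "consequence": 5},
    "billing database": {"threat": 3, "vulnerability": 2, "consequence": 4},
    "office network":   {"threat": 2, "vulnerability": 3, "consequence": 2},
}

def risk_score(ratings):
    """Combine the three ratings into a single risk score."""
    return ratings["threat"] * ratings["vulnerability"] * ratings["consequence"]

# Rank assets from highest to lowest risk, so that risk reduction
# effort can be directed at the top of the list first.
ranked = sorted(assets, key=lambda name: risk_score(assets[name]), reverse=True)
for name in ranked:
    print(name, risk_score(assets[name]))
```

With the invented ratings above, the control system would top the list, so its risk reduction measures would be planned and funded first.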
The private sector should collaborate with the government in order to stay aware of innovations in the critical infrastructure protection field and to count on the state and its help. The role of the government in the security of the private sector is crucial. The Homeland Security Act of 2002 and other administration documents are directed at helping the private sector cope with threats and minimize risks.
Basic Principles for Critical Infrastructure Protection
The fundamental principles for critical infrastructure protection may be based on the CARVER method, which rests on six factors that influence the efficiency of the procedure. CARVER is a military targeting strategy used for identifying targets for attack, and it is reasonable to apply its principles to identifying threats to critical infrastructure in the private sector.
This method is used to prioritize the targets considered most vulnerable. The CARVER method is based on the following components: Criticality, Accessibility, Recuperability, Vulnerability, Effect, and Recognizability. Its main principle is to identify the infrastructure with the highest value and to protect it with the greatest effort.
Criticality means identifying the target that plays a crucial role in achieving the goal and whose elimination would set a private company far back. Accessibility means that the company should consider whether the target is easily reached or not: critical infrastructure protection implies a high level of security and a low level of accessibility.
The company should also check the recovery capacity of all its critical infrastructure and pay more attention to assets whose capacity to recover is lower. The vulnerability of the target is likewise essential; the company should organize its work so that all objects and targets that may be considered vulnerable are protected best.
Effects should always be predicted: it is important for a private company to understand the outcomes of a threat in order to prevent them in case of any problems. Recognizability of the critical infrastructure is also essential. The private sector should protect its assets so as to reduce the risk of a target being recognized by a competitor and either copied or destroyed (Pavlina n/p).
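In practice a CARVER assessment is commonly laid out as a matrix: each target receives a numeric rating for each of the six factors, and the ratings are summed so that the highest total marks the asset most attractive to an attacker, i.e. the one to protect first. The sketch below is a hypothetical illustration of that idea, not taken from the source; the target names, the 1-10 scale, and the ratings are invented.

```python
# Hypothetical CARVER matrix sketch: rate each target 1-10 on the six
# CARVER factors and sum the ratings. The highest total indicates the
# asset most attractive to an attacker and therefore the protection
# priority. Target names and ratings are invented for illustration.

FACTORS = ["criticality", "accessibility", "recuperability",
           "vulnerability", "effect", "recognizability"]

targets = {
    "water pumping station": [9, 6, 8, 7, 9, 5],
    "corporate data center": [8, 4, 6, 5, 7, 6],
    "warehouse":             [3, 7, 2, 6, 3, 8],
}

def carver_total(ratings):
    """Sum the six factor ratings into a single CARVER score."""
    assert len(ratings) == len(FACTORS)
    return sum(ratings)

# Rank targets by total score, highest (most critical to protect) first.
ranked = sorted(targets, key=lambda t: carver_total(targets[t]), reverse=True)
```

Under these invented ratings the water pumping station scores highest, so it would receive protective measures first; the real value of the matrix is that it makes the trade-offs between the six factors explicit and comparable across assets.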
Vulnerability Analysis Models
Using the vulnerability analysis model, a company follows a set of steps to make sure that competing agents cannot reach its critical infrastructure or violate its security. The main purpose of vulnerability analysis is to identify, and reduce the exposure of, systems that may be open to natural and man-made damage.
Thus, the steps to complete this method are: a) identify the gaps and research needs in the sector, b) check the competitors who may be suspected of organizing an attack, and c) develop strategies aimed at reducing the threat.
The main purpose of this model is to encourage businessmen and entrepreneurs to protect their strategic objects better or, vice versa, to find faults in competitors' critical infrastructure protection and use that information to compete with them in the business arena (Catlin and Kautter 3).
Introduction to CI/KR Dependencies and Interdependencies
The Department of Homeland Security has identified Critical Infrastructure and Key Resources (CI/KR) that are protected by the government regardless of whether the public or the private sector is involved. Obviously, DHS cannot cope with all CI/KR alone, so a number of other departments help.
To provide effective protection of CI/KR, the public and private sectors should establish good relationships based on the exchange of ideas and information and on security planning that shares best practices; coordinating structures should be well established, and collaboration with the international community is as important as building public awareness.
The DHS identifies the following CI/KR: agriculture and food, commercial facilities, dams, energy, information technology, postal and shipping, banking and finance, communications, the defense industrial base, transportation systems, chemical, critical manufacturing, emergency services, healthcare, nuclear reactors, materials and waste, and water (“Critical Infrastructure and Key Resources”). If any of these CI/KR fall within the private sector, company managers must take great care of their security.
Concepts of Continuity Of Operations (COOP) Plans and Continuity Of Government (COG)
Continuity of operations may be defined as a government effort to make sure that Primary Mission Essential Functions keep working in spite of any incident, including natural disasters, technological attacks, and other accidents. The main purpose of COOP is to enable the private sector, insofar as it deals with CI/KR, to continue its work no matter what is happening in the country. The Continuity of Operations (COOP) Plan is a roadmap for implementing the program designed by the Continuity Program (FEMA n/p).
Continuity of Government (COG) is defined as the necessity for the government and all its structures and operations to keep functioning regardless of any incidents in the country. The main purpose of COG is to preserve the constitutional protection of the country's citizens and the constitutional form of government (FEMA n/p).
In conclusion, the proper functioning of the government is possible only if the private and public sectors work together and are able to collaborate with each other. It is crucial to understand that a company's critical infrastructure must be properly protected.
This means that the CARVER method should be applied to make sure that cyber systems, as well as other engineering systems, are properly protected. Vulnerability analysis is very helpful for maintaining security in critical infrastructure. Continuity of Operations (COOP) and Continuity of Government (COG) are frameworks which require that all systems and projects essential to the state keep functioning, no matter what is happening in the country.
Works Cited
Catlin, Michelle and Donald Kautter. “An Overview of the Carver Plus Shock Method for Food Sector Vulnerability Assessments.” Federal State Department of Agriculture, 18 July 2007. Print.
“Critical Infrastructure and Key Resources.” Department of Homeland Security. 2010. Web.
Dacey, Robert F. “Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems.” United States Government Accountability Office 30 March 2004. Print.
FEMA. 2010. Web.
Pavlina, Steve. “How to Prioritize.” Pavlina LLC May 22, 2007. Web.
Powner, David A. “Critical Infrastructure Protection: Challenges in Addressing Cybersecurity.” United States Government Accountability Office 19 July 2005. Print.
Radvanovsky, Robert and Allan McDougall. Critical Infrastructure: Homeland Security and Emergency Preparedness. New York: Taylor and Francis, 2010. Print.
The significance of infrastructure in a nation's economic growth should not be underestimated. Research indicates that reliable infrastructure is significant in a country's investment decisions, as it has a direct impact on economic growth. As a matter of fact, improved infrastructure is a necessity for sustainable development. For any nation to thrive economically, it needs a well-organized transport network, improved sanitation, adequate energy, and an effective communication system.
In addition, infrastructure services lead to improved productivity in businesses, homes, and government services. The time spent fetching water or fuel, or getting to marketplaces and other social centers, is extremely significant. Thus, when household connections, transport systems, and telecommunication services are reliable, members of the household can engage in more productive activities.
On the other hand, expanding the quantity and enhancing the quality of infrastructure also reduces costs and boosts market opportunities for businesses. This leads to improved investment and productivity, which are crucial in sustaining a country's economic growth. Besides, the endeavor results in the creation of employment, which is also crucial for economic growth.
For many years, the US has had a history of being the strongest economy in the world. This has mostly been attributed to its infrastructure, ranging from airports and telegraph lines to a superb transport system, among other things (Altman, 2007). Nevertheless, the current state of America's infrastructure has been a concern to many Americans.
The deteriorating state of infrastructure has been linked to the current economic crisis in the United States of America (Altman, 2007). This has prompted a swift reaction from some economic analysts, who are attempting to ease the effects of the aging infrastructure that has negatively impacted the country's economic development.
The costs incurred as a result of America's crumbling infrastructure seem to be hampering its economic growth. Such costs may take the form of repairing America's poor road networks, the time spent in traffic and airline delays, the effects of electrical power losses, and additional operating expenses.
For these reasons, there is a need for America to maintain its infrastructure in order to keep up the rate of growth it has enjoyed in the past and to remain ahead of other nations. American politicians should start thinking of the long-term benefits of investing in infrastructure. The American public, in turn, should start viewing infrastructure investment from a national point of view (Altman, 2007). They should hold their leaders to account to ensure the accomplishment of this undertaking.
The US government should come up with policies aimed at stabilizing the economy and commence a rapid revival of the infrastructure. Such policies should ensure that the recovery process is comprehensive enough to deliver long-term benefits. Moreover, the crisis should be used as a catalyst to speed up structural shifts toward a stronger future economy.
In conclusion, even though the costs incurred in improving America's infrastructure may seem enormous, there is no doubt that the long-term benefits of this undertaking will be massive. For this reason, the US government should set its priorities right and embark on investing in infrastructure. This will result in the creation of employment opportunities, the preservation and enhancement of citizens' standard of living, and stronger economic growth.
Since the early 1990s, the private sector has become increasingly involved in the provision of urban infrastructure in Australia under the National Competition Policy, to meet the increasing demand of population growth (Ennis 125). Additionally, there is interest in private financing of infrastructure, driven by the fiscal crisis of government, pressure for services, and the high interest rates of the 1980s.
There are several key levels of private sector involvement including the introduction into the public sector and privatization, among others (Cannadi and Dollery 6).
Throughout this paper the term 'infrastructure' refers to both physical and social infrastructure, which can be simply defined as physical facilities and the services they provide (Fox 10). They include transportation, communication systems, schools, water, and power lines. Moreover, a typical infrastructure project has three phases: construction, operation, and ownership (Quiggin 51).
Additionally, the private sector can play several key roles, such as that of developer, which involves financing, constructing, operating, maintaining, and managing infrastructure. Private companies have been involved in facility development, including the design, financing, construction, ownership, and operation of public sector utilities (Akintoye et al. 461).
Increased involvement of the private sector in the provision of urban infrastructure in Australia has a host of benefits. Firstly, it gives different players an opportunity to compete. Such competition is necessary in order to improve the quality of services offered (Kumar 18).
According to the Australian government, as a result of the introduction of competition in 1995, electricity bills fell by approximately 30 percent on average, rail freight rates for the Perth-Melbourne route fell by 40 percent, and service quality and transit times improved.
Notably, competitive pricing and service improvement are among the key benefits of private sector involvement. Additionally, it leads to greater competition, lower costs, affordable prices, and better quality of services (Cannadi and Dollery 14). This is necessary for curbing monopoly within the industry.
Funding infrastructure through the participation of the private sector is another benefit. The private sector can be considered an additional source of funding, not only for infrastructure provision, but also for the maintenance of existing infrastructure.
For instance, the construction, operation and maintenance of major urban water facilities in Adelaide have been provided by the private sector (Department of the Prime Minister and Cabinet 8).
According to Kirwan (1990), reliance on public funding of infrastructure can be reduced by making the private sector directly responsible for providing and financing these services. Hence, the financial problems of service authorities can be reduced by funding infrastructure through developers (Kirwan 185).
In addition, budget deficits would be reduced by privatisation in both the short and medium term (Cannadi and Dollery 15). Lack of governmental funding and undesirably low density of suburban development tend to be the key reasons for funding by developers. Indeed, there are two ways of funding through developers: cash contributions and the transfer of assets.
For instance, over 10 percent of the Sydney Board's capital expenditure, including cash and assets, and about 25 percent of the assets for providing water and sewerage services in Melbourne since 1991, have been funded by developers (Neutze 23). Other benefits include access through the private sector to otherwise unavailable resources, financial assistance to weak systems, and reduced risk in public infrastructure investment.
The third benefit of private sector involvement is its positive effect on overall management. The Department of the Prime Minister and Cabinet reports that improved management and working practices are potential benefits of private sector involvement in infrastructure provision.
The fourth benefit, based on their experience, is that increased involvement of the private sector improves safety and security, as seen in the enhancement of water users' safety resulting from this involvement (Department of the Prime Minister and Cabinet 7).
The fifth benefit is efficiency in delivering infrastructure. According to the Department of the Prime Minister and Cabinet, by using new technology the private sector is capable of providing improved infrastructure, e.g. in water delivery.
Compared with the public sector, private sector delivery of infrastructure seems to be more efficient (King and Pitchford 313). This is associated with the private sector's profit motive: an increase in infrastructure delivery means more users and thus greater profitability. Consequently, the private sector's revenue would suffer unless the infrastructure's efficiency and quality were maintained.
On the other hand, the involvement of the private sector carries some potential risks. The first is associated with the construction and operation phases. The Department of the Prime Minister and Cabinet states that a private operator can fail to deliver sufficient services as a financial provider simply because the resources to support the project are unavailable.
Additionally, the key sources of construction and operation risk are cost escalation, covering construction, operation, and maintenance costs, faulty techniques, and delays in construction (Grimsey and Lewis 108). Another part of these risks is caused by the mismatch between supply and demand for basic infrastructure and services (Global Network for Disaster Reduction 1).
These tend to be the effect of unexpected changes in cost, interest rates, and/or demand. The effect of an unexpected change in demand, for example, can be seen in the overestimated demand for the Sydney City to Airport rail link, as a result of which the private firm failed to operate the facility (Cannadi and Dollery 5). In the ownership phase, the public sector is considered better placed to deal with risks than the private sector.
Regarding the risks associated with standards and regulation, PPIAF and the World Bank identified the following private sector risks:
Failure to meet the required standards
Changing regulations, standards or pricing over the contract period (Global Network for Disaster Reduction 1).
Since the private sector puts a high priority on increasing profit, infrastructure provision in low-income areas and some suburbs, and hence the value of land there, might be affected negatively. Accordingly, this raises the issue of equity. If developers bear the cost of infrastructure provision, either the prices of their products will increase or the payment for raw land will be low, especially in the long term (Neutze 24).
Moreover, increased unemployment is another risk associated with private developers. Cannadi and Dollery (2004) assert that private sector involvement in the provision of public sector infrastructure services leads to unemployment. Furthermore, this approach carries environmental risks, because the private sector does not bear the external environmental impact that results from either producers or consumers (Productivity Commission 1).
Conclusion
Increased involvement of the private sector in infrastructure development is due to population growth, high demand for infrastructure and services, and the government's need for additional sources of funding. This involvement has several benefits, ranging from better quality of services to affordable prices. However, it also carries risks, which have to be considered when making decisions.
It is recommended that the authorities pay sufficient attention to identifying and analysing the potential risks, in order to reduce their impact, before creating the contractual arrangements of each infrastructure project involving the private sector. This would allow the private sector to play an essential role in providing high-quality, affordable, and sustainable services and urban infrastructure.
Works Cited
Akintoye et al. “Achieving best value in private finance initiative project procurement.” Construction Management and Economics 21. 5 (2003): 461–470. Print.
Cannadi, John, and Brian Dollery. “An Evaluation of Private Sector Provision of Public Infrastructure in Australian Local Government.” University of New England, School of Economics, 2004. Web.
Department of the Prime Minister and Cabinet. “A Discussion Paper on the Role of the Private Sector in the Supply of Water and Wastewater Services.” Australian Government, 2006. Web.
Ennis, Frank. Infrastructure Provision and the Negotiating Process, Ashgate Publishing Ltd., Hampshire, 2003. Print.
Fox, William. Strategic Options for Urban Infrastructure Management, World Bank, Washington, D. C., 1994. Print.
Global Network for Disaster Reduction. “Urban Risk Reduction: Private-Public Partnerships – Civil Society Perspectives.” Global Risk Forum, 2008. Web.
Grimsey, Darrin and Lewis Mervyn. “Evaluating the risks of public private partnerships for infrastructure projects.” International Journal of Project Management 20 (2002):107-118. Print.
King, Stephen, and Rohan Pitchford. “Privatisation in Australia: Understanding the Incentives in Public and Private Firms.” Australian Economic Review 31.4 (1998): 313-28. Print.
Kirwan, Richard. “Infrastructure Finance: Aims, Attitudes and Approaches.” Urban Policy and Research 8.4 (1990): 185-193. Print.
Kumar, Deepak. “Infrastructure in India.” The ICFAI Journal of Infrastructure (2005): 18-19. Print.
Neutze, Max. “Funding Urban Infrastructure through Private Developers.” Urban Policy and Research 13.1 (1995): 20-28. Print.
PPIAF and World Bank. “Approaches to private sector participation in water services – a toolkit.” The World Bank Group, 2006. Web.
Productivity Commission. “Public Infrastructure Financing: An International Perspective.” Australian Government, Productivity Commission, 2009. Web.
Quiggin, John. “Private Sector Involvement in Infrastructure Projects.” Australian Economic Review 96.1 (1996): 51-64. Print.