South Korean Transportation Infrastructure

Introduction

South Korea has an advanced logistics infrastructure, which the country has been developing since the 1960s. The logistics industry is the ninth-largest in Korea in terms of sales revenue and is therefore among the key drivers of economic growth. As of 2014, the sector employed up to 600,000 workers, and its combined sales revenue stood at $84.4 billion (Kotra, 2016). This places the country high on the list of economies with effective logistics infrastructure, with ports and airports connecting travelers to a variety of destinations worldwide. It is also important to mention that Korea's aviation and marine transport industry is the fifth- to sixth-largest in the world, with Incheon Airport classified as the second-best airport for international freight and the Busan Port regarded as one of the best container ports in the world (Kotra, 2016).

Transportation Infrastructure

Highways

The government of South Korea invests in transportation infrastructure by launching new projects to build expressways, railroads, and other transportation facilities that improve the nationwide network (South Korea government's US$108.61 bn for transport infrastructure, 2013). The country can boast of its highways, which are divided into national roads, expressways, and other types of roads below the national level. In 2001, the government renumbered the expressways to resemble the US system of interstate highways. The freeway network covers the majority of the country's roads, with tolls collected via an electronic toll collection system developed and operated by the Korea Expressway Corporation.

Railways

The railway system is considered among the most convenient ways of traveling between destinations within the country. Since bus schedules vary depending on the severity of traffic, the railroad allows travelers to make exact plans because train schedules rarely change (Trains, 2017). Depending on the type and number of amenities offered onboard, trains are classified into KTX express trains, KTX-Sancheon, Saemaeul, ITX-Saemaeul, ITX-Cheongchun, Mugunghwa, and KORAIL tourist trains (Trains, 2017, para. 2). It is important to mention that the Korean railway is highly accommodating to both foreign and local travelers, providing trains to remote destinations (e.g., Yeosu and Changwon) as well as the most popular tourist attractions, with an exclusive pass that allows unlimited travel for visitors.

Water

When it comes to the water infrastructure in Korea, the government has managed to develop a sustainable water sector that grows in step with the country's economy. Despite the challenges that South Korea experienced in the past, the country can now boast of universal water and wastewater services. Water pollution has been significantly reduced, and morbidity associated with water-borne diseases has become non-existent (Danilenko, 2016). The government of South Korea continues to invest in the water infrastructure to maintain the system and ensure its sustainability.

Air

Korean Air Lines Co., Ltd. is the largest airline carrier in the country; it replaced Korean National Airlines, which operated until 1962. Currently, Korean Air is privately owned and serves both international and local passengers. Asiana Airlines also has a large presence in the country, carrying both commercial and cargo traffic. At the moment, major airlines in South Korea serve international routes, while smaller airlines provide services for domestic travelers. It is important to mention that South Korea has an extremely busy passenger air corridor (based on passengers per year). This is due to affordable air travel prices as well as tight competition in the sector, which facilitated the trend for air travel. As of today, there are 103 airports in South Korea serving both international and domestic destinations.

Pipelines

South Korea operates two pipelines: the South-North Pipeline and the Trans-Korea Pipeline. The former is owned by the Daehan Oil Pipeline Corporation, while the latter belongs to the Korean Ministry of National Defense. With regard to the import of gas to South Korea, there have been some tensions with the North Korean government. However, Russia has been planning to ease the tension between the countries to ensure peace on the Korean peninsula, as reported by Sputnik News (2017).

Intermodal

With regard to intermodal transportation, South Korea is striving to establish itself as a regional and global hub through the development of the Eurasia Initiative. Under the initiative, South Korea has a vision of creating "a new era of Eurasia with co-prosperity through integrating digital, transportation, and Korean Wave networks" (Park, 2015, p. 10). The goal of the initiative is to ensure smooth intermodal transportation through the promotion of regional investments and the open economic integration of the country. The Eurasia Initiative has several priorities, including the commercialization of the North-South route, the establishment of efficient energy transportation networks, and the facilitation of transport connectivity (Silk Road Express) (Park, 2015).

Logistics and Warehousing

Since the 1990s, the South Korean national logistics infrastructure has been expanding to cater to dynamic logistics activities. As mentioned in a KenResearch (2017) article, "the dominance of first-party logistics and second-party logistics has been declining, whereas third-party logistics has been growing at an alarming rate in the country" (para. 2). With such large logistics destinations as Busan, South Korea has many opportunities for growth and expansion within the logistics industry. The warehousing industry grew between 2011 and 2016, with retail and industrial manufacturing leading the segment. The market is dominated by small warehouses (between 2 and 5 thousand square meters); large warehouses (10 thousand square meters and more) rank second, followed by medium-sized ones.

Communication Infrastructure

The South Korean government has also invested in the development of an effective telecom infrastructure to keep up with the active IT market. The economy was transformed with the help of the development of ICT and other high-technology equipment. In addition, the partnership between companies such as Huawei and LG led to the creation of Seoul TechCity, bringing smart-city capabilities to the country's capital. It is essential to mention that the popularity of fixed lines has been declining.

Utilities

South Korea is continuously improving the availability of convenient and innovative utilities to improve both commercial and private life. The utility construction market will be reinforced by the government's plan to support the provision of high-speed Internet to citizens. By 2025, South Korea expects 5G connectivity to cover at least 90% of the country.

In sum, this brief overview of the logistics infrastructure of South Korea shows that the country is highly advanced in this sphere. This means that locations such as Busan-Jinhae will be suitable for setting up a manufacturing operation and distribution hub.

References

Danilenko, A. (2016). Korea: A model for development of the water and sanitation sector.

KenResearch. (2017). South Korea future. South Korea logistics market, warehousing automation.

Kotra. (2016). Korea's leading industries: Logistics. Web.

Park, S. (2015). Korean road to developing intermodal transport system.

South Korea government's US$108.61 bn for transport infrastructure. (2013). Web.

Sputnik News. (2017). Pipeline of peace: How Russian gas could soothe tensions in the Korean Peninsula.

Trains. (2017).

Governance and Infrastructure in a Small Medical Practice

Background Statement

Midtown Neurology has faced significant issues regarding governance and infrastructure management. The medical practice was founded by a single physician with over 20 years of experience helping community members. As the practice grew, it changed from a mom-and-pop operation into a larger practice but faced the pain points of growth. The founding physician hired four new neurologists to help expand the practice. However, because the primary physician insisted on his own decisions, the new professionals forced him to leave the practice. The critical organizational issues the practice faces include organizational leadership, the structure of the practice, and the management of proper behavior and adjustment among employees.

Summary of the scenario and issues in the case

Midtown Neurology, established by a physician with significant experience, gained recognition primarily due to the work of that medical professional. The founding physician held all the powers, responsibilities, and decision-making duties. However, when the large urban hospital associated with Midtown Neurology gained Level 1 trauma status, the practice had to expand to capture new opportunities for development. The major problem was the absence of an established structure and governance defining employees' duties, responsibilities, and schedules, as well as a way to present information and track the processes within the practice.

The secondary issue, which from time to time became the primary one, is the founding physician's lack of leadership skills. It is evident that the founding physician scheduled his own time correctly and managed his responsibilities successfully. Yet when new employees came to Midtown Neurology, the physician could not empower others or delegate some of his duties. The main difference revealed by these problems is between leading a small organization and a larger one: the latter requires establishing controlling protocols, medical records, and administrative accountabilities and dividing them among all workers.

An analysis of the causes and effects

I will address the case's significant problem from the point of view of an outside consultant advising the practice. I have chosen this role because it provides an opportunity to present results in a structured manner and to propose specific solutions that are achievable and applicable to the organization. The advantage of being an external consultant lies in the ability to be unbiased and make decisions based on gathered information without emotional attachment to the practice (Allen, 2020). The disadvantage lies in data collection, because consultants sometimes cannot get the full scope of problems and existing relationships within the organization (McKewen, n.d.).

It is clear that the cause-and-effect system in the organization is primarily connected to decision-making issues and a lack of leadership. When employees' responsibilities are unclear, they do not want to be accountable for their actions. When there is no clear strategy for future development and no continuous communication, everyone acts in his or her own interests and does not take joint steps to help each other. An outside consultant can define the roles and authority of team members and present a structure that excludes vague duties.

The strengths and weaknesses of the organization

The lack of governance is one of the central weaknesses of the Midtown Neurology practice. Governance helps to establish a clear direction for the organization, its goals, and its objectives. This approach would aim to oversee managerial activities, make everyone accountable for the organization's success and risk, and satisfy the regulative authorities (Jyoti & Dev, 2015). The second weakness of the organization is the lack of employees' managerial skills to supervise the existing infrastructure. It is suggested that managing complex and evolving organizations requires leadership skills, including the division of responsibilities, the coordination of all stakeholders, and participation in the decision-making process (West et al., 2015). The lack of diversity of views can also negatively affect the organization.

One of the significant strengths of Midtown Neurology is its recognition in the medical community and its ability to attract talent. It should be highlighted that the founding physician built a workplace where other professionals want to work, as evidenced by his ability to hire four new neurologists. This fact suggests that the organization has a good brand image and human resources practices that allowed it to recruit new employees and retain them. The other strength is the analytical approach and the evidence-based decisions that helped the organization treat patients successfully.

Solutions and alternatives

Several options might be implemented in the organization to resolve the primary problem. First, it is possible to focus on the transformational leadership approach, which helps develop and nurture employees' leadership skills and personality traits to encourage open communication and employee accountability. The implementation of proper leadership strategies will help to resolve the major problem. It can be suggested that Midtown Neurology will benefit from a transformational leadership style that implies an exchange of information among staff while avoiding a hierarchical structure (Xenikou, 2017). The advantage of this approach is the unity of employees that can be achieved when developing the necessary skills together. The disadvantage is the lack of governance rules, which should also be established in the organization.

Another way to approach the significant problem is to first establish a clear strategy and a constructive organizational culture and then deal with leadership issues. It is suggested that a constructive organizational culture brings transparency and open communication about company processes and employees' activities and duties (Cooke, 2015). A constructive organizational culture assigns the leader the role of a facilitator accountable for developing employees' skills and providing essential benefits to adjust to circumstances (Cooke, 2015). In this case, the leader is the first to improve competencies and then convinces workers to follow the changes. The pro of this approach is the opportunity to develop clear governance rules and divide responsibilities. The con is the possibility that employees will disobey and not follow the example of a leader who has adopted the rules and regulations.

Either alternative implies that the adjustments must start with the leaders, who show other employees the willingness to establish a clear structure, rules, and evaluation methods and thus encourage others to follow. To assess the success of the adjustments, it is essential to develop an evaluation plan based on goals, KPIs, and participation. First, the list of goals will be defined. Duties will be distributed, and anonymous surveys will be conducted weekly to check whether the proposed solution works and whether the organization is reaching its goals. For all employees and leaders, there will be an established list of duties and an electronic track record system in which they mark their performance: for instance, ten patients checked in one day, medicine prescribed to five patients, or a presentation given on proper communication with colleagues, depending on the role. All goals should have a proper time frame, with deadlines defined jointly by the leader and the employee. The proposed solutions will bring Midtown Neurology to a new stage of development, gaining growing recognition and expanding its services to earn more revenue and help patients.
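A minimal sketch of such an electronic track record might look like the following (all names, goals, and targets here are hypothetical illustrations invented for the example, not details from the case):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Goal:
    """One goal with a deadline and an agreed target for the period."""
    description: str
    deadline: date
    target: int = 1      # agreed amount of work for the period
    completed: int = 0   # units of work logged so far

    def on_track(self, today: date) -> bool:
        # On track if the target is already met or the deadline has not passed.
        return self.completed >= self.target or today <= self.deadline

@dataclass
class TrackRecord:
    """Per-employee record where performance is marked against goals."""
    employee: str
    goals: list = field(default_factory=list)

    def log(self, description: str, units: int) -> None:
        # Mark completed work against the matching goal.
        for g in self.goals:
            if g.description == description:
                g.completed += units

    def weekly_report(self, today: date) -> dict:
        # Weekly check: is each goal on track?
        return {g.description: g.on_track(today) for g in self.goals}

# Hypothetical usage: a physician logs ten patients seen against a target of ten.
record = TrackRecord("Dr. A")
record.goals.append(Goal("patients seen", deadline=date(2021, 6, 30), target=10))
record.log("patients seen", 10)
print(record.weekly_report(date(2021, 6, 15)))  # {'patients seen': True}
```

The point of the sketch is only that each duty becomes an explicit, dated, countable entry, which is what makes the weekly survey-and-review cycle described above measurable.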

References

Allen, T. (2020). This is what it takes to become a successful management consultant. Forbes.

Cooke, R. (2015). Create constructive cultures and impact the world. Human Synergistics International.

Jyoti, J., & Dev, M. (2015). The impact of transformational leadership on employee creativity: The role of learning orientation. Journal of Asia Business Studies, 9, 78-98.

McKewen, E. (n.d.). Hiring a consultant: The pros & cons. California Manufacturing Technology Consulting.

West, M., Armit, K., Loewenthal, L., Eckert, R., West, T., & Lee, A. (2015). Leadership and leadership development in health care: The evidence base. The Faculty of Medical Leadership and Management.

Xenikou, A. (2017). Transformational leadership, transactional contingent reward, and organizational identification: The mediating effect of perceived innovation and goal culture orientations. Frontiers in Psychology, 8, 1754.

Transport Infrastructure in Kenya

Introduction

Kenya is a developing country, and in spite of the associated challenges, the Kenyan Government is focused on improving the transport infrastructure. The government plans to enhance the road transport infrastructure and the railway infrastructure to meet international standards. However, such issues as the high poverty level in Kenya and the low level of education do not contribute to the development of the transport infrastructure in the country.

The Kenyan Government reported that in 2014, 49.1% of the Kenyan population lived below the poverty line, and the country is the sixth one in the world according to the extreme poverty index (Kenya’s infrastructure investment potential 2015).

In addition, 7.8 million Kenyans are illiterate, and 38.5% of them are youths (Estache & Rus 2013). Thus, limited financial and skilled human resources are available for developing the transport infrastructure (Rietveld & Bruinsma 2014). This report aims to compare the economic and social factors associated with the development of the road transport infrastructure and the railway infrastructure in Kenya in order to recommend the alternative that the Kenyan Government should select for further development.

Background

The current transport system in Kenya is not developed effectively because of economic challenges, high poverty rates, high levels of illiteracy, and the lack of funding. Currently, the focus is on road transport, railways, ports, and air transport (Stough 2012). However, the existing railway infrastructure is rather outdated. The road transport infrastructure is also underdeveloped: roads are narrow, and traffic congestion is common. For 2013, Kenya's GDP per capita was $1,245.51 (Bias 2013). For a population of 44 million people, this figure is not high, causing economic instability (Bias 2013).

Kenya has developed ties with African, Western, and Asian countries. This factor influences investment and the distribution of funds in the country. The Kenyan Government launched two Standard Gauge Railway projects that would cost $4.5 billion. Given the Kenyan GDP, the success of these projects will influence the country's transport infrastructure and economy (Bias 2013; Kenya's infrastructure investment potential 2015). Still, Kenya's current transport infrastructure is less developed compared to that of middle-income African countries (Estache & Rus 2013).
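As a rough back-of-the-envelope check of the scale involved, one can combine the figures quoted above (a GDP per capita of $1,245.51, a population of 44 million, and the $4.5 billion cost of the two railway projects; the result is only as reliable as those inputs):

```python
# Estimate total GDP from the per-capita figure and population cited above,
# then express the railway projects' cost as a share of that GDP.
gdp_per_capita = 1245.51          # USD, 2013 (Bias 2013)
population = 44_000_000

total_gdp = gdp_per_capita * population
print(f"Estimated total GDP: ${total_gdp / 1e9:.1f} billion")   # ~ $54.8 billion

sgr_cost = 4.5e9                  # the two Standard Gauge Railway projects
share = sgr_cost / total_gdp * 100
print(f"SGR cost as share of estimated GDP: {share:.1f}%")      # ~ 8.2%
```

On these inputs, the railway programme amounts to several percent of the country's annual output, which illustrates why its success matters so much for the economy.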

Options

The Standard Gauge Railway

To improve the rail transport infrastructure, the Kenyan Government is focused on promoting the Standard Gauge Railway, which will connect Mombasa and Nairobi in Kenya. The goal is to connect Rwanda, Kenya, South Sudan, and Uganda, as well as to decrease travel time and address passengers' demands. The project implementation began in 2013, and it is planned for completion in 2017 (Transport infrastructure before development 2015).

Road Transport Infrastructure

The Kenyan Government also participates in building the Lagos-Mombasa highway and the LAPSET corridor, which will connect Addis Ababa, Juba, and Lamu (Figure 1). The goal of the project is to connect African economic centres in different countries with the help of a modernised road network (Kenya's infrastructure investment potential 2015). In order to achieve the goal of enhancing the road infrastructure, the Kenyan Government will consider economic, social, political, and cultural factors, as well as the project's overall costs.

Figure 1. Lagos-Mombasa Highway and the LAPSET Corridor in Kenya (Kenya’s infrastructure investment potential, 2015).

Requirements

Economic Requirements

It is important that the road transport infrastructure fulfil the economic requirements in Kenya. The reason is that Kenya has modest economic growth, with a GDP per capita of $1,245.51 (Estache & Rus 2013). If the Kenyan Government decides to continue prioritising road development, the country's economy may fail to support the project, and the Government will be forced to seek loans from international financial institutions and developed nations.

This situation will increase Kenya's foreign debt. However, road transport is economically advantageous because the Kenyan regions are connected by roads, and the facilities and vehicles are already available.

Rail transport is not well developed in Kenya; thus, railway projects are not economically viable for the country. Kenya requires a comprehensive approach to developing rail transport, but the economic requirement is mostly associated with the quick development of roads. Investment in the road infrastructure can address the economic needs of Kenya more quickly (Kenya: infrastructure, power, and communications 2016). The reason is that road transport will serve the majority of Kenyans and organisations, creating connections and contributing to economic development.

Social Requirements

The means of transport should fulfil the social requirements in Kenya. It is important for the Kenyan Government to invest in transport methods that can benefit the citizens, since the resources spent on infrastructure construction come from taxes paid by citizens (African Union Conference of Ministers of Transport 2014). The social requirements include the necessity of connecting the remote regions of the country in order to provide all citizens with an opportunity to reach the country's economic centres. Developed transport systems are also necessary to guarantee access to healthcare facilities and other social services.

Cultural, Political, and Cost Issues

In Kenya, several cultural practices and beliefs influence the development of the transport systems. Thus, the Standard Gauge Railway is perceived as having a negative impact on society because of its high construction costs. Similarly, the political situation in Kenya favours the development of the road transport infrastructure over the Standard Gauge Railway project (Cárcamo-Díaz & Goddard 2013).

Thus, the Kenyan Government should choose to develop the transport infrastructure that is culturally and politically supported (Allen Consulting Group 2013). The total cost of constructing the road transport infrastructure is small compared to the total cost of constructing the Standard Gauge Railway infrastructure. The choice depends on the availability of financial resources: it is necessary to avoid increasing foreign debt and triggering a financial crisis in the country (Transport infrastructure before development 2015).

Comparison

Economic Factors

The development of the Standard Gauge Railway project requires spending $3.8 billion; 10% of this sum is to be provided by the Kenyan Government, while the remaining expenses are covered by investors (Transport infrastructure before development 2015). By contrast, the construction of the Lagos-Mombasa highway and the LAPSET corridor costs $100 million, which is 2% of Kenya's GDP. Therefore, the road transport project is more cost-efficient for the country (Kenya's infrastructure investment potential 2015).

Furthermore, the potential use of roads in the country is higher than the demand for the Standard Gauge Railway when considering the needs of businesses and other organisations. Currently, the Rift Valley Railway and the Kenya Railway Corporation serve the needs of passengers and businesses, and economists state that their services address the needs of customers (Bias 2013). As a result, the Kenyan Government will gain more benefits from developing the road transport system than from the Standard Gauge Railway, given the predictions for returns on investment and spent resources.

Social Factors

The development of the Standard Gauge Railway project can cause an unequal distribution of citizens in the country, leading to social issues, because workers participating in the project need to change locations depending on the project's progress. The project's development also disrupts the normal functioning of transport in the regions, affects citizens, and influences the life of the pastoralist community in Kenya (Bias 2013).

In turn, the development of the road transport system in Kenya helps overcome such social issues as unemployment through the creation of jobs and investment. Furthermore, the development of the roads enhances social interaction, trade, and communication between the country's regions, leading to the exchange of ideas, information, and technologies (Stough 2012). Comparing the transport methods, it is possible to state that investment in the development of roads is advantageous because the Kenyan Government will address the traffic congestion that affects miles of the country's modern roads and hinders its social and economic growth.

Conclusion

In conclusion, the Kenyan Government should consider social, economic, cultural, and cost issues, as well as political concerns, when deciding which transport infrastructure to develop. The country's GDP, economic growth, poverty, and educational levels are important factors that will determine the success of developing the chosen transport system. A careful analysis of the most feasible and friendly transport modes has been completed to predict the returns on investment and the economic growth for Kenya.

Recommendations

The method recommended for Kenya is the development of the road transport infrastructure, since it is economically feasible and socially friendly. The costs of constructing the road transport system are affordable for Kenya, and the project will benefit the Kenyan population. On the contrary, the development of the Standard Gauge Railway transport infrastructure will benefit only a small portion of citizens while requiring a huge percentage of the Kenyan budget.

Reference List

African Union Conference of Ministers of Transport 2014, Investment in transport infrastructure – 1985-1995, OECD Publishing, Cairo.

Allen Consulting Group 2013, Land transport infrastructure: maximising the contribution to economic growth, Allen Consulting Group Publishing, Melbourne.

Bias, R 2013, Kenya has big plans for ports, power, rail and roads. Web.

Cárcamo-Díaz, R & Goddard, J 2013, Coordination of public expenditure in transport infrastructure: analysis and policy perspectives for Latin America, United Nations, Santiago.

Estache, A & Rus, G 2013, Privatization and regulation of transport infrastructure: guidelines for policymakers and regulators, World Bank, Washington.

Kenya: infrastructure, power, and communications. 2016. Web.

Kenya’s infrastructure investment potential. 2015. Web.

Rietveld, P & Bruinsma, F 2014, Is transport infrastructure effective?: transport infrastructure and accessibility, Springer, Berlin.

Stough, R 2012, A set of guidelines: for socio-economic cost benefit analysis of transport infrastructure project appraisal, United Nations, New York.

Transport infrastructure before development. 2015. Web.

Sydney Airport Infrastructure Plans for the Airbus A380

Sydney Airport is a very important piece of infrastructure for the Australian economy and the world in general. The government, in coordination with the Airport's management, is planning to launch major commercial flights at the airport, and therefore there is a need to expand the facility in order to accommodate aircraft such as the A380 and other new-generation aircraft. The Airbus A380 has a capacity of around 600 passengers, surpassing the capacity of normal planes by almost 50%. Apart from carrying more passengers, the A380 will greatly help in reducing emissions and noise. This paper will highlight the infrastructure plan and building works undertaken by the airport management to enable the smooth operation of the Airbus A380.

Sydney Airport has elaborate plans in place to meet ever-increasing demand and at the same time maximize its positional advantage. The government, in coordination with the Airport's management, plans to build new terminal extensions while upgrading the existing ones (Parkin, 1999). The other projects in the master plan include new aircraft parking and an expanded freight terminal to accommodate the A380. The strengthening of runways is also important in facilitating the operation of the A380. The Airport has to upgrade its infrastructure in line with international aviation standards to be permitted to land high-capacity aircraft such as the A380. Public and passenger safety is mandatory, and therefore the necessary infrastructure and building works had to form the core part of the Airport's master plan (Parkin, 1999). Since the pavement shoulders of the runways and taxiways are very narrow, widening them was a priority. In order to accommodate the A380 wingspan, Taxiway G had to be relocated to the east of Taxiway D. The plan also involved strengthening the General Holmes Tunnel, which runs beneath the main runway, to cope with the additional takeoff weight of the A380. The other building works in the plan include the relocation of the perimeter road and the demolition of hangars to create space for the construction of a new Taxiway Golf. The plan also included the construction of new aerobridges to service the access doors of the A380.

The estimated cost of the airfield and terminal works for the Airbus A380 is $128 million. The main challenge faced by the planners was how to maintain the commercial operation of the Airport while the upgrading work went on. Since Sydney Airport is one of the busiest airports in the world, elaborate plans were mandatory to maintain its normal operations (Parkin, 1999). In order to sustain operations during construction, the engineers opted to work only during curfew hours.

In conclusion, the master plan to upgrade and expand Sydney Airport was very timely considering its location and importance to the Australian economy. For the facility to accommodate high-capacity planes such as the A380, it was necessary to upgrade the existing facilities and at the same time construct new ones to meet the aviation standards required for high-capacity commercial planes. Some of the notable building works include the construction of new terminals and the strengthening of the main runways.

Reference

Parkin, J., 1999. Infrastructure planning. New York, NY: Thomas Telford.

Virtual Case File: FBI IT Infrastructure Failure

In the modern world, information technology has become an important management tool in many organizations. Since the advent of computers, operations in many organizations have been computerized, easing the management of information. However, all has not been well in many organizations with the installation of IT infrastructure. According to Alter (2002), many organizations have experienced a massive loss of resources in attempts to install or upgrade their IT systems. Failure in an IT system can be caused by many factors.

One of the greatest causes of failure is the lack of an initial assessment of the system and the implications it will have on the organization. This leads to the installation of systems that do not help the organization or are too expensive to maintain. Poor architectural design and decision-making can be another major cause of system failure; this can result from the involvement of personnel who are not competent in the computer science field. As White (2004) argues, poor management of the software system can be yet another reason for failure. These are just a few major causes of IT system failure; there are others, but some of them are specific to a particular organization and instance.

Virtual Case File of the FBI

The FBI is the major investigative body in the United States of America. It was established in 1908 to investigate criminal cases ranging from everyday crimes to complex matters involving counterterrorism, counterintelligence, cybercrime, public corruption, civil rights violations, organized crime, white-collar crime, and other kinds of theft and violent crime.

Due to its scope of operation, the FBI needs a comprehensive information management system, since it deals mainly with intelligence issues that involve a great deal of information. At the turn of the new century, the FBI was faced with many challenges in the management of its information systems. According to Doherty (2006), it had woken up to the realization that the world was advancing in technology and that it needed to catch up if it was to remain relevant in the investigative field. With information technology no longer the privilege of the world's top organizations, and with the criminal world becoming more and more computerized, the FBI needed to stay ahead of criminals in the areas where they seemed to have the upper hand. This was the reason that led to the development of a new IT system.

Initial FBI information system

The FBI was for a long time confronted with crimes that were becoming too sophisticated for it to handle. From the early days, when the Italian Mafia partnered with Russian mobsters to siphon millions of dollars from the state of New Jersey, a scheme for which members of the Gambino family, the Genovese family, and others were convicted, to the 2001 terrorist attack on the Twin Towers, the FBI was strongly criticized for not having an efficient system that could crack the information systems of the criminal world.

The September 11 attack made it clear to the FBI that its information system was no match for those used by terrorists and other criminals. The Bureau was criticized for not gathering crucial information that could have prevented the loss of life in the attack. Before the agency thought of installing the Virtual Case File, it had for a long time relied on the archaic Automated Case Support (ACS) system, which was adopted in 1995.

However, this system was rarely used by some agents, since it was cumbersome, inefficient, very limited in capability, and poorly managed. It was also criticized for not being able to manage, link, research, analyze, and share information effectively. The system needed an immediate overhaul if the agency was to stay ahead of crime. It had itself replaced a 1970s-era setup that used over 40-odd software packages, including the Adabas database and the Natural programming language, both products of Software AG of Darmstadt, Germany.

The start of VCF

In the year 2000, the FBI embarked on a mission to upgrade its IT systems through the installation of new software. In the same year, Congress approved $379 million to be spent over three years to upgrade the FBI's information systems. The then Assistant Director of Information, Bob Dies, prepared the initial plans in 2000.

The system was divided into three components, hence the name Trilogy. All 56 FBI field offices, some 22,000 agents, and support staff were to be provided with new Dell Pentium PCs running Microsoft Office, along with scanners, printers, and servers; these were to make up the Information Presentation Component. Then there was the Transportation Network Component, which was to provide secure local area and wide area networks for easy sharing of information. The third component was the User Application Component: the Virtual Case File, the new system that would manage the Bureau's information.

It was of vital importance not only to the investigative body but also to the whole country. The VCF was to incorporate five investigative applications, including the Automated Case Support system, Intelplus, the Criminal Law Enforcement application, the Integrated Information Application, and the Telephone Application. It was also to rebuild the FBI intranet and identify ways of replacing all the 40-odd software packages used by the FBI. The project was due to go live in 2004, but it never saw completion, as it was officially abandoned in April 2005. The project was commissioned by Director Robert S. Mueller III.

In June 2001, the contracts to implement the new system were awarded to major U.S. government contractors on a cost-plus-award-fee basis. DynCorp was awarded the tender for the hardware and network projects, while SAIC (Science Applications International Corporation) was awarded the tender for the software. The contracts were to be delivered in 2004.

Problems of the VCF

With the appointment of Robert Mueller as Director of the FBI one week before the September 11 attack, there was a highly geared commitment to implementing the initial plan for the VCF. With the expectation that the VCF would replace the inefficient system, an 800-page requirements document was prepared; it was of very low quality, yet it was expected to be the basis for the development of the system.

This document was prepared after meetings with the users of the ACS system, and their recommendations were contained in the initial document. The document showed that the project defied the basic laws of software planning: a plan must define the exact role of the project and then define how it is to be executed systematically. This showed that the project was headed for failure right from the start.

Mueller estimated that 251 computers, 3,408 printers, 1,463 servers, and new LAN (Local Area Network) and WAN (Wide Area Network) infrastructure would be in operation in the summer of 2004, which was 22 months behind schedule. The project never kept to its schedule despite its steady consumption of taxpayers' money.

According to Eggen and Witte (2006), another cause of the system's failure was a communication problem between the eight groups working under SAIC: it proved difficult to weld the eight teams into one cohesive unit. The multiple teams had been intended to address the urgency of the project and its importance in combating rising terrorism. The teams also used the wrong implementation approach, preferring to build on basic technologies like messaging, workflow, and email rather than the existing software. The project never met its 2002 schedule. Seeing that it was lagging behind, the FBI requested an additional $70 million to accelerate it. Congress responded by awarding $78 million, and both contractors vowed to deliver their tenders a year earlier than agreed.

In the plan, SAIC agreed with the FBI that it would replace the ACS system within 22 months in one swoop, using a risky flash-cutover approach. This meant that the agency would switch off the ACS system on a Friday afternoon and log on to the new VCF system on Monday morning. This was risky because there was no plan B in case the system did not work.

At the time of development, there was intense pressure on both SAIC and the FBI to deliver the new system. SAIC embarked on hiring new developers to meet the deadline. At the same time, there was quick succession in the CIO's office: in May 2002, Bob Dies, who had launched the project, handed it over to Mark Tanner, who acted in the position for only three months before stepping aside for John Darwin. John was later replaced by Wilson Lowery.

SAIC worked very hard and delivered the VCF in 2003. However, it was declared not fully functional by the FBI on the grounds that it had 17 operational deficiencies which needed to be addressed before the new system was installed. This resulted in a heated debate between the two sides, with the SAIC team claiming that the deficiencies resulted from the specification changes the FBI had insisted on. An arbitrator mediated between the two, and both sides were found to be at fault.

It was clear to all, even to the system's own management, that the system had failed. The Director of the agency, Robert Mueller, standing before the Senate Appropriations Subcommittee on Commerce, Justice, State, and the Judiciary, assured Congress that the system would be in operation in a matter of months. SAIC claimed that the project needed an additional $50 million to become operational, but Congress approved only $16 million to save the system. Congress also hired the Aerospace Corporation, at a cost of $2 million, to further assess the project's viability. Aerospace released its report in late 2004, showing that the system was faulty and could not be deployed.

It was highlighted that the system failed due to software engineering errors, specifically poor architectural design work and decisions made from the beginning. As highlighted earlier by SAIC, repeated changes in the agency's specifications made it difficult to see any one area of development through to the end; the many alterations in the process eventually led to code bloat.

The problem with the specifications was that the FBI dictated not only what it wanted but also how it was to be done. As seen earlier, there was rapid succession in the CIO's office, and at the same time SAIC was doing a great deal of new hiring, which brought new people onto the system. Hence management and workforce changes can be partly blamed for the failure of the project.

The supervision of the developers was closely tied to the management issues and was itself a cause of failure in the development of the project. The management problem was compounded by the involvement of unqualified FBI officers as project managers. The use of a flash-cutover deployment could also have complicated the adoption of the system, given the nature of the FBI's work and the role of information in it.

In the end, it was claimed that the project had used up more than $104 million of public funds. However, the figure could be much higher if fully accounted for, since Congress had initially approved $379 million. Although not all of it is accounted for, Congress had also approved additional funds of $70 million, $16 million, and $2 million; these additions total $88 million, without counting the initial funding.

The project must have used more public funds than estimated. According to Dizard (2007), it is claimed that the project used more than $581 million of public funds, although it may not have exhausted all of them. The project cost the agency a great deal in resources, and in the end it fell back on the much-criticized ACS system, which it uses to date. Stirland (2005) argues that although the agency has planned to install a new IT system, named Sentinel, expected to be in operation in 2009, there is still much to be done in the management of that project if it is to succeed.

This case study demonstrates that any information technology project should be assessed initially, with all aspects considered, before it is implemented. The case offers appropriate lessons for FBI officials when they next think of implementing an IT project. Software engineering and management issues are key considerations in any IT project.

References

Alter, S. (2002). Information systems. University of San Francisco Press.

Dizard, W. (2007). FBI overhauls Virtual Case File contract. Government Computer News. Web.

Doherty, A. J. (2006). The FBI I-drive and the right to a fair trial. Iowa Law Review. Iowa University Press.

Eggen, D., & Witte, G. (2006). The FBI upgrade that wasn't. Washington Post.

Stirland, S. L. (2005). Senators grill FBI chief over failed Virtual Case File system. National Journal's Technology Daily.

White, C. M. (2004). Data communication and computer networks. Thomson.

Melbourne Airport Infrastructure Plans for the Airbus A380 Aircraft

Abstract

This report aims to highlight the plans carried out during the expansion of Melbourne International Airport. The background, planning, implementation, and strategies adopted by the project teams are discussed, and findings are presented through in-depth analysis.

Executive Summary

Melbourne Airport is the main airport serving the city of Melbourne and the second busiest airport in Australia. Having opened in 1970, it has evolved to become the only international airport serving the metropolitan area, and it lies on the fourth most travelled air route in the world. It serves 33 direct destinations linking to other parts of Australia and to international hubs in Africa, Europe, and elsewhere, and it acts as a main hub of the Australian divisions of Qantas and Virgin airlines (Ellis 2006).

When it comes to cargo, the airport is the busiest for international export freight and the second busiest for import freight (Albertini 2008).

Melbourne Airport Airfield Infrastructure Planning and Building Works for the Airbus A380 Aircraft Operations

The entrance of the new Airbus A380 into our skies signaled the beginning of expansion plans at most international airports around the globe. Pundits have labeled the new Airbus a splendid giant: a 560-tonne machine with a wingspan a massive fifteen meters wider than that of the jumbo jet, and over 530 kilometers of wiring, which contributed to delays in deliveries to customers. The passenger version can carry up to 550 passengers (Melbourne's Airport 2003).

Following preliminary studies conducted by consultative project teams, the firms involved in the project had to conduct concept-stage studies that included the development of design criteria founded on international standards. Planning also included the analysis of design codes and the prior evaluation of construction methods. All of this was done within procedural constraints to ensure that the airport would remain fully operational during the expansion period.

The international terminal was expanded by 5,000 square meters, with emphasis placed on increasing seating capacity and constructing a third level that is home to airline lounges. Gates nine and eleven were fitted with aerobridges that can each serve one A380 at a time, which greatly reduced turnaround times for the planes served by the airport. The aerobridges facilitate the boarding and disembarking of passengers from the double-decker airplane. A baggage carousel was also added within the arrival halls.

The project involved the close participation of planning and development consultants together with officials from Melbourne Airport. This close participation helped minimize disruption to passenger and cargo traffic within the airport. Planning of the expansion included the design of intricate multi-level spaces to host the heavy movement of passengers through the terminals, which also required complex functionality such as comfort and convenience for their users. Planning was greatly aided by 3D animations provided by Connell Wagner, with which the project architects successfully illustrated the modifications to be made to the terminal and how it would cope with the increase in passenger traffic. This significantly saved time and money, as it introduced a creative approach from an engineering standpoint, and, together with close collaboration, aided not only timely project completion but also better-informed decisions (Harrison 2000).

Total Costs of Melbourne Airport Airfield and Terminal Works for the Airbus A380 Aircraft Operations

Melbourne Airport invested over $220 million in its expansion program, which included the plans to accommodate the massive Airbus. The program covered work on the terminal precinct, the runway, and other facilities, carried out to support the expected rise in passenger numbers that the Airbus carries. The airport boasts of being the sole airport that can comfortably accommodate the "splendid giant." Work also went into widening the north-south runway by fifteen meters, a task which alone cost $50 million. This brings home the massive amount of work the project required (Australia Pacific Airports (Melbourne) 2005).

Strategies Implemented by Melbourne Airport to Mitigate Disruption to Commercial Operations during Airfield/Terminal Infrastructure Works for the Airbus A380 Aircraft

The airport's international status presented the project engineers with the laborious task of ensuring that the project could be coordinated with minimum disruption to the airport's daily routine. The engineers had to manage daily interaction between the project and the airport's 24/7 operations. The strategies they implemented included the modification of work zones and procedures, among other measures; shifts in the normal timetable were necessary to allow work to be carried out outside office hours. The major challenge in minimizing disruption to normal services was orienting passengers and airport officials to the changes in procedure that followed from the expansion (Dempsey 2000). The fact that the expansion work had to be integrated with existing infrastructure introduced major challenges to project completion. A good example was the hidden sewer lines discovered where bored piles were to be placed, which necessitated the design of a new foundation system made up of multiple micropiles.

Other strategies implemented by the project engineers included thorough site surveying, carried out to establish where everything was in the first place. Expansion work was sometimes confined to a four-to-five-hour window during off-peak hours, normally at night.

Conclusion

Expansion of the airport to accommodate the Airbus A380 was completed in twenty-nine days, greatly aided by the close participation of the project teams. The wishes of Melbourne Airport officials were respected, as the work followed a four-week runway closure and a program that enabled construction tasks to be carried out 24 hours a day, seven days a week. The project team members' wide range of experience allowed the development of favorable outcomes that tackled the severe constraints the project created. The A380, being as massive as it is, requires dedicated support, as the traffic introduced by the aircraft can quickly cripple an airport that has not been designed for such an airplane; both passenger and cargo traffic have to be kept in mind when accommodating the plane. With this foresight, Melbourne Airport took a proactive approach rather than waiting to expand its facilities. Despite both financial and time constraints, the project ended up a success due to efficient planning and the implementation of the objectives set forth by the project teams.

References

Albertini, C. (2008). A380 airplane characteristics (PDF). Airbus. Web.

Australia Pacific Airports (Melbourne). (2005). Melbourne Airport master plan: Preliminary draft. Australia Pacific Airports (Melbourne) Pty. Ltd.

Dempsey, P. (2000). Airport planning and development handbook: A global survey. McGraw-Hill Professional.

Ellis, F. (2006). Aerospace notebook: It's no cruise ship of the sky, but A380 is raising the bar for comfort. Web.

Harrison, M. (2000, June 24). Airbus opens its books for the world's biggest jumbo. But is it a plane too far? The Independent (UK).

Melbourne's Airport – A World Class Operator. (2003). Melbourne Airport Media Releases. Web.

Movements at Australian airports (PDF). (2010). Airservices Australia. Web.

Public Key Infrastructure: Concepts and Applications

The public key infrastructure, or PKI, is a data security architecture based on cryptography, a branch of applied mathematics. Unlike earlier systems, it uses a pair of keys: a public key available freely to end users on the network, and a secret or private key known only to its owner. Data communication and storage systems need to ensure that data and systems are identifiable, that messages can be authenticated, that data is confidential, and that data, once transmitted, cannot be repudiated by the receiver. The basic function of a PKI is to achieve these very ends, and it does so according to a clearly defined and published set of policies and procedures geared to build trust and participation in the system. A PKI has essential components such as the certification authority (CA), digital certificates, the certificate revocation list (CRL), the certificate repository, and the registration authority (RA). The CA is the vital component for the maintenance and management of the infrastructure on which modern businesses and individuals have come to rely for secure transactions and operations in a global and complex virtual environment. The PKI seeks to provide optimal security in the storage and transmission of data and, in spite of some flaws, is a distinct improvement over previous security control systems; it is still evolving through the efforts of discrete entities spread across the globe and over time. PKI is a vital and necessary feature of today's globalized and complex network systems and is increasingly important for the growth of modern e-commerce and similar initiatives.

Public Key Infrastructure: An Introduction

The growing trend of electronic commerce and the considerable use of information communications require an enhanced information systems security structure. Increased use of the Internet, and the need to ensure that information systems, whether for data storage or transmission, are not compromised, mean that an appropriate IS security infrastructure needs to be developed and maintained. Organizations, IT professionals, and governments the world over have long been aware that traditional security systems are often ineffective in dealing with modern, varied IS security violations spanning diverse locations and time zones. Financial transactions, archival systems, and other data communication and storage operations in cyberspace require a strong security infrastructure, which may be provided using non-cryptographic or cryptographic information security systems. Cryptography was felt to be the more advanced and complete approach to the issue. In this method, applied mathematical concepts are used whereby information to be transmitted is first encrypted, i.e., unprotected plaintext is coded or transformed into protected ciphertext, and after transmission is decrypted by the user to revert the ciphertext to its original form. Public Key Cryptography (PKC) is an advanced cryptographic system in which two distinct keys are used: a public key to encrypt the data, and a private or secret key to decrypt the data back to its original plaintext form. A Public Key Infrastructure, or PKI, is a PKC-based information systems security architecture that seeks to protect data being transmitted and to ensure a secure data distribution mechanism, feasible across locations and over time. Kuhn et al. (2001) define a Public Key Infrastructure as "the combination of software, encryption technologies, and services that enables enterprises to protect the security of their communications and business transactions on networks".
Public Key Cryptography thus uses a pair of keys. One is the public key, available to all online. The other is the secret or private key, known only to the entity that owns it. This owner may be an individual, a service, or a software application.
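
The public/private key relationship described above can be illustrated with a deliberately tiny RSA-style sketch in Python. The primes and message here are toy values chosen only so the arithmetic is visible; real systems use keys thousands of bits long with padding schemes.

```python
# Toy RSA-style public-key demo (illustrative only -- real systems use
# 2048-bit-plus primes and standardized padding, never numbers this small).

def make_keypair(p, q, e=17):
    """Build a (public, private) key pair from two primes."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent: e * d == 1 (mod phi)
    return (e, n), (d, n)        # public key, private key

public, private = make_keypair(61, 53)   # n = 3233

def apply_key(message, key):
    exponent, n = key
    return pow(message, exponent, n)     # modular exponentiation

m = 65                                   # a message encoded as an integer < n
c = apply_key(m, public)                 # anyone may encrypt with the public key
assert apply_key(c, private) == m        # only the private key recovers it
```

Note how encryption and decryption are the same modular-exponentiation operation applied with different keys, which is exactly the asymmetry the text describes.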

Key PKI Functions

Weise, J. (2001) maintains that the primary function of a PKI is to allow the distribution and use of public keys and certificates with security and integrity, and that a PKI is a foundation on which other applications and network security components are built. Various systems require the use of a PKI, including email, chip card operations, e-commerce (such as debit or credit card payments), and e-banking. It was Diffie and Hellman (1976) who invented the public key system as a new mechanism for cryptographic information exchange. Suri and Puri (2007) state that under this method a network user has an individual private key and a public key, where the public key is distributed to all members of the network while only the user holds the private key. They add that a message encrypted with a person's public key can only be decrypted with that person's private key, and vice versa. Most modern digital signatures and certificates are based on PKI technology, which essentially integrates digital certificates, public key cryptography (PKC), and certification authorities (CA) into one whole network security architecture. The PKI helps to issue digital certificates to users and servers, provides enrollment software for the end user, integrates certificate directories and tools for managing, renewing, and revoking certificates, and also encompasses related support and services. John Marchesini and Sean Smith (2005) have defined PKIs as complex distributed systems that are responsible for giving users enough information to make reasonable trust judgments about one another.

One of the basic functions of a PKI is the encryption of data through cryptographic mechanisms, thereby ensuring data confidentiality, i.e., secrecy and privacy. For ensuring data confidentiality, the preferred choice is secret or private keys rather than public keys. Another function is ensuring the integrity of data: data needs to be incorruptible and unalterable while in storage or during transmission through networks. A third function is authentication, or entity identification, achieved through digital certificates and signatures. A fourth function, very relevant in the case of e-commerce transactions, is non-repudiation, which means that data, once transmitted and received, cannot be renounced by the receiver.
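
Three of these functions (integrity, authentication, and non-repudiation) come together in a digital signature. The following sketch reuses the same toy small-number RSA idea from above; the primes, message, and function names are illustrative, not any standardized scheme, and a real PKI would use full-size keys with padding.

```python
import hashlib

# Toy digital-signature sketch (illustrative small primes; not secure).
P, Q, E = 61, 53, 17
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))   # the signer's private exponent

def digest(data: bytes) -> int:
    # Integrity: any change to the data changes this hash value.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(data: bytes) -> int:
    # Authentication/non-repudiation: only the private key produces this.
    return pow(digest(data), D, N)

def verify(data: bytes, signature: int) -> bool:
    # Anyone holding the public key (E, N) can check the signature.
    return pow(signature, E, N) == digest(data)

sig = sign(b"wire $100 to Alice")
assert verify(b"wire $100 to Alice", sig)               # authentic and intact
assert not verify(b"wire $100 to Alice", (sig + 1) % N) # forged signature fails
```

A tampered message would likewise fail verification, since its hash would no longer match; that hash-then-sign structure is what lets the receiver hold the sender to the exact bytes that were signed.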

Cryptographic Concepts: An Overview of PKI

Since a PKI is a form of cryptography, some basic information on key cryptographic concepts may be in order. Kuhn et al. (2001) define cryptography as a branch of applied mathematics concerned with transformations of data for security. They add that in a cryptographic mechanism an information sender transforms unprotected information, or plaintext, into coded ciphertext, and after such a message is transmitted the receiver transforms the ciphertext back into plaintext or verifies the sender's identity or the data's integrity (p. 9). The basic requirements of a cryptographic or PKI system (as per the Open Group, 1997) are the establishment of trust and governance domains, ensuring the confidentiality of communications, maintaining data integrity, authenticating users, non-repudiation, and achieving end-to-end monitoring, auditing, and reporting of (PKI) security services. Kuhn et al. (2001) have identified a few key PKI services: integrity and confidentiality of information exchanged, identification and authentication of users and entities, and non-repudiation. Public Key Cryptography, on which the PKI is based, is also termed asymmetric cryptography. Dam and Lin (1996) identify the asymmetric cryptographic systems in primary use as those that base their security on the difficulty of two related computational problems, namely factoring integers and finding discrete logarithms.
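
The discrete-logarithm problem mentioned above is what secures the original Diffie-Hellman key exchange. A minimal sketch, with toy public parameters and made-up secret exponents (real deployments use primes of 2048 bits or more):

```python
# Toy Diffie-Hellman exchange (illustrative sizes only; the security of the
# real scheme rests on the difficulty of computing discrete logarithms).
p, g = 23, 5          # public parameters: a prime modulus and a generator

a = 6                 # Alice's secret exponent (never transmitted)
b = 15                # Bob's secret exponent (never transmitted)

A = pow(g, a, p)      # Alice sends g^a mod p over the open network
B = pow(g, b, p)      # Bob sends g^b mod p

# Each side raises the other's public value to its own secret exponent:
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both arrive at g^(a*b) mod p
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete-logarithm problem, which is what makes the shared value secret.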

Certificate Authorities or CA

Weise, J. (2001), in his overview of public key infrastructure, describes a PKI framework as essentially consisting of various operational and security policies and services, together with some interoperability protocols, that support PKC-driven management and control of keys and certificates. This framework works through several key logical components, namely certification authorities (CA), end users or subscribers, certificate policies (CP) and certification practice statements (CPS), hardware components, public key certificates, certificate extensions, certificate depositories, and registration authorities (RA). The CA is the most important and critical component of the system. An end user, or end entity, is any entity that is not a CA (Weise, J., 2001). The CA identifies and certifies the end entity. The message generated by the CA on successful identification of an end entity is called a certificate and essentially contains the entity's identity and public key; this certificate is signed cryptographically by the CA. In identifying the end entity, the CA establishes and maintains a set of policies and procedures, and it generates or revokes certificates. A certificate policy (CP) specifies how various data and systems are to be handled within the PKI security framework, while the procedural details and operational practices are published in a certification practice statement (CPS); together these are supposed to build trust in the PKI and may help improve user participation in it. The CA also has hardware security modules (HSMs), which it uses for storing and using its private keys, the keys with which a CA certifies subscriber public keys. Standards such as FIPS 140-1 govern these HSMs to ensure trust in, and the security of, the entire system.

Digital Certificates

The CA's basic purpose is to ensure the upkeep and control of the security infrastructure, which is done through the management, storage, deployment, and revocation of public key certificates, or digital certificates, that verify the binding of an end entity's identity to its public key (Weise, J., 2001). Accordingly, the digital certificate contains all the relevant information that helps another user identify the owner of the certificate: the entity's name, information on its identity, the period of validity (expiry) of the certificate, and the entity's public key. For effective global network operations, particularly e-business transactions, digital certificates also need to be suitably standardized; the most widely used standard now appears to be X.509, formulated by the IETF.
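
The binding a certificate provides can be sketched as a record of those fields plus a CA signature over them. This is a loose illustration, not the real X.509 encoding; the field names, toy CA key, and hashing choice are all assumptions made for the example.

```python
import hashlib
import json

# Sketch of the fields a digital certificate binds together, with a toy
# CA signature over them (field names are illustrative, loosely X.509-like).
CA_P, CA_Q, CA_E = 61, 53, 17
CA_N = CA_P * CA_Q
CA_D = pow(CA_E, -1, (CA_P - 1) * (CA_Q - 1))   # the CA's private exponent

certificate = {
    "subject": "CN=alice.example.com",          # the end entity's identity
    "not_before": "2024-01-01",
    "not_after": "2025-01-01",                  # validity period
    "public_key": "subject-public-key-placeholder",
    "issuer": "CN=Toy CA",
}

def fingerprint(cert: dict) -> int:
    # Canonical serialization so signer and verifier hash identical bytes.
    data = json.dumps(cert, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % CA_N

# The CA signs the certificate's fingerprint with its private key...
signature = pow(fingerprint(certificate), CA_D, CA_N)

# ...and any relying party verifies it with the CA's public key (CA_E, CA_N).
assert pow(signature, CA_E, CA_N) == fingerprint(certificate)
```

The point of the structure is that anyone trusting the CA's public key can check the signature and thereby trust the subject-to-public-key binding, without contacting the CA.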

Registration Authorities or RA

This component of the PKI is optional and undertakes some of the administrative tasks delegated by the CA. Essentially, the RA identifies an end entity and determines whether a public key certificate can be issued to it by the CA. It also helps implement the policies and procedures mandated through the CPS and the CP.

Certificate Depositories

This component of the PKI enables the distribution of certificates by regularly publishing and updating the certificates issued by the CA. The depository, or directory, is publicly accessible on the network. LDAP is the defining and most widely used protocol in this regard, though some, like X.500, are more robust. In addition, certificates which are no longer valid may be revoked by the CA through a certificate revocation list (CRL), on which entities may rely to check certificate validity. The CRL is published by the CA in a publicly available depository.
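
A relying party's use of the CRL reduces to a simple check: a certificate is usable only if it is unexpired and its serial number is absent from the published list. A minimal sketch, with made-up serial numbers and dates:

```python
from datetime import date

# Minimal sketch of a CRL check (serial numbers and dates are invented
# for illustration; a real CRL is itself signed and dated by the CA).
crl = {1044, 2391}    # serial numbers the CA has revoked

def is_usable(serial: int, not_after: date, today: date) -> bool:
    if serial in crl:
        return False            # revoked by the CA before its expiry
    return today <= not_after   # otherwise valid until it expires

assert is_usable(7312, date(2025, 1, 1), date(2024, 6, 1))       # good cert
assert not is_usable(1044, date(2025, 1, 1), date(2024, 6, 1))   # revoked
assert not is_usable(7312, date(2023, 1, 1), date(2024, 6, 1))   # expired
```

In practice the relying party must also fetch a fresh CRL and verify the CA's signature on it, since a stale or forged list would defeat the check.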

Conclusion

Public Key Infrastructure has come a long way from the non-cryptographic information security systems of past decades. With the global community increasingly dependent on data communications that surmount barriers of space and time, and with diverse, proliferating attacks on security infrastructure through spyware and viruses, data systems, storage, and communications architecture need to be better equipped to thwart attempts to compromise information security. Ellison and Schneier (2000) advise caution in the choice of security systems, arguing that no single system, whether firewalls, intrusion detection systems, VPNs, or PKIs, is fully secure or effective against every threat in today’s complex global data communications environment. Effective security risk management, in their opinion, is hamstrung by commercial promises without any actual basis, and the user of any particular system would do well to understand its critical security requirements. Not all experts are so gloomy, however. Benantar (2001), for instance, holds that the PKI concept rests on sound mathematical foundations and is computationally reliable, simple, and elegant; he also argues that the X.509 certificates in current use improve on earlier protocols and that the underlying PKIX technologies are robust and promising. Only time and further developments in the field will tell.

References

  1. Benantar, M. (2001). “The Public Key Infrastructure”. IBM Systems Journal, Vol. 40, No. 3, 648-665.
  2. Cheng, P. C. (2001). “An Architecture for the Internet Key Protocol”. IBM Systems Journal, Vol. 40, No. 3.
  3. Diffie, W., and Hellman, M. (1976). “New Directions in Cryptography”. IEEE Transactions on Information Theory, Vol. 22, 644-654.
  4. Ellison, C., and Schneier, B. (2000). “Ten Risks of PKI: What You Are Not Being Told About Public Key Infrastructure”. Computer Security Journal, Vol. XVI, No. 1.
  5. Guide: Architecture for Public-Key Infrastructure (APKI), Draft 1. (1997). The Open Group.
  6. Kuhn, D. R., et al. (2001). Introduction to Public Key Technology and the Federal PKI Infrastructure. NIST. [Online: 2008]
  7. Marchesini, J., and Smith, S. (2005). “Modeling Public Key Infrastructure in the Real World”. Public Key Infrastructure: EuroPKI. [Online: 2008]
  8. National Research Council. (1996). Cryptography’s Role in Securing the Information Society. Dam, K. W., and Lin, H. S., Eds., Committee to Study National Cryptography Policy. Washington, D.C.: National Academy of Sciences. ISBN 0-309-52254-4.
  9. Suri, P. R., and Puri, P. (2007). “Asymmetric Cryptographic Protocol with Modified Approach”. International Journal of Computer Science and Network Security (IJCSNS), Vol. 7, No. 4, 107-110.
  10. Weise, J. (2001). Public Key Infrastructure: Overview. Palo Alto, CA: Sun Microsystems Inc.

How Has a Threat of Attack on Critical Infrastructure Within the US Influenced Technology-Oriented Policy-Making This Last Decade?

Introduction

Cyber-warfare refers to the use of computers and the Internet to conduct warfare in cyberspace (Post, 1979). It is also referred to as “cyberwar” or “cybernetic war”. Cyber-warfare attacks fall into several classes, some mild and others severe. The methods of attack used in such warfare include data gathering, distributed denial-of-service attacks, web vandalism, propaganda, and attacks on critical infrastructure.

All over the world, many countries have been found to develop uses of the Internet as a cyber weapon against other nations. The attacks usually target utilities, financial markets, and even government computer systems.

Cyber-warfare attacks on critical infrastructure target the most vulnerable systems, such as fuel, water, power, communications, commercial, and transportation infrastructure. In the United States, such attacks pose a high risk of national disaster if fast and resolute mitigation actions are not taken (Janczewski et al., 2007). For instance, the September 11 terror attacks on the US raised concerns and prompted a group of concerned scientists to write a letter to President Bush urging him to initiate measures to prevent future cyber attacks on the infrastructure.

The scientists recommended the formation of a cyber-warfare defense project modeled on the Manhattan Project (Ryan, 2008). The vulnerability of the US to cyberattacks on its critical infrastructure has continued to create the need for technology-oriented policy-making to minimize or completely prevent such attacks in the future. This paper discusses the ways in which the threat of cyber attacks on critical infrastructure within the US has influenced technology-oriented policymaking in the last decade.

Discussion

The United States as a nation has recognized the great threat that cyber-attacks pose to its critical infrastructure, since conventional terrorism conducted against the US through computers and the Internet may damage vital infrastructure. This vulnerability stems in part from the purchase of commercial operating systems and software applications that are not shipped securely by their manufacturers. In 1997, the US government addressed the problems arising from cyber-warfare in a government review, the Infrastructure Protection Task Force (IPTF), put together by then-President Bill Clinton to examine the US critical infrastructure.

The efforts to form projects protecting the US from cyber-warfare can be viewed as a major step towards the development of technology-oriented policies that address such threats. The problems and possible solutions to cyber attacks on US critical infrastructure were pointed out in a multifaceted project referred to as the Manhattan Cyber Project (MCP). The project was developed through a joint collaboration of the IPTF, government agencies, private sector agencies, WarRoom Research, LLC, and Winn Schwartau of Infowar.com (Marshall, 1997).

In 2001, a working group of law enforcement representatives identified common issues encountered at electronic crime scenes. The representatives, drawn from different states’ police departments, later developed a manual, “Best Practices for Seizing Electronic Evidence”. The project to develop the manual was facilitated by the Advisory Committee for Police Investigative Operations, and the manual provided an understanding of the technical and legal factors relating to the seizure of electronic storage devices.

Cyber-attacks, viewed as criminal activities carried out through computers and related devices, including attacks on critical infrastructure, have resulted in the integration of technology-oriented policies that allow law enforcement officers to seize evidence and to identify, investigate, and prosecute offenders. Over years of experienced cyber attacks, United States policies have been developed to enable law enforcement officers to recognize and stop cyber attacks on critical infrastructure.

In the year 2000, a national plan aimed at protecting the US from cyberattacks was formulated: “Defending America’s Cyberspace: National Plan for Information Systems Protection”. This plan has been instrumental in ensuring that technology-oriented policies promote partnerships between the private sector and the US government, partnerships that are vital in creating safeguards for other sectors of the economy such as public health and safety.

The plan was an attempt by the US national government to design a way to protect its cyberspace. In December 2000, President Bill Clinton directed the federal government to develop a plan to defend US cyberspace, to be fully operational by May 2003. The National Plan complemented federal computer security and Information Resources Management (IRM) responsibilities, and it would further support the vulnerability and risk assessments required by OMB Circular A-130, Appendix III, “Security of Federal Automated Information Resources”.

In addition, the plan, through the CIO Council, would assist in the development of recommended practices in accordance with the Computer Security Act of 1987. By June 2000, the US federal government aimed to complete the critical physical infrastructure protection plan by securing its cyberspace through technology-oriented policies.

Technology-oriented policies have enabled the federal government to develop a government-wide intrusion detection capability (Clifford, 1999). Through such policies, the intrusion detection capability offers both civilian and national defense information systems timely warning of cyber attack threats and vulnerabilities. Together with the analysis of infrastructure security issues by Computer Emergency Response Teams (CERTs), this keeps the US government in a position to better understand the threats and vulnerabilities that may be present in its information systems.

US technology-oriented policies have recommended the use of security systems in the web browser as a result of cyber attacks on critical infrastructure (Janczewski et al., 2007). US-CERT (the United States Computer Emergency Readiness Team) encourages the use of website security, which protects individuals and the nation from acts of cyber-terrorism. A technology-oriented policy such as the USA PATRIOT Act ensures that the US entry-exit system of cyberspace is kept under surveillance.

Under section 403(c) of this act, a fingerprint matching system was administered in June 2003 under the National Institute of Standards and Technology’s statutory requirement (National Institute of Standards and Technology, 2003). The Fingerprint Vendor Technology Evaluation (FpVTE) is administered in order to certify the biometric technologies that may be used in the US entry-exit system.

President Clinton’s endorsement of filtering software used to screen Internet material is seen as a good alternative to unsuccessful communications acts. Many US states have continued to propose legislation promoting the installation of filtering software packages in computer systems. For instance, Senator John McCain has in the past introduced legislation in the US Senate aimed at denying funds to schools or libraries that do not implement a blocking system for computers connected to the Internet.

This move clearly indicates that the US has realized the danger of cyber threats and is trying its best to enact new technology-policy legislation to prevent attacks. The US has even developed a policy promoting consistent approaches to physical and computer security, known as the “Policy Issuance Regarding Smart Card Systems for Identification and Credentialing of Employees”, which provides guidelines on the use of smart-card-based identification and credentialing systems. Consistency in physical and computer security approaches is vital in discouraging cyber-attacks through these systems.

The development of policies that protect US critical infrastructure continues to encourage measures protecting computer systems from attacks, and these proposed measures are expected to be integrated into technology-oriented policies in the future. Efforts made to protect computer systems include the identification of 18 commercially available cybersecurity technologies by the US General Accounting Office.

The technologies are used by federal agencies to protect computer systems from cyber attacks (Clifford, 1999). For instance, smart tokens are used to monitor user identities, and security correlation tools to monitor network devices. Another proposal made to mitigate cyber attacks in the US is the consolidation of terrorist watch lists to promote sharing and integration, which aims at better preventing and defending the US from attacks. According to an April 2003 report by the US General Accounting Office, standardizing and merging the federal government’s watch list structure would promote US border security through technology-oriented systems.

Assessments continue to be done to identify cybersecurity requirements, and the information gathered is used to improve technology-oriented policies so they better protect US critical infrastructure against cyber attacks. The technology assessment on cybersecurity for Critical Infrastructure Protection (CIP) identifies key cybersecurity requirements in each CIP sector as well as the cybersecurity technologies to be applied. In addition, the assessment addresses how cybersecurity requirements interact with present technology policy issues such as privacy and information sharing.

The implementation of the President’s National Strategy to Secure Cyberspace and the Homeland Security Act of 2002 has been effective in combating cyber threats, and has even seen the creation of the National Cyber Security Division to curb cyber threats more effectively. Finally, US-CERT (the United States Computer Emergency Readiness Team) currently ensures that web browsers’ security settings are evaluated. These measures protect against cyber-terrorism, which is a great threat to US critical infrastructure.

Conclusion

The US has one of the strongest economies in the world, which has stimulated its great advancement in technology, especially in computer use. This has made it very vulnerable to cyber attacks, which have in the past damaged its critical infrastructure and continue to threaten it. The US government has made efforts to prevent these attacks through security measures integrated into technology-oriented policies. These have been instrumental in combating the attacks but will require future improvements.

References

Best Practices for Seizing Electronic Evidence. A joint project of the International Association of Chiefs of Police and the United States Secret Service.

Clifford, D. R. (1999). Computer and Cyber Law: Cases and Materials. Carolina Academic Press.

Dougherty. US Developing Cyber-warfare Capabilities. WorldNetDaily Exclusive.

Janczewski, L. J., and Colarik, A. M. (2007). Cyber Warfare and Cyber Terrorism. IGI Global.

Federal Identity and Credentialing Committee Report. Web.

Post, Jonathan V. (1979). “Cybernetic War”. Omni, pp. 44-104. Reprinted in The Omni Book of Computers and Robots.

Marshall, L. (1997). “U.S. Government to Participate in the Manhattan Cyber Project”. Press release by Schwartz Communications. Web.

National Institute of Standards and Technology. (2003). Fingerprint Vendor Technology Evaluation (FpVTE) Preliminary Announcement. Web.

Report on Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integration and Sharing. US General Accounting Office (GAO), 2003. Web.

Ryan, N. (2008). “Chertoff Describes Manhattan Project for Cyber-defenses”. eWeek.

Staten, C. L. (1999). “Asymmetric Warfare, the Evolution and Devolution of Terrorism: The Coming Challenge for Emergency and National Security Forces”. Journal of Counter-Terrorism and Security International, Vol. 5, No. 4, pp. 8-11. Web.

US-CERT. (2008). Evaluating Your Web Browser’s Security Settings. Web.

US General Accounting Office (GAO). (2004). Information Security: Technologies to Secure Federal Systems. Web.

US General Accounting Office (GAO). (2004). Technology Assessment: Cybersecurity for Critical Infrastructure Protection. Web.

IT Network Infrastructure Basics

Introduction

To begin with, it is necessary to point out that contemporary organizations cannot perform their activities (especially business activities) without a properly arranged and effectively adjusted IT network and data communication infrastructure. It is the very heart of information management, while information itself is the key to success.

Design of a Network

Originally, the creation of any project associated with the IT sphere and data management strategies follows a particular cycle.

Figure 1. Cycle

This cycle is an essential part of the data management strategy, as the elimination of any single point of the cycle will eventually destroy the data management system; the only difference among the points is how long that destruction takes.

The technical side of the project implementation should look as follows:

  • Network design. The IT management team is responsible for planning the most efficient way to create an easily modifiable computer network that meets the company’s specific needs. The overall solution should be the most cost-effective approach to meeting long-term requirements. Moreover, the design should take possible upgrades into consideration and have a flexible structure allowing quick modifications.
  • Cabling. This is regarded as one of the most time-consuming parts of the work. In order to avoid unforeseen obstacles, alternative cabling plans should be elaborated.
  • Network hardware management. This is required for the proper work of the data management system, as it covers IP allocation, technical maintenance of the overall network, and the proper operation of the organization’s IT sphere.
  • Data safety maintenance. For the data communication system to work properly, it must be kept safe from breakdowns, attacks, and failures.
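
As a concrete illustration of the IP allocation task mentioned under network hardware management, the sketch below carves per-department subnets out of a single address block using Python’s standard `ipaddress` module. The address plan, the department names, and the `next_host` helper are hypothetical, not part of any vendor tooling:

```python
import ipaddress

# Hypothetical address plan: one /16 site block split into /24 department subnets.
site_block = ipaddress.ip_network("10.20.0.0/16")
subnets = list(site_block.subnets(new_prefix=24))  # 256 possible /24 subnets

departments = ["finance", "engineering", "voice-vlan"]
plan = {dept: net for dept, net in zip(departments, subnets)}

def next_host(net: ipaddress.IPv4Network, used: set) -> ipaddress.IPv4Address:
    """Return the first free host address in a subnet, skipping used ones."""
    for host in net.hosts():
        if host not in used:
            return host
    raise RuntimeError(f"subnet {net} exhausted")
```

Keeping the plan in one structure like this makes later upgrades (adding a department, widening a subnet) a matter of changing the prefix length rather than re-cabling or renumbering by hand.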

Taking into account the necessity to clarify the technical aspect of the network design, it should be stated that Cisco technologies offer some of the most convenient solutions for IP communication systems. Cisco offers open standards based on AVVID principles (Architecture for Voice, Video, and Integrated Data), and its flexible, interoperable migration strategy allows the IT management team to choose the IP communications solutions that best fit the company’s requirements.

IP communication technology powered by Cisco AVVID technological principles is regarded as the best solution for the data management infrastructure and is aimed at creating a converged network that can carry voice, video, and data traffic simultaneously. All capabilities are provided with a high level of equipment and software availability, integrated quality of service, and increased security. Leveraging the principles offered by Cisco AVVID, Cisco IP telephony solutions provide high-quality performance and address current and emerging communications requirements in the enterprise environment. They are aimed at optimizing the functionality of data communication systems and decreasing configuration and technical support requirements for a wide variety of applications.

Finally, it should be stated that IP communication and communication technologies based on IP data transmission are regarded as the most reliable in the sphere of data communication. The high availability and quality of Cisco equipment and hardware place it among the most sought-after and reliable technical solutions.

Cloud Computing and Corporate IT Infrastructure

Cloud computing is one of the most recent technologies applied within IT processes. The technology is presently used within corporate IT infrastructure. There is an evident shift from customary software models, and competitive corporate organizations are moving to cloud computing.

Most corporate organizations are geared towards improving their IT infrastructure (Williams 17). The application of cloud computing technology has evident impacts.

For instance, procurement processes within IT departments have been reduced, because heavy reliance on software materials has decreased. This means that most corporate organizations enjoy the economic benefits of cloud computing.

Organizations have reduced the number of employees within their IT departments. Cloud computing has alleviated various manual operational systems within these organizations. The technology has also led to the strategic management of IT systems (Vidgen 25).

This is evident within all organizations. IT services are adequately utilized, ideally with minimal delays, because they are readily dispensed through the online channel. Most organizations have realized the need to train their employees in cloud computing, which is aimed at increasing the rate of uptake of the technology.

Cloud Computing, Reliability and Security

Cloud computing technology has transformed the IT landscape. Several security benefits derive from the application of this technology, including the alleviation of the risk of losing critical information through hardware failures and disruption. Various software materials are prone to malware and other security threats.

Basically, cloud computing has considerable benefits. It is important to recognize, however, that its application has also led to the development of new security threats. The threat landscape has grown substantially due to the application of cloud computing (Williams 41), and the development of various Internet-based malware and security threats is evident.

There are also risks of information loss. Companies have advanced their information security systems within all operations and stepped up their security monitoring and evaluation systems. They employ qualified and skilled IT professionals, which ensures reliability in service dispensation.

Cloud Computing and IT Outsourcing

The application of cloud computing has significant impacts on IT outsourcing. The number of permanent employees within IT departments has remarkably decreased, because through cloud computing the IT services can be acquired online by all end users (Williams 48).

Organizations must consider various business factors when outsourcing cloud computing. The foremost consideration should be the level of security: all systems and information pertinent to organizational operations must be adequately safeguarded.

There are other hidden costs associated with outsourcing, and these must be comprehensively addressed. It is necessary to evaluate outsourcing contracts, and organizations must ensure proper safeguards. Data security and data location are crucial in all outsourcing processes, and these considerations are vital for companies utilizing cloud computing technology.

Reasons Why Information Systems Projects May Be Prone to Difficulties

Most information systems face constant challenges for many reasons. Most organizations have evaluated their systems and redesigned their operations in order to meet security needs. One common reason for the difficulties is the engagement of low-skilled workers within IT departments.

Organizations must consider engaging the services of technical and experienced IT professionals (Mather, Kumaraswamy and Latif 54). All organizational objectives must be properly aligned with the operations of information systems.

The failure to undertake this initiative has led to problems associated with information systems. Poor IT outsourcing strategies have also contributed to these constant challenges.

Organizations have failed to integrate proper outsourcing mechanisms. Such failures have led to the development of security threats and to the low uptake of the various security guidelines outlined for end users by IT departments.

The highly transformative technological industry is another element contributing to this challenge (Vidgen 46). Evidently, the industry is characterized by spontaneous developments within the IT sector, and drastic measures are required for organizations to minimize instances of information failure.

Adapting to these notable transformations and integrating technological advancements within an organization’s operations can be critical. Primarily, such measures ensure that organizations remain relevant within the technological sphere and reduce instances of information insecurity.

Steps to Be Taken by Organizations to Ensure Successful Information Systems Development and Implementation Projects

Organizations should value the significance of information security. Proper and comprehensive alignment of information processes with organizational goals must be achieved. This is important for many reasons; for instance, it makes information technology processes operate towards achieving collective organizational goals (Vidgen 61).

Organizations must recognize the importance of carrying out strategic needs assessments within different departments. Through this initiative, they can determine the various information needs that may arise within the different departments of the organization.

Such results help in streamlining basic IT application procedures. Certain investigations have revealed that a lack of proper IT application schedules has resulted in technical and operational failures, a challenge that has also been encountered within various security systems.

It is crucial for organizations to engage an adequate number of highly skilled and trained personnel (Vidgen 78). Observably, the technological industry is prone to flexibility and transformation; therefore, organizations must engage the services of highly skilled and adaptable personnel.

This consideration is important since it enhances organizations’ capacity to remain productive and competitive within the technology industry. Adequate budgetary allocation for information technology operations is also important; generally, financial mismatches have been noted as chief contributors to information security failures. Organizations must consider these elements in order to achieve their full potential.

Knowledge Management Strategy

Knowledge management procedures should be undertaken in a systematic and active manner. The process aims to manage and control the available knowledge within an organization. There are basic considerations in the development of an effective strategy of knowledge management. The first step is the analysis of the kind of knowledge in context.

A transformative approach to knowledge creation, storage, and retrieval is necessary (Robertson 2004). Apart from this, an organization must examine its capacity to undertake the transfer and application of the particular knowledge. As in the case of my organization, all organizations on the verge of initiating new technological systems have a variety of factors to consider.

The organization must consider the type of knowledge and the most economically feasible management strategy. Due to the organization’s compromised economic competency, a relatively affordable and less complex knowledge management strategy is advisable. This means that the organization has to budget for a remarkably small amount of IT infrastructure and a small number of personnel.

The less complex knowledge management strategy is most preferable within the organization. This is because the simple strategies are more adaptable and easy to comprehend. This provides potential benefits both to the workers and the organization in general.

The strategy must focus on developing people’s capacity to gather, classify and assimilate data. An outline of the procedures involved in data processing, retrieval and consumption must also be drawn. The procedures for providing adequate access to data must be comprehensively addressed within the strategy.

Codification strategy is most applicable for the organization (Robertson 2004). In designing the strategy, the organization aims to make critical knowledge available to all beneficiaries in an easily accessible manner. Proper scanning procedures must be outlined, and the necessary schedules for information organization must be developed.

Other important considerations include the development of the necessary knowledge maps. Because of financial limitations, the organization must utilize simple and robust knowledge maps that require less frequent maintenance. The organization must utilize the codification strategy because of the high data needs and information processes within its operations.
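
A codification strategy of the kind described can be illustrated with a minimal keyword index, the simplest form of a knowledge map. The document identifiers and the `build_knowledge_map` helper below are invented for illustration; production systems would add stemming, ranking, and access control:

```python
from collections import defaultdict

def build_knowledge_map(documents: dict) -> dict:
    """Map each keyword to the set of documents that mention it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in set(text.lower().split()):
            index[word].add(doc_id)
    return index
```

Because the index is just a dictionary, it is cheap to rebuild whenever documents change, which suits an organization that must keep maintenance effort low.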

Most IT applications enable a high level of information sharing, which can be realized within the entire organization. Robust technological applications such as cloud computing may be appropriate for achieving these benefits (Williams 81).

Knowledge management increases capacity development and talent sharing amongst the employees. In addition, it supports the most vital communication and feedback processes. Knowledge management increases the capacities for transparency and accountability within organizations.

Works Cited

Mather, Tim, Subra Kumaraswamy, and Shahed Latif. Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. Sebastopol, CA: O’Reilly, 2009. Print.

Robertson, James. . 2004. Web.

Vidgen, Richard. Developing Web Information Systems: From Strategy to Implementation. Oxford: Butterworth-Heinemann, 2002. Print.

Williams, Bill. The Economics of Cloud Computing: An Overview for Decision Makers. Indianapolis, Ind: Cisco, 2012. Print.