IT Security: A Problem Analysis

Information technology has always been a resource that requires vigilance due to its ability to affect its users' lives drastically. Not only can one communicate and share conversations, documents, pictures, and other material throughout the world, but private and confidential information is also stored that ought not to be shared. The problem I am investigating is hacking, which essentially means gaining unauthorized access to someone's private and confidential information.

A major problem like hacking is hazardous to the security of organizational secrets: if a person can gain access to confidential information, that information can be used for personal gain. Even if it is not exploited directly, the intruder can also damage the information, making it useless or misleading to its legitimate users.

There is a common philosophy in the computer world that if you can make it, you can break it. This philosophy is often used to describe the mentality of hackers, who reason that if a program can be made, it can be unmade. The virus industry has boomed on the back of this philosophy: new viruses keep appearing while the anti-virus companies keep building their cures. Since hacking is a technological problem just like viruses, it can also be addressed through a technological solution.

In search of information regarding hacking and its solutions, I visited the www.infosyssec.com website. At first glance the website looks cluttered and heavily textual; however, being a meta site, this is to be expected. With long pages full of links and information, the first impression was of a page crammed with meaningless content. The interface, though, clears up the apparent mess quickly: the left-hand panel clearly describes the website's research resources and projects, with clearly titled links and guides for specific research topics.

The guide on the website proved to be an effective framework for preventing hacking. The first and foremost step is to implement a firewall. A firewall, as the name suggests, acts like a barrier that only allows authorized entry into the network. Keeping an updated anti-virus program also keeps at bay the viruses that may help hackers gain entry into the network. Furthermore, cookies should be disabled, passwords should not be saved on the computer, auto-complete should be turned off, and the browser history should be erased after each use. Much hacking is carried out through a Trojan unknowingly installed from an untrusted source, so care must be taken that software downloaded from the internet or installed on the computer originates from authentic and trustworthy sources.

The best solution for me would be to install an anti-virus program and update it regularly so that my computer and network are secure from the Trojans and backdoors that allow hackers to gain access to my data. Although most people would opt for a firewall as a first priority, I would choose the anti-virus, since it serves the multiple purposes of keeping spyware and viruses off my computer while at the same time making it more secure against intrusion.

The website proved to be quite useful in the end. Although at times it looked unstable and I found a few broken links, there was still a great deal of information that could be applied to other problems too. My first impression of a cluttered website was replaced by the impression of an informative one. Such websites are one-stop sources for users, as all the information can be taken from a single place. I therefore believe that this website fulfills users' needs for almost any topic related to information technology.

Where science has given us various tools and techniques to help us tackle many problems, it has also given some people a weapon to inflict their malice on others. Technological problems, however, can only be solved with technological solutions: science created them, and only science can put an end to them.

References

The Security Portal for Information System Security Professionals. (n.d.). Business Continuity and Disaster Recovery Planning. Web.

IT Project Management: IT System Implementation

Introduction

Aim

This report aims to research, investigate, implement and promote an IT system in a small business center. In this particular report, we will be looking into a medical simulation business center in Chelsea and Westminster NHS Trust, located in central London (Chelsea and Westminster NHS Trust 2008, Centre for Clinical Practice), and attempting to implement an improved IT system within this business center. The simulation center in Chelsea and Westminster NHS Trust trains doctors and nurses in a safe environment through the process of simulation (Chelsea and Westminster NHS Trust 2008, Simulation). The Chief Medical Officer, Sir Liam Donaldson (2009, pp. 49-55), in his 2009 annual health report stated the importance of establishing more medical simulation centers in the United Kingdom, as they provide better and safer training to doctors and nurses of various grades. That report resulted in large funds being released by the London Deanery for the establishment of 13 simulation centers in London. The development of these 13 simulation centers will pose a serious threat to the business profits of the simulation center in Chelsea and Westminster, as the target customers (doctors and nurses) will now be divided among more simulation centers.

Such a competitive environment demands change in the IT system to provide better customer service. This report will investigate possible methods to improve the quality of IT systems in the simulation center in Chelsea and Westminster to give it an edge over its competitors.

Objective

Since the target customers are doctors and nurses, we must provide a system that enables an easy flow of information from the front end (clients) to the back end (servers). Currently, if a client wants to register for a simulation course at Chelsea and Westminster, he or she has to download a PDF registration form and post it to the center. Once the form is received, a clerical officer enters the data into a local Microsoft Access database.

The above flow of information from the front end to the back end is cumbersome and time-consuming. We shall research and implement a better IT system to enable an easier flow of information between the front-end and back-end users. We also factor in the necessary suppliers, budgeting and promotional strategy for the new IT system.

Research Methodology

We will conduct thorough research on the options available, within cost and time constraints, for improving the IT system for the front-end user as well as the back-end user. This is discussed in more detail in Section 2: IT Systems Options and System Design.

Business Scenario

A developed IT system between the client and server will provide a better platform to attract potential customers to the simulation center, resulting in improved business. It is also important that the IT project has pre-defined outcomes with quality criteria, ring-fenced finance, resources and a period for the completion of the project.

IT System Options and System Designs

Research

The research of various options available for improving the IT system can be broadly classified into front-end and back-end applications.

Front-end application: The front end will be improved by using a web hosting service to provide an online registration form built with PHP. This online form will allow doctors and nurses to register easily for a simulation course. When comparing web hosting services, the following factors were considered: cost, maintenance, ease of use, disk space, bandwidth, and upload and download speed. The selected web hosting company also had to be supported by the programmer dealing with this project. Common hosting companies include Host Papa, 1&1, UK2.Net, Webfusion, UKhostforu, Easyspace, Easily.co.uk and One.com (Web Hosting Review 2009, Top 8 Web Hosts). The research showed that Host Papa meets our requirements, with the necessary endorsements from the technical team.

Back-end application: Here, various databases such as Microsoft Access (Microsoft Office Access 2009, Overview), MySQL (MySQL 2009, Products), and SQL (SQL 2009, Home) were investigated. The research factored in cost, technical support, size limits and suitability for driving a web-based application. MySQL was preferred because it is a cost-efficient, open-source database application and has better reviews for driving web-based applications. Moreover, the chosen hosting company, Host Papa, is compatible with the MySQL database, providing a strong link between the front-end website and the back-end database.

Architecture

The architecture of the proposed new IT system will comprise a new webpage inviting the online user, in this case a prospective customer, to register for a course. The information provided by the online user at the front end will be linked to the back-end MySQL database through a server-side PHP script that runs on each submission. It would also be worthwhile to copy the data from the online form to an email address for the back-end user. This will serve as a necessary backup in case MySQL runs into technical problems, ensuring that no data is lost while MySQL is down.

Figure 1: Proposed IT System Block Diagram
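To make this architecture concrete, a minimal PHP sketch is given below. It is only an illustration under assumed names: the database `simulation`, the table `registrations`, the account `reg_form`, the form field names and the destination email address are all placeholders rather than the center's actual configuration. The sketch shows how a form submission could be written to the MySQL back end and simultaneously copied to a mailbox, providing the backup described above.

```php
<?php
// register.php - hypothetical handler for the online registration form.
// All table, account and address names below are placeholders.

mysqli_report(MYSQLI_REPORT_OFF);
$db = new mysqli('localhost', 'reg_form', 'change-me', 'simulation');
if ($db->connect_error) {
    // Do not abort: the email backup below still captures the registration.
    error_log('MySQL connection failed: ' . $db->connect_error);
}

// Collect the submitted fields (assumed form field names).
$name   = trim($_POST['name'] ?? '');
$email  = trim($_POST['email'] ?? '');
$course = trim($_POST['course'] ?? '');

if ($name === '' || $course === '') {
    http_response_code(400);
    exit('Name and course are required.');
}

// 1. Write the registration to the back-end MySQL database.
if (!$db->connect_error) {
    $stmt = $db->prepare(
        'INSERT INTO registrations (name, email, course, submitted_at) VALUES (?, ?, ?, NOW())'
    );
    $stmt->bind_param('sss', $name, $email, $course);
    $stmt->execute();
    $stmt->close();
}

// 2. Copy the registration to the simulation center mailbox as a backup,
//    so no submission is lost if MySQL is unavailable.
$body = "Name: $name\nEmail: $email\nCourse: $course";
mail('simulation-centre@example.org', 'New course registration', $body);

echo 'Thank you, your registration has been received.';
```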

Implementation

Implementation of a new IT system in a large organization like the NHS will have to satisfy certain protocols and policies set by the IT department. The factors that need to be considered are discussed below.

IT Policies

Since there will be an inbound stream of information from remote computers into the NHS Trust network, it is imperative that the Trust IT Department consents before the IT system is implemented. Prior to implementation, meetings must be held with the IT department and written documentation of approval obtained.

Cost

Before the implementation of the IT system, the budget for the project needs to be agreed between the center manager running the simulation center and the project manager leading the IT development. Besides agreeing on the total expenditure, there also needs to be a written document approving the distribution of funds. The approval of costing for the IT project should be carried out between the center manager of the simulation center, the project manager of the IT systems and the finance manager of the NHS Trust.

Resources

A realistic approach is important in understanding the financial and human resources available to complete the IT project. The project would benefit from the necessary support of the departments concerned, such as the IT department, the Finance department and the Estates department, which will be carrying out the networking, cost management and cabling respectively. A proper document listing the available internal and external resources, and their cost implications, is also necessary.

Time

The use of Microsoft Project enables time management through a Gantt chart. The project needs to be divided into smaller stages. In this case, stage 1 would comprise setting up the MySQL database for the back end, stage 2 would comprise programming an online form for the front end, stage 3 would be programming a PHP script to link the fields of the online form to the appropriate columns in the MySQL database, and stage 4 would be creating a backup link between the online form and the email address of the simulation center. The time frame for each stage of the project needs to be documented and approved between the center manager and the IT project manager.

Business Benefits

This section discusses how the proposed IT system will benefit the business of the medical simulation center of Chelsea and Westminster NHS Trust. The MySQL database will replace the Microsoft Access database, providing better security for confidential and financial information. Unlike Microsoft Access, MySQL provides a multi-user interface, so two or three back-end employees can access the database simultaneously, resulting in better time management. MySQL also has a large storage capacity, making it suitable for driving a web-based application, and it is compatible with phpMyAdmin, which provides easier access to the database for technical work. These factors contribute to better time management for the employees of the simulation center, allowing them to promote and market their courses more effectively. The online form at the front end will replace the traditional approach of downloading PDF registration forms, making registration easier for potential customers and encouraging them to pay online instead of by cheque or demand draft. This allows a faster, easier and more efficient way to enroll in a simulation course in comparison with other centers in London, which will attract more clients to the simulation center and directly improve its financial outcome. Furthermore, the PHP script linking the data in the online form to the back-end MySQL table enables direct entry of data from the front end to the back end. This effectively saves time for the employees of the simulation center, who currently spend it entering data from registration forms into the Microsoft Access database.

Hence, it can be observed that implementation of the new IT system will result in better customer service, more clients attending the simulation courses, and less time wasted by the employees of the simulation center, giving them more scope and time to market and promote their courses.

Management Challenges

This section will consider certain issues such as security, ethics, risk management, quality management and change management during the implementation of the proposed IT system.

Security

Security needs to be enforced at the front end as well as the back end, especially since data will be passed through the NHS Trust network. According to the NHS Information Code of Practice, any data stored on paper, computers, CDs, memory sticks or across the network is subject to the NHS IT security policies (Department of Health 2007, Information Security Management: NHS Code of Practice). Several security measures can be adopted through the web hosting service to block or control spam, such as a spam guard and the blocking of IP addresses. MySQL provides strong security measures for accepting information over networks. Its general security measures are based on Access Control Lists for all connections, queries, and other operations that users can attempt to perform (MySQL 2009, General Security Guidelines). There is also support for SSL-encrypted connections between MySQL clients and servers (MySQL 2009, General Security Guidelines). Access to the user tables in MySQL should not be given to any user account except the MySQL root account (MySQL 2009, General Security Guidelines). MySQL also provides statements such as GRANT and REVOKE to control user access to the back end (MySQL 2009, General Security Guidelines).
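As an illustration of the GRANT and REVOKE statements mentioned above, the sketch below creates least-privilege accounts for the proposed system. It is only a sketch under assumed names: the `simulation` schema, the `reg_form` and `back_office` accounts and their passwords are placeholders, and the statements would in practice be run once by the database administrator from an account with sufficient privileges.

```php
<?php
// harden_db.php - hypothetical one-off hardening script run by the DBA.
// Account, password and schema names are placeholders.

mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$admin = new mysqli('localhost', 'root', 'root-password');

// A dedicated account for the online form: it can only INSERT registrations
// and has no access to the MySQL user tables.
$admin->query("CREATE USER 'reg_form'@'localhost' IDENTIFIED BY 'a-strong-password'");
$admin->query("GRANT INSERT ON simulation.registrations TO 'reg_form'@'localhost'");

// A back-office account for simulation center staff: read and update course data only.
$admin->query("CREATE USER 'back_office'@'localhost' IDENTIFIED BY 'another-strong-password'");
$admin->query("GRANT SELECT, UPDATE ON simulation.* TO 'back_office'@'localhost'");

// Require SSL-encrypted connections for the back-office account.
$admin->query("GRANT USAGE ON *.* TO 'back_office'@'localhost' REQUIRE SSL");

// If an account later turns out to be over-privileged, REVOKE narrows it again.
$admin->query("REVOKE UPDATE ON simulation.* FROM 'back_office'@'localhost'");

$admin->close();
```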

Ethics

The development of the new IT system requires liaison between many parties. Those involved in this project would be the center manager of the simulation center, the IT project manager leading the development of the IT system, the Estates Department for the cabling, the NHS Trust IT department, which will check that the proposed IT system falls in line with NHS Trust IT policies, and the Finance department, which deals with budgeting, invoices and other finance requirements. Since such a process demands negotiation across tables between different departments, it is imperative to consider the ethical dilemmas that arise between departments when implementing a new IT system. Every user accessing the MySQL database will have to sign a form agreeing to maintain the confidentiality of clients' personal data. Staff need to be trained to understand the proper use of the MySQL database and the ethical issues concerning it. These relate to honor, honesty, bias, adequacy, due care, fairness, social cost and action (Rogerson, 1996).

Risk Management

For an IT project like this, involving different departments and considerable cost, it is important to evaluate the risks affecting the project outcome at the very outset. This initiates risk management, which is to manage the project's exposure to risk by taking action to keep that exposure to an acceptable level in a cost-effective way (Risk Management Guide 2009, Management of Risk). Risk analysis involves the identification and evaluation of potential risks to any aspect of the project. Risk evaluation assesses the probability of a risk occurring and its impact on the project should it occur. Having identified the risks, possible actions to deal with them need to be considered and appropriate actions selected. These include preventing the risk, reducing the risk, transferring the risk, accepting the risk as a last resort if its impact on the project is within tolerance limits, or preparing a contingency plan, which is an emergency plan to eliminate the risk.

Quality Management

Before the start of the project, quality criteria should be established by all the departments and stakeholders concerned. These quality criteria represent the ideal quality outcome of the project and need to be documented and approved by all concerned parties. They will be drawn from a variety of sources, including the customer's quality expectations, the requirements of ISO standards, NHS IT quality management and existing quality management systems, together with clear responsibilities for ensuring that quality is built into the project. Planning for quality will involve agreement on how each stage will be tested against its approved, documented and pre-defined quality criteria, at what point in the project such quality testing will be carried out, and by which department. The quality of the project is attained by a combination of actions: defining quality criteria for each stage of the project in measurable terms, developing stages according to the defined quality standards, and checking for quality in all delivered stages. Quality management can be monitored through configuration management and change control.

Change Management

With the new IT system in place, there is bound to be less data-entry work for the employees of the simulation center. This will involve changes in management and organizational structure. In this particular case, change management ensures a fixed process and technique to be followed for the efficient management of changes to all IT structures, which will reduce the impact of any related incidents on the service. Such changes will also be reflected in the documentation for future reference by staff. It is also important to check that these changes comply with legislation and the legal policies of the NHS Trust. Change in the organizational structure needs to be carried out in measurable terms, slowly but effectively; sudden, unexplained changes in organizational structure will almost always result in opposition and failure of the change. Hence, adopting the new IT system will bring changes in technical as well as managerial matters, which need to be handled sensitively and properly.

Conclusion

From the research conducted, it can be concluded that the proposed IT system will require the involvement of many departments with different policies and regulations to follow. Since the proposed IT system is to be installed in an NHS Trust, it will also have to incorporate the IT policies adopted by the Trust with regard to security. However, provided the necessary resources are available and the relevant legislation is followed, the proposed IT system will prove to be cost-effective and far more beneficial for the business of the simulation center in Chelsea and Westminster NHS Trust, giving it an edge over its competitors.

References

  1. Chelsea and Westminster NHS Trust, 2009. Centre for Clinical Practice. [Internet] London: Chelsea and Westminster NHS Trust.
  2. Chelsea and Westminster NHS Trust, 2009. Simulation. [Internet] London: Chelsea and Westminster NHS Trust.
  3. Department of Health, 2007. Information Security Management: NHS Code of Practice. [Online]
  4. Donaldson, L., 2009. 150 years of the Annual Report of the Chief Medical Officer: On the state of public health 2008. [Online]
  5. Microsoft Office Access, 2009. Overview. [Internet] United States of America. Web.
  6. MySQL, 2009. Products. [Internet] United States of America. Web.
  7. MySQL, 2009. General Security Guidelines. [Internet] United States of America. Web.
  8. Risk Management Guide, 2009. Management of Risk. [Internet] United Kingdom.
  9. Rogerson, S., 1996. Training for Ethics. IMIS, [Online]. 6(1).
  10. SQL, 2009. Home. [Internet] United States of America. Web.
  11. Web Hosting Review, 2009. Top 8 Web Hosts. [Internet] United Kingdom: Web Hosting Review.

Network Admission Control

Introduction to Network

A network can be defined as a collection of two or more computers connected so that they can easily communicate with each other. This communication includes the sharing of resources, files and documents, as well as every other kind of electronic communication.

Computers can be linked with each other using any kind of architecture; the basic idea of networking is to share information, no matter how complex the network design. Basically, four elements are required to develop and establish a network: a protocol, network interface cards, cabling and a hub.

A protocol is a set of rules used to standardize communication so that all the computers can understand each other's language.

Network interface cards are required to connect the different computers to the network and allow them to exchange information. The medium used to connect the computers can be cable, infrared or any other transmission medium.

Hardware is required for controlling the traffic over a network. Normally, a hub is used to connect the computers.

There can be many different types of networks, categorized by factors such as size, complexity, the distance covered, security, access, and how the computers are connected to each other. A network can be arranged peer-to-peer, i.e., without any dedicated server, or as a client-server network. [Jelen 2003]

Network Security

No matter what kind of network is established, how complicated it is, or how many computers it contains, the one element that must be present in every network is security. Network security can be defined as any step taken by companies and organizations to protect their networks and systems. The amount of security is decided by companies and organizations themselves, through experts. Network security is among the most discussed topics: all organizations and businesses are primarily concerned with the security of their networks, because if security is weak, their secret information can be harmed in any number of ways, resulting in a big loss. Network security is not a simple topic; rather, it is a complex issue that must be handled with great care. Many different activities are performed to provide security and protection for the assets and valuable information of the organizations implementing it. Nowadays, strong network security is provided using many different techniques and devices, including antivirus software, firewalls, intrusion detection systems, virtual private networks, identity services, encryption and security management. To provide a high level of security, these tools and techniques are usually implemented together in layers, which makes it difficult to access the network easily. Today, network attacks are very common and information theft takes place to harm organizations; thus, the importance of strong network security cannot be denied, and it should be deployed in all companies and organizations that possess their own networks. Data and information are threatened by many different kinds of network problems, such as viruses, vandals, attacks, data interception and hacking. Security measures should therefore be taken to protect the assets of an organization. [What is network security n.d]

What is Network Admission Control?

Network Admission Control, or NAC, can be defined as a collection of rules, techniques, technologies and solutions adopted by the network infrastructure to apply and ensure security measures on all the devices that try to access a particular network. Today, NAC is a popular form of network security: access to a network is controlled with the help of standards created by a security team, and this method is known as NAC. It can be used with all types of devices, such as desktops, PDAs, etc. A firewall helps restrict access; NAC, on the other hand, does not simply restrict access but incorporates the intelligence required to decide on network access. Nowadays, there are many solutions to choose from, and there are several reasons why NAC should be recommended. It can be as simple as an access control policy or a choice of virtual LANs, or it can be very complex, such as firewall settings that only allow a specified network to access it. [Davis 2007]

Components of NAC

A Network Admission Control system is made up of several components, each serving a specific purpose and each necessary for the system to operate properly. The following is an overview of the important components of NAC:

  • Cisco Trust Agent

Cisco Trust Agent is software located on the endpoint system. It collects information about the security state of the device from security software such as antivirus and Cisco Security Agent clients, and this information is then transferred to the network access device. The trust agents are licensed by Cisco and are combined with the Cisco Security Agents to report on the security of the endpoint.

  • Network Access Devices

These are the devices that enforce all the policies, techniques and rules related to admission control. They include firewalls, switches, security appliances, routers and wireless access points. These devices demand security credentials and pass the information to the servers where the NAC decisions are made. According to the specifications provided by the customer, the appropriate NAC action is applied: deny, move, quarantine, ignore, restrict or permit.

  • Policy Server

The policy servers analyze and evaluate the security information provided by the network access devices. They are responsible for deciding what kind of access policy should be applied to the device.

  • Management System

The management system provides monitoring and reporting capabilities and also provisions the NAC devices.

  • Advanced Services

The NAC also provides many different advanced services, these services may include:

  1. Network Readiness Assessment: This service evaluates the network and its infrastructure to determine the speed and readiness of the network.
  2. Design Development: This service specifies the design of the NAC so that it can be used within the networks of the organization.
  3. Implementation Engineering: This service covers configuring, testing, installing and tuning the NAC elements and components.
  4. Optimization Engineering: This service provides periodic changes to improve the reliability, effectiveness and efficiency of the system.

Hence, NAC can be considered a good solution, as it provides through its components many services and facilities that would cost a great deal if deployed separately. NAC solutions can definitely save time, money, effort and resources. [Network Admission Control n.d]

Benefits of NAC

There are several benefits of NAC. Implementing NAC improves the security of a network: it requires hosts to use the latest antivirus available for their systems and to comply with system patch policies before they can acquire network access, which protects the network from viruses and worms. Because devices can only join through the network, NAC can use the network to its advantage by inspecting hosts and enforcing security policies that protect the user's system. Using network services such as segmentation via Access Control Lists (ACLs) or VLANs, non-compliant hosts are denied access to the network so that systems do not become targets of virus infection. Another benefit of NAC is that it extends the existing network and increases the value of existing security investments, including the network infrastructure and the security technology of the hosts. If information about the security level of the endpoint is available and is combined with the network admission policies, NAC certainly allows customers to enhance the security of their computing communications. NAC provides complete control over access to the network, so no unwanted devices can enter the system; it controls all the provided access methods and makes sure that all end users follow the network security policy. [Network Admission Control n.d]

NAC provides many different benefits, which is one of the reasons most businesses and companies prefer to deploy NAC solutions. NAC solutions are being deployed in almost every kind of organization, whether large or small. NAC provides a very strong and secure infrastructure to businesses and is therefore a top choice. Corporations and businesses hold very important and sensitive information that must not be leaked, and for this they want strong security policies and checks. By eliminating the risk of attacks, the productivity of employees is also protected. For these and many other benefits, NAC is regarded as an excellent choice for providing strong security to companies and organizations. [Neuwirth 2007]

How NAC can be used for providing Security

There are many security-based reasons why NAC can be a better solution, and these translate into ways of improving the security of a network. NAC does not only provide security; it also improves the efficiency of the company in which it is used. Indeed, it is fair to say that by using NAC solutions the efficiency of the company is improved even more than its security. The following are a few of the ways in which NAC provides security:

  • Force Compliance

A lot of time is spent by network administrators and Windows administrators working out how end users' devices can be forced to fall in line with the available antivirus updates, firewall settings and Windows patches. Installing antivirus software is vital to network security, but installing it is not enough on its own; it is also necessary to keep the antivirus software up to date.

The antivirus programs should also be able to perform virus scans, and all the latest patches should be applied. All these efforts are made by companies that run networks in order to ensure network security. Traditionally, none of these processes was automatic and everything had to be done manually just to improve security: the network administrator had to start a virus definition update and other such processes, and the rest depended on the end users, who could simply cancel an update if they did not want it. NAC is the solution to this problem, as it enables and enforces compliance with antivirus updates and all other such activities. [Davis 2007]

  • Quarantine

Apart from enforcing compliance across the network, it is also necessary that the network be able to detect and quarantine any device considered undesirable. If the network cannot detect unwanted devices that can create problems, a worm or virus can crawl into the system and cause damage. Here NAC provides a very important capability: the ability to quarantine any device. This is performed from a central point of the network, which prevents the spread of a virus or worm throughout the network. Centralization is also available in networks that do not contain any NAC solution, but centralization alone is not enough to make the network effective; network administrators and network teams will never be able to handle worms and viruses effectively using traditional manual methods. A NAC solution can not only detect and quarantine unwanted devices, it can also update virus definitions, and it checks every device for security before allowing it onto the network. With such security in place, viruses and worms cannot penetrate the network easily. If the NAC solution is designed and applied properly, worms and viruses will not be able to penetrate the system, and devices will only have access to the network resources and information they need. [Davis 2007]

  • Provides Access to the Guests

Many companies and businesses want to provide access to guests and vendors, but with full network security. Most companies are still struggling to find a solution to this problem, which is a comparatively big issue for companies using wireless networks. Several workarounds exist, including the deployment of loaner computers, the deployment of VLANs to give guests isolated access, and presentations that are entirely web-based so that they can be used from anywhere. With NAC deployed, however, there is no difficulty in providing guests with what they want; the guests simply have to be compliant with the network. This procedure is now automatic, whereas previously it was done manually. The manual procedures were not simple and sometimes took hours before a device could be given access to the network. In a manual process, although security was being provided to the network, it covered only the more obvious risks, while the less obvious risks were ignored. NAC provides a solution by making a guest device compliant with the network so that all kinds of risks are dealt with. [Davis 2007]

  • Risk Avoidance

It is obviously always good to avoid risks as much as possible, because this makes a system very secure and reliable. Apart from worms and viruses, many other risks are associated with daily tasks. It is therefore better to tie all the security policies, such as antivirus and firewalls, to the NAC. [Davis 2007]

Why use NAC?

Normal network security systems provide a way to check and log the people who are accessing the network, and a traditional network security system also lists the functions that users can perform. However, traditional network security does nothing to make sure that end users actually deploy the network security policies and act according to the rules specified by them. In traditional network security systems, it has often been noticed that some endpoints never seem to act according to the security policy that has been defined to keep the network secure. In other words, a network security policy exists, but it is not as effective as it should be to provide complete security. Because the security policy is not being followed properly in these networks, viruses, worms and other threats can attack the system. This is a deficiency of traditional security systems. Network Admission Control is designed to deal with this issue: it monitors the endpoints and checks whether they are following the security policy. NAC also provides many different services in a single system, which reduces the cost of implementing each service separately. NAC pays close attention to making sure that devices follow the security rules so that no virus, worm or other threat can enter the system and do damage, and it makes every user who tries to access the network compliant with it. Hence, in short, it can be concluded that NAC is a very strong network security solution and that all organizations should deploy it in their networks so that their information and resources are protected. [Network Admission Control n.d]

Functions of NAC

NAC is a complete network security solution that can perform many different functions at once. Each function has its own responsibility and benefits; a few of the most important functions performed by NAC are listed below:

  • The major function of NAC is to provide a security policy.
  • NAC is also responsible for defining the security policy for all the endpoints and other devices that exist on the network.
  • The policy rules and regulations are also configured by NAC.
  • It makes sure that the devices and endpoints follow the policy that has been defined for security.
  • It can identify suspected traffic and block it.
  • It is also responsible for generating alerts, reports and logs of the threats that have been found.
  • It also builds compliance information on a regular basis, recording which devices are following the security policy rules and which are not.
  • It can also bring users into compliance with the network.
  • NAC is able to monitor and check the compliance of an end user before making it part of the network.
  • NAC also provides the ability to restrict, quarantine, deny, ignore or permit any suspected or unsuspected traffic.
  • It is also used to detect threats, whether new or old, entering the system.

These are a few of the responsibilities of NAC, although there are many more that help keep the system secure. [Network Admission Control 2007]

Types of Security Checks provided by NAC

NAC continuously and regularly monitors and checks its devices to make sure that they are following the security policy rules and are compliant with the network. NAC devices and services differ from one another: different vendors provide different facilities, and companies buy the system that provides the facilities they need. The following checks are normally made by NAC systems to ensure security throughout the network.

  • It checks for software versions and service packs.
  • It applies patches for browsers and operating systems.
  • It also checks the browser settings and operating system configuration and makes sure that all settings and configurations are consistent.
  • The versions, configurations and settings of all the firewalls installed on the endpoints should be consistent.
  • All the signature files for the antivirus software should be updated and identical.
  • A log of the users should be created and updated regularly.
  • All new users or guests should be checked for compliance.
  • The list of viruses and threats should also be updated and provided to every antivirus installation.
  • NAC also checks and records the MAC and IP addresses of every device.

All these checks are performed by the NAC over every device that exists in the system as these checks are essential for providing high-level security. [Network Admission Control 2007]

Conclusion

This paper provides a detailed study of Network Admission Control (NAC). NAC can be described as a solution for network security. The trend of attacking systems has increased to a great extent, and organizations must equip their networks with the best security solution. NAC is the right choice in today's world of innovation and development; it would not be sensible to rely only on old, traditional methods of security and prevention. The traditional methods used antivirus software, firewalls, switches, routers and so on. These methods were suitable for defining security rules and policies for endpoint devices, for monitoring traffic over the network and for detecting threats, but they still had deficiencies. The biggest deficiency was that they never checked whether the devices present on the network were actually following the policies and rules defined for security purposes. NAC came forward with the solution to this problem, as it provides many different checks to make sure that the security policies are followed. NAC provides many different services in a single package, which reduces cost. The benefits of NAC solutions are extensive, and companies can use them without hesitation to provide strong security for their sensitive information. NAC solutions also provide guest access, and the best feature is that the guest device is also checked for compliance, a security capability that is not available in other security methods. NAC also makes sure that all antivirus software is updated with the latest viruses and threats that have been detected. NAC is no doubt the best solution for network security, and it is highly recommended that all companies and organizations interested in providing high-level security deploy NAC solutions in their networks. [Greene 2009]

References

  1. Davis, D., 2007. SolutionBase: Learn how NAC improves network security.
  2. Network Admission Control. (n.d.).
  3. Network Admission Control, 2007.
  4. Greene, T., 2009. NAC used for something other than what it was designed for.
  5. Jelen, T., 2003. Web.
  6. What is network security? (n.d.).

Research Critique of Management of the Difficult Airway

Introduction

This paper will critique "Prehospital Management of the Difficult Airway: A Prospective Cohort Study" by Keir Warner, Sam Sharar, Michael Copass, and Eileen Bulger, published in the Journal of Emergency Medicine, Vol. 36, pp. 257-265, 2009.

The purpose of the study is to define how to manage the difficult airway in the prehospital setting. Failure to establish a definitive airway is a leading cause of death when sufficient oxygenation and ventilation cannot otherwise be obtained. In such instances, death is avoidable: early airway management is critical in cardiac arrest and exhausted patients, but it is also significant in patients at risk of progressive loss of airway patency and in individuals at risk of aspiration, such as those with head or neck injuries. Oral endotracheal intubation (ETI) is the current standard for definitive airway management, and success rates in prehospital settings vary from 33% to 100% (Wang, 2006). This variability has been attributed to advanced life support (ALS) provider differences in the level of initial training, ongoing education, medical error, or access to neuromuscular blocking agents (NMBAs) (Bulger et al, 2007). Numerous studies in both the field and the Emergency Department (ED) have demonstrated improved ETI success rates with the use of NMBAs. Continuing skill maintenance may also be very important, as the frequency with which ALS providers perform airway procedures is significantly associated with ETI success rates (Garza et al, 2003).

The paper also discusses the patient factors involved. In addition to these ALS provider issues, many patients who need definitive airway access may be difficult to intubate due to anatomic irregularities, traumatic injuries, foreign bodies, inability to open the jaw, or insufficient muscle relaxation. Such difficult airway patients have a higher incidence of complications and an increased risk of death. If a patient cannot be successfully intubated in the prehospital setting, the ALS provider must either return to bag-valve-mask (BVM) ventilation or resort to a more sophisticated airway method. A number of advanced airway procedures have emerged to help manage these demanding cases, including surgical cricothyroidotomy, retrograde ETI, transtracheal jet ventilation (TTJV), and Eschmann stylet (gum elastic bougie)-assisted ETI. Lately, with the arrival of supraglottic airway devices such as the Combitube and the laryngeal mask airway, the role of prehospital intubation has been questioned, as success rates in prehospital ETI are extremely variable (Bulger et al, 2007). The aims of this study were to broadly assess a large cohort of patients undergoing prehospital endotracheal intubation with and without rapid sequence intubation (RSI), to specifically describe the incidence, presentation, and management of the difficult airway, and to examine the consequences of airway failure.

Study Design

The study design focused on a cohort of patients who underwent attempted oral ETI in the prehospital setting over a 4-year period. It was carried out in conjunction with the Fire Department Medic Program and nine hospitals within the city of Seattle. The protocol was reviewed and approved by the University Institutional Review Board (IRB), as well as by the IRBs of the other hospitals. Subjects were included if they were assessed by ALS providers and underwent attempted ETI in the prehospital setting. Patients were excluded if they underwent inter-facility transfer after previously failed airway management. Patients were divided into two groups: non-difficult airway, defined as successful endotracheal intubation within four attempts (two attempts per provider); or difficult airway, defined as more than four attempts at ETI or the need to attempt any advanced airway management procedure.

EMS System and Paramedic Education

This course requires 2500 hours of classroom, laboratory, and field practice for certification. All paramedics are taught to carry out rapid sequence intubation. In addition, all paramedics are required to recertify their airway management skills every 2 years in advanced airway clinics consisting of educational lectures by anesthesiologists, the medical director and trauma surgeons, and a surgical airway laboratory. A minimum of 12 simple ETIs per year is required to maintain qualification.

Airway Management Protocols

Throughout the study period, the EMS system used many different methods of airway management, including BVM with oral/nasal pharyngeal airways, endotracheal intubation, Eschmann-assisted intubation, retrograde intubation, TTJV, and surgical cricothyroidotomy. The use of RSI is permitted under the direction of the physician providing medical control. All medics are taught to use the paralytics succinylcholine, at a dose of 1-1.5 mg/kg for laryngoscopy, and pancuronium (0.1 mg/kg) for post-intubation paralysis during transfer. Prior to ETI attempts, every effort is made to optimize intubating conditions, including the use of head-elevated positioning (except in trauma) and cricoid pressure or external laryngeal manipulation. Each medic is permitted two attempts at intubation before moving on to the difficult airway algorithm.

Data Collection

Prospective information was collected for each eligible patient using a standardized airway management survey form completed at the end of the patient's prehospital care by the provider performing the airway procedure. This survey form detailed the number of intubation attempts, the sequence and outcomes of advanced airway procedures, and the factors contributing to a difficult airway. The medics documented their explanation of the features contributing to difficult ETI, and these were later classified by the abstractor. This information was then combined with the electronic database maintained by the Fire Department, which contains thorough prehospital information for all patient encounters. Integrating these two complementary datasets ensured that a prehospital questionnaire was completed for every patient who underwent attempted ETI. The Fire Department's electronic record also offers additional detail regarding demographics, mechanism of injury or illness, vital signs and initial Glasgow Coma Scale (GCS) score, use of NMBAs or sedatives, procedures performed, and transport destination.

All patients meeting the difficult airway criteria underwent hospital follow-up with complete chart review. The hospital follow-up was carried out by a trained chart abstractor. At set intervals throughout the study, the abstractor would travel to all the hospitals within the city and gather the essential data from the patients' charts. Hospital records were collected by means of a standardized form and included ED airway management, prehospital airway complications recognized after hospital admission, occurrence of shock and need for cardiopulmonary resuscitation (CPR), admitting/discharge diagnosis, and initial chest X-ray findings. Additional outcome measures included clinical diagnosis of aspiration and subsequent development of pneumonia, in-hospital death, presence and severity of neurologic deficit, and final disposition. Neurologic outcome was based on the judgment of the treating physician and the discharge location. Data were entered into an Excel database by a trained study abstractor.

Statistical Analysis

Comparisons of demographic, injury severity, and outcome data were performed using Student's t-test for continuous variables and the chi-squared test for categorical variables. Analysis was conducted using SPSS (SPSS 13.0; SPSS Inc., Chicago, IL). Significance was set at p < 0.05.
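For reference, these two tests compute the following standard statistics (shown here in their general textbook form; they are not values reported in the study). For two groups with means $\bar{x}_1$ and $\bar{x}_2$, sizes $n_1$ and $n_2$, and pooled standard deviation $s_p$, and for observed and expected category counts $O_i$ and $E_i$:

$$
t=\frac{\bar{x}_1-\bar{x}_2}{s_p\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}},
\qquad
\chi^2=\sum_i\frac{(O_i-E_i)^2}{E_i}
$$

A comparison is reported as significant when the p-value associated with the computed statistic falls below the chosen threshold, here p < 0.05.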

Results

The study found that between April 2001 and April 2005 there were 80,501 ALS patient contacts, of which 4114 underwent attempted ETI. Of these, 23 patients were excluded as inter-facility transfers with previously failed airway management, leaving a total of 4091 (5.1%) patients enrolled. Patients ranged in age from 1 to 107 years, including 49 children aged 1 to 14 years. Of these 4091 patients, 3961 (96.8%) underwent successful oral ETI within four attempts and comprised the non-difficult airway group, leaving 130 (3.2%) patients in the difficult airway group. The mean and median numbers of ETI attempts were 1.3 and 1, respectively, for the non-difficult airway group, compared with 5.4 and 5, respectively, for the difficult airway group. There was no difference in demographic variables between the two groups. Overall, 83% of patients were seen for medical complaints, with 17% treated for traumatic injuries. There was a somewhat higher percentage of trauma patients in the difficult airway group (20% vs. 15%), although this did not reach statistical significance. The study found no difference between the groups in the proportion of patients in cardiac arrest.

Use of Neuromuscular Blocking Agents

The study further established that 62.3% of patients underwent RSI using succinylcholine for paralysis. The most notable factor associated with not requiring RSI was the presence of cardiac arrest necessitating CPR, present in 86.1% of patients who did not need RSI but only 8% of patients who did (p < 0.001). Apart from those patients in cardiac arrest, the mean initial GCS score was also considerably lower in patients who did not undergo RSI (7.2) compared with those who did (8.7, p < 0.001). There was no significant difference in the proportion of difficult airway patients between those who received NMBAs and those who did not (3.4% vs. 2.8%, respectively).

Prehospital Difficult Airway Management

Of the 130 patients in the difficult airway group, 102 received more than four attempts at ETI and 28 went straight to an advanced airway technique. Of the 130 patients in the group, 59 were successfully intubated orally after more than four attempts, while 43 were managed with BVM ventilation for transfer to the hospital. There were 9 patients in whom definitive airway access could not be achieved and who could not be effectively ventilated using a BVM. In total, there were 40 attempts at advanced procedures, with variable success rates. Of these procedures, nasal and digital intubation had the worst success rates, at 0% and 14.3% respectively. The most successful of the advanced interventions was surgical cricothyroidotomy, with a success rate of 91%. Of the 130 patients in the difficult airway group, a secure prehospital airway was achieved in 78 (60%), for an overall success rate of 98.7% across all 4091 patients. In the post-intubation questionnaire, ALS providers were asked to identify factors that may have contributed to the difficult airway. The most commonly described features were an anterior trachea (39.2%), a small mouth (30%), and foreign body aspiration (27%).

Transport and Hospital Management of Difficult Airway Cohort

Of the 130 difficult airway patients, 27 died in the field and 10 were lost to hospital follow-up because of the difficulty of matching hospital and prehospital records, leaving 93 patients transferred to an ED with complete follow-up. Of these, upon arrival in the ED, 57 patients (61.3%) had a secure airway and 33 (35.5%) were effectively managed by BVM ventilation. Twenty-two patients (23.7%) were in shock (systolic blood pressure < 90 mmHg) and 12 patients (12.9%) were in full cardiac arrest. Of the 33 patients effectively managed by BVM ventilation during transfer to the hospital, 18 (54.5%) were orally intubated in the ED by direct laryngoscopy, 5 (15.1%) were orally intubated with the help of an Eschmann stylet, 3 (9.1%) were intubated with the help of a fiber-optic device, and 7 (21.2%) were not intubated and were instead managed using non-invasive techniques. Of the 9 patients in whom no definitive airway could be placed in the prehospital setting and who could not be effectively ventilated, 4 were found in medical cardiac arrest and were pronounced dead after initial laryngoscopy attempts and failed BVM ventilation, but before surgical airway attempts. The remaining 5 patients were considered failed airway patients; 3 of these were found in cardiac arrest and transferred to the hospital, where they were found to have suspected esophageal intubation and underwent subsequent ED ETI.

Complications of the Prehospital Airway Management

Among the total of 4091 patients with prehospital intubation attempts, 3 patients (0.07%) had an unrecognized esophageal placement of their endotracheal tube (ETT).

Two additional (pediatric) patients had an unintentional extubation on the way to the hospital and were reintubated in the ED. Eleven patients (0.3%) underwent prehospital surgical cricothyroidotomy during the study period. Of these, one patient died in the field and the remaining patients were transferred to the hospital.

There were two reported complications, occurring in 2 patients. One of these patients required operative revision for excessive bleeding and tracheal damage, and the other had sustained a gunshot wound to the neck and had poor ventilation, poor oxygenation, and ongoing CPR.

Hospital Outcome of Difficult Airway Cohort

Of the 93 patients transported to the hospital, 30 died during their hospital stay, with a third of deaths occurring in the ED and 60% in the Intensive Care Unit. Of these deaths, airway complications were judged as possible contributing factors in 6 patients. Seventeen patients had aspiration of either gastric contents or foreign body, diagnosed on arrival to the ED, with 88.2% occurring in the prehospital phase of care. Five of these patients went on to develop aspiration pneumonia. Of the 63 patients who survived to discharge, 79.4% returned to baseline neurologic status, leaving 13 patients with a persistent cognitive neurologic deficit. Only 2 of those with neurologic deficit were thought to be related to initial airway management, with the remainder attributed to the underlying disease process or associated injury. The pooled mortality for the difficult airway cohort, including prehospital and hospital deaths, was 43.8%.

Pediatric Intubation

During the 4-year study period, medics intubated 49 pediatric patients (aged ≤14 years). Most pediatric patients were seen for medical issues (n = 24, 49%), including 5 with severe respiratory distress (10.2%), 4 with respiratory arrest (8.2%), and 10 with cardiac arrest (20.9%), with the remaining medical complaints resulting from exacerbations of pre-existing medical conditions. Forty-one percent of pediatric patients were intubated for traumatic injury; the most prevalent mechanism was being struck by a moving vehicle (6 patients, 12.2%), followed by motor vehicle crashes (4 patients, 8.2%) and falls from height (4 patients, 8.2%). Eight percent of pediatric patients had environmental injury such as burns (2%) and drowning (6%). Overdoses, both accidental and with intent to self-harm, accounted for 6.1%. Rapid sequence intubation was used in 26 pediatric intubations (53%), with succinylcholine as the primary paralytic and atropine available for premedication. Sixty-one percent of patients were intubated on the first laryngoscopic attempt, with success rates rising to 92% by the third attempt. Three patients were judged to be difficult airways: one required six attempts at ETT placement and the other 2 had unintentional extubation after uncomplicated initial ETT placement. Both patients who were extubated had uncuffed endotracheal tubes placed. The overall difficult airway rate for the pediatric population was 6.1%.

Discussion / Conclusion

The study defined a prehospital difficult airway as one requiring more than four attempts at ETI or the use of any advanced airway management method. An earlier study by Bulger et al. (2007) had already indicated that ETI can be carried out in the prehospital setting with success rates as high as 98.2% and as low as 33%. The number of ETI attempts may also relate to the final patient outcome, as earlier hospital studies have shown a rising incidence of hypoxia with increasing laryngoscopic attempts (Wang, 2006). The study further reviewed the maximum of three attempts at oral ETI recommended by Wang (2006) as reasonable to accomplish prehospital ETI, as Davis et al. (2003) reported a remarkable increase in prehospital airway management success rates for patients with severe head injury (39% to 86%) with the addition of NMBAs for airway management. In the difficult airway group, 36% of patients received only BVM ventilation and were transferred to EDs with no major adverse consequences. On the other hand, if the patient cannot be intubated and also cannot be sufficiently oxygenated or ventilated with BVM ventilation, the situation may rapidly become fatal. For such cases during the study period, TTJV, retrograde ETI, and surgical cricothyroidotomy were available to ALS providers as rescue airway adjuncts to rapidly establish ventilation and oxygenation. Surgical cricothyroidotomy offers definitive airway access but is invasive and requires significant training and skill retention. Bair et al. report a surgical cricothyroidotomy rate of 1.1% in the ED and a prehospital cricothyroidotomy rate of 10.9%. In this study, the cricothyroidotomy rate was only 0.3%, suggesting that this procedure is rarely required in a prehospital ALS system with extensive experience and access to NMBAs to facilitate intubation. The most commonly observed drawback of cricothyroidotomy is the high complication rate, including excessive bleeding, aspiration, tracheal/cricoid ring fracture, damage to nearby anatomic structures, and false passage of the airway into the extra-tracheal space. This article review found an 18% complication rate for field cricothyroidotomy, which is similar to other studies (Bair, 2003).

This review of the study noted that 3 of the 4091 patients had unrecognized esophageal intubation, all of whom were in cardiac arrest at the time of ETI, making the colorimetric CO2 detector a less reliable indicator of endotracheal tube position.

As Katz and Falk (2001) reported, continuous expired CO2 (EtCO2) monitoring can be used to reduce esophageal tube placement and ETT displacement in the prehospital setting. Also, ETI has been shown to facilitate hyperventilation leading to cerebral ischemia, which is particularly detrimental in patients suffering from traumatic brain injury (Warner, 2007).

It is then concluded that research in this area is divided, with some studies showing increased survival in patients undergoing prehospital ETI by aeromedical crews and paramedics (Bair, 2003), while other studies have shown that prehospital ETI by paramedics is detrimental after brain injury. The paper concludes that, with respect to ED intubation and survival after traumatic brain injury, definitive securing of the airway with ETI increases survival.

Reference

  1. Wang HE, Yealy DM. (2006). How many attempts are required to accomplish out-of-hospital endotracheal intubation? Acad Emerg Med; 13:372-7.
  2. Bulger EM, Nathens AB, Rivara FP, MacKenzie E, Sabath DR, Jurkovich GJ. (2007). National variability in out-of-hospital treatment after traumatic injury. Ann Emerg Med; 49:293-301.
  3. Davis DP, Ochs M, Hoyt DB, Bailey D, Marshall LK, Rosen P. (2003). Paramedic-administered neuromuscular blockade improves pre-hospital intubation success in severely head-injured patients. J Trauma; 55:713-9.
  4. Garza AG, Gratton MC, Coontz D, Noble E, Ma OJ. (2003). Effect of paramedic experience on orotracheal intubation success rates. J Emerg Med; 25:251-6.
  5. Bair AE, Panacek EA, Wisner DH, Bales R, Sakles JC. (2003). Cricothyrotomy: a 5-year experience at one institution. J Emerg Med; 24:151-6.
  6. Katz SH, Falk JL. (2001). Misplaced endotracheal tubes by paramedics in an urban emergency medical services system. Ann Emerg Med; 37:32-7.
  7. Warner KJ, Cuschieri J, Copass MK, et al. (2007). The impact of prehospital ventilation on outcome following severe traumatic brain injury. J Trauma; 62:1330-8.

Extensible Business Reporting Language (XBRL)

Extensible Business Reporting Language, or XBRL, is an open standard that supports information modelling and the expression of the semantic meaning normally essential in business reporting. XBRL is based on XML. One of its important uses is to identify and exchange financial information, such as a financial statement. The standard concerns the communication of business and financial information: it is a language for the electronic communication of business and financial data, which is revolutionizing business reporting around the world (Introduction to XBRL).
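
To make the idea of tagging a financial fact concrete, the short Python sketch below builds a minimal XBRL-style instance fragment with the standard xml.etree.ElementTree module. The company taxonomy namespace, the context, the Assets element, and the figures are all hypothetical simplifications for illustration; a real instance document would reference a published taxonomy and declare its units.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified XBRL-style instance fragment; a real
# instance document would reference a published taxonomy and declare units.
NS = {
    "xbrli": "http://www.xbrl.org/2003/instance",
    "acme": "http://example.com/taxonomy",   # hypothetical company taxonomy
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

xbrl = ET.Element(f"{{{NS['xbrli']}}}xbrl")

# A context records which entity and which period the facts belong to.
context = ET.SubElement(xbrl, f"{{{NS['xbrli']}}}context", id="FY2023")
entity = ET.SubElement(context, f"{{{NS['xbrli']}}}entity")
ET.SubElement(entity, f"{{{NS['xbrli']}}}identifier",
              scheme="http://example.com/ids").text = "ACME"
period = ET.SubElement(context, f"{{{NS['xbrli']}}}period")
ET.SubElement(period, f"{{{NS['xbrli']}}}instant").text = "2023-12-31"

# A tagged fact: the number itself carries machine-readable meaning.
assets = ET.SubElement(xbrl, f"{{{NS['acme']}}}Assets",
                       contextRef="FY2023", decimals="0")
assets.text = "1500000"

print(ET.tostring(xbrl, encoding="unicode"))
```

Because the meaning travels with the number, any XBRL-aware application can locate and reuse the Assets figure without re-keying it by hand.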

XBRL allows the financial community to communicate in a worldwide language. It reduces the need to integrate disparate database applications for entity-wide settlement and consolidation purposes. The important factors in using XBRL are the efficiency of the process, the simplicity of the information, and the significant cost savings that come with distribution and analysis. In an organization, an important responsibility of management is to verify that the financial statements are correct and complete, and XBRL supports this duty. XBRL information is interactive because it standardizes the way internal and external reports are represented. Internal auditors contribute a great deal to the adoption of XBRL: they help identify reporting formats, regulatory requirements, and so on. In September 2008, a survey was conducted to examine internal auditors' contribution to XBRL. The important factors that affect the application of XBRL are the following:

  1. The movement from manual to computerized processes.
  2. The ability to access and integrate data more efficiently.
  3. Business rules that can be applied across a large collection of software applications.
  4. Lower-cost operating environments.

XBRL is also considered an important audit tool. It improves the estimation of risk, makes data access and analysis easier, speeds up continuous auditing and monitoring, and helps develop information about where audit resources should be directed. It also facilitates consumer-oriented reporting and the creation of customized dashboards for specific groups of information consumers (for example, the company audit committee, management, or external auditors) (XBRL as an Audit Tool and Aid).

XBRL business reporting

XBRL is an XML-based markup language used for sharing business and financial data electronically. It has gained support from the United States Securities and Exchange Commission (SEC) as well as from the European Parliament. XBRL aims to minimize costs by removing time-consuming and error-prone human interaction. XBRL tags increase the speed of data integration and transmission and reduce data redundancy and quality issues. Through the adoption of XBRL, financial institutions will be able to comply with existing and future regulations using convenient and affordable tools.

XBRL is also the name of a global non-profit consortium consisting of more than 550 major companies, organizations, and government agencies. The standard was developed with the intention of facilitating business intelligence automation through machine-to-machine data transmission and processing tools. It also facilitates on-the-spot delivery of data to various output formats. Through the incorporation of these technologies, business reports consisting of taxonomy documents and instance documents can be easily described.

XBRL, of course, is XML, and it has taken full advantage of all of the extensibility that XML has to offer to add a sophisticated set of features for handling metadata, business rules, content validation, expressing relations, advanced computations, and dimensional data. (Extensible Business Reporting Language (XBRL); overview for technical users by Liz Andrews).

Financial Reporting

IFRS financial statements are prepared with the object of revealing the true financial position of the company, together with its overall cash flows and financial performance, for stakeholders' decision-making purposes. The financial reports consist of five statements: the balance sheet, the income statement, the cash flow statement, the statement of changes in equity, and the notes to the financial statements. The balance sheet summarizes the financial position of the company at a specific date. The income statement presents the revenues earned and expenses incurred during a financial period; its net result shows the company's net profit or loss. The cash flow statement provides information on the ability of the company to generate adequate cash flows for its business operations. The statement of changes in equity explains the changes in shareholders' equity over the financial year. The notes to the financial statements are additional information prepared to explain specific items in the financial statements; a statement of the organization's internal accounting policies is also included here.

In contrast to traditional methods of financial statement presentation, XBRL aims to verify the data through an automated mechanism. It adopts advanced data exchange, integration, and analysis technologies for reporting, so credibility and transparency can be ensured at a higher level. With XBRL taxonomies, the individual financial disclosure requirements of companies can be effectively fulfilled. Through this, organizations can prepare financial statements on a more accurate and reliable basis without losing data integrity. A taxonomy consists of two distinct parts: concepts and relationships. The concepts part describes the XML constructs used in the resulting document; the relationships part describes the relationships between concepts and other resources.

Semantic expression is another advantage of XBRL. It can express semantic meaning through extended links, and multidimensional data representation is possible through this mechanism. Normalization is a further advantage: by applying a standard data format for the electronic transmission of data, XBRL creates industry standards, and the normalization process facilitates the verification of calculations and other relationships in the financial statements.
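
As a rough illustration of how such calculation relationships allow reported figures to be verified automatically, the sketch below checks a hypothetical balance-sheet identity over a handful of tagged facts. In actual XBRL the rule would be declared in a calculation linkbase of the taxonomy rather than hard-coded in application logic.

```python
# Hypothetical tagged facts as they might be extracted from an instance document.
facts = {
    "Assets": 1_500_000,
    "Liabilities": 900_000,
    "Equity": 600_000,
}

# A calculation relationship of the kind a taxonomy linkbase declares:
# Assets = Liabilities + Equity.
def balance_sheet_consistent(facts: dict) -> bool:
    return facts["Assets"] == facts["Liabilities"] + facts["Equity"]

if balance_sheet_consistent(facts):
    print("Balance-sheet identity holds for the tagged facts.")
else:
    print("Calculation inconsistency detected in the reported figures.")
```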

XBRL allows software vendors, programmers, intermediaries in the preparation and distribution process and end-users who adopt it as a specification to enhance the creation, exchange, and comparison of business reporting information. (Abstract: Extensible Business Reporting Language (XBRL) 2.1 RECOMMENDATION – Corrected Errata – 2005).

The XBRL Financial Statement Repository stores the five financial statements prepared using XBRL and places them on a website where stakeholders can access them. The repository also provides test data; for example, if you wanted to prepare extraction and comparison prototypes to test the XBRL concept, you could use this data.

As an electronic format, XBRL facilitates the presentation of financial statements, performance reports, accounting records, and other financial information in a simplified manner through software programs. It is royalty-free, having been developed collaboratively by an open consortium with the intention of reducing the cost of publishing financial information and presenting it in a more understandable manner. Financial information can be transferred quickly and easily through this tool. The duplication of financial data is avoided, and thus errors and inconsistencies are effectively prevented.

XBRL enhances the usability and transparency of financial information reported under existing accounting standards, simplifies disclosure, and allows companies to communicate financial information more readily via the Internet. (What is XBRL? An introduction to XBRL).

XBRL-tagged data makes the data searching process quicker and easier.

The benefits of XBRL:

  1. Accountants get adequate and reliable data on the financial performance of the company with reduced effort and time. It reduces the cost of collecting and analyzing data, and the software applications improve efficiency.
  2. The creditors of the company can obtain reliable information on the repaying capacity of the company more quickly, and thus faster decision-making is possible.
  3. Companies are able to automate their data collection, and thus errors can be identified and corrected on a real-time basis.

PDF-based financial report presentation is inflexible: users have difficulty locating and extracting the required data from such statements. XBRL is free from these limitations, and through it a common taxonomy can be adopted by the company.

The minimal general and systems controls that Great Lakes has to adopt to support the implementation of XBRL:

The implementation of XBRL on financial reporting has potential impacts on data quality. They are discussed below.

Intrinsic data quality: The intrinsic data quality of XBRL is limited in nature. The verification of the accuracy of input data through XBRL is subject to limitations; it is not capable of identifying accountants' manipulation of asset and revenue values.

Accessibility data quality: Even though XBRL facilitates easy accessibility of the information in the financial reports, its security cannot be ensured at the appropriate level.

Contextual data quality: The schema and presentation linkbases applied in XBRL ensure contextual data quality, so complete and relevant information can be provided.

Representational data quality: Through standard browser technology, concise and consistent information can be accessed by the stakeholders.

Before the implementation of XBRL in the organization, the return on investment from the project, its relevant costs, and its long-term benefits have to be identified and analyzed. The proper conversion process also has to be identified, and proper training should be provided to employees for an effective conversion to XBRL.

Great Lakes Inc. has to take adequate control measures to eliminate the limitations of the XBRL language on data quality in order to support its effective implementation.

References


What is XBRL? An introduction to XBRL.

A Unique Bag with the Sun Inside

Innovative technology is the main characteristic feature of the modern world. Having reached high levels of technological progress, people continue to think about further innovations which, first of all, may save natural resources, second, should not be very expensive, and third, should be useful. Scientists all over the world work on making people's lives easier, and the innovations they offer may help. An innovative product should be small and useful; otherwise, the customer is not going to be interested in purchasing it.

Scientists have already proved that the power of the sun is the greatest energy source on Earth, and solar batteries are already in use. The product offered for manufacture is a bag that collects solar power and holds several charges inside. The product is planned to play the role of an accumulator: the bag may be charged with solar power during the day and, thanks to the stored charge, work at night. Its small size allows it to be taken everywhere. The expenses for its development are not very high, and with a large advertising campaign drawing attention to the use of free natural resources and environmental protection, it will be easy to promote the product and earn money on the turnover. The bag is going to be made of metal, so it will be long-lasting, and at the end of its life it may be recycled. The sun bag is very easy to use, as the only thing necessary is to place it in the sun for charging and plug the cord into the device which needs electricity.

There is a tendency toward traveling now, and more and more people like places which are far from civilization, where they can rest from busy city life. The problem is the complete absence of electricity in such isolated places, and modern people cannot live without creature comforts, so a product which can provide people with power and remain compact may be in great demand. The main purpose of the sun bag is to serve as extra power during travel over long distances, especially if the place is uninhabited. Radios, mobile phones, and other devices that people cannot live without, even while trying to relax from busy civilized life, all need power. The offered sun bag is the main solution to this problem.

Unfortunately, the offered sun bag, like any other product, has its disadvantages, though they are not as numerous as its advantages. The only disadvantage of the sun bag is the impossibility of using it during rainy days: in a rainy season, when the sun does not come out for several days, the sun bag's accumulator will not be able to work, as its charge is not large because of its small size.

So, the unique sun bag is the best option for power during travel to uninhabited places; moreover, it is going to be inexpensive, small, easy to use, and very useful. Even its disadvantage, the impossibility of charging during rain, may become an advantage, as people may take a complete rest from civilized life.

Characteristics of the Web 3.0

Discussions about web 3.0 first surfaced on the internet no earlier than three years ago, and even then there were many different versions predicting what web 3.0 would represent. Three years on, the situation has not changed, except that new versions may have been added. From a personal perspective, the situation with web 3.0 might seem better than with web 2.0, in the sense that the latter is still being debated in terms of its definition, while web 3.0 is already taking a certain form. This paper is a discussion of web 3.0 and its prospects, analyzing the question of whether its currently known characteristics entitle it to carry the number 3.0 in its designation.

First of all, without going into details, it should be established what web 3.0 is all about and in what way it will differ from its predecessor, web 2.0. The main differences can be summarized through the terms semantic web, i.e. web content generated by semantic technologies (Hoover, 2009), where computers will be able to recognize and read web pages through metadata, and a set of standards that turns the Web into one big database (Metz, 2007).
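
As a simplified illustration of the kind of machine-readable metadata the semantic web relies on, the snippet below parses a small JSON-LD-style description that a page could embed alongside its human-readable content; the vocabulary and field values are purely illustrative.

```python
import json

# A simplified JSON-LD-style block of the kind a page could embed so that
# software, not just people, can understand what the page is about.
page_metadata = """
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example story",
  "datePublished": "2009-05-01",
  "about": "web 3.0"
}
"""

data = json.loads(page_metadata)
# A crawler or aggregator can now group or rank the page by its meaning,
# not by keyword matching alone.
print(data["@type"], "-", data["headline"], "(about:", data["about"] + ")")
```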

In general, the idea seems promising; for example, it is known that Google News is already using similar technology to publish stories ranked by computers as the best (Hoover, 2009). Thus, it is a set of technologies and languages which will be available for designers to create websites already marked up with specific metadata. Nevertheless, taking a longer view, what was the main feature of web 2.0 that distinguished it from web 1.0? It was the ability of users, through back-end technology, to produce content and manipulate the links between their materials and the materials of other users. In that regard, it might be assumed that such freedom was lacking to the extent that in the next generation part of this ability has been transferred to the machines.

Accordingly, the difference might be seen in that web pages will include metadata defined by humans, and thus the technology will be vulnerable to manipulation. The difference will be that the content will be manipulated by web creators and professionals rather than by users. In such a way, it might be assumed that this step was taken due to a certain boom in web 2.0 projects that were of low quality, many of which were used for spamming. Nevertheless, there might be some skepticism regarding the amount of effort needed to re-encode existing web pages, where a complete reannotation of the Web is a massive undertaking (Metz, 2007).

Additionally, it might be argued whether the new addition, in terms of metadata, is worthy of a new number in the web 3.0 designation. Taking into consideration that the points of contrast that distinguish web 3.0 from 2.0 are almost the same as those that distinguished 2.0 from 1.0, it is more logical to call the new version the Semantic Web rather than web 3.0, which implies a more distinguishable step forward (O'Reilly, 2007).

It can be concluded that web 3.0 has prospects, at least on paper and through the implementations of such web giants as Yahoo, Google, and Microsoft. With massive implementation, the results might vary, not to mention that they will take time. Whether the specifications deserve the number 3.0 or not is a debatable issue, especially considering that the definition of web 2.0 still varies across different resources. The positive thing is that the implementation of new technologies does not eliminate the possibility for the older ones to exist, and in that regard the designation might be revised in the future, perhaps resulting in a web 2.5 concept.

References

Hoover, L. (2009). Semantic technology pulls a gun on the blogosphere. Computer World. Web.

Metz, C. (2007). PCMAG.com.

O'Reilly, T. (2007).

Information Security for Astra-World

Overview

As information becomes one of the main assets of modern companies, the influence of information technologies on business processes is constantly increasing. Accordingly, the requirements for the level of security of the information system are also increasing, along with requirements for the safety of data and its accessibility. In that regard, our company, IS Consulting, provides a full range of information protection services and a wide spectrum of solutions for securing information systems of any complexity.

At the current time, when the procedural, technical, and technological aspects of operating a company's information security systems demand a high level of specialist preparation, it is economically effective for many companies to use the services of organizations that act as system integrators. This is not necessarily limited to episodic or one-time consultation services, such as the investigation of information security incidents. In that regard, IS Consulting, with a large staff of highly qualified experts in the field of information security along with all the necessary technical means and efficient procedures, provides a wide spectrum of consultation services in the field of information security.

The Organization's Security Needs

Astra-World is a theme park located in Orlando, Florida, built around a space theme. The park has a staff of 150 employees in roles varying from service duties to managerial and administrative positions. The park's information system includes a subscription system, in which customers can buy a monthly or annual subscription to visit the park and which contains subscribers' personal information. Additionally, the park has a network holding a database of the credit cards customers used to pay for their subscriptions.

The park also has a communications system that connects the park's offices, as well as a surveillance system located throughout the whole park. After several incidents, such as attacks on the park's information system, which brought down the database, attacks on the corporate website, and the theft of customers' credit card information, the park's management decided to implement a new information security system.

The security issue has also generated controversy, with several customers filing complaints regarding their privacy after several employees used the surveillance system to record videos of visitors and publish them on various websites. The company does not have a formal policy regarding information security or an assurance program. Astra-World has asked IS Consulting to develop a formal information security system that will deal with the aforementioned issues effectively. Accordingly, the needs of the organization can be summarized as follows:

  • A protected system of authorization – as dishonest employees make up 13 percent of information protection problems (Peltier, 2002, p. 8), this issue is particularly important for theme park management.
  • Protection from hackers – the term refers to those who break into computers without authorization or exceed the level of authorization granted to them (Peltier, 2002, p. 9).
  • Development of a security policy for users of the park's information system – a protection policy is the documentation of enterprise-wide decisions on handling and protecting information (Peltier, 2002, p. 9).
  • A system of encryption for important data in the information system.
  • Trained personnel able to manage, update, and upgrade the infrastructure of the security system as needed.

Information Security Roles and Titles

Information security management can be defined generally as a complex of measures directed toward protecting the informational assets of the company. In that regard, the informational assets of Astra-World include any digital information stored or exchanged within the company's information system, such as customer data, credit card information, surveillance records, corporate portal data, financial information, or any other documentation exchanged within the company's business processes. Thus, the main point of information security management is its complex approach: solving a single issue in isolation, whether technical or organizational, will not solve the problem.

The complex approach implies three basic elements of security, i.e. access control, authentication, and accounting (Dhillon, 2001). Additionally, one of the objectives, and a necessary enforcement mechanism for the security system, is a security policy focusing on confidentiality, integrity, and availability. The security policy, as defined earlier and adapted to the needs of Astra-World, should focus on the functional aspect of the system, the physical components of the system, the procedures to protect the system, and the organizational aspects of the system, i.e. managing the balance between the accessibility of the system through the managerial and employee hierarchy and its operational functionality (Dhillon, 2001, p. 10).

The roles and the positions within the security system should be divided as follows:

  • Chief Security Officer (CSO, or CISO) – in addition to reporting to the company's top computing executives, the CSO directs the information security department to examine existing systems and discover information security faults and flaws in technology, software, and employees' activities and processes. In general, depending on the financial limitations of the company, the information security department can be positioned within the company's IT department (Whitman & Mattord, 2005). Additionally, the CSO develops the budget for the system, works on strategic plans, and drafts or approves policies.
  • Information security consultant – mainly an outside position, this function is administered by IS Consulting.
  • Information Security Administrator – the functions of security administrators might be combined with those of the IT department administrators, in terms of network administration responsibilities. Other administrator responsibilities include reporting incidents to the Chief Security Officer, supervising technicians, and monitoring the performance of the company's information system.
  • Information Security Technician – the main functions include technical configuration (e.g. firewalls, software, IDSs), troubleshooting problems, and coordinating with systems and network administrators (Whitman & Mattord, 2005, p. 479). In line with the company's specifications, these functions include controlling and configuring access for different systems, such as the websites, internet access, administering control, etc.

Threat Analysis and Risk Assessment

Before implementing any solutions regarding the organization's information security strategy (in both the long and short term), a risk assessment should be conducted. As long as the company holds information that represents value to the company, its competitors, or random hackers, the company runs the risk of losing that information. The function of any security control mechanism is to limit such risk factors to previously established levels. This holds for protection policies as well: policies are control mechanisms for existing risks, designed and developed as a solution to both existing and potential risks. Thus, an all-around risk assessment is among the first stages of the policy formation process; the assessment should identify the weak spots of the system and should define future goals and methods.

The risk assessment process will be conducted by both external agents, i.e. IS Consulting, and internal agents; in the case of the present company, this work will be done in cooperation with the company's existing IT department.

The methodology for assessing risks and threats consists of identifying four aspects: assets, threats, risk level, and selected possible controls (Peltier, 2005, p. 44). In that regard, the table of risk and threat assessments is as follows:

Assets, threats, probability, impact, risk level, control, and new risk level:

Physical assets

  • Surveillance records
    Threat: the possibility of the records being intentionally copied and published by employees of the company.
    Probability: Medium. Impact: High. Risk level: High.
    Control: accessibility control; policies and standards. New risk level: Low.

  • Computer hardware (servers, desktop machines)
    Threat: the possibility of the hardware malfunctioning due to an attack or a virus, terminating the functioning of one or several departments.
    Probability: Medium. Impact: Medium. Risk level: High.
    Control: recovery control. New risk level: Medium.

Logical assets

  • Customers' personal and financial data (credit cards)
    Threat: theft of customers' information by external or internal agents (employees or hackers).
    Probability: High. Impact: High. Risk level: High.
    Control: encryption; security application architecture. New risk level: Low.

  • Corporate website
    Threat: causing the website to malfunction.
    Probability: Low. Impact: Low. Risk level: Low.
    Control: no action needed at this time. New risk level: Low.

  • Internal documentation
    Threat: theft of company documentation by internal or external agents for the purpose of selling it to competitors.
    Probability: Medium. Impact: High. Risk level: High.
    Control: intrusion detection; policies and standards; secure communication plans. New risk level: Low.
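
The probability/impact scoring behind the table can also be expressed as a small rule so that new assets are rated consistently; the mapping below is an assumed illustration that happens to reproduce the rows above, not the assessors' formal methodology.

```python
# Assumed ordinal scale and a simple probability x impact rule; the rule is
# chosen so that it reproduces the ratings in the table above, but the
# assessors' formal scoring method may differ.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_level(probability: str, impact: str) -> str:
    score = LEVELS[probability] * LEVELS[impact]
    if score >= 4:
        return "High"
    if score >= 2:
        return "Medium"
    return "Low"

# Surveillance records: Medium probability, High impact -> High
print(risk_level("Medium", "High"))
# Corporate website: Low probability, Low impact -> Low
print(risk_level("Low", "Low"))
```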

Policies and Procedures

The final draft of the company's policy and procedures will be revised together with senior management; at the present time, an outline of the contents of the policy and procedures can be summarized in the following points.

  • Strict access to the company's information should be based on the direct functions of the users, with restrictions on their level of access. Implementation will require setting up a hierarchical access control system, with documentation in which logs of user accesses will be stored and analyzed when necessary (a minimal illustration follows this list). The employees are held accountable for all actions carried out under their user IDs (Tipton & Krause, 2007, p. 466), taking full responsibility for securing their machines when away from their working place.
  • Employees will have physical authentication cards, which will control access to the various facilities of the company. Different cards will correspond to different levels of access. When accessing a facility with a personal card, employees are held responsible for any physical or logical assets in the facility until they exit it.
  • The internet activity of employees is controlled within the working place, where each employee should use the internet through user IDs and passwords provided by the administrators. Employees are prohibited from downloading and installing any external software from the internet on their own. Additionally, it is prohibited to exchange any electronic correspondence unrelated to the working environment.
  • Any information created or used in support of the company's business is corporate information owned by the company and considered one of its assets.
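
The following minimal sketch illustrates the hierarchical access control and access logging described in the first policy item above; the role names, levels, and resources are placeholders rather than Astra-World's actual scheme.

```python
from datetime import datetime, timezone

# Placeholder role hierarchy and resource levels; the real scheme would
# mirror the park's managerial and employee structure.
ROLE_LEVELS = {"technician": 1, "administrator": 2, "cso": 3}
RESOURCE_LEVELS = {"surveillance_records": 3, "customer_db": 2, "intranet": 1}

access_log = []  # in practice written to protected storage, not kept in memory

def request_access(user_id: str, role: str, resource: str) -> bool:
    """Grant access only if the user's role level meets the resource level."""
    allowed = ROLE_LEVELS.get(role, 0) >= RESOURCE_LEVELS[resource]
    # Every attempt is logged under the user's ID, so the employee remains
    # accountable for actions carried out under that ID.
    access_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "resource": resource,
        "allowed": allowed,
    })
    return allowed

print(request_access("emp042", "technician", "surveillance_records"))  # False
print(request_access("emp007", "cso", "surveillance_records"))         # True
```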

The outlined policy issues will be formalized in official documents after they have been revised and approved by the company's CEO.

Contingency Plan

Contingency can be defined as a coordinated strategy involving plans, procedures, and technical measures that enable the recovery of information technology (IT) systems, operations, and data after a disruption (Tipton & Krause, 2007, p. 1603). The plan includes the preventive measures that should be implemented so that information is not lost. In the case of Astra-World, the plan includes implementing common measures such as providing uninterruptible power supplies (UPS), backup generators, and emergency master switches for the system.

Additionally, the system should have backup data storage that regularly stores the data from the main system. The plan should also cover policies and regulations for all business units within the organization on how to react in case of an emergency, including key people, standard procedures, and a recovery timeline (Tipton & Krause, 2007).

Security Education, Training, and Awareness Program (SETA)

A Security Education, Training and Awareness (SETA) program can be defined as an educational program designed to reduce the number of security breaches that occur through a lack of employee security awareness (Hight, 2005). In that regard, SETA can be considered a preventive measure that involves human resources in the development of the information security program, rather than relying on technology alone.

The company's plan will be established to raise the general awareness of employees regarding the security aspects of the company, as well as to raise their general knowledge of IT basics. Accordingly, the program will consist of two weeks of seminars and training, conducted by IS Consulting three days a week, for a total of six seminars. Employee participation will be obligatory, and each training session will be held in the company's main conference hall, after work, for 60 minutes. The outline of the seminars will be as follows:

  • Day one – assessment of the skills of the employees, as well as an explanation of their role in the area of information security.
  • Day two – individual workplace security management, including such procedures as protection from malicious software and viruses, password change, password storage, and an overview of potential risks.
  • Day three – access to facilities: a lecture on the basic issues to examine when entering a facility or workplace with a personal ID card, and the standard procedures when exiting.
  • Day four – review of the policies on information security.
  • Day five – overview of the most common security errors made in the workplace, e.g. leaving the workplace unsecured.
  • Day six – summing up and an assessment test.

References

Dhillon, G. (2001). Information security management : global challenges in the new millennium. Hershey, PA: Idea Group Pub.

Hight, S. D. (2005). Info Security Writers.

Peltier, T. R. (2002). Information security policies, procedures, and standards : guidelines for effective information security management. Boca Raton: Auerbach Publications.

Peltier, T. R. (2005). Information security risk analysis (2nd ed.). Boca Raton: Auerbach Publications.

Tipton, H. F., & Krause, M. (2007). Information security management handbook (6th ed.). Boca Raton: Auerbach Publications.

Whitman, M. E., & Mattord, H. J. (2005). Principles of information security (2nd ed.). Boston, Mass.: Thomson Course Technology.

Embedded Blue Wireless Serial Port

Introduction

EmbeddedBlue is a wireless serial port that can communicate easily with a range of standard Bluetooth devices such as cell phones, PDAs, and PCs. No other external components are required for this device. It can be used as a standalone cable-replacement solution that can be controlled by a processor. This profile is the easiest and most popular way of establishing a connection; once the connection is established, it behaves like a wired connection. A simple serial UART is used for communication. The main features of EmbeddedBlue are that connection establishment is very simple and that its FHSS technology in the 2.4 GHz band ensures high quality, reliability, and resistance to interference. Its supply voltage range is 2.2-4.2 V and its current consumption is low. It has a data throughput of 320 kbps. It is fully Bluetooth qualified (v2.0 + EDR). It is used in place of RS-232 data cables for sending data over short distances. Important applications of EmbeddedBlue include medical equipment, POS systems, telemetry systems, industrial automation, barcode and RFID scanners, lighting control, and robotics.

How the chip is fitted

The diagram shown below is the pin-out diagram of the EmbeddedBlue eb101 with the RF pad pinout. It is a 34-pin IC (EmbeddedBlue eb101 2009, p.3).

Device Pinout Diagram

In this pin-out diagram, the pins RESET, STATUS, BREAK SWITCH, IND_LED, HOST_WAKE, WAKEUP, and RF are the radio and control pins. IND_LED and HOST_WAKE are CMOS outputs and WAKEUP is a CMOS input pin. UART_CTS, UART_TX, UART_RTS, and UART_RX are the UART (Universal Asynchronous Receiver/Transmitter) pins. The other pins are voltage supplies, ground, etc.

Working description

EmbeddedBlue operates mainly in two modes: easy connect mode and command mode. When the system is operated as a simple cable replacement, it is in easy connect mode. In this mode of operation, the module acts as a simple cable-replacement link between two EmbeddedBlue serial devices, or between one EmbeddedBlue serial device and another Bluetooth device fitted in a mobile phone, PC, etc. No extra configuration is needed. The EmbeddedBlue serial devices can be controlled by a set of serial commands. To set up the link, hold down the SW1 button of the eb501 serial adapter and apply power until the LED turns on; the LED should remain on until setup is complete. Then hold down SW1 on the second adapter and apply power to it; when its LED turns on, release the button, and the LED should remain on until setup ends.

In command mode, several functions provide programmatic control over the module. This mode is used when more functionality is needed than easy connect mode provides. In this mode, after data is transmitted, an acknowledgment string is returned. After all commands have been given, the eb101 radio is disconnected and switched back to command mode using the break control lines.
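
A rough sketch of driving the module from a host in command mode is given below, using the third-party pyserial package. The port name, baud rate, and the command string are placeholders only; the actual eb101 command set and UART settings must be taken from the A7 Engineering documentation.

```python
import serial  # third-party pyserial package: pip install pyserial

# Placeholder settings; consult the A7 Engineering documentation for the
# actual UART baud rate and command syntax of the eb101.
PORT = "/dev/ttyUSB0"
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=2) as uart:
    # In command mode, each command sent over the UART is followed by an
    # acknowledgment string from the module.
    uart.write(b"<eb101 command goes here>\r\n")   # placeholder, not a real command
    ack = uart.readline().decode(errors="replace").strip()
    print("module replied:", ack)

# Once a connection is established, the link behaves like a wired serial
# cable: bytes written to the UART simply appear at the remote end.
```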

Conclusion

The importance of EmbeddedBlue now extends beyond the medical field to various other areas, and the use of such devices is very common. The main applications are medical applications, POS systems, telemetry systems, industrial automation, barcode and RFID scanners, lighting control, and robotics. In all these applications, EmbeddedBlue provides Bluetooth connectivity without requiring detailed knowledge of Bluetooth itself. Engineers, educators, and others benefit from this system.

Reference List

EmbeddedBlue eb101: OEM Bluetooth serial module: device pinout diagram 2009, A7 Engineering. Web.

Automatic Voltage Regulators and Stabilizers

Introduction

An automatic voltage regulator operates in such a way that the output voltage remains constant irrespective of the load, even in situations where the input voltage keeps varying. It uses electrical signals to regulate both AC and DC voltages, to convey information, and to control energy flow in various systems. The actual input voltage may vary, but the load voltage remains constant. The difference in voltage between the two can then be amplified to regulate an electrical system without significant surge-current damage to the circuit (Patchett, 53).

Methods by Which Electrical Signals Convey Information

Voltage regulation allows an operational amplifier to be used to transfer the regulated voltage as signals that track the load changes within the minimum and maximum voltage ranges (Meriep, 30). The electrical signals can then be amplified and converted to sound or data. The amplifier is then linked to the communication circuits, with reliable saturation and proper coupling to the integrated circuit, using the regulated signal.

Methods by Which Electrical Signals Control Energy Flow

Electrical signals are equally applicable to regulating the speed of electric power generation in generators. This uses both linear and non-linear algorithms as a function of the amount of power required to sustain a particular load (Patchett, 70). The speed is determined by the load voltage requirements, which should not alter the input voltage in such a control system. Non-linear systems are particularly difficult to regulate since they are always changing.

A computerized system is therefore necessary for optimizing an energy control system (Patchett, 94). Controllers can be used in such non-linear systems. Converters are then used to network the DC and AC energy control systems in a circuit that combines the capacitor, engine, battery, and load. The converter allows the angular momentum of the engine to be controlled during electrical power generation. The voltage generated in alternating current generators varies with the speed at which the prime movers rotate the conductors cutting through the magnetic fields (Meriep, 45).

As a result, the direct current voltage is determined while the various loads are handled without further adjustment of the input voltage. As the speed of the engine varies, the decoupling current in the converter keeps the output current from the generator to the load constant. This current provides for control of the load voltage under a fluctuating speed of electricity generation.

The alternating current can then be converted into direct current using a rectifier circuit, so that it can be used by the load at the end of the circuit (Meriep, 62). An interface of this controller network therefore provides control of the energy flow from the generator while the output voltage is maintained constant. The whole system is configured so that the difference between the output and input voltage is properly compensated through a feedback path. As such, the system is not disturbed by changes at the load or any other system alterations, and performance remains excellent.
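
A very small numerical sketch of this feedback idea follows: a controller accumulates the error between a set point and the measured output and drives an illustrative first-order plant, so the output settles back to the set point even after a load disturbance. The gains and the plant model are arbitrary and are not a model of any particular regulator.

```python
# Minimal closed-loop illustration: the controller integrates the error
# between the set point and the measured output voltage, so the output
# settles at the set point even after a load disturbance.
SETPOINT = 230.0   # desired load voltage (illustrative)
GAIN = 0.5         # controller gain (arbitrary)

output = 200.0     # initial output voltage
control = 0.0      # field/control voltage driven by the regulator

for step in range(60):
    disturbance = -10.0 if step == 30 else 0.0   # sudden load change
    error = SETPOINT - output
    control += GAIN * error                      # accumulate (integrate) the error
    # Illustrative first-order response of the generator/exciter:
    output += 0.2 * (control - output) + disturbance

print(round(output, 1))   # settles close to the 230 V set point
```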

Operation of a thyristor

A thyristor is a semiconductor rectifier used to control current flow. It allows current to flow when a control voltage is applied at its gate terminal (Patchett, 101). Upon removing the gate voltage, the thyristor does not turn off; its strength lies in its ability to provide a steady and definite current flow. Only when the forward current drops so low that it reaches zero does the thyristor turn off. A thyristor is best applied with AC voltages; it can only be used with DC where there are safety protections. The most common application of the thyristor is in AC circuits (Meriep, 78).

The thyristor comes in handy in AC circuits because the forward current in such circuits keeps fluctuating and repeatedly drops to zero. These repeated cycles mean that the gate has to be triggered in each cycle to turn the device on again. This is the major function played by the thyristor: by controlling when each cycle begins to conduct, it regulates the current flow and thus acts as the power control.

The cycles alternate between negative and positive. If the thyristor is turned on just as the positive voltage excursion is beginning, it conducts for nearly the entire half-cycle and maximum power is available to the load. Conversely, if the thyristor is turned on when the positive excursion is almost at an end, only a small amount of forward conduction occurs, which means that minimum power is available to the load (Meriep, 99). The best results are obtained when two thyristors are used back-to-back, so that the current conducted in each direction is fully controlled.
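
To make the relationship between firing angle and delivered power concrete, the sketch below numerically integrates one half-cycle of a sinusoidal supply into a resistive load for several firing angles. The supply amplitude and load resistance are arbitrary illustrations, and a practical circuit with back-to-back thyristors would control both half-cycles.

```python
import math

VM = 325.0   # illustrative peak supply voltage (volts)
R = 10.0     # illustrative resistive load (ohms)
N = 10_000   # integration steps per half-cycle

def average_power(firing_angle_deg: float) -> float:
    """Average power over a half-cycle when conduction starts at the firing angle."""
    alpha = math.radians(firing_angle_deg)
    total = 0.0
    for i in range(N):
        theta = math.pi * (i + 0.5) / N
        if theta >= alpha:                 # the thyristor conducts only after firing
            v = VM * math.sin(theta)
            total += (v * v / R) * (math.pi / N)
    return total / math.pi                 # average over the half-cycle

for angle in (0, 90, 150):
    print(f"firing at {angle:3d} deg -> {average_power(angle):7.1f} W average")
# Firing early gives nearly the whole half-cycle of conduction (maximum power);
# firing late leaves only a small slice of the waveform (minimum power).
```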

System components

This technology uses an excitation system that supplies the energy used in the voltage regulation circuit. The rotor used in power generation requires energy, which the exciter derives from the surrounding magnetic field. The resulting voltage is then shared between the original voltage regulator and the excitation system. The incoming voltage is directed to the original regulator, while the current generated in the field produces additional voltage for driving the rotor through excitation. The excitation circuit is supplemented with voltage from the input so that it can operate through conductors linked to it from the original regulator (Meriep, 115).

The conductors in this second circuit, which regulates the rotor, preferably form a closed circuit, unlike the open circuit associated with typical voltage regulation linking the load to the mains supply. The output voltage from the generator is coupled to the mains voltage, together with the resulting field voltage, through the exciter circuit. Either the mains voltage or the generator voltage can provide the input signal to the exciter. The generator voltage is preferably used to ensure safe operating conditions that minimize the effects of surge currents that could originate from the mains during fluctuations in the speed of voltage generation (Patchett, 120).

The excitation circuit uses signal transducers with four channels to maintain a stable flow of current or voltage to the output. The transducers are networked in a bridge circuit, which allows a constant output current or voltage after generation. As a result, no serious fluctuations are recorded in the overall circuit, and proper compensation is provided.

Works cited

Patchett, G. Automatic Voltage Regulators and Stabilizers. Pitman, New York, 1995.

Meriep, K. Information and Energy Control Systems. McGraw-Hill, New York, 2005.