Comparison of UML and ModelScope

Introduction

This project concerns the development and application of a computer-based library system. Its implementation is important because it provides ease of access and drastically reduces the time wasted in a paper-based manual library system. The system is modeled using two different modeling languages, namely the Unified Modeling Language (UML) and ModelScope. The two library system models developed are then compared on their merits and demerits.

The Unified Modeling Language is a graphical modeling notation used in the software engineering field. The main objective of the library system implementation is to support readers as well as management in the process of borrowing and reserving books. The service provided by this system is useful to both readers and administrators: the readers consume the services that the administrators provide. The system is an interface between the users and the administrators that supports the operation of a library.

The steps and procedures for developing the models are designed to reduce redundant work. The diagrams at each step of UML modeling are developed from the previous diagrams, which gives the work a natural flow, and UML therefore presents itself as a well-defined, structured modeling method. ModelScope, by contrast, requires a strong knowledge of the Java language; its modeling is done using the various states and state transitions of the library functions.

This entails a requirement for real-time activity information in the process. The project as a whole puts forward two valid models that are applicable in libraries for direct implementation. The UML model can be used as it stands, but it cannot easily be altered to meet new requirements. As localized objects evolve into distributed components, developers are asking that UML provide better support for component-based development using EJB (Kobryn 2000).

The ModelScope model can be modified according to the requirements of the users, but a Java platform is essential for its implementation, and thorough knowledge of a high-level programming language is also a must in this behavioral modeling. Simulations can be carried out and their results analyzed easily because of the model's simplicity; the Java platform is used to simulate the model. Comparisons are made on the initial steps as well as on the final results obtained. ModelScope is a new arrival in the field of software engineering and has much scope for future development.
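
As a rough illustration of the kind of state-based behavioral model described above, the following minimal Java sketch (with illustrative class, state, and method names that are not taken from the actual project) models a library publication moving between the available, on-loan, and reserved states used for borrowing and reserving:

    // A minimal sketch of a state-based library publication; names are illustrative.
    public class Publication {

        // States a publication can move through in the library.
        public enum State { AVAILABLE, ON_LOAN, RESERVED }

        private State state = State.AVAILABLE;

        // A reader borrows a publication that is not already on loan.
        public void borrow() {
            if (state == State.ON_LOAN) {
                throw new IllegalStateException("Already on loan");
            }
            state = State.ON_LOAN;
        }

        // Reservation is only allowed while the publication is available,
        // matching the constraint that a book on loan cannot be reserved.
        public void reserve() {
            if (state != State.AVAILABLE) {
                throw new IllegalStateException("Cannot reserve: " + state);
            }
            state = State.RESERVED;
        }

        // Returning the publication makes it available again.
        public void returnToLibrary() {
            state = State.AVAILABLE;
        }

        public State getState() {
            return state;
        }
    }

A simulation run would simply call these transition methods in sequence and observe the resulting states.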

Thus, an application-specific project has been completed and a comparison of different modeling tools has been carried out effectively. The project develops the ability to approach a constraint, evaluate the scenario, select appropriate tools for implementation, and get the structural modeling done. These are essential qualities for a person working in the field of software engineering and modeling tools.

Outline

This paper discusses a system design using a design tool called ModelScope; UML is also used for the system tools in this section. The project aims to enable engineers to understand the solution to real-world problems. This paper also discusses how paperwork can be converted into a computerized format, a conversion that increases the efficiency and productivity of the system. The requirements of the system are defined using design tools. As mentioned above, the major application of this system is library management, which aims to support the management of lending and reservation of books. In library system management there are two types of clients for this system: the library administrator and the reader.

Initially, the software engineering field is explored in several steps, and the techniques and tools used in this area are studied. The characteristics of and procedure for the Unified Modeling Language (UML) are then determined and applied to model the structure of a library. Later, the ModelScope method becomes the study topic; this model is a new arrival in the software engineering field, and the library setup has proved to operate productively on the ModelScope platform. Both models have been judged against each other to reach conclusions on their merits and demerits, and the advantages and disadvantages of the model built with the ModelScope tool have particularly been dug out. Thus the project has been completed.

The main objectives of the system are to effectively record reader loans, reservations, publications, and so on. In this case, the application in the library reader system is explained. The reader and the administrator are the two users in the system; the administrator is the user who administers the system. A relation to the requirements of the clients is an important factor to be considered while the modeling process is carried out. The client may or may not have technical knowledge, and this is a key point when selecting the modeling tool in software engineering.

The reader needs to be registered, and once that is done he or she is involved at every stage of the system's activity. The main activities include adding a reader, changing a name, borrowing, reservation, and so on. An example of such a ModelScope model has already been mentioned here; UML is a globally accepted structure in this approach. I have introduced a client to this project who has understood and accepted its advantages, and the technical side of the ModelScope project received his appreciation. However, he has found one shortcoming: he believes the project engineer overlooked the requirement that a reader may need to take more than five books. There can be a situation in which the reader needs more than five books, and this aspect was not considered while the modeling was done. In such a case the system would have to check the urgency of the particular publication.
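
To make the client's concern concrete, the following small Java sketch (hypothetical names, not part of the delivered model) shows one way the five-book limit could be represented as a configurable value rather than a fixed rule, so that a reader who genuinely needs more books can be accommodated:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative reader with a configurable loan limit.
    public class Reader {

        private final List<String> loans = new ArrayList<>();
        private int maxLoans;

        public Reader(int maxLoans) {
            this.maxLoans = maxLoans;   // e.g. 5 by default
        }

        // The limit can be raised for a reader who genuinely needs more books.
        public void setMaxLoans(int maxLoans) {
            this.maxLoans = maxLoans;
        }

        public void borrow(String title) {
            if (loans.size() >= maxLoans) {
                throw new IllegalStateException("Loan limit of " + maxLoans + " reached");
            }
            loans.add(title);
        }

        public int currentLoans() {
            return loans.size();
        }
    }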

Thus, it transpires that although the system has many advantages, there are also some limitations. Another constraint is that a reader cannot reserve a book that has been taken by another person. However, because the merits outweigh the weaknesses, the system and its approach are widely accepted. The project also brings out the differences between UML and ModelScope as tools.

The two-way approach to modeling the library architecture gives design and project engineers the chance to select among different models to meet client requirements. The ModelScope method can be developed into a more efficient one in the future because, even at this early stage, it has found application and a position in software engineering.

Reference List

Kobryn, C 2000, Modeling components and frameworks with UML, Communications of the ACM, vol.43, no.10. Web.

Tech & Engineering: System on Chip

Introduction

Chips are nowadays found in almost every electronic device, system, or application. The semiconductor market has taken a leading position in the world economy, and most countries that seek technological independence are focusing on semiconductor development. Technology keeps growing far and wide, so the use of semiconductors will not come to an end any time soon. Currently, it is possible to incorporate millions of transistors and other virtual components in one chip.

Because these chips form truly integrated systems, they are called systems on a chip. The performance of these systems is relatively high with low power consumption. Technology advancement has made it possible for developers to integrate assorted components on the same substrate in one chip. The reusable components are usually called intellectual property blocks and are often referred to as soft cores. Reuse can start at the block, chip, or platform level and entails ensuring that the intellectual property is completely general, configurable, and programmable so that it can be used in a wide variety of processes. This has resulted in chips becoming highly complex.

Intellectual property integration involves connecting the computational parts to the communication medium. The methods of testing the system are also defined, together with the verification issues that are encountered when integrating reusable components.

Design teams have adopted a block-based design approach that emphasizes design reuse. However, as a result of changing to this approach, the teams have run into many challenges. Sometimes they have used components that were not designed for reuse and have therefore failed to achieve their objective. Teams have realized that for design reuse to succeed, they must have a clear method of developing macros that are easily integrated into the system-on-chip.

Deep submicron technology makes it possible for SoCs to be developed with the required features. It helps in the whole design flow, including timing and power distribution, interconnection delays, and the placement and routing of millions of gates. For the macros to be reused, they must be exceptionally easy to integrate into the chip design. They must also be so robust that there is no need to verify their functionality again after development.

Therefore, there must be proper documentation, good coding, and systematic commenting for the macros. As a result, designers are coming up with macros that are easily configurable for different applications. The macros are also designed with standard interfaces so they can be used with any application, and they are designed in a way that makes it possible to verify them in any chip (Keating & Bricaud pp. 1-6).

Properties of the system on chip

A system-on-chip consists of one microcontroller or microprocessor core and memory blocks, which are a selection of read-only memory (ROM), random access memory (RAM), and flash. It also contains phase-locked loops and oscillators, which serve as timing apparatus. Some of the peripheral devices include counter-timers and power-on reset generators. SoCs support connection to external interfaces such as Ethernet and FireWire. Apart from all the above hardware devices, a system-on-chip contains software that controls the microprocessor, peripherals, and interfaces (White par. 1-3).

To develop a system on a chip, companies need to consider both core reuse and the system-on-chip integration architecture. Designers should learn to approach the design problem from the system down to the core level instead of taking the upward approach. One of the challenges in construction is incorporating intellectual property in a way that allows broad reuse. To solve this problem, there is a need for a complete intellectual property core interface comprising data, control, and test. This will allow cores to be independently created, tested, and integrated as components of the system rather than being tied to a particular system-on-chip interconnect.

Until such a system is developed, there will be no breakthrough in system-on-chip development. For this technology to be realized, all the stakeholders in design need to collaborate without undermining each other's roles (Pierce par. 1, 3 & 5).

The integrated circuits of a system on chip consist of various reusable, well-designed blocks such as memory arrays, digital signal processors, and microprocessors. These blocks are called cores and are usually designed in advance, before they are integrated into the circuit. A SoC integrates various functions in one chip device, thus providing a wide range of capabilities in a single component. It may contain a combination of cores providing different services such as microprocessors, audio and video controllers, and modems, to mention but a few. Most of the peripheral devices that used to be implemented in separate circuits in traditional processor designs have been integrated into one block in the system-on-chip. Field-programmable gate array (FPGA) based systems on chip have also been developed as a result of advances in FPGA technology.

An FPGA consists of several programmable elements, electronic circuits, and routing infrastructure. It can be programmed to selectively interconnect the processors as well as define the functions of these processors. The SoC also contains internal memory that stores information such as the instructions for the modules and data. As more tasks are performed using computers, the available space in computer chips is shrinking, leading to high demand for functionality in less physical space. This has led manufacturers to come up with systems-on-chip to meet this demand. The benefits of the system on chip speak for themselves, as everything required for a computer to run is integrated into one chip. The system on chip is unique in that it is both software and hardware. Its drawbacks, however, are the time and money required to develop it.

Advantages of the system on chip

A system on a chip provides many benefits to users over traditional systems. Integrating all components in one chip reduces the size of the product, making it easily portable, and it also enhances the processing speed and reliability of the system. A design based on separate integrated circuits comprises components connected on a printed circuit board, whereas the system on chip is designed so that the entire system is implemented on a single chip, making the products smaller, faster, and more efficient. The system on chip achieves faster speeds as a result of the components being integrated into one chip: by integrating all the chip's functions in one device, data no longer needs to be physically moved from one chip to another, which enhances speed.

Since the system on chip allows integration of all the components of even complex systems such as cell phones and television receivers, it reduces the size of the products. Consequently, the power consumption of the devices is greatly reduced, as is the production cost. A circuit operation that takes place in an integrated circuit requires less power than a similar circuit on a printed circuit board with discrete components (System on chip (SoC) par. 1-3).

A system on chip also helps in achieving better and increased functionality, because the required logic can be stored in the chip itself. The cost of incorporating other circuit components is thus reduced, provided the chip is capable of storing all the data required for the process to be performed completely and successfully. The designers are therefore left with the task of developing a versatile chip that can accommodate all the functionality of the individual components used in a system implemented on printed circuit boards.

Also, the hardware is easily reconfigured to meet the required functionality under new protocols. The system on chip supports the reuse of reconfigurable chips, which has many advantages and applications. Systems on chip implemented in field-programmable gate arrays that have partially failed can be reconfigured to steer clear of the damaged parts and continue with the original operations, though at reduced performance. It is also possible to implement identical SoCs in one system, enhancing reliability through redundancy.

The system on chip supports parallel processing of operations, hence saving time. It is possible to configure the various modules stored in one chip so that they run in parallel. The embedded character of the system on chip makes it possible to reconfigure the system to execute multiple functions at the same time. This has also helped different applications share the same resources to perform varied tasks at different times, without having to provide resources for each application. Due to the system on chip's reconfigurable nature, it is possible to quickly modify it from a remote location to improve its performance. Its ability to be reconfigured to perform a completely different task makes it less costly than traditional application-specific integrated circuits (ASICs).

Because the system on chip eliminates the ASIC design, the cost of its development is reduced. Its reconfigurable nature further reduces cost, since it can be reconfigured to meet the changing specifications of applications without having to buy another system. This allows systems to be upgraded, extending their useful life and consequently reducing their lifetime cost. In the case of ASIC and general-purpose hardware designs, if the technology becomes obsolete one has to purchase new technology, which makes them more expensive.

The system on a chip also reduces time to market. With the obsolescence of application-specific integrated circuits, the development effort for the system on chip has been reduced to the point where it is possible to upgrade the system even after it has been introduced to the market. The design can be sent to market with minimum specifications and subsequent upgrades can be made later without changing the system or the devices. This has led to the system on chip being used to run most applications.

Disadvantages of the system on chip

The system on a chip has various disadvantages, such as the time chips take to be reconfigured for a given task and the difficulty experienced in designing such chips. It costs more time and money to develop a system on a chip than it takes to develop traditional chips, because the materials needed for its manufacture are still new and unfamiliar to manufacturers. This is, however, changing gradually as chip manufacturers continue to discover the usefulness and potential of the system on chip. The laws of physics also remain the main obstacle to the final version of the system-on-chip.

The demand on the chip and its silicon increases as the hardware and the software are combined, and these demands may even become overwhelming for the technology to manage. This has led developers to use alternative surface materials with conductivity requirements different from those of silicon. With the progress that these alternatives have made, there is hope that such a system on chip will be developed shortly (White par. 3 & 4).

As the system on chip is highly integrated, it is hard to replace a particular device in case of failure; the entire system is overhauled and a new one is used in its place. This is costly in terms of the money needed to produce a new system as well as the time consumed during system development. The designs of some of the devices integrated into the system are also difficult to understand, which may lead to some of them not performing according to the organization's expectations.

Complexities associated with the system on chip

The system on chip is associated with various complexities, including design space, routing, timing, assignment, consistency, and development tools. Reconfiguring new hardware requires ample space for placing that hardware. This becomes complex for the system on chip, especially if the hardware needs to be placed near resources such as built-in memories or input/output resources.

When reconfiguring or upgrading, the existing components have to be connected to the newly introduced devices. This calls for the provision of ports to interconnect the components, and it becomes a problem when the available ports turn out not to be enough, or when there are no extra ports to accommodate the new components. The newly configured equipment must also meet the timing requirements needed to harmonize the operations of the system. Where long cables interconnect the components, they may affect the timing and thus compromise the processing speed of the system. Sometimes the added device may be under-timed or over-timed, leading to flawed results.

Reconfiguring the system on a chip might affect its computational consistency. When adding new components to the device, one should ensure that the existing design of the device is not tampered with or deleted. There are also no readily available materials for dynamically reconfigurable computing systems such as systems-on-chip; most of the available materials are still under development, which has made it hard to have these systems in full operation (ASIC-System On Chip (SoC)-VLSI Design par. 1-11).

Conclusion

Despite the numerous challenges the field of systems on chip is facing, the technology will continue to grow. Collaboration between universities and the industries that develop these systems has resulted in research that will help in the development of viable systems-on-chip as well as reduce the challenges currently facing the industry. With increasing market pressure, field-programmable gate arrays and other programmable logic devices will be integrated into systems on chip. This will be a tremendous breakthrough in the communication industry as well as in computer technology.

Works cited

ASIC-System On Chip (SoC)-VLSI Design. 2007. Web.

Keating, Michael & Bricaud, Pierre. Reuse Methodology Manual for System on a Chip Design. 3rd ed. Massachusetts: United States of America. 2002.

Pierce, Grant. System-on-chip design: Is it reuse or useless? 1999. Web.

System on chip (SoC). 2007. Web.

White, David. What is a System on a Chip (SOC)? 2009. Web.

Video Distribution Systems

Melinger (30) has defined Microsoft Media Services as a streaming media server that enables an administrator to create video or audio streaming media. Microsoft Media Services is capable of imposing authentication, recording or caching streams, limiting access, and enforcing a variety of connection limits, as well as utilising various protocols. In addition to applying Forward Error Correction (FEC), Microsoft Media Services can handle a multitude of connections. The way Microsoft Media Services works is that the distribution of streams usually takes place between the various servers that feed a distribution network.

In this case, an individual server is used to feed diverse networks. Moreover, both multicast and unicast are well supported by Microsoft Media Services. The decoding of the various distributed streams is accomplished through the use of Windows Media Player, enabling users not only to watch these streams but also to listen to them. Users are thus in a position to obtain valuable information via the media.

In order to set up a newspaper or radio station, a number of requirements must be met. First of all, there should be a network set-up device (wireless or wired) connected directly to the internet. There should then be a framework for distributing information-type content and entertainment. Moreover, there needs to be an allotment of user-created amateur content alongside professionally produced material. Collaborative rating and filtering systems should also be included, along with situation-description interactivity and configuration via a second screen such as a website. Furthermore, programs need to be individualized so as to suit viewers' needs and interests. Lastly, there should be a common time code for when the program will be watched (Melinger 2).

Unicast transmission, within the context of computer networking, refers to relaying packets of information to a single network destination operated by a single user.

Unicast messaging finds application in networking processes in which exclusive or private resources are in demand, ensuring that the traffic within the network takes the form of a unicast. Unicast messaging is especially useful where the completion of a network transaction hinges upon a two-way connection. Multicast, on the other hand, is a network technology for delivering information to a group of destinations simultaneously. Multicast applies a well-organised strategy to ensure that messages over the various network links are delivered only once and are then reproduced for the diverse destinations.

It is generally employed for streaming media and internet television applications. At the data link layer, multicast describes one-to-many distribution, such as in asynchronous transfer mode (ATM). A stream can be distributed to a large receiver population without the source needing to know who or how many recipients there are: the nodes in the network take care of replicating the packets, and the source sending to a group does not need to know about the receivers in that group. The multicast tree construction is initiated by network nodes close to the receivers, which allows it to scale to a large audience (Differences between Multicast and Unicast).
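
As a minimal illustration of the difference, the following Java sketch (the addresses, port, and payload are placeholder values) sends the same packet once to a single unicast destination and once to a multicast group; in the multicast case the network, not the sender, replicates the packet to every receiver that has joined the group:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class StreamSender {
        public static void main(String[] args) throws Exception {
            byte[] data = "stream packet".getBytes();

            // Unicast: the packet is addressed to one specific receiver.
            try (DatagramSocket unicast = new DatagramSocket()) {
                InetAddress receiver = InetAddress.getByName("192.0.2.10");
                unicast.send(new DatagramPacket(data, data.length, receiver, 5004));
            }

            // Multicast: the packet is addressed to a group address; the network
            // replicates it to every member that has joined the group.
            try (DatagramSocket multicast = new DatagramSocket()) {
                InetAddress group = InetAddress.getByName("239.1.2.3");
                multicast.send(new DatagramPacket(data, data.length, group, 5004));
            }

            // A receiver joins the group to obtain its copy of the stream.
            try (MulticastSocket member = new MulticastSocket(5004)) {
                member.joinGroup(InetAddress.getByName("239.1.2.3"));
                member.leaveGroup(InetAddress.getByName("239.1.2.3"));
            }
        }
    }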

To enable streaming over a network, Windows Media Encoder is normally used. This software is installed and configured on a personal computer (PC), which then becomes the live streaming encoder. The live feed or broadcast is connected to the PC through an audio/video card installed in the PC. The Windows encoder software then converts (encodes) the live signal to be sent to the Windows Media streaming platform over a secure connection. Windows Media then rebroadcasts this signal to as many audiences as possible (Start streaming with windows media).

Works cited

Differences between Multicast and Unicast. Microsoft. 2009. Web.

Melinger, Daniel. Interactive Telecommunications Program Tisch School of the Arts, New York University. 2004. Web.

Start streaming with windows media. Netro Media journal. 2009. Web.

The Pros and Cons of Implementing the Main GCP Principles

Introduction

Drugs and medications are used in the treatment of diseases and conditions in human beings, so in the development of new drugs and medications there comes a phase in which these have to be tested in human subjects. The principles of Good Clinical Practice (GCP) come into focus at the time of conducting medical trials using human subjects. The purpose of the principles of GCP is to ensure that these trials remain within the required ethical standards and that the results come from scientific inquiry that upholds standards of quality, irrespective of which part of the world the trials take place in. Yet the principles of GCP are not implemented in the same way worldwide, and there are advantages and disadvantages when different countries implement the principles of GCP in different ways.

Pros of Different Countries Implementing GCP Principles in Different Ways

The principles of GCP have evolved through the combination of the economic development and wealth available in the developed world on one side and the culture prevalent in the developed world on the other. Both these factors are largely absent in the developing world, which makes it difficult to conduct rigidly designed clinical studies there. Hospitals in the developing world may lack even basic necessities and be overwhelmed by the flood of patients they face.

Hayasake (2005, p. 1401) points out that in Morocco cancer patients have to wait for days for a bed to become free so that they can be admitted, and that in India 60-70% of patients fail to get hospital support because of the deficiency in medical resources. In addition, the availability of trained staff for the conduct of a trial in a developing country is limited. This suggests that outside assistance is required, but experience has shown that trials conducted with assistance from outside seldom provide good results (Hayasake 2005).

By implementing the GCP principles in different ways, it becomes possible to include countries in the developing world in clinical trials. This is pertinent because many diseases are endemic to the developing world, and for other diseases the prevalence rate there can be higher. Furthermore, studying treatments of diseases in different environments gives better insight into disease processes, and such studies in the developing world come at much lower cost than in the developed world (Hayasake 2005).

Failure to develop new drugs for the developing world, and to test them for actual translation into use in the treatment of diseases, would mean denying a very large segment of the world's population the benefits of advances in science and technology. Implementing GCP principles in different ways removes this anomaly (The European Group on Ethics in Science and New Technologies, 2003).

Cons of Different Countries Implementing GCP Principles in Different Ways

A key element of the objective of GCP is to maintain ethical standards in the conduct of clinical trials in any part of the world. This key element is corrupted when the GCP principles are implemented in different ways, with particular concern about poor adherence to the ethical standards prescribed by GCP in trials conducted in the developing world (Nundy, Chir & Gulhati, 2005).

Examples of this poor adherence and of disregard for human dignity abound. In India, there has been growing concern over the conduct of illegal and unethical clinical trials since India made it easier for international pharmaceutical companies to conduct clinical trials in 2005. India has about 14,000 general hospitals, but only about one percent have the infrastructure to conduct trials that meet the compliance requirements of GCP, while the number of appropriate pathological laboratories is less than a dozen. This means that many clinical trials take place with poor adherence to GCP principles, as has indeed been the case.

Two new chemical entities discovered in the United States of America, termed M4N and G4N, were used in clinical trials at the Regional Cancer Centre in Kerala. Another illegal clinical trial involved a formulation of vaginal pellets of erythromycin tried out as a contraceptive agent in West Bengal. Yet another case is the trial of the cancer drug letrozole as an agent for promoting ovulation in women. These trials were all illegal and did not adhere to GCP principles, as they took place without regulatory approval. Thus, failure to implement GCP principles, and arbitrary interpretation of the GCP principles in developing countries, leads to the corruption of the ethical standards required in the conduct of clinical trials (Nundy, Chir & Gulhati, 2005).

Literary References

Hayasake, E. 2005, Approaches Vary for Clinical Trials in Developing Countries, Journal of the National Cancer Institute, vol.97, no.19, pp. 1401-1403.

Nundy, S., Chir, M. & Gulhati, C. M. 2005, A New Colonialism? Conducting Clinical Trials in India, New England Journal of Medicine, vol.352, no.16, pp. 1633-1636.

The European Group on Ethics in Science and New Technologies. 2003, Ethical Aspects of Clinical Research in Developing Countries, Opinion of the European Group on Ethics in Science and New Technologies to the European Commission.

Comparison of Privacy and Security Policies

Beth Israel

Beth Israel permits occasional and limited personal use of information technology resources on the condition that such use does not interfere with the user's or any other user's work performance or result in the violation of the provisions of the organization's privacy and security policies. The policy also states that users cannot have any expectation of privacy in the information they generate using the technological resources of the organization. The policy prohibits the use of inappropriate or unlawful materials, and users are generally restricted from copying or distributing any copyrighted materials except when they comply with the legal requirements of fair use. The policy further states that all e-mail addresses assigned by the organization remain the property of Beth Israel (HIMSS).

Mayo

Mayo's web policy states that the organization will not share the information submitted by users with third parties unless it is warranted for specific purposes; however, the company will request that third parties protect the information passed on to them. The company takes responsibility for ensuring that the information collected is accurate, up to date, and complete. The policies also state that the organization will take reasonable steps to protect personal information from loss, unauthorized access, and misuse. The policy reiterates that, by subscribing to the policy statement, the user consents to abide by the terms of the company's policies regarding the use of the information resources (Mayo Healthcare).

Georgetown

Georgetown has laid down well-defined, elaborate policies regarding the privacy and security of information. The most important policy pertains to patient access to protected health information and the accounting of disclosures. The organization permits patients to inspect and obtain a copy of protected health information in a pre-designed format, subject to certain restrictions. This ensures that the patient or any other user draws only the necessary information.

While the policies regarding the access and use of information remain much the same in all three organizations, Georgetown has laid down policies that differ from those of the other organizations and that can be considered to provide better privacy and security of information. One of the distinguishing features of Georgetown's security and privacy policy is that patients may request an accounting of disclosures made to any other authority or user (Georgetown University).

This ensures a higher level of security of personal information, as the organization commits itself to providing patients with information on the nature of the information disclosed and on the authority or third party to whom the disclosures were made. However, the organization may not provide an accounting of disclosures pertaining to treatment, payment, and health care operations, or of several other types of information listed in the policy statement. Another feature that makes Georgetown's policies distinct is that the University entertains requests from patients to receive communication of protected health information by alternative means or at alternative locations. This policy enhances the availability and utility of information more than the others do.

References

Georgetown University University Policies: Privacy Policies. 2009. Web.

HIMSS Managing Information Privacy &Security in Healthcare. Web.

Mayo Healthcare Privacy Policy. 2009. Web.

Linux OS: Review and Analysis

Linux is a general term that refers to a Unix-like operating system based on the Linux kernel. This system was developed by Linus Torvalds along with other developers all over the world. It is open source, as we can use, modify, and redistribute all of its underlying source code. We learn that Linux is highly flexible, that it offers a long-term and completely future-proof strategic platform, and that the Linux platform is supported by most of the major middleware and server vendors. The Linux operating system is based entirely on open architectures and standards.

Linux is not just a world-renowned operating system; it also provides an attractive overall cost of ownership and a degree of choice that benefits us greatly. The best part about Linux is that even we can constantly integrate and develop its leading-edge technologies by adding our own code and practices to it, making it a forward-looking platform. Linux was developed and released under the GNU General Public License. The Linux kernel is the base on which the operating system has been developed, and within a decade it has been accepted as a primary software platform. Another important feature of Linux is that this operating system can be embedded directly into microchips and used in various devices and appliances. (CGS, 2008)

Project Linux From Scratch, or LFS, provides us with step-by-step instructions for building our very own personalized Linux operating system from source. IBM, on the other hand, designs and plans various Linux implementations, helps to migrate to Linux from other platforms, develops and distributes Linux installations, and supports and manages production Linux environments. Thus, if we learn Linux from LFS we only learn how to build a customized Linux operating system, whereas IBM offers far greater coverage of Linux.

As IBM supports Linux on all of its middleware, storage, and servers, it offers the widest flexibility and the full power of Linux to support our needs. Although LFS teaches us about the internal workings of a Linux operating system, IBM offers us a number of its middleware solutions along with more than 500 software applications that can be run on Linux. (IBM, 2008)

IBM also provides a scalable, robust, and open learning platform for us, which helps achieve adaptability when using an open computing model. However, learning from LFS is not at all flexible, since it is a source-based distribution: to learn from it we have to specify compiler options at the beginning and then sit idle for as long as it takes to build the system. This is not the case with IBM's learning techniques. Through its course, building the operating system and working on it are both parts of the learning experience. Also, unlike LFS, IBM focuses more of its topics on discussing the operating system rather than its distribution.

Thus, we learn much more about the system without worrying about the distribution. Unlike LFS, which only focuses on how to build the system, IBM courses provide active lab exercises that have been specifically designed for working with current versions of major Linux distributions, such as Fedora Linux, SuSE Linux Enterprise Server (SLES), and Red Hat Enterprise Linux (RHEL), allowing us to apply the skills learned to our system's Linux deployment. (LFS, 2007)

References:

CGS; 2008; Online Demo; cgselearning. Web.

IBM; 2008; IBMs online training course; IBM. Web.

LFS; 2007; Linux from Scratch; linuxfromscratch. Web.

Information Needs and Data Inputs, Processes and Outputs

In healthcare systems, team members have to work together to achieve a common goal (Sales, Cooke, & Rosen, 2008). Research indicates that it is not enough to support the performance of isolated tasks, and that suboptimal communication is one of the most important causes of medical errors. Effective communication among team members is a key component in aiding value-added processes, and this has resulted in improvements in the quality of patient care (Baker et al, 2007). However, processes that are computerized without careful analysis lead to inefficiencies (Koppel et al, 2008). By supporting communication and coordination, well-designed healthcare processes reduce inefficiencies and make the management of complexity easier.

Developing Value-added Processes

In order to improve communication among healthcare team members, it is imperative that the processes be converted into value-added processes so that information inputs can be converted into valuable outputs that help improve the provision of effective, high-quality healthcare. The development of value-added processes requires the institution of new activities and the reorganization of many current activities. The first step in this direction is to identify and rank the most important processes of the organization. The next step is to establish an interdisciplinary team representing the healthcare team members, clinical units, and facilities that take part in or contribute to the processes (Surowiecki, 2004).

The foremost task of the team is to define the standard practice of the organization clearly; the existing clinical practices, practices concerning patient satisfaction and cost measures, and performance targets also need to be defined properly (Casale et al, 2007). It is better to align the incentives of the team members to the maximum extent possible. It has been shown that professional, team, and organizational trust are among the important motivators for improving communication among the members and for ensuring the effective participation of team members in process development.

Audit and Analysis

An audit with feedback is needed to build a shared understanding of where the processes are to be improved. Pay-for-performance programs, established both internally and externally, are effective when they reward the achievement of evidence-based process measures. Analysis of current processes and prospective risk assessment enable the translation of the practice into different sub-processes and procedures, including skill sets and information flows (Carayon et al, 2006).

Testing, Monitoring and Feedback

It is important that the communication processes are tested in a small setting to confirm that they are ready to be deployed in all appropriate sites and venues of care. After the processes are implemented, they need to be optimized through continuous monitoring and feedback of process measures.

Maintaining Team Awareness

A better process design can be achieved through human-factors engineering, which can provide both theoretical and pragmatic guidance for designing the process. Maintaining a shared awareness of the needs of the patients enables better coordination of the efforts of the team members in providing quality healthcare. Similarly, since such shared knowledge improves the organization's awareness of the overall situation, it contributes to an enhanced level of organizational efficiency (Schultz et al, 2007).

Automation of Routine Tasks

Another effective way of improving communication processes within any healthcare setting is to create flexible processes that help automate routine activities. The automation of routine tasks enables physicians, nurses, and other clinical technicians to create value even in unstructured situations. In addition to supporting the standardization of routine tasks, automation helps support intentional variation based on the uncertainty that characterizes the patient's condition, the strength of the available evidence, the needs of the patients, local factors affecting the communication processes, and the professional judgment of the providers.
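
As a small, hypothetical illustration of automating a routine task (the task and interval here are invented for the example, not drawn from any specific healthcare system), the following Java sketch schedules a recurring reminder that runs without manual intervention:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class RoutineTaskScheduler {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // Illustrative routine task: remind staff to review pending lab results.
            Runnable reviewReminder = () ->
                System.out.println("Reminder: review pending lab results");

            // Run the task every 4 hours without anyone having to trigger it.
            scheduler.scheduleAtFixedRate(reviewReminder, 0, 4, TimeUnit.HOURS);
        }
    }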

The Fire Protection Services

Project Background

Building fire remains one of the greatest threats to human life in buildings. Fire threatens governments, weakens economies, destabilizes societies, and kills people. Building fires thus have important ramifications for the nation-state as well as for the corporate world. Increasingly, arson and flammable goods have seriously affected economies by bringing down businesses and destroying human life. The occupational safety of buildings is one of the most challenging issues in the modern world; take, for example, the fire at the World Trade Center towers caused by the terrorist hijacking of airplanes, which destroyed the buildings. That incident will remain in people's minds for many years to come. This project undertakes to study the fire protection services that are available to building users.

Project Scope

The scope of the study covers a literature review of the fire protection measures currently available and being used by companies and building owners to protect buildings and their users. The building administrator manages the entire gamut of processes, from training to the installation of safety equipment, to provide a pre-defined level of fire protection. The capabilities required of a fire protection manager in managing a building are extensive. What attributes does a building's fire protection department need in order to manage the building according to the end-user requirements outlined by organizational objectives?

This study focuses only on developing the fire protection procedures, equipment, and training necessary to support and improve processes calibrated to deliver specific fire protection measures that meet the required standards. The study assumes, however, that other factors, such as the reputation of the building and financial stability, are of less consequence.

Options Available

There are many fire protection options available for people in these incidents. To begin with, much equipment can be installed in a building to protect firefighters. One such apparatus is the sprinkler system, which uses water or gas to mitigate the effects of a fire outbreak. The sprinkler should have a long hose pipe for water, or a light cylinder for those using a cylinder to put out the fire, and these systems should be strategically placed so that no corner or area of the building is inaccessible. The pumps that will be used should have water available throughout to ensure that a fire outbreak does not become destructive. Each building should also ensure that its staff members are trained in the following.

How to use the alarm system to alert people to impending danger: the alarm system used should be strategically placed so that it can be accessed by all users in the building, especially those with knowledge about the building.

Causes of Fire

Although you cannot see it, the current running through your electrical wiring is a source of heat, and if a fault develops in the wiring, that heat can easily become excessive and start a fire. In fact, neglect and misuse of wiring and electrical appliances are the leading causes of fires in business premises. So have any faults in your wiring promptly repaired, and get an electrician to do the job; do-it-yourself work is dangerous with something unseen like electricity. Most electrical fires start with appliances: heaters standing near combustible material, or pressing and soldering irons left on after the job has been done. It really is worth taking the trouble to see that everything is switched off after use.

Electric lamps get very hot, and fires start if lampshades or materials used in window displays come into contact with them.

Rubbish

Fires love rubbish. A fire will be destructive if heaps of paper are not disposed of properly. Get rubbish out of your premises and into metal bins as quickly as you can.

Another big danger with rubbish is when you burn it. These bonfires get out of control all too often: sparks fly through the windows and, before you know it, you have a real fire on your hands. If you must burn rubbish, use a proper incinerator, well away from the building and storage, and stand guard over it.

Smoking

The discarded cigarette is still one of the most frequent fire starters. Getting rid of rubbish will help to reduce fires from this cause, but even so, wherever cigarettes and matches are used there is a chance of fire starting. Do not smoke in rooms where goods are stored. Check last thing in the evening that no cigarette ends have been left burning, and have plenty of ashtrays around.

Heaters

Portable heaters (electric, oil, or gas) start fires if goods come into close contact with them or if they are accidentally knocked over. Make sure all heaters are well away from goods and from combustible materials in walls or ceilings. Place or fix portable heaters so that they cannot be knocked over, and keep oil heaters away from draughts. They should also be securely guarded. With convectors or thermal storage heaters, never stand books or papers on them or drape cloths over them, or you may make them overheat and a fire can result.

Plan of Action for Prepared for Fire Protection

The following is a plan of action and events that need to be considered in order to help in fire evacuation.

Training of the residents: The supervisors should be thoroughly trained in how to handle these incidents. Some situations may be familiar to them and some may not, and this is a big challenge. Supervisors should be trained in all safety measures in all departments, depending on which floor they work on. They should also be psychologically prepared to handle people with special needs, whose understanding may differ from their own. It is the responsibility of the building's occupational safety and health management to make sure that all are prepared. These trainings should be regular, depending on the needs at work, and after training, whatever has been learnt should be implemented; the supervisors should make sure that it works and is friendly to all, who will also need to be taught about some issues concerning their safety at work.

The supervisors must not blame the employees when an accident occurs, but must take immediate action to prevent repeat accidents. They should also be taught about the legal implications of not handling their duties responsibly, as the employees may take them to court if their issues are not handled well. Their public relations with the employees should be of a high standard, and they must treat them well to avoid unnecessary confrontations. During the training, posters, audio-visuals, films, and classroom presentations can be used to facilitate training and understanding. The director should set aside funds for the training of the staff and of the other employees who can be selected for the group.

Purchasing modern and sufficient safety equipment: Sometimes cheap safety equipment is bought but is dangerous to the users of the building. At times safety wear may be bought that is not meant to be used for a specific job, and this results in accidents again and again. Enough safety wear should be bought to serve all the employees at all times without any shortage. There should be no sharing of this safety wear, and the employees should be taught by the supervisors how to use it. When the safety wear is worn out, it must be replaced immediately so as to reduce the number of accidents. The building management must also make sure that it has machines that are friendly to the employees and do not endanger their lives.

Introduction of insurance cover and a compensation scheme: This will protect people while they are in the building.

Why solutions are necessary

Fire disasters have occurred all over the world for many years, in both highly developed and less developed countries. These fire disasters have been caused by human forces, natural forces, or an interaction of both. When they occur, they cause serious challenges and consequences for the economies of the affected people or countries. In most cases, the phenomenon that triggers a disaster is beyond human control. In general, the losses that fires cause are largely a function of human factors: human decisions, human actions, and human choices, or sometimes the lack of these. A fire disaster is a misfortune or calamity; it can also be described as an incident of great harm and distress. It causes society to stop its normal functions and divert all resources to mitigating the effects of the fire.

As I have mentioned before, fire results from human forces, natural forces, or a combination of both, and management measures for these fires have been put in place. Fire management means setting policies that help in fighting fire at all levels of occupancy; these policies cover all stages of fire management, from extinguishing and evacuation to post-fire counseling. The fire management body therefore has to provide personnel and facilities for dealing with fires. The personnel include administrators, individuals, and community actors who try to minimize loss of life and damage to facilities. They do this through fire preparedness, which includes efforts for the effective rescue of people involved in a fire disaster, relief, and the rehabilitation and reconstruction of destroyed assets such as buildings.

The administration, individuals, and community also engage in fire disaster mitigation, which encompasses all measures to reduce the impact of the fire disaster phenomenon by improving the community's ability to withstand the impact of fires. They do this through prevention, preparedness, and the actual response during or after a fire disaster, which includes relief, rehabilitation, and reconstruction.

A fire disaster will leave behind vulnerable people who are prone to it again should it recur. To be vulnerable is to live with the likelihood that one will suffer from hazardous events. In society, some people are more vulnerable than others: the nearer one is to hazardous places, the greater the consequences one will face. Earlier fire disasters in buildings have established that natural hazards are a cause of vulnerability to disasters. People who live or work in certain environments are prone to the disasters that may occur in such areas, which means that humans living or working in certain areas make themselves vulnerable to fire disaster. Vulnerability is reduced to zero simply by people not living in affected areas.

Scientists, technologists, and engineers have attempted to predict hazardous events and to develop technologies that can enable human structures to withstand fires. The assumption has been that such events are acts of nature that cannot be prevented, but that there are possibilities of reducing their consequences. As a result, technologies and materials for building and construction, for example, have been developed so that structures can withstand fires.

In spite of many gains in the scientific and technological process of controlling vulnerability to fire disasters, people continue to be injured and die, and property continues to be lost. One reason for this is that many fire disaster prediction and other mitigation technologies are costly, and individuals and communities are either unwilling or unable to afford them. The costs tend to set the criteria for deciding what mitigation methods to use under various circumstances. According to this view, although vulnerability is a cost, vulnerability reduction is itself costly.

These consequences can be viewed in terms of the time period of a disaster: short term, mid term, or long term. The effects of fire disasters are the short-term consequences, which comprise direct damage, indirect damage, and secondary effects. The impacts comprise economic, social, psychological, and environmental impacts, and these are mainly the long-term consequences of fire. The worst-case scenario, which determines the degree of risk, is one in which fires occur among vulnerable people who simply do not know when the disaster may occur or what protection measures to take, coupled with negative attitudes towards the use of certain measures. This increases human suffering from a disastrous situation. A good example is the case in Nigeria where exploding oil tankers killed thousands. Before such incidents, people tend to take these scenarios for granted, not knowing that oil tankers can cause devastating effects. Since a disaster can happen anywhere and at any time, everyone should be prepared.

Fire protection of office documents

The information on a computer is stored on a hard drive. Hard drives have moving parts and can wear out, and the computer itself could be damaged in a disaster. Data can also be lost through accidental deletion. Backups are used to recover data in case of system failure or disaster. Backups are taken on data storage devices and can be made either at the file level or of the complete system. Data should also be archived offsite to guard against disasters like fire; the information is sent to the remote site across a network. Original software comes on CDs and does not need to be backed up, but the data files do. E-mails and personal settings on a computer need to be backed up as well. The frequency of the backup should be decided based on the volume and the importance of the information.

It is important to keep spare hardware and cartridges in case of failure during the backup. The backed-up data must be checked regularly to ensure that the data is safe and the procedures are correct. The backup hardware and software must be tested before use. The simplest backup setup has one computer and one user; otherwise, a networked computer that hosts file sharing can be used.

The backup procedure starts with the selection of the device on which to record the backups. Options include floppy disks, tapes, removable hard disks, cartridges, and CD-ROMs. The devices should have more than enough space to back up all the data. A controller card can be used to interface with the backup drive, and the backup drive is connected to the computer to back up the system state data. It is necessary to create a schedule for the backups, and it is better to use separate disks each time a backup is done; this helps when an older copy of a file is required.

Given the time and effort that goes into creating work, a high-level backup plan should be in place for your system. All employees should be conversant with the backup procedures so that everyone keeps their work safe.

High level disaster recovery plan for a business

The computer system of any organization is sometimes subject to failure. The main reasons for failure are disk crashes, power failures, software problems, or disasters like fire. It is important that work is not lost, hence a recovery mechanism is required that restores the system to its state before the failure. A transaction failure is caused by a logical or system error: a logical error is due to internal conditions with data and resources, while a system error is due to an undesirable state of the system. A system crash occurs with a hardware malfunction or a bug, which brings transaction processing to a halt. A disk failure implies that the disk loses its contents.

Recovery algorithms ensure that enough information is kept to allow recovery from failures, and that actions taken after a failure recover the contents of the work. Information in volatile storage is usually lost in a system crash, whereas non-volatile storage retains the information.

Some systems have battery backups so that the information is not lost in a power failure.

Stable storage is used so that the information in the system is not lost. To approximate it, several non-volatile storage media, such as disks with independent failure modes, are used, and the information on them is regularly updated. The simplest scheme mirrors the data on a separate disk. For disasters like fire, the information is also stored at a remote site, and a remote system may be used. The information needs to be protected during data transfer; the transfer is considered complete only when the information has been recorded on all the physical blocks. Logs are used to record database modifications, and after a system failure some transactions need to be redone. Shadow paging maintains a current page and a shadow page for the transaction. Recovery techniques also include the use of logging and transaction rollback.
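
The log-based recovery idea mentioned above can be illustrated with a small sketch. The log records and database values below are illustrative assumptions, not a production recovery manager: each record stores the data item, its old value, and its new value, and after a crash the updates of committed transactions are redone from the log.

/* Minimal sketch of log-based recovery: redo committed updates from a log.
 * Data structures and values are illustrative assumptions only. */
#include <stdio.h>
#include <string.h>

struct log_record {
    int  txn;            /* transaction id */
    char item[8];        /* data item name */
    int  old_value;      /* value before the update (would be used for undo) */
    int  new_value;      /* value after the update (used for redo) */
    int  committed;      /* 1 if a commit record for this transaction is in the log */
};

/* Redo phase: reapply the new value of every update whose transaction committed. */
static void redo(const struct log_record *log, int n, int *A, int *B)
{
    for (int i = 0; i < n; i++) {
        if (!log[i].committed)
            continue;                 /* uncommitted work would be undone instead */
        if (strcmp(log[i].item, "A") == 0) *A = log[i].new_value;
        if (strcmp(log[i].item, "B") == 0) *B = log[i].new_value;
    }
}

int main(void)
{
    /* Database state on disk at the time of the crash (assumed). */
    int A = 100, B = 200;

    /* Log as found on stable storage after the crash (assumed). */
    struct log_record log[] = {
        {1, "A", 100, 150, 1},   /* T1 updated A and committed */
        {2, "B", 200, 250, 0},   /* T2 updated B but never committed */
    };

    redo(log, 2, &A, &B);
    printf("After recovery: A = %d, B = %d\n", A, B);   /* A = 150, B = 200 */
    return 0;
}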

Disasters are possible and hence it is important to be prepared. In case a disaster does occur the consequences can be devastating. A disaster recovery plan will keep you safe.

References

Holmberg, Jan (2006). Security Management and Security Systems at Maihaugen Open Air Museum. STSM.

Canadian Commission on Building and Fire Codes (1995). National Building Code of Canada. National Research Council of Canada, Ottawa, ON.

Drysdale, D. An Introduction to Fire Dynamics, 2nd edn. John Wiley & Sons.

Kalakota, R. & Whinston, A. (1999). Frontiers of Electronic Commerce, 1st edn. Addison Wesley Longman, Inc., USA.

Magnusson, S.E., et al. (1995). A Proposal for a Model Curriculum in Fire Safety Engineering. Fire Safety Journal, Vol. 25, pp. 1-88.

Coull, Mike (2006). Management Strategies to Secure Integration of Damage Limitation Teams and Professional Fire Services. STSM.

Silberschatz, A., Korth, H. & Sudarshan, S. (1997). Database System Concepts, 3rd edn. The McGraw-Hill Companies, Inc., USA.

Society of Fire Protection Engineers (SFPE) (1995). The SFPE Handbook of Fire Protection Engineering, 2nd edn. National Fire Protection Association, Quincy, MA.

Statens Fastighetsverk (2004). Inventory of Risk and Risk Assessment with Suggestions of Preventive Measures.

Simulink Broadens Video and Image Processing, Runs Simulations in Real Time

Introduction

Simulink broadens video as well as image processing with a wide range of provisions that offer a rich, customizable framework for rapid verification, implementation, simulation, and design of video and image processing algorithms and systems. Simulink includes fundamental as well as advanced algorithms used in many different applications, such as surveillance for defence, the medical electronics industry, teaching, consumer electronics, and automotive communications. The blockset for Simulink has many capabilities, including morphological operations, geometric transformations, two-dimensional (2-D) filters, motion estimation techniques, input/output (I/O) capabilities, and 2-D transforms. For C-code generation, simulation, and modelling, the blockset can handle floating-point as well as fixed-point data types. To help you quickly optimize and debug models, it includes statistical functions and the ability to analyze data. Its useful functions are extensive: it can validate simulation results, and it provides several image and video data visualization techniques, video displays, and scopes. The optical flow estimation technique is one of the methods it uses to estimate the motion vectors in each frame of a video sequence, for example to track cars.

Modelling and Simulating Video and Imaging Systems

With Simulink, the video and image processing blockset provides a dedicated library for modelling the behaviour of your imaging system. Regardless of your system's complexity, the Simulink environment offers tools for subsystem customization, data management, and hierarchical modelling that make it simple to produce crisp and precise representations. While some blocks support integer and fixed-point data types, all blocks in the video and image processing blockset support both single-precision and double-precision floating-point data types.

The video and image processing blockset and Simulink make it much easier to quickly run simulations for real-time embedded video, vision, and imaging systems. By creating executable specifications, you can communicate the system to downstream design teams and keep a golden reference for verification throughout the design process.

Generating and Optimizing C Code

To automatically generate ANSI/ISO C code from your model, the video and image processing blockset interfaces with Real-Time Workshop and Real-Time Workshop Embedded Coder, both of which are available separately. The generated C code can be used for large-scale simulations or deployed from your models onto programmable processors (DSPs or GPPs). Basic primitives as well as highly advanced video algorithms and other features for designing real-time video and imaging systems are all included in the video and image processing blockset library.

Multimedia I/O, Video Viewer, and Display Blocks

Files such as AVI, MPEG, WMA, or any other Windows Media supported file can be brought into the video and image processing blockset. With the video viewer, you can step through your simulations one frame at a time, since you are able to start, pause, or stop them. This makes it much easier to analyze the video stream in real time throughout the model. Video and imaging system models can now be designed and revised at a much faster pace because of these time-saving features. The video and image processing blockset offers several functions, including the following:

  • Relay real-time video data to a video output device, screen, or camera, as long as it is linked to the system
  • Watch the video stream from your workstation screen or your PC
  • Write the input video data to an array in the MATLAB workspace
  • Display intensity or RGB video streams as well as images
  • View video signals in Simulink models, video files, or the MATLAB workspace easily by using the MPlay GUI
  • Convert video frames into a multimedia file for analysis, whose results can be easily shared

Text as well as graphic objects can be inserted into the video stream through this blockset. Images can be combined and overlaid, and regions can be marked, all through this blockset. Using a blob analysis block together with a Kalman filtering subsystem in a people-tracking application, every person in a video frame can be detected and tracked. The same people can be tracked systematically from one frame to the next, with each individual highlighted by a bounding box.
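
The Kalman filtering step of such a tracking subsystem can be sketched for a single coordinate. The following is a minimal one-dimensional constant-velocity Kalman filter in C under assumed noise parameters; the measurements stand in for blob-analysis centroids, and the blockset's own Kalman block is of course more general than this sketch.

/* Minimal 1-D constant-velocity Kalman filter, as might track one coordinate
 * of a detected person from frame to frame. Noise parameters and measurements
 * are illustrative assumptions, not values from any particular model. */
#include <stdio.h>

int main(void)
{
    const double dt = 1.0;      /* one video frame between measurements */
    const double q  = 0.01;     /* process noise added to each covariance term */
    const double r  = 4.0;      /* measurement noise variance (pixels squared) */

    /* State: position p and velocity v, with 2x2 covariance P. */
    double p = 0.0, v = 0.0;
    double P11 = 100.0, P12 = 0.0, P21 = 0.0, P22 = 100.0;

    /* Measured centroid positions from blob analysis (assumed values). */
    const double z[] = {10.2, 12.1, 13.9, 16.2, 18.0};
    const int n = sizeof z / sizeof z[0];

    for (int k = 0; k < n; k++) {
        /* Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]]. */
        p += dt * v;
        double n11 = P11 + dt * (P12 + P21) + dt * dt * P22 + q;
        double n12 = P12 + dt * P22 + q;
        double n21 = P21 + dt * P22 + q;
        double n22 = P22 + q;
        P11 = n11; P12 = n12; P21 = n21; P22 = n22;

        /* Update with measurement z[k] of position (H = [1, 0]). */
        double y  = z[k] - p;          /* innovation */
        double S  = P11 + r;           /* innovation variance */
        double K1 = P11 / S;           /* Kalman gain for position */
        double K2 = P21 / S;           /* Kalman gain for velocity */
        p += K1 * y;
        v += K2 * y;
        double u11 = (1.0 - K1) * P11;
        double u12 = (1.0 - K1) * P12;
        double u21 = P21 - K2 * P11;
        double u22 = P22 - K2 * P12;
        P11 = u11; P12 = u12; P21 = u21; P22 = u22;

        printf("frame %d: estimated position %.2f, velocity %.2f\n", k, p, v);
    }
    return 0;
}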

Algorithms Used For Video and Image Processing

Primitives for geometric transformations, transforms, and 2-D filters are provided in this video and image processing blockset. Tasks like noise reduction, smoothing, and sharpening are performed with the help of the 2-D filters. The frequency content of the video stream is analyzed with the 2-D transforms; a fine example is the removal of unwanted frequency content in MPEG, where the DCT is used to condense the video pixel information. With this blockset, you can execute 2-D FIR filtering of an input matrix I by using a filter coefficient matrix H. The blockset also helps you translate images for alignment or registration, rotate and resize them, and apply projective transformations. It also enables you to output the complex two-dimensional fast Fourier transform (2-D FFT) of a real or complex input.
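
A 2-D FIR filter of the kind this block performs is essentially a two-dimensional convolution of the image I with the coefficient matrix H. The following minimal C sketch, with an assumed small image, an assumed 3-by-3 averaging kernel, and "same as input" output size with zero padding at the borders, illustrates the operation.

/* Minimal 2-D FIR filtering sketch: filter image I with coefficient matrix H.
 * Image values and the 3x3 averaging kernel are illustrative assumptions. */
#include <stdio.h>

#define M 4        /* image rows */
#define N 5        /* image columns */
#define KH 3       /* kernel rows */
#define KW 3       /* kernel columns */

int main(void)
{
    double I[M][N] = {
        {1, 2, 3, 4, 5},
        {2, 3, 4, 5, 6},
        {3, 4, 5, 6, 7},
        {4, 5, 6, 7, 8},
    };

    /* 3x3 averaging (smoothing) kernel. */
    double H[KH][KW];
    for (int i = 0; i < KH; i++)
        for (int j = 0; j < KW; j++)
            H[i][j] = 1.0 / (KH * KW);

    double out[M][N] = {{0}};

    /* "Same as input" output size with zero padding outside the image. */
    for (int r = 0; r < M; r++) {
        for (int c = 0; c < N; c++) {
            double acc = 0.0;
            for (int i = 0; i < KH; i++) {
                for (int j = 0; j < KW; j++) {
                    int rr = r + i - KH / 2;
                    int cc = c + j - KW / 2;
                    if (rr >= 0 && rr < M && cc >= 0 && cc < N)
                        acc += H[i][j] * I[rr][cc];   /* multiply-accumulate */
                }
            }
            out[r][c] = acc;
        }
    }

    for (int r = 0; r < M; r++) {
        for (int c = 0; c < N; c++)
            printf("%6.2f ", out[r][c]);
        printf("\n");
    }
    return 0;
}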

Primitives for Filtering, Transforms and Geometric Transformations

Through this blockset, geometric transformations, 2-D transforms, and 2-D filters are provided. Duties such as noise removal, smoothing, and sharpening are performed by the 2-D filters, and a video stream's frequency components are distinguished by the 2-D transforms. To trade off performance against precision, the geometric transformation blocks provide three interpolation methods: nearest neighbour, bilinear, and bicubic.

Example: Select the Separable filter coefficients check box if your filter coefficients are separable. Using separable filter coefficients reduces the number of calculations the block must perform to compute the output. For example, suppose your input image is M-by-N and your filter coefficient matrix is x-by-y. For a non-separable filter with the Output size parameter set to Same as input port I, the block needs x * y * M * N multiply-accumulate (MAC) operations to calculate the output. For a separable filter, it takes only (x + y) * M * N.
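
The saving can be checked with a small calculation. The sketch below, for an assumed 640-by-480 image and an assumed 5-by-5 separable kernel, prints the multiply-accumulate counts for the non-separable and separable cases.

/* Multiply-accumulate (MAC) counts for non-separable vs separable 2-D filtering.
 * Image and kernel sizes are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    long M = 480, N = 640;   /* image size */
    long x = 5,  y = 5;      /* filter coefficient matrix size */

    long non_separable = x * y * M * N;       /* full 2-D convolution */
    long separable     = (x + y) * M * N;     /* row pass plus column pass */

    printf("non-separable: %ld MACs\n", non_separable);   /* 7,680,000 */
    printf("separable:     %ld MACs\n", separable);       /* 3,072,000 */
    return 0;
}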

Colour Operations

You need colour space conversion operations to represent and manipulate colour signals across different video formats. By decoupling colour information from luminance, colour space conversion makes it possible to process these two components independently. With these colour operations, widely used colour formats can be converted: RGB to or from YCbCr, RGB to or from XYZ, RGB to or from HSV, and RGB to or from L*a*b*. They also make it possible to apply gamma correction to an image or to remove gamma correction from an image. An intensity image can be automatically or manually converted to a binary image, a process referred to as binarization. You can also upsample or downsample the chrominance components of images.
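
As an example of such a conversion, the sketch below converts one 8-bit RGB pixel to YCbCr using the commonly used ITU-R BT.601 relations in their full-range form; the pixel value itself is an assumption chosen for illustration.

/* RGB to YCbCr conversion for one 8-bit pixel (ITU-R BT.601, full-range form).
 * The input pixel value is an illustrative assumption. */
#include <stdio.h>

int main(void)
{
    double R = 200.0, G = 120.0, B = 60.0;   /* an orange-ish pixel, range 0..255 */

    double Y  = 0.299 * R + 0.587 * G + 0.114 * B;   /* luminance */
    double Cb = 0.564 * (B - Y) + 128.0;             /* blue-difference chroma */
    double Cr = 0.713 * (R - Y) + 128.0;             /* red-difference chroma */

    printf("Y = %.1f, Cb = %.1f, Cr = %.1f\n", Y, Cb, Cr);
    return 0;
}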

Analyzing Videos and Images

To get concrete information from video streams, you need image analysis techniques. These techniques are used for extracting image features, correcting non-uniform illumination, and removing noise. The video and image processing blockset enables you to perform the following functions:

  • Identify motion in a video sequence by using the optical flow technique, or estimate motion with block matching based on the 2-D sum of absolute differences (SAD); a minimal SAD sketch follows this list
  • Match patterns against an existing template by using the cross-correlation technique
  • Judge the relative focus of a video scene
  • Track and classify moving objects
  • Separate the foreground from the background with segmentation techniques
  • Identify object boundaries in an image frame with the Roberts, Prewitt, Sobel, or Canny edge detection methods
  • Calculate statistics for labelled regions in a binary image and return spatial coordinate values, such as bounding boxes or centroids, by using blob analysis
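
As referenced in the list above, sum-of-absolute-differences block matching compares a block from the current frame against candidate positions in the previous frame and picks the displacement with the lowest SAD. The following minimal C sketch uses small assumed frames and block sizes; a real motion estimator would search only a window around the block rather than the whole frame.

/* Minimal SAD block-matching sketch: find the motion vector of one 2x2 block
 * by exhaustive search over a previous frame. Frame contents are assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define H 6
#define W 6
#define BS 2   /* block size */

/* Sum of absolute differences between the block in cur at (r,c) and a
 * candidate block in prev at (pr,pc). */
static int sad(int cur[H][W], int prev[H][W], int r, int c, int pr, int pc)
{
    int s = 0;
    for (int i = 0; i < BS; i++)
        for (int j = 0; j < BS; j++)
            s += abs(cur[r + i][c + j] - prev[pr + i][pc + j]);
    return s;
}

int main(void)
{
    /* The previous frame contains a bright 2x2 patch at (1,1); in the current
       frame the same patch has moved to (2,3). */
    int prev[H][W] = {0}, cur[H][W] = {0};
    prev[1][1] = prev[1][2] = prev[2][1] = prev[2][2] = 255;
    cur[2][3]  = cur[2][4]  = cur[3][3]  = cur[3][4]  = 255;

    int r = 2, c = 3;                 /* block to match in the current frame */
    int best = -1, best_dr = 0, best_dc = 0;

    /* Exhaustive search over all valid positions in the previous frame. */
    for (int pr = 0; pr + BS <= H; pr++) {
        for (int pc = 0; pc + BS <= W; pc++) {
            int s = sad(cur, prev, r, c, pr, pc);
            if (best < 0 || s < best) {
                best = s;
                best_dr = pr - r;
                best_dc = pc - c;
            }
        }
    }

    printf("motion vector: (%d, %d), SAD = %d\n", best_dr, best_dc, best);
    /* Expected: (-1, -2), i.e. the block came from one row up, two columns left. */
    return 0;
}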

Two-dimensional statistical analysis, including standard deviation, mean, minimum, maximum, correlation, variance, and histograms, can be accomplished through the statistics library. Mean, variance, and standard deviation are calculated over a Region of Interest (ROI).


Ampere-Hour Meter Overview

Introduction

A caravan is a vehicle designed for living in; it supports a way of life and is used as a home, so it consumes electric current just as a home does. A caravan contains home appliances and hence requires a continuous supply of electric current. Even though a caravan is a vehicle, it is equipped like a home, so the power supply becomes a problem. Like other vehicles, it uses a battery, and different power supply systems are used in caravans.

Among these, power may come through an electric cable as at home, from a solar panel, from a generator, or from a rechargeable battery. In a caravan, a rechargeable battery is generally used for the power supply; this battery provides electric current for all the devices in the caravan. Caravanning, however, requires the use of high-drain equipment such as televisions, fridges, and pumps.

A plan of a caravan is shown below. The family in the caravan uses the electronic devices at times convenient to them.

A plan of a caravan

Regular car batteries may be used for caravanning, but they may not be very effective. Car batteries are designed to provide large amounts of power over short periods, mainly for starting the engine, and they are then quickly recharged. The caravan battery, in contrast, should be recharged at the right time: for uninterrupted operation, recharging should be done according to the charge remaining in the battery. Generally, generators are used for this purpose.

But the noise of a generator prevents its use in the caravan at night, so other reliable and simple methods are needed for finding a convenient time to recharge the battery. The electrical devices in a caravan draw a large amount of current because high-power equipment is in use, so the battery may discharge quickly and run out, possibly at night. By knowing the status of the battery, we can simply recharge it when needed. The status should show the present state as well as the remaining capacity of the battery. An ampere-hour meter is the solution to this situation.

Purpose of an ampere hour meter

The purpose of this project is to solve the problem of the power supply of a caravan. The goal of the ampere-hour meter on the battery is to provide information about battery life; it is the key instrument that shows battery capacity. The name ampere-hour combines two different units, so it is a derived unit: one ampere equals one coulomb per second and one hour equals 3600 seconds, so one ampere-hour corresponds to (1 coulomb/second) * (3600 seconds) = 3600 coulombs.

The unit of electric current is the ampere and the unit of time is the second (1/3600 of an hour). The ampere is an SI unit of electric current, defined as the rate of flow of charge, i.e. coulombs per second. The ampere-hour rating of a power supply can therefore be expressed in coulombs, the unit of charge. An ampere-hour meter can act as a battery charge indicator. For example, a 70 Ah battery can supply 70 amperes for 1 hour continuously, or equivalently 1 ampere for 70 hours. The charge quantity in the battery is expressed in coulombs.

Design of ampere hour meter

The ampere hour meter should alert the user about the amount of charge left in the battery. Many types of ampere-hour meter have been manufactured in the past, the most important being electrolytic meters and motor meters. Theoretically the former are capable of very accurate registration but in practice the working results are not so good as with motor meters, and the latter are preferred by most supply authorities. (Direct Current Meters).

In the past, electrolytic or motor-based ampere-hour meters were used; these devices were bulky and troubleshooting them was tedious. The conventional method for finding the ampere-hours or charge content of a battery is as follows. A shunt resistor produces a voltage across it proportional to the current being measured. This voltage is then converted into a frequency proportional to the voltage by using a voltage-to-frequency converter.

A frequency divider circuit is then used to find the ampere-hours (Ah) corresponding to the summed current, and the result is fed to an LED, which indicates the measured value against a reference. An ampere-hour meter can be designed in different ways; here we use an ADC (analog-to-digital converter) and a microprocessor or microcontroller. The ampere-hour meter is connected in series with the battery. An ammeter, also connected in series with the circuit, is used to find the electric current, and its resistance must be very low. The ADC is used in this circuit to measure the voltage across the shunt resistor; it converts the analog signal into digital form, with the conversion referenced to a reference voltage.

The battery voltage in a caravan is generally 12 V or 24 V. The voltage across the shunt is read by the ADC, which outputs digital data. This data is then handled by the microcontroller unit, which calculates the ampere-hours. The calculated value is sent to the display section, where an LCD presents the ampere-hour value visually.
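
A minimal sketch of the firmware's main idea is given below. The functions read_adc, lcd_show_ah, and delay_ms are hypothetical placeholders for the hardware access, and the ADC resolution, current scaling, and sampling period are illustrative assumptions; the point is the accumulation of sampled current over time into an ampere-hour total.

/* Minimal ampere-hour accumulation sketch. Hardware access is represented by
 * hypothetical placeholder functions; all numeric figures are assumptions. */
#include <stdio.h>

#define ADC_MAX        1023      /* 10-bit ADC full-scale count (assumed) */
#define AMPS_FULLSCALE 5.0       /* 0-5 A mapped onto the full ADC range (assumed) */
#define SAMPLE_MS      1000      /* one current sample per second (assumed) */

/* Placeholder: would read the ADC channel wired across the shunt. */
static unsigned read_adc(void) { return 512; }          /* about 2.5 A for the demo */
/* Placeholder: would write to the LCD; here we print to the console instead. */
static void lcd_show_ah(double ah) { printf("%.4f Ah used\n", ah); }
/* Placeholder: would wait for the sampling period on real hardware. */
static void delay_ms(unsigned ms) { (void)ms; }

int main(void)
{
    double ah_used = 0.0;

    for (int sample = 0; sample < 10; sample++) {        /* a few iterations for the demo */
        double amps = read_adc() * AMPS_FULLSCALE / ADC_MAX;   /* scale count to amperes */

        /* Accumulate charge: current multiplied by the hours elapsed in this period. */
        ah_used += amps * (SAMPLE_MS / 1000.0) / 3600.0;

        lcd_show_ah(ah_used);
        delay_ms(SAMPLE_MS);
    }
    return 0;
}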

Components for the ampere hour meter

The main components used in the design of an ampere-hour meter are the ADC, the microcontroller, and the LCD (liquid crystal display) unit. ADCs are available in various types: an ADC can be built from comparator circuits, and integrated-circuit versions are also available. The ADC acts as a voltmeter in the direct-current (DC) circuit; the ADC0800, for example, is an 8-bit monolithic analog-to-digital converter using P-channel ion-implanted MOS technology. The microprocessor or microcontroller is the main unit in the ampere-hour meter: a PIC microcontroller or an 8085 can be used, where the 8085 is an 8-bit microprocessor and the 8051 is an example of an 8-bit microcontroller. A shunt resistor is inserted into the circuit so that its parameters can be measured; a shunt is simply a resistor of very low value (frequently less than one ohm) that is used to help measure current.

A shunt resistor can be made from copper wire. The resistance of the wire should be chosen according to the sensitivity of the instrument: the shunt provides a low-resistance path and thereby scales the sensitivity of the instrument to a known quantity. The shunt resistance should be designed according to the amount of current in the circuit; here the current varies from 0 to 5 amps. The shunt can be calibrated to yield more accurate measurements.
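
The shunt value can be chosen from the maximum expected current and the voltage drop the measuring circuit can accept. The sketch below works this out for assumed figures (5 A maximum current and a 50 mV full-scale drop); the numbers are illustrative, not part of the original design.

/* Shunt resistor sizing sketch: choose R so the maximum current produces the
 * desired full-scale voltage drop. The figures used are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double i_max  = 5.0;       /* maximum expected current in amperes (assumed) */
    double v_full = 0.050;     /* acceptable full-scale drop in volts (assumed) */

    double r_shunt = v_full / i_max;             /* Ohm's law: R = V / I */
    double p_max   = i_max * i_max * r_shunt;    /* worst-case dissipation: I^2 * R */

    printf("shunt resistance: %.3f ohm\n", r_shunt);          /* 0.010 ohm */
    printf("power rating needed: at least %.2f W\n", p_max);  /* 0.25 W */
    return 0;
}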

Principle of Ampere hour meter

An ampere-hour meter is an integrating meter, similar to the watt-hour meter used to measure electricity usage in the home. Typical ampere-hour meters are digital indicators similar to the odometer in an automobile. It is a direct-current meter that will register in either direction depending on the direction of current flow. Like an ammeter, the ampere-hour meter is connected in series. (Ampere Hour Meter About).

The main principle in the design of an ampere-hour meter is the dependence of the source impedance of the battery on its charge content. Consider a caravan with a battery as the power supply; all the electrical devices are connected in parallel across the battery as the load. To find the ampere-hours, we first have to obtain the electric current reading. An ammeter for a caravan can measure up to 10 amperes, because of the high-current appliances used in the caravan. A voltage proportional to the current is produced across the shunt resistor. The figure below shows a phase detector in which the reference switching signal and the input signal are in phase.

Principle of Ampere hour meter

The output of a phase detector is a direct current proportional to the alternating-current signal level. Amplification is generally needed for proper detection of the signal, but here the battery provides a high current, so amplification of the input signal is not required. A phase detector circuit provides a DC voltage proportional to the amplitude of the input wave, and this is used to monitor changes in the source impedance of the supply battery.

In this way we can find out the state of the battery: it gives the state of charge, which is related to the source impedance of the battery. The phase detector uses a low-pass filter, which removes signals with frequencies above its cut-off frequency, so at the output of the low-pass filter we obtain a direct current (DC) level. In the discharged condition the battery shows high impedance, and the impedance decreases as the charge in the battery increases.

The ADC is connected to the circuit to measure the voltage; it converts the continuous signal from the ammeter into digital form, and this digital data is then passed to a microcontroller. A microcontroller contains highly complicated circuits, but it can be used very easily. It has different ports for digital communication with its surroundings: serial and parallel ports pass information to and from the microcontroller.

If the analog-to-digital converter has a parallel output, we can connect this output to a parallel port of the microcontroller. A microcontroller contains many different units: the arithmetic and logic unit, the control unit, and memory are some examples of its internal peripherals. So a microcontroller is a complex device in terms of its internal parts, but it can be programmed simply in assembly language. The program can be developed in simulation software, so that errors can be found and the working of the code visualised; this reduces both the cost and the time of the design.

Generally the program is written in assembly language and then translated into its hexadecimal format. A microcontroller has internal memory to store this program. Another kind of memory in the microcontroller is used for calculations; this memory is volatile, very fast, and consists of registers that work closely with the other internal units. The program itself is stored in the read-only memory (ROM) of the microcontroller, which can be erased electrically; this kind of read-only memory is called electrically erasable ROM (EEPROM).

A microcontroller has the further advantage that it can be reprogrammed without changing the connections in the circuit; when we need to improve the device, we do not change the wiring, we just change the program. The analog-to-digital converter acts as a voltmeter, which is commonly used for measuring voltage in the circuit, and it samples the input level.

The sampled output is fed to the microcontroller unit, which can perform various operations on this data. The data are manipulated by the controller, and the summation is done in the controller to obtain the ampere-hours of the battery. A liquid crystal display (LCD) shows the result visually. The LCD must first be interfaced with the controller; interfacing means connecting the LCD to the controller and programming it. After the LCD has been interfaced, the microcontroller sends the result to the display. The block diagram of the ampere-hour meter is shown below.

Principle of Ampere hour meter

Project analysis

The electric current from the battery is passed through the shunt resistor to obtain a suitable sensitivity for the current measurement. An ADC is connected across (in parallel with) the shunt resistor; it acts as a voltmeter and converts the analog voltage into digital form. The combination of the ADC and the shunt resistor therefore acts as an ammeter, giving the current reading in digital form. This data is then supplied to one of the ports of the microcontroller.

The microcontroller is programmed to compute the Ah value, with the measured current as its input. The microcontroller gets its power supply from the same battery, but the battery provides 12 V and the microcontroller needs 5 V, so a 7805 voltage regulator IC (integrated circuit) is used. The value calculated by the microcontroller is then sent to the LCD. The measurement is accurate because the data is handled digitally.

If the battery is fully charged, the source impedance of the battery is low, and when the battery is nearly run out the impedance is at its peak; the impedance of the battery is therefore inversely related to its charge content. The battery charge also depends on the load in the caravan: when all the major current-drawing devices are used at the same time, the ampere-hour meter shows a very low remaining value. So reduce the load as the Ah reading decreases and recharge the battery. The ampere-hour meter should be checked every evening so that complete discharge of the battery at night can be avoided. The use of a microcontroller and an analog-to-digital converter reduces the error in determining the ampere-hours, and precision is also increased because the meter works digitally.

Cost of project

An ampere-hour meter has been designed and developed for finding the status of the battery in a caravan. The components used in the circuit are easily available and cheap; only the microcontroller, the ADC, and the LCD are somewhat complex units. The whole device can be designed and manufactured for less than $60.

Conclusion

The ampere-hour meter is fitted in series with the battery whose charge is to be measured. A lack of precision and reliability in traditional ampere-hour meters has kept their usage low, so nowadays the use of ampere-hour meters has declined, and other precise electronic devices are available for finding the exact charge in a battery. The objective of the ampere-hour meter is to warn about the remaining life of the battery, and a design based on a microcontroller is very reliable.

In practice, however, the precision of a simple ampere-hour meter is low; it mainly shows whether current is flowing from the battery. The invention of the microprocessor was a milestone in the history of electronics, and microcontrollers with advanced features are now available, so the design of a highly precise, reliable, and error-free ampere-hour meter has become straightforward. The digital world is growing at incredible speed, and the electronics field provides ever more convenient and simple products for our lives.

Works Cited

. Engineers Edge: Solution by Design. 2009. Web.

Direct Current Meters. 2009. Web.

Shaw, Ian. Block Diagram of a Phase Detector. FAS: Phase Detector Utilising TRAC. 1998. Web.