Introduction
Concentration of highly skilled personnel and specialized equipment, as well as cost effectiveness, is among the driving forces when several hospitals consider combining their work at a central, off-site laboratory (Vonderschmitt 1991, p. 89). Improved communication of results via laboratory and hospital information systems lessens the negative impact on turnaround time usually feared by clinicians. A close working relationship between the centralized and local laboratories is mandatory for centralization to be successful. Flexibility in the centralization or decentralization equation may prevent, or at least lessen, the feeling of loss of control that may accompany centralization. According to some scholars, a centralized system is needed to provide a mechanism for managing the huge volumes of data now being generated by laboratory technologists as a result of advancements in technology (Zhu 2005, p. iv).
According to Cowan (2005, p. 7), the laboratory is both a light industry and an office workplace. It is no longer feasible to administer a busy laboratory without the aid of electronic data processing. Generally, “laboratory informatics comprises the theoretical and practical aspects of information processing and communication, based on knowledge and experience derived from processes in the laboratory and employing the methods and technology of computer systems” (Van Bemmel 1984, p. 175). In this application, informatics focuses mainly on deployment, planning, and policy development.
In addition, utilization review, computer consultation, rules-based expert systems, and decision support are growing areas of attention. Laboratory informatics, reflecting the laboratory’s status as a technology center, is more technology-oriented than general medical informatics. As pointed out by Zhu (2005, p. iv), it is the large amount of data generated by research laboratories, together with statutory requirements, that has prompted companies to turn to laboratory information management systems. Among other benefits, a LIMS improves the tracking process, assists in the management of samples, and makes it possible to accurately report results from laboratory tests. However, most LIMS have to work with other applications in order to deliver the expected results (Goldschmidt et al. 1998, p. 5).
In the medical laboratory, decision making is multidimensional. From hiring the right person to join the team to selecting the right equipment, reasoning is critical and typically requires input from many people. Centralization of laboratory information will therefore facilitate easy sharing of information and improve decision making.
Search Strategy
The Internet contains a great deal of useful information and will be used heavily in this research. In addition, reference will be made to various scientific books and databases that focus on the subject of laboratory information centralization. A well-constructed web search combined with a well-constructed database search will provide a wealth of information relevant to the research. It is, however, important to note that although databases contain a great deal of information, little pertinent information will be obtained without a proper search strategy (Conn 2007, p. 53).
The search strategy for this study will begin with a review of work done by other scholars on the subject of centralizing laboratory information. Among other things, the review will focus on the findings of studies by different scholars, and an analysis of the methodologies used will be presented (Kukoyi 2008, p. 23).
Overview of the Literature
This section is an overview of existing literature on the subject of centralizing laboratory information. Findings of different studies are examined and the methodologies used by various authors analyzed. The section also includes a discussion of data management systems and, specifically, the motivation for the development of laboratory information management systems (LIMS).
Laboratory Information Management
The laboratory is typically a component of a larger entity such as a medical or hospital care system (Goldschmidt et al. 1998, p. 10). If it is free-standing, it offers laboratory information services to clients that may include hospitals, physician offices, clinics, patients, and companies that offer or require laboratory testing of their employees. Generally, a centralized laboratory system can generate revenue as a reference laboratory for physicians and other health care institutions in the service area. The information system must support a variety of situations. Communication of orders and results should be automated and must reach the appropriate practitioners regardless of their location (Cleverley 1989, p. 647). Direct communication between the laboratory and other ancillaries such as the pharmacy must be online and available when needed. Billing must be extremely flexible to ensure maximum returns within this diversified department.
According to Paszko and Turner (2001, p. 83), Laboratory Information Management Systems are complex systems that integrate hardware, software, people, and procedures. In most cases, it is easy to focus on the hardware and software aspects of a LIMS to the extent that the people and procedural aspects are completely overlooked. Typically, the LIMS is what a laboratory uses to track and manage its information resources, particularly the data that represents the laboratory’s product. Any change in the data handling system therefore engenders potentially traumatic changes in the way that the laboratory operates, and the laboratory staff is called upon to adopt new routines. To function effectively, the LIMS has to be compatible and integrated with the quality and business objectives of the laboratory.
The LIMS implementation team should be composed of representatives from each department that will be affected by the LIMS. Users, information services personnel, financial personnel, customer service representatives, clients, analysts, and managers all need to be involved from the beginning of the project.
Benefits of Laboratory Information Management Systems
According to Mozayani (2011, p. 444), the benefits of LIMS software have been well established over the last decade, and the efficiencies it has brought have allowed many forensic laboratories to deal with numerous complex cases. From its early days of very limited use, LIMS software is today widely deployed, and its popularity among laboratory technologists will continue to grow.
Quality control and quality assurance are two key issues that, apparently, have been addressed by the use of LIMS software. Quality control can be defined as the operational techniques and activities required to maintain and improve product or service quality. Ideally, it is important to select a LIMS based on an open architecture and Open Database Connectivity compliance so that it can communicate effectively with other applications. Users should avoid LIMS that use proprietary databases (Strom 2006, p. 35).
Typically, the LIMS contains a great deal of information on laboratory operation, data quality, and performance. In spite of this, few users effectively mine the information in their LIMS. Many LIMS contain query builders or screens on which users can check boxes for the information they are interested in retrieving (Waegemann 1996, p. 63). For example, a laboratory manager can obtain information on each analyst, such as the sample volume analyzed per day, per month, per year, by test, by client, or by the number of audits signed off. Users can also examine workload by department and by instrument, as well as turnaround times for each department (Strom 2006, p. 35). By measuring overall laboratory performance, laboratory managers can then identify areas for improvement and also commend areas that are performing well. In short, therefore, all LIMS can play a significant role in the overall quality of operations. Many of the reports generated from a LIMS, such as analysis reports, statistical process control charts, and trend analyses, provide significant insight into overall product quality.
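As a rough illustration of this kind of data mining, the following sketch queries a hypothetical LIMS database over ODBC for average turnaround time by department. The connection details and the sample_log table with its received_at and reported_at columns are assumptions for illustration, not features of any particular commercial LIMS.

```python
# Illustrative only: assumes a hypothetical LIMS table "sample_log" with
# columns department, received_at and reported_at, reachable over ODBC.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=lims-server;DATABASE=lims;UID=reporting;PWD=secret"
)

query = """
    SELECT department,
           AVG(DATEDIFF(minute, received_at, reported_at)) AS avg_tat_minutes,
           COUNT(*) AS sample_count
    FROM sample_log
    WHERE reported_at IS NOT NULL
    GROUP BY department
    ORDER BY avg_tat_minutes DESC
"""

# Each row gives one department's average turnaround time and sample volume.
for department, avg_tat, count in conn.cursor().execute(query):
    print(f"{department}: {avg_tat:.0f} min average turnaround over {count} samples")
```

Because the query runs against the central database rather than a single instrument, the same report covers every department that feeds the LIMS.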
Data Management Systems
According to Tharayil (2007, p. 12), data management for laboratory operations encompasses experimental design, data generation and acquisition, data modeling, data integration, and data analysis. A number of past studies indicate that the use of data management systems by laboratory technologists continues to increase, prompted by the realization that these professionals need a system that enables central access to laboratory information. As pointed out earlier, one major advantage of centralizing laboratory information is that it allows medical laboratory technologists to easily share information and fast-track the decision making process. Several tools necessary for effective data management are discussed below.
A Laboratory Information Management System (LIMS) makes it possible for laboratory technologists to trace and track all samples or specimens through a laboratory process. Steps in such a process would typically include data analysis and storage, tracking the analysts involved, and recording the date and time of each step in the analysis process. A discovery system is a database repository that integrates all available information derived from the processing and analysis of samples into a more comprehensive context. Finally, workflow systems are process management systems that are very similar to LIMS. Among other functions, these systems help laboratory technologists manage the vast quantities of data that pass through many stages of processing and analysis.
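The sketch below illustrates, in minimal form, the kind of chain-of-custody record such tracking implies: each step is stamped with the analyst and the date and time. The class and field names are illustrative and are not drawn from any specific LIMS.

```python
# A minimal sketch of sample tracking; class and field names are illustrative,
# not taken from any particular LIMS.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProcessStep:
    name: str          # e.g. "received", "aliquoted", "analysed", "reported"
    analyst: str       # who performed the step
    timestamp: datetime

@dataclass
class Sample:
    sample_id: str
    steps: list[ProcessStep] = field(default_factory=list)

    def record_step(self, name: str, analyst: str) -> None:
        """Append a step stamped with the current date and time, as a LIMS audit trail would."""
        self.steps.append(ProcessStep(name, analyst, datetime.now(timezone.utc)))

specimen = Sample("S-2023-0417")
specimen.record_step("received", "j.doe")
specimen.record_step("analysed", "a.smith")
for step in specimen.steps:
    print(step.timestamp.isoformat(), step.analyst, step.name)
```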
Controversial Subjects Inherent to Centralization of Laboratory Information
Turnaround time is always affected by centralization. The effect is usually, but not always, negative. Turnaround time is an extremely important consideration in decisions concerning whether or not particular tests from distant sites should be centralized. By working closely with practicing physicians, laboratory technologists are able to identify valid rapid turnaround time requirements. These can be either handled locally or expedited centrally. Most inpatient and virtually all outpatient clients can be handled centrally. The turnaround time consideration has shifted significantly in favor of centralization through the use of information and telecommunication systems to transmit results from the central laboratory to the various sites from which testing emanates.
Loss of local control was felt most strongly at the inception of centralization. Frequent communication and formal mechanisms that have allowed input into the decision making process have ameliorated this as an issue. Clinicians have seen that the central laboratory is able to respond effectively to their needs and as such, they are in support of centralization.
Standardization of instrumentation in local facilities is strongly encouraged by the existence of an overall laboratory system. Despite some perceived loss of autonomy by local laboratory managers, the advantages of standardization are particularly compelling. A central planning activity that allows for contributions from all involved parties and abundant discussion of objective factors has therefore proven to be the best way to achieve homogenization without alienation.
Centralized Data Repository
Independent biological research laboratories routinely produce data at the terabyte scale. In order to realize its full value, this data needs to be organized, analyzed, queried, and reduced to useful scientific knowledge. Existing data management technology is often challenged by the instability, evolving nature, diversity, and implicit scientific context that characterize biological data. Most of the information that biological researchers are interested in is available in public reference databases, specialized private data sources, and the over 12 million articles of scientific research literature, most of which are accessible on the web (Tharayil 2007, p. 14). It is estimated that 80 per cent of biological data are in text form, and the rest resides in databases that range from indexed files to relational and specialized formats. Biological databases may contain primary data or may be built by integrating data from primary sources, in which case their integrity depends on the constituent sources. Given that many of these data sources are non-standard and not well documented, accessing, integrating, and sharing biological data certainly becomes a challenge and an art.
Despite the challenges, scientific users have a wealth of information available to them and have built specialized applications to access portions of it. According to Tharayil (2007, p. 14), many early data collections were initially created using word processing or spreadsheet applications. The widespread use of the web and Excel spreadsheets makes these ad hoc and unsustainable data access practices go unnoticed by many people. To the bioscientist, database development means the production of a dataset, not the construction of a system that manages data.
Many problems with life-science databases have their origins in the fact that lab technologists often lack the skills for managing large complicated data sets. In many cases, they do not have the proper data management tools to integrate their data with large public data sets. In most cases, even exchanging information among different lab technologists is a very difficult thing to accomplish.
Most public repositories of biological data focus on deriving and providing one particular type of data, be it biological sequences, molecular interactions, or gene expression. Integrating these disparate sources of data enables researchers to discover new associations between the data, or validate existing hypotheses.
Furthermore, if the biologist is interested in a potentially large number of records in a database, the collection of relevant data has to be automated. Ideally, therefore, each database would be equipped with programming interfaces that enable software developers to query and search it from within their own programs. Although modern database management systems support mature standard interfaces for this purpose, such as Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC), public access to these interfaces is rarely granted by database providers. The reasons for these access restrictions range from security concerns to political issues. There are, however, a few databases that allow access to life-science data through web services, a more recent technical standard. With the help of web services, predefined queries can be used to automatically access a remote database.
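As an illustration of such web-service access, the sketch below runs a predefined query against GenBank through NCBI's publicly documented E-utilities interface; the endpoint and parameters should be checked against the current documentation before use.

```python
# Illustration of querying a public life-science database through a web
# service; here NCBI's E-utilities interface to GenBank (endpoint and
# parameters as publicly documented, but verify against current docs).
import json
from urllib.request import urlopen
from urllib.parse import urlencode

params = urlencode({
    "db": "nucleotide",          # GenBank nucleotide database
    "term": "BRCA1[Gene] AND Homo sapiens[Organism]",
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urlopen(url) as response:
    result = json.load(response)

print("Matching records:", result["esearchresult"]["count"])
print("First IDs:", result["esearchresult"]["idlist"])
```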
Although many databases are freely accessible and may be mined through a web interface, they cannot be downloaded. Even where researchers have access to these information-rich databases, they do not have the tools or the ability to integrate their own data with them. For example, GenBank is a public repository with a wealth of information; even so, scientists do not have proper tools to integrate their data with this public data set without making their data public.
The distributed nature of laboratory data makes it a time consuming task to gather the sought information for one gene, and to do it for many is a manually intractable task. This implies that “there is a need for a query based access to an integrated database that disseminates biological rich information across large datasets and displays graphics summaries of functional information” (Dennis et al. 2003, p. 3).
There are many tools available for collecting, storing, querying, and visualizing genomic data. As high-throughput technologies for proteomics data become available, similar tools will probably be developed. Software for analyzing, querying, and visualizing the integrated data will then follow, but before such software can be used, the data has to be stored in a centralized location.
The most important tool for reaching an understanding of laboratory information at the level of laboratory information management systems is the analysis of laboratory data models. The basic building blocks for these models are existing experimental data and known pathways, which are stored in thousands of databases. Data integration makes it possible for lab technologists to assemble targeted data for analysis and to discover scientific relationships between data. As a result, database integration is an important issue that must be addressed for a holistic understanding of laboratory information management systems.
Custom Built LIMS
In many, if not most, cases, LIMS were initially developed in-house by organizations for data acquisition and reporting. Custom-built systems were then developed by independent systems development companies to run in specific laboratories. Parallel to these were initial efforts to create commercial LIMS; according to Bentley (1999), “such commercial LIMS were proprietary systems, often developed by analytical instrument manufacturers to run on the instruments that they produced” (n.p.). Today, most commercial LIMS are user customizable and offer a very high degree of flexibility and functionality. Many of the most popular commercial LIMS take advantage of open systems architectures and platforms to offer client/server capabilities and enterprise-wide access to laboratory information. Web-based LIMS are now also offered by vendors.
Presently, Extensible Markup Language (XML) is being incorporated into LIMS because it can enhance the information content of documents, simplify web automation, and integrate applications within or between organizations. Commercial systems developed for a particular industry nevertheless require considerable software customization to meet a specific laboratory’s needs; laboratories often have very specific formatting and reporting requirements. In general, customization is expensive in terms of the effort required to design, develop, test, and validate the software, and researchers are often not able to afford solutions of this nature (Fieschi 2004, p. 67). Most LIMS also suffer from a number of limitations, such as having a proprietary data storage format, being developed for a specific domain of experimentation, and lacking the capability to interact with other data management systems.
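To make the XML point concrete, the sketch below shows how a single laboratory result might be represented and parsed as XML. The element names are purely illustrative and do not follow any published LIMS or industry schema.

```python
# Sketch of exchanging a laboratory result as XML; the element names are
# illustrative and do not follow any particular LIMS or industry schema.
import xml.etree.ElementTree as ET

xml_document = """
<labReport>
  <sample id="S-2023-0417">
    <test code="GLU" name="Glucose">
      <result units="mmol/L">5.4</result>
      <reportedAt>2023-04-17T14:32:00Z</reportedAt>
    </test>
  </sample>
</labReport>
"""

root = ET.fromstring(xml_document)
for sample in root.iter("sample"):
    for test in sample.iter("test"):
        result = test.find("result")
        print(sample.get("id"), test.get("name"),
              result.text, result.get("units"))
```

Because the structure is self-describing, a receiving application can extract the fields it needs without depending on the sending system's internal storage format.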
Consider the case of AGCC, the LIMS for microarray work. AGCC helps laboratory technicians to track samples through the microarray experimental process, but it does not convey information regarding sample status that would be helpful to the researcher on the other side. Its capability to interact with other data management systems is very limited. If it had a way to exchange information regarding the status of each experiment being submitted, it would be far easier to make other systems interoperable with it; instead, a front-end system capable of AGCC-specific communication has to be written in order to get additional information from the system (Price et al. 2004, p. 56). The industry needs to take a new initiative in building systems that adhere to a set of standards enabling independent systems to interact with one another. This would provide seamless interoperability, improve efficiency and flexibility, and reduce installation and expansion costs (Day 2002, p. 16).
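The kind of front-end adapter described above might, in outline, look like the following sketch. The fetch_agcc_batch_report function is a hypothetical stand-in for AGCC-specific communication, not a real AGCC interface; the point is only that a thin adapter can expose instrument-specific status information through a generic interface.

```python
# Hypothetical sketch of the adapter pattern described above: a thin front end
# that translates between an instrument-specific system (the stand-in function
# fetch_agcc_batch_report, not a real AGCC API) and a generic status interface.
from typing import Protocol


class SampleStatusSource(Protocol):
    def status(self, sample_id: str) -> str: ...


def fetch_agcc_batch_report(sample_id: str) -> dict:
    # Placeholder for instrument-specific communication; in practice this would
    # parse the files or reports produced by the instrument software.
    return {"sample": sample_id, "state": "hybridisation complete"}


class AgccAdapter:
    """Presents instrument-specific information through the generic interface."""

    def status(self, sample_id: str) -> str:
        report = fetch_agcc_batch_report(sample_id)
        return report["state"]


def show_status(source: SampleStatusSource, sample_id: str) -> None:
    print(sample_id, "->", source.status(sample_id))


show_status(AgccAdapter(), "S-2023-0417")
```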
All of the factors discussed above led to the development of the current application: an easy-to-use, readily available laboratory information management system that is interoperable with a centralized data repository, providing a solution to the problem of interoperability between the researcher and the scientific community.
Methods and Methodologies
This section looks at the methods and methodologies used by different authors in carrying out their studies. Zhu used a combination of techniques, including several data collection methods. First, a middle-layer application was used to gather information from a LIMS database. Tables containing sample analytical information were then identified, and the information needed for analysis purposes was retrieved and securely stored. This information was then put in a format that was easy to work with and finally analyzed (Zhu 2005, p. 25). The process included checking and preparing raw data for analysis, carrying out an initial analysis based on the evaluation plan, undertaking additional analysis based on the initial results, and finally integrating and synthesizing the research findings.
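A rough sketch of such a middle-layer step, assuming a relational LIMS with a hypothetical analytical_results table, might look as follows; this illustrates the general approach rather than Zhu's actual implementation.

```python
# A rough sketch of the middle-layer step described above, not Zhu's actual
# implementation: pull analytical results from a (hypothetical) LIMS table,
# check and clean the raw data, then summarise it for analysis.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://reporting:secret@lims-db/lims")  # illustrative connection string

raw = pd.read_sql(
    "SELECT test_code, result_value, analysed_at FROM analytical_results",
    engine,
)

# Basic checking and preparation of the raw data.
clean = raw.dropna(subset=["result_value"])
clean = clean[clean["result_value"] >= 0]

# Initial analysis: simple per-test summary statistics.
summary = clean.groupby("test_code")["result_value"].describe()
print(summary)
```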
Similar to Zhu, Cowan followed an approach of defining, capturing, analyzing, transforming, transmitting, and reporting on the results of his study (Cowan 2005, p. 13). Although some of the steps are bound to change in internal performance over time, external reference data was used for comparative purposes, specifically in order to maintain the security and confidentiality of data and information.
Paszko and Turner, on the other hand, used an elaborate approach that entailed making sample analysis requests, collecting the samples, logging the samples into a LIMS, and distributing them after analysis (Paszko & Turner 2001, p. 85). The flexibility of their method allowed both quantitative and qualitative factors to be incorporated into the same methodology. In addition, the methodology allowed the evaluation team to define the major factors related to the use and maintenance of the laboratory information management system.
Another approach, taken by Cleverley (1989, p. 655), involved the use of case studies. Using different cases where centralized and decentralized systems were employed, a comparison could easily be made among the different LIMS and how they could be improved for efficiency. In addition to case studies, Cleverley also used probability and statistical models. The choice of these models reflects the fact that many laboratory processes are stochastic rather than deterministic: they are often subject to random fluctuation, and exact predictions about different occurrences are not possible. To analyze such a process with any degree of accuracy, the theory of probability and statistics had to be applied appropriately. The main objective of statistics is to make inferences about a population based upon the information contained in a sample of that population. Generally, a population consists of all the events, whether finite or infinite, with which the analyst is concerned; for example, a population under study may be all the medical laboratory technicians in a given hospital (Bliesner 2006, p. 3). The sample contains partial information about the population but is used to make inferences about the whole population. In line with this, various statistical techniques were used, such as estimation and the binomial, exponential, Poisson, and normal distributions.
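The following small numerical illustration, which is not drawn from Cleverley's study, shows the sort of probabilistic reasoning involved: daily sample arrivals are modelled as a Poisson process, the rate is estimated from a sample of observed days, and the fitted model is used to make an inference about the wider population of days.

```python
# A small numerical illustration (not taken from Cleverley's study) of treating
# a laboratory process as stochastic: sample arrivals modelled as Poisson,
# with the rate estimated from observed daily counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
true_rate = 120                              # "population" mean arrivals per day
observed = rng.poisson(true_rate, size=30)   # 30 observed days (the sample)

estimated_rate = observed.mean()             # point estimate of the population mean
# Probability of receiving more than 140 samples on a given day, under the model.
p_overload = 1 - stats.poisson.cdf(140, estimated_rate)

print(f"Estimated daily rate: {estimated_rate:.1f}")
print(f"P(more than 140 samples in a day): {p_overload:.3f}")
```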
Similarly, HajShirmohammadi and Wedley (2004, p. 22) adopted a case study approach in which different cases were examined. With the help of different applications, summary reports could be obtained, used, and later stored for future needs.
Application of Findings to Change Project
As has been discussed in this paper, the amount of laboratory data keeps increasing. To improve efficiency and to effectively manage the data, it is absolutely necessary to ensure that disparate applications are brought together.
Clearly, the need for laboratory technologists to access laboratory information in a centralized manner cannot be overemphasized (Price et al. 2004, p. 56). Providing the information in a centralized way will, among other things, reduce turnaround time, increase processing speeds, and guarantee the security and uniformity of available information.
The security aspect will be handled by implementing a client/server architecture. This architecture enables information to be located in a central location from where it can be accessed by all users with the correct privileges (Lee 2009, p. 21). Through a centralized system, an administrator can ensure that only authorized personnel can access information from the system; these advantages cannot be realized when disparate applications are used. A skilled systems administrator will also ensure that intruders are kept at bay and that no outsider is able to access internally stored information. Although implementing such a system may be expensive and its maintenance demanding, the advantages are obvious and far outweigh the drawbacks (Harbers & Kahl 2012, p. 14).
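A minimal sketch of such server-side privilege checking is given below; the role names and permissions are illustrative only.

```python
# Minimal sketch of a server-side permission check in a centralised
# client/server setup; the roles and rules are illustrative only.
ROLE_PERMISSIONS = {
    "technologist": {"read_results", "enter_results"},
    "manager": {"read_results", "enter_results", "approve_results", "view_audit"},
    "clinician": {"read_results"},
}


def is_allowed(role: str, action: str) -> bool:
    """Return True only if the user's role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())


for role, action in [("clinician", "enter_results"), ("manager", "view_audit")]:
    print(role, action, "->", "allowed" if is_allowed(role, action) else "denied")
```

Keeping this check on the server means every client, regardless of location, is subject to the same access policy.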
The findings of this study further indicate that centralization of laboratory information is very critical and should be embraced by all laboratory technologists. However, not all laboratory technologists are well versed in the use of information technology tools (Lee 2009, p. 24). There is, therefore, a need to organize training for those who need it in order to ensure that everyone is comfortable working with the technology. Considering the numerous benefits it brings, the use of laboratory information management systems should by all means be encouraged.
Conclusion
The research focused on an in-depth examination of laboratory information management systems. Information accessed through the web and other written texts was used to generate the discussions in this paper. A major objective of the research was to substantiate the claim that the use of centralized laboratory information systems carries numerous benefits for laboratory technologists.
Though the structure of maintenance facilities has major effects on the management of laboratory information and on equipment effectiveness, the decision to centralize different functions of laboratory operations is certainly critical and one that must not be taken lightly. This paper makes it clear that centralization of laboratory information is no longer optional. Considering the pace at which technology is advancing, laboratory technologists have no choice but to work towards fitting into the system lest they be left behind.
Although the process of centralization is both demanding and expensive, no industry can escape it if it is to realize better results, and the same applies to the operation of laboratories. However, it is important to ensure that issues of security are critically examined. Security is an especially important consideration since laboratory information is in most cases confidential and must be handled with care. Legally, any disclosure of private information is wrong and must be avoided at all costs.
However, given that a centralized system is normally implemented in a secure client/server environment, there is little to worry about as far as security is concerned. By following proper security implementation policies, an effective systems administrator will be able to secure the information stored within the laboratory information management system. With careful analysis, a reliable model for centralizing the laboratory information management system can be developed and used for implementation.
Reference List
Bentley, D 1999, “Analysis of a Laboratory Information Management System (LIMS)”, MSIS 488. Web.
Bliesner, DM 2006, Establishing A CGMP Laboratory Audit System: A Practical Guide, John Wiley & Sons, Hoboken, NJ.
Cleverley, WO 1989, Handbook of Health Care Accounting and Finance (2 Volume Set), Jones & Bartlett Learning, Sudbury, MA.
Conn, PM 2007, Sourcebook of Models for Biomedical Research, Springer, Totowa, New Jersey.
Cowan, DF 2005, Informatics for the Clinical Laboratory: A Practical Guide, Springer, New York, NY.
Day, INM 2002, Molecular Genetic Epidemiology: A Laboratory Perspective, Springer, New York.
Dennis, G Jr, Sherman, BT, Hosack, DA, Yang, J, Gao, W, Lane, HC, & Lempicki, RA 2003, “DAVID: Database for Annotation, Visualization, and Integrated Discovery”, Genome Biology, vol. 4, no. 5, p. 3.
Fieschi, M 2004, MedInfo 2004, San Francisco, USA, IOS Press, Fairfax, VA.
Goldschmidt, HMJ, Cox, MJT, & Grouls, RJE 1998, Reference Information Model for Clinical Laboratories: Rila as Laboratory Management Toolbox, IOS Press, Amsterdam, Netherlands.
HajShirmohammadi, A, & Wedley, WC 2004, “Maintenance management – an AHP application for centralization/decentralization”, Journal of Quality in Maintenance Engineering, vol. 10, no. 1, pp. 16-25.
Harbers, M, & Kahl, G 2012, Tag-based Next Generation Sequencing, John Wiley & Sons, Hoboken, NJ.
Kukoyi, BO 2008, Ethics and Moral Reasoning among Medical Laboratory Professionals, Universal-Publishers, Boca Raton, Florida.
Lee, M 2009, Basic Skills in Interpreting Laboratory Data, ASHP, New York, NY.
Mozayani, A 2011, The Forensic Laboratory Handbook Procedures and Practice, Springer, New York, NY.
Paszko, C, & Turner, E 2001, Laboratory Information Management Systems, Marcel Dekker, New York, NY.
Price, CP, John, A, & Hicks, JM 2004, Point-of-Care Testing, 2nd Edition, Amer. Assoc. for Clinical Chemistry, Boston, MA.
Roberts, AR, & Yeager, KR 2004, Evidence-Based Practice Manual: Research and Outcome Measures in Health and Human Services, Oxford University Press, New York, NY.
Strom, BL 2006, Pharmacoepidemiology, John Wiley & Sons, Hoboken, New Jersey.
Tharayil, SM 2007, Laboratory Information Management System for Microarray Facility, ProQuest, Ann Arbor, MI.
Van Bemmel, JH 1984, “The structure of medical informatics”, Med Inform, vol. 9, pp. 175-180.
Vonderschmitt, DJ 1991, Laboratory Organization. Automation, Walter de Gruyter, Zurich, Switzerland.
Waegemann, CP 1996, Toward an Electronic Patient Record ’96: International Symposium on the Creation of Electronic Health Records and Global Conference on Patient Cards, Medical Records Institute, Sudbury, MA.
Zhu, J 2005, Automating Laboratory Operations by Integrating Laboratory Information Management Systems (LIMS) with Analytical Instruments and Scientific Data Management System (SDMS), Indiana University, Indiana. Web.