The British Library: Big Data Management

The British Library uses big data to keep track of user searches. In addition, through its partnership with IBM, the British Library can provide users with access to websites that no longer exist but that may still be of interest to searchers. Thus, the organization uses big data to manage resources and meet searchers' needs.

State and federal law enforcement agencies use big data analytics to identify crime patterns, enabling them to prevent recurrences. Some relationships between crimes may be obscure to human analysts, but maintaining and analyzing big data takes all factors into account and yields more extensive information. In addition, big data allows agencies to collect information about criminals based on their Internet activity, which helps identify criminal activity faster.

The New York City Police Department (NYPD), with the help of IBM, collected data on over 120 million criminal complaints, 31 million national crime records, and 33 billion public records (Big data, big rewards, 2014, p. 261). This gives the police department access to information that improves its performance, such as photos of suspects and crime scenes.

The world's largest wind energy company, Vestas, uses big data to select the best locations for wind turbines. Previously, this process took up to three weeks, given the amount of information that needed to be analyzed. The use of IBM systems made it possible to expand the company's wind library and explore location and weather conditions using 178 parameters in 15 minutes (Big data, big rewards, 2014).

Hertz uses big data analytics to analyze customer behavior and respond quickly to feedback or mood swings. Centralized collection and analysis of data from all the company's stores help the firm understand customers' needs better and adapt to them promptly. Consequently, maintaining and analyzing big data has helped raise productivity and the level of customer satisfaction with the services provided.

Reference

Big data, big rewards. (2014). In K. C. Laudon & J. P. Laudon (Eds.), Management information systems: Managing the digital firm (pp. 261-262). Pearson.

Qualitative Data Organization and Management

Qualitative research is a valuable asset for scholarly investigation and for translating new ideas to the academic and professional community. Unlike quantitative approaches, however, qualitative data cannot be interpreted and organized solely with statistical calculation tools. Essentially, qualitative research encompasses such tools as semi-structured in-depth interviews, focus group discussions, and grounded theory research (Wolff, Mahoney, Lohiniva, & Corkum, 2018). As a result, in order to draw tangible conclusions, it is of paramount importance to process the information presented by various respondents, who, in their turn, belong to diverse socio-ethnic backgrounds.

Thus, as far as data organization is concerned, the researcher should address the creation of a filing system. When collecting a considerable amount of data in a limited time, some researchers tend to neglect proper file naming, causing difficulties in finding relevant information and even losing necessary data in the long term (Suvivuo, 2021). To eliminate such a risk, it is advisable to create a template for document naming that includes the topic, category, and respondent's identification, and to organize the files immediately after access.
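As an illustration, the short Python sketch below builds file names from the topic, category, and respondent identifier; the underscore-separated scheme and field order are assumptions chosen for the example, not a prescribed standard.

```python
from datetime import date

def build_filename(topic: str, category: str, respondent_id: str, ext: str = "docx") -> str:
    """Compose a consistent file name from topic, category, and respondent ID.

    The scheme (date_topic_category_respondent) is a hypothetical convention;
    any fixed, documented order would serve the same organizing purpose.
    """
    def safe(s: str) -> str:
        return s.strip().lower().replace(" ", "-")
    return f"{date.today():%Y%m%d}_{safe(topic)}_{safe(category)}_{safe(respondent_id)}.{ext}"

# File the transcript of an interview immediately after access.
print(build_filename("nurse burnout", "interview transcript", "R017"))
# e.g. 20240101_nurse-burnout_interview-transcript_r017.docx
```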

Another relevant strategy in terms of data management concerns data volume and the inability to analyze it without third-party assistance. Emerging trends in data analysis currently include crowdsourcing and labeling the data through classifiers (Suvivuo, 2021). Crowdsourcing stands for attracting people to contribute to the research by analyzing a certain amount of data against specific characteristics outlined in the instructions. Labeling, in its turn, relies on machine tools programmed to categorize data according to indicated keywords. Hence, it may be concluded that today's demand for qualitative research should catalyze more relevant proposals for systematic data organization, as manual labor alone is not sufficient to conduct large-scale research.
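A minimal sketch of keyword-based labeling is given below; the categories and keyword lists are invented for illustration, and a production classifier would typically be trained on coded examples rather than hand-written rules.

```python
# Hypothetical mapping from labels to keywords; real projects would derive
# these categories from the coding scheme of the qualitative study.
KEYWORD_LABELS = {
    "workload": ["overtime", "shift", "staffing"],
    "communication": ["handover", "briefing", "feedback"],
}

def label_response(text: str) -> list:
    """Return every label whose keywords appear in a respondent's answer."""
    lowered = text.lower()
    hits = [label for label, words in KEYWORD_LABELS.items()
            if any(word in lowered for word in words)]
    return hits or ["unlabelled"]

print(label_response("The night shift handover felt rushed because of staffing gaps."))
# -> ['workload', 'communication']
```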

References

Suvivuo, S. (2021). Qualitative big data's challenges and solutions: An organizing review. In Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 980-988).

Wolff, B., Mahoney, F., Lohiniva, A. L., & Corkum, M. (2018).

DNP Project Development: Data Management Plan

Introduction

In this DNP project, several questionnaires are used as the main evaluation tools. A self-administered questionnaire covers four items for analysis (gender, age, nursing experience, and duties) measured on nominal scales, which label variables without any quantitative value (numerical significance). The other two questionnaires, the ENSS and SF-12, gather ordinal data. The main characteristic of the ordinal data from these evaluation tools is the presence of an order among the values (not measurable differences between them). These scales are applied to measure non-numeric concepts, including stress and satisfaction.

Despite the intention to plan each step and predict the type of data, extraneous variables cannot be ignored in any DNP or research project. These unwanted factors can obscure the relation between dependent and independent variables and complicate measuring the results. To control such variables, the plan is to minimize differences between participants (nurses of the same age group who work at the same facility), to invite the same educator (so as not to provoke varying reactions among participants), and to create a similar environment for all practices. Finally, statistical control (via the ANOVA statistical tool) helps to reduce the impact of extraneous variables.
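As a rough illustration of such statistical control, the sketch below runs a one-way ANOVA with SciPy; the three groups of stress scores and the grouping factor (work shift) are invented for the example and are not actual project data.

```python
from scipy import stats

# Hypothetical ENSS stress scores grouped by an extraneous factor
# (e.g., three different shifts at the same facility).
day_shift = [68, 72, 70, 75, 71]
night_shift = [74, 78, 73, 77, 76]
rotating_shift = [70, 73, 69, 74, 72]

f_stat, p_value = stats.f_oneway(day_shift, night_shift, rotating_shift)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value would suggest the extraneous factor is not
# driving systematic differences between the groups.
```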

The analysis of quantitative data obtained from the self-administered questionnaire is based on descriptive statistics. With the help of this questionnaire, the researcher confirms the appropriateness of the participants for the project (i.e., that they meet the inclusion criteria). The results of the SF-12 questionnaire will be analyzed by means of a linear regression analysis. This type of analysis allows the inclusion of several parameters (gender, position, or age) that can predict better outcomes (Pandis, 2016). The same test is taken before and after the intervention within one group of nurses. Another group of nurses does not participate in (and knows nothing about) the program but has to work under similar pandemic conditions. As in many studies on stress levels among medical employees, this DNP data analysis will be organized in Microsoft Excel and SPSS, where the participants' answers will be divided into categories defined after descriptive statistical analysis (Adzakpah et al., 2016). Linear regression makes it possible to identify the relationship between one predictor (a mindfulness meditation program) and one expected outcome (stress reduction and well-being among nurses).
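A hedged sketch of this analysis is shown below using Python's statsmodels library in place of Excel or SPSS; the variable names and the small dataset are assumptions made purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical post-intervention SF-12 scores with three candidate predictors
# (age, years of nursing experience, pre-intervention ENSS stress score).
age = np.array([28, 35, 41, 52, 30, 46, 38, 44])
experience = np.array([4, 10, 15, 25, 6, 20, 12, 18])
pre_stress = np.array([70, 65, 72, 80, 68, 75, 71, 78])
sf12_post = np.array([48, 52, 47, 40, 50, 44, 46, 41])

# Descriptive statistics for the report, then an ordinary least squares fit.
print("Mean post-intervention SF-12:", sf12_post.mean())
X = sm.add_constant(np.column_stack([age, experience, pre_stress]))
model = sm.OLS(sf12_post, X).fit()
print(model.params)   # intercept and one coefficient per predictor
```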

Project Management Plan and Gantt Chart

Ten weeks are chosen as the total implementation timeframe, including research, communication with hospital workers, education of nurses, implementation of the intervention, analysis of the results, and editing of the gathered material. Below is a Gantt chart identifying the period for each task of the DNP project. This chart contains an overview of the evaluation process with its formative and summative components. Formative evaluation focuses on the issues that foster the development and improvement of the activity (the participants and the program), and summative evaluation includes the analysis of the outcomes (the program and the participants' conditions).

Project Plan and Schedule (Gantt chart)

The first week will be devoted to gathering and evaluating already-known facts about mindfulness meditation among nurses and the peculiarities of the pandemic situation. Then, one week is allotted to choose a hospital, find participants, and contact the hospital employees to discuss the details of the study. The third week will be used for educational purposes, when nurses approved for the study cooperate with an educator and learn the basics of meditation to reduce stress. The next four weeks are for intervention implementation, and the last three weeks include data collection, analysis, and editing.

Proposed Budget

Attention should be paid to the financial aspects of the intervention, given the required resources (both human and material) and the expected outcomes and contributions. As shown in Table 1, the expenses, which include direct and indirect costs to hire a professional meditation coach and a statistician, may be covered by revenues (billing, grants, and institutional support). In addition, reducing stress levels among nurses should cut spending on the medications nurses take to support their well-being. As soon as positive outcomes are proven, the government or hospital administrators may take an interest in the program. They could invest in health care and support nurses in their duty of caring for high-mortality patients during a pandemic.

Table 1: Budget

Expenses
  Direct: Salary and benefits $1,200; Supplies $50; Services $100; Statistician $50
  Indirect: Overhead $100
  Total Expenses: $1,500

Revenue
  Billing $2,000; Grants $500; Institutional budget support $300
  Total Revenue: $2,800

Ethical Issues and Considerations

One of the critical elements of this DNP project is the promotion of voluntary participation by nurses. The results of this intervention depend on the nurses and their willingness to learn something new and meditate. However, it is necessary to guarantee the participants that they may withdraw from the study at any time without penalties or additional explanations. At Chamberlain University, the Institutional Review Board (IRB) is the committee that reviews and approves research involving human subjects. Its purpose is to ensure that all federal, institutional, and ethical regulations are followed. Before the intervention, the researcher obtains approval from the board and introduces informed consent to be signed by every nurse individually. In this agreement, human rights and privacy issues are discussed. All personal information remains confidential, and the answers to the questionnaires are anonymous. The data will be kept digitally for seven years, and no one except the researcher will have access to it.

References

Adzakpah, G., Laar, A. S., & Fiadjoe, H. S. (2016). Occupational stress among nurses in a hospital setting in Ghana. Clinical Case Reports and Reviews, 2(2), 333-338. Web.

Pandis, N. (2016). . American Journal of Orthodontics and Dentofacial Orthopedics, 149(4), 581.

Risk and Internal Data Management

Incident Management

Having a well-developed incident response capacity is very important for any entity because it enables the entity to detect a risk at its earliest stages and manage it before it becomes a disaster. According to Virshup, Oppenberg and Coleman (1999), incident response capacity ensures that an organization can respond to emergencies quickly and efficiently enough to avert possible negative outcomes. This means that an entity will always be ready to respond to incidents and accidents in a way that minimizes the adverse effects as much as possible.

The case study about TSF presents one incident that the top management ignored. The firm has been using TSF-ONE for internal data management. It is reported that, due to emerging new trends and the amount of data the firm has to deal with on a regular basis, TSF-ONE has become obsolete. This has slowed data management capacity, sometimes threatening to erode the organization's critical data. The disappearance of TSF's back-up data due to the insolvency of the service provider is a disaster for the firm and will cripple its operations unless urgent measures are taken.

Based on the incident and disaster mentioned above, TSF needs to enact a disaster response and communication system that supports rapid communication of events such as those described. This system will ensure that responding to the disaster or incident and communicating the information to the relevant stakeholders happen simultaneously. The stakeholders will be informed that the incident or accident occurred and that the relevant agencies are making efforts to respond to the issue. The stakeholders in this case are the donors, the employees, and the groups benefiting from the services of this firm.

In incident management, incident triage plays a very critical role. This assessment tool helps determine whether there is an actual security incident, eliminating cases where an organization responds to mere threats rather than real security incidents. It then prioritizes incidents to enable the organization to determine the best way to respond when it faces multiple threats at once. Finally, the tool helps the response team decide whether escalation is needed. This way, the response will be well-calculated, accurate, and fast enough to address the issue at hand.

The excerpt brings out very important aspects of building contingency plans and capacity. According to Jordan and Silcock (2005), it is almost impossible to eliminate risks; the best an organization can do is to have measures in place that help manage risks when they occur. This is the message brought out in the excerpt. When an organization is hit by a disruptive event, the disruption may in some cases exceed its capacity to work under its normal routine. The ability of such an organization to overcome the disruption depends wholly on the contingency measures it has put in place to deal with the problem. The excerpt clearly describes how such contingency plans work: when the disruption occurs, the organization is able to shift from its normal operational systems to a contingency system while the risk management team works to restore the affected system. This means that the contingency plan offers an organization the capacity to continue its operations, on a contingency platform, even after its system has been hit by a disruptive event. The excerpt also insists on taking advantage of the opportunities presented by such occurrences so as to be in a better position to manage risks in the future.

Argumentative Essay

Having gone through the lecture notes, it is now clear to me that risk cannot be eliminated in an organizational setting. Firms face different forms of risk almost daily. Some risks target the normal operational systems of an organization; others may affect its finances or even the strategic objectives set by the top management. According to Jordan and Silcock (2005), it is not easy to determine which section or system within an organization a risk factor will hit next. However, it is possible and very important to plan for risks before they occur. Broadly, Das and Teng (1999) note that risks can be categorized as incidents or disasters. Incidents are disruptive events that cause minor impacts on the normal running of an organization. They can be rectified easily, and in many cases the stakeholders involved in the daily operations of the organization may not even realize they occurred. Such events have minor or sometimes no impact at all on an organization's finances. Disasters, on the other hand, are events that affect major operations of an organization. Events of this kind may paralyze an organization's operations (Standards Australia 2010) and have the capacity to force an organization to cease operating.

Risk management is something that organizations can no longer ignore. According to Alberts and Dorofee (2004), some firms use insurance as a strategy for managing risks (Virshup, Oppenberg & Coleman 1999). This is one of the oldest strategies that firms have used to protect themselves from disruptive events. However, it is not possible to insure against all the risks that an organization may face. For instance, a bank may face a system breakdown that forces it to stop offering services to its clients. The bank can insure against the loss that may arise from such incidents, but it cannot insure against the customer dissatisfaction caused by such unfortunate occurrences. In fact, Das and Teng (1999) say that excessive insurance reduces the profitability of an organization. This makes it necessary to come up with internal measures that help manage risks as they occur in a way that ensures continuity of operations. Developing an enterprise-wide management framework is very critical when creating a risk management plan, because it enables risk management teams to take a holistic approach that covers all the systems and structures within the organization.

It is important to note that I was absent from the first lectures. This means that I have a lot to catch up on. However, I have learnt much about risk and risk management from the lecture notes. My conceptualisation has changed because I have taken the initiative to read the relevant articles and books that address this topic. I am interested in finding out how to develop risk management plans that can help organizations respond to various forms of risk.

It is clear that developing a contingency plan is very important when it comes to risk management. I have noted that the use of information technology helps in risk identification, especially in detecting vulnerabilities and threats. This makes it easy to develop risk mitigation plans that can respond to threats as soon as they occur. From my personal readings, I have noticed that different people have different perspectives on how technology should be used in developing contingency plans and risk response systems. I would like to know how an organization can use emerging technologies to develop contingency plans, taking into account the fact that these technologies may themselves be disruptive in nature. A major question I want answered is how an organization can use an approach that can at times be disruptive to respond to a disruptive situation. If the emerging technology becomes disruptive while addressing a disruption within the system, how should the organization react, especially when the affected process or system is of critical importance? In such cases, an organization cannot afford to take chances when the stakes are so high, because further mistakes made in addressing the current problems may cripple it. These are fundamental questions that I was not able to ask due to my absence from the lectures.

Legal and ethical concerns are also very important when it comes to the management of information security. I now know that when developing a contingency plan, this is an issue that cannot be ignored. However, I need further knowledge on how this can be incorporated when developing a contingency plan. As Alberts and Dorofee (2004) say, there are instances where the law is silent on issues relating to security management. It is important that I understand how one is supposed to act in situations where the existing laws are either contradictory or silent on some issues.

List of References

Alberts, C & Dorofee, A 2004, Managing information security risks: The Octave approach, Addison-Wesley, Boston.

Das, T & Teng, B 1999, Managing Risks in Strategic Alliances, Academy of Management Executives, vol. 13, no. 4, pp. 50-61.

Jordan, E & Silcock, L 2005, Beating IT Risks, John Wiley & Sons, Chichester.

Virshup, B, Oppenberg, A & Coleman, M 1999, Strategic Risk Management: Reducing Malpractice Claims through More Effective Patient-Doctor Communication, American Journal of Medical Quality, vol. 14, no. 4, pp. 153-159.

Standards Australia 2010, Business continuity, Managing disruption-related risk: AS/NZS 5050: 2010, Standards Australia, Sydney.

Information Technology-Based Data Management in Retail

Introduction

Data management has become an important part of organisational management. When appropriately integrated into the company's business environment, IT-based data management offers a wide range of advantages in operations, marketing, HR, and finance. At the same time, irresponsible handling of data creates a number of major ethical considerations. The following paper discusses the specifics of data management and identifies the most apparent ethical considerations, using retail as an example.

Data Management

Operations

The operations aspect of retail involves a considerable number of individuals. The most active participants are employees, management, and suppliers. The suppliers typically submit data by inbound shipment of goods. The information may include the details of delivery, quantity, and cost of goods received at the warehouse. After this, the responsibility for data management is passed to the employees, who, depending on the type of technology used in the company, dispatch the necessary amounts of goods to respective departments or report to the management on the delivery. The data on goods relocated to the sales department is collected automatically. At the same time, data on employee performance is handled by the line manager. Finally, the management submits the data to the finance and marketing departments.

The main process associated with data management in retail is the management of goods. Upon arrival at the warehouse, goods are marked in the system as available. After this, the sales department is notified of the change in available stock and can request the necessary items. The sales data is gathered automatically during the process and compiled into a set that can be retrieved by the marketing department for analytical purposes (Fernie & Sparks 2014). The system also tracks important variables such as the expiration dates of goods in order to allow for more efficient management of resources. Inconsistencies in supply (e.g., an unforeseen shortage of items, determined individually for each department) are tracked and sent to the managers responsible for interactions with suppliers.

The majority of the described processes are performed using enterprise software solutions. The solutions in question are purchased as ready-made options or configured in accordance with specificities of retail operations. The platform can be run internally or hosted on the cloud. Some of the data (e.g., inbound shipments from suppliers with no compatible equipment) is submitted to the system manually, whereas the bulk is recorded automatically with the help of cross-compatible formats (Fernie & Sparks 2014). In addition, the technology in question is capable of disaggregating the data on consumer behaviour and employee performance to adjust the existing strategies and techniques.

Finance

The financial department involves two main groups of stakeholders. The first group includes employees that submit data to the financial department. Importantly, while the bulk of data is generated by their actions, only a fraction is collected and entered manually. The second group includes the members of the accounting department who gather, analyse and interpret data.

The processes relevant to financial data management include all actions that generate expenditures or profits, such as sales data, inbound shipments, marketing expenses, and operating expenses, among others (Einav & Levin 2014). The data from different sources is arranged to allow for its seamless processing. Once all necessary data for a given period is obtained, it is compiled into balance sheets and verified for accuracy and integrity.

At this point, the need may arise to identify new relevant directions for analysis or locate and eliminate redundant ones in order to optimise performance. The data is then processed using the tools available from the enterprise solution and reviewed to identify the most important trends, challenges, and advantages. The results are compiled in a meaningful format with the help of visual aids and presented to the management. Finally, the data that requires disclosure is presented in the form of a publicly available report or submitted to auditors. The majority of the described tasks are done with the help of statistical tools integrated into the enterprise solution. In certain cases, additional instruments can be used that support the format used in the organisation.

Marketing

The first main stakeholders involved in the marketing data management process are the customers. On the one hand, they serve as a primary source of data on consumer behaviour patterns, which can be used to develop or adjust marketing strategies and tactics. On the other hand, they comprise the main target of these strategies. The second group of people relevant to the process consists of the employees of the marketing department, who select tools to gather data, oversee the collection procedure, interpret the results, and issue recommendations on necessary changes.

The data collection process is facilitated through two main channels. First, sales data available through an enterprise management system is submitted to the marketing department. This data is disaggregated, which allows identifying various segments within the target audience and thus achieving necessary diversification of solutions. The second source of data includes dedicated tools intended for a direct inquiry. These tools include surveys and questionnaires. These tools either generate data in the digital format or require a conversion of the dataset after the completion of the research (Gandomi & Haider 2015). Evidently, the second method allows for a broader range of results to be obtained. In addition, the specificity of surveys ensures the relevance of the collected information. On the other hand, sales data is a more cost-effective method.

The tools for data collection include online survey services, statistical tools for qualitative and quantitative analysis, and enterprise management systems capable of collecting and submitting sales data. The bulk of data handling is automated, with minor exceptions such as the manual input of qualitative data into the respective analytical software. Once the necessary data is gathered, it is processed by statistical tools in order to identify behaviour patterns responsible for the customer satisfaction rate. This information is submitted to the organisation's managerial department and integrated into corporate decisions.

Human Resource Management

The main stakeholders in the human resource management process are the organisation's employees and the HRM department. The latter group is responsible for the appropriate application of talent within the organisation and the maximisation of employee potential. In order to achieve these goals, it is necessary to collect relevant data, identify the necessary variables, and determine the preferred course of action based on the results.

The HR-related data is collected via a wide array of tools. The most readily recognised ones are KPIs: metrics determined to be relevant to the success of employee performance. Depending on the type of company and the specifics of the business environment, different combinations of KPIs can be identified, including average customer spend, sales per square foot, and gross margin, among others (Stone et al., 2015). The obtained KPIs are logged and tracked using a dedicated solution or the functions of the enterprise management system. The latter allows for seamless, automated handling of data obtained from employee activities. In addition, a KPI dashboard can be utilised, which organises the most relevant KPIs and findings and indicates the emergence of issues in a timely manner.
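To illustrate how such KPIs could be derived from transaction data, the sketch below computes average customer spend, sales per square foot, and gross margin; the figures and field layout are hypothetical.

```python
# Hypothetical daily figures for a single store; in practice these values
# would be pulled automatically from the enterprise management system.
transactions = [  # (revenue, cost_of_goods_sold)
    (120.0, 70.0), (45.5, 28.0), (230.0, 150.0), (78.0, 41.0),
]
sales_floor_sqft = 2500

revenue = sum(r for r, _ in transactions)
cogs = sum(c for _, c in transactions)

kpis = {
    "average_customer_spend": revenue / len(transactions),
    "sales_per_square_foot": revenue / sales_floor_sqft,
    "gross_margin": (revenue - cogs) / revenue,  # fraction of revenue
}

for name, value in kpis.items():
    print(f"{name}: {value:.2f}")
```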

Data Integrity

The data collected for business purposes needs to comply with the criteria of timeliness, completeness, and accuracy. Compromising the data in any of these areas may lead to a number of adverse issues. The most apparent undesirable effects are related to a decrease in performance and, by extension, in the profitability of a business. A good example would be data on the employee satisfaction rate during the introduction of a new strategic approach. In this case, data completeness depends on the relevance of the metrics used to determine the response of the employees to the changes. This parameter can be ensured by identifying correlations in the available data and including the most relevant ones in the analysis.

Next, the accuracy of data depends on the appropriateness of the data collection tools and the quality of the performed analysis. The former can be ensured by developing a tool that is suitable for the process or by using one of the ready-made solutions known to be compatible with the goals of the evaluation. The latter is achieved by eliminating the possibility of human error during data input, analysis, and interpretation. The bulk of issues can be eliminated by automating data collection and analysis via enterprise solutions (Martin, Borah & Palmatier 2017). Finally, the timeliness of the data is ensured by the computational capacity of IT-based solutions. With the exception of big data management, analytical software is capable of generating results on the fly, increasing responsiveness to different factors.

It is also important to recognise the legal implications of data integrity. The accuracy of financial data submitted for audit by independent organisations depends in part on the absence of honest errors. In this case, appropriately configured accounting software can ensure the consistency of results and minimise the risk of reporting flawed data, thus maintaining the value of the company's shares on the stock market.

Ethical Perspective

Finally, it is necessary to recognise the ethical aspect of data management. In most cases, the data necessary for marketing analysis contains sensitive information, which raises several major concerns. First, it is possible that the data in question is used for purposes that harm the party that submitted it. For instance, it would be trivial for a vendor to use the available contact information to deliver targeted advertisements without obtaining the consent of the owner, which is an apparent violation of personal privacy. It is also possible to imagine a scenario where a dataset collected for statistical research becomes available to a third party. This may occur as a result of an unintentional flaw in the system's security or as a result of a deliberate attack. For instance, criminals may gain unlawful access to a vendor's database of its customers' financial credentials, putting all of their funds at risk. Alternatively, the data can leak to an external party as a result of insider activities (Hemphill & Longstreet 2016). Finally, an organisation may be tempted to sell the data to someone despite the absence of permission for such actions.

Several security measures can be identified that allow these risks to be minimised. First, the data in question needs to be encrypted using algorithms and tools compliant with industry safety standards. Second, the handling of data should require confirmation from at least one trusted party in order to ensure the integrity of the actions. Third, the data should be depersonalised by removing sensitive demographic information that would pose a risk of disclosure. Admittedly, this approach is applicable only to cases where the removed information is irrelevant to the results of the analysis. Finally, and perhaps most importantly, the information needs to be disposed of appropriately after the desired results are obtained, which would prevent subsequent leaks.
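As one possible illustration of the depersonalisation step, the sketch below strips direct identifiers and replaces the customer ID with a salted hash so that records can still be linked for analysis; the field names and the salt handling are assumptions made for the example.

```python
import hashlib

SALT = "replace-with-a-secret-value"   # assumption: kept outside the dataset
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def depersonalise(record: dict) -> dict:
    """Drop direct identifiers and pseudonymise the customer ID."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["customer_id"] = hashlib.sha256(
        (SALT + str(record["customer_id"])).encode()
    ).hexdigest()[:16]
    return cleaned

raw = {"customer_id": 1042, "name": "A. Customer", "email": "a@example.com",
       "basket_value": 57.30, "segment": "loyalty"}
print(depersonalise(raw))
# {'customer_id': '<hash>', 'basket_value': 57.3, 'segment': 'loyalty'}
```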

As can be seen, all of these approaches require the allocation of time and resources. For instance, the processing and storage of encrypted data require additional expenses for familiarising employees with the technology. Appropriate storage of data also necessitates dedicated software and hardware. At the same time, the conflict of interests introduces certain regulatory requirements into the process of data handling. Understandably, an organisation may be tempted to compromise its ethics and thus increase the profitability of the business. Nevertheless, in the long run, the existence of a transparent and robust data management system will improve the accuracy, timeliness, and completeness of the information while minimising the opportunity for a breach.

Conclusion

As can be seen, the applications of information technology to data management in retail are numerous. The current capabilities of enterprise-scale solutions offer a number of improvements in the sales process, facilitate seamless data collection and submission, and help locate and address barriers and shortcomings in a timely manner. In addition, they contribute to the accuracy of financial information. Next, these solutions can be used to identify emergent customer behaviour patterns, integrate the findings into the company's strategy, and monitor the feasibility of changes. Finally, the integrity and safety of the data in question can be secured using IT-based tools and approaches.

Reference List

Einav, L & Levin, J 2014, The data revolution and economic analysis, Innovation Policy and the Economy, vol. 14, no. 1, pp. 1-24.

Fernie, J & Sparks, L 2014, Logistics and retail management: emerging issues and new challenges in the retail supply chain, Kogan Page Publishers, Philadelphia, PA.

Gandomi, A & Haider, M 2015, Beyond the hype: big data concepts, methods, and analytics, International Journal of Information Management, vol. 35, no. 2, pp. 137-144.

Hemphill, TA & Longstreet, P 2016, Financial data breaches in the US retail economy: restoring confidence in information technology security standards, Technology in Society, vol. 44, pp. 30-38.

Martin, KD, Borah, A & Palmatier, RW 2017, Data privacy: effects on customer and firm performance, Journal of Marketing, vol. 81, no. 1, pp. 36-58.

Stone, DL, Deadrick, DL, Lukaszewski, KM & Johnson, R 2015, The influence of technology on the future of human resource management, Human Resource Management Review, vol. 25, no. 2, pp. 216-231.

Data Storage Management Solutions: Losses of Personal Data

Introduction

The term data refers to a collection of facts about anything. As is often said, processed data results in information, and he who has information has power. In the modern world, companies are in dire need of faster data processing in order to meet the challenges brought about by stiff competition. Managers need to obtain information quickly in order to enhance the critical decision-making whose outcomes largely determine a firm's operations. Most of us undertake our daily work without giving a thought to what will happen if we lose our data; we only wish that something could be done after our data has already been compromised. Bearing in mind how important and critical data is to an organization, there is a need to do whatever it takes to guard and protect it from damage and loss. In the present information age, data is stored electronically, usually on flash disks, hard drives, discs, tapes, and so on. In this paper, we analyze the problem of data loss in companies and recommend ways in which it can be reduced.

The criticality of lost data depends on its application within an organization. Studies show that when a company experiences a computer outage for more than 10 days, its chances of recovering financially are minimal, and 50 percent of firms suffering such a problem will experience losses for a duration of five years. For example, the loss of computer code is a very significant loss because it takes a lot of time and highly skilled personnel to produce the code. In contrast, the loss of client history from a database is less significant, assuming that copies of the information exist; in such a case, less skilled and moderately paid personnel should be able to key the data back in.

Causes of Data Loss

Hard drive failure is a common phenomenon in the current business setting. It can result from software corruption, human error, or other causes, leading to incidents of data loss that in turn have serious consequences for a business entity. For example, if a sales and marketing company experiences data loss and computer downtime, its sales margins will be cut and customer service reduced while the data is being restored or rebuilt. According to a survey carried out by Verio on data loss in small businesses, seventy percent of such businesses are hit critically in financial terms (Verio, 2007). Even though hard drive manufacturers claim a failure rate of less than 1 percent, recent research by computer scientists at Carnegie Mellon University found that a 2-4 percent failure rate is frequent and that in some situations the failure rate may reach 13 percent (Schroeder & Gibson, 2007). Surveys from companies specializing in data recovery attribute 38 percent of data loss to hard drive failure and 30 percent to read/write instability caused by degradation of the media. Software corruption, whether from system software or another program (e.g., a virus attack), accounts for 13 percent, and 12 percent is attributed to human error such as incorrect partitioning of the hard drive or accidental deletion. The comparative magnitude of the types of data loss is shown in the figure below.

Figure 1: Data Loss and Causes (source: a survey of 50 data recovery firms across 14 countries, DeepSpar Data Recovery Systems)

Another cause of data loss is the theft of a laptop, desktop, or storage device. The most critical issue with this kind of loss is information leaking into the wrong hands. The information obtained can be used in credit card scams, jeopardizing customer loyalty. A permanent loss of data occurs when there is no backup for the stolen data.

Effects of Data Loss

Any kind of data loss has severe implications for a company, even though the impact varies with the criticality of the lost data. Take the example of a research and development company: if such a company suffers a hard disk crash, the presence of confidential data on its latest inventions means it will lose trust in its data recovery provider if the data cannot be recovered. This challenges the company to choose a data recovery provider with a long record of successful recoveries. Many R&D projects take a long time to complete, so losing project information implies a loss of the money and time spent by developers. Take another case of financial institutions running millions of transactions every day. Suppose someone who wants to invest $3,000 in the stock market is unable to do so because of data loss at the brokerage company. If the brokerage handles 200 transactions daily, the losses will total $6,000 per day until data recovery is completed successfully (Toiga, 2004).

As the examples above show, loss of data can affect an organization in several ways, depending on the work the company undertakes and the type of information in its possession. This makes the choice of data recovery software or a data recovery company very relevant.

In cases of stolen data, forensic experts need to be called in to prevent cybercriminals from causing too much damage to customers. Too many workstations holding such important information are stolen, either from business premises or in transit. Unfortunately, many companies will not be able to survive data loss because they lack a proper backup policy. If a computer storing such sensitive information is stolen and no backup exists, we consider that a permanent loss (Harris, 2007).

If an organization experiences a data breach, it should inform all its customers about the problem without hesitation in order to prevent a loss of loyalty on their part. Customers should be told about the breach personally, along with possible safeguards, via email or telephone.

Business managers need to invest in technologies that can reduce the likelihood of data loss, such as computer backup systems and antivirus software. Personal computers ought to be secured with passwords to decrease the chances of losing data to a burglar. Computer-tracking services, which act as a kind of LoJack for laptops, also help with burglary prevention. Another way of avoiding data loss is implementing a sound backup policy involving online backup, data encryption, and off-site data centers. Workstations also need to be password-protected to discourage theft. With financial records, companies should keep track of all transactions and, if possible, ask the bank storing the records to lock and encrypt the data appropriately.

However, even with reliable protection measures in place, episodes of data loss are inevitable. Well-laid plans to tackle such episodes will shorten recovery times. Although backup protocols are widespread for data located on servers, strategies to guard data in a distributed-systems environment are less common. Given the availability of recovery technologies, the cost of recovering data cannot be compared to the cost of its permanent loss (Patterson, 2004).

One way companies can reduce data loss is by educating employees about the importance of data to the organization and the effects of its loss; ways and procedures to prevent data loss should be made available to employees. Secondly, some companies, such as Symantec, suggest locking down mobile devices, computers, and other removable devices, either with physical locks or with software. The major problem here is preventing employees from taking a company's internal data outside. Thirdly, access controls should be implemented so that an individual has access only to the information he or she requires; this implies that network access controls should be rolled out in the most appropriate way possible. Finally, data should be monitored to prevent leakages (Smith 2003).

Conclusion

Safeguarding data is a constant effort for many companies, and this data is protected with the maximum effort available. However, despite these efforts, data loss may still occur, which is why companies dealing with data recovery are continually in demand. Failure of storage devices in every firm is inevitable; the question is how prepared we are to tackle such an eventuality. If the strategies discussed in this paper are employed, the impact of data loss will be reduced and companies will take full advantage of the decreasing cost of storage devices.

References

  1. Harris, R 2007, How data gets lost, Addison Wesley, New York.
  2. Patterson, J 2004, A simple way to estimate the cost of downtime, Yourdon Press, New York.
  3. Schroeder, B & Gibson, G 2007, Disk failures in the real world, Addison Wesley, New York.
  4. Smith, D 2003, Cost of lost data, Carswell, Toronto.
  5. Toiga, J 2004, Disaster recovery planning: Managing risk and catastrophe in information systems, Yourdon Press, New York.
  6. Verio, L 2007, Data storage management solutions, Macmillan, Toronto.

Database Management and Machine Learning

A database is a computerized system of information with the ability to search and process data (Vermaat et al. 556). Data is the collection of text, images, audio, video, and other items presented in the form of records in the database.

Data, along with a database management system and applications, is called a database system. Data is usually formatted in tables to ensure efficient processing and querying (Vermaat et al. 556). Therefore, it can be easily obtained, controlled, changed, monitored, and organized.
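A minimal example of this tabular organization is shown below using Python's built-in sqlite3 module; the table name and columns are invented purely to show how records can be obtained, changed, and queried.

```python
import sqlite3

# In-memory database purely for illustration; a real system would persist data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, kind TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO media (kind, title) VALUES (?, ?)",
    [("text", "Annual report"), ("image", "Store layout"), ("video", "Training clip")],
)

# Records organized in tables are easy to obtain, change, and monitor.
for row in conn.execute("SELECT id, title FROM media WHERE kind = ?", ("image",)):
    print(row)  # (2, 'Store layout')

conn.execute("UPDATE media SET title = ? WHERE id = ?", ("Store layout v2", 2))
conn.close()
```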

Self-managing databases are the latest and most revolutionary cloud-based databases; they implement machine learning to automate configuration, protection, backups, upgrades, and other common maintenance tasks. A data management system helps enterprises to leverage data from diverse sources by seamlessly integrating on-premises and cloud environments.

Machine learning is used in science, business, industry, healthcare, education, and other fields, and the possibilities of its technologies are constantly expanding (Alpaydin 4). In business, machine learning helps to improve the user experience, predict customer behavior, show relevant ads, customize personalized email campaigns, reduce the processing time for requests, and more.

Machine learning helps a business to analyze the behavioral factors of customers to maintain a competitive advantage over other businesses (Alpaydin 4).

With the full range of information about the target audience and existing customers, companies can improve the quality of communication.

In-depth analysis and trend detection help a business to stay ahead of the competition by being the first to offer the best solution to the customers.

Data analysis helps to make e-commerce more profitable since it allows businesses to make optimal use of the company's capital and funds, reduce risks, and increase market stability and efficiency.

Web analysts optimize marketing campaigns through robot assistants that recognize language and conduct a dialogue (Alpaydin 19). Personalized email campaigns and calls make potential clients more loyal.

Machine learning makes it possible to conduct a comprehensive analysis of information about potential suppliers and partners. This data can be analyzed carefully to build a rating of counterparties' reliability.

To begin with, the task itself may have no ethical intent. For example, if machine learning is used to teach an army of drones to kill people, the results may be unexpected.

Not all algorithms are ethical; many of them work for the good of their creators. For example, in medicine, machine learning can be implemented to offer targeted users more expensive treatment.

Machine learning algorithms manipulate Internet users in various ways. The system advises which movie or news item to watch, or which products to buy, based on a person's tastes (Alpaydin 11). This process violates privacy and changes tastes over time, making them narrower.

Many of the mechanisms by which modern systems process data are unclear even to their developers. This casts doubt on the safety of the results produced by any smart machine. Therefore, AI algorithms should be designed from the outset so that the actions of the system are safe and predictable (Alpaydin 14). When implementing machine learning, it is important not simply to collect as much data as possible, but to understand how to properly structure and process it so that automated protection tools work effectively.

Machine learning can improve decision-making among management using in-depth analysis. It is possible to identify and predict the further development of events in many areas, as well as fill gaps in past observations. Algorithms assist in decision-making, continuously selecting the best parameters for any process.

Works Cited

Alpaydin, Ethem. Introduction to Machine Learning. MIT Press, 2020.

Vermaat, Misty E., et al. Discovering Computers 2016: Tools, Apps, Devices, and the Impact of Technology. Cengage Learning, 2017.

Big Data Usage in Supply Chain Management

Abstract

This paper gives a summary of the research that was conducted to understand the unique issues surrounding the use of big data in the supply chain. The discussion identifies the major opportunities associated with the continued use of big data. The emerging obstacles affecting the use of big data are also outlined. The best recommendations and frameworks that can be implemented by supply chain managers who want to drive performance are presented.

Overview

Big data has become a powerful phenomenon that businesses cannot afford to ignore. The term big data refers to the unstructured and structured data that inundates a company on a day-to-day basis (Schoenherr and Speier-Pero 121). The use of such information today presents a wide range of opportunities as well as obstacles. The loss of privacy and jobs is among the concerns emerging from the concept of big data. Additionally, competent employees capable of completing different activities and managing data are required. Despite such issues, the ability to deliver goods to consumers promptly remains the main goal of supply chain management (Jaggi and Kadam 1013). This discussion presents numerous insights regarding the use of big data and its implications for supply chain managers.

Questions, Issues, and Problems

The targeted study seeks to answer several questions surrounding the current use of big data. The first question or issue revolves around the level of transparency associated with big data. The research then examines the privacy and personalization issues arising from the use of big data in the supply chain. The loss of jobs and a shortage of technical experts are the other issues emerging from the targeted topic (Jaggi and Kadam 1014). The study also describes supplier- and customer-centric business operations. The analysis identifies some of the best approaches to maximizing the use of big data in the supply chain and addressing the issues associated with it.

Literature Review

The past two decades have been characterized by the increased collection and storage of business data. This has led to numerous challenges because many companies are unable to deal with the stored data. The concept of big data, therefore, emerged due to the nature of this kind of information. The biggest challenge is how to use the data to predict the behaviors of different customers. This development has become even more promising for supply chain managers (Richey et al. 729). Experts have been focusing on the best methods to analyze big data and use it to influence supply chain decisions.

Navickas and Gruzauskas indicated that players in the supply chain industry could collect and interpret information promptly (18). The practice was observed to present new opportunities for setting competitive prices for different products (Schoenherr and Speier-Pero 125). Another challenge emerging from the research is that many customers have become unpredictable due to changing population dynamics, the availability of substitute products, and the emergence of new competitors. Scholars have gone further to indicate that the use of big data can make a difference for many logistical operators: the strategy can control costs and deliver different products to the final users promptly.

A study by Zhong et al. indicated that big data was presenting numerous challenges to the users (584). For instance, the approach was observed to require competent individuals to analyze and make appropriate inferences from the collected data. Intelligence extraction was also becoming a major obstacle for many supply chain managers. Companies that have implemented new cultural strategies have managed to achieve positive results. The analytical competencies and skill sets required by many companies are still unavailable (Navickas and Gruzauskas 22). This gap explains why it has become impossible for many companies to embrace the power of big data.

Davenport and Dyche indicated that the use of big data was expected to present new opportunities and challenges to future users (17). The authors argued that the use of big data could result in personalization. Similarly, Richey et al. observed that big data was a possible cause of reduced privacy for many customers (735). The issue of transparency was also singled out as one of the benefits emerging from the use of big data. These mixed findings explain why analysts have been identifying the best measures to make big data beneficial in supply chain management.

Analysis and Discussion

The completed research study indicated that many supply chain managers were aware of the benefits of big data. At the same time, the managers argued that their firms were struggling with the implementation of big data due to several barriers. The first was the issue of cost: many firms were unable to purchase the right equipment to collect information from different customers, suppliers, and logistical operators. The overwhelming amount of information was another barrier to the use of big data (Schoenherr and Speier-Pero 127).

The issue of privacy has surrounded the use of big data in business operations. In supply chain management, companies have been using big data to forecast the behaviors of different consumers and ensure the right products are delivered to them promptly. Skeptics, on the other hand, have been against the idea because it encourages companies to use customers' confidential information (Sanders 3). A customer might decide to purchase the required products from other companies if he or she finds out that his or her sensitive information is no longer secure. This challenge has made it impossible for many companies to pursue the use of big data to support their logistical activities.

Big data has led to automation in different companies. This is the case because many firms have purchased powerful equipment capable of analyzing data and presenting adequate ideas. The future might be dim for job seekers due to the continued use of big data (Sanders 4). This means that more people might be retrenched to pave the way for the big data experience.

Technologists are required by companies that plan to use big data in their logistical operations. Unfortunately, these professionals can be expensive for the company. This gap explains why big data has not been embraced by many companies. Supplier and customer-centric approaches are new ideas emerging from the use of big data. When used adequately, big data can result in powerful business strategies that focus on the emerging needs of the customers. The business can make timely decisions and deliver different consumer goods in a convenient manner (Zhong et al. 589).

Findings and Discussions

The completed study has revealed numerous issues regarding the implications of big data for supply chain management. It is agreeable that the practice presents numerous challenges that can affect business performance. For example, supply chain managers will have to breach their customers' privacy in an attempt to develop personalized marketing channels (Wamba and Akter 7). The issue of transparency has been embraced by supply chain managers because it promotes service delivery. These issues cannot be ignored because they dictate the success of the supply chain process. Businesses that fail to consider these gaps and emerging issues might find it hard to address the changing needs of the targeted consumers.

On the other hand, the use of big data is a revolutionary idea that is capable of transforming the performance of every business organization. The approach is even more beneficial when it is applied to a company's supply chain process. The completed study has, therefore, come up with a powerful framework that can be used to apply big data in an organization (Wamba and Akter 9). The framework can be used by companies that want to achieve two objectives: realize the benefits of big data in the supply chain and overcome the hurdles affecting the process.

The first step towards using big data sustainably is known as segmentation. The goal of segmentation is to come up with a powerful supply chain that is competitive in terms of time, cost, and flexibility (Jaggi and Kadam 1016). Alignment is the second step, whereby the supply functions of the targeted organization are matched properly; this strategy should be implemented to ensure the major stages of the supply chain benefit from the use of big data. Measurement is the other critical attribute of the framework. This step is used by supply chain managers to develop key metrics (also known as key performance indicators) to analyze the performance of every segment.
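The segmentation step could be prototyped roughly as shown below, where orders are assigned to supply chain segments by simple cost and lead-time thresholds; the thresholds and segment names are assumptions for illustration rather than part of the cited framework.

```python
# Hypothetical rules for the segmentation step: each order is placed in a
# segment that balances time, cost, and flexibility requirements.
def assign_segment(lead_time_days: float, unit_cost: float) -> str:
    if lead_time_days <= 2:
        return "responsive"   # speed matters more than cost
    if unit_cost >= 100:
        return "high-value"   # tight control, lower flexibility
    return "efficient"        # cost-driven default segment

orders = [
    {"sku": "A-1", "lead_time_days": 1, "unit_cost": 40.0},
    {"sku": "B-7", "lead_time_days": 6, "unit_cost": 150.0},
    {"sku": "C-3", "lead_time_days": 5, "unit_cost": 20.0},
]

for order in orders:
    order["segment"] = assign_segment(order["lead_time_days"], order["unit_cost"])
    print(order["sku"], "->", order["segment"])
# A-1 -> responsive, B-7 -> high-value, C-3 -> efficient
```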

Big data has been observed to result in a loss of jobs. This is the case because many companies choose to employ a small number of competent individuals who can support the use of big data. It would, therefore, be appropriate for business organizations to outsource services from different providers (Richey et al. 725). This approach can support the goals of companies that do not have the required technologies or equipment. Issues of privacy must be addressed from a professional perspective to meet the needs of the customers.

A focus on the needs of the customers (customer-centric approach) should be the driving force behind every supply chain process (Wamba and Akter 9). The collected data should be used in such a way that it delivers value to the customer. The supplier network should be matched with the other functions of the organization.

Conclusions and Recommendations

The modern age has presented numerous challenges and opportunities for business organizations. The emergence of modern practices and technologies continues to revolutionize how business operations are conducted. Big data is, therefore, one of the recent phenomena that should be adopted by firms that want to compete in the modern business world (Davenport and Dyche 27). The outstanding fact from the completed research is that big data presents numerous opportunities for supply chain managers while at the same time remaining difficult to execute. This is the case because big data can guide companies to offer personalized supply chain networks for their customers. Additionally, big data can present new opportunities for making evidence-based decisions. At the same time, it can be difficult for supply chain leaders to maintain a strategic focus for their operations (Wamba and Akter 9). With proper planning and the ability to address every emerging challenge, more companies will use big data to improve customer satisfaction.

Works Cited

Davenport, Thomas, and Jill Dyche. Big Data in Big Companies. International Institute for Analytics, vol. 1, no. 1, 2013, pp. 1-31.

Jaggi, Harjeet, and Sunny Kadam. Integration of Spark Framework in Supply Chain Management. Procedia Computer Science, vol. 79, no. 1, 2016, pp. 1013-1020.

Navickas, Valentinas, and Valentas Gruzauskas. Big Data Concept in the Food Supply Chain: Small Markets Case. Scientific Annals of Economics and Business, vol. 63, no. 1, pp. 15-28.

Richey, Robert, et al. A Global Exploration of Big Data in the Supply Chain. International Journal of Physical Distribution & Logistics Management, vol. 46, no. 8, 2016, pp. 710-739.

Sanders, Nada. How to Use Big Data to Drive Your Supply Chain. California Management Review, vol. 58, no. 3, 2014, pp. 1-12.

Schoenherr, Tobias, and Cheri Speier-Pero. Data Science, Predictive Analytics, and Big Data in Supply Chain Management: Current State and Future Potential. Journal of Business Logistics, vol. 36, no. 1, 2015, pp. 120-132.

Wamba, Samuel, and Shahriar Akter. Big Data Analytics for Supply Chain Management: A Literature Review and Research Agenda. NEOMA Business School, vol. 1, no. 1, 2015, pp. 1-11.

Zhong, Ray, et al. Big Data for Supply Chain Management in the Service and Manufacturing Sectors: Challenges, Opportunities, and Future Perspectives. Computers & Industrial Engineering, vol. 101, no. 1, 2016, pp. 572-591.

Cockpit Data Management Solutions: Strengths/Flaws

Introduction

The use of information systems as a means of improving performance and productivity is a key strategy for many companies today (Rainer & Cegielski, 2011). This paper explores how information systems support the processes of doing business in an organization. The paper focuses on a technology known as Cockpit Data Management Solutions, which is a more up-to-date version of the Electronic Flight Bag (EFB).

The Goodrich Cockpit Data Management Solutions

The Cockpit Data Management Solutions can be described as an electronically operated management system through which flight crews are able to perform their various tasks with more ease and efficiency than in the past. The system involves less paperwork (Goodrich, 2012).

The system, as an Electronic Flight Bag, was developed to replace the cumbersome materials that pilots previously carried in their carry-on flight bags. The device is an all-in-one application containing operating manuals for the pilot and flight crew, as well as the pilots' navigational charts. The management system can also be installed with software through which other aerospace-related tasks are performed automatically. Examples of such tasks are the calculation of take-off and landing times, messaging, data management, video surveillance, and weight assessment.
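
To illustrate the kind of calculation such software automates, the following Python sketch performs a basic weight assessment against a maximum take-off weight. The figures, field names, and limits are hypothetical and are not drawn from the Goodrich product.

```python
# Illustrative sketch of a weight-assessment check an EFB-style application
# might automate. All weights are in kilograms and all values are hypothetical.
def takeoff_weight_check(empty_weight, crew, passengers, cargo, fuel, max_takeoff_weight):
    """Return the computed take-off weight and whether it is within the limit."""
    total = empty_weight + crew + passengers + cargo + fuel
    return total, total <= max_takeoff_weight

total, within_limits = takeoff_weight_check(
    empty_weight=42_000, crew=500, passengers=15_300,
    cargo=3_200, fuel=16_000, max_takeoff_weight=79_000,
)
print(f"Take-off weight: {total} kg, within limits: {within_limits}")
```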

The Cockpit Data Management Solutions system was created by the Goodrich Corporation, a leading supplier of services and systems to aerospace companies all over the world. The company supports efficiency within the aerospace industry through various technologies that allow planes to fly, land, and stay safe. Over a number of years, the device has been sold to many airplane and jet operators in different parts of the world for business, regional, and commercial use (Goodrich, 2012).

Strengths of the Information System

The Cockpit Data Management Solutions system has been designed to ensure fast and efficient performance of tasks, as is the goal of most information systems (Turban & Volonino, 2010). Additionally, the system is easy to use and hence requires less training at a lower cost. The change from paper to software, hardware, and other technological services is one way through which airline employees are able to perform their various duties and tasks more efficiently, easily, and quickly.

The system is also an effective way for airlines to manage their flow of information. Through it, airline companies are able to maintain up-to-date databases that make access to required information easier and faster.

Other than speed and efficiency, the device is light and therefore weight-saving compared to the conventional flight bag. It is also cost-saving, as less is spent on paperwork and operations, and the use of electronic tools helps reduce the cost of operations. Additionally, the system has been found to reduce the workload for pilots, as well as to increase safety (Goodrich, 2012).

Another major strength of the Cockpit Data Management Solutions lies in the materials used to make it. The use of aerospace-grade materials makes the system durable, reliable, and of high quality.

The device has also allowed for improvements in various operations, including digital access to documents in real time and better situational awareness. The system additionally allows information to be shared online or through a computer database, which ensures effective communication between staff members (Rainer & Cegielski, 2011). An effective means of communication leads to better and faster delivery of services, smooth coordination, and better problem-solving (Goodrich, 2012).

Weaknesses

One major weakness of the Cockpit Data Management Solutions is the initial cost of implementation, which has been found to be high and explains why some airline companies are hesitant to adopt the system. Another weakness has to do with its complexity: operating it requires expertise in information technology. The legal requirements for installing the system are a further drawback to its effective adoption and acceptance by airline companies and staff members.

Conclusion

From the above discussion, the Cockpit Data Management Solutions is an effective information system through which the aerospace industry has been able to improve its performance and efficiency in undertaking its various activities and roles. The system is safe, efficient, and cost-effective, and it ensures quality and maximum productivity.

Despite its many strong points, the information system also has weaknesses, the main one being the high cost of the initial set-up. As is evident, however, the strengths outweigh the weaknesses, a fact that makes the system an important requirement for all airlines and flight crews. By seeking measures through which the various weaknesses can be eliminated, the system will become readily acceptable and therefore easier for many airline companies to implement.

References

Goodrich. (2012). Flight Deck Products and Systems. Web.

Rainer, R. & Cegielski, G. (2011). Introduction to information systems: Supporting and transforming business. Hoboken, NJ: John Wiley & Sons.

Turban, E. & Volonino, L. (2010). Information Technology for Management: Improving Performance in the Digital Economy. Hoboken, NJ: John Wiley & Sons.

Organizational Information Management: Data Warehousing

Introduction

This report explains the meaning of data warehousing and how organizations can benefit from using this storage solution. The report also explains SAP as an ERP solution and gives an architectural overview of the same.

Moreover, the report provides examples of companies that are using this solution to achieve their organizational goals. In conclusion, the paper establishes that data warehousing is an important step in organizational information management. Further, an SAP solution is viable, as the companies that have used it have benefited immensely.

Data Warehousing

Data warehousing is the process of consolidating large volumes of information collected from a variety of sources into a systematic electronic format that enables retrieval and analysis (Ponniah, 2001, p.35). Warehoused information can come from the company's databases or its operational systems.

Data warehousing is vital for any organization that seeks to improve information management. Warehoused data is used to support decision making; the stored information is used for further analysis and identification of trends and variations in a given phenomenon.
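
As a minimal illustration of this idea, the Python sketch below consolidates records from two hypothetical source systems into a single warehouse table with a common schema and then runs an analysis query against it. SQLite is used purely for demonstration; the table and field names are assumptions.

```python
# Minimal sketch: loading records from two hypothetical operational systems
# into one warehouse table with a common schema, then querying the result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_fact (source TEXT, region TEXT, item TEXT, qty INTEGER)")

# Records as they might arrive from two source systems with different field orders.
pos_system = [("North", "widget", 30), ("South", "widget", 12)]   # (region, item, qty)
web_shop = [("widget", "North", 25), ("gadget", "South", 7)]      # (item, region, qty)

conn.executemany("INSERT INTO sales_fact VALUES ('pos', ?, ?, ?)", pos_system)
conn.executemany("INSERT INTO sales_fact VALUES ('web', ?, ?, ?)",
                 [(region, item, qty) for item, region, qty in web_shop])

# Analysis query against the consolidated, uniformly structured data.
for row in conn.execute("SELECT item, region, SUM(qty) FROM sales_fact GROUP BY item, region"):
    print(row)
```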

Advantages of Data Warehousing

An organization that embraces the data warehousing concept is able to achieve sound planning and management. Warehoused data helps management in an organization to make informed decisions. Another benefit that accrues from data warehousing is the achievement of a 'common data model' (Singh, 1998, p.90). A common data model means that data is stored in one form.

Storing data in one form makes reporting and analysis of stored data easier than when multiple data models are in use. A common model also speeds up the retrieval of important reports (Singh, 1998, p.92).

In addition, data warehousing supports accurate decision making: it enables organizations to make decisions based on the information that the system provides, i.e., through reports. The management can, for example, monitor trends for items with high demand in a particular location over time. The generated reports can help in determining real performance in relation to the organization's goals.
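
As a simple illustration of such trend monitoring, the Python sketch below flags items whose demand has grown strongly in a given location across consecutive months. The data and the growth threshold are hypothetical.

```python
# Illustrative sketch: spotting demand trends per item and location from
# warehoused monthly totals. The figures and the 25% threshold are assumptions.
monthly_demand = {
    ("widget", "North"): [120, 150, 190],   # quantities for three consecutive months
    ("widget", "South"): [80, 78, 75],
    ("gadget", "North"): [40, 65, 95],
}

for (item, location), series in monthly_demand.items():
    growth = (series[-1] - series[0]) / series[0]
    if growth > 0.25:                        # report items growing by more than 25%
        print(f"High-demand trend: {item} in {location} (+{growth:.0%})")
```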

The third benefit of data warehousing is fast retrieval of data (Singh, 1998, p.95). A data warehouse provides an efficient and faster way of retrieving data because the data is stored as a separate entity from the operational systems. Storing data separately from the operational systems makes it possible for data to be retrieved without slowing those systems down.

Finally, data warehousing is advantageous because users are in control of the data in the warehouse. Even if the source data in the main organizational systems is removed, the data in the warehouse remains safe for a longer duration (Singh, 1998, p.99).

SAP Solution Overview

SAP, which stands for Systems, Applications and Products, provides businesses with comprehensive software and enterprise applications aimed at improving all business operations. SAP R/3 has an integration capability that supports client, server, and application systems.

This software is widely used because it supports both client and server computing standards. SAP software is tailored using a proprietary development programming language, which makes it reliable for many types and sizes of businesses (Williams, 2008, p.57).

SAP technology runs on different types of hardware and software platforms. It performs well on most operating systems, e.g., UNIX, Windows, or Macintosh, and on databases such as Oracle (Williams, 2008, p.63).

To make it more suited for its functions, the SAP landscape comprises application and database servers, which house the software and handle updates as well as documents stored in master files. SAP supports a virtually unlimited number of servers and a variety of hardware configuration platforms.

ERP Overview

ERP is an industry solution that allows businesses to integrate their services to keep pace with modern technology and business transaction requirements. ERP, which means Enterprise Resource Planning, is an integrated solution used to manage all aspects of business functions in an organization (O'Brien, 2004, p.86).

ERP makes this possible through its modular programs, which are linked to facilitate coordination. These links facilitate the sharing of information and enhance communication between different departments within an organization. ERP allows organizations to have centralized information in real time, facilitating fast decision making.
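
The Python sketch below gives a conceptual picture of this kind of integration: two modules operate on one shared record store, so a transaction entered in one module is immediately visible to the other. The module and field names are illustrative assumptions, not part of any particular ERP product.

```python
# Conceptual sketch of ERP-style integration: separate modules read and write
# a single shared data store. Names and structures are hypothetical.
shared_store = {"inventory": {"widget": 100}, "orders": []}

class SalesModule:
    def place_order(self, item, qty):
        shared_store["orders"].append({"item": item, "qty": qty})

class InventoryModule:
    def reserve_stock(self):
        for order in shared_store["orders"]:
            shared_store["inventory"][order["item"]] -= order["qty"]

SalesModule().place_order("widget", 10)
InventoryModule().reserve_stock()
print(shared_store["inventory"])   # {'widget': 90}: both modules saw the same data
```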

One reason for implementing ERP is the difficulty of disseminating information across the different computer platforms within an organization (O'Brien, 2004, p.93). Some organizations have different standards of operating systems and hardware profiles, which poses a challenge in implementing a standard architecture for the ERP.

Secondly, ERP systems are implemented because a single integrated system helps reduce the time consumed, thus increasing productivity in an organization (Larocca, 1999, p.80). The third reason why organizations choose to implement an ERP system is that it helps improve customer satisfaction by reducing delivery time and increasing quality with regard to the type of transaction.

The fourth reason is that the system ensures fast accessibility and enables accurate retrieval (Larocca, 1999, p.86). The fifth reason for implementing an ERP system is to have a better business framework, e.g., an improved framework for monitoring transactions such as sales and inventories. This helps businesses to institute clear planning and tracking and enhances future forecasting for the business (Larocca, 1999, p.89).

Sample Company Profiles

The Johns Hopkins Hospital

The Johns Hopkins Hospital is a healthcare institution located in Baltimore, Maryland. The hospital faced a number of challenges before adopting the SAP solution: printing delays were an issue on its single printing platform, and documents were sometimes lost. Johns Hopkins wanted a system that could easily integrate with other business applications, such as finance, as well as a solution to manage patient processes.

Moreover, Johns Hopkins wanted a solution that would support high-speed, high-volume printing, and SAP was viewed as possessing all of this. The solution has brought many benefits to the hospital. For example, printing is now efficiently managed on a single platform, many other applications have been integrated with the SAP solution, and it is simple to configure and add printers to the network because of the standard platform.

The implementation of the project took one month. Afterwards, the billing and business application program used to register patients was added. There has been ongoing support since implementation to keep the solution running efficiently.

Procter and Gamble

Procter and Gamble is a consumer goods company located in Cincinnati, Ohio. The company faced challenges with its old system and wanted a solution that would improve productivity for staff members involved in sales and related processes. It also wanted to customize a cockpit so that sales order issues could be resolved.

There was no other solution that could deliver this, and the SAP solution came in handy. SAP has enabled the company to improve user efficiency and productivity by integrating scattered functions into a single cockpit.

It has also helped to reduce the cost of tailor-made development. Implementation was carried out by a team and was completed in 10 months. The solution was piloted to assess its effectiveness shortly after it was implemented, and full implementation followed later.

Basell Polyolefins

Basell Polyolefins is the world's largest producer of polypropylene products and is based in the Netherlands. Prior to adopting the SAP solution, the company faced the challenge of replacing its batch system with real-time solutions. The batch system was slow, costly, and time-consuming. The company did not want to disrupt the business by introducing other solutions, so care was taken in choosing the right solution.

They required a solution that would help to increase and improve the efficiency of all processes and make full use of the company's IT resources and infrastructure. Since SAP's implementation, the company has enjoyed more efficient and faster processing of its transactions. There is good utilization of IT resources, and there has been a reduction in the cost of integration with other software. The implementation took three days and was simple to configure.

Conclusion

For better coordination, planning, and control of business in an organization, information management is crucial. Data warehousing helps in storing information in a way that allows it to be easily accessed and retrieved. This is important because stored information is crucial for business forecasting. Most organizations, in response to the changing business environment, are adopting ERP solutions.

An ERP solution helps integrate operations and systems in organizations, facilitating faster transactions. One such ERP solution is SAP. From the different companies that have adopted SAP, it is clear that it is efficient and effective software that meets many organizational needs. However, when it comes to IT solutions, each organization has to evaluate its own characteristics before sourcing software.

References

Larocca, D. (1999). Sams teach yourself SAP R/3 in 24 hours. Indiana: Sams.

O'Brien, J. A. (2004). Management information systems: Managing information technology in the business enterprise. New York: McGraw-Hill/Irwin.

Ponniah, P. (2001). Data warehousing fundamentals: A comprehensive guide for IT professionals. New York: John Wiley and Sons.

Singh, H. (1998). Data warehousing: Concepts, technologies, implementations, and management. New Jersey: Prentice Hall PTR.

Williams, G. C. (2008). Implementing SAP ERP sales & distribution. New York: McGraw-Hill.