Quality Control Methods Implementation

Sample checklist for the product

Feature                  Remarks
Bodywork
Dimensions & weights
Aerodynamics
Engine
Performance
Fuel consumption
Chassis
General
Abbreviations

The Theory of Constraints

The Theory of Constraints (TOC) is a management philosophy developed by Dr. Eliyahu Goldratt. According to Goldratt, the performance of any process, chain, or system is determined by the system’s weakest link.

TOC is systemic in outlook and endeavors to identify the factors that prevent the system from being successful. It then attempts to make the changes necessary to eliminate those limitations or problems. The theory consists of separate yet linked processes and interrelated concepts that entail “logistics, performance measures, five focusing steps and logical thinking processes” (Cox and Michael, 1998).

Goldratt states that there are three major measurements of performance to assess: inventory, throughput, and operating expense. TOC stresses the use of these global measures of operation instead of local measures such as utilization and efficiency. Goldratt also emphasizes the enhancement of throughput, defined as the rate at which the system generates money through sales. Goods may not be considered assets until they are sold (Dettmer, 1997).

Inventory is the money invested in the products the firm intends to sell, or in the materials that are to be converted into saleable items. Operating expense is the money the organization spends converting inventory into throughput. The firm will therefore aim at increasing throughput while decreasing inventory and operating expenses, so as to enhance cash flow, return on investment, and overall profit (Dugdale, and Colwyn, 1997).

When throughput is increased and inventory and operating expenses are minimized, the firm will most likely achieve its objective of making money, both in the present and in the future. Anything that prevents the organization from attaining this goal is described as a constraint. Constraints may take the form of material, capacity, logistics, behavior, or management policy (Gardiner, John and Lorraine, 1994).

To deal with constraints, a tool called the five focusing steps has been developed. These steps ensure that improvement efforts stay on track, and they are collectively believed to be the most significant aspects of TOC. The five focusing steps are:

Step 1: Identify the constraint(s) within the system.

Step 2: Decide how to exploit the system’s constraint(s).

Step 3: Subordinate everything else to the decisions made in Step 2.

Step 4: Elevate the system’s constraint(s).

Step 5: If a constraint has been broken in Step 4, go back to Step 1.

TOC is oriented toward the output of the entire system. The five focusing steps help identify the single biggest constraint. As soon as one constraint has been strengthened, the next weakest link becomes the priority constraint to be addressed.
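Step 1 of the five focusing steps can be illustrated with a minimal sketch: system throughput equals the rate of the slowest stage, so identifying the constraint amounts to finding the minimum-capacity stage. The stage names and daily capacities below are hypothetical, not taken from any case data.

```python
# Hypothetical process stages with daily capacities (units per day).
stages = {
    "order_intake": 40,
    "machining": 25,
    "assembly": 18,   # the weakest link
    "inspection": 30,
}

# Step 1: identify the constraint -- the stage with the lowest capacity.
constraint = min(stages, key=stages.get)

# The system's throughput can never exceed the bottleneck's rate.
throughput = stages[constraint]

print(constraint, throughput)  # assembly 18
```

After Step 4 raises the bottleneck's capacity, rerunning the same search (Step 5) reveals the next weakest link.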

Kaizen Five-Step Plan

The application of Kaizen in organizations can deliver significant outcomes through small actions in the areas of safety, productivity, and employee engagement. Organizations that have embraced this philosophy encourage all employees to assess their environments and work processes.

They are also empowered to implement suggestions on how standards, workflow, and processes should be improved. These improvements ultimately result in higher quality, greater productivity, and higher profits.

Steps for using Kaizen within an organization:

1. Definition of a Problem

When there is no problem at all, there will be no improvement required. The first thing that should be done is the identification of the existence of a problem.

2. Creation of a standard

Without standards, it is very hard to improve, and it is also very hard to detect whether any improvement has occurred. Measurable standards therefore need to be developed.

3. Development of 3 to 5 better Ideas

After the problem and the standards have been identified, it is prudent to come up with three to five ideas. The ideas may come from anybody within the organization. Once suggestions have been collected, the best ones should be selected: those that are easy to implement and that will produce results within 120 days.

4. Go back to step 1

The last step of the procedure is starting it all over again. This makes it a continuous process.
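The selection rule in step 3 can be sketched as a simple filter: from the pool of suggestions, keep the ideas that are easy to implement and expected to show results within 120 days. The idea names and scores below are entirely made up for illustration.

```python
# Hypothetical pool of employee suggestions with effort and time-to-results.
ideas = [
    {"name": "colour-coded tool board", "effort": "low",  "days_to_results": 30},
    {"name": "new ERP module",          "effort": "high", "days_to_results": 300},
    {"name": "daily 5-minute standup",  "effort": "low",  "days_to_results": 14},
    {"name": "reorganise stock room",   "effort": "low",  "days_to_results": 150},
]

# Step 3's selection criterion: easy to implement, results within 120 days.
selected = [i["name"] for i in ideas
            if i["effort"] == "low" and i["days_to_results"] <= 120]

print(selected)  # ['colour-coded tool board', 'daily 5-minute standup']
```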

Benchmarking

Benchmarking entails comparing the services and products offered by a firm against those offered by the best firm in the industry. It is a continuous practice of measuring services, products, and practices against companies considered to be the industry leaders. The process is aimed at exploring and implementing the best practices at very favorable costs (Camp, 1989).

Benchmarking involves the identification of comparison points referred to as the benchmark. Against the benchmark, all the products as well as services offered by the organization are compared (Balm, 1992).

Benchmarking is mainly aimed at ensuring quality improvements, and it has a track record of delivering them when implemented by the organization (Barber, 2004). Benchmarking involves four major steps: thoroughly understanding one’s own processes, scrutinizing other firms’ processes, contrasting other firms’ performance with one’s own, and implementing all the steps required to close the performance gap.
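The third and fourth steps amount to a gap analysis against the benchmark. The sketch below uses illustrative metric names and values (not taken from the case) and assumes defect rate and service time are lower-is-better while satisfaction is higher-is-better.

```python
# Illustrative performance metrics for our firm and the industry leader.
own = {"defect_rate_pct": 4.0, "avg_service_min": 45, "satisfaction": 3.2}
leader = {"defect_rate_pct": 1.5, "avg_service_min": 30, "satisfaction": 4.5}

# Step 3: contrast the leader's performance with our own.
# Sign convention: a positive gap means room to improve on that metric.
gap = {metric: (leader[metric] - own[metric]) if metric == "satisfaction"
       else (own[metric] - leader[metric])
       for metric in own}

# Step 4: prioritise closing the largest gaps first.
priorities = sorted(gap, key=gap.get, reverse=True)
```

With these numbers, service time shows the largest raw gap and would be tackled first; in practice each gap would be normalised before ranking.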

Benchmarking is a potentially valuable tool for the management of quality processes.

Park Place Mercedes-Benz can greatly improve the quality of the products and services it provides by comparing them with those of the leading dealers. However, benchmarking against the leading manufacturers will not be beneficial, because Park Place is a dealer, not a manufacturer of the vehicles. It should benchmark only against dealers like itself.

Conclusion

Benchmarking, the Kaizen Five-Step Plan, and the Theory of Constraints are all significant for Park Place Mercedes-Benz: with them, the organization will be able to enhance the quality of the products it provides to its customers. With benchmarking, the organization will be able to compare its products with those of the industry leaders and thereby improve the quality of what it offers.

References

Balm, G.J. (1992). Benchmarking: A Practitioner’s Guide for Becoming and Staying Best of the Best. Schaumburg, IL: QPMA Press.

Barber, E. (2004). “Benchmarking the Management of Projects: A Review of Current Thinking.” International Journal of Project Management 22: 301–07.

Camp, R.C. (1989). Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance. Milwaukee: American Society for Quality Control Quality Press.

Cox, J.F., and Michael S. S. (1998). The Constraints Management Handbook. Boca Raton, FL: St. Lucie Press.

Dettmer, H. (1997). Goldratt’s Theory of Constraints: A Systems Approach to Continuous Improvement. Milwaukee, WI: ASQC Quality Press.

Dugdale, D and Colwyn J. (1997). “Accounting for Throughput: Techniques for Performance Measurement, Decisions and Control.” Management Accounting 75, no.11: 526.

Gardiner, S.C., John, H.B. and Lorraine R. G. (1994). “The Evolution of the Theory of Constraints.” Industrial Management 36, no. 3: 136.

Quality Control for Allrepairs Mechanics

Introduction/Executive summary

This report involves the statistical analysis of data obtained from the previous jobs conducted by the Allrepairs staff. The performance of individual staff members is evaluated for all the jobs they undertake in terms of the time taken. In conducting this analysis, constraints such as the difficulty of the job are considered alongside the satisfaction of the customer.

The other factor put into consideration is the number of years that the individual has been in the field, which is important in determining their level of expertise. These values are independent random variables obtained by coding the raw data collected during the data collection stage, and they are shown in the table provided.

This analysis is supposed to show the competence of the individual staff members, who collectively determine the credibility of the company. To come up with this analysis, the hypothesis first has to be determined and then tested using the available data. The data will then be evaluated using the EViews software, which will generate the relevant statistics to be analyzed1.

Hypothesis

This analysis is aimed at establishing the credibility of Allrepairs staff members in relation to their experience, job difficulty, the time taken to complete tasks, and the customers’ responses. The null hypothesis states that the staff members are not equally competent, while the alternative hypothesis states that they have the same level of competence. If the null hypothesis holds, the alternative hypothesis is automatically disregarded, and vice versa2.
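One conventional way to test such a competence comparison is a one-way ANOVA on completion times grouped by mechanic. Since the report's raw coded data is not reproduced here, the sketch below uses synthetic stand-in data; the grouping into four mechanics and the gamma-distributed times are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the coded Allrepairs data: completion times
# (minutes) for four mechanics, ~70 jobs each. Real results will differ.
rng = np.random.default_rng(42)
times_by_mechanic = [rng.gamma(shape=4.7, scale=7.2, size=70) for _ in range(4)]

# One-way ANOVA: do mean completion times differ across mechanics?
f_stat, p_value = stats.f_oneway(*times_by_mechanic)

# A large p-value is consistent with the mechanics taking similar times,
# i.e. with an equal level of competence on this measure.
```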

Data Analysis

The first table in this analysis is a representation of the general statistics of the data objects. It indicates the mean, median, maximum, minimum, standard deviation, skewness, and kurtosis of the individual objects for the 293 observations.

Table 1: Descriptive statistics

              DIFFICULTY  SATISFACTION      TIME     YEARS
Mean            1.832765      3.225256  34.05461  8.071672
Median          2.000000      3.000000  32.00000  9.000000
Maximum         3.000000      9.000000  97.00000  12.00000
Minimum         1.000000      1.000000  6.000000  2.000000
Std. Dev.       0.604791      1.405875  15.74466  3.884239
Skewness        0.092354      2.914407  0.959621  0.752124
Kurtosis        2.589877      12.97725  4.128666  1.924974
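Statistics of this kind can be reproduced outside EViews. The sketch below computes the same summary measures for a single column using numpy and scipy; the data is a synthetic stand-in for the TIME variable (293 observations), so the values will not match Table 1.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the TIME column; the raw Allrepairs data
# is not reproduced in this report.
rng = np.random.default_rng(0)
time = rng.gamma(shape=4.7, scale=7.2, size=293)

summary = {
    "Mean": time.mean(),
    "Median": np.median(time),
    "Maximum": time.max(),
    "Minimum": time.min(),
    "Std. Dev.": time.std(ddof=1),  # sample standard deviation
    "Skewness": stats.skew(time),
    # fisher=False gives Pearson kurtosis (normal distribution = 3),
    # the convention EViews uses in Table 1.
    "Kurtosis": stats.kurtosis(time, fisher=False),
}
```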

From this data, the mean difficulty of the jobs undertaken is below the median, meaning that most jobs are relatively easy. The mean for the satisfaction indicates that most of the customers are satisfied with the job while that of the time taken to complete the tasks indicates that a larger percentage of jobs is completed within the required time frame.

On years of experience, however, the statistics indicate that most of the employees have not been with the company for long, implying a slightly high rate of labor turnover. The standard deviation likewise indicates that the difficulty level does not deviate much from the mean3.

This could be interpreted to mean that the jobs are of relatively similar difficulty. As for the level of customer satisfaction, the standard deviation indicates that a majority of the customers approve of the services offered to them.

The deviations for time and number of years likewise indicate little spread around the mean. To analyze the interrelationship between any two given variables, we will use the covariance matrix generated as follows.

Table 2: Covariance matrix

              DIFFICULTY   MECHANIC  SATISFACTION       TIME      YEARS
DIFFICULTY      0.364524  -0.110042      0.044497   7.288996   0.438607
MECHANIC       -0.110042   1.305898     -0.102529  -1.992277  -4.083507
SATISFACTION    0.044497  -0.102529      1.969738   0.991112   0.349043
TIME            7.288996  -1.992277      0.991112  247.0482    6.357861
YEARS           0.438607  -4.083507      0.349043   6.357861  15.03582
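A covariance matrix like Table 2 can be computed with numpy. The columns below are synthetic stand-ins for two of the report's variables (difficulty coded 1-3 and time constructed to rise with difficulty), so the entries will not match Table 2.

```python
import numpy as np

# Synthetic stand-in columns; the report's raw data is not reproduced here.
rng = np.random.default_rng(1)
difficulty = rng.integers(1, 4, size=293).astype(float)
time = 20.0 + 8.0 * difficulty + rng.normal(0.0, 10.0, size=293)

# np.cov treats rows as variables by default; rowvar=False makes each
# column a variable and each row an observation, as in Table 2.
data = np.column_stack([difficulty, time])
cov = np.cov(data, rowvar=False)
```

A positive off-diagonal entry means the two variables move in the same direction; the matrix is always symmetric, which is why Table 2 mirrors across its diagonal.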

The significance of this matrix lies in determining how the variables change in relation to each other4. The level of difficulty and the satisfaction of the customer move in the same direction, implying a proportionate change, while the mechanic variable does not move in the same direction as any of the other variables.

This means that the result of the repair has nothing to do with the mechanic handling the job. We can therefore conclude that the only variable that does not affect the outcome of the job hence the performance of the company is the mechanic.

The covariance between the number of years in the company and the time taken to complete tasks is high, and this can be interpreted to mean that the time changes in the same direction as the years of experience5.

The next element in this analysis is the determination of individual descriptive statistics alongside the histograms so that we can come up with a conclusion on how the individual variables affect the overall reputation of the company.

First is the difficulty histogram as shown below.

Diagram 1: Difficulty Histogram and statistics.

From this histogram, the level of difficulty for the majority of the tasks lies at the average difficulty level. The hardest jobs are the fewest, which implies that the employees in this company have the necessary expertise to handle the jobs at their disposal. The second diagram is that of the mechanics, as shown below.

Diagram 2: Mechanic Histogram and statistics.

From this, we can conclude that the mechanics do an approximately equal number of jobs as shown by the bars. The difference in the size of the bars is not significant, indicating a fair policy in the company when allocating jobs6. Third is the time taken shown in diagram 3 below.

Diagram 3: Time Histogram and statistics.

The tasks take different amounts of time to complete depending on the difficulty of the job. Most of the jobs however take a time frame below the average and this is an implication of time efficiency in the company which ensures that completed work is delivered to the customer on time7.
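The report's histograms themselves are not reproduced, but the underlying bin counts can be computed directly. The sketch below bins a synthetic stand-in for the TIME variable (293 observations) into ten intervals and counts the jobs completed below the mean time.

```python
import numpy as np

# Synthetic stand-in for the TIME column (293 observations).
rng = np.random.default_rng(2)
time = rng.gamma(shape=4.7, scale=7.2, size=293)

# Ten equal-width bins over the observed range, as a histogram would plot.
counts, bin_edges = np.histogram(time, bins=10)

# How many jobs finished below the average time.
below_mean = int((time < time.mean()).sum())
```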

Diagram 4: Satisfaction Histogram and statistics.

From this diagram, we can conclude that the number of customers who have expressed dissatisfaction with the work done is the smallest. A majority of the observations indicate a high level of satisfaction, which may mean that the repairs done never recurred and were completed on time8.

Diagram 5: Years Histogram and statistics.

From this diagram, the period of time that the employees have been working in the organization differs greatly, with the gap between the first person and the others being the greatest. This person has however handled the highest number of jobs though by a very small margin compared to the others9. The other three seem to have been in the company for an almost equal period of time.

Conclusion

From these diagrams, we can conclude that, based on the data provided here, the company is performing effectively and is credible enough to be hired to carry out repairs. The success of this company can be considered a result of its strong workforce, and so the alternative hypothesis holds true, disqualifying the null hypothesis10.

The employees have an almost equal level of expertise as realized throughout the analysis. The result of the covariance analysis indicates that it does not matter which employee is undertaking the task, since the result is always relatively equal, implying an equal level of expertise among the employees.

References List

Anderson, TW, An Introduction to Multivariate Statistical Analysis, Wiley, New York, 1998.

Bowerman, BL, RT O’Connell & ML Hand, Business Statistics in Practice, McGraw-Hill, Boston, 2001.

Boyle, RG, Descriptive Statistics, Victoria College Press, Burwood, 1998.

Bradley, T, Essential Statistics for Economics, Business and Management, John Wiley & Sons, Chichester, 2007.

Dixon, WJ & FJ Massey, Introduction to Statistical Analysis, 3rd ed, McGraw-Hill, New York, 1999.

Doane, DP & LW Seward, Essential Statistics in Business and Economics, McGraw-Hill Irwin, Boston, 2010.

Edwards, AL, Statistical Analysis, 3rd ed, Rinehart and Winston, New York, 2009.

Lind, DA, WG Marcal & RD Mason, Statistical Techniques in Business and Economics, McGraw-Hill Irwin, Princeton, 2002.

McClave, JT, PG Benson & T Sincich, Statistics for Business and Economics, Prentice Hall, Upper Saddle River, 2001.

Wegner, T, Applied Business Statistics, 2nd ed, Juta, Cape Town, 2007.

Footnotes

  1. D Doane & L Seward, Essential Statistics in Business and Economics, McGraw-Hill Irwin, Boston, 2010, p. 48.
  2. T Wegner, Applied Business Statistics, 2nd ed, Juta, Cape Town, 2007, p. 98.
  3. J McClave, P Benson & T Sincich, Statistics for Business and Economics, Prentice Hall, Upper Saddle River, 2001, p. 62.
  4. A Edwards, Statistical Analysis, 3rd ed, Rinehart and Winston, New York, 2009, p. 85.
  5. D Lind, W Marcal & R Mason, Statistical Techniques in Business and Economics, McGraw-Hill Irwin, Princeton, 2002, p. 122.
  6. R Boyle, Descriptive Statistics, Victoria College Press, Burwood, 1998, p. 77.
  7. T Bradley, Essential Statistics for Economics, Business and Management, John Wiley & Sons, Chichester, 2007, p. 82.
  8. W Dixon & F Massey, Introduction to Statistical Analysis, 3rd ed, McGraw-Hill, New York, 1999, p. 54.
  9. T Anderson, An Introduction to Multivariate Statistical Analysis, Wiley, New York, 1998, p. 98.
  10. B Bowerman, R O’Connell & M Hand, Business Statistics in Practice, McGraw-Hill, Boston, 2001, p. 23.

Sunshine Enterprises Quality Control

Current Quality Control System and Changes That Are Needed

Quality control is the field of operations management that deals with ensuring that production and services meet the necessary specifications. It is an essential component of every organization that wants to develop a brand name able to position itself in the market.

Sunshine operates in a highly competitive and sensitive food service industry, and its final product can influence the way customers react. It is essential to note that quality is concerned with always putting the needs of the final consumer first. The current control system has several flaws which need to be corrected, or the enterprise will lose its customers in the long run. The areas of focus include the following:

Customer Satisfaction

It is notable that the owner of the restaurant chains has given some attention to customer satisfaction by making enquiries to customers so that she obtains feedback. This is very good in terms of making future improvements to her enterprise’s service to its clients.

However, she needs to improve in this area, because her occasional visits to the restaurants cannot give her a clear picture of what should be corrected. Given her busy schedule, she should employ better methods of ensuring quality at all times rather than relying on random checks.

Percentage of Defects

It is also good that the chains have a target of one defective plate per hundred. However, this level of defect is more theoretical than realistic. There are no means of evaluating it, and perhaps Abby thinks that by asking for customer responses she will be able to ensure that she attains it. First, her occasional checks are not likely to ensure that the target is met.

Secondly, some customers would probably not complain; they would simply eat what has been served, walk out quietly, and never come back. Lastly, the defect level of 10% that is their target is too high when put into practice.

It means that for every 10 plates served in the restaurants, one is potentially defective, which may mean food that is not well cooked or that has extra seasoning, among other things. Suppose a customer comes to one of the hotels and has such a meal; it will be very damaging to the image of the enterprise, as there is bound to be negative publicity arising from that customer.

Randomized Inspection and Supervision

Abby, according to the case, only visits after a couple of days to get the customer complaints and act on them. Her style of quality control is centered on responses after the customers already have the food on their table.

This is not the best means of establishing quality, because she cannot guarantee better service when she is not around. Her method of using negative responses to gauge quality in her enterprise is very inaccurate.

Instead, she should be checking the food at the preparation stage rather than waiting for customer responses. By assuming that she will eliminate such incidents through randomized supervision or one-sided checks, she risks losing customers, since by the time a complaint is forwarded to her, the damage will already have been done.

Unclear Channels of Control

Looking at the case where the customer claimed he was over-tipped, there is clear evidence that the enterprise lacks a clear framework for quality control. The hotel’s billing system holds no one responsible, as seen from the number of possible scenarios generated by the manager.

The use of a third party credit card company that is not working closely with the enterprise is surprising. There needs to be a change in the way the hotel chains bill their customers so that complaints such as the one cited are not encountered by the management team.

The management of the enterprise should develop a billing system that generates reports accurately and on time. The credit card company contracted should be of a high standard in terms of accounting for payments in a conclusive manner.

Short and Long-Term Recommendations

The enterprise has not fully embraced the essence of quality control, despite Abby’s appreciation of the fact that the hospitality sector poses a great challenge because of its competitiveness and customer sensitivity. The following are some of the short-term and long-term recommendations they need to take into account, even as they plan to expand:

Getting Things Right at Initial Stages

The hotel management should ensure that the chefs are well trained to get quality meals always and not to wait for complaints at the table. Even as the food is already prepared, the waiter should never serve meals that do not meet required standards. The purchasing department should ascertain that the ingredients sourced from the suppliers are of good quality to eliminate wastage.

Quality Assurance Rather Than Control

The owner’s style of ensuring quality is not commendable. She should be centered on assuring quality for every plate served to customers, rather than controlling quality as a reaction to the various complaints that may sometimes never be heard. She should ensure that the hotel has a quality team in place so that customers get value for every penny spent at the restaurants.

Collective Responsibility Rather Than Personal Responsibility

What management should recognize is that quality is a collective responsibility. The owner should not think that quality rests with her alone simply because she is the one enquiring from customers about food quality. The management should ensure that everyone understands that they are responsible for every single defective meal served.

Benchmarking

The restaurant should be able to compare their service with the other industry leaders so that they find the best means of tackling quality issues in their own organization.

There should be constant research on billing, the handling of customer complaints, service methods, and other areas in which the enterprise seems to be lagging behind. By identifying and integrating best-practice methods in the hotel chains, defects, and subsequently complaints, are likely to be eliminated.

Continuous Improvement

Sunshine chefs meet weekly with the owner, according to the case, but there is nothing on meeting with the whole team of waiters, supervisors, and other staff. There ought to be regular meetings to chart the way forward on constantly improving service.

Through such meetings, teamwork and responsibility are cultivated, and the management gets to know what different sections of the enterprise would like improved, so that there is a continuous quest to improve quality. Through this, the enterprise will go a long way in eliminating negative customer feedback.

Encouraging Customer Feedback

Quality is focused on customer satisfaction and, therefore, the customer should always be given priority. It is encouraging to note that the management of this organization has based their judgment on customer response.

However, there should be an elaborate way of ensuring that customers give responses even without being prompted by the owner or supervisors. Channels for communicating such negative or positive responses should be made available so that there is constant improvement and issues are addressed before they become more harmful to the organization.

Quality Control in Traditional and Agile Project Management Approaches

Introduction

Project management is an important concept in the success of a business or any organisation. The concept that is adopted needs to facilitate the performance of the organisation in general. Blackstone and Schleier define a project as “a sequence of unique, complex, and connected activities that have one goal or purpose and that must be completed by a specific time, within a budget, and according to some specifications” (2009, 7022).

This argument means that it is related to goal formation and its achievement in any organisation. For a project to be carried out, some of the characteristics about it must be evaluated and planned for by the project managers. This function is performed through project management.

Project management, therefore, is the mobilising, organising, planning, and controlling of resources that will facilitate the achievement of a particular goal (Blackstone, & Schleier 2009, p. 7031). The goals set in a project are temporary as compared to business operations that are routine and repetitive for a particular organisation.

A special form of management is required to ensure that the goals are realised while at the same time dealing with the apparent challenges. There are a number of approaches that are used in project management with the most common of them being the traditional and the agile project management approaches (Blackstone, & Schleier 2009, 7032).

This paper compares the concept and practice of quality control/management in traditional project management approaches such as PRINCE2 with the concept and practice of quality control/management in agile project management approaches such as SCRUM.

Quality control and management in traditional project management approaches

Traditional project management is one of the two major types of project management commonly used with positive results. There are many definitions of traditional project management, with a standard definition provided by the PMBOK: traditional project management is “a set of techniques and tools that can be applied to an activity that seeks an end product, outcomes, or a service” (Blackstone, & Schleier 2009, 7032).

As stated by Saynisch, traditional project management “uses orthodox methods and techniques in the management process” (2005). A series of steps are used in traditional approach in project management.

The five stages, including control, are initiation; planning and design; execution and construction; monitoring and controlling; and completion (Saynisch 2005, p. 582). These stages are not present in all projects; they are included depending on the type of project, its scale, and the intended result, among other factors.

When the traditional approach is applied, the expected results are easily predictable because the methods, techniques, and tools used have been proven over time (Saynisch 2010, p. 32). This predictability makes it the preferred method of project management for many organisations.

In the initiation phase of a project, there is an in-depth elaboration and exploration of the idea in the project with the decisions on the people to execute the project being made. Afterwards, the proposal for the project is written with the necessary information gathered above.

The definition phase involves the specification of the requirements associated with the project results. This phase is then followed by the development of a specific design by which the results will be achieved (Saynisch 2005, p. 582). The development phase follows where all the necessary materials for the implementation of the project are acquired and readied for the project implementation.

The implementation phase is the second last phase in which the project takes shape with the intentions being clearly elucidated. The follow-up phase is the last but the most important phase where the project is brought to a successful completion. In this phase, examples of activities include the provision of handbooks on the project, writing a report on the project, and review of the project (Saynisch 2010, p. 32).

Quality control and management in agile project management approaches

Agile project management is another form of project management that borrows heavily from the traditional approach of project management. It has a number of differences from the above-discussed traditional approach. These differences are very apparent. Based on the advantages and applicability of the approach in some of the business fields, the approach is common.

The agile approach of project management is mainly used in the software design industry, website development, marketing, and the technology and creative industries (Fernandez, & Fernandez, 2008, p. 17). As the name suggests, the agile approach entails the execution of tasks in small series (Fernandez, & Fernandez, 2008, p. 17), contrary to the traditional approach, which is pre-planned before the execution of the whole process.

In the information and technology industry, agile approach of project management is used to determine the requirements during the engineering process. An example of the same approach is the agile software development that is used in the same field (Sue, Kendall, & Kendall 2012, 13). Examples of software methodologies include the eXtreme Programming (XP), SCRUM, and Feature-Driven Development.

These methodologies aim at reducing the “cost of change throughout the software development process” (Procter et al 2011, p. 213; Edmonds 1974). Agile methodologies experienced significant success in the initial stages of development and were seen as alternatives to the traditional approaches of project management.

However, as Fernandez and Fernandez state, the methodologies encountered a challenge with a widespread adoption as the advocates of the methods found it “difficult to obtain management support for implementing what seems like dramatic changes in application development” (2008, p. 17). The methods elicit changes in the way the managers and the software developers think and carry out their activities.

According to Procter et al., agile approach of project management allows a process that is flexible enough for the management. At the same time, it is controlled enough to lead to the delivery of solutions (2011, p. 213). Procter et al. continue to state that the method achieves results via combination of cumulative knowledge and techniques such as iterative development and modelling (2011, p. 213).

Flexibility is not achieved at the expense of efficiency; the management using the method is more result-oriented. Over the last few decades, the agile approach of project management has been considered the preferred method for developers because it allows efficient delivery of projects: results can be obtained on time while accommodating any changes along the way.

Comparison of quality control and management in traditional and agile project management approaches

A number of differences exist between the traditional and agile project management practices. The first difference between the two approaches is in the projects to which the processes are applied. According to Procter et al., “traditional projects are clearly defined with well documented and understood features, functions, and requirements. In contrast, however, agile projects discover the complete project requirements by doing the project in iterations thereby reducing and eliminating uncertainty” (2011, p. 213).

According to this observation, agile is a riskier method of project management than the traditional approach. On the other hand, the agile approach is more flexible: where change is needed during a project, the approach is easier to adjust.

Another key difference between the two approaches lies in the type of managers with which each is associated. As stated above, the traditional approach to project management encompasses a number of requirements that must be fulfilled for the process to work, among them financing and time management.

It is from these characteristics that traditional project managers are shaped. As Ghosh observes, "traditional project managers manage their projects against the budget, schedule, and scope" (2012, p. 11). He adds that metrics and variance can be tracked against the planned baselines (Ghosh 2012, p. 11).

Given that the agile approach is associated with more risk than the traditional one, this may influence which approach managers choose. Some managers choose the traditional approach of project management: traditional project managers "want to reduce the risks and preserve the constraints of time and money" (Blackstone & Schleier 2009, p. 7032).

A manager who chooses to apply the agile approach to project management is referred to as an agile project manager. This type of manager is more focused on the deliverables and is not afraid to take the risks associated with this type of management.

The budget required for the project, as well as the timeline within which it will be achievable, is of lesser concern to this manager (Blackstone & Schleier 2009, p. 7033). This manager is focused on delivery and is less concerned with the process used to achieve this goal than is the traditional project manager.

The other difference between the two approaches lies in the teams involved in each. According to Saynisch, "while traditional projects can more easily support distributed work teams of specialists and junior members because of the well-defined requirements and other documentation, this strategy is not the case with agile project teams" (2010, p. 32).

By contrast, agile project teams require the co-location of team members so that change may be embraced and increments produced (Saynisch 2010, p. 34). Agile team members are also expected to be more committed than traditional team members because they take on more responsibilities and roles in the projects (Blackstone & Schleier 2009, p. 7032).

The methods of measuring the success of the two approaches to project management differ based on the characteristics of each approach. One method of measuring this success is through the successful applications documented in the literature. The agile methodology is documented in some of the literature as having succeeded with respect to the cost of the whole method.

According to Ghosh, "Agile methods emphasise teams, working software, customer collaboration, and responding to change while traditional methods focus on contracts, plans, processes, documents, and tools" (2012, p. 12). Some of the other measures that may be used to assess the success of the two approaches, apart from cost efficiency, include schedule, quality, ROI gains, satisfaction, and productivity (Ghosh 2012, p. 11).

According to research comparing the two methods, the agile method was more cost-effective than the traditional approach to project management (Ghosh 2012, p. 11). The case reveals that the agile approach is more applicable on this consideration.

Based on the other measures that may be used to gauge the success of the two processes, it is reasonable to conclude that the agile approach has been applied more by small organisations, with the traditional method being more popular among larger ones (Blackstone & Schleier 2009, p. 7032).

Conclusion

As discussed above, it is important to choose a method of project management that facilitates the achievement of the set goals. Two major approaches of project management have been discussed: the traditional and the agile approaches of project management. The traditional approach came before the agile approach.

It is more concerned with the processes, the financing, and the other factors stated above. It involves planning before the project can be realised and, in most cases, is not flexible. The agile approach to project management, on the other hand, involves taking more risks. It is less concerned with the process and is flexible, as little upfront planning is involved.

Based on the characteristics, advantages, and disadvantages of these approaches, it is possible to recommend the types of organisations and businesses that each would suit. The agile approach is more applicable in the information and technology industry, where flexibility is essential.

The traditional approach to project management is better suited to big institutions and organisations where risks cannot be taken blindly, because the approach involves a great deal of planning and hence more accountability.

References

Blackstone, J & Schleier, J 2009, ‘A tutorial on project management from a theory of constraints perspective’, International Journal of Production Research, vol. 47 no. 24, pp. 7029-7046.

Edmonds, E. 1974, ‘A Process for the Development of Software for Nontechnical Users as an Adaptive System’, General Systems, vol. 19 no. 1, pp. 215–18.

Fernandez, D, & Fernandez, J 2008, ‘Agile project management — agilism versus traditional approaches’, Journal Of Computer Information Systems, vol. 49 no. 2, pp. 10-17.

Ghosh, S 2012, ‘Systemic Comparison of the Application of EVM in Traditional and Agile Software Project’, PM World Today, vol. 14 no. 2, pp. 1-14.

Procter, R, Rouncefield, M, Poschen, M, Lin, Y, & Voss, A 2011, ‘Agile Project Management: A Case Study of a Virtual Research Environment Development Project’, Computer Supported Cooperative Work: The Journal of Collaborative Computing, vol. 20 no. 3, pp. 197-225.

Saynisch, M 2005, ‘Beyond Frontiers of Traditional Project Management: The Concept of Project Management Second Order (Pm-2) as an Approach of Evolutionary Management’, World Futures: The Journal of General Evolution, vol. 61 no. 8, pp. 555-590.

Saynisch, M 2010, ‘Beyond frontiers of traditional project management: An approach to evolutionary, self-organisational principles and the complexity theory—results of the research program’, Project Management Journal, vol. 41 no. 2, pp. 21-37.

Sue, K, Kendall, J, & Kendall, K 2012, ‘Project contexts and use of agile software development methodology in practice: a case study’, Journal of the Academy of Business & Economics, vol. 12 no. 2, pp. 1-15.

The Production of Beef: Quality Control, Inventory Management, Production Service Design

Quality control

One of the most vital lessons learnt from the production of beef episode is the value of quality control. The production unit under analysis does not compromise on the quality of beef that comes from its facilities. The same applies to all the other companies that work alongside this organization during the slaughter and packaging of the product.

In operations management, quality control is critical in ensuring that products meet consumer needs and expectations. Therefore, a business must use standards against which to evaluate and correct its respective outcomes. A number of characteristics stand out in corn-fed beef production. First, inspection of materials occurs before, during, and after the production of beef.

For instance, the raw materials used to feed the cattle are inspected and corrected as required. The quality of corn is assured by placing a corn factory in the middle of the cattle ranch (Kwon 12). The organization thus mitigates the risks that come with purchasing raw materials from the external environment. As the cattle continue to grow, the modern-day cowboys check a number of parameters to ascertain that the cows are growing as expected.

The first characteristic that indicates the level of quality in the cattle is the muscle-to-bone ratio of the cow. Certain expectations exist concerning the quantity of muscle in beef-producing cows, and the organization checks its animals to make sure they possess the right ratio. Furthermore, the facility also looks at the fat content of the animals after a certain period of time.

Corn-fed cows grow at a much faster rate than grass-fed cows. Therefore, the amount of fat that they gain must be in accordance with certain expectations within the institution; usually, back fat is examined. The Colorado ranch thus indicates that quality control within production is imperative in producing high-quality products.

Quality assurance also continues after production during the slaughtering and packing phases. Perhaps this is one of the most critical areas because after this process, the meat will go to distributors who have no capacity to alter the quality of the product. At the slaughtering plants, workers will remove external fat and analyze the muscle size and shape of the cut.

They will also look at marbling and the pattern of muscle and fat distribution in the animal. The ranchers tend to command higher prices for those products with less fat. This aspect of post production inspection shows that quality control is a continuous process in beef production. However, intense inspection and quality checks occur at lower levels of production or in supply chain parts that are close to the consumer.

This case provides great insight into the need to balance quality checks with the cost, volume, and level of detail involved in processing. Keeping cattle is less detailed than slaughtering, so quality control must be intensified in the latter phase.

Inventory management

The beef producer also engages in inventory management. Operations management literature indicates that inventory management assists in tracking inventory, knowing required quantities, and determining when those items will be required. One also manages one’s inventory in order to determine what the price of a commodity will become.

The beef facility mostly controls its inventory in a periodic way. This is appropriate for the company because it deals with large quantity goods that are not as fast moving as other industrial products. Feed management is one of the prime aspects of inventory control in this organization. Beef cattle have different nutrient requirements owing to age and production stage needs.

Young calves are grass-fed and then weaned onto corn feed at a certain stage of their lives. The ranchers often calculate the level of feed needed by different types of cattle and then provide it accordingly. Some of the animals may require more protein than others. Inventory control for feeds is also affected by the seasons, as cold rains affect nutrient requirements.

Inventory management is essential for such a large facility because it sells thousands of cows. The farm needs to plan how it will meet increasing demand for beef during certain seasons. It must link these demands with the point at which calving or breeding takes place.

The rate at which the cows gain weight should also be related to how frequently they will be needed by the slaughter houses. Facilities that deal with such large scale production must guard against shortages by linking seemingly unrelated aspects of the process with demand needs.

Production service design

Organizations have the choice of standardizing, mass customizing, robust designing, delayed differentiating or modular designing products. The organization under consideration has opted to standardize its production. This is a central development in the production of beef within the United States. The trend emanated from the economic and production efficiencies that stem from the practice.

In the past, most beef came from relatively smaller ranches. This case study shows that standardization works well for companies that sell products which are difficult to customize. Cattle are not unique, and adopting uniform production will ensure that a high quantity of beef is produced while costs of doing so remain low.

This approach to product design has also made corn production of beef a high quality process because similar procedures are followed through the product process. Furthermore, it is relatively easy for employees and owners of the facility to perform different aspects of production. For instance, feed purchasing, equipment management and facility organization are standard practices.

Additionally, inventory control as well as accounting are all routine. This has reduced production processes to a predictable and well-managed process. Corn production of beef also illustrates that certain product design benefits may be compromised during production. In an effort to standardize beef, some slaughtering facilities may choose to discard edible parts of meat.

This fosters a lot of wastage and may be uneconomical to beef farmers. Additionally, some consumers complain about the bland taste of corn-raised cattle. Such individuals have few alternatives to choose from if they feel like taking grass-raised beef. Therefore, standardization of the production and packaging of beef has provided fewer choices to some consumers as seen in the case study.

Overall, these three components of operations management have indicated that standardization, continuous quality control and inventory management can make the difference between effectiveness and obsolescence. Slightly more than 600 corn feeding ranches account for the vast amount of beef available in the US.

These institutions have capitalized on different aspects of operations management to ensure that they meet the high demand of beef that exists in the US. Some of them even spare some beef for export. This case study was critical in demonstrating that operations management principles work.

Works Cited

Kwon, Yul. The American Steak- America Revealed. 2013. Web.

XYZ Deposits: Quality Control and Management Statistics

Introduction

XYZ is one of the leading deposit-taking financial institutions in Asia. The bank offers clients a wide range of services, including taking customer deposits, withdrawals, bank custody services, business installment loans (BILs), and Treasury Bonds, among other services. Customer service is a key area on which the bank has placed strong emphasis; the bank has pledged to provide world-class customer service, putting customer satisfaction in the spotlight. Staff aim to excite customers by creating magic moments in whatever service is offered.

The entire staff of the bank keeps this in mind at all times, whether answering customer questions about how to fill in deposit slips or giving directions to the nearest convenient branch in any part of the country.

The bank has a feedback process through which customers can give the bank a chance to understand genuine complaints. Set up to handle customers' queries at all times, the Customer Service Centre (CSC) gives customers information over the telephone; these services range from account balances, loan balances, loan disbursement periods, cheque clearing periods, and standing order instructions to salary queries. The Customer Service Assistants (CSAs) are dedicated to providing customers with professional assistance and the necessary information upon proper identification, and they pass the information on via telephone, email, or fax.

The bank has a due process for handling customer complaints, which are handled by various members of the customer service team depending on the weight of the complaints and queries. Very urgent queries are fast-tracked, whereas general complaints are addressed by branch managers or customer advisors who sit at the branch. The first day of operation saw a massive response, with 250 customers contacting the centre. In the two years since, the centre has handled 2.1 million customer calls.

Customers are not always happy with the quality and value of the services they receive from shops, banks, hypermarkets, and the like; they complain when deliveries are late, when they are served by rude staff, when service hours are inconvenient, and when performance is poor. It is against this backdrop that we take stock of how XYZ has used the CSC to handle customer queries professionally and accurately while maintaining the integrity and confidentiality of customer information.

Control Charts

In statistical process control, X-bar and R charts are used and can measure all characteristics of a product, which fall into two categories: variables and attributes (Juran and Gryna, 1980). Variables are characteristics measured on a continuous scale, while attributes are product characteristics that take discrete values.

Quality Customer Service

Customer service is the core of the bank's day-to-day operation, and the bank has pledged to give excellent customer service to all clients. In this study, XYZ has employed what the bank calls the 75/25 Customer Service Level Strategy.

Under this strategy, the bank answers at least 75% of calls within 25 seconds, or at about the sixth ring. The faster the CSAs respond to calls, the higher the service level tends to be (Grant, 1998).

For the last two years, the bank has worked towards achieving a service level higher than the 75/25 mark, which is its standard for tracking the service level. Most of the CSAs have been trained in this and continue to take refresher courses. However, when the service level is too high, say 90/10 (about the third ring), the bank needs more staff, which can be very costly. Service level attainability also depends on the clientele and on the kind of service offered.
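As an illustration, the 75/25 rule can be checked with a short script; the call data below are hypothetical, not figures from the bank:

```python
# Hypothetical answer times (seconds) for a batch of calls.
answer_times = [12, 30, 8, 22, 19, 27, 15, 10, 24, 18]

threshold_seconds = 25   # answer within 25 seconds...
target_fraction = 0.75   # ...for at least 75% of calls

answered_in_time = sum(1 for t in answer_times if t <= threshold_seconds)
service_level = answered_in_time / len(answer_times)

print(f"service level = {service_level:.0%}")  # service level = 80%
print("target met" if service_level >= target_fraction else "target missed")
```

With these sample times, eight of the ten calls are answered within 25 seconds, so the 75/25 target is met.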

Methodology

The data used here were obtained from the Customer Service Centre manager of XYZ Bank Limited. A total of six samples, representing the first and second quarters of 2010, was collected. Each sample covered 25 customers who called to find out whether their Business Installment Loans (BILs) had been approved and credited to their respective accounts.

These control charts define the range of variation. A process under test is considered out of control when the data reveal that a point falls outside the control limits. We use X-bar and R charts to monitor these variations. X-bar charts are used, for example, to monitor the mean of a process: we construct the centre line of the chart by taking samples and computing their means.

Unlike X-bar charts, which measure shifts in the central tendency of the process, range (R) charts monitor the variability of the process; the centre line is drawn at the average range, from which both the lower and upper control limits are computed.

Data Collection

Waiting times for answering phone calls were measured and recorded during the four months. Ten random waiting times were recorded in the table shown below; time is in seconds.

Confidence intervals will allow us to estimate a population parameter: the collected data are used to estimate the range of values for the population mean or proportion. The confidence level is usually denoted C, and a narrower interval is a better estimate because it conveys more information.

Sample  Jan    Feb    Mar    Apr    Average  Range
1       15.85  16.02  15.83  15.93  15.9075  0.19
2       16.12  16.00  15.85  16.01  15.9950  0.27
3       16.00  15.91  15.94  15.83  15.9200  0.17
4       16.20  15.85  15.74  15.93  15.9300  0.46
5       15.74  15.86  16.21  16.10  15.9775  0.47
6       15.94  16.01  16.14  16.03  16.0300  0.20
7       15.75  16.21  16.01  15.86  15.9575  0.46
8       15.82  15.94  16.02  15.94  15.9300  0.20
9       16.04  15.98  15.83  15.98  15.9575  0.21
10      15.64  15.86  15.94  16.02  15.8650  0.38

Waiting times are in seconds; each sample's range is its maximum minus its minimum monthly value.

The average range and the average waiting time across the four months were then determined.

The centre line of the X-bar chart is the average of the ten sample averages:

X-double-bar = 159.47/10 = 15.947 seconds

Similarly, the centre line of the R chart is the average of the sample ranges: R-bar = 3.01/10 = 0.301 seconds.

In statistics, the mean is computed using the following equation:

Mean m = ΣX/n = (X1 + X2 + X3 + … + Xn)/n

where n is the number of observations being averaged (10 samples in this case) and m is the mean.

We also use the standard deviation, which measures dispersion from the centre line; upper and lower bounds are placed around it to account for calculation and plotting errors in the graphs. The range is the difference between the extreme observations.
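As a worked check, the sample averages, sample ranges, grand mean, and average range can be computed from the table's monthly values:

```python
# Monthly waiting times (seconds) for the ten samples in the table above.
samples = [
    [15.85, 16.02, 15.83, 15.93],
    [16.12, 16.00, 15.85, 16.01],
    [16.00, 15.91, 15.94, 15.83],
    [16.20, 15.85, 15.74, 15.93],
    [15.74, 15.86, 16.21, 16.10],
    [15.94, 16.01, 16.14, 16.03],
    [15.75, 16.21, 16.01, 15.86],
    [15.82, 15.94, 16.02, 15.94],
    [16.04, 15.98, 15.83, 15.98],
    [15.64, 15.86, 15.94, 16.02],
]

means = [sum(s) / len(s) for s in samples]    # per-sample averages
ranges = [max(s) - min(s) for s in samples]   # per-sample ranges (max minus min)

grand_mean = sum(means) / len(means)          # centre line of the X-bar chart
avg_range = sum(ranges) / len(ranges)         # centre line of the R chart

print(f"grand mean = {grand_mean:.3f}")     # grand mean = 15.947
print(f"average range = {avg_range:.3f}")   # average range = 0.301
```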

Confidence Interval

Confidence intervals are used to estimate a range of values for the population mean. The confidence interval for the obtained data was investigated to state the range within which most of the data should lie.
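A sketch of the confidence interval calculation, assuming a 95% confidence level and using the ten sample averages from the table (the critical t value 2.262 is for n − 1 = 9 degrees of freedom):

```python
import math
import statistics

# The ten sample averages (seconds) from the table above.
sample = [15.9075, 15.995, 15.92, 15.93, 15.9775,
          16.03, 15.9575, 15.93, 15.9575, 15.865]

n = len(sample)
mean = statistics.mean(sample)
s = statistics.stdev(sample)      # sample standard deviation

t_crit = 2.262                    # t value for 95% confidence, 9 df
margin = t_crit * s / math.sqrt(n)

print(f"95% CI for the mean: ({mean - margin:.3f}, {mean + margin:.3f})")
```

The resulting interval is narrow (roughly 15.91 to 15.98 seconds), consistent with the small spread of the sample averages.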

Hypothesis testing

Hypothesis testing is one of the tools used in statistics; decisions about populations must often be made from sampled information. We applied a hypothesis test to the obtained data, where the null hypothesis was that the mean waiting time is less than or equal to 20 seconds and the alternative hypothesis was that it exceeds 20 seconds.
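The one-sided test described above can be sketched as a one-sample t-test on the ten sample averages (the critical value 1.833 is for a 5% significance level with 9 degrees of freedom):

```python
import math
import statistics

sample = [15.9075, 15.995, 15.92, 15.93, 15.9775,
          16.03, 15.9575, 15.93, 15.9575, 15.865]  # sample averages, seconds

mu0 = 20.0   # H0: mean waiting time <= 20 seconds; H1: mean > 20 seconds
n = len(sample)

t_stat = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
t_crit = 1.833   # one-sided critical value, alpha = 0.05, df = 9

if t_stat > t_crit:
    print("Reject H0: the mean waiting time exceeds 20 seconds")
else:
    print("Fail to reject H0: no evidence the mean exceeds 20 seconds")
```

With these data the sample mean (about 15.9 seconds) is far below 20, so the null hypothesis is not rejected.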

Control Charts

Basically, a control chart is a graph used to study how a process changes over time. The collected data are plotted in time order; the chart has a central line for the average, an upper line representing the upper control limit, and a lower line for the lower control limit (Montgomery, 2009).

Here we have chosen the X-bar chart and the R chart as tools for examining the obtained data. The X-bar chart checks the mean of the data and how it relates to the ideal mean, while the R chart checks the variability of the data.

The R chart is examined before the X-bar chart because the X-bar chart's control limits depend on the R chart.

This is the general line graph representing the ten samples of customer calls over the four-month period.

From this graph, we extract the X-bar and R charts respectively and draw a conclusion as to whether the bank has met its target for the four months sampled.


R Chart

The R chart compares the ranges of the individual samples to the average range of all the samples in order to test the variability of the measurements. The R chart plots the measured samples' ranges (Y-axis) against the sampling time (X-axis).

Equation c: R chart centre line

R-bar = (R1 + R2 + … + R10)/10 = 3.01/10 = 0.301

Equation d: R chart control limits, using the standard constants D3 = 0 and D4 = 2.282 for subgroups of size n = 4

LCL = D3 × R-bar = 0
UCL = D4 × R-bar = 2.282 × 0.301 ≈ 0.687
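The R chart centre line and limits can be computed as follows (a sketch; the ranges are those of the ten samples, with the fourth recomputed from its monthly values, and D3 = 0 and D4 = 2.282 are the standard constants for subgroups of size 4):

```python
# Sample ranges (seconds) for the ten samples.
ranges = [0.19, 0.27, 0.17, 0.46, 0.47, 0.20, 0.46, 0.20, 0.21, 0.38]

r_bar = sum(ranges) / len(ranges)   # centre line of the R chart
D3, D4 = 0.0, 2.282                 # control chart constants for n = 4
lcl = D3 * r_bar
ucl = D4 * r_bar

print(f"centre line = {r_bar:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")

# Every sample range falls between the limits, so variability is in control.
assert all(lcl <= r <= ucl for r in ranges)
```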

Interpretation of R Chart

All the sample ranges (0.17 to 0.47 seconds) fall within the R chart control limits, so the variability of the process is in statistical control and the X-bar chart can be examined.

X-bar Chart

This chart compares the averages of the obtained measurements to the average of all the samples. The X-bar chart plots the averages of the measured samples (Y-axis) against the sampling time (X-axis).

Equation 1: X-bar chart centre line

X-double-bar = (sum of the ten sample averages)/10 = 159.47/10 = 15.947

The control limits use the standard constant A2 = 0.729 for subgroups of size n = 4:

UCL = X-double-bar + A2 × R-bar = 15.947 + 0.729 × 0.301 ≈ 16.166
LCL = X-double-bar − A2 × R-bar = 15.947 − 0.729 × 0.301 ≈ 15.728
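The X-bar chart limits follow from the grand mean and average range (a sketch using A2 = 0.729, the standard constant for subgroups of size 4):

```python
x_double_bar = 15.947   # grand mean of the ten sample averages (seconds)
r_bar = 0.301           # average sample range (seconds)
A2 = 0.729              # control chart constant for subgroup size n = 4

ucl = x_double_bar + A2 * r_bar   # upper control limit
lcl = x_double_bar - A2 * r_bar   # lower control limit

print(f"LCL = {lcl:.3f}, UCL = {ucl:.3f}")  # LCL = 15.728, UCL = 16.166
```

All ten sample averages (15.865 to 16.030) lie between these limits, so the process mean is also in control.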

The X-axis represents the sampled months, while the Y-axis represents time in seconds.

The average call response time is about 15.95 seconds; the centre line of the chart is this mean.

Interpretation of X-Bar Chart

Once the graph has been plotted and the central line drawn, the vertical axis of the X-bar chart represents the means of the characteristic of interest, and the vertical axis of the R chart represents the ranges. So if we want to control the minimum time a CSA needs to pick up a customer's call, the centre line in the R chart represents the acceptable range of times in seconds within the sample, while in the X-bar chart the central line represents the desired standard.

Capability Analysis

Capability analysis measures the ability of a process to meet the expectations set by the organization's management and process structure. The regulations governing this should always ensure that the process control limits fall within the specification limits. Here we have to specify those limits: the lower specification limit is set to zero and the upper specification limit is set to 10.
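If the stated limits of 0 and 10 seconds are read as specification limits, a rough capability check can be sketched as follows (sigma is estimated from the average range using d2 = 2.059, the standard constant for subgroups of size 4; the mean and average range are the figures computed above):

```python
usl, lsl = 10.0, 0.0        # specification limits stated above (seconds)
mean = 15.947               # process mean (grand average of the samples)
sigma = 0.301 / 2.059       # process sigma estimated as R-bar / d2

cp = (usl - lsl) / (6 * sigma)                    # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # actual capability

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# The negative Cpk shows the process mean lies outside the specification
# limits, so the process cannot meet them as currently centred.
```

This is consistent with the recommendation that more staff are needed to shorten response times.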

Recommendation

From this analysis, we realize that the bank needs more Customer Service Assistants at the call centre to be able to respond to customers' needs within the shortest time possible.

Reference List

Grant, EL 1998, Statistical Quality Control, 6th edn, McGraw-Hill, New York.

Juran, JM & Gryna, FM 1980, Quality Planning and Analysis, 2nd edn, McGraw-Hill, New York.

Montgomery, DC 2009, Introduction to Statistical Quality Control, 6th edn, John Wiley & Sons, New York.

The Management and Control of Quality

Discuss the above statement, feel free to agree or disagree with it. How can you relate Deming’s theory of profound management, called the system of profound knowledge in your discussion?

The above statement is not correct given the context of management in the contemporary world. Various scholars around the world have come up with theories describing the drastic changes that have taken place, resulting in a transformed society. These theories have challenged some existing beliefs, advancing radical ideas of how the world has changed. However, these ideas do not hold entirely true given the above assumptions.

The first assumption, reward and punishment, has been one of the best managerial tools for centuries, and it is still very relevant. People work because they need to be rewarded. The reward may come in various forms, including a rise in rank, direct financial benefits, a fully paid holiday, or a vacation tour to a desirable location. These are still what employees look for, even in the contemporary world.

They work hard with the aim of getting the notice of the top management hoping that the hard work will be rewarded. Given the nature of human beings, this may not change any time soon. On the other hand, it is still important to make employees realize that for every action they take, there is a consequence that is attached (Pike & Barnes, 1996).

Employees must be responsible for their actions in order to achieve the best results. This does not involve instilling fear but making employees feel responsible. Managers in the contemporary world know that it is important to ensure that every process within the organization is successful, because it is the individual processes that make up the entire system. Every process in the system has a role that must be fulfilled to ensure the overall success of the system. When any part of the system fails, the entire system fails because there is a lack of coordination among its components.

Deming’s theory of profound management confirms the fact that results are achieved by setting objectives. In Deming’s 14 points, the first point is about setting up a mission statement and committing to it. This is a clear demonstration that there is a need to define the path to be taken clearly before embarking on the journey (Grady, 2010). The setting of the objectives enables all the components within the system to know what is expected in order to realize the overall goal. Every component within the system will, therefore, have a set target that should be achieved within a specified period of time.

It is generally believed, and indeed a fact, that when a smaller task is performed with ample time, the chances are high that the task will succeed. Quality and quantity are inversely related: when the management of a firm focuses on quantity, quality may be compromised, because every single unit within the system will struggle to increase the rate of production using the same apparatus previously used to produce fewer products.

The reduced time spent on a single product will result in incomplete products reaching the market. When the focus turns to quality, production quantity may need to be reduced because each product may take longer at every single unit of production. Although the current society may not work well with guesswork, management may at times be forced to work with opinions and guesswork because of a lack of knowledge.

Fighting fires is always undesirable to an organization. It is important to note, however, that this cannot be completely avoided within an organization. The best that management can do is to ensure that it uses such occurrences to learn and improve the system in order to be able to deal with future challenges. This is in line with Deming’s theory of encouraging education and improvement of self.

Lastly, it is obvious that competition is a necessary aspect of life, from the individual level to the organizational level. Within the firm, individual employees compete among themselves to deliver the best results or to rise to positions of top command. At the organizational level, competition is everywhere. According to Charantimath (2006), over half of the mission statements of firms around the world contain the phrase "to be the best, or the leading…". This means that they believe others exist, and among that portfolio, the vision is to be the best.

The best firms in the world can attribute their success to competition. They use technology to come up with products that are unique to the market and that outsmart competitors' products. Through this constant need to outsmart others, they end up becoming giant firms with uniquely attractive products. This competition has brought about globalization. I therefore refute the claim that current managers who use the above assumptions are lost in the twenty-first century; the assumptions still hold a lot of truth.

Summarize Deming’s 14 points. How does each point relate to the four components of profound knowledge?

Deming’s theory of profound knowledge is one of the widely used theories in the management of systems. Deming gave 14 points in managing people within an organization. These fourteen points are directly related to the four components of profound knowledge. The following are the points.

The first point is on the creation and publishing of a mission statement of a company and then committing to it. This is based on the theory of knowledge. Every member of the organization should have knowledge of where the firm is headed, and this is always found in the mission statement. The second point is learning the new philosophy. This is based on knowledge about variation. It is an appreciation of the fact that there will always be variations within the environment within which the firm operates. The third point is on an understanding of the purpose of inspection. In a system, there will be regular inspections by various authorities.

This is based on the facet of appreciating a system. The fourth point is the ending of all practices that are only driven by price. Price is a short term factor and should not always be used as a factor that brings a competitive edge. This point emphasizes the psychology of change. The fifth point is on constant improvement of the system of production. The world is dynamic and the system should reflect this. The basis of this point is on knowledge about variations. The sixth point is on instituting training. This will ensure that employees understand the current dynamics in the market in order to be in a position to deliver desirable profits.

The seventh point is on instituting leadership, while the eighth is on driving out fear and developing trust. In the current society, leadership through fear may not work; it is through proper leadership and trust that success can be achieved. This is based on the appreciation of the system. The ninth and tenth points respectively focus on optimizing the efforts of teams and individuals, and on eliminating exhortations for the workforce. Teamwork in the current society is very important: individuals should come together to form a formidable force able to meet challenges. These two points are based on the appreciation of the system.

Individuals should appreciate that there is a way in which various activities should be done within a system. The elimination of numerical quotas and the removal of barriers are the eleventh and twelfth points respectively. In the current society, barriers may not act as a means of achieving success; it is therefore important to eliminate them in the spirit of the psychology of change. Encouraging education and self-improvement, and taking action to accomplish the transformation, are the thirteenth and fourteenth points respectively. They are based on the theory of knowledge.

What are the implications of not understanding the components of profound knowledge, as suggested by Peter Scholtes?

Peter Scholtes explained the implications of not understanding the components of profound knowledge. The first implication is that events will be seen as individual accidents. This can be destructive to the management because the resulting approach is not holistic. The second implication is that it will be possible to see the symptoms but not the root cause. The management will be able to see the consequences of a failure of a system or part of a system but fail to see the reason behind it.

This means that the management will not be in a position to solve the problem because it does not know its cause. Another implication is that the effects of interventions on the entire system may not be understood. This makes the situation even worse because the management lacks knowledge of the implications the incident has for the firm. Given this lack of knowledge, there will be no basis for seeking intervention measures.

As Evans and Lindsay (2008) note, a remedy can be developed only for a factor whose implication for the firm is known. It is difficult to convince individuals to develop intervention measures while they lack knowledge of the factor, and this is made worse by the absence of knowledge of an appropriate intervention. Another implication is the blaming of individuals rather than the system. Personalizing blame may have serious negative consequences for the firm.

This is because every individual will abandon the system and make an effort to defend themselves, and the system will be further affected negatively. A final implication of not understanding the components of profound knowledge is a lack of understanding of the responsibility and accountability of the community. It is always important to understand these, because it is only through this understanding that the management will be in a position to integrate the community into the system.

References

Charantimath, P. M. (2006). Total quality management. New Delhi: Pearson Education.

Evans, J. R., & Lindsay, W. M. (2008). The management and control of quality. Mason, OH: Thomson South-Western.

Grady, J. O. (2010). System management: Planning, enterprise identity, and deployment. Boca Raton: CRC Press/Taylor & Francis.

Pike, J., & Barnes, R. (1996). TQM in action: A practical approach to continuous performance improvement. London: Chapman & Hall.

Going Inc.’s Quality Control & Service Improvement

To become successful in the airline transport business, Going, Inc. needs to improve in several areas, and in a way that provides long-term solutions rather than quick short-term ones. The steps need to be drastic and firm so that the trend of losing business over the last 20 months is reversed and the goal of becoming the most successful airline service provider to business travelers is achieved. The customers have to be given what they expect from an airline marketed under the slogan of “High Society in the Air” (MyCampus 2010) so that they do not seek another, less expensive option.

The improvement of performance needs to be brought about in several areas, including three main sections: on-time delivery, baggage handling, and overall customer service. The statistics are not a happy scene for the company, as the comparison with the industry average shows (MyCampus 2010):

Metric	Going, Inc.	Industry average
Being on time	71.6%	83.91%
Air carrier delay	9.82%	3.71%
Weather delay	2.62%	0.55%
National Aviation System delay	5.00%	5.01%
Security delay	1.2%	0.07%
Aircraft arriving late	3.75%	3.50%
Cancelled	6.00%	3.07%
Diverted	0.10%	0.18%
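The size of the gap on each metric can be made explicit with a short script. This is an illustrative sketch only: the figures are the ones quoted above (MyCampus 2010), while the variable names and layout are mine.

```python
# Compare Going, Inc.'s delay statistics with the industry average
# (figures from MyCampus 2010; all values are percentages).
metrics = [
    ("Being on time",                 71.60, 83.91),
    ("Air carrier delay",              9.82,  3.71),
    ("Weather delay",                  2.62,  0.55),
    ("National Aviation System delay", 5.00,  5.01),
    ("Security delay",                 1.20,  0.07),
    ("Aircraft arriving late",         3.75,  3.50),
    ("Cancelled",                      6.00,  3.07),
    ("Diverted",                       0.10,  0.18),
]

print(f"{'Metric':<32}{'Going':>8}{'Industry':>10}{'Gap':>8}")
for name, going, industry in metrics:
    gap = going - industry  # positive gap = Going, Inc. worse on delay metrics
    print(f"{name:<32}{going:>8.2f}{industry:>10.2f}{gap:>+8.2f}")
```

Run this way, the table shows at a glance that the on-time figure trails the industry by more than 12 percentage points and that cancellations are nearly double the average, which supports prioritising on-time delivery among the improvement areas.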

Improvement in the service design strategies can be brought about by staying on the present routes they are flying instead of expanding to new ones. They already serve long routes across the US and internationally to Europe and Asia, so it is better not to stress themselves with more load, at least for the time being. Instead, they might improve their location strategy by increasing the number of flights per day to and from at least some of the airports they currently serve once a day. They must also adapt to the internet and provide a customer-friendly website with everything about their services (Valverde-Ventura and Nembhard 2008).

Going, Inc. should target more passengers rather than only targeting first-class frequent flyers with frequent-flyer packages. This is necessary to move past a medium customer satisfaction ranking and achieve better quality management strategies. Going, Inc. should also decrease the variety of models in its fleet from AirDyno and Cost, which presently numbers seven. A smaller number of models will allow better and speedier cleaning cycles and maintenance and reduce problems with the supply of spare parts as well, especially since they do not share a very good relationship with AirDyno (Bisgaard 2007).

Presenting a full meal on the flight is a good idea, but over-emphasizing customized meals to give them a “Going, Inc flair” should be avoided. Going, Inc. should keep its first-come, first-served basis for boarding passengers instead of letting first class board first; this will make the passengers in other classes feel equally important. Probably the most serious problem lies in the management of human resources. The employees should be allowed to voice their problems, and adequate training should be provided to them. The meager increment in pay has to be addressed, or else it will only broaden the gap between the authorities and the employees (Bersimis 2007).

Going, Inc. keeps maintenance stations at every hub location, which demands exceptionally elaborate management. Reducing the number of models, along with keeping maintenance stations only in a few more or less easy-to-reach hubs, will lessen the burden of management on the company. The problems of air carrier and connecting aircraft delays have to be solved by rescheduling the routes covered by the company: their routing schedule is too extensive, which leads to delays among other issues.

Going, Inc. spends a lot of money on grounded or under-repair aircraft, 18% more than the industry average, which indicates poor maintenance. This can be reduced by decreasing the variety in the fleet. Spending money on training is good, but care has to be taken that the staff is getting the proper training to perform their duties better.

References

Bersimis, S. (2007). Multivariate statistical process control: an overview. Quality and Reliability Engineering International 23(5), 517-543.

Bisgaard, S. (2007). Quality management and Juran’s legacy. Quality and Reliability Engineering International 23(6), 665-677.

MyCampus. (2010). Service Division. Web.

Valverde-Ventura, R., and Nembhard, H.B. (2008). Robustness properties of Cuscore statistics for monitoring a nonstationary system. Quality and Reliability Engineering International 24(7), 817-841.

Quality Control Issues in Production

Introduction

Quality control is an essential part of production, as it is the stage that guarantees that the customers receive a satisfactory product or service. It is challenging to thoroughly evaluate every possible use case with the limited resources of a company department. However, an inferior product can severely damage a company’s reputation and lead customers to turn to its competitors for future purchases. Therefore, the procedure should be carried out thoroughly and efficiently if a company wants to maintain its sustainability. This report investigates the influence of poor quality control on a recent product created by Bethesda Softworks, Fallout 76, and the lessons that can be learned from its failure.

Bethesda Softworks and Fallout 76

Bethesda Softworks is a well-known videogame publisher that has released numerous critically acclaimed titles in the past. It is primarily known for two action role-playing game franchises that it owns, The Elder Scrolls and Fallout. As the newest entry in a highly popular and well-established series, Fallout 76 could have expected a high degree of success even if it did not completely match the standards set by previous entries. However, this was not the case, and as the data provided by Morgans (2018) shows, the game was a large-scale failure. While one can identify numerous issues that combined to severely damage the commercial success of the product, its poor quality, as well as that of the services surrounding it, should be considered the primary causes.

The game itself is full of issues such as bugs and performance concerns. Ramsey (2018) describes the state of the game at its launch as embarrassing, noting that the severe stuttering is the primary concern. Bethesda has somewhat improved the performance of the game with continuous patches since its release, but the developers have not been able to address all of the bugs and glitches. One of the more recent stories, as described by Dellinger (2019), is that a main feature of the game, as well as one of its most significant selling points, stopped working because the year has changed. Numerous other issues are also still present, and many of them can severely and negatively affect player experience.

However, the services that Bethesda has created to support its newest game have been the subject of numerous controversies, as well. Petite (2018) names two different stories: the fake canvas bags advertising and the personal information leak. The first involved an expensive special edition of the game, which was stated by the publisher to include a canvas bag but shipped with a thin nylon variant that did not look like the advertisement.

Bethesda initially refused to provide compensation when the issue was discovered but was forced to change its position and offer replacement bags later. This incident directly led to the second controversy, as special edition buyers were asked to enter their data and submit proof of purchase to the Bethesda website, which accidentally made the data publicly available for a long enough duration of time that the public was able to notice. The list of scandals does not end here, but these two are the primary examples of poor quality control by Bethesda.

The actions of the company’s employees have at times been ill-considered, as well. Webb (2018) describes the story of how the company had banned a considerable number of players, essentially removing their ability to utilise the product they had bought, over a potentially non-existent issue, and made the situation worse with patronising emails to those affected. Arif (2018) describes the low-quality fix that was intended to allow the game to support ultrawide monitors but resulted in a solution that was deemed inferior to a third-party alternative by the community. Overall, it appears that the teams responsible for the development and quality control for the game are not sufficiently committed to the task.

The choices of partners who produced various Fallout merchandise were associated with their own scandals. Kain (2018) describes how the “Nuka Rum” bottles offered by Bethesda as a collector’s item ended up disappointing the buyers due to their low quality despite the high price. Not only was the design subpar, as the bottle was a standard variety hidden in a plastic shell, but the case also interfered with pouring. While the rum’s production was outsourced to another company, Bethesda should have been responsible for ensuring that the final product matched a set of standards justifying its price. On its own, the situation would not have been particularly notable, but it shows how shallow the quality control was for Fallout 76 and everything associated with the product.

Primary Quality Control Issues

It is possible to identify three central issues related to quality control during the production of Fallout 76: a lack of time in which to conduct the procedures, insufficient quality standards, and a lack of employee engagement. The game was announced in May 2018 and released in November 2018, and the variety of issues described above that plagued it strongly implies that the game was rushed and did not go through all of the necessary quality control procedures. Ramsey (2018) also notes that many of the bugs observed in Fallout 76 have been present in the company’s previous titles, which use the same fundamental engine, suggesting that Bethesda’s quality assurance negligence is a long-standing trait. Some of these issues can be traced to the industry as a whole.

Shorter development times, which result in lower overall quality, have become a trait of many game publishers, particularly large ones. Von Wolfersdorf (2015) describes the reduction in game development cycle times and its negative influence on the overall quality. He notes that while achieving superior speed and quality at the same time should not be considered impossible, it is a challenging task that requires considerable research.

Modern large developers appear to be disregarding research and quality in favour of faster production and relying on brand strength to sell their product, which ultimately negatively affects their performance in the long term. It is possible that this behaviour was also the case for Bethesda and Fallout 76, and the public may perceive it in that manner.

The issue with general standards is more endemic to Bethesda, though its earlier games were able to pass the scrutiny of critics and players and become known as masterpieces despite their issues. Another factor is responsible for a more general decline in development standards: according to Sandqvist (2015), it is the recent tendency to release games in an ‘early access’ state, where people can access and play the game as it is being developed and serve as testers while funding production by purchasing the product.

Due to this idea, early access games tend to be severely underdeveloped and attract consumers based on the strength of their concept, brand, or marketing. The approach can be a target for criticism, as it allows videogame companies to extract profits without offering a finished, complete product, and some games, even popular ones, end up never leaving this stage.

Lastly, low employee engagement may also be an industry-wide issue. According to Milner (2018), there is often not enough time to finish a game, which forces developers to work overtime and make mistakes due to overwork and exhaustion. As videogame creation is an artistic occupation already and requires passion and dedication for success, the strain tends to demand more from a person, leading to greater exhaustion and lower engagement. In light of Fallout 76’s unfavourable reception even before its release, it is possible that the employees who were working on the project were not motivated to perform to the best of their abilities, as they realised the impossibility of their task.

Consequences and Challenges

Fallout 76 is most likely a failure in terms of sales due to its poor publicity, as the data mentioned above show. Bethesda Softworks is privately owned, so information about the company’s financial state is not available, but it is highly unlikely that the company was able to make a significant profit. However, it is also possible that Fallout 76 did not cost much to make, as according to Strickland (2018), it may have reused assets from previous Bethesda Softworks games.

Nevertheless, it is possible that the company is going to have to accelerate the release of its next game to maintain its financial position. The increased speed is likely to reduce the quality of the following product, potentially pulling the company into a loop of rushed games that keep failing.

The more important consequence of the low quality of Fallout 76 is the poor reception of the game by consumers, both those loyal to Bethesda and new ones. Both categories of customers participated in the backlash against the game as well as Bethesda itself (Asarch 2018), and the negativity only grew throughout the controversies that followed the release. For a company like Bethesda, which maintains its success based on a devoted base of fans who are willing to forgive some missteps, this reaction means that the publisher may have lost a considerable part of that trust. Now, it is going to have to apply significant effort to regain the positive image it had held up until the release of Fallout 76.

Conclusions

The failure of Fallout 76 has been caused by a combination of different factors, the most prominent of which is the poor quality control by Bethesda Softworks. The lack of assurance measures is also a composite concern, as the company’s culture is responsible for a significant part of it. However, the particular failure of Fallout 76 can also be linked to Bethesda’s attempt to follow industry trends such as shorter development cycles and early access releases. Both practices tend to considerably lower the quality of the product as a result, a fact that the publisher was able to experience first-hand. Due to the underwhelming sales and the lost customer trust that resulted from the Fallout 76 release, Bethesda Softworks now has to release its next game faster to recoup the losses, but cannot risk making a subpar product, as that would ruin its reputation.

Recommendations

As described above, it is likely that Bethesda Softworks largely sacrificed quality control in favour of increased production speed and lowered costs while creating Fallout 76. A complete overhaul of the system may be necessary, one that would return the creation of superior products to the forefront. Bethesda Softworks does not have public shareholders, as it is privately owned, but in more general cases, video game company shareholders tend to distance themselves from the process and demand economic growth. As the specifics of Bethesda’s quality control procedures are unknown, and the only information this report can work with is the results visible in the company’s games, the recommendations will consist of a complete overhaul of the system based on scholarly research into its most effective models. Furthermore, the suggestions will be limited to the topic of this report and only cover quality control improvements.

Overall Framework

The general approach to the creation of new products for a company should incorporate attention to every detail and level of production with the intent to maximise quality. Mandava and Bach (2015) call this method ‘total quality management’ and separate it into a set of factors: employee empowerment, management commitment, customer satisfaction and employee training. All four should be taken into account and addressed during changes, and if the idea is implemented correctly, the company’s culture should change.

The new approach would involve the commitment of every employee, including the managers, to delivering the best possible product to consumers, allowing the production team itself to carry out a part of the quality control team’s duties during its work and simplifying the process by reducing the number of significant redesigns.

While the approach described above is somewhat idealistic, frameworks that attempt to put it into action can be implemented. Androniceanu (2017) describes a method that separates total quality management into three dimensions: technical, social and economic. In turn, each of the dimensions is defined by three primary factors. Quality is present in each group; the other two factors are standards and technical characteristics for the technical dimension, price and terms for the social dimension, and product parameters and costs for the economic dimension. Improving the two other metrics in a group will consequently upgrade the overall quality of the product, allowing a company to evaluate which contributing factors are lacking and respond with appropriate changes.

Quality Improvement

Maintaining the level of quality that is expected of the company is essential to its sustainability, but it can lead to stagnation if continuous improvement is not present. A company should keep improving itself to keep up with the evolution of the market and remain competitive. According to Jammal, Khoja and Aziz (2015), Six Sigma methods may be appropriate for the purpose in the current framework, as they are compatible with total quality management guidelines. The core concepts of Six Sigma include factual management, rational leadership, continuous improvement and employee partnership. The approach is generally employed in other industries, but its nature makes it broadly applicable, and implementing it in a video game company would most likely result in significant benefits.

While Six Sigma approaches are not commonly employed in the video game industry, one can inspect the broader software creation field for examples and ideas. Roy and Samaddar (2015) provide an example of significant reductions in the numbers of software defects such as bugs after a successful implementation of the policies and note the contrast with prior disorganised quality improvements initiatives, which did not have a considerable effect. This utility is particularly relevant for a company like Bethesda, whose products are known for their tendency to have large numbers of bugs. The rarity of the approach in the industry serves as an additional incentive, as the company would gain a competitive edge.

Capability Management

It is impossible to manufacture a product that fulfils the expectations of the consumers and guarantees a high level of quality in a reasonable time frame if the team sets unrealistic goals. According to Gach (2018), Bethesda Softworks’s Todd Howard made a number of unrealistic promises during the game’s initial presentation, which mostly went unfulfilled. Quality control initiatives cannot introduce features that are not already present in some form, but their role is also to ensure that the product matches the expectations set by the marketing team. As such, it is vital to set realistic goals and to take the abilities of the employees and the remaining time and resources into consideration.

However, estimating the difficulty and resource requirements of different projects and approaches, especially ones that arise suddenly during the development process, is a challenging task. According to Politowski et al. (2016), delays, unrealistic scope and a lack of documentation are the primary problems in game development. One way to solve the issue would be to adopt a rigidly structured approach that would allow a company to accurately evaluate the time and resources necessary to implement a feature or fix an issue. The establishment of such a system would require significant effort, as the capabilities of the company would have to be documented in detail, but the benefits would be considerable and immediately noticeable.

Reference List

Androniceanu, A 2017, ‘The three-dimensional approach of total quality management, an essential strategic option for business excellence’, Amfiteatru Economic, vol. 19, no. 44, pp. 61-78.

Arif, S 2018, ‘’, IGN. Web.

Asarch, S 2018, ‘’, Newsweek. Web.

Dellinger, AJ 2019, ‘’, Engadget. Web.

Gach, E 2018, ‘’, Kotaku. Web.

Jammal, M, Khoja, S & Aziz, AA 2015, ‘Total quality management revival and six sigma’, International Journal of Computer Applications, vol. 119, no. 8, pp. 1-5.

Kain, E 2018, ‘‘ bottles’, Forbes. Web.

Mandava, T & Bach, C 2015, ‘Total quality management and its contributing factors in organizations’, Journal of Multidisciplinary Engineering Science and Technology, vol. 2, no. 12, pp. 3504-3510.

Milner, D 2018, ‘’, Game Informer. Web.

Morgans, M 2018, ‘’, VGR. Web.

Petite, S 2018, ‘’, Digital Trends. Web.

Politowski, C, de Vargas, D, Fontoura, LM & Foletto, AA 2016, ‘Software engineering processes in game development: a survey about Brazilian developers’ experiences’, in Proceedings of SBGames 2016, Polytechnic School of the University of São Paulo, São Paulo, pp. 154-161.

Ramsey, R 2018, ‘’, Push Square. Web.

Roy, S & Samaddar, S 2015, ‘To reduce defect in software development: a Six Sigma approach’, in S Arsovski, L Miodrag & M Stefanović (eds), 9th International Quality Conference, University of Kragujevac, Kragujevac, pp. 345-351.

Sandqvist, U 2015, ‘The games they are a changin’: new business models and transformation within the video game industry’, Humanities and Social Sciences Latvia, vol. 23, no. 2, pp. 4-20.

Strickland, D 2018, ‘’, Tweak Town. Web.

von Wolfersdorf, FF 2015, ‘’, Master of Science Thesis, Eindhoven University of Technology. Web.

Webb, K 2018, ‘’, Business Insider. Web.

C-Chart for Service Quality Control

Summary

The C-chart is a reliable monitoring tool that tracks changes in an indicator over a given period. This control tool allows a company to identify inconsistencies and defects in the study group (Goel, 2020). C-charts have lines for the upper and lower control limits. The control tool allows the organization to measure the stability of a process and track the results of improvement or deterioration (Goel, 2020). For example, C-charts can monitor product quality, count the number of defects, or measure the level of customer satisfaction in numerical terms. C-charts can also be used to control the number of product returns or negative customer reviews.

Example

An Italian restaurant uses a C-chart to monitor customer satisfaction. Every week, the number of dissatisfied customers is summed and recorded in a table. Based on this data, a chart can be created where 1.1 is the average number of complaints per week. According to the formula for constructing C-charts, the upper limit will be 4.25, and the lower limit will be negative 2.05, which is set to zero. As seen from the diagram, the number of complaints does not approach the upper limit and remains at the lower one several times. It can be said that the Italian restaurant competently uses this visual control tool, and the situation with customer complaints is under control. Firstly, the figures do not approach the maximum allowable values in this context. Secondly, the indicators are at the minimum in 3 of the 10 indicated weeks.

Table 1.

Week Number of Complaints
1 1
2 2
3 1
4 3
5 0
6 1
7 0
8 2
9 1
10 0
Total 11

CL = c̄ = 11/10 = 1.1 (the average number of complaints per week)

UCLc = c̄ + 3√c̄ = 1.1 + 3√1.1 ≈ 4.25

LCLc = c̄ − 3√c̄ = 1.1 − 3√1.1 ≈ −2.05 → 0 (a negative lower limit is set to zero)
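The calculation above can be reproduced with a few lines of code. This is a minimal sketch of the standard three-sigma C-chart limits applied to the counts in Table 1; the variable names are illustrative.

```python
# C-chart centre line and three-sigma control limits for the weekly
# complaint counts in Table 1 (Goel, 2020 gives the standard formulas:
# CL = c-bar, UCL = c-bar + 3*sqrt(c-bar), LCL = c-bar - 3*sqrt(c-bar)).
import math

complaints = [1, 2, 1, 3, 0, 1, 0, 2, 1, 0]  # weeks 1-10

c_bar = sum(complaints) / len(complaints)     # centre line (CL)
ucl = c_bar + 3 * math.sqrt(c_bar)            # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # a negative limit is set to 0

print(f"CL  = {c_bar:.2f}")   # 1.10
print(f"UCL = {ucl:.2f}")     # 4.25
print(f"LCL = {lcl:.2f}")     # 0.00

# Flag any week whose count falls outside the control limits
out_of_control = [week + 1 for week, c in enumerate(complaints)
                  if c > ucl or c < lcl]
print("Out-of-control weeks:", out_of_control or "none")
```

With these data, no week exceeds the upper limit of 4.25, confirming the conclusion that the complaint process is in statistical control.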

Figure 1. C-chart for customers’ complaints.

Reference

Goel, A. (2020). Metrology & Quality Control. Technical Publications.