Abstract
VPI (Value Package Introduction) was one of the core programs in the Cummins Operating System (COS). VPI was the process by which the company defined, designed, developed and launched high quality Value Packages for customers. Among the key operations in a VPI program was to identify part failures. When a part failure was identified, the part was transferred to other plant locations. A delay in delivery time from one plant location to another impeded the diagnosis of a component and resulted in the postponement of a critical resolution and subsequent validation. Customer focused Six Sigma tools, a proven methodology, were applied to quantify the performance of this process. Six Sigma is a data-driven methodology designed to eliminate defects in a process. The project goal was to identify the root causes of process variance and reduce the number of days it was taking for a part to move from point of failure to the part engineer for analysis. The average number of days at the start of this project was 137. The target was to reduce this by 50%. The benefit of performing this project was a reduction in the time it took for parts to move, which improved the ability to review and fix problems in a timely manner and allowed parts to be improved or revised and put back on the engine for further testing.
VPI Failed Parts Movement Between Locations
Introduction
VPI (Value Package Introduction) was one of the core programs in the Cummins Operating System (COS). VPI was the process by which the company defined, designed, developed and introduced high quality Value Packages for customers. The overall VPI package allowed Cummins to continually enhance the product(s) delivered to customers. This project was conducted in an effort to improve the value of these packages. By improving the process of moving parts from one location to another, Cummins benefited in both cycle time and cost.
VPI included all the elements of products, services, and information that were delivered to the end-user customer. These included: oil, filters, generator sets, parts, business management tools/software, engines, electronic features and controls, service tools, reliability, durability, packaging, safety and environmental compliance, appearance, operator friendliness, integration in the application, robust design, leak-proof components, ease of service and maintenance, fuel economy, repair cost, price, and diagnostic software. These were key factors of customer satisfaction that allowed Cummins to stay competitive and provide quality parts and services to end customers. This process was essential to surviving among the competition.
Statement of the Problem
One of the key steps in a VPI program was to identify and solve part failures. To accomplish this in a timely manner, parts needed to travel quickly from the point of failure to the component engineers for diagnosis. Failures were identified at the Cummins Technical Center during engine testing. The failed parts were then delivered to one of two other locations, the Cummins Engine Plant (Cummins Emission Solutions) or the Fuel Systems Plant, where they were to be routed to the appropriate engineer for diagnosis and part engineering changes.
A delay in the analysis of a failed part meant a delay in the resolution of the problem and subsequent engine testing. The ideal situation was for a part failure to be identified by the test cell technician, delivered to the engineer, diagnosed by the engineer, and the part redesigned for further testing on the engine. When this did not happen in a timely manner, the failed part did not reach the engine again for a sufficient amount of testing. The problem was that parts were either taking too long to get into the engineer's hands, or the parts were lost. Engines need a pre-determined amount of testing time to identify potential engine failures and associated risks to the customer and the business. As a result, the opportunity to continually improve parts and processes was missed.
Through the use of customer focused Six Sigma tools, this process improved the ability to solve customer problems and achieve company goals. Investigation was necessary to determine the most efficient process for the transfer of failed parts between different sites within Cummins.
Significance of the Problem
This process was important in resolving part failures. Timely transfer of parts to the correct engineer for evaluation reduced the amount of time needed for issue correction and increased the performance of the engines that were sold to customers.
This project allowed Cummins to continually improve the process and reduce cycle time and cost. The project involved the transportation of VPI failed parts from the point of failure to the correct component engineer. The improvements made in this project ensured that parts were received by the engineers in a timely manner, which allowed further testing of the re-engineered failed parts.
Statement of the Purpose
The process of identifying part failures and delivering them to the correct part engineer was essential in diagnosing problems and correcting them. Employees were either not trained in the problem identification area or were unaware of the impact that their work had on the entire process. Communication involving the test cell engineers who identified part failures was important in two areas. First, it was critical that the engineer responsible for the part was notified; subsequently, the Failed Parts Analyst (FPA) had to be notified in order to know when to retrieve the part for transport. The relationship between the test cell engineer and these two areas was fundamental to the success of the process. Other factors that contributed to the delay in part failure identification and delivery time were vacation coverage of key employees and training of shipping personnel. The average number of days for a part to be removed from the test cell engine and delivered to the correct design engineer was 137 days; a 50% reduction therefore meant a target of roughly 68 days. Based on the logistics of the locations where the parts were being shipped, this process could be completed in less time. The purpose of this project was to reduce the amount of time it was taking for this process to occur. The benefit of performing this project was a reduction in the time it took for parts to move, which improved the ability to review and fix problems and allowed parts to be upgraded or modified and put back on the engine for further evaluation. The improvements produced by this project can be applied to similar processes throughout multiple divisions.
Definition of Terms
VPI - Value Package Introduction; a program used by Cummins through which new products were released. It included all the elements of creating a new product, such as design, engineering, final product development, etc.
COS - Cummins Operating System; the system of Cummins business practices that was standard throughout the company. It defined the manner in which Cummins operated.
C&E matrix - a tool that was used to prioritize input variables against customer requirements.
FPA - Failed Parts Analyst; the person responsible for retrieving failed parts from the test cells, determining the correct engineer to whom the failed parts were to be sent, and preparing the parts for shipment to the appropriate location.
SPC - Statistical Process Control; the application of statistical methods to the monitoring and control of a process.
TBE - Time Between Events; in the context of this paper, TBE represented the number of opportunities that a failure had of occurring between daily runs.
McParts - a software application program which tracked component progress through the system. It provided a time line from the time a part was entered into the system until it was closed out.
Assumptions
The assumption was made that all participants in the project were familiar with the software application program that was used.
Delimitations
Only failed parts from the Value Package Introduction program were included in the scope of this project. Additionally, only the heavy duty engine family was included. The light duty diesel and mid-range engine families were excluded. This project encompassed three locations in Southern Indiana. The emphasis of this project was on delivery time and did not include packaging issues. It also focused on transportation and excluded database functionality. Experienced employees were chosen for collecting data. The variable of interest considered was delivery time. Data collection techniques were limited to the first transfer only. The project focused on redesigning an existing process and did not include the development of a new theory.
Limitations
The methodology used for this project did not include automation of the process as a step. RFID was a more attractive way to solve this issue; however, it was not economically feasible at the time. The population was limited because the parts that were observed were limited to heavy duty engines, which reduced variation in the size and volume of parts. Time constraints and resource availability were a concern. Because team members resided at several locations, meeting scheduling was more problematic. Additionally, coordinating team meetings was a concern because room availability was limited.
Review of Literature
Introduction
The scope of this literature review was to evaluate articles on failed parts within Value Package Introduction (VPI) programs. However, although quality design for customers is widely utilized, the literature on Value Package Introduction was somewhat scarce. VPI was a business process that companies used to define, design, develop, and introduce high quality packages for customers. VPI included all the elements of products, services, and information that were delivered to the end-user customer. One of the key functions in a VPI program was to problem-solve part failures, which was the path this literature review traveled.
Methods
This literature review centered on part/process failures and improvements. The methods used in gathering reading material for this literature review involved the use of the Purdue University libraries: Academic Search Premier, Reader's Guide, and OmniFile Full Text Mega. Supplementary investigation was conducted online, where many resources and leads to reference material were found. All of the references cited are from 2005 to present, except for a Chrysler article dated 2004, which was an interesting study discussing the use of third party logistics centers; a journal article from 1991 that explains the term cost of quality, which is used throughout this literature review; and two reference manuals published by AIAG that contain regulations for the ISO 9001:2000 and TS 16949 standards. Keywords used during research included terms such as scrap, rework, failed parts and logistics.
Literature Review
Benchmarking. Two articles, authored by Haftl (2007), concentrated on the combination of metrics needed to optimize overall performance. Some of these metrics included completion rates, scrap and rework, machine uptime, machine cycle time and first pass percentages. "According to the 2006 American Machinist Benchmarking survey, leading machine shops in the United States are producing, on average, more than four times the number of units made by other non-benchmarked shops. Also worth noting is the fact that they also reduced the cost of scrap and rework more than four times" (Haftl, 2007, p. 28). The benchmark shops showed greater improvement than other machine shops. "The benchmark shops cut scrap and rework costs to 4.6 percent of sales in 2006 from 6.6 percent three years ago, and all other shops went to 7.8 percent of sales in 2006 from 9.3 percent three years ago" (Haftl, 2007, p. 28). The successful reduction of scrap and rework costs by the benchmark shops was attributed to several factors. First, training was provided to employees and leadership seminars were held. Secondly, these shops practiced lean manufacturing, and lastly, they had specific programs which directly addressed scrap and rework. Whirlpool, one of the country's leading manufacturers of home appliances, had used benchmarking as a means of learning how it scored in comparison to its competition. Whirlpool benchmarked its key competitor, General Electric. As a result, it identified improvements that could be managed at a low investment. The improvement processes were especially useful and applied to existing strengths of the business. Whirlpool rolled out a new sales and operating plan based on customer requirements (Trebilcock, 2004).
Quality. A common theme contained in all the articles analyzed was that of quality. In Staff's review (2008), he contended that regardless of a company's size, quality was critical in retaining a competitive advantage and retaining customers. The Quality Control 100 is a list of the top 100 manufacturers who demonstrated excellence in operations. The results were based on criteria such as scrap and rework as a percentage of sales, warranty costs, rejected parts per million, the contribution of quality to profitability, and shareholder value. Over 800 manufacturers participated in this survey. The top three manufacturers for 2008 were listed as: #1 Advanced Instrument Development, Inc., located in Melrose Park, IL; #2 Toyota Motor Manufacturing in Georgetown, KY; and #3 Utilimaster Corp., Wakarusa, IN (Staff, 2008). In an article written by Cokins (2006), the author stressed that quality was a key factor in improving profitability. He informed the reader that quality management techniques assisted in identifying waste and producing problem solving solutions. Among the problems he cited regarding quality was that it was not often measured with the appropriate measuring tools. As a result, organizations could not easily quantify the benefits in financial terms. An obstacle that affected quality was the use of traditional accounting methods. The financial data was not captured in a format that could easily be applied in decision making. Because quantifiable methods lacked a cost basis against which to compare the benefits, management often perceived process improvements as being risky.
Cost of Quality (COQ) was the cost associated with identifying, avoiding and making corrections to defects and mistakes. It represented the difference between actual costs and reduced costs as a result of identifying and fixing defects or problems. In Chen's survey (Chen & Adam, 1991), the authors broke down cost of quality into two parts, the cost of control and the cost of failure. They explained that the cost of control was the most easily quantifiable because it included prevention and actions to keep defects from occurring. Cost of control had the capability to detect defects before a product was shipped to a customer. Control costs included inspection, quality control labor costs and inspection equipment costs. Costs of failure included internal and external failures and were harder to estimate. Internal failures resulted in scrap and rework, while external failures led to warranty claims, liability and hidden costs such as the loss of customers (Chen & Adam, 1991). Because cost of control and cost of failure were related, managing these two factors reduced part failures and lowered the costs associated with scrap and rework. Tsarouhas (2009, p. 551) reiterated in his article on engineering and system safety that "failures due to human errors and raw material components account for 25.06% and 5.35%, respectively, which is about 1/3 of all failures." "A rule of thumb is that the nearer the failure is to the end-user, the more expensive it is to correct" (Cokins, 2006, p. 47). Identification of failed parts was a key process of Value Package Introduction and key to identifying and correcting failures before they reached the customer. A delay in the examination of a defective part resulted in the delay, or a miss, of the implementation of a critical fix and subsequent validation. When a delay occurred, the opportunity to continually improve parts and processes was not achieved. In the journal article written by Savage & Son (2009), the authors affirmed that effective design relied on quality and reliability. Quality, they stated, was the adherence to specifications required by the customer. Reliability of a process included mechanical reliability (hard failures) and performance reliability (soft failures). Both of these types of failures occurred when performance measures failed to meet critical specifications (Savage & Son, 2009).
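The two-part decomposition that Chen & Adam describe lends itself to a brief worked illustration. The following is a minimal sketch; every category and dollar figure below is hypothetical and is not drawn from the cited study or from Cummins data:

```python
# Minimal sketch of the cost-of-quality breakdown per Chen & Adam (1991).
# All dollar figures are hypothetical, for illustration only.
cost_of_control = {               # prevention/appraisal: keeps defects from shipping
    "inspection": 40_000,
    "quality_control_labor": 55_000,
    "inspection_equipment": 15_000,
}
cost_of_failure = {               # harder to estimate than cost of control
    "internal_scrap_and_rework": 80_000,
    "external_warranty_and_liability": 120_000,  # plus hidden costs such as lost customers
}

control_total = sum(cost_of_control.values())
failure_total = sum(cost_of_failure.values())
print(f"cost of control: ${control_total:,}")
print(f"cost of failure: ${failure_total:,}")
print(f"total cost of quality: ${control_total + failure_total:,}")
```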
Tools and specifications. The remaining articles reviewed in this literature review centered on tools and specifications that were utilized across the business environment. Specifications were important aspects of fulfilling a customer's needs. Every company had its own unique way of working, so businesses often had slightly different needs (Smith, Munro & Bowen, 2004, p. 225). There were a number of tools available to help meet specific customer requirements. Quality control systems and identification of failed parts were among these tools. The use of statistical methods made efforts at improvement more effective. Two common statistical methods were those associated with statistical process control and process capability analysis. The goal of a process control system was to make predictions about the current and future state of a process. A process was said to be operating in statistical control when the only sources of variation were common causes (Down, Cvetkovski, Kerkstra & Benham, 2005, p. 19). Common causes referred to sources of variation that over time produced a stable and repeatable distribution. When common causes yielded stable results, the output was considered to be predictable. SPC involved the use of control charts through an integrated software package. In an article by Douglas Fair (2008), he looked at product defects from the eye of the consumer. He explained that to truly leverage SPC to create a competitive advantage, key characteristics had to be identified and monitored (Fair, 2008). The means for monitoring some of these characteristics involved the use of control charts. An article written on integrated control charts introduced control charts based on time-between-events (TBE). These charts were used in manufacturing companies to gauge the reliability of parts and service related applications. An event was defined as an occurrence of a defect, and time was defined as the amount of time between the occurrences of defect events (Shamsuzzaman, Min, Ngee & Haiyun, 2008). Process capability was determined by the variation that originated from common causes. It represented the best performance of a process. Other writers recognized that one way to improve quality and achieve the best performance was to reduce product deviation. The parameters they used included the process mean and production run times (Tahera, Chan & Ibrahim, 2007). Peter Roost (2007) favored the use of Computer-Aided Manufacturing tools as a means of improving quality. According to the author, CAM allowed an organization to eliminate mistakes that cause rework and scrap, improved delivery times, simplified operations, and identified bottlenecks, which helped in the efficient use of equipment (Roost, 2007). Other articles on optimization introduced a lot size modeling technique to identify faulty products. Lot-sizing emphasized the number of units of an item that could be produced without interruption on the equipment used in the production process (Buscher & Lindner, 2007).
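The control chart logic described by Down et al. can be sketched briefly. The following is a minimal individuals-chart calculation in Python; the measurement values are hypothetical, and the sketch is only one simple form of the charts these authors discuss:

```python
import numpy as np

# Minimal individuals (I) control chart sketch, as used in SPC.
# The measurements are hypothetical daily values for some key characteristic.
data = np.array([12.1, 11.8, 12.4, 12.0, 11.6, 12.3, 12.2, 11.9, 12.5, 12.0])

moving_ranges = np.abs(np.diff(data))   # ranges between consecutive points
mr_bar = moving_ranges.mean()
center = data.mean()

# 2.66 = 3/d2 (d2 = 1.128 for moving ranges of size 2), the standard I-chart constant.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar
print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")

# Points beyond the limits signal special causes; a process showing only
# common-cause variation stays within them and is considered predictable.
signals = np.flatnonzero((data > ucl) | (data < lcl))
print("special-cause signals at indices:", signals)
```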
Conclusion
In this literature review the importance of failed part identification was presented. The impact that quality and reliability had on this process was indicative of the value that proper measuring tools provide. By using customer focused tools, the identification and correction of failed parts was more easily achieved and allowed a quicker resolution to customer problems. Benchmarking was discussed as a means of comparing outputs to those of competitors. Benchmarking was the first step in determining areas requiring immediate attention. Haftl (2007) and Trebilcock (2004) dedicated their articles to benchmarking and the impact it had on identifying areas requiring immediate improvement. Staff (2008), Cokins (2006), Tsarouhas (2009), and Savage & Son (2009) spent more time discussing the critical need for quality and the impact it had on competitive advantage. Lastly, writers Smith, Munro & Bowen (2004), Down, Cvetkovski, Kerkstra & Benham (2005), Fair (2008), Tahera, Chan & Ibrahim (2007), and Roost (2007) discussed the different specifications and tools used in increasing quality and identifying failures. The articles covering benchmarking were concise and easy to comprehend. A similarity among all of the articles is the consensus that quality was important in identifying and avoiding failures and that competitive advantage cannot be obtained without it. Gaps identified through this literature review were in the methods of making process improvements. Many of the authors had their own version of the best practice to use to improve performance. The articles on tools and specifications were very technical and discussed several different methods. In Fair's article, the author had a different perspective than any of the other articles analyzed. He wrote from the view of a consumer.
Methodology
This project built on existing research. Documentation was reviewed to determine the methodology used in previous process designs. The goal of this project was to redesign the process flow to improve capacity and eliminate non-value added time. Team members were selected based on their vested involvement in the project. Each team member was a key stakeholder in the actual process. A random sampling approach was used in which various components were tracked from point of failure to delivery.
McParts, a software application program, was utilized to measure the amount of time that a component resided in any one area. Direct observation was also used.
A quantitative descriptive analysis was employed in which numerical data was collected.
The DMAIC approach of Six Sigma was used. The steps involved in the DMAIC process were:
- Define project goals and the current process.
- Measure key aspects of the current process and collect relevant data.
- Analyze the data to determine cause-and-effect relationships and ensure that all factors are being considered.
- Improve the process based upon the data analysis.
- Control the process through the creation and execution of a project control plan. Process capability was established by performing pilot samples from the population.
In the Define stage, the "Y" variable objective statement was established: reduce the amount of time it takes for a failed part to move from point of failure to the hands of the evaluating engineer by 50%. Next, a data collection plan was created. The data was collected using the McParts component tracking system. Reports were run on the data to monitor part progress.
In the second stage, the Measure stage, a process map was created which identified all the actual inputs that affected the key outputs of the process. It also allowed people to illustrate what happened in the process. This step was useful in clarifying the scope of the project.
Once the process map was completed, a Cause & Effect matrix was developed. The Cause & Effect matrix fed off of the process map, and key customer requirements were then identified. These requirements were rank ordered and an importance factor was assigned to each output (on a 1 to 10 scale). The process steps and materials were identified, and each step was evaluated based on the rating it received. A low score indicated that the input variable had a smaller effect on the output variable. Conversely, a high score indicated that changes to the input variable greatly affected the output variable and needed to be monitored.
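The scoring logic of a Cause & Effect matrix can be illustrated with a short sketch. In the Python below, the requirement weights, input names, and ratings are hypothetical stand-ins, not the project's actual matrix:

```python
# Minimal Cause & Effect matrix sketch: each process input is rated against
# each customer requirement, and each requirement carries a 1-10 importance weight.
# All names, weights, and ratings below are hypothetical.
requirements = {"delivery time": 10, "part traceability": 8, "part condition": 6}

inputs = {
    "incident origination": {"delivery time": 9, "part traceability": 9, "part condition": 1},
    "tagging of parts":     {"delivery time": 3, "part traceability": 9, "part condition": 3},
    "FPA pickup":           {"delivery time": 9, "part traceability": 3, "part condition": 3},
}

# Weighted score = sum over requirements of (importance weight x rating).
scores = {
    name: sum(requirements[req] * rating for req, rating in ratings.items())
    for name, ratings in inputs.items()
}

# The highest-scoring inputs are the ones that most affect the outputs
# and therefore need to be monitored.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:24s} {score}")
```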
The next step involved developing a Fault Tree Analysis (FTA). The FTA was used to help identify the root causes associated with particular failures. A measurement system analysis was then conducted. Measurement tools such as the McParts software application program, as well as handling processes, were studied.
Next, an initial capability study was conducted to determine the current process capability. Then, a design of experiment was established. The design of experiment entailed capturing data at various times throughout the project. Six months of data was obtained prior to the start of the project to show the current status. Once the project was initiated, data was collected on a continuing basis. Finally, after the project was complete, data was gathered to determine the stability and control of the process.
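For cycle-time data such as days-to-delivery, a one-sided capability calculation illustrates what such a study computes. This is a minimal sketch only: the sample values are fabricated for illustration, and the 69-day upper limit simply reflects the project's 50% reduction target applied to the 137-day baseline.

```python
import numpy as np

# Minimal capability sketch for cycle-time data against an upper spec limit.
# The sample values are hypothetical, for illustration only.
days = np.array([52, 61, 48, 75, 66, 58, 71, 63, 55, 68, 60, 57])
usl = 69.0  # 50% reduction target from the 137-day baseline

mean, sd = days.mean(), days.std(ddof=1)

# One-sided capability index: distance from the mean to the USL in 3-sigma units.
ppu = (usl - mean) / (3 * sd)
print(f"mean={mean:.1f} days, sd={sd:.1f}, Ppu={ppu:.2f}")
# Ppu >= 1.33 is a common rule of thumb for a capable process.
```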
Once the experiment was completed and the data was analyzed, a control plan was created to reduce variation in the process and to identify process ownership. All of the above steps included process stakeholders and team members who helped in creating each output.
Data/Findings
Define. The goal of this project was to reduce the number of days it was taking a part to move from point of failure to the part engineer for evaluation. Using historical data, 2 of the 17 destination locations for parts were identified as being problematic. The average number of days it was taking parts to be delivered to the component engineer at the Fuel Systems Plant and Cummins Engine Plant (Emission Solutions) locations was 137 days. Both sites were located in the same city where the part failures were identified. Key people involved in performing the various functions in part failure and delivery were identified and interviewed.
Measure. A process map was created documenting each step of the process, including the inputs and outputs of each step (Figure 1). After the process was documented, the sample size was determined. Of the 3,000 plus parts, those parts sent to the two sites were extracted, producing a sample size of 37 parts.
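The extraction step amounts to a simple filter over the part-tracking records. The sketch below shows how such a sample could be pulled; the column names, site codes, and values are hypothetical and are not the actual McParts schema:

```python
import pandas as pd

# Hypothetical part-tracking records; column names, site codes, and values
# are assumptions for illustration, not the actual McParts schema.
parts = pd.DataFrame({
    "part_id": [101, 102, 103, 104, 105],
    "destination": ["FSP", "CTC", "CES", "Jamestown", "FSP"],
    "days_to_delivery": [141, 30, 152, 88, 129],
})

# Keep only parts routed to the two problematic sites.
sample = parts[parts["destination"].isin(["FSP", "CES"])]
print(len(sample), "parts in sample")
```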
Parts were then tracked using a controlled database called McParts. From this point, the key steps identified were used in creating a Cause & Effect matrix. The C&E matrix prioritized input variables against customer requirements and was used to understand the connections between key process inputs and outputs. The inputs were rated by the customer in order of importance. The top 4 inputs identified as having the largest impact on quality were: incident (part failure) origination, appropriate tagging of parts, the failed parts analyst role, and delivery of the tagged part to the right destination. The Cause & Effect matrix allowed the team to narrow down the list and weight the evaluation criteria. The team then performed a Fault Tree Analysis (FTA) on possible solutions. The FTA analyzed the effects of failures. The critical X's involved the amount of time for processing an incident report and tagging parts, the amount of time it took for the FPA to pick up the parts from the test cells after a part failure was identified, and the staging and receiving process. Next, validation of the measurement system was conducted. An expert and 2 operators were selected to run a total of 10 queries in the McParts database using random dates. The results of the 2 operators, as shown in Figure 2, were then scored against each other (attribute agreement analysis between appraisers) and against those of the expert (appraiser versus standard).
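The agreement scoring in such a measurement system analysis reduces to percent-match calculations. The sketch below is a minimal illustration; the ten query outcomes are hypothetical and are not the Figure 2 data:

```python
# Minimal attribute agreement analysis sketch for a measurement system study.
# Each list holds the outcome of 10 database queries; all values are hypothetical.
expert = ["A", "B", "A", "C", "B", "A", "A", "C", "B", "A"]  # the standard
op1    = ["A", "B", "A", "C", "B", "A", "B", "C", "B", "A"]
op2    = ["A", "B", "A", "C", "A", "A", "A", "C", "B", "A"]

def agreement(x, y):
    """Percent of trials on which two appraisal series agree."""
    return 100.0 * sum(a == b for a, b in zip(x, y)) / len(x)

print(f"between appraisers:     {agreement(op1, op2):.0f}%")
print(f"operator 1 vs standard: {agreement(op1, expert):.0f}%")
print(f"operator 2 vs standard: {agreement(op2, expert):.0f}%")
```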
The next logical step was to determine whether there was a difference between the types of tests performed and the length of time it was taking a part to be sent to the appropriate component engineer. There were two types of tests performed, Dyno and Field tests. Figure 6 shows that the median for field tests was slightly better than that of the Dyno tests, which came as a surprise because field test failures occur out in the field at various locations, while the Dyno tests are conducted at the Technical Center. The data drove further analysis into the outliers, which showed that of approximately 25 of these data points, 8 were ECMs, 5 were sensors, 7 were wiring harnesses, 1 was an injector, and 4 were fuel line failures. These findings were consistent with the plot of days to close by group name. ECMs, sensors, wiring harnesses, and fuel lines had the largest variance. The similarities and differences in the parts were reviewed, and it was discovered that they were handled by different groups once they reached FSP. The Controls group handled ECMs, sensors, and wiring harnesses. The XPI group handled accumulators, fuel lines, fuel pumps, and injectors.
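A median comparison like the one behind Figure 6 can be sketched with Mood's median test, the test named later in this analysis. The day counts below are hypothetical, not the project's data:

```python
import numpy as np
from scipy.stats import median_test

# Hypothetical days-to-delivery for the two test types; illustration only.
dyno  = np.array([150, 162, 140, 171, 155, 166, 148, 174])
field = np.array([128, 119, 133, 141, 125, 137, 122, 130])

# Mood's median test: do the two samples come from distributions
# with the same median?
stat, p, grand_median, table = median_test(dyno, field)
print(f"grand median={grand_median:.0f} days, p-value={p:.3f}")
# A small p-value (e.g., < 0.05) indicates the medians differ.
```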
Drilling down further, another box plot was created to graphically depict any differences between the two test types at both sites. The boxplot showed that CES dyno tests had a higher median and higher variability than CES field tests and the Fuel Systems dyno and field tests (see Figure 7).
An IMR chart was created for the dyno and field tests without special causes. The data was stable but not normal. A test of equal variances was run for the CES and FSP dyno and field tests. Based on Mood's median test, there was no difference in medians. This was likely due to the small sample size in 3 of the 4 categories; however, the CES dyno tests had a great deal of variation and required further analysis.
An IMR chart and box plot were run on the data for the XPI and Controls groups at the Fuel Systems Plant. The data was stable but not normal. Next, a test of equal variances was run, which showed that the variances were not equal. Thus, the null hypothesis that the variability of the two groups was equal was rejected. Next, attention was directed toward the Fuel Systems Plant. A boxplot created from the data showed there was a statistical difference between the medians for the FSP Controls group and the XPI group. With the solutions derived from the DMAIC methodology of Six Sigma, the project team performed statistical analysis which demonstrated that there would be benefits gained by resolving the problems that were identified. The changes were implemented and a final capability analysis was performed on the data, which revealed an 84% reduction in the number of days it took a part to move from point of failure to the hands of the component engineer for analysis. Improvements were noted and validated by the team. To ensure that the performance of the process would be continuously measured and that the process remained stable and in control, a control plan was created and approved by the process owner responsible for the process.
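The equal-variance comparison can be sketched as follows. The specific test is not named in the text; Levene's test is one common choice because it does not assume normality, which matters here since the IMR charts showed the data was stable but not normal. The day counts below are hypothetical:

```python
import numpy as np
from scipy.stats import levene

# Hypothetical days-to-delivery for the two FSP handling groups; illustration only.
controls = np.array([160, 95, 210, 120, 188, 140, 230, 105])
xpi      = np.array([118, 125, 112, 130, 121, 127, 115, 124])

# Levene's test of equal variances is robust to non-normal data.
stat, p = levene(controls, xpi)
print(f"Levene statistic={stat:.2f}, p-value={p:.3f}")
# p < 0.05: reject the null hypothesis that the two variances are equal.
```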
Conclusions/Recommendations
The goal of this project was to reduce the number of days it was taking to move a part from point of failure to the component engineer for analysis. This goal was accomplished, and the final capability of the process showed a reduction in time of 84%, from 137 days to 22 days. There were 4 critical problems identified during this project which required immediate attention. First, the multi-vari analysis evidenced that field tests had a better median than dyno tests, which indicated that there was a problem in the dyno tests which occurred at the Technical Center. To correct this issue, dyno technicians and component engineers were trained to create incident reports within 24 hours of a part failure. The technicians and the FPA were trained to move these parts within 24 hours, with an emphasis on parts going to CES. The second critical problem, depicted in the boxplot created for Fuel Systems, was that there was a great deal of variation in the handling of parts between the XPI group and the Controls group. FSP's median time to delivery was high due to multiple areas/people not being trained, and the Fuel Systems engineers were not notified when parts were received at their location. To correct this problem, complete McParts training was provided to the entire team, and a new procedure for dispositioning parts directly to the component engineers was implemented. This resulted in dispositioning of parts to fewer people. Distribution lists were also updated to include key people, which heightened the awareness of aging incidents. The third problem identified was the lack of clarity in the McParts report data. To resolve this, the McParts report was revised and additional line space was added to show a full description of the part name. A spin-off project was initiated to make improvements to the McParts disposition screen. The fourth problem was that the staging area and location at the Tech Center were inadequate. This issue was resolved by providing a new cabinet for failed parts that the process owner agreed to monitor and control daily. The benefits of performing this project had a direct impact on the ability to analyze and fix problems and to allow a part to be improved or modified and put back on the engine for further evaluation. This allowed critical customer and business needs to be met and exceeded. Further investigation is recommended for potential improvements in the following areas:
- Jamestown, N.Y. location parts movement process to CTC and other sites.
- Receiving process of VPI failed parts at FSP (XPI process vs. Controls group process).
- Incident entry timeliness for all VPI programs (Monarch Red, Jamestown).
- Resource allocation and vacation coverage of key team members.
- Further investigation of variation in CES field testing.
- Root cause analysis of the Dyno median versus the Field test median.