Module 17: Focused Improvement

Improvement Focus Areas

Improvement activities are ad hoc and reactive, based on emotions rather than on focused priorities. As a result, improvements are not really noticeable.

There is general consensus about broad improvement focus areas:
• The main focus is on safety, cost savings and breakdown reduction.
• There is no system to evaluate conflicting priorities.
• Asset-related losses are not quantified.
• Operational and Engineering objectives are still defined separately.
The AM Policy, Strategy and Scorecard are used to define focus areas:
• Operations and Engineering have common AM objectives and KPIs.
• Asset-related losses are broadly quantified.
• Short- and medium-term improvement focus areas and initiatives are defined.
• Agreed focus areas are used to prioritise daily problem solving activities.
Improvement focus areas have been refined and deployed:
• Improvement targets and initiatives are allocated to functional areas.
• Critical assets and specific losses have been identified for improvement.
• These losses are quantified to assist with prioritisation and ROI decisions.
• Specific triggers are used at all review meetings to prompt follow-up problem solving.
Focus areas are dynamically defined in line with the AM Strategy and current performance measures:
• The AM strategy is dynamic to guide focus areas.
• Performance measures define current performance gaps.
• The organisation is flexible enough to respond to changing priorities.

Daily Management System

There is no structured mechanism to review asset performance on a regular basis, leading to knee-jerk reactions and frustration.

Operations and Maintenance Management have separate meetings:
• Problems are discussed at daily departmental management meetings.
• These meetings focus on explaining problems and allocating blame.
• Managers report functional performance at monthly meetings.
• Front-line staff are not involved in these problem solving meetings.
Cross-functional review meetings exist on strategic and tactical levels:
• Asset problems are discussed at daily planning/review meetings between Operations and Maintenance.
• Improvement actions are assigned to specific persons.
• AM Technical and Steering Committees discuss strategic problems.
The full daily management system is in place, including front-line teams:
• Cross-functional frontline teams discuss and solve daily operational problems.
• Unsolved problems are escalated to daily first line management meetings.
• Thresholds are used to trigger further investigation and problem solving.
The daily management system is efficient and pro-active:
• Frontline teams deal with most of the problems.
• They escalate solutions with cost implications or wider impact to management.
• Management support frontline teams with quick responses.
• The AM Steering Committee provides strategic direction and guidance.

Improvement Projects

Cross-functional teams are not used to solve complex problems. Actions are allocated to managers operating in isolation or to disempowered individuals, resulting in general failure.

Some improvement projects exist, with varying success:
• People are allocated randomly to these projects.
• Success depends on the power and influence of the project leader.
• Meetings are held on an ad hoc basis, when people have time or when a crisis exists.
• There is no structured mechanism for project feedback.
Formal project teams exist to drive AM improvement initiatives:
• Improvement projects are identified in the AM Strategy.
• Responsibility is given to an appropriate leader and cross-functional team.
• A specific scope, improvement targets and deadlines exist for each project.
• Project leaders report progress during regular AM Committee meetings.
Improvement projects are triggered by the daily management system:
• Major or chronic problems are identified as potential projects.
• Project selection criteria are based on the AM Strategy and potential benefits.
• The project team is selected according to the nature of the problem.
• The project team is held accountable for results.
Innovation projects are allocated to cross-functional teams:
• A system exists to identify potential innovation projects (e.g. automation or the use of new technology in asset care).
• Projects are selected in line with the AM Policy and potential benefits.
• Cross-functional, multi-level teams are allocated to investigate these options.
• They report back to top management.

Use of Data

Data is never used during problem solving. Solutions are based on gut feel and emotions, with little or no factual validation.

Data is used to some extent during problem solving, mainly to quantify problems:
• KPIs are used to identify and quantify performance gaps.
• There is limited use of data during analysis.
• The main source of information is people's experience and memory.
Data from EAMS or CMMS is used to support root cause analysis:
• Pareto (80-20) analysis is used to identify main problem areas.
• Maintenance history is used during root cause analysis.
• EAMS reports are used to identify trends and to verify solutions.
Data mining is used extensively during root cause analysis:
• Teams use repeated Pareto analysis to isolate the problem areas.
• Data mining shows different perspectives of the problem, e.g. total downtime vs. stoppage frequency (see the sketch after this list).
• Failure types and stoppage reasons are used to confirm root cause analysis.
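
The repeated Pareto analysis described above can be illustrated with a short sketch. The snippet below is a minimal illustration only, assuming a hypothetical stoppage extract from the EAMS/CMMS with made-up asset names, stoppage reasons and downtime figures; it ranks stoppage reasons first by total downtime and then by frequency to show the two perspectives.

```python
# Minimal sketch of repeated Pareto analysis on stoppage records.
# The records and field names (asset, reason, downtime_hours) are hypothetical.
from collections import defaultdict

stoppages = [
    {"asset": "Pump-01", "reason": "seal failure",       "downtime_hours": 12.5},
    {"asset": "Pump-01", "reason": "seal failure",       "downtime_hours": 8.0},
    {"asset": "Conv-03", "reason": "belt misalignment",  "downtime_hours": 1.5},
    {"asset": "Conv-03", "reason": "belt misalignment",  "downtime_hours": 2.0},
    {"asset": "Pump-02", "reason": "bearing wear",       "downtime_hours": 6.0},
]

def pareto(records, key, value):
    """Rank categories by a chosen measure and print cumulative percentages."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += value(r)
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    grand_total = sum(totals.values())
    cumulative = 0.0
    for category, amount in ranked:
        cumulative += amount
        print(f"{category:20s} {amount:8.1f}  {100 * cumulative / grand_total:5.1f}% cum.")

# Two perspectives on the same data: total downtime vs stoppage frequency.
print("By total downtime:")
pareto(stoppages, "reason", lambda r: r["downtime_hours"])
print("\nBy stoppage frequency:")
pareto(stoppages, "reason", lambda r: 1)
```

Running both views on the same records often highlights different "top" problems, which is why teams repeat the analysis from more than one angle before isolating a problem area.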
Statistical techniques are used to analyse data and find correlations:
• Correlation analysis is used to link input parameters to problems (see the sketch after this list).
• Statistical process control and capability studies are used for process optimisation.
• Standard deviations are plotted to identify sources of variability.
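
As a rough illustration of the correlation analysis mentioned above, the sketch below computes Pearson's r between a hypothetical input parameter (bearing temperature) and a problem measure (vibration). The parameter names and values are invented for illustration; real data would come from process historians or condition monitoring systems.

```python
# Minimal sketch of correlation analysis between an input parameter and a
# problem measure, using Pearson's r computed by hand. All values are hypothetical.
import math

bearing_temp_c = [62, 65, 70, 74, 78, 81, 85, 90]             # input parameter
vibration_mm_s = [2.1, 2.3, 2.8, 3.1, 3.6, 3.9, 4.5, 5.0]     # problem measure

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov  = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

r = pearson_r(bearing_temp_c, vibration_mm_s)
print(f"Correlation between bearing temperature and vibration: r = {r:.2f}")
# A strong positive r suggests the parameter is worth investigating as a
# contributing factor; correlation alone does not confirm a root cause.
```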

Root Cause Analysis

There is no evidence of root cause analysis being done. Solutions are based on gut feel and are therefore not very effective.

There are some isolated attempts at root cause analysis, but these are informal and ad hoc:
• Some people use 5 Why analysis as a result of past problem solving training.
• Some evidence of fishbone analysis or brainstorming.
• The techniques are not always well understood or correctly applied.
Structured root cause analysis is used selectively:
• Technicians, foremen and project leaders have been trained in RCA.
• 5 Why is used effectively on the improvement projects.
• Good examples exist of fishbone analysis on improvement projects.
• Root causes are sometimes verified before implementing solutions.
Structured RCA is used widely on projects and daily problem solving:
• Almost all artisans and operators are competent in root cause analysis.
• 5 Why and fishbone analysis are used effectively on all problem solving and investigations by frontline teams.
• Verification is used as part of RCA.
• Some examples exist of more advanced PM analysis.
FMECA is used proactively to anticipate and prevent problems:
• FMECA forms the basis of asset care plan development (e.g. RCM or OMM).
• FMECA is used for risk assessment (see the sketch after this list).
• PM Analysis is used on complex and chronic problems with multiple causes.
• Root cause analysis with verification is a way of life throughout the organisation.
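
As a rough illustration of how FMECA-style worksheets are used to prioritise failure modes, the sketch below ranks hypothetical failure modes by a risk priority number (severity × occurrence × detection). The failure modes, ratings and scales are assumptions for illustration only; actual studies would apply the organisation's own rating scales and criticality criteria.

```python
# Minimal sketch of ranking failure modes from an FMECA-style worksheet using a
# risk priority number (severity x occurrence x detection). Modes and 1-10
# ratings are hypothetical.
failure_modes = [
    {"mode": "Seal leak on slurry pump",     "severity": 7, "occurrence": 6, "detection": 4},
    {"mode": "Gearbox bearing seizure",      "severity": 9, "occurrence": 3, "detection": 5},
    {"mode": "Conveyor belt tracking drift", "severity": 4, "occurrence": 8, "detection": 2},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Highest-risk modes first; these become candidates for preventive tasks
# in the asset care plan.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f"RPN {fm['rpn']:4d}  {fm['mode']}")
```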

Improvement Actions

Short-term actions are implemented reactively, with little or no consideration of long-term preventive actions. Improvement actions are not followed up to confirm their success.

Possible long-term solutions are discussed by management:
• Short-term corrective actions are implemented effectively.
• Functional management teams discuss possible long-term solutions.
• The selection of improvement actions is based on ad hoc and informal criteria.
• Follow-up is done via meeting minutes, but it is not very effective.
Improvement actions are linked to root causes with formal follow-up:
• Improvement actions are only selected after root causes have been confirmed.
• Formal documented criteria are used to select the most appropriate solutions.
• Solutions are aligned with the AM Strategy.
• The Plan-Do-Check-Act (PDCA) cycle is understood and used.
Improvement actions are selected based on various criteria:
• Improvement actions are evaluated on costs, risks and strategy alignment.
• Improvement actions are formally monitored to completion.
• Successful solutions are standardised in SOPs, schedules and training.
• The PDCA cycle is used widely.
A detailed cost-benefit analysis is used to justify improvement actions:
• Improvement actions are selected based on a full cost-benefit analysis (see the sketch after this list).
• Improvement activities are monitored during daily meetings.
• Successful solutions are rolled out to similar areas (horizontal replication).
• The PDCA cycle is a way of life.
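
As a rough illustration of the cost-benefit logic above, the sketch below works out a simple payback period and ROI for a hypothetical improvement action. All figures are invented, and a full justification would also consider risk, lifecycle costs and alignment with the AM Strategy.

```python
# Minimal sketch of a simple cost-benefit check for a proposed improvement action.
# All figures are hypothetical.
implementation_cost = 180_000   # once-off cost of the improvement (currency units)
annual_saving       = 75_000    # expected yearly benefit (reduced downtime, spares, etc.)
horizon_years       = 5

simple_payback_years = implementation_cost / annual_saving
net_benefit          = annual_saving * horizon_years - implementation_cost
roi_percent          = 100 * net_benefit / implementation_cost

print(f"Simple payback : {simple_payback_years:.1f} years")
print(f"Net benefit    : {net_benefit:,.0f} over {horizon_years} years")
print(f"ROI            : {roi_percent:.0f}%")
```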

Results and Benefits

The success of solutions and improvement activities is not monitored. As a result, the same problems often recur.

The success of improvement actions is monitored informally:
• Improvements are assumed to be successful unless proved otherwise.
• Management KPIs reflect the success or failure of improvement actions.
• Benefits vary: sometimes they are significant, other times small or negative.
Benefits of improvement actions are tracked formally:
• Improvement project leaders report to management on results achieved.
• Performance against target is monitored to confirm sustainability.
• There are good performance improvements in the focus areas.
Benefits are confirmed at appropriate levels based on a wide set of KPIs:
• Benefits are monitored by teams at operational, tactical or strategic levels.
• The impact of improvements on various KPIs is monitored.
• Very good performance improvements are achieved across various KPIs.
The benefits of improvement actions are monitored for 3 to 6 months:
• A scorecard of improvement projects is used to measure cumulative benefits.
• Improvement actions are clearly linked to the AM Strategy and Scorecard.
• Benefits are fully sustainable and performance exceeds industry norms.