
Survival of a business in this digital age largely depends on its ability to embrace Digital Transformation in a timely manner.  Digital Transformation entails using Digital Technologies to streamline business processes, culture, and customer experiences.

In order to compete today—and in the future—and to enable Digital Transformation, organizations should work towards fostering a culture of continuous learning, since Digital Transformation depends on learning and innovation.  The organizations that holistically embrace this culture are called “Next-Generation Learning Organizations.”

Next-Generation Learning Organizations capitalize on the following key variables: Humans, Machines, Timescales, and Scope.  These organizations incorporate technology to enable dynamic learning.  Creating Next-Generation Learning Organizations demands reorganizing the entire enterprise to accomplish the following key functions to win in the future:

  1. Learning on Multiple Timescales
  2. Man and Machine Integration
  3. Expanding the Ecosystem
  4. Continuous Learning

Learning on Multiple Timescales

Next-Generation Learning Organizations make the best use of their time.  They distinguish between objectives that can be realized in the short term and those that take the long term to accomplish.  Learning quickly and in the short term is what many organizations are already doing, e.g., by using Artificial Intelligence, algorithms, or dynamic pricing.  Other learning variables that affect an organization gradually are also critical, e.g., changing social attitudes.

Man and Machine Integration

Rather than having people design and control processes, Next-generation Learning Organizations employ intelligent machines that learn and adjust accordingly.  The role of people in such organizations keeps evolving to supplement intelligent machines.

Expanding the Ecosystem

The Next-generation Learning Organizations incorporate economic activities beyond their boundaries.  These organizations act like platform businesses that facilitate exchanges between consumers and producers by harnessing and creating large networks of users and resources available on demand.  These ecosystems are a valuable source for enhanced learning opportunities, rapid experimentation, access to larger data pools, and a wide network of suppliers.

Continuous Learning

Next-generation Learning Organizations make learning part and parcel of every function and process in their enterprise.  They adapt their vision and strategies based on the changing external environments, competition, and market; and extend learning to everything they do.

With the constantly evolving technology landscape, organizations will require different capabilities and structures to sustain themselves in the future.  Most organizations today are able to operate only in stable business settings.  Transforming these organizations into Next-Generation Learning Organizations—organizations able to effectively traverse volatile economic environments, competitive landscapes, and an unpredictable future—requires them to implement these 5 pillars of learning:

  1. Digital Transformation
  2. Human Cognition Improvement
  3. Man and Machine Relationship
  4. Expanded Ecosystems
  5. Management Innovation

1. Digital Transformation

Traditional organizations—that are dependent on structures and human involvement in decision making—use technology to simply execute a predesigned process repeatedly or to gain incremental improvements in their existing processes.  The Next-generation Learning Organizations (NLOs), in contrast, are governed by their aspiration to continuously seek knowledge by leveraging technology.   NLOs implement automation and autonomous decision-making across their businesses to learn at faster timescales.  They design autonomous systems by integrating multiple technologies and learning loops.

2. Human Cognition Improvement

NLOs understand AI’s edge at quickly analyzing correlations in complex data sets and are aware of the inadequacies that AI and machines have in terms of reasoning abilities.  They focus on the unique strengths of human cognition and assign people roles that add value—e.g., understanding causal relationships, drawing causal inference, counterfactual thinking, and creativity.  Design is the center of attention of these organizations and they utilize human imagination and creativity to generate new ideas and produce novel products.

3. Man and Machine Relationship

Next-generation Learning Organizations (NLOs) make the best use of humans and machines combined.  They utilize machines to recognize patterns in complex data and deploy people to decipher causal relationships and spark innovative thinking.  NLOs make humans and machines cooperate in innovative ways, and constantly revisit the deployment of resources, people, and technology on tasks based on their viability.

Interested in learning more about the other pillars of Learning?  You can download an editable PowerPoint on Digital Transformation: Next-generation Learning Organization here on the Flevy documents marketplace.


Identifying what the market wants is a critical issue for most executives.  Likewise, the decision on how much to charge for a product is crucial for planners.  This is where Market Research comes to the rescue.

One of the Marketing Research methods that researchers most commonly employ is the Conjoint (Trade-off) Analysis.  Conjoint Analysis helps in identifying product features that consumers prefer, discerning the impact of price changes on demand, and estimating the probability of product acceptance in the market.

Instead of directly asking respondents about the most important feature in a product, Conjoint Analysis has survey participants assess product profiles.  These product profiles comprise various linked—or conjoined—product features, hence the name “Conjoint Analysis.”  Conjoint Analysis simulates real-world buying situations: researchers vary the features, record the participants’ responses, and statistically determine which product attributes carry the most impact and are most attractive to participants.

The Conjoint Analysis Approach

Conjoint Analysis is useful in creating market models to estimate market share, revenue, or profitability.  It is widely used in marketing, product management, and operations research.  The Conjoint Analysis approach entails the following key steps:

  1. Determine the Study Type
  2. Identify Relevant Features
  3. Establish Values for Each Feature
  4. Design Questionnaire
  5. Collect Data
  6. Analyze Data

1. Determine the Study Type

The first step of Conjoint Analysis involves selecting from the number of different Conjoint Analysis methods available.  The choice should be based on the individual requirements of the organization.

2. Identify Relevant Features

The next step of Conjoint Analysis entails identifying the key features or relevant attributes of a product.  For instance, defining the main product attributes in terms of size, appearance, and price.

3. Establish Values for Each Feature

After selecting the key features of the product, the next step in Conjoint Analysis is to establish the values (levels) to be tested for each of the itemized features.  Combinations of features at different levels are then chosen to present to the participants.  The presentation can take the form of written notes describing the products or pictorial descriptions.

4. Design Questionnaire

The basic forms of Conjoint Analysis—practiced in the past—used a small set of product features (4 to 5) to create profiles, which were displayed to respondents on individual cards for ranking.  These days, experimental design techniques and automated tools are used to reduce the number of profiles while maintaining enough data for analysis.  The questionnaire length depends on the number of features to be evaluated and the type of Conjoint Analysis employed.
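As a rough illustration of why such design techniques matter, the following Python sketch enumerates candidate profiles from hypothetical attributes and levels.  The attribute names and the naive every-third-profile reduction are illustrative only; real studies use statistically balanced designs.

```python
# Illustrative sketch: enumerating candidate product profiles from hypothetical
# attributes and levels.  A full factorial quickly grows too large to show to
# respondents, which is why fractional or optimized designs are used instead.
from itertools import product

attributes = {
    "size":   ["small", "medium", "large"],
    "price":  ["$10", "$15", "$20"],
    "colour": ["black", "white"],
}

full_factorial = list(product(*attributes.values()))
print(f"Full factorial: {len(full_factorial)} profiles")   # 3 x 3 x 2 = 18

# A naive "fraction": keep every third profile to shorten the questionnaire.
# Real studies would use an orthogonal or D-optimal subset instead.
reduced = full_factorial[::3]
print(f"Reduced design: {len(reduced)} profiles")           # 6 profiles
```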

5. Collect Data

A statistically viable sample size and accuracy should be considered while planning a Conjoint Analysis survey.  It is up to senior management to decide how the responses will be gathered and analyzed—for each individual separately, pooled into a single utility function, or by dividing the respondents into segments and recording the preferences of each segment.

6. Analyze Data

Various econometric and statistical methods are utilized to analyze the data gathered through the Conjoint exercise.  These include linear programming techniques for earlier Conjoint types, linear regression for rated Full-Profile tasks, and Maximum Likelihood Estimation (MLE) for Choice-based Conjoint.
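As a minimal, hypothetical example of the regression route for rated full-profile tasks, the sketch below dummy-codes two attributes and estimates their part-worth utilities with ordinary least squares.  The attributes, levels, and ratings are made up for illustration.

```python
# Illustrative sketch: estimating part-worth utilities for a rating-based
# (full-profile) conjoint study with ordinary least squares.
import numpy as np

profiles = [                     # (size, price) combinations shown to respondents
    ("small", "$10"), ("small", "$15"),
    ("large", "$10"), ("large", "$15"),
]
ratings = np.array([6.0, 4.0, 9.0, 5.0])   # average respondent ratings (1-10)

# Dummy-code each attribute level, dropping one reference level per attribute.
X = np.array([
    [1, 1 if size == "large" else 0, 1 if price == "$15" else 0]
    for size, price in profiles
])  # columns: intercept, size=large, price=$15

# The least-squares coefficients are the part-worths of the non-reference levels.
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(dict(zip(["intercept", "size=large", "price=$15"], coef.round(2))))
# A positive "size=large" part-worth means respondents value the larger size;
# a negative "price=$15" part-worth quantifies the penalty for the higher price.
```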

Types of Conjoint Analysis

There are a number of Conjoint Analysis types available for the marketing researchers to choose from, including:

  1. Two-Attribute Tradeoff Analysis
  2. Full-Profile Conjoint Analysis
  3. Adaptive Conjoint Analysis
  4. Choice-Based Conjoint Analysis
  5. Self-Explicated Conjoint Analysis
  6. Max-Diff Conjoint Analysis
  7. Hierarchical Bayes Analysis (HB)

Interested in learning more about Conjoint Analysis?  You can download an editable PowerPoint on Conjoint Analysis Primer here on the Flevy documents marketplace.


The decision on how to price a product or service isn’t as simple as it seems.  It is a key consideration for executives.  Pricing well above rival products risks failing to attract the required customers, while charging well below competitor products can be equally detrimental.

Manufacturers can utilize research to gain a better understanding of what consumers are willing to pay for a product.  There are a host of research-based pricing approaches available—e.g., Monadic, Sequential Monadic, Conjoint Analysis, and the Van Westendorp Price Sensitivity Meter—however, researchers are often unsure which one to use in a given product development phase.  Let’s discuss the Van Westendorp Price Sensitivity Meter approach for now.

The Price Sensitivity Meter (PSM) is an easy-to-use method of evaluating the price of a new product.  The method was developed by Peter Van Westendorp in 1976.  In the PSM approach, consumers complete a short survey in which they answer 4 questions about their price expectations.  These answers are used to determine the maximum amount a consumer is willing to pay for a particular product and how high the price can be set with customers still willing to buy.

The approach offers a ball-park figure for the price of a product, is easy to administer, and requires little effort from consumers; the PSM results are communicated in the form of simple diagrams.  The approach, however, surveys only the “willingness to pay” attribute of a product and is more appropriate for innovative products, as it is not easy to position prices against competing products using this approach.  PSM analysis should be a part of your Pricing Strategy process.

The PSM approach encompasses the following key phases:

  1. Plan and Execute Market Research Survey
  2. Analyze Data
  3. Evaluate Intersections to Determine Price

Let’s discuss the first 2 phases of the approach.

Plan and Execute Market Research Survey

The initial phase of the PSM research entails deciding on the medium of the study and planning the logistics, design, resources, guidelines, and governance protocols for the survey.  More specifically, the phase involves:

  • Preparing the field research plans.
  • Determining whether the survey should be conducted online, telephonically, or face-to-face.
  • Identifying the consumers (respondents).
  • Assigning the required resources to the survey.
  • Getting the data collection tools and research instrument (questionnaire) ready.
  • The questionnaire includes the following questions:
    • At what price would the product become so expensive that you would not even consider buying it?
    • At what price would you consider the product expensive, but would still buy it?
    • At what price would the product be so cheap that you would start doubting its quality and not buy it?
    • At what price would you consider the product to be a great value for money (a bargain)?
  • Gathering data from the survey participants.

Analyze Data

The second phase pertains to analyzing the respondents’ data from the field survey.  This is done once the field data has been validated and cleansed of any inconsistencies and errors.  The steps taken in this phase include:

  • Ordering the answers to the 4 questions so that prices are ranked as “Too Cheap,” “Bargain,” “Getting Expensive,” and “Too Expensive.”  The values of these ranks should be kept as numeric dollar amounts.
  • Plotting the responses of the survey participants on a graph.
  • Depicting the prices on the X-axis.
  • Representing the percentage of consumers who quoted the respective price (i.e. the cumulative frequency) on the Y-axis.
  • Reversing the values of two of the curves.
    • The curves with the values “Too Cheap” and “Too Expensive” are drawn with inverse values.  This creates two other curves, which show the percentage of consumers who regard prices as “Getting Expensive” and as a “Bargain” (a minimal computational sketch of this analysis follows below).
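Assuming hypothetical survey answers, the following Python sketch shows how the cumulative curves can be built and how one of the key intersections (read as the Optimal Price Point in the classic PSM interpretation) can be located:

```python
# Illustrative sketch: Van Westendorp cumulative curves from hypothetical answers.
import numpy as np

# Each row: (too_cheap, bargain, getting_expensive, too_expensive) in dollars.
answers = np.array([
    [ 8,  9, 12, 14],
    [10, 10, 11, 11],
    [12, 13, 16, 18],
    [ 9,  9, 10, 10],
    [11, 12, 14, 16],
])
prices = np.arange(0, 21)

# Cumulative percentage of respondents at each price point: the "too cheap"
# curve accumulates downward, the "too expensive" curve accumulates upward.
too_cheap     = np.array([(answers[:, 0] >= p).mean() for p in prices])
too_expensive = np.array([(answers[:, 3] <= p).mean() for p in prices])
# The "Bargain" and "Getting Expensive" curves are built the same way from
# columns 1 and 2.

# The crossing of "too cheap" and "too expensive" approximates the Optimal Price Point.
crossing = prices[np.argmin(np.abs(too_cheap - too_expensive))]
print(f"Approximate optimal price point: ${crossing}")
```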

Interested in learning more about the other phase of the Van Westendorp Price Sensitivity Meter?  You can download an editable PowerPoint on the Price Sensitivity Meter (PSM) here on the Flevy documents marketplace.


Supply Chain Management is getting more and more complex.  The pressure to make Supply Chain information public is also increasing day by day.  With the popularity and widespread use of social media, it has become more and more difficult for organizations to hide information pertaining to supply chain practices, treatment of employees, suppliers’ processes, or waste materials generated that could affect the environment.  Social media often publicizes negative reports on companies’ supply chain practices—it is best to have a robust information disclosure strategy in place before anything like that ever happens.

Executives must appreciate these external forces and information transparency demands, and react proactively to build and maintain competitive advantage for their organization.  They need to be able to, first, accurately predict the data requirements of various stakeholders and then agree on the type and frequency of the information to be shared.  A reactive information disclosure strategy is less time and planning intensive, but it limits the chances of gaining first-mover advantage over the competition.

Supply Chain information can be classified into 4 categories:

  • Critical
  • Strategic
  • Non-critical
  • Optional

Critical Information

Organizations using this information category know that they have certain glitches in their Supply Chains that could potentially be a source of criticism from NGOs and the media and may have adverse effects on their reputation.  This includes information concerning unhygienic or inferior-quality products, unfair supply chain practices, or environmental problems.

Strategic Information

Even though stakeholders do not ask for this information, this category is considered strategic because disclosing the data can boost brand value and product differentiation.  The strategic information category is of high value to the organization but low risk for the supply chain.  For example, in the beauty, fashion, or food products industries, sharing information about organic ingredients may be instrumental in achieving product differentiation and brand reputation.

Noncritical Information

Disclosure of this information category is typically uncalled for and has negligible effects on brand value.  This category is of low value for the company and low risk for the Supply Chain.  An example would be needlessly sharing child labor data in regions with actively enforced child welfare laws.

Optional Information

This information category is a matter of internal supply chain consideration and has no bearing on the customer.  The optional information category is of low value to the organization but highly risky for the Supply Chain.  Examples include potential quality issues and defects in the supply chain that are identified and resolved during quality control and do not affect the finished product.

There isn’t a one-size-fits-all strategy that organizations can adopt to ensure viable, high-quality Supply Chain Information Disclosure; rather, the approach needs to evolve based on individual circumstances.  Senior executives should promptly respond to public inquiries, ensure fair treatment of employees, and guarantee compliance with basic human rights to protect their organizations’ reputation.  Experts suggest the following 8-phase approach to address and improve Supply Chain Information Disclosure.

Appreciate the criticality of Supply Chain information disclosure

The first step is to analyze the forces that demand increased supply chain transparency and ascertain the importance and priority of information for the stakeholders.  Once it is established, the leadership must take actions to address the information requirements of key stakeholders.

Appraise Supply Chain data collection abilities and resource requirements

The next step is to assess the competence of the organization—and that of the suppliers—to gather quality supply chain data.  The executives should also evaluate the costs and resource requirements to enable improved information disclosure.

Determine the existing and desired levels of Supply Chain information

The third step is to ascertain the existing knowledge of supply chain information among the executives and suppliers.  The leadership needs to identify the desired levels of supply chain data collection and sharing capabilities, and invest to fill any gaps between the existing and desired supply chain data collection and sharing competencies.

Interested in learning more about the remaining phases of the Supply Chain Information Disclosure Strategy?  You can download an editable PowerPoint on Supply Chain Disclosure Strategy here on the Flevy documents marketplace.


Transformation of an organization into a Next-generation Learning Organization (NLO) is a challenging endeavor.  The main hurdles include convoluted hierarchies, bureaucratic red tape, delayed decision making, and complicated organizational systems and processes.

To develop a learning organization, leadership needs to trim down bureaucracy and complexities.  They should make the best use of technology to gather holistic real-time data, deploy Artificial Intelligence at scale, and develop data-driven decision-making systems.

Five Core Pillars of Learning are essential for the creation of a Next-generation Learning Organization:

  1. Digital Transformation
  2. Human Cognition Improvement
  3. Man and Machine Relationship
  4. Expanded Ecosystems
  5. Management Innovation

Let’s take a deep dive into the first 3 Core Pillars.

1. Digital Transformation

The first pillar is Digital Transformation.  Next-generation Learning Organizations (NLOs) are characterized by their speed of learning and their adeptness at acting on new insights.  They use emerging technologies to automate as well as “autonomize” their businesses, without relying too much on human intervention and decision-making.

By autonomizing, NLOs enable machines to learn, take action, and evolve on their own based on continuous feedback.  They create integrated learning loops in which information flows automatically from digital platforms into AI algorithms, where it is mined at run-time to gather new insights.  The insights are passed to action systems, whose actions create more data; that data is again mined by AI, and the cycle continues, facilitating learning at a fast pace.
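To make the loop concrete, here is a deliberately simplified, hypothetical sketch in Python.  The demand model, price grid, and update rule are invented purely to illustrate the act-learn-act cycle, not to prescribe an implementation.

```python
# Illustrative sketch: a toy "integrated learning loop" using dynamic pricing.
import random

prices = [9.0, 10.0, 11.0, 12.0]
observed = {p: [] for p in prices}          # data store fed by the action system

def act(price):
    """Action system: offer the price and observe (simulated) units sold."""
    return max(0.0, 100 - 8 * price + random.gauss(0, 5))

def learn():
    """Learning step: mine accumulated data and pick the price with best revenue."""
    estimates = {p: (sum(d) / len(d) if d else 0) * p for p, d in observed.items()}
    return max(estimates, key=estimates.get)

# The loop: act -> generate data -> learn -> act again, with no manual redesign.
price = random.choice(prices)
for cycle in range(50):
    observed[price].append(act(price))      # new data flows back automatically
    price = learn() if cycle % 5 == 4 else random.choice(prices)  # explore / exploit

print("Price favored after 50 cycles:", learn())
```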

2. Human Cognition Improvement

Next-generation Learning Organizations (NLOs) schedule time for their people to reflect on their work in an unstructured way.  While most organizations fear the future disruption of human work by AI and machines, NLOs assign unique roles to their people based on the strengths of human cognition—e.g., understanding relationships, drawing causal judgments, counterfactual thinking, and creativity.  These organizations are aware of AI’s advantage—promptly analyzing correlations in complex data—as well as its shortcomings in terms of reasoning abilities and interpretation of social and economic trends.  NLOs make design the center of their attention and utilize human creativity and imagination to generate new ideas and produce novel products.  They assign roles accordingly, inspire imagination in people by exposing them to unfamiliar information, and inculcate dynamic collaboration.

3. Man and Machine Relationship

NLOs foster innovative ways to promote collaboration between people and machines.  They recognize that this helps them better utilize resources, maximize synergies, and learn dynamically.

To create effective collaboration between people and machines, NLOs develop robust human-machine interfaces.  Existing AI systems cannot interpret every situation, an area where humans excel.  NLOs compensate for these shortcomings by setting up human-machine interfaces in which humans assist the AI by corroborating its actions and suggesting sound recommendations.  These learning organizations divide responsibilities based on the risks involved, assign humans and machines appropriately to each job, and select a suitable level of generalization and sophistication between humans and machines.

Interested in learning more about the Core Pillars of Learning?  You can download an editable PowerPoint on Next-generation Learning Organization: Core Pillars here on the Flevy documents marketplace.


Performance evaluation serves as a health check on operations and individuals’ work.  The notion of organizational maturity signifies the progress of an organization in developing its people, processes, technology, and capability by implementing quality practices.  Organizations aiming to achieve the highest maturity levels in performance need to take care of the intricacies involved in deploying a Performance Management system and the relationships it has with other key organizational activities.

Performance Management processes in organizations can be assessed using maturity levels: by measuring the implementation of Performance Management tools, analyzing the availability of internal Performance Management processes, and assessing the structures, procedures, and interactions utilized to direct Performance Management systems.

An organization’s performance maturity is assessed on 5 levels of progressive growth.  These 5 stages present a valuable dashboard to gauge the implementation of the corresponding levels of the Performance Management Maturity Model.

  1. Initial 
  2. Emergent
  3. Structured
  4. Integrated
  5. Optimized

To achieve maturity in performance management, organizations need to build capabilities in 5 core elements—referred to as “Operational Levers”—Tools, Processes, Governance, Architecture, and Integration.

Initial Stage

Organizations at the first performance maturity level are not acquainted with—or are totally unfamiliar with—the tools necessary to implement a Performance Management system.  Performance Management processes are typically inconsistent.  Organizations at this maturity level do not practice employee empowerment, development, and innovation.  There is a dearth of an appropriate KPI calculation approach, and the performance architecture is in its budding stage.  Roles and responsibilities, the importance of KPIs, and individual/organizational indicators are unclear to employees.

The level is characterized by casual strategic planning practices—dependent on top management experience—with ill-structured communication mechanisms.  Initiatives lack alignment with organizational goals.  Leadership involvement in mentoring and developing employees is at sub-optimal levels.  Motivating staff and increasing their engagement levels are not given due importance.

Emergent Stage

Organizations at the second level of Performance Maturity have a strong desire to improve performance.  At this stage, organizations begin exploring Performance Management tools but have uncoordinated and unstandardized internal processes and systems.  Initiatives to integrate Performance Management procedures are planned with clearly defined objectives and expectations.

However, at this level, strategy does not deliver value and is little more than formal documentation.  Managers are assessed based on performance results, but the lower hierarchical levels are not.  There is unclear articulation of company goals, misalignment across organizational hierarchical levels, and ineffective communication.  A few basic performance measurement methods—e.g., KPI selection and documentation—are embraced by the organization.  The KPI selection process, however, lacks appropriate yardsticks, tools, standardized forms/templates, and approaches.  Performance evaluation and reporting processes exist but lack clear communication by the leadership.  Leadership possesses a basic understanding of performance measurement processes.  Measuring performance at the individual level is uncommon at this maturity level.  Performance review meetings fall short of delivering the insights required to make critical decisions.

Structured Stage

The “Structured” stage of the Performance Management Maturity Model is characterized by well-coordinated and carefully regulated Performance Management processes.  Organizations at this stage have a defined set of Performance Management tools.  There are standardized Performance Management practices with well-defined and improved process flows.  There is typically an inconsistent approach towards adopting an aligned Performance Management architecture though.

Organizations at this level employ strategy monitoring tools—e.g., scorecards and dashboards—but do not cascade these to the lower ranks.  KPIs are selected based on clear-cut criteria, established tools and methods, and agreement across the board.  Standardized forms are used to document and report KPIs.  KPI targets are established using data, benchmarking, and comparisons with market figures.  Organization-wide performance evaluation data is gathered and disseminated at all levels.  People largely have a fair understanding of their personal and organizational performance goals.  A well-defined Performance Management system is in place, with appropriate templates, procedures, and governance structures ready for each Performance Management cycle.  Incentives and training and development opportunities help improve performance.

Interested in learning more about the other stages of the Performance Management Maturity Model?  You can download an editable PowerPoint on the Performance Management Maturity Model here on the Flevy documents marketplace.


Supply Chain Management across industries has become highly complicated and globalized today.  As the popularity and use of social media has grown, organizations have come under increasing pressure to disclose their information publicly.  This pressure for information transparency has reached a level where external stakeholders expect to know far more about the details of an organization’s Supply Chain practices than what is typically required to be disclosed legally.

Executives are finding it hard to deal with this situation.  A majority of them have a limited understanding of the salient features and capabilities of their own Supply Chains, lack the expertise to gather and report Supply Chain data, and fail to develop a Supply Chain Information Disclosure Strategy.

To begin with, executives need to recognize the forces that are pushing this trend for information transparency—government regulations, laws, competitors’ best practices, and non-governmental organizations (NGOs).  NGOs often run media campaigns to expose poor Supply Chain practices carried out by organizations.  These campaigns may have adverse effects on brand reputation.

Only once a fair understanding of these forces has been established can executives develop approaches to deal with these information transparency trends effectively.

Supply Chain Information Categories

The growing demand for Supply Chain transparency warrants that organizations have an in-depth understanding of what is required to accomplish it and the constraints therein—e.g., their data collection capabilities, the resources required to establish reporting systems, the technology requisites, and clearly defined standards for reporting systems.

Supply Chain Management experts identify 4 categories of Supply Chain information that organizations can publicly disclose:

  1. Supply Chain Membership
  2. Provenance
  3. Environmental Information
  4. Social Information

1. Supply Chain Membership

This category pertains to information related to the suppliers.  It includes basic supplier information, e.g., the names of first-tier direct suppliers and supplier locations.  For instance, Nike shares a list of its global suppliers for the entire product range with names, locations, workforce composition, and subcontracting status of every supplier.

2. Provenance

This category entails information related to ensuring compliance of materials used to produce products with regulatory standards.  Specifically, this includes source (material) locations, material extraction practices, and compliance with safety and quality standards.

3. Environmental Information

This category pertains to reports on environmental measures, including carbon and energy usage levels, water use, air pollution, and levels of waste in the Supply Chain.

4. Social Information

This category entails reports on labor policies (health & safety conditions, work hours), human rights data, and social impacts of the Supply Chain (community involvement and development work).

Supply Chain Information Transparency Strategies

There is no one-size-fits-all approach to information disclosure that suits every firm.  Once senior management has evaluated the leading best practices on types of Supply Chain information that can be shared publicly, their emphasis should be on determining and agreeing on the level of Supply Chain information disclosure that is ideal for their organization.  Senior executives can select a viable strategy from the following 4 typical Supply Chain Information Disclosure Strategies:

  1. Transparent
  2. Secret
  3. Distracting
  4. Withheld

Transparent

This strategy involves maximum public availability of all Supply Chain information.  Companies following the “Transparent” strategy regard information disclosure as a core competence.  They treat full disclosure of their Supply Chain information as a commitment to satisfying external stakeholders.

For instance, Nike was criticized throughout the 1990s for poor working conditions in its Supply Chain, but now it is recognized as a leader for its responsible supply chain membership, provenance, environmental, and social sustainability information disclosure.

Interested in learning more about the remaining Supply Chain Information Transparency Strategies?  You can download an editable PowerPoint on Supply Chain Information Transparency Strategies here on the Flevy documents marketplace.


Reducing the fragility of global Supply Chains in the event of disruption through natural or other disasters is a major concern for most senior executives.  This rings truer now than ever, as the world grapples with COVID-19, the worst human health crisis in 100 years.

The strategies to enhance the effectiveness and readiness of Supply Chains and to reduce the risks associated with disruption come at a price.  These costs are critical to building Supply Chain Resilience across all industries.

However, these expenses are generally considered by many leaders to be a hindrance to the implementation of risk reduction strategies.  This is one of the major factors that preclude them from anticipating and managing Supply Chain Risks.

Able leaders anticipate these risks and invest in building organizational resilience.  They leverage two potent Supply Chain Risk Reduction Strategies that have nominal impact on cost efficiency but offer substantial reduction of disruption risks:

  1. Diversify supply base
  2. Overestimate likelihood of disruptions

Diversify Supply Base

It is vital for organizations to diversify their supplier base to avoid disruption of their Supply Chains in the event of a natural disaster.  Manufacturers have been found to use pooling—combining resources, inventory, and capacity by maintaining fewer distribution centers—and to produce common parts to help reduce costs.  However, too much pooling and commonality can make the Supply Chain vulnerable to disruption.

For instance, relying too much on a single supplier and common parts—in an effort to be as lean and efficient as possible—became a Supply Chain nightmare and cost Toyota billions of dollars in lost sales and product recalls in 2010.  Back then, the auto manufacturer was counting on a single supplier for a common part used across many of its car models, which was effective in curtailing costs but turned out to be a disaster.

Organizational leadership should evaluate the trade-offs between having a leaner, more efficient Supply Chain—with common parts and single suppliers—and preparing for and reducing the risks of disruptions.  Minimizing the number of distribution centers offers diminishing marginal returns for Supply Chain performance and increases Supply Chain fragility.  Creating a little commonality presents significant advantages, but as more parts are made common the benefits shrink, and eventually the commonality becomes detrimental.

The key for senior leaders is to find an optimal balance between pooling resources, creating common parts, and deciding whether to decentralize or centralize their Supply Chains.  Decentralization (e.g., having multiple warehouses or plants) increases costs, as it requires more inventory, but it curtails the effect of disruption significantly.  Centralization, or pooling of resources, reduces total costs, but costs go up again when centralizing beyond a reasonable degree.  Recurrent Supply Chain risks call for more centralization, pooling of resources, and commonality of parts, while rare disruptive risks call for decentralization.  Achieving a state of equilibrium between pooling of resources, parts commonality, and the number of plants helps keep Supply Chain risks low.  Ignoring the possibility of disruption can be very expensive in the long term.  Samsung Electronics Co. Ltd., for example, always maintains at least two suppliers, even if the second supplier supplies only a fraction of the volume.

Overestimate Likelihood of Disruptions

The risk of Supply Chain disruption due to unforeseen events is typically considered a rare possibility and goes unaccounted for in most executives’ planning.  A fire breaking out at a distribution center, a defective auto part, or a supplier’s facility closing for a prolonged period of time can happen anywhere, but we tend to underestimate the likelihood of such events.  The reason is the significant upfront investment required—out of already limited resources and budgets—to prepare for and mitigate likely disruptive risks.

Most typical risk assessment measures involve approximating the probability of an event and the damage it would likely cause.  Estimating the likelihood of disruptive risk to a reliable degree isn’t easy even for large multinationals—even an auto manufacturer like Toyota could not anticipate the part failure issue until the damage had been done.  These risk estimations do not have to be strictly precise.  Rough estimates of disruption risk are fine—small mis-estimates have negligible consequences.
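A back-of-the-envelope comparison illustrates why rough estimates are usually good enough.  The figures below (mitigation cost, loss, and the range of probabilities) are entirely hypothetical:

```python
# Illustrative sketch: expected-loss arithmetic with rough probability estimates.
annual_mitigation_cost = 2_000_000     # e.g., dual sourcing, extra inventory
disruption_loss        = 150_000_000   # lost sales, recalls, expediting

for p in (0.02, 0.05, 0.10):           # rough annual disruption probabilities
    expected_loss = p * disruption_loss
    verdict = "mitigate" if expected_loss > annual_mitigation_cost else "accept risk"
    print(f"p={p:.0%}: expected loss ${expected_loss:,.0f} -> {verdict}")

# Even if the true probability is misjudged by a factor of two or more, the
# comparison with the mitigation cost points to the same decision.
```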

Senior leadership needs to carefully contemplate the areas that are likely to be affected the most by a potential disruption.  Building resilience does not cost much for large organizations.  In the long term, doing nothing costs much more than investing in preparation for a probable disruption.  When disruption occurs, the loss incurred greatly exceeds the amount executives save by not investing in risk mitigation strategies.

Interested in learning more about the subject in detail?  You can download an editable PowerPoint on Supply Risk Reduction Strategies here on the Flevy documents marketplace.


Supply Chains often get disrupted by calamities that are beyond human control.  Natural disasters, such as tsunamis and floods, have over the last decade drastically affected major businesses—from automobiles to technology to travel to shipping—and exposed critical weaknesses in Supply Chain mechanisms around the globe.  And now we are living through a global disruption of an unparalleled nature: COVID-19.

Organizations that rely on single-source suppliers, common parts, and centralized inventories are more susceptible to the risk of disruption.

Management in most cases is aware of its responsibility to protect Supply Chains from disruption through measures such as keeping larger stocks, improving capacity at discrete facilities, and choosing multiple sources.  But these measures have a negative effect on Supply Chain cost efficiency.

However, discerning the effects of costly Supply Chain disruptions is one thing; taking action to avoid such situations or to mitigate their undesirable effects is another.  Managing Supply Chain risks necessitates careful evaluation of the impact these measures have on Supply Chain cost efficiencies and the bottom line.  During the COVID-19 pandemic, it has become clearer than ever that Supply Chain Management must also involve this form of Risk Management.

Supply Chain Efficiency entails improving the financial performance of an organization and focusing on improving the way supply and demand are managed.  Demand fluctuations or supply delays are independent events and can typically be tackled by holding appropriate inventory levels in the right place, better planning and implementation, and improving Supply Chain cost efficiency.

Supply Chain Containment

Supply Chains are complex operations encompassing many products or commodities that are sourced, manufactured, or stored in multiple locations.  These complexities can slash efficiency and cause delays, suspension of operations, and an increased risk of disruption.  Containing complexity brings higher cost efficiency and reduced risk.

Supply Chain Containment ensures that Supply Chain disruptions caused by internal factors or through natural hazards are contained within a portion of the Supply Chain.  A single Supply Chain for the entire organization seems cost effective in the short term, but even a small issue can trigger a disaster.

Supply Chain Containment Strategies

Supply Chain Containment Strategies help organizations design and deploy solutions fairly quickly in the event of disruption through natural disasters.  The objective is to limit the impact of such a disruption to just a portion of the Supply Chain, not the entire Supply Chain.

For instance, in order to reduce the impact of a parts shortage, a mechanical parts manufacturer could arrange multiple supply sources for common items or limit the number of common items used across different models.  To reduce Supply Chain instability and improve financial performance, organizations can use the following containment strategies:

  1. Supply Chain Segmentation
  2. Supply Chain Regionalization

Supply Chain Segmentation

The bases for Supply Chain Segmentation are volume, product diversity, and demand uncertainty.  High-margin but low-volume products with high demand uncertainty warrant keeping Supply Chains flexible, with capacity centralized to aggregate demand.  Manufacturing everything in high-cost locations is detrimental to profit margins; sourcing from responsive suppliers in Europe is a model feasible only for trendy, high-end items.  For fast-moving, low-margin, basic products it is sensible to source from multiple low-cost suppliers.  Centralization is favorable when there are fewer segments, significant product variety, low sales volumes of individual products, and high demand uncertainty, in order to achieve reasonable levels of performance.  Decentralization is suitable when there are higher sales volumes, less demand uncertainty, and more segments, as it helps the organization become more responsive to local markets and reduces the risk of disruption.  For instance, utility companies use low-cost coal-fired power plants to handle predictable demand, and employ higher-cost gas- and oil-fired power plants to handle uncertain peak demand.

Supply Chain Regionalization

Supply Chain Regionalization helps curtail the impact of losing supply from a plant within a region.  For instance, Japanese automakers were badly hit by a global shortage of parts in the wake of the 2011 tsunami, since most of those parts could be sourced only from storage and distribution facilities in the tsunami-affected regions.  Had they operated decentralized regional Supply Chains with logistics centers dispersed across various locations, they would have significantly contained the impact of the disruption.

Supply Chain Regionalization lowers distribution costs while also reducing risks in global Supply Chains.  During periods of low fuel and transportation costs, global Supply Chains minimize costs by locating production where costs are lowest.  As transportation costs rise, global Supply Chains may be replaced by regional Supply Chains.  Regionalized Supply Chains with the same inventory stored in multiple locations appear wasteful, but they are more robust in case one of the logistics centers suffers a disaster.

Interested in learning more about the Supply Chain Segmentation and Regionalization?  You can download an editable PowerPoint on Supply Chain Containment Strategies here on the Flevy documents marketplace.


Unsuccessful software applications cost organizations significant effort and resources.  The reasons for these failed ventures are often attributed to technology issues.  However, the real issue is flaws in business processes—the enterprise application deployment environment and the ecosystem the application targets.

This calls for ensuring organizational readiness before initiating technology deployment.  It is for this reason that the Business Process Maturity Model (BPMM) originated.  BPMM helps achieve uniform standards, identify weaknesses in workflows, and create standardized yet tailored processes that simplify the requirements for enterprise applications.

BPMM’s roots can be traced back to the Process Maturity Framework (PMF) created by Watts Humphrey and his colleagues in the late 1980s.  The Process Maturity Framework explores ways to introduce quality practices into software development.  Humphrey and his colleagues introduced incremental stages for adopting best practices in software organizations.  The PMF served as the groundwork for the development of the Capability Maturity Model (CMM) for Software in 1991.  CMM then became the foremost standard for appraising the capability of software development organizations.

BPMM ensures the success of enterprise systems by providing proven methods for validating system requirements, ensuring the accuracy of use cases and the effectiveness of applications, simplifying the requirements for enterprise applications, and providing a reliable standard for appraising the maturity of business process workflows.

The Guiding Principles for BPMM

BPMM considers processes as workflows across organizational boundaries.  The key guiding principles governing BPMM are:

  • A process should be analyzed in terms of its contribution to organizational objectives.
  • It depends on the organizational ability to sustain efficient processes.
  • Process Improvement should ideally be executed as a phased Transformation endeavor that aims to achieve successively more predictable states of organizational capability.
  • Each stage or maturity level serves as the groundwork for future improvements.

BPMM Utility

BPMM has the following 4 primary utilities.

  • To drive business process improvement initiatives
  • To gauge enterprise application deployment risks
  • To ensure selection of capable suppliers
  • To Benchmark

BPMM – Conformance

Evaluating BPMM conformance is about ensuring that the implemented system meets the needs of the client.  Verifying conformance necessitates an effective appraisal technique that gathers multiple forms of evidence to evaluate the performance of the practices contained in the BPMM.

The BPMM conformance appraisal should be headed by an authorized Lead Appraiser—external to the organization and trained in both BPMM and appraisal methods.  The team under the Lead Appraiser should include some members from within the organization.  The BPMM conformance appraisal team gathers and analyzes evidence regarding the implementation of BPMM practices, judges their strengths and weaknesses, and gauges their effectiveness in meeting the goals of the process areas at the respective maturity levels.

The following evidence is utilized during BPMM conformance appraisals:

  • Review of outputs produced as a result of a process.
  • Review of objects, documents, products supporting the execution of a process.
  • Interviews with individuals who perform a process and those who support and manage it.
  • Quantitative data that depicts the organizational state, employee behaviors, performance, and results of a process.

BPMM Conformance Appraisals

BPMM Conformance Appraisals help assure the implementation of practices at a level that achieve the intent and goals of the practices and their process areas.  BPMM conformance appraisals are of 4 distinct types:

  • Starter Appraisal:  An inexpensive BPMM conformance appraisal—taking only a few days—that entails gathering quantitative data and conducting a few interviews.
  • Progress Appraisal:  An extensive appraisal that entails quantitative data collection, investigation of all process areas and practices, review of artifacts, and analysis of interviews.
  • Supplier Appraisal:  An appraisal method to select sources and to make informed decisions during procurement contracts.
  • Confirmatory Appraisal:  A rigorous investigation of all process areas / practices where all evidence is accounted for.

BPMM – Maturity Levels

BPMM encompasses 5 maturity levels that signify the transformation of an organization on the basis of improvements in its processes and capabilities.  BPMM Maturity levels 2, 3, 4, and 5 each contain 2 or more process areas, whereas the Maturity level 1 does not contain any process areas.  The 5 successive levels of BPMM are:

  1. Initial

The focus of the BPMM level 1 is on achieving economy of scale, automation, and productivity growth by encouraging people to overcome challenges and complete their tasks.

  2. Managed

The 2nd maturity level aims at developing repeatable practices, minimizing rework, and satisfying commitments — by managing work units and controlling workforce commitments.

  3. Standardized

The focus of the 3rd maturity level of BPMM is to accomplish standardization in terms of business processes, measures, and training for product and service offerings.

  4. Predictable

The 4th maturity level aims at achieving stable processes, knowledge management, reusable practices, and predictable results.  Organizations accomplish these results through standardization and managing processes and results quantitatively.

  5. Innovating

The focus of the organizations operating at the highest maturity level of BPMM is on implementing continuous improvements, developing efficient processes, and inculcating innovation.

Interested in learning more about the process areas and practices at various maturity levels of the Business Process Maturity Model?  You can download an editable PowerPoint on Business Process Maturity Model here on the Flevy documents marketplace.
