
Risk Management and Error Trapping in Software and Hardware Development, Part 3

This is part 3 of a 3-part piece on risk management and error trapping in software and hardware development. The first post is located here (and should be read first to provide context on the content below), and part 2 is located here.

Root Cause Analysis and Process Improvement

Once a bug has been discovered and risk analysis / decision-making has been completed (see below), a retrospective-style analysis of the circumstances surrounding the engineering practices which failed to trap the bug completes the cycle.

The purpose of the retrospective is not to assign blame or find fault, but rather to understand the cause of the failure to trap the bug, inspect the layers of the system, and determine if any additional layers, procedures, or process changes could effectively improve collective engineering surety and help to prevent future bugs emerging from similar causes.

Methodology

  1. Review the sequence of events that led to the anomaly / bug.
  2. Determine the root cause.
  3. Map the root cause to our defense-in-depth (Swiss cheese) model.
  4. Decide if there are remediation efforts or improvements which would be effective in supporting or restructuring the system to increase its effectiveness at error trapping.
  5. Implement any changes identified, sharing them publicly to ensure everyone understands the changes and the reasoning behind them.
  6. Monitor the changes, adjusting as necessary.

Review sequence of events

With appropriate representatives from engineering teams, certification, hardware, operations, customer success, etc., review the discovery path which led to finding the bug. The point is to understand the processes used, which ones worked, and which let the bug pass through.

Determine root cause and analyze the optimum layers for improvement

What caused the bug? There are many enablers and contributing factors, but typically only one or two root causes. The root cause is one of, or a combination of: Organization, Communication, Knowledge, Experience, Discipline, Teamwork, or Leadership.

  • Organization – typically latent, organizational root causes include things like existing processes, tools, practices, habits, customs, etc., which the company or organization as a whole employs in carrying out its work.
  • Communication – a failure to convey necessary, important, or vital information to or among an individual or team who required it for the successful accomplishment of their work.
  • Knowledge – an individual, team, or organization did not possess the knowledge necessary to succeed. This is the root cause for knowledge-based errors.
  • Experience – an individual, team, or organization did not possess the experience necessary to successfully accomplish a task (as opposed to the knowledge about what to do). Experience is often a root cause in skill-based errors of omission.
  • Discipline – an individual, team, or organization did not possess the discipline necessary to apply their knowledge and experience to solving a problem. Discipline is often a root cause in skill-based errors of commission.
  • Teamwork – individuals, possibly at multiple levels, failed to work together as a team, support one another, and check one another against errors. Additional root causes may be knowledge, experience, communication, or discipline.
  • Leadership – less often seen at smaller organizations, a Leadership failure is typically a root cause when a leader and/or manager has not effectively communicated expectations or empowered execution regarding those expectations.

Map the root cause to the layer(s) which should have trapped the error

Given the root cause analysis, determine where in the system (which layer or layers) the bug should have been trapped. Often there will be multiple locations at which the bug should or could have been trapped; however, the best location to identify is the one which most closely corresponds to the root cause of the bug. Consideration should also be given to timeliness. The earlier an error can be caught or prevented (trapped), the less costly it is in terms of both time (to find, fix, and eliminate the bug) and effort (a bug in production requires more effort from more people than a developer discovering a bug while checking their own unit test).

While we should seek to apply fixes at the locations best suited for them, the earliest point at which a bug could have been caught and prevented will often be the optimum place to improve the system.

For example, if a bug was traced back to a team’s discipline in writing and using tests (root cause: discipline and experience), then it would map to layers dealing with testing practices (TDD/ATDD), pair programming, acceptance criteria, definition of “Done,” etc. Those layers to which the team can most readily apply improvements and which will trap the error sooner rather than later should be the focus for improvement efforts.
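To make that mapping concrete, here is a minimal sketch (my illustration, not a prescription from this series) of root causes mapped to candidate layers as a simple data structure. The layer names are examples drawn from the lists in part 2; the structure and variable names are hypothetical:

var layersByRootCause = {
    discipline: ["TDD / ATDD practices", "code reviews", "pair programming", "definition of Done"],
    experience: ["pair programming", "onboarding", "professional development"],
    knowledge:  ["design documents", "acceptance criteria", "information radiators"]
};

// Given the root cause(s) from the analysis, list the layers to inspect first:
var candidates = layersByRootCause["discipline"];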

Decide on improvements to increase system effectiveness

Based on the knowledge gained through analyzing and mapping the root cause, decisions are made on how to improve the effectiveness of the system at the layers identified. Using the testing example above, a team could decide that they need to adjust their definition of Done to include listing which tests a story has been tested against and their pass/fail conditions.

Implement the changes identified, and monitor them for effectiveness.

Risk Analysis

Should our preventative measures fail to stop a bug from escaping into a production environment, an analysis of the level of risk needs to be completed explicitly. (This is often done, but only implicitly.) The analysis of the level of risk derives from two factors.

Risk Severity – the degree of impact the bug can be expected to have on the data, operations, or functionality of affected parties (the company, vendors, customers, etc.).

  • Blocker – a bug that is so bad, or a feature that is so important, that we would not ship the next release until it is fixed/completed. Could also signify a bug that is currently impacting a customer’s operations, or one that is blocking development.
  • Critical – a bug that needs to be resolved ASAP, but for which we wouldn’t stop everything. Bugs in this category are not impacting operations (a customer’s, or ours), but are serious enough to warrant prompt attention.
  • Major – best judgment should be used to determine how this stacks against other work. The bug is serious enough that it needs to be resolved, but the value of other work and timing should be considered. If a bug sits in Major for too long, its categorization should be reviewed and either upgraded or downgraded.
  • Minor – a bug that is known, but which we have explicitly de-prioritized. Such a bug will be fixed as time allows.
  • Trivial – a bug we should seriously consider closing. At best these should be put into the “Long Tail” for tracking.

Risk Probability – the likelihood, expressed as a percentage, that those potentially affected by the bug will actually experience it (e.g., always, only if they have a power outage, or only if the sun aligns with Jupiter during the slack-water phase of a diurnal tide in the northeastern hemisphere between 44 and 45 degrees latitude).

  • Definite – 100%: the issue will occur in every case
  • Probable – 60-99%: the issue will occur in most cases
  • Possible – 30-60%: a coin-flip; the issue may or may not occur
  • Unlikely – 2-30%: the issue will occur in a minority of cases
  • Won’t – ~1%: occurrence of the issue will be exceptionally rare

Given Risk Severity and Probability, the risk can be assessed according to the following matrix and assigned a Risk Assessment Code (RAC).

Risk Assessment Matrix

                   Probability
Severity     Definite  Probable  Possible  Unlikely  Won’t
Blocker         1         1         1         2        3
Critical        1         1         2         2        3
Major           2         2         2         3        4
Minor           3         3         3         4        5
Trivial         3         4         4         5        5

Risk Assessment Codes
1 – Strategic     2 – Significant     3 – Moderate     4 – Low     5 – Negligible

The Risk Assessment Codes are a significant factor in Risk decision-making.

  1. Strategic – the risk to the business or customers is significant enough that its realization could threaten operations, basic functioning, and/or professional reputation to the point that the basic survival of the business could be in jeopardy. As Arnold said in Predator: “We make a stand now, or there will be nobody left to go to the chopper!”
  2. Significant – the risk poses considerable, but not life-threatening, challenges for the business or its customers. If left unchecked, these risks may elevate to strategic levels.
  3. Moderate – the risk to business operations, continuity, and/or reputation is significant enough to warrant consideration against other business priorities and issues, but not significant enough to trigger higher responses.
  4. Low – the risk to the business is not significant enough to warrant special consideration of the risk against other priorities. Issues should be dealt with in routine, predictable, and business-as-usual ways.
  5. Negligible – the risk to the business is not significant enough to warrant further consideration except in exceptional circumstances (i.e., we literally have nothing better to do).
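For teams that want to automate the lookup, here is a minimal sketch of how the matrix above could be encoded; the severity and probability labels come from the tables, while the function and variable names are my own:

var RAC_MATRIX = {
    Blocker:  { Definite: 1, Probable: 1, Possible: 1, Unlikely: 2, "Won't": 3 },
    Critical: { Definite: 1, Probable: 1, Possible: 2, Unlikely: 2, "Won't": 3 },
    Major:    { Definite: 2, Probable: 2, Possible: 2, Unlikely: 3, "Won't": 4 },
    Minor:    { Definite: 3, Probable: 3, Possible: 3, Unlikely: 4, "Won't": 5 },
    Trivial:  { Definite: 3, Probable: 4, Possible: 4, Unlikely: 5, "Won't": 5 }
};

// Look up the Risk Assessment Code for a given severity and probability.
function riskAssessmentCode(severity, probability) {
    return RAC_MATRIX[severity][probability];
}

riskAssessmentCode("Critical", "Possible"); // returns 2 (Significant)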

Risk Decision

The risk decision is the point at which an explicit decision is made about how to handle the risk. Typically, risk decisions take the form of:

  • Accept – accept the risk as it is and do not mitigate or take additional steps.
  • Delay – for less critical issues or dependencies, a decision about whether to accept or mitigate a risk may be delayed until additional information, research, or steps are completed.
  • Mitigate – establish a mitigation strategy and deal with the risk.

For risk mitigation, feasible Courses of Action (CoAs) should be developed to assist in making the mitigation plan. These potential actions comprise the mitigation and/or reaction plan. Specifically, given a bug’s risk severity, probability, and resulting RAC, the courses of action are the possible mitigation solutions for the risk. Examples include:

— Pre-release —

  • Apply software fix / patch
  • Code refactor
  • Code rewrite
  • Release without the code integrated (re-build)
  • Hold the release and await code fix
  • Cancel the release

— In production —

  • Add to normal backlog and prioritize with normal workflow
  • Pull / create a team to triage and fix
  • Swarm / mob multiple teams on fix
  • Pull back / recall release
  • Release an additional fix as a micro-upgrade

All risk decisions should be recorded, and those which remain active need to be tracked. There are many methods available for logging and tracking risk decisions, from spreadsheets to documentation to support tickets. There are entire software platforms expressly designed to track and monitor risk status and record decisions taken (or not) about risks.

Decisions to delay risk mitigations are the most important to track, as they still require action, and at the speed most businesses move today there is a real risk of losing track of them. Therefore a Risk Log or Review should be used to routinely revisit the status of pending risk decisions and reevaluate them. Risk changes constantly, and a risk’s severity and probability may shift significantly overnight. In reviewing risk decisions regularly, leadership is able to ensure both that emerging risks are mitigated and that effort is not wasted unnecessarily (as when effort is put against a risk which has significantly declined in impact due to changes external to the business).
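As an illustration only, a single risk log entry might be as simple as the following record; every field name here is hypothetical:

var riskLogEntry = {
    id:          "RISK-042",                // hypothetical identifier
    description: "Intermittent data loss on power failure",
    severity:    "Critical",
    probability: "Unlikely",
    rac:         2,                         // from the Risk Assessment Matrix
    decision:    "Delay",                   // Accept | Delay | Mitigate
    reviewBy:    "2016-07-01",              // delayed decisions must be revisited
    owner:       "Platform team"
};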

Conclusion

I hope you’ve enjoyed this 3-part series. Risk management and error trapping is a complicated and – at times – complex topic. There are many ways to approach these types of systems and many variations on the defense-in-depth model.

The specific implementation your business or organization chooses to adopt should reflect the reality and environment in which you operate, but the basic framework has proven useful across many domains and industries, and is directly adapted from Operational Risk Management as I used to practice and teach it in the military.

Understanding the root cause of your errors, where they slipped through your system, and how to improve your system’s resiliency and robustness are critical skills which you need to develop if they are not already functional. A mindful, purposeful approach to risk decision-making throughout your organization is also critical to your business operations.

Good luck!

 

Chris Alexander is a former U.S. Naval Officer who was an F-14 Tomcat flight officer and instructor. He is Co-Founder and Executive Team Member of AGLX Consulting, creators of the High-Performance Teaming™ model, a Scrum Trainer, Scrum Master, and Agile Coach.


Risk Management and Error Trapping in Software and Hardware Development, Part 2

This is part 2 of a 3-part piece on risk management and error trapping in software and hardware development. The first post is located here (and should be read first to provide context on the content below).

Error Causality, Detection & Prevention

Errors occurring during software and hardware development (resulting in bugs) can be classified into two broad categories: (1) skill-based errors, and (2) knowledge-based errors.

Skill-based errors

Skill-based errors are those which emerge in the application of existing knowledge and experience. They are differentiated from knowledge-based errors in that they arise not from a lack of knowing what to do, but from either misapplying or failing to apply what is known. The two types of skill-based errors are errors of commission and errors of omission.

Errors of commission are the misapplication of a previously learned behavior or knowledge. To use a rock-climbing metaphor: if I tied my climbing rope to my harness with the wrong type of knot, I would be committing an error of commission. I know I need a knot, I know which knot to use, and I know how to tie the correct knot – I simply did not do it correctly. In software development, one example of an error of commission might be an engineer providing the wrong variable to a function call, as in:

var x = 1;        // variable to call
var y = false;    // variable not to call

function callVariable(value) {
    return value;
}

callVariable(y);  // error of commission: should have passed "x" but passed "y"

Errors of omission, by contrast, are the failure to apply knowledge or experience (previously learned behaviors) to the given problem. In my climbing example, not tying the climbing rope to my harness (at all) before beginning to climb is an error of omission. (Don’t laugh – this actually happens.) In software development, an example of an error of omission would be an engineer forgetting to provide a variable to a function call (or forgetting to add the function call at all), as in:

var x = 1;        // variable to call
var y = false;    // variable not to call

function callVariable(value) {
    return value;
}

callVariable();   // error of omission: should have passed "x" but left empty
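As a sketch of how an active layer could trap both errors above before check-in, a unit test like the following (assuming a Node.js environment and its built-in assert module) fails immediately when the wrong variable, or no variable, is passed:

var assert = require("assert");          // Node's built-in assertion module

assert.strictEqual(callVariable(x), 1);  // passes: correct variable supplied
assert.strictEqual(callVariable(y), 1);  // fails: traps the error of commission
assert.strictEqual(callVariable(), 1);   // fails: traps the error of omission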

Knowledge-based errors

Knowledge-based errors, in contrast to skill-based errors, arise from the failure to know the correct behavior to apply (if any). An example of a knowledge-based error would be a developer checking in code without any unit, integration, or system tests. If the developer is new and has never been indoctrinated to the requirements for code check-in as including having written and run a suite of automated unit, integration, and system tests, this is an error caused by a lack of knowledge (as opposed to omission, where the developer had been informed of the need to write and run the tests but failed to do so).

Defense-in-depth, the Swiss cheese model, bug prevention and detection

Prevention comprises the systems and processes employed to trap bugs and stop them from getting through development environments and into certification and/or production environments (depending on your software / hardware release process). In envisioning our Swiss cheese model, we need to understand that the layers include both latent and active types of error traps, and are designed to mitigate against certain types of errors.

The following are intended to aid in preventing bugs.

Tools & methods to mitigate against Skill-based errors in bug prevention:

  • Code base and architecture [latent]
  • Automated test coverage [active]
  • Manual test coverage [active]
  • Unit, feature, integration, system, and story tests [active]
  • TDD / ATDD / BDD / FDD practices [active]
  • Code reviews [active]
  • Pair Programming [active]
  • Performance testing [active]
  • Software development framework / methodology (e.g., Scrum, Kanban, DevOps) [latent]

Tools & methods to mitigate against Knowledge-based errors in bug prevention:

  • Education & background [latent]
  • Recruiting and hiring practices [active]
  • New-hire Onboarding [active]
  • Performance feedback & professional development [active]
  • Design documents [active]
  • Definition of Done [active]
  • User Story Acceptance Criteria [active]
  • Code reviews [active]
  • Pair Programming [active]
  • Information Radiators [latent]

Detection is the term for the ways in which we find bugs, hopefully in the development environment, though this phase also includes certification if your organization has a certification / QA phase. The primary focus of detection methods is to ensure no bugs escape into production. As such, the entire software certification system itself may be considered one large, active layer of error trapping. In fact, in many enterprise companies, the certification or QA team (if you have one) is actually the last line of defense.

The following are intended to aid in detecting bugs:

Tools & methods to mitigate against Skill-based errors in detecting bugs:

  • Automated test coverage [active]
  • Manual test coverage [active]
  • Unit, feature, integration, system, and story tests [active]
  • TDD / ATDD / BDD / FDD practices [active]
  • Release certification testing [active]
  • Performance testing [active]
  • User Story Acceptance Criteria [active]
  • User Story “Done” Criteria [active]
  • Bug tracking software [active]
  • Triage reports [active]

Tools & methods to mitigate against Knowledge-based errors in detecting bugs:

  • Education & background [latent]
  • Professional development (individual / organizational) [latent / active]
  • Code reviews [active]
  • Automated & manual test coverage [active]
  • Unit, feature, integration, system, story tests [active]

When bugs “escape” the preventative measures of your defense-in-depth system and are discovered in either the development or production environment, a root cause analysis should be conducted on your system based on the nature of the bug and how it could have been prevented and/or detected earlier. Based upon the findings of your root cause analysis, your system can be improved in specific, meaningful ways to increase both its robustness and resilience.

How an organization should, specifically, conduct root cause analysis, analyze risk and make purposeful decisions about risk, and how they should improve their system is the subject of part 3 in this series, available here.

 

Chris Alexander is a former U.S. Naval Officer who was an F-14 Tomcat flight officer and instructor. He is Co-Founder and Executive Team Member of AGLX Consulting, creators of the High-Performance Teaming™ model, a Scrum Trainer, Scrum Master, and Agile Coach.


Agile Retrospectives: High-Performing Teams Don’t Play Games

Scrum, The Lean Startup, cyber security practices, and some product development loops have fighter aviation origins. But retrospectives (debriefs)—the most important continuous improvement event—have been hijacked by academics, consultants, and others who have never been part of a high-performing team; sure, they know how things ought to work but haven’t lived them. We have.

Learn what’s wrong with current retrospectives and discover how an effective retrospective process can build the high-performance teaming skills your organization needs to compete in today’s knowledge economy.

Special thanks to Robert “Cujo” Teschner, Dan “Bunny” O’Hara, Chris “Deuce” Alexander, Jeff “T-Bell” Dermody, Ryan “Hook-n-Jab” Bromenschenkel, Ashok “WishICould” Singh, John “Shorn” Saccomando, Dr. Daniel Low, and Allison “I signed up for what?” Rivera.

Brian “Ponch” Rivera is a recovering naval aviator, co-creator of High-Performance Teaming™ and the co-founder of AGLX Consulting, LLC.


Risk Management and Error Trapping in Software and Hardware Development, Part 1

The way in which we conceptualize and analyze risk and error management in technology projects has never received quite the same degree of scrutiny as business process frameworks and methodologies such as Scrum, Lean, or traditional project management. Yet risk is inherent in everything we do, every day, regardless of our industry, sector, work domain, or process.

We actually practice risk management in our everyday lives, often without consciously realizing that what we are doing is designed to manage levels of risk against degrees of potential reward, and either to prevent errors from occurring or to minimize their impact when they do.

For example, I recently took a trip to the San Juan Islands with my wife and parents. I woke up early, made coffee, roused the troops, and checked the weather. I’d filled up the gas tank the day before and booked our ferry tickets online. Based on the weather, I recommended we each take an extra layer. We departed the house a bit earlier than really necessary, but ended up encountering a detour along the way due to a traffic accident on the Interstate. Nevertheless, we made it to the ferry terminal with about 10 minutes to spare, and just in time to drive onto the ferry and depart for Friday Harbor.

My personal example is relatively simple but, with a little analysis, demonstrates how intuitively we assess and manage risk:

  • Wake up early: mitigates risk of oversleeping and departing late (which could result further in forgetting important things, leaving coffee pot/equipment on, etc.), waiting on others in the bathroom, and not being able to prepare and enjoy some morning coffee (serious risk).
  • Check the weather: understanding the environment we are entering into is critical to mitigating environment-related risks, in this case real environmental concerns such as temperature, weather, wind, and precipitation, enabling us to mitigate potentially negative effects and capitalize on positives. Bad weather may even result in our changing our travel plans entirely – a clear form of risk mitigation in which we determine that our chance for a successful journey is low compared against the value we would derive from undertaking it, and decide the goal is not worth the present level of risk.
  • Book ferry tickets online: a mitigation against the risk of arriving late and having to wait in line to purchase tickets, which could result in us missing the ferry due to either running out of time or the ferry already being completely booked.
  • Departing earlier than necessary: a mitigation against unforeseen and unknowable specific risk, in this case the generic risk of en route delays, which we did encounter on this occasion.

As you can see, as a story my preparations for our trip seem rather routine and unremarkable, but when viewed through the lens of risk mitigation and error management, each action and decision can be seen as specifically targeted to mitigate one or more specific risks or minimize the potential effects of an error. Unfortunately, our everyday intuitive actions and mental processes seldom translate into our work environments in such direct and meaningful ways.

Risk and Error Management in Software and Hardware Development – Defense-in-Depth and the Swiss Cheese Model

Any risk management system can be seen as a series of layers designed to employ a variety of means to mitigate risk and prevent errors from progressing further through the system. We call this “trapping errors.” Each of these layers is often just one part of a larger system. A system constructed with these layers is referred to as having “defense-in-depth.”

Defense-in-depth reflects the simple idea that instead of employing one single, catch-all solution for eliminating risk and trapping errors, a layered approach which employs both latent and active controls in different areas throughout the system will be far more effective in both detecting and preventing errors from escaping.

These layers are often envisioned as slices of Swiss cheese, with each slice representing a different part of the larger system. As a potential risk or error progresses through holes in the system’s layers, it should eventually be trapped in one of the layers.

Risk and errors are then only able to impact the system when all the holes in the system’s Swiss cheese layers “line up.”
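As a toy sketch (my illustration, not part of the formal model), each slice of cheese can be thought of as a predicate that traps an error, and an error escapes only when every slice misses it:

// Each layer returns true if it traps the bug.
var layers = [
    function codeReview(bug)    { return bug.visibleInDiff; },
    function unitTests(bug)     { return bug.coveredByTests; },
    function certification(bug) { return bug.reproducibleInQA; }
];

// The bug reaches production only if no layer traps it –
// that is, only when the holes in every slice line up.
function escapes(bug) {
    return layers.every(function (layer) { return !layer(bug); });
}

escapes({ visibleInDiff: false, coveredByTests: false, reproducibleInQA: false }); // true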

Latent and Active Layers

There are two basic types of layers (or traps) in any system: latent and active. In your day-to-day life, latent traps are things such as the tires on your car or the surface of the road. Active traps are things such as checking the weather, putting on safety gear, wearing a helmet, or deciding not to go out into the weather at all.

Latent layers in software or hardware development may be things such as the original (legacy) code base, development language(s) used, system architecture & design, hardware (types of disk drives, manufacturer), and so forth. It may even include educational requirements for hiring, hiring practices, and company values.

Active layers in software and hardware development may include release processes, User Story writing and acceptance criteria, and development practices like TDD/ATDD, test automation, code reviews, and pair programming.

Separation of Risk and Error Management Concerns

To better focus on dealing with the most appropriate work at the appropriate time in responding to error detection, triage, and risk mitigation, we can separate our risk and error analysis into the following areas:

During development: focus on trapping errors

  • Prevention – the practices, procedures, and techniques we undertake in engineering disciplines to help ensure we do not release bugs or errors into our code base or hardware products.
  • Detection – the methods available to us as individual engineers, teams, and the organization as a whole to find and respond to errors in our code base or hardware products (which includes reporting and tracking).

Risk mitigation: steps for errors that have escaped into certification or production environments

  • Risk Analysis – the steps required to analyze the severity and impact of an error.
  • Risk Decision-making – the process of ensuring decisions about risk avoidance, acceptance, or mitigation are made at appropriate levels with full transparency.

Continuous Improvement in every case

Improvement – the process of improving workflows and practices through shared knowledge and experience in order to improve engineering practices and further harden our release cycles. This step uses root cause analysis to help close the holes we find in the layers of our Swiss cheese model.

Here is one conceptualization of what a Defense-in-depth Risk Management model might look like. Bear in mind that this is simply one way to conceive of layers at a more macro level, and each layer could easily itself be broken down into a set of layers, or you could conceive of it as one very large model.

[Image: defense-in-depth Swiss cheese model]

Given our model and our new ability to conceive of Risk and Error Management in this more meaningful and purposeful way, our next step is to understand error causality and what we can do to apply our causal analysis to strengthening our software and hardware risk management and error trapping system.

Continue reading in part 2 of this 3-part series.

 

Chris Alexander is a former U.S. Naval Officer who was an F-14 Tomcat flight officer and instructor. He is Co-Founder and Executive Team Member of AGLX Consulting, creators of the High-Performance Teaming™ model, a Scrum Trainer, Scrum Master, and Agile Coach.


The Missing Half of Team Performance: The Social Skills Behind High-Performance Teaming™

The overwhelming majority of businesses and organizations today are incredibly focused on adopting processes, tools, and frameworks to supercharge their teams’ productivity and quality, but in doing so they are solving for only half of the problem.

Whereas the team approach is often seen as a solution to cognitively complex tasks, it also introduces an additional layer of cognitive requirements that are associated with the demands of working together effectively with others. [1]

We are incontrovertibly human. When working in teams, we are humans working with other humans. Unlike a software program, the daily inputs and outputs of our lives are far too complex and changing to conceivably map and understand in a finite way; the potential derivations of our interpretations and reactions throughout the course of simply living our lives is, literally, infinite and unknowable.

Yet in virtually every business, organization, and team across America, we are focusing our efforts on establishing and implementing process, creating standardized operating procedures, rules, guidelines, policies, and training programs to build great (productive) teams. In doing so, we are ignoring the very thing which actually creates a high-performing team: us.

It actually isn’t rocket science: the interactions of the team members, not their individual intelligence, experience, education, or technical skill, are what determine how effective and how high-performing the team will be.

[T]he number one factor in making a group effective is skill at deep human interaction. That’s a remarkable finding in itself when we consider that groups are hardly ever evaluated on that basis. Everyone seems to think that other factors— leadership, mix of technical skills, vision, motivation— are more important. They matter, but not nearly as much as social skills… Social skills were the most important factor in group effectiveness because they encourage those patterns of “idea flow,” to use [Dr. Alex] Pentland’s term. Slicing the data in another way, those three elements of interaction [short & rapid idea generation, “dense interacting,” and turn-taking on idea-sharing and feedback] were more important than any other factor in explaining the excellent performance of the best groups; in fact, they were about as important as all the other factors— individual intelligence, technical skills, members’ personalities, and anything else you could think of— put together. [2]

To put the above a bit more succinctly, the best teams are not characterized by having the most intelligent, most skilled individuals; they are characterized by the quality and quantity of the team members’ social interactions.

There is an incredibly valuable point in this: the traditional focus on an individual’s knowledge, experience, and skills in a technical or process domain is only half of the story in building high-performing teams. The other half of the story is understanding how they perform in team environments and how well they contribute to a team’s overall performance and effectiveness.

Teaming Metaphors

A useful metaphor for the technical versus non-technical and social skills is live theater. Think of technical skills, scholastic education, and work experience as simply foundational elements of your business’ or organization’s ability to perform.

They are the stage, the lighting, the seating, the curtain, the orchestra’s space. Those elements are the theater.

However, the actors’ and actresses’ abilities to perform on that stage, to create something memorable and incredible – those are the social skills, the non-technical “secret sauce” of how the team actually performs together. For that great performance to occur, you need more than just the stage and the lighting – you need the performers and the magic that happens when a great team produces what a great team can.

Or consider the difference between watching a great football player play, and a great football team play. (This applies to both types of football.) A team of individuals with a star or two will never come close to achieving what an amazing team can achieve, regardless of their star power.

As I reported in my Harvard Business Review article “The New Science of Building Great Teams,” my research group and I have collected hundreds of gigabytes of data from dozens of workplaces. What we found was that the patterns of face-to-face engagement and exploration within corporations were often the largest factors in both productivity and creative output. [3]

Learning Social Skills

So what happens when you’ve hired the most technically skilled, scholastically educated people, and their social and teaming skills are virtually non-existent? Fear not – there is great news:

Growing numbers of companies have discovered what the military learned long ago, that the supposedly ineffable, intractable, untrainable skills of deep human interaction are in fact trainable… Businesses can’t even begin to get better until leaders acknowledge that these skills are the key to competitive advantage, that methods of developing them may be unfamiliar, and that measuring the results will never be as easy as gauging operating efficiencies. If companies can get past those obstacles, which in most organizations are more than enough to stop managerial innovations dead in their tracks, then they have a chance. [4]

Yes – trainable.

Although it should come as no surprise, due to the fact that we all share the common trait of being – well, human – it is good to know that we can actually focus on and learn those critical skills which enable us to team effectively with other humans.

The military and commercial aviation have been doing this for decades already.

Yes – decades.

The fact that the social and non-technical skills teams need to reach high-performance are trainable and able to be improved upon over time, just as one would improve their knowledge of emerging coding practices or new technologies, is not conjecture or hypothetical experimentation. In fact, it has been operationalized and regularly improved for years.

High-Performance Teaming™

Founded on Crew Resource Management (CRM) fundamentals, High-Performance Teaming™ provides teams at any and every level with the social, non-technical skills they need to perform at the highest levels. It targets exactly what makes teams effective – the ability of team members to engage in regular, high-quality interactions and input-feedback cycles to build the Shared Mental Models (SMMs) and communication loops which drive team performance and output.

Specifically, High-Performance Teaming™ builds the critical social skills teams need in:

  • Communication – the mechanics behind speaking and listening, non-verbal signals and cues, the human factors (culture, language, personality) which influence our communication patterns, and how to affect them through awareness.
  • Assertiveness – the behaviors behind respectfully asserting knowledge and opinion, and how to handle those assertions in a team.
  • Situation Awareness (SA) – the team’s ability to build a shared conception of their environment, and the degree to which it matches reality; requires Shared Mental Models, operational analysis, spatial awareness, etc.
  • Goal / Mission Analysis – the ways in which the team plans, executes, and learns based on their shared model of tactical to strategic goals; driven by alignment, communication, SA, and powers Decision-Making.
  • Decision-Making – utilizing collective intelligence of the team and leveraging the team’s SA combined with Goal / Mission Analysis to build consensus on solutions to complex problems, which in turn will drive execution and directly impact performance.
  • Agility – the ability to remain flexible and adapt to change; resilience in the face of a changing environment and rapidly evolving problem-set.
  • Leadership – one of the critical enablers to team effectiveness in non-flat environments, effective leadership is vital to creating Assertiveness, leveraging team collective intelligence in building SA and Goal / Mission Analysis, and getting to the correct decisions which enable organizational execution in a time-critical manner.
  • Culture – another enabler of team cohesiveness and resiliency; purposefully constructed and monitored through Shared Mental Models, Culture is a powerful contributor to Alignment, which is critical to reducing waste/churn and helping teams remain resilient and goal-oriented.
  • Empathy – the foundational element in every social skill; the ability to recognize and respond appropriately to the thoughts and feelings of others.

If you’ve gone through multiple team processes (traditional project management, Scrum, XP, SAFe, etc.), and you’re still wondering why your teams are not producing and improving, ask yourself if you’ve been solely concentrating on the Technical Skill & Process side of the equation – the side which only affects what processes teams are using to organize and conduct their work.

If you have, perhaps it is time to start giving your teams the social and non-technical skills they need to actually improve how they work together. Scrum (for example) is a great process which sets the stage for the performance, but High-Performance Teaming™, grounded in the science behind Crew Resource Management and team effectiveness, is the tool set your teams need to actually perform.

Contact AGLX Consulting today to bring those social skills to your teams!

 

Chris Alexander is a former U.S. Naval Officer who flew the F-14 Tomcat, and is Co-Founder and Executive Team Member of AGLX Consulting, creators of the High-Performance Teaming model.

  1. Cooke, N. J., Salas, E., Cannon-Bowers, J. A., & Stout, R. (2000). “Measuring team knowledge.” Human Factors, 42, 151-173.
  2. Colvin, Geoff (2015-08-04). Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will (pp. 126-7). Penguin Publishing Group. Kindle Edition.
  3. Pentland, Alex (2014-01-30). Social Physics: How Good Ideas Spread – The Lessons from a New Science (p. 93). Penguin Publishing Group. Kindle Edition.
  4. Colvin, 2015 (p. 204).


High-Performing Teams: Writing Code is Not Your Problem

Regardless of the software or hardware development processes used in your business domain, chances are if you are worried about your teams’ performance levels, their ability to write code or build hardware solutions is not your concern.

How do you build teams which are truly high-performing?

Teams which are able to work together at levels of truly high performance remain relatively rare in most industries. Regardless of which frameworks, methodologies, and tools teams adopt and adapt, their productivity remains relatively average. This hurts the bottom line of the business, which has often agreed to accept certain restrictions on current productivity on the promise of significantly increased productivity once the new methodology or framework is in place and humming.

Sound familiar? This is a situation in which the application of multiple solutions entirely fails to address the actual problem.

Teams do not form around processes, methodologies, and frameworks; they form around the members of the team. Or, more specifically, they form around the social, non-technical interactions of the individuals within the team. When a team fails to bond together effectively, several problems are typically at the root:

  1. The level of empathy at the team level is relatively low
  2. The number, type, and quality of social interactions are low
  3. There is little to no feedback within the group

Despite what you may believe, social skills are highly trainable and can be learned. Teams can build their social, non-technical skills in order to team together more effectively and achieve those levels of high-performance.

Moreover, leadership can directly enable these teaming activities by learning about how high-performing teams function and what they can do to enable those teams to coalesce and perform. The secret to leading highly-performing teams is that it actually isn’t that hard – but it does take a level of discipline and rigor which many leaders find exceptionally challenging.

If you want to learn about High-Performance Teaming™ and what you or your organization can do to get to those levels of high-performance, reach out to us at AGLX Consulting today.

Chris Alexander is a former Navy Lieutenant Commander, F-14 Tomcat RIO, software developer, Agile Coach, and Executive Team Member at AGLX Consulting, LLC.


500% Productivity Increase in One Day: Lessons from a Stand-down

Last month, seven software development teams (35+ members) stepped away from their sprint for one day and participated in a Sprint Stand-down. The problems the teams were trying to solve during the Stand-down were technical—the teams recognized they had a collective knowledge gap and needed to slow down to speed up.

During the Stand-down retrospective, we discovered the teams had increased productivity by over 500% in one day—an unexpected and welcome outcome. The retrospective provided us an opportunity to examine the how and why behind the hyper-productivity realized in this unfamiliar, one-day training event.

The lessons we learned were not revolutionary; instead, the lessons reinforced the values and practices found in the Agile Manifesto, Scrum, Extreme Programming, CrossLead, Flawless Execution, Crew Resource Management and Threat and Error Management.  In one day, a Sprint Stand-down provided undeniable evidence to developers, product owners, managers, directors, VPs, and the CIO that empowered execution trumps the traditional command-and-control approach to product delivery.

The transferable lessons learned from the Stand-down fall into familiar categories:

  • Shared Purpose/Objective
  • Workload Management/Limit Work in Progress (WIP)
  • Leadership/Teamwork
  • Execution Rhythm or Cadence
  • Communication

Before going deeper into the lessons learned, I want to share a little bit about the origins, concept and our approach to a Sprint Stand-down.


Sprint Stand-down

You will not find a Sprint Stand-down in the Scrum Guide. A Stand-down is not found in the Project Management Institute’s (PMI) vernacular, nor is it part of any Agile or currently trending management methodology. A Stand-down is a training evolution commonly used by elite military units, commercial aviation, and other high-reliability organizations (HROs) to accelerate team performance.

The purpose of any Stand-down is to promote knowledge-based training along with personal discipline and responsibility as essential elements of professionalism. It is designed to empower and inspire a community of professionals to continuously seek knowledge, integrate new information in everyday practice, and share new findings with others within the company and industry.

Stand-down Planning

The event was a self-organized undertaking in which a small team of eight people was accountable for event execution. Planning for the event followed a rapid planning process inspired by Crew Resource Management (CRM) and Threat and Error Management (TEM). The objectives of this Sprint Stand-down were to inform, inspire, educate, and motivate the teams—admittedly weak objectives, as they lacked clarity and measurability.

With a shared understanding of the Stand-down objective(s), the planning team used a liberating structure to capture anticipated threats and the resources needed to overcome them, and reviewed lessons learned from previous, similar events. A Stand-down plan was formed in less than 35 minutes, with each planning team member knowing who would have to do what by when to ensure flawless Stand-down execution.

Stand-down Execution

The Stand-down included in-house subject matter experts and one external trainer with 35+ team members in one room for 6.5 hours. Team members treated the Stand-down as an offsite, declining all meetings and turning on their Outlook out-of-office replies. Team members were randomly assigned to one of two Stand-down teams as determined by the type of gift card they received when they entered the Stand-down room. Two additional gift cards were given to all participants for the purpose of regifting—team members were encouraged to give away their gift cards to other team members for any reason. Team members were warned that over lunch (provided by the company) they might be called upon to share with everyone to whom they gave a gift card and why. The CIO provided an impromptu leadership moment which included the distribution of additional gift cards to team members who were nominated by their peers.

500%

An outcome of the event was an increase in productivity of 200% to 700%, depending on the metric used (e.g., story points, stories done, stories in progress and stories done, etc.). However, based on the number of stories “done” during the Stand-down (26) versus the average completed during a normal sprint day (5), roughly a fivefold jump, the most likely figure is 500%. In one day.

For argument’s sake, let’s just say the productivity outcome for this one-day event was 20%, a palatable number for those who have not embraced the power of Scrum or empowered execution. What if we could take the lessons learned from this event and apply them to how we work during our normal workdays to get a productivity increase of 5% in the next two weeks?


Sprint Stand-down Lessons Learned

Shared Purpose/Objective

  • A team needs a shared purpose or common objective. Objectives should be clear, measurable, achievable, and aligned to a focus area, strategic line of effort, company vision, etc.
  • A shared purpose builds unity of effort. Teams were observed self-organizing throughout the day and reported a reduction in duplication of work and an increase in cross-team knowledge-sharing.

Workload Management / Limit Work in Progress (WIP)

  • Limit WIP. Individuals reported being happier as they felt part of a team of teams working toward one goal.
  • Context Switching is bad. Most team members reported that they did not check their email during the six hours. Team members reported that the internal Stand-down disruptions (we played music during frequent shout-outs) slowed them down and were absolutely disruptive.
  • Protect the teams from out-of-band work. Team members reported that they had no out-of band work during the day.
  • Empower team members to push back on work that is not aligned to the objective.
  • Pairing works. Teams paired all day. Some mobbed.

Leadership

  • Say “Thank You.” Team members should recognize and acknowledge the importance of others in task performance.
  • Leaders need to be visible but not intrusive. Checking-in to say “thank you” to individuals carries more weight than email.
  • An invisible leader is a visible problem. Team members noticed those leaders who failed to stop by to see how the day was progressing.
  • Unscripted leadership is the best kind. The CIO’s visit was received as genuine.
  • Recognition from leaders is great, but peer recognition of important contributions is even better.

Execution Rhythm or Cadence

  • Stand-down tempo is not sustainable but the practice is sound when a knowledge gap exists.
  • Stand-downs should not exceed six hours.
  • Schedule Stand-downs as required. No more than once a month.

Communication

  • Face-to-face communication remains the gold standard.
  • Keep work visible. The teams shared one electronic backlog.
  • Co-locate teams to maximize the value of osmotic communication.
  • Cross-team pollination builds trust.

Brian “Ponch” Rivera is a recovering Naval Aviator and Commander in the U.S. Navy Reserve. He is the co-founder of AGLX, LLC, a Seattle-based Agile Leadership Consulting Team, and a ScrumTotal Advisory Board Member.


What Agile Teams Can Learn from Flight Crews

Small, cross-functional teams working together with devices, focused on a shared objective, surrounded by complexity and frequently changing conditions. Welcome to the world of software development. And commercial aviation. Think the similarities between software development and aviation end here? Think again.

Aviation continues to have a profound influence on software development, organizational agility, cyber security, and transforming managers into leaders. For example, Scrum, the complexity-busting framework used by technology companies to build complex software, comes from fighter aviation and Lean manufacturing. The Lean Startup, a popular business-model framework used by today’s hottest Silicon Valley startups, is based on John Boyd’s OODA Loop, an empathy-driven decision cycle that captures how fighter pilots “get inside” their opponent’s decision cycle to gain a competitive advantage. Similarly, OODA (Observe, Orient, Decide, Act) is used both to rapidly design products and in the burgeoning business of cyber security. On the management front, aviation is reported to be the inspiration behind the Holacracy movement, a social system where authority and decision-making are distributed throughout self-organizing teams. But you already knew all of this, right?

Next Time You Fly on a Commercial Carrier…

Commercial aviation flight deck and cabin crews follow the empirical process of plan, communicate, execute, and assess on each leg of their assigned trip (mission). Similarly, software developers around the globe follow the same empirical process found in Scrum—Sprint Planning (plan), Standups (communicate), Sprint Execution (execute), Review and Retrospective (assess). A sprint or iteration is a time-boxed mission (one to four weeks long) where potentially shippable software is delivered. With empowered team members and solid execution, Scrum builds a culture of continuous learning and innovation.

There’s more?

The human interaction skills needed on the flight deck and on software development and business teams are exactly the same; these cognitive and social skills include empathy, collaboration, discipline, communication, leadership, situation awareness, and teamwork. Moreover, the silent killer found in the cockpit is also the top threat among software development and business teams.

Slow and insidious, poor Workload Management is the silent killer. Software developers and Lean experts refer to Workload Management in terms of Work in Progress (WIP). When business and software teams try to do too much (too much WIP), or do not have a shared purpose or objective, rapid value delivery (effective productivity) and quality decrease—detriments to business survival.

Prioritization of work in and out of the cockpit is imperative, but flight deck and cabin crews have a marked advantage over software and business teams: flight crews are trained on the effective use of all available resources needed to complete a safe and efficient flight; software and business teams are not. The non-technical skills training flight crews receive is called Crew Resource Management (CRM) and Threat and Error Management (TEM).

CRM, affectionately known as “Charm” school, teaches the cognitive and social skills individuals need to be part of high-performing teams in complex, rapidly changing environments. TEM is a human-system approach to building habits and skills team members need to manage threats and errors within complex operating environments.

What if technology teams applied the cognitive and social lessons learned from CRM and TEM to the world of software development?

Instead of “Scaling Agile,” what is needed is a Crew Resource Management- and Threat and Error Management-influenced Agile Operating System: a system that builds leaders and empowers teams and individuals at every level. This operating system should enhance Scrum through a simple, repeatable, proven, and scalable set of interconnected and interdependent planning, communication, execution, and assessment processes that drive innovation, create leaders, and build a continuous learning culture. Think of this human operating system as the non-technical skills teams need to overcome complexity—those skills that flight crews have burned into muscle memory.

Brian “Ponch” Rivera is a recovering Naval Aviator and Commander in the U.S. Navy Reserve. He is the co-founder of AGLX, LLC, a Seattle-based Agile Leadership Consulting Team, and a ScrumTotal Advisory Board Member.



What the Agile Community Should Learn from Two Little Girls and Their Weather Balloon

As reported by GeekWire, over the weekend two Seattle sisters, Kimberly (8) and Rebecca (10) Yeung, launched a small weather balloon to the edge of space (roughly 78,000 feet). They have the GoPro video from two cameras to prove it.

While this is certainly an impressive, if not amazing, feat for two young girls to have accomplished (albeit with some parental assistance), what is perhaps most impressive (at least to me) is the debrief (or retrospective) they held after the mission. While I’m not fortunate enough to have been there to witness it personally, I can see from the photo of their debrief sheet (as posted in the GeekWire article) that it was amazingly productive and far surpasses most of the agile retrospectives (debriefs) I’ve witnessed.

*Photo copied from the article on GeekWire.

Apart from the lesson about their Project Plan (“We were successful because we followed a Project Plan & Project Binder”), this sheet is astonishingly solid. Even given the fact that I think it is a misconception to attribute success to having had a project plan, for an 8- and 10-year-old, this is awesome work!

My friend and fellow coach Brian Rivera and I have often discussed the dire lack of quality, understanding, and usefulness of most agile retrospectives. I might even go so far as to call the current state of agile retrospectives in general “abhorrent” or “pathetic,” even “disgraceful.” Yes, I might just use one of those adjectives.

For teams using agile methodologies and frameworks focused on continuous improvement (hint: everything in agile is about enabling continuous improvement), the retrospective is the “how” which underlies the “what” of continuous improvement.

Supporting the concrete actions of how to improve within the retrospective are the lessons learned. Drawing out lessons learned during an iteration isn’t magic and it isn’t circumstantial happenstance – it requires focused thought, discussion, and analysis. Perhaps for high-performing teams who have become expert at this through positive practice, distilling lessons learned and improving their work may occur at an almost unconscious level of understanding, but that’s maybe 1% (5% if I’m optimistic) of all agile teams.

So what does a team need to understand to actually conduct a thorough and detailed analysis during their retrospective? Actually only a few things:

  1. What were they trying to do? (Goals)
  2. How did they plan to do it? (Planning / strategy)
  3. What did they actually do? (Execution – what actually occurred)
  4. What were their outcomes? (Results of their work)
  5. What did they learn, derived from analyzing the results of their efforts measured against the plan they had to achieve their goals? (Lessons learned)

A simple example:

  1. I want to bake peach scones which are light, fluffy, and taste good. (Goal + acceptance criteria)
  2. I plan to wake up early Saturday morning and follow a recipe for peach scones which I found online, which is highly rated, and which comes from a source I trust. It should take 30 minutes. (Planning – who / what / when / where / how)
  3. I wake up early Saturday morning and follow the recipe, except for the baking powder. It can leave a metallic taste behind, so I leave it out. (Execution)
  4. It took almost an hour to make the scones, and they did not rise. They tasted alright, but were far, far too dense and under-cooked internally, partially due to being flat. (Outcomes)
  5. I didn’t allocate enough time based on the fact that it was my first attempt at baking scones and I was trying to modify a known good recipe (reinventing the wheel, root causes: experience). Although I wanted light, fluffy scones, I didn’t get them because I deliberately left out a key ingredient necessary to help the dough rise (good intention – bad judgment, root causes: knowledge / discipline). (Lessons learned)

Perhaps a bit overly simplistic, but this is exactly the type of concrete, detailed analysis into which most teams simply never delve. Instead, retrospectives for most agile teams have devolved into a tragic litany of games, complaining sessions, and “I liked this / I didn’t like that” reviews with no real outcomes, takeaways, or practical concepts for how to actually improve anything. Their coaches leave them with simple statements such as “we need to improve.” Great. Thanks.
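
For software teams that want to make this five-part structure stick, it can help to capture every retrospective in the same shape. Below is a minimal, hypothetical sketch in Python; the class and field names, and the root-cause tags, are my own illustration of the idea, not anything prescribed by a framework:

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative root-cause tags (extend to match your team's own taxonomy).
    ROOT_CAUSES = {"knowledge", "experience", "discipline"}

    @dataclass
    class Lesson:
        """One lesson learned, traced back to its root cause(s)."""
        insight: str
        root_causes: List[str] = field(default_factory=list)

        def __post_init__(self):
            unknown = set(self.root_causes) - ROOT_CAUSES
            if unknown:
                raise ValueError(f"unknown root cause tag(s): {unknown}")

    @dataclass
    class Retrospective:
        """The five elements of a thorough retrospective analysis."""
        goal: str       # 1. What were we trying to do?
        plan: str       # 2. How did we plan to do it?
        execution: str  # 3. What did we actually do?
        outcomes: str   # 4. What were our results?
        lessons: List[Lesson] = field(default_factory=list)  # 5. What did we learn?

    # The scone example, expressed in the template:
    scones = Retrospective(
        goal="Bake peach scones that are light, fluffy, and taste good.",
        plan="Follow a trusted, highly rated recipe; 30 minutes Saturday morning.",
        execution="Followed the recipe but deliberately omitted the baking powder.",
        outcomes="Took almost an hour; scones were flat, dense, and under-cooked.",
        lessons=[
            Lesson("Budget extra time for a first attempt.", ["experience"]),
            Lesson("Don't second-guess a known-good recipe.", ["knowledge", "discipline"]),
        ],
    )

Whether a team records this in code, on a wiki page, or on a whiteboard photo, the point is the same: all five questions get answered explicitly, and every lesson learned is traced to a root cause.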

Taking what we know from Kimberly and Rebecca’s plan to send a weather balloon into outer space, let’s do a little analysis of their retrospective. I can tell you already it is not only solid, but will ensure they’re able to improve both the technical design itself and their team’s “meta”: the ways they work, their collaboration, their teamwork, their research, everything which enables them to continually improve and produce powerful results.

  • Bigger balloon – create more lift – ensure faster rate of ascent (Technical / work-related but important. They have learned through iterating.)
  • Remember to weigh payload with extra – more accurate calculations – correct amount of helium (Technical but also process-related, this draws root causes arising from both knowledge and experience, enabling them to adapt both their work itself and their meta – how they work.)
  • Don’t stop trying – you will never know if you don’t ask. Eg GoPro (Almost purely meta, reflecting a great lesson which builds not only a team mindset but also reflects a core value, perseverance!)
  • Washington Geography – Map research on launch locations taught us a lot of geography (This is both technical and meta, addressing their research data and inputs/outputs but also learning about how to learn and the value of research itself!)
  • Always be optimistic – We thought everything went wrong but every thing went right. Eg. SPOT Trace max altitude mislead [sic] our expectations. Eg. We thought weather cloudy but it was sun after launch. Eg. Weight. Thought payload too heavy for high altitude. (Are you kidding me?! Awesome! Lessons about situational awareness and current operational picture, data inconsistencies, planned versus actual events, planning data and metrics, and the importance of outlook/attitude! #goldmine!)
  • Be willing to reconstruct – If you find out there is a problem, do not be afraid to take it apart and start all over again. (Invaluable lesson – learning to embrace failure when it occurs and recover from it, realizing that the most important thing is not to build the product right, but to build the right product!)
  • Have a redundant system – Worry less. (Needs no explanation.)
  • SPOT Trace technology awesome – Very precise (This is a fantastic example of a positive lesson learned – something that is equally important to acknowledge and capture to ensure it gets carried forward and turned into a standard practice / use.)
  • Live FB updates – add to fun + excitement (Yes yes yes!! To quote an old motto, “If you’re not having fun, you’re not doing it right!” This stuff should be fun!!)
  • Speculation – Don’t guess. Rely on data. (Fantastic emphasis on the importance of data-oriented decisions and reflects another potential team core value!)
  • Project Plan – We were sucessful [sic] because we followed a Project Plan + Project Binder. (The only lesson I disagree with. I would advocate a good 5 Whys session on this one. My suspicion is that the project was successful because they as a team worked both hard and well together [high-performing], had fun, and iterated well [based on the lesson about not being afraid to reconstruct / start over]. I have serious doubts that their mission was a success because they had and followed a project plan. Regardless, this is far too small a point to detract from the overall impressiveness of their work!)

Take a few lessons from two girls who have demonstrated concrete learning in ways most adults fail miserably even to grasp conceptually. If you are on a team struggling to get productive results from your retrospectives, stop accepting anything less than solid, meaningful analysis coupled with clear, actionable results. The power is in your hands (and head).

If you are one of those agile coaches who thinks retrospectives are just for fun and celebration, who plays games instead of enabling concrete analysis, and who wonders why their teams cannot seem to make any marked improvements, get some education and coaching yourself and stop being part of the problem!

(Written with the sincerest of thanks to Kimberly and Rebecca Yeung, and the Yeung family for their outstanding work, and to GeekWire for publishing it!)

* Chris Alexander is an agile coach, thinker, ScrumMaster, recovering developer, and co-founder of AGLX Consulting, who spends too little time rock climbing.


Six False Assumptions That Are Killing Retrospectives

These six commonly accepted (but false) assumptions about retrospectives are killing innovation and hindering your teams’ future execution.

  1. Retrospectives are meetings
  2. A retrospective’s length is dependent on sprint/iteration length
  3. Retrospective variety ensures team members do not get bored
  4. There are three basic questions in a retrospective
  5. The Scrum Master or Agile Coach must facilitate the retrospective
  6. A retrospective is designed independently of other Scrum/Agile events

1. Retrospectives Are Not Meetings

Retrospectives are not meetings; they are events. Why is this important? Words have meaning, and if you look at the definitions of “meeting” and “event” you will see that a meeting is an assembly of people for discussion or entertainment, while an event is something that occurs in a certain place during a particular interval. Meetings start late, end late, and, to be honest, are typically a waste of time. Events are structured, start and end on time, have a purpose, and, dare I say it, follow a process or method.

2. Retrospective Length is Independent of Sprint/Iteration Length 

The time needed for a retrospective is independent of sprint or iteration length; a four-week sprint, for example, does not require a four-hour retrospective. A one-hour retrospective is enough time for a high-performing team to identify the handful of action items required to improve their future execution, regardless of the sprint length. An average team, on the other hand, may need up to two hours while they build individual and team retrospective muscle memory.

To maximize the amount of work not done, a team only needs to gather data (what went well / what didn’t go well) for five events (planning, standups/communication, sprint execution, review, and the retrospective itself), and this can be done in as little as ten minutes, shorter still with a well-practiced team. Analyzing sprint execution (generating insights), conducting a root cause analysis, and developing action items (lessons learned) are where the team needs to spend the majority of the hour.

Consider this: a four-week sprint will have more standups and more execution days than a one-week sprint, but this does not equate to a 4x increase in the time needed to gather data in a retrospective. A high-performing team should be able to glean all their learning points from a 2-6 week sprint within the span of an hour. Anything more is a disrespectful waste of the organization’s money and the team members’ time.
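
To make the fixed-hour claim concrete, here is a toy sketch (my own illustration, not a prescribed agenda) of a one-hour budget that is independent of sprint length. The phase names loosely follow the activities described above; the exact minute allocations are assumptions for demonstration only:

    # Illustrative one-hour retrospective timebox (minutes).
    # Allocations are assumptions for demonstration, not a prescription.
    def retrospective_agenda(sprint_weeks: int) -> dict:
        """Return a fixed one-hour agenda; sprint length does not change it."""
        agenda = {
            "gather data (went well / didn't go well)": 10,
            "generate insights / root cause analysis": 30,
            "develop action items (lessons learned)": 15,
            "close and confirm action item owners": 5,
        }
        assert sum(agenda.values()) == 60, "agenda must fit in one hour"
        return agenda  # sprint_weeks is deliberately unused: the hour is fixed

    # A one-week and a four-week sprint get the same hour:
    assert retrospective_agenda(1) == retrospective_agenda(4)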

3. Retrospective Variety is NOT the Spice of Organizational Life

Boredom: “an aversive state of wanting, but being unable, to engage in satisfying activity.”  

The idea of trying out a new retrospective activity (game) so team members do not get bored is misguided. The perceived or actual boredom individuals or teams display toward retrospectives may be (1) a result of the type of work the team is doing, (2) a lack of understanding of the retrospective activity and/or a failure to achieve actionable outcomes, or (3) the fact that great retrospectives are hard, since they challenge people to be open and honest (and we also know that boredom can result from being too challenged).

Let’s assume that knowledge workers are challenged each sprint or iteration and that their work is not the source of the perceived boredom. How does changing the retrospective activity (game) address the aversive state of wanting, but being unable, to engage in a satisfying activity? And if retrospectives are perceived as too challenging, why would changing the activity pacify that root cause of boredom?

Consider these points if you still believe changing up retrospectives in the name of boredom is a smart idea:

  • Inconsistent retrospectives, in structure and/or frequency, lead to mediocrity. According to Jim Collins, “The signature of mediocrity is not an unwillingness to change. The signature of mediocrity is chronic inconsistency” [1].
  • Repetition. Repetition. Repetition. Learning something new, awkward, and difficult requires repetition. It takes a while to get used to opening up, being self-critical, and learning how to respectfully challenge one another in and out of a retrospective.
  • According to Doug Sundheim, “Debriefing (Retrospective) is a structured learning process designed to continuously evolve plans while they’re being executed.”
  • Culture is a product of retrospection. Edgar H. Schein points out that “culture is the result of a complex group learning process [2].”
  • Innovation is a product of interactions, and common processes that accelerate those interactions are necessary. According to a recent McKinsey Quarterly article, “…innovation is a complex, company-wide endeavor, it requires a set of crosscutting practices and processes to structure, organize and control it [3].”

So not only will “variety” in your retrospectives fail to produce the desired results; it will also have a negative impact on your team’s ability to improve.

4. The Two Most Powerful Questions in a Retrospective Are:

  1. What was the primary objective?
  2. Did we achieve that objective?

Why are these the most powerful questions during a retrospective? Simple: alignment. If the team is not aligned, if individuals cannot succinctly restate the sprint objective, then we (organization, leaders, Agile Coach, Product Owner, etc.) have failed to establish the necessary conditions for the team to become empowered [4].

With an aligned team, the answer to the second question should be binary (yes/no), assuming the primary Sprint objective is clear, measurable, and achievable. Moreover, the primary Sprint objective should be connected to external effects on the economic system (i.e., business value) and not to internal measures of performance (e.g., ready stories, velocity, burndown, etc.).

What went well? What didn’t go well? What can we do better? These three questions are commonly used and thought of as a proven retrospective design or process, instead of as techniques that are helpful in gathering data and deciding what to do. Caution: a real danger exists in asking a team what they can do better if they do not share a common picture of “What” happened, understand “How” that something happened and, more importantly, “Why” it happened (root cause analysis). In a later post I will go into more detail on how a self-similar activity used in strategy and product development (What-How-Why) can be a powerful tool for building action items in retrospectives.

5. Who Facilitates?

A lot of people look to the ScrumMaster or Agile Coach to facilitate a retrospective, and indeed, when a team is just learning how to conduct and participate in one, this is important. However, the retrospective should be viewed as a leadership episode, where leadership is valued over facilitating the event. In complex systems, leadership “takes place during interactions among [team members] when those interactions lead to change in the way those [members] expect to relate to one another in the future” [5]. How members relate to each other in the future is directly connected to how a leader (a person in an actual or perceived position of authority) establishes a safe environment during the retrospective (a leadership episode).

The Product Owner’s position on the team is better suited for establishing this safe environment, as they generally have over 51% of the vote when it comes to the product vision and backlog. The ScrumMaster or Agile Coach, on the other hand, is often a contractor or is viewed as an event facilitator and protector of the framework, which makes the crucial step of setting the tone (establishing psychological safety) problematic. Development team members are also great candidates for leading the retrospective and should be given the leadership opportunity once the team is comfortable with a proven retrospective process. It is perfectly acceptable for the ScrumMaster or Agile Coach to coach whoever is leading the retrospective during its execution and then follow up with a one-on-one retrospective using a self-similar process.

6. The Leadership Pattern within a Retrospective Should be Tightly Coupled with Leadership Patterns of Other Events

Iteration or Scrum events should be viewed as a whole, not as standalone parts of a system. Just as Scrum exists in its entirety, the leadership patterns of each Scrum event should be tightly coupled (have interdependencies), with the relationships among those patterns working together. Combining an off-the-shelf retrospective game with an ad hoc planning process (the “let the team figure it out” approach), for example, is counter to basic Systems Thinking. Leadership patterns within system events need to be connected: how a team plans, how it conducts its standups, and how it executes its sprint must be connected to how the team holds its retrospective.

To illustrate this point, consider Russell Ackoff’s example of taking the best parts from 450+ different cars (the best transmission, the best tires, brakes, cooling systems, belts, spark plugs, etc.) and trying to put those best parts together to make the best car. The parts do not connect, and therefore you do not have a car; you have a mess. The proliferation of retrospective games designed independently of other Scrum events is the equivalent of those incompatible best parts: they only contribute to an Agile mess.


Brian “Ponch” Rivera is a recovering Naval Aviator and a Commander in the U.S. Navy Reserve. He is the co-founder of AGLX, LLC, a Seattle-based Agile Leadership Consulting Team, and a ScrumTotal Advisory Board Member.


References:

[1] Jim Collins, Good to Great: Why Some Companies Make the Leap…and Others Don’t (HarperCollins, 2001).

[2] Edgar H. Schein, Organizational Culture and Leadership, 3rd ed. (Jossey-Bass, 2004).

[3] Marc de Jong, Nathan Marston, and Erik Roth, “The Eight Essentials of Innovation,” McKinsey Quarterly, April 2015.

[4] Peter M. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (Doubleday/Currency, 1990).

[5] J. Hazy, J. Goldstein, and B. B. Lichtenstein, Complex Systems Leadership Theory: New Perspectives from Complexity Science on Social and Organizational Effectiveness (2007). http://emergentpublications.com/documents/9780979168864_contents.pdf

Agile image (c) Can Stock Photo, http://www.canstockphoto.com
