Category Archives: Agile

Agile systems, frameworks, and methodologies

How to Develop a Family Hurricane Checklist Using Military-Grade Planning

Concepts Applied in This Post: Red Teaming; complex adaptive systems; Sensemaking; High-Reliability Organizing; Mindful Organizing; Principles of Anticipation; Situational Awareness; Anticipatory Awareness; Mission Analysis; Shared Mental Models; Mission Command; Commander’s Intent; Cynefin; vector-based goals; challenge and respond checklists; and establishing a sense of urgency.

This post outlines how families can apply some elements of military-grade planning to develop a hurricane checklist. Moreover, this post also applies to business leaders interested in real agility, innovation, and resiliency.

The Rivera girls reviewing the plan

Background

With last week’s devastation in Houston on our minds and the looming threat of Hurricane Irma in the Atlantic, I thought it would be prudent to take my family through some basic hurricane preparedness planning. To do this, I decided to take my wife, six-year-old and soon-to-be eight-year-old daughters through the same military-grade agility, innovation, and resiliency lessons that I coach to FORTUNE 100 companies and startups. After all, a family is a team and a hurricane is a complex adaptive system, right?

This activity ended up providing valuable lessons for the entire family and as a result, we delivered a challenge and response checklist, reviewed and re-supplied our emergency kits, and more importantly, we became more aware of capabilities and limitations of the socio-technical system we call our home.

Feel free to apply the approach to your household or business.

Focus on Outcomes

To start the activity, begin with a basic statement: a vector-based goal that inspires action. The outcome statement I used was:

Survive for five days in our house during and following a major hurricane

Notice that my Commander’s Intent does NOT contain a clear, measurable, achievable objective or SMART goal. Why? Because we are dealing with complexity; we cannot predict the future in the Complex domain. When dealing with increasing volatility, uncertainty, complexity, and ambiguity (VUCA), emergent goals are fine, as you will see.

Effective Planning

Knowing that plans are nothing and that planning is everything, I used a military-grade planning approach to help the girls understand the system we live in, the wonders and dangers of a hurricane, and their roles in the event of a hurricane. To do this, I asked the girls to write down those things that may happen during a hurricane.

Anticipate Threats

Complex adaptive systems and high-performing teams anticipate the future. One of the common planning problems I see with executive and development teams is that they fail to identify threats and assumptions (they do not anticipate the future) before developing their plan. To help the girls understand this critical step, I asked them to write down “what” can happen in the event of a hurricane.

Having watched the news on Hurricane Harvey, they were able to identify a few threats associated with hurricanes (e.g. flooding, no power, damage to windows). However, just as adult team members do in meetings, my girls went down many rabbit holes, including discussions about Barbie and Legos. The best approach to overcome this natural phenomenon (cognitive bias) is to use the basic Red Teaming technique of Think-Write-Share.

With some steering help from mommy and daddy, our girls were able to get back on course and capture several more “whats” before moving on to the next step.

Red = Threats; Blue = Countermeasures; Green = Resources Needed.

Identify Countermeasures and Needed Resources

With the threats identified, we began to write down possible countermeasures and the needed and available resources to overcome those threats. As we were doing this, we noticed the emergence of what appeared to be a checklist (see our blue notes in the picture above). Although not explicitly stated in the Commander’s Intent, we decided to add “build a checklist” (an emergent objective) to our product backlog (more on this later).

Apply Lessons Learned

“Learn from the mistakes of others. You can’t live long enough to make them all yourself.” ~ Eleanor Roosevelt

Knowing that there are many lessons learned from people who have lived through hurricanes, I went online to find and apply those lessons learned to our countermeasure and resource backlog. I used the Red Cross as a source and discovered we missed a couple of minor items in our growing backlog.

*I recommend using external sources only after you develop countermeasures to your identified threats. Why? Because planning is about understanding the system; it is how we learn to adapt to change.

After we applied lessons learned, we used a green marker to identify those needed resources (see picture). These resources became part of our product backlog.

Build a Prioritized Product Backlog

A product backlog should be prioritized based on value. Since I was dealing with children who have short attention spans but were highly engaged in the current activity, I decided to prioritize our backlog in this order:

  • Build a Hurricane Checklist
  • Review with the team (family) what is in our current emergency kit
  • Purchase needed resources
  • Show the kids how to use the kit
  • Build a contingency plan (our contingency plan details are not covered in this post)

“Scrum” It

Since I coach Scrum as a team framework, and our family is a team, I showed my children the basics of Scrum. If you are not familiar with Scrum, you can find the 16-page Scrum Guide here.

We used a simple Scrum board to track our work and executed three short Sprints. As a result, the girls were able to pull their work, we were able to focus on getting things done, and we identified pairing and swarming opportunities. They also learned a little about what I do for a living.

Key Artifact and Deliverable Review: Challenge and Respond Checklists

With a background in fighter aviation, and having coached surgical teams on how to work as high-performing teams, I know from experience that checklists work in ritualized environments where processes are repeatable. To create a ritualized environment, we can do simple things such as starting an event at a specified time with a designated leader. Another option is to change clothes or wear a vest—by the way, kids love dressing up.

One advantage of a challenge and respond checklist is that it can be used to create accountability and provide a leadership opportunity for a developing leader–perfect for kids and needed by most adults. For example, the challenge and respond checklist we developed (above) can be initiated by one of my daughters. If we needed to run the checklist, one of my daughters would simply read the items on the left and mommy or daddy would respond with the completed items on the right. Giving a young leader an opportunity to lead a simple team event and recognizing their leadership accomplishments energizes their internal locus of control and ultimately builds a bias toward action.

Feel free to use our checklist as a guide but remember, planning is about understanding your system.
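At its core, a challenge and respond checklist is just a list of challenge/response pairs read by two people. Here is a minimal sketch in Python of how such a checklist runs; the items shown are illustrative examples of my own, not our actual family checklist:

```python
# A challenge and respond checklist: the challenger reads the item on the
# left; the responder confirms with the expected completion on the right.
# These items are illustrative examples, not the actual family checklist.
CHECKLIST = [
    ("Emergency kit", "Stocked and staged"),
    ("Drinking water (five days)", "Stored"),
    ("Windows", "Shuttered"),
    ("Phones and flashlights", "Charged"),
]

def run_checklist(items, respond):
    """Read each challenge, collect the response, and flag any mismatch."""
    results = []
    for challenge, expected in items:
        ok = respond(challenge) == expected
        results.append((challenge, ok))
        print(f"{challenge:<28} {'complete' if ok else 'CHECK AGAIN: ' + expected}")
    return results

# Example: a responder who has completed every item.
answers = dict(CHECKLIST)
results = run_checklist(CHECKLIST, respond=lambda challenge: answers[challenge])
```

The point of the structure is that the challenger only needs to read the left column; accountability lives in the response, which is why a child can lead it.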

The Most Important Step: Debrief

Yes, a debrief with a six- and seven-year-old is possible. Remember to create a learning environment for them, ask them about the goal(s) they set out to achieve, and ask them what they learned. Walk them through the planning steps they just went through to reinforce the planning process. Also, ask them what they liked and what they didn’t like about working on the plan with mommy and daddy. Bring snacks.

Brian “Ponch” Rivera is a recovering naval aviator, co-founder of AGLX Consulting, LLC, and co-creator of High-Performance Teaming™ – an evidence-based, human systems solution to rapidly build and develop high-performing teams and organizations.


A Shallow Dive Into Chaos: Containing Chaos to Improve Agile Story Pointing

In May 1968 the U.S.S. Scorpion (SSN-589), a Skipjack-class nuclear submarine with 99 crewmembers aboard, mysteriously disappeared en route to Norfolk, VA from its North Atlantic patrol. Several months later, the U.S. Navy found its submarine in pieces on the Atlantic seabed floor. Although there are multiple theories as to what caused the crippling damage to the submarine, the U.S. Navy calls the loss of the Scorpion and her 99 crew an “unexplained catastrophic” event [1].

The initial search area stretched across 2,500 NM of Atlantic Ocean from the Scorpion’s last known position off the Azores to its homeport in Norfolk, Virginia. Recordings from a vast array of underwater microphones reduced the search area down to 300 NM. Although technology played an important role in finding the U.S.S. Scorpion, it was the collective estimate of a group that eventually led to the discovery of the destroyed submarine. The U.S.S. Scorpion was found 400 nautical miles southwest of the Azores at a depth of 9,800 ft., a mere 220 yards from the collective estimate of the group [2].

The group of experts included submarine crew members and specialists, salvage experts, and mathematicians. Instead of having the group of experts consult with one another, Dr. John Craven, Chief Scientist of the U.S. Navy’s Special Projects Office, interviewed each expert separately and put the experts’ answers together. What’s interesting about the collective estimate is that none of the expert’s own estimates coincided with the group’s estimate—in other words, none of the individual experts picked the spot where the U.S.S. Scorpion was found.

A Quick Lesson in Chaos

According to Dave Snowden, Chaos is completely random, but if you can contain it, you get innovation. You do this by separating group members and preventing any connection within the system. And when done properly, you can trust the results. Skunk Works projects and the Wisdom of Crowds approach made popular by James Surowiecki are great examples of how to contain Chaos [3].

Dr. Craven’s approach to finding the U.S.S. Scorpion was a controlled dive into Chaos: preventing any connections within the group protected against misplaced biases. Moreover, by bringing in a diverse group of experts, Dr. Craven ensured different expert perspectives were represented in the collective estimate.

To contain Chaos, three conditions must be satisfied [4]:

1. Group members should have tacit knowledge—they should have some level of expertise

2. Group members must NOT know what the other members answered

3. Group Members must NOT have a personal stake

Story Point Estimates: Taking a Shallow Dive into Chaos

Agile software development teams frequently estimate the effort and complexity of the user stories found in their product and iteration backlogs. Individual team members “size” a story by assigning it a Fibonacci number based on their own experiences and understanding of the story. Consensus on a point value should not be the aim but, unfortunately, it is frequently coached and practiced.

To reduce cognitive biases, contain Chaos, and accelerate the story pointing process, AGLX trains and coaches clients’ software development teams to ask the product owner questions using various Red Teaming techniques, to include Liberating Structures. Once all team members are ready to assign points to the story, team members place their selected Fibonacci card or chip face down on the table.

On the “Flip” in “Ready…Flip,” team members turn their cards over and the ScrumMaster rapidly records the individual points. When all points are registered, the ScrumMaster takes the average of the points scored and assigns that number to the story (rounding to the nearest integer, if desired). No need to waste time re-pointing or trying to come to a consensus.

Example. A six-person software development team assigns the following individual points to a story.

(Image: the cards played by the team)

The average is 6.5 (7 if rounding). In this example, none of the individual estimates match the group’s estimate. And, the group’s estimate is not a Fibonacci number.
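The “Ready…Flip” arithmetic is simple enough to sketch. Here is a minimal Python example; the six card values below are my own illustration, chosen only to be consistent with the 6.5 average stated above:

```python
# Ready...Flip: each member's card is an independent estimate; the story's
# point value is the simple average (no consensus, no re-pointing).
def point_story(estimates, round_result=False):
    average = sum(estimates) / len(estimates)
    # Round half up so 6.5 becomes 7. Python's built-in round() would give 6
    # here, because it rounds halves to the nearest even number.
    return int(average + 0.5) if round_result else average

cards = [3, 5, 5, 5, 8, 13]  # illustrative values consistent with the example
print(point_story(cards))                     # 6.5
print(point_story(cards, round_result=True))  # 7
```

Note that the group’s estimate (6.5, or 7 rounded) matches none of the individual cards, just as none of Dr. Craven’s individual experts picked the spot where the Scorpion was found.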

In some high-performing organizations where psychological safety is well established, some development teams will ask the team members who pointed the story with a 3 and a 13 (using the example above) to present their reasoning using a complex facilitation technique (time-boxed, of course). The point behind this ritual is not to re-point the story but to have team members listen to the outliers or mavericks for the purpose of identifying possible insights. Caution: this is an advanced technique.

Innovative and Resilient Organizations

Containing Chaos requires expert facilitation and will not happen overnight. However, simplifying your story pointing approach by not allowing consensus or team consultation (Condition 2) is a small step toward becoming an innovative and resilient organization, if that is what the organization desires.

Although the loss of the U.S.S. Scorpion and her 99 crew was a tragedy, the story of how the collective estimate of a group of diverse experts found the submarine on the seabed floor is a great example of the power of cognitive diversity and containing Chaos.

Brian “Ponch” Rivera is a recovering naval aviator, co-founder of AGLX Consulting, LLC, and co-creator of High-Performance Teaming™ – an evidence-based, human systems solution to rapidly build and develop networks of high-performing teams. Contact Brian at brian@aglx.consulting.

[1] Sontag, Sherry; Drew, Christopher (2000). Blind Man’s Bluff: The Untold Story of American Submarine Espionage. New York:

[2] Surowiecki, James (2005). The Wisdom of Crowds. Anchor Books. pp. xv. ISBN 0-385-72170-6.

[3] Snowden, D.  KM World 2016 Keynote.  http://cognitive-edge.com/resources/slides-and-podcasts/

[4] Ibid


Kanban vs. Scrum: Why This Argument is Futile

Kanban is a Group Tool or Group Methodology.

Scrum is a Team Framework.

Confused about the difference?  It all has to do with the definition of a team. The Agile community loves to talk about teamwork and teams but does not share a common definition of a team. Is this a problem? It is if you are trying to coach a group of people to function as a team when their work/tasks have a low level of interdependency.

Team

A distinguishable set of two or more people who interact dynamically, interdependently, and adaptively toward a common and valued goal/objective/mission [1].

Kanban

Kanban is a great group methodology that allows you to start where you are and focus on flow. However, Kanban is not time-boxed like a sprint in Scrum. Why does this matter? Look at the second part of the above definition of a team: “…a common and valued goal/objective/mission” implies time. Think back to SMART goals (by the way, I hate SMART goals). Can you have a goal to lose 10 lbs without a time-box? You can in Kanban. I’m on that diet now and I have not lost a pound.

Scrum

Scrum is a great team framework that exists in its entirety and is a container for other practices, techniques, and methodologies. You can use elements of Kanban in Scrum without renaming Scrum as long as Scrum exists in its entirety (three roles, five events, three artifacts).

Scrum is ideal for a set of two or more people who work interdependently toward a common goal. But hold on a minute, we all know that there are three roles in Scrum. So does a two-person Scrum team violate the definition of Scrum?

But there’s more…

According to research conducted by R. Wageman, placing a team framework on people whose work or tasks have a low level of interdependency is a bad idea [2]. There is danger in thinking a pull system designed for the simple domain can be applied as a framework for teams who work in the complicated, complex, and chaotic domains.

Bottom line

Kanban is a great group methodology and Scrum is a great team framework. Neither is perfect and both have flaws, but knowing where to use them is as simple as understanding what a team is and what a team is not.

 

Brian “Ponch” Rivera is a recovering naval aviator, co-founder of AGLX Consulting, LLC, and co-creator of High-Performance Teaming™ – an evidence-based, human systems solution to rapidly build and develop networks of high-performing teams. Contact Brian at brian@aglx.consulting.

References

[1] Salas, Eduardo; Stephen M. Fiore; Letsky, Michael P. (2013-06-17). Theories of Team Cognition: Cross-Disciplinary Perspectives (Applied Psychology Series) (Kindle Locations 7794-7796). Taylor and Francis. Kindle Edition.

[2] “But if managers inadvertently create hybrid groups by importing group processes into a high-performing system with individual tasks and reward systems, they may find that what they have actually brought in is a Trojan Horse.” Wageman, R. (1995). Interdependence and Group Effectiveness. Administrative Science Quarterly, 40(1), 145-180. Retrieved from http://www.jstor.org/stable/2393703


Psychological Safety is Just One Piece of the Larger Puzzle – Where Google’s Project Aristotle Missed the Bigger Picture

Google recently released the results of a five-year study, known as Project Aristotle, through which they determined that the common attribute – or what Google termed “key dynamic” – which successful teams exhibited was something known as psychological safety.

Unfortunately, Google’s expensive, five-year foray into teamwork is a great example of what can happen when technologists undertake studies in team and cognitive psychology, human interaction, sociology, and complex adaptive systems (among other disciplines), and base their findings entirely on self-collected metrics and their own statistical analyses of that data. What Google found was that psychological safety is a statistically significant attribute (key dynamic) associated with high-performing teams, but unfortunately this doesn’t tell the full story or help other teams or organizations to understand what they need to do to create those same conditions in their own environments.

I certainly do not want to impugn or belittle the considerable efforts or discipline of the team conducting Google’s study. However, I might have suggested beginning with a review of existing research in some of those disciplines (teamwork, sociology, human behavior, cognitive psychology, etc.) relating to team performance and teamwork. As it turns out, there is quite a lot.

In fact, there is so much research that today there are meta-studies covering these topics. Among other critical areas not studied by Google, team performance is directly tied to the number and quality of social interactions between team members [1], the existence of Shared Mental Models within the team, shared expectations regarding behavioral norms (what we call Known Stable Social Interfaces), and organizational issues such as the leadership and management culture.

This isn’t to imply that psychological safety isn’t important; indeed it is. Amy Edmondson, in her book Teaming, points out that psychological safety is of critical importance to effective teams:

“An environment of psychological safety is an essential element of organizations that succeed in today’s complex and uncertain world. The term psychological safety describes a climate in which people feel free to express relevant thoughts and feelings without fear of being penalized…In corporations, hospitals, and government agencies, my research has found that interpersonal fear frequently gives rise to poor decisions and incomplete execution.” [2]

Psychological safety is important. Yet psychological safety is not a team skill. For example, we can teach a team and individual team members to communicate more effectively using certain techniques and behaviors. Similarly, we can train a team to communicate in more assertive ways. However, we cannot train teams to simply “be psychologically safe.”

As Edmondson states in the quote above, “psychological safety is an essential element of organizations…” (emphasis added) – it isn’t a team skill or behavior.

This critical fact is where so much of the literature, and Google’s study in particular, comes up short. Knowing that successful teams operate in an environment of psychological safety does not enable leadership, management, or coaches to build psychologically safe environments any more than looking at a painting enables me to paint a replica of it.

The real challenge is determining how one can mindfully, purposefully build a psychologically safe environment within an organization. To answer this question, we need to first understand what, exactly, psychological safety is. I define the term slightly differently than many textbook definitions:

Psychological safety is the existence of an environment in which individuals proactively exercise assertiveness, state opinions, challenge assumptions, provide feedback to teammates and leadership, while openly sharing mistakes and failures.

Many traditional definitions of psychological safety make use of the term “feel,” as does Edmondson: “The term psychological safety describes a climate in which people feel free to express relevant thoughts and feelings. Although it sounds simple, the ability to seek help and tolerate mistakes while colleagues watch can be unexpectedly difficult.” [3] (Emphasis added.)

However, I purposefully make use of the word “exercise.” Although this may seem a semantic difference at first glance, since we’re concerned with factors such as team performance, quality, and effectiveness, a psychologically safe environment in which no one actually admits mistakes or states opinions (although they feel free to) is undesirable. We need not only the environment, but also the actualization of the skills and behaviors necessary to realize the environment’s benefits.

How to Build Psychological Safety in Teams and Organizations

Although I’ve only glossed over the considerable amount of theory and research, I also don’t want to try to provide a Reader’s Digest version of decades of knowledge here. I’d rather get right to the point. What do leaders, managers, coaches, and teams need to do to purposefully build psychological safety in their environment, today?

First, significantly reduce the focus on processes and frameworks. The existence of a specific environment or culture is largely independent of the business process employed in an organization’s daily operations. Some frameworks and methodologies are structured to support the types of psychologically safe environments necessary to enhance team performance and effectiveness, but they do not guarantee it.

As Lyssa Adkins, author of Coaching Agile Teams, stated in her Closing Keynote at the 2016 Global Scrum Gathering in Orlando, Florida:

“I thought we would have transformed the world of work by now. People, we’ve been at this for fifteen years…Transforming the world of work is literally a human development challenge. So we are awesome, we are so good in this community at process and business agility. We’ve got that handled people, and we’ve had that handled for a while. What we’re not so good at, what I want to see us become just as great at, is human systems agility. Because that’s the other piece of it…You know, those organizations – they’re all made of humans, aren’t they? So, human systems agility is a piece of business agility. Not the only one, but an important one; and one that we’re not as good at.” [4]

Business processes and frameworks, including Agile systems such as Scrum and Lean, can only help create a structure capable of supporting the ways in which teams and individuals need to work to reach the highest levels of performance, effectiveness, and innovation. What those teams – from executive to functional – need, is a shared mental model, a Known Stable Social Interface for interacting and working collaboratively together, and which enables them to develop and exercise the interpersonal skills and behaviors necessary for psychological safety.

Leadership and management must initiate the formation of a psychologically safe environment by welcoming opinions (including dissent) on goals and strategies from peers and subordinates. People in management or leadership roles who fear questioning or are more focused on their ideas than on the right ideas need to either learn, adapt, and grow, or move on. They are obstacles, roadblocks, and hindrances to organizational effectiveness, performance, and innovation.

Steps leadership and management can take to start to create psychological safety:

  • Establish and clearly communicate expectations
  • Receive training themselves
  • Provide training for their employees
  • Ensure follow-through with dedicated coaching and regular check-ins

Then, learn about and employ the following behaviors and skills:

  • Frame mistakes and errors as learning and opportunities for improvement.
  • Encourage lessons learned to be shared instead of hidden, focused toward helping others to learn, grow, and avoid similar mistakes.
  • Embrace the value of failure for learning by admitting to mistakes they’ve made themselves.
  • Understand the difference between failures and subversion, sabotage, incompetence, and lack of ability.
  • Learn about the interpersonal, social skills which power team effectiveness, including Leadership, Communication, Assertiveness, Situational Awareness, Goal Analysis, and Decision-Making. Those skills include the explicit behaviors necessary to build psychological safety in the organizational environment.

“If I focus on using your mistake as a way to blame and punish you, I’ll never hear about your mistakes until a catastrophe ensues. If I focus on using your mistake as a way for us to learn and improve collectively, then our entire process, system, and business will be better after every mistake.”

Individuals and teams can also help to build and enable psychologically safe environments:

  • Seek training about and learn the interpersonal, social skills which power team effectiveness, including Leadership, Communication, Assertiveness, Goal Analysis, Decision-Making, Situational Awareness, Agility, and Empathy.
  • Advocate for and build a climate in which learning and improvement is possible through open and honest analysis of failures / mistakes.
  • Frame and focus discussions on the plans, strategies, and ideas supporting what is right, not who is right.
  • Assume responsibility for their own psychological safety and proactively help build it as a fundamental attribute of their teams’ work environment.

Psychological safety is a key organizational characteristic which is critical to the growth of high-performing teams. However, it isn’t a holy grail and most organizations, coaches, and consultants do not know how to purposefully create a psychologically safe environment, nor why it makes sense to do so. Yet mindfully organizing to build high-performing teams is not only possible, it is something which many organizations have been doing for decades.

 

Chris Alexander is a former F-14D Flight Officer, the co-founder of AGLX Consulting, High-Performance Teaming™ coach, Agile coach, and Scrum Master, and has a passion for working with high-performing teams. Learn more at https://www.aglx.consulting.

References:

  1. Pentland, Alex (2014-01-30). Social Physics: How Good Ideas Spread – The Lessons from a New Science (p. 90). Penguin Publishing Group. Kindle Edition.
  2. Edmondson, Amy C. (2012-03-16). Teaming: How Organizations Learn, Innovate, and Compete in the Knowledge Economy (Kindle Locations 1474-76, 2139-40). Wiley. Kindle Edition.
  3. Ibid, 2141-2144.
  4. https://www.youtube.com/watch?v=LDKYehwuirw


Agile is Dead! The Rise of High-Performing Teams: 10 Lessons from Fighter Aviation

Software and hardware industry leaders are leveraging the lessons from fighter aviation to help their businesses navigate the speed of change and thrive in today’s complex and hostile environment. The emergence of the Observe-Orient-Decide-Act (OODA) Loop—an empathy-based decision cycle created by John Boyd (fighter pilot)—in today’s business lexicon suggests that executives, academia, and the Agile community recognize that fighter pilots know something about agility.

For example, Eric Ries, author of The Lean Startup and entrepreneur, attributes the idea of the Build-Measure-Learn feedback loop to John Boyd’s OODA Loop [1]. At the core of Steve Blank’s Customer Development model and Pivot found in his book, The Four Steps to the Epiphany, is once again OODA [2]. In his new book, Scrum: The Art of Doing Twice the Work in Half the Time, Dr. Jeff Sutherland, a former fighter pilot and the co-creator of Scrum, connects the origins of Scrum to hardware manufacturing and fighter aviation (John Boyd’s OODA Loop) [3]. Conduct a quick Google book search on “Cyber Security OODA” and you will find over 760 results.

This fighter pilot “mindset” behind today’s agile innovation frameworks and cyber security approaches is being delivered to organizations by coaches and consultants who may have watched Top Gun once or twice but more than likely have never been part of a high-performing team [4].

So What?

According to Laszlo Bock, “Having practitioners teaching is far more effective than listening to academics, professional trainers, or consultants. Academics and professional trainers tend to have theoretical knowledge. They know how things ought to work, but haven’t lived them [5].” Unfortunately, most agile consultants’ toolboxes contain more processes and tools than human interaction know-how. Why? They have not lived what they coach. And this is what is killing Agile.

Teaming Lessons from Fighter Aviation

To survive and thrive in their complex environment, fighter pilots learn to operate as a network of teams using the cognitive and social skills designed by industrial-organizational psychologists—there is actually real science behind building effective teams. It is the combination of inspect-and-adapt frameworks with human interaction skills developed out of the science of teamwork that ultimately builds a high-performance culture and moves organizational structures from traditional, functional models toward interconnected, flexible teams.

10 Reasons Why Your Next Agile High-Performance Teaming Coach Should Have a Fighter Aviation Background

OODA (Observe-Orient-Decide-Act). According to Jeff Sutherland, “Fighter pilots have John Boyd’s OODA Loop burned into muscle memory. They know what agility really means and can teach it uncompromisingly to others.”

Empathy. A 1 v 1 dogfight is an exercise in empathy, according to Geoff Colvin, the award-winning thinker, author, broadcaster, and speaker on today’s most significant trends in business. In his 2015 book, Humans Are Underrated: What High Achievers Know that Brilliant Machines Never Will, Geoff pens, “Even a fighter jet dogfight, in which neither pilot would ever speak to or even see the other, was above all a human interaction. Few people would call it an exercise in empathy, but that’s what it was—discerning what was in the mind of someone else and responding appropriately. Winning required getting really good at it [6].” Interestingly, empathy is baked into Boyd’s OODA Loop.

Debriefing (Retrospective). The most important ceremony in any continuous improvement process is the retrospective (debrief). Your fleet average fighter pilot has more than 1,000 debriefs under their belt before they leave their first tour at the five-year mark of service. In Agile iteration years, that is equal to 19 years of experience [7]. Moreover, when compared to other retrospective or debriefing techniques, “Debriefing with fighter pilot techniques offers more ‘bang for the buck’ in terms of learning value [8].” Why is this? There are no games in fighter pilot debriefs, no happy or sad faces to put up on the white board – just real human interactions, face-to-face conversations that focus on what’s right, not who’s right. Fighter pilots learn early that the key to an effective retrospective is establishing a psychologically safe environment.

Psychological Safety. Psychological safety “describes a climate in which people feel free to express relevant thoughts and feelings [9].” Fighter pilots learn to master this leadership skill the day they step into their first debrief, where they observe their flight instructor stand up in front of the team, admit her own shortcomings (display fallibility), ask questions, and use direct language. Interestingly, according to Google’s Project Aristotle, the most important characteristic of a high-performing team is psychological safety [10]. Great job, Google!

Teaming (Mindset and Practice of Teamwork) [11]. Although not ideal, fighter pilots often find themselves in “pickup games” where they find a wingman of opportunity from another squadron, service, or country—even during combat operations. Knowing how to coordinate and collaborate without the benefit of operating as a stable team is a skill fighter pilots develop by building nontechnical, known stable interfaces. These stable interfaces include a common language; shared mental models of planning, briefing, and debriefing; and alignment to shared, common goals. Yes, you do not need stable teams, and teams do not need to be co-located, if you have known stable interfaces of human interaction.

Empirical Process. The engine of agility is the empirical process, and in tactical aviation we use a simple plan-brief-execute-debrief cycle that, when coupled with proven human interaction skills, builds a resilient, learning culture. The inspect-and-adapt execution rhythm is the same for every mission: whether it was a flight across the country or a 40-plane strike into enemy territory, we always planned, briefed, executed the mission, and held a debrief. There is no room for skipping steps—no exceptions.

Adaptability/Flexibility. The ability to alter a course of action based on new information, to maintain constructive behavior under pressure, and to adapt to internal and external environmental changes is what fighter pilots call adaptability or flexibility. Every tactical aviator who has strapped on a $50M aircraft knows that flexibility is the key to airpower. Not every flight goes according to plan, and sometimes the enemy gets a vote, disrupting the plan to the point where the mission looks like a pickup game.

Agility. Agility is adaptability with a timescale.

Practical Servant Leadership Experience. Fighter pilots have practical experience operating in complex environments and are recognized as servant leaders. But don’t take my word for it; watch this video by Simon Sinek to learn more.

Fun. Agility is about having fun. Two of my favorite sayings from my time in the cockpit are “You cannot plan fun” and “If you are not having fun, you are not doing it right.” If your organization is truly Agile, then you should be having fun.

So, who’s coaching your teams?

Brian “Ponch” Rivera is a recovering naval aviator, co-founder of AGLX Consulting, LLC, and co-creator of High-Performance Teaming™, an evidence-based approach to rapidly build and develop high-performing teams.

[1] “The idea of the Build-Measure-Learn feedback loop owes a lot to ideas from maneuver warfare, especially John Boyd’s OODA (Observe-Orient-Decide-Act) Loop.” Ries, E. The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. (Crown Publishing, 2011)

[2] “…Customer Development model with its iterative loops/pivots may sound like a new idea for entrepreneurs, it shares many features with U.S. warfighting strategy known as the “OODA Loop” articulated by John Boyd.” Blank, S. The Four Steps to the Epiphany. Successful Strategies for products that win. (2013)

[3] “In the book I talk about the origins of Scrum in the Toyota Production Systems and the OODA loop of combat aviation.” Sutherland, J. Scrum: The Art of Doing Twice the Work in Half the Time. New York. Crown Business (2014).

[4] I do not recommend the movie Top Gun as an Agile Training Resource.

[5] Block, L. Work Rules! That will transform how you live and lead. (Hachette Book Group, 2015).

[6] Geoff Colvin. Humans are Underrated: What high achievers know that brilliant machines never will, 96, (Portfolio/Penguin, 2015).

[7] Assuming two teams with iteration length of two weeks. And 100% retrospective execution.

[8] McGreevy, J. M., MD, FACSS, & Otten, T. D., BS. Briefing and Debriefing in the Operating Room Using Fighter Pilot Crew Resource Management. (2007, July).

[9] Edmondson, A.C. Teaming. How organizations Learn, Innovate, and Compete in the Knowledge Economy. Wiley. (2012)

[10] Duhigg, C. Smarter Faster Better: The Secrets to Being Productive in Life and Business. Random House. (2016).

[11] Edmondson, A.C. Teaming. How organizations Learn, Innovate, and Compete in the Knowledge Economy. Wiley. (2012)

Share This:

OODA: The Mindset of Scrum

Recently, a trusted source reported that the Oracle of Scrum, Jeff Sutherland, has proclaimed that OODA is the Mindset of Scrum.  A few weeks ago I tried my best to explain this “Mindset” when I co-coached with Joe Justice during his Scrum in Hardware – Train the Trainer course. It was a daunting task considering I was surrounded by some of the world’s finest Scrum Trainers and Agile Coaches and was asked to deliver the “Origins of Scrum” using Scrum, Inc.’s slide deck. Not easy.

Knowing that much has been written about the connection between Scrum and OODA, including Steve Adolph’s 2006 paper, What Lessons Can the Agile Community Learn from A Maverick Fighter Pilot, I decided to spend my limited presentation time focused on two lesser-known features of OODA: empathy and fast transients. Before rolling in on these two features, here is a quick-and-dirty introduction to OODA and Scrum.

OODA and Scrum

Over the skies of Korea, years before the flight plans of Jeff Sutherland and his RF-4C Weapons System Operator (WSO) were constantly disrupted by North Vietnamese gunfire, SAMs, and fighters, John “40-Second” Boyd was trying to understand how a seemingly inferior aircraft, the American-built F-86 Sabre, achieved a kill ratio of 10:1 over the nimbler, more agile MiG-15. As an F-86 pilot who regularly engaged with MiG-15s, Boyd realized that it was the F-86’s bubble canopy that provided American pilots better situational awareness (the ability to better observe, and therefore process, reality) than MiG-15 pilots had. It was from fighter combat, the 1 v 1 dogfight (a socio-technical system versus a socio-technical system), that the Observe-Orient-Decide-Act (OODA) Loop was born.

According to Jeff Sutherland, Scrum’s origins are in OODA and hardware manufacturing, not software. In fact, those of you who are Lean Startup practitioners may want to adopt OODA as your mindset as well, considering the Lean Startup is based on OODA. Similarly, cyber security borrows from Boyd’s OODA Loop, as do several product design approaches. Back to Scrum.

Scrum is widely practiced by software development teams but is applicable across the routine-complexity-innovation continuum. For example, in the past two weeks, I coached Scrum to a world-class surgical center, an aerospace giant’s flight test team, and a geographic combatant command (GCC). The best place to learn about Scrum is the 16-page Scrum Guide. If you happen to fly fighter or commercial jets, then it should not surprise you that CRM is applicable to coaching Scrum…but that’s another story.

OODA: The Mindset…

As I had limited time during my “Origins of Scrum” presentation, I decided to focus on empathy and fast transients, two lesser-known characteristics of OODA.

Empathy: Get inside the mind of your customer

A 1 v 1 dogfight is an exercise in empathy, according to the award-winning thinker, author, broadcaster, and speaker on today’s most significant trends in business, Geoff Colvin. In his 2015 book, Humans Are Underrated: What High Achievers Know that Brilliant Machines Never Will, Geoff proposes that “Even a fighter jet dogfight, in which neither pilot would ever speak to or even see the other, was above all a human interaction. Few people would call it an exercise in empathy, but that’s what it was—discerning what was in the mind of someone else and responding appropriately. Winning required getting really good at it.” (Page 96) In his 1995 briefing, The Essence of Winning and Losing, John R. Boyd points out that analysis and synthesis are dependent on implicit cross-referencing across different domains including empathy.

Fast Transients: The organization that can handle the quickest rate of change survives

The ability of your organization to transition from one state to another faster than your competition will ensure your organization’s survival. Moreover, “fast transients” will bring confusion and disorder to your competition as they under- or over-react to your activities.

Orientation is Schwerpunkt (focal point)

Orientation is the “genetic code” of an organism and cognitive diversity is key to creating innovative solutions to complex problems.

Focus on Feedback Loops

One feature of complex adaptive systems is feedback loops. Learn how to provide feedback. Effective retrospectives are a great start.

Leverage Uncertainty

We live in a Volatile, Uncertain, Complex and Ambiguous (VUCA) world.

Agility is Adaptation with a Time Scale

Adaptability is a cognitive skill found in High-Performance Teaming™ and Crew Resource Management. Agility is adaptability with a time scale and that time scale is rapidly shrinking.

Non-Linear Systems Have Inherently Identical Structures

When looking for solutions to problems, look outside your industry. The future already exists.

I look forward to your feedback and comments.

Brian “Ponch” Rivera is a recovering naval aviator, co-founder of AGLX Consulting, LLC, and co-creator of High-Performance Teaming™, an evidence-based approach to rapidly build and develop high-performing teams.

Share This:

17 Ways to Stop Your Organization’s Agile Transformation

In 1944, the Office of Strategic Services (OSS), now known as the Central Intelligence Agency (CIA), published the Simple Sabotage Field Manual which provides organizational saboteurs—let’s call them managers and employees who are on the wrong bus—a guide on how to interfere with organizational development and transformation.

As an Agile and High-Performance Teaming™ Coach, I have observed the following 17 tactics found in the Simple Sabotage Field Manual skillfully employed by managers and employees who clearly do not want their organizations to survive and thrive in today’s knowledge economy:

  1. When training new workers, give incomplete or misleading instructions.
  2. To lower morale and with it, productivity, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers, complain unjustly about their work.
  3. Hold [meetings] when there is more critical work to be done.
  4. Demand [documentation].
  5. “Misunderstand” [documentation]. Ask endless questions or engage in long correspondence about such [documents]. Quibble over them when you can.
  6. Make “Speeches.” Talk as frequently as possible and at great lengths.
  7. Bring up irrelevant issues as frequently as possible.
  8. Insist on doing everything through “channels” [and email].
  9. When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committees as large as possible–never less than five.
  10. Spread inside rumors that sound like inside dope.
  11. Contrive as many interruptions to your work [and team] as you can.
  12. Do your work poorly and blame it on bad tools, machinery, or equipment.
  13. Never pass on your skills and experience to anyone.
  14. If possible, join or help organize a group for presenting employee problems to the management. See that procedures adopted are as inconvenient as possible for the management, involving the presence of large number of employees at each presentation, entailing more than one meeting for each grievance, bringing up problems which are largely imaginary, and so on.
  15. Give lengthy and incomprehensible explanations when questioned.
  16. Act stupid.
  17. Be as irritable and quarrelsome as possible without getting yourself into trouble.

Brian “Ponch” Rivera is a recovering naval aviator, co-founder of AGLX Consulting, LLC, and co-creator of High-Performance Teaming™, an evidence-based approach to rapidly build and develop high-performing teams.

Share This:

Risk Management and Error Trapping in Software and Hardware Development, Part 3

This is part 3 of a 3-part piece on risk management and error trapping in software and hardware development. The first post is located here (and should be read first to provide context on the content below), and part 2 is located here.

Root Cause Analysis and Process Improvement

Once a bug has been discovered and risk analysis / decision-making has been completed (see below), a retrospective-style analysis on the circumstances surrounding the engineering practices which failed to effectively trap the bug completes the cycle.

The purpose of the retrospective is not to assign blame or find fault, but rather to understand the cause of the failure to trap the bug, inspect the layers of the system, and determine if any additional layers, procedures, or process changes could effectively improve collective engineering surety and help to prevent future bugs emerging from similar causes.

Methodology

  1. Review sequence of events that led to the anomaly / bug.
  2. Determine root cause.
  3. Map the root cause to our defense-in-depth (Swiss cheese) model.
  4. Decide if there are remediation efforts or improvements which would be effective in supporting or restructuring the system to increase its effectiveness at error trapping.
  5. Implement any changes identified, sharing them publicly to ensure everyone understands the changes and the reasoning behind them.
  6. Monitor the changes, adjusting as necessary.

Review sequence of events

With appropriate representatives from engineering teams, certification, hardware, operations, customer success, etc., review the discovery path which led to finding the bug. The point is to understand the processes used, which ones worked, and which let the bug pass through.

Determine root cause and analyze the optimum layers for improvement

What caused the bug? There are many enablers and contributing factors, but typically only one or two root causes. The root cause is one or a possible combination of Organization, Communication, Knowledge, Experience, Discipline, Teamwork, or Leadership.

  • Organization – typically latent, organizational root causes include things like existing processes, tools, practices, habits, customs, etc., which the company or organization as a whole employs in carrying out its work.
  • Communication – a failure to convey necessary, important, or vital information to or among an individual or team who required it for the successful accomplishment of their work.
  • Knowledge – an individual, team, or organization did not possess the knowledge necessary to succeed. This is the root cause for knowledge-based errors.
  • Experience – an individual, team, or organization did not possess the experience necessary to successfully accomplish a task (as opposed to the knowledge about what to do). Experience is often a root cause in skill-based errors of omission.
  • Discipline – an individual, team, or organization did not possess the discipline necessary to apply their knowledge and experience to solving a problem. Discipline is often a root cause in skill-based errors of commission.
  • Teamwork – individuals, possibly at multiple levels, failed to work together as a team, support one another, and check one another against errors. Additional root causes may be knowledge, experience, communication, or discipline.
  • Leadership – less often seen at smaller organizations, a Leadership failure is typically a root cause when a leader and/or manager has not effectively communicated expectations or empowered execution regarding those expectations.

Map the root cause to the layer(s) which should have trapped the error

Given the root cause analysis, determine where in the system (which layer or layers) the bug should have been trapped. Often there will be multiple locations at which the bug should or could have been trapped, however the best location to identify is the one which most closely corresponds to the root cause of the bug. Consideration should also be given to timeliness. The earlier an error can be caught or prevented (trapped), the less costly it is in terms of both time (to find, fix, and eliminate the bug) and effort (a bug in production requires more effort from more people than a developer discovering a bug while checking their own unit test).

While we should seek to apply fixes at the locations best suited for them, the earliest point at which a bug could have been caught and prevented will often be the optimum place to improve the system.

For example, if a bug was traced back to a team’s discipline in writing and using tests (root cause: discipline and experience), then it would map to layers dealing with testing practices (TDD/ATDD), pair programming, acceptance criteria, definition of “Done,” etc. Those layers to which the team can most readily apply improvements and which will trap the error sooner rather than later should be the focus for improvement efforts.
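To make the mapping step concrete, the root-cause-to-layer relationship can be sketched as a simple lookup. This is an illustrative sketch only; the cause names and layer names below are examples drawn from this series, not a fixed taxonomy:

```javascript
// Illustrative mapping from root causes to the layers most likely to trap
// bugs arising from them (names are examples, not a fixed taxonomy).
const layersByRootCause = {
  discipline:    ["TDD/ATDD practices", "pair programming", "definition of Done"],
  experience:    ["pair programming", "code reviews", "onboarding"],
  knowledge:     ["onboarding", "design documents", "acceptance criteria"],
  communication: ["information radiators", "code reviews"],
};

// Given the root causes from the analysis, collect candidate layers to
// improve, de-duplicating layers shared across causes.
function candidateLayers(rootCauses) {
  const layers = new Set();
  for (const cause of rootCauses) {
    for (const layer of layersByRootCause[cause] || []) {
      layers.add(layer);
    }
  }
  return [...layers];
}

candidateLayers(["discipline", "experience"]);
// yields TDD/ATDD practices, pair programming, definition of Done,
// code reviews, and onboarding as candidate layers to improve
```

The de-duplication matters: layers that appear under several root causes (like pair programming above) are often the highest-leverage places to improve first.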

Decide on improvements to increase system effectiveness

Based on the knowledge gained through analyzing and mapping the root cause, decisions are made on how to improve the effectiveness of the system at the layers identified. Using the testing example above, a team could decide that they need to adjust their definition of Done to include listing which tests a story has been tested against and their pass/fail conditions.

Implement the changes identified, and monitor them for effectiveness.

Risk Analysis

Should our preventative measures fail to stop a bug from escaping into a production environment, an analysis of the level of risk needs to be explicitly completed. (This is often done, but in an implicit way.) The analysis of the level of risk derives from two areas.

Risk Severity – the degree of impact the bug can be expected to have to the data, operations, or functionality of affected parties (the company, vendors, customers, etc.).

Blocker – A bug so bad, or a feature so important, that we would not ship the next release until it is fixed/completed. It could also signify a bug that is currently impacting a customer’s operations, or one that is blocking development.
Critical – A bug that needs to be resolved ASAP, but for which we wouldn’t stop everything. Bugs in this category are not impacting operations (a customer’s, or ours), but they are challenging enough to warrant attention.
Major – Best judgement should be used to determine how this stacks against other work. The bug is serious enough that it needs to be resolved, but the value of other work and timing should be considered. If a bug sits in Major for too long, its categorization should be reviewed and either upgraded or downgraded.
Minor – A bug that is known, but which we have explicitly de-prioritized. Such a bug will be fixed as time allows.
Trivial – We should seriously consider closing bugs at this level. At best, these should be put into the “Long Tail” for tracking.

Risk Probability – the likelihood, expressed as a percentage, that those potentially affected by the bug will actually experience it (i.e., always; only if they have a power outage; or only if the sun aligns with Jupiter during the slackwater phase of a diurnal tide in the northeastern hemisphere between 44 and 45 degrees latitude).

Definite – 100%; the issue will occur in every case.
Probable – 60-99%; the issue will occur in most cases.
Possible – 30-60%; a coin-flip; the issue may or may not occur.
Unlikely – 2-30%; the issue will occur in a small minority of cases.
Won’t – ~1%; occurrence of the issue will be exceptionally rare.

Given Risk Severity and Probability, the risk can be assessed according to the following matrix and assigned a Risk Assessment Code (RAC).

Risk Assessment Matrix (rows: Severity; columns: Probability)

           Definite  Probable  Possible  Unlikely  Won’t
Blocker       1         1         1         2        3
Critical      1         1         2         2        3
Major         2         2         2         3        4
Minor         3         3         3         4        5
Trivial       3         4         4         5        5

Risk Assessment Codes
1 – Strategic     2 – Significant     3 – Moderate     4 – Low     5 – Negligible
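The matrix lends itself to a direct table lookup. The sketch below encodes the matrix above as data; the function and variable names are my own, for illustration, not part of any formal ORM tooling:

```javascript
// The Risk Assessment Matrix above, encoded as a lookup table.
// Rows are Severity; columns follow the Probability order below.
const PROBABILITIES = ["Definite", "Probable", "Possible", "Unlikely", "Won't"];
const MATRIX = {
  Blocker:  [1, 1, 1, 2, 3],
  Critical: [1, 1, 2, 2, 3],
  Major:    [2, 2, 2, 3, 4],
  Minor:    [3, 3, 3, 4, 5],
  Trivial:  [3, 4, 4, 5, 5],
};

// Return the Risk Assessment Code (1-5) for a severity/probability pair.
function riskAssessmentCode(severity, probability) {
  const col = PROBABILITIES.indexOf(probability);
  if (col === -1 || !(severity in MATRIX)) {
    throw new Error("unknown severity or probability: " + severity + "/" + probability);
  }
  return MATRIX[severity][col];
}

riskAssessmentCode("Critical", "Possible"); // 2 (Significant), per the matrix
```

Encoding the matrix as data rather than nested conditionals keeps the lookup auditable: the table in code can be compared cell-by-cell against the published matrix.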

The Risk Assessment Codes are a significant factor in Risk decision-making.

  1. Strategic – the risk to the business or customers is significant enough that its realization could threaten operations, basic functioning, and/or professional reputation to the point that the basic survival of the business could be in jeopardy. As Arnold said in Predator: “We make a stand now, or there will be nobody left to go to the chopper!”
  2. Significant – the risk poses considerable, but not life-threatening, challenges for the business or its customers. If left unchecked, these risks may elevate to strategic levels.
  3. Moderate – the risk to business operations, continuity, and/or reputation is significant enough to warrant consideration against other business priorities and issues, but not significant enough to trigger higher responses.
  4. Low – the risk to the business is not significant enough to warrant special consideration of the risk against other priorities. Issues should be dealt with in routine, predictable, and business-as-usual ways.
  5. Negligible – the risk to the business is not significant enough to warrant further consideration except in exceptional circumstances (i.e., we literally have nothing better to do).

Risk Decision

The risk decision is the point at which a decision is made about the risk. Typically, risk decisions take the form of:

  • Accept – accept the risk as it is and do not mitigate or take additional steps.
  • Delay – for less critical issues or dependencies, a decision about whether to accept or mitigate a risk may be delayed until additional information, research, or steps are completed.
  • Mitigate – establish a mitigation strategy and deal with the risk.

For risk mitigation, feasible Courses of Action (CoAs) should be developed to assist in making the mitigation plan. These potential actions comprise the mitigation and/or reaction plan. Specifically, given a bug’s risk severity, probability, and resulting RAC, the courses of action are the possible mitigation solutions for the risk. Examples include:

— Pre-release —

  • Apply software fix / patch
  • Code refactor
  • Code rewrite
  • Release without the code integrated (re-build)
  • Hold the release and await code fix
  • Cancel the release

— In production —

  • Add to normal backlog and prioritize with normal workflow
  • Pull / create a team to triage and fix
  • Swarm / mob multiple teams on fix
  • Pull back / recall release
  • Release an additional fix as a micro-upgrade

For all risk decisions, those decisions should be recorded and those which remain active need to be tracked. There are many methods available for logging and tracking risk decisions, from spreadsheets to documentation to support tickets. There are entire software platforms expressly designed to track and monitor risk status and record decisions taken (or not) about risks.

Decisions to delay risk mitigations are the most important to track: they require future action, and at the speed most businesses move today, there is a real risk of losing track of risk delay decisions. Therefore, a Risk Log or Review should be used to routinely review the status of pending risk decisions and reevaluate them. Risk changes constantly, and risks may significantly change in severity and probability overnight. By reviewing risk decisions regularly, leadership can simultaneously ensure both that emerging risks are mitigated and that effort is not wasted unnecessarily (as when effort is put against a risk which has significantly declined in impact due to changes external to the business).
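As a sketch of what a minimal risk log might hold, and how delayed decisions could be surfaced at each review, consider the following. The field names and entries are illustrative only, not a prescribed schema:

```javascript
// Minimal risk-log entry; field names and sample data are illustrative only.
function makeRiskEntry(id, summary, severity, probability, rac, decision) {
  return {
    id,
    summary,
    severity,
    probability,
    rac,        // Risk Assessment Code, 1 (Strategic) to 5 (Negligible)
    decision,   // "accept" | "delay" | "mitigate"
    reviewed: false,
  };
}

// Delayed decisions are the easiest to lose track of, so surface any
// that have not yet been reviewed in the current cycle.
function pendingReviews(log) {
  return log.filter((entry) => entry.decision === "delay" && !entry.reviewed);
}

const riskLog = [
  makeRiskEntry("BUG-101", "Intermittent cert failure", "Major", "Possible", 2, "delay"),
  makeRiskEntry("BUG-102", "Settings page typo", "Trivial", "Definite", 3, "accept"),
];

pendingReviews(riskLog).map((entry) => entry.id); // ["BUG-101"]
```

Whether this lives in a spreadsheet, a ticket system, or dedicated risk software, the essential behavior is the same: every delayed decision remains queryable until someone explicitly revisits it.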

Conclusion

I hope you’ve enjoyed this 3-part series. Risk management and error trapping is a complicated and – at times – complex topic. There are many ways to approach these types of systems and many variations on the defense-in-depth model.

The specific implementation your business or organization chooses to adopt should reflect the reality and environment in which you operate, but the basic framework has proven useful across many domains and industries, and is directly adapted from Operational Risk Management as I used to practice and teach it in the military.

Understanding the root cause of your errors, where they slipped through your system, and how to improve your system’s resiliency and robustness are critical skills you need to develop if you do not already have them. A mindful, purposeful approach to risk decision-making throughout your organization is also critical to your business operations.

Good luck!

 

Chris Alexander is a former U.S. Naval Officer who was an F-14 Tomcat flight officer and instructor. He is Co-Founder and Executive Team Member of AGLX Consulting, creators of the High-Performance Teaming™ model, a Scrum Trainer, Scrum Master, and Agile Coach.

Share This:

Risk Management and Error Trapping in Software and Hardware Development, Part 2

This is part 2 of a 3-part piece on risk management and error trapping in software and hardware development. The first post is located here (and should be read first to provide context on the content below).

Error Causality, Detection & Prevention

Errors occurring during software and hardware development (resulting in bugs) can be classified into two broad categories: (1) skill-based errors, and (2) knowledge-based errors.

Skill-based errors

Skill-based errors are those errors which emerge through the application of knowledge and experience. They are differentiated from knowledge-based errors in that they arise not from a lack of knowing what to do, but instead from either misapplication or failure to apply what is known. The two types of skill-based errors are errors of commission, and errors of omission.

Errors of commission are the misapplication of previously learned behavior or knowledge. To use a rock-climbing metaphor, if I tied my climbing rope to my harness with the wrong type of knot, I would be committing an error of commission. I know I need a knot, I know which knot to use, and I know how to tie the correct knot – I simply did not do it correctly. In software development, one example of an error of commission might be an engineer providing the wrong variable to a function call, as in:

var x = 1;        // variable to call
var y = false;    // variable not to call

function callVariable(x) {
  return x;
}

callVariable(y);  // should have provided “x” but gave “y” instead

Errors of omission, by contrast, are the failure to apply knowledge or experience (previously learned behaviors) to the given problem. In my climbing example, not tying the climbing rope to my harness (at all) before beginning to climb is an error of omission. (Don’t laugh – this actually happens.) In software development, an example of an error of omission would be an engineer forgetting to provide a variable to a function call (or forgetting to add the function call at all), as in:

var x = 1;              // variable to call
var y = false;          // variable not to call

function callVariable(x) {
  return x;
}

callVariable();   // should have provided “x” but left empty
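Both example errors are exactly the kind that an automated test (an active error-trapping layer) can catch before the code leaves the development environment. A minimal sketch, using a hand-rolled assertion helper for illustration:

```javascript
// A minimal automated check that traps both example errors: the function
// under test should return 1 only when given the correct variable.
function callVariable(x) {
  return x;
}

// Illustrative assertion helper; any test framework provides an equivalent.
function assertEqual(actual, expected, message) {
  if (actual !== expected) {
    throw new Error(message + ": expected " + expected + ", got " + actual);
  }
}

var x = 1;      // variable to call
var y = false;  // variable not to call

assertEqual(callVariable(x), 1, "correct call returns the variable");

// The commission error (passing y) and the omission error (passing nothing)
// both produce values other than 1, so a test like this flags them.
assertEqual(callVariable(y) === 1, false, "commission error is detectable");
assertEqual(callVariable() === 1, false, "omission error is detectable");
```

The point is not the helper itself but the layer: a test that pins the expected value traps both the wrong-variable and missing-variable mistakes the moment they are made.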

Knowledge-based errors

Knowledge-based errors, in contrast to skill-based errors, arise from the failure to know the correct behavior to apply (if any). An example of a knowledge-based error would be a developer checking in code without any unit, integration, or system tests. If the developer is new and has never been indoctrinated to the requirements for code check-in as including having written and run a suite of automated unit, integration, and system tests, this is an error caused by a lack of knowledge (as opposed to omission, where the developer had been informed of the need to write and run the tests but failed to do so).

Defense-in-depth, the Swiss cheese model, bug prevention and detection

Prevention comprises the systems and processes employed to trap bugs and stop them from getting through development environments and into certification and/or production environments (depending on your software / hardware release process). In envisioning our Swiss cheese model, we need to understand that the layers include both latent and active types of error traps, and are designed to mitigate against certain types of errors.

The following are intended to aid in preventing bugs.

Tools & methods to mitigate against Skill-based errors in bug prevention:

  • Code base and architecture [latent]
  • Automated test coverage [active]
  • Manual test coverage [active]
  • Unit, feature, integration, system, and story tests [active]
  • TDD / ATDD / BDD / FDD practices [active]
  • Code reviews [active]
  • Pair Programming [active]
  • Performance testing [active]
  • Software development framework / methodology (ie, Scrum, Kanban, DevOps, etc.) [latent]

Tools & methods to mitigate against Knowledge-based errors in bug prevention:

  • Education & background [latent]
  • Recruiting and hiring practices [active]
  • New-hire Onboarding [active]
  • Performance feedback & professional development [active]
  • Design documents [active]
  • Definition of Done [active]
  • User Story Acceptance Criteria [active]
  • Code reviews [active]
  • Pair Programming [active]
  • Information Radiators [latent]

Detection is the term for the ways in which we find bugs, hopefully in the development environment, though this phase also includes certification if your organization has a certification / QA phase. The primary focus of detection methods is to ensure no bugs escape into production. As such, the entire software certification system itself may be considered one large, active layer of error trapping. In fact, at many enterprise companies, the certification or QA team (if you have one) is the last line of defense.

The following are intended to aid in detecting bugs:

Tools & methods to mitigate against Skill-based errors in detecting bugs:

  • Automated test coverage [active]
  • Manual test coverage [active]
  • Unit, feature, integration, system, and story tests [active]
  • TDD / ATDD / BDD / FDD practices [active]
  • Release certification testing [active]
  • Performance testing [active]
  • User Story Acceptance Criteria [active]
  • User Story “Done” Criteria [active]
  • Bug tracking software [active]
  • Triage reports [active]

Tools & methods to mitigate against Knowledge-based errors in detecting bugs:

  • Education & background [latent]
  • Professional development (individual / organizational) [latent / active]
  • Code reviews [active]
  • Automated & manual test coverage [active]
  • Unit, feature, integration, system, story tests [active]

When bugs “escape” the preventative measures of your Defense-in-depth system and are discovered in either the development or production environment, a root cause analysis should be conducted on your system based on the nature of the bug and how it could have been prevented and / or detected earlier. Based upon the findings of your root cause analysis, your system can be improved in specific, meaningful ways to increase both its robustness and resilience.

How an organization should, specifically, conduct root cause analysis, analyze risk and make purposeful decisions about risk, and how they should improve their system is the subject of part 3 in this series, available here.

 

Chris Alexander is a former U.S. Naval Officer who was an F-14 Tomcat flight officer and instructor. He is Co-Founder and Executive Team Member of AGLX Consulting, creators of the High-Performance Teaming™ model, a Scrum Trainer, Scrum Master, and Agile Coach.

Share This:

Agile Retrospectives: High-Performing Teams Don’t Play Games

Scrum, The Lean Startup, Cyber Security and some product development loops have fighter aviation origins. But retrospectives (debriefs)—the most important continuous improvement event—have been hijacked by academics, consultants, and others who have never been part of a high-performing team; sure, they know how things ought to work but haven’t lived them. We have.

Learn what’s wrong with current retrospectives and discover how an effective retrospective process can build the high-performance teaming skills your organization needs to compete in today’s knowledge economy.

Special thanks to Robert “Cujo” Teschner, Dan “Bunny” O’Hara, Chris “Deuce” Alexander, Jeff “T-Bell” Dermody, Ryan “Hook-n-Jab” Bromenschenkel, Ashok “WishICould” Singh, John “Shorn” Saccomando, Dr. Daniel Low, and Allison “I signed up for what?” Rivera.

Brian “Ponch” Rivera is a recovering naval aviator, co-creator of High-Performance Teaming™ and the co-founder of AGLX Consulting, LLC.

Share This: