Two approaches to avoid, and one you should try.
It’s official. We don’t agree.
The first-ever State of Agile Coaching Report was recently released as a joint effort by Scrum Alliance and the Business Agility Institute. It’s a meaningful milestone for the emerging profession, yielding several insights to build upon. But of particular interest to me were the deep discrepancies around how agile coaching success is measured:
Agile coaches reported a wide range of success measures…
There was also a clear distinction [between] being measured against generalized informal feedback versus those measured against specific formal metrics.
Business (enterprise) coaches are far more sure of how their impact and success were being measured than [those working with a single team].
Simply put, as agile coaches, we aren’t measured consistently.
Some are measured by team outcomes. Some aren’t assessed by metrics at all, but rather by feedback alone. Heck, some of us don’t even know whether we are having impact at all, let alone how to assess it.
That’s a problem. If we don’t know what coaching success looks like, then how can we grow into it? If we can’t agree on the data to use for growth, how can we advocate with any credibility for the use of empirical data?
I’ve been coaching agility for many years, and I’ll admit this has been a struggle for me as well. However, when this report came out, I resolved to look in the mirror and explore how we as coaches might get better at the empirical practice we advocate for. To that end, I’d like to share two approaches to avoid, and one to try.
Avoid measuring coaching success by organizational results.
That’s right. Organizational results are a poor gauge of coaching effectiveness.
At first, this seems to run counter to everything. Isn’t the whole reason leaders seek agility to achieve outcomes like faster delivery, more productivity, or better quality? Year after year, industry after industry, agile methods are shown to yield measurable improvement in bottom-line results. However, there are two problems when we conflate the success of agility with the success of an agile coach.
The first problem is causality. Let’s say a product group doubles their quality scores over a year. What proportion of that improvement should be attributed to the team doing the work, to the leadership deciding the work, or to the skilled mentor advising the work? If we’re honest, it’s not at all clear.
Yes, it is most likely that agile techniques like continuous integration, definition of done, or mobbing directly resulted in the team’s quality boost. Yes, it is also probably true that a coach dramatically helped with the implementation of those techniques. However, it is fundamentally wrong to say those results were possible expressly because of that one person. Consider the budget approvals required for the automation infrastructure, the management trust required to allow a dozen engineers to mob on a single feature, or the willingness of each team member to adhere to a “done” checklist. In each of those cases, choices are made by different people, each interdependent with the others in achieving results.
I have personally witnessed several gifted colleagues demonstrate excellent coaching skills, only to watch their clients ignore reality, make poor choices, and cling to the status quo. Conversely, some of my clients have achieved new heights in spite of terrible coaching mistakes I made while working with them.
Organizations are complex systems. There are simply too many variables and too many players to say that faster delivery, better quality, or more productivity is a reflection of the helper they hired.
... Which leads us to the second problem: accountability.
If a technology leader has a mandate to improve key objectives, who ultimately answers for whether those results are achieved? The teams they lead? The vendors they hire? No. It’s the leaders themselves.
A smart person will hire expert advisors and mentors to accelerate and enable change. However, an even wiser person knows that it is strong leadership that makes the difference between being stagnant and showing momentum.
And that relates to a problematic key finding of the report:
"An agile coach's success is often measured based on the performance of those they coached rather than by specific coaching metrics."
That’s a problem, because when leaders and teams hesitate to make tough choices and painful changes, the temptation is to blame the coach. Conversely, if everything goes well, the coach may ride off into the sunset believing a bit too much of their own press.
Put another way, agile coaches are not the source of agility; they are merely an amplifier of the leaders and employees who build their own agility. Agile coaches are not the heroes of the story; the leaders and teams they coach are the heroes.
Let’s say a CTO hires an expert to guide their journey to enterprise agility. After a full year of effort, none of the metrics show improvement in quality, productivity, or predictability. When the CEO convenes the annual board meeting, what will be the narrative around the company’s inability to evolve?
“Honestly, we’ve spent roughly $3M across the teams, with not much to show for it.” One director responds by asking, “Well, what does the CTO say is the culprit?”
“He made it clear that he would hold the coaches accountable for measurable improvements in those objectives. Since those are all lagging indicators, the coaches demanded a full year to see any measurable lift. Well, here we are. Unfortunately for him, he hired the wrong coach.”
Another director sounds skeptical: “Wait, he burned through the whole year and the whole budget without changing vendors? Why?”
“We all know it’s standard management practice to transfer operational risk to a third party. We measure coaches by outcomes. If we don’t have outcomes, it’s the coach who incurs that liability.”
Finally, the chairwoman speaks up: “Um, actually, speed, quality, and throughput are not operational details. They are the results that drive revenue and profitability. Your CTO may have hired the wrong coach, but it sounds like you hired the wrong CTO. I don’t care if you were roommates in university. You need to make a change there, right away.”
Avoid measuring coaching success by coach activity.
Since measuring results is problematic, let’s try something more specific to coaches themselves.
Consider a hypothetical comparison between two teams. Six months into a project, the Agile Center of Excellence distributed a performance report on coach effectiveness:
Team Alpha’s coach delivered two bootcamps, twelve retrospectives, and a few dozen one-on-one sessions.
Team Beta’s coach reported twice as much work in the same period.
Which coach was better? You can’t tell, can you? Just because the second coach did twice as much work doesn’t mean they were more effective. Even if we add in some improvement metrics, how much of that improvement came because of the coaching, and how much came despite it?
The other problem emerges when you consider the adage “what gets measured is what gets done.” If we measure success by the volume of coaching output, then volume is exactly what we will get.
Instead, try measuring coaches by employee and leader satisfaction.
At the end of the day, coaches serve people. The ultimate arbiter of coaching value comes from the people who receive coaching help.
So how do we measure coaching satisfaction? The key here is to use both quantitative and qualitative data.
For hard numbers, there are a variety of ways to measure how strongly a coach is valued by their audience. We can simply adapt proven customer satisfaction metrics, which ask an audience or clientele to score an experience on a scale from really bad to really great. Examples include:
Customer Satisfaction Score (CSAT) measures “how would you rate your experience interacting with your coach?”
Net Promoter Score (NPS) asks “how likely are you to recommend this coach to a colleague?”
Customer Effort Score (CES) tracks things from the other direction, asking “how hard was it to work with your coach?”
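All three scores are simple to compute from raw survey responses. Here is a minimal sketch, assuming the commonly used scale conventions (CSAT counts 4s and 5s on a 1–5 scale as “satisfied”; NPS subtracts the percentage of detractors, scores 0–6, from the percentage of promoters, scores 9–10, on a 0–10 scale; CES is averaged on a 1–7 scale):

```python
def csat(scores):
    """Customer Satisfaction: % of respondents scoring 4 or 5 on a 1-5 scale."""
    return 100 * sum(1 for s in scores if s >= 4) / len(scores)

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def ces(scores):
    """Customer Effort Score: average of 1-7 effort ratings (lower means easier)."""
    return sum(scores) / len(scores)
```

Run quarterly and tracked per coach over time, these three numbers provide the quantitative half of the picture.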
But metrics don’t tell the whole story. What if a coach scores well, but the team doesn’t improve? What if the metrics look rather unflattering, but the team was simply uncomfortable with a truth-teller revealing their weak spots to them? That’s why it is so critical to collect qualitative feedback on coaching performance, such as:
What was most helpful about working with your coach?
What are your coach’s key strengths?
What could the coach do better to be even more valuable?
Often, these questions can be asked in an anonymous survey. They should also be asked by the coach themselves in both 1-on-1 and group settings. Agile coaches have a professional obligation to role model the growth mindset they are fostering in others.
To do this well, the report offers a key insight.
"Those who reported measuring success at the customer/client level most often based their level of success on [their] satisfaction."
That means coaches should be viewing their success through the eyes of those who hire them. And those who hire them will have a full view of the coach’s value when they have a full picture of feedback.
In the end, agile coaching is a relatively young discipline, with professional standards and practices still emerging. The good news is, if we start measuring the impact we have on a human level, we can remove several distractions, and focus on the essence of making the craft truly meaningful.
Jesse Fewell has mentored thousands of technology professionals across 14 countries to improve their teams & companies using Agile methods.
A management pioneer, he founded the original Agile Community of Practice within the Project Management Institute (PMI), created and refined multiple certification programs, and has written publications reaching over half a million readers in eleven languages. His industry contributions earned him an IEEE Computer Society Golden Core Award.
His most recent book, “Untapped Agility – 7 Leadership moves to transform your transformation,” is for technology transformation leaders at any level.