Know how to know when to say when.
Last week, I posted (here) about grace as an under-appreciated leadership trait. Given the very positive response to the topic both online and off, I thought it timely to discuss the other side of grace.
I’ll refer to the other side of grace as judgment because it is a term that applies across disciplines and into the professional realm. Some might say that the opposite of grace is justice. I won’t quibble with that interpretation, but justice implies an abundance of objective truth, while judgment implies an abundance of ambiguity.
The business environment offers far more ambiguity than truth.
Get your thinking cap ready for this one. It’s a bit of a climb, but there’s quite a view.
First, an anecdote.
Recently, pundits and fans expended a tremendous amount of energy on the NFL’s Ray Rice domestic abuse incident, and rightfully so. While there is plenty of nuance to the discussion, one thing became clear: Once video of a grown man striking his soon-to-be wife with enough force to knock her unconscious became public, it was enough.
He was fired and roundly vilified.
The NFL’s Neanderthal and perhaps cynical decision processes aside, the case of Rice’s dismissal is a study in decision making.
To wit: Plenty of people can logically argue that grace in this instance might be merited. Mr. Rice had been an upstanding citizen and model representative of his NFL club (until he beat his significant other, that is). In fact, one need not look far to find plenty of people blaming the victim and justifying Rice’s actions as forgivable, if not acceptable.
Others, particularly those in higher profile positions in the professional community, know the score. They know that one highly deviant data point is all it takes. When a person breaks a social contract in such an egregious manner, the evidence is sufficient to pass judgment.
The application of evidence, from the mundane to the shocking to the stealthy, to judgment and decision making is what this post is about.
When is enough…enough?
We deal with ambiguity in all sorts of situations. How do we know when to go full speed ahead with a plan, when to cut ties with a boss or business partner, or when to at least alter approaches with others based on the evidence we see?
Most of us want to give people and plans the benefit of the doubt… Some of us do it to such an extreme that we lead ourselves into professional peril or, worse, purgatory. Grace for grace’s sake. The benefit of the doubt as a rule vs. an option.
When do we know enough to make a decision?
The answer? When we see one or two powerful indicators, or many, many subtle ones. The art is in knowing the indicators and their strength and in avoiding errors of intuition around them.
I’ll explain that in a moment.
Probabilistic thinking, when applied to situations at home and work, can allow you both to give the benefit of the doubt AND to maintain a meaningful level of decisiveness in the face of ambiguity. The concepts in this post are just as applicable to human relationships (both personal and professional) as they are to strategic plans.
Let’s dive in.
Depending on your disposition, you would have either been fascinated or bored to tears if I went into detail on the foundational subject matter for this post (in short: Bayesian Inference); but others have done it better than I can. So, I won’t. I will give a short overview instead.
The basis for the rest of this article is a formula for probabilistic thinking known as Bayes’ Rule and a method using it known as Bayesian Inference. I’ll work with a slightly bastardized interpretation of both. For those of you who know better, bear with me to the end.
Bayes’ Rule was formulated by a man named Thomas Bayes…a thinker ahead of his time (and behind his own thinking, some would say–he never published his work). If you have no idea what Bayes’ Rule is, you might study it elsewhere (links to good, popular/accessible summaries are out there. Here is one).
Bayes’ Rule is a formula for evaluating the impact of evidence. It is the foundation for Bayesian Inference, which is a process that provides a quantitative method for combining new evidence with prior beliefs–for “objectifying the subjective.” It is, at its most simple, a formula for taking:
- A “prior” hypothesized probability that something is true or false–“I’m 80% sure Johnny has ADD.”
- An observation that provides evidence (the “test”: “Johnny sat for 30 minutes reading a book.”)
- And a set of 2 conditional probabilities based on the prior assumption and the observation (1. “If Johnny has ADD, there’s a 5% chance of Johnny sitting still that long.” and 2. “If Johnny has no ADD, there’s a 60% chance of Johnny sitting still that long.”)
These things come together to create a “posterior” probability that the hypothesis is true. The formula looks like this:

P(A|B) = [ P(B|A) × P(A) ] / [ P(B|A) × P(A) + P(B|not A) × P(not A) ]

The term “P(A|B)” is the posterior probability that A is true given that B was observed. Enough said, right? To make it simpler for the practical uses I’ll put together later, the calculator I’ll use (here’s an online version) looks like this:
Given the posterior probability that Johnny has ADD after this test (the green box, which is now 25%, down from the prior of 80%), Johnny’s parents can rest a bit easier.
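For readers who prefer code to calculators, here is a minimal sketch of that same update in Python. The helper name `bayes_update` and its layout are my own; the numbers are the ones from Johnny’s example above:

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return the posterior P(H | observation) via Bayes' Rule."""
    numerator = prior * p_obs_given_h
    denominator = numerator + (1 - prior) * p_obs_given_not_h
    return numerator / denominator

# Prior: 80% sure Johnny has ADD.
# Observation: he sat still reading a book for 30 minutes.
posterior = bayes_update(
    prior=0.80,
    p_obs_given_h=0.05,      # P(sits still | ADD)
    p_obs_given_not_h=0.60,  # P(sits still | no ADD)
)
print(f"{posterior:.0%}")  # prints 25%
```

Five lines of arithmetic, and an 80% conviction drops to 25% on the strength of one well-chosen observation.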
If you are still with me, you are wondering “So friggin what?” Right. Well, this little primer is necessary because the power of Bayes’ Rule in your everyday life is real. It’s a way of updating your thoughts on a strategy, a relationship, a bet you want to make in Vegas, and any number of other things, by just applying evidence and judgment. And it doesn’t require you to sample forever in order to increase or decrease your conviction.
More importantly, it’s a way of battling a sympathetic and highly anchored intuition. Almost all of us have it. For example: I’d bet you dollars to donuts that Johnny’s parents, when asked what their “posterior” should be after the observation above, might update from 80 percent to “oh, ah, about 60 percent.” The reality was a fraction of that (25%).
Your intuition isn’t great when it comes to judging the meaning of highly deviant events or behaviors, and that can cost you. It can cause you to write people off based on a bad streak when it isn’t warranted, or it can cause you to be far, far too forgiving to someone or something (like a plan) that looks nice but isn’t performing.
Constant updating with new information can make you a better professional (and poker player), and frankly allow you to live a better life.
Let’s apply it to a situation like the NFL’s with Ray Rice.
Case 1: Ray Rice and Firing Decisions
Take the Ray Rice example. Imagine you have a high profile employee in your organization who does as Ray Rice did. There are really two considerations that come into play in a case like this. Call them Reputation and Values.
- Reputation: Given the evidence available, prior experience, and the profile of the person, what is the likelihood your organization can weather the reputation storm?
- Values: Given the evidence available, what is the likelihood the individual’s actions could be reconciled to your organization’s values?
The NFL, at first, applied the reputation question to its calculus; and it looked something like this:
Round 1: The evidence available was an ugly video of Rice dragging his fiancée out of an elevator. Ugly, yes, but who knows what happened in there, right? The NFL had weathered many, many similar storms in the past without indefinitely suspending a player, so experience was on Rice’s side. The NFL took the intuitive view that Rice wouldn’t hurt its reputation because his actions fell on a continuum of behavior. Bygones and all that. A two-game suspension.
Then? Video of the actual incident leaks. Whoa. A firestorm. What happened?
Round 2: Well, let’s consider the values case, which is what the NFL was ultimately forced to do after video of Rice actually cold-cocking his soon-to-be wife came out. It results in a more binary conclusion. Here’s a simple calculation based on the hypothesis that “Ray Rice is aligned with the values we espouse.”
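A sketch of that calculation, using illustrative probabilities of my own choosing (not the NFL’s numbers), might look like this:

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return the posterior P(H | observation) via Bayes' Rule."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# Hypothesis H: "Ray Rice is aligned with the values we espouse."
# Prior: a model citizen, so start very high.
# Observation: video of him knocking his fiancée unconscious.
posterior = bayes_update(
    prior=0.95,
    p_obs_given_h=0.001,    # a values-aligned person almost never does this
    p_obs_given_not_h=0.5,  # far more plausible if he is not aligned
)
print(f"{posterior:.0%}")  # prints 4%
```

One piece of powerful, highly deviant evidence, and a 95% conviction collapses to a few percent.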
See what happened there? A guy punches his fiancée, and suddenly there’s no way he can represent the values that some people expect the NFL to protect (simple things, like “don’t beat up your girlfriend”). Rice goes from “model citizen” to “persona non grata”; from a two-game suspension to fired, with an indefinite league suspension. It’s not a continuum; it’s a cliff.
Keep that in mind: Powerful evidence deserves a powerful response–a cliff, not a slope.
A case of an employee filmed publicly beating his significant other is probably too egregious and easy for most leaders to judge. It’s pretty much binary. Still, cases of legal or moral misconduct and how we handle them hold the mirror up to us in ways that few other cases do. The outward appearance of when enough is enough for you as a leader or follower reflects on your morals more than you’ll ever know.
What’s the equivalence point between grace and judgment when it comes to an employee’s misconduct? You have to make that call, and I’m offering one set of tools. Even the most “pure as driven snow” of ethical leaders probably has an expense or two that could be called into question, even if just via poor recollection (let’s see, was that 15 miles to the airport or 18…?). In the case of small deviations, it takes a lot of them. In the case of big ones? Not so much.
Let’s move the cases a little closer to issues you probably face in your workplace.
This is where these approaches get juicier.
Case 2: The Change Leader Who Doesn’t
People are keen observers of behavior. When a leader declares a change but doesn’t change behavior, people know it, even when the leader INTENDS to change. Intentions don’t matter. Observations do.
Let’s say a leader declares a tremendous new initiative for his organization that is going to require all parties to think and act differently. Problem is, his behavior reveals no real substantive indication that anything has changed.
Some people will say “yes, sir” and attempt to implement change.
Others? They will assess the conditional likelihood of change given their observation of the leader. They won’t necessarily use math, but if they did, applying Bayes’ Rule would go something like this:
Situation: I’ve been told by my leader that things are changing.
Prior Probability of Real Change: Let’s say the organization has been quite good at implementing change, so 60%.
Observation of the Leader: Once he announced the change, my leader did nothing to reinforce or role-model it (probability of observing this given real change? Let’s say it’s 10%. Probability of observing this given that no real change is actually going to happen? Let’s say it’s 90%).
The calculator looks like this:
See how easy that was? We go from an announced change effort that had an estimated 60% chance of success to a quick, mathematical assessment that change is only about 14% likely to happen given the leader’s lack of change.
In short? Why bother changing? Nobody else is. This from a single assessment of the actions of the announcing leader.
By the way, this gets worse the more case history there is. The more “flavors of the month” get launched and abandoned, the more fatigued and rational people become about change. “Going through the motions” and “why bother” mindsets are real things.
Case in point: If I had started with a 20% likelihood of real change as my prior estimate, the calculator outputs about 3% as the posterior probability.
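The arithmetic behind both numbers can be sketched in a few lines of Python (the helper name `bayes_update` is mine; the probabilities are the ones from the scenario above):

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return the posterior P(H | observation) via Bayes' Rule."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# H: "real change is happening."
# Observation: the leader does nothing to reinforce or role-model it.
p_obs_if_change = 0.10
p_obs_if_no_change = 0.90

for prior in (0.60, 0.20):  # healthy vs. change-fatigued organization
    posterior = bayes_update(prior, p_obs_if_change, p_obs_if_no_change)
    print(f"prior {prior:.0%} -> posterior {posterior:.1%}")
# prior 60% -> posterior 14.3%
# prior 20% -> posterior 2.7%
```

The worse the organization’s track record (the lower the prior), the more brutally a leader’s inaction punishes the posterior.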
Yeah, that’s right: if you are bad at implementing change, people may qualitatively stop believing you, and the reality is that their cynicism is justifiable with numbers.
Such assessments show why role modeling by leaders is so critically important in transformational change environments. While people in the rank and file won’t typically do the math, they will, in most circumstances, read the clues. The math just reinforces it.
If anything, in my experience leading change, I’ve observed that people get on or off the bandwagon quickly based on their assessment of the commitment and consistency of the senior executives in charge, in a fashion very similar to that presented here.
Let’s look at another case you might find familiar.
Case 3: The Stretch Role
The age-old question of when a person is ready for a promotion can be tackled with a Bayesian approach in order to avoid “has-to-have-been-there-itis,” where nobody is good enough for promotion to a role they’ve never held before.
Let’s say you have a budding manager who wants to step into a more senior role. What do you need to see from her in order to gain confidence in placing her in a stretch role? Pick a few triggers and use them as tests.
Maybe the triggers for her to be considered ready for the stretch role are (keeping it bland and general) organization, acumen, and foresight; but all people have some doses of each of them without being ready. So, how do you handle it?
This is where the compounding or iterative approach to Bayesian Inference matters. The “Posterior” of your first test becomes the “Prior” of your next. The analogy here is a poker player updating his assessment of his probability of winning as each card is played.
Let’s say your “prior” probability is 60% that your charge will be ready for the stretch role, but that you really need to get to 80% to pull the trigger. What do you do? You keep track of how she does on the “trigger” criteria.
So, you use the calculator in an iterative way this time…
Reading from left to right, you can see that you’d be justified in placing the person in the stretch role (85% confidence) after observing the confluence of three observations on the triggers. The addition of evidence for organization, acumen, and foresight supports the decision. This is overly simplified, of course.
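Here is how the iteration works in code: each posterior feeds in as the next prior. The likelihoods below are hypothetical numbers of my own choosing for illustration, not the exact figures from the calculator above:

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return the posterior P(H | observation) via Bayes' Rule."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# H: "she is ready for the stretch role."  Starting conviction: 60%.
p = 0.60
# Three positive observations on the triggers; illustrative likelihoods.
observations = [
    ("organization", 0.75, 0.50),  # P(obs | ready), P(obs | not ready)
    ("acumen",       0.75, 0.50),
    ("foresight",    0.75, 0.50),
]
for trigger, p_if_ready, p_if_not in observations:
    p = bayes_update(p, p_if_ready, p_if_not)  # posterior becomes new prior
    print(f"after {trigger}: {p:.0%}")
```

With these made-up likelihoods, conviction climbs past the 80% decision bar after the third observation. A negative observation would simply use likelihoods pointing the other way and pull the number back down.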
There could, in turn, be a column here for contravening evidence, and it would be factored in. That’s right: the iterative power of this mode of thinking is real, and it works in both directions.
Let’s have some fun with one that demonstrates the bi-directional nature of Bayesian Inference along with the asymmetric power of different types of observations.
This time I’ll use an unpleasant but all too common situation.
Case 4: The Asshole*
Let’s say you establish a “No Asshole” rule in your life. Perhaps this means that you will do your best to either remove them from your team or, failing that, remove yourself from contact with them.
Some assholes are easy to identify (in a Lloyd Christmas kind of way, they’re obvious). But sometimes, especially in a professional setting, you have to figure out when enough is enough through evidence and observation. The issue is this: Assholes can act like good people at times–sometimes even better than good people. They can be charming, or attractive, or smart and polished. But, deviant behaviors stand out.
Imagine a new colleague comes into your organization. Let’s say that your No Asshole radar is completely inactive. They might be an asshole, but you see no reason to think so. You assign a 10% chance of asshole-dom at the start (perhaps the base likelihood of encountering one of these animals in your professional experience). Then, over the course of six months, you observe the person being actively deceptive, politically pitting people against each other, backbiting, and bullying.
Taken individually, these actions could come from anyone. Even a great executive could backbite once in a while. For that reason, I’ve used 10% to 20% as the probability of a “bad day for a good person” in the “Behavior | Not an Asshole” line below. But as the observations mount, it becomes clear:
The person is an asshole. 100%.
But wait, you say? They are nice, have a warm smile, have charisma, are active in the community, and are great with their family.
That’s the issue: so are people who aren’t assholes. These aren’t deviant behaviors like the first set of observations, so they really don’t count for much. It’s the old “I’m just an intense person sometimes” or “my job requires it” shibboleths that assholes like to trot out. The person is already over the cliff. Statistically speaking, adding in nice but common behaviors has no power in the assessment.
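To see both effects at once, deviant behaviors compounding and nice behaviors carrying no signal, here is a sketch with illustrative probabilities of my own:

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return the posterior P(H | observation) via Bayes' Rule."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# H: "this colleague is an asshole."  Base-rate prior: 10%.
p = 0.10
# Four deviant observations over six months; illustrative likelihoods.
deviant = [("deception", 0.8, 0.15), ("pitting people", 0.8, 0.15),
           ("backbiting", 0.8, 0.15), ("bullying", 0.8, 0.15)]
for behavior, p_if_asshole, p_if_not in deviant:
    p = bayes_update(p, p_if_asshole, p_if_not)
print(f"after deviant behaviors: {p:.0%}")  # ~99%: effectively certain

# Now an "in-lying" observation: a warm smile. Assholes and non-assholes
# smile at roughly the same rate, so the likelihood ratio is ~1 and the
# posterior does not move at all.
p_after_smile = bayes_update(p, 0.9, 0.9)
print(f"after a warm smile: {p_after_smile:.0%}")
```

When the likelihood of an observation is the same under both hypotheses, Bayes’ Rule leaves the posterior untouched. That is the mathematical version of “smiling doesn’t count.”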
All the goodness in the world can’t overcome a multitude of highly deviant behaviors that tag your colleague as an asshole. Find a way to get away and preserve yourself and your organization.
Here’s why this matters: Outlying behaviors are huge signals, and should be taken as such. In-lying behaviors (like smiling and acting nice, for instance) are actually not all that big a signal. Even the biggest assholes in the world smile and act nice frequently, just like “normal” people. It’s simply a posture–like crossing and uncrossing one’s arms. Observers of actions know that it’s much harder to hide deviant behaviors over the long run.
This is why true acts of deception and bullying, especially within a purported culture of integrity, should sound the alarms…now…loudly. Enough is enough.
It all comes down to this: When considering evidence in order to make a judgment or decision, a series of small signals can add up to a lot of conviction, but it takes a lot of time. A single, clear, outlying signal can remove any doubt, even in the presence of small signals to the contrary. When it comes to judging people’s actions (like in Case 4), it’s a cliff that can’t be walked back up.
After the presentation and consideration of some types of evidence, no amount of earthly grace is indicated.
Here are five practical ways to apply this kind of thinking every day:
- Have a point of view going into any interaction, particularly those with significant ambiguity. Be vigilant. And, update your point of view as you judge events and actions, to the good and to the bad. Your “posterior” estimate of reality is what matters.
- Place checkpoints on strategic plans that call for evidence based tests of whether the world is what you thought it would be. Update!
- Hold performance reviews with people that allow you to mutually update your understanding of how things are going and ideally to steer away from misunderstandings of performance or inference. Get intentions out on the table to match with actions.
- Remember that your actions are what people see, not your intent. The best thing about using Bayes’ Rule is that it relies on observation and evidence. The worst thing about it? When others use it. You can’t weasel your way out of being an asshole once people are onto you and get over their tendency to let you slide.
- Tolerate, but only to a certain degree, bad behavior. That goes for bad behavior from your superiors or from your subordinates. Everybody has a bad day. A bad day is not an indicator of a bad person. A single data point can’t indicate a trend, but it can indicate a probability of the underlying personality, which has been the point of this post.
A friend recently related to me an adage from his years in the U.S. Army: “Once is happenstance, twice is coincidence, three times is enemy action.” Such is the type of thinking I’m encouraging here, with the slight adjustment that sometimes, once is enough.
Grace is a critical element of leadership, except when it’s time to use judgment. Using the concepts in this post can allow you to know when enough is enough.
Now, go mind your posterior.
* I would like to thank Stanford University professor Bob Sutton for popularizing the notion that the word “asshole” has no polite substitute. I am using it here as Professor Sutton would. If you have not read it, Sutton’s book The No Asshole Rule is worth a look.
Geoff Wilson hopes that this overlong and somewhat technical article did, in fact, provide a view that was worth the climb. Offer your comments or critiques below or offline.