If you ever read articles on why IT or other large implementations fail, the authors will provide a laundry list of reasons. One reason rarely mentioned is “backsliding”. Using a company’s performance management process can prevent it.
Backsliding is the perfectly natural but insidious habit people have of reverting to their old ways of doing things. That is often why this cause is missed: on day one of the implementation (almost) everyone is doing what they are supposed to do. And the day or week after, too. So the project is declared a success.
But then people start going back to their old, comfortable habits.
The siren song of old habits
Frank in accounting goes back to using his old spreadsheet because it provides additional analytics that the new ERP system reports just don’t provide. Then his co-worker Nerine realizes that Frank is still using his old spreadsheet with no ill effects. So she starts doing the same with her favourite tools and apps.
This catches on and other people start ‘cheating’, each just a little. And so the data in the system is not as current anymore and degrades further as time goes on. The reports start to reflect this degradation: garbage in, garbage out.
Nancy in field service gets too busy to enter her maintenance information in her new tablet right after her service call. She goes back to calling her old service desk contact, Jasmine, to write this up for her, just this one day. Nancy and Jasmine have known each other for a long time, so they use the call to catch up. It’s pleasant.
The next day Nancy does it again, then later, again. Edmund sees what Nancy is doing and goes back to calling in his maintenance activities to Jasmine as well, and catching up a bit too during the call. Others start doing it too, occasionally at first, then more often.
Because the company is growing, Jasmine was supposed to move over to support the supply chain group. However, it now looks like Jasmine can’t leave customer service because she is needed there, and the company has to hire an additional administrator for supply chain. So the savings envisioned through the reduction in manual, administrative work do not materialize.
Slow motion train derailment
The project hasn’t failed, exactly. Not yet. But if you were to measure it against the stated benefits a year later, the returns on investment (ROIs) would not look very good.
Often you can get away with a little bit of backsliding. However, if it is allowed to catch on and build, you can have an expensive mess on your hands. Unfortunately, it will be one of those painful, slow-motion train-wreck kinds of messes.
Two or three years later you have to re-implement a major component, reorganize to ensure compliance with processes, or launch some other major change initiative to fix what should have been in place the first time.
Preventing the preventable
All projects suffer some degree of backsliding. We’re dealing with human beings, after all, and if there is a way to circumvent something we don’t want to do, we will find it! We didn’t reach the top of the food chain by allowing ourselves to be led quietly to the table as the main course.
Nonetheless, to really prevent wholesale backsliding, you have to focus on sustainability. We covered some elements of sustainability, such as reward and recognition activities and organizational alignment. This blog focuses on performance management.
There are four steps to convert desired project outcomes into people’s performance reviews. Here are the steps, along with a story from an actual implementation.
1. Define your project success criteria and metrics (different from “benefits realization”)
2. Identify the future state behaviours you want to see
3. Document the measures of these behaviours
4. Reinforce these new behaviours through the performance management process
Project Success Criteria
First and foremost, define your project success criteria and metrics. These are usually not the same as “benefits realization”. Benefits realization normally refers to the benefits the company will realize after the system or processes have been in place and working for a while. These tend not to be clear for several weeks, and often months.
So what project success criteria and metrics should you define? After all, there are already the “on time, on budget” success criteria for any project. What more do you need?
You have to be able to answer the following questions: What will be visibly different the day after the change is implemented? What will you observe that wasn’t there the day before? What results are new?
In our example, we were consulting on a project to implement a processing centre for over twenty relatively independent offices across the country. One of the success criteria was that the quality of the forms completed by the processing centre had to be high from day one.
There were several metrics associated with this one success criterion: percent of data entry errors; percent of rework errors; perception that the quality was good. We could measure all of these within the first week or two.
To make a point: measuring the perception of quality through a survey was important. We wanted to make sure that the offices using the new centre for the first time also perceived that things were good. Perception is reality, as they say, and if this perception wasn’t there it would be harder to convince everyone across the organization that the new centre was working well.
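To make this concrete, metrics like these can be tallied with a few lines of code. Here is a minimal sketch; all the counts and survey ratings below are hypothetical, not figures from the actual project:

```python
# Hypothetical week-one numbers for the processing centre.
forms_processed = 480                   # forms completed by the centre
data_entry_errors = 12                  # forms with at least one data entry error
rework_errors = 7                       # forms sent back for rework
survey_ratings = [4, 5, 3, 4, 5, 4]     # 1-5 "quality was good" ratings from offices

# The three metrics tied to the quality success criterion.
data_entry_error_rate = 100 * data_entry_errors / forms_processed
rework_error_rate = 100 * rework_errors / forms_processed
perceived_quality = sum(survey_ratings) / len(survey_ratings)

print(f"Data entry errors: {data_entry_error_rate:.1f}%")   # 2.5%
print(f"Rework errors:     {rework_error_rate:.1f}%")       # 1.5%
print(f"Perceived quality: {perceived_quality:.1f} / 5")    # 4.2 / 5
```

The point is not the tooling (a spreadsheet works just as well) but that each metric is defined precisely enough to be measured in the first week or two.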
Identify Future State Behaviours
In our example we also defined the behaviours that we wanted to see more of and less of (here is the link to the worksheet if you want to use this on your project). We wanted to see more collaborative behaviours. But what did these look like? To measure something specific, you have to be able to observe something specific.
Error rates are often caused by misunderstandings stemming from poor communications. When the person processing a form works in the same office as the rest of the staff, communication is easy. The person can just look over their cubicle wall and clarify a point. Plus, they get to know each other well through repeated face-to-face contact, making communication even easier.
When the person processing the form is sitting tens or even thousands of kilometres away, and possibly in a different time zone, communication becomes much harder. And if the processing person is brand new, someone nobody knows, it becomes even more difficult.
What can improve communication in cases like this is regular, weekly WebEx or Skype meetings. Therefore, participation in these meetings became one of our expected behaviours. We set up the weekly meetings and invited members from each office to attend the meetings relevant to them.
Measure Observable Behaviours
The next step was to measure attendance and add this behaviour metric to the performance management and review process. This was important since, in this case, nobody could force people to attend. Each office had a fair bit of autonomy, and if an office head was fine with their staff not attending, there was nothing anyone could do.
We ran the meetings and recorded who participated. As you might expect, most offices and their required staff did attend regularly, with varying degrees of enthusiasm. But several offices’ attendance was quite spotty. At least now we had data; we had stats.
It was not all rosy with the meetings in the beginning, but the teams worked through the initial issues rather quickly. And after a while the relationships between people in the processing centre and the offices developed into respectful, professional working ones. Relationships and pride in the results helped make the process self-sustaining more quickly than we anticipated.
The Importance of the Performance Management Process
But there were also offices that were more resistant. How did this play out?
Well, for starters, their results were not nearly as good as those of the more engaged offices. They had higher rates of data entry errors and rework errors. Naturally, these offices blamed the staff at the processing centre, and they were quite vocal about this poor perception of theirs.
So the project sponsors and the heads of the resistant offices met to discuss the implementation and review the stats. There were two parts to the meeting.
The first part involved lots of venting. That lasted quite a while until passions cooled a little.
Then the second part of the meeting started, with essentially the following comments from one of the senior project sponsors: “All offices are using the same people in the processing centre. How come the centre can produce high quality forms for their offices, but not for yours? That doesn’t make sense. And look here: the only thing the error rates correlate with is attendance at the weekly meetings. How can you explain that?”
That part of the meeting was much shorter.
The office heads were also asked to remind their staff that meeting attendance was now part of everyone’s performance review. Since people across the company were assessed against their peer pool (managers, senior managers, and so on), not meeting these metrics would count against them when it came time for promotions, salary increases and the like.
The initiative then went to phase 2, where more work was transitioned to the processing centre. It escaped nobody’s notice that staff from the more resistant offices arrived at the meetings in full force, completely engaged and, eventually, supportive.
Attendance at these and other meetings was never again an issue. For the second phase there were no significant differences among the offices with respect to quality of work done at the processing centre.
As this strategic initiative rolled out through subsequent phases, more and more work moved over, and ongoing support and engagement from the offices was no longer the issue it had been in phase 1. In the end, this was considered one of the most successful cultural transformations for this professional services firm.
Like this blog? Share it with your colleagues. And sign up so you won’t miss another one!