Just as a high-performance engine relies on high-quality fuel, a high-performance automated process relies on high-quality data. How good is yours?
The first printed use of the term “garbage in, garbage out” came from a US Army specialist, William Mellin, in 1957. Writing about the early work of Army mathematicians with computers, he explained that computers cannot think for themselves, and that “sloppily programmed” inputs inevitably lead to incorrect outputs. In the intervening 66 years, that hasn’t changed.
The opportunities presented by automating business processes shouldn’t be ignored. In fact, beyond a certain point, automation isn’t just desirable, it’s absolutely necessary.
Small margins for error
However, even the best automated processes are far from flawless. Compared with the manual alternatives, the tolerance for error in automation is tiny and, while we can engineer out issues with the processes themselves, the quality of their inputs is often overlooked.
When an automated process fails, we can assume one of two things: either the process is flawed or the data that passed through it was incorrect.
Data as fuel
It’s that second factor, the data, the fuel of the process, which is the often-overlooked element. Across the housing sector, whether it’s customer, asset or financial data, quality issues are holding back seamless, automated processes from delivering the benefits they promise.
Let’s imagine a scenario… A customer raises a request for maintenance via an app. That request creates a job in our system, which in turn categorises it and sends it to the correct maintenance team’s system. Their system then adds that job to the work list and assigns it to an engineer. It then sends a message to the tenant advising them of the date and time of the visit. So far, so automated.
Now let’s imagine that not all of the information passed across was correct. The job is created, the engineer is assigned, but the contact details of the tenant are wrong so the message can’t be sent with the appointment details. A few days later, the engineer attends as planned but the customer isn’t at home. The engineer can’t get access, the job has failed, and it’s just cost us £80.
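To make that concrete, here is a minimal, hypothetical sketch in Python. The field names, regexes and routing are illustrative only, not taken from any particular housing system; the point is simply that a basic contact-details check at the moment the job is created would have flagged the bad record before the engineer was ever dispatched.

```python
import re
from dataclasses import dataclass

# Illustrative checks only: crude patterns that would catch an obviously
# broken mobile number or email before the appointment message fails silently.
UK_MOBILE = re.compile(r"^07\d{9}$")
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

@dataclass
class Tenant:
    name: str
    mobile: str
    email: str

def contact_issues(tenant: Tenant) -> list[str]:
    """Return a list of data quality problems with the tenant's contact details."""
    issues = []
    if not UK_MOBILE.match(tenant.mobile.replace(" ", "")):
        issues.append(f"invalid mobile number: {tenant.mobile!r}")
    if not EMAIL.match(tenant.email):
        issues.append(f"invalid email address: {tenant.email!r}")
    return issues

def create_job(tenant: Tenant, description: str) -> None:
    """Only book the visit if we can actually tell the tenant about it."""
    problems = contact_issues(tenant)
    if problems:
        # Route the record to a data-cleansing queue or a human follow-up
        # instead of booking an appointment the tenant will never hear about.
        raise ValueError(f"cannot schedule visit for {tenant.name}: {'; '.join(problems)}")
    print(f"Job created and appointment message sent: {description}")

try:
    create_job(Tenant("A. Tenant", "0712 345", "not-an-email"), "Leaking kitchen tap")
except ValueError as err:
    print(f"Blocked before dispatch: {err}")
```

The check itself is trivial; the value is in where it sits, at the point the data enters the process rather than after the £80 has already been spent.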
It’s not just obvious data issues like the example above that cause process failures. Subtle errors in your asset hierarchy, and problems with classifications and assets’ factual data, can lead to miscalculated service charges, lost rent and regulatory issues. Not to mention the inconvenience and, in some cases, harm that your customers can suffer.
Financial exposure
While writing this article, I was reminded of a client who had 80 properties with data quality issues in their automated process for electrical safety compliance. That didn’t seem so bad, against a total stock of around 10,000 properties, until you applied the potential regulatory fine of £30,000 per property.
£2.4 million… that was their exposure. On only 80 properties, just because of the data.
Today, thanks to taking proactive steps to manage their data better, that client’s exposure is closer to £180,000, or six properties, and fixes for those six are all in hand.
By getting a firm grasp of the data that fuels your processes, and of its quality, it’s possible to mitigate the risk of process failure with targeted, efficient data cleansing and enrichment.
By getting a firm grasp of the cost of failure in your processes, you can prioritise your resources in the right way and cut through the noise created by complex systems and huge databases to find the data that really matters.
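As a rough illustration of what that prioritisation can look like, the sketch below ranks data quality issues by exposure (records affected multiplied by estimated cost per failure). The £30,000 fine and the £80 failed visit come from the examples in this article; the issue categories and record counts are made up for illustration.

```python
# Minimal illustrative sketch: rank data quality issues by financial exposure
# so cleansing effort goes where the cost of process failure is highest.
issues = [
    # (issue, records affected, estimated cost per failure in £)
    ("Electrical safety compliance data missing or wrong", 80, 30_000),
    ("Incorrect tenant contact details", 1_200, 80),
    ("Misclassified assets in the hierarchy", 350, 250),
]

for issue, records, unit_cost in sorted(issues, key=lambda r: r[1] * r[2], reverse=True):
    print(f"£{records * unit_cost:>12,}  {issue} ({records:,} records @ £{unit_cost:,})")
```

Ranked this way, the 80 non-compliant properties dominate the picture despite being a fraction of the record count: that is the data that really matters.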
Treat the problem, not the symptom
Think of it another way – we want to treat the problem, not the symptom. If a colleague can solve a customer’s issue soon after it occurs, that’s great. However, if we can stop the issue from occurring in the first place, we not only save time and money, we also improve the outcome for our customer.
Proactively measuring and monitoring the data that fuels your organisation, and acting on that intelligence with targeted data cleansing, is the single biggest improvement any organisation can make in process automation.
Back to basics
Much of our recent work has focused on automating the generation of SDR submissions. In most cases, that has involved a ‘back to basics’ approach which has encompassed data quality measurement and ground-up process re-engineering.
What has been fascinating is seeing at first hand the layers of complexity woven into the existing processes to make up for the all-too-predictable shortfalls in the data.
Complex, lengthy and hard-to-decode workarounds, representing hours and hours of effort (almost inevitably a series of Excel workbooks built by highly skilled colleagues), become hard-wired into critical business processes.
Yet with a rigorous data management process, all that could have been easily avoided by eliminating the root causes. Not only that, but those skilled resources could have been adding real value elsewhere.
Aim for automation
What’s the final thought here? Automate your processes as far as possible. The benefits, not just to you as a business but to your customers and suppliers, are clear and achievable.
If your customers want to interact with you at 4am, an automated process will support that. If you want your suppliers to be paid on time with their invoices correctly matched, an automated process will support that.
However, don’t forget the data… “garbage in, garbage out.” It’s still as relevant now as it was in 1957.
I will leave you with this quote from one of the fathers of modern computing, Charles Babbage: “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
David Bamford is the delivery director at IntoZetta.