With ever-increasing demands for varying types of data, combined with the ease with which we can collect and store it, many organisations are overwhelmed by the volume of information they hold and the manner in which they hold it.
The housing sector uses many extremely powerful software applications, but a common weakness is the poor integration between these applications and the way data, in its entirety, is used and reported by the business. Although there are resolvable technical considerations surrounding the storage and retrieval of such large and varied volumes of data, the fact remains that unless we can verify, analyse and make use of this information, what is the point of having it? Too much data and too little analysis is a significant overhead and delivers little business value.
Many housing providers have a variety of IT systems which allow them to collect and process their business data from housing management, purchase ledger and finance, customer service, HR and development systems. But as the number of data-collecting systems grows, along with the amount of information being stored, housing providers are starting to see inconsistencies and gaps in the data they hold. Furthermore, these gaps are no longer easy to spot as the 'haystack' of data makes it increasingly difficult to see these 'needle'-sized issues.
Many registered social landlords (RSLs) also still rely on manual processes to collate and analyse their data for performance reporting; due to the nature of this work, the analysis is always based on historical information which is sometimes inaccurate and often out of date. What is needed is the ability to join these systems together so that they can be viewed as a single entity rather than a dispersed collection of systems.
By implementing an automated business intelligence layer, housing providers can make this process more efficient to the point where real-time information is available to the business based on the data from multiple systems. Through the application of this new business intelligence layer, we can also start to standardise the way we extract and present information from these dispersed systems.
With a common framework that understands the data held in each separate system, shared rules and processes can be implemented. This ensures that the integrity of the data is maintained between each system, as well as allowing the identification of gaps in the data sets.
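As an illustration of the kind of rule such a framework might apply, the sketch below compares tenancy records exported from two systems, flagging disagreements and gaps. All system names, field names and sample figures are hypothetical, and a real implementation would run against live system interfaces rather than in-memory dictionaries.

```python
# Hypothetical export from a housing management system,
# keyed on a shared tenancy reference.
housing = {
    "T001": {"address": "1 High St", "rent": 95.50},
    "T002": {"address": "2 High St", "rent": 102.00},
    "T003": {"address": "3 High St", "rent": 88.75},
}

# Hypothetical export from a finance / rent-accounting system.
finance = {
    "T001": {"rent": 95.50},
    "T002": {"rent": 99.00},   # disagrees with the housing system
    # T003 is missing entirely - a gap in the data set
}

def check_integrity(housing, finance):
    """Apply shared rules across both systems and report issues."""
    issues = []
    for ref, record in housing.items():
        if ref not in finance:
            issues.append((ref, "missing from finance system"))
        elif finance[ref]["rent"] != record["rent"]:
            issues.append((ref, "rent values disagree"))
    for ref in finance:
        if ref not in housing:
            issues.append((ref, "missing from housing system"))
    return issues

for ref, problem in check_integrity(housing, finance):
    print(ref, problem)
```

Because the same rule runs against every system that holds the field, a disagreement or a missing record surfaces automatically rather than waiting to be spotted in a manual reconciliation.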
By ensuring that our systems can communicate effectively, we can now start to analyse our data in new ways. True business dashboards can now become a reality for the business, allowing KPIs to be displayed using this real-time data without the need for time- and resource-intensive processes in the background.
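To make the idea of a dashboard KPI concrete, the sketch below computes one on demand from combined transaction data rather than from a periodically compiled report. The KPI definition (rent collected as a percentage of rent due) and all figures are illustrative assumptions, not a prescribed metric.

```python
# Hypothetical transaction data drawn from the combined systems.
transactions = [
    {"tenancy": "T001", "due": 95.50, "collected": 95.50},
    {"tenancy": "T002", "due": 102.00, "collected": 80.00},
    {"tenancy": "T003", "due": 88.75, "collected": 88.75},
]

def rent_collection_rate(transactions):
    """KPI: total rent collected as a percentage of rent due."""
    due = sum(t["due"] for t in transactions)
    collected = sum(t["collected"] for t in transactions)
    return round(100 * collected / due, 1)

print(f"Rent collection rate: {rent_collection_rate(transactions)}%")
```

Because the figure is derived each time it is requested, the dashboard always reflects the latest data in the underlying systems, with no overnight batch job or manual spreadsheet step in between.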
Our ability to plan for the future is based on our understanding of the past and present and, rather than reporting on individual silos, we can start to look at the organisation as a whole, providing the ability to extract and analyse data from multiple perspectives – a true 360-degree approach.
Daniel Case is lead developer for Network Resource Group.