Since the introduction of systems and methodologies for cost- and condition-forecasting around 25 years ago, the volume of data available to housing providers, the uses to which it is put and the methods of collecting it have all grown rapidly. Unfortunately, improper use of data has grown alongside them. It is therefore useful to examine how stock condition surveys and the evolution of asset management systems (AMS) have affected data management.
AMS purposes
With the government’s focus on controlling costs and outcomes in the sector, the original purpose of an AMS was to provide a picture of the current and likely future condition and planned maintenance costs of a housing portfolio.
This remains a key requirement but is no longer quite so simple to ensure. Twenty years ago, the first AMSs were populated with stock condition survey data of varying provenance and coverage, plus ad-hoc data on major past repairs.
This data was then extrapolated or cloned over whole-stock portfolios using algorithms similar to those used in best-practice guidance for surveying. The validity of outputs was dependent upon how representative the available data was of the whole stock and how well the system’s extrapolation algorithms spread that data.
The resulting cost and condition projections were a revelation and supported the massive investment that new housing-condition standards such as Decent Homes imposed on the sector. These projections relied on statistically sound, representative samples of data being collected and on reliable extrapolation algorithms.
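How far those projections could be trusted came down to how the sampled data was spread across the unsurveyed stock. The sketch below is a deliberately simplified illustration of the cloning step described above, not any AMS supplier’s actual algorithm; every field name and figure in it is a hypothetical assumption.

```python
# Illustrative sketch only: a much-simplified version of the cloning idea,
# not any specific AMS supplier's algorithm. All field names and figures
# are hypothetical assumptions.
from collections import defaultdict

# Surveyed "beacon" properties, keyed by a simple archetype
# (asset type, construction type, build decade).
surveyed = [
    {"archetype": ("house", "traditional", 1960),
     "components": {"roof": 4500, "kitchen": 6200}},
    {"archetype": ("flat", "system-built", 1970),
     "components": {"roof": 1800, "kitchen": 5400}},
]

# Unsurveyed stock records holding the same archetype attributes.
unsurveyed = [
    {"id": "P001", "archetype": ("house", "traditional", 1960)},
    {"id": "P002", "archetype": ("flat", "system-built", 1970)},
    {"id": "P003", "archetype": ("house", "traditional", 1960)},
]

# Index beacon component costs by archetype.
beacons = defaultdict(list)
for record in surveyed:
    beacons[record["archetype"]].append(record["components"])

# Clone the beacon data onto matching unsurveyed properties.
projected = {}
for prop in unsurveyed:
    matches = beacons.get(prop["archetype"])
    if matches:
        # A real system would average or weight several beacons and flag
        # archetypes with no surveyed representative; this simply copies one.
        projected[prop["id"]] = dict(matches[0])

print(projected)
```

The point the sketch makes is the one in the paragraph above: the projection is only as good as the representativeness of the beacons and the care taken in spreading them.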
20+ years of data improvements
Since those early days, there have been changes in data manipulation tools and data volumes:
- Increased cloning of properties to represent others made it easier to understand outputs.
- Increased cloning of components from one property record to another further aided understanding.
- Commonly, cloning is even carried out by surveyors to widen the scope of stock-condition surveys. Unfortunately, cloning poor data or cloning good data badly are common faults in the drive to achieve wider and more understandable (rather than better) information on stock.
- Data has been amassed from the unprecedented level of planned maintenance work.
- Validation surveys for planned and other maintenance works have often been recorded.
- The sector practice of surveying 20 per cent of the stock each year has often been accompanied by the culling of vast amounts of data more than five years old, with the replacement data applied piecemeal to asset management databases.
- Because surveys and AMS algorithms relied on similar statistical sampling conventions, the extrapolation of survey data proved surprisingly accurate in its early days, even though lots of ageing and questionable data had been used.
Unfortunately, new surveys, although often using sound statistical sampling methodologies, haven’t always drawn their samples with the aim of enhancing the overall representativeness of the data already held in the organisation’s asset management database.
They might instead serve a number of other purposes, such as filling data gaps, focusing on specific assets designated for planned works or merely extending the rolling survey plan based on a five-year-old hypothesis. There is nothing intrinsically wrong with these other purposes but there will be a gradual deterioration in the validity of samples and projections if there isn’t an underlying aim of increasing the statistical validity of existing data or at least considering the overall data impact. Hence, data has improved and grown but lost its focus on the bigger picture.
Survey considerations
How many surveys are first based on a full assessment of the existing data, how many conclude with an assessment of the resulting whole-stock position, and whose job is it to do so? Remember, the rule of thumb with rolling surveys is that 20 per cent of the existing data becomes invalid each year, whether or not it has been replaced with new data.
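To make the arithmetic behind that rule of thumb concrete, the short sketch below assumes straight-line ageing at 20 per cent a year and shows how quickly the share of still-current data falls once a rolling programme stalls; the figures are illustrative, not drawn from any real stock.

```python
# Illustrative only: straight-line ageing at 20% a year, per the rule of
# thumb above. All figures are hypothetical.
ANNUAL_DECAY = 0.20      # share of existing data assumed to become invalid each year
RESURVEYED_SHARE = 0.20  # share of stock resurveyed each year under a rolling programme

def still_current(years: int, resurveying: bool) -> float:
    """Share of stock with still-valid data after a number of years."""
    share = 1.0
    for _ in range(years):
        share = max(0.0, share - ANNUAL_DECAY)
        if resurveying:
            share = min(1.0, share + RESURVEYED_SHARE)
    return share

for years in (1, 3, 5):
    print(f"After {years} year(s): "
          f"{still_current(years, resurveying=True):.0%} current with a rolling programme, "
          f"{still_current(years, resurveying=False):.0%} if surveys are paused")
```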
Most asset management databases hold information on the provenance of the data they contain and record the stock-stratification information that allows such assessments to be carried out at any time. Yet this information is rarely used in survey briefs and is very difficult to interpret without going back to basics.
People with responsibility for projections don’t like them to be subsequently proven wrong, even though that is inevitable; regulatory reporting puts further pressure on such people in their day-to-day roles. The only way to build constant improvement into forecasting is for its originators to learn by monitoring forecasts against actual results, driven by a will to improve rather than a fear of failure; sadly, such monitoring is much abused in all business sectors.
The client’s role in data management
The key to commissioning a sound stock-condition survey is to ensure that everyone is aware of the range of purposes to which it will be put, whether supplementing energy data, planning programmes of work or simply forming part of a rolling programme of stock-data maintenance and improvement. The latter purpose should be accompanied by a review of the existing data and a careful analysis to achieve the best sample in the most economical way. The survey design should mirror the structure of the asset management data, or that structure should be changed if it is not fit for purpose.
The four key stages of the exercise are:
- Agree scope – the surveyors will cover the minimum requirements, so clearly define any extra requirements before commissioning.
- Pilot – ensure the surveyors are properly briefed and test the data collection and AMS loading in a pilot.
- Validation – data volumes grow fast so validate early and regularly via both desktop reviews and spot checks.
- Final reporting – at completion, load the data into the AMS and compare all results to the surveyor’s report.
Guides to reliable statistical-sampling methodologies are widely available and should be consulted by those commissioning the survey so that surveyors can be properly instructed. However, both the statistical validity of the sample against the population the survey covers and that population’s relevance to the whole stock need to be weighed.
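As one commonly used starting point (only one of several valid approaches, and the population size, confidence level and margin of error shown are purely illustrative assumptions), the sketch below applies Cochran’s sample-size formula with a finite-population correction to estimate how many properties would need surveying within a given population.

```python
# Illustrative sketch of one common sample-size calculation: Cochran's
# formula with a finite-population correction. Parameters are assumptions.
import math

def required_sample(population: int, margin_of_error: float = 0.05,
                    z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for estimating a proportion at roughly 95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                  # finite-population correction
    return math.ceil(n)

# Example: a population of 2,000 broadly similar homes at +/-5% margin of error.
print(required_sample(2000))  # roughly 323 properties
```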
Surveyors will generally bring substantial professional skills to any assignment but they need a clear brief to ensure they can respond to their client’s needs. Unsatisfactory surveys are too often attributed to unsatisfactory surveyors when the real cause is unsatisfactory instructions.
Asprey Solutions’ business intelligence consultants work with a variety of housing customers on asset-value rationalisations and option appraisals, using stock-condition cost projections alongside other data collated by our consultants to inform investment, divestment and rationalisation decisions across asset portfolios.
We work with data from third-party AMSs as well as our own and find it relatively straightforward to identify and compensate for any data deficiencies. Our consultants’ data-analysis skills are highly developed, but data gaps, inconsistencies, exceptions and anomalies can be identified by anyone with an appropriate output of the data.
Representative sampling
The principles of representative sampling can be complex, but surprisingly small samples (often as low as 10 per cent, randomly chosen) can provide an accurate picture of homogeneous populations. With disparate data, 100 per cent samples are generally needed.
Consequently, arranging assets into homogeneous groupings is critical; most databases record asset types, construction types, build years, locations and so on that can be used to create reasonable groupings for survey and extrapolation purposes. The AMS supplier should be able to advise on the extrapolation algorithm within their system and any cloning assumptions (the surveyors should also hold a record of their own assumptions).
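As a simplified illustration of that grouping-and-extrapolation idea (not a description of any particular AMS’s algorithm; all field names and costs below are hypothetical assumptions), the sketch groups stock by a few recorded attributes, draws a roughly 10 per cent random sample from each grouping and extrapolates the sampled average across the rest of the group.

```python
# Illustrative sketch: group stock into broadly homogeneous archetypes,
# sample ~10% of each group and extrapolate group averages to the rest.
# Field names and figures are hypothetical, not a real AMS schema.
import random
from collections import defaultdict
from statistics import mean

stock = [
    {"id": f"P{i:04d}",
     "archetype": ("house" if i % 3 else "flat",
                   "traditional",
                   1950 + 10 * (i % 5))}
    for i in range(500)
]

# 1. Group assets by recorded attributes (asset type, construction, build decade).
groups = defaultdict(list)
for prop in stock:
    groups[prop["archetype"]].append(prop)

random.seed(1)
projected_costs = {}
for archetype, members in groups.items():
    # 2. Draw a ~10% random sample from each grouping (minimum of one property).
    sample = random.sample(members, max(1, len(members) // 10))

    # 3. "Survey" the sample; in reality these costs come from the surveyors.
    surveyed_costs = {p["id"]: random.uniform(3000, 9000) for p in sample}

    # 4. Extrapolate the sampled average to the unsurveyed members of the group.
    group_average = mean(surveyed_costs.values())
    for prop in members:
        projected_costs[prop["id"]] = surveyed_costs.get(prop["id"], group_average)

print(f"{len(projected_costs)} of {len(stock)} properties costed from sampled data")
```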
Armed with this information, users can gain substantially more confidence in the veracity of the results coming out of their AMS and/or address any deficiencies.
David Ellis is the director of operations at Asprey Solutions.