Love it or loathe it, AI is here to stay. Now more mainstream reality than science fiction, it has shifted from being a ‘nice to have’ to a business requirement that can drive efficiencies and bring new levels of personalisation to customer service. As a result, the debate around its ethical use is set to intensify.
This is particularly relevant in social housing, where levels of tenant vulnerability are soaring, so the application of AI requires very careful management to ensure alignment with housing providers’ core values. With that in mind, how can AI be introduced responsibly in a social housing context?
Applying AI in a socially responsible way
Fear and anxiety around the introduction of any new technology are nothing new.
AI is one of the fastest-growing technologies ever and the genie is well and truly out of the bottle. It can save lives, cut costs and improve tenants’ wellbeing. However, it can also negatively affect tenants’ lives if its intended purpose is not properly thought through.
If applying AI creates a real risk of a negative outcome for tenants in terms of their health, wellbeing, finances or safety, then it shouldn’t be applied; the benefits must always clearly outweigh the potential for harm.
That’s why some housing services are not suitable for AI. Waiting lists and housing-need allocations are good examples: the potential harm to tenants is too great if an AI recommendation about who has the greatest need for, or suitability for, a property is inaccurate because of bad training data or a lack of testing.
But if AI is applied to, say, supporting tenants at risk of falling into debt or preventing damp and mould, it can be of tremendous benefit with negligible risk. In these instances, a biased or inaccurate prediction would amount to little more than a small inconvenience to the tenant, such as an unnecessary phone call or a visit by housing staff.
AI training matters
A criticism levelled at AI is its potential for discrimination. The way we train AI matters; accountability, fairness and transparency need to go hand-in-hand with development.
If flawed data is used to train AI algorithms, it can lead to inaccurate outcomes. Deciding what to include in the data is just as important as deciding what to leave out. For example, if factors such as age and gender have no bearing on the problem, don’t include them in the training data; doing so risks an unconsciously biased or flawed outcome.
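As a simple illustration of that principle, the Python sketch below shows irrelevant or protected attributes being dropped before the data ever reaches a model; the column names and values are hypothetical, not drawn from any real housing dataset.

```python
import pandas as pd

# Hypothetical records; column names and values are purely illustrative.
records = pd.DataFrame({
    "tenant_age": [34, 71, 52],
    "tenant_gender": ["F", "M", "F"],
    "property_age_years": [60, 15, 90],
    "ventilation_score": [2, 4, 1],
    "repairs_last_year": [3, 0, 5],
})

# Attributes with no bearing on the problem (including protected characteristics)
# are removed before training, so they cannot influence the model's output.
EXCLUDED_FEATURES = ["tenant_age", "tenant_gender"]
training_features = records.drop(columns=EXCLUDED_FEATURES)

print(training_features.columns.tolist())
# ['property_age_years', 'ventilation_score', 'repairs_last_year']
```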
We’ve worked with several large housing providers to use AI to predict the likelihood of damp and mould in their properties. To train the algorithm to make accurate predictions, we combined tenant and asset data to give a more rounded picture, enabling the housing providers to better identify properties according to risk; using asset data alone would have produced a less accurate picture.
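To make that concrete, here is a minimal sketch of what combining the two sources can look like. The table layouts, property references and field names are illustrative assumptions only, not the actual data model used with those housing providers.

```python
import pandas as pd

# Hypothetical asset and tenant tables keyed on a property reference; real schemas will differ.
asset_data = pd.DataFrame({
    "property_ref": ["P001", "P002", "P003"],
    "build_year": [1965, 2005, 1930],
    "insulation_rating": [2, 4, 1],
})
tenant_data = pd.DataFrame({
    "property_ref": ["P001", "P002", "P003"],
    "household_size": [4, 1, 3],
    "damp_reports_last_year": [2, 0, 1],
})

# Joining the two sources gives a more rounded picture of each property
# than asset data alone.
combined = asset_data.merge(tenant_data, on="property_ref", how="inner")
print(combined)
```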
To mitigate bias, the data is rigorously screened during development and checked against results that are known to be positive indicators for damp and mould. It is then further screened by the housing providers themselves to verify that it is good quality, that it is being interpreted correctly and that the accuracy levels are acceptable.
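That screening step can be pictured as a simple check of the model’s flags against properties where damp and mould were actually confirmed on inspection. The figures below are invented purely for illustration.

```python
from sklearn.metrics import precision_score, recall_score

# Invented example figures: 1 = damp/mould confirmed (or flagged), 0 = not.
confirmed_outcomes = [1, 0, 1, 1, 0, 0]   # what inspections actually found
model_flags        = [1, 0, 1, 0, 0, 1]   # what the model predicted

# Precision: how many flagged properties really had a problem.
# Recall: how many real problems the model caught.
print("precision:", precision_score(confirmed_outcomes, model_flags))
print("recall:", recall_score(confirmed_outcomes, model_flags))
```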
If a property is predicted to be vulnerable to damp and mould, it is the housing officer who makes the final decision to inspect it, based on the AI’s recommendation.
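In code terms, that human-in-the-loop step might look something like the hypothetical sketch below: the model’s risk scores only generate a list for a housing officer to review, and the threshold shown is an illustrative value, not a figure from any live system.

```python
# Hypothetical risk scores (0 to 1) produced by the model for each property.
risk_scores = {"P001": 0.82, "P002": 0.15, "P003": 0.67}

REVIEW_THRESHOLD = 0.6  # illustrative value only

# The system only produces a recommendation list; the decision to inspect
# rests with the housing officer.
flagged_for_review = [ref for ref, score in risk_scores.items() if score >= REVIEW_THRESHOLD]
print("Recommended for housing-officer review:", flagged_for_review)
```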
Ethical process and practice
The pace of AI adoption remains a hotly debated topic, and rightly so, given that regulatory frameworks are still evolving in the UK, the EU and other jurisdictions worldwide. In particular, bias testing, which examines how an AI arrives at its recommendations and the outcomes it produces, is key to deploying AI in a safe and ethical way.
Adopting a sector-wide approach to best practice, and being clear about the reasons for applying AI in the first place, will ensure it stays within ethical boundaries and improves tenants’ lives. Consequently, we would recommend that housing providers aim for sign-off at a higher organisational level than for traditional technologies.
In practical terms, this means a housing provider would set up an oversight board reporting to its executive committee. The oversight board would verify that the data inputs can’t introduce bias, confirm that there has been sufficient and appropriate testing to safeguard against bias, and define an ongoing review process to ensure the AI supports pre-defined outcomes that benefit tenants.
Having an ethical framework will ensure housing providers can manage both the potential risks, such as discrimination, and the opportunities, such as earlier interventions.
Ethically applied, AI can make a significant contribution to addressing social need. By working together and adopting a sector-wide approach, IT suppliers and housing providers can set the benchmark to ensure that any AI systems used are inclusive, responsible and put tenants’ wellbeing and safety first.
Trevor Hampton is the director of housing solutions at NEC Software Solutions UK.