The term ‘AI’ is, of course, short for artificial intelligence and refers to machines with the ability to perform tasks that have historically required human intelligence, such as understanding natural language, problem-solving and learning. And while AI has the potential to revolutionise many areas of social housing, there are also risks that need to be understood and managed, as with the adoption of any new technology.
AI to enhance trust and confidence
When it comes to day-to-day operational efficiency, AI can deliver more consistent and transparent outcomes by helping to reduce human error, compensate for bias and speed up process execution. Customer experience and satisfaction can be transformed through more personalised interactions and convenient services delivered via chatbots and virtual assistants. These use natural language processing and machine learning to understand and respond to customer queries, requests and feedback across websites, call centres, apps and social media.
AI can further tailor these responses based on stakeholder preferences, needs and goals, helping each person find the most suitable solution. And unlike humans, AI technologies can provide recommendations, suggestions and guidance to tenants and contractors around the clock. Smart-home devices such as boilers, thermostats, lights and security systems will also have an important part to play, because they too can use AI to learn from human behaviour, understand preferences and adjust accordingly to provide greater comfort, convenience and security.
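To make the chatbot idea concrete, here is a minimal sketch of keyword-based intent matching — a deliberately simple stand-in for the natural language processing a production chatbot would actually use. The intent names and keywords are invented for illustration:

```python
# Map each invented intent to the keywords that suggest it.
INTENTS = {
    "report_repair": {"repair", "broken", "leak", "boiler", "heating"},
    "rent_enquiry": {"rent", "payment", "arrears", "balance"},
    "general": set(),
}

def classify(message: str) -> str:
    """Return the intent whose keywords overlap most with the message.

    A real chatbot would use a trained language model; this keyword
    overlap is only a sketch of the routing step.
    """
    words = set(message.lower().split())
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    # Fall back to "general" when no keyword matched at all.
    return best if INTENTS[best] & words else "general"

assert classify("My boiler has a leak") == "report_repair"
assert classify("How much rent do I owe") == "rent_enquiry"
```

A production system would add context tracking and escalation to a human adviser, but the routing principle — map free text to a known request type — is the same.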
Practical applications
Some practical applications of AI now starting to appear in the software provided by suppliers to our sector include:
- Predictive maintenance – AI can analyse data from housing facilities to predict maintenance needs, ensuring proactive repairs and minimising disruptions for tenants.
- Customer service and communication – Chatbots powered by AI can handle routine enquiries, provide information on policies and offer help, improving the overall communications between housing providers and their tenants.
- Tenant matching – AI algorithms can help in matching tenants with suitable properties based on their preferences, location and specific needs, optimising the allocation of housing resources.
- Predicting tenants’ needs – by analysing historical data, AI can predict tenants’ needs and preferences, allowing housing providers to anticipate and address problems before they become significant.
- Energy efficiency – AI can contribute to monitoring and optimising energy consumption in housing developments, leading to cost savings for both tenants and housing providers while supporting sustainability.
- Financial planning – AI-driven analytics can help housing providers with financial planning, budgeting and resource allocation, ensuring the efficient use of funds and improving their overall financial health.
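As a concrete (and deliberately simplified) illustration of the predictive-maintenance idea above, the sketch below scores a boiler for inspection by comparing its latest reading against its own history. The field names, weights and thresholds are invented, not any supplier's actual model:

```python
from statistics import mean, pstdev

def maintenance_priority(history: list[float], latest: float, errors: int) -> float:
    """Score how urgently a boiler needs inspection.

    Combines how far the latest flow temperature deviates from the
    boiler's own historical readings (a z-score) with a weight for
    recently logged fault codes. Weights are illustrative only.
    """
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # avoid dividing by zero
    z = abs(latest - mu) / sigma
    return z + 0.5 * errors

# A boiler drifting away from its normal operating temperature, with
# two fault codes logged, outranks a stable one for a repair visit.
stable = maintenance_priority([60, 61, 59, 60], latest=60, errors=0)
drifting = maintenance_priority([60, 61, 59, 60], latest=72, errors=2)
assert drifting > stable
```

Real systems would draw on many more signals (sensor telemetry, repair history, property age), but the underlying idea is the same: rank assets by anomaly so engineers visit before the breakdown, not after.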
Housing providers wanting to enhance their decision-making, organisational effectiveness, operational efficiency and tenant satisfaction are already adopting AI in these scenarios and beginning to see positive returns.
Likewise, at the central/local government and corporate level, policymakers, investors and housing leaders can better coordinate their strategies and plan ahead by using AI tools to analyse large volumes of unstructured data, identifying patterns, trends and opportunities in ways that were simply not possible before. For example, AI can support more effective and more accurate property valuations, market analysis and risk assessment by combining and analysing data from multiple sources.
AI also carries significant risks
However, as ever, there are some ‘buts’. One of the main drawbacks of AI is that its use can pose ethical and social concerns, especially around privacy, security, accountability and fairness.
Unfettered, AI can collect and process very large amounts of personal and sensitive information, such as financial, demographic and behavioural data. This raises concerns about how the data is stored, shared and used, and who has access to it. AI can also be vulnerable to malicious actions, including cyber-attack, hacking and manipulation, that could compromise systems and cause real harm to users and stakeholders, in turn bringing significant financial and reputational risk to housing providers.
Slightly more nuanced, but nevertheless important, is the fact that where business activities rely significantly on AI systems, determining who is responsible for an outcome may not be as clear-cut as we might like.
With AI tools such as Microsoft’s Copilots being positioned as intelligent assistants rather than autonomous entities, it can be difficult to separate the AI from the human, even after tracing back through what might have been a complex series of interactions or decisions.
This blurring of accountabilities is something business leaders need to consider carefully, especially when decisions made or supported by ‘AI’ affect the rights and interests of tenants or leaseholders; if the decision has been ‘delegated’ to an AI, who is responsible?
AI also has the potential to exacerbate rather than eliminate human biases and discrimination based on factors such as race, gender, age or income, and thus undermine efforts to ensure the fairness and quality of housing services and opportunities.
This is because machines ‘learn’ from what has happened in the past, which reinforces established patterns, even undesirable ones. It’s therefore important to exercise a degree of caution when implementing AI, making sure that both the qualitative and quantitative aspects of adopting the technology are understood before deploying at scale.
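The way historical data hard-wires past patterns into a model can be shown with a toy example. The data and the ‘model’ below are entirely invented and deliberately naive, but the effect they demonstrate is real:

```python
from collections import defaultdict

# Invented historical allocation decisions: (applicant_group, approved).
# Group A was historically approved more often than group B.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def train_majority_model(records):
    """'Learn' by taking the majority outcome seen for each group.

    Deliberately naive: it shows how a model fitted to skewed
    historical data reproduces that skew as a 'prediction'.
    """
    outcomes = defaultdict(list)
    for group, approved in records:
        outcomes[group].append(approved)
    return {g: sum(v) > len(v) / 2 for g, v in outcomes.items()}

model = train_majority_model(history)
# The model now 'predicts' approval for group A and refusal for
# group B purely because that is what happened before.
assert model == {"group_a": True, "group_b": False}
```

Real machine-learning systems are far more sophisticated, but the failure mode is the same: without deliberate checks, yesterday’s bias becomes tomorrow’s automated decision.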
The rise of the machines
One further consideration is closer to home. Some commentators suggest that an inevitable side effect of introducing AI into the workplace is that it will replace or reduce the need for human skills and jobs, especially those involving routine, repetitive or low-value tasks, with a consequent rise in unemployment and higher levels of frustration and alienation in the workforce.
Others worry that even if jobs are safe, AI will create an unhealthy dependency on the technology, one that reduces human agency, creativity and critical thinking, and that AI will erode rather than enhance interactions between housing providers and their tenants, where empathy and rapport can be crucial to achieving a positive outcome. In short, the concern is that AI will never be able to replicate ‘the personal touch’.
Conclusion
AI is a powerful and innovative technology that offers many benefits for housing providers. It has the potential to improve the efficiency and accuracy of processes throughout the extended organisation by providing more personalised and convenient services, such as chatbots, virtual assistants and smart-home devices.
However, AI also creates regulatory, ethical and social risks for housing providers, in particular concerns over privacy, security, accountability and fairness.
In the long run, AI’s greatest impact may well turn out to be on the skills profiles of employees and on the nature of human relationships within housing and across society at large. It’s therefore important, when introducing AI, to acknowledge the pros and cons and to ensure that systems are designed, developed and deployed in a responsible, ethical and human-centric way, respecting the values, rights and interests of all.
Aidan Dunphy is the chief product officer at Esuasive.