Sometimes the tools we have to hand don’t quite get the job done, and when that happens you might need to build new ones yourself. The team behind our Resident Voice Index initiative here at MRI Software came up against exactly this problem; our solution was to build our own custom data-analysis tools.
The Resident Voice Index recently published its first report, in which nearly 4,000 UK social housing residents were asked about their feelings and perceptions of their neighbourhoods and housing providers. To draw actionable insights from the data, we had to build custom technology offering sophisticated ways of analysing the ‘associated question’ results and interpreting the qualitative responses.
Why was a custom build necessary?
We did our research before choosing to build specifically for the project, looking closely at the source collection tool and weighing up other available solutions. What we quickly realised was that the collection tool’s analysis features didn’t do what we needed; we had a vision for a deeper level of data analysis. One difficulty we came across when interpreting the data was the inability to associate the answers to one question with the answers to another. It was a big problem!
Naveen Hadagali was the BI architect behind the Resident Voice Index tools. He explained how flexibility drove the decision to build for the Resident Voice Index: “It gives us ways to analyse the data by applying statistical and logical techniques that help in deriving insights. More importantly, it is scalable, flexible and intuitive to use.”
The benefits of building your own
The Resident Voice Index questions covered a variety of topics, including belonging, caring and safety. We wanted to find hidden information and deeper insights, so the technology needed to be able to uncover associations between seemingly unrelated topics and questions.
The ability to link answers gives you the power to create different subsets of the data, letting you explore relationships that would otherwise stay hidden. Our tools let us identify all the people who answered question ‘A’ a particular way, make them a subset, and see how they answered other parts of the survey. That could be any part of the survey; for example, what people who ‘felt safe in their neighbourhood’ thought about ‘positive contributions made by their housing provider’.
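As a minimal sketch of the idea (not the Resident Voice Index implementation itself), here is how such a subset might be built in Python with pandas; the column names and answer values are hypothetical:

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent.
responses = pd.DataFrame({
    "feels_safe": ["Agree", "Disagree", "Agree", "Agree", "Disagree"],
    "provider_contribution": ["Positive", "Negative", "Positive",
                              "Neutral", "Negative"],
})

# Subset: everyone who agreed they felt safe in their neighbourhood...
felt_safe = responses[responses["feels_safe"] == "Agree"]

# ...and how that subset answered the housing-provider question.
print(felt_safe["provider_contribution"].value_counts(normalize=True))
```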
Building on that, we developed the ability to cross-tabulate the results: for example, setting those who answered one way to question ‘A’ and another way to question ‘B’ against a third set of data, such as their age, their location or the answer to another question. This really allowed us to identify niche results that wouldn’t have been found using conventional off-the-shelf tools.
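That kind of cross-tabulation is straightforward to sketch with pandas’ crosstab; again, the fields here are hypothetical:

```python
import pandas as pd

responses = pd.DataFrame({
    "q_a": ["Agree", "Agree", "Disagree", "Agree", "Disagree", "Agree"],
    "q_b": ["Yes", "No", "Yes", "Yes", "No", "No"],
    "age_band": ["18-34", "35-54", "55+", "18-34", "35-54", "55+"],
})

# Counts for each (question A, question B) answer pair, split by age band.
table = pd.crosstab([responses["q_a"], responses["q_b"]],
                    responses["age_band"])
print(table)
```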
Dealing with qualitative responses
Another benefit was the work that Hadagali’s team did to build capabilities for dealing with qualitative responses. One result uncovered in the first survey was respondents suggesting things such as ‘community spaces’ or ‘community events’, which built a picture of what they wanted to see, or appreciated, in their neighbourhoods. We developed an algorithm that associates words with each other when they are clearly linked in the respondents’ answers. Some other platforms only analyse individual words, so their analysis would have been weaker and less enlightening, giving flat answers of ‘community’, ‘spaces’ and ‘events’.
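A common general technique for capturing linked words like these is to count adjacent word pairs (bigrams) rather than single tokens. The sketch below shows that standard approach, not the team’s actual algorithm:

```python
import re
from collections import Counter

answers = [
    "More community spaces for young people",
    "Community events would bring people together",
    "Better community spaces and community events",
]

# Count adjacent word pairs across all free-text answers.
bigrams = Counter()
for answer in answers:
    words = re.findall(r"[a-z]+", answer.lower())
    bigrams.update(zip(words, words[1:]))

# ('community', 'spaces') and ('community', 'events') now surface as
# phrases, rather than flat counts of 'community', 'spaces', 'events'.
print(bigrams.most_common(3))
```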
Part of our build included carefully selected points of intervention where our human researchers can make edits, so that what people meant by their qualitative answers comes through more intuitively. We were very careful not to doctor (or ‘munge’) any of the data, while making sure that perfectly good data wasn’t written off or separated from its appropriate groups: for example, by acknowledging spelling mistakes, or by excluding extraneous data and profanities that added nothing to the insights.
We removed the word ‘good’ from a word cloud analysis, for instance. When we asked for positive contributions regarding people’s housing providers, the answers naturally included things such as ‘good communication’ or ‘good repairs’, but ‘good’ itself added nothing, because the question had asked for good things. Techniques like this exclude the obvious words that would otherwise dominate, drawing out the data that matters for reporting purposes. As the project continues, we expect these capabilities to evolve and grow.
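Put together, those cleaning steps might look something like the sketch below: a human-curated correction map for obvious misspellings, plus a domain stopword list that drops words the question itself implies. Every specific word and mapping here is illustrative:

```python
import re
from collections import Counter

# Human-curated corrections, reviewed by researchers rather than
# applied blindly, so misspelt answers stay in their proper groups.
corrections = {"comunity": "community", "nieghbourhood": "neighbourhood"}

# The question asked for 'good' things, so 'good' would dominate the
# word cloud without adding any insight.
domain_stopwords = {"good"}

answers = [
    "Good communication from the comunity team",
    "Good repairs and a nieghbourhood that feels cared for",
]

counts = Counter()
for answer in answers:
    for word in re.findall(r"[a-z]+", answer.lower()):
        word = corrections.get(word, word)
        if word not in domain_stopwords:
            counts[word] += 1

print(counts.most_common())  # frequencies ready for a word cloud
```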
Constant and deliberate improvement
We are now developing the second survey of the Resident Voice Index, and the tools built for the first survey will be tested against a new set of data. Hadagali and his team are excited to see how the project grows from this point: “We’re going to extend our existing framework and strengthen the sentiment analysis and text analysis.
“In the future, we will make the tools flow much more seamlessly, so that you’ll be able to see on the fly what impact a change may have, rather than waiting for a day. Soon, we’ll be able to quantify the qualitative input much more easily, working to build an intuitive framework that can really uncover business intelligence from what people say.”
We are sharing the Resident Voice Index as a showcase for MRI Software’s business intelligence capabilities. We wanted to demonstrate the skills Hadagali’s team has to build great tools and solutions, at pace and against unique specifications. What’s more, the tools we develop can often be applied across other MRI solutions.
Readers can view the results at residentvoiceindex.com.
Doug Sarney is the Resident Voice Index principal at MRI Software.