Using Advanced Natural Language Generation to Communicate Data Insights at Scale
In a world awash in data, organizations are clamoring for insight into what that data says about the environment they operate in.
That’s the message that Kristian Hammond, Chief Scientist and co-founder of Narrative Science, presented during a talk at the Real Machine Summit in Las Vegas.
The other key takeaway was that AI-based systems can meet these needs in new and powerful ways.
The world’s data production continues to explode, and the data that’s being generated is incredibly useful. Until recently, though, companies and data managers relied on rather opaque mechanisms for grappling with it: databases, big data platforms, business intelligence platforms, analytical tools, visualization tools (pie charts, graphs, etc.), and the workhorse of every office: the spreadsheet.
While all these tools are useful, they have limitations. Even spreadsheets and visualizations communicate little beyond fairly simple concepts. For the most part, regular people just don’t “get” them. And regular people are fundamental to Hammond’s message.
Organizations that need to communicate the meaning in their data, and do so at scale to a wide population, have been at a loss as to how to go about it. AI is breaking these barriers with tools that use Advanced Natural Language Generation (NLG) to create a process that builds a bridge between data and people.
The process begins with analytics: feeding questions about the data into analytical machines and generating meaningful facts about the world. Because there are likely to be way too many facts to be helpful, the next step is figuring out which are the most relevant for a given audience. Then, those ideas must be communicated to the audience.
This is where Advanced NLG comes in. Using Narrative Science’s system, or others that work in a similar fashion, language can be generated for each concept: language that follows natural rules and is virtually indistinguishable from what a human might write or say.
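As a rough illustration of the three-step process described above (not Narrative Science’s actual system; every name and score here is a hypothetical assumption), the pipeline could be sketched like this: derive structured facts from data, rank them by relevance to the audience, then render the top ones as plain-English sentences.

```python
# Hypothetical sketch of the pipeline: derive facts, rank by
# audience relevance, render the most relevant as language.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str      # what the fact is about
    metric: str       # the measured quantity
    value: float      # the computed result (percent change)
    relevance: float  # audience-specific importance score

def rank_facts(facts, top_n=2):
    """Keep only the facts most relevant to this audience."""
    return sorted(facts, key=lambda f: f.relevance, reverse=True)[:top_n]

def render(fact):
    """Turn a structured fact into a plain-English sentence."""
    direction = "rose" if fact.value >= 0 else "fell"
    return f"{fact.subject}'s {fact.metric} {direction} {abs(fact.value):.1f}%."

facts = [
    Fact("XYZ Fund", "quarterly return", 3.2, relevance=0.9),
    Fact("XYZ Fund", "expense ratio", 0.4, relevance=0.2),
    Fact("XYZ Fund", "tech-sector exposure", -1.1, relevance=0.7),
]

for fact in rank_facts(facts):
    print(render(fact))
```

The ranking step is the key move: with too many facts to be useful, only the highest-relevance ones ever reach the language-generation stage.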
To illustrate, Hammond used examples from the financial services sector. An AI system can do portfolio commentary and review, fraud reporting, suspicious activity reports, and more. Most companies are already doing this analysis. Once meaning is reduced to facts and those facts are organized according to value, the AI system can generate language for them. The result could be something like the following text in an investor statement:
“Relative to the benchmark, the fund’s overweight allocation to the Consumer Discretionary sector and stock selection within the Consumer Staples sector contributed to returns.”
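A minimal slot-filling sketch (hypothetical; not the vendor’s implementation, and the template wording and parameter names are assumptions) shows how structured attribution facts could yield a sentence like the one above:

```python
# Hypothetical slot-filling template for portfolio attribution commentary.
TEMPLATE = (
    "Relative to the benchmark, the fund's {weight} allocation to the "
    "{sector_a} sector and stock selection within the {sector_b} sector "
    "{effect} returns."
)

def attribution_sentence(weight, sector_a, sector_b, contributed):
    """Fill the template with one period's attribution facts."""
    effect = "contributed to" if contributed else "detracted from"
    return TEMPLATE.format(weight=weight, sector_a=sector_a,
                           sector_b=sector_b, effect=effect)

print(attribution_sentence("overweight", "Consumer Discretionary",
                           "Consumer Staples", contributed=True))
```

Real systems go well beyond fixed templates, varying word choice and sentence structure, but the underlying idea is the same: structured facts in, natural language out.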
An AI system can answer questions that would otherwise be posed to human analysts; for example, “Which decisions made in the last quarter most impacted XYZ Fund’s performance?” By handing off these tasks to the machine, human time can be freed for strategy and decision-making.
The process is a great fit for the financial services sector because the data and analytics are already there, along with a pressing need to communicate meaning to many people. But this technology has tremendous promise for other industries too.
Viewed from an AI perspective, data analytics that captures meaning and communicates it at scale represents a powerful horizontal technology. Hammond pointed out that AI has had this capacity for a while, but until recently the world lacked the data to put it to work. Now we have the data, and the solution is an elegant one.