Natural Language Generation

Natural Language Generation (NLG) is the use of Artificial Intelligence (AI) to produce written or spoken narrative from a dataset. It mines large volumes of data to identify patterns and presents the resulting information in a form humans can understand, in a fraction of the time. In other words, AI can look at your data and write a story from it, much as a human analyst would today. Many organizations are adopting NLG and transforming their businesses: it lets them engage customers more effectively and opens new avenues for growth.

NLG fulfills the following requirements:

Relevance: After understanding the context, NLG articulates the information from the content that users actually need.

Intuitiveness: It generates natural, easily understandable content that is intuitive and therefore easy to consume.

Timeliness: NLG produces results almost instantly, much faster than human-written output.

Difference between NLG and NLP

According to Gartner[1]:

“Whereas NLP is focused on deriving analytic insights from textual data, NLG is used to synthesize textual content by combining analytic output with contextualized narratives.”

In short, NLP reads and evaluates language, while NLG generates it: NLG turns analytic output into ideas and produces the content that is then communicated to end users.

Stages of natural language generation

Content determination: Deciding the main content to be represented in a sentence, i.e., the information to mention in the text.

Structuring: Deciding the structure or organization of the conveyed information.

Aggregation: Putting similar sentences together to improve understanding and readability, for instance merging "It will rain today" and "It will be cold today" into "It will rain and be cold today".

Lexical selection: Using appropriate words that convey the meaning clearly.

Referring expression generation: Creating referring expressions that help identify a particular object or region.

Realization: Creating and optimizing text that is grammatically correct, for example using "will be" for the future tense of "to be".
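The stages above can be sketched as a toy pipeline over a small weather record. This is a minimal illustration only; the field names, phrasing rules, and the simple string-joining realizer are all assumptions made for the example, not part of any real NLG system.

```python
# Toy sketch of the classic NLG pipeline stages on an illustrative
# weather record. All field names and phrasings are made up.

def determine_content(record):
    # Content determination: pick which facts to mention.
    return {k: record[k] for k in ("city", "temp_c", "rain") if k in record}

def structure(facts):
    # Document structuring: fix the order in which facts appear.
    return [(slot, facts[slot]) for slot in ("city", "temp_c", "rain")]

def lexicalize(plan):
    # Lexical selection: map each fact to a phrase.
    phrases = []
    for slot, value in plan:
        if slot == "city":
            phrases.append(f"In {value}")
        elif slot == "temp_c":
            phrases.append(f"it is {value} degrees")
        elif slot == "rain":
            phrases.append("rain is expected" if value else "no rain is expected")
    return phrases

def aggregate_and_realize(phrases):
    # Aggregation + realization: merge phrases into one grammatical sentence.
    return phrases[0] + " " + " and ".join(phrases[1:]) + "."

record = {"city": "Pune", "temp_c": 31, "rain": True}
text = aggregate_and_realize(lexicalize(structure(determine_content(record))))
print(text)  # In Pune it is 31 degrees and rain is expected.
```

Each function corresponds to one stage; in practice the stages are far richer (grammars, discourse planning, statistical or neural realizers), but the division of labor is the same.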

Evaluating NLG

There are three basic techniques for evaluating NLG systems:

Task-based evaluation: Humans assess how well the system helps them perform a task. For example, a system that generates summaries of medical data can be evaluated by giving those summaries to doctors and checking whether they help the doctors make better decisions.

Human ratings: A person rates the generated text on its quality and usefulness.

Metrics: Generated texts are compared automatically against reference texts written by professionals.

State-of-the-Art NLG Models

NLG for dialog

In [2], the authors use end-to-end neural encoder-decoder approaches with attention and re-ranking. They also describe an ensembling method based on a boosted model. Another technique used in [2] is delexicalization, a process in which the slot values in a text are replaced with slot-type placeholders. Further, the authors explain a method to compare and combine both delexicalized and lexicalized inputs for the NLG system, as well as an approach that performs better in a multi-domain setting.
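Delexicalization can be illustrated with a short sketch: slot values in a sentence are swapped for placeholders, giving a reusable template that can later be re-lexicalized with new values. The sentence, slot names, and naive string replacement below are assumptions for the example, not the method of [2].

```python
# Toy delexicalization: replace slot values with <slotType> placeholders,
# then fill the resulting template with values for a new input.

def delexicalize(text, slots):
    for slot_type, value in slots.items():
        text = text.replace(value, f"<{slot_type}>")
    return text

def relexicalize(template, slots):
    for slot_type, value in slots.items():
        template = template.replace(f"<{slot_type}>", value)
    return template

sentence = "The Eagle is a coffee shop in the city centre."
slots = {"name": "Eagle", "eatType": "coffee shop", "area": "city centre"}
template = delexicalize(sentence, slots)
print(template)  # The <name> is a <eatType> in the <area>.

new_slots = {"name": "Blue Spice", "eatType": "pub", "area": "riverside"}
print(relexicalize(template, new_slots))  # The Blue Spice is a pub in the riverside.
```

Training on templates rather than surface strings reduces data sparsity, since many different slot values share one generation pattern; real systems use alignment rather than plain substring replacement.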

NLG for text and QA

According to the authors of [2], NLG for text and QA has been applied successfully to weather forecasting, with a few exceptions. The proposed approach assumes the answer (usually an entity) has already been retrieved, so the goal is to generate text that matches it. This is closely related to answer generation, where current approaches are also based on encoder-decoder networks that encode information directly from a knowledge base.

References:

[1] Gartner. https://www.gartner.com/en/documents/3388326

[2] Cervone, Alessandra; Khatri, Chandra; Goel, Rahul; Hedayatnia, Behnam; Venkatesh, Anu; Hakkani-Tur, Dilek; Gabriel, Raefer. (2019). Natural Language Generation at Scale: A Case Study for Open Domain Question Answering.
