Yext Showcases Knowledge Injection Research With Large Language Models

Yext presented at the Extended Semantic Web Conference (ESWC) 2023 on using the Yext platform for Knowledge Injection to generate review responses with Large Language Models (LLMs).

Ariana Martino

Jul 27, 2023

6 min

In May, Yext presented on Knowledge Injection with Large Language Models (LLMs) at the Extended Semantic Web Conference (ESWC) 2023. The paper, Knowledge Injection to Counter Large Language Model (LLM) Hallucination, co-authored by Yext data scientists Ariana Martino and Michael Iannelli and Yext senior data analyst Coleen Truong, represents the second published piece of peer-reviewed research from Yext's research and development function. The paper investigates the results of using related entity data from Yext Content with LLMs for automated review response generation. This research helped develop the Content Generation for Review Response feature included in the 2023 Summer Release. Read on to see key takeaways from the paper.

Section 1: Yext Platform and Industry

Our research evolved from an initial study, conducted by Yext, that revealed how responding to online reviews can affect business reputation. Businesses that respond to at least 50% of reviews see an approximately 0.35-star rating increase on average.¹ This elevates a business's reputation online, especially in search results. Prior research has also shown that responding to 60-80% of reviews is optimal.² Depending on the review volume a business sees, this manual review and content generation process can translate to multiple hours or even days of a full-time employee's workload. We set out to discover whether we could leverage the emerging technology of LLMs to automate and streamline the review response process for businesses through our Reviews monitoring platform.

A large language model is an AI algorithm that is trained on large data sets and can perform a variety of natural language processing tasks, such as generating text or answering questions in a conversational manner (you probably already know this if you've ever played around with the likes of Jasper or ChatGPT). LLMs are already used in other parts of the Yext platform, such as Chat and the Content Generation feature. We set out to explore whether AI and LLMs could be used for Reviews.

The foundation of our research lies in Yext Content, which is based on knowledge graph technology and stores brand-approved facts. Yext Content has four key features.

Maintains a flexible schema

Content allows for platform customizations that align with each individual business's needs and operational structure. For example, a healthcare system would need entity types that reflect healthcare professionals, hospital locations, medical specialties, and health article documents, while a restaurant would have menu items, store locations, and special events. The knowledge graph schema can also change over time to adapt to evolving business needs and structures.

Defines relationships between entities

Content also defines how entities are related to one another. For example, Doctor A works at the Union Square office. This additional information provides key context for the LLM to understand the relationship structure between two objects.

Contains bi-directional relationship connections

Because it is based on graph technology, Content can make connections bi-directional. For example, Doctor A works at the Union Square office. From the same relationship connection, we can infer that the Union Square office has Doctor A working there.
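To make this concrete, here is a minimal sketch of how typed entities and bi-directional relationships could be modeled in plain Python. The entity types, field names, and relationship names are hypothetical illustrations, not the actual Yext Content schema or API.

```python
from collections import defaultdict

# Entities keyed by ID; each entity carries a type and brand-approved fields.
# All types, names, and field values below are hypothetical examples.
entities = {
    "doctor_a": {"type": "healthcareProfessional", "name": "Doctor A",
                 "specialty": "Pediatric Gastroenterology"},
    "union_square": {"type": "location", "name": "Union Square Office",
                     "phone": "(212) 555-0100"},
}

# Each relationship name has an inverse, so one stored edge can be read both ways.
INVERSE = {"worksAt": "hasStaff", "hasStaff": "worksAt"}
edges = defaultdict(list)

def relate(subject, predicate, obj):
    """Store a relationship and its inverse, making the connection bi-directional."""
    edges[subject].append((predicate, obj))
    edges[obj].append((INVERSE[predicate], subject))

relate("doctor_a", "worksAt", "union_square")

# "Doctor A worksAt Union Square Office" also answers the reverse question:
print(edges["union_square"])  # [('hasStaff', 'doctor_a')]
```

Because every edge is written together with its inverse, a single stored fact can be read from either side, which is the property the bi-directional connections above rely on.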
This additional context layer provides a wealth of information for drawing complex connections between entities. These connections go beyond the entities added into Content and extend to data returned into the Yext platform across our other product lines, such as Reviews, Pages, Search, and Chat. For reviews specifically, review data is aggregated from all available publishers for each unique entity and returned in the Yext Review Monitoring platform. This content can then be reviewed by a brand employee, who manually writes an appropriate review response, which then gets pushed back out to the third-party publisher site.

Multi-hop Relationships

Through the linkages between entities, we can confidently make relational ties between entities that are not directly connected. For example, if a doctor specializes in pediatric gastroenterology, and pediatric gastroenterology appointments are only held at the Union Square office, we know that the doctor works at the Union Square office (see the sketch at the end of this section).

Hypothesis

Our hypothesis was that including related entity information in the prompt text would result in a generated response that reflects the relevant business information. We define Knowledge Injection (KI) as injecting related entity information into the prompt text for an LLM.

Section 2: The Research

To improve the generated responses from LLMs, we developed a prompt-engineering technique called Knowledge Injection (KI) that maps contextual data about entities relevant to a task from a knowledge graph into text space for inclusion in an LLM prompt. Brand experts evaluated the assertions within a generated response (i.e., specification of a location name, a contactable phone number or web address, the owning brand name, or the location address) as correct or incorrect. Additionally, brand experts evaluated the generated responses for overall quality to assess alignment with review response brand standards.

Experiment 1

To test whether using Knowledge Injection (KI) reduces hallucinations in generated responses, we trained bloom-560m on review-response pairs as the dataset for our control. We then trained bloom-560m again on prompts that combined the related entity information with the review-response pairs, producing a KI-prompted LLM.
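The multi-hop example above (a doctor linked to an office only through a shared specialty) can be illustrated with a simple chained lookup. This is a self-contained sketch with hypothetical relationship names, not the traversal logic used inside Yext Content.

```python
# Toy edge list: Doctor A has no direct link to an office, only to a specialty,
# and that specialty is offered at exactly one office. Relationship names are hypothetical.
edges = {
    "Doctor A": [("specializesIn", "Pediatric Gastroenterology")],
    "Pediatric Gastroenterology": [("offeredOnlyAt", "Union Square Office")],
}

def multi_hop(start, path):
    """Follow a chain of relationship names from a starting entity."""
    frontier = [start]
    for predicate in path:
        frontier = [obj for node in frontier
                    for pred, obj in edges.get(node, [])
                    if pred == predicate]
    return frontier

# Two hops (doctor -> specialty -> office) recover the indirect connection.
print(multi_hop("Doctor A", ["specializesIn", "offeredOnlyAt"]))
# ['Union Square Office']
```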
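As a rough illustration of the Knowledge Injection setup in Experiment 1, the sketch below serializes hypothetical related-entity facts into the prompt ahead of a review and generates a response with the public bigscience/bloom-560m checkpoint via Hugging Face transformers. The prompt template, entity fields, and decoding settings are assumptions made for illustration; the paper's fine-tuned model and exact prompt format are not reproduced here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Related-entity facts pulled from the content store, serialized into plain text.
# Entity names, fields, and values are hypothetical.
entity_facts = (
    "Location name: Union Square Office. "
    "Owned by: Example Health. "
    "Phone: (212) 555-0100. "
    "Address: 10 Union Square East, New York, NY."
)
review = "Great visit, but I could never reach anyone to reschedule my appointment."

# Knowledge Injection: the related entity information is placed in the prompt
# ahead of the review so the generated response can ground its assertions in it.
prompt = f"Business facts: {entity_facts}\nReview: {review}\nResponse:"

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9)

# Decode only the newly generated tokens (the response), not the prompt itself.
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
print(response)
```

Holding the model constant and toggling only whether these related-entity facts appear in the prompt mirrors the control versus KI-prompted comparison described above.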
