Our technology is the fruit of extensive R&D into some of the most fundamental problems in Conversational AI. Intelligent voice interfaces are moving from the realm of science fiction into our daily lives, with virtual assistants entering our homes and phones and following us through work and travel for a hands-free experience. Our focus lies at the intersection of Artificial Intelligence, Natural Language Processing, and Linguistics, an intersection that is giving rise to a growing number of fields across everything NLU- and Dialogue-related.
Our industry-ready semantic representation is a new approach to the unpredictable nature of human language. It balances expressiveness against annotation cost and is powered by state-of-the-art structured machine learning.
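As a rough, illustrative sketch (not our exact formalism), a frame-style semantic representation ties an intent to typed slots, each grounded in spans of the user's utterance. The intent and slot names below are hypothetical examples:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Span:
    """A character span in the utterance that grounds a slot value."""
    start: int
    end: int
    text: str

@dataclass
class SemanticFrame:
    """Illustrative frame: one intent plus typed slots tied to evidence spans."""
    intent: str
    slots: Dict[str, List[Span]] = field(default_factory=dict)

utterance = "book a table for four at 7pm"
frame = SemanticFrame(
    intent="book_restaurant",
    slots={
        "party_size": [Span(17, 21, "four")],
        "time": [Span(25, 28, "7pm")],
    },
)
```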
Remembering what has been said is at the core of any conversation. Our innovative approach to memory, combining real-world data, databases and NLP, pushes the boundaries of what's possible.
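To make this concrete, here is a minimal sketch (not our production system) of a conversational memory that keeps what the user stated, grounds mentioned entities against a database, and retains the raw turns. The toy database and slot names are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Memory:
    """Illustrative conversational memory: stated facts plus database groundings."""
    stated: Dict[str, Any] = field(default_factory=dict)    # what the user said
    grounded: Dict[str, Any] = field(default_factory=dict)  # what the database confirms
    history: List[str] = field(default_factory=list)        # raw turns, kept for reference resolution

    def update(self, turn: str, nlu_slots: Dict[str, Any],
               db: Dict[str, Dict[str, Any]]) -> None:
        self.history.append(turn)
        self.stated.update(nlu_slots)
        # Ground any entity the user mentioned against the (toy) database.
        name = nlu_slots.get("restaurant")
        if name and name in db:
            self.grounded[name] = db[name]

db = {"Nopi": {"cuisine": "mediterranean", "area": "Soho"}}
memory = Memory()
memory.update("Book Nopi for tonight", {"restaurant": "Nopi", "date": "tonight"}, db)
print(memory.stated, memory.grounded)
```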
One of the biggest determinants of real conversational ability is interpretable and reasoned decision making. Our system-wide goal tracking and interpretable dialogue policy represent a leap forward.
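The following is a simplified sketch of what "interpretable" can mean in practice (our actual policy is learned, not rule-based): every decision is returned together with a human-readable reason, and the goal schema below is a hypothetical example.

```python
from typing import Dict, Tuple

REQUIRED = ("restaurant", "date", "party_size")  # illustrative goal schema

def decide(state: Dict[str, str]) -> Tuple[str, str]:
    """Pick the next system action and return the reason alongside it,
    so every decision can be inspected and explained."""
    for slot in REQUIRED:
        if slot not in state:
            return f"request({slot})", f"goal incomplete: '{slot}' is still unknown"
    return "confirm_booking", "all goal slots are filled, so we move to confirmation"

action, reason = decide({"restaurant": "Nopi", "date": "tonight"})
print(action, "-", reason)  # request(party_size) - goal incomplete: 'party_size' is still unknown
```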
Receiving precise feedback or information in natural language is crucial for the next generation of interfaces. Our Natural Language Generation produces precise responses grounded in your structured data, databases and APIs.
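As a self-contained illustration (a real system would use a trained generator rather than a template), generation from structured data can be thought of as turning one record from a database or API into a fluent response. The field names here are assumptions:

```python
from typing import Dict

def realise(record: Dict[str, str]) -> str:
    """Turn one structured record (e.g. a database row or API response) into a response."""
    return (f"{record['name']} is a {record['cuisine']} restaurant in {record['area']} "
            f"and has a table free at {record['slot']}.")

print(realise({"name": "Nopi", "cuisine": "Mediterranean", "area": "Soho", "slot": "7pm"}))
```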
Training data is key for Machine Learning models. Our proprietary data pipeline handles everything end-to-end so that you don't have to: we generate, annotate, inspect and augment the data with our in-house technology and human-in-the-loop approaches.
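A minimal sketch of those four stages chained together is shown below; the function bodies are placeholders for illustration, not our in-house tooling, and the label function stands in for model pre-annotation that a human would then correct.

```python
from typing import Callable, Dict, Iterable, List

Example = Dict[str, str]

def generate() -> List[Example]:
    """Seed utterances, e.g. from templates or logs."""
    return [{"text": "book a table for four", "label": ""}]

def annotate(batch: Iterable[Example], label_fn: Callable[[str], str]) -> List[Example]:
    """Attach labels; label_fn could be a model pre-annotation that a human corrects."""
    return [{**ex, "label": label_fn(ex["text"])} for ex in batch]

def inspect(batch: Iterable[Example]) -> List[Example]:
    """Drop anything flagged as unusable (here: anything left unlabelled)."""
    return [ex for ex in batch if ex["label"]]

def augment(batch: Iterable[Example]) -> List[Example]:
    """Cheap augmentation: add a politeness prefix as a stand-in for paraphrasing."""
    items = list(batch)
    return items + [{**ex, "text": "please " + ex["text"]} for ex in items]

data = augment(inspect(annotate(generate(), lambda text: "book_restaurant")))
print(data)
```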
Understanding how the understanding model behaves is key to providing the best capabilities for users. Our unique analytics suite provides in-depth analysis of the models deployed in production and during development, leading to precise improvements and insights.
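One simple building block of such analysis, shown here only as an illustrative sketch rather than our analytics suite itself, is a per-intent error breakdown that surfaces the most common confusions first:

```python
from collections import Counter
from typing import List, Tuple

def error_breakdown(pairs: List[Tuple[str, str]]) -> Counter:
    """Count (gold, predicted) confusions so the most common mistakes surface first."""
    return Counter((gold, pred) for gold, pred in pairs if gold != pred)

predictions = [("book_restaurant", "book_restaurant"),
               ("book_restaurant", "find_restaurant"),
               ("find_restaurant", "book_restaurant")]
for (gold, pred), count in error_breakdown(predictions).most_common():
    print(f"{gold} -> {pred}: {count}")
```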
Our research philosophy is to question the 'norm' and other such assumptions until we identify every artificial limit. Once we have identified the walls of the 'box', we can start thinking outside it, surfing the vast ocean of ideas and diving in for extraordinary results.
Why train a new NLU for every topic of conversation? We are developing the next generation of NLU that works universally on any topic without extensive retraining.
Imagine an off-the-shelf NLU which works for you and only requires a single configuration.
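To give a flavour of what "a single configuration" could mean, here is a hypothetical sketch: the only domain-specific artefact would be a declarative description of intents and slots, consumed by a pretrained, topic-agnostic NLU. All names and fields below are illustrative assumptions, not a real API.

```python
# Hypothetical single configuration: the only domain-specific artefact the NLU would need.
CONFIG = {
    "domain": "restaurant_booking",
    "intents": ["book_restaurant", "find_restaurant", "cancel_booking"],
    "slots": {
        "party_size": {"type": "number"},
        "time": {"type": "time"},
        "restaurant": {"type": "entity", "lookup": "restaurants.csv"},
    },
}

def load_nlu(config: dict) -> None:
    """Stand-in for a pretrained, topic-agnostic NLU that only reads the config."""
    print(f"Loaded NLU for '{config['domain']}' with {len(config['intents'])} intents")

load_nlu(CONFIG)
```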
Why must human-machine dialogues be constrained to rigid, pre-defined paths? We are integrating recursive flows to add flexibility and robustness to your dialogue experience.
Imagine conversing freely, jumping between topics as you go, while a machine still understands and keeps up with you.
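One way to picture recursive flows, sketched here purely for illustration, is a stack of active flows: a topic jump pauses the current flow, and once the digression finishes the paused flow resumes where it left off.

```python
from typing import List

class FlowStack:
    """Illustrative recursive flow handling: a topic jump pushes the current flow
    onto a stack, and it is resumed once the digression is finished."""
    def __init__(self) -> None:
        self.stack: List[str] = []

    def start(self, flow: str) -> None:
        self.stack.append(flow)

    def digress(self, flow: str) -> None:
        print(f"Pausing '{self.stack[-1]}' to handle '{flow}'")
        self.stack.append(flow)

    def finish(self) -> None:
        done = self.stack.pop()
        if self.stack:
            print(f"Finished '{done}', resuming '{self.stack[-1]}'")

flows = FlowStack()
flows.start("book_restaurant")
flows.digress("ask_opening_hours")   # user jumps topic mid-booking
flows.finish()                       # booking flow picks up where it left off
```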
Why only focus on the intuitive power of neural systems and ignore the higher-level cognition offered by symbolic reasoning systems? We are integrating deep learning models with first-order logic to empower virtual agents with neural-symbolic reasoning.
Imagine a system that just gets it...and can still do maths.
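As a toy sketch of the neural-symbolic idea (the model call is stubbed out and the rule is a made-up example), a neural component proposes an interpretation and a symbolic constraint then checks whether it is logically admissible:

```python
from typing import Dict

def neural_intent_scores(utterance: str) -> Dict[str, float]:
    """Stand-in for a neural model; returns a probability per intent."""
    return {"transfer_money": 0.92, "check_balance": 0.08}

def symbolic_check(intent: str, facts: Dict[str, float]) -> bool:
    """First-order-style constraint: a transfer is only valid if amount <= balance."""
    if intent == "transfer_money":
        return facts["amount"] <= facts["balance"]
    return True

scores = neural_intent_scores("send 50 pounds to Sam")
intent = max(scores, key=scores.get)
ok = symbolic_check(intent, {"amount": 50.0, "balance": 120.0})
print(intent, "allowed" if ok else "blocked by reasoning rule")
```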
April 2021
EACL 2021
Dialogue Systems are becoming ubiquitous in various forms and shapes - virtual assistants (Siri, Alexa, etc.), chat-bots, customer support and chit-chat systems, just to name a few. The advances in language models and their publication have democratised advanced NLP. However, data remains a crucial bottleneck. Our contribution to this essential pillar …
November 2020
EMNLP 2020
Following the major success of neural language models (LMs) such as BERT or GPT-2 on a variety of language understanding tasks, recent work focused on injecting (structured) knowledge from external resources into these models.
September 2019
EMNLP 2019 - Hong Kong, China
Dialogue systems have the potential to change how people interact with machines but are highly dependent on the quality of the data used to train them. It is therefore important to develop good dialogue annotation tools which can improve the speed and quality of dialogue data annotation.
October 2018
CoNLL 2018 - Brussels, Belgium
Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation but the underlying properties of datasets are discovered on an ad-hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models.