Engineering Playground
Confetti AI
Confetti AI is an educational platform helping people learn the skills to succeed in artificial intelligence careers. It provides a collection of targeted resources and tools to empower the next generation of AI practitioners.
Mihail Eric
Subscribe to our newsletter!
Artificial Intelligence School
AI School is a mobile app designed to teach the basics of artificial intelligence, machine learning, and deep learning. It includes a comprehensive lesson plan for learning fundamental principles.
Mihail Eric
Available on both the App Store and Google Play!
Publications
A Machine Learning Primer
I provide an overview of key machine learning concepts that show up in data science and ML interviews. This primer includes core theory as well as practice problems to test your knowledge.
Mihail Eric
Confetti AI
Example-Driven Intent Prediction With Observers
We propose two approaches for improving the generalizability of utterance classification models (example-driven training and observers) that, when used in combination, achieve state-of-the-art results in full-data and few-shot settings on several intent prediction datasets.
Shikib Mehri, Mihail Eric, Dilek Hakkani-Tur
arXiv:2010.08684
DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue
We introduce DialoGLUE (Dialogue Language Understanding Evaluation), a public benchmark consisting of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks. We introduce new models that achieve state-of-the-art results on 5/7 datasets in the benchmark.
Shikib Mehri, Mihail Eric, Dilek Hakkani-Tur
arXiv:2009.13570
Proceedings of 2nd ACL NLP for Conversational AI Workshop
The goal of this workshop is to bring together NLP researchers and practitioners across fields, alongside experts in speech and machine learning, to discuss the current state of the art and new approaches, share insights and challenges, bridge the gap between academic research and real-world product deployment, and shed light on future directions.
Tsung-Hsien Wen, Asli Celikyilmaz, Zhou Yu, Alexandros Papangelis, Mihail Eric, Anuj Kumar, Inigo Casanueva, Rushin Shah
ACL 2020
Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access
We propose to expand coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, Dilek Hakkani-Tur
SIGDial 2020, arXiv:2006.03533
Policy-Driven Neural Response Generation for Knowledge-Grounded Dialogue Systems
We propose a technique for using a dialogue policy to plan the content and style of target responses in the form of an action plan.
Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, Dilek Hakkani-Tur
Preprint 2020, arXiv:2005.12529
Just Ask: An Interactive Learning Framework for Vision and Language Navigation
We propose a novel scheme for interactive human-in-the-loop learning, achieving more data-efficient performance on a vision-and-language navigation task.
Ta-Chung Chi, Mihail Eric, Seokhwan Kim, Minmin Shen, Dilek Hakkani-Tur
AAAI 2020, arXiv:1912.00915
MultiWOZ 2.1: Multi-Domain Dialogue State Corrections and State Tracking Baselines
We release an updated version of the Cambridge MultiWOZ dataset with dialogue state annotation corrections and corresponding state tracking baselines.
Mihail Eric*, Rahul Goel*, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Dilek Hakkani-Tur
Preprint 2019, arXiv:1907.01669
Key Value Retrieval Networks for Task-Oriented Dialogue
We demonstrate the efficacy of a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism.
Mihail Eric, Lakshmi Krishnan, Francois Charette, Christopher D. Manning
SIGDial 2017 Oral Presentation, arXiv:1705.05414
The Pragmatics of Indirect Commands in Collaborative Discourse
We show that models with domain-specific grounding can effectively realize the pragmatic reasoning that is necessary for more robust natural language interaction.
Matthew Lamm* and Mihail Eric*
International Conference on Computational Semantics 2017, arXiv:1705.03454
Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings
To model both structured knowledge and unstructured language in a novel dialogue setting, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses.
He He, Anusha Balakrishnan, Mihail Eric, Percy Liang
ACL 2017, arXiv:1704.07130
A Copy-Augmented Sequence-to-Sequence Architecture Gives Good Performance on Task-Oriented Dialogue
We show the effectiveness of simple sequence-to-sequence neural architectures with a copy mechanism, outperforming more sophisticated models on a standard task-oriented dialogue dataset.
Mihail Eric, Christopher D. Manning
EACL 2017 Oral Presentation, arXiv:1701.04024
SceneSeer: 3D Scene Design with Natural Language
We present SceneSeer: an interactive text to 3D scene generation system with a learned spatial knowledge base that allows a user to design 3D scenes using natural language.
Angel X. Chang, Mihail Eric, Manolis Savva, Christopher D. Manning
Preprint 2017, arXiv:1703.00050
Technical Reports
Using Contextual Information for Neural Natural Language Inference
We investigate neural memory network architectures for the task of natural language inference and propose models for using attention across relevant semantic phrases to inform common sense reasoning.
Chris Billovits* and Mihail Eric*
Preprint 2016
Hitting Depth: Investigating Robustness to Adversarial Examples in Deep Convolutional Neural Networks
We show a process for visualizing and identifying changes in activations between adversarial images and their regular counterparts and propose a Bayesian framework for improving prediction accuracy on adversarial examples.
Chris Billovits*, Mihail Eric*, and Nipun Agrawal*
Preprint 2016
Wordwise Inference and Entailment Now
We implement a random forest classifier with a carefully engineered and selected collection of linguistic and semantic features for the task of natural language inference, achieving an F1 of 80.9% on the SemEval-2014 Dataset.
Chris Billovits*, Mihail Eric*, and Chris Guthrie*
Preprint 2016