Need a Note Taker? This AI Can Help.

A new artificial intelligence tool is bringing notetaking up to speed and may help future digital assistants ease the fear of ever missing a meeting again.

It’s an age-old problem: We are inundated with informal forms of communication, from phone calls and remote video conferences to text conversations on group messaging platforms like Slack or Microsoft Teams. Remembering the key points of each discussion can at times be overwhelming, not to mention the stress caused by missing a meeting or seeing a couple hundred messages stack up while you were out for lunch.

This digital solution, developed by Georgia Tech researchers and presented in a paper this week at the 2020 Conference on Empirical Methods in Natural Language Processing, can assuage those concerns by generating summaries of informal conversations. Using natural language processing, a branch of artificial intelligence concerned with human language, the method identifies conversational structure from particular keywords.

“Think about informal conversational structure: It has an opening, problem statements, discussions, a conclusion,” said Diyi Yang, an assistant professor in the School of Interactive Computing and a co-author on the paper. “We want to mine those structures to teach the model what may be informative within the conversation for generating better summaries.”

Any variation of “hello” or “good,” for example, might signal a greeting. Action words likely indicate some kind of intention, while dates or times point to a discussion of plans and the conclusions reached about them. Knowing this, the model can better represent the unstructured conversation and craft a more accurate summary.
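As a rough sketch of that idea (not the authors’ implementation), a simple keyword-based pass could tag each utterance with a likely stage of the conversation before anything is summarized. The stage names and cue words below are illustrative assumptions, not the keyword sets or structure labels used in the paper:

```python
import re

# Illustrative keyword cues for stages of an informal conversation.
# Hypothetical examples only -- not the labels used in the paper.
STAGE_CUES = {
    "greeting":   [r"\bhello\b", r"\bhi\b", r"\bgood (morning|afternoon)\b"],
    "intention":  [r"\bneed to\b", r"\blet's\b", r"\bshould\b"],
    "scheduling": [r"\b\d{1,2}(:\d{2})?\s?(am|pm)\b",
                   r"\b(monday|tuesday|wednesday|thursday|friday)\b"],
    "closing":    [r"\bthanks\b", r"\bbye\b", r"\bsee you\b"],
}

def tag_stage(utterance: str) -> str:
    """Return the first stage whose cue matches, otherwise 'discussion'."""
    text = utterance.lower()
    for stage, patterns in STAGE_CUES.items():
        if any(re.search(p, text) for p in patterns):
            return stage
    return "discussion"

conversation = [
    "Hi everyone, good morning!",
    "We need to finalize the launch plan.",
    "Can we meet Friday at 2 pm to review it?",
    "Thanks, see you then.",
]

for turn in conversation:
    print(f"{tag_stage(turn):11} | {turn}")
```

A tagging pass like this gives a summarization model a rough map of where the greeting, problem statement, discussion, and conclusion sit in an otherwise unstructured exchange.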

These types of summaries are more important now than ever. More people around the world are working or attending school remotely, more discussions are being handled over the phone or by video conference, and plans are being made through applications like Microsoft Teams. Previous research on summarization has focused on formal content like books, papers, or news articles, while the existing body of work on informal language is relatively sparse.

“This is applicable now more than ever because of where we are,” Yang said. “There’s so much online and text conversation, and we have way too much information. We need help storing it in a shorter and more structured way. If you’re away from your laptop for 30 minutes, it’s important to be able to get a quick summary of what you missed.”

Challenges still exist. One is referral within a conversation, where a speaker calls back to a discussion point raised earlier in the meeting. Typos, slang, repetition, interruptions, changes in speaker role, and shifts in language can also interfere with the model’s ability to determine structure. These are issues Yang and her collaborator are continuing to address.

“This is a great starting point,” Yang said.

The work is presented in the paper “Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization,” co-authored by Yang and Jiaao Chen, a second-year Ph.D. student in the School of Interactive Computing.