Author: Dawoon Na

Dawoon Na wholeheartedly welcomes the opportunity to discuss her posts with readers. If you have any feedback on the posts, please email her.

DEVIEW 2019

Since 2008, DEVIEW has been South Korea's most prominent tech forum on software engineering, a venue where developers and researchers share ideas and find inspiration. Approximately 3,000 local and foreign software developers and tech industry officials participated in this year's conference. DEVIEW 2019 was held at COEX Grand Ballroom, Seoul, […]

NAVER AI Hackathon 2019 #Speech

As of October 27, 2019, the third NAVER hackathon has ended. NAVER selected 100 teams through document screening, and the shortlisted teams were invited to the preliminary round. The second, online round was held on NSML (Sung et al., 2017) from September 16 to October 4, where participants solved speech recognition problems using the Korean […]

Subword Language Model for Query Auto-Completion (EMNLP-IJCNLP 2019)

Gyuwan Kim | arXiv | GitHub — Motivation for Faster Neural Query Auto-Completion: When browsing search engines such as NAVER, users type in the information they want to look for. Query auto-completion (QAC) suggests the most likely completion candidates as a user enters input; it is one of the essential features of search engines. In this […]
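The excerpt above defines QAC as suggesting likely completions for a typed prefix. As a rough illustration of the task itself (not the paper's method, which is a subword language model), here is a minimal trie-based prefix completer; the queries and frequencies below are invented for the example, not drawn from NAVER's logs.

```python
# Minimal sketch of frequency-ranked prefix completion with a trie.
# All query strings and counts are hypothetical.

from collections import defaultdict


class Trie:
    def __init__(self):
        self.children = defaultdict(Trie)
        self.query = None  # full query stored at terminal nodes
        self.freq = 0

    def insert(self, query, freq):
        node = self
        for ch in query:
            node = node.children[ch]
        node.query, node.freq = query, freq

    def complete(self, prefix, k=3):
        # walk down to the node matching the prefix
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # collect every stored query under that node, rank by frequency
        found, stack = [], [node]
        while stack:
            n = stack.pop()
            if n.query is not None:
                found.append((n.freq, n.query))
            stack.extend(n.children.values())
        return [q for _, q in sorted(found, reverse=True)[:k]]


trie = Trie()
for q, f in [("naver map", 90), ("naver mail", 120), ("naver news", 80)]:
    trie.insert(q, f)

print(trie.complete("naver ma"))  # → ['naver mail', 'naver map']
```

A neural QAC model replaces this lookup with a language model that scores (and can generate) completions the log has never seen, which is what makes decoding speed the bottleneck the post discusses.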

INTERSPEECH 2019

INTERSPEECH 2019, the world's most prominent conference on the science and technology of speech processing, features world-class speakers, tutorials, oral and poster sessions, challenges, exhibitions, and satellite events, gathering thousands of attendees from all over the world. This year's INTERSPEECH was held in Graz, Austria, with NAVER-LINE running a corporate exhibition booth as […]

Mixture Content Selection for Diverse Sequence Generation (EMNLP-IJCNLP 2019)

Jaemin Cho, Minjoon Seo, Hannaneh Hajishirzi | arXiv | GitHub — Seq2Seq Is Not for One-to-Many Mapping: [Figure: comparison between the standard encoder-decoder model and ours] An RNN encoder-decoder (Seq2Seq) model is widely used for sequence generation, in particular machine translation, in which neural models are as competent as human […]