Gyuwan Kim arXiv Github Motivations to Faster Neural Query Auto-Completion When browsing on search engines such as NAVER, users type in the information they are looking for. Query auto-completion (QAC) suggests the most likely completion candidates as the user types, and it is one of the essential features of search engines. In this […]
Category: Research
Mixture Content Selection for Diverse Sequence Generation (EMNLP-IJCNLP 2019)
Jaemin Cho, Minjoon Seo, Hannaneh Hajishirzi arXiv Github Seq2Seq is not for One-to-Many Mapping Comparison between the standard encoder-decoder model and ours An RNN Encoder-Decoder (Seq2Seq) model is widely used for sequence generation, in particular, machine translation in which neural models are as competent as human […]
What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis (ICCV 2019 Oral)
Jeonghun Baek, Geewook Kim, Junyeop Lee, Sungrae Park, Dongyoon Han, Sangdoo Yun, Seong Joon Oh, Hwalsuk Lee arXiv Github Motivations for this Research Examples of regular (IIIT5k, SVT, IC03, IC13) and irregular (IC15, SVTP, CUTE) real-world datasets Reading text in natural scenes, shown above and referred to as scene text recognition (STR), has […]
A Comprehensive Overhaul of Feature Distillation (ICCV 2019)
Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, Jin Young Choi arXiv Github Project Page Knowledge Distillation in a Nutshell The general process of knowledge distillation Knowledge distillation denotes a method in which a small model is trained to mimic a pre-trained large model by passing the large model's outputs to the small model […]
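The soft-target recipe that this excerpt summarizes can be sketched as follows — a minimal plain-Python illustration of the common temperature-scaled distillation loss (KL divergence between softened teacher and student distributions), not the feature-distillation variant the paper itself proposes; all logit values below are made up:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on temperature-softened outputs.

    The student is trained to match the teacher's softened class
    probabilities instead of (or in addition to) hard one-hot labels.
    """
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)   # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a 3-class problem.
teacher = [4.0, 1.0, 0.5]
student = [2.0, 1.5, 1.0]
loss = distillation_loss(teacher, student)
```

In practice this term is combined with the usual cross-entropy on ground-truth labels; the loss is zero exactly when the student reproduces the teacher's softened distribution.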
Photorealistic Style Transfer via Wavelet Transforms (ICCV 2019)
Jaejun Yoo, Youngjung Uh, Sanghyuk Chun, Byeongkyu Kang, Jung-Woo Ha arXiv Github Reddit Jaejun Yoo’s Blog What Is Photorealistic Style Transfer and Why Is It Needed? Photorealistic stylization results. Given (a) an input pair (top: content, bottom: style), the results of (b) WCT (c) PhotoWCT and (d) our model are shown. Every result is produced […]