Associate Professor Hidetaka Kamigaito (Natural Language Processing Laboratory)
TITLE: Recent Advances of Negative Sampling in Natural Language Processing
In natural language processing (NLP), models often have to learn over a very large label space, such as the words and phrases of a vocabulary. The negative sampling (NS) loss, which approximately reduces the number of labels considered during training, therefore plays an important role in cutting computational cost. In this presentation, I will introduce the NS loss as it is used in various NLP tasks, other loss functions that play similar roles, and their recent applications.
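As a rough illustration of the idea (not the specific formulations covered in the talk), the classic word2vec-style NS loss scores one observed label against only k sampled negatives, instead of normalizing over the entire vocabulary. The variable names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_loss(v_center, u_positive, u_negatives):
    """Negative sampling loss: push the score of the observed
    (positive) label up and the scores of k sampled negative
    labels down, touching only k+1 labels per training example
    rather than the full label vocabulary."""
    pos = np.log(sigmoid(u_positive @ v_center))
    neg = np.sum(np.log(sigmoid(-(u_negatives @ v_center))))
    return -(pos + neg)

# Toy setup: 8-dimensional embeddings, 5 negatives per positive.
d, k = 8, 5
v = rng.normal(size=d)          # input (center) embedding
u_pos = rng.normal(size=d)      # embedding of the observed label
u_neg = rng.normal(size=(k, d)) # embeddings of k sampled negatives
print(ns_loss(v, u_pos, u_neg))
```

The computational saving comes from the gradient only involving the k+1 sampled label embeddings, rather than a softmax over every label.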