{"id":4532,"date":"2023-01-23T15:00:21","date_gmt":"2023-01-23T06:00:21","guid":{"rendered":"http:\/\/www-dsc.naist.jp\/dsc_naist\/?p=4532"},"modified":"2023-03-02T10:50:22","modified_gmt":"2023-03-02T01:50:22","slug":"dsc-talk-in-january-2","status":"publish","type":"post","link":"http:\/\/www-dsc.naist.jp\/dsc_naist\/en\/dsc-talk-in-january-2\/","title":{"rendered":"DSC talk in January"},"content":{"rendered":"<p>Assoc. Prof. Hidetaka Kamigaito gave a lecture in January.<\/p>\n<p>The details are as follows.<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-family: georgia, palatino, serif; font-size: 12pt;\">==================== <\/span><\/p>\n<p>Assoc. Prof. Hidetaka Kamigaito (Natural Language Processing Laboratory)<\/p>\n<p>TITLE: Recent Advances of Negative Sampling in Natural Language Processing<\/p>\n<p>Abstract:<\/p>\n<p>In natural language processing (NLP), models often learn a large number of labels, such as words and phrases. Therefore, loss based on negative sampling (NS) loss, which can approximately reduce the number of labels during training, plays an important role in reducing computational costs. In this presentation, I&#8217;ll introduce NS loss, used in various NLP tasks with other loss functions that play similar roles, and their recent applications.<\/p>\n<div class=\"block-list-appender wp-block\" tabindex=\"-1\" contenteditable=\"false\" data-block=\"true\">\n<div class=\"block-editor-default-block-appender\" data-root-client-id=\"\">\n<p class=\"block-editor-default-block-appender__content\" tabindex=\"0\" role=\"button\" aria-label=\"\u30c7\u30d5\u30a9\u30eb\u30c8\u30d6\u30ed\u30c3\u30af\u3092\u8ffd\u52a0\">\u00a0<\/p>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Assoc. Prof. Hidetaka Kamigaito gave a lecture in January. The details are as follows. &nbsp; ==================== Assoc. Prof. Hidetaka Kamigaito (Natural Language Processing Laboratory) TITLE: Recent Advances of Negative Sampling in Natural Language Processing Abstract: In natural language processing (NLP), models often learn a large number of labels, such as words and phrases. 
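To make the computational saving concrete, here is a minimal Python sketch of a word2vec-style NS loss. It is an illustrative assumption, not the speaker's exact formulation; the function and variable names and the uniform noise distribution are hypothetical. Instead of normalizing over all V labels as a softmax would, each update scores only the one correct label and k sampled negatives.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_loss(input_vec, label_embs, pos_id, k, noise_probs):
    """Negative sampling loss for one training example (illustrative sketch).

    input_vec   : (d,) hidden representation of the input
    label_embs  : (V, d) output embeddings, one row per label
    pos_id      : index of the correct label
    k           : number of negative samples
    noise_probs : (V,) noise distribution to sample negatives from
    """
    # Draw k negative labels from the noise distribution; only these
    # k + 1 labels contribute to the loss, not the full vocabulary.
    neg_ids = rng.choice(len(noise_probs), size=k, p=noise_probs)
    pos_score = label_embs[pos_id] @ input_vec
    neg_scores = label_embs[neg_ids] @ input_vec
    # Push the positive score up and the sampled negative scores down.
    return -np.log(sigmoid(pos_score)) - np.sum(np.log(sigmoid(-neg_scores)))

# Example: a vocabulary of 10,000 labels, but each update touches only 6.
V, d = 10_000, 64
labels = rng.normal(size=(V, d))
x = rng.normal(size=d)
uniform = np.full(V, 1.0 / V)
loss = ns_loss(x, labels, pos_id=42, k=5, noise_probs=uniform)

Because the per-example cost scales with k rather than with V, this kind of loss stays cheap even when the label set (e.g., a full vocabulary) is very large, which is the motivation discussed in the talk.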