SK Telecom’s AI Center participated in NeurIPS, a major machine learning conference held from December 8 to 14 in Vancouver, Canada. To keep up with the latest trends, researchers from T-Brain, a team within the AI Center, attended tutorials on recent research, topic-specific workshops, the main conference, and keynote speeches, and also hosted networking sessions with conference attendees.
On December 13, the Meta-Learning workshop (http://metalearning.ml) was held, featuring presentation sessions, invited talks, and a panel discussion; T-Brain’s paper “Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators” was presented there as a poster. Meta-learning, often described as ‘learning to learn’, is one of the most active areas of machine learning research; it studies how to improve training automatically using metadata from earlier machine learning experiments.
Among these topics, T-Brain’s research concerns few-shot classification, a method that learns to classify images after training on only a very small number of labeled examples per class. T-Brain presented an algorithm that can learn images regardless of their domain, extending earlier work that only learned images within a single domain.
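To make the few-shot setting concrete, the sketch below classifies queries by distance to per-class mean embeddings (a common nearest-prototype baseline). This is only an illustration of the problem setup, not the algorithm from T-Brain’s paper.

```python
import math

def prototype_classify(support_feats, support_labels, query_feats):
    """Nearest-prototype few-shot classification (illustrative baseline,
    not the method from the "Disparate Modulators" paper).

    support_feats: embeddings of the few labeled images per class
    support_labels: class label for each support embedding
    query_feats: embeddings of the images to classify
    """
    # Build one prototype per class: the mean embedding of its support images.
    sums, counts = {}, {}
    for feat, label in zip(support_feats, support_labels):
        acc = sums.setdefault(label, [0.0] * len(feat))
        for i, v in enumerate(feat):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    prototypes = {
        label: [v / counts[label] for v in acc] for label, acc in sums.items()
    }
    # Assign each query to the class of its nearest (Euclidean) prototype.
    return [
        min(prototypes, key=lambda c: math.dist(q, prototypes[c]))
        for q in query_feats
    ]
```

With two support images per class, the classifier already produces sensible assignments; real few-shot methods additionally meta-train the embedding function across many such small tasks.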
On December 14, the AI for Social Good workshop (https://aiforsocialgood.github.io/neurips2019/) was held, with presentation sessions, invited talks, and a panel discussion on how AI can contribute to social well-being. In connection with the United Nations’ Sustainable Development Goals, various studies were shared on how AI can solve social problems and bring positive impacts to society.
In 2019, T-Brain initiated a project to localize Visual Question Answering (VQA) technology and began collecting a dataset consisting of 100,000 image-question pairs created by visually impaired people in Korea, together with 1,000,000 answers: 10 answers from different annotators for each image-question pair.
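A single example in such a dataset pairs one image and question with its 10 annotator answers; a simple consensus answer is the majority vote. The field names below are illustrative assumptions, not the published KVQA schema.

```python
from collections import Counter

# Hypothetical layout of one example: an image, a question about it,
# and one answer from each of 10 annotators (field names are assumptions).
record = {
    "image": "photo_000001.jpg",
    "question": "What is in front of me?",
    "answers": ["a door"] * 7 + ["a wall"] * 3,
}

def consensus_answer(answers):
    """Majority vote over the annotator answers for one example."""
    return Counter(answers).most_common(1)[0][0]
```

Collecting 10 answers per question makes this kind of consensus possible even when individual annotators disagree.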
A VQA system takes an image and a natural-language question about it, understands both, and produces an answer. As a social value project for the localization of this technology, T-Brain submitted a paper titled “Korean Localization of Visual Question Answering for Blind People,” describing the project background, process, and data collection, and presented it at the workshop. The dataset can be found at https://sktbrain.github.io/KVQA/
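Datasets with multiple human answers per question are commonly scored with the accuracy metric popularized by the original VQA benchmark, where a prediction counts as fully correct if at least three annotators gave the same answer. Whether KVQA uses exactly this metric is an assumption; the sketch below shows the standard formula.

```python
def vqa_accuracy(prediction, human_answers):
    """VQA-style accuracy from the original VQA benchmark: a prediction
    is fully correct if at least 3 annotators gave the same answer, and
    partially correct otherwise. (Whether KVQA adopts this exact metric
    is an assumption.)"""
    matches = sum(1 for a in human_answers if a == prediction)
    return min(matches / 3.0, 1.0)
```

For example, a prediction matching only one of ten annotators scores 1/3, while matching three or more scores a full 1.0.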
During the conference, T-Brain held two networking events with professors and students whose research topics are closely related to T-Brain’s ongoing work, as well as with other researchers attending the conference. In 2020, T-Brain will continue to lead AI development in Korea with in-depth, top-notch research.