
Novel, Intuitive, Better, Applicable

  • ABOUT
  • Publications
  • Personal Projects
  • Reproduction of Published Papers
    • Inyoung Cho

      Hi, I'm a master's student in the Scalable Graphics, Vision, & Robotics Lab at KAIST's School of Computing. Before that, I earned my B.S. from KAIST.

       

      Research Interests

      Physically-based Rendering, Deep Learning

       

      Education

      M.S., Computer Science, KAIST. Sep. 2019 - Present

      • GPA: 4.25/4.3

      B.S., Computer Science and Mathematics (double major), KAIST. Mar. 2015 - Aug. 2019

      • Summa Cum Laude (GPA: 4.0/4.3)

    • Publications

Currently in a personal repository.

      Monte Carlo Reconstruction

      2020.02 ~ 2021.02

      Coming soon.

    • Personal Projects

Currently in a personal repository.

      OptaGen: OptiX-based Automated Dataset Generator

      2019.10 ~ 2020.03

Sorry! This work is still private. A public version will be released soon. Stay tuned!

       

NVIDIA OptiX is a promising GPU-based ray-tracing SDK.

I am currently implementing an OptiX-based program that automatically generates rendered images, primarily for training deep convolutional denoising models.

The program will support random camera transformations, random material changes, auxiliary buffer processing, and more.
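As a rough illustration of the randomization above, a generator like this might sample a camera pose and material parameters per scene. The sketch below is a hypothetical NumPy mock-up, not OptaGen itself (which builds on OptiX/C++); all names and parameter ranges here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_camera_pose(radius_range=(2.0, 5.0)):
    """Sample a camera position uniformly on a random-radius sphere
    around the scene origin (illustrative; not OptaGen's actual scheme)."""
    r = rng.uniform(*radius_range)
    theta = rng.uniform(0.0, 2.0 * np.pi)    # azimuth
    phi = np.arccos(rng.uniform(-1.0, 1.0))  # polar angle, uniform on the sphere
    return r * np.array([np.sin(phi) * np.cos(theta),
                         np.cos(phi),
                         np.sin(phi) * np.sin(theta)])

def random_material():
    """Perturb a few common BRDF parameters (hypothetical parameterization)."""
    return {
        "albedo": rng.uniform(0.0, 1.0, size=3),
        "roughness": rng.uniform(0.05, 1.0),
        "metallic": rng.uniform(0.0, 1.0),
    }

# One dataset sample = (camera, materials); the renderer would additionally
# emit auxiliary buffers (e.g., albedo, normal) as denoiser inputs.
sample = {
    "camera": random_camera_pose(),
    "materials": [random_material() for _ in range(4)],
}
```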

    • Reproduction of Published Papers

      Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy

      Qing Yu, Kiyoharu Aizawa

      In ICCV, 2019

In this paper, the authors propose a two-headed neural network and maximize the discrepancy between the two classifiers to detect out-of-distribution samples while correctly classifying in-distribution inputs. They also introduce a new problem setting that utilizes unlabeled data for unsupervised training, whereas previous works exploit only labeled in-distribution samples.
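The detection rule reduces to scoring each input by how much the two classifier heads disagree. The NumPy sketch below shows that scoring step only; the logits and the threshold value are illustrative, not the paper's exact setup:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discrepancy(logits_a, logits_b):
    """L1 distance between the two heads' softmax outputs,
    used as the out-of-distribution score."""
    return np.abs(softmax(logits_a) - softmax(logits_b)).sum(axis=-1)

# In-distribution: the heads agree; OOD: training pushed them apart.
in_dist = discrepancy(np.array([[4.0, 0.0]]), np.array([[3.8, 0.1]]))
ood = discrepancy(np.array([[2.0, 0.0]]), np.array([[0.0, 2.0]]))

threshold = 0.5  # illustrative cutoff
is_ood = ood > threshold
```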

      Learning Loss for Active Learning

      Donggeun Yoo, In So Kweon

      In CVPR, 2019

In this paper, the authors propose a simple, task-agnostic active learning technique. They build a “loss prediction module” that predicts the target losses of unlabeled inputs. Using this predicted loss as an uncertainty measure, they determine which of the unlabeled data would be most valuable to annotate.
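Once losses are predicted, the acquisition step is just a ranking: label the unlabeled samples with the largest predicted loss. A minimal NumPy sketch of that selection rule (all numbers illustrative):

```python
import numpy as np

def select_for_labeling(predicted_losses, budget):
    """Return indices of the `budget` unlabeled samples with the
    highest predicted loss (the ranking-based acquisition rule)."""
    order = np.argsort(predicted_losses)[::-1]  # descending by predicted loss
    return order[:budget]

pred = np.array([0.1, 0.9, 0.3, 0.7, 0.2])  # predicted loss per unlabeled sample
picked = select_for_labeling(pred, budget=2)  # → indices 1 and 3
```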

    Copyright © 2019 - Proudly built with Strikingly
