๐Ÿ•
AI Paper Study
  • AI Paper Study
  • Computer Vision
    • SRCNN(2015)
      • Introduction
      • CNN for SR
      • Experiment
      • ๊ตฌํ˜„ํ•ด๋ณด๊ธฐ
    • DnCNN(2016)
      • Introduction
      • Related Work
      • DnCNN Model
      • Experiment
      • ๊ตฌํ˜„ํ•ด๋ณด๊ธฐ
    • CycleGAN(2017)
      • Introduction
      • Formulation
      • Results
      • ๊ตฌํ˜„ํ•ด๋ณด๊ธฐ
  • Language Computation
    • Attention is All You Need(2017)
      • Introduction & Background
      • Model Architecture
      • Appendix - Positional Encoding ๊ฑฐ๋ฆฌ ์ฆ๋ช…
  • ML Statistics
    • VAE(2013)
      • Introduction
      • Problem Setting
      • Method
      • Variational Auto-Encoder
      • ๊ตฌํ˜„ํ•ด๋ณด๊ธฐ
      • Appendix - KL Divergence ์ ๋ถ„
  • ์ง๊ด€์  ์ดํ•ด
    • Seq2Seq
      • Ko-En Translation
Powered by GitBook
On this page
  • Deep Neural Network - Image Denoising
  • Residual Learning
  • Batch Normalization

Was this helpful?

DnCNN (2016)

Related Work

Deep Neural Network - Image Denoising

  • Jain & Seung claimed that CNNs matched or outperformed MRF (Markov Random Field) methods for image denoising.

  • MLPs (multilayer perceptrons) were applied to image denoising.

  • Stacked sparse autoencoders were used to remove Gaussian noise, achieving performance comparable to K-SVD.

  • TNRD was designed so that inference runs in a fixed, finite number of gradient descent steps.

  • However, TNRD and BM3D are trained for a specific noise level.

Residual Learning

Residual learning was devised to address performance degradation, the phenomenon in which training becomes harder as network depth increases. It assumes that a residual mapping is easier to learn than the original mapping. For example, for a problem whose answer is x+1, rather than learning to map x directly to x+1, it is assumed to be a simpler problem to optimize the residual (the difference between the answer and the input, here the constant 1), since the layer's job reduces to an easy target near zero.
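The intuition above can be sketched with a toy residual block (a minimal NumPy sketch; the single-layer F(x) and the variable names are illustrative, not the paper's architecture):

```python
import numpy as np

def residual_block(x, weight):
    # The layer learns F(x); the skip connection adds the input back,
    # so F only has to model the residual H(x) - x.
    fx = np.maximum(0.0, x @ weight)  # F(x): one linear layer + ReLU
    return x + fx

# When the optimal mapping is close to the identity, driving the
# residual weights toward zero recovers it, which is an easy target.
x = np.array([[1.0, 2.0]])
identity_out = residual_block(x, np.zeros((2, 2)))  # F(x) = 0, so output = x
```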

Unlike networks such as ResNet that stack many residual blocks, DnCNN treats the entire network as a single residual unit: instead of outputting the clean image, it predicts the residual (the noise). The paper then notes that this kind of residual strategy had already been applied to other problems such as super-resolution (SR).
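Concretely, the training target is the noise itself: given a noisy image y = x + v, the network output R(y) is regressed onto v, and the denoised image is recovered as y − R(y). A minimal sketch of that loss (NumPy, with a stand-in for the network output; not the paper's code):

```python
import numpy as np

def residual_loss(predicted_noise, noisy, clean):
    # DnCNN regresses the network output R(y) onto the true noise
    # v = y - x, rather than onto the clean image x itself.
    residual_target = noisy - clean
    return 0.5 * np.mean((predicted_noise - residual_target) ** 2)

rng = np.random.default_rng(0)
clean = np.ones((4, 4))
noise = 0.1 * rng.standard_normal((4, 4))
noisy = clean + noise

# A perfect residual predictor gives zero loss, and subtracting its
# output from the noisy input recovers the clean image exactly.
loss = residual_loss(noise, noisy, clean)
denoised = noisy - noise
```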

Batch Normalization

Mini-batch SGD has long been used to optimize CNNs. However, it becomes much less efficient when the covariate distribution of the data shifts, which forced the use of small learning rates.

For example, suppose a model is trained to distinguish dogs from cats. If certain dog and cat breeds present in the training data are missing from the test data, the two distributions differ and accuracy drops.

์ด๋ฟ๋งŒ ์•„๋‹ˆ๋ผ, ๊ฐ layer์˜ input์˜ ๋ถ„ํฌ๋„ shift๋˜์–ด์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค๊ณ  ์ฃผ์žฅํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ์ด๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด scale๊ณผ shift์— ๋Œ€ํ•œ parameter๋ฅผ ์ถ”๊ฐ€ํ•œ๋‹ค. ๊ตฌ์กฐ๊ฐ€ ๊ฐ„๋‹จํ•˜๋ฉฐ, ํฐ learning rate๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.


Last updated 3 years ago
