Commit

KerasKorea#40: Translate 5.3 - paragraph 2 / 17
visionNoob committed Oct 5, 2018
1 parent ba0b356 commit aff4a58
Showing 1 changed file with 21 additions and 9 deletions.
@@ -63,23 +63,35 @@
"metadata": {},
"source": [
"# Paragraph 2\n",
"* Visualizing intermediate convnet outputs (\"intermediate activations\"). This is useful to understand how successive convnet layers \n",
"transform their input, and to get a first idea of the meaning of individual convnet filters.\n",
"* Visualizing convnet filters. This is useful to understand precisely what visual pattern or concept each filter in a convnet is receptive \n",
"to.\n",
"* Visualizing heatmaps of class activation in an image. This is useful to understand which parts of an image were identified as belonging \n",
"to a given class, and thus allows localizing objects in images.\n",
"* Visualizing intermediate convnet outputs (\"intermediate activations\"). \n",
"\n",
"For the first method -- activation visualization -- we will use the small convnet that we trained from scratch on the cat vs. dog \n",
"classification problem two sections ago. For the next two methods, we will use the VGG16 model that we introduced in the previous section."
"This is useful to understand how successive convnet layers transform their input, and to get a first idea of the meaning of individual convnet filters.\n",
"\n",
"* Visualizing convnet filters. \n",
"\n",
"This is useful to understand precisely what visual pattern or concept each filter in a convnet is receptive to.\n",
"\n",
"* Visualizing heatmaps of class activation in an image. \n",
"\n",
"This is useful to understand which parts of an image were identified as belonging to a given class, and thus allows localizing objects in images.\n",
"\n",
"For the first method -- activation visualization -- we will use the small convnet that we trained from scratch on the cat vs. dog classification problem two sections ago. \n",
"\n",
"For the next two methods, we will use the VGG16 model that we introduced in the previous section."
]
},
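The first technique above can be sketched as follows: build a second model that maps an input image to the activations of the early layers, then run an image through it. This is a minimal sketch assuming TensorFlow 2.x Keras; the tiny untrained convnet and the random input below are hypothetical stand-ins for the trained cats-vs-dogs model and a real image from the book.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in for the small cats-vs-dogs convnet (weights are random here;
# only the wiring matters for the sketch, so the activation shapes are real).
inputs = keras.Input(shape=(150, 150, 3))
a1 = layers.Conv2D(32, 3, activation="relu")(inputs)
a2 = layers.MaxPooling2D(2)(a1)
a3 = layers.Conv2D(64, 3, activation="relu")(a2)
a4 = layers.MaxPooling2D(2)(a3)
x = layers.Dense(1, activation="sigmoid")(layers.Flatten()(a4))

# A model that returns one activation tensor per conv/pooling layer.
activation_model = keras.Model(inputs=inputs, outputs=[a1, a2, a3, a4])

img = np.random.rand(1, 150, 150, 3).astype("float32")  # stand-in image
activations = activation_model.predict(img)

# First conv layer: 32 feature maps of 148x148 (150 minus the 3x3 border).
print(activations[0].shape)  # (1, 148, 148, 32)
```

Each feature map in `activations` can then be plotted as a grayscale image to see what each successive layer keeps from the input.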
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Paragraph 2\n",
"To do"
"* 컨볼루션 네트워크 중간 레이어의 특징 시각화하기 (\"intermediate activations\"). 이 방법은 잇따른 컨볼루션 레이어가 입력을 어떻게 변환시키는지를 이해하기 좋습니다. 그리고 각 컨볼루션 필터들의 의미를 이해하는 데 도움이 됩니다. \n",
"\n",
"* 컨볼루션 필터 시각화하기. 이 방법은 컨볼루션 필터가 정확하게 어떤 시각 패턴을 찾고 있는지 이해하는 데 도움을 줍니다. \n",
"\n",
"* 이미지에 클래스 활성(class activation) 히트맵 시각화하기. 이 방법은 이미지의 어디를 보고 해당 클래스로 분류했는지를 이해할 수 있고, 이를 통해 이미지 내 객체의 위치를(localize) 알 수도 있습니다. \n",
"\n",
"첫 번째 방법 -- 중간 레이어 시각화하기 -- 을 위해서, 두 섹션 앞에서 고양이 vs 개 분류 문제를 밑바닥부터 학습시켰던 작은 컨볼루션 네트워크를 사용할 것입니다. 그리고 남은 두 가지 방법에는 전 섹션에서 소개드렸던 VGG16 네트워크를 이용할 것입니다. "
]
},
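The second technique -- finding the visual pattern a filter responds to -- can be sketched as gradient ascent in input space: start from a noisy image and repeatedly nudge it so that one filter's mean activation increases. This is a hedged sketch: a single untrained conv layer stands in for a VGG16 block, and the layer size, step count, and learning rate are illustrative, not the book's exact values.

```python
import tensorflow as tf

# One random conv layer stands in for a VGG16 block; for real filters you
# would point this at a layer of keras.applications.VGG16 instead.
conv = tf.keras.layers.Conv2D(16, 3, activation="relu")

# Start from a gray image with a little noise, then ascend the gradient of
# the chosen filter's mean activation with respect to the input pixels.
img = tf.Variable(tf.random.uniform((1, 64, 64, 3)) * 0.2 + 0.4)
filter_index = 0

for _ in range(30):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(conv(img)[..., filter_index])
    grad = tape.gradient(loss, img)
    # Normalize the gradient so the step size stays stable, then ascend.
    grad = grad / (tf.sqrt(tf.reduce_mean(tf.square(grad))) + 1e-8)
    img.assign_add(0.1 * grad)

pattern = img.numpy()[0]  # the 64x64x3 pattern this filter responds to
```

After enough steps (and, in practice, a deprocessing step to clip values back into displayable range), `pattern` shows the texture or motif the filter is maximally receptive to.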
{

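The third technique, class-activation heatmaps, can be sketched in the Grad-CAM style: weight each feature map of the last conv layer by the gradient of the class score with respect to that map, sum the weighted maps, and keep the positive part. A tiny random network stands in for VGG16 here (whose last conv layer would be `block5_conv3`); all layer names and sizes below are illustrative assumptions.

```python
import tensorflow as tf

# Tiny stand-in network; for the real thing, load keras.applications.VGG16
# and use its last conv layer "block5_conv3" instead of "last_conv".
inp = tf.keras.Input(shape=(64, 64, 3))
feat = tf.keras.layers.Conv2D(8, 3, activation="relu", name="last_conv")(inp)
pooled = tf.keras.layers.GlobalAveragePooling2D()(feat)
scores = tf.keras.layers.Dense(10)(pooled)
model = tf.keras.Model(inp, scores)

# A model returning both the last conv feature maps and the class scores.
grad_model = tf.keras.Model(inp, [model.get_layer("last_conv").output, model.output])

img = tf.random.uniform((1, 64, 64, 3))
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    top_class = int(tf.argmax(preds[0]))
    score = preds[:, top_class]

grads = tape.gradient(score, conv_out)        # d(class score) / d(feature maps)
weights = tf.reduce_mean(grads, axis=(1, 2))  # one importance weight per map
cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
heatmap = (cam / (tf.reduce_max(cam) + 1e-8)).numpy()[0]  # values in [0, 1]
```

Resizing `heatmap` to the input resolution and overlaying it on the image shows which regions drove the classification, which is what makes this method useful for localizing objects.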