
[Vision] VGG16 Transfer Learning with Scalp Data

Scalp Condition Classification Project

Libraries

import tensorflow as tf
import keras
from tensorflow.keras import optimizers
from keras.models import Model
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image

import os
import shutil
import glob
import zipfile

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image

Data

from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
# Copy the data file from Google Drive to the Colab local filesystem
original_data_path = '/content/drive/MyDrive/Data/scalp.zip'
new_data_path = '/content'

shutil.copy(original_data_path, new_data_path)
'/content/scalp.zip'
# Unzip
path_to_zip_file = '/content/scalp.zip'
directory_to_extract_to = '/content'

with zipfile.ZipFile(path_to_zip_file, 'r') as zip_ref:
    zip_ref.extractall(directory_to_extract_to)
trainDataPath = '/content/scalp/Training'
validDataPath = '/content/scalp/Validation'

# Stream images from the class subdirectories, resized to VGG16's 224x224 input
# (default batch_size=32, labels one-hot encoded for categorical crossentropy)
trData = ImageDataGenerator()
trainData = trData.flow_from_directory(directory=trainDataPath, target_size=(224,224))
valData = ImageDataGenerator()
validData = valData.flow_from_directory(directory=validDataPath, target_size=(224,224))
Found 14263 images belonging to 7 classes.
Found 4073 images belonging to 7 classes.
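One detail worth knowing: `flow_from_directory` assigns class indices alphabetically by subdirectory name, which is what the label dictionary in the Test section relies on. A minimal sketch of that ordering (folder names assumed to match the seven labels used later in this post):

```python
# flow_from_directory sorts class subdirectory names alphabetically
# and numbers them 0..n-1 (the same mapping as generator.class_indices).
classes = ["alopecia", "dandruff", "erythema", "good", "keratin", "pustule", "sebum"]
class_indices = {name: i for i, name in enumerate(sorted(classes))}
print(class_indices)
# erythema -> index 2, sebum -> index 6
```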

Modeling

# Load VGG16 pretrained on ImageNet, including the fully connected classifier head
from keras.applications.vgg16 import VGG16
vggmodel = VGG16(weights='imagenet', include_top=True)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5
553467904/553467096 [==============================] - 6s 0us/step
553476096/553467096 [==============================] - 6s 0us/step
vggmodel.summary()
Model: "vgg16"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 224, 224, 3)]     0

 block1_conv1 (Conv2D)       (None, 224, 224, 64)      1792

 block1_conv2 (Conv2D)       (None, 224, 224, 64)      36928

 block1_pool (MaxPooling2D)  (None, 112, 112, 64)      0

 block2_conv1 (Conv2D)       (None, 112, 112, 128)     73856

 block2_conv2 (Conv2D)       (None, 112, 112, 128)     147584

 block2_pool (MaxPooling2D)  (None, 56, 56, 128)       0

 block3_conv1 (Conv2D)       (None, 56, 56, 256)       295168

 block3_conv2 (Conv2D)       (None, 56, 56, 256)       590080

 block3_conv3 (Conv2D)       (None, 56, 56, 256)       590080

 block3_pool (MaxPooling2D)  (None, 28, 28, 256)       0

 block4_conv1 (Conv2D)       (None, 28, 28, 512)       1180160

 block4_conv2 (Conv2D)       (None, 28, 28, 512)       2359808

 block4_conv3 (Conv2D)       (None, 28, 28, 512)       2359808

 block4_pool (MaxPooling2D)  (None, 14, 14, 512)       0

 block5_conv1 (Conv2D)       (None, 14, 14, 512)       2359808

 block5_conv2 (Conv2D)       (None, 14, 14, 512)       2359808

 block5_conv3 (Conv2D)       (None, 14, 14, 512)       2359808

 block5_pool (MaxPooling2D)  (None, 7, 7, 512)         0

 flatten (Flatten)           (None, 25088)             0

 fc1 (Dense)                 (None, 4096)              102764544

 fc2 (Dense)                 (None, 4096)              16781312

 predictions (Dense)         (None, 1000)              4097000

=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
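The Param # column can be checked by hand: a Conv2D layer with a k×k kernel has (k·k·in_channels + 1)·filters parameters, the +1 being the per-filter bias. A quick sketch verifying a few rows of the summary above:

```python
def conv_params(k, in_ch, filters):
    # (kernel_h * kernel_w * in_channels + 1 bias) per filter
    return (k * k * in_ch + 1) * filters

print(conv_params(3, 3, 64))    # block1_conv1: 1792
print(conv_params(3, 64, 64))   # block1_conv2: 36928
print(conv_params(3, 512, 512)) # block5 convs: 2359808
```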
# Freeze the convolutional base (layers 0-18: input through block5_pool)
for layer in vggmodel.layers[:19]:
    print(layer)
    layer.trainable = False
<keras.engine.input_layer.InputLayer object at 0x7fcb6d339e90>
<keras.layers.convolutional.Conv2D object at 0x7fcaf7524110>
<keras.layers.convolutional.Conv2D object at 0x7fcaeeefd750>
<keras.layers.pooling.MaxPooling2D object at 0x7fcaeee13f10>
<keras.layers.convolutional.Conv2D object at 0x7fcaeee46410>
<keras.layers.convolutional.Conv2D object at 0x7fcaeee41fd0>
<keras.layers.pooling.MaxPooling2D object at 0x7fcaee1a8190>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1a4d90>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1ae0d0>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1b65d0>
<keras.layers.pooling.MaxPooling2D object at 0x7fcaee1ae890>
<keras.layers.convolutional.Conv2D object at 0x7fcaeee46110>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1c3790>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1c6d90>
<keras.layers.pooling.MaxPooling2D object at 0x7fcaee1bd9d0>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1cf390>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1d6290>
<keras.layers.convolutional.Conv2D object at 0x7fcaee1cfa10>
<keras.layers.pooling.MaxPooling2D object at 0x7fcaee1c9f10>
# Replace the 1000-class ImageNet head: take the fc2 output and attach a new 7-class softmax
H = vggmodel.layers[-2].output
predictions = Dense(7, activation="softmax")(H)
model_final = Model(inputs=vggmodel.input, outputs=predictions)
model_final.compile(loss="categorical_crossentropy",
                    optimizer=optimizers.SGD(learning_rate=0.0001, momentum=0.9),
                    metrics=["accuracy"])
model_final.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 224, 224, 3)]     0

 block1_conv1 (Conv2D)       (None, 224, 224, 64)      1792

 block1_conv2 (Conv2D)       (None, 224, 224, 64)      36928

 block1_pool (MaxPooling2D)  (None, 112, 112, 64)      0

 block2_conv1 (Conv2D)       (None, 112, 112, 128)     73856

 block2_conv2 (Conv2D)       (None, 112, 112, 128)     147584

 block2_pool (MaxPooling2D)  (None, 56, 56, 128)       0

 block3_conv1 (Conv2D)       (None, 56, 56, 256)       295168

 block3_conv2 (Conv2D)       (None, 56, 56, 256)       590080

 block3_conv3 (Conv2D)       (None, 56, 56, 256)       590080

 block3_pool (MaxPooling2D)  (None, 28, 28, 256)       0

 block4_conv1 (Conv2D)       (None, 28, 28, 512)       1180160

 block4_conv2 (Conv2D)       (None, 28, 28, 512)       2359808

 block4_conv3 (Conv2D)       (None, 28, 28, 512)       2359808

 block4_pool (MaxPooling2D)  (None, 14, 14, 512)       0

 block5_conv1 (Conv2D)       (None, 14, 14, 512)       2359808

 block5_conv2 (Conv2D)       (None, 14, 14, 512)       2359808

 block5_conv3 (Conv2D)       (None, 14, 14, 512)       2359808

 block5_pool (MaxPooling2D)  (None, 7, 7, 512)         0

 flatten (Flatten)           (None, 25088)             0

 fc1 (Dense)                 (None, 4096)              102764544

 fc2 (Dense)                 (None, 4096)              16781312

 dense (Dense)               (None, 7)                 28679

=================================================================
Total params: 134,289,223
Trainable params: 119,574,535
Non-trainable params: 14,714,688
_________________________________________________________________
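The trainable/non-trainable split can be sanity-checked against the first summary: the frozen total is the sum of all conv-block parameters, and the trainable total is fc1 + fc2 + the new 7-way head. A quick arithmetic check:

```python
# Frozen: every Conv2D layer in blocks 1-5 (input and pooling layers have 0 params)
frozen = (1792 + 36928              # block1
          + 73856 + 147584          # block2
          + 295168 + 590080 * 2     # block3
          + 1180160 + 2359808 * 2   # block4
          + 2359808 * 3)            # block5
# Trainable: fc1, fc2, and the new Dense(7) head (4096*7 weights + 7 biases)
trainable = 102764544 + 16781312 + (4096 * 7 + 7)
print(frozen)     # 14714688
print(trainable)  # 119574535
```

The two totals sum to 134,289,223, matching the summary's Total params line.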
from keras.callbacks import ModelCheckpoint, EarlyStopping

# Save the full model to vgg16.h5 whenever val_accuracy improves
checkpoint = ModelCheckpoint("vgg16.h5",
                             monitor='val_accuracy',
                             verbose=1,
                             save_best_only=True,
                             save_weights_only=False,
                             mode='auto')
# Stop training if val_accuracy fails to improve for 40 consecutive epochs
early = EarlyStopping(monitor='val_accuracy',
                      min_delta=0,
                      patience=40,
                      verbose=1,
                      mode='auto')

Train

# With steps_per_epoch=2 and the default batch size of 32,
# each epoch sees only 64 training images
model_final.fit(trainData,
                steps_per_epoch=2,
                epochs=100,
                validation_data=validData,
                validation_steps=1,
                callbacks=[checkpoint, early])
Epoch 1/100
2/2 [==============================] - ETA: 0s - loss: 1.5087 - accuracy: 0.4219
Epoch 00001: val_accuracy did not improve from 0.53125
2/2 [==============================] - 2s 2s/step - loss: 1.5087 - accuracy: 0.4219 - val_loss: 2.1086 - val_accuracy: 0.3438
Epoch 2/100
2/2 [==============================] - ETA: 0s - loss: 2.0146 - accuracy: 0.4688
Epoch 00002: val_accuracy did not improve from 0.53125
2/2 [==============================] - 2s 1s/step - loss: 2.0146 - accuracy: 0.4688 - val_loss: 2.1865 - val_accuracy: 0.3438
Epoch 3/100
2/2 [==============================] - ETA: 0s - loss: 1.4675 - accuracy: 0.5000
Epoch 00003: val_accuracy did not improve from 0.53125
2/2 [==============================] - 2s 1s/step - loss: 1.4675 - accuracy: 0.5000 - val_loss: 1.6652 - val_accuracy: 0.3750
Epoch 4/100
2/2 [==============================] - ETA: 0s - loss: 1.1932 - accuracy: 0.5938
Epoch 00004: val_accuracy did not improve from 0.53125
2/2 [==============================] - 2s 1s/step - loss: 1.1932 - accuracy: 0.5938 - val_loss: 1.8516 - val_accuracy: 0.5000
Epoch 5/100
2/2 [==============================] - ETA: 0s - loss: 1.4677 - accuracy: 0.4844
Epoch 00005: val_accuracy did not improve from 0.53125
2/2 [==============================] - 2s 1s/step - loss: 1.4677 - accuracy: 0.4844 - val_loss: 1.2477 - val_accuracy: 0.5312
Epoch 6/100
2/2 [==============================] - ETA: 0s - loss: 1.4515 - accuracy: 0.5312
Epoch 00006: val_accuracy improved from 0.53125 to 0.62500, saving model to vgg16.h5
2/2 [==============================] - 6s 5s/step - loss: 1.4515 - accuracy: 0.5312 - val_loss: 1.3299 - val_accuracy: 0.6250
Epoch 7/100
2/2 [==============================] - ETA: 0s - loss: 1.6270 - accuracy: 0.5312
Epoch 00007: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.6270 - accuracy: 0.5312 - val_loss: 1.3950 - val_accuracy: 0.5312
Epoch 8/100
2/2 [==============================] - ETA: 0s - loss: 1.3965 - accuracy: 0.5312
Epoch 00008: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.3965 - accuracy: 0.5312 - val_loss: 1.4065 - val_accuracy: 0.5000
Epoch 9/100
2/2 [==============================] - ETA: 0s - loss: 1.3831 - accuracy: 0.5625
Epoch 00009: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.3831 - accuracy: 0.5625 - val_loss: 1.1666 - val_accuracy: 0.5938
Epoch 10/100
2/2 [==============================] - ETA: 0s - loss: 1.3464 - accuracy: 0.5156
Epoch 00010: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.3464 - accuracy: 0.5156 - val_loss: 1.2340 - val_accuracy: 0.4062
Epoch 11/100
2/2 [==============================] - ETA: 0s - loss: 0.9211 - accuracy: 0.6562
Epoch 00011: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 0.9211 - accuracy: 0.6562 - val_loss: 1.2106 - val_accuracy: 0.5938
Epoch 12/100
2/2 [==============================] - ETA: 0s - loss: 0.9813 - accuracy: 0.6250
Epoch 00012: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 0.9813 - accuracy: 0.6250 - val_loss: 1.2158 - val_accuracy: 0.5938
Epoch 13/100
2/2 [==============================] - ETA: 0s - loss: 1.2627 - accuracy: 0.5000
Epoch 00013: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.2627 - accuracy: 0.5000 - val_loss: 1.3643 - val_accuracy: 0.4062
Epoch 14/100
2/2 [==============================] - ETA: 0s - loss: 1.0943 - accuracy: 0.5156
Epoch 00014: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.0943 - accuracy: 0.5156 - val_loss: 1.0082 - val_accuracy: 0.5625
Epoch 15/100
2/2 [==============================] - ETA: 0s - loss: 0.9873 - accuracy: 0.6562
Epoch 00015: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 0.9873 - accuracy: 0.6562 - val_loss: 0.9902 - val_accuracy: 0.6250
Epoch 16/100
2/2 [==============================] - ETA: 0s - loss: 1.1917 - accuracy: 0.4844
Epoch 00016: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.1917 - accuracy: 0.4844 - val_loss: 1.5361 - val_accuracy: 0.4688
Epoch 17/100
2/2 [==============================] - ETA: 0s - loss: 0.9465 - accuracy: 0.6406
Epoch 00017: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 0.9465 - accuracy: 0.6406 - val_loss: 1.0424 - val_accuracy: 0.6250
Epoch 18/100
2/2 [==============================] - ETA: 0s - loss: 1.3369 - accuracy: 0.4844
Epoch 00018: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.3369 - accuracy: 0.4844 - val_loss: 1.1195 - val_accuracy: 0.4688
Epoch 19/100
2/2 [==============================] - ETA: 0s - loss: 0.9222 - accuracy: 0.6719
Epoch 00019: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 0.9222 - accuracy: 0.6719 - val_loss: 1.4278 - val_accuracy: 0.5000
Epoch 20/100
2/2 [==============================] - ETA: 0s - loss: 1.1615 - accuracy: 0.5625
Epoch 00020: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.1615 - accuracy: 0.5625 - val_loss: 1.5542 - val_accuracy: 0.4375
Epoch 21/100
2/2 [==============================] - ETA: 0s - loss: 1.2356 - accuracy: 0.5938
Epoch 00021: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.2356 - accuracy: 0.5938 - val_loss: 1.2965 - val_accuracy: 0.5000
Epoch 22/100
2/2 [==============================] - ETA: 0s - loss: 1.2318 - accuracy: 0.5625
Epoch 00022: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.2318 - accuracy: 0.5625 - val_loss: 1.3502 - val_accuracy: 0.4375
Epoch 23/100
2/2 [==============================] - ETA: 0s - loss: 1.3754 - accuracy: 0.5938
Epoch 00023: val_accuracy did not improve from 0.62500
2/2 [==============================] - 2s 1s/step - loss: 1.3754 - accuracy: 0.5938 - val_loss: 1.3461 - val_accuracy: 0.5625
Epoch 24/100
2/2 [==============================] - ETA: 0s - loss: 1.2022 - accuracy: 0.5156
Epoch 00024: val_accuracy improved from 0.62500 to 0.75000, saving model to vgg16.h5
2/2 [==============================] - 5s 5s/step - loss: 1.2022 - accuracy: 0.5156 - val_loss: 1.0214 - val_accuracy: 0.7500
Epoch 25/100
2/2 [==============================] - ETA: 0s - loss: 1.2356 - accuracy: 0.5938
Epoch 00025: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.2356 - accuracy: 0.5938 - val_loss: 1.1217 - val_accuracy: 0.5625
Epoch 26/100
2/2 [==============================] - ETA: 0s - loss: 1.1857 - accuracy: 0.6406
Epoch 00026: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1857 - accuracy: 0.6406 - val_loss: 1.1249 - val_accuracy: 0.5312
Epoch 27/100
2/2 [==============================] - ETA: 0s - loss: 1.2184 - accuracy: 0.5938
Epoch 00027: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.2184 - accuracy: 0.5938 - val_loss: 1.1747 - val_accuracy: 0.6250
Epoch 28/100
2/2 [==============================] - ETA: 0s - loss: 1.1969 - accuracy: 0.6094
Epoch 00028: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1969 - accuracy: 0.6094 - val_loss: 1.3337 - val_accuracy: 0.5000
Epoch 29/100
2/2 [==============================] - ETA: 0s - loss: 1.2338 - accuracy: 0.6094
Epoch 00029: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.2338 - accuracy: 0.6094 - val_loss: 1.3463 - val_accuracy: 0.5000
Epoch 30/100
2/2 [==============================] - ETA: 0s - loss: 1.3179 - accuracy: 0.4531
Epoch 00030: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.3179 - accuracy: 0.4531 - val_loss: 1.2586 - val_accuracy: 0.5000
Epoch 31/100
2/2 [==============================] - ETA: 0s - loss: 1.1225 - accuracy: 0.5938
Epoch 00031: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1225 - accuracy: 0.5938 - val_loss: 1.1682 - val_accuracy: 0.5625
Epoch 32/100
2/2 [==============================] - ETA: 0s - loss: 1.1141 - accuracy: 0.5781
Epoch 00032: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1141 - accuracy: 0.5781 - val_loss: 1.0462 - val_accuracy: 0.5312
Epoch 33/100
2/2 [==============================] - ETA: 0s - loss: 0.9814 - accuracy: 0.5781
Epoch 00033: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9814 - accuracy: 0.5781 - val_loss: 1.1827 - val_accuracy: 0.5938
Epoch 34/100
2/2 [==============================] - ETA: 0s - loss: 0.8672 - accuracy: 0.7188
Epoch 00034: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8672 - accuracy: 0.7188 - val_loss: 1.3142 - val_accuracy: 0.4688
Epoch 35/100
2/2 [==============================] - ETA: 0s - loss: 1.0874 - accuracy: 0.5938
Epoch 00035: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0874 - accuracy: 0.5938 - val_loss: 1.3446 - val_accuracy: 0.6250
Epoch 36/100
2/2 [==============================] - ETA: 0s - loss: 0.8193 - accuracy: 0.6250
Epoch 00036: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8193 - accuracy: 0.6250 - val_loss: 0.9446 - val_accuracy: 0.5000
Epoch 37/100
2/2 [==============================] - ETA: 0s - loss: 1.1073 - accuracy: 0.6875
Epoch 00037: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1073 - accuracy: 0.6875 - val_loss: 1.4156 - val_accuracy: 0.4688
Epoch 38/100
2/2 [==============================] - ETA: 0s - loss: 1.1729 - accuracy: 0.5156
Epoch 00038: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1729 - accuracy: 0.5156 - val_loss: 1.4537 - val_accuracy: 0.5312
Epoch 39/100
2/2 [==============================] - ETA: 0s - loss: 1.1661 - accuracy: 0.5625
Epoch 00039: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1661 - accuracy: 0.5625 - val_loss: 1.2841 - val_accuracy: 0.5625
Epoch 40/100
2/2 [==============================] - ETA: 0s - loss: 1.2262 - accuracy: 0.5312
Epoch 00040: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.2262 - accuracy: 0.5312 - val_loss: 0.9220 - val_accuracy: 0.6250
Epoch 41/100
2/2 [==============================] - ETA: 0s - loss: 0.9203 - accuracy: 0.6094
Epoch 00041: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9203 - accuracy: 0.6094 - val_loss: 1.0647 - val_accuracy: 0.5938
Epoch 42/100
2/2 [==============================] - ETA: 0s - loss: 0.8179 - accuracy: 0.6562
Epoch 00042: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8179 - accuracy: 0.6562 - val_loss: 0.9991 - val_accuracy: 0.6250
Epoch 43/100
2/2 [==============================] - ETA: 0s - loss: 0.8537 - accuracy: 0.7031
Epoch 00043: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8537 - accuracy: 0.7031 - val_loss: 1.2613 - val_accuracy: 0.6250
Epoch 44/100
2/2 [==============================] - ETA: 0s - loss: 1.1151 - accuracy: 0.5938
Epoch 00044: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1151 - accuracy: 0.5938 - val_loss: 1.6918 - val_accuracy: 0.3438
Epoch 45/100
2/2 [==============================] - ETA: 0s - loss: 1.2227 - accuracy: 0.5781
Epoch 00045: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.2227 - accuracy: 0.5781 - val_loss: 1.3152 - val_accuracy: 0.5938
Epoch 46/100
2/2 [==============================] - ETA: 0s - loss: 0.9592 - accuracy: 0.7500
Epoch 00046: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9592 - accuracy: 0.7500 - val_loss: 1.1206 - val_accuracy: 0.5938
Epoch 47/100
2/2 [==============================] - ETA: 0s - loss: 0.9093 - accuracy: 0.6875
Epoch 00047: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9093 - accuracy: 0.6875 - val_loss: 1.5552 - val_accuracy: 0.3750
Epoch 48/100
2/2 [==============================] - ETA: 0s - loss: 0.8146 - accuracy: 0.7031
Epoch 00048: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8146 - accuracy: 0.7031 - val_loss: 1.0901 - val_accuracy: 0.5312
Epoch 49/100
2/2 [==============================] - ETA: 0s - loss: 1.0795 - accuracy: 0.6250
Epoch 00049: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0795 - accuracy: 0.6250 - val_loss: 1.2412 - val_accuracy: 0.6250
Epoch 50/100
2/2 [==============================] - ETA: 0s - loss: 1.0374 - accuracy: 0.6406
Epoch 00050: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0374 - accuracy: 0.6406 - val_loss: 1.0070 - val_accuracy: 0.6250
Epoch 51/100
2/2 [==============================] - ETA: 0s - loss: 1.0055 - accuracy: 0.7031
Epoch 00051: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0055 - accuracy: 0.7031 - val_loss: 1.5308 - val_accuracy: 0.4688
Epoch 52/100
2/2 [==============================] - ETA: 0s - loss: 0.9496 - accuracy: 0.7031
Epoch 00052: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9496 - accuracy: 0.7031 - val_loss: 1.0907 - val_accuracy: 0.5312
Epoch 53/100
2/2 [==============================] - ETA: 0s - loss: 1.1262 - accuracy: 0.5938
Epoch 00053: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.1262 - accuracy: 0.5938 - val_loss: 1.4789 - val_accuracy: 0.5312
Epoch 54/100
2/2 [==============================] - ETA: 0s - loss: 1.0115 - accuracy: 0.6562
Epoch 00054: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0115 - accuracy: 0.6562 - val_loss: 1.0868 - val_accuracy: 0.6562
Epoch 55/100
2/2 [==============================] - ETA: 0s - loss: 0.9065 - accuracy: 0.6562
Epoch 00055: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9065 - accuracy: 0.6562 - val_loss: 0.9866 - val_accuracy: 0.6250
Epoch 56/100
2/2 [==============================] - ETA: 0s - loss: 0.8655 - accuracy: 0.6562
Epoch 00056: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8655 - accuracy: 0.6562 - val_loss: 1.2124 - val_accuracy: 0.5312
Epoch 57/100
2/2 [==============================] - ETA: 0s - loss: 1.0011 - accuracy: 0.6406
Epoch 00057: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0011 - accuracy: 0.6406 - val_loss: 1.1943 - val_accuracy: 0.5312
Epoch 58/100
2/2 [==============================] - ETA: 0s - loss: 0.9941 - accuracy: 0.6719
Epoch 00058: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9941 - accuracy: 0.6719 - val_loss: 1.0729 - val_accuracy: 0.6562
Epoch 59/100
2/2 [==============================] - ETA: 0s - loss: 1.0721 - accuracy: 0.5781
Epoch 00059: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0721 - accuracy: 0.5781 - val_loss: 1.6116 - val_accuracy: 0.5312
Epoch 60/100
2/2 [==============================] - ETA: 0s - loss: 1.0533 - accuracy: 0.5938
Epoch 00060: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0533 - accuracy: 0.5938 - val_loss: 1.1483 - val_accuracy: 0.5000
Epoch 61/100
2/2 [==============================] - ETA: 0s - loss: 1.0163 - accuracy: 0.7188
Epoch 00061: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.0163 - accuracy: 0.7188 - val_loss: 1.1331 - val_accuracy: 0.5000
Epoch 62/100
2/2 [==============================] - ETA: 0s - loss: 1.4014 - accuracy: 0.4844
Epoch 00062: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 1.4014 - accuracy: 0.4844 - val_loss: 1.5898 - val_accuracy: 0.4062
Epoch 63/100
2/2 [==============================] - ETA: 0s - loss: 0.9745 - accuracy: 0.5312
Epoch 00063: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.9745 - accuracy: 0.5312 - val_loss: 1.0411 - val_accuracy: 0.6562
Epoch 64/100
2/2 [==============================] - ETA: 0s - loss: 0.8678 - accuracy: 0.6562
Epoch 00064: val_accuracy did not improve from 0.75000
2/2 [==============================] - 2s 1s/step - loss: 0.8678 - accuracy: 0.6562 - val_loss: 1.7607 - val_accuracy: 0.5000
Epoch 00064: early stopping





<keras.callbacks.History at 0x7fca5c2e7790>
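`fit` returns a History object whose `history` dict holds per-epoch metrics, which is handy for locating the epoch whose checkpoint was saved. A sketch with hypothetical values (in the real run above, val_accuracy peaked at 0.75 in epoch 24):

```python
import numpy as np

# Hypothetical excerpt of history.history['val_accuracy']
val_acc = [0.3438, 0.5000, 0.6250, 0.5312, 0.7500, 0.6250]

best_epoch = int(np.argmax(val_acc)) + 1  # epochs are 1-indexed in the logs
print(best_epoch, max(val_acc))
# best epoch 5 with val_accuracy 0.75
```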
# model_final.save_weights("/content/drive/MyDrive/Data/scalp/Model/vgg16.h5")

Test

from keras.preprocessing import image

# Load one validation image at the model's input size and add a batch dimension
img = image.load_img("/content/scalp/Validation/erythema/0013_A2LEBJJDE00060O_1606386266117_2_TH.jpg", target_size=(224,224))
img = np.asarray(img)
plt.imshow(img)
img = np.expand_dims(img, axis=0)

# Restore the best checkpoint saved during training
from keras.models import load_model
saved_model = load_model("vgg16.h5")
output = saved_model.predict(img)

# argmax over the 7-way softmax output gives the predicted class index (0-6),
# in the alphabetical order assigned by flow_from_directory
result = {0: "alopecia", 1: "dandruff", 2: "erythema", 3: "good", 4: "keratin", 5: "pustule", 6: "sebum"}
print(result[np.argmax(output)])
sebum

[image: the loaded validation scalp photo]
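Only the argmax is used above, but ranking the whole softmax vector also gives the model's runner-up guesses. A numpy sketch with a made-up output vector (the class dictionary is the one defined in the Test cell):

```python
import numpy as np

result = {0: "alopecia", 1: "dandruff", 2: "erythema", 3: "good", 4: "keratin", 5: "pustule", 6: "sebum"}
# Hypothetical softmax output for one image (probabilities sum to 1.0)
output = np.array([[0.02, 0.05, 0.20, 0.03, 0.08, 0.02, 0.60]])

# Indices of the three highest-probability classes, best first
top3 = np.argsort(output[0])[::-1][:3]
for i in top3:
    print(result[int(i)], float(output[0][i]))
# top-3: sebum, erythema, keratin
```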
