Category: Deep Learning, April 29, 2020, JanFranco
LeNet is a simple, early model, created by Yann LeCun in 1998, that achieved very high accuracy on the MNIST dataset.
The architecture consists of two convolution layers, two pooling layers, and a fully connected layer. Let's train this model on the MNIST dataset. First, let's import the MNIST dataset, load it, and split it into train and test sets:
import cv2
import numpy as np
from keras.utils import np_utils
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
Let's inspect a few images:

for i in range(10):
    random_num = np.random.randint(0, len(x_train))
    random_img = x_train[random_num]
    cv2.imshow("Random Img", random_img)
    cv2.waitKey(0)
cv2.destroyAllWindows()
Let's perform the preprocessing steps. We reshape the images, convert the pixel values to float, and then divide by 255 to keep them in the 0-1 range:

img_rows = x_train[0].shape[0]
img_cols = x_train[0].shape[1]
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype("float32")
x_test = x_test.astype("float32")
x_train /= 255
x_test /= 255
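As a quick sanity check on the scaling step, here is a minimal sketch (using a hypothetical 3-pixel array rather than real MNIST data) showing that casting uint8 pixels to float32 and dividing by 255 lands them in the 0-1 range:

```python
import numpy as np

# Hypothetical pixel values covering the uint8 extremes (not real MNIST data).
pixels = np.array([[0, 128, 255]], dtype=np.uint8)

# Same transformation as above: cast to float32, then scale by 255.
scaled = pixels.astype("float32") / 255

print(scaled.min(), scaled.max())  # 0.0 1.0
```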
Let's apply one-hot encoding to the labels and get the number of classes:

y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
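What to_categorical does can be sketched with plain numpy. The labels below are made up for illustration, and 10 classes are assumed, as in MNIST:

```python
import numpy as np

# Hypothetical digit labels (not taken from the dataset).
labels = np.array([3, 0, 7])

# One-hot encoding: row i of the 10x10 identity matrix encodes class i.
one_hot = np.eye(10)[labels]

print(one_hot[0])  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```

Each label becomes a length-10 vector with a single 1 at the label's index, which is exactly the shape the softmax output and categorical_crossentropy expect.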
Let's build and compile the model:

from keras.models import Sequential
from keras.optimizers import Adadelta
from keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(20, (5,5), padding='same', input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(50, (5,5), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Flatten())
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer=Adadelta(), metrics=['accuracy'])
print(model.summary())

>> Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 28, 28, 20)        520
_________________________________________________________________
activation_1 (Activation)    (None, 28, 28, 20)        0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 20)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 50)        25050
_________________________________________________________________
activation_2 (Activation)    (None, 14, 14, 50)        0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 50)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 2450)              0
_________________________________________________________________
dense_1 (Dense)              (None, 500)               1225500
_________________________________________________________________
activation_3 (Activation)    (None, 500)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5010
_________________________________________________________________
activation_4 (Activation)    (None, 10)                0
=================================================================
Total params: 1,256,080
Trainable params: 1,256,080
Non-trainable params: 0
_________________________________________________________________
None
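The parameter counts in the summary can be re-derived by hand. The sketch below just repeats the arithmetic: each conv layer has kernel_height × kernel_width × input_channels weights per filter plus one bias per filter, and each dense layer has inputs × units weights plus one bias per unit:

```python
# Conv layers: (kernel_h * kernel_w * in_channels) * filters + filters (biases)
conv1 = (5 * 5 * 1) * 20 + 20      # 520
conv2 = (5 * 5 * 20) * 50 + 50     # 25050

# Two 2x2 poolings shrink 28x28 to 14x14 and then 7x7, so Flatten outputs:
flat = 7 * 7 * 50                  # 2450

# Dense layers: inputs * units + units (biases)
dense1 = flat * 500 + 500          # 1225500
dense2 = 500 * 10 + 10             # 5010

total = conv1 + conv2 + dense1 + dense2
print(total)  # 1256080
```

Pooling and activation layers contribute nothing, which is why they show 0 in the Param # column.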
Let's train the model:

batch_size = 128
epochs = 10

history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

>> Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 9s 152us/step - loss: 0.1902 - acc: 0.9408 - val_loss: 0.0467 - val_acc: 0.9857
Epoch 2/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0457 - acc: 0.9858 - val_loss: 0.0390 - val_acc: 0.9870
Epoch 3/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0300 - acc: 0.9904 - val_loss: 0.0281 - val_acc: 0.9897
Epoch 4/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0210 - acc: 0.9933 - val_loss: 0.0234 - val_acc: 0.9919
Epoch 5/10
60000/60000 [==============================] - 3s 54us/step - loss: 0.0163 - acc: 0.9952 - val_loss: 0.0219 - val_acc: 0.9923
Epoch 6/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0114 - acc: 0.9966 - val_loss: 0.0237 - val_acc: 0.9918
Epoch 7/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0093 - acc: 0.9973 - val_loss: 0.0262 - val_acc: 0.9921
Epoch 8/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0065 - acc: 0.9983 - val_loss: 0.0208 - val_acc: 0.9931
Epoch 9/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0053 - acc: 0.9986 - val_loss: 0.0245 - val_acc: 0.9922
Epoch 10/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.0038 - acc: 0.9989 - val_loss: 0.0255 - val_acc: 0.9922
10000/10000 [==============================] - 0s 47us/step
Test loss: 0.025510146142501435
Test accuracy: 0.9922
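To turn the model's output into a digit, you would take the argmax of the softmax row returned by model.predict. The probability vector below is made up for illustration; with the trained model it would come from model.predict(x_test[:1])[0]:

```python
import numpy as np

# Hypothetical softmax output for one image (10 class probabilities).
probs = np.array([0.01, 0.02, 0.01, 0.05, 0.01, 0.02, 0.01, 0.84, 0.02, 0.01])

# The predicted class is the index with the highest probability.
predicted_digit = int(np.argmax(probs))
print(predicted_digit)  # 7
```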
All the code together:

import cv2
import numpy as np
from keras.utils import np_utils
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

for i in range(10):
    random_num = np.random.randint(0, len(x_train))
    random_img = x_train[random_num]
    cv2.imshow("Random Img", random_img)
    cv2.waitKey(0)
cv2.destroyAllWindows()

img_rows = x_train[0].shape[0]
img_cols = x_train[0].shape[1]
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype("float32")
x_test = x_test.astype("float32")
x_train /= 255
x_test /= 255

y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

from keras.models import Sequential
from keras.optimizers import Adadelta
from keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(20, (5,5), padding='same', input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(50, (5,5), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Flatten())
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer=Adadelta(), metrics=['accuracy'])
print(model.summary())

batch_size = 128
epochs = 10

history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])