[Deep Learning Series] Implementing the Classic CNN Architecture VGG with PaddlePaddle and TensorFlow
Last week we looked at the classic CNN architecture AlexNet and its image classification results. In 2014, two years after AlexNet, the University of Oxford proposed the VGG network, which took 2nd place in the classification task of ILSVRC 2014 (1st place went to GoogLeNet, proposed the same year). In the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition", the authors propose building deeper networks by shrinking the convolution kernel size.
VGG Network Architecture
VGGNet comes from the Visual Geometry Group at Oxford. Its main contribution in ILSVRC 2014 was showing that increasing network depth can, to a degree, improve final performance. As the figure below shows, the paper improves accuracy by progressively deepening the network. It looks a bit brute-force, without many clever tricks, but it works: many pretrained pipelines still build on VGG models (mainly VGG-16 and VGG-19). Compared with other architectures, VGG has a very large parameter space, so training a VGG model from scratch usually takes much longer; fortunately, the publicly released pretrained models make it easy to use. The model configurations from the paper are as follows:
Figure 1: VGG network configurations
Columns D and E in the figure are VGG-16 and VGG-19, with roughly 138M and 144M parameters respectively; they are the two best-performing configurations in the paper. The VGG architecture can be viewed as a deepened AlexNet. VGG also works well as a backbone for object detection (e.g. Faster R-CNN), because this plain stacked structure preserves local spatial information relatively well (unlike GoogLeNet, whose Inception modules may scramble positional information).
Let's take a closer look at the VGG-16 architecture:
Figure 2: VGG-16 network architecture
As the figure shows, every convolutional layer uses small 3×3 kernels, and these small kernels are stacked into a convolutional sequence. Put simply, the input is convolved with 3×3 kernels, then convolved with 3×3 kernels again, applying small kernels to the image several times in a row.
In AlexNet, the first layer uses a large 11×11 kernel. Why does VGG use small 3×3 kernels instead, and why stack several of them consecutively? When VGG was proposed, this ran against the design intuition behind LeNet, which relied on relatively large kernels to capture similar structures in an image (with weight sharing). AlexNet likewise starts its shallow layers with large kernels such as 11×11 and avoids 1×1 kernels early in the network. The insight of VGG is that a stack of several 3×3 kernels can mimic the local receptive field of a larger kernel. This idea of chaining small kernels was later adopted by GoogLeNet, ResNet, and others.
The experiments summarized in Figure 1 also show that VGG extracts high-level features with stacks of 3×3 convolutions. A larger kernel means many more parameters and a correspondingly higher computational cost: a 3×3 kernel has only 9 weights per input/output channel pair, while a 7×7 kernel has 49. Without a mechanism to regularize or prune such a large number of parameters, training networks with big kernels becomes much harder.
The VGG authors argue that large kernels waste computation: replacing them with stacks of small kernels reduces the parameter count and the inference cost. Training may take longer, but overall both the prediction time and the number of parameters go down; a small worked comparison is sketched below.
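To make this concrete, here is a small back-of-the-envelope check (my own illustration, not from the VGG paper): three stacked 3×3 convolutions cover the same 7×7 receptive field as one 7×7 convolution, but with far fewer weights. The channel width C below is a hypothetical value chosen only for the example.

# Hedged illustration: weights of one 7x7 conv vs. three stacked 3x3 convs,
# both mapping C input channels to C output channels (biases ignored).
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

C = 256  # hypothetical channel width
single_7x7 = conv_params(7, C, C)       # 49 * C^2 = 3,211,264
stacked_3x3 = 3 * conv_params(3, C, C)  # 27 * C^2 = 1,769,472

print(single_7x7, stacked_3x3)  # roughly 45% fewer weights for the 3x3 stack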
Advantages of VGG
Compared with AlexNet:
- Similarities
  - The overall structure is divided into five stages;
  - Apart from the softmax layer, the last few layers are fully connected;
  - The five stages are connected by max pooling.
- Differences
  - Small 3×3 kernels replace the large 7×7 kernels, and the network is built deeper;
  - LRN is dropped, since it costs a lot of compute for little benefit;
  - More feature maps are used, so more features can be extracted and combined.
Implementing VGG with PaddlePaddle
1. Network structure
#coding:utf-8
'''
Created by huxiaoman 2017.12.12
vggnet.py: classify cifar-10 with a VGG network
'''

import paddle.v2 as paddle

def vgg(input):
    # a conv block = `groups` 3x3 conv layers (with BN and dropout) + one 2x2 max pool
    def conv_block(ipt, num_filter, groups, dropouts, num_channels=None):
        return paddle.networks.img_conv_group(
            input=ipt,
            num_channels=num_channels,
            pool_size=2,
            pool_stride=2,
            conv_num_filter=[num_filter] * groups,
            conv_filter_size=3,
            conv_act=paddle.activation.Relu(),
            conv_with_batchnorm=True,
            conv_batchnorm_drop_rate=dropouts,
            pool_type=paddle.pooling.Max())

    # five blocks with 2-2-3-3-3 conv layers: the 13 conv layers of VGG-16
    conv1 = conv_block(input, 64, 2, [0.3, 0], 3)
    conv2 = conv_block(conv1, 128, 2, [0.4, 0])
    conv3 = conv_block(conv2, 256, 3, [0.4, 0.4, 0])
    conv4 = conv_block(conv3, 512, 3, [0.4, 0.4, 0])
    conv5 = conv_block(conv4, 512, 3, [0.4, 0.4, 0])

    drop = paddle.layer.dropout(input=conv5, dropout_rate=0.5)
    fc1 = paddle.layer.fc(input=drop, size=512, act=paddle.activation.Linear())
    bn = paddle.layer.batch_norm(
        input=fc1,
        act=paddle.activation.Relu(),
        layer_attr=paddle.attr.Extra(drop_rate=0.5))
    fc2 = paddle.layer.fc(input=bn, size=512, act=paddle.activation.Linear())
    return fc2
2. Training the model
#coding:utf-8
'''
Created by huxiaoman 2017.12.12
train_vgg.py: train vgg16 to classify the cifar10 dataset
'''

import sys, os
import paddle.v2 as paddle
from vggnet import vgg

# use the GPU unless WITH_GPU is explicitly set to '0'
with_gpu = os.getenv('WITH_GPU', '0') != '0'

def main():
    datadim = 3 * 32 * 32
    classdim = 10

    # PaddlePaddle init
    paddle.init(use_gpu=with_gpu, trainer_count=8)

    image = paddle.layer.data(
        name="image", type=paddle.data_type.dense_vector(datadim))

    net = vgg(image)

    out = paddle.layer.fc(
        input=net, size=classdim, act=paddle.activation.Softmax())

    lbl = paddle.layer.data(
        name="label", type=paddle.data_type.integer_value(classdim))
    cost = paddle.layer.classification_cost(input=out, label=lbl)

    # Create parameters
    parameters = paddle.parameters.create(cost)

    # Create optimizer
    momentum_optimizer = paddle.optimizer.Momentum(
        momentum=0.9,
        regularization=paddle.optimizer.L2Regularization(rate=0.0002 * 128),
        learning_rate=0.1 / 128.0,
        learning_rate_decay_a=0.1,
        learning_rate_decay_b=50000 * 100,
        learning_rate_schedule='discexp')

    # End batch and end pass event handler
    def event_handler(event):
        if isinstance(event, paddle.event.EndIteration):
            if event.batch_id % 100 == 0:
                print "\nPass %d, Batch %d, Cost %f, %s" % (
                    event.pass_id, event.batch_id, event.cost, event.metrics)
            else:
                sys.stdout.write('.')
                sys.stdout.flush()
        if isinstance(event, paddle.event.EndPass):
            # save parameters
            with open('params_pass_%d.tar' % event.pass_id, 'w') as f:
                parameters.to_tar(f)

            result = trainer.test(
                reader=paddle.batch(
                    paddle.dataset.cifar.test10(), batch_size=128),
                feeding={'image': 0,
                         'label': 1})
            print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)

    # Create trainer
    trainer = paddle.trainer.SGD(
        cost=cost, parameters=parameters, update_equation=momentum_optimizer)

    # Save the inference topology to protobuf.
    inference_topology = paddle.topology.Topology(layers=out)
    with open("inference_topology.pkl", 'wb') as f:
        inference_topology.serialize_for_inference(f)

    trainer.train(
        reader=paddle.batch(
            paddle.reader.shuffle(
                paddle.dataset.cifar.train10(), buf_size=50000),
            batch_size=128),
        num_passes=200,
        event_handler=event_handler,
        feeding={'image': 0,
                 'label': 1})

    # inference
    from PIL import Image
    import numpy as np
    import os

    def load_image(file):
        im = Image.open(file)
        im = im.resize((32, 32), Image.ANTIALIAS)
        im = np.array(im).astype(np.float32)
        im = im.transpose((2, 0, 1))  # CHW
        im = im[(2, 1, 0), :, :]  # BGR
        im = im.flatten()
        im = im / 255.0
        return im

    test_data = []
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    test_data.append((load_image(cur_dir + '/image/dog.png'), ))

    probs = paddle.infer(
        output_layer=out, parameters=parameters, input=test_data)
    lab = np.argsort(-probs)  # probs and lab are the results of one batch data
    print "Label of image/dog.png is: %d" % lab[0][0]


if __name__ == '__main__':
    main()
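The event handler above saves the parameters of each pass to params_pass_%d.tar. Here is a hedged sketch of how they could be reloaded later for inference (my own addition; the file name and CPU settings are assumptions, and it relies on the vggnet.py defined above):

# Hedged sketch: reload saved parameters and rebuild the same topology for inference.
import paddle.v2 as paddle
from vggnet import vgg

paddle.init(use_gpu=False, trainer_count=1)

image = paddle.layer.data(
    name="image", type=paddle.data_type.dense_vector(3 * 32 * 32))
out = paddle.layer.fc(
    input=vgg(image), size=10, act=paddle.activation.Softmax())

# e.g. the parameters saved after pass 199 (the file name is an assumption)
with open('params_pass_199.tar', 'r') as f:
    parameters = paddle.parameters.Parameters.from_tar(f)

# test_data would be built with load_image() as in the training script above:
# probs = paddle.infer(output_layer=out, parameters=parameters, input=test_data)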
3. Training results

From the training log: with 7 threads and 8 Tesla K80 GPUs, 200 passes took 16h21min. Compared with the few hours needed to train LeNet and AlexNet earlier in this series, that is a lot of compute, but the result is good: 89.11% accuracy, higher than both LeNet and AlexNet on the same hardware and with the same number of passes.
Implementing VGG with TensorFlow
1. Network structure
def inference_op(input_op, keep_prob):
    p = []
    # block 1: conv1_1-conv1_2-pool1
    conv1_1 = conv_op(input_op, name='conv1_1', kh=3, kw=3,
                      n_out=64, dh=1, dw=1, p=p)
    conv1_2 = conv_op(conv1_1, name='conv1_2', kh=3, kw=3,
                      n_out=64, dh=1, dw=1, p=p)
    pool1 = mpool_op(conv1_2, name='pool1', kh=2, kw=2,
                     dw=2, dh=2)
    # block 2: conv2_1-conv2_2-pool2
    conv2_1 = conv_op(pool1, name='conv2_1', kh=3, kw=3,
                      n_out=128, dh=1, dw=1, p=p)
    conv2_2 = conv_op(conv2_1, name='conv2_2', kh=3, kw=3,
                      n_out=128, dh=1, dw=1, p=p)
    pool2 = mpool_op(conv2_2, name='pool2', kh=2, kw=2,
                     dw=2, dh=2)
    # block 3: conv3_1-conv3_2-conv3_3-pool3
    conv3_1 = conv_op(pool2, name='conv3_1', kh=3, kw=3,
                      n_out=256, dh=1, dw=1, p=p)
    conv3_2 = conv_op(conv3_1, name='conv3_2', kh=3, kw=3,
                      n_out=256, dh=1, dw=1, p=p)
    conv3_3 = conv_op(conv3_2, name='conv3_3', kh=3, kw=3,
                      n_out=256, dh=1, dw=1, p=p)
    pool3 = mpool_op(conv3_3, name='pool3', kh=2, kw=2,
                     dw=2, dh=2)
    # block 4: conv4_1-conv4_2-conv4_3-pool4
    conv4_1 = conv_op(pool3, name='conv4_1', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv4_2 = conv_op(conv4_1, name='conv4_2', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv4_3 = conv_op(conv4_2, name='conv4_3', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    pool4 = mpool_op(conv4_3, name='pool4', kh=2, kw=2,
                     dw=2, dh=2)
    # block 5: conv5_1-conv5_2-conv5_3-pool5
    conv5_1 = conv_op(pool4, name='conv5_1', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv5_2 = conv_op(conv5_1, name='conv5_2', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv5_3 = conv_op(conv5_2, name='conv5_3', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    pool5 = mpool_op(conv5_3, name='pool5', kh=2, kw=2,
                     dw=2, dh=2)
    # flatten pool5 ([7, 7, 512]) into a vector
    shp = pool5.get_shape()
    flattened_shape = shp[1].value * shp[2].value * shp[3].value
    resh1 = tf.reshape(pool5, [-1, flattened_shape], name='resh1')

    # fully connected layer 1, with dropout against overfitting
    fc1 = fc_op(resh1, name='fc1', n_out=2048, p=p)
    fc1_drop = tf.nn.dropout(fc1, keep_prob, name='fc1_drop')

    # fully connected layer 2, with dropout against overfitting
    fc2 = fc_op(fc1_drop, name='fc2', n_out=2048, p=p)
    fc2_drop = tf.nn.dropout(fc2, keep_prob, name='fc2_drop')

    # fully connected layer 3, followed by softmax for per-class probabilities
    fc3 = fc_op(fc2_drop, name='fc3', n_out=1000, p=p)
    softmax = tf.nn.softmax(fc3)
    predictions = tf.argmax(softmax, 1)
    return predictions, softmax, fc3, p
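conv_op, fc_op and mpool_op are helper functions defined in the full script in the next section. As a hedged sketch (not part of the original code), this is one way to check that inference_op builds and produces the expected output shape, assuming those helpers are already defined in the same file:

# Minimal shape check for inference_op (assumes conv_op/fc_op/mpool_op are defined).
import numpy as np
import tensorflow as tf

if __name__ == '__main__':
    images = tf.placeholder(tf.float32, [None, 224, 224, 3])
    keep_prob = tf.placeholder(tf.float32)
    predictions, softmax, fc3, params = inference_op(images, keep_prob)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = np.random.rand(2, 224, 224, 3).astype(np.float32)
        probs = sess.run(softmax, feed_dict={images: batch, keep_prob: 1.0})
        print(probs.shape)  # expected: (2, 1000)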
2. Training script
# -*- coding: utf-8 -*-
"""
Created by huxiaoman 2017.12.12
vgg_tf.py: train a TensorFlow version of VGG-16 and classify the cifar-10 dataset
"""
from datetime import datetime
import math
import time
import tensorflow as tf
import cifar10

batch_size = 128
num_batches = 200

# Helper that builds a convolutional layer
# input_op : input tensor
# name     : layer name, used with tf.name_scope()
# kh, kw   : kernel height and width
# n_out    : number of output channels
# dh, dw   : stride height and width
# p        : list collecting all the VGG parameters
# The kernel weights are initialized with the Xavier method
def conv_op(input_op, name, kh, kw, n_out, dh, dw, p):
    n_in = input_op.get_shape()[-1].value  # number of input channels
    with tf.name_scope(name) as scope:
        kernel = tf.get_variable(scope + 'w',
                                 shape=[kh, kw, n_in, n_out], dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer_conv2d())
        # convolution
        conv = tf.nn.conv2d(input_op, kernel, (1, dh, dw, 1), padding='SAME')
        bias_init_val = tf.constant(0.0, shape=[n_out], dtype=tf.float32)
        biases = tf.Variable(bias_init_val, trainable=True, name='b')
        z = tf.nn.bias_add(conv, biases)
        activation = tf.nn.relu(z, name=scope)
        p += [kernel, biases]
        return activation

# Helper that builds a fully connected layer
# input_op : input tensor
# name     : layer name
# n_out    : number of output units
# p        : parameter list
# Weights are initialized with the Xavier method
def fc_op(input_op, name, n_out, p):
    n_in = input_op.get_shape()[-1].value

    with tf.name_scope(name) as scope:
        kernel = tf.get_variable(scope + 'w',
                                 shape=[n_in, n_out], dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer())
        biases = tf.Variable(tf.constant(0.1, shape=[n_out],
                                         dtype=tf.float32), name='b')
        # relu_layer computes relu(matmul(input_op, kernel) + biases)
        activation = tf.nn.relu_layer(input_op, kernel,
                                      biases, name=scope)
        p += [kernel, biases]
        return activation

# Helper that builds a max-pooling layer
# input_op : input tensor
# name     : layer name, used with tf.name_scope()
# kh, kw   : pooling window height and width
# dh, dw   : stride height and width
def mpool_op(input_op, name, kh, kw, dh, dw):
    return tf.nn.max_pool(input_op, ksize=[1, kh, kw, 1],
                          strides=[1, dh, dw, 1], padding='SAME', name=name)

# --------------- build VGG-16 ------------------

def inference_op(input_op, keep_prob):
    p = []
    # block 1: conv1_1-conv1_2-pool1
    conv1_1 = conv_op(input_op, name='conv1_1', kh=3, kw=3,
                      n_out=64, dh=1, dw=1, p=p)
    conv1_2 = conv_op(conv1_1, name='conv1_2', kh=3, kw=3,
                      n_out=64, dh=1, dw=1, p=p)
    pool1 = mpool_op(conv1_2, name='pool1', kh=2, kw=2,
                     dw=2, dh=2)
    # block 2: conv2_1-conv2_2-pool2
    conv2_1 = conv_op(pool1, name='conv2_1', kh=3, kw=3,
                      n_out=128, dh=1, dw=1, p=p)
    conv2_2 = conv_op(conv2_1, name='conv2_2', kh=3, kw=3,
                      n_out=128, dh=1, dw=1, p=p)
    pool2 = mpool_op(conv2_2, name='pool2', kh=2, kw=2,
                     dw=2, dh=2)
    # block 3: conv3_1-conv3_2-conv3_3-pool3
    conv3_1 = conv_op(pool2, name='conv3_1', kh=3, kw=3,
                      n_out=256, dh=1, dw=1, p=p)
    conv3_2 = conv_op(conv3_1, name='conv3_2', kh=3, kw=3,
                      n_out=256, dh=1, dw=1, p=p)
    conv3_3 = conv_op(conv3_2, name='conv3_3', kh=3, kw=3,
                      n_out=256, dh=1, dw=1, p=p)
    pool3 = mpool_op(conv3_3, name='pool3', kh=2, kw=2,
                     dw=2, dh=2)
    # block 4: conv4_1-conv4_2-conv4_3-pool4
    conv4_1 = conv_op(pool3, name='conv4_1', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv4_2 = conv_op(conv4_1, name='conv4_2', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv4_3 = conv_op(conv4_2, name='conv4_3', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    pool4 = mpool_op(conv4_3, name='pool4', kh=2, kw=2,
                     dw=2, dh=2)
    # block 5: conv5_1-conv5_2-conv5_3-pool5
    conv5_1 = conv_op(pool4, name='conv5_1', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv5_2 = conv_op(conv5_1, name='conv5_2', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    conv5_3 = conv_op(conv5_2, name='conv5_3', kh=3, kw=3,
                      n_out=512, dh=1, dw=1, p=p)
    pool5 = mpool_op(conv5_3, name='pool5', kh=2, kw=2,
                     dw=2, dh=2)
    # flatten pool5 ([7, 7, 512]) into a vector
    shp = pool5.get_shape()
    flattened_shape = shp[1].value * shp[2].value * shp[3].value
    resh1 = tf.reshape(pool5, [-1, flattened_shape], name='resh1')

    # fully connected layer 1, with dropout against overfitting
    fc1 = fc_op(resh1, name='fc1', n_out=2048, p=p)
    fc1_drop = tf.nn.dropout(fc1, keep_prob, name='fc1_drop')

    # fully connected layer 2, with dropout against overfitting
    fc2 = fc_op(fc1_drop, name='fc2', n_out=2048, p=p)
    fc2_drop = tf.nn.dropout(fc2, keep_prob, name='fc2_drop')

    # fully connected layer 3, followed by softmax for per-class probabilities
    fc3 = fc_op(fc2_drop, name='fc3', n_out=1000, p=p)
    softmax = tf.nn.softmax(fc3)
    predictions = tf.argmax(softmax, 1)
    return predictions, softmax, fc3, p

# benchmarking helper

def time_tensorflow_run(session, target, feed, info_string):
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0

    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target, feed_dict=feed)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' %
                      (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    mean_dur = total_duration / num_batches
    var_dur = total_duration_squared / num_batches - mean_dur * mean_dur
    std_dur = math.sqrt(var_dur)
    print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
          (datetime.now(), info_string, num_batches, mean_dur, std_dur))


def train_vgg16():
    with tf.Graph().as_default():
        image_size = 224  # input image size
        # random inputs can be used instead to check that the graph runs:
        # images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3], dtype=tf.float32, stddev=1e-1))
        with tf.device('/cpu:0'):
            images, labels = cifar10.distorted_inputs()
        keep_prob = tf.placeholder(tf.float32)
        prediction, softmax, fc8, p = inference_op(images, keep_prob)
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
        # start the input queue threads that distorted_inputs() depends on
        tf.train.start_queue_runners(sess=sess)
        time_tensorflow_run(sess, prediction, {keep_prob: 1.0}, "Forward")
        # simulate the training process
        objective = tf.nn.l2_loss(fc8)     # a stand-in loss
        grad = tf.gradients(objective, p)  # gradients of the loss w.r.t. all parameters
        time_tensorflow_run(sess, grad, {keep_prob: 0.5}, "Forward-backward")


if __name__ == '__main__':
    train_vgg16()
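One practical note: the cifar10 module imported here is the helper from the TensorFlow models tutorial (tutorials/image/cifar10), and distorted_inputs() expects the CIFAR-10 binaries to be on disk already. A hedged sketch of the one-off download step (my assumption based on that tutorial, not part of the original script):

# Hedged: fetch and extract the CIFAR-10 binaries once before calling train_vgg16().
import cifar10

cifar10.maybe_download_and_extract()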
Of course, we can also use tf.slim to simplify the network definition:
import tensorflow as tf
import tensorflow.contrib.slim as slim

def vgg16(inputs):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
        return net
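A hedged usage sketch (the placeholder wiring below is my own assumption, not from the original post). Note that depending on the TF 1.x version, slim.fully_connected may expect a rank-2 input, in which case a slim.flatten(net) before fc6 would be needed:

# Hypothetical wiring only: build the slim VGG-16 graph on a 224x224 RGB placeholder.
import tensorflow as tf
import tensorflow.contrib.slim as slim

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3], name='inputs')
logits = vgg16(inputs)  # assumes vgg16 ends with `return net`, as above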
Comparing the training results on the same hardware and environment: after 200 passes, the TensorFlow version reaches 89.18% accuracy in 18h12min. Compared with the PaddlePaddle run, the accuracy is about the same and the training time is a bit longer. The data could also be preprocessed first: converting it to TFRecord and feeding it through a multi-threaded input pipeline should cut the training time considerably; a rough sketch of the conversion follows.
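A minimal, hedged sketch of what that conversion could look like (the in-memory arrays and the file name below are placeholders for illustration, not code from this post):

# Hedged sketch: write CIFAR-10-style (image, label) pairs to a TFRecord file.
import numpy as np
import tensorflow as tf

def write_tfrecord(images, labels, filename):
    # images: uint8 array of shape [N, 32, 32, 3]; labels: int array of shape [N]
    writer = tf.python_io.TFRecordWriter(filename)
    for image, label in zip(images, labels):
        example = tf.train.Example(features=tf.train.Features(feature={
            'image_raw': tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[image.tobytes()])),
            'label': tf.train.Feature(
                int64_list=tf.train.Int64List(value=[int(label)])),
        }))
        writer.write(example.SerializeToString())
    writer.close()

# Hypothetical usage with random data standing in for CIFAR-10:
dummy_images = np.random.randint(0, 256, (10, 32, 32, 3)).astype(np.uint8)
dummy_labels = np.random.randint(0, 10, (10,))
write_tfrecord(dummy_images, dummy_labels, 'cifar10_train.tfrecord')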
Summary
Based on the analysis of the paper and the experiments above, a few takeaways:
1. The LRN layer costs a lot of compute for little benefit and can be dropped.
2. Large kernels can learn larger spatial patterns but need far more parameters; small kernels capture a more limited spatial extent per layer but need fewer parameters, and stacking several of them can work even better.
3. Deeper networks tend to perform better, but the vanishing gradient problem must be handled; ReLU activations, batch normalization and similar techniques mitigate it to some extent.
4. With the same number of iterations, small kernels plus a deep network beat large kernels plus a shallow network, which is a useful guideline when designing our own networks. The former may take longer to train, but it can converge faster and reach higher accuracy.
PS: To make it easier to catch my updates, I have set up a WeChat official account. Future articles will be published on both the account and my blog, so you will be notified in time; you can also leave questions there and I will see and answer them promptly.
You can scan the QR code below or simply search for the account name: CharlotteDataMining. Thanks for following ^_^
This article is also published at: https://mp.weixin.qq.com/s?__biz=MzI0OTQwMTA5Ng==&mid=2247483677&idx=1&sn=9402a0532bc6330f83e58c7e18f51b93&chksm=e9935b7adee4d26cd69de6c89b25be994735094ef420befd1d275f97821819ba9528f13e079a#rd
References:
1. Simonyan, K. & Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. https://arxiv.org/pdf/1409.1556.pdf