AI Learning: Residual Networks (ResNet)

The concept of residual networks (ResNet)

We know that because of exploding and vanishing gradients, the deeper a neural network is, the harder it is to train well; even with plenty of compute and data, it is difficult to train a very deep network to a good result. Skip connections address this: they take the activations of an earlier layer and pass them over the intermediate layers directly to a later layer, which mitigates exploding and vanishing gradients. A network built from such skip connections is called a residual network (ResNet).

Residual networks in practice

This article uses Keras's softmax function, and the latest Keras version we installed in the previous article conflicts with our TensorFlow version on this function, so we first need to downgrade Keras.

To be safe, close all running Jupyter Notebook processes first, then follow the steps below:

1. Open the Anaconda Prompt

2. Run the command activate tensorflow

3. Run the command pip install keras==2.1 (you can verify the installed versions afterwards with the check below)
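
To confirm the downgrade worked, a quick sanity check from Python is to print the installed versions (the exact version strings will depend on your installation):

import keras
import tensorflow as tf
print(keras.__version__)   # expect something like 2.1.x
print(tf.__version__)      # the TensorFlow version installed earlier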

In theory, the deeper a neural network is, the more complex the problems it can solve; in practice, the deeper it is, the harder it is to train well. Residual networks help us train deep networks.

import numpy as np
import tensorflow as tf   # used explicitly by the test cells below (tf.Session, tf.placeholder, ...)
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline

import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

1 - What stops very deep neural networks from training well?

Thanks to years of effort by earlier researchers, the networks we build have become deeper and deeper, from only a few layers at the beginning to well over a hundred layers today. Why build ever deeper networks? Because a deeper network can solve more complex problems and learn features at more levels of abstraction: the earlier layers learn basic features such as lines and edges, while later layers learn complex features such as faces.

However, the deeper the network, the worse the vanishing gradient problem becomes (occasionally it is exploding gradients instead), so the network learns more and more slowly and becomes sluggish, like a giant with a huge body that can barely think.

This is the problem residual networks are designed to solve.

2 - Building a residual network

A residual network is made up of residual blocks. In the figure below, the left side is a plain network block and the right side is a residual block. A residual block simply adds a shortcut to the plain block so that activations and gradients can skip over layers, which helps avoid vanishing and exploding gradients (if this sentence is unclear, go back and reread the article on vanishing gradients):

Figure 2: A residual block

Some papers also argue that a residual block lets a layer learn a particular feature more independently, which likewise helps avoid vanishing gradients. This is only mentioned in passing; you do not need to dig into it.

When implementing a residual block there are two cases: either the matrix being skipped forward has the same dimensions as the target layer's matrix, or it does not; in the latter case the matrix's dimensions must be transformed. Let's look at the matching-dimension case first.

2.1 - The residual block when dimensions match (identity block)

In the figure below, the activations at the layer on the left end of the shortcut have the same dimensions as the activations at the layer on the right end, so they can be passed across directly; that is, $a^{[l]}$ and $a^{[l+2]}$ have the same dimensions. The figure shows two paths: a straight main path and a curved shortcut:

Figure 3
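
In equation form, the shortcut in Figure 3 adds $a^{[l]}$ to the linear output of the layer two steps ahead, before that layer's activation function is applied (this is exactly the Add-then-ReLU step in the code below):

$$a^{[l+2]} = g\big(z^{[l+2]} + a^{[l]}\big)$$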

The block above skips over 2 layers; you can also skip more. The block below skips over 3 layers:

Figure 4

# Residual block for the case where dimensions match (identity block)
def identity_block(X, f, filters, stage, block):
    """
    Implements the residual block shown in Figure 4.

    Arguments:
    X -- the activation matrix to be skipped forward
    f -- integer, the window size of the middle convolutional layer
    filters -- list of integers, the number of filters in each convolutional layer of the block
    stage -- integer, used to help name the layers
    block -- string, used to help name the layers

    Returns:
    X -- the final output matrix of the residual block
    """

    # Build layer names; the names themselves are unimportant
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Number of filters for each convolutional layer
    F1, F2, F3 = filters

    # Save the input activations so they can be injected into a later layer via the shortcut
    X_shortcut = X

    # Print the tensor for inspection
    print("X:", X)

    # First group of layers on the main path (the first green/orange/yellow group in Figure 4)
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)

    # Print the tensor for inspection
    print("X-Conv2D:", X)

    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)

    # Print the tensor for inspection
    print("X-Conv2D-BatchNor:", X)

    X = Activation('relu')(X)

    # Print the tensor for inspection
    print("X-Conv2D-BatchNor-relu:", X)

    # Second group of layers on the main path (the second group in Figure 4)
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third group of layers on the main path (the third group in Figure 4)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # This step implements the shortcut: simply add the earlier activations X_shortcut
    # to the output of the third group, then pass the sum through the activation function
    # and on into the rest of the network
    X = Add()([X, X_shortcut])  # g(Z^[l+2] + a^[l])
    X = Activation('relu')(X)

    return X
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    # print("X = ", X.shape)

    A = identity_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block='a')
    print("A = ", A)

    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
X: Tensor("Placeholder:0", shape=(3, 4, 4, 6), dtype=float32)
X-Conv2D: Tensor("res1a_branch2a/BiasAdd:0", shape=(3, 4, 4, 2), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn1a_branch2a/cond/Merge:0", shape=(3, 4, 4, 2), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_1/Relu:0", shape=(3, 4, 4, 2), dtype=float32)
A =  Tensor("activation_3/Relu:0", shape=(3, 4, 4, 6), dtype=float32)
out = [0.94822985 0.         1.1610144  2.747859   0.         1.36677   ]

2.2 - The residual block when dimensions differ (convolutional block)

When the dimensions differ, we cannot directly add the earlier activations to the later ones, so a convolutional layer is added on the shortcut path to change the dimensions of the earlier activation matrix. As shown in the figure below, a Conv2D layer has been added on the shortcut:

Figure 5
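
As a quick dimension check, using the same test shapes that appear later in this section (this is just arithmetic, not part of the original code): for a 'valid' convolution the output width is $\lfloor (n - f)/s \rfloor + 1$. With an input of shape (3, 4, 4, 6), filters=[2, 4, 6] and s=2, the first 1×1 convolution on the main path gives $\lfloor (4 - 1)/2 \rfloor + 1 = 2$, so the main path ends with shape (3, 2, 2, 6); the 1×1 convolution with stride 2 on the shortcut maps (3, 4, 4, 6) to the same (3, 2, 2, 6), so the two tensors can be added element-wise.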

# Residual block for the case where dimensions differ (the convolutional block in Figure 5)

def convolutional_block(X, f, filters, stage, block, s=2):

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    F1, F2, F3 = filters

    X_shortcut = X

    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Add a convolutional layer and a BatchNormalization layer on the shortcut path.
    # The convolution changes the dimensions of X_shortcut so that it can be added to X.
    X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Add the reshaped X_shortcut and X together
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = convolutional_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block='a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
out = [0.09018463 1.2348977  0.46822017 0.0367176  0.         0.65516603]

3 - Building ResNet50 (a 50-layer residual network)

Next we will implement a 50-layer residual network, shown in the figure below. "ID BLOCK" refers to the identity block (matching dimensions), and "ID BLOCK x3" means three such blocks in a row. "CONV BLOCK" refers to the convolutional block (mismatched dimensions). The residual blocks are grouped into 5 stages, stage 1 through stage 5.

Figure 6: The ResNet50 architecture
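
A quick count explains the name: the stage-1 convolution is 1 weight layer; stages 2 to 5 contain 3, 4, 6 and 3 residual blocks respectively, each with 3 convolutions on its main path, giving $(3 + 4 + 6 + 3) \times 3 = 48$ layers; adding the final fully connected layer gives $1 + 48 + 1 = 50$ (the 1×1 convolutions on the shortcut paths of the CONV blocks are conventionally not counted).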

# Implements ResNet50

def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    参数:
    input_shape -- 输入的图像矩阵的维度
    classes -- 类别数量

    Returns:
    model -- 网络模型
    """

    # Define the input tensor with the given shape
    X_input = Input(input_shape)

    # Zero-pad the border of the input
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    # Stage 3 
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')

    # Stage 4 
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')

    # Stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')

    # Average pooling layer
    X = AveragePooling2D(pool_size=(2, 2), padding='same')(X)

    # Flatten the activations and attach the fully connected softmax layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)

    # Build the model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model
model = ResNet50(input_shape=(64, 64, 3), classes=6)
X: Tensor("activation_7/Relu:0", shape=(?, 15, 15, 256), dtype=float32)
X-Conv2D: Tensor("res2b_branch2a/BiasAdd:0", shape=(?, 15, 15, 64), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn2b_branch2a/cond/Merge:0", shape=(?, 15, 15, 64), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_8/Relu:0", shape=(?, 15, 15, 64), dtype=float32)
X: Tensor("activation_10/Relu:0", shape=(?, 15, 15, 256), dtype=float32)
X-Conv2D: Tensor("res2c_branch2a/BiasAdd:0", shape=(?, 15, 15, 64), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn2c_branch2a/cond/Merge:0", shape=(?, 15, 15, 64), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_11/Relu:0", shape=(?, 15, 15, 64), dtype=float32)
X: Tensor("activation_16/Relu:0", shape=(?, 8, 8, 512), dtype=float32)
X-Conv2D: Tensor("res3b_branch2a/BiasAdd:0", shape=(?, 8, 8, 128), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn3b_branch2a/cond/Merge:0", shape=(?, 8, 8, 128), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_17/Relu:0", shape=(?, 8, 8, 128), dtype=float32)
X: Tensor("activation_19/Relu:0", shape=(?, 8, 8, 512), dtype=float32)
X-Conv2D: Tensor("res3c_branch2a/BiasAdd:0", shape=(?, 8, 8, 128), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn3c_branch2a/cond/Merge:0", shape=(?, 8, 8, 128), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_20/Relu:0", shape=(?, 8, 8, 128), dtype=float32)
X: Tensor("activation_22/Relu:0", shape=(?, 8, 8, 512), dtype=float32)
X-Conv2D: Tensor("res3d_branch2a/BiasAdd:0", shape=(?, 8, 8, 128), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn3d_branch2a/cond/Merge:0", shape=(?, 8, 8, 128), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_23/Relu:0", shape=(?, 8, 8, 128), dtype=float32)
X: Tensor("activation_28/Relu:0", shape=(?, 4, 4, 1024), dtype=float32)
X-Conv2D: Tensor("res4b_branch2a/BiasAdd:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn4b_branch2a/cond/Merge:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_29/Relu:0", shape=(?, 4, 4, 256), dtype=float32)
X: Tensor("activation_31/Relu:0", shape=(?, 4, 4, 1024), dtype=float32)
X-Conv2D: Tensor("res4c_branch2a/BiasAdd:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn4c_branch2a/cond/Merge:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_32/Relu:0", shape=(?, 4, 4, 256), dtype=float32)
X: Tensor("activation_34/Relu:0", shape=(?, 4, 4, 1024), dtype=float32)
X-Conv2D: Tensor("res4d_branch2a/BiasAdd:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn4d_branch2a/cond/Merge:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_35/Relu:0", shape=(?, 4, 4, 256), dtype=float32)
X: Tensor("activation_37/Relu:0", shape=(?, 4, 4, 1024), dtype=float32)
X-Conv2D: Tensor("res4e_branch2a/BiasAdd:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn4e_branch2a/cond/Merge:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_38/Relu:0", shape=(?, 4, 4, 256), dtype=float32)
X: Tensor("activation_40/Relu:0", shape=(?, 4, 4, 1024), dtype=float32)
X-Conv2D: Tensor("res4f_branch2a/BiasAdd:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn4f_branch2a/cond/Merge:0", shape=(?, 4, 4, 256), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_41/Relu:0", shape=(?, 4, 4, 256), dtype=float32)
X: Tensor("activation_46/Relu:0", shape=(?, 2, 2, 2048), dtype=float32)
X-Conv2D: Tensor("res5b_branch2a/BiasAdd:0", shape=(?, 2, 2, 512), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn5b_branch2a/cond/Merge:0", shape=(?, 2, 2, 512), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_47/Relu:0", shape=(?, 2, 2, 512), dtype=float32)
X: Tensor("activation_49/Relu:0", shape=(?, 2, 2, 2048), dtype=float32)
X-Conv2D: Tensor("res5c_branch2a/BiasAdd:0", shape=(?, 2, 2, 512), dtype=float32)
X-Conv2D-BatchNor: Tensor("bn5c_branch2a/cond/Merge:0", shape=(?, 2, 2, 512), dtype=float32)
X-Conv2D-BatchNor-relu: Tensor("activation_50/Relu:0", shape=(?, 2, 2, 512), dtype=float32)

Compile the model

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
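
Before training, it can be useful to sanity-check the architecture. A minimal sketch using standard Keras utilities (the output file name is just an example; plot_model, imported at the top of this notebook, needs pydot and graphviz installed):

model.summary()                                               # prints every layer with its output shape and parameter count
plot_model(model, to_file='resnet50.png', show_shapes=True)   # draws the network graph to an image file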

Once the model is compiled, we can start training it. First, let's load the dataset.

Figure 7: The hand-sign dataset

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)

Because the model is large and training takes a long time, we only train for 2 epochs below. Even 2 epochs take more than ten minutes!

model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
Epoch 1/2
1080/1080 [==============================] - 173s - loss: 3.2195 - acc: 0.2407   
Epoch 2/2
1080/1080 [==============================] - 167s - loss: 2.3737 - acc: 0.2944   

<keras.callbacks.History at 0x7efe508c3ef0>

The loss in epoch 2 (about 2.4) is lower than in epoch 1, and the accuracy in epoch 2 (about 0.29) is higher than in epoch 1. This shows that the more we train, the more accurate the network becomes.

Now let's check the model's accuracy on the test set:

preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
120/120 [==============================] - 12s 97ms/step
Loss = 1.9599643150965373
Test Accuracy = 0.16666666666666666

The accuracy is very low because we only trained for 2 epochs. Of course, you can modify the code above to train for more epochs and the accuracy will go up. In my experience, after training for 20 epochs on a CPU the accuracy improves noticeably, but those 20 epochs take about an hour.
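
If you want to produce a saved model like the one loaded below yourself, a minimal sketch (assuming you are willing to wait for the longer run) is to train for more epochs and then save the result to an HDF5 file:

model.fit(X_train, Y_train, epochs=20, batch_size=32)   # roughly an hour on CPU, much faster on a GPU
model.save('ResNet50.h5')                                # saves architecture + weights so load_model can restore it later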

To show you the end result, I trained the ResNet50 model on a GPU. The code below loads this pre-trained model and evaluates it on the test set.

model = load_model('ResNet50.h5') 
preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))

As you can see, the accuracy reaches about 0.8, which is not bad.
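
As a usage sketch (not part of the original notebook), you can also run the loaded model on a single test image with model.predict and turn the softmax output back into a class label with np.argmax:

img = X_test[0:1]            # keep the batch dimension: shape (1, 64, 64, 3)
probs = model.predict(img)   # shape (1, 6): one probability per hand-sign class
print("predicted class:", np.argmax(probs))
print("true class:", np.argmax(Y_test[0]))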

Helper functions

resnets_utils.py

import os
import numpy as np
import tensorflow as tf
import h5py
import math

def load_dataset():
    train_dataset = h5py.File('datasets/train_signs.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

    test_dataset = h5py.File('datasets/test_signs.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

    classes = np.array(test_dataset["list_classes"][:]) # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (m, Hi, Wi, Ci)
    Y -- true "label" matrix (one-hot), of shape (m, n_y)
    mini_batch_size -- size of the mini-batches, integer
    seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """

    m = X.shape[0]                  # number of training examples
    mini_batches = []
    np.random.seed(seed)

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation,:,:,:]
    shuffled_Y = Y[permutation,:]

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size : k * mini_batch_size + mini_batch_size,:,:,:]
        mini_batch_Y = shuffled_Y[k * mini_batch_size : k * mini_batch_size + mini_batch_size,:]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size : m,:,:,:]
        mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size : m,:]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
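
# Usage sketch (this helper is not called in this notebook, where model.fit does its own batching):
#   for mini_batch_X, mini_batch_Y in random_mini_batches(X_train, Y_train, mini_batch_size=64, seed=0):
#       model.train_on_batch(mini_batch_X, mini_batch_Y)   # one gradient step per mini-batch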

def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y
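
# Example: convert_to_one_hot(np.array([1, 2, 0]), 3) returns
#   [[0. 0. 1.]
#    [1. 0. 0.]
#    [0. 1. 0.]]
# i.e. one one-hot *column* per label, which is why the notebook transposes the result with .T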

def forward_propagation_for_predict(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters" 
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3'] 
                                                           # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)                      # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                                    # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)                     # Z2 = np.dot(W2, a1) + b2
    A2 = tf.nn.relu(Z2)                                    # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)                     # Z3 = np.dot(W3, A2) + b3

    return Z3

def predict(X, parameters):

    W1 = tf.convert_to_tensor(parameters["W1"])
    b1 = tf.convert_to_tensor(parameters["b1"])
    W2 = tf.convert_to_tensor(parameters["W2"])
    b2 = tf.convert_to_tensor(parameters["b2"])
    W3 = tf.convert_to_tensor(parameters["W3"])
    b3 = tf.convert_to_tensor(parameters["b3"])

    params = {"W1": W1,
              "b1": b1,
              "W2": W2,
              "b2": b2,
              "W3": W3,
              "b3": b3}

    x = tf.placeholder("float", [12288, 1])

    z3 = forward_propagation_for_predict(x, params)
    p = tf.argmax(z3)

    sess = tf.Session()
    prediction = sess.run(p, feed_dict = {x: X})

    return prediction

Related articles
Conv2D layer documentation

Those who act will accomplish; those who keep walking will arrive.