A Plain-Language Explanation of CNNs: What Does CNN Mean? (Part 3)


Code:

# Import the deep learning library
import tensorflow as tf

# Define our computational graph
W1 = tf.constant(5.0, name="x")
W2 = tf.constant(3.0, name="y")
W3 = tf.cos(W1, name="cos")
W4 = tf.sin(W2, name="sin")
W5 = tf.multiply(W3, W4, name="mult")
W6 = tf.divide(W1, W2, name="div")
W7 = tf.add(W5, W6, name="add")

# Open the session
with tf.Session() as sess:
    cos = sess.run(W3)
    sin = sess.run(W4)
    mult = sess.run(W5)
    div = sess.run(W6)
    add = sess.run(W7)

    # Before running TensorBoard, make sure you have generated summary data
    # in a log directory by creating a summary writer
    writer = tf.summary.FileWriter("./Desktop/ComputationGraph", sess.graph)

# Once you have event files, run TensorBoard and provide the log directory
# Command: tensorboard --logdir="path/to/logs"

Visualizing with TensorBoard: What is TensorBoard? TensorBoard is a suite of web applications for inspecting and understanding TensorFlow runs and graphs. It is one of the biggest advantages of Google's TensorFlow over Facebook's PyTorch.
The code above, visualized in TensorBoard.
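Beyond graph visualization, TensorBoard can also track scalar values over time. Here is a minimal sketch of logging a scalar summary, using the same TF 1.x API as the code above; the log directory path and the name "sqrt_of_x" are illustrative choices, not from the article:

# A minimal sketch of logging a scalar summary for TensorBoard (TF 1.x API)
import tensorflow as tf

x = tf.constant(4.0, name="x")
y = tf.sqrt(x, name="sqrt")

# Attach a scalar summary to the value we want to track
tf.summary.scalar("sqrt_of_x", y)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("./logs/scalar_demo", sess.graph)
    summary, _ = sess.run([merged, y])
    writer.add_summary(summary, global_step=0)
    writer.close()

# Then: tensorboard --logdir="./logs/scalar_demo"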
Now that we have a deeper understanding of convolutional neural networks, TensorFlow, and TensorBoard, let's build our first convolutional neural network, one that recognizes handwritten digits from the MNIST dataset.
The MNIST dataset
Our convolutional neural network model will resemble the LeNet-5 architecture, composed of convolutional layers, max-pooling layers, and non-linear activation layers.
A 3D simulation of a convolutional neural network.
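Before reading the full listing below, it helps to check where the fully connected layer's input size of 7 * 7 * 64 comes from: each SAME-padded convolution preserves the 28x28 spatial size, each 2x2 max-pooling halves it (28 -> 14 -> 7), and the second convolution produces 64 feature maps. A quick sketch of that arithmetic (the helper name here is illustrative, not from the article):

# Illustrative arithmetic: trace the spatial dimensions through the network
def pool_output(size, stride=2):
    # 'SAME' max pooling with stride 2 halves the spatial size (rounding up)
    return (size + stride - 1) // stride

side = 28                      # MNIST images are 28x28
side = pool_output(side)       # after conv1 (SAME, 28x28) + pool1 -> 14
side = pool_output(side)       # after conv2 (SAME, 14x14) + pool2 -> 7
channels = 64                  # feature maps produced by the second convolution

print(side * side * channels)  # 3136 == 7 * 7 * 64, the FC layer's input size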
Code:

# Import the deep learning library
import tensorflow as tf
import time

# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Network inputs and outputs
# The network's input is a 28x28 dimensional input
n = 28
m = 28
num_input = n * m  # MNIST data input
num_classes = 10   # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])

# Storing the parameters of our LeNet-5 inspired Convolutional Neural Network
weights = {
    "W_ij": tf.Variable(tf.random_normal([5, 5, 1, 32])),
    "W_jk": tf.Variable(tf.random_normal([5, 5, 32, 64])),
    "W_kl": tf.Variable(tf.random_normal([7 * 7 * 64, 1024])),
    "W_lm": tf.Variable(tf.random_normal([1024, num_classes]))
}
biases = {
    "b_ij": tf.Variable(tf.random_normal([32])),
    "b_jk": tf.Variable(tf.random_normal([64])),
    "b_kl": tf.Variable(tf.random_normal([1024])),
    "b_lm": tf.Variable(tf.random_normal([num_classes]))
}

# The hyper-parameters of our Convolutional Neural Network
learning_rate = 1e-3
num_steps = 500
batch_size = 128
display_step = 10

def ConvolutionLayer(x, W, b, strides=1):
    # Convolution Layer
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return x

def ReLU(x):
    # ReLU activation function
    return tf.nn.relu(x)

def PoolingLayer(x, k=2, strides=2):
    # Max Pooling layer
    return tf.nn.max_pool(x, ksize=[1, k, k, 1],
                          strides=[1, strides, strides, 1], padding='SAME')

def Softmax(x):
    # Softmax activation function for the CNN's final output
    return tf.nn.softmax(x)

# Create model
def ConvolutionalNeuralNetwork(x, weights, biases):
    # MNIST data input is a 1-D row vector of 784 features (28x28 pixels)
    # Reshape to match picture format [Height x Width x Channel]
    # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution Layer
    Conv1 = ConvolutionLayer(x, weights["W_ij"], biases["b_ij"])
    # Non-Linearity
    ReLU1 = ReLU(Conv1)
    # Max Pooling (down-sampling)
    Pool1 = PoolingLayer(ReLU1, k=2)

    # Convolution Layer
    Conv2 = ConvolutionLayer(Pool1, weights["W_jk"], biases["b_jk"])
    # Non-Linearity
    ReLU2 = ReLU(Conv2)
    # Max Pooling (down-sampling)
    Pool2 = PoolingLayer(ReLU2, k=2)

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    FC = tf.reshape(Pool2, [-1, weights["W_kl"].get_shape().as_list()[0]])
    FC = tf.add(tf.matmul(FC, weights["W_kl"]), biases["b_kl"])
    FC = ReLU(FC)

    # Output, class prediction
    output = tf.add(tf.matmul(FC, weights["W_lm"]), biases["b_lm"])
    return output
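The listing above stops after defining the model function; the hyper-parameters (learning_rate, num_steps, batch_size, display_step) are declared but not yet used. Here is a minimal sketch of the missing training loop, assuming the standard TF 1.x softmax cross-entropy loss and Adam optimizer; the names logits, loss_op, and train_op are my own, not from the article:

# A minimal sketch of the training loop, assuming the setup above
logits = ConvolutionalNeuralNetwork(X, weights, biases)
prediction = Softmax(logits)

# Softmax cross-entropy loss and Adam optimizer
loss_op = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y))
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss_op)

# Accuracy: fraction of predictions matching the one-hot labels
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1, num_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            loss, acc = sess.run([loss_op, accuracy],
                                 feed_dict={X: batch_x, Y: batch_y})
            print("Step %d, loss %.4f, batch accuracy %.3f" % (step, loss, acc))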

