A Beginner's Tutorial on TensorFlow and Keras (Part 3)

Arguments of the Dense layer

  1. units — dimensionality of the output
  2. activation — the activation function; if none is specified, no activation is applied (the layer is linear)
  3. use_bias — boolean, whether the layer uses a bias term
  4. kernel_initializer — initializer for the kernel weight matrix
  5. bias_initializer — initializer for the bias vector (a usage sketch combining these arguments follows this list)
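To make the list concrete, here is a minimal sketch of a Dense layer with all five arguments spelled out. The sizes here are illustrative choices, not values taken from the model below; 'glorot_uniform' and 'zeros' are the Keras defaults for the two initializers:

from tensorflow.keras.layers import Dense

layer = Dense(
    units=10,                             # output dimensionality
    activation='tanh',                    # applied element-wise to the output
    use_bias=True,                        # include a trainable bias vector
    kernel_initializer='glorot_uniform',  # initializer for the kernel weights
    bias_initializer='zeros'              # initializer for the bias vector
)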
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import optimizers

model = Sequential(layers=None, name=None)
model.add(Dense(10, input_shape=(29,), activation='tanh'))
model.add(Dense(5, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
adam = optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])

Architecture summary:

model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 10)                300
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 55
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 6
=================================================================
Total params: 361
Trainable params: 361
Non-trainable params: 0
_________________________________________________________________

Let's try to make sense of the output above (the walkthrough covers the two hidden layers and the output layer):
  1. We created a neural network with one input layer, two hidden layers, and one output layer
  2. The input has 29 variables and the first layer has 10 neurons, so the weight matrix has shape 10 x 29 and the bias vector has shape 10 x 1
  3. Total parameters in layer 1 = 10 x 29 + 10 x 1 = 300
  4. The first layer produces 10 output values and uses tanh as its activation function. The second layer has 5 neurons and 10 inputs, so its weight matrix is 5 x 10 and its bias vector is 5 x 1
  5. Total parameters in layer 2 = 5 x 10 + 5 x 1 = 55
  6. Finally, the output layer has a single neuron that takes the 5 outputs of hidden layer 2 as inputs, plus one bias term, so its parameter count = 5 + 1 = 6 (all three counts are reproduced in the sketch below)
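As a sanity check, the per-layer counts can be reproduced with the same arithmetic the summary uses. A small sketch, assuming only the layer sizes given above:

# A Dense layer has inputs * units weights plus units biases
def dense_params(n_inputs, n_units):
    return n_inputs * n_units + n_units

layer_sizes = [(29, 10), (10, 5), (5, 1)]  # (inputs, units) for each layer
counts = [dense_params(i, u) for i, u in layer_sizes]
print(counts)       # [300, 55, 6]
print(sum(counts))  # 361, matching "Total params" in model.summary()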
model.fit(X_train, y_train.values, batch_size=2000, epochs=20, verbose=1)

Epoch 1/20
114/114 [==============================] - 0s 2ms/step - loss: 0.3434 - accuracy: 0.9847
Epoch 2/20
114/114 [==============================] - 0s 2ms/step - loss: 0.1029 - accuracy: 0.9981
Epoch 3/20
114/114 [==============================] - 0s 2ms/step - loss: 0.0518 - accuracy: 0.9983
Epoch 4/20
114/114 [==============================] - 0s 2ms/step - loss: 0.0341 - accuracy: 0.9986
Epoch 5/20
114/114 [==============================] - 0s 2ms/step - loss: 0.0255 - accuracy: 0.9987
Epoch 6/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0206 - accuracy: 0.9988
Epoch 7/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0174 - accuracy: 0.9988
Epoch 8/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0152 - accuracy: 0.9988
Epoch 9/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0137 - accuracy: 0.9989
Epoch 10/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0125 - accuracy: 0.9989
Epoch 11/20
114/114 [==============================] - 0s 2ms/step - loss: 0.0117 - accuracy: 0.9989
Epoch 12/20
114/114 [==============================] - 0s 2ms/step - loss: 0.0110 - accuracy: 0.9989
Epoch 13/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0104 - accuracy: 0.9989
Epoch 14/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0099 - accuracy: 0.9989
Epoch 15/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0095 - accuracy: 0.9989
Epoch 16/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0092 - accuracy: 0.9989
Epoch 17/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0089 - accuracy: 0.9989
Epoch 18/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0087 - accuracy: 0.9989
Epoch 19/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0084 - accuracy: 0.9989
Epoch 20/20
114/114 [==============================] - 0s 1ms/step - loss: 0.0082 - accuracy: 0.9989
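Training accuracy alone does not tell us how the model generalizes, so the natural next step is to score held-out data. A minimal sketch, assuming X_test and y_test were split off the same way as X_train and y_train (neither is defined above):

# Evaluate the trained model on held-out data (X_test / y_test are assumed)
test_loss, test_accuracy = model.evaluate(X_test, y_test.values, verbose=0)
print(f'test loss: {test_loss:.4f}, test accuracy: {test_accuracy:.4f}')

# The sigmoid output is a probability; threshold at 0.5 for class labels
y_prob = model.predict(X_test)
y_pred = (y_prob > 0.5).astype(int)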

