Training a Simple MLP Regression Model on the Boston Housing Dataset
Author: 小sen
The multilayer perceptron (MLP) has a long history and is the foundational algorithm underlying deep neural networks (DNNs). Each MLP model consists of an input layer, several hidden layers, and an output layer.
MLP Basics
- Objective: build a conventional neural network (i.e., a multilayer perceptron) with Keras for simple regression/classification tasks
MLP Structure
- Each MLP model consists of an input layer, several hidden layers, and an output layer
- The number of neurons in each layer is not restricted
MLP with One Hidden Layer
- Number of input neurons: 3
- Number of hidden neurons: 4
- Number of output neurons: 2 (see the sketch below)
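To make this structure concrete, here is a minimal Keras sketch of such a 3-4-2 network. The layer sizes come from the bullets above; the sigmoid activation and the variable name toy_mlp are illustrative assumptions, not part of the original.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Illustrative 3-4-2 MLP: 3 input features, one hidden layer of 4 neurons, 2 outputs
toy_mlp = Sequential()
toy_mlp.add(Dense(4, input_shape=(3,), activation='sigmoid'))  # hidden layer
toy_mlp.add(Dense(2))                                          # output layer
toy_mlp.summary()
```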
MLP for Regression Tasks
- Used when the target ("y") is continuous
- Mean squared error (MSE) is commonly used for both the loss function and the evaluation metric (see the short sketch after this list)
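As a quick reminder (not in the original), MSE is simply the mean of the squared differences between predictions and targets. A minimal NumPy sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    # mean of squared residuals
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([3.0, 2.5], [2.0, 3.0]))  # ((1.0)**2 + (0.5)**2) / 2 = 0.625
```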
```python
from tensorflow.keras.datasets import boston_housing

(X_train, y_train), (X_test, y_test) = boston_housing.load_data()
```
Dataset Description
- The Boston Housing dataset contains 506 data instances in total (404 for training and 102 for testing)
- 13 attributes (features) are used to predict the median value of homes at a given location
- Documentation: https://keras.io/datasets/
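As an illustrative sanity check (not in the original), printing the array shapes after loading should confirm these figures: 404 training rows, 102 test rows, and 13 features per row.

```python
print(X_train.shape, y_train.shape)  # expected: (404, 13) (404,)
print(X_test.shape, y_test.shape)    # expected: (102, 13) (102,)
```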
1. Creating the Model
- A Keras model object can be created with the Sequential class
- At the outset the model itself is empty; it is completed by adding layers and then compiling
- Documentation: https://keras.io/models/sequential/
```python
from tensorflow.keras.models import Sequential

model = Sequential()
```
1-1. Adding Layers
- Keras layers can be added to the model
- Adding layers is like stacking Lego bricks one on top of another
- Documentation: https://keras.io/layers/core/
```python
from tensorflow.keras.layers import Activation, Dense

# Keras model with two hidden layers of 10 neurons each
model.add(Dense(10, input_shape=(13,)))  # input layer => input_shape should be explicitly designated
model.add(Activation('sigmoid'))
model.add(Dense(10))                     # hidden layer => only the output dimension needs to be designated
model.add(Activation('sigmoid'))
model.add(Dense(10))                     # hidden layer => only the output dimension needs to be designated
model.add(Activation('sigmoid'))
model.add(Dense(1))                      # output layer => output dimension = 1 since this is a regression problem
```

```python
# This is equivalent to the code block above (add only one of the two to the model)
model.add(Dense(10, input_shape=(13,), activation='sigmoid'))
model.add(Dense(10, activation='sigmoid'))
model.add(Dense(10, activation='sigmoid'))
model.add(Dense(1))
```
1-2. Compiling the Model
- A Keras model should be compiled before training
- The loss (function) and the optimizer must be specified
- Documentation (optimizers): https://keras.io/optimizers/
- Documentation (losses): https://keras.io/losses/
```python
from tensorflow.keras import optimizers

sgd = optimizers.SGD(learning_rate=0.01)  # stochastic gradient descent optimizer
# for regression problems, mean squared error (MSE) is often employed
model.compile(optimizer=sgd, loss='mean_squared_error', metrics=['mse'])
```
Model Summary
- model.summary() prints the layers and their parameter counts. Note that the output below was produced after both of the equivalent layer-adding blocks above had been appended to the same Sequential model, which is why it lists eight Dense layers; adding only one of the blocks yields four Dense layers.

```python
model.summary()
```
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 10)                140
_________________________________________________________________
activation (Activation)      (None, 10)                0
_________________________________________________________________
dense_1 (Dense)              (None, 10)                110
_________________________________________________________________
activation_1 (Activation)    (None, 10)                0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                110
_________________________________________________________________
activation_2 (Activation)    (None, 10)                0
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 11
_________________________________________________________________
dense_4 (Dense)              (None, 10)                20
_________________________________________________________________
dense_5 (Dense)              (None, 10)                110
_________________________________________________________________
dense_6 (Dense)              (None, 10)                110
_________________________________________________________________
dense_7 (Dense)              (None, 1)                 11
=================================================================
Total params: 622
Trainable params: 622
Non-trainable params: 0
_________________________________________________________________
```
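The "Param #" column can be checked by hand: a Dense layer with n inputs and m units has n*m weights plus m biases. A small illustrative sketch of that arithmetic (the helper function dense_params is not part of the original):

```python
def dense_params(n_inputs, n_units):
    # weights (n_inputs * n_units) plus one bias per unit
    return n_inputs * n_units + n_units

print(dense_params(13, 10))  # 140 -> first Dense layer (13 input features)
print(dense_params(10, 10))  # 110 -> subsequent 10-neuron layers
print(dense_params(10, 1))   # 11  -> the 1-neuron output layer
```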
2. Training
- Train the model with the provided training data
```python
model.fit(X_train, y_train, batch_size=50, epochs=100, verbose=1)
```
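fit() also returns a History object whose history attribute records the loss for each epoch. An illustrative variant of the call above that keeps it (the variable name history is an assumption, not in the original):

```python
history = model.fit(X_train, y_train, batch_size=50, epochs=100, verbose=1)
print(history.history['loss'][-1])  # training loss of the final epoch
```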
3. Evaluation
- A Keras model can be evaluated with the evaluate() function
- The evaluation results are returned as a list
- Documentation: https://keras.io/metrics/
```python
results = model.evaluate(X_test, y_test)

print(model.metrics_names)   # list of metric names the model is employing
print(results)               # actual figures of the computed metrics
print('loss: ', results[0])
print('mse: ', results[1])
```
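Beyond the aggregate metrics, the trained model can also produce point predictions with predict(). This usage example is not part of the original; slicing the first five test rows is purely illustrative.

```python
y_pred = model.predict(X_test[:5])
print(y_pred.flatten())  # predicted median home values for the first five test instances
print(y_test[:5])        # corresponding ground-truth targets
```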
Editor in charge: 姜華
Source: Python之王