Neural Networks and Deep Learning Notes: Implementing a Cat Classifier

1. Neural Network Fundamentals

1.1 Backpropagation

Backpropagation is the core algorithm for training neural networks: it applies the chain rule to compute the gradient of the loss with respect to every parameter. Key formulas:
- Linear step: \(Z = W A + b\)
- Activation: \(A = \sigma(Z)\) (sigmoid) or \(A = \max(0, Z)\) (ReLU)
- Backward pass: \(dW = \frac{\partial \mathcal{L}}{\partial W} = \frac{1}{m} dZ A^T\) (see the identities after this list)
- Gradient update: \(W = W - \alpha\, dW\), \(b = b - \alpha\, db\)
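
For the sigmoid output layer combined with the cross-entropy loss of §1.2, the output-layer error takes a simple form. The per-layer linear-step gradients below are the standard backpropagation identities implemented later in §5.4:

$$
dZ^{[L]} = A^{[L]} - Y, \qquad
dW^{[l]} = \frac{1}{m}\, dZ^{[l]} A^{[l-1]T}, \qquad
db^{[l]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)}, \qquad
dA^{[l-1]} = W^{[l]T} dZ^{[l]}
$$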

1.2 Loss Function

We use the cross-entropy loss:
$$
\text{cost} = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log\left(a^{[L](i)}\right) + (1-y^{(i)}) \log\left(1 - a^{[L](i)}\right) \right]
$$
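
For a single positive example (\(y^{(i)} = 1\)) the summand reduces to \(-\log(a^{[L](i)})\): it is close to 0 when the predicted probability is near 1 and grows without bound as the prediction approaches 0, so confident wrong answers are penalized heavily.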

2. Building the Models

2.1 Two-Layer Neural Network

Structure: input layer -> ReLU hidden layer -> sigmoid output layer
- Parameter initialization: \(W_1 = 0.01 \cdot \text{randn}(n_h, n_x)\), \(b_1 = 0\); \(W_2 = 0.01 \cdot \text{randn}(n_y, n_h)\), \(b_2 = 0\)

2.2 L-Layer Neural Network

Structure: [LINEAR->RELU]*(L-1) -> LINEAR->SIGMOID
- Parameter initialization: \(W^{[l]} = \frac{\text{randn}(n^{[l]}, n^{[l-1]})}{\sqrt{n^{[l-1]}}}\), \(b^{[l]} = 0\) (a quick numerical check of this scaling follows)
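
The \(1/\sqrt{n^{[l-1]}}\) scaling (a Xavier-style choice) keeps the variance of the pre-activations roughly constant from layer to layer, which helps deeper networks train. A minimal numerical check under the assumption of unit-variance inputs (this snippet is illustrative and not part of the original notes):

import numpy as np

rng = np.random.default_rng(0)
n_prev, n, m = 500, 100, 1000
A_prev = rng.standard_normal((n_prev, m))               # unit-variance "activations"
W = rng.standard_normal((n, n_prev)) / np.sqrt(n_prev)  # scaled initialization
Z = W @ A_prev                                          # pre-activations of the next layer
print(np.var(Z))                                        # close to 1.0: variance is preserved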

3. Training and Optimization

3.1 Optimization Loop

  1. Forward propagation: compute the predictions and the intermediate caches
  2. Compute the cost: evaluate the cross-entropy loss
  3. Backward propagation: compute the gradient of each parameter
  4. Parameter update: apply gradient descent to the parameters

3.2 Regularization and Hyperparameters

  • Hyperparameters: learning rate \(\alpha\), number of iterations, hidden-layer sizes
  • Tips:
    • Use ReLU activations to mitigate vanishing gradients
    • Use mini-batch gradient descent to speed up convergence (see the sketch after this list)
    • Normalize the data (scale pixel values to the 0-1 range)
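
The code in §5 trains on the full batch; mini-batch gradient descent is only mentioned. A minimal sketch of how the training set could be split into mini-batches (the helper name random_mini_batches and the batch size of 64 are illustrative assumptions, not part of the original code):

import numpy as np

def random_mini_batches(X, Y, batch_size=64, seed=0):
    # Shuffle the example columns, then slice them into consecutive mini-batches.
    np.random.seed(seed)
    m = X.shape[1]
    permutation = np.random.permutation(m)
    X_shuffled, Y_shuffled = X[:, permutation], Y[:, permutation]
    return [(X_shuffled[:, k:k + batch_size], Y_shuffled[:, k:k + batch_size])
            for k in range(0, m, batch_size)]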

4. Prediction and Evaluation

4.1 Prediction Pipeline

  1. Preprocess the input image (resize, normalize)
  2. Run forward propagation to obtain the predicted probability
  3. Threshold at 0.5 to decide the class (cat / non-cat)

4.2 Evaluation Metric

  • Accuracy: \(\text{Accuracy} = \frac{\text{number of correct predictions}}{\text{total number of samples}}\)

5. Code Implementation

5.1 Imports

import numpy as np
import matplotlib.pyplot as plt
import scipy
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
from lr_utils import load_dataset

5.2 Parameter Initialization

def initialize_parameters(n_x, n_h, n_y):
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}

def initialize_parameters_deep(layer_dims):
    parameters = {}
    L = len(layer_dims)
    for l in range(1, L):
        parameters[f"W{l}"] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])
        parameters[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return parameters

5.3 Forward Propagation

def linear_forward(A, W, b):
    # Linear part of a layer's forward step: Z = W A + b; cache (A, W, b) for the backward pass
    Z = np.dot(W, A) + b
    return Z, (A, W, b)

def linear_activation_forward(A_prev, W, b, activation):
    # LINEAR -> ACTIVATION step; the dnn_utils_v2 activations return (A, activation_cache)
    Z, linear_cache = linear_forward(A_prev, W, b)
    if activation == "sigmoid":
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        A, activation_cache = relu(Z)
    return A, (linear_cache, activation_cache)

def L_model_forward(X, parameters):
    caches = []
    A = X
    L = len(parameters) // 2
    for l in range(1, L):
        A, cache = linear_activation_forward(A, parameters[f"W{l}"], parameters[f"b{l}"], "relu")
        caches.append(cache)
    AL, cache = linear_activation_forward(A, parameters[f"W{L}"], parameters[f"b{L}"], "sigmoid")
    caches.append(cache)
    return AL, caches

5.4 Backward Propagation

def linear_backward(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db

def linear_activation_backward(dA, cache, activation):
    # ACTIVATION -> LINEAR backward step; dnn_utils_v2's sigmoid_backward / relu_backward
    # take (dA, activation_cache) and return dZ directly
    linear_cache, activation_cache = cache
    if activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    elif activation == "relu":
        dZ = relu_backward(dA, activation_cache)
    return linear_backward(dZ, linear_cache)

def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches)
    dAL = - (Y / AL - (1 - Y) / (1 - AL))
    current_cache = caches[-1]
    grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid")
    for l in reversed(range(L-1)):
        current_cache = caches[l]
        dA_prev, dW, db = linear_activation_backward(grads["dA" + str(l+1)], current_cache, "relu")
        grads["dA" + str(l)] = dA_prev
        grads["dW" + str(l+1)] = dW
        grads["db" + str(l+1)] = db
    return grads

5.5 Training and Prediction

def compute_cost(AL, Y):
    m = Y.shape[1]
    cost = -np.sum(Y * np.log(AL) + (1-Y) * np.log(1-AL)) / m
    return np.squeeze(cost)

def update_parameters(parameters, grads, learning_rate):
    L = len(parameters) // 2
    for l in range(1, L+1):
        parameters[f"W{l}"] -= learning_rate * grads[f"dW{l}"]
        parameters[f"b{l}"] -= learning_rate * grads[f"db{l}"]
    return parameters

def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):
    parameters = initialize_parameters(layers_dims[0], layers_dims[1], layers_dims[2])
    for i in range(num_iterations):
        A1, cache1 = linear_activation_forward(X, parameters["W1"], parameters["b1"], "relu")
        A2, cache2 = linear_activation_forward(A1, parameters["W2"], parameters["b2"], "sigmoid")
        cost = compute_cost(A2, Y)
        dA2 = - (Y / A2 - (1-Y)/(1-A2))
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")
        grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
        parameters = update_parameters(parameters, grads, learning_rate)
        if print_cost and i % 100 == 0:
            print(f"Cost after {i} iterations: {cost}")
    return parameters
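
The notes report results for an L-layer model in §7, but only the two-layer training loop is shown above. A minimal sketch of the corresponding L-layer loop, assembled from the helpers in §5.2-5.5 (the function name L_layer_model is my own assumption, not part of the original notes):

def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):
    # Train [LINEAR->RELU]*(L-1) -> LINEAR->SIGMOID; also record the cost for plotting (see §7)
    parameters = initialize_parameters_deep(layers_dims)
    costs = []
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)
        cost = compute_cost(AL, Y)
        grads = L_model_backward(AL, Y, caches)
        parameters = update_parameters(parameters, grads, learning_rate)
        if print_cost and i % 100 == 0:
            print(f"Cost after {i} iterations: {cost}")
            costs.append(cost)
    return parameters, costs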

def predict(X, Y, parameters):
    AL, _ = L_model_forward(X, parameters)
    p = (AL > 0.5)
    return p, np.mean(p == Y)

5.6 Main Program

# Load the cat / non-cat dataset
train_x_orig, train_y, test_x_orig, test_y, classes = load_dataset()

# Preprocess: flatten the images and scale pixel values to [0, 1]
train_x = train_x_orig.reshape(train_x_orig.shape[0], -1).T / 255
test_x = test_x_orig.reshape(test_x_orig.shape[0], -1).T / 255

# Train the two-layer model
layers_dims_two = (12288, 7, 1)
parameters = two_layer_model(train_x, train_y, layers_dims_two, num_iterations=2500, print_cost=True)

# Evaluate on the training and test sets
_, train_acc = predict(train_x, train_y, parameters)
_, test_acc = predict(test_x, test_y, parameters)
print(f"训练集准确率: {train_acc:.2f}")
print(f"测试集准确率: {test_acc:.2f}")

6. Key Takeaways

  • Core techniques: backpropagation, parameter initialization, gradient descent
  • Model improvements: ReLU activations to mitigate vanishing gradients, He/Kaiming initialization, batch normalization
  • Typical applications: image classification, object detection, face recognition, and more

7. Visualizing the Results

Loss curves during training (a plotting sketch follows this list):
- Two-layer model: converges quickly to a low loss (100% training accuracy, 70% test accuracy)
- L-layer model: the deeper network performs better (test accuracy around 70%)
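
matplotlib is imported in §5.1, but the notes do not include plotting code. A minimal sketch of the loss curve, assuming the costs list recorded every 100 iterations by the L_layer_model sketch in §5.5:

plt.plot(np.squeeze(costs))
plt.ylabel("cost")
plt.xlabel("iterations (per hundreds)")
plt.title("Training cost (learning rate 0.0075)")
plt.show()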

Note: the complete code must be run together with the datasets and the utility files (dnn_utils_v2, lr_utils) from Andrew Ng's deep learning course.

Xiaoye