Gradient Derivation Study Notes 2

The problem

In Gradient Derivation Study Notes 1, we learned how to compute the gradient of the loss with respect to the scores, in particular the gradient of the average loss with respect to the score matrix. For mini-batch stochastic gradient descent, what we actually need are the gradients of the average loss with respect to each layer's parameters W and b. And because gradients propagate between layers through the activations, we also need the gradient of the average loss with respect to each layer's input X (the previous layer's activations). This note therefore summarizes how to compute dW, db, and dX, with the emphasis on the linear algebra; thanks to numpy, each result ends up as a single line of Python.

Tricks used in the gradient derivation

The chain rule

If $L(.)$ in $L^{(i)}=L(f(x_i; W), y_i)$ is complicated, we rely on the chain rule. For example, if f is a K-dimensional vector, then
$$\frac{\partial L^{(i)}}{\partial W} =\sum_{j=1}^K\frac{\partial L^{(i)}}{\partial f_j}\cdot \frac{\partial f_j}{\partial W} =\left(\begin{matrix} \frac{\partial L^{(i)}}{\partial f_1} & \cdots & \frac{\partial L^{(i)}}{\partial f_K} \end{matrix}\right) \cdot \left(\begin{matrix} \frac{\partial f_1}{\partial W} \\ \vdots \\ \frac{\partial f_K}{\partial W}\end{matrix}\right) =\bigg(\frac{\partial L^{(i)}}{\partial f}\bigg)^T \cdot\frac{\partial f}{\partial W}$$

If the model f is a multi-layer neural network, computing $\partial f_j/\partial W$ itself also requires the chain rule; we do not expand that here.
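As a quick sanity check (not part of the derivation above), the sketch below verifies the chain-rule sum for a toy scalar parameter w, a 3-dimensional intermediate f, and an assumed loss L = sum(f**2); the comparison against a central-difference numerical derivative should print True.

import numpy as np

w = 0.7
a = np.array([1.0, 2.0, 3.0])
f = lambda w: np.sin(a * w)             # a K = 3 dimensional intermediate f(w)
L = lambda w: (f(w) ** 2).sum()         # assumed toy loss L = sum_j f_j^2

dL_df = 2 * f(w)                        # dL/df_j
df_dw = a * np.cos(a * w)               # df_j/dw
chain = (dL_df * df_dw).sum()           # sum_j (dL/df_j) * (df_j/dw)

h = 1e-6
numeric = (L(w + h) - L(w - h)) / (2 * h)   # central difference
print(np.allclose(chain, numeric))      # True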

Derivatives with respect to matrices

Recall that the target is $\frac{\partial L}{\partial W}$. For a single sample, $loss_i=L(f(x_i;W), y_i)$, where W is a vector (binary classification) or a matrix (multi-class classification); in a neural network, every layer's W is a matrix. The shape of W is clearly determined by the input and output dimensions:
$$X_{(1 \times D)} \cdot W_{(D \times K)} = Y_{(1 \times K)}$$ $$X_{(2 \times D)} \cdot W_{(D \times K)} = Y_{(2 \times K)}$$ $$\vdots$$ $$X_{(N \times D)} \cdot W_{(D \times K)} = Y_{(N \times K)}$$

Since the loss function outputs a real number, and matrix calculus tells us that the derivative of a real-valued function with respect to a vector or matrix is simply the collection of partial derivatives with respect to each component, if W is a row vector with D components then:
$$\frac{\partial L}{\partial W}=\bigg(\frac{\partial L}{\partial W_1}, \frac{\partial L}{\partial W_2}, ..., \frac{\partial L}{\partial W_D}\bigg)$$

If W is a $D \times K$ matrix, then:
$$\frac{\partial L}{\partial W}=\left(\begin{matrix} \frac{\partial L}{\partial W_{11}} & \frac{\partial L}{\partial W_{12}} & ... &\frac{\partial L}{\partial W_{1K}} \\ \frac{\partial L}{\partial W_{21}} & \frac{\partial L}{\partial W_{22}} & ... &\frac{\partial L}{\partial W_{2K}} \\ \vdots & \vdots & & \vdots \\ \frac{\partial L}{\partial W_{D1}} & \frac{\partial L}{\partial W_{D2}} & ... &\frac{\partial L}{\partial W_{DK}} \end{matrix}\right)$$

It can also be written as a row of column vectors:
$$\frac{\partial L}{\partial W}=\bigg(\frac{\partial L}{\partial W_{\cdot 1}}, \frac{\partial L}{\partial W_{\cdot 2}}, ..., \frac{\partial L}{\partial W_{\cdot K}}\bigg)$$
where $\frac{\partial L}{\partial W_{\cdot j}}$ denotes the j-th column of the matrix above:
$$\frac{\partial L}{\partial W_{\cdot j}}=\bigg(\frac{\partial L}{\partial W_{1j}}, \frac{\partial L}{\partial W_{2j}}, ..., \frac{\partial L}{\partial W_{Dj}}\bigg)^T$$
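To make "the gradient has the same shape as W" concrete, here is a minimal numpy sketch (the toy function L(W) = sum(sin(W)) is purely an assumption for illustration) that forms dL/dW element by element with central differences and compares it to the analytic answer cos(W).

import numpy as np

D, K = 4, 3
W = np.random.randn(D, K)
L = lambda W: np.sin(W).sum()           # any scalar-valued function of W works

h = 1e-5
dW_num = np.zeros_like(W)
for d in range(D):
    for k in range(K):
        Wp, Wm = W.copy(), W.copy()
        Wp[d, k] += h
        Wm[d, k] -= h
        dW_num[d, k] = (L(Wp) - L(Wm)) / (2 * h)   # dL/dW[d, k]

print(dW_num.shape)                               # (4, 3), same shape as W
print(np.allclose(dW_num, np.cos(W), atol=1e-6))  # analytic gradient of sin is cos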

A practical example: the fully-connected layer

A fully-connected layer, also called an affine or linear layer, computes

$$s^T=f(x; W) = x^T \cdot W$$
where x is a single input sample treated as a column vector of length D (so $x^T$ is $1 \times D$), and W is a $D \times K$ matrix.
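A one-sample shape check of this formula (the toy sizes are assumptions):

import numpy as np

D, K = 4, 3
x = np.random.randn(D, 1)   # one input sample as a column vector
W = np.random.randn(D, K)
s_T = x.T.dot(W)            # s^T = x^T · W
print(s_T.shape)            # (1, 3): one row of K scores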

Applying the chain rule

By the chain rule,
$$\frac{\partial L}{\partial W}=\sum_{j=1}^K\frac{\partial L}{\partial s_j}\cdot\frac{\partial s_j}{\partial W}$$

L is a real-valued function and $s_j$ is a scalar, so $\partial L/\partial s_j$ is also a scalar, while $\partial s_j/\partial W$ is a matrix with the same shape as W.
Similarly, $$\frac{\partial L}{\partial x}=\sum_{j=1}^K\frac{\partial L}{\partial s_j}\cdot\frac{\partial s_j}{\partial x}$$

Computing dW

For the second factor $\partial s_j/\partial W$ in the chain-rule sum above: since
$$s_j=x^T \cdot W_{\cdot j}$$ $$\frac{\partial s_j}{\partial x}=W_{\cdot j}$$ $$\frac{\partial s_j}{\partial W_{\cdot j}}=x$$
Note that x is treated as a column vector here and $W_{\cdot j}$ denotes the j-th column of W, so:
$$\begin{aligned}\frac{\partial s_j}{\partial W} &=\bigg(\frac{\partial s_j}{\partial W_{\cdot 1}}, ..., \frac{\partial s_j}{\partial W_{\cdot j}}, ..., \frac{\partial s_j}{\partial W_{\cdot K}}\bigg) \\ & = \bigg(0, ..., \frac{\partial s_j}{\partial W_{\cdot j}}, ..., 0\bigg) \end{aligned}$$

Gradient of the single-sample loss

By the chain rule, $$\begin{aligned}\frac{\partial L}{\partial W} &= \sum_{j=1}^K\frac{\partial L}{\partial s_j}\cdot\frac{\partial s_j}{\partial W} \\ &= \sum_{j=1}^K\frac{\partial L}{\partial s_j} \cdot \bigg(0, ..., \frac{\partial s_j}{\partial W_{\cdot j}}, ..., 0\bigg)\\ &= \sum_{j=1}^K\frac{\partial L}{\partial s_j} \cdot \bigg(0, ..., x, ..., 0\bigg)\\ &= \bigg(\frac{\partial L}{\partial s_1} \cdot x, ...,\frac{\partial L}{\partial s_K} \cdot x\bigg) \\ &= x \cdot \bigg(\frac{\partial L}{\partial s_1}, ...,\frac{\partial L}{\partial s_K}\bigg) \\ &=x \cdot \bigg(\frac{\partial L}{\partial s}\bigg)^T \end{aligned}$$
Here $(\partial L/\partial s)^T$ is a row vector ($1 \times K$) with the same shape as $s^T$, and x itself is a column vector ($D \times 1$), so their product is an outer product: a $D \times K$ matrix with the same shape as W. The components $\partial L/\partial s_j$ of $(\partial L/\partial s)^T$ were derived earlier, so we now have the gradient of the single-sample loss with respect to W.

Note: the second-to-last step above uses the identity

$$\left(\begin{matrix} 1 \cdot \left(\begin{matrix} 3 \\ 4 \end{matrix}\right), & 2\cdot \left(\begin{matrix} 3 \\ 4 \end{matrix}\right) \end{matrix}\right) = \left(\begin{matrix} 3 \\ 4 \end{matrix}\right) \cdot (1 ,2)$$
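That identity is easy to confirm in numpy with the same toy numbers: stacking the scaled copies of the column side by side gives exactly the outer product.

import numpy as np

x = np.array([[3.0], [4.0]])            # column vector, shape (2, 1)
row = np.array([[1.0, 2.0]])            # row vector, shape (1, 2)

lhs = np.hstack([1 * x, 2 * x])         # the scaled columns placed side by side
rhs = x.dot(row)                        # outer product x · (1, 2)
print(np.array_equal(lhs, rhs))         # True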

Gradient of the average loss

The loss we actually minimize is the expected loss, i.e. the average loss. If the loss on the i-th sample is $L^{(i)}$, the average loss over N samples is
$$\bar L= \frac{1}{N}\sum_{i=1}^N L^{(i)}$$
The gradient of the average loss with respect to W is therefore $$\begin{aligned}dW &= \frac{1}{N}\sum_{i=1}^N \frac{\partial L^{(i)}}{\partial W} \\ &= \frac{1}{N}\sum_{i=1}^N x^{(i)} \cdot \bigg(\frac{\partial L^{(i)}}{\partial s^{(i)}_1}, ...,\frac{\partial L^{(i)}}{\partial s^{(i)}_K}\bigg) \\ &= \Big(x^{(1)}, ..., x^{(N)}\Big) \cdot \left(\begin{matrix}\frac{\partial L^{(1)}}{\partial s^{(1)}_1} & \cdots & \frac{\partial L^{(1)}}{\partial s^{(1)}_K} \\ \vdots & \ddots & \vdots \\ \frac{\partial L^{(N)}}{\partial s^{(N)}_1} & \cdots & \frac{\partial L^{(N)}}{\partial s^{(N)}_K}\end{matrix}\right) / N\\ &= X^T \cdot dS \end{aligned}$$
where the entries of the matrix dS are
$$(dS)_{ij} = \frac{1}{N}\frac{\partial L^{(i)}}{\partial s_j^{(i)}} $$
and $X^T$, the transpose of the input matrix X, has D rows and N columns.

Implementation

'''
- X: Input, numpy array, of shape (N, D)
- dS: Upstream derivative, of shape (N, K)
'''
dW = X.T.dot(dS)   # (D, N) @ (N, K) -> (D, K), same shape as W
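A minimal numerical check of dW = X.T.dot(dS) (assuming, purely for the test, a toy quadratic loss L = sum(S**2)/(2N), whose upstream derivative is dS = S/N):

import numpy as np

N, D, K = 5, 4, 3
X = np.random.randn(N, D)
W = np.random.randn(D, K)

def avg_loss(W):
    S = X.dot(W)
    return (S ** 2).sum() / (2 * N)

S = X.dot(W)
dS = S / N                      # dL/dS for this toy loss
dW = X.T.dot(dS)                # the formula derived above

# central-difference numerical gradient, element by element
h = 1e-5
dW_num = np.zeros_like(W)
for idx in np.ndindex(W.shape):
    Wp, Wm = W.copy(), W.copy()
    Wp[idx] += h
    Wm[idx] -= h
    dW_num[idx] = (avg_loss(Wp) - avg_loss(Wm)) / (2 * h)

print(np.allclose(dW, dW_num, atol=1e-6))   # True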

Computing dX

Analogously to dW, $$\begin{aligned}\frac{\partial L}{\partial x} &= \sum_{j=1}^K\frac{\partial L}{\partial s_j}\cdot\frac{\partial s_j}{\partial x} \\ &= \sum_{j=1}^K\frac{\partial L}{\partial s_j} \cdot W_{\cdot j} \\ &= \sum_{j=1}^K W_{\cdot j} \cdot \frac{\partial L}{\partial s_j} \\ &= \bigg(W_{\cdot 1}, ...,W_{\cdot K}\bigg) \cdot \left(\begin{matrix} \frac{\partial L}{\partial s_1} \\ \vdots \\ \frac{\partial L}{\partial s_K}\end{matrix}\right)\\ &= W \cdot \frac{\partial L}{\partial s} \end{aligned}$$

The gradient of the average loss with respect to X is $$\begin{aligned}dX &= \frac{1}{N}\sum_{i=1}^N \frac{\partial L^{(i)}}{\partial X} \\ &= \frac{1}{N}\sum_{i=1}^N \left(\begin{matrix} \frac{\partial L^{(i)}}{\partial X_{1 \cdot}} \\ \vdots \\ \frac{\partial L^{(i)}}{\partial X_{N \cdot}}\end{matrix}\right) = \frac{1}{N}\sum_{i=1}^N \left(\begin{matrix} \vdots \\ \frac{\partial L^{(i)}}{\partial X_{i \cdot}} \\ \vdots \end{matrix}\right) = \frac{1}{N}\left(\begin{matrix} \frac{\partial L^{(1)}}{\partial X_{1 \cdot}} \\ \vdots \\ \frac{\partial L^{(N)}}{\partial X_{N \cdot}}\end{matrix}\right)\\ &= \frac{1}{N}\left(\begin{matrix} \big(\frac{\partial L^{(1)}}{\partial x^{(1)}}\big)^T \\ \vdots \\ \big(\frac{\partial L^{(N)}}{\partial x^{(N)}}\big)^T\end{matrix}\right) = \frac{1}{N}\left(\begin{matrix} \big(W \cdot \frac{\partial L^{(1)}}{\partial s^{(1)}}\big)^T \\ \vdots \\ \big(W \cdot \frac{\partial L^{(N)}}{\partial s^{(N)}}\big)^T\end{matrix}\right) = \frac{1}{N}\left(\begin{matrix} \big(\frac{\partial L^{(1)}}{\partial s^{(1)}}\big)^T \\ \vdots \\ \big(\frac{\partial L^{(N)}}{\partial s^{(N)}}\big)^T\end{matrix}\right) \cdot W^T\\ &= \frac{1}{N} \left(\begin{matrix}\frac{\partial L^{(1)}}{\partial s^{(1)}_1} & \cdots & \frac{\partial L^{(1)}}{\partial s^{(1)}_K} \\ \vdots & \ddots & \vdots \\ \frac{\partial L^{(N)}}{\partial s^{(N)}_1} & \cdots & \frac{\partial L^{(N)}}{\partial s^{(N)}_K}\end{matrix}\right) \cdot W^T \\ &= dS \cdot W^T \end{aligned}$$

Implementation

'''
- W: Weight, numpy array, of shape (D, K)
- dS: Upstream derivative, of shape (N, K)
'''
dX = dS.dot(W.T)
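A quick shape sanity check (the toy sizes are assumptions): (N, K) times (K, D) gives (N, D), so dX has the same shape as the input X, as required for passing the gradient to the previous layer.

import numpy as np

N, D, K = 5, 4, 3
W = np.random.randn(D, K)
dS = np.random.randn(N, K)
dX = dS.dot(W.T)
print(dX.shape)                         # (5, 4), matching X's shape (N, D)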

Computing db

If the model includes a bias, say b is a vector of length K, equal to the number of classes, then

$$s^T=f(x; W, b) = x^T \cdot W + b^T$$
This is equivalent to

$$s=f(x; W, b) = W \cdot x + b$$
The transposed form is used only so that the expression matches the data layout and is easier to picture: X is the input matrix and $x^T$ is one of its rows. Therefore, $$s_j= x^T \cdot W_{\cdot j}+ b_j$$ $$\frac{\partial s_j}{\partial b_j}=1$$ $$\begin{aligned}\frac{\partial s_j}{\partial b} &=\bigg(\frac{\partial s_j}{\partial b_1}, ..., \frac{\partial s_j}{\partial b_j}, ..., \frac{\partial s_j}{\partial b_K}\bigg)^T \\ & = \bigg(0, ..., \frac{\partial s_j}{\partial b_j}, ..., 0\bigg)^T \end{aligned}$$
By the chain rule, $$\begin{aligned}\frac{\partial L}{\partial b} &= \sum_{j=1}^K\frac{\partial L}{\partial s_j}\cdot\frac{\partial s_j}{\partial b} \\ &= \sum_{j=1}^K\frac{\partial L}{\partial s_j} \cdot \bigg(0, ..., \frac{\partial s_j}{\partial b_j}, ..., 0\bigg)^T\\ &= \sum_{j=1}^K\frac{\partial L}{\partial s_j} \cdot \bigg(0, ..., 1, ..., 0\bigg)^T \\ &= \bigg(\frac{\partial L}{\partial s_1}, ...,\frac{\partial L}{\partial s_K}\bigg)^T \\ &= \frac{\partial L}{\partial s} \end{aligned}$$

The gradient of the average loss with respect to b is $$\begin{aligned}db &= \frac{1}{N}\sum_{i=1}^N \frac{\partial L^{(i)}}{\partial b} \\ &= \sum_{i=1}^N \bigg(\frac{\partial L^{(i)}}{\partial s^{(i)}_1}, ...,\frac{\partial L^{(i)}}{\partial s^{(i)}_K}\bigg)^T / N\\ &= \sum_{i=1}^N \Big( (dS)_{i1}, ...,(dS)_{iK} \Big)^T \\ &= \sum_{i=1}^N (dS)_{i \cdot}^T \end{aligned}$$
So the gradient of the average loss with respect to the bias is simply

db = dS.sum(axis=0)   # sum dS over the batch dimension; result has shape (K,)

The transposes in the formula indicate that db is a column vector, while summing the rows of dS yields a row vector, hence the transpose. In code, however, the result is a one-dimensional array, for which transposing makes no difference, so no transpose is needed.
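The no-op transpose claim for one-dimensional arrays can be seen directly (the toy values are only for illustration):

import numpy as np

dS = np.arange(6, dtype=float).reshape(3, 2)
db = dS.sum(axis=0)                # one entry per class/column
print(db.shape)                    # (2,)
print(np.array_equal(db, db.T))    # True: .T is a no-op on a 1-D array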

Summary

The example above essentially works out the gradient computation for a single fully-connected layer; leaving aside the regularization term, everything reduces to
$$\begin{aligned}dW &= X^T \cdot dS \end{aligned}$$ $$\begin{aligned}dX &= dS \cdot W^T \end{aligned}$$ $$\begin{aligned}db &= \sum_{i=1}^N (dS)_{i \cdot}^T \end{aligned}$$
where the elements of the matrix dS are the gradients of the average loss with respect to this fully-connected layer's outputs. With these results we can code the forward and backward passes of the fully-connected layer:

# This code borrows the format of the cs231n assignments, but it has been changed
# quite a bit to match the derivation above, so don't paste it back into the assignment ^_^
def affine_forward(X, W, b):
    """
    Computes the forward pass for an affine (fully-connected) layer.
    Inputs:
    - X: A numpy array containing input data, of shape (N, D)
    - W: A numpy array of weights, of shape (D, K)
    - b: A numpy array of biases, of shape (K,)
    Returns a tuple of:
    - S: output, of shape (N, K)
    - cache: (X, W, b)
    """
    S = X.dot(W) + b
    cache = (X, W, b)
    return S, cache

def affine_backward(dS, cache):
    """
    Computes the backward pass for an affine layer.
    Inputs:
    - dS: Upstream derivative, of shape (N, K)
    - cache: Tuple of:
      - X: Input data, of shape (N, D)
      - W: Weights, of shape (D, K)
      - b: Biases, of shape (K,)
    Returns a tuple of:
    - dX: Gradient with respect to X, of shape (N, D)
    - dW: Gradient with respect to W, of shape (D, K)
    - db: Gradient with respect to b, of shape (K,)
    """
    X, W, b = cache
    dX = dS.dot(W.T)       # (N, K) @ (K, D) -> (N, D)
    dW = X.T.dot(dS)       # (D, N) @ (N, K) -> (D, K)
    db = dS.sum(axis=0)    # sum over the batch; shape (K,)
    return dX, dW, db
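A short usage sketch with a numerical gradient check (the toy scalar "loss" L = sum(S * G) for a fixed random G is an assumption made only for the test; its upstream derivative is simply dS = G). All three comparisons should print True.

import numpy as np

np.random.seed(0)
N, D, K = 4, 5, 3
X = np.random.randn(N, D)
W = np.random.randn(D, K)
b = np.random.randn(K)
G = np.random.randn(N, K)               # plays the role of the upstream dS

S, cache = affine_forward(X, W, b)
dX, dW, db = affine_backward(G, cache)

def loss(X, W, b):
    # toy scalar loss whose gradient w.r.t. S is exactly G
    return (affine_forward(X, W, b)[0] * G).sum()

def num_grad(f, A, h=1e-5):
    # central-difference gradient of f() w.r.t. the array A (perturbed in place)
    g = np.zeros_like(A)
    for idx in np.ndindex(A.shape):
        old = A[idx]
        A[idx] = old + h; fp = f()
        A[idx] = old - h; fm = f()
        A[idx] = old
        g[idx] = (fp - fm) / (2 * h)
    return g

print(np.allclose(dX, num_grad(lambda: loss(X, W, b), X)))   # True
print(np.allclose(dW, num_grad(lambda: loss(X, W, b), W)))   # True
print(np.allclose(db, num_grad(lambda: loss(X, W, b), b)))   # True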