
Machine Learning Diary (2): The Linear Regression Model

Mathematical Principles

  Given a dataset $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$, where $x_i = (x_{i1}, x_{i2}, \ldots, x_{id})$ and $y_i \in \mathbb{R}$, linear regression tries to learn a linear model that predicts the output for a given input as accurately as possible. For simplicity, the discussion below considers the univariate case $d = 1$, so each $x_i$ is a single real number.

  Linear regression tries to learn

$$
f(x_i) = w x_i + b, \quad \text{such that } f(x_i) \simeq y_i
$$

  How should we determine $w$ and $b$? The key lies in how to measure the difference between $f(x)$ and $y$. As noted in the first entry, the mean squared error is the most commonly used performance measure for regression tasks, so we can try to minimize it:

$$
(w^*, b^*) = \arg\min_{(w,b)} \sum_{i=1}^{m} (f(x_i) - y_i)^2
$$
$$
= \arg\min_{(w,b)} \sum_{i=1}^{m} (y_i - w x_i - b)^2
$$

  The mean squared error has a very intuitive geometric meaning: it corresponds to the familiar Euclidean distance. The method of solving for the model by minimizing the mean squared error is called the least squares method. In linear regression, the least squares method tries to find a line that minimizes the sum of the Euclidean distances from all samples to that line.

  The process of solving for the $w$ and $b$ that minimize $E(w,b) = \sum_{i=1}^{m} (y_i - w x_i - b)^2$ is called the least-squares parameter estimation of the linear regression model. Taking the partial derivatives of $E(w,b)$ with respect to $w$ and $b$, we obtain

$$
\frac{\partial E(w,b)}{\partial w}=2\left(w \sum_{i=1}^{m} x_i^2-\sum_{i=1}^{m} (y_i - b)x_i\right)
$$

$$
\frac{\partial E(w,b)}{\partial b}=2\left(mb-\sum_{i=1}^{m}(y_i - wx_i)\right)
$$
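
  Setting both partial derivatives to zero yields the closed-form least-squares solution, which follows directly from the two equations above:

$$
w = \frac{\sum_{i=1}^{m} y_i (x_i - \bar{x})}{\sum_{i=1}^{m} x_i^2 - \frac{1}{m}\left(\sum_{i=1}^{m} x_i\right)^2}
$$

$$
b = \frac{1}{m} \sum_{i=1}^{m} (y_i - w x_i)
$$

where $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$ is the mean of the inputs. Gradient descent, used in the implementation below, reaches the same minimum iteratively.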

Code Implementation

  I used an AI tool to generate a linear-regression dataset in CSV format and read it with Python's pandas library. The complete code is as follows:

```python
import numpy as np
import pandas as pd

# 1. Read the dataset
data = pd.read_csv("data.csv")
X = data["x"].values
y = data["y"].values
m = len(X)

# 2. Initialize parameters
w = 0.0
b = 0.0

# 3. Hyperparameters
alpha = 0.0001
epochs = 100000

# 4. Training loop (batch gradient descent)
for epoch in range(epochs):
    y_pred = w * X + b

    # Gradients of the cost (1/(2m)) * sum((y_pred - y)**2)
    dw = (1 / m) * np.sum((y_pred - y) * X)
    db = (1 / m) * np.sum(y_pred - y)

    w = w - alpha * dw
    b = b - alpha * db

    if epoch % 1000 == 0:
        cost = (1 / (2 * m)) * np.sum((y_pred - y) ** 2)
        print(f"epoch={epoch}, cost={cost:.6f}, w={w:.6f}, b={b:.6f}")

print("\nTraining finished:")
print(f"w = {w:.6f}")
print(f"b = {b:.6f}")
```

  In this code, `(1/m) * np.sum((y_pred - y) * X)` is exactly the partial derivative of the cost $\frac{1}{2m} E(w,b)$ with respect to $w$ from the derivation above.
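
  This correspondence can be verified numerically: the analytic gradients should match central finite differences of the cost $J(w,b) = \frac{1}{2m}\sum_{i}(w x_i + b - y_i)^2$. A minimal sketch on synthetic data (the arrays and the test point $w = 1.0$, $b = 0.5$ are illustrative, not taken from the post's dataset):

```python
import numpy as np

# Synthetic data: a noisy line (illustrative values)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
y = 3.5 * X + 6.0 + rng.normal(0, 0.5, size=50)
m = len(X)

def cost(w, b):
    """J(w, b) = (1/(2m)) * sum((w*x + b - y)^2)"""
    return (1 / (2 * m)) * np.sum((w * X + b - y) ** 2)

w, b = 1.0, 0.5  # arbitrary test point

# Analytic gradients, as used in the training loop
y_pred = w * X + b
dw = (1 / m) * np.sum((y_pred - y) * X)
db = (1 / m) * np.sum(y_pred - y)

# Central finite differences of the cost
eps = 1e-6
dw_num = (cost(w + eps, b) - cost(w - eps, b)) / (2 * eps)
db_num = (cost(w, b + eps) - cost(w, b - eps)) / (2 * eps)

print(abs(dw - dw_num) < 1e-4, abs(db - db_num) < 1e-4)  # True True
```

  Because the cost is quadratic in $w$ and $b$, the central difference is exact up to floating-point rounding, so the two pairs agree to many digits.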

  The final training output is shown below; the learned parameters agree well with the target values.

```
······
epoch=69000, cost=0.122594, w=3.572612, b=5.953159
epoch=70000, cost=0.122571, w=3.572573, b=5.955743
epoch=71000, cost=0.122552, w=3.572538, b=5.958144
epoch=72000, cost=0.122536, w=3.572504, b=5.960373
epoch=73000, cost=0.122521, w=3.572473, b=5.962444
epoch=74000, cost=0.122509, w=3.572445, b=5.964367
epoch=75000, cost=0.122498, w=3.572418, b=5.966153
epoch=76000, cost=0.122489, w=3.572393, b=5.967813
epoch=77000, cost=0.122481, w=3.572370, b=5.969354
epoch=78000, cost=0.122474, w=3.572349, b=5.970785
epoch=79000, cost=0.122468, w=3.572329, b=5.972114
epoch=80000, cost=0.122463, w=3.572311, b=5.973349
epoch=81000, cost=0.122459, w=3.572293, b=5.974496
epoch=82000, cost=0.122455, w=3.572278, b=5.975561
epoch=83000, cost=0.122452, w=3.572263, b=5.976550
epoch=84000, cost=0.122449, w=3.572249, b=5.977469
epoch=85000, cost=0.122447, w=3.572236, b=5.978322
epoch=86000, cost=0.122445, w=3.572225, b=5.979115
epoch=87000, cost=0.122443, w=3.572214, b=5.979851
epoch=88000, cost=0.122441, w=3.572203, b=5.980535
epoch=89000, cost=0.122440, w=3.572194, b=5.981170
epoch=90000, cost=0.122439, w=3.572185, b=5.981760
epoch=91000, cost=0.122438, w=3.572177, b=5.982308
epoch=92000, cost=0.122437, w=3.572169, b=5.982817
epoch=93000, cost=0.122436, w=3.572162, b=5.983289
epoch=94000, cost=0.122435, w=3.572156, b=5.983728
epoch=95000, cost=0.122435, w=3.572150, b=5.984136
epoch=96000, cost=0.122434, w=3.572144, b=5.984515
epoch=97000, cost=0.122434, w=3.572139, b=5.984867
epoch=98000, cost=0.122434, w=3.572134, b=5.985193
epoch=99000, cost=0.122433, w=3.572129, b=5.985497

Training finished:
w = 3.572125
b = 5.985778
```
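
  For completeness, a dataset like the `data.csv` used above can be generated with a short script. The true parameters below ($w = 3.5$, $b = 6$) and the noise level are illustrative guesses suggested by the learned values, not the exact settings behind the original dataset:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)                   # inputs
y = 3.5 * x + 6.0 + rng.normal(0, 0.5, size=100)   # noisy linear targets

# Write the two-column CSV that the training script reads
pd.DataFrame({"x": x, "y": y}).to_csv("data.csv", index=False)
```

  Running the training script on this file should recover estimates close to $w = 3.5$ and $b = 6$.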