# Hessian matrix


The Hessian matrix of a real-valued function ${\displaystyle f}$ of several variables is the square matrix of its second partial derivatives:

${\displaystyle H(f)_{ij}(x)=D_{i}D_{j}f(x)}$

Written out in full:

${\displaystyle H(f)={\begin{bmatrix}{\frac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\frac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\frac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\\\{\frac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\frac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\\\\vdots &\vdots &\ddots &\vdots \\\\{\frac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\frac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\frac {\partial ^{2}f}{\partial x_{n}^{2}}}\end{bmatrix}}}$

(Some authors instead define the Hessian to be the determinant of the above matrix.[1])
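As a concrete illustration of the definition (not part of the original text), the entries of the Hessian can be approximated by central finite differences; the test function ${\displaystyle f(x,y)=x^{3}y+y^{2}}$, the evaluation point, and the step size below are arbitrary choices:

```python
# Approximate the Hessian of f(x, y) = x**3 * y + y**2 by central
# finite differences and compare with the analytic second partials.
# The test function, point (1, 2), and step size h are illustrative.

def f(x, y):
    return x**3 * y + y**2

def hessian_fd(f, x, y, h=1e-4):
    # Standard central-difference stencils for second derivatives.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[fxx, fxy], [fxy, fyy]]

H = hessian_fd(f, 1.0, 2.0)
# Analytic values at (1, 2): fxx = 6*x*y = 12, fxy = 3*x**2 = 3, fyy = 2.
```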

## The Hessian matrix and its symmetry

For a function of one variable, the Taylor expansion about a point ${\displaystyle x_{0}}$ (with ${\displaystyle \Delta x=x-x_{0}}$) is

${\displaystyle f(x)=f(x_{0})+f'(x_{0})\Delta x+{\frac {f''(x_{0})}{2!}}\Delta x^{2}+\cdots }$

For a function of two variables, expanding about ${\displaystyle x_{0}=(x_{10},x_{20})}$ gives

${\displaystyle f(x_{1},x_{2})=f(x_{10},x_{20})+f_{x_{1}}(x_{0})\Delta x_{1}+f_{x_{2}}(x_{0})\Delta x_{2}+{\frac {1}{2}}[f_{x_{1}x_{1}}(x_{0})\Delta x_{1}^{2}+2f_{x_{1}x_{2}}(x_{0})\Delta x_{1}\Delta x_{2}+f_{x_{2}x_{2}}(x_{0})\Delta x_{2}^{2}]+\cdots }$

In matrix form this reads

${\displaystyle f(x)=f(x_{0})+\nabla f(x_{0})^{T}\Delta x+{\frac {1}{2}}\Delta x^{T}H(x_{0})\Delta x+\cdots }$

${\displaystyle H(x_{0})={\begin{bmatrix}{\frac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\frac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}\\\\{\frac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}\end{bmatrix}}_{x_{0}}}$

If the second partial derivatives of ${\displaystyle f}$ are continuous, then by Schwarz's theorem the order of differentiation is interchangeable,

${\displaystyle {\frac {\partial ^{2}f}{\partial x_{1}\partial x_{2}}}={\frac {\partial ^{2}f}{\partial x_{2}\partial x_{1}}}}$

so the Hessian matrix is symmetric.
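This symmetry can be checked numerically on a hypothetical example, ${\displaystyle f(x,y)=x^{2}\sin y}$: differentiate each analytic first partial once more by a central difference (the function, point, and step size are arbitrary choices):

```python
import math

# Mixed partials of f(x, y) = x**2 * sin(y), computed by applying one
# central difference to each analytic first partial. Both should
# approximate 2*x*cos(y). The sample point and h are illustrative.

def fx(x, y):
    return 2 * x * math.sin(y)      # ∂f/∂x

def fy(x, y):
    return x**2 * math.cos(y)       # ∂f/∂y

h = 1e-6
x, y = 0.7, 1.2
fxy = (fx(x, y + h) - fx(x, y - h)) / (2 * h)  # ∂/∂y of ∂f/∂x
fyx = (fy(x + h, y) - fy(x - h, y)) / (2 * h)  # ∂/∂x of ∂f/∂y
# fxy and fyx agree up to discretization error.
```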

The same expansion holds for a function of ${\displaystyle n}$ variables:

${\displaystyle f(x)=f(x_{0})+\nabla f(x_{0})^{T}\Delta x+{\frac {1}{2}}\Delta x^{T}H(x_{0})\Delta x+\cdots }$

where the gradient is

${\displaystyle \nabla f(x_{0})={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}&{\frac {\partial f}{\partial x_{2}}}&\cdots &{\frac {\partial f}{\partial x_{n}}}\end{bmatrix}}_{x_{0}}^{T}}$

and the Hessian is

${\displaystyle H(x_{0})={\begin{bmatrix}{\frac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\frac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\frac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\\\{\frac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\frac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\\\\vdots &\vdots &\ddots &\vdots \\\\{\frac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\frac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\frac {\partial ^{2}f}{\partial x_{n}^{2}}}\end{bmatrix}}_{x_{0}}}$
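A quick numerical sanity check of the expansion (a sketch: the function ${\displaystyle f(x,y)=e^{x}\sin y}$, the expansion point, and the step are all arbitrarily chosen): the quadratic model should match ${\displaystyle f(x_{0}+\Delta x)}$ up to third-order error in ${\displaystyle \Delta x}$.

```python
import math

# Second-order Taylor model of f(x, y) = exp(x) * sin(y) around the
# illustrative point (0.3, 0.5), using the analytic gradient and Hessian.

def f(x, y):
    return math.exp(x) * math.sin(y)

x0, y0 = 0.3, 0.5
grad = (math.exp(x0) * math.sin(y0), math.exp(x0) * math.cos(y0))
H = [[math.exp(x0) * math.sin(y0),  math.exp(x0) * math.cos(y0)],
     [math.exp(x0) * math.cos(y0), -math.exp(x0) * math.sin(y0)]]

dx, dy = 1e-3, -2e-3
quad = (f(x0, y0)
        + grad[0] * dx + grad[1] * dy
        + 0.5 * (H[0][0] * dx * dx + 2 * H[0][1] * dx * dy
                 + H[1][1] * dy * dy))
err = abs(f(x0 + dx, y0 + dy) - quad)  # should be O(||(dx, dy)||**3)
```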

## Application for maps ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} }$

For a critical point ${\displaystyle (x_{0},y_{0})}$ of ${\displaystyle f}$, let ${\displaystyle H}$ denote the determinant of the Hessian evaluated there:

${\displaystyle H={\begin{vmatrix}{\frac {\partial ^{2}f}{\partial x^{2}}}&{\frac {\partial ^{2}f}{\partial x\,\partial y}}\\\\{\frac {\partial ^{2}f}{\partial y\,\partial x}}&{\frac {\partial ^{2}f}{\partial y^{2}}}\end{vmatrix}}={\frac {\partial ^{2}f}{\partial x^{2}}}{\frac {\partial ^{2}f}{\partial y^{2}}}-({\frac {\partial ^{2}f}{\partial y\,\partial x}})^{2}}$
• H > 0: if ${\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}}>0}$, then ${\displaystyle (x_{0},y_{0})}$ is a local minimum; if ${\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}}<0}$, then ${\displaystyle (x_{0},y_{0})}$ is a local maximum.
• H < 0: ${\displaystyle (x_{0},y_{0})}$ is a saddle point.
• H = 0: the second-order test is inconclusive; the nature of the critical point must be determined from higher-order terms of the Taylor expansion.
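The cases above can be illustrated on the (arbitrarily chosen) function ${\displaystyle f(x,y)=x^{3}-3x+y^{2}}$, whose critical points are ${\displaystyle (\pm 1,0)}$:

```python
# Second-derivative test for f(x, y) = x**3 - 3*x + y**2 (an
# illustrative choice); its critical points are (1, 0) and (-1, 0).

def second_derivative_test(x, y):
    fxx, fyy, fxy = 6 * x, 2.0, 0.0     # analytic second partials of f
    H = fxx * fyy - fxy ** 2            # determinant of the Hessian
    if H > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if H < 0:
        return "saddle point"
    return "inconclusive"

print(second_derivative_test(1, 0))     # prints "local minimum"
print(second_derivative_test(-1, 0))    # prints "saddle point"
```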

### Generalization to higher dimensions

• If H is positive definite, the critical point ${\displaystyle x_{0}}$ is a local minimum.
• If H is negative definite, the critical point ${\displaystyle x_{0}}$ is a local maximum.
• If H is degenerate (singular), higher-order derivatives are needed to decide.
• In the remaining cases (H indefinite), the critical point ${\displaystyle x_{0}}$ is not a local extremum.
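In higher dimensions the definiteness of ${\displaystyle H}$ can be tested, for instance, with Sylvester's criterion on the leading principal minors; the sketch below is a pure-Python illustration with arbitrarily chosen matrices:

```python
# Classify a critical point from its (symmetric) Hessian H using
# Sylvester's criterion on the leading principal minors.
# Pure Python; the matrices at the bottom are illustrative.

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def classify_hessian(H):
    n = len(H)
    minors = [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]
    if all(m > 0 for m in minors):
        return "local minimum"                  # H positive definite
    if all((-1) ** k * m > 0 for k, m in enumerate(minors, 1)):
        return "local maximum"                  # H negative definite
    if minors[-1] != 0:
        return "saddle point"                   # indefinite, det != 0
    return "inconclusive"                       # singular Hessian

print(classify_hessian([[2, 0], [0, 3]]))       # prints "local minimum"
print(classify_hessian([[-2, 0], [0, -3]]))     # prints "local maximum"
print(classify_hessian([[2, 0], [0, -3]]))      # prints "saddle point"
```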

## References

1. ^ Binmore, Ken; Davies, Joan. Calculus Concepts and Methods. Cambridge University Press, 2007: 190. ISBN 9780521775410. OCLC 717598615.
2. ^ 白清顺; 孙靖明; 梁迎春 (eds.). 机械优化设计 (6th ed.). Beijing: 机械工业出版社, 2017: 35–36. ISBN 978-7-111-56643-4.