In numerical linear algebra, the biconjugate gradient stabilized method (BiCGSTAB), proposed by the Dutch mathematician H. A. van der Vorst, is an iterative method for the numerical solution of nonsymmetric linear systems. It is a variant of the biconjugate gradient method (BiCG), with faster and smoother convergence than BiCG itself and than other variants such as the conjugate gradient squared method (CGS). It is a Krylov subspace method.
Algorithmic steps
Unpreconditioned BiCGSTAB

To solve a linear system $Ax = b$, BiCGSTAB starts with an initial guess $x_0$ and iterates as follows:

1. $r_0 = b - Ax_0$
2. Choose an arbitrary vector $\hat{r}_0$ such that $(\hat{r}_0, r_0) \neq 0$, e.g., $\hat{r}_0 = r_0$
3. $\rho_0 = \alpha = \omega_0 = 1$
4. $v_0 = p_0 = 0$
5. For $i = 1, 2, 3, \ldots$
   1. $\rho_i = (\hat{r}_0, r_{i-1})$
   2. $\beta = (\rho_i / \rho_{i-1})(\alpha / \omega_{i-1})$
   3. $p_i = r_{i-1} + \beta (p_{i-1} - \omega_{i-1} v_{i-1})$
   4. $v_i = A p_i$
   5. $\alpha = \rho_i / (\hat{r}_0, v_i)$
   6. $s = r_{i-1} - \alpha v_i$
   7. $t = A s$
   8. $\omega_i = (t, s) / (t, t)$
   9. $x_i = x_{i-1} + \alpha p_i + \omega_i s$
   10. If $x_i$ is accurate enough, quit
   11. $r_i = s - \omega_i t$
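The steps above can be sketched in plain Python for a small dense system. This is a minimal illustration, not a production solver; the function name `bicgstab`, the helpers, and the example matrix are illustrative choices.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=100):
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # r0 = b - A x0
    r_hat = list(r)                                    # r̂0 = r0, so (r̂0, r0) != 0
    rho = alpha = omega = 1.0                          # ρ0 = α = ω0 = 1
    v = [0.0] * n                                      # v0 = 0
    p = [0.0] * n                                      # p0 = 0
    for _ in range(maxiter):
        rho_new = dot(r_hat, r)                        # ρi = (r̂0, r_{i-1})
        beta = (rho_new / rho) * (alpha / omega)       # β = (ρi/ρ_{i-1})(α/ω_{i-1})
        rho = rho_new
        p = [ri + beta * (pi - omega * vi)             # pi = r_{i-1} + β(p_{i-1} - ω v_{i-1})
             for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)                               # vi = A pi
        alpha = rho / dot(r_hat, v)                    # α = ρi / (r̂0, vi)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]  # s = r_{i-1} - α vi
        if dot(s, s) ** 0.5 < tol:                     # already converged at the half step
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)                               # t = A s
        omega = dot(t, s) / dot(t, t)                  # ωi = (t, s) / (t, t)
        x = [xi + alpha * pi + omega * si              # xi = x_{i-1} + α pi + ωi s
             for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]  # ri = s - ωi t
        if dot(r, r) ** 0.5 < tol:                     # quit once ri is small enough
            break
    return x

# Small nonsymmetric example with known solution x = [1, 2, -1]
A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 11.0, -1.0]
x = bicgstab(A, b)
```

Note that only matrix-vector products with $A$ are needed; unlike BiCG, no products with $A^{\mathrm{T}}$ appear.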
Preconditioned BiCGSTAB

Preconditioners are usually used to accelerate the convergence of iterative methods. To solve a linear system $Ax = b$ with a preconditioner $K = K_1 K_2 \approx A$, preconditioned BiCGSTAB starts with an initial guess $x_0$ and iterates as follows:

1. $r_0 = b - Ax_0$
2. Choose an arbitrary vector $\hat{r}_0$ such that $(\hat{r}_0, r_0) \neq 0$, e.g., $\hat{r}_0 = r_0$
3. $\rho_0 = \alpha = \omega_0 = 1$
4. $v_0 = p_0 = 0$
5. For $i = 1, 2, 3, \ldots$
   1. $\rho_i = (\hat{r}_0, r_{i-1})$
   2. $\beta = (\rho_i / \rho_{i-1})(\alpha / \omega_{i-1})$
   3. $p_i = r_{i-1} + \beta (p_{i-1} - \omega_{i-1} v_{i-1})$
   4. $y = K^{-1} p_i$
   5. $v_i = A y$
   6. $\alpha = \rho_i / (\hat{r}_0, v_i)$
   7. $s = r_{i-1} - \alpha v_i$
   8. $z = K^{-1} s$
   9. $t = A z$
   10. $\omega_i = (K_1^{-1} t, K_1^{-1} s) / (K_1^{-1} t, K_1^{-1} t)$
   11. $x_i = x_{i-1} + \alpha y + \omega_i z$
   12. If $x_i$ is accurate enough, quit
   13. $r_i = s - \omega_i t$
This formulation is equivalent to applying unpreconditioned BiCGSTAB to the explicitly preconditioned system $\tilde{A}\tilde{x} = \tilde{b}$, where $\tilde{A} = K_1^{-1} A K_2^{-1}$, $\tilde{x} = K_2 x$ and $\tilde{b} = K_1^{-1} b$. In other words, both left and right preconditioning can be implemented through this formulation.
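The preconditioned iteration can be sketched by instantiating the steps above with a concrete (and deliberately simple) choice of preconditioner. The Jacobi preconditioner $K = \operatorname{diag}(A)$ with $K_1 = K$, $K_2 = I$ used below is an illustrative assumption, not part of the method itself.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pre_bicgstab(A, b, k_inv, tol=1e-10, maxiter=100):
    # k_inv applies K^{-1}; here we take K1 = K and K2 = I,
    # so K1^{-1} is applied with k_inv as well.
    n = len(b)
    x = [0.0] * n
    r = list(b)                                        # r0 = b - A x0 with x0 = 0
    r_hat = list(r)                                    # r̂0 = r0
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(r_hat, r)                        # ρi = (r̂0, r_{i-1})
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        y = k_inv(p)                                   # y = K^{-1} pi
        v = matvec(A, y)                               # vi = A y
        alpha = rho / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]  # s = r_{i-1} - α vi
        z = k_inv(s)                                   # z = K^{-1} s
        t = matvec(A, z)                               # t = A z
        kt, ks = k_inv(t), k_inv(s)                    # K1^{-1} t and K1^{-1} s
        omega = dot(kt, ks) / dot(kt, kt)              # ωi in the preconditioned inner product
        x = [xi + alpha * yi + omega * zi for xi, yi, zi in zip(x, y, z)]
        r = [si - omega * ti for si, ti in zip(s, t)]  # ri = s - ωi t
        if dot(r, r) ** 0.5 < tol:
            break
    return x

# Jacobi preconditioner K = diag(A) on a small nonsymmetric system
A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 11.0, -1.0]
jacobi = lambda u: [ui / A[i][i] for i, ui in enumerate(u)]
x = pre_bicgstab(A, b, jacobi)
```

In practice $K^{-1}$ is never formed explicitly; `k_inv` would instead solve a system with $K$ (e.g., triangular solves for an incomplete LU factorization).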
BiCG in polynomial form
In BiCG, the search directions $p_i$ and $\hat{p}_i$ and the residuals $r_i$ and $\hat{r}_i$ are updated by the recurrences

$$p_i = r_{i-1} + \beta_i p_{i-1}\text{,}$$
$$\hat{p}_i = \hat{r}_{i-1} + \beta_i \hat{p}_{i-1}\text{,}$$
$$r_i = r_{i-1} - \alpha_i A p_i\text{,}$$
$$\hat{r}_i = \hat{r}_{i-1} - \alpha_i A^{\mathrm{T}} \hat{p}_i\text{.}$$

The constants $\alpha_i$ and $\beta_i$ are chosen as

$$\alpha_i = \rho_i / (\hat{p}_i, A p_i)\text{,}$$
$$\beta_i = \rho_i / \rho_{i-1}\text{,}$$

where $\rho_i = (\hat{r}_{i-1}, r_{i-1})$, so that the residuals and the search directions satisfy biorthogonality and biconjugacy, respectively; that is, for $i \neq j$,

$$(\hat{r}_i, r_j) = 0\text{,}$$
$$(\hat{p}_i, A p_j) = 0\text{.}$$
It can be shown that

$$r_i = P_i(A) r_0\text{,}$$
$$\hat{r}_i = P_i(A^{\mathrm{T}}) \hat{r}_0\text{,}$$
$$p_{i+1} = T_i(A) r_0\text{,}$$
$$\hat{p}_{i+1} = T_i(A^{\mathrm{T}}) \hat{r}_0\text{,}$$

where $P_i(A)$ and $T_i(A)$ are $i$-th degree polynomials in $A$. These polynomials satisfy the recurrences

$$P_i(A) = P_{i-1}(A) - \alpha_i A T_{i-1}(A)\text{,}$$
$$T_i(A) = P_i(A) + \beta_{i+1} T_{i-1}(A)\text{.}$$
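The biorthogonality property stated above can be checked numerically by running the BiCG recurrences directly and inspecting the cross inner products; the matrix and right-hand side below are arbitrary illustrative choices.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Plain BiCG following the recurrences above, keeping all residuals
A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 11.0, -1.0]
At = transpose(A)
r, r_hat = list(b), list(b)          # x0 = 0, r̂0 = r0
p = [0.0] * 3
p_hat = [0.0] * 3
rho = 1.0
res, res_hat = [list(r)], [list(r_hat)]
for i in range(2):
    rho_new = dot(r_hat, r)          # ρi = (r̂_{i-1}, r_{i-1})
    beta = rho_new / rho             # βi = ρi / ρ_{i-1} (β1 is irrelevant since p0 = 0)
    rho = rho_new
    p = [ri + beta * pi for ri, pi in zip(r, p)]            # pi = r_{i-1} + βi p_{i-1}
    p_hat = [ri + beta * pi for ri, pi in zip(r_hat, p_hat)]
    Ap, Atp_hat = matvec(A, p), matvec(At, p_hat)
    alpha = rho / dot(p_hat, Ap)     # αi = ρi / (p̂i, A pi)
    r = [ri - alpha * ai for ri, ai in zip(r, Ap)]          # ri = r_{i-1} - αi A pi
    r_hat = [ri - alpha * ai for ri, ai in zip(r_hat, Atp_hat)]
    res.append(list(r))
    res_hat.append(list(r_hat))

# Biorthogonality: (r̂i, rj) = 0 for i != j, up to rounding error
off_diag = max(abs(dot(res_hat[i], res[j]))
               for i in range(3) for j in range(3) if i != j)
```

Note that BiCG, unlike BiCGSTAB, needs products with $A^{\mathrm{T}}$ to advance the shadow sequence.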
Derivation of BiCGSTAB from BiCG
It is unnecessary to keep track of the residuals and search directions of BiCG explicitly. In other words, the BiCG iterations can be performed implicitly. In BiCGSTAB, one instead seeks recurrences for

$$\tilde{r}_i = Q_i(A) P_i(A) r_0\text{,}$$

where $Q_i(A) = (I - \omega_1 A)(I - \omega_2 A) \cdots (I - \omega_i A)$ with suitably chosen constants $\omega_j$. Replacing $r_i$ by $\tilde{r}_i$ is done in the hope that $Q_i(A)$ will enable $\tilde{r}_i$ to converge faster and more smoothly than $r_i$.
From the recurrences for $P_i(A)$ and $T_i(A)$ and the definition of $Q_i(A)$,

$$Q_i(A) P_i(A) r_0 = (I - \omega_i A) \bigl( Q_{i-1}(A) P_{i-1}(A) r_0 - \alpha_i A Q_{i-1}(A) T_{i-1}(A) r_0 \bigr)\text{.}$$

A recurrence for $Q_i(A) T_i(A) r_0$ is therefore also needed. It likewise follows from the BiCG recurrences:

$$Q_i(A) T_i(A) r_0 = Q_i(A) P_i(A) r_0 + \beta_{i+1} (I - \omega_i A) Q_{i-1}(A) T_{i-1}(A) r_0\text{.}$$
Analogously to $p_{i+1} = T_i(A) r_0$, BiCGSTAB defines

$$\tilde{p}_{i+1} = Q_i(A) T_i(A) r_0\text{.}$$
Written in vector form, the recurrences for $\tilde{p}_i$ and $\tilde{r}_i$ are

$$\tilde{p}_i = \tilde{r}_{i-1} + \beta_i (I - \omega_{i-1} A) \tilde{p}_{i-1}\text{,}$$
$$\tilde{r}_i = (I - \omega_i A)(\tilde{r}_{i-1} - \alpha_i A \tilde{p}_i)\text{.}$$
To derive a recurrence for $x_i$, define

$$s_i = \tilde{r}_{i-1} - \alpha_i A \tilde{p}_i\text{.}$$

The recurrence for $\tilde{r}_i$ can then be written as

$$\tilde{r}_i = \tilde{r}_{i-1} - \alpha_i A \tilde{p}_i - \omega_i A s_i\text{,}$$

which corresponds to

$$x_i = x_{i-1} + \alpha_i \tilde{p}_i + \omega_i s_i\text{.}$$
Determination of the BiCGSTAB constants
It remains to determine the BiCG constants $\alpha_i$ and $\beta_i$ and to choose a suitable $\omega_i$.

In BiCG, $\beta_i = \rho_i / \rho_{i-1}$ with

$$\rho_i = (\hat{r}_{i-1}, r_{i-1}) = \bigl( P_{i-1}(A^{\mathrm{T}}) \hat{r}_0, P_{i-1}(A) r_0 \bigr)\text{.}$$

Since BiCGSTAB does not explicitly keep track of $\hat{r}_i$ or $r_i$, $\rho_i$ cannot be computed from this formula directly. However, it can be related to the scalar

$$\tilde{\rho}_i = \bigl( Q_{i-1}(A^{\mathrm{T}}) \hat{r}_0, P_{i-1}(A) r_0 \bigr) = \bigl( \hat{r}_0, Q_{i-1}(A) P_{i-1}(A) r_0 \bigr) = (\hat{r}_0, \tilde{r}_{i-1})\text{.}$$
By biorthogonality, $r_{i-1} = P_{i-1}(A) r_0$ is orthogonal to $U_{i-2}(A^{\mathrm{T}}) \hat{r}_0$, where $U_{i-2}(A^{\mathrm{T}})$ is any polynomial of degree $i-2$ in $A^{\mathrm{T}}$. Hence, only the highest-order terms of $P_{i-1}(A^{\mathrm{T}})$ and $Q_{i-1}(A^{\mathrm{T}})$ matter in the dot products $\bigl( P_{i-1}(A^{\mathrm{T}}) \hat{r}_0, P_{i-1}(A) r_0 \bigr)$ and $\bigl( Q_{i-1}(A^{\mathrm{T}}) \hat{r}_0, P_{i-1}(A) r_0 \bigr)$. The leading coefficients of $P_{i-1}(A^{\mathrm{T}})$ and $Q_{i-1}(A^{\mathrm{T}})$ are $(-1)^{i-1} \alpha_1 \alpha_2 \cdots \alpha_{i-1}$ and $(-1)^{i-1} \omega_1 \omega_2 \cdots \omega_{i-1}$, respectively. It follows that

$$\rho_i = (\alpha_1 / \omega_1)(\alpha_2 / \omega_2) \cdots (\alpha_{i-1} / \omega_{i-1}) \tilde{\rho}_i\text{,}$$

and thus

$$\beta_i = \rho_i / \rho_{i-1} = (\tilde{\rho}_i / \tilde{\rho}_{i-1})(\alpha_{i-1} / \omega_{i-1})\text{.}$$
A simple formula for $\alpha_i$ can be derived similarly. In BiCG,

$$\alpha_i = \rho_i / (\hat{p}_i, A p_i) = \bigl( P_{i-1}(A^{\mathrm{T}}) \hat{r}_0, P_{i-1}(A) r_0 \bigr) \big/ \bigl( T_{i-1}(A^{\mathrm{T}}) \hat{r}_0, A T_{i-1}(A) r_0 \bigr)\text{.}$$

As in the case above, only the highest-order terms of $P_{i-1}(A^{\mathrm{T}})$ and $T_{i-1}(A^{\mathrm{T}})$ matter in the dot products, thanks to biorthogonality and biconjugacy. The leading coefficients of $P_{i-1}(A^{\mathrm{T}})$ and $T_{i-1}(A^{\mathrm{T}})$ happen to be identical, so they can be simultaneously replaced by $Q_{i-1}(A^{\mathrm{T}})$ in the formula, giving

$$\alpha_i = \bigl( Q_{i-1}(A^{\mathrm{T}}) \hat{r}_0, P_{i-1}(A) r_0 \bigr) \big/ \bigl( Q_{i-1}(A^{\mathrm{T}}) \hat{r}_0, A T_{i-1}(A) r_0 \bigr) = \tilde{\rho}_i \big/ \bigl( \hat{r}_0, A Q_{i-1}(A) T_{i-1}(A) r_0 \bigr) = \tilde{\rho}_i / (\hat{r}_0, A \tilde{p}_i)\text{.}$$
Finally, BiCGSTAB selects $\omega_i$ to minimize the 2-norm of $\tilde{r}_i = (I - \omega_i A) s_i$ as a function of $\omega_i$. The minimum is attained when

$$\bigl( (I - \omega_i A) s_i, A s_i \bigr) = 0\text{,}$$

giving the optimal value

$$\omega_i = (A s_i, s_i) / (A s_i, A s_i)\text{.}$$
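This choice of $\omega_i$ is a one-dimensional least-squares problem: minimize $\| s_i - \omega A s_i \|_2$ over the scalar $\omega$. The optimality can be checked on any fixed pair of vectors; the vectors below are arbitrary stand-ins for $s_i$ and $A s_i$.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Arbitrary stand-ins for s_i and A s_i
s = [0.3, -1.2, 2.0]
As = [1.1, 0.4, -0.7]

omega = dot(As, s) / dot(As, As)     # ωi = (A si, si) / (A si, A si)

def resid_norm(w):
    # ||s - w * As||_2, the norm BiCGSTAB minimizes over w
    d = [si - w * ti for si, ti in zip(s, As)]
    return dot(d, d) ** 0.5

# The residual norm at ω is no larger than at nearby values
best = resid_norm(omega)
```

Because $\| s - \omega A s \|_2^2$ is a convex quadratic in $\omega$, setting its derivative to zero yields exactly the orthogonality condition $\bigl( (I - \omega A) s, A s \bigr) = 0$ stated above.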