圖模式 (graphical model)

In probability theory, statistics, and machine learning, a graphical model (GM) uses graph-theoretic methods to represent the dependence structure among a set of random variables: each node of the graph is a random variable, and missing edges between nodes represent conditional independencies.

Two common types of GMs correspond to graphs with directed and undirected edges. If the network structure of the model is a directed acyclic graph (DAG), the GM represents a factorization of the joint probability distribution of all the random variables. More precisely, if the random variables are

X1, ..., Xn,

then the joint probability

P(X1, ..., Xn),

is equal to the product of the conditional probabilities

P(Xi | parents of Xi) for i = 1,...,n.

In other words, the joint distribution factors into a product of conditional distributions. Any two nodes that are not connected by an arrow are conditionally independent given the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph.
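To make the factorization concrete, the following minimal Python sketch evaluates the joint probability of a hypothetical three-variable network Rain → WetGrass ← Sprinkler as the product of parent-conditional probabilities; the variable names and probability tables are invented purely for illustration.

```python
# Minimal sketch of the DAG factorization P(X1, ..., Xn) = product over i of
# P(Xi | parents of Xi). The network (Rain -> WetGrass <- Sprinkler) and its
# probability tables are hypothetical values chosen only to show the computation.

# Parents of each node in the DAG.
parents = {"Rain": (), "Sprinkler": (), "WetGrass": ("Rain", "Sprinkler")}

# Conditional probability tables: P(node = True | parent assignment).
cpt = {
    "Rain":      {(): 0.2},
    "Sprinkler": {(): 0.1},
    "WetGrass":  {(True, True): 0.99, (True, False): 0.9,
                  (False, True): 0.9,  (False, False): 0.0},
}

def node_prob(node, value, assignment):
    """P(node = value | values of the node's parents in `assignment`)."""
    parent_values = tuple(assignment[p] for p in parents[node])
    p_true = cpt[node][parent_values]
    return p_true if value else 1.0 - p_true

def joint(assignment):
    """Joint probability as the product of one conditional per node."""
    result = 1.0
    for node, value in assignment.items():
        result *= node_prob(node, value, assignment)
    return result

print(joint({"Rain": True, "Sprinkler": False, "WetGrass": True}))
# 0.2 * 0.9 * 0.9 = 0.162
```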

This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models such as hidden Markov models and neural networks, as well as newer models such as variable-order Markov models, can be regarded as special cases of Bayesian networks.
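For instance, a hidden Markov model with hidden states Z1, ..., ZT and observations X1, ..., XT is the Bayesian network whose edges are Z1 → Z2 → ... → ZT and Zt → Xt, so its joint probability is the parent-conditional product P(Z1) · P(X1 | Z1) · P(Z2 | Z1) · P(X2 | Z2) · ... · P(ZT | ZT-1) · P(XT | ZT).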

Graphical models with undirected edges are generally called Markov random fields or Markov networks.

A third type of graphical model is a factor graph, which is an undirected bipartite graph connecting variables and factor nodes. Each factor represents a probability distribution over the variables it is connected to. In contrast to a Bayesian network, a factor may be connected to more than two nodes.
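Continuing the Python sketch above under the same caveat (the bipartite structure and the numbers below are made up), a factor graph can be stored as a list of factors, each attached to the variables it depends on; multiplying all factor values for a full assignment gives a score proportional to the joint, with normalization omitted here.

```python
# Minimal factor-graph sketch: a bipartite structure of variable nodes and factor
# nodes. Each factor is attached to the variables it depends on, and a factor may
# touch more than two variables. All numbers are hypothetical.

# Each factor: (tuple of connected variables, table mapping their values to a score).
factors = [
    (("A",), {(True,): 0.3, (False,): 0.7}),
    (("A", "B", "C"), {          # a factor connected to three variable nodes
        (True, True, True): 0.9,  (True, True, False): 0.1,
        (True, False, True): 0.4, (True, False, False): 0.6,
        (False, True, True): 0.2, (False, True, False): 0.8,
        (False, False, True): 0.5, (False, False, False): 0.5,
    }),
]

def unnormalized_joint(assignment):
    """Product of all factor values under a full assignment of the variables."""
    result = 1.0
    for variables, table in factors:
        result *= table[tuple(assignment[v] for v in variables)]
    return result

print(unnormalized_joint({"A": True, "B": False, "C": True}))
# 0.3 * 0.4 = 0.12
```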

Applications of graphical models include speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases.

A good reference for learning the basics of graphical models is Neapolitan's Learning Bayesian Networks (2004); another is Finn Verner Jensen's An Introduction to Bayesian Networks (1996).[1] A more advanced and statistically oriented book is Probabilistic Networks and Expert Systems (1999) by Cowell, Dawid, Lauritzen and Spiegelhalter.

A computational reasoning approach is provided in Judea Pearl's Probabilistic Reasoning in Intelligent Systems (1988),[2] in which the relationships between graphs and probabilities were formally introduced.

References

  1. ^ Finn Verner Jensen. An Introduction to Bayesian Networks. New York: Springer Verlag, 1996. ISBN 0-387-91502-8.
  2. ^ Judea Pearl. Probabilistic Reasoning in Intelligent Systems (Revised Second Printing). San Mateo, CA: Morgan Kaufmann, 1988.
