
Probabilistic Graphical Model Section 1.3 - Part 3

The following are my reading notes on the book Probabilistic Graphical Models (Koller, 2009). I will add more content from time to time. This topic falls within the field of AI (artificial intelligence).

1.3 Overview and Roadmap

1.3.1 Overview of Chapters

Continuing from Part 2, which covered inference in Probabilistic Graphical Models, Part 3 focuses on the learning theme within AI. The chapter topics are ordered as follows:

  • In chapter 16, we review some of the fundamental concepts underlying the general task of learning models from data. We then present the spectrum of learning problems that we address in this part of the book. These problems vary along two main axes: the extent to which we are given prior knowledge specifying the model, and whether the data from which we learn contain complete observations of all of the relevant variables. In contrast to the inference task, where the same algorithms apply equally to Bayesian networks and Markov networks, the learning task is quite different for these two classes of models. We begin by studying the learning task for Bayesian networks.


  • In chapter 17, we focus on the most basic learning task: learning parameters for a Bayesian network with a given structure, from fully observable data. Although this setting may appear somewhat restrictive, it turns out to form the basis for our entire development of Bayesian network learning. As we show, the factorization of the distribution, which was central both to representation and to inference, also plays a key role in making learning feasible. (A minimal count-and-normalize sketch of this task appears after this list.)


  • In chapter 18, we move to the harder problem of learning both Bayesian network structure and the parameters, still from fully observed data. The learning algorithms we present trade off the accuracy with which the learned network represents the empirical distribution against the complexity of the resulting structure. As we show, the type of independence assumptions underlying the Bayesian network representation often hold, at least approximately, in real-world distributions. Thus, these learning algorithms often result in reasonably compact structures that capture much of the signal in the distribution. (A BIC-style scoring sketch, illustrating this trade-off, appears after this list.)


  • In chapter 19, we address the Bayesian network learning task in a setting where we have access only to partial observations of the relevant variables (for example, when the available patient records have missing entries). This type of situation occurs often in real-world settings. Unfortunately, the resulting learning task is considerably harder, and the resulting algorithms are both more complex and less satisfactory in terms of their performance. (A toy expectation-maximization sketch for this partial-observation setting appears after this list.)


  • In chapter 20, we consider the problem of learning Markov networks from data. It turns out that the learning tasks for Markov networks are significantly harder than the corresponding problems for Bayesian networks. We explain the difficulties and discuss the existing solutions. (A brute-force log-likelihood sketch after this list shows one concrete source of the difficulty: the partition function.)
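
To make the chapter 17 task concrete, here is a minimal sketch of maximum-likelihood parameter estimation for a discrete Bayesian network with known structure: with complete data, the factorization lets each CPD P(X | Pa(X)) be estimated independently by counting and normalizing. The data layout (`data` as a list of dicts, `parents`, `domains`) and the function name `mle_cpds` are my own illustrative choices, not the book's notation.

```python
from collections import Counter
from itertools import product

def mle_cpds(data, parents, domains):
    """Count-and-normalize MLE of each CPD P(X | Pa(X)) from complete data.

    data:    list of dicts, one per record, mapping variable name -> value
    parents: dict mapping each variable to a tuple of its parents
    domains: dict mapping each variable to the list of values it can take
    """
    cpds = {}
    for x, pa in parents.items():
        counts = Counter()
        for row in data:
            counts[(tuple(row[u] for u in pa), row[x])] += 1
        cpd = {}
        for pa_vals in product(*(domains[u] for u in pa)):
            total = sum(counts[(pa_vals, v)] for v in domains[x])
            for v in domains[x]:
                # Fall back to uniform for parent configurations never observed.
                cpd[(pa_vals, v)] = (counts[(pa_vals, v)] / total
                                     if total else 1.0 / len(domains[x]))
        cpds[x] = cpd
    return cpds

# Hypothetical usage: a two-node network Rain -> WetGrass, binary values.
data = [{"rain": 1, "wet": 1}, {"rain": 1, "wet": 1}, {"rain": 0, "wet": 0}]
cpds = mle_cpds(data,
                parents={"rain": (), "wet": ("rain",)},
                domains={"rain": [0, 1], "wet": [0, 1]})
# cpds["wet"][((1,), 1)] == 1.0, the MLE of P(wet=1 | rain=1)
```

Because the log-likelihood decomposes along the factorization, each CPD's estimate depends only on the counts of that variable and its parents, which is exactly why complete-data parameter learning stays tractable.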
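
For chapter 18's accuracy-versus-complexity trade-off, one standard concrete instance (among the scores the book develops) is the BIC score: the log-likelihood of the data under the MLE parameters, minus a penalty that grows with the number of independent parameters. The sketch below reuses `mle_cpds` from the previous sketch; a structure search would compare this score across candidate graphs.

```python
import math

def bic_score(data, parents, domains):
    """BIC = log-likelihood under MLE parameters - (log N / 2) * #parameters.

    A structure search (e.g. greedy edge addition/deletion) would compare
    this score across candidate parent sets; the penalty favors sparser graphs.
    """
    cpds = mle_cpds(data, parents, domains)
    loglik = 0.0
    for row in data:
        for x, pa in parents.items():
            loglik += math.log(cpds[x][(tuple(row[u] for u in pa), row[x])])
    # Each parent configuration contributes |Val(X)| - 1 free parameters.
    n_params = sum(
        (len(domains[x]) - 1) * math.prod(len(domains[u]) for u in pa)
        for x, pa in parents.items()
    )
    return loglik - 0.5 * math.log(len(data)) * n_params
```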
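
Chapter 19's missing-data setting is commonly handled with expectation-maximization (EM), which the book develops in detail. The toy sketch below runs EM on a hypothetical two-node network X -> Y with binary variables, where X is sometimes missing (`None`): the E-step fills in the missing X with its posterior given Y under the current parameters, and the M-step re-estimates the parameters from the resulting expected counts.

```python
def em_two_node(data, iters=50):
    """EM for a hypothetical network X -> Y (both binary), with X possibly
    missing. data: list of (x, y) pairs where x is 0, 1, or None.
    Returns (P(X=1), [P(Y=1|X=0), P(Y=1|X=1)]).
    """
    px, py = 0.5, [0.5, 0.5]                 # arbitrary starting point
    for _ in range(iters):
        n_x = [0.0, 0.0]                     # expected counts of X=0, X=1
        n_y1 = [0.0, 0.0]                    # expected counts of (X=k, Y=1)
        for x, y in data:                    # E-step: expected statistics
            if x is None:
                # Posterior P(X=1 | y) by Bayes' rule, current parameters.
                w1 = px * (py[1] if y else 1 - py[1])
                w0 = (1 - px) * (py[0] if y else 1 - py[0])
                q1 = w1 / (w0 + w1)
            else:
                q1 = float(x)
            n_x[0] += 1 - q1
            n_x[1] += q1
            if y:
                n_y1[0] += 1 - q1
                n_y1[1] += q1
        # M-step: maximum-likelihood estimates from the expected counts.
        px = n_x[1] / len(data)
        py = [n_y1[k] / n_x[k] if n_x[k] else 0.5 for k in (0, 1)]
    return px, py
```

Unlike the complete-data case, the objective here is non-concave and EM only converges to a local optimum, which is one reason these algorithms are less satisfactory in performance.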
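
One concrete source of the chapter 20 difficulty is the partition function: in a Markov network the likelihood couples all parameters globally through Z, so even evaluating the log-likelihood (let alone maximizing it) requires inference. The brute-force sketch below makes that coupling explicit; the log-linear factor representation and function names are my own illustrative choices.

```python
import math
from itertools import product

def markov_net_log_likelihood(data, factors, variables, domains):
    """Log-likelihood of a log-linear Markov network.

    factors: callables mapping a full assignment (a dict) to a log-potential.
    data:    list of full assignments (dicts). Every term shares the same
    log Z, so the likelihood does not decompose per factor; here Z is
    computed by exhaustive enumeration, exponential in len(variables).
    """
    def score(assignment):
        return sum(f(assignment) for f in factors)

    log_z = math.log(sum(
        math.exp(score(dict(zip(variables, vals))))
        for vals in product(*(domains[v] for v in variables))
    ))
    return sum(score(row) - log_z for row in data)
```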