The following are my reading notes on the book Probabilistic Graphical Model (Koller, 2009); I will add to them from time to time. This topic falls under the field of AI (artificial intelligence).
1.3 Overview and Roadmap
1.3.1 Overview of Chapters
Continuing from the previous post, where Part 3 covered the Learning methods for Probabilistic Graphical Models, Part 4 focuses on the theme of Reasoning in AI. This is precisely where today's mainstream AI approaches fall short: current methods are mostly imitation and lack the key ability of causal inference. The chapter topics are ordered as follows:
- In chapter 21, we focus on the semantics of intervention and its relation to causality. We present the notion of a causal model, which allows us to answer not only queries of the form “if I observe X, what do I learn about Y,” but also intervention queries of the form “if I manipulate X, what effect does it have on Y.” (A toy sketch of such an intervention query follows this list.)
- In chapter 22, we turn to the task of decision making under uncertainty. Here, we must consider not only the distribution over different states of the world, but also the preferences of the agent regarding these outcomes. We discuss the notion of utility functions and how they can encode an agent’s preferences about complex situations involving multiple variables. As we show, the same ideas that we used to provide compact representations of probability distributions can also be used for utility functions.
- In chapter 23, we describe a unified representation for decision making, called influence diagrams. Influence diagrams extend Bayesian networks by introducing actions and utilities. We present algorithms that use influence diagrams for making decisions that optimize the agent’s expected utility. These algorithms utilize many of the same ideas that formed the basis for exact inference in Bayesian networks. (A small expected-utility sketch also follows this list.)
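To make the difference between an observational query and an intervention query concrete, here is a minimal sketch in plain Python. It assumes a hypothetical three-variable causal network Z → X, Z → Y, X → Y with made-up probability tables (not an example from the book): conditioning on X = 1 mixes in information about the confounder Z, while do(X = 1) cuts the edge Z → X and keeps the prior over Z.

```python
# Minimal sketch (hypothetical numbers, not from the book):
# causal network Z -> X, Z -> Y, X -> Y with binary variables.
from itertools import product

P_Z = {0: 0.5, 1: 0.5}                      # prior P(Z)
P_X_given_Z = {0: {1: 0.2, 0: 0.8},         # P(X | Z=0)
               1: {1: 0.9, 0: 0.1}}         # P(X | Z=1)
P_Y1_given_XZ = {(0, 0): 0.1, (1, 0): 0.4,  # P(Y=1 | X, Z)
                 (0, 1): 0.5, (1, 1): 0.8}

def joint(z, x, y):
    """P(Z=z, X=x, Y=y) via the chain rule of the network."""
    p_y1 = P_Y1_given_XZ[(x, z)]
    return P_Z[z] * P_X_given_Z[z][x] * (p_y1 if y == 1 else 1 - p_y1)

# Observational query P(Y=1 | X=1): condition on having observed X=1.
num = sum(joint(z, 1, 1) for z in (0, 1))
den = sum(joint(z, 1, y) for z, y in product((0, 1), repeat=2))
p_observe = num / den

# Intervention query P(Y=1 | do(X=1)): force X=1, so the edge Z -> X is cut
# and Z keeps its prior distribution (truncated factorization).
p_do = sum(P_Z[z] * P_Y1_given_XZ[(1, z)] for z in (0, 1))

print(f"P(Y=1 | X=1)     = {p_observe:.3f}")  # ~0.727, inflated by confounding
print(f"P(Y=1 | do(X=1)) = {p_do:.3f}")       # 0.600, the causal effect
```

The two answers differ because observing X = 1 also tells us something about the confounder Z, whereas setting X = 1 does not.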
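In the same spirit, the sketch below shows only the maximum-expected-utility principle behind influence-diagram algorithms, on a hypothetical one-decision problem (one chance variable, one decision, made-up utility numbers); it is not the book's formulation or algorithm.

```python
# Minimal sketch (hypothetical numbers, not from the book): a single-decision
# problem with chance variable "weather", decision "umbrella", and a utility
# table U(weather, action).
P_weather = {"rain": 0.3, "sun": 0.7}

utility = {("rain", "take"): 70, ("rain", "leave"): 0,
           ("sun", "take"): 80, ("sun", "leave"): 100}

def expected_utility(action):
    """EU(action) = sum over w of P(w) * U(w, action)."""
    return sum(P_weather[w] * utility[(w, action)] for w in P_weather)

# Maximum expected utility: pick the action with the highest EU.
actions = ("take", "leave")
best = max(actions, key=expected_utility)
print({a: round(expected_utility(a), 1) for a in actions}, "-> choose", best)
# {'take': 77.0, 'leave': 70.0} -> choose take
```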