🌲 Stanford CS 228 - Probabilistic Graphical Models


Probabilistic Graphical Models (PGMs) are a rich framework for encoding probability distributions over complex domains, using a graph-based representation. The core idea behind PGMs is to use graphs to capture the conditional dependence structure between random variables, thereby facilitating efficient computation of probabilities and other statistical measures. PGMs combine principles from graph theory, probability theory, and statistics.

PGM! PGM! PGM! One of the most interesting yet challenging classes at Stanford is CS228. Graphical models ahoy! [ official website ] [ course notes ]

The course closely follows the book Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and Nir Friedman, and there is also an online "Probabilistic Graphical Models" specialization on Coursera: cert.

Imagine you're trying to figure out how various factors like weather, traffic, and your mood affect whether you'll have a good day or not. PGMs help us draw a picture of these connections and then use that picture to make predictions or understand things better.

Types of Probabilistic Graphical Models

There are two primary types of PGMs:

Bayesian Networks (Directed Graphical Models):

  • Structure: Directed Acyclic Graph (DAG).
  • Nodes: Represent random variables.
  • Edges: Represent conditional dependencies between the variables.
  • Representation: Each node is associated with a conditional probability distribution (CPD) that specifies the probability of the node given its parents in the graph.
  • Imagine a flowchart: The circles (nodes) are connected by arrows (edges), showing how one thing leads to another.
  • Example: Let’s say you're planning your day. "Weather" might influence "Mood," and "Mood" might influence "Productivity." The arrows would go from "Weather" to "Mood" and from "Mood" to "Productivity." This setup helps us figure out, for instance, how the weather might indirectly affect your productivity (a runnable sketch follows this list).
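
To make the chain concrete, here is a minimal sketch in plain Python (no PGM library). The structure Weather → Mood → Productivity is the one from the example above; all probability numbers are invented for illustration, with dicts standing in for conditional probability distributions (CPDs).

```python
# A minimal sketch of the Weather -> Mood -> Productivity chain, using
# plain Python dicts as conditional probability distributions (CPDs).
# All probability numbers are made up for illustration.

p_weather = {"sunny": 0.7, "rainy": 0.3}        # P(Weather)
p_mood = {                                      # P(Mood | Weather)
    "sunny": {"good": 0.8, "bad": 0.2},
    "rainy": {"good": 0.4, "bad": 0.6},
}
p_prod = {                                      # P(Productivity | Mood)
    "good": {"high": 0.9, "low": 0.1},
    "bad":  {"high": 0.5, "low": 0.5},
}

# The DAG structure lets the joint factorize as
#   P(W, M, Pr) = P(W) * P(M | W) * P(Pr | M),
# so the indirect effect of weather on productivity is a sum over moods:
p_high = sum(
    p_weather[w] * p_mood[w][m] * p_prod[m]["high"]
    for w in p_weather
    for m in p_mood[w]
)
print(f"P(Productivity = high) = {p_high:.3f}")  # 0.772
```

With a full-featured library the CPDs and the query would be declared rather than summed by hand, but the underlying arithmetic is exactly this factorized sum.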

Markov Networks (Undirected Graphical Models):

  • Structure: Undirected Graph.
  • Nodes: Represent random variables.
  • Edges: Indicate direct interaction between variables; potential functions quantify the affinity between connected nodes.
  • Representation: The joint distribution is represented as a product of potential functions, each defined over a clique (fully connected subgraph) of the graph.
  • Imagine a web: The circles (nodes) are connected by lines (edges) without arrows, meaning things are connected but don’t necessarily cause each other.
  • Example: Think about pixels in a black-and-white photo. Each pixel’s color is likely to be similar to the ones next to it, but not necessarily because one causes the other to be that color, just that they tend to be similar. This setup helps in tasks like smoothing an image, where you want nearby pixels to have similar values (a small sketch follows this list).
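
To make the pixel intuition concrete, here is a small sketch of a three-pixel Markov network in plain Python. The potential values (2.0 for agreeing neighbors, 1.0 otherwise) are arbitrary choices for illustration; note that the joint needs the normalizing constant Z, which is part of what makes undirected models harder to work with than directed ones.

```python
from itertools import product

# A tiny Markov network over three binary pixels in a row: x1 - x2 - x3.
# The pairwise potential rewards neighbors that agree; the value 2.0 is an
# arbitrary choice for illustration. Nothing here implies causation.

def phi(a, b):
    return 2.0 if a == b else 1.0   # affinity between two neighboring pixels

def score(x):
    # Unnormalized joint: product of potentials over the edges (the cliques).
    x1, x2, x3 = x
    return phi(x1, x2) * phi(x2, x3)

# Brute-force partition function Z over all 2^3 pixel configurations.
states = list(product((0, 1), repeat=3))
Z = sum(score(x) for x in states)

# Probability that the whole row is "smooth" (all pixels equal).
p_smooth = sum(score(x) for x in states if x[0] == x[1] == x[2]) / Z
print(f"Z = {Z}, P(all pixels equal) = {p_smooth:.3f}")  # Z = 18.0, P ≈ 0.444
```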

The aim of this course is to develop the knowledge and skills necessary to design, implement, and apply these models to solve real problems. The course will cover:

  • Bayesian networks, undirected graphical models, and their temporal extensions
  • Exact and approximate inference methods
  • Estimation of the parameters and the structure of graphical models

BOOK: Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and Nir Friedman.

𓄆 Important Books:
𓊖 Modeling and Reasoning with Bayesian Networks
𓊖 Information Theory, Inference, and Learning Algorithms
𓊖 Machine Learning: A Probabilistic Perspective
𓊖 Bayesian Reasoning and Machine Learning by David Barber
𓊖 Graphical Models, Exponential Families, and Variational Inference

Grading: Homework (70%) + Final Exam (30%) | Homework: theoretical + programming | Topics follow the book

♞ PRELIMINARIES

♞ REPRESENTATION

  • Bayesian networks: Definitions. Representations via directed graphs. Independencies in directed models (a numeric check follows this list).
  • Markov random fields: Undirected vs directed models. Independencies in undirected models. Conditional random fields.
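
As a taste of what "independencies in directed models" means, here is a small brute-force check that in a chain W → M → P the endpoints are marginally dependent but conditionally independent given the middle node. The numbers reuse the made-up Weather/Mood/Productivity CPDs from above, with states coded as 0/1.

```python
from itertools import product

# Numeric check of d-separation in the chain W -> M -> P: the endpoints
# are dependent marginally but independent given the middle node.

pW = {0: 0.7, 1: 0.3}
pM_W = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}  # pM_W[w][m] = P(M=m | W=w)
pP_M = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}  # pP_M[m][p] = P(P=p | M=m)

def joint(w, m, p):
    # The DAG factorization: P(W, M, P) = P(W) P(M | W) P(P | M).
    return pW[w] * pM_W[w][m] * pP_M[m][p]

def prob(pred):
    # Probability of an event by brute-force enumeration of all states.
    return sum(joint(w, m, p)
               for w, m, p in product((0, 1), repeat=3)
               if pred(w, m, p))

# Marginally, P(P=0 | W=w) changes with w, so W and P are dependent.
for w in (0, 1):
    cond = prob(lambda W, M, P: W == w and P == 0) / prob(lambda W, M, P: W == w)
    print(f"P(P=0 | W={w}) = {cond:.3f}")            # 0.820 vs 0.660

# Conditioned on M, the dependence disappears: same value for both w.
for m in (0, 1):
    for w in (0, 1):
        cond = (prob(lambda W, M, P: M == m and W == w and P == 0)
                / prob(lambda W, M, P: M == m and W == w))
        print(f"P(P=0 | M={m}, W={w}) = {cond:.3f}")  # 0.900, 0.900, 0.500, 0.500
```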

♞ INFERENCE

  • Variable elimination: The inference problem. Variable elimination. Complexity of inference. (A toy implementation follows this list.)
  • Belief propagation: The junction tree algorithm. Exact inference in arbitrary graphs. Loopy Belief Propagation.
  • MAP inference: Max-sum message passing. Graphcuts. Linear programming relaxations. Dual decomposition.
  • Sampling-based inference: Monte Carlo sampling. Importance sampling. Markov chain Monte Carlo. Applications in inference.
  • Variational inference: Variational lower bounds. Mean Field. Marginal polytope and its relaxations.
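
As a preview of the variable elimination bullet above, here is a toy implementation over discrete binary factors, a sketch for intuition rather than anything library-grade. The example factors encode the made-up Weather → Mood → Productivity CPDs used earlier.

```python
from itertools import product

# Toy variable elimination over discrete factors. A factor is a pair
# (variables, table): `variables` is a tuple of names and `table` maps a
# full assignment tuple to a nonnegative value. All variables are assumed
# binary to keep the sketch short.

def factor_product(f, g):
    fv, ft = f
    gv, gt = g
    vars_ = fv + tuple(v for v in gv if v not in fv)
    table = {}
    for assignment in product((0, 1), repeat=len(vars_)):
        a = dict(zip(vars_, assignment))
        table[assignment] = (ft[tuple(a[v] for v in fv)]
                             * gt[tuple(a[v] for v in gv)])
    return (vars_, table)

def marginalize(f, var):
    # Sum out `var` from factor f.
    fv, ft = f
    keep = tuple(v for v in fv if v != var)
    table = {}
    for assignment, val in ft.items():
        key = tuple(x for v, x in zip(fv, assignment) if v != var)
        table[key] = table.get(key, 0.0) + val
    return (keep, table)

def eliminate(factors, order):
    # Repeatedly multiply the factors mentioning `var`, then sum `var` out.
    for var in order:
        involved = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        prod = involved[0]
        for f in involved[1:]:
            prod = factor_product(prod, f)
        factors = rest + [marginalize(prod, var)]
    result = factors[0]
    for f in factors[1:]:
        result = factor_product(result, f)
    return result

# Chain W -> M -> P, with the made-up CPDs from the earlier example,
# written as factors P(W), P(M | W), P(P | M); keys are (child, parent).
fW = (("W",), {(0,): 0.7, (1,): 0.3})
fMW = (("M", "W"), {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.4, (1, 1): 0.6})
fPM = (("P", "M"), {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5})

# Eliminate W, then M, leaving the marginal over P.
print(eliminate([fW, fMW, fPM], ["W", "M"]))
# -> (('P',), {(0,): 0.772, (1,): 0.228}) up to float rounding
```

Note that the cost of each step is exponential only in the size of the intermediate factor, which is why the elimination ordering matters for the complexity of inference.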

♞ LEARNING

♞ BRINGING IT ALL TOGETHER

유 PGM - Max Planck Institute for Intelligent Systems - Christopher Bishop | CMU - PGM | Probabilistic Graphical Models Tutorial — Part 1 | Understanding Probabilistic Graphical Models Intuitively | CMU - PGM - website | PGM | libDAI | OpenGM

FINAL EXAM

2016 Final, 2009 Final, 2008 Final, 2007 Final, 2006 Final | Collected from public resources | My Solutions - HOMEWORKS, EXAMS | Collected from public resources

Resources: cs.ubc.ca, 10-708 – Probabilistic Graphical Models: CMU, notebook - Introduction to Probabilistic Graphical Models, Max Planck Institute, Stanford CS224W: Machine Learning with Graphs.