# Sequence Models and Recurrent Neural Networks

Before talking about recurrent neural networks, let’s talk about hidden Markov models (HMMs) first.

Postdoctoral Research Fellow, Massachusetts General Hospital / Harvard Medical School

PhD in Computer Science, Yale University


Reinforcement learning is the third major category of machine learning, after supervised learning and unsupervised learning.

Graphs allow us to encode structural assumptions about data; they are a natural language for describing many kinds of problems and datasets.

In the previous post, we talked about approximate inference. We want to compute $p(\theta, z \mid x)$, but it is too complicated to evaluate directly.

In the previous post, we talked about Gibbs sampling and posterior inference.

Review: In the previous post, we talked about the Dirichlet process and the Dirichlet process mixture: the Dirichlet process is used for CDF estimation, and the Dirichlet process mixture is used for density estimation.

Review: In the previous post, we talked about Bayesian Inference and Gaussian processes.

Bayesian inference treats the parameter $\theta$ as a random variable.
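Concretely, placing a prior $p(\theta)$ on the parameter, the posterior given data $x$ follows from Bayes’ rule:

```latex
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)},
\qquad
p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta .
```

The normalizing integral $p(x)$ is what makes exact posterior computation hard in general, which is what motivates the approximate-inference methods mentioned above.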

Let’s start with a simple regression method. Let’s assume that we have a dataset of $n$ points $\{(x_i, y_i)\}_{i=1}^n$, where $y_i \in \mathbb{R}$ and $x_i \in \mathbb{R}^d$.
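As a minimal sketch of such a regression on this kind of dataset, here is ordinary least squares with NumPy; the synthetic data (and the true weight vector `w_true`) are illustrative assumptions, not values from the post:

```python
import numpy as np

# Synthetic dataset {(x_i, y_i)}_{i=1}^n with x_i in R^d, y_i in R.
# (Illustrative data; the generating weights are an assumption.)
rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

# Ordinary least squares: w_hat = argmin_w ||X w - y||^2,
# solved via numpy's least-squares routine.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noise this small, `w_hat` recovers `w_true` closely; richer models build on exactly this setup.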

In the previous post, we talked about Mercer’s theorem and defined a Mercer kernel as one satisfying $\int f(x) f(y) k(x,y)\, dx\, dy \geq 0$ for any function $f$.
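The finite-sample analogue of this condition is that the Gram matrix $K_{ij} = k(x_i, x_j)$ is positive semidefinite, so $f^\top K f \geq 0$ for every vector $f$. A small sketch checking this numerically, using an RBF kernel as an illustrative choice (the kernel and data are assumptions, not from the post):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # RBF (Gaussian) kernel k(x, y) = exp(-gamma * (x - y)^2),
    # a standard example of a Mercer kernel on R.
    return np.exp(-gamma * (x - y) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=20)

# Gram matrix K_ij = k(x_i, x_j) via broadcasting.
K = rbf_kernel(x[:, None], x[None, :])

# A Mercer kernel's Gram matrix is positive semidefinite:
# all eigenvalues are nonnegative (up to floating-point round-off).
eigvals = np.linalg.eigvalsh(K)
assert eigvals.min() > -1e-8
```

The same check fails for an arbitrary symmetric function that is not a valid kernel, which is why the Mercer condition matters for kernel methods.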

© iid.yale.edu | Yale University 2023