Posts

Meta Reinforcement Learning
Meta-RL is meta-learning on reinforcement learning tasks. After being trained over a distribution of tasks, the agent is able to solve a new task by developing a new RL algorithm with its internal activity dynamics. This post starts with the origin of meta-RL and then dives into three key components of meta-RL.

Domain Randomization for Sim2Real Transfer
If a model or policy is mainly trained in a simulator but expected to work on a real robot, it will surely face the sim2real gap. Domain Randomization (DR) is a simple but powerful idea for closing this gap by randomizing properties of the training environment.

Are Deep Neural Networks Dramatically Overfitted?
If you are, like me, confused about why deep neural networks can generalize to out-of-sample data points without drastic overfitting, keep on reading.

Generalized Language Models
As a follow-up to the word embedding post, we will discuss models for learning contextualized word vectors, as well as the new trend of large unsupervised pre-trained language models that have achieved amazing SOTA results on a variety of language tasks.

Object Detection Part 4: Fast Detection Models
Part 4 of the “Object Detection for Dummies” series focuses on one-stage models for fast detection, including SSD, RetinaNet, and models in the YOLO family. These models skip the explicit region proposal stage and instead apply detection directly to densely sampled areas.

Meta-Learning: Learning to Learn Fast
Meta-learning, also known as “learning to learn”, intends to design models that can learn new skills or adapt to new environments rapidly with only a few training examples. There are three common approaches: 1) learn an efficient distance metric (metric-based); 2) use a (recurrent) network with external or internal memory (model-based); 3) optimize the model parameters explicitly for fast learning (optimization-based).

Flow-based Deep Generative Models
In this post, we are looking into the third type of generative models: flow-based generative models. Different from GAN and VAE, they explicitly learn the probability density function of the input data.

From Autoencoder to Beta-VAE
Autoencoders are a family of neural network models aiming to learn compressed latent variables of high-dimensional data. Starting from the basic autoencoder model, this post reviews several variations, including denoising, sparse, and contractive autoencoders, and then the Variational Autoencoder (VAE) and its modification, beta-VAE.

Attention? Attention!
Attention has been a fairly popular concept and a useful tool in the deep learning community in recent years. In this post, we are gonna look into how attention was invented, as well as various attention mechanisms and models, such as the transformer and SNAIL.

Implementing Deep Reinforcement Learning Models with TensorFlow + OpenAI Gym
Let’s see how to implement a number of classic deep reinforcement learning models in code.

Policy Gradient Algorithms
Abstract: In this post, we are going to look deep into policy gradient, why it works, and many new policy gradient algorithms proposed in recent years: vanilla policy gradient, actor-critic, off-policy actor-critic, A3C, A2C, DPG, DDPG, D4PG, MADDPG, TRPO, PPO, ACER, ACKTR, SAC, and TD3.

A (Long) Peek into Reinforcement Learning
In this post, we are gonna briefly go over the field of Reinforcement Learning (RL), from fundamental concepts to classic algorithms. Hopefully, this review is helpful enough so that newbies would not get lost in specialized terms and jargon while getting started. [WARNING] This is a long read.

The Multi-Armed Bandit Problem and Its Solutions
The multi-armed bandit problem is a classic example used to demonstrate the exploration versus exploitation dilemma. This post introduces the bandit problem and how to solve it using different exploration strategies.
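One of the simplest exploration strategies for the bandit problem is epsilon-greedy: with a small probability, pull a random arm; otherwise pull the arm with the best estimated payoff so far. A minimal sketch on a Bernoulli bandit (the arm probabilities, function name, and parameters here are illustrative, not taken from the post):

```python
import random

def epsilon_greedy(true_probs, epsilon=0.1, steps=10000, seed=0):
    """Run epsilon-greedy on a Bernoulli multi-armed bandit.

    With probability epsilon we explore (pull a random arm);
    otherwise we exploit the arm with the highest estimated value.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms    # number of pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1 if rng.random() < true_probs[arm] else 0
        counts[arm] += 1
        # Incremental update of the mean reward estimate for this arm.
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return values, total_reward

est, reward = epsilon_greedy([0.2, 0.5, 0.8])
# With enough steps, the estimates approach the true reward
# probabilities and the best arm (index 2) dominates the pulls.
```

The post covers richer strategies (e.g. UCB and Thompson sampling) that trade off exploration more cleverly than this fixed-epsilon baseline.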

Object Detection for Dummies Part 3: R-CNN Family
In Part 3, we will examine four object detection models: R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN. These models are highly related, and the newer versions show great speed improvements compared to the older ones.

Object Detection for Dummies Part 2: CNN, DPM and Overfeat
Part 2 introduces several classic convolutional neural network architecture designs for image classification (AlexNet, VGG, ResNet), as well as the DPM (Deformable Parts Model) and Overfeat models for object recognition.

Object Detection for Dummies Part 1: Gradient Vector, HOG, and SS
In this series of posts on “Object Detection for Dummies”, we will go through several basic concepts, algorithms, and popular deep learning models for image processing and object detection. Hopefully, it will be a good read for people who have no experience in this field but want to learn more. Part 1 introduces the concept of gradient vectors, the HOG (Histogram of Oriented Gradients) algorithm, and Selective Search for image segmentation.

Learning Word Embedding
Word embedding is a dense representation of words in the form of numeric vectors. It can be learned using a variety of language models. The word embedding representation is able to reveal many hidden relationships between words. For example, vector(“cat”) - vector(“kitten”) is similar to vector(“dog”) - vector(“puppy”). This post introduces several models for learning word embedding and how their loss functions are designed for the purpose.
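The analogy in the summary above can be checked directly with vector arithmetic: the “adult minus baby” difference vectors should point in similar directions. A minimal sketch with toy 3-dimensional vectors (the numbers are made up for illustration; real embeddings are learned from data and have hundreds of dimensions):

```python
# Toy word vectors, hand-picked for illustration only.
vectors = {
    "cat":    [0.9, 0.1, 0.3],
    "kitten": [0.8, 0.1, 0.9],
    "dog":    [0.1, 0.9, 0.3],
    "puppy":  [0.0, 0.9, 0.9],
}

def diff(a, b):
    """Element-wise difference between two word vectors."""
    return [x - y for x, y in zip(vectors[a], vectors[b])]

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = lambda w: sum(x * x for x in w) ** 0.5
    return dot / (norm(u) * norm(v))

# vector("cat") - vector("kitten") vs. vector("dog") - vector("puppy"):
# both capture the same "adult minus baby" direction in this toy setup.
sim = cosine(diff("cat", "kitten"), diff("dog", "puppy"))
```

With learned embeddings such as word2vec or GloVe, the same cosine comparison is what makes analogy queries like “king - man + woman ≈ queen” work.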

Anatomize Deep Learning with Information Theory
This post is a summary of Prof. Naftali Tishby’s recent talk on “Information Theory in Deep Learning”. It presented how to apply information theory to study the growth and transformation of deep neural networks during training.

From GAN to WGAN
This post explains the maths behind the generative adversarial network (GAN) model and why it is hard to train. Wasserstein GAN is intended to improve GAN training by adopting a smooth metric for measuring the distance between two probability distributions.

How to Explain the Prediction of a Machine Learning Model?
This post reviews some research in model interpretability, covering two aspects: (i) interpretable models with model-specific interpretation methods, and (ii) approaches to explaining black-box models. I included an open discussion on explainable artificial intelligence at the end.

Predict Stock Prices Using RNN: Part 2
This post continues the tutorial on how to build a recurrent neural network using TensorFlow to predict stock market prices. Part 2 attempts to predict the prices of multiple stocks using embeddings. The full working code is available in lilianweng/stock-rnn.

Predict Stock Prices Using RNN: Part 1
This post is a tutorial on how to build a recurrent neural network using TensorFlow to predict stock market prices. Part 1 focuses on predicting the S&P 500 index. The full working code is available in lilianweng/stock-rnn.

An Overview of Deep Learning for Curious People
Starting earlier this year, I grew a strong curiosity about deep learning and spent some time reading about the field. To document what I’ve learned and to provide some interesting pointers to people with similar interests, I wrote this overview of deep learning models and their applications.