Reinforcement learning provides a mathematical formalism for learning-based control. By utilizing reinforcement learning, we can automatically acquire near-optimal behavioral skills, represented by policies, for optimizing user-specified reward functions. It contrasts with the traditional optimization toolbox in that it considers a stochastic environment whose dynamics are not known a priori. By interacting with the environment and observing delayed rewards, reinforcement learning agents have achieved outstanding results on complex sequential decision-making problems, such as playing Go and video games at superhuman level, autonomous driving, and smart grid optimization.

This tutorial will cover the basics of reinforcement learning, including terminology and mathematical formalism, Markov Decision Processes, Q-learning, and Deep Q-learning. Given the limited time, it will prioritise breadth over depth, giving pointers to where certain aspects can be studied in more detail. It will be split into multiple theory “blocks” interleaved with practical “blocks”, where you get the chance to try out some of the concepts in practice. The practical blocks will be based on a Jupyter notebook and Python.
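To give a flavour of what the practical blocks involve, the following is a minimal sketch of tabular Q-learning in plain Python. The toy chain environment, reward scheme, and all hyperparameters here are illustrative assumptions, not the tutorial's actual notebook material: an agent starts in state 0, can move left or right, and receives a reward of 1 for reaching the goal state.

```python
import random

# Illustrative toy MDP (not from the tutorial): a chain of states 0..4.
# Action 0 moves left, action 1 moves right; reaching state 4 yields
# reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: one value per (state, action) pair, initialised to zero.
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection: explore with probability epsilon.
            if rng.random() < epsilon:
                action = rng.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Q-learning update: bootstrap from the best next-state value.
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

q = q_learning()
# The learned greedy policy should move right in every non-terminal state.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

Note how the agent never uses a model of the environment's dynamics: it learns purely from sampled transitions and delayed rewards, which is the core idea the tutorial builds on.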