Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are very useful in all image-related tasks, as well as in several sequence-processing tasks (while RNN variants are more common for sequences, DeepMind's WaveNet makes CNNs very relevant for sequences too). High-level APIs like Keras make implementing CNNs very easy, but a true understanding of the internals eludes one until they are explored in C or numpy. In these two tutorials we explore how CNNs work using pure numpy code.
CNNs Using Numpy - Part 1
CNNs Using Numpy - Part 2
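To give a flavour of what the tutorials build up, here is a minimal sketch of the core operation of a CNN: a valid-mode 2D convolution (strictly, cross-correlation, which is what deep-learning frameworks actually compute) written in pure numpy. The function name `conv2d` and the example filter are illustrative, not taken from the tutorials.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNNs).

    image:  (H, W) array
    kernel: (kH, kW) array
    returns (H - kH + 1, W - kW + 1) array
    """
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output pixel is the elementwise product of the kernel
            # with the image patch it overlaps, summed up
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # simple diagonal-difference filter
print(conv2d(image, kernel).shape)  # (3, 3)
```

A real CNN layer adds channels, multiple filters, a bias, and a nonlinearity on top of this loop; the tutorials cover those, plus the backward pass.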

Reinforcement Learning

Reinforcement Learning is a very powerful paradigm that has recently pushed the frontiers of Artificial Intelligence. In this paradigm an agent constantly interacts with an environment and takes actions based on the environment's state (and possibly its own internal state). OpenAI maintains a package, gym, that contains several such environments for AI enthusiasts to train their agents on. In the article below we use our own custom wrapping of the OpenAI gym environments to work outside their stated parameters, and show how the agent ends up discovering a very non-trivial solution to the famous control problem called CartPole. This solution has deep links with fundamental physics topics like General Relativity.
(Reinforcement) learning non-inertial frames, pseudo-force and Einstein's equivalence principle
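The agent-environment interaction loop described above can be sketched in a few lines. Since gym is not assumed to be installed here, the snippet uses a toy stand-in class (`ToyEnv`, a hypothetical name) that mimics gym's `reset`/`step` interface rather than the real CartPole environment.

```python
import random

class ToyEnv:
    """A stand-in for a gym-style environment (NOT the real CartPole):
    the state is a step counter, and the episode ends after 10 steps."""
    def reset(self):
        self.t = 0
        return self.t  # initial observation

    def step(self, action):
        self.t += 1
        reward = 1.0          # CartPole likewise gives +1 per step survived
        done = self.t >= 10
        return self.t, reward, done, {}  # obs, reward, done, info

env = ToyEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])  # random policy; a real agent would act on obs
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # 10.0
```

Training an agent means replacing the random policy with one that improves from the `(obs, reward)` stream; the article explores what such a policy ends up learning.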

Recurrent Neural Networks

Recurrent Neural Networks (RNNs) are ubiquitous in natural language processing. To speed up and regularize training, it is done in minibatches. Since sentences are not in general of the same length, the shorter ones are "zero-padded". The network then has to learn this zero padding, and the deeper the RNN, the more learning steps are wasted on this task. A simple solution is to mask the zero-padded values. In this tutorial we explain how masking is supposed to work and how it is implemented in Keras.
How does masking work in an RNN (and variants) and why
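As a quick illustration of the padding-and-masking idea (the Keras-specific mechanics are in the tutorial), here is a minimal numpy sketch: unequal-length sequences are zero-padded into a rectangular batch, and a boolean mask marks which positions are real tokens.

```python
import numpy as np

# A minibatch of three token-id sequences of unequal length,
# zero-padded to the longest length (0 is reserved as the pad id).
sequences = [[4, 7, 2], [9, 3], [5]]
maxlen = max(len(s) for s in sequences)
batch = np.zeros((len(sequences), maxlen), dtype=int)
for i, s in enumerate(sequences):
    batch[i, :len(s)] = s

# The mask is True on real tokens and False on padding; a masked RNN
# skips the False positions (carrying its state through unchanged)
# instead of wasting capacity learning the padding.
mask = batch != 0
print(batch)
print(mask)
```

This mask is exactly what Keras propagates through masking-aware layers, so downstream layers know which timesteps to ignore.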

Custom Metrics For Dynamic Training

While standard problems are handled very nicely by the standard workflow in high-level machine-learning APIs, some problems require dynamic models that change parameters like learning rates and class weights during training, based on how well the training is going. For instance, one may be interested in a per-class F1 score per epoch. In this tutorial we explain how to do so.
Custom metrics in Keras and how simple they are to use in TensorFlow 2.2
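To make the per-class F1 example concrete, here is a small numpy sketch of the metric itself (the function name `per_class_f1` is illustrative; wiring it into training via a Keras callback or custom metric is what the tutorial covers).

```python
import numpy as np

def per_class_f1(y_true, y_pred, n_classes):
    """Per-class F1 from integer label arrays; returns one score per class."""
    scores = np.zeros(n_classes)
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        denom = 2 * tp + fp + fn
        # F1 = 2*tp / (2*tp + fp + fn); define it as 0 when the class is absent
        scores[c] = 2 * tp / denom if denom else 0.0
    return scores

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(per_class_f1(y_true, y_pred, 3))  # roughly [0.5, 0.8, 0.667]
```

Computed once per epoch on the validation set, these scores can then drive dynamic adjustments such as re-weighting the classes that are lagging.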
