A Visual Guide to Feedforward Neural Networks

Neural networks are great models for finding patterns in data. They're conceptually simple but powerful, and they've been applied to virtually every domain of machine learning. This will be a simple guide to neural networks, focused on building an intuition for how they work rather than on a deep understanding of the math. I'll focus solely on how a network that's already been built produces its output; how these nets are built in the first place is significantly more complicated, but if you wish to learn about it, this is a great post on the subject.
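As a rough sketch of what that forward computation looks like (a minimal illustration of my own, not code from the guide itself): each neuron takes a weighted sum of its inputs, adds a bias, and passes the result through an activation function, and layers of such neurons are chained together. Here it is in Haskell, the language used elsewhere on this blog; the names `neuron`, `layerOutput`, and `feedforward`, and the choice of a sigmoid activation, are placeholder assumptions.

```haskell
-- Minimal sketch of a feedforward pass; names and the sigmoid activation
-- are placeholder choices, not taken from the guide.
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (-x))

-- One neuron: weighted sum of the inputs, plus a bias, through the activation.
neuron :: [Double] -> Double -> [Double] -> Double
neuron weights bias inputs = sigmoid (sum (zipWith (*) weights inputs) + bias)

-- A layer is a list of (weights, bias) pairs, each applied to the same inputs.
layerOutput :: [([Double], Double)] -> [Double] -> [Double]
layerOutput neurons inputs = [neuron w b inputs | (w, b) <- neurons]

-- A feedforward pass threads the input through each layer in turn, e.g.
-- feedforward [[([0.5, -0.3], 0.1), ([0.2, 0.8], 0.0)]] [1.0, 2.0]
feedforward :: [[([Double], Double)]] -> [Double] -> [Double]
feedforward layers inputs = foldl (flip layerOutput) inputs layers
```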


A Brief Overview of Noteworthy Haskell Features

After getting interested in Haskell thanks to this video (both educational and quite funny, highly recommended), I decided to start learning the language. It's been (and continues to be) a strange experience, quite different from learning Python or Objective-C. In the beginning it felt a bit like relearning programming.

To teach myself, I decided to write a spaced-repetition learning program inspired by Anki. I'll cover some aspects of Haskell I struggled to understand, so that other beginners can avoid the mistakes I made, and give a brief overview of some of Haskell's features that I found particularly useful. All the code examples are taken from that spaced-repetition program; its source can be found here, if you're curious about how a given piece of code fits in with the rest of the program.

Some level of programming knowledge is assumed, mainly a basic understanding of functions and types.
