I hate Matlab: How an IDE, a language, and a mentality harm

This blog post is inspired by a few Matlab-related tweets of mine, which turned into days-long discussions with fellow science and non-science tweeps. Those tweets were motivated by two main things: my desire for programming in psychology, neuroscience, and science in general to be taught, and taught well, and my desire for students to learn transferable skills more generally. The post is premised on a number of themes that came up on Twitter: the great need for scientists to be able to code; the observation that Matlab is akin to bad training wheels on a bicycle, which never help with learning to ride but are used again and again because they are better than walking; and the idea that while there is a best tool for every job, not every tool is best for every job. The discussion on Twitter was motivating, and I promised everybody I would write up what I think. So this blog post is about how, in my experience, teaching Matlab within psychology, the whole ecosystem and not just the language, harms students more than it helps them in many cases.

Artificial Neural Networks with Random Weights are Baseline Models

Where do the impressive performance gains of deep neural networks come from? Is their power due to the learning rules which adjust the connection weights, or is it simply a function of the network architecture (i.e., many layers)? These two properties of networks are hard to disentangle. One way to tease apart the contributions of network architecture versus those of the learning regimen is to consider networks with randomised weights. To the extent that random networks show interesting behaviours, we can infer that the learning rule has not played a role in them. At the same time, examining these random networks lets us evaluate what learning, by minimising some loss function, adds to the network's abilities over and above what the architecture alone provides.
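The idea can be sketched in a few lines: freeze a hidden layer at random weights and fit only a linear readout on top, so any performance achieved is attributable to the architecture (a random nonlinear projection) rather than to learned features. The toy two-class data below is purely illustrative, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 20 dimensions (a hypothetical
# stand-in for real stimuli; only the random-feature idea matters here).
n, d, h = 200, 20, 100
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(1.0, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# "Architecture without learning": a hidden layer whose weights are
# drawn at random and never trained.
W = rng.normal(0.0, 1.0 / np.sqrt(d), (d, h))
H = np.tanh(X @ W)  # random nonlinear features

# Only the linear readout is fit (here in closed form by least squares).
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
acc = np.mean((H @ beta > 0.5) == y)
print(f"accuracy with frozen random hidden weights: {acc:.2f}")
```

If the readout on random features already performs well, the learning rule cannot be credited with that part of the behaviour; whatever a fully trained network adds on top is the contribution of learning.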

Using the Gini Coefficient to Evaluate Deep Neural Network Layer Representations

Sparsity is an issue in neural representation, and we think it should be measured in artificial neural networks to understand how they represent information at each layer. For example, are a few units doing the work, or is there a distributed pattern across all units (i.e., overlapping sets of units taking part in the representations of cat, car, etc.)? So in What the Success of Brain Imaging Implies about the Neural Code we decided to use the Gini coefficient, inspired by its use in evaluating voxel activations, to measure the degree of sparsity within each layer of Inception-v3 GoogLeNet.
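As a minimal sketch of the measure itself (not the paper's exact pipeline), the Gini coefficient of a layer's activation magnitudes is 0 when every unit is equally active and approaches 1 when a handful of units carry all the activity:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative activation vector.

    0  -> perfectly distributed (all units equally active)
    ~1 -> maximally sparse (a few units do all the work)
    """
    x = np.sort(np.abs(np.asarray(x, dtype=float)))  # ascending magnitudes
    n = x.size
    ranks = np.arange(1, n + 1)
    # Standard rank-based formula for the Gini coefficient.
    return np.sum((2 * ranks - n - 1) * x) / (n * np.sum(x))

# A "distributed" layer: all 1000 units roughly equally active.
dense = 1.0 + 0.01 * np.random.default_rng(0).random(1000)

# A "sparse" layer: 10 of 1000 units carry all the activation.
sparse = np.zeros(1000)
sparse[:10] = 1.0

print(gini(dense))   # near 0
print(gini(sparse))  # near 1
```

Applied per layer to the unit activations evoked by a stimulus, this gives one sparsity number per layer, which is what makes it convenient for comparing representations across the depth of a network.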