Block Practical: Connectionist Models and Cognitive Processes

This is less of a blog post and more of a materials dump from an elective practical I taught about three years ago to second-year undergraduate students in the Experimental Psychology Department at the University of Oxford. I thoughtlessly deleted the webpage that hosted the materials, assuming no student would need them after two years. How wrong I was! The other day I received an email from a Ph.D. student at a university on the other side of the world asking, more or less, where these materials had disappeared to. That made me question my assumption that nobody was looking at them. So, to save myself and others from hunting for them again, here they are for everybody.

Git (and GitHub) Cheat Sheet

This is based on the very simple, very basic introduction I wrote up after showing my lab the basics of version control (git, which in the simplest terms is a system for keeping track of your code) and a website providing such services (GitHub). It is meant to be a mnemonic cheat sheet, with extra information to remind them what each of the very basic commands I showed them does. Twitter showed extreme appreciation for it (perhaps because many tutorials go a bit like this), so maybe it is useful to a wider audience of newbies, not just those in my lab. I make two main assumptions: that the reader is interested in maintaining their code, and that they know what a terminal is, as all the following commands are meant to be typed into the terminal. See this for a tutorial on how to use the command line.
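For a flavour of the kind of commands the cheat sheet covers, a minimal first session might look like the sketch below. The file name and repository URL are placeholders, not real ones:

```
git init                              # turn the current directory into a repository
git status                            # see which files git thinks have changed
git add analysis.py                   # stage a file, i.e., mark this version to be saved
git commit -m "Add analysis script."  # save a snapshot of everything staged
git remote add origin https://github.com/you/project.git  # link to a GitHub repository
git push -u origin master             # upload your commits to GitHub
```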

I Hate Matlab: How an IDE, a Language, and a Mentality Harm

This blog post is inspired by a few Matlab-related tweets of mine, which turned into days-long discussions with fellow science and non-science tweeps. Those tweets were in turn motivated by two main things: my desire for programming in psychology, neuroscience, and science in general to be taught, and taught well; and my desire for students to learn transferable skills more generally. The post is premised on a number of themes which came up on Twitter: the great need for scientists to be able to code; the idea that Matlab is akin to bad training wheels on a bicycle, which never help with learning to ride but are used again and again because they are better than walking; and the idea that while there is a best tool for every job, not every tool is best for any job. The discussion on Twitter was motivating, so I promised everybody I would write up what I think. This blog post, then, is about how teaching Matlab within psychology (the whole ecosystem, not just the language) in many cases, in my experience, harms students more than it helps them.

Artificial Neural Networks with Random Weights are Baseline Models

Where do the impressive performance gains of deep neural networks come from? Is their power due to the learning rules which adjust the connection weights, or is it simply a function of the network architecture (i.e., many layers)? These two properties of networks are hard to disentangle. One way to tease apart the contributions of the network architecture from those of the learning regimen is to consider networks with randomised weights. To the extent that random networks show interesting behaviours, we can infer that the learning rule has played no role in them. At the same time, examining these random networks allows us to evaluate what learning, i.e., minimising some loss function, adds to the network’s abilities over and above what the architecture alone provides.
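To make that logic concrete, here is a minimal sketch of a random-weights baseline in NumPy (my own toy illustration, not code from the post). The data, dimensions, and the XOR-like rule are all made up; the point is that the hidden weights stay at their random initial values and only a linear readout is ever fitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (made up): the label depends nonlinearly on the input,
# so a linear readout on the raw input alone should be near chance.
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] * X[:, 1] > 0).astype(float)            # XOR-like rule
Xtr, Xte, ytr, yte = X[:1000], X[1000:], y[:1000], y[1000:]

# One hidden layer whose weights are random and never trained.
W = rng.normal(size=(d, 512)) / np.sqrt(d)           # fixed random weights

def hidden(A):
    return np.maximum(A @ W, 0.0)                    # ReLU activations

def readout_accuracy(Ftr, Fte):
    """Fit only a linear readout by least squares; report test accuracy."""
    Ftr1 = np.hstack([Ftr, np.ones((len(Ftr), 1))])  # add a bias column
    Fte1 = np.hstack([Fte, np.ones((len(Fte), 1))])
    w, *_ = np.linalg.lstsq(Ftr1, 2 * ytr - 1, rcond=None)
    return ((Fte1 @ w > 0) == yte).mean()

# Any gain of the second score over the first is attributable to the
# random architecture alone, since no hidden weights were ever learned.
print("readout on raw input:      ", readout_accuracy(Xtr, Xte))
print("readout on random features:", readout_accuracy(hidden(Xtr), hidden(Xte)))
```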

Using the Gini Coefficient to Evaluate Deep Neural Network Layer Representations

Sparsity is an issue in neural representation, and we think it should be measured in artificial neural networks to understand how they represent information at each layer. For example, are a few units doing the work, or is there a distributed pattern across all units (i.e., overlapping sets of units taking part in the representations of cat, car, etc.)? So in What the Success of Brain Imaging Implies about the Neural Code we decided to use the Gini coefficient, inspired by its use in evaluating voxel activations, to uncover the degree of sparsity within each of the layers of Inception-v3 GoogLeNet.
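The coefficient itself is straightforward to compute from a vector of unit activations. Below is a minimal NumPy sketch (mine, not the code used in the paper), based on the standard sorted-values formula: 0 means activity is spread perfectly evenly across units, and values approaching 1 mean a few units carry almost everything.

```python
import numpy as np

def gini(activations):
    """Gini coefficient of a non-negative activation vector:
    0 means activity is spread evenly over all units (fully distributed);
    values near 1 mean almost all activity sits in one unit (sparse)."""
    x = np.sort(np.abs(np.asarray(activations, dtype=float)))
    n = x.size
    if x.sum() == 0.0:
        return 0.0
    ranks = np.arange(1, n + 1)                # 1..n over the sorted values
    return 2.0 * (ranks * x).sum() / (n * x.sum()) - (n + 1) / n

# Sanity checks on two hypothetical 1000-unit layers:
dense = np.ones(1000)                          # every unit equally active
sparse = np.zeros(1000)
sparse[0] = 1.0                                # one unit does all the work
print(gini(dense))                             # 0.0
print(gini(sparse))                            # 0.999
```

Applied to the activations of each layer in turn, this yields one number per layer, which makes it easy to see whether representations become sparser or more distributed with depth.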