<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>neuroplausible</title>
    <description>I am a cognitive computational neuroscientist and this blog is about my research.</description>
    <link>http://neuroplausible.com</link>
    <atom:link href="http://neuroplausible.com/feed.xml" rel="self" type="application/rss+xml" />
    
      <item>
        <title>Path Model Q&amp;A</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://twitter.com/AnnaHenschel&quot;&gt;Anna Henschel&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/StephAllanGla&quot;&gt;Stephanie Allan&lt;/a&gt; kindly invited us to discuss our (&lt;a href=&quot;https://oliviaguest.com&quot;&gt;Olivia Guest&lt;/a&gt; &amp;amp; &lt;a href=&quot;http://www.andreaemartin.com&quot;&gt;Andrea Martin&lt;/a&gt;) work &lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;&lt;em&gt;How computational modeling can force theory building in psychological science&lt;/em&gt;&lt;/a&gt; at their reading group &lt;a href=&quot;https://twitter.com/GlasgowTea&quot;&gt;Glasgow ReproducibiliTea&lt;/a&gt; last week. So we decided to write up the questions we received in this blog post. It was a very enjoyable experience, and we’re very grateful, not least because the group is explicitly aimed at students and more junior people in our field — an audience we believe is especially well placed to improve their scientific reasoning and theory-construction skills.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
&lt;iframe class=&quot;image&quot; width=&quot;373&quot; height=&quot;210&quot; src=&quot;https://www.youtube.com/embed/_WV7EFvFAB8&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
&lt;div class=&quot;figure-caption&quot;&gt;
Video from &lt;a href=&quot;https://reproducibilitea.org/journal-clubs/#Glasgow&quot;&gt;Glasgow ReproducibiliTea reading group&lt;/a&gt;.&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;We went into some background on why we created our path model and why computational modelling is a useful tool for refining both our own thinking and the theories scientists propose. You can watch &lt;a href=&quot;https://www.youtube.com/watch?v=_WV7EFvFAB8&quot;&gt;a video of us talking and answering a subset of the questions&lt;/a&gt;, kindly edited and uploaded by Anna. Below are some of the same &lt;a href=&quot;https://docs.google.com/document/d/12318lapZ6IMGH7PziTItwqluRqiRd6z4FTXmeVuE8QY/edit&quot;&gt;questions&lt;/a&gt; (with very minor edits for typos and clarity, reordered for ease of answering) that we received while we were chatting — some of the questions we answered in the video might not be answered here and vice versa. Also, some questions are not answered below because we are working on follow-up work that addresses them, so to save time and space we’ll just skip those for now.
Super importantly: before reading this, &lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;read the manuscript&lt;/a&gt;, as our answers are long and don’t really make sense without that context.&lt;/p&gt;

&lt;h2 id=&quot;general-questions&quot;&gt;General Questions&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;I might be confusing things here: but what is the difference between computational modeling and careful operationalization (in the empirical circle)?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the difference in our path model between specification and implementation. Captured by this tweet, which served as the inspiration for the section &lt;em&gt;&lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;The pizza problem&lt;/a&gt;&lt;/em&gt;:&lt;/p&gt;

&lt;div class=&quot;center&quot;&gt;
&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Why we need computational modelling: even if everybody agrees what the area of a circle is defined as there are apparently people unwilling to execute the formal model itself and not only that but the results are counterintuitive. 😂&lt;a href=&quot;https://t.co/c1SKgRAZMh&quot;&gt;https://t.co/c1SKgRAZMh&lt;/a&gt;&lt;/p&gt;&amp;mdash; Olivia Guest is on the job market! | Ολίβια Γκεστ (@o_guest) &lt;a href=&quot;https://twitter.com/o_guest/status/1186141920239730689?ref_src=twsrc%5Etfw&quot;&gt;October 21, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;
&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;One question I had from reading the paper was you say that most undergrads in psychology leave knowing a bit about statistical models. From my own experience (aware other degrees might be different) I do not think there was as much focus on theory building. Do you have any ideas how undergrad education could incorporate theory building?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In our experience of working at various universities, there is indeed not much explicit focus on general theory building during undergraduate degrees, setting aside computational or formal modelling of theories. However, what happens very often is that, throughout each module, the historical and current accounts of psychological theories (and indeed frameworks) are very much expounded on. Students learn, for example, how Pavlov explored dogs’ behaviours and created the ideas behind classical conditioning. These historical and contemporary stories of how scientists develop their understanding of a series of phenomena are how theories are built. What might be happening is that students are not yet ready (due to the deluge of information) to also zoom out and notice that theory creation and development is being signposted to them. This is a normal byproduct of learning new things. Some statistics modules also contain very basic references to “theory building” as part of a &lt;a href=&quot;https://en.wikipedia.org/wiki/Falsifiability#Away_from_naive_falsificationism&quot;&gt;falsification&lt;/a&gt;-focused approach.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
&lt;img class=&quot;image&quot; src=&quot;/img/posts/path.png&quot; /&gt;
&lt;div class=&quot;figure-caption&quot;&gt;
One of many possible paths (in blue) that can be used to understand and describe how psychological research is carried out with examples of models at each step shown on the left (in green).
Each research output within psychology can be described with respect to the levels in this path.
The three levels superimposed on a red background (theory, specification, implementation) are those that are most often ignored or left out from research descriptions.
From figure 2 in &lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;Guest and Martin (2020)&lt;/a&gt;.
&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;On the other hand, a specific module on “how to build theories” is indeed typically lacking from undergraduate degrees. Such a module could be created, but it would be incredibly difficult to get right and to make succeed pedagogically. We propose this might be for two related reasons:
&lt;em&gt;a&lt;/em&gt;) creating a novel theoretical account, or indeed building one by modifying existing theories, is extremely difficult for everybody. This is partly why we suspect that, under the status quo, certain people tend to avoid working in the red area of our path model’s depiction in &lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;figure 2&lt;/a&gt;, when that is where more focus is actually needed.
The step in our path model called “theory” is the hardest one and — tangentially — certainly not something that only computational modellers should care about.
Everyone, whether they realise it or admit it, if they are trying to figure something out, cares about theory.
In addition, &lt;em&gt;b&lt;/em&gt;) given how hard theory building (the red area) is in practice, imagine how much harder it is to teach. Pedagogy requires highly skilled individuals who dedicate their lives both to mastering skills and to learning how to teach them to others. How to build theories is what the whole of the philosophy (and history) of science studies. So it is perhaps unsurprising, given all this, that psychology departments do not, or cannot, provide such a dedicated module. But that does not mean it can be ignored or left untaught.&lt;/p&gt;

&lt;p&gt;All this being said, we do not take such a strong pessimistic outlook going forwards, which is why we wrote this manuscript. A realistic and helpful step that can be taken to address theory building, in addition to reading our paper and thinking about how modelling can be applied to your work, is to actively engage with theoreticians in psychology, neuroscience, and cognitive science as broadly as possible. There are experts out there who make it their life goal to develop theories and, sometimes in lesser part, to teach others how to do that also. We hope that any students or faculty reading this blog post might consider asking for, or discussing, a module that involves teaching and engaging with theoreticians’ and modellers’ works.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Probably related to the undergrad curriculum question, but do you have any advice on how someone who hasn’t been trained in comp modeling could “responsibly” start dabbling with it? Or should any serious work on the implementation step only be done in collaboration with a computational modeler?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We suggest people try working through toy examples like the one with the pizzas we work through in the paper (see section &lt;em&gt;&lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;The pizza problem&lt;/a&gt;&lt;/em&gt;).
And then try to make their own, in the simplest way possible.
Then, perhaps by doing that for several explananda, your feeling about yourself in the process will change or settle.
Start with a natural language sentence, then pizza model it!
We go through the basic steps of very simple theory, to very simple specification, to very simple implementation, and you can too. &lt;i class=&quot;twa twa-blush&quot;&gt;&lt;/i&gt;&lt;/p&gt;
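&lt;p&gt;As a hedged illustration (the function and variable names below are ours, for this blog post only, not code from the paper), the whole journey from natural-language theory to runnable implementation fits in a few lines:&lt;/p&gt;

```python
# A minimal sketch of the pizza problem: from a natural-language theory
# ("bigger pizzas mean more food") through a specification (the area of
# a circle) to an implementation we can actually run.
import math

def pizza_area(diameter_inches):
    """Specification: area of a circle, pi times the radius squared."""
    radius = diameter_inches / 2
    return math.pi * radius ** 2

# Implementation: execute the specification for both options.
two_twelves = 2 * pizza_area(12)   # two 12-inch pizzas
one_eighteen = pizza_area(18)      # one 18-inch pizza

print(round(two_twelves, 1))   # 226.2 square inches
print(round(one_eighteen, 1))  # 254.5 square inches
```

&lt;p&gt;Running it shows that the single 18-inch pizza is more food, which is exactly the counterintuitive result this toy example is meant to surface.&lt;/p&gt;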

&lt;p&gt;After all, this process, even more than the core methodological skills, is what is required to make a simple model.
The methods skills can come later and are a process of ongoing growth and change throughout one’s modelling life anyway. But the key is to grasp the endeavor and start, rather than focus on the fancy bells and whistles or feel intimidated by them. The ability to begin with a statement in natural language and then even get to the first step of specification is thrilling!&lt;/p&gt;

&lt;p&gt;For the second part: ideally, yes. But it’s not a requirement, and we think it is important to emphasize that. Collaborating with someone who already models will give you a chance to work with somebody more experienced, and you will likely learn a lot. To sum up, though, we want to emphasize that computational modelling is something everyone and anyone can effectively engage in, with only small extensions of the skills and mindset they are asked to acquire in most undergrad psych degrees in 2020. For a list of modellers in cognitive science, you might want to check out &lt;a href=&quot;https://compcog.science&quot;&gt;compcog.science&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;pizza-problem-questions&quot;&gt;Pizza Problem Questions&lt;/h2&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
&lt;img class=&quot;image&quot; src=&quot;/img/posts/pizza.png&quot; /&gt;
&lt;div class=&quot;figure-caption&quot;&gt;
Only by actually running the formal model of the pizza options can we know which option is more food.
&lt;/div&gt;
&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Is “2 pizzas is more food” a theory in this context? And the pizzas are our data, right?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;“2 pizzas is more food” is a hypothesis. It can be seen as a hypothesis based on our gut feeling (not on an implementation). So we enter the path model at the hypothesis stage and then move to collect data, i.e., measure the amount of food in the pizzas.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;So where would you fit the data that intuitively, people would prefer the two pizzas? Or is that entirely outside the metaphor you’re using?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The fact that people, we, intuitively think 2 pizzas is more food is the core of the pizza problem: an expectation violation (see section &lt;em&gt;&lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;Model of psychological science&lt;/a&gt;&lt;/em&gt;).
Rule two for constraining movement in our path model says that moving downwards is only possible if an expectation violation is resolved. So, in the pizza example, we can be seen as entering the path model at the hypothesis step. Our hypothesis is “two 12’’ pizzas are more food than one 18’’ pizza”, so we measure them (this is obviously not explicitly done because it’s a simple example, but imagine we order both options and measure the food in each). Now the data tells us our hypothesis is wrong. So we must move upwards to an appropriate level and figure out why we were wrong.&lt;/p&gt;
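&lt;p&gt;To make rule two concrete, here is a hedged sketch (in our own notation, not code from the paper) of the expectation-violation check that either licenses moving downwards or forces us back up a level:&lt;/p&gt;

```python
# Sketch of rule two: moving downwards in the path model is only
# possible if no expectation violation occurs; otherwise we move upwards.
import math

def area(diameter):
    # Measure the amount of food, idealised here as circle area.
    return math.pi * (diameter / 2) ** 2

# Hypothesis: two 12-inch pizzas are more food than one 18-inch pizza.
hypothesis_holds = 2 * area(12) > area(18)

if hypothesis_holds:
    print("no violation: continue downwards")
else:
    print("expectation violated: move upwards and revise")
```

&lt;p&gt;The check fails for the pizza hypothesis, so the only legal move is back upwards, just as in the example above.&lt;/p&gt;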

&lt;h2 id=&quot;path-model-questions&quot;&gt;Path Model Questions&lt;/h2&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
&lt;img class=&quot;image&quot; src=&quot;/img/posts/raven.png&quot; /&gt;
&lt;div class=&quot;figure-caption&quot;&gt;
A pet raven.
&lt;/div&gt;
&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;So does the downward stream correspond to a hypothetico-deductive approach and the upward stream to an inductive approach? Or is that too simplistic?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is something we discussed when we were writing this up. Mainly for reasons of article length (there was a 5,000 word limit) we, Andrea and Olivia, decided not to go into this potential interpretation of the path model. To get everybody on the same page, &lt;a href=&quot;https://plato.stanford.edu/entries/logic-inductive/&quot;&gt;inductive reasoning&lt;/a&gt; is when our premises are taken to provide some evidence for the truth of our conclusions. For example, all our lives we see ravens that are black, so we come to believe that “&lt;a href=&quot;https://en.wikipedia.org/wiki/Raven_paradox&quot;&gt;all ravens are black&lt;/a&gt;”. This, of course, could turn out to be false if we encounter a non-black raven (recall Europeans discovering black swans). &lt;a href=&quot;https://en.wikipedia.org/wiki/Deductive_reasoning&quot;&gt;Deductive reasoning&lt;/a&gt;, on the other hand, is when our premises are used to reach logically certain conclusions. For example: all ravens are birds, my pet is a raven, therefore my pet is a bird. Science uses all these kinds of logical inference to various extents, including &lt;a href=&quot;https://plato.stanford.edu/entries/abduction/&quot;&gt;abduction&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One thing to bear in mind as a limitation of the &lt;a href=&quot;https://en.wikipedia.org/wiki/Hypothetico-deductive_model&quot;&gt;typical hypothetico-deductive model&lt;/a&gt; of science is that, in its usual formulation, it does not explicitly include modelling or theory development in a way that satisfies us (e.g., it ignores &lt;a href=&quot;https://plato.stanford.edu/entries/scientific-underdetermination/&quot;&gt;underdetermination&lt;/a&gt;).
It merely describes steps that take us from hypothesis to data collection, so within the context of our model it doesn’t really emphasise theory or any of the other steps in the red area of our figure 2: theory, specification, implementation.
The red area is where we want to place the most emphasis in this paper (see section &lt;em&gt;&lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;Model of psychological science&lt;/a&gt;&lt;/em&gt;). This is a part of psychological science that, we propose, is often left out, with serious repercussions for scientific integrity, openness, reproducibility, and so on.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Can (or can’t) we think of examples where a field was operating rather model free, then models entered and shifted the focus away from what turned out to be actually more important later on, e.g. due to the necessary simplifications?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In our understanding of how science is carried out, there are always models at play. The issue is that they need to be made explicit.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Then why do you say “plz make comp models” if there is no model-free science?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;img class=&quot;image&quot; src=&quot;/img/posts/mememe.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
Our serious request, presented in a tongue-in-cheek fashion.  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;The emphasis should be on the “make”, as in: make your models explicit through the use of computational instantiation. Also, it’s a meme, and memes are well known to be caricatures of the real world. It also touches on the message that making a computational model will force you to acknowledge that your science is not and cannot be model-free, even if you want to think it is for whatever reason.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Should open theory as you propose it in the paper undergo the same preregistration process as other parts of the experimental process?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Being pedantic and emphasising the word “should”: no. We don’t believe that such a tool makes sense for open theory because, as we discuss below, there are other mechanisms that allow for constraining our science. We also do not believe that scientific prescriptivism facilitates useful work. This is a good chance to clarify that our path model is merely a description of the process of doing science. So we can take an existing literature and plot its path. It is able to account for any scientific act. If there are scientific acts (including malpractice, HARKing, etc.) that our account cannot describe (including ones we can describe as skipping steps), we need to amend our model (see section &lt;em&gt;&lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;What our path function model offers&lt;/a&gt;&lt;/em&gt;).
In other words, we wish for our account to be a description of science as it is currently understood, and to facilitate talking about scientific acts in useful, transparent ways.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
&lt;img class=&quot;image&quot; src=&quot;/img/posts/raven.gif&quot; /&gt;
&lt;div class=&quot;figure-caption&quot;&gt;
A &lt;a href=&quot;https://en.wikipedia.org/wiki/White-necked_raven&quot;&gt;raven&lt;/a&gt; being pet.
&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Going back to the question, however, and discussing the role of preregistration: we believe our path model depicts (via the bidirectional arrows) how strong constraints from one level to another can percolate upwards and downwards and refine the other levels. What we mean by this is that following the path model itself allows the types of constraints (that data modellers use preregistration for) to be applied at every step. In other words, preregistration and related tools are a way to diminish various forms of inadvertent or purposeful scientific malpractice at the hypothesis and data levels, and in some cases to promote openness and replication. Preregistration was adopted in psychology and &lt;a href=&quot;https://www.discovermagazine.com/mind/registration-not-just-for-clinical-trials&quot;&gt;neuroscience from clinical trials&lt;/a&gt; and it serves its purpose well. It plays the role that supervening theories or formal models could play, i.e., to constrain the space of hypotheses and data collected. However, in the clinical trials literature they do not typically derive hypotheses to test based on theory; they merely compare groups of patients administered different drugs or none. So preregistration is extremely useful in such cases since no top-down control exists from a supervening theoretical account — there is only a hypothesis (see section &lt;em&gt;&lt;a href=&quot;https://dx.doi.org/10.31234/osf.io/rybh9&quot;&gt;Model of psychological science&lt;/a&gt;&lt;/em&gt;). It is completely unbounded, and researchers, even if extremely careful, are likely to fall into questionable research practices.&lt;/p&gt;

&lt;h2 id=&quot;final-comments&quot;&gt;Final Comments&lt;/h2&gt;

&lt;p&gt;We both really enjoyed this, and it has helped us understand how junior researchers see and understand our work.
Thank you again to &lt;a href=&quot;https://twitter.com/AnnaHenschel&quot;&gt;Anna Henschel&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/eolasinntinn&quot;&gt;Stephanie Allan&lt;/a&gt;! &lt;i class=&quot;twa twa-smiling-face-with-smiling-eyes&quot;&gt;&lt;/i&gt;&lt;/p&gt;

&lt;h2 id=&quot;related-reading&quot;&gt;Related Reading&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.psychologytoday.com/za/blog/how-do-you-know/202004/what-is-the-pizza-problem-in-psychology-research&quot;&gt;What Is the Pizza Problem in Psychology Research?&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/alex_danvers&quot;&gt;Alexander Danvers&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.psychologytoday.com/za/blog/how-do-you-know/202004/what-is-the-pizza-problem-in-psychology-research&quot;&gt;Does Science Need Snake Dream Breakthroughs?&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/alex_danvers&quot;&gt;Alexander Danvers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Sun, 03 May 2020 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/path</link>
        <guid isPermaLink="true">http://neuroplausible.com/path</guid>
      </item>
    
      <item>
        <title>Why Women in Psychology Can't Program</title>
        <description>&lt;p&gt;About two months ago my brother, who works in data science on social psychology data, asked me why his colleagues, who are women and have PhDs in psychology, cannot code and why they use SPSS. He was obviously just venting because when I replied he was surprised. I told him that it was because of sexism and because of lack of proper skills-training (both for statistics and coding) in many psychology departments. That got me thinking about this more…&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;img class=&quot;image&quot; src=&quot;/img/posts/plant_friends.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
  &quot;[That feeling when] your plant friends die cause you spend way too much time in virtual realities&quot; by &lt;a href=&quot;http://echarpes.tumblr.com/post/146520918413/tfw-your-plant-friends-die-cause-you-spend-way-too&quot;&gt;écharpe&lt;/a&gt;
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Why can’t they code?&lt;/em&gt; Because they aren’t taught how to code. &lt;em&gt;Why aren’t they taught?&lt;/em&gt; Because of two main, related, reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Rhetoric and pressure from formal societies like the BPS, as well as gatekeepers who worry about how the field might split, since it is true that an undergrad degree (especially) is essentially zero sum. We just can’t fit it all in, and coding is seen as not that relevant or worthy of being inserted into the curriculum.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Researchers and educators have internalised ideas about what undergrads, who are predominantly women, can and cannot learn. And they list off, based on their ideas, reasons about why it’s just not possible to teach students to code.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/small_olivia.jpg&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    Me and my aforementioned brother. Note how I'm unable to share the computer (hand on mouse, remember white PCs?) and he's close to tears! Don't worry he grew up to be a data scientist, so I can't have traumatised him too much.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;Every time I try to impress on others how important and useful (and fun!) coding is in my field (broadly psychology, cognitive science, neuroscience), there is pushback from a very specific group of people. The good thing is that, in my experience, students in psychology (at all levels, from undergrad to PhD) overwhelmingly agree with me, giving me hope that my messages on this issue get through. &lt;a href=&quot;http://blog.efpsa.org/2016/07/12/python-programming-in-psychology-from-data-collection-to-analysis/&quot;&gt;As do others&lt;/a&gt; &lt;a href=&quot;https://computingforpsychologists.wordpress.com/2012/01/13/why-every-psychology-student-should-learn-to-code/&quot;&gt;of course too&lt;/a&gt;! The bad thing is this specific nay-saying group — they tend to be senior, e.g., profs, male academics (&lt;a href=&quot;https://www.apa.org/monitor/2017/07-08/women-psychology.aspx&quot;&gt;more senior and male are inherently correlated of course given the demographics of the field&lt;/a&gt;) who have the power to change things and introduce coding to students in a useful way: in a directly applied sense, for example, to create experiments (replacing, e.g., E-Prime, which is neither open source nor free) and to analyse data (replacing, e.g., SPSS, which again is neither open source nor free). Replacing closed science tools with open science tools should be in and of itself a useful act too (recall &lt;a href=&quot;http://neuroplausible.com/matlab&quot;&gt;my issues with Matlab&lt;/a&gt;).&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/spss.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    *makes hissing noises at &lt;a href=&quot;https://statistics.laerd.com/spss-tutorials/chi-square-test-for-association-using-spss-statistics.php&quot;&gt;SPSS&lt;/a&gt;*
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;Before I jump in and address the arguments against teaching students, who are mostly young women of course, how to code, I’d like to explain a few things. Firstly, I want to underline that many universities do indeed teach coding to psych students very successfully. &lt;a href=&quot;https://twitter.com/djnavarro/status/1066404110033813505&quot;&gt;UNSW (Danielle Navarro)&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/dalejbarr/status/1066431058764288000&quot;&gt;Glasgow (Dale Barr)&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/martincorley/status/1066829568554819584&quot;&gt;Edinburgh (Martin Corley)&lt;/a&gt;, for example, all three of which teach undergrads R. This is something the nay-sayers all think is literally impossible and will cause undergraduates’ heads to implode or something, so much so that they &lt;a href=&quot;https://twitter.com/djnavarro/status/1066839928045072385?s=19&quot;&gt;ignored Danielle’s book and work&lt;/a&gt; in this &lt;a href=&quot;https://twitter.com/djnavarro/status/1066237296247074818&quot;&gt;monster Twitter thread&lt;/a&gt;, which kick-started me into finally writing up this blog post.&lt;/p&gt;

&lt;p&gt;Secondly, saying (indirectly of course) that women can’t code is an infamous trope related to the &lt;a href=&quot;https://www.polygon.com/features/2013/12/2/5143856/no-girls-allowed&quot;&gt;stereotype of the male geek&lt;/a&gt;. In other words, systemic sexism (undergrads “can’t be expected to learn how to code”, most undergrads in this field are women) plays a huge role. The male geek trope and related ideas contribute to &lt;a href=&quot;https://www.npr.org/sections/money/2014/10/17/356944145/episode-576-when-women-stopped-coding?t=1543235872482&quot;&gt;driving women away from coding and generally STEM subjects in the West&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I am not from the West originally and did not have to suffer the effects of this pervasive trope growing up. I did however study computer science in the UK, which really hammered home to me the effects of gendering subjects. It had never occurred to me before that “women aren’t good at maths” or “women can’t code”. It was just not a concept I had come up against. My computer science undergraduate course was about 90 men and 6 women, 3 of whom were from Cyprus like me. That is even more surprising given there are only about a million Cypriots in the world. More stats on this whole, very depressing, situation &lt;a href=&quot;https://www.theatlantic.com/science/archive/2018/02/the-more-gender-equality-the-fewer-women-in-stem/553592/&quot;&gt;here&lt;/a&gt;. Even more depressingly, coding used to be done predominantly by women. Removing them — yes, actively removing them — not only harmed and harms women to this day but also back then &lt;a href=&quot;https://logicmag.io/05-how-to-kill-your-tech-industry/?fbclid=IwAR2whwbKJSslli3xOr_TUef4cRt_HfEH0wKT6gBmlbjuW02vMQS5B0_Ny6c&quot;&gt;killed the UK tech industry&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/bps.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    *makes hissing noises at the BPS*
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;Thirdly, there is a specific issue with the BPS in the UK. It makes it hard, for various reasons, to make changes in the curriculum, as it controls accreditation. It also biases undergraduate degrees towards training all students to be clinical psychologists and therapists. But can you really choose what job you want to do for the rest of your life at 18? Anyway, that’s another huge issue, but suffice it to say psychology and all undergraduate degrees would serve their students better if they were as broad as possible and gave students the most widely applicable skills, like coding!&lt;/p&gt;

&lt;p&gt;So why do these gatekeepers, people who have the power (even if just 1% more than myself) to change undergraduate and graduate teaching, say “no to coding”? Why do they think the students in psychology just can’t cope? In my opinion, there is an aspect of sexism at play. As I mentioned above, the idea that women (&lt;a href=&quot;https://www.apa.org/monitor/2017/07-08/women-psychology.aspx&quot;&gt;recall undergrads in psych are basically all young women&lt;/a&gt;) can’t code is deeply embedded in culture and in people’s minds. But I would like to put aside whatever inherent biases they are holding onto and address the explicit reasons stated. Below, I will paraphrase the various arguments I have heard, over the years as well as in the &lt;a href=&quot;https://twitter.com/djnavarro/status/1066237296247074818&quot;&gt;monster Twitter thread&lt;/a&gt;, and provide reasons why they are not appropriate.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;People choose psych as an easy undergrad, adding in coding makes it hard, ergo we will lose students.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/danielle_book.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    &quot;Back in the grimdark pre-Snapchat era of humanity (i.e. early 2011), I started teaching an introductory statistics class for psychology students offered at the University of Adelaide, using the R statistical package as the primary tool. I wrote my own lecture notes for the class, which have now expanded to the point of effectively being a book.&quot; — &lt;a href=&quot;http://compcogscisydney.org/learning-statistics-with-r/&quot;&gt;Danielle Navarro&lt;/a&gt;.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;This kind of claim is a little strange because the same goes for stats. In fact, I would argue stats is harder, but that is my bias, given I did an undergrad in compsci and had no stats training until I was pulled through stats against my will by my cognitive science MSc. “Against my will” is true of a lot of things I eventually ended up liking, and that seems to be the case for people in general who responded to that thread, like &lt;a href=&quot;https://twitter.com/lam_bis/status/1066425317529653249&quot;&gt;the undergrads at Glasgow who demanded an even more advanced stats module&lt;/a&gt; or those at &lt;a href=&quot;https://twitter.com/djnavarro/status/1066821181003640834?s=19&quot;&gt;UNSW, where they have many options (R or Python)&lt;/a&gt;. Also, don’t forget that, especially with things like coding and stats, those who are interested tend to teach themselves more as they get more comfortable.&lt;/p&gt;

&lt;p&gt;This kind of claim is also &lt;a href=&quot;https://twitter.com/aeronlaffere/status/1066383087037280257&quot;&gt;a bit off in a more general sense&lt;/a&gt;. I would argue somebody wanting to be a clinical psychologist or therapist (the claim being that most people choose psychology in order to become a therapist or do clinical work) is not necessarily choosing the easy life.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Students want or need GUIs, otherwise they don’t like the course and/or they can’t learn as well.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the reader is aware of what I think about GUIs (see &lt;a href=&quot;http://neuroplausible.com/matlab&quot;&gt;my infamous Matlab post&lt;/a&gt; if not), they already know why I don’t think this is a good idea. Students might want GUIs, but that is not a reason in and of itself to teach them to use one. Of course, GUIs are not all bad; they have their uses. RStudio, for example, is a very appropriate GUI/IDE for teaching stats.&lt;/p&gt;

&lt;p&gt;Students often hate having to rote-learn SPSS menus, after which nothing seems to stick anyway. I wonder why! At the end of the day, students &lt;em&gt;can&lt;/em&gt; learn well without a GUI. For example, at Glasgow &lt;a href=&quot;https://twitter.com/dalejbarr/status/1066411560258756608&quot;&gt;they fell in love with markdown&lt;/a&gt;. I learned how to do loads of things without a GUI, as did loads of people; I’m not special. Rote learning has its uses when the thing being memorised is generalisable: multiplication tables, for example, are; very specific stats GUI menus are not.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;They just CANNOT learn to code. They just can’t. Not everybody can learn to code.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/maddy_petrovich.jpg&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    &quot;Maddy Petrovich, 14, of Wellesley, Mass. first started learning how to use the programming language Scratch when she was 10 years old&quot;, see &lt;a href=&quot;http://www.wbur.org/hereandnow/2012/12/26/computer-programming-kids&quot;&gt;Computer Programming For Kids 8 And Up&lt;/a&gt;.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;&lt;a href=&quot;https://twitter.com/richarddmorey/status/1066998607927304192&quot;&gt;Coding is hard&lt;/a&gt;. Yes. Teaching people anything is hard. Teachers are undervalued. Why are those slightly to much more powerful than me so against teaching students to code? Are they worried they will be held accountable for their teaching? Do they not want to teach because they might be exposed as frauds in terms of their low teaching skills? Maybe…&lt;/p&gt;

&lt;p&gt;English is hard. Not everybody is Maya Angelou or William Shakespeare. Not everybody is going to win a Nobel in Literature. We still learn English grammar at school, write essays, learn how to spell, etc.&lt;/p&gt;

&lt;p&gt;Maths is hard. Not everybody is Grigori Perelman or Maryam Mirzakhani. Not everybody is going to win a Fields Medal. We still learn arithmetic and elementary algebra at school and some lucky people (&lt;a href=&quot;https://twitter.com/estarianne/status/1066378306621067264&quot;&gt;I had no idea the US is this bad at teaching maths&lt;/a&gt;) learn calculus and linear algebra too.&lt;/p&gt;

&lt;p&gt;Saying that some people can’t learn to code is &lt;a href=&quot;https://twitter.com/BayesForDays/status/1066826186335109121?s=19&quot;&gt;a ridiculous&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/aeronlaffere/status/1066382718865412096&quot;&gt;pessimistic&lt;/a&gt;, and &lt;a href=&quot;https://twitter.com/R_Lai/status/1066830970395996161&quot;&gt;elitist argument&lt;/a&gt; that only results in gatekeeping. If &lt;a href=&quot;https://twitter.com/rachelss/status/1067394749974396928&quot;&gt;other subjects&lt;/a&gt; &lt;a href=&quot;https://twitter.com/neurofoo/status/1066841860113645574&quot;&gt;can do it&lt;/a&gt;, we can too.&lt;/p&gt;

&lt;p&gt;Everybody already knows the most basic concepts of coding, pretty much, as they derive from maths and linguistic structures we learn at school and even before that. If-statements, for example, are pretty standard linguistic constructs. Yes, there are more complex things, of course, but so what? There are more complex uses of language and maths too, and we still teach literacy and numeracy to kids. I can’t conjugate &lt;a href=&quot;https://el.wiktionary.org/wiki/%CE%B5%CE%BA%CF%80%CE%BB%CE%AE%CF%83%CF%83%CE%BF%CE%BC%CE%B1%CE%B9#%CE%9A%CE%BB%CE%AF%CF%83%CE%B7&quot;&gt;εκπλήσσομαι&lt;/a&gt; (to my shame), especially in the past tenses in Modern Greek, and I’m a native Greek speaker.&lt;/p&gt;
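&lt;p&gt;For instance, the everyday sentence “if it is raining, take an umbrella” maps almost directly onto code. A minimal Python sketch of my own (the variable names are mine, not from any course):&lt;/p&gt;

```python
# "If it is raining, take an umbrella; otherwise, leave it at home."
is_raining = True

if is_raining:
    action = "take an umbrella"
else:
    action = "leave the umbrella at home"

print(action)  # prints "take an umbrella"
```

&lt;p&gt;The point is not the syntax, but that the conditional structure is one students already use in natural language every day.&lt;/p&gt;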

&lt;p&gt;Also, yeah, sure, some people can’t learn anything but the most rudimentary English or maths. So what? Does that mean nobody should be taught these things because some people can’t learn them? Of course not. The sooner we start &lt;a href=&quot;https://www.theguardian.com/technology/2017/aug/20/two-year-olds-should-learn-to-code-says-computing-pioneer&quot;&gt;teaching kids to code, the better&lt;/a&gt;. I think everybody should be taught &lt;a href=&quot;https://www.euractiv.com/section/digital/infographic/infographic-coding-at-school-how-do-eu-countries-compare/&quot;&gt;to code in primary school and thankfully so do quite a few countries&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Teaching students to code is zero sum and that means removing other parts of the course.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yes and no: it depends on the course! In my opinion, it’s a price we should be willing to pay to give students useful, transferable skills with which to get a job. Knowing how to code is vital to being a productive member of the workforce. We owe it to our students, both in a moral and a “they pay us” way, to equip them properly for the job market. Being able to code is useful both for getting a postdoc and for getting pretty much any other job. It’s also possible to teach them coding as a direct means to an end, as I mentioned: to design experiments, to run statistical analyses, to do computational modelling, etc. This, by the way, is exactly what the nay-sayers think is impossible/implausible, even though both Danielle and I have designed and taught exactly these types of classes.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Teaching students to code is really hard and nobody in my department knows how to teach them properly.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/margaret_hamilton.jpg&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    Margaret Hamilton pretty much invented the modern concept of software as well as writing the code that got humans to the moon. &quot;For Hamilton, programming meant punching holes in stacks of punch cards, which would be processed overnight in batches on a giant Honeywell mainframe computer that simulated the Apollo lander’s work&quot; from &lt;a href=&quot;https://www.wired.com/2015/10/margaret-hamilton-nasa-apollo/&quot;&gt;Wired&lt;/a&gt;.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;Yup, it’s hard. All teaching is hard. It &lt;a href=&quot;https://twitter.com/dalejbarr/status/1066397122310930432&quot;&gt;takes time and is going to require sacrifices&lt;/a&gt;. It’s a massively undervalued and underpaid profession (&lt;a href=&quot;https://www.cairn-int.info/article-E_TGS_005_0091--is-the-feminization-of-a-profession-a.htm&quot;&gt;again, no coincidence it’s undervalued and generally feminised&lt;/a&gt;). But you know what? We do it just fine at UCL, with a lecturer teaching students to code experiments in Python and JavaScript (&lt;a href=&quot;https://www.ucl.ac.uk/pals/people/christos-bechlivanidis&quot;&gt;Christos Bechlivanidis, check out his teaching section&lt;/a&gt;). And they do it just fine at Glasgow, at Edinburgh, and at UNSW, as I mentioned, where they teach R for stats. Teaching will &lt;em&gt;never&lt;/em&gt; be perfect, like anything in life — and always &lt;a href=&quot;https://twitter.com/djnavarro/status/1066438723133333504&quot;&gt;those doing the teaching will learn stuff themselves as they teach&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;We can’t expect clinical people to learn how to code.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can, and it’s really useful for them; they often say so themselves, to me and on Twitter (go look at the places they spoke up in the monster thread, e.g., &lt;a href=&quot;https://twitter.com/_R_Lai_/status/1066827149208944641&quot;&gt;here&lt;/a&gt;). Also see my replies to all the other points, especially considering that we do teach them statistics even though clinical people will not need statistics to, e.g., perform their duties as a therapist.&lt;/p&gt;

&lt;p&gt;Also there is no reason to assume all undergrads want to be clinical all else being equal — maybe they would fall in love with coding and computational modelling if they were exposed to those parts of psych? Just like &lt;a href=&quot;https://twitter.com/dalejbarr/status/1066411560258756608&quot;&gt;they fell in love with markdown&lt;/a&gt;?&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Who are you to say we should or could teach them to code?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Firstly, of course, I’m nobody and you don’t have to listen to me. But, secondly: I’ve done it. I’ve taught psychology A-level students to code. I’ve taught undergraduates in their second year to code in Python to the point where they could code their own neural networks from scratch (that was the explicit goal). I’ve taught PhD students to code, mostly by begging them to teach themselves. I’ve also taught MSc and PhD students to code in a class as a TA (no begging there). It’s very possible, and they actually enjoy it. Funnily enough, in my opinion all the hype around coding, machine learning, and artificial intelligence makes them want to learn.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;We can’t teach them to code because scoping [or any other programming concept] is really hard and time-consuming to learn.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When I taught &lt;a href=&quot;http://neuroplausible.com/connectionism&quot;&gt;a small class of undergraduates neural networks from scratch (no library, just for-loops)&lt;/a&gt;, we didn’t do scoping, or indeed much more than variables, conditionals, and loops, and they [s]coped just fine. They had zero programming experience in Python, and only 2 of 15 had coded before. They all got a first or a 2:1 (the two highest grades in the UK system), and not because I’m too nice.&lt;/p&gt;
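&lt;p&gt;To give a flavour of how little machinery that takes, here is a hypothetical minimal sketch (my own, not the actual course code) of a perceptron learning logical AND using nothing but variables, conditionals, and loops:&lt;/p&gt;

```python
# A perceptron learning logical AND, using only variables, conditionals, and loops.
patterns = [[0, 0], [0, 1], [1, 0], [1, 1]]
targets = [0, 0, 0, 1]
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for p in range(len(patterns)):
        # Weighted sum of the inputs, then a simple threshold activation.
        net = bias
        for i in range(len(weights)):
            net += weights[i] * patterns[p][i]
        output = 1 if net > 0 else 0
        # Nudge the weights and bias in proportion to the error.
        error = targets[p] - output
        for i in range(len(weights)):
            weights[i] += learning_rate * error * patterns[p][i]
        bias += learning_rate * error
```

&lt;p&gt;AND is linearly separable, so the learning rule converges; nothing here needs scoping, functions, classes, or even a library.&lt;/p&gt;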

&lt;p&gt;From a pedagogical perspective, it’s not ideal to introduce scoping or other complex topics straight away anyway. In my own schooling, including my separate computer science A-level, I recall scoping was not introduced until much later. This is a common pedagogical principle. You teach, for example, the three states of matter (solid, liquid, gas). You don’t go “oh, yeah, there’s also &lt;a href=&quot;https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate&quot;&gt;Bose-Einstein condensate&lt;/a&gt;”.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;You can’t teach them how to code during a stats class because some students will have a “handicap” if they have not coded before.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/olivia_poster.jpg&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    Olivia from the before [pre-Dr] times at ICCM in 2012, photo by fellow coder and computational modeller, &lt;a href=&quot;https://twitter.com/BArslan_CogSci&quot;&gt;Burcu Arslan&lt;/a&gt;.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;Somebody will &lt;em&gt;always&lt;/em&gt; have a “&lt;a href=&quot;https://twitter.com/EJWagenmakers/status/1066680953534328832&quot;&gt;handicap&lt;/a&gt;”. That is life, sadly, whether it is maths in your native language versus in English or just the new jargon of a sub-area of science. One always learns new English words in every course, as the technical terms and expressions are often totally new. I would not have known what, e.g., &lt;a href=&quot;https://en.wikipedia.org/wiki/Abscissa_and_ordinate&quot;&gt;abscissa&lt;/a&gt; means in maths if I had not done maths in English before moving to the UK. On the other hand, I do have an advantage over people who only speak English, because I know all the Greek letters we use in maths and I have prima facie access to the meanings of scientific words made up from Greek, e.g., prosopagnosia.&lt;/p&gt;

&lt;p&gt;It’s normal for the skills of students to be varied and a good teacher can and should cope. It’s also why a good teacher should shield students from biases like the male geek trope.&lt;/p&gt;

&lt;p&gt;To end, I just want to say that I really appreciate the massive, unwieldy thread, even though in places I feel people did not read what I said before repeating it back to me. Unfortunately, in some spots I do feel the thread went &lt;a href=&quot;https://twitter.com/IrisVanRooij/status/1066624286956445698?s=19&quot;&gt;a bit too sexist&lt;/a&gt;. The women in the thread (me and Danielle, possibly others) also had direct experience of most of the issues brought up (&lt;a href=&quot;http://compcogscisydney.org/learning-statistics-with-r/&quot;&gt;she even wrote a book on it!&lt;/a&gt;), &lt;a href=&quot;https://twitter.com/djnavarro/status/1066520644240629760?s=19&quot;&gt;so in spots it was pretty egregious&lt;/a&gt;. I guess that is why this post needed to be written: the field has issues that need to be addressed (as does society in general).&lt;/p&gt;

&lt;p&gt;Anyway, especially to the undergrads and other students who spoke up: thanks all for the feedback both before in the monster thread and &lt;a href=&quot;https://twitter.com/o_guest/status/1067079340507217920&quot;&gt;after&lt;/a&gt;! This idea for a post has been in my mind for a few months now, so I am glad to have been given an extra boost of inspiration to write it all out.&lt;/p&gt;

&lt;h2 id=&quot;related-reading&quot;&gt;Related Reading&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.maa.org/external_archive/devlin/LockhartsLament.pdf&quot;&gt;A Mathematician’s Lament&lt;/a&gt;, Paul Lockhart&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.npr.org/sections/money/2014/10/21/357629765/when-women-stopped-coding&quot;&gt;When Women Stopped Coding&lt;/a&gt;, NPR&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://hackernoon.com/a-brief-history-of-women-in-computing-e7253ac24306&quot;&gt;A Brief History of Women in Computing&lt;/a&gt;, Faruk Ateş&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Mon, 26 Nov 2018 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/programming</link>
        <guid isPermaLink="true">http://neuroplausible.com/programming</guid>
      </item>
    
      <item>
        <title>Minority Cognitive Modellers</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://sites.google.com/site/chbergma/&quot;&gt;Christina Bergmann&lt;/a&gt; and I have wanted to create a list of women and non-binary people in computational cognitive modelling for probably around 3-4 years now (she has written more on this on CogTales, &lt;a href=&quot;https://cogtales.wordpress.com/2018/04/25/building-a-network-of-women-and-nonbinary-cognitive-modelers/&quot;&gt;&lt;i&gt;Building a network of women and nonbinary cognitive modelers&lt;/i&gt;&lt;/a&gt;). It was tough when we first tried to do it: because it is not always clear who is a modeller just from looking at a scientific publication, as often being/self-identifying as a modeller is not exactly the same as being on the author list of a modelling paper; because Twitter lists, which were our first idea, sadly live and die by the single person who created them on Twitter; and because at the time neither her nor I really knew how to reach out to get such lists from others, broadening our horizons.&lt;/p&gt;

&lt;p&gt;The other day — thanks to &lt;a href=&quot;https://sites.temple.edu/newcombe/&quot;&gt;Nora Newcombe&lt;/a&gt; for making me think of this by asking about a related topic — I decided my Twitter account was probably a good place to ask the scientific community for the names of women and non-binary people in cognitive modelling. To my pleasant surprise, the tweet (below and &lt;a href=&quot;https://twitter.com/o_guest/status/987013239618883585?tfw_creator=o_guest%20&amp;amp;tfw_site=o_guest%20&amp;amp;ref_src=twsrc%5Etfw&amp;amp;ref_url=http%3A%2F%2Flocalhost%3A4000%2Fwomen-nb-cog-modellers&quot;&gt;here&lt;/a&gt;) generated far more names in the replies than I alone ever could:&lt;/p&gt;

&lt;center&gt;
&lt;blockquote class=&quot;twitter-tweet&quot; data-lang=&quot;en&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Hey, science tweeps! 🧠&lt;br /&gt;&lt;br /&gt;I want names of women &amp;amp; non-binary cognitive modellers.&lt;br /&gt;&lt;br /&gt;👨🏿‍💻👩🏻‍💻👩🏼‍💻👨🏽‍💻👩🏾‍💻👩🏿‍💻👨🏼‍💻👩🏽‍💻👨🏾‍💻&lt;br /&gt;&lt;br /&gt;Tag them here or if they aren&amp;#39;t on twitter post a link to academic websites. Pls RT so we&amp;#39;ve a good chance of finding many! ☺️&lt;/p&gt;&amp;mdash; Olivia Guest (@o_guest) &lt;a href=&quot;https://twitter.com/o_guest/status/987013239618883585?ref_src=twsrc%5Etfw&quot;&gt;April 19, 2018&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;
&lt;/center&gt;

&lt;p&gt;&lt;b&gt;The dedicated website is: &lt;a href=&quot;http://compcog.science&quot;&gt;compcog.science&lt;/a&gt; — and all minorities in cognitive modelling are welcome!&lt;/b&gt; Finally, thank you everybody for all the names and support!&lt;/p&gt;
</description>
        <pubDate>Sun, 22 Apr 2018 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/compcog</link>
        <guid isPermaLink="true">http://neuroplausible.com/compcog</guid>
      </item>
    
      <item>
        <title>Block Practical: Connectionist Models and Cognitive Processes</title>
        <description>&lt;p&gt;This is less of a blog post and more of a materials dump from an elective practical I taught to second year undergraduate students in the Experimental Psychology Department at the University of Oxford. I thoughtlessly deleted the webpage that contained them, assuming no student after 2 years would need them. How wrong I was! I received an email the other day from a Ph.D. student at a university on the other side of the world pretty much asking where these materials had disappeared to. This made me question my assumption nobody was looking at these materials. So to save myself and others from looking for them again, here they are for everybody.&lt;/p&gt;

&lt;p&gt;This elective practical taught second year undergraduates to program in Python at a basic level and to understand the basics of artificial neural networks. The materials proved highly suitable, as my students had done little or no programming before and had not really heard of neural networks (things might have changed now, hype, etc.).&lt;/p&gt;

&lt;p&gt;To clarify, I do not teach this course any more and I will not be updating or using these materials. If you want to use them for your own teaching, they are &lt;a href=&quot;https://creativecommons.org/licenses/by/4.0/&quot;&gt;CC BY 4.0&lt;/a&gt;, and I would super appreciate an &lt;a href=&quot;mailto:o.guest@ucl.ac.uk&quot;&gt;email&lt;/a&gt; or a &lt;a href=&quot;https://twitter.com/o_guest&quot;&gt;tweet&lt;/a&gt; if you use them.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;object class=&quot;image&quot; data=&quot;/img/posts/neural_network.svg&quot; type=&quot;image/svg+xml&quot;&gt;
    &lt;img src=&quot;/img/posts/neural_network.png&quot; /&gt;
  &lt;/object&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    A really basic neural network diagram, by Wikipedia user &lt;a href=&quot;https://commons.wikimedia.org/wiki/File:Colored_neural_network.svg&quot;&gt;Glosser.ca&lt;/a&gt;
  &lt;/div&gt;  
&lt;/div&gt;

&lt;h2 id=&quot;course-materials&quot;&gt;Course Materials&lt;/h2&gt;
&lt;h3 id=&quot;1st-week-introduction-to-programming-and-connectionist-networks&quot;&gt;1st Week: Introduction to Programming and Connectionist Networks&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Code: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week1/pyceptron.py&quot;&gt;pyceptron.py&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Slides: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week1/slides/part_1_slides.pdf&quot;&gt;Part 1: Intro to Programming&lt;/a&gt;, &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week1/slides/part_2_slides.pdf&quot;&gt;Part 2: Intro to Networks&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Exercises: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week1/exercises/exercises.pdf&quot;&gt;Pyceptron&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;2nd-week-going-from-two-network-layers-to-three&quot;&gt;2nd Week: Going from Two Network Layers to Three&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Code: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week2/network_missing.py&quot;&gt;network_missing.py&lt;/a&gt;, &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week2/network_hints.py&quot;&gt;network_hints.py&lt;/a&gt;, &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week2/network.py&quot;&gt;network.py&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Slides: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week2/slides/part_3_slides.pdf&quot;&gt;Part 3: Feedforward Networks&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Exercises: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week2/exercises/exercises.pdf&quot;&gt;Backpropagation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;3rd-week-replicating-a-model&quot;&gt;3rd Week: Replicating a Model&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Code: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week3/network.py&quot;&gt;network.py&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Patterns: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week3/tyler_patterns.csv&quot;&gt;tyler_patterns.csv&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Slides: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week3/slides/part_4_slides.pdf&quot;&gt;Part 4: Replicating a Model&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Exercises: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week3/exercises/exercises.pdf&quot;&gt;Replication of Tyler et al. (2000)&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Tyler, L. K., Moss, H. E., Durrant-Peatfield, M. R., &amp;amp; Levy, J. P. (2000). &lt;strong&gt;&lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week3/tyler_2000.pdf&quot;&gt;Conceptual structure and the structure of concepts: A distributed account of category-specific deficits&lt;/a&gt;&lt;/strong&gt;. &lt;em&gt;Brain and Language&lt;/em&gt;, 75(2), 195-231.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;4th-week-writing-up-experimental-results&quot;&gt;4th Week: Writing up Experimental Results&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Code: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week4/network.py&quot;&gt;network.py&lt;/a&gt;, &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week4/graph.py&quot;&gt;graph.py&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Example file for errors: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week4/errors1000.txt&quot;&gt;errors1000.txt&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Slides: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week4/slides/part_5_slides.pdf&quot;&gt;Part 5: Writing the Report&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Exercises: &lt;a href=&quot;https://github.com/oliviaguest/connectionism/raw/master/week4/exercises/exercises.pdf&quot;&gt;File Input/Output&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;reading-materials&quot;&gt;Reading Materials&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://arxiv.org/abs/1802.01528v2&quot;&gt;The Matrix Calculus You Need For Deep Learning&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://kimplunkett.org.uk/secondtry/page31/page32/index.html&quot;&gt;Essay: A Brief Introduction to Connectionism&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;programming&quot;&gt;Programming&lt;/h2&gt;
&lt;h3 id=&quot;exercises&quot;&gt;Exercises&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.codecademy.com&quot;&gt;Codecademy&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.learnpython.org/&quot;&gt;LearnPython.org&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.codewars.com/&quot;&gt;Codewars&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.codeschool.com/paths/python&quot;&gt;Code School: Python&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;books&quot;&gt;Books&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://learnpythonthehardway.org/book/&quot;&gt;Learn Python the Hard Way&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.openbookproject.net/thinkcs/python/english2e/&quot;&gt;How to Think Like a Computer Scientist: Learning with Python&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.greenteapress.com/thinkpython/&quot;&gt;Think Python: How to Think Like a Computer Scientist&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;inspiration&quot;&gt;Inspiration&lt;/h2&gt;
&lt;h3 id=&quot;libraries&quot;&gt;Libraries&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.python-course.eu/numpy.php&quot;&gt;Numpy Tutorial&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://matplotlib.org/1.4.0/examples/index.html&quot;&gt;Matplotlib Examples&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://hplgit.github.io/scipro-primer/slides/index.html&quot;&gt;A Primer on Scientific Programming with Python&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.scipy-lectures.org&quot;&gt;Scipy Lecture Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;blogs&quot;&gt;Blogs&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://glowingpython.blogspot.co.uk/&quot;&gt;The Glowing Python&lt;/a&gt;: This blog has various examples of interesting code to play with and give you ideas for your own projects.&lt;/li&gt;
  &lt;li&gt;WildML: &lt;a href=&quot;http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/&quot;&gt;Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs&lt;/a&gt;: This blog also has other Machine Learning tutorials.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;video-lectures&quot;&gt;Video Lectures&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/playlist?list=PLg7f-TkW11iX3JlGjgbM2s8E1jKSXUTsG&quot;&gt;Machine Learning&lt;/a&gt;, by The Royal Society&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=2Ei6wFJ9kCc&quot;&gt;The Cognitive and Computational Neuroscience of Categorization, Novelty-Detection, and the Neural Representation of Similarity&lt;/a&gt;, by Mark Gluck&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;online-courses&quot;&gt;Online Courses&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.coursera.org/learn/machine-learning/&quot;&gt;Machine Learning&lt;/a&gt;, by Andrew Ng&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.coursera.org/course/neuralnets&quot;&gt;Neural Networks for Machine Learning&lt;/a&gt;, by Geoffrey Hinton&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://ocw.mit.edu/courses/brain-and-cognitive-sciences/9-641j-introduction-to-neural-networks-spring-2005/index.htm&quot;&gt;Introduction to Neural Networks&lt;/a&gt;, by Sebastian Seung&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;how-to-install-python&quot;&gt;How to install Python&lt;/h2&gt;
&lt;h3 id=&quot;windows-users&quot;&gt;Windows Users&lt;/h3&gt;
&lt;p&gt;This is a little tricky:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Install Python: &lt;a href=&quot;https://www.python.org/ftp/python/2.7.10/python-2.7.10.msi&quot;&gt;download from here&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Install matplotlib, numpy, and scipy using pip. Specifically you need to download the following from &lt;a href=&quot;http://www.lfd.uci.edu/~gohlke/pythonlibs/&quot;&gt;here&lt;/a&gt;:&lt;/p&gt;
    &lt;ul&gt;
      &lt;li&gt;matplotlib-1.4.3-cp27-none-win32.whl&lt;/li&gt;
      &lt;li&gt;numpy-1.10.0b1+mkl-cp27-none-win32.whl&lt;/li&gt;
      &lt;li&gt;scipy-0.16.0-cp27-none-win32.whl&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This requires you to be in the Scripts folder of the Python27 installation and to use the Windows command prompt. For me this looks like:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;C:\Python27\Scripts&amp;gt;pip install NAME_OF_WHEEL_FILE.whl
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;For each of the three wheel files, you need to run a pip command like the one above.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Install PyGTK: &lt;a href=&quot;http://ftp.gnome.org/pub/GNOME/binaries/win32/pygtk/2.24/pygtk-all-in-one-2.24.2.win32-py2.7.msi&quot;&gt;download from here&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;To check that everything works, open network.py and see if it runs without any errors.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;mac-users&quot;&gt;Mac Users&lt;/h3&gt;
&lt;p&gt;I finally managed to do this on my Mac. Use &lt;a href=&quot;http://brew.sh/&quot;&gt;Homebrew&lt;/a&gt; to install matplotlib, numpy, scipy, and pygtk.&lt;/p&gt;

&lt;h3 id=&quot;linux-users&quot;&gt;Linux Users&lt;/h3&gt;
&lt;p&gt;Use your favourite package manager to install matplotlib, numpy, scipy, and pygtk.&lt;/p&gt;
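&lt;p&gt;For example, on a Debian or Ubuntu system of that (Python 2) era, something like the following should work — package names may differ on other distributions:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;sudo apt-get install python-matplotlib python-numpy python-scipy python-gtk2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;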
</description>
        <pubDate>Tue, 07 Nov 2017 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/connectionism</link>
        <guid isPermaLink="true">http://neuroplausible.com/connectionism</guid>
      </item>
    
      <item>
        <title>Git (and GitHub) Cheat Sheet</title>
        <description>&lt;p&gt;I wrote a very simple, very basic introduction after I showed &lt;a href=&quot;//bradlove.org&quot;&gt;my lab&lt;/a&gt; some of the basics of version control (&lt;a href=&quot;https://git-scm.com/&quot;&gt;git&lt;/a&gt; — in the most simple terms, it is a system to keep track of your code) and a website providing such services (&lt;a href=&quot;https://github.com/&quot;&gt;GitHub&lt;/a&gt;). It is meant to be a cheat sheet mnemonic with extra information to help remind them what each of the very basic commands I showed them does. Twitter showed &lt;a href=&quot;https://twitter.com/o_guest/status/926489531112742915&quot;&gt;extreme appreciation&lt;/a&gt; for it — perhaps because &lt;a href=&quot;https://notnownikki.wordpress.com/2017/11/05/learning-git/&quot;&gt;many tutorials go a bit like this&lt;/a&gt; — so maybe it is useful to a wider audience of newbies not just my lab. Two main assumptions I make are that the reader is interested in maintaining their code and knows what a terminal is, as all the following commands are meant to be typed into a &lt;a href=&quot;https://en.wikipedia.org/wiki/Unix-like&quot;&gt;*nix&lt;/a&gt; terminal. See this for a tutorial on how to use the &lt;a href=&quot;http://rik.smith-unna.com/command_line_bootcamp/&quot;&gt;command line&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;img class=&quot;image&quot; src=&quot;/img/posts/github.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
  &lt;a href=&quot;https://octodex.github.com/&quot;&gt;Octocats&lt;/a&gt; are some cute GitHub creatures.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;h2 id=&quot;create-a-new-repo&quot;&gt;Create a New Repo&lt;/h2&gt;
&lt;p&gt;To create a new repository on GitHub go to: &lt;a href=&quot;https://github.com/new&quot;&gt;https://github.com/new&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;get-unlimited-private-repositories&quot;&gt;Get Unlimited Private Repositories&lt;/h2&gt;
&lt;p&gt;Sign up for the academic/student pack on GitHub as many lab projects (also known as &lt;a href=&quot;https://help.github.com/articles/github-glossary/#repository&quot;&gt;repositories&lt;/a&gt; or repos) might need to be private (until publication, of course) and non-academics do not get unlimited free private ones.
&lt;a href=&quot;https://education.github.com/pack&quot;&gt;Sign up here&lt;/a&gt; — they only need your academic (.ac.uk or .edu, etc.) email address to associate it with your account.&lt;/p&gt;

&lt;h2 id=&quot;clone&quot;&gt;Clone&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://help.github.com/articles/cloning-a-repository/&quot;&gt;Cloning&lt;/a&gt; means copying a repository, including its full history, to your machine for the first time.
So to download a whole repository (e.g., my &lt;a href=&quot;https://github.com/oliviaguest/gini&quot;&gt;Gini repo&lt;/a&gt;):&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git clone git@github.com:oliviaguest/gini.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This command makes a new directory, in this case called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gini&lt;/code&gt;. So you need to change directory in order to see your newly downloaded files, i.e., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cd&lt;/code&gt;, like so:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;cd gini
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Then you can &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ls&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ls -a&lt;/code&gt; to see all your files.
Feel free to actually &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clone&lt;/code&gt; my Gini repository as you cannot break it on GitHub since you do not have &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt; rights.&lt;/p&gt;

&lt;h2 id=&quot;add-files&quot;&gt;Add Files&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://help.github.com/articles/adding-a-file-to-a-repository/&quot;&gt;Adding&lt;/a&gt; a file means asking version control to start watching it, but it is not yet in the history of your repository.
Just adding is not enough! All &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt; does is place the file in the queue of files to be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;ted (the next section). You can be explicit and name the files you want to add (all other files will not be added):&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git add filename_1 filename_2... filename_n
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Alternatively, you can add everything (except the things in your &lt;a href=&quot;https://help.github.com/articles/ignoring-files/&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.gitignore&lt;/code&gt;&lt;/a&gt; file):&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git add -A
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If you &lt;em&gt;just&lt;/em&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt; a file, it is not safe yet!
It &lt;em&gt;needs&lt;/em&gt; to be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;ted (next section) to be safely under version control!&lt;/p&gt;
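&lt;p&gt;You can see what &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt; does in a throwaway repository; the directory and file names below are made up just for the demo:&lt;/p&gt;

```shell
# Create a scratch repository just to see what `git add` does.
cd "$(mktemp -d)"                    # throwaway working directory
git init -q demo && cd demo
echo "print('hello')" > analysis.py  # a made-up file to track
git add analysis.py
git status --short                   # the 'A' marks the file as staged, not yet committed
```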

&lt;h2 id=&quot;commit-files&quot;&gt;Commit Files&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://help.github.com/articles/github-glossary/#commit&quot;&gt;Committing&lt;/a&gt; records the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt;ed files as a snapshot in your repository’s local history, although &lt;em&gt;not&lt;/em&gt; on the server (the next section).&lt;/p&gt;

&lt;p&gt;To commit everything you have just &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt;ed:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git commit -m &quot;I've just made some very dramatic changes&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;For your files to be 100% safe you &lt;em&gt;must&lt;/em&gt; also &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt; them (the next section).
Only &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;ting puts files under version control on your local machine, e.g., your laptop, but they will &lt;em&gt;not&lt;/em&gt; be accessible from another computer.&lt;/p&gt;
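&lt;p&gt;Here is the stage-then-commit flow in a throwaway repository (file and commit names invented for the demo); note git needs an identity configured before it will commit:&lt;/p&gt;

```shell
# Scratch repo: stage a file, commit it, and inspect the local history.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com"  # git needs an identity to commit
git config user.name "Your Name"
echo "results" > notes.txt               # a made-up file
git add notes.txt
git commit -q -m "Add first notes"
git log --oneline                        # the commit is now in the local history
```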

&lt;h2 id=&quot;push-files&quot;&gt;Push Files&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://help.github.com/articles/pushing-to-a-remote/&quot;&gt;Pushing&lt;/a&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;s, and therefore files, enters your changes into the version control system on the server as well as on your local machine, so on GitHub (or &lt;a href=&quot;https://overleaf.com&quot;&gt;Overleaf&lt;/a&gt;, or &lt;a href=&quot;https://gitlab.com&quot;&gt;GitLab&lt;/a&gt;, or whatever server you are using).
So pushing is the superior form of backup and version control because it means that there are at least two copies of your work &lt;em&gt;and&lt;/em&gt; its history: one local copy (the stuff you were just working on) and one server-side copy (what you just &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt;ed).&lt;/p&gt;

&lt;p&gt;Once everything you need is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt;ed and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;ted, it is time to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt;.
Many &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt;s may be in one &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;, many &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;s may be in one &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt;.
There is no reason to limit yourself to pushing once a day.
My advice is to push as often as possible.&lt;/p&gt;

&lt;p&gt;As you might guess, to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt; you type:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git push
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
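&lt;p&gt;If you want to watch a push happen end to end without touching GitHub, a bare repository on your own disk can stand in for the server. Everything below is a made-up sandbox:&lt;/p&gt;

```shell
# A local bare repo plays the role of the GitHub server.
cd "$(mktemp -d)"
git init -q --bare server.git              # the stand-in "server"
git clone -q server.git local && cd local  # may warn the repo is empty; that is fine
git config user.email "you@example.com"
git config user.name "Your Name"
echo "data" > results.csv                  # a made-up file
git add results.csv
git commit -q -m "Add results"
git push -q origin HEAD                    # the commit now also lives on the "server"
```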

&lt;h2 id=&quot;check-the-status&quot;&gt;Check the Status&lt;/h2&gt;
&lt;p&gt;To check what is going on, what changes have been made, compare your local repository’s status with that of the server, etc., type:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git status
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This command will often tell you what you need to do given the current changes you have made, e.g., tell you you need to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt; your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;s.
If you are unsure of anything, running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git status&lt;/code&gt; should give you a hint about what the next command you want to run is.&lt;/p&gt;

&lt;p&gt;Importantly, suppose you have made server-side changes, i.e., you did some work on &lt;em&gt;machine A&lt;/em&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt;ed all that work to the server, and now you are on a completely different &lt;em&gt;machine B&lt;/em&gt; and need to get back to working on your repo. Then you need to tell the repo on machine B to check the server for changes.
This can be done by asking your local git, on B, to &lt;a href=&quot;https://help.github.com/articles/fetching-a-remote/&quot;&gt;fetch&lt;/a&gt; the changes from the server:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git fetch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Bear in mind that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fetch&lt;/code&gt; &lt;em&gt;does not change any of your working files&lt;/em&gt;: it downloads the new history from the server and so updates what your local git knows about the changes you made on machine A and then &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt;ed.
After &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fetch&lt;/code&gt;ing, run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git status&lt;/code&gt; and the information on the differences between your local files and those on the server will be correct.
Otherwise, if you just run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git status&lt;/code&gt; you risk getting stale information about what is on the server versus in your local repo.&lt;/p&gt;
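&lt;p&gt;The machine A / machine B situation can be replayed locally, again with a bare repository standing in for the server (all names below are invented); notice at the end that fetching updates what machine B knows without touching its working files:&lt;/p&gt;

```shell
# Replay the two-machine workflow with a local bare repo as the "server".
cd "$(mktemp -d)"
git init -q --bare server.git
git clone -q server.git machineA && cd machineA
git config user.email "you@example.com" && git config user.name "You"
echo "draft 1" > paper.txt
git add paper.txt && git commit -q -m "Draft 1" && git push -q origin HEAD
cd .. && git clone -q server.git machineB  # the second computer
cd machineA                                # back "at machine A": more work
echo "draft 2" > appendix.txt
git add appendix.txt && git commit -q -m "Draft 2" && git push -q origin HEAD
cd ../machineB
git fetch -q   # machine B now knows about "Draft 2"...
git status     # ...and reports it is behind the server
ls             # ...but appendix.txt has not appeared: fetch changed no working files
```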

&lt;h2 id=&quot;discard-changes&quot;&gt;Set Aside Changes&lt;/h2&gt;
&lt;p&gt;If you made some local changes and you do not want them in your way at all — you just want your files back as they were at the last commit — you can run:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git stash
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This sets aside all your local changes that have &lt;em&gt;not&lt;/em&gt; been &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;ted, including &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;add&lt;/code&gt;ed ones, and restores your files to the last commit.&lt;/p&gt;
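&lt;p&gt;The stashed changes are not gone for good: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git stash pop&lt;/code&gt; brings them back. A throwaway demo (file name and contents invented):&lt;/p&gt;

```shell
# Scratch repo: stash an uncommitted edit, then bring it back.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
echo "version 1" > script.py
git add script.py && git commit -q -m "First version"
echo "version 2" > script.py  # an uncommitted local edit
git stash -q                  # the file is back to the committed "version 1"
cat script.py                 # prints: version 1
git stash pop -q              # the edit returns
cat script.py                 # prints: version 2
```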

&lt;h2 id=&quot;pull-files&quot;&gt;Pull Files&lt;/h2&gt;
&lt;p&gt;Pulling means getting stuff from the server. If you have made changes at work and then go home and want to continue working where you left off, you run:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git pull
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Unsurprisingly, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pull&lt;/code&gt; does the opposite of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt;: it downloads to your home computer all the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;commit&lt;/code&gt;s you previously pushed to the server from work. Under the hood it is a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fetch&lt;/code&gt; followed by a merge of those changes into your local files.&lt;/p&gt;
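&lt;p&gt;Here is the work-then-home round trip replayed locally, once more with a bare repository standing in for the server (all names invented):&lt;/p&gt;

```shell
# "work" pushes to the server; "home" pulls the changes down.
cd "$(mktemp -d)"
git init -q --bare server.git
git clone -q server.git work && cd work
git config user.email "you@example.com" && git config user.name "You"
echo "monday" > log.txt
git add log.txt && git commit -q -m "Monday at work" && git push -q origin HEAD
cd .. && git clone -q server.git home  # the home computer
cd work                                # back at work the next day
echo "tuesday" >> log.txt
git commit -qam "Tuesday at work" && git push -q origin HEAD
cd ../home
git pull -q                            # fetch + merge: Tuesday's work arrives
cat log.txt                            # prints: monday, then tuesday
```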
</description>
        <pubDate>Sun, 05 Nov 2017 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/github</link>
        <guid isPermaLink="true">http://neuroplausible.com/github</guid>
      </item>
    
      <item>
        <title>I Hate Matlab: How an IDE, a Language, and a Mentality Harm</title>
        <description>&lt;p&gt;This blog post is inspired by a few Matlab-related tweets of mine, which turned into days-long discussions with fellow science and non-science tweeps.
Those tweets of mine in turn are motivated by two main things: my desire for programming in psychology, neuroscience, and science in general to be taught and taught well, and my desire for students to learn transferable skills more generally.
This blog post is premised on a number of themes which came up on Twitter: the great need for scientists to be able to code; the fact that Matlab is akin to bad training wheels on a bicycle, which never aid with learning to ride but get used again and again because they are better than walking; and the idea that while there is a best tool for every job, not every tool is best for any job.
The discussion on Twitter was motivating and so I promised everybody I would write up what I think.
So this blog post is about how I think teaching Matlab, the whole ecosystem not just the language, within psychology in many cases harms students more than it helps them, in my experience.&lt;/p&gt;

&lt;p&gt;To clarify, Matlab used to be the best tool for many things.
Before things like the &lt;a href=&quot;http://www.numpy.org/&quot;&gt;NumPy&lt;/a&gt;/&lt;a href=&quot;http://matplotlib.org/&quot;&gt;Matplotlib&lt;/a&gt;/&lt;a href=&quot;http://jupyter.org/&quot;&gt;Jupyter&lt;/a&gt; trilogy, it was probably the only tool that had “everything”.
When Matlab first came out, the alternative was &lt;a href=&quot;https://en.wikipedia.org/wiki/Fortran&quot;&gt;Fortran&lt;/a&gt; (which has &lt;a href=&quot;http://stackoverflow.com/questions/3517726/what-is-wrong-with-using-goto&quot;&gt;goto statements&lt;/a&gt;, if you don’t know why this is scary, never mind, you’re lucky).
But I believe it is now more a cause of brain-rot than mind-expanding awesomeness (please do not watch &lt;em&gt;Arrival&lt;/em&gt; just to get this &lt;a href=&quot;https://en.wikipedia.org/wiki/Linguistic_relativity&quot;&gt;Sapir-Whorf&lt;/a&gt; reference).
It is now more user- and science-jail than a freeing experience that allows us to make prototypes fast (it of course still does the latter).&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;img class=&quot;image&quot; src=&quot;/img/posts/matlab.jpg&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    The Matlab logo is a visually appealing render of an &lt;a href=&quot;https://uk.mathworks.com/company/newsletters/articles/the-mathworks-logo-is-an-eigenfunction-of-the-wave-equation.html&quot;&gt;eigenfunction of the wave equation&lt;/a&gt;.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;If you are a proficient coder and love Matlab, then this blog post is &lt;em&gt;not&lt;/em&gt; really for you.
Importantly, my intended audience are those who wish to see an improvement in the teaching of programming within psychology.
I am talking from the perspective of my experiences within my field: psychology and cognitive science.
I have designed from scratch: &lt;a href=&quot;https://github.com/oliviaguest/connectionism&quot;&gt;a course&lt;/a&gt;, that I taught when I was working as a postdoc at Oxford; and &lt;a href=&quot;https://sites.google.com/site/introcompcog/&quot;&gt;a workshop&lt;/a&gt;, while I was a PhD student; both with the aim of teaching the principles of coding before diving into Python specifically for psychology students.
I also want people in science to have dependable transferable skills, to be able to &lt;a href=&quot;https://erikbern.com/2017/03/15/the-eigenvector-of-why-we-moved-from-language-x-to-language-y.html&quot;&gt;move to other languages&lt;/a&gt;, and to have as much fun as possible while learning.
Because of &lt;a href=&quot;https://www.ucl.ac.uk/pals/research/experimental-psychology/blog/women-experimental-psychology-olivia-guest/&quot;&gt;my training&lt;/a&gt;, I am privileged enough to be able to pick up a new language in a couple of hours.
I want others to have such skill-related opportunities too, not only because it is useful for science as an endeavour to have skilled researchers, but for us as individuals: if one emerges from their degree a coder one will have more opportunities (both within and outside science).&lt;/p&gt;

&lt;p&gt;To reiterate my titular claim: the way we teach Matlab in psychology appears to be more harmful than helpful.
I would like us to move beyond Matlab because the ecosystem it provides is a dangerous attractor, which many of my peers and my students involuntarily get sucked into.
In this post I will outline the main reasons why the Matlab ecosystem and language are as provocatively described above.
I intend to use “Matlab” to mean the whole ecosystem: the IDE, the language, and the mentality it brings about because I think they are inseparable.
In the same way “&lt;a href=&quot;http://journal.stuffwithstuff.com/2013/07/18/javascript-isnt-scheme/&quot;&gt;C programmers [allocate] their own damn memory, probably right after building their own computer out of rocks and twigs&lt;/a&gt;”, Matlab coders within psychology also have and create a culture around them aided by the IDE and the pre-existing community they have joined.&lt;/p&gt;

&lt;h2 id=&quot;limited-skill-transfer&quot;&gt;&lt;a name=&quot;limited-skill-transfer&quot;&gt;&lt;/a&gt;Limited Skill Transfer&lt;/h2&gt;
&lt;p&gt;Firstly, Matlab is not sufficient to provide us with a transferable programming skillset.
Matlab provides a programming environment in which nothing, at least superficially, seems hard — and thus nothing meaningful about coding itself is learned.
We do not need to worry about namespaces, nor even functions too much.
And we do not need to learn anything too complex to get some OK-looking figures.
This is great for prototyping — we can produce something that works well enough impressively quickly.
But this comes at a huge cost to us as a newbie coder.
We have not learned any of the important skills that would enable us to pick up another language.
And we will undeniably need to pick up other languages because that is the state psychology is in — e.g., R is becoming the standard for statistical analyses.
Yet we just learned a language that does not help us do that since it did not push us to learn the basics of what other languages have at their core.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/Emacs-screenshot.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    &lt;a href=&quot;https://en.wikipedia.org/wiki/Integrated_development_environment&quot;&gt;IDEs&lt;/a&gt; are extremely useful if you are a proficient coder already. However, they can act more like bad training wheels on a bicycle, hindering deeper learning.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;To put this another way, when one is learning to drive they do not tend to learn to drive using an automatic gearbox.
They learn to drive with a manual gearbox and it is tough.
Learning the harder of the two types, manual, allows us to then easily transfer to the easier of the two if need be.
In the case of USAmericans, &lt;a href=&quot;https://www.quora.com/Why-do-Americans-mostly-drive-automatic-transmission-vehicles&quot;&gt;they mostly learn to drive an automatic gearbox&lt;/a&gt; and almost never learn manual (because their skills do not transfer easily).
Although the metaphor is simplistic, it suffices to explain why Matlab is not the best language to learn first: it is a car with an automatic gearbox.
We cannot easily transfer what we have learned to driving stick; in fact, automatic-only licences exist in my home country and the UK: if you learn just automatic you cannot be expected to know stick, whereas if you learn manual transmission you know “everything”.&lt;/p&gt;

&lt;p&gt;Furthermore, I posit that Matlab knowledge can make shifting to another language harder than having no programming knowledge at all.
Matlab has an &lt;a href=&quot;https://en.wikipedia.org/wiki/Integrated_development_environment&quot;&gt;IDE&lt;/a&gt; that provides &lt;a href=&quot;https://en.wikipedia.org/wiki/Graphical_user_interface&quot;&gt;GUI&lt;/a&gt; functionality that allows us to edit variables dynamically like in &lt;a href=&quot;http://www.sciencemag.org/news/sifter/one-five-genetics-papers-contains-errors-thanks-microsoft-excel&quot;&gt;Excel, which we know causes demonstrable problems&lt;/a&gt;.
It causes some of our students to think that the Matlab IDE is what programming is, in much the same way some of our students think &lt;a href=&quot;https://en.wikipedia.org/wiki/SPSS&quot;&gt;SPSS&lt;/a&gt; is what statistics is.
Furthermore, high dependence on manually editing things is extremely bad because our workflow will not be &lt;a href=&quot;http://oliviaguest.com/doc/guest_rougier_16.pdf&quot;&gt;reproducible nor replicable&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition, all the bells and whistles of the IDE and the GUI never force us to think about variables deeply (since we can always visualise them).
This exercise in keeping a mental model of what the code is doing, writing down what the code should be doing, imagining the data structures, etc., is a skill one needs to be developing.
More than once I have been asked to help people who were editing their variables in the GUI and hence did not properly understand their own code nor how to debug it.
This is not their fault, but had they learned to code without this they would never have picked up such terrible habits.
They had not learned exactly what a loop was, and since a lot of their other helper scripts worked just fine, they had no feedback that editing in the GUI is maladaptive per se.&lt;/p&gt;

&lt;p&gt;In most other languages, there is no GUI and there is no IDE that has the language baked in.
This results in many of us using Matlab by just pressing buttons and hoping something useful will come out the other end.
And this observation, shocking though it may seem, has been backed up by so many of you over chat and Twitter: this is what we and our students do.
The GUI and IDE crutches will be snatched away from us as we will have to learn to code all over again — something we need never have to do if we had learned using a manual gearbox/not Matlab.&lt;/p&gt;

&lt;p&gt;Matlab puts a ceiling on what kinds of projects we can do both in size and in scope.
Optimising for hardware, needing to lower &lt;a href=&quot;https://en.wikipedia.org/wiki/DSPACE&quot;&gt;space&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Time_complexity&quot;&gt;time complexity&lt;/a&gt;, wanting something very specific like web-scraping, etc., are all tougher within Matlab.
This is because Matlab is more a &lt;a href=&quot;https://en.wikipedia.org/wiki/Domain-specific_language&quot;&gt;domain-specific&lt;/a&gt; than a domain-general language, it is centrally controlled, and the GUI and IDE cannot cope with large projects easily (although there is &lt;a href=&quot;http://blogs.mathworks.com/community/2010/02/22/launching-matlab-without-the-desktop/&quot;&gt;a command line mode&lt;/a&gt;, which we will be predominantly uncomfortable with given we only know Matlab).&lt;/p&gt;

&lt;p&gt;To further underline my point, Matlab explicitly teaches us some very unorthodox programming principles.
Some “features” do not exist in (m)any other languages, and certainly not in any we will likely want to learn in the near future (&lt;a href=&quot;https://www.python.org/&quot;&gt;Python&lt;/a&gt;, &lt;a href=&quot;https://en.wikipedia.org/wiki/Compatibility_of_C_and_C%2B%2B&quot;&gt;C/C++&lt;/a&gt;, &lt;a href=&quot;https://www.r-project.org/&quot;&gt;R&lt;/a&gt;, &lt;a href=&quot;https://julialang.org/&quot;&gt;Julia&lt;/a&gt; — even &lt;a href=&quot;https://www.latex-project.org/&quot;&gt;LaTeX&lt;/a&gt;).
For example, we are not allowed to have more than &lt;a href=&quot;https://uk.mathworks.com/help/matlab/matlab_prog/create-functions-in-files.html&quot;&gt;a single externally accessible function per file&lt;/a&gt;, and that file must have the same filename as the function we wish to access.
In essence this means we cannot have more than one function per file if we are, e.g., trying to code up a library in a clear way.
Matlab does not permit us to store all our global variables in one file, e.g., if we need constant values.
Due to all this, Matlab promotes &lt;a href=&quot;https://en.wikipedia.org/wiki/Spaghetti_code&quot;&gt;spaghetti code&lt;/a&gt;.
This adds to why many of us feel embarrassed to share our code online.
We never learned to write neat code because Matlab allows us to be quick and dirty without any repercussions.&lt;/p&gt;

&lt;p&gt;Perhaps most flagrantly, &lt;a href=&quot;https://nickhigham.wordpress.com/2017/03/15/tracing-the-early-history-of-matlab-through-siam-news/&quot;&gt;arrays&lt;/a&gt; &lt;a href=&quot;https://www.mathworks.com/company/newsletters/articles/the-origins-of-matlab.html&quot;&gt;in Matlab&lt;/a&gt; &lt;a href=&quot;https://www.mathworks.com/company/newsletters/articles/the-growth-of-matlab-and-the-mathworks-over-two-decades.html&quot;&gt;start&lt;/a&gt; &lt;a href=&quot;http://stackoverflow.com/questions/22546787/why-does-matlab-have-1-based-indexing&quot;&gt;at 1&lt;/a&gt;.
One has no idea how maladaptive this is until they move outside Matlab.
Computer science &lt;a href=&quot;https://www.johndcook.com/blog/2008/06/26/why-computer-scientists-count-from-zero/&quot;&gt;starts from zero for a reason&lt;/a&gt;.
If we want to learn generalisable skills, learning that indexing starts at 1 will hinder us, perhaps even cause us to introduce very nasty hard-to-find bugs when we move outside the Matlab ecosystem.
All these put together cause us to get more confused by new languages as the baggage we carry with us from learning Matlab needs to be actively unlearned and inhibited.&lt;/p&gt;

&lt;h2 id=&quot;closed-source-means-closed-science&quot;&gt;&lt;a name=&quot;closed-source-means-closed-science&quot;&gt;&lt;/a&gt;Closed Source Means Closed Science&lt;/h2&gt;
&lt;p&gt;Secondly, Matlab is closed source, proprietary, and prohibitively expensive if you have to buy it yourself.
They obfuscate their source code in many cases, meaning bugs are much &lt;a href=&quot;https://uk.mathworks.com/matlabcentral/answers/79714-how-do-we-know-that-matlabs-algorithms-are-working-properly&quot;&gt;harder to spot&lt;/a&gt; and the code impossible to &lt;a href=&quot;http://stackoverflow.com/questions/2470765/can-i-distribute-my-matlab-program-as-open-source&quot;&gt;edit ourselves without risking court action&lt;/a&gt;.
Moreover, using Matlab for science results in &lt;a href=&quot;https://github.com/openjournals/joss/issues/142&quot;&gt;paywalling our code&lt;/a&gt;.
We are by definition making our computational science closed.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
    &lt;img class=&quot;image&quot; src=&quot;/img/posts/Open_Science_-_Prinzipien.png&quot; /&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    The principles of open science, by &lt;a href=&quot;https://commons.wikimedia.org/wiki/User:Gegensystem&quot;&gt;Andreas E. Neuhold&lt;/a&gt;.
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;Many people in the mutually inclusive &lt;a href=&quot;https://en.wikipedia.org/wiki/Open_science&quot;&gt;open science&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Free_software_movement&quot;&gt;open software&lt;/a&gt; movements hope to see &lt;a href=&quot;https://www.software.ac.uk/blog/2016-09-12-quick-and-dirty-analysis-software-being-used-research-python-matlab-and-r&quot;&gt;Matlab surpassed&lt;/a&gt; sooner rather than later and some even think it is inevitable.
By extension, people in these movements tend to think freely deciding to use Matlab (and indeed any closed source software) in science is &lt;a href=&quot;http://academia.stackexchange.com/questions/80790/is-it-ethical-to-use-proprietary-closed-source-software-for-scientific-computa&quot;&gt;at least questionable and at most unethical&lt;/a&gt;.
I believe in free and open software and science, so I am in principle opposed to Matlab’s grip on science.
This does not mean I believe the science done with Matlab is in any way worse in and of itself.
By the same token, scientists who believe in open access do &lt;em&gt;not&lt;/em&gt; think that science published in closed access journals is “bad science” — they think it is not the best publishing practice.
Sadly, one can either be for open science or against it.
So unless Matlab’s “&lt;a href=&quot;https://www.mathworks.com/company/aboutus/soc_mission.html&quot;&gt;core values and conviction to “Do the Right Thing”&lt;/a&gt;” start to also include open source and science, Matlab is incompatible with our aims.&lt;/p&gt;

&lt;p&gt;Something that pains me immensely, and indirectly affects all Matlab coders, is the incompatibility between Matlab versions.
The main reason for this is, unlike &lt;a href=&quot;https://docs.python.org/3/reference/grammar.html&quot;&gt;Python&lt;/a&gt; or &lt;a href=&quot;http://www.nongnu.org/hcb/&quot;&gt;C++&lt;/a&gt; or pretty much all languages out there, there is no &lt;a href=&quot;https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form&quot;&gt;Backus-Naur form&lt;/a&gt; for Matlab to my knowledge.
This means that Matlab has no official and formally-specified grammar, &lt;a href=&quot;https://www.quora.com/Do-programming-languages-have-grammar&quot;&gt;it could&lt;/a&gt;, but it does not.
This is incredibly bad if true, and explains the compatibility problems, making Matlab more like Microsoft Word (which is not backwards compatible and not a programming language).
It also means Mathworks does not have to stick to any rules for the grammar of Matlab, they can change it on the fly.
And by the same token, &lt;a href=&quot;https://en.wikipedia.org/wiki/GNU_Octave&quot;&gt;Octave&lt;/a&gt; compatibility is hard to maintain because the language is not defined.&lt;/p&gt;

&lt;p&gt;Importantly, over and above the fact it is not &lt;a href=&quot;https://opensource.org/docs/osd&quot;&gt;open source&lt;/a&gt;, I propose Matlab (and thus similar languages like Octave and &lt;a href=&quot;http://www.scilab.org/&quot;&gt;SciLab&lt;/a&gt; which &lt;em&gt;are&lt;/em&gt; open) should not be our go-to languages for the reasons outlined herein.
To re-iterate, Matlab is not the best language to teach our students and peers for pedagogical, skill transfer, and practical reasons &lt;em&gt;over and above&lt;/em&gt; the ethical/openness reasons.
These reasons in and of themselves serve to discredit Matlab and demote it from its place as the primary programming language for teaching in psychology.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;&lt;a name=&quot;conclusion&quot;&gt;&lt;/a&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In a nutshell, Matlab creates an environment where we learn how to code without ever doing anything too difficult, without ever developing skills that really transfer, and without ever understanding the core of what coding is about.
I want us to better equip ourselves and our students for life in science, and to give them useful skills for life outside it.
The default position in my part of science is that you teach Matlab and then that is it.
I &lt;em&gt;do not&lt;/em&gt; levy these criticisms against those of us who use and teach a multitude of languages (including Matlab).
I am focussing on the majority of us who teach and use for all intents and purposes &lt;em&gt;only&lt;/em&gt; Matlab.&lt;/p&gt;

&lt;p&gt;GUIs and IDEs are great — just like once we already know how to drive using a manual transmission we can easily switch to automatic — but they predominantly do not push us to develop our skills further.
If we want to, we can switch to a fancy IDE after we already know the tougher stuff.
We learn multiplication tables off by heart &lt;em&gt;before&lt;/em&gt; we switch to using our smartphone as a calculator.
I am assuming we all want to develop our technical skills appropriately, so inevitably we will need to carry out much more complex tasks, like writing a bash script or compiling something from source. These are skills we need to build up slowly over time.
Matlab allows us to live in a lovely world where everything is easy but from which we cannot escape.
Research will throw harder programming tasks at us than quickly making graphs or fast matrix multiplication.
Thus we need to accept that sometimes learning new things can be hard (as well as fun).&lt;/p&gt;

&lt;p&gt;Some will &lt;a href=&quot;http://lorenabarba.com/blog/why-i-push-for-python/&quot;&gt;push for their own favourite language, e.g., Python&lt;/a&gt;.
Nonetheless, as long as we move away from the paradigms the Matlab ecosystem enforces, we will have made serious gains pedagogically.
I hope I have convinced my intended audience that even though Matlab has been the go-to language, things should be and rightfully are changing.
For example, even within engineering, where Matlab historically has a strong hold, its widespread use is being eroded — &lt;a href=&quot;http://to.eng.cam.ac.uk/teaching/committee/SSJC_mins/ssjc_computing.pdf&quot;&gt;the engineering department at Cambridge decided to teach Python instead&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Programming education in psychology can be better.
Other languages provide more replicable and reproducible workflows, more opportunities to learn transferable skills, and communities centered around open source and open science.
If we can teach the Matlab ecosystem, then we can make a small step for great gains and teach a better, more open ecosystem.
We &lt;a href=&quot;https://www.wired.com/2017/03/biologists-teaching-code-survive/&quot;&gt;must teach the core concepts of programming&lt;/a&gt; and we must teach them well.
We are in the midst of a transition from closed source to open source, closed science to open science, and black-box workflows to reproducible and replicable workflows.
Let’s make this transition happen by equipping our students and ourselves with the most appropriate skills.&lt;/p&gt;

&lt;h2 id=&quot;thanks&quot;&gt;Thanks&lt;/h2&gt;
&lt;p&gt;This blog post would not have been possible without discussions with my &lt;a href=&quot;http://software.ac.uk&quot;&gt;Software Sustainability Institute&lt;/a&gt; co-fellows and the institute’s staff, nor without the &lt;a href=&quot;https://twitter.com/o_guest/status/841671820575162368&quot;&gt;many&lt;/a&gt; &lt;a href=&quot;https://twitter.com/o_guest/status/842794088315404288&quot;&gt;many&lt;/a&gt; tweets from you all.&lt;/p&gt;

&lt;h2 id=&quot;related-resources&quot;&gt;Related Resources&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.enthought.com/white-paper-matlab-to-python&quot;&gt;White Paper: MATLAB to Python, A Migration Guide&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.codecademy.com/learn/learn-python&quot;&gt;Learn Python&lt;/a&gt; on Codecademy&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.inferencelab.com/matlab-rant-2/&quot;&gt;matlab rant 2&lt;/a&gt; by &lt;a href=&quot;http://www.inferencelab.com&quot;&gt;Benjamin Vincent&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.rath.org/matlab-is-a-terrible-programming-language.html&quot;&gt;MATLAB is a terrible programming language&lt;/a&gt; by Nikolaus Rath&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://tentacle.net/~chrisr/bookshelf/Stephenson,%20Neal%20-%20In%20the%20Beginning%20was%20the%20Command%20Line/Chapter10.htm&quot;&gt;Morlocks And Eloi At The Keyboard&lt;/a&gt; by Neal Stephenson&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Fri, 17 Mar 2017 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/matlab</link>
        <guid isPermaLink="true">http://neuroplausible.com/matlab</guid>
      </item>
    
      <item>
        <title>Artificial Neural Networks with Random Weights are Baseline Models</title>
        <description>&lt;p&gt;Where do the impressive performance gains of deep neural networks come from?
Is their power due to the learning rules which adjust the connection weights or is it simply a function of the network architecture (i.e., many layers)?
These two properties of networks are hard to disentangle.
One way to tease apart the contributions of network architecture versus those of the learning regimen is to consider networks with randomised weights.
To the extent that random networks show interesting behaviors, we can infer that the learning rule has not played a role in them.
At the same time, examining these random networks allows us to evaluate what learning does add to the network’s abilities over and above minimising some loss function.&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;object class=&quot;image&quot; data=&quot;/img/posts/ann_models_correlation.svg&quot; type=&quot;image/svg+xml&quot;&gt;
    &lt;img src=&quot;/img/posts/ann_models_correlation.png&quot; /&gt;
  &lt;/object&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
    &lt;a href=&quot;https://elifesciences.org/content/6/e21397#fig2&quot;&gt;Figure 2A&lt;/a&gt; from Guest and Love (2017): &quot;For the artificial neural network coding schemes, similarity to the prototype falls off with increasing distortion (i.e., noise). The models, numbered 1–11, are (&lt;i&gt;1&lt;/i&gt;) vector space coding, (&lt;i&gt;2&lt;/i&gt;) gain control coding, (&lt;i&gt;3&lt;/i&gt;) matrix multiplication coding, (&lt;i&gt;4&lt;/i&gt;), perceptron coding, (&lt;i&gt;5&lt;/i&gt;) 2-layer network, (&lt;i&gt;6&lt;/i&gt;) 3-layer network, (&lt;i&gt;7&lt;/i&gt;) 4-layer network, (&lt;i&gt;8&lt;/i&gt;) 5-layer network, (&lt;i&gt;9&lt;/i&gt;) 6-layer network (&lt;i&gt;10&lt;/i&gt;) 7-layer network, and (&lt;i&gt;11&lt;/i&gt;), 8-layer network. The darker a model is, the simpler the model is and the more the model preserves similarity structure under fMRI.&quot;
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;In &lt;em&gt;&lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;What the Success of Brain Imaging Implies about the Neural Code&lt;/a&gt;&lt;/em&gt;, we examined an artificial deep neural network, Inception-v3 GoogLeNet.
This deep trained network preserves the similarity of the input space and thus is &lt;a href=&quot;https://elifesciences.org/content/6/e21397#s2&quot;&gt;functionally smooth&lt;/a&gt;.
Importantly, however, we found that functional smoothness in this deep network breaks down at later layers.
Is this because of the depth of the network, the many layers, or the specific learning regimen?
We sought to explain why this happens by using a baseline, a model with random weights.&lt;/p&gt;

&lt;p&gt;To answer this question, let us consider some much simpler plausible contenders for the neural code — a rudimentary set of models — the components of artificial neural networks: &lt;a href=&quot;https://en.wikipedia.org/wiki/Matrix_multiplication&quot;&gt;matrix multiplication&lt;/a&gt; and some kind of squashing (&lt;a href=&quot;https://en.wikipedia.org/wiki/Sigmoid_function&quot;&gt;sigmoid&lt;/a&gt;, &lt;a href=&quot;https://en.wikipedia.org/wiki/Step_function&quot;&gt;step&lt;/a&gt;, &lt;a href=&quot;https://en.wikipedia.org/wiki/Activation_function&quot;&gt;etc&lt;/a&gt;.) function (in our case, the &lt;a href=&quot;https://en.wikipedia.org/wiki/Hyperbolic_function#Hyperbolic_tangent&quot;&gt;hyperbolic tangent&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The first basic model, matrix multiplication, is how neural networks propagate activation from layer &lt;script type=&quot;math/tex&quot;&gt;\mathbf{m}&lt;/script&gt; to the next &lt;script type=&quot;math/tex&quot;&gt;\mathbf{n}&lt;/script&gt; via the weights &lt;script type=&quot;math/tex&quot;&gt;\mathbf{w}&lt;/script&gt;.
For simplicity, our toy network contains layers &lt;script type=&quot;math/tex&quot;&gt;\mathbf{m}&lt;/script&gt; and &lt;script type=&quot;math/tex&quot;&gt;\mathbf{n}&lt;/script&gt;, which both contain three units.
Thus to calculate the states for &lt;script type=&quot;math/tex&quot;&gt;\mathbf{n}&lt;/script&gt;, we take the matrix product of the previous layer &lt;script type=&quot;math/tex&quot;&gt;\mathbf{m}&lt;/script&gt; and the weights &lt;script type=&quot;math/tex&quot;&gt;\mathbf{w}&lt;/script&gt;:&lt;/p&gt;

&lt;script type=&quot;math/tex; mode=display&quot;&gt;% &lt;![CDATA[
\mathbf{m} \times \mathbf{w}
=
\\
\begin{pmatrix}
x_1 &amp; x_2 &amp; x_3 \\
\end{pmatrix}
\times
\begin{pmatrix}
w_{11} &amp; w_{12} &amp; w_{13} \\
w_{21} &amp; w_{22} &amp; w_{23} \\
w_{31} &amp; w_{32} &amp; w_{33}
\end{pmatrix}
=
\\
\begin{pmatrix}
x_1 w_{11} + x_2 w_{21} + x_3 w_{31} \\
x_1 w_{12} + x_2 w_{22} + x_3 w_{32} \\
x_1 w_{13} + x_2 w_{23} + x_3 w_{33}
\end{pmatrix}
=
\begin{pmatrix}
y_1 &amp; y_2 &amp; y_3
\end{pmatrix}
=
\mathbf{n}
\, %]]&gt;&lt;/script&gt;

&lt;p&gt;where &lt;script type=&quot;math/tex&quot;&gt;x&lt;/script&gt;s represent the units in layer &lt;script type=&quot;math/tex&quot;&gt;\mathbf{m}&lt;/script&gt;, &lt;script type=&quot;math/tex&quot;&gt;w_{ij}&lt;/script&gt; represents a weight in &lt;script type=&quot;math/tex&quot;&gt;\mathbf{w}&lt;/script&gt; from unit &lt;script type=&quot;math/tex&quot;&gt;i&lt;/script&gt; in layer &lt;script type=&quot;math/tex&quot;&gt;\mathbf{m}&lt;/script&gt; to unit &lt;script type=&quot;math/tex&quot;&gt;j&lt;/script&gt; in &lt;script type=&quot;math/tex&quot;&gt;\mathbf{n}&lt;/script&gt;, and &lt;script type=&quot;math/tex&quot;&gt;y_j&lt;/script&gt; is a unit in &lt;script type=&quot;math/tex&quot;&gt;\mathbf{n}&lt;/script&gt;. For example, &lt;script type=&quot;math/tex&quot;&gt;w_{31}&lt;/script&gt; is the weight on the connection between the third unit of the shallower/earlier layer and the first unit of the deeper/later layer (others use other notations).&lt;/p&gt;

&lt;p&gt;Matrix multiplication calculates the states of a layer — easily done in Python using &lt;a href=&quot;http://www.numpy.org/&quot;&gt;NumPy&lt;/a&gt;, specifically &lt;a href=&quot;https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;numpy.dot()&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;numpy&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;m&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;asarray&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;([&lt;/span&gt;&lt;span class=&quot;mf&quot;&gt;0.1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;0.2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# layer m with some dummy input
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;w&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;random&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;randn&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# random weights from m to n
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dot&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;m&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;w&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# pre-synaptic states in n
&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
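&lt;p&gt;As a quick sanity check (an illustrative sketch, not code from the paper), we can confirm that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;numpy.dot()&lt;/code&gt; produces exactly the expanded sums written out in the equation above, element by element:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([0.1, 0.2, 1.3])    # layer m, as in the snippet above
w = rng.standard_normal((3, 3))  # random weights from m to n

n = np.dot(m, w)  # pre-synaptic states in n

# Check against the expanded form of the equation:
# y_j = x_1 w_1j + x_2 w_2j + x_3 w_3j
for j in range(3):
    y_j = m[0] * w[0, j] + m[1] * w[1, j] + m[2] * w[2, j]
    assert np.isclose(n[j], y_j)
print(n)
```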

&lt;p&gt;To apply a squashing function, &lt;script type=&quot;math/tex&quot;&gt;\tanh&lt;/script&gt;, to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;n&lt;/code&gt; above, we may use &lt;a href=&quot;https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;numpy.tanh()&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;tanh&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# post-synaptic states in n
&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Non-linear transformations like the hyperbolic tangent allow the network to have non-linear decision boundaries, e.g., between classes, making it able to capture the statistics of the training set (more &lt;a href=&quot;http://www.kdnuggets.com/2016/08/role-activation-function-neural-network.html&quot;&gt;here&lt;/a&gt; and &lt;a href=&quot;https://www.quora.com/Why-do-neural-networks-need-an-activation-function/answer/Chomba-Bupe&quot;&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In &lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;Guest and Love (2017)&lt;/a&gt; we presented the above as two separate models as well as a combined model; here I have cut to the part where they are combined to form a traditional two-layer network (also known as the &lt;a href=&quot;https://en.wikipedia.org/wiki/Perceptron&quot;&gt;perceptron&lt;/a&gt; model).
As you might have guessed, from two layers we can generalise to many, by continuing to take the matrix product of the output (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;n&lt;/code&gt; in the code above) with some new weights, and so on.&lt;/p&gt;

&lt;p&gt;Running an untrained neural network with random weights allows us to compare more complex (i.e., trained models) with their untrained selves.
We can thus pick apart what aspects of the model are inherent to the architecture itself and which emerge as a function of training.
Networks that have random weights can be given the same training and test sets, although importantly no training has happened yet, and we can examine their internal states and outputs.
This can serve as a guide to understand what the network “knows” a priori.&lt;/p&gt;

&lt;p&gt;As we noted in &lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;Guest and Love (2017)&lt;/a&gt;, networks naturally place items that are similar/proximal in the input space close together in their internal representational space. This is why artificial neural networks are a plausible candidate for the neural code, i.e., they give rise to &lt;a href=&quot;https://elifesciences.org/content/6/e21397#s2&quot;&gt;functionally smooth&lt;/a&gt; representations.
The simple network above can be made deeper and deeper, and we can inspect every layer in it for smoothness for every pattern.
Extending the above, we can do just that, and run the network on two very simple categories:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;table class=&quot;rouge-table&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class=&quot;gutter gl&quot;&gt;&lt;pre class=&quot;lineno&quot;&gt;1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
&lt;/pre&gt;&lt;/td&gt;&lt;td class=&quot;code&quot;&gt;&lt;pre&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;numpy&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;prototypes&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;random&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;randn&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;100&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# two toy categories
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;members&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# how many items per category
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;patterns&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;p&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prototypes&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;m&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;enumerate&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;range&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;members&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)):&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;# for each item, create a pattern that has noise as a function of the
&lt;/span&gt;        &lt;span class=&quot;c1&quot;&gt;# number of items. First item in category has no noise, then 0.05 SD of
&lt;/span&gt;        &lt;span class=&quot;c1&quot;&gt;# noise, then 0.1 SD, and so on.
&lt;/span&gt;        &lt;span class=&quot;n&quot;&gt;patterns&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;append&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;p&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;0.01&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;random&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;randn&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;p&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))))&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;layers&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;20&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# how many layers we want, i.e., how deep is the network
# random weights:
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;w&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;random&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;randn&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;layers&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prototypes&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]),&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prototypes&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]))&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;0.1&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pat&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;patterns&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# for each pattern
&lt;/span&gt;    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;l&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;enumerate&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;range&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;layers&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)):&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;#if we are at the input layer, then set units to pattern
&lt;/span&gt;            &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pat&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;# propagate through each layer
&lt;/span&gt;        &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dot&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;w&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# pre-synaptic states in n
&lt;/span&gt;        &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;tanh&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# post-synaptic states in n
&lt;/span&gt;        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;layers&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;# print the layer, the first five features of the pattern applied at
&lt;/span&gt;            &lt;span class=&quot;c1&quot;&gt;# input and the first five activations in the last layer
&lt;/span&gt;            &lt;span class=&quot;k&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pat&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt;
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Even just by eye-balling the output in the terminal using &lt;a href=&quot;https://github.com/oliviaguest/random-network&quot;&gt;the code above&lt;/a&gt;, we can see that indeed similar items (items within the same category) map to similar outputs, i.e., the network is functionally smooth without any training. We used a &lt;a href=&quot;https://github.com/oliviaguest/brain-imaging-and-the-neural-code/tree/master/random-network&quot;&gt;more complex version of the above&lt;/a&gt; to demonstrate this principle in &lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;Guest and Love (2017)&lt;/a&gt;, where we calculate the correlations between the representations in the input space and in each layer.
However, as we move deeper into the network, we see that functional smoothness breaks down and the network gives, for all intents and purposes, identical outputs for each item within a category, thus losing all structure within it.
We cannot, looking just at the output, predict which input generated it, only which category.&lt;/p&gt;
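&lt;p&gt;For the curious, here is a minimal sketch (not our actual analysis code, which is linked above) of one way to quantify functional smoothness: correlate the pairwise similarity of the patterns in the input space with their pairwise similarity at each layer of a random-weight network:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy category: a prototype plus increasingly noisy distortions of it.
proto = rng.standard_normal(100)
patterns = np.array([proto + 0.05 * k * rng.standard_normal(100)
                     for k in range(10)])

layers = 20
weights = rng.standard_normal((layers, 100, 100)) * 0.1

def pairwise_similarity(reps):
    # Upper triangle of the pattern-by-pattern correlation matrix.
    c = np.corrcoef(reps)
    return c[np.triu_indices_from(c, k=1)]

input_sim = pairwise_similarity(patterns)
acts = patterns
for i in range(layers):
    acts = np.tanh(np.dot(acts, weights[i]))  # propagate through layer i
    # Second-order similarity: how well does this layer preserve the
    # similarity structure of the input space?
    smoothness = np.corrcoef(input_sim, pairwise_similarity(acts))[0, 1]
    print(i, round(smoothness, 3))
```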

&lt;p&gt;Using this result, we can infer that the property of Inception-v3 GoogLeNet, and indeed of any similar deep network, that causes it both to display functional smoothness (at early layers) and to gradually lose it (at deeper layers) is due to the nature of the architecture and not the learning rule.
Because this property is present in simple untrained networks, it cannot be a byproduct of training.&lt;/p&gt;

&lt;p&gt;Importantly, randomising weights can be done to any network with any topology, including to Inception-v3 GoogLeNet itself, to recurrent networks, and so on.
We hope this idea proves to be a useful exercise to others too, as many connectionist and deep network accounts would benefit from an understanding of the inherent properties of the topological configuration versus the fully-trained model.&lt;/p&gt;
</description>
        <pubDate>Mon, 06 Mar 2017 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/random-network</link>
        <guid isPermaLink="true">http://neuroplausible.com/random-network</guid>
      </item>
    
      <item>
        <title>Using the Gini Coefficient to Evaluate Deep Neural Network Layer Representations</title>
<description>&lt;p&gt;Sparsity is an issue in neural representation and we think it should be measured in artificial neural networks to understand how they represent information at each layer.
For example, are a few units doing the work, or is there a distributed pattern across all units (i.e., overlapping units taking part in the representations of &lt;i&gt;cat&lt;/i&gt;, &lt;i&gt;car&lt;/i&gt;, etc.)?
So in &lt;em&gt;&lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;What the Success of Brain Imaging Implies about the Neural Code&lt;/a&gt;&lt;/em&gt; we decided to use the &lt;a href=&quot;https://en.wikipedia.org/wiki/Gini_coefficient&quot;&gt;Gini coefficient&lt;/a&gt;, inspired by its use in evaluating voxel activations, to uncover the degree of sparsity within each of the layers of Inception-v3 GoogLeNet.&lt;/p&gt;

&lt;p&gt;The Gini coefficient is primarily used to give an idea of how wealth is distributed within a group of people, usually a whole nation.
But it can also be used more generally on a vector of numbers, a distribution, to describe how distributed values are (more on this below).&lt;/p&gt;

&lt;div class=&quot;float-right figure&quot;&gt;
  &lt;object class=&quot;image&quot; data=&quot;/img/posts/brain.svg&quot; type=&quot;image/svg+xml&quot;&gt;
    &lt;img src=&quot;/img/posts/brain.png&quot; /&gt;
  &lt;/object&gt;
  &lt;div class=&quot;figure-caption&quot;&gt;
  &lt;a href=&quot;https://elifesciences.org/content/6/e21397#fig2&quot;&gt;Figure 2B&lt;/a&gt; from Guest and Love (2017):  &quot;A deep artificial neural network and the ventral stream can be seen as performing related computations. As in our simulation results, neural similarity should be more difficult to recover in the more advanced layers.&quot;
  &lt;/div&gt;  
&lt;/div&gt;

&lt;p&gt;I looked around online for a dependable and fast Gini coefficient calculator in Python. Unfortunately, what I did find, while useful, were neither fast &lt;a href=&quot;http://planspace.org/2013/06/21/how-to-calculate-gini-coefficient-from-raw-data-in-python/&quot;&gt;nor bug-free&lt;/a&gt;. So I decided to write &lt;a href=&quot;https://github.com/oliviaguest/gini&quot;&gt;one&lt;/a&gt; myself!&lt;/p&gt;

&lt;p&gt;We were dealing with relatively big data, as Inception-v3 GoogLeNet has quite a few layers, so I needed something with relatively low space and time complexity.
In terms of speed, my Gini calculator is quite a lot faster than (the &lt;a href=&quot;https://github.com/pysal/pysal/issues/855&quot;&gt;current implementation of&lt;/a&gt;) PySAL’s Gini coefficient function (see &lt;a href=&quot;http://pysal.readthedocs.io/en/latest/_modules/pysal/inequality/gini.html&quot;&gt;the documentation&lt;/a&gt;), and outputs are indistinguishable up to approximately 6 decimal places. And it is slightly faster than the &lt;a href=&quot;http://www.ellipsix.net/blog/2012/11/the-gini-coefficient-for-distribution-inequality.html&quot;&gt;Gini coefficient function by David on Ellipsix&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://github.com/oliviaguest/gini/blob/master/gini.py&quot;&gt;Gini calculator function&lt;/a&gt; I wrote is based on the third equation &lt;a href=&quot;http://www.statsdirect.com/help/default.htm#nonparametric_methods/gini.htm&quot;&gt;here&lt;/a&gt;, which defines the Gini coefficient as:&lt;/p&gt;

&lt;script type=&quot;math/tex; mode=display&quot;&gt;G = \dfrac{ \sum_{i=1}^{n} (2i - n - 1) x_i}{n  \sum_{i=1}^{n} x_i},&lt;/script&gt;

&lt;p&gt;where &lt;script type=&quot;math/tex&quot;&gt;i&lt;/script&gt; is the index for each data point &lt;script type=&quot;math/tex&quot;&gt;x_i&lt;/script&gt; and &lt;script type=&quot;math/tex&quot;&gt;n&lt;/script&gt; is the total number of data points.
For a very unequal sample, e.g., with 999 zeros and a single one, the Gini coefficient is very high (close to 1). For uniformly distributed random numbers, it will be low, around 0.33, while for a homogeneous sample it is 0. In other words, the lower &lt;script type=&quot;math/tex&quot;&gt;G&lt;/script&gt; is, the more equal the distribution of wealth/numbers is. Check out the &lt;a href=&quot;https://github.com/oliviaguest/gini/blob/master/README.md&quot;&gt;readme file&lt;/a&gt; for &lt;a href=&quot;https://github.com/oliviaguest/gini/blob/master/README.md#examples&quot;&gt;examples&lt;/a&gt; of what can be passed to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gini()&lt;/code&gt; function.&lt;/p&gt;
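&lt;p&gt;To make these cases concrete, here is a minimal transcription of the equation above (a simplified stand-in for illustration, not the optimised function from the repository):&lt;/p&gt;

```python
import numpy as np

def gini_sketch(x):
    # Direct transcription of the equation above; values must be
    # non-negative, and are sorted into ascending order first.
    x = np.sort(np.asarray(x, dtype=float).flatten())
    n = len(x)
    i = np.arange(1, n + 1)  # 1-based index, as in the equation
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

print(gini_sketch(np.ones(1000)))                  # homogeneous: 0.0
print(gini_sketch(np.append(np.zeros(999), 1.0)))  # very unequal: 0.999
print(gini_sketch(np.random.rand(100000)))         # uniform: roughly 0.33
```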

&lt;p&gt;By definition, the Gini calculation requires a 1-dimensional vector of positive, non-zero values sorted in ascending order. All of this is dealt with inside the &lt;a href=&quot;https://github.com/oliviaguest/gini/blob/master/gini.py&quot;&gt;gini function&lt;/a&gt;, so these four assumptions can be violated by the caller, as they are controlled for:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;&lt;table class=&quot;rouge-table&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class=&quot;gutter gl&quot;&gt;&lt;pre class=&quot;lineno&quot;&gt;1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
&lt;/pre&gt;&lt;/td&gt;&lt;td class=&quot;code&quot;&gt;&lt;pre&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;numpy&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;gini&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;&quot;&quot;&quot;Calculate the Gini coefficient of a numpy array.&quot;&quot;&quot;&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# All values are treated equally, arrays must be 1d:
&lt;/span&gt;    &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;flatten&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;amin&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;# Values cannot be negative:
&lt;/span&gt;        &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;amin&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# Values cannot be 0:
&lt;/span&gt;    &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;0.0000001&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# Values must be sorted:
&lt;/span&gt;    &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;sort&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# Index per array element:
&lt;/span&gt;    &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;arange&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;shape&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# Number of array elements:
&lt;/span&gt;    &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;shape&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# Gini coefficient:
&lt;/span&gt;    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;  &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;np&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)))&lt;/span&gt;
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;And that is all there is to it! The only two inviolable assumptions it makes are that you have &lt;a href=&quot;http://www.numpy.org/&quot;&gt;numpy&lt;/a&gt; installed and that you pass it something like a numpy array (use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;np.asarray()&lt;/code&gt; to convert anything &lt;a href=&quot;https://docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays&quot;&gt;array-like&lt;/a&gt; into a numpy array).&lt;/p&gt;

&lt;p&gt;But what does this have to do with artificial neural networks? Well, instead of people within a nation, we can consider the units within a layer. And instead of people’s wealth we can look at units’ activations after we have propagated input to the layer. So given an input to a layer, we can measure how sparse (unequal) the distribution of activations is. A single number can give us an idea of how localist or distributed the representation the layer has learned is. Averaging over the Gini coefficients for all the possible inputs to a layer, we can calculate how localist or distributed the representations within a layer are in general.&lt;/p&gt;
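&lt;p&gt;As a concrete sketch of this: take an activations matrix with one row per input and one column per unit, compute the Gini coefficient of each row, and average. Everything here is hypothetical (random numbers stand in for real propagated activations), and the inline &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gini()&lt;/code&gt; restates the equation above so the sketch is self-contained:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;import numpy as np

def gini(x):
    # Restatement of the equation above; the full function defined
    # earlier additionally handles zero and negative values.
    x = np.sort(np.asarray(x, dtype=float).flatten())
    n = x.shape[0]
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

# Hypothetical activations: 100 inputs propagated through a 50-unit layer.
rng = np.random.default_rng(0)
activations = rng.random((100, 50))

# One Gini coefficient per input, then the layer-level average:
per_input = np.array([gini(row) for row in activations])
layer_gini = per_input.mean()  # low for distributed, high for localist layers
&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;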

&lt;p&gt;Inception-v3 GoogLeNet has output that is trained to be completely sparse/localist, since it uses &lt;a href=&quot;https://en.wikipedia.org/wiki/One-hot&quot;&gt;one-hot coding&lt;/a&gt; for the classes. Representing the output classes using one-hot coding ensures that outputs are trained to be both orthogonal and localist (two properties that do not by definition entail one another). In terms of the targets it learns per input image, the network’s output will have a Gini coefficient of approximately 1. And in general, we can expect the output’s Gini to be close to 1, except in the very rare cases where the network is completely unsure of what we have shown it.&lt;/p&gt;
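&lt;p&gt;To see where the “approximately 1” comes from, consider a one-hot target with &lt;script type=&quot;math/tex&quot;&gt;n&lt;/script&gt; classes: after sorting, only the final data point is non-zero, so the equation above reduces to&lt;/p&gt;

&lt;script type=&quot;math/tex; mode=display&quot;&gt;G = \dfrac{(2n - n - 1) \cdot 1}{n \cdot 1} = \dfrac{n - 1}{n},&lt;/script&gt;

&lt;p&gt;which is close to 1 for any realistically large number of classes, e.g., 0.999 for 1,000 classes.&lt;/p&gt;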

&lt;p&gt;On the other hand, in other/lower layers, we find that the Gini coefficient can be high or low: it decreases and increases non-monotonically as a function of layer depth.
Although it shows a rough trend of becoming higher as we move deeper, this is by no means a given.
This implies that the network does not, by definition, represent things in a more localist way as we move towards deeper/later layers.
Of the two layers we talked about in the aforementioned &lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;Guest and Love (2017)&lt;/a&gt;, the penultimate layer has a Gini coefficient of 0.579 and the shallower layer 0.947 (on the specific stimuli we used). At the output, the average Gini is, as expected given the training regime, 0.941. These and other points with respect to the representational contents of each layer are discussed in depth in &lt;a href=&quot;http://dx.doi.org/10.7554/eLife.21397&quot;&gt;Guest and Love (2017)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;See here for a translation of this article by Daniel Morales into Spanish: &lt;a href=&quot;http://www.neuromexico.org/2017/03/18/el-coeficiente-de-gini-como-herramienta-para-evaluar-las-representaciones-de-las-capas-en-redes-neuronales-profundas/&quot;&gt;El coeficiente de Gini como herramienta para evaluar las representaciones de las capas en redes neuronales profundas&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Sun, 26 Feb 2017 00:00:00 +0000</pubDate>
        <link>http://neuroplausible.com/gini</link>
        <guid isPermaLink="true">http://neuroplausible.com/gini</guid>
      </item>
    
  </channel>
</rss>
