COSYNE 2021

Teaching material and code

Recurrent Neural Network (RNN) Tutorial Materials

In February 2021, I had the honor of hosting the COSYNE (Computational and Systems Neuroscience) Tutorial on recurrent neural network models in neuroscience. Given the virtual format of the meeting, I’m pleased to be able to make the materials accessible to a larger community of learners.

Lecture Materials

Both lectures are available on the COSYNE YouTube channel under a Creative Commons license (see the links below each lecture title). To request access to the lecture slides, please email kanaka.rajan@mssm.edu and kanaka-admin@stellatecomms.com.

Lecture 1: Foundational elements of recurrent neural network (RNN) models

Description: Dr. Rajan covered the basic design elements (“building blocks”) of neural network models, the role of recurrent connections, linear and non-linear activity, the types of time-varying activity (“dynamics”) produced by RNNs, input-driven vs. spontaneous dynamics, and more. The main goal of this lecture is to convey both the tractability and the computational power of RNN models, and to explain why they have become such a crucial part of the neuroscientific arsenal.
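
To make these ideas concrete, here is a minimal sketch (written for this page in Python/NumPy; it is not the tutorial's own code) of the classic continuous-time rate RNN from Lecture 1, tau * dx/dt = -x + J tanh(x), with a random Gaussian coupling matrix J. For gain g > 1 the autonomous network generates chaotic spontaneous activity (Sompolinsky, Crisanti & Sommers, 1988), and driving the same network with a strong external input suppresses that chaos, which is the input-driven vs. spontaneous distinction discussed in the lecture (Rajan, Abbott & Sompolinsky, 2010).

    import numpy as np

    # Minimal rate-RNN sketch: tau * dx/dt = -x + J @ tanh(x)
    # Parameter choices below are illustrative, not taken from the tutorial.
    rng = np.random.default_rng(0)
    N, T, dt, tau, g = 200, 2000, 0.1, 1.0, 1.5       # g > 1 -> chaotic regime
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights

    x = rng.standard_normal(N)           # initial network state
    rates = np.empty((T, N))
    for t in range(T):
        r = np.tanh(x)                   # nonlinear firing rates
        x = x + dt / tau * (-x + J @ r)  # Euler step; add an input term here
        rates[t] = r                     #   to explore input-driven dynamics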

Lecture 2: Applications of RNNs in the field of neuroscience

Description: Dr. Rajan reviewed some of the ways in which RNNs have been applied in neuroscience to leverage existing experimental data, to infer mechanisms inaccessible from measurements alone, and to make predictions that guide experimental design. The main goal of this lecture is to appreciate what sorts of insight can be gained by “training RNNs to do something” in a manner consistent with experimental data collected from the biological brain.
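
As a deliberately simplified illustration of fitting a model to data, the sketch below (again Python/NumPy written for this page, with stand-in arrays rather than real recordings) uses ridge regression to fit a linear readout of RNN rates to a target signal. Data-constrained methods such as those in Rajan, Harvey & Tank (2016), Andalman et al. (2019), and Perich et al. (2020) go much further, adjusting the recurrent weights themselves so that every model unit reproduces a measured neural trace.

    import numpy as np

    # Illustrative sketch: fit readout weights w so that rates @ w tracks a
    # "recorded" signal. Both arrays are stand-ins generated on the spot.
    rng = np.random.default_rng(1)
    T, N = 2000, 200
    rates = np.tanh(rng.standard_normal((T, N)))   # placeholder RNN rates
    target = np.sin(np.linspace(0, 8 * np.pi, T))  # placeholder recording

    lam = 1.0                                      # ridge penalty
    w = np.linalg.solve(rates.T @ rates + lam * np.eye(N), rates.T @ target)
    mse = np.mean((rates @ w - target) ** 2)       # goodness of fit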

Problem Set

If you’d like to deepen your understanding of recurrent neural networks, I encourage you to complete a problem set created in collaboration with the COSYNE Tutorial TAs (please see below – thank you all!). The problem set has detailed instructions and questions to work through. Problems 1 and 2 are intermediate and should be done after watching Lecture 1; Problem 3 is advanced and should be done after watching Lecture 2. Solutions are available in Julia, MATLAB, and Python.

Suggested process for beginners:

  • Make a personal copy of the Colab notebook / scripts linked below

  • Read through the script (with solutions) and annotate it with questions and/or your understanding of the process

  • Attempt to solve Problems 1 and 2 using the solutions as a scaffold

Suggested process for advanced students or for use in a class setting:

  • Make a personal copy of the Colab notebook / scripts linked below

  • Delete the solutions

  • Solve the problems in your personal copy of the Colab notebook

  • Check against the solutions provided

Solution Scripts

References

  • Abbott, L. F., Rajan, K. & Sompolinsky, H. (2011). Interactions between Intrinsic and Stimulus-Evoked Activity in Recurrent Neural Networks. The Dynamic Brain, 65–82. 

  • Andalman, A. S., Burns, V. M., Lovett-Barron, M., Broxton, M., Poole, B., Yang, S. J., et al. (2019). Neuronal Dynamics Regulating Brain and Behavioral State Transitions. Cell, 177(4), 970–985.e20. doi:10.1016/j.cell.2019.02.037

  • Barak, O. (2017). Recurrent neural networks as versatile tools of neuroscience research. Current Opinion in Neurobiology, 46, 1–6. doi:10.1016/j.conb.2017.06.003

  • Botvinick, M., Wang, J. X., Dabney, W., Miller, K. J., & Kurth-Nelson, Z. (2020). Deep Reinforcement Learning and Its Neuroscientific Implications. Neuron, 107(4), 603–616. doi:10.1016/j.neuron.2020.06.014

  • Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., & Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487(7405), 51–56. doi:10.1038/nature11129

  • Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1), 1–47. doi:10.1093/cercor/1.1.1

  • Gallego, J. A., Perich, M. G., Naufel, S. N., Ethier, C., Solla, S. A., & Miller, L. E. (2018). Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications, 9(1). doi:10.1038/s41467-018-06560-z

  • Gallego, J. A., Perich, M. G., Chowdhury, R. H., Solla, S. A., & Miller, L. E. (2020). Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience, 23(2), 260–270. doi:10.1038/s41593-019-0555-4

  • Girko, V. L. (1985). Spectral theory of random matrices. Russian Mathematical Surveys, 40(1), 77–120. doi:10.1070/rm1985v040n01abeh003528

  • Insanally, M. N., Carcea, I., Field, R. E., Rodgers, C. C., DePasquale, B., Rajan, K., … Froemke, R. C. (2019). Spike-timing-dependent ensemble encoding by non-classically responsive cortical neurons. eLife, 8. doi:10.7554/elife.42409

  • Kaufman, M. T., Churchland, M. M., Ryu, S. I., & Shenoy, K. V. (2014). Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience, 17(3), 440–448. doi:10.1038/nn.3643

  • Kriegeskorte, N. (2008). Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. doi:10.3389/neuro.06.004.2008

  • Kriegeskorte, N. (2015). Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. Annual Review of Vision Science, 1(1), 417–446. doi:10.1146/annurev-vision-082114-035447

  • Lindsay, G. W. (2020). Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future. Journal of Cognitive Neuroscience, 1–15. doi:10.1162/jocn_a_01544

  • Marblestone, A. H., Wayne, G., & Kording, K. P. (2016). Toward an Integration of Deep Learning and Neuroscience. Frontiers in Computational Neuroscience. doi:10.3389/fncom.2016.00094 

  • Mastrogiuseppe, F., & Ostojic, S. (2018). Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron, 99(3), 609–623.e29. doi:10.1016/j.neuron.2018.07.003

  • McClelland, J. L., & Botvinick, M. M. (2020, November 9). Deep Learning: Implications for Human Learning and Memory. PsyArXiv. doi:10.31234/osf.io/3m5sb

  • Oby, E. R., Golub, M. D., Hennig, J. A., Degenhart, A. D., Tyler-Kabara, E. C., Yu, B. M., et al. (2019). New neural activity patterns emerge with long-term learning. Proceedings of the National Academy of Sciences, 116(30), 15210–15215. doi:10.1073/pnas.1820296116

  • Perich, M. G., Gallego, J. A., & Miller, L. E. (2018). A Neural Population Mechanism for Rapid Learning. Neuron, 100(4), 964–976.e7. doi:10.1016/j.neuron.2018.09.030

  • Perich, M. G., & Rajan, K. (2020). Rethinking brain-wide interactions through multi-region “network of networks” models. Current Opinion in Neurobiology, 65, 146–151. doi:10.1016/j.conb.2020.11.003

  • Perich, M. G., Arlt, C., Soares, S., Young, M. E., Mosher, C. P., Minxha, J., et al. (2020). Inferring brain-wide interactions using data-constrained recurrent neural network models. bioRxiv. doi:10.1101/2020.12.18.423348

  • Pinto, L., Rajan, K., DePasquale, B., Thiberge, S. Y., Tank, D. W., & Brody, C. D. (2019). Task-Dependent Changes in the Large-Scale Dynamics and Necessity of Cortical Regions. Neuron, 104(4), 810–824.e9. doi:10.1016/j.neuron.2019.08.025

  • Rajan, K., & Abbott, L. F. (2006). Eigenvalue Spectra of Random Matrices for Neural Networks. Physical Review Letters, 97(18). doi:10.1103/physrevlett.97.188104

  • Rajan, K., Abbott, L. F., & Sompolinsky, H. (2010). Stimulus-dependent suppression of intrinsic variability in recurrent neural networks. BMC Neuroscience, 11(S1). doi:10.1186/1471-2202-11-s1-o17

  • Rajan, K., Abbott, L. F., & Sompolinsky, H. (2010). Stimulus-dependent suppression of chaos in recurrent neural networks. Physical Review E, 82(1). doi:10.1103/physreve.82.011903

  • Rajan, K., Harvey, C. D., & Tank, D. W. (2016). Recurrent Network Models of Sequence Generation and Memory. Neuron, 90(1), 128–142. doi:10.1016/j.neuron.2016.02.009

  • Sadtler, P. T., Quick, K. M., Golub, M. D., Chase, S. M., Ryu, S. I., Tyler-Kabara, E. C., et al. (2014). Neural constraints on learning. Nature, 512(7515), 423–426. doi:10.1038/nature13665

  • Schuecker, J., Goedeke, S., & Helias, M. (2018). Optimal Sequence Memory in Driven Random Networks. Physical Review X, 8(4). doi:10.1103/physrevx.8.041029

  • Semedo, J. D., Zandvakili, A., Machens, C. K., Yu, B. M., & Kohn, A. (2019). Cortical Areas Interact through a Communication Subspace. Neuron, 102(1), 249–259.e4. doi:10.1016/j.neuron.2019.01.026

  • Sommers, H. J., Crisanti, A., Sompolinsky, H., & Stein, Y. (1988). Spectrum of Large Random Asymmetric Matrices. Physical Review Letters, 60(19), 1895–1898. doi:10.1103/physrevlett.60.1895

  • Sompolinsky, H., Crisanti, A., & Sommers, H. J. (1988). Chaos in Random Neural Networks. Physical Review Letters, 61(3), 259–262. doi:10.1103/physrevlett.61.259

  • Stopfer, M., Jayaraman, V., & Laurent, G. (2003). Intensity versus Identity Coding in an Olfactory System. Neuron, 39(6), 991–1004. doi:10.1016/j.neuron.2003.08.011

  • Tao, T., & Vu, V. (2010). Random Matrices: the Distribution of the Smallest Singular Values. Geometric and Functional Analysis, 20(1), 260–297. doi:10.1007/s00039-010-0057-8

  • Vogels, T. P., Rajan, K., & Abbott, L. F. (2005). Neural network dynamics. Annual Review of Neuroscience, 28(1), 357–376. doi:10.1146/annurev.neuro.28.061604.135637

  • Yang, G. R., Cole, M. W., & Rajan, K. (2019). How to study the neural mechanisms of multiple tasks. Current Opinion in Behavioral Sciences, 29, 134–143. doi:10.1016/j.cobeha.2019.07.001

  • Yang, G. R., & Wang, X.-J. (2020). Artificial Neural Networks for Neuroscientists: A Primer. Neuron, 107(6), 1048–1070. doi:10.1016/j.neuron.2020.09.005

  • Young, M. E., Spencer-Salmon, C., Mosher, C., Tamang, S., Rajan, K., & Rudebeck, P. H. (2020). Temporally-specific sequences of neural activity across interconnected corticolimbic structures during reward anticipation. bioRxiv. doi:10.1101/2020.12.17.423162

  • Yu, B. M., Cunningham, J. P., Santhanam, G., Ryu, S. I., Shenoy, K. V., & Sahani, M. (2009). Gaussian-Process Factor Analysis for Low-Dimensional Single-Trial Analysis of Neural Population Activity. Journal of Neurophysiology, 102(1), 614–635. doi:10.1152/jn.90941.2008

Teaching Assistants

Thank you to the fearless group of TAs who helped shape the tutorial with their hard work and quick intellect.

  • Brian DePasquale, Princeton University

  • Rainer Engelken, Columbia University

  • Josue Nassar, Stony Brook University

  • Satpreet Singh, University of Washington

  • Agostina Palmigiano, Columbia University

  • Manuel Beiran, École Normale Supérieure - Paris

  • Itamar Landau, Stanford University

  • Emin Orhan, New York University

  • Zoe Ashwood, Princeton University

  • Francesca Mastrogiuseppe, University College London

  • Lee Susman, Technion Israel Institute of Technology

  • Friedrich Schuessler, Technion Israel Institute of Technology

  • Greg Handy, University of Pittsburgh

  • Danielle Rager, Carnegie Mellon University

  • Chen Beer, Technion Israel Institute of Technology

  • Ulises Pereira Obilinovic, New York University

  • Adrian Valente, École Normale Supérieure - Paris

  • Ayesha Vermani, Stony Brook University