Past Research

Allen Institute for Neural Dynamics

Reproducible System Neuroscience Notebooks to Facilitate Data Sharing and Collaborative Reuse with Open Science Datasets

Contributed to a published open-science resource that makes neural data analysis more reproducible, more accessible, and easier to learn through interactive notebooks built on public electrophysiology datasets.

Contributors: Carter Peene, Jerome Lecoq, Ahad Bawney

Project Motivation

Reproducibility and validation are major challenges in neuroscience: multimodal analyses and visualizations are complex, and results are often difficult to replicate without accessible, open-source pipelines. The OpenScope Databook addresses this by bringing together the four essentials of reproducible research: accessible data, accessible computational resources, a reproducible environment, and clear usage documentation. We designed this project as a globally accessible resource that teaches scientists, educators, and students how to use computational and machine-learning methods to analyze, visualize, and reproduce neural data analyses with open-source data. The notebooks can be run through Binder, Thebe, DANDIHub, or locally. Our goal is to build a framework and culture that prioritizes findings that are valid, transparent, and easy to reproduce across projects and datasets.

My Contributions

  • Built reproducible workflows for modeling mouse visual cortex activity and relating neural dynamics to visual stimuli

  • Developed notebooks for latent embedding and neuron classification using public electrophysiology data

  • Improved notebook structure, explanations, and usability to make complex workflows more accessible across neuroscience backgrounds

  • Helped translate public NWB datasets into clear, reusable analysis examples for open-science education and research

Notebook Contributions

Using CEBRA-Time to Identify Latent Embedding

This notebook demonstrates the CEBRA-Time algorithm, which turns mouse visual-cortex recordings into a compact 3D “map” of neural activity during passive movie viewing, making complex population patterns easier to see and compare. The goal is to learn an embedding that stays consistent across repeated presentations of the same stimulus, revealing a shared neural representation of the movie that recurs from repeat to repeat.
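The notebook itself fits a CEBRA model to real recordings; as a minimal, library-free sketch of the idea of across-repeat consistency, the snippet below builds a synthetic 3D embedding (a shared trajectory plus noise for each movie repeat) and scores how well each repeat's trajectory correlates with the average of the others. All names, shapes, and the 10-repeat/900-frame layout are illustrative assumptions, not the notebook's actual data.

```python
import numpy as np

# Illustrative stand-in for a CEBRA-Time output: suppose `embedding` has
# shape (n_repeats * n_frames, 3), i.e. one 3D point per movie frame,
# concatenated across repeats. Here it is synthesized as a shared
# trajectory plus per-repeat noise (assumed values, not real data).
rng = np.random.default_rng(0)
n_repeats, n_frames = 10, 900                      # e.g. a 30 s movie at 30 Hz
template = rng.standard_normal((n_frames, 3))      # shared latent trajectory
embedding = np.concatenate(
    [template + 0.1 * rng.standard_normal((n_frames, 3))
     for _ in range(n_repeats)]
)

# Reshape to (repeats, frames, dims) so each repeat is one trajectory.
per_repeat = embedding.reshape(n_repeats, n_frames, 3)

def repeat_consistency(per_repeat: np.ndarray) -> float:
    """Mean correlation of each repeat's trajectory with the mean of the rest.

    Values near 1 indicate the embedding traces out nearly the same path
    on every presentation of the movie.
    """
    scores = []
    for i in range(len(per_repeat)):
        others = np.delete(per_repeat, i, axis=0).mean(axis=0)
        r = np.corrcoef(per_repeat[i].ravel(), others.ravel())[0, 1]
        scores.append(r)
    return float(np.mean(scores))
```

With low per-repeat noise, `repeat_consistency(per_repeat)` is close to 1, which is the kind of stimulus-locked structure the notebook looks for in the learned embedding.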

Classifying Neuronal Types

This notebook shows how to separate two common neuron types, fast-spiking and regular-spiking, using simple, interpretable features from each neuron’s recorded electrical waveform. It then visualizes what makes the groups different (waveform shape and response patterns) using Allen Institute OpenScope data, while keeping the workflow easy to adapt to other datasets.
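One widely used, interpretable waveform feature for this split is the trough-to-peak duration: fast-spiking units have narrow spikes, regular-spiking units broad ones. A minimal sketch of that idea, using synthetic waveforms, an assumed 30 kHz sampling rate, and an assumed ~0.4 ms threshold (all illustrative; the notebook's exact features and cutoffs may differ):

```python
import numpy as np

def trough_to_peak_ms(waveform, sampling_rate_hz=30_000):
    """Time from the waveform's trough to the following peak, in ms."""
    trough = int(np.argmin(waveform))
    peak = trough + int(np.argmax(waveform[trough:]))
    return (peak - trough) / sampling_rate_hz * 1000.0

def classify_neuron(waveform, sampling_rate_hz=30_000, threshold_ms=0.4):
    """Label a unit fast-spiking (narrow) or regular-spiking (broad)."""
    width = trough_to_peak_ms(waveform, sampling_rate_hz)
    return "fast-spiking" if width < threshold_ms else "regular-spiking"

# Synthetic example waveforms (not real data): a negative trough followed
# by a rebound peak that arrives early (narrow) or late (broad).
t = np.arange(60)
narrow = -np.exp(-((t - 20) ** 2) / 10) + 0.5 * np.exp(-((t - 26) ** 2) / 20)
broad = -np.exp(-((t - 20) ** 2) / 10) + 0.5 * np.exp(-((t - 40) ** 2) / 60)
```

Here `classify_neuron(narrow)` returns `"fast-spiking"` and `classify_neuron(broad)` returns `"regular-spiking"`; the single-threshold rule keeps the split easy to explain and easy to adapt to other datasets.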