The Zebrafish Activity Prediction Benchmark measures progress on the problem of predicting cellular-resolution neural activity throughout an entire vertebrate brain.
Predicting future behavior is a fundamental test of understanding across the natural sciences. How accurately can whole-brain neural activity be predicted from past activity? Larval zebrafish offer a unique opportunity to address this question: they are the only vertebrate species in which whole-brain activity can be recorded at cellular resolution. For the Zebrafish Activity Prediction Benchmark (ZAPBench), we collected and extensively processed a novel 4D light-sheet microscopy recording of over 70,000 neurons, on which we propose a forecasting benchmark with the aim of catalyzing the development of increasingly accurate models of brain activity.
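To make the forecasting task concrete, here is a toy sketch that is independent of the actual ZAPBench data and API: activity is viewed as a (timesteps × neurons) matrix, a model sees a short context window, and it predicts the following steps. The matrix size, context length, and horizon below are illustrative placeholders, not the benchmark's actual splits, and the "copy the last frame" baseline is just the simplest possible predictor.

```python
import random

# Fake a tiny (T timesteps x N neurons) activity matrix; ZAPBench itself
# records over 70,000 neurons per timestep.
random.seed(0)
T, N = 100, 8
activity = [[random.random() for _ in range(N)] for _ in range(T)]

context_len, horizon = 4, 8  # illustrative values, not the benchmark's
past = activity[:context_len]                         # model input
future = activity[context_len:context_len + horizon]  # prediction target

# Naive baseline: repeat the last observed frame for every future step.
prediction = [list(past[-1]) for _ in range(horizon)]

# Mean absolute error between prediction and ground truth.
mae = sum(abs(p - f)
          for pred_row, true_row in zip(prediction, future)
          for p, f in zip(pred_row, true_row)) / (horizon * N)
print(f"copy-last-frame MAE: {mae:.3f}")
```

Any learned model can be dropped in by replacing the `prediction` line; the error metric and context/horizon split stay the same.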
All details are in our upcoming ICLR 2025 paper.
You can interactively explore the datasets by clicking the cards below. Further information on the datasets is available.
Data usage may be significant. For optimal viewing, we recommend using a desktop computer.
Test set performance of initial baselines; see the manuscript for details.
To make it easy to reproduce and extend ZAPBench, we provide the full code on GitHub, as well as tutorial-style Colab notebooks:
Click to open answers, or expand all.
Where can I ask questions?
For general questions, feature requests, or broader conversations about ZAPBench, you can use our discussion page. If you encounter specific problems with our code, please report them by opening an issue. To contact us directly, you can reach out via email.
How can I stay in the loop?
We'll share key updates, e.g., related to the forthcoming connectome, through our email announcement list. To subscribe, send a blank message to zapbench-announce+subscribe@googlegroups.com, then confirm by replying to the join-request message you receive.
This work was done in collaboration with colleagues across Google Research, HHMI Janelia, and Harvard University, including Jan-Matthis Lueckmann, Alexander Immer, Alex Bo-Yuan Chen, Peter H. Li, Mariela D. Petkova, Nirmala A. Iyer, Luuk Willem Hesselink, Aparna Dev, Gudrun Ihrke, Woohyun Park, Alyson Petruncio, Aubrey Weigel, Wyatt Korff, Florian Engert, Jeff W. Lichtman, Misha B. Ahrens, Michał Januszewski, and Viren Jain. We are grateful to the many individuals, listed in the manuscript, who provided valuable feedback and support.