Memory in Artificial and Real Intelligence (MemARI)

MemARI is a NeurIPS 2022 workshop. Contact us via our Google Group (memari-workshop@googlegroups.com or https://groups.google.com/g/memari-workshop).

MemARI logo

About

One of the key challenges for AI is to understand, predict, and model data over time. Pretrained networks should be able to temporally generalize, or adapt to shifts in data distributions that occur over time. Current state-of-the-art (SOTA) models still struggle to model and understand data over long temporal durations: for example, SOTA models are limited to processing several seconds of video, and powerful transformer models are still fundamentally limited by their attention spans. On the other hand, humans and other biological systems are able to flexibly store and update information in memory to comprehend and manipulate multimodal streams of input. Cognitive neuroscientists propose that they do so via the interaction of multiple memory systems with different neural mechanisms.

What types of memory systems and mechanisms already exist in current AI models? First, there are extensions of the classic proposal that memories are formed via synaptic plasticity mechanisms: information can be stored in the static weights of a pre-trained network, or in fast weights that more closely resemble short-term plasticity. Then there are persistent memory states, such as those in LSTMs or in external differentiable memory banks, which store information as neural activations that can change over time. Finally, there are models augmented with static databases of knowledge, akin to a high-precision long-term or semantic memory in humans.

When is it useful to store information in each of these mechanisms, and how should models retrieve from them or modify the information therein? How should we design models that combine multiple memory mechanisms to address a problem? Furthermore, do the shortcomings of current models call for novel memory systems that retain information over different timescales, or with different capacity or precision? Finally, what can we learn from memory processes in biological systems that may advance our models in AI? We aim to explore how a deeper understanding of memory mechanisms can improve task performance across many application domains, such as lifelong / continual learning, reinforcement learning, computer vision, and natural language processing.
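
To make the distinction between these mechanisms concrete, the short NumPy sketch below contrasts two of them: information frozen into the static ("slow") weights of a pretrained layer versus a decaying Hebbian fast-weight matrix written on the fly, in the spirit of short-term plasticity. It is only an illustrative toy under assumed names and hyperparameters (W_slow, F, decay, eta), not a model drawn from the workshop or from any of the papers below.

    import numpy as np

    # Illustrative sketch only: a "slow" static weight matrix (long-term memory
    # baked in by pretraining) combined with a Hebbian fast-weight matrix that
    # is written and decayed at inference time (short-term memory).
    rng = np.random.default_rng(0)
    d = 64                                             # feature dimension (arbitrary)

    W_slow = rng.normal(scale=d ** -0.5, size=(d, d))  # static, "pretrained" weights
    F = np.zeros((d, d))                               # fast weights, initially empty
    decay = 0.95                                       # forgetting rate for fast weights
    eta = 0.5                                          # fast-weight write strength

    def step(x, F):
        """Read from slow + fast weights, then write the new association into F."""
        y = np.tanh(W_slow @ x + F @ x)                # read: long-term + short-term memory
        F = decay * F + eta * np.outer(y, x)           # write: decaying outer-product update
        return y, F

    # Store a pattern, let a few unrelated inputs pass, then probe with a noisy cue.
    key = rng.normal(size=d)
    _, F = step(key, F)
    for _ in range(3):
        _, F = step(rng.normal(size=d), F)

    noisy_key = key + 0.1 * rng.normal(size=d)
    y_with, _ = step(noisy_key, F)                     # response with recent-history memory
    y_without, _ = step(noisy_key, np.zeros_like(F))   # response from static weights alone
    print("effect of fast-weight memory on the response:",
          float(np.linalg.norm(y_with - y_without)))

Analogous sketches could be written for persistent recurrent states or for retrieval from a static knowledge base; the design questions above concern when each kind of store should be written, read, and overwritten.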

Important dates

Paper submission deadline: September 29, 2022, 23:59 Anywhere on Earth (AoE)
Final decisions: October 20, 2022, 17:00 Pacific time
Spotlight presentation deadline (5 min video): November 10, 2022, 23:59 AoE
Camera-ready deadline (with 3 min video): November 15, 2022, 23:59 AoE
Workshop: December 2, 2022, in person in New Orleans

Schedule

Time Event
8:30 am Opening (15 min)
8:45 am Talk 1 (45 min)
9:30 am Talk 2 (45 min)
10:15 am Spotlights (20 min)
10:35 am Posters (60 min)
11:35 am Lunch + posters (1 h 25 min)
1:00 pm Talk 3 (45 min)
1:45 pm Talk 4 (45 min)
2:30 pm Break (15 min)
2:45 pm Talk 5 (45 min)
3:30 pm Panel (1 h 25 min)
4:55 pm Closing (5 min)

Speakers

Ida Momennejad, Microsoft Research
Janice Chen, Johns Hopkins University
Sainbayar Sukhbaatar, Facebook AI Research
Sepp Hochreiter, Johannes Kepler University Linz
Hava Siegelmann, University of Massachusetts Amherst

Organizers

Mariya Toneva, Max Planck Institute for Software Systems
Uri Hasson, Princeton University
Kenneth Norman, Princeton University
Alex Huth, The University of Texas at Austin
Shailee Jain, The University of Texas at Austin
Javier Turek, Intel Labs
Vy Vo, Intel Labs
Mihai Capotă, Intel Labs

Accepted papers

  1. Constructing Memory: Consolidation as Teacher-Student Training of a Generative Model. Eleanor Spens, Neil Burgess. Paper. Video.
  2. Biological Neurons vs Deep Reinforcement Learning: Sample efficiency in a simulated game-world. Forough Habibollahi, Moein Khajehnejad, Amitesh Gaurav, Brett Joseph Kagan. Paper. Video.
  3. Leveraging Episodic Memory to Improve World Models for Reinforcement Learning. Julian Coda-Forno, Changmin Yu, Qinghai Guo, Zafeirios Fountas, Neil Burgess. Paper. Video.
  4. Differentiable Neural Computers with Memory Demon. Ari Azarafrooz. Paper. Video.
  5. A Universal Abstraction for Hierarchical Hopfield Networks. Benjamin Hoover, Duen Horng Chau, Hendrik Strobelt, Dmitry Krotov. Poster. Video.
  6. Interpolating Compressed Parameter Subspaces. Siddhartha Datta, Nigel Shadbolt. Poster. Video.
  7. Associative memory via covariance-learning predictive coding networks. Mufeng Tang, Tommaso Salvatori, Yuhang Song, Beren Gray Millidge, Thomas Lukasiewicz, Rafal Bogacz. Paper. Video.
  8. Self-recovery of memory via generative replay. Zhenglong Zhou, Geshi Yeung, Anna C Schapiro. Paper. Video.
  9. The Emergence of Abstract and Episodic Neurons in Episodic Meta-RL. Badr AlKhamissi, Muhammad ElNokrashy, Michael Spranger. Paper. Video.
  10. Multiple Modes for Continual Learning. Siddhartha Datta, Nigel Shadbolt. Poster. Video.
  11. The Opportunistic PFC: Downstream Modulation of a Hippocampus-inspired Network is Optimal for Contextual Memory Recall. Hugo Chateau-Laurent, Frederic Alexandre. Paper. Video.
  12. Recall-gated plasticity as a principle of systems memory consolidation. Jack Lindsey, Ashok Litwin-Kumar. Poster. Video.
  13. Learning to Reason and Memorize with Self-Questioning. Jack Lanchantin, Shubham Toshniwal, Jason E Weston, Arthur Szlam, Sainbayar Sukhbaatar. Paper. Poster. Video.
  14. Informing generative replay for continual learning with long-term memory formation in the fruit fly. Brian S Robinson, Justin Joyce, Raphael Norman-Tenazas, Gautam K Vallabha, Erik Christopher Johnson. Paper. Video.
  15. Characterizing Verbatim Short-Term Memory in Neural Language Models. Kristijan Armeni, Christopher Honey, Tal Linzen. Poster. Video.
  16. Memory in humans and deep language models: Linking hypotheses for model augmentation. Omri Raccah, Phoebe Chen, Theodore L. Willke, David Poeppel, Vy A. Vo. Paper. Video.
  17. Transformers generalize differently from information stored in context vs in weights. Stephanie C.Y. Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew Kyle Lampinen, Felix Hill. Paper. Video.
  18. Mixed-Memory RNNs for Learning Long-term Dependencies in Irregularly-sampled Time Series. Mathias Lechner, Ramin Hasani. Paper. Video.
  19. Neural Network Online Training with Sensitivity to Multiscale Temporal Structure. Matt Jones, Tyler R. Scott, Gamaleldin Fathy Elsayed, Mengye Ren, Katherine Hermann, David Mayo, Michael Curtis Mozer. Poster. Video.
  20. Learning at Multiple Timescales. Matt Jones. Paper. Video.
  21. Toward Semantic History Compression for Reinforcement Learning. Fabian Paischer, Thomas Adler, Andreas Radler, Markus Hofmarcher, Sepp Hochreiter. Paper. Video.
  22. CL-LSG: Continual Learning via Learnable Sparse Growth. Li Yang, Sen Lin, Junshan Zhang, Deliang Fan. Paper. Video.
  23. Cache-Memory Gated Graph Neural Networks. Guixiang Ma, Vy A. Vo, Theodore L. Willke, Nesreen K. Ahmed. Paper. Video.
  24. Exploring The Precision of Real Intelligence Systems at Synapse Resolution. Mohammad Samavat, Thomas M Bartol, Kristen Harris, Terrence Sejnowski. Paper. Video.
  25. Low Resource Retrieval Augmented Adaptive Neural Machine Translation. Vivek Harsha Vardhan Lakkamaneni, Swair Shah, Anurag Beniwal, Narayanan Sadagopan. Paper. Video.
  26. Neural networks learn an environment’s geometry in latent space by performing predictive coding on visual scenes. James Gornet, Matt Thomson. Paper. Video.
  27. Transformer needs NMDA receptor nonlinearity for long-term memory. Dong-Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee. Poster. Video.
  28. Training language models for deeper understanding improves brain alignment. Khai Loong Aw, Mariya Toneva. Poster. Video.
  29. Using Hippocampal Replay to Consolidate Experiences in Memory-Augmented Reinforcement Learning. Chong Min John Tan, Mehul Motani. Paper. Video.
  30. Evidence accumulation in deep RL agents powered by a cognitive model. James Mochizuki-Freeman, Sahaj Singh Maini, Zoran Tiganj. Paper. Poster. Video.
  31. General-Purpose In-Context Learning by Meta-Learning Transformers. Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, Luke Metz. Poster. Video.
  32. Experiences from the MediaEval Predicting Media Memorability Task. Alba Garcia Seco de Herrera, Mihai Gabriel Constantin, Claire-Helene Demarty, Camilo Luciano Fosco, Sebastian Halder, Graham Healy, Bogdan Ionescu, Ana Matran-Fernandez, Alan F. Smeaton, Mushfika Sultana. Paper. Video.
  33. Constructing compressed number lines of latent variables using a cognitive model of memory and deep neural networks. Sahaj Singh Maini, James Mochizuki-Freeman, Chirag Shankar Indi, Brandon G Jacques, Per B Sederberg, Marc Howard, Zoran Tiganj. Paper. Video.
  34. Learning to Control Rapidly Changing Synaptic Connections: An Alternative Type of Memory in Sequence Processing Artificial Neural Networks. Kazuki Irie, Jürgen Schmidhuber. Paper. Video.

Program committee

  1. Anna C Schapiro. University of Pennsylvania.
  2. Brandon Trabucco. Machine Learning Department, Carnegie Mellon University.
  3. Caroline Lee. Columbia University.
  4. Daniel Ben Dayan Rubin. Intel Labs.
  5. Dan Elbaz. Intel.
  6. Gal Novik. Intel Corporation.
  7. Jerry Tang. University of Texas at Austin.
  8. Anurag Koul. Oregon State University.
  9. Manoj Kumar. Princeton University.
  10. Raymond Chua. Technical University Munich.
  11. Richard Antonello. University of Texas, Austin.
  12. Shubham Kumar Bharti. Department of Computer Science, University of Wisconsin - Madison.
  13. Wenhuan Sun. Carnegie Mellon University.
  14. Abbas Rahimi. University of California Berkeley.
  15. Abulhair Saparov. New York University.
  16. Alexander Huth. The University of Texas at Austin.
  17. Allyson Ettinger. University of Chicago.
  18. Alona Fyshe. University of Alberta.
  19. Andi Nika. MPI-SWS.
  20. Andrew Kyle Lampinen. DeepMind.
  21. Anna A Ivanova. Massachusetts Institute of Technology.
  22. Benjamin Eysenbach. Carnegie Mellon University.
  23. Chongyi Zheng. Carnegie Mellon University.
  24. Christopher Baldassano. Columbia University.
  25. Christopher Honey. Johns Hopkins University.
  26. Edward Paxon Frady. Intel.
  27. Emmy Liu. School of Computer Science, Carnegie Mellon University.
  28. Gail Weiss. Technion.
  29. Hsiang-Yun Sherry Chien. Johns Hopkins University.
  30. Javier S. Turek. Intel.
  31. Jiawen Huang. Columbia University.
  32. Marc Howard. Boston University.
  33. Mariam Aly. Columbia University.
  34. Mariano Tepper. Intel.
  35. Mariya Toneva. MPI-SWS.
  36. Michael Hahn. Universität des Saarlandes.
  37. Mihai Capotă. Intel Labs.
  38. Niru Maheswaranathan. Facebook.
  39. Nuttida Rungratsameetaweemana. Salk Institute for Biological Studies.
  40. Otilia Stretcu. Google.
  41. Qihong Lu. Princeton University.
  42. Raj Ghugare. Montreal Institute for Learning Algorithms (Mila), Université de Montréal.
  43. Rowan Zellers. OpenAI.
  44. Shailee Jain. University of Texas, Austin.
  45. Shubham Toshniwal. Facebook.
  46. Subhaneil Lahiri. Stanford University.
  47. Tianwei Ni. University of Montreal.
  48. Vasudev Lal. Intel.
  49. Vy A. Vo. Intel.
  50. Zexuan Zhong. Princeton University.
  51. Zoran Tiganj. Indiana University, Bloomington.

Call for papers

We invite submissions to the NeurIPS 2022 workshop on Memory in Artificial and Real Intelligence (MemARI). One of the key challenges for AI systems is to understand, predict, and model data over time. Pretrained networks should be able to temporally generalize, or adapt to shifts in data distributions that occur over time. Current state-of-the-art (SOTA) models still struggle to model and understand data over long temporal durations: for example, SOTA models are limited to processing several seconds of video, and powerful transformer models are still fundamentally limited by their attention spans. By contrast, humans and other biological systems are able to flexibly store and update information in memory to comprehend and manipulate streams of input.

How should memory mechanisms be designed in deep learning, and what can this field learn from biological memory systems? MemARI aims to facilitate progress on these topics by bringing together researchers from machine learning, neuroscience, reinforcement learning, computer vision, natural language processing and other adjacent fields. We invite submissions presenting new and original research on topics including but not limited to the following:

  1. Computational models of biological memory
  2. Role of different biological memory systems in cognitive tasks, with implications for AI algorithms/architectures
  3. Biologically-inspired architectures to improve memory/temporal generalization
  4. New approaches to improving memory in artificial systems
  5. Domain-specific uses of memory mechanisms (e.g., lifelong learning, NLP, RL)
  6. Empirical and theoretical analyses of limitations in current artificial systems
  7. Datasets and tasks to evaluate memory mechanisms of artificial networks

Submission instructions

  • Submission Portal: MemARI OpenReview (submissions closed)
  • All submissions must be in PDF format.
  • Submissions are limited to four content pages, including all figures and tables; additional pages containing supplemental information & references are allowed. Reviewers should be able to judge your work without looking at the supplemental information.
  • Please use the NeurIPS 2022 LaTeX style file. Style or page limit violations (e.g., by decreasing margins or font sizes) may lead to automatic rejection.
  • All submissions should be anonymous.
  • Per NeurIPS guidelines, previously published work is not acceptable for submission.

Reviewing process & Acceptance

  • The review process is double-blind.
  • Accepted papers will be presented during a poster session, with spotlight oral presentations for exceptional submissions (with accommodation for virtual attendees).
  • Accepted papers will be made publicly available as non-archival reports, allowing future submissions to archival conferences or journals.

Top image: Mississippi River, CC BY-SA 3.0, Greg Willis, 2008.