Google Research Introduces Titans: Memory-Based Model with 2M+ Token Support
Google Research is tackling a critical challenge in modern AI models: the efficiency drop when processing extremely long sequences.
At NeurIPS, the team introduced Titans, a memory-based model that supports contexts of more than two million tokens without full attention, redefining the limits of contextual understanding.
Instead of compressing context into fixed-size vectors, Titans uses a deep neural memory, allowing richer structural encoding. Its key innovations are surprise-driven updates, which adjust memory most strongly on unexpected inputs; a momentum rule that carries surprise across tokens so related information persists over long spans; and adaptive forgetting, which keeps memory compact by decaying outdated content. Together, these mechanisms make long-context processing substantially more efficient.
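The three mechanisms above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it stands in a simple linear memory matrix for Titans' deep neural memory, and the fixed hyperparameters (`lr`, `momentum`, `forget`) are placeholders for the learned, input-dependent gates described in the paper.

```python
import numpy as np

def surprise_update(M, S, k, v, lr=0.1, momentum=0.9, forget=0.01):
    """One illustrative memory step in the Titans spirit.

    M        : memory matrix mapping keys to values (simplification:
               a linear map instead of a deep neural memory)
    S        : momentum state carrying past "surprise"
    k, v     : key/value pair derived from the current token
    lr       : step size applied to the surprise signal
    momentum : decay that lets surprise persist across tokens
    forget   : forgetting gate (here a fixed weight decay)
    """
    # Surprise = gradient of the reconstruction loss ||M @ k - v||^2,
    # so unexpected inputs (large error) produce large updates.
    grad = 2.0 * np.outer(M @ k - v, k)
    # Momentum rule: blend the new surprise with accumulated surprise,
    # preserving related information over long spans.
    S = momentum * S - lr * grad
    # Adaptive forgetting: shrink old memory before adding the update,
    # keeping the memory compact.
    M = (1.0 - forget) * M + S
    return M, S
```

Repeatedly presenting a key/value pair drives the memory toward recalling `v` from `k`, while the decay term gradually erases associations that stop being refreshed.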