
AGI House: AGI Research Memo on Multimodality, and Gemini 3 Build Day Recap, $17K Global Chess Challenge (6 Weeks Left)

In this issue: our AGI research memo on native multimodal architectures, six weeks left in the Global Chess Challenge (and what we learned at NeurIPS), what emerged when 200+ builders stress-tested Gemini 3, open roles on the AGI House team, and what we’re reading.

🗓️ AGI Research Memo: Native Multimodal Architectures

We published a new memo: Native Multimodal Architectures: Why Cross-Modal Fusion Defines the Next Defensible Moat, drawing on direct observations from the Gemini 3 Build event alongside broader research into the multimodal model landscape.

The central insight is simple: the frontier has moved from adding modalities to reasoning across them. Late-fusion systems—where vision, audio, and text are processed by separate encoders and merged downstream—are increasingly limited by information bottlenecks, alignment gaps, and brittle cross-modal coordination. Native architectures encode multiple modalities jointly from the start, enabling cross-modal attention throughout inference and preserving signal fidelity end to end.
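To make the contrast concrete, here is a minimal, illustrative PyTorch sketch (not from the memo; the toy dimensions and module names are our own): the late-fusion model pools each modality separately and only merges the summaries, while the native model interleaves text and image tokens into one stream so every attention layer can mix them.

```python
import torch
import torch.nn as nn

D = 256  # shared embedding width (toy value for illustration)

class LateFusion(nn.Module):
    """Separate per-modality encoders; representations only meet at the end."""
    def __init__(self):
        super().__init__()
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=2)
        self.image_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=2)
        self.fusion_head = nn.Linear(2 * D, D)  # downstream merge: the bottleneck

    def forward(self, text_tokens, image_patches):
        t = self.text_enc(text_tokens).mean(dim=1)     # pooled summaries...
        v = self.image_enc(image_patches).mean(dim=1)  # ...discard fine-grained alignment
        return self.fusion_head(torch.cat([t, v], dim=-1))

class NativeFusion(nn.Module):
    """One joint token stream: cross-modal attention at every layer."""
    def __init__(self):
        super().__init__()
        self.modality_embed = nn.Embedding(2, D)  # tag tokens as text (0) or image (1)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=4)

    def forward(self, text_tokens, image_patches):
        tags = torch.cat([
            torch.zeros(text_tokens.shape[:2], dtype=torch.long),
            torch.ones(image_patches.shape[:2], dtype=torch.long)], dim=1)
        x = torch.cat([text_tokens, image_patches], dim=1) + self.modality_embed(tags)
        return self.backbone(x)  # every layer can attend across modalities

# Toy usage: batch of 2, 16 text tokens and 49 image patches, already embedded to width D.
text = torch.randn(2, 16, D)
image = torch.randn(2, 49, D)
print(LateFusion()(text, image).shape)    # torch.Size([2, 256])
print(NativeFusion()(text, image).shape)  # torch.Size([2, 65, 256])
```

The joint stream is the point: signal from one modality can shape how another is encoded from the first layer onward, rather than only after each encoder has already compressed its input into a pooled summary.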

⭐️ Gemini 3 Build Day — Native Multimodality in Practice

On December 13th, we wrapped up Gemini 3 Build Day in partnership with Google and Graphon AI. Over 200 builders, including AI/ML founders, engineers, and ML/AI PhDs from top universities, spent eighteen hours exploring what becomes possible when models can truly reason across text, images, audio, and video simultaneously.

P.S. Right after the Gemini 3 Build Day, over 200 AI founders, researchers, and VCs joined our AGI Pathway Gala to celebrate the curious minds paving the way for humanity’s progress towards building AGI. Stay tuned for more!

♟️ The Global Chess Challenge: 6 Weeks to Win a $17K Cash Prize Pool

The Global Chess Challenge closes January 31st at 23:55 UTC. Six weeks remain.

We launched the challenge at NeurIPS 2025 with Checkmate: Fine-Tune Your Own Small Language Model for Real-Time Chess Gameplay, a hands-on workshop hosted with AWS and AIcrowd. Participants fine-tuned Qwen-based models on AWS Trainium, deployed agents to a live leaderboard, and evaluated performance in real time. The session focused on working systems under real constraints rather than demonstrations.
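For anyone who wants a local warm-up before building a leaderboard agent, here is a minimal, hedged sketch of the core idea using plain Hugging Face transformers. The model name, the `games.txt` file of PGN-style move sequences, and the hyperparameters are illustrative stand-ins; the actual challenge setup (Qwen models fine-tuned on AWS Trainium via the Neuron toolchain, with live AIcrowd evaluation) differs.

```python
# Minimal sketch: fine-tune a small causal LM to predict chess moves from
# PGN-style game text. Model name, data file, and hyperparameters are
# illustrative assumptions, not the challenge's official configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"  # assumed small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# games.txt (hypothetical): one game per line, e.g. "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 ..."
raw = load_dataset("text", data_files={"train": "games.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chess-slm", per_device_train_batch_size=8,
                           num_train_epochs=1, learning_rate=2e-5, logging_steps=50),
    train_dataset=train,
    data_collator=collator,
)
trainer.train()

# At play time, prompt with the moves so far and sample a continuation.
prompt = "1. e4 e5 2. Nf3 "
out = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```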

Prize Pool

🥇 $10,000 cash + $5,000 compute credits
🥈 $5,000 cash + $2,000 compute credits
🥉 $2,000 cash + $1,000 compute credits

💼 We’re Hiring

Creative Director (Full-Time)
Lead AGI House’s content and brand strategy across platforms, shaping how frontier AI research, builders, and events are communicated to a global technical audience, as well as how AI adoption is transforming industry after industry.
Apply Here → | Email [email protected]

AGI Researcher, AI Product Manager, Engineer & Operator
Manage and coordinate the efforts of researchers from frontier AI labs and aspiring AI founders building towards AGI: producing model and agent evals, technical blogs, and more. Build and scale AI-native products at the intersection of product, engineering, and operations, powering AGI House’s community platform.
Apply: [email protected]

Location: Hillsborough, CA
Type: Full-time (internship and part-time considered)

📚 What We’ve Been Reading

  1. Artificial Hivemind: The Open-Ended Homogeneity of Language Models (Jiang et al., 2025). Coming out of the Global Chess Challenge and recent discussions around open-ended reasoning benchmarks, we’ve been thinking a lot about whether scale actually produces more diverse reasoning, or just more confident convergence. This NeurIPS award-winning paper sharpened that concern, showing that as language models scale, their open-ended outputs increasingly collapse toward sameness.

  2. Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training (Bonnaire et al., 2025). As part of our broader work on multimodal systems, we revisited this theory paper to better understand why massive generative models don’t immediately overfit. The result offers a clean explanation grounded in training dynamics, and has influenced how we think about long-horizon training and generalization in high-capacity multimodal models.

Wishing you a Merry Christmas and a thoughtful close to the year!

Till next time,

AGI House Team

Want something featured or interested in partnering? Email us