Architects of Tomorrow

Meet the minds and labs shaping the future of intelligence, navigating the promise and peril of AGI.

With AI development moving so quickly in so many directions, I’ve identified the major players building toward AGI so you don’t have to sift through the noise. I’ve also categorized these groups by their primary focus (Safety/Alignment vs. Capability Advancement).

First, let’s address the elephant in the room: companies primarily focused on advancing AI capabilities while letting safety fall by the wayside.

Driving Capability Frontiers and Integration: 

These groups are primarily focused on rapidly advancing AI capabilities, reaching milestones like AGI in the near term, and integrating AI into broader applications, while incorporating safety frameworks into the development process.

1. Sam Altman and OpenAI

Overview: OpenAI continues to advance AI capabilities rapidly, releasing more intelligent, versatile models with integrated tool use and multimodality, alongside frameworks for managing risks.

Key Details:

  • Recently launched o-series models (o3, o4-mini) with strong reasoning, autonomous tool integration, and the ability to "think with images."

  • Advanced audio (accurate speech-to-text, customizable text-to-speech) and image generation (photorealism).

  • Updated its Preparedness Framework and actively researches model misbehavior and disrupts malicious use.

AGI Timeline: Altman has suggested AGI could be achieved as soon as 2025, with OpenAI shifting its sights to superintelligence as the next goal.

Takeaways: OpenAI demonstrates significant progress toward more agentic, multimodal AI systems capable of complex tasks, while simultaneously developing and implementing measures to address safety and misuse concerns.

2. Demis Hassabis and Google DeepMind

Overview: Google DeepMind CEO Demis Hassabis predicts AGI relatively soon, focusing research on real-world understanding, planning, and multi-agent systems.

Key Details:

  • Views contextual understanding and generalized planning/reasoning as key AGI hurdles.

  • Develops multi-agent AI systems and advanced models like Gemini 2.0 for the "agentic era."

  • Achieved major success with AlphaFold (protein folding); Hassabis was co-awarded the 2024 Nobel Prize in Chemistry for this work.

AGI Timeline: Projects AGI emergence within the next 5 to 10 years.

Takeaways: DeepMind represents an optimistic view on achieving AGI relatively soon, driven by progress in complex reasoning, agentic AI, and demonstrated success in applying AI to major scientific challenges.

3. Aidan Gomez and Cohere

Overview: Cohere is a prominent AI company specializing in developing large language models tailored for enterprise applications, emphasizing data privacy, security, and responsible deployment for business use cases.

Key Details:

  • Develops advanced LLMs like Command R+ optimized for enterprise workflows such as Retrieval Augmented Generation (RAG), multi-step tool use, and multilingual business applications (evaluated across 10 languages); a minimal sketch of the RAG pattern follows this list.

  • Strongly prioritizes data privacy, security (meeting standards like SOC 2 and GDPR), and flexible deployment options (cloud, VPC, on-premise) to meet enterprise requirements.

  • Co-founded by Aidan Gomez, a contributor to the original Transformer architecture paper ("Attention Is All You Need").

  • While advancing AI capabilities is their primary focus, Cohere is actively researching AI safety, particularly in multilingual contexts, through Cohere Labs (formerly Cohere for AI).
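To make that RAG workflow concrete, here is a minimal, framework-agnostic sketch of the pattern: retrieve the documents most relevant to a question, then ground the model’s answer in them. This is not Cohere’s SDK; the toy relevance scoring and the final model call are illustrative stand-ins for a real retriever and LLM.

```python
# Conceptual sketch of Retrieval Augmented Generation (RAG).
# Not Cohere's SDK: score(), retrieve(), and the final model call
# are illustrative stand-ins for a real retriever and LLM.

def score(query: str, doc: str) -> int:
    """Toy relevance score: how many query words appear in the document."""
    query_words = set(query.lower().split())
    return sum(1 for word in doc.lower().split() if word in query_words)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved documents."""
    sources = "\n".join(f"- {doc}" for doc in context)
    return (
        f"Answer using ONLY these sources:\n{sources}\n\n"
        f"Question: {query}\nCite the source you used."
    )

docs = [
    "Refunds are processed within 5 business days of approval.",
    "Support hours are 9am-5pm EST, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]
query = "How long do refunds take to process?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # In production, this grounded prompt goes to the model.
```

The appeal for enterprises is that answers stay grounded in (and citable to) their own documents rather than whatever happens to be in the model’s training data.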

AGI Timeline: Cohere explicitly focuses on practical enterprise value and positive impact, stating AGI is not their end goal.

Takeaways: Cohere carves out a distinct position by focusing exclusively on the enterprise market, providing powerful and secure AI tools designed for immediate business integration and value, rather than pursuing AGI milestones or direct-to-consumer offerings.

Groups Prioritizing Safety, Alignment, and Foundational Understanding: 

And now for the more ethical groups: those emphasizing safety, ensuring alignment with human values, addressing fundamental limitations in current approaches, and treating caution as a core part of their mission, before or alongside any push for maximum capability.

4. Ilya Sutskever and Safe Superintelligence (SSI)

Overview: SSI is a highly funded, secretive startup singularly focused on developing artificial superintelligence (ASI) that is fundamentally safe, deliberately avoiding interim product releases.

Key Details:

  • Founded by Ilya Sutskever, Daniel Gross, and Daniel Levy, pursuing only safe ASI development.

  • Operates with extreme secrecy but has secured significant funding (a $2B raise at a $30-32B valuation) and uses Google TPUs.

  • Aims to find a "different mountain to climb," potentially moving beyond current pre-training paradigms, possibly inspired by biological scaling and aiming for agentic, self-correcting AI.

AGI Timeline: Unspecified due to their highly secretive nature.

Takeaways: SSI represents a unique, high-stakes gamble on achieving safe ASI through potentially novel methods, prioritizing long-term safety over immediate commercialization, fueled by strong investor confidence in Sutskever's vision.

5. Mira Murati and Thinking Machines Lab

Overview: Following her impactful tenure as OpenAI's CTO, Mira Murati launched Thinking Machines Lab to bridge advanced AI research with practical, understandable, and human-aligned applications.

Key Details:

  • Murati previously led development for key OpenAI models like ChatGPT and DALL-E.

  • Thinking Machines Lab aims for AI that is comprehensible, customizable, facilitates human-machine collaboration, and integrates human values ("user alignment").

  • Emphasizes transparent development, open collaboration, and modular architectures, attracting top talent and significant funding interest.

AGI Timeline: Unspecified; the lab is new, with limited publicly available information.

Takeaways: This venture signifies a push towards making advanced AI more accessible, adaptable, and ethically grounded, focusing on user needs and transparency in contrast to more closed approaches.

6. Dario Amodei and Anthropic

Overview: Anthropic CEO Dario Amodei anticipates highly transformative AI within years, focusing development on safety through their unique "Constitutional AI" approach and interpretability research.

Key Details:

  • Pioneered Constitutional AI: training models against a written set of principles for safety and alignment (a minimal sketch of the idea follows this list).

  • Focuses on mechanistic interpretability (understanding model internals) and developing AI as "Virtual Collaborators."

  • Champions global safety standards and Responsible Scaling Policies.
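At a high level, Constitutional AI has the model critique and revise its own drafts against the written principles, with the revised outputs then folded back into training. Here is a minimal sketch of that critique-and-revise loop; generate() and the two principles below are hypothetical placeholders, not Anthropic’s actual model API or constitution.

```python
# Conceptual sketch of the Constitutional AI critique-and-revise loop.
# generate() is a hypothetical stand-in for a real model call, and these
# two principles are illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response least likely to cause harm.",
    "Choose the response most honest about its uncertainty.",
]

def generate(prompt: str) -> str:
    """Placeholder LLM call; echoes the prompt so the demo runs end to end."""
    return f"<model output for: {prompt[:40]}...>"

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = generate(
            f"Revise the response to address the critique.\n"
            f"Response: {draft}\nCritique: {critique}"
        )
    return draft  # In training, revised outputs become preference data.

print(constitutional_revision("How should I dispose of old batteries?"))
```

The key design choice is that the principles are written down and inspectable, so the alignment pressure comes from an explicit document rather than from opaque human ratings alone.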

AGI Timeline: Projects highly transformative AI, roughly what others call AGI, emerging within the next 2 to 3 years.

Takeaways: Anthropic aims for rapid AI progress guided by a distinct, principle-based safety framework (Constitutional AI), positioning safety and interpretability as central pillars of their AGI development strategy.

7. Yann LeCun and Meta FAIR

Overview: Meta's Chief AI Scientist, Yann LeCun, expresses skepticism about current Large Language Models being a direct path to human-level intelligence, emphasizing the need for world models learned through interaction.

Key Details:

  • Argues LLMs lack true understanding, reasoning, and planning capabilities needed for human-level intelligence.

  • Advocates research into AI acquiring "world models" through observation and interaction with the physical world, including robotics.

  • Research includes assistive robots and contrastive learning methods; co-recipient of the 2018 Turing Award.

AGI Timeline: Skeptical that current methods lead to near-term AGI; offers no prediction for when to expect it.

Takeaways: LeCun provides a critical perspective, arguing that achieving robust, human-level AI requires fundamentally different approaches focused on grounding intelligence in real-world interaction, not just language patterns.

8. Yoshua Bengio and Mila

Overview: Yoshua Bengio is a leading researcher actively promoting technical AI governance and safety research to manage the risks of increasingly capable AI systems.

Key Details:

  • Advocates for robust control mechanisms, incorporating "epistemic humility" into AI, and increased investment in safety research and regulation.

  • Chaired the International Scientific Report on the Safety of Advanced AI, which discussed potential timelines.

  • Mila researches responsible AI, AI for health, sustainability, and science.

AGI Timeline: Within a few years to a decade.

Takeaways: Bengio emphasizes the necessity of proactive, technically informed governance and dedicated safety research efforts from institutions and governments to guide AI development responsibly.

Why Tracking AI Development Matters

Throughout history, pivotal shifts in information technology have radically altered human societies, power structures, and even our sense of self. Today, AI appears to be the ultimate information network.

Keeping track of who is building these advanced AI systems, what their objectives are, and how they work matters for several reasons. Understanding the origins of powerful AI allows society to assign responsibility for its impacts, both beneficial and harmful. That accountability is vital when weighing the risks embedded in different approaches: some paths mirror past technological gambles that fractured societies or amplified biases (think Facebook’s algorithms influencing global elections).

Most importantly, this knowledge informs public discourse and policymaking. Knowing about the groups building the technology that’s shaping the future allows for more effective governance, ethical oversight, and international cooperation to ensure AI aligns with human values and global well-being.


Do you enjoy learning about AI?

Of Tomorrow is focused on sharing only the emerging tech news with real-world impact. Because honestly, it’s impossible to keep up with all the hype.

P.S. Thanks for reading ☺️ I hope you enjoyed!