An introduction to Cognitive Emulation
July 2023
AI designed to emulate trusted, human reasoning.
All human labor, from writing an email to designing a spaceship, is built on cognition. Somewhere along the way, human intuition is used in a cognitive algorithm to form a meaningful idea or perform a meaningful task. Over time, these ideas and actions accrue in communication, formulating plans, researching opportunities, and building solutions. Society is entirely composed of these patterns, and Cognitive Emulation is built to emulate them.
Cognitive Emulation is an AI architecture that follows the same reasoning processes humans use to solve tasks. Making these processes explicit and integrating them with AI is essential to building systems people can trust.
This is not what you get with the current paradigm. Large language models use logic that is entirely different from how humans think, and their reasoning is uninterpretable. When you ask GPT-4 to solve a problem, it may give you an answer, but not reliably. If it doesn’t work, you’re stuck. And if the answer is wrong, you won’t know it.
By contrast, Cognitive Emulation systems show you how they work, in language you can read and follow. Legible AI workflows empower users to edit how the AI solves problems, improve decision-making, and correct errors. By prioritizing safety and transparency, Cognitive Emulation systems slot into personal and business workflows with ease and give end users confidence in how they work.
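To make "legible workflow" concrete, here is a minimal sketch in Python. It is our illustration, not Conjecture's actual product code: the Step and Workflow classes and the toy steps are hypothetical. The point it shows is that the system's reasoning is an explicit, ordered list of human-readable steps that a user can inspect, reorder, or replace.

```python
# A minimal, hypothetical sketch of a legible workflow. The names below
# (Step, Workflow, the toy steps) are invented for illustration only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str                  # human-readable label for this reasoning step
    run: Callable[[str], str]  # the transformation this step performs

@dataclass
class Workflow:
    steps: list[Step]
    trace: list[str] = field(default_factory=list)

    def execute(self, task: str) -> str:
        result = task
        for step in self.steps:
            result = step.run(result)
            # Every intermediate result is recorded, so a user can read,
            # audit, and correct the system's reasoning step by step.
            self.trace.append(f"{step.name}: {result}")
        return result

# Users change *how* the problem is solved by editing the step list,
# not by hoping a black box behaves differently next time.
workflow = Workflow(steps=[
    Step("restate the problem", lambda t: f"Problem: {t}"),
    Step("break into subtasks", lambda t: f"{t} -> [gather data, summarize]"),
    Step("draft an answer", lambda t: f"Draft based on: {t}"),
])
workflow.execute("summarize last quarter's sales")
print("\n".join(workflow.trace))
```

Because every intermediate result lands in the trace, an error can be localized to the step that produced it rather than to the system as a whole.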
The promise of Cognitive Emulation is to supplement more and more human reasoning with reliable AI systems, empowering humans to do higher-order thinking and drive progress. And while the future may look very different and exciting, Cognitive Emulation keeps humans in the driver’s seat the whole way.
Why Cognitive Emulation?

The risky "default" path:
AI Model: train a superintelligent AI model as the core "brain".
Generalized: fine-tune for maximum generality and power.
Agents: increase power using agents on top of the LLM.
Open Access: open access to tools and an unbounded environment.

The Cognitive Emulation path:
Purpose Trained: train LLMs for specific, predictable tasks.
Component-Built: build AI by component, using LLMs sparingly.
Real Workflows: automate real workflows with reliable AI macros.
Optimal Solutions: bootstrap to solve harder and more complex workflows.
The urgent need for safe AI
At Conjecture, we believe that transformative artificial intelligence may be deployed within the next decade. While timelines are up for debate, there is consensus in the AI Safety community that there is currently no known solution that ensures that transformative AI will be safe.
Conjecture’s mission is to solve this problem, which is called the “AI Alignment Problem.”
In the near term, we are building Cognitive Emulation as a way to create powerful AI without relying on dangerous, superintelligent systems. To our knowledge, it is the only AI architecture that bounds systems’ capabilities and makes them reason in ways that humans can understand. This design rests on four constraints:
Avoiding uncapped scaling. Instead of relying on the most (super-)intelligent model possible, we build libraries of human-level AI macros that are supported by efficient, purpose-driven models.
Avoiding autonomous agents or swarms. Instead, we build a meta-system with a human-in-the-loop that, for any human-level task, can reliably build a submodule that solves it.
Avoiding big black boxes. Instead, we factor human reasoning explicitly and build the most specific modules or models that can solve a given task. We build explainability and robustness into the architecture from the ground up (a minimal sketch of this pattern follows this list).
Avoiding unsupervised optimisation over end-to-end policies. While gradient descent is a great way to learn about the world, using it to create an end-to-end policy means that we create systems whose goals and behaviors are inherently driven by a process we do not understand.
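As referenced above, here is a minimal sketch of the component-built, human-in-the-loop pattern, again in Python and again hypothetical: parse_invoice, summarize_with_model, and human_approves are placeholder names we invented, and the model call is stubbed. The structure is what matters: most of the pipeline is ordinary, auditable software; a narrowly scoped model is invoked only where plain code cannot do the job; and a human gate sits between the system's output and any action taken on it.

```python
# A hypothetical sketch of a component-built pipeline with a
# human-in-the-loop gate. All names and logic are illustrative.

def parse_invoice(text: str) -> dict:
    # Deterministic component: ordinary, fully inspectable code.
    amount = float(text.split("total:")[1].split()[0])
    return {"total": amount}

def summarize_with_model(record: dict) -> str:
    # The single, narrowly scoped "model" call in the pipeline.
    # Stubbed here; a real system would call a purpose-trained model
    # with a predictable input/output shape and no open-ended tool access.
    return f"Invoice total is {record['total']:.2f} USD"

def human_approves(summary: str) -> bool:
    # Human-in-the-loop gate: nothing downstream runs without sign-off.
    return input(f"Approve? '{summary}' [y/N] ").strip().lower() == "y"

def process(text: str) -> str | None:
    record = parse_invoice(text)            # regular software
    summary = summarize_with_model(record)  # bounded model use
    return summary if human_approves(summary) else None

print(process("ACME Corp, total: 1250.00 USD"))
```

Each component can be tested, bounded, and swapped independently, which is what makes the system's capabilities overviewable in the sense described below.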
With those constraints, we can still build useful general systems comprising humans, AIs, and regular software. These systems will not be as powerful as superintelligence, which many labs are pursuing by scaling larger and larger models. And this is the point: we believe there is no safe way to build and scale autonomous, black-box systems to superintelligent levels.
Done well, Cognitive Emulation (CoEm) systems shift us from a misalignment-risk paradigm to a misuse-risk paradigm. The design specification means that during development we should always have a clear overview of the system's capabilities, and that during deployment we will have guarantees about what the system will do and why it will do it, along with the opportunity to intervene.