Conjecture: 2 Years

Connor Leahy

Feb 15, 2024

It has been 2 years since a group of hackers and idealists from across the globe gathered in a tiny, oxygen-deprived coworking space in downtown London with one goal in mind: Make the future go well, for everybody.

And so, Conjecture was born.

We’ve come a long way from our humble roots in EleutherAI and the AI safety community, growing from a chaotic group of 7 to a well-oiled and professional company, 20 strong today, working on defining the future of AI.

Over the last two years, we’ve tried many things, worked with many people, achieved many goals, had many adventures. We’ve done novel academic research, built state-of-the-art language models, run (and broken) supercomputers, advised politicians and world leaders, appeared on TV, raised money, hired great people, fired great people, mentored great people, hosted an international conference in Japan, influenced global policy, and much, much more.

Not everything we’ve tried has worked; in fact, most things haven’t. But that’s the process of learning, both about the world and about ourselves. After all, we chose to work on a problem that matters, not one we knew how to solve.

At Conjecture, we are tackling the technical problem of Control: How do we build AI/AGI systems that we can control, such that we know what they can and cannot do, and they do exactly what we tell them to do? Nothing more, nothing less.

Control is a subproblem of the monumentally harder problem of Alignment: How do you make AGI systems that want what you want? No one has any solution to Alignment yet, I’m afraid (though we do have a couple of crazy ideas…), and so we’ve been focusing on the subproblem of Control.

And boy, have we thought of, looked at, and tried many ideas to tackle Control: from interpretability and prosaic methods to simulators, cyborgism, and much more. We’ve had our phase of broad exploration, trying to find something that could actually make progress on this hard problem. Something that was truly going to work.

And, after many months of dead ends, we’ve found something.

Cognitive Emulation (CoEm) 

The first proto-ideas behind our Cognitive Emulation agenda began to take shape in the late summer of 2022, but they needed a lot more work to flesh out. By late 2022/early 2023, things were becoming clearer, but many technical questions remained unanswered. It took another year of incremental, back-and-forth work to reach the certainty we needed that this was our direction going forward.

And now, roaring into 2024, we are all in on CoEm.

CoEm, in a nutshell, is a vision for how to build powerful, useful, general-purpose AI systems that are bounded, controllable, and safe, by emulating the steps of human-like cognition. A lot goes into this, but at its core, we break cognitive tasks down into primitive building blocks, and, using novel finetuning, synthetic data, and scaffolding methods, we create LLM/software hybrid systems that can perform these building blocks with perfect accuracy, then compose them into more complex tasks from there, while maintaining modularity and robustness.
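
To make the composition idea concrete, here is a minimal toy sketch in Python (a sketch of the general pattern only; the names and validators are illustrative assumptions, not our actual stack): each primitive is a small, bounded step paired with an explicit check, and composition halts loudly rather than drifting when a check fails.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical "primitive": one small, bounded cognitive step whose output
# is checked against an explicit validator before execution continues.
@dataclass
class Primitive:
    name: str
    run: Callable[[str], str]        # an LLM call, or plain software
    validate: Callable[[str], bool]  # rejects anything outside the step's bounds

def compose(primitives: list[Primitive]) -> Callable[[str], str]:
    """Chain primitives into a larger task that fails loudly on any violation."""
    def pipeline(state: str) -> str:
        for p in primitives:
            state = p.run(state)
            if not p.validate(state):
                # No silent drift: a step that leaves its bounds halts the system.
                raise RuntimeError(f"primitive '{p.name}' violated its contract")
        return state
    return pipeline

# Illustrative usage: two trivially checkable steps composed in order.
upper = Primitive("uppercase", str.upper, str.isupper)
strip = Primitive("strip", str.strip, lambda s: s == s.strip())
task = compose([upper, strip])
print(task("  hello world  "))  # HELLO WORLD
```

The point of this structure is that reliability comes from the explicit boundaries and checks around each step, not from hoping a single large model behaves.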

A CoEm system may not be as good a poet as GPT-4, but if you tell it to do 10 simple tasks in order, it will do those 10 simple tasks in order, every time, without failure or distraction. If a CoEm can do a task once, it can do it arbitrarily many times without fault.

With the CoEm paradigm, we can build AI systems that deliver both on the promise of safer, controllable, auditable systems and on directly useful commercial applications.

CoEm is the AI businesses actually want to buy. When companies buy AI/LLMs, what they want is a smart system that can reliably take in commands and execute those tasks exactly as instructed. They don’t want a quirky brain in a jar with a personality disorder.

And this is what we are already working on with our first enterprise customers.

The Next Year(s)

We sometimes joke at Conjecture that 2024 feels like it’s shaping up to be a “season finale”, with all the major elections across the world, geopolitical tensions and next generation AI systems on the horizon.

And it feels this way within Conjecture too, like we’re getting to that dramatic cliffhanger: after long struggles and development, a sudden glimmer of hope as the long-sought technological breakthrough flares to life.

It's unsurprising, in a way, that an elegant solution for AI reliability is also the commercial direction people were looking for. We’re humbled that others seem to share our vision, and excited to see what we can build.

We are already starting to work with business partners that require the highest levels of reliability and controllability guarantees for their AI systems, and we are looking for a few more enterprise partners interested in exploring this promising new space of highly reliable AI design.

We don’t know if Earth is going to be renewed for another season, but it looks likely. So let's work together on a better script this year.

We’re only accelerating with each passing day. We’ve never been so certain that we have a path forward, not just for AI safety, but for making this technology a genuine force for good, and so we’re looking forward to the adventures that 2024 will bring.
