
AI and the Human Asymptote

Thesis: Teaching AI to reach superhuman performance will be much more challenging than we expect. Current progress has been accelerated by bootstrapping from human evolution over geological time, and as we approach the limits of our own intelligence, we will have to take newer, more time-consuming, and more difficult approaches to keep improving model intelligence. This limit is the Human Asymptote, and it represents a fundamental challenge on our path to machine superintelligence.

The Intelligence Bottleneck

The intelligence of an AI model is inherently limited by the quality of its training data. While models can produce novel outputs, they are essentially applying learned patterns to new domains rather than creating genuinely novel patterns. Our capacity to create and evaluate training data is restricted by our own intelligence. Since we can’t accurately identify superhuman training data in advance, the only reliable way to gauge its impact on a model is through iterative training and benchmarking. Thus, to push AI models beyond human intelligence, we will need a large-scale global optimization algorithm to develop the necessary datasets. How ironic that humans, themselves the product of evolution’s genetic algorithm, may need to implement similar algorithms in artificial minds to surpass their own cognitive abilities.
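
To make the shape of that outer loop concrete, here is a minimal sketch of a genetic-algorithm-style search over candidate training datasets, where fitness can only be measured by actually training and benchmarking. Everything here is illustrative under stated assumptions: `train_model`, `benchmark`, and the mutation scheme are hypothetical placeholders standing in for a real training pipeline and evaluation suite, not an existing API.

```python
import random

def train_model(dataset):
    """Placeholder for a full training run; a real version returns trained weights."""
    return dataset

def benchmark(model):
    """Placeholder for an evaluation suite; a real version returns a benchmark score."""
    return random.random()

def mutate(dataset, pool, rate=0.1):
    """Swap a fraction of examples for fresh ones drawn from the pool."""
    n = max(1, int(len(dataset) * rate))
    kept = random.sample(dataset, len(dataset) - n)
    return kept + random.sample(pool, n)

def evolve_dataset(pool, generations=10, population=8, keep=2):
    """Evolve dataset candidates toward higher benchmark scores.

    The key property: there is no gradient and no way to judge a
    candidate dataset directly. The only fitness signal is the
    expensive train-then-benchmark loop.
    """
    candidates = [random.sample(pool, len(pool) // 2)
                  for _ in range(population)]
    for _ in range(generations):
        # Rank candidates by the score of the model they produce.
        scored = sorted(candidates,
                        key=lambda d: benchmark(train_model(d)),
                        reverse=True)
        survivors = scored[:keep]
        # Refill the population by mutating the survivors.
        candidates = survivors + [
            mutate(random.choice(survivors), pool)
            for _ in range(population - keep)
        ]
    return max(candidates, key=lambda d: benchmark(train_model(d)))

# Toy usage: search a pool of 100 stand-in "examples" for a good subset.
best = evolve_dataset(pool=list(range(100)))
```

Note that in practice each fitness evaluation is a full training run, so the real obstacle is the compute cost of the loop, not the algorithm itself.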

Evidence of the Human Asymptote

The limitations of human intelligence are evident in the evolution of LLMs. Despite significant advancements, we haven’t seen much progress beyond GPT-4 in terms of raw problem-solving power. However, we have witnessed considerable improvements in problem-solving efficiency relative to model size. Models like Claude Haiku, Llama 3, and Phi-3 demonstrate competitive performance in many domains despite their smaller size. This trend suggests that optimizing the encoding of human intelligence is a more tractable problem than identifying and verifying superintelligence.

Since smaller models are quickly approaching parity with top-tier GPT models in specific domains, the main advantage of larger models now lies in their ability to handle multi-domain intelligence rather than in superior problem-solving on isolated tasks. This indicates that the path to superhuman intelligence will likely involve not just refining existing models but fundamentally rethinking how we train and optimize them.

Summary

The journey to achieving superhuman AI is far more complex than our current achievements suggest. I’m going to say it now: this is self-driving cars all over again. If we’re going to succeed, we need innovative optimization algorithms to generate and select the best training data. As we strive to surpass human intelligence, it is essential to remember that our progress thus far has been built on the foundation of human evolution, and that the slowing of progress isn’t a failure but entry into a new, more challenging phase of the problem. There’s no need for doomerism, and winter isn’t coming: this problem is tractable. We have the optimization algorithms, compute, and storage to solve it; it’s just going to take time and cooperation. OpenAI and Google aren’t going to do it by themselves. This is a civilization-level problem, and that gives me hope, because it means open source has a real shot and the corporate dystopia is looking less and less likely.