If you want to understand AI, maybe you should think like a child thinks.
First of all, a number of AI assessments suggest that these models are capable of reasoning at roughly the level of a two-year-old.
But what does that mean?
I've heard a lot about this lately: for example, this video shows James, Peter, and Peter from Google talking about state-of-the-art solutions and practical applications.
One of the things they pointed out is that AI is really "youthful" in a number of ways...
It's not just that AI, like a young child, is limited and bounded in certain ways along its learning trajectory. The panel talked about how young children learn to identify something, say, a zebra, after seeing a series of example pictures, which is similar to how these models learn from labeled training data.
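To make that concrete, here's a minimal sketch of the learn-from-labeled-pictures idea in Python. The feature numbers and their meanings are invented stand-ins for what a real vision model would extract; the point is only the pattern of learning a label from repeated examples.

```python
# Each "picture" is reduced to a feature vector (made-up numbers here,
# standing in for what a real vision model would extract), and a
# classifier learns the label from repeated labeled examples.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features: [stripe_score, mane_length, body_size]
training_features = [
    [0.9, 0.2, 0.7],  # zebra
    [0.8, 0.3, 0.6],  # zebra
    [0.1, 0.7, 0.8],  # horse
    [0.2, 0.6, 0.9],  # horse
]
training_labels = ["zebra", "zebra", "horse", "horse"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(training_features, training_labels)

# A new, unseen "picture": heavily striped, short mane
print(model.predict([[0.85, 0.25, 0.65]]))  # -> ['zebra']
```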
But they also talked about techniques that sort of, in a way, "infantize" AI, such as the idea of building a network of "baby AI" models that work together. I've seen engineers approach AI this way, cobbling together multiple systems to get a muscular result. Again, as I like to say, Marvin Minsky's insight comes in handy here: that the brain is not one big computer, but hundreds of computers connected together!
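And here's a toy illustration of that network-of-baby-AIs idea: three invented, single-purpose models, each weak on its own, combined by a simple majority vote. Real ensembles are far more sophisticated, but the shape is the same.

```python
from collections import Counter

# Three hypothetical "baby" models, each looking at only one cue
def stripe_model(image):   # looks only at stripes
    return "zebra" if image["stripes"] > 0.5 else "horse"

def shape_model(image):    # looks only at body shape
    return "zebra" if image["body"] < 0.75 else "horse"

def color_model(image):    # looks only at color contrast
    return "zebra" if image["contrast"] > 0.6 else "horse"

def ensemble(image):
    # The "network": every baby model votes, majority wins
    votes = [m(image) for m in (stripe_model, shape_model, color_model)]
    return Counter(votes).most_common(1)[0][0]

print(ensemble({"stripes": 0.9, "body": 0.7, "contrast": 0.8}))  # -> zebra
```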
The Google folks in the video also talked about building a sentence simplification engine designed to break complex narratives down into simpler parts.
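The talk doesn't spell out how that engine is built, so treat this as a generic sketch of one way to do it: point a general-purpose LLM at the task with a narrow instruction. It assumes the openai Python client and an API key in the environment; the model name and prompt are illustrative, not Google's.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simplify(text: str) -> str:
    """Ask a general-purpose chat model to split a sentence into simpler ones."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's sentence as several short, "
                         "simple sentences. Keep the meaning; drop nothing important.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(simplify("Although the committee, which had convened twice before, "
               "remained divided, it ultimately ratified the proposal."))
```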
"Think about the problem that you want to focus on, and then just get started," one of them said, offering advice to new innovators.
They also talked about domain knowledge and domain adaptation: how to build systems focused on completing a specific task, using techniques like supervised fine-tuning. I like the part where one of these fellows asserted that fine-tuning is "method acting for LLMs." That's not too far off, really! They talked about how to customize an LLM to a personality, for example, in a kind of humanization process where "agentizing" AI means making it seem like a particular figure or person (the Google group used Sherlock Holmes as an example).
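As an illustration of what that "method acting" training data can look like, here's a tiny, invented Sherlock Holmes dataset written out in the chat-style JSONL format that several fine-tuning services accept. The file name and the examples are hypothetical.

```python
import json

# Hypothetical supervised fine-tuning records: each pairs a prompt with
# the in-character response we want the model to learn to "method act".
persona_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Sherlock Holmes."},
            {"role": "user", "content": "How do you approach a new case?"},
            {"role": "assistant",
             "content": ("Observation first, my dear fellow. One begins with "
                         "the mud on a boot and ends with the whole man.")},
        ]
    },
    # ...many more examples in the same voice...
]

with open("sherlock_sft.jsonl", "w") as f:
    for example in persona_examples:
        f.write(json.dumps(example) + "\n")
```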
Then they went into extending these techniques to more complex workflows. One mentioned asynchronous day trading, and how the model carries a certain bias: it classifies tweets as either bullish or bearish, and every tweet, he said, came back bearish, for some reason. I imagine that would be related to the program's trained assessment of the platform itself? Interesting...
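Here's a rough sketch of that labeling task, with a quick sanity check for exactly the kind of one-sided output he described. The classify() function is a crude keyword placeholder for whatever fine-tuned model would do the real work.

```python
from collections import Counter

def classify(tweet: str) -> str:
    # Placeholder: a real system would call a fine-tuned model here.
    return "bearish" if "down" in tweet.lower() else "bullish"

tweets = [
    "Markets down sharply on rate fears",
    "Earnings beat expectations across the board",
    "Futures down ahead of the open",
]

labels = [classify(t) for t in tweets]
# If one label dominates the counts across a broad sample, suspect bias.
print(Counter(labels))  # -> Counter({'bearish': 2, 'bullish': 1})
```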
Bias aside, he expressed amazement at what the model does right out of the box.
As for measuring precision and accuracy, the panel talked about how to prevent hallucinations. One way is to tie a language model to a database, so that each technology does what it's good at: the AI is good at helping people get to answers, and the database holds the facts.
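Here's a minimal sketch of that division of labor, assuming an in-memory SQLite table as the source of truth; the final step, where the LLM phrases the answer, is indicated only in a comment.

```python
import sqlite3

# The database holds the facts; the model only phrases the answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (topic TEXT, fact TEXT)")
conn.execute("INSERT INTO facts VALUES ('zebra', 'Zebras are native to Africa.')")

def answer(question: str, topic: str) -> str:
    row = conn.execute(
        "SELECT fact FROM facts WHERE topic = ?", (topic,)
    ).fetchone()
    if row is None:
        return "I don't know."  # refuse rather than hallucinate
    # In a real system, the question plus row[0] would go to the LLM,
    # with instructions to answer only from the retrieved fact.
    return row[0]

print(answer("Where do zebras live?", "zebra"))
```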
They also went over things like optimization problems, the challenges of training on sensitive data, and how to preserve data privacy, whether you're using your own model or someone else's.
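On the privacy point, one basic (and admittedly incomplete) precaution is scrubbing obvious identifiers before text ever leaves your own systems. This regex approach is illustrative only; real PII handling takes much more care.

```python
import re

# Illustrative only: production PII handling needs far more than a regex.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Scrub obvious identifiers before text leaves your infrastructure."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Reach Jane at jane@example.com or 555-867-5309."))
# -> Reach Jane at [EMAIL] or [PHONE].
```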
You can catch the video for the rest of it, but this idea of using AI for a particular kind of simplification could be very useful as we move forward. Take the sentence simplification engine: who among us hasn't wished we could simplify something we're reading? And AI might be just the tool for the job! Stay tuned for more on what's being talked about now in the AI world.