From its research division, Meta Platforms (META) has revealed several new AI models, including a "Self-Taught Evaluator" designed to reduce human involvement in AI development. By enabling AI models to assess and improve other AI models independently, this approach could change how AI systems are built.
The Self-Taught Evaluator uses a large language model (LLM) as a judge: it generates contrasting model outputs, then evaluates them while producing a reasoning trace. Unlike existing approaches such as Reinforcement Learning from Human Feedback (RLHF), this iterative self-improvement technique is intended to refine AI performance without human annotations.
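To make the data flow concrete, here is a minimal, hypothetical sketch of one iteration of such a loop. The function names (`generate_contrasting_pair`, `judge`, `self_taught_iteration`) and the toy length-based judging rule are illustrative assumptions, not Meta's actual method; a real system would sample from and fine-tune an LLM judge rather than use these stand-ins.

```python
# Hypothetical sketch of one Self-Taught Evaluator iteration.
# Toy stand-ins replace real LLM calls; only the data flow is shown.

def generate_contrasting_pair(prompt):
    # Produce a preferred ("chosen") response and a deliberately
    # worse ("rejected") one -- stand-in for LLM sampling.
    chosen = f"A detailed, step-by-step answer to: {prompt}"
    rejected = f"Answer to: {prompt}"[:10]  # truncated, lower quality
    return chosen, rejected

def judge(prompt, a, b):
    # LLM-as-a-judge stand-in: emit a reasoning trace, then a verdict.
    trace = f"Comparing quality proxies: {len(a)} vs {len(b)} chars."
    verdict = "a" if len(a) > len(b) else "b"
    return trace, verdict

def self_taught_iteration(prompts):
    # Build synthetic preference data from the judge's own verdicts --
    # no human annotation anywhere in the loop.
    training_data = []
    for p in prompts:
        chosen, rejected = generate_contrasting_pair(p)
        trace, verdict = judge(p, chosen, rejected)
        if verdict == "a":  # keep pairs the judge ranks correctly
            training_data.append({"prompt": p, "chosen": chosen,
                                  "rejected": rejected, "trace": trace})
    # In practice, the judge model would now be fine-tuned on
    # training_data, and the loop repeated with the improved judge.
    return training_data

data = self_taught_iteration(["What is RLHF?", "What does SAM 2.1 do?"])
print(len(data))
```

The key property the sketch preserves is that each iteration's training signal comes from the judge itself, so the judge can be retrained on its own verdicts and reused in the next round.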
The model's release builds on Meta's earlier work, first presented in August, and uses the "chain of thought" technique also employed by OpenAI's o1 models. Breaking a problem into logical steps improves the AI's ability to make consistent judgments about other models' outputs. The ability of AI models to evaluate themselves and learn from their own errors opens the path toward fully autonomous AI systems, and Meta researchers believe the method could greatly reduce the need for expensive human expertise in model training. Separately, Meta has updated its Segment Anything Model to SAM 2.1, adding new tools for developers and improved image and video segmentation capabilities.