Apple researchers have developed a new method for training large language models (LLMs) that seamlessly integrates both text and visual information.
The company’s findings, detailed in a research paper titled “MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training,” showcase a new approach to creating more intelligent and flexible AI systems. By utilizing a diverse dataset comprising image-caption pairs, interleaved image-text documents, and text-only data, Apple claims the MM1 model sets a new standard in AI’s ability to perform tasks such as image captioning, visual question answering, and natural language inference with a high level of accuracy.
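To make that data recipe concrete, here is a minimal sketch of how a training pipeline might sample from the three data types the paper describes. The sampling weights and function names below are illustrative assumptions, not figures or code confirmed by Apple’s paper.

```python
# Illustrative sketch only: the sampling ratios and helper names are
# assumptions for explanation, not values taken from Apple's MM1 paper.
import random

# Hypothetical mixture weights over the three data types described above.
DATA_MIX = {
    "image_caption_pairs": 0.45,     # captioned images
    "interleaved_image_text": 0.45,  # documents mixing images and text
    "text_only": 0.10,               # plain text corpora
}

def sample_source(mix=DATA_MIX):
    """Pick which data source the next training example is drawn from."""
    sources, weights = zip(*mix.items())
    return random.choices(sources, weights=weights, k=1)[0]

# Quick check: counts roughly follow the 45/45/10 split over many draws.
counts = {name: 0 for name in DATA_MIX}
for _ in range(10_000):
    counts[sample_source()] += 1
print(counts)
```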
Apple’s research focuses on the combination of different types of training data and model architectures, which enables the AI to understand and generate language based on a mix of visual and linguistic cues. This capability is vital for tasks that require a nuanced comprehension of the world, such as interpreting complex images or answering questions that involve visual elements.
The paper also highlights the MM1 model’s exceptional in-context learning abilities, particularly in the largest 30 billion parameter configuration of the model. This version apparently exhibits remarkable capabilities for multi-step reasoning over multiple images using few-shot “chain-of-thought” prompting, a technique that allows the AI to perform complex, open-ended problem solving based on minimal examples.
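For readers unfamiliar with the technique, the sketch below shows the general shape of a few-shot chain-of-thought prompt spanning multiple images: a small number of worked examples with explicit reasoning, followed by the new query. The message format and `<image: …>` placeholders are assumptions for illustration only and do not reflect MM1’s actual interface.

```python
# A minimal sketch of a few-shot "chain-of-thought" prompt over multiple
# images. The placeholder syntax and layout are illustrative assumptions.
few_shot_examples = [
    {
        "images": ["<image: two apples>", "<image: three apples>"],
        "question": "How many apples are there in total?",
        "reasoning": "The first image shows 2 apples and the second shows 3, so 2 + 3 = 5.",
        "answer": "5",
    },
]

def build_prompt(examples, images, question):
    """Assemble worked examples first, then the new multi-image query."""
    parts = []
    for ex in examples:
        parts.append("\n".join(ex["images"]))
        parts.append(f"Question: {ex['question']}")
        parts.append(f"Let's think step by step. {ex['reasoning']}")
        parts.append(f"Answer: {ex['answer']}\n")
    parts.append("\n".join(images))
    parts.append(f"Question: {question}")
    parts.append("Let's think step by step.")
    return "\n".join(parts)

print(build_prompt(few_shot_examples,
                   ["<image: a receipt>", "<image: a $20 bill>"],
                   "Is the bill enough to pay the receipt total?"))
```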
This research emerges as part of Apple’s broader initiative to enhance its AI capabilities amid growing competition. Earlier today, Bloomberg’s Mark Gurman reported that Apple is in discussions with Google to license Google’s Gemini generative large-language models to power new features coming to the iPhone as part of iOS 18.