Apple researchers have released a new open-source AI model that can edit images based on a user's natural language instructions (via VentureBeat).
Called “MGIE,” short for MLLM-Guided Image Editing, the model uses multimodal large language models (MLLMs) to interpret user requests and perform pixel-level manipulations.
The model can edit various aspects of an image. Global photo enhancements adjust qualities such as brightness, contrast, or sharpness, or apply artistic effects like sketching. Local editing can modify the shape, size, color, or texture of specific regions or objects, while Photoshop-style modifications include cropping, resizing, rotating, and adding filters, or even changing backgrounds and blending images.
For a photo of a pizza, a user input could be “make it look more healthy.” Using common-sense reasoning, the model can add vegetable toppings such as tomatoes and herbs. A global optimization request might take the form of “add contrast to simulate more light,” while a Photoshop-style modification could be made by asking the model to remove people from the background of a photo, shifting the focus to the subject's facial expression.
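For a concrete sense of what a global instruction like “add contrast to simulate more light” maps to at the pixel level, here is a minimal, illustrative Python sketch using Pillow. It is not MGIE's code or API; the instruction-to-parameter mapping below is a hypothetical stand-in for the reasoning MGIE's multimodal language model performs.

```python
# Illustrative only: a hand-coded stand-in for the kind of global,
# pixel-level edit MGIE derives from a natural language instruction.
from PIL import Image, ImageEnhance

def apply_global_edit(image: Image.Image, instruction: str) -> Image.Image:
    """Map a simple instruction to a global photo enhancement (hypothetical rules)."""
    text = instruction.lower()
    if "contrast" in text or "light" in text:
        # e.g. "add contrast to simulate more light"
        image = ImageEnhance.Contrast(image).enhance(1.4)
        image = ImageEnhance.Brightness(image).enhance(1.1)
    if "sharp" in text:
        image = ImageEnhance.Sharpness(image).enhance(2.0)
    return image

if __name__ == "__main__":
    photo = Image.open("photo.jpg")  # placeholder input path
    edited = apply_global_edit(photo, "add contrast to simulate more light")
    edited.save("photo_edited.jpg")
```

The point of MGIE is that the MLLM infers these adjustments, along with far richer local and Photoshop-style edits, from free-form language rather than hard-coded rules like the ones above.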
Apple collaborated with University of California researchers to create MGIE, which was presented in a paper at the International Conference on Learning Representations (ICLR) 2024. The release is available on GitHub and includes code, data, and pre-trained models.
This is Apple's second AI research breakthrough in as many months. In late December, Apple revealed that it had made strides in deploying large language models (LLMs) on iPhones and other Apple devices with limited memory by developing a novel flash memory utilization technique.
For the last several months, Apple has been testing an “Apple GPT” chatbot that could compete with ChatGPT. According to Bloomberg's Mark Gurman, work on AI is a priority for Apple, and the company has designed an “Ajax” framework for large language models.
Both The Information and analyst Jeff Pu claim that Apple will have some kind of generative AI feature available on the iPhone and iPad around late 2024, when iOS 18 is due to be released. iOS 18 is said to include an enhanced version of Siri with ChatGPT-like generative AI functionality, and has the potential to be the “biggest” software update in the iPhone's history, according to Gurman.
This article, “New Apple AI Model Edits Images Based on Natural Language Input,” first appeared on MacRumors.com.