Apple’s AI research team has released two new small but high-performing language models designed to help train AI generators.
Apple’s ability to create incredibly compact yet powerful AI models is unequaled in the industry.
Apple’s Machine Learning team is taking part in DataComp for Language Models, an open-source project alongside others in the industry. The two models Apple recently produced have been shown to match or beat other leading training models, such as Llama 3 and Gemma.
Language models like these are used to train AI engines such as ChatGPT by providing a standard framework: an architecture, parameters, and dataset filtering that supplies higher-quality data for the AI engines to draw from.