Mistral AI has unveiled new model customization capabilities on La Plateforme. The accompanying release, called mistral-finetune, provides fine-tuning capabilities intended to enhance the performance, speed, and editorial control of users' artificial intelligence (AI) applications.
According to an announcement on Mistral AI's website, the recently released mistral-finetune lets users fine-tune the company's open-source AI models on their own infrastructure. The France-based company describes it as a lightweight, memory-efficient codebase built on the Low-Rank Adaptation (LoRA) training paradigm.
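To illustrate why LoRA is memory-efficient, here is a minimal NumPy sketch of the core idea. This is not Mistral's implementation; the layer dimensions, rank, and scaling value are hypothetical, chosen only to show the arithmetic: the pretrained weight is frozen, and training updates only two small low-rank factors.

```python
import numpy as np

# Illustrative sketch of the LoRA idea (not mistral-finetune's actual code).
# Instead of updating a full d x k weight matrix W, LoRA freezes W and learns
# a low-rank update B @ A, with B of shape (d, r) and A of shape (r, k),
# where the rank r is much smaller than d and k.

d, k, r = 1024, 1024, 8            # hypothetical layer size and LoRA rank
alpha = 16                          # hypothetical scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))     # frozen pretrained weight (never updated)
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialized, so the update starts at 0

# Effective weight used in the forward pass:
W_eff = W + (alpha / r) * (B @ A)

full_params = d * k                 # parameters a full fine-tune would update
lora_params = A.size + B.size       # parameters LoRA actually trains
print(f"full: {full_params}, lora: {lora_params}")
```

With these example dimensions, LoRA trains 16,384 parameters instead of 1,048,576 (a 64x reduction), which is what makes the approach lightweight enough to run on modest hardware.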
In addition, Mistral AI will offer services on La Plateforme that give developers fine-tuning support on its own infrastructure. The services are based on the company's fine-tuning techniques, which, it says, make model adaptation and deployment more cost-efficient and effective.