Salesforce has announced BLIP-2, an open-source visual-language model that the company claims is both faster and simpler than GPT-4. The model connects a frozen image encoder to a frozen large language model, an approach designed to cut the cost and improve the efficiency of pre-training visual-language models. Salesforce says BLIP-2 can improve accuracy while reducing training time by orders of magnitude. The code for BLIP-2 is available on GitHub.
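For readers who want to try the released model, the snippet below is a minimal sketch of image captioning with BLIP-2. It assumes the Hugging Face `transformers` port of the model (the `Blip2Processor` and `Blip2ForConditionalGeneration` classes), the `Salesforce/blip2-opt-2.7b` checkpoint, and an arbitrary example image URL; none of these specifics come from the announcement itself.

```python
# Minimal sketch: caption an image with BLIP-2 via the Hugging Face
# transformers port (assumed here; requires transformers >= 4.27).
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Checkpoint name is an assumption for illustration, not from the announcement.
checkpoint = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Any RGB image works; this URL is only a placeholder example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# No text prompt is given, so the model produces an unconditional caption.
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```

Because the image encoder and language model stay frozen, only the small bridging module is trained, which is what the efficiency claims above refer to.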
Source update: Salesforce New Open Source Visual-Language Model that… – Towards AI