🚀 Excited to share our latest achievement at LG AI Research! Our team has released Exaone-4.0-VL, our next-generation Vision-Language Foundation Model, at the LG AI Research Talk Concert 2025. The 32B model achieves leading performance across multimodal LLM benchmarks, excelling in both perception (DocVQA, InfoVQA, TextVQA) and reasoning (MMMU, AI2D, ChartQA). All training stages, from large-scale pre-training to post-training (SFT, DPO), were conducted on our in-house Exaone platform with web-scale multimodal data. We believe Exaone-4.0-VL will accelerate transfer learning and fine-tuning for domain-specific enterprise AI agents, enabling impactful real-world applications. A tech report will be released soon!
#VLM #MultimodalLLM #VisionLanguageModel #AIResearch #Exaone #LG_AIR