Small LLMs Can Be Good Cold-Start Recommenders
Jan 1, 2026
J. Noel
Chris Monterola
D.S. Tan
Abstract
Cold-start recommendation — where little or no prior interaction data exists for a user or item — remains one of the most challenging problems in recommender systems. Large language models (LLMs) have shown promise in addressing this by leveraging rich semantic knowledge, but their computational cost limits practical deployment. In this work, we demonstrate that small, efficiently fine-tuned LLMs can serve as effective cold-start recommenders, achieving competitive performance against larger models while remaining substantially more resource-efficient. Our approach fine-tunes small LLMs using item metadata and user context signals to generate initial recommendations when interaction history is sparse or absent. Experiments across multiple benchmark datasets show that small LLMs, when appropriately adapted, match or exceed the cold-start performance of much larger models and outperform traditional collaborative filtering baselines. These findings suggest that small LLMs offer a practical and scalable solution for the cold-start problem in real-world recommendation systems.
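To make the approach concrete, the sketch below illustrates the inference side of this recipe: a small instruction-tuned LLM is prompted with item metadata and a user context description to rank candidates for a user with no interaction history. This is a minimal illustration, not the authors' implementation; the model choice, prompt template, and metadata fields are all assumptions for the example.

```python
# A minimal sketch (not the paper's code): prompting a small instruction-tuned
# LLM to rank candidate items for a cold-start user from metadata alone.
from transformers import pipeline

# Hypothetical choice of "small LLM"; any compact instruction-tuned model works.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def recommend_cold_start(user_context: str, candidates: list[dict], k: int = 3) -> str:
    """Ask the model to rank candidate items for a user with no history."""
    # Serialize item metadata (illustrative fields: title, genre, description).
    items = "\n".join(
        f"- {c['title']} ({c['genre']}): {c['description']}" for c in candidates
    )
    prompt = (
        "You are a recommender system.\n"
        f"User profile: {user_context}\n"
        f"Candidate items:\n{items}\n"
        f"Rank the top {k} items for this user, best first, one per line."
    )
    # Greedy decoding keeps the ranking deterministic for a given prompt.
    out = generator(prompt, max_new_tokens=128, do_sample=False)
    # The pipeline echoes the prompt for string inputs; strip it off.
    return out[0]["generated_text"][len(prompt):].strip()

candidates = [
    {"title": "Dune", "genre": "sci-fi", "description": "Desert-planet epic."},
    {"title": "Gone Girl", "genre": "thriller", "description": "Marriage mystery."},
    {"title": "The Martian", "genre": "sci-fi", "description": "Stranded-astronaut survival."},
]
print(recommend_cold_start("new user who likes hard science fiction", candidates))
```

In practice, the paper fine-tunes the small model on such metadata-and-context inputs rather than relying on prompting alone; the sketch only shows the shape of the cold-start signal the model consumes.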
Type
Publication
Frontiers in Artificial Intelligence