Using In-Context Learning and Frozen Large Language Models for Bayesian Optimization of Catalysts
Description: Large Language Models (LLMs) are advanced artificial intelligence (AI) systems capable of understanding and generating human-like text. In this study, we demonstrate how in-context learning with frozen LLMs can predict chemical properties directly from experimental procedures. We developed a prompting system that enables LLMs to perform regression with uncertainty estimates, which is essential for techniques like Bayesian optimization. By selecting which examples to include in the context, we enhance the model's performance beyond the limits of its context window, the maximum number of tokens it can process at once. Although our model does not outperform all baselines, it performs satisfactorily without training, feature selection, or significant computing resources. Our work highlights the potential of LLMs for efficient material and molecular design using natural-language predictions.
Time: Monday, June 26, 17:30 - 18:00 CEST
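The idea described in the abstract can be sketched in code: compose a prompt from (procedure, property) examples, sample several completions from a frozen LLM to obtain a mean and an uncertainty, and feed both into a standard acquisition function such as expected improvement. This is a minimal, illustrative sketch, not the authors' implementation: `query_llm` is a hypothetical stand-in for a real LLM API (here simulated with noise so the snippet runs), and the prompt format and acquisition choice are assumptions.

```python
import math
import random

def query_llm(prompt, temperature=0.7, seed=None):
    """Hypothetical stand-in for a frozen LLM completion call.
    A real system would send the prompt to an LLM API; here we simulate
    stochastic numeric completions so the sketch is runnable."""
    rng = random.Random(seed)
    base = (len(prompt) % 7) + 1.0  # fake "true" property derived from the prompt
    return base + rng.gauss(0.0, temperature)

def build_prompt(context_examples, query_procedure):
    """Compose an in-context prompt from selected (procedure, value) examples
    (an assumed format; the actual prompt design may differ)."""
    lines = [f"Procedure: {p}\nYield: {y:.2f}" for p, y in context_examples]
    lines.append(f"Procedure: {query_procedure}\nYield:")
    return "\n\n".join(lines)

def predict_with_uncertainty(context_examples, query_procedure, n_samples=8):
    """Sample several completions and use their mean and standard deviation
    as a Gaussian-style surrogate prediction for Bayesian optimization."""
    prompt = build_prompt(context_examples, query_procedure)
    samples = [query_llm(prompt, seed=i) for i in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / (n_samples - 1)
    return mean, math.sqrt(var)

def expected_improvement(mean, std, best_so_far):
    """Standard expected-improvement acquisition for maximization,
    assuming a Gaussian predictive distribution."""
    if std == 0.0:
        return 0.0
    z = (mean - best_so_far) / std
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mean - best_so_far) * cdf + std * pdf
```

In a Bayesian-optimization loop one would score each candidate procedure with `expected_improvement`, run the experiment with the highest score, append the result to `context_examples`, and repeat; selecting only the most relevant examples for the prompt keeps the context within the model's token limit.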