Our research centers on large language models, together with reliability, reasoning, and scientific machine learning, with applications spanning code and low-level languages, multilingual systems, and structured domains such as molecules.

Academic
Natural Language Processing
JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models
Links: arXiv
Abstract: Adapter-based methods have become a cost-effective approach to continual learning (CL) for Large Language Models (LLMs), by …
