
Fine-tuning LLMs

From April until December 2024, I explored how to fine-tune a 7B base model to handle chat. I started by training a smaller model locally, then learned how to train on cloud computing environments, including multi-GPU training and training on machines where even a server-grade H100 GPU wasn't large enough to train the model on its own.

Here are the posts in this series:

Copyright (c) 2006-2026 by Giles Thomas. This work is licensed under a Creative Commons Attribution 4.0 International License.