Session 4 - Training and Evaluating LLMs on Custom Datasets
This session aims to equip you with the knowledge to train Large Language Models (LLMs), covering techniques such as unsupervised pretraining and supervised fine-tuning with various preference optimization methods. It also covers efficient fine-tuning techniques, retrieval-based approaches, and fine-tuning language agents. Finally, the session discusses LLM training frameworks and evaluation methods for LLMs, including evaluation-driven development and using LLMs themselves as evaluators.
This session is intended for:
- People who are already familiar with the basics of LLMs and Transformers
- People who already know how to use pre-trained LLMs via prompt engineering and RAG
- People who want to train or fine-tune their own LLMs on custom data
- People who want to learn how to evaluate LLMs
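As a small taste of the preference-optimization topic mentioned above, here is a minimal sketch of the Direct Preference Optimization (DPO) loss in plain PyTorch. DPO is just one of the "various preference optimization methods" the session covers; the function name, tensor values, and beta setting below are illustrative assumptions, not the session's reference implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss (Rafailov et al., 2023) over a batch of preference pairs.

    Each argument is a tensor of per-sequence log-probabilities: the sum
    of token log-probs of the chosen/rejected response under the policy
    being trained and under a frozen reference model. beta controls how
    far the policy may drift from the reference.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the reward margin of chosen over rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with made-up per-sequence log-probabilities.
pol_c = torch.tensor([-12.3, -8.1])
pol_r = torch.tensor([-14.0, -9.5])
ref_c = torch.tensor([-12.9, -8.4])
ref_r = torch.tensor([-13.1, -9.1])
print(dpo_loss(pol_c, pol_r, ref_c, ref_r))
```

In practice, libraries such as Hugging Face TRL provide a full `DPOTrainer`; this sketch only shows the core objective.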
Outline
Part 1: Training Foundational LLMs
Coming soon...
Part 2: Fine-tuning LLMs to Human Preferences
Details
- Date: 14 March 2024
- Speaker: Abhor Gupta
- Location: Infocusp Innovations LLP
Material
- Recording: TODO
Part 3: LLM Training Frameworks
Coming soon...