This project serves as a comprehensive, hands-on guide to working with Large Language Models (LLMs). It covers the full lifecycle of LLM application development, from creating generative pipelines and text embeddings to building complex Retrieval-Augmented Generation (RAG) systems using orchestration frameworks. Additionally, it demonstrates advanced optimization techniques such as prompt engineering, Parameter-Efficient Fine-Tuning (PEFT), and quantization to customize models and run them efficiently on consumer hardware.
Hands-On-Large-Language-Models is organized as connected concepts and components. Start broad, then drill down chapter by chapter.
Source Repository: https://github.com/HandsOnLLM/Hands-On-Large-Language-Models
Follow sequentially or jump to any topic. Start with Generative Pipelines.
This tutorial was automatically generated by Code IQ and rendered with the shared tutorial site builder. It can be produced for any repository tutorial folder that follows the numbered markdown chapter layout.