Generated by Code IQ · v1.0

Hands-On-Large-Language-Models
Knowledge Tutorial

This project serves as a comprehensive, hands-on guide to working with Large Language Models (LLMs). It covers the full lifecycle of LLM application development, from creating generative pipelines and text embeddings to building complex Retrieval-Augmented Generation (RAG) systems using orchestration frameworks. Additionally, it demonstrates advanced optimization techniques such as prompt engineering, Parameter-Efficient Fine-Tuning (PEFT), and quantization to customize models and run them efficiently on consumer hardware.
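The embedding and retrieval chapters all build on one idea: semantically similar texts map to nearby vectors, so retrieval reduces to a nearest-neighbor search. As a rough, library-free sketch (the hand-made 3-d vectors below are stand-ins for real model embeddings, not output from the book's code), semantic search is just a cosine-similarity ranking:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-d "embeddings" standing in for real model output.
docs = {
    "cats are mammals": [0.9, 0.1, 0.0],
    "stocks fell today": [0.0, 0.2, 0.9],
    "dogs are loyal pets": [0.7, 0.5, 0.1],
}
query = [0.85, 0.2, 0.05]  # pretend embedding of a cat-related question

# Rank documents by similarity to the query vector.
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # → cats are mammals
```

In a RAG system, the top-ranked documents are then pasted into the prompt as context for the generator; the book's chapters swap the toy vectors for a real embedding model and a vector store.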

7 Chapters
System Architecture

How the pieces fit

Hands-On-Large-Language-Models is organized as connected concepts and components. Start broad, then drill down chapter by chapter.

⚙️ Generative Pipelines
⚙️ Prompt Engineering
⚙️ Text Embeddings
⚙️ Semantic Search & RAG
⚙️ LangChain Orchestration
⚙️ Quantization
⚙️ Parameter-Efficient Fine-Tuning (PEFT)
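Of these subsystems, Quantization has the simplest core idea: store weights in fewer bits by rescaling them to a small integer range. A minimal pure-Python sketch of symmetric "absmax" int8 quantization (toy weights for illustration, not the repo's actual code, which uses libraries such as bitsandbytes):

```python
def quantize_int8(weights):
    # Symmetric absmax scheme: scale so the largest magnitude maps to 127.
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    # Recover approximate float weights from int8 codes.
    return [v * scale for v in q]

w = [0.42, -1.27, 0.08, 0.9]
q, s = quantize_int8(w)
approx = dequantize(q, s)

# Rounding error is bounded by half a quantization step.
err = max(abs(a - b) for a, b in zip(w, approx))
```

Each weight now fits in one byte instead of four, at the cost of a small, bounded rounding error; real quantization schemes refine this with per-block scales and outlier handling.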
Repository Overview

Intro and Architecture Diagram


Source Repository: https://github.com/HandsOnLLM/Hands-On-Large-Language-Models

flowchart TD
    A0["Generative Pipelines"]
    A1["Text Embeddings"]
    A2["Semantic Search & RAG"]
    A3["Prompt Engineering"]
    A4["LangChain Orchestration"]
    A5["Parameter-Efficient Fine-Tuning (PEFT)"]
    A6["Quantization"]
    A4 -->|"Orchestrates"| A0
    A4 -->|"Uses templates from"| A3
    A4 -->|"Implements workflow for"| A2
    A2 -->|"Uses for retrieval"| A1
    A0 -->|"Consumes structured inputs"| A3
    A0 -->|"Loads optimized models"| A6
    A0 -->|"Integrates adapters from"| A5
    A5 -->|"Combines for QLoRA"| A6
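The A5 → A6 edge ("Combines for QLoRA") pairs low-rank adapters with a quantized base model. Why adapters are so cheap follows from back-of-envelope arithmetic: LoRA freezes a d_out × d_in weight matrix W and trains only two thin matrices B (d_out × r) and A (r × d_in). The 4096 × 4096 projection and rank 8 below are illustrative assumptions, not values taken from the book:

```python
# Full fine-tuning updates every entry of the weight matrix;
# LoRA trains only the low-rank factors B and A.
d_out, d_in, r = 4096, 4096, 8  # hypothetical attention projection, rank 8

full_params = d_out * d_in            # parameters updated by full fine-tuning
lora_params = d_out * r + r * d_in    # parameters updated by LoRA

print(full_params, lora_params, lora_params / full_params)
# → 16777216 65536 0.00390625  (well under 1% of the original)
```

Because the frozen base weights are never updated, they can also be stored in 4-bit quantized form; that combination of a quantized base plus trainable low-rank adapters is exactly what the QLoRA edge in the diagram denotes.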
Tutorial Chapters

All 7 chapters

Follow sequentially or jump to any topic. Start with Generative Pipelines.

About This Project

Generated by Code IQ

This tutorial was automatically generated by Code IQ and rendered with the shared tutorial site builder. The same site can be produced for any repository tutorial folder that follows the numbered-markdown chapter layout.

View Code IQ ↗
python build_site.py '/home/runner/work/Code-IQ/Code-IQ/output/Hands-On-Large-Language-Models'

// → 7 chapters
// → source: HandsOnLLM/Hands-On-Large-Language-Models