LLM Experiments
Prompting, retrieval, and evaluation experiments for working more effectively with language models.
This project collects small but concrete experiments with prompt tuning, retrieval augmentation, and practical evaluation workflows for large language models.
What It Covers
- Prompt design patterns and how they change output quality.
- Retrieval-augmented pipelines for grounding answers in external context.
- Lightweight evaluation setups that make model behavior easier to compare.
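The three themes above compose naturally into one loop: retrieve context, build a grounded prompt, score the answer. A minimal sketch of that loop is below; all names (`retrieve`, `build_prompt`, `evaluate`) and the toy word-overlap retriever are hypothetical illustrations, not part of this project's code.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the question in the retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

def evaluate(predictions: list[str], references: list[str]) -> float:
    """Exact-match accuracy: a cheap way to compare prompt variants."""
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(references)

docs = ["Paris is the capital of France.",
        "The Nile is a river in Africa."]
prompt = build_prompt("What is the capital of France?",
                      retrieve("capital of France", docs))
print(evaluate(["Paris"], ["paris"]))  # 1.0
```

In practice the retriever would be an embedding index and the metric something richer than exact match, but keeping each piece this small is what makes side-by-side comparisons of prompt variants easy.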