Lecturer: Kerem Şenel
Fields: Artificial Intelligence, Machine Learning, Natural Language Processing
Content
This hands-on course takes you from transformer fundamentals to cutting-edge agentic AI systems. You’ll learn how large language models work under the hood, train and fine-tune your own models, and build autonomous agents that use tools, access external resources, and collaborate to solve complex tasks. Each session combines theory with practical implementation using industry-standard frameworks and open standards such as Hugging Face, LangChain, and the Model Context Protocol.
Session 1: Foundations & Architecture
Understanding Transformers: The Engine Behind Modern AI
Discover how LLMs predict and generate text through self-attention mechanisms. We’ll demystify the transformer architecture—from tokenization to embeddings—and you’ll implement a tokenizer from scratch while visualizing how models “pay attention” to different parts of text.
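To give a flavour of the hands-on part, here is a minimal character-level tokenizer sketch in Python. It only illustrates the text-to-ids-and-back round trip; the tokenizer built in the session (for instance a subword/BPE variant) may look quite different.

```python
# Minimal character-level tokenizer sketch (illustrative only; the
# course tokenizer, e.g. a BPE variant, may differ).

class CharTokenizer:
    def __init__(self, corpus: str):
        # Build the vocabulary from every distinct character in the corpus.
        self.chars = sorted(set(corpus))
        self.stoi = {ch: i for i, ch in enumerate(self.chars)}
        self.itos = {i: ch for ch, i in self.stoi.items()}

    def encode(self, text: str) -> list[int]:
        # Map each character to its integer token id.
        return [self.stoi[ch] for ch in text]

    def decode(self, ids: list[int]) -> str:
        # Map token ids back to characters and join them into a string.
        return "".join(self.itos[i] for i in ids)


if __name__ == "__main__":
    tok = CharTokenizer("hello transformer")
    ids = tok.encode("hello")
    print(ids)              # token ids for "hello"
    print(tok.decode(ids))  # round-trips back to "hello"
```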
Session 2: Training Fundamentals
Training Your Own Language Model
Learn what it takes to train an LLM: pre-training objectives, dataset curation, and scaling laws. You’ll fine-tune a real model (GPT-2 or TinyLlama) on custom data using Hugging Face tools, understanding the trade-offs between model size, compute, and performance.
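The fine-tuning workflow follows the standard Hugging Face Transformers pattern. The sketch below assumes GPT-2 and a hypothetical text file `my_corpus.txt` as the custom dataset; the session's actual data and hyperparameters will differ.

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face Transformers.
# "my_corpus.txt" is a hypothetical custom dataset; the course may use
# TinyLlama instead of GPT-2 and different hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenize the raw text corpus.
raw = load_dataset("text", data_files={"train": "my_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Causal LM objective: the collator builds the shifted labels for us.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```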
Session 3: Advanced Training & Alignment
Making Models Helpful: Instruction Tuning & RLHF
Explore how base models are transformed into helpful assistants through instruction tuning and alignment techniques like RLHF. You’ll fine-tune a model to follow instructions and experiment with advanced prompting techniques including chain-of-thought reasoning.
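Chain-of-thought prompting can be tried in a few lines of code. The sketch below uses a Hugging Face text-generation pipeline; the instruction-tuned TinyLlama chat model is an illustrative assumption, and any instruction-following model works the same way.

```python
# Minimal chain-of-thought prompting sketch.
# The model name is an illustrative assumption, not the course's exact choice.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Asking for intermediate steps ("Let's think step by step") is the core
# idea of chain-of-thought prompting covered in Session 3.
prompt = (
    "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
    "A: Let's think step by step."
)
output = generator(prompt, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"])
```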
Session 4: Cutting-Edge Applications & Agentic Systems
Autonomous AI: Building Agents That Think and Act
Go beyond chatbots to autonomous agents that use tools, access databases, and collaborate on complex tasks. You’ll implement function calling, integrate the Model Context Protocol (MCP) to connect LLMs with external resources, and build multi-agent systems using LangGraph—culminating in an autonomous agent demo.
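As a preview of the agentic tooling, here is a minimal tool-calling sketch built on LangGraph's prebuilt ReAct agent. The weather tool and the OpenAI chat model are illustrative assumptions rather than the course's exact setup; in the session, tools can also be exposed to the agent via MCP servers.

```python
# Minimal tool-calling agent sketch with LangGraph's prebuilt ReAct agent.
# The weather tool and the gpt-4o-mini model are illustrative assumptions.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for the given city."""
    # In a real agent this would call an external API or an MCP server.
    return f"It is sunny and 22 degrees Celsius in {city}."


llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(llm, [get_weather])

# The agent decides on its own to call the tool, reads the result,
# and composes a final natural-language answer.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Munich?"}]}
)
print(result["messages"][-1].content)
```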
Lecturer

Kerem Şenel received his PhD in Computer Science from LMU Munich in 2025, specializing in Natural Language Processing. During his doctoral research at the Center for Information and Language Processing (CIS), he explored diverse topics including interpretability, multilinguality, and evaluation of language models. He currently works as an IT consultant in industry, specializing in AI applications and solutions.
Affiliation: TNG Technology Consulting
Homepage: https://www.tngtech.com/en/