Large Language Models Demystified

Gain a clear, practical understanding of how LLMs are built and learn how modern models are developed, optimized, and deployed for real‑world applications.

  • 0.5 CEU / 5 PDH credits
  • Launched 2026
  • 5 courses
  • 5 hours

Course Description

This course program introduces learners to the intricacies of building large language models (LLMs). It teaches learners to build models from scratch, demystifying the black box that LLMs can appear to be. The course series breaks down the components of transformers, their attention mechanisms, and how they have revolutionized natural language processing (NLP) tasks. The courses focus on a state-of-the-art LLM architecture and the processing and training involved in deploying the model. By the end of this series, learners will have a comprehensive understanding of LLM construction and the principles behind transformers, empowering them to apply these models in real-world applications.

Course Objectives

  • Practical applications and impact of LLMs in real-world scenarios
  • The transformer model and attention mechanisms
  • Coding transformer layers and attention mechanisms from scratch
  • Best practices for dataset selection and preprocessing in LLM training
  • Pre-training and fine-tuning paradigms in LLM development
  • Evaluating model performance, including common debugging techniques
  • Hands-on exercises for end-to-end model development, training, and deployment
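To give a flavor of one objective above, coding attention mechanisms from scratch, the snippet below is an illustrative NumPy implementation of scaled dot-product attention, the core operation of a transformer layer. This is a minimal sketch, not the course's own code; all names, shapes, and values are assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V and return (output, weights)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights

# Toy example: 4 query positions attending over 6 key/value positions, dim 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out, w = scaled_dot_product_attention(Q, K, V)    # out has shape (4, 8)
```

In the courses, this building block is extended with learned projections, multiple heads, and masking to form full transformer layers.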

Authors and Instructors

Sai Chand Boyapati

Director of Software Quality Assurance

Mr. Boyapati is an internationally recognized expert in software quality assurance (QA), whose groundbreaking work has had a transformative impact on industries worldwide. His influence and contributions in testing span major developments in software products that have redefined their markets.

Mr. Boyapati holds a critical leadership role as Director of Software Quality Assurance in a globally distinguished organization.

In addition to his technical achievements, Mr. Boyapati has served as a peer reviewer and judge in authoritative capacities. He has evaluated numerous research papers for prestigious conferences and hackathons on AI & LLMs.

He has written extensively on QA, cybersecurity, and artificial intelligence/LLMs, with articles published in the media. His book, Focus on QA: Redefining Software Testing in the AI-Driven Era, became a bestseller upon release, providing invaluable insights into applying AI to QA processes.

Hamza Mohammed

Machine Learning Engineer, Samsung Research America

Hamza Mohammed is a Machine Learning Engineer with Samsung Research America. He is an industry expert in deep learning and reinforcement learning, specializing in large language models. Mr. Mohammed has a proven research and industry track record applying, optimizing, and accelerating deep learning and reinforcement learning across disciplines including computer vision, natural language processing (including multi-modal modeling), robotics and automation, software engineering and testing, autonomous navigation and ADAS, digital twin simulation, and biomedical imaging. He has designed and optimized ML models and algorithms for edge-compute deployments and is an authority in securing AI applications for on-device and on-premise environments. He is a contributor to several open-source projects, an author and peer reviewer of multiple publications in top-tier ML venues, and an inventor on key patents. Mr. Mohammed holds a B.S. in Electrical Engineering and Computer Sciences from the University of California, Berkeley.