
Building Large Language Models from Scratch - by Dilyan Grigorov (Paperback)

New at Target
$59.99

Pre-order

Eligible for registries and wish lists


About this item

Highlights

  • This book is a complete, hands-on guide to designing, training, and deploying your own Large Language Models (LLMs), from the foundations of tokenization to the advanced stages of fine-tuning and reinforcement learning.
  • About the Author: Dilyan Grigorov is a software developer with a passion for Python software development, generative deep learning & machine learning, data structures, and algorithms.
  • Computers + Internet, Intelligence (AI) & Semantics

Description



Book Synopsis



This book is a complete, hands-on guide to designing, training, and deploying your own Large Language Models (LLMs), from the foundations of tokenization to the advanced stages of fine-tuning and reinforcement learning. Written for developers, data scientists, and AI practitioners, it bridges core principles and state-of-the-art techniques, offering a rare, transparent look at how modern transformers truly work beneath the surface.

Starting from the essentials, you'll learn how to set up your environment with Python and PyTorch, manage datasets, and implement critical fundamentals such as tensors, embeddings, and gradient descent. You'll then progress through the architectural heart of modern models, covering RMS normalization, rotary positional embeddings (RoPE), scaled dot-product attention, Grouped Query Attention (GQA), Mixture of Experts (MoE), and SwiGLU activations, each explored in depth and built step by step in code. As you advance, the book introduces custom CUDA kernel integration, teaching you how to optimize key components for speed and memory efficiency at the GPU level, an essential skill for scaling real-world LLMs.
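
For readers who want a preview of two of the components named above, here is a minimal PyTorch sketch of RMS normalization and a SwiGLU feed-forward block. The layer names and sizes are illustrative assumptions, not the book's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square layer norm: rescale features by their RMS."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square over the feature dimension.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

class SwiGLU(nn.Module):
    """Feed-forward block with a SiLU-gated linear unit (Llama-style)."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # silu(gate(x)) * up(x), projected back to the model dimension.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))
```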

You'll also gain mastery over the phases of training that define today's leading models:

  • Pretraining - Building general linguistic and semantic understanding (a minimal training step is sketched after this list).
  • Midtraining - Expanding domain-specific capabilities and adaptability.
  • Supervised Fine-Tuning (SFT) - Aligning behavior with curated, task-driven data.
  • Reinforcement Learning from Human Feedback (RLHF) - Refining responses through reward-based optimization for human alignment.
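
As a rough illustration of what the pretraining phase optimizes, the sketch below runs a single next-token-prediction step with cross-entropy loss. `model`, `batch`, and `optimizer` are assumed placeholders, not code from the book.

```python
import torch
import torch.nn.functional as F

def pretrain_step(model, batch: torch.Tensor, optimizer) -> float:
    # batch: (batch_size, seq_len) token ids; predict token t+1 from tokens <= t.
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits = model(inputs)                      # (batch_size, seq_len - 1, vocab)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),    # flatten positions for the loss
        targets.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```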
The final chapters guide you through dataset preparation, filtering, deduplication, and training optimization, culminating in model evaluation and real-world prompting with a custom TokenGenerator for text generation and inference.
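
The book's TokenGenerator class isn't reproduced here, but a text-generation utility of this kind typically wraps a decoding loop like the following greedy sketch, where `model` and `tokenizer` are assumed stand-ins.

```python
import torch

@torch.no_grad()
def generate(model, tokenizer, prompt: str, max_new_tokens: int = 50) -> str:
    ids = torch.tensor([tokenizer.encode(prompt)])   # (1, seq_len) token ids
    for _ in range(max_new_tokens):
        logits = model(ids)                          # (1, seq_len, vocab)
        # Greedy decoding: append the highest-probability next token.
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0].tolist())
```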

By the end of this book, you'll have the knowledge and confidence to architect, train, and deploy your own transformer-based models, equipped with both the theoretical depth and practical expertise to innovate in the rapidly evolving world of AI.

What You'll Learn

  • How to configure and optimize your development environment using PyTorch.
  • The mechanics of tokenization, embeddings, normalization, and attention mechanisms.
  • How to implement transformer components like RMSNorm, RoPE, GQA, MoE, and SwiGLU from scratch.
  • How to integrate custom CUDA kernels to accelerate transformer computations (see the sketch after this list).
  • The full LLM training pipeline: pretraining, midtraining, supervised fine-tuning, and RLHF.
  • Techniques for dataset preparation, deduplication, model debugging, and GPU memory management.
  • How to train, evaluate, and deploy a complete GPT-like architecture for real-world tasks.
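
One common way to do the CUDA kernel integration the list mentions is PyTorch's `torch.utils.cpp_extension.load_inline`, which compiles source strings at runtime. The toy elementwise kernel below is purely illustrative, not taken from the book, and assumes a local CUDA toolchain.

```python
import torch
from torch.utils.cpp_extension import load_inline

cuda_src = r"""
#include <torch/extension.h>

__global__ void scale_kernel(const float* x, float* out, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = x[i] * s;   // one thread per element
}

torch::Tensor scale(torch::Tensor x, float s) {
    auto out = torch::empty_like(x);
    int n = x.numel();
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(
        x.data_ptr<float>(), out.data_ptr<float>(), s, n);
    return out;
}
"""

# Declare the function in C++, define it in CUDA, and bind it by name.
ext = load_inline(
    name="toy_scale",
    cpp_sources="torch::Tensor scale(torch::Tensor x, float s);",
    cuda_sources=cuda_src,
    functions=["scale"],
)

y = ext.scale(torch.ones(8, device="cuda"), 2.0)  # tensor of 2.0s on the GPU
```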
Who this book is for:

Software developers, data scientists, machine learning engineers, and AI enthusiasts looking to build their own models from scratch.




About the Author



Dilyan Grigorov is a software developer with a passion for Python software development, generative deep learning & machine learning, data structures, and algorithms. He is an advocate for open source and the Python language itself. He has 16 years of industry experience programming in Python and has spent 5 of those years researching and testing Generative AI solutions. His passion for them stems from his background as an SEO specialist dealing with search engine algorithms daily. He enjoys engaging with the software community, often giving talks at local meetups and larger conferences. In his spare time, he enjoys reading books, hiking in the mountains, taking long walks, playing with his son, and playing the piano.

Dimensions (Overall): 9.25 Inches (H) x 6.1 Inches (W)
Suggested Age: 22 Years and Up
Genre: Computers + Internet
Sub-Genre: Intelligence (AI) & Semantics
Publisher: Apress
Format: Paperback
Author: Dilyan Grigorov
Language: English
Street Date: March 28, 2026
TCIN: 1008647337
UPC: 9798868822964
Item Number (DPCI): 247-19-3821
Origin: Made in the USA or Imported

Shipping details

Estimated ship dimensions: 1 inch length x 6.1 inches width x 9.25 inches height
Estimated ship weight: 1 pound
We regret that this item cannot be shipped to PO Boxes.
This item cannot be shipped to the following locations: American Samoa, Guam, Northern Mariana Islands, Puerto Rico, United States Minor Outlying Islands, U.S. Virgin Islands, and APO/FPO addresses.

Return details

This item can be returned to any Target store or Target.com.
This item must be returned within 90 days of the date it was purchased in store, shipped, delivered by a Shipt shopper, or made ready for pickup.
See the return policy for complete information.
