In TDS Archive, by Harys Dalvi: "Can Transformers Solve Everything?" Looking into the math and the data reveals that transformers are both overused and underused. (Oct 1, 2024)
In TDS Archive, by Srijanie Dey, PhD: "Deep Dive into Transformers by Hand ✍︎" Explore the details behind the power of transformers. (Apr 12, 2024)
In TDS Archive, by Luís Roque: "The Power of Retrieval Augmented Generation: A Comparison between Base and RAG LLMs with Llama2" A deep dive into tailoring pre-trained LLMs for custom use cases using a RAG approach, featuring LangChain and Hugging Face integration. (Nov 29, 2023)
In TDS Archive, by Luís Roque: "Decoding LLMs: Creating Transformer Encoders and Multi-Head Attention Layers in Python from Scratch" Exploring the intricacies of encoder, multi-head attention, and positional encoding in large language models. (Dec 1, 2023)
In TDS Archive, by Ryan Pégoud: "Implementing a Transformer Encoder from Scratch with JAX and Haiku 🤖" Understanding the fundamental building blocks of Transformers. (Nov 7, 2023)
By Fareed Khan: "Understanding Transformers: A Step-by-Step Math Example — Part 1" I understand that the transformer architecture may seem scary, and you might have encountered various explanations on YouTube or in blogs… (Jun 5, 2023)
In TDS Archive, by Arjun Sarkar: "Build your own Transformer from scratch using Pytorch" Building a Transformer model step by step in Pytorch. (Apr 26, 2023)
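Several of the pieces above build transformer internals from scratch. As a taste of what those walkthroughs cover, here is a minimal sketch of multi-head self-attention in PyTorch. It is an illustration only, not code from any of the linked articles; the class name MultiHeadSelfAttention and the dimensions (d_model=64, num_heads=4) are assumptions chosen for the example.

# A minimal sketch of multi-head self-attention, in the spirit of the
# "from scratch" walkthroughs above. Dimensions are illustrative
# assumptions, not taken from any linked article.
import math
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must divide evenly across heads"
        self.d_head = d_model // num_heads
        self.num_heads = num_heads
        # One linear projection each for queries, keys, values,
        # plus an output projection after the heads are merged.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, d_model = x.shape

        # Project, then split into heads: (batch, heads, seq_len, d_head).
        def split(t: torch.Tensor) -> torch.Tensor:
            return t.view(batch, seq_len, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))

        # Scaled dot-product attention: softmax(QK^T / sqrt(d_head)) V.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        weights = scores.softmax(dim=-1)
        attended = weights @ v

        # Merge the heads back together and apply the output projection.
        merged = attended.transpose(1, 2).contiguous().view(batch, seq_len, d_model)
        return self.out_proj(merged)

if __name__ == "__main__":
    attn = MultiHeadSelfAttention(d_model=64, num_heads=4)
    x = torch.randn(2, 10, 64)  # (batch, sequence length, model dimension)
    print(attn(x).shape)        # torch.Size([2, 10, 64])

The division by sqrt(d_head) keeps the dot-product scores at a scale where the softmax does not saturate; it is the same scaling step the step-by-step math walkthroughs above work through by hand.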