BERT: Bidirectional Encoder Representations from Transformers
Definition: A transformer-based encoder that uses self-attention to build text representations, processing context bidirectionally and pre-training on unlabeled text via masked language modeling.
BERT: The “Reader” of the AI World
GPT vs. BERT
- GPT (Generative Pre-trained Transformer): Writes text. (Uni-directional: Left-to-Right).
- BERT (Bidirectional Encoder Representations from Transformers): Reads text. (Bi-directional: Looks at the whole sentence at once).
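The directionality difference above comes down to the attention mask. A minimal sketch (toy masks, not a real model): a GPT-style decoder applies a causal, lower-triangular mask, while a BERT-style encoder lets every token attend to every other token.

```python
# Toy illustration of the masking difference between GPT-style and
# BERT-style attention. 1 = "may attend", 0 = "masked out".

def causal_mask(n: int) -> list[list[int]]:
    # GPT: token i may only attend to tokens 0..i (left-to-right).
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def bidirectional_mask(n: int) -> list[list[int]]:
    # BERT: every token attends to the whole sentence at once.
    return [[1] * n for _ in range(n)]

print(causal_mask(3))         # → [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(bidirectional_mask(3))  # → [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```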
Why BERT is still King in Vibe Coding
You use GPT to write code. You use BERT to search code.
- Semantic Search: When you type “How do I handle auth?” in Cursor, a BERT-style embedding model scans your 10,000 files to find the relevant chunks. GPT would be too slow and expensive to read all 10,000 files.
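A minimal sketch of that search step. The `embed` function here is a bag-of-words stand-in for a real BERT-style sentence encoder (which would return a dense vector); the cosine-similarity ranking is the same in either case.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a BERT-style encoder: a bag-of-words vector.
    # A real system would call a sentence-embedding model here.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    # Rank every document against the query, return the top k names.
    q = embed(query)
    ranked = sorted(docs, key=lambda name: cosine(q, embed(docs[name])), reverse=True)
    return ranked[:k]

# Hypothetical file contents standing in for a codebase.
docs = {
    "auth.ts": "handle auth login token session user",
    "db.ts": "database connection pool query",
    "ui.tsx": "render button component layout",
}
print(search("How do I handle auth?", docs, k=1))  # → ['auth.ts']
```

In production the vectors are precomputed and stored in a vector index, so the query is embedded once and compared against millions of chunks in milliseconds.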
The Retrieval Augmented Generation (RAG) Stack
Vibe coding relies on RAG.
- BERT finds the 5 relevant files.
- GPT reads those 5 files and answers your question.
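The two steps above can be sketched as a single pipeline. `retrieve` and `ask_llm` are hypothetical stand-ins: a real stack would use an embedding search for step 1 and a chat-completion API call for step 2.

```python
# Minimal RAG flow sketch. retrieve() and ask_llm() are hypothetical
# stand-ins, not real library calls.

def retrieve(query: str, corpus: dict[str, str], k: int = 5) -> dict[str, str]:
    # Step 1 (the "BERT" step): rank files by naive keyword overlap
    # with the query; a real system uses embedding similarity.
    terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda f: len(terms & set(corpus[f].lower().split())),
                    reverse=True)
    return {f: corpus[f] for f in scored[:k]}

def ask_llm(prompt: str) -> str:
    # Step 2 (the "GPT" step): stubbed here; a real system sends the
    # prompt to a chat-completion endpoint.
    return f"[LLM answer based on {prompt.count('--- file:')} retrieved files]"

def rag_answer(query: str, corpus: dict[str, str]) -> str:
    # Glue: retrieved files become the context block of the prompt.
    context = retrieve(query, corpus, k=2)
    prompt = "\n".join(f"--- file: {name}\n{text}" for name, text in context.items())
    return ask_llm(f"{prompt}\n\nQuestion: {query}")

corpus = {"auth.ts": "login token auth", "db.ts": "query pool", "ui.tsx": "render button"}
print(rag_answer("fix auth token bug", corpus))  # → [LLM answer based on 2 retrieved files]
```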
Practical Tip
If your AI assistant is giving bad answers, it’s often a Retrieval Failure (BERT failed), not a Reasoning Failure (GPT failed).
- Fix: Manually “tag” the correct files (@utils.ts). You are bypassing the BERT step and feeding the context directly to GPT.
Expert Takeaway
Don’t confuse the “Writer” with the “Librarian.” BERT is the librarian who finds the book. GPT is the ghostwriter who reads it and summarizes it. You need both for a good vibe.
