AI Glossary for Frontend Developers: What You Actually Need to Know

A practical glossary of key AI concepts applied to frontend development: LLMs, tokens, prompts, context windows, and more. Explained without jargon, with a sharp focus on real productivity.

Useful AI Terms for Frontend Developers (Without the Buzzwords)

I use AI daily in my work as a senior frontend developer. GitHub Copilot, ChatGPT, Claude... Tools that have radically changed the way I work.
But I also admit that for a long time (too long), I used terms I didn’t fully understand.

This glossary is for any frontend developer —junior, mid or senior— who wants to work more effectively with AI. These concepts matter regardless of your level:
they determine your actual ability to collaborate with intelligent systems.

A mid-level dev who understands prompt engineering can generate code that looks senior.

It’s not magic — it’s precise communication.


Note: I'm currently writing a full article on Prompt-Driven Development (PDD) —a workflow where you act as an architect using structured prompts to generate code instead of writing it line by line. If you're curious about this approach, stay tuned.


Here are the essential terms — no fluff, no hype.

LLM (Large Language Model)

What it is: A language model trained on massive amounts of text to predict the next word (or token) based on context. ChatGPT, Claude, and Gemini are all LLMs.

Why it matters: It’s the core tech behind every AI tool you use.
Understanding how an LLM works changes how you write prompts, structure code, and review results.

In practice: These tools don’t “think” like humans. They are predictive computation models (PCMs): they guess what comes next. They don’t understand what they say, but seem to — because they've seen millions of examples.

Trust me: the better you structure that context (your prompt), the better the output.
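At its core, this is next-token prediction. The toy sketch below illustrates the idea: the context string and the probability table are invented for illustration, and greedy decoding (always picking the most likely token) stands in for the sampling a real model does.

```typescript
// Toy sketch of next-token prediction. A real LLM scores every token
// in its vocabulary; here we hardcode a few made-up probabilities.
const nextTokenProbs: Record<string, Record<string, number>> = {
  "const button =": { "document": 0.6, "new": 0.25, "42": 0.05 },
};

function predictNext(context: string): string {
  const probs = nextTokenProbs[context] ?? {};
  // Greedy decoding: always pick the most likely continuation.
  const ranked = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  return ranked.length > 0 ? ranked[0][0] : "";
}
```

The model doesn't "know" that `document` is the right continuation; it has simply seen that pattern most often.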

📖 Docs: Wikipedia - What is an LLM

Token

What it is: The smallest unit an LLM understands. It can be a full word, part of a word, whitespace, or a symbol.

Why it matters: LLMs have token limits. Your Vue component might be too large to fit in context.
Example: Claude 3.5 Sonnet supports up to 200k tokens — around 150k words.

In practice: When you paste 500 lines and it says "file too long," it's not being lazy. It literally doesn't fit into its working memory.

→ As a junior: learn to break down your code.
→ As a senior: design prompts that respect token limits.
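For a quick sanity check before pasting code, a common rule of thumb for English text is roughly 4 characters per token. This sketch uses that heuristic; real tokenizers (such as OpenAI's tiktoken) are far more accurate.

```typescript
// Rough token estimate: ~4 characters per token for English-like text.
// This is a heuristic, not a real tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check a file against a model's context limit (e.g. 200k tokens).
function fitsInContext(code: string, limit = 200_000): boolean {
  return estimateTokens(code) <= limit;
}
```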

📖 Docs: OpenAI - What are tokens?

Context Window

What it is: The total token limit an LLM can process at once — includes both your prompt and the model's response.

Why it matters: It defines how much code you can analyze or discuss at once.

In practice: Break down large codebases. Start with architecture, then go component by component.
Trying to force everything in a single prompt is a waste.
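"Break it down" can be mechanical, too. Here is a minimal sketch that splits a large file into chunks under a token budget, reusing the ~4 characters per token heuristic and splitting on line boundaries so each chunk stays readable.

```typescript
// Split a large source file into chunks that respect a token budget.
// Uses the rough ~4 chars/token heuristic; splits on line boundaries.
function chunkByTokenBudget(source: string, maxTokens: number): string[] {
  const maxChars = maxTokens * 4;
  const chunks: string[] = [];
  let current = "";
  for (const line of source.split("\n")) {
    // Start a new chunk when adding this line would exceed the budget.
    if (current.length + line.length + 1 > maxChars && current) {
      chunks.push(current);
      current = "";
    }
    current += (current ? "\n" : "") + line;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Feed the model one chunk at a time, starting with the chunk that contains the architecture or entry point.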

Prompt Engineering

What it is: The art and science of structuring instructions so an AI produces what you actually need.

Why it matters: There’s a big difference between asking:
"make a button component"
vs
"create a Button component in TypeScript with size and color variants, using Tailwind CSS, accessible with hover and disabled states."

In practice:
→ Keep templates for common tasks.
→ As a junior: start with ultra-specific prompts.
→ As a senior: craft prompts that carry full architectural context.

Examples:

  • "Refactor this component to [specific goal]"
  • "Review this code and find [specific type of issue]"
  • "Convert this design into [target tech] using [specific conventions]"

📖 Docs: OpenAI - Prompt Engineering Guide

Temperature

What it is: A parameter that controls how “creative” or random the output is. It typically ranges from 0 (predictable) to 1 (creative/chaotic); some APIs accept values up to 2.

Why it matters: For debugging, you want low temperature (0.1–0.3). For creative tasks, go higher (0.7–0.8).

In practice: If you're using APIs directly, set this based on the task.
→ For code reviews: lower it.
→ For architectural exploration: raise it.
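One way to keep this consistent is to map task types to temperatures in code. The task names below are illustrative, not any API's vocabulary; adjust them to your own workflow.

```typescript
// Map task type to a temperature, following the ranges above.
// Task names are illustrative; adapt them to your workflow.
type Task = "debug" | "review" | "refactor" | "naming" | "architecture";

function temperatureFor(task: Task): number {
  switch (task) {
    case "debug":
    case "review":
    case "refactor":
      return 0.2; // precise, predictable output
    default:
      return 0.8; // exploratory, more varied output
  }
}
```

You would then pass this value as the `temperature` parameter in your API call.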

📖 Docs: Understanding OpenAI's Temperature Parameter

RAG (Retrieval Augmented Generation)

What it is: A method that lets the AI access info it wasn’t trained on by injecting external documents into the context.

Why it matters: Explains why your LLM doesn’t “know” your team’s internal docs — but can still work with them if you provide them.

In practice: When working with proprietary libraries, UI kits, or custom design systems — include them as part of the prompt or context.
It won’t guess them on its own.
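The core idea can be sketched in a few lines: retrieve the relevant internal docs, then prepend them to the prompt. Real RAG pipelines retrieve via embeddings; this sketch uses a naive keyword match, and the docs themselves are invented examples.

```typescript
// Minimal RAG-style sketch: prepend retrieved internal docs to the
// prompt so the model can use knowledge it was never trained on.
// Real retrieval uses embeddings; this uses a naive keyword match.
const internalDocs = [
  { title: "Button", text: "Our Button accepts variant: primary | ghost." },
  { title: "Card", text: "Card requires a heading slot." },
];

function buildPrompt(question: string): string {
  const relevant = internalDocs.filter((d) =>
    question.toLowerCase().includes(d.title.toLowerCase())
  );
  const context = relevant.map((d) => `[${d.title}] ${d.text}`).join("\n");
  return `Context:\n${context}\n\nQuestion: ${question}`;
}
```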

📖 Docs: AWS - What is RAG?

Fine-tuning

What it is: Customizing a pretrained model with domain-specific examples so it's better at a particular task.

Why it matters: It's why GitHub Copilot is great at code but worse at other creative tasks — it's fine-tuned for software.

In practice: Some companies fine-tune models on their internal codebases.
Even if you don’t, it explains why Copilot “gets” ubiquitous patterns more readily than niche or in-house ones.

📖 Docs: OpenAI - Fine-tuning Guide

Hallucination

What it is: When the model invents something that sounds right — but is wrong.

Why it matters: It can generate fake APIs, non-existent packages, or incorrect syntax.

In practice: Always verify dependencies. Especially if they sound oddly specific.
The package behind npm install vue-super-forms might not exist.

→ As a junior: build a habit of verifying everything.
→ As a senior: teach your team to adopt healthy skepticism.

Hallucinations can give you a false sense of control — and are one of the most dangerous risks in production.
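A cheap first line of defense is to diff AI-suggested imports against what you actually have installed. This sketch assumes you've loaded your dependency names into a set; the package names (including "vue-super-forms") are deliberately hypothetical.

```typescript
// Flag AI-suggested packages that aren't in your installed deps,
// to catch hallucinated libraries before running npm install.
// Package names here are hypothetical examples.
const installed = new Set(["vue", "pinia", "vue-router"]);

function flagUnknownDeps(suggestedImports: string[]): string[] {
  return suggestedImports.filter((pkg) => !installed.has(pkg));
}
```

Anything flagged deserves a trip to the npm registry before you trust it.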

Few-shot Learning

What it is: Giving 2–5 examples of input/output to teach the model what pattern to follow.

Why it matters: It’s often more effective than long explanations.

In practice: Instead of paragraphs, show examples:

// Example 1:
// Input: <button class="btn-primary">Click me</button>
// Output: <Button variant="primary">Click me</Button>

// Example 2:
// Input: <div class="card-container">Content</div>
// Output: <Card>Content</Card>

// Now convert:
// Input: <span class="text-error">Error message</span>

📖 Docs: OpenAI - Best Practices

Inference

What it is: The process where the model uses its training to generate output based on your input.

Why it matters: Each inference costs time and money. That's why usage limits exist.

In practice: Don’t ask 50 vague questions.
Structure your prompt clearly to get more value from fewer calls.

Prompting efficiency is an emerging professional skill.
If you can communicate well as a human, you can communicate well with AI.

Embedding

What it is: A numerical representation of text that captures its semantic meaning.

Why it matters: It's how AIs recognize code or content as similar, even if the wording is different.

In practice: It explains why LLMs can suggest good code when you write comments in Spanish, English, or pseudocode.
They don’t match words — they match meaning.
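"Matching meaning" usually means comparing embedding vectors with cosine similarity. The vectors below are toy 3-dimensional values for illustration; real embedding models produce hundreds or thousands of dimensions.

```typescript
// Embeddings are just vectors; "similar meaning" shows up as a high
// cosine similarity. Real embeddings have far more dimensions.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}
```

Identical directions score 1, orthogonal (unrelated) ones score 0, which is exactly how a vector store ranks "similar" code or docs.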

📖 Docs: OpenAI - Embeddings Guide


Why This Matters

These aren’t buzzwords. These are the rules of the game if you work with AI and modern frontend dev.

It’s not about coding better. It’s about communicating better with the tools you already use.

And in 2025, that might be the difference between being just another developer… or being one that's a bit harder to replace.


Key Takeaways

  1. LLMs ≠ magic

    LLMs don’t “understand” — they predict text. Treat them as predictive computation models (PCMs).

    ➤ This will change how you write prompts and reduce frustration.

  2. Tokens matter

    Everything you write (and everything the AI replies) costs tokens.

    ➤ Break down code. Be concise. Respect the context window.

  3. Prompt engineering is a critical skill

    It’s not about “asking nicely.” It’s about “directing clearly.”

    ➤ A good prompt is an extension of your communication skills.

  4. Keep a prompt template arsenal

    ➤ Refactors, components, naming, tests, reviews, migrations…
    Build your reusable toolkit.

  5. Control the temperature

    0.1–0.3: debugging, reviewing, refactoring.
    0.7–0.9: naming, design ideas, architecture.

  6. Use few-shot prompting

    ➤ Teach by showing 2–3 input/output examples before the actual prompt.
    Much more effective than long prose.

  7. Don’t assume the AI knows your world

    ➤ If you don’t provide your UI kit or conventions, it can’t guess them.
    Use RAG-style techniques: give it the context.

  8. Detect hallucinations

    ➤ Don’t trust blindly.
    Always verify functions, library names, npm packages, etc.
    If it sounds too good… it’s probably fake 🤣

  9. Fewer inferences, more clarity

    ➤ Each call costs something.
    Optimize your first prompt.
    Reduce the need for follow-ups.

  10. Build teams with critical thinking

    ➤ As a senior, teach others to verify, validate, and not fall for good-sounding BS.
    AI doesn’t replace thinking — it either amplifies it or distorts it.
