# Embedding Model Drift

## The Problem

When your embedding model is updated or replaced, new embeddings become incompatible with the vectors already stored in your database, degrading retrieval quality or breaking search outright.

### Symptoms

* ❌ Queries that worked yesterday return no results today
* ❌ Semantic search quality suddenly degrades
* ❌ New documents don't match well with old documents
* ❌ Similarity scores drastically different after model update
* ❌ Must re-embed entire knowledge base (expensive)

### Real-World Example

```
Day 1: Using OpenAI text-embedding-ada-002
→ Embedded 100,000 documents (1536 dimensions)
→ Queries work perfectly

Day 30: OpenAI releases text-embedding-3-small
→ Better performance, lower cost
→ Twig switches to new model

Day 31: Users report "search is broken"
→ New queries embedded with v3-small
→ Existing docs embedded with ada-002
→ Vector spaces incompatible
→ Cosine similarity scores meaningless
```

***

## Deep Technical Analysis

### Embedding Space Incompatibility

Different models create different vector spaces:

**Vector Representation:**

```
Model A (ada-002): 1536 dimensions
→ Document "API authentication"
→ Embedding: [0.023, -0.154, 0.891, ..., 0.234]

Model B (text-embedding-3-small): 1536 dimensions
→ Same document "API authentication"
→ Embedding: [0.612, 0.034, -0.423, ..., 0.891]

Dimensions match (1536) but:
→ Semantic meaning of each dimension differs
→ Can't compare vectors from different models
→ Cosine similarity is meaningless across models
```
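The mismatch is easy to demonstrate. A minimal sketch, using toy 4-dimensional stand-ins for the 1536-dimensional vectors above (the values are illustrative, not real model output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity; refuses to compare vectors of different dimensionality."""
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy stand-ins for the vectors above.
doc_model_a   = [0.023, -0.154, 0.891, 0.234]   # "API authentication", Model A
doc_model_b   = [0.612,  0.034, -0.423, 0.891]  # same text, Model B
query_model_b = [0.598,  0.041, -0.401, 0.902]  # a similar query, Model B

print(cosine_similarity(doc_model_b, query_model_b))  # within one space: ~0.999, meaningful
print(cosine_similarity(doc_model_a, query_model_b))  # across spaces: noise (here negative)
```

The dimension guard also covers the 768-vs-1536 case below: with unequal lengths the score is not just meaningless but undefined.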

**The Cross-Model Query Problem:**

```
Scenario:
→ Knowledge base: 100,000 docs embedded with Model A
→ Switch to Model B for new queries

User query: "How to authenticate?"
→ Embedded with Model B: [0.612, 0.034, -0.423, ...]
→ Compare against Model A vectors in DB
→ Cosine similarity: 0.23 (low!)
→ Should be 0.95+ but models incompatible

Result: No relevant documents retrieved
```

**Dimensionality Mismatch:**

```
Model A: 768 dimensions (Sentence-BERT)
Model B: 1536 dimensions (OpenAI ada-002)

Can't even compute cosine similarity:
→ Vector lengths don't match
→ Need padding or projection
→ But semantic alignment still broken
```

### Model Version Updates

Same model family, different versions:

**Backwards-Incompatible Updates:**

```
OpenAI example:
→ text-embedding-ada-001 (retired)
→ text-embedding-ada-002 (current)

Even within same family:
→ Training data changed
→ Model architecture tweaked
→ Embedding distributions shift

Similarity comparison:
ada-001 embedding of "authentication"
vs.
ada-002 embedding of "authentication"

→ Different vectors
→ Not directly comparable
```

**The Silent Degradation:**

```
Provider updates model behind the scenes:
→ No version number change
→ No announcement
→ Model weights re-trained

Week 1: Embed 1000 docs → Embedding_v1
Week 5: Provider updates model
Week 6: Embed 100 new docs → Embedding_v2

Knowledge base now mixed:
→ 1000 old docs (v1 embeddings)
→ 100 new docs (v2 embeddings)
→ Subtle incompatibility
→ Search quality degrades gradually
```
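Silent provider updates can be caught with a canary check: keep a fixed set of sentinel texts, store their embeddings at index-build time, and periodically re-embed them and compare. A sketch, where `embed_fn` and the `baseline` mapping are stand-ins rather than any specific provider's API:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def check_drift(canary_texts, embed_fn, baseline, threshold=0.98):
    """Re-embed a fixed canary set and compare against stored baseline vectors.

    `embed_fn` stands in for your embedding client; `baseline` maps canary
    text -> the vector captured when the index was built. A similarity below
    `threshold` suggests the model changed underneath you.
    """
    drifted = []
    for text in canary_texts:
        sim = cosine(embed_fn(text), baseline[text])
        if sim < threshold:
            drifted.append((text, sim))
    return drifted

# Demo with stand-in vectors: an unchanged model reports no drift.
baseline = {"reset password": [1.0, 0.0]}
assert check_drift(["reset password"], lambda t: baseline[t], baseline) == []
```

The threshold is a judgment call: even a stable model can return slightly different vectors across API calls, so alert on sustained drops rather than single readings.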

### Re-Embedding Cost and Complexity

Switching models requires full re-processing:

**The Re-Embedding Math:**

```
Knowledge base: 100,000 documents
Average size: 2,000 tokens per doc
Total tokens: 200 million

OpenAI embedding cost: $0.0001 per 1K tokens
Re-embedding cost: (200M ÷ 1K) × $0.0001 = $20

Plus:
→ API rate limits (slow process)
→ Compute time (hours/days)
→ Database updates (all 100,000 rows)
→ Index rebuilding (vector DB)
→ Downtime or degraded service
```
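The arithmetic above generalizes to a one-line estimator, useful for budgeting a migration before committing to it (prices vary by provider, so the rate is a parameter):

```python
def reembedding_cost(num_docs: int, avg_tokens_per_doc: int, price_per_1k_tokens: float) -> float:
    """Dollar cost to re-embed every document exactly once."""
    total_tokens = num_docs * avg_tokens_per_doc
    return total_tokens / 1_000 * price_per_1k_tokens

print(reembedding_cost(100_000, 2_000, 0.0001))  # 20.0 -- matches the estimate above
```

Note that the API bill is usually the small part; the rate-limit-bound wall-clock time and index rebuild listed above tend to dominate.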

**The Incremental Update Problem:**

```
Can't do gradual migration:
→ Day 1: Re-embed 10,000 docs with new model
→ Day 2: Re-embed another 10,000
→ ...

During migration:
→ 10,000 docs: New model embeddings
→ 90,000 docs: Old model embeddings
→ Mixed vector space
→ Search broken across both sets

Must do atomic switch:
→ Re-embed ALL docs
→ Swap entire vector DB
→ Minimize downtime
→ Requires parallel infrastructure
```

### Zero-Downtime Migration Strategy

Maintaining service during model switch:

**Dual-Write Approach:**

```
Phase 1: Preparation
→ Continue using Model A for queries
→ Start embedding new docs with BOTH Model A and Model B
→ Store both embeddings

Phase 2: Backfill
→ Re-embed all old docs with Model B
→ Store alongside Model A embeddings
→ Background job, no user impact

Phase 3: Switch
→ Flip feature flag: Use Model B for queries
→ Model A embeddings remain as fallback

Phase 4: Cleanup
→ After verification, delete Model A embeddings
→ Reclaim storage

Cost: 2x storage during migration
```
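The four phases can be sketched as a small in-memory index (an assumption for illustration, not a real vector DB client; `embed_old` / `embed_new` stand in for the Model A and Model B clients):

```python
class DualWriteIndex:
    """Sketch of the dual-write migration phases above."""

    def __init__(self, embed_old, embed_new):
        self.embed_old, self.embed_new = embed_old, embed_new
        self.store = {}        # chunk_id -> {"old": vec, "new": vec}
        self.active = "old"    # Phase 3: flip this flag to "new"

    def add(self, chunk_id: str, text: str) -> None:
        # Phase 1: every newly ingested document gets BOTH embeddings.
        self.store[chunk_id] = {"old": self.embed_old(text), "new": self.embed_new(text)}

    def backfill(self, chunk_id: str, text: str) -> None:
        # Phase 2: background job adds the new-model embedding to an old chunk.
        self.store[chunk_id]["new"] = self.embed_new(text)

    def searchable_vectors(self) -> dict:
        # Queries only ever see the active model's vector space.
        return {cid: v[self.active] for cid, v in self.store.items() if self.active in v}

# Demo with stand-in embedding functions.
idx = DualWriteIndex(lambda t: ("A", t), lambda t: ("B", t))
idx.store["old_doc"] = {"old": ("A", "legacy text")}   # pre-migration chunk
idx.add("new_doc", "fresh text")                       # Phase 1: dual write
idx.backfill("old_doc", "legacy text")                 # Phase 2: backfill
idx.active = "new"                                     # Phase 3: the switch
```

Keeping both vectors under one chunk ID is what makes the Phase 3 flip atomic: no query ever sees a mixed space, and rolling back is just flipping the flag again.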

**The Blue-Green Deployment:**

```
Blue environment (active):
→ Vector DB with Model A embeddings
→ Serving queries

Green environment (staging):
→ New vector DB with Model B embeddings
→ Re-embed all docs here
→ Test queries

Switchover:
→ Flip traffic to Green
→ Blue becomes backup
→ Rollback easy if issues

Requires:
→ 2x infrastructure
→ 2x storage
→ Complex traffic routing
```

### Embedding Version Tracking

Knowing which model embedded each chunk:

**Metadata Tagging:**

```
Vector DB schema:
{
  "chunk_id": "chunk_abc",
  "embedding": [0.023, -0.154, ...],
  "embedding_model": "text-embedding-ada-002",  ← Track this!
  "embedding_version": "v2",
  "embedded_at": "2024-01-20T10:00:00Z"
}

Enables:
→ Identify outdated embeddings
→ Selective re-embedding
→ Mixed-model support (separate vector spaces)
```
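With the model tracked per chunk, selective re-embedding becomes a simple filter. A sketch over plain dicts mirroring the schema above (the target model name is an assumption for the example):

```python
CURRENT_MODEL = "text-embedding-3-small"  # assumed target model for queries

chunks = [
    {"chunk_id": "chunk_abc", "embedding_model": "text-embedding-ada-002"},
    {"chunk_id": "chunk_def", "embedding_model": "text-embedding-3-small"},
]

def outdated(chunks: list[dict], current_model: str = CURRENT_MODEL) -> list[str]:
    """Chunk IDs whose stored embedding_model no longer matches the query model."""
    return [c["chunk_id"] for c in chunks if c["embedding_model"] != current_model]

print(outdated(chunks))  # ['chunk_abc'] -> candidates for selective re-embedding
```

In a real vector DB this would be a metadata filter pushed down to the store rather than a Python loop, but the invariant is the same: re-embed exactly the chunks whose tag disagrees with the current query model.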

**Query Routing:**

```
If supporting multiple models simultaneously:

Query arrives:
1. Detect embedding model from user/config
2. Route to appropriate vector space:
   → Model A queries → Vector DB A
   → Model B queries → Vector DB B
3. Don't mix results across models

Complexity:
→ Multiple vector DBs
→ Result merging challenges
→ Consistency across models
```
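The routing rule reduces to a strict lookup that refuses to fall through to an incompatible space. A minimal sketch, where `vector_dbs` maps model name to a store handle (the names and handles are hypothetical):

```python
def route_query(query_model: str, vector_dbs: dict):
    """Return the vector store built with the same model as the query embedding.

    Never search a space built with a different model than the one that
    embedded the query -- failing loudly beats silently returning noise.
    """
    try:
        return vector_dbs[query_model]
    except KeyError:
        raise ValueError(f"no vector space indexed for model {query_model!r}")

stores = {"text-embedding-ada-002": "db_a", "text-embedding-3-small": "db_b"}
assert route_query("text-embedding-3-small", stores) == "db_b"
```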

### Semantic Drift in Training Data

Even same model architecture can drift:

**Training Data Changes:**

```
Model trained on:
Year 1: Wikipedia 2020 + Books + Web
Year 2: Wikipedia 2022 + Books + Web + Code

New training data:
→ Shifts semantic representations
→ Technical terms evolve
→ New concepts added
→ Old embeddings no longer optimal

Example:
"GPT" in 2020 model: generic "generative pre-trained"
"GPT" in 2023 model: strongly associated with "ChatGPT"

Embedding space shifted
```

**Domain-Specific Drift:**

```
General-purpose model: Works for most content
But:
→ Company uses medical terminology
→ Or legal jargon
→ Or internal acronyms

Provider updates model:
→ Trained on more medical data (good)
→ But less legal data (bad)

Semantic space shifts:
→ Medical queries improve
→ Legal queries degrade

Organization-specific impact unpredictable
```

### Model Deprecation and Forced Migration

Providers retire old models:

**The Sunset Scenario:**

```
Provider announcement:
"text-embedding-ada-001 will be deprecated on 2024-06-01"

Timeline:
→ 3 months notice
→ Must migrate by deadline
→ After deadline: API calls fail

Twig must:
→ Notify all customers using ada-001
→ Schedule re-embedding
→ Complete before deadline
→ High-pressure, time-constrained
```

**The Emergency Migration:**

```
Worst case:
→ Provider unexpectedly shuts down API
→ Or: Critical security issue forces immediate switch
→ No time for graceful migration

Options:
1. Emergency full re-embed (hours/days)
   → Service degraded during process

2. Switch to backup model
   → Accept temporary quality loss
   → Re-embed gradually

3. Cache query embeddings
   → Reuse recent query embeddings
   → But knowledge base stale
```

### Fine-Tuned Model Management

Custom-trained models add complexity:

**Fine-Tuning Workflow:**

```
Base model: text-embedding-ada-002
Fine-tune on: Company's proprietary docs

Result: company-embedding-v1 (custom model)

Benefits:
→ Better for company-specific terms
→ Higher accuracy on domain content

Challenges:
→ Can't use provider's latest base model
→ Stuck on old base model version
→ Must re-fine-tune if base model updates
→ Fine-tuning cost + time
```

**Version Tracking Complexity:**

```
Multiple model versions in production:
→ Base model: ada-002-v1 (old docs)
→ Base model: ada-002-v2 (some docs)
→ Fine-tuned: company-v1 on ada-002-v1
→ Fine-tuned: company-v2 on ada-002-v2

4 different embedding spaces
→ Must track per-chunk
→ Query routing complex
→ Testing/validation burden
```

***

## How to Solve

**Track embedding\_model and embedding\_version in metadata + implement dual-write during migration + batch re-embed old documents + use blue-green deployment for zero-downtime switch + set up alerts for provider model deprecation notices.** See [Embedding Model Management](/rag-scenarios-and-solutions/vectors/embedding-drift.md).

