# RAG Strategies

Twig AI offers three distinct RAG (Retrieval-Augmented Generation) strategies, each optimized for different use cases and performance requirements.

## Strategy Comparison

| Strategy    | Speed     | Prompt Rewriting | Retrieval Method       | Reranking | Best For                     |
| ----------- | --------- | ---------------- | ---------------------- | --------- | ---------------------------- |
| **Redwood** | ~1-2 sec | ❌ No             | Direct vector search   | ❌ No      | Clear, simple questions      |
| **Cedar**   | ~2-3 sec | ✅ Context-aware  | Memory-enhanced search | ❌ No      | Conversational queries       |
| **Cypress** | ~3-4 sec | ✅ Advanced       | Tier-based + expansion | ✅ Yes     | Complex, high-accuracy needs |

## Feature Matrix

| Feature                           | Redwood | Cedar | Cypress |
| --------------------------------- | ------- | ----- | ------- |
| **Vector Search**                 | ✅       | ✅     | ✅       |
| **Chunking**                      | ✅       | ✅     | ✅       |
| **Memory**                        | ✅       | ✅     | ✅       |
| **Privacy Controls**              | ✅       | ✅     | ✅       |
| **Memory-Enhanced Prompt**        | ❌       | ✅     | ✅       |
| **Context-Aware Query Rewriting** | ❌       | ✅     | ✅       |
| **Vector Retrieval Optimization** | ❌       | ❌     | ✅       |
| **Tier-Based Source Retrieval**   | ❌       | ❌     | ✅       |
| **Automatic Reranking**           | ❌       | ❌     | ✅       |
| **Higher Retrieval Volume**       | ❌       | ❌     | ✅       |
| **Query Expansion**               | ❌       | ❌     | ✅       |

## Redwood Strategy

<figure><img src="/files/OO7wzAx4hCTImGkiSNUT" alt="Redwood Strategy Diagram"><figcaption><p>Redwood - Standard RAG</p></figcaption></figure>

### Overview

Redwood is the simplest and fastest RAG approach. It uses the original user query directly for vector search, with no prompt rewriting.

### How It Works

1. User asks a question
2. Original query is converted to embedding
3. Vector database returns top matching documents
4. Context is built from retrieved documents
5. LLM generates response with context
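
The five steps above can be sketched in a few lines of Python. Everything here is illustrative: the hand-made 3-d vectors stand in for real embeddings, the keyword corpus stands in for a vector database, and the final string stands in for the prompt sent to the LLM.

```python
import math

# Toy corpus with precomputed "embeddings" (in production these come
# from an embedding model; here they are hand-made 3-d vectors).
DOCS = [
    ("How to reset your password", [0.9, 0.1, 0.0]),
    ("Billing and invoices",       [0.1, 0.9, 0.0]),
    ("API authentication guide",   [0.0, 0.2, 0.9]),
]

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: a fixed vector for the demo query.
    return [0.85, 0.15, 0.05]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def redwood_answer(query: str, top_k: int = 2) -> str:
    # Steps 1-2: embed the original query directly -- no rewriting step.
    q_vec = embed(query)
    # Step 3: rank documents by vector similarity.
    ranked = sorted(DOCS, key=lambda d: cosine(q_vec, d[1]), reverse=True)
    # Step 4: build the context from the top matches.
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # Step 5: a single LLM call would generate the answer from this prompt.
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Note that the query is embedded exactly as typed, which is why Redwood is both the cheapest option and the most sensitive to ambiguous phrasing.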

### Performance

* **Speed**: ~1-2 seconds
* **Token Usage**: Minimal (single LLM call)
* **Cost**: Lowest

### When to Use Redwood

✅ **Use when:**

* Questions are clear and well-formed
* No ambiguity in user queries
* Speed is the top priority
* Simple, direct questions
* High query volume with cost sensitivity

❌ **Avoid when:**

* Questions are ambiguous or context-dependent
* Follow-up questions that reference previous context
* Complex or multi-part queries
* Highest accuracy is critical

### Example Use Cases

* FAQ chatbots
* Simple help desk queries
* Product information lookup
* Quick reference tools

[Learn more about Redwood →](/product/overview-1/redwood.md)

## Cedar Strategy

<figure><img src="/files/XA31vu4zPscgtRhJ3God" alt="Cedar Strategy Diagram"><figcaption><p>Cedar - Context-Aware RAG</p></figcaption></figure>

### Overview

Cedar enhances retrieval by rewriting the user's query based on conversation context and memory before searching the vector database.

### How It Works

1. User asks a question
2. System analyzes conversation history (memory)
3. Query is rewritten to be more explicit and searchable
4. Rewritten query is used for vector search
5. Context is built from retrieved documents
6. LLM generates response with full context
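
The key difference from Redwood is step 3, the context-aware rewrite. In production that step is an LLM call; the sketch below substitutes a toy rule that resolves a pronoun against the most recent conversation topic, just to show where the rewrite sits in the flow.

```python
def rewrite_with_memory(query: str, history: list[str]) -> str:
    """Rewrite a follow-up query to be self-contained before vector search.

    Toy stand-in for the LLM rewriting call: resolve a pronoun against
    the most recent topic in the conversation memory.
    """
    topic = history[-1] if history else ""
    if topic:
        for pronoun in ("it", "that", "this"):
            if f" {pronoun}" in f" {query}":
                return query.replace(pronoun, topic, 1)
    return query

# Turn 1 established the topic; turn 2 is ambiguous on its own.
history = ["SSO configuration"]
rewritten = rewrite_with_memory("how do I enable it?", history)
# The rewritten, self-contained query is what gets embedded and sent to
# the vector database (steps 4-6 then proceed as in Redwood).
```

A self-contained query passes through unchanged, which is why Cedar degrades gracefully to Redwood-like behavior when no conversation context is needed.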

### Performance

* **Speed**: ~2-3 seconds
* **Token Usage**: Moderate (additional rewriting call)
* **Cost**: Medium

### When to Use Cedar

✅ **Use when:**

* Conversational queries are common
* Questions reference previous context
* Users ask follow-up questions
* Ambiguous phrasing is frequent
* Balance of speed and accuracy needed

❌ **Avoid when:**

* Maximum speed is required
* Queries are always self-contained
* Budget is extremely tight
* Ultra-high accuracy is critical

### Example Use Cases

* Customer support chatbots
* Interactive help systems
* Multi-turn conversations
* General Q&A assistants

[Learn more about Cedar →](https://github.com/thrivapp/twig-help-docs/blob/staging/ai-agents/rag-strategies/cedar.md)

## Cypress Strategy

<figure><img src="/files/dZ2X1G1DTGW54pb8Zk5E" alt="Cypress Strategy Diagram"><figcaption><p>Cypress - Advanced RAG with Reranking</p></figcaption></figure>

### Overview

Cypress is the most sophisticated RAG strategy, combining query expansion, tier-based retrieval, and automatic reranking for maximum accuracy.

### How It Works

1. User asks a question
2. Query is enhanced with memory (if available)
3. **Query Expansion**: Prompt is rewritten to include synonyms, related terms, and alternative phrasings
4. **Tier 1 Retrieval**: Search high-priority data sources (topK=50)
5. **Tier 2 Retrieval**: Search supplementary data sources (topK=50)
6. **Reranking**: All results are reranked using `bge-reranker-v2-m3` model
7. Top 10 most relevant documents are selected
8. Context is built with highest quality results
9. Final query rewriting for LLM (context-aware)
10. LLM generates response with optimized context
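
The retrieval portion (steps 3-7) can be sketched as below. This is a toy: keyword overlap stands in for vector search, the static synonym map stands in for LLM-driven query expansion, and the `rerank` function stands in for the `bge-reranker-v2-m3` cross-encoder.

```python
def expand(query: str) -> str:
    # Stand-in for LLM query expansion (step 3): a static synonym map.
    synonyms = {"reset password": "reset password, recover account, reset credentials"}
    return synonyms.get(query, query)

def retrieve(tier_docs: list[str], query: str, top_k: int = 50) -> list[str]:
    # Stand-in for vector search (steps 4-5): score by keyword overlap.
    terms = set(query.replace(",", " ").split())
    scored = [(len(terms & set(d.split())), d) for d in tier_docs]
    return [d for score, d in sorted(scored, reverse=True)[:top_k] if score > 0]

def rerank(query: str, docs: list[str], top_n: int = 10) -> list[str]:
    # Stand-in for the cross-encoder (steps 6-7): rescore each
    # (query, document) pair jointly, then keep the best top_n.
    terms = set(query.replace(",", " ").split())
    return sorted(docs, key=lambda d: len(terms & set(d.split())), reverse=True)[:top_n]

tier1 = ["reset password via settings", "billing overview"]      # primary sources
tier2 = ["community tip: account recovery"]                       # supplementary
expanded = expand("reset password")
# Both tiers are searched, then pooled before reranking.
candidates = retrieve(tier1, expanded) + retrieve(tier2, expanded)
top_docs = rerank(expanded, candidates)
```

Notice that the expanded query pulls in the Tier 2 document ("account recovery") that the literal query would have missed, and the reranker then orders the pooled pool by relevance regardless of tier.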

### Unique Features

**Query Expansion for Retrieval:**

```
Original: "reset password"
Expanded: "reset password, change password, recover account, 
          password reset process, account recovery, reset credentials"
```

**Tier-Based Retrieval:**

* Tier 1: Official documentation, primary knowledge bases
* Tier 2: Community content, secondary sources
* Both tiers treated equally in reranking

**Automatic Reranking:**

* Cross-encoder model (more accurate than vector similarity)
* Considers full query-document relationship
* Improves precision significantly

### Performance

* **Speed**: ~3-4 seconds
* **Token Usage**: Higher (multiple rewriting + reranking)
* **Cost**: Highest

### When to Use Cypress

✅ **Use when:**

* Accuracy is the top priority
* Questions involve diverse terminology
* Multiple data source tiers exist
* Query ambiguity is common
* Latency trade-off is acceptable
* High-stakes decisions depend on answers

❌ **Avoid when:**

* Speed is critical
* Budget is constrained
* Simple, clear questions only
* Low query volume

### Example Use Cases

* Medical or legal Q&A (high accuracy required)
* Complex technical documentation
* Multi-domain knowledge bases
* Enterprise knowledge management
* Compliance-sensitive applications

[Learn more about Cypress →](https://github.com/thrivapp/twig-help-docs/blob/staging/ai-agents/rag-strategies/cypress.md)

## Performance Comparison

### Latency

```
Redwood:  ▓░░░░░░░░░ 1-2 seconds
Cedar:    ▓▓▓░░░░░░░ 2-3 seconds
Cypress:  ▓▓▓▓▓░░░░░ 3-4 seconds
```

### Accuracy

```
Redwood:  ▓▓▓▓▓▓░░░░ Good
Cedar:    ▓▓▓▓▓▓▓▓░░ Better
Cypress:  ▓▓▓▓▓▓▓▓▓▓ Best
```

### Cost

```
Redwood:  ▓░░░░░░░░░ Lowest
Cedar:    ▓▓▓▓░░░░░░ Medium
Cypress:  ▓▓▓▓▓▓▓░░░ Highest
```

## Choosing the Right Strategy

### Decision Tree

```
Is speed the top priority?
├─ Yes → Use Redwood
└─ No
    └─ How often are questions conversational or ambiguous?
        ├─ Sometimes → Use Cedar
        └─ Often
            └─ Is highest accuracy critical?
                ├─ Yes → Use Cypress
                └─ No → Use Cedar
```
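
The tree above can also be read as a small function. This is purely illustrative — the inputs are judgments about your workload, not Twig API parameters:

```python
def choose_strategy(speed_is_priority: bool,
                    often_ambiguous: bool,
                    accuracy_critical: bool) -> str:
    """Encode the decision tree: speed wins first, then accuracy-critical
    conversational workloads get Cypress, everything else gets Cedar."""
    if speed_is_priority:
        return "Redwood"
    if often_ambiguous and accuracy_critical:
        return "Cypress"
    return "Cedar"
```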

### By Use Case

| Use Case              | Recommended Strategy | Reason                            |
| --------------------- | -------------------- | --------------------------------- |
| FAQ Bot               | Redwood              | Clear questions, speed matters    |
| Customer Support Chat | Cedar                | Conversational, follow-ups common |
| Medical Q&A           | Cypress              | Accuracy is critical              |
| Legal Research        | Cypress              | High-stakes, must be accurate     |
| Product Documentation | Cedar                | Balance of speed and accuracy     |
| Internal Wiki         | Cedar                | Conversational queries            |
| API Reference         | Redwood              | Technical, clear queries          |
| Troubleshooting Guide | Cedar                | Multi-step, contextual            |
| Compliance Questions  | Cypress              | Cannot afford mistakes            |

## Switching Strategies

You can change an agent's strategy at any time:

1. Open agent settings
2. Navigate to **RAG Strategy**
3. Select new strategy
4. Save changes
5. Test in Playground

**Note**: Changes take effect immediately. Test thoroughly before deploying to production.

## A/B Testing Strategies

To compare strategies objectively:

1. Duplicate your agent
2. Assign different strategies to each copy
3. Use the same test questions
4. Compare responses, speed, and citations
5. Check analytics for quality metrics
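
Once you have a way to call each agent copy, the comparison can be scripted. The agents below are stubs (canned answers and a simulated delay, not the Twig API), but the harness shape — same questions into both copies, latency and citation counts out — carries over:

```python
import time

# Stubs standing in for two copies of the same agent, one per strategy.
def redwood_agent(q: str) -> dict:
    return {"answer": f"[fast] {q}", "citations": 1}

def cypress_agent(q: str) -> dict:
    time.sleep(0.01)  # simulates the extra rewriting/reranking latency
    return {"answer": f"[thorough] {q}", "citations": 4}

def ab_test(agents: dict, questions: list[str]) -> dict:
    """Run the same question set through each agent and collect metrics."""
    results = {}
    for name, agent in agents.items():
        latencies, citations = [], 0
        for q in questions:
            start = time.perf_counter()
            reply = agent(q)
            latencies.append(time.perf_counter() - start)
            citations += reply["citations"]
        results[name] = {
            "avg_latency_s": sum(latencies) / len(latencies),
            "total_citations": citations,
        }
    return results

report = ab_test(
    {"redwood": redwood_agent, "cypress": cypress_agent},
    ["How do I reset my password?", "Explain SSO setup."],
)
```

Quality judgments (answer correctness, citation relevance) still need human review or an evaluation framework; the harness only automates the mechanical metrics.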

## Next Steps

* [Redwood Strategy Deep Dive](/product/overview-1/redwood.md)
* [Cedar Strategy Deep Dive](https://github.com/thrivapp/twig-help-docs/blob/staging/ai-agents/rag-strategies/cedar.md)
* [Cypress Strategy Deep Dive](https://github.com/thrivapp/twig-help-docs/blob/staging/ai-agents/rag-strategies/cypress.md)
* [Performance Optimization](https://github.com/thrivapp/twig-help-docs/blob/staging/monitoring/performance-tuning.md)
* [Evaluation Framework](https://github.com/thrivapp/twig-help-docs/blob/staging/monitoring/evals.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.twig.so/product/overview-1.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
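
For example, in Python with only the standard library — the question text just needs to be URL-encoded into the `ask` parameter:

```python
from urllib.parse import quote

BASE = "https://help.twig.so/product/overview-1.md"

def build_ask_url(question: str) -> str:
    # Percent-encode the natural-language question into the `ask` parameter.
    return f"{BASE}?ask={quote(question)}"

url = build_ask_url("Which strategy reranks retrieved documents?")
# Performing the GET (requires network access):
# from urllib.request import urlopen
# answer = urlopen(url).read().decode()
```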
