Best practices for using RAG in aqua
With RAG (Retrieval-Augmented Generation), you can ground AI Copilot in your own project documentation. To get the most relevant and accurate answers, we recommend following these best practices.
Supported file formats
At the moment, you can upload:
TXT files
Word documents (.docx) without images
How to prepare your documents
Preparing your documentation well is the most important step to getting high-quality answers from AI Copilot. Since the AI works by retrieving pieces of text from your uploaded files, clarity and structure directly affect the quality of results.
Keep documentation clear and focused
Split by project, product area, or release train. Avoid one giant “everything” document.
Keep each file single-intent: e.g., “UI test design patterns for WebApp A” rather than “all QA info in one file.”
One concept or feature per file — if possible, split content into smaller, feature-based files.
Example file set:
checkout-acceptance-criteria.docx
checkout-api-reference.txt
checkout-defect-patterns.docx
This way, if you ask about checkout acceptance criteria, AI won’t be confused by unrelated content like user registration.
Avoid mixing product areas. Don’t combine mobile app and web app requirements in one file. Separate them so the AI knows which context to pull from.
Begin with a short abstract (3–5 lines) stating purpose, scope, and key entities.
Add a FAQ block with crisp questions and answers — RAG works very well with Q&A-shaped content.
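For illustration, a file such as checkout-acceptance-criteria.docx could open like this (the wording below is only an example, not a required template):
This document describes acceptance criteria for the checkout process for registered users, covering payment validation, discount codes, and error handling.
FAQ:
Q: Can a discount exceed the total order value?
A: No, discounts cannot exceed the total order value.
Q: Does the user need to be logged in to start checkout?
A: Yes, the user must be logged in before starting checkout.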
Use descriptive file names
Why it matters
File names are included in retrieval metadata, making it easier for both you and the AI to understand context.
Traceability benefit: Once AI Copilot gives you an answer in chat, you will also see which file the information came from. If your file names are descriptive, you’ll instantly understand the source without opening the file.
✅ Good examples:
login-acceptance-criteria-v1.docx
mobile-app-test-strategy.docx
❌ Poor examples:
doc1.docx
Final-version2.docx
Structure your content
AI works best when text is easy to parse.
Start with a summary. The first 2–5 sentences should explain what the document is about.
This file contains acceptance criteria for the checkout process, covering payment validation, discounts, and error handling.
Use headings and sub-headings
## Checkout process
### Payment validation
### Error handling
Use bullet points or numbered lists
Instead of long paragraphs, list criteria and parameters where possible:
Example:
User must be logged in before starting checkout
System validates credit card number format
Discounts cannot exceed total order value
This makes retrieval more accurate and answers easier to read.
Keep your documents in one language
If your project uses more than one language, try to stick to a single language per set of documents. This way, AI Copilot won’t get confused by switching between English, German, or Spanish in the same file.
Best practices for tables in RAG
Keep tables simple if possible
Use plain rows and columns with clear headers.
Avoid merged cells, heavy formatting, or nested tables — they often break the text extraction.
Try to include headers
Mirror table content in text if it’s crucial for grounding.
If the table contains important rules, write them out below the table as bullet points.
Example:
Empty password → show error “Password required”
Password too short → show error “Minimum 8 characters”
Valid password → allow login
This ensures the AI can retrieve the info even if the table isn’t parsed perfectly.
Split big tables into smaller ones if applicable
Large, complex tables (e.g., covering 50+ scenarios) are harder for AI to process.
Break them down by feature, rule set, or risk area.
Add a short intro sentence before the table. This gives AI context before parsing rows.
Example:
The following table lists password validation scenarios with expected outcomes.
Best practices for the Description field and prompts
When you select Description, you’re telling AI Copilot what context it should use when generating your test cases, user stories, or test data. Think of it as a requirement ticket: the clearer and more structured it is, the better the results. Avoid ambiguous wording and be as clear as possible.
💡 Tip: Pretend you’re briefing a new colleague who isn’t familiar with the domain context. If they could design test cases or write a requirement from what you write, it’s a good description for the AI too.
What to include in your description
Feature / area under test: “User login with email and password, including validation of mandatory fields and error messages.”
Scope and boundaries: “Covers positive login attempts with valid credentials and negative cases for wrong passwords and empty fields.”
Intended outcome: “System must display error messages clearly and prevent login with invalid data.”
Optional keywords (important terms that should appear in test cases): “Include cases with minimum/maximum password length.”
Examples of good vs poor descriptions
✅ Good:
Checkout process for registered users, including adding items to cart, applying a discount code, and completing payment via credit card. Verify both successful and failed payment scenarios.
❌ Poor:
After completing the checkout process, everything must be verified before proceeding. If that fails, restart the task.
✅ Good:

Mobile app login using biometric authentication (fingerprint/face ID). Test positive login, fallback to PIN, and failed authentication attempts. If the session verification fails, restart the authentication workflow.
❌ Poor:
Cover the full login process with successful and failed attempts
General best practices and common pitfalls
1. Consistent terminology
In many domains, the same concept can appear under different names or abbreviations (e.g., “login”, “sign-in”, “auth”).
If you want AI Copilot to treat these as the same thing, create a mapping or dictionary of aliases and add it to the document.
Example:
login = sign-in = authentication
cart = basket
Use this dictionary across your documents so both the system and readers understand the intended meaning.
2. Document versioning
Documentation evolves, and older versions can quickly become outdated.
To avoid confusion:
Use version numbers in file names.
Example:
checkout-acceptance-criteria-v2.docx
Mark older documents as stale or archive them.
Re-upload updated files so AI Copilot always retrieves the latest information.
When asking AI, you can also include the version in your query to narrow results.
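For example, a query could look like this (illustrative only):
“Using checkout-acceptance-criteria-v2, what are the payment validation rules?”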
3. Clear formatting and references
Ambiguous phrases like “this,” “that,” or “the process/task” make it unclear what is being described.
Both human readers and AI struggle to know which process or task is meant.
Best practices:
Use explicit nouns instead of pronouns.
❌ “This should be validated before continuing.”
✅ “The checkout form should be validated before continuing.”
Quick checklist before clicking Create
Have you specified the feature?
Have you included special conditions (limits, error handling, edge cases)?
Is your description at least 2–3 sentences?
If yes, then you are good to go 👍