AI Test Generation
This guide explains how to create automated tests using TrueAssert's AI-powered test generation from natural language descriptions.
Overview
AI test generation allows you to create tests by describing what you want to test in plain English. TrueAssert's LLM analyzes your description, examines the target website, and automatically generates test steps.
How It Works
The Generation Process
You Create Test: Submit the test creation form with your prompt
Test Created: Test object created with status DRAFTING
Background Processing: Background processor picks up the test
HTML Retrieval: Agent visits target URL and retrieves page HTML
LLM Analysis: LLM analyzes HTML and your prompt
Step Generation: LLM generates test steps
Step Creation: Steps are created and linked to test
Status Updates: Test status: DRAFTING → REVIEW → READY
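The pipeline above can be sketched in Python. This is purely illustrative: the function names, dictionary fields, and status strings are assumptions about the flow described here, not TrueAssert's actual internals.

```python
# Illustrative sketch of the generation pipeline (hypothetical function
# names and fields; TrueAssert's real internals may differ).

def generate_test(test, fetch_html, llm_generate_steps):
    """Drive a test from DRAFTING to REVIEW by generating its steps."""
    assert test["status"] == "DRAFTING"
    html = fetch_html(test["target_url"])             # agent retrieves page HTML
    steps = llm_generate_steps(test["prompt"], html)  # LLM analyzes prompt + HTML
    test["steps"] = steps                             # steps are linked to the test
    test["status"] = "REVIEW"                         # ready for human review
    return test
```

The `fetch_html` and `llm_generate_steps` callables stand in for the agent and LLM stages; keeping them as parameters mirrors how the background processor coordinates separate components.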
Login Flow Integration
If login_required=True:
Login flow executes during generation (not execution)
Session is saved after successful login
Main test steps are generated after login completes
Creating an AI-Generated Test
Step 1: Navigate to Create Test
Log into TrueAssert
Go to Create Test page or click "Create Test" in navigation
You'll see the test creation form
Step 2: Fill Out the Form
Required Fields:
Test Name: Descriptive name (minimum 3 characters)
Example: "Test user login and registration flows"
Be specific about what the test covers
Target URL: The website URL to test
Must be a valid, accessible URL
Example: https://example.com/login
This is where the test will start
AI Prompt: Detailed description of what to test (minimum 10 characters)
Be specific about actions and validations
Include expected outcomes
Example: "Create a test that navigates to the login page, fills in the username and password fields, clicks the login button, and verifies that the user is redirected to the dashboard and sees a welcome message"
Optional Fields:
Login Required: Check if test needs authentication
Requires login settings to be configured in the project
Login flow executes during generation
Use Predefined Session: Check to reuse existing session
Faster than full login
Requires session data to exist
Let AI decide if multiple test cases need to be created: Checked by default
Allows the AI to create multiple related tests if needed
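Conceptually, the form maps to a payload like the one below. The field names are illustrative (not TrueAssert's actual API schema), but the validation mirrors the documented minimums: name at least 3 characters, prompt at least 10 characters, and a valid URL.

```python
def validate_test_form(form):
    """Check the documented minimums before submitting the create-test form.

    Field names here are illustrative, not TrueAssert's actual schema.
    """
    errors = []
    if len(form.get("name", "")) < 3:
        errors.append("Test Name must be at least 3 characters")
    if not form.get("target_url", "").startswith(("http://", "https://")):
        errors.append("Target URL must be a valid URL")
    if len(form.get("prompt", "")) < 10:
        errors.append("AI Prompt must be at least 10 characters")
    return errors

form = {
    "name": "Test user login flow",
    "target_url": "https://example.com/login",
    "prompt": "Navigate to the login page, log in, and verify the dashboard loads",
    "login_required": False,          # optional
    "use_predefined_session": False,  # optional
    "allow_multiple_tests": True,     # optional, checked by default
}
```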
Step 3: Submit and Wait
Click "Create Test"
Test is created with status DRAFTING
You're redirected to test detail page
Background processor starts generating steps
Monitor status - it will update automatically
Writing Effective Prompts
Good Prompt Examples
Specific and Detailed:
Create a test that navigates to the product page, adds an item to the cart,
proceeds to checkout, fills in shipping information, selects a payment method,
and verifies the order confirmation page appears with the correct order details.
Action-Oriented:
Test the user registration flow: navigate to the signup page, fill in all
required fields (name, email, password, confirm password), check the terms
checkbox, click submit, and verify a success message appears and the user
is redirected to the welcome page.
With Validations:
Create a test that logs in with valid credentials, navigates to the profile
settings page, updates the email address, saves the changes, and verifies
that a confirmation message appears and the new email is displayed on the page.
Prompt Best Practices
Be Specific: Describe exact actions and expected outcomes
Include Validations: Mention what should be verified
Order Matters: Describe steps in the order they should execute
Be Complete: Include all necessary steps (navigation, interactions, verifications)
Avoid Ambiguity: Use clear, unambiguous language
What to Include
Starting Point: Where the test should begin
Actions: What the test should do (click, fill, navigate)
Data: What values to use (if specific)
Validations: What to verify (elements, text, navigation)
Expected Outcomes: What should happen
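One way to apply the checklist above is to assemble the prompt from those parts. This helper is purely illustrative (you type the prompt into a form; nothing in TrueAssert requires building it programmatically):

```python
def build_prompt(start, actions, validations, data=None):
    """Assemble a test prompt from a starting point, ordered actions,
    optional data values, and validations (illustrative helper)."""
    parts = [f"Create a test that starts at {start}."]
    if data:
        parts.append("Use this data: "
                     + "; ".join(f"{k} = {v}" for k, v in data.items()) + ".")
    parts.append("Steps, in order: " + ", then ".join(actions) + ".")
    parts.append("Verify that " + " and that ".join(validations) + ".")
    return " ".join(parts)
```

Forcing yourself to fill in all four parts is a quick way to catch a prompt that is missing actions or validations before you submit it.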
What to Avoid
Vague Descriptions: "Test the login" (too vague)
Missing Details: "Fill the form" (which form? what fields?)
No Validations: "Click submit" (without saying what to verify afterward)
Assumptions: Don't assume the AI knows your application structure
Understanding Generation Status
Status Flow
DRAFTING → REVIEW → READY
DRAFTING:
Test is being processed by background processor
LLM is analyzing prompt and generating steps
Login flow may be executing (if login_required=True)
Steps are being created
REVIEW:
Steps have been generated
Test is ready for review
You can review, edit, or approve steps
Status will become READY when you finish review
READY:
Test is ready to execute
All steps are finalized
Can be run immediately
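The status flow moves strictly forward. A sketch of the allowed transitions (failure states are omitted here; per the troubleshooting section, generation failures surface as error messages on the test detail page):

```python
# Allowed forward transitions in the documented status flow.
TRANSITIONS = {
    "DRAFTING": {"REVIEW"},
    "REVIEW": {"READY"},
    "READY": set(),  # terminal: the test can now be executed
}

def can_transition(current, new):
    """Return True if `current` → `new` is a valid status change."""
    return new in TRANSITIONS.get(current, set())
```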
Monitoring Generation
On Test Detail Page:
Status badge shows current status
Step count updates as steps are generated
Error messages appear if generation fails
Polling:
The page polls for status updates every 2 seconds
Status updates automatically
No need to refresh page
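The 2-second polling can be sketched as a loop. The `fetch_status` callable is an assumption standing in for a status request; the real page does this automatically in the browser, so a loop like this is only useful if you script against an API.

```python
import time

def poll_until_ready(fetch_status, interval=2.0, timeout=120.0):
    """Call `fetch_status` every `interval` seconds until it returns READY.

    `fetch_status` is an illustrative stand-in for the page's status
    request; raises TimeoutError if READY is not reached in time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "READY":
            return status
        time.sleep(interval)
    raise TimeoutError("test did not reach READY within the timeout")
```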
Login Flow During Generation
If login_required=True:
Login Flow Executes First:
Session validation attempted
If fails: Full login executed
Session saved to database
Then Test Generation:
LLM analyzes authenticated page
Generates steps for authenticated user flows
Steps assume user is already logged in
Important: Login flow happens during generation, not during execution. The generated test steps don't include login steps.
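The generation-time login logic amounts to validate-then-fallback. The three callables below are illustrative stand-ins for TrueAssert internals, not real API names:

```python
def ensure_logged_in(validate_session, full_login, save_session):
    """Reuse the saved session if it still works; otherwise log in fully.

    All three callables are illustrative stand-ins. Returns the session
    that subsequent step generation should use.
    """
    session = validate_session()   # try the saved session first
    if session is None:            # validation failed or no session exists
        session = full_login()     # run the full login flow
        save_session(session)      # persist for later reuse
    return session
```

This is why "Use Predefined Session" is faster: when validation succeeds, the full login flow is skipped entirely.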
Troubleshooting
Test Stuck in DRAFTING
Possible Causes:
No agents available
Background processor not running
LLM API issues
Target URL not accessible
Solutions:
Check Agents page for agent availability
Verify controller background processor is running
Check server logs for errors
Verify target URL is accessible
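Before digging into agent or processor issues, you can rule out a malformed target URL with a quick structural check using Python's standard library. Note this only validates the URL's shape; it does not confirm the site is actually reachable from the agent:

```python
from urllib.parse import urlparse

def looks_like_valid_url(url):
    """Cheap structural check for a target URL (does not hit the network)."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```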
No Steps Generated
Possible Causes:
Prompt too vague or unclear
LLM couldn't understand requirements
Target URL not accessible
HTML retrieval failed
Solutions:
Review and improve prompt (more specific)
Check target URL is accessible
Verify agent can reach target URL
Check error messages on test detail page
Steps Are Incorrect
Possible Causes:
Prompt was ambiguous
LLM misunderstood requirements
Page structure changed
Selectors are incorrect
Solutions:
Review generated steps
Edit incorrect steps manually
Update prompt and regenerate
Fix selectors manually
Generation Fails
Possible Causes:
LLM API error
Agent connection issues
Target URL timeout
Invalid prompt
Solutions:
Check error message on test detail page
Verify LLM API key is configured
Check agent logs
Try simpler prompt
Best Practices
Prompt Writing
Start with Context: "Navigate to [URL] and..."
Be Sequential: Describe steps in order
Include Details: Specify fields, buttons, values
Add Validations: "Verify that [element] appears"
Test Prompt: Try prompt, refine if needed
After Generation
Review Steps: Always review generated steps
Verify Selectors: Check that selectors are correct
Test Execution: Run test to verify it works
Edit if Needed: Fix any incorrect steps
Iterative Improvement
Start Simple: Begin with basic prompts
Add Complexity: Gradually add more details
Learn Patterns: Understand what works best
Refine Prompts: Improve based on results
Related Topics
Your First Test - Quick start
Browser Plugin Recording - Alternative method
Manual Test Creation - Edit AI-generated tests
Running Tests - Execute generated tests