
MCP Developer Tests

Deterministic testing for MCP tools with AI-powered analysis and test generation.

Overview

The MCP Developer Test system provides two complementary approaches to testing:

Deterministic Testing

  • No AI required - exact assertions
  • 100% reproducible results
  • Perfect for CI/CD pipelines
  • Free to run (no API costs)

AI-Powered Features

  • Generate tests from natural language
  • Analyze failures with telemetry correlation
  • Get actionable recommendations
  • Auto-notify on failures

Test Runner CLI

Run tests from the command line using the probecodex-agent:

# List available test suites
probecodex-agent mcp test --list
# List only no-hardware tests
probecodex-agent mcp test --list --no-hardware
# Run a specific test suite
probecodex-agent mcp test --suite=connection-basic
# Run tests from a local JSON file
probecodex-agent mcp test --file=my-tests.json
# Run all no-hardware tests (CI/CD friendly)
probecodex-agent mcp test --no-hardware
# Run and save results to file
probecodex-agent mcp test --suite=memory-read --output=results.json

Built-in Test Suites

| Name             | Category   | Hardware | Description                 |
|------------------|------------|----------|-----------------------------|
| system-info      | system     | No       | MCP version, disk space     |
| license-status   | license    | No       | License validation          |
| emulator-check   | emulator   | No       | QEMU, Renode installation   |
| connection-basic | connection | Yes      | Connect and read registers  |
| memory-read      | memory     | Yes      | Read RAM and Flash          |
| core-control     | core       | Yes      | Halt, run, reset CPU        |
| rtos-freertos    | rtos       | Yes      | FreeRTOS tasks and heap     |

Portal UI

Access the MCP Tests page from More → MCP Tests in the Portal navigation.

  • Test Suites: View, upload, and download test definition JSON files
  • Test Runs: View run history, download inputs/outputs, and trigger analysis
  • AI Test Builder: Generate test JSON from natural-language descriptions


AI Test Builder

Describe what you want to test in natural language, and AI will generate the test JSON for you.

Example

Your description:
"Test that mem_read returns valid data when reading 64 bytes from RAM at 0x20000000, and verify the response contains hex data"
AI-generated test JSON:
{
  "name": "memory-read-ram",
  "description": "Test RAM read at 0x20000000",
  "tests": [
    {
      "name": "Read 64 bytes from RAM",
      "tool": "mem_read",
      "params": { "addr": "0x20000000", "len": 64 },
      "expect": {
        "success": true,
        "exists": ["hex", "ascii"]
      }
    }
  ]
}

After generation, you can edit the JSON, download it, or save directly to the Portal.

AI Analysis

When tests fail, AI analyzes the results and correlates them with session telemetry to provide actionable insights.

How It Works

  1. Auto-trigger: When a test run fails, AI analysis is automatically triggered
  2. Correlation: AI correlates test results with session telemetry via session_id
  3. Analysis: Identifies root causes, patterns, and failure correlations
  4. Delivery: Results delivered via Portal notification and Support Chat
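Analysis can also be triggered manually through the Portal API. A minimal sketch using Python's standard library; the base URL, run ID, and bearer-token auth are illustrative assumptions, so substitute your deployment's values:

```python
# Sketch: manually triggering AI analysis for a test run via
# POST /api/mcp-tests/runs/:id/analyze.
# BASE_URL, run_id, and the auth scheme are assumptions for illustration.
import urllib.request

BASE_URL = "https://portal.example.com"  # assumed Portal base URL
run_id = "run-123"                       # illustrative run ID

req = urllib.request.Request(
    url=f"{BASE_URL}/api/mcp-tests/runs/{run_id}/analyze",
    method="POST",
    headers={"Authorization": "Bearer <token>"},  # placeholder token
)
# urllib.request.urlopen(req) would send the request; it is skipped
# here because no live Portal is available in this sketch.
print(req.get_method(), req.full_url)
```

Fetching the finished analysis afterwards is the matching GET on /api/mcp-tests/runs/:id/analysis.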

Analysis Output

{
  "healthScore": 65,
  "summary": "2 of 6 tests failed due to target disconnection",
  "issues": [
    {
      "severity": "high",
      "title": "Memory read timeout",
      "details": "mem_read at 0x20001000 timed out after 5000ms",
      "telemetryCorrelation": "Target disconnect event at 14:32:15",
      "testName": "Read extended RAM"
    }
  ],
  "recommendations": [
    "Check debug probe connection stability",
    "Verify target is halted before memory reads"
  ],
  "nextSteps": [
    "Run connection-basic test to verify probe connection",
    "Check OpenOCD logs for timeout reasons"
  ]
}
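A consumer of this output might gate a build on the health score and high-severity issues. A minimal Python sketch; the threshold of 80 is an arbitrary example, not a documented default:

```python
# Sketch: gating a pipeline on the AI analysis output shown above.
# The healthScore threshold (80) is an arbitrary example.
import json

analysis = json.loads("""{
  "healthScore": 65,
  "summary": "2 of 6 tests failed due to target disconnection",
  "issues": [{"severity": "high", "title": "Memory read timeout"}],
  "recommendations": ["Check debug probe connection stability"]
}""")

high_issues = [i for i in analysis["issues"] if i["severity"] == "high"]
healthy = analysis["healthScore"] >= 80 and not high_issues
print(f"health={analysis['healthScore']}, "
      f"high-severity={len(high_issues)}, pass={healthy}")
```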

Delivery Channels

  • Portal Notification: Immediate notification in the Portal
  • Support Chat: Creates a support thread with AI summary for discussion
  • Email: (Coming soon) Email notification if enabled in preferences

Test Definition Format

Test suites are defined as JSON with the following structure:

{
  "name": "my-test-suite",
  "description": "What this test validates",
  "tests": [
    {
      "name": "Human readable step name",
      "tool": "mcp_tool_name",
      "params": { "arg1": "value1" },
      "expect": {
        "success": true,
        "type": "object",
        "exists": ["field1", "field2"],
        "min": 0,
        "max": 100
      }
    }
  ]
}
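Before uploading a suite, it can help to sanity-check the JSON against this structure. A minimal illustrative validator (not part of the probecodex-agent toolchain; the required-key sets are inferred from the structure above):

```python
# Illustrative validator for a test-suite definition.
# Required keys are inferred from the documented structure.
import json

REQUIRED_SUITE_KEYS = {"name", "tests"}
REQUIRED_TEST_KEYS = {"name", "tool", "expect"}

def validate_suite(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the suite looks valid."""
    problems = []
    suite = json.loads(raw)
    for key in REQUIRED_SUITE_KEYS - suite.keys():
        problems.append(f"suite is missing '{key}'")
    for i, test in enumerate(suite.get("tests", [])):
        for key in REQUIRED_TEST_KEYS - test.keys():
            problems.append(f"tests[{i}] is missing '{key}'")
    return problems

suite_json = """{
  "name": "my-test-suite",
  "tests": [
    {"name": "Read RAM", "tool": "mem_read",
     "params": {"addr": "0x20000000", "len": 64},
     "expect": {"success": true}}
  ]
}"""
print(validate_suite(suite_json))  # → []
```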

Assertion Types

| Type    | Description             | Example                |
|---------|-------------------------|------------------------|
| success | Tool call succeeded     | "success": true        |
| exists  | Fields exist in result  | "exists": ["pc", "sp"] |
| type    | Result type check       | "type": "object"       |
| min/max | Numeric range           | "min": 0, "max": 100   |
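The assertion semantics above can be sketched as a small evaluator. This is illustrative only: the real runner's implementation may differ, and the assumption that min/max apply to a numeric "value" field in the result is mine, not documented:

```python
# Illustrative evaluator for the four assertion types
# (success, exists, type, min/max). The min/max checks assume the
# numeric value lives in a "value" field of the result (an assumption).

def check_expectations(result: dict, expect: dict) -> list[str]:
    """Return a list of failure messages; empty means all assertions passed."""
    failures = []
    if "success" in expect and result.get("success") != expect["success"]:
        failures.append("success mismatch")
    for field in expect.get("exists", []):
        if field not in result:
            failures.append(f"missing field '{field}'")
    if expect.get("type") == "object" and not isinstance(result, dict):
        failures.append("result is not an object")
    value = result.get("value")
    if "min" in expect and value is not None and value < expect["min"]:
        failures.append("value below min")
    if "max" in expect and value is not None and value > expect["max"]:
        failures.append("value above max")
    return failures

result = {"success": True, "hex": "0x20000000", "ascii": "....", "value": 42}
expect = {"success": True, "exists": ["hex", "ascii"],
          "type": "object", "min": 0, "max": 100}
print(check_expectations(result, expect))  # → []
```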

API Reference

| Endpoint                                 | Method | Description                       |
|------------------------------------------|--------|-----------------------------------|
| /api/mcp-tests/suites                    | GET    | List test suites                  |
| /api/mcp-tests/suites/:id/download       | GET    | Download test definition JSON     |
| /api/mcp-tests/suites/upload             | POST   | Upload new test suite             |
| /api/mcp-tests/suites/generate           | POST   | AI generate test from description |
| /api/mcp-tests/runs                      | GET    | List test runs                    |
| /api/mcp-tests/runs/:id/download-input   | GET    | Download test run input JSON      |
| /api/mcp-tests/runs/:id/download-output  | GET    | Download test run output JSON     |
| /api/mcp-tests/runs/:id/analyze          | POST   | Trigger AI analysis               |
| /api/mcp-tests/runs/:id/analysis         | GET    | Get existing analysis             |

CI/CD Integration

Run no-hardware tests in your CI/CD pipeline:

# In CI pipeline - run only no-hardware tests
probecodex-agent mcp test --no-hardware --output=results.json

# Check exit code (0 = all passed, 1 = failures)
if [ $? -ne 0 ]; then
  echo "MCP tests failed!"
  exit 1
fi

Results are automatically submitted to the Portal and will trigger AI analysis if tests fail.

Related