A comprehensive validation framework for evaluating LLM-generated Storybook stories against multiple quality gates. The project implements a scientific workflow to systematically identify the context and information an LLM needs to generate high-quality Storybook stories that align with your project's specific syntax, conventions, and quality standards.

Prerequisites:
- Node.js 18+
- npm or yarn package manager
To install:

- Clone the repository

  ```bash
  git clone <repository-url>
  cd storybook-validation-script
  ```

- Install all dependencies

  ```bash
  # Option A: Using npm script (recommended)
  npm run setup

  # Option B: Using the setup script directly
  # On macOS/Linux:
  ./setup.sh
  # On Windows:
  setup.bat
  ```

  This will install dependencies for both the root project and the example Storybook project.

- Run the test suite to verify everything works

  ```bash
  npm test
  ```
The validation script evaluates stories against these objective criteria (a story that clears all of them is sketched after the list):
- Syntactic Correctness (Linting) - ESLint compliance
- Type Safety (TypeScript) - Compilation without errors
- Render Test (Smoke Test) - Storybook test-runner smoke test
- Component Story Format (CSF) - Version 3 compliance
- Interaction Test - Play function execution and assertions
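For reference, here is a minimal sketch of a story that would be expected to pass every gate. The `Button` component, its `label` prop, and the use of Storybook 8's `@storybook/test` helpers are assumptions made for illustration; they are not part of this repository's fixtures.

```tsx
import type { Meta, StoryObj } from "@storybook/react";
import { expect, userEvent, within } from "@storybook/test";

// Hypothetical component under test; substitute one from your own project.
import { Button } from "./Button";

// CSF3: the default export describes the component, named exports are plain story objects.
const meta: Meta<typeof Button> = {
  title: "Example/Button",
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

export const Primary: Story = {
  args: { label: "Click me" },
  // The play function drives the interaction test: it must execute and its assertions must pass.
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    const button = canvas.getByRole("button", { name: /click me/i });
    await userEvent.click(button);
    await expect(button).toBeInTheDocument();
  },
};
```

A story in this shape exercises all five gates: it lints cleanly, type-checks, renders under the test-runner, follows CSF3, and its play function makes real assertions.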
Quick reference for the most common commands:

```bash
# Install all dependencies (root + example)
npm run setup

# Run the complete test suite
npm test

# Validate a specific story file
npm run validate <story_file_path>
```
For users who prefer direct script execution:
```bash
# macOS/Linux
./setup.sh

# Windows
setup.bat

# Core validation script
node validate_story.js <story_file_path>

# Test suite
node validate_story.test.js
```
```
storybook-validation-script/
├── validate_story.js            # Core validation engine
├── validate_story.test.js       # Test suite for validation script
├── package.json                 # Root package configuration
├── setup.sh                     # Setup script for macOS/Linux
├── setup.bat                    # Setup script for Windows
├── example/                     # Example Storybook project with test stories
│   ├── package.json             # Example project dependencies
│   ├── .storybook/              # Storybook configuration
│   ├── src/stories/             # Test stories for validation
│   │   ├── eslint-error.stories.tsx       # ESLint violations
│   │   ├── typescript-error.stories.tsx   # TypeScript errors
│   │   ├── render-error.stories.tsx       # Render failures
│   │   ├── interaction-error.stories.tsx  # Play function errors
│   │   └── perfect.stories.tsx            # Control/baseline
│   └── tsconfig.json            # TypeScript configuration
└── README.md                    # This file
```
Run the full test suite:

```bash
npm test
```

This will validate all test stories and verify that the validation script correctly identifies different types of errors.
You can also validate individual stories:

```bash
# Test a story with errors
npm run validate example/src/stories/eslint-error.stories.tsx

# Test a perfect story
npm run validate example/src/stories/perfect.stories.tsx

# Test with JSON output for programmatic use
# (the extra -- makes npm forward --json to the script instead of consuming it)
npm run validate -- example/src/stories/perfect.stories.tsx --json
```
The example project includes these test stories (a sketch of what such a fixture looks like follows the table):

| Story | Purpose | Expected Errors |
|---|---|---|
| `eslint-error.stories.tsx` | Tests ESLint error detection | `linting`, `typeScript` |
| `typescript-error.stories.tsx` | Tests TypeScript error detection | `linting`, `typeScript` |
| `render-error.stories.tsx` | Tests render failure detection | `typeScript`, `renderTest`, `interactionTest` |
| `interaction-error.stories.tsx` | Tests play function failures | `renderTest`, `interactionTest` |
| `perfect.stories.tsx` | Control story with no errors | none |
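As an illustration, an interaction-error fixture typically compiles, lints, and renders, but contains a play function whose assertion fails at runtime. The snippet below is a hedged sketch of that pattern, not the actual contents of `interaction-error.stories.tsx`; the `Button` import is again a placeholder.

```tsx
import type { Meta, StoryObj } from "@storybook/react";
import { expect, within } from "@storybook/test";

// Placeholder component; the real fixtures live in example/src/stories/.
import { Button } from "./Button";

const meta: Meta<typeof Button> = {
  title: "Example/InteractionError",
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

export const FailingAssertion: Story = {
  args: { label: "Click me" },
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    // Queries for text the component never renders, so this assertion fails and
    // the interaction gate reports an error while the earlier gates still pass.
    await expect(canvas.queryByText("This text is never rendered")).toBeInTheDocument();
  },
};
```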
By default, the script prints human-readable output, for example:

```
Validating story: example/src/stories/example.stories.tsx
Project root: example
Starting Storybook for test-runner...
✅ Storybook is ready
Running test-storybook for story: example
Stopping Storybook...
✅ Storybook stopped

Validation Results:
==================================================
✅ linting: PASS
❌ typeScript: FAIL
   Error: Type error details...
✅ csfCompliance: PASS
❌ renderTest: FAIL
   Error: Render error details...
⏭️ interactionTest: SKIP
   Error: Render test failed, skipping interaction test

Summary:
  Overall Score: 67% (WARNING)
  Passed: 4/6
  Failed: 2
  Skipped: 0
```
For programmatic use, add the `--json` flag:

```bash
npm run validate -- example/src/stories/example.stories.tsx --json
```
This provides structured JSON output:
```json
{
  "storyFile": "example/src/stories/example.stories.tsx",
  "timestamp": "2025-08-15T13:35:04.142Z",
  "checks": {
    "linting": { "status": "PASS", "error": null },
    "typeScript": { "status": "FAIL", "error": "Type error details..." },
    "csfCompliance": { "status": "PASS", "csfVersion": "CSF3" },
    "renderTest": { "status": "FAIL", "error": "Render error details..." },
    "interactionTest": { "status": "SKIP", "error": "Render test failed" }
  },
  "summary": {
    "totalChecks": 6,
    "passedChecks": 4,
    "failedChecks": 2,
    "skippedChecks": 0,
    "score": 67,
    "overallStatus": "WARNING"
  }
}
```
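To consume this report from another tool, one option is to spawn the validator and inspect the summary rather than relying on the exit code alone. The sketch below assumes that `--json` prints only the report object to stdout and that the field names match the example above.

```ts
import { spawnSync } from "node:child_process";

// Run the validator directly with --json. spawnSync does not throw on a
// non-zero exit code, so a failed validation still yields parsable output.
const result = spawnSync(
  "node",
  ["validate_story.js", "example/src/stories/perfect.stories.tsx", "--json"],
  { encoding: "utf8" }
);

const report = JSON.parse(result.stdout);
const { score, overallStatus, failedChecks } = report.summary;

console.log(`Score: ${score}% (${overallStatus})`);
if (failedChecks > 0) {
  console.error(`Failed checks: ${failedChecks}`);
  process.exit(1);
}
```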
The validation script uses standard exit codes:
- 0: All checks passed or warnings only
- 1: One or more checks failed
This makes it suitable for CI/CD integration:
```bash
# In CI pipeline
node validate_story.js ./story.stories.tsx
if [ $? -eq 0 ]; then
  echo "Validation passed"
else
  echo "Validation failed"
  exit 1
fi
```
This project is designed for research and experimentation. Feel free to:
- Modify validation criteria for your specific needs
- Add new quality gates
- Extend the MCP integration
- Share findings and improvements
Licensed under the MIT License.
To get started:

- Run the setup: `npm run setup`
- Verify the installation: `npm test`
- Test with your stories: `npm run validate <path>`
- Begin research: use the validation results to optimize LLM workflows

The foundation is solid and ready for systematic experimentation!