Storybook Validation Script

A comprehensive validation framework for evaluating LLM-generated Storybook stories against multiple quality gates. This project implements a scientific workflow to determine what information and context an LLM needs to write "good" Storybook stories.

🎯 Research Objective

This project aims to systematically identify the optimal context and workflow for LLMs to generate high-quality Storybook stories that align with your project's specific syntax, conventions, and quality standards.

πŸš€ Quick Start

Prerequisites

  • Node.js 18+
  • npm or yarn package manager

Installation & Setup

  1. Clone the repository

    git clone <repository-url>
    cd storybook-validation-script
  2. Install all dependencies

    Option A: Using npm script (recommended)

    npm run setup

    Option B: Using setup script directly

    # On macOS/Linux:
    ./setup.sh
    
    # On Windows:
    setup.bat

    This will install dependencies for both the root project and the example Storybook project.

  3. Run the test suite to verify everything works

    npm test

πŸ“‹ Quality Metrics

The validation script evaluates stories against these objective criteria:

  1. Syntactic Correctness (Linting) - ESLint compliance
  2. Type Safety (TypeScript) - Compilation without errors
  3. Render Test (Smoke Test) - Storybook test-runner smoke test
  4. Component Story Format (CSF) - Version 3 compliance
  5. Interaction Test - Play function execution and assertions
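
For orientation, a minimal CSF3 story with a play function (the shape the CSF and interaction gates look for) might resemble the sketch below. The Button component and its label prop are hypothetical, and depending on your Storybook version the testing utilities may come from @storybook/test instead of the packages shown.

import type { Meta, StoryObj } from '@storybook/react';
import { within, userEvent } from '@storybook/testing-library';
import { expect } from '@storybook/jest';

import { Button } from './Button'; // hypothetical component

const meta: Meta<typeof Button> = {
  title: 'Example/Button',
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// CSF3: a story is an object with args, not a function
export const Primary: Story = {
  args: { label: 'Click me' },
  // The play function is what the interaction test gate executes
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    await userEvent.click(canvas.getByRole('button'));
    await expect(canvas.getByRole('button')).toBeEnabled();
  },
};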

πŸ”§ Available Scripts

Root Package Scripts

# Install all dependencies (root + example)
npm run setup

# Run the complete test suite
npm test

# Validate a specific story file
npm run validate <story_file_path>

Setup Scripts

For users who prefer direct script execution:

# macOS/Linux
./setup.sh

# Windows
setup.bat

Direct Script Usage

# Core validation script
node validate_story.js <story_file_path>

# Test suite
node validate_story.test.js

πŸ“ Project Structure

storybook-validation-script/
β”œβ”€β”€ validate_story.js        # Core validation engine
β”œβ”€β”€ validate_story.test.js   # Test suite for validation script
β”œβ”€β”€ package.json             # Root package configuration
β”œβ”€β”€ setup.sh                 # Setup script for macOS/Linux
β”œβ”€β”€ setup.bat                # Setup script for Windows
β”œβ”€β”€ example/                 # Example Storybook project with test stories
β”‚   β”œβ”€β”€ package.json         # Example project dependencies
β”‚   β”œβ”€β”€ .storybook/          # Storybook configuration
β”‚   β”œβ”€β”€ src/stories/         # Test stories for validation
β”‚   β”‚   β”œβ”€β”€ eslint-error.stories.tsx      # ESLint violations
β”‚   β”‚   β”œβ”€β”€ typescript-error.stories.tsx  # TypeScript errors
β”‚   β”‚   β”œβ”€β”€ render-error.stories.tsx      # Render failures
β”‚   β”‚   β”œβ”€β”€ interaction-error.stories.tsx # Play function errors
β”‚   β”‚   └── perfect.stories.tsx           # Control/baseline
β”‚   └── tsconfig.json        # TypeScript configuration
└── README.md                 # This file

πŸ§ͺ Testing the Validation Script

Run the Complete Test Suite

npm test

This will validate all test stories and verify that the validation script correctly identifies different types of errors.

Test Individual Stories

# Test a story with errors
npm run validate example/src/stories/eslint-error.stories.tsx

# Test a perfect story
npm run validate example/src/stories/perfect.stories.tsx

# Test with JSON output for programmatic use
# (the "--" is needed so npm passes --json to the script instead of consuming it)
npm run validate -- example/src/stories/perfect.stories.tsx --json

Test Stories Included

Story                         | Purpose                           | Expected Errors
eslint-error.stories.tsx      | Tests ESLint error detection      | linting, typeScript
typescript-error.stories.tsx  | Tests TypeScript error detection  | linting, typeScript
render-error.stories.tsx      | Tests render failure detection    | typeScript, renderTest, interactionTest
interaction-error.stories.tsx | Tests play function failures      | renderTest, interactionTest
perfect.stories.tsx           | Control story with no errors      | (none)
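
To make the fixtures concrete, a deliberately broken play function along the lines of interaction-error.stories.tsx could assert on an element that never renders. This is only an illustration; the actual fixture in example/src/stories/ may differ.

import type { Meta, StoryObj } from '@storybook/react';
import { within } from '@storybook/testing-library';
import { expect } from '@storybook/jest';

import { Button } from './Button'; // hypothetical component

const meta: Meta<typeof Button> = {
  title: 'Test/InteractionError',
  component: Button,
};
export default meta;

// The assertion is expected to fail, so the validator should report
// failures for this story rather than a clean pass.
export const Broken: StoryObj<typeof Button> = {
  args: { label: 'Click me' },
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    await expect(canvas.queryByText('This text never renders')).toBeInTheDocument();
  },
};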

πŸ” Validation Output

Console Output

The script provides human-readable output with:

πŸ” Validating story: example/src/stories/example.stories.tsx
πŸ“ Project root: example

πŸš€ Starting Storybook for test-runner...
βœ… Storybook is ready
πŸ§ͺ Running test-storybook for story: example
πŸ›‘ Stopping Storybook...
βœ… Storybook stopped

πŸ“Š Validation Results:
==================================================
βœ… linting: PASS
❌ typeScript: FAIL
   Error: Type error details...
βœ… csfCompliance: PASS
❌ renderTest: FAIL
   Error: Render error details...
⏭️ interactionTest: SKIP
   Error: Render test failed, skipping interaction test

πŸ“ˆ Summary:
   Overall Score: 67% (WARNING)
   Passed: 4/6
   Failed: 2
   Skipped: 0

JSON Output

For programmatic use, add the --json flag (preceded by "--" so npm forwards it to the script):

npm run validate -- example/src/stories/example.stories.tsx --json

This provides structured JSON output:

{
  "storyFile": "example/src/stories/example.stories.tsx",
  "timestamp": "2025-08-15T13:35:04.142Z",
  "checks": {
    "linting": { "status": "PASS", "error": null },
    "typeScript": { "status": "FAIL", "error": "Type error details..." },
    "csfCompliance": { "status": "PASS", "csfVersion": "CSF3" },
    "renderTest": { "status": "FAIL", "error": "Render error details..." },
    "interactionTest": { "status": "SKIP", "error": "Render test failed" }
  },
  "summary": {
    "totalChecks": 6,
    "passedChecks": 4,
    "failedChecks": 2,
    "skippedChecks": 0,
    "score": 67,
    "overallStatus": "WARNING"
  }
}
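
A small Node script can consume this report, for instance in an LLM evaluation loop. The sketch below assumes the --json flag prints the report to stdout and calls the validator directly rather than through npm:

import { spawnSync } from 'node:child_process';

// Run the validator with --json and parse the report from stdout.
const storyFile = 'example/src/stories/perfect.stories.tsx';
const result = spawnSync('node', ['validate_story.js', storyFile, '--json'], {
  encoding: 'utf8',
});

const report = JSON.parse(result.stdout);
const failed = Object.entries(report.checks)
  .filter(([, check]) => (check as { status: string }).status === 'FAIL')
  .map(([name]) => name);

console.log(`Score: ${report.summary.score}% (${report.summary.overallStatus})`);
if (failed.length > 0) {
  console.error('Failed checks:', failed.join(', '));
}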

πŸ“Š Exit Codes

The validation script uses standard exit codes:

  • 0: All checks passed or warnings only
  • 1: One or more checks failed

This makes it suitable for CI/CD integration:

# In CI pipeline
node validate_story.js ./story.stories.tsx
if [ $? -eq 0 ]; then
  echo "Validation passed"
else
  echo "Validation failed"
  exit 1
fi
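
The same gate works from a Node-based pipeline by checking the child process exit status (a sketch; the story path is illustrative):

import { spawnSync } from 'node:child_process';

// 0 = all checks passed or warnings only, 1 = one or more checks failed.
const { status } = spawnSync('node', ['validate_story.js', './story.stories.tsx'], {
  stdio: 'inherit',
});

process.exit(status ?? 1);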

🀝 Contributing

This project is designed for research and experimentation. Feel free to:

  • Modify validation criteria for your specific needs
  • Add new quality gates
  • Extend the MCP (Model Context Protocol) integration
  • Share findings and improvements

πŸ“„ License

MIT License


🎯 Next Steps

  1. Run the setup: npm run setup
  2. Verify installation: npm test
  3. Test with your stories: npm run validate <path>
  4. Begin research: Use the validation results to optimize LLM workflows

The foundation is solid and ready for systematic experimentation! πŸš€
