AI Collaboration Guide

Positioning

This document is not a lightweight keyword cheat sheet for AI tools. It is an internal collaboration guide for Ez-UI project members. Its goals are to:

  • standardize how AI is used across component development, example maintenance, documentation work, review, and release validation
  • reduce drift across component code, runnable examples, and documentation
  • make each task type follow an explicit input, workflow, output, and acceptance model
  • avoid AI output that looks complete but is not actually release-ready

Audience

This guide applies to:

  • component maintainers
  • docs and example maintainers
  • code reviewers
  • release validators
  • project members using AI for analysis, edits, and regression checks

Scope

  • component source maintenance
  • docs example maintenance
  • synchronized updates between docs and examples
  • pre-release validation and review

This guide does not target:

  • generic programming education content
  • AI usage advice unrelated to this repository
  • replacing product decisions, UX decisions, or final release approval

Core Principles

When using AI in Ez-UI, the default principles are:

  • understand the existing implementation before proposing changes
  • preserve correct behavior before chasing abstraction cleanup
  • keep runnable examples aligned first, then align docs wording
  • prefer the smallest necessary change before larger restructuring
  • provide validation results before final conclusions
  • explicitly describe external API, example, and documentation impact when they change

Role Responsibilities

Component Maintainers

For component changes, AI output must cover at least:

  • the target of the change and root cause
  • affected components, types, exports, styles, or slot boundaries
  • the minimal viable implementation instead of bundled unrelated refactors
  • validation results, including error checks, build checks, or key behavioral verification

AI must not be treated as the final authority for:

  • unresolved API design decisions
  • compatibility guarantees
  • release risk acceptance

Docs and Example Maintainers

For docs or example work, AI output must cover:

  • whether the example still runs
  • whether the docs still match the example
  • whether Chinese and English pages are both updated
  • whether navigation, sidebar entries, and links stay correct

Code Review Participants

When AI is used for review, it must report issues before solutions. The output structure is fixed:

  1. Findings
  2. Open Questions
  3. Summary

If no issue is found, the result must explicitly say so and still mention remaining risks or validation gaps.

Release Validators

AI can support release checks, but it cannot replace the final release decision. Output must include:

  • completed checks
  • unverified areas
  • the release recommendation and its conditions

Task Types and Standard Workflows

1. Fixing Component Issues

Workflow:

  1. confirm the observable symptom and trigger conditions
  2. determine whether the problem belongs to the component, example, documentation, or integration boundary
  3. fix the root cause before adding temporary compensation
  4. verify key regression paths after the change

Minimum deliverables:

  • cause of the issue
  • modification points
  • regression surface
  • validation results

2. Changing Component API or Behavior

Workflow:

  1. classify the change as enhancement, fix, or breaking change
  2. inspect types, exports, installer entries, Nuxt integration, and examples for impact
  3. update docs and examples together
  4. state compatibility impact explicitly

Minimum deliverables:

  • impact summary
  • affected file list
  • type and docs sync result
  • build or error-check result

3. Updating Docs or Examples

Workflow:

  1. confirm whether the example still represents the recommended path
  2. verify whether the docs have drifted from actual behavior
  3. align docs to runnable examples
  4. update Chinese and English pages together, including navigation

Minimum deliverables:

  • updated recommended usage
  • docs and example sync status
  • Chinese and English sync status
  • dead-link risk status

4. Running Review

Workflow:

  1. detect behavior changes first
  2. inspect types, examples, docs, and edge cases next
  3. discuss implementation suggestions last

Not acceptable:

  • proposing solutions before identifying findings
  • giving only a summary without findings
  • treating style preferences as defects

5. Running Release Validation

Workflow:

  1. check component build
  2. check docs build
  3. check key examples
  4. check errors or diagnostics
  5. check Chinese and English doc navigation

If any critical check is missing, the conclusion must stay conditional instead of claiming the release is ready.

Repository-specific Rules

The following rules are based on verified Ez-UI constraints and take priority over generic advice.

Table and Layout Rules

  • when any column is fixed, define width or min-width on the remaining data columns
  • do not use doLayout as a default fix; only use it when layout timing is the actual issue
  • table misalignment checks must include column config, parent layout, fixed columns, and content width
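The fixed-column rule above can be expressed as a small, checkable shape. The column interface and helper below are an illustrative sketch, not the actual Ez-UI API:

```typescript
// Hypothetical FastTable-style column config; field names are assumptions.
interface ColumnConfig {
  prop: string;
  label: string;
  fixed?: "left" | "right";
  width?: number;
  minWidth?: number;
}

// When one column is fixed, every remaining data column needs an explicit
// width or minWidth so the fixed layout cannot collapse or misalign.
const columns: ColumnConfig[] = [
  { prop: "name", label: "Name", fixed: "left", width: 160 },
  { prop: "status", label: "Status", minWidth: 120 },
  { prop: "updatedAt", label: "Updated", minWidth: 180 },
];

// A small guard a reviewer (or a lint step) could run over example configs.
function hasWidthConstraints(cols: ColumnConfig[]): boolean {
  if (!cols.some((c) => c.fixed)) return true; // rule only applies with fixed columns
  return cols
    .filter((c) => !c.fixed)
    .every((c) => c.width !== undefined || c.minWidth !== undefined);
}
```

A check like this keeps the rule verifiable instead of slogan-style, in line with the documentation-sync principles below.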

Types and Examples

  • prefer strongly typed examples and avoid unnecessary as unknown as assertions
  • when FastTable examples lose template inference, inspect typed helper bindings such as createTypedFastTable
  • examples are not disposable demos; they must represent the current recommended usage
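As a rough illustration of why the typed helper matters, the sketch below uses a hypothetical createTypedTable (the real createTypedFastTable signature may differ): the generic parameter carries the row type, so cell access stays fully inferred and no as unknown as casts are needed:

```typescript
// Illustrative sketch only; not the actual Ez-UI helper.
interface TypedTable<Row> {
  rows: Row[];
  getCell<K extends keyof Row>(index: number, key: K): Row[K];
}

function createTypedTable<Row>(rows: Row[]): TypedTable<Row> {
  return {
    rows,
    getCell(index, key) {
      return rows[index][key];
    },
  };
}

interface User {
  id: number;
  name: string;
}

const table = createTypedTable<User>([{ id: 1, name: "Ada" }]);
// Inferred as string: no assertion required, and a typo in the key
// ("nmae") would fail at compile time instead of at runtime.
const firstName: string = table.getCell(0, "name");
```

When template inference degrades in a FastTable example, the first suspect is the generic binding of the typed helper, not the template itself.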

Query, Pagination, and Behavioral Semantics

  • keep pagination, query, and sort semantics aligned with search/query/reset/changePage
  • Query and Reset behavior must remain consistent across examples and docs
  • for CRUD query areas, prefer realistic common usage over rare abstraction-heavy patterns
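The intended semantics can be sketched as pure state transitions. All names here are illustrative assumptions, not the actual Ez-UI API: search applies the current form and returns to page 1, reset restores defaults and re-queries from page 1, and changePage moves only the page:

```typescript
// Hypothetical query-area state; shape is an assumption for illustration.
interface QueryState<F> {
  form: F; // what the user is currently editing
  appliedForm: F; // what the last query actually used
  page: number;
}

function search<F>(state: QueryState<F>): QueryState<F> {
  // Apply the edited form and jump back to the first page.
  return { ...state, appliedForm: { ...state.form }, page: 1 };
}

function reset<F>(_state: QueryState<F>, defaults: F): QueryState<F> {
  // Clear both the edited and applied form, then query from page 1.
  return { form: { ...defaults }, appliedForm: { ...defaults }, page: 1 };
}

function changePage<F>(state: QueryState<F>, page: number): QueryState<F> {
  // Only the page moves; the applied query stays untouched.
  return { ...state, page };
}
```

Keeping examples and docs aligned to one such model prevents the common drift where one example resets the page on search and another does not.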

Component Wrapper Boundaries

  • follow pass-through by default and minimal interception in wrappers
  • avoid wrappers that merely repackage the full underlying API without adding stable value
  • do not promote an uncommon combination into the default recommended architecture just because one example can make it work
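A minimal sketch of pass-through with minimal interception, using hypothetical option names: the wrapper strips only the single option it owns and forwards everything else to the underlying component untouched:

```typescript
// Options accepted by the underlying component; treated opaquely here.
type UnderlyingOptions = Record<string, unknown>;

// The wrapper adds exactly one behavior of its own (name is illustrative).
interface WrapperOptions extends UnderlyingOptions {
  confirmBeforeDelete?: boolean;
}

function toUnderlying(opts: WrapperOptions): UnderlyingOptions {
  // Intercept only what the wrapper owns; every other option flows
  // through unchanged, so the wrapper never has to re-declare (and
  // re-maintain) the full underlying API surface.
  const { confirmBeforeDelete, ...passThrough } = opts;
  void confirmBeforeDelete; // consumed by the wrapper's own logic
  return passThrough;
}
```

The design point is maintenance cost: a wrapper that enumerates the underlying API must chase every upstream change, while a pass-through wrapper only tracks the few options it actually intercepts.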

Documentation Sync

  • any behavior change must be reflected in both Chinese and English docs
  • if example code and docs conflict, align docs to runnable examples first
  • when removing pages, update sidebar entries in the same change to avoid dead links
  • prefer verifiable rules over slogan-style guidance

AI Output Requirements

Implementation Tasks

Output must include at least:

  • what changed
  • why it changed
  • affected files or modules
  • what was validated
  • what remains unverified

Review Tasks

Output order must be:

  1. Findings
  2. Open Questions
  3. Summary

Release Tasks

Output must include at least:

  • completed checks
  • skipped checks
  • current blockers
  • release recommendation

Prohibited Patterns and Common Anti-patterns

The following are not acceptable by default in this project:

  • changing components before reading enough context
  • updating Chinese docs without English docs
  • updating examples without updating docs
  • leaving fixed-column tables without width constraints on the remaining columns
  • offering implementation ideas without validation
  • adding hacks with no boundary explanation just to force a visual result
  • turning a one-off case into a long-term default API
  • writing review summaries without concrete findings

Pre-release Gates

Release can proceed only when the following conditions are met:

  1. component build passes
  2. docs build passes
  3. key examples render correctly without visible UI misalignment
  4. full error check is clean, or remaining items are explicitly accepted
  5. Chinese and English doc navigation has no broken links
  6. breaking changes are explicitly documented

If any area remains unverified, the release conclusion must state that clearly instead of implying confidence without evidence.

Maintenance of This Guide

This guide should be updated when:

  • a new high-frequency task pattern appears in the repository
  • component architecture boundaries change
  • release workflow or acceptance gates change
  • the team repeatedly hits the same class of problems across tasks

Maintenance rules:

  • add executable rules before adding slogans
  • document verified experience before speculative advice
  • keep Chinese and English pages aligned within the same update cycle

Fast Recognition Keywords

These keywords can help route tasks, but they do not replace full context analysis:

  • table misalignment / fixed column / column width / doLayout: inspect column config, fixed columns, container layout, and layout recalculation timing
  • type error / slot / row / TableColumn / createTypedFastTable: inspect generic binding and template inference degradation
  • release / stable / ready to publish: run build checks, docs build checks, and key example regression checks
  • review / regression / verification: report findings by severity first, not implementation proposals first

Relationship to Repository-level Rules

This page is a docs-level project collaboration guide used for team execution and alignment.

If repository-level AI rule matching is needed later, a dedicated repository entry file can be added and cross-linked with this page. That entry file should not replace the project-level execution details documented here.
