
MCP Tools Reference

NON-NORMATIVE.

The @morphism-systems/mcp-server package exposes governance tools over the Model Context Protocol (MCP). These tools are available to any MCP-compatible client (Claude Code, Cursor, custom agents).

Package: @morphism-systems/mcp-server
Server name: morphism-mcp
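Clients that register MCP servers through a JSON manifest (for example, Claude Desktop's mcpServers block) can typically launch the server with a stanza like the following. Running the package via npx is an assumption about how it is published, not documented behavior; adjust the command to however you install the server.

```json
{
  "mcpServers": {
    "morphism-mcp": {
      "command": "npx",
      "args": ["-y", "@morphism-systems/mcp-server"]
    }
  }
}
```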


Tool index

Tool                 Description
governance_validate  Full categorical governance validation
governance_score     Compute governance maturity score
ssot_verify          Verify SSOT atom integrity
compute_kappa        Compute governance convergence metric κ
compute_delta        Compute state drift metric δ
entropy_measure      Measure Shannon entropy of text
entropy_compare      Compare entropy between two texts
entropy_report       Entropy report for multiple artifacts

governance_validate

Description: Run full governance validation against a project. Executes the categorical validation engine (naturality checks, sheaf consistency, kappa computation) and the legacy pipeline verification script. Writes a proof artifact to .morphism/proofs/ on success.

Input schema:

{
  "project_path": "string — absolute path to the project root",
  "mode": "\"categorical\" | \"pipeline\" | \"full\" — default: \"full\""
}

Mode semantics:

Mode         What runs
categorical  Naturality checks + sheaf + kappa only
pipeline     Legacy file checks (scripts/verify_pipeline.py) only
full         Both categorical and pipeline (default)

Output schema:

{
  "valid": "boolean — overall pass/fail",
  "kappa": "number — governance distance from fixed point (lower is better)",
  "governance_vector": "number[7] — scores per dimension [Policy, GitHook, CIWorkflow, SSOTAtom, Document, SecurityGate, Runbook]",
  "vector_labels": "string[7] — dimension names",
  "naturality": {
    "all_natural": "boolean",
    "summary": "string",
    "total": "number — count of checked morphisms"
  },
  "sheaf": {
    "consistency_radius": "number — sheaf consistency measure",
    "severity": "string — none | low | medium | high",
    "drift_types": "string[] — types of detected drift"
  },
  "categorical_errors": "string[] — list of categorical violations",
  "proof_artifact_path": "string | undefined — path to written proof witness",
  "output": "string | undefined — raw pipeline output",
  "error": "string | undefined — error message if validation failed"
}

Example request:

{
  "project_path": "/workspace/morphism",
  "mode": "full"
}

Example response:

{
  "valid": true,
  "kappa": 0.0,
  "governance_vector": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
  "vector_labels": ["Policy", "GitHook", "CIWorkflow", "SSOTAtom", "Document", "SecurityGate", "Runbook"],
  "naturality": {
    "all_natural": true,
    "summary": "All 7 morphisms satisfy naturality",
    "total": 7
  },
  "sheaf": {
    "consistency_radius": 0.0,
    "severity": "none",
    "drift_types": []
  },
  "categorical_errors": [],
  "proof_artifact_path": "/workspace/morphism/.morphism/proofs/proof-2026-03-12T10:00:00Z.json"
}

governance_score

Description: Compute the governance maturity score for a project by running scripts/maturity_score.py --ci --threshold 0. Returns the parsed score, total, and raw output text.

Input schema:

{
  "project_path": "string — absolute path to the project root"
}

Output schema:

{
  "score": "number — achieved score",
  "total": "number — maximum possible score",
  "output": "string — raw script output",
  "error": "string | undefined — stderr if script failed"
}

Example request:

{
  "project_path": "/workspace/morphism"
}

Example response:

{
  "score": 125,
  "total": 125,
  "output": "Maturity score: 125/125\n  [+] policy_coverage    25/25\n..."
}

ssot_verify

Description: Verify SSOT (Single Source of Truth) atom integrity by running scripts/ssot_verify.py. Checks that SSOT atom SHA-256 hashes match the canonical source documents. Returns valid: false if hashes have drifted.

Input schema:

{
  "project_path": "string — absolute path to the project root"
}

Output schema:

{
  "valid": "boolean — true if all atoms match",
  "output": "string — script stdout",
  "error": "string | undefined — script stderr if failed"
}

Example request:

{
  "project_path": "/workspace/morphism"
}

Example response (passing):

{
  "valid": true,
  "output": "SSOT atoms: all 12 atoms verified\nAll checks passed\n"
}

Example response (failing):

{
  "valid": false,
  "output": "",
  "error": "Atom 'governance-kernel' SHA mismatch: expected abc123, got def456\n1 atom(s) out of sync"
}
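The integrity rule behind ssot_verify can be illustrated with a short sketch: each atom records the SHA-256 digest of its canonical source document, and verification recomputes the digest and compares. The atom structure and field names below are hypothetical illustrations, not the project's actual schema.

```python
# Illustrative SSOT atom check: recompute the SHA-256 of the canonical
# source and compare it with the digest recorded in the atom.
# The atom dict shape here is an assumption for demonstration only.
import hashlib

def verify_atom(atom, read_file):
    """atom: {'name': ..., 'source': path, 'sha256': expected hex digest}."""
    digest = hashlib.sha256(read_file(atom["source"])).hexdigest()
    if digest != atom["sha256"]:
        return (f"Atom '{atom['name']}' SHA mismatch: "
                f"expected {atom['sha256'][:7]}, got {digest[:7]}")
    return None  # atom in sync

# In-memory stand-in for the filesystem:
docs = {"docs/kernel.md": b"All agents MUST comply.\n"}
atom = {
    "name": "governance-kernel",
    "source": "docs/kernel.md",
    "sha256": hashlib.sha256(docs["docs/kernel.md"]).hexdigest(),
}
print(verify_atom(atom, lambda p: docs[p]))  # None while hashes match
```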

compute_kappa

Description: Compute the governance convergence metric κ (kappa). Kappa measures the weighted L-infinity distance between the current governance state vector and the ideal fixed point at which all 7 dimensions score 1.0; per-dimension weights are listed in the table below. Lower κ is better; κ = 0 indicates full compliance.

Three computation modes are supported (checked in order):

  1. Live project (project_path): Invokes the Python governance engine to compute the vector from real project state
  2. Provided vector (governance_vector): Compute kappa from a 7-dimensional vector
  3. Legacy scalar (scores): Backward-compatible scalar convergence ratio

Governance dimensions (vector order):

Index  Dimension     Weight
0      Policy        1.5
1      GitHook       1.2
2      CIWorkflow    1.2
3      SSOTAtom      1.0
4      Document      0.8
5      SecurityGate  1.5
6      Runbook       0.9

Input schema:

{
  "scores": "number[] | undefined — legacy scalar score sequence (≥3 values required)",
  "governance_vector": "number[7] | undefined — 7-dimensional governance state vector",
  "kappa_history": "number[] | undefined — historical kappa values for convergence trajectory",
  "project_path": "string | undefined — compute vector from live project state",
  "tolerance": "number | undefined — convergence tolerance, default: 1e-6"
}

At least one of project_path, governance_vector, or scores must be provided.

Output schema (vector mode):

{
  "kappa": "number — L-infinity governance distance",
  "governance_vector": "number[7] — per-dimension scores",
  "vector_labels": "string[7] — dimension names",
  "converging": "boolean — true if kappa_history is monotonically non-increasing",
  "kappa_history": "number[] — updated history including this result",
  "method": "\"vector_linf_formal\"",
  "interpretation": "string — human-readable kappa interpretation"
}

Output schema (legacy scalar mode):

{
  "kappa": "number — average ratio of consecutive deltas",
  "converges": "boolean — true if kappa < 1",
  "method": "\"legacy_scalar_ratio\"",
  "interpretation": "string"
}

Kappa interpretation:

κ range  Meaning
0.0      Fixed point: governance fully compliant
< 0.1    Excellent — near fixed point
< 0.3    Good — governance converging
< 0.6    WARNING — significant compliance gaps
≥ 0.6    CRITICAL — governance far from fixed point
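A minimal sketch of the vector-mode computation. The weighted L-infinity form (κ = max over dimensions of weight × deficit) is an assumption inferred from the weights table and the worked example below; the actual engine may compute it differently.

```python
# Sketch of vector-mode kappa, assuming kappa = max_i w_i * (1 - v_i).
WEIGHTS = [1.5, 1.2, 1.2, 1.0, 0.8, 1.5, 0.9]  # Policy..Runbook, per the table above

def compute_kappa(vector):
    """Weighted L-infinity distance from the all-ones fixed point."""
    if len(vector) != 7:
        raise ValueError("governance_vector must have exactly 7 dimensions")
    return max(w * (1.0 - v) for w, v in zip(WEIGHTS, vector))

def interpret(kappa):
    """Map kappa to the interpretation bands from the table above."""
    if kappa == 0.0:
        return "Fixed point: governance fully compliant"
    if kappa < 0.1:
        return "Excellent — near fixed point"
    if kappa < 0.3:
        return "Good — governance converging"
    if kappa < 0.6:
        return "WARNING — significant compliance gaps"
    return "CRITICAL — governance far from fixed point"

kappa = compute_kappa([1.0, 1.0, 1.0, 1.0, 0.8, 1.0, 1.0])
print(round(kappa, 3))  # prints 0.16, matching the worked example below
print(interpret(kappa))
```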

Example request (vector mode):

{
  "governance_vector": [1.0, 1.0, 1.0, 1.0, 0.8, 1.0, 1.0],
  "kappa_history": [0.25, 0.18, 0.16]
}

Example response:

{
  "kappa": 0.16,
  "governance_vector": [1.0, 1.0, 1.0, 1.0, 0.8, 1.0, 1.0],
  "vector_labels": ["Policy", "GitHook", "CIWorkflow", "SSOTAtom", "Document", "SecurityGate", "Runbook"],
  "converging": true,
  "kappa_history": [0.25, 0.18, 0.16, 0.16],
  "method": "vector_linf_formal",
  "interpretation": "κ=0.160: Good — governance converging"
}

compute_delta

Description: Compute governance drift δ (delta) — the magnitude of change between a baseline state and current state. Works with scalar values or equal-length arrays. Returns a severity classification: none, low (δ < 5), medium (δ < 15), or high (δ ≥ 15).

Input schema:

{
  "baseline": "number | number[] — baseline state value(s)",
  "current": "number | number[] — current state value(s)"
}

When arrays are provided they must be the same length. Returns δ = Infinity for mismatched array lengths.

Output schema:

{
  "delta": "number — absolute drift magnitude (array: mean absolute deviation)",
  "drifted": "boolean — true if delta > 0",
  "severity": "\"none\" | \"low\" | \"medium\" | \"high\""
}

Example request (scalar):

{
  "baseline": 100,
  "current": 112
}

Example response:

{
  "delta": 12,
  "drifted": true,
  "severity": "low"
}

Example request (vector):

{
  "baseline": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
  "current": [1.0, 0.8, 1.0, 0.9, 0.7, 1.0, 1.0]
}

Example response:

{
  "delta": 0.0857,
  "drifted": true,
  "severity": "none"
}
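A minimal sketch of the delta computation, assuming the mean-absolute-deviation rule for arrays and the severity thresholds stated above; treating "none" as exactly zero drift is an assumption, as is returning severity "high" alongside δ = Infinity for mismatched array lengths.

```python
# Sketch of compute_delta: scalar drift is |current - baseline|;
# array drift is the mean absolute deviation. Severity thresholds
# follow the description above (none at zero, low < 5, medium < 15,
# high >= 15).
import math

def compute_delta(baseline, current):
    if isinstance(baseline, list) != isinstance(current, list):
        raise TypeError("baseline and current must both be scalars or both be arrays")
    if isinstance(baseline, list):
        if len(baseline) != len(current):
            # Mismatched lengths: infinite drift (severity label is an assumption)
            return {"delta": math.inf, "drifted": True, "severity": "high"}
        delta = sum(abs(b - c) for b, c in zip(baseline, current)) / len(baseline)
    else:
        delta = abs(current - baseline)
    if delta == 0:
        severity = "none"
    elif delta < 5:
        severity = "low"
    elif delta < 15:
        severity = "medium"
    else:
        severity = "high"
    return {"delta": delta, "drifted": delta > 0, "severity": severity}

print(compute_delta(100, 112))  # delta 12 -> severity "medium" under the stated thresholds
print(compute_delta([1.0] * 7, [1.0, 0.8, 1.0, 0.9, 0.7, 1.0, 1.0]))
```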

entropy_measure

Description: Measure Shannon entropy of text content. Uses the Python entropy backend (morphism.entropy.llm_entropy) when available, falling back to a JavaScript implementation otherwise. Returns entropy in the requested base unit along with a classification and interpretation.

Input schema:

{
  "text": "string — text content to measure entropy for",
  "base": "number | undefined — logarithm base (2=bits, 2.718=nats, 10=dits), default: 2"
}

Output schema:

{
  "entropy": "number — Shannon entropy value",
  "base": "number — logarithm base used",
  "bits": "number — entropy in bits (always base-2)",
  "classification": "string — entropy level label",
  "interpretation": "string — human-readable description"
}

Example request:

{
  "text": "The governance kernel defines 7 normative invariants.",
  "base": 2
}

Example response:

{
  "entropy": 4.127,
  "base": 2,
  "bits": 4.127,
  "classification": "moderate",
  "interpretation": "Well-structured text with moderate information density"
}
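The underlying quantity can be sketched in a few lines. This is the standard character-level Shannon entropy that a fallback implementation could compute; whether the server's backend uses character frequencies or some other tokenization is not specified here, so treat this as illustrative.

```python
# Character-level Shannon entropy: H = -sum(p * log_base(p)) over
# the character-frequency distribution of the text.
import math
from collections import Counter

def shannon_entropy(text, base=2):
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

print(shannon_entropy("abab"))  # two equiprobable symbols -> 1.0 bit
print(shannon_entropy("abcd"))  # four equiprobable symbols -> 2.0 bits
```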

entropy_compare

Description: Compare Shannon entropy between two texts. Returns per-text measurements, a comparison delta, direction, percent change, and a recommendation.

Input schema:

{
  "text1": "string — first text to compare",
  "text2": "string — second text to compare",
  "detailed": "boolean | undefined — include per-dimension bit breakdown, default: false"
}

Output schema:

{
  "text1": {
    "entropy": "number",
    "classification": "string"
  },
  "text2": {
    "entropy": "number",
    "classification": "string"
  },
  "comparison": {
    "delta": "number — entropy2 - entropy1",
    "ratio": "number — entropy2 / entropy1",
    "direction": "\"increased\" | \"decreased\" | \"unchanged\"",
    "percent_change": "number"
  },
  "recommendation": "string — guidance based on delta",
  "detailed": {
    "text1_bits": "number",
    "text2_bits": "number",
    "delta_bits": "number"
  }
}

The detailed field is only present when detailed: true was requested.

Example request:

{
  "text1": "The kernel defines invariants.",
  "text2": "The governance kernel formally defines seven normative invariants binding all agents.",
  "detailed": false
}

Example response:

{
  "text1": { "entropy": 3.821, "classification": "moderate" },
  "text2": { "entropy": 4.312, "classification": "moderate" },
  "comparison": {
    "delta": 0.491,
    "ratio": 1.129,
    "direction": "increased",
    "percent_change": 12.8
  },
  "recommendation": "Entropy increased modestly — acceptable expansion in information density"
}
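The comparison block can be derived from the two entropy values alone. The sketch below assumes delta = entropy2 − entropy1, ratio = entropy2 / entropy1, and percent_change relative to the first text, which is consistent with the worked example above up to rounding; the exact rounding behavior is an assumption.

```python
# Sketch of the comparison fields returned by entropy_compare,
# assuming percent_change = 100 * (e2 - e1) / e1.
def compare(e1, e2):
    delta = e2 - e1
    if delta > 0:
        direction = "increased"
    elif delta < 0:
        direction = "decreased"
    else:
        direction = "unchanged"
    return {
        "delta": round(delta, 3),
        "ratio": round(e2 / e1, 3),
        "direction": direction,
        "percent_change": round(100.0 * delta / e1, 1),
    }

print(compare(3.821, 4.312))
```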

entropy_report

Description: Generate a comprehensive entropy report for multiple artifacts. Returns summary statistics (average, range, overall classification) and per-artifact measurements. Optionally computes cross-artifact consistency entropy.

Input schema:

{
  "artifacts": [
    {
      "name": "string — artifact identifier",
      "content": "string — artifact text content",
      "type": "string | undefined — artifact type (policy, config, code, etc.)"
    }
  ],
  "include_consistency": "boolean | undefined — measure cross-artifact consistency, default: true"
}

Output schema:

{
  "summary": {
    "num_artifacts": "number",
    "average_entropy": "number",
    "max_entropy": "number",
    "min_entropy": "number",
    "entropy_range": "number",
    "overall_classification": "string"
  },
  "artifacts": [
    {
      "name": "string",
      "type": "string",
      "entropy": "number",
      "bits": "number",
      "classification": "string"
    }
  ],
  "recommendations": "string[] — actionable recommendations",
  "consistency": "object | undefined — cross-artifact consistency metrics (when include_consistency=true and ≥2 artifacts)"
}

Example request:

{
  "artifacts": [
    {
      "name": "policy-core",
      "content": "All agents MUST comply with the seven normative invariants...",
      "type": "policy"
    },
    {
      "name": "guidelines",
      "content": "Use conventional commits. TypeScript strict mode...",
      "type": "config"
    }
  ],
  "include_consistency": true
}

Example response:

{
  "summary": {
    "num_artifacts": 2,
    "average_entropy": 3.952,
    "max_entropy": 4.103,
    "min_entropy": 3.801,
    "entropy_range": 0.302,
    "overall_classification": "moderate"
  },
  "artifacts": [
    { "name": "policy-core", "type": "policy", "entropy": 4.103, "bits": 4.103, "classification": "moderate" },
    { "name": "guidelines", "type": "config", "entropy": 3.801, "bits": 3.801, "classification": "moderate" }
  ],
  "recommendations": [
    "Entropy distribution is consistent — governance documents are well-balanced"
  ],
  "consistency": { "entropy": 0.152, "classification": "low" }
}
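The summary block can be derived from the per-artifact entropy values. A minimal sketch, assuming the summary is plain descriptive statistics over those values; the overall_classification banding below is a placeholder assumption, since the server's actual classification rules are not specified here.

```python
# Sketch of the entropy_report summary block from per-artifact
# entropy values. The classification band is an illustrative
# assumption, not the server's real rule.
def summarize(entropies):
    avg = sum(entropies) / len(entropies)
    return {
        "num_artifacts": len(entropies),
        "average_entropy": round(avg, 3),
        "max_entropy": max(entropies),
        "min_entropy": min(entropies),
        "entropy_range": round(max(entropies) - min(entropies), 3),
        # Hypothetical banding; real labels are produced by the server.
        "overall_classification": "moderate" if 3.0 <= avg < 5.0 else "other",
    }

print(summarize([4.103, 3.801]))  # average 3.952, range 0.302, as in the example above
```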