In Claude Code, Cursor, or Codex, the assistant can chain process intelligence with action: writing, testing, and iterating on code grounded in your real process definitions. In Claude or ChatGPT, persistent capabilities (Claude Skills, ChatGPT Custom GPTs) can be built directly on the Klarity MCP.

## Documentation Index
Fetch the complete documentation index at: https://developers.klarity.ai/llms.txt
Use this file to discover all available pages before exploring further.
## Code generation grounded in your processes
| Prompt | What gets built |
|---|---|
| "Build a Python script that monitors procurement for observations mentioning 'exception' or 'escalation' and generates a weekly digest." | A monitoring script grounded in your actual process structure and observation schema |
| "Write an agent that checks for process changes daily and flags any that affect SOX-controlled workflows." | Compliance automation built on your real control mappings |
| "Generate a test suite that validates the business rules in our loan approval process." | Tests derived from documented process logic, not guesswork |
| "Build a dashboard data model that tracks cycle time across our top 10 processes." | Schema design informed by your actual process hierarchy and observation data |
| "Look at our order-to-cash process, identify the manual steps, and draft an automation proposal." | Process analysis → bottleneck identification → structured deliverable |
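To make the first prompt concrete, here is a minimal sketch of the keyword-monitoring core such a script might contain. The observation shape used below (`process` and `text` fields) is an assumption for illustration only; the real fields should be confirmed via `list_process_observations` before relying on them.

```python
from collections import defaultdict

# Keywords the digest watches for, per the example prompt above.
KEYWORDS = ("exception", "escalation")

def weekly_digest(observations):
    """Group observations whose text mentions a watched keyword by process.

    `observations` is a list of dicts with hypothetical `process` and
    `text` fields; verify the real schema against the Klarity tools.
    """
    digest = defaultdict(list)
    for obs in observations:
        text = obs.get("text", "").lower()
        if any(kw in text for kw in KEYWORDS):
            digest[obs.get("process", "unknown")].append(obs["text"])
    return dict(digest)

sample = [
    {"process": "procurement", "text": "Invoice exception raised for PO-1042"},
    {"process": "procurement", "text": "Routine approval completed"},
    {"process": "procurement", "text": "Escalation to category manager"},
]
print(weekly_digest(sample))
```

The point of grounding is that the assistant would derive `KEYWORDS` handling and the field names from your actual observation schema rather than hard-coding guesses like these.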
## Persistent capabilities
| Capability | What it does |
|---|---|
| Claude Skill: weekly process health report | One-time investigation becomes a repeatable organizational capability that runs on demand |
| Custom GPT: ops onboarding assistant | Answers questions for new hires, grounded in your real process library |
| Claude Skill: audit package generator | Given a process name, produces a full audit package — diagram, dependencies, changes, artifacts |
| Skill: conformance monitor | Compares new observations against the documented standard and flags deviations |
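The conformance monitor in the last row reduces to a comparison between a documented step sequence and an observed one. A minimal sketch, assuming steps can be represented as ordered lists of names (the real standard would come from the documented process definition):

```python
def flag_deviations(standard_steps, observed_steps):
    """Report documented steps that were skipped or ran out of order.

    `standard_steps` is the documented sequence; `observed_steps` is one
    observed execution. Both are hypothetical flat lists for illustration.
    """
    position = {step: i for i, step in enumerate(standard_steps)}
    skipped = [s for s in standard_steps if s not in observed_steps]
    out_of_order = []
    last = -1
    for step in observed_steps:
        idx = position.get(step)
        if idx is None:
            continue  # undocumented step; ignore in this sketch
        if idx < last:
            out_of_order.append(step)
        last = max(last, idx)
    return {"skipped": skipped, "out_of_order": out_of_order}

standard = ["receive_order", "credit_check", "fulfil", "invoice"]
observed = ["receive_order", "fulfil", "credit_check", "invoice"]
print(flag_deviations(standard, observed))
```

A real skill would pull both sequences from the Klarity tools and run this comparison on every new observation.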
## Recipe for grounded code-gen

1. **Read the process before writing code.** Call `search` → `fetch` → `get_process_details`. The assistant should write code against the actual schema and process structure, not its training-data assumptions.
2. **Confirm observation shape.** Call `list_process_observations` and `get_observation_activity_timeline` on a sample observation. The assistant now knows the real fields and timing semantics.
3. **Pull schema only if the code touches the DB.** Call `get_schema` for the full PostgreSQL schema. Treat it as last-resort context; prefer typed tools where they exist.
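The steps above can be sketched as a single context-gathering pass. The `StubClient` below is a stand-in for however your MCP host dispatches tool calls; only the tool names come from this recipe, and every argument name is an assumption to check against the real tool schemas.

```python
class StubClient:
    """Stand-in for an MCP session. In a real host (Claude Code, Cursor,
    Codex) the assistant invokes the tools directly; here we just echo
    each call so the flow is visible and testable offline."""
    def call(self, tool, **args):
        return {"tool": tool, "args": args}

def gather_grounding(client, query):
    """Run the recipe in order before generating any code."""
    # Step 1: locate the process and read its full definition.
    hits = client.call("search", query=query)                      # tool name from docs
    details = client.call("get_process_details", process=query)    # arg name assumed
    # Step 2: confirm the real observation fields on a sample.
    observations = client.call("list_process_observations", process=query)
    # Step 3 is deliberately omitted: only call get_schema when the
    # generated code actually touches the database.
    return {"hits": hits, "details": details, "observations": observations}

context = gather_grounding(StubClient(), "order-to-cash")
print(sorted(context))
```

Only after this context exists should the assistant start writing the script, agent, or test suite.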
