./pectus.ai — builder framework · self-hosted · MIT

For app authors

App specification

How to author a Pectus app. This is the contract — the runner, the CLI, and the CMS depend on it.

If you haven’t read concepts yet, start there. This doc assumes you understand why apps are surfaces, why connectors are different, and what the core is.

Folder layout

apps/<app-name>/
├── APP.md           required — manifest + prompt body
├── schema.ts        required — Zod schema for the app's structured output
├── provision.ts     required if the app has external API config
├── README.md        required — overview for humans
└── reference/       optional — knowledge files the prompt loads

The folder name is the app name. Use kebab-case. Community-authored apps as a separate install path are on the v0.4 roadmap; for now, contributing an app means a PR to pectusai/pectus.

APP.md frontmatter

---
name: my-app                          # required, must match folder name
description: One line shown in UI     # required
type: inbound | outbound              # required
version: 1.0.0                        # required, semver
needs:                                # required, which core blocks the app consumes
  brand: [voice, name]
  workspace: [icp, market, locale]
  knowledge: [insights]
inputs:                               # named inputs from skills or other apps
  - post_draft
outputs:                              # named outputs the app produces
  - published_url
config:                               # env vars the app needs (provision.ts collects them)
  - WORDPRESS_ENDPOINT
  - WORDPRESS_APP_PASSWORD
schema: ./schema.ts                   # required, path to Zod schema
model: claude-opus-4-7                # optional, default claude-opus-4-7 (for outbound apps that go through the model)
---

Body

The body of APP.md is the prompt the model runs when this app is invoked. Plain markdown.

Write the body in core-aware language. The brand voice, ICP, locale, and knowledge insights are already in scope when your prompt runs. Refer to them, don’t re-specify them.

Don’t write:

Use a friendly, witty tone with short sentences.

Do write:

Match the brand voice. Use the ICP framing for headings and CTAs.

The runner enforces this at install time. If your prompt sets tone, voice, colors, or copy that the brand owns, the install validation will reject it.

Keep the body under 4000 tokens. Long reference material goes in reference/ so prompt caching can absorb it.

Inbound vs outbound — two sides of the same shape

Apps come in two kinds. They share the same folder layout and the same manifest, but they do opposite things and the smart parts live in different files.

Inbound apps — data comes in

An inbound app brings raw data from a third-party API (or, for seed-keywords, from a CMS form) into Pectus tables that skills can consume. Examples: ga4 (Google Analytics), gsc (Search Console), google-ads, meta, linkedin, seed-keywords. Each sits between its source and a Pectus table.

An inbound app has up to two halves:

apps/<app-name>/
├── fetch.ts          mechanical — call the API, handle pagination + rate limits
└── insights/         optional — a skill that turns raw rows into Insights
    ├── SKILL.md
    └── schema.ts

The fetch.ts half is dumb plumbing. It hits the API, paginates, retries, and returns raw rows. Every install of Pectus ships with the same fetch.ts and gets the same raw payload from a given GA4 property.
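The shape of that plumbing can be sketched like this (a sketch, not the real fetch.ts — the cursor-pagination contract and retry policy here are assumptions, and any real API will differ):

```typescript
// Hypothetical sketch of the fetch.ts shape: page through a cursor-based
// API, retry transient failures, and return raw rows untouched.
type Page<Row> = { rows: Row[]; nextCursor?: string };

async function withRetries<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err; // transient failure — try the page again
    }
  }
  throw lastErr;
}

export async function fetchAllRows<Row>(
  fetchPage: (cursor?: string) => Promise<Page<Row>>,
): Promise<Row[]> {
  const rows: Row[] = [];
  let cursor: string | undefined;
  do {
    const page = await withRetries(() => fetchPage(cursor));
    rows.push(...page.rows); // raw rows only — no interpretation here
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return rows;
}
```

The interpretation deliberately lives elsewhere: this half stays identical across installs.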

The insights/ half is where Pectus stops being a generic API client and starts being your insight tool. See the next section.

The runner invokes an inbound app via:

npx pectus app fetch <app-name> --workspace <code>

Each inbound app declares its outputs as the named tables or rows it populates. Skills downstream declare matching named inputs in their SKILL.md and consume the table.

Outbound apps — output goes out

An outbound app takes something Pectus produced (a draft, a plan, a sitemap) and ships it to a target surface. Examples: content-insights (the bundled Astro site), wordpress, storyblok. Each sits between a Pectus draft and a publishing API.

An outbound app’s smart part is the prompt body. The prompt rewrites the incoming draft to match the target surface’s conventions — WordPress’s H2 conventions, Storyblok’s component structure, content-insights’s Astro layouts — using the brand voice that’s already in scope.

apps/<app-name>/
├── APP.md            manifest + prompt body that rewrites the draft
└── provision.ts      handles the actual API call to publish
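For the wordpress manifest shown earlier, the provision.ts half might look roughly like this (a sketch: the env var names come from the manifest's config block, but the WordPress request payload, the hard-coded username, and the function names are assumptions, not the real implementation):

```typescript
// provision.ts — hypothetical sketch. Collects the env vars declared
// under `config:` and performs the publish call.
export function missingConfig(
  required: string[],
  env: Record<string, string | undefined>,
): string[] {
  return required.filter((name) => !env[name]); // vars still unset
}

export async function publish(
  env: Record<string, string | undefined>,
  draft: { title: string; body: string },
): Promise<string> {
  const missing = missingConfig(
    ["WORDPRESS_ENDPOINT", "WORDPRESS_APP_PASSWORD"],
    env,
  );
  if (missing.length > 0) {
    throw new Error(`missing config: ${missing.join(", ")}`);
  }
  // Assumption: WordPress application-password auth; "admin" is a placeholder.
  const auth = Buffer.from(`admin:${env.WORDPRESS_APP_PASSWORD}`).toString("base64");
  const res = await fetch(`${env.WORDPRESS_ENDPOINT}/wp-json/wp/v2/posts`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({ title: draft.title, content: draft.body, status: "publish" }),
  });
  const post = (await res.json()) as { link: string };
  return post.link; // becomes the app's `published_url` output
}
```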

The runner invokes outbound apps when a skill produces something that needs to land somewhere — usually behind a “Publish” button in the CMS.

Why the symmetry matters

Both kinds of app are surfaces. Inbound surfaces face external data sources; outbound surfaces face external publishing destinations. The folder shape, the manifest, the registration are identical. The runner treats them through one interface. Adding a fifth ad-platform inbound app and adding a fourth outbound publisher are the same kind of contribution.

The inbound insights pipeline — Pectus’s smart part

This is the design point that makes Pectus a framework instead of just an API client.

After fetch.ts lands raw rows in the app’s table, an inbound app can ship an insights/SKILL.md. The runner invokes it with the user’s brand voice, ICP, locale, and knowledge insights in scope, and the skill emits rows in the workspace insights table that the user actually cares about. It’s a model-driven transformation step, not a mapping table.

[raw API rows in app's table]    [user's core context]
       │                                   │
       └─────────┬─────────────────────────┘
                 │
                 ▼
  [apps/<name>/insights/SKILL.md]
                 │
                 ▼
          [Insight rows]
                 │
                 ▼
        skills consume them

Why this matters

Two GA4 properties never mean the same thing to two brands. A SaaS company tracking sign_up wants different rollups than a publisher tracking article_read. A B2B brand cares about engagement-by-account; a DTC brand cares about engagement-by-campaign. A generic API wrapper that dumped raw rows into tables would force every Pectus user to ask Claude the same downstream question: “now please re-shape this for what I actually care about.”

The insights pipeline inverts that. The interpretation step runs while brand and knowledge context are fresh in scope. Skills downstream see Insights that already speak the user’s language.

What an insights skill does in practice

  • Renames noise to signal: sessions_engaged becomes read_throughs for a publisher, qualified_visits for SaaS.
  • Drops fields the user doesn’t care about: hides bot traffic, internal traffic, low-volume noise the user defined as out-of-scope in their knowledge folder.
  • Computes derived fields: engagement rate, dollar-weighted CPM, cohort tags pulled from UTM patterns the user described in their knowledge notes.
  • Adds tags from the brand’s vocabulary: classifies a campaign as “European market” or “DTC retention” using the workspace’s ICP and the user’s knowledge.
  • Rejects rows that violate sanity rules: drop rows where currency drift looks suspicious, flag rows where conversion definitions changed mid-period.
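The skill itself is a prompt, not code, so the real step is model-driven. But the transformation it performs has roughly this shape (an illustrative sketch with invented field names, for a hypothetical SaaS brand):

```typescript
// Illustrative only: the real insights step is the model running
// insights/SKILL.md against the user's core context, not deterministic
// code. All field names here are invented.
type RawRow = { sessions: number; sessions_engaged: number; utm_campaign: string };
type Insight = { qualified_visits: number; engagement_rate: number; tags: string[] };

export function toInsight(row: RawRow, icpMarkets: string[]): Insight {
  return {
    // rename noise to signal (sessions_engaged → qualified_visits)
    qualified_visits: row.sessions_engaged,
    // compute a derived field
    engagement_rate: row.sessions === 0 ? 0 : row.sessions_engaged / row.sessions,
    // add tags from the brand's vocabulary, via the workspace ICP
    tags: icpMarkets.filter((m) =>
      row.utm_campaign.toLowerCase().includes(m.toLowerCase()),
    ),
  };
}
```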

Where the insights skill lives

apps/<app-name>/
├── APP.md              manifest
├── fetch.ts            raw API wrapper
├── insights/           upstream-default insights skill — updated by `pectus update`
│   ├── SKILL.md        prompt body
│   └── schema.ts       shape the skill must produce
├── schema.ts           raw output shape
├── provision.ts        credential setup
└── README.md

seed-keywords ships this pattern today. Other inbound apps land their raw rows now and gain insights skills as the pipeline matures.

Customizing the interpretation

The runner caches stable inputs (brand profile, ICP, knowledge insights) and recomposes the prompt every run, so editing insights/SKILL.md in your install changes the next interpretation. pectus update rebases upstream changes; if you’ve edited this file, you’ll resolve a conflict the same way you would for any other skill — keep your changes if they’re brand-specific, or push them upstream if they’re general.

A documented user-override seam (a brand/interpreters/<app>.md file that composes on top of the upstream default, plus a form-based UI for the common knobs) is on the v0.4 roadmap.

Why this is the framework move, not a feature

Most data platforms force a shape. Use ours, or write a custom integration. Pectus says: here’s a default shape that works, and here’s the seam where your knowledge and your brand bend the import. That’s the framework promise — for any inbound app the community ever ships.

The five-layer prompt envelope

When the runner invokes an outbound app, the system prompt is composed as:

[1] Brand block       from brand/brand.json
[2] Workspace block   from the active workspace
[3] Knowledge block   from knowledge/insights.md
[4] Skill block       (only if a skill called the app)
[5] App block         your APP.md body

Your app block is the last layer. Layers 1–4 are already in the model’s scope. Don’t repeat them.
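The composition can be pictured as a simple fold over the layers (a sketch only: the runner's actual envelope format and separators are not specified here):

```typescript
// Hypothetical sketch of the five-layer envelope. Block names mirror the
// layers above; the separator is an assumption.
type Envelope = {
  brand: string;      // [1] from brand/brand.json
  workspace: string;  // [2] from the active workspace
  knowledge: string;  // [3] from knowledge/insights.md
  skill?: string;     // [4] only present when a skill called the app
  app: string;        // [5] your APP.md body — always last
};

export function composePrompt(e: Envelope): string {
  const layers = [e.brand, e.workspace, e.knowledge, e.skill, e.app];
  return layers.filter((l): l is string => l !== undefined).join("\n\n---\n\n");
}
```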

Validation rules

The CLI runs these checks on npx pectus app install (and during CI for upstream apps):

  1. The prompt body contains no tone-setting words (specific tone adjectives, voice descriptors).
  2. The prompt body contains no color, font, or visual identity instructions.
  3. The prompt body contains no literal copy that the brand owns (taglines, slogans, names).
  4. needs is consistent with what the prompt body references.
  5. config env vars are documented in the README.
  6. provision.ts exists if config is non-empty.

Violations block install. The make-it skill’s output passes these checks because the skill enforces them at scaffold time.

Outputs

The Zod schema in schema.ts defines exactly what the app returns. Outputs land in:

  • app_runs.output (always — every run logs its full output here, mirroring skill_runs)
  • App-specific destinations: a published_url for WordPress, a deploy ID for content-insights, a row count for inbound apps.

Caching

Mark large, slow-changing inputs with cache_inputs: in frontmatter. Same semantics as skills. Brand profile, ICP, and knowledge insights are good candidates. App-specific draft content (post body, page outline) usually isn’t.
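As a sketch, the frontmatter addition might look like this (cache_inputs: is the documented key; the list-value shape and entry names below are assumptions):

```yaml
# Sketch — entry names are assumptions. Cache the stable core blocks,
# not the per-run draft.
cache_inputs:
  - brand
  - workspace.icp
  - knowledge.insights
```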

Versioning

Bump version when you change the schema or the prompt in a way that produces materially different output. The runner records the version on every app_runs row.

Authoring an app

The fastest path:

npx pectus make-it app

The CLI prompts you for the brief, calls the make-it skill, and writes a complete scaffold (APP.md, schema.ts, provision.ts, README.md). Open a PR to pectusai/pectus for an official app.

A separate community-app install path (clone-from-GitHub-URL into a community/ folder, plus a CLI installer) is on the v0.4 roadmap.

What apps don’t do

  • Don’t reimplement infrastructure. Use @pectus/supabase, @pectus/anthropic, @pectus/google/oauth, etc. If you need a service that doesn’t have a connector, file an issue first.
  • Don’t write to arbitrary tables. The runner exposes structured destinations. Apps that need a new table propose a migration upstream.
  • Don’t bypass the runner. Apps register through their manifest; the runner discovers them. There is no “register your app in CMS code” step, by design.