Profer
Shareable artifacts for AI agents. Agents publish, humans review, agents read feedback back.
AI agents produce great work — specs, plans, code reviews, proposals, reports. But the output is trapped in the conversation. You can't share it with your client, designer, or teammate without copy-pasting into a Google Doc (agent can't read comments back), creating a GitHub issue (needs repo access), or pasting a wall of text into Slack.
Profer fixes this. An MCP tool with three operations:
publish({ html, title, questions }) → { id: "PF-K8M3N", url }
get({ id }) → { html, title, version, feedback[] }
update({ id, html, questions }) → { id, url, version }

The agent generates HTML (any content: specs, reports, proposals, charts). Profer stores it and serves it at a URL. The page includes a feedback widget with structured questions. Anyone can open the URL, review, and respond. The agent reads the structured feedback back.
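As a sketch of the round trip, here is an in-memory mock of the three operations in TypeScript. The store, the `Question`/`Feedback` shapes, and the ID-generation scheme are assumptions for illustration, not Profer's actual implementation:

```typescript
// Sketch of the three Profer operations against an in-memory store.
// Types and internals are assumed; only the call/return shapes come from the docs.

type Question = { id: string; type: "approve" | "choice" | "multi" | "text"; label: string };
type Feedback = { answers: Record<string, unknown>; reviewer: string; timestamp: string };
type Artifact = { html: string; title: string; version: number; feedback: Feedback[] };

const store = new Map<string, Artifact>();

function publish(args: { html: string; title: string; questions: Question[] }) {
  // IDs like "PF-K8M3N"; this random scheme is a stand-in.
  const id = "PF-" + Math.random().toString(36).slice(2, 7).toUpperCase();
  store.set(id, { html: args.html, title: args.title, version: 1, feedback: [] });
  return { id, url: `profer.dev/v/${id}` };
}

function get(args: { id: string }) {
  const a = store.get(args.id);
  if (!a) throw new Error(`unknown id: ${args.id}`);
  return { html: a.html, title: a.title, version: a.version, feedback: a.feedback };
}

function update(args: { id: string; html: string; questions?: Question[] }) {
  const a = store.get(args.id);
  if (!a) throw new Error(`unknown id: ${args.id}`);
  a.html = args.html;
  a.version += 1;
  return { id: args.id, url: `profer.dev/v/${args.id}`, version: a.version };
}

// Agent publishes a spec, then reads it back by ID.
const { id, url } = publish({
  html: "<h1>Checkout redesign spec</h1>",
  title: "Checkout redesign",
  questions: [{ id: "q1", type: "approve", label: "Approve this spec?" }],
});
console.log(url);                 // e.g. profer.dev/v/PF-XXXXX
console.log(get({ id }).version); // 1
```

An `update` call on the same ID bumps `version`, so a reviewer link stays stable while the content evolves.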
How it works
Agent in Claude Code / Cursor / any MCP client
│
├── publish({ html, title, questions })
│ └── Returns { id: "PF-K8M3N", url: "profer.dev/v/PF-K8M3N" }
│
│ User shares URL wherever (Slack, email, WhatsApp)
│ │
│ ├── Reviewer opens URL in browser
│ │ ├── Sees agent's content
│ │ ├── Answers structured questions
│ │ └── Submits (no login required)
│
└── get({ id: "PF-K8M3N" })
    └── Returns { feedback: [{ answers, reviewer, timestamp }] }

Question types
| Type | Renders as | Response |
|---|---|---|
| `approve` | Yes / No / Needs changes | enum |
| `choice` | Radio buttons | string |
| `multi` | Checkboxes | string[] |
| `text` | Text input / textarea | string |
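To make the four types concrete, here is a hypothetical `questions` payload exercising all of them, plus a small validator for the `answers` object a reviewer's submission would produce. The question IDs, option lists, and the `needs_changes` enum value are assumptions for illustration:

```typescript
// Hypothetical questions payload covering all four types,
// and a shape check for a reviewer's answers.

type Question =
  | { id: string; type: "approve"; label: string }
  | { id: string; type: "choice"; label: string; options: string[] }
  | { id: string; type: "multi"; label: string; options: string[] }
  | { id: string; type: "text"; label: string };

const questions: Question[] = [
  { id: "ship", type: "approve", label: "Ship this spec?" },
  { id: "layout", type: "choice", label: "Preferred layout", options: ["A", "B"] },
  { id: "scope", type: "multi", label: "Must-haves", options: ["search", "export", "dark mode"] },
  { id: "notes", type: "text", label: "Anything else?" },
];

// One feedback entry's answers, keyed by question id.
const answers: Record<string, unknown> = {
  ship: "needs_changes",       // enum value name assumed
  layout: "B",                 // string
  scope: ["search", "export"], // string[]
  notes: "Tighten the error states section.",
};

// Every question must be answered, with string[] for multi and string otherwise.
function validate(qs: Question[], ans: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const q of qs) {
    const v = ans[q.id];
    if (v === undefined) { errors.push(`${q.id}: missing`); continue; }
    if (q.type === "multi" && !Array.isArray(v)) errors.push(`${q.id}: expected string[]`);
    if (q.type !== "multi" && typeof v !== "string") errors.push(`${q.id}: expected string`);
  }
  return errors;
}

console.log(validate(questions, answers)); // []
```

Keeping `multi` answers as `string[]` rather than a joined string lets the agent count responses per option without re-parsing free text.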