PENMAN
AI Visibility Platform

Core module: AI Visibility (AEO / GEO)

The system that tells you: “Are we selected as a source?”

AI Visibility is the diagnostic layer inside PENMAN. It runs a repeatable check, extracts the sources AI prefers, classifies relevance into consistent tiers, and surfaces “included vs ignored” gaps — so the next step is obvious: compare against the source set, generate recommendations, then execute safely.

What you get

Sources + tiers + visibility signals.

Why it’s usable

Normalized outputs, not messy AI text.

Where it goes next

Feeds AI Competition & Recommendations.

Important guardrails

AI Visibility helps you measure and decide. It does not promise outcomes.

  • No guaranteed citations, inclusion, or Discover placement.
  • No auto-publish by default — approval is required.
  • Dependency: the module requires a Workspace context and a completed structural/SEO analysis (prerequisite gate).

How AI Visibility runs, step by step

Think of this as a controlled pipeline: prepare context, run visibility, normalize results, then hand off to recommendations and safe execution.

1. Set the Workspace context

Choose the site/workspace and the page(s) you want to evaluate. The goal is to run visibility on the content you actually want AI to cite or reference.

  • Pick pages or a page group.
  • Define the topic/intent focus for the run.
2. Pass the prerequisite gate (structural/SEO analysis)

AI Visibility depends on a foundation layer. PENMAN runs (or reuses) the structural readiness analysis first, so visibility signals map cleanly into actions later.

Outcome: a stable baseline for readability and structure that enables the AI layer.
3. Run the AI Visibility check

PENMAN queries the selected AI contexts, observes the answer behavior, and extracts the source set that AI is using for your topic/intent.

Tags: source extraction, visibility signals, repeatable runs
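The extraction step above can be sketched in a few lines. This is a minimal, hypothetical illustration (the payload shape, field names, and `extract_source_set` helper are assumptions, not PENMAN's actual API): collect the sources cited across repeated AI answers and rank them by citation frequency.

```python
from collections import Counter
from urllib.parse import urlparse

def extract_source_set(answers):
    """Collect the sources cited across repeated AI answers.

    `answers` is a list of dicts like {"citations": ["https://..."]},
    a hypothetical shape; real payloads vary by AI context.
    Returns domains ordered by how often they are cited.
    """
    counts = Counter()
    for answer in answers:
        # Deduplicate within a single answer so one verbose reply
        # does not dominate the frequency signal.
        domains = {urlparse(url).netloc for url in answer.get("citations", [])}
        counts.update(domains)
    return counts.most_common()

answers = [
    {"citations": ["https://example.com/guide", "https://docs.example.org/a"]},
    {"citations": ["https://example.com/faq"]},
]
print(extract_source_set(answers))  # example.com cited in both answers
```

Running the check repeatedly and aggregating like this is what makes the source set a signal rather than a one-off observation.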
4. Normalize results into tiers and gaps

AI outputs are messy. PENMAN converts them into a consistent schema so teams can compare runs and make decisions quickly.

  • Tiering: High / Medium / Low
  • Signal: Included vs Ignored
  • Gap: what blocks selection
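A consistent schema could look like the sketch below. The record shape, field names, and tier thresholds are illustrative assumptions (PENMAN's real cut-offs are not public); the point is that every run produces the same comparable fields.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VisibilityRecord:
    source: str             # e.g. "example.com"
    tier: str               # "high" | "medium" | "low"
    included: bool          # the Included-vs-Ignored signal for your page
    gap: Optional[str]      # what blocks selection, if known

def tier_for(citation_share):
    # Hypothetical thresholds on the share of runs citing a source.
    if citation_share >= 0.6:
        return "high"
    if citation_share >= 0.3:
        return "medium"
    return "low"

record = VisibilityRecord(
    source="example.com",
    tier=tier_for(0.7),
    included=False,
    gap="topic covered, but page lacks citable structure",
)
print(record.tier)  # high
```

Because every record carries the same fields, two runs can be diffed mechanically instead of re-read by hand.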
5. Store run history (so it’s not a one-off audit)

Visibility only matters if you can re-run and compare. PENMAN keeps the results so you can track movement and decide on the next batch.

Best practice: run a weekly loop on priority topics and pages.
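Tracking movement between runs reduces to a set comparison. A minimal sketch, assuming each stored run is a set of cited sources (the `diff_runs` helper and run shapes are hypothetical):

```python
def diff_runs(previous, current):
    """Compare two visibility runs, each a set of cited sources.

    Returns which sources appeared, disappeared, and persisted,
    so a weekly loop can track movement instead of re-auditing.
    """
    return {
        "gained": sorted(current - previous),
        "lost": sorted(previous - current),
        "stable": sorted(previous & current),
    }

last_week = {"example.com", "docs.example.org"}
this_week = {"example.com", "blog.example.net"}
print(diff_runs(last_week, this_week))
```

Running this on a weekly cadence for priority topics gives the team a concrete "what moved" answer before deciding on the next batch.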
6. Hand off to AI Competition & Recommendations

The extracted source set becomes your “competition.” PENMAN uses it to explain gaps and generate a backlog of recommendations and draft-safe tasks.
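The hand-off can be pictured as a filter over the normalized records: only explainable gaps become tasks, and every task starts as a draft. The record shape and `build_backlog` helper below are illustrative assumptions, not PENMAN's actual interface.

```python
def build_backlog(visibility_records):
    """Sketch: turn normalized visibility results into a draft backlog.

    Each record is a dict like {"source": ..., "included": bool, "gap": ...}
    (a hypothetical shape). Only sources where your page is ignored AND
    the blocker is known become tasks; everything stays in draft status
    because publishing is an explicit, approved step.
    """
    return [
        {
            "task": f'Close gap vs {r["source"]}: {r["gap"]}',
            "status": "draft",  # approval required before publish
        }
        for r in visibility_records
        if not r["included"] and r.get("gap")
    ]

records = [
    {"source": "example.com", "included": False, "gap": "missing FAQ schema"},
    {"source": "docs.example.org", "included": True, "gap": None},
]
print(build_backlog(records))
```

The draft-only default mirrors the guardrail stated earlier: the module measures and recommends, it never auto-publishes.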

Screenshot: AI Visibility report (sources + relevance tiers + signals)
Benefit: align the team on the same source set and the same “what to improve next.”

Screenshot: run history (repeatable checks and comparisons over time)
Benefit: make AI visibility a cadence, not a one-off audit. (No outcome guarantees.)

Screenshot: hand-off to recommendations (source set becomes “competition” input)
Benefit: visibility results immediately drive action, not a dead-end report.

When to run AI Visibility

Use AI Visibility when you need to decide: “what’s blocking selection, and what should we ship next?”

Before a content refresh

Run visibility, then prioritize updates that match what AI already prefers.

After shipping a batch

Re-run to compare and decide whether to iterate or move to the next topic.

For stakeholder alignment

Use the source set and tiers to align content, SEO, and editorial quickly.

Want the next step?

AI Visibility extracts the source set. AI Competition uses it to generate explainable recommendations and a backlog you can execute safely.

Next step

Extract the source set — then compare and execute safely.

Run AI Visibility, then proceed to AI Competition and draft-first execution with diffs and rollback. Publishing is always explicit.