Pick the right interface

Brandomica Lab exposes the same safety workflow through five interfaces: web UI, REST API, CLI, MCP, and ChatGPT GPT. Each returns the same scores, evidence, and filing readiness — but they differ in how much context an AI agent needs to consume. If you build with agents, this page shows you which interface to use and why.

Why token cost matters

AI agents pay per token — input and output. When an agent scrapes a web page it must ingest markup, navigate the DOM, and parse visual layouts. The same brand check via MCP, CLI, or REST returns structured JSON instead of HTML, which is usually much cheaper for the agent to process.

Lower token cost means faster responses, cheaper API bills, and more room in the context window for the agent to reason about your brand name instead of parsing markup.

When to use what

Interface      Best for
MCP            AI conversations
CLI            Terminal & CI/CD
REST API       Custom integrations
ChatGPT GPT    ChatGPT users
Web UI         Humans

Methodology note

This page compares relative overhead, not guaranteed token counts. The underlying brand-check JSON is similar across MCP, CLI, and REST; the main difference is interface overhead (tool schemas, skill manifests, or page markup/parsing).

“Session overhead” refers to what an agent loads before the first check (for example MCP tool schemas or a CLI skill manifest). “Repeated checks” focuses on the cost of each additional brand check in the same workflow.

Exact token usage varies by model tokenizer, prompt context, agent runtime, and scraping/extraction method. Treat the table and examples as directional guidance for choosing the right interface, not billing estimates.

How CLI and MCP reduce tokens

1. No HTML parsing

   Web pages send thousands of tokens of markup, styles, and scripts. CLI and MCP return only structured data; the agent never sees a DOM.

2. No multi-step navigation

   A web-scraping agent must load the page, find the input, type the name, click search, wait for results, then parse each section. One MCP tool call or one CLI command does the same work in a single round-trip.

3. Scoped responses

   Need only the safety score? Call brandomica_assess_safety or brandomica safety <name> and get back only what you asked for. The web UI always renders everything.

4. Machine-readable output

   CLI --json and MCP tool results are structured JSON. The agent can extract fields directly: no regex, no heuristic scraping, no hallucinated values.

5. CI-native exit codes

   The CLI returns exit code 2 when a brand has safety blockers. A pipeline can gate deploys on brand safety without parsing any output at all: zero tokens.
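The exit-code gate in point 5 can be sketched as a CI shell step. The brandomica function below is a stand-in stub so the sketch is self-contained; the real CLI is what actually returns exit code 2 on safety blockers:

```shell
# Stub standing in for the real CLI: simulate a brand with
# safety blockers, which the docs say exits with status 2.
brandomica() { return 2; }

if brandomica safety mybrand; then
  echo "PASS: no safety blockers"
else
  rc=$?
  if [ "$rc" -eq 2 ]; then
    # A real pipeline would stop the deploy here.
    echo "BLOCKED: safety blockers found (exit $rc)"
  else
    echo "ERROR: check failed (exit $rc)"
  fi
fi
```

No output parsing is needed; the branch decision comes entirely from the exit status.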

Relative overhead for one brand check

Web UI        Highest
MCP           Medium
CLI --json    Lower
REST API      Lowest

Directional comparison including session overhead. Web UI includes page load, search interaction, and result parsing. Structured interfaces return JSON only.

MCP skills

Tools available inside any MCP-compatible AI agent. Each tool returns structured JSON — the agent can reason on it immediately.

brandomica_check_all          Full check with score + safety
brandomica_assess_safety      Safety-only (fast agent decisions)
brandomica_filing_readiness   Filing verdict + risk + conflicts
brandomica_brand_report       Full timestamped evidence report
brandomica_compare_brands     Compare 2–5 names side-by-side
brandomica_batch_check        Check 2–10 names in one call

Full MCP setup + all 12 tools →
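Under the hood, an MCP client invokes these tools with a standard JSON-RPC tools/call request. A sketch of a call to brandomica_assess_safety might look like the following; the argument key here is illustrative, and the tool's published input schema is authoritative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "brandomica_assess_safety",
    "arguments": { "name": "mybrand" }
  }
}
```

The agent never composes this by hand — the MCP runtime does — but it is useful to see that one small request replaces an entire page-load-and-scrape loop.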

CLI skills

Commands available from the terminal. Add --json for machine-readable output, or pipe to jq for field extraction.

check <name>        Full check (score + safety + evidence)
safety <name>       Safety-only (fast, 0-100 score)
compare <names...>  Compare 2+ names side-by-side
batch <names...>    Check 2–50 names, sorted by score
filing <name>       Filing readiness verdict
report <name>       Full evidence report (--save to file)

The CLI ships with a SKILL.md file — a machine-readable skill manifest that agents use to auto-discover available commands without loading tool schemas into context.

Full CLI reference + flags + exit codes →
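As a rough sketch of field extraction from --json output: the response below is a hypothetical stand-in for what brandomica safety mybrand --json might return (the real field names may differ), with a single value pulled out using only POSIX tools:

```shell
# Hypothetical --json response (illustrative schema):
response='{"name":"mybrand","safety_score":82,"blockers":[]}'

# Extract one numeric field; in practice you would pipe the
# command straight to jq, e.g.  ... --json | jq -r .safety_score
score=$(printf '%s' "$response" | sed -n 's/.*"safety_score":\([0-9]*\).*/\1/p')
echo "safety score: $score"
```

The point is the shape of the workflow, not the sed one-liner: structured output means one small extraction step instead of scraping rendered HTML.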

ChatGPT GPT

Brandomica is available as a custom GPT in the GPT Store. Ask it about any brand name and it calls the Brandomica API via GPT Actions — same safety scores, trademark checks, and filing readiness as every other interface.

Defaults to quick mode for fast responses. Say “full details” for complete results with domain pricing, all social platforms, and app store checks.

Open Brandomica in ChatGPT →

Recommended workflows

Quick safety gate (agent or CI)

# CLI — exit code 2 if blockers found
brandomica safety mybrand

# MCP — call assess_safety, check blockers array

Small safety-only response. Fastest path to a go/no-go decision.

Name shortlisting (batch)

# CLI — quick mode, sorted by score
brandomica batch acme nimbus zenflow spark bolt

# MCP — batch_check returns ranked results

Compact batch response in quick mode. One call replaces multiple separate checks.

Full due diligence (report)

# CLI — save timestamped report
brandomica report mybrand --save report.json

# MCP — brand_report returns full evidence

Larger structured evidence response for filing decisions and due diligence.
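A due-diligence step might then gate on the saved report's verdict. The report body below is a hypothetical stand-in for the real schema, written to report.json directly in place of the --save flag so the sketch runs on its own:

```shell
# Stand-in for `brandomica report mybrand --save report.json`
# (illustrative report shape; the real schema may differ):
printf '%s' '{"brand":"mybrand","verdict":"review"}' > report.json

# Fail loudly if the report was not produced:
[ -s report.json ] || { echo "report missing" >&2; exit 1; }

# Pull the verdict out for a go/no-go decision:
verdict=$(sed -n 's/.*"verdict":"\([a-z]*\)".*/\1/p' report.json)
echo "filing verdict: $verdict"
```

In a real pipeline the saved report doubles as the archived evidence artifact, so the same file serves both the gate and the audit trail.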

Developer links