Technical · 8 min read

One Tool, Four Interfaces: Brand Safety from Browser to Agent

February 24, 2026

Brand naming in the AI era follows a generate–verify–iterate loop: an agent produces candidate names, verifies each one against real-world safety data, and filters by risk score. The generation step is something large language models handle natively. The verification step requires external data — live trademark databases, domain registrars, social platforms, search engines. Brandomica is the verification layer, exposed through four interfaces optimized for different integration points.

The generate–verify–iterate pattern

A typical agentic workflow: the user asks for a safe brand name for a developer tool. The agent generates 10 candidates, calls batch_check to run safety assessments on all of them, filters out names with blockers or low safety scores, and presents the top 3 with evidence. The agent can then use compare_brands to rank finalists or filing_readiness to assess trademark filing risk.

The entire loop runs in seconds without a human opening a browser. Name generation is the AI’s job; safety verification is the tool’s job.
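The loop above can be sketched in a few lines. This is a runnable illustration of the control flow only: `batch_check` here is a stand-in that returns canned assessments, and the response shape (a 0–100 safety score plus a list of hard blockers) is an assumption for the example, not the documented Brandomica schema.

```python
def batch_check(names):
    # Stand-in for a real Brandomica batch_check call; returns canned
    # results so the generate-verify-iterate loop runs offline.
    canned = {
        "forgeline": {"safety": 88, "blockers": []},
        "acme": {"safety": 31, "blockers": ["trademark conflict"]},
        "devharbor": {"safety": 76, "blockers": []},
    }
    return {n: canned.get(n, {"safety": 50, "blockers": []}) for n in names}

def verify_and_filter(candidates, min_safety=70, top_n=3):
    # Verify every candidate, drop any with blockers or a low score,
    # then return the highest-scoring survivors.
    results = batch_check(candidates)
    safe = [
        (name, r["safety"])
        for name, r in results.items()
        if not r["blockers"] and r["safety"] >= min_safety
    ]
    safe.sort(key=lambda pair: pair[1], reverse=True)
    return safe[:top_n]

print(verify_and_filter(["forgeline", "acme", "devharbor"]))
# -> [('forgeline', 88), ('devharbor', 76)]
```

In a real agent, the generation step would produce the candidate list and `batch_check` would be the actual tool call; only the filter-and-rank step stays agent-side.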

MCP server

The MCP server is the primary interface for AI agents. It provides 12 tools via the Model Context Protocol, allowing agents to call safety checks as native tool invocations:

npx brandomica-mcp-server
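For clients that take a JSON MCP configuration (Claude Desktop uses an `mcpServers` map of this general shape), registering the server is a one-entry config. The entry below is a sketch; the server name key is arbitrary and the package name is taken from the command above.

```json
{
  "mcpServers": {
    "brandomica": {
      "command": "npx",
      "args": ["brandomica-mcp-server"]
    }
  }
}
```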

Key tools: brandomica_check_all (full safety assessment), brandomica_assess_safety (safety-only, fast), brandomica_batch_check (2–10 names), brandomica_compare_brands (ranked comparison), brandomica_filing_readiness (trademark filing decision).

All results are structured JSON: safety scores, risk signals, blockers, recommended actions, domain pricing. Token usage is typically in the low-thousands per call, depending on agent runtime, prompt context, and tool schema loading. Compatible with Claude, ChatGPT, and any MCP-compatible agent.

CLI

The CLI serves two roles: human terminal usage and AI agent tool use via Bash.

npx brandomica check acme --json | jq '.safety'

For CI/CD, the CLI acts as a safety gate: fail a build if a package name has trademark conflicts or a safety score below a threshold. For AI coding agents like Claude Code, the CLI is callable as a skill with a compact SKILL.md context — often lower token overhead than MCP in coding-agent setups because it avoids loading the full MCP tool schema set.

Commands mirror the MCP tools: check, safety, compare, batch, filing, report. All support --json for structured output. Token usage is usually lower than MCP for repeated terminal/agent calls.
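The CI gate described above is just "parse the `--json` output, fail on blockers or a low score." A minimal sketch of that logic, assuming a `{"safety": {"score": ..., "blockers": [...]}}` output shape (illustrative, not the documented schema) — in CI, `raw` would come from capturing `npx brandomica check <name> --json`:

```python
import json

# Canned example of the assumed CLI JSON output, so the gate runs offline.
raw = '{"safety": {"score": 62, "blockers": ["trademark conflict"]}}'

def gate(raw_json, threshold=70):
    # Return a process exit code: nonzero fails the build.
    safety = json.loads(raw_json)["safety"]
    if safety["blockers"] or safety["score"] < threshold:
        return 1
    return 0

print(gate(raw))  # -> 1 (blocker present and score below threshold)
```

Wrapped in a script that calls `sys.exit(gate(...))`, this turns a risky package name into a failed pipeline stage.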

REST API

The REST API is the lowest-level interface for custom integrations. The primary endpoint:

GET /api/check-all?name=acme

Returns the full safety assessment as JSON: score, risk signals, blockers, domain pricing, social handles, trademark conflicts, web presence, app stores, SaaS registries. Batch endpoint accepts 2–50 names via POST.
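Calling the two endpoints is plain HTTP. A sketch of request construction — the base URL here is a placeholder (the real host is not given in this post), and the batch POST body shape is an assumption consistent with the 2–50 name limit above:

```python
import json
from urllib.parse import urlencode

BASE = "https://api.brandomica.example"  # placeholder host, not the real one

# Single-name check: GET /api/check-all?name=acme
url = f"{BASE}/api/check-all?{urlencode({'name': 'acme'})}"

# Batch check: assumed POST body shape, 2-50 names per request.
batch_body = json.dumps({"names": ["forgeline", "devharbor", "acme"]})

print(url)  # -> https://api.brandomica.example/api/check-all?name=acme
```

From there, any HTTP client can issue the requests and feed the JSON straight into an agent pipeline or webhook handler.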

This is typically the lowest-overhead option for custom agent pipelines, webhooks, and programmatic workflows because there is no MCP tool-schema overhead.

Web UI

The web app is for human visual exploration: expanding evidence sections, comparing names side by side, hovering over risk signals, downloading safety reports. It is the most expressive way to understand a safety assessment but the least efficient for automated workflows (page scraping/token usage is runtime-dependent and usually much higher than API/CLI/MCP).

For the generate–verify–iterate pattern, the programmatic interfaces (MCP, CLI, API) return the same data at a fraction of the cost. The web UI serves the case where a human wants to review agent-selected finalists in detail or perform a one-off manual check.

Interface comparison

Interface | Primary use          | Token overhead                  | Integration
MCP       | AI agent tool calls  | Medium (schema overhead)        | Native protocol
CLI       | CI/CD, coding agents | Lower (often below MCP)         | Bash / skill
API       | Custom pipelines     | Lowest (direct JSON)            | HTTP / JSON
Web UI    | Human exploration    | Highest (page-scrape dependent) | Browser

Relative overhead only; exact token usage depends on agent runtime, prompts, and scraping/tool context. Detailed methodology in the Skills guide.

See also