Model Transparency
Synter's AI Media Agent is model-aware and provider-agnostic. Teams can select a default model or bring their own keys (BYOK) per workspace, and we auto-route by task (planning, copy, entity extraction, optimization notes).
| Provider | Models | Tools / Features | Status | Notes |
|---|---|---|---|---|
| OpenAI | GPT-4o, o-series (o1, o1-mini) | Function calling, structured outputs | Native | Default for planning and campaign brief generation |
| Anthropic | Claude 3.5 Sonnet, Claude 3.7 Sonnet | Tool use, long context (200k tokens) | BYOK | Strong safety guardrails, excellent for complex briefs |
| Google | Gemini 1.5 Pro, Gemini 2.0 | Function calling, multimodal | BYOK | Multimodal capabilities for creative analysis |
| Meta | Llama 3.3 70B, Llama 3.1 405B | Tool use via hosted endpoints | BYOK | Open weights via providers like Together AI, Fireworks |
| Mistral | Mistral Large 2, Mixtral 8x22B | JSON mode, function calling | BYOK | Fast inference, excellent for structured extraction |
| xAI | Grok-2 | — | Roadmap | Under evaluation for future integration |
- **Default model per workspace:** Set your preferred provider and model at the workspace level. Override per task or per run if needed.
- **Fallback policy:** If a provider is rate-limited or unavailable, Synter automatically falls back to an alternative model. You define the fallback order.
- **No training on your data:** By default, we do not allow model providers to train on your inputs. All requests include zero-retention flags where the provider supports them (e.g., OpenAI's data-retention policies).
- **PII redaction:** Sensitive data (emails, credit card numbers, API keys) is automatically redacted before being sent to any model.
- **Warehouse-centric:** First-party conversions remain in your data warehouse (Snowflake, BigQuery, Databricks). Only the minimal fields necessary for planning and optimization are shared with LLMs.
- **Regional routing:** Choose US or EU processing regions to comply with data-residency requirements. Model inference happens in the selected region.
- **Retention:** Configurable log retention (0, 30, or 90 days). You control how long Synter stores API request/response logs for debugging and auditing.
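As a hypothetical sketch (not Synter's actual API), the workspace-level settings described above — default model, fallback order, region, and log retention — could be represented like this; all names here are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class WorkspaceModelConfig:
    """Illustrative workspace-level model settings (hypothetical names)."""
    default_provider: str = "openai"
    default_model: str = "gpt-4o"
    # Ordered fallback list, tried top to bottom when the default
    # provider is rate-limited or unavailable.
    fallback_order: list = field(default_factory=lambda: [
        ("anthropic", "claude-3-5-sonnet"),
        ("mistral", "mistral-large-2"),
    ])
    region: str = "eu"           # "us" or "eu" data-residency routing
    log_retention_days: int = 0  # 0, 30, or 90

# Override the defaults for a US workspace that keeps logs for auditing.
cfg = WorkspaceModelConfig(region="us", log_retention_days=30)
```

Per-task or per-run overrides would layer on top of a config like this, with the workspace values acting as the baseline.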
| Task | Models | Notes |
|---|---|---|
| Planning | GPT-4o, o-series, or Claude 3.5 Sonnet | Campaign brief → channel mix, budget allocation, audience strategy. Uses function calling for structured outputs. |
| Long-context briefs | Claude 3.5/3.7 Sonnet | Analyze website content, competitor research, and industry reports. 200k-token context window for deep analysis. |
| Entity extraction | Gemini 1.5/2.0 | Extract keywords, audiences, and themes from unstructured data. Multimodal capabilities for image/video creative analysis. |
| Cost-sensitive tasks | Llama 3.3, Mistral Large | High-volume tasks like keyword expansion, ad copy variants, and simple summaries. Lower cost per token. |
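The task-based routing above, combined with the fallback policy, can be sketched as a simple lookup with ordered fallbacks. This is an illustrative sketch only; the route table and function names are assumptions, not Synter's implementation:

```python
# Hypothetical task → model preference lists, mirroring the defaults
# described above (first entry is preferred, later entries are fallbacks).
ROUTES = {
    "planning": ["gpt-4o", "claude-3-5-sonnet"],
    "long_context_brief": ["claude-3-7-sonnet", "claude-3-5-sonnet"],
    "entity_extraction": ["gemini-2.0", "gemini-1.5-pro"],
    "cost_sensitive": ["llama-3.3-70b", "mistral-large-2"],
}

def pick_model(task: str, available: set) -> str:
    """Return the first currently-available model for a task."""
    for model in ROUTES.get(task, ROUTES["planning"]):
        if model in available:
            return model
    raise RuntimeError(f"No provider available for task {task!r}")

# Example: GPT-4o is rate-limited, so planning falls back to Claude.
print(pick_model("planning", available={"claude-3-5-sonnet", "llama-3.3-70b"}))
# → claude-3-5-sonnet
```

In practice the fallback order would come from the workspace settings rather than a hard-coded table, since you define the fallback order yourself.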
**Can I choose which model Synter uses?** Yes. Synter supports provider selection and BYOK. You can choose OpenAI (e.g., GPT-4o or o-series), Anthropic (Claude 3.5/3.7 Sonnet), Google (Gemini 1.5/2.0), and others as available. See the providers table on this page.
**Can I bring my own API keys?** Yes. BYOK is supported per workspace/project with per-task routing and fallbacks. This gives you full control over model selection and usage while keeping costs transparent.
**What data is shared with model providers?** Only the minimal fields required for planning or generation are sent to the selected model. Conversions and raw PII remain in your warehouse by default. We redact sensitive data before any API calls.
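The pre-call redaction step can be illustrated with a minimal regex-based sketch. The patterns below are deliberately simple assumptions for illustration; a production redactor would be more thorough than this:

```python
import re

# Illustrative patterns only — real-world redaction needs broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with typed placeholders before any model call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

print(redact("Contact ops@example.com with key sk-abcdefghijklmnopqrstuv"))
# → Contact [REDACTED_EMAIL] with key [REDACTED_API_KEY]
```

Typed placeholders (rather than a single generic token) preserve enough context for the model to reason about the field while keeping the raw value out of the request.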
**How does model routing work?** Synter routes tasks to the best model for the job. Planning uses GPT-4o or Claude, long-context briefs use Claude (200k tokens), entity extraction uses Gemini, and cost-sensitive tasks can route to Llama or Mistral.
See how Synter integrates with Google Ads, Microsoft Ads, LinkedIn, Meta, Reddit, and X.
View Integrations →