Genie

Subprocessors

Last updated: April 15, 2026

Gopher Tech, LLC engages the third-party providers listed below to operate Genie. We update this page before new subprocessors process customer data. If you have signed a Data Processing Agreement with us and want email notice of changes, contact hi@genie.tech.

Default: no third-party AI. By default, 100% of LLM inference for your account runs on Genie-operated infrastructure. Your commits and PR metadata never reach Anthropic, OpenAI, OpenRouter, or Together AI unless you explicitly opt in from your account settings.

Always engaged

These providers are part of the core service. They process data for every Genie customer.

Provider | Purpose | Data processed | Region
Supabase | Authentication & session storage | Email, auth identifiers, session tokens | United States
Vercel | Application hosting & edge delivery | Request logs, IP address, user-agent | Global (US-primary)
Neon | Managed Postgres database | Account, configuration, digest history | United States
GitHub | Source of commit & pull request data (Genie GitHub App) | Repository metadata, commits, PR metadata | United States
Resend | Transactional email | Email address, digest contents (email delivery only) | United States

Feature-dependent

These providers are engaged only when you use the associated feature (a paid plan or a specific digest-delivery channel). If you don't use the feature, the provider never receives your data.

Provider | Purpose | Data processed | Region
Stripe | Payment processing & invoicing | Billing contact, payment method (handled by Stripe) | United States
Slack | Digest delivery (if configured) | Digest contents delivered to your workspace | United States
Telegram | Digest delivery (if configured) | Digest contents delivered to your chat | Global
Discord | Digest delivery (if configured) | Digest contents delivered to your channel | United States

Opt-in only — third-party LLM providers

Genie does not send your data to any third-party LLM provider by default. All inference runs on our own infrastructure. If you explicitly opt in to enable external model routing — for example, to unlock higher-capacity frontier models — we may route your commits and PR metadata to one or more of the providers below. You can revoke consent at any time; subsequent digests will return to Genie-operated inference.

Provider | Purpose | Data processed | Region
Anthropic | LLM inference for digest summarization | Commit messages and PR metadata (not retained for training) | United States
OpenAI | LLM inference for digest summarization | Commit messages and PR metadata (not retained for training) | United States
OpenRouter | LLM inference router | Commit messages and PR metadata routed to underlying models | United States
Together AI | LLM inference for digest summarization | Commit messages and PR metadata (not retained for training) | United States

LLM data handling

Regardless of routing, we do not send repository source code to any LLM — only commit messages and pull request metadata needed to generate summaries.

How we evaluate subprocessors

Contact

For questions about subprocessors, or to request a DPA, contact hi@genie.tech.