Subprocessors
Gopher Tech, LLC engages the third-party providers listed below to operate Genie. We update this page before any new subprocessor begins processing customer data. If you have signed a Data Processing Agreement (DPA) with us and want email notice of changes, contact hi@genie.tech.
Always engaged
These providers are part of the core service. They process data for every Genie customer.
| Provider | Purpose | Data processed | Region |
|---|---|---|---|
| Supabase | Authentication & session storage | Email, auth identifiers, session tokens | United States |
| Vercel | Application hosting & edge delivery | Request logs, IP address, user-agent | Global (US-primary) |
| Neon | Managed Postgres database | Account, configuration, digest history | United States |
| GitHub | Source of commit & pull request data (Genie GitHub App) | Repository metadata, commits, PR metadata | United States |
| Resend | Transactional email | Email address, digest contents (email delivery only) | United States |
Feature-dependent
These providers are engaged only when you use the associated feature, such as a paid plan or a specific digest-delivery channel. If you don't use a feature, the corresponding provider never receives your data.
| Provider | Purpose | Data processed | Region |
|---|---|---|---|
| Stripe | Payment processing & invoicing | Billing contact, payment method details (collected and processed by Stripe) | United States |
| Slack | Digest delivery (if configured) | Digest contents delivered to your workspace | United States |
| Telegram | Digest delivery (if configured) | Digest contents delivered to your chat | Global |
| Discord | Digest delivery (if configured) | Digest contents delivered to your channel | United States |
Opt-in only — third-party LLM providers
Genie does not send your data to any third-party LLM provider by default; all inference runs on our own infrastructure. If you explicitly opt in to external model routing (for example, to unlock higher-capacity frontier models), we may route your commit messages and PR metadata to one or more of the providers below. You can revoke consent at any time, and subsequent digests will return to Genie-operated inference.
| Provider | Purpose | Data processed | Region |
|---|---|---|---|
| Anthropic | LLM inference for digest summarization | Commit messages and PR metadata (not retained for training) | United States |
| OpenAI | LLM inference for digest summarization | Commit messages and PR metadata (not retained for training) | United States |
| OpenRouter | LLM inference router | Commit messages and PR metadata routed to underlying models | United States |
| Together AI | LLM inference for digest summarization | Commit messages and PR metadata (not retained for training) | United States |
LLM data handling
- Default: Genie-operated inference. All inference runs on models we host on our own hardware. Inputs never leave our infrastructure and are never used to train models. There is no consent prompt because no third party is involved.
- Opt-in: third-party inference. Engaged only after you explicitly enable external model routing in your account settings. Each provider (Anthropic, OpenAI, OpenRouter, Together AI) is used under API terms that contractually prohibit training on your inputs and commit to zero or short retention. You can disable this at any time.
Regardless of routing, we never send repository source code to any LLM; we send only the commit messages and pull request metadata needed to generate summaries.
How we evaluate subprocessors
Before engaging a provider, we require:
- A signed DPA or equivalent data-protection terms.
- SOC 2 Type II, ISO 27001, or a comparable security posture.
- Data-residency and encryption-at-rest commitments.
- A breach-notification SLA.
Contact
For questions about subprocessors or to request a DPA, contact hi@genie.tech.