
GitHub OAuth and AI Agents: What Actually Goes Wrong


GitHub is the entry point for most AI agent deployments. It is where agents check out code, read issues, comment on pull requests, and trigger workflows. It is also where the most credential management mistakes happen — not because GitHub's OAuth is poorly designed, but because the way developers configure it for agents is almost always insecure by default.

The scope selection problem

GitHub offers two mechanisms for programmatic access: GitHub Apps (which use installation tokens and have fine-grained permissions) and OAuth Apps (which use access tokens with GitHub's classic scope system). For AI agents, most developers reach for OAuth Apps because they are simpler to set up and better supported by agent frameworks. That choice comes with a significant security cost.

GitHub's classic OAuth scopes are coarse-grained. The repo scope — the one most commonly used for agent integrations — grants full control of private repositories, including read, write, admin, and delete access. Most agents need only a fraction of that. An agent that reads pull request diffs needs repo:status and read:discussion. An agent that posts comments needs public_repo if operating on public repos or repo if private — but only the write aspect of repo, not the admin access that comes bundled with it.
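The gap between what an agent does and what scope it requests can be made explicit in code. This is a sketch under our own assumptions — the capability names are invented for illustration, but the scope strings are GitHub's real classic scopes:

```python
# Illustrative mapping from agent capabilities (our own names, not a
# GitHub concept) to the narrowest classic OAuth scopes that cover them.
MINIMAL_SCOPES = {
    "read_pr_status": {"repo:status"},
    "read_discussions": {"read:discussion"},
    "comment_public_repos": {"public_repo"},
    "comment_private_repos": {"repo"},  # no narrower classic scope exists
}

def scopes_for(capabilities: list[str]) -> set[str]:
    """Union of the minimal classic scopes for the requested capabilities."""
    scopes: set[str] = set()
    for cap in capabilities:
        scopes |= MINIMAL_SCOPES[cap]
    return scopes
```

Requesting `scopes_for(["read_pr_status", "comment_public_repos"])` yields `{"repo:status", "public_repo"}` — and makes it visible in review when someone adds a capability that drags in the full `repo` scope.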

The better choice for most agent use cases is GitHub Apps with fine-grained permissions. GitHub Apps can request repository-level permissions like "Issues: Read and Write" independently of "Contents: Read and Write." An agent that only processes issues never needs to touch repository contents. With a GitHub App, that constraint is enforced at the token level. With an OAuth App using repo scope, it is not.

The practical barrier: migrating from OAuth Apps to GitHub Apps requires code changes and a new registration workflow. For teams with existing agent deployments, that migration cost is real. Our recommendation is to use GitHub Apps when building new agent integrations, and to schedule OAuth App migrations as technical debt work rather than deferring them indefinitely.

Personal access tokens used as OAuth tokens

The second failure pattern is using a personal access token (PAT) as the agent's credential rather than a proper OAuth flow. PATs are easy to generate, easy to use, and a significant security risk in agent deployments for three reasons.

First, PATs are tied to a specific GitHub user account. If the account is suspended, deactivated, or converted to an organization account, the PAT stops working. This creates a hidden dependency on a specific employee's account — a dependency that usually only reveals itself when that employee leaves and their account gets deactivated on their last day, taking down production agents at the worst possible time.

Second, classic PATs do not expire unless you set an expiration at creation time. Fine-grained PATs (the newer PAT type) support expiration, but a classic PAT created without one never expires. Many agent deployments still run on classic PATs generated years ago that have been quietly valid ever since.

Third, PATs carry their permissions at the user level, not the application level. If the user account has admin access to the entire GitHub organization, a PAT for that account grants the agent admin access to the entire organization — regardless of which specific permissions the agent actually needs. We have seen multiple cases where a developer created a PAT under their personal account (which has org admin because they are a senior engineer) and used it for an agent that only needed to read issues. The agent was running with org admin access for months before anyone noticed.

Token storage in environment variables

GitHub tokens in environment variables are the most common credential storage pattern and the most commonly leaked. Environment variables get captured in: container image layers (if the variable is set during build), CI/CD pipeline logs (if the variable is echoed or printed during a workflow), crash dumps (if the application dumps its environment on fatal errors), and process-level metadata exposed through `/proc` on Linux systems accessible to other containers in the same pod.

The GitHub token format changed in 2021 — classic PATs begin with ghp_, OAuth access tokens with gho_, fine-grained PATs with github_pat_, and installation tokens with ghs_. GitHub's secret scanning automatically detects and alerts on these token formats if they appear in committed code. But secret scanning does not protect against tokens stored in environment variables that are printed in log output, copied into Slack messages for debugging, or cached in CI artifact stores.
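Those predictable prefixes cut both ways: they help attackers grep logs, but they also make defensive scrubbing cheap. A minimal log-redaction filter, applied before lines reach stdout or a log shipper, looks like this (the exact token lengths vary by type, so the pattern matches a generous run of token characters):

```python
import re

# GitHub token prefixes: ghp_ (classic PAT), gho_ (OAuth access token),
# ghs_ (installation token), github_pat_ (fine-grained PAT).
_GITHUB_TOKEN = re.compile(
    r"\b(?:gh[pos]_[A-Za-z0-9]{20,}|github_pat_[A-Za-z0-9_]{20,})"
)

def redact(line: str) -> str:
    """Replace any GitHub-shaped token in a log line before it is emitted."""
    return _GITHUB_TOKEN.sub("[REDACTED]", line)
```

This is a backstop, not a substitute for keeping tokens out of the environment in the first place — but it is cheap enough that there is little reason not to run it on every log line an agent emits.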

The fix for this pattern is a secrets manager (AWS Secrets Manager, HashiCorp Vault, or a provider-specific equivalent) rather than direct environment variable storage. Secrets managers enforce access logging — you can see exactly when the secret was retrieved and by whom — and support secret rotation without requiring environment variable updates. For agent deployments, the additional latency of a secrets manager call (typically 2-10ms) is not operationally significant.
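That per-call latency can also be amortized with a short-TTL cache. This sketch injects the fetch function so any provider's client fits behind it — the callable shown is a stand-in for, say, a boto3 `get_secret_value` call, not a real API:

```python
import time
from typing import Callable

class CachedSecret:
    """Wraps a secrets-manager lookup with a short TTL cache, so the agent
    pays the fetch latency once per window rather than on every GitHub API
    request. The fetch callable is injected: plug in your provider's client."""

    def __init__(self, fetch: Callable[[], str], ttl_seconds: float = 300.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value: str | None = None
        self._fetched_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._value is None or now - self._fetched_at > self._ttl:
            self._value = self._fetch()  # hits the secrets manager
            self._fetched_at = now
        return self._value
```

Keep the TTL shorter than your rotation interval, so a rotated token is picked up within one cache window rather than forcing a redeploy.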

Shared tokens across agent instances

When a multi-instance agent deployment uses a single GitHub token shared across all instances, any action taken by any instance is attributed to the same token in GitHub's audit log. You cannot distinguish between what Instance A did at 9:03am and what Instance B did at the same time. In normal operation, this is invisible. In an incident investigation, it is a significant forensic gap.

The pattern is common because generating per-instance tokens adds operational complexity. Setting up a new OAuth token for each agent instance requires either a provisioning workflow or a token pool, both of which require infrastructure investment. Most teams skip this investment and share a single token.

Alter solves this without requiring per-instance token pre-provisioning. Each agent instance identifies itself to Alter with its agent ID and run ID when requesting a token. Alter mints a short-lived instance-specific token for that run. The token is tied to that instance in Alter's audit log. GitHub's audit log shows the OAuth app. Alter's audit log shows the specific instance. The combination gives you full attribution without requiring you to pre-provision per-instance credentials.

Rate limits and token sharing

GitHub enforces rate limits at the authentication token level for the REST API: 5,000 requests per hour per authenticated user for OAuth tokens. For agent deployments with high API call volumes, this limit is easy to hit when tokens are shared across instances — you are effectively pooling all of your instances' request budgets into a single limit.

When rate limits are hit, agents fail with 403 responses and typically enter a retry loop that makes the situation worse. The standard response is to add multiple tokens to the agent pool and rotate between them, distributing request volume across multiple rate limit buckets. This works but creates its own credential management challenge: now you have a pool of tokens that all need to be tracked, rotated, and audited.

GitHub Apps have higher rate limits (up to 15,000 requests per hour) and are the better solution for high-volume agent deployments. The rate limit case is another argument for the GitHub Apps migration that the scope issue also motivates. Two separate reasons to make the same migration is usually enough to move it from "nice to have" to "scheduled work."

What a well-configured GitHub agent deployment looks like

After working through these patterns with multiple customers, a well-configured GitHub integration for AI agents has four properties: it uses GitHub Apps rather than OAuth Apps for fine-grained permission control; it stores credentials in a managed secrets store rather than environment variables; each agent instance gets a token tied to its identity so actions are attributable; and tokens expire automatically, with rotation handled by the credential proxy rather than by operational procedures.

None of these properties require significant changes to agent code. They require infrastructure changes that are one-time investments with ongoing security benefits. The payoff is an audit trail that actually tells you what happened, credentials that clean themselves up, and a blast radius that is bounded by token TTL rather than by "whenever someone notices."