
GitHub has announced that it will be shifting to a usage-based billing model for its GitHub Copilot AI service starting on June 1. The move is pitched as a way to “better align pricing with actual usage” and a necessary step to keep Copilot financially sustainable amid surging demand for limited AI computing resources.
GitHub Copilot subscribers currently receive an allocation of monthly “requests” and “premium requests,” which are spent whenever they ask Copilot for help from an AI model. But those broad categories cover many different AI tasks with a wide range of total backend computing costs, GitHub says.
“Today, a quick chat question and a multi-hour autonomous coding session can cost the user the same amount,” the Microsoft-owned company wrote in its announcement. And while GitHub says it has “absorbed much of the escalating inference cost behind that usage” to this point, lumping all “premium requests” together “is no longer sustainable.”
Under the new pricing system, GitHub Copilot subscribers will receive a monthly allotment of “AI Credits” that matches their monthly subscription payment. Pricing for additional AI usage beyond those credits “will be calculated based on token consumption, including input, output, and cached tokens, using the listed API rates for each model.”
Those API rates can vary greatly depending on the sophistication of the model being used; pricing for OpenAI’s high-end GPT models currently ranges from $4.50 per million output tokens (GPT-5.4 Mini) to $30 per million output tokens (GPT-5.5), for instance. The total number of tokens consumed by an individual AI prompt also varies widely depending on how much “thinking” time the model needs to craft its output.
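To make the arithmetic concrete, here is a minimal sketch of how token-based billing of this kind works: each request's cost is the sum of its input, output, and cached tokens, each multiplied by a per-million-token rate. The rates and token counts below are hypothetical illustrations, not GitHub's or OpenAI's actual price list.

```python
# Sketch of per-request, token-based pricing as described in the announcement.
# All rates and usage figures here are made-up examples for illustration.

def request_cost(tokens: dict[str, int], rates_per_million: dict[str, float]) -> float:
    """Dollar cost of one request: sum of each token type times its per-million rate."""
    return sum(
        tokens.get(kind, 0) / 1_000_000 * rate
        for kind, rate in rates_per_million.items()
    )

# Hypothetical rates for a high-end model, in dollars per million tokens.
rates = {"input": 10.0, "output": 30.0, "cached": 2.5}

# Hypothetical agentic coding session: large prompt, long "thinking" output,
# plus a big cached context reused across turns.
usage = {"input": 120_000, "output": 25_000, "cached": 400_000}

print(f"${request_cost(usage, rates):.2f}")  # 1.20 + 0.75 + 1.00 = $2.95
```

The same formula shows why a quick chat question (a few thousand tokens) and a multi-hour autonomous session (millions of tokens) can differ in cost by several orders of magnitude under this model.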
GitHub Copilot subscribers will still be able to use simple AI suggestions like code completion and Next Edit without consuming AI credits. But Copilot code reviews will come with an additional cost in the form of GitHub Actions minutes.
