Incident with Copilot
- Started: 2026-04-01 09:58 UTC
- Resolved: 2026-04-01 12:41 UTC
- Duration: 163 minutes
- Date: 2026-04-01
Incident Timeline
We are investigating reports of degraded performance for Copilot
We are investigating reports of issues with service(s): Copilot Dotcom Agents. We will continue to keep users updated on progress towards mitigation.
Users may see increased latency and intermittent errors when viewing or creating agent sessions. We are working on mitigations to return to baseline performance and success rate.
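For clients affected by these intermittent errors, retrying failed requests with jittered exponential backoff is a common way to ride out transient 5xx responses. The sketch below is illustrative only; the endpoint URL, token handling, and function name are assumptions, not a documented API.

```python
import random
import time

import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint for illustration; the real agent-session API
# path and auth scheme may differ.
SESSION_URL = "https://api.github.com/agents/sessions"

def get_with_backoff(url: str, token: str, attempts: int = 5) -> requests.Response:
    """Retry transient 5xx responses with jittered exponential backoff."""
    for attempt in range(attempts):
        resp = requests.get(
            url,
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        if resp.status_code < 500:
            return resp  # success, or a non-retryable client error
        time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s, ... plus jitter
    resp.raise_for_status()  # surface the final 5xx to the caller
    return resp
```

The jitter spreads retries out so that many clients recovering at once do not hammer the service in lockstep.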
The degradation affecting Copilot has been mitigated. We are monitoring to ensure stability.
The degradation has been mitigated. We are monitoring to ensure stability.
The success rate for creating and viewing agent sessions has stabilized, and we're continuing to monitor latency, which is trending toward baseline levels.
The degradation has been mitigated. We are monitoring to ensure stability.
The success rate and latency for creating and viewing agent sessions have stabilized at baseline levels, and we are continuing to monitor recovery.
On April 1, 2026, between 07:29 and 12:41 UTC, some customers experienced elevated 5xx errors and increased latency when using GitHub Copilot features that rely on `/agents/sessions` endpoints (including creating or viewing agent sessions). The issue was caused by resource exhaustion in one of the Copilot backend services handling these requests, which in turn caused timeouts and failed requests. We mitigated the incident by increasing the service’s available compute resources and tuning its runtime concurrency settings. Service health returned to normal and the incident was fully resolved by 12:41 UTC.
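One standard way to prevent the kind of resource exhaustion described above is to cap how many requests a service works on at once, so a traffic spike degrades into fast, retryable errors instead of slow timeouts. The sketch below is a minimal illustration of such a concurrency cap, assuming an asyncio-based Python service; the cap value and handler names are hypothetical and not GitHub's actual implementation.

```python
import asyncio

# Hypothetical cap, sized to the service's available compute; the
# incident summary says runtime concurrency settings were tuned, and
# a bounded semaphore is one common way to enforce such a limit.
MAX_IN_FLIGHT = 128
_slots = asyncio.Semaphore(MAX_IN_FLIGHT)

async def handle_session_request(payload: dict) -> dict:
    # Placeholder for the real work of creating or viewing an agent session.
    await asyncio.sleep(0.01)
    return {"ok": True, "echo": payload}

async def bounded_handler(payload: dict) -> dict:
    # Fail fast when saturated rather than queueing without bound:
    # callers get a quick, retryable error (a 503 in practice) instead
    # of holding resources until they time out.
    if _slots.locked():
        raise RuntimeError("saturated: retry with backoff")
    async with _slots:
        return await handle_session_request(payload)

async def main() -> None:
    results = await asyncio.gather(
        *(bounded_handler({"id": i}) for i in range(10)),
        return_exceptions=True,
    )
    print(sum(1 for r in results if isinstance(r, dict)), "succeeded")

if __name__ == "__main__":
    asyncio.run(main())
```

Failing fast when saturated pairs naturally with client-side backoff like the earlier example, since rejected requests are cheap for callers to retry.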