Disruption with some GitHub services
- Started: 2026-04-02 17:49 UTC
- Resolved: 2026-04-02 21:48 UTC
- Duration: 239 minutes
- Date: 2026-04-02
Incident Timeline
We are investigating reports of impacted performance for some GitHub services.
When a task is assigned to Copilot Cloud Agent, it will appear to be working but may not actually be running. We are investigating.
We are once again seeing recovery with Copilot Cloud Agent job starts. We are keeping this open while we verify it won't recur.
This issue has recurred. Customers will once again experience false job starts when assigning tasks to Copilot Cloud Agent. We are still investigating and trying to understand the pattern of degradation.
Although we are observing recovery once again, we expect continued periods of degradation. Work that is queued during times of degradation does eventually get processed. We continue to investigate and work toward a mitigation, and will update again within 2 hours.
The degradation has been mitigated. We are monitoring to ensure stability.
Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent experienced a period of reduced performance. An internal feature being developed for Copilot Code Review (CCR) caused the Copilot Cloud Agent infrastructure to receive an increased number of jobs. This load eventually hit an internal rate limit, suspending all work for an hour. During this hour, some new jobs timed out, while others resumed once rate limiting ended. Roughly 40% of jobs in this period were affected.

Once the cause of the rate limiting was identified, we disabled the new CCR feature via a feature flag. After the jobs already in the queue cleared, we saw no further rate limiting.
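The mitigation described above relied on a feature flag acting as a kill switch for the new CCR-driven job source. The snippet below is a minimal, hypothetical sketch of that pattern; the flag store, flag name, and function names are illustrative assumptions, not GitHub's actual implementation.

```python
# Hypothetical sketch of a feature-flag kill switch guarding job submission.
# FlagStore, "ccr_agent_jobs_enabled", and submit_agent_job are illustrative
# names, not GitHub's actual implementation.

class FlagStore:
    """In-memory stand-in for a dynamic feature-flag service."""

    def __init__(self):
        self._flags = {"ccr_agent_jobs_enabled": True}

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def disable(self, name: str) -> None:
        # Operators flip this at runtime to shed the extra load.
        self._flags[name] = False


def submit_agent_job(flags: FlagStore, queue: list, job: dict) -> bool:
    """Enqueue a CCR-originated agent job only while the flag is enabled."""
    if not flags.is_enabled("ccr_agent_jobs_enabled"):
        # Kill switch active: reject the job instead of growing the backlog
        # that was triggering the internal rate limit.
        return False
    queue.append(job)
    return True


if __name__ == "__main__":
    flags, queue = FlagStore(), []
    submit_agent_job(flags, queue, {"source": "copilot-code-review"})  # accepted
    flags.disable("ccr_agent_jobs_enabled")                            # mitigation
    submit_agent_job(flags, queue, {"source": "copilot-code-review"})  # rejected
    print(len(queue))  # -> 1
```

In this sketch, disabling the flag stops new CCR-originated work immediately while already-queued jobs continue to drain, which mirrors the sequence described in the summary.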