Minor impact GitHub ✓ Resolved

Disruption with some GitHub services

Started
2026-04-02 17:49 UTC
Resolved
2026-04-02 21:48 UTC
Duration
239 minutes
Date
2026-04-02

Incident Timeline

investigating 2026-04-02 17:49 UTC

We are investigating reports of impacted performance for some GitHub services.

investigating 2026-04-02 17:59 UTC

When a task is assigned to Copilot Cloud Agent, it will appear to be working but may not actually be running.

We are investigating.

investigating 2026-04-02 18:25 UTC

We are once again seeing recovery with Copilot Cloud Agent job starts.

We are keeping this open while we verify this won't recur.

investigating 2026-04-02 19:28 UTC

This issue has recurred. Customers will once again experience false job starts when assigning tasks to Copilot Cloud Agent.

We are still investigating and trying to understand the pattern of degradation.

investigating 2026-04-02 20:35 UTC

Although we are observing recovery once again, we expect continued periods of degradation.

Work that is queued during times of degradation does eventually get processed.

We continue to investigate and work toward a mitigation, and will update again within two hours.

monitoring 2026-04-02 21:48 UTC

The degradation has been mitigated. We are monitoring to ensure stability.

resolved 2026-04-02 21:48 UTC

Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent experienced a period of reduced performance. An internal feature being developed for Copilot Code Review caused the Copilot Cloud Agent infrastructure to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, suspending all work for an hour. During this hour, some new jobs timed out, while others resumed once rate limiting ended. Roughly 40% of jobs in this period were affected.

Once the cause of the rate limiting was identified, we disabled the new Copilot Code Review feature via a feature flag. After the jobs already in the queue had cleared, we did not see additional instances of rate limiting.
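The summary above describes a general pattern: a feature-flagged workload pushed job submissions past an internal rate limit, queued work drained once the limit window passed, and disabling the flag stopped the extra load. Below is a minimal, hypothetical sketch of that pattern; the names (FEATURE_FLAGS, TokenBucket, submit) are illustrative assumptions and this is not GitHub's actual implementation.

```python
# Hypothetical sketch (not GitHub's implementation): a job-submission path
# guarded by a feature flag and a token-bucket rate limit, showing why work
# queued during a rate-limited window is deferred rather than dropped.
import collections
import time

FEATURE_FLAGS = {"ccr_bulk_review_jobs": False}  # mitigation: flag turned off


class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # rate limited: caller should queue, not drop


def submit(job: str, backlog: collections.deque, limiter: TokenBucket) -> str:
    # Jobs from the flagged feature are skipped entirely once the flag is off.
    if job.startswith("ccr:") and not FEATURE_FLAGS["ccr_bulk_review_jobs"]:
        return "skipped"
    if limiter.allow():
        return "started"
    backlog.append(job)  # deferred until the rate-limit window passes
    return "queued"


if __name__ == "__main__":
    limiter = TokenBucket(rate_per_sec=2, capacity=2)
    backlog: collections.deque = collections.deque()
    for j in ["agent:1", "agent:2", "agent:3", "ccr:review-4"]:
        print(j, "->", submit(j, backlog, limiter))
    time.sleep(1.5)  # tokens refill; the backlog can now drain
    while backlog:
        job = backlog.popleft()
        print(job, "->", submit(job, backlog, limiter))
```

Under these assumptions, jobs submitted while rate limited surface as "queued" rather than failing outright and are started once tokens refill, which matches the observation that work queued during degradation was eventually processed.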

โ† All GitHub incidents Other incidents on 2026-04-02