Incident with API Requests
- Started: 2026-03-11 14:37 UTC
- Resolved: 2026-03-11 15:02 UTC
- Duration: 25 minutes
- Date: 2026-03-11
Incident Timeline
We are investigating reports of degraded performance for API Requests.
Elevated timeouts affected GitHub API requests beginning at 14:37 UTC. Some users experienced slower response times and request failures. System metrics have returned to normal levels, and we are now investigating the root cause to prevent recurrence.
On March 11, 2026, between 14:25 UTC and 14:34 UTC, the REST API platform was degraded, resulting in increased error rates and request timeouts. REST API 5xx error rates peaked at ~5% during the incident window, with two distinct spikes: the first impacting REST services broadly, and the second driven by sustained timeouts on a subset of endpoints.

The incident was caused by a performance degradation in our data layer, which resulted in increased query latency across dependent services. Most services recovered quickly after the initial spike, but resource contention caused sustained 5xx errors due to how certain endpoints responded to the degraded state.

A fix addressing the behavior that prolonged the impact has already been shipped. We are continuing to work to resolve the primary contributing factor of the degradation and to implement safeguards against issues causing cascading impact in the future.
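The summary mentions safeguards against cascading impact, where endpoints keep hammering a degraded dependency and turn a brief spike into sustained 5xx errors. A common pattern for this is a circuit breaker. The sketch below is illustrative only (it is not GitHub's implementation, and the class name and thresholds are invented for the example): after a run of consecutive failures, the breaker fails fast for a cool-down period instead of adding load to the struggling dependency.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: after `max_failures` consecutive
    failures, calls fail fast for `reset_after` seconds instead of
    piling more load onto a degraded dependency."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the breaker tripped

    def call(self, fn, *args, **kwargs):
        # While open, fail fast until the cool-down has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Cool-down over: allow a trial call ("half-open").
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure counter
        return result
```

With a wrapper like this in front of the data-layer query, a burst of timeouts trips the breaker and subsequent requests return an error immediately, which gives the dependency room to recover rather than sustaining the contention.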