minor impact GitHub ✓ Resolved

Degraded performance for various services

Started
2026-03-13 15:12 UTC
Resolved
2026-03-13 16:15 UTC
Duration
63 minutes
Date
2026-03-13

Incident Timeline

investigating 2026-03-13 15:12 UTC

We are investigating reports of degraded performance for Actions and Issues


investigating 2026-03-13 15:14 UTC

We are investigating reports of issues with service(s): Actions, Feeds, Issues, Profiles, Registry Metadata, Star, User Dashboard. We will continue to keep users updated on progress towards mitigation.


investigating 2026-03-13 15:20 UTC

Packages is experiencing degraded performance. We are continuing to investigate.


investigating 2026-03-13 15:47 UTC

We are investigating intermittent performance degradation affecting Actions, Feeds, Issues, Package Registry, Profiles, Registry Metadata, Star, and User Dashboard. Users may experience elevated error rates and slower response times when accessing these services. We have identified a potential cause and are implementing mitigations to restore normal service. We'll post another update by 16:15 UTC.


investigating 2026-03-13 16:02 UTC

We have deployed mitigations and are actively monitoring for recovery. We'll post another update by 17:00 UTC.


resolved 2026-03-13 16:15 UTC

On March 13, 2026, between 13:35 UTC and 16:02 UTC, a configuration change to an internal authorization service reduced its processing capacity below what was needed during peak traffic. This caused intermittent timeouts when other GitHub services checked user permissions, resulting in four to five waves of errors over roughly two hours and forty minutes. In total, 0.4% of users were denied access to actions they were authorized to perform.

The root cause was a resource right-sizing change deployed to the authorization service the previous day. It reduced CPU allocation below what was required at peak, causing the service's network gateway to throttle under load. Because the change was deployed after peak traffic on March 12, the reduced capacity wasn't surfaced until the next day's peak.

The incident was mitigated by manually scaling up the authorization service and reverting the configuration change.

To prevent recurrence, we are adding further resource utilization monitors across our entire stack to detect throttling, and improving error handling so transient infrastructure timeouts are distinguished from authorization failures, enabling quicker detection of the root issue.
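
To illustrate the throttling-detection follow-up item, the minimal sketch below (not GitHub's actual tooling) assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup and reads cpu.stat to compute the fraction of scheduler periods in which a service exhausted its CPU quota. A sustained non-zero ratio at peak traffic is the kind of signal the planned monitors are meant to surface.

    def cpu_throttle_ratio(cgroup_path="/sys/fs/cgroup/cpu.stat"):
        # cgroup v2 cpu.stat exposes nr_periods and nr_throttled counters.
        stats = {}
        with open(cgroup_path) as f:
            for line in f:
                key, value = line.split()
                stats[key] = int(value)
        periods = stats.get("nr_periods", 0)
        throttled = stats.get("nr_throttled", 0)
        # Cumulative ratio; a real monitor would sample deltas over a window.
        return throttled / periods if periods else 0.0

    if __name__ == "__main__":
        ratio = cpu_throttle_ratio()
        if ratio > 0.05:  # example threshold: alert if >5% of periods throttled
            print(f"WARN: CPU quota throttled in {ratio:.1%} of scheduler periods")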
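
The error-handling remediation can be sketched the same way. The hypothetical Python caller below (client.is_allowed is an assumed interface, not a real GitHub API) retries transient timeouts and surfaces persistent ones as an "authorization unavailable" condition rather than a permission denial, so infrastructure problems are never reported to users as denied access.

    import time

    class AuthorizationUnavailable(Exception):
        """The authorization service could not answer in time; this is an
        infrastructure problem, not a decision that the user lacks access."""

    def check_permission(client, user, action, retries=2, backoff=0.2):
        """Return the explicit allow/deny decision from the (hypothetical)
        client.is_allowed call; raise AuthorizationUnavailable on repeated
        timeouts so callers never present them as permission denials."""
        for attempt in range(retries + 1):
            try:
                return client.is_allowed(user, action)
            except TimeoutError:
                if attempt == retries:
                    raise AuthorizationUnavailable("authorization check timed out")
                time.sleep(backoff * (2 ** attempt))  # simple exponential backoff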


โ† All GitHub incidents Other incidents on 2026-03-13